Something I just submitted to slashdot:
A small company called Neural Robotics has produced a robotic mini-helicopter armed with a rapid-fire shotgun. Based on their low-cost AutoCopter, the UAV uses neural network-based flight control algorithms to fly in either a self-stabilizing semi-autonomous mode controlled by a remote operator, or a fully-autonomous mode which can follow GPS waypoints. A video of the AutoCopter Gunship is available. (Dijjer mirror)
Setting aside for the moment the ethical issues of replacing soldiers with flying shotgun-wielding robots, their “neural network-based” flight control system seemed like an interesting technical accomplishment. This PDF briefing has a few details.
Looking at page 14 of their PDF, though, their control system may be a little on the simplistic side. It seems to just update roll and pitch based on the helicopter’s current movement and heading, without making use of visual information or other sensors. I’m not too familiar with flight control, but using a neural network for that seems like overkill. When in fully-autonomous mode, I wonder if they make use of sensors for crash-avoidance at all, or if they just hope that nothing’s in the way of the chosen GPS coordinates.
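For reference, the kind of update loop described there could be as simple as a proportional correction on velocity error, something a plain controller handles fine without any neural network. A minimal sketch, with all gains and names being my own illustrative choices rather than anything from the briefing:

```python
def attitude_update(vx, vy, vx_target, vy_target, k_pitch=0.1, k_roll=0.1):
    """Return (pitch_cmd, roll_cmd) from body-frame velocity errors.

    Hypothetical proportional controller: pitch forward to close the
    forward-velocity gap, roll sideways to correct lateral drift.
    """
    pitch_cmd = k_pitch * (vx_target - vx)
    roll_cmd = k_roll * (vy_target - vy)
    return pitch_cmd, roll_cmd
```

If their network is essentially learning a mapping like this one, a hand-tuned PID loop would likely do the same job more predictably.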
Assuming they haven’t done so already, it would be rather neat to load some range-finding sensors on the helicopter and have it automatically avoid nearby obstacles; the basic algorithms should be fairly straightforward.
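The straightforward version of that would be a potential-field approach: each range reading within some safety distance contributes a repulsive push away from the obstacle. A rough sketch, where the weighting scheme and parameter names are just assumptions for illustration:

```python
import math

def avoidance_vector(readings, safe_dist=5.0):
    """Sum repulsive vectors from range-finder readings.

    readings: list of (bearing_radians, distance_m) pairs.
    Returns (dx, dy): a steering offset pushing away from obstacles
    closer than safe_dist; closer obstacles push harder.
    """
    dx = dy = 0.0
    for bearing, dist in readings:
        if 0.0 < dist < safe_dist:
            weight = (safe_dist - dist) / dist  # grows as dist shrinks
            dx -= weight * math.cos(bearing)
            dy -= weight * math.sin(bearing)
    return dx, dy
```

The resulting offset would get blended into the waypoint-following command, so the helicopter drifts around obstacles rather than flying blindly at the GPS target.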
Another idea is to allow the robot to visually track a point of laser light, potentially allowing somebody to control the robot with a laser designator. The military application of this is pretty obvious: you could quickly point a laser wherever the people shooting at you are hiding, so that the robot knows what area to scope out. A laser could also be used to trace out a patrol route for the robot, so that a user doesn’t have to deal with typing in cumbersome GPS coordinates.
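The simplest form of that tracking is just finding the laser spot in each camera frame. As a naive starting point (a real tracker would filter by the laser’s wavelength and reject sun glints), here is a hypothetical brightest-pixel search over a grayscale frame:

```python
def find_laser_spot(frame):
    """Return (x, y) of the brightest pixel in a grayscale frame.

    frame: list of rows, each a list of intensity values.
    Purely illustrative; assumes the laser dot is the brightest
    thing in view, which won't hold outdoors without filtering.
    """
    best = (0, 0)
    best_val = -1
    for y, row in enumerate(frame):
        for x, val in enumerate(row):
            if val > best_val:
                best_val, best = val, (x, y)
    return best
```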
As for civilian applications, the AutoCopter with a stabilized camera might be useful for filming video. One could imagine a system of two laser pointers, one for each hand: one pointer would designate a spot for the robot to hover over, while the other would indicate where the robot should direct its camera. Of course, one could alternatively just hire a dedicated RC operator, so perhaps this would be of limited usefulness.