Photo: Georgia Institute of Technology
DEVELOPERS: Georgia Tech, University of Pennsylvania, and Jet Propulsion Laboratory
DESCRIPTION: Collaborative robots that can autonomously map an entire building for first-responder and military applications. Each palm-sized robot is equipped with a video camera for identifying doorways and windows and a laser scanner for measuring walls.
STATUS: Developed under the U.S. Army Research Laboratory’s five-year, $38 million Micro Autonomous Systems and Technology program. Mapping experiment conducted in 2010; the next iteration will include small UAVs.
DESCRIPTION: Vertical-takeoff-and-landing micro air vehicle, weighing 8 kilograms, equipped with color and infrared video cameras for intelligence, surveillance, and reconnaissance. Can hover and observe for up to 50 minutes at altitudes of up to 3000 meters.
STATUS: Deployed in Iraq starting in 2007 for roadside-bomb detection. Surveyed damage at the Fukushima nuclear power plant following the March 2011 earthquake and tsunami in northeastern Japan.
Photo: Northrop Grumman
Northrop Grumman Corp.
DESCRIPTION: U.S. Navy’s stealth unmanned combat aerial vehicle designed for takeoff and landing on an aircraft carrier. Has a range of 3380 kilometers and can carry up to 2000 kg of ordnance in two weapons bays. Originated as a project of the Defense Advanced Research Projects Agency.
STATUS: First test flight in February 2011; deployment scheduled for 2018.
And with no humans in the loop to help interpret the data, reason about the data, and decide how to respond, situational understanding gets even trickier. Using current technology, no robot has all the onboard sensors needed to precisely decipher its environment. What’s more, decisions have to be made based on uncertainties and incomplete or conflicting information. If a robo-sentry armed with a semiautomatic rifle detects someone running from a store, how can it know whether that person has just robbed the store or is simply sprinting to catch a bus? Does it fire its weapon based on what it thinks is happening?
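The robo-sentry dilemma above is, at bottom, a problem of reasoning under uncertainty. One way such a system might weigh conflicting evidence is a simple Bayesian update; the sketch below is purely illustrative, with invented priors, likelihoods, and threshold, and does not describe any fielded system.

```python
# Hypothetical illustration: a robot weighing conflicting evidence with
# Bayes' rule before acting. All numbers here are invented for the example.

def update_belief(prior: float, likelihood_if_threat: float,
                  likelihood_if_benign: float) -> float:
    """Posterior probability of 'threat' after one piece of evidence."""
    numerator = likelihood_if_threat * prior
    evidence = numerator + likelihood_if_benign * (1.0 - prior)
    return numerator / evidence

# Start with a low prior that a person running from a store is a robber.
belief = 0.05
# Evidence 1: the person is sprinting (somewhat more likely for a robber
# than for a commuter chasing a bus).
belief = update_belief(belief, likelihood_if_threat=0.9, likelihood_if_benign=0.6)
# Evidence 2: the store alarm is NOT sounding (much more likely if benign).
belief = update_belief(belief, likelihood_if_threat=0.3, likelihood_if_benign=0.95)

ENGAGE_THRESHOLD = 0.99  # deliberately strict: never act on ambiguous evidence
print(f"P(threat) = {belief:.3f}, engage = {belief > ENGAGE_THRESHOLD}")
```

Even with one suspicious cue, the conflicting cue drives the posterior back down, and the strict threshold keeps the weapon holstered; the hard part in practice is that real sensors never deliver such tidy likelihoods.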
Humans, too, may struggle to read such a situation, but perhaps unsurprisingly, society holds robots to a higher standard and has a lower tolerance for their errors. This bias may create a reluctance to take the leap in designing robots for full autonomy and so may prevent the technology from moving ahead as quickly as it could. It should not take five people to fly one UAV; one soldier should be able to fly five UAVs.
On the other hand, because military robots typically operate in geopolitically sensitive environments, some added caution is certainly warranted. What happens, for example, if a faulty sensor feeds a UAV erroneous data, causing it to cross a border without authorization? What if it mistakenly decides that a “friendly” is a target and then fires on it? If a fully autonomous, unmanned system were to make such a grave mistake, it could compromise the safety of other manned and unmanned systems and exacerbate the political situation.
The Predator UAV, developed in the 1990s, went from concept to deployment in less than 30 months, which is extremely fast by military procurement standards.
Little wonder, then, that the UAV exhibited quite a few kinks upon entering the field. Among other things, it often failed when flying in bad weather, it was troublesome to operate and maintain, and its infrared and daylight cameras had great difficulty discerning targets. But because commanders needed the drone quickly, they were willing to accept these imperfections, with the expectation that future upgrades would iron out the kinks. They didn’t have time to wait until the drone had been thoroughly field-tested.
But how do you test a fully autonomous system? With robots that are remotely operated or that navigate via GPS waypoints, the vehicle’s actions are known in advance. Should it deviate from its instructions, a human operator can issue an emergency shutdown command.
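Because a waypoint-following vehicle’s path is known ahead of time, the supervision described above can be almost mechanical: compare the reported position against the plan and trigger a shutdown when they diverge. The sketch below is a minimal illustration of that idea; the function names, tolerance, and shutdown message are all assumptions, not any real ground-control interface.

```python
# Hypothetical watchdog for a waypoint-following vehicle: the planned path
# is known in advance, so a supervisor can flag any deviation beyond a
# tolerance and issue the emergency shutdown. Names and numbers are invented.
import math

def distance_m(a: tuple, b: tuple) -> float:
    """Straight-line distance between two (x, y) positions, in meters."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def check_deviation(reported: tuple, expected: tuple,
                    tolerance_m: float = 50.0) -> bool:
    """Return True if the vehicle has strayed beyond tolerance_m of its plan."""
    return distance_m(reported, expected) > tolerance_m

# One tick of the supervisor loop: expected position comes from the flight plan.
expected = (1000.0, 2000.0)
reported = (1100.0, 2000.0)   # 100 m off-track
if check_deviation(reported, expected):
    print("DEVIATION: issuing emergency shutdown")  # the operator's kill command
else:
    print("on track")
```

The point of the sketch is what it *cannot* do: once the vehicle chooses its own path, there is no `expected` position to compare against, which is exactly the testing gap the next paragraph describes.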
However, if the vehicle is making its own decisions, its behavior can’t be predicted. Nor will it always be clear whether the machine is behaving appropriately and safely. Countless factors can affect the outcome of a given test: the robot’s cognitive information processing, external stimuli, variations in the operational environment, hardware and software failures, false stimuli, and any new and unexpected situation a robot might encounter. New testing methods are therefore needed that provide insight and introspection into why a robot makes the decisions it makes.
Gaining such insight into a machine is akin to performing a functional MRI on a human brain. By watching which areas of the brain experience greater blood flow and neuronal activity in certain situations, neuroscientists gain a better understanding of how the brain operates. For a robot, the equivalent would be to conduct software simulations that tap the “brain” of the machine. By subjecting the robot to certain conditions, we could watch what kinds of data its sensors collect, how it processes and analyzes those data, and how it uses the data to arrive at a decision.
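In software terms, that kind of introspection amounts to recording every intermediate value in the decision pipeline, not just the final action. The sketch below shows one way a simulation harness might do this; the stages, data structure, and sensor values are hypothetical, invented for illustration.

```python
# Hypothetical "functional MRI" for a robot: run the decision pipeline in
# simulation and record what it sensed, how it interpreted the data, and
# what it decided. The pipeline stages and readings are invented examples.

from dataclasses import dataclass, field

@dataclass
class DecisionTrace:
    """Everything the robot saw, computed, and decided on one cycle."""
    raw_sensor: dict = field(default_factory=dict)
    features: dict = field(default_factory=dict)
    decision: str = ""

def decide(sensor_reading: dict, trace: DecisionTrace) -> str:
    trace.raw_sensor = dict(sensor_reading)               # 1. what it sensed
    obstacle_close = sensor_reading["range_m"] < 2.0      # 2. how it interpreted it
    trace.features = {"obstacle_close": obstacle_close}
    decision = "stop" if obstacle_close else "proceed"    # 3. what it decided
    trace.decision = decision
    return decision

trace = DecisionTrace()
action = decide({"range_m": 1.4, "bearing_deg": 10.0}, trace)
print(action, trace.features)  # inspect the whole chain, not just the output
```

With a trace like this, a tester who sees an unexpected action can tell whether the fault lay in the sensor data, the interpretation, or the decision rule, which is precisely the introspection the testing problem demands.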