Autonomous Robots in the Fog of War

July 29, 2021 by helo-10

Another illuminating form of testing that is often skipped in the rush to deploy today’s military robots involves simply playing with the machines on an experimental “playground.” The playground has well-defined boundaries and safety constraints that allow humans as well as other robots to interact with the test robot and observe its behavior. Here, it’s less important to know the details of the sensor data and the exact sequence of decisions that the machine is making; what emerges on the playground is whether or not the robot’s behavior is acceptably safe and appropriate.

Moving to smarter and more autonomous systems will place an even greater burden on human evaluators and their ability to parse the outcomes of all this testing. But they’ll never be able to assess every possible outcome, because the range of situations an autonomous system can encounter is effectively limitless. Clearly, we need a new way of testing autonomous systems that is statistically meaningful and also inspires confidence in the results. And of course, for us to feel confident that we understand the machine’s behavior and trust its decision making, such tests will need to be completed before the autonomous robot is deployed.
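To see what “statistically meaningful” could look like in practice, consider a small Python sketch. The simulation harness and its failure rate below are placeholders, not any real test suite; the idea is simply to run the robot’s behavior through many randomized scenarios and report a confidence bound on how often it violates its safety constraints, rather than a bare pass/fail count.

```python
import math
import random

def run_scenario(seed: int) -> bool:
    """Hypothetical stand-in for one randomized simulation run.
    Returns True if the robot's behavior stayed within its safety
    constraints; a real harness would drive the actual autonomy
    stack in a simulator."""
    random.seed(seed)
    return random.random() > 0.01   # placeholder: roughly a 1% failure rate

def wilson_upper_bound(failures: int, trials: int, z: float = 1.96) -> float:
    """Upper end of the 95% Wilson score interval for the failure rate."""
    p = failures / trials
    denom = 1 + z * z / trials
    center = p + z * z / (2 * trials)
    margin = z * math.sqrt(p * (1 - p) / trials + z * z / (4 * trials * trials))
    return (center + margin) / denom

trials = 10_000
failures = sum(0 if run_scenario(s) else 1 for s in range(trials))
print(f"observed failure rate: {failures / trials:.4f}")
print(f"95% upper bound:       {wilson_upper_bound(failures, trials):.4f}")
```

A bound like this does not prove safety, but it gives evaluators a defensible statement about how much testing supports how much confidence.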

A swarm of small robots scatters across the floor of an abandoned warehouse. Each tread-wheeled bot, looking like a tiny tank with a mastlike antenna sticking out of its top, investigates the floor space around it using a video camera to identify windows and doors and a laser scanner to measure distances. Employing a technique called SLAM (for “simultaneous localization and mapping”), it creates a map of its surroundings, keeping track of its own position within the map. When it meets up with another robot, the two exchange maps and then head off to explore uncharted territory, eventually creating a detailed map of the entire floor.
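The map exchange is the interesting step. The following toy Python sketch, which is not the program’s actual software but an illustration under simplified assumptions (integer grid cells, a binary FREE/OCCUPIED encoding, and a conservative conflict rule), shows how two robots holding partial maps might fold each other’s observations into their own when they meet.

```python
from dataclasses import dataclass, field

FREE, OCCUPIED = 0, 1   # cell states; cells a robot hasn't seen are simply absent

@dataclass
class RobotMap:
    """Toy occupancy map keyed by integer grid cells (x, y)."""
    cells: dict = field(default_factory=dict)

    def observe(self, cell, state):
        self.cells[cell] = state

    def merge(self, other):
        """Rendezvous step: fold another robot's map into this one.
        If the two maps disagree about a cell, keep OCCUPIED as the
        conservative choice."""
        for cell, state in other.cells.items():
            self.cells[cell] = max(self.cells.get(cell, state), state)

# Two robots explore different corners of the floor, meet, and swap maps.
a, b = RobotMap(), RobotMap()
a.observe((0, 0), FREE); a.observe((0, 1), OCCUPIED)
b.observe((5, 5), FREE); b.observe((0, 1), FREE)
a.merge(b)
b.merge(a)
print(a.cells == b.cells)   # True: both robots now hold the combined map
```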

These ingenious mapping robots, designed by researchers in the U.S. Army–funded Micro Autonomous Systems and Technology program, represent the cutting edge of robot autonomy. In future iterations, their designers plan to equip the machines with wall-penetrating radar and infrared sensors, as well as a flexible “whisker” to sense proximity to obstacles. Clever as they are, though, these robots lack a key capability that all future robots will need: They cannot easily interact with other kinds of robots.

Now consider the U.S. Navy’s Littoral Combat Ship. Rather than having a fixed architecture, it will have swappable “mission modules” that include vertical takeoff unmanned aerial vehicles, unmanned underwater vehicles, and unmanned surface vehicles. All these robotic systems will have to operate in concert with each other as well as with manned systems, to support intelligence, surveillance, and reconnaissance missions, oceanographic surveys, mine warfare, port security, and so on.

Achieving this interoperability will be no small feat. While significant progress has been made on automating a single robot as well as a team of identical robots, we are not yet at the point where an unmanned system built for the Army by one contractor can seamlessly interact with another robotic system built for the Navy by another contractor. Lack of interoperability isn’t exclusively a robotics problem, of course. For decades, developers of military systems of all kinds have tried and often failed to standardize their designs to allow machines of different pedigrees to exchange data. But as different branches of the military continue to add to the ranks of their battlefield robots, the enormous challenge of interoperability among these disparate systems only grows.

A particular difficulty is that most automation and control approaches, especially those used for collaboration, assume that all the unmanned systems have the same level of autonomy and the same software architecture. In practice, that is almost never the case, unless the robots have been designed from scratch to work together. Clearly, new approaches are needed so that an unfamiliar autonomous system can be introduced without having to reconfigure the entire suite of robots.
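One way to get there, sketched below in Python, is to define a thin capability contract that every team member satisfies and to hide each vendor’s interface behind an adapter. The class and method names here are invented for illustration rather than drawn from any fielded architecture; the point is that an unfamiliar system can join the team by conforming to the contract, while the rest of the suite stays untouched.

```python
from abc import ABC, abstractmethod

class TeamMember(ABC):
    """Minimal capability contract the rest of the team plans against."""
    @abstractmethod
    def capabilities(self) -> set[str]: ...
    @abstractmethod
    def assign(self, task: str) -> bool: ...

class FakeVendorDriver:
    """Stand-in for a vendor-specific control API (invented for this sketch)."""
    def enqueue(self, task: str) -> None:
        print(f"vendor driver queued: {task}")

class LegacyGroundBotAdapter(TeamMember):
    """Wraps the vendor API behind the shared contract, so nothing else
    on the team ever sees vendor-specific calls."""
    def __init__(self, driver: FakeVendorDriver):
        self._driver = driver
    def capabilities(self) -> set[str]:
        return {"map", "patrol"}
    def assign(self, task: str) -> bool:
        if task not in self.capabilities():
            return False
        self._driver.enqueue(task)
        return True

class Team:
    def __init__(self):
        self.members: list[TeamMember] = []
    def add(self, member: TeamMember) -> None:
        # A previously unknown system joins by satisfying the contract;
        # existing members are untouched.
        self.members.append(member)
    def dispatch(self, task: str) -> bool:
        return any(m.assign(task) for m in self.members)

team = Team()
team.add(LegacyGroundBotAdapter(FakeVendorDriver()))
print(team.dispatch("patrol"))   # True: handled by the newly added robot
```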

Interoperability between manned and unmanned systems is even more challenging. The ultimate goal is to have autonomous systems collaborate with humans as equal partners on a team, instead of simply following commands issued by their operators. For that to happen, though, the robots will need to understand human language and intent, and they will need to learn to communicate in a way that is natural for humans.

Interoperability also requires standards, procedures, and architectures that enable effective integration. Today, for instance, unmanned ground and maritime systems use a messaging standard called the Joint Architecture for Unmanned Systems (JAUS). The messaging standard for unmanned air systems, meanwhile, is STANAG-4586, a NATO-mandated format. Within their respective domains, both of these serve their purpose.

But when a UAV needs to communicate with an unmanned ground vehicle, should it use JAUS or STANAG-4586 or something else entirely? The most promising effort in this arena is the JAUS Tool Set, an open, standards-based unmanned vehicle messaging suite that is in beta testing. Using the tool set seems to improve interactions among unmanned vehicles. In the future, the tool set should allow the two message formats to be merged. Ultimately, that should accelerate the deployment of compatible and interoperable unmanned systems.
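Until then, the practical workaround is a gateway that translates messages from one domain into the other. The Python sketch below shows the shape of such a translation; the message classes, field names, and units are invented for illustration and are not the actual JAUS or STANAG-4586 definitions.

```python
from dataclasses import dataclass

# The structures below are invented for illustration; they are not the
# real JAUS or STANAG-4586 message schemas.

@dataclass
class GroundWaypointMsg:          # stands in for a ground-domain message
    vehicle_id: int
    lat_deg: float
    lon_deg: float
    speed_mps: float

@dataclass
class AirWaypointMsg:             # stands in for an air-domain message
    tail_number: str
    lat_deg: float
    lon_deg: float
    speed_knots: float

MPS_TO_KNOTS = 1.943844

def ground_to_air(msg: GroundWaypointMsg, tail_number: str) -> AirWaypointMsg:
    """Gateway translation: map fields and convert units so an air vehicle
    can act on a waypoint originally issued in the ground-domain format."""
    return AirWaypointMsg(
        tail_number=tail_number,
        lat_deg=msg.lat_deg,
        lon_deg=msg.lon_deg,
        speed_knots=msg.speed_mps * MPS_TO_KNOTS,
    )

print(ground_to_air(GroundWaypointMsg(7, 33.776, -84.399, 10.0), "UAV-01"))
```

Translation gateways of this kind are only a bridge, of course; the hard part that a common tool set addresses is agreeing on semantics, not just field mappings.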

As robotic systems become more autonomous, they will also need the ability to consider the advice, guidance, and opinions of human users. That is, humans won’t be dictating behavior or issuing hard directives, but they should still be able to influence the robot’s planning and decision making. Integrating such information, including its vagaries, nuances, and uncertainties, will be a challenge for any autonomous system as its intelligence increases. But attaining these capabilities is within our reach. Of that, I have no doubt.
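To picture what that integration might look like, here is a toy Python sketch in which an operator’s advice acts as a soft weight on a planner’s cost function rather than as a command; the routes, cost terms, and confidence scale are all made up for illustration.

```python
def plan_score(route_risk: float, route_time: float,
               advice_bias: float = 0.0, advice_confidence: float = 0.0) -> float:
    """Toy planner cost (lower is better). Operator advice nudges the
    risk/time trade-off in proportion to the confidence the operator
    attaches to it, rather than overriding the planner outright."""
    risk_weight = 1.0 + advice_confidence * advice_bias   # bias > 0 means "avoid risk"
    return risk_weight * route_risk + route_time

# Two candidate routes: (risk, minutes). Values are made up for illustration.
routes = {"direct": (0.9, 10.0), "detour": (0.1, 12.0)}

# Left to itself, the planner picks the fast but risky route.
unadvised = min(routes, key=lambda r: plan_score(*routes[r]))

# A firm operator warning ("avoid that area", stated with high confidence)
# tips the same planner toward the safer detour without any hard directive.
advised = min(routes, key=lambda r: plan_score(*routes[r],
                                               advice_bias=2.0,
                                               advice_confidence=0.9))
print(unadvised, advised)   # direct detour
```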

About the Author

Lora G. Weiss is a lab chief scientist at the Georgia Tech Research Institute. Her Ph.D. work on signal processing for underwater systems first got her interested in robotics. “Signals don’t propagate well underwater, so you can’t rely on a human operator for control,” she says. “I quickly realized that the vehicles would have to start making decisions on their own.”




