Military drone swarms using AI could be the future of war, raising ethical concerns


As researchers apply artificial intelligence and autonomy to lethal aerial machines, their systems pose new questions about the extent to which humans will remain in control of modern combat.

How much human oversight is necessary or desirable is a key question. Humans, after all, don’t process information as quickly as machines, which may increase pressure to take humans out of the loop in order to stay competitive in battle.

“We need more people thinking about them in the context of the military, in the context of international law, in the context of ethics,” says Margaret E. Kosal, a former science and technology adviser at the Defense Department.

The proliferation of cheap drones in conflicts in Ukraine and the Middle East has sparked a scramble to perfect uncrewed vehicles that can plan and work together on the battlefield. 

These next-generation, intelligent “swarms” would represent a breakthrough in warfare. Rather than piloting individual uncrewed vehicles, soldiers could deploy air and seaborne swarms on missions “with limited need for human attention and control,” according to a recent U.S. government report. It’s the “holy grail” for the military, says Samuel Bendett, an adviser to the Center for Naval Analyses, a federally funded research and development center.

It’s also an ethical minefield. As researchers apply artificial intelligence and autonomy to lethal machines, their systems raise the specter of drone armies and pose new questions about the role human control should play in modern combat. And while Pentagon officials have long promised that humans will always be “in the loop” when it comes to decisions to kill, the Defense Department last year updated its guidance to address AI autonomy in weapons. 

Why We Wrote This

Artificial intelligence-powered drone technology could eventually change warfare. But the autonomy of lethal machines raises serious ethical dilemmas around how, and whether, to regulate development, deployment, and use of AI.

“It’s a very high level of approval to even proceed with testing of a fully autonomous weapons system,” says Duane T. Davis, a senior lecturer in the computer science department at the Naval Postgraduate School in Monterey, California. But it does “provide for the possibility of completely autonomous weapons systems.”  

That’s largely because much U.S. military research is driven by fears of how adversaries may exploit their own swarm technology in a future conflict with the United States or its allies. The question going forward is whether the Pentagon can overcome the myriad technological challenges of drone warfare while also maintaining the ethics of a democratic state.

The concern is that China “is not going to wrestle with these same ethical decisions in the way that we will,” says Dr. Davis. 


