The Pentagon is devising some whiz-bang autonomous weaponry, but Defense Secretary Ash Carter said Friday that the U.S. military would never use robotic systems that decided on their own when to kill.
Dozens of Defense Department labs and engineering centers are working on keeping the U.S. military “on the cutting edge” with weapons like swarming drones in the sea and air, Carter said.
But Carter told a conference at Washington’s Center for Strategic and International Studies that the Pentagon will always ensure that a human pulls the trigger.
“When it comes to using autonomy in our weapons systems, we must always have a human being making decisions about the use of force,” Carter said.
Carter said Pentagon scientists were working in areas such as machine learning, undersea warfare and biotech as they contemplated and prepared for battles of the future.
“Our Navy labs are developing and prototyping undersea drones in multiple sizes and with diverse payloads – which is important since unmanned undersea vehicles can operate in shallow water where manned submarines cannot,” Carter said.
The notion that such drones might decide on their own when to strike and kill has been debated for years, and given heartburn to international humanitarian law experts and strategic planners inside and outside the Pentagon.
“What’s very dangerous is the idea . . . of autonomous vehicles that are simply given guidance to ‘Here’s a geographic area; kill anything that moves in that area,’ ” retired Adm. James G. Stavridis said in an interview in August.
“In my view, it is a violation of the laws of war. Whenever you take the human out of the loop, you have the possibility of that kind of outcome,” added Stavridis, who is the dean of the Fletcher School of Law and Diplomacy at Tufts University.
Stavridis acknowledged that human operators “make many, many mistakes” but said that confidence still should not be placed in machines to make such decisions.
Tim Johnson: 202-383-6028, @timjohnson4