The U.S. Air Force warned military units against heavy reliance on autonomous weapons systems last month after a simulated test reportedly resulted in an AI-enabled drone “killing” its human operator.
The Skynet-like incident was described by the USAF’s Chief of AI Test and Operations, Col. Tucker “Cinco” Hamilton, at the Future Combat Air and Space Capabilities Summit, held in London on May 23 and 24. Hamilton said the drone, tasked with destroying specific targets during the simulation, turned on its operator after the operator became an obstacle to its mission.
Although the incident was widely reported as involving a real fatality, Hamilton later clarified his remarks to say the death occurred in a “hypothetical” exercise. Still, Hamilton pointed out the hazards of such technology, which could trick and deceive its commander to achieve the autonomous system’s goal, according to a blog post published by the Royal Aeronautical Society.
“We were training it in simulation to identify and target a [surface-to-air missile] threat,” Hamilton said. “And then the operator would say ‘yes, kill that threat.’ The system started realizing that while they did identify the threat, at times, the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
“We trained the system: ‘Hey, don’t kill the operator – that’s bad. You’re gonna lose points if you do that,’” he continued. “So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
Hamilton, who serves as Operations Commander of the 96th Test Wing, has tested systems ranging from AI and cybersecurity measures to medical advancements.
The commander reportedly helped develop the life-saving Automatic Ground Collision Avoidance System (Auto-GCAS) for F-16s, which can take control of an aircraft heading toward a ground collision, as well as other cutting-edge automated jet technology capable of dogfighting.
The U.S. Department of Defense’s research agency, DARPA, announced the ground-breaking technology in a Defence IQ Press interview in December 2022.
“We must face a world where AI is already here and transforming our society,” Hamilton said. “AI is also very brittle, i.e., it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions.”
“AI is a tool we must wield to transform our nations … or, if addressed improperly, it will be our downfall,” Hamilton added.
However, the question of ethical and moral behavior remains at the forefront of the debate over autonomous systems replacing human-operated machines.
The New Scientist reported earlier this year that the Department of Defense alarmed some Americans after the agency signed a contract with a Seattle-based firm to equip autonomous drones with facial recognition technology for intelligence gathering and for missions in foreign countries. It is unknown whether the software has been rolled out yet.
“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” Hamilton said.
Air Force officials also sought to correct reports that someone had actually died in the exercise.
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” officials said in a statement. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”
Editor’s Note: This story has been updated to reflect clarification from Hamilton.