An Air Force general said the U.S. military's use of artificial intelligence in modern warfare is held to higher ethical standards than that of foreign adversaries because of the nation's foundational "Judeo-Christian society."
“Regardless of what your beliefs are, our society is a Judeo-Christian society, and we have a moral compass,” Lt. Gen. Richard G. Moore Jr. said. “Not everybody does, and there are those that are willing to go for the ends — regardless of what means have to be employed, and we’ll have to be ready for that.”
Moore, a three-star general serving as the USAF's deputy chief of staff for plans and programs, made those comments on Thursday during a discussion at the Hudson Institute that centered on improving cyberspace and spectrum superiority in the military branch.
When asked about the Pentagon's position on autonomous warfare, Moore said the military holds itself to higher ethical standards than its adversaries, and he raised concerns about the use of AI in future international conflicts.
“What will the adversary do?” Moore said. “It depends who plays by the rules of warfare and who doesn’t. There are societies that have a very different foundation than ours.”
The U.S. military has dominated modern combat in the cyberspace and electromagnetic spectrum for years, allowing forces to “blind its enemies and gain a potentially decisive advantage,” the Hudson Institute writes in the event’s description. But technological threats are rising in the East from nations including China, Russia, Iran, and North Korea that “employ state-sponsored cyberattacks as a tool for gray-zone aggression.”
Technology experts warned lawmakers last week to act quickly in the race for AI dominance. China reportedly is leading the world in funding innovative defense measures as the U.S. focuses on consumer services such as ChatGPT.
“We need to consider what the overall investment into military implementations looks like. And that’s where there’s a large disparity,” Scale AI founder Alexander Wang told a House Armed Services subcommittee. “If you compare as a percentage of their overall military investment, the PLA is spending somewhere between one to two percent of their overall budget into artificial intelligence whereas the DoD is spending somewhere between 0.1 and 0.2 percent of our budget on AI.”
Defense Department officials began working on responsible guidelines for the military's use of AI and autonomy on future battlefields during the Trump administration, according to the department's website. After months of consultation with leading AI experts in commercial industry, government, academia, and the American public, the DOD's AI strategy directed the U.S. military to lead in AI ethics and the lawful use of AI systems.
President Joe Biden’s State Department issued a declaration on “responsible military use of artificial intelligence and autonomy” in February, stating that the use of AI in armed conflict must comply with applicable international humanitarian law, including its fundamental principles.
The declaration states military use of AI capabilities should remain within a responsible human chain of command and control during operations to consider risks and benefits and minimize unintended bias and accidents.
According to the Pentagon's latest budget request, as reported by Defense One, officials have added funding for "several forms" of ethical AI exploration.
“The first one is what do we think we’re allowed to let AI do, the second one is how do we know how the algorithm made decisions and do we trust it, and the third one is at what point are we ready to let the algorithm start doing some things on its own that maybe we are or aren’t comfortable with.”
Alex John London, a professor of ethics and computational technologies at Carnegie Mellon University, told The Washington Post that Moore’s ethical concerns surrounding AI use in war “are broader than any single tradition.”
“There’s a lot of work in the ethics space that’s not tied to any religious perspective, that focuses on the importance of valuing human welfare, human autonomy, having social systems that are just and fair,” he said.
Addressing his remarks in an email to The Post, Moore said the foundation of his comments "was to explain that the Air Force is not going to allow AI to take actions, nor are we going to take actions on information provided by AI unless we can ensure that the information is in accordance with our values."
“While this may not be unique to our society, it is not anticipated to be the position of any potential adversary,” Moore wrote.