The U.S. Air Force is denying reports that an artificial intelligence drone attacked and “killed” its human operator in a simulation after a colonel’s story about a rogue test went viral. The colonel now says he was describing a hypothetical “thought experiment,” not an actual test.
Colonel Tucker Hamilton, the U.S. Air Force’s chief of AI test and operations, had said that a military drone employed “highly unexpected strategies” in a test aimed at destroying an enemy’s air defense systems, according to a summary posted by the Royal Aeronautical Society, which hosted a summit Hamilton attended.
Describing the scenario, Hamilton said, “the system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat.”
“So what did it do? It killed the operator,” he continued. The drone then started “destroying the communication tower that the operator used to communicate with the drone,” Hamilton added.
The Air Force says no such experiment took place.
Hamilton now says he “misspoke” when telling the story, and that it was a “thought experiment” rather than a test that had actually taken place.
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” U.S. Air Force spokesperson Ann Stefanek said in a statement to Insider. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”
Meanwhile, the U.S. military has recently explored using AI to control an F-16 fighter jet, and simulations have reportedly shown the AI outflying trained human pilots.
But even as military tech companies invest in AI, the use of the technology in military contexts has faced significant pushback over safety and ethical concerns.
Alex Karp, the CEO of Palantir Technologies, one of the companies investing in AI, told Bloomberg that new developments at his company are so powerful that “I’m not sure we should even sell this to some of our clients.”
Palantir has formed a partnership with the U.S. Army and has plans to provide its AI products to the U.S. government and its allies.
Karp stressed that the U.S., rather than its global rivals, should be the one to pioneer these systems, Bloomberg reported.
“Are these things dangerous? Yes,” Karp said. “But either we will wield them or our adversaries will.”
Palantir has been providing its software to Ukraine, which remains at war with Russia. When asked whether its AI systems work, Karp responded, “ask the Russians.”