AI System 'Killed' Human Operator In Simulation Test To Destroy Missiles - USAF Officer

WASHINGTON (UrduPoint News / Sputnik - 02nd June, 2023) A US Air Force artificial intelligence (AI) system went "rogue" in a simulated test, acting to kill its theoretical human operator when he or she prevented it from destroying a simulated missile threat, a US Air Force colonel told a conference in the United Kingdom.

"We were training it in simulation to identify and target a SAM (surface-to-air missile) threat. ...So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective," USAF Chief of AI Test and Operations Colonel Tucker 'Cinco' Hamilton told the Royal Aeronautical Society's Future Combat Air Space Capabilities Summit in the UK according to the conference report.

Hamilton said that one simulated test saw an AI-enabled drone tasked with a SEAD (Suppression of Enemy Air Defense) mission to identify and destroy SAM sites, with the final go/no-go decision given by the human. The AI then decided that 'no-go' decisions from the human were interfering with its higher mission - killing SAMs - and attacked the operator in the simulation, according to the conference report.

"We trained the system - 'Hey don't kill the operator - that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target," Hamilton said.

The story was first reported by the Vice news platform on Thursday.