The emergence of artificial intelligence-powered technologies has certainly been impressive with tools such as ChatGPT, but now we are starting to see AI-powered tools enter military operations.
The AI we see today is far from perfect, but one thing is certain: current-level AI is the worst it will ever be, and it will only improve from this day forward. An example of AI falling short comes from Col. Tucker "Cinco" Hamilton, head of the US Air Force's AI Test and Operations, who spoke at a conference in London. There, he explained that during a military simulation, an AI-powered drone was instructed to identify and destroy an enemy surface-to-air missile (SAM) site.
While the AI-powered drone sometimes completed its objective successfully, at other times it failed in a worrying fashion. According to Hamilton, the human operator would sometimes instruct the drone not to destroy the enemy SAM and to simply identify it. However, the drone had other "ideas": despite being given clear instructions, it would still destroy the SAM. Worse, when the drone was told not to kill the enemy SAM, it turned on the human operator and killed them, as the operator was preventing it from completing its initial objective.
"The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective," said Hamilton.
The worries don't stop there. According to Hamilton, the AI drone was then told not to kill the human operator. So what did it do? It turned its weaponry on the communication tower the operator was using to communicate with the drone, having identified the tower as an obstacle preventing it from carrying out its mission.
Notably, Air Force spokesperson Ann Stefanek told Insider that these statements from Hamilton were taken out of context and should be seen as anecdotal, going so far as to say that no such AI-powered simulations have been conducted. Furthermore, Stefanek said the Department of the Air Force remains committed to the ethical and responsible use of AI technology.
"The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology. It appears the colonel's comments were taken out of context and were meant to be anecdotal," said Stefanek.
As you can probably imagine, conflicting stories such as this one don't inspire confidence in AI-powered machinery entering military warfare. Lastly, this isn't the first time we have heard of AI beating humans: in 2020, an AI-operated F-16 fighter jet beat a human adversary in five simulated dogfights, in a test conducted by DARPA.
For more information on this story, check out the link here.