Will "killer robots" end up causing harm to humans, especially with militaries interested in developing robots that would be able to engage human forces?
Tesla and SpaceX founder Elon Musk, Apple co-founder Steve Wozniak, physicist Stephen Hawking, and more than 1,000 scientists and engineers have signed an open letter calling for a ban on offensive autonomous weapons, aiming to head off a military arms race in killer artificial intelligence. The idea of robot regulation was raised earlier in the year, and there appears to be growing momentum to keep such systems under human control.
"AI technology has reached a point where the deployment of [autonomous weapons] is - practically if not legally - feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms," according to the letter.
Unlike chemical and nuclear weapons, autonomous robots could be developed and sent to the battlefield relatively cheaply and anonymously, which is part of what has signatories worried.
The letter concludes: "In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control."