Unmanned drones and other military vehicles aren't new, and robots and autonomous hardware look increasingly like the future of warfare. However, weapons systems able to identify and engage targets without human interaction raise serious legal and ethical questions.
The idea that a robot or drone could detect a target and open fire without a human operator is frightening, but a growing number of researchers believe it is feasible. One concern is that robots would be unable to reliably distinguish enemy combatants from civilians. A counter-argument holds that autonomous systems could cause less collateral damage than humans remotely operating drones.
"Technologies have reached a point at which the deployment of such systems is - practically, if not legally - feasible within years, not decades," said Stuart Russell, an AI researcher at the University of California, Berkeley, in a commentary published in "Nature." The AI weapons "have been described as the third revolution in warfare, after gunpowder and nuclear arms."