'Killer robots' with the ability to attack people without human intervention should be pre-emptively banned, a major human rights group has warned.
Such weapons do not currently exist - at least not in any form regularly deployed on the battlefield by major governments.
In fact, the term 'killer robots' still evokes science fiction rather than military reality.
But Human Rights Watch said in a report, 'Losing Humanity: The Case Against Killer Robots', that fully autonomous weapons are closer than ever to being used. Such weapons would represent a huge step beyond the drones currently used in combat, which require human intervention before firing on targets.
The New York-based group said that statements by military officials insisting current drones are not autonomous "leave open the possibility that robots could one day have the ability to make such choices on their own power".
Fully autonomous killer robots could be on the battlefield within 20 to 30 years, HRW said:
Fully autonomous weapons do not yet exist, and major powers, including the United States, have not made a decision to deploy them. But high-tech militaries are developing or have already deployed precursors that illustrate the push toward greater autonomy for machines on the battlefield. The United States is a leader in this technological development. Several other countries - including China, Germany, Israel, South Korea, Russia, and the United Kingdom - have also been involved.
Above: The United Kingdom's Taranis combat aircraft, whose prototype was unveiled in 2010, is designed to strike distant targets, "even in another continent."
With this possibility on the horizon, fully autonomous 'killer robots' should be banned now, the report argues.
"Such revolutionary weapons would not be consistent with international humanitarian law and would increase the risk of death or injury to civilians during armed conflict," the report said.
The report, produced with Harvard Law School's International Human Rights Clinic, adds that such weapons "would inherently lack human qualities that provide legal and non-legal checks on the killing of civilians".
The result could be that nobody is held legally responsible for the killing of civilians during conflict.
"Giving machines the power to decide who lives and dies on the battlefield would take technology too far," said Steve Goose, Arms Division director at Human Rights Watch. "Human control of robotic warfare is essential to minimising civilian deaths and injuries."