Drone missiles may have prompted outcry from international human rights groups and controversy in the media, but unmanned aerial vehicles could be on the verge of being upstaged by a new weapon on the block: Lethal Autonomous Robotics (LARs). This emerging breed of technology will be able to select a target, aim and fire with no intervention from human beings beyond programming and deployment. War could be about to get a lot cheaper, a lot less bloody... and a lot more frequent.
Development and proliferation of these 'killer robots' is easy to dismiss as a futuristic fantasy - the stuff of Star Wars rather than real-life warfare. But the United States has already developed a Counter Rocket, Artillery and Mortar (C-RAM) system that automatically destroys incoming artillery, rockets and mortar rounds. Israel's 'fire and forget' Harpy is an entirely autonomous system, designed to detect, attack and destroy radar emitters. Even the United Kingdom boasts Taranis, a combat drone prototype that can independently locate enemies. The sensors on these machines are becoming increasingly sophisticated, leaving the human operator with the triviality of pressing the 'kill' button - a task that would be child's play for the machines themselves. South Korea has already skipped over this minor detail, establishing an 'automatic mode' for the sentry robots it has deployed in the demilitarised zone between North and South Korea.
So with these tools at states' disposal and sensors becoming increasingly advanced, it's no wonder that states parties to the UN Convention on Certain Conventional Weapons (CCW) agreed to raise the issue in November last year. But the main challenge facing the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, was slotting LARs into a dated system of international law that had not foreseen their coming. In his April report, Heyns was forced to skirt around the substantive issues and pose questions that seem comically primitive for a situation that could change major branches of international law forever. How can a robot distinguish between a soldier carrying a machine gun and a man carrying a large piece of metal? How can a robot be taken prisoner during conflict? How can a robot discern when an aggressor has surrendered? Why should a morally vacuous machine have the power to decide whether a human being lives or dies? International law as a discourse is evidently ill-equipped to accommodate this new era in weaponry, which has so far hidden from media attention behind the over-inflated shadow cast by drones.
International NGOs such as Amnesty International have also grappled for a legal basis for their campaigns, pointing to unlawful killings of civilians by drones in Pakistan. During the CCW meeting in Geneva, Amnesty International reported that Professor Noel Sharkey of the International Committee for Robot Arms Control warned that LARs "could represent a slippery slope into indiscriminate warfare" if the UN does not take rapid measures to curb their development. But opponents will find no case against LARs by clinging to collateral damage. Just like drones, LARs will inevitably limit civilian casualties through targeted killing and reduce the need for boots on the ground, limiting military casualties as well. Appealing to the utopian notion that the value of a life is too dear to place in the hands of a machine will find no concrete ground in international law, nor support among those states whose concern for national security far exceeds their respect for an alleged terrorist's right to life. The British public's reaction to a marine's ten-year sentence for violating the Geneva Conventions is evidence enough of that.
Instead, opponents of LARs will need to go back to basics and consider why a government would develop lethal robots in the first place. The development of drones provides some clues here. Speaking at the Thomson Reuters building at the end of last year, Professor Paul Schulte remarked that the US government wanted to exterminate terrorists by "skewering them from the sky", leaving no human footprint that could be traced back to the government and used to hold it accountable. As the world has seen in Iraq and Afghanistan, military intervention costs money and lives. It divides the electorate, dents the approval polls and wreaks havoc in Parliament. On an international scale, bombing sites of conflict comes with inevitable blows to civilians, whose injuries and shredded homes pervade the media and haunt the initiators for years to come. President Obama's cautious approach on Syria, and China and Russia's back-pedalling on Libya, already reflect a growing global reluctance to be caught red-handed in a conflict.
So the blatant appeal of LARs is that a state would be able to destroy its target anywhere, at any time, without the threat of retaliation and loss of life, and without the risk of taking out entire communities and facing prosecution for war crimes. LARs eliminate the reasons a state might have for refraining from a military strike, and so they effectively transform the meaning of war from a period of transient conflict into a perpetuity of intermittent attacks with no human footprint and no legal accountability. So the important question to be discussed at the Meeting of Experts this year is not 'How do LARs fit into existing international law?' but rather 'How might LARs redefine international law?' To push them outside the paradigm would be to hand them the 'kill' button on a silver platter.