There is something unsettling about the idea of a ‘lethal autonomous weapon system’ (LAWS), so it is hardly surprising that the concept has sparked heated debate. Much of this discussion has been speculative, since humans still supervise weapon systems today. Automatic weapon systems, such as landmines, already exist, but these are typically simple mechanisms that respond in a predictable way to external inputs. Autonomous weapons, in contrast, demonstrate more complex behaviour that cannot be fully deduced from their programming: they display a degree of self-determination and self-learning.
Last November, Tesla CEO Elon Musk and other industry leaders signed an open letter warning against LAWS. The Campaign to Stop Killer Robots, launched in London in 2013 with the objective of banning LAWS, argues that such weapons would significantly transform war and would be unable to resolve ethical dilemmas on the battlefield. Human Rights Watch released a report in 2012 that also called for a pre-emptive ban. The case against autonomous weapons has already become a point of discussion at the United Nations.
As the UK commands one of the most technologically advanced armed forces, it will inevitably be confronted with questions about the legal and moral consequences of autonomous military technology. After Musk and others published their letter, the British government declared that future unmanned systems would always be under human supervision, except in cyber warfare. However, there has been no further specification of what such control would constitute. Robotics expert Professor Noel Sharkey applauded the statement but pointed out that ‘control’ could simply mean that humans confirm a lethal strike on a target selected by AI. In 2015, the British government declined to support an international ban on LAWS, arguing that current international humanitarian law already provides sufficient regulation.
Major concerns raised by activist groups about the impact of LAWS include the disruption of strategic stability through an arms race in autonomous systems, as well as faster, more intense escalation of violence. There have already been signs of this: Chinese and Russian leaders, for example, have stated their desire to acquire LAWS. Others fear that replacing human soldiers with robots will make the decision to go to war easier by seemingly reducing the domestic political consequences, given that the loss of a machine is more acceptable to society than the loss of human lives. These are valid concerns that require evaluation and appropriate measures to mitigate the risk of mistakes.
Machines may also lack the larger context of a particular course of action. The example of Stanislav Petrov, a Soviet Lieutenant Colonel who recognised a missile-attack warning as a false alarm and declined to pass it up the chain of command, illustrates this conundrum. Had a few lines of code been responsible for launching missiles on input from the alert system, a nuclear war could have erupted. To make a decision like Petrov’s requires knowledge of military and political context, accurate judgement of human behaviour and intuition. It is questionable whether machines can ever be programmed to a near-human degree of situational and emotional perception.
On the other side of the debate, many have been critical of the term “killer robots”, which they see as rendering the discussion overly emotional. They also note that humans make mistakes as well; problems such as stress or the demonisation of the enemy can increase the chance of miscalculation or excessive violence. In fact, they argue, machines have the potential to significantly reduce these risks and to act more rationally than a human would in the same situation.
At the moment, it seems unlikely that further research on LAWS will be prevented. While the UK announced that it will maintain ‘human-in-the-loop’ systems, there are projects in the US, Russia, Israel and other countries to develop autonomous military robots. A future with autonomous weapons is plausible. As soon as one state develops functional LAWS, others will see the need to follow suit. A ban might therefore be impossible to realise.
Nevertheless, it is imperative to have a public debate about the potential implications of LAWS on the battlefield, particularly with regard to how machines will select targets and decide when to engage. The international community can, however, pre-emptively organise frameworks to regulate the use of LAWS. A UN body on autonomous weapons already exists, on which future measures can be built. A priority should be determining the precise nature of ‘human control’. Multilateral deliberation between experts in robotics, delegates of humanitarian organisations and government representatives is essential to identify the shortcomings of LAWS and to evaluate which measures are effective, sufficient and feasible to ensure that the use of autonomous weapons will be compatible with international law.
A war in which humans are no longer the prime actor may one day exist, and the UK, as well as the rest of the world, must make sure it is ready.