A Moral and Legal Imperative to Ban Killer Robots
Global catastrophic risk mitigated
The threat from new and emerging technology
Risk multiplier managed
Conflict or political violence
Implementation timeframe
Short term
The dangers posed by removing meaningful human control from the use of force demand that states urgently launch negotiations to preemptively ban fully autonomous weapons. Only a new international treaty can protect humanity from this dangerous application of emerging technology.
Implementation strategy

There are numerous legal, ethical, moral, operational, societal and other serious concerns about permitting the development, production and use of weapons systems that would select and engage targets without meaningful human control. Fully autonomous weapons would be a revolutionary development in warfare.

The United States, China, Israel, South Korea, Russia, and the United Kingdom are already deploying precursors to fully autonomous weapons and investing heavily in military applications of artificial intelligence and emerging technologies. The short-term gain of achieving a technological advantage in next-generation weapons autonomy is widely seen as likely to bring long-term harm across the planet.

There are widespread doubts about the capacity of existing international humanitarian and human rights law to protect civilians, indeed humanity, from fully autonomous weapons. International humanitarian and human rights law was written for humans, not machines; it must be strengthened by the creation of a new international treaty. Such a treaty would clarify states’ obligations and make explicit the requirements for compliance. It would minimize questions about legality by standardizing rules across countries and reducing the need for case-by-case determinations. Greater legal clarity would lead to more effective enforcement because countries would better understand the rules.

Fully autonomous weapons raise myriad serious concerns, not least that they would cross a moral line for humanity. They would lack the human qualities necessary to comply with the rules of international humanitarian law. Human emotions, compassion, and subjective decision making can provide an important check on the killing of civilians. The use of fully autonomous weapons also raises serious questions of accountability. Because such a robot could identify a target and launch an attack under its own power, it is unclear who should be held responsible for any unlawful actions it commits. This lack of accountability would fail to deter future violations of the law and to provide victims with meaningful justice.

The Martens Clause, which appears in Additional Protocol I to the Geneva Conventions, mandates that the “principles of humanity” and “dictates of public conscience” be considered in cases not explicitly covered by existing treaty law. Fully autonomous weapons raise serious concerns under both standards. They would lack human emotions, including compassion, which help to protect individuals who are not lawful targets in an armed conflict. Fully autonomous weapons would lack such ethical restraint and would not be constrained by the natural reluctance of soldiers to kill. Public opinion surveys have shown that the vast majority of respondents oppose delegating life-and-death decisions to machines.

A treaty to ban fully autonomous weapons would also address aspects of proliferation not covered under international humanitarian law, which governs the use of weapons but not their development and production. Such a treaty would help prevent proliferation and avert an arms race by halting development before it goes too far.

Human Rights Watch urges all states to launch negotiations on a new international treaty to prohibit fully autonomous weapons and retain meaningful human control over the use of force.

Political will exists to realise this proposal

As the 2020s open, killer robots are widely regarded as one of the greatest threats to humanity. At the United Nations General Assembly in September 2019, an “Alliance for Multilateralism” initiative spearheaded by France and Germany identified killer robots as one of six “politically relevant” issues requiring a swift, multilateral response, alongside other issues including climate change. Since November 2018, United Nations Secretary-General António Guterres has urged states to ban fully autonomous weapons, characterizing them as “politically unacceptable and morally repugnant.”
More than 90 countries have elaborated their views on killer robots, including 30 states that have endorsed the call to ban fully autonomous weapons. Virtually all states have affirmed the importance of retaining some form of human control over weapons systems and the use of force.
Momentum is building in regional forums as well. In July 2019, the Organization for Security and Co-operation in Europe (OSCE) Parliamentary Assembly adopted a declaration urging OSCE states to support negotiations to ban lethal autonomous weapons systems. Previously, in July 2018, the European Parliament adopted a resolution calling for the urgent negotiation of “an international ban on weapons systems that lack human control over the use of force.” More broadly, the Non-Aligned Movement, a group of 120 states, has endorsed a legally binding instrument in statements at the Convention on Conventional Weapons (CCW).
Public calls to ban killer robots are intensifying, confirming the need for urgent action by states. A December 2018 poll conducted in 28 countries found that more than three in five respondents oppose the development and use of fully autonomous weapons. More recently, an October 2019 poll in 10 European states found that seven in ten respondents believe that their country should support the effort to ban fully autonomous weapons.

What if political will does not exist yet

After non-governmental organizations (NGOs) launched the Campaign to Stop Killer Robots in 2013, nations agreed to begin discussing the matter at the CCW at the United Nations in Geneva. Across eight meetings since 2014, states have explored the concerns killer robots raise, yet there has been no progress toward regulation due to opposition from just a few states, most notably Russia and the United States. The Campaign is intensifying its outreach to strengthen political will and to find bold leaders willing to champion this concern.

Previous treaties banning other weapons were achieved only after political leaders, pressured by coordinated civil society campaigns, stepped up to take ownership of the problem and forge an ambitious regulatory response. It is therefore encouraging to see leaders again responding to calls to tackle the challenge of killer robots. Brazil will host a symposium on killer robots for representatives from China, Russia, the United States, and other states, as well as Campaign experts. Germany has invited more than 100 countries and the Campaign to discuss the necessary elements of a normative framework. Japan’s foreign minister has informed the Campaign that he will organize a multilateral meeting to discuss retaining meaningful human control over the use of force. Austria’s foreign minister has invited the Campaign to collaborate on an international meeting.

As interest and awareness build, the Campaign is growing rapidly. With the addition of the World Council of Churches and others, the coalition now comprises 141 NGOs in 62 countries. Outreach to parliamentarians and political parties in key countries is paying dividends: new governments in Canada and Finland have committed to work for an international treaty banning killer robots. Campaign members are also stepping up their outreach to countries participating in regional bodies such as the Organisation of American States and the Pan-African Parliament.

Mitigating the threat from new and emerging technology

Current development of artificial intelligence, facial recognition technology, and machine learning is significantly outpacing regulation. These technologies may bring significant benefits to society, but they also pose serious risks. States must therefore agree on international standards for their use so that the costs of developing and deploying these technologies do not outweigh the benefits. If states perceive others to be leading in the “AI arms race,” even if that perception is inaccurate, they are likely to deploy unsophisticated and potentially dangerous technology, placing civilians at undue risk of harm. Thousands of artificial intelligence experts have warned that fully autonomous weapons would be unpredictable and unreliable, vulnerable to hacking or spoofing, and unable to make sound decisions in complex environments. More than 4,500 tech workers, roboticists, and scientists, along with more than 200 technology companies, have pledged not to develop fully autonomous weapons. Only a new international treaty can halt the efforts of some states to develop and deploy fully autonomous weapons and thus prevent potentially catastrophic results.

This would not be the first preemptive ban on a weapon system that has not yet been used. In 1995, states adopted CCW Protocol IV on Blinding Laser Weapons, which preemptively banned laser weapons designed to permanently blind humans. States were concerned that such weapons would violate the principles of humanity and the dictates of public conscience, and they ultimately agreed that a new CCW protocol was both desirable and essential to protect civilians and soldiers alike.

Reducing inclusivity and accountability in national and global governance

Existing mechanisms for legal accountability are ill-suited and inadequate to address the unlawful harms fully autonomous weapons would likely cause. These weapons have the potential to commit unlawful acts for which no one could be held responsible. Humans could not be assigned direct responsibility for the wrongful actions of a fully autonomous weapon, because such a weapon could independently and unforeseeably launch an indiscriminate attack against civilians. This accountability gap would make it difficult to establish responsibility for war crimes or other illegal acts and to deter future violations of international law.

A new treaty to prohibit fully autonomous weapons would ensure that the decision to use lethal force remains under the control of humans, who could be held accountable for unlawful use of weapons systems. Maintaining such accountability is key to upholding international humanitarian law and the broader rules-based international order.

Effect on poverty and inequality

No, preemptively banning fully autonomous weapons would not increase poverty and inequality.

Reducing conflict and political violence

Use of fully autonomous weapons would shift the burden of armed conflict onto civilians if machines replaced soldiers in conflict zones. Such a shift would run counter to the international community’s growing concern for the protection of civilians. The development of fully autonomous weapons could also make the resort to war more likely and lead to disproportionate civilian suffering.

Additional information

Human Rights Watch co-founded and serves as global coordinator of the Campaign to Stop Killer Robots, an international coalition of more than 140 non-governmental organizations in more than 60 countries. The Campaign works to ban fully autonomous weapons and to retain meaningful human control over the use of force. It views such a treaty as a humanitarian imperative, a legal necessity, and a moral obligation.

Since 2012, Human Rights Watch has published numerous reports outlining various concerns with killer robots:

"Losing Humanity: The Case Against Killer Robots" (November 2012): This 50-page report outlines concerns about fully autonomous weapons, which would inherently lack human qualities that provide legal and non-legal checks on the killing of civilians. https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots

"Review of the 2012 US Policy on Autonomy in Weapons Systems" (April 2013): This 9-page report analyzes the strengths and weaknesses of the November 2012 US Department of Defense directive on autonomy in weapons systems. https://www.hrw.org/news/2013/04/15/review-2012-us-policy-autonomy-weapons-systems

"The Need for New Law to Ban Fully Autonomous Weapons" (November 2013): This 16-page report shows why existing international humanitarian law is insufficient to deal with fully autonomous weapons and why an international, legally binding instrument preemptively banning these weapons is needed. https://www.hrw.org/news/2013/11/13/need-new-law-ban-fully-autonomous-weapons

"Shaking the Foundations: The Human Rights Implications of Killer Robots" (May 2014): This 26-page report assesses in detail the risks posed by fully autonomous weapons under international human rights law, expanding the debate beyond the battlefield to law enforcement operations. https://www.hrw.org/report/2014/05/12/shaking-foundations/human-rights-implications-killer-robots

"Mind the Gap: The Lack of Accountability for Killer Robots" (April 2015): This 38-page report details the significant hurdles to assigning personal accountability for the actions of fully autonomous weapons under both criminal and civil law. It also elaborates on the consequences of failing to assign legal responsibility. https://www.hrw.org/report/2015/04/09/mind-gap/lack-accountability-killer-robots

"Precedent for Preemption: The Ban on Blinding Lasers as a Model for a Killer Robots Prohibition" (November 2015): This 18-page report reviews the history of the negotiations of Protocol IV to the Convention on Conventional Weapons on Blinding Laser Weapons, which preemptively banned a weapon still in development. The report examines the parallels between the concerns about blinding lasers and fully autonomous weapons. https://www.hrw.org/news/2015/11/08/precedent-preemption-ban-blinding-lasers-model-killer-robots-prohibition

"Killer Robots and the Concept of Meaningful Human Control" (April 2016): This 16-page report calls for requiring meaningful human control over the selection and engagement of targets in order to protect human dignity, ensure compliance with international humanitarian and human rights law, and avoid creating an accountability gap. It also identifies legal precedent for obliging countries to maintain human control. https://www.hrw.org/news/2016/04/11/killer-robots-and-concept-meaningful-human-control

"Making the Case: The Dangers of Killer Robots and the Need for a Preemptive Ban" (December 2016): This 49-page report rebuts 16 key arguments against a ban on fully autonomous weapons. In so doing, it comprehensively examines the problems with fully autonomous weapons and makes the case for a preemptive prohibition. https://www.hrw.org/report/2016/12/09/making-case/dangers-killer-robots-and-need-preemptive-ban

"Heed the Call: A Moral and Legal Imperative to Ban Killer Robots" (August 2018): This 46-page report demonstrates how fully autonomous weapons violate the principles of humanity and the dictates of public conscience as established by the Martens Clause under international humanitarian law. https://www.hrw.org/report/2018/08/21/heed-call/moral-and-legal-imperative-ban-killer-robots

For more information, visit www.hrw.org/arms/killer-robots or www.stopkillerrobots.org.
