Rapporteurs: Erin Hunt and Gerardo Lebron Laboy, Mines Action Canada
Lethal Autonomous Weapons (LAWs) are future weapons that would select their targets and engage (kill) them based on their programming. They would be “autonomous” in the sense that they would not require human intervention to actuate (act or operate according to their programming). Being driven solely by algorithms, LAWs would be able to kill without any human interference or oversight.
The following arguments have been offered in support of the development of LAWs:
The following arguments have been offered against the development of LAWs:
The solution is an international preemptive ban on the development of lethal autonomous weapons adopted by the High Contracting Parties of the UN Convention on Certain Conventional Weapons. This ban would build on other humanitarian disarmament treaties and on the preemptive ban on blinding laser weapons established by the Protocol on Blinding Laser Weapons (Protocol IV to the 1980 Convention on Certain Conventional Weapons).
The adoption of this solution depends entirely on the willingness of the parties to agree to and adopt the ban. As of today, the call for a ban on lethal autonomous weapons is supported by the following 25 states:
Algeria, Argentina, Austria, Bolivia, Brazil, Chile, Costa Rica, Colombia, Cuba, Djibouti, Ecuador, Egypt, Ghana, Guatemala, Holy See, Iraq, Mexico, Nicaragua, Pakistan, Panama, Peru, State of Palestine, Uganda, Venezuela, and Zimbabwe. China has expressed support for a ban on the use of LAWs, but not on their development.
Numerous members of the technology industry have expressed support for a ban on the development of LAWs. Over 1,000 experts in robotics and artificial intelligence have signed two open letters organized by the Future of Life Institute supporting the ban: “Autonomous Weapons: An Open Letter from AI & Robotics Researchers” and the “Lethal Autonomous Weapons Pledge”. Signatories of these letters include Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky, Skype co-founder Jaan Tallinn, Google DeepMind co-founder Demis Hassabis, and others.
Seventy religious leaders, representatives, and faith-based organisations have signed an interreligious declaration, an initiative of PAX in cooperation with Pax Christi International, calling on states to work towards a global ban on fully autonomous weapons.
More than 20 Nobel Peace Prize Laureates have endorsed a joint statement calling for a ban on weapons that would be able to select and attack targets without meaningful human control.
The United States and Russia have stated that an international ban on lethal autonomous weapons would be premature, encouraging instead further analysis of the possible benefits this new technology could offer. The United Kingdom’s Foreign Office and Ministry of Defence have expressed opposition to an international ban on the grounds that international humanitarian law already addresses the issue.
LAWs have not yet been fully developed; in fact, much of the technology they would require still does not exist. This puts the international community in an advantageous position: as with blinding laser weapons, we can act preemptively and prevent a humanitarian catastrophe and its consequences altogether.
In the robot context, “actuate” refers to the acts or operations of a robot caused by its programming. The terminology is rooted in Ryan Calo’s sense-think-act paradigm for robots:
“The utility here of the so-called sense-think-act paradigm lies in distinguishing robots from other technologies. […] The idea of a robot or robotic system is that the technology combines all three. […] My working assumption is that a system acts upon its environment to the extent it changes that environment directly. A technology does not act, and hence is not a robot, merely by providing information in an intelligible format. It must be in some way. A robot in the strongest, fullest sense of the term exists in the world as a corporeal object with the capacity to exert itself physically.[…] [R]obots are best thought of as artificial objects or systems that sense, process, and act upon the world to at least some degree.”
Ryan Calo, “Robotics and the Lessons of Cyberlaw”, 103 Calif. L. Rev. 513 (2015), pp. 529-32. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2402972
Anderson, Kenneth and Matthew C. Waxman. “Law and Ethics for Autonomous Weapon Systems: Why a Ban Won't Work and How the Laws of War Can” Stanford University, The Hoover Institution (Jean Perkins Task Force on National Security and Law Essay Series) (2013). Online at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2250126.
Arkin, Ronald C. “The Case for Ethical Autonomy in Unmanned Systems” Journal of Military Ethics (2010) 9.4, 332-341, http://www.cc.gatech.edu/ai/robot-lab/onlinepublications/Arkin_ethical_autonomous_systems_final.pdf.
Arkin, Ronald C.; Patrick Ulam & Alan R. Wagner, Moral Decision- making in Autonomous Systems: Enforcement, Moral Emotions, Dignity, Trust, and Deception, 100 Proceedings of the IEEE Special Issue on Interaction Dynamics at the Interface of Humans and Smart Machines 571 (2012).
Arkin, Ronald C. “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture,” Technical Report GIT-GVU-07-11 https://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf
Asaro, Peter. “On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making” International Review of the Red Cross 94.886 (June 2012): 694-95, http://www.icrc.org/eng/resources/documents/article/review-2012/irrc-886-asaro.htm.
Bowcott, Owen. “UK opposes international ban on developing ‘killer robots’: Activists urge bar on weapons that launch attacks without human intervention as UN discusses future of autonomous weapons”. The Guardian, 13 Apr. 2015. https://www.theguardian.com/politics/2015/apr/13/uk-opposes-international-ban-on-developing-killer-robots
Carnahan, Burrus M. and Marjorie Robertson (Jul 1996). “The Protocol on “Blinding Laser Weapons”: A New Direction for International Humanitarian Law”. The American Journal of International Law. 90 (3): 484–490
Domingos, Pedro. “A Few Useful Things to Know About Machine Learning”, (2012) 10 Communications of the ACM 78
Human Rights Watch, Losing Humanity: The Case Against Killer Robots, 19 Nov. 2012. https://www.hrw.org/news/2012/11/19/ban-killer-robots-it-s-too-late
Human Rights Watch, Mind the Gap: The Lack of Accountability for Killer Robots, 9 Apr. 2015 https://www.hrw.org/report/2015/04/09/mind-gap/lack-accountability-killer-robots
Human Rights Watch, Making the Case: The Dangers of Killer Robots and the Need for a Preemptive Ban, 9 Dec. 2016. https://www.hrw.org/report/2016/12/09/making-case/dangers-killer-robots-and-need-preemptive-ban
Human Rights Watch, Heed the Call: A Moral and Legal Imperative to Ban Killer Robots, 21 Aug. 2018 https://www.hrw.org/report/2018/08/21/heed-call/moral-and-legal-imperative-ban-killer-robots
Kerr, Ian and Katie Szilagyi. “Evitable Conflicts, Inevitable Technologies? The Science and Fiction of Robotic Warfare and IHL” (2013) Law Culture and the Humanities.
Kerr, Ian and Katie Szilagyi. “Asleep at the switch? How killer robots become a force multiplier of military necessity”, in ROBOT LAW 333, ed. Ryan Calo, A. Michael Froomkin, and Ian Kerr, (Edward Elgar Publishing Limited, 2016)
Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June 1977. https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Treaty.xsp?action=openDocument&documentId=D9E6B6264D7723C3C12563CD002D6CE4 ; see also https://ihl-databases.icrc.org/ihl/WebART/470-750045?OpenDocument
Sharkey, Noel. “The evitability of autonomous robot warfare” International Review of the Red Cross 94.886 (June 2012): 787-799. Online at http://www.icrc.org/eng/resources/documents/article/review-2012/irrc-886-sharkey.htm.
Turing, Alan “Computing Machinery and Intelligence”, (1950) http://www.abelard.org/turpap/turpap.htm
Urban, Tim. “The AI Revolution: The Road to Superintelligence”, http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html