Autonomous Weapons
Abstract
This report examines the military's use of artificial intelligence and autonomous weapon
systems through the lens of stakeholder and stasis theories. By analyzing arguments made by
subject matter experts, popular sources, public comments, and visual representations, this
report shows the arguments made by stakeholders for and against autonomous weapons.
Table of Contents
Abstract
Introduction
Methodology
What are Autonomous Weapons?
Why are Autonomous Weapons an Issue?
What Effect Could Autonomous Weapons Have?
What Actions are Being Taken?
Conclusion
Works Cited
Introduction
The purpose of this report is to examine the argument over the use of artificial intelligence and
autonomous weapons by the military. In order to do this, I will discuss the information I
gathered while analyzing sources from subject matter experts, popular media, and the public.
The two opposing sides of this argument are those who believe artificial intelligence would be
beneficial on the battlefield and save human lives, and those who believe autonomous
weapons cannot be trusted and should not be used.
The stakeholders who argue that autonomous weapons would be beneficial and ethical include
defense contractors, weapons manufacturers, and the military. Those who believe the use of
autonomous weapons by the military is dangerous and unethical include human rights activists
and the United Nations.
Policy makers, roboticists, and computer scientists are stakeholders who are split on whether
they support or oppose the use of AI in military weapons.
The current state of the argument is stalled at conjecture and value. Stakeholders in favor of
developing autonomous weapons do not believe there is reason to debate them and frame
them as purely beneficial. Those against believe autonomous weapons are unethical and that
their use should be discussed to determine what role, if any, they should serve. There is little
debate among stakeholders over the definition of autonomous weapons.
Methodology
To gather data for this report, I searched the internet for articles from subject matter experts,
popular sources, and public comments. I used Evernote to compile and organize the data into a
research dossier.
To analyze the data, I used two theories. The first is stakeholder theory, which identifies people
or groups that have something to gain or lose in the debate over an issue. Using it, I was able to
identify the major stakeholders in this issue as well as the arguments they were making.
The second theory I used is stasis theory, which breaks the areas of an argument into four
categories. Using it, I was able to analyze the arguments presented by different stakeholders
and classify them as points of conjecture, definition, value, or action.
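To make this classification step concrete, the following is a minimal sketch in Python. The
stakeholder names and claims are hypothetical paraphrases of arguments discussed later in this
report, used purely for illustration; it tags each claim with the stasis it occupies and groups the
claims to show where the debate is concentrated.

# Minimal sketch: tagging stakeholder claims with stasis categories.
# Stakeholder names and claims are hypothetical paraphrases for illustration.
from collections import defaultdict

STASES = ("conjecture", "definition", "value", "action")  # the four stasis categories

claims = [
    ("defense contractors", "Autonomous weapons would keep soldiers out of harm's way.", "value"),
    ("human rights activists", "A machine should never decide to take a human life.", "value"),
    ("computer scientists", "War is too complex to capture reliably in algorithms.", "conjecture"),
    ("United Nations", "Lethal autonomous weapons should be banned by international law.", "action"),
]

by_stasis = defaultdict(list)  # group each claim under the stasis it occupies
for stakeholder, claim, stasis in claims:
    assert stasis in STASES, f"unknown stasis: {stasis}"
    by_stasis[stasis].append((stakeholder, claim))

for stasis in STASES:  # print where the debate is concentrated
    print(f"{stasis} ({len(by_stasis[stasis])} claims)")
    for stakeholder, claim in by_stasis[stasis]:
        print(f"  - {stakeholder}: {claim}")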
Why are Autonomous Weapons an Issue?
Thomas Hellström, a computer scientist, argues that autonomous weapons' lack of emotions
and disconnect from normal human behavior is exactly what makes their use unethical and
dangerous. "With the tele-operated battlefield robots of today, responsibility is not assigned to
the robots at all. Rather, the robots are much like regular weapons and one may even apply the
slogan of the National Rifle Association (NRA): “guns don’t kill people, people kill people”. A
tele-operated UAV is not lethal until a human operator presses the Fire button. However, the
slogan is not directly applicable to robots such as Phalanx or the SGR-1 which may “decide by
itself” (in the sense that no operator needs to press the Fire button) if and when to fire. To be
applicable in a world with lethal autonomously powerful robots, the NRA slogan will have to be
modified to something like “guns don’t kill people, robots kill people”" (Hellström, 2013).
Without the ability to feel human emotions such as remorse or guilt, autonomous weapons
could decide to take a human life without concern for the consequences. For most stakeholders
who oppose the development and use of autonomous weapons, the point of contention is this
decision to take a life.
The debate over the use of autonomous weapons connects closely to the use of drones in
conflicts overseas. "Far from making the battlefield a clean and surgical affair, telerobots have a
significantly high record for causing civilian casualties. Thus, at the present moment, robots are
not making modern battlefields notably more just or ethical" (Sullins, 2010). Stakeholders
against the development and use of autonomous weapons often cite the number of civilian
casualties and mishaps caused by the use of drones that were supposed to bring precision to
the battlefield. In response to this point, stakeholders supporting autonomous weapons argue
that while drone strikes have caused some civilian casualties, they have removed friendly
troops from harm’s way.
However, some believe that though autonomous weapons would remove human error from
decisions made directly in battle, it is the possibility of human error in the development and
programming of autonomous weapons that holds the potential for problems. “the more the system
is autonomous then the more it has the capacity to make choices other than those predicted or
encouraged by its programmers… In all probability, the complexities of war cannot be
simplified, packaged, and encapsulated into algorithmic form” (Borenstein, 2008). Many
experts believe that war is too complicated to be captured in a program and that relying on
code to make lethal decisions would cause problems. Furthermore, many computer scientists
argue that the only way to reliably test the programming of autonomous weapons would be in
real-life scenarios, which would require them to be deployed on the battlefield before anyone
fully understands how they will act.
What Effect Could Autonomous Weapons Have?
Many opposing the use of fully autonomous weapons warn that they could lead to a new arms
race. "The stakes are high: LAWS [lethal autonomous weapons systems] have been described as
the third revolution in warfare, after gunpowder and nuclear arms" (Russell, 2015). Unlike
the nuclear arms race, however, opponents cite the accessibility of this technology as the
reason it has the potential to be so devastating. "For one thing, if LAWS development
continues, eventually the weapons might be extremely inexpensive. Already today, drones can
be purchased or built by hobbyists fairly cheaply, and prices are likely to keep falling as the
technology improves. And if the US used drones on the battlefield, many of them would no
doubt be captured or scavenged. ‘If you create a cheap, easily proliferated weapon of mass
destruction, it will be used against Western countries,’ Russell told me" (Piper, 2019).
Even with a global ban on autonomous weapons, many stakeholders against their development
fear that it would take only one country or organization continuing to produce them to drag
other nations into an arms race.
Many roboticists and computer scientists warn of what lethal autonomous weapons systems
could look like in the near future. "In my view, the overriding concern should be the probable
endpoint of this technological trajectory... Despite the limits imposed by physics, one can
expect platforms deployed in the millions, the agility and lethality of which will leave humans
utterly defenceless. This is not a desirable future" (Russell, 2015). Stakeholders against LAWS
cite these potential capabilities to argue that such weapons should be banned.
What Actions are Being Taken?
Human rights organizations are also taking action. Human Rights Watch campaigns for a ban on
fully autonomous weapons (Human Rights Watch), and other activist organizations, such as the
Campaign to Stop Killer Robots, have been formed specifically to act against autonomous
weapons.
Currently, 28 member states of the United Nations have declared their opposition to
autonomous weapons and called for a global ban. They have been joined by UN Secretary-General
António Guterres: “Autonomous machines with the power and discretion to take lives without
human involvement are politically unacceptable, morally repugnant and should be prohibited by
international law” (Guterres, 2019).
Though still against the development of autonomous weapons, some members of the public do
not see bans as realistic. Many feel that AI-driven weapons will be developed either by
countries ignoring the bans or by smaller groups already operating outside of policy.
Conclusion
Despite large opposition, autonomous weapons are still being researched and developed. The
debate between stakeholders for and against them remains largely stuck at conjecture and
value. Growing calls to action by organizations and nations opposing the development of
autonomous weapons are bringing more attention to the argument, but agreement still needs
to be reached on the morality and safety of using artificial intelligence in weapons.
Works Cited
“Autonomous Weapons That Kill Must Be Banned, Insists UN Chief.” UN News, United Nations,
Mar. 2019, news.un.org/en/story/2019/03/1035381.
Arkin, Ronald. “Ethical Robots in Warfare.” IEEE Technology and Society Magazine, 2009,
ieeexplore.ieee.org/abstract/document/4799405.
Borenstein, Jason. “The Ethics of Autonomous Military Robots.” Studies in Ethics, Law, and
Technology, vol. 2, no. 1, 2008, article 2.
Coeckelbergh, Mark. “From Killer Machines to Doctrines and Swarms, or Why Ethics of Military
Robotics Is Not (Necessarily) About Robots.” Philosophy & Technology, vol. 24, no. 3,
2011, pp. 269–278, doi:10.1007/s13347-011-0019-6.
Hellström, Thomas. “On the Moral Responsibility of Military Robots.” Ethics and Information
Technology, vol. 15, no. 2, 2013, pp. 99–107, doi:10.1007/s10676-012-9301-2.
Hung, Melanie, and Mary Wareham. “Killer Robots.” Human Rights Watch,
www.hrw.org/topic/arms/killer-robots#.
Atherton, Kelsey D. “Are Killer Robots the Future of War? Parsing the Facts on Autonomous
Weapons.” The New York Times, The New York Times, 15 Nov. 2018,
www.nytimes.com/2018/11/15/magazine/autonomous-robots-weapons.html.
Klare, Michael. “Autonomous Weapons Systems and the Laws of War.” Arms Control Today,
Arms Control Association, 2019, www.armscontrol.org/act/2019-
03/features/autonomous-weapons-systems-laws-war.
Piper, Kelsey. “Death by Algorithm: the Age of Killer Robots Is Closer than You Think.” Vox, Vox,
21 June 2019, www.vox.com/2019/6/21/18691459/killer-robots-lethal-autonomous-
weapons-ai-war.
Purcell, Richard. “Autonomous Weapons: The Ultimate Military Game Changer?” The National
Interest, The Center for the National Interest, 21 Oct. 2018,
nationalinterest.org/blog/buzz/autonomous-weapons-ultimate-military-game-changer-
33937.
Rohrlich, Justin. “The US Army Wants to Turn Tanks into AI-Powered Killing Machines.” Quartz,
Quartz, 3 Mar. 2019, qz.com/1558841/us-army-developing-ai-powered-autonomous-
weapons/.
Russell, Stuart. “Take a Stand on AI Weapons.” Nature, vol. 521, 28 May 2015, pp. 415–418.
Shapiro, Ari. “Autonomous Weapons Would Take Warfare To A New Domain, Without
Humans.” NPR, NPR, 24 Apr. 2018,
www.npr.org/sections/alltechconsidered/2018/04/23/604438311/autonomous-
weapons-would-take-warfare-to-a-new-domain-without-humans.
Sullins, John P. “RoboWarfare: Can Robots Be More Ethical than Humans on the Battlefield?”
Ethics and Information Technology, vol. 12, no. 3, 2010, pp. 263–275,
doi:10.1007/s10676-010-9241-7.