January LD - LAWS
1.1 Introduction
1.1.1 Introduction
"Super Battle Droid! Take it down!"
Yes, the NSDA finally picked a topic where I am thoroughly justified in starting a topic analysis with a Star
Wars Battlefront II quote. We all know of the janky droids in Episodes I-III (if not, stop what you are
doing and watch them now) and how the Confederacy had armies of droids that were autonomous
come Attack of the Clones. Some states are trying to make droids, but they changed a few letters and
thought drones were the same as droids. Regardless, great powers such as the US, Russia, and China are
trying to pair artificial intelligence with weaponry to make what are considered lethal
autonomous weapon systems (LAWS).
Lethal autonomous weapons operate without human intervention, and Defense Department
definitions exclude AWS and SAWS
Congressional Research Service 19 [Congressional Research Service, 12-10-2019, “Defense
Primer: U.S. Policy on Lethal Autonomous Weapon Systems,” Congressional Research Service,
https://fas.org/sgp/crs/natsec/IF11150.pdf]/Kankee
Lethal autonomous weapon systems (LAWS) are a special class of weapon systems that use sensor suites and computer
algorithms to independently identify a target and employ an onboard weapon system to engage and
destroy the target without manual human control of the system. Although these systems generally do not yet
exist, it is believed they would enable military operations in communications-degraded or -denied environments in which traditional systems
may not be able to operate. Contrary to a number of news reports, U.S. policy does not prohibit the development or
employment of LAWS. Although the United States does not currently have LAWS in its inventory , some
senior military and defense leaders have stated that the United States may be compelled to develop
LAWS in the future if potential U.S. adversaries choose to do so. At the same time, a growing number of states and
nongovernmental organizations are appealing to the international community for regulation of or a ban on LAWS due to ethical concerns.
Developments in both autonomous weapons technology and international discussions of LAWS could hold implications for congressional
oversight, defense investments, military concepts of operations, treaty-making, and the future of war. U.S. Policy Definitions. There is no agreed
definition of lethal autonomous weapon systems that is used in international fora. However, Department
of Defense Directive
(DODD) 3000.09 (the directive), which establishes U.S. policy on autonomy in weapons systems , provides
definitions for different categories of autonomous weapon systems for the purposes of the U.S. military. These
definitions are principally grounded in the role of the human operator with regard to target selection and
engagement decisions, rather than in the technological sophistication of the weapon system . DODD
3000.09 defines LAWS as “weapon system[s] that, once activated, can select and engage targets without
further intervention by a human operator.” This concept of autonomy is also known as “human out of the loop” or “full
autonomy.” The directive contrasts LAWS with human supervised, or “human on the loop,” autonomous
weapon systems, in which operators have the ability to monitor and halt a weapon’s target engagement .
Another category is semi-autonomous, or “human in the loop,” weapon systems that “only engage individual
targets or specific target groups that have been selected by a human operator.” Semiautonomous
weapons include so-called “fire and forget” weapons, such as certain types of guided missiles, that deliver effects to
human-identified targets using autonomous functions. The directive does not cover “autonomous or semiautonomous
cyberspace systems for cyberspace operations; unarmed, unmanned platforms; unguided munitions; munitions
manually guided by the operator (e.g., laser- or wire-guided munitions); mines; [and] unexploded explosive
ordnance,” nor subject them to its guidelines. Role of human operator. DODD 3000.09 requires that all systems, including
LAWS, be designed to “allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” As noted in
an August 2018 U.S. government white paper, “‘appropriate’ is a flexible term that reflects the fact that there is not a fixed, one-size-fits-all
level of human judgment that should be applied to every context. What is ‘appropriate’ can differ across weapon systems, domains of warfare,
types of warfare, operational contexts, and even across different functions in a weapon system.”
This definition excludes all SAWS and AWS weapons in favor of FAWS. Even though the topic seems to
be aff-biased in terms of the number of potential affs (somewhat akin to the arms sales or military aid
topics), affs being restricted to FAWS likely excludes 90% of people’s first impressions of what should be
topical. Not many weapons are actually fully autonomous – most weapon systems that exist today have
some sort of human involvement to one degree or another, and of those, hardly any are deployed. Don’t
expect to say that an aff would solve any existing conflicts like the Yemen civil war, since those already
existed beforehand and LAWS are likely a long way off.
Nonetheless, this definition is often criticized by AI experts for being too restrictive and allowing the
development of SAWS. Expect lots and lots of topicality debates at circuit tournaments: even though
there are some ambiguities in most LD topics, at least the subject of the debate usually actually exists. For this topic,
beyond killer Skynet-esque AIs and maybe certain drones in development, there probably isn’t anything
I would say is 100% guaranteed to be topical or inherent. That said, the DoD definition is one of
the most authoritative and reasonable definitions that exist.
It’s the most common use of the term lethal autonomous weapons within the
literature base
Wyatt 19 [Austin Wyatt, Research Associate in the Values in Defence & Security Technology group at
The University of New South Wales at the Australian Defence Force Academy, 2019, “Charting great
power progress toward a lethal autonomous weapon system demonstration point,” Defence Studies,
DOI: 10.1080/14702436.2019.1698956]/Kankee
At the time of writing, there have been no publicly acknowledged deployments of fully autonomous weapon systems. This is largely due to the
ongoing legal and definitional uncertainty, as well as the threat of a pre-emptive development ban. The
most commonly cited
definition is derived from the 2012 U.S. Department of Defence Directive 3000.09, which defines a weapon as
autonomous if, when activated, it “can select and engage targets without further intervention by a human operator” (Defence 2012). Reflecting
the higher ethical and legal barriers to engaging humans, Directive 3000.09 restricts AWS from using lethal force against human combatants but
explicitly allows autonomous targeting where the target is not a human combatant. This definition remains central to US
understanding of LAWS despite predating the current international debate , appearing in recent official
policy and doctrinal documents,2 as well as the US statement to the 2019 Meeting of the Group of
Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems.
While this definition has been criticised by academics such as Roff (Conn, 2016), Crootof3 and Horowitz (Horowitz 2016a), it has been
used as the basis for multiple definitions of autonomous weapons from non-profits (such as the
Campaign to Stop Killer Robots).
If you don’t like the DoD’s definitions for whatever reason, below is a list of some more definitions that
you might want to look into. There are also cards that cite the Chinese definition of LAWS throughout
this file.
Government of the Netherlands Autonomous Weapon System: “A weapon that, without human intervention ,
selects and engages targets matching certain predefined criteria , following a human decision to deploy
the weapon on the understanding that an attack, once launched, cannot be stopped by human
intervention.”33 The Dutch working definition narrows the scope of what constitutes an AWS by requiring not only weapon systems that
can select and attack without requiring human guidance or intervention, but also that these systems cannot be recalled or stopped after
deployment or launch. The justification for this narrowing scope is that the Dutch government believes that “meaningful human control in the
wider loop” still governs the “wider targeting process”. As long as humans are preselecting the criteria on which weapons make targeting
decisions at the time of attack, as well as that humans make considerations about “aspects such as target selection, weapon selection and
implementation planning (time and space), an assessment of potential collateral damage” and “battle damage assessment”, the system would
be considered permissible and presents no “additional ethical issues compared to other weapon systems”.34 The Dutch working definition
stresses the need for human engagement and accountability. The focus on multiple time frames, such as weapon design and testing to
engagement and post-attack assessment, is correct. Indeed, because the Dutch note the obligations for humans at each time phase, it appears
to fall within the human-centric approach noted above. It also reaffirms existing IHL obligations on both individual commanders and States,
such as for States to comply with Article 36 weapons reviews. While “meaningful human control in the wider loop” still governs the “wider
targeting process”, the working definition does not mention meaningful human control. As such, marrying Dutch support for meaningful human
control to the working definition may be difficult once other States enter the discussion on the definition. The Dutch definition is very narrow,
limiting the discussion to systems that select and engage targets without human intervention and cannot be stopped by humans. It seems to
imply, then, that weapon systems that select and attack without human intervention, but could be recalled or stopped, would not be
autonomous weapons. This may restrict the label of “autonomous weapon system” to very few systems, such as swarms or autonomous
submersibles without communications. Yet, as the Dutch government notes in its working paper, “even if it became technologically feasible,
there seems to be no reason why a State would have the ambition to develop a weapon system that is intrinsically not under human
control.”35 Though, if States are developing and launching systems that cannot be stopped, it would seem that at minimum a large degree of
control is lost. The concept of “wider loop” could benefit from further conceptual clarity, as the paper presumes a “narrow loop”, yet does not
describe which tasks are delegated to the system in the “narrow loop”. The Dutch paper notes a “prominent role for humans” in programming
target characteristics, target and weapon selection, elements of planning and assessment of potential collateral damage, as well as Battle
Damage Assessment. There will be antecedent design decisions made by humans, and there will be a decision by someone to deploy an AWS.
There is no technological impediment that ensures that other decisions noted as part the ‘wider loop’ will continue to be made by humans in
the future. For example, if a system can choose—that is select—and attack a target without human intervention, the system will require various
navigation, planning, sensing and engagement-related capabilities. As an example, the current F-35 already has limited local battle damage
assessment capabilities for it to be able to function with its pilot. Finally, it is unclear how the Netherlands would like to address governance of
AWS. On the one hand, it states that there are no new ethical concerns for fully autonomous systems under meaningful human control in the
wider loop. On the other hand, the paper does not explicitly reference that fully autonomous weapon systems without meaningful human
control require regulation. Rather, they state that they do “not support a moratorium on the development of fully autonomous weapon
systems”, citing the difficulty of regulating the dual-use nature of AI. Government of France “Lethal
autonomous weapons are
fully autonomous systems. [...] LAWS should be understood as implying a total absence of human
supervision, meaning there is absolutely no link (communication or control) with the military chain of
command. [...] The delivery platform of a LAWS would be capable of moving, adapting to its land, marine or
aerial environments and targeting and firing a lethal effector (bullet, missile, bomb, etc.) without any kind
of human intervention or validation. [...] LAWS would most likely possess self-learning capabilities .”1 The
French working definition in part circumscribes areas of what LAWS are not. They are not: • existing automatic systems; • linked in any form of
communication or control to “the military chain of command”; • supervised in any way, or capable of “human intervention or validation”; •
liaised with “the weapons system”; • able to provide “permanent and accurate situational awareness and the operational control” to the
commander; or • predictable. The benefits of the French approach are in the specificity of which types of systems ought to be considered as
“autonomous”, while also indirectly providing a fuller account of what it takes “autonomy” to mean. By precluding automatic systems from
discussion, and by definition any systems that are non-lethal or less-than-lethal, the definition suggests a bright line distinction for autonomy.
Systems that are pre-programmed to act in a particular manner without any freedom of adaptation, variation or discretion would be considered
as automatic, not autonomous. Furthermore, the definition and accompanying discussion also hints at what may be required for the “selection”
of targets. “LAWS” it suggests “would most likely possess self-learning capabilities” because the complexity and diversity of potential military
scenarios could not be “preprogrammed”. The system would need to “learn,” and the “delivery platform”, which could ostensibly be separate
from other weapon system components, “would be capable of moving, adapting, […] and targeting and firing a lethal effector.” This learning,
France suggests, would mean that “the delivery system would be capable of selecting a target independently from the criteria that have been
predefined during the programming phase, in full compliance with IHL requirements.” This wording, however, implies that only systems that
continue to learn once deployed would be considered autonomous, and that any other systems possessing machine learning but not continuing
to learn once deployed would be considered as automatic. The 2016 French definition focuses on the far end of the autonomy-capabilities
spectrum, excluding, for example supervised “autonomous” systems. In the French paper, any supervision—even of a system that can act
independently and without human intervention—is excluded from the concept and definition of autonomy. Supervision necessitates some form
of communications link (whether unidirectional from user to the object, or bi-directional with the object being able to communicate to the user
as well). While it is certainly feasible that some forms of autonomous systems will operate without a communication link (at least in some
circumstances), some might claim that this may prove an overly restrictive requirement for a system to be considered “fully” autonomous.
Additionally, it appears that systems that operate for extended periods without communication but may “check in” with commanders would be
excluded from being autonomous, as would any systems that have multiple modes, or continuums, of autonomous behaviour. The definition
may preclude consideration of systems that may be comprised of many subcomponent parts or munitions, each of which is not deemed
“autonomous” in isolation, but by their use together the emergent behaviour appears to be autonomous. In these cases, there may be a “total
lack of human supervision” at the time of attack, but not during any planning or initial deployment stages. For example, a swarm of micro-
drones may fall under the heading of “automatic” in this definition, but acting in concert they may exhibit emergent behaviour. It is unclear
whether those types of systems or modular weapons systems would qualify as AWS under the French definition. France’s definition uses the
phrase “lethal autonomous weapon system”, implying that the weapon system must be directed towards human targets, as it is a lethal
weapons system, and therefore not applying to anti-materiel weapons, countermeasure systems, or non-kinetic systems. It does not address
whether permissibility of such lethal systems may rest on whether they are for purely defensive purposes, such as perimeter defence. France’s
definition raises some crucial issues about machine learning and design choices. It is true that learning systems are unpredictable, in the sense
that they may learn something unforeseen. However, it is a design choice as to whether learning is frozen prior to deployment. Furthermore,
the notion that a self-learning system will select targets “independently from the criteria that have been predefined during the programming
phase” is not inevitable. Learning systems are trained on a set of data, and how that learning takes place and the technical specifics that go
along with it, may entail that the system cannot “select” new targets outside of the training data. It may attempt to fit new knowledge into its
model of the world, but that would mean that it is incorrectly identifying some object. This may be due to some unknown relations in the
training data and the system was not validated on a set of data previously unseen, or it may be due to uncertainty. What is unpredictable is
how the system learns in a given model, and how it will extrapolate that learning to new environments. Finally, the French approach includes
two restrictive requirements: First is how “full autonomy–and the absence of liaison with the weapons system—contradicts the need for
permanent and accurate situation awareness and operational control.” Further clarity would be beneficial as it is unclear how a weapons
system could not liaise with itself. If a weapon system is a combination of one or more weapons with all related equipment, material, services,
personnel, and means of delivery and deployment required for self-sufficiency, then it will by definition “talk to” or link with itself. Second is the
emphasis on the “total absence of human supervision”. States and militaries already have the option to deploy weapons systems without
human supervision. Fire-and-forget munitions, for example, require no further guidance after launch, and in many instances, do not need to be
observed by a human operator. Likewise, some cruise missiles already possess automatic target recognition software and do not require
guidance, control, or supervision by humans. While most militaries will observe the weapons during flight and upon detonation, this is to keep
commanders aware of battlespace changes and is not a requirement under IHL. International Committee of the Red Cross Autonomous
Weapon System: “Any weapon system with autonomy in its critical functions. That is, a weapon system that can select (i.e. search for or detect,
identify, track, select) and attack (i.e. use force against, neutralize, damage or destroy) targets without human intervention.”1 The ICRC’s
working definition for an AWS takes a functionalist approach. The definition considers the technical, legal and ethical requirements for
“control” and, subsequently, human-machine interaction. This functionalist approach does not prejudice which functions are or are not
problematic. Rather, it states that any system that can select (with whichever capabilities the system requires for selection) and attack (with
whichever means, methods or munitions the system deploys) without intervention by a human operator qualifies as an autonomous weapon
system. In that way, the ICRC definition is quite neutral. It attends to the wider category of AWS, not all of which would be necessarily
problematic or of concern. Moreover, since the definition is without prejudice, it does not claim that a system with autonomy is prohibited per
se. Therefore, a system may permissibly have autonomy in its critical functions, so long as it complies with international humanitarian law
obligations (such as discrimination, proportionality, and precaution). However, to ensure that all new means and methods of war are compliant,
States are required to undertake legal reviews under Article 36 of Additional Protocol I to the Geneva Conventions. Given this obligation, the
ICRC’s definition provides further, though secondary, support for Article 36 obligations and also subsequent obligations for additional life-long
testing and certifications for any AWS. As the ICRC notes: The ability to carry out [an Article 36] review entails fully understanding the weapon’s
capabilities and foreseeing its effects, notably through testing. Yet foreseeing such effects may become increasingly difficult if autonomous
weapon systems were to become more complex or to be given more freedom of action in their operations, and therefore become less
predictable.36 As the ICRC states in its 2016 working paper, “a certain level of human control over attacks is inherent in, and required to ensure
compliance with, the IHL rules of distinction, proportionality and precautions in attack.” By noting that States have obligations to comply with
IHL, and that militaries cannot field weapons out of control, the ICRC’s approach urges States to consider human–machine interaction and
permissibility of delegating particular tasks or combinations of tasks within the targeting cycle. “From the ICRC’s perspective, a focus on the role
of the human in the targeting process and the human machine interface could provide a fruitful avenue for increasing understanding of
concerns that may be raised by autonomous weapon systems, rather than a purely technical focus on the ‘level of autonomy’.” The ICRC
definition does not address many of the questions related to the difference between “automatic” systems versus “autonomous” ones, nor does
the definition discuss weapons systems that may have various “modes” that can increase autonomous functionality (adjustment). Since its
definition is without prejudice and inclusive, it may encompass systems with varying modes, or even ones that may have emergent capabilities.
The ICRC’s language does not provide a definition of autonomy, and so it may want to include automatic and autonomous systems together in
the definition, or merely exclude the word autonomous altogether, particularly since the definition is without prejudice to the regulation or
prohibition of the class of weapons systems. However, if one were to include both automatic and AWS in a definition, without prejudice, then
one would have to have some form of agreement on all the critical functions of a weapons system. Implicitly, then, States would recognize that
such critical functions would be the same in both automatic and autonomous systems, but that due to technical difficulties in defining exact
limits or thresholds, the functions could potentially change in kind or in degree. This would remove the need to define levels of autonomy
altogether. Finally, the wording of “select” in “select and attack” may suffer from circularity. The ICRC includes “detect” and “select,” as well as
other capabilities (e.g. identify, track) to explain what it means by “select.” However, each of these terms are conceptually different, though
they may require the same or similar hardware and software technologies. Detecting a target is to sense its presence, but to select it is to
choose among potential target objects. Depending upon how one defines “select”, one could mean that a human “selects” all the target
signatures for a target library and the machine merely matches signatures to the library. Or one could mean that selection occurs at the time of
attack and the system is choosing among an array of pre-selected targets. If we define select in the first sense, then rarely—if ever—will any
system truly select a target. If we define it in the second sense, then the scope is much broader. Government of Switzerland
Autonomous Weapon Systems: “Weapons systems that are capable of carrying out tasks governed by IHL
in partial or full replacement of a human in the use of force , notably in the targeting cycle.”1 The Swiss
government’s working paper suggests a “compliance-based approach” to AWS. This definition seeks to push forward the thinking on
autonomous weapons by being as inclusive as possible in the boundaries of what may be considered an AWS. Additionally, the definition
expands the scope of potential systems for consideration by not only remaining silent on whether the system is lethal, non-lethal or less-than-
lethal, but also whether and to what extent any particular task is carried out by a system. The definition and the working paper that supports it
also does not “prejudge the appropriate regulatory response” for AWS. The strength of Switzerland’s approach lies in its inclusivity and its
flexibility, as well as how it couples the notion of autonomy to the accomplishment of particular tasks. In terms of inclusivity, the Swiss proposal
is explicitly sensitive to “facilitating compliance” and so it encourages the identification of “best practices, technical standards and policy
measures” that help to “complement, promote and reinforce” international obligations. Moreover, the flexibility of the Swiss concept is that it
can account for some of the most pressing questions related to AWS, such as what ought to be included in the definition, as well as whether
autonomy exists as a dichotomy (automatic or autonomous) or as a continuum. Because the definition looks to “the partial or full replacement
of a human in the use of force” it requires States to look at the targeting cycle as a compilation of related tasks. By requiring States to make
explicit the assemblage of component parts or tasks in the targeting cycle, whether by human or machine, it may open the door for a variety of
kinds and combinations of systems for review. Task-based analysis could then incorporate a variety of subcomponent parts, teams, or
integrations. This tasked-based analysis, moreover, could provide answers to questions pertaining to whether the system is offensive or
defensive, anti-materiel or anti-personnel, as well as which functions are “critical” to the task at hand. Functions need not necessarily relate to
engagement-related functions, but could also relate to decision aids embedded in weapon systems. For example, as the Swiss paper noted, if
an “AWS is expected to perform [a] proportionality assessment by itself, that aspect will need to be added to legal reviews of these systems” (§
23). If we define a weapon system as a combination of one or more weapons with all related equipment, materials, services, and personnel,
then, the portion of the system that is completing the task of proportionality assessment, as a decision aid to the operator, who then chooses
to fire or launch a weapon, would be under the need for assessment because that task has been delegated to the component part and not the
human. Compliance with proportionality-related tasks, then, requires analysis of how proportionality subcomponent part functions, as well as
how the output from that component may influence or affect other subcomponent parts (such as through human factors analysis). One may
consider the compliance-based-approach to be too inclusive, as it seems to admit that any weapon system that utilizes information
communication technologies constitute AWS. As the definition states that AWS simply are “weapons systems that are capable of carrying out
tasks governed by IHL in partial or full replacement of a human in the use of force, notably in the targeting cycle” (§ 6). Since most weapon
systems today utilize some form of an information communication technology to complete some portion of a task previously performed by a
human during the use of force, it would seem to imply that almost all present-day systems are AWS, or else that further precision is necessary
to narrow down how ICT use in an autonomous system differs from its use in existing systems. For example, even if a military did not utilize a
precision guided munition that would take over tasks related to detecting a target object and employing force against it, and instead utilized a
“dumb” munition during the target engagement phase of an attack, the problem is that if any subcomponent parts were automated in the
weaponeering or capabilities analyses, then by the definition of a weapon system above, it would appear that due to “the partial replacement
of a human in the use of force, notably in the targeting cycle”, this system could be considered as an AWS. This conclusion, however, may go
against assertions that there are no presently existing AWS. Lastly, the Swiss working paper appears to support the notion that non-kinetic
effects, such as cyber operations, will be eventually recognized as a use of force under IHL. Government of the United
Kingdom “An
autonomous system is capable of understanding higher-level intent and direction . From this understanding and
its perception of its environment, such a system is able to take appropriate action to bring about a desired state . It
is capable of deciding a course of action , from a number of alternatives, without depending on human
oversight and control, although these may still be present. Although the overall activity of an autonomous unmanned aircraft will be
predictable, individual actions may not be.”1 The UK provides its most recent account of autonomous weapons in conceptually robust terms.
Like the ICRC’s position, the UK focuses more on the required cognitive capabilities of an AWS rather than on critical functions. The UK
definition emphasizes: • understanding human intent; • context awareness and sensory perception; and • goal-orientation/purposiveness. The
strengths of the UK approach lie in its forward thinking about the potential abilities of AI at work in the future battlespace. In this way, the UK is
looking toward how humans and AI can interact and share “mental models” of the world and each other. That is, they would both be able to
understand the other’s actions, goals, intent and reasoning. Additionally, the UK’s definition identifies that autonomy is also about decision-
making37 capabilities, and that an autonomous system can take decisions on its own from a variety of courses of action, without human
oversight or control. These could be constrained to particular contexts or tasks, but emphasizing the decision-making capacity is central to a
definition of autonomy. This capacity, moreover, is not affected by the presence or absence of a human observing the actions of the system.
Due to the UK’s insistence on robust cognitive capabilities, the definition seeks to demarcate a bright line between automatic systems, which
are described as pre-programmed and predictable. This seems to also imply a difference in the ability to choose or “decide” targets, where
automatic systems may instead detect previously chosen or designated ones. In the UK approach it is unclear how to test that a weapons
system “understands” and possesses an “appreciation of commander’s intent,” or can understand “why” a human ordered it to do a particular
task or action. These more complex cognitive abilities would require an autonomous weapon to possess the ability to understand concepts.
However, it is unclear whether this is technologically feasible given the current state of the art in AI. Despite significant advances in natural
language processing, getting AIs to learn the meaning of language, as well as nonverbal cues, and social convention, robust “intent recognition”
are yet unresolved in AI research and is likely to remain so for some time. In a related vein, because the threshold for autonomy in this
definition is quite high, it is unclear what is or is not included in the implied definition of “automatic”. Or whether systems that possess various
modes for autonomous action are to be deemed automatic or autonomous, or how they should be evaluated (such as according to their
highest level of capability or on each discrete level). As the Joint Doctrine Publication utilizes the phrase “is programmed to logically follow a
predefined set of rules with predictable outcomes” to describe an automated system, it is unclear whether the UK definition would address
hybrid systems that mix more rule-based algorithms with learning systems, or whether “defined rules” would include systems such as
supervised learning systems. The definition may also imply single or unitary intelligence, and not the functional equivalent of unintended,
unforeseen or emergent autonomy arising from the integration or mixing of various modular “automatic” systems into a SoS. Questions remain
about the extent of learning or goal formation and whether these would be considered automatic or autonomous systems. If top-level goals are
given by a commander, but sub-level goals may be formulated by the system, is this sufficient to make the system “autonomous” or is it to be
regarded as “automatic”? For example, if the engagement-related function is pre-programmed to a particular space in location and time, but
the target-objects within that space are not preselected, is this system deemed autonomous in its target engagement functions but not in its
higher-level cognitive capabilities for understanding context and commander’s intent? The definition does not address the word lethal, or
whether discussion of AWS should include less-than-lethal, non-lethal or non-kinetic weapons. Or whether there are particular types or classes
of weapons that would be deemed non-problematic, such as countermeasure weapons. In other forums, the UK has stated that
countermeasure weapons, such as close-in weapons systems like Phalanx that acquire and engage targets without human involvement, are
exceptions to its position that humans always make acquisition decisions.38 The doctrine states that “the operation of UK weapons will always
be under human control as an absolute guarantee of human oversight, authority and accountability. Whilst weapon systems may operate in
automatic modes there is always a person involved in setting appropriate parameters.”39 Government of the United
States of America “A
weapon system that, once activated, can select and engage targets without further intervention by a
human operator. This includes human-supervised autonomous weapon systems that are designed to
allow human operators to override operation of the weapon system , but can select and engage targets
without further human input after activation .”1 The definition offered by the United States Department of Defense’s 2012
Directive 3000.09 appears to be a functionalist approach to defining AWS, like that of the ICRC. However, because the US definition is
embedded in a Department of Defense (DoD) policy, its purpose is to ground discussions internal to US policy relating to the development and use
of autonomous and semiautonomous systems. As such, the definition needs to be considered with several other elements of the policy
directive, in particular to demonstrate how the US’s functional account is qualitatively different than the ICRC.
states or category of weapons and fiat that the individual nation states do the plan themselves without
international involvement (thereby having fiat do most of the implementation work). There was also a
major international treaty that wanted to ban a certain weapon on that topic, and not many
people did anything with that, so I would expect the same here.
Major efforts for a preemptive ban have come from the Campaign to Stop Killer Robots,
which is supported by all sorts of techies from Silicon Valley like Elon Musk and Steve Wozniak. Their
support for a UN process is a major reason why the focus has been on the UN, as they are the biggest
contributors to the movement. However, this has some problems given that no one can agree on what
LAWS are or how to meaningfully limit their use, production, and/or development. This is partially why
(certain countries like Russia and China also sabotage talks for their own benefit), despite years of talks
at the UN concerning LAWS, little to no progress has been made, which can be expected to remain the
case for the foreseeable future (at least until there is more pressure to limit LAWS when they become more
prominent).
Weapon PICs should be more common than they appear to be from initial readings of the literature,
given that most of it is very one-sided toward AI/LAWS bad, but some affs could come about.
Most aff ground is thoroughly insulated from Covid-19, which is a nice change in comparison to the
last two topics. Biden also won’t change much on the topic since he was all in for drones when he was
with Obama and hasn’t changed much since then. Not much is known about his LAWS policy, however.
I really wanted to do some BMD aff/neg here, but it seems no one who is alive really gets shot at by these
things intentionally, so it probably doesn’t meet the definition of lethal (unless we’re getting into really
off-brand definitions where lethal just means dangerous).
A defense industrial base disadvantage sounded really nice initially, but then I was like, yeah, Covid exists and
messed with the defense industrial base a lot, and LAWS would probably make a lot of the defense
industry lose their jobs anyway. Rest in peace, defense industrial base. Arms sales will forever love you.
I know that this is true on virtually every topic, but besides the trash Trump concession DA, there won’t
really be any politics disadvantages save maybe shutdown.
1.2 Affirmative Topic Analysis
1.2.1 Mines Bad
Here’s landmines 101. Landmines. They blow stuff up. The Mine Ban Treaty (also known as the MBT or
the Ottawa Treaty) exists and tries to make it so fewer countries blow stuff up with landmines. Obama
made it so we “no longer produce, acquire, or replace antipersonnel mines,” except in support of
South Korea (more or less in line with the MBT). In early 2020, Trump said everyone and their mum gets
mines by removing Obama’s de facto ban.
Specifically, this contention is (mostly) about US antipersonnel (AP) landmines, which are supposed to
blow people up (think frag mines in Fallout 3), as opposed to anti-tank landmines or naval mines. Both of
those are likely possible affs, but AP landmines are much more common, and I could only find a card
referencing lethal autonomous weapons in the context of AP landmines. These are said to be especially
bad since even after a war is over, the AP landmines remain. They often aren’t removed for decades
on end, even in fairly developed countries like those in Europe, which still have landmine remnants from
WWII.
This aff has some strategic advantages: (A) Obama more or less did the aff in 2016 when the
US was on a path toward MBT ratification. Trump only reversed the policy in early 2020, so it isn’t as if
the aff is an unprecedented break in civilian-military norms or deterrence doctrines. And (B), most
negative offense doesn’t link given how most literature is in the context of drones or cyberweapons, and
AP landmines are generally agreed to be a bad thing (hence the widely ratified MBT). At best,
teams can PIC out of Korea and say that landmines are key to deterrence along the DMZ, but this
evidence isn’t spectacular and antipersonnel landmines don’t exactly deter WMD. The civilian-military
relations evidence is decent, but again, the Korea offense is easily thumped by Obama. The CMR cards
are specific to acceding to the MBT, which makes it easier to answer the Obama arguments since he
didn’t do that. The MBT only applies to AP landmines, not naval mines (which some of the
evidence in the mines good contention is about) or antitank landmines, so you can spike out of a lot of
offense by only specifying AP landmines or the MBT.
On the flip side, (A) the aff has some topicality issues, as there’s only one good card that concludes that AP landmines are
considered LAWS. Almost all topicality interpretations require LAWS to include some sort of AI, but
landmines are usually super dumb and just chill until some unfortunate person gets blown up. On top of
that, most literature distinguishes LAWS from landmines and says that the MBT can be used as a model
for a LAWS treaty. It’s highly suggested to run this aff on a lay circuit where theory ain’t a thing, or at least
be ready for some theory debates.
(B) the aff also has inherency issues given that Biden is almost certainly set to become president absent
some major shenanigans and will revert back to the Obama-era policy – there are two ways to fix this. I
would suggest either intentionally omitting the evidence in this file that says Biden would effectively
“ban” AP landmines, or focusing on full enforcement and/or ratification of the MBT being key to
international landmine norms. Under Obama, the US technically didn’t have a real ban, rather a de
facto ban with an exception for landmines used in Korea, so it isn’t as if the aff is donezo even with
Biden as president. Even then, he isn’t president for about half the topic, which means the aff is still viable
until then. It is also worth looking into Myanmar antipersonnel landmine affs given that Myanmar
actually uses its landmines rather frequently.
This aff is probably the most topical on the topic due to the evidence being mostly about drone swarms
(it’s like a swarm of insects, but with autonomous robots that can kill you). Drone swarms require AI to
control since there are just so many intricate movements and any small collision could take down the
whole swarm. Another important thing: the tech for this actually kinda sorta exists, unlike most AWS
that debates will be focused on. Militaries can easily do most lethal drone jazz with UAVs and a human,
which is why generic drones-bad arguments suffer from lots of alternate causes.
Drone swarms are super dangerous because they accelerate conflict and are hard to defend
against. This is why
Most of the potential answers to circumvention arguments are in this contention. Pick out the cards that
speak of LAWS bans being successful. Some of them won’t be applicable to other categories of weapons,
but almost all of them can be used for the Robby the (Kinda) Philosophical Robot contention if you hit a
circumvention disadvantage or have it levied against you as a solvency deficit.
Philosophy jazz. It’s here. Go nuts. Maybe Kant was rarely seldom stable.
1.3 Negative Topic Analysis
1.3.1 Mines Good
Some folx say that landmines are useful. They’re probably wrong. Anyway, the arguments here are not
the best, but they exist because, from my understanding, mines was a very common plan aff position read at
certain LD camps that used this topic. Most of the explanation of what AP landmines are and what our policy
on them is was done in the mines bad section above, so there isn’t much to be said here. Obama doing
the aff puts you in a tight spot for uniqueness when it comes to disadvantages save politics or CMR, but
topicality and an inherency push are both viable options. I wouldn’t suggest CMR since Trump obviously
thumps, a lot of the military didn’t like Trump’s decision, and CMR disadvantages are generally terrible.
If affs don’t defend a plan, or they write the plan text so that it includes banning naval mines (and non-
US action, like if the plan were that states should ban mines), the naval mines evidence is more applicable, but
don’t bother reading it if they don’t defend banning those.
1.3.2 Circumvention
Circumvention is likely one of the best core generics on this topic given that there’s no common set of
definitions or agreement on what LAWS are. Most affs require states not so friendly to international law,
like Russia or China, to adhere to the letter and spirit of the ban, as otherwise they’ll continue to develop
LAWS under the guise of SAWS or exclude them from their definition of LAWS. Let it be said that this
circumvention contention is not necessarily about states just not doing the plan; it’s about states finding
loopholes to avoid the spirit of the plan, which is more theoretically justified than the former. This is
functionally different than conceptions of durable fiat, as states will circumvent the meaning of the word
ban in the context of autonomous weapons. Usually circumvention is tricky with theory debates, but it is
entirely justified on this topic given the vague meanings of LAWS and ban, especially in the context of
one another. If affs don’t want to lose to circumvention, they should write a better aff.
This topic is somewhat similar to the China topic and the space cooperation topic, not necessarily in the
sense that affs need to win a say-no debate, but in that the plan needs to be something desirable for
states to do or be designed around something inherently bannable for states not to circumvent it.
Circumvention serves as both a case turn and a disadvantage, since if no one besides the US really
cares about the ban, what does the aff even do besides cede US LAWS leadership? Saying that the US will
also circumvent the plan isn’t the best option either since, again, that takes out the aff. Even though
there are uniqueness issues with the long-term sustainability of US LAWS leadership, because
China will export more LAWS when it doesn’t need to compete with the US for market share, there’s still an
impact to the disadvantage.
This disadvantage isn’t the best against plan affs that defend banning a very well-defined
weapon (which frankly is quite rare on this topic), such as landmines, since those are pretty rigorously
defined in both their lethality and autonomous qualities.
is some philosophical jazz that can be said here like how we are inferior at decision making and robots
like Spock are all logical.
Yes, this is probably a worse idea than anything career war criminal Yoo has to say about war powers.
Given that the counterplan leaves an exception only for the US nuclear C4ISR AI, it solves almost all affs
since they are mostly about drones or cyberweapons. This also means there is a substantially
smaller risk of solvency deficits with AI wars since there is only one military AI that exists: ours. There are
no rival cyberweapons that could hack it since those are banned.
Most of the obvious objections, such as Skynet bad, are answered with the no-first-use plank alongside
the AI only being permitted to use the bare minimum number of nukes, which avoids most risks of
miscalculation or nuclear winter given the small number of bombs being used.
It is an open question whether the counterplan should be publicly revealed or not due to potential ally
concerns, but it is probably key to deterrence postures.
Affs can’t use most of the AI bad arguments integrated into their 1ACs, as no reasonable government is even
close to doing this besides maybe Russia with its Dead Hand, so it will be harder to answer the
internal net benefit given the 1AR time crunch, and mutual exclusivity puts perms in a tight spot.
ADJECTIVE A substance that is lethal can kill people or animals. ...a lethal dose of sleeping pills. Synonyms: deadly,
terminal, fatal, deathly More Synonyms of lethal 2. ADJECTIVE If you describe something as lethal, you mean that it is capable of
causing a lot of damage. Amorality and intelligence is probably the most lethal combination to be found within one personality
And yes, this is what people look at briefs for. Dictionary definitions in the context of whether we can kill
off all the mosquitos. These are weapons. To kill mosquitos.
Affirmative
Contention 1: Mines Bad
Landmines are LAWS
Gubrud 18 [Mark Gubrud, adjunct professor in the Curriculum in Peace, War & Defense at the
University of North Carolina with a PhD in physics from the University of Maryland, and was a
Postdoctoral Research Associate in the Program on Science and Global Security at Princeton
University, 09-13-2017, “The Ottawa Definition of Landmines as a Start to Defining LAWS,” Convention
on Conventional Weapons Group of Governmental Experts Meeting on lethal autonomous weapons
systems, https://reachingcriticalwill.org/images/documents/Disarmament-
fora/ccw/2018/gge/documents/Landmines-and-LAWS.pdf]/Kankee
A working definition of “lethal autonomous weapons systems,” LAWS, suitable for negotiation and treaty language, may be
drawn from the way that the Ottawa Convention banning antipersonnel landmines defines those weapons in general:
‘Anti-personnel mine’ means a mine designed to be exploded by the presence, proximity or contact of a person and that will incapacitate, injure
or kill one or more persons. This definition can be parsed into two halves. The second part describes the lethal effects of the weapon on
persons. The first part is more interesting. It describes the mine as being “designed to be exploded by the presence, proximity or contact of a
person.” Rather than being triggered by their designated and accountable “operators,” mines are triggered
by their victims. In fact, if autonomy is defined broadly as acting without human control, guidance or
assistance, mines should be considered LAWS. Legacy mines are extremely simple in comparison to robots and artificial
intelligence, but there is no reason that more advanced landmine systems would not incorporate artificial intelligence to more
accurately discriminate targets and perhaps tailor responses to varied situations. Surely these would be of interest as potential LAWS.
Building on Precedent A 2005 recommendation of the Norwegian Ministry of Finance’s Advisory Council on Ethics 1 established a precedent
that a networked system with sophisticated sensors and weapons, intended to replace legacy landmine
systems, would be considered Ottawa-compliant only if an operator is always required to trigger its
detonation or other kill mechanism. If throwing a “battlefield override switch” would enable lethal
autonomy, as a fallback for when ‘stuff gets real,’ as they say, then the system would be considered a banned
landmine system. The same precedent must apply in any LAWS ban if it is to be effective in safeguarding peace. Otherwise the “ban”
would only establish a norm that the most dangerous forms of autonomy may be turned on precisely when they are most dangerous — in a
crisis or hot war. The CCW LAWS negotiations may wish to exclude landmines, as already addressed by other protocols and treaties, but can
they exclude any consideration of advanced, networked and AI-driven systems? In either case, we can start with the Ottawa
definition of landmines and generalize it to define LAWS as victim- or target- or condition- triggered weapons, in contrast
with weapons triggered by humans: ‘Lethal autonomous weapons system’ means a lethal weapons system
triggered by a target or condition, rather than by a human operator. The principal advantage of this definition is
that it avoids unobservable concepts such as target selection and “critical functions” in the targeting cycle,
and is framed instead in terms of the system’s observable behavior. Triggering
Recently, Trump broke with Obama’s policy, in line with the Mine Ban Treaty, of
stopping the production or use of virtually all landmines. This will cause authoritarian
states to renege on the MBT and use landmines indiscriminately
Stohl 20 [Rachel Stohl, Vice President and directs the Conventional Defense program at the Stimson
Center, 2-10-2020, "The US Just Gave Greater Legitimacy to Landmines," Inkstick,
https://inkstickmedia.com/the-us-just-gave-greater-legitimacy-to-landmines/]/Kankee
In a move that further cements how out of step the Trump administration is from its allies and international
norms and standards, the administration has released a new US landmines policy allowing the production and
use of antipersonnel landmines for future conflicts. The policy, which rolls back decades of US practice and
approach, isolates the United States and puts American servicemembers and civilians around the globe at
risk. The Trump administration policy allows for the use of “non-persistent” landmines (those that have self-destruct or self-deactivation
mechanisms) in any area, not limited by geographic location. This means landmines can be used by US forces – with a decision by
the combatant commanders – in any area where they are deemed necessary. By comparison, the previous policy banned
landmine production and made limited exceptions for their use only on the Korean peninsula. The new policy rescinds Presidential Policy
Directive 37 (2016) which codified the Obama administration’s 2014 landmines policy updates. Specifically, in June 2014, the
Obama administration announced that the United States would no longer produce, acquire, or replace
antipersonnel mines, and in September 2014 it announced that the United States would no longer use landmines
anywhere in the world except for the Korean Peninsula. It also pledged — outside of Korea — not to assist or otherwise
encourage other countries to engage in activities prohibited by the Mine Ban Treaty and to destroy any
landmine stocks not required for the defense of South Korea. Further, in 2014, the United States announced it would
work toward US accession to the international Mine Ban Treaty . The Trump administration’s announcement
walks back from these incremental steps that had better aligned US policy with global standards and
norms against landmines, and came as a surprise to members of Congress, close US allies, and non-governmental
organizations that have worked to rid the world of landmines for decades, as the stigma against their use has grown. More
than 160 nations have now ratified the 1997 Mine Ban Treaty, which prohibits the stockpiling, production, and use of
landmines. For nearly 30 years, the US military has done without landmines and has incrementally moved
closer to joining the majority of the world in prohibiting their use . The United States last deployed anti-
personnel mines in Iraq and Kuwait in 1991 . However, since then, the United States has not used landmines ,
with an exception made for the use of a single antipersonnel mine in Afghanistan in 2002. The reversal of landmine policy is
another example of the United States turning its back on international norms , being out of step with its
allies, and needlessly putting civilians at risk. Although the policy commits to trying to minimize civilian casualties, the nature
of these weapons makes that commitment hollow. The characterization of mines as non-persistent is
artificial, as there have been and will inevitably be failures in the technology. In the field, an anti-personnel
landmine cannot distinguish between a soldier and a child . It doesn’t matter if that mine will self-
destruct in less than a month or just a few days. While active, these weapons are deadly. Numerous outstanding
questions remain about the implementation of this new policy, including from the United States’ closest allies. The European Union
has expressed its disappointment in and concern with the policy announcement, making clear allies were not
consulted nor support the reversal. Moreover, the status of joint operations where landmines are involved
could be jeopardized, undermining US operational abilities and effectiveness despite the claim that this new policy
provides the United States with increased options to counter threats. The International Committee of the Red Cross, which has decades of
experience working with landmine survivors and recovery from the humanitarian catastrophe caused by landmines, took the exceptional
measure of issuing a statement publicly expressing regret at the US decision and re-issuing its call for a global ban on landmines. The stigma
against landmines is real and the threat to civilians from these indiscriminate weapons is tremendous. The
US announcement gives a
green light to nefarious governments to use landmines with impunity and cover to justify their actions.
In effect, the Trump administration has
given other governments permission to act with little respect for
international law and to remain outside growing norms of international behavior. The new landmines
policy further isolates America from the rest of the world and misaligns the US from traditional partners and
those most closely relied upon to counter threats to national and international security. This decision is yet another
example of the United States, under the Trump administration, defining its own rules and ignoring global standards of behavior.
This spills over to cause rogue actors and great powers to resume deploying
landmines and destroys the MBT
Axworthy and English 20 [Lloyd Axworthy, former foreign affairs minister in the Jean Chrétien
government, and John English, Canada’s special ambassador on landmines, 02-07-2020, "Opinion: The
Ottawa Treaty: Trump has to be stopped from removing landmine protections," Globe and Mail,
https://web.archive.org/web/20200209030537/https://www.theglobeandmail.com/opinion/article-the-
ottawa-treaty-trump-has-to-be-stopped-from-removing-landmine/]/Kankee
U.S. Defence Secretary Mark Esper used the following rationale: "I think landmines are an important tool that our
forces need to have available to them in order to ensure mission success, and in order to reduce risk to forces.” That specious
reasoning was debunked effectively during the debate on the landmine treaty negotiations in the 1990s, when the International
Committee of the Red Cross, supported by senior U.S. army commanders such as lieutenant-general James Hollingsworth, former U.S.
commander in Korea, pointed out that the weapons were a huge risk to civilians and soldiers alike . This was reinforced by
veterans themselves when the Vietnam Veterans of America Foundation (now Veterans for America), under the leadership of its president
Bobby Muller, was one of the founders in 1992 of the International Campaign to Ban Landmines (ICBL), reinforcing the case that the
military utility of personnel landmines was peripheral but the danger of killing and maiming soldiers and
civilians was extremely high. Fifty per cent of victims are children. The Trump administration’s declaration
ignores those compelling arguments. There is nothing new about his know-nothing approach to such important experiential
evidence or his absence of concern about the risk to people. Neither the President nor his acolytes take into account the
effectiveness and impact of the 1997 United Nations Convention on the Prohibition of the Use, Stockpiling, Production and
Transfer of Anti-Personnel Mines and on their Destruction, known as the Ottawa Treaty, to which 164 countries are signatory ,
the largest membership of any disarmament agreement. The ICBL’s Landmine Monitor Report estimates
the tally of people severely injured or killed from 1997 to the present to be in the range of 150,000 . But
after 1999 the levels dropped to less than 10,000 annually . Major de-mining projects are underway in countries where
landmines corrupt large swaths of land. Figures for 2019 show that Cambodia, where the Canadian Landmine Foundation supports de-mining
and educational efforts, still has millions of mines yet to be destroyed. There is also an upward trend of casualties in places such as Myanmar,
Yemen, Afghanistan, Syria, Mali and Ukraine where there is ongoing violent conflict or invasions. This increase in fatalities and injuries caused
by landmines is covertly supported by both private and governmental arms dealers. That is why the
decision to lift restrictions is so
damaging. It gives licence to rogue combatants around the world, to say nothing of major powers such
as Russia and China, which will now feel free to amend their own no-use policies. This is a dog whistle that
will be heard by authoritarians around the world. The United States did not sign the original Ottawa Treaty
because of Pentagon pressure. But since then, under successive Republican and Democratic administrations, the U.S. has adhered
to the treaty limits and been a major donor to the cause of eliminating landmines. Until now. The Trump administration has
turned its back on the risk of landmines , just as it has on international efforts on nuclear-weapons control, climate-change
adaptations and refugee protection.
Landmines kill thousands every year, especially children. There’s a moral obligation to
sign the Mine Ban Treaty and reduce those deaths
UNICEF 4 [United Nations Children's Fund, UN agency responsible for providing humanitarian and
developmental aid to children worldwide, “Landmines pose gravest risk for children Calling Landmines a
“Deadly Attraction” for Children, UNICEF says Countries That Care About Children Must Stop Producing
Landmines,” UNICEF, https://www.unicef.org/media/media_24360.html#:~:text=Over%2080%20per
%20cent%20of,that%20gave%20rise%20to%20them]/Kankee
Millions of antipersonnel landmines and other explosive remnants of war across the globe pose a vicious threat to
children, who are being injured, killed and orphaned by them long after wars are over , UNICEF said today.
“Landmines are a deadly attraction for children, whose innate curiosity and need for play often lure
them directly into harm’s way,” UNICEF Executive Director Carol Bellamy said, attending the first World Summit on a
Mine Free World in Nairobi. “Landmines kill, maim and orphan children. Countries have a moral responsibility to ratify the
Mine Ban Treaty and rid the world of these devastating weapons .” Over 80 per cent of the 15,000 to 20,000
landmine victims each year are civilians, and at least one in five are children, according to the
International Campaign to Ban Landmines (ICBL). The deadly legacy of landmines far outlasts the conflicts
that gave rise to them. Among the most contaminated countries are Iraq, Cambodia, Afghanistan, Colombia, and Angola. Asia,
for instance, contains some of the most heavily mine-affected countries in the world. Landmines and unexploded ordnance
(UXO) are a danger to children in nearly half of all villages in Cambodia and nearly one-quarter of all
villages in Lao PDR, Bellamy said. Up to 800,000 tonnes of UXO and 3.5 million landmines still cover Vietnam, where
over 100,000 people have been killed or injured since 1975 . Children are at particular risk of injury and
death from landmines and other explosive remnants of war because their small size, unfamiliar shape, and colours
can make them look like toys. Refugees and displaced children returning home after war are in particular
danger of landmines because they are most likely to be unaware of the dangers of playing in or
traversing hazardous areas, Bellamy said. “The cost of playing too close to a landmine is brutal ,” Bellamy said,
citing such things as the loss of limbs, blindness, deafness, and injuries to the genital area as some of the
injuries landmines inflict on children . In part because they are physically smaller than adults, children are
more likely than adults to die from landmine injuries. An estimated 85 percent of child victims of landmines
die before reaching the hospital , Bellamy said. In many cases, landmine injuries occur far from home and
without a parent or caregiver’s knowledge. And when treatment is available, the cost can be prohibitive
for poor families, particularly because children need more care than adults . As they grow, new prostheses need
to be fitted regularly and a child survivor may have to undergo several amputations . Without adequate medical
treatment, children injured by landmines are often pulled out of school, limiting their opportunities for
socialization and education. The discrimination they face limits their future prospects for education,
employment or marriage. They are often perceived as a burden to their families . “Landmines orphan
children,” Bellamy said. “When mothers are maimed or killed, children are less likely to receive adequate nutrition , to
be immunized or to be protected from exploitation. When fathers fall victim to landmines, children are often
forced out of school and into work to supplement family income.” Since the Mine Ban Treaty went into
force five years ago, 143 states have ratified the treaty, which prohibits signatories from using, stockpiling, producing or
transferring landmines. Producing one landmine costs $3, yet once in the ground it can cost more than $1,000 to find and destroy, according to
the ICBL. Despite progress, some of the largest holders of landmines – Russia, China, India and the United States –
have yet to commit themselves to the Mine Ban Treaty . Bellamy called on these countries to join the Mine Ban Treaty,
immediately cease production and do more to assist those whose lives have been disrupted by landmines. “Landmines, meant to be
used against soldiers in war, are devastating the lives of children at peace ,” Bellamy added.
This causes massive structural violence against civilians even if they survive the
landmines themselves, but landmines are less than useless in war
Good 11 [Rachel Good, cum laude graduate of Northwestern University Pritzker School of Law, Spring
2011, “Yes We Should: Why the U.S. Should Change Its Policy Toward the 1997 Mine Ban Treaty,”
Northwestern Journal of International Human Rights,
https://scholarlycommons.law.northwestern.edu/cgi/viewcontent.cgi?
article=1113&context=njihr]/Kankee
The military utility of landmines was debated during negotiations of the 1997 Mine Ban Treaty. While
most of the world’s states
concluded that the humanitarian problems associated with landmines outweighed their limited military
utility, the U.S. continues to justify its refusal to sign the MBT based on the military utility argument.42 Proponents
of landmines posit two main arguments for the weapon’s utility. First, landmines are used to delay or deter an advancing enemy.43 Delaying an
enemy force gives armies additional time for troop protection or movement to another location.44 Landmines can also be used along borders
to deter and protect against enemy invasions.45 Second, armies use landmines to shape the battlefield by forcing enemy troops into vulnerable
positions.46 Once enemy forces are channeled into vulnerable areas, they are more susceptible to attacks by other weapons systems such as
artillery or rockets.47 In practice, however, the military utility of landmines is limited. ¶9 A study issued by the International
Committee on the Red Cross (ICRC) and conducted by a group of active and retired military leaders from nineteen
countries found that landmines have “little to no effect on the outcome of hostilities ” and only “marginal
tactical advantage” in certain specific circumstances.48 The group of military experts gathered by the ICRC asked the simple question of
whether there was empirical data to demonstrate the high military utility of landmines.49 Of twenty-six major conflicts the
experts studied, they failed to find a single case “in which the use of anti-personnel mines played a major role in
determining the outcome of a conflict .”50 Although landmines do have utility in some circumstances,51 they are never
outcome-determinative. ¶10 In the 1991 Gulf War, Iraqi forces laid 9 million mines to delay coalition forces.
Using large-scale mine plows, coalition forces cleared the minefield in only two hours.52 The increased use of
armored tanks, coupled with specialized plows and rollers to clear minefields has decreased the
effectiveness of landmines as a delay tactic.53 Also, minefields constructed to delay or deter enemies pose a risk to
friendly forces. Between 1961 and 1990, twenty-three people, including seventeen U.S. service personnel, were killed in
minefields laid by U.S. forces around Guantanamo Bay.54 After evaluating the delay capacity of landmines, ICRC’s military
experts determined that landmines do slow battles, but battles are won or lost based on leadership and other
materials.55 As for their deterrent capacity, the experts concluded that landmines “have never yet stopped a determined
advancing enemy.”56 ¶11 The same critique about the effectiveness of landmines as a delay tactic applies to their effectiveness as a
channeling tool. In both cases plows and rollers, as well as better automatic weapons and protected vehicles ,
render landmines “redundant.”57 Also, the arguments in support of landmine use assume that enemy forces
are unwilling to accept high numbers of casualties .58 Gen. Alfred Gray, a retired commander of the U.S. Marine Corps
argued against the utility of landmines when he said, “I know of no situation... where our use of mine warfare truly
channelized the enemy and brought him into a destructive power . I’m not aware of any operational advantage from
broad deployment of mines.”59 ¶12 A majority of states have determined that the limited military utility of landmines cannot justify their use
when weighed against humanitarian costs. Long
after their military use is finished in a given region, landmines remain in
the ground to kill and injure civilians . Unlike a bullet, which cannot injure except at the time it is fired, a landmine remains
lethal until it is safely removed from the ground. Not only do landmines that remain in the ground have a costly physical impact,
they also have a psychological and economic impact on affected individuals and communities .60 It is
estimated that there are over 300,000 landmine survivors throughout the world.61 Many of these survivors live in
countries that struggle to meet the basic needs of their population, making it especially difficult to
provide extra services for mine survivors such as medical care or job training .62 ¶13 In communities where
people struggle to sustain themselves, landmine survivors are often seen as a drain on resources
because they are limited in their ability to work and provide for themselves .63 Because landmine survivors
are predominately located in poor areas, they are often stigmatized in their communities for their disabilities.64 This
stigmatization, and the resulting sense of helplessness, leads many landmine survivors to feel depressed
and angry.65 At the community level, landmines can also have a devastating economic impact by making swaths
of land unusable for transportation and trade, farming, herding, or animal grazing. The civilian impact of
landmines goes beyond the immediate physical injury to the individual. ¶14 Humanitarian problems such as those described above
result from every instance of landmine use, because by their nature landmines are weapons of indiscriminate
effect.66 Once a landmine is placed in the ground, there is no way to prevent a civilian from triggering its detonation .
Furthermore, landmines are inexpensive weapons to use but costly to remove. As a result, countless landmines remain
in communities after the cessation of hostilities. This makes them appealing weapons for guerrilla forces because they are
easy to acquire and can be used to depopulate or terrorize poor communities even after the fighting is stopped.67 Because landmines are
inherently indiscriminate, there is no ‘technological fix’ to the humanitarian problems they cause. This determination, coupled with the
weapon’s limited military utility, led 156 states to develop the Mine Ban Treaty banning the use, transfer, and stockpiling of landmines. In an
attempt to comprehensively deal with the humanitarian crisis caused by landmines, the Treaty also created an obligation for parties to clear
mined land and provide assistance to survivors of landmines. III. DEVELOPMENT OF THE MINE BAN TREATY AND THE U.S. ROLE
Landmines also cause loads of ecological damage – they harm ag, soils, and bioD
Ahmed 14 [Imtiaz Ahmed, Senior Research Fellow at the Department of Strategic and Regional
Studies for the University of Jammu, 2014, “Landmines: A Threat to Sustainable Development,” IOSR
Journal Of Humanities And Social Science, http://citeseerx.ist.psu.edu/viewdoc/download?
doi=10.1.1.1069.8858&rep=rep1&type=pdf]/Kankee
2. Ecological dimensions The impacts of landmines on soil, flora and fauna, and people are felt at different levels of the ecological system,
whether the mines have detonated or not. The ways in which landmines cause land degradation are broadly classified into five
groups: access denial, loss of biodiversity, micro-relief disruption, chemical contamination, and loss of
productivity. Access denial. All the studies pointed out that the most prominent ecological issue associated with landmine presence (or fear
of it) is access denial to vital resources. It is estimated that landmines have denied access to or degraded 900,000 km2 of
land, globally. For their military purpose, landmines guarantee that people and their movements are channelled away from strategically
significant sites, and prevent military incursion. But the use of landmines is not by any means confined to military establishments or sites of
military significance. The fear of the
presence of even a single landmine can deny people access to land that they
desperately need for agriculture, water supply or to undertake conservation measures, and for technical teams
engaged in pest control. Landmines are used in large quantities around arable lands in Lebanon, Angola, Mozambique, Cambodia;
pasturelands in the Sinai, Kuwait and Iraq, forests in Nicaragua and the Demilitarized Zone (DMZ) between North and South Korea, coastal
areas in Kuwait and Egypt, borders, infrastructures (bridges, roads, electrical installations, canals and water sources) and nearby commercial
and public centres in Vietnam, Zimbabwe, Eritrea and Ethiopia, and residential areas in Serbia. Access denial was indicated as being able to
retard or stop development activities altogether. When Landmines restrict access to arable or pastoral lands, the people
who depended on those lands are pushed to use or abuse marginal resources, or move into refugee camps or urban
centres, depending on the availability of alternatives. Moreover, declining availability of land was found to increase the
need for practicing more intensive agricultural production systems that rely on heavy application of mechanical,
chemical or biological supplements for production on the safe land. At the most basic levels, some of the ways these practices could
endanger the health of the soil include: 1. Rapid exhaustion of the soil's mineral nutrient stock due to
continuous cultivation with no fallow or rotations; 2. Mechanically intensive agriculture; and 3. Excessive
use of chemical supplements and their consequent accumulation in the ecosystem. On the other hand, access denial has been
observed to have 'positive' effects when the mined areas become 'no-man's land'. That is, during limited anthropogenic interference, flora
and fauna get a chance to flourish and recover. Formerly arable and pasture lands in Nicaragua were turned into forest and forests remained
undisturbed after the introduction of landmines. However, it needs to be pointed out that these benefits would only last as long as animals or
tree roots do not detonate the mines. In addition, in land of lesser quality, long
fallow periods could potentially end up creating or
exacerbating loss of productivity. Loss of biodiversity. The impact of landmines on different plant and animal populations is
considered to be a foremost environmental concern, next to access denial. As long as they receive enough mass to activate them, landmines
do not differentiate between human beings and other life forms . Landmines can threaten biodiversity in a
given region by destroying vegetation cover during explosions or demining , and when animals fall victim.
Landmines pose an extra burden for threatened and endangered species . Landmines have been blamed for
pushing various species to the brink of extinction. Although it is widely believed that landmines destroy
vegetation and kill untold numbers of animals every year , this is unfortunately one of the areas where there is hardly any
numerical data to determine how many individuals of a species or where and how they fall victims. The very little data that exists on animal
population is also highly biased towards domesticated animal and little is known about the impacts suffered by wild populations. Some of the
animals that regularly fall victim to Landmines include brown bears in Croatia; barking bear, clouded leopard, snow leopard and royal Bengal
tigers and Kashmiri stag (Hangul) in J&K (India) and gazelles in Libya. Landmines are accused of threatening extinction of elephants in parts of
Africa and in Sri Lanka, and leopards in Afghanistan. Additionally, almost four percent of the very rare European brown bears were reported
killed by landmines in Croatia between 1991 and 1994 alone. Mines have killed one of the very few remaining mature, male silver backed
mountain gorillas in Rwanda and virtually eradicated gazelles from Libya. With regard to domesticated animals, one study examined the social costs of
landmines in 206 communities in Afghanistan, Bosnia, Cambodia and Mozambique and reported that more than 57,000 animals were killed
due to landmines, over 35,000 of which belonged to the Kuchi Nomads in Afghanistan. Another study also reported that more than 125,000
camels, sheep, goats and cattle were killed in Libya between 1940 and 1980. Many of the biodiversity loss hotspots of the
world are severely affected by landmines. Nachón referred to biodiversity data from the World Conservation Monitoring
Centre and identified a large number of species that are threatened or endangered due to many factors, including the presence of landmines in
their habitat or migratory paths. Moreover, landmines
are used for poaching endangered species of wildlife , and
refugees and IDPs further contribute to loss of biodiversity when they hunt animals for food or when they
destroy their habitat in order to make shelters for themselves. Landmine impacts on plants are even less documented.
Landmines affect plant populations by causing slow-death of trees when they sustain shrapnel injuries or
abrasions of their bark or roots when fragmentation mines detonate , providing an entry site for wood-
rotting fungi. In regions where arable and pastoral activities turn out to be impossible due to landmines ,
forests become the last resort for food, fuel wood and shelter. Valuable forest products, including fruits
and timber, from previously avoided sensitive, endangered ecosystems are exploited by affected populations
looking to start new livelihood somewhere else. Moreover, wood destined for lumber becomes unsafe and troublesome when metal fragments
are embedded in it. Demining activities also influence biodiversity in many ways. Domesticated animals are frequently used for
mine clearance purposes, especially dogs, sheep and cattle. These animals are let loose in minefields as an easy and fast
means of clearance. Furthermore, demining operations demand clearing all the vegetative cover, including forests,
from mine-suspected areas, usually by using fire. The result is removal of litter that plays crucial roles in
infiltration, protecting soil from erosion and the impact of rain drops , and providing organic matter that is important
to biota and stability of the soil's structure. Micro-relief disruption. Landmine detonation causes damage to the soil's stability by
shattering the soil structure, causing local compaction, and increasing the susceptibility of soil to erosion.
Deterioration of soil structure due to explosion, compaction or burning can be a slow and insidious progression, but their combination
results in long term changes that have significant , sustained impacts on moisture availability , erodibility
and productivity of the land. When a 250 gm APL detonates, it can create a crater with a diameter of approximately 30 cm. The
explosion is described as having the ability to facilitate removal and displacement of topsoil while forming a raised circumference around the
crater and compaction of soil into the side of the crater. The level of the impact can vary depending on the physical conditions of the soil; the
type and composition of the explosive and how many landmines detonate in the vicinity. The impact is greater in dry, loosely compacted and
exposed desert soils but is less severe in humid soils that have vegetation or physical protection. Susceptibility to reduced infiltration, flooding
and erosion is also higher in areas with steep slopes. In such cases, transported soil increases sediment load of drainage systems. When
soil
is compacted due to external forces, its resistance to penetration by plant roots and emerging seedlings
increases, and the exchange of oxygen and carbon dioxide between the root zone of plants and the
atmosphere is also [hampered] retarded. Generally, as long as repeated explosions do not occur in the same location, the crater can
develop into a stable element of the landscape when runoff or wind erosion washes soil to its bottom. In warm and humid regions, however, it
has been reported that the crater may hold water, turn into a marsh and serve as breeding ground for
mosquitoes. Around 20 percent of the respondents highlighted that demining activities result in micro-relief disruption
by affecting the soil's biochemical and physical quality. A particularly harmful practice reported after the Gulf War is the
use of fuel explosive bombs. These bombs are dropped from the sky, creating heavy shock-waves that are
propagated into the ground seeking to cause buried landmines to detonate . In addition, a lot of organic
pollutants get into the soil during this aerial demining process . Fires are used to facilitate demining, thus
modifying the amount, form and distribution of biomass, organic matter and essential nutrients within the
soil profile. The high temperature of burning causes more rapid than 'normal' humus loss. Similarly, the temperature increase can
cause pH of soil to become more alkaline and nutrient elements may be converted into more bio-available forms, or
are lost from the soil by volatilization into the atmosphere , and transfer of ash with water or wind erosion .
Chemical contamination. Landmines interfere with the ability of the soil system to serve as a geochemical sink for contaminants. Depending on
density of mines per unit area; the type and composition of the mine; and the length, amount and degree of exposure of resources to the
mines, landmines can pose a serious pollution threat through the accumulation of non-biodegradable toxic waste from
casings or unexploded remnants. Moreover, after conflicts, many regions are left with a massive volume of exploded ordnance and UXO that
ruin the aesthetic quality of the area. Landmines are made of metal, timber or plastic casing and are filled with 2,4,6-trinitrotoluene (TNT) or
hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX or Cyclonite). Landmines can also introduce other non-biodegradable and toxic waste,
such as depleted uranium. These compounds have been known to leach into soil and underground water as
the casing of the mines disintegrates. Specific contaminants have unique consequences; the effect depends on many complex factors.
In laboratory experiments with rats, TNT and RDX were found to be carcinogenic, causing tumours in the bladder and male reproductive
systems, and congenital defects, skin irritation, and disruption of the immunological system. Landmines, to a lesser extent, also contain
additional compounds including iron, manganese, zinc, chromium, cadmium, nickel, copper, lead and mercury, of which iron, manganese, zinc,
copper and nickel are essential micronutrients in the plant soil system. Soil contamination with heavy metals is observed in
areas surrounding mines when the mines decay or explode. In extreme cases, contaminations can be detected in as much as 6 Km from the site
of an explosion. Even higher concentrations of the heavy metals are found at the centre of the explosion site. Many of the organic and
inorganic substances and compounds that are derived from the explosives are long lasting, water-soluble and toxic even in
small amounts. The contamination can be delivered directly or indirectly into soil, water bodies, microorganisms and plants with drinking
water, food products or during respiration. These pollutant compounds can leach into subterranean waters and bio-accumulate in the organs of
land animals, fish and plants. Their effects can be mortal to some mammals and aquatic macro and microorganisms by acting as a nerve poison
to hamper growth. A significant landmine related chemical contamination threat is lead toxicity. Lead can have continuum of toxicity, meaning
it can be harmful even at very small amounts, and its effects rise with increasing concentration. In human beings lead (Pb) toxicity can result in
kidney damage, sterility, miscarriage, and birth defects. Moreover, high levels of mercury (Hg) can result in neurological disorder; while
cadmium (Cd) can cause kidney failure and osteomalacia (softening of bones) and multiple bone fractures. Loss of productivity. Landmines
affect resource productivity whether they have detonated or not. Low availability of land (access denial), degradation of the
soil (micro-relief disruption, chemical contamination), combined with loss of flora and fauna diversity add
up to land degradation, a reduction in the productivity of previously productive land. Landmines have restricted agricultural
production on a land area equivalent to 6 percent of the 1474 million ha of land cultivated globally. Landmines are
partly responsible for decreased agricultural productivity and lowered food security in mine-affected
countries. In 2000, it was reported that in the absence of the landmine crisis the productivity in Afghanistan could
have increased by 88-200 percent, 135 percent in Cambodia, 11 percent in Bosnia and 36 percent in Mozambique
compared to pre-war levels. As agricultural and other important lands are taken out of production, the socio-economic state of affairs of the
segments of population that were once self-sufficient suffers. When people cannot get access to their land resources because it is no longer
safe to enter, a whole host of problems is created. Land degradation leads to many complex socio-politico-economic problems, including but
not limited to, exploitation of available resources beyond their ecological carrying capacity, unemployment, poverty, social marginalization,
desperation, and aid dependency. III. Socio-politico and Economic Dimensions of the Ecological Crisis
There are no safe landmines – duds will still kill civilians and US troops for hundreds of
years
Horton 20 [Alex Horton, reporter on military affairs currently working for the Washington Post with a
BA in English from Georgetown, 01-01-2020, “Why the land mine, a persistent killer of civilians, is
coming back under Trump,” Washington Post, https://www.washingtonpost.com/national-
security/2020/02/01/land-mines-trump/]/Kankee
Nonpersistent mines would lead to a lower risk of harm to civilians, the Pentagon said, but the agency did not respond to a follow-up question
asking how that number was calculated. Experts second-guessed that confidence and have rejected the notion of a
“smart” mine as risk-free or danger-free to civilians. “Like any microchip-based electronic device, there are going to
be failures,” said Mark Hiznay, the associate arms director for Human Rights Watch . Hiznay speculated that the
Pentagon estimate was conjured by calculating electrical component failure rates, not actual mine
deployment studies. Other evidence points to an imperfect weapon that posed a danger to U.S. troops. Nearly 120,000 “smart,”
nonpersistent mines were used in the Gulf War, which was the last time the United States used land mines in warfare outside
a single use in Afghanistan in 2002. Even though the Pentagon suggested a low dud rate , anti-personnel and
antitank weapons that failed to self-detonate littered Kuwait , a 2002 Government Accountability Office report said.
Nearly 2,000 duds were uncovered by contractors working in one sector alone out of seven , the GAO report
concluded. “Every dud is dangerous,” Hiznay said. And because of uneven and chaotic battlefield reporting, it is possible some U.S.
casualties attributed to enemy land mines and explosives were caused by these munitions, the GAO report
said. Newer land mines were developed to mitigate future harm to civilians. The Spider Networked Munition, for instance, includes a “human in
the loop” that allows troops to trigger the explosives and show locations on GPS. But it is unclear if the land mines the Pentagon
has authorized for use will include such an oversight ability. The agency did not address a question about that capability. What
does this mean for civilians and the future of war? Pentagon officials have said commanders need the option to use anti-personnel mines to
take on conventional adversaries such as China and Russia. But one problem that may arise: All
NATO partners of the United
States have signed onto the ban, potentially creating problems in theoretical coalition missions , said Stohl of
the Stimson Center. And future battlefields may be so dynamic that the mines might not disarm in time before
they harm whoever is nearby. “When it’s active, it’s not distinguishing between a civilian and a legitimate
target,” Stohl said. That alarm has been sounded by commanders for decades. ‘‘What the hell is the use of sowing [anti-
personnel mines] if you’re going to move through it next week or next month ?” former Marine Corps Commandant
Gen. Alfred M. Gray Jr. said in 1993. The Pentagon’s embrace of land mines also puts it at odds with a key diplomatic State Department
program, which has worked to find and destroy remnant explosives in 100 countries since 1993 — a $3.4 billion effort. But it’s clear that
civilians worldwide will be haunted by the threat under their feet for decades to come. In Vietnam alone,
leftover land mines and other explosives dropped by the United States have killed 40,000 people since the
end of the war, and it may take 300 years for all remaining munitions to be cleared . In other words, the last
Vietnamese person to be killed by an unexploded U.S. munition probably hasn’t even been born.
Biden means that landmine restrictions are inevitable, but banning landmines is key to
the treaty and global disarmament efforts
Abramson 11-11 [Jeff Abramson, senior fellow at the Arms Control Association, 11-11-2020, "Biden
should embrace the humanitarian disarmament agenda," Defense News,
https://www.defensenews.com/opinion/commentary/2020/11/11/biden-should-embrace-the-
humanitarian-disarmament-agenda/]/Kankee
In his first speech upon being declared the president-elect, Joe Biden
flagged making “America respected around the
world again” among top-line priorities. Doing so must include putting the United States in alignment with
its allies and an increasing global consensus on weapons use. Much of the agenda for doing so is advanced
by redefining security as based on human needs — a necessity made more clear each day by a global pandemic for which
kinetic weapons provide no defense. Fortunately, the “humanitarian disarmament” approach provides a good
framework and blueprint. More than 250 civil society organizations have signed a global letter laying out how a
focus on weapons use-related prevention and remediation can be helpful in moving to a better post-pandemic world.
Within this framework are existing treaties recognizing that certain weapons are indiscriminate and
should no longer be used because of the human suffering they cause . Biden can start with the Mine Ban
Treaty. Early this year, the Trump administration revised U.S. antipersonnel landmine policy to consider using
those weapons anywhere in the world. As a candidate, Biden indicated he would return to the earlier Obama-Biden
approach, which instead set the goal of eventual U.S. accession to the treaty. He should go further and
simply recognize that these weapons already have no place in the U.S. arsenal. We have neither used them
since 1991 (aside from one incident in 2002), exported them since 1992, nor produced them since 1997. All our
NATO allies and a total of 164 countries have foresworn their use via joining the treaty. Similarly, a total of 110
countries — among them the vast majority of our NATO allies — are now party to the Convention on Cluster Munitions, which bans these
namesake weapons that are currently used to international outcry in harming civilians in Nagorno-Karabakh. The last significant U.S. use of
cluster munitions was in 2003 (aside from a single attack in 2009). It’s time to recognize these too have no place in our
arsenal. Biden could and should be particularly bold on the weapon that has a history of non-use in conflict since 1945, whose sustainment
cost over the next three decades could run into the trillions of dollars, and for which any use would have devastating humanitarian
consequences: nuclear weapons.
US ratification bolsters the Ottawa Treaty and US standing – anything less than the
MBT won’t be credible internationally
Good 11 [Rachel Good, cum laude graduate of Northwestern University Pritzker School of Law, Spring
2011, “Yes We Should: Why the U.S. Should Change Its Policy Toward the 1997 Mine Ban Treaty,”
Northwestern Journal of International Human Rights,
https://scholarlycommons.law.northwestern.edu/cgi/viewcontent.cgi?
article=1113&context=njihr]/Kankee
VI. CONCLUSION: OBAMA SHOULD JOIN THE MINE BAN TREATY 52 After leaving office, President Clinton admitted that one of
his biggest regrets in office was his administration’s failure to sign the MBT.226 Why? Seemingly, Clinton realized that landmines’ limited
military utility does not outweigh their humanitarian effect . This understanding reflects U.S. policy before and since the
formulation of the MBT. The U.S. does not use, produce, or trade landmines. It reserves the right to, but does not use landmines with self-
destruct or deactivation mechanisms. Landmines are not necessary for the protection of South Korea , nor can they
be used in Iraq or Afghanistan without those countries violating the MBT. Finally, the U.S. has provided more
humanitarian funding for mine action programs than any other nation . President Obama also has enough
political support to join the Treaty. In May 2010, sixty-eight U.S. Senators sent President Obama a letter in
support of the U.S. joining the MBT.227 The U.S. refusal to join the Treaty rests solely on the U.S. military’s
desire to keep its stockpile of landmines , which it does not even use. The Obama administration should back the Mine
Ban Treaty because it is in the best interest of the United States. 53 The United States’ failure to join the Mine Ban Treaty
illustrates American exceptionalism at its worst. Whereas the majority of states understood that the
humanitarian situation caused by landmines warranted the strongest possible treaty, the United States
refused to join unless other states accommodated its continued use and stockpile of landmines. When its
demands were rejected, the United States chose to bow out of the process rather than concede to middle-power
states.228 Since then, the U.S. has consistently developed policies in an attempt to stay in line with the international norm developed by the
MBT.229 As long as the U.S. stays outside of the MBT, its landmine policies will be regarded as
inadequate. 54 In the years since the U.S. refusal to join the Treaty, it has acted in an increasingly unilateral manner. The Bush
administration’s withdrawal from the Anti-Ballistic Missile Treaty and its rejection of the Kyoto Protocol, the International Criminal Court, and
the Mine Ban Treaty were regarded by the international community as acts of an isolationist nation.230 Along with the U.S.’s actions in Iraq and
Afghanistan, the U.S. established a clear doctrine of global domination and exceptionalism.231 President Obama has articulated a plan of global
reintegration and has worked to restore the U.S.’s reputation as a cooperationist nation.232 Joining
the MBT would signal to the
world that the Obama administration is serious about working with the international community . Since the
U.S. has long opposed the MBT, the international community may regard U.S. ratification of the Treaty as an
apology for its recent exceptionalist policies. Finally, the U.S. landmine policy is so close to the requirements of the MBT that
joining the Treaty would not require a drastic shift in practice. The Obama administration should correct a lasting mistake of the
Clinton administration by joining the MBT, and in doing so, indicate to the world community its desire to reengage and
repair relationships.
US landmines have no use in deterring Korean conflict – loads of other things fill in
Good 11 [Rachel Good, cum laude graduate of Northwestern University Pritzker School of Law, Spring
2011, “Yes We Should: Why the U.S. Should Change Its Policy Toward the 1997 Mine Ban Treaty,”
Northwestern Journal of International Human Rights,
https://scholarlycommons.law.northwestern.edu/cgi/viewcontent.cgi?
article=1113&context=njihr]/Kankee
The U.S. has not used landmines in almost twenty years, yet the U.S. reserves the right to use landmines in defensive operations
in countries like Korea, Iraq, and Afghanistan. First, Iraq and Afghanistan are States Parties to the MBT , and under
Article 1, they may not “assist, encourage, or induce” any other states in using landmines on their territory or any
other states territory. As such, the U.S. cannot use landmines in either country without those countries violating
the Treaty.216 Second, the mines placed along the DMZ are South Korean, not American, mines and do not
affect the ability of the U.S. to join the Treaty.217 Therefore, the U.S. argument in support of retaining landmines, which rests on the
possibility of the U.S. using mines in Korea as a defensive measure against an invasion by North Korea, is invalid. 50 Landmines are not
an effective measure against a possible invasion of South Korea . The former commander of U.S. forces in
Korea, Lt. Gen. James Hollingsworth rated the utility of landmines in Korea as “minimal.”218 Hollingsworth
never relied on landmines to “make much of a difference” because “[t]o be blunt, if we are relying on these weapons
to defend the Korean peninsula, we are in big trouble.”219 The purported purpose of landmines in Korea is to stall a North
Korean invasion and provide time for South Korean and U.S. forces to mobilize.220 However, there are numerous other tactics
including tank traps, trenches, and barricades that can delay a North Korean invasion.221 Also, it is not
likely that U.S. and South Korean forces will be caught off-guard by a North Korean ground invasion .
U.S. satellite and spy technologies monitor North Korean military activity and would provide advance
warning of any mobilization for a ground invasion, so U.S. forces would have ample opportunity to prepare .222
Further, U.S. military strategy is to first employ precision air and missile attacks, which could halt a Korean
attack.223 As Lieutenant General Hollingsworth said, the U.S. should not and does not have to rely on landmines to protect the Korean
Peninsula. There is some indication that the Obama administration may not defer to the military’s desire in developing a landmine policy.
During 2010 the U.S. Department of State hosted a number of interagency meetings that included representatives from U.S. and international
NGOs, the ICRC, the UN, and members of the Clinton and Bush administration landmine policy teams.224 Additionally, the U.S. has sought input
from political and military allies on whether the U.S. should join the Treaty.225 This outreach to groups other than the U.S. military is possibly a
signal that the Obama administration is not willing to simply defer to the judgment and will of the military in developing a policy on the Mine
Ban Treaty.
And decreased Soko-Noko tensions allow for accession – they’re jointly removing the
landmines now
Schrepferman 19 [Will Schrepferman, Associate Editor and Staff Writer for the HIR, 12-19-2019, "All
Mine: The United States and the Ottawa Treaty," Harvard International Review,
https://hir.harvard.edu/all-mine-the-united-states-and-the-ottawa-treaty/]/Kankee
The United States—along with the likes of Iran, China, Russia, Pakistan, and Syria—is not a party to the Ottawa Treaty.
Although America supported the development process of the treaty, it did not sign it in 1997. The Clinton administration declined to accede to
the Treaty under pressure from the Pentagon, which was concerned with the strategic importance of landmines along the Demilitarized Zone
(or DMZ) between North and South Korea. The United States, in conjunction with South Korea, has deployed thousands of landmines across the
DMZ to serve as a deterrent against North Korean invasion. Robert Beecroft, a State Department official under President Clinton, said that the
United States would have signed the treaty if not for the issue of Korea. Little movement was made on the issue until 2014, when the Obama
administration announced that it would accede to the Ottawa Treaty in every way except on the Korean peninsula. President Obama
announced at the time that “we will begin destroying our stockpiles not required for the defense of South Korea. And we’re going to continue
to work to find ways that would allow us to ultimately comply fully and accede to the Ottawa Convention.” The United States pledged to
destroy its stockpile of nearly 10 million landmines, not produce anymore, and totally restrict their use in warfare. Advocates of the land
mine ban applauded this move, but questioned the logic of requiring an exception for Korea. Steven Goose, arms
director at the Human Rights Watch , said that “a geographic exception to the ban is no more acceptable
today than when the treaty was negotiated.” A new opportunity presented by recent diplomatic efforts in Korea
could be the beginning of the United States’ accession in full to the Ottawa Treaty. The recent thawing of
diplomatic tension on the Korean peninsula included a 2018 summit between South Korean President Moon
Jae-In and North Korean leader Kim Jong Un; both South and North Korea agreed to begin dismantling landmines
in the DMZ. Engineers began removing landmines from the region within days of the summit, with the
stated goal of total removal. If this policy is successful between North and South Korea, and landmines are eliminated from the DMZ,
then the United States would have no reason not to fully sign on to the Ottawa Treaty . The Ottawa Treaty is the
result of the complicated, ugly history of landmines and the subsequent late-20th century movement towards their elimination. It is a success
in progress: though it has undoubtedly saved lives and made the world safer, landmines are still in use in some countries around the world. The
United States has so far failed to accede to the Treaty, largely due to the outlier of the Korean peninsula and the necessity of landmines in the
DMZ as a deterrent. New developments towards the removal of those mines, though, may finally mean that America can add its full,
unconditional support to a global landmine ban.
Contention 2: Killer Clankers
Killer robot prolif coming now – it escalates wars and will cause massive human rights
violations
Garcia 18 [Denise Garcia, Associate Professor in the Department of Political Science and the
International Affairs program, and a Global Resilience Institute Faculty at Northeastern University in
Boston and a Nobel Peace Institute Fellow, 2018, “Lethal Artificial Intelligence and Change: The Future
of International Peace and Security,” International Studies Review, https://sci-
hub.se/https://academic.oup.com/isr/article-abstract/20/2/334/5018660?
redirectedFrom=fulltext]/Kankee
The Three Domains of Peace and Security The first domain of peace and security consists in the prohibition of the use of force in the conduct of
international relations. It is a norm first codified in the United Nations Charter Article 2.4, which is carried out by the charter’s mechanisms for
the peaceful settlement of disputes (through international and regional courts) and by international organizations. The revolution catalyzed by
AI weapons will give rise to two important questions before the international community (Singer 2009, 14): Shall a lethal AI weapons arms race
be prevented before it starts? Shall AI weapons be empowered to kill without human oversight? The answers to these questions represent a
predicament to international law: how will this revolution in warfare impact the existing United Nations Charter peace framework? If
the
use of unmanned aerial vehicles, known as drones (Schwarz 2017), serves as an indicator of things to come, a few countries are
already employing them in situations that could be peacefully settled using a law enforcement framework (Kreps and Kaag 2012; Knuckey 2014;
Kreps and Zenko 2014). Impediments to the use of force in international relations have been eroded to the
point where the use of force is employed without legal justification under existing international law. AI
weapons will signify diminished ceilings for war to start. More violence will ensue as a result. The erosion of the
nonuse of military force and the peaceful settlement of disputes norms will make peace and security
precarious. This is especially likely because the technologically advanced countries will have an advantage on the
ones that cannot afford AI weapons. The second domain, efforts to sustain peace and security in the twenty-first century, is based
on the rules of international protection of human rights law (HRL) and IHL. AI weapons will disrupt the regulation of war and
conflict under the rules of the UN Charter . The development of AI weapons will disrupt the observance of the
human rights and IHL legal architectures. For both IHL and HRL, the main requirement is accountability for
actions during violent conflict (Bills 2014; Hammond 2015). Taken together, HRL and IHL serve as the basis for the
protection of life and prevention of unnecessary and superfluous suffering (Heyns 2013). The universal IHL
and HRL global norms form the common legal code , which has been broadly ratified (Haque et al. 2012; Teitel
2013). It is critical to determine how to protect civilians and whether the development of new weapons and technologies will
imperil existing legal frameworks, which are already dwindling due to the wars in Syria and beyond. The combined development of
HRL and IHL spawned an era of “Humanitarian Security Regimes.” There are altruistically motivated regimes that protect civilians or control or
restrict certain weapons systems. These regimes embrace humanitarian perspectives that seek to prevent civilian casualties and guard the
rights of victims and survivors of conflict and, ultimately, to reduce human suffering and prohibit superfluous harm. The relevance of the
concept of humanitarian security regimes to the weaponization of artificial intelligence rests on two factors (Garcia 2015). First is that new
regimes can form anew in areas previously considered impenetrable to change. These new regimes can be motivated to protect human
security. The 1997 Nobel Peace Prize was awarded to Jody Williams and the International Campaign to Ban Landmines (ICBL) in recognition of
the new role played by civil society to create a new human security treaty that prohibits the use and all aspects of landmines, a weapon
previously in widespread use (Borrie 2009, 2014). Second, change
is possible within the realm of national security (i.e.,
weapons) as a result of the attempts to stem humanitarian tragedies. States
may be led to reevaluate what is important to
their national interests and be duty-bound by a clear humanitarian impetus or reputational concerns
(Gillies 2010) vis-à-vis the weaponization of AI . The key humanitarian principles, now customary, that have
been driving disarmament diplomacy in the last century are the prohibition of unnecessary suffering by
combatants, the outlawing of indiscriminate weapons , and the need to distinguish between civilians and
combatants (Henckaerts and Doswald-Beck 2005). Here it is worth noting that, although the United States continues to, at
times, violate these principles, the latter comprise the norm nonetheless . This is evidenced in the
increasing legal challenges to the killing of civilians . Moreover, just because some violate it in some cases
does not mean it is not a norm. Humanitarian concerns have always been part of the equation in
multilateral disarmament diplomacy. It is only in recent years, however, that they have assumed center stage and become the
driving force, evident in the 1997 Ottawa Convention on Landmines and the 2008 Convention on Cluster Munitions. Such concerns can also be
said to have been at the core of the overwhelming international support for the Arms Trade Treaty (Erickson 2013). Finally, it is essential to
note that global rule making, which once was fundamentally anchored on consensus, may indeed be in decline (Krisch 2014). To many areas of
international law, consensus is still central; however, there are very good reasons to question its relevance. Consensus is outdated and no
longer reflects the reality and urgency of current challenges. Indeed, global governance may be leading to a new norm of lawmaking, one based
on majority rather than consensus.4 For instance, the Arms Trade Treaty abandoned consensus negotiations out of frustration and an inability
to create a legally binding document covering legal arms transfers. The Arms Trade Treaty represents a significant shift in negotiation tactics as
it broke free from the constraints of consensus negotiations and instead was shifted to a vote in the UNGA, where approving such treaties
requires only a majority. The stabilizing legal and political framework that sustains peace and security comprises several norms, such as
transparency, and mechanisms: confidence-building, alliances, arms control agreements, nuclear weapons free zones (NWFZs), joint
operations, disarmament, conflict resolution, peacekeeping, and reconciliation. New AI weapons will require a totally new and expansive
political and legal structure (Adler and Greve 2009). Maintaining transparency at the global level could be difficult with AI weapons, which will
be hard to scrutinize due to the algorithms and large data that will be used (Roff 2014). There is a legal responsibility regarding the creation of
new weapons under Article 36 of the 1977 Additional Protocol to the Geneva Conventions, which states: “In the study, development,
acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether
its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the
High Contracting Party.” Article 36 stipulates that states conduct reviews of new weapons to determine compliance with IHL. However, only a
handful of states carry out weapons reviews regularly, which makes this transparency mechanism insufficient as a tool for creating security
frameworks for future arms and technologies. The third domain of peace and security comprises the initiatives and programs in cultural,
economic, social, and environmental matters that affect all of humanity and tackle problems that can only be solved collectively. AI has
enormous potential to be used for the common good of humanity. Therefore, this third framework is based upon the UN Charter, Article 1.3:
“To achieve international co-operation in solving international problems of an economic, social, cultural, or humanitarian character, and in
promoting and encouraging respect for human rights and for fundamental freedoms.” The challenge of AI can be tackled collectively and
peacefully—hence the need for a ban on weaponization. Recently, states agreed unanimously, under the auspices of the UN, on the new UN
Sustainable Development Goals and on the Paris Agreement on Climate Change. Taken together, they represent a robust map to holistically
solve some of the worst economic, social, and environmental problems facing humanity today. The attention and resources of the international
community should be drawn toward such initiatives immediately. AI presents a similar opportunity to tackle a common problem together in a
way that has been demonstrated to work before. UN Charter Article 26 constitutes the normative prescription for the nondiversion of human
and financial resources away from social and economic development toward weapons inventions that could be harmful for peace and security.
Prevention to Avoid Disruptive Change AI weapons present a dangerous challenge to the world and to international law, but there are steps
that the world can take to mitigate some of these major concerns. The adoption of “preventive security governance” as a strategy could raise
the capacity to keep peace and international order. This could be achieved by the codification of new global norms based on existing
international law that will clarify expectations and universally agreed-upon behavior. This is needed because we now have no relevant
rules; the extant rules will probably not suffice for the challenges ahead (Garcia 2016). The precautionary principle of
international law includes three domains: preventing harm, shifting the burden of proof to supporters of a probably harmful activity, and
promoting transparent decision-making that includes those who would be affected. The precautionary principle calls for action
to be taken before harm is done. Artificial intelligence presents such a case: there is scientific certainty
and consensus regarding the danger of its weaponization. There is already strong consensus in the
international community that not all weapons are acceptable and that some have the potential to be so
harmful that they should be preemptively banned. Such is the case with the prohibition on blinding laser weapons, a class of
weapons that was banned while still in development. In the same way, the weaponization of AI can be halted before its full
deployment in the battlefield, and international efforts can instead be focused on its peaceful uses. Here it is also important to
address the incentives that countries would have to ban these types of weapons, namely that they will be much more widely available than
other types of weapons and that their industries have a stake in preventing their products from being involved in civilian casualties. The issue of
proliferation is one that should make states more willing to preemptively ban such weapons. Proliferation
of AI weapons will
happen faster and at lower cost than conventional weapons or weapons of mass destruction . Industry, in
this case, also has a vested interest in ensuring that their systems not be associated with mass atrocities or
illegal forms of warfare. Their inclusion in a ban should therefore be welcome and could very well be a determining factor for its
success. An example of this can be seen in the Chemical Warfare Treaty, which the industry helped negotiate. States should focus all of their
attention on maintaining and strengthening the architecture of peace and security based upon the UN Charter. Nothing
else has the
capacity to bring the international community together at this critical juncture . Many times before,
states have achieved the prohibition of superfluous and unnecessary armaments . In my research, I have found
that individual states can become champions of such causes and unleash real progress in disarmament diplomacy. It is the work of such
champion states that brought to fruition extraordinary new international prohibition treaties for landmines and
cluster munitions and the first treaty to set global rules for the transfer of conventional arms. The 1997 treaty prohibiting mines and
the 2008 treaty that banned cluster munitions were success stories because they prohibited weapons that indiscriminately harmed civilians.
The 2013 Arms Trade Treaty represents a novel attempt, at the global level, to imprint more transparency and accountability on
conventional arms transfers. The presence of an “epistemic community”—a group of scientists and activists with
common scientific and professional language and views that are able to generate credible information —
is a powerful tool for mobilizing attention toward action . In the case of AI weapons, the International Committee for Robot
Arms Control serves such purpose. The launch of a transnational campaign is another key element to summon awareness at several levels of
diplomatic and global action (Carpenter 2014). The Stop Killer Robots Campaign is in place and is attracting an unprecedented positive response
from around the world. Conclusions An AI weapons global race will imperil everyone . Nuclear weapons serve as a
historic model, alerting us to what can result: an imbalanced system of haves and havenots and a fragile
balance of security. States will be better off by preventing the development and deployment of these
systems, as this is an arms race that has the potential to proliferate much more widely and rapidly than
nuclear weapons ever did. It would therefore leave everyone less secure since more states and nonstate
actors will likely be able to buy or replicate such technologies. As with nuclear weapons, AI weapons would create a new
arms race, only this one would be much more widespread , as its technology is cheaper and much easier to
develop indigenously. Preventive security governance frameworks must be put in place with principled
limits on the development of AI weapons that have the potential to violate international law (Garcia 2014,
Johnson 2004). Such preventative frameworks could promote stability and peace. Previously, states have reaped gains in
terms of national security from preemptive actions to regulate or control dangerous weapons. The prevention of harm is a moral
imperative (Lin 2010, 2012). In the case of AI-enabled weapons, even if they comply with IHL, they will have a
disintegrating effect on the commonly agreed rules of international law (Dill 2014; O’Connell 2014). AI weapons
will make warfare unnecessarily more inhumane because attribution is necessary to hold war criminals
to account, and these weapons make that so much harder . Nations today have one of the greatest opportunities in history
to promote a better future by devising preventive security frameworks that will preventatively prohibit the weaponization of artificial
intelligence and ensure that AI is only used for the common good of humanity. This is about a more prosperous future for peace and security.
LAWS will soon be easily proliferated; a ban is key to stopping mass production of
LAWS, which prevents rogue and covert LAWS
Freedberg 19 [Sydney J. Freedberg Jr., deputy editor for Breaking Defense with master’s degrees from
Cambridge and Georgetown, citing Stuart Russell, professor of computer science and director of the
Center for Intelligent Systems at UC Berkeley, 3-8-2019, "Genocide Swarms & Assassin Drones: The Case
For Banning Lethal AI," Breaking Defense, https://breakingdefense.com/2019/03/genocide-swarms-
assassin-drones-the-case-for-banning-lethal-ai/]/Kankee
So what Russell really worries about is not robotic tanks — though he’d definitely prefer a world without them — but what happens when the
technology is developed and the precedent is set. “Given the cost of a new M1A2 around $9 million…there are far cheaper ways to flatten a city
and/or kill all of its inhabitants,” Russell told me. “The
problem with full autonomy is that it creates cheap, scalable
weapons of mass destruction.” It’s already possible to build assassin drones by combining off-the-shelf
quadcopters, small amounts of homebrewed explosive, and the kind of facial-recognition technology
Facebook uses to tag other people’s bad pictures of you . “My UAV colleagues tell me they could build a
weapon that could go into a building , find an individual, and kill them as a class project,” Russell said. “Skydio
plus self-driving cars plus AlphaStar more or less covers it.” (Skydio’s a drone you can buy on Amazon; AlphaStar is a version of the DeepMind AI
that beats humans at complex strategy games like Starcraft). In fact, he said, Switzerland’s
domestic security agency, DDPS,
“made some to see if they would work — and they do.” Not only would they work, they’ve already been tried . ISIS
has already used mini-drones as “flying IEDs,” and someone attempted to assassinate Venezuelan president Nicolás Maduro with a pair of
exploding drones. Small Drones, Big Kills Now what happens when you scale this up ? Russell and fellow activists actually
produced a video, Slaughterbots, in which swarms of mini-drones attack, among other groups, every member of Congress from a particular
party. But that’s still thinking small. Remember, once
you’ve written the software, you can make infinite copies; lone
cranks can make explosives; and mini-drones
are getting cheaper by the day . Remember also that the Chinese
government has personal information on some 22.1 million federal employees, contractors, and their family
members from the Office of Personnel Management breach two years ago. Now imagine one out of every
thousand shipping containers imported from China is actually full of mini-drones programmed to go to
those addresses and explode in the face of the first person to leave the house . Imagine they do this the
day before China invades Taiwan. How effectively would the US government react? A rogue state or terrorist group could go
further. How about programming your mini-drones to kill everyone who looks white, or black or Asian? (One Google facial recognition
algorithm classified African-Americans as “gorillas,” not humans, so racist AI is a mature technology). It
would be genocide by
swarm. Such a tactic might only work once, much like hijacking airliners with box cutters on 9/11. “Small drones are vulnerable to jamming,
to high-powered microwaves, to other drones that might intercept them, to nets,” said Paul Scharre, an Army Ranger turned thinktank analyst.
“Bullets work pretty well… I have a buddy who shot a drone out of the sky back in Iraq in 2005.” (Unfortunately, the drone was American). At
least some object-recognition algorithms can be tricked by carefully applied reflective tape. “People are working on countermeasures today,”
Scharre told me, “and the bigger the threat becomes, the more people have an incentive to invest in countermeasures.” But how do you stop
tiny drones from becoming a big threat in the first place? While technology to build a “working prototype” already exists, Russell told me, the
barrier is mass production. No national spy agency or international monitoring regime can find and stop everyone trying to make small numbers
of drones. But, Russell argues fervently, a
treaty banning “lethal autonomous weapons systems” would prevent
countries and companies from openly producing swarms of them, and a robust inspection mechanism —
perhaps modeled on the Organisation for the Prohibition of Chemical Weapons — could detect covert attempts at mass
production. Without a ban, Russell said, legal mass production could make lethal swarms as easy to obtain
as, say, assault rifles — except, of course, one person can’t aim and fire thousands of rifles at once . Thousands of
drones? Sure. So don’t fear robots who rebel against their human masters . Fear robots in the hands of the
wrong human.
weapons?," Middle East Institute, https://mei.edu/publications/will-covid-19-hasten-rise-lethal-
autonomous-weapons]/Kankee
The latest figures from the Pentagon indicate that the total number of COVID-19 cases among members
of the U.S. military has topped 60,000 since the onset of the pandemic. COVID-19 and other similar
outbreaks could become an increasingly important consideration in the calculus of future military
deployments. They could add impetus to the Pentagon’s development of lethal autonomous weapons (LAWs) or
at least be cited as a perfect reason to do so. This could, in turn, have significant implications for the future
of both U.S. military operations in the Middle East and the U.S. military presence in the region, which has long been the subject of
political disagreement in Washington. Already, there are numerous land-, air-, and sea-based weapons capable of
performing surveillance and voice recognition, as well as tracking and independently choosing to attack targets
autonomously using artificial intelligence (AI) — what UN Secretary-General António Guterres has referred to as “machines with the
power and discretion to take lives without human involvement.” Implications for US military operations Incorporating AI and LAWs into new or
existing platforms will have enormous implications for U.S. military operations in general. By its very nature, military
conflict is a costly
undertaking ethically, morally, politically, economically, and psychologically. Altering or altogether removing
these costs will change the course of military operations and warfare in profound ways. In recent years, U.S.
operators of remotely operated drones (i.e. manned drones) in the Middle East have already been reportedly grappling with the altered
psychological and moral costs of being far removed from the battlefield. Their experience of killing people halfway across the globe has been
described as more akin to a video game than the significantly more difficult psychological and personal experience of battlefield soldiers. The
future use of lethal autonomous drones (i.e. unmanned drones) could remove human involvement altogether —
and along with it many of the associated costs. U.S. leaders have historically needed to secure financial
resources, mobilize populations, and risk their credibility when undertaking military operations , especially in
the Middle East. The use of LAWs effectively lowers this threshold and makes it easier for them to do so.
There are also significant systemic implications that are more difficult to predict. Across a variety of sectors, system complexity is
increasing in ways that are hard to comprehend . The U.S. subprime housing bubble is an excellent example of this from the
financial sector: Layers upon layers of complexity created a system in which there were risks that few understood or predicted, and which
would bring it to collapse once stressed. Facebook is another example from the technology sector: Its original vision was to connect people and
improve their access to information, furthering democratization across the globe. Instead, its algorithms fueled the polarization, populism, and
misinformation we see today. The impact of autonomous systems on the governance of military operations could be equally disruptive
but potentially more difficult to discern. They could
impact the traditional cycle involved in the initiation, plateauing, and conclusion of
conflicts by making leaders less willing to consider diplomatic and political means to resolve crises . This
trend has already been set in motion by the rise of manned drones and cyber weapons , which have been
widely used in the Middle East. In such instances, military operations no longer have discernable beginnings
and endings, but instead consist of a prolonged low-level conflict where a conclusive victory is never fully
realized. It could also mean that the underlying drivers of conflicts are further overlooked . Drone-related
targeting in Yemen and Pakistan may have eliminated immediate terrorist threats but it has done little to address the
underlying causes of terrorism — a bleak reminder that the strategic utility of these weapons is
disconnected from the political and economic context . From an operational perspective, it is also unclear how
autonomy would impact command-and-control doctrines and the communication chain during
operations, let alone the needed checks and balances impacting the rules of engagement of forces ,
immunity from prosecution, humanitarian law, and the accountability of leaders at all levels. There are potentially
even domestic U.S. implications to consider: The Department of Defense is among the largest employers (public or
private) in the U.S., and while concerns over rushing to outsource jobs to machines are not new, reducing the number of jobs in
the U.S. military could further fuel unemployment and disenfranchisement . Impact on the US regional presence
That will cause a LAWS arms race
Horowitz 19 [Michael C. Horowitz, professor of political science at the University of
Pennsylvania, 10-29-2019, “Deterrence and Crisis Stability How might the deployment of LAWS influence
deterrence and the prospect for wartime escalation, including with nuclear-armed countries?,”
https://www-tandfonline-
com.ezproxy.library.unlv.edu/doi/full/10.1080/01402390.2019.1621174]/Kankee
Autonomous arms race dynamics If many actors may have the ability to produce, adapt or import some types of LAWS, will that lead to arms
races, as opposed to proliferation? Note that while talking about AI developments in a macro sense as an arms race is conceptually flawed,
because of the numerous potential applications of very different types of AI across wide military and civilian spectrums, it is theoretically
possible that arms races could occur across particular dimensions of AI connected to specific military missions. Specific examples of LAWS
could, in theory, be such a dimension. Arms races do happen, after all, even given the definitional caveats raised above. During the Cold
War, for example, the United States and Soviet Union engaged in an arms race over nuclear weapons, with each side seeking to build more
sophisticated nuclear weapons and delivery systems to gain an edge.54 Russian President Vladimir Putin infamously discussed
leadership in AI in explicitly competitive terms in 2017 when he stated that AI ‘[C]omes with colossal opportunities, but
also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the
world’.55 All arms races share an underlying political dynamic whereby fear of developments by one or multiple other
actors, and the inability to verify that those actors are not developing particular capabilities , fuels more
intense development of new weapon systems than would happen otherwise.56 An arms race in the area of machine
autonomy would be no different in that dimension. The root would be inherently political.57 Actors would also have to
believe that they would gain an advantage from developing LAWS, or at least be at a significant
disadvantage if they did not develop those weapon systems. Jervis argues that arms races occur due to a security
dilemma when states have the ability to measure each other’s capabilities , but not their intentions.58 The
opacity surrounding LAWS development might generate increased risk for arms competition because of
potential opacity about capabilities , in addition to the ‘normal’ opacity that exists about intentions. First, it will be
extremely difficult for states to credibly demonstrate autonomous weapon capabilities . The difference
between a remotely piloted system and an autonomous system is software , not hardware, meaning
verification that a given country is operating an autonomous system at all would be difficult. Second, uncertainty about the
technological trajectory of machine learning and specific military applications means that countries might have
significant uncertainty about other countries’ capabilities. Thus, countries might invest a lot in AI
applications to military systems due to fear of what others are developing . The heightened role of
uncertainty about what other countries are developing would make an LAWS arms competition different than many
historical arms races – for example, the Anglo-German naval arms race prior to World War I. In the Anglo-German naval arms race case,
both sides could see the ships being produced by the other side because those ships left port for testing, and were subject to reporting by spies
who could observe construction patterns.59 Even though there was some uncertainty about the specific capabilities of battleships and
battlecruisers, each side could count the number and size of the guns deployed on each ship. Third, the rules of
engagement for
LAWS would also likely be unknown – and use of an LAWS by a state in one engagement might not generate
predictability, since a state could change the programming of the system prior to the next engagement .
Thus, opacity surrounding AI capabilities could, potentially, lead to worst-case assumptions about capability
development by potential adversaries, thus making arms race dynamics more likely. Research on bargaining and
war also suggests that uncertainty about capabilities makes it harder for countries to come to agreements
when they enter into disputes. Private information about military capabilities means both sides can believe they
are likely to win if a dispute escalates.60 The dispute then becomes harder to resolve and more likely to
escalate. To the extent that machine learning systems generate more uncertainty due to their opacity , an
arms race over machine learning systems might therefore be somewhat more likely to escalate. The extent of the
effect would be difficult to determine, however. Another risk is that competitive dynamics mean countries will accelerate
their weapons development cycles and deploy LAWS before fully testing them , due to a fear of falling
behind. This would essentially resolve the trust dilemma in a way that makes accidents more likely.61 Given concern that LAWS could be
more prone to accidents, such a development would be especially dangerous. This risk of an LAWS arms race causing countries to take short
cuts in weapons development seems unlikely, however. Militaries want weapons they can control (excluding potential exceptions noted above),
and they are unlikely to approve of deploying weapon systems they view as less able to accomplish a mission, or more likely to put their own
forces in danger, than alternatives. Thus, the incentive to deploy effective systems will hedge at least somewhat against short-cuts in the
weapons development process. Moreover, public awareness about the risks of AI could play a role in shaping how militaries consider deploying
AI systems, even in a competitive scenario. Fear of AI should lead most militaries to be more careful, rather than less careful, in the testing and
development of LAWS. The pressure to stay ahead will compete with the pressure to deploy effective systems. Deterrence and Crisis Stability
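[Analytic note: the bargaining claim above can be made concrete with a standard Fearon-style sketch; the notation below is mine, not Horowitz’s. Two states dispute a good normalized to 1. If they fight, A wins with probability p, and each side pays a cost of war c_A, c_B > 0. Any peacetime split x satisfying
$$ p - c_A \le x \le p + c_B $$
leaves both sides better off than fighting, so when the two sides agree on p a deal always exists. With private information about capabilities, A bargains on its own estimate \hat{p}_A and B on \hat{p}_B, and no mutually acceptable split exists once
$$ \hat{p}_A - \hat{p}_B > c_A + c_B, $$
that is, once mutual optimism exceeds the combined costs of fighting. Opacity about LAWS capabilities widens the plausible gap \hat{p}_A - \hat{p}_B, which is the mechanism the card identifies for disputes becoming harder to resolve and more likely to escalate.]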
LAWS prolif collapses deterrence postures, sets warfighting at ludicrous speed, and
undermines arms control
Klare 18 [Michael T. Klare, professor emeritus of peace and world security studies at Hampshire
College and senior visiting fellow at the Arms Control Association, 8-27-2018, "The Challenges of
Emerging Technologies," Arms Control Association, https://www.armscontrol.org/act/2018-
12/features/challenges-emerging-technologies]/Kankee
Artificial Intelligence AI is a generic term used to describe a variety of techniques for investing machines with an ability to monitor their
surroundings in the physical world or cyberspace and to take independent action in response to various stimuli. To invest machines with these
capacities, engineers have developed complex algorithms, or computer-based sets of rules, to govern their operations. An AI-equipped aerial
drone, for example, could be equipped with sensors to distinguish enemy tanks from other vehicles on a crowded battlefield and, when some
are spotted, choose on its own to fire at them with its onboard missiles. AI can also be employed in cyberspace, for example to watch for
enemy cyberattacks and counter them with a barrage of counterstrikes. In the future, AI-invested machines may be empowered to determine if
a nuclear attack is underway and, if so, initiate a retaliatory strike.4 In this sense, AI is an “omni-use” technology, with multiple implications for
war-fighting and arms control.5 Many analysts believe that AI will revolutionize warfare by allowing military commanders to bolster or, in some
cases, replace their personnel with a wide variety of “smart” machines. Intelligent systems are prized for the speed with which they can detect
a potential threat and their ability to calculate the best course of action to neutralize that peril. As warfare among the major powers grows
increasingly rapid and multidimensional, including in the cyberspace and outer space domains, commanders may choose to place ever-greater
reliance on intelligent machines for monitoring enemy actions and initiating appropriate countermeasures. This could provide an advantage on
the battlefield, where rapid and informed action could prove the key to success, but also raises numerous concerns, especially regarding
nuclear “crisis stability.” Analysts worry that machines will accelerate the pace of fighting beyond human
comprehension and possibly take actions that result in the unintended escalation of hostilities, even leading
to use of nuclear weapons. Not only are AI-equipped machines vulnerable to error and sabotage, they lack
an ability to assess the context of events and may initiate inappropriate or unjustified escalatory steps that
occur too rapidly for humans to correct. “Even if everything functioned properly, policymakers could nevertheless
effectively lose the ability to control escalation as the speed of action on the battlefield begins to eclipse their
speed of decision-making,” writes Paul Scharre, who is director of the technology and national security
program at the Center for a New American Security .6 As AI-equipped machines assume an ever-growing number and range
of military functions, policymakers will have to determine what safeguards are needed to prevent unintended, possibly catastrophic
consequences of the sort suggested by Scharre and many others. Conceivably, AI could bolster nuclear stability by providing enhanced
intelligence about enemy intentions and reducing the risk of misperception and miscalculation; such options also deserve attention. In the near
term, however, control efforts will largely be focused on one particular application of AI: fully autonomous weapons systems. Autonomous
Weapons Systems Autonomous weapons systems, sometimes called lethal autonomous weapons systems, or “killer robots,”
combine AI and drone technology in machines equipped to identify, track, and attack enemy assets on their own. As defined by the U.S.
Defense Department, such a device is “a weapons system that, once activated, can select and engage targets without further intervention by a
human operator.”7 Some such systems have already been put to military use. The Navy’s Aegis air defense system, for example, is
empowered to track enemy planes and missiles within a certain radius of a ship at sea and, if it identifies an imminent threat, to fire missiles
against it. Similarly, Israel’s Harpy UAV can search for enemy radar systems over a designated area and, when it locates one, strike it on its own.
Many other such munitions are now in development, including undersea drones intended for anti-submarine warfare and entire fleets of UAVs
designed for use in “swarms,” or flocks of armed drones that twist and turn above the battlefield in coordinated maneuvers that are difficult to
follow.8 The deployment of fully autonomous weapons systems poses numerous challenges to international security and arms control,
beginning with a potentially insuperable threat to the laws of war and international humanitarian law. Under these norms, armed belligerents
are obligated to distinguish between enemy combatants and civilians on the battlefield and to avoid unnecessary harm to the latter. In addition,
any civilian casualties that do occur in battle should not be disproportionate to the military necessity of attacking that position. Opponents of
lethal autonomous weapons systems argue that only humans possess the necessary judgment to make such fine distinctions in the heat of
battle and that machines will never be made intelligent enough to do so and thus should be banned from deployment.9 At this point, some 25
countries have endorsed steps to enact such a ban in the form of a protocol to the Convention on Certain Conventional Weapons (CCW).
Several other nations, including the United States and Russia, oppose a ban on lethal autonomous weapons systems, saying they can be made
compliant with international humanitarian law.10 Looking further into the future, autonomous weapons systems could pose a
potential threat to nuclear stability by investing their owners with a capacity to detect, track, and destroy
enemy submarines and mobile missile launchers. Today’s stability, which can be seen as an uneasy nuclear balance of
terror, rests on the belief that each major power possesses at least some devastating second-strike , or
retaliatory, capability, whether mobile launchers for intercontinental ballistic missiles (ICBMs), submarine-launched ballistic missiles
(SLBMs), or both, that are immune to real-time detection and safe from a first strike . Yet, a nuclear-armed
belligerent might someday undermine the deterrence equation by employing undersea drones to pursue
and destroy enemy ballistic missile submarines along with swarms of UAVs to hunt and attack enemy
mobile ICBM launchers. Even the mere existence of such weapons could jeopardize stability by
encouraging an opponent in a crisis to launch a nuclear first strike rather than risk losing its deterrent
capability to an enemy attack. Such an environment would erode the underlying logic of today’s strategic nuclear
arms control measures, that is, the preservation of deterrence and stability with ever-diminishing numbers of warheads and launchers,
and would require new or revised approaches to war prevention and disarmament.11 Hypersonic Weapons
Watch those wrist rockets - one LAWS can wipe out whole fleets
Byrnes 14 [Michael W. Byrnes, USAF captain, May-June 2014, “Nightfall Machine Autonomy in Air-to-
Air Combat,” Air & Space Power Journal,
https://www.airuniversity.af.edu/Portals/10/ASPJ/journals/Volume-28_Issue-3/F-Byrnes.pdf]/Kankee
LAWS arms races will cause nuclear war – they decimate deterrence and escalation
control mechanisms
Horowitz 19 [Michael C. Horowitz, professor of political science at the University of
Pennsylvania, 10-29-2019, “Deterrence and Crisis Stability How might the deployment of LAWS influence
deterrence and the prospect for wartime escalation, including with nuclear-armed countries?,”
https://www-tandfonline-
com.ezproxy.library.unlv.edu/doi/full/10.1080/01402390.2019.1621174]/Kankee
Deterrence and Crisis Stability How might the deployment of LAWS influence deterrence and the prospect for wartime escalation, including
with nuclear-armed countries? The relationship between speed and crisis stability in a world of deployed LAWS represents one of the clearest
risk factors associated with autonomous weapons. The United States and the Soviet Union avoided nuclear war during
the Cold War in part due to the development of mutually assured destruction, a situation where each side believed that, even if it
struck first, the target would still have enough nuclear forces remaining to destroy the aggressor. The countries developed complicated and
overlapping systems for command and control, as well as different types of nuclear strike systems.62 Ballistic missiles, for example, represented
the ‘autonomous’ weapons of their day, because they could not be recalled, which was unique at the time. There was also a trade-off between
perceived attack capabilities and perceived strategic stability. Ballistic missiles with multiple independently targeted re-entry vehicles could
allow countries to maximise damage in a first strike, but those very capabilities also made them disruptive to strategic stability.63 The Soviet
Union also allegedly deployed an automated system called ‘Perimeter’, known as the Dead Hand system, in response to fears of decapitation.
Evidence from former Soviet military and nuclear officials suggest that the Soviets designed the system to enable retaliation against a US
nuclear first strike even if Soviet command and control was decapitated. Soviet leadership could activate Perimeter in a crisis if they feared they
might lose active control of their nuclear arsenal due to a US strike.64 The speed
associated with LAWS could potentially
threaten first strike stability in a crisis. The ability to fight at machine speed means a state could win
faster – but it also means that state could lose faster. Countries could fear that an aggressor, using LAWS
or related systems operating at machine speed, could quickly knock out their command and control capabilities,
eliminating their ability to retaliate (regardless of whether one or both sides has nuclear weapons). This
fear would create incentives for many of the least stable military postures developed during the Cold War ,
including strategic weapons on high alert, launch on warning postures, and others. A country fearing it
might not have the ability to respond in time if its command and control capabilities are devastated by machine-speed attack could
also have incentives for pre-delegation. Autonomous weapon systems could therefore place pressure on escalation
control mechanisms. LAWS, if they prove effective battlefield weapons, could also threaten deterrence by
undermining a nuclear deterrent itself.65 Imagine, for example, undersea or above-ground swarms of autonomous
systems with the ability to target ballistic missile submarines or ICBM silos . Some also fear a situation where
undersea LAWS track adversary submarines . Fear that those tracking systems could undermine the sea-
based deterrent of a country, especially a nuclear-armed country, could in theory create first strike incentives as well. These
scenarios currently seem very unlikely for a variety of technical reasons, including power restrictions and communications challenges
underwater.66 Now, there is nothing necessarily unique about the weapons being autonomous in this scenario – fast weapon systems that can
threaten command and control systems can place pressure on strategic stability in general. For example, precisely because hypersonic missiles
could hit over-the-horizon targets in a fraction of the time it would take existing ballistic or cruise missiles, many analysts believe they would
undermine strategic stability.67 For this situation to come about, autonomous tracking systems that could attack would have to be credible,
already observed by the target, and something the target would not have the ability to defend themselves from. The uncertainty about
survivability that LAWS would create in this situation could be mitigated by defensive systems, in theory. The high degree of uncertainty
about LAWS, especially for first moving states, before actors in the international system gain a more complete understanding
of the realm of the possible with LAWS, could also impact deterrence in two ways. First, as discussed above, there could be
uncertainty over whether the weapon will function as programmed . That generates uncertainty an
aggressor might try to use to coerce a target. A second type of uncertainty could also create instability. Software will
determine how an autonomous weapon will function, including the rules of engagement, and countries are not going
to allow potential adversaries to read their software code. In crisis situations, there could be greater
uncertainty that opponents lack experience in dealing with , relative to facing human opponents.
Consider the motivating example for this article, the Cuban Missile Crisis. Imagine that the US Navy ships were
deployed in a picket line, but the Soviet Union did not know how the ships were programmed to respond . The
ships might be programmed to allow Soviet ships running the blockade to proceed to Cuba , or they might
fire on the Soviet ships. Moreover, given that humans would not be in the loop , there would be no chance for
either frailty on the part of a ship commander to not use force , or over-aggressive behaviour. It is possible
that this uncertainty would have made the Soviet Navy more likely to back down, though it is hard to say. But the level of uncertainty would
potentially have been higher. Of course, there is already uncertainty concerning the rules of engagement for human-piloted systems
during crisis situations. However, uncertainty about an autonomous system would not simply be about whether LAWS would follow commands,
but about what the autonomous system was programmed to do. This is another reason why decision makers might therefore want an override
switch. Given the challenges of signaling that a weapon is actually autonomous, the terminal impact of LAWS on deterrence could wash out.
That being said, the discussion above illustrates risks that countries will have to manage.
LAWS cause US-Russia war
Laird 20 [Burgess Laird, RAND Senior International Researcher with a BA in politics from the University
of Dallas, 06-03-2020, "The Risks of Autonomous Weapons Systems for Crisis Stability and Conflict
Escalation in Future U.S.-Russia Confrontations," RAND, https://www.rand.org/blog/2020/06/the-risks-
of-autonomous-weapons-systems-for-crisis.html]/Kankee
Implications for Crisis Stability and Conflict Escalation in U.S.-Russia Confrontations While holding out the promise of significant operational
advantages, AWS simultaneously could increase the potential for undermining crisis stability and fueling conflict
escalation in contests between the United States and Russia. Defined as “the degree to which mutual
deterrence between dangerous adversaries can hold in a confrontation ,” as my RAND colleague Forrest Morgan
explains, crisis stability and the ways to achieve it are not about warfighting, but about “building and posturing forces in
ways that allow a state, if confronted, to avoid war without backing down” on important political or military
interests. Thus, the military capabilities developed by nuclear-armed states like the United States and Russia and how they posture
them are key determinants of whether crises between them will remain stable or devolve into conventional
armed conflict, as well as the extent to which such conflict might escalate in intensity and scope , including to
the level of nuclear use. AWS could foster crisis instability and conflict escalation in contests between the
United States and Russia in a number of ways; in this short essay I will highlight only four. First, a state facing an adversary
with AWS capable of making decisions at machine speeds is likely to fear the threat of sudden and
potent attack, a threat that would compress the amount of time for strategic decisionmaking. The
posturing of AWS during a crisis would likely create fears that one's forces could suffer significant, if not
decisive, strikes. These fears in turn could translate into pressures to strike first—to preempt—for fear of having to
strike second from a greatly weakened position . Similarly, within conflict, the fear of losing at machine speeds
would be likely to cause a state to escalate the intensity of the conflict possibly even to the level of
nuclear use. Second, as the speed of military action in a conflict involving the use of AWS as well as hypersonic
weapons and other advanced military capabilities begins to surpass the speed of political decisionmaking , leaders
could lose the ability to manage the crisis and with it the ability to control escalation. With tactical and
operational action taking place at speeds driven by machines , the time for exchanging signals and
communications and for assessing diplomatic options and offramps will be significantly foreclosed.
However, the advantage of operating inside the OODA loop of a state adversary like Iraq or Serbia is one thing, while operating inside the
OODA loop of a nuclear-armed adversary is another. As the renowned scholar Alexander George emphasized, especially in
contests between nuclear armed competitors , there is a fundamental tension between the operational
effectiveness sought by military commanders and the requirements for political leaders to retain control
of events before major escalation takes place. Third, and perhaps of greatest concern to policymakers, is the
likelihood that, from the vantage point of Russia's leaders, in U.S. hands the operational advantages of
AWS are likely to be understood as an increased U.S. capability for what Georgetown professor Caitlin Talmadge refers to as
“conventional counterforce” operations. In brief, in crises and conflicts, Moscow is likely to see the United States
as confronting it with an array of advanced conventional capabilities backstopped by an interconnected shield of theater
and homeland missile defenses. Russia will perceive such capabilities as posing both a conventional war-
winning threat and a conventional counterforce threat poised to degrade the use of its strategic
nuclear forces. The likelihood that Russia will see them this way is reinforced by the fact that it currently
sees U.S. conventional precision capabilities precisely in this manner. As a qualitatively new capability that promises
new operational advantages, the addition of AWS to U.S. conventional capabilities could further cement Moscow's view
and in doing so increase the potential for crisis instability and escalation in confrontations with U.S. forces. In
other words, the fielding of U.S. AWS could augment what Moscow already sees as a formidable U.S. ability to
threaten a range of important targets including its command and control networks, air defenses, and early warning radars, all of which are
unquestionably critical
components of Russian conventional forces . In many cases, however, they also serve as
critical components of Russia's nuclear force operations. As Talmadge argues, attacks on such targets, even if
intended solely to weaken Russian conventional capabilities, will likely raise Russian fears that the U.S.
conventional campaign is in fact a counterforce campaign aimed at neutering Russia's nuclear capabilities .
Take for example, a hypothetical scenario set in the Baltics in the 2030 timeframe which finds NATO forces employing swarming AWS to
suppress Russian air defense networks and key command and control nodes in Kaliningrad as part of a larger strategy of expelling a Russian
invasion force. What to NATO is a logical part of a conventional campaign could well appear to Moscow as initial
moves of a larger plan designed to degrade the integrated air defense and command and control networks
upon which Russia's strategic nuclear arsenal relies . In turn, such fears could feed pressures for Moscow to
escalate to nuclear use while it still has the ability to do so. Finally, even if the employment of AWS does not drive
an increase in the speed and momentum of action that forecloses the time for exchanging signals, a future conflict in which
AWS are ubiquitous will likely prove to be a poor venue both for signaling and interpreting signals . In such a conflict,
instead of interpreting a downward modulation in an adversary's operations as a possible signal of
restraint or perhaps as signaling a willingness to pause in an effort to open up space for diplomatic
negotiations, AWS programmed to exploit every tactical opportunity might read the modulation as an
opportunity to escalate offensive operations and thus gain tactical advantage. Such AWS could also
misunderstand adversary attempts to signal resolve solely as adversary preparations for imminent
attack. Of course, correctly interpreting signals sent in crisis and conflict is vexing enough when humans are making all the decisions, but in
future confrontations in which decisionmaking has willingly or unwillingly been ceded to machines, the
problem is likely only to be magnified. Concluding Thoughts Much attention has been paid to the operational advantages to be
gained from the development of AWS. By contrast, much less attention has been paid to the risks AWS potentially
raise. There are times in which the fundamental tensions between the search for military effectiveness and the
requirements of ensuring that crises between major nuclear weapons states remain stable and escalation does not
ensue are pronounced and too consequential to ignore. The development of AWS may well be increasing the likelihood
that one day the United States and Russia could find themselves in just such a time . Now, while AWS are still in
their early development stages, it is worth the time of policymakers to carefully consider whether the putative operational advantages from
AWS are worth the potential risks of instability and escalation they may raise.
And US-China war – it causes miscalc and preemptive strikes through crisis
acceleration and fears of losing the AI race
Allen and West 7-12 [John Allen, retired four-star military general and Brookings Institution
president, and Darrell West, vice president of governance studies at Brookings, 07-12-2020, “Op-ed:
Hyperwar is coming. America needs to bring AI into the fight to win — with caution,” CNBC,
https://www.cnbc.com/2020/07/12/why-america-needs-to-bring-ai-into-the-upcoming-hyperwar-to-
win.html]/Kankee
The United States recently sent two aircraft carrier strike groups into the South China Sea in a show of military
strength. The move of multiple American warships is in reaction to China holding military exercises in
international waters that are contested by Vietnam and the Philippines . The stand-off raises global
tensions at a time when each superpower has developed advanced technological capabilities in terms of
artificial intelligence, remote imaging, and autonomous weapons systems. It is important officials in each nation
understand how emerging technologies speed up decision-making but through crisis acceleration run the
risk of dangerous miscalculation. Harkening back to Prussian general and military theorist Carl von Clausewitz’s famous work “On
War,” military doctrine the world over has been rooted in an understanding of the ever-changing character of war, the ways in which war
manifests in the real world, and the never-changing nature of war, those abstractions that differentiate war from other acts — namely its
violent, political, and interactive elements. Military scholars and decision makers alike have discussed and debated these definitions time and
time again, with the character of war often being defined by the technologies of the day, and the nature of war being articulated as the human
element of armed conflict. How AI changes the definition of war With the advent of AI and other emerging technologies, though, these time-
honored definitions are likely to change. At a fundamental level, battle, war, and conflict are time-competitive processes.
From time immemorial, humans have sought to be faster in the ultimate competition of combat, in an absolute as
well as in a relative sense. And in that regard, AI will dramatically change the speed of war. It will not only
enhance the human role in conflict, but will also leverage technology as never before . For not only is technology
changing, the rate of that alteration is accelerating. This is the central issue before us for armed conflict, and the side that can create ,
master, and leverage an equilibrium between the nature of war and the character of war, especially within the new environment of AI, data
analytics, and supercomputing, will inevitably prevail in conflict . In a geopolitical environment increasingly defined by new and
emerging technologies, national defense stands as one of the most consequential areas of development for the 21st century. It is important to
assess the revolutionary impacts of artificial intelligence and other emerging technologies on nearly every facet of national security and armed
conflict, including the accelerated pace of warfare and the critical role of continued human control. AI, once fully realized, has
the
potential to be one of the single greatest force multiplier for military and security forces in human
history. Ultimately, there are significant opportunities to deploy AI-based tools , as well as major rising
threats that need to be considered and addressed . A variety of technologies can improve decision-making, speed, and
scalability — some to a dizzying degree. But, as with so many other AI applications, policy and operational shifts are necessary to facilitate the
proper integration and innovation of these emerging technologies and make sure they strengthen, not weaken, leadership capacity, general
readiness, and performance in the field. Throughout human history, militaries have operated as the most overt political tool available to
governments and society. Clausewitz himself famously wrote, “War is a continuation of politics by other means.” And while modern security
forces play a variety of interchangeable roles (peacekeeping, stabilization, and national defense), they invariably represent the threat of
violence — the “mailed fist” purpose-built to ensure a particular outcome. With this as context, it is no surprise that the
ability to assure
outcomes and plan for all contingencies, violent or otherwise, takes up a significant portion of military
leadership and military strategists’ time and energy . Here, through anything from predictive analytics to lightning-fast target
acquisition, AI, once fully realized, has the potential to be one of the single greatest force multipliers for military and security forces in human
history. Indeed, as noted in a Congressional Research Service report: “ AI
has the potential to impart a number of
advantages in the military context, [though] it may also introduce distinct challenges. AI technology could, for
example, facilitate autonomous operations, lead to faster, more informed military decision-making, and
increase the speed and scale of military action. However, it may also be unpredictable or vulnerable to
unique forms of manipulation.” The interim report of the U.S. National Security Commission on Artificial Intelligence warns that:
“How the United States adopts AI will have profound ramifications for our immediate security, economic well-being,
and position in the world. Developments in AI cannot be separated from the emerging strategic competition with China and
developments in the broader geopolitical landscape. We are concerned that America’s role as the world’s leading innovator is threatened. We
are concerned that strategic competitors and non-state actors will employ AI to threaten Americans, our allies, and our values. We know
strategic competitors are investing in research and application. It is only reasonable to conclude that AI-enabled capabilities could
be used to threaten our critical infrastructure, amplify disinformation, and wage war.” AI already in a wide variety
of weapons systems AI’s role in the military and on the battlefield is thus one of catalytic power, both for good
and ill. Yet, the strength of AI does not manifest in the way a bomb or new weapons platform might perform or act. Its utility is much broader.
As noted by Brookings Institution scholar Chris Meserole, AI is being deployed in myriad ways by the American military: “Rather than
constituting a single weapon system itself, AI is instead being built into a wide variety of weapons systems and core infrastructure. Tanks,
artillery, aircraft, submarines—versions of each can already detect objects and targets on their own and maneuver accordingly.” This dynamic
becomes particularly clear within the context of the spectrum of conflict modern militaries deal with today, notably hybrid conflict. Looking
ahead, it will also define warfare of the future, namely through what John Allen and Amir Husain have coined as “hyperwar.” The
distinctions between hybrid warfare and hyperwar are important. As noted in a recent NATO report: “Hybrid threats combine military and non-
military as well as covert and overt means, including disinformation, cyber attacks, economic pressure, deployment of irregular armed groups
and use of regular forces. Hybrid methods are used to blur the lines between war and peace, and attempt to sow doubt in the minds of target
populations.”In this environment, AI can super-charge an adversary’s ability to sow chaos in the battlespace and incorporate deception and
surprise into their tactics in new and novel ways. By contrast, hyperwar may be defined as a type of conflict where human decision-making is
almost entirely absent from the observe-orient-decide-act (OODA) loop, a popular framework developed by U.S. Air Force Colonel John Boyd
for training individuals to make time-sensitive decisions as quickly as possible, especially when there is limited time to gather information. As a
consequence, the time associated with an OODA cycle will be reduced to near-instantaneous responses. The implications of these
developments are many and game changing, ranging across the spectrum from advanced technological modernization to how leaders in the era
of hyperwar are recruited, educated and trained. The topics of speed, efficiency, and accuracy as well as the necessity for human control of AI-
powered military capabilities represent the heart of the issue in the current AI-national security debate. With U.S. and
Chinese forces
maneuvering in the relatively tight operational environment of the South China Sea, reaction times are already short.
As more sophisticated AI eventually enables faster and more comprehensive intelligence collection and
analysis, rapid decision support, and even wide area target acquisition, we could see a premium being
placed on “going first” and not risk being caught flat-footed. While AI has the capacity to magnify military capabilities and
accelerate the speed of conflict, it can also be inherently destabilizing. Now is the time for the U.S. and China to
have the hard conversations about norms of behavior in an AI enabled, hyperwar environment. With both
sides moving rapidly to field arsenals of hypersonic weapons, action and reaction times will become shorter and shorter
and the growing imbalance of the character and nature of war will create strong incentives, in moments of
intense crisis, for conflict not peace. This is foreseeable now, and demands the engagement of both powers to
understand, seek, and preserve the equilibrium that can prevent the sort of miscalculation and high-speed
escalation to the catastrophe that none of us wants.
LAWS terrorism is coming now, but stigmatization via international bans prevents
breakout and a massive terrorist upsurge
Ware 19 [Jacob Ware, writer with a master’s in security studies from Georgetown and an MA in IR and
modern history from the University of St Andrews, 9-24-2019, "Terrorist Groups, Artificial Intelligence,
and Killer Drones," War on the Rocks, https://warontherocks.com/2019/09/terrorist-groups-artificial-
intelligence-and-killer-drones/]/Kankee
In 2016, the Islamic State of Iraq and the Levant (ISIL) carried out its first successful drone attack in combat, killing two
Peshmerga warriors in northern Iraq. The attack continued the group’s record of employing increasingly
sophisticated technologies against its enemies, a trend mimicked by other nonstate armed groups around the world.
The following year, the group announced the formation of the “Unmanned Aircraft of the Mujahedeen,” a division
dedicated to the development and use of drones , and a more formal step toward the long-term
weaponization of drone technology. Terrorist groups are increasingly using 21st-century technologies, including
drones and elementary artificial intelligence (AI), in attacks. As it continues to be weaponized, AI could prove a formidable
threat, allowing adversaries — including nonstate actors — to automate killing on a massive scale. The
combination of drone expertise and more sophisticated AI could allow terrorist groups to acquire or
develop lethal autonomous weapons, or “killer robots,” which would dramatically increase their capacity to
create incidents of mass destruction in Western cities. As it expands its artificial intelligence capabilities, the U.S. government
should also strengthen its anti-AI capacity, paying particular attention to nonstate actors and the enduring threats they pose. For the purposes
of this article, I define artificial intelligence as technology capable of “mimicking human brain patterns,” including by learning and making
decisions. AI Could Turn Drones into Killer Robots The aforementioned ISIL attack was not the first case of nonstate actors employing drones in
combat. In January 2018, an unidentified Syrian rebel group deployed a swarm of 13 homemade drones carrying small submunitions to attack
Russian bases at Khmeimim and Tartus, while an August 2018 assassination attempt against Venezuela’s Nicolas Maduro used exploding
drones. Iran and its militia proxies have deployed drone-carried explosives several times, most notably in the September 2019 attack on Saudi
oil facilities near the country’s eastern coast. Pundits fear that the
drone’s debut as a terrorist tool against the West is not far
off, and that “the long-term implications for civilian populations are sobering ,” as James Phillips and Nathaniel
DeBevoise note in a Heritage Foundation commentary. In September 2017, FBI Director Christopher Wray told the Senate that drones
constituted an “imminent” terrorist threat to American cities, while the Department of Homeland Security warned of
terrorist groups applying “battlefield experiences to pursue new technologies and tactics, such as unmanned aerial systems.” Meanwhile, ISIL’s
success in deploying drones has been met with great excitement in jihadist circles. The group’s al-Naba newsletter celebrated a 2017 attack by
declaring “a new source of horror for the apostates!” The use of drones in combat indicates an intent and capability to
innovate and use increasingly savvy technologies for terrorist purposes , a process sure to continue with
more advanced forms of AI. Modern drones possess fairly elementary forms of artificial intelligence, but the technology is advancing:
Self-piloted drones are in development, and the European Union is funding projects to develop autonomous swarms to patrol its borders. AI
will enable terrorist groups to threaten physical security in new ways , making the current terrorism
challenge even more difficult to address. According to a February 2018 report, terrorists could benefit from commercially available
AI systems in several ways. The report predicts that autonomous vehicles will be used to deliver explosives; low-skill terrorists
will be endowed with widely available high-tech products; attacks will cause far more damage; terrorists will create
swarms of weapons to “execute rapid, coordinated attacks”; and, finally, attackers will be farther removed
from their targets in both time and location. As AI technology continues to develop and begins to proliferate ,
“AI [will] expand the set of actors who are capable of carrying out the attack, the rate at which these actors can
carry it out, and the set of plausible targets.” For many military experts and commentators, lethal autonomous
weapon systems, or “killer robots,” are the most feared application of artificial intelligence in military
technology. In the words of the American Conservative magazine, the difference between killer robots and current AI-drone technology is
that, with killer robots, “the software running the drone will decide who lives and who dies.” Thus, killer robots, combining drone technology
with more advanced AI, will possess the means and power to autonomously and independently engage humans. The lethal
autonomous
weapon has been called the “third revolution in warfare,” following gunpowder and nuclear weapons, and is expected to reinvent
conflict, not least terrorist tactics. Although completely autonomous weapons have not yet reached the world’s battlefields, current
weapons are on the cusp. South Korea, for instance, has developed and deployed the Samsung SGR-A1 sentry gun to its border with North
Korea. The gun supposedly can track movement and fire without human intervention. Robots train alongside marines in the California desert.
Israel’s flying Harpy munition can loiter for hours before detecting and engaging targets, while the United States and Russia are developing
tanks capable of operating autonomously. And the drones involved in the aforementioned rebel attack on Russian bases in Syria were equipped
with altitude and leveling sensors, as well as preprogrammed GPS to guide them to a predetermined target. Of particular concern is the
possibility of swarming attacks, composed of thousands or millions of tiny killer robots, each capable of engaging its own target. The
potentially devastating terrorist application of swarming autonomous drones is best summarized by Max
Tegmark, who has said that “if a million such killer drones can be dispatched from the back of a single
truck, then one has a horrifying weapon of mass destruction of a whole new kind : one that can
selectively kill only a prescribed category of people , leaving everybody and everything else unscathed .”
Precisely that hypothetical scenario was illustrated in a recent viral YouTube video, “Slaughterbots,” which depicted the release of thousands of
small munitions into British university lecture halls. The drones then pursued and attacked individuals who had shared certain political social
media posts. The video also depicts an attack targeting sitting U.S. policymakers on Capitol Hill. The video has been viewed over three million
times, and was met with increasing concern about potential terrorist applications of inevitable autonomous weapons technology. So far,
nonstate actors have only deployed “swarmed” drones sparingly, but it points to a worrying innovation: Swarming, weaponized killer
robots aimed at civilian crowds would be nearly impossible to defend against, and, if effective, cause massive
casualties. Terrorists Will Be Interested in Acquiring Lethal Autonomous Weapons Terrorist groups will be interested in
artificial intelligence and lethal autonomous weapons for three reasons — cost, traceability, and effectiveness. Firstly, killer
robots are likely to be extremely cheap, while still maintaining lethality. Experts agree that lethal
autonomous weapons, once fully developed, will provide a cost-effective alternative to terrorist groups
looking to maximize damage, with Tegmark arguing that “small AI-powered killer drones are likely to cost little more
than a smartphone.” Additionally, killer robots will minimize the human investment required for terrorist attacks,
with scholars arguing that “greater degrees of autonomy enable a greater amount of damage to be done by a
single person.” Artificial intelligence could make terrorist activity cheaper financially and in terms of
human capital, lowering the organizational costs required to commit attacks. Secondly, using autonomous weapons
will reduce the trace left by terrorists. A large number of munitions could be launched — and a large
amount of damage done — by a small number of people operating at considerable distance from the
target, reducing the signature left behind. In Tegmark’s words, for “a terrorist wanting to assassinate a
politician … all they need to do is upload their target’s photo and address into the killer robot : it can then fly
to the destination, identify and eliminate the person, and self-destruct to ensure nobody knows who was
responsible.” With autonomous weapons technology, terrorist groups will be able to launch increasingly complex attacks, and,
when they want to, escape without detection. Finally, killer robots could reduce, if not eliminate, the physical costs
and dangers of terrorism, rendering the operative “essentially invulnerable.” Raising the possibility of “fly and forget”
missions, lethal autonomous weapons might simply be deployed toward a target , and engage that target without
further human intervention. As P. W. Singer noted in 2012, “one [will] not have to be suicidal to carry out attacks
that previously might have required one to be so . This allows new players into the game, making al-Qaeda 2.0 and the next-
generation version of the Unabomber or Timothy McVeigh far more lethal.” Additionally, lethal autonomous weapons could
potentially reduce human aversion to killing, making terrorism even more palatable as a tactic for political
groups. According to the aforementioned February 2018 report, “AI systems can allow the actors who would otherwise be
performing the tasks to retain their anonymity and experience a greater degree of psychological distance
from the people they impact”; this would not only improve a terrorist’s chances of escape , as mentioned, but
reduce or even eliminate the moral or psychological barriers to murder . Terrorist Acquisition of Lethal Autonomous
Weapons Is Realistic The proliferation of artificial intelligence and killer robot technology to terrorist organizations is
realistic and likely to occur through three avenues — internal development, sales, and leaks. Firstly, modern terrorist organizations have
advanced scientific and engineering departments, and actively seek out skilled scientists for recruitment. ISIL, for example, has appealed for
scientists to trek to the caliphate to work on drone and AI technology. The individual technologies behind swarming killer robots — including
unmanned aerial vehicles, facial recognition, and machine-to-machine communication — already exist, and have been adapted by terrorist
organizations for other means. According to a French defense industry executive, “the technological challenge of scaling it up to swarms and
things like that doesn’t need any inventive step. It’s just a question of time and scale and I think that’s an absolute certainty that we should
worry about.” Secondly, autonomous weapons technology will likely proliferate through sales. Because AI research is led by private firms,
advanced AI technology will be publicly sold on the open market. As Michael Horowitz argues, “militant groups and less-capable states may
already have what they need to produce some simple autonomous weapon systems, and that capability is likely to spread even further for
purely commercial reasons.” The current framework controlling high-tech weapons proliferation — the Wassenaar Arrangement and Missile
Technology Control Regime — is voluntary, and is constantly tested by great-power weapons development. Given interest in developing AI-
guided weapons, this seems unlikely to change. Ultimately, as AI expert Toby Walsh notes, the world’s weapons companies can, and will, “make
a killing (pun very much intended) selling autonomous weapons to all sides of every conflict.” Finally, autonomous weapons technology is likely
to leak. Innovation in the AI field is led by the private sector, not the military, because of the myriad commercial applications of the technology.
This will make it more difficult to contain the technology, and prevent it from proliferating to nonstate actors. Perhaps the starkest warning has
been issued by Paul Scharre, a former U.S. defense official: “We are entering a world where the technology to build lethal autonomous
weapons is available not only to nation-states but to individuals as well. That world is not in the distant future. It’s already here.” Counter-
Terrorism Options Drones and AI provide a particularly daunting counter-terrorism challenge , simply because
effective counter-drone or anti-AI expertise does not yet exist. That said, as Daveed Gartenstein-Ross has noted, “in recent
years, we have seen multiple failures in imagination as analysts tried to discern what terrorists will do with emerging technologies. A failure in
imagination as artificial intelligence becomes cheaper and more widely available could be even costlier.” Action is urgently needed, and for
now, counter-terrorism policies are likely to fit into two categories, each with flaws: defenses and bans. Firstly, and most likely, Western
states could strengthen their defenses against drones and weaponized AI. This might involve strengthening current counter-
drone and anti-AI capabilities, improving training for local law enforcement, and establishing plans for mitigating drone or autonomous
weapons incidents. AI technology and systems will surely play an important role in this space, including in the development of anti-AI tools.
However, anti-AI defenses will be costly, and will need to be implemented across countless cities
throughout the entire Western world, something Michael Horton calls “a daunting challenge that will require
spending billions of dollars on electronic and kinetic countermeasures .” Swarms, Scharre notes, will prove
“devilishly hard to target,” given the number of munitions and their ability to spread over a wide area . In
addition, defenses will likely take a long time to erect effectively and will leave citizens exposed in the
meantime. Beyond defenses, AI will also be used in counter-terrorism intelligence and online content moderation, although this will surely
spark civil liberties challenges. Secondly, the international community could look to ban AI use in the military
through an international treaty sanctioned by the United Nations. This has been the strategy pursued by activist groups such as the
Campaign to Stop Killer Robots, while leading artificial intelligence researchers and scientific commentators have published open letters
warning of the risk of weaponized AI. That said, great powers are not likely to refrain from AI weapons development, and a ban might outlaw
positive uses of militarized AI. The
international community could also look to stigmatize, or delegitimize, weaponized AI and
lethal autonomous weapons sufficiently to deter terrorist use. Although modern terrorist groups have
proven extremely willing to improvise and innovate , and effective at doing so, there is an extensive list
of weapons — chemical weapons, biological weapons, cluster munitions, barrel bombs, and more —
accessible to terrorist organizations, but rarely used. This is partly down to the international stigma
associated with those munitions — if a norm is strong enough, terrorists might avoid using a weapon. However, norms take a long
time to develop, and are fragile and untrustworthy solutions. Evidently, good counter-terrorism options are limited. The U.S. government and
its intelligence agencies should continue to treat AI and lethal autonomous weapons as priorities, and identify new possible counter-terrorism
measures. Fortunately, some progress has been made: Nicholas Rasmussen, former director of the National Counterterrorism Center, admitted
at a Senate Homeland Security and Governmental Affairs Committee hearing in September 2017 that “there is a community of experts that has
emerged inside the federal government that is focused on this pretty much full time. Two years ago this was not a concern … We are trying to
up our game.” Nonstate actors are already deploying drones to attack their enemies. Lethal autonomous weapon systems are likely to
proliferate to terrorist groups, with potentially devastating consequences. The United States and its allies should urgently
address the rising threat by preparing stronger defenses against possible drone and swarm attacks, engaging with the defense
industry and AI experts warning of the threat, and supporting realistic international efforts to ban or stigmatize military
applications of artificial intelligence. Although the likelihood of such an event is low, a killer robot attack could cause massive
casualties, strike a devastating blow to the U.S. homeland, and cause widespread panic. The threat is imminent, and the time has come to act.
Terrorist drone swarms lead to bioweapon attacks – drone dispersal skirts traditional
defense
Kallenborn and Bleek 19 [Zachary Kallenborn, researcher specializing in chemical, biological,
radiological, and nuclear weapons, terrorism, and drone swarms, and Philipp C. Bleek, Associate
Professor, Nonproliferation and Terrorism Studies at the Middlebury Institute of International Studies,
02-14-2019, "Drones of Mass Destruction: Drone Swarms and the Future of Nuclear, Chemical, and
Biological Weapons," War on the Rocks, https://warontherocks.com/2019/02/drones-of-mass-
destruction-drone-swarms-and-the-future-of-nuclear-chemical-and-biological-weapons/]/Kankee
Chemical and Biological Weapons Proliferation Drone swarm technology is likely to encourage chemical and biological
weapons proliferation and improve the capabilities of states that already possess these weapons .
Terrorist organizations are also likely to be interested in the technology, especially more sophisticated actors like
the Islamic State, which has already shown interest in drone-based chemical and biological weapons
attacks. Drone swarms may also aid counter-proliferation, prevention, and response to a chemical or biological attack, but those applications
appear less significant than the offensive applications. Indeed, swarms have the potential to significantly improve
chemical and biological weapons delivery. Sensor drones could collect environmental data to improve
targeting, and attack drones could use this information in the timing and positioning for release , target
selection, and approach. For example, attack drones may release the agent earlier than planned based on
shifts in wind conditions assessed by sensor drones. Dispersed attacks also allow for more careful
targeting. Instead of spraying large masses of agent, drones could search for and target individuals or
specific vulnerabilities such as air ventilation systems . This also means the drones would not need to carry
as much agent. Moreover, drone swarms enable the use of combined arms tactics. Some attack drones
within the swarm could be equipped with chemical or biological payloads, while others could carry
conventional weapons. Chemical or biological attack drones might strike first to force adversary troops
into protective gear that inhibits movement, then follow up with conventional strikes. Although combined arms
tactics are possible with current delivery systems, drone swarms allow much closer integration between conventional
and unconventional weapons. These improvements in chemical and biological delivery could conceivably
weaken both the military and moral justifications for the relative marginalization of weapons in
international politics (with some key exceptions). As far as military utility goes, chemical and especially biological
weapons are often unreliable modes of attack. Environmental and territorial conditions such as precipitation, wind, humidity,
and vegetation reduce the efficacy of the agent, while protective gear may significantly or wholly mitigate the harm they cause. But drone-
based environmental sensors could make these weapons much more reliable, while combined arms tactics
could mitigate the impact of, or even gain advantage from, adversary use of protective gear. The moral opposition to
chemical and biological weapons has much to do with their indiscriminate nature and the consequential risk of collateral harm. In 1968, wind
blew a cloud of VX nerve agent from the Dugway Proving Grounds in Utah into a nearby farm, killing thousands of sheep. Public opposition to
the event helped catalyze the Nixon administration’s review of the U.S. chemical and biological weapons programs, culminating in an end to the
bioweapons program. With
improved targeting, including employing drone-based environmental sensors , it’s
possible to imagine less error-prone, more discriminate chemical and biological weapon delivery systems
that might be less morally objectionable . Of course, just because these weapons are more usable does not necessarily mean
they will reemerge. Modern chemical and biological weapons emerged in a different security environment. Various international laws may
constrain rearmament and significant usage, as might popular opinion or political leadership. Still, it’s worth considering how advances in
technology could make previously indiscriminate weapons more discriminate. At the same time, drone swarms may also help prevent
and respond to chemical and biological weapon attacks . Drone swarms could aid counter-proliferation efforts by, for
example, coordinating searches for previously unknown chemical and biological facilities to secure stockpiles after a war. They could similarly
coordinate searches along national borders to identify potential smuggling activity, including CBRN material smuggling, or searches through
cities to search for gaseous plumes. Notably, swarms could serve as mobile platforms for chemical or biological detectors with different types of
sensors to mitigate false positives. If an attack is successful, drones could coordinate mapping of affected areas to help guide responders.
Drones could even have sprayers to help clean up after an attack, without risking harm to humans. But given
the rarity of chemical
and biological weapons attacks and the technical uncertainty of creating reliable , drone-based CBRN
detectors, these applications appear less significant than the improvements to offensive capabilities .
An autonomous CBRN and/or drone swarm ban can be effective and won’t be
circumvented – the alternative is continued conventional and biological attacks
Kallenborn 10-14 [ Zachary Kallenborn, researcher specializing in chemical, biological, radiological,
and nuclear weapons, terrorism, and drone swarms, 10-14-2020, "A Partial Ban on Autonomous
Weapons Would Make Everyone Safer," Foreign Policy, https://foreignpolicy.com/2020/10/14/ai-
drones-swarms-killer-robots-partial-ban-on-autonomous-weapons-would-make-everyone-
safer/]/Kankee
the weapon operates under predesigned rules or is being controlled remotely. However, no human can reasonably control a
swarm of thousands of drones. The complexity is simply too much. They must monitor hundreds of
video, infrared, or other feeds, while planning the swarm’s actions and deciding who to kill . Such a
massive swarm must be autonomous, may be a weapon of mass destruction in its own right , and could
carry traditional weapons of mass destruction. Discussion of autonomous weapons takes place under the auspices of the
Convention on Certain Conventional Weapons, assuming the weapon fires bullets, bombs, or missiles. But an autonomous weapon
could just as readily be armed with CBRN agents . Autonomous vehicles are a great way to deliver
chemical, radiological, and biological weapons. An autonomous vehicle cannot get sick with anthrax , nor
choke on chlorine. Drones can more directly target enemies, while adjusting trajectories based on local
wind and humidity conditions . Plus, small drones can take to the air, fly indoors, and work together to
carry out attacks. Operatives from the Islamic State in Iraq and Syria were reportedly quite interested in
using drones to carry out radiological and potentially chemical attacks. North Korea also has an arsenal of
chemical, biological, and nuclear weapons and a thousand-drone fleet. When robots make decisions on
nuclear weapons, the fate of humanity is at stake. In 1983, at the height of the Cold War , a Soviet early
warning system concluded the United States had launched five nuclear missiles at the Soviet Union. The
computer expressed the highest degree of confidence in the conclusion. The likely response: immediate nuclear retaliation to level U.S. cities
and kill millions of American civilians. Fortunately, Stanislav Petrov, the
Soviet officer in charge of the warning system ,
concluded the computer was wrong. Petrov was correct. Without him, millions of people would be dead. New
restrictions on autonomous CBRN weapons should be a relatively easy avenue for new restrictions. A
wide range of treaties already restrict production, export, and use of CBRN weapons from the Geneva
Convention to the Nuclear Non-Proliferation Treaty and the Chemical Weapons Convention. At minimum,
governments could collectively agree to incorporate autonomous weapons in all applicable CBRN
weapons treaties. This would signal a greater willingness to adopt restrictions on autonomous weapons
without a requirement to resolve the question of autonomous weapons with conventional payloads . Of
course, a ban may require giving up capabilities like a nuclear “dead hand”—in the words of proponents, “an
automated strategic response system based on artificial intelligence”— but nuclear weapons experts are overwhelmingly
against the idea. The risks to great powers of increased CBRN weapons proliferation and accidental
nuclear war are far greater than any deterrent advantage already gained with a robust conventional and
nuclear force. Placing autonomous weapons on the global agenda in the first place is a definite success—a global treaty can never be made
if no one cares enough to even talk about it—but the question is what happens next. Do government experts simply keep talking or do these
meetings lead to actionable treaties? What combination of inducements, export controls, transparency measures, sanctions, and, in extreme
events, the use of force are best suited to preventing the threat? Historically, comprehensive bans took decades—the global community took
about 70 years to go from the Geneva Protocols against chemical weapons usage to states giving up the weapons—but autonomous weapons
are growing and proliferating rapidly. Countries might not be willing to ban the weapons outright, but banning
the highest-risk
autonomous weapons—drone swarms and autonomous weapons armed with CBRN agents —could
provide a foundation for reducing autonomous weapons risks . Great powers would give up little, while improving their
own security.
Drone prolif causes global miscalc – includes Iran, Israel, Indo-Pak, China-Japan and
terrorism
Boyle 15 [Michael J. Boyle, Associate Professor of Political Science at La Salle University and Senior
Fellow at the Foreign Policy Research Institute, 11-24-2014, “The Race for Drones”, Foreign Policy
Research Institute, https://www.sciencedirect.com/science/article/pii/S0030438714000763]/Kankee
Accidents and Spirals Another reason to be concerned about the growing drone arms race is the danger of
accidents and the conflict spirals that can come from them. While drones are becoming more sophisticated, they are
still prone to frequent accidents. According to an estimate in 2010, the United States has experienced at least 79
drone accidents costing at least $1 million each, as well as 38 Predator and Reaper drone crashes during
combat missions in Afghanistan and Iraq.76 Drones such as the Pioneer and Shadow have even higher rates of
accidents.77 A later estimate in 2014 put the total number of major drone crashes at over 400 since 2001.78
Although it is estimated that many of these accidents are caused by human error, and that accident rate is declining, these rates are still
much higher than comparable manned aircraft. 79 It is also probable that less sophisticated and robust models
sold by China and other new suppliers will have a higher rate of accident than the more robust American
models. Simply as a matter of probability, it is likely that drone accidents will become more commonplace as more
drones take to the skies in the future. Drone accidents are more than just unfortunate for those who happen to be hurt by them
when they fall from the skies. There is a serious risk that drones may interfere with civilian aircraft and cause
accidents with more substantial loss of life. In 2004, a German UAV nearly crashed into an Ariana Airlines Airbus A300 carrying
100 people in the skies over Kabul.80 Over the last ten years, drones have been equipped with anti-collision software
designed to avert such crashes, but dangers remain. One estimate in 2012 found that at least seven U.S. Predator
or Reaper drones have crashed overseas in the vicinity of civilian airports .81 In September 2013, the United
States was forced to move its drone operations from Camp Lemonnier in Djibouti due to concerns that drones
would crash into passenger planes from a nearby airport.82 This problem has also occurred close to home. In March 2013, a small
private drone came within 200 feet of an Alitalia commercial jet over John F. Kennedy Airport in New York. 83 One year later, an American
Airlines jet had a near-miss with a drone in Florida.84 Drones are particularly dangerous around civilian aircraft because they are hard to detect
on radar and because their owners are difficult to trace once an incident has been reported. The FAA has launched investigations of 23
incidents of illegal drone use near civilian airports or in proximity to aircraft, but in most cases the owners of the drones have never been
found.85 By 2015, it is estimated that 30,000 drones will be in American skies. A prominent airline pilots association has expressed concern
over this development and argued that the widespread introduction of drones into domestic airspace could
“profoundly degrade the safety of both commercial and general aviation flight operations ” unless they are
integrated into the FAA systems in a comprehensive way.86 As drones proliferate around the world, the dangers of
conflict spirals from accidents and collisions with civilian aircraft will multiply. With drones in the hands
of more governments and private suppliers, and regulation relatively weak in many countries , it will be
hard to control where drones fly and to keep them away from civilian aircraft. The rapid expansion of drones into
these markets, as well as their use by private parties in those governments, will increase the risks of an
accident between a drone and civilian aircraft —particularly in countries where the airspace is less well-
regulated than the United States and Europe . For example, in December 2013, Chinese police arrested four men for flying a
modified drone into the airspace of the Beijing airport, causing two flights to be diverted and multiple delays.87 Similarly, it will be harder
to keep drones away from sensitive locations where the costs of an accident might be very high. In
November 2014, France arrested three people for a series of 14 unexplained drone flights near its nuclear facilities over a three-week period.88
Many drone accidents will be merely unfortunate events, but some will carry with them a risk of a conflict spiral
particularly if the incident is misinterpreted as an intentional downing of a civilian aircraft. If a U.S. drone
struck an Iranian passenger airliner, for example, it is not hard to imagine the incident causing a serious
international crisis, along the same lines as the accidental U.S. downing of IranAir Flight 655 in 1988. The risk of a conflict
spiral from a drone accident between India and Pakistan, or Israel and one of its neighbors, should not
be ignored. Similarly, a collision between a Chinese drone and a Japanese civilian aircraft in the East China
Sea could produce disastrous consequences. It is also possible that insurgents will try to hijack drones and
even redirect them for attack in ways that generate adverse political consequences or spirals of conflict. In
2009, Iraqi insurgents managed to hack into the feed of a Predator drone , while in 2011 Iran made a widely-disputed
claim that it hijacked a U.S. stealth drone by feeding it false GPS coordinates to make it land in Iran itself ,
rather than Afghanistan.89 Hijacked drones are a particularly attractive way to test the nerves of a potential
rival by a weaker opponent, as they are less traceable and can shield the perpetrator with some degree of plausible deniability. As
drones wind up in the hands of more unscrupulous actors, and less reliable drones sold by China and others flood the market,
governments around the world will face a vastly increased risk of a conflict spiral from drone misuse ,
hijacking or collision with a civilian aircraft.
Autonomous nanobots are coming now and will cause nuclear terror (with mini-nukes
which avoid traditional defenses) and bioweapon attacks – this outweighs nuclear war and
causes extinction
Daniels 17 [Jeff Daniels, career journalist and Communications Manager for UCLA’s Y&S Nazarian
Center for Israel Studies with an MBA in Finance from the University of La Verne and a BA in Journalism
from CSU Northridge, 03-17-2017, “Mini-nukes and mosquito-like robot weapons being primed for
future warfare,” CNBC, https://www.cnbc.com/2017/03/17/mini-nukes-and-inspect-bot-weapons-
being-primed-for-future-warfare.html]/Kankee
Several countries are developing nanoweapons that could unleash attacks using mini-nuclear bombs and
insect-like lethal robots. While it may be the stuff of science fiction today, the advancement of nanotechnology in the
coming years will make it a bigger threat to humanity than conventional nuclear weapons, according to an expert. The
U.S., Russia and China are believed to be investing billions on nanoweapons research. “Nanobots are the real
concern about wiping out humanity because they can be weapons of mass destruction,” said Louis Del
Monte, a Minnesota-based physicist and futurist. He’s the author of a just released book entitled “Nanoweapons: A Growing Threat To
Humanity.” One unsettling prediction Del Monte’s made is that terrorists could get their hands on nanoweapons as early
as the late 2020s through black market sources. According to Del Monte, nanoweapons are much smaller than a strand
of human hair and the insect-like nanobots could be programmed to perform various tasks, including injecting
toxins into people or contaminating the water supply of a major city. Another scenario he suggested the
nanodrone could do in the future is fly into a room and drop a poison onto something, such as food, to presumably target
a particular individual. The federal government defines nanotechnology as the science, technology and engineering of things so small
they are measured on a nanoscale, or about 1 to 100 nanometers. A single nanometer is about 10 times smaller than the width of a human’s
DNA molecule. While nanotechnology has produced major benefits for medicine, electronics and industrial applications, federal research
is currently underway that could ultimately produce nanobots . For one, the Defense Advanced Research Projects Agency, or
DARPA, has a program called the Fast Lightweight Autonomy program for the purpose of allowing autonomous drones to
enter a building and avoid hitting walls or objects . DARPA announced a breakthrough last year after tests in a hangar in
Massachusetts. Previously, the Army Research Laboratory announced it created an advanced drone the size of a fly complete with a set of “tiny
robotic legs” — a major achievement since it presumably might be capable of entering a building undetected to perform surveillance, or used
for more nefarious actions. Frightening details about military nanotechnologies were outlined in a 2010 report from the Pentagon’s Defense
Threat Reduction Agency, including how “transgenic insects could be developed to produce and deliver protein-
based biological warfare agents, and be used offensively against targets in a foreign country.” It also
forecast “microexplosives” along with “nanobots serving as [bioweapons] delivery systems or as micro-
weapons themselves, and inhalable micro-particles to cripple personnel .” In the case of nanoscale robots, Del
Monte said they can be the size of a mosquito or smaller and programmed to use toxins to kill or immobilize
people; what’s more, these autonomous bots ultimately could become self-replicating. Last month’s targeted assassination
of Kim Jong-nam, the half-brother of North Korea’s ruler, was a stark reminder that toxins are available from a variety of
sources and can be unleashed in public locations. It’s also been alleged by Russia’s Pravda paper that nanoweapons were
used by the U.S. against foreign leaders. A Cambridge University conference on global catastrophic risk found a 5
percent risk of nanotech weapons causing human extinction before the year 2100. As for the mini-nukes, Del
Monte expects they represent “the most horrific near-term nanoweapons .” Nanotechnology opens up the
possibility to manufacture mini-nuke components so small that they are difficult to screen and detect .
Furthermore, the weapon (capable of an explosion equivalent to about 100 tons of TNT) could be compact enough to fit into
a pocket or purse and weigh about 5 pounds and destroy large buildings or be combined to do greater damage to an area.
“When we talk about making conventional nuclear weapons, they are difficult to make,” he said. “Making a mini-
nuke would be difficult but in some respects not as difficult as a full-blown nuclear weapon.” Del Monte explained that
the mini-nuke weapon is activated when the nanoscale laser triggers a small thermonuclear fusion bomb
using a tritium-deuterium fuel. Their size makes them difficult to screen , detect and also there’s “essentially
no fallout” associated with them. Still, while the mini-nukes are powerful in and of themselves, he expects they are unlikely to wipe out
humanity. He said a larger concern is the threat of the nanoscale robots, or nanobots because they are “the technological equivalent
of biological weapons.” The author said controlling these “smart nanobots” could become an issue since if lost, there could be
potentially millions of these deadly nanobots on the loose killing people indiscriminately . Earlier in his career, Del
Monte said he held a secret clearance when he worked on Defense Department programs at Honeywell, ranging from
missiles to satellites. He also previously worked on advanced computers at IBM and has several patents on
microelectronics. In those roles, he led development of microelectronics and sensors.
Banning LAWS solves – international stigmatization stops rogue development while
regulation alone fails
Wareham 17 [Mary Wareham, Human Rights Watch advocacy director of the Arms Division with
bachelor’s and master’s degrees in political science from Victoria University of Wellington, 11-9-2017,
"It’s Time For a Binding, Absolute Ban on Fully Autonomous Weapons," Human Rights Watch,
https://www.hrw.org/news/2017/11/09/its-time-binding-absolute-ban-fully-autonomous-
weapons]/Kankee
Since 2013, 19 countries have endorsed this ban objective and dozens more have affirmed the importance of
retaining meaningful or appropriate or adequate human control over critical combat functions of weapons systems. Yet
multilateral deliberations on this topic have proceeded at a snail’s pace while technology that will enable the development of fully autonomous
weapons bounds ahead. While international humanitarian law already sets limits on problematic weapons and their use, responsible
governments have in the past found it necessary to supplement existing legal frameworks for weapons that
by their nature pose significant humanitarian threats. Some contend that conducting weapons reviews before developing or
acquiring fully autonomous weapons would sufficiently regulate the weapons. Weapons reviews are required under Article 36 of Additional
Protocol I to the Geneva Conventions to assess the legality of the future use of a new weapon during its design, development and acquisition
phases. Yet weapons reviews are not universal, consistent or rigorously conducted, and they fail to address
the implications of weapons outside of an armed conflict context. Few governments conduct weapons
reviews and those that do follow varying standards . Reviews are often too narrow in scope sufficiently to
address every danger posed. States are also not obliged to release their reviews , and none are known to have
disclosed information about a review that rejected a proposed weapon . A binding, absolute ban on fully
autonomous weapons would reduce the chance of misuse of the weapons, would be easier to enforce,
and would enhance the stigma associated with violations . Moreover, a ban would maximise the
stigmatisation of fully autonomous weapons, creating a widely recognised norm and influencing even
those that do not join the treaty. Precedent shows that a ban would be achievable and effective. After three
years of informal talks with no outcome, it’s time for states to negotiate and adopt an international , legally binding
instrument that prohibits the development , production and use of fully autonomous weapons . If that is not
possible under the auspices of the CCW, states should explore other mechanisms to ban fully autonomous weapons without delay. The future
of our humanity depends on it.
A LAWS ban can be successful and avoids widespread arms races and drone terror
Russell et al. 15 [Stuart Russell, professor of computer science and director of the Center for
Intelligent Systems at UC Berkeley, Max Tegmark, professor of physics at MIT and co-founder of the
Future of Life Institute, and Toby Walsh, professor of AI at the University of New South Wales and
NICTA, 08-03-2015, "Why We Really Should Ban Autonomous Weapons: A Response," IEEE Spectrum,
https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/why-we-really-should-ban-
autonomous-weapons]/Kankee
Note that his first two arguments apply to any weapons system. Yet the world community has rather successfully banned
biological weapons, space-based nuclear weapons, and blinding laser weapons; and even for arms such as
chemical weapons, land mines, and cluster munitions where bans have been breached or not universally ratified, severe
stigmatization has limited their use. We wonder if Ackerman supports those bans and, if so, why. Argument (2) amounts to the
claim that as long as there are evil people, we need to make sure they are well armed with the latest technology; to prevent them from gaining
access to the most effective means of killing people is to “blame the technology” for the evil inclinations of humans. We disagree. The
purpose of preventing them from gaining access to the technology is to prevent them from killing large
numbers of people. A treaty can be effective in this regard by stopping an arms race and preventing large-
scale manufacturing of such weapons. Moreover, a treaty certainly does not apply to defensive anti-robot weapons, even if they
operate in autonomous mode. The question (3) is in our opinion a rather irrelevant distraction from the more important question of whether to
start an arms race. It is an interesting point that we discuss in the open letter and it represents exactly the pro-weapon position espoused over
the last several years by some participants in the debate. The current answer to this question is certainly no: AI systems are incapable
of exercising the required judgment. The answer might eventually change, however, as AI technology improves. But is it actually
“the real question,” as Ackerman asserts? We think not. His argument, like those of others before him, has an implicit ceteris
paribus assumption that, after the advent of autonomous weapons, the specific killing opportunities —
numbers, times, locations, places, circumstances, victims—will be exactly those that would have occurred
with human soldiers, had autonomous weapons been banned. This is rather like assuming that cruise
missiles will only be used in exactly those settings where spears would have been used in the past .
Obviously, the assumption is false. Autonomous weapons are completely different from human soldiers
and would be used in completely different ways . As our open letter makes clear, the key issue is the likely
consequences of an arms race—for example, the availability on the black market of mass quantities of low-
cost, anti-personnel micro-robots that can be deployed by one person to anonymously kill thousands or
millions of people who meet the user’s targeting criteria . Autonomous weapons are potentially weapons of
mass destruction. While some nations might not choose to use them for such purposes, other nations and certainly terrorists
might find them irresistible. Which leads to Ackerman’s fourth point: his proposed alternative plan of making
autonomous armed robots ethical. But what more specifically is this plan? To borrow a phrase from the movie Interstellar, in
Ackerman’s world robots will always have their “humanitarian setting” at 100 percent. Yet he worries about
enforcement of a ban in his first argument: how would it be easier to enforce that enemy autonomous weapons are
100 percent ethical than to enforce that they are not produced in the first place? Moreover, one cannot
consistently claim that the well-trained soldiers of civilized nations are so bad at following the rules of
war that robots can do better, while at the same time claiming that rogue nations, dictators, and terrorist
groups are so good at following the rules of war that they will never choose to deploy robots in ways
that violate these rules. One point on which we agree with Ackerman is that negotiating and implementing a ban will be hard. But as
John F. Kennedy emphasized when announcing the Moon missions, hard things are worth attempting when success will greatly benefit the
future of humanity.
And a ban increases US credibility and general adherence to international law
Garcia 15 [Denise Garcia, Associate Professor in the Department of Political Science and the
International Affairs program, and a Global Resilience Institute Faculty at Northeastern University in
Boston and a Nobel Peace Institute Fellow, 1-16-2015, "Killer Robots: Why the US should Lead the Ban,"
Wiley Online Library, https://onlinelibrary-wiley-com.ezproxy.library.unlv.edu/doi/full/10.1111/1758-
5899.12186]/Kankee
The need for a champion state The US stands to gain a great deal of lost moral legitimacy from preventively
banning killer robots. Rapid proliferation means that these weapons will not provide a durable
comparative advantage, so it gives up little. The US can stop malicious use of these systems by leading a
ban, as has occurred with chemical or biological weapons . The US is pursuing killer robots because it can. This pursuit is
not based on any well‐considered military justification but just a bureaucracy run amok . With the hits
the US has taken to its international standing it could really use an astounding win . And this is not just any
win; it is a win for all of humanity that will show that the US cares about preventing civilian casualties
and unimaginable tragedies and works to promote human rights, particularly the right to life. Historically,
several states have embraced efforts to prohibit superfluous and unnecessary armaments. In my research, I have found that individual
states can become champions of such causes and unleash real progress in disarmament diplomacy. This
occurs when states become invested in changing a prevailing widespread behavior and are able to build
momentum and galvanize action influencing others to do the same . Recently, champion states have helped
create extraordinary new international treaties: two prohibiting two classes of superfluous arms – landmines and
cluster munitions – and another that created the first conventional arms transfers treaty. The 1997 treaty prohibiting
mines and the 2008 one that banned cluster munitions were success stories because they prohibited weapons that harmed civilians
indiscriminately. These treaties’ successful implementation in the last decades has made real difference to human
security worldwide. The 2013 Arms Trade Treaty, the first global legal agreement on the transfer of conventional arms is a significant
first step to imprinting more transparency and accountability in the unruly trade of arms (Garcia, 2011; Rodine‐Hardy, 2013). The presence of
an ‘epistemic community’ – a group of scientists and activists with common scientific and professional language and views that are able to
generate credible information – is a powerful tool for mobilizing attention towards action (Garcia, 2006). In the case of autonomous weapons,
ICRAC serves such a purpose. The launch of a transnational campaign is another key element to summon awareness at several levels of
diplomatic and global action. The Stop Killer Robots Campaign is in place and is attracting unprecedented response. The
US would be
the right candidate to become a champion state for a ban . The US considers itself a nation with high
moral values, and leading a ban would additionally benefit its national security. The US has led in the creation of some of
the cornerstones of existing international law and is well placed to lead the way towards prohibiting the
great indignity of allowing machines to decide who to kill . This noble goal would enhance the US's standing
and prestige. It would help it to regain some of its lost luster and moral clout. In this highly interconnected world,
these are qualities that may indirectly augment national security. The US is the most powerful nation in the
world both economically and militarily . Therefore the US has the diplomatic resources to advance a ban
quickly; this is particularly true due to both its standing on the UN Security Council and worldwide
alliances. Killer robots technology will add an unnecessary element of indeterminacy and uncertainty to both a defense policy looking at
cutting costs and a foreign policy struggling to maintain a meaningful footprint in a shifting global power landscape. There is a small window of
opportunity for the US to stop the automation of killing‐yielding warfare before mass proliferation occurs. Autonomous weapons will spread
not only to other states but also to hostile nonstate actors, or tyrants. If drone proliferation serves as a gauge of how fully
autonomous weapons will proliferate, then there is great cause for worry. According to the US Government Accountability Office 2012 report
(Heyns, 2010), a total of 76 countries have some form of drone and 16 countries possess armed drones: the US, UK, Sweden, Italy, Israel,
France, Germany, Saudi Arabia, UAE, Iran, Russia, China, Lebanon, Taiwan, India and Pakistan. Killer robots are an unreasonable idea but could
become an unacceptable reality. The time for global action to stop the spread of this precarious and perilous killing‐yielding technology is now
and the US is perfectly positioned to keep humanity ‘in the loop’ and to do so at the UN.
Contention 3: Robby the Philosophical Robot
LAWS lead to forever wars and civilian deaths while supporting colonialism
-humanitarian regulation fails
Pasquale 10-15 [Frank Pasquale, professor of law at Brooklyn Law School, 10-15-2020, "‘Machines set
loose to slaughter’: the dangerous rise of military AI," Guardian,
https://www.theguardian.com/news/2020/oct/15/dangerous-rise-of-military-ai-drone-swarm-
autonomous-weapon]/Kankee
In theory, a preference for controlled machine violence rather than unpredictable human violence might seem reasonable. Massacres that take
place during war often seem to be rooted in irrational emotion. Yet we often reserve our deepest condemnation not for violence
done in the heat of passion, but for the premeditated murderer who coolly planned his attack . The history of
warfare offers many examples of more carefully planned massacres. And surely any robotic weapons system
is likely to be designed with some kind of override feature , which would be controlled by human operators, subject to all the
normal human passions and irrationality. Any attempt to code law and ethics into killer robots raises enormous
practical difficulties. Computer science professor Noel Sharkey has argued that it is impossible to
programme a robot warrior with reactions to the infinite array of situations that could arise in the heat
of conflict. Like an autonomous car rendered helpless by snow interfering with its sensors , an
autonomous weapon system in the fog of war is dangerous. Most soldiers would testify that the everyday
experience of war is long stretches of boredom punctuated by sudden, terrifying spells of disorder. Standardising accounts
of such incidents, in order to guide robotic weapons, might be impossible. Machine learning has worked
best where there is a massive dataset with clearly understood examples of good and bad, right and wrong.
For example, credit card companies have improved fraud detection mechanisms with constant analyses of
hundreds of millions of transactions, where false negatives and false positives are easily labelled with
nearly 100% accuracy. Would it be possible to “datafy” the experiences of soldiers in Iraq, deciding
whether to fire at ambiguous enemies? Even if it were, how relevant would such a dataset be for
occupations of, say, Sudan or Yemen (two of the many nations with some kind of US military presence)? Given these difficulties, it is
hard to avoid the conclusion that the idea of ethical robotic killing machines is unrealistic, and all too likely to
support dangerous fantasies of pushbutton wars and guiltless slaughters. International humanitarian
law, which governs armed conflict, poses even more challenges to developers of autonomous weapons . A key ethical
principle of warfare has been one of discrimination: requiring attackers to distinguish between combatants and civilians. But guerrilla or
insurgent warfare has become increasingly common in recent decades, and combatants in such situations rarely
wear uniforms, making it harder to distinguish them from civilians. Given the difficulties human soldiers
face in this regard, it’s easy to see the even greater risk posed by robotic weapons systems. Proponents of such
weapons insist that the machines’ powers of discrimination are only improving. Even if this is so, it is a massive leap in logic to
assume that commanders will use these technological advances to develop just principles of
discrimination in the din and confusion of war. As the French thinker Grégoire Chamayou has written, the category of
“combatant” (a legitimate target) has already tended to “be diluted in such a way as to extend to any form of
membership of, collaboration with, or presumed sympathy for some militant organization”. The principle of
distinguishing between combatants and civilians is only one of many international laws governing warfare. There is also the rule that military
operations must be “proportional” – a balance must be struck between potential harm to civilians and the military advantage that might result
from the action. The US air force has described the question of proportionality as “an inherently subjective determination that will be resolved
on a case by case basis”. No
matter how well technology monitors, detects and neutralises threats, there is no
evidence that it can engage in the type of subtle and flexible reasoning essential to the application of
even slightly ambiguous laws or norms. Even if we were to assume that technological advances could
reduce the use of lethal force in warfare , would that always be a good thing? Surveying the growing influence of
human rights principles on conflict, the historian Samuel Moyn observes a paradox: warfare
has become at once “more humane
and harder to end”. For invaders, robots spare politicians the worry of casualties stoking opposition at home.
An iron fist in the velvet glove of advanced technology , drones can mete out just enough surveillance to
pacify the occupied, while avoiding the kind of devastating bloodshed that would provoke a revolution
or international intervention. In this robotised vision of “humane domination”, war would look more and more like an
extraterritorial police action. Enemies would be replaced with suspect persons subject to mechanised detention instead of lethal
force. However lifesaving it may be, Moyn suggests, the massive power differential at the heart of technologised
occupations is not a proper foundation for a legitimate international order . Chamayou is also sceptical. In his
insightful book Drone Theory, he reminds readers of the slaughter of 10,000 Sudanese in 1898 by an Anglo-Egyptian force armed with machine
guns, which itself only suffered 48 casualties. Chamayou
brands the drone “the weapon of amnesiac postcolonial
violence”. He also casts doubt on whether advances in robotics would actually result in the kind of precision
that fans of killer robots promise. Civilians are routinely killed by military drones piloted by humans.
Removing that possibility may involve an equally grim future in which computing systems conduct such
intense surveillance on subject populations that they can assess the threat posed by each person within
it (and liquidate or spare them accordingly ). Drone advocates say the weapon is key to a more discriminating and humane
warfare. But for Chamayou, “by ruling out the possibility of combat , the drone destroys the very possibility of
any clear differentiation between combatants and noncombatants”. Chamayou’s claim may seem like hyperbole, but
consider the situation on the ground in Yemen or Pakistani hinterlands: Is there really any serious resistance
that the “militants” can sustain against a stream of hundreds or thousands of unmanned aerial vehicles
patrolling their skies? Such a controlled environment amounts to a disturbing fusion of war and policing , stripped of
the restrictions and safeguards that have been established to at least try to make these fields
accountable. How should global leaders respond to the prospect of these dangerous new weapons technologies? One option is to try to
come together to ban outright certain methods of killing. To understand whether or not such international arms control agreements could
work, it is worth looking at the past. The antipersonnel landmine, designed to kill or maim anyone who stepped on or near it, was an early
automated weapon. It terrified combatants in the first world war. Cheap and easy to distribute, mines continued to be used in smaller conflicts
around the globe. By 1994, soldiers had laid 100m landmines in 62 countries.
LAWS harm international criminal courts and government accountability
Roach 16 [Steven C. Roach, Associate Professor of IR at the School of Interdisciplinary Global Studies at
the University of South Florida-Tampa, 11-20-2016, "Holding Killer Robots Accountable? The New Moral
Challenge of 21st Century Warfare," Columbia Journal of International Affairs,
https://jia.sipa.columbia.edu/online-articles/holding-killer-robots-accountable]/Kankee
More and more military robots are being programmed to kill. The hope is that the absence of emotions and promotion of technical precision
will save lives. But officials also expect these same robots to become fully autonomous , or capable of making their own
decisions, when it comes to killing. The possibility that they may begin to kill indiscriminately is a scenario that more and more military
strategists and ethicists are taking seriously. The most powerful countries' military strategists are betting that autonomous robotic weapons
will give them an advantage militarily. In fact, the U.S., Britain, and China have already begun research on the development of new Lethal
Autonomous Weapons Systems (LAWS), or advanced robotic weapons systems that carry their own sense detectors. In 2015, the United States,
for instance, unveiled the design of the X-47B, a new pod-shaped aircraft that can be autonomously refueled in mid-air, while
Britain, not to be outdone, is working on the Taranis aircraft equipped with automatic laser sensors. With nearly USD 72 billion invested in such
technology, the U.S. continues to maintain a position that such special technology poses few risks to civilians. More importantly, it will allow the
US to better protect itself from outside threats. However, it is also true that such technologies may incur greater human
collateral damage. Indeed, software malfunction or programming errors have already exposed the
limitations of LAWS technology. Signs of this threat have surfaced from earlier incidents involving limited-
supervision LAWS, or semi-autonomous LAWS. In 2007, a South African semi-autonomous anti-aircraft system
accidentally fired upon and killed 7 South African soldiers; and in 1988, the U.S. Aegis air defense system mistakenly
shot down an Iranian passenger airliner. Both incidents raise the question of whether we can afford to
ignore the moral and political fallout of producing LAWS. It is a question that has also called attention to the thorny issue of
whether killer robots can ever be held accountable for their actions. Indeed, with no human at the helm , or person whose
emotions and conscious actions can be targeted/traced , it becomes increasingly unclear as to how to
prosecute the destructive actions of robots. One option is to file civil charges, effectively holding the civil
programmers of these robots liable for damages . This is not likely to curb the destructive actions of killer
robots, since proof is required that the maker had knowledge of a programming defect . With no reliable
criteria for establishing intent, then, criminal accountability of LAWS continues to remain a pressing issue,
particularly given the U.S.'s plan of making robots fully autonomous within the next ten years. In this case, any perceived military
advantage will beg the question of who holds responsibility ; if fully autonomous robots lack emotions
and mental states (conscious thoughts), there will be little or no legal basis for establishing direct and even
command responsibility (that commanders had the foreknowledge to prevent such destructive outcomes). The result is that
the guilt and intent of a growing population of robot killers will become increasingly displaced within the
corpus of international criminal law. This, in turn, will defy the rapid evolution and efficacy of international
criminal law and its many rules of procedure for determining intent and knowledge of war criminals. The
International Criminal Court (ICC) and International Criminal Tribunals , for example, have brought hundreds of war
criminals to justice and arguably helped deter criminal behavior. Such deterrent effects, which rely on
the capacity of courts to tap into the mental state of perpetrators , cannot possibly apply to fully
autonomous robots programmed to kill. This has led many to proclaim an accountability gap between
international criminal accountability and autonomous robots . For some, bridging this gap will require a
complete ban on LAWS. Human Rights Watch (HRW) has been in the forefront of this movement, along with The Campaign to Stop
Killer Robots, a coalition of nongovernmental organizations (NGOs) working to ban fully autonomous weapons. In a report issued in April 2015,
HRW documented the rapid rise of many semi-autonomous weapons, arguing that regulation will do little to stop the destructive impact of fully
autonomous killer robots. HRW lawyers and activists recently voiced their concerns at a delegate meeting of the Convention on Certain
Conventional Weapons, an agreement signed by 121 countries that have pledged to eliminate weapons that indiscriminately kill civilians. The
meeting did little to change the political reality of LAWS or the most powerful countries’ commitment to produce better, more sophisticated
LAWS. As the political strategists Peter Singer and August Cole argue, it is far more realistic to erect new laws and rules to hold humans
accountable for any lethal mistake made by the robots they produced. By clarifying which maker is and is not responsible, the hope is that
authorities will adopt rules constraining the reckless behavior of states and corporations. New ethical guidelines to regulate the moral conduct
of those programming the robots should be developed. This involves first and foremost challenging the prevailing moral criterion that the most
powerful countries have an obligation to use LAWS because of the few risks they pose to civilians. Such an obligation can only lead to more
human collateral damage, which makes it morally unsustainable in light of the estimated 474 civilian deaths caused by drone strikes from 2009
to 2015. The priority now is to formulate moral criteria that will allow us to address the above accountability gap through flexible obligations,
principles, and rules of cooperation. Conceptually, this would entail readapting just war theory to address the legal status of robots as a special
type of combatant. Development of this framework will involve political costs and sacrifice, but this will result in a reward worth the cost.
LAWS cause loads of accidental and illegitimate deaths
Sparrow 16 [Robert Sparrow, professor in the philosophy program at Monash University, a chief investigator in
the Australian Research Council Centre of Excellence for Electromaterials Science, and an adjunct
professor in the Centre for Human Bioethics at Monash University, 3-10-2016, "Robots and Respect:
Assessing the Case Against Autonomous Weapon Systems," Cambridge Core,
https://www.cambridge.org/core/journals/ethics-and-international-affairs/article/robots-and-respect-
assessing-the-case-against-autonomous-weapon-
systems/D3FBB27E12F68AAF399EAE966A4EC827]/Kankee
Difficulties with Discrimination Critics of Arkin's proposal have been quick to point out just how far existing robots are
from being able to outperform human beings when it comes to adherence to the requirements of jus in bello.
23 In particular, Arkin systematically underestimates the extent of the challenges involved in designing
robots that can reliably distinguish legitimate from illegitimate targets in war. Despite many decades of research—
and much progress in recent years—perception remains one of the “hard problems” of engineering. It is
notoriously difficult for a computer to reliably identify objects of interest within a given environment
and to distinguish different classes of objects. This is even more the case in crowded and complex
unstructured environments and when the environment and the sensor are in motion relative to each
other. In order for AWS to be able to identify, track, and target armed men, for instance, they would need to be able to
distinguish between a person carrying an assault rifle and a person carrying a metal tube or a folded
umbrella. Moreover, in order to be able to assess the likelihood of collateral damage and thus the extent to which a
particular attack would satisfy the jus in bello requirement of proportionality, autonomous weapons will need to be able to
identify and enumerate civilian targets reliably, as well as potential military targets. Thus, it will not be
sufficient for AWS simply to identify and track armed persons (by recognizing the LIDAR 24 signature of an AK-47, for
instance)—they must also be able to identify and track unarmed persons , including children , in order to
refrain from attacks on military targets that would involve an unacceptably high number of civilian
casualties. Weapons intended to destroy armored vehicles must be capable of distinguishing them from among
the almost countless different cars and trucks manufactured around the world; autonomous submarines must be able to
distinguish warships from merchant vessels, and so on. Moreover, AWS must be capable of achieving these
tasks while their sensors are in motion, from a wide range of viewing angles in visually cluttered
environments and in a variety of lighting conditions . These problems may be more tractable in some domains than others,
but they are all formidable challenges to the development of AWS. In fact, the problem of discriminating between legitimate and illegitimate
targets is even more difficult than the foregoing demonstrates. For instance, not every person carrying a weapon is directly
engaged in armed conflict (in many parts of the world carrying a weapon is a matter of male honor ); with
prior approval, foreign warships can pass through the territorial waters of another state ; neutral troops
or peacekeeping forces are sometimes present in areas in which legitimate targets are located ; and children
sometimes climb on decommissioned tanks placed in playgrounds . Thus, in order to discriminate between combatants
and noncombatants, it is not sufficient to be able to detect whether someone (or something) is carrying a weapon.
Discrimination is a matter of context, and often of political context. It will be extremely difficult to program robots to be
able to make this kind of judgment. 25 Even if a weapon system could reliably distinguish combatants from noncombatants, this is
not the same as being able to distinguish between legitimate and illegitimate targets. According to jus in bello conventions, attacks on
combatants may be illegitimate in at least three sorts of circumstances: first, where such attacks may be expected to
cause a disproportionate number of civilian casualties (Additional Protocol I to the Geneva Conventions, Article 57); 26
second, where they would constitute an unnecessarily destructive and excessive use of force ; 27 and third,
where the target has indicated a desire to surrender or is otherwise hors de combat (Additional Protocol I to the
Geneva Conventions, Article 41). 28 Before it would be ethical to deploy AWS, then, the systems will need to be capable of
making these sorts of discriminations , all of which involve reasoning at a high level of abstraction. Thus, for instance, how many
noncombatant deaths it would be permissible to knowingly cause in the course of an attack on a legitimate military target depends on the
military advantage that the destruction of the target is intended to serve; the availability of alternative means of attacking the target; the
consequences of not attacking the target at that time (which in turn is partially a function of the likelihood that an opportunity to attack the
target will arise again); the availability of alternative means of achieving the desired military objective; and the weaponry available to conduct
the attack. Similarly, whether an attack would constitute an unnecessarily destructive use of force (which it may, even where there is no risk of
killing noncombatants) is a function of the nature of the military object being targeted; the extent of the military advantage the attack is
intended to secure; and the availability of alternative, less destructive, means of achieving this advantage. Assessing
these matters
requires extensive knowledge and understanding of the world, including the capacity to interpret and
predict the actions of human beings. In particular, assessing the extent to which an attack will achieve a
definite military advantage requires an understanding of the balance and disposition of forces in the
battlespace, the capacity to anticipate the probable responses of the enemy to various threats and
circumstances, and an awareness of wider strategic and political considerations . 29 It is difficult to
imagine how any computer could make these sorts of judgments short of the development of a human-level
general intelligence—that is, “strong” AI. 30 Identifying when enemy forces have surrendered or are otherwise
hors de combat is also a profound challenge for any autonomous system . 31 Perhaps it will be possible to program
AWS to recognize the white flag of surrender or to promulgate a convention that all combatants will carry a “surrender beacon” that
indicates when they are no longer participating in hostilities. 32 Yet these measures would not resolve the problem of
identifying those who are hors de combat. A gravely wounded soldier separated from his comrades is
not a legitimate target even if he has not indicated the desire to surrender (indeed, he may have had no
opportunity to do so), but it may be extremely hard for a robot to distinguish such a person from one lying in
ambush. Similarly, a ship that has had its guns destroyed or that has been holed below the water so that all
hands are required just to remain afloat —and is therefore no military threat—will not always have a different
radar or infrared profile from a functioning warship . Human beings can often—if not always—recognize such
situations with reference to context and expectations about how people will behave in various
circumstances. Again, short of possessing a human-level general intelligence, it is difficult to imagine how a computer could make these
discriminations. Possible Solutions? “Ethical” Robots and Human Oversight Arkin has offered two responses to these sorts of criticisms. I believe
both are inadequate. First, Arkin has suggested that it should be possible to build into the weapon system the capacity to comply with the
relevant ethical imperatives through what he calls an “ethical governor.” 33 This will not, of course, address the problems of identifying and
classifying objects in complex environments, although it is possible that improvements in computer vision technology will reduce these
problems to a manageable level. More fundamentally, it presumes an impoverished account of ethics as a system of clearly defined rules with a
clearly defined hierarchy for resolving clashes between them. The sketches of deontological or utilitarian systems of ethics that philosophers
have developed are just that—sketches. The task of ethical theory is to try to explain and systematize the ethical intuitions that properly
situated and adequately informed persons evince when confronted with various ethical dilemmas. These intuitions are extremely complex and
context dependent, which is why philosophers are still arguing about whether they are primarily deontological or consequentialist or perhaps
virtue-theoretical. It is these—still poorly understood and often highly contested—intuitions that a machine would need to be capable of
replicating in order for it to “do” ethics. Moreover, even the schematized accounts of some subsets of these intuitions that philosophers have
developed require agents to reason at a high level of abstraction and to be able to make complex contextual judgments for their application.
For instance, consequentialists must be capable of predicting the effects of our actions in the real world, making a judgment about when this
attempt to track consequences—which are, after all, essentially infinite—may reasonably be curtailed, and assessing the relative value of
different states of the world. It is unclear whether even human beings can do this reliably (which itself is a reason to be cautious about
embracing consequentialism), but it seems highly unlikely that, short of achieving human-level general intelligence, machines will ever be able
to do so. Similarly, Kantian ethics requires agents to identify the moral principles relevant to their circumstances and resolve any clashes
between them—again a task that requires a high degree of critical intelligence. 34 However, the most fundamental barrier to building an
“ethical robot” is that ethics is a realm of meanings. That is to say, understanding the nature of our actions—what they mean—is fundamental
to ethical reasoning and behavior. 35 For instance, most of the time intentionally killing a human being is murder —but
not during a declared armed conflict, when both the killer and the victim are combatants ; or in situations of
self-defense; or when it has been mandated by the state after a fair criminal trial. Thus, in order to be able to judge whether a
particular killing is murder or not, one must be able to track reliably the application of concepts like
intention, rights, legitimacy, and justice—a task that seems likely to remain well beyond the capacity of any
computer for the foreseeable future. Perhaps more importantly, the meaning of murder—why it is a great evil—is not
captured by any set of rules that distinguishes murder from other forms of killing , but only by its place
within a wider network of moral and emotional responses . The idea that a properly programmed machine could behave
ethically, short of becoming a full moral agent, only makes sense in the context of a deep-seated behaviorism of the sort that has haunted
computer science and cognitive science for decades. Arkin's second suggestion is that weaponized robots could be designed to allow a human
operator to monitor the ethical reasoning of the robot. The operator could then intervene whenever she anticipates that the robot is about to
do something unethical. 36 Other authors have suggested that AWS could be designed to contact and await instruction from a human operator
whenever they encounter a situation their own programming is unable to resolve. 37 This is problematic for two reasons. First, the need to
“phone home” for ethical reassurance would mitigate two of the main military advantages of autonomous weapons: their capacity to make
decisions more rapidly than human beings, 38 and their ability to operate in environments where it is difficult to establish and maintain reliable
communications with a human pilot. 39 If an “autonomous” weapon has to rely on human supervision to attack targets in complex
environments, it would be, at most, semi-autonomous. 40 Second, it presumes that the problem of accurately identifying the ethical questions
at stake and/or determining when the ethics of an attack is uncertain is more tractable than resolving uncertainty about the ethics of a given
action. However, the
capacity of AWS to assess their own ability to answer an ethical question would itself
require the capacity for ethical deliberation at the same level of complexity needed to answer the
original ethical question. Thus, if we cannot trust a machine to make ethical judgments reliably , we cannot
trust it to identify when its judgments themselves might be unreliable. Arkin and His Critics: Round II
That, alongside the lack of interpersonal relationships with LAWS, violates the
principles of jus in bello
Sparrow 16 [Robert Sparrow, professor in the philosophy program at Monash University, a chief investigator in
the Australian Research Council Centre of Excellence for Electromaterials Science, and an adjunct
professor in the Centre for Human Bioethics at Monash University, 3-10-2016, "Robots and Respect:
Assessing the Case Against Autonomous Weapon Systems," Cambridge Core,
https://www.cambridge.org/core/journals/ethics-and-international-affairs/article/robots-and-respect-
assessing-the-case-against-autonomous-weapon-
systems/D3FBB27E12F68AAF399EAE966A4EC827]/Kankee
officer intends to kill. Neither the fact that the person who authorizes the launch does not know precisely who
she is killing when she sends an AWS into action nor the fact that the identity of those persons may be
objectively indeterminate at the point of launch , seems to rule out the possibility of the appropriate sort of relationship of
respect. When a missile officer launches a cruise missile to strike a set of GPS coordinates 1,000 kilometers away, it
is highly unlikely that she knows the identity of those she intends to kill . 64 Similarly, mines and improvised
explosive devices (IEDs) kill anyone who happens to trigger them and thus attack persons whose identity is actually
indeterminate and not merely contingently unknown . If an interpersonal relationship is possible while using these weapons,
it is not clear why there could not be an interpersonal relationship between the commanding officer launching AWS and the people these
weapons kill. Thus, neither of these features of AWS would appear to function as an absolute barrier to the existence of the appropriate
relationship of respect. That said, however, it is important to note that this comparison is not entirely favorable to either AWS or these other
sorts of weapons. People often do feel uneasy about the ethics of anonymous long-range killing and also—perhaps especially—about
landmines and IEDs. 65 Highlighting the analogies with AWS might even render people more uncomfortable with these more familiar weapons.
Nevertheless, insofar as contemporary thinking about jus in bello has yet decisively to reject other sorts of weapons that
kill persons
whose identity is unknown or actually indeterminate without risk to the user, it might appear illogical to reject AWS
on these grounds. It is also worth noting that the language of machine autonomy sits uneasily alongside the claim that autonomous systems are
properly thought of merely as tools to realize the intentions of those who wield them. 66 The more advocates of robotic
weapons laud their capacity to make complex decisions without input from a human operator, the more
difficult it is to believe that AWS connect the killer and the killed directly enough to sustain the
interpersonal relationship that Nagel argues is essential to the principle of distinction. That is to say, even if the machine is
not a full moral agent, it is tempting to think that it might be an “artificial agent” with sufficient agency, or
a simulacrum of such, to problematize the “transmission” of intention. This is why I have argued elsewhere that the use of
such systems may render the attribution of responsibility for the actions of AWS to their operators
problematic. 67 As Heather Roff has put it, 68 drawing on the work of Andreas Matthias, 69 the use of autonomous weapons seems to risk
a “responsibility gap”; and where this gap exists, it will not be plausible to hold that when a commander sends AWS
into action he or she is acknowledging the humanity of those the machines eventually kill . However, this
argument about responsibility has been controversial and ultimately, I suspect, turns upon an understanding of autonomy that is richer and
more demanding than that which I have assumed here. 70 At least some of the “autonomous” weapons currently in early development seem
likely to possess no agency whatsoever and thus arguably should be thought of as transmitting the intentions of those who command their use.
What the Use of AWS Says About Our Attitude Toward Our Enemies Yet this is not the end of an investigation into the implications of a concern
for respect for the ethics of AWS. As Nagel acknowledges, there is a conventional element to our understanding of the requirements of respect.
71 What counts as the humane or inhumane treatment of a prisoner, for instance, or as the desecration of a corpse, is partially a function of
contemporary social understandings. Thus, certain restrictions on the treatment of enemy combatants during wartime have ethical force simply
by virtue of being widely shared. Moreover, there is ample evidence that existing social understandings concerning the respectful treatment of
human beings argue against the use of AWS being ethical. A recent public opinion survey, for example, found high levels of hostility to the
prospect of robots being licensed to kill. 72 Most people already feel strongly that sending a robot to kill would express a profound disrespect
of the value of an individual human life. 73 Evidence that what we express when we treat our enemies in a certain way is sometimes crucial to
the morality of warfare is provided by how widely shared is the intuition that the mutilation and mistreatment of corpses is a war crime. Such
desecration does not inflict “unnecessary suffering” on the enemy; rather, it is wrong precisely because and insofar as it expresses a profound
disrespect for their humanity. Importantly, while the content of what counts as a “mistreatment” or “mutilation” is conventional and may
change over time, the intuition that we are obligated to treat even the corpses of our enemies with respect is deeper and much less susceptible
to revision. The ethical principles of jus in bello allow that we may permissibly attempt to kill our enemy, even using means that will inevitably
leave them dying horribly. Yet these principles also place restrictions on the means we may use and on our treatment of the enemy more
generally. I have argued—following Nagel—that this treatment should be compatible with respect for the humanity of our enemy and that the
content of this concept is partially determined by shared social understandings regarding what counts as respectful treatment. Furthermore, I
have suggested that widespread public revulsion at the idea of autonomous weapons should be interpreted as conveying the belief that the use
of AWS is incompatible with such respect. If I am correct in this, then even if an interpersonal relationship may be held to exist between the
commanding officer who orders the launch of an autonomous weapon system and the individuals killed by that system, it should be
characterized as one of disrespect. Interestingly, conceiving of AWS simply as the means whereby the person who authorizes the launch of the
robot attempts to kill the intended targets vitiates an influential criticism of the campaign to ban these systems. 74 Defenders of AWS have
suggested that robotic weapons could not be morally problematic “in themselves” because those who might be killed by robots would die as a
result of the effects of weapons that are—understood in a more narrow sense—identical to those that a human might use. In conventional
military terminology, Predator drones—and by extension, perhaps, future AWS—would ordinarily be understood as platforms from which a
weapon (such as a Hellfire missile) may be delivered. 75 Correspondingly, defenders of AWS claim that it could make no difference to the
suffering or the nature of the death of those killed whether the Hellfire missile was fired from an AWS, from a (remotely piloted) Predator
drone, or from a (manned) Apache helicopter. Yet if, when it comes to the question of the presence or absence of an
“interpersonal” relationship, we are going to understand AWS as a means of attacking targets , we must
also understand them as the means the user employs to kill others when it comes to the evaluation of the
nature of that means. Indeed, it is quite clear that a combatant who launches AWS is not herself launching
Hellfire missiles. Consequently, there is nothing especially problematic with the idea that AWS might be an
illegitimate means of killing by virtue of being profoundly disrespectful of the humanity of our enemy.
The Case for Banning AWS I believe that the contemporary campaign to ban autonomous weapons should be
understood as an attempt to entrench a powerful intuitive objection to the prospect of a disturbing new
class of weapons in international law: AWS should be acknowledged as mala in se by virtue of the extent to
which they violate the requirement of respect for the humanity of our enemies , which underlies the
principles of jus in bello. 76 That the boundaries of such respect are sometimes—as in this case—determined by convention (in the
sense of shared social understandings rather than formal rules) does not detract from the fact that it is fundamental to the ethics of war. A
number of critics of the campaign to ban AWS have objected that this proposal is premature and that until we have seen
robot weapons in action, we cannot judge whether they would be any better or worse, morally speaking, than existing weapons systems. 77 Yet
insofar as a ban on AWS is intended to acknowledge that the use (rather than the effects) of robotic weapons
disrespects the humanity of their targets, this objection has little force. There is, of course, something more than a
little intellectually unsettling about the attempt to place a class of weapons in the category of mala in se through legislation or (legal)
convention: what is mala in se should ideally be recognized independently of positive law. Yet if we are honest about the matter, we will admit
that there has always been controversy about the extent of this class of weapons, and that some weapons now held to be evil in themselves
were once widely believed to be legitimate means of waging war. Only after a period of contestation and moral argument were technologies
such as chemical and nuclear weapons acknowledged as prohibited. 78 The current situation regarding the campaign against AWS is therefore
analogous to the way in which the campaigns against the use of chemical weapons at the beginning of the twentieth century and against the
use of cluster munitions in the 1990s proceeded. 79 Should this campaign ultimately prove successful, we will understand it to have recognized
truths about these weapons that existed independently of—and prior to—the resulting prohibition. 80 In the meantime, the strength and
popular currency of the intuition that the use of AWS would profoundly disrespect the humanity of those they are tasked to kill is sufficient
justification to try to establish such a prohibition. Conclusion
AWS treats humans as mere means and undermines human dignity
Ulgen 17 [Ozlem Ulgen, Reader in International Law and Ethics in the School of Law at Birmingham City
School of Law with a PhD from the University of Nottingham, 02-06-2017, “Human Dignity in an Age of
Autonomous Weapons: Are We in Danger of Losing an “Elementary Consideration of Humanity”?,”
EUROPEAN SOCIETY OF INTERNATIONAL LAW, https://papers.ssrn.com/sol3/papers.cfm?
abstract_id=2912002]/Kankee
3. Autonomous Weapons and Human Dignity as a Status Kant’s reference to humanity as an objective end and humans as rational agents with
autonomy of will help explain how human dignity represents a status. Below I will explain the meaning and content of each of these elements
and how autonomous weapons impact on them. 3.1 From Humanity as an Objective End to Relative Ends What Kant refers to as humanity as an
objective end is part of his process to establish human dignity as a fundamental principle. Kant distinguishes “relative ends” from “objective
ends”. Relative
ends are values based on personal desires, wants, hopes, and ambitions. They are easily
replaced and replaceable. Objective ends, however, cannot be replaced with an equivalent . They are
reasons for morals governing human conduct which are capable of universalization and valid for all
rational beings. Objective ends are superior because they possess a particular moral value; dignity. Humanity as an
objective end is expressed in Kant’s maxim, "Act in such a way that you always treat humanity, whether in your own person or in the person of
any other, never simply as a means but always at the same time as an end”. 14 What does it mean to treat someone as “an end” rather than
“as a means”? Rational beings have intrinsic worth and a self-determining capacity to decide whether or not
to do something. They are not mere objects or things to be manipulated, used or discarded on the basis
of relative ends (e.g. personal wants, desires, hopes, and ambitions). Human dignity gives a person a reason for doing or
not doing something. That reason takes precedence over all others. It means setting moral and rational limits to the
way we treat people in pursuit of relative ends.15 How does this relate to autonomous weapons? First, autonomous weapons are
used for a relative end (i.e. the desire to eliminate a human target in the hope of preventing harm to
others). Relative ends, as we know from Kant’s formulation, are lesser values capable of being replaced by an equivalent. This is
not to say that preventing harm to others per se is a relative value. In fact, it is an objective end because it is something that all rational beings
could freely and rationally agree to and abide by. But killing
a human being in the hope that it will prevent further harm
is insufficiently morally grounded to override human dignity and may be reckless if alternatives and
consequences are not considered. This sort of quantitative assessment of life for prospective greater good treats the
humans sacrificed as mere objects and creates a hierarchy of human dignity. In Germany, where the State has a
constitutional duty to respect and protect human dignity for all, such an approach was rejected by the Constitutional Court in 2006 when it
declared void and unconstitutional aviation security legislation allowing the shooting down of hijacked planes. To sacrifice passengers’ and
aircrew lives was to treat them as mere objects and call into question their quality and status as human beings with dignity.16 Thus, unless
autonomous weapons can only be used to track and identify rather than eliminate the human target ,
they would extinguish a priceless and irreplaceable objective end possessed by all rational beings: human dignity. Second, using
autonomous weapons to extinguish life removes the reason for having morals in the first place: human dignity
of rational beings with autonomy of will . In doing so a relative end is given priority over an objective end.
Heyns warns, “it presents a very bleak picture of the international order if ethical norms are explicitly
excluded from consideration. An approach that ignores ethical norms presents the spectre of an order that will find itself increasingly
unsupported by the fundamental values of the people whose interests it is supposed to serve. Human rights norms such as the right to life and
dignity have to be given contents in terms of ethical standards.”17 From a positivist or natural law theory approach, there is a basic existential
reason for rules; to ensure States, peoples, and individuals can survive within the international legal order. But a
rule that allows for
life to be extinguished anywhere in the world by an autonomous weapon undermines the existential
reason. Judge Weeramantry expanded on this point in relation to nuclear weapons in the Legality of the Threat or Use of Nuclear Weapons:
“members of the international community have for the past three centuries been engaged in the task of formulating a set of rules and
principles for the conduct of that society - the rules and principles we call international law. In so doing, they must ask themselves whether
there is a place in that set of rules for a rule under which it would be legal, for whatever reason, to eliminate members of that community or,
indeed, the entire community itself. Can the international community, which is governed by that rule, be considered to have given its
acceptance to that rule, whatever be the approach of that community - positivist, natural law, or any other? Is the community of nations, to use
Hart’s expression, a "suicide club"?"18 Third, without face-to-face killing certain humans are deemed more valuable
and priceless than
others, which creates a hierarchy of human dignity . Military personnel, remote pilots,
commanders, programmers, and engineers are immune from rational and ethical decision-making to kill
another human being and do not witness the consequences. By replacing the human combatant with a
machine the combatant’s human dignity is not only preserved but elevated above the human target. This
can also be seen as a relative end in that it selfishly protects your own combatants from harm at all costs
including violating the fundamental principle of humanity as an objective end.19 Arguably there is a moral duty to
protect your own soldiers from harm, thereby protecting their own human dignity.20 From a national interest and utilitarian perspective this
may sound logical and sensible but it
fails to recognize that the inherent asymmetry in human dignity status leads
to insecurity and unpredictability in warfare which makes neither the combatant nor target safe. Using
an autonomous weapon means a combatant is not in direct harm’s way or at risk of losing his life . But that
relates to life not dignity - as status and respectful treatment. Also, potential loss of combatant lives in war is expected and an unavoidable risk
(unless the State is negligent in preparing and equipping troops). Thus, replacing
combatants with autonomous weapons
would undermine the former’s dignity by not recognizing their professional training and military ethics of
courage and respect for human targets . There would also appear to be no reason for having armies. 3.2 The Cycle of Irrationality
and Irrational Agents We have already established that Kant considers humanity as formed by rational beings with the
capacity to create, amend, and abide by moral rules. Individuals engaged in immoral conduct are not excluded from
humanity and, therefore, cannot lose their human dignity. Autonomy of will is key to Kant’s conception of
the rational being because it means individuals are not coerced to create, amend, and abide by moral rules.21
Autonomy of will does not refer to the capacity to achieve personal objectives, which are relative ends. It is about freely and willingly accepting
rules that achieve objective ends (e.g. preventing harm to humans in order to respect their human dignity). But the introduction of
autonomous weapons actually makes us irrational agents who relinquish our autonomy of will. Humans
are removed from the rational thinking process of when and how to use lethal force, and abdicate a key
characteristic of humanity to a machine. This begs the question whether we need rationality at all if we
can so easily delegate it to machines. “Human central thinking activities”22 are critical during warfare and
involve the ability to feel, think and evaluate, and the capacity to adhere to a value-based system in which violence is
not the norm governing human relations. This uniquely identifies how humans engage in qualitative analysis through
exercising judgment and reasoning. A combination of knowledge, experience, environment, and critical evaluation skills influence
“human central thinking activities” enabling difficult decisions to be made on the extent and timing of force. Pre-programmed machines
perform cost effective and speedy peripheral processing activities based on quantitative analysis, repetitive actions, and sorting data. But they
do not possess the human attributes to appraise a given situation, exercise judgment, refrain from
taking action, or to limit harm. Stating that there will be human control over autonomous weapons is not enough to allay concerns
about removing “human central thinking activities” from the lethal force decision-making process. The type of human control is
critical. A human-operated on/off switch to trigger an attack does not demonstrate exercising rational
thinking. There is also the problem of automation bias where the human operator accepts what the
machine approves as legitimate targets. Sharkey refers to the need for “meaningful human control”, which means allowing
human deliberation about a target before initiating an attack.23 Without this rational capacity, do we then revert to a
State of nature? Human targets are denied the status of rational agents with autonomy of will, and arbitrarily
deemed irrational agents subject to extrajudicial killings or sub-humans not worthy of human face-to-
face contact. Remember that under the Kantian notion of human dignity immoral conduct does not lead to loss of human dignity so no
matter what the human target has or has not done, they still have human dignity. By excluding the human target from human
dignity on the basis of their alleged immoral conduct there is no opportunity to convince them of the
validity of moral laws or to engage nonlethal methods . In fact, an opportunity is lost to build what Kant
refers to as the “kingdom of ends” in which rational beings create and abide by moral rules recognizing
human dignity. By violently ousting human targets for perceived irrational and immoral conduct,
autonomous weapons perpetuate a cycle of irrationality in which humans become irrational agents . 4.
Autonomous Weapons and Human Dignity as Respectful Treatment
AWS mistreats targets’ dignity
Ulgen 17 [Ozlem Ulgen, Reader in International Law and Ethics in the School of Law at Birmingham City
School of Law with a PhD from the University of Nottingham, 02-06-2017, “Human Dignity in an Age of
Autonomous Weapons: Are We in Danger of Losing an “Elementary Consideration of Humanity”?,”
EUROPEAN SOCIETY OF INTERNATIONAL LAW, https://papers.ssrn.com/sol3/papers.cfm?
abstract_id=2912002]/Kankee
4. Autonomous Weapons and Human Dignity as Respectful Treatment Kant’s approach to ethical conduct is rooted in rational beings with
autonomy of will having an inclination towards respect for moral rules. This inclination derives from rationality and recognition of the intrinsic
worth of human dignity. It is not based on self-interest or coercion. It follows from the status of human dignity that by respecting
the
rights of others there is recognition of human dignity . What are these rights? Respectful treatment of yourself and others is
a manifestation of human dignity or humanity as an objective end. For example, human dignity resides in individuals taking care of their own
moral worth through avoiding immoral conduct and constantly striving to move from a State of nature to an improved rightful or lawful
condition. Individual morality is moderated by self-restraint and openness.24 Too much self-restraint is contrary to human dignity (e.g. denial of
basic human needs for some greater good). Too much openness in seeking personal pleasure at the expense of others is also contrary to human
dignity (e.g. avarice, arrogance). As regards respecting others, Kant expresses this as a negative formulation. We restrain our words and deeds
towards others and thereby respect their human dignity. Kant’s writings on human value, State powers of punishment, and rights in war
provide a basis for understanding human dignity as respectful treatment. 4.1 Mistreatment of Rational Beings and Wrongdoers Not
mistreating human beings is Kant’s negative formulation of the duty to respect human dignity in others. All
humans, including wrongdoers, are rational beings with autonomy of will deserving respect of their human
dignity. Recall that dignity means recognition of another’s worth that has no price and cannot be exchanged by an equivalent. If we do
not respect a wrongdoer’s dignity or treat them less favorably we are judging them as worthless and, in
Kant’s terms, with contempt. For Kant a dangerous wrongdoer is no object of contempt and no less worthy of
respect because he remains a human being even if his deeds are unworthy .25 In relation to how to treat the
dangerous wrongdoer, Kant refers to certain “disgraceful punishments” that cannot be justified because they
“dishonour humanity itself … [and] … make a spectator blush with shame at belonging to the species that
can be treated that way”. 26 Examples include quartering a man, having him torn by dogs, cutting off his nose and ears. These are
severe acts against physical integrity and dignity of the person and when seen alongside Kant’s remarks about a judicially prescribed death
sentence without mistreatment provide illustrations of the duty not to mistreat humans. More subtle illustrations of mistreatment, referred to
as "vices", include arrogance, defamation, and ridicule.27 A. "Outrages upon personal dignity" and Inhumane Treatment Kant’s notion of
human dignity conceptualizes the generic category of “wrongdoers” to help us understand that even
if a person is suspected of
wrongdoing or has done wrong, or is an enemy combatant, they are still entitled to status and certain
treatment. Kant’s "disgraceful punishments" are today transposed into international humanitarian law
through prohibition of certain acts and forms of conduct. Common Article 3 of the Geneva Conventions provides fundamental guarantees
(applicable to both non-international and international armed conflicts) that civilians and hors de combat “shall in all circumstances be treated
humanely”.28 Article 3(1)(a) prohibits violence to life and person, in particular murder of all kinds, mutilation, cruel treatment and torture.
Article 3(1)(c) prohibits “outrages upon personal dignity, in particular humiliating and degrading treatment”. The Elements of Crimes for the
International Criminal Court defines "outrages upon personal dignity" as acts which humiliate, degrade, or otherwise violate the dignity of a
person to such a degree “as to be generally recognized as an outrage upon personal dignity”.29 The fundamental guarantees of Common
Article 3 are also provided for enemy combatants under Articles 1(2) and 75 of Additional Protocol I (API). Enemy combatants are afforded
protection under “the principles of international law derived from established custom, from the principles of humanity and from the dictates of
public conscience”; and if they do not benefit from more favorable treatment under the Geneva Conventions or API, they must be “treated
humanely in all circumstances". The law’s moral basis derives from "principles of humanity" and "the dictates of public conscience", which
although not defined are intended to overcome any ambiguities or uncertainties by anchoring the law back to what would be in the interest of
humanity. This moral basis prevents the assumption that something which is not prohibited in law is therefore permissible, and applies
regardless of developments in weapons technology.30 It has normative force to provide additional protection by appropriately controlling
military behavior. 31 These provisions establish obligations to take account of others‟ interests, including the human dignity of enemy
combatants. Use
of autonomous weapons to kill “wrongdoer” human targets completely bypasses such obligations
and represents a modern-day example of Kant’s "disgraceful punishments" amounting to "outrages upon
personal dignity”. The human target is treated as an inanimate object without any interests; easily removed
and destroyed by a faceless and emotionless machine. No value is placed on the life taken. No “human central thinking
activities” are involved in the interpretation and application of international humanitarian law on prevention
of unnecessary suffering, taking precautionary measures, and assessing proportionality . The lack of human
discretion in these decisions violates Articles 35, 51, 57 API.32 There is currently no prohibition on the use or development of autonomous
weapons but this does not make them permissible when judged against human dignity as a principle of humanity. Autonomous
weapons would devalue humanity by treating humans as disposable inanimate objects rather than ends
with intrinsic value and rational thinking capacity . All individuals targeted and killed by such weapons are
entitled to respect for their human dignity. Whether or not they are designated enemy combatants or
terrorists, they have rational capacity, possess a moral value of dignity which cannot be replaced by an
equivalent, and cannot lose such status through immoral acts. If the autonomous weapon is capable of
causing unnecessary suffering in the human target this would constitute mistreatment. For example, certain types of
Hellfire missiles used on UAVs cause burning in targets and incineration of bodies.33 The AGM-114N MAC ("metal augmented charge") variant
uses a thermobaric warhead that can “suck the air out of a cave, collapse a building, or produce an astoundingly large blast radius out in the
open.”34 It contains a “fluorinated aluminum powder layered between the warhead casing and the PBXN-112 explosive fill. When the PBXN-
112 detonates, the aluminum mixture is dispersed and rapidly burns. The resultant sustained high pressure is extremely effective against
enemy personnel and structures”. 35 B. Does it matter whether Mistreatment comes from Man or Machine? It may be argued that
international humanitarian law allows use of lethal force against an enemy so that death resulting from use of autonomous weapons is not
unlawful per se. But this avoids moral and legal considerations of methods and means of warfare, which are at the heart of human dignity as
respectful treatment. To say that human targets are indifferent as to whether they are killed by autonomous
weapons or soldiers undermines human dignity in the person and runs contrary to evidence of the effects
and repercussions of American UAV strikes in Pakistan and Yemen.36 Apart from causing civilian casualties, UAV strikes have
caused loss of livelihood due to fear of venturing outside and severe psychological harm officially
diagnosed as PTSD. 37 “Decapitation strikes” intended to weaken the organizational capability of al-Qaeda and the Taliban by removing
key players or leaders have not achieved that objective, and UAV strikes in Pakistan have fuelled recruitment into militant
organizations and solidified resistance against the Pakistani State. 38 Victims’ accounts of targeted killings in
Yemen reveal extreme physical, psychological and economic harm : targeted vehicles continue burning with
victims inside; clothes fused to survivor’s skin; skin burned off; local population living in fear and terror
from hearing planes; women suffering miscarriages; children frightened to go outside; dependents of
individuals killed unable to support themselves economically ; local population suffering shock after
strikes; inability to sustain a living from the land due to fear of being outside .39 Cases of combatants committing
war crimes amounting to “outrages upon personal dignity” may appear to bolster the argument supporting use of
autonomous weapons (i.e. the latter will act more rationally and be less prone to human flaws leading to atrocities).40 This assumes humans
have no capacity for preventing unethical conduct and that machines will act more ethically than humans. There are many different reasons
why combatants commit war crimes, not necessarily related to inherent human flaws.41 In addition, human emotions, as part of “human
central thinking activities”, play a vital role in navigating complex social environments in combat, especially where it is necessary to perceive
and interpret human behavior (e.g. children playing ball rather than throwing a hand grenade, someone running with a stick rather than a gun,
a young man of military age in the vicinity of an attack).42 The judgment, reasoning, and discretion exercised by a human cannot
be performed by a machine. Far from advocating replacement of human combatants with machines, war crimes cases serve
as barometers of public conscience on acceptable conduct in warfare . They promote human dignity by
recognizing that only human action justifies lethal force and , therefore, requires human accountability and
responsibility. In R v. Blackman a British military officer was found guilty of murder and sentenced to life imprisonment for shooting in the
chest a seriously wounded Afghan insurgent. The insurgent was entitled to be treated with dignity, respect, and humanity, yet the officer
treated him “with contempt and murdered him in cold blood”.43 The officer had failed to ensure medical assistance was quickly provided,
allowed officers under his command to manhandle the wounded insurgent causing him additional pain, ordered those providing first aid to
stop, and waited for the military surveillance helicopter to be out of sight before shooting the insurgent. Whilst mitigating factors pointed to
human flaws in combat (e.g. the effect of fellow officers‟ injuries and deaths, combat stress) these were not extraordinary or unexpected risks
to sufficiently displace military ethics and respect for humanity: “[…] thousands of other Service personnel have experienced the same or
similar stresses. They exercised self-discipline and acted properly and humanely; you did not. […] while this sort of offence is extremely rare, if
not unique, those Service personnel who commit crimes of murder, or other war crimes or crimes against humanity while on operations will be
dealt with severely. This is a message of deterrence but it is also to reassure the international community that allegations of serious crime will
be dealt with transparently and appropriately."44 Although murder is separate from and not an "outrage upon personal dignity",45 the officer’s
acts and omissions preceding the killing can be characterized as “animated by contempt for the human dignity of another person”46 and
therefore falling into the particular category of inhumane treatment. This case makes clear the importance of interaction and interrelatedness
between warring parties in the application of international humanitarian law, and the court emphasized that military personnel acting with
brutality and savagery lose the support and confidence of those they seek to protect, and provoke the enemy to act more brutally in retribution
or reprisal. With
use of autonomous weapons there is no interaction and interrelatedness between warring
parties, which creates a dangerous human accountability and responsibility gap. C. Mistreatment of the Deceased
“Outrages upon personal dignity ” and inhumane acts can also be committed against the dead as seen in cases before
international criminal tribunals. In the Trial of Max Schmid a German medical officer was found guilty of willfully, deliberately and wrongfully
mutilating the deceased body of a US serviceman, and sentenced to ten years imprisonment.47 In Niyitegeka the Minister of Information in the
Rwandan Interim Government was convicted, among other crimes, of other inhumane acts as crimes against humanity. He was jubilant at the
capture of a prominent Tutsi and rejoiced when he was killed, decapitated, castrated, his skull pierced through the ears with a spike, and his
genitals hung on a spike for the public to see. Niyitegeka’s jubilation, especially in light of his leadership role in the attack, supported and
encouraged the attackers and thereby aided and abetted the commission of crimes. He subsequently ordered men to undress the deceased
body of a Tutsi woman and insert a sharpened piece of wood into her genitalia. The Trial Chamber considered the order an aggravating factor for
its “cruel and insensitive disregard for human life and dignity” and found that both incidents “would cause mental suffering to civilians, in
particular, Tutsi civilians, and constitute a serious attack on the human dignity of the Tutsi community as a whole”.48 In the specific context of
preventing outrages against the dignity of the dead, the Geneva Conventions establish extensive State obligations to
search for the dead and prevent their being despoiled or ill-treated .49 When considered in relation to use of
autonomous weapons, where there is a lack of face-to-face contact between human combatants and human
targets, these provisions are rendered redundant. Death caused by conventional combat (i.e. human combatants
in-situ) whether on land, at sea, or in the air appears to be accorded greater protection against outrages upon
personal dignity than death by autonomous weapons . 4.2 Preconditions for Punishment of Wrongdoers and Treatment of
Enemy Combatants Some ambiguities exist in Kant's formulation of dignity which may raise difficulties in its application to those targeted and
killed by autonomous weapons. What if those targeted have killed humans? Do they lose rational capacity? Should they be afforded dignity?
There are two possible answers here. The first focuses on rational capacity as a potential rather than actual human characteristic and,
therefore, dignity cannot be lost by committing immoral acts. This corresponds to the principle of equality based on innate humanity of all
persons so that if you kill the person committing an immoral act, you kill yourself .50 The second answer provides a
potential exception under punishment of such individuals. Kant regards retributive punishment, specifically the death penalty for murderers, as
a matter of justice. Life and death are not the same and there is no substitute for taking a life other than death. But the punishment exception
has preconditions: punishment and sentence must be imposed by a judge, and even if the wrongdoer is facing an imminent death sentence “he
must still be freed from any mistreatment that could make the humanity in the person suffering it into something abominable.”51 Kant gives an
example of mistreatment as dangerous physical experiments on a murderer for greater medical good. These can never be consented to by the
murderer or society because they are contrary to the murderer's human dignity and “justice ceases to be justice if it can be bought for any
price whatsoever”. 52 Kant's non-mistreatment precondition is pervasive in today's international humanitarian law. Even in situations
where security or repressive measures are necessary against certain individuals, the dictates of humanity
requires that the law provides protection from mistreatment in order to preserve human dignity.53 For
example, Articles 13 and 14 Geneva Convention III and Article 11(1) API require prisoners of war to be treated humanely at all times; not be
subjected to physical mutilation or to medical or scientific experiments, even with their consent; to be protected from acts of violence or
intimidation and against insults and public curiosity; and to be entitled in all circumstances to “respect for their persons and their honour”.
Thus, even if those targeted by autonomous weapons have killed human beings , they are still entitled to
humane treatment. The preconditions for punishment of wrongdoers mentioned above (i.e. judicial punishment and
sentence, no mistreatment) keep human dignity intact by not legitimizing cruel and arbitrary treatment of
wrongdoers as human outcasts without any moral rights. Human dignity is innate, priceless and an objective end in itself. But
the difference with autonomous weapons is that there is no due process to determine guilt or
innocence. There is no prior determination of punishability and a judge is not imposing a sentence. Although
lawful killing of enemy combatants in armed conflict generally does not require a prior judgment because it is not about punishment,
autonomous weapons as a means of killing represent a form of punishment without preconditions
because they undermine human dignity and deny the possibility of interaction and interrelatedness
between warring parties. By targeting and attacking the wrongdoer the autonomous weapon is a means of avoiding
judicial pronouncement and authorization of punishment . The human target is treated as a means to an
end without human dignity and subhuman treatment of efficient disposal is justified. The non-existence of
preconditions for punishment of wrongdoers is contrary to Kant's ideal State of the “kingdom of ends” (a
commonwealth of persons who legislate universal laws that are rational and based on humanity as an end in itself). Kant is not advocating a
world super-State but recognizes the need for some form of State apparatus to enable legislating in this kingdom of ends. State apparatus
necessarily includes coercive and punishment powers. Those who go against the moral rules can be punished but not in
a way that mistreats them or is contrary to human dignity . There must be an opportunity for the wrongdoer to avoid the
punishment and any punishment must be judicially prescribed and administered. As we have already seen, Kant considers the death
penalty a legitimate sentence for murderers but only under judicially prescribed conditions and only if the
murderer is not mistreated in any other way.54 4.3 Limitations on Methods and Means of Warfare Kant's idea of individual
and State morality is based on a trajectory from a State of nature to a rightful or lawful condition. It forms the basis for his views on rights in
war. Kant describes war as “barbaric” and to be expected while States remain in a State of nature.55 A State of rightfulness would involve
States voluntarily coming together in a congress to uphold perpetual peace. Conceding that the State of nature will involve war, he then
discusses rights in war. Where there is an “unjust enemy” States are entitled to unite against and deprive the enemy State of its power. An
“unjust enemy” is one “whose publicly expressed will (whether by word or deed) reveals a maxim by which, if it were made a universal rule, any
condition of peace among nations would be impossible and, instead, a State of nature would be perpetuated.”56 A. Kantian just war impact on
jus in bello
Negative
Contention 1: Mines Good
COUNTERPLAN – The United States ought to ban antipersonnel landmines by acceding
to the Mine Ban Treaty with an exception for antipersonnel landmines laid on the
Korean Peninsula.
The CP solves the aff, but the plan and the perm cause Noko to invade the South and
destroy the Soko alliance by banning US-Soko military cooperation
Beauchamp 14 [Zack Beauchamp, senior correspondent at Vox with expertise in IR and foreign
policy and an MSc in IR from the London School of Economics, 6-27-2014, "Why the US has been a global
outlier on landmines," Vox, https://www.vox.com/2014/6/27/5849610/land-mine-treaty-ottawa-
koreas]/Kankee
The Ottawa Convention banning the use of landmines in war came into effect 15 years ago . At the time, the
United States was the highest-profile country to refuse to sign it, and remains part of the 20 percent of
nations who haven't joined it — along with Russia, China, and Syria. Today, the Obama administration took the United
States' biggest step yet towards joining the international ban on landmines, saying that the United State would no longer acquire new
landmines or replace old ones in order to "ultimately allow the United States to accede to the Ottawa Convention." What took the US so long?
And why is the US not just adhering fully to the ban right away? It's obviously complicated, but the single most important factor is
probably the conflict between the Koreas. During the late 90s, when Americans debated whether or not to join the Ottawa
Convention, the Clinton administration was consistently skeptical of a total ban on landmines . Despite the fact that
the US was providing more support than any other country for dismantling existing land mines around the world, the treaty's hard restriction
on military use was, administration officials thought, too inflexible. Among other objections, the Clinton team pushed for a specific
exception to the treaty for the demilitarized zone (DMZ) between North and South Korea, which has separated the
two countries since they signed an armistice in 1953. Clinton State Department official Robert Beecroft even said the US would sign the treaty if
it could find a way to replace landmines in the DMZ. Why is Korea such a big sticking point for the US ? In very simple terms,
North Korea vastly outnumbers its southern neighbor in troops. The North Korean military is almost
double the size of its South Korean counterpart (roughly 1.2 million to 700,000). The massive quantity of
landmines planted in the DMZ, in the US's view, would considerably slow down any attempt by the North
Korean military to rapidly overwhelm the South by dint of sheer numbers. In the 1990s, many of those landmines were
American-owned mines, not Korean. So if the US had accepted a treaty commitment to dismantle its mine stock , it
would have had to dismantle weapons it believed were deterring a North Korean invasion. Today, though,
South Korea technically controls all of the mines — not the US. However, joining the Ottawa Convention would prohibit any
US-led forces from military cooperation with nations that use landmines during wartime. Considering
that there are 30,000 US troops in South Korea , signing the treaty would severely constrain the US's
ability to work with South Korea. "That's the one thing that I think is now hanging up the Obama review" on joining the Ottawa
Convention against landmines, Human Rights Watch arms control director Steven Goose told the LA Times in 2012. Clearly, with the Obama
administration now taking this step toward signing the treaty, things have changed. What happened? We don't know for sure. It's possible the
administration concluded, as anti-landmine campaigners have long maintained, that mines don't actually deter aggression. Moreover, South
Korean military strength has grown over the past two decades. The North's equipment, by contrast, has become increasingly dated. South
Korean officials may have concluded that they don't really need landmines anymore.
Landmines are key to assuring Soko – they specifically hate the plan because of Noko
fears and because it rescinds Obama’s promise to exempt them from US Ottawa
implementation
Rowland 14 [Ashley Rowland, graduate of the University of Alabama working on a master’s degree in
IR at Troy University, 09-25-2014, "North Korean threat underscores need for land mine exemption,"
Stars and Stripes, https://www.stripes.com/news/north-korean-threat-underscores-need-for-land-mine-
exemption-1.304913]/Kankee
The Obama administration’s decision to remove all of its land mine stockpiles , except those in South Korea,
underscores the constant security threat posed by North Korea and serves as a reminder that little has changed in the
decades-old military standoff here. “No other country besides South Korea faces such a huge military
confrontation like the one on the Korean peninsula . There’s no comparison,” a spokesman for South Korea’s
Ministry of National Defense said, speaking on customary condition of anonymity. Citing security concerns, neither South Korean defense
officials nor U.S. Forces Korea would disclose how many land mines are buried in the 160-mile-long, 2.5-mile-wide Demilitarized Zone that
separates the two Koreas, though officials have previously estimated it’s more than 1 million. Meant
to deter a land attack, the
mines are part of the massive arrays of military forces and equipment that make the DMZ the most
heavily guarded — and dangerous — border in the world. The buffer zone — and particularly the Joint Security Area — is a surreal
place, part tense border, part tourist attraction. Busloads of visitors tour infiltration tunnels dug by the North and watch as troops from both
countries glare at each other from across the Military Demarcation Line, separated by just yards in land but miles in dueling ideologies. Both
Koreas maintain villages in the DMZ. On the southern side is Daesungdong, home to some 200 people and a small school where students
practice evacuating in case hostilities flare up. On the northern side is the ghost village of Kijong-dong, which has no residents and is best known for
the massive North Korean flag that flies over its empty buildings. And while a ground invasion might seem unlikely, it
is estimated that
millions of North Korean soldiers would flood across the border if war broke out again. Vestiges of the past
remain along the northern edge of the South’s territory, like the overpasses rigged to explode and stymie the advancement of North Korean
troops. South Korean troops posted in lookout points along barb-wired riverbanks scan for infiltrators. Along with hundreds of thousands of
South Korean military personnel, more than 28,500 U.S. troops are stationed on the peninsula as a deterrent to the North, whose nuclear
capabilities remain a key security concern in the region. The North periodically threatens to use them against South Korea and even the U.S.
mainland. Earlier this week, the State Department announced the U.S. is exempting the Korean peninsula from a
pledge it made earlier this year to quit producing land mines and get rid of its current stockpiles. Another MND spokesman said
South Korea respects and welcomes the U.S. decision and views it as a sign that the U.S. understands its
“unique” security situation. “We have confidence in the strength of combined U.S.-ROK capabilities to
defend against North Korean military action and maintain appropriate capabilities to meet U.S. defense
requirements to defend the ROK ,” a USFK spokesman said. South Korea has spent millions of dollars to remove land mines south of
the DMZ in recent years, primarily from rural areas. A legacy of the 1950-53 Korean War, they may be unearthed due to farming or uncovered
by heavy rains and washed far from where they were buried.
Naval mines stop Taiwan war
Axe 20 [David Axe, defense editor of The National Interest, 4-22-2020, "Check Out Taiwan’s New Fleet-
Killer," National Interest, https://nationalinterest.org/blog/buzz/check-out-taiwan%E2%80%99s-new-
fleet-killer-146756]/Kankee
Taiwan’s loading up on new minelaying vessels. And it’s not hard to see why. With no realistic prospect of
matching the Chinese navy warship for warship, the Taiwanese fleet is hoping that underwater minefields
might help to sink an invasion fleet. Lungteh shipyard on April 17, 2020 laid the keel for the third and fourth Min Jiang-class
minelayer. The Republic of China Navy plans to begin accepting the minelayers in 2021. The Taiwanese fleet’s existing minelayers are modified
landing craft. The Min Jiangs are not large. Just 120 feet long and displacing around 400 tons, they are lightly built and minimally armed with a
handful of guns. Their mission, in wartime, is to use their automated mine-deploying systems quickly to lay
minefields in the path of a Chinese invasion fleet . The minefields presumably would be close to shore. “The
minelayer ships were designed to face down an attack by amphibious vehicles trying to land in Taiwan,” a
Taiwanese defense official said at the keel-laying ceremony for the first Min Jiang. Sea mines are among the most dangerous
naval weapons. It’s not for no reason that Iran leans heavily on mines in its strategy for closing the
strategic Strait of Hormuz. It only helps navies such as Taiwan’s that many rival fleets struggle to maintain
adequate minesweeping forces. The Min Jiangs are part of a three-way approach to an “asymmetric” naval
strategy. Instead of trying to match China’s scores of big, heavily-armed -- and expensive -- frigates, destroyers, cruisers and aircraft carriers,
Taiwan plans to exploit specific Chinese weaknesses in order to raise the cost of an invasion . In addition to
the Min Jiangs, Taiwan also is building at least 11 new catamaran missile corvettes of the Tuo Chiang. Each of the speedy, 600-tons-
displacement vessels carries 16 anti-ship missiles. The Tuo Chiangs will complement 42 older missile boats when they enter service beginning in
2021. The third part of Taiwan’s asymmetric naval strategy lags by a few years. In addition to minelayers and missile corvettes, Taiwan is trying
to build eight new diesel-electric attack submarines to replace four very old submarines currently in the fleet. Since none of the world’s major
submarine-builders will risk China’s wrath by selling an existing sub design to Taiwan, Taipei is spending potentially billions of dollars developing
the submarines on its own, albeit with the help of foreign consultants. Work on the new boats began in May 2019 at a shipyard in Kaohsiung.
The coronavirus pandemic that swept East Asia starting the following December slowed the work. Taipei’s ban on foreign visitors, meant to halt
the virus’s spread, also denied entry to Taiwan for dozens of foreign consultants working on the submarine project. Expect work to resume as
soon as possible. The submarine program and the other asymmetric naval efforts are top priorities in Taiwan. After all, losing
a large
number of amphibious ships and landing craft to submarines, missiles and sea mines could compel China to call
off an invasion, or at least delay the invasion long enough for U.S. forces to intervene.
Mines are their best defense – they're key to their asymmetric defense strategy
Chan 20 [Minnie Chan, SCMP journalist with a master's in international public affairs from The
University of Hong Kong, 09-23-2020, "Chinese military steps up anti-mine drills as Taiwan builds sea
defences," South China Morning Post,
https://www.scmp.com/news/china/military/article/3102577/chinese-military-steps-anti-mine-drills-
taiwan-builds-sea]/Kankee
China’s military incorporated mine-sweeping exercises into some of its latest naval drills targeting Taiwan, according to recent reports from
state media. Taiwan
has been investing in minelayers and developing sophisticated underwater mines in the
hope of thwarting the People’s Liberation Army’s plan to take over the island within three days – a move that
could buy time for other forces to come to the island’s aid , according to military analysts. In two recent naval
exercises conducted by the PLA’s Eastern and Southern theatre commands, the military’s mine sweepers and minehunters played important
roles in helping the flotillas clear sea lanes from underwater mines, state-run China Central Television reported. The anti-mine training comes as
the PLA steps up its island encirclement drills by sending fighter jets and warships into the Taiwan Strait at a time when the island’s authorities
are growing closer to the United States. Beijing, which regards the island as a breakaway province and has never renounced the use of force to
bring it back into its fold, has accused Washington and Taipei of promoting independence. Taiwan has been buying torpedoes from the
US and also trying to improve its minelaying abilities as a way of hampering any PLA attempt to land forces
on the island, according to Lu Li-Shih, a former instructor at the naval academy in Taiwan . Lu said the strategy
was designed to buy more time for the US Navy and its allies in the region to intervene. “If the PLA tries
to attack Taiwan, it will definitely be an asymmetric battle … one of the best measures the Taiwanese
military can take is to lay underwater mines along Taiwan’s shores and major channels connecting the
mainland and the island to hinder the PLA fleet’s approach,” Lu said. He said different mines were designed for different
purposes. For example, drifting mines were to attack surface ships and moored mines for submarines while “the
most deadly” was a smart rising mine. The rising mines, which were usually laid in deep water with a floating payload, could release
torpedoes when the system detected enemy ships passing , Lu added. Last month, Hsiao Bi-khim, Taiwan’s most
senior envoy to the US, told the Washington-based think tank the Hudson Institute that Taipei was looking to buy
underwater mines and cruise missiles to boost its coastal defences, according to Reuters. On August 4, Taiwan
launched its first locally constructed fast minelayer , underscoring Taiwanese President Tsai Ing-wen’s
commitment to strengthening its domestic defence capabilities . In her inauguration speech at the start of her second
term in May, Tsai said her number one priority was to boost Taiwan’s asymmetric defence capabilities. Macau-based military observer Antony
Wong Tong said developing advanced mine sweepers and hunters had been a major goal for the PLA to ensure it could fulfil its reunification
agenda. “The PLA has had painful experiences clearing up all mines laid by Taiwan since 1949 [when the
defeated Nationalists fled to the island at the end of the civil war] because of a lack of advanced detecting equipment in the
beginning,” Wong said.
Restricting landmines harms CMR and credibility of the JCS
Good 11 [Rachel Good, cum laude graduate of Northwestern University Pritzker School of Law, Spring
2011, “Yes We Should: Why the U.S. Should Change Its Policy Toward the 1997 Mine Ban Treaty,”
Northwestern Journal of International Human Rights,
https://scholarlycommons.law.northwestern.edu/cgi/viewcontent.cgi?
article=1113&context=njihr]/Kankee
Clinton’s second reason for not signing the Treaty was the lack of a timetable to phase out mines.163 Until the U.S. developed alternative
technologies, landmines were considered an essential part of the U.S. arsenal. To highlight this point, Clinton stated that
landmines are necessary along the DMZ . He further explained that “[i]n the event of an attack...[o]ur antipersonnel
mines are a key part of our defense line in Korea .”164 Clinton also sought to justify his decision as essential to
protecting “the safety and security of our men and women [people] in uniform.”165 Although these were Clinton’s publicly stated
reasons for not signing the Treaty, behind the scenes politics also played a key role in Clinton’s decision. 36 Unlike other countries
w[h]ere the landmine ban was dealt with primarily as a humanitarian issue, in the United States it remained
squarely a military issue.166 The U.S. Joint Chiefs of Staff was resistant to removing landmines from its
arsenal.167 The Pentagon actively lobbied Clinton not to sign the Treaty, even though military officials did
not necessarily have plans to use landmines.168 In their view, landmines might save soldiers’ lives in some
circumstances, and the U.S. should not ban the weapons without replacement technologies.169 On the other hand,
Secretary of State Madeline Albright supported U.S. participation in the Treaty.170 The State Department, however, failed to
launch a campaign in support of the Treaty to adequately rival that of the Pentagon, because it was uncertain of Clinton’s
political commitment to the issue.171 37 Clinton’s deference to the Joint Chiefs may have had to do with his fear of
straining his relationship with the military. Not having served in Vietnam, the military treated Clinton as an outsider.172 Clinton
acknowledged that the landmine issue in particular strained his relationship with the Joint Chiefs to the point
where he concluded that he could not “risk a breach” with them over the issue.173 When Clinton publicly denounced
the MBT he did so in the language of the Pentagon , labeling landmines as a military necessity essential to protecting
soldiers.174 This language, far different than that he used at the U.N. a few years earlier when he called for a global ban on landmines, was an
indication that Clinton bowed to the pressure of the Joint Chiefs and adopted their position on the
MBT.175 38 Clinton’s priorities in Congress were another factor in the U.S.’s failure to sign the MBT. At the time, Jesse Helms, a pro-military
senator who was hostile toward treaties in general, headed the Senate Foreign Relations Committee.176 Clinton’s top priority was obtaining
Senate approval for the expansion of NATO and he did not want to risk spending political capital by pushing the MBT.177 Clinton expected to
face criticism for his decision both domestically and internationally, so to mitigate the outcry, he launched a series of policies limiting the use of
certain landmines and committed the U.S. to sign the MBT in the future.
A landmine ban causes massive military backlash due to fears of spill-over to other
weapons like drones
NCR 14 [National Catholic Reporter Editorial Staff, entirely unrelated to the New California Republic
and their conflict with Caesar’s Legion, 7-16-2014, "Editorial: Why won't the US sign a land mine
treaty?," National Catholic Reporter, https://www.ncronline.org/news/justice/editorial-why-wont-us-
sign-land-mine-treaty]/Kankee
Speaking in Maputo in June, U.S. Ambassador to Mozambique Douglas Griffiths said the U.S. is pursuing solutions that would allow it to
accede to the treaty, though it continues to reserve the right to use its current stockpile anywhere it feels it must
until the mines expire. This was encouraging -- but far short of enough. The question that needs to be asked is this: Are we, as a nation,
incapable of renouncing weapons that kill mostly children and other innocent civilians? The
Pentagon cites the need to use land
mines as a deterrent along the North and South Korea border. But does a nation that spends tens of billions annually on
military procurements really have to depend on these weapons of war? Land-mine opponents say that the U.S. signing the treaty would
substantially strengthen it. Why the resistance? It seems plausible, as some treaty advocates have stated, that
the treaty's unique
evolution is viewed as a reason -- possibly the major reason -- that our military resists being part of it. The
top brass fears that giving up land mines could encourage similar treaty efforts by human rights groups
to seek bans on other controversial weapons. Land mines today, drones tomorrow? Meanwhile, the treaty
has already spawned at least one child. It's called the Convention on Cluster Munitions , an international treaty that
prohibits the use, transfer and stockpiling of cluster bombs , another indiscriminate explosive weapon that scatters
bomblets over a wide area. The convention was adopted in May 2008, opened for signatures in December 2008, and entered into force in
August 2010. As of September 2013, 108 states have signed. And, oh, yes, Washington has refused to sign.
Contention 2: Circumvention
US will win the LAWS arms race now, but any wavering causes Russia and China to
take the lead
Haner and Garcia 19 [Justin Haner, doctoral candidate in the Department of Political Science at
Northeastern University, and Denise Garcia, Northeastern Associate Professor of Political Science and
International Affairs, 9-26-2019, "The Artificial Intelligence Arms Race: Trends and World Leaders in
Autonomous Weapons Development," Wiley Online Library,
https://onlinelibrary.wiley.com/doi/full/10.1111/1758-5899.12713#gpol12713-sec-0003-title]/Kankee
The United States With a defense budget greater than the combined military spending of China, Russia, South
Korea, and all 28 EU member‐states combined, it is no surprise that the United States is the world leader
in the development of lethal AWS (SIPRI, 2019). Autonomy has been an official component of United States
national security strategy since 2012 with the release of Department of Defense (DoD) Directive 3000.09. This policy was
the first of its kind and allows for semi‐autonomous systems to engage targets pre‐selected by human operators, as well
as for fully autonomous weapons to select and engage targets after senior level DoD approval (Department of Defense, 2012).
Further support for autonomy in war can be seen in the United States ‘Third Offset Strategy’ where it is
listed as one of the main pillars. However, despite being a clear policy priority for defense officials, not all Americans support this
effort. Only 25 per cent of American citizens trust AI, and some employees at major companies have resisted developing AI for military
purposes, as seen in Google's internal rebellion against Project Maven, an AI development contract for the United States military (Ipsos, 2018).
The United States is the outright leader in autonomous hardware development and investment
capacity. By 2010, the United States had already invested $4 billion into researching AWS with a further $18
billion earmarked for autonomy development through 2020 (Boulanin and Verbruggen, 2017). Despite already
owning over 20,000 autonomous vehicles, the United States is projected to spend $17 billion on drones
through 2021, including 3,447 new unmanned ground , sea, and aerial systems (Gettinger, 2018; Statista, 2019). In
the military AI expertise race, the United States started before the opening gun even went off by
investing $1 billion in ‘strategic computing’ back in 1983, and since then has consistently outspent its
competitors (Boulanin and Verbruggen, 2017). In addition to having the most AI companies in the world , the
United States has the most AI‐related publications for a single country, the most AI patent applications
and accepted AI patents, as well as the largest pool of talented AI researchers, including those in the top
ten per cent of their field, more than any other single country in the world (CISTP, 2019a, 2019b, 2019c; IPlytics
GmbH, 2019; Shoham et al., 2018). China China is the clear rising contender in lethal AWS and AI development and
has outlined in its ‘Next Generation Artificial Intelligence Development Plan’ that it intends to utilize AI on the battlefield
in association with AWS (China State Council, 2017; Kania, 2017). With a combination of 70 per cent citizen trust in AI
(the highest of the 24 countries surveyed ) and the heavy pressure it can exert on companies to transfer
technology to the state, it is unlikely to face significant internal resistance to AWS development (Ipsos,
2018). China's capacity for weapons development is high with an estimated annual budget of $250 billion
and projected spending of $4.5 billion on drone technology by 2021 (SIPRI, 2019; Statista, 2019). Most impressively,
Chinese companies have tested swarming technology with over 1,000 synchronized drones (Kania, 2017).
However, while some countries , such as South Korea, Israel, and Japan, seek AWS development to augment their
soldiers and fill near‐term gaps in security, China, with the world's largest army, does not have this
problem. This frees China to focus the bulk of its resources on long‐term strategic investments in AI.
China publicly plans to become the world leader in AI development by 2030 (China State Council, 2017). China's
controversial methods of intellectual property procurement have allowed them to make technological
leaps forward in a non‐linear fashion. With heavy ‘civil‐military fusion’ investment, China's State Council estimates their
AI industries to be worth $22 billion by 2020, $59 billion by 2025, and $150 billion by 2030 (China State Council,
2017; Kania, 2017). By some metrics, China has already taken the lead in AI. Despite lagging in total publications, between 2011 and 2015
Chinese scientists published 41,000 papers on AI, almost double the United States during the same period (Baker, 2017). Further, Chinese
investment and financing in AI projects between 2013 and 2018 is estimated to be 60 per cent of the entire world's funding of such projects,
again more than doubling United States investment during the same period (CAICT and Gartner, 2019). However, China does face a
problem of top expertise flight as despite having over 18,000 talented AI developers , when it comes to
those who rank among the world's best, the United States and EU each have more than five times as
many of the top experts (CISTP, 2019a). Russia Despite scoring low across several capacity and expertise metrics, Russia is a
leader in the lethal AWS race because it is its most brazen supporter. Russia is openly looking to remove
humans from the decision‐making loop and does not intend to comply with any international efforts to
curtail or ban AWS use in combat (Bendett, 2017; Tucker, 2017). In accordance with Russian programs for the ‘Creation of
Prospective Military Robotics through 2025’ and ‘Concept for Deployment of Robotic Systems for Military Use until 2030’, Russia plans to
have autonomous systems guarding their weapons silos by 2020 and aims to have thirty percent of their
combat power to be partially or fully autonomous by 2030 (Bendett, 2017; Moscow Times, 2014). Russia is acutely
focused on near‐term hardware development. Despite a comparatively low annual GDP and total budget for defense, Russia is
intent on spending almost as much as China on drones by 2021 , has a military robotics‐focused
rearmament budget of $346 billion, and hosts annual conferences on the roboticization of its armed
forces (Bendett, 2017; Sputnik News, 2013; Statista, 2019). Their autonomous Uran‐9 robotic tank has already been
deployed to Syria (Mizokami, 2018). President Vladimir Putin has publicly stated whoever becomes the leader in AI
will ‘become the ruler of the world ’, however, Russian investments in AI are significantly lacking (Bendett, 2017). Even basic AI
statistics on Russia are hard to come by and one potential explanation may be that significant development is not happening on a comparable
scale. Despite having at least ten research centers dedicated to AI use in warfare, Russia's annual domestic military spending on AI is estimated
to be as low as $12.5 million annually, just 0.01 per cent of the unclassified AI budget for the United States military (Bendett, 2017, 2018).
International sanctions may be part of the problem as Russia has been forced to cut its defense budget by 7 per cent in 2017, 3.2 per cent in
2018 and estimated 4.8 per cent for 2019 (Kofman, 2017). South Korea
Russia and China will water down and sabotage any ban – verification and
enforcement are impossible and countries will cheat
Chan 19 [Melissa K. Chan, foreign affairs journalist for Time, 9-3-2019, "Could China Develop Killer
Robots in the Near Future? Experts Fear So," Time, https://time.com/5673240/china-killer-robots-
weapons]/Kankee
Russia started sabotaging the discussion from the very first session. Throughout the morning of Aug. 21, its
diplomats at the United Nations in Geneva took the floor, nitpicking language in a document meant to pave
the way for an eventual ban on lethal autonomous weapons, also known as killer robots, an emerging category of weapons
that would be able to fight on their own and decide who to target and kill. “They were basically trying to waste time,” says Laura
Nolan of the International Committee for Robot Arms Control, who watched with frustration in the hall. But while Russia vigorously
worked to derail progress, it had a quieter partner: China. “I very much get the impression that they’re working
together in some way,” says Nolan. “[The Chinese] are letting the Russians steamroll the process, and they’re
happy to hang back.” China has stayed coy at these discussions, which have taken place at least once a year since 2014.
Its delegates contribute just the minimum, and often send ambiguous signals on where they stand . They
have called killer robots a “humanitarian concern,” yet have stepped in to water down the text being debated. Stakes are
high for the emerging military power. The robots in question — while not yet humanoid, techno-thriller Terminators — would nevertheless be
deadly: Imagine dozens of drones swarming like bees on the attack, or intelligent vehicles patrolling a border with shoot-to-kill orders. At times,
Beijing has given some hope to activists demanding a ban on such weapons. According to the Campaign to Stop Killer Robots, the coalition
Nolan’s organization is a part of, China last year joined 28 other states in saying it would support prohibiting fully autonomous weapons — but,
Beijing clarified, just against their use on the battlefield, not their development nor production. That has raised eyebrows among experts
skeptical of its intentions. “They’re
simultaneously working on the technology while trying to use international
law as a limit against their competitors,” observes Peter Singer, a specialist on 21st century warfare. Quite a
few countries at these meetings might levy the same accusation against the United States. While Washington has not obstructed the talks, it
has not appeared keen to move things forward, either. Part of the reluctance from major military powers over a ban stems from the extent
artificial intelligence (AI) has affected their defense industries. In addition to the U.S. and China, these states also include the U.K., Australia,
Israel, South Korea, and a few others. But it is China
that has become the most formidable challenger in the AI
competition against the American superpower. President Xi Jinping has called for the country to become a
world leader in AI by 2030, and has placed military innovation firmly at the center of the program ,
encouraging the People’s Liberation Army (PLA) to work with startups in the private sector, and with universities. Chinese AI companies
are also making substantial contributions to the effort. Commercial giants such as SenseTime, Megvii, and Yitu sell smart
surveillance cameras, voice recognition capabilities, and big data services to the government and for export. Such technology has most notably
been used to police the country’s far western territory of Xinjiang, where the U.N. estimates up to 1 million Uighurs, an ethnic minority, have
been detained in camps and where facial recognition devices have become commonplace. “These technologies could easily be a
key component for autonomous weapons,” says Daan Kayser of PAX, a European peace organization. Once a robot can
accurately identify a face or object, only a few extra lines of code would transform it into an automatic
killing machine. In addition to technology from commercial companies, the PLA has said it plans to develop new types of
combat forces, including AI and unmanned — in other words autonomous or near-autonomous — combat systems.
The country’s domestic arms industry has obliged. A few examples include manufacturer Ziyan’s new Blowfish
A2 drone. The company boasts it can carry a machine gun, independently fly as a swarm group without human
operators, and “engage the target autonomously .” On land, Norinco’s Cavalry, an unmanned ground
vehicle with a machine gun and rocket launchers, advertises near autonomous features . And by sea,
Chinese military researchers are building unmanned submarines . The 912 Project, a classified program, hopes to
develop underwater robots over the next few years. “Killer robots don’t exist yet, but what we see is a trend towards increasing
autonomy,” says Kayser of PAX. “We’re very close to crossing that line, and a lot of the projects that countries
are working on — of course they don’t say they’re going to be killer robots. But if we see terms like
‘autonomy in targeting’ — that’s getting very close to something that would be an autonomous weapon .”
All things considered, China’s behavior at the U.N. makes practical sense. Like other states, it is already developing intelligent weapons. The
technology is fast outpacing the process at the U.N., where discussions will continue for another two years, if
not longer. Without any clear international legal parameters , major militaries are feeling the pressure to
invest in autonomous capabilities on the assumption that others are. Such thinking especially characterizes the
discourse around AI and autonomous weapons systems between China and the U.S. “Essentially you have two
sides that are worried about the other gaining an advantage,” says Singer. “That then has the ironic result of them both plowing resources into
it, competing against each other, and becoming less secure.” The other frontier unbound by international law is space. Here, China sees some
opportunities to leapfrog American technology. It’s also where Beijing believes the U.S. would be most vulnerable in any conflict because of its
dependence on information technology such as GPS, which not only helps soldiers and civilians get around, but services like stock exchanges
and ATMs. The country’s Shiyan-7 satellite, able to maneuver and dock with larger space objects, would in theory, experts say, also be able to
latch on to and disable enemy space assets. More recently, China has been testing satellite SJ-17. It moves around with precision at very high
altitudes — 22,000 miles above Earth. Satellites in orbit fly at tens of thousands of miles per hour. They possess the kinetic potency to shatter
anything in their path, essentially acting as kamikazes against another country’s satellite. The U.S. military worries this is what China has in mind
when developing satellites that can move so unusually in space. Advanced space weapons, killer robots, and the U.S. and China
preparing for World War III. It may all sound surreal, like a spectacular science fiction, but in the staid halls of the U.N., over
the draft documents bureaucrats pass around, they are exactly what countries are anticipating . What makes their
work more challenging than past international weapons bans is the preemptive nature of it, and the
technology involved that would make enforcement and verification difficult, if not impossible. Kayser knows time
is running out. “An AI arms race would have no winners,” he says. Preventing one from happening would depend on the
major powers. He isn’t optimistic. “They are not taking their responsibility to ensure that international peace
and security is maintained. They are actually taking steps that are dangerous and risky for international peace.”
Russia will ignore any LAWS ban – durable fiat doesn’t apply since the resolution
doesn’t fiat that a ban would be enforced or interpreted correctly
Tucker 17 [Patrick Tucker, Defense One Technology editor, 11-22-2017, "Russia to the United Nations:
Don’t Try to Stop Us From Building Killer Robots," Defense One,
https://www.defenseone.com/technology/2017/11/russia-united-nations-dont-try-stop-us-building-
killer-robots/142734/]/Kankee
Arms control advocates had reason for hope when scores of countries met at the United Nations in Geneva last week to discuss the future of
lethal autonomous weapons systems, or LAWS. Unlike previous meetings, this one involved a Group of Governmental Experts, a big bump in
diplomatic formality and consequence, and those experts had a mandate to better define lethal autonomy in weapons. But hopes for even a
small first step toward restricting “killer robots” were dashed as the meeting unfolded. Russia announced that it would
adhere to no
international ban, moratorium or regulation on such weapons. Complicating the issue, the meeting was run in a way that
made any meaningful progress toward defining (and thus eventually regulating) LAWS nearly impossible. Multiple
attendees pointed out that that played directly toward Russia’s interests. Russia’s Nov. 10 statement amounts to a lawyerly attempt to
undermine any progress toward a ban. It argues that defining “lethal
autonomous robots” is too hard, not yet necessary, and
a threat to legitimate technology development . “According to the Russian Federation, the lack of working samples
of such weapons systems remains the main problem in the discussion on LAWS…this can hardly be
considered as an argument for taking preventive prohibitive or restrictive measures against LAWS being a by
far more complex and wide class of weapons of which the current understanding of humankind is rather approximate,” it says and goes on to
warn that too much effort to ban lethal robots could have an unintended chilling effect on AI generally .
“The difficulty of making a clear distinction between civilian and military developments of autonomous
systems based on the same technologies is still an essential obstacle in the discussion on LAWS. It is hardly
acceptable for the work on LAWS to restrict the freedom to enjoy the benefits of autonomous technologies being the future of humankind.” An
attendee who did not feel comfortable providing a name on the record, given the highly sensitive nature of the talks, said that “ the
Russians are not interested in making progress on this." When asked if the lack of progress during the
meeting, an effect of the unusual way the meeting was run, seemed to serve Russia’s interests, the participant
responded: “Yes, of course.” Multiple attendees put much of the blame for that on Indian Ambassador Amandeep Singh Gil, the
chairperson of the Group of Governmental Experts, essentially, the UN official sanctioned to run the meeting. In both Gil’s comments and in a
position paper he put forward, he echoed aspects of the Russian position. More importantly, Gil approached the entire five-day meeting in such
a way that made any progress toward defining and thus, perhaps one day, regulating, killer robots very difficult, they said. Rather than look
at serious proposals and position papers put forward by governmental delegations, Gil presided over a chaotic and ultimately inconsequential
discussion of AI generally, barely touching on the stated purpose of the meeting during the five days. At one point, he even shut down
ambassadors and delegates who tried to turn the meeting back to the work of defining lethal robots. “A lot of states came prepared to talk about
definitions. That’s what the mandate was” said one participant. For a governmental delegation “to put out a position paper like that, it has to
get vetted through a lot of parts of your government… it was discouraging. It’s important that States feel like they’re vested in the process.”
That didn’t happen, said the participant. Other attendees noted that Russian
defense contractors, notably Kalashnikov, are already
marketing weapons with artificial intelligence features such as autonomous targeting and firing . Defining
a killer robot doesn’t seem to be an obstacle when the objective is selling them . “One of the things that's a bit
incongruous about Russia's position is that their own defense companies have made claims about developing
autonomous weapons: So while you have Russia saying ‘we shouldn't talk about these weapons because they don't exist,’ it sure looks
like Russian companies are racing to develop them ,” said Paul Scharre, a senior fellow and director of the
Technology and National Security Program at the Center for a New American Security . (Scharre is also the
author of the forthcoming book, Army of None: Autonomous Weapons and the Future of War.) He pointed to numerous instances where
Russian commanders had essentially announced both the intent and the willingness to develop the sorts of
weapons that they can’t define. “I would like to hear Russia clarify its position and intentions. The United States has a detailed policy
in place on how it intends to approach the issue of autonomous weapons,” he said. But Sam Bendett, an associate research analyst with the
Center for Naval Analyses’ Russia Studies Program and a fellow in Russia Studies at the American Foreign Policy Council, argued that the Russian
position was more nuanced than the strongest language in their statement suggests. “Russians are also unsure how exactly AI-driven military
robotics systems would function given that artificial intelligence in a battlefield capacity is still an evolving concept,” he said. But Bendett’s work
also documents growing Russian interest in developing and fielding weapons that use increasingly sophisticated AI. In 2014, the Russian
Ministry of Defense launched a comprehensive plan for the development of prospective military robotics through 2025. In 2016 the Russians
launched an annual conference, “Roboticization of the Armed Forces Of the Russian Federation.” Bendett believes that Russian defense
spending in AI will grow since the Ministry of Defense has at least 10 research centers looking at applications for autonomy in warfare. And of
course Russian President Vladimir Putin has even said that the nation that leads in AI will rule the world. “Russia taking a defensive stance
against an international body seeking to regulate weapons other than destructive nuclear bombs should not have been such a surprise. After
all, in many international forums, Russia
stresses the ‘sovereignty of nations free to pursue their own
political/military/economic course’ as a cornerstone of an international order they envision as a better alternative to the unipolar
world with the United States in the lead,” said Bendett.
China cheats too – they want to exploit a ban to get an advantage over the US
Kania 18 [Elsa B. Kania, adjunct senior fellow with the Technology and National Security Program at
CNAS and a doctoral student in Harvard University's Department of Government, 4-17-2018, "China’s
Strategic Ambiguity and Shifting Approach to Lethal Autonomous Weapons Systems," Lawfare,
https://www.lawfareblog.com/chinas-strategic-ambiguity-and-shifting-approach-lethal-autonomous-
weapons-systems]/Kankee
On April 13, China’s delegation to United Nations Group of Governmental Experts on lethal autonomous weapons systems announced
the “desire to negotiate and conclude” a new protocol for the Convention on Certain Conventional
Weapons “to ban the use of fully autonomous lethal weapons systems.” According to the aptly named Campaign to Stop
Killer Robots, the delegation “stressed that [the ban] is limited to use only.” The same day, the Chinese air force
released details on an upcoming challenge intended to evaluate advances in fully autonomous swarms of drones , which
will also explore new concepts for future intelligent-swarm combat. The juxtaposition of these announcements
illustrates China’s apparent diplomatic commitment to limit the use of “fully autonomous lethal weapons
systems” is unlikely to stop Beijing from building its own. Although momentum towards a ban on “killer robots” may seem
promising—with a total of twenty-six countries now supporting such a measure— diplomacy and discussion about autonomous
weapons may still struggle to keep up with technological advancement. Moreover, great-power militaries like the U.S.
and U.K. believe a ban would be premature . Even as multiple militaries are developing or have already attained autonomous
weapon systems, the U.N. group has yet to reach a consensus on what even constitutes a lethal autonomous
weapons system, “fully autonomous” or otherwise. And despite emerging consensus on the importance of human control of
these systems—however they might be defined—the U.S., Russia, Israel, France and the United Kingdom have
explicitly rejected proposals for a ban. Countries recognize that artificial intelligence is a strategic technology that could be
critical to future military power, so it is hardly surprising that major militaries may hesitate to constrain development ,
particularly at a time when rivals and potential adversaries are actively seeking an advantage . Why might
China, unlike the U.S. and Russia, have chosen to publicly support a ban? Clearly, the Chinese military is equally
focused on the importance of artificial intelligence in national defense, anticipating the emergence of a new
“Revolution in Military Affairs” that may transform the character of conflict. As my report “Battlefield Singularity: Artificial Intelligence, Military
Revolution, and China’s Future Military Power” described, the
Chinese military is actively pursuing a range of
applications, from swarm intelligence to cognitive electronic warfare or AI-enabled support to command
decision-making. While China’s engagement in the U.N. group should be welcomed, its objectives and
underlying motivations merit further analysis. China’s involvement is consistent with the country’s stated commitment under
its 2017 artificial intelligence development plan, which calls for China to “strengthen the study of major international common problems” and
“deepen international cooperation on AI laws and regulations.” In historical perspective, China’s integration into international security
institutions shows at least mixed success, as post-Mao China has proven willing in some cases to undertake “self-constraining commitments to
arms control and disarmament treaties,” as Iain Johnston’s research has demonstrated. However, China’s recent engagement
with
cyber issues reflects a mixed record , including the aggressive advancement of “cyber sovereignty,” which
reflects Beijing’s security priorities . In 2017, China’s reported rejection of the final report of the U.N. group
on information security contributed to the collapse of that process . Meanwhile, Beijing’s repeated
denouncements of U.S. “cyber hegemonism” (sic)—and calls for cooperation and a “community of shared future” in cyberspace
—have not constrained its own development of offensive cyber capabilities through the military’s new “Strategic
Support Force.” Will China seek to leverage this latest Group of Governmental Experts process to condemn U.S. efforts
without restraining its own development of new capabilities? China’s two position papers for the group indicate an interesting
evolution in its diplomatic posture on autonomous weapon systems, which remains characterized by a degree of strategic ambiguity and
apparent preference for optionality. The first paper, from the December 2016 session, declared, “China supports the development of a legally
binding protocol on issues related to the use of LAWS, similar to the Protocol on Blinding Laser Weapons, to fill the legal gap.” However, the
latest April 2018 position paper—released just a few days before its delegation called for a ban—did not include support for such
an agreement. It merely highlighted the importance of “full consideration of the applicability of general legal norms” to lethal autonomous
weapons. Notably, this latest position paper characterizes autonomous weapon systems very narrowly, with
many exclusions. China argues that lethal autonomous weapons are characterized by: lethality; autonomy, “which means absence of
human intervention and control during the entire process of executing a task”; “impossibility for termination” such that “once started there is
no way to terminate the device”; “indiscriminate effect,” in that it will “execute the task of killing and maiming regardless of conditions,
scenarios and targets”; and “evolution,” “through interaction with the environment the device can learn autonomously, expand its functions
and capabilities in a way exceeding human expectations” (emphasis added throughout). Banning
weapons systems with those
characteristics could be a symbol, while implicitly legitimizing the development of semi-autonomous or even fully
autonomous systems that do not possess such qualities . By such a standard, a weapons system that operates
with a high degree of autonomy but involves even limited human involvement , with the capability for
distinction between legitimate and illegitimate targets , would not technically be a LAWS, nor would a system
with a failsafe to allow for shutdown in case of malfunction. Interestingly, this particular definition is much more stringent than the Chinese
military’s own definition of the concept of “artificial intelligence weapon.” According to the dictionary of People’s Liberation Army Military
Terminology, an artificially intelligent weapon is “a weapon that utilizes AI to automatically [] pursue, distinguish, and destroy enemy targets;
often composed of information collection and management systems, knowledge base systems, assistance to decision systems, mission
implementation systems, etc.,” such as military robotics. Because this definition dates back to 2011, the Chinese military’s thinking has evolved
as technology has advanced. It is important, therefore, to consider that there
may be daylight between China’s diplomatic
efforts on autonomous weapons and the military’s approach . The Chinese military does not have a legal
culture analogous or directly comparable to that of the U.S. military. It’s also important to recognize that Beijing’s
military has traditionally approached issues of international law in terms of legal warfare, seeking to exploit
rather than be constrained by legal frameworks. The military’s notion of legal warfare focuses on what it calls
seizing “legal principle superiority” or delegitimizing an adversary with “restriction through law.” In line with this
approach, China might be strategically ambiguous about the international legal considerations to allow
itself greater flexibility to develop lethal autonomous weapons capabilities while maintaining rhetorical
commitment to the position of those seeking a ban—as it does in its latest position paper. (The paper does articulate concern for the
capability of LAWS in “effectively distinguishing between soldiers and civilians,” calling on “all countries to exercise precaution, and to refrain, in
particular, from any indiscriminate use against civilians.”) It is worth considering whether China’s
objective may be to exert
pressure on the U.S. and other militaries whose democratic societies are more sensitive to public opinion on
these issues. Despite the likely asymmetries in its approach to law, it seems unlikely that the military would unleash fully autonomous “killer
robots” on the battlefield. Beyond the fact that the AI technology remains too nascent and brittle for such an approach to be advantageous, the
military will likely concentrate on the security and controllability of its weapons systems. The core of China’s military command culture
prioritizes centralized, consolidated control. In the absence of a culture of trust, the military is hesitant to tolerate even the uncertainty
associated with giving humans higher degrees of autonomy, let alone machines. Even if the military someday trusts artificial intelligence more
than humans, it may still face issues of control, given the potential unpredictability of these complex technologies. (As the armed wing of
the Chinese Communist Party, the military is required to “remain a staunch force for upholding the CCP’s ruling position” and preserve social
stability. A chatbot in China was taken offline after its answer to the question “Do you love the Communist Party?” was simply “No.”) China’s
position paper highlights human-machine interaction as “conducive to the prevention of indiscriminate killing and maiming … caused by
breakaway from human control.” The military appears to have fewer viscerally negative reactions against the notion of having a human “on”
rather than “in” the loop (i.e., in a role that is not directly in control but rather supervisory), but assured controllability is likely to remain a
priority. As the U.N.’s autonomous-weapons group continues its work, China’s evolving approach to these issues—including whether Beijing will
aim for rhetorical dominance on the issue—will remain an important bellwether of how a great power that aspires to possess a world-class
military may approach the legal and ethical concerns inherent with the advent of artificial intelligence. While continued engagement with
Beijing will remain critical, it is also important to recognize that the Chinese military will almost certainly continue to pursue
military applications of artificial intelligence (likely with limited transparency). Unsurprisingly, China’s position
paper emphasizes the importance of artificial intelligence to development and argues that “there should not be any pre-set
premises or prejudged outcome which may impede the development of AI technology.” At the same time, the
boundaries between military and civilian applications of AI technology are blurred—especially by China’s
national strategy of “civil-military fusion.” China’s emergence as an artificial intelligence powerhouse may
enable its diplomatic leadership on these issues, for better and worse, while enhancing its future military power.
Treaties and/or international verification can’t check – China will circumvent with
strict AWS definitions and dual-use weapons
Kania 20 [ELSA B. KANIA, adjunct senior fellow with the Technology and National Security Program at
CNAS and a doctoral student in Harvard University's Department of Government, 04-2020, ““AI
WEAPONS” IN CHINA'S MILITARY INNOVATION,” Brookings, https://www.brookings.edu/wp-
content/uploads/2020/04/FP_20200427_ai_weapons_kania_v2.pdf]/Kankee
Yet an inherent challenge of evaluating progress and capabilities is that the level of autonomy, relative to
possibility of remote control, cannot be readily assessed by appearance alone. Moreover, as China strives to close
the gap with the United States, its efforts are complicated by persistent bottlenecks in its indigenous defense industrial capabilities.51 While
cyber theft and industrial espionage enabled by a range of techniques of tech transfer have enabled and accelerated Chinese military
modernization, persistent obstacles and bottlenecks remain, including shortfalls in the technical workforce and engineering experience.52
Chinese leaders are cognizant of critical weaknesses, such as the semiconductors, particularly specialized developments in AI chips, necessary
to enable and deploy AI/ML systems, and actively investing to overcome them. While
there is currently no direct evidence
that the PLA has formally fielded a weapons system fully consistent with the definition of “AI weapon,” a
number of systems are analogous or comparable in their functionality. The Chinese defense industry’s
attempts to make cruise and ballistic missiles more “intelligent” build upon work on automatic target
recognition that predates the recent concern with autonomous weapons.53 The Chinese military has reportedly converted
older models of tanks to operate via remote control or with some degree of autonomy.54 There are also reports that
variants of aircraft have been modified to be operated via remote control or potentially autonomously, perhaps to
overwhelm air defenses in a potential invasion scenario against Taiwan.55 The PLAN has tested and
operated a range of undersea gliders and unmanned underwater vehicles (UUVs) for scientific or military missions,56
including the HN-1 glider used in exercises in the South China Sea in 2018.57 Often, limited technical information is
available, rendering the disclosure of capabilities and signaling — including the potential for misdirection
or disinformation — important to evaluate carefully.58 FUTURE TRENDS IN RESEARCH AND DEVELOPMENT These advances in
PLA capabilities are taking shape through the efforts of Chinese military research institutes, the Chinese defense industry, and the emerging
ecosystem of commercial enterprises supporting military-civil fusion.59 For instance, the Key Laboratory of Precision Guidance and Automatic
Target Recognition at the PLA’s National University of Defense Technology researches a range of automatic target recognition techniques. The
available technical literature also points to interest in applying neural networks to the guidance of
hypersonic glide vehicles, enabling adaptive control and greater autonomy.60 For new directions in research, the
Tianjin Binhai Artificial Intelligence Military-Civil Fusion Center was established in partnership with the PLA’s Academy of Military
Science, and pursues developments in autonomy and the capacity for coordination of unmanned systems,
such as in undersea drones.61 Future Chinese aerospace capabilities will be enabled and enhanced by research currently underway
within the major state-owned defense conglomerates. Starting in 2015, the China Aerospace Science and Industry Corporation (CASIC) 3rd
Academy 35th Research Institute began pursuing breakthroughs in core technologies including target detection and recognition techniques
based on deep learning and deep neural network compression, and smart sensors, combining data from multiple radars.62 Notably, in 2016,
this CASIC team organized an innovation competition for “AI-Based Radar Target Classification and Recognition,”63 the Chinese defense
industry’s first major event of this kind; it involved companies and universities with AI research proficiency applying that expertise to finding
intelligent processing solutions for targeting.64 According to a senior missile designer from CASIC, “our future cruise missiles will have a very
high level of AI and autonomy,” such that commanders will be able “to control them in a real-time manner, or to use a fire-and-forget mode, or
even to add more tasks to in-flight missiles.”65 Future
missiles might have increasingly sophisticated capabilities in
sensing, decisionmaking, and implementation — even potentially gaining a degree of “cognition” and
continual learning capability.66 Significantly, the PLA’s development of hypersonic weapons systems has also
incorporated advances in techniques for greater autonomy and adaptive control.67 Chinese naval
capabilities may be augmented by advances in military robotics and autonomy. During a September 2018 defense
exhibition, a subsidiary of the China Shipbuilding Industry Corporation revealed “JARI,” a multi-purpose unmanned surface
vessel reportedly designed for use by the PLAN and also intended for export as a warship.68 CSIC has also
displayed the “Sea Iguana” (or Marine Lizard, 海蜥蜴), an unmanned surface vehicle (USV) that could be leveraged in
support of future amphibious operations.69 Reportedly, the PLAN and Chinese defense industry are also
developing AI-enabled submarines to advance Chinese capabilities in undersea warfare,70 through a classified
military program disclosed in English-language reporting, the 912 Project.71 While fully autonomous submarines appear to remain a long-term
objective, the introduction of AI/ML techniques for target detection and decision support — including to improve acoustic signal processing —
could prove more feasible in the meantime.72 Beyond state-owned defense conglomerates, a
growing number of new
contenders are pursuing advances in unmanned and autonomous weapons systems, from companies, such as
Yunzhou Tech, to leading universities, including the Beijing Institute of Technology.73 PRC ARMS SALES AND APPROACHES TO GLOBAL
GOVERNANCE Increasingly, U.S. officials
express concerns about Chinese development and potential
proliferation of unmanned systems and the capabilities for autonomy. In November 2019, U.S. Secretary of Defense
Mark Esper warned that Chinese weapons manufacturers were selling drones to the Middle East “advertised as
capable of full autonomy, including the ability to conduct targeted strikes.”74 While not specifying which weapons
systems provoked concern, it appears he may have had in mind a weapons system produced by the Chinese company Ziyan.75 Only a month
before, the Chinese delegation to the UN General Assembly Thematic Discussion on Conventional Arms Control argued, “China believes it is
necessary to reach an international legally-binding instrument on fully autonomous lethal weapons in order to prevent automated killing by
machines.”76 Yet like other major powers, China does not seem eager to tie its own hands when it comes to
the research, development, and potential deployment of autonomous weapons systems.77 This was not the first
time China’s diplomatic proclamations have contradicted its apparent intentions or activities around
autonomous weapons systems. During the April 2018 session of the UN Group of Governmental Experts (GGE) on Lethal
Autonomous Weapons Systems (LAWS), the Chinese delegation articulated an intention to ban “the use of fully
autonomous lethal weapons systems.”78 However, the definition the Chinese delegation provided was
convoluted enough to exclude the types of weapons systems most militaries, including the PLA, are
actually developing.79 This very restrictive definition includes the following characteristics: (1) lethality, (2)
autonomy, defined as “the absence of human intervention and control during the entire process of
executing a task,” (3) “impossibility of termination” once the device is set in motion, (4) “indiscriminate
effect,” “regardless of conditions, scenarios and targets,” and (5) “evolution,” such that “the device can learn
autonomously through interaction with the environment, expanding its functions and capabilities in a way exceeding
human expectations.”80 This definition of autonomy is perplexing yet revealing, given that no militaries appear
interested in pursuing weapons systems that entirely remove the possibility of termination by a human
operator.81 Moreover, the phrasing “indiscriminate effect” implies the capability as defined would inherently
violate the requirement of distinction from the law of armed conflict. Finally, the notion of “evolution”
seems to envision online machine learning that is ongoing in the operational environment,82 which could introduce
vulnerabilities and challenges, including the potential for exploitation by adversaries attempting to
manipulate that process of learning.83 The Chinese delegation to the UN GGE on LAWS has not further clarified its position. By
claiming to support a weapons systems ban with these extreme characteristics, the Chinese government
appeared to be positioning itself in support of the ban movement,84 while still continuing to pursue a
broad array of autonomous weapons systems. China’s approach to international law can be
characterized by “legal warfare” (法律战), seeking to exploit legal mechanisms to constrain and
delegitimize adversaries, while circumventing legal constraints itself.85 The PRC position on these issues may evolve,
given China’s attempts to become more actively involved in shaping global governance of AI, from technical standards to debates on law,
norms, and ethics.86 China’s
ambiguous definition of autonomous weapons systems also poses a potential
challenge to arms control. The U.S. Department of Defense Directive 3000.09 regulates the development and use of autonomous and
semi-autonomous functions in weapons systems, and defines autonomous weapons systems as those that “once activated, can select and
engage targets without further intervention by a human operator.”87 The Chinese military, conversely, has no known parallel
to DOD Directive 3000.09, and employs various definitions of autonomous weapons, AI weapons, or what the Chinese military has called
“intelligentized” weapons. In some cases, Chinese military and defense researchers reference the concept of “levels of intelligence” (智能等级)
when discussing the “intelligent capabilities” of a specific system.88 Different concepts and terminology between the U.S. and PRC — for
instance, the divergence between Chinese notions of human-machine collaboration (人机协同) and human-machine integration (人机融合),89
and the American emphasis on human-machine teaming90 — will merit clarification.91 IMPLICATIONS FOR GLOBAL SECURITY AND STABILITY
There’s virtually no difference between SAWS and LAWS, which ruins any arms control
effort
Gubrud 14 [Mark Gubrud, adjunct professor in the Curriculum in Peace, War & Defense at the
University of North Carolina with a PhD in physics from the University of Maryland, and was a
Postdoctoral Research Associate in the Program on Science and Global Security at Princeton University,
5-9-2014, "Autonomy without Mystery: Where do you draw the line?," 1.0 Human, http://gubrud.net/?
p=272]/Kankee
Short-Circuiting Arms Control The Pentagon’s distinction between semi-autonomous and autonomous weapons
would also fail to support meaningful arms control. The first type of SAWS includes systems which have every
technical capability for fully autonomous target selection and engagement, requiring only a one-bit
signal, the “Go” command, to initiate violence. A SAWS of this type could be converted to a fully autonomous weapon
system (FAWS = AWS in the Pentagon’s lexicon) by a trivial modification or hack. Thus, permitting their development,
production and deployment would mean permitting the perfection and proliferation of autonomous
weaponry, with only the thinnest, unverifiable restraint remaining on their use as FAWS. For the second
type, there is not even a thin line to be crossed. Fire and Forget
Lack of US AI weapon leadership is bad – it causes rapid-fire nuclear prolif, loss of US
hegemony, and the breakup of alliances
Kallenborn 19 [Zachary Kallenborn, researcher specializing in chemical, biological, radiological, and
nuclear weapons, terrorism, and drone swarms, 9-3-2019, "What if the U.S. Military Neglects AI? AI
Futures and U.S. Incapacity," War on the Rocks, https://warontherocks.com/2019/09/what-if-the-u-s-
military-neglects-ai-ai-futures-and-u-s-incapacity/]/Kankee
Advances in computing power, data collection, and machine learning suggest a new era of AI prominence. Advanced unmanned systems,
enhanced by sophisticated AI, allow militaries to replace expensive, multi-mission platforms with low-cost, single mission systems. Managing
masses of autonomous systems will be a challenge, but improvements in AI will make that easier too. Some AI experts also believe hardware
advances will keep AI progress steady in the near term. If AI dominates the battlefield, a United States without robust AI
capabilities would lose its conventional superiority. Although the United States would retain considerable capabilities in its
existing ships, tanks, and aircraft, in a full-scale conflict, adversaries could overwhelm U.S. forces with masses of drones.
Conventional weaknesses would exacerbate threats to U.S. alliance networks, as U.S. security guarantees
would be weaker. Allied states already seek increased strategic autonomy. Nonetheless, the United States remains largely secure from
existential harm due to its nuclear deterrent and asymmetric information warfare. Threats of nuclear annihilation could still shield the United
States against existential threats. The United States would retain nuclear parity with Russia and nuclear superiority to China. If even limited
North Korean nuclear weapons can plausibly hold back the United States, American nuclear threats could plausibly hold back a new robotic
superpower. Unfortunately, reliance on nuclear deterrence in this scenario would encourage broader proliferation.
The United States likely would be forced to abandon commitments under the Nuclear Nonproliferation Treaty to
work towards denuclearization. Some states may develop nuclear weapons as the nonproliferation regime
collapses. Asymmetric information warfare would allow the United States to resist some adversary conventional threats. The United States
could create and disseminate fake images or videos designed to manipulate adversary AI software. The United States could use cyber means to
put such images in data collections used to train adversary AI algorithms, or subtly alter robotic control systems to induce mistakes or slow
algorithms. Such sabotage would create weaknesses in any robotic system using them. Anti-space weapons may also disable or damage orbital
or ad hoc satellite networks used to control adversary robots. Electronic warfare capabilities might be able to defeat some older robotic
systems or send false signals to confuse or control adversary drones. But in the world of AI Explosion, advances
in autonomy will
limit the harm from information-based attacks. New autonomous systems are less dependent on
information — more autonomous platforms and weapons mean less need for external commands via
satellite or electronic signal. Nonetheless, U.S. information capabilities might inflict sufficient harm to prevent adversaries from
achieving some objectives. The United States would also be likely to face new homeland security risks from non-state actors. Already, open-
source information allows non-state actors to build crude robotic weapons. State sponsors and open-source resources may be sufficient to
cause considerable harm. Novel forms of attack against chemical facilities, airports, and stadiums could cause mass casualties with simple
drones. AI Trinity The year is 2040: AI and robotics threaten nuclear deterrence and dominate the battlefield. Swarms of drones guard national
borders with a mixture of advanced air and missile defenses, while massive undersea swarms rove the sea in search of nuclear submarines.
Cheap drone-mounted sensors virtually eliminated costly advantages in stealth, made the ocean vastly more transparent, and created
significant uncertainty in submarines as reliable second-strike platforms. Other AI capabilities help manage the system, optimize processes to
keep costs low, and reduce false positives and negatives. A series of short but bloody conflicts between the United States, China and Russia in
the late 2030s raised specters of new great power conflict unconstrained by nuclear weapons. AI could threaten the credibility of
the U.S. nuclear deterrent. Although constant, real-time tracking of all nuclear submarines is difficult to imagine due to the massive
size of the oceans, technology improvements and some luck could allow an adversary to know the locations of
second-strike platforms for long enough to eliminate them in a first strike. Swarms of undersea drones and
big data analysis offer great potential for new and improved anti-submarine platforms, weapons, and sensor
networks. Already, some missile defenses use simple automation that could be improved with AI. Drones can also help
track missiles, serve as platforms to defeat them, or simply collide with incoming missiles and aircraft. AI
improvements generally enable more advanced robotic weapons, more sophisticated swarms, and better insights into data. Of course, the long
history of failed attempts and huge costs of missile defense suggest elimination of nuclear deterrence is highly unlikely, but all
of these
developments could add up to serious risks to the reliability of nuclear deterrence. In such a world, a United
States without robust military AI capabilities is extremely insecure. The United States has neither
conventional superiority nor a reliable nuclear deterrent, and must drastically rethink American grand
strategy. U.S. extended deterrence guarantees would be far less effective and some states under the
umbrella would likely seek their own nuclear weapons instead. South Korea and Saudi Arabia would
likely become nuclear weapons states due to their established civilian nuclear programs, high relative
wealth, and proximity to hostile powers in possession or recent pursuit of nuclear weapons. The United
States could expand its nuclear arsenal to mitigate the harms of a less reliable deterrent, but that would
require abandoning the New Strategic Arms Reduction Treaty and other arms control treaties. Ensuring
national security would mean avoiding conflict or focusing on homeland defense — rather than a forward defense
posture with forces stationed on the Eurasian landmass — to increase adversary costs. Diplomacy, soft power, and international institutions
remain key to national security. However, a soft-power strategy would be extremely challenging. The factors
that could inhibit
development of AI — domestic dysfunction, high debt, and international isolation — would cause considerable
harm to U.S. soft power. American soft power is arguably already in decline and funding for the State Department and U.S. Agency for
International Development have been cut considerably. Likewise, any abandonment of arms control treaties to support the nuclear arsenal
would cause further damage. In short, in AI Trinity, a United States without AI is no longer a serious global power. AI
Fizzle
It also causes China to undercut the US LAWS market share with inferior products
more likely to cause accidents and violate the laws of war – they’ll also be exported to
terrorists and rogue states
Kania 20 [ELSA B. KANIA, adjunct senior fellow with the Technology and National Security Program at
CNAS and a doctoral student in Harvard University's Department of Government, 04-2020, ““AI
WEAPONS” IN CHINA'S MILITARY INNOVATION,” Brookings, https://www.brookings.edu/wp-
content/uploads/2020/04/FP_20200427_ai_weapons_kania_v2.pdf]/Kankee
IMPLICATIONS FOR GLOBAL SECURITY AND STABILITY The advent of AI/ML systems and greater autonomy in defense will
impact deterrence and future warfighting among great powers. This military-technological competition
could present new threats to strategic stability, which Chinese military officers and strategists are starting to recognize and
debate.92 Given the emphasis of Chinese military leaders on pursuing innovation to catch up with and
surpass more powerful militaries, namely that of the United States, there are reasons for concern the Chinese military
may fail to dedicate adequate attention to issues of safety and testing in the process. The advent of greater
autonomy in weapons systems introduces added complexity, and complex systems tend to be more
prone to failures and accidents, particularly in contested environments.93 The PLA has not yet released any public
policies or official statements that describe its practices for testing.94 However, at least a limited number of Chinese experts and military
scientists are starting to dedicate more attention to risks associated with the development and use of autonomous weapons systems.95 The
Chinese military lacks contemporary operational experience, and its insufficient firsthand knowledge of
the “fog of war” may result in mistakes or unrealistic expectations about the prospects for technology
on the battlefield. For instance, Chinese assessments of American intentions and capabilities tend to be
relatively exaggerated. The PLA does have a history of and experience with the testing and verification of weapons systems, including
at several bases dedicated to these activities.96 Yet a significant difference exists between testing and training
compared to the unpredictability of accidents or unintended engagements that can occur on the
battlefield.97 The PLA appears to be relatively pragmatic about issues of safety and reliability with new technologies. However, a risk
remains that it might be more likely to make mistakes given its lack of operational experience, for which
realistic training and advanced simulations can only partially compensate.98 Absent official policy or
guidance from Chinese military leaders, it is difficult to anticipate how the PLA will approach issues of
human control over autonomous systems, particularly as these capabilities progress and evolve. Historically, Chinese leaders
have prized centralized, consolidated control over the military. They may therefore be generally disinclined to relinquish control to individual
humans, let alone machines, fearing loss of the Party’s “absolute command.”99 At the same time, Chinese military scholars and scientists
appear to be relatively pragmatic in how they approach and discuss the nuances of having a human in, on, or out of the loop. Given technical
constraints and uncertainties, there are reasons to expect in the near term that the Chinese military will keep humans “in the loop” or at least
“on the loop,” but it is harder to anticipate whether the PLA or any military will maintain that position if conditions and technical considerations
change.100 However, to date, discussion of meaningful human control appears to be less established in Chinese
writings than in U.S. debates on these topics.101 There is no clear evidence indicating that the Chinese military is more inclined to
pursue autonomy and/or automation in a manner that removes humans from decisionmaking relative to other militaries. In the near term,
human involvement in command and control appears to be deemed necessary for technical reasons, and the Chinese military is actively
exploring concepts that leverage synergies between human and artificial intelligence, such as that of “human-machine” intelligent integration
(人机智能融合).102 In the future, operational expediency concerns could supersede safety if having a human in
the loop became a liability, as greater involvement of AI systems in command decisionmaking is
considered potentially advantageous.103 Yet at the level of strategic decisionmaking, including for decisions that involve the
employment of nuclear weapons,104 it is all but certain that one human will remain “in the loop,” for the foreseeable future: Xi Jinping. While
the debate over lethal autonomous weapons systems raises complex legal and ethical issues, such concerns
about the impact of
advances in autonomy are less prevalent or prominent in Chinese military discourse and writing to date.
The PLA lacks the U.S. military’s overseas operational experience and its institutionalized architecture of
legal expertise to apply the law of war in actual operations. Nonetheless, as the PLA looks to expand global operations
and takes on new missions defending overseas interests, its attention to these concerns has necessarily increased. At present, the Chinese
military lacks a specialty or career trajectory directly analogous to the U.S. military’s Judge Advocate General’s Corps.105 However, certain
Chinese military officers with legal expertise have advocated for a more direct incorporation of legal experts into the chain of command to
provide legal support for operations and decisionmaking.106 The PLA’s Academy of Military Science also organized a conference in September
2019 to address the legal issues that arise with military applications of AI.107 The Chinese government has launched a charm offensive on AI
ethics, including releasing new principles that echo debates on “AI for good.”108 Yet reasons remain to question
whether Chinese
leadership will actually prioritize and institutionalize these commitments in ways that create substantive
constraints.109 China’s stated commitment to ethical AI use principles is contradicted by the CCP
prioritization of AI as an instrument for maintaining social control and coercion, enabling crimes against
humanity in Xinjiang and beyond.110 Certain Chinese military scholars have criticized the ethics of the U.S. military’s employment
of unmanned systems, yet the Chinese government may also seek to use U.S. precedent as justification for
similar PRC actions in the future.111 Ultimately, the PLA itself is the armed wing of the CCP, bound to “obey the Party’s command” and
ensure regime security.112 A notable nexus can exist between security/defense (安防) applications and the leveraging of these technologies
for military purposes, including techniques for monitoring and manipulating public opinion with applications in influence operations.113 The
proliferation of AI-enabled and/or autonomous weapons systems presents a range of risks to global
security. China could export this technology to potential adversaries or militaries with poor human rights
records, undermining U.S. values and interests. Occasionally, Chinese armed drones have experienced problems in their
performance, including crashing in some cases.114 However, exports may facilitate data and metrics gathering for
performance improvements.115 Moreover, the availability of these technologies to nonstate actors could
empower terrorist organizations.116 The Islamic State group has already used Chinese drones —
manufactured by DJI — for surveillance and as improvised explosive devices.117 Beyond stalwarts in the arms
industry, a growing number of new enterprises are entering the field, advertising and exporting weapons systems said to possess some level of
autonomy. To date, over 90% of armed drone sales have been by Chinese companies.118 To the extent this trend continues, China will also
drive the diffusion of AI-enabled and autonomous weapons systems. POLICY OPTIONS AND RECOMMENDATIONS POLICY OPTIONS AND
RECOMMENDATIONS The United States must confront the prospect of long-term competition, and at worst even the
potential for conflict, with the People’s Republic of China. At the same time, today’s technological transformations
present new risks of accidents or unintended escalation. In response to these challenges, the U.S. military and national
security policymakers should consider the following recommendations: • Improve intelligence and awareness of Chinese military and
technological advancements. To mitigate the risks of surprise, the United States must continue to track and monitor new directions in Chinese
military modernization. In particular, the U.S. intelligence community should improve its capacity to leverage open-source intelligence (OSINT)
techniques and reprioritize targeting of collection activities as necessary.119
Contention 3: International Humanitarian Law
LAWS lead to global nuclear disarm – they maintain MAD without the existential risk of
nuclear winter
Umbrello et al. 18 [Steven Umbrello, Managing Director of the Institute for Ethics and Emerging
Technologies with a Honours Bachelor of Arts in philosophy from the University of Toronto, Phil Torres,
Affiliate Scholar at the Institute for Ethics and Emerging Technologies with a master's degree in
Neuroscience from Brandeis University, and Angelo F. De Bellis, faculty at the University of Edinburgh,
12-18-2018, ”The future of war: could lethal autonomous weapons make conflict more ethical?,” AI &
SOCIETY, https://doi.org/10.1007/s00146-019-00879-x]/Kankee
More generally speaking, the growing use of UAVs in conflict situations is consistent with a broader trend toward
high-precision weaponry and away from larger, more destructive weapons like those in the world’s nuclear
arsenals (Wilson 2013). There are some reasons for welcoming this shift. For example, the use of high-precision weapons like
LAWs to achieve a state’s military objectives could reduce the probability and proportion of indiscriminate harm,
thus violating the LoW and “rules of engagement” (RoE) less than might otherwise have been possible. Even
more, the “ease-of-use” of LAWs that are fully autonomous could enhance the “balance of terror” that
prevents conflict from breaking out by providing a credible means for retaliation: “If you strike me first, I
will unleash a swarm of LAWs that devastate your infrastructure, poison your streams, set fire to your farms,
destroy your armies, and assassinate your leaders.” The precision and effectiveness of LAWs could also
accelerate the process of nuclear disarmament, seeing as the conception of LAWS regards them as agents capable of conventional
weapons use rather than non-conventional weapons platforms. First, consider that research on the potential climatic consequences
of a nuclear war resulted in the replacement of MAD (“mutually-assured destruction”) with SAD (“self-
assured destruction”). The reason is that an exchange of nuclear weapons—even a regional one [citation]—could
initiate a “nuclear winter” that causes global agricultural failures, widespread starvation, the spread of
infectious disease, and other catastrophic sequelae that cannot be contained within national borders (Mills et al.
2014; Xia et al. 2015). Consequently, a nuclear war would all but guarantee the self-annihilation of states involved. As
Seth Baum (2015) notes, though, LAWs could provide a kind of “winter-safe deterrence” by providing states with
a credible threat of retaliation without the global catastrophic risks of nuclear conflict. Thus, LAWs could
render the world’s nuclear arsenals irrelevant and, in doing so, lower the overall risk of human annihilation.
1.3 Distinguishing valid targets
LAWS are way more ethical than humans
Umbrello et al. 18 [Steven Umbrello, Managing Director of the Institute for Ethics and Emerging
Technologies with a Honours Bachelor of Arts in philosophy from the University of Toronto, Phil Torres,
Affiliate Scholar at the Institute for Ethics and Emerging Technologies with a master's degree in
Neuroscience from Brandeis University, and Angelo F. De Bellis, faculty at the University of Edinburgh,
12-18-2018, ”The future of war: could lethal autonomous weapons make conflict more ethical?,” AI &
SOCIETY, https://doi.org/10.1007/s00146-019-00879-x]/Kankee
2.1 Human flaws and foibles Yet, we would argue, such positions are predicated on an unfounded fear that taking
control away from humans will enable robotic weaponry to demolish current, human involved warfare practices.
Extrapolating techno-development trends into the future, it is reasonable to expect future robotic weapons to acquire the
capacity to reliably and accurately differentiate between combatants and noncombatants (Sharkey 2012;
Egeland 2016); this could even occur in the near future (see Guizzo 2016). Indeed, Ronald Arkin (2008) anticipates such
technologies—in particular, recognition software—to not only be developed but surpass human performance
capabilities (see also O’Meara 2011; Egeland 2016). As he writes, “we must protect the innocent noncombatants in the
battlespace far better than we currently do. Technology can, must, and should be used toward that end.” Like Nadeau, Arkin
believes that moral LAWs would act in an ethically superior way to humans in war, saying that: The
commonplace occurrence of slaughtering civilians in conflict over millennia gives rise to my pessimism in
reforming human behaviour yet provides optimism for robots being able to exceed human moral
performance in similar circumstances (Arkin 2015). One must also take into account the consequences of humans personally engaging in
warfare. Historical records, including those of concurrent military engagements, recount numerous acts of
barbarism as a result of the harsh conditions that combatants are exposed to (Arkin 2015). In fact, Lin et al. (2008)
discuss how one of the most attractive prospects of LAWs is their inability to be affected by emotions on the
battlefield (Lin et al. 2008). It is the emotional distress that often causes combatants to mistreat the enemy and
commit war crimes. Hence, the introduction of LAWs that are unaffected by such emotional stress serves as an
incentive for continued development (Klincewicz 2015).3 Second, the emotional and physical pressures that
human combatants must endure during wartime have performance costs. The fatigue of a long and drawn-out battle
affects the ability of individual soldiers to perform optimally, and thus affects the accuracy of their shots
(Burke et al. 2007; Nibbeling et al. 2014). LAWs are naturally unaffected by similar physical pitfalls and can always—
as long as the physical infrastructure is designed optimally from the start—permit the LAWs to continually perform accurately and
as expected. The ability for LAWs to engage in unwavering, precise combat also resolves some ethical issues that arise
from human-waged war. In light of the fact that LAWs do not possess emotions to guide their behaviors or personal stakes that affect
their combat approaches, LAWs will always perform duties accurately under even the most physically—or to a
human, emotionally—stressful conditions, thus enabling them to, at least more often than not, kill in a more humane
manner. LAWs can be programmed to only engage targets in manners deemed most ethical based on the
dynamics of war at the time of combat: the changing environment, the weapons being used by both the
aggressor and the defender, and the characteristics of the target (human, robot, or physical structure). Already,
computerized weapons platforms can engage targets far more accurately than any human counterpart can
(Geibel 1997; Shachtman 2007; Katz and Lappin 2012; United States Navy 2017). Strong arguments can be levied that LAWs outfitted with
such weapons platforms could engage in otherwise normal wartime duties but in a means that is far more accurate and
thus ethical4 as a consequence of LAWs’ technological superiority. Part of this ethical prowess exhibited by LAWs is not only because they
never tire, but because they are impervious to the psychological shortcomings of humans. Though a contentious topic,
several high-profile cognitive psychologists suggest that humans fabricate reasons for their actions after
committing them (Davidson 1982; Nadeau 2006). Thus, it is human to be irrational, to make unreasoned decisions
toward an action that is then validated after carrying through. Such is not the nature of a robot. As mentioned, LAWs
do not have any particular affinity to or personal interests in surviving battle; they do not have any drive
to exhibit particular harshness against enemies of a certain culture; and they do not, outside of their goals,
worry about winning the war and heading back home after using any unsavory methods to do so. What
they do mind is their particular set of rules, their value-laden code that dictates how they are to conduct themselves in an ethical
manner during combat. In sum, the two above arguments, (1) the lack of an agreed-upon universal moral framework
coupled with (2) the emotional and psychological impacts of war on humans and the consequent tragedies
and irrational behaviors that follow, provide a strong case for the development and utilization of an emotionally
uncompromisable artificial moral combatant—a moral LAW.
LAWS are the only ethical agents during wartime – they have superior decision making
and free will
Umbrello et al. 18 [Steven Umbrello, Managing Director of the Institute for Ethics and Emerging
Technologies with a Honours Bachelor of Arts in philosophy from the University of Toronto, Phil Torres,
Affiliate Scholar at the Institute for Ethics and Emerging Technologies with a master's degree in
Neuroscience from Brandeis University, and Angelo F. De Bellis, faculty at the University of Edinburgh,
12-18-2018, ”The future of war: could lethal autonomous weapons make conflict more ethical?,” AI &
SOCIETY, https://doi.org/10.1007/s00146-019-00879-x]/Kankee
LAWS increase accountability for militaries
Müller 16 [Vincent C. Müller, Professor of Philosophy at Eindhoven University of Technology, 2016,
“Autonomous Killer Robots Are Probably Good News,” Ashgate
https://www.researchgate.net/publication/289796125_Autonomous_killer_robots_are_probably_good
_news/link/5aa7f2960f7e9b0ea30797b7/download]/Kankee
3.2.3. Narrowing the responsibility gap The responsibility framework outlined above shows how responsibility should be ascribed for many of
the wrongful killings that could be committed by killer
robots. The technology gives rise to a related and further beneficial
effect, which is often not noted. Holding someone accountable for their action, e.g. for actual conviction
for a war crime requires reliable information—which is often unavailable. The ability to acquire and store
full digital data records of LAWS’ action and pre-mission inputs allows a better determination of the
facts, and thus of actual allocation of responsibility, than is currently possible in the ‘fog of war’. As well as
allowing allocation of responsibility, the recording of events is also likely to diminish the likelihood of wrongful
killings. There is already plenty of evidence that, for example, police officers who have to video their own
actions are much less likely to commit crimes. So, killer robots would actually reduce rather than widen
responsibility gaps. 3.2.4. Regulation and standards The foregoing has the following implication: moral interest should be focused on the
determination of the technical standards of reliability which robots—including killer robots—should meet. The recent EU ‘RoboLaw’ report
makes a parallel point, in arguing that we should resist the urge to say that ‘robots are special’ in terms of responsibility. Rather, we should
adopt a functional perspective and see whether the new technology really does require new legal regulation, and in which areas (based on
Bertolini 2014; Palmerini et al. 2014: 205f). This seems to be a move in the right direction: We already devise automated systems (e.g.
automated defence of ships against air attacks) where the ‘rules of engagement’ are put into software. The same ‘due care’ is to be expected
for the manufacture and use of LAWS. Just like for civil autonomous cars, we need to specify standards that LAWS manufacturers must abide
by. These standards must ensure that the robot acts according to the principles of distinction and proportionality (this is already possible now if
one thinks of targeting tanks, ships, planes or artillery, for example). Both manufacturing and distributing LAWS that do not abide by these
standards would be a war crime. If a killer robot is manufactured with due care according to these standards but commits a war crime, due to
use in situations for which it was not designed or licensed, the crime is the responsibility of the soldier/user. The responsible person for a
particular command or action can be identified in the military chain of command – this is a deeply entrenched tradition. Finally, if the soldiers
can show that they exercised due care, then the deaths are accidents.
LAWS reduce civilian deaths and improve adherence to international law
Arkin 18 [Ronald Arkin, Regents' Professor of computing at the Georgia Institute of Technology, 01-
2018, “Lethal Autonomous Systems and the Plight of the Non-combatant,” AISB Quarterly,
https://www.cc.gatech.edu/ai/robot-lab/online-publications/aisbq-137.pdf]/Kankee
There’s a moral obligation to use killer robots
Arkin 18 [Ronald Arkin, Regents' Professor of computing at the Georgia Institute of Technology, 01-
2018, “Lethal Autonomous Systems and the Plight of the Non-combatant,” AISB Quarterly,
https://www.cc.gatech.edu/ai/robot-lab/online-publications/aisbq-137.pdf]/Kankee
Addressing some of the counter-arguments But there are many counterarguments as well. These include the challenge of establishing
responsibility for war crimes involving autonomous weaponry, the potential lowering of the threshold for entry into war, the military’s possible
reluctance to give robots the right to refuse an order, proliferation, effects on squad cohesion, the winning of hearts and minds, cybersecurity,
proliferation, and mission creep. There are good answers to these concerns I believe, and are discussed elsewhere in my writings16. If the
baseline criteria becomes outperforming humans in the battlefield with respect to adherence to IHL (without
mission performance erosion), I consider this to be ultimately attainable, especially under situational conditions where bounded
morality [narrow, highly situation-specific conditions] applies17, but not soon and not easily. The full moral faculties of humans need not be
reproduced to attain to this standard. There are profound technological challenges to be resolved, such as effective in situ target discrimination
and recognition of the status of those otherwise hors de combat, among many others. But if
a warfighting robot can eventually
exceed human performance with respect to IHL adherence, that then equates to a saving of
noncombatant lives, and thus is a humanitarian effort. Indeed if this is achievable, there may even exist a moral
imperative for its use, due to a resulting reduction in collateral damage, similar to the moral imperative
Human Rights Watch has stated with respect to precision guided munitions when used in urban settings18. This
seems contradictory to their call for an outright ban on lethal autonomous robots19 before determining via research if indeed better protection
for non-combatants could be afforded. Let us not stifle research in the area or accede to the fears that Hollywood and science fiction in general
foist upon us. By
merely stating these systems cannot be created to perform properly and ethically does not
make it true. If that were so, we would not have supersonic aircraft, space stations, submarines, self-
driving cars and the like. I see no fundamental scientific barriers to the creation of intelligent robotic
systems that can outperform humans with respect to moral behavior . The use and deployment of ethical autonomous
robotic systems is not a short-term goal for use in current conflict, typically counterinsurgency operations, but rather will take considerable
time and effort to realize in the context of interstate warfare and situational context involving bounded morality. A plea for the noncombatant
How can we meaningfully reduce human atrocities on the modern battlefield? Why is there persistent failure and perennial commission of war
crimes despite efforts to eliminate them through legislation and advances in training? Can technology help solve this problem? I believe that
simply being human is the weakest point in the kill chain, i.e., our biology works against us in complying
with IHL. Also the oft-repeated statement that “war is an inherently human endeavor” misses the point, as then atrocities are also an
inherently human endeavor, and to eliminate them we need to perhaps look to other forms of intelligent
autonomous decision-making in the conduct of war. Battlefield tempo is now outpacing the warfighter’s
ability to be able to make sound rational decisions in the heat of combat. Nonetheless, I must make clear the obvious
statement that peace is unequivocally preferable to warfare in all cases, so this argument only applies when human restraint fails once again,
leading us back to the battlefield.
Contention 4: American Dead Hand
COUNTERPLAN – States except for the United States should ban lethal autonomous
weapons and the executive branch of the United States should:
- Covertly establish a nuclear second-strike policy and command, control, and communications
system undergirded by artificial intelligence tested by the Department of Defense that responds
accordingly to perceptions of a nuclear first strike
- Establish a nuclear no-first-use policy for that artificial intelligence and allow it only to use the
minimum amount of nukes necessary for deterrence postures.
- Update ballistic missile defense with artificial intelligence
- Ban all other lethal autonomous weapons
States with a surprise attack, prevent a guaranteed retaliatory strike, or prevent the United States from
effectively commanding and controlling its nuclear forces. That perception begins with an assured ability to
detect, decide, and direct a second strike. In this area, the balance is shifting away from the United States. While
many opponents of nuclear modernization oppose the current plan to field the ground-based strategic deterrent and long-range stand-
off cruise missile, we believe these programs, while necessary, do not fundamentally solve the attack-time compression
challenge. Rather than simply replacing current systems with a new version, it is time to fundamentally rethink the U.S. approach to nuclear
deterrence. U.S. adversaries are not interested in maintaining the status quo. They are actively working to change it. U.S. adversaries are
working on their own fait accompli that will leave the United States in a position where capitulation to a new geostrategic order is its only
option. The United States cannot allow that. The United States must re-examine its view of an old concept in light of fundamental technological
change. Moving forward as if twentieth-century paradigms are still valid is not an option. It is time both sides of the nuclear arms debate come
to that realization.
Otherwise Russian and Chinese first strikes are guaranteed – hypersonics, cruise
missiles, and dual-use weaponry skirt any defense
Lowther and McGiffin 19 [Adam Lowther, Director of Research and Education at the Louisiana Tech
Research Institute, and Curtis McGiffin is an Associate Dean at the School of Strategic Force Studies at
the Air Force Institute of Technology and an adjunct professor for Missouri State University’s
Department of Defense and Strategic Studies, 8-16-2019, "America Needs a “Dead Hand”," War on the
Rocks, https://warontherocks.com/2019/08/america-needs-a-dead-hand/]/Kankee
America’s nuclear command, control, and communications (NC3) system comprises many component systems that were designed and
fielded during the Cold War — a period when nuclear missiles were set to launch from deep within Soviet territory, giving the United States
sufficient time to react. That era is over. Today, Russian
and Chinese nuclear modernization is rapidly compressing
the time U.S. leaders will have to detect a nuclear launch, decide on a course of action, and direct a
response. Technologies such as hypersonic weapons, stealthy nuclear-armed cruise missiles, and
weaponized artificial intelligence mean America’s legacy NC3 system may be too slow for the president
to make a considered decision and transmit orders. The challenges of attack-time compression present a
destabilizing risk to America’s deterrence strategy. Any potential for failure in the detection or
assessment of an attack, or any reduction of decision and response time, is inherently dangerous and
destabilizing. If the ultimate purpose of the NC3 system is to ensure America’s senior leadership has the information and time needed to
command and control nuclear forces, then the penultimate purpose of a reliable NC3 system is to reinforce the desired deterrent effect. To
maintain the deterrent value of America’s strategic forces , the United States may need to develop
something that might seem unfathomable — an automated strategic response system based on artificial intelligence .
Admittedly, such a suggestion will generate comparisons to Dr. Strangelove’s doomsday machine, War Games’ War Operation Plan Response,
and the Terminator’s Skynet, but the prophetic imagery of these science fiction films is quickly becoming reality. A rational look at the NC3
modernization problem finds that it is compounded by technical threats that are likely to impact strategic
forces. Time compression has placed America’s senior leadership in a situation where the existing NC3
system may not act rapidly enough. Thus, it may be necessary to develop a system based on artificial
intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such
speed that the attack-time compression challenge does not place the United States in an impossible
position. Threats Are the Problem The compression of detection and decision time is not a new phenomenon. In the 1950s, Soviet bombers
would take hours to reach the United States. With the advent of the missile age, that time was compressed to about 30 minutes for a land-
based intercontinental ballistic missile and about 15 minutes for a submarine-launched ballistic missile. These technologies fostered the
development of both space-based and underwater detection and communication, as well as advanced over-the-horizon radar. Despite this
attack-time compression, U.S. officials remained confident that America’s senior leaders could act in sufficient time. The United States believed
the Soviets would be deterred by its ability to do so. However, over the past decade Russia
has vigorously modernized its
nuclear arsenal, with a particular emphasis on developing capabilities that are difficult to detect because
of their shapes, their materials, and the flight patterns they will take to U.S. targets. Examples of the systems include the
Kaliber-M and Kh-102 cruise missiles, Poseidon Ocean Multipurpose System Status-6 unmanned underwater vehicle, and the Avangard Objekt
4202 hypersonic weapon, which all have the potential to negate the United States’ NC3 system before it can
respond. This compression of time is at the heart of the problem. The United States has always expected to have enough time to detect,
decide, and direct. Time to act can no longer be taken for granted, nor can it be assumed that the Russians or
Chinese, for that matter, will act tactically or strategically in the manner expected by the United States. In fact,
policymakers should expect adversaries to act unpredictably. Neither the American intelligence community nor Beltway intellectuals predicted
the Russian invasion of Crimea, among other recent Russian acts of aggression. The Russians, to their credit, are adept at surprising the United
States on a regular basis. These new technologies are shrinking America’s senior-leader decision time to such a narrow
window that it may soon be impossible to effectively detect, decide, and direct nuclear force in time. In
the wake of a nuclear attack, confusion and paralysis by information and misinformation could occur when the
NC3 system is in a degraded state. Understanding the new technologies that are reshaping strategic deterrence is instructive. Two
types of nuclear-armed hypersonic weapons have emerged: hypersonic glide vehicles and hypersonic cruise missiles. Rich
Moore, RAND Corporation senior engineer, notes, “Hypersonic cruise missiles are powered all the way to their targets using an advanced
propulsion system called a SCRAMJET. These are very, very, fast. You may have six minutes from the time it’s launched until
the time it strikes.” Hypersonic cruise missiles can fly at speeds of Mach 5 and at altitudes up to 100,000 feet. Hypersonic glide vehicles are
launched from an intercontinental ballistic missile and then glide through the atmosphere using aerodynamic forces to maintain stability, flying
at speeds near Mach 20. Unlike ballistic missiles, glide vehicles can maneuver around defenses and to avoid detection
if necessary, disguising their intended target until the last few seconds of flight — a necessary capability as nations
seek to develop ever better defenses against hypersonic weapons. Richard Speier, also of RAND Corporation, states: We don’t currently
have effective defenses against hypersonic weapons because of the way they fly. They’re maneuverable
and fly at an altitude our current defense systems are not designed to operate; our whole defensive system is based
on the assumption that you’re going to intercept a ballistic object. In addition to the hypersonic cruise missile threat, there is the
proliferation of offensively postured, nuclear-armed, low-observable cruise missiles. Whereas the hypersonic
cruise missile threat is looming because adversary systems are still in the developmental stage, low-observable cruise missiles are here and the
Russians understand how to employ these weapons on flight paths that are hard to track, which makes
them hard to target. According to the 2019 Missile Defense Review, “Russia and China are developing advanced cruise
missiles and hypersonic missile capabilities that can travel at exceptional speeds with unpredictable
flight paths that challenge our existing defensive systems.” And finally, Russia has threatened nuclear first use
strikes against U.S. allies and partners. Land-attack cruise missiles can be launched from any platform, including aircraft, ships,
submarines, or ground-based launchers. Land-attack cruise missiles are a challenge for today’s detection and air
defense systems. Cruise missiles can fly at low altitudes, use terrain features, and fly circuitous routes to
a target, avoiding radar detection, interception, or target identification. Improved defensive capabilities
and flight paths have made low-observable or land-attack cruise missiles (LACMs) even less visible. They can also
be launched in a salvo to approach a target simultaneously from different directions. According to the National Air and
Space Intelligence Center: The Club-K cruise missile “container launcher” weapon system, produced and marketed by a Russian firm, looks like a
standard shipping container. The company claims the system can launch cruise missiles from cargo ships, trains, or commercial trucks.
Beginning in fall 2015, Russia fired LACMs from surface ships, submarines, and aircraft in support of ongoing
military operations in Syria. The analysis went on to add, “The cruise missile threat to US forces is increasing. The
majority of LACMs are subsonic, but supersonic and hypersonic missiles will be deployed in the future. LACMs also have increased
survivability by minimizing radar signature and/or the use of chaff and decoys.” The newest generation of these
missiles poses a real threat, specifically to the U.S. NC3 system, and they may be used as precursor attack weapons to disable or destroy critical
nodes within that system.
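To put the card’s time-compression numbers in perspective, here is a back-of-the-envelope sketch in Python. The Mach-to-km/h conversion and the 600 km and 8,000 km ranges are illustrative assumptions, not figures from the evidence; the point is only that a Mach 5 cruise missile launched from a few hundred kilometers out yields roughly the six-minute window Moore describes.

```python
# Illustrative back-of-the-envelope flight-time estimates for the
# attack-time-compression argument above. The ranges are assumed for
# illustration only; they are not taken from the card.

MACH_1_KMH = 1225  # approximate speed of sound at sea level, km/h

def flight_time_minutes(range_km: float, mach: float) -> float:
    """Return a rough flight time in minutes at a constant Mach number."""
    speed_kmh = mach * MACH_1_KMH
    return range_km / speed_kmh * 60

if __name__ == "__main__":
    scenarios = {
        "Hypersonic cruise missile (Mach 5, 600 km standoff)": (600, 5),
        "Hypersonic glide vehicle (Mach 20, 8,000 km)": (8000, 20),
    }
    for label, (rng, mach) in scenarios.items():
        print(f"{label}: ~{flight_time_minutes(rng, mach):.0f} minutes")
```

The glide-vehicle estimate ignores the boost phase and terminal maneuvering, so it overstates the usable warning time; the card’s broader claim is that any of these timelines is too short for the existing detect-decide-direct cycle.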
AI prevents accidental nuclear war
Sankaran 19 [Jaganath Sankaran, assistant professor at the Lyndon B. Johnson School of Public Affairs
at the University of Texas at Austin., 4-25-2019, "A Different Use for Artificial Intelligence in Nuclear
Weapons Command and Control," War on the Rocks, https://warontherocks.com/2019/04/a-different-
use-for-artificial-intelligence-in-nuclear-weapons-command-and-control/]/Kankee
Artificial intelligence (AI) is expected to change the way the United States and other nations operate their nuclear command and control. For
instance, a recent RAND report surveyed AI and nuclear security experts and notes that “AI is expected to become more widely used in aids to
decisionmaking” in command-and-control platforms. The report also indicated the possibility that narrow AI could in the future act as a
“trusted advisor” in nuclear command and control. In this article, I will examine the advice such an advisor might provide to
decision-makers in a nuclear crisis, focusing on the possibility that an
algorithm could offer compelling evidence that an
incoming nuclear alert was a false alarm, thereby counseling restraint rather than confrontation. Decision-
makers who stand guard at the various levels of the nuclear weapons chain of command face two different forms of stress. The first
form of stress is information overload, shortage of time, and chaos in the moment of a crisis. The second is
more general, emerging from moral tradeoffs and the fear of causing loss of life on an immense scale. AI
and big data analysis techniques have already been applied to address the first kind of stress. The current U.S. nuclear
early warning system employs a “dual phenomenology” mechanism designed to ensure speed in
detecting a threat and in streamlining information involved in the decision-making process. The early
warning system employs advanced satellites and radars to confirm and track an enemy missile almost
immediately after launch. In an actual nuclear attack, the various military and political personnel in the chain of
command would be informed progressively as the threat is analyzed, until finally the president is
notified. This structure substantially reduces information overload and chaos for decision-makers in a crisis.
However, as Richard Garwin writes, the system also reduces the role of the decision-maker “simply to endorse the claim of the sensors and the
communication systems that a massive raid is indeed in progress.” While the advanced technologies and data processing techniques used in the
early warning system reduce the occurrence of false alerts, they do not completely eliminate the chances of one occurring. In
order to address decision-makers’ fear of inadvertently starting a nuclear war, future applications of AI to nuclear
command and control should aspire to create an algorithm that could argue in the face of overwhelming
fear of an impending attack that a nuclear launch isn’t happening. Such an algorithm could verify the
authenticity of an alert from other diverse perspectives, in addition to a purely technological analysis.
Incorporating this element into the nuclear warning process could help to address the second form of stress, reassuring
decision-makers that they are sanctioning a valid and justified course of action. Command and Control During the
Cold War: The Importance of Big Data In the world of nuclear command and control, the pursuit of speed and analysis of big data is old news. In
the early 1950s, before the advent of nuclear intercontinental ballistic missiles (ICBMs), the United States began developing the SAGE
supercomputer. SAGE, which was built at approximately three times the cost of the Manhattan Project, was the quintessential big data
processing machine. It used the fastest and most expensive computers at the time – the Whirlwind II (AN/FSQ-7) IBM mainframe computers –
at each of 24 command centers to receive, sort, and process data from the many radars and sensors dedicated to identifying incoming Soviet
bombers. The SAGE supercomputer then coordinated U.S. and Canadian aircraft and missiles to intercept those bombers. Its goal was to
supplement “the fallible, comparatively slow-reacting mind and hand of man” in anticipating and defending against a nuclear bomber
campaign. The proliferation of ICBMs in the 1960s, however, made the SAGE command centers “extraordinarily vulnerable.” The U.S. Air Force
concluded that Soviet ICBMs could destroy “the SAGE system long before the first of their bombers crossed the Arctic Circle.” In 1966, speaking
at a congressional hearing, Secretary of Defense Robert McNamara argued that “the elaborate defenses which we erected during the 1960s no
longer retain their original importance. Today with no defense against the major threat, Soviet ICBMs, our anti-bomber defense alone would
contribute very little…” The SAGE command centers were shut down. McNamara formed a National Command and Control Task Force,
informally referred to as the Partridge Commission, to study the problem of nuclear command and control in the early days of the ICBM era.
The commission concluded “that the capabilities of US [nuclear] weapon systems had outstripped the ability to command and control them”
using a decentralized military command and control structure. The commission recommended streamlining and centralizing command and
control with much stronger civilian oversight. The commission also advocated the formation of the modern-day North American Aerospace
Defense Command, better known as NORAD, with its advanced computer and communication systems, early warning satellites, and forward-
placed radars designed to track any missile launch on the planet before it could reach the continental United States. NORAD and its computer
and communication systems were designed to resolve the stress from information overload by compartmentalizing and automating the process
of evaluating a threat. Depending on its particular trajectory, an enemy nuclear missile may take anywhere between 35 minutes to just eight
minutes to reach its target. When the launch of an enemy missile occurs, it is first picked up by early warning satellite sensors within seconds.
The satellites track these missiles while the engines are still ignited. Once the missile comes over the horizon, forward-deployed radars
independently track them. The data from the two systems is then assessed in the context of the prevailing geostrategic intelligence by NORAD.
NORAD would then pass the assessment up the military and political chain of command. This sequence of steps ensures that senior decision-
makers are not overwhelmed with information. By the time decision-makers are notified, the decision to retaliate to an apparent attack “must
be made in minutes.” Future advances in AI might only add incremental improvements to the speed and quality of information processing to
this already advanced nuclear early warning system. Using
AI to Prevent Inadvertent Nuclear War These advances in
nuclear command and control still do not directly address the second form of stress, one that emerges from the
fear of a nuclear war and the accompanying moral tradeoffs. How can AI mitigate this problem? History reminds us that technological
sophistication cannot be relied upon to avert accidental nuclear confrontations. Rather, these confrontations have been prevented by
individuals who, despite having state-of-the-art technology at their disposal, proffered alternate explanations for a nuclear warning alert.
Operating under the most demanding conditions, they insisted on a “gut feeling” that evidence of an impending nuclear war alert was
misleading. They chose to disregard established protocol, fearing that a wrong choice would lead to accidental nuclear war. Consider for
example a declassified President’s Foreign Intelligence Advisory Board report investigating the decision by Leonard Perroots, a U.S. Air Force
lieutenant general, not to respond to incoming nuclear alerts. The incident occurred in 1983 when NATO was conducting a large simulated
nuclear war exercise code-named Able Archer. The report notes that Perroots’ “recommendation, made in ignorance, not to raise US readiness
in response” was “a fortuitous, if ill-informed, decision given the changed political environment at the time.” The report also states: the military
officers in charge of the Able Archer exercise minimized this risk by doing nothing in the face of evidence that parts of the Soviet armed forces
were moving to an unusual level of [nuclear] alert. But these officers acted correctly out of instinct, not informed guidance. Perroots later
complained in 1989, just before retiring as head of the U.S. Defense Intelligence Agency, “that the U.S. intelligence community did not give
adequate credence to the possibility that the United States and Soviet Union came unacceptably close to [accidental] nuclear war.” In the same
year, Stanislav Petrov, a commanding officer involved in Soviet nuclear operations, also dismissed a nuclear alert from his country’s early
warning system. In the face of data and analysis that confirmed an incoming American missile salvo, Petrov decided the system was wrong.
Petrov later said, “that day the satellites told us with the highest degree of certainty these rockets were on the way.” Still, he decided to report
the warning as a false alert. His decision was informed by fears that he “didn’t want to be the one responsible for starting a third world war.”
Later recalling the incident, he said: “I had a funny feeling in my gut. I didn’t want to make a mistake. I made a decision, and that was it. When
people start a war, they don’t start it with only five missiles.” Both Perroots and Petrov feared the moral consequences of a nuclear war,
particularly one initiated accidentally. They distrusted the data and challenged protocol. Conclusion Fred Iklé once remarked, “if any witness
should come here and tell you that a totally reliable and safe launch on warning posture can be designed and implemented that man is a fool.”
If that is true, how close can AI get us to reliable and safe nuclear command and control? AI-enabled systems may aspire to reduce
some of the mechanical and human errors that have occurred in nuclear command and control. Prior
instances of false alerts and failures in early warning systems should be used as a training dataset for an
AI algorithm to develop benchmarks to quickly test the accuracy of an early warning alert. The goal of
integrating AI into military systems should not be speed and accuracy alone. It should also be to help decision-makers exercise
judgment and prudence to prevent inadvertent catastrophes.
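The card makes two concrete suggestions: dual-phenomenology confirmation, and benchmarking new alerts against historical false alarms. A minimal sketch of that logic is below. Everything in it (the feature names, the five-missile heuristic, the exercise-pattern flag) is a hypothetical illustration, not a description of any real early-warning algorithm or of Sankaran’s own proposal.

```python
# Minimal sketch of the kind of "restraint advisor" Sankaran describes:
# require both sensor phenomenologies (satellite and radar) to agree, then
# compare the alert against benchmarks distilled from historical false
# alarms. Feature names and thresholds are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class Alert:
    satellite_confirms: bool        # infrared launch detection
    radar_confirms: bool            # forward-deployed radar track
    apparent_missile_count: int
    matches_exercise_pattern: bool  # resembles a known training/exercise scenario

def advise(alert: Alert) -> str:
    """Return 'escalate' or a probable-false-alarm advisory with reasons."""
    reasons = []
    # Dual phenomenology: a single sensor type is never sufficient.
    if not (alert.satellite_confirms and alert.radar_confirms):
        reasons.append("only one phenomenology confirms")
    # Petrov-style heuristic: a real first strike is unlikely to be tiny.
    if alert.apparent_missile_count < 5:
        reasons.append("implausibly small salvo for a first strike")
    # Benchmark against historical false alerts (exercise tapes, sensor glitches).
    if alert.matches_exercise_pattern:
        reasons.append("signature resembles a known false-alarm case")
    if reasons:
        return "treat as probable false alarm: " + "; ".join(reasons)
    return "escalate for human decision"

if __name__ == "__main__":
    print(advise(Alert(True, False, 5, False)))   # probable false alarm
    print(advise(Alert(True, True, 200, False)))  # escalate for human decision
```

The salvo check mirrors Petrov’s “five missiles” reasoning quoted above; a production system would obviously need far richer features and calibrated thresholds, which is exactly the training-dataset point the card ends on.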
AI is inevitable, but an American Dead Hand is key to deterrence and stopping rogue AI
prolif
Straub 18 [Jeremy Straub, Assistant Professor of Computer Science at North Dakota State University,
1-29-2018, "Artificial intelligence is the weapon of the next Cold War," Conversation,
https://theconversation.com/artificial-intelligence-is-the-weapon-of-the-next-cold-war-86086]/Kankee
Use of AI for nuclear weapons control Threats posed by surprise attacks from ship- and submarine-based nuclear
weapons and weapons placed near a country’s borders may lead some nations to entrust self-defense
tactics – including launching counterattacks – to the rapid decision-making capabilities of an AI system. In case of an attack,
the AI could act more quickly and without the potential hesitation or dissent of a human operator. A
fast, automated response capability could help ensure potential adversaries know a nation is ready and
willing to launch, the key to mutual assured destruction’s effectiveness as a deterrent. AI control of non-
nuclear weapons AI can also be used to control non-nuclear weapons including unmanned vehicles like drones
and cyberweapons. Unmanned vehicles must be able to operate while their communications are impaired – which requires onboard AI
control. AI control also prevents a group that’s being targeted from stopping or preventing a drone attack
by destroying its control facility, because control is distributed, both physically and electronically.
Cyberweapons may, similarly, need to operate beyond the range of communications. And reacting to them may require such
rapid response that the responses would be best launched and controlled by AI systems. AI-coordinated
attacks can launch cyber or real-world weapons almost instantly, making the decision to attack before a
human even notices a reason to. AI systems can change targets and techniques faster than humans can
comprehend, much less analyze. For instance, an AI system might launch a drone to attack a factory,
observe drones responding to defend, and launch a cyberattack on those drones, with no noticeable
pause. The importance of AI development A country that thinks its adversaries have or will get AI weapons will want to get them too. Wide
use of AI-powered cyberattacks may still be some time away. Countries might agree to a proposed Digital Geneva
Convention to limit AI conflict. But that won’t stop AI attacks by independent nationalist groups, militias,
criminal organizations, terrorists and others – and countries can back out of treaties. It’s almost certain,
therefore, that someone will turn AI into a weapon – and that everyone else will do so too, even if only out
of a desire to be prepared to defend themselves. With Russia embracing AI, other nations that don’t or
those that restrict AI development risk becoming unable to compete – economically or militarily – with countries
wielding developed AIs. Advanced AIs can create advantage for a nation’s businesses, not just its military, and those
without AI may be severely disadvantaged. Perhaps most importantly, though, having sophisticated AIs in many
countries could provide a deterrent against attacks, as happened with nuclear weapons during the Cold
War.
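Straub’s point that unmanned vehicles “must be able to operate while their communications are impaired” is ultimately a claim about control architecture. The toy sketch below shows the basic fallback pattern, assuming a hypothetical link timeout; none of it comes from the article or any fielded system.

```python
# Toy illustration of the "onboard control when communications are impaired"
# point above: if the ground link goes silent past a timeout, control falls
# back to onboard logic instead of the (possibly destroyed) control facility.
# The timeout, states, and behaviors are hypothetical.

import time

LINK_TIMEOUT_S = 30.0  # assumed link-loss threshold, not from the article

class DroneController:
    def __init__(self) -> None:
        self.last_ground_contact = time.monotonic()
        self.mode = "remote"  # ground station in command

    def on_ground_message(self) -> None:
        """Record a heartbeat from the ground station."""
        self.last_ground_contact = time.monotonic()
        self.mode = "remote"

    def tick(self) -> str:
        """Called periodically; decides who is in command this cycle."""
        silent_for = time.monotonic() - self.last_ground_contact
        if silent_for > LINK_TIMEOUT_S:
            # Link lost or control facility disabled: onboard autonomy takes
            # over (continue, loiter, or abort, depending on doctrine).
            self.mode = "onboard"
        return self.mode

if __name__ == "__main__":
    ctrl = DroneController()
    print(ctrl.tick())  # "remote" immediately after contact
```

Whether the onboard mode continues the attack, loiters, or aborts is precisely the human-out-of-the-loop policy question the topic raises.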
Contention 5: Mosquito Killers
Mosquito killer drones are LAWS – they’re coming now
Williams 20 [Gregory M. Williams, Adjunct Professor at Rutgers University Department of Entomology
with a PhD in Entomology from the University of Delaware and Manager of the Hudson Regional
Mosquito Control Program, Yi Wang, faculty for the Center for Vector Biology for Rutgers University,
Devi S. Suman, faculty for the Zoological Survey of India at New Alipore, Isik Unlu, and Randy Gaugler,
faculty for the Center for Vector Biology for Rutgers University, 9-18-2020, "The development of
autonomous unmanned aircraft systems for mosquito control," PLOS One,
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0235548]/Kankee
to automatically recharge the batteries and refill the pesticides, patiently awaiting their next mission. Just as the public once welcomed the
sight of our spray trucks so will they welcome the sight of our UAS, knowing they will be just a little safer and more comfortable.
Mosquito killer drones suppress the mosquito populations that spread viruses like Zika –
they’re key to stopping disease spread
Holland 18 [Kimberly Holland, writer for Salon, 12-18-2018, "Could a Mosquito-Filled Drone Be the
Key to Battling Viruses Like Zika?," Redshift EN, https://redshift.autodesk.com/mosquito-drone/]/Kankee
Mosquitoes are not just annoying summertime pests. Each year, the buzzing insects are responsible for millions of deaths
worldwide, according to the World Health Organization (WHO)—making them the world’s deadliest animal by
far. Malaria causes more than 435,000 deaths annually; other mosquito-borne diseases like the Zika virus,
West Nile virus, Chikungunya virus, and dengue account for hundreds of thousands more. Swiss NGO WeRobotics
believes one answer to the world’s mosquito problem is, well, mosquitoes. It’s bringing innovative engineering and design to the
problem of mosquito-borne disease with a new drone-based mosquito-distribution model. Why would you want to release more mosquitoes
into affected areas? These aren’t just any mosquitoes; they’re nonbiting, sterile
male mosquitoes, which increase
competition for breeding from nonsterilized males, preventing infected mosquitoes from multiplying.
Spreading Hope, Not Disease For decades, the standard techniques for eliminating mosquito populations have
remained unchanged. Insecticides and fumigation are still the primary tools to reduce mosquito
populations and control the spread of disease. This alternative approach, the sterile insect technique (SIT), is a well-
understood insect-management method used for a variety of insect-borne illnesses. If male insects can’t fertilize females
during mating, the eggs can’t hatch, and the infected population dwindles. SIT works best with a continual supply of
sterile males in the affected environment. The sterile insects should outnumber the nonsterile by a factor of at least 10 to one for disease
control. But SIT hits a roadblock in rural settings. Roads don’t always reach affected areas, and even the
most adventurous human can’t cover enough ground on foot to make a dent in the mosquito
population. However, where humans can’t go on the ground, drones can go in the sky. “The reason we think
drones are potentially important is that drones can be programmed to reach remote areas,” says Andrew Schroeder, cofounder and
director of research and analysis at WeRobotics. “[A drone] could distribute evenly, so the hypothesis is that the release
mechanism alone should make a significant difference in the likelihood of success for sterilized insect
programs.” WeRobotics’s drones would cover towns, cities, and rural areas with hundreds of thousands of
sterile mosquitoes in one brief flight, releasing the insects to stymie the population of disease-infected
insects, specifically those carrying the Zika virus. “The drones are supporting a process, which is how to distribute sterilized
mosquitoes,” Schroeder says. “There are other ways. You could release them from a backpack. You could ride a bicycle around and have them
come out.” WeRobotics partnered with the UN’s International Atomic Energy Agency (IAEA) and its collaborator, Moscamed Brasil. In turn,
these organizations linked WeRobotics to groups that could supply sterilized mosquitoes and communities where it could test its drones. “I
don’t know much about the sterilized insect method,” Schroeder says. “I have limits to what I’m able to be responsible for—our whole team
does. But through collaboration, through discourse and dialogue, it’s been a very fruitful, mutual effort at getting these ideas organized.” A
Drone That Sprays Cold Mosquitoes With these partners in place, WeRobotics had to equip the drone. The first, challenging step, says
WeRobotics Head of Engineering Jürg Germann, was figuring out how to store the tiny bugs so they could live through transportation and
storage and then thrive once they exited the drone through the release cylinder. The team found the best solution was to make the mosquitoes
chill out. Mosquitoes are very active at warmer temperatures—so active that they quickly injure one another in confined spaces. But
temperatures between 7 and 10 degrees Celsius (44.6–50 degrees Fahrenheit) render the mosquitoes immobile. The mosquitoes are still very
much alive, but their immobility keeps them from hurting themselves and the other insects in the storage chamber. The humidity inside the
container must be kept below 60 percent. Any higher and the mosquitoes can get wet, which leads to mosquito clumping; these clumps of
mosquitoes get crushed as they exit the dispensing cylinder. The WeRobotics team attached its cooled storage device to a modified DJI M600
Pro hexacopter. There was, Schroeder says, no need to reinvent the wheel but instead to focus on the mosquito-delivery system. Plus,
commercial products could make the WeRobotics device more accessible once it had mastered the mosquito-dispensing challenge. At about
100 meters off the ground, the drone’s motor turns on. The cylinder turns, and the immobile mosquitos tumble out into warm air. As they fall
and warm up, the insects reinvigorate into an active state. From there, they can fly and engage with the local mosquito community. For the
development phases, the engineers and design team used Autodesk Inventor to simulate their designs, modeling changes before 3D printing
components. This helped the team perform early and regular tests of the mechanical parts needed to arrive at a successful design. “None of us
had worked with mosquitos before, and no one anticipated how fragile they are,” Germann says. “Passing through a mechanism designed to
separate them into smaller batches could easily damage their wings, making them useless as mates.” In addition to temperature and humidity
challenges, the team had to find out how deep the holding canister could be before the mosquitoes on the bottom were crushed under the
weight of those on top. Using cumin seeds as substitutes for the top layers of bugs, several tests proved the bottom layer of mosquitoes to be
susceptible to damage from weight, making the maximum acceptable canister height only 5 cm. After a year of rapid prototyping, the team was
ready to try its design in a real-world setting: Brazil. Brazil has mass-rearing facilities that gave the team the necessary amount of sterile
mosquitoes—and the country shares a similar climate with many others that also face mosquito-borne disease crises. After several flights and
field tests, WeRobotics researchers found that more than 90 percent of the 50,000 mosquitoes released in each flight
made it through storage, ejection, and reinvigoration. A single flight can cover dozens of hectares,
enough for a small town. A day of multiple flights can shower more than 100 hectares. “The first time we
collected mosquitoes from the drone in our mosquito traps—indicating that our method is indeed viable—was a fantastic moment,”
Germann says. The Future of Vector Control Initial testing provided WeRobotics and its partners with data, insights, and ideas—but they’re not
ready to deploy this solution on a larger scale just yet. “We don’t yet have the evidence base to say that this does make a significant
difference,” Schroeder says. “But in theory, if you do this over generations, do it in an even way, do it at the right time—timing is key—then
we should see dramatic drops in population.” As the need to control deadly mosquitoes grows, this research points to the
future—and to the hope that this technology used in large-scale campaigns could improve the lives and health of
communities worldwide.
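The card’s figures (roughly 50,000 sterile males per flight, over 90 percent surviving release, and the 10-to-1 sterile-to-wild ratio quoted earlier in the Holland evidence) imply some simple release-planning arithmetic, sketched below. The wild-male density of 500 per hectare is an assumed illustrative value, not a number from the evidence.

```python
# Rough release-planning arithmetic using the figures quoted above:
# ~50,000 sterile males per flight, ~90% surviving release, and a target
# sterile-to-wild ratio of at least 10:1. The wild-male density is an
# assumed illustrative value, not a figure from the card.

import math

MOSQUITOES_PER_FLIGHT = 50_000
SURVIVAL_RATE = 0.90
TARGET_RATIO = 10  # sterile : wild, per the SIT rule of thumb in the card

def flights_needed(area_ha: float, wild_males_per_ha: float) -> int:
    """Flights required to reach the 10:1 ratio over the treated area."""
    wild_males = area_ha * wild_males_per_ha
    sterile_needed = wild_males * TARGET_RATIO
    effective_per_flight = MOSQUITOES_PER_FLIGHT * SURVIVAL_RATE
    return math.ceil(sterile_needed / effective_per_flight)

if __name__ == "__main__":
    # Hypothetical small town: 50 hectares at 500 wild males per hectare.
    print(flights_needed(area_ha=50, wild_males_per_ha=500))  # -> 6 flights
```

Under those assumptions a 50-hectare town needs about six flights to reach the ratio, which fits comfortably within the “day of multiple flights” the card describes.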