
Lethal Autonomous Weapons Systems & rogue AGI

Week 2 tutorials
Welcome to CS0888: AI & New Tech Law
3 tasks:
1. Nametags for the semester—the coloured papers
  • Print the name you’d like to be called (e.g., MARK)
  • Use my markers, not pen, so it’s legible from a distance
  • Each week I re-collect them, then redistribute them the following week
2. Seating chart for the semester is being passed around
  • Please sit where you’d like to stay
3. Download handout for today’s tutorial
  • NTULearn >> Tutorial handouts >> “wk2 LAWS handout”
Lethal Autonomous Weapons Systems & rogue AGI
• LAWS
• Definition & state of the art
• Arguments against/for LAWS
• Examples & options
• Contrast rogue AI & the control problem

Lethal Autonomous Weapons Systems: definition

LAWS
• “weapon systems that use [AI] to identify, select, & kill targets without human intervention”
  • Future of Life Institute

Alternative conceptions
• Some definitions don’t require complete autonomy—e.g., there can be a human override
• Related: Autonomous Weapons Systems—not necessarily lethal
State of the art: Some unknowns
• Fully autonomous weapons researched, probably developed
• Not yet prominently deployed
• Include:
  • Offensive weapons for attacking
  • Defensive weapons for repelling attacks

State of the art: Israel’s Iron Dome system
• AWS that intercepts (detects & destroys) unmanned rockets targeted at populated areas
• High degree of autonomy—but human at least “on the loop”

State of the art: Drones
• Unmanned (uncrewed) Aerial Vehicles (drones) with varying mixes of autonomy & human operators are routinely used:
  • By both Russia & Ukraine (Feb 2022-now)
  • By both sides in the Israel-Hamas war (Oct 2023-now)
  • By the US to assassinate Iranian General Qassem Soleimani, 2020, with an MQ-9 Reaper drone
• For surveillance & as weapons (“kamikaze” or recoverable)
Lethal Autonomous Weapons Systems & rogue AGI
• LAWS
• Definition & state of the art
• Arguments against/for LAWS
• Examples & options
• Contrast rogue AI & the control problem

Argument against (L)AWS
• Must maintain “meaningful human control”
  • Human “in the loop”
  • Loop: the decision-making process
• Terms used to indicate that a human is involved, but less so:
  • Human “near the loop” or “on the loop”
• So, basically: stop further development

Terms used in UN discussions, Pentagon documents, & elsewhere


Rationale for opposing LAWS: Agree or disagree?
• LAWS that make autonomous decisions to kill—without a human in the loop—should not be developed because:
  • “Machines don’t have our moral compass, our compassion & our emotions. Machines are not moral beings.”
  • Toby Walsh, U New South Wales

Afghan scout scenario
• Girl sent to report US/allied positions to Taliban soldiers
• She was a legal target under the int’l law of war
• The soldiers’ mercy was based on ethical (not legal) considerations

Paul Scharre, Center for a New American Security


Argument for LAWS: Safety
• Reduces risk to the attacker
  • Compare the Gatling machine gun (19th century):
  • Proponents: same firepower from fewer soldiers
  • But the net effect of machine-gun battles: more casualties
• Eliminates combatant fatigue & poor judgment
• Limits trauma to the attacker
• Precise targeting minimises collateral damage, including civilian casualties
Argument for LAWS: How different is it from following orders?
• In most battlefield situations, human soldiers execute commands…
• …without making independent ethical decisions

Lethal Autonomous Weapons Systems
• LAWS
• Definition & state of the art
• Arguments against/for LAWS
• Examples & options
• Contrast rogue AI & the control problem

LAWS: A current threat
• “a psychopathic leader in control of a sophisticated ANI system portends a far greater risk in the near term”…
• …than a rogue AGI
  • Amir Husain, security-software entrepreneur

LAWS example: Swarming drones
• “as flying robots become smaller, their manoeuvrability increases…. They have a shorter range, yet they [could] carry a lethal payload—perhaps a one-gram shaped charge to puncture the human cranium.
• “[O]ne can expect [them to be] deployed in the millions, the agility & lethality of which will leave humans utterly defenceless.”

Stuart Russell, University of California, Berkeley, in Nature, 2015


LAWS example: Swarming drones
• “Slaughterbots”: 2017 film by:
  • Campaign to Stop Killer Robots (NGO)
  • Future of Life Institute (NGO)
  • Prof Stuart Russell
• Hypothetical future vision:
  • Drones with precise targeting thru facial recognition—& social media
  • Deployed by state or non-state actors
• (Clip from 5:17 onward)
LAWS option 1: Regulate development, use, possession with int’l treaties
• Restrict what machines can do, globally—through an international treaty
  • E.g., prohibit fully autonomous combat decisions
• NGO Campaign to Stop Killer Robots lobbying for a global ban; Int’l Red Cross support
• No current United Nations agreements on LAWS, after a decade of effort
  • Opposed: Australia, Israel, Russia, UK, US

LAWS option 1: Regulate development, use, possession with int’l treaties: Precedents
• Compare int’l agreements on Weapons of Mass Destruction
  • “a class of weaponry with the potential to, in a single moment, kill millions of civilians, jeopardize the natural environment, & fundamentally alter the world & the lives of future generations through their catastrophic effects” (a UN definition)
  • Nuclear, chemical or biological weapons—e.g., nuclear test ban treaties
• …& agreements on certain other weapons
  • Permanently blinding lasers
  • Landmines

LAWS option 1: Regulate development, use, possession with int’l treaties: Precedents
• Compare nuclear non-proliferation treaties:
  • No spread of weapons beyond the 5 nations that were already “nuclear weapons states” before 1967
  • The 5 that had nuclear weapons: US, Russia, UK, France, China
  • The 5 must share nuclear energy tech (for non-military use) with the 191 state parties to the treaty (including SG)
  • But the 191 can’t access nuclear weapons
• Non-parties India, Pakistan, Israel, & North Korea have nukes

Treaty on the Non-Proliferation of Nuclear Weapons, 1968


LAWS option 1: UN negotiations stalling

US Small state
• One can’t regulate what • “We are all aware that a
doesn’t exist yet developing country does not
have the technology that we are
discussing…. How are we going
to defend ourselves?”
• Representative of Cuban
delegation

As reported in New York Times, “A.I. is making it easier to kill (you)” (2019)
LAWS option 2: Arm ourselves
• “The ‘choice’ is really no choice at all: We must fight AI with AI”
• Development inevitable (?)
• No bans: because of the speed & complexity of LAWS battles, “human input in certain conflicts is not only unnecessary but also dangerous”
  • Husain, The Sentient Machine

Your view
• Should nations agree NOT to develop machines that can kill entirely autonomously, without a human in the loop?
• Should SG sign such an agreement if it happens?

Lethal Autonomous Weapons Systems
• LAWS
• Definition & state of the art
• Arguments against/for LAWS
• Examples & options
• Contrast rogue AI & the control problem

The control problem
• Maintaining control of AI, especially ASI
  • Or even AGI
• Ensuring that AI doesn’t go rogue—that AI’s goals remain aligned with ours

Debate positions

We’ll lose control
• Failure of imagination: a false sense of security from the difficulty of imagining catastrophe till it’s too late (e.g., Musk)
• ASI may be indifferent to us
  • Bostrom (paperclips), Yudkowsky (atoms), Singler (insects)

We won’t
• We may keep AI under control, despite the hype
• Maybe ASI can’t get mad
• Rules can be built in
ASI control problem: Failure of imagination
• “AI is a fundamental existential risk for human civilization…
• ...but until people see robots going down the street killing people, they don’t know how to react”
  • Elon Musk, Tesla & SpaceX, at the National Governors’ Association (2017)

Control: Indifference
• “It…seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, & who would resist with all its might any attempt to alter this goal.”
  • Nick Bostrom, Oxford U, 2003

Control: Indifference

• ASI “does not love you, nor does it hate you,…
• …but you are made of atoms it can use for something else”
  • AI theorist Eliezer Yudkowsky, co-founder, Machine Intelligence Research Institute, 2008

ASI control: Learn from our everyday behaviour
• “if we see the distance between ourselves and the ants as equivalent to the distance between a superintelligence and ourselves,
• then maybe [ASI] just doesn’t care as well.”
  • Singler (2022)

Control: Maybe ASI can’t get mad
No sentience
• “…An AI system that has the equivalent of a neocortex…
• …but not the other parts of the brain [that produce emotion]…
• …will not spontaneously develop human-like emotions and drives….
• …So if we don’t put [emotions & drives] in machines, they won’t just suddenly appear.”
  • Jeff Hawkins interview (2021)

Control: Safeguards can be built in
• Some say rules can be programmed into ASI to keep it under control
• E.g., science fiction writer Isaac Asimov (1920-1992), Boston U

