
How Does Artificial Intelligence Influence Conflict?

How is the military using AI? Killer robots have long been a fear and
fascination of humankind. Explore how weapons that can locate, target,
and kill without human involvement shape today’s conflicts and hold the
potential to reshape future conflicts.

Last Updated
May 25, 2023

People demonstrate as part of the Stop Killer Robots campaign, organized by German nongovernmental organization Facing Finance to ban what they call killer robots, on March 21, 2019, in front of the Brandenburg Gate in Berlin.

Source: Wolfgang Kumm/DPA/AFP via Getty Images.

In November 2020, a top Iranian nuclear scientist was killed when his car
came under machine gun fire. The attack sparked outrage and confusion.
Who pulled the trigger?

According to Iranian officials and news reports, no one did.

The assassination was reportedly carried out by an Israeli remote-controlled machine gun. This weapon used artificial intelligence (AI)
to target and kill the scientist Mohsen Fakhrizadeh. Secrecy continues to
surround the attack, but revelations of a sophisticated “smart gun” have
added fuel to the ongoing debate over the morality and practicality of AI
weaponry.

In this resource, we’ll explore the debate and detail efforts by governments
and international organizations to regulate AI’s use in conflict.

What is AI, and how is it used militarily?


AI refers to machines or systems that can make decisions, perform tasks,
and—in theory—improve performance without human involvement. It
exists in everyday technologies, including customer service chatbots and
email spam filters.
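
To make that definition concrete, here is a minimal sketch of one of those everyday examples, a spam filter. It uses Python with scikit-learn, and the training emails and labels are invented for illustration; real filters learn from millions of labeled messages.

```python
# A toy spam filter: the model learns word patterns from labeled
# emails and then classifies new messages without human involvement.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented training data, for illustration only.
emails = [
    "win a free prize now",          # spam
    "claim your cash reward today",  # spam
    "meeting agenda for tomorrow",   # not spam
    "quarterly report attached",     # not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

# Convert emails to word counts and fit a naive Bayes classifier.
vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(emails), labels)

# The trained model now decides on its own whether new mail is spam.
new_message = vectorizer.transform(["claim your free prize"])
print("spam" if model.predict(new_message)[0] == 1 else "not spam")
```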
AI can also enhance military capabilities, including many nonlethal
functions, such as systems that record and analyze data from aircraft
sensors to better predict engine failures. In fact, the U.S. Department of
Defense has over six hundred AI programs in use or in development.
Funding for AI hit $874 million in 2022.
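
The engine-failure example follows the same pattern. The sketch below is a hypothetical illustration, not any actual Department of Defense system: the sensor names, the simulated values, and the choice of scikit-learn's IsolationForest are all assumptions made for the example.

```python
# A hypothetical predictive-maintenance sketch: learn what "normal"
# engine sensor readings look like, then flag outliers for inspection.
import numpy as np
from sklearn.ensemble import IsolationForest

# Simulated readings from healthy engines: [temperature, vibration].
rng = np.random.default_rng(seed=0)
normal_readings = rng.normal(loc=[600.0, 1.0], scale=[10.0, 0.1],
                             size=(200, 2))

# Fit an anomaly detector on the healthy data.
detector = IsolationForest(random_state=0).fit(normal_readings)

# Score a new reading; IsolationForest returns -1 for anomalies.
new_reading = np.array([[655.0, 1.9]])  # unusually hot, high vibration
if detector.predict(new_reading)[0] == -1:
    print("Flag engine for maintenance inspection")
```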

U.S. Secretary of Defense Lloyd Austin delivers remarks at the National Security Commission on Artificial Intelligence Global Emerging Technology Summit on July 13, 2021, in Washington, DC.

Source: Kevin Dietsch/Getty Images

Additionally, AI is used in lethal autonomous weapons systems (LAWS),
which can locate, target, and kill without human involvement. LAWS, for
example, could be affixed to an autonomous drone or tank and use
recognition technology to identify enemy targets. When a target—say,
Mohsen Fakhrizadeh—is located, a weapon could be programmed to fire.

LAWS have sparked controversy. Critics argue that human beings should
never entrust computers with the decision to kill.

What are opponents of AI weapons saying?


For many, LAWS cross an ethical red line.

Critics argue that LAWS remove nuanced decision-making in life-or-death scenarios. These weapon systems could also be prone to programming
mistakes or hacks. For instance, detractors fear autonomous weapons
could malfunction, fail to distinguish between civilian and military targets, or
attack disproportionately. Such failures could result in harm to civilians and,
thus, violations of international law. They also argue that LAWS create
accountability issues because no individual decides when to kill.

In recent years, numerous Nobel Peace Prize winners, dozens of governments, the Vatican, and over 180 nongovernmental organizations
have called for a ban on LAWS. Many others have called for strict limits and
regulations around LAWS. In fact, in 2017, over one hundred AI industry
leaders—including big names like Elon Musk—published an open letter
urging world leaders to regulate AI weaponry.

"These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways."

—open letter from AI industry leaders, 2017


What are advocates of AI weapons saying?


Proponents argue that LAWS can boost a military’s might. Their thinking is
that LAWS will create stronger deterrents that prevent conflict. And if war
does break out, advocates maintain that AI can make fighting more
efficient and targeted. LAWS can remove human error and limit loss of life,
they argue.

Take the killing of Fakhrizadeh. His wife walked away unharmed from the
attack despite sitting inches away from her husband. Iranian investigators
attributed the shooting’s pinpoint accuracy to the weapon’s advanced
facial recognition capabilities.

This photo released by the semi-official Fars News Agency shows the
scene where Mohsen Fakhrizadeh was killed in Absard, a small city just
east of Iran’s capital, Tehran, on Friday, November 27, 2020.

Source: Fars News Agency via AP

Moreover, advocates claim that LAWS are necessary for defense because
other countries are developing them. The U.S. military, for instance, cites AI
advances in China and Russia as partial motivation for pursuing AI
technology. Proponents also contend that banning LAWS could inhibit AI
research more broadly. Military restrictions could affect other industries,
limiting the development of new technologies that could help civilians and
society at large.

Finally, advocates maintain that AI weaponry can be developed ethically. To that end, the U.S. Department of Defense and the North Atlantic Treaty
Organization have recently created principles they say will facilitate such
responsible development. These principles would mandate robust
accountability and oversight processes in the development of AI military
technology. The United States and NATO allies are also committed to
addressing race, gender, and other biases in AI programming.

What is the state of AI in combat?


Today, LAWS are not in widespread use. But their development is
increasing.

Experts believe several countries possess or are developing LAWS. Additionally, numerous private defense manufacturers now sell them. For
instance, Israel’s leading weapons manufacturer, Israel Aerospace
Industries, has reportedly exported AI weapons to Chile, China, India, South
Korea, and Turkey.

China’s military and defense contractors are also making inroads in AI development. In 2019, Chinese manufacturer Ziyan caught the world’s
attention when it released AI drones that could autonomously swarm and
attack targets. China has reportedly exported the autonomous drones and
other weapons systems to the Middle East.

Certain countries have begun to use AI weapons in combat. In 2021, the United Nations reported that Libyan forces used an armed AI drone in an
offensive during the country’s civil war the previous year. The drone—a
Turkish-made LAWS known as a Kargu-2—attacked retreating militia
forces without human involvement.

The incident in Libya sparked scrutiny and furthered the debate over the
need for regulations on LAWS. However, as is often the case, weapons
technology has advanced much more quickly than diplomacy can respond.

How is AI in the military regulated?

In 2014, the UN Convention on Certain Conventional Weapons (CCW),
which tries to ban or restrict an array of weapons systems, met for the first
time to discuss LAWS. So far, the CCW has failed to garner consensus on
the issue.

Dozens of governments have supported a global ban on LAWS. In fact, Pakistan, which has endured sustained U.S. drone strikes for decades, was the first country to call for their prohibition.

Meanwhile, countries such as the United States, Russia, and others with
powerful militaries have opposed a ban. China has supported a ban on
using LAWS but not a ban on their development.

In 2019, the CCW made some progress in establishing guardrails around LAWS. The body adopted eleven guidelines, including steps to prevent
terrorist groups from acquiring such weaponry. The CCW also created rules
prohibiting LAWS from having human-like traits so the line between human
and machine is never blurred.

However, the CCW failed to take any significant further steps during its last meeting on the subject in December 2021.

What is the future of AI in conflict?


AI’s use in combat remains in its early stages. Moreover, much of the AI
technology militaries are developing is for nonlethal use. So, the world is
unlikely to witness a state-sanctioned robot war anytime soon.

But the reality of AI weapons is no longer confined to science fiction hits like I, Robot or Black Mirror. The technology exists today, and its
development and use are accelerating.
