Command Responsibility: A Model for Defining Meaningful Human Control
Matthew T. Miller*
In the relatively near future, the United States and other countries are likely to
develop varying levels of artificial intelligence (AI) and integrate it into autono-
mous weapons.1 There are significant voices, spearheaded by the Campaign to
Stop Killer Robots, advocating for a preemptive ban on these weapons.2 The
opponents of lethal autonomous weapon systems (LAWS) argue that it is unethi-
cal to allow a machine to decide when to kill and that AI will never be able to
adhere to International Humanitarian Law (IHL) obligations.3 Although this
opposition campaign has not yet achieved its goal of a ban, it has prompted con-
siderable debate over the legality of developing and using LAWS. One of the
concepts that has arisen in this debate is a legal requirement for meaningful
human control (MHC) over LAWS.4 The idea of MHC has gained traction within
discussions at the United Nations Convention on Certain Conventional Weapons
(CCW), but the concept has its detractors.5
One of those detractors is the United States, whose delegation to the CCW
Group of Governmental Experts continues to warn that MHC is an ambiguous
term that “obscures rather than clarifies the genuine challenges” related to
LAWS.6 Instead of human control, the U.S. argues that the key issue is ensuring
“machines help effectuate the intention of commanders and the operators of
weapon systems.”7 The U.S. Department of Defense showed its focus on intent,
* Major Matthew Miller is a Judge Advocate in the U.S. Army and currently serves as the Chief of
the Operational Law Branch in the National Security Law Division of the Army’s Office of The Judge
Advocate General. Major Miller holds a Master of Laws (LL.M.) in National Security Law from the
Georgetown University Law Center and an LL.M. in Military Law from The Judge Advocate General’s Legal
Center and School. The views expressed in the paper are the author’s alone and do not necessarily reflect
those of the author’s employer. © 2021, Matthew T. Miller.
1. Melissa K. Chan, China and the U.S. are Fighting a Major Battle Over Killer Robots and the
Future of AI, TIME, Sep. 13, 2019, https://perma.cc/62ZU-4FUZ.
2. See CAMPAIGN TO STOP KILLER ROBOTS, https://perma.cc/9RGG-A6ZU (providing an overview
of the campaign and its goals).
3. HUMAN RIGHTS WATCH, HEED THE CALL: A MORAL AND LEGAL IMPERATIVE TO BAN KILLER
ROBOTS 21 (2018), https://perma.cc/9WDZ-X655.
4. See Hayley Evans, Lethal Autonomous Weapons Systems at the First and Second U.N. GGE
Meetings, LAWFARE (Apr. 9, 2018, 9:00 AM), https://perma.cc/9ARQ-3EHA (discussing numerous
states’ references to meaningful human control).
5. See Karl Chang, U.S. Mission to Int’l Orgs. in Geneva, Consideration of the Human Element in the
Use of Lethal Force, Address Before the Convention on Certain Conventional Weapons Group of
Governmental Experts on Emerging Technologies in the Area of LAWS (Mar. 26, 2019) (discussing his
skepticism over the ability to determine the level of human control that is necessary to comply with
International Humanitarian Law).
6. Id.
7. Id.
rather than control, by adopting the policy that “autonomous and semi-autono-
mous weapon systems shall be designed to allow commanders and operators to
exercise appropriate levels of human judgment over the use of force.”8
The difference between meaningful human control and appropriate levels of
human judgment may seem trivial to some, but it demonstrates the ambiguity of
MHC. Using an ambiguous term can be useful to gain political and diplomatic
consensus,9 but it has little value when attempting to apply the term as a legal
obligation.10 The United States and others may interpret MHC to require meas-
ures that effectuate command intent and maintain human judgment over the use
of force, while States that are more hesitant about LAWS may interpret MHC to
require direct human control of every possible action by the weapon.11
The purpose of this paper is to provide a solution to this ambiguity and offer a
workable definition of MHC. The overall thesis is that MHC should be defined as
the control necessary to facilitate responsible command. Commanders do not
have direct control over each engagement. Rather, command responsibility is
based upon a leader’s broader control of military operations and responsibility for
her forces’ adherence to IHL.12 Therefore, MHC should require that a LAWS be
designed to ensure commanders can: 1) understand the capabilities and limita-
tions of the LAWS and convey this information to their forces; 2) limit, at a mini-
mum, the time and space in which the LAWS will operate; and 3) effectively
investigate the causes of a LAWS taking unexpected action.13 Defining MHC
through this lens of command responsibility will provide states with a clearer
standard that is grounded in a well-developed IHL concept.
To explain how the command responsibility model can be applied to MHC, the
paper will begin by defining LAWS and providing an overview of the ways in
which humans can interact with autonomous systems. This first section will also
describe how a common method for understanding human-machine interaction is
to look at where humans are located in the system’s decision loop: providing
direct input “in the loop”; providing supervision “on the loop”; or being “out of
8. U.S. DEP’T OF DEF., DIR. 3000.09, AUTONOMY IN WEAPON SYSTEMS 2 (Nov. 21, 2012)
[hereinafter DIR. 3000.09].
9. See, e.g., Rebecca Crootof, A Meaningful Floor for Meaningful Human Control, 30 TEMPLE INT’L &
COMP. L.J. 53, 54 (2015), available at https://sites.temple.edu/ticlj/files/2017/02/30.1.Crootof-TICLJ.pdf.
10. See Merel Ekelhof, Autonomous Weapons: Operationalizing Meaningful Human Control, INT’L
COMM. OF THE RED CROSS (Aug. 21, 2018), https://perma.cc/2G2F-PA2P (explaining that abstract
concepts about human supervision provide little value if they do not address the reality of military
application).
11. Crootof, supra note 9, at 54.
12. See U.S. DEP’T OF DEF., DOD LAW OF WAR MANUAL § 18.4 (Dec. 2016) [hereinafter LAW OF
WAR MANUAL].
13. Id.; see Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the
Protection of Victims of International Armed Conflicts, art. 87, June 8, 1977, 1125 U.N.T.S. 3
[hereinafter Additional Protocol I] (discussing commanders’ responsibility to ensure their subordinates
are aware of their legal obligations and take necessary steps to prevent violations); see generally LAW OF
WAR MANUAL, supra note 12, § 19.20.1 (discussing how the United States has not ratified Additional
Protocol I, but supports many of its provisions because they comply with longstanding U.S. practice or
are based upon customary law principles).
the loop” and unable to provide input.14 Section II will outline the fundamental
IHL principles that are most relevant to LAWS: military necessity; distinction;
proportionality; precautions in the attack; and command responsibility.
After the explanation of the key concepts in autonomy and IHL, section III will
merge these concepts to demonstrate how MHC can be applied to the design and
use of LAWS through the lens of command responsibility. This section will use
vignettes to analyze how the level of human control necessary to facilitate re-
sponsible command will vary, depending on the capabilities of the LAWS and
the circumstances in which it will be used. Section IV will conclude with a dis-
cussion on how the command responsibility framework can address concerns that
the use of LAWS will prevent accountability for IHL violations. Specifically, this
section will argue that a commander’s obligations to train her forces and investi-
gate and remediate potential IHL violations will allow for accountability even if a
LAWS performs an unforeseen action.
I. AUTONOMOUS SYSTEMS AND HUMAN INTERACTION IN THE DECISION LOOP
The first step in discussing MHC is to provide a working definition of a
LAWS. There remains some debate over this topic and, even after five years of
work, the CCW Group of Governmental Experts has yet to agree on a definition.15
Opponents of LAWS define an autonomous weapon as a machine that acts on its “own deliber-
ations, beyond the instructions and parameters its producers, programmers, and
users provided to the machine.”16 This definition implies that it is impossible to
apply human control to LAWS, because its actions cannot be contained by its
programmers or operators.
The U.S. Department of Defense defines autonomous weapons as those that,
“once activated, can select and engage targets without further intervention by a
human operator.”17 The International Committee of the Red Cross (ICRC) simi-
larly defines fully autonomous weapons as those that “can select (search for,
detect, identify, track or select) and attack (use force against, neutralize, damage
or destroy) targets without human intervention.”18 Unlike the definition offered
by LAWS opponents, the U.S. and ICRC definitions do not remove the possibility
that humans may retain some ability to control a LAWS’ actions. Therefore, these
two latter definitions provide a more effective starting point for the analysis of
MHC.
14. PAUL SCHARRE, ARMY OF NONE: AUTONOMOUS WEAPONS AND THE FUTURE OF WAR 28-30
(2019); PAUL SCHARRE & MICHAEL C. HOROWITZ, CTR. FOR A NEW AM. SEC., WORKING PAPER: AN
INTRODUCTION TO AUTONOMY IN WEAPON SYSTEMS 6 (2015), https://perma.cc/T4GP-PEBS.
15. Telephone Interview with Michael Meier, Professor, Georgetown Univ. L. Ctr. (Oct. 23, 2019)
(conveying experiences as a member of the U.S. delegation to the Group of Governmental Experts)
[hereinafter Interview].
16. Amitai Etzioni & Oren Etzioni, Pros and Cons of Autonomous Weapon Systems, MIL. REV.,
May–Jun. 2017, at 72, 79, https://perma.cc/M9F8-FWZ2.
17. DIR. 3000.09, supra note 8, at 13.
18. INT’L COMM. OF THE RED CROSS, VIEWS OF THE INTERNATIONAL COMMITTEE OF THE RED CROSS
(ICRC) ON AUTONOMOUS WEAPON SYSTEM 1 (2016), https://perma.cc/9ZKL-YMGZ.
A common method for understanding the range of possible human control over
autonomous systems is to look at where humans are located on the autonomous
system’s decision loop: 1) in the loop; 2) on the loop; or 3) out of the loop.19
When a human is in the loop, an autonomous system needs human input before
acting. This would commonly involve the human identifying a target or giving
permission before the weapon can fire. Current “in the loop” systems are guided
munitions, such as GPS-guided bombs and cruise missiles, that use autonomous
guidance systems to attack a human-selected target.20 Since an “in the loop” sys-
tem requires direct human intervention, it does not satisfy the various definitions
of LAWS and instead is considered semi-autonomous.21 Semi-autonomous weapon
systems are already in common use and are not the focus of this paper’s
discussion.
A LAWS with humans on the loop will not require direct human input or per-
mission before acting.22 Instead, the LAWS will select and attack targets while a
human monitors the weapon’s performance and intervenes to halt its operation, if
necessary.23 The U.S. Patriot Air Defense system is an analogous example of
how this kind of “human-supervised” LAWS would operate. In automatic mode,
the Patriot selects and engages targets unless the human operator intervenes to
abort the launch.24
The final category of human-machine interaction is humans out of the loop,
which provides no opportunity to intervene in a LAWS’ individual acts. Once an
operator employs an “out of the loop” LAWS, the system will independently
identify, select, and attack targets in accordance with its programming and any
additional parameters that have been put in place by the operator.25 Key military
advantages of “out of the loop” LAWS are their ability to operate far faster than a
human26 and accomplish a mission in an environment where enemy jamming
cuts off communication with the weapon.27
In exchange for these advantages, an “out of the loop” LAWS removes direct
human control. This absence of direct control is the crux of the debate over what
level of control is necessary to uphold IHL obligations. The next section will out-
line those legal obligations.
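To make the decision-loop taxonomy concrete, the sketch below models the three loop positions as a simple engagement gate. It is a minimal illustration only, assuming hypothetical names and inputs rather than describing any fielded system.

```python
from enum import Enum, auto

class LoopPosition(Enum):
    IN_THE_LOOP = auto()      # semi-autonomous: human approves each engagement
    ON_THE_LOOP = auto()      # human-supervised: human may veto before firing
    OUT_OF_THE_LOOP = auto()  # fully autonomous: no opportunity for human input

def may_engage(position: LoopPosition, human_approved: bool, human_vetoed: bool) -> bool:
    """Gate a single engagement according to where the human sits in the loop."""
    if position is LoopPosition.IN_THE_LOOP:
        return human_approved        # no approval, no engagement
    if position is LoopPosition.ON_THE_LOOP:
        return not human_vetoed      # proceeds unless a supervisor intervenes
    return True                      # out of the loop: only programming and
                                     # pre-set parameters constrain the weapon
```

The gate illustrates why the debate centers on the “out of the loop” case: the function’s final branch consults no human input at all.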
19. SCHARRE, supra note 14, at 26-34; SCHARRE & HOROWITZ, supra note 14, at 8-14.
20. SCHARRE & HOROWITZ, supra note 14, at 8-12.
21. DIR. 3000.09, supra note 8, at 14.
22. SCHARRE, supra note 14, at 44 (describing a semi-autonomous weapon that does not need to ask
permission before attacking a target, but the human operator can intervene when necessary); SCHARRE &
HOROWITZ, supra note 14, at 12-13.
23. SCHARRE, supra note 14, at 44; SCHARRE & HOROWITZ, supra note 14, at 12-13.
24. John K. Hawley, Patriot Wars: Automation and the Patriot Air and Missile Defense System, CTR.
FOR A NEW AM. SEC. (Jan. 25, 2017), https://perma.cc/K228-G4M2.
25. SCHARRE & HOROWITZ, supra note 14, at 13-15.
26. See Michael T. Boulet, The Autonomous Systems Tidal Wave, 22 LINCOLN LAB’Y J., no. 2, 2017,
at 18, 19, https://perma.cc/YX5X-UMHT (discussing how artificial intelligence accomplishes great
speed by decoupling humans from decisions and leveraging computing capabilities).
27. See Courtney Kube, Russia Has Figured Out How to Jam U.S. Drones in Syria, Officials Say, NBC
NEWS (Apr. 10, 2018), https://perma.cc/5ZYM-7J4X.
28. See Int’l Comm. of the Red Cross, Fundamental Principles of IHL, https://perma.cc/GG8M-
SZSC; LAW OF WAR MANUAL, supra note 12, § 2, § 5.2.3.
29. U.S. Working Paper, Implementing International Humanitarian Law in the Use of Autonomy in
Weapon Systems ¶3, CCW/GGE.1/2019/WP.5 (Mar. 28, 2019).
30. LAW OF WAR MANUAL, supra note 12, § 2.2.
31. Id.; Hague Convention (IV) Respecting the Laws and Customs of War on Land and its Annex:
Regulations Concerning the Laws and Customs of War on Land art. 22, Oct. 18, 1907, 36 Stat. 2277
(declaring that “the right of belligerents to adopt means of injuring the enemy is not unlimited”).
32. LAW OF WAR MANUAL, supra note 12, § 2.5; Additional Protocol I, supra note 13, art. 48.
33. Int’l Comm. of the Red Cross, Customary IHL Rule 71: Weapons That Are by Nature
Indiscriminate, https://perma.cc/P9LJ-MUGS [hereinafter Customary IHL Rule 71].
34. LAW OF WAR MANUAL, supra note 12, § 2.5.2; Additional Protocol I, supra note 13, art. 48.
35. LAW OF WAR MANUAL, supra note 12, § 2.2.1.
36. LAW OF WAR MANUAL, supra note 12, § 2.4.1.2.
37. Additional Protocol I, supra note 13, art. 51(5)(b); LAW OF WAR MANUAL, supra note 12, § 5.12;
see Int’l Comm. of the Red Cross, Customary IHL Rule 14: Proportionality in Attack, https://perma.cc/
EB49-FH7B; see also U.S. DEP’T OF ARMY, FIELD MANUAL 6-27, COMMANDER’S HANDBOOK ON THE
LAW OF LAND WARFARE ¶1-46 (Aug. 7, 2019) (discussing how the U.S. Army and Marine Corps
explain this principle to military commanders).
38. Additional Protocol I, supra note 13, arts. 57-58; LAW OF WAR MANUAL, supra note 12, § 5.11.
39. Additional Protocol I, supra note 13, art. 57; LAW OF WAR MANUAL, supra note 12, § 5.11.
40. GEOFFREY S. CORN, ERIC TALBOT JENSEN, VICTOR HANSEN, M. CHRISTOPHER JENKS, & RICHARD
JACKSON, THE LAW OF ARMED CONFLICT: AN OPERATIONAL APPROACH 60 (2d ed. 2019).
41. Id. at 597.
42. See NAT’L SEC. L. DEP’T, THE JUDGE ADVOCATE GEN.’S LEGAL CTR. & SCH., U.S. ARMY,
OPERATIONAL LAW HANDBOOK, 79-96 (2018) (providing an overview of how commanders use rules of
engagement and other controls).
43. CORN ET AL., supra note 40, at 596-597 (describing commanders as the focal point of military
discipline and the person who must make sure that his unit conducts military operations in compliance
with the law of armed conflict).
44. Additional Protocol I, supra note 13.
45. CORN ET AL., supra note 40, at 571-88 (providing an overview of the ways in which a member of
the United States military may be prosecuted for violating IHL).
necessary and reasonable measures within their power to prevent, report, and
punish IHL violations.46
Commanders not only have a duty to act once they know about a potential
problem, they also have a duty to seek out information that is reasonably avail-
able to them.47 This duty prevents commanders from unreasonably relying on
assurances from their superiors or subordinates when the commander should
have known the information was not reliable.
When looking at superior-subordinate issues, it is also important to understand
that command responsibility does not solely rest upon the lowest-level com-
mander. Militaries are organized with many levels of command, ranging from the
front-line commander to a state’s commander-in-chief. Command responsibility
applies to all levels of command and senior civilian leadership of the military.48
III. APPLYING COMMAND RESPONSIBILITY TO MEANINGFUL HUMAN CONTROL
As discussed above, a commander’s IHL obligations are not defined by her
direct control over each use of a weapon, over each pull of the trigger.49 Instead,
a commander’s IHL obligations are based upon her control over the whole mili-
tary operation or attack.50 Therefore, viewing MHC through the lens of command
responsibility does not necessarily require direct human control over each of a
LAWS’ uses of force. Instead, MHC would require LAWS to be designed to
allow commanders to apply controls to the overall use of the weapon that are nec-
essary and reasonable to prevent IHL violations.
To analyze what controls are necessary and reasonable, a commander must
understand the capabilities of the LAWS. Georgetown Law Professor Michael
Meier, who serves as the senior civilian law of war advisor to the U.S. Army
Judge Advocate General, emphasizes that “when looking at the lawful use of an
autonomous weapon, the first thing a commander must consider is what the plat-
form was designed to do and what testing has shown the platform to be able to
reliably and consistently do.”51
This information will be obtained when a LAWS is tested prior to a State’s
review of the new weapon system. Article 36 of Additional Protocol I requires
States to determine, in the study, development, or acquisition of a new weapon,
whether its employment would be prohibited by international law.
46. Int’l Comm. of the Red Cross, Customary IHL Rule 153: Command Responsibility for Failure to
Prevent, Repress or Report War Crimes, https://perma.cc/QSB5-HEBH; Additional Protocol I, supra
note 13, at arts. 86-87; Statute of the International Tribunal for the Former Yugoslavia art. 7(3), S.C.
Res. 827, U.N. Doc. S/RES/827 (May 25, 1993) [hereinafter ICTY Statute]; Statute of the International
Criminal Tribunal for Rwanda art. 6(3), S.C. Res. 955, U.N. Doc. S/RES/955 (Nov. 8, 1994) [hereinafter
ICTR Statute]; Rome Statute of the International Criminal Court art. 28, July 17, 1998, 2187 U.N.T.S. 3
[hereinafter Rome Statute].
47. CORN ET AL., supra note 40, at 611.
48. CORN ET AL., supra note 40, at 600-01 (explaining that criminal liability through command
responsibility is not defined by level of command, but is derived from a commander’s relationship to
subordinates); see Rome Statute, supra note 46, art. 28(b) (describing how the command responsibility
standard applies to civilian supervisors).
49. See U.S. Working Paper, supra note 29, ¶4.
50. See id.
51. Interview, supra note 15.
attack.58 However, for purposes of this scenario, the paper will assume testing shows
the LAWS to have a false positive rate slightly worse than a human’s.
In order to use the anti-tank LAWS in accordance with IHL obligations, MHC
would require the commander to at least be able to apply geographic and time
limits to the LAWS’ actions. Professor Meier acknowledges that “a commander
asserts a lot of discretion over the use of force through the planning process by
implementing precautions that reduce risk and ensure an attack meets proportion-
ality standards.”59 In order to properly plan the use of a LAWS in an attack, a
commander must at a minimum be able to dictate when and where it will operate.
For example, enemy tanks may use major roadways to quickly travel around
the battlefield. These roads may intersect with towns or cities with dense civilian
populations. If the LAWS’ false positive rate presents an excessive risk to civil-
ians and civilian objects, then the commander could limit the LAWS to operating
on parts of the road that are far from towns. This limitation would maximize the
LAWS’ ability to verify enemy targets and satisfy proportionality by ensuring the
risk to civilians was not excessive. Where feasible to meet the mission objectives,
the commander could also take the precaution of limiting the LAWS to operating
during times when civilian traffic on the road is low. By containing the LAWS to
a part of the road, the commander can further reduce risk to civilians by planning
concurrent operations, such as roadblocks, that would prevent civilians from
entering the area in which the LAWS is operating.
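The time and space limits described above lend themselves to a straightforward software representation. The following sketch, assuming a hypothetical OperatingBox structure and deliberately simplified rectangular geometry, illustrates how a commander’s geographic and temporal constraints might be encoded; it is not a description of any actual weapon’s control interface.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class OperatingBox:
    """Commander-imposed limits on where and when the LAWS may operate."""
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float
    start: datetime   # earliest time engagements are permitted
    end: datetime     # latest time engagements are permitted

    def engagement_permitted(self, lat: float, lon: float, now: datetime) -> bool:
        """Allow an engagement only inside the commander's box, during the window."""
        inside_area = (self.lat_min <= lat <= self.lat_max
                       and self.lon_min <= lon <= self.lon_max)
        inside_window = self.start <= now <= self.end
        return inside_area and inside_window
```

In the highway scenario, for example, the box could cover only the stretch of road distant from any town, with a window limited to hours when civilian traffic is low.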
Controlling the area and timing of LAWS operations is also essential for con-
ducting the required comparative analysis of means and methods of the attack.
While planning this operation, the commander may consider other weapons, such
as the AH-64 Apache attack helicopter, which could also destroy the enemy tanks
along the road.60 Even if the use of helicopters presents a lower risk of collateral
damage, they may not provide the same military advantage as the LAWS. Rather
than spreading her helicopters across the battlefield, the commander may want to
use LAWS on roads so she can focus the helicopters on supporting her infantry in
the towns, where human pilots are needed to better discriminate between enemy
forces and the dense civilian population. The commander can conduct this kind
of planning only if she can constrain the LAWS to operate solely in an area
where the attack would satisfy proportionality.
If the commander wishes to use the anti-tank LAWS closer to the towns, MHC
may require that she be able to apply additional controls beyond geography and time.
To ensure compliance with the principles of distinction and proportionality, a com-
mander could identify specific areas that the LAWS may not fire upon, such as highly
populated parts of the town or protected medical and religious buildings.61 The U.S.
military already uses digital systems to implement these types of controls across the
battlefield and provide safeguards against combatants inadvertently attacking a pro-
tected location.62 If the LAWS is able to access these digital safeguards, that may pro-
vide commanders with the necessary and reasonable control needed for MHC.
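If the LAWS can consume such digital safeguards, the no-fire areas might be represented as a protected-zone overlay consulted before every engagement. The sketch below is a hedged illustration with hypothetical names and simplified geometry; it does not depict AFATDS or any real system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NoFireZone:
    """A protected location, such as a hospital or religious site, with a buffer."""
    lat: float
    lon: float
    radius: float  # buffer size, in the same degree units as lat/lon for simplicity

def target_clear_of_zones(lat: float, lon: float, zones: list[NoFireZone]) -> bool:
    """Return False if a prospective target falls inside any no-fire zone."""
    for zone in zones:
        if (lat - zone.lat) ** 2 + (lon - zone.lon) ** 2 <= zone.radius ** 2:
            return False  # target is within a protected buffer; hold fire
    return True
```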
Depending on the capabilities and false-positive rates of the anti-tank LAWS,
the above control measures may still be insufficient for a commander to reason-
ably prevent disproportionate attacks in towns. If the commander cannot rely on
the LAWS to distinguish between enemy tanks and civilian vehicles in an urban
environment, then the commander would need more direct control over LAWS in
order to prevent indiscriminate or disproportionate attacks.
To address the need for more control, Professor Meier foresees the possibility that
militaries may utilize LAWS in complex environments by relying on human-machine
teaming. “Human-machine teaming will allow the military to rely on the relative
strengths of both humans and artificial intelligence.”63 Depending on the capability of
an AI, a human may have a greater ability to identify irregular enemy forces and con-
duct a proportionality analysis for each engagement.64 But like modern precision
munitions, the LAWS may be able to engage an approved target faster and more
accurately than the human.65 This type of teaming would necessitate the use of either
a semi-autonomous weapon or an “on the loop” human-supervised LAWS.66
When planning for the use of human-supervised LAWS, commanders will
need to take into consideration the growing threat of enemy jamming.67 As dis-
cussed in section I, the risk of jamming provides incentive for employing “out of
the loop” autonomous systems that can accomplish an attack even when cut off
from human operators. However, if MHC under certain circumstances requires
human supervision over LAWS, jamming presents the risk that commanders may
not be able to maintain that supervision.
To maintain MHC in an area with jamming, “on the loop” LAWS may need to
be designed to allow commanders to dictate what actions the LAWS should take
if cut off from human supervision. If a commander determines that the circum-
stances of a mission legally justify the use of the LAWS without human supervi-
sion, then the commander could instruct the LAWS to continue mission in the
event of a breakdown in communication. If the circumstances require human
62. See Advanced Field Artillery Tactical Data System (AFATDS), U.S. ARMY (2020), https://perma.
cc/83T7-MEDL.
63. Interview, supra note 15.
64. See HUM. RTS. WATCH, LOSING HUMANITY: THE CASE AGAINST KILLER ROBOTS 29 (2012),
https://perma.cc/NQ8X-B2BN (discussing how there are doubts that AI will be able to effectively
balance the moral and legal aspects of proportionality, even if engineers develop advanced ethical
programming).
65. SCHARRE & HOROWITZ, supra note 14, at 11.
66. Interview, supra note 15.
67. See Michael R. Gordon & Jeremy Page, China Installed Military Jamming Equipment on Spratly
Islands, U.S. Says, WALL ST. J. (Apr. 9, 2018), https://perma.cc/3AXB-P8RE; Kube, supra note 27.
supervision to uphold IHL obligations, then the commander will need to dictate
that the LAWS stop attacking targets if cut off.68
If an “on the loop” LAWS cannot be protected against jamming or pro-
grammed with cut-off instructions, then commanders will likely need to plan
operations in jammed environments as if the LAWS were fully autonomous. As
discussed above, this will not prevent a commander from ever using the LAWS,
but it will restrict the circumstances in which the commander may determine the
use is lawful.
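One way to express the cut-off instructions discussed above is as a pre-mission fallback policy that the weapon consults whenever the supervisory link drops. The sketch below is illustrative only; the policy names are assumptions, not terms drawn from any doctrine or system.

```python
from enum import Enum, auto

class CommsLossPolicy(Enum):
    CONTINUE_MISSION = auto()  # lawful only where supervision is not required
    HOLD_FIRE = auto()         # cease attacks until the link is restored

def may_keep_engaging(link_up: bool, policy: CommsLossPolicy) -> bool:
    """Whether an "on the loop" LAWS may continue attacking given link status."""
    if link_up:
        return True  # supervision available: normal human-supervised operation
    return policy is CommsLossPolicy.CONTINUE_MISSION  # jammed: follow the
                                                       # commander's instruction
```

The commander’s legal analysis, not the code, does the real work: selecting CONTINUE_MISSION is permissible only where the mission circumstances justify unsupervised use.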
IV. COMMAND RESPONSIBILITY AND ACCOUNTABILITY FOR UNINTENDED ACTIONS
Command responsibility not only provides a lens through which to view MHC,
but also a method for ensuring accountability if a LAWS performs an action that
may violate IHL obligations. If that occurs, the commander will have a duty to
report the incident to appropriate authorities and conduct an investigation.69 Due
to the complexity of AI, this duty to investigate will likely belong to a high eche-
lon of command with access to necessary subject-matter experts.70 This investiga-
tion would allow higher command to assess whether the relevant commanders
applied appropriate controls over the operation of the LAWS. If the commander
failed to take necessary measures to prevent the LAWS’ inappropriate use of
force, then she may be held criminally liable.71
Investigations would also need to determine whether commanders satisfied
their mutually supporting duties to properly train their subordinates and seek out
information that is reasonably available to them.72 If a commander claims that
she used LAWS in a certain manner because higher authority provided an inaccu-
rate assessment of the weapon’s reliability, this will likely not absolve everyone
of liability. In that circumstance, the higher commander may be disciplined for
failing to properly train her subordinates on the weapon’s capabilities. Or, if the
facts show that the lower commander should have known of the LAWS’ limita-
tions, then command responsibility could hold her liable for what she reasonably
should have known.73
68. See Google Developing Kill Switch for AI, BBC NEWS (Jun. 8, 2016), https://perma.cc/3SNE-
CFTB (discussing efforts to allow humans to prevent AI from acting outside of the programmers’
intended limits). U.S. policy directly addresses this concern for semi-autonomous systems that are
intended to use lethal force and requires these systems to be “designed such that, in the event of
degraded or lost communications, the system does not autonomously select and engage individual
targets or specific target groups that have not been previously selected by an authorized human
operator.” DIR. 3000.09, supra note 8, at 3.
69. Additional Protocol I, supra note 13, art. 87(1); Rome Statute, supra note 46, art. 28(a)(ii).
70. Richard J. Sleesman & Todd C. Huntley, Lethal Autonomous Weapon Systems: An Overview, 1
ARMY L. 32, 34 (2019) (discussing the possibility that all autonomous weapon incidents will require
centralized national-level investigation because of the complexities of artificial intelligence).
71. Additional Protocol I, supra note 13, art. 86(2).
72. Additional Protocol I, supra note 13, art. 87(1).
73. Additional Protocol I, supra note 13, art. 86(1); ICTY Statute, supra note 46, art. 7(3); ICTR
Statute, supra note 46, art. 6(3).
It all comes down to whether the commander’s confidence in the system is rea-
sonable. The first time an accident happens, it may not be a violation of [IHL].
74. See Rebecca Crootof, War Torts: Accountability for Autonomous Weapons, 164 U. PA. L.
REV. 1347 (2016), https://perma.cc/Y54Q-D7A6.
75. Id. at 1379-81.
76. See Precision Weapons, RAYTHEON (2020), https://perma.cc/XQ8F-9SCD (providing examples
of GPS-guided munitions).
77. Additional Protocol I, supra note 13, art. 57(2)(a)(ii).
78. Additional Protocol I, supra note 13, art. 87(1); Rome Statute, supra note 46, art. 28(a)(ii); see
U.S. DEP’T OF DEF., DIR. 2311.01, DOD LAW OF WAR PROGRAM ¶4.2 (July 2, 2020) (describing United
States policy that commanders must investigate alleged violations of the Law of War when they are
based on credible evidence).
79. See Rome Statute, supra note 46, art. 28(a)(ii) (discussing how, even if the commander did not
have the capability to properly investigate the technical nature of the bomb, she has an obligation to
report it to appropriate authority for investigation).
But if it keeps happening and nothing is done to prevent it, a commander will
have a difficult time arguing that the problem is unforeseeable.80
One final issue associated with accountability for LAWS is that the complexity
of AI currently makes it difficult, if not impossible, to reverse-engineer the
causes of an AI’s actions. To address this concern, many organiza-
tions are working to create “understandable AI,” which provides human operators
with the ability to review the basis for an AI’s actions.81 This capability will be
essential for the lawful use of LAWS, because without it, an investigation will be
unable to determine why an AI made an unforeseen decision. Without that knowl-
edge, commanders will likely have only two options: 1) significantly limit the cir-
cumstances in which they use LAWS; or 2) determine they can no longer use the
weapon lawfully under any circumstances.
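Short of fully “understandable AI,” a LAWS could at least preserve a structured record of each engagement decision so that investigators can reconstruct what the system perceived and decided. The following sketch of such an audit record is an assumption about design, with hypothetical field names, not a reference to any existing program.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class EngagementRecord:
    """One engagement decision, preserved for post-incident investigation."""
    timestamp: datetime
    software_version: str  # which AI build made the decision
    sensor_summary: str    # what the system perceived
    classification: str    # what it judged the object to be (e.g., "tank")
    confidence: float      # the model's reported confidence in that judgment
    action_taken: str      # e.g., "engaged" or "held fire"

def record_decision(audit_log: list, record: EngagementRecord) -> None:
    """Append a record so a later investigation can replay the sequence of events."""
    audit_log.append(record)
```

Even this modest logging would narrow the investigative gap: a commander who cannot ask why the AI acted can at least establish what it saw and when.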
CONCLUSION
The introduction of LAWS on the modern battlefield may appear to strain the
IHL framework by having machines carry out a function that has previously only
been done by humans – selecting and engaging targets. But the use of LAWS will
not allow commanders to abdicate their responsibility to ensure their forces uphold
IHL obligations. Commanders will remain obligated to take necessary and reasona-
ble measures to prevent and suppress violations of IHL by their forces. Therefore,
MHC should be defined as the control necessary for commanders to satisfy this
obligation.
To maintain responsible command, LAWS must be designed to ensure
commanders can understand the purpose, capabilities, and limitations of the sys-
tem. The level of direct control necessary to maintain command responsibility
will depend on the purpose and capabilities of the LAWS and the circumstances
in which it is intended to be used. At a minimum, MHC requires that a com-
mander be able to apply geographic and time constraints in order to limit a
LAWS’ use to the circumstances that will uphold distinction and proportionality.
To use LAWS in more complex and civilian-saturated environments, MHC may
require that commanders have the ability to apply additional control measures or
human supervision.
In addition to providing a working definition for MHC, command responsibility
also provides a mechanism for accountability when using LAWS. Commanders
may be held criminally liable if they failed to properly train their forces on the weap-
on’s reliability or failed to apply the types of controls necessary to prevent IHL vio-
lations. If a LAWS takes an unforeseeable action, despite commanders taking all
necessary precautions, commanders may still be criminally liable if they fail to
investigate the incident and take action to prevent further unintended uses of force.