AUSTRALIAN JOURNAL OF INTERNATIONAL AFFAIRS
2024, VOL. 78, NO. 2, 237–246
https://doi.org/10.1080/10357718.2024.2333817

A complex-systems view on military decision making*


Osonde A. Osoba
RAND Corporation, Santa Monica, CA, USA

CONTACT Osonde A. Osoba [email protected]

*This article is one of thirteen articles published as part of an Australian Journal of International Affairs Special Issue, Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making, guest edited by Toni Erskine and Steven E. Miller.

ABSTRACT
Military decision-making institutions are sociotechnical systems. They feature interactions among people applying technologies to enact roles within mission-oriented collectives. As sociotechnical systems, military institutions can be examined through the lens of complex adaptive systems (CAS) theory. This discussion applies the CAS perspective to reveal implications of integrating newer artificial intelligence (AI) technologies into military decision-making institutions. I begin by arguing that military adoption of AI is well-incentivised by the current defence landscape. Given these incentives, it would be useful to try to understand the likely effects of AI integration into military decision-making. Direct examinations of the new affordances and risks of new AI models are a natural mode of analysis for this. I discuss some low-hanging fruit in this tradition. However, I also maintain that such examinations can miss systemic impacts of AI reliance in decision-making workflows. By taking a complex systems view of AI integration, it is possible to glean non-intuitive insights, including, for example, that common policy concerns like preventing human deskilling or requiring algorithmic transparency may be overblown or counterproductive.

KEYWORDS
Artificial intelligence (AI); machine learning; complex adaptive systems; complexity; decision-making

Introduction
Large Language Models (LLMs), a recent iteration in advanced artificial intelligence (AI), have gained prominence for impressive feats of general intelligence. This raises a natural question for the security-minded: does the availability of such broadly capable, advanced AI systems impact the effectiveness of security actors?
A recent short report (Mouton, Lucas, and Guest 2023) illustrates this concern. The
authors discuss results from ‘red-teaming’ exercises that aim to identify novel security
risks arising from the open deployment of advanced AI tools like OpenAI’s ChatGPT.
Red-teaming (Rehberger 2020) is a standard exercise developed in cybersecurity practices
in which experts attempt to abuse or defeat a new system to gather evidence about the
system’s weaknesses. In this particular exercise, red-teamers adopt the perspective of
an adversary, perhaps a non-state actor looking to attack a better-resourced opponent.
They aim to plan and execute asymmetric warfare operations using bioweapons in an
urban environment. The study convened panels of biosecurity experts to assess

CONTACT Osonde A. Osoba [email protected]


*This article is one of thirteen articles published as part of an Australian Journal of International Affairs Special Issue,
Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making, guest edited by Toni
Erskine and Steven E. Miller.
whether these new LLMs can improve an adversary’s effectiveness at planning such bio-
weapon attacks. Shockingly yet unsurprisingly, red-teamers were able to get LLMs to
discuss practical aspects and make useful recommendations for increasing the lethality
of potential bioweapon attacks … How do we anticipate such military uses of artificial
intelligence (AI)? And what are the implications for deterrence?
AI and machine learning (ML) technologies have become very useful and effective. These
tools are likely to be adopted widely in military decision making. Military adoption of useful technology is normal: consider, for example, the adaptation of automobile and aviation technologies for military purposes. The flow of innovation also goes in reverse. A lot of technological
innovation is the direct product of wartime demands. Privacy-enhancing technologies like
digital cryptography have roots in war-time code-cracking efforts and military espionage.
Despite this natural connection, the integration of AI/ML technologies into military
decision making raises special concerns for attempts at meaningful arms control.
Levers for AI ‘arms’ verification and control are still underspecified. For verification:
the verification of an adversary’s use of AI/ML in war or military decision making is a
difficult problem and will likely feature significant deceptive actions by adversaries.
Parties would be well motivated to try to hide the full capabilities of their AI in software
(think, for instance, about Volkswagen using software switches to evade proper emis-
sions testing (Schiermeier 2015)). We would also need answers to questions like: what
kinds of tests are needed to verify the level of intelligence embedded in a military
system? What is a natural metric of artificial intelligence capabilities?
For AI ‘arms’ control, effective control requires that (international) regulating parties
can credibly intervene on the key enablers of AI technologies. But access to key enablers
of these technologies (data, algorithms, compute, tech talent) is ‘democratised,’ making it
difficult for regulating bodies to intervene to regulate their use by adversaries. Further-
more, the primary momentum around AI/ML technologies lies in the hands of private
and often multinational commercial interests who are not bound by the same norms
and responsibilities that traditional nation states have. Finally, public discussions of AI
already express fears about uncontrollability and economic disruption. Military use
heightens these fears.
To add more colour and detail to concerns about the risks of employing AI in military
decision making, I am going to focus on providing a considered response to the following
question: what systemic risks and opportunities accompany the integration of AI into
military decision-making ecosystems?
A natural response for a technologist tackling this kind of question is to dissect the
nature of the technological artefacts1 (Winner 1980) to generate inferences about likely
risks and benefits in use. For example, one could focus on compute or training data
needs (as in Matheny’s (2023) recommendations for AI regulation, for example) or on
differences in AI perceptual abilities (as I do below). These kinds of artefact-level analyses
are important but can be incomplete. They do not clearly connect to insights about how
the institutional culture of military decision-making may adapt to the shock of AI inte-
gration. We need to develop a more systemic frame to better contextualise the effects of
broad AI adoption in military decision-making culture. Vold’s (2024) conception of ‘AI-
enabled cognitive enhancements’ is one such fruitful systemic frame. My later discussion
here of AI as artefacts in Complex Adaptive Systems (CAS) is another exploration of the
systemic frame. Drawing insights from other complex sociotechnical systems, that later
discussion draws counter-intuitive lessons about AI concerns around human de-skilling


and the need for algorithmic transparency.
For the rest of this discussion, I will cover three key points. First, I will identify incen-
tives in the security landscape that are likely to motivate military decision-making insti-
tutions to adopt AI/ML. Second, I will delve into specific AI capabilities and limitations
that can have significant impacts on the quality of military decision-making. Finally, I
will conclude by applying a complexity lens to help set more justifiable expectations
about effective AI integration into military decision-making institutions.

Incentives for expanding AI/ML roles in military decision-making?


Is the integration of AI/ML in military decision making likely to happen? This question is
separate from normative questions of whether this integration should be considered
legal, ethical, or socially acceptable—questions addressed elsewhere in this special issue
(Deeks 2024; Erskine 2024; Sienknecht 2024). I argue that there are good incentives to
encourage the expansion of AI/ML roles in military decision-making institutions.
The use of AI and ML in a competitive decision-making ‘game’ like national defence is
valuable because it can improve mission effectiveness, e.g. by improving situational awareness and thereby the management and processing of increasingly complex
information streams. The use of AI/ML in defence may also simply be an imperative
imposed by competition. As credible threats from near-peers and other relevant potential
threat actors improve (via technology improvements, AI-based or otherwise), the inte-
gration of AI into military operations offers a strategic path for evolving a nation’s
defence capabilities to better match the threat landscape. Besides this competitive-
agents perspective, the environment of warfare itself has changed. The relevant
domains of warfare have also evolved considerably in the last 50–100 years. Modern
national security interests now include cyber, influence or information, bio, and space
domains of warfare. These new domains impose new and urgent demands for techno-
logical innovation. AI tooling can help address many of these demands. I will examine
a few of these to illustrate the point.
Space defence, for example, has a few complicating characteristics. The history of mili-
tary operations in space is relatively short, so space defence strategies and tactics are still
evolving. Space-domain physics is unintuitive compared to land, air, and sea physics.
Moreover, the earth-bound reaction latencies, i.e. the time between course-of-action
selection on earth and effects in space, can be prohibitively long. This makes the use
of AI models for gaming or refining tactics, and even for automated course-of-action
(COA) selection, an attractive option (Navabi and Osoba 2021).
Information and cyber operations are inherently data and network infrastructure
management operations, even when the effects are physical in kind. Threat actors manip-
ulate information over networks to achieve effects in physical systems or the minds of
other individuals. In the case of information operations, the primary signal modalities
are natural language and audio-visual artefacts. Modern AI tools excel at multimodal
information processing, and now especially natural language processing. Some specific
use cases where AI can be relevant for these domains include cyber threat detection,
language and narrative generation, and the generation and modification of audio-
visual material, etc.

In sum, the threat landscape and operating environments are increasingly complex.
Warfighting institutions need to adapt to this complexity increase. AI-based tools offer
an accessible, plausible, and enticing mode of adaptation. I contend that the operating
incentives described make widespread AI integration into military decision making insti-
tutions practically inevitable. Given this probable trajectory, what new capabilities and shortcomings do these tools introduce? The next section discusses some responses to this ques-
tion by examining technical aspects of AI systems.

Frame: perception & action


One way to understand the potential contributions of AI to military decision making is to
dissect the kinds of functions AI artefacts can augment or perform in military insti-
tutions. A useful frame for parsing functions in military decision making is to organise
them according to perception and action sub-functions: perceiving the context or current
state of the environment and then acting in response to this perceived operating picture.
This is similar to how decision loops are framed in the modern AI tradition of actor-critic
reinforcement learning modelling (Sutton and Barto 2018). In their recent book on
global policy threats, ‘Age of Danger,’ Hoehn and Shanker (2023) use a similar frame
of observing and acting to describe the national security functions of a government.
What contributions and/or limitations do AI systems have in these broad functions?
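
To make the perception-and-action framing concrete, the following is a minimal sketch, loosely in the spirit of the actor-critic tradition cited above, of a perceive-act-adapt decision loop (with the critic reduced to a scalar feedback signal). It is not drawn from the article; the environment object and all names are hypothetical and purely illustrative.

```python
import random

class Perceiver:
    """Stands in for AI-augmented sensing: compress raw observations into a state estimate."""
    def estimate_state(self, raw_observations):
        return sum(raw_observations) / len(raw_observations)  # toy summary statistic

class Actor:
    """Selects a course of action given the perceived state, and adapts from feedback."""
    def __init__(self, actions, epsilon=0.1, lr=0.1):
        self.actions = list(actions)
        self.epsilon, self.lr = epsilon, lr
        self.preferences = {a: 0.0 for a in self.actions}

    def act(self, state):
        if random.random() < self.epsilon:          # occasional exploration
            return random.choice(self.actions)
        return max(self.actions, key=self.preferences.get)

    def update(self, action, value):
        self.preferences[action] += self.lr * (value - self.preferences[action])

def decision_loop(environment, perceiver, actor, steps=100):
    """Perceive -> act -> observe outcome -> adapt, repeated over many cycles."""
    for _ in range(steps):
        state = perceiver.estimate_state(environment.observe())  # perception sub-function
        action = actor.act(state)                                 # action sub-function
        outcome = environment.step(action)                        # scalar feedback signal
        actor.update(action, outcome)
```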

AI applied to military reasoning and deliberation?


Current AI systems typically perform differently at perception vs. reasoning tasks. Up until the recent development of Chain-of-Thought prompting (Wei et al. 2022) in LLMs, the reasoning capacity of statistical AI models had limited maturity. The potential for
AI to contribute to military decision making is likely still in flux as AI (and LLM) reasoning
capabilities evolve. But there are already creative ways to apply current AI to refine or train
tactical decision-making. These include the use of LLMs for generating tactical plans (such
as the LLM-generated bioweapon attack plans discussed above (Mouton, Lucas, and Guest
2023)), AI-equipped simulation environments to test hypothetical plans, and possibly
advanced AI-augmented war games for refining strategic sense during officer training.
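
As a purely illustrative aside on the Chain-of-Thought point above: the technique amounts to prompting a model to lay out intermediate reasoning before answering. The sketch below is generic and hypothetical; query_llm stands in for whatever model interface is in use and is not a real API.

```python
def build_prompts(question: str):
    """Contrast a direct prompt with a chain-of-thought style prompt that
    asks for intermediate reasoning steps before the final answer."""
    direct = f"Question: {question}\nAnswer:"
    chain_of_thought = (
        f"Question: {question}\n"
        "Work through the problem step by step, stating each intermediate "
        "consideration, then give a final answer.\nReasoning:"
    )
    return direct, chain_of_thought

# Hypothetical usage (query_llm is a placeholder, not a real library call):
# direct, cot = build_prompts("Which of these three courses of action best meets the objective?")
# answer = query_llm(cot)
```
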
The Defense Advanced Research Projects Agency (DARPA) AlphaDogfight trials give
an example of interactions between human and AI agents in simulated tactical engage-
ments. The AlphaDogfight was a 2019–2020 DARPA-sponsored program in which AI
agents and human fighter pilots competed in simulated aerial dogfights in F-16 fighter
jets. Navabi and Osoba (2021) also give an example of the use of rudimentary generative
AI models for course-of-action selection in simulated pursuit-evasion games. Such combi-
nations of AI and simulation environments can be especially useful for refining decision-
making in new and unfamiliar domains (like space). These examples suggest the possibility
of AI agents in simulated environments in future training programs.

AI-augmented perception?
In resort-to-force decision making, the actor typically starts with (or is cued in because
of) information flows from tactical intelligence, surveillance, and reconnaissance (Tacti-
cal ISR or Tac-ISR) operations applied to observations or perceptions. Open source
intelligence (OSINT) data flows are one input source of Tac-ISR workflows.2 With the
rise of cheaper and more commercial space infrastructure, space assets are another
input source for enabling robust and timely military Tac-ISR. AI systems show great
promise as facilitators of perception tasks (Yang et al. 2020) and they demonstrate
near-human performance on some visual perception tasks. Osoba et al. (2023) discuss
how AI tools can be applied to data verification tasks in Tac-ISR data supply chains.
The performance of AI tools at these tasks is somewhat of a distraction though.
Achieving human-level perceptual processing is noteworthy, but that is not the most
compelling reason for broad adoption. The main benefit of AI in this part of the decision
cycle is scale. AI-augmented perception would enable a significant increase in the amount
of information that can be processed, for example, the ability to process vast amounts of audio, visual, and textual inputs and then cue up the pieces that are most relevant to the
decision maker. This kind of cuing workflow is an operational tactic for leveraging the
cognitive diversity (Hernández-Orallo and Vold 2019) in human-machine teams to
great effect. The workflow preserves the limited attention of human operators and
targets human attention more efficiently. This workflow structure is valuable even
when the algorithm is not a perfect perceiver. This scaling up in perception is going to
be pivotal for parsing multi-sourced data flows to enable responsive and robust Tac-
ISR for resort-to-force decision making.
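
To make the cuing workflow concrete, here is a minimal, hypothetical sketch: a relevance model scores a large stream of incoming items and only as many as the human team can absorb are surfaced for review. The Item fields, the relevance model, and the attention 'budget' are assumptions for illustration, not an operational design.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Item:
    source: str        # e.g. an OSINT feed, satellite tasking, intercepted audio
    payload: str       # raw content to be assessed
    score: float = 0.0

def cue_for_human_review(items: Iterable[Item],
                         relevance_model: Callable[[Item], float],
                         budget: int) -> List[Item]:
    """Score every incoming item, but surface only as many as the human
    team has attention for; the rest remain machine-triaged."""
    scored = list(items)
    for item in scored:
        item.score = relevance_model(item)    # an imperfect perceiver is tolerable here
    scored.sort(key=lambda it: it.score, reverse=True)
    return scored[:budget]
```

Even with an imperfect relevance model, this structure conserves scarce human attention, which is the scale benefit emphasised above.
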
The promise of AI-augmented perception comes with a new kind of risk: adversarial
examples or adversarial manipulations of machine perceptions.3 AI perception systems
can be deceived into giving mistaken inferences via both digital (Kurakin, Goodfellow, and Bengio 2016) or physical (Song et al. 2018) manipulations of the subject under observation. Deception is the norm in warfare, and it incentivises the routine deployment
of adversarial examples in countersurveillance and warfare. Furthermore, automation
bias can atrophy review processes for lower-level machine outputs, rendering such AI
misperceptions more systemically dangerous.
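
For readers unfamiliar with the mechanics of the digital attacks cited above (Kurakin, Goodfellow, and Bengio 2016), the basic move is to add a small, gradient-guided perturbation that flips a model's inference. The sketch below is a generic fast-gradient-sign step, assuming a differentiable PyTorch image classifier; it is illustrative only and does not describe any specific military system.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
                 eps: float = 0.03) -> torch.Tensor:
    """One fast-gradient-sign step: nudge the image in the direction that most
    increases the classifier's loss, within an L-infinity budget of eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)       # y: the label the attacker wants to escape
    loss.backward()
    perturbed = x_adv + eps * x_adv.grad.sign()   # small, structured distortion
    return perturbed.clamp(0.0, 1.0).detach()     # keep pixel values in a valid range
```
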
In summary, an artefact-level examination suggests the following:

• AI technologies can already increase the capabilities of military organisations that are properly resourced to take advantage of them.
• The use of these technologies in human military institutions represents a concrete form of cognitive diversity. These artefacts literally perceive the world differently from humans. But their use requires paying closer attention to the trustworthiness of AI-sourced and filtered perceptions that feed into resort-to-force decision making pipelines. How should a decision maker adapt when they cannot fully trust the testimony of their ‘eyes’?

Frame: complex adaptive systems


Beyond individual AI systems’ behaviours, we must understand the systemic impli-
cations of increased cognitive diversity as AI integrates deeper into military decision-
making institutions. How should we revise our understanding of accountability as the
roles for AI systems proliferate in military institutions? Framing military decision-
making institutions as complex decision systems adapting to technological evolution
can yield useful insight.

Briefly, a complex system is an ensemble of agents and structures that interact in non-
linear ways leading to the emergence of hard-to-predict macroscopic behaviours.
Complex systems that are also adaptive typically feature additional signature charac-
teristics like nested subsystems, self-organisation, nonlinear effects, adaptation, memory,
and emergence. Most organisations and policy systems qualify as CASs involving adap-
tive heterogeneous agents (Davis et al. 2021). ‘Adaptive’ refers to the agents’ capacity for
learning and behavioural change or evolution. ‘Heterogeneous’ refers to the diversity in
the kinds of agents interacting in the system. For example, public health systems feature
adaptive individuals, firms (hospitals and insurers), infectious pathogens, transport infra-
structure, etc. States and, more specifically, their national security institutions responsible
for resort-to-force decision-making processes would also qualify as CASs.
The integration of AI technologies adds another dimension of complexity to insti-
tutions responsible for resort-to-force decision processes. Exploring other complex adap-
tive systems can offer valuable insights into how to adapt to AI-imposed complexity, for
example, by highlighting properties common to well-functioning CASs.
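
A toy agent-based sketch can make this vocabulary concrete: heterogeneous agents repeatedly adapt to whomever they interact with, and a macroscopic pattern (clustering) emerges that no individual agent encodes. This is a generic, hypothetical illustration of adaptation and emergence, not a model of any military institution.

```python
import random

def run_toy_cas(n_agents=50, steps=2000, seed=0):
    """Each agent holds a continuous 'stance' in [0, 1] and an individual
    adaptation rate (heterogeneity). On each step a random pair interacts;
    if their stances are close enough, one shifts toward the other.
    Clusters of similar stances emerge without any central coordination."""
    rng = random.Random(seed)
    stances = [rng.random() for _ in range(n_agents)]
    rates = [rng.uniform(0.05, 0.5) for _ in range(n_agents)]

    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i != j and abs(stances[i] - stances[j]) < 0.3:       # bounded, nonlinear interaction
            stances[i] += rates[i] * (stances[j] - stances[i])  # local adaptation
    return sorted(stances)  # the sorted spread shows a few emergent clusters
```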

Is specialisation beneficial in a CAS?


To illustrate a relevant precedent, consider the concern of the ‘deskilling’ of human
actors, meaning the tendency of human actors in an AI-equipped sociotechnical
system (STS) to delegate key functions to artefacts, and subsequently lose competence
in those functions. Deskilling may be summarised as a division-of-labour concern
where ‘labourers’ are construed to include AI artefacts.
Economic firm networks are useful examples of CASs as precedents for parsing des-
killing concerns. Firms are decision-making agents that allocate resources from nature
and other firms to produce outputs (goods and services) often with the aim of turning
a profit. The evolved tendency is for firms to specialise in separate parts of production
processes. This results in individual firms that are completely deskilled at producing
goods and services, even goods and services that are foundational to their own needs.
A good example of this is Apple delegating chipmaking functions and expertise to the
Taiwan Semiconductor Manufacturing Company (TSMC). Despite this extreme deskilling effect, firms
and firm networks can operate efficiently, even if efficiency in consumer goods pro-
duction and resource extraction may not be the ideal measure of performance. They
are also sometimes able to comply with norms such as constraints on environmental
impact, although studies suggest that the strength of the induced compliance can be attenuated by relational and contextual factors (Delmas and Montiel 2009).
This short analogy suggests that vertical disintegration and the domain specialisation
of human and synthetic agents in decision processes are not necessarily harmful to
mission effectiveness. Vertical disintegration can also coexist with the ability of CASs
to comply with some norms (like limits on labour practices or environmental
impacts).4 Furthermore, such specialisation and disintegrated responsibilities may be
key ingredients for resilience in complex adaptive systems. In Herbert Simon’s discussion
of system complexity, ‘The Sciences of the Artificial’ (1996), the author identifies a common attribute of good exemplars of complex adaptive systems: the property of near-decomposability (sometimes called modularity). This refers to a property of systems for which the span of the influence of sub-systems is relatively bounded, i.e. a subsystem only affects at
most a small fraction of other subsystems. Modular or nearly decomposable systems are
easier to manage (e.g. swap out failing subsystems with limited side-effects) and under-
stand (e.g. easier to trace observed outcomes back to responsible sub-systems).
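
Near-decomposability can be stated more concretely in terms of an influence matrix over subsystems: influence within a module dominates, and cross-module influence is a small fraction of the total. The function below is a hypothetical way of quantifying that fraction, offered only to pin down the idea; it is not drawn from Simon's text.

```python
import numpy as np

def cross_module_influence(influence: np.ndarray, modules: list) -> float:
    """Fraction of total influence that crosses module boundaries.
    influence[i, j] >= 0 is the strength with which subsystem j affects
    subsystem i; `modules` is a partition of the subsystem indices."""
    membership = {}
    for label, indices in enumerate(modules):
        for i in indices:
            membership[i] = label
    total = influence.sum()
    cross = sum(influence[i, j]
                for i in range(influence.shape[0])
                for j in range(influence.shape[1])
                if membership[i] != membership[j])
    return float(cross / total) if total > 0 else 0.0

# A nearly decomposable (modular) system scores low here: most influence stays
# inside modules, so subsystems can be swapped or audited with limited side-effects.
```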

Is there any novelty specific to AI-Integration in military CASs?


It would be misleading to suggest that all questions around AI proliferation in military
decision making have clear analogies or are foreshadowed by other pre-existing CASs.
A primary AI-specific concern that has limited precedent is the reliance on intelligent
behaviour produced by synthetic AI agents, particularly given that such intelligent behav-
iour does not transparently resolve down to morally or legally accountable human
persons. Some scholars have used this fact to argue for legal personhood for important
AI systems. However, Bryson, Diamantis, and Grant (2017) give a good discussion of
the incoherence of conferring legal personhood to algorithm-based agents for the goal
of accountability. To summarise their investigation in their own words (2017, 289):
There is no question that such a readily-manufacturable legal lacuna would be exploited as a
mechanism for [avoiding] legal liabilities […] and we find the idea [of synthetic persons]
could easily lead to abuse at the expense of the legal rights of extant legal persons.

In other words, synthetic AI personhood would be abused as a liability shield while offering no actual ability to redress the wrongs such agents effect. Erskine (2024) and Sienknecht
(2024) go further to reject the coherence of conferring moral personhood on algor-
ithm-based agents.
Another approach to solving this accountability and trust concern is to require algo-
rithmic explanations or transparency. I argue that this approach is inadequate to the task
for a few reasons. First, achieving true transparency is impractical for today’s large,
complex AI models. Exposing their inner workings provides little insight into their
decision rationale, even to experts.
Second, algorithmic ‘explanations’ tend to take the form of mere post hoc rationalis-
ations of decisions from models trained on observational data. Their reliance on obser-
vational training data means these models lack a detailed causal understanding of their
decision contexts and can only refer to correlational historical patterns. Algorithmic
‘explanations’ on such models have limited utility for grounding trust and accountability,
especially in dynamic, adversarial environments where historical patterns are unreliable
interpretive lenses for new scenarios. Furthermore, recent findings show that explanatory
methods applied to AI models can easily emit misleading explanations (Lakkaraju and
Bastani 2020).
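
To pin down what a post hoc ‘explanation’ typically is: a common recipe fits a simple surrogate model to a black box's behaviour in a small neighbourhood around one decision and reports the surrogate's weights. The sketch below assumes scikit-learn and a generic black_box scoring function (both assumptions for illustration); it shows why such output summarises local correlational behaviour rather than giving a causal account of the decision.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate_explanation(black_box, x, n_samples=500, scale=0.1, seed=0):
    """LIME-style sketch: perturb the input around x, query the black box,
    and fit a linear surrogate whose coefficients act as the 'explanation'."""
    rng = np.random.default_rng(seed)
    neighbourhood = x + scale * rng.normal(size=(n_samples, x.shape[0]))
    scores = np.array([black_box(z) for z in neighbourhood])   # black-box outputs
    surrogate = Ridge(alpha=1.0).fit(neighbourhood, scores)
    return surrogate.coef_   # per-feature weights: a local, correlational summary
```
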
So … what is the operational value of decision explanations provided by artefacts that
are susceptible to fallacious reasoning? And what is the value of breaking open black-box
models if their contents multiply confusion without adding insight? Given these limit-
ations, I argue that relying on these forms of algorithmic introspection for evidence
when judging algorithms is a fragile and ineffective endeavour that ultimately provides
no clear path for actual redress.
There is a different line of thinking that anchors calibrations of trustworthiness in
organisational structures and processes surrounding the information-producing agent
instead of as a function of the agent itself. Sienknecht (2024) coins the phrase ‘proxy
responsibility’ to describe the concept of reaching back through the technological arte-
facts to root responsibility squarely in the (legally & socially) responsive humans and
organisations deploying these artefacts. This concept is similar to Floridi’s concepts of
distributed morality and distributed faultless responsibility (Floridi 2016; 2020) (if we
drop the ‘faultless’ aspect).
For example, we have some trust in the veracity of an esoteric academic paper pub-
lished by an unknown author in a reputable journal. This trust is not because we
always have enough deep expertise to critically evaluate the paper. Our knowledge of
peer-review processes and the embedded incentives anchors our calibration of the trust-
worthiness of the paper. This suggests that there might be value in exploring ways of
designing processes around the generation and consumption of AI-augmented resort-
to-force decisions that may be more effective at providing compelling justifications.

Conclusion
My aim in this piece has been to highlight the utility of two different frames for under-
standing the implications of AI integration into military decision-making institutions: a
frame focusing on the technical capabilities of the AI artefacts and a frame focused on the
systems into which these AI artefacts will be integrated. From those explorations, the fol-
lowing insights are worth reiterating.
The artefact-level analyses reveal that AI systems perceive the world differently than
humans. That difference in perception can be exploited by adversaries but also provides
useful cognitive diversity. Deceptive threat actors seeking to hide their activities from
monitoring satellites may be able to craft AI-targeted camouflage to confuse large-
scale surveillance workflows. Threat actors can also take advantage of current AI tools
to generate novel tactical plans and courses of action. Increased variety in adversary
tactics can slow or muddy resort-to-force decision making.
The systems-level analysis points to two additional conclusions. First, the concern
over deskilling human actors may not be a central one. Specialisation is likely essential
for efficient and scalable resort-to-force decision making processes, especially if that
specialisation makes wise use of the cognitive diversity in human-machine teams.
Second, achieving a decision system architecture that is responsive to societal norms
requires the ability to link decisions to accountable non-machine agents. Algorithmic
introspection (transparency and explanation) centres algorithms in our search for
accountability in decision making. But algorithms are not capable of bearing responsibil-
ity in any useful way. Perfecting our algorithmic introspection capabilities would not
bridge accountability gaps. The focus should be on accountability structures rather
than algorithmic introspection.

Notes
1. ‘Artefacts’ are simply man-made objects. I use this term typically to refer to man-made
elements that play roles in a sociotechnical system. I intend my use of this to be similar
to Langdon Winner’s use of the term in his 1980 essay ‘Do artifacts have politics?’
(Winner 1980) in which he scopes artefacts to include ‘machines, structures, and systems
of modern material culture.’

2. These were identified by Mick Ryan (2023) in his contribution to the ‘Anticipating the
Future of War: AI, Automated Systems, and Resort-to-Force Decision Making’ workshop.
3. The adversarial example concern can be parsed as a mirror image of the problem of ‘deep-
fakes.’ The deepfake concern involves the production of AI-generated artifacts designed to
fool humans into believing falsehoods.
4. There is a secondary question about the degree to which the skill of human actors is necess-
ary for maintaining compliance with international rules and norms around the use of force.
If the primary requirement for compliance is meaningful human oversight, then there is
research and design to do on how to organise coordination between human actors and
AI systems to achieve compliance.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Notes on contributor
Osonde Osoba (Ph.D.) is a researcher and practitioner in the field of Responsible AI (RAI). Over
the past decade, he has worked on RAI by applying AI to policy problems (at RAND) and by exam-
ining questions of fairness and equity in the use of AI for decision-making (at RAND & now at
LinkedIn).

References
Bryson, Joanna J., Mihailis E. Diamantis, and Thomas D. Grant. 2017. “Of, for, and by the People:
The Legal Lacuna of Synthetic Persons.” Artificial Intelligence and Law 25: 273–291. https://doi.
org/10.1007/s10506-017-9214-9.
Davis, Paul K., Tim McDonald, Ann Pendleton-Jullian, Angela O’Mahony, and Osonde Osoba.
2021. “A Complex-Systems Agenda for Influencing Policy Studies.” In Proceedings of the
2019 International Conference of The Computational Social Science Society of the Americas,
edited by Zining Yang and Elizabeth Von Briesen, 277–296. Springer.
Deeks, Ashley. 2024. “Delegating War Initiation to Machines.” Anticipating the Future of War: AI,
Automated Systems, and Resort-to-Force Decision Making, Special Issue of Australian Journal of
International Affairs 78 (2), (this issue).
Delmas, Magali, and Ivan Montiel. 2009. “Greening the Supply Chain: When is Customer Pressure
Effective?” Journal of Economics & Management Strategy 18 (1): 171–201.
Erskine, Toni. 2024. “Before Algorithmic Armageddon: Anticipating Immediate Risks to Restraint
When AI Infiltrates Decisions to Wage War.” Anticipating the Future of War: AI, Automated
Systems, and Resort-to-Force Decision Making, Special Issue of Australian Journal of
International Affairs 78 (2), (this issue).
Floridi, Luciano. 2016. “Faultless Responsibility: On the Nature and Allocation of Moral
Responsibility for Distributed Moral Actions.” Philosophical Transactions of the Royal Society
A: Mathematical, Physical and Engineering Sciences 374: 20160112. https://doi.org/10.1098/
rsta.2016.0112.
Floridi, Luciano. 2020. “Distributed Morality in an Information Society.” In The Ethics of
Information Technologies, edited by Keith Miller and Mariarosaria Taddeo, 63–79. Milton
Park, UK: Routledge.
Hernández-Orallo, José, and Karina Vold. 2019. “AI Extenders: The Ethical and Societal
Implications of Humans Cognitively Extended by AI.” Proceedings of the 2019 AAAI/ACM
Conference on AI, Ethics, and Society.
Hoehn, A., and T Shanker. 2023. Age of Danger: Keeping America Safe in an Era of New
Superpowers, New Weapons, and New Threats. Hachette Books.

Kurakin, Alexey, Ian Goodfellow, and Samy Bengio. 2016. “Adversarial Machine Learning at
Scale.” arXiv preprint arXiv:1611.01236.
Lakkaraju, Himabindu, and Osbert Bastani. 2020. “‘How do I Fool you?’ Manipulating User Trust
Via Misleading Black Box Explanations.” In Proceedings of the AAAI/ACM Conference on AI,
Ethics, and Society, edited by Annette Markham, Julia Powles, Toby Walsh, and Anne L.
Washington, 79–85. New York, NY: Association for Computing Machinery.
Matheny, Jason. 2023. “Advancing Trustworthy Artificial Intelligence.” Testimony presented before
the U.S. House Committee on Science, Space, and Technology, 22 June 2023. Santa Monica, CA:
RAND Corporation.
Mouton, Christopher A., Caleb Lucas, and Ella Guest. 2023. The Operational Risks of AI in Large-
Scale Biological Attacks: A Red-Team Approach. Santa Monica, CA: RAND Corporation.
Navabi, Shiva, and Osonde A. Osoba. 2021. “A Generative Machine Learning Approach to Policy
Optimization in Pursuit-Evasion Games.” Paper presented at the 60th IEEE Conference on
Decision and Control (CDC), Austin, Texas, United States. December 2021.
Osoba, Osonde, George Nacouzi, Jeff Hagen, Jonathan Tran, Li Ang Zhang, Marissa Herron,
Christopher M Lynch, Mel Eisman, and Charlie Barton. 2023. The Resilience Assessment
Framework: Assessing Commercial Contributions to US Space Force Mission Resilience. Santa
Monica, CA: RAND Corporation.
Rehberger, Johann. 2020. Cybersecurity Attacks–Red Team Strategies: A Practical Guide to Building
a Penetration Testing Program Having Homefield Advantage. Birmingham, UK: Packt
Publishing Ltd.
Ryan, Mick. 2023. “Meshed Civil-Military Sensor Systems: Opportunities and Challenges of AI-
Enabled Battlespace Transparency.” Paper presented at the Anticipating the Future of War:
AI, Automated Systems and Resort-to-Force Decision Making Workshop. Canberra,
Australia, 28–29 June.
Schiermeier, Quirin. 2015. “The Science Behind the Volkswagen Emissions Scandal.” Nature 9: 24.
Sienknecht, Mitja. 2024. “Proxy Responsibility: Addressing Responsibility Gaps in Human-
Machine Decision Making on the Resort to Force.” Anticipating the Future of War: AI,
Automated Systems, and Resort-to-Force Decision Making, Special Issue of Australian Journal
of International Affairs 78 (2), (this issue).
Simon, Herbert A. 1996. The Sciences of the Artificial. Cambridge, MA, United States: MIT Press.
Song, Dawn, Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian
Tramer, Atul Prakash, and Tadayoshi Kohno. 2018. “Physical Adversarial Examples for
Object Detectors.” Paper Presented at the 12th USENIX Workshop on Offensive
Technologies (WOOT 18), Baltimore, MD, United States, August 2018.
Sutton, Richard S., and Andrew G. Barto. 2018. Reinforcement Learning: An Introduction.
Cambridge, MA: MIT Press.
Vold, Karina. 2024. “Human-AI Cognitive Teaming: Using AI to Support State-Level Decision
Making on the Use of Force.” Anticipating the Future of War: AI, Automated Systems, and
Resort-to-Force Decision Making, Special Issue of Australian Journal of International Affairs
78 (2), (this issue).
Wei, Jason, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, and
Denny Zhou. 2022. “Chain-of-thought Prompting Elicits Reasoning in Large Language
Models.” Advances in Neural Information Processing Systems 35: 24824–24837.
Winner, Langdon. 1980. “Do Artifacts Have Politics?” Daedalus 109 (1): 121–136.
Yang, Jiachen, Chenguang Wang, Bin Jiang, Houbing Song, and Qinggang Meng. 2020. “Visual
Perception Enabled Industry Intelligence: State of the Art, Challenges and Prospects.” IEEE
Transactions on Industrial Informatics 17 (3): 2204–2219.
