
Computer Law & Security Review 54 (2024) 106012

Computer Law & Security Review: The International Journal of Technology Law and Practice
journal homepage: www.elsevier.com/locate/clsr

AI liability in Europe: How does it complement risk regulation and deal with the problem of human oversight?

Beatriz Botero Arcila
Assistant Professor of Law, Sciences Po Law School; Faculty Associate, Berkman Klein Center for Internet and Society, Harvard University, 13 Rue de l'Université, Paris, 75016, France

Keywords: AI liability; AI Act; Human in the loop; Fundamental rights; AI risks; AI harms; AI bias

Abstract

Who should compensate you if you get hit by a car in "autopilot" mode: the safety driver or the car manufacturer? What about if you find out you were unfairly discriminated against by an AI decision-making tool that was being supervised by an HR professional? Should the developer compensate you, the company that procured the software, or the (employer of the) HR professional who was "supervising" the system's output?

These questions do not have easy answers. In the European Union and elsewhere around the world, AI governance is turning towards risk regulation. Risk regulation alone is, however, rarely optimal. The situations above all involve liability for harms that are caused by or with an AI system. While risk regulations like the AI Act regulate some aspects of these human and machine interactions, they offer those impacted by AI systems no rights and few avenues to seek redress. From a corrective justice perspective, risk regulation must also be complemented by liability law because, when harms do occur, harmed individuals should be compensated. From a risk-prevention perspective, risk regulation may still fall short of creating optimal incentives for all parties to take precautions.

Because risk regulation is not enough, scholars and regulators around the world have highlighted that AI regulations should be complemented by liability rules to address AI harms when they occur. Using a law and economics framework, this Article examines how the recently proposed AI liability regime in the EU – a revision of the Product Liability Directive and an AI Liability Directive – effectively complements the AI Act and how it addresses the particularities of AI-human interactions.

1. Introduction

In the European Union and elsewhere around the world, AI governance is turning towards risk regulation.1 Risk regulation is a particular approach for controlling activities that create risks of harm, which relies on instruments such as standards, prohibitions, and risk and impact assessments to regulate behavior ex-ante; that is, before or at least independently of whether the potential harm actually occurs.2 In the EU, the recently approved AI Act creates a hierarchy of different levels of riskiness for AI systems, and requires the providers of high-risk AI systems to produce documentation on the functioning of the systems they deploy, comply with certain safety requirements, and participate in the creation of substantive optional safety standards.3

Risk regulation is a regulatory mechanism often employed when the harms potentially caused by the activities at issue are hard to disincentivize via the other main instruments to control harmful activities, such as the market or liability law.4 Even though regulation is expensive (both in terms of compliance and enforcement), economic theory justifies it when market failures allow an actor conducting a dangerous activity (such as developing and deploying high-risk AI models) to avoid taking the precautions needed to not unduly expose society to harm.

E-mail address: [email protected].

1 See Margot Kaminski, "The Developing Law of AI Regulation: A Turn to Risk Regulation" (Lawfare, April 21, 2022) <https://www.lawfaremedia.org/article/the-developing-law-of-ai-regulation-a-turn-to-risk-regulation>.
2 See e.g. Steven Shavell, "Liability for Harm versus Regulation of Safety" (2004) NBER Working Paper No. 21218.
3 European Parliament, Artificial Intelligence Act Corrigendum (19 April 2024) (AI Act).
4 See Margot E. Kaminski, "Regulating the Risks of AI" (2023) 103 Boston University Law Review 18.

https://doi.org/10.1016/j.clsr.2024.106012
Available online 29 June 2024
0267-3649/© 2024 The Author. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

These market failures may be incomplete information by victims, consumer misperceptions about the product, or externalities that allow risk-takers to conduct their activities at a cost that is lower than their societal cost.5 Risk regulation alone is, however, rarely optimal. The tools of risk regulation alone do not offer those impacted by AI systems – whether in their fundamental rights or other legally protected interests – any rights, and provide few avenues to seek redress.6 From a corrective justice perspective, risk regulation must also be complemented by liability law because, when harms do occur, harmed individuals should be compensated.7 Consequently, scholars and regulators around the world have highlighted that AI regulations should be complemented by liability rules to address AI harms when they occur.8

The main question addressed in this Article is thus what the liability rules that complement AI risk regulation should look like. To address it, it studies the EU's 2022 AI liability proposals, the AI Liability Directive (AILD) and a revision of the Product Liability Directive (PLD), which seek to complement the Artificial Intelligence Act's (AI Act) risk and safety regulation. These proposals are an important complement to the AI Act's risk and safety approach. Indeed, relying solely on risk regulation has distributive consequences, including the possibility that individual harms and costs will be dismissed if a particular measure makes sense collectively, which may especially harm minorities.9 It may also lead to situations where, because regulators are fallible, regulations set suboptimal standards and organizations won't have enough incentives to take optimal care.10 Similarly, one of the main arguments raised when the AI Act was first published was that it included neither individual rights nor rights of action for affected persons, even though its stated goal is to protect fundamental rights in Europe.11 In this context, liability law becomes an important vehicle to ensure that the vast and fast adoption of AI systems in all facets of life and society is done in a way that guarantees the protection of people's rights and interests, but also to provide legal certainty for AI developers and deployers.

Using a law and economics framework, this Article evaluates how the proposed AI liability regime complements the AI Act in reducing socially wasteful AI accidents by incentivizing precautionary measures, and in offering victims of harm improved avenues to seek compensation. It does so especially considering the complexity of AI and the involvement of humans in AI accidents. It finds, in a nutshell, that the AILD and the PLD, in their current forms, fall somewhat short of their ambition to effectively complement the AI Act, especially because they very strongly rely on the tiered framework developed by the AI Act. Indeed, both the AILD and PLD tend to focus on making it easier for plaintiffs in accidents involving high-risk systems (as defined by the AI Act) to access relevant evidence, or on creating rebuttable presumptions that should make their burden easier. Additionally, the AILD does not apply to accidents where a human is involved in supervising the AI system. The paradox, however, is that by doing so the AILD and PLD fail to effectively complement the AI Act in the cases where it may be most useful: for systems where little or no other regulatory requirements are in place, or in some of the cases involving high-risk systems which must, according to the AI Act, be designed to be effectively supervised by a human.

The second main finding of this Article is that central issues for future liability cases, such as whether a human supervisor was "effectively empowered" to supervise an AI system and/or was exercising due care, will importantly depend on the standards that are yet to be developed, following the approval of the AI Act.

The ambition of this article is for these conclusions to contribute to the debate on AI liability in Europe, as well as to the broader discussion on the complementarity of risk regulation and liability law in AI governance across different jurisdictions.

This Article proceeds as follows: Part 2 is the background section. It surveys the literature on the challenges of regulating AI, the policy conversation in Europe, and the law and economics framework on the institutional choices to control the risk of harm and the complementarity of regulation and liability to address the risks of AI. Part 3 presents the two proposed AI liability directives as they relate to the framework set in place by the AI Act. Part 4 analyzes the complementarity of the AI liability directives with the AI Act, paying special attention to how, together, they facilitate victims' access to corrective justice, incentivize precautionary measures, and reduce socially wasteful AI accidents, especially considering the complexity of AI and the involvement of multiple actors in AI accidents. Part 5 finishes by offering some suggestions for reform and the wider AI governance conversation.

2. Background: AI and the institutional choices to control risks of harm

This first Part outlines the now well-known specific risks posed by AI to legally protected interests and why these features complicate AI accountability when harm occurs. It then presents the theory drawing from law and economic analysis to assess the desirability of risk regulation and liability and their complementarity. Lastly, it lays out a framework to assess the complementarity of these two regimes, which will be later applied to the EU framework.

2.1. Controlling AI harms and risks: the technical and organizational challenges

There is a vast literature on the benefits and risks of AI systems.12 It is well recognized that AI systems can enhance efficiency and productivity, and enable more accurate data analysis, aiding in better decision-making in a variety of fields.13 At the same time, it is also well documented that AI systems pose several risks and can cause a variety of harms: AI systems like automated vehicles or appliances can pose safety risks to life, bodily integrity, or property; AI-powered decision-making software poses risks to fundamental rights, privacy, human dignity, and equality; and AI systems also pose epistemic risks.

5 Eric Marsden, "Risk regulation, liability and insurance: Literature review of their influence on safety management" [2014] Les Cahiers de la sécurité industrielle, FonCSI no. 2014-08.
6 See EDRi et al., "An EU Artificial Intelligence Act for Fundamental Rights: A Civil Society Statement" 30 November 2021 <https://edri.org/wp-content/uploads/2021/12/Political-statement-on-AI-Act.pdf> accessed 30 October 2023; Lilian Edwards, "Regulating AI in Europe: four problems and four solutions" (2022) Ada Lovelace Institute; Marco Almada and Nicolas Petit, "The EU AI Act: Between Product Safety and Fundamental Rights" (December 20, 2022) <https://ssrn.com/abstract=4308072> accessed 30 October 2023; Kaminski (n4).
7 Marsden (n5) 20; see also Nicomachean Ethics, Book V.
8 See European Commission, White Paper on Artificial Intelligence: A European approach to excellence and trust, Brussels, Feb. 19, 2020, COM(2020) 65 final (2020) (White Paper on AI).
9 Kaminski (n4), 8.
10 Shavell (n2).
11 EDRi and others (n6).
12 AI is used in this piece following the definition adopted by the European Commission: "a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate output such as predictions, recommendations, or decisions influencing physical or virtual environments." Lucas Bertuzzi, "EU lawmakers set to settle on OECD definition for AI" (Euractiv, Mar. 7 2023) <https://www.euractiv.com/section/artificial-intelligence/news/eu-lawmakers-set-to-settle-on-oecd-definition-for-artificial-intelligence/> accessed 30 October 2023.
13 White Paper on AI (n8).


They may, for example, slowly change how we conceptualize the world as organizations increasingly rely on profiling or sorting algorithms to make decisions.14

What is particular about AI from a liability perspective, however, is that when harms occur AI systems' characteristics make them hard to scrutinize. AI systems have characteristics that complicate understanding, and often fully predicting, their behavior. Machine learning (ML) algorithms, for example, power many of the AI-powered tools consumers are often in contact with, such as assisted driving, healthcare, and home appliances like Amazon's Alexa,15 and are used to make classifications and predictions, and to decide what the best action may be in a particular situation.16 ML algorithms work with high-dimension data to determine what features are relevant to that decision. The number of features can run into the tens of thousands which, even if the algorithm is replicating work done by humans, involves a qualitatively different decision-making logic from that of humans.17 Trained machine learning algorithms define decision-making rules to handle new inputs that need not be understood by a human operator.18 This makes AI opaque, in the sense that recipients of the output of an algorithm rarely have a concrete sense of how the output was arrived at from the inputs – or what those inputs were.19 AI systems are also complex, in the sense that their behavior arises in a nonlinear, often unpredictable way from that of their parts,20 and sometimes autonomous, which comes from their mathematical optimization in high-dimensionality processes. This is also what allows their self-learning capacity.21

Importantly, the organizational structures in which AI systems are embedded and deployed accentuate these challenges. AI opaqueness is not only a feature of AI systems' mathematical complexity, but it can also be a function of proprietary protections of corporate or state secrecy, or of generalized technical illiteracy.22 Similarly, what is commonly referred to as an AI socio-technical system involves a variety of actors and elements that participate in the design of the system throughout its life cycle, program it, decide when and for what it will be adopted, and supervise it. The involvement of multiple individuals or actors in the development, deployment, and operation of AI systems is referred to as the problem of many hands, and it complicates assigning responsibility and accountability for AI outcomes.23 This is especially the case if the AI provider is not the same actor as the person at issue or their employer.

This latter issue, the problem of many hands, deserves some additional discussion for its relevance for liability.

2.2. Controlling AI harms and risks: the problem of many hands

The challenges brought about by AI systems' technical and organizational characteristics are aggravated as humans interact with AI systems. Early conversations about the regulation and liability of AI focused on the "substitution effect": what the law should do when an AI system replaces a human actor such as a driver, a decision-maker, or a medical doctor.24 The development and best practices around these tools today, however, reveal that AI development seems to be oriented towards situations where, often, humans and AI systems collaborate.25 In addition, regulations increasingly mandate that a human be involved in different forms of AI decision-making processes; the so-called human-in-the-loop.26

The key assumption of these human-in-the-loop mandates is that humans and machines can complement each other well: Algorithms are fast, and they can make decisions based on far more information and factors than humans, consistently, and at scale.27 Algorithms are, however, bad at ethics or following norms, do not justify their decisions, and are especially dependent on their training data and the data fed into models, which makes them prone to reproduce the biases in them. They are thus bad at edge cases.28 Humans, on the contrary, are flexible decision-makers. We can exercise discretion, generalize and jump across contexts, even if our actual decision-making processes are also opaque.29 Hybrid systems thus promise to bring the best of both worlds by allocating tasks to either an individual or a machine, based on lists of what each is supposed to be better at.30 To do so, the most popular methods construe humans and machines based on their capabilities and, on that basis, determine which capabilities can and should be automated and which ones shouldn't.31 Thus, for example, Article 22 of the GDPR introduced a data protection right to "not be subject to a decision based solely on automated processing (…) which produces legal effects."32

14 See Juan Ortiz Freuler, "Dataification, Identity, and the Reorganization of the Category Individual" (2023) 65 TLR.
15 Bernard Marr, Machine Learning in Practice: How Does Amazon's Alexa Really Work?, Bernard Marr & Co. (n.a.) available at: https://bernardmarr.com/machine-learning-in-practice-how-does-amazons-alexa-really-work/; How Machine Learning is Used in Autonomous Vehicles, Rinf.Tech (n.a.) available at: https://www.rinf.tech/how-machine-learning-is-used-in-autonomous-vehicles/#:~:text=An%20autonomous%20vehicle%20can%20use,the%20world%20around%20a%20car.
16 ML is broadly defined as a methodology and set of data-driven techniques to come up with novel patterns and knowledge and to generate models that can be used for effective predictions about the data; see Brent D. Mittelstadt and others, "The ethics of algorithms: Mapping the debate" (2016) 3 BD&S 2.
17 See Mittelstadt (n16) 3.
18 Mittelstadt (n16) 3.
19 Jenna Burrell, "How the machine 'thinks': Understanding opacity in machine learning algorithms" (2016) BD&S 3(1).
20 See Donella H. Meadows, Thinking in Systems: A Primer (Chelsea Green Publishing, 2008).
21 See Commission Report on safety and liability implications of AI, the Internet of Things and Robotics, 16 (Commission Report on safety and liability implications of AI) <https://commission.europa.eu/publications/commission-report-safety-and-liability-implications-ai-internet-things-and-robotics-0_en> accessed 26 August 2023.
22 Burrell (n19).
23 Expert Group on Liability and New Technologies, New Technologies Formation, Liability for Artificial Intelligence and Other Emerging Digital Technologies (2019) (Expert Group on Liability and New Technologies), 33; Helen Nissenbaum, "Accountability in a computerized society," Science and Engineering Ethics, 2(1) 29.
24 Kaminski (n1).
25 See H. James Wilson and Paul R. Daugherty, Collaborative Intelligence: Humans and AI Are Joining Forces (Harvard Business Review, July-August 2018) <https://hbr.org/2018/07/collaborative-intelligence-humans-and-ai-are-joining-forces> accessed April 29 2024.
26 In 2020 Jessica Fjeld et al. found that out of 36 AI ethics documents, 70% included a principle proposing that important decisions made by AI systems be subject to human review; see Jessica Fjeld et al., Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI (2020), https://papers.ssrn.com/abstract=3518482 (last visited Aug 27, 2023). Similarly, in 2021 Ben Green identified at least 41 policy documents from around the world that included some form of human oversight requirement for algorithms in the public sector, including the AI Act. Ben Green, "The Flaws of Policies Requiring Human Oversight of Government Algorithms" (2022) CLSR 45.
27 See Rebecca Crootof, Margot E. Kaminski & W. Nicholson Price II, "Humans in the Loop" (2023) 76 VLR 429, 464. Recent research is defining new types of interactions between humans and machine learning algorithms at the learning process; see Eduardo Mosqueira-Rey and others, "Human-in-the-Loop Machine Learning: A State of the Art" (2022) 56 AIR 3005.
28 Crootof and others (n27) 465.
29 Crootof and others (n27) 462.
30 Sidney Decker & David Woods, "MABA-MABA or Abracadabra? Progress on Human–Automation Co-Ordination" (2002) CTW 4, 240.
31 Joost de Winter and Dimitra Dodou, "Why the Fitts list has persisted throughout the history of function allocation" (2014) CTW 16; Decker & Woods (n30) 104.
32 Art. 22 GDPR.


This approach effectively precludes or restricts decision-making that is fully automated, so that algorithmic predictions act more as an aid than as a substitute for human decision-making.33 Echoing this approach, the EU's Artificial Intelligence Act imposes an obligation on developers and deployers of high-risk systems to design and develop them so that they can be effectively overseen by a natural person.34 Relying on human supervision of AI, the proposed AI Liability Directive, which will be further discussed in Part 3, explains that "[t]here is no need to cover liability claims when the damage is caused by a human assessment followed by a human act or omission, while the AI system only provided information or advice which was taken into account by the relevant human actor."35

The optimism about leveraging human-machine interaction is problematic, however, and tempered by evidence that human-machine systems have dynamics of their own and that it is difficult to design effective hybrid systems that require collaboration between humans and automated technologies.36 This occurs for two main reasons: First, the assumption that people and computers have fixed strengths and weaknesses that can be easily capitalized on or used to compensate for the other party's weaknesses is not accurate.37 Hybrid systems create new human strengths and weaknesses, and it is a priori not obvious how to capitalize on different strengths.38 For example, when automation can perform complex and repetitive tasks for an extended period, it increases the difficulty for humans to remain attentive and vigilant to the system. This can lead to a potential problem known as "vigilance decrement."39

Second, two competing tendencies have been observed in humans interacting with machines: automation bias and algorithmic aversion.40 Algorithmic aversion refers to the phenomenon of individuals wanting to override machine predictions even when these are highly reliable. Some of this originates from a perceived lack of agency, lack of transparency, and lack of trust in how accurate the system is.41 Studies have thus shown that users will sometimes prefer to sacrifice accuracy for control over the algorithm's output.42 Automation bias, on the other hand, refers to individuals' tendency to defer to automated systems even when they are wrong.43 This can lead to a situation where the human does not detect problematic cases or fails to act even if they do; a famous example is pilots who tend to rely blindly on automated cues and don't remain vigilant.44 Studies have shown that time pressure, complex tasks, and the degree of users' self-confidence in their decisions tend to contribute to automation bias.45 In other instances machines alone may be better at certain tasks.

A vast field of research has sought to identify ways to overcome some of the challenges of effective human-machine collaboration.46 Much of this work has highlighted that since allocating a particular function to machines also creates new functions for humans, these must be accounted for in training and interaction. With other technologies these have included a transition to typing, or interacting with a screen and searching for the right display.47 In the AI context, a move towards supporting better conditions for human-AI interaction requires making the operations of automated systems observable to humans and making it easy and efficient for human operators to direct the system, especially in novel episodes.48 This should also be done paying special attention to the different levels of expertise, experience and training of the individuals interacting with these systems.49 Maria De-Arteaga and co-authors identify, for example, that supervisors of an algorithmic system in child welfare in the US were able to correct for a glitch in the system because they had access to the underlying administrative data. This provided them with an alternative view of the case from what was being shown in the risk score calculation.50 Other researchers have explored the idea of offering explanations for algorithmic decision-making51 and incorporating forms of accountability to incentivize the reduction of automation bias.52

From the governance perspective, Crootof et al. have drawn from the experience of successful regulation of human-machine systems in safety-critical settings to emphasize that hybrid human-AI systems require detailed rules for system designers and operators.53 Regulation should require that product designers create technological systems around the people operating the system, that the devices are designed and labelled sufficiently for effective use, and that training and organizational policies are addressed.54 Talia Gillis and her co-authors have relatedly highlighted the importance of taking into account the kind of interaction that is expected from humans and machines when designing these systems, but also that oversight requirements be built so that they appropriately consider the combined and expected impact of the machine and human interaction and how it is implemented.55 Indeed, substantive oversight requirements such as transparency or scrutiny of the data with which algorithms are trained seem to assume that the outcome that should be scrutinized and monitored is the algorithmic component of the decision in isolation.

33 Talia Gillis, Regulating for "humans-in-the-loop" (ECGI blog, 2022) <https://www.ecgi.global/publications/blog/regulating-for-humans-in-the-loop> accessed April 29, 2024; see also Tal Zarsky, "Incompatible: The GDPR in the Age of Big Data" (2017) 47 Seton Hall Law Review 4(2), arguing in the early days of the GDPR that the requirements of article 22 could be sidestepped by inserting human intervention into the process.
34 AI Act (n3) Article 14; see below Part 2.
35 European Commission, Explanatory Memorandum, Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), COM/2022/496 final, 1 (Proposal AILD) Recital 15.
36 Green (n26) citing Lisanne Bainbridge, "Ironies of automation" (1983) Automatica 19(6).
37 Decker & Woods (n30).
38 Decker & Woods (n30).
39 Decker & Woods (n30).
40 Maria De-Arteaga, Riccardo Fogliato & Alexandra Chouldechova, "A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores" (2020) Proceedings of the CHI Conference on Human Factors in Computing Systems 1.
41 Kun Yu et al., Trust and Reliance Based on System Accuracy: 24th International Conference on User Modeling, Adaptation, and Personalization (2016) Proceedings UMAP 2016, 223.
42 Kun Yu et al. (n41).
43 Green (n26).
44 De-Arteaga and others (n40) citing Kathleen L Mosier and others, "Automation Bias: Decision Making and Performance in High-Tech Cockpits" (1997) 8 IJAP 47.
45 De-Arteaga and others (n40).
46 Much of it developed in the past 30 years for aviation and surface transportation settings; see Decker & Woods (n30).
47 Decker & Woods (n30); Dale Richards and others, "Designing for Human-Machine Teams: A Methodological Enquiry" (2022) IEEE 3rd International Conference on Human-Machine Systems (ICHMS).
48 Decker & Woods (n30).
49 Crootof and others (n27) 498.
50 De-Arteaga and others (n40).
51 De-Arteaga and others (n40) 4.
52 Linda J. Skitka and others, "Automation Bias and Errors: Are Crews Better than Individuals?" (2000) 10 IJAP 85.
53 Crootof and others (n27) 494-496.
54 Crootof and others (n27) 466; Green (n26) 14, emphasizing the importance of strengthening institutional oversight of algorithms, requiring justifications as to why it is appropriate to incorporate an algorithm into decision-making and to provide evidence that the algorithmic system can be effectively overseen.
55 Gillis (n33).


However, the true impact of AI systems is also the result of the human decision-making that accompanies it.56

In the last Part of this Article, I propose a related system-wide approach to AI liability. Before discussing the AI liability directives specifically, however, the next section taps into law and economics scholarship and presents the EU approach to AI governance and AI liability in order to propose a simplified framework of analysis for AI liability regimes.

2.3. The choice for regulation and liability for AI

Societies use two main institutional mechanisms to control the risks generated by new technologies like AI: liability law and regulation. Liability law intervenes ex-post (only when harm occurs).57 The objective is both to provide corrective justice and to provide the right incentives to avoid harm.58 Safety and risk regulation intervene ex-ante. Under these regimes, the government determines the optimal level of care for risk creators and seeks to modify behavior before, and independently of, the actual harm. It does so by prescribing, for example, specific technological or organizational requirements, or that certain outcomes or processes be met.59 The EU AI strategy uses both.60

Law and economics scholars have long studied when societies should resort to civil liability – whether strict liability or fault-based liability –, to regulation, or to both to control risks and harms. Steven Shavell identifies five main relevant factors to evaluate the desirability of any of these methods for controlling harm: the quality of the state's information,61 the information available to victims,62 the level of activity of the injurer,63 the role of victims in diminishing harm,64 and the administrative costs associated with enforcing liability or regulation.65

Most of the activities of everyday life are successfully regulated with civil liability: many harms are easy to identify, and parties have the means and knowledge to mitigate harm at optimal levels (for example, what to do so that a tree on my property does not fall and affect my neighbor's property). Such risks would be very difficult to address via regulation, as it would require frequent, intrusive, and expensive verification procedures.66 Not by coincidence, in most domestic regimes the general rule for liability attribution is fault-based, which requires that the injurer's objectionable and avoidable conduct – fault – caused the damage.67

As scholars and policymakers have noted in the EU and elsewhere, this isn't necessarily the case with AI systems.68 AI systems' complexity, from a technical and organizational perspective – such as when humans and different organizations intervene in a particular outcome – complicates proving the key elements of fault-based liability. In the EU, the Expert Group on AI Liability, convened by the Commission, identified in its influential 2019 report ("the Report") that regulation, such as product safety regulation, offers some safeguards to minimize the risks of harm when new technologies are rolled out in the market. It highlights that those regulations must be complemented with liability laws, some of which must be adapted, as regulations do not (and cannot) completely exclude the possibility that harm may occur.69

What follows briefly highlights the challenge for liability law in the EU, drawing mainly from the Expert Group's Report, and presents the case for, and a way to analyze, a regime that taps into the complementarity of regulation and AI liability.

a. The challenge for liability law

The characteristics of AI systems and their applications complicate the process of accessing compensation for victims of harm in all the cases where it seems justified. Additionally, the allocation of liability can be unfair or inefficient.70 This occurs, for somewhat different reasons, in both liability regimes in the EU: fault-based liability, and strict liability (which includes product liability).

In a fault-based liability regime, a victim of AI harm will face important obstacles establishing the three elements of fault-based liability: a harm, a wrongful action or omission by another person (fault), and causation. This can occur because (1) harm from certain types of actions may not be immediately obvious. This may be the case of AI bias in loan or subsidy applications, where victims and other observers face difficulties knowing that a decision made by or with an AI system can be biased and can illegally discriminate against them.71

56 Gillis (n33).
57 European Group on Tort Law, Principles of European Tort Law, Art. 1:101 <http://egtl.org/docs/PETL.pdf> accessed 30 October 2023 (PETL).
58 See Miriam Buiten, Alexandre de Streel and Martin Peitz, "The law and economics of AI liability" (2023) 48 CLSR 4. Some authors also highlight that, since liability law creates incentives to take precautions ex-ante, it is also a form of risk regulation. Catherine M. Sharkey, presentation at "Free Expression and the DSA: Private-Public Workshop," Paris, France, Sciences Po Law School, June 10 and 11, 2024.
59 Marsden (n5) 1.
60 See infra Part 4.
61 Steven Shavell, "Liability for Accidents" in Handbook of Law and Economics Vol. 1 (eds. Mitchell Polinsky and Steven Shavell, 2007) 176 <http://www.law.harvard.edu/faculty/shavell/pdf/07-Shavell-Liability%20for%20Accidents-Hdbk%20of%20LE.pdf> accessed May 1, 2024.
62 Shavell (n61) 176.
63 Shavell (n61) 177.
64 Shavell (n61) 177.
65 Shavell (n61) 177.
66 Steven Shavell, "A model of the optimal use of liability and safety regulation" (1984) 15(2) Rand Journal of Economics 368.
67 PETL (n57), Art. 1:101(2)(a). Shavell explains that fault-based liability is optimal when (1) potential injurers have enough information to know how to take care and the state has less information, so that it may err at determining what are the optimal actions to prevent harm; (2) victims have a role to play in mitigating harm (by taking care as well); and (3) the administrative costs of verification and of proving the elements of liability, harm included, are not excessive; see Shavell (n61) 176.
68 In 2017, for example, the Parliament adopted a Resolution urging the Commission to propose legislation on civil law rules for robotics and AI liability. In 2018, the Commission published a Staff Working Document on liability for emerging digital technologies which accompanied the Commission's Communication on Artificial Intelligence for Europe. See Resolution on Civil Law Rules on Robotics, Eur. Parl. Doc. 2015/2103(INL) (2017), http://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html. See also Laura Coppini, Robotica e intelligenza artificiale: questioni di responsabilità civile, 4 Politica Del Diritto 713 (2018); Commission Staff Working Document, Liability for emerging digital technologies, accompanying the document Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions, Artificial intelligence for Europe, SWD/2018/137 final (2018).
69 See Commission Report on safety and liability implications of AI (n21).
70 Expert Group on Liability and New Technologies (n23), 1.
71 In a well-documented scandal in the Netherlands, an algorithmic decision-making system used by the tax authorities falsely accused tens of thousands of parents and caregivers. Yet only in 2019 did it become apparent that the system was biased, while the system had been in place since 2013, even if victims maybe had a sense that something wrong was going on. Melissa Heikkilä, "Dutch scandal serves as a warning for Europe over risks of using algorithms" (Politico.eu, March 29, 2022) https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/?tpcc=nleyeonai; see also Marta Ziosi et al., "The EU Liability Directive (AILD): Bridging Information Gaps" (2024) European Journal of Law and Technology <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4470725>.
sively costly see Shavell (n61) 176.


This "information gap", as Marta Ziosi and co-authors call it, can be aggravated by the organizational opacity often surrounding AI systems.72 (2) Proving fault is equally complicated given AI's opacity and complexity. It is hard to identify who was at fault, and a lack of behavioral standards complicates establishing what is the standard of care that different parties must follow.73 This would require showing, for example, how others in the industry or field would have acted in similar circumstances and proving that the defendant's actions fell short of this expected standard, something that is hard to do given, in general, the opacity of the AI industry.74 In the case of human-AI hybrid systems, the lack of clarity about how a particular system is supposed to improve human decision-making, and vice versa, creates additional difficulties in establishing to what extent the human in the loop, or the victim, contributed to the harm.75 (3) Lastly, and for similar reasons, proving the cause-and-effect relationship between the defendant's actions or omissions and the resulting harm can be significantly hard. Given AI's technical and organizational opacity, identifying how a bug in intricate software code or the process behind an AI system's decision-making leads to a specific outcome, or gathering relevant evidence, is more difficult, time-consuming, and expensive.76

Similarly, given current product liability law, the victims of AI harm will also face important challenges in succeeding at liability claims. Product liability law is usually understood as a form of strict liability, based on the principle that "the producer" of a product is liable for damages to life, health and property caused by a defect in a product they have put into the market as part of their business, regardless of whether the defect is their fault.77 Some scholars have highlighted that the definition of a defect as something that could have been known at the time of placing a product in the market makes it more similar to fault-based liability.77 In any case, European authorities and the Expert Group for AI Liability have identified that the Product Liability Directive of 1986 (PLD) regime is not fit to meet the risks of emergent technologies like AI. This occurs because these systems challenge the notions of a "product" and a "defect."78 The PLD (1) only covers tangible products, which includes software and AI integrated into tangible products, but not standalone software products.79 (2) Defectiveness is determined based on the safety expectations of the average consumer, but only so long as the defects could have been known at the time the product was placed on the market. (3) The PLD focuses on the moment when a product was put into circulation as the moment that defines the producer's liability; this cuts off claims over subsequent additions – by the producer or someone else – such as updates or upgrades to a system. It also does not account for software updates, which are often meant to make products safer but which users may not install, or for the fact that AI systems are supposed to continue learning once they are placed on the market; nor does it provide duties to monitor products after circulation.80

From a policy and law and economics perspective, an alternative would be to extend a "stricter" version of strict liability to AI producers, regardless of who is in control and regardless of whether the defect was known. Indeed, particularly from the 19th century onwards, legislators often responded to risks brought about by new technologies – like trains and motor vehicles – by introducing strict liability, a liability regime that does not require the injurer's conduct to have been faulty but merely that their conduct caused harm.81 Professor Christiane Wendehorst has recommended, for example, that a harmonized regime of vicarious liability be adopted so that "a principal that employs AI for a sophisticated task faces the same liability under existing Member State law as a principal that employs a human auxiliary."82 This would address the difficulty victims have in proving fault or defectiveness. Legislators and courts would not need to have information on the optimal level of precaution in designing and deploying AI-based systems.83

Law and economics scholars highlight that, indeed, strict liability creates optimal incentives to reduce socially wasteful accidents: by removing the fault requirement, strict liability creates incentives for care where a potential injurer would find it cheaper, under a fault regime, to eventually pay damages than to prevent the damage.84 Additionally, strict liability is easier to prove (one factor, negligence, does not have to be proven).85 Importantly, strict liability also directly induces changes in the level of the activity at issue, as higher levels of activity increase the likelihood of harm regardless of the level of care. Thus, if a potential injurer believes that they can achieve the same business results, but at lower activity levels, they will do so.86
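The care and activity-level argument can be made concrete with the standard unilateral-accident model from the law and economics literature; the notation below (care level $x$, accident probability $p(x)$, harm $h$, activity level $s$, gross benefit $b(s)$) is an illustrative sketch added here, not drawn from the Article or from the sources it cites:

$$\min_{x \ge 0}\; x + p(x)\,h \;\;\Longrightarrow\;\; x^{*} \quad \text{(socially optimal care)}$$

$$\max_{s,\,x}\; b(s) - s\big(x + p(x)h\big) \;\;\Longrightarrow\;\; (x^{*}, s^{*}) \quad \text{(socially optimal care and activity)}$$

Under strict liability the injurer expects to bear $s\big(x + p(x)h\big)$ and therefore chooses both the optimal care $x^{*}$ and the optimal activity level $s^{*}$. Under a negligence rule, an injurer who meets the due-care standard $\bar{x}$ escapes liability, so each extra unit of activity appears to cost only $\bar{x}$ rather than $\bar{x} + p(\bar{x})h$, and the activity level tends to exceed $s^{*}$; if the standard $\bar{x}$ is set or proven imperfectly, care may also fall short of $x^{*}$.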
As the Expert Group and other scholars have noted, however, strict liability is less useful in cases where AI systems are complex and there is a human in the loop: strict liability for producers would not create enough incentives for AI operators or victims to take optimal precautions.87 Indeed, in instances where harm can also be avoided by encouraging changes in activity by victims or other actors, law and economics scholars don't encourage strict liability either.88

The Expert Group also notes that strict liability may have important impacts on technological advancement. Some individuals or entities may become more hesitant to actively promote technological research if the risk of liability is perceived as a deterrent.89 Activities that are beneficial to society but also risky may be reduced below the optimal level because costs will be internalized while positive externalities will not flow back directly to developers, even when sufficient precautions are in place.90

72 Amnesty International, "Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal" (Amnesty International, 2021) https://www.amnesty.org/en/documents/eur35/4686/2021/en/ accessed October 30, 2023, 12, 15; Ziosi (n71).
73 See Buiten and others (n58) 7; Expert Group on Liability and New Technologies (n23) 20; see also discussion of the AI Act above.
74 See Expert Group on Liability and New Technologies (n13) 26; Buiten and others (n58) argue that, in the case of autonomous AI systems, this is aggravated by the fact that some outputs can't be anticipated. This challenge may be mitigated, however, upon interacting with other risk-mitigating regulations where AI systems are specifically trained to avoid certain harmful outputs.
75 Expert Group on Liability and New Technologies (n23) 31.
76 Expert Group on Liability and New Technologies (n23) 26.
77 Directive (EU) 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, Art. 4 and 7 (Product Liability Directive). See also Richard Posner, "Economic Analysis of Law" p. 165, 3rd Edition (1986) at 166, arguing that "the term strict liability is something of a misnomer here, because in deciding whether a product is defective or unreasonably dangerous in design or manufacture the courts often use a Hand Formula approach, balancing expected accident costs against the costs of making the product safer."
78 Expert Group on Liability and New Technologies (n23) 30.
79 Expert Group on Liability and New Technologies (n23) 19.
80 Expert Group on Liability and New Technologies (n23) 30.
81 Miquel Martín-Casals, Technological Change and the Development of Liability for Fault: A General Introduction, The Development of Liability in Relation to Technological Change (Miquel Martín-Casals et al. eds. 2010).
82 Christiane Wendehorst, Liability for Artificial Intelligence: The Need to Address Both Safety Risks and Fundamental Rights Risks, The Cambridge Handbook of Responsible Artificial Intelligence (Silja Voeneky et al., eds. 2022), 208.
83 Wendehorst (n82).
84 Posner (n77) 160.
85 Posner (n77) 164. Shavell adds that an outcome where victims are not encouraged to take due care, where they could, is also inefficient; see Steven Shavell, "Strict Liability versus Negligence," The Journal of Legal Studies, 7.
86 Posner (n77) 161.
87 See Expert Group on Liability and New Technologies (n23); Buiten and others (n58) at 10.
88 Posner (n77) 162.
89 Expert Group on Liability and New Technologies (n23), 28.
90 Expert Group on Liability and New Technologies (n23), 10.


This could be the case in instances where AI's exceptional performance reduces harm to society compared to not using AI at all – such as AI diagnostic tools that outperform humans in disease detection. These, too, may end up below their optimal level.91 While the use of AI reduces harm when compared to other options, there are also opportunity costs associated with not utilizing AI.92

b. The place of regulation and the joint use of liability law

Regulation is well suited to control harm in instances where there are sufficiently important factors that dilute the incentive to take care under liability. This is the case when the regulatory entity has an information advantage, or where it may be desirable to compel parties to produce information that they do not produce;93 where the potential harm is very large and would exceed companies' capacity to compensate harmed people;94 and where responsible parties perceive that they may not be sued in case of harm.95 This can occur, for example, when harms are dispersed and individual victims may not find it worth it to sue, when harms are hard to identify and/or only become apparent later on, and where it is difficult to trace the harm to particular causes or firms.96 In these cases harmed individuals will find it hard, or not cost-effective, to sue and to show the main requirements of liability.

The choice between these harm-control tools is not, however, exclusive. Rather, regulation and civil liability can complement each other well in some instances. In France, for example, Pierre Bentata found that in the management of hazardous operations, judges and regulators interact in interesting ways and often provide each other with important information: the number of cases increased sharply after regulations were passed, and victims' chance of success seems to increase. Additionally, Bentata observes that most of the plaintiffs are the most heavily regulated facilities and the state-owned companies, and that judges are more severe against the latter. This, he suggests, appears to be a way in which civil liability reduces risks of regulatory capture, and offers an additional level of deterrence that goes beyond the one offered by regulation.97

Relatedly, Shavell also found that liability and regulation can be designed so that they complement each other optimally. He finds that ultimately neither regulation nor liability alone leads all parties to exercise the socially desirable standard of care. This occurs for the reasons already highlighted above: the regulatory authority's information about risk is often imperfect (and so it will sometimes err in setting the right standard), and liability will sometimes not create sufficient incentives to take care (because parties may not be sued, for example).98 Shavell explains that it may thus be advantageous to use both tools so that they have the following effects: regulation sets a baseline for all parties covered by it to take a certain level of care. However, parties causing more than relatively low risks will be led to do more than is required by the regulatory standard, because they will be further deterred by the likelihood of being held liable. Regulators can also reduce the standard of regulation, and thus reduce the cost of compliance, since liability compensates for some of the slack associated with the lower standard but is based on parties' better information.99
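The logic of this joint use can be sketched in the same illustrative notation introduced above (again an added sketch, not the Article's own formalization): with a regulatory floor $x_r$ and a probability $q < 1$ of actually being held liable for the harm $h_i$ that a given party can cause, a regulated party complies with the floor and then adds care only to the extent that residual expected liability makes it worthwhile:

$$x_i \;=\; \arg\min_{x \ge x_r}\; x + q\,p(x)\,h_i$$

Parties whose potential harm $h_i$ is low stop at the regulatory floor $x_r$, while parties creating larger risks are pushed by expected liability to choose $x_i > x_r$. This is why the regulator can afford to set $x_r$ somewhat below the first-best level of care and still rely on liability, which draws on the parties' better information, to pick up the slack.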
c. Towards a mixed use of AI regulation and liability to control AI risks

The question guiding this Article is how policymakers should draft AI liability rules to complement AI regulatory frameworks, considering AI complexity and the plurality of actors and people that participate in AI systems. It does so by focusing on how the AI liability directives proposed by the EU Commission are set to complement the AI Act.

Drawing from the analysis and review conducted in this section, there seem to be two key factors that should be examined when analyzing how the proposed liability rules complement the AI Act: first, whether, in the case of harm, the AI liability framework makes it easy for victims to bring a liability claim against AI producers or deployers; second, whether, given the level of activity chosen by the AI Act, the liability framework is capable of encouraging AI developers and deployers who create more than low risks to take more care.

3. The AILD and PLD in the context of the European AI strategy

The EU is a world leader in AI governance.100 The EU AI strategy, first announced in 2017, seeks to establish a general EU-wide coordinated approach "to make the most of the opportunities offered by AI and to address the new challenges that it brings."101 At the regulatory level it seeks to establish an appropriate ethical and legal framework that would support "an environment of trust and accountability around the development and use of AI."102 Three interrelated legal initiatives seek to create the ecosystem of trust sought by the Commission: the AI Act, approved in 2024, which seeks to address fundamental rights and safety risks; a civil liability framework, which is composed of the directives at issue here, the revision of the PLD and the AILD; and a revision of sectoral safety legislation, such as the Machinery Regulation and the General Product Safety Regulation.103 (This piece does not directly discuss the relevant sectoral safety regulations.)104 At the time of writing, the liability framework for AI systems is under consideration in the EU Parliament.

This Part briefly presents the two proposed AI liability directives as they relate to the framework set in place by the AI Act.

3.1. The AI Act

The cornerstone of European AI regulation is the AI Act. The Act is an umbrella, union-wide framework adopting a risk-based approach to AI regulation to ensure embedded safety and security in products and services.105

91 This may be the case in some instances in medicine and with some autopilots, like airplanes.
92 Buiten and others (n58) at 10, discussing from a law & economics perspective how "the chosen liability regime should therefore be seen in the context of public policy towards innovation."
93 Shavell (n66) at 361.
94 Shavell (n66) at 369.
95 Shavell (n66) at 370.
96 Shavell (n66) 370.
97 Pierre Bentata, "Liability as a Complement to Environmental Regulation: An Empirical Study of The French Legal System" (2014) Environmental Economics and Policy Studies, vol. 16, 722.
98 Shavell (n66) 271.
99 Shavell (n66) 272.
100 See Anu Bradford, Digital Empires: The Global Battle to Regulate Technology (Oxford University Press, 2023).
101 European Commission, Communication from the Commission, Artificial Intelligence for Europe, COM(2018) 237 final (Communication Artificial Intelligence for Europe).
102 Communication Artificial Intelligence for Europe (n101).
103 European Commission, "A European approach to artificial intelligence", Shaping Europe's Digital Future, https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence.
104 The proposed Machinery Regulation and the proposed General Product Safety Regulation, which revise the existing Machinery Directive and General Product Safety Directive, aim, in their respective fields, to address the risks of digitalization in product safety, but not liability. See European Commission, "The General Product Safety Directive" <https://commission.europa.eu/business-economy-euro/product-safety-and-requirements/product-safety/consumer-product-safety_en> accessed October 30, 2023; European Commission, "Machinery" <https://single-market-economy.ec.europa.eu/sectors/mechanical-engineering/machinery_en> accessed October 30, 2023.
105 Commission Report on safety and liability implications of AI (n21) 4.


It aims to promote human-centric and trustworthy AI while safeguarding health, safety, fundamental rights, democracy, the rule of law, and the environment from AI's harmful effects, all while fostering innovation.106 What follows is a concise overview of the main safety requirements introduced by the AI Act, with a focus on human oversight and the importance of standardization and conformity assessments in the AI Act's implementation.

a. Levels of risk and key safety requirements

The AI Act applies to providers and deployers of AI systems in the EU. The Act defines providers as the natural or legal person who develops an AI system with a view to placing it on the market, and deployers as the natural or legal person that uses the AI system.107 It categorizes AI systems into four risk levels based on their intended use and regulates them differently, banning systems that pose certain unacceptable risks and imposing certain requirements on the rest.108 Most of the Act is concerned with the safety requirements for high-risk systems, identified in Annex III. Late in the process of passing the Act, the EU Parliament introduced amendments for providers of general-purpose AI models in response to the emergence of generative AI.109 General-purpose AI models are classified as posing systemic risk when they have high-impact capabilities, based on their computational power and on indicators and benchmarks still to be defined.110 They face similar requirements to high-risk systems.111 Limited-risk systems must comply with minimal transparency requirements to enable informed user interaction.112

Providers of high-risk systems must comply with seven key requirements: (1) Implementing and maintaining a risk management system throughout an AI system's lifecycle.113 (2) Evaluating the availability, quantity and suitability of the data used for training models, identifying biases and gaps that need to be addressed.114 (3) Creating and updating technical documentation of high-risk systems before they are placed on the market.115 (4) Designing AI systems to automatically record operational events to ensure that the AI system's functioning is traceable.116 (5) Designing AI systems so that their operation is "sufficiently transparent to enable deployers to interpret the system's output and use it appropriately."117 AI systems must also be accompanied by instructions for use, and human oversight measures to facilitate the interpretation of AI outputs.118 (6) Designing and developing high-risk AI systems so that they enable effective human oversight while in use.119 (7) AI systems shall be designed and developed to achieve an appropriate level of accuracy, robustness and cybersecurity.120

Distributors, importers, deployers and other third parties will be considered providers when they put their name or trademark on a system on the market, when they make substantial modifications to it, or when they modify its intended purpose.121 Providers can demonstrate conformity with these requirements through self-assessment and internal control. If they conform with harmonized standards, they will be presumed to be compliant with the requirements of the Act and of EU law protecting fundamental rights.122

b. The human-in-the-loop requirement

One of the key objectives of the EU's regulatory framework is to promote the development of AI systems that function "in a way that can be appropriately controlled and overseen by humans."123 Early versions of the AI Act were criticized for their reliance on the human-in-the-loop as a safety requirement.124 The latest version at the time of writing seems to have tried to accommodate some of the research on, and critiques of, human-in-the-loop requirements presented in Part I C. Attempting to accommodate those concerns, the current version of the Article emphasizes the design of "appropriate human-machine interface tools" so that high-risk AI systems can be "effectively overseen by natural persons."125 It also requires that the individuals in charge of oversight have sufficient AI literacy, are appropriately enabled to understand and interpret the system, are aware of the possibility of over-relying on the system, and are able "to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system; [and be able] to intervene on the operation of the high-risk AI system or interrupt the system through a 'stop' button or a similar procedure."126

c. Reception and critique of the AI Act


106
AI Act (n3).
The reception of the AI Act in the EU was mixed.127 European in­
107 stitutions saw it as a major success, positioning the EU’s leadership as
AI Act (n3) Article 3(3). Note that individuals who are subject to AI systems
have no role to play in the AI act. This is according to the latest version of the
Act. In former versions, the Act has referred to deployers, as users.

108 117
European Parliament, “EU AI Act: first regulation on artificial intelligence” AI Act (n3) Article 13.1.
(News-European Parliament, June 14, 2023) <https://www.europarl.europa.
118
eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation AI Act (n3) Article 13.2, Art 13.3.
-on-artificial-intelligence> accessed 25 April 2024.
119
AI Act (n3) Article 14.
109
See Dr. Benedikt Kohn and Lennart Van Neerven, Will Disagreement Over
120
Foundation Models Put the EU AI Act at Risk? (Tech Policy Press, November 29, AI Act (n3) Article 15.
2023), <https://www.techpolicy.press/will-disagreement-over-foundation-
121
models-put-the-eu-ai-act-at-risk/> accessed May 2, 2024. AI Act, (n114) Article 25.

110 122
AI Act (n3) Article 51. AI Act (n3) Article 40.

111 123
European Parliament (n108). AI Act (n3) Recital 27.

112 124
European Parliament (n108). Crootof and others (n27), Green (n26).

113 125
AI Act (n3) Article 9. AI Act (n3) Article 14(1).

114 126
AI Act (n3) Article 10.2. AI Act (n3) Article 14(4).

115 127
AI Act (n3) Article11. Emma Woolacott, “European Union’s AI Act Gets Mixed Reception” (Forbes,
March 19, 2024) < https://www.forbes.com/sites/emmawoollacott/2024/03
116
AI Act (n3) Article 12. /19/eus-ai-act-gets-mixed-reception/?sh=5145a22042c9 > accessed May 2,
2024.


European institutions saw it as a major success, positioning the EU as the global leader in technology regulation.128 Certain think tanks and companies worry that the new rules may overburden innovative AI developers, and highlight that a lot of uncertainty remains about the implementation of the Act.129 Human and digital rights activists argue that the Act did not go far enough to protect individuals from AI harms.130 Indeed, the tiered-risk approach inevitably leaves out certain applications that may be risky. Critics highlight, however, that the final version of the Act seems to have been particularly lenient in especially sensitive realms such as national security (where a blanket exception was adopted), and in allowing uses of facial recognition and other biometric categorization systems by law enforcement and migration authorities, while such uses are prohibited in education and the workplace.131

Another critique is that while the latest version of the Act provides some individual remedies, like lodging complaints or receiving explanations for decisions, it lacks robust rights and redress mechanisms.132 The AI liability regime, however, is supposed to offer those mechanisms of redress.133 The next sections briefly present the two directives, and the last Part evaluates how they complement the AI Act.

3.2. The revised PLD: liability for "material damages caused to natural persons by AI-powered products"

The revision of the Product Liability Directive seeks to adapt the EU's product liability regime to new technologies.134 The Revised PLD aims to ensure that liability rules reflect the nature and risks of the new digitally powered products, easing the burden of proof in complex cases and easing restrictions on making claims "while ensuring a fair balance between the legitimate interests of manufacturers, injured persons and consumers in general."135 Like its predecessor, the proposed directive establishes a form of strict liability of the relevant economic operators "as the sole means of adequately solving the problem of a fair apportionment of the risks inherent in modern technological production."136 Claimants must prove the elements of strict product liability: the defectiveness of the product, the damage suffered, and the causal link between the defectiveness and the damage.137 What follows describes the main changes and how they apply to AI systems:

Chapter 1 lays out the subject matter, the scope of the directive, and key definitions. Importantly, the Revised PLD changes the definition of product and economic operators to extend it to software, AI systems and AI-enabled goods, such as smart-home devices.138 It also expands its application to digital services that are integrated into or inter-connected with a product "in a way that would prevent the product from performing one of its functions,"139 such as navigation software in an autonomous vehicle.140 Additionally, it defines economic operators as the manufacturers of a product or a component, the provider of a service, and the importer or the distributors;141 natural or legal persons that modify a product substantially after it has already been placed on the market will also be considered economic operators.142

Chapter 2 lays out the key rights and obligations of the product liability regime.143 Defectiveness is defined as the circumstances when a product "does not provide the safety which the public at large is entitled to expect." This is to be determined considering the presentation of the product, including instructions for installation and maintenance,144 and the expectations of the end-users for whom the product is intended,145 reasonable use and misuse of the product,146 the safety requirements of the product,147 the moment in time when the product was placed on the market and, importantly, the moment in time when the product leaves the control of the manufacturer.148 The distinction between the moment at which a product is placed on the market and the moment at which it leaves the manufacturer's control seeks to reflect that many products, such as AI systems, remain within the manufacturer's control even after being placed on the market.149

The Revised PLD also establishes a rebuttable presumption of defectiveness to alleviate the claimant's burden of proof.150 This will be the case if the claimant establishes that the product does not comply with mandatory safety requirements or when the damage was caused by an obvious malfunction of the product during normal use and circumstances.151

128 Gian Volpicelli, "European lawmakers rubberstamp EU's AI rulebook" (Politico, March 13, 2024) <https://www.politico.eu/article/european-lawmakers-rubber-stamp-eus-ai-rulebook/> accessed May 2, 2024.
129 Eliza Gkritsi, "The long and winding road to implement the AI Act" (Euractiv, March 14, 2024) <https://www.euractiv.com/section/digital/news/the-long-and-winding-road-to-implement-the-ai-act/> accessed May 2, 2024.
130 Gkritsi (n129).
131 EDRi and coalition partners, "EU's AI Act fails to set gold standard for human rights" (EDRi.org, April 3, 2024) <https://edri.org/our-work/eu-ai-act-fails-to-set-gold-standard-for-human-rights/> accessed May 2, 2024.
132 EDRi and coalition partners (n131).
133 See White Paper on AI (n8) and the discussion on the AILD below.
134 See infra section 3.1(a) for why it needs updating.
135 Expert Group on Liability and New Technologies (n23), 2.
136 European Commission, Proposal for a Directive of the European Parliament and the Council on liability for defective products, COM/2022/495 final, Recital 2 (Proposal New PLD).
137 Proposal New PLD (n136) Article 9.
138 Proposal New PLD (n136) Recital 12, see also Explanatory Memorandum New PLD (n125), 3, Art. 4(1).
139 Proposal New PLD (n136) Article 4(4).
140 Proposal New PLD (n136) Recital 15.
141 Proposal New PLD (n136) Article 4(16).
142 Proposal New PLD (n136) Article 7(4).
143 Proposed New PLD (n136) Article 5.
144 Proposal New PLD (n136) Art 6(a).
145 Proposal New PLD (n136) Art 6(h).
146 Proposal New PLD (n136) Art 6(b).
147 Proposal New PLD (n136) Article 6(f).
148 Proposal New PLD (n136) Art 6(e).
149 Proposal New PLD (n136) Recitals 22, 23.
150 Proposal New PLD (n136) Recital 33.
151 Proposal New PLD (n136) Article 9.

Additionally, the directive establishes that national courts must be empowered to order the defendant to disclose relevant evidence that is at its disposal, upon request of an injured person claiming compensation for damage caused by a defective product, and when the claimant has presented facts and evidence sufficient to support the plausibility of the claim for compensation.152 Defectiveness will also be presumed when the defendant fails to comply with an order to disclose relevant evidence,153 and when it is established that the product is defective and the damage caused is of a kind typically consistent with the defect in question.154

Chapter 3 covers other general provisions on liability. Manufacturers and distributors will not be liable if they can prove that it is probable that the defect that caused the damage did not exist when the product was placed on the market or put into service;155 that the defectiveness is due to compliance of the product with mandatory regulations;156 or that the state of scientific and technical knowledge at the time the product was placed on the market was not such that the defect could be discovered.157 However, economic operators will not be exempted from liability when the defect is due to software updates or upgrades, or a lack thereof.158 Lastly, the proposed directive establishes that economic operators cannot reduce their liability when a third party's actions or omissions contributed to the harm.159 In any case, in the interests of a fair apportionment of risk, when the damage was caused both by the defectiveness of the product and by the fault of the victim, the economic operator's liability may be reduced.160

3.3. The AILD: "Adapting non-contractual civil liability rules to artificial intelligence"

The AILD seeks to adapt, in general, national liability rules to the challenges posed by claims for damages caused by AI-enabled products and services. It does so by laying out rules on the disclosure of evidence, and by establishing a rebuttable presumption of a causal link in the case of fault.161 By doing so, the AILD seeks to address the challenges that victims may face when an AI system participates in the action that led to the harm.162 Interestingly, the directive explicitly does not adopt a stricter standard than fault-based liability (such as a reversal of the burden of proof, or an irrebuttable presumption) because of how costly this could be for developers or deployers.163 It is thus mostly oriented at ensuring that victims of damage caused by AI have an equivalent level of protection under fault-based liability rules as victims of equivalent harms caused without AI systems.164

The directive is rather short: it has only nine Articles, of which Articles 5 to 9 are concerned with the creation of a monitoring programme to provide the European Commission with information on incidents involving AI systems and with the implementation of the Directive in Member States.165 Article 1 establishes its subject matter and Article 2 covers key definitions, mostly referring to the AI Act.166 Articles 3 and 4 contain the key measures: rules for the disclosure of evidence, and conditions to establish a rebuttable presumption of the causal link between fault and harm.

The rules for the disclosure of evidence are in Article 3. In a nutshell, national courts must be empowered to demand from providers, or from those subject to a provider's obligations, the disclosure of relevant evidence about high-risk systems suspected of having caused damage. This disclosure must strictly adhere to what is necessary and proportionate to support the claim.167

Article 4 lays out the requirements for national courts to establish a rebuttable presumption of a causal link between the fault and the output of the AI system. National courts shall presume the causal link where three conditions are met: fault has been established or presumed according to Article 3, it can be considered likely that the fault influenced the output, and the claimant has shown that the output led to the damage.168 The causal link between fault and output will also be presumed when the claimant shows that the provider of a high-risk AI system did not comply with its obligations under the AI Act.169 Similarly, the presumption will be established when it is deployers who do not comply with their obligations to use or monitor the AI system following the accompanying instructions of use,170 or if the claimant proves that the deployer "exposed the AI system to input data under its control which is not relevant given the system's intended purpose."171

For non-high-risk systems, the presumption of causality will apply only if the court determines that it is excessively difficult for the claimant to prove the causal link between damage and fault. This should be assessed given the characteristics of certain AI systems, such as their autonomy or opacity.172

Importantly, Recital 15 states that the AILD need not cover situations "when the damage is caused by a human assessment followed by a human act or omission, while the AI system only provided information or advice which was taken into account by the relevant human actor."173

152 Proposal New PLD (n136) Article 8.
153 Proposal New PLD (n136) Article 9.
154 Proposal New PLD (n136) Article 9.
155 Proposal New PLD (n136) Art 10(c).
156 Proposal New PLD (n136) Article 10(d).
157 Proposal New PLD (n136) Article 10(e).
158 Proposal New PLD (n136) Article 10.2.
159 Proposal New PLD (n136) Article 12.1.
160 Proposal New PLD (n136) Art 12.2. This echoes the principle of the contributory conduct or activity of the victim, see PETL (n2) Article 8:101, Recital 36.
161 Proposal AILD (n35) Article 1(b), Article 4.
162 See infra 3.1(a); Proposal AILD (n35).
163 European Commission, Explanatory Memorandum, Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence, COM/2022/496 final (Explanatory Memorandum AILD).
164 Explanatory Memorandum AILD (n163), 10, see also Proposed AILD (n35) Article 1.
165 Proposal AILD (n35) Article 5.
166 Proposal AILD (n35) Article 2.
167 Proposal AILD (n35) Article 3.1 paragraph 2.
168 Proposal AILD (n35) Article 4.1.
169 Proposal AILD (n35) Article 4.2.
170 Proposal AILD (n35) Article 4.3(a).
171 Proposal AILD (n35) Article 4.3(b).
172 Proposal AILD (n35) Article 4.5, Explanatory Memorandum AILD (n163).
173 Proposal AILD (n35) Recital 15.

This is the case because, supposedly, when the damage is caused by a human assessment, "while the AI system only provides information or advice," it will be possible to trace back the damage to a human act or omission, and therefore establishing causality will not be as hard as when an AI system is involved. As other commentators have noted, this may leave significant amounts of the AILD proposal inapplicable, as the AI Act will require that high-risk systems be designed and developed so that they can be effectively overseen by natural persons (as proposed in the text by the Commission) or so that "they be effectively overseen by natural persons" (as proposed by the Parliament).174

4. Analysis of the EU AI liability regime

So, how does the proposed AI liability regime complement the AI Act in incentivizing precautionary measures and reducing socially wasteful AI accidents, considering the complexity of AI and the involvement of multiple actors?

This Part answers this question by first presenting three hypothetical accidents involving an AI system. Analyzing these three examples and using the framework presented in Part 2, it then concludes that the proposed directives make important progress in addressing the challenges AI systems pose to accountability and enhance the incentives for AI deployers, developers and users to take better care and avoid harm. The analysis shows too, however, that the current proposals fall short on two main counts:

First, the AI Act's tiered risk regulation drifts over into the liability proposals, as it will be mostly in cases involving high-risk systems where victims will have better access to information and where most of the presumptions will apply. This leaves the regime still not very effective at addressing the liability challenges posed by systems that are not high-risk but are still complex and opaque.

Second, the regime's treatment of human-AI hybrid systems is still somewhat simplistic. Since the AILD excludes from its application systems where AI is only advising humans but not effectively deciding, it may also create incentives for AI designers to design systems to "advise" humans, even if more collaborative or even entirely automated systems may be safer and better. This contradicts not only the research about better human-AI design and interaction, but also seems to contradict the final version of the AI Act, which incorporated some of the critiques on the challenges of effective human supervision and emphasizes the importance of effective design, instructions, and a turn towards human-AI collaboration and not, merely, supervision.

A last observation that follows from the analysis is that enforcing the AI liability regime will potentially – and maybe unavoidably – rely on the development of the technical standards mandated by the AI Act. Though liability standards of care – referring to the model of careful and prudent conduct required from the perpetrator of the damage – are in principle different from the standards of quality and safety required by law and established by standard-setting bodies, certain legal and technical standards will play a significant role in determining what is reasonable to expect from the various parties involved.175

4.1. AI and safety when a human is involved: the case of an autopilot

Imagine an accident involving a vehicle with an autopilot feature. This happens in a part of a city where using autopilot is allowed. Assume the AI Act, the PLD and the AILD are in place (as they were presented in the previous section), and that these are the main EU-law institutions that apply; there are no special liability nor product safety rules for automated vehicles.176 The vehicle swerved into a curb, causing an accident which resulted in an injury to the driver. The car manufacturer cautions drivers to keep their hands on the wheel and to "be prepared to take over at any moment."177 In the accident, the driver received a warning to control the vehicle less than a second before the strike, as this was when the software identified it was facing an unknown situation. The manufacturer says the software worked correctly.178

The driver sues the car manufacturer, alleging that the autopilot feature failed to operate safely and caused the accident. In real-life cases like this, juries in the US have found that the autopilot feature had not malfunctioned and that the driver's negligence caused the accident.179 The legal issue in this case would thus be: given the EU's new liability rules, is the manufacturer of an automated vehicle liable for an accident involving the AV, where the driver received a warning to control the vehicle less than a second before the strike, but where the car manufacturer also warns drivers to be ready to take control at any time?

From the victim's perspective, a good result would be that the manufacturer is found to be at fault, or that the software is found to be defective because it passed control to the driver less than one second before the strike. From a societal perspective, if the driver is shown to have been distracted, a better result would be that both the driver and the software share some of the responsibility, so that all parties have incentives to take optimal precautions in the future.180

The EU is expected to draft specific safety rules for AVs, and they will be exempt from the core obligations of the AI Act. Let's assume, for this example, that these will be equivalent to those of the AI Act.181 Because this is a claim about a bodily injury, suffered by a natural person, and caused by a product, let's also assume that this claim falls under the jurisdiction of the Revised PLD.182

174 See Philip Hacker, "The European AI Liability Directives – Critique of a Half-Hearted Approach and Lessons for the Future" Working Paper, at 19 <https://arxiv.org/pdf/2211.13960.pdf> accessed 30 October 2023.
175 See e.g. Bryan H. Choi, "NIST's Software Un-Standards" (2024) The Digital Social Contract: A Lawfare Paper Series <https://www.lawfaremedia.org/article/nist's-software-un-standards> accessed May 2, 2024 (discussing how, in the US, NIST's cyber frameworks are being invoked as a standard of care and raising the question of whether they are adequate).
176 Special liability rules for road accidents, commonly strict liability rules, exist in several countries, as do special safety regulations for AVs. See David Fernandez Llorca and Emilia Gomez Gutierrez, "Artificial Intelligence in Autonomous Vehicles towards trustworthy systems", European Commission 2022 (JRC128170) <https://publications.jrc.ec.europa.eu/repository/handle/JRC128170> accessed October 30, 2023.
177 This is, in fact, what drivers of Teslas are expected to do. See Mike Spector and Dan Levine, "Exclusive: Tesla Faces U.S. Criminal Probe over Self-Driving Claims" (Reuters, Oct. 27, 2022) <https://www.reuters.com/legal/exclusive-tesla-faces-us-criminal-probe-over-self-driving-claims-sources-2022-10-26/> accessed August 25, 2023.
178 This has happened in Tesla-related accidents. See Abhirup Roy, Dan Levine and Hyunjoo Jin, "Tesla Wins Bellwether Trial over Autopilot Car Crash" (Reuters, Apr. 22, 2023) <https://www.reuters.com/legal/us-jury-set-decide-test-case-tesla-autopilot-crash-2023-04-21/> accessed August 25, 2023.
179 See Andrew J. Hawkins, "The world's first robot car death was the result of human error – and it can happen again" (The Verge, 20 November 2019) <https://www.theverge.com/2019/11/20/20973971/uber-self-driving-car-crash-investigation-human-error-results> accessed 30 October 2023.
180 See Part 2; Buiten and others (n58).
181 See Hacker (n174) 2: "Technically, autonomous vehicles will be considered high-risk (Article 6(1) and (2) AI Act) but are exempt from all of the core obligations of the AI Act (Articles 2(2) and 84 and Annex II Section B No. 2, 3, 6 and 7 AI Act), hence rendering the relevant references in Articles 3 and 4 AILD Proposal inapplicable to them."
182 Proposal New PLD (n136) Article 1.

This is already beneficial for the plaintiff (although not new), as they would not have to establish fault; they only have to prove that the product was defective, the damage suffered, and the causal link between the two.183 A second legal element is that the AV software falls under the high-risk category of the AI Act.184 Thus, the AV manufacturer is obliged to meet safety requirements such as producing technical documentation, record keeping, and designing for human oversight and transparency.185

According to Article 6 of the Revised PLD, a product is defective if it does not "provide the safety which the public at large is entitled to expect."186 This includes the presentation of the product, instructions, etc.;187 the reasonably foreseeable use and misuse of the product;188 product safety requirements;189 and the specific expectations of the end-users for whom the product is intended.190 Under Article 9, defectiveness is presumed if the plaintiff shows (1) that the vehicle does not comply with the mandatory safety requirements of the product, or (2) that the damage was caused by an obvious malfunction.191 Producers are exempt if the defect did not exist when the product was placed on the market,192 if the defect is due to compliance of the product with mandatory regulations,193 or if the state of scientific-technical knowledge at the time the product was placed on the market was not such that the defect could be discovered.194

Following the research on the complexities of AI-human interactions, one of the questions such a case raises is whether handing over control less than a second before the accident provides "the kind of safety the public at large expects," or meets the expectations of end-users who, in this case, are regular drivers (not professional racing drivers, for example). If it does not, it would be a defect.195 Indeed, handing over control in such a way seems like the kind of problematic interface inspired by the idea that humans and machines complement each other easily, discussed in Section 2.2.

To show that there is a defect, and with the PLD in place, the plaintiff would be able to request documentation and evidence from the vehicle's manufacturer about the system and its design.196 In this case, this would include the technical documentation on the autopilot and the AI-human interface but also, if this were a device covered by the AI Act, the conformity assessments with the requirements of the AI Act and, in general, information on what the producer's expected duty of care regarding human oversight is.197 Though there are, to date, no such clear behavioural standards, the AI Act (or the future requirements for AVs in particular) may offer some guidance on what this looks like at the design stage: there must be "appropriate human-machine interface tools" so that high-risk AI systems can be "effectively overseen by natural persons." Similarly, it also requires that individuals be aware of the possibility of relying and over-relying on the system, and "be able to intervene in the operation."198

Because the navigation software is a high-risk system, the conformity assessment would show whether the human interface meets the EU standards, which most likely follow the state of scientific and technical knowledge. If it does, it will most likely be an uphill battle for the plaintiff to prove that the interface is not of the kind the public at large expects and is not reasonable for the end user. If the conformity assessment shows non-compliance with the safety standards, causality will be presumed and the manufacturer or provider will have to prove that this did not cause the accident (aside from any administrative complaints that may be filed, under the AI Act, for non-conformity).

In all cases, if the plaintiff did not abide by her expected standard of care and, for example, did not follow instructions, was distracted, or was in breach of a legal obligation, the liability of the manufacturer could be reduced, but most likely not eliminated.199 This is positive, as it would also encourage harm-reducing behavior from AI system end-users.200 If the plaintiff contributed to the accident with her action or omission without fault (perhaps she did receive control of the car, but given how control was handed over it was not reasonable to expect that she could control the vehicle), the Revised PLD also establishes that this should not reduce the liability of the producer.201

4.2. Analysis of the example

The example above reveals a few interesting ways in which the Revised PLD complements the AI Act, and two important shortcomings:

First, the disclosure of evidence requirement strongly relies on the presumption that extensive evidence will exist. Under the AI Act, however, only the producers and deployers of high-risk systems and foundation models are required to produce and keep documentation about the functioning of AI systems. Recall that one of the advantages of regulation, according to Shavell, is to mandate the production of information that is not produced.202 Thus, even if under the PLD courts are empowered to order the defendant to disclose relevant evidence from all AI producers, it is less clear that victims of harm by non-high-risk systems will have access to evidence equivalent to that available to victims of harms by high-risk AI systems. When an accident involves an AI-powered product that falls outside the high-risk system category defined by the AI Act, the level of protection may thus be lower, simply because less documentation, and fewer technical standards, may be available. This is a function of the AI Act's structure and less so of the liability regime itself.

Second, the ease with which victims will be able to succeed in their liability claims may strongly depend on compliance with the special requirements and standards mandated by the AI Act. This is, again, a residual effect of the AI Act that spills into the liability regime.

183 Proposal New PLD (n136) Article 5; Article 9.1.
184 Software supporting motor vehicles is under the current high-risk category of the AI Act, but it is also expected that specific regulations will be developed. See Fernandez Llorca and Gomez Gutierrez (n176).
185 See above the discussion on the AI Act and conformity assessments.
186 Proposal New PLD (n136) Article 6.1.
187 Proposal New PLD (n136) Article 6.1(a).
188 Proposal New PLD (n136) Article 6.1(b).
189 Proposal New PLD (n136) Article 6.1(f).
190 Proposal New PLD (n136), Article 6.1(h).
191 Proposal New PLD (n136) Article 9.1 (b), (c).
192 Proposal New PLD (n136) Article 10.1(c).
193 Proposal New PLD (n136) Article 10.1(d).
194 Proposal New PLD (n136) Article 10.1(e).
195 See above Part 1.
196 Proposal New PLD (n136) 243, Article 8.
197 See AI Act (n3) Article 11.
198 See AI Act (n3) Article 14.
199 Proposal New PLD (n136) Art 12.2. This echoes the principle of the contributory conduct or activity of the victim, see PETL (n2) Article 8:101.
200 See Buiten and others (n58).
201 Proposal New PLD (n136) Article 12.
202 See Part 3.1(b).

This was exemplified above, and will be important in the case of hybrid systems under the PLD: when the AI Act is in place, high-risk systems will be very likely to be designed to meet the expectations and standards of the human control requirement. This should improve the interface overall and, to a certain degree, link the standard of conduct of developers and providers to more clearly defined industry standards. Thus, if the human operator has, for example, not had access to information about the AI system in their training or in the form of readable instructions, the AI manufacturer may be held liable.203 A significant amount of the legal work of proving a defect will thus be focused on proving that the human-AI interface was not fit for purpose. As above, however, if the system at issue is not a high-risk system, less extensive and accurate documentation may be available to prove such claims.

At the same time, recall that in instances where compliance with standards is what led to the harm, developers and deployers will not be held liable.204 Though this makes sense from the developers' and deployers' perspective, it shifts attention to how the human-in-the-loop requirement will be developed in the standard-setting process. If this standardization process fails to account for the difficulties discussed in Part 2, then the outcome will be undesirable and victims are likely to remain unprotected under civil liability rules vis-à-vis victims of harms that occur without a hybrid AI system: developers will argue that the human was a regulatory requirement, and the human (or their employer) may be able to argue that the system was not fit for purpose.

4.3. Variations on the main theme: AI and safety with a human under the AILD

Now let's assume the situation is similar, but the victim is not a natural person but a legal entity. Imagine that the accident involves a semi-automated vehicle operating under human supervision in an industrial setting. The vehicle swerved into a curb, the human operator didn't manage to take control of the vehicle, and this caused an accident which resulted in material damages for the factory owner. Here, because the victim of harm is a legal entity, the AILD applies.

When we look at how the AILD would perform, it becomes evident that the cliff effects from the AI Act are even stronger on the AILD than on the PLD.205 The AILD's provision providing for disclosure of documentation only applies to high-risk systems. Victims of harms that occur by or with the participation of an AI system that is not high risk, but is still opaque or complex, will thus still face significant hurdles in overcoming the technical and organizational opacity of AI systems. Additionally, courts and plaintiffs may face a significant challenge of unknown unknowns when trying to order only the "necessary and proportionate" evidence to support a potential claim of fault.206

In situations where the AILD would apply, there is also the question of the human in the loop. The AILD does not apply "when the damage is caused by a human assessment followed by a human act or omission."207 The phrasing of the recital does not yet seem to consider the complexities of human assessment after an AI system provides advice. It is unclear how this recital may affect situations where humans and AI are supposed to work together.

Take an illustrative example: in a famous aeroplane crash involving an automated aviation system and a pilot, the accident happened because the pilot failed to steer the plane up, while the system was (wrongfully) steering it down.208 In such cases of complex interactions, the AILD will apply if the plaintiffs succeed at arguing that this scenario is not an instance where "damage is caused by a human assessment followed by a human act or omission, while the AI system only provides information or advice."209 It may not apply, however, to instances where control is handed over a second before an accident happens, as often happens in car accidents, where this is considered to comply with best practices and efforts. This is unless the AILD introduces some of the nuances the newer version of the AI Act has; but plaintiffs will still need to assert and substantiate the likelihood that the human-machine system did not adequately prepare the human for effective control of the situation in order to establish the applicability of the AILD. In what seems like a circular situation, plaintiffs will only then be able to compel AI developers – or courts – to disclose pertinent evidence. Yet, to be able to assert that, they would benefit from examining the documentation of the human-AI interface and system dynamics.

4.4. Harms to fundamental rights

For a last scenario, let's look at how the system will fare in the case of fundamental rights violations. Recall that one of the main objectives of the AI regulatory framework in general, and of the AILD specifically, is to help protect and give redress to victims of harm to fundamental rights, such as the right to non-discrimination.210 It is also worth recalling, however, that EU fundamental rights law is generally applicable mainly to institutions and bodies of the EU, and to Member States only when they are implementing Union law.211 This is, of course, unless there are other, specific laws, such as data protection law or antidiscrimination law, that extend the obligation to comply with fundamental rights law to private parties.212 Additionally, Member States have rich traditions on the application of fundamental rights and, in general, it is up to Member States to establish procedural rules for the actions intended to safeguard fundamental rights.213 The application of the AILD will thus be subject to national, or special, rules on the application of liability law to guarantee the protection of fundamental rights.

As with all forms of liability law, victims of fundamental rights violations in situations that involve an AI system will have to show that harm occurred. This is not necessarily straightforward: plaintiffs must have legal knowledge or reasonable suspicion of harm and provide sufficient facts and evidence to support the likelihood of a damages claim.

203 See discussion in Part 1.
204 Recall that under the PLD, manufacturers and distributors will not be liable if they are able to prove that the defectiveness is due to compliance of the product with mandatory regulations. Proposal New PLD (n136) Article 10(d).
205 See Hacker (n174) 20, arguing that this problem arises because the EU AI liability regime excessively relies on the risk categories defined in the AI Act and arguing that the list of the AI Act is both over and under inclusive.
206 See Hacker (n174) 20, making a similar critique and arguing that the list of the AI Act is both over and under inclusive.
207 Proposal AILD (n159) Recital 15.
208 Dominic Gates and Lewis Kamb, "Indonesia's devastating final report blames Boeing 737 MAX design, certification in Lion Air Crash" (The Seattle Times, Oct. 24, 2019) <https://www.seattletimes.com/business/boeing-aerospace/indonesias-investigation-of-lion-air-737-max-crash-faults-boeing-design-and-faa-certification-as-well-as-airlines-maintenance-and-pilot-errors/> accessed August 26, 2023.
209 Proposal AILD (n159) Recital 15.
210 Proposal AILD (n159).
211 Charter of Fundamental Rights Art. 51.
212 See Hacker (n174).
213 See judgments of 13 December 2017, El Hassani, C-403/16, EU:C:2017:960, paragraph 26, and of 15 September 2022, Uniqa Versicherungen, C-18/21, EU:C:2022:682, paragraph 36.

However, victims of AI discrimination, for example, may not be aware of, or may only suspect, that an AI system's decision stems from algorithmic bias leading to unlawful discrimination, and they typically cannot access the necessary information from the system's output logs. In some instances, such as the rejection of a loan application, there may be incentives to investigate. Ziosi et al. argue, for example, that discrimination's impact can be more subtle, such as when women consistently receive fewer job opportunities than men. In such cases, discrimination manifests as a lack of opportunity rather than a direct denial.214

Additionally, and in some cases, the infringement of a fundamental right may not necessarily amount to damage. Take, for example, the case of data protection law. The GDPR establishes in Article 82(1) that "[a]ny person who has suffered material or non-material damage as a result of an infringement of this Regulation shall have the right to receive compensation from the controller or processor for the damage suffered."215 The ECJ has explained, however, that the right to compensation for an infringement of an individual's data protection rights requires establishing, in essence, the same conditions as any other liability claim: "namely processing of personal data that infringes the provisions of the GDPR, damage suffered by the data subject, and a causal link between that unlawful processing and that damage."216 One of the reasons for this is that the GDPR, specifically, provides for administrative and judicial remedies before a supervisory authority in case of an infringement of the GDPR, some of which have a punitive purpose and are not conditioned by the existence of damage.217 Thus, to be able to obtain compensation, the injured party must prove that the consequence of the breach of the GDPR constituted a certain form of damage, even if a non-material damage (which the court has also explained must be interpreted broadly).218

As in the previous example, victims seeking compensation for infringements of their fundamental rights may not always be able to do so under the design of the legal system, unless there is additional harm. Even when they can, they may encounter challenges in accessing and understanding relevant evidence: under the AILD, plaintiffs have a right to access evidence about high-risk AI systems which they suspect caused them harm. This right requires that plaintiffs present "facts and evidence sufficient to support the plausibility of a claim,"219 is limited to evidence that is "necessary and proportionate" to support the claim,220 and requires courts to only order the disclosure of evidence when claimants have made "all proportionate attempts at gathering the relevant evidence from the defendant."221 As Ziosi and co-authors explain, it may be hard for non-experts to determine what evidence can be considered plausible, and the provision presumes victims' awareness of harm.222 Additionally, claimants may face difficulties proving fault, as the legal nature of the documentation, where it exists, may still be challenging for less technically literate plaintiffs.223

Lastly, and as already discussed above, the AILD is not supposed to apply to situations caused by a human assessment followed by a human act or omission where the AI system only provides information or advice.224 Though high-risk systems are not the only type of AI system that can eventually affect fundamental rights, to the extent the list of high-risk systems contains the "usual suspects," this seems like a notable exclusion.225 This is paradoxical, as a central objective of the whole AI regime in Europe is to protect against and mitigate fundamental rights-related harms.226

4.5. Conclusion to this part

This Part "ran" the EU liability directives through three examples of situations where AI harms occurred and a human was involved: two safety harms and one harm to fundamental rights.

Based on these examples and applying the framework laid out in Part 2, the following conclusions can be drawn:

First, the liability regime seems to successfully complement the AI Act, especially in instances where high-risk systems are involved. This occurs for two main reasons: because most of the rules directed at facilitating access to evidence are directed at high-risk systems, and because under the AI Act it is only developers of high-risk systems that will produce the desirable information. In some instances, it may be unrealistic to expect victims of harm involving high-risk systems to assess the technical documentation and prove, for example, lack of conformity. An additional downside of this complementarity is that victims of harm by AI systems that are complex, opaque, or autonomous, but not defined as high-risk systems or foundation models under the AI Act, may still face important obstacles when trying to prove the existence of a defect (in the case of "products," or other special regimes), or fault on the side of the producer or deployer (in all other cases). The AI Act's tiered framework thus drifts over into the AI liability regime and the trustworthy AI regime, and most of the incentives to take care and produce desirable information fall upon the developers and deployers of high-risk systems.

Second, the liability regimes treat human-AI hybrid systems in a contradictory manner. On the one hand, the AI Act mandates human oversight over high-risk systems and emphasizes how the human-AI interface must be designed to be effective. Increased focus on whether a human-AI interface is "fit for purpose" is an important improvement from the status quo. However, human-AI interactions are complex and not always desirable, and the focus of the AI Act on human control may lead to situations where, under the AILD, designers may rightfully claim that the defect or situation at issue arose because they must comply with the human supervision requirement.

On the other hand, the AILD excludes from its coverage systems where AI is advisory rather than decision-maker, and thus developers and deployers of high-risk systems may not be subject to the AILD at all.

214 Ziosi (n71).
215 Regulation (EU) 2016/679 General Data Protection Regulation, OJ L 119, 4.5.2016, Article 82 (GDPR).
216 Case C-300/21, UI v. Österreichische Post AG [2023] ECLI:EU:C:2023:370 (UI v. OP) 36.
217 UI v. OP (n216) 40, GDPR (n215) Article 83 and 84.
218 UI v. OP (n216) 50 (in the decision it is unclear what is a damage within the meaning of the GDPR).
219 AILD, Article 3(1).
220 AILD, Article 3(4).
221 AILD, Article 3(2).
222 Ziosi (n71) 7.
223 Ziosi (n71) 7.
224 Proposal AILD (n159) Recital 15.
225 High-risk systems include AI systems used in critical infrastructures; educational or vocational training; safety components of products; employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment procedures); essential private and public services; law enforcement that may interfere with people's fundamental rights; migration, asylum and border control management; administration of justice and democratic processes. See AI Act (n3) Annex III.
226 See discussion in Part 2.

Paradoxically, this will be the case for many systems used to make decisions that are consequential to fundamental rights, such as systems used in educational and vocational settings to determine who can access a certain program.

Third, the AILD may be a somewhat limited measure to solve the individual redress and recourse gap in the case of infringements of fundamental rights, even when it applies. Victims may face difficulties identifying that they suffered harm to their fundamental rights and, even when they do, the application of the AILD will be constrained by Member State or specific regulation regarding the use of liability law to seek redress for harms to fundamental rights. EU law and Member State liability laws typically distinguish between the infringement of a fundamental right and the effective occurrence of material or immaterial harm when granting compensation; such harm is both hard to prove and may not always occur.

Lastly, the effectiveness of the liability regime seems to rely importantly on the development of the standards that are mandated under the AI Act. This is true for the human oversight requirement, as I showed above, but it may be true for most other requirements, where the development of standards will lead to a better understanding of what "best practices" around the development and deployment of AI systems should be. Indeed, these standards could be used to establish duties of care. This also highlights the importance of standards to the overall effectiveness, and democratic legitimacy, of AI governance.227

Based on these observations, the next and last section offers some recommendations for addressing these limitations in the current AI liability framework in Europe.

5. Suggested reforms and key elements for the broader discussion

The proposed revised PLD and the AILD seek to update the existing liability frameworks in EU Member States so that individuals who suffer harm caused by or with an AI system obtain fair compensation, and thus to ensure, in general, that the uptake of AI is done with individual interests in mind. As the EU strategy emphasizes, and to the extent the EU also wants to incentivize the development and adoption of "trustworthy" AI, a fit-for-purpose liability regime also creates legal certainty for businesses.228

Though the proposals certainly advance in an important direction and are part of a broader regulatory initiative, they can be further improved. Based on the considerations of the previous parts, this Part proposes a few avenues to (1) better address the information asymmetries for systems that are not subject to special requirements under the AI Act; (2) ensure victims of harms in AI-human systems are not left worse off than victims of solely automated or non-automated systems; and (3) improve redress for harms to fundamental rights and create better incentives for AI developers and deployers to exercise more care.

5.1. Addressing information asymmetries

Information asymmetries between plaintiffs and AI developers and producers are a function of AI opacity, because opacity obstructs effective inspection of AI systems.229 The EU proposals successfully address organizational opacity, especially for high-risk systems under the AILD and, in general, under the PLD, because developers will no longer be able to assert confidentiality over the evidence. Thus, once the AI Act is applicable, there will also be adequate documentation, which should diminish the difficulty in scrutinizing the workings of a system. Similarly, the strict liability regime under the PLD, and the rebuttable presumption of causality under the AILD, are positive adjustments to ease victims' burden of proving causality.

However, it is noteworthy that the proposed regime may better serve the victims of high-risk systems and foundation models, as defined by the AI Act, than the victims of harm by other systems. From an organizational opacity perspective, in the case of the AILD, the courts' power to demand the disclosure of relevant evidence extends only to high-risk systems. Even though the AI Act's list of high-risk systems is a good proxy for the systems that are most likely to cause harm and to be complex, they will not be the only systems that cause harm, nor are they the only opaque and complex systems that may, both now and in the future, cause harm. Thus, it is advisable that under the AILD, as in the PLD, courts are always empowered to order the defendant to disclose relevant evidence that is at its disposal, upon request of an injured person claiming compensation and when the claimant has presented facts and evidence sufficient to support the plausibility of the claim for compensation.

In the case of technically opaque or complex systems, victims seeking to prove fault under the AILD may again find it easier when the system is high-risk. This is because the explanatory documentation that can be relied upon to provide evidence will most likely be that produced under the transparency, explainability and record-keeping requirements of the AI Act for high-risk systems. Additionally, the development of legal and industry standards will enable plaintiffs to compare a producer's or deployer's behavior with other actors' behaviors and standards of care.

Consequently, if the power to request documentation from non-high-risk systems is extended, courts could request developers and deployers to provide ex-post explanations of how a system operates. This should be done to the extent possible and based on a reasonable justification presented by the plaintiff as to why this is needed.

5.2. Human-AI hybrid systems and the role of standards

In instances where liability claims involve human-AI hybrid systems, courts should emphasize evaluating the adequacy of the human-AI interface. This is particularly crucial when examining cases where the human element in the loop is being considered as the cause of, or a contributing factor to, AI-related harm.

To shift legal processes in this direction, and as the European Union's framework for trustworthy AI reaches completion, these considerations must be taken into account during the process of establishing industry standards for the human supervision requirement under the AI Act. Indeed, the standard-setting process will play a structural role not only in implementing and materializing the ambitions of the AI Act but, importantly, in creating the baseline expectations used to assess and evaluate liability claims.

Human oversight standards must, for example, mandate a clear definition of the roles and responsibilities of each party involved, consider the level of training and automation of the system in place, and account for the competencies possessed by the human actor in question.

227 On the importance of standards for the implementation of the AI Act see Edwards (n6); Michael Veale and Zuiderveen Borgesius, "Demystifying the Draft EU Artificial Intelligence Act - Analyzing the good, the bad, and the unclear elements of the proposed approach" (2021) 22(4) CLRI 8, 9; Mélanie Gornet and Winston Maxwell, "The European approach to regulating AI through technical standards" (On file with the authors, 2024).
228 White Paper on AI (n8) 13.
229 See, making a similar argument, Commission Report on safety and liability implications of AI (n21) 16.

teaching humans how to operate AI systems. Professionals such as pilots address such violations
or machine operators should arguably be held to a more stringent
accountability standard compared to everyday consumers. 6. Conclusion
As in critical safety industries or other industries with experience in human-machine interactions, EU standard-setting bodies and judges should pay special attention to the stated goals of the AI-Human system and the reasonability of those expectations, verify that systems are designed and labelled sufficiently for effective use, and address training and organizational policies.230 Though from a liability perspective technical standards are different from standards of care, it seems inescapable that at least part of the evaluation of compliance with standards of care will rely on what are defined to be the appropriate technical standards for hybrid systems.

5.3. Redress to affectations of fundamental rights

The third element of discussion is the suitability of the AILD to seek redress for affectations of fundamental rights.

The first shortcoming of the existing AILD is the exclusion from its scope of application of AI systems that are supervised by humans. This would leave, for example, an algorithm like the one at issue in the Dutch scandal outside of its scope of application, but it is also especially worrisome because human supervisors are increasingly introduced specifically to mitigate the risks posed by AI systems used in different forms of decision-making that can affect fundamental rights. A first key recommendation is, thus, to eliminate this requirement.

The second, more structural, shortcoming is that liability law necessarily requires the occurrence of harm to warrant compensation - the main remedy within liability law. This may constrain the AILD's capacity to facilitate victims' access to justice and may create fewer incentives for certain AI providers to take optimal care.

To be fair, the EU's general framework for trustworthy AI is centred around the understanding that the protection of fundamental rights isn't only about an individual's right itself - for example, a person's right to equality before the law - but is also about building societies that are respectful of fundamental rights. The EU's system of fundamental rights seeks to achieve this by, for example, promoting political participation and a functioning democracy, and by directing the work of different government bodies towards building societies and markets where fundamental rights are in general guaranteed. These systemic aspects of the protection of fundamental rights are the objectives to be addressed via the enforcement of the AI Act and its safety requirements. Additionally, in some instances fundamental rights violations are better addressed with non-pecuniary remedies, such as injunctions, declarations, or specific performance orders to correct the violation.231

At the same time, one of the key concerns of civil society is the lack of robust mechanisms for redress for individuals and groups affected by AI systems.232 Even if the final version of the AI Act includes a remedy chapter with a right to lodge complaints with a market surveillance authority, it remains unclear how effective this mechanism will be and how it will act as a mechanism to compensate for individual affectations to fundamental rights.233 To improve access to recourse for individuals who are victims of violations of fundamental rights in situations involving AI systems, European and Member State authorities may consider adopting or expanding mechanisms like those of the AILD within other procedures intended to effectively address such violations.

6. Conclusion

While AI can do much good, it can also harm. The characteristics of AI, and how individuals participate in and interact with AI systems, make it difficult to trace back potentially problematic decisions or outcomes made with their involvement. This makes it difficult for victims of harm to obtain redress. The 2022 directives proposed by the European Commission seek to update the existing liability frameworks in EU Member States so that victims of harm caused by or with an AI system obtain fair compensation, and thus to ensure, in general, that the uptake of AI is done with individual interests in mind and with legal certainty for businesses.234

The proposals are an important complement to the AI Act's risk and safety approach. Indeed, relying solely on risk regulation has distributive consequences, including the possibility that individual harms and costs will be dismissed if a particular measure makes sense collectively, which may especially harm minorities.235 It may also lead to situations where, because regulators are fallible, organizations don't have enough incentives to take optimal care.236 Similarly, one of the main arguments raised when the AI Act was first published was that it didn't include individual rights or rights of action for affected persons, even though its stated goal is to protect fundamental rights in Europe.237 In this context, liability law becomes an important vehicle to ensure that the vast and fast adoption of AI systems in all facets of life and society is done in a way that guarantees the protection of people's rights and interests, but also to provide legal certainty for AI developers and deployers. It is, also, an important moment of policy choice, where not only the interests of victims but also the societal interests in adopting and developing AI are weighed against each other.

Nevertheless, this Article has shown that the AILD and the PLD, in their current forms, fall somewhat short of their ambition to effectively complement the AI Act, in no small part because they rely very strongly on the tiered framework developed by the AI Act. This occurs, especially, because the ex-ante regulatory interventions will often lead to the creation of the documentation, standards and information that will be important to succeed in liability claims ex post. This is especially the case for hybrid systems. This analysis also calls into question whether liability law is the best mechanism to give victims of affectations to their fundamental rights, when an AI system is involved, a viable avenue to seek redress.

The time is right, however, for the EU Commission and Parliament, and legislators around the world, to have a broader conversation about the scope of liability and individual redress mechanisms for AI-related harms. In the EU, some of the elements identified here may be an unavoidable result of focusing attention on a particular and limited set of systems in the AI Act. Other issues – such as the standard-setting process – are outside the scope of the specific conversation on liability but will be critical to its successful implementation. EU institutions, however, should extend some of the benefits proposed by the AILD and the PLD to more or all harms involving opaque and complex AI systems, extend the application of the AILD to all AI systems regardless of whether a human is supervising, and explore other avenues for individuals to seek redress for AI-related affectations to their fundamental rights. Doing so will better enable the goal of trustworthy AI and help realize the EU Approach to Artificial Intelligence, which focuses on fostering trust, enhancing research and industrial capacity, and ensuring safety and fundamental rights. Similarly, as other countries pass AI regulations, the example of the EU liability framework for AI may be useful to analyze to better understand how liability law can complement AI risk regulations.

227 On the importance of standards for the implementation of the AI Act see Edwards (n6); Michael Veale and Frederik Zuiderveen Borgesius, "Demystifying the Draft EU Artificial Intelligence Act - Analyzing the good, the bad, and the unclear elements of the proposed approach" (2021) 22(4) CRi 8, 9; Mélanie Gornet and Winston Maxwell, "The European approach to regulating AI through technical standards" (On file with the authors, 2024).
228 White Paper on AI (n8) 13.
229 See, making a similar argument, Commission Report on safety and liability implications of AI (n21) 16.
230 Crootof and others (n27) 466.
231 UI v. OP (n223) 39, GDPR (n227) Article 77.
232 EDRi and coalition partners (n131).
233 EDRi and coalition partners (n131).
234 White Paper on AI (n8) 13.
235 Kaminski (n4) 8.
236 See discussion of law and economics analysis of regulation in Part 2.
237 EDRi and coalition partners (n131).

Declaration of competing interest

The author declares that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

No data was used for the research described in the article.

Acknowledgments

Thank you to Margot Kaminski, Maximilian Gahntz, and the participants of WeRobot 2023 for their comments and feedback on previous versions of this piece. Thank you, also, to the editors and reviewers of CLSR, and to Francesca Elli and Giovanna Hajdu Hungria Da Custódia for their research assistance.
