Artificial Intelligence & Damages: Assessing Liability and Calculating The Damages
I. Introduction
II. Assessing Liability
1. Potentially relevant liability regimes
a. Tort liability
b. Product Liability
2. Challenges and shortcomings
a. High number of involved stakeholders
b. AI’s increased autonomy
c. Lack of explainability (the “black box” phenomenon)
d. Lack of predictability or foreseeability
e. Special considerations regarding product liability law
3. Policy-driven solutions (lege ferenda solutions)
a. Granting legal personality to AI
b. Creating a new form of strict liability for operators of technologies increasing risk of harm
c. Applying vicarious liability principles for operators of autonomous technologies
d. Extending product liability to producers of emerging technologies (including services)
e. Compulsory insurance schemes
4. Developing the current fault liability regime (lege lata solution)
a. Enhanced duties of care
b. Solidarity rules between tortfeasors
III. Calculating the Damages
1. General considerations
2. Intellectual property right infringements
3. Privacy violations
4. Economic methods and “flat-rating” damages
IV. Conclusion
* Yaniv Benhamou is Lecturer (IP & Privacy), University of Geneva, PhD and Of Counsel Attorney. Justine Ferland is attorney-at-law, PhD, Research Assistant, University of Geneva. The authors sincerely thank Ms. Ana Andrijevic and Mr. Dino Vajzovic for their helpful comments.
Early draft (version 08.02.2020), submitted as a book chapter to: Leading Legal Disruption: Artificial Intelligence and a Toolkit for Lawyers and the Law, Pina D’Agostino / Carole Piovesan / Aviv Gaon (eds.), Thomson Reuters Canada 2020
Abstract
Establishing liability for damages caused by AI used to be rather straightforward when only one or a few
stakeholders were involved, or when the AI could only take a limited range of pre-defined decisions in
accordance with specific parameters defined by a human programmer. However, AI usually involves
several stakeholders and components (e.g. sensors and hardware, software and applications, the data itself
and data services, connectivity features), and recent forms of AI are increasingly able to learn without
human supervision, which makes it difficult to allocate liability between all stakeholders. This
contribution maps various possibilities, identifies their challenges and explores lines of thought to develop
new solutions or close the gaps, all from a global perspective.
Existing liability regimes already offer basic protection of victims, to the extent that specific
characteristics of emerging technologies are taken into account. Consequently, instead of considering
new liability principles (solutions that require certain amendments of the current liability regimes), one
should consider simply adapting current fault-based liability regimes with enhanced duties of care and
clarifications regarding shared liability and solidarity between tortfeasors, which could potentially be done
through case-law in most jurisdictions. When it comes to the calculation of damages, given the
difficulties in calculating the damage and in order to take into account the specificities of IPR or privacy rights,
economic methods may be considered to calculate the damages in general, such as the Discounted Cash
Flow Method (DCF) and the Financial Indicative Running Royalty Model (FIRRM), as well as the
Royalty Rate Method and case-law about Fair, Reasonable and Non-Discriminatory license terms
(FRAND). This path will lead to a certain “flat-rating” of damages (“barémisation” or “forfaitisation”),
at least when IPR and personal data are illegally used by AI tools in ways that are mostly not visible, hence barely
quantifiable in terms of damages.
I. INTRODUCTION
Artificial intelligence (AI) has undoubtedly brought along key societal benefits in the past years – one
can notably think about fighting climate change with more accurate predictions and quicker responses
to natural disasters, increase in patients’ wellbeing and health outcomes with robot-assisted surgery and
medical diagnosis assistance, and increased productivity and operational efficiency in the workplace
with automated and optimized routine tasks. It may in many cases reduce the risks of injuries or damages
in comparison to those arising when humans perform similar tasks. Yet, the widespread adoption of AI
has also led to unwanted and sometimes serious consequences. We have already seen, amongst other
issues, privacy violations, discrimination and fatal accidents caused by the use of AI, and it is
unfortunately probably just a matter of time before other, wider-scale examples are added to this list.
Establishing liability for damages caused by AI used to be rather straightforward when only one or a few
stakeholders were involved, or when the AI could only take a limited range of pre-defined decisions in
accordance with specific parameters defined by a human programmer. However, AI usually involves
several stakeholders and components (e.g. sensors and hardware, software and applications, the data itself
and data services, connectivity features), which makes it difficult to allocate liability between all
stakeholders. Moreover, recent forms of AI are increasingly able to learn without human supervision
and take autonomous decisions, which poses tremendous challenges for addressing questions of AI-
related liability. Indeed, no jurisdiction has granted legal personhood to AI or machines so far, meaning
that AI cannot be held personally liable for the damage it causes3. Because no specific legal regimes
currently define or regulate the operation of modern AI, courts dealing with these questions must attempt
to solve liability issues by applying general laws often drafted years before the advent of this technology.
We can therefore easily understand why AI-related liability has become one of the main areas of concern
for many experts today, along with other issues such as software accessibility, accountability and ethics.5
Much work remains to be done – not only via legal research but also on the policy, technical and business
sides – before we can satisfactorily answer all questions related to AI liability. The goal of this chapter
is to give an overview of the general principles that may guide legal practitioners, students and
academics when reflecting on liability for damages caused by modern AI. We do not aim to provide a
detailed analysis of the (potentially) applicable law in a specific jurisdiction, but rather to map various
possibilities, identify their challenges and explore lines of thought to develop new solutions or close the
gaps, all from a global perspective.
3
There have been many proposals for extending some kind of legal personality to emerging digital technologies, some even dating from the
last century. See the 2019 Report from the Expert Group on Liability and New Technologies – New Technologies Formation, p. 34, n 98,
quoting Lawrence B. SOLUM, Legal Personhood for Artificial Intelligences, in North Carolina Law Review, April 1992, Vol. 4/70, 1231. For
a discussion on whether current liability regimes can address AI-related challenges, see Yavar BATHAEE, “The artificial intelligence black
box and the failure of intent and causation” in Harvard Journal of Law & Technology, 2018, no. 2, pp. 889-938, at p. 891; Hannah R.
SULLIVAN and Scott J. SCHWEIKART, “Are Current Tort Liability Doctrines Adequate for Addressing Injury Caused by AI?” in AMA
Journal of Ethics, February 2019, Vol. 21, no. 2, pp. 160-166.
5
There are great ongoing initiatives and reports at the European level. See the 2019 Report from the Expert Group on Liability and New
Technologies – New Technologies Formation, p. 12, and the references of preexisting works made thereto.
See for instance ALLIANZ GLOBAL CORPORATE & SPECIALTY, The Rise of Artificial Intelligence: Future Outlook and Emerging
Risks, March 2018, available at: https://www.agcs.allianz.com/news-and-insights/reports/the-rise-of-artificial-intelligence.html.
II. ASSESSING LIABILITY
By nature, AI is constantly developing and keeps surprising us with unexpected achievements. In this
context, it is difficult to draft any kind of legislation specifically governing it, including to cover liability
issues, as such legislation would need to be both universal and constantly amended in order to remain
effective – an impossible task due to the static nature of legal institutions.6 For this reason, it appears
reasonable to start an analysis of AI liability by relying on existing general legal principles to both find
elementary answers and identify their shortcomings. We may then assess how gaps could be closed.
1. Potentially relevant liability regimes
Amongst the potential liability regimes that may be the most directly applicable in the context of AI-
related tort claims are general tort liability (a) and product liability (b). Because AI systems fall on a
spectrum7 – they may be anything between passive agents responding to specific human instructions
and autonomous entities having the capacity to learn, take decisions and perform actions unrelated to
their initial programming – these regimes may sometimes suffice to hold a natural or legal person liable
for AI’s actions and ensure proper indemnification of its victim(s). In complex cases involving several
stakeholders as well as more advanced “intelligent” AI however, they may not be sufficient, as will be
discussed in the next section.
It should be noted that we chose not to discuss vicarious liability (which imposes strict liability on one
person – the principal – for the negligence or wrongdoing of another – the agent)8 under the current
section. Indeed, even if AI could perhaps be seen as having a sufficient degree of autonomy and
intelligence to be treated under vicarious liability principles,9 this type of liability requires that the agent
(e.g. the AI) has legal personality, which is currently not the case for AI in any jurisdiction.10 Helpful
parallels that could be pleaded to apply rules akin to those of vicarious liability principles to AI will
however be discussed in the next section.
a. Tort liability
Tort liability is the general liability regime applying to a civil wrong committed by one person against
another person. Although there are differences between common law and civil law jurisdictions11, we
may broadly summarize that tort law is based on fault and implies failure to take reasonable care to
avoid causing injury or loss to another person. The plaintiff must prove a breach of duty of care (in
common law jurisdictions)13 or, as the case may be, a wrongful action or fault (in civil law jurisdictions)14. Once
the breach / faulty behavior has been established, the plaintiff must also prove that it suffered damage
and establish a causal link between the fault and the damage, thus giving rise to compensation.
Tort law may sometimes be applied to hold a person liable for damages related to the use of AI. For
instance, if a physician relies on an AI-powered clinical decision support software to prescribe
medication but the software issues a flawed recommendation that would have been noticed and ignored
by a reasonably competent physician, then the physician will likely be liable in tort for resulting and
foreseeable injuries to the patient notwithstanding AI’s wrong recommendation. However, as we shall
see, the application of tort law principles faces significant challenges and shortcomings.
6
Paulius ČERKA, Jurgita GRIGIENĖ and Gintarė SIRBIKYTĖ, “Liability for damages caused by artificial intelligence” in Computer Law &
Security Review, Vol. 31, Issue 3, June 2015, pp. 376-389, at p. 384.
7
Omri RACHUM-TWAIG, “Whose Robot Is It Anyway?: Liability for Artificial-Intelligence-Based Robots” in University of Illinois Law
Review, Vol. 2020, Forthcoming, p. 8. Available at SSRN: https://ssrn.com/abstract=3339230.
8
Examples include liability of the employer for the acts or omissions of its employees that took place in the course of their employment and
liability of the parent for the acts of their minor children. For legal sources, see for instance Restatement (Third) Of Agency §§ 7.03-7.07 in
the United States and article 1463 of the Civil Code of Quebec.
9
Emad Abdel Rahim DAHIYAT, “From Science Fiction to Reality: How will the Law Adapt to Self-Driving Vehicles?” in Journal of Arts
and Humanities, 2018, Volume 7, Issue 9, pp. 34-43, at p. 39.
10
RACHUM-TWAIG (n 7), p. 11.
11
Common law jurisdictions often refer to negligence as the default tort liability rule, whereas civil law jurisdictions are based on the Roman
concept of delict.
13
See for instance Donoghue v. Stevenson, [1932] AC 562 in the UK.
14
See for instance article 1382 of the French Civil Code; article 41 of the Swiss Code of Obligations; article 1457 of the Quebec Civil Code.
b. Product Liability
Although the scope of concerned parties may vary depending on the jurisdiction, product liability
generally targets at least manufacturers of finished products and manufacturers of raw parts or
components included in a finished product. It may also apply to importers, designers, distributors,
suppliers and retailers of the product amongst others (we will generally refer to “manufacturers” in this
article unless specific distinctions apply). Product liability may concern (1) manufacturing defects, (2)
design defects15 and (3) failure to warn users against the product’s inherent, nonobvious dangers.
In many jurisdictions including the European Union, product liability is a form of strict liability: if a
defective product causes any physical damage to consumers or their property, the injured person shall
be required to prove the damage, the defect and the causal relationship between defect and damage, but
once this burden of proof is fulfilled, the manufacturer or producer has to provide
compensation irrespective of whether there is negligence or fault on their part.16 In the United States,
product liability claims may be brought under three liability theories depending on the situation and
jurisdiction: negligence, strict liability or breach of warranty.17
Manufacturers can be cleared of liability under certain specific conditions that are unrelated to
considerations of fault or negligence. It is generally admitted, for instance, that manufacturers may raise
as a defense that the state of scientific or technical knowledge at the time the product was put into
circulation could not allow manufacturers to detect the defect.18 Manufacturers may also evade liability
if they prove that no defect existed when the product left their hands.19
At first glance, product liability seems like an attractive regime to hold manufacturers of AI-powered
products responsible for injuries caused by these products. For instance, when an autonomous vehicle
is manufactured or designed in a flawed way that is inherently dangerous to those around it, or when a
manufacturer fails to inform customers of the dangers associated with the vehicle, product liability
principles may be applicable. However, just like with tort liability principles, we shall see that the
application of product liability principles to modern AI faces significant challenges and shortcomings.
2. Challenges and shortcomings
As previously mentioned, AI systems fall on a spectrum. Whereas the aforementioned liability regimes
may suggest appropriate answers in “simpler” cases of damages caused by AI, their application may be
barred by insurmountable obstacles when dealing with cases implicating the most advanced forms of
AI.20 Amongst these obstacles are the high number of potentially involved stakeholders (a), AI’s autonomy
(b), lack of explainability (c) and lack of foreseeability (d). Specific considerations also further
complicate the application of product liability principles to AI’s actions (e).
a. High number of involved stakeholders
Emerging digital technologies, including AI, are becoming increasingly complex due to the
interdependency between their different components, such as i) the tangible parts/devices (sensors,
actuators, hardware), ii) the different software components and applications, iii) the data itself, iv) the
data services (i.e. collection, processing, curating, analysing), and v) the connectivity features21.
15
Design defects are clearly covered by product liability law in some jurisdictions, such as the United States: see RACHUM-TWAIG (n 7), p.
16. In the European Union, however, the Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and
administrative provisions of the Member States concerning liability for defective products (the “EU Product Liability Directive”) does not
clearly cover design defects. In practice, courts limit the application of strict liability under this directive to manufacturing defects, while
generally applying negligence principles to design and instruction defects; see Martin UEFFING, “Directive 85/374 – European Victory or a
Defective Product Itself?” in MaRLBe Research Papers, 2013, vol. 4, p. 373-424, at p. 392.
16
Articles 1 and 4 of the EU Product Liability Directive (n 15).
17
Margaret HORN and Kelly DAWSON, “Product Liability – United States” in Getting the Deal Through, July 2019, sections 18-19, available
at: https://gettingthedealthrough.com/area/30/jurisdiction/23/product-liability-united-states/
18
See for instance in the United States: HORN et als. (n 17), section 29; in the European Union: EU Product Liability Directive (n 15), article
7(d).
19
See for instance in the European Union: EU Product Liability Directive (n 15), article 7(b).
20
See for instance Emad Abdel Rahim DAHIYAT, “Intelligent agents and liability: is it a doctrinal problem or merely a problem of
explanation?” in Artificial Intelligence and Law, March 2010, Volume 18, Issue 1, pp. 103-121, at pp. 107-108.
The
number of stakeholders involved in the creation and operation of AI systems is concurrently rising:
hardware manufacturers, software designers, sellers, equipment and software installers, facility owners,
AI owners, AI users and trusted third parties, amongst others, may all have a role to play in ensuring
that AI does not cause harm, and allocating liability in this context is not an easy task.
Some legal regimes, such as product liability, may facilitate the allocation of liability by prescribing
joint liability between some of these potential defendants; however, this is not the case for all liability
regimes and in any event, the current provisions on joint liability may not adequately cover all relevant
stakeholders in an AI context. Moreover, even in cases of strict liability, it is necessary to determine
which of the commercial parties along the AI value chain can be held liable (if only for the jointly liable
defendants to allocate liability between themselves in the context of a recourse action), which may
prove impossible when conclusions are autonomously reached by AI.
In addition, digital technologies are continuously modified after their launch into the market – via
incorporation of new data, software updates or patches applied either by the manufacturer of the AI
system, manufacturers of individual system components or even third parties. These new codes add or
remove features in ways that change the risk profile of the “original” AI and affect the behavior of the
entire system or of individual components in a way that can affect the safety of AI as a whole.22 In this
context, it has become increasingly difficult to pinpoint who is responsible when something goes wrong.
Also, the performance of the AI may be well tested when it is launched, but there will be little or no
scientific evidence of the AI’s features at the time of the events giving rise to damages23. The legal decision regarding
a possible lack of diligence will have to be taken based on possibly outdated and incomplete information.
Moreover, it may be useless to try to compare AI tools, as conclusions reached for one AI tool (i.e., the one
that is defective) will not be transposable to a second AI tool, because the two, even when designed
together, will have – over time – learned and evolved differently24.
It may also be especially unfair to assign blame under strict liability principles (such as product liability
law) to a manufacturer or designer whose work was far-removed in both time and geographic location
from the completion and operation of the [original] AI system.25 Similarly, if the AI is modified after its
manufacturing or programming via open source software, for instance, then we can hardly conclude that
the product sold initially caused the injury and rely on product liability principles.26
b. AI’s increased autonomy
In the past, AI could usually be seen as a simple tool of the person programming, using or operating it,
who could be held liable in case of injury caused by such tool. Today’s AI is becoming increasingly
autonomous in that although initially programmed by a human counterpart, it can now process data,
learn from it and take independent decisions that can hardly, if not at all, be linked to the initial design
or programming.
Notwithstanding which liability regime is considered, courts confronted with liability claims arising
from AI’s actions must attempt to determine which legal or natural person is responsible for the damage
caused by these actions. AI’s increased autonomy makes a fundamental liability assessment difficult, if
not impossible in some cases. Whereas the existing rules on liability cover cases where the cause of the
robot’s act or omission can be traced back to a specific human agent (e.g. manufacturer, operator, owner
or user) and where that agent could have foreseen and avoided the robot’s harmful behavior, in the
scenario where AI takes autonomous decisions, the traditional rules will not suffice to give rise to legal
liability for damage caused by a robot, since they would not make it possible to identify the party that
caused the damage30. Indeed, the more autonomous an AI system becomes, the less control physical
parties have over it, and the general liability principles founded on agency, control, and foreseeability
collapse31.
21
EUROPEAN COMMISSION, Liability for emerging digital technologies, Accompanying the document Communication from the
Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the
Committee of the Regions, April 25, 2018, SWD/2018/137 final, available at: https://eur-lex.europa.eu/legal-
content/en/ALL/?uri=CELEX%3A52018SC0137
22
Id.
23
See JUNOD, referring to ERNST & YOUNG (2018), p. 85: "[I]n the field of robotics, it could be difficult to distinguish between a defect which
was present at the moment in which the robot was put into circulation from a defect that appeared after that moment, given the characteristics
of self-learning systems“.
24
JUNOD, referring to the judgment of the EU Court of Justice in C-503/13 of 5 March 2015.
25
SULLIVAN et als. (n 4).
26
BARFIELD (n 30), p. 197.
Under tort law, assessing how and when entities such as manufacturers, operators and/or users of AI
may commit a breach of duty or a fault and establishing causation is not simple when AI with a high
degree of autonomy is concerned. In cases where AI decisions are not directly related to one party but
rather result from AI’s interpretation of reality, who is to blame and in which proportion? Even if a
person can be seen as having played a role in causing damage (e.g. the programmer who instructed AI
to take a specific type of data into account), should its responsibility be proportional to the degree of
autonomy of the concerned AI, and how may we properly evaluate this degree of autonomy?32 When AI
reinforces itself without human input, by learning from its own past experiences and making adjustments
to improve efficiency, then acts on this new knowledge, is it even possible to speak of “fault” or “breach
of duty” from any of the persons who may have been involved with the AI at some remote point in time?
AI’s autonomy also poses problems under product liability law, which is currently not designed to
cover errors resulting from the autonomous AI’s “thinking” – a major flaw in the current legal approach
to AI, according to some authors.33 Indeed, in many cases, it will simply not be possible to draw the line
between damages resulting from the AI’s autonomous decisions and damages resulting from a product
defect. Even if fault need not be proven, the plaintiff has to demonstrate that the product was defective;
this is not an easy task when AI-powered systems operate successfully without a mechanical defect, but
still cause property damage or injuries due to their machine learning capabilities.34 As previously
mentioned, under most regimes, it will be possible for manufacturers and producers to escape liability
if they can establish that, at the time the AI was put into circulation, they were not aware and could not
have been aware of the risk which later materialized; in other words, that they could not know that the
AI was dangerous or defective when it left their hands. However, because AI develops and reinforces
itself with machine-learning and adjusts on its own to become more “intelligent” without human
intervention, it might simply never be the same AI at a later point in time than it was when it left the
manufacturer’s hands35 - thus leaving the victim uncompensated almost every time.
In any event, applying product liability principles to the most autonomous forms of AI has been said to
be unfair and commercially unreasonable towards manufacturers, as it would amount to holding them liable
for actions over which they have absolutely no control and thus potentially stifle innovation. Indeed, in
cases where AI is meant to replace human decision-making, applying product liability law in cases of
“defective” decisions would imply that manufacturers are liable in almost every case (due to strict
liability), yet humans making those same mistakes could plead the absence of fault or negligence under
general tort law principles. For instance, a manufacturer could be held strictly liable if an AI-powered
medical device fails to detect a specific condition, yet a physician would benefit from the more lenient
tort liability regime in the same situation and could escape liability by proving that it did not act
negligently. Moreover, too large a burden of responsibility could lead manufacturers, out of fear, not to
reveal their identity in public, or could otherwise stop the progress of technology
development in official markets, moving all the programming work into unofficial markets.36
30
European Parliament Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics
(2015/2103(INL)), available at: http://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html, par. AD. On this topic, see also
Woodrow BARFIELD, “Liability for Autonomous and Artificially Intelligent Robots” in Paladyn Journal of Behavioral Robotics, 2018,
Volume 9, Issue 1, pp. 193-203; Mark. A. CHINEN, “The co-evolution of autonomous machines and legal responsibility” in Virginia Journal
of Law & Technology, Fall 2016, Vol. 20, no. 02, p. 338-393; George S. COLE, “Tort liability for artificial intelligence and expert systems” in
The John Marshall Journal of Information Technology & Privacy Law, 1990, Vol. 10, issue 2, p. 127-231.
31
SULLIVAN et als. (n 4).
32
Giangiacomo OLIVI, Claudio Orlando MIELE and Valeria SCHIAVO, “Robots and Liability: who is to blame?”, Dentons, December 20,
2018, available at: https://www.dentons.com/en/insights/articles/2018/december/20/robots-and-liability
33
BARFIELD (n 30), p. 196; ČERKA et als. (2015) (n 6), p. 386.
34
BARFIELD (n 30), p. 196.
35
Valérie JUNOD, “Liability for damages caused by AI in medicine : progress needed” in Christine CHAPPUIS and Bénédict WINIGER
(eds.), Journée de la responsabilité civile 2018, Schulthess, 2019, Zürich, pp.119-150, at p. 123.
36
ČERKA et als. (2015) (n 6), p. 386.
c. Lack of explainability (the “black box” phenomenon)
The operation of AI is based on the achievement of goals.41 AI’s designers do not program all the
possible scenarios in advance, nor give specific instructions for each of them; rather, they set a goal for
the machine and let AI process the data input, learn from it, and decide the best course of action to reach
its goal. This leads to the scenario where the AI’s programmers may not have an exact understanding of
how it reached such a goal or what the stages leading to success were;42 in other words, they cannot explain
the AI’s “thought process” leading to the final result. The same is true for AI’s failures which cannot
always be explained or understood by humans. For instance, algorithms in precision medicine process
patient and hospital data to predict patient risks and formulate diagnoses, but it is not always possible to
identify which data elements were processed, the weight that was given to each element in the global
assessment and whether there are unethical biases in the processing.43 Even in cases where the algorithm
itself is rather simple, the data fed into the algorithm may be so diverse and ever changing (in the case
of autonomous vehicles, one may think about inputs from cameras, sensors, lasers, microphones, etc.)
that it is often impossible to reproduce the environment in which the injury happened and identify its
source.
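To make this concrete, the following minimal sketch (added for illustration only and not part of the original chapter; it assumes a Python environment with the scikit-learn library) trains a small neural network on synthetic data: the system produces predictions, and its learned parameters can be printed, but those parameters are mere matrices of numbers rather than human-readable reasons for any individual output.

# Illustrative sketch only: a tiny trained model whose internal "reasoning" is opaque.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))              # 200 synthetic cases, 5 data elements each
y = (X[:, 0] + X[:, 3] > 0).astype(int)    # hidden rule the model is expected to learn

model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, y)

new_case = rng.normal(size=(1, 5))
print("prediction:", model.predict(new_case))                 # an output is produced...
print("weight matrices:", [w.shape for w in model.coefs_])    # ...but the internals are only arrays of numbers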
This so-called “black box” nature of AI creates challenges of interpretability and eventually affects
causation and allocation of liability for all aforementioned liability regimes. Indeed, identifying the
cause of an AI system’s failure to perform is the key element for establishing a fault/breach of duty of
care and a causal link in tort claims, or a link between defect and damage in product liability claims.
In the context of judicial proceedings, if a plaintiff cannot go back up the chain of data processing and
recreate the circumstances of AI’s reasoning process to understand what led to a specific (faulty) output,
its action may very well be doomed, as the plaintiff will not be able to fulfill the basic evidentiary requirements
regarding fault and/or causation.44
Some authors have argued that in order to make AI more explainable and remedy this important
shortcoming, its designers should be legally required to disclose algorithms’ code and implement a way
to record all aspects of their functioning, which would make it possible to reconstruct and understand the causes
of its behavior and facilitate liability assessments.45 However, this suggestion is not always feasible with
modern AI and also raises important issues with regard to trade secrets and competition law.46
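Purely by way of illustration (this sketch is not from the original chapter, and the file and field names are hypothetical), such a record-keeping obligation could, at its simplest, translate into an append-only log of every input, model version and output, so that a given decision can later be reconstructed for evidentiary purposes:

# Minimal, illustrative audit-trail sketch: record each AI decision so it can be reconstructed later.
import datetime
import hashlib
import json

AUDIT_LOG = "ai_decisions.jsonl"  # hypothetical append-only log file

def log_decision(model_version, inputs, output):
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,   # which version of the system took the decision
        "inputs": inputs,                 # the data elements actually processed
        "inputs_hash": hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,                 # the recommendation or decision produced
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example call for a hypothetical clinical decision-support system:
log_decision("cds-2.3.1", {"age": 54, "creatinine": 1.4}, "reduce dosage by 25%")

Whether such logging is technically feasible for every component of a complex AI system, and how it interacts with trade secrets, is precisely the open question raised above.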
d. Lack of predictability or foreseeability
The more advanced AI is, the less predictable or foreseeable it becomes. This is because many forms of
modern AI function based on unsupervised learning, as opposed to supervised learning. In cases of
supervised learning, the AI’s designers (and potentially users, if they participate in the process) have
considerable control over the results of an operation as they provide the basis for AI’s decisions; they
can therefore foresee, at least up to a certain point, how the AI will react to new data (e.g. with smartphones
that can identify someone in a photo). However, in cases of unsupervised learning (such as
AI relying on deep learning mechanisms)48, the algorithms are only given input data without
corresponding output values, and are left free to function “as they please” in order to learn more about
the data and present interesting findings. This lack of predictability or foreseeability challenges the
liability principles: a defendant will only be found liable if it could reasonably anticipate and prevent
the potential results of an action.49
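The distinction can be illustrated with a minimal sketch (added for illustration and not part of the original chapter; it assumes a Python environment with the scikit-learn library): the supervised model below is trained on inputs paired with known output labels, whereas the unsupervised model receives the inputs only and groups them according to structure it finds on its own.

# Illustrative contrast between supervised and unsupervised learning on toy data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))        # input data
y = (X[:, 0] > 0).astype(int)        # known output labels, available only in the supervised case

# Supervised: the designers supply both the inputs and the expected outputs.
clf = LogisticRegression().fit(X, y)
print("supervised prediction:", clf.predict([[0.5, -0.2]]))

# Unsupervised: the algorithm receives inputs only and clusters them "as it pleases".
km = KMeans(n_clusters=2, n_init=10, random_state=1).fit(X)
print("unsupervised cluster labels:", km.labels_[:10])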
41
ČERKA et als. (2015) (n 6), p. 383; O. RACHUM-TWAIG (n 7), p. 7 ; BARFIELD (n 30), p. 193.
42
RACHUM-TWAIG (n 7), p. 7-8.
43
BARFIELD (n 30), p. 195.
44
JUNOD (n 35), p. 124; Chris TEMPLE, “AI-Driven Decision-Making May Expose Organizations to Significant Liability Risk”, Corporate
Compliance Insights, September 11, 2019, available at: https://www.corporatecomplianceinsights.com/ai-liability-risk/; see also DAHIYAT
(2018) (n 9), p. 38.
45
Shane O’SULLIVAN, Nathalie NEVEJANS, Colin ALLEN, Andrew BLYTH, Simon LEONARD, Ugo PAGALLO, Katharina
HOLZINGER, Andreas HOLZINGER, Mohammed Imran SAJID and Hutan ASHRAFIAN, “Legal, regulatory, and ethical frameworks for
development of standards in artificial intelligence (AI) and autonomous robotic surgery” in The International Journal of Medical Robotics and
Computer Assisted Surgery, 2019, available at: https://onlinelibrary.wiley.com/doi/epdf/10.1002/rcs.1968, p. 7.
46
O’SULLIVAN et als. (n 45), p. 5; V. JUNOD (n 35), p. 124, footnote 30 (citing other sources).
48
Eduardo MAGRANI, “New perspectives on ethics and the laws of artificial intelligence” in Internet Policy Review, Volume 8, Issue 9, 2019,
available at: https://policyreview.info/articles/analysis/new-perspectives-ethics-and-laws-artificial-intelligence
Indeed, when trying to apply tort law principles to AI’s actions, unforeseeability may cause problems
in evaluating both the fault/breach of duty of care aspect as well as the causation aspect51. On the side
of fault, because AI’s actions are unpredictable, it is difficult for a person operating or interacting with
it to anticipate the probability that it will eventually inflict harm on others, the optimal precautions that
should be put in place by its programmers or operators, the safety measures that should be taken by
potential victims engaging with it and all the potentially new and unpredictable types of harms that may
be inflicted by it.52 In this context, we may hardly expect human stakeholders to be able to take
measures to prevent harm caused by AI. Similarly, when AI acts in an unexpected way after
having learned from its own experiences and reinforced itself, it will be difficult to establish a fault
or breach of duty of care on the part of manufacturers or programmers if they can demonstrate that the
AI was properly developed and tested before release, that their employees and auxiliaries were well
trained and supervised and that they implemented proper quality control mechanisms.53 Even in cases
where we can identify a fault from a human stakeholder interacting with AI, the lack of foreseeability
will in many cases break the causation link between this person’s fault and the victim’s injury.54
AI’s lack of foreseeability poses similar problems under product liability principles. In many
jurisdictions, the law specifically states that manufacturers are only liable for defects or inadequate
instructions when there was a foreseeable risk of harm posed by the product.55 Once again, because AI-
related risks are unforeseeable by nature, they simply cannot be covered by the product/design defect or
duty of warning and instruction doctrines.56
e. Special considerations regarding product liability law
Finally, one specific challenge is worth noting when attempting to apply product liability law principles
to AI’s actions: the fact that modern AI may simply not be covered by these principles at all as it may
not be a “product”. Indeed, although the term “product” may be interpreted broadly, product liability
generally only concerns tangible movables (such as hardware), not services;57 and key modern
technologies such as software and algorithms are most often considered services, not products.
Moreover, in AI’s complex environment, characterized by an inter-dependency among different
components, “products” and “services” are increasingly intertwined and it can be difficult to identify
whether a failure is due to either one of those components. In some cases, damage may be caused by a
simple hardware (product) defect, but it may also stem (amongst other examples) from
miscommunication between the physical infrastructure and AI’s “brain”, from incorrect data analysis,
or from corrupted third-party data being fed into the AI algorithm. In these contexts, doctrinal opinions
49
RACHUM-TWAIG (n 7), p. 13, citing various references in common law; BARFIELD (n 30), p. 199; DAHIYAT (2010) (n 20), p. 113.
51
Andrew D. SELBST, “Negligence and AI’s Human Users” in Boston University Law Review (forthcoming), 2019, p. 26, available at:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3350508; RACHUM-TWAIG (n 7), p. 15 ; see also BARFIELD (n 30), p. 193, citing
Andreas MATTHIAS, “The responsibility gap: ascribing responsibility for the actions of learning automata” in Ethics and Information
Technology, 2004, 6(3), p. 175-193.
52
RACHUM-TWAIG (n 7), p. 23, 27; BARFIELD (n 30), p. 194, 200.
53
JUNOD (n 35), p. 130.
54
RACHUM-TWAIG (n 7), p. 23.
55
See for instance in the United States Restatement (Third) Of Torts: Product Liability §2(b) (1998), which states that the manufacturer is
liable for design defects “when the foreseeable risk of harm posed by the product could have been reduced or avoided by the adoption of a
reasonable alternative design by the seller or other distributor, or a predecessor in the commercial chain of distribution, and the omission, of
the alternative design renders the product not reasonably safe” as well as for its “inadequate instructions or warnings when the foreseeable
risks of harm posed by the product could have been reduced or avoided by the provision of reasonable instructions or warnings by the seller or
other distributor, or a predecessor in the commercial chain of distribution, and the omission of the instructions or warnings renders the product
not reasonably safe.” A similar rule exists under article 7(b) of the EU Product Liability Directive (n 15) and under article 5(b) of the Swiss
Product Liability Act (RS 221.112.944), amongst others, establishing the “development risk” defense that manufacturers are likely to raise in
cases of AI’s unforeseeable actions. Under this defense, producers can escape liability if they can establish that, at the time the AI was put into
circulation, they were not aware and could not have been aware of the risk which materialized.
56
RACHUM-TWAIG (n 7), p. 19 ; JUNOD (n 35), p. 128.
57
The European Union Commission is currently exploring whether the definition should also cover software embedded in (or downloaded on
to) a physical product, but has not implemented concrete changes in the law so far.
3. Policy-driven solutions (lege ferenda solutions)
The shortcomings and challenges discussed above illustrate that notwithstanding the concerned
jurisdiction, current liability regimes are ill-adapted to adequately allow the indemnification of AI’s
potential victims. Although more academic and political discussion and development is required before
concrete solutions can be implemented, this section aims to present a few creative propositions which
may inspire policymakers dealing with questions of AI liability. It is however by no means exhaustive,
especially with regard to policy-driven solutions where multiple other avenues have been evoked in the
past years.63
a. Granting legal personality to AI
One way to circumvent the pitfalls currently associated with AI is to find a way to hold it directly liable
for its actions instead of looking for a faulty human behind it. By reviewing the existing legal framework,
lawmakers could decide to ascribe legal personhood to modern AI, thus giving it rights and a
corresponding set of duties. The idea in itself is not shocking and may possibly be implemented without
requiring too many legal reforms, since the law already grants legal personality to other non-natural
persons such as corporations, and AI likely fits (if not exceeds, in comparison to corporations) the
required criteria to benefit from a similar status.64 In fact, the European Parliament is considering this
pathway and is currently studying the implications of “creating a specific legal status for robots in the
long run, so that at least the most sophisticated autonomous robots could be established as having the
status of electronic persons responsible for making good any damage they may cause, and possibly
applying electronic personhood to cases where robots make autonomous decisions or otherwise interact
with third parties independently”65.
58
Marguerite E. GERSTNER, “Comment: Liability Issues with Artificial Intelligence Software” in Santa Clara Law Review, Volume 33,
Number 1, 1993, pp. 239-269, at p. 255; JUNOD (n 35), p. 126, citing Kerstin Noëlle VOKINGER, “Artificial Intelligence and Machine
Learning in der Medizin”, Jusletter, August 28, 2017.
59
COLE (n 30).
60
BARFIELD (n 30), p. 197.
61
Ibid.
62
See notably JUNOD (n 35), p. 126, citing Jessica ALLAIN, “From Jeopardy! To Jaundice: The Medical Liability Implications of Dr. Watson
and Other Artificial Intelligence Systems” in Louisiana Law Review, 2013, volume 73, issue 4, pp. 1049-1078, at p. 1067.
63
To give a few examples, the following solutions have notably been put forward: (1) the creation of regulatory rules regarding coding and
design of robots and autonomous products: RACHUM-TWAIG (n 7), p. 32 (citing others); (2) establishing a system where the AI would need
to be licensed: John KINGSTON, “Artificial Intelligence and Legal Liability”, conference paper, November 2016, available at:
https://www.researchgate.net/publication/309695295_Artificial_Intelligence_and_Legal_Liability (citing others) and JUNOD (n 35), p. 136
(for medical products); (3) developing a system where AI developers and manufacturers would agree to adhere to certain ethical guidelines to
govern AI, providing a framework that courts could use to resolve legal claims where AI is implicated: ALLIANZ GLOBAL CORPORATE
& SPECIALTY (n 5); (4) establishing a regulatory authority dedicated to regulating and governing the development of AI: Matthew U.
SCHERER, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies” in Harvard Journal of Law &
Technology, Volume 29, No. 2, spring 2016, pp. 353-400, at pp. 393-397.
64
Shawn BAYERN, “The Implication of Modern Business-Entity Law for the Regulation of Autonomous Systems” in Stanford Technology
Law Review, Volume 19, 2015, pp. 93-112; Shawn BAYERN, Thomas BURRI, Thomas D. GRANT, Daniel M. HAUSERMANN, Florian
MOSLEIN and Richard WILLIAMS, “Company Law and Autonomous Systems: A Blueprint for Lawyers, Entrepreneurs, and Regulators” in
Hastings Science and Technology Law Journal, Volume 9, 2017, pp. 135-161; Paulius ČERKA, Jurgita GRIGIENĖ and Gintarė SIRBIKYTĖ,
“Is it possible to grant legal personality to artificial intelligence software systems?” in Computer Law & Security Review, Vol. 33, Issue 5,
October 2017, pp. 685-699.
65
European Parliament Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (n 29), par.
59(f).
Key questions however remain to be settled before concretizing this idea. Contrary to corporations, AI
would not necessarily have a patrimony of its own and would thus not be able to indemnify its potential
victims even if it is found liable. This could however be circumvented if some form of compulsory
insurance scheme for human stakeholders involved with AI (either designers, manufacturers, service
providers and/or end users) or a compensation fund was to be established at the same time. In-depth
analyses and reflections would also be required to cover the other consequences of granting legal
personality to AI, such as the implications concerning its potential criminal responsibility.
This suggested solution is also criticized by many. Following the European Parliament’s proposal, more
than 250 experts from various AI-related fields signed an open letter in 2018 calling upon the European
Commission to reject it as it would be – in their opinion – inappropriate, ideological, nonsensical and
non-pragmatic.66 Some have held that contrary to what is the case with corporations, it will not always
be possible to identify a natural person behind the AI-legal person who may in all cases be ultimately
responsible, thus leaving liability voids in some cases.67 Moreover, even if AI was to have legal
personality, problems of unexplainability and unforeseeability would remain;68 it would not be
straightforward to establish that AI should have been able to avoid the mistakes it made, nor to
understand its “thought process” and the specific steps that led it to take a particular decision.
b. Creating a new form of strict liability for operators of technologies increasing risk of harm
Strict liability regimes could be the most appropriate way to ensure compensation, in particular for
operators of technology that exposes third parties to an increased risk of harm, such as AI-driven robots
in public spaces (non-private environments), possibly combined with a compulsory liability
insurance scheme69. Assessing AI’s liability under the strict liability theories discussed below may
require legislative changes, but could also potentially be done through case-law, at least in some
circumstances and/or jurisdictions if courts were to follow creative attorneys’ arguments and render
innovative decisions. Amongst the suggested sources of inspiration are the following liability theories:
• Liability for greater sources of danger or ultra-hazardous activities. Some jurisdictions have
established a strict liability regime holding those who create or handle particularly dangerous items
or perform abnormally dangerous activities liable for the damage caused by such items or activities,
even if they took every reasonable step to prevent this damage.70 Some authors hold that “since AI
is able to draw individual conclusions from the gathered, structured, and generalized information as
well as to respond accordingly, it should be accepted that its activities are hazardous.”71 Because
AI’s activities are inherently risky, and the risk may not always be prevented by safety precautions,
AI may meet the requirements for being considered a greater source of danger,72 which would imply
that either its developer or manager should be required to assume strict liability for its actions and
potentially be required to take out compulsory insurance to cover its civil liability.73 This would be
especially true when AI is performing a function in which mistakes may be directly life-threatening
(e.g. administering medicine to a patient).74
66
Open Letter to the European Commission Artificial Intelligence and Robotics, 2018, available at: http://www.robotics-openletter.eu/
67
O’SULLIVAN et als. (n 45), p. 7; Opinion of the European Economic and Social Committee on “Artificial intelligence – The consequences
of artificial intelligence on the (digital) single market, production, consumption, employment and society”, 31 May 2017, 2017/C 288/01, par.
3.33, available at: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52016IE5369&from=EN; Open Letter to the European
Commission Artificial Intelligence and Robotics (n 66).
68
DAHIYAT (2010) (n 20), p. 107-112.
69
See EU Report, 39 and 41, which defines “operator” (abandoning the traditional concepts of owner/user/keeper) as the person
who is in control of the risk connected with the operation of emerging digital technologies and who benefits from such
operation. “Control“ is a variable concept, ranging from merely activating the technology, to determining the output or result (such as entering
the destination of a vehicle or defining the next tasks of a robot), and may include further steps in between, which affect the details of the
operation from start to stop. For a discussion on compulsory insurance schemes, see below 3.d.
70
See for instance Restatement (Third) of Torts § 20 (2009) in the United States; Rylands v. Fletcher, [1868] UKHL 1 in the United Kingdom;
ČERKA et als. (2015) (n 6), p. 386. Examples of ultra-hazardous activities include the use or storage of explosives, disposing of nuclear wastes
and activities involving radioactive materials.
71
ČERKA et als. (2015) (n 6), p. 386.
72
Ibid.; see however RACHUM-TWAIG (n 7), p. 21, who argues that this theory likely does not apply to AI.
73
ČERKA et als. (2015) (n 6), p. 386.
74
KINGSTON (n 63).
• Liability for animals. Although liability for the acts of animals is generally based on fault or
negligence by their owners or keepers, it can also be strict in some jurisdictions and circumstances.
In the United Kingdom, for instance, a specific act provides that the keeper of a dangerous animal
(e.g. its owner who is in its possession, a head of household or a keeper) is strictly liable for any
harm which may have been caused by that animal, notwithstanding whether or not he was at fault.75
Under another United Kingdom act, keepers of dangerous wild animals are required to take out
insurance policies against liability for damage caused to third parties and to be licensed by the local
authority.76 Some authors as well as the European Commission have linked the unpredictability of
AI systems to that of animals, “where liability is typically attributed to those responsible for
supervising the animal because they are in the best position to adopt measures to mitigate the risk
of damages”77; we could therefore imagine creating a strict liability regime akin to what exists for
dangerous animals in the United Kingdom for users or supervisors of AI systems.
• Common enterprise liability. One author argues that a new strict liability regime for AI could be
established based on the common enterprise liability doctrine, under which each entity within a set
of interrelated companies may be held liable jointly and severally for the actions of other entities
that are part of the group, allowing the injured party to obtain redress without having to assign every
aspect of the general wrongdoing to one party or another.78 Under this liability scheme, persons
working towards a common aim, such as the manufacturers, programmers and designers of AI and
its various components, would jointly share the responsibility of indemnifying the plaintiff for AI’s
wrongdoings and no finding of fault would be required. The defendant(s) having indemnified the
plaintiff in such a suit would have the opportunity to file a recourse action to obtain reimbursement
from other potential defendants. In order for this solution to be implemented, however, courts would
need to depart from some of the traditional criteria of common enterprise liability; indeed, they
usually apply this doctrine when the liable entities have some sort of organizational relationship,
which may not always be the case with AI.79
These solutions are appealing in that they make it possible to circumvent issues of AI’s autonomy, unexplainability
and unforeseeability discussed above; lawmakers and courts should however remain careful before
implementing them, as they may also have a “chilling effect” on the manufacturing, design and use of
future AI-based products. Indeed, as previously mentioned, holding human stakeholders responsible for
acts performed by AI beyond their control – with no regard as to whether or not they exercised an
appropriate level of care – may be placing too high of a burden on their shoulders and may lead to less
innovation and/or use of AI in the future.
c. Applying vicarious liability principles for operators of autonomous technologies
Vicarious liability regimes could be the most appropriate way to ensure compensation, in particular for
autonomous technologies, by making the principal liable for the auxiliary’s actions80. This interpretation is
based on the way AI’s actions are interpreted in the field of contracts, where strict liability rules apply
to a machine’s actions and bind the person on whose behalf it acts, regardless of whether these actions
were planned or envisaged, and “complies with the general rule that the principal of a tool is responsible
for the results obtained by the use of that tool since the tool has no independent volition of its own.”81
75 Animals Act 1971, Chapter 22.
76 Dangerous Wild Animals Act 1976, Chapter 38.
77 J. Scott MARCUS, "Liability: When Things Go Wrong in an Increasingly Interconnected and Autonomous World: A European View" in IEEE Internet of Things Magazine, December 2018, available at: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8717593. However some authors disagree about the possibility of making such a parallel: see ČERKA et als. (2015) (n 6), p. 386.
78 David C. VLADECK, "Machines Without Principals: Liability Rules and Artificial Intelligence" in Washington Law Review, Volume 89, Number 1, 2014, pp. 117-150. See also ČERKA et als. (2015) (n 6), p. 386; and SULLIVAN et als. (n 4).
79 Under the common enterprise doctrine, courts may find that a common enterprise exists if, for example, businesses (1) maintain officers and employees in common, (2) operate under common control, (3) share offices, (4) commingle funds, and (5) share advertising and marketing; see FTC v. Wash. Data Res., 856 F. Supp. 2d 1247, 1271 (M.D. Fla. 2012).
80 EU Report, 45; ČERKA et als. (2015) (n 6), p. 384-385 (citing Ugo PAGALLO, The laws of robots: crimes, contracts, and torts, Springer, 2013, p. 98).
81 Id. The authors notably evoke article 12 of the United Nations Convention on the Use of Electronic Communications in International Contracts, under which a person (whether a natural person or a legal entity) on whose behalf a computer was programmed should ultimately be responsible for any message generated by the machine. See also Explanatory note by the UNCITRAL secretariat on the United Nations Convention on the Use of Electronic Communications in International Contracts, para. 213, p. 70, available at: https://www.uncitral.org/pdf/english/texts/electcom/06-57452_Ebook.pdf. DAHIYAT (2010) (n 20) also gives the example of the "Guide to Enactment" accompanying the UNCITRAL Model Law, which provides that "the Data messages that are generated automatically by computers without human intervention should be regarded as "originating" from the legal entity on behalf on which the computer is operated".
This is consistent with many decisions rendered by courts around the world, in which the actions of
automated technologies have been attributed to the person using them and the user has been held
liable even when he was unaware of the operations of his automated machines.82 By considering AI as
a tool, we could therefore hold the person on whose behalf it acts or at whose disposal and supervision
it is (which could be either users or owners of AI) liable for its actions.83 Such users or owners could not
evade liability towards a plaintiff by claiming that they did not instruct the AI to act as it did; however,
they would have the opportunity to claim damages against the manufacturer or designer of AI under
product liability rules when possible (e.g. if they can prove that AI was defective, that such defect existed
while the AI was under the manufacturer or designer’s control, and that the defect caused the damages
suffered by the plaintiff).84
Other authors are however wary of this solution. Indeed, according to some, even if we were to hold
that AI can be assimilated to an agent or tool allowing the application of vicarious liability to its actions,
its autonomy creates challenges that remain difficult to overcome. An agency relationship implies some
form of control by the principal over the agent85 which becomes tenuous as AI’s autonomy increases,86
making it difficult to conceptualize truly intelligent machines as mere agents or tools of humans. In other
words, “a machine that can define its own path, make its own decisions, and set its own priorities may
become something other than an agent. Exactly what that may be, though, is not a question that the law
is prepared to answer.”87 Moreover, due to the ever-changing nature of AI, identification of a specific
liable principal could prove difficult as different stakeholders could be considered the (agent) AI’s
principals at different points in time and/or in different contexts.88
The product liability of producers should apply to emerging technologies, regardless of whether they
are incorporated into hardware. The distinction between products and services makes less and less sense
with respect to IT tools. Since the risks and benefits are the same, whether or not the product is physically
incorporated, the legal regime should be the same89. Damage caused by defective digital content should
trigger the producer's liability, because digital content fulfils many of the functions that tangible movable
items used to fulfil when product liability schemes were drafted and implemented90.
The point in time at which a product is placed on the market should not set a strict limit on the
producer's liability for defects when, after that point in time, the defect results from the producer's
interference or failure to interfere with the product already put into circulation (for example, by way of a
software update that is required to maintain the expected level of safety within the time period for
which the producer is obliged to provide such updates)91. Finally, due to the lack of explainability and
predictability discussed above, there should be no development risk exception (which allows the producer to avoid liability for unforeseeable defects, such as the one set out in the EU Product Liability Directive), at least in cases where it was predictable that unforeseen developments might occur92.
82 DAHIYAT (2018) (n 9), p. 37.
83 ČERKA et als. (2015) (n 6), p. 384-385.
84 Id.
85 See for instance Restatement (Third) of Agency §7.03-7.07 (2006), stating that a principal is subject to vicarious liability for an agent's actions only when the agent is acting within the scope of employment, which is not the case when the employee's act occurs within an independent course of conduct not intended by the employee to serve any purpose of the employer.
86 DAHIYAT (2010) (n 20), p. 106.
87 VLADECK (n 78), p. 145, citing PAGALLO (n 80). See also RACHUM-TWAIG (n 7), p. 12, who holds that "[i]n some cases, no human being could be considered the principal behind the AI-robot acts."
88 RACHUM-TWAIG (n 7), p. 12, who states for instance that "when a corporation (whether designing the product or distributing it) is actually operating it, we may think of the robot as being operated on behalf of such corporation. In other cases, a user may be considered as a principal with respect to a machine that it operates, while the designer of such robot would likely not be considered a principal in this context."
89 JUNOD, …; EU Report, 43.
90 See EU Report, 43: "This is all the more true for defective digital elements of other products, some of which come separately from the tangible item (for example, as a control app to be downloaded onto the user's smartphone), or as over-the-air updates after the product has been put into circulation (security updates for example), or as digital services provided on a continuous basis during the time the product is being used (for example, navigation cloud services)".
91 See EU Report, 43, referring to (i) Directive (EU) 2019/771 on the sale of goods, which recently confirmed that a seller is also liable for such digital elements being in conformity with the contract, including for updates provided for as long a period as the consumer may reasonably expect, and (ii) Directive (EU) 2019/770, which establishes a similar regime for digital content and digital services.
Insurance shifts risks from potentially liable persons to insurance carriers who will defend and indemnify
their insureds for losses and pay for settlements or judgments to resolve third party claims. Insurance
can be fault-based (a system based on tort liability, where each insurance company pays for the damages
sustained by a victim according to the degree of fault of their policyholder) or no-fault (where each
individual insurance company compensates – generally up to a certain threshold – its policyholder for
injuries without regard as to who is responsible). One area in which compulsory insurance (either fault-
based or no-fault) already applies, and which may be a source of inspiration, is that of motor vehicles.
Establishing a compulsory fault-based insurance scheme regarding AI could allow a victim to be easily
indemnified in most cases, but the issues discussed above would then remain for insurers
attempting to allocate liability between their respective policyholders. As for the adoption of a no-fault
compulsory insurance scheme in the field of AI, this could be, according to some authors93 and
policymakers,94 an interesting solution that would make it possible to circumvent the challenges discussed above.
In fact, the United Kingdom – which has a fault-based insurance regime in place for regular vehicles –
has recently enacted the Automated and Electric Vehicles Act 2018, under which an insurer is liable for
damage where an accident is wholly or partly caused by an automated vehicle "when driving itself" that
is insured at the time of the accident,95 irrespective of any specific person's liability
(driver, manufacturer, etc.). It has thus established a form of no-fault – although not yet compulsory96 –
insurance regime for automated (AI-powered) vehicles.
Other authors however raise concerns regarding this solution, notably the "lack of deterrence"
effect of no-fault insurance regimes on humans' and/or AI's behavior at large, the difficulty of
imposing such a system on all stakeholders involved with AI and the fact that it might be extremely
challenging to determine insurance premiums in this context.97 In any case, this solution may
unsatisfactorily cover the foreseeability issues discussed above since insurers could potentially attempt
to exclude unforeseeable damages from their coverage. Moreover, although mandatory insurance is an
interesting option for AI-powered items in sector-specific fields where regular (non-AI) products are
already insured, such as the field of vehicles, it may not be appropriate or feasible when dealing with
products that do not normally require insurance, at least not in the near future.
insurance scheme to work, insurers notably need sufficient data to assess the expected frequency and
size of claims, sufficient similarity in the risks being covered, sufficient insurance/reinsurance capacity
and adequate competition, which is currently not the case for AI-powered items in general (outside of
any sector-specific items).98
92 EU Report, 44, which also discusses the difficulty for the average user to prove facts such as the expected level of safety, and the capacity of the producer to prove such relevant facts (asymmetry), and which consequently justifies the reversal of the burden of proof and an alleviation of the evidentiary burden with regard to the causal relationship between a defect and the damage. While we fully support this type of mechanism (reversal or alleviation of the burden of proof), it should in our view apply to other liability regimes as well, as the underlying justification (asymmetry of available information between the average consumer and the producer) is relevant for those regimes too.
93 See notably Jin YOSHIKAWA, "Sharing the Costs of Artificial Intelligence: Universal No-Fault Social Insurance for Personal Injuries" in Vanderbilt Journal of Entertainment and Technology Law, 2019, Volume 21, issue 4, pp. 1155-1187; JUNOD (n 35), p. 135.
94 See notably European Parliament Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (n 29), par. 57-59.
95 Automated and Electric Vehicles Act 2018, chapter 18, article 2(1).
96 Indeed, article 2(2) of the Act also provides that where an accident is wholly or partly caused by an automated vehicle which is driving itself at the time but is not insured, the registered owner is liable for the loss and damage.
97 RACHUM-TWAIG (n 7), p. 29-32.
98 INSURANCE EUROPE, "Insight briefing - Compulsory insurance: when it works and when it doesn't", available at: https://www.insuranceeurope.eu/compulsory-insurance-when-it-works-and-when-it-doesn-t; LLOYD'S, Autonomous Vehicles Handing Over Control: Opportunities and Risks for Insurance, 2014, p. 8.
Instead of considering new liability principles (solutions that require certain amendments of the current
liability regimes), one should consider simply adapting current fault-based liability regimes with
enhanced duties of care and clarifications regarding shared liability and solidarity between tortfeasors,
which could potentially be done through case-law in most jurisdictions.
Adapting current fault-based liability regimes should be contemplated, simply by enhancing the
negligence principles with supplementary rules that will set a predetermined acceptable level of care,
applicable to producers and operators of emerging technologies99. This fault liability can apply
exclusively or cumulatively with other strict liability regimes. In other words, these enhanced duties of
care are without prejudice to any other liability regimes that may apply or be developed (e.g. enhanced
product liability or vicarious liability regimes).
Under this solution, the stakeholders involved with AI who are best situated to implement those supplementary
rules (and thus prevent or mitigate potential harms) – whether manufacturer, designer, programmer,
operator or end-user, as the case may be100 – and who fail to meet this level of care would be exposed to
liability under a presumption of negligence. On the other hand, meeting the level of care would trigger
the application of the basic negligence rule and plaintiffs would have to prove actual negligence, forming
a quasi-safe-harbor for the stakeholder concerned.101 This solution would make it possible to circumvent one of the
important shortcomings of the product liability regime – i.e. its applicability to products only – as well
as the unique problems related to modern AI such as foreseeability and agency.
Amongst the contemplated obligations that could lead to this quasi-safe-harbor are:
- With respect to operators, they should have to comply with an adapted range of duties of care,
relating to: (a) the choice of technology, in particular in light of the tasks to be performed
and the operator's own skills and abilities; (b) the organizational framework provided, in
particular with regard to proper monitoring; and (c) maintenance, including any safety checks
and repairs. Failure to comply with such duties may trigger fault liability regardless of whether
the operator may also be strictly liable for the risk created by the technology102.
- With respect to producers, while the risk of insufficient skills should still be borne by the
operators, it would be unfair to leave producers entirely out of the equation. Rather, producers,
whether or not they incidentally also act as operators in the sense described above, should have to: (a)
design, describe and market products in a way effectively enabling operators to comply with
the operator's duties; and (b) in the light of the characteristics of emerging digital technologies,
in particular their openness and dependency on the general digital environment, including the
emergence of new malware, adequately monitor the product after putting it into circulation.
This (superior) monitoring duty implies supervising and studying the AI even after release,103
for instance by implementing anomaly-based monitoring systems programmed to give a
warning when an AI behaves in an unexpected manner, as well as by upstream observation of
the AI's tendencies in order to predict such behaviours104 (a minimal sketch of such a
monitoring and shut-down hook is given after this list). Once such monitoring is implemented,
a duty to inform potential victims of the AI would follow.105
99 EU Report, 16-17. RACHUM-TWAIG (n 7), p. 32.
100 Id., p. 36-38.
101 RACHUM-TWAIG (n 7), p. 33.
102 EU Report, giving the following illustration (illustration 9): "Despite adverse weather conditions due to a heavy storm, which were entirely foreseeable, retailer (R) continues to employ drones to deliver goods to customers. One of the drones is hit by a strong wind, falls to the ground and severely injures a passerby. R may not only be strictly liable for the risks inherent in operating drones, but also for its failure to interrupt the use of such drones during the storm".
103 JUNOD (n 35), p. 136.
104 Id., p. 33.
105 Id., p. 34.
When feasible, producers should be required to include mandatory backdoors ("emergency brakes"
by design), shut-down capabilities, or features allowing operators or users to shut down the AI
or make it "unintelligent" at the press of a button. Not doing so would be considered a design
defect under the products liability doctrine. Depending on the circumstances, manufacturers or
operators could also be required to shut down the robots themselves as part of their monitoring
duties.106
Similar to the already-existing post-sale duties of warning and instruction and to recall defective
products, producers could also have support and patching duties.107 This suggested duty is
consistent with other recent developments regarding software developers' potential obligation
to update insecure software; indeed, although no law yet contains a clear and explicit
obligation to do so, some courts have started to interpret existing legal norms in a way that
creates such an obligation.108
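By way of a purely illustrative sketch of what such an anomaly-based monitoring hook combined with an "emergency brake" could look like in practice – the module, threshold and function names below are hypothetical assumptions made for illustration, not requirements drawn from any statute or from the sources cited above:

# Illustrative sketch only: a hypothetical post-release monitoring hook
# combining anomaly detection with a shut-down ("emergency brake") capability.
from dataclasses import dataclass, field
from statistics import mean, pstdev
from typing import Callable, List, Optional


@dataclass
class AnomalyMonitor:
    """Flags decisions that deviate strongly from the behaviour observed so far."""
    threshold_sigma: float = 4.0                 # hypothetical tolerance, to be set per product
    history: List[float] = field(default_factory=list)

    def is_anomalous(self, score: float) -> bool:
        if len(self.history) >= 30:              # only judge once enough data is available
            mu, sigma = mean(self.history), pstdev(self.history) or 1e-9
            anomalous = abs(score - mu) > self.threshold_sigma * sigma
        else:
            anomalous = False
        self.history.append(score)
        return anomalous


class SupervisedAISystem:
    """Wraps an AI component so that it can be halted at the press of a button."""

    def __init__(self, model: Callable[[float], float], notify: Callable[[str], None]):
        self.model = model
        self.notify = notify                     # e.g. warn the operator or potential victims
        self.monitor = AnomalyMonitor()
        self.stopped = False

    def emergency_brake(self, reason: str) -> None:
        self.stopped = True                      # the system becomes "unintelligent"
        self.notify("AI halted: " + reason)

    def act(self, observation: float) -> Optional[float]:
        if self.stopped:
            return None
        decision = self.model(observation)
        if self.monitor.is_anomalous(decision):  # unexpected behaviour detected
            self.emergency_brake("unexpected behaviour detected")
            return None
        return decision

Under the enhanced duties of care described above, failing to provide a comparable monitoring or shut-down feature where feasible is precisely what could be treated as a design defect or as negligence.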
With AI, the number of stakeholders, the interconnectedness of emerging digital technologies and
their increased dependency on external input and data make it increasingly difficult to determine whether the damage
at stake was triggered by a single original cause or by the interplay of multiple (actual or potential)
causes109. Even if something is proven to have triggered the harm (for example, because an autonomous
car collided with a tree), the real reason for it is not always equally evident.
Tort law regimes handle these cases of multiple potential sources of harm quite differently. When it
remains unclear which one of several possible causes had the decisive influence in triggering the harm, the
classic response of existing tort laws in such cases of alternative causation is that either all parties are
jointly and severally liable (which is undesirable for those who did not in fact cause harm), or that no-
one is liable (since the victim cannot prove which cause was decisive) (which is undesirable for the
victim)110. The problem of who really caused the harm in question will therefore often not be solved in
the first round of litigation initiated by the victim, but on a recourse level, if ever.
To remedy these cases of alternative causation, the following solutions should be contemplated:
- With respect to the victim, when more than one person is liable for the same damage and
where it remains unclear which one of several possible causes had the decisive influence in
triggering the harm, there shall be joint liability of all tortfeasors, i.e. the victim may request
payment of the full sum or part of the sum from any of the multiple tortfeasors, at the victim's
discretion, but the total sum requested may not exceed the full sum due. In any case, there shall
be joint liability when tortfeasors act with knowledge of the other tortfeasors' wrongful conduct
(wrongful cooperation)111.
- With respect to the recursory action, each tortfeasor should be liable only for its individual
share of responsibility for the damage, when only part of the damage can be attributed to one or
more tortfeasors (identified shares), unless some of them form a commercial and/or
technological unit, in which case the members of this unit should be jointly and severally liable
for their cumulative share also to the tortfeasor seeking redress112.
106 RACHUM-TWAIG (n 7), p. 35.
107 Id.
108 See for instance Pieter Wolters' analysis of the Dutch Consumentenbond v. Samsung decision in his recent article: Pieter T.J. WOLTERS, "The obligation to update insecure software in the light of Consumentenbond/Samsung" in Computer Law & Security Review, Volume 35, Issue 3, May 2019, pp. 295-305. According to this decision, software developers would have a general duty of care to update (e.g. make their product conform) and to provide security updates to consumers that bought their product from an intermediary. This duty of conformity could be extended to extracontractual obligations; such an obligation could also perhaps exist under a general duty of care in some jurisdictions. Failure by software developers to do so could be seen as negligence or a fault.
109 EU Report, 22.
110 EU Report, 22, quoting B. WINIGER et al. (eds), Digest of European Tort Law I: Essential Cases on Natural Causation (2007), p. 387 ff.
111 Under Swiss law, see Vincent PERRITAZ, p. 63 ss: in the first case, the term "perfect solidarity" (solidarité parfaite) within the meaning of CO 50 is used and, in the second case, the term "imperfect solidarity" (solidarité imparfaite) within the meaning of CO 51 is used.
112 EU Report, 23, giving the following illustration: "The producer of hardware has a contract with a software provider and another one with the provider of several cloud services, all of which have caused the damage, and all of which collaborate on a contractual basis. Where another tortfeasor has paid compensation to the victim and seeks redress, the three parties may be seen as a commercial unit, and the paying tortfeasor should be able to request payment of the whole cumulative share from any of the three parties."
Such a unit rule may apply when the parties have joint or coordinated marketing for their respective elements (commercial
unit) or when their elements present a technical interdependency and interoperation (technical
unit). When no individual shares can be identified between tortfeasors, each potential tortfeasor
shall be liable for a quota corresponding to the likelihood that it in fact caused the
harm in question (proportional liability)113 (a simple numerical illustration is given below).
This solution is also in the interests of efficiency, as parties are incentivised to make contractual
arrangements for tort claims in advance114.
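As a simple numerical illustration of these allocation rules (all figures are invented for the purpose of the example): assume a victim suffers a loss D = 100 caused by three potential tortfeasors A, B and C. Towards the victim, joint liability applies, so the victim may claim the full 100 from any of them. On the recourse level, the redistribution can be written as:

\[
\text{share}_i = D \times q_i, \qquad \sum_i q_i = 1
\]
\[
D = 100,\quad q_A = 0.5,\; q_B = 0.3,\; q_C = 0.2 \;\Rightarrow\; \text{share}_A = 50,\; \text{share}_B = 30,\; \text{share}_C = 20
\]

Under identified shares, the \(q_i\) are the established individual contributions (and if, say, B and C form a technological unit, they answer jointly for 50 towards the tortfeasor seeking redress); under proportional liability, the \(q_i\) are the estimated likelihoods that each tortfeasor in fact caused the harm.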
III. Calculating the Damages
1. General considerations
The main purpose of tort law is to indemnify victims for losses they should not have to bear entirely
themselves, on the basis of an assessment of all the interests involved. Traditionally, such indemnification
is governed by the compensation principle (according to which only the actual harm must be
compensated)125. With AI, there may be several types of harm, ranging from injury to a person or to
physical property (e.g. a self-driving car crashing into a pedestrian or a house), and damage resulting from the
infringement of an intellectual property right or a privacy rule, to pure economic loss (e.g. the costs
of repairing damaged data).
However, only compensable harm will be indemnified, meaning damage to a limited range of interests
that a legal system deems worthy of protection126. While there is unanimous accord that injuries to a
person or to physical property, and damage resulting from the infringement of an absolute right, are
compensable harms, this is not universally accepted for pure economic loss (including damage to data,
such as the alteration or suppression of data)127. Pure economic loss may nevertheless be compensated via
contractual liability (e.g. an insurance contract extending coverage to these losses). Policy-makers and
courts also tend to recognize damage to data.
While compensation for damage to physical property or for bodily injuries does not raise specific issues, and
compensation for pure economic losses is rather a matter for policy-makers (whether or not recognizing
damage to data per se) or for contractual liability (whether or not the insurance or service contract
covers such losses), damage resulting from the infringement of an absolute right deserves more analysis,
in particular as its intangible nature makes the damage less quantifiable.
2. Intellectual property right infringements
Tort liability applies where the relevant data is protected by intellectual property law or a similar regime,
such as database protection or trade secret protection (collectively referred to as "IPR"), and is used in
connection with an AI (e.g. as input to feed an AI). However, the quantification of damages is delicate.
113 See I. GILEAD/M. GREEN/B.A. KOCH (eds), Proportional Liability: Analytical and Comparative Perspectives (2013).
114 EU Report, 23.
125 Yaniv BENHAMOU, Compensation of Prejudice for Infringements of Intellectual Property Rights in France, under the Directive 2004/48/EC and Its Transposition Law: New Notions?, International Review of Intellectual Property and Competition Law (IIC), 2009, 126.
126 Some scholars consider that any type of harm caused by AI should be compensated for; otherwise users and consumers will be left without proper compensation for their injuries. See VLADECK, 128. Other authors consider that an injury must be compensated for only if the injurer has a correlative duty to refrain from inflicting the harm to begin with (corrective justice approach) or only if it is efficient to do so under a cost-benefit analysis (e.g. in order to internalize negative externalities or in order to deter wrongdoers from doing wrong in the first place) (economic analysis approach). See Guido CALABRESI & A. Douglas MELAMED, Property Rules, Liability Rules, and Inalienability: One View of the Cathedral, 85 HARV. L. REV. 1089 (1972); SHAVELL, supra note 81, at 297-98.
127 Damage caused by self-learning algorithms on financial markets, for example, will therefore often remain uncompensated, because some legal systems do not provide tort law protection of such interests at all, or only if additional requirements are fulfilled, such as a contractual relationship between the parties or the violation of some specific rule of conduct.
The quantification of damages varies between jurisdictions, but damages are mostly of two kinds: actual
damage, which is defined as the claimant's loss incurred or lost profits128, and unfair profits, which
refer to the profits unduly made by the infringer through the infringement of the right. Actual damage or unfair
profits may be relevant when the relevant data forms the major part of an AI (e.g. when software is
used for analytic purposes, or a whole database is used for feeding the AI). However, actual damage may be
difficult to claim when the claimant is an SME without the capacity to commercialize the data, as he or she
will fail to prove that there was a decline in, or a non-increase of, his or her turnover (actual damage) and that,
in the absence of the IP infringement, he or she would have sold his or her products instead of the infringer (lost profits).
Claiming unfair profits may also be delicate when the relevant data forms part of a complex multifaceted
device – which will often be the rule (e.g. when several interconnected elements compose a particular process
or environment, such as in the Internet of Things context) – as only the profits attributable to the
infringement are recoverable, and they must be reduced accordingly if other factors contributed to those profits,
such as non-infringing components incorporated into a multifaceted device129. Another difficulty arises when
the relevant data is not recognizable in the output (e.g. because the input is used simultaneously with
thousands of other inputs to generate a single output)130 or not even expressed in the output (e.g. because
the input is used for training purposes only)131, it being recalled that most jurisdictions consider that
there is an act of reproduction no matter whether the input exists or is recognizable in the output132.
In these cases (multifaceted device, or no input used or recognizable in the output), when the claimant
cannot claim unfair profits due to a delicate causation test, or actual damage due to a lack of
commercialization capacity of his or her own, the claimant may rely on a royalty fee provided by many legislations and/or granted by case-
law as a minimum standard in lieu of the actual damage (i.e. without proof of a lost royalty fee)133.
The reasonable royalty is assessed on a case-by-case basis, usually by reference to comparables (i.e.
previous licensing agreements, tariffs or recommendations of the respective sectors) or to a hypothetical
negotiation (i.e. based on what "reasonable parties" would have agreed on based on all the
circumstances and with full knowledge of the relevant facts)134. The principles set out in the patent-
related US decision Georgia-Pacific Corp. v. United States Plywood Corp. might be relevant to
determine the amount of the hypothetical license fee, and are often quoted by case-law outside the
US as well135. Consequently, the claimant may claim damages up to the amount he or she would have claimed as a
royalty fee in a contract with the tortfeasor for the use of the relevant data. No matter whether the data
is present or recognizable in the output, and whether the claimant is able to prove actual damage
or unfair profits, he or she may be able to receive this amount as a minimum indemnification. The share
to which the royalty fee applies shall be the identified share of each tortfeasor (identified shares) (unless
there is a commercial and/or technological unit) or, when no individual shares can be identified, the
quota corresponding to the likelihood that each of them in fact caused the harm in question (proportional
liability).
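By way of a purely hypothetical illustration of this reasonable royalty approach (every figure below is an assumption made for the example, not a benchmark drawn from case-law): if reasonable parties would plausibly have agreed on a royalty of 5% of the revenue generated by the AI-based product, the product generated 1,000,000 in revenue, and the claimant's data is assessed as one of four technically equivalent inputs (apportioned share of 25%), the minimum indemnification would be:

\[
\text{Damages}_{\min} = \text{royalty rate} \times \text{revenue} \times \text{apportioned share} = 0.05 \times 1\,000\,000 \times 0.25 = 12\,500.
\]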
128 AIPPI, Summary report, 2017 Study question on Quantification of monetary relief, 3 ss.
129 For a discussion regarding the delicate calculation of unfair profits, with references to case-law and doctrine (notably the American Apple v. Samsung case), see BENHAMOU, Damages, profits, statutory and punitive damages, in: ALAI Enforcement of Intellectual Property Rights, Montreal (Themis) 2020, 100 ss.
130 Think of the Edmond de Belamy portrait, the first AI-generated painting sold at an auction house (for USD 432'000), based on 15'000 portraits (http://obvious-art.com/edmond-de-belamy.html), or of Google's Deep Dream, trained on open access images.
131 Think of making a copy of a student's papers for the purposes of detecting plagiarism.
132 This is linked to the broad interpretation of the reproduction right, which covers identical, partial, direct or indirect reproduction by any means, in whole or in part. The reproduction right covers the "exclusive right to authorise or prohibit direct or indirect, temporary or permanent reproduction by any means and in any form, in whole or in part" (Copyright Directive 2001/29/EC; CJEU, Infopaq, C-5/08, § 51). See STROWEL, 12.
133 For EU law, see art. 13 of Directive 2004/48 of 29 April 2004 on the enforcement of intellectual property rights ("Enforcement Directive") ("as an alternative to [lost profit and unfair profits, Courts] may, in appropriate cases, set the damages as a lump sum on the basis of elements such as at least the amount of royalties or fees which would have been due if the infringer had requested authorisation to use the intellectual property right in question"). For national transpositions into German and French law and several cases granting a reasonable royalty fee, see BENHAMOU, 37 ss. For American law and several cases granting a reasonable royalty fee, see BENHAMOU, 87 ss.
134 See JENNY, Die Eingriffskondiktion bei Immaterialgüterrechtsverletzungen: unter Berücksichtigung der Ansprüche aus unerlaubter Handlung und unechter Geschäftsführung ohne Auftrag, Zurich/Basel/Geneva 2005, 317: depending on the concrete needs of the parties, the reasonable royalty can be a lump sum, a per-unit royalty, a percentage of revenues, or a combination of the aforementioned.
135 Georgia-Pacific Corp. v. United States Plywood Corp., 446 F.2d 295, 170 USPQ 369 (2d Cir. 1971), cert. den., 404 U.S. 870 (1971), which set out the factors relevant to the hypothetical negotiation, including the licensor's established policy and marketing program to maintain its monopoly by not licensing others to use the invention.
3. Privacy violations
Tort liability also applies when the relevant data is personal information protected by privacy rules. A
separate sum of money may also be claimed in the event of an injury to personality rights (e.g.
infringement of the moral rights of the copyright owner), whose seriousness may justify an additional,
separate award. Thus, depending on the nature of the data, the claimant may rely on such an additional
injury to personality rights, as well as on statutory damages provided in certain jurisdictions.
In the authors' view, the above calculation methods should apply equally to personal data breaches, at
least when the data is tradable in a similar way to IPR. Indeed, personal data has become tradable, as users
commonly consent to make their personal data available in exchange for other services. In a standard
business model for the internet, those data are used by online platforms (social networks, search
engines, content streaming, etc.) to offer targeted advertising for other products or services. Although
data protection authorities are reluctant to consider personal data as a simple commodity and it is
difficult to put a price on data, the new framework for data protection has somewhat validated the idea
that personal data are part of market exchanges and that "contractual practice treats data like property
rights". Consequently, each time personal data is used without authorization, the data subject should
be able to claim damages in the form of a royalty fee, if not unfair profits based on the whole
end-product. It must however be recalled from the outset that not every unauthorized use of an IPR leads
to damages. In particular, courts tend to conclude that there are no damages in the case of works subject to
open access136, specifically open licenses137, as the copyright owner intended to distribute his work
freely. Similarly, not every unauthorized use of personal data will lead to damages; this will depend
on the relevant market, i.e. whether the relevant marketplace draws benefits in exchange for the personal
data.
4. Economic methods and "flat-rating" damages
Given the difficulties in calculating the damage, and in order to take into account the specificities of IPR and
privacy rights138, additional economic methods may be considered to calculate the damages in general,
such as the Discounted Cash Flow method (DCF) and the Financial Indicative Running Royalty Model
(FIRRM), and the Royalty Rate Method for reasonable royalty calculations specifically139. Case-law
about Fair, Reasonable and Non-Discriminatory license terms (FRAND) in disputes about the licensing
of Standard Essential Patents (SEP) may also provide relevant data points to set reasonable royalties. If courts are
still reluctant to rely on economic methods, it is because plaintiffs do not bring this type of evidence and
because not all courts are familiar with economic methods. We further believe that each method gives a
useful index value for the courts (a sort of benchmark to calculate the damages) and that the solution
consists of a combination of these methods. Finally, this path is in line with the increasing complexity
of IP infringements (e.g. software incorporated into multi-component products, or infringements of
online content without visible loss or lost profits).
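For reference, the core of the DCF method mentioned above can be summarised by the standard present-value formula; how the cash flows attributable to the infringed asset and the discount rate are estimated remains a case-by-case, typically expert-driven exercise:

\[
PV = \sum_{t=1}^{T} \frac{CF_t}{(1+r)^t},
\]

where \(CF_t\) denotes the incremental cash flow attributable to the protected data or right in period \(t\), \(r\) the risk-adjusted discount rate and \(T\) the period over which the asset is expected to generate value.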
136 Open access is understood here as the possibility to view the work, which may be either fully unrestricted (in particular covering the right to reproduce, share, and disseminate the digitized work) or restricted (in particular permitting users to view but not to reproduce, share, and disseminate the digitized work).
137 Open licenses are understood here as standardized licenses (whether partly restricted or not), such as those proposed by certain organizations, such as Creative Commons for literary and artistic works (creativecommons.org) or the General Public License (GPL) for software by the Free Software Foundation.
138 BENHAMOU, Dommages-intérêts suite à une violation de droit de la propriété intellectuelle, Berne 2012, p. 5 ss (for the specificities of IP rights in general) and 7 ss (for the repercussions on the quantum of damages).
139 For an in-depth analysis of the calculation of damages based on these economic methods, see BENHAMOU, 286 ss. For the DCF method, see CHAPPUIS, Quelques dommages dits irréparables. Réflexions sur la théorie de la différence et la notion de patrimoine, SJ 2010, 165, 269 ss.
How much personal data is worth is a question with no easy answer. Personal data is a valuable asset140, but the actual value of a given personal data point or dataset is
context-dependent: the value varies in particular according to the category of personal data (e.g. basic
usage data, such as data regarding age, gender, ethnicity or zip code, has been estimated at USD 0.005 per record,
while specific data regarding credit history, criminal records, bankruptcies or convictions has been estimated
at USD 45 per record), the business model (e.g. usage-based pricing, package pricing, flat pricing and
freemium), and the stock value of a firm or its profit per record (e.g. the value per record of a big data
broker is about USD 1 per record)141. Without claiming to be exhaustive, and as a very simple path for
the future, a chart combining these parameters may be contemplated to help lawyers and courts when pricing data and
flat-rating damages.
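As a purely illustrative sketch of how such a chart could be operationalised – using only the per-record estimates cited above and a hypothetical breach scenario; the function, category labels and record counts are assumptions, not settled valuation practice:

# Illustrative sketch only: a rough "flat-rating" benchmark based on the
# per-record value estimates cited in the text; the breach scenario is hypothetical.
PER_RECORD_USD = {
    "basic_usage": 0.005,       # age, gender, ethnicity, zip code (estimate cited above)
    "sensitive": 45.0,          # credit history, criminal records, bankruptcies, convictions
    "data_broker_profit": 1.0,  # approximate value per record for a large data broker
}

def flat_rate_benchmark(records_by_category: dict) -> float:
    """Return a first, indicative benchmark figure (an index value, not a legal conclusion)."""
    return sum(PER_RECORD_USD[category] * count
               for category, count in records_by_category.items())

# Hypothetical breach: 10,000 basic usage records and 200 sensitive records.
print(flat_rate_benchmark({"basic_usage": 10_000, "sensitive": 200}))  # 9050.0

Such a figure would only be one index value among others, to be combined with the methods discussed above and adjusted by the court to the circumstances of the case.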
IV. CONCLUSION
Existing liability regimes already offer basic protection to victims, to the extent that the specific
characteristics of emerging technologies are taken into account. Consequently, instead of considering
new liability principles (solutions that require certain amendments of the current liability regimes), one
should consider simply adapting current fault-based liability regimes with enhanced duties of care and
clarifications regarding shared liability and solidarity between tortfeasors, which could potentially be done
through case-law in most jurisdictions.
When it comes to the calculation of damages, given the difficulties in calculating the damage and in order to take
into account the specificities of IPR or privacy rights, economic methods may be considered to calculate
the damages in general, such as the Discounted Cash Flow method (DCF) and the Financial Indicative
Running Royalty Model (FIRRM). To set reasonable royalties, the Royalty Rate Method as well as
case-law about Fair, Reasonable and Non-Discriminatory license terms (FRAND) in disputes about the
licensing of Standard Essential Patents (SEP) may also provide relevant data points. This path will lead to a certain
"flat-rating" of damages ("barémisation" or "forfaitisation"), at least when IPR and personal data are
illegally used by AI tools and are mostly not visible, hence barely quantifiable in terms of damages.
***
140 Rodrigo ZAPATA, How much is data worth?, available at: https://www.researchgate.net/publication/328692193_How_much_is_data_worth.
141 Personal data seems to be extremely valuable in the hands of sophisticated data processors, such as Facebook, whose online marketing consumer targeting services represent 97% of its revenue. In 2018, the average revenue per user was USD 5.97 worldwide, USD 25.91 in the US & Canada and USD 8.76 in Europe. See ZAPATA.