AI Black Box and Transparency
Stefan Larsson
In Droit et société 2019/3 (No. 103), pages 573 to 593
Éditions Lextenso
ISSN 0769-3362
DOI 10.3917/drs1.103.0573
Stefan Larsson
Lund University, Department of Technology and Society, Box 118, 221 00 Lund, Sweden.
<[email protected]>
Summary This article draws on socio-legal theory in relation to growing concerns over
fairness, accountability and transparency of societally applied artificial
intelligence (AI) and machine learning. The purpose is to contribute to a
broad socio-legal orientation by describing legal and normative challenges
posed by applied AI. To do so, the article first analyzes a set of problematic
cases, e.g., image recognition based on gender-biased databases. It then
presents seven aspects of transparency that may complement notions of
explainable AI (XAI) within AI-research undertaken by computer scientists.
The article finally discusses the normative mirroring effect of using human
values and societal structures as training data for learning technologies; it
concludes by arguing for the need for a multidisciplinary approach in AI
research, development, and governance.
Algorithmic accountability and normative design – Applied artificial intelligence – Explainable AI and algorithmic transparency – Machine learning and law – Technology and social change.
1. Cathy O’Neil, computer scientist and author of the book, Weapons of Math Destruction (2016).
2. I would like to extend my thanks to the International Institute of the Sociology of Law in Oñati, the
Basque Country, for my research stay in June and July 2018, and for allowing me to use their well-stocked
library while preparing an early draft of this article.
3. Stefan LARSSON, “Sociology of Law in a Digital Society—A Tweet from Global Bukowina”, Societas/
Communitas, 15 (1), 2013, p. 281-295; cf. Danièle BOURCIER, “De l’intelligence artificielle à la personne
virtuelle : émergence d’une entité juridique ?”, Droit et Société, 49, 2001, p. 847-871.
4. Or BIRAN and Courtenay COTTON, “Explanation and Justification in Machine Learning: A Survey”, IJCAI-17
Workshop on Explainable AI (XAI), 2017.
5. Cf. Iyad RAHWAN, “Society-in-the-Loop: Programming the Algorithmic Social Contract”, Ethics and
Information Technology, 20 (1), 2018, p. 5-14.
6. Susan Leigh ANDERSON, “Asimov’s ‘Three Laws of Robotics’ and Machine Metaethics”, AI & Society,
22 (4), 2008, p. 477-493.
7. Cf. Nick BOSTROM, Superintelligence: Paths, Dangers, Strategies, Oxford: Oxford University Press, 2014.
8. Arthur SAMUEL, “Some Studies in Machine Learning Using the Game of Checkers”, IBM Journal of
Research and Development, 3 (3), 1959, p. 210-229.
9. AI HLEG, “Draft Ethics Guidelines for Trustworthy AI,” 18 December 2018, <https://ec.europa.eu/
digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai>.
10. ID., Ethics Guidelines for Trustworthy AI, Brussels: The European Commission, 2019.
11. REGERINGSKANSLIET, Nationell inriktning för artificiell intelligens. Näringsdepartementet, 2018, p. 10.
I.1. Fairness
There are a number of examples where unintended social prejudices are repro-
duced or automatically strengthened by AI systems which often only become appar-
ent following rigorous study. A few examples:
— Computer science researchers at the University of Virginia discovered that some popular image databases had a gender-based bias which portrayed, for example, cooking and shopping as activities mainly associated with women, and that models trained on these databases did not merely mirror this bias but amplified it. 14
12. E.g., see <https://www.fatml.org>; For an overview of research on ethical, social and legal consequenc-
es of AI, see Stefan LARSSON, Mikael ANNEROTH, Anna FELLÄNDER et al., Sustainable AI: An Inventory of the
State of Knowledge of Ethical, Social, and Legal Challenges Related to Artificial Intelligence, Stockholm: AI
Sustainability Center, 2019.
13. For an analysis on the conceptual origins and background of “transparency” with regards to AI, see
Stefan LARSSON and Fredrik HEINTZ, “AI Transparency”, Internet Policy Review, 2019 (forthcoming).
14. As reported in Wired, “Machines taught by photos learn a sexist view of women”, by Tom SIMONITE,
21 August 2017: <https://www.wired.com/story/machines-taught-by-photos-learn-a-sexist-view-of-women/amp>;
for a study, see Jieyu ZHAO, Tianlu WANG, Mark YATSKAR, Vicente ORDONEZ and Kai-Wei CHANG, "Men also
like shopping: Reducing gender bias amplification using corpus-level constraints”, arXiv preprint, 2017,
arXiv:1707.09457.
15. The study was carried out and published by civil rights-motivated investigative journalists at
ProPublica, “Machine Bias”, by Julia ANGWIN, Jeff LARSON, Surya MATTU and Lauren KIRCHNER, 23 May 2016,
<https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing>.
— A study by investigative journalists at ProPublica 15 of risk-assessment predictions, i.e., the probability of relapse into crime, showed that the so-called COMPAS system 16 was more likely to incorrectly flag black offenders as high risk while simultaneously, and incorrectly, predicting the opposite where white offenders were concerned. 17
— In an effort to improve transparency in automated marketing distribution, a research group developed a software tool to study digital traceability and found that such marketing practices had a gender bias: well-paid job offers were shown more often to men than to women. 18
— A study of three commercial image recognition systems that classify gender showed that the most frequently miscategorized group was dark-skinned women. 19 This means, among other things, that these services, and the applications based on them, work poorly for people with certain physical characteristics, while the margin of error for white males is significantly narrower.
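Such disparities are typically only visible when a system's performance is evaluated separately for each demographic group rather than as a single aggregate figure. As a minimal illustration of this kind of disaggregated audit (the data, group labels and numbers below are invented for the example and are not taken from any of the studies cited above), the following Python sketch compares error rates per group:

import numpy as np

# Hypothetical evaluation data: true labels, model predictions and a
# demographic group attribute for each test example (illustrative only).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0])
group  = np.array(["a", "a", "a", "b", "b", "b", "a", "a", "b", "b", "a", "b"])

# A single aggregate accuracy figure hides how errors are distributed.
print("overall accuracy:", np.mean(y_true == y_pred))

# Disaggregated evaluation: error rate per demographic group.
for g in np.unique(group):
    mask = group == g
    error_rate = np.mean(y_true[mask] != y_pred[mask])
    print(f"group {g}: error rate {error_rate:.2f} (n = {mask.sum()})")

In this invented example the overall accuracy looks acceptable while one group carries most of the errors, which is precisely the pattern that the studies above made visible by reporting results per skin type and gender rather than in aggregate.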
The term "bias" is also used in statistics and computer science and therefore carries several different meanings; this ambiguity creates some confusion around the term and can complicate mutual understanding between social scientific and techno-scientific uses of the concept. 20 In the present context, I will use the term "social bias", grounded in a socio-legal understanding of social norms and cultural values.
Value-based discussions surrounding machine learning and AI are often con-
ducted in terms of “ethics”, as in the report Ethically Aligned Design, published by
the global technical organization IEEE. 21 Such discussions on the topic of “ethics”
and artificial intelligence, in this context, reflect a broad understanding that we as a
society need to reflect on values and norms in AI developments, as well as—and
this understanding is gaining force in social scientific literature—the impact AI is
having on us, on society, and the values, culture, power and opportunities that are
reproduced and reinforced by autonomous systems. Therefore, the use of the concept of "ethics" can here be understood as
a kind of proxy; i.e., it represents a conceptual platform with the capacity to bring
together the diverse groups that develop these methods and technologies—i.e.,
mathematicians and computer scientists—with groups that commercialise and
implement them in the market, as well as those groups that study these methods
and technologies and their role in society from a social scientific and humanities-
oriented perspective, in order to gain a better understanding of their impact. Dis-
cussions on ethics in AI will, in time, likely be replaced by more clearly defined
concepts in the areas of regulation, industry standards, certifications, and more in-
depth analyses of culture, power, market theory, norms, etc., in the main areas of
traditional scientific fields. For many years, sociologists of law have studied legitimacy in terms of social norms, in line with Émile Durkheim's "social facts", 22 Eugen Ehrlich's "living law" 23 or Roscoe Pound's "law in action", 24 which treat social norms as something that can be empirically measured and is widely dispersed in social structures, but has not necessarily been formalised as law "in books". 25
The fact that computerised systems may be biased or embody socially problematic or one-sided cultural values is not necessarily new knowledge, 26 but the pace at which such systems are being developed, and society's dependence on them, is now greater than ever, with consequences for key social functions such as credit rating, employment opportunities, health care, and the dissemination of knowledge and news. 27 For example, an analysis of two large, publicly available image data sets found that they exhibit what was called an observable "amerocentric and eurocentric representation bias". 28 That is, they were skewed towards cultural expressions in the western world, resulting in a lack of precision for expressions in the developing world. Furthermore, social, political, economic and
cultural aspects of search engines, for example, have been the subject of a large
number of studies, 29 as have the cultural implications of policies on obscene or taboo language. 30
30. Rex L. TROUMBLEY, Taboo Language and the Politics of American Cultural Governance, Doctoral disser-
tation, University of Hawai’i at Manoa, 2015.
31. Safiya NOBLE, Algorithms of Oppression: How Search Engines Reinforce Racism, New York: New York
University Press, 2018.
32. It is sometimes attributed to American sociologist John McKnight, cf. William NORTON, Cultural Geography:
Environments, Landscapes, Identities, Inequalities, Oxford: Oxford University Press, 2013. A number of
studies suggest a long‐standing relationship between geography, race and contemporary housing and credit
markets; cf. Jesus HERNANDEZ, “Redlining Revisited: Mortgage Lending Patterns in Sacramento 1930-2004”,
International Journal of Urban and Regional Research, 33 (2), 2009, p. 291-313.
33. Safiya Noble in Robyn CAPLAN, Joan DONOVAN, Lauren HANSON and Jeanna MATTHEWS, Algorithmic
Accountability: A Primer, op. cit., p. 4.
34. Robyn CAPLAN, Joan DONOVAN, Lauren HANSON and Jeanna MATTHEWS, Algorithmic Accountability: A Primer,
op. cit.
Systems that reproduce bias have also been criticized from the standpoint that
an overly homogeneous design community leads to blind spots. For example, a
report by AI research centre AI Now on “legacies of bias” argues that:
AI is not impartial or neutral. Technologies are as much products of the context in
which they are created as they are potential agents for change. Machine predictions and
performance are constrained by human decisions and values, and those who design, de-
velop, and maintain AI systems will shape such systems within their own understanding
of the world. Many of the biases embedded in AI systems are products of a complex his-
tory with respect to diversity and equality. 35
In line with this, one may conclude that values and normativity can be found on
both sides of the design process; i.e., in the use of structurally biased data retrieved
from individuals and society, as well as in the design and development of applica-
tions and services. This prompts complex but necessary questions of who is to be
held accountable for what in autonomous systems applied in society.
35. Alex CAMPOLO, Madelyn SANFILIPPO, Meredith WHITTAKER and Kate CRAWFORD, AI Now 2017 Report, AI
Now Institute at New York University, 2017, p. 18.
36. Mireille HILDEBRANDT, Smart Technologies and the Ends of Law, Cheltenham: Edward Elgar Publishing, 2015.
37. Susan Leigh ANDERSON, “Asimov’s ‘Three Laws of Robotics’ and Machine Metaethics”, op. cit., p. 477-493.
38. Sundar PICHAI, “AI at Google: Our Principles”, Google blog, 7 June, 2018. <https://www.blog.google/
topics/ai/ai-principles/>.
39. The Verge, “Google Reportedly Leaving Project Maven Military AI Program After 2019”, by Nick STATT,
June 1, 2018, <https://www.theverge.com/2018/6/1/17418406/google-maven-drone-imagery-ai-contract-
expire> (last visited 10 June 2019).
40. Miles BRUNDAGE et al., The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,
2018, <https://maliciousaireport.com>.
41. Marco T. BASTOS and Dan MERCEA, “The Brexit Botnet and User-Generated Hyperpartisan News”,
Social Science Computer Review, 2017, <https://doi.org/10.1177/0894439317734157>.
42. E.g., David A. BRONIATOWSKI, Amelia M. JAMISON, SiHua QI et al., “Weaponized Health Communication:
Twitter Bots and Russian Trolls Amplify the Vaccine Debate”, American Journal of Public Health, 2018. DOI:
10.2105/AJPH.2018.304567; for more on the social impact of platforms, see Stefan LARSSON and Jonas
ANDERSSON SCHWARZ, Developing Platform Economies. A European Policy Landscape, Brussels: European
Liberal Forum asbl, Stockholm: Fores, 2018.
43. Miles BRUNDAGE et al., The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,
op. cit., p. 7.
44. Cf. Engin BOZDAG, “Bias in Algorithmic Filtering and Personalization”, Ethics and Information Technology,
15 (3), 2013, p. 209-227.
45. Cf. Nicholas DIAKOPOULOS, “Algorithmic Accountability: Journalistic Investigation of Computational
Power Structures”, Digital Journalism, 3 (3), 2015, p. 398-415.
46. Robyn CAPLAN, Joan DONOVAN, Lauren HANSON and Jeanna MATTHEWS, Algorithmic Accountability: A
Primer, op. cit., p. 12.
This statement echoes legal scholar Lawrence Lessig’s arguments over a decade
ago that “code is law” and that the actual digital architecture itself must be included
when analysing norms and behaviours. 47 However, AI seems to add a further layer, as the code alone does not reveal what steering model is being developed when a machine learning algorithm analyses patterns in large sets of data. Code, together with its analytical and "learning" data processing, can give rise to the kind of informal, coded law that L. Lessig described: a digital architecture governing automated decisions, today on digital platforms that influence billions of users. This is a newfound AI-driven architecture layered on top of the code L. Lessig likely had in mind originally, but his core argument remains intact: we need to understand how the code regulates and what values emerge from it. A major shift in the 15-20 years that have passed since the inception of those ideas, however, is that the Internet has gone through fundamental changes, from a highly distributed, non-professional web to one largely moderated by a small number of gigantic digital platforms. 48
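One way to see why the source code alone does not reveal the resulting steering model is that identical training code yields different decision rules depending on the data it is given; the rule lives in the learned parameters, not in the code. The following sketch (hypothetical data; it assumes the scikit-learn library is available) runs the same training function on two different datasets and prints the weights that emerge:

import numpy as np
from sklearn.linear_model import LogisticRegression

def train(X, y):
    # The code is identical in both runs below; the decision rule is not
    # written here but learned from whatever data is supplied.
    model = LogisticRegression()
    model.fit(X, y)
    return model

rng = np.random.default_rng(0)

# Dataset A: the second feature happens to drive the outcome.
X_a = rng.normal(size=(200, 2))
y_a = (X_a[:, 1] > 0).astype(int)

# Dataset B: the first feature happens to drive the outcome.
X_b = rng.normal(size=(200, 2))
y_b = (X_b[:, 0] > 0).astype(int)

print("weights learned from dataset A:", train(X_a, y_a).coef_)
print("weights learned from dataset B:", train(X_b, y_b).coef_)

Inspecting the function train() tells an auditor nothing about which of these very different models will end up governing automated decisions; that is determined by the training data, which is one reason why transparency about code alone is insufficient.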
Another related, inherent challenge has to do with making future predictions:
i.e., machine learning applications that can be used to make probability assess-
ments of events that have not yet occurred. How serious a problem this poses, and what stakes are involved, depends on what such assessments are used for. If a probability assessment is used, for example, for credit rating, medical diagnoses, the allocation of law enforcement resources or penal recommendations, this underlines the extreme importance of ensuring that the prediction is as fair and auditable as possible.
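A common way to make such probability assessments auditable in practice is to compare error rates across groups once the scores are turned into decisions. The sketch below is a generic illustration with invented numbers (it does not reproduce COMPAS or any other specific system): for each group it computes the false positive rate, i.e., how often people for whom the predicted event never occurred were nevertheless flagged as high risk, and the corresponding false negative rate.

import numpy as np

# Hypothetical audit data: predicted risk scores, actual outcomes
# (1 = the predicted event occurred, 0 = it did not) and a group attribute.
scores  = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.8, 0.6, 0.5, 0.3, 0.2, 0.1])
outcome = np.array([1,   1,   0,   0,   1,   0,   1,   1,   0,   0,   1,   0])
group   = np.array(["x", "x", "x", "x", "x", "x", "y", "y", "y", "y", "y", "y"])

threshold = 0.5            # scores at or above this count as "high risk"
flagged = scores >= threshold

for g in np.unique(group):
    m = group == g
    # False positive rate: flagged as high risk among those with no event.
    fpr = np.mean(flagged[m][outcome[m] == 0])
    # False negative rate: not flagged among those where the event occurred.
    fnr = np.mean(~flagged[m][outcome[m] == 1])
    print(f"group {g}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")

In this constructed example the two groups end up with clearly different false positive rates at the same threshold, which is the kind of disparity that an audit of, say, a credit or recidivism model would need to surface and justify.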
To demonstrate how AI and machine learning have become components of complex areas of society, which further highlights the need to recognize AI as a social challenge, two examples can be mentioned here: digital platforms and autonomous vehicles.
47. Lawrence LESSIG, “Code is Law”, The Industry Standard, 18, 1999; Lawrence LESSIG, Code: Version 2.0,
2006; Cf. Stefan LARSSON, “Sociology of Law in a Digital Society—A Tweet from Global Bukowina”, op. cit.
48. Cf. Jonas ANDERSSON SCHWARZ, “Platform Logic: An Interdisciplinary Approach to the Platform-Based
Economy”, Policy & Internet, 9 (4), 2017, p. 374-394; Tarleton GILLESPIE, Custodians of the Internet: Platforms,
Content Moderation, and the Hidden Decisions that Shape Social Media, New Haven: Yale University Press,
2018.
49. When the persons running The Pirate Bay file-sharing site were prosecuted in 2009 for complicity in
violation of the Copyright Act, a similar conceptual challenge emerged when the court was forced to assess
this “platform’s” liability; Stefan LARSSON, “Metaphors, Law and Digital Phenomena: The Swedish Pirate Bay
Court Case”, International Journal of Law and Information Technology, 21 (4), 2013, p. 329-353; ID., Concep-
tions in the Code. How Metaphors Explain Legal Challenges in Digital Times, Oxford: Oxford University
Press, 2017.
50. Cf. Tarleton GILLESPIE, Custodians of the Internet: Platforms, Content Moderation, and the Hidden
Decisions that Shape Social Media, op. cit.
51. Ulrich DOLATA, Apple, Amazon, Google, Facebook, Microsoft: Market concentration-competition-innovation
strategies, 2017-01, Stuttgarter Beiträge zur Organisations-und Innovationsforschung, SOI Discussion Paper, 2017.
52. A news story that received much attention when journalist Carole CADWALLADR published an article
about a whistle-blower in The Guardian, 18 March 2018, <https://www.theguardian.com/news/2018/mar/17/
data-war-whistleblower-christopher-wylie-faceook-nix-bannon-trump>.
53. Evgeny MOROZOV, To Save Everything, Click Here: The Folly of Technological Solutionism, New York:
Public Affairs, 2013.
54. Kirsten GOLLATZ, Felix BEER and Christian KATZENBACH, “The Turn to Artificial Intelligence in Governing
Communication Online”, Social Science Open Access Repository, 21, 2018. Cf. BuzzFeed News, “Why Facebook
Will Never Fully Solve Its Problems With AI”, by Davey ALBA, 11 April 2018, <https://www.buzzfeednews.com/
article/daveyalba/mark-zuckerberg-artificial-intelligence-facebook-content-pro>.
manufacturer Tesla. Public transport company Nobina, based in Kista, Sweden, has
conducted unmanned bus tests, and a bus route has been running since 2018. Devel-
opers in China, Poland, Switzerland, Las Vegas, among other places, are conducting
similar, ongoing projects using self-driving public transport vehicles, and it is only a
question of time before autonomous vehicles become a common feature of everyday
transport in many cities around the world. Automation, which in data-driven applications largely depends on algorithms designed to perform tasks without direct human intervention, is of central importance for self-driving vehicles and raises questions
of accountability here too. In Sweden, for example, regulations are being created
that address developments in the field of self-driving vehicles, 55 and the question
of accountability is a key issue in the context of traffic accidents and has also been
discussed in the literature for some time. 56 These questions have been raised not
least in connection with fatal accidents involving autonomous vehicles. In 2016, a Tesla Model S, which uses both radar and cameras to interpret its surroundings, failed to distinguish a white lorry trailer from the bright sky, resulting in a fatal accident. In March 2018, an SUV used by Uber to develop self-driving vehicles struck and killed a woman in Arizona, which led to extensive discussions on accountability and the use of self-driving vehicles on public roads. Even if comparisons with human-driven vehicles were to show that autonomous vehicles are safer, accidents like these will affect people's trust in, and acceptance of, highly autonomous vehicles.
55. Cf. SOU 2018:16, Vägen till självkörande fordon – introduktion, in which the delegation of responsibility and data protection issues are key components.
56. Cf. Alexander HEVELKE and Julian NIDA-RÜMELIN, “Responsibility for Crashes of Autonomous Vehicles:
An Ethical Analysis”, Science and Engineering Ethics, 21 (3), 2015, p. 619-630.
57. Riccardo GUIDOTTI, Anna MONREALE, Salvatore RUGGIERI et al., “A Survey of Methods for Explaining Black
Box Models”, ACM Computing Surveys (CSUR), 51 (5), 2018, p. 1-45; cf. Frank PASQUALE, The Black Box Society.
The Secret Algorithms That Control Money and Information, Cambridge: Harvard University Press, 2015.
58. Mike ANANNY and Kate CRAWFORD, “Seeing Without Knowing: Limitations of the Transparency Ideal
and its Application to Algorithmic Accountability”, New Media & Society, 20 (3), 2018, p. 973-989.
59. COMMUNICATION FROM THE COMMISSION TO THE EUROPEAN PARLIAMENT, THE EUROPEAN COUNCIL, THE EUROPEAN
ECONOMIC AND SOCIAL COMMITTEE AND THE COMMITTEE OF THE REGIONS, Artificial Intelligence for Europe,
SWD (2018) 137 final.
1. Proprietorship
A proprietary approach with corporate software and data is a legitimate way of
conducting competitive innovation with a commercial logic. It can be the result of
commercialization and upscaling of a product, and can constitute a prerequisite for
investors. Some companies view the user data they hold as being directly related to
their stock market value, and their software and algorithms as valuable “recipes”
and business secrets. 61 However, proprietary set-ups involving company-owned software and data are often cited as problematic in discussions on oversight and scrutiny practices. 62 At worst, according to Rashida Richardson of the AI Now Institute, proprietary set-ups may "inhibit necessary government oversight and enforcement of consumer protection laws" in that they contribute to the black box effect. 63 This may be particularly problematic for public sector procurement. For example, one component of the challenge posed by the aforementioned COMPAS example regarding the risks of recidivism is the lack of transparency and the ensuing lack of informative feedback. 64
2. Avoiding Abuse
Some algorithm-dependent and automated processes could be abused if the af-
fected parties were made aware of their precise functions. Transparency can, at worst,
lead to manipulation or gaming of the purpose of a process. This could apply for
3. Literacy
For the everyday dispersion of new technologies, in this case applied AI, data literacy or algorithm literacy can be an additional fruitful way to conceptualize how individuals' abilities interact with the technologies, with implications for their transparency. 66 To even begin to assess algorithms and how they use data, specific expertise is required that people in general do not have. The importance of this type of literacy can also be extended into an argument directed at contemporary supervisory authorities, which are increasingly struggling to supervise data-driven and automated markets and activities (see also point 6 below). 67
66. Derived from media and information literacy, cf. Jutta HAIDER and Olof SUNDIN, Invisible Search and
Online Search Engines: The Ubiquity of Search in Everyday Life, Chicago: Routledge Studies in Library and
Information Science, 2019.
67. Stefan LARSSON, “Algorithmic Governance and the Need for Consumer Empowerment in Data-Driven
Markets”, Internet Policy Review, 7 (2), 2018.
68. Finale DOSHI-VELEZ, Mason KORTZ, Ryan BUDISH et al., “Accountability of AI Under the Law: The Role Of
Explanation”, arXiv preprint, 2017, arXiv:1711.01134.
69. Stefan LARSSON, Conceptions in the Code. How Metaphors Explain Legal Challenges in Digital Times, op. cit.
70. Wolfie CHRISTL, Corporate Surveillance in Everyday Life: How Companies Collect, Combine, Analyze,
Trade, and Use Personal Data on Billions, Vienna: Cracked Labs, 2017.
markets have been stated to be particularly opaque and complex (and lacking consent) in their automated setup, with a large number of involved actors. 72
71. Frank PASQUALE, "Exploring the Fintech Landscape", Written Testimony of Frank Pasquale Before the United States Senate Committee on Banking, Housing, and Urban Affairs, 12 September 2017; Stefan LARSSON, "Algorithmic Governance and the Need for Consumer Empowerment in Data-driven Markets", Internet Policy Review, 7 (2), 2018, p. 1-12.
72. INFORMATION COMMISSIONER’S OFFICE (ICO), UK, Update Report into Adtech and Real Time Bidding, 20 June
2019.
73. Stefan LARSSON, “Algorithmic Governance and the Need for Consumer Empowerment in Data-driven
Markets”, op. cit.
74. Riccardo GUIDOTTI, Anna MONREALE, Salvatore RUGGIERI et al., “A Survey of Methods for Explaining
Black Box Models”, op. cit.
75. Or BIRAN and Courtenay COTTON, “Explanation and Justification in Machine Learning: A Survey”, op. cit.
76. Tim MILLER, “Explanation in Artificial Intelligence: Insights from the Social Sciences”, Artificial Intelligence,
267, 2019, p. 1-38, <https://doi.org/10.1016/j.artint.2018.07.007>.
77. Mireille HILDEBRANDT, Smart Technologies and the Ends of Law, op. cit., p. 133 sq.
78. Eugen EHRLICH, Fundamental Principles of the Sociology of Law, op. cit.
79. Cf. THE IEEE GLOBAL INITIATIVE ON ETHICS OF AUTONOMOUS AND INTELLIGENT SYSTEMS, Ethically Aligned
Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, op. cit., p. 36.
80. Cf. Måns SVENSSON and Stefan LARSSON, “Intellectual Property Law Compliance in Europe: Illegal File
Sharing and the Role of Social Norms”, op. cit.
81. Cf. Tarleton GILLESPIE, Custodians of the Internet: Platforms, Content Moderation, and the Hidden
Decisions that Shape Social Media, op. cit.
82. E.g., as noted by researchers and published in Nature; James ZOU and Londa SCHIEBINGER, “AI Can Be
Sexist and Racist—It’s Time to Make It Fair”, Nature, comment, 18 July 2018.
83. Cf. Meredith WHITTAKER, Kate CRAWFORD, Roel DOBB et al., AI Now Report 2018, op. cit.
84. Cf. ibid., p. 6, point 10.
85. Karen YEUNG, “‘Hypernudge’: Big Data as a Mode of Regulation by Design”, Information, Communication &
Society, 20 (1), 2017, p. 118-136.
86. Iyad RAHWAN, “Society-in-the-Loop: Programming the Algorithmic Social Contract”, op. cit.
87. This is in line with for example AI HLEG’s Ethics guidelines for trustworthy AI (2019); the IEEE’s Ethi-
cally Aligned Design, 2019; and Luciano FLORIDI, Josh COWLS, Monica BELTRAMETTI et al., "AI4People—An
Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations”, Minds
and Machines, 28, 2018, p. 689-707.
to stakes and needs posed in each context, which may mean that transla-
tions to ethical and legal needs will be required.
It is important to emphasise that a focus on these challenges should not dis-
courage efforts to apply a normative perspective to artificial intelligence. Rather, the
intent is to contribute to, and clarify, issues that need to be developed further and
require greater knowledge and awareness. To a large degree, we already live in a high-
ly digitalised environment in which the data we generate in our daily lives is increas-
ingly used and reused as training data for self-learning technologies in automated
processes and autonomous decision-making. There are strong indications that our
lives will increasingly be enabled and affected by different kinds of artificial intelli-
gence and machine learning in the years to come, since these methods and technolo-
gies have already been proven to have great potential. This means that it becomes all
the more important to strengthen fairness and trust in applied AI through well-
advised notions of accountability and transparency in multidisciplinary research of
socio-legal relevance.
The author
Stefan Larsson is a lawyer (LLM) and Associate Professor in Technology and Social Change at Lund University, Department of Technology and Society. He is a scientific advisor to the Swedish Consumer Agency as well as to the AI Sustainability Center. His research focuses on issues of trust and transparency in data-driven digital markets and on the socio-legal impact of autonomous technologies and AI. His publications include:
— “Algorithmic Governance and the Need for Consumer Empowerment in Data-Driven
Markets”, Internet Policy Review, 7 (2), 2018 ;
— Conceptions in the Code. How Metaphors Explain Legal Challenges in Digital Times,
Oxford: Oxford University Press, 2017.