Abstract
There are many rights, freedoms, and principles that build our society and nourish it day after day. No
right, freedom, or principle is absolute; they must always be balanced. Our starting point is respect
for internationally recognized human rights, and we focus on the principle of autonomy, which is not
adequately addressed and protected in the ever-changing algorithmic world. In this article we review
some of the most influential bodies of knowledge and ethical recommendations in artificial intelligence
and analyze the extent to which they address the principle of autonomy. We ground the concept of
autonomy in operational terms, namely being well-informed and being able to make free decisions. Along
these two dimensions, we analyze the technical and social risks and identify directions in which
artificial intelligence requires further exploration in order to preserve human autonomy.
Keywords
human autonomy, machine learning, AI ethics, bioethics and human rights
1. Introduction
The basis of European society is respect for the constitutionally recognized rights and freedoms [1].
But we must keep in mind that no right or freedom is absolute, and that there is always some tension
between the rights and freedoms at stake. And this is the real challenge. Weighing what should prevail
in specific cases gives rise to ethical debates for which there is no single valid answer. But
technologists cannot work with such uncertain terms. They need well-defined concepts that leave no room
for interpretation. The articulation of the right to non-discrimination illustrates this point clearly.
This right has been mathematically addressed from different perspectives, corresponding to the notions
of independence, separation, and sufficiency. However, these criteria are, in general, mutually
incompatible [2]. This means that a general regulatory wording requiring non-discrimination can be
realized in multiple, conflicting ways. And there is no common agreement on which is the most
appropriate realization for a given situation (this can be seen in the discussion between Northpointe [3]
and ProPublica [4] about the performance of COMPAS).
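As an illustrative sketch (following standard formalisations of group fairness; the notation below is assumed for exposition and is not taken verbatim from [2]), let A denote a protected attribute, Y the true outcome, and Ŷ the prediction of a classifier:

    Independence:  Ŷ ⊥ A        (acceptance rates are equal across groups)
    Separation:    Ŷ ⊥ A ∣ Y    (error rates are equal across groups, given the true outcome)
    Sufficiency:   Y ⊥ A ∣ Ŷ    (the prediction is equally well calibrated across groups)

Except in degenerate cases, such as equal base rates across groups or a perfect classifier, these criteria cannot all be satisfied simultaneously. This is precisely why COMPAS could be defended as well calibrated (sufficiency) and, at the same time, criticised for unequal false positive rates across groups (separation).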
IAIL 2022 - Imagining the AI Landscape after the AI Act, June 13, 2022, Amsterdam, Netherlands
[email protected] (P. Subías-Beltrán); [email protected] (O. Pujol); [email protected] (I. de Lecuona)
ORCID: 0000-0003-1167-1259 (P. Subías-Beltrán); 0000-0001-7573-009X (O. Pujol); 0000-0002-5081-5756 (I. de Lecuona)
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073, http://ceur-ws.org
Automation has a long history, motivated by the search for efficiency, cost reduction, and even equal
access to opportunities, among other goals. In a society with ambitions to prosper while preserving all
rights, automation is necessary to ensure progress. Until recently, only repetitive processes that
required solely trivial reasoning were automated. But for some time now, more complex reasoning, such as
decision making hitherto reserved for individuals, has started to be automated. It is in this context
that machine learning (ML) has appeared as a predictive tool that makes it possible to evaluate the
future impact of certain actions. Systems using ML are based on finding complex patterns in data that
reveal associations with the outcomes to be predicted. Products and systems based on ML are sometimes
referred to as artificial intelligence (AI) in current folk terminology; ML explicitly refers to the
branch of this discipline concerned with systems and methods that improve their performance based on
experience, experience that is often encoded as data. ML can contribute to improved decision making by
removing noise from judgements, making automated systems more consistent decision-making agents than
humans, but the data they are trained on must be accurate enough to avoid the leakage of undesired
judgement that may perpetuate and magnify injustices [5]. The problem, in the case of undesired
judgement, arises in systems that process personal data, as they directly shape the being of the
individuals that the data represent.
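As a minimal sketch of this definition (the data and the learner below are synthetic and purely illustrative, not a reference implementation), consider a simple system whose behaviour improves as it accumulates experience in the form of data:

# Minimal sketch (synthetic data, illustrative only): a learner whose
# performance tends to improve as it accumulates experience, i.e. data.
import random

random.seed(0)

def sample(n):
    """Draw n labelled examples: x is a noisy signal of the binary label y."""
    data = []
    for _ in range(n):
        y = random.random() < 0.5
        x = (1.0 if y else -1.0) + random.gauss(0.0, 1.0)
        data.append((x, y))
    return data

def train_threshold(data):
    """'Learn' a decision threshold as the midpoint of the two class means."""
    pos = [x for x, y in data if y]
    neg = [x for x, y in data if not y]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, data):
    return sum((x > threshold) == y for x, y in data) / len(data)

test = sample(2000)
for n in (50, 500, 5000):
    t = train_threshold(sample(n))
    print(f"experience: {n:4d} examples -> held-out accuracy {accuracy(t, test):.2f}")

With more examples, the learned threshold typically moves closer to the optimum and the held-out accuracy approaches the best attainable value; the point is only that "experience" here is nothing more than data, with all the limitations that entails.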
The inexorable advance of technology means that, over time, more and more actions may be automated [6].
But should they? Parasuraman and Wickens [7] thoroughly analyse the particularities of automation by
reflecting on a number of use cases for the different stages of automation. They suggest that reliance
and compliance ought to be balanced and recognize the importance of adaptively integrating human and
automation performance [7]. The current, almost ubiquitous presence of ML makes it necessary to revisit
this issue in terms of its permeability and reach. Laitinen and Sahlgren's work makes progress along
these lines by studying the effects that ML may have on people's autonomy and by examining the
sociotechnical bases of the principle of autonomy [8]. This approach reinforces the importance of respect
for autonomy from a philosophical perspective but, in our view, lacks a technical analysis of how ML can
better address this principle. From our point of view, a systematic understanding of how ML may
contribute to respecting people by guaranteeing their autonomy is still lacking.
In the last decade, the ML community has been active and concerned with rights and principles such as
privacy, transparency, accountability, and fairness [9]. But there are still other rights, freedoms, and
principles not properly tackled by ML, and the principle of autonomy is one such example.
Addressing problems lying at the intersection of many disciplines, as in this case, is difficult. They
require the scientific point of view, that of the technology itself, together with ethical, normative,
and societal perspectives. But even then, a transdisciplinary team should be able to go beyond the
traditional boundaries of each discipline and face challenges that are not necessarily well-defined or
present in their own field of study. A first step towards successfully implementing this practice is to
develop a common language, which would allow us to understand the challenges of each discipline and
address them with a holistic approach. One of the ambitions of this work is to bring us a step closer to
this end by analysing the challenges that arise from the need to ensure respect for autonomy in the
design, development, and implementation of ML-powered systems.
Stakeholder profiles of ML solutions are very diverse and, today, they have different needs. This article
contributes to the creation of an ethical framework through an ethical evaluation of the current
limitations and challenges posed by ML, and works to narrow the gap between the abstract definition of
autonomy and its operational translation.
To put it concisely, there are four primary aims of this study:
This paper begins by introducing the concept of autonomy in section 2, with a reflection on its
importance for the development of democratic societies and on the trends and dynamics that are
endangering it. Then, the current European normative framework is presented in section 3. This includes
a bioethical perspective to analyse ethical, legal, and social issues from an interdisciplinary
viewpoint, as well as a review of the different voices demanding that autonomy be given the weight it
deserves. The section closes with an analysis of the AI Act, the first European law on AI. The fourth
section presents in detail the risks and challenges posed by ML that affect the respect for autonomy.
There, we dissect the concept of autonomy to raise the challenges we face in technical development and
for the other stakeholders. The article ends with a discussion of the reflections raised so far and the
conclusions.
1 The term “negative” is used to define autonomy through complementary concepts. In a mathematical analogy,
given 𝐴 ⊆ 𝐵, 𝐴 may be defined as 𝐵 ∖ ¬𝐴, where ¬𝐴 denotes the complement of 𝐴 within 𝐵.
Furthermore, we need to take into account other norms that regulate this issue, in particular the
General Data Protection Regulation (GDPR). This regulation introduced new rights that allow people to
react and to exercise control over the algorithmic world. However, it was not enough. There is no
symmetry in the relations among the different ML stakeholders: ML professionals have the technical
knowledge about the solution and tend to evaluate algorithms solely from the perspective of their
accuracy, without assessing whether they are adequate under other social rules. On this account, the
opinion of other stakeholders is neglected during the design, development, and implementation of the
systems in favour of the search for the most accurate algorithm. This imbalance makes evident the need
for better strategies. The challenge is then to build a framework capable of addressing the ML
challenges through the prism of human rights [29].
Demands for reliable AI have culminated in the AI Act [30], which puts on the table the EU's intention
to become a global AI standard setter. The AI Act proposes a risk-based approach built on a pyramid of
criticality accompanied by a layered enforcement mechanism. It has received criticisms that may be
summarised in two key points: the AI Act does not provide coverage for all people, as it does not
sufficiently focus on vulnerable groups, and it does not provide meaningful rights and redress mechanisms
for people impacted by AI systems, as stated by the Ada Lovelace Institute [31], whose mission is to
ensure that AI serves people and society, as well as by many other organisations representing civil
society [32]. Additionally, returning to the issue at hand, the AI Act does not explicitly and completely
address the principle of autonomy, although it does so implicitly and partially. Multiple applications
are forbidden, such as manipulative, exploitative, and social control practices; AI systems intended to
distort human behaviour; or those that provide social scoring of natural persons. But there are less
clear prohibitions, such as the one that forbids “the placing on the market, putting into service or
use of an AI system that deploys subliminal techniques beyond a person's consciousness in order to
materially distort a person's behaviour in a manner that causes or is likely to cause that person or
another person physical or psychological harm” (Title II, Article 5, 1a). But what about practices that
explicitly manipulate our conception of reality, such as image filters that perpetuate impossible
standards of beauty? Besides, this wording wrongly implies that a person's behaviour can be altered in
an innocuous way, while such practices are used to undermine the essence of our autonomy.
One mechanism proposed by the AI Act is mandatory human oversight of high-risk AI-based systems. The
implementation of such a measure implies the involvement of a person or group of people trained to
perform the tasks defined in Title III, Chapter 2, Article 14, among which are: to fully understand the
capabilities and limitations of the system such that anomalies and malfunctions are identified and
mitigated as early as possible, to be able to detect automation biases, to interpret the system output
correctly by knowing the particularities of the system, to be able to decide to refuse the use of a
high-risk AI system, and to be able to intervene during its execution and even to stop the use of the
system if considered appropriate. But are we ready to respond to this point effectively? On the one hand,
there are few profiles trained in all the disciplines necessary to carry out this task correctly. On the
other hand, the immediacy required in the responses of AI-powered systems also makes this type of role
uncertain due to the different pressures to which it may be subjected. It should be noted that the AI Act
makes no mention of the characteristics of the person or group of individuals who can perform this task.
Consequently, we find ourselves in a scenario where both individuals internal and external to an
organisation are free to carry out this task. An in-house worker may be under pressure from the employer
to withhold certain information from the public, or may be overstretched by performing this role in
several initiatives at once and thus unable to do the work adequately. An outsourced worker, on the other
hand, has the benefit of being impartial and protected from employer pressure, but will find it more
difficult to access the innards of the system in question. Although this mechanism has the potential to
bring a lot of value, not only in detecting and mitigating risks but also in bringing ethical debates
closer to those who develop the system, are we prepared to put it into practice?
The AI Act proposes a regulation based on several rigid components. This is exemplified by the list of
prohibited systems, which covers specific casuistries that may become obsolete due to the rapid
development and updating of use cases for data-driven systems. In our opinion, this rigidity calls into
question the future relevance of the AI Act and its ability to respond to future developments and
emerging risks, so it would be worth exploring the rationale for this rigidity in more detail. As is
common practice, we rely on the past to legislate, but how certain are we about its suitability to cover
future issues? Compliance with the legal framework is necessary, as it marks out what can and cannot be
done. But the legal framework does not provide enough information to know what is right and what is
better. Although laws and regulations are geared towards the idea of ML as a tool to support decisions,
this has not yet been translated into practice. Changes in the perception and use of such solutions do
not happen at a rapid pace. There is still not a sufficiently respectful data management culture (note
the fines that the Dutch Tax and Customs Administration [33], Google [34], and Enel Energia [35] received
in the last year for their inadequate data management). On the other hand, there will come a time when,
for our own efficiency, we may want to delegate our decisions to ML systems; consequently, these will
cease to be decision support systems and will take the decisions themselves. As of today, the regulation
does not contemplate this case. It is necessary to contemplate that this may happen and, therefore,
systems must be designed to assume this delegation. This implies that they must be able to understand
the context, to understand that a human decision is being replaced in a human context, and to preserve
the purpose of the person who delegated autonomy to them. Whatever the implementation, the balance must
remain positive for the person involved.
The legal framework is insufficient to steer society in the right direction. This is the task of both
the ethics of ML and good ML governance. ML governance is the practice of defining and executing
procedures for the proper development and use of ML, while ML ethics evaluates what is socially
acceptable or preferable. We must go beyond compliance [36] and set out an adequate ethical framework
that complements the legal body and allows us to progress towards the kind of society we want to be. In
this way, we will be prepared to react to situations to which the body of law does not yet respond.
4. How machine learning modulates the principle of autonomy
We focus on a social framework where people are autonomous agents and technological solutions should not
limit their autonomy by default. Furthermore, we will address the actions that are triggered after
human-ML interaction. We evaluate how much autonomy we should give up, in which cases, and according to
which rules. As stated before, autonomy can be simplified to being well-informed and being able to make
free decisions. In the following paragraphs we elaborate on the risks and challenges posed by ML in both
aspects of autonomy, from two different perspectives: that of technological profiles and that of the
remaining stakeholders.
2 The following excerpts from newspaper headlines exemplify the aforesaid criminalisation of algorithms:
“algorithms are biased” [45] or “this is how the algorithms that decide who gets fired from jobs work”
[46] from the Spanish newspaper El País, or the characterisation of a “creepy algorithm” in the US
magazine Wired [47].
them out3 [49]. Identity is a social construct derived from the self-determination of each individual.
Thus, ignoring people's self-determined identity results in non-consensual classifications. Understanding
who defines human-created data permits us to understand from what perspective we are analysing the
information [50].
ML carries a patina of objectivity and neutrality because of a misconception about its reasoning
capabilities: ML decreases variance and noise in judgements, but this does not imply an increase in
neutrality. ML reasons in a deterministic manner, which does not entail objective reasoning. However,
this patina of objectivity and neutrality makes some people place the machine's “opinion” before their
own [51]. We, as users of ML systems, must understand these solutions for what they are: tools that
allow us to analyse information more robustly, eliminating variance in decision-making, but which are
limited by the restrictions of their data.
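A minimal, hypothetical sketch of this distinction follows (the dataset, groups, and decision rule are illustrative assumptions, not taken from any cited study): a rule learned from skewed historical labels is perfectly reproducible, i.e. noise-free, and yet it systematically replays the skew of the data it was trained on.

# Illustrative sketch (hypothetical data): a deterministic rule learned from
# biased historical labels is perfectly consistent, yet not neutral.

# Historical records: (group, qualified, past_decision). Past decisions
# under-approve group "B" regardless of actual qualification.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", False, True),
    ("B", True, False), ("B", True, True), ("B", False, False), ("B", False, False),
]

def train(records):
    """'Learn' the majority past decision per group: a stand-in for any
    model that absorbs the bias present in its training labels."""
    by_group = {}
    for group, _, decision in records:
        by_group.setdefault(group, []).append(decision)
    return {g: sum(d) > len(d) / 2 for g, d in by_group.items()}

def predict(model, group):
    return model[group]

model = train(history)

# 1) Determinism: repeated runs give identical outputs (zero variance, no noise).
runs = [[predict(model, g) for g in ("A", "B")] for _ in range(5)]
assert all(run == runs[0] for run in runs)

# 2) Yet the rule is not neutral: equally qualified applicants are treated
#    differently depending on their group.
print("Qualified applicant, group A ->", predict(model, "A"))  # True (approved)
print("Qualified applicant, group B ->", predict(model, "B"))  # False (rejected)

Both applicants in the last two lines are equally qualified; the rule is stable and reproducible, yet it merely replays the skew in its training labels, which is precisely why determinism should not be mistaken for objectivity.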
Another challenge we face is responding to people who are not de facto autonomous. Adequately informing
people who have certain limitations to being so, for example due to medical conditions or age, is a
challenge for which ML may be able to offer some proposals for improvement.
Acknowledgments
This work is partially supported by MCIN/AEI/10.13039/501100011033 under project PID2019-
105093GB-I00.
References
[1] United Nations, Universal Declaration of Human Rights, 1948.
[2] S. A. Friedler, C. Scheidegger, S. Venkatasubramanian, The (im)possibility of fairness:
Different value systems require different mechanisms for fair decision making, Communi-
cations of the ACM 64 (2021) 136–143.
[3] W. Dieterich, C. Mendoza, T. Brennan, COMPAS risk scales: Demonstrating accuracy
equity and predictive parity, Northpointe Inc (2016). URL: https://www.documentcloud.
org/documents/2998391-ProPublica-Commentary-Final-070616.html.
[4] J. Larson, J. Angwin, Technical response to Northpointe, ProPublica 29 (2016). URL:
https://www.propublica.org/article/technical-response-to-northpointe.
[5] D. Kahneman, O. Sibony, C. R. Sunstein, Noise: A flaw in human judgment, William Collins,
Dublin, 2021.
[6] P. A. Hancock, R. J. Jagacinski, R. Parasuraman, C. D. Wickens, G. F. Wilson, D. B.
Kaber, Human-automation interaction research: Past, present, and future, Er-
gonomics in Design 21 (2013) 9–14. URL: http://erg.sagepub.com/cgi/alerts. doi:10.1177/
1064804613477099.
[7] R. Parasuraman, C. D. Wickens, Humans: still vital after all these years of automation,
Human factors 50 (2008) 511–520. URL: https://pubmed.ncbi.nlm.nih.gov/18689061/. doi:10.
1518/001872008X312198.
[8] A. Laitinen, O. Sahlgren, AI Systems and Respect for Human Autonomy, Frontiers in
Artificial Intelligence 4 (2021) 151. doi:10.3389/FRAI.2021.705164/BIBTEX.
[9] N. Diakopoulos, S. Friedler, M. Arenas, S. Barocas, M. Hay, B. Howe, H. V. Ja-
gadish, K. Unsworth, A. Sahuguet, S. Venkatasubramanian, C. Wilson, C. Yu, B. Zeven-
bergen, Principles for Accountable Algorithms and a Social Impact Statement for
Algorithms, Technical Report, FAT/ML, 2017. URL: https://www.fatml.org/resources/
principles-for-accountable-algorithms.
[10] J. Gumbis, V. Bacianskaite, J. Randakeviciute, Do Human Rights Guarantee Autonomy?,
Cuadernos Constitucionales de la Cátedra Fadrique Furió Ceriol 62 (2008) 77–93. URL:
www.un.org.
[11] M. Foucault, Discipline & Punish: The Birth of the Prison, 1975.
[12] Z. Bauman, Liquid life, Polity, 2005.
[13] E. Morozov, To save everything, click here: The folly of technological solutionism, Public
Affairs, 2013.
[14] Y. N. Harari, 21 Lessons for the 21st Century, Random House, 2018.
[15] Y. N. Harari, Rebellion of the Hackable Animals, The Wall Street Journal (2020).
[16] I. de Lecuona, La tendencia a la mercantilización de partes del cuerpo humano y de la
intimidad en investigación con muestras biológicas y datos (pequeños y masivos), in:
Editorial Fontamara (Ed.), De la Solidaridad al Mercado, Edicions de la Universitat de
Barcelona, 2016, pp. 267–296. URL: www.bioeticayderecho.ub.edu.
[17] I. de Lecuona Ramírez, M. Villalobos-Quesada, The value of personal data in the digi-
tal society, in: El cuerpo diseminado: estatuto, uso y disposición de los biomateriales
humanos, Aranzadi, 2018, pp. 171–191. URL: https://dialnet.unirioja.es/servlet/articulo?
codigo=6499584.
[18] I. de Lecuona Ramírez, Ethical, legal and societal issues of the use of artificial intelligence
and big data applied to healthcare in a pandemic, Revista Internacional de Pensamiento
Político (2020) 139–166. URL: https://dialnet.unirioja.es/servlet/articulo?codigo=7736125.
[19] M. Casado, Los derechos humanos como marco para el Bioderecho y la Bioética, in: Derecho
biomédico y bioética, Comares, 1998, pp. 113–136. URL: https://dialnet-unirioja-es.sire.ub.
edu/servlet/articulo?codigo=568994.
[20] General Conference of UNESCO, Universal Declaration on Bioethics and Hu-
man Rights, 2005. URL: https://en.unesco.org/themes/ethics-science-and-technology/
bioethics-and-human-rights.
[21] News European Parliament, Artificial intelligence: the EU
needs to act as a global standard-setter, 2022. URL: https:
//www.europarl.europa.eu/news/en/press-room/20220318IPR25801/
artificial-intelligence-the-eu-needs-to-act-as-a-global-standard-setter.
[22] Directorate general of human rights and rule of law, Consultative committee of the
convention for the protection of individuals with regard to automatic processing of personal
data (Convention 108). Guidelines on artificial intelligence and data protection (2019)
1–4. URL: https://www.coe.int/en/web/human-rights-rule-of-law/artificial-intelligence/
glossary:.
[23] Council of Europe - Commissioner for Human Rights, Unboxing Artificial Intelligence: 10
steps to protect Human Rights, Technical Report, Council of Europe, 2019.
[24] COMEST, Preliminary study on the Ethics of Artificial Intelligence, Technical Report, 2019.
URL: https://unesdoc.unesco.org/ark:/48223/pf0000367823.
[25] European Union Agency for Fundamental Rights, Getting the future right – Artificial intelli-
gence and fundamental rights, Technical Report, European Union Agency for Fundamental
Rights, Luxembourg, 2020. doi:10.2811/58563.
[26] AI HLEG, Ethics Guidelines for Trustworthy AI, Technical Report, High-Level Expert
Group on Artificial Intelligence, Brussels, 2019. URL: https://ec.europa.eu/futurium/en/
ai-alliance-consultation.1.html.
[27] T. Metzinger, Ethics washing made in Europe, Der Tagesspiegel (2019). URL: https://www.
tagesspiegel.de/politik/eu-guidelines-ethics-washing-made-in-europe/24195496.html.
[28] I. Ben-Israel, J. Cerdio, A. Ema, L. Friedman, M. Ienca, A. Mantelero, E. Matania,
C. Muller, H. Shiroyama, E. Vayena, Towards regulation of AI systems. Global perspec-
tives on the development of a legal framework on Artificial Intelligence systems based
on the Council of Europe’s standards on human rights, democracy and the rule of law,
Technical Report, Council of Europe, Strasbourg Cedex, 2020. URL: https://rm.coe.int/
prems-107320-gbr-2018-compli-cahai-couv-texte-a4-bat-web/1680a0c17a.
[29] I. de Lecuona, M. J. Bertrán, B. Bórquez, L. Cabré, M. Casado, M. Corcoy, M. Dobernig,
F. Estévez, F. G. López, B. Gómez, C. Humet, L. Jaume-Palasí, E. Lamm, F. Leyton,
M. J. L. Baroni, R. L. d. Mántaras, F. Luna, G. Marfany, J. Martínez-Montauti, M. Mau-
tone, I. Melamed, M. Méndez, M. Navarro-Michel, M. J. Plana, N. Riba, G. Rodríguez,
R. Rubió, J. Santaló, P. Subías-Beltrán, Guidelines for reviewing health research
and innovation projects that use emergent technologies and personal data, Phys-
ical Education and Sport for Children and Youth with Special Needs Researches –
Best Practices – Situation (2020) 343–354. URL: https://research.wur.nl/en/publications/
guidelines-for-reviewing-health-research-and-innovation-projects-.
[30] European Commission, Proposal for a Regulation of the European Parliament and of
the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelli-
gence Act) and Amending Certain Union Legislative Acts, Technical Report, European
Commission, Brussels, 2021.
[31] L. Edwards, Regulating AI in Europe: four problems and four solutions, Technical Re-
port, Ada Lovelace Institute, 2022. URL: https://www.adalovelaceinstitute.org/report/
regulating-ai-in-europe.
[32] European Digital Rights, Access Now, Panoptykon Foundation, epicenter.works, Algo-
rithmWatch, European Disability Forum, Bits of Freedom, Fair Trials, PICUM, ANEC, An EU Artificial
Intelligence Act for Fundamental Rights. A Civil Society State-
ment, Technical Report, European Digital Rights, 2021.
[33] Autoriteit Persoonsgegevens, Boete Belastingdienst voor zwarte lijst FSV, Technical Report,
Autoriteit Persoonsgegevens, 2022. URL: https://autoriteitpersoonsgegevens.nl/nl/nieuws/
boete-belastingdienst-voor-zwarte-lijst-fsv.
[34] Commission Nationale de l’Informatique et des Libertés, Cookies: GOOGLE fined 150
million euros, Technical Report, Commission Nationale de l’Informatique et des Libertés,
Paris, 2022. URL: https://www.cnil.fr/en/cookies-google-fined-150-million-euros.
[35] Garante per la protezione dei dati personali, Ordinanza ingiunzione nei confronti di Enel
Energia S.p.a. - 16 dicembre 2021 [9735672], Technical Report, Garante per la protezione dei
dati personali, Rome, 2021. URL: https://www.garanteprivacy.it/web/guest/home/docweb/
-/docweb-display/docweb/9735672.
[36] L. Floridi, Soft ethics, the governance of the digital and the General Data Protection
Regulation, Philosophical Transactions of the Royal Society A: Mathematical, Physical
and Engineering Sciences 376 (2018). doi:10.1098/rsta.2018.0081.
[37] E. Pariser, The Filter Bubble: What the Internet Is Hiding from You, Penguin Press, New
York, 2011.
[38] A. Tversky, D. Kahneman, Judgment under Uncertainty: Heuristics and Biases, Science
185 (1974) 1124–1131. URL: https://pubmed.ncbi.nlm.nih.gov/17835457/. doi:10.1126/
SCIENCE.185.4157.1124.
[39] R. Boudon, Beyond rational choice theory, Annual review of sociology 29 (2003) 1–
21. URL: https://www.annualreviews.org/doi/abs/10.1146/annurev.soc.29.010202.100213.
doi:10.1146/ANNUREV.SOC.29.010202.100213.
[40] J. von Neumann, O. Morgenstern, Theory of games and economic behavior, Princeton
University Press, 2007. doi:10.1515/9781400829460.
[41] A. Tversky, A critique of expected utility theory: Descriptive and normative considerations,
Erkenntnis (1975) 163–173. URL: https://www.jstor.org/stable/20010465.
[42] D. Kahneman, A. Tversky, Prospect theory: An analysis of decision under risk, Economet-
rica 47 (1979) 263–292. doi:10.2307/1914185.
[43] T. Misra, The Tenants Fighting Back Against Facial Recognition Technology,
Bloomberg CityLab (2019). URL: https://www.bloomberg.com/news/articles/2019-05-07/
when-facial-recognition-tech-comes-to-housing.
[44] J. Sánchez-Monedero, L. Dencik, The politics of deceptive borders: ’biomarkers of de-
ceit’ and the case of iBorderCtrl, Information, Communication & Society (2020) 1–18.
URL: https://www.researchgate.net/publication/337438212_The_politics_of_deceptive_
borders_%27biomarkers_of_deceit%27_and_the_case_of_iBorderCtrl.
[45] R. Gimeno, Los algoritmos tienen prejuicios: ellos son informáticos y ellas, amas de casa,
2017. URL: https://elpais.com/retina/2017/05/12/tendencias/1494612619_910023.html?rel=
buscador_noticias.
[46] M. Echarri, 150 despidos en un segundo: así funcionan los algoritmos que deciden a quién
echar del trabajo, 2021. URL: https://elpais.com/.
[47] D. Jemio, A. Hagerty, F. Aranda, The Case of the Creepy Algorithm That
‘Predicted’ Teen Pregnancy, Wired (2022). URL: https://www.wired.com/story/
argentina-algorithms-pregnancy-prediction/.
[48] R. Parasuraman, V. Riley, Humans and automation: Use, misuse, disuse, abuse,
Human factors 39 (1997) 230–253. URL: https://journals.sagepub.com/doi/abs/10.1518/
001872097778543886. doi:10.1518/001872097778543886.
[49] K. Yang, K. Qinami, L. Fei-Fei, J. Deng, O. Russakovsky, Towards fairer datasets: Filtering
and balancing the distribution of the people subtree in the imagenet hierarchy, in: Pro-
ceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020, pp.
547–558. URL: https://www.image-net.org/filtering-and-balancing/.
[50] K. Crawford, Atlas of AI, Yale University Press, 2021. URL: https://www.katecrawford.net/index.html.
[51] V. Eubanks, Automating inequality: How high-tech tools profile, police, and punish the
poor, St. Martin’s Press, 2018.
[52] J. Bryson, AI & global governance: no one should trust AI, United
Nations University (2018). URL: https://cpr.unu.edu/publications/articles/
ai-global-governance-no-one-should-trust-ai.html.
[53] M. Ryan, In AI we trust: ethics, artificial intelligence, and reliability, Science and Engi-
neering Ethics 26 (2020) 2749–2767. doi:10.1007/S11948-020-00228-Y.
[54] P. Robinette, W. Li, R. Allen, A. M. Howard, A. R. Wagner, Overtrust of Robots in Emergency
Evacuation Scenarios, in: 2016 11th ACM/IEEE International Conference on Human-Robot
Interaction (HRI), IEEE, 2016, pp. 101–108.