A - 78 - 310 - Right To Privacy
Seventy-eighth session
Item 73 (b) of the provisional agenda*
Promotion and protection of human rights: human rights
questions, including alternative approaches for improving
the effective enjoyment of human rights and
fundamental freedoms
Right to privacy
Note by the Secretary-General
The Secretary-General has the honour to transmit to the General Assembly the
report prepared by the Special Rapporteur on the right to privacy, Ana Brian
Nougrères, submitted in accordance with Human Rights Council resolution 28/16.
* A/78/150.
Summary
In the present report, the Special Rapporteur on the right to privacy, Ana Brian
Nougrères, stresses the importance of the principles of transparency and
explainability in the processing of personal data using artificial intelligence. The
omnipresence of artificial intelligence in all areas of activity, and the use of
artificial intelligence to make decisions about people, demand that the issue be
examined and that measures be taken to ensure that the use of artificial intelligence
is ethical, responsible and human rights-compliant.
This is important because transparency and explainability not only help to build
trust and reliability in artificial intelligence, but also contribute to the protection of
human rights. These principles allow individuals affected by artificial intelligence to
be informed in a timely, comprehensive, simple and clear manner about basic issues
concerning the use of their personal information in artificial intelligence processes or
projects and the consequences thereof, and about the specific reasons behind such
use. This makes it possible for them to exercise their rights, such as the right to due
process and to a defence when faced with decisions made using artificial intelligence
tools or technologies.
2/20 23-15851
A/78/310
I. Introduction
1. The High-Level Expert Group on Artificial Intelligence 1 of the European
Commission has noted that the principles of transparency and explainability are
important components for the promotion of reliable artificial intelligence. To that end,
artificial intelligence must be lawful, ethical and robust, “both from a technical and
social perspective since, even with good intentions, artificial intelligence systems can
cause unintentional harm”. 2
2. In the same vein, the United Nations Educational, Scientific and Cultural
Organization (UNESCO) has noted that “transparency and explainability relate
closely to adequate responsibility and accountability measures, as well as to the
trustworthiness of artificial intelligence systems,” 3 and that “the transparency and
explainability of artificial intelligence systems are often essential preconditions to
ensure the respect, protection and promotion of human rights, fundamental freedoms
and ethical principles.” 4
3. Artificial intelligence features prominently on the global agenda. Towards the
end of December 2022, for example, the Organisation for Economic Co-operation and
Development (OECD) issued a statement on a trusted, sustainable and inclusive
digital future, 5 in which it committed to work towards, among other things, advancing
a human-centric and rights-oriented digital transformation that includes promoting
the enjoyment of human rights, both offline and online, strong protections for
personal data, laws and regulations fit for the digital age, and trustworthy, secure,
responsible and sustainable use of emerging digital technologies and artificial
intelligence. 6 With regard to artificial intelligence, OECD member States have called
on the organization to support the development of forward-looking, coherent and
implementable policy and legal frameworks for governing artificial intelligence and
managing its risks effectively, and to provide evidence, foresight, tools and incident
monitoring for effective policy planning and execution to implement trustworthy
artificial intelligence. 7
4. On 23 January 2023, the European Parliament, the Council of the European
Union and the European Commission adopted the European Declaration on Digital
Rights and Principles, in which they committed to:
(a) Promoting human-centric, trustworthy and ethical artificial intelligence
systems throughout their development, deployment and use, in line with European
Union values;
(b) Ensuring an adequate level of transparency about the use of algorithms
and artificial intelligence, and that people are empowered to use them and are
informed when interacting with them;
__________________
1 A group of independent experts formed by the European Commission in June 2018.
2 High-Level Expert Group on Artificial Intelligence, Ethical guidelines for trustworthy artificial intelligence (2019), p. 2. Available at https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.
3 UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2021, p. 22. Available at https://unesdoc.unesco.org/ark:/48223/pf0000381137.
4 Ibid.
5 OECD, Declaration on a Trusted, Sustainable and Inclusive Digital Future, 2022. The declaration was the outcome of the meeting held on the island of Gran Canaria, Spain, on 14 and 15 December 2022. Available at https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0488.
6 Ibid.
7 Ibid.
(c) Ensuring that algorithmic systems are based on adequate datasets to avoid
discrimination and enable human supervision of all outcomes affecting people’s
safety and fundamental rights;
(d) Ensuring that technologies such as artificial intelligence are not used to
pre-empt people’s choices, for example regarding health, education, employment, and
their private life;
(e) Providing for safeguards and taking appropriate action, including by
promoting trustworthy standards, to ensure that artificial intelligence and digital
systems are, at all times, safe and used in full respect for fundamental rights;
(f) Taking measures to ensure that research in artificial intelligence respects
the highest ethical standards and relevant European Union law. 8
5. In view of the above, some considerations on artificial intelligence are set out
below, with brief reference to issues that clarify the content of the principles of
transparency and explainability in the context of the processing of personal data in
artificial intelligence processes or projects.
__________________
10 European Commission, White Paper on Artificial Intelligence – a European approach to excellence and trust, COM (2020) 65 final. Available at https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1603192201335&uri=CELEX%3A52020DC0065.
11 See https://unesdoc.unesco.org/ark:/48223/pf0000381137, p. 21.
12 See https://globalprivacyassembly.org/wp-content/uploads/2020/11/GPA-Resolution-on-Accountability-in-the-Development-and-Use-of-AI-EN.pdf, p. 3.
13 See https://unesdoc.unesco.org/ark:/48223/pf0000381137, pp. 21–22.
14 Ibero-American Data Protection Network, “General recommendations for the treatment of personal data in artificial intelligence” (2019). Text adopted by the members of the Network at the session of 21 June 2019, held in Naucalpan de Juárez, Mexico. Available at https://www.redipd.org/sites/default/files/2020-02/guia-recomendaciones-generales-tratamiento-datos-ia.pdf.
of the regulations on personal data processing right from the product design stage.
The recommendations are as follows:
• Comply with local regulations on the processing of personal data;
• Conduct privacy impact assessments;
• Embed privacy, ethics and security by design and by default;
• Implement the principle of accountability;
• Design appropriate governance schemes on the processing of personal data in
organizations that develop artificial intelligence products;
• Adopt measures to ensure the implementation of the principles on the processing
of personal data in artificial intelligence projects;
• Respect the rights of data owners and implement effective mechanisms for the
exercise of such rights;
• Ensure the quality of personal data;
• Use anonymization tools;
• Increase trust and transparency with personal data owners.
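Among the recommendations above, the use of anonymization tools is the most directly technical. As a purely illustrative sketch (the Network does not prescribe any particular method, and the function and field names below are hypothetical), direct identifiers can be replaced with salted hashes; note that hashing alone does not remove re-identification risk arising from indirect identifiers:

```python
import hashlib

def pseudonymize(record: dict, identifier_fields: set, salt: str) -> dict:
    """Replace direct identifiers with salted hashes; leave other fields intact.

    A sketch only: real anonymization must also address indirect identifiers
    and re-identification risk, which hashing alone does not eliminate.
    """
    out = {}
    for field, value in record.items():
        if field in identifier_fields:
            digest = hashlib.sha256((salt + str(value)).encode("utf-8")).hexdigest()
            out[field] = digest[:16]  # truncated token standing in for the identifier
        else:
            out[field] = value
    return out

record = {"name": "Jane Doe", "country": "Uruguay", "score": 0.87}
print(pseudonymize(record, {"name"}, salt="example-salt"))
```

In practice, such pseudonymization would be paired with the privacy impact assessments and governance schemes listed above rather than relied on in isolation.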
14. For details on the implementation of some of these recommendations, the
Ibero-American Data Protection Network has prepared additional and more detailed
guidelines, contained in the document entitled “Specific Guidelines for Compliance
with the Principles and Rights that Govern the Protection of Personal Data in
Artificial Intelligence Projects”. 15 The principle of transparency is discussed in
more detail later in the present report.
__________________
15 Ibero-American Data Protection Network, “Specific Guidelines for Compliance with the Principles and Rights that Govern the Protection of Personal Data in Artificial Intelligence Projects” (2019). Available at https://www.redipd.org/sites/default/files/2020-02/guide-specific-guidelines-ai-projects.pdf.
significant risks can be expected to occur. […]. For instance, health care,
transport, energy and parts of the public sector […];
(b) Second, the [artificial intelligence] application in the sector in
question is, in addition, used in such a manner that significant risks are likely to
arise. […]. The assessment of the level of risk of a given use could be based on
the impact on the affected parties. For instance, uses of [artificial intelligence]
applications that produce legal or similarly significant effects for the rights of
an individual or company; that pose risk of injury, death or significant material
or immaterial damage; that produce effects that cannot reasonably be avoided
by individuals or legal entities. 16
18. Artificial intelligence involves different types of risk. The contingencies that
should be considered include the inherent risks of operating with algorithms (human
bias, technical flaws, security vulnerabilities and failures in their implementation) and
their faulty design. Certain issues affect the management and performance of
algorithms, as shown in the following graphic: 17
__________________
16 See https://eur-lex.europa.eu/legal-content/ES/TXT/?qid=1603192201335&uri=CELEX%3A52020DC0065.
17 See https://www.redipd.org/sites/default/files/2020-02/guia-recomendaciones-generales-tratamiento-datos-ia.pdf, p. 18.
__________________
18 Alejandro Useche and Jeimy Cano, Robo-Advisors: Asesoría automatizada en el mercado de valores, Universidad del Rosario and Autorregulador del Mercado de Valores de Colombia (2019), pp. 9–10. Available at https://www.researchgate.net/publication/331358231_Robo-Advisors_Asesoria_automatizada_en_el_mercado_de_valores.
19 UNESCO, Recommendation on the Ethics of Artificial Intelligence, 2021, p. 22.
20 Organisation for Economic Co-operation and Development (OECD), Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, 23 September 1980, and the updated guidelines from July 2013; Council of Europe, Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, No. 108, 28 January 1981; United Nations, Guidelines for the regulation of computerized personal data files, 14 December 1990; Council of Europe, Additional Protocol to the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, regarding supervisory authorities and transborder data flows, 8 November 2001; Asia-Pacific Economic Cooperation Forum, Asia-Pacific Economic Cooperation Forum Privacy Framework, 2004; Spanish Data Protection Agency, Joint Proposal for a Draft of International Standards on the Protection of Privacy with regard to the Processing of Personal Data, Madrid, 5 November 2009; Regulation (European Union) 2016/679 of the European Parliament and of the Council on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), 27 April 2016; Ibero-American Data Protection Network, Guidelines for Harmonization of Data Protection in the Ibero-American Community, 2017; Council of Europe, Protocol amending the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, October 2018; and Organization of American States, Inter-American Juridical Committee, Updated Principles on Privacy and Personal Data Protection, 2021.
21 A/77/196, para. 45.
__________________
24 Ibid., pp. 23–24.
25 See https://unesdoc.unesco.org/ark:/48223/pf0000381137, p. 22.
26 See https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai, pp. 2 and 3.
27 See https://unesdoc.unesco.org/ark:/48223/pf0000381137, p. 22.
28 Ibid., p. 22.
29 See https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai, p. 18.
• Traceability: The data sets and the processes that yield the [artificial
intelligence] system’s decision, including those of data gathering and data
labelling as well as the algorithms used, should be documented to the best
possible standard to allow for traceability and an increase in transparency. This
also applies to the decisions made by the [artificial intelligence] system. This
enables identification of the reasons why an [artificial intelligence] decision
was erroneous which, in turn, could help prevent future mistakes. Traceability
facilitates auditability as well as explainability.
• Explainability: Explainability concerns the ability to explain both the technical
processes of an [artificial intelligence] system and the related human decisions
(e.g. application areas of a system). Technical explainability requires that the
decisions made by an [artificial intelligence] system can be understood and
traced by human beings. Moreover, trade-offs might have to be made between
enhancing a system’s explainability (which may reduce its accuracy) or
increasing its accuracy (at the cost of explainability). Whenever an [artificial
intelligence] system has a significant impact on people’s lives, it should be
possible to demand a suitable explanation of the [artificial intelligence] system’s
decision-making process. Such explanation should be timely and adapted to the
expertise of the stakeholder concerned (e.g. layperson, regulator or researcher).
In addition, explanations of the degree to which an [artificial intelligence]
system influences and shapes the organisational decision-making process,
design choices of the system, and the rationale for deploying it, should be
available (hence ensuring business model transparency).
• Communication. [Artificial intelligence] systems should not represent
themselves as humans to users; humans have the right to be informed that they
are interacting with an [artificial intelligence] system. This entails that [artificial
intelligence] systems must be identifiable as such. In addition, the option to
decide against this interaction in favour of human interaction should be provided
where needed to ensure compliance with fundamental rights. Beyond this, the
[artificial intelligence] system’s capabilities and limitations should be
communicated to [artificial intelligence] practitioners or end-users in a manner
appropriate to the use case at hand. This could encompass communication of the
[artificial intelligence] system’s level of accuracy, as well as its limitations. 30
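The traceability requirement quoted above calls for documenting the data, processes and decisions of an artificial intelligence system. A minimal sketch of what such a decision log might look like follows; the field names and the example model are illustrative assumptions, not anything prescribed by the guidelines:

```python
import json
from datetime import datetime, timezone

def log_decision(model_version: str, input_summary: dict, output: str,
                 rationale: str) -> str:
    """Record one automated decision as a JSON line, so that it can later be
    traced, audited and explained to the person affected."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_summary": input_summary,   # what data the decision rested on
        "output": output,                 # the decision itself
        "rationale": rationale,           # human-readable reason, for explainability
    }
    return json.dumps(entry, sort_keys=True)

line = log_decision("credit-model-1.4", {"income_band": "B", "history_len": 7},
                    "approved", "income band and repayment history above threshold")
print(line)
```

Each logged line records what the decision rested on and a human-readable rationale, which is the raw material for the suitable explanation that, according to the guidelines, affected persons should be able to demand.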
33. The European Data Protection Board and the European Data Protection
Supervisor have issued a joint opinion in which they stated that:
Data subjects should always be informed when their data is used for [artificial
intelligence] training and/or prediction, of the legal basis for such processing,
general explanation of the logic (procedure) and scope of the [artificial
intelligence] system. In that regard, the individuals’ right of restriction of
processing (article 18 GDPR and article 20 EUDPR) as well as of
deletion/erasure of data (article 16 GDPR and article 19 EUDPR) should always
be guaranteed in those cases. Furthermore, the controller should have the
explicit obligation to inform the data subject of the applicable periods for
objection, restriction, deletion of data, etc. The [artificial intelligence] system
must be able to meet all data protection requirements through adequate technical
__________________
30 Ibid.
__________________
35 European Data Protection Supervisor, Opinion 4/2020, European Data Protection Supervisor Opinion on the European Commission’s White Paper on Artificial Intelligence – a European approach to excellence and trust, 29 June 2020, p. 14. Available at https://edps.europa.eu/sites/edp/files/publication/20-06-19_opinion_ai_white_paper_en.pdf.
36 European Data Protection Board and the European Data Protection Supervisor, “Joint Opinion 5/2021 on the proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)”, 18 June 2021, p. 22.
37 See https://www.redipd.org/sites/default/files/2020-02/guia-orientaciones-espec%C3%ADficas-proteccion-datos-ia.pdf, pp. 17–19.
• “Continuously inform data subjects so that they know how automated
decision-making can affect them and how to request human intervention when needed,
so they can make an informed decision as to whether or not to consent to the
processing”.
40. The Ibero-American Data Protection Network has noted that:
The information provided regarding the logic of the [artificial intelligence]
model should include at least the basic aspects of its operation, as well as the
weighting and correlation of the data, written in clear, simple and easily
understood language. It will not be necessary to provide a complete explanation
of the algorithms used or even to include them. 38
41. The Ibero-American Data Protection Network has called on those responsible
for the processing of data by artificial intelligence to be innovative in order to convey
information in a simple and concise manner, indicating that “[t]here are several
innovative approaches to providing privacy notices, including the use of videos,
cartoons and standardized icons. The use of a combination of approaches can help
make complex information on [artificial intelligence] easier for data subjects to
understand”. 39
42. The following paragraphs contain an illustrative and non-exhaustive set of
examples of countries that have explicitly or implicitly addressed in their local laws
the principle of transparency in the processing of personal data using artificial
intelligence.
43. In Ecuador, the Organic Data Protection Act, adopted in 2021, establishes in its
article 12, paragraphs 14 and 17, the right to be informed about the existence of the
right to not be subject to a decision based solely on automated evaluations, the manner
in which that right can be exercised and the existence of automated assessments and
decisions, including profiling.
44. The Act also stipulates that in cases in which data are obtained directly from
data subjects, the information shall be communicated in advance (at the time the
personal data are collected). Article 12 further states that:
When personal data are not obtained directly from the data subjects or when
they have been collected from sources accessible to the public, the data subjects
shall be informed within thirty (30) days or in the first communication they
receive, whichever occurs first. The data subjects shall be given clear,
unambiguous, transparent, understandable, concise and accurate information
with no technical hurdles.
45. In Peru, article 72 of the Implementing Regulations of Act No. 29733, the
Personal Data Protection Act, addresses the right to the objective processing of
personal data, stating as follows:
To uphold the right to objective processing pursuant to article 23 of the Act, 40
when personal data are processed as part of a decision-making process that does
not involve the data subject, the controller of the personal data database or the
__________________
38 See https://www.redipd.org/es/documentos/guia, pp. 17–19.
39 Ibid.
40 “Article 23. Right to objective processing. Data subjects have the right to not be subjected to a decision that has legal effects on them or affects them significantly and is supported only by the processing of personal data intended to evaluate certain aspects of their personalities or behaviour, unless it occurs during the negotiation, execution or performance of a contract or in cases of an evaluation for the purposes of taking a position at a public entity, pursuant to the law, without prejudice to the possibility of defending their point of view for the protection of their legitimate interests.”
controller of the processing shall inform the data subject without delay, except
as otherwise provided in the Regulations on the exercise of the other rights set
out in the Act and its […] Regulations.
46. In Sao Tome and Principe, Act No. 3/2016 of 2 May 2016, the Individual
Personal Data Protection Act, is unique in that it stipulates in its article 21 that
controllers or their representatives shall notify the National Personal Data Protection
Agency, in writing and no more than eight days before the processing is to begin, that
they will begin fully or partially automated processing or batch processing to achieve
one or more interrelated ends, with some exceptions. Article 11 of the Act also
provides that data subjects, when exercising their right to access, have the right to be
informed by the controller of the reasons behind the automated processing of data
concerning them.
47. In Uruguay, article 13 of Act No. 18331 of 11 August 2008, the Personal Data
Protection Act, establishes that data subjects have the right to be informed, in an
express, clear and unmistakable manner, prior to data collection, about the assessment
criteria, the processes applied and the technological solution or software utilized in
cases in which automated data processing is used to evaluate certain aspects of their
personality, such as job performance, creditworthiness, reliability and conduct, to
make decisions with legal effects that could significantly affect the data subjects. The
Act also states that “when personal data are not collected directly from the data
subjects, the information […] shall be provided to them within a period of five
business days from the date on which the request is received by the controllers”.
__________________
42 See https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai, p. 13.
43 Available at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ%3AJOC_2023_023_R_0001.
44 See https://www.redipd.org/sites/default/files/2020-02/guia-recomendaciones-generales-tratamiento-datos-ia.pdf, pp. 23 and 24.
45 See https://globalprivacyassembly.org/document-archive/adopted-resolutions/, p. 3.
46 See http://eur-lex.europa.eu/legal-content/ES/TXT/?uri=CELEX:32016R0679, art. 14, para. 2 (g).
56. The following table explains the most relevant aspects of each principle
according to the National Institute of Standards and Technology document: 49
Explanation Accuracy: This principle requires the technical explanation to be
rigorous, accurate and comprehensive.

Knowledge Limits: Identifying and declaring the limits of knowledge implies making
it clear that the system is neither perfect nor infallible, because the [artificial
intelligence] operates within certain limits and constraints within which it has
been programmed. Its outputs also depend on the quality and quantity of the
information processed, among other factors.
57. It has been argued that the explanation must: (a) “be understandable and
convincing to the user”; (b) “accurately reflect the system’s reasoning”; (c) “be
comprehensive”, and (d) “be specific in the sense that different users with different
__________________
47 Ibid., art. 15 (1).
48 National Institute of Standards and Technology, Four Principles of Explainable Artificial Intelligence, NISTIR 8312 (2021), p. 3. Available at https://doi.org/10.6028/NIST.IR.8312.
49 The explanation in the table is an adaptation and summary of the original English text cited and available at https://doi.org/10.6028/NIST.IR.8312.
__________________
50 Ignacio Gavilán, “Cuatro principios para una buena explicabilidad de los algoritmos” (2022). Available at https://ignaciogavilan.com/cuatro-principios-para-una-buena-explicabilidad-de-los-algoritmos/.
51 Ibid.
52 See https://unesdoc.unesco.org/ark:/48223/pf0000381137, p. 23.
53 Nelson Remolina Angarita, “Del principio de explicabilidad en la inteligencia artificial (notas preliminares)”, in Protección de datos personales: doctrina y jurisprudencia, Pablo Palazzi, ed., vol. III (Centre for Technology and Society, University of San Andrés, Buenos Aires, 2023).
54 Statutory Act No. 1581 of 2012, which establishes general provisions for the protection of personal data, art. 4 (d).
55 Statutory Act No. 2157 of 2021, which amends and supplements Statutory Act No. 1266 of 2008 and establishes general provisions on habeas data in relation to financial, credit, commercial, service and third-country information and other provisions, art. 5, para. 1.
they were obtained, and contest the decision before the person responsible or in
charge (with certain exceptions).
62. In Uruguay, article 16 of Act No. 18331 establishes that:
individuals have the right not to be subjected to a decision with legal effects that
significantly affects them that is based on automated data processing intended
to evaluate certain aspects of their personality, such as their job performance,
creditworthiness, reliability and conduct. Whoever is affected shall have the
right to obtain information from the person responsible for the database both on
the evaluation criteria and on the program used in the processing that was used
to reach the decision set out in the act.
VII. Conclusions
63. The following conclusions can be drawn from the foregoing:
(a) Transparency and explainability help to build trust in artificial
intelligence and to ensure respect for human rights;
(b) Developers of artificial intelligence must be transparent about how
data are processed (how they are collected, stored and used), and about how
decisions based on artificial intelligence are made, the reliability of such
decisions and the security of the information;
(c) Persons affected by decisions made on the basis of artificial
intelligence deserve a clear, simple, complete, truthful and understandable
explanation of the reasons for that decision. In that regard, the principle of
explainability is of cardinal importance not only because it aligns with the
principle of transparency, but also because it will make it possible to uphold such
persons’ right to a defence and due process;
(d) Explainability and transparency demand clarity, completeness,
truthfulness, impartiality and publicity of the decisions made using artificial
intelligence and of the logic, method or reasoning for making decisions about
human beings based on information, particularly personal data. Explainability
and transparency are, of course, the opposite of opacity, obscurity, deceit, lies
and abuse of computing power, which are some of the symptoms of illegal and
unethical processing that reflects a lack of respect for human beings and their
dignity.
VIII. Recommendations
64. In the light of the above, the Special Rapporteur urges States to:
(a) Promote transparency in artificial intelligence in order to mitigate the
risks that opacity may generate in society, especially with respect to the
protection of human rights;
(b) Incorporate into their laws the principle of explainability, not only to
enable people to understand how the decisions that affect them were made, but
also to provide them with the tools to defend their human rights in the face of
artificial intelligence;
(c) Promote ethical practices that ensure transparency and explainability
in the processing of personal data in artificial intelligence projects or processes;
(d) Foster, support and facilitate education and digital literacy to enable
citizens to better understand the concepts relating to artificial intelligence,
transparency and explainability, in order to be able to demand that their rights
be respected.