
Eye on Developments in Artificial Intelligence and Children's Rights: Artificial Intelligence in Education (AIEd), EdTech, Surveillance, and Harmful Content

Susan von Struensee, JD, MPH

Global Research Initiative Working Paper
June 2021

Artificial Intelligence (AI) systems, although often unnoticed, are becoming ubiquitous in the daily lives of all people, including children. AI is used in cities for public safety and traffic management;1 in hospitals, through applications in devices that assist doctors in detecting diseases;2 in education, with algorithms that create possibilities for customized learning3 or facial recognition technologies;4 and in entertainment,5 to name a few.

1 Thierer, Adam D. and Castillo O'Sullivan, Andrea and Russell, Raymond, Artificial Intelligence and Public Policy (August 17, 2017). Mercatus Research Paper. Available at SSRN: https://ssrn.com/abstract=3021135 or http://dx.doi.org/10.2139/ssrn.3021135; and Berk, Richard, Artificial Intelligence, Predictive Policing, and Risk Assessment for Law Enforcement (January 2021). Annual Review of Criminology, Vol. 4, pp. 209-237 (2021). Available at SSRN: https://ssrn.com/abstract=3777804 or http://dx.doi.org/10.1146/annurev-criminol-051520-012342
2 What Doctor? Why AI and Robotics Will Define New Health, PWC (June 2017),
https://www.pwc.com/gx/en/industries/healthcare/publications/ai-robotics-new-health/ai-robotics-new-
health.pdf.
3 Zeide, Elana, Robot Teaching, Pedagogy, and Policy (2019). Forthcoming in The Oxford Handbook of Ethics of AI, Oxford University Press (Markus D. Dubber, Frank Pasquale, and Sunit Das eds.). Available at SSRN: https://ssrn.com/abstract=3441300
4 E.g., Barrett, Lindsey, Ban Facial Recognition Technologies for Children—And for Everyone Else (July 24,
2020). Boston University Journal of Science and Technology Law. Volume 26.2, Available at SSRN:
https://ssrn.com/abstract=3660118 (“Facial recognition technologies enable a uniquely dangerous and
pervasive form of surveillance, and children cannot escape it any more than adults can. Facial recognition
technologies have particularly severe implications for privacy, as they can weaponize existing photographic
databases in a way that other technologies cannot, and faces are difficult or impossible to change, and often
illegal to publicly obscure. Their erosion of practical obscurity in public threatens both privacy and free
expression, as it makes it much harder for people to navigate public spaces without being identified, and
easier to quickly and efficiently identify many people in a crowd at once. To make matters even worse, facial
recognition technologies have been shown to perform less accurately for people of color, women, non-binary
and transgender people, children, and the elderly, meaning that they have the potential to enable
discrimination in whatever forum they are deployed. As these technologies have developed and become
more prevalent, children are being subjected to them in schools, at summer camp, and other child-specific
contexts, as well as alongside their parents, through CCTV, private security cameras, landlord-installed
apartment security systems, or by law enforcement. The particular vulnerability of young people relative to
adults might make them seem like natural candidates for heightened protections from facial recognition
technologies. Young people have less say over where they go and what they do, inaccurate evaluations of
their faces could have a particularly strong impact on their lives in contexts like law enforcement uses, and
the chilling effects of these technologies on free expression could constrain their emotional and intellectual
development. At the same time, some of the harms young people experience are near-universal privacy
harms, such as the erosion of practical obscurity, while the discriminatory harms of facial recognition’s
inaccurate assessment of their faces are shared by other demographic groups.”)
5 Hasse, Alexa and Cortesi, Sandra Clio and Lombana-Bermudez, Andres and Gasser, Urs, Youth and Artificial
Intelligence: Where We Stand (May 24, 2019). Berkman Klein Center Research Publication No. 2019-3,
Available at SSRN: https://ssrn.com/abstract=3385718 or http://dx.doi.org/10.2139/ssrn.3385718 (This
article seeks to share Youth and Media’s initial learnings and key questions around the intersection between
AI and youth (ages 12-18), in the context of domains such as education, health and well-being, and the
future of work. It aims to encourage various stakeholders — including policymakers, educators, and parents

Electronic copy available at: https://ssrn.com/abstract=3882296
The rise of AI is a global and ubiquitous phenomenon associated with what many have called the Fourth Industrial Revolution.6

Despite all this technological development and its potentially positive aspects, including the automation of various processes and the facilitation of human life as a whole, AI has provoked a series of ethical questions and related discussions around human rights and security.7 These immediate concerns are distinct from debates over “the singularity,” a hypothesized future in which machine technologies merge with human biology and physiology8 through the use of general or super AI.9

The challenges of these AI-based systems are even more complex when one considers the demographics of the countries of the Global South, which have large numbers of children per family. AI can contribute substantially to the mitigation of structural inequalities. It can assist in guaranteeing human rights such as the right to adequate food, basic sanitation, quality education, employability, and security. However, it can also exacerbate preexisting discrimination, including in education, impacting children's access to education and their enjoyment of it.10

It is necessary to continue analyzing how AI directly or indirectly impacts the educational processes of children, including in the Global South, where the structural challenges of formal education, as impacted by AI, are increasing.11 Oversight of the responsibility of participating entities to respect and protect the rights of children, including governments and states (in the development of policy) as well as technology companies (in the design, development, and provision of AI technologies, products, and services), will need to be

and caregivers — to consider how we can empower young people to meaningfully interact with AI based
technologies to promote and bolster learning, creative expression, and wellbeing, while also addressing key
challenges and concerns.)
6 Technological innovation has been changing the economic and social landscape for the past 300 years, from the First Industrial Revolution (water and steam power), to the Second Industrial Revolution (electric power and the assembly line), to the Third Industrial Revolution (also called the digital revolution, comprising computers and the Internet). See The Future Computed: Artificial Intelligence and Its Role in Society, MICROSOFT at 93 (2018), https://news.microsoft.com/uploads/2018/01/The-Future-Computed.pdf. AI is the latest technological innovation and has been coined by some as the Fourth Industrial Revolution. As explained by Klaus Schwab, Founder and Executive Chairman of the World Economic Forum: “There are three reasons why today’s transformations represent not merely a prolongation of the Third Industrial Revolution but rather the arrival of a Fourth and distinct one: velocity, scope, and systems impact. The speed of current breakthroughs has no historical precedent. When compared with previous industrial revolutions, the Fourth is evolving at an exponential rather than a linear pace. Moreover, it is disrupting almost every industry in every country. And the breadth and depth of these changes herald the transformation of entire systems of production, management, and governance.” Klaus Schwab, The Fourth Industrial Revolution: What It Means, How to Respond, WORLD ECON. F. (Jan. 14, 2016), https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-torespond/.
7 Raso, Filippo and Hilligoss, Hannah and Krishnamurthy, Vivek and Bavitz, Christopher and Kim, Levin Yerin,
Artificial Intelligence & Human Rights: Opportunities & Risks (September 25, 2018). Berkman Klein Center
Research Publication No. 2018-6, Available at SSRN: https://ssrn.com/abstract=3259344 or
http://dx.doi.org/10.2139/ssrn.3259344
8 Rory Cellan-Jones, 'Stephen Hawking warns artificial intelligence could end mankind', BBC, Dec. 2014. Available at: https://www.bbc.com/news/technology-30290540
9 Haney, Brian S., The Perils & Promises of Artificial General Intelligence, 45 J. Legis. 151 (2018). Available at SSRN: https://ssrn.com/abstract=3261254 or http://dx.doi.org/10.2139/ssrn.3261254
10 Isabella Henriques and Pedro Hartung, Children's Rights by Design in AI Development for Education, International Review of Information Ethics, Vol. 29 (03/2021), https://informationethics.ca/index.php/irie/article/view/424/401
11 Isabella Henriques and Pedro Hartung, Children's Rights by Design in AI Development for Education, International Review of Information Ethics, Vol. 29 (03/2021), https://informationethics.ca/index.php/irie/article/view/424/401

developed alongside the utilization of a Children's Rights by Design (CRbD) standard focused on the best interests of children,12 as contemplated by the UN Convention on the Rights of the Child.13

AI technologies both positively and negatively impact children’s human rights.14 There are valuable opportunities to use artificial intelligence in ways that maximize children’s well-being, but there are critical questions that we need to ask and answer in order to better protect children from the potential negative impacts of artificial intelligence.

Many are working on addressing the challenges and opportunities children and youth encounter in the digital environment.15 How can artificial intelligence be leveraged to protect, benefit, and empower youth globally?16

As UNICEF and other organizations emphasize, we must pay specific attention to children and the evolution of AI technology so that child-specific rights and needs are recognized. The potential impact of artificial intelligence on children deserves special attention, given children’s heightened vulnerabilities and the numerous roles that artificial intelligence will play throughout the lifespan of individuals born in the 21st century.

12 Isabella Henriques and Pedro Hartung, Children's Rights by Design in AI Development for Education, International Review of Information Ethics, Vol. 29 (03/2021), https://informationethics.ca/index.php/irie/article/view/424/401
13 United Nations, Convention on the Rights of the Child. Available at https://www.ohchr.org/en/professionalinterest/pages/crc.aspx
14 UNICEF, Artificial Intelligence and Children’s Rights, 2018
https://www.unicef.org/innovation/media/10726/file/Executive%20Summary:%20Memorandum%20on
%20Artificial%20Intelligence%20and%20Child%20Rights.pdf (“The authoring team of this memorandum
are Mélina Cardinal-Bradette, Diana Chavez-Varela, Samapika Dash, Olivia Koshy, Pearlé Nwaezeigwe,
Malhar Patel, Elif Sert, and Andrea Trewinnard, who conducted their research and writing under the
supervision of Alexa Koenig of the UC Berkeley Human Rights Center”)
15 Cortesi, Sandra Clio and Gasser, Urs and Adzaho, Gameli and Baikie, Bruce and Baljeu, Jacqueline and
Battles, Matthew and Beauchere, Jacqueline and Brown, Elsa and Burns, Jane and Burton, Patrick and
Byrne, Jasmina and Colombo, Maximillion and Douillette, Joseph and Escobar, Camila and Flores, Jorge and
Ghebouli, Zinelabidine and Gonzalez-Allonca, Juan and Gordon, Eric and Groustra, Sarah and Hertz, Max and
Junco, Reynol and Khan, Yasir and Kimeu, Nicholas and Kleine, Dorothea and Krivokapic, Djordje and Kup,
Viola and Kuzeci, Elif and Latorre Guzmán, María and Li, David and Limbu, Minu and Livingstone, Sonia and
Lombana-Bermudez, Andres and Massiel, Cynthia and McCarthy, Claire and Molapo, Maletsabisa and Mor,
Maria and Newman, Sarah and Nutakor, Eldad and Onoka, Christopher and Onumah, Chido and Passeron,
Ezequiel and Pawelczyk, Katarzyna and Roque, Ricarose and Rudasingwa, Kanyankore and Shah, Nishant
and Simeone, Luca and Siwakwi, Andrew and Third, Amanda and Wang, Grace, Digitally Connected: Global
Perspectives on Youth and Digital Media (March 26, 2015). Berkman Center Research Publication No. 2015-
6, Available at SSRN: https://ssrn.com/abstract=2585686 or http://dx.doi.org/10.2139/ssrn.2585686
(Reflecting on the 25th anniversaries of the invention of the World Wide Web by Sir Tim Berners-Lee and
the adoption of the Convention on the Rights of the Child by the UN General Assembly, the Berkman Center for
Internet & Society at Harvard University and UNICEF co-hosted in April 2014 — in collaboration with PEW
Internet, EU Kids Online, the Internet Society (ISOC), Family Online Safety Institute (FOSI), and
YouthPolicy.org — a first of its kind international symposium on children, youth, and digital media to map
and explore the global state of relevant research and practice, share and discuss insights and ideas from the
developing and industrialized world, and encourage collaboration between participants across regions and
continents. With a particular focus on voices and issues from the Global South, the symposium addressed
topics such as inequitable access, risks to safety and privacy, skills and digital literacy, and spaces for
participation, and civic engagement and innovation. The event also marked the launch of Digitally Connected
— an initiative that brings together academics, practitioners, young people, activists, philanthropists,
government officials, and representatives of technology companies from around the world who, together,
are addressing the challenges and opportunities children and youth encounter in the digital environment.)
16 World Economic Forum, Empowering Generation AI , Sustainable Development Summit 2020, (Dec. 31,
2020) https://www.youtube.com/watch?v=gi625laHeGs

As AI-based technologies become increasingly integrated into modern life, the onus is on companies, governments, researchers, and parents, among others, to consider the ways in which such technologies impact children’s human rights. As much of the underlying technology is proprietary to corporations, corporations’ willingness and ability to incorporate human rights considerations into the development and use of such technologies will be critical. Governments will also need to work with corporations, parents, children, and other stakeholders to create policies that safeguard children’s human rights and related interests.17

There are valuable opportunities to use artificial intelligence in ways that maximize children’s well-being, but critical work needs to be done in order to better protect children from AI's negative consequences.

AI, Machine Learning, and Deep Learning

The terms artificial intelligence, machine learning, and deep learning are often used interchangeably by the general public to reflect the concept of replicating “intelligent” behavior in machines.

Generally, AI refers to a sub-field of computer science focused on building machines and software that can mimic such behavior. Machine learning is the sub-field of artificial intelligence that focuses on giving computer systems the ability to learn from data. Deep learning is a subcategory of machine learning that uses neural networks to learn to represent and extrapolate from a dataset. Machine learning and deep learning processes impact children’s lives, and ultimately their human rights, in numerous ways; artificial intelligence technologies are already being used in ways that positively or negatively affect children at home, at school, and at play.18
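The relationship between these three terms can be made concrete with a small sketch. The following minimal example, written for illustration only and not drawn from any system discussed in this paper, shows "learning from data" in the machine-learning sense: a model starts with arbitrary parameters and adjusts them step by step to fit examples. Deep learning applies the same principle, but with many stacked layers of parameters (a neural network) rather than a single weight and bias.

```python
# A minimal illustration of machine learning: fit the rule behind a handful
# of (x, y) examples by gradient descent on a mean-squared-error loss.
# The "true" rule generating the data is y = 2x + 1; the model must recover
# it from the examples alone.

def fit_line(xs, ys, lr=0.01, steps=5000):
    """Learn a weight w and bias b so that w*x + b approximates the data."""
    w, b = 0.0, 0.0  # arbitrary starting parameters
    n = len(xs)
    for _ in range(steps):
        # Gradients of the mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        # Step each parameter a little way downhill on the loss surface.
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]  # examples generated by the hidden rule

w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

Run as written, the learned parameters converge to the rule that generated the data (w ≈ 2, b ≈ 1). Production AIEd systems apply this same fit-to-data loop at vastly larger scale, which is one reason biases present in training data can propagate into a model's behavior.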

Role of AI in Children’s Lives

The role of artificial intelligence in children’s lives—from how children play, to how they are educated, to how
they consume information and learn about the world—is expected to increase exponentially. A number of
initiatives have started to map the impact of AI on children.19 Thus, it is imperative that stakeholders come together now to evaluate the risks of using such technologies and assess opportunities to use artificial intelligence to maximize children’s well-being in a thoughtful and systematic manner. As part of this assessment,
stakeholders should work together to map the potential positive and negative uses of AI on children’s lives, and
develop a child rights-based framework for artificial intelligence that delineates rights and corresponding duties
for developers, corporations, parents, and children around the world.


17 Cedric Villani, “For a Meaningful Artificial Intelligence Towards a French and European Strategy,” March 8,
2018, available at https://www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf.
18 UNICEF Office of Innovation, UNICEF Innovation Home Page, available at https://www.unicef.org/innovation/.
19 UNICEF. 2020. ‘Policy Guidance on AI for Children (Draft)’.
https://www.unicef.org/globalinsight/media/1171/file/UNICEF-Global-Insight-policyguidance-AI-children-
draft-1.0-2020.pdf. See also Kardefelt-Winther, Daniel. 2017. ‘How Does the Time Children Spend Using
Digital Technology Impact Their Mental Well-Being, Social Relationships and Physical Activity?: An Evidence
Focused Literature Review’. Innocenti Discussion Papers 2017/02. Vol. 2017/02. Innocenti Discussion
Papers. UNICEF. https://doi.org/10.18356/cfa6bcb1-en.

The Convention on the Rights of the Child

The United Nations Convention on the Rights of the Child (CRC),20 adopted by the UN General Assembly on 20 November 1989,21 provides the international legal framework for children’s rights.22 The CRC is the most comprehensive legal framework that protects children, defined as human beings under 18 years of age, as rights bearers.23 Until a few years ago,24 the CRC did not contain an actual enforcement mechanism, which was considered a manifest flaw. Children could not file complaints, and the Convention could not be tested in specific cases by the courts.25 In 2011, however, the Optional Protocol on a Communications Procedure was adopted,26 which allows individual children to submit complaints regarding specific violations of their rights under the Convention and its first two optional protocols. The Protocol entered into force in April 2014.27 In addition, the CRC has a symbolic function28 and a strong moral force.29 The UN Committee on the Rights of the Child monitors the implementation of the CRC and issues critical remarks or recommendations.30 It is then up to national governments to take these into account.

20 United Nations, Convention on the Rights of the Child, 20.11.1989, http://www.unhchr.ch/html/menu3/b/k2crc.htm [hereinafter: CRC].
21 Previous international documents on children’s rights were: “Declaration on the Rights of Child”, adopted by
the League of Nations in 1924, and the 1959 “UN Declaration on the Rights of Child”, which was adopted
unanimously by the General Assembly of the United Nations on 20 November 1959,
http://www.unhchr.ch/html/menu3/b/25.htm. For a detailed overview cf. Van Bueren, Geraldine, The
international law on the rights of the child, Dordrecht, Martinus Nijhoff Publishers, 1995, 6‐12.
22 See also Commission of the European Communities, Commission Staff working document accompanying the
Communication from the Commission Towards an EU strategy on the rights of the child, Impact assessment,
COM (2006) 367 final, SEC (2006) 888, 04.07.2006,
http://register.consilium.europa.eu/pdf/en/06/st12/st12107‐ad01.en06.pdf, 6: “The UNCRC provides a
coherent and comprehensive framework against which to evaluate legislation, policy, structures and
actions”.
23 UN General Assembly, “Convention on the Rights of the Child, 20 November 1989,” United Nations, Treaty
Series, vol. 1577, p. 3, Article 1. See also von Struensee, Susan, Highlights of the United Nations Children's
Convention and International Response to Children's Human Rights, Suffolk Transnational Law Review, Vol.
18, Issue 2 (Summer 1995), pp. 589-628 Available at SSRN: https://ssrn.com/abstract=657363
24 Kilkelly, Ursula, “The best of both worlds for children’s rights? Interpreting the European Convention on
Human Rights in the light of the UN Convention on the Rights of the Child”, Human Rights Quarterly 2001,
Vol. 23, 309; McLaughlin, Sharon, Rights v. restrictions. Recognising children’s participation in the digital
age, in Brian O’Neill, Elisabeth Staksrud, Sharon McLaughlin, Towards a Better Internet for Children? Policy
Pillars, Players and Paradoxes, Nordicom, 2013, 316. For more on the implementation of CRC cf. Van
Bueren, Geraldine, The international law on the rights of the child, Dordrecht, Martinus Nijhoff Publishers,
1995, 378‐422.
25 Bainham, Andrew, Children – the modern law, Bristol, Family Law, 2005, 67. But supranational courts, such as the European Court of Justice, did refer to the CRC in their case law.
26 United Nations, Optional Protocol on a Communications Procedure, 2011,
https://treaties.un.org/doc/source/signature/2012/ctc_4‐11d.pdf. See, e.g., Spronk, Sarah Ida, Realizing
Children’s Right to Health: Additional Value of the Optional Protocol on a Communications Procedure for
Children (August 10, 2012). Available at SSRN: https://ssrn.com/abstract=2127644 or
http://dx.doi.org/10.2139/ssrn.2127644 and Binford, W. Warren Hill, Utilizing the Communication
Procedures of the ACERWC and the UNCRC (October 29, 2012). Available at SSRN:
https://ssrn.com/abstract=2209507. or http://dx.doi.org/10.2139/ssrn.2209507
27 The entry into force of the Third Optional Protocol on a Communications Procedure (OPIC) in 2014 was
groundbreaking as it allowed children to lodge complaints with the UN about violations of their rights, if
violations cannot be addressed effectively at national level. However, to advance access to justice for
children, it is important to increase States’ ratification of the OPIC and to work for its effective
implementation at the national level. In 2021, seven years since the entry into force of the Optional
Protocol, 47 States have ratified the OPIC, 17 have signed but not yet ratified it, and 133 have taken no
action. https://opic.childrightsconnect.org/ratification-status/ and
https://treaties.un.org/Pages/ViewDetails.aspx?src=TREATY&mtdsg_no=IV-11-d&chapter=4&clang=_en

The CRC aims to ensure children’s equality of treatment by States. The CRC is the key international instrument
on children’s rights and represents an extraordinary level of international consensus on the legal rights that
children should have.31 The Convention imposes obligations on 195 states parties32 to provide legal protection
for a wide range of rights that inhere in children by virtue of their human dignity. Many of these rights inhere in
all human beings and were therefore already protected by pre-existing instruments of international law such as
the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic,
Social and Cultural Rights (ICESCR). The CRC aims to emphasize that these rights apply equally to children,
regardless of their age, and to provide explicit measures to ensure that children can enjoy these rights on an
equal basis with other human beings.33

The Convention grants rights to children across categories often referred to as the three Ps: Protection (from harm, violence, or exploitation); Provision (of the resources and services necessary for a decent life); and Participation (in society and in decisions affecting the child).34

In addition, the Convention makes specific provision for some rights that are particular to children due to their
stage of development and their comparatively disempowered position in society. It seems clear that some of the
rights protected in the CRC are either not relevant, or less relevant, to adults with full legal capacity, and thus
do not tend to feature in the general human rights conventions. These include the best interests principle in
Article 3; the right to special protection and assistance for children deprived of their family environment in
Article 20; the right to development under Article 6; and a range of rights in Articles 7-12, including the right to
name and nationality, preservation of identity, the right to maintain contact with parents, and the right to
express views in all matters affecting the child. In this way, the CRC does not just re-state that children enjoy
the same rights as adults, but supplements the rights afforded to adults with important child-specific rights.
Cutting across the CRC as a whole are four general principles that have been identified by the Committee on the Rights of the Child (hereinafter ‘the CRC Committee’): the right to life, survival and development (Article 6); non-discrimination (Article 2); that the best interests of children should be a primary consideration in all matters affecting them (Article 3); and the right of children to participate in decisions affecting them (Article 12).35

28 Van Bueren, Geraldine, The international law on the rights of the child, Dordrecht, Martinus Nijhoff
Publishers, 1995, xx.
29 Kilkelly, Ursula, “The best of both worlds for children’s rights? Interpreting the European Convention on
Human Rights in the light of the UN Convention on the Rights of the Child”, Human Rights Quarterly 2001,
Vol. 23, 310.
30 Kilkelly, Ursula, “The best of both worlds for children's rights? Interpreting the European Convention on
Human Rights in the light of the UN Convention on the Rights of the Child”, Human Rights Quarterly 2001,
Vol. 23, 309.
31 O'Mahony, Conor, Constitutional Protection of Children’s Rights: Visibility, Agency and Enforceability (January
28, 2019). (2019) 19 Human Rights Law Review, Forthcoming, Available at SSRN:
https://ssrn.com/abstract=3324280 or http://dx.doi.org/10.2139/ssrn.3324280
32 Eugeen Verhellen, ‘The Convention on the Rights of the Child: Reflections from a historical, social policy and
educational perspective’ in Wouter Vanderhole (ed), Routledge International Handbook of Children’s Rights
Studies (Oxford: Routledge, 2015) at 43.
33 Id. at 48. See also Adam Lopatka, ‘Introduction’ in Legislative History of the Convention on the Rights of the
Child (New York/Geneva: United Nations, 2007) at xxxvii, available at
http://www.ohchr.org/Documents/Publications/LegislativeHistorycrc1en.pdf. For a skeptical view of the
strategy of providing children with ‘special rights’ rather than relying on general rights guarantees, see
James G. Dwyer, ‘Inter-Country Adoption and the Special Rights Fallacy’ (2013) University of Pennsylvania
Journal of International Law 189 at 198-208.
34 For a discussion of the three Ps and how they interact with the four general principles of the CRC, see
Verhellen, supra at 49-50.
35 Committee on the Rights of the Child, General Comment No. 5 (2003): General measures of implementation
of the Convention on the Rights of the Child, CRC/GC/2003/5, 27 November 2003 at para 12.

National constitutions take different approaches to the protection of children’s rights. The CRC Committee has
set out what it describes as a child rights approach, defined in General Comment No. 13 as:

A child rights approach is one which furthers the realization of the rights of all children as set out in the
Convention by developing the capacity of duty bearers to meet their obligations to respect, protect and fulfill rights (art. 4) and
the capacity of rights holders to claim their rights, guided at all times by the rights to non-discrimination (art. 2),
consideration of the best interests of the child (art. 3, para. 1), life, survival and development (art. 6), and
respect for the views of the child (art. 12). Children also have the right to be directed and guided in the exercise
of their rights by caregivers, parents and community members, in line with children’s evolving capacities (art. 5).
This child rights approach is holistic and places emphasis on supporting the strengths and resources of the child
him/herself and all social systems of which the child is a part: family, school, community, institutions, religious
and cultural systems. 36

Moreover, the Committee has stressed in General Comment No. 5 that it is not enough for the law to say that
children have rights along the lines set out above: it must give meaning to those rights by providing a means for
their enforcement:

For rights to have meaning, effective remedies must be available to redress violations. This
requirement is implicit in the Convention and consistently referred to in the other six major
international human rights treaties. Children’s special and dependent status creates real
difficulties for them in pursuing remedies for breaches of their rights. So States need to give
particular attention to ensuring that there are effective, child-sensitive procedures available
to children and their representatives. 37

More than a binding international document, the Convention is an ethical 38 and legal framework for assessing
states’ progress or regress on issues of particular interest to children. 39 Because of the recent exponential

36 Committee on the Rights of the Child, General Comment No. 13: Article 19: the right of the child to freedom
from all forms of violence, CRC/C/GC/13, 18 April 2011 at para 59.
37 Committee on the Rights of the Child, General Comment No. 5: General measures of implementation of the
Convention on the Rights of the Child, CRC/GC/2003/5, 27 November 2003 at para 24.
38 See, e.g., McGee, Robert W., Abolishing Child Labor: Some Overlooked Ethical Issues (May 20, 2016).
Available at SSRN: https://ssrn.com/abstract=2782715 or http://dx.doi.org/10.2139/ssrn.2782715 ( Under a
rights regime, any act or policy that violates someone’s rights is automatically labeled as unethical). See also
Mousin, Craig B., Rights Disappear When US Policy Engages Children As Weapons of Deterrence (January 1,
2019). AMA J Ethics. 2019;21(1):E58-66. , Available at SSRN: https://ssrn.com/abstract=3317913
(“Although the United States provided significant guidance in drafting the Convention on the Rights of the
Child (CRC) it has never ratified the convention. The failure to ratify has taken on critical significance in light
of new federal policies that have detained over 15,000 children in 2018, separated families, accelerated
removal of asylum seekers, and emphasized deterring families from seeking asylum. This article raises
ethical and health implications of these refugee policies in light of the United States’ failure to ratify the CRC.
It first examines the development of the CRC and international refugee law. It next lists some of the new
policies and case law implemented by Attorney General Sessions in 2018 that have led to the detention and
separation of children from their family, undermining legal protections for asylum applicants. The CRC calls
for governments to examine the best interests of children seeking refugee status, but federal policies
preclude consideration of that goal. In addition, although the CRC calls for appropriate legal protection for
children, current policies neglect that goal and instead criminalize children and families before they have
been provided with legal representation or assistance. Such policies exacerbate the trauma of children
fleeing violence in their homeland and undergoing the risks of flight. This article raises ethical issues
including whether judges and lawyers for the government should participate in legal proceedings when
toddlers appear unrepresented. The failure to ratify the CRC in conjunction with these new deterrence
polices undermines legal protections for children worldwide.”)
39 See, e.g., Meier, Benjamin Mason and Motlagh, Mitra and Rasanathan, Kumanan, The United Nations
Children's Fund: Implementing Human Rights for Child Health (April 26, 2018). Human Rights in Global
Health: Rights-Based Governance for a Globalizing World (Oxford University Press 2018), Available at SSRN:
https://ssrn.com/abstract=3214766; See generally UNICEF “State of the World’s Children” 2019 available at

advancement of artificial intelligence-based technologies, the current international framework protecting
children’s rights, like local frameworks, does not explicitly address many of the issues raised by the
development and use of artificial intelligence. 40

AI Impacts Children

AI impacts children. They are exposed to algorithms at home, at school, and at play. Algorithms shape the
environments in which children live, the services they have access to, and how they spend their time. Children play
with interactive smart toys, watch videos recommended by algorithms, use voice commands to control their
phones, and use image-manipulation algorithms for fun on social media.

The presence of AI in children’s lives raises many questions. Is it acceptable to use recommendation algorithms
with children, or to provide an interactive toy if the child cannot understand that they are dealing with a
computer? How should parents be advised on the possible impact of AI-based toys on a child’s cognitive
development? What should children learn about AI in school in order to have a sufficient understanding of the
technology around them? At what point should a child be given the right to decide for themselves about the
consents involved? How long should their data be stored?

As UNICEF and other organizations emphasize, we must pay specific attention to children and to the evolution of
AI technology, so that child-specific rights and needs are recognized. The potential impact of artificial
intelligence on children deserves special attention, given children’s heightened vulnerabilities and the numerous
roles that artificial intelligence will play throughout the lifespan of individuals born in the 21st century.

The CRC Identifies Several Rights Implicated by AI Technologies

The CRC identifies several rights implicated by AI technologies 41, and thus provides an important starting place
for any analysis of how children’s rights may be positively or negatively affected 42 by new technologies. 43 Since

https://www.unicef.org/reports/state-of-worlds-children-2019 and https://www.unicef.org/reports/state-of-worlds-children
40 UNICEF, AI for children, https://www.unicef.org/globalinsight/featured-projects/ai-children (Recent
progress in the development of artificial intelligence (AI) systems, unprecedented amounts of data to train
algorithms, and increased computing power are expected to profoundly impact life and work in the 21st
century, raising both hopes and concerns for human development. However, despite the growing interest in
AI, little attention is paid to how it will affect children and their rights.) See also
https://www.unicef.org/innovation/GenerationAI and https://www.weforum.org/projects/generation-ai
41 Verdoodt, Valerie, The Role of Children's Rights in Regulating Digital Advertising (2019). International
Journal of Children’s Rights, 27 (3), 455-481, 2019, doi: 10.1163/15718182-02703002., Available at SSRN:
https://ssrn.com/abstract=3703312 or http://dx.doi.org/10.2139/ssrn.3703312 (“An important domain in
which children’s rights are reconfigured by internet use, is digital advertising. New advertising formats such
as advergames, personalized and native advertising have permeated the online environments in which
children play, communicate and search for information. The often immersive, interactive and increasingly
personalized nature of these advertising formats makes it difficult for children to recognize and make
informed and well-balanced commercial decisions. This raises particular issues from a children’s rights
perspective, including inter alia their rights to development (Article 6 UNCRC), privacy (Article 16 UNCRC),
protection against economic exploitation (Article 32 UNCRC), freedom of thought (Article 17 UNCRC) and
education (Article 28 UNCRC). The paper addresses this reconfiguration by translating the general principles
and the provisions of the United Nations Convention on the Rights of the Child into the specific context of
digital advertising. Moreover, it considers the different dimensions of the rights (i.e. protection, participation
and provision) and how the commercialization affects children and how their rights are exercised.”)
42 Lievens, Eva, "A children’s rights perspective on the responsibility of social network site providers", 25th
European Regional Conference of the International Telecommunications Society (ITS), Brussels, Belgium,
2014. https://www.econstor.eu/bitstream/10419/101441/1/795276834.pdf
43 See, e.g. Livingstone, Sonia, John Carr, and Jasmina Byrne, "One in Three: Internet Governance and
Children’s Rights", Paper Series Centre for International Governance Innovation and the Royal Institute of

the creation of the CRC, it has been accepted across the globe that children are entitled to a number of
fundamental rights that are important in the media environment, including freedom of expression (article 13 CRC) and the
right to privacy (article 16 CRC). At the same time, children sometimes need to be protected, for instance, from
content or behavior that may harm them (article 17, infra; article 19, concerning protection from all forms of
violence; and article 34, concerning protection from sexual exploitation, CRC).

Article 13 confirms the child‐specific version 44 of the right to freedom of expression, 45 “which includes the freedom
to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or
in print, in the form of art, or through any other media of the child’s choice”. 46 This fundamental right can only
be restricted if this is provided by law and necessary “for respect of the rights or reputations of others, or for the
protection of national security or of public order, or of public health or morals” (para. 2). The article has a broad
scope of application, which certainly extends to the internet as well as any other (future) medium. Recently, the
UN Committee on the Rights of the Child emphasized that the increasing extent to which information and
communication technologies are a central dimension in the lives of children entails that (equal) access to the
internet and social media for them is crucial, also for the realization of other rights closely linked to the right to
freedom of expression, such as the right to leisure, play and culture (article 31 CRC). 47

Equally important is the child’s right to privacy, formulated in article 16 CRC. 48 According to this article, children
cannot be subjected to any arbitrary or unlawful interference – by state authorities or by others (e.g., private
organizations)49 – with their privacy, family, home or correspondence, nor to unlawful attacks on their honor and
reputation. Moreover, it is clearly stated that the law should protect a child against such interference. The right

International Affairs, 2015, 22. https://www.unicef-irc.org/publications/795-one-in-three-internet-governance-and-childrens-rights.html (“Typically, in the discussions around the use of the Internet, children
are acknowledged only in the context of child protection while their rights to provision and participation are
overlooked. This paper specifically argues against an age-generic (or ‘age-blind’) approach to ‘users’,
because children have specific needs and rights that are not met by governance regimes designed for
‘everyone’. Policy and governance should now ensure children’s rights to access and use digital media and
consider how the deployment of the Internet by wider society can enhance children’s rights across the
board. The paper ends with six conclusions and recommendations about how to embed recognition of
children’s rights in the activities and policies of international Internet governance institutions.”)
44 Kilkelly, Ursula, “The best of both worlds for children’s rights? Interpreting the European Convention on
Human Rights in the light of the UN Convention on the Rights of the Child”, Human Rights Quarterly 2001,
Vol. 23, 311.
45 Similar articles are article 19 Universal Declaration of Human Rights, article 19 International Covenant of
Civil and Political Rights, and article 10 European Convention on Human Rights and Fundamental Freedoms.
46 The United Nations Committee on the Rights of the Child has stressed that it is not sufficient to just include
the ‘general’ right to freedom of expression applicable to everyone in a country’s constitution. It is
necessary, according to the Committee, to also expressly incorporate the child’s right to freedom of
expression in legislation. See for instance: United Nations Committee on the Rights of the Child, General
Guidelines for Periodic Reports, CRC/C/58, 20.11.1996,
http://www.unhchr.ch/tbs/doc.nsf/(Symbol)/CRC.C.58.En?Opendocument: “States parties are requested to
provide information on the measures adopted to ensure that the civil rights and freedoms of children set
forth in the Convention, in particular those covered by articles 7, 8, 13 to 17 and 37 (a), are recognized by
law specifically in relation to children and implemented in practice, including by administrative and judicial
bodies, at the national, regional and local levels, and where appropriate at the federal and provincial levels”.
47 United Nations Committee on the Rights of the Child, General comment No. 17 (2013) on the right of the
child to rest, leisure, play, recreational activities, cultural life and the arts (art. 31), UN Doc. CRC/C/GC/17,
2013, n° 45.
48 This is a child‐specific ‘translation’ of the general right to privacy, which is granted to everyone by, inter alia,
article 12 Universal Declaration on Human Rights, article 17 International Covenant on Civil and Political
Rights, and article 8 European Convention on Human Rights and Fundamental Freedoms.
49 Hodgkin, Rachel and Newell, Peter, Implementation handbook for the Convention on the Rights of the Child,
New York, UNICEF, 2002, 216.

to privacy is directed at the child itself and is to be protected in all situations. 50 In the online environment,
privacy issues could arise, for instance, with respect to identification mechanisms or the collection of
children’s personal data by service providers. Furthermore, monitoring a child’s internet use could be considered
in conflict with the child’s right to privacy. Finally, according to article 16, parents may not interfere with
their child’s correspondence either. There is no reason to limit the application of this article to ‘paper’ correspondence,
so monitoring e‐mail conversations could conflict with the child’s right to privacy as well.

Another crucial article with regard to media content and services is article 17 CRC. 51 This article requires states to
ensure that children have access to “information and material from a diversity of national and international
sources, especially those aimed at the promotion of his or her social, spiritual and moral well‐being and physical
and mental health”, 52 since access to a wide diversity of information is a prerequisite for the exercise of other
fundamental rights, most importantly the right to freedom of expression. States are thus encouraged to pursue a
proactive policy which stimulates the cultural, educational and informational potential of media with respect to
children. At the same time article 17 CRC also encourages the development of guidelines to protect children
from harmful material. On the one hand, the internet and other new media technologies enable children to
access a huge variety of educational material 53 and cultural opportunities, as “powerful tool[s] that can help to
meet children’s rights under the CRC (e.g., to participation, information and freedom of expression)”. 54

The Committee on the Rights of the Child expressed concern that these technologies have also lowered the
threshold of access to illegal and harmful material. The Committee also indicated concern about the extent to
which access to the internet and social media leads to exposure to cyberbullying, pornography and
cybergrooming.55

50 Hodgkin, Rachel and Newell, Peter, Implementation handbook for the Convention on the Rights of the Child,
New York, UNICEF, 2002, 213; Meuwese, Stan, Blaak, Mirjam and Kaandorp, Majorie (eds),
51 The European Court of Justice has also referred to this article in a case concerning potential harmful new
media content: ECJ, Dynamic Medien v. Avides Media AG, C‐244/06, 14.02.2008, para. 40.
52 A general discussion on ‘The child and the media’ was held by the Committee on the Rights of the Child on
the 7th of October 1996. A report of this discussion was included in the Report on the thirteenth session:
United Nations Committee on the Rights of the Child, Report on the thirteenth session, CRC/C/57,
31.10.1996,
http://www.unhchr.ch/tbs/doc.nsf/898586b1dc7b4043c1256a450044f331/5a7331a09a8b4f3fc1256404003d1
0bd/$FILE/G9618895.pdf. Following this discussion, an informal Working Group was set up (CRC/C/57, p.
45). This Working Group met twice (cf. United Nations Committee on the Rights of the Child, CRC/C/66,
06.06.1997, http://www.unhchr.ch/tbs/doc.nsf/898586b1dc7b4043c1256a450044f331/
b27bf9857a55819d802564f3003b10ee/$FILE/G9717203.pdf, 51; United Nations Committee on the Rights of
the Child, CRC/C/79, 27.07.1998,
http://www.unhchr.ch/tbs/doc.nsf/898586b1dc7b4043c1256a450044f331/
a505a81ff8dcaf89802566d6003b6298/$FILE/G9817376.pdf, 46) and was also involved with the development
of ‘The Oslo Challenge’, a call for action, addressed to “everyone engaged in exploring, developing,
monitoring and participating in the complex relationship between children and the media”. This document
elaborates on ways to effectively implement articles 12, 13 and especially 17 CRC: “The Oslo challenge
signals to governments, the media, the private sector, civil society in general and young people in particular
that Article 17 of the Convention on the Rights of the Child, far from isolating the child/media relationship, is
an entry point into the wide and multi‐faceted world of children and their rights – to education, freedom of
expression, play, identity, health, dignity and self‐respect, protection – and that in every aspect of child
rights, in every element of the life of a child, the relationship with children and the media plays a role”
(http://www.mediawise.org.uk/files/uploaded/Oslo%20Challenge.pdf).
53 Article 17 (a) emphasizes the importance of disseminating information and material of social and cultural
benefit to the child and in accordance with the spirit of article 29, which is related to education.
54 Ruxton, Sandy, What about us? Children’s rights in the European Union? Next Steps, Brussels, The European
Children’s Network, 2005, 109.
55 United Nations Committee on the Rights of the Child, General comment No. 17 (2013) on the right of the
child to rest, leisure, play, recreational activities, cultural life and the arts (art. 31), UN Doc. CRC/C/GC/17,
2013, n° 46. https://www.refworld.org/docid/51ef9bcc4.html (“The Committee is concerned at the

The Family Online Safety Institute 56 (“FOSI”) Global Resource and Information Directory (GRID) is "designed to
create a single, factual and up-to-date source for governments, industry, lawyers, academics, educationalists
and all those dedicated to making the Internet a safer and better place". 57

An online safety profile is available for most countries, divided into sections detailing basic country profile data;
an overview of online safety in the country; pointers to related research; the education system (in practice, a
short profile of ICT use in education); legislation; organizations active in this area in the country; and a list of
sources of information.

It has been argued that the word ‘guidelines’, used in article 17 CRC, indicates a preference for voluntary rather
than legislative constraints. 58 However, the Committee on the Rights of the Child has in its concluding observations
recommended that States “enact special legislation to protect children from harmful information, in particular from
television programs and films containing brutal violence and pornography” (emphasis added). 59 This attitude is not

growing body of evidence indicating the extent to which these environments, as well as the amounts of time
children spend interacting with them, can also contribute to significant potential risk and harm to children.
UNICEF, Child Safety Online: Global Challenges and Strategies. Technical report (Florence, Innocenti
Research Centre, 2012). For example: - Access to the Internet and social media is exposing children to
cyberbullying, pornography and cybergrooming. Many children attend Internet cafes, computer clubs and
game halls with no adequate restrictions to access or effective monitoring systems; - The increasing levels of
participation, particularly among boys, in violent video games appears to be linked to aggressive behavior as
the games are highly engaging and interactive and reward violent behavior. As they tend to be played
repeatedly, negative learning is strengthened and can contribute to reduced sensitivity to the pain and
suffering of others as well as aggressive or harmful behavior toward others. The growing opportunities for
online gaming, where children may be exposed to a global network of users without filters or protections,
are also a cause for concern. - Much of the media, particularly mainstream television, fail to reflect the
language, cultural values and creativity of the diversity of cultures that exist across society. Not only does
such monocultural viewing limit opportunities for all children to benefit from the potential breadth of cultural
activity available, but it can also serve to affirm a lower value on non-mainstream cultures. Television is also
contributing to the loss of many childhood games, songs, rhymes traditionally transmitted from generation
to generation on the street and in the playground; - Growing dependence on screen-related activities is
thought to be associated with reduced levels of physical activity among children, poor sleep patterns,
growing levels of obesity and other related illnesses. See also comment 47.” Marketing and
commercialization of play: The Committee is concerned that many children and their families are exposed to
increasing levels of unregulated commercialization and marketing by toy and game manufacturers. Parents
are pressured to purchase a growing number of products which may be harmful to their children’s
development or are antithetical to creative play, such as products that promote television programmes with
established characters and storylines which impede imaginative exploration; toys with microchips which
render the child as a passive observer; kits with a pre-determined pattern of activity; toys that promote
traditional gender stereotypes or early sexualization of girls; toys containing dangerous parts or chemicals;
realistic war toys and games. Global marketing can also serve to weaken children’s participation in the
traditional cultural and artistic life of their community.”
56 https://www.fosi.org/about-fosi The Family Online Safety Institute is an international, non-profit
organization which works to make the online world safer for kids and their families. Id.
57 https://fosigrid.org/
58 Hodgkin, Rachel and Newell, Peter, Implementation handbook for the Convention on the Rights of the Child,
New York, UNICEF, 2002, 236. See also: United Nations Committee on the Rights of the Child, Report on the
thirteenth session, CRC/C/57, 31.10.1996, retrieved from
http://www.unhchr.ch/tbs/doc.nsf/898586b1dc7b4043c1256a450044f331/5a7331a09a8b4f3fc1256404003d1
0bd/$FILE/G9618895.pdf (on 22.09.2006), 44.
59 United Nations Committee on the Rights of the Child, Concluding observations of the Committee on the
Rights of the Child: Cambodia, CRC/C/15/Add.128, 28.06.2000, retrieved from
http://www.unhchr.ch/tbs/doc.nsf/(Symbol)/30dce34798ef39f480256900003397ac?Opendocument (on
27.09.2006), para. 36; United Nations Committee on the Rights of the Child, Concluding observations of the

limited to traditional media: the Committee is concerned about online media as well. 60 Recently, it has been
argued that there is confusion about the scope of article 17 e) (in part created by the United Nations Committee
on the Rights of the Child).

The scope of this paragraph does not concern the protection of children from harmful material by States
themselves. 61 That particular State task falls within the scope of other articles (such as article 6 CRC,
related to the protection and care necessary for the well‐being of each child); article 17 e) solely
concerns the encouragement of other actors, such as industry, to develop the guidelines mentioned in this
paragraph. 62

Article 17 also refers to article 18 CRC, which recalls the primary responsibility of parents for the upbringing and
development of the child. 63 However, according to article 18 para. 2, States must “render appropriate assistance
to parents and legal guardians in the performance of their child‐rearing responsibilities.” An example of this
‘assistance’, or, otherwise put, of the State’s ‘duty of care’, could be the provision by States of adequate
information to parents about media content to which their children may be exposed. 64

Committee on the Rights of the Child: Marshall Islands, CRC/C/15/Add.139, 16.10.2000, retrieved from
http://www.unhchr.ch/tbs/doc.nsf/(Symbol)/e91ea24ff52b434ac125697a00339c0c?Opendocument (on
27.09.2006), para. 34‐35.
60 “The Committee is concerned that no legislation exists to protect children from being exposed to violence
and pornography through video movies and other modern technologies, most prominently, the Internet”:
United Nations Committee on the Rights of the Child, Concluding observations of the Committee on the
Rights of the Child: Luxembourg, CRC/C/15/Add.92, 24.06.1998,
http://www.unhchr.ch/tbs/doc.nsf/(Symbol)/62258a94c261c9318025662400376374?Opendocument, para. 30.
61 “Article 17 is not to be a vehicle for State control of content: Article 17 does not require or authorize State
censorship of the content of mass media communications”; Wheatley Sacino, Sherry, Article 17 Access to a
diversity of mass media sources, A commentary on the United Nations Convention on the Rights of the Child,
Leiden, Martinus Nijhoff Publishers, 2012, 30.
62 Id.
63 Article 5 is also relevant when dealing with harmful content. Article 5 refers to the responsibilities, rights
and duties of parents (or other persons legally responsible for the child), to offer, in a manner consistent
with the evolving capacities of the child, appropriate direction and guidance to the child when exercising his
or her rights. This provision could be interpreted as implying that parents have a responsibility to support
their children in their approach to new media. The United Nations General Assembly has also touched upon
the responsibilities of parents et al. in this respect: “ Encourage measures to protect children from violent or
harmful web sites, computer programmes and games that negatively influence the psychological
development of children, taking into account the responsibilities of the family, parents, legal guardians and
caregivers” (United Nations General Assembly, Resolution A world fit for children, A/RES/S‐27/2, 11.10.2002,
http://www.unicef.org/specialsession/docs_new/documents/A‐RES‐S27‐2E.pdf, 16). Ultimately, parents or
other carers are the only persons who will be able to monitor their children’s actual media use.
64 Hodgkin, Rachel and Newell, Peter, Implementation handbook for the Convention on the Rights of the Child,
New York, UNICEF, 2002, 236. Sacino finds that this reference deliberately avoids clarifying the relationship
between the role of the States and the role of parents in the protection of young people from harmful media
content, because there could not be found a consensus on the division of this responsibility: Wheatley
Sacino, Sherry, Article 17 Access to a diversity of mass media sources, A commentary on the United Nations
Convention on the Rights of the Child, Leiden, Martinus Nijhoff Publishers, 2012, 31.

AI and the Children's Rights by Design (CRbD) Standard

Just as it has been argued through countless studies and principles globally that AI must be grounded in
human-centric design, 65 the design, development and provision of AI that can directly or indirectly affect
children should always put the rights and best interests of child users first. AI that directly or indirectly
impacts children, including in educational processes, 66 must always prioritize children's rights and interests.
States and private actors, including tech companies, have a legal and ethical duty to respect and protect
children’s rights in the design, development, and provision of any AI technology, product or service. In
this sense, the Children's Rights by Design (CRbD) standard should always be considered and applied.

Children, as recognized by the CRC and national legal norms, experience a unique stage of physical,
psychological and social development, with evolving capacities, and must therefore be specially protected,
with their rights guaranteed as a priority in all circumstances, whether by families, States and society, or
companies.

For AI systems to promote children's rights and support their development worldwide, it is essential to
maintain a critical perspective and to keep a human in the loop to vet AI’s risks as well as its benefits. As stated by
UNICEF in its first draft Policy Guidance on AI:

"While AI is a force for innovation and can support the achievement of the Sustainable Development Goals
(SDGs), it also poses risks for children, such as to their privacy, safety and security. Since AI systems can work
unnoticed and at great scale, the risk of widespread exclusion and discrimination is real." 67

AI for children is any AI that directly or indirectly impacts children. Although the population of children impacted by AI systems is significant (they represent one third of internet users worldwide, without even accounting for AI deployed massively in schools, cities, and other spaces), the vast majority of AI policy initiatives around the world hardly mention them, or, when they do, limit themselves to broad citations without details or deeper consideration of children's particularities. They do not address, for example, the possible uses of predictive analytics or other types of algorithmic modeling that can make determinations about children's futures, with unpredictable consequences for them.68

This demonstrates the immense urgency of expanding the study of AI's implications for multiple global childhoods, including among children in the Global South, where access to the internet

65 Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2. IEEE, 2017. http://standards.ieee.org/develop/indconn/ec/autonomous_systems.htm; see also
Yeung, Karen and Howes, Andrew and Pogrebna, Ganna, AI Governance by Human Rights-Centred Design,
Deliberation and Oversight: An End to Ethics Washing (June 21, 2019). Forthcoming in M Dubber and F
Pasquale (eds.) The Oxford Handbook of AI Ethics, Oxford University Press (2019) , Available at SSRN:
https://ssrn.com/abstract=3435011 or http://dx.doi.org/10.2139/ssrn.3435011 and Kazim, Emre and
Koshiyama, Adriano, Human Centric AI: A Comment on the IEEE’s Ethically Aligned Design (April 13, 2020).
Available at SSRN: https://ssrn.com/abstract=3575140 or http://dx.doi.org/10.2139/ssrn.3575140
66 Isabella Henriques and Pedro Hartung, Children's Rights by Design in AI Development for Education,
International Review of Information Ethics, Vol. 29 (03/2021)
https://informationethics.ca/index.php/irie/article/view/424/401
67 UNICEF, Executive Summary, Policy Guidance on AI for Children - Draft 01, September, 2020. Available
https://www.unicef.org/globalinsight/media/1171/file/UNICEF-Global-Insight-policy-guidance-AI-children-
draft-1.0-2020.pdf
68 Alexa Hasse, Sandra Cortesi, Andres Lombana-Bermudez, and Urs Gasser. Youth and Artificial Intelligence:
Where We Stand (2019), available at https://cyber.harvard.edu/publication/2019/youth-and-artificial-
intelligence/where-we-stand

is often conditioned on commercial exploitation models for certain applications and services,69 all of which abound in automated decisions.

One of the few documents on this subject is the first draft of the Policy Guidance on AI for Children, recently launched by UNICEF, which sets out nine requirements for child-centered AI, grounded in the defense of children's rights through the lenses of protection, provision, and participation. They are: (1) Support
children's development and well-being; (2) Ensure inclusion of and for children; (3) Prioritize fairness and non-
discrimination for children; (4) Protect children's data and privacy; (5) Ensure safety for children; (6) Provide
transparency, explainability, and accountability for children; (7) Empower governments and businesses with
knowledge of AI and children's rights; (8) Prepare children for present and future developments in AI; (9)
Create an enabling environment for all to contribute to child-centered AI.70

In view of the cross-border multiplication of AI systems, including those that impact children, new global initiatives such as UNICEF's, guided by ethics and a human-centric approach, will be of paramount importance. Undoubtedly, AI systems that impact children, directly or indirectly, must, like any AI system, be first and foremost human-centered, as stated by the European Commission in its effort to promote trustworthy AI:

" AI systems need to be human-centric, resting on a commitment to their use in the service of humanity and the
common good, with the goal of improving human welfare and freedom ." 71

Thus, in addition to the challenge of harmonizing innovation, efficiency, and the freedom of business models with the protection of human rights and the accountability, explainability, and transparency of AI systems, there is one more: striking a balance that guarantees the best interests of children and their specific rights in all applications that are not prohibited and that may be used by them or impact them, even indirectly. This applies not only to AI applications specifically aimed at children's use and consumption,72 but also as a precaution against potential risks to which children may be exposed. AI that can directly or indirectly affect children must put their rights and interests first, in addition to safeguarding their best interests and being human-centered. This means that the best interests of children and their rights must be pursued as a priority by every AI developer, even when the product or service was not, at first sight, meant to be used by children or to affect them.

In this sense, efforts must be expanded to democratize the benefits of AI systems for children, as well as to mitigate possible risks, especially across the different contexts and multiple childhoods developing around the planet.

Hence, it is essential to guarantee Children's Rights by Design in AI systems that impact children, grounded in their best interests, so that the promotion and protection of children's rights is effective, generating real positive impacts on the lives of children, including those who are socioeconomically vulnerable.

69 UN IGF, Net Neutrality Reloaded: Zero Rating, Specialised Service, Ad Blocking and Traffic Management.
Annual Report of the UN IGF Dynamic Coalition on Net Neutrality. Available at:
https://bibliotecadigital.fgv.br/dspace/bitstream/handle/10438/17532/Net%20Neutrality%20Reloaded.pdf
70 UNICEF, Policy Guidance on AI for Children - Draft 01, September, 2020. Available at:
https://www.unicef.org/globalinsight/media/1171/file/UNICEF-Global-Insight-policy-guidance-AI-children-
draft-1.0-2020.pdf
71 The European Commission, 'Ethics Guidelines for Trustworthy AI', Independent High-Level Expert Group on Artificial Intelligence, p. 4. Available at https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines
72 ICO, UK Age-Appropriate Design Code, 2020: "This code applies to "information society services likely to be
accessed by children" in the UK. This includes many apps, programs, connected toys and devices, search
engines, social media platforms, streaming services, online games, news or educational websites and
websites offering other goods or services to users over the internet. It is not restricted to services
specifically directed at children." Available at
https://ico.org.uk/for-organisations/guide-to-data-protection/key-data-protection-themes/age-appropriate-
design-a-code-of-practice-for-online-services/executive-summary/

AI and Children’s Social Media Platforms

Social media platforms that rely on streaming technologies are transforming how adults and children consume media content. Platforms work hard to ensure that consumers maximize their time on these sites.

YouTube73 stands out as the dominant player in this space, especially when it comes to today’s youth. In 2017,
80% of U.S. children ages 6 to 12 used YouTube on a daily basis. 74 YouTube was the 2016 and 2017 “top kids
brand” according to Brand Love studies. 75 In the 2017 study, 96% of children ages 6 to 12 were found to be
“aware of YouTube,” and 94% of children ages 6 to 12 said they “either
loved or liked" YouTube.76 The YouTube phenomenon is not confined to the United States; YouTube has massive user bases in India, Moscow, across Europe, and beyond.77

Watching video clips online is among the earliest internet activities carried out by very young children, resulting
in high popularity of YouTube channels targeting toddlers and preschoolers. 78 For example, YouTube’s Sesame
Street channel recently reached a billion views 79 and a TuTiTu channel (owned by a small animation company
targeting infants and toddlers) was ranked 40th among YouTube’s 100 most viewed channels. 80 YouTube’s
simple user interface, which allows even toddlers to proceed to the next item on a playlist and affords them easy access to favorite videos that can be watched again and again, has been suggested as the key to its
popularity with very young audiences. 81 It is thus not surprising that producers of content targeting toddlers and
preschoolers soon discovered YouTube’s appeal and began using it as a major content promotion platform by
uploading complete episodes or short clips of programs broadcast on television channels. 82

Besides providing an extensive variety of content produced specifically for young children, YouTube has also
spawned new formats in children’s entertainment that once baffled people outside their target audiences. 83
Young children appear to be attracted to particular types of content, many of which are based on comic
situations, such as challenges (e.g., tasting hot pepper) and silly skits (e.g., a person in a rooster costume

73 YouTube is a subsidiary of Google, whose parent company is Alphabet, Inc.


74 “2017 Brand Love Study: Kid & Family Trends,” Smarty Pants: the Youth and Family Experts (2017), 14.
75 Id. at 7.
76 Id.
77 Alexis Madrigal, “Raised by YouTube,” Atlantic 322, no. 4 (November 2018): 72–80.
https://www.theatlantic.com/magazine/archive/2018/11/raised-by-youtube/570838/
78 Holloway, D., Green, L., & Livingstone, S. (2013). Zero to eight. Young children and their internet use.
London, UK: EU Kids Online. Retrieved from http://eprints.lse.ac.uk/52630/1/Zero_to_eight.pdf
79 Luckerson, V. (2013, March 13). How Sesame Street Counted All the Way to 1 Billion YouTube Views. Time.
Retrieved from http://business.time.com/2013/03/15/how-sesame-street-counted-all-the-way-to-1-billion-
youtube-views
80 Fox, A. (2014, March 26). The Israelis that conquered toddlers around the world. Mako Magazine. Retrieved
from http://www.mako.co.il/home-family-weekend/Article-a4dda21f12ef441006.htm
81 Buzzi, M. (2011). What are your children watching on YouTube? In V. F. Cipolla, K. V. Ficarra, & D. Verber (Eds.), Advances in new technologies, interactive interfaces and communicability (pp. 243-252). Berlin, Germany: Springer.
82 Grossaug, R. (2017). What influences the influencers: Preschool television production in an era of media
change: The case of Israel’s ‘Hop! Group’ [Unpublished doctoral dissertation]. The Hebrew University of
Jerusalem. See also Elias, N., Sulkin, I., & Lemish, D. (in press). Gender segregation on Baby TV: Old-time
stereotypes for the very young. In D. Lemish & M. Gotz (Eds.), Beyond the stereotypes: Boys, girls and their
images. Nordicom.
https://www.researchgate.net/publication/321481837_Gender_segregation_on_BabyTV_Old-
time_Stereotypes_for_the_Very_Young
83 Dredge, S. (2015, November 19). Why YouTube is the new children’s TV and why it matters. The Guardian.
Retrieved from http://www.theguardian.com/technology/2015/nov/19/youtube-is-the-new-childrens-tv-
heres-why-that-matters

surprising a police officer). Topping the list of children’s favorites, however, are unboxing videos 84, in which
boxes containing different products are opened. 85 The attraction of unboxing may lie in the mystery of the
unwrapping process. Young children enjoy mystery and suspense, especially when it is likely to have safe and
predictable outcomes. One prime example of this trend is a series of YouTube videos in which a person opens
Kinder Surprise Eggs86, with hundreds of millions of hits. Although no data are available regarding the viewers’
ages, Jordan maintains that such videos are particularly appealing to children aged 2-4 because they expose
them to shape transformation, thereby gratifying a developmental need characteristic of this age group. 87

Notwithstanding its benefits, YouTube also has significant drawbacks as a source of children’s entertainment.
When the YouTube Kids application was launched, Google declared it to be a safe and educational media
environment for the very young, equipped with a safety mode for automatic filtering of content marked as
inappropriate. The result, however, fell far short of fulfilling Google's promise, and YouTube Kids has been heavily criticized for its lack of professional content selection and its display of commercial content, ignoring the well-established advertising safeguards adopted by both broadcast and cable television.88

In 2015, YouTube launched a dedicated platform called YouTube Kids as a means to provide safe, age-appropriate content for children.89 The YouTube Kids app is a separate product from the standard YouTube app. It features a child-friendly layout which, according to YouTube, is designed to "make it safer for children to explore the world through online video".

The app has multiple integrated parental controls. Before a child can use the YouTube Kids app, for instance, a parent is required to unlock the app and verify the child's age. Other parental controls include the option to turn search on or off (with search off, the child sees only videos from creators verified by YouTube itself) and a timer that limits how long the app can be used. The YouTube Kids app thus offers a 'barebones' version of the original YouTube app, with several features removed: videos cannot be rated, and there is no comment section where viewers can leave their thoughts. This is a deliberate design choice to limit unwanted exposure to YouTube content deemed inappropriate for younger audiences. To prevent such exposure, all videos on the YouTube Kids app are checked for child-friendliness. The app also contains a 'recommended' tab beneath each video, displaying related videos drawn exclusively from YouTube Kids and subject to the same age restrictions. Advertisements are also displayed on the videos; these are extensively checked by YouTube to ensure that they are family-friendly.90

84 Marsh, J. (2016). 'Unboxing' videos: Co-construction of the child as cyberflaneur. Discourse: Studies in the cultural politics of education, 37, 369-380. https://doi.org/10.1080/01596306.2015.1041457
85 Knorr, C. (2016, March 15). What kids are really watching on YouTube? Common Sense Media. Retrieved
from http://www.commonsensemedia.org/blog/what-kids-are-really-watching-on-youtube
86 https://www.amazon.com/Kinder-Surprise-Eggs/s?k=Kinder+Surprise+Eggs
87 Jordan, A. B. (2015, November). Digital natives and digital immigrants: Media use and generational identity.
Keynote lecture. Ben-Gurion University of the Negev, Beer- Sheva, Israel.
88 Golin, J., Chester, F., & Campbell, A. (2015, April 7). Advocates file FTC complaint against Google’s YouTube
Kids. Campaign for a Commercial-Free Childhood. Retrieved from http://www.commercialfreechildhood.org;
See also Luscombe, B. (2015, September 7). YouTube view’s master. Time, 70-75.
89 “Introducing the Newest Member of Our Family, the YouTube Kids App--Available on Google Play and the
App Store,” Official YouTube Blog, https://youtube.googleblog.com/2015/02/youtube-kids.html; “YouTube
Kids,” https://www.youtube.com/yt/kids/.
90 http://arno.uvt.nl/show.cgi?fid=152292;

All content submitted to the YouTube Kids app is subjected to a verification process by a machine algorithm.91 If the algorithm approves a video for YouTube Kids, every user can view it.
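Though this verification process has not been publicly specified, the general pattern it describes, scoring a video and gating approval on a threshold, can be sketched as a deliberately simple toy. Every feature name, weight, and cutoff below is a hypothetical assumption for illustration, not YouTube's actual system:

```python
# Toy sketch of an automated content-verification gate.
# NOT YouTube's actual (undisclosed) algorithm: all feature names,
# weights, and the approval threshold are invented for illustration.

def safety_score(features: dict) -> float:
    """Return a hypothetical child-safety score in [0, 1].

    A real system would use a trained classifier over video frames,
    audio, and metadata; a trivial weighted rule stands in here.
    """
    score = 1.0
    score -= 0.4 * features.get("flagged_keywords", 0)  # title/tag hits
    if features.get("user_reports", False):             # viewer flags
        score -= 0.3
    return max(0.0, min(1.0, score))

APPROVAL_THRESHOLD = 0.8  # hypothetical cutoff

def approve_for_kids(features: dict) -> bool:
    """Once a video clears the threshold, it becomes visible to
    every user of the app, matching the behavior described above."""
    return safety_score(features) >= APPROVAL_THRESHOLD
```

Under these assumptions, a clean video (`{"flagged_keywords": 0}`) scores 1.0 and is approved, while one with two flagged keywords scores 0.2 and is rejected. The weakness is visible in the sketch itself: a video whose metadata looks innocent sails through regardless of what its frames contain, which is precisely how the disturbing content discussed later in this section reached young viewers.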

On both YouTube and YouTube Kids, machine learning algorithms are used both to recommend content and to mediate its appropriateness.92 YouTube representatives, however, have been opaque about differences in the input data and reward functions underlying YouTube Kids and YouTube.93 This lack of transparency about the input data used in the algorithms makes it difficult for concerned parties to understand the distinction.94 More generally, algorithmic opacity is a concern for both YouTube and YouTube Kids, since YouTube, not YouTube Kids, continues to account for the overwhelming majority of children's-programming viewership within the YouTube brand.95

The machine learning algorithms, primarily the recommendation engine employed by YouTube and YouTube Kids, are optimized to ensure that children view as many videos on the platform as possible.96 Children do not need to enter any information or affirm any required permissions to watch thousands of videos on YouTube and YouTube Kids.97 Touchscreen technology and the design of the platforms allow even young children substantial ease of access.98 Unfortunately, neither recommendation system appears to optimize for the quality or educational value of the content.99 Because companies developing children's programming are similarly concerned with maximizing viewers and viewer hours, their posts are often designed around YouTube's privileging of quantity, with little consideration for quality, including educational value.100 There is particular

91 Kantrowitz, Alex. ‘YouTube Kids Is Going To Release A Whitelisted, Non-Algorithmic Version Of Its App’
(Buzzfeed News, April 6, 2018). https://www.buzzfeednews.com/article/alexkantrowitz/youtube-kids-is-
going-to-release-a-whitelistednon#.ftVwoX5dp and Wojcicki, Susan. ‘Protecting Our Community’ (YouTube
Creator Blog, 2017). https://youtube-creators.googleblog.com/2017/12/protecting-our-community.html
92 Karen Louise Smith and Leslie Regan Shade, “Children’s Digital Playgrounds as Data Assemblages:
Problematics of Privacy, Personalization and Promotional Culture,” Big Data & Society, Vol. 5 (2018), at 5.
https://journals.sagepub.com/doi/pdf/10.1177/2053951718805214
93 Adrienne LaFrance, “The Algorithm That Makes Preschoolers Obsessed With YouTube Kids,” The Atlantic,
July 27, 2017, https://www.theatlantic.com/technology/archive/2017/07/what-youtube-reveals-aboutthe-
toddler-mind/534765/.
94 “Terms of Service - YouTube,” https://www.youtube.com/static?template=terms, (November 13, 2018);
Matt O’Brien. “Consumer Groups Say YouTube Violates Children’s Online Privacy,” Time.Com, April 10, 2018,
1. https://www.aol.com/news/consumer-groups-youtube-violates-children-012240300.html
95 Madrigal, “Raised by YouTube,” 80. See also Tőkés, Gyöngyvér, Digital Practices in Everyday Lives of 4 to 6
Years Old Romanian Children (November 30, 2016). Journal of Comparative Research in Anthropology and
Sociology, Volume 7, Number 2, Winter 2016, Available at SSRN: https://ssrn.com/abstract=2915463
96 Matt O’Brien. “Consumer Groups Say YouTube Violates Children’s Online Privacy,” Time.Com, April 10, 2018,
1. https://www.aol.com/news/consumer-groups-youtube-violates-children-012240300.html
97 Id.
98 Elias, Nelly, and Idit Sulkin. “YouTube Viewers in Diapers: An Exploration of Factors Associated with Amount
of Toddlers’ Online Viewing.” Cyberpsychology, November 23, 2017, at 2, available at
https://cyberpsychology.eu/article/view/8559/7739.
99 Madrigal, “Raised by Youtube,” 79.
100Adrienne LaFrance, “The Algorithm That Makes Preschoolers Obsessed With YouTube
Kids.”https://www.theatlantic.com/technology/archive/2017/07/what-youtube-reveals-about-the-toddler-
mind/534765/

concern that, with YouTube and YouTube Kids' algorithm-derived "related-videos" recommendations,101 children can easily become trapped in "filter bubbles"102 of poor-quality content.103
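The narrowing dynamic behind such bubbles can be illustrated with a deliberately simplified recommender that ranks videos purely by predicted watch time. This is a toy under stated assumptions (an invented four-item catalog and a naive "more of the same topic scores higher" rule), not YouTube's proprietary engine:

```python
# Toy watch-time-driven recommender illustrating the "filter bubble"
# effect: optimizing predicted viewing time pulls recommendations
# toward whatever was watched last. Catalog and scoring are invented.

CATALOG = [
    {"id": "nursery-rhyme", "topic": "music"},
    {"id": "unboxing-eggs", "topic": "unboxing"},
    {"id": "science-clip", "topic": "education"},
    {"id": "unboxing-toys", "topic": "unboxing"},
]

def predicted_watch_time(video: dict, history: list) -> float:
    """Hypothetical engagement score: topics already present in the
    viewing history score higher, since past engagement is treated
    as a predictor of future engagement."""
    repeats = sum(1 for seen in history if seen["topic"] == video["topic"])
    return 1.0 + repeats

def recommend(history: list, k: int = 2) -> list:
    """Return the k catalog videos with the highest predicted watch time."""
    ranked = sorted(CATALOG,
                    key=lambda v: predicted_watch_time(v, history),
                    reverse=True)
    return [v["id"] for v in ranked[:k]]
```

After a single unboxing video, both top slots in this sketch are already unboxing content, and each further view reinforces the loop. Note that nothing in the objective measures the quality or educational value of what is surfaced, which is the concern raised above.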

Filtering algorithms also raise other problems,104 especially when a significant number of external entities are able to co-opt YouTube and YouTube Kids' algorithmic discovery processes to maximize viewer time, with sometimes startling consequences for children.105 For example, anyone over the age of 18 can create and upload content to YouTube, and these creations are not regulated by professional protocols.

YouTube and YouTube Kids’ algorithmic discovery processes can be manipulated to push content that the
pusher expects will perform well on the platform’s “related-videos” engine, incentivizing sensational content. 106
Prioritizing such content is one of the critical impacts of YouTube’s use of machine learning algorithms. 107 Kids

101Children, too, access information and news from a variety of social media sites and platforms. But how
confident are they that what they encounter online is not misinformation or deliberate disinformation, or so-
called ‘fake news’? According to the Global Kids Online study, between 20 and 40 per cent of children
between the ages of 9 and 11 ‘find it easy to check if the information [they] find online is true.’ Byrne,
Jasmina, et al. Global Kids Online Research Synthesis: 2015-2016. UNICEF Office of Research – Innocenti
and London School of Economics and Political Science. Florence. Available at:
https://www.unicef-irc.org/publications/869-global-kids-online-research-synthesis-2015-2016.html The
emergence of so-called ‘filter bubbles’ occurring when platforms and search engines make use of algorithms
to select information a user would want to see underlines the potential seriousness of this issue with respect
to children. Instead of exposing children to a variety of ideas, different perspectives and ways of thinking,
web platforms in general, and ‘fake news’ in particular, may lead to their engagement with news or
information sources that confirm existing points of view or prejudices. See also Bezemek, Christoph, The
'Filter Bubble' and Human Rights (November 2, 2018). Petkova/Ojanen (eds), Fundamental Rights Protection
Online: The Future Regulation of Intermediaries, Forthcoming, Available at SSRN:
https://ssrn.com/abstract=3277503 or http://dx.doi.org/10.2139/ssrn.3277503; and Dutton, William H. and
Reisdorf, Bianca and Dubois, Elizabeth and Blank, Grant, Social Shaping of the Politics of Internet Search
and Networking: Moving Beyond Filter Bubbles, Echo Chambers, and Fake News (March 31, 2017). Quello
Center Working Paper No. 2944191, Available at SSRN: https://ssrn.com/abstract=2944191 or
http://dx.doi.org/10.2139/ssrn.2944191; and V Verdoodt, and E Lievens, Targeting children with
personalised advertising: how to reconcile the (best) interests of children and advertisers, in Data Protection
and Privacy Under Pressure: Transatlantic tensions, EU surveillance, and big data, (2017)
https://biblio.ugent.be/publication/8541057/file/8541058
102See generally Tracy S. Bennett, Ph.D., Sorry to Burst Your [Filter] Bubble,
GetKidsInternetSafe, https://getkidsinternetsafe.com/sorry-to-burst-your-filter-bubble/ In 2011, Eli Pariser
released his book The Filter Bubble: What the Internet Is Hiding From You. Pariser E. The filter bubble:
What the Internet is hiding from you. London: Penguin UK; 2011. (“ Parisier explains how the internet
search engines and their algorithms are creating a situation where users increasingly are getting information
that confirms their prior beliefs. Search algorithms are using large quantities of information about the user to
find and present relevant information to the individual user. Your search and browse history is a key piece of
the information used to tailor the results you get when you perform online searches. Combining this with
information about your social network, viewing habits and geography leads to an increasingly narrow view
on the information available online. Parisier’s main argument is that this narrowing creates a filter bubble,
which is invisible to the user, but still has immense impact on the information available to the individual.
When you perform a Google search, the information about you is used in addition to your search term to
find and prioritize the search results most likely to be of your interest. Then, when you click among the first
search results (as most people do), you are confirming back to the search engine that the results were
indeed relevant and/or interesting. This in turn strengthens the filter, making it more likely that you will
receive similar results in the future. However, it is not only your own behavior that influences the results.
The interests and preferences among people in your social network are also part of the algorithms, making it
more likely that you will receive search results that your social network in general is gravitating toward. In
many cases, these filters are providing relevant and good results. However, it becomes a problem as soon as
your profile contains elements that make the search results gravitate toward misinformation. The filters are

are particularly susceptible to content recommendations, so shocking “related videos” can grab children’s
attention and divert them away from more child-friendly programming. 108

The situation regarding YouTube's algorithms and their impact on many young children is concerning, even disturbing. First, the transmission of child-oriented content is frequently interrupted by automatic advertisements, many of which are inappropriate for younger viewers. Moreover, a recent report on children's safety on YouTube shows that very young children are only 2-4 clicks away from adult content while watching child-oriented videos. For example, children watching Sesame Street are two clicks from a car accident video, while those viewing Dora the Explorer are four clicks from a video featuring swearing and nudity.109

to a large degree invisible, which adds to the problem. Many users are not even aware that the filtering is
taking place, and even if they are, it is difficult to take control of how the filter is being applied. Granted, you
can go to Google and delete your search history, or click the “Hide private results” button in the top right of
the search results. Still, the complexity of the algorithms and the lack of usable explanations about how the
filters actually work make it difficult for the user to take control. The way the filters influence search results
have led our group to use the term Gravitational Black Holes of Information to illustrate how difficult it is to
break out of the force of the filters. As soon as you are aiming in at a core of misinformation, it is inherently
difficult to break out of the gravitational force of the search algorithms. On the way toward the gravitational
center, your prior believes are being strengthened by the new information you find, further pulling you into
the black hole.” Harald Holone, The filter bubble and its effect on online personal health information, Croat
Med J. 2016 Jun; 57(3): 298–301. doi: 10.3325/cmj.2016.57.298).
103Id.
104See, e.g., Siddiqui, Anaum, A Critical Look at YouTube Videos: Causing Behavioral Change Among Children
(March 1, 2019). Available at SSRN: https://ssrn.com/abstract=3453417 or
http://dx.doi.org/10.2139/ssrn.3453417 (“Media has always been assumed as one of the sources of building
realities for the society. Media content is considered as an important cause of behavioral change in society.
Children due to their minor age are more likely to get influenced by the content. Due to emergence of
YouTube and its easy accessibility and negligence of parents due to their busy lives, we cannot limit its
effects on our children. This research merely focuses on the behavioral change of children caused by
watching YouTube videos. As per this research findings children have used YouTube as institute from where
they have learned their basic education such as alphabets and counting, identification of colors and shapes
and nursery rhymes. On other hand these changes have increased aggression, unhealthy mental growth,
sleeping disorders and any other emotional or physical change. Semi-Structured interviews have been
conducted with N=30 mothers of preschooler from Islamabad. Proportionate Stratified Sampling method has
been adopted to cover most of the population. The sample has been divided into three class divisions such
as Upper, Middle and Lower class, out of which N=10 samples are interviewed randomly. Mothers who
belong to Upper class of the society are mostly more educated and they expose their children to
comparatively positive and educating content. Whereas mothers of middle class have less control over the
content and they are only focused on how to keep their child busy and distracted. Mothers of Lower class
have no idea about the quality of content, and time their children are spending watching that content. One
similar notion that can be seen in all three class divisions is that mothers want escapism and for that they
are exposing their children to YouTube videos.”)
105Elias, Nelly, and Idit Sulkin. 2.
106Elias, N., & Sulkin, I. (2017). YouTube viewers in diapers: An exploration of factors associated with amount
of toddlers’ online viewing. Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 11(3), Article
2. https://doi.org/10.5817/CP2017-3-2
107 Id.
108 Id.

It was discovered that the YouTube algorithms approved videos that contained content not suited for kids, such
as violence and sexual misconduct. 110Numerous media reports111 covered the increasing popularity of amateur
live action videos that bear innocent tags using names of children’s most popular heroes, such as Elsa and Anna
(from the movie Frozen) or Spiderman, but contain offensive content and present explicit expressions of sexual
behavior, vandalism and violence. These videos easily find their way into YouTube’s suggested playlists alongside episodes of favorite children’s shows, and gain popularity as millions of people view them.112 The animated figures in these videos engaged in decapitation, pornographic acts and criminal behavior including murder, theft and sexual assault. Young audiences were thus exposed to severely disturbing content that is clearly detrimental to them.

This period, during which controversial videos spread widely on YouTube Kids, is referred to as ‘ElsaGate’.113 Multiple studies have found that media has a vast impact on youth, documenting correlations between exposure to violent television programming and increased violent behavior,114 and between media exposure and earlier sexual behavior.115 Millions of children were exposed to ElsaGate videos, with likely consequences for their behavioral and emotional development. Scientific research on deep learning architectures was published in response to ElsaGate, prompting further discussion and potential solutions to the problem.116
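The deep-learning work cited above classifies sampled video frames; the underlying pipeline idea (score each frame, aggregate, flag the video past a threshold) can be sketched in a few lines. The Python below is purely illustrative: the function name, both thresholds, and the example scores are invented here, and the per-frame scorer a real system would run (a trained CNN) is assumed to exist upstream.

```python
# Illustrative sketch of a frame-level moderation pipeline, loosely
# inspired by the deep-learning approaches proposed after ElsaGate.
# The per-frame scores stand in for the output of a trained CNN.

from typing import List

def flag_video(frame_scores: List[float],
               frame_threshold: float = 0.8,
               video_threshold: float = 0.1) -> bool:
    """Flag a video when the fraction of frames scored as
    'disturbing' exceeds a video-level threshold."""
    if not frame_scores:
        return False
    disturbing = sum(1 for s in frame_scores if s >= frame_threshold)
    return disturbing / len(frame_scores) >= video_threshold

# Two of five sampled frames exceed the per-frame threshold,
# so the video as a whole is flagged.
print(flag_video([0.05, 0.9, 0.85, 0.1, 0.02]))  # True
```

Aggregating over sampled frames rather than judging single frames is what lets such systems tolerate noisy per-frame predictions while still catching videos whose disturbing content is brief.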

Another AI algorithmic governance challenge is children’s inappropriate exposure to YouTube and YouTube Kids-
related advertising.117 YouTube’s business model relies on tracking the IP addresses, search history, device

109Kaspersky Lab, (2013, February 5). Children at High Risk of Accessing Adult Content on YouTube.
PRNewswire. Retrieved from http://www.prnewswire.com/news-releases/children-at-high-risk-of-accessing-
adult-content-on-youtube-189770621.html
110 Maheshwari, Sapna. ‘On YouTube Kids, Startling videos slip past filters’. (New York Times, 2017).
https://www.nytimes.com/2017/11/04/business/media/youtube-kids-paw-patrol.html?_r=0
111Subedar, Anisa. "The Disturbing Youtube Videos That Are Tricking Children" (2019)
https://www.bbc.com/news/blogs-trending-39381889; Dredge, Stuart. 2016. "Youtube's Latest Hit: Neon
Superheroes, Giant Ducks And Plenty Of Lycra". The Guardian.
https://www.theguardian.com/technology/2016/jun/29/youtube-superheroeschildren-webs-tiaras ;
"Youtube: Wie Gefälschte Disney-Cartoons Kinder Verstören - Derstandard.At". 2019. DER STANDARD,
https://derstandard.at/2000055049856/Youtube-Wie-gefaelschte-Disney-CartoonsKinder-verstoeren;
Robertson, A. 2017. “What Makes YouTube’s Surreal Kids’ Videos So Creepy?” The Verge, November 21.
https://www .theverge.com/culture/2017/11/21/16685874/kids- youtube-video-elsagate-creepiness-
psychology.
112Mathijs Stals, The technological downside of algorithms: an ‘ElsaGate’ case study, Master’s Thesis, (August
2020) http://arno.uvt.nl/show.cgi?fid=152292; See also Kostantinos Papadamou, Characterizing Abhorrent,
Misinformative, and Mistargeted Content on YouTube, Ph.D. Thesis, (May 16, 2021)
https://arxiv.org/pdf/2105.09819.pdf
113Brandom, Russell. "Inside Elsagate, The Conspiracy-Fueled War On Creepy Youtube Kids Videos". 2017. The
Verge. Accessed May 1 2019. https://www.theverge.com/2017/12/8/16751206/elsagate-youtube-kids-
creepyconspiracy-theory.
114 Johnson, JG et. al. ‘Television viewing and aggressive behavior during adolescence and adulthood’ (2002).
Accessed June 23 2019. https://www.ncbi.nlm.nih.gov/pubmed/11923542
115Strasburger, Victor C. ‘Adolescent Sexuality and the Media’ (1989).
https://www.sciencedirect.com/science/article/pii/S0031395516366949; and Brown, Jane D. ‘Mass media
influences on sexuality’ https://www.tandfonline.com/doi/abs/10.1080/00224490209552118
116Ishikawa, Akari et. al. “Combating the ElsaGate Phenomenon: Deep Learning Architectures for Disturbing
Cartoons”. 2019. Arxiv. https://www.semanticscholar.org/paper/Combating-theElsagate-Phenomenon%3A-
Deep-Learning-Ishikawa-Bollis/938d3fd2cede997006cae88bdc26b2af92e4d384
117 Sarah Perez, “Over 20 advocacy groups complain to FTC that YouTube is violating children’s privacy law,”
TechCrunch, April 9, 2018, https://techcrunch.com/2018/04/09/over-20-advocacy-groups-complain-to-
ftcthat-youtube-is-violating-childrens-privacy-law/; The Children’s Online Privacy Protection Act of 1998
(“COPPA”) purportedly protects children on the internet. 15 U.S.C. §§ 6501–05 (2018). COPPA was passed
in response to growing concerns throughout the 1990s about the safe use of the internet by children. In
particular, COPPA was aimed at “(i) [the] overmarketing to children and collection of personally identifiable

identifiers, location and personal data of consumers so that it can categorize consumers by their interests, in
order to deliver “effective” advertising. 118
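As a purely illustrative sketch of interest-based categorization in its very simplest form, the toy Python below buckets a viewer into hypothetical ad categories from watch-history titles. The category names, keywords, and function are invented for illustration; real systems rely on far richer signals (IP address, device identifiers, location, search history) than title keywords.

```python
# Illustrative-only sketch of interest-based ad profiling.
# Category names and keywords are invented; real platforms use
# far richer behavioral signals than video titles.

from collections import Counter
from typing import List

AD_CATEGORIES = {
    "toys": ["unboxing", "surprise eggs", "lego"],
    "snacks": ["candy", "chocolate", "soda"],
}

def infer_interests(watch_history: List[str]) -> List[str]:
    """Rank hypothetical ad categories by how often a viewer's
    watch history matches their keywords."""
    counts = Counter()
    for title in watch_history:
        t = title.lower()
        for category, keywords in AD_CATEGORIES.items():
            if any(k in t for k in keywords):
                counts[category] += 1
    return [c for c, _ in counts.most_common()]

history = ["Giant Surprise Eggs Opening!", "LEGO city build", "Candy haul"]
print(infer_interests(history))  # ['toys', 'snacks']
```

Even this crude sketch shows why children's viewing data is commercially valuable: a handful of titles is enough to rank a child's likely interests for advertisers.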

Although YouTube Kids claims to prohibit “interest-based advertising” and ads with “tracking pixels,” 119
advertising disguised as programming is ubiquitous on the YouTube Kids application. 120

YouTube’s terms of service state that its main app and website are meant only for viewers 13 and older, which
means that the site does not have to comply with the Children’s Online Privacy Protection Act of 1998 121
(“COPPA”), the law passed in the US in response to growing concerns throughout the 1990s about the safe use
of the internet by children.122 The company directs those under the age of 13 to the YouTube Kids app, which
pulls its videos from the main site.
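COPPA's scope turns on a few conditions: a service that is child-directed or whose operator has actual knowledge of child users, a user under 13, and a covered category of "personal information." A rough sketch of that scope test might look as follows; the function name, data-item strings, and flags are all invented for illustration, and this simplified reading is not a substitute for the actual rule.

```python
# Hypothetical sketch of a COPPA-style gate: must an operator obtain
# verifiable parental consent before collecting a given data item?
# Simplified reading of the rule for illustration only; not legal advice.

# Per the FTC's 2013 amendments, "personal information" includes
# geolocation data, persistent device identifiers, and a child's
# voice or image, alongside traditional identifiers.
COVERED_DATA = {"name", "email", "geolocation", "device_id", "voice", "image"}

def consent_required(user_age: int, data_item: str,
                     child_directed: bool, actual_knowledge: bool) -> bool:
    """COPPA applies to child-directed services or operators with
    actual knowledge of child users; a child is anyone under 13."""
    in_scope = child_directed or actual_knowledge
    return in_scope and user_age < 13 and data_item in COVERED_DATA

print(consent_required(9, "voice", child_directed=True, actual_knowledge=False))   # True
print(consent_required(14, "voice", child_directed=True, actual_knowledge=False))  # False
```

The sketch also makes YouTube's position visible: by declaring the main site "13 and older," an operator argues itself out of the `child_directed` and `actual_knowledge` branches, and the gate never triggers.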

Although YouTube restricts paid advertising of food and beverages on YouTube Kids, for example, food
companies may use their own branded channels to spotlight particular food and beverages that they produce,
burying what are essentially ads within programs, and thereby target children with their products. 123 Thus,
corporations are finding ways to target minors in ways that uphold the letter but not the spirit of the rules and in
ways that may be hidden from parents and other concerned parties. 124

information from children that is shared with advertisers and marketers, and (ii) children sharing information
with online predators who could use it to find them offline.” COPPA was implemented by the FTC through
its Child Online Privacy Protection Rule, which took effect April 21, 2000. In general, COPPA regulates the
collection of personal information from children and applies to websites “directed to children” and those
whose operators have “actual knowledge” of child users. Children are identified as individuals under the age
of thirteen. The five key requirements of the act are notice, parental consent, parental review, security, and
limits on the use of games and prizes. In order to legally collect covered personal information from a child, a
website operator must first obtain “verifiable parental consent” in a form that varies based on the intended
use of the information. The FTC’s most recent amendments to the COPPA rule took effect in 2013 and
clarified that the regulations are applicable to web services and mobile apps and that “personal information”
includes geolocation data, device identifiers, and media containing the voice or image of a child. In
September 2019, the FTC, acting with the Attorney General of New York, announced that it reached a
settlement with YouTube and parent company Google in response to allegations that the services “illegally
collected personal information from children without their parents’ consent,” in violation of COPPA. The
companies agreed to pay $34 million to New York and $136 million to the FTC. Press Release, Fed. Trade
Comm’n, Google and YouTube Will Pay Record $170 Million for Alleged Violations of Children’s Privacy Law
(Sep. 4, 2019) https://www.ftc.gov/news-events/pressreleases/2019/09/google-youtube-will-pay-record-
170-million-alleged-violations. See also Stephen Beemsterboer, COPPA Killed the Video Star: How the YouTube Settlement Shows that COPPA Does More Harm Than Good, 25 Ill. Bus. L.J. 63 (2020), available at SSRN: https://ssrn.com/abstract=3631855; See
generally Reddy, T. Raja and Reddy, Dr. E. Lokanadha and Reddy, T. Narayana, Ethics of Marketing to
Children: A Rawlsian Perspective (October 9, 2020). Journal of Economics and Business, Vol. 3 No. 4 (2020),
Available at SSRN: https://ssrn.com/abstract=3706544
118See, e.g., Campbell, Angela J., Rethinking Children's Advertising Policies for the Digital Age (2016). 29 Loy.
Consumer L. Rev. 1 (2016), Available at SSRN: https://ssrn.com/abstract=2911892
119Sapna Maheshwari, “New Pressure on Google and YouTube Over Children’s Data,” NY Times, September 20,
2018, https://www.nytimes.com/2018/09/20/business/media/google-youtube-children-data.html
120Sapna Maheshwari, “New Pressure on Google and YouTube Over Children’s Data,” NY Times, September 20,
2018, https://www.nytimes.com/2018/09/20/business/media/google-youtube-children-data.html
12115 U.S.C. §§ 6501–05 (2018).
122Lauren A. Matecki, Note, Update: COPPA Is Ineffective Legislation! Next Steps for Protecting Youth Privacy
Rights in the Social Networking Era, 5 NW. J. L. & SOC. POL’Y 369, 370 (2010)
123Cecilia Kang, “YouTube Kids App Faces New Complaints Over Ads for Junk Food,” NY Times, December 21,
2017, sec. Technology, https://www.nytimes.com/2015/11/25/technology/youtube-kids-app-faces-
newcomplaints.html

Concerns about these platforms’ impacts on children continue to result in lawsuits125 and campaigns to regulate them. Fairplay126 is leading a powerful international coalition of 100 experts, advocates, and organizations in calling on Facebook to abandon its plans to create an Instagram for children.127

AI and Children’s Rights at Play: Smart Toys

Toys are more interactive than ever before. The emergence of the Internet of Things (IoT) makes toys smarter
and more communicative: they can now interact with children by "listening" to them and respond accordingly. 128
While there is little doubt that these toys can be highly entertaining for children and even possess social and
educational benefits, the Internet of Toys (IoToys) raises many concerns. Beyond the fact that IoToys might be hacked or simply misused by unauthorized parties, the datafication of children by toy conglomerates, various interested parties and perhaps even their parents could be highly troubling. IoToys could profoundly

124 Smith and Shade, “Children’s Digital Playgrounds,” 5. See also Tur-Viñes, Victoria & Castelló-Martínez,
Araceli. (2021). Food brands, YouTube and Children: Media practices in the context of the PAOS self-
regulation code. Communication & Society. 87-105. 10.15581/003.34.2.87-105. (“The objective of this study
is to analyze media practices involving food content on YouTube in terms of the self-regulatory framework
established by the PAOS code, which was originally designed for television. The study considers content
created and disseminated by two different sources: food brands and child YouTuber channels. We conducted
an exploratory qualitative-quantitative study based on a content analysis of videos posted in 2019 on the
most viewed YouTube channels in Spain (Socialblade, 2019). The final sample included 211 videos (29h
57m) divided into two subsamples: the official channels of 13 Spanish food brands (82 videos), and 15
Spanish child YouTuber channels (129 videos). The study has facilitated information on nine dimensions: (1)
adherence to regulations and ethical standards, (2) nutrition education and information, (3) identification of
advertising, (4) presence of risk, (5) clarity in the presentation of the product and in the language used, (6)
pressure selling, (7) promotions, giveaways, competitions, and children’s clubs, (8) support and promotion
through characters and programs and (9) comparative presentations. The main findings reveal the
experimental nature of videos featuring food brands that are posted on YouTube for child audiences,
especially videos broadcast on the channels of child YouTubers, who post content without an ethical strategy
sensitive to their target audience. The lack of compliance with the basic requirement of identifying the video
as advertising underscores the urgent need to adapt existing legal and ethical standards to these new
formulas of commercial communication.”)
125 Christina Davis, YouTube, Google Class Action Says Kid Data Collected Without Permission, (Oct. 30, 2019)
https://topclassactions.com/lawsuit-settlements/consumer-products/mobile-apps/929240-youtube-google-
class-action-says-kid-data-collected-without-permission/ Complaint available at
https://www.classaction.org/media/hubbard-v-google-llc-et-al.pdf
126https://fairplayforkids.org/ Fairplay is the leading nonprofit organization committed to helping children thrive
in an increasingly commercialized, screen-obsessed culture, and the only organization dedicated to ending
marketing to children. Id.
127https://fairplay.salsalabs.org/noinstagramforkids/index.html
128The Internet of Things (IoT) has penetrated the global market including that of children's toys. Worldwide,
Smart Toy sales reached $9 billion in 2019 and are expected to exceed $15 billion by 2022. Connecting
IoT toys to the internet exposes users and their data to multivariate risk due to device vulnerabilities. When
IoT devices are marketed to individuals, especially children, the potential for negative impact is significant,
so their design must result in robust security implementations. For our study, we performed penetration
testing on a Fisher-Price Smart Toy. We were able to obtain root access to the device, capture live pictures
and videos, as well as install remote access software which allows surreptitious recordings over WiFi network
connections without user knowledge or permission. We propose solutions including adhering to rudimentary
standards for security design in toys, a mobile application for IoT threat assessment and user education, and
an ambient risk communication tool aligned with user risk perception. The proposed solutions are crucial to
empower users with capabilities to identify and understand ambient risks and defend against malicious
activities. Streiff, Joshua and Das, Sanchari and Cannon, Joshua, Overpowered and Underprotected Toys:
Empowering Parents with Tools to Protect Their Children (December 13, 2019). IEEE Humans and Cyber
Security Workshop (HACS 2019), Available at SSRN: https://ssrn.com/abstract=3509530

threaten children’s right to privacy, subjecting them to, and normalizing, ubiquitous surveillance and the datafication of their personal information, requests, and anything else they divulge.129

Because AI-based devices interact autonomously with children and convey their own cultural values, they impinge on the rights and duties of parents to provide, in a manner consistent with the evolving capacities of the child, appropriate direction and guidance in the child’s exercise of freedom of thought, including aspects concerning cultural diversity.130
With the emergence of the Internet of Things, ordinary objects have become connected to the internet, and children can now be constantly datafied during their daily routines, with or without their knowledge. IoT devices can collect and retain mass amounts of data and metadata on children and share them with various parties, extracting where children are and what they are doing or saying, and perhaps even capturing images and videos of them.
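To make that datafication concrete, here is a hypothetical sketch of the kind of record a connected toy could assemble and transmit to its supplier. Every field name is invented for illustration, and no particular vendor's actual format is implied.

```python
# Purely illustrative sketch of the telemetry a connected toy might
# assemble and transmit; all field names are invented.

import json
from datetime import datetime, timezone

def build_telemetry(device_id: str, transcript: str,
                    lat: float, lon: float) -> str:
    """Bundle a child's utterance with identifying metadata, the way
    an always-on device could before uploading to a server."""
    record = {
        "device_id": device_id,            # persistent identifier
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "transcript": transcript,          # what the child said
        "location": {"lat": lat, "lon": lon},
    }
    return json.dumps(record)

payload = build_telemetry("toy-123", "my secret is ...", 52.52, 13.40)
print("transcript" in payload)  # the child's speech travels with the payload
```

The point of the sketch is that a single upload can bind together who the child is (device identifier), where they are (location), and what they said (transcript) — exactly the combination that makes such records valuable to third parties and dangerous if breached.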
Cayla is an internet-connected doll that uses voice recognition technology to chat and interact with children in
real time. Cayla’s conversations are recorded and transmitted online to a voice analysis company. This raised
concerns that hackers might spy on children or communicate directly with them as they play with the doll. There
are also concerns about how kids’ voice data was used. In 2017 German regulators urged parents to destroy the
doll, classifying it as an “illegal espionage apparatus”. 131

On February 17, 2017, the German Federal Network Agency banned Cayla 132 from being sold, and ordered the
destruction of all devices which had already been sold.133The legal basis of this decision was § 148 (1) no. 2, 90
of the German Telecommunication Act. The rationale was that, because of the doll’s connectivity to its manufacturer (required because the doll was AI-enabled), the doll was effectively a spy on the child, recording everything the child said to it, including their most precious secrets.134
Likewise, the agency was concerned that the devices were hackable, exposing children to threats such as pedophiles or ideological communications. Since then, the regulator has used the law to ban similar devices as well as smartwatches.135 This strict approach, adopted to protect children, one of the most vulnerable

129Haber, Eldar, Toying with Privacy: Regulating the Internet of Toys (December 8, 2018). 80 Ohio State Law
Journal 399 (2019), Available at SSRN: https://ssrn.com/abstract=3298054; See also Haber, Eldar, The
Internet of Children: Protecting Children’s Privacy in A Hyper-Connected World (November 21, 2020). 2020
U. Ill. L. Rev. 1209 (2020), Available at SSRN: https://ssrn.com/abstract=3734842
130 See e.g. Norwegian Consumer Council (fn 179) referring to the connected doll Cayla (“Norwegian version of
the apps has banned the Norwegian words for “homosexual”, “bisexual”, “lesbian”, “atheism”, and “LGBT”
[…]” “Other censored words include ‘menstruation’, ‘scientology-member’, ‘violence’, ‘abortion’, ‘religion’,
and ‘incest’ ”); See Esther Keymolen and Simone Van der Hof, ‘Can I still trust you, my dear doll? A
philosophical and legal exploration of smart toys and trust’ (2019) 4(2) Journal of Cyber Policy 143-159
https://www.tandfonline.com/doi/pdf/10.1080/23738871.2019.1586970?needAccess=true (“Smart toys
come in different forms but they have one thing in common. The development of these toys is not just a
feature of ongoing technological developments; their emergence also reflects an increasing
commercialisation of children’s everyday lives”); See Valerie Steeves, ‘A dialogic analysis of Hello Barbie’s
conversations with children’ (2020) 7(1) Big Data & Society,
https://journals.sagepub.com/doi/pdf/10.1177/2053951720919151
131Kay Firth-Butterfield, Generation AI: What happens when your child's friend is an AI toy that talks back? (World Economic Forum, May 22, 2018) http://governance40.com/wp-content/uploads/2018/12/Generation-AI-What-happens-when-your-childs-friend-is-an-AI-toy-that-talks-back-World-Economic-Forum.pdf
132MY FRIEND CAYLA, https://www.genesis-toys.com/my-friend-cayla
133Press Release, Bundesnetzagentur Removes Children’s Doll “Cayla” From the Market, Bundesnetzagentur
[BNetzA] [German FederalNetwork Agency], (Feb. 2, 2017).
134Kay Firth-Butterfield, Generation AI: What happens when your child's friend is an AI toy that talks back?, WORLD
ECONOMIC FORUM (May 20, 2018) https://www.weforum.org/agenda/2018/05/generation-ai-whathappens-when-
your-childs-invisible-friend-is-an-ai-toy-that-talks-back/.
135See Dakshayani Shankar, Germany Bans Talking Doll Cayla over Security, Hacking Fears, NBC NEWS (Feb. 18, 2017,
6:43 PM),http://www.nbcnews.com/news/world/germany-bans-talking-doll-cayla-over-security-hacking-fears-

demographics, has a further legal basis in Art. 16 (1) of the Convention on the Rights of the Child, according to which “no child shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home or correspondence.”136

Cayla is just one example of a new wave of artificial intelligence toys that “befriend” children. Manufacturers
often claim they are educational, enhancing play and helping children develop social skills. But consumer groups
warn that smart toys, like other “things” we connect to the internet, might put security and privacy at risk. How do we navigate a world where AI toys are increasingly popular? What happens to the data from AI toys? The toys are connected to the internet (via WiFi, or Bluetooth to a phone or other device with internet access) and send data to the supplier. This enables the company’s AI to learn and to converse better with the child. The company records and collects all the child’s conversations with the toy, and possibly those with
other children and adults who also interact with it. The company is probably storing this data and certainly using
it to create a better product. The location of the toy affects how the data is stored. For example, in the US,
companies creating educational toys can store data for longer than other companies. So when the manufacturer
describes their toy as educational, it opens up that right to hold on to the data for longer. As more devices –
many marketed as educational toys – come onto the market, they are setting off alarm bells around privacy,
bias, surveillance, manipulation, democracy, transparency and accountability.

What issues should we be most concerned about? Germany banned Cayla and similar toys because of concerns
they could be used to spy on children and that someone could hack the device and communicate directly with
the child. But we are also talking about companies monetizing data. The data from AI toys contains everything a
child says to the device, including their most guarded secrets.

If that data is collected, does the child have a right to get it back? If that data is collected from very early childhood and does not belong to the child, does it make the child extra vulnerable, because his or her choices and patterns of behavior could be known to anyone who purchases the data, for example companies or political campaigns?

Depending on the privacy laws of the state in which the toys are being used, if the data is collected and kept, it breaches Article 16 of the Convention on the Rights of the Child – the right to privacy. Though, of course, arguably this is something parents routinely do by posting pictures of their children on Facebook.137

What are the benefits of AI toys? Most economists would argue that improving and increasing access to
education is one of the best ways to close the gap between the developing and developed world. AI-enabled
educational toys and “teachers” could make a hugely beneficial difference in the developing world. Yet according to venture capitalist and former Google China CEO Kai-Fu Lee,138 the data collected from such devices would likely be used by the big AI companies in the West and China for their own purposes, rather than directed toward an effort to benefit children, their parents or the countries in which they live.139

n722816; Jane Wakefield, Germany Bans Children’s Smartwatches, BBC NEWS (Nov. 17, 2017),
http://www.bbc.com/news/technology-42030109.
136United Nations Convention on the Rights of the Child, art. 16 (1), Nov. 20, 1989
https://www.ohchr.org/en/professionalinterest/pages/crc.aspx
137Steinberg, Stacey, Sharenting: Children's Privacy in the Age of Social Media (March 8, 2016). 66 Emory L.J.
839 (2017), University of Florida Levin College of Law Research Paper No. 16-41, Available at SSRN:
https://ssrn.com/abstract=2711442
138 Kai-Fu Lee, AI Superpowers: China, Silicon Valley, and the New World Order, Houghton Mifflin Harcourt
(2018). Despite these warnings, the book is ultimately optimistic that the complementarity between humans
and AI can lead to a productive human-AI coexistence. Offering a dose of optimism to counter the
doomsday singularity prediction, Lee reminds us that when it comes to shaping the story of AI, we humans
are not just passive spectators, we can take action. See also Kai-Fu Lee, How AI can save our humanity,
TED, (August 27, 2018) https://www.youtube.com/watch?v=ajGgd9Ld-Wc
139 Kay Firth-Butterfield, Generation AI: What happens when your child's friend is an AI toy that talks back? (World Economic Forum, May 22, 2018) http://governance40.com/wp-content/uploads/2018/12/Generation-

What influence could AI toys have on kids? As well as the risk of hacking, we also need to think about what
these toys are saying to our children. Who is the arbiter of these conversations? Who coded the algorithms
(their unintended biases could creep in)? Do the values the child is being exposed to align with those of the
parents? Will parents be able to choose the values the toy is coded with? 140

If the toy is educational, is the algorithm being checked by someone who is at least qualified to teach? These
toys will be very influential because the children will be conversing with them all the time. For example, if the
doll says it is cold and the child asks his or her parents to buy it a coat, is that advertising? If data is being
collected, even if it isn’t being stored, does the company have a duty to “red flag” children who share suicidal
thoughts or other self-harming behavior? What if the child confides in the toy that he or she is being abused,
will the company report this to the relevant authorities? And then what will the company do with that
information? So what can we do to protect children? 141

Parents need to have answers to these questions before they buy the devices. At the very least, they can check
that their child is learning values from AI toys that concur with their own. At the moment the onus is on
consumers to know what is being done with their data, but there is discussion that companies should be made
responsible for ensuring consumers understand how it’s being used. 142

A World Economic Forum project advocates a role for regulators in certifying algorithms as fit for purpose, as opposed to the current situation where regulators issue a fine after something goes wrong. This regulatory model is appropriate for IoToys because it is needed now and offers an agile governance mechanism. The
problem, though, with governance of smart toys is that the AI is learning and changing with each interaction
with the child. AI-enabled toys are not necessarily bad. They could one day help us achieve precision learning
(using AI to tailor education to each child’s needs). AI toys could be excellent for preparing children to work
alongside autonomous robots. The point is that children are vulnerable and we must consider how AI is used
around them and not beta test it on them. 143

While the US Congress responded to the privacy threats toward children that emerged from the internet with the enactment of COPPA,144 this regulatory framework only applies to a limited set of IoT devices, excluding those that are not directed toward children and do not knowingly collect personal information from them. COPPA is ill-suited to properly safeguard children from the privacy risks that IoT entails, as it does not govern many IoT devices that children are exposed to. The move toward an “always-on” era, in which many IoT devices constantly collect data from users regardless of their age, exposes us all to great privacy risks.145
IoT will most likely play a substantive role in child-targeted devices in the foreseeable future. IoToys present children with interactive play. Beyond being fun, smart toys could carry educational and social benefits for children:146 opportunities to learn, develop, and improve communication skills; encourage active play

AI-What-happens-when-your-childs-friend-is-an-AI-toy-that-talks-back-World-Economic-Forum.pdf
140 Kay Firth-Butterfield, Generation AI: What happens when your child's friend is an AI toy that talks back? (World Economic Forum, May 22, 2018) http://governance40.com/wp-content/uploads/2018/12/Generation-AI-What-happens-when-your-childs-friend-is-an-AI-toy-that-talks-back-World-Economic-Forum.pdf
141 Kay Firth-Butterfield, Generation AI: What happens when your child's friend is an AI toy that talks back? (World Economic Forum, May 22, 2018) http://governance40.com/wp-content/uploads/2018/12/Generation-AI-What-happens-when-your-childs-friend-is-an-AI-toy-that-talks-back-World-Economic-Forum.pdf
142Id.
143 Kay Firth-Butterfield, Generation AI: What happens when your child's friend is an AI toy that talks back? (World Economic Forum, May 22, 2018) http://governance40.com/wp-content/uploads/2018/12/Generation-AI-What-happens-when-your-childs-friend-is-an-AI-toy-that-talks-back-World-Economic-Forum.pdf
144See Children’s Online Privacy Protection Act (COPPA), Pub. L. No. 105–277, 112 Stat. 2681 (1998) (codified as amended at 15 U.S.C. §§ 6501–06 (2018))
145Haber, Eldar, Toying with Privacy: Regulating the Internet of Toys (December 8, 2018). 80 Ohio State Law
Journal 399 (2019), Available at SSRN: https://ssrn.com/abstract=3298054

and toy interaction, which might be preferable to passive TV screen time; identify learning difficulties; and be
affordable for parents.147

IoToys devices have been criticized for their potential educational, social, and psychological drawbacks. To name a few: providing poor quality of play; potentially harming children’s development; impeding child–parent interaction;148 obstructing children’s well-being and healthy development, which require real relationships and conversations;149 and posing a health risk from electromagnetic radiation (EMR).150

In today’s constantly connected world, where almost everyone has access to the web and anyone can interact with anyone behind a veil of anonymity, there is a much higher risk of someone grooming our children without our even knowing.151 Cyber grooming, a real threat, is a form of child grooming in which the predator targets a child online, builds a virtual relationship with them, gains their trust, and learns the best way to gain access to them in the real world.152

For a predator, connecting to children online can be easy. Some opt to join a kid-friendly chat room and pretend to be a child, while others play an online game in which they can privately communicate with the child. Often the predator will entice a child to trust them with gifts and promises while also using language that normalizes sexual talk and actions.

IoToys devices’ potential drawbacks subject children to various risks, for example, exposure to harmful
content.153 There is even the danger of mental and bodily harm by predators, some of whom could gain access to

146The Smart Toy Awards recognize ethical and responsible smart toys that use AI to create an innovative and
healthy play experience for children. Beatrice Di Caro, World Economic Forum, May 21, 2021,
https://www.weforum.org/agenda/2021/05/smart-toy-awards-ede2d12ced/
147See Stéphane Chaudron et al., Kaleidoscope on the Internet of Toys: Safety, Security, Privacy and Societal
Insights, JRC TECHNICAL REP. 9 (2017),
http://publications.jrc.ec.europa.eu/repository/bitstream/JRC105061/jrc105061_final_online.pdf and 5
Benefits of Tech Toys for Children, ROBO WUNDERKIND (June 23, 2017), http://yuriy-
levin.squarespace.com/blog/benefits-tech-toys-kids [https://perma.cc/Q599-U3A8].
148See Kate Cox, Privacy Advocates Raise Concerns About Mattel’s Always-On ‘Aristotle’ Baby Monitor,
CONSUMERIST (May 10, 2017), https://consumerist.com/2017/05/10/privacy-advocates-raise-concerns-
about-mattels-always-on-aristotle-baby-monitor [https://perma.cc/VP3S-JEHB].
149See, e.g., Richard Chirgwin, Mattel’s Parenting Takeover Continues with Alexa-Like Dystopia, THE REGISTER
(Jan. 4, 2017),
https://www.theregister.co.uk/2017/01/04/mattels_parenting_takeover_continues_with_alexalike_dystopia
[https://perma.cc/NXP5-7GW3]
150See Stéphane Chaudron et al., Kaleidoscope on the Internet of Toys: Safety, Security, Privacy and Societal
Insights, JRC TECHNICAL REP. 9 (2017),
http://publications.jrc.ec.europa.eu/repository/bitstream/JRC105061/jrc105061_final_online.pdf
151 See Urs Gasser, Colin Maclay, John Palfrey, An Exploratory Study by the Berkman Center for Internet &
Society at Harvard University, in Collaboration with UNICEF, (June 16, 2010) https://dmlhub.net/wp-
content/uploads/files/SSRN-id1628276.pdf
152Daniel Bennett, What Is Cyber Grooming and How to Protect Children? (March 23, 2020). TechAcute
https://techacute.com/what-is-cyber-grooming/
153As these toys rely on remotely stored data, children could be exposed to harmful content if that information
is altered by a malicious entity that gains access to the toy, or simply because of a programming error. See,
for instance, how a misunderstanding led Amazon Echo to spout porn
search terms to a toddler. Amazon Alexa Gone Wild, YOUTUBE (Dec. 29, 2016),
https://www.youtube.com/watch?v=r5p0gqCIEa8 [https://perma.cc/SE3G-M5ZU]. See also how a specialist
team hacked Cayla to quote Hannibal Lecter and lines from 50 Shades of Grey. See David Moye, Talking Doll
Cayla Hacked to Spew Filthy Things, HUFFPOST (Feb. 9, 2015),
http://www.huffingtonpost.com/2015/02/09/my-friend-cayla-hacked_n_6647046.html
[https://perma.cc/78HN-89F6].

26
Electronic copy available at: https://ssrn.com/abstract=3882296
toys and use them to listen to, watch, track, and even directly contact children.154 Along with these important
challenges, IoToys devices raise human rights concerns.155 They can potentially subject children to ubiquitous
surveillance and datafication, which could profoundly impact their right to privacy.156

Children’s leisure activities have thus changed significantly over the last two decades, from toys with little
interactive capacity to smart toys capable of talking back.157 Through the use of weak artificial intelligence,
these toys incorporate techniques that allow computers to mimic human logic and interaction.158 Such toys raise
a host of human rights concerns. These include potential violations of a child’s right to privacy, and the
question whether corporations have (or should have) a duty to report sensitive information that is shared with a
toy and stored online, such as indications that a child might be being abused or otherwise harmed.159

154When children assume that it is the toy that is “talking” to them, predators might be able to persuade them
to convey sensitive information. These predators could obtain information from children like where they live
and, perhaps even worse, convince them to act on their behalf. See Abby Haglage, Hackable ‘Hello Barbie’
the Worst Toy of the Year (and Maybe Ever), DAILY BEAST (Dec. 10, 2015),
http://www.thedailybeast.com/hackablehello-barbie-the-worst-toy-of-the-year-and-maybe-ever
[https://perma.cc/85E4-AGQW]. For a typology of risks to children online, see ORG. FOR ECON. CO-
OPERATION & DEV. (OECD), THE PROTECTION OF CHILDREN ONLINE - RECOMMENDATION OF THE OECD
COUNCIL REPORT ON RISKS FACED BY CHILDREN ONLINE AND POLICIES TO PROTECT THEM 24–39
(2012), https://www.oecd.org/sti/ieconomy/childrenonline_with_cover.pdf [https://perma.cc/33T7-R645].
155Verdoodt, Valerie, The Role of Children's Rights in Regulating Digital Advertising (2019). International
Journal of Children’s Rights, 27 (3), 455-481, 2019, doi: 10.1163/15718182-02703002., Available at SSRN:
https://ssrn.com/abstract=3703312 or http://dx.doi.org/10.2139/ssrn.3703312 (An important domain in
which children’s rights are reconfigured by internet use, is digital advertising. New advertising formats such
as advergames, personalized and native advertising have permeated the online environments in which
children play, communicate and search for information. The often immersive, interactive and increasingly
personalized nature of these advertising formats makes it difficult for children to recognize and make
informed and well-balanced commercial decisions. This raises particular issues from a children’s rights
perspective, including inter alia their rights to development (Article 6 UNCRC), privacy (Article 16 UNCRC),
protection against economic exploitation (Article 32 UNCRC), freedom of thought (Article 17 UNCRC) and
education (Article 28 UNCRC). The paper addresses this reconfiguration by translating the general principles
and the provisions of the United Nations Convention on the Rights of the Child into the specific context of
digital advertising. Moreover, it considers the different dimensions of the rights (i.e. protection, participation
and provision) and how the commercialization affects children and how their rights are exercised.)
156Haber, Eldar, The Internet of Children: Protecting Children’s Privacy in A Hyper-Connected World (November
21, 2020). 2020 U. Ill. L. Rev. 1209 (2020)., Available at SSRN: https://ssrn.com/abstract=3734842
157Chris Nickson, “How a Young Generation Accepts Technology,” A Technology Society, September 18, 2018,
available at http://www.atechnologysociety.co.uk/howyoung-generation-accepts-technology.html. See also
Laura Rafferty, Patrick C. K. Hung, Marcelo Fantinato,Sarajane Marques Peres, Farkhund Iqbal, Sy-Yen Kuo,
and Shih-Chia Huang, “Towards a Privacy Rule Conceptual Model for Smart Toys” in Computing in Smart
Toys, 85-102, available at
https://www.researchgate.net/publication/319046589_Towards_a_Privacy_Rule_Conceptual_Model_for_Sma
rt_Toys. (“A smart toy is defined as a device consisting of a physical toy component that connects to one or
more toy computing services to facilitate gameplay in the cloud through networking and sensory
technologies to enhance the functionality of a traditional toy. A smart toy in this context can be effectively
considered an Internet of Things (IoT) with Artificial Intelligence (AI) which can provide Augmented Reality
(AR) experiences to users. In this paper, the first assumption is that children do not understand the concept
of privacy and the children do not know how to protect themselves online, especially in a social media and
cloud environment. The second assumption is that children may disclose private information to smart toys
and not be aware of the possible consequences and liabilities. This paper presents a privacy rule conceptual
model with the concepts of smart toy, mobile service, device, location, and guidance with related privacy
entities: purpose, recipient, obligation, and retention for smart toys. Further the paper also discusses an
implementation of the prototype interface with sample scenarios for future research works.”)

There are three nodes involved in smart toy processes, each of which comes with its own challenges and
vulnerabilities: the toy itself, which interfaces with the child; the mobile application, which acts as an access
point for the Wi-Fi connection; and the toy’s/consumer’s personalized online account, where data is stored. Such
toys communicate with cloud-based servers that store and process data provided by the children who interact
with the toy.160
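The three-node flow can be sketched schematically. The class and field names below are illustrative assumptions for exposition only, not any vendor’s actual API; real products add encryption, authentication, and speech processing at each hop.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the three smart-toy nodes described above.
# All names here are hypothetical; real toy platforms differ.

@dataclass
class Toy:
    """Physical toy node: captures the child's utterances."""
    device_id: str

    def record_utterance(self, text: str) -> dict:
        return {"device_id": self.device_id, "utterance": text}

@dataclass
class MobileApp:
    """Companion app node: relays toy data to the cloud over Wi-Fi."""
    def relay(self, packet: dict, account: "CloudAccount") -> None:
        account.store(packet)

@dataclass
class CloudAccount:
    """Personalized online account node: where data is stored and processed."""
    owner_email: str
    stored: list = field(default_factory=list)

    def store(self, packet: dict) -> None:
        self.stored.append(packet)

toy = Toy(device_id="toy-001")
app = MobileApp()
account = CloudAccount(owner_email="parent@example.com")
app.relay(toy.record_utterance("hello"), account)
# Each hop (toy -> app -> cloud) is a distinct attack surface.
```

The point of the sketch is structural: a compromise at any one of the three nodes exposes whatever data passes through it.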

Privacy concerns arising from this model are illustrated by the CloudPets161 case, in which more than 800,000
toy accounts were hacked, exposing customers’ (including children’s) private information.162 In 2017, more than
two million voice messages recorded on CloudPets cuddly toys were discovered in an open database. Besides the
voice messages of children and adults, the database included people’s email addresses, passwords, profile
pictures, and even children’s names and the names of authorized family members.163

The database containing all this information had no username or password protection to prevent anyone from
seeing the data. Worse, the database soon suffered a ransomware attack: the hackers deleted the original
databases and left a ransom demand in their place, locking the data until a certain sum was paid.164

Another example of a risky toy is the Hello Barbie doll,165 which raised civil society concerns about the
interception of sensitive information and about whether the doll allowed pervasive surveillance in ways that
were not transparent to users.166 In that case, the toy’s manufacturer, Mattel, in collaboration with ToyTalk,
Inc., released a FAQ to try to address these pressing questions.167 The FAQ states that the conversations between the

158Rafferty et al., “Towards a Privacy Rule,” supra


159Benjamin Yankson, Farkhund Iqbal, and Patrick C. K. Hung, “Privacy Preservation Framework for Smart
Connected Toys,” Computing in Smart Toys,
https://www.researchgate.net/publication/319048771_Privacy_Preservation_Framework_for_Smart_Connect
ed_Toys (“Advances in the toy industry and interconnectedness resulted in rapid and pervasive development
of Smart Connected Toy (SCT), which built to aid children in learning, socialization, and development. A SCT
is a physical embodiment artifact that acts as a child user interface for toy computing services in cloud.
These SCTs are built as part of Internet of Things (IoT) with the potential to collect terabytes of personal
and usage information. They introduce the concept of increasing privacy, and serious safety concerns for
children, who are the vulnerable sector of our community and must be protected from exposure of offensive
content, violence, sexual abuse, and exploitation using SCTs. SCTs are capable to gather data on the context
of the child user’s physical activity state (e.g., voice, walking, standing, running, etc.) and store personalized
information (e.g., location, activity pattern, etc.) through camera, microphone, Global Positioning System
(GPS), and various sensors such as facial recognition or sound detection. In this chapter we are going to
discuss the seriousness of privacy implication for these devices, survey related work on privacy issues within
the domain of SCT, and discuss some global perspective (legislation, etc.) on such devices. The chapter
concludes by proposing some common best practice for parents and toy manufactures can both adopt as
part of Smart Connected Toy Privacy Common body of knowledge for child safety.”)
160Rafferty et al., “Towards a Privacy Rule,” supra
161https://en.wikipedia.org/wiki/CloudPets
162Alex Hern, “CloudPets stuffed toys leak details of half a million users,” The Guardian,
https://www.theguardian.com/technology/2017/feb/28/cloudpets-data-breach-leaksdetails-of-500000-
children-and-adults, (February 28, 2017).
163Marija Perinic, Cloud Pets: The Cuddly Cyber Security Risk, Secure Thoughts, (May 11, 2021)
https://securethoughts.com/cloudpets-app/
164Id.
165See Valerie Steeves, ‘A dialogic analysis of Hello Barbie’s conversations with children’ (2020) 7(1) Big Data &
Society, https://journals.sagepub.com/doi/pdf/10.1177/2053951720919151
166Corinne Moini, “Protecting Privacy in the Era of Smart Toys: Does Hello Barbie Have a Duty to Report,” 25
Cath. U. J. L. & Tech 281, (2017), 4. https://scholarship.richmond.edu/law-student-publications/157/
167Mattel, “Hello Barbie Frequently Asked Questions,”
(2015),http://hellobarbiefaq.mattel.com/wp-content/uploads/2015/12/hellobarbie-faq-v3.pdf

doll and the child cannot be intercepted via Bluetooth, because the conversation takes place over a secured
network that makes it impossible to connect to the doll via Bluetooth.168 The document advises against
connecting the doll to third-party Wi-Fi, which may be especially vulnerable to interception.169

Further, the document claims that the Hello Barbie doll is not always listening, but becomes inactive when not
expressly engaged.170 According to the document released by Mattel, the doll has recognition technology similar
to Siri’s and is activated only when the user pushes down the doll’s belt buckle.171 Finally, the company states
that the doll does not ask questions intended to elicit personal information, in order to minimize the
circumstances in which a child might divulge sensitive information while conversing with the doll.172

Notably, parents can access their child’s ToyTalk cloud account, listen to what their child has said, and delete
any personal information.173 As a safeguard, ToyTalk also participates in the FTC-approved KidSafe Seal Program,
a compliance program for websites and online services targeted at children.174 There are two types of
certificates that a website or online service can obtain: the KidSafe certificate and the KidSafe+ certificate.175
The KidSafe+ certificate imposes additional requirements, including compliance with COPPA.176 The communications
between Hello Barbie and a child are encrypted and stored on a trusted network.177

A key concern, despite these safeguards, is whether a company has a duty to report or otherwise “red flag”
sensitive information shared through its toys178: for example, children who reveal they are being abused, or
children who share suicidal thoughts or other self-harm-related behavior.179

168Corinne Moini, Protecting Privacy in the Era of Smart Toys: Does Hello Barbie Have a Duty to Report,
Catholic University Journal of Law and Technology, no.25 (2017): 4; Hello Barbie FAQ
169Vlahos, James, “Barbie Wants To Get To Know Your Child,” New York Times, (September 16 2015). See
Hello Barbie Security: Part 1 – Teardown, Somerset Recon, (Nov. 20, 2015)
https://www.somersetrecon.com/blog/2015/11/20/hello-barbie-security-part-1-teardown and Hello Barbie
Security: Part 2 – Analysis, Somerset Recon, (Jan. 25, 2016)
https://www.somersetrecon.com/blog/2016/1/21/hello-barbie-security-part-2-analysis (“In the end, we
believe that ToyTalk started off well by utilizing pre-designed hardware and software, but fell short when it
came to their web security. The number of vulnerabilities found in both ToyTalk’s websites and web
services, and in such a short amount of time, indicate that they had little to no pre-production security
analysis and are relying on their bug bounty program to patch up the holes. However, this could have been
easily remedied by hiring a professional security team to audit the attack surface that is left. It also seems
that the KidSafe Seal Program does not provide strict or clear enough information security requirements for
web related technologies. In the end, it’s a decision for the parents about the trust they place in ToyTalk. If
ToyTalk’s servers are ever eventually breached, they wouldn’t be the first company to leak personal
information about children to hackers. It’s up to the parents to decide whether they want to take that risk.”)
170Mattel, “Hello Barbie Frequently Asked Questions,” (2015),
http://hellobarbiefaq.mattel.com/wp-content/uploads/2015/12/hellobarbie-faq-v3.pdf.
171Id.
172Id.
173Id.
174Federal Trade Commission, “KidSafe Seal Program:Certification Rules Version 3.0 (Final),”
https://www.ftc.gov/system/files/attachments/press-releases/ftc-approves-kidsafe-safeharbor-program/
kidsafe sealprograms certification rules ftcapprovedkidsafe-coppaguidelinesfeb_2014.pdf [hereinafter
KIDSAFE SEAL PROGRAM], (2014).
175Federal Trade Commission, “KidSafe Seal Program,”(2014).
176Corinne Moini, “Protecting Privacy in the Era of Smart Toys: Does Hello Barbie Have a Duty to Report,” 25
Cath. U. J. L. & Tech 281(2017), 12-291.
177Hello Barbie FAQs, supra note 3, at 4-5.
178 Woodrow Hartzog, Unfair and Deceptive Robots, 74 MD. L. REV. 785, 787 (2015) (arguing that young
children might become attached to robots “acting autonomously” and “disclose secrets that they would not
tell their parents or teachers”).
179Corinne Moini, “Protecting Privacy in the Era of Smart Toys: Does Hello Barbie Have a Duty to Report,” 25
Cath. U. J. L. & Tech 281(2017). https://scholarship.richmond.edu/law-student-publications/157/

Existing privacy laws and common law tort duties fall short of providing directly relevant protection. For
example, while COPPA protects the privacy rights of minors under the age of thirteen, requiring companies to
obtain parental consent and to disclose what information is being collected about a minor, it imposes no
reporting requirements regarding suspected child abuse and neglect.180

Ultimately, most mechanisms for tackling these challenges have been designed by the corporations themselves.
For example, stamping out the spread of child sexual abuse material (CSAM) is a priority for big internet
companies and content moderators, but it is also a difficult and harrowing job for those on the front line: the
human moderators who must identify and remove abusive content. Google has released free AI software designed
to help these individuals.181

Most tech solutions in this domain work by checking images and videos against a catalog of previously identified
abusive material. (See, for example, PhotoDNA,182 a tool developed by Microsoft and deployed by companies like
Facebook and Twitter.) This sort of software, which matches content against a database of digital fingerprints,
is an effective way to stop people from sharing previously identified CSAM. But it cannot catch material that
has not already been marked as illegal. For that, human moderators have to step in and review content themselves.183
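The catalog-matching workflow can be illustrated with a minimal sketch. Production systems such as PhotoDNA use robust perceptual hashes that survive resizing and re-encoding; the exact-match SHA-256 stand-in below is an assumption made only to show the overall flow, and the placeholder byte strings are invented.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Stand-in for a perceptual hash. A cryptographic hash like SHA-256
    # only matches byte-identical files; real systems tolerate re-encoding.
    return hashlib.sha256(data).hexdigest()

# Catalog of fingerprints of previously identified material
# (illustrative placeholder values only).
known_hashes = {fingerprint(b"known-bad-file")}

def check_upload(data: bytes) -> str:
    """Return 'block' for catalog matches, 'review' for novel content."""
    if fingerprint(data) in known_hashes:
        return "block"   # known, previously identified content
    return "review"      # novel content: needs a human or classifier

print(check_upload(b"known-bad-file"))   # -> block
print(check_upload(b"new-unseen-file"))  # -> review
```

The sketch makes the text’s limitation concrete: anything not already in `known_hashes` falls through to the “review” path, which is exactly the workload human moderators carry.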

This is where Google’s new AI tool aims to help. Using the company’s expertise in machine vision, it assists
moderators by sorting flagged images and videos and “prioritizing the most likely CSAM content for review.” This
should allow for a much quicker review process. In one trial, Google says, the tool helped a moderator “take
action on 700 percent more CSAM content over the same time period.”184
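The prioritization step Google describes can be sketched as ordering a review queue by a classifier’s confidence score, so moderators see the most likely matches first. The item names and scores below are invented; a real pipeline would take scores from a trained machine-vision model.

```python
# Hypothetical classifier scores (probability an item is abusive material);
# in a real pipeline these would come from a trained model.
flagged = [
    {"item": "img_017", "score": 0.12},
    {"item": "img_042", "score": 0.97},
    {"item": "img_033", "score": 0.55},
]

def triage(items: list) -> list:
    """Order the review queue from most to least likely match."""
    return sorted(items, key=lambda x: x["score"], reverse=True)

queue = triage(flagged)
print([x["item"] for x in queue])  # -> ['img_042', 'img_033', 'img_017']
```

This is how a fixed pool of reviewer hours can yield more confirmed removals: the same people spend their time on the highest-probability items first.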

Fred Langford, deputy CEO of the Internet Watch Foundation185 (IWF), said the software would “help teams like
our own deploy our limited resources much more effectively.” “At the moment we just use purely humans to go
through content and say, ‘yes,’ ‘no,’” says Langford. “This will help with triaging.”

180“Children’s Online Privacy Protection Rule,” 16 C.F.R. 312.1 (2001). See also Corinne Moini, “Protecting
Privacy in the Era of Smart Toys: Does Hello Barbie Have a Duty to Report,” 25 Cath. U. J. L. & Tech
281(2017). https://scholarship.richmond.edu/law-student-publications/157/
181 James Vincent, “Google Releases Free AI Tool to Help Companies Identify Child Sexual Abuse Material,” The
Verge, https://www.theverge.com/2018/9/3/17814188/googleai-child-sex-abuse-material-moderation-tool-
internetwatch-foundation, September 03, 2018. See also Nikola Todorovic and Abhi Chaudhuri, Using AI to
help organizations detect and report child sexual abuse material online, GOOGLE IN EUROPE, (Sept. 3,
2018) https://www.blog.google/around-the-globe/google-europe/using-ai-help-organizations-detect-and-
report-child-sexual-abuse-material-online/
182https://www.microsoft.com/en-us/photodna In 2009, Microsoft partnered with Dartmouth College to develop
PhotoDNA, a technology that aids in finding and removing known images of child exploitation. Today,
PhotoDNA is used by organizations around the world and has assisted in the detection, disruption, and
reporting of millions of child exploitation images. Id.
183Id.
184Id.
185https://www.iwf.org.uk/ (“Our vision is to eliminate child sexual abuse imagery online.”) (““IWF is one of the
most active and effective European hotlines fighting against child sexual exploitation. The work developed
by IWF in the process of notice and takedown, in close cooperation with Law Enforcement, is an example to
follow. “IWF’s contribution to the Strategic Assessment on Commercial Sexual Exploitation of Children
Online, produced by Europol in the frame of the European Financial Coalition against Commercial Sexual
Exploitation of Children, has been outstanding. The analytical findings shared by IWF and the work
developed through initiatives like the Website Brands Project have been an invaluable source of information
for the Law Enforcement community. “Europol will continue cooperating actively with IWF to achieve our
common goals: eradicate the production and dissemination of child abuse material through the internet. The
dedication and commitment from the IWF team is outstanding." Troels Oerting, Former Head of EC3,
European Cybercrime Centre) Id.

The IWF is one of the largest organizations dedicated to stopping the spread of CSAM online. It’s based in the
UK but funded by contributions from big international tech companies, including Google. It employs teams of
human moderators to identify abuse imagery, and operates tip-lines in more than a dozen countries for internet
users to report suspect material. It also carries out its own investigative operations, identifying sites where
CSAM is shared and working with law enforcement to shut them down.186 Langford says that because of the
nature of “fantastical claims made about AI,” the IWF will be testing out Google’s new AI tool thoroughly to see
how it performs and fits with moderators’ workflow. He added that tools like this were a step towards fully
automated systems that can identify previously unseen material without human interaction at all. “That sort of
classifier is a bit like the Holy Grail in our arena.” But, he added, such tools should only be trusted with “clear
cut” cases to avoid letting abusive material slip through the net. “A few years ago I would have said that sort of
classifier was five, six years away,” says Langford. “But now I think we’re only one or two years away from
creating something that is fully automated in some cases.” 187

In the case of Hello Barbie, ToyTalk has created automatic responses for serious conversations, such as those
about bullying or abuse. These responses include “that sounds like something you should talk to a grown-up
about.”188 While an important step toward addressing the issue, this approach potentially pushes the
responsibility for acting onto the parents or the child herself. It is unclear how many children would act on
such a response and report problems to a grown-up, or what it means for children when an adult in their own
household is the one perpetrating the harm.

Modern technological tools can be used to design “child-safe” toys that prevent users from harming themselves
and others. AI programming can target moral as well as physical harms. The iPhone, for example, was designed to
allow Apple to remove applications from users’ devices, a capability it used to excise sexually suggestive
applications in early 2010. Government may increasingly take advantage of this possibility: in an increasingly
digital world, the government could manipulate technological design to make it difficult or impossible to break
laws using digital devices.189

Artificial Intelligence in Education (AIEd) and EdTech: Children’s Rights and Education

The introduction of artificial intelligence in education (“AIEd”) will have a profound impact on the lives of
children and young people. Different types of artificial intelligence systems are already in common use in
education, alongside the growth of commercial knowledge monopolies. Data privacy issues for children and young
people are becoming more and more pronounced. Achieving a balance among fairness, individual pedagogic rights,
data privacy rights, and effective use of data is a difficult challenge, and one not easily supported by current
regulation; many continue to search for democratically aware and responsible uses of artificial intelligence in
schools.190

The role and function of education cannot be overstated. Education enhances and develops human abilities,
consciousness, identity, integrity, potential, and autonomy. 191 AI can be welcome as a complement to
educational processes, but its design must be focused on the rights of its child users.

186Id.
187Id.
188Mattel, “Hello Barbie Frequently Asked Questions.”
189Rosenthal, Daniel M., Assessing Digital Preemption (and the Future of Law Enforcement?) (January 5, 2011).
New Criminal Law Review, Fall 2011, Available at SSRN: https://ssrn.com/abstract=1735479
190Leaton Gray, S; (2020) Artificial intelligence in schools: Towards a democratic future. London Review of
Education , 18 (2) pp. 163-177. 10.14324/lre.18.2.02.
191Lee, Jootaek, The Human Right to Education: Definition, Research and Annotated Bibliography (November
18, 2019). Emory International Law Review, Vol. 34, No. 3, 2019, Available at SSRN:
https://ssrn.com/abstract=3489328

Because the right to education has been recognized as a human right and defined in various human rights
instruments in various contexts, this right can be asserted against states and their agencies. 192 UDHR Article
26(1) states that “everyone has the right to education.” 193 This implies that every human, not just the young,
has the right. Article 18 of the International Covenant on Civil and Political Rights (ICCPR) and Article 12 of the
International Convention on the Protection of the Rights of All Migrant Workers and Members of Their Families
(MWC) protect parents’ right to control the religious and moral education of their children. 194 Under Article 13(1)
of the International Covenant on Economic, Social and Cultural Rights (ICESCR), state parties recognize the right
of everyone to education.195Article 28(1) of the Convention on the Rights of the Child (CRC) also recognizes the
right of the child to education as a progressive right. 196 The United Nations Educational, Scientific, and Cultural
Organization (UNESCO)’s Convention Against Discrimination in Education also prohibits discrimination in terms of
access to education, the standard and quality of education, and the conditions under which education is given.197
Article 5(v) of the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD)
urges states not to discriminate racially in their citizens’ enjoyment of the right to education and training.198 Article
10 of the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW) recognizes
women’s equal rights to education.199

Article 24 of the Convention on the Rights of Persons with Disabilities (CRPD) recognizes the right of persons
with disabilities to education. 200 The MWC recognizes migrant workers’ and their children’s right of access to
education.201 The Convention Relating to the Status of Refugees (1951 Refugee Convention) also recognizes
refugees’ equal right to elementary education and most-favored treatment with respect to other forms of
education.202 In 2007, the
General Assembly overwhelmingly adopted the United Nations Declaration on the Rights of Indigenous Peoples
(UNDRIP), 203 which includes the right to education. 204Within several years, the four nations in opposition—the

192Krajewski, Markus, The State Duty to Protect Against Human Rights Violations Through Transnational
Business Activities (December 3, 2018). Deakin Law Review, Vol. 23, 2018, Available at SSRN:
https://ssrn.com/abstract=3295305 or http://dx.doi.org/10.2139/ssrn.3295305
193Universal Declaration of Human Rights (Dec. 10, 1948), art. 26 [hereinafter UDHR].
194International Covenant on Civil and Political Rights art. 18, opened for signature Dec. 19, 1966, 999 U.N.T.S.
171; International Convention on the Protection of the Rights of All Migrant Workers and Members of Their
Families art. 12, Dec. 18, 1990, 2220 U.N.T.S. 3 [hereinafter MWC].
195International Covenant on Economic, Social and Cultural Rights art 13(1), opened for signature Dec. 19,
1966, 993 U.N.T.S. 3.
196Convention on the Rights of the Child art. 28(1), opened for signature Nov. 20, 1989, 1577 U.N.T.S. 3.
197Convention Against Discrimination in Education arts. 1–3, Dec. 14, 1960, 429 U.N.T.S. 93 (entered into force
May 22, 1962).
198 International Convention on the Elimination of All Forms of Racial Discrimination art. 5(v), opened for
signature Mar. 7, 1966, 660 U.N.T.S. 195.
199Convention on the Elimination of All Forms of Discrimination Against Women art. 10, opened for signature
Mar. 1, 1980, 1249 U.N.T.S. 13.
200Convention on the Rights of Persons with Disabilities art. 24, opened for signature Mar. 30, 2007, 2515
U.N.T.S. 3.
201International Convention on the Protection of the Rights of All Migrant Workers and Members of Their
Families art. 12, Dec. 18, 1990, 2220 U.N.T.S. 3 arts. 30, 43(1)(a).
202Convention Relating to the Status of Refugees art. 22, July 28, 1951, 189 U.N.T.S. 137.
203G.A. Res. 61/295, ¶ 12, U.N. Doc. A/RES/61/295 (Sept. 13, 2007) [hereinafter UNDRIP]; see also WALTER
R. ECHO-HAWK, IN THE LIGHT OF JUSTICE: THE RISE OF HUMAN RIGHTS IN NATIVE AMERICA AND THE
UN DECLARATION ON THE RIGHTS OF INDIGENOUS PEOPLES 3 (2013) (describing the UNDRIP as “a
landmark event that promises to shape humanity in the post-colonial age”). See also , see, e.g., Lorie M.
Graham & Siegfried Wiessner, Indigenous Sovereignty, Culture, and International Human Rights Law, 110 S.
ATLANTIC Q. 403, 405 (2011) (analyzing the recognition of provisions of the UNDRIP as customary
international law).
204UNDRIP, supra. The version of the Declaration presented to the General Assembly affirmed that indigenous
peoples have the right to full enjoyment, “as a collective or as individuals,” of all human rights recognized by
the U.N. Charter, Universal Declaration on Human Rights, and international human rights law. It retained the
language from early drafts on “indigenous peoples” and “self-determination,” as well as rights to traditional

32
Electronic copy available at: https://ssrn.com/abstract=3882296
The four states that initially voted against the Declaration (the United States, Canada, New Zealand, and
Australia) have all since reversed their positions. 205 UNDRIP acknowledges rights common to humanity,
such as nondiscrimination, equality, and property, as well as contexts for the enjoyment of those rights that
may appear more particular to indigenous peoples, such as spiritual attachment to traditional lands and a
focus on community rights. 206

As the world moves deeper into the digital space, there is a growing need to examine the impact of novel
digital technologies on children’s right to education. Conceptualizing education as a human right requires
attention to every element of the United Nations’ 4A framework 207 (accessibility, adaptability, acceptability
and availability), and in particular to the accessibility and adaptability of school environments, beyond
merely their acceptability and availability. New technologies have affected all four criteria, as the education
sector continues to capitalize on emerging opportunities. 208

Depending on their connectivity and resources, countries across the world have adopted differing ICT
infrastructure to support remote learning. Alongside digital platforms, social media, radio and television have
all been used to ensure continuity of education in all corners of the world. Nevertheless, this transition to
digital learning has amplified societal inequities, as children living in remote locations with little or no internet
connection struggle to access online services. 209 Though technology is designed to connect people by reaching
frequently excluded areas, only a few educational systems around the world were able to respond adequately to
the challenges of the COVID-19 pandemic. 210 This accessibility gap must be addressed.

lands, economic development, education, family and child welfare, self-government, culture, religion,
expression, and others. Key provisions call for states to obtain “free, prior and informed consent before
adopting and implementing legislative or administrative measures” affecting indigenous peoples.
205Carpenter, Kristen A. and Riley, Angela, Indigenous Peoples and the Jurisgenerative Moment in Human
Rights (February 18, 2013). California Law Review, Vol. 102, 2014, 192 Available at SSRN:
https://ssrn.com/abstract=2220573
206See Julian Burger, The UN Declaration on the Rights of Indigenous Peoples: From Advocacy to
Implementation, in Stephen Allen and Alexandra Xanthaki, eds., Reflections on the UN Declaration on the
Rights of Indigenous Peoples at 41, 42–43 (“[The Declaration] responds to the real-life problems that
threaten the existence of indigenous peoples as identified by indigenous peoples themselves. One of the
remarkable features of the Working Group . . . was that the rights proposed were garnered from specific
experiences, expressed in the language of the elder, community leader, woman or youth activist. How else
could the recognition of indigenous peoples’ spiritual relationship with their lands be included in an
international human rights instrument, if not through countless stories of this non-materialist and
harmonious bond between humankind and nature?”).
207https://sdgs.un.org/goals/goal4
208Vanessa Cezarita Cordeiro, Educational technology (EdTech) and children’s right to privacy, Humanium,
(June 15, 2021) https://www.humanium.org/en/educational-technology-edtech-and-childrens-right-to-
privacy/ Available at: https://aberta.org.br/educacao-dados-e-plataformas/ See also UNESCO, SDG4,
Education https://en.unesco.org/gem-report/sdg-goal-4 In September 2015, at the United Nations
Sustainable Development Summit, Member States formally adopted the 2030 Agenda for Sustainable
Development in New York. The agenda contains 17 goals including a new global education goal (SDG 4).
SDG 4 is to ‘ensure inclusive and equitable quality education and promote lifelong learning opportunities for
all’ and has seven targets and three means of implementation.
consultative process led by Member-States, but with broad participation from civil society, teachers, unions,
bilateral agencies, regional organizations, the private sector and research institutes and foundations.
209 Human Rights Watch, COVID-19 and Children’s Rights, (April 9, 2020)
https://www.hrw.org/news/2020/04/09/covid-19-and-childrens-rights#_Toc37256528
210 Mercedes Mateo Diaz and Changha Lee, A Silent Revolution, in What Technology Can and Can’t Do for
Education - A comparison of 5 stories of success, Inter-American Development Bank, (2020)
https://publications.iadb.org/publications/english/document/What-Technology-Can-and-Cant-Do-for-
Education-A-Comparison-of-5-Stories-of-Success.pdf

What is AIEd and EdTech?

AIEd is the latest innovation in educational technology, also known as EdTech, typically defined as the sector of
technology dedicated to the development and application of tools for educational purposes. The introduction of
these technologies poses numerous challenges to children’s right to privacy.

Education and child development

As has been shown, the digital environment shapes children’s development in differing ways. 211 Technology
permeates most areas of children’s day-to-day lives, creating opportunities for greater learning, communication
and development, as well as new risks to children’s realization of their human rights. In the educational arena,
technology has provided new mediums for sharing and communicating information, connecting school
communities beyond the classroom, and tailoring the delivery of education to individual children, among other
innovations.212 However, with these developments come new challenges.

AIEd and Children’s Privacy

Tools and software utilized in classrooms to enhance learning experiences are quickly evolving. From the use of
advanced emotional AI and facial recognition, down to the simple migration of educational material onto online
shared platforms, children’s learning experiences are quickly becoming intertwined with technology. All of these
tools designed to support and facilitate children’s education are considered EdTech, and their emergence has
presented new challenges for both children and tech implementers. As described by the Council of Europe,
EdTech is often “deployed without various actors always being aware of the challenges to children’s private life
and personal data protection”.213

Numerous publications report problematic issues with EdTech which result in the collection and processing of
personal data from children without guaranteeing their best interest just for the purpose of commercial
exploitation of children.214

In the rush to implement new technologies, educational regulators have failed to ensure child data is adequately
protected. Children’s educational data is “far less protected” than health data, and a large number of countries
do not have data privacy laws which explicitly protect children. Without proper regulation, sensitive information

211Council of Europe. (2020, November 20). Consultative committee of the convention for the protection of
individuals with regard to automatic processing of personal data. ‘Children’s data protection in an education
setting guidelines.’ ; https://rm.coe.int/t-pd-2019-6bisrev5-eng-guidelines-education-setting-plenary-clean-
2790/1680a07f2b See also Council of Europe. (2020, November 27). ‘Protect children’s personal data in an
education setting.’ https://www.coe.int/en/web/data-protection/-/protect-children-s-personal-data-in-
education-setting- and Jen Persson, Director of defenddigitalme, Children’s Data Protection in Education
Systems: Challenges and Possible Remedies, (November 15, 2019) 1680a01b47 (coe.int)
212Council of Europe. (2020, November 20). Consultative committee of the convention for the protection of
individuals with regard to automatic processing of personal data. ‘Children’s data protection in an education
setting guidelines.’
213Council of Europe. (2020, November 27). ‘Protect children’s personal data in an education setting.’
214Vanessa Cezarita Cordeiro, Educational technology (EdTech) and children’s right to privacy, Humanium,
(June 15, 2021) https://www.humanium.org/en/educational-technology-edtech-and-childrens-right-to-
privacy/ See also The General Data Protection Regulation, requires that personal data must be “processed
lawfully, fairly and in a transparent manner in relation to the data subject” GDPR Article 5(1)(a). See also
Jones, Meg and Kaminski, Margot E., An American's Guide to the GDPR (June 5, 2020). Denver Law Review,
Vol. 98, No. 1, p. 93, 2021, U of Colorado Law Legal Studies Research Paper No. 20-33, Available at SSRN:
https://ssrn.com/abstract=3620198 See generally Data protection starts with a ban: one cannot process
personal data unless a lawful condition applies GABRIELA ZANFIR-FORTUNA & TERESA TROESTER-FALK,
FUTURE OF PRIV. F. AND NYMITY, PROCESSING PERSONAL DATA ON THE BASIS OF LEGITIMATE
INTERESTS UNDER THE GDPR: PRACTICAL CASES 3–4 (2018)
https://www.ejtn.eu/PageFiles/17861/Deciphering_Legitimate_Interests_Under_the_GDPR%20(1).pdf

about children – such as their names, addresses and behaviors – is open to exploitation. 215 In 2020, numerous
popular distance learning platforms drew criticism over their collection, sharing and management of child data. 216

Research from the eQuality Project217 lists some of the most pressing concerns around the use of EdTech:
tracking of student activity in and outside the classroom, discrimination against children from marginalized
communities, breaches of child data protection and autonomy, and the sale of child data to private third parties
such as advertising companies.218 These concerns can only be overcome if educators are mindful of the terms
and conditions of the software in use, whether or not it was designed for educational purposes (for example,
videoconferencing applications such as Zoom or Skype).

Even technologies designed for other purposes but used as educational tools call for greater attention to
their data protection policies and constraints. Zoom’s recent policies, for example, stated that the data collected
from students included their names, schools, devices and internet connections, and details about the content
viewed by children and their communications with others via those devices. Notably, consent to Zoom’s policies is
given by the “school subscriber”, rather than by a child or their guardian, rendering the policy inconsistent with
children’s right under the CRC to participate in decisions affecting them. 219

COVID-19 and AIEd

COVID-19 has greatly exacerbated pre-existing EdTech risks. Overnight, education was forced to depend on
technology, rather than simply use it to enable new teaching methods. During the spring of 2020 alone,
schools were closed in 192 countries.220 UNESCO estimates that 91% of the world’s student population was
out of school in April 2020.221 This vaulted EdTech from an emerging phenomenon to a virtual necessity, one
of the core mediums for the delivery of education. The shift has been described as the “biggest distance
learning experiment in history”, 222 bringing us closer to what The Economist has dubbed “the coronopticon”:
a brave new age of surveillance and data control catalyzed by

215Hye Jung Han, As schools close over coronavirus, protect kids’ privacy in online learning, Human Rights
Watch, (March 27, 2020) https://www.hrw.org/news/2020/03/27/schools-close-over-coronavirus-protect-
kids-privacy-online-learning#
216Hye Jung Han, As schools close over coronavirus, protect kids’ privacy in online learning, Human Rights
Watch, (March 27, 2020) https://www.hrw.org/news/2020/03/27/schools-close-over-coronavirus-protect-
kids-privacy-online-learning#
217http://www.equalityproject.ca/
218Jane Bailey, Jacquelyn Burkell, Priscilla Regan, and Valerie Steeves, ‘Children’s privacy is at risk with rapid
shifts to online schooling under coronavirus.’ The Conversation, (April 12, 2020)
https://theconversation.com/childrens-privacy-is-at-risk-with-rapid-shifts-to-online-schooling-under-
coronavirus-135787
219Jane Bailey, Jacquelyn Burkell, Priscilla Regan, and Valerie Steeves, ‘Children’s privacy is at risk with rapid
shifts to online schooling under coronavirus.’ The Conversation, (April 12, 2020)
https://theconversation.com/childrens-privacy-is-at-risk-with-rapid-shifts-to-online-schooling-under-
coronavirus-135787
220 Mercedes Mateo Diaz and Changha Lee, A Silent Revolution, in What Technology Can and Can’t Do for
Education - A comparison of 5 stories of success, Inter-American Development Bank, (2020)
https://publications.iadb.org/publications/english/document/What-Technology-Can-and-Cant-Do-for-
Education-A-Comparison-of-5-Stories-of-Success.pdf
221Human Rights Watch. (2020, April 9). ‘COVID-19 and Children’s Rights’.
222Mercedes Mateo Diaz and Changha Lee, A Silent Revolution, in What Technology Can and Can’t Do for
Education - A comparison of 5 stories of success, Inter-American Development Bank, (2020)
https://publications.iadb.org/publications/english/document/What-Technology-Can-and-Cant-Do-for-
Education-A-Comparison-of-5-Stories-of-Success.pdf

hasty tech decisions under COVID-19.223 Organizations such as Media Smarts,224 Common Sense Media,225 the
Consortium for School Networking,226 and the Future of Privacy Forum227 have all updated their websites to
provide information on the privacy and data protection practices of EdTech products and services. The father of
two elementary school girls has sued Google for alleged privacy violations under the Illinois Biometric
Information Privacy Act (BIPA).228

Policymakers should support teachers, administrators and school boards in insisting that EdTech companies
default to privacy-respecting practices. Educational policymakers must also provide guidance and up-to-date
instruction on the use of EdTech to better protect children’s data. In 2001, the UN Committee on the Rights of
the Child affirmed that “children do not lose their human rights by virtue of passing through the school
gates”.229 The majority of EdTech is developed and created by commercial actors, with scant regard for
children’s vulnerability and their inability to police and protect their own digital footprint. As technologies evolve
to analyze more of children’s behavior and further personalize learning experiences, there is a pressing need for
regulation to ensure EdTech is inclusive, mindful and complementary to children’s development.

United Nations General Comment No. 16 of 2013 calls on countries to ensure that private enterprises are not
awarded public procurement contracts if they fail to respect children’s rights. 230 In the European context, the
Council of Europe has issued guidelines calling on States to adhere to the Convention for the Protection of
Individuals with regard to Automatic Processing of Personal Data, 231 specifically by realizing these rights in the
context of children.232

The 2019 Beijing Consensus on Artificial Intelligence and Education "reaffirms a humanistic approach to
deploying Artificial Intelligent technologies in education for augmenting human intelligence, protecting human
rights and for promoting sustainable development through effective human-machine collaboration in life,
learning and work." Its recommendations are in five areas: (i) AI for education management and delivery; (ii)
AI to empower teaching and teachers; (iii) AI for learning and learning assessment; (iv) Development of values
and skills for life and work in the AI era; and (v) AI for offering lifelong learning opportunities for all. 233

223The Economist, Creating the Coronopticon, Countries Are Using Apps and Data Networks to Keep Tabs on
The Pandemic, and Also, in the Process, Their Citizens, (March 26, 2020)
https://www.economist.com/briefing/2020/03/26/countries-are-using-apps-and-data-networks-to-keep-tabs-
on-the-pandemic
224https://mediasmarts.ca/
225https://www.commonsense.org/education/
226https://www.cosn.org/
227https://studentprivacypledge.org/
228See Nieva, R. (2020, April 3). ‘Two children sue Google for allegedly collecting students’ biometric data’,
https://www.cnet.com/news/two-children-sue-google-for-allegedly-collecting-students-biometric-data/
Google G Suite for Education Collects Children’s Biometrics BIPA Class Action,
https://classactionsreporter.com/google-g-suite-for-education-collects-childrens-biometrics-bipa-class-action/
and Farwell v. Google, LLC - Join Class Action Lawsuits https://www.classaction.org/media/farwell-v-
google-llc.pdf
229United Nations Committee on the Rights of the Child. (2001, April 17). ‘General Comment No. 1 Article
29(1): The aims of education’. CRC/GC/2001/1.
230United Nations Committee on the Rights of the Child. (2013, April 17). ‘General Comment No. 16 on State
obligations regarding the impact of the business sector on children’s rights.’ CRC/C/GC/16.
231https://rm.coe.int/1680078b37
232Council of Europe. (2020, November 20). Consultative committee of the convention for the protection of
individuals with regard to automatic processing of personal data. ‘Children’s data protection in an education
setting guidelines.’ https://rm.coe.int/t-pd-2019-6bisrev5-eng-guidelines-education-setting-plenary-clean-
2790/1680a07f2b
233UNESCO, Beijing Consensus on Artificial Intelligence and Education, (June 25, 2019) Available at:
https://unesdoc.unesco.org/ark:/48223/pf0000368303 (UNESCO has published the Beijing Consensus on
Artificial Intelligence (AI) and Education, the first ever document to offer guidance and recommendations on
how best to harness AI technologies for achieving the Education 2030 Agenda. It was adopted during the

Because ethical AIEd is a global challenge that spreads across borders, it must also be addressed globally,
guided by ethics and human rights considerations and cognizant of the complexities of childhood.

Four forces, acting together and separately, will shape the regulation of AI: the law; the design of AI systems;
market regulation; and ethics and principles. The basis of regulation will likely rest on the ethical values of
explainability, accountability and transparency. 234

AIEd Ethics by Design

Ethics by design235 will continue to gain strength as a consideration throughout the development and use of AI
systems, including systems designed for use by children and youth. With respect to children, the Children’s
Rights by Design (“CRbD”) standard236 for AI systems offers a useful safeguard against data-driven AIEd
business models that could exploit or otherwise harm children.

One application of unethically designed AIEd that aroused public protest during COVID-19 was the use of a
biased algorithm to grade students. Owing to the COVID-19 pandemic, all secondary education examinations
due to be held in the United Kingdom in 2020 were cancelled. As a result, an alternative method had to be
designed and implemented at short notice to determine the qualification grades awarded to students that year. A grade

International Conference on Artificial Intelligence and Education, held in Beijing from 16 – 18 May 2019, by
over 50 government ministers, international representatives from over 105 Member States and almost 100
representatives from UN agencies, academic institutions, civil society and the private sector. The Beijing
Consensus comes after the Qingdao Declaration of 2015, in which UNESCO Member States committed to
efficiently harness emerging technologies for the achievement of SDG 4.)
234Fjeld, Jessica and Achten, Nele and Hilligoss, Hannah and Nagy, Adam and Srikumar, Madhulika, Principled
Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI
(January 15, 2020). Berkman Klein Center Research Publication No. 2020-1, Available at SSRN:
https://ssrn.com/abstract=3518482 or http://dx.doi.org/10.2139/ssrn.3518482
235Floridi, Luciano; Cowls, Josh; Beltrametti, Monica; Chatila, Raja; Chazerand, Patrice; Dignum, Virginia;
Luetge, Christoph; Madelin, Robert; Pagallo, Ugo; Rossi, Francesca; Schafer, Burkhard; Valcke, Peggy;
Vayena, Effy. AI4People-An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles,
and Recommendations. Minds and Machines, 2018. Available in https://doi.org/10.1007/s11023-018-9482-5
and https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead1e.pdf?
utm_medium=PR&utm_source=Web&utm_campaign=EAD1e&utm_content=geias&utm_term=undefined
(checked in 14.10.2020).
236The CRbD standard for AI could be translated into the following specific recommendations for actors who
govern, develop and provide products and services with AI that impact children, directly or indirectly: 1)
Integrate the Convention on the Rights of the Child provisions into all appropriate corporate policies and
management processes; 2) Use an interdisciplinary perspective to achieve the best interests of the child; 3)
Universal adoption of the best technology and policy available; 4) Due diligence of policies and community
standards; 5) Data minimization; 6) Children's full ownership of their data; 7) Commercial-free digital
spaces; 8) Promotion of meaningful and non-monetizable experiences; 9) Nudge techniques in the best
interest of the child; 10) Safety standards; 11) Default high-privacy settings; 12) Parental controls and
mediation (children should have age appropriate and transparent information about how it works and how it
affects their privacy); 13) Right to use, play and participate without data collection (options free from children's
data processing); 14) Promotion of children's right to disconnect; 15) Adoption of Children's Data Protection
Impact Assessments; 16) Non-detrimental use of data (processing children's data should be always in their
best interests); 17) Transparency, accessibility and legibility of terms of use and privacy policies; and 18) No
data sharing. Hartung, Pedro. The Children's rights-by-design (CRbD) standard for data use by tech
companies. Unicef Data Governance Working Group, 2020.
https://www.unicef.org/globalinsight/media/1286/file/%20UNICEF-Global-Insight-DataGov-data-use-brief-
2020.pdf Additionally, for all automated decisions with AI it is important to guarantee the AI system’s
explicability and accountability, explaining how they protect and promote children's rights.

standardization algorithm was produced in June 2020 by the regulator Ofqual in England. The A-Level grades
were announced in England, Wales and Northern Ireland on August 13, 2020, and their release provoked a
public outcry. Particular criticism was directed at the disparate effect of the grading algorithm, which
downgraded the results of students who attended state schools while upgrading the results of pupils at privately
funded independent schools, thus disadvantaging pupils from lower socio-economic backgrounds, in part
because of the algorithm's behavior around small cohort sizes.
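The reported mechanism, ranking students within each school and mapping them onto the school's historical grade distribution, with small cohorts falling back to teacher predictions, can be illustrated with a deliberately simplified sketch. The function, the cohort threshold, and the grade shares below are hypothetical; this is not Ofqual's actual model:

```python
from typing import Dict, List

def standardize_grades(ranked_students: List[str],
                       historical_distribution: Dict[str, float],
                       teacher_grades: Dict[str, str],
                       small_cohort_threshold: int = 15) -> Dict[str, str]:
    """Toy distribution-based standardization (hypothetical, for illustration).

    Students arrive ranked best-to-worst by their teachers; grades are
    then assigned so the cohort matches the school's historical grade
    shares. Small cohorts fall back to teacher-predicted grades, the
    behavior reported to have favored small (often independent-school)
    classes.
    """
    n = len(ranked_students)
    if n < small_cohort_threshold:
        # Too few students for reliable statistics: trust the teachers.
        return {s: teacher_grades[s] for s in ranked_students}

    results: Dict[str, str] = {}
    i = 0
    for grade in ["A*", "A", "B", "C", "D", "E", "U"]:  # best to worst
        count = round(historical_distribution.get(grade, 0.0) * n)
        for student in ranked_students[i:i + count]:
            results[student] = grade
        i += count
    for student in ranked_students[i:]:  # rounding leftovers
        results[student] = "U"
    return results
```

Because grades are forced to match last year's distribution, an unusually strong state-school cohort is downgraded wholesale, while a ten-student independent-school class bypasses standardization entirely; this mirrors the disparate impact described above.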

Students and teachers felt cheated and upset by the controversial algorithmic grades and protested against
them, with many demanding that Prime Minister Boris Johnson and his government take immediate action. In a
tone-deaf response to the public outcry, Secretary of State for Education Gavin Williamson said that the grading
system was here to stay, and Boris Johnson stated that the results were "robust and dependable".

Legal action, in the form of judicial review, was initiated by multiple students and legal advocacy organizations
such as the Good Law Project. 237 Finally, on August 17, 2020, Ofqual and Secretary of State for Education Gavin
Williamson agreed that grades would be reissued using unmoderated teacher predictions. 238

AIEd Applications-Connecting AI with EdTech

Today, both startups and established EdTech companies seek to integrate AI into marketable products. In some
cases, AI performs functions independently of teachers, while in others it augments teaching capabilities. 239
Applications of AI based education technology include the following:

Tutoring. AI programs commonly referred to as Intelligent Tutoring Systems (ITS) or adaptive tutors
engage students in dialogue, answer questions, and provide feedback.

Personalizing Learning. ITS and adaptive tutors tailor learning material, pace, sequence, and difficulty to
each student’s needs. AI can also provide support for special needs students, for instance by teaching autistic
children to identify facial expressions.

Testing. Computer adaptive assessments adjust the difficulty of successive questions based on the accuracy of
the student’s answers, enabling more precise identification of a student’s mastery level.

Automating Tasks. AI can perform routine tasks such as taking attendance, grading assignments, and
generating test questions.
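The adaptive-assessment loop described under Testing can be sketched as a simple difficulty "staircase". This is an illustration only: commercial systems estimate ability with psychometric models rather than this fixed rule, and the function and parameter names here are hypothetical:

```python
def run_adaptive_test(questions_by_difficulty, answer_fn, n_items=10):
    """Step difficulty up after a correct answer, down after an incorrect one.

    questions_by_difficulty: list of question pools, easiest first.
    answer_fn(question) -> True if the student answers correctly.
    Returns the (difficulty level, correct?) history of the session.
    """
    level = len(questions_by_difficulty) // 2  # start at mid-difficulty
    history = []
    for _ in range(n_items):
        question = questions_by_difficulty[level][0]
        correct = answer_fn(question)
        history.append((level, correct))
        if correct:
            level = min(level + 1, len(questions_by_difficulty) - 1)
        else:
            level = max(level - 1, 0)
    return history
```

A student who keeps answering correctly is pushed quickly toward the hardest pool, so fewer items are needed to locate their mastery level than in a fixed-form test.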

Thus, AI-based tools have three general orientations in terms of their use in schools: learner-facing, teacher-
facing and system-facing.240

237 Good Law Project, Legal action over A-Level results fiasco, https://goodlawproject.org/news/a-level-results-
fiasco/
238Adam Satariano 'British Grading Debacle Shows Pitfalls of Automating Government', New York Times, Aug.
2020. Available at https://www.nytimes.com/2020/08/20/world/europe/uk-england-grading-algorithm.html;
WILL BEDINGFIELD, Everything that went wrong with the botched A-Levels algorithm: flawed assumptions
about data led to the problems impacting hundreds of thousands of students, WIRED, (Aug. 18, 2020)
https://www.wired.co.uk/article/alevel-exam-algorithm (On March 18, the government announced that, like
so many annual institutions that have fallen victim to Covid-19, this summer’s exams would be cancelled. In
the exams’ place, the Office of Qualifications and Examinations Regulation (Ofqual) asked teachers to predict
the grades each of their students would have achieved.) See also, Jon Porter, UK ditches exam results
generated by biased algorithm after student protests, The Verge, (August 17, 2020)
https://www.theverge.com/2020/8/17/21372045/uk-a-level-results-algorithm-biased-coronavirus-covid-19-
pandemic-university-applications WIKIPEDIA, 2020 UK GCSE and A-Level Grading Controversy,
https://en.wikipedia.org/wiki/2020_UK_GCSE_and_A-Level_grading_controversy#cite_note-24
239Congressional Research Service Report, Artificial Intelligence (AI) and Education (August 1, 2018)
https://crsreports.congress.gov/product/pdf/IF/IF10937
240Nandra Anissa, Toby Baker and Laurie Smith, “Educ-AI-tion Rebooted?,” Nesta, https://www.nesta.org.uk/report/education-rebooted/

Adaptive learning systems that are learner-facing employ algorithms, assessments, student feedback and
various media to deliver material tailored to each student’s needs and progress. 241

For example, AI may be used to enhance social skills, especially for children with special needs. One company
that employs AI for this purpose is Brain Power, which addresses autism through a wearable computer. 242 AI is
also deployed to help high school students build career skills, including through language-learning applications.
Duolingo243 is one such application, giving students personalized feedback in over 300,000 classrooms around
the globe.244

Under the teacher-facing category, AI helps teachers with administrative tasks such as grading papers and
detecting cheating. For example, the Human-Computer Interaction Institute at Carnegie Mellon University is
partnering with the startup Lumilo245 to build an AI augmented-reality assistant that keeps teachers in the loop
as students work on their assignments. 246

When used in classrooms, personalized learning software allows students to work at their own pace, while
freeing up the teacher to spend more time working one-on-one with students. Yet such personalized classrooms
also pose unique challenges for teachers, who are tasked with monitoring classes working on divergent
activities, and prioritizing help-giving in the face of limited time. 247

Intelligent tutoring systems are a class of advanced learning technologies that provide students with step-by-
step guidance during complex problem-solving practice and other learning activities.
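A minimal sketch of that step-by-step guidance might compare each student step against an expected solution path and respond with feedback or a hint. Real intelligent tutoring systems use production-rule or constraint-based student models; the fixed solution path and function name below are assumptions for illustration:

```python
def tutor_step(expected_steps, student_step, step_index, hints):
    """Inner loop of a toy tutor: check one solution step and respond.

    expected_steps: ordered list of correct steps for this problem.
    hints: optional hint text keyed by step index.
    """
    if student_step == expected_steps[step_index]:
        return {"correct": True, "feedback": "Correct, continue."}
    return {"correct": False,
            "feedback": "Not quite.",
            "hint": hints.get(step_index, "Re-read the problem.")}
```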

AI companies connect their EdTech products using client- or server-side software development kits (SDKs),
which then analyze their users’ data in real time. Data streams from a variety of learning contexts can be
aggregated to create in-depth psychometric profiles (learner models) of the interactions, preferences, and
achievements of each individual student. The system then uses item response theory, a psychometric
framework, to determine the student's next challenge, instructional material, or optimal activity, which is then
delivered to the student via the partner's EdTech product. The AI system also provides personalized information
and recommendations to teachers and parents on the best ways they can help individual students. 248
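The item-response-theory step mentioned above can be sketched with the one-parameter (Rasch) model, picking the unanswered item that is most informative at the current ability estimate. This is a generic textbook formulation, not any vendor's proprietary model, and the ability-update rule below is a crude simplification:

```python
import math
from typing import Dict, Set

def rasch_probability(ability: float, difficulty: float) -> float:
    """P(correct answer) under the one-parameter (Rasch) IRT model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def pick_next_item(ability: float, item_bank: Dict[str, float],
                   answered: Set[str]) -> str:
    """Choose the unanswered item with maximum Fisher information:
    info = p * (1 - p), which peaks when difficulty matches ability."""
    best, best_info = None, -1.0
    for item, difficulty in item_bank.items():
        if item in answered:
            continue
        p = rasch_probability(ability, difficulty)
        if p * (1 - p) > best_info:
            best, best_info = item, p * (1 - p)
    return best

def update_ability(ability: float, difficulty: float,
                   correct: bool, step: float = 0.5) -> float:
    """Nudge the ability estimate toward observed performance
    (real systems use maximum-likelihood or Bayesian estimation)."""
    p = rasch_probability(ability, difficulty)
    return ability + step * ((1.0 if correct else 0.0) - p)
```

Looping over pick_next_item, observing the answer, then calling update_ability is the core of the psychometric learner model the text describes: each response sharpens the ability estimate, which in turn selects the next piece of content.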

241Id.
242Brain Power, “About Us,” http://www.brain-power.com/
243https://www.duolingo.com/
244Jackie Snow, “AI Technology is disrupting the traditional classroom,”
https://www.pbs.org/wgbh/nova/article/ai-technology-is-disrupting-the-traditional-classroom/
245Julia Mericle, With Lumilo, teachers can see classroom analytics floating above students' heads, Pittsburgh
Business Times, (Oct. 3, 2018) https://www.bizjournals.com/pittsburgh/news/2018/10/03/with-lumilo-
teachers-can-see-classroom-analytics.html
246Center for Curriculum Redesign, Artificial Intelligence in Education: Promises and Implications for Teaching
and Learning, (March 2019) https://curriculumredesign.org/wp-content/uploads/AI-in-Education-CCR-Copy-
Protected.pdf
247Holstein, K., Hong, G., Tegene, M., McLaren, B. M., & Aleven, V. (2018). The classroom as a dashboard:
Co-designing wearable cognitive augmentation for K-12 teachers. In Proceedings of the Eighth International
Learning Analytics & Knowledge Conference (pp. 79-88). ACM. This paper reports on the co-design,
implementation, and evaluation of a wearable classroom orchestration tool for K-12 teachers: mixed-reality
smart glasses that augment teachers’ realtime perceptions of their students’ learning, metacognition, and
behavior, while students work with personalized learning software. The main contributions are: (1) the first
exploration of the use of smart glasses to support orchestration of personalized classrooms, yielding design
findings that may inform future work on real-time orchestration tools; (2) Replay Enactments: a new
prototyping method for real-time orchestration tools; and (3) an in-lab evaluation and classroom pilot using
a prototype of teacher smart glasses (Lumilo), with early findings suggesting that Lumilo can direct teachers’
time to students who may need it most.
248See, e.g, https://www.kidaptive.com/

Electronic copy available at: https://ssrn.com/abstract=3882296
In addition to the software and tools described above, AI robots are increasingly transforming educational
methods.249 Even though educational robots promise benefits to children, e.g., personalized learning, developing
social skills, and enabling distance education for children in remote regions, they also pose risks. 250 Human rights
that may be positively or negatively affected by their use include the right to education, the right to
protection from exploitation and abuse, and the protection of children with disabilities, inasmuch as such robots
could be developed with commercial profit in mind, at the expense of the CRbD standard.

AI technologies could help facilitate “personalized learning” (tailoring instruction to the needs of each student)
and “blended learning” (combining technology with face-to-face interaction). Many school officials hope that
such approaches will improve academic performance and reduce achievement gaps between groups of students.
Some teachers also suggest that personalized learning increases student engagement, motivation, and
independence.251

AI-based learning faces significant implementation challenges. Greater student independence could
disadvantage children who are less self-disciplined or who receive little educational support at home, potentially
exacerbating the achievement gap. Moreover, surveys indicate that some teachers struggle to translate the data
they receive from personalized learning tools into actionable instruction and
spend inordinate amounts of time creating individualized assignments. There is also debate over how well
students retain knowledge learned from an AI-based system, and whether spending substantial class time on
computers diminishes social learning at school. 252

The budget implications of using AI in education are problematic, given uncertainties about the cost-
effectiveness of the technology. For example, the versatility and scalability of AI could lead some institutions
to reduce teaching staff in favor of AI alternatives. However, AI could also create demand for education professionals
who can design and implement personalized learning programs. 253

AIEd in the US

US government actions have addressed issues related to AI in schools, such as internet access and student data
privacy. Successful implementation of AI by schools requires significant investment in information technology as
well as reliable broadband internet access. These resources are not uniformly distributed across school districts;
for example, close to 80% of schools without fiber connections were located in rural areas as of 2017. Federal
efforts to address this disparity include such programs as the Universal Service Program for Schools and
Libraries. Commonly known as E-rate, the program provides subsidies of up to 90% to help ensure that
qualifying schools and libraries can obtain high-speed internet access and telecommunications at affordable
rates. The National Science Foundation254 and the Department of Education’s255 (ED’s) Institute of Education
Sciences256 have awarded grants to projects researching AI-enabled classroom technologies. In addition, ED’s

249Timms, M.J. (2016). Letting Artificial Intelligence in Education out of the Box: Educational Cobots and Smart
Classrooms. International Journal of Artificial Intelligence in Education, 26(2), 701-712.
250Jon-Chao Hong, Kuang-Chao Yu, and Mei-Yung Chen, “Collaborative Learning in Technological Project
Design,” International Journal of Technology and Design Education 21, no. 3 (August 2011): 335–47.;
Mazzoni, Elvis, and Martina Benvenuti, “A Robot-Partner for Preschool Children Learning English Using Socio-
Cognitive Conflict,” Journal of Educational Technology & Society 18, no. 4 (2015): 474–85.; Barak, Moshe,
and Yair Zadok, “Robotics Projects and Learning Concepts in Science, Technology and Problem Solving,”
International Journal of Technology and Design Education 19, no. 3 (August 2009): 289–307.
251Congressional Research Service Report, Artificial Intelligence (AI) and Education (August 1, 2018)
https://crsreports.congress.gov/product/pdf/IF/IF10937
252Congressional Research Service Report, Artificial Intelligence (AI) and Education (August 1, 2018)
https://crsreports.congress.gov/product/pdf/IF/IF10937
253Congressional Research Service Report, Artificial Intelligence (AI) and Education (August 1, 2018)
https://crsreports.congress.gov/product/pdf/IF/IF10937
254https://www.nsf.gov/
255https://www.ed.gov/
256https://ies.ed.gov/

Office of Educational Technology257 has released several publications on topics relevant to AI in schools, such as
learning analytics and educational data mining, teacher preparation, personalized learning, and student privacy.

Selected AI Education Policy Considerations in the US

Although most education policies are set at the state and local level, Congress is involved in oversight and
legislative actions on issues such as student privacy, teacher preparation, product selection, and algorithmic
accountability.258

Student Privacy. Like many digital services, AI-enabled education tools collect and store personally identifiable
information (PII). In response to public concerns about data security and privacy, activists created a voluntary Student Privacy Pledge in 2014.
Signatories promise to place limits on the lifespan of stored data, maintain reasonable security measures, and
refrain from selling data. Although President Obama and several
Members of Congress endorsed the pledge, critics have asserted that the language is vague and the pledge is
little more than a publicity move. Meanwhile, 41 states have enacted laws governing student data collection,
use, reporting, and safeguarding since 2013. Several of those laws were modeled after California’s Student
Online Personal Information Protection Act (SOPIPA). Congress may consider whether such state efforts are
sufficient or if a federal law is needed.

Teacher Preparation. If AI technologies are adopted on a broader scale, teachers face the task of not only
learning to use specific products but also integrating a range of AI technologies into their lessons.

Preparation programs offered by teacher-certifying universities and institutes might provide such training. In
FY2018, ED’s Teacher Quality Partnership (TQP) competition planned to award approximately $14 million in grants
to these programs. If Congress decides to support funding teacher preparation for AI, options could include
redirecting funds toward teacher technology training and directing ED to develop best practices for teacher
technology competency.259

Product Procurement and Support. Choosing products can be a time- and energy-intensive effort involving
teachers, administrators, IT staff, and other school officials. While some schools allow teachers to experiment
freely, others require IT staff to vet hundreds of privacy policies and security measures. Some school districts
have turned to digital content consultants for guidance in selecting

257Meet the OET Team - Office of Educational Technology


258 Actions that Congress has taken include The Every Student Succeeds Act (P.L. 114-95), which
reauthorized the Elementary and Secondary Education Act of 1965, authorized the use of computer adaptive
testing in state student academic assessments mandated under the act. This marked the first time Congress
explicitly approved an AI testing technique for widespread use in schools. Congress has taken steps to
address public concerns regarding the privacy of students’ personal information, including concerns about
education technology companies collecting personally identifiable information (PII) from students to
maintain user accounts; The Family Educational Rights and Privacy Act of 1974 (FERPA), as
amended in 2013, limits the power of schools to disclose students’ education records but has been criticized
for weak enforcement mechanisms against third parties that misuse student data; The Protection of Pupil
Rights Amendment of 1978 (PPRA), as further amended in 2015, requires schools to notify parents and offer
an opt-out choice if a third party surveys students for marketing purposes; The Children’s Online Privacy
Protection Act of 1998 (COPPA) requires parental consent before websites collect information about
children under age 13. Many experts worry that current law, passed largely before AI became a major
policy consideration, is insufficient to address today’s cybersecurity threats. Bills introduced in the 115th
Congress, such as the Protecting Student Privacy Act (S. 877), SAFE KIDS Act (S. 2640), and
Protecting Education Privacy Act (H.R. 5224), addressed how third parties can access and use
students’ PII. See Congressional Research Service Report, Artificial Intelligence (AI) and Education (August
1, 2018) https://crsreports.congress.gov/product/pdf/IF/IF10937.
259Congressional Research Service Report, Artificial Intelligence (AI) and Education (August 1, 2018)
https://crsreports.congress.gov/product/pdf/IF/IF10937

products. To help schools gather research on educational tools and strategies, nonprofits and federal agencies
have developed resources. For example, the State Educational Technology Directors Association provides a best
practices guide for product procurement, 260 and ED’s What Works Clearinghouse rigorously reviews the
effectiveness of educational products and practices. Despite these resources, surveys indicate that peer
recommendation is a more prevalent basis for choosing products than research-based evidence. A centralized
platform to exchange information and collaboratively troubleshoot problems might help formalize inter-district
communication and allow schools to make wiser and less costly purchases. The Technology for Education
Consortium estimates that districts would collectively save $3 billion per year on education technology purchases
simply by sharing price information.261

Algorithmic Accountability. Parents and school administrators may find it difficult to trust AI technologies
used to influence or make decisions about student learning. Mistrust can stem from the refusal of companies to
disclose their algorithms, which they argue are trade secrets, or from the “black box problem,” which occurs
when an algorithm’s complexity renders its processes inscrutable even to developers. Options for Congress could
include holding hearings, conducting oversight, and considering requirements to enhance transparency and
accountability of data use more broadly, as the European Union has sought to do through the General Data
Protection Regulation.262

AIEd and Surveillance

Surveillance of children is another use of AI that is booming due to advances in machine learning and deep learning
techniques.263 Although some degree of surveillance advances security, it also poses risks to children. One
use of facial recognition technology that benefited children comes from police in New Delhi, who trialed the
technology and identified almost 3,000 missing children in four days. 264 However, surveillance also
creates privacy, safety, bias, and security risks and, especially in education contexts, limits children’s ability and
willingness to take risks and otherwise express themselves. 265

Key legal issues surround the use of advanced security technologies in public K-12 schools in the United States,
including their impact on student privacy rights. In using AI surveillance technology in schools,266 privacy must be
balanced against security concerns267; any apparent issues with the efficacy and accuracy of the technology should
be addressed before implementation; and Fourth Amendment case law, federal student privacy legislation, and
state laws need to be further developed, with AI in mind. 268

260https://www.setda.org/master/wp-content/uploads/2017/10/Case_studies_full_10.15.17.pdf
261Congressional Research Service Report, Artificial Intelligence (AI) and Education (August 1, 2018)
https://crsreports.congress.gov/product/pdf/IF/IF10937
262Congressional Research Service Report, Artificial Intelligence (AI) and Education (August 1, 2018)
https://crsreports.congress.gov/product/pdf/IF/IF10937
263 Emmeline Taylor, “Surveillance Schools: A New Era in Education,” in Surveillance Schools: Security,
Discipline and Control in Contemporary Education (London: Palgrave Macmillan UK, 2013), 15–39,
https://doi.org/10.1057/9781137308863_2.
264Anthony Cuthbertson, “Police Trace 3,000 Missing Children in Just Four Days Using Facial Recognition
Technology,” The Independent, https://www.independent.co.uk/life-style/gadgets-and-tech/news/india-
police-missingchildren-facial-recognition-tech-trace-find-reunite-a8320406.html, (April 24, 2018).
265Article 19, “The Global Principles on Protection of Freedom of Expression and Privacy,”
https://www.article19.org/resources/the-global-principles-on-protection-of-freedom-of-expression-and-
privacy/ ARTICLE 19 works for a world where all people everywhere can freely express themselves and
actively engage in public life without fear of discrimination. Id.
266Barbara Fedders, The Constant and Expanding Classroom: Surveillance in K-12 Public Schools, 97 N.C. L.
REV. 1673 (2019).
267See Sara Collins, Tyler Park & Amelia Vance, Ensuring School Safety While Also Protecting Privacy, FUTURE
PRIVACY F. (June 6, 2018), https://fpf.org/2018/06/06/ensuring-school-safety-while#also-protecting-
privacy-fpf-testim

In response to the fears of additional school violence and calls for enhanced school security, schools have begun
tightening security through the use of these emerging AI technologies. 269 Recognizing the market opportunity,
technology companies are developing new devices they claim will prevent or reduce the likelihood of school
shootings.270 These new devices, which include advanced cameras and body scanners, use biometrics and
artificial intelligence to recognize faces; detect weapons, gunshots, and other threats; and track individuals’
locations in schools.271

In schools, biometric and AI technologies cover a wide spectrum of programs. The AI industry has seen a boom
within the education market, and the worldwide AI education market value is predicted to surpass six billion
dollars by 2024,272 with classroom applications accounting for twenty percent of that growth.273

Much of the reason for the AIEd growth is the integration of AI systems for personalized learning, which enables
students to receive “immediate and personalized feedback and instructions . . . without the intervention of a

268Maya Weinstein, School Surveillance: The Students' Rights Implications of Artificial Intelligence as K-12
School Security, 98 N.C. L. Rev. 438 (2020). Available at: https://scholarship.law.unc.edu/nclr/vol98/iss2/12
269See, e.g., Kaitlyn DeHaven, Texas ISD Makes Major Security Upgrades Over the Summer, CAMPUS
SECURITY & LIFE SAFETY (Aug. 9, 2019), https://campuslifesecurity.com/articles/2019/08/09/texas-isd-
makes-major-security-upgrades-over-the-summer.aspx [https://perma.cc/3YFR#TZ47] (“Two apps will now
be used as part of the security measures—the Anonymous Alerts app and the Smart Button. . . . In terms of
physical security, the district installed video intercoms at each school entrance.”); Mark Keierleber, Inside
the $3 Billion School Security Industry: Companies Marketed Sophisticated Technology To ‘Harden’
Campuses, but Will It Make Us Safe?, 74 (Aug. 9, 2018), https://www.the74million.org/article/inside-the-3-
billion-school-security-industry-companies#market-sophisticated-technology-to-harden-campuses-but-will-it-
make-us-safe/ (“Schools have increasingly locked and monitored campus entrances in recent years, though
the rise in school security is most evident in the growth of video surveillance.”)
270The media streaming company RealNetworks is offering its facial recognition software to over 100,000
school districts for free, with the goal of making schools safer. Eli Zimmerman, Company Offers Free Facial
Recognition Software To Boost School Security, EDTECH (Aug. 3, 2018),
https://edtechmagazine.com/k12/article/2018/08/company-offers-free-facial-recognition-software#boost-
school-security [https://perma.cc/4V9N-TMSD]; see also Press Release, SAFR, RealNetworks Provides SAFR
Facial Recognition Solution for Free to Every K-12 School in the U.S. and Canada (July 17, 2018),
https://safr.com/press-release/realnetworks-provides-safr-facial-recognition-solution#for-free-to-every-k-12-
school-in-the-u-s-and-canada/
271Maya Weinstein, School Surveillance: The Students' Rights Implications of Artificial Intelligence as K-12
School Security, 98 N.C. L. Rev. 438 (2020). Available at: https://scholarship.law.unc.edu/nclr/vol98/iss2/12
272Ankita Bhutani & Preeti Wadhwani, Artificial Intelligence (AI) in Education Market Size Worth $6bn by 2024,
GLOBAL MKT. INSIGHTS (Aug. 12, 2019), https://www.gminsights.com/pressrelease/artificial-intelligence-ai-
in-education-market [https://perma.cc/W3RP-SNDQ].
273Michele Molnar, K-12 Artificial Intelligence Market Set To Explode in U.S. and Worldwide by 2024, EDWEEK
MKT. BRIEF (July 10, 2018), https://marketbrief.edweek.org/marketplace-k-12/k-12artificial-intelligence-
market-set-explode-u-s-worldwide-2024/ [https://perma.cc/6HBQ-JCEF]; see also Hao, Karen. “China has
started a grand experiment in AI education. It could reshape how the world learns.” MIT Technology
Review. .technologyreview.com/s/614057/china-squirrel-has-started-a-grand-experiment-in-ai-education-it-
could-reshape-how-the/.

human tutor.”274 Biometrics have been incorporated into the classroom as well, 275 and some schools even use
biometrics to allow students to pay for lunch with just a fingerprint.276

One popular new area of school surveillance technology is location tracking. For instance, the program “e-
hallpass”277 is a modern, electronic hall pass that “continuously logs and monitors student time in the halls” and
claims to “improv[e] school security and emergency management while reducing classroom disruptions by as
much as 50%.” A similar program, “iClicker Reef,”278 rebranded as “iClicker,”279 tracks attendance through a
geolocation feature.280 Using geolocation,281 these location systems can identify when a student is
in class, log attendance for the teacher, and track where students are in school. 282
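A geolocation attendance check of the kind these systems perform can be sketched as a simple geofence test. The coordinates, the 30-meter radius, and the function names below are hypothetical, not drawn from any of the products named above.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def mark_attendance(student_fix, classroom, radius_m=30.0):
    """Return True if the student's reported GPS fix falls inside the
    classroom geofence (a circle of radius_m around its coordinates)."""
    d = haversine_m(student_fix[0], student_fix[1], classroom[0], classroom[1])
    return d <= radius_m

# Hypothetical coordinates for a classroom and two student phones.
classroom = (40.4433, -79.9436)
print(mark_attendance((40.4434, -79.9437), classroom))  # a few meters away: True
print(mark_attendance((40.4500, -79.9436), classroom))  # several blocks away: False
```

Note how little data such a check needs: a timestamped coordinate pair per student is enough to log presence, which is precisely why continuous collection of those fixes doubles as a movement record.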

Although these technologies offer some benefits from a security standpoint, they are intrusive and
create an environment in which students are constantly tracked, monitored, and watched. Many of these programs involve
continuous monitoring of children, and some collect personally identifying data, including fingerprints and face
images. These technologies carry a number of potential adverse consequences: students may be inhibited from
participating in class, false data matches may lead to harmful and wrongful disciplinary actions, and
student privacy rights may otherwise be encroached upon. 283

Another type of facial recognition program, “affect recognition,” uses biometric analysis to scan individuals’
faces and purportedly identify emotions. 284 An Australian university is currently testing a product called the

274Artificial Intelligence in Education Market To Hit $6bn by 2024, GLOBAL MKT. INSIGHTS (June 6, 2018),
https://www.globenewswire.com/news-release/2018/06/06/1517441/0/en/Artificial#Intelligence-in-
Education-Market-to-hit-6bn-by-2024-Global-Market-Insights-Inc.html
275Jen A. Miller, Biometrics in Schools To Yield Security Benefits and Privacy Concerns, EDTECH MAG. (May 7,
2019), https://edtechmagazine.com/k12/article/2019/05/biometrics-schools-yield-security-benefits-
and#privacy-concerns (“Biometric technology is already part of the K-12 ecosystem, where administrators
are using iris scans and ‘facial fingerprints’ to grant access to buildings and computer labs, track attendance,
manage lunch payments, loan library materials and ensure students get on the right buses.”); Mae Rice, 13
EdTech Applications that Are Transforming Teaching and Learning, BUILT IN (June 22, 2019),
https://builtin.com/edtech/technology-in-classroom-applications (describing an online test proctoring system
which confirms test takers’ identities through fingerprints and voice biometrics).
276Biometrics Allows Students To Purchase with Fingerprint, GOV’T TECH. (Oct. 17, 2007),
https://www.govtech.com/health/Biometrics-Allows-Students-to-Purchase-with.html
277E-Hallpass, EDUSPIRE SOLUTIONS, https://www.eduspiresolutions.org/what-is-e-hallpass/
278https://community.macmillanlearning.com/t5/institutional-solutions-blog/new-name-who-s-this-iclicker-reef-
to-be-re-named-iclicker/ba-p/15007
279https://www.iclicker.com/students/apps-and-remotes/web
280David Rosen & Aaron Santesso, How Students Learned To Stop Worrying—and Love Being Spied On,
CHRON. HIGHER EDUC. (Sept. 23, 2018), https://www.chronicle.com/article/How-Students#Learned-to-
Stop/244596
281Daniel Ionescu, Geolocation 101: How It Works, the Apps, and Your Privacy, ITWORLD (Mar. 31, 2010),
https://www.itworld.com/article/2756095/networking-hardware/geolocation-101--how-it#works--the-apps--
and-your-privacy.html [https://perma.cc/AMB3-8VLK] (“Typically, geolocation apps do two things: They
report your location to other users, and they associate real-world locations (such as restaurants and events)
to your location.”)
282David Rosen & Aaron Santesso, How Students Learned To Stop Worrying—and Love Being Spied On,
CHRON. HIGHER EDUC. (Sept. 23, 2018), https://www.chronicle.com/article/How-Students#Learned-to-
Stop/244596
283Maya Weinstein, School Surveillance: The Students' Rights Implications of Artificial Intelligence as K-12
School Security, 98 N.C. L. Rev. 438 (2020). Available at: https://scholarship.law.unc.edu/nclr/vol98/iss2/12
284MEREDITH WHITTAKER ET AL., AI NOW REPORT 2018, at 4 (Dec. 2018),
https://ainowinstitute.org/AI_Now_2018_Report.pdf [https://perma.cc/2EAJ-AALT] (“Affect recognition is a
subclass of facial recognition that claims to detect things such as personality, inner feelings, mental health,
and ‘worker engagement’ based on images or video of faces.”). See also Milly Chan, This AI reads children's
emotions as they learn, CNN, (Feb. 17, 2021) https://www.cnn.com/2021/02/16/tech/emotion-recognition-

“Biometric Mirror,”285 which reads faces and ranks them according to fourteen characteristics, including gender,
age, ethnicity, attractiveness, “weirdness,” and emotional stability.286 Schools in China have implemented a
similar technology to analyze students’ facial expressions, including expressions like “neutral, happy, sad,
disappointed, angry, scared and surprised.”287 The main goal of this so-called “smart eye” is to alert teachers
when students are distracted in class. 288

Some argue that the identification of changes in mood could assist educators in identifying students
experiencing mental health crises, which could help flag potential threats. 289 However, many believe that affect
recognition, the idea that a program can read someone’s emotions, is eerily reminiscent of the debunked
pseudosciences of phrenology and physiognomy. 290 “These claims are not backed by robust scientific evidence
and are being applied in unethical and irresponsible ways…Linking affect recognition to hiring, access to
insurance, education, and policing creates deeply concerning risks, at both an individual and societal level.” 291

The idea of banning facial recognition outright has also grown more popular in the past few years, particularly
among states and municipalities. For instance, the cities of San Francisco, Oakland, and Berkeley in California
and Somerville and Boston in Massachusetts have banned the use of facial recognition technology by city government,
including but not limited to law enforcement. A two-year moratorium on the use of facial recognition technology
in New York schools passed both houses of the state legislature and awaits the governor’s signature. California
recently passed a three-year ban on law enforcement uses of facial recognition in body cameras, and a proposed
ordinance in Portland, Oregon would ban the use of facial recognition by both law enforcement and private
businesses. A California legislator announced plans to introduce a bill that would ban government uses of facial

ai-education-spc-intl-hnk/index.html (Ka Tim Chu, teacher and vice principal of Hong Kong's True Light
College, uses an AI-powered learning platform that monitors his students' emotions as they study at home.
Students work on tests and homework on the platform as part of the school curriculum. While they study,
the AI measures muscle points on their faces via the camera on their computer or tablet, and identifies
emotions including happiness, sadness, anger, surprise and fear. The system also monitors how long
students take to answer questions; records their marks and performance history; generates reports on their
strengths, weaknesses and motivation levels; and forecasts their grades. The program can adapt to each
student, targeting knowledge gaps and offering game-style tests designed to make learning fun. Lam says
the technology has been especially useful to teachers during the pandemic because it allows them to
remotely monitor their students' emotions as they learn. Racial bias is also a serious issue for AI. Research
shows that some emotional analysis technology has trouble identifying the emotions of darker skinned faces,
in part because the algorithm is shaped by human bias and learns how to identify emotions from mostly
white faces.)
285https://biometricmirror.com/ (“Biometric Mirror is an ethically provocative interactive system that enables
public participation in the debate around ethics of artificial intelligence. The system enables people to have
their face photographed and to witness the reveal of their psychometric analysis, including attributes such as
aggressiveness, weirdness and emotional instability. Ultimately, a personalized scenario of algorithmic
decision-making is shown in order to stimulate individual reflection on the ethical application of artificial
intelligence.”)
286Jo Lauder, Mirror, Mirror: How AI Is Using Facial Recognition To Decipher Your Personality, ABC AUSTL.
(July 23, 2018), https://www.abc.net.au/triplej/programs/hack/how-ai-is-using-facial#recognition-to-
decipher-your-personality/10025634
287Neil Connor, Chinese School Uses Facial Recognition To Monitor Student Attention in Class, TELEGRAPH
(May 17, 2018), https://www.telegraph.co.uk/news/2018/05/17/chinese-school-uses#facial-recognition-
monitor-student-attention
288Id.
289 See, e.g., Randy Rieland, Can Artificial Intelligence Help Stop School Shootings?, SMITHSONIAN (June 22,
2018), https://www.smithsonianmag.com/innovation/can-artificial-intelligence-help-stop#school-shootings-
180969288/ (describing the use of machine learning to analyze student language and behavior and help
counselors with risk assessment).
290AI NOW 2018 REPORT at 8. https://ainowinstitute.org/AI_Now_2018_Report.pdf
291Id. at 4.

recognition for the next five years, while Senators Booker and Merkley introduced a bill that would ban federal
uses of the technology and prohibit states and local entities from using federal funding for it until Congress
passes legislation regulating it. The goal of a comprehensive and federal ban on facial recognition may be lofty,
but it is not impossible given the growing awareness and political will to regulate these AI technologies. 292

A major concern with implementing AI technologies anywhere, including in schools, is the risk of
machine bias: systematic disparities in the accuracy of algorithmic results, typically with respect to race, but
also gender or age. The identification abilities of AI in biometrics are only as good as the humans who develop
them. A prominent AI expert and co-founder of AI4ALL 293 summed up the issue: “bias in, bias out.”294
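The disparity in question can be made concrete with a toy audit that compares a system's match accuracy across demographic groups. The audit log below is invented purely for illustration; a real audit would use a held-out labeled dataset and a much larger sample.

```python
from collections import defaultdict

def per_group_accuracy(records):
    """Compute match accuracy separately for each demographic group.
    Each record is a tuple: (group, predicted_match, actual_match)."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

# Invented audit log: a system that is accurate for group A but not group B,
# e.g. because group B was underrepresented in the training data.
log = [
    ("A", True, True), ("A", False, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, True), ("B", False, True), ("B", False, False),
]
print(per_group_accuracy(log))  # {'A': 1.0, 'B': 0.5}
```

A single aggregate accuracy figure (here 75%) would hide the gap entirely, which is why disaggregated reporting of this kind is a standard first step in auditing biometric systems.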

Surveillance practices that continuously monitor everything from children’s engagement in the classroom to their
emotional states throughout the day threaten the creativity, freedom of choice and self-determination of
children by potentially fostering an overabundance of self-censorship and social control. 295 Once automated
surveillance technologies are deployed at schools and in classrooms, children’s rights such as the right to
privacy, the right not to be subjected to discrimination, the right to flourish, and freedom of expression may be
compromised due to the surveillance environment in which children are confined. 296

The risks vary depending on who does the surveilling (governments, teachers, parents etc.) and for what
purposes.297 However, the chilling effect of having cameras constantly turned on children is undeniable. 298 It is
important to consider and evaluate the actors involved, their purposes, the tools and methods they will use, and
the safeguards they will put in place. The emerging trend of classroom surveillance should help children, not harm
them.

New technologies are expanding schools’ ability to keep students under surveillance—inside the classroom and
out, during the school year and after it ends. Schools have moved quickly to adopt a dizzying array of new tools.
These include digital learning products that capture and store student data; anonymous tip lines encouraging
students to report on each other; and software that monitors students’ emails and social media posts, even
when they are written from home. Steadily growing numbers of police officers stationed in schools can access
this information, compounding the technologies’ power. 299

Advocates of these tools argue that they improve student safety and learning outcomes, but Fedders reveals
that the evidence for this argument is in fact quite thin. Moreover, policymakers have failed to consider
important countervailing considerations—most notably, student privacy and its significance for child

292 Barrett, Lindsey, Ban Facial Recognition Technologies for Children—And for Everyone Else (July 24, 2020).
Boston University Journal of Science and Technology Law, Volume 26.2, at 277-278. Available at SSRN:
https://ssrn.com/abstract=3660118
293 https://ai-4-all.org/ AI4ALL Opens Doors to Artificial Intelligence for Historically Excluded Talent Through
Education and Mentorship. Id.
294 Jessi Hempel, Fei-Fei Li’s Quest To Make AI Better for Humanity, WIRED (Nov. 13, 2018),
https://www.wired.com/story/fei-fei-li-artificial-intelligence-humanity
295 Rich Haridy, “AI in Schools: China’s Massive and Unprecedented Education Experiment,” New Atlas – New
Technology & Science News, https://newatlas.com/china-aieducation-schools-facial-recognition/54786/,
(May 28, 2018).
296 Article 19, “Privacy and Freedom of Expression in the Age of Artificial Intelligence,”
https://www.article19.org/wp-content/uploads/2018/04/Privacy-and-Freedom-of-Expression-In-the-Age-of-Artificial-Intelligence-1.pdf, (2018), 8.
297 William Michael Carter, “Big Brother Facial Recognition Needs Ethical Regulations,” Phys.org,
https://phys.org/news/2018-07-big-brother-facial-recognition-ethical.html#jCp. (July 23, 2018).
298 Id.
299 Fedders, Barbara, The Constant and Expanding Classroom: Surveillance in K-12 Public Schools (September
1, 2019). North Carolina Law Review, Vol. 97, No. 6, 2019. Available at SSRN:
https://ssrn.com/abstract=3453358

development; unequal impact, particularly for poor, Black, and LGBTQ youth; and potential liability for school
administrators.300

The twin justifications for student surveillance are safety and improved educational outcomes. The companies
developing these technologies market them against a backdrop of fear of violence, especially school shootings,
and anxiety about academic success. State lawmakers appear convinced by these justifications, passing
legislation that mandates adoption of some technologies and allocates funds for the purchase of others. Local
school districts take advantage of increased state funding to hire school resource officers for kindergarten
through the twelfth grade. The various mechanisms of surveillance combine to make more information available
about more students, for a longer period of time, and accessible to a greater number of actors than was
possible before the digital age.301

AIEd Continues to Develop

AIEd has the potential to dramatically automate and help track the learner’s progress in all these skills and
identify where a human teacher’s assistance is most needed. For teachers, AIEd can potentially help
identify the most effective teaching methods based on students’ contexts and learning backgrounds. It can
automate monotonous tasks, generate assessments, and allegedly automate grading and feedback. AI affects not
only what students learn through recommendations, but also how they learn: where the learning gaps are,
which pedagogies are most effective, and how to retain learners’ attention. In such contexts, teachers are the
‘human in the loop’, and the role of AI is only to enable more informed decision making by
teachers, by providing them predictions about student performance or recommending relevant content to
students subject to teachers’ approval.302

Although AIEd is increasing around the globe,303 educational technology companies building AI-powered
products have long complained about the lack of relevant data for training algorithms.304 The advent of
COVID-19 pushed educational institutions online and made them dependent on EdTech products to organize content,
manage operations, and communicate with students. This shift generated huge amounts of data for EdTech
companies on which they can build AI systems. According to ‘Shock to the System’, a joint report
published by Educate Ventures and Cambridge University, the optimism of EdTech companies about their own
future increased during the pandemic, and their most pressing concern became having too many customers to serve effectively.305

300 Fedders, Barbara, The Constant and Expanding Classroom: Surveillance in K-12 Public Schools (September
1, 2019). North Carolina Law Review, Vol. 97, No. 6, 2019. Available at SSRN:
https://ssrn.com/abstract=3453358
301 Id. See generally Julie E. Cohen, Surveillance Versus Privacy: Effects and
Implications, in THE CAMBRIDGE HANDBOOK OF SURVEILLANCE 455, 458–59 (David Gray & Stephen E.
Henderson eds., 2017) [hereinafter Cohen, Surveillance Versus Privacy] (documenting “emergence of
pervasive, networked surveillance”).
302 Chaudhry, Muhammad and Kazim, Emre, Artificial Intelligence in Education (AIEd): A High-Level Academic
and Industry Note 2021 (April 24, 2021). Available at SSRN: https://ssrn.com/abstract=3833583 or
https://dx.doi.org/10.2139/ssrn.3833583
303 Weller, M., 2018. Twenty years of EdTech. Educause Review Online, 53(4), pp.34-48.
304 Chaudhry, Muhammad and Kazim, Emre, Artificial Intelligence in Education (AIEd): A High-Level Academic
and Industry Note 2021 (April 24, 2021). Available at SSRN: https://ssrn.com/abstract=3833583 or
https://dx.doi.org/10.2139/ssrn.3833583
305 Cambridge University Press and Educate Ventures (2021). Shock to the system: lessons from Covid-19,
Volume 1: Implications and recommendations. Available at:
https://www.cambridge.org/pk/files/1616/1349/4545/Shock_to_the_System_Lessons_from_Covid19_Volume_1.pdf

As noted above, an intelligent tutoring system (ITS) is a computer program that tries to mimic a human teacher to
provide personalized learning to students.306 Recently, ITSs such as ASSISTments,307 iTalk2Learn,308 and Aida
Calculus309 have gained attention.310 Although each intelligent tutoring system is limited to a particular
domain, ITSs have proven effective in providing relevant content to students,
interacting with students, and improving students’ academic performance.311

In some instances teachers have abandoned the technology because it was counterproductive. Utterberg Modén
et al. conducted a formative intervention with sixteen secondary school mathematics teachers and found systemic contradictions
between teachers’ opinions and ITS recommendations, eventually leading to abandonment of the tool.312

A number of ed-tech companies are leading the AIEd revolution, and new funds are
emerging to invest in ed-tech companies and help ed-tech startups scale their products. Investor interest has
increased:313 in 2020 the amount of investment raised by ed-tech companies more than
doubled compared to 2019.314

EDUCATE, a leading accelerator focused on ed-tech companies, supported by the UCL Institute of Education and the
European Regional Development Fund, was formed to put research and evidence at the center of product
development for ed-tech. This accelerator has supported more than 250 ed-tech companies and 400
entrepreneurs and helped them focus on evidence-informed product development for education.315

Companies such as Outschool316 and ClassDojo317 turned their first profits, while startups like Quizlet318 and ApplyBoard319
reached $1 billion valuations. Last year brought a flurry of record-breaking venture capital to the sector.
PitchBook320 data shows that edtech startups around the world raised $10.76 billion last year, compared to $4.7

306 Mohamed, H., & Lamia, M. (2018). Implementing flipped classroom that used an intelligent tutoring system
into learning process. Computers & Education, 124, 62–76. https://doi.org/10.1016/j.compedu.2018.05.011
307 Heffernan, N. T., & Heffernan, C. L. (2014). The ASSISTments ecosystem: building a platform that brings
scientists and teachers together for minimally invasive research on human learning and teaching.
308 Hasan, M.A., Noor, N.F.M., Rahman, S.S.A. and Rahman, M.M., 2020. The Transition from Intelligent to
Affective Tutoring System: A Review and Open Issues. IEEE Access.
309 https://apps.apple.com/us/app/aida-calculus/id1450379917
310 https://www.pearson.com/us/higher-education/products-services-teaching/learning-engagement-tools/aida.html
311 Fang Y, Ren Z, Hu X, Graesser AC. A meta-analysis of the effectiveness of ALEKS on learning. Educational
Psychology. 2019;39(10):1278–92.
312 Utterberg Modén, M., Tallvid, M., Lundin, J. and Lindström, B., 2021. Intelligent Tutoring Systems: Why
Teachers Abandoned a Technology Aimed at Automating Teaching Processes. In Proceedings of the 54th
Hawaii International Conference on System Sciences (p. 1538).
313 Goryachikh, S.P., Sozinova, A.A., Grishina, E.N. and Nagovitsyna, E.V., 2020. Optimisation of the
mechanisms of managing venture investments in the sphere of digital education on the basis of new
information and communication technologies: audit and reorganisation. International Journal of Economic
Policy in Emerging Economies, 13(6), pp.587-594.
314 Natasha Mascarenhas, 13 investors say lifelong learning is taking edtech mainstream, TechCrunch (Jan. 28,
2021) https://techcrunch.com/2021/01/28/12-investors-say-lifelong-learning-is-taking-edtech-mainstream/
315 https://www.ucl.ac.uk/ioe/departments-and-centres/centres/ucl-knowledge-lab/educate
316 https://outschool.com/
317 https://www.classdojo.com/
318 https://quizlet.com/
319 https://www.applyboard.com/
320 https://get.pitchbook.com/pitchbook-data/

billion in 2019. While reporting delays could change this total, VC dollars have more than doubled since the
pandemic began. In the United States, edtech startups raised $1.78 billion in venture capital across 265 deals
during 2020, compared to $1.32 billion the prior year.321

Seeing the business potential of AIEd and the kind of impact it can have on the future of humanity, some of the
biggest tech companies around the globe are moving into this space. The shift to online education during the
pandemic boosted the demand for cloud services. Amazon’s AWS (Amazon Web Services), a leading cloud
services provider, facilitated institutions in scaling their online examination services.322

Google’s CEO Sundar Pichai stated that the pandemic offered an incredible opportunity to reimagine education.
Google has launched more than 50 new software tools during the pandemic to facilitate remote learning. Google
Classroom, part of Google Apps for Education (GAFE), is being widely used by schools around the
globe to deliver education. Research shows that it improves class dynamics and helps with learner
participation.323

A further issue is the development of an AIEd infrastructure, which true progress will require.324 This will not,
however, be a single monolithic AIEd system. Instead, it will resemble the
marketplace that has developed for smartphone apps: hundreds and then thousands of individual AIEd
components, developed in collaboration with educators, conforming to uniform international data standards, and
shared with researchers and developers worldwide. These standards will enable system-level data collation and
analysis that help us learn much more about learning itself and how to improve it.325

Ethical AIEd

A number of AI ethical misuses,326 including safety and cybersecurity incidents, have occurred in the real world,327
and thus ethics in AI has become a real concern for AI researchers, practitioners, and governments alike.328

321 Natasha Mascarenhas, 13 investors say lifelong learning is taking edtech mainstream, TechCrunch (Jan. 28,
2021) https://techcrunch.com/2021/01/28/12-investors-say-lifelong-learning-is-taking-edtech-mainstream/
322 About Amazon. (2020). Helping 700,000 students transition to remote learning. [online] Available at:
https://www.aboutamazon.com/news/community/helping700-000-students-transition-to-remote-learning;
Amazon Web Services, Inc. (n.d.). [online] Available at:
https://pages.awscloud.com/whitepaper-emerging-trends-in-education.html
323 Al-Maroof, R.A.S. and Al-Emran, M., 2018. Students Acceptance of Google Classroom: An Exploratory Study
using PLS-SEM Approach. International Journal of Emerging Technologies in Learning, 13(6); Iftakhar, S.,
2016. Google classroom: what works and how. Journal of Education and Social Sciences, 3(1), pp.12-18;
Shaharanee, I.N.M., Jamil, J.M. and Rodzi, S.S.M., 2016, August. Google classroom as a tool for active
learning. In AIP Conference Proceedings (Vol. 1761, No. 1, p. 020069). AIP Publishing LLC; Shaharanee,
I.N.M., Jamil, J.M. and Rodzi, S.S.M., 2016. The application of Google Classroom as a tool for teaching and
learning. Journal of Telecommunication, Electronic and Computer Engineering (JTEC), 8(10), pp.5-8;
Sudarsana, I.K., Putra, I.B.M.A., Astawa, I.N.T. and Yogantara, I.W.L., 2019, March. The use of Google
classroom in the learning process. In Journal of Physics: Conference Series (Vol. 1175, No. 1, p. 012165).
IOP Publishing.
324 Luckin, R., Holmes, W., Griffiths, M. and Pearson, L. (2016). Intelligence Unleashed: An Argument for AI in
Education. [online] Available at:
https://static.googleusercontent.com/media/edu.google.com/en//pdfs/Intelligence-Unleashed-Publication.pdf
325 Id.
326 Johnson, D.G. and Verdicchio, M., 2019. AI, agency and responsibility: the VW fraud case and beyond. AI &
Society, 34(3), pp.639-647.
327 Yampolskiy, R.V. and Spellchecker, M.S., 2016. Artificial Intelligence Safety and Cybersecurity: a Timeline of
AI Failures. arXiv preprint arXiv:1610.07997.
328 Leslie, David, Understanding Artificial Intelligence Ethics and Safety: A Guide for the Responsible Design and
Implementation of AI Systems in the Public Sector (June 10, 2019). Available at SSRN:
https://ssrn.com/abstract=3403301 or https://dx.doi.org/10.2139/ssrn.3403301

Within computer science, there is a growing overlap with the broader field of digital ethics329 and with the ethics and
engineering work focused on developing trustworthy AI.330

As stated above, ethics in AI focuses on fairness, accountability, transparency and explainability.331 Ethics in AI
needs to be embedded in the entire development pipeline, from the decision to start collecting data to the
decision to deploy the machine learning model in production. From an engineering perspective, four verticals of
algorithmic auditing have been identified: auditing for performance and robustness; bias and
discrimination; interpretability and explainability; and algorithmic privacy.332
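As a minimal illustration of the bias-and-discrimination vertical (the group labels and decision counts below are hypothetical), an audit might compare selection rates across demographic groups and flag a disparate-impact ratio below the commonly cited four-fifths threshold:

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs logged from a deployed model."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Min/max selection-rate ratio; below 0.8 flags the 'four-fifths' rule."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit log: group A selected 60/100 times, group B 30/100 times.
decisions = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 30 + [("B", 0)] * 70
rates = selection_rates(decisions)    # {'A': 0.6, 'B': 0.3}
print(disparate_impact_ratio(rates))  # 0.5 -> fails the 0.8 threshold
```

A full audit would of course cover all four verticals; this sketch only shows how one fairness check reduces to a small, repeatable computation over logged decisions.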

In education, ethical AI is crucial to ensure the wellbeing of learners, teachers and other stakeholders involved.

With the influx of large amounts of data due to online learning during the pandemic, we will witness an
increasing number of AI-powered ed-tech products. There are concerns that ethics in AIEd is not a priority for
most EdTech companies, or even schools. Relevant stakeholders often lack awareness of
where AIEd can go wrong.

An AIEd system that wrongly predicts that a particular student will perform poorly in end-of-year exams, or
might drop out next year, can shape that student’s reputation with teachers and parents. That reputation in turn
determines how teachers and parents treat the student, producing real psychological harm and lost
opportunities on the basis of a mistaken algorithmic description.
We discussed above the high-profile UK case in which an automated grading system was shown to be biased
against students from poorer backgrounds.

There are important AIEd ethics developments. For example, Professor Rose Luckin, professor of learner-centered
design at University College London, along with Sir Anthony Seldon, vice chancellor of the University of
Buckingham, and Priya Lakhani, founder and CEO of Century Tech, founded the Institute for Ethical AI in
Education (IEAIEd)333 to create awareness and promote the ethical aspects of AI in education. In its interim
report, the institute identified seven different requirements for ethical AI to mitigate risks for
students: human agency and oversight to double-check AI’s performance; technical robustness
and safety to prevent AI from going wrong with new data or being hacked; diversity to ensure similar distribution of
different demographics in data and avoid bias; non-discrimination and fairness to prevent anyone from being
unfairly treated by AI; privacy and data governance to ensure everyone has the right to control their data;
transparency to enhance the understanding of AI products; societal and environmental well-being to ensure that
AI is not causing any harm; and accountability to ensure that someone takes responsibility for any
wrongdoing by AI. Recently, the institute has also published a framework334 for educators, schools and ed-tech

329 Floridi, L. (2018). Soft ethics, the governance of the digital and the General Data Protection Regulation.
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences,
376(2133), 20180081.
330 Brundage, M., Avin, S., Wang, J., Belfield, H., Krueger, G., Hadfield, G., ... & Maharaj, T. (2020). Toward
trustworthy AI development: mechanisms for supporting verifiable claims. arXiv preprint arXiv:2004.07213.
331 Zhang, Y., Liao, Q.V. and Bellamy, R.K.E. (2020). Effect of confidence and explanation on accuracy and trust
calibration in AI-assisted decision making. Proceedings of the 2020 Conference on Fairness, Accountability,
and Transparency. Available at: https://arxiv.org/pdf/2001.02114.pdf; Kazim, Emre and Koshiyama, Adriano,
A High-Level Overview of AI Ethics (May 24, 2020). Available at SSRN: https://ssrn.com/abstract=3609292
or https://dx.doi.org/10.2139/ssrn.3609292; and Yu, H., Shen, Z., Miao, C., Leung, C., Lesser, V.R. and Yang,
Q., 2018. Building ethics into artificial intelligence. arXiv preprint arXiv:1812.02953.
332 Koshiyama, A., Kazim, E., Treleaven, P., Rai, P., Szpruch, L., Pavey, G., Ahamat, G., Leutner, F., Goebel, R.,
Knight, A., Adams, J., Hitrova, C., Barnett, J., Nachev, P., Barber, D., Chamorro-Premuzic, T., Klemmer, K.,
Gregorovic, M., Khan, S. and Lomas, E. (2021). Towards Algorithm Auditing: A Survey on Managing Legal,
Ethical and Technological Risks of AI, ML and Associated Algorithms. [online] Available at:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3778998.
333 University of Buckingham. (n.d.). The Institute for Ethical AI in Education. Available at:
https://www.buckingham.ac.uk/research-the-institute-for-ethical-ai-in-education/

companies to help them with the selection of ed-tech products with various ethical considerations in mind, like
ethical design, transparency, privacy etc.

With the focus on online learning during the pandemic, and more utilization of AI powered ed-tech tools, risks of
AI going awry increased significantly for all the stakeholders including EdTech companies, schools, teachers and
students. A lot more work needs to be done on ethical AI in learning contexts to mitigate these risks, including
assessments balancing AIEd risks and opportunities.

Moving Forward with AIEd

With the focus on online education due to COVID-19 in the past year, it will be interesting to see what AI has to
offer for education, with vast amounts of data being collected online through Learning Management Systems
(LMS) and Massive Open Online Courses (MOOCs).

With the influx of new educational data, AI techniques such as reinforcement learning will be utilized to
empower EdTech. Such algorithms perform best with large amounts of data, which as of 2021 only a
few EdTech companies possessed. These algorithms have achieved breakthrough performance in multiple domains
including games,335 healthcare,336 and robotics.337 This presents a great opportunity for AI applications in
education to further enhance students’ learning outcomes, reduce teachers’ workloads and make learning
interactive and fun for teachers and students. With a growing number of AI-powered EdTech products in the
future, there will also be a lot of research on ethical AIEd, and more work will be done to ensure robust and safe AI
products for all the stakeholders.
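The flavor of reinforcement learning in this setting can be sketched with one of its simplest forms, an epsilon-greedy multi-armed bandit choosing which practice exercise to recommend. The exercise names and success rates below are invented for illustration and are not drawn from any real product:

```python
import random

random.seed(1)

# Hypothetical setup: each "arm" is a practice exercise; a reward of 1 means
# the student answered a follow-up question correctly. The true success
# rates below are invented and unknown to the algorithm.
true_rates = {"easy_drill": 0.4, "worked_example": 0.7, "hard_puzzle": 0.5}

counts = {arm: 0 for arm in true_rates}
values = {arm: 0.0 for arm in true_rates}  # running mean reward per exercise

def choose(epsilon=0.1):
    """Epsilon-greedy: mostly exploit the best-known arm, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(list(true_rates))  # explore
    return max(values, key=values.get)          # exploit

for _ in range(5000):
    arm = choose()
    reward = 1 if random.random() < true_rates[arm] else 0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

best = max(values, key=values.get)
print(best)  # with enough trials, the highest-payoff exercise wins out
```

Real AIEd systems would use far richer state (learner history, content features) and algorithms, but the core loop of acting, observing a learning outcome, and updating a policy is the same, which is why such methods are data-hungry.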

EdTech companies can begin by sharing detailed guidelines for using AI-powered ed-tech products, particularly
specifying when not to rely on them. This includes detailed documentation of the entire machine learning
development pipeline: the assumptions made, the data processing approaches used, and the processes
followed for selecting machine learning models.
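One sketch of what such documentation could look like in machine-readable form, in the spirit of a "model card". All field names and example values are illustrative, not a published standard or a real product:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    """Minimal machine-readable documentation for an AIEd model release.

    Field names here are illustrative only.
    """
    model_name: str
    intended_use: str
    not_intended_for: list   # when NOT to rely on the model
    data_processing: list    # cleaning / filtering steps applied
    assumptions: list        # assumptions made during development
    model_selection: str     # how the final model was chosen

# Hypothetical example for an invented dropout-risk model.
card = ModelCard(
    model_name="dropout-risk-v1",
    intended_use="Flag students for human review, never for automatic action",
    not_intended_for=["grading", "disciplinary decisions"],
    data_processing=["removed records with more than 30% missing fields"],
    assumptions=["attendance data is complete for all schools"],
    model_selection="highest F1 score on a held-out 2020 cohort",
)
print(json.dumps(asdict(card), indent=2))
```

Publishing a structured record like this alongside each release makes the "when not to rely on it" guidance auditable rather than buried in marketing material.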

Regulators will play a very important role in ensuring that certain ethical principles are followed in developing
these AI products and that the products meet certain minimum performance thresholds.338

The goal of AIEd is not to promote AI, but to support education. Cutting edge AI by researchers and companies
around the world is not of much use if it is not helping students learn. With the recent developments in AI,
particularly reinforcement learning techniques, the future holds exciting possibilities of where AI will take
education. For impactful AI in education, students and teachers always need to be at the epicenter of AI
development.339

334 The Institute for Ethical AI in Education (IEAIEd), The Ethical Framework for AI in Education. 2021. [online]
Available at: https://fb77c667c4d6e21c1e06.b-cdn.net/wp-content/uploads/2021/03/The-Ethical-Framework-for-AI-in-Education-Institute-for-Ethical-AI-in-Education-Final-Report.pdf
335 Silver, D., Huang, A., Maddison, C.J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou,
I., Panneershelvam, V., Lanctot, M. and Dieleman, S., 2016. Mastering the game of Go with deep neural
networks and tree search. Nature, 529(7587), pp.484-489.
336 Callaway, E. (2020). “It will change everything”: DeepMind’s AI makes gigantic leap in solving protein
structures. Nature. Available at: https://www.nature.com/articles/d41586-020-03348-4.
337 Kober, J., Bagnell, J.A. and Peters, J., 2013. Reinforcement learning in robotics: A survey. The International
Journal of Robotics Research, 32(11), pp.1238-1274.
338 Kazim, E., Denny, D. M. T., & Koshiyama, A. (2021). AI auditing and impact assessment: according to the
UK information commissioner’s office. AI and Ethics, 1-10.
339 Chaudhry, Muhammad and Kazim, Emre, Artificial Intelligence in Education (AIEd): A High-Level Academic
and Industry Note 2021 (April 24, 2021). Available at SSRN: https://ssrn.com/abstract=3833583 or
https://dx.doi.org/10.2139/ssrn.3833583

A 2016 study, conducted on behalf of the European Parliament, concludes that AI applications will be used in
almost all fields of our daily lives. 340 The recent developments and future promises of AI technologies provide
myriad benefits that span across a multitude of interested parties, industries, and sectors. The lofty future that
AI could provide has been recognized by businesses, governments, and individuals, and with good reason. As
noted by the European Union’s Independent High-Level Expert Group (HLEG) on AI, “AI is not an end in itself,
but rather a promising means to increase human flourishing, thereby enhancing individual and societal well-
being and the common good, as well as bringing progress and innovation.” 341

Safety, as it relates to AI and related technologies, “ought not to be confined to physical safety but should
extend to concern for nonphysical harm, such as privacy, security, and the dehumanization of care for people at
their most vulnerable.”342 Finding ways to navigate both the physical and nonphysical challenges presented by AI
will be essential to building trust and fostering its development. A further element deserving attention is
cybersecurity: AI attacks manifest quite differently from conventional cyber attacks
(fed, for example, by bugs in code), taking the form of pattern manipulation and poisoning along
with “inherent limitations in the underlying AI algorithms that currently cannot be fixed.”343
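A toy sketch of data poisoning, with all numbers invented for illustration: injecting a few extreme, mislabeled points into the training set shifts a simple learned decision threshold, degrading accuracy on genuine cases even though no code was "hacked" in the conventional sense.

```python
import statistics

# Toy 1-D "model": classify a sample as positive when its score exceeds the
# midpoint between the class means learned from training data.
def fit_threshold(pos_scores, neg_scores):
    return (statistics.mean(pos_scores) + statistics.mean(neg_scores)) / 2

clean_pos = [2.0, 2.1, 1.9, 2.2, 1.8]
clean_neg = [0.0, 0.1, -0.1, 0.2, -0.2]
clean_t = fit_threshold(clean_pos, clean_neg)
print(round(clean_t, 2))  # 1.0 -- separates the two classes cleanly

# Poisoning: an attacker slips a few extreme, mislabeled "negative" points
# into the training set; the learned boundary shifts toward the positives.
poisoned_neg = clean_neg + [5.0, 5.0]
poisoned_t = fit_threshold(clean_pos, poisoned_neg)
print(round(poisoned_t, 2))  # 1.71 -- genuine positives near 1.8 now misclassified
```

The attack exploits the learning procedure itself rather than a software bug, which is why patching code cannot fully fix it.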

Of growing significance alongside AI technological issues are those of ethics. AI is ideological.344 The concern
about AI is not that it won’t deliver on the promise held forth by its advocates but, rather, that it will, without
due consideration of ethical implications. There are assumptions embedded in the algorithms that will
shape how education is realized, and if students do not fit that conceptual model, they will find themselves
outside the area where a human could apply human wisdom to intervene or alter an unjust outcome. Perhaps
one of the greatest contributions of AI will be to make us understand how important human wisdom truly is in
education and everywhere else.345

Corporations’ and Governments’ Role in Mitigating Harmful Impacts of AI on Children

Microsoft and Google have both established principles for the ethical use of AI.346 However, neither has public-facing
policies specific to AI and children.347 Several technology centers, trade associations, and computer science
groups have also drafted ethical principles with regard to AI.348 However, most have excluded explicit reference
to child rights, or discussion of the risks that AI-incorporating technologies pose to children.349

340 Including applications for disabled people and the daily life of elderly people, healthcare, agriculture and
food supply, manufacturing, energy and critical infrastructure, logistics and transport as well as security and
safety. EUROPEAN PARLIAMENTARY RESEARCH SERV.: SCI. FORESIGHT UNIT, ETHICAL ASPECTS OF
CYBER-PHYSICAL SYSTEMS 36 (2016), at 9 [hereinafter EPRS]. For more information concerning the
increasing relevance of AI applications, see Commission Communication on Artificial Intelligence for Europe,
COM (2018) 237 final (Apr. 25, 2018) [hereinafter Artificial Intelligence for Europe].
341 Ethics Guidelines for Trustworthy AI, INDEPENDENT HIGH-LEVEL EXPERT GP. ON ARTIFICIAL
INTELLIGENCE (Apr. 2019), at 4 [hereinafter HLEG AI Ethics Guidelines].
342 Michael Guihot, Anne F. Matthew, & Nicolas Suzor, Nudging Robots: Innovative Solutions to Regulate
Artificial Intelligence, 20 VAND. J. ENT. & TECH. L. 385, 407 (2017).
343 MARCUS COMITER, ATTACKING ARTIFICIAL INTELLIGENCE: AI’S SECURITY VULNERABILITY AND WHAT
POLICYMAKERS CAN DO ABOUT IT 1, 80 (Aug. 2019),
https://www.belfercenter.org/sites/default/files/2019-08/AttackingAI/AttackingAI.pdf.
344 Audrey Watters, “AI Is Ideological,” New Internationalist, November 1, 2017.
https://newint.org/features/2017/11/01/audrey-watters-ai
345 Id.
346 “Microsoft Salient Human Rights Issues,” Report - FY17, Microsoft; Google,
“Responsible Development of AI” (2018).
347 Microsoft, “The Future Computed: Artificial Intelligence and Its Role in Society” (2018).
348 Alex Hern, “‘Partnership on AI’ Formed by Google, Facebook, Amazon, IBM and Microsoft,” The Guardian
(September 28, 2016).
349 John Gerard Ruggie, “Global Governance and New Governance Theory,” Lessons from Business and Human
Rights, Global Governance 20, https://journals.rienner.com/doi/pdf/10.5555/1075-2846-20.1.5, (2014), 5.

Like corporations, governments around the world have adopted strategies for becoming leaders in the
development and use of AI, fostering environments congenial to innovators and corporations.350 However, in
most cases, policymakers have not directly addressed how the rights of children fit into their national strategies.351
While France’s strategy deals with the AI-related issues of achieving gender equality and implementing digital
literacy through education, the broader scope of impact on children is missing.352 An example of a country that
has taken a more proactive look at the potential benefits of AI for children is India, whose AI initiative focuses
on using AI in education, such as creating adaptive learning tools for customized learning, integrating intelligent
and interactive tutoring systems, adding predictive tools to inform preemptive action for students predicted to
drop out of school, and developing automated rationalization of teachers and customized professional
development courses.353 AI technologies should obviously be deployed in locating missing or exploited children,
and used in other ways to protect children.

Ultimately, both corporations and governments should think through how their AI systems and strategies can be
strengthened to maximize the benefits and minimize the harms of AI for children today, and in the future. The
role of artificial intelligence in children’s lives—from how children play, to how they are educated, to how they
consume information and learn about the world—is expected to increase exponentially over the coming years.
Thus, it is imperative that stakeholders come together now to evaluate the risks of using AI technologies and
assess opportunities to use artificial intelligence to maximize children’s well-being in a thoughtful and systematic
manner. As part of this assessment, stakeholders should work together to map the potential positive and
negative uses of AI on children’s lives, and develop a child rights-based framework for artificial intelligence that
delineates rights and corresponding duties for governments, educators, developers, corporations, parents, and
children around the world.

Recommendations from UNICEF on Deploying AI with children354

Corporations

Incorporate an inclusive design approach when developing child-facing products, which maximizes gender,
geographic and cultural diversity, and includes a broad range of stakeholders, such as parents, teachers, child
psychologists, and—where appropriate—children themselves.

Adopt a multi-disciplinary approach when developing technologies that affect children, and consult with civil
society, including academia, to identify the potential impacts of these technologies on the rights of a diverse
range of potential end-users.

Implement safety by design and privacy by design for products and services addressed to or commonly used by
children.

350 Council of Europe, “Recommendation CM/Rec(2018)7 of the Committee of Ministers to member States on
Guidelines to respect, protect and fulfil the rights of the child in the digital environment” (July 4, 2018).
351 Cédric Villani, “For a Meaningful Artificial Intelligence: Towards a French and European Strategy” (March 8,
2018).
352 Id.
353 NITI Aayog, “Discussion Paper: National Strategy for Artificial Intelligence,”
http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-DiscussionPaper.pdf
(June 2018).
354 https://www.unicef.org/innovation/media/10726/file/Executive%20Summary:%20Memorandum%20on%20Artificial%20Intelligence%20and%20Child%20Rights.pdf

Develop plans for handling especially sensitive data, including revelations of abuse or other harm that may be
shared with the company through its products.

Educators

Be aware of and consider using artificial intelligence-based tools that may enhance learning for students, such
as specialized products that can assist non-traditional learners and children with special needs.

Avoid the overuse of facial and behavioral recognition technologies, including for security purposes, in ways that
may constrain learning and appropriate risk-taking.

Governments

Set up awareness campaigns that help parents understand the importance of privacy for their children. Parents
should be aware of how their children’s data is being used and processed for diverse purposes, including
targeted ad campaigns and non-educational social media recommendations. They should also be aware of the
impacts of posting pictures or other information about their children to social media, and of the ways that what
they post can have a dramatic impact on their children’s futures.

Adopt a clear, comprehensive framework for corporations that imposes a duty of care connected to the handling
of children’s data, and provides an effective remedy (judicial, administrative or other) for breach. This
framework should incorporate human rights principles.

Establish a comprehensive national approach to the development of artificial intelligence that pays specific
attention to the needs of children as rights-bearers and integrates children into national policy plans.

Parents

Carefully review and consider avoiding the purchase and use of products that do not have clear policies on data
protection, security, and other issues that impact children.

Incorporate children into the decision-making process about how their data will be used, including whether to
post their information to social media sites and whether to engage with smart toys, helping children understand
the potential short- and long-term impacts of that use.

Identify how schools might be using artificial intelligence-based technologies to assist or surveil children, and
raise concerns if some of the policies or procedures are unclear or seem inappropriate—for example, by
disincentivizing creativity and exploration.

Encourage the use of artificial intelligence-based technologies when they seem likely to enhance learning and
when that positive benefit has been confirmed by peer-reviewed research and analysis.

