The document discusses the role of artificial intelligence (AI) in promoting social good, emphasizing its potential to address various global challenges while also highlighting the risks associated with its use. It includes contributions from multiple authors and institutions, providing insights and policy recommendations aimed at leveraging AI for sustainable development in the Asia-Pacific region. The report aims to guide policymakers in creating frameworks that balance the benefits of AI with ethical considerations and societal impacts.


ARTIFICIAL INTELLIGENCE FOR SOCIAL GOOD

Contents

Foreword
APRU 6
Christopher Tremewan, Secretary General

United Nations ESCAP 8
Mia Mikic, Director, Trade, Investment and Innovation Division

Keio University 10
Akira Haseyama, President

Introduction 12
Appendix 1: Summaries of Papers and Policy Suggestions
Appendix 2: Project History

Philosophical point of view for social implementation

Chapter 1 34
AI for Social Good: Buddhist Compassion as a Solution
Soraj Hongladarom

Chapter 2 50
Moralizing and Regulating Artificial Intelligence:
Does Technology Uncertainty and Social Risk Tolerance
Matter in Shaping Ethical Guidelines and Regulatory
Frameworks?
M. Jae Moon and Iljoo Park

Chapter 3 78
Definition and Recognition of AI and Its Influence on the
Policy: Critical Review, Document Analysis and Learning
from History
Kyoung Jun Lee

Institutional and technological design development through use of case-based discussion

Chapter 4 106
Regulatory Interventions for Emerging Economies
Governing the Use of Artificial Intelligence in Public
Functions
Arindrajit Basu, Elonnai Hickok and Amber Sinha

Chapter 5 154
AI Technologies, Information Capacity and Sustainable
South World Trading
Mark Findlay

Chapter 6 180
Governing Data-driven Innovation for Sustainability:
Opportunities and Challenges of Regulatory Sandboxes
for Smart Cities
Masaru Yarime

How to expand the capacity of AI to build a better society

Chapter 7 204
Including Women in AI-Enabled Smart Cities:
Developing Responsible, Gender-inclusive AI Policy and
Practice in the Asia-Pacific Region
Caitlin Bentley

Chapter 8 244
AI and the Future of Work: A Policy Framework for
Transforming Job Disruption into Social Good for All
Wilson Wong

Bios of authors 276

Acknowledgement/Partners 279

Foreword

By APRU

The dual character of artificial intelligence technology, with its promise for social good and its
threat to human society, is now a familiar theme. The authors of this report note that “the
challenge is how to balance the reduction of human rights abuses while not suffocating the
beneficial uses”. Offering a solution, they go on to say that “the realization of social good by
AI is effective only when the government adequately sets rules for appropriate use of data”.1

These observations go to the core of the challenge before all societies. Whose interests
do governments mainly represent? Are they accountable in real ways to their citizens or
are they more aligned to the interests of high-tech monopolies? As with all technologies,
we face the questions of ownership and of their use for concentrating political power and
wealth rather than ensuring the benefits are shared with those most in need of them.

The current COVID-19 crisis has shown that governments need to move decisively towards
the public interest. We confront crises within a new economic order of information
technology that “claims human experience as free raw material for hidden commercial
practices”2. The multidisciplinary studies in this report provide the knowledge and
perspectives of researchers from Singapore, Hong Kong, Korea, Thailand, India, and
Australia, combining local understanding with the international outlook that is essential
if policymakers are to respond with appropriate regulation (and taxation) and ensure that
technology companies with a global reach contribute to the common good.
The insights in these chapters underpin the report’s recommendations on developing an
enabling environment and a governance framework.

This is the third in a series of projects3 exploring the impact of AI on societies in the
Asia-Pacific region, offering research-based recommendations to policymakers. The
reports are intended to support work towards achieving the UN 2030 Agenda for
Sustainable Development and its goals.

Subsequent work might usefully look at the ways that social
movements can assist formal regulatory processes in shaping
AI policies in societies marked by inequalities of wealth, income
and political participation, and a biosphere at risk of collapse.

This project is a partnership between APRU, UN ESCAP and Google. International
circumstances permitting, we will work
together to hold a policy forum later in 2020 or early 2021 to
share these findings with policymakers and public officials from
around the region.

I thank our partners for their support and Professor Jiro Kokuryo,
Vice President of Keio University, Tokyo, along with members of
the Project Advisory Group for their leadership of this initiative.

Christopher Tremewan
Secretary General
Association of Pacific Rim Universities

1. Introduction p.4
2. Zuboff, S. (2019). The Age of Surveillance Capitalism. See ‘Definition’ in opening pages
3. AI for Everyone (2018) led by Keio University; The Transformation of Work in the Asia-Pacific (2019) led by
The Hong Kong University of Science and Technology. https://apru.org/resources/

By UN ESCAP

In 2015, governments agreed on the 2030 Sustainable Development
Agenda to “ensure peace and prosperity, and forge partnerships with
people and planet at the core”. In this global agenda, science, technology,
and innovation were identified both as goals in their own right and as a means of
supporting the achievement of other sustainable development goals.

Artificial intelligence (AI) offers a myriad of technological solutions to
today’s problems, including responding to COVID-19, enabling better
delivery of public services1, and supporting smart innovations for
the environment. However, the wave of optimism surrounding the
transformative potential of AI has been tempered by concerns regarding
possible negative impacts, such as unequal capabilities to design and
use this technology, privacy concerns, and bias in AI.

The world must ensure that AI-based technologies are used for the good
of our societies and their sustainable development. Public policies play
a critical role in promoting AI for social good. Governments can regulate
AI developments and applications so that they contribute to meeting
our aspirations of a sustainable future. Governments, in particular, are

encouraged to invest in promoting AI solutions and skills that bring
greater social good and help us “build back better” as we recover from
the impacts of the COVID-19 pandemic.

While much has already been written about AI and a world of possibilities
and limitations, this report is based on realities and experiences from
Asia and the Pacific, and provides various perspectives on what AI for
social good may look like in this region. More importantly, the report
offers suggestions from the research community on how policymakers
can encourage, use, and regulate AI for social good.

I look forward to more research collaborations with ARTNET on STI
Policy Network2 – a regional research and training network supporting
policy research to leverage science, technology, and innovation as
powerful engines for sustainable development in Asia Pacific.

Mia Mikic
Director
Trade, Investment and Innovation Division
Economic and Social Commission for Asia and the Pacific

1. Artificial Intelligence in the Delivery of Public Services (UN ESCAP, 2019).
https://www.unescap.org/publications/artificial-intelligence-delivery-public-services
2. https://artnet.unescap.org/sti

By Keio University

It has been a great pleasure for Keio University to take the academic
lead in such an important initiative as the UN/ESCAP-APRU-Google
project “AI for Social Good”. We are extremely pleased that the joint
efforts of government, academia, and industry have generated a set of
academically robust policy recommendations.

In our efforts to overcome COVID-19 with the help of information
technology (IT), we are reminded of the importance of having a firm
philosophy on the use of data. For example, we have seen first-hand the
effectiveness of IT-based “contact tracing” in controlling the spread of
the disease. At the same time, we are uncertain about the technology
and its implications on privacy. There are noticeably different views on
this topic concerning data and privacy, with cultural differences playing a
major role. Some cultures are happy to actively share data, while others
place greater emphasis and value on protecting privacy. At the same
time, although all cultures recognize the value of sharing data, they are
seemingly split on whether the data should belong to society or the
individual. The design of technologies and institutions varies depending on
such fundamental philosophies behind the governance of information.
We do not, however, want the world to be split along this divide, as this
leads to the fragmentation of data and everyone loses out. In order to
benefit from the great technologies that we possess, the world must
come together.

Since Keio University was founded by Yukichi Fukuzawa in the middle
of the 19th century, we have been a pioneer in introducing Western
thought to Asia. During his life, Fukuzawa advocated the introduction of
Western culture to Japan and placed great emphasis on relationships
between people for the creation of a modern civil society. Today, this
would encompass the idea of harmonious coexistence between people
and technology. From such a heritage, we are cognizant of our renewed
mission to bridge differences and create a new civilization that makes
full use of data while honoring the dignity of each and every person.
Of course, this is easier said than done. In reality, we face competition
among nations and businesses who all have interests in controlling,
monopolizing, and/or profiting from data. We should also be alert to the
possibility that technologies can actually widen rather than close the
inequality gap between the haves and have-nots.

With this in mind, academia should pledge to stay loyal only to evidence
and logic. Through such self-discipline, we can provide open forums to
orchestrate collaboration among various stakeholders to work together
for the good of humanity. This is a worthwhile endeavor, as we are certain
that artificial intelligence has the power to solve many issues, including
epidemics, and will help us to achieve the Sustainable Development
Goals proposed by the United Nations.

Akira Haseyama
President
Keio University

Introduction

Artificial
Intelligence for
Social Good
Yoshiaki Fukami and Jiro Kokuryo
Keio University

1. Harnessing AI to Achieve the United Nations Sustainable Development Goals
We live in a complex world in which various factors affecting human wellness are
interconnected and cannot be analyzed by simple models. For example, solutions to the
challenges of pandemics require understanding of not just biology and/or medicine but
of social activities, as well as the psychology of people who spread groundless or even
malicious rumors on social media.

Expectations are high that artificial intelligence (AI) can help develop solutions to many
issues facing the world by identifying patterns in the vast body of data that is now available
through today’s sensor networks. By enabling machines to identify and analyze patterns in
data, we will be able to detect issues and causal relations in complex systems that were
previously unknown. Such knowledge is essential in our efforts to overcome complex
issues.

We should also be mindful that both wellness and these complex issues are embedded in
local contexts that are diverse and depend on geographic and social backgrounds. While
recognizing such diversity, it would be useful to have a meta-level understanding of how AI
could be applied to accomplish our goals. An integrated and comprehensive vision, as well
as its related policies, are needed to realize effective approaches for more people to enjoy
the benefits of AI.

With this in mind, the United Nations (UN) has already begun to take a higher-level approach to solving social issues with AI. Set at the General Assembly (2015) and to be accomplished by 2030, the UN Sustainable Development Goals (SDGs) look to harness AI in support of inclusive and sustainable development while mitigating its risks. For example, SDGs look to:
• Provide people with access to data and information
• Support informed evidence-based decisions
• Eliminate inefficiencies in economic systems, as well as create new products and services to meet formerly unmet needs
• Provide data-driven diagnoses and prevent harmful events such as formerly unpredictable accidents
• Support city planning and development

This report understands AI for social good as being the use of AI to support SDG achievement by providing institutions and individuals with relevant data and analysis.

Table 1 is a non-exhaustive list of initiatives by the UN and other institutions to use AI in support of achieving SDGs. Supplemented with additional examples, the table mainly presents initiatives included in the UN Activities on Artificial Intelligence report by International Telecommunications Union (ITU, 2019). While the table presents projects that use AI for social good, it does not include initiatives that attempt to mitigate the risks of AI, such as to address bias or other ethical concerns.1

SDG | Use of AI

1 No Poverty
• Implementation of AI on the Global Risk Assessment Framework (GRAF) to understand future risk conditions to manage uncertainties and make data-driven decisions (ITU, 2019, p.54)

2 Zero Hunger
• FAMEWS global platform: Real-time situational overview with maps and analytics of Fall Armyworm infestations (ITU, 2019, p.3)
• Sudden-onset Emergency Aerial Reconnaissance for Coordination of Humanitarian Intervention (SEARCH) and Rapid On-demand Analysis (RUDA), using drones and AI to greatly reduce the time required to understand the impact of a disaster (ITU, 2018, p.54)

3 Good Health and Well-being
• Ask Marlo: An AI chatbot designed to provide sources for HIV-related queries in Indonesia (ITU, 2019, p.22)
• Timbre: Pulmonary tuberculosis screening based on the sound of a cough (ITU, 2019, p.22)

4 Quality Education
• AI to ensure equitable access to education globally: Provide hyper-personal education for students and access to learning content (UNESCO, 2019, p.12)
• Using AI and gamification to bridge language barriers for refugees: Machine-learned translation for lesser-resourced languages (UNESCO, 2019, p.11)

5 Gender Equality
• Sis bot chat: 24/7 online information services for women facing domestic violence (United Nations Women, 2019)

Table 1: Notable initiatives using AI in support of achieving SDGs (Created by Daum Kim)

1. It should be noted that most projects supporting Goal 5 (Achieve gender equality and empower all women and girls) focus on removing gender bias. We only found one initiative using AI to empower women – a project that uses AI to fight against domestic violence.

6 Clean Water and Sanitation
• Water-related ecosystem monitoring through the Google Earth Engine and the European Commission’s Joint Research Centre, using computer vision and machine learning to identify water bodies in satellite image data and map reservoirs (ITU, 2019, p.32)
• Funding analysis and prediction platform using Microsoft’s Azure Machine Learning Studio to capture global funding trends in the areas of environmental protection by donors and member states (ITU, 2019, p.32)

7 Affordable and Clean Energy
• Mitsubishi Hitachi Power Systems (MHPS) development of autonomous power plants: Real-time data monitoring to reduce supply or increase generation, with automated capabilities to manage power plants (Wood, 2019)
• Intelligent grid system to increase energy efficiency through AI (Microsoft & PwC, 2019, p.17)

8 Decent Work and Economic Growth
• Analysis of the impact on jobs and employment by investigating the rise and effect of reprogrammable industrial robots in developing countries, along with exploration of patent data in robotics and AI to understand the future impact of AI robots on work (ITU, 2019, p.9)

9 Industry, Innovation, and Infrastructure
• E-navigation: Exchange and analysis of marine information on board and ashore by electronic means for safety and security at sea (ITU, 2019, p.13)
• Maritime Autonomous Surface Ships (MASS): Attempts to apply automated ships (ITU, 2019, p.13)

10 Reduced Inequalities
• Implementation of AI in a Displacement Tracking Matrix (DTM) to detect and contextualize data such as migration, urban and rural land classification, and drone imagery in displacement camps (ITU, 2019, p.16)

11 Sustainable Cities and Communities
• Risk Talk: An online community to exchange climate risk transfer solutions. AI builds a neural network by mapping the expertise of the users through interactions on the platform (ITU, 2019, p.37)
• United for Smart Sustainable Cities initiative (U4SSC): A global platform for smart city stakeholders which advocates public policies to encourage the use of ICT to facilitate the transition to smart sustainable cities (ITU, 2019, p.29)

12 Responsible Consumption and Production
• AI-driven systems and robotics to reduce food waste by predicting customer demand (Fearn, 2019)
• iSharkFin: Identification of shark species from shark fin shapes to help users without formal taxonomic training (ITU, 2019, p.3)

13 Climate Action
• Shipping digitalization and electronic interchange with ports (ITU, 2019, p.12)
• Cycle-consistent Adversarial Networks (CycleGANs) to simulate what houses will look like after extreme weather events, allowing individuals to make informed choices for their climate future (Snow, 2019; Schmidt et al., 2019)

14 Life Below Water
• Maritime Single Window (MSW) to electronically exchange maritime information via a single portal without duplication (ITU, 2019, p.12)

15 Life on Land
• DigitalGlobe’s Geospatial Big Data platform (GBDX), using machine learning to analyze satellite imagery to predict human characteristics of a city and respond to health crises (ITU, 2018, p.50)
• Land governance and road detection through satellite “computer vision” (ITU, 2018, p.60)

16 Peace, Justice, and Strong Institutions
• International Monitoring System of the Comprehensive Nuclear-Test-Ban Treaty Organization (ITU, 2019, p.1)
• Toolkit on digital technologies and mediation in armed conflict (ITU, 2019, p.27)

17 Partnerships
• The International Telecommunication Union (ITU) Focus Group on AI for Health (FG-AI4H) (ITU, 2019, p.19)
• The AI for Good Global Summit: Identifying practical applications of AI towards the SDGs (ITU, 2019, p.19)
• Social Media Data Scraper: AI-based natural language processing to help understand the thoughts of users (ITU, 2019, p.38)


2. Report Objectives: Research-based Policy Suggestions


Having reviewed how AI can be applied to promote social good, we now turn to policies that adequately promote and control AI, so that it can be used for the good of society. This is important, as we believe our goals cannot be accomplished through a laissez-faire approach. An adequate governance system for the development, management, and use of AI is crucial in ensuring that the benefits of integrating and analyzing large quantities of data are maximized, while the potential risks are mitigated.

Following an agreement between APRU, UN ESCAP, and Google to share best practices and identify solutions to promote AI for social good in Asia-Pacific, the project AI for Social Good was launched in December 2018 at the Asia-Pacific AI for Social Good Summit in Bangkok. Each chapter of this report presents a unique research project (Table 2), as well as key conclusions and policy suggestions based on the findings. The projects were selected following a competitive process that sought research inputs to inform policy discussions in two broad areas:

1. Governance frameworks that can help address risks/challenges associated with AI, while maximizing the potential of the technology to be developed and used for good.

2. Enabling environments in which policymakers can promote the growth of an AI for Social Good ecosystem in their respective countries in terms of AI inputs (e.g., data, computing power, and AI expertise) and ensuring that the benefits of AI are shared widely across society.

Focusing on specific local contexts and with the objective of informing international policy debates on AI, the research reports offer a range of unique perspectives from across the Asia-Pacific region.

Chapter 1: AI for Social Good: Buddhist Compassion as a Solution
Soraj Hongladarom (Chulalongkorn University, Thailand)

Chapter 2: Moralizing and Regulating Artificial Intelligence: Does Technology Uncertainty and Social Risk Tolerance Matter in Shaping Ethical Guidelines and Regulatory Frameworks
M. Jae Moon and Iljoo Park (Yonsei University, Republic of Korea)

Chapter 3: Definition and Recognition of AI and its Influence on the Policy: Critical Review, Document Analysis and Learning from History
Kyoung Jun Lee (Kyung Hee University, Republic of Korea)

Chapter 4: Regulatory Interventions for Emerging Economies Governing the Use of Artificial Intelligence in Public Functions
Arindrajit Basu (team leader), Elonnai Hickok and Amber Sinha (Centre for Internet & Society, India)

Chapter 5: AI Technologies, Information Capacity, and Sustainable South World Trading
Mark Findlay (Singapore Management University)

Chapter 6: Governing Data-driven Innovation for Sustainability: Opportunities and Challenges of Regulatory Sandboxes for Smart Cities
Masaru Yarime (The Hong Kong University of Science and Technology)

Chapter 7: Including Women in AI-Enabled Smart Cities: Developing Gender-inclusive AI Policy and Practice in the Asia-Pacific Region
Caitlin Bentley (University of Sheffield; Australian National University)

Chapter 8: AI and the Future of Work: A Policy Framework for Transforming Job Disruption into Social Good for All
Wilson Wong (The Chinese University of Hong Kong)

Table 2: List of project titles and their authors

The AI for Social Good Project believes that objective, evidence-based, and logical academic analyses which are free from political and/or economic interests can play critical roles in the formation of sensible policies. At the same time, we are aware of the tendency of academics to stop at simply understanding the phenomena and not take a position in prescribing policies. Hence, we specifically asked the participants of this report to come up with short summaries of their findings, as well as suggested policy implications (see Appendix 1).

We also firmly believe in the effectiveness of a multi-disciplinary research approach for policy formation. To that end, the project organizers were careful to include both the technical and social sciences/humanities. We are extremely happy to report that all of the diverse teams, who shared a similar passion for taking a multi-disciplinary approach, were able to conduct fruitful discussions which led to even stronger projects.

3. Overview of the Recommendations

Based on discussions with the project members, this section presents the editors’ own overview of the policy agenda, giving readers a general idea of the issues that need to be addressed.

3.1. Developing a governance framework

3.1.1. Ensuring equality and equity
In Chapter 1, Hongladarom makes an important suggestion in that policymakers should start by agreeing on the basic principles for the governance of data. That is, he discusses how altruism, as opposed to individualism, should be seen as the guiding principle to realize the benefits of data sharing. He also emphasizes its usefulness in correcting existing social and economic inequalities, which may expand with advances in technology. While this assertion may be controversial, it nevertheless addresses the fundamental question of whether data should belong to the individual or society, since we know that the value of data increases as they accumulate. This line of thought is also significant in that it reflects the communal traditions of Asian societies.

In Wong’s discussion of AI’s impact on employment (Chapter 8), he also calls for social security policies and a fair re-allocation of resources in the governance of AI. The editors’ interpretation of such calls for social equity surrounding AI is that there may be strong scale advantages in the AI (or data) economy that give unfair advantages to already powerful entities, and that policy intervention is necessary for fairness and to ensure the productive power of AI is able to materialize. Bentley’s call (Chapter 7) for the inclusion of women as beneficiaries of AI is also along the same lines.

3.1.2. Managing risk to allow experimentation
All of the researchers recognize the potential for AI to both benefit and cause harm to society. The problem is, we will not know for sure what the positive and the negative impacts might be until we test them. It is therefore necessary to formulate a bold strategy to realize the full potential of AI and manage the risks involved at the same time.

In Chapter 6, Yarime looks at the possibility of taking a “sandbox” approach to testing. In this way, experimental use of technology can be undertaken for proof of concept in a controlled environment, and the results can then be used to take the technologies outside the “box” to be implemented in societies at large. He also discusses the importance of preparing mechanisms for compensation, such as insurance, to mitigate damage done to individuals or institutions despite all necessary preventative measures having been taken. This function is crucial, not just to protect citizens but also to promote innovation.

Uncertainty and unpredictability are inherent characteristics of emerging technologies and cannot be eliminated completely. It is worth remembering that we should not sacrifice innovation through excessive safety precautions. If we want to benefit from technological advancements, we must be willing to take certain risks. As such, we should be thinking about “managing” risk rather than “avoiding” risk.

3.1.3. Multi-stakeholder governance and co-regulation 3.1.4. Providing accountability
In Chapter 2, Moon and Park call upon the participation Basu, Hickok, and Sinha (Chapter 4) identify
of different stakeholders representing industries, accountability as one of five major areas where states
researchers, consumers, NGOs, international should play a role. This is an extremely important point
organizations, and policymakers in setting guidelines in light of the fact that AI can easily become a “black
for the ethical use of AI. Most AI applications require box” both technically and institutionally.
cooperation of multiple organizations, particularly in
the preparation of integrated datasets. For example, Accountability is a fundamental issue across various
automobile driving data from a car manufacturer aspects of AI utilization, from the collection of data
are only useful when combined with other data to the determination of evaluation functions in AI
sources. The value of such data is further enhanced algorithms. As such, it is vital that we review and
when combined with data from local and national evaluate the process by which AI functions, as well as
governments that control infrastructure, such as traffic identify appropriate entities to manage the technology.
lights. Each of these actors have different objectives
and, in the absence of adequate incentives, tend to Accountability must be realized not only through
tailor their systems to maximize the effectiveness legal systems, but also in the technical specifications
of their own services without regard for the needs of of systems that ensure transparency of data
others. Thus, not only do we need mechanisms to management. Due to the pace of technological
promote collaboration, governments should play a role advancement, this is a challenge. Hence, governments
in preparing them. need to assist in the development of a coordination
mechanism that can cope with the progress in a
Although a natural temptation under such timely manner.
circumstances is to centralize control, we must also
be aware of the dangers of a centralized approach 3.2. Developing an enabling environment
both technically and societally. On the technical side,
centralized databases are vulnerable to attacks and 3.2.1 Correctly understanding the technology
can result in large-scale data leaks once the system In Chapter 3, Lee cautions that, before discussing
is breached. On the societal side, a monopoly over policies concerning AI, we should first have a proper
data gives excessive power to the institution that understanding of the definition of AI. He points out
controls it, raising fears of a breach of human rights. the dangers of perceiving AI as simply machines that
A multi-stakeholder governance structure involving imitate and replace humans. Instead, he favors the
government, non-profit organizations, industry groups, perspective of the Organization for Economic Co-
and specialist groups should be established to provide operation and Development (2019) that defines AI
oversight of the major players controlling the data. It as “a machine-based system that can, for a given
is important that young policymakers and engineers set of human-defined objectives, make predictions,
participate in the discussion (Chapter 5). Given the recommendations, or decisions influencing real or
rapid advances in technology, we must also develop virtual environments” to form adequate expectations
and establish governance mechanisms that can for the benefits of the technology.
evolve in a timely manner.
An adequate definition of AI is therefore important,
as it greatly influences the design of the governance

18
structure around the technology. Whether or not we recognize "intelligence" and "personality" (or at least legal personality, as we recognize corporations as pseudo-personalities) in machines that seemingly have an intelligence of their own is becoming a serious topic of debate. If we are to adopt Lee's argument, then perhaps we should not.

3.2.2. Ensuring universal access to data
In Chapter 5, Findlay looks at how information asymmetries can create inequities for disadvantaged economies, and calls for systems to guarantee them access to data which enables them to negotiate fairly in international trade. This reminds us that AI cannot work on its own. In the application of AI, datasets, computing power, and expert analysts are all necessary to meet society's needs.

Naturally, the opportunities which computer networks create should not be underestimated. Recent advances in the reduction of communication costs, improvement of computing capabilities, and diffusion of sensing technology have facilitated the generation of big data that can then be analyzed by data scientists. Findlay's concern over inequity is especially important as there still remain many areas where access to essential data is limited and necessary data analyses are not possible. No matter how sophisticated the AI algorithm, it can only work effectively in an environment in which the dataset is properly generated and stored for analysis, there is the necessary computing power, and there is reliable and affordable access to expertise and the Internet.

It is worth remembering that network ubiquity does not exist yet either. There are still many people in the Asia-Pacific that do not have access to reliable, affordable, and high-speed Internet. As such, governments should continue their efforts to provide everyone with Internet connectivity so that they have access to the data that empowers them.

3.2.3. Standardizing data models
Standardization of data formats is important in order to ensure universal access to data for a more equitable use of the technology. Not only do differences in data models (formats) hinder data integration; a lack of standardization also nullifies the power of the ubiquitous Internet connectivity that enables us to gather data quickly and cheaply. In other words, aggregated data does not automatically mean big data suitable for AI analysis. Data must still be standardized to be collectively meaningful. In addition, data specifications (e.g., syntax and vocabulary) facilitate interoperability among distributed data resources and enable the generation of relevant big data. Furthermore, quality criteria enable data consumers to appropriately handle diversified data resources.

However, standardization is a complex issue, not because it is technically difficult but because it is a political process involving many different stakeholders pursuing different goals. Therefore, a top-down approach that forcefully imposes a single set of standards will not work. That said, governments should still play a facilitator role, together with the many non-governmental standardization initiatives, to prevent excessive proliferation of standards across every sector of society. Governments should also ensure interoperability among systems that use different standards.

3.2.4. Universal access to human resources for utilization of AI
Findlay also stresses the need for adequate assistance (e.g., technology, training, and domestic policy advice) to fully realize the benefits of AI. This is a reminder that AI systems require people to function. In other words, effective use of AI requires people to fine-tune the algorithm and prepare the dataset to be fed into the system. It is also necessary for people to interpret the outcome and give it practical meaning. As the use of AI grows, so too does the demand for data scientists who can use the technology for social good.
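The role of shared data specifications described in 3.2.3 can be made concrete with a small sketch. This is our illustration, not an example from the report; the field names, units, and date formats are hypothetical. Two providers report the same kind of measurement in incompatible data models, and only after both are mapped onto one agreed specification does the pooled data become collectively meaningful:

```python
# Hypothetical example: two data providers use different data models
# (field names, units, date syntax) for the same measurements.
RECORDS_A = [{"temp_f": 86.0, "date": "07/01/2019"}]        # Fahrenheit, MM/DD/YYYY
RECORDS_B = [{"temperature_c": 31.5, "day": "2019-07-02"}]  # Celsius, ISO 8601

# Shared specification: field "temperature_c" (Celsius) and field "date"
# (YYYY-MM-DD). Each provider supplies a mapping onto this common vocabulary.
def from_format_a(rec):
    month, day, year = rec["date"].split("/")
    return {"temperature_c": round((rec["temp_f"] - 32) * 5 / 9, 1),
            "date": f"{year}-{month}-{day}"}

def from_format_b(rec):
    return {"temperature_c": rec["temperature_c"], "date": rec["day"]}

standardized = ([from_format_a(r) for r in RECORDS_A]
                + [from_format_b(r) for r in RECORDS_B])

# Only the standardized records can be analyzed as a single dataset.
average = sum(r["temperature_c"] for r in standardized) / len(standardized)
print(average)  # 30.75
```

In practice this mapping role is played by shared schemas and vocabularies (e.g., JSON Schema or sector-specific data standards) rather than hand-written converters, but the point is the same: aggregation without a common specification yields data that cannot be analyzed together.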

However, as data scientists are fast becoming an expensive human resource available only to more developed economies and large corporations, their scarcity in less fortunate communities is limiting the opportunities to make use of AI.

When talking about human resources, it is important to recognize that not just software engineers and expert statisticians need to be trained. Senior executives and ordinary people also need to be aware of the benefits, risks, and mitigation measures surrounding AI, so that they are better informed and able to take advantage of the technology.

Another aspect is the need to educate engineers about the ethical, legal, and social implications (ELSI) of AI. As the power of AI grows, so too does its impact on ELSI. For the technology to be developed and used properly, governments need to ensure that technical experts are educated to be sensitive to the concerns of ordinary people regarding AI.

3.2.5. Removing the fear of using personal data
Another policy goal that the editors would like to propose is the removal of the (perceived) risk associated with personal data disclosure. We believe that it is important to make available as much data as possible for the use of AI for social good. Of course, this is only achievable when people feel safe about disclosing their information.

There are two main reasons why citizens and consumers are currently holding back from offering their data for social good. First, they fear that data disclosure can lead to discrimination. This is especially true in socially sensitive areas. For example, when disclosure of infection with a disease leads to exposure to social stigma and criticism for non-compliance with social norms, people will be reluctant to cooperate with contact tracing. Second, certain consumers dislike the idea of having their data commercially exploited without their consent.2 For example, the emergence of target marketing as the key revenue generator for online businesses has led to significant hostility towards the use of personal data.

To address this issue, there are technical and institutional solutions available. On the technology side, various forms of anonymization, encryption, and distributed approaches to managing data have been proposed. Institutionally, various forms of regulation are in place to protect individuals from breaches of privacy. For both types of solution, government involvement seems essential in light of the incentives that exist, particularly in the private sector, to keep data secret for financial reasons. Not only should incentives be offered to make data public, but enforcement power must also be used in the protection of privacy.
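One of the technical measures mentioned above, pseudonymization, can be sketched as follows. This is a minimal illustration of ours, not a production privacy mechanism; the record fields are invented, and real deployments combine such techniques with the institutional safeguards discussed here:

```python
import hashlib
import secrets

# Salt kept secret by the data holder; without it, third parties cannot
# recompute the mapping from names to pseudonyms.
SALT = secrets.token_bytes(16)

def pseudonymize(identifier: str, salt: bytes = SALT) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256(salt + identifier.encode("utf-8")).hexdigest()

record = {"name": "A. Citizen", "diagnosis": "influenza"}

# What gets shared for analysis: the direct identifier is removed, but the
# pseudonym is stable, so records about the same person can still be linked.
shared = {"person_id": pseudonymize(record["name"]),
          "diagnosis": record["diagnosis"]}

assert "name" not in shared
assert shared["person_id"] == pseudonymize("A. Citizen")
```

Note that pseudonymized data can often still be re-identified from context, which is one reason regulation and enforcement remain necessary alongside the technical measures.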

2. We should also be aware of people who are willing to give their information away for free, because they feel compelled or see a benefit in doing so.

References

International Telecommunication Union. (2018). United Nations Activities on Artificial Intelligence (AI) 2018. https://www.itu.int/dms_pub/itu-s/opb/gen/S-GEN-UNACT-2018-1-PDF-E.pdf

International Telecommunication Union. (2019). United Nations Activities on Artificial Intelligence (AI) 2019. https://www.itu.int/dms_pub/itu-s/opb/gen/S-GEN-UNACT-2019-1-PDF-E.pdf

Microsoft & PwC. (2019). How AI can enable a sustainable future. https://www.pwc.co.uk/sustainability-climate-change/assets/pdf/how-ai-can-enable-a-sustainable-future.pdf

Organisation for Economic Co-operation and Development. (2019). Recommendation of the Council on Artificial Intelligence. https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449

Schmidt, V., Luccioni, A., Mukkavilli, S. K., Sankaran, K., & Bengio, Y. (2019). Visualising the Consequences of Climate Change Using Cycle-Consistent Adversarial Networks. https://arxiv.org/pdf/1905.03709.pdf

Snow, J. (2019). How artificial intelligence can tackle climate change. https://www.nationalgeographic.com/environment/2019/07/artificial-intelligence-climate-change/

United Nations Educational, Scientific and Cultural Organization. (2019). Artificial intelligence in education, compendium of promising initiatives: Mobile Learning Week 2019. https://unesdoc.unesco.org/ark:/48223/pf0000370307

United Nations General Assembly. (2015). Transforming our world: the 2030 Agenda for Sustainable Development. https://doi.org/10.1163/157180910X12665776638740

United Nations Women. (2019). Using AI in accessing justice for survivors of violence. https://www.unwomen.org/en/news/stories/2019/5/feature-using-ai-in-accessing-justice-for-survivors-of-violence

Wood, J. (2019). This is how AI is changing energy. https://spectra.mhi.com/this-is-how-ai-is-changing-energy
Appendix 1

Summaries of Papers and Policy Suggestions
AI for Social Good: A Buddhist Compassion as a Solution
Soraj Hongladarom, Department of Philosophy, Faculty of Arts, Chulalongkorn University

Abstract

In this paper, I argue that in order for AI to deliver social good, it must be ethical first. I employ the Buddhist notion of compassion (karunā) and argue that for anything to be ethical, it must exhibit the qualities that characterize compassion, namely the realization that everything is interdependent and the commitment to alleviating suffering in others. The seemingly incoherent notion that a thing (e.g., an AI machine or algorithm) can be compassionate is solved by the view—at this current stage of development—that algorithm programmers need to be compassionate. This does not mean that a machine cannot itself become compassionate in another kind of sense. For instance, it can become compassionate if it exhibits the qualities of a compassionate being. Ultimately, it does not matter whether or not a machine is conscious in the normal sense. As long as the machine exhibits the outward characterization of interdependence and altruism, it can be said to be compassionate. I also argue that the ethics of AI must be integral to the coding of its program. In other words, the ethics—how we would like the AI to behave based on our own ethical beliefs—needs to be programmed into the AI software from the very beginning. I also reply to several objections against this idea. In essence, coding ethics into a machine does not imply that such ethics belongs solely to the programmer, nor does it mean that the machine is thereby completely estranged from its socio-cultural context.

Policy Recommendations

1. Programmers and software companies need to implement compassionate AI programs. This is the key message from this article. No matter what kind of "social good" the AI is supposed to bring about, the software needs to be compassionate and ethical in the Buddhist sense.
2. The public sector needs to ensure that rules and regulations are in place in order to create an environment that facilitates the development of ethical AI for social good. Such rules and regulations will ensure that private companies have a clear set of directives to follow, and will create public trust in the works of the private sector.

Moralizing and Regulating Artificial Intelligence: Does Technology Uncertainty
and Social Risk Tolerance Matter in Shaping Ethical Guidelines and Regulatory
Frameworks?
M. Jae Moon and Iljoo Park, Institute for Future Government, Yonsei University

Examining technology uncertainty and social risk in the context of disruptive technologies, this study reviews the development of ethical guidelines for AI developed by different actors as a loosely institutional effort to moralize AI technologies. Next, we specifically examine the different regulatory positions of four selected countries on autonomous vehicles (AVs). Based on the status of moralizing and regulating AI, several policy implications are presented as follows:

1. Moralizing disruptive technologies should precede regulating them, and should be fully discussed and shared among different stakeholders beforehand. Before a society adopts and enacts specific regulatory frameworks for disruptive technologies, ethical guidelines (i.e., AI principles or AI ethical guidelines) must be jointly formulated based upon a thorough deliberation of particular disruptive technologies by different stakeholders representing industries, researchers, consumers, NGOs, international organizations, and policymakers.
2. AI ethical guidelines should support sustainable and human-centric societies by minimizing the negative socio-economic and international consequences of disruptive technologies (e.g., inequality, unemployment, psychological problems), while maximizing their potential benefits for environmental sustainability and quality of life, among others.
3. Once a general consensus is reached on general ethical guidelines, they should be elaborated and specified in detail, targeting individual stakeholder groups representing different actors and sectors. Specific AI ethical guidelines should be developed and customized for AI designers, developers, adopters, users, etc., based on the AI lifecycle. In addition, industry- and sector-specific ethical guidelines should be developed and applied to each sector (care industry, manufacturing industry, service industry, etc.).
4. In regulating AI and other disruptive technologies, governments should align regulations with key values and goals embedded in various AI ethical guidelines (transparency, trustworthiness, lawfulness, fairness, security, accountability, robustness, etc.) and aim to minimize the potential social risks and negative consequences of AI by preventing and restricting possible data abuses or misuses, ensuring fair and transparent algorithms, and establishing institutional and financial mechanisms through which the negative consequences of AI are systematically corrected.
5. Governments should ensure the quality of AI ecosystems by increasing government and non-government investment in R&D and human resources for AI, by maintaining fair market competition among AI-related private companies, and by promoting AI utilities for social and economic benefits.
6. Governments should carefully design and introduce regulatory sandbox approaches that prevent unnecessarily strict and obstructive regulations which may impede AI industries, while also facilitating the development of AI and the exploration of AI-related innovative business models.

Definition and Recognition of AI and its Influence on the Policy: Critical Review,
Document Analysis and Learning from History
Kyoung Jun Lee, School of Management, Kyung Hee University
Yujeong Hwangbo, Department of Social Network Science, Kyung Hee University

Abstract

Opacity of definitions hinders policy consensus; and while legal and policy measures require agreed definitions, what artificial intelligence (AI) refers to has not been made clear, especially in policy discussions. Incorrect or unscientific recognition of AI is still pervasive and misleads policymakers. Based on a critical review of AI definitions in research and business, this paper suggests a scientific definition of AI. AI is a discipline devoted to making entities (i.e., agents and principals) and infrastructures intelligent. That intelligence is the quality which enables entities and infrastructures to function (not think) appropriately (not humanlike) as an agent, principal, or infrastructure. We report that the Organisation for Economic Co-operation and Development (OECD) changed its definition of AI in 2017, and how it has since improved it from "humanlike" to "rational" and from "thinking" to "action". We perform document analysis of numerous AI-related policy materials, especially those dealing with the job impacts of AI, and find that many documents which view AI as a system that "mimics humans" are likely to over-emphasize the job loss incurred by AI. Most job loss reports have either a "humanlike" definition, a "human-comparable" definition, or no definition. We do not find "job loss" reports that rationally define AI, except for Russell (2019). Furthermore, by learning from history, we show that automation technologies such as photography, automobiles, ATMs, and Internet intermediation did not reduce human jobs. Instead, we confirm that automation technologies, as well as AI, create numerous jobs and industries, on which our future AI policies should focus. Similar to how machine learning systems learn from valid data, AI policymakers should learn from history to gain a scientific understanding of AI and an exact understanding of the effects of automation technologies. Ultimately, good AI policy comes from a good understanding of AI.

Policy Recommendations

1. Policy experts should be well educated about what AI is and what is really going on in AI research and business. Specifically, AI should be considered a discipline that allows entities and infrastructures to become intelligent. This intelligence is the quality that enables agents, principals, and infrastructures to function appropriately. AI should not be considered a humanlike or super-human system. As such, previous AI policies based on the old paradigm should be rewritten.
2. Governments should create programs to educate administrative officials, policy experts in publicly owned research institutes, and lawmakers in national assemblies.
3. Similar to how machine learning systems learn from valid data, policymakers should learn from history, as well as recognize the positive impacts of automation technology. New AI policies should then be established based on this new recognition.
4. When adopting AI, governments and society should recognize its characteristics as an optimization system in order to create more public benefit, faster business outcomes, and less risk.

Regulatory Interventions for Guiding and Governing the Use of Artificial Intelligence
by Public Authorities
Arindrajit Basu, Elonnai Hickok and Amber Sinha, Centre for Internet & Society, India

Summary

The use of artificial intelligence (AI)-driven decision-making in public functions has been touted around the world as a means of augmenting human capacities, removing bureaucratic fetters, and benefiting society. This certainly holds true for emerging economies. Due to a lack of government capacity to implement these projects in their entirety, many private sector organizations are involved in traditionally public functions, such as policing, education, and banking. AI-driven solutions are never "one-size-fits-all" and exist in symbiosis with the socio-economic context in which they are devised and implemented. As such, it is difficult to create a single overarching regulatory framework for the development and use of AI in any country, especially those with diverse socio-economic demographics like India. Configuring the appropriate regulatory framework for AI correctly is important. Heavy-handed regulation or regulatory uncertainty might act as a disincentive for innovation due to compliance fatigue or fear of liability. Similarly, regulatory laxity or forbearance might result in the dilution of safeguards, resulting in a violation of constitutional rights and human dignity. By identifying core constitutional values that should be protected, this paper develops guiding questions to devise a strategy that can adequately chart out a regulatory framework before an AI solution is deployed in a use case. This paper then goes on to test the regulatory framework against three Indian use cases studied in detail – predictive policing, credit rating, and agriculture.

Key Recommendations

1. To adequately regulate AI in public functions, regulation cannot be entirely "responsive", as the negative fallout of the use case may be debilitating and greatly harm constitutional values. We therefore advocate for "smart regulation" – a notion of regulatory pluralism that fosters flexible and innovative regulatory frameworks by using multiple policy instruments, strategies, techniques, and opportunities to complement each other.
2. The five key values that must be protected by the state across emerging economies are: (1) agency; (2) equality, dignity, and non-discrimination; (3) safety, security, and human impact; (4) accountability, oversight, and redress; and (5) privacy and data protection.
3. The scope, nature, and extent of regulatory interventions should be determined by a set of guiding questions, each of which has implications for one or more constitutional values.
4. Whenever the private sector is involved in a "public function", either through a public–private partnership or in a consultation capacity, clear modes, frameworks, and channels of liability must be fixed through uniform contracts. The government may choose to absorb some of the liability from the private actor. However, if that is the case, this must be clearly specified in the contract, and clear models of grievance redressal should be highlighted.
5. The case studies point to a need for constant empirical assessment of socio-economic and demographic conditions before implementing AI-based solutions.

6. Instead of replacing existing processes in their entirety, decision-making concerning AI should always look to identify a specific gap in an existing process and add AI to augment efficiency.
7. The government must be open to feedback and scrutiny from private sector and civil society organizations, as that will foster the requisite amount of transparency, trust, and awareness regarding the solution – all of which are challenges in emerging economies.
8. In situations where the likelihood or severity of harm cannot be reasonably ascertained, we recommend adopting the precautionary principle from environmental law and suggest that the solution not be implemented until scientific knowledge reaches a stage where it can reasonably be ascertained.

VALUE: AGENCY
• Is the adoption of the solution mandatory?
• Does the solution allow for end-user control?
• Is there a vast disparity between primary user and impacted party?

VALUE: EQUALITY, DIGNITY, AND NON-DISCRIMINATION
• Is the AI solution modelling or predicting human behavior?
• Is the AI solution likely to impact minority, protected, or at-risk groups?

VALUE: SAFETY, SECURITY, AND HUMAN IMPACT
• Is there a high likelihood or high severity of potential adverse human impact as a result of the AI solution?
• Can the likelihood or severity of adverse impact be reasonably ascertained with existing scientific knowledge?

VALUE: ACCOUNTABILITY, OVERSIGHT, AND REDRESS
• To what extent is the AI solution built with "human-in-the-loop" supervision prospects?
• Are there reliable means for retrospective adequation?
• Is the private sector partner involved with either the design of the AI solution, its deployment, or both?

VALUE: PRIVACY AND DATA PROTECTION
• Does the AI solution use personalized data, even in anonymized form?
AI Technologies, Information Capacity, and Sustainable South World Trading
Mark Findlay, Singapore Management University, School of Law – Centre for AI and Data Governance

This research is supported by the National Research Foundation, Singapore under its Emerging Areas Research Projects (EARP) Funding Initiative. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.

Abstract

This paper presents a unique research methodology for testing the assumption that AI-assisted information technologies can empower vulnerable economies in trading negotiations. Its social good outcome is enhanced through additionally enabling these economies to employ the technology for evaluating more sustainable domestic market protections. The paper is in two parts; the first presents the argument and its underpinning assumption that information asymmetries jeopardize vulnerable economies in trade negotiations and decisions about domestic sustainability. We seek to use AI-assisted information technologies to upend situations where power is the discriminator in trade negotiations because of structural information deficits, and where the outcome of such deficits is the economic disadvantage of vulnerable stakeholders. The second section is a summary of the empirical work piloting a more expansive engagement with trade negotiators and AI developers. The empirical project provides a roadmap for policymakers to adopt model reflections from focus groups and translate these into a real-world research experience. The research method has three phases, designed to include a diverse set of stakeholders – a scoping exercise, a solution exercise, and a strategic policy exercise. The empirical achievement of this paper is validating the proposed action-oriented methodology through a "shadowing" pilot device, in which representative groups engaged their role-plays and represented essential understandings. General findings from the two focus groups are provided.

Principal Policy Projections

• At the initiation of the project, an intensive needs analysis should be conducted, grounded in developing local skills around what questions to ask regarding the information deficit, then translating into learning about what format to store and order data in, and what data can accomplish in trading negotiations and domestic market sustainability. This exercise will empower domestic counterparts and achieve ownership. It should be a collaboration between ESCAP, sponsor companies, and agencies;
• Trading information asymmetries should be addressed by sponsor companies, donors, and associated international agencies, through AI-assisted technologies for domestically empowering information access capacity building. UN ESCAP should promote the use of AI-assisted technologies to flatten information asymmetries that exist among trading partners in the region;
• While AI has the potential for empowering presently disadvantaged economies to negotiate on equal terms to raise the well-being of all people, such empowerment will not materialize without adequate assistance, in the form of technology, training, and domestic policy advice;
• Product sustainability is essential for the ongoing success of the project. Sponsor companies, and ESCAP in oversight, should ensure certain crucially sustainable deliverables covering: data sources, data integrity and validation, accountability, and the technical sustainability of technical products. These issues require allied services from sponsors, providers, advisers, and locally trained experts.

Governing Data-driven Innovation for Sustainability: Opportunities and Challenges of
Regulatory Sandboxes for Smart Cities
Masaru Yarime, Division of Public Policy, The Hong Kong University of Science and Technology

Abstract

Data-driven innovation plays a crucial role in tackling sustainability issues. Governing data-driven innovation is a critical challenge in the context of accelerating technological progress and deepening interconnection and interdependence. AI-based innovation becomes robust by involving the stakeholders who will interact with the technology early in development, obtaining a deep understanding of their needs, expectations, values, and preferences, and testing ideas and prototypes with them throughout the entire process. The approach of regulatory sandboxes plays an essential role in governing data-driven innovation in smart cities, which face the difficult challenge of collecting, sharing, and using various kinds of data for innovation while addressing societal concerns about privacy and security. How regulatory sandboxes are designed and implemented can be locally adjusted, based on the specificities of the economic and social conditions, to maximize the effect of learning through trial and error. Regulatory sandboxes need to be both flexible enough to accommodate the uncertainties of innovation and precise enough to impose society's preferences on emerging innovation, functioning as a nexus of top-down strategic planning and bottom-up entrepreneurial initiatives. Data governance is critical to maximizing the potential of data-driven innovation while minimizing risks to individuals and communities. With data trusts, the organizations that collect and hold data permit an independent institution to make decisions about who has access to data under what conditions, how that data is used and shared and for what purposes, and who can benefit from it. A data linkage platform can facilitate close coordination between the various services provided and the data stored in a distributed manner, without maintaining an extensive central database. As the provision of personal data would require the consent of people, it needs to be clear and transparent to relevant stakeholders how decisions can be made in procedures concerning the use of personal data for public purposes. The process of building a consensus among residents needs to be well-integrated into the planning of smart cities, with the methodologies and procedures for consensus-building specified and institutionalized in an open and inclusive manner. As application programming interfaces (APIs) play a crucial role in facilitating interoperability and data flow in smart cities, open APIs will facilitate the efficient connection of various kinds of data and services.

Policy Recommendations

1. Data governance of smart cities should be open, transparent, and inclusive to facilitate data sharing and integration for data-driven innovation while addressing societal concerns about security and privacy.
2. The procedures for obtaining consent for the collection and management of personal data should be clear and transparent to relevant stakeholders, with specific conditions for the use of data for public purposes.
3. The process of building a consensus among residents should be well-integrated into the planning of smart cities, with the methodologies and procedures for consensus-building specified and institutionalized in an open and inclusive manner.
4. APIs should be open to facilitate interoperability and data flow for the efficient connection of various kinds of data and sophisticated services in smart cities.
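The idea of a data linkage platform, coordinating services without an extensive central database, can be sketched as follows. This is our illustration rather than a design from the chapter; the service names, record fields, and identifiers are invented:

```python
# Hypothetical example: each city service holds its own data and exposes a
# narrow query interface (in practice, an open API). The linkage platform
# combines answers at query time instead of copying data into one central store.
TRANSPORT_DB = {"resident-42": {"trips_per_week": 9}}
ENERGY_DB = {"resident-42": {"kwh_per_week": 31.0}}

def transport_api(resident_id):
    return TRANSPORT_DB.get(resident_id, {})

def energy_api(resident_id):
    return ENERGY_DB.get(resident_id, {})

def linkage_platform(resident_id, services):
    """Join per-service answers on demand; nothing is stored centrally."""
    combined = {}
    for query in services:
        combined.update(query(resident_id))
    return combined

profile = linkage_platform("resident-42", [transport_api, energy_api])
print(profile)  # {'trips_per_week': 9, 'kwh_per_week': 31.0}
```

Because each service keeps its own store, consent checks and access conditions can be enforced per query, which is the governance property that data trusts are meant to supervise.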

Including Women in AI-enabled Smart Cities: Developing Gender-inclusive AI Policy
and Practice in the Asia-Pacific Region
Caitlin Bentley, Katrina Ashton, Brenda Martin, Elizabeth Williams, Ellen O’Brien, Alex Zafiroglu, and Katherine
Daniell, 3A Institute, Australian National University

Smart city initiatives are widespread across the Asia-Pacific region. AI is increasingly being used to augment and scale smart city applications in ways that can potentially support social good. We critically reviewed the literature on two key AI applications for social good: increasing safety and security in public spaces through the use of facial recognition technology, and improving mobility through AI-enabled transportation systems, including smart traffic lights and public transportation route optimization. We find that there is an urgent need to consider how best to include women in the design, development, management, and regulation of AI-enabled smart cities. After all, poorly designed or delivered AI-enabled smart city technology could potentially negatively and differentially impact women's safety, security, and mobility. To address these pitfalls, we conducted interviews with a range of female and feminist scholars, activists, and practitioners – many of whom are working in the technology space. We carried out an analysis using the 3A Framework. This Framework focuses on investigating smart city initiatives through the themes of agency, autonomy, assurance, interfaces, indicators, and intent. We suggest the following actions be taken: (1) commit to gender-inclusive policymaking and praxis in national smart city policy; (2) institute formal consultation and participatory processes involving diverse women and community representatives through all stages of a smart city initiative; and (3) devise clearer roles and responsibilities surrounding the protection and empowerment of women in AI-enabled smart city initiatives.

1. Commit to gender-inclusive policymaking and praxis in national smart city policy: High-level national smart city documentation frequently makes reference to social inclusion goals, but little is mentioned on how social inclusion is practiced. AI-enabled smart cities involve an interlaced network of actors, such as government ministries, private sector actors, and community groups. Governments can play a key coordination role, whilst guiding the establishment of common goals and practices. Moreover, countries across Asia-Pacific should review national policy to take into account the interconnected nature of smart city initiatives, and how they connect to multiple targets across the Sustainable Development Goals (SDGs). National governments should institute a process to develop indicators that map smart city progress in the pursuit of achieving the SDGs, namely SDG 5 and SDG 11.
2. Institute formal consultation and participatory processes involving diverse women and community representatives through all stages of a smart city initiative: Our research identifies new models of design, community ownership, and public debate supported by AI. Municipal actors, industry partners, and women's community groups should invest greater resources into experimenting with innovative engagement and representation models, as well as building into project plans the time needed for engagement. The 3A Framework can be used to guide discussions with communities, women, and their representatives. Our research highlights how the Framework sheds light on multiple and interrelated systemic factors that need to be taken into consideration, rather than focusing only on the perspectives of individuals.
3. Devise clearer roles and responsibilities surrounding the protection and empowerment of women in AI-enabled smart city initiatives: There is an urgent need for policymakers to establish greater transparency and clearer rules around the handling, ownership, and protection of data with, for, and about women. A better understanding of the impacts, not only the performance, of these systems should guide this discussion. Consequences for mistreatment, harm, and mismanagement across all levels of smart city initiatives should be carefully and clearly outlined. More opportunities for women to be consulted and involved in the design, management, evaluation, and regulation of AI-enabled smart city initiatives are warranted.

AI and the Future of Work: A Policy Framework for Transforming Job Disruption into
Social Good for All
Wilson Wong, Chinese University of Hong Kong

Abstract

This paper examines the impact of artificial intelligence (AI) on the future of work to develop a policy framework for transforming job disruption caused by AI into social good for all. While there is a considerable amount of research and discussion on the impact of AI on employment, there is relatively little research on what governments should do to turn the risk and threat of AI into job opportunities and social good for all. This paper consists of two major parts. It first builds on the typology of job replacement and AI to establish a policy framework on the role of the government, as well as the policy responses it should make to address various concerns and challenges. On the principle of “rise with AI, not race with it”, the government must play an active or even aggressive role, not only in knowledge retraining, skill-building, and job re-creation, but also in social security and a fair re-allocation of resources during the job disruption process. Second, the paper conducts a survey of national AI strategies to assess the extent to which the AI policy of job disruption is addressed by other countries. It concludes that many countries, especially developing ones, are not well prepared for AI, and most countries seem to be overlooking fairness and equity issues under job disruption at the arrival of the AI era.

Policy Summary: Major Recommendations

1. Theory and Practice: Governments should pursue greater alignment and integration between theory and policy in formulating their AI strategies. For example, they should discuss how enabling technologies, as well as social and creative intelligence, are included in their retraining, reskilling, and education programs.
2. International Organizations and the Developing World: AI impacts both the developed and developing worlds. Many developing countries are ill-prepared due to limitations in resources and other factors. International organizations such as the United Nations (UN) should offer more support to these nations to help them set up their own AI strategies, evaluate threats and opportunities, and formulate solutions.
3. AI for All (No One Left Behind): Equity, social security, and fair re-distribution – such as introducing a Universal Basic Income (UBI) to protect vulnerable populations – are the missing pieces in the AI strategies of most countries. Governments should confront these important issues head on and incorporate them explicitly in their national AI strategies.

Appendix 2

Project History

The AI for Social Good Project is the heir to two series of policy advocacy initiatives on the digital economy by the Association of Pacific Rim Universities (APRU). The first series is the Digital Economy initiative and its successor, the AI for Everyone project, hosted by Keio University. The second series, led by The Hong Kong University of Science and Technology, is “Transformation of Work in Asia Pacific in the 21st Century: Key Policy Implications”.

The project also stems from the partnership UN ESCAP has been building with ARTNET on STI Policy – a regional research and training network supporting policy research to leverage science, technology, and innovation as powerful engines for sustainable development in Asia Pacific.

In addition to the authors represented in this project, the following advisory board members, to whom we are extremely grateful for their valuable input, were chosen to provide feedback about the projects.

Name Affiliation
Hideaki Shiroyama The University of Tokyo

Pascale Fung The Hong Kong University of Science and Technology

Toni Erskine Australian National University

Yudho Giri Sucahyo University of Indonesia

P. Anandan Wadhwani Institute of AI, Mumbai

Hoyoung Lee Korea Information Society Development Institute

Punit Shukla World Economic Forum

Yongyuth Yuthavong National Science and Technology Development Agency

Table 1: List of advisory board members

To kick off this collaborative project, the first face-to-face meeting was held on June 5, 2019 at Keio University’s Mita campus. A virtual policy forum for the dissemination and discussion of project findings is planned to be held later in the year.

One last face-to-face meeting before final submission of the output, together with an open-to-public forum, was originally scheduled for February 20 – 21, 2020. However, due to the COVID-19 pandemic, it was replaced by an online meeting of just the project members. The project outputs were submitted in May 2020 for editing and subsequent publication in August 2020. When it is safe to do so, an open-to-public forum will be held.

The project was organized by the following members:

Name Affiliation
Jiro Kokuryo, Project Coordinator Keio University

Yoshiaki Fukumi Keio University

Cherry Wong Keio University

Daum Kim Keio University

Minkyoung Cho Keio University

Christina Schönleber APRU

Tina Lin APRU

Sanghyun Lee Google

Jake Lucchi Google

Marta Perez Cuso UN ESCAP

Table 2: Organizing members

We are grateful for all the efforts of those involved and sincerely hope that this document will help
policymakers in the region accomplish their goals.

Philosophical Point of View for Social Implementation

AI for Social Good: Buddhist Compassion as a Solution

Soraj Hongladarom
Department of Philosophy, Faculty of Arts, Chulalongkorn University

Abstract

In this paper, I argue that in order for artificial intelligence (AI) to deliver social good, it must
be ethical first. I employ the Buddhist notion of compassion (karunā) and argue that for
anything to be ethical, it must exhibit qualities that characterize compassion, namely the
realization that everything is interdependent and the commitment to alleviate suffering
in others. The seemingly incoherent notion that a thing (e.g., an AI machine or algorithm) can be compassionate is resolved by the view that – at the current stage of development – it is the programmers of algorithms who need to be compassionate. This does not mean that a machine
cannot become compassionate in another sense. For instance, a machine can become
compassionate if it exhibits the qualities of a compassionate being, regardless of
whether it is conscious. As long as the machine exhibits the outward characterization of
interdependence and altruism, then it can be said to be compassionate. This paper also
argues that the ethics of AI has to be integral to the coding of its program. In other words,
the ethics (i.e., how we would like the AI to behave based on our ethical standpoint) needs to
be programmed into the AI software from the very beginning. The paper also replies to several objections against this idea. To summarize, coding ethics into a machine does not
imply that the ethics thus coded belongs solely to the programmer, nor does it mean that
the machine is thereby completely estranged from its socio-cultural context.

Introduction

In the past few years, few innovations in technology have aroused as much public interest and discussion as AI. After many years of lying in the doldrums, with many broken promises in past decades, AI once again became a focal point after it defeated both the European champion and the reigning world champion at the ancient game of Go in 2015 and 2016. The defeat was totally unexpected, as computer scientists and the public alike believed that Go was far more complex than chess. Because the number of possible moves was too vast for any computer to search exhaustively, many believed that mastery of Go represented a supreme achievement of human beings, one that could not be bested or emulated by a machine. Thus,
there was worldwide sensation after both the European champion, Fan Hui, and Lee Sedol, the world champion, were soundly defeated at Go by a machine within a relatively short span of time. Following this AI victory, it became clear that no human could ever defeat a machine in a board game.

What ensued was an explosion in the power of AI — a resurgence after many years of dormancy and repeated failed promises. AI has been with us for many decades. The computer scientists who developed it believed that a computer could actually mimic the workings of the human brain. The project seemed promising at first; for example, computers could play Tic-Tac-Toe, Checkers, and eventually chess. Some progress was also made in the fields of natural language processing and machine translation. Nonetheless, these successes were not as spectacular as the scientists themselves had envisioned, and AI was unable to fulfil the expectations that its developers had originally claimed. For example, the expert system environment developed during the early 1980s was prone to mistakes and thus unsuitable for normal use. The market for expert systems largely failed. Many promises of AI systems at that time, such as speech recognition, machine translation, and others, were not fulfilled. As a result, funding was largely cut, and AI research made very little progress. These failures were largely due to the fact that computers at that time lacked power and data, so their predictive power remained limited.

The software that created history, AlphaGo, was developed by DeepMind, a British company founded in 2010 and acquired by Google in 2014. The company made history in 2015 and 2016 when its AI creation, AlphaGo, defeated both the European champion and the world champion of Go. The technique used by AlphaGo was radically different from that of Deep Blue, a program developed by IBM which defeated the chess world champion, Garry Kasparov, in 1997. Deep Blue used GOFAI, or “good old-fashioned AI”, to blindly search for the best possible moves using a brute-force search technique. This technique proved unfeasible for much more complex games such as Go, where the number of possible moves exceeds the number of atoms in the universe. Thus, AlphaGo used a new technique which was also being developed at that time. The new technique, known as deep learning, avoided the brute-force search technique and instead relied on very large amounts of data. The program learned from this data to determine the best moves. The data from millions of past moves made by humans limited the number of possible moves that the algorithm would need to consider, thus enabling it to focus on the most relevant moves. This, coupled with more powerful hardware, contributed to the program defeating Lee Sedol. The event was watched by many people worldwide, and its success was a “Sputnik moment” in terms of bringing AI back into the spotlight. Now, many researchers are racing against each other to find the most useful applications for the technology.

Many applications are being touted as potential ways in which deep learning AI could help to solve the world’s problems. The following applications are currently being promoted: self-driving cars, deep learning (AI use) in healthcare, voice search or voice assistants, adding sounds to silent movies, machine translation, text generation, handwriting generation, image recognition, image caption generation, automatic colorization, advertising, earthquake prediction, brain cancer detection, neural networks in finance, and energy market price forecasting (Mittal, 2017). Some of these applications indeed address serious matters, such as self-driving cars and image recognition, while others are rather quaint, such as colorization or automatic sound generation in silent movies. In any case, Mittal notes that some of the most prominent applications of deep learning (or machine learning) AI have emerged over the past three or four years. One of the most powerful uses of today’s AI is its predictive power. Using vast data sources, AI promises to make predictions that would not be conceivable by human analysts. One of these promises, for example, concerns an AI system that can detect the onset of cancer by analyzing images of those who are still healthy. In other words, the power of today’s AI lies in its ability to “see” things that are often undetected by trained specialists. The algorithm gains this ability through its analysis of extensive data points that are fed into its system. The machine analyzes these data and finds patterns and correlations to make predictions.

This new technology has led many to look for ways in which AI could improve society. The applications mentioned in Mittal’s article identify some of the potential uses, or “social goods”, that could be delivered by AI. Many large corporations have also jumped on the bandwagon in search of AI opportunities. Google, for example, has founded an initiative titled “AI for Social Good” (http://ai.google/social-good/), which aims at “applying AI to some of the world’s biggest challenges”, such as forecasting floods, predicting cardiac events, mapping global fishing activity, and so on (AI for Social Good, 2020).

This paper analyzes some of the ethical concerns arising from such applications. Researching the potential of AI to solve these problems is important, but when the technology is applied in real-world scenarios, care must be taken to ensure that the social and cultural environment is fully receptive to the technology. A lack of receptiveness to imported technology can lead to a sense of alienation, which can happen when the local population is excluded from the process of decision making regarding the adoption of the technology in question (Hongladarom, 2004). This could also lead to resistance to AI technology. For example, using AI to forecast floods may lead to administrative measures that could cause mistrust or misunderstandings if the AI technology is not made clear to those affected by the measures. It is one thing for AI (if reliable) to identify when and where a severe flood will take place; it is another to convince a local population that a flood will occur and that their location will be affected. This shows that any successful employment of AI must factor in local beliefs and cultures. Moreover, the forecasting must not be used to gain an unfair advantage over others. For example, foreknowledge of floods in a particular area and time might lead to hoarding or other unfair measures designed to maximize the individual gains of certain parties. This shows that ethics must always be integral to any kind of deployment of technology and its products.

Consequently, this paper aims to find ways in which machine learning AI could deliver social good in an ethical manner. More specifically, this paper argues that in order for AI to deliver social good, it must be ethical first. Otherwise, it might lead to negative outcomes similar to the aforementioned scenario of flood forecasting and hoarding. This is a vital principle to address, as sophisticated technology, such as facial recognition software, could be used to endanger people’s right to privacy. As mentioned above, AI algorithms that forecast flooding could be used to gain unfair advantages over others. Hence, there must be a way for these algorithms themselves to act as safeguards against such use. For flood forecasting software, this might not be immediately apparent, as it does not typically involve autonomous action. The software would likely deliver information and forecasting, with humans ultimately being responsible for acting on the information. However, even in this case, the software itself must be ethical on its own. At the very least, there should be some mechanism by which the possibility of misuse or abuse by certain groups (such as those intent on using the information to hoard food and other supplies) is minimized; such a mechanism should be installed as part of the software from the very beginning. Regarding facial recognition technology, the same type of mechanism should also be installed to avoid potential misuse. Put simply, AI should be an integral part of an ethical way of living, right from the moment of implementation. Hence, instead of regarding AI and its surrounding technologies as something imported and inherently harmful towards the developing world, we must find a way in which AI becomes integral to helping these people flourish.

Furthermore, this paper argues that the details of how to live an ethical life should include insights obtained from Buddhism; specifically, the teachings on compassion (karunā), which is one of the most important tenets of Buddhism. It may be suggested that Buddhist compassion — a concept that will be further developed in this paper — should play a key role in developing an ethical AI. This development then comprises the possibility of AI delivering social good and functioning as an integral part of ethical living.


AI is undoubtedly powerful and has the potential to significantly change the world. Power always has to be accompanied by corresponding responsibility, restraint, and other ethical virtues.

The next section of this paper reviews some of the current literature on the ethics of AI and AI for social good. Section 3 deals with the basic concepts of Buddhism. Section 4 presents the paper’s main argument, together with replies to some of the objections raised during the course of the research. The last section concludes with two main policy recommendations for the public sector and tech companies.

AI for Social Good

The advent of AI has given rise to a plethora of ethical guidelines that aim to regulate AI research and development worldwide. A survey of the literature on AI for social good revealed that much of it overlaps with the ethics of AI and proposals for AI ethics guidelines in general. This is not surprising, as proposing AI for social good implies that AI should act ethically; by promoting social good, AI thereby becomes ethical. However, this transition is not automatic; one still has to provide an account of why it is indeed the case. The need for such an account seems all the more acute when an AI program created with the aim of providing a social good instead turns out to be harmful. This justification forms one of the main objectives of this paper.

Nevertheless, it is important to review the literature on ethics guidelines for AI, as well as AI for social good, to provide a general outline and identify some of the key issues. A website titled “AI Ethics Guidelines Global Inventory” (https://algorithmwatch.org/en/project/ai-ethics-guidelines-global-inventory/) has documented 82 guidelines. However, only four Asian jurisdictions are represented on the list: China, Korea, Dubai, and Japan. It should also be noted that none of the documents published in these jurisdictions are based on their own indigenous intellectual resources (see also Gal, 2019). This shows that there is a very high level of interest in how AI should be ethically grounded. In a related paper, “The Ethics of AI Ethics”, Thilo Hagendorff (Hagendorff, 2019) documents the ethical concepts that are mentioned in some of these guidelines and identifies the top five, which include privacy, accountability, fairness, transparency, and safety (Hagendorff, 2019). These factors largely correspond with a list in another paper written by Luciano Floridi and others (Floridi et al, 2020), where seven “essential factors” are listed, namely: (1) falsifiability and incremental deployment, (2) safeguards against manipulation of predictors, (3) receiver-contextualized intervention, (4) receiver-contextualized explanation and transparent purposes, (5) privacy protection and data subject consent, (6) situational fairness, and (7) human-friendly semanticization (Floridi et al, 2020, p. 5). Here, falsifiability means that the software system needs to be empirically testable, and only if it is testable will it be deemed trustworthy. Factor (2) (safeguards against manipulation of predictors) is rather straightforward; it means that there needs to be a mechanism whereby false manipulation of input into the software is prevented, so that the results produced by the software are not biased. Factor (3) (receiver-contextualized intervention) refers to respecting the autonomy of the user; any intervention performed by the software needs to be “contextualized” to the needs and desires of the user. Factor (4) (receiver-contextualized explanation and transparent purposes) refers to respecting the autonomy of the user in terms of the software being easy and transparent to understand, with nothing important hidden. Factor (5) (privacy protection and data subject consent) is self-explanatory and is the number one concern in the guidelines studied in Hagendorff’s paper. Factor (6) (situational fairness) refers to the need for the software to maintain objectivity and neutrality by avoiding data input that is biased from the beginning. Factor (7) (human-friendly semanticization) means that humans should still maintain a level of control when the software is allowed to interpret and manipulate meaningful messages. For example, AI software can create clearer communication between caregiver and patient without intervening and excluding the caregiver from the process (Floridi et al, 2020, pp. 5-19).

These factors and concepts are also closely related to another set of concepts, likewise developed primarily by Floridi (Floridi et al, 2018; see also Cowls and Floridi, 2018). In this paper, Floridi and his team delineate five elements that are necessary for “good” AI in society. Most of these elements resemble the familiar ethical principles found in other areas of applied ethics, most notably medical ethics. These are beneficence, non-maleficence, autonomy, and justice. Floridi and his team then add a fifth factor, explicability, which is unique to AI, as AI tends to operate as a “black box”: the normal user has no clue how it works or how it comes up with its answers (Floridi et al, 2018). Moreover, Mariarosaria Taddeo and Floridi have another article, published in Science in 2018, on the need for these factors in a good AI society (Taddeo and Floridi, 2018). They also discuss the need for what they call a “translational ethics” that combines foresight methodologies and analyses of ethical risks (Taddeo and Floridi, 2018). In addition, these five principles are also discussed by the European Commission’s High-Level Expert Group on Artificial Intelligence (The European Commission’s High-Level Expert Group on Artificial Intelligence, 2018, pp. 8-10), with the emphasis that AI systems need to be “human-centric” (The European Commission’s High-Level Expert Group on Artificial Intelligence, 2018, p. 14). The overall concern of the document is that AI needs to be “trustworthy”, and the requirements discussed here are among the necessary conditions. More specifically, the document discusses ten factors that are supposed to be sufficient for a trustworthy AI system. These are accountability, data governance, design for all, governance of AI autonomy (human oversight), non-discrimination, respect for (and enhancement of) human autonomy, respect for privacy, robustness, safety, and transparency (The European Commission’s High-Level Expert Group on Artificial Intelligence, p. 14). Thus, these ten requirements largely mirror the requirements or essential factors mentioned earlier. Chief among these lists are factors such as autonomy, privacy, safety, and transparency. It is clear that there are many overlaps among such guidelines, with only relatively small differences among them.

Furthermore, Ben Green (Green, 2019) argues that computer scientists cannot rely on the idea that algorithms alone can solve the world’s problems; they need to see how the social problems (which AI for Social Good is supposed to solve) are all connected with deeper and more intricate interconnections, which mere technical means alone cannot resolve. Bettina Berendt, in a similar vein, proposes an “ethics pen-testing” in which the design of an AI system is critically challenged by a series of questions that the designer must answer to defend the design and show that it is ethically sensitive, all in order to improve the software design (Berendt, 2019). What is interesting in both Green’s and Berendt’s papers is that they are not content with merely proposing a list of guidelines for AI developers to follow; instead, they point out that AI researchers and developers must be aware of ethics during all stages of development. Technical solutions alone are not enough, and will not be effective in bringing about the proposed “social good” of AI.

What has emerged is that most of the literature focuses on a list of ethical principles which, the authors argue, are necessary for an effective ethical AI system. However, only a few works (e.g., Green and Berendt) argue that simply providing such a list bypasses the deeper interweaving connection between ethical principles and the underlying social and cultural contexts. Nonetheless, both Green and Berendt address these contexts in a vertical manner. More specifically, they focus on the interrelations between ethical principles and the wider concerns of a Western context. As mentioned earlier, there are only a few guidelines in Asia, and more interestingly, these guidelines do not draw on their own intellectual resources. Hence, a large gap exists in the literature, namely the formulation of AI ethics principles based on the intellectual resources of the East. In fact, my recent book, “The Ethics of AI and Robotics: A Buddhist Viewpoint”, discusses this issue in great detail (Hongladarom, 2020). Moreover, going beyond the gap in theoretical terms, there is also a gap in the content of the proposed guidelines. What I propose in this paper is that a complementary principle of Buddhist ethics should be adopted as the foundation for thinking and deliberating on the ethics of AI and AI for social good. In particular, the principle of karunā (compassion) should be considered for the ethical guidelines of AI and for any theory related to AI for social good.
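Berendt’s “ethics pen-testing” lends itself to a concrete, if deliberately simple, sketch. The questions and the data structure below are my own illustrative paraphrase, not Berendt’s actual protocol; they only show the shape of the practice: a fixed battery of challenges put to the design team, with unanswered challenges surfaced as the open ethical attack surface of the design.

```python
# Hypothetical battery of pen-testing challenges (illustrative only).
PEN_TEST_QUESTIONS = [
    "Who could be harmed if the system's predictions are wrong?",
    "Can affected people contest or correct a decision?",
    "What stops the output being used for an unfair advantage?",
    "Is consent obtained for the data the model is trained on?",
]

def ethics_pen_test(design_answers):
    """Return the challenges the design team has not (yet) answered,
    i.e. the open ethical attack surface of the design."""
    return [q for q in PEN_TEST_QUESTIONS if not design_answers.get(q)]

# A partially defended design: one real answer, one left blank.
answers = {
    "Who could be harmed if the system's predictions are wrong?":
        "Residents of low-lying districts; forecasts err on the side of caution.",
    "Can affected people contest or correct a decision?": "",
}

open_challenges = ethics_pen_test(answers)
print(len(open_challenges))  # three challenges remain undefended
```

As in security pen-testing, the value lies less in the checklist itself than in forcing the designer to defend the design at every stage, which is precisely the point Green and Berendt press against purely list-based guidelines.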


Buddhist Ethics and Basic Buddhist Principles


It is not possible to explain all of the principles of Buddhism in this paper. Nonetheless, a very brief introduction to its relevant principles should provide a better context for the argument. More details on the principles of Buddhist ethics, and an introduction to Buddhist philosophy, can be found in the book that I mentioned earlier (Hongladarom, 2020). The book explains that Buddhist ethics is based on the idea that an action is considered right if it brings about something that is universally desired by all human beings, and wrong if it goes in the opposite direction. Thus, Buddhist ethics is markedly different from modern ethical theories; for instance, other theories do not specify what is universally desirable for all humans. In Immanuel Kant’s ethical theory, for example, what constitutes a good action is determined without considering the possible consequences of that action. Instead, Kant’s theory asks whether the action follows a universalizable maxim or not. The universally desirable goal, on the contrary, is precisely a goal; thus, Buddhist theory stands in opposition to Kant’s deontological theory. Furthermore, Buddhist theory also differs from utilitarianism in that, although utilitarianism is a kind of consequentialism, Buddhist theory specifies a definite content for the goal that is universally desirable to all human beings. Utilitarianism, conversely, does not specify any definite content, and instead counts as good whatever is deemed to have utility. Buddhism suggests the possibility of a universally desirable goal that is valid for everyone. Since everyone desires happiness and wishes to avoid suffering, happiness may be seen as a universal goal. Buddhism has a very detailed theory regarding the definition of happiness as a universal goal. In simple terms, it describes a type of happiness that results when one’s action is in total accordance with nature. Thus, the kind of “happiness” that results from indulging in sensual pleasure would not qualify, as this pleasure also brings about suffering. For example, eating certainly brings pleasure, but too much eating can cause a certain degree of discomfort, such as feeling bloated. Therefore, true happiness (i.e., happiness without suffering) is only attainable through a true understanding of nature. This does not mean becoming a scientist, but rather understanding that nature works according to the rules of cause and effect. Realizing this is a necessary step towards attaining what Buddhists call “nirvāna”, the total cessation of all suffering; the term is usually translated as “Enlightenment”. Hence, Buddhist ethical theory holds that an action is good if it leads towards nirvāna, and bad if it leads away from it.

As mentioned earlier, the aim of this paper is to show that Buddhist philosophy can contribute to the ethics of AI and AI for Social Good. A key point is that a person’s actions must be in tune with nature. When this is the case, they essentially become one with nature. This is a concrete expression of the realization that there is no attachment to the ego, since it is just such an attachment that separates one from becoming fully in tune with nature. Compassion is a key ingredient in this realization, and what is truly good is the realization that there is no boundary between the ego and everything else, as well as the resultant desire to help others get rid of their suffering, which is ultimately due to a lack of this realization. In the area of AI ethics and AI for social good, this means that one has to find a way in which AI can contribute to relieving the suffering of all beings. This may not be as grandiose as it sounds, as we are more than capable of finding specific and concrete ways to achieve it. Doing so is to implement an ethics of AI that is in accordance with Buddhist ethical principles. The main idea is that, in order for AI to provide social good, it must consider the contexts involved, which may vary from place to place. A solution that might work in one context might not work in another. The examples put forward in this paper are flood forecasting and facial recognition; however, we can certainly imagine other cases. In the field of automated reasoning or decision making, one also needs to be careful that the decisions made by AI are always accountable to humans. Allowing AI a free hand in making decisions (such as in stock trading) would go against the Buddhist principle of compassion, as this tends to create more suffering rather than reduce it.

AI for Social Good: Buddhist Compassion as a Solution
Philosophical point of view for social implementation

Could AI Become Compassionate?


As we have seen, this study argues that AI needs to be compassionate. This means that AI must exhibit the two qualities that constitute compassion, namely interdependence and altruism. AI exhibits interdependence by showing concretely that it understands (within the constraints of current AI technology) the concept of things being interdependent and interconnected. This can be achieved with an AI algorithm that shows concern for the welfare of someone or something. For example, the aforementioned flood forecasting algorithm could show a level of understanding of interdependence by having its internal mechanism connected to other relevant factors that are no less important, such as economic conditions, price forecasting, political climate, and geographical information. AI flood forecasting could lead to the hoarding of essential food and supplies, which is an unethical act. However, the algorithm might struggle to learn how its predictions could be used by humans in a negative way. Here, a program that embeds algorithms in a larger context could make it more difficult for information to be used for personal gains. For example, the algorithm could publicly broadcast its predictions, making it impossible for certain parties to gain an advantage. An internal "safety lock" within the algorithm could be installed as an indelible component to make it imperative to broadcast information to everyone involved rather than to individual users. The broadcasting feature may, however, be necessary for flood forecasting, but broadcasting on this scale might be unethical in other contexts or for certain algorithms. For example, some algorithms are intended to work privately (e.g., on personal health information). As such, developers need to see which contexts are relevant for installing safety mechanisms inside algorithms.

The other component of Buddhist compassion is the commitment to alleviate suffering for all sentient beings. Here, sentient beings are relieved of their suffering through someone who is completely compassionate. However, such an ideal is impossible to realize in reality, where the one who practices compassion has limited power. Nonetheless, we must do whatever we can—within the limits of our power—to help relieve suffering. For AI algorithms, this would mean taking active steps in creating a world where suffering is eliminated as much as possible. More specifically, the algorithm should be designed to help alleviate suffering from the very beginning. For example, facial recognition technology could be developed to recognize particular features so that certain traits are predicted, such as the onset of a disease, leading to early prevention. One may assume that suffering is unrelated to software development, as it appears to be an external requirement. However, it should be an integral part of software development itself, pertaining to the key areas or problems which AI algorithms are designed to solve from the beginning.

Michael Kearns and Aaron Roth (Kearns and Roth, 2019) argue that an algorithm should be ethical in the sense that ethical components should be programmed into the algorithm. Here I suggest that compassion should also be programmed into AI algorithms. In fact, the same idea has already been proposed by James Hughes (Hughes, 2012). However, according to Hughes, a robot only becomes compassionate when it can imitate human emotion. I propose that compassion can be attained when it exemplifies the two components mentioned earlier, namely the realization of interdependence and the commitment to relieve suffering. More specifically, a robot becomes compassionate when it exhibits genuine commitment and action geared toward alleviating suffering. Thus, it is more action-oriented than merely displaying or mimicking emotions.

How can we program robots or AI algorithms to be compassionate? We could say that an algorithm "understands" interdependence when it is programmed in such a way that it "recognizes" various external factors that are involved in making a more ethically nuanced assessment. Of course, the algorithm does not understand anything—we are not talking about a superintelligence—but it is a way of talking to show that the algorithm exhibits certain behaviors that we recognize colloquially as an understanding. Hence, for the algorithm to understand interdependence, which is one component of Buddhist compassion, it has to exhibit certain external features that are not directly part of its core objective, so to speak. These features may not be part of the core mission, but they are very important in making an ethical judgment of the situation in which it is employed, so that it becomes more ethical. If a given objective, such as maximizing a certain output, is found to involve trade-offs between the output and other desirable factors, then the machine would be programmed not to follow the maximization. It will realize or "understand" that such an action leads to a contradiction with its own prime directive, which is to alleviate human suffering. To come back to flood prevention software, an algorithm might be taught to accurately predict floods in a certain area. However, predicting floods alone is not ethical, as it could lead to hoarding, as we have seen. Thus, the AI needs to be programmed with compassion so that it can predict floods while also considering other relevant factors. For example, the AI could display a warning sign if a user attempts to misuse the data. Then, the second component of compassion, altruism, is ideally put into action when the algorithm initiates an action designed to help relieve affected persons from suffering. To use another example, a microloan algorithm might override its directive (maximizing profit for its creator or owner) in favor of clients who, on paper, would have suffered even more if the algorithm did not act otherwise. Here the algorithm must be able to distinguish clients who really need the money, and who show good faith and commitment to repaying the loan, from those clients who are out to get cheap money without any intention of repayment. In this case, there are many specific details involved; the idea I am proposing is only that the algorithm should follow the Buddhist principle of trying to relieve suffering as best it can, based on the information available to it at the time.

Some may object to this proposal, saying that giving AI its own discretion in making more ethical decisions will inhibit the freedom of the human user to apply AI in any way he or she sees fit. Furthermore, there is no guarantee that the algorithm will act as ethically as intended. These are legitimate concerns. Nonetheless, installing a component that inhibits the user from performing certain actions is not a new principle. For instance, some cars will not start unless the driver is wearing a seatbelt. An AI that refuses to follow certain orders from the user acts in the same way. Such a car limits the freedom of the user, but this is still seen as a strong safety feature. Additionally, how do we know that the AI, when given this amount of freedom, will always act ethically? For the artificial general intelligence (AGI) of the future, this is a serious matter, because AGIs are capable of thinking on their own. Therefore, it is in our best interest to guide its development towards being both intelligent and ethical. For today's more specialized AI, however, safety devices should be installed or programmed so that the algorithm functions to promote ethical action.

In fact, giving AI the ability to act ethically is possible with today's technology. This does not necessarily mean that the AI is endowed with consciousness and free will. Instead, the AI is equipped with algorithms to act ethically and compassionately from the beginning. The microloan software will act ethically if it takes the interest of its clients into account. This might not maximize the bank's profit, but the social cost of being inflexible when loan decisions are analyzed and approved could be greater. As an increasing number of loan decisions are made autonomously by algorithms, having an ethical algorithm seems essential.
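The microloan override sketched above — a compassion constraint checked before the profit objective — can be made concrete with a small sketch. Everything below (the class, the fields, the thresholds) is a hypothetical illustration of one way to operationalize the idea, not an existing system:

```python
from dataclasses import dataclass

@dataclass
class LoanApplicant:
    """Hypothetical applicant record (illustrative fields, not a real schema)."""
    expected_profit: float     # lender's expected profit if the loan is granted
    hardship_if_denied: float  # estimated suffering caused by a denial, 0..1
    good_faith: float          # estimated commitment to repay, 0..1

def decide_loan(applicant: LoanApplicant,
                hardship_threshold: float = 0.7,
                good_faith_threshold: float = 0.5) -> bool:
    """Lexicographic decision: the compassion constraint is evaluated
    before the profit objective, so the objective alone can never
    justify denying a good-faith client in severe hardship."""
    # Prime directive: relieve suffering where the client shows good faith.
    if (applicant.hardship_if_denied >= hardship_threshold
            and applicant.good_faith >= good_faith_threshold):
        return True  # override the profit objective
    # Ordinary objective: profitable loans to clients likely to repay.
    return (applicant.expected_profit > 0
            and applicant.good_faith >= good_faith_threshold)

# A marginally unprofitable loan is still granted when denial would cause
# severe hardship to a good-faith client; a profitable loan to a bad-faith
# applicant ("cheap money" with no intention to repay) is not.
needy = LoanApplicant(expected_profit=-10.0, hardship_if_denied=0.9, good_faith=0.8)
opportunist = LoanApplicant(expected_profit=50.0, hardship_if_denied=0.2, good_faith=0.1)
print(decide_loan(needy))        # True
print(decide_loan(opportunist))  # False
```

Ordering the checks lexicographically — constraint first, objective second — is one simple way to encode a "prime directive" that the profit objective can never override.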


Objections and Replies


During a series of meetings held by the Association of Pacific Rim Universities (APRU) under the project titled "AI for Social Good", my proposal benefited from a number of comments and helpful criticisms from my colleagues, who challenged me to develop a better, more defensible position on this topic. The first objection focused on the claim that ethics should be encoded into the algorithm or the inner programming core of the AI software. The objection is that this makes ethics too narrow and technical. According to this objection, ethics coding would result in the AI system being estranged from its social, cultural, and economic environment, leading to the system not being relevant to the aims of the forum. First, it should be noted that I would not advocate that social and cultural considerations be taken away from ethical deliberation. This is just not possible, because ethics is always naturally embedded in the set of practices that surround any technical product, which is something that has been recognized by philosophers of technology for a long time. For example, a car that remains stationary until the driver puts on a seatbelt is an example of encoded ethics. According to my analysis, a car that neglects to warn the driver to wear a seatbelt, and does not take appropriate action to ensure that he or she does so, is unethical. In the same vein, it is also ethical for microloan software to take more data points than required to ensure that loans are repayable. Sure enough, a program that makes an accurate risk calculation for a loan would to some extent be an ethical program. However, if this is all the software does, then its degree of ethicality is limited. It needs to consider other factors too, such as the condition of the loan applicant (e.g., economic status, children, health, etc.). It would be more prudent for the program to provide a loan under certain economic conditions, such as the current COVID-19 pandemic. The act of coding ethics into the inner workings of an AI program does not imply that the coder and employer are isolated from the surrounding socio-economic conditions or social environment. On the contrary, it shows that the coder and tech company value ethics, and must pay close attention to the needs and values of the society in which they intend to use the software.

The second objection builds upon the final sentence in the paragraph above. When a programmer encodes ethics into a machine, who will ensure that these ethics are correct? In other words, who or what would guarantee that the programmer does not put his or her own personal agenda and values into the software? In order to answer this question, one has to bear in mind that the programmer cannot, in fact, neglect the needs and values of society. If the programmer neglects those values and injects his or her own personal beliefs into the machine, it is likely that the machine would act strangely and be unusable. Software containing an idiosyncratic set of values would be condemned by users and thus would not be successful. The manufacturer would also have a strong interest in ensuring that the consumer receives a desirable service. Hence, the software would need to be tested repeatedly, not only for safety and quality control, but also for ethical quality.

According to the third objection, coding ethics into a machine is too narrow; the program must learn its ethics by interacting with its environment. Instead of taking all its cues from the programmer, an intelligent AI should be able to learn what is right and wrong from its interaction with other people. The more people it interacts with, the better it becomes at learning right and wrong. This is just like how a child learns ethics—by living in a social environment with parents, siblings, friends, and so on. There is just no way for an algorithm to understand ethics through code alone. This is a valid objection, but the coding is only a part of the larger program, which involves teaching a machine to be compassionate. Since we do not have AGI-level machines yet, we have to see how specialized, blind ASIs (artificial specialized intelligences) can exhibit behaviors that we deem to be (approximately) compassionate. At this stage, we would be glad if AI could deliver social good, even without being conscious. The AI could be encoded in such a way that it knows how to learn ethical principles. Humans are already hardwired to become ethical, since altruism and cooperation among members of our species have been fundamental throughout our evolution. After all, understanding ethical and social cues would be a very strong achievement for AI, but would still require coding for this possibility to occur.

The final objection holds that coding ethics into a machine implies that programmers and software companies do not care for, and are not accountable to, society at large. Again, this does not have to be the case. There is no logical link between coding ethics into an algorithm and the programmer and employer being unaccountable to society. We have seen earlier that the programmer and software company must ensure that their products meet the requirements set by consumers and society; furthermore, they are still a part of society and need to follow specific laws and regulations.

The objections and comments from my colleagues largely focus on the relation of coding to its socio-economic context. This is an important matter, and in conclusion I would like to argue that coding must be embedded within its contexts. More specifically, this means that coding must only be one aspect of the overall systematic practice of ensuring that AI is ethical. Nevertheless, without an emphasis on coding, there is no definitive way in which the design of AI could directly contribute to a better society. For this to happen, the components of an ethical AI need to be translated into a language that a computer would understand. That is, the ethical components need to be made operationalizable, and they need to be pared down into basic steps for a computer to follow. Most importantly, the ethical vision must be clear, and the operationalization needs to adhere to it closely.

Conclusion and Recommendations

I would like to end this paper with a number of recommendations, to both the public and private sectors, so that an ethical AI for social good can be fully developed and deployed. The recommendations are as follows:

Recommendation 1: Programmers and software companies must implement compassionate AI programs, which is the key message of this article. No matter what kind of "social good" the AI is supposed to bring about, the software needs to be compassionate and ethical in the Buddhist sense. I have specified in some detail what being compassionate actually means for AI. Basically, the AI needs to realize that all things are dependent on all others (interdependence), and the AI needs to show actual commitment to improving the condition of everyone in society (altruism). In order to make this recommendation feasible, the components of compassion need to be translated into algorithmic steps for the computer. In other words, the software needs to be coded in such a way that it becomes ethical. However, the coding must not be alienated from its socio-economic and historical contexts. That is, the software companies responsible for manufacturing AI programs must function as responsible and contributing members of society. No matter what kind of social good the AI is intended to bring about, this is a necessary requirement. The paper has shown that some applications that are being developed in the AI for Social Good program, such as flood forecasting, can indeed be used for nefarious purposes. This can happen when the information gained from the AI is used to gain unfair personal advantages. There should be ways within the design and programming of AI itself to prevent this, insofar as it is technically feasible. Abuse of flood forecasting information is an example of how the work of AI, which may originate from good intention, can be used in such a way that the AI itself becomes a culprit in an unethical action, such as hoarding or implementing flood prevention programs that privilege certain groups over others. Software companies need to be aware of this possibility and take the necessary steps to prevent it from happening.
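One reading of the "safety lock" proposed earlier — making broadcast to everyone an indelible part of the design — is as an API constraint: the forecaster exposes no private query path at all, only an open subscription and an all-at-once broadcast. A minimal sketch (all names are hypothetical):

```python
class BroadcastOnlyForecaster:
    """Flood forecaster whose only output channel is a broadcast to every
    registered recipient; there is no per-user query method, so no single
    party can gain an information advantage over the others."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        """Registration is open to anyone affected by the forecast."""
        self._subscribers.append(callback)

    def publish_forecast(self, region: str, flood_risk: float):
        # The prediction is released to all subscribers at once,
        # never returned privately to an individual caller.
        for notify in self._subscribers:
            notify(region, flood_risk)

received = []
forecaster = BroadcastOnlyForecaster()
forecaster.subscribe(lambda region, risk: received.append(("agency", region, risk)))
forecaster.subscribe(lambda region, risk: received.append(("villager", region, risk)))
forecaster.publish_forecast("lower delta", 0.82)
print(received)  # both parties receive the same prediction simultaneously
```

As the paper notes, such a lock fits public-interest forecasts; algorithms meant to work privately (e.g., on personal health data) would need a different safety mechanism.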


Recommendation 2: The public sector needs to ensure that rules and regulations are in place in order to create an environment that facilitates the development of ethical AI for social good. Such rules and regulations will ensure not only that private companies have a clear set of directives to follow, but also public trust in the work of the private sector (assuming the work of creating AI software belongs to the private sector). Furthermore, even in a situation where the development of AI falls largely on the public sector, such as in Thailand, where the private sector is still rather weak in original research and development, the rules are also applicable. For example, the rules could provide incentives for software manufacturers to be more ethical. It needs to be made clear to all parties that there are material benefits to being more ethical. The belief that becoming ethical runs counter to profit maximization is shown to be unfounded. Realizing the objective of a private company must be embedded in the context of consumer trust; without the latter, it is hard to imagine how this type of company could flourish in the long run.

These two recommendations make it clear that AI will create social good that truly answers people's needs and suffering. AI in the future may, or may not, become conscious and attain the level of superintelligence in the sense advocated by Nick Bostrom (Bostrom, 2014). In any case, AI needs to be made ethical at this time, as there is a decreasing window of opportunity to do so.

Acknowledgments

Many thanks to the Association of Pacific Rim Universities (APRU) for initiating the project on AI for social good. I would also like to thank Prof. Jiro Kokuryo of Keio University, Japan, the Principal Investigator of this project, for giving me the opportunity to become engaged in this exciting project. My sincere gratitude goes to Christina Schönleber, Director for Policy and Programs, APRU, as well as all my colleagues in the project, from whom I have learned a great deal. Thank you to Prof. Pirongrong Ramasoota, Vice-President for Communication, Chulalongkorn University, and my colleague at Chula. Finally, I would like to thank Dr. Chulanee Tianthai, who gave me the information about this project and encouraged me to apply.


References

Berendt, B. (2019). AI for the common good?! Pitfalls, challenges, and ethics pen-testing. Paladyn, Journal of Behavioral Robotics, 10(1), 44-65.

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Cowls, J., & Floridi, L. (2018). Prolegomena to a white paper on an ethical framework for a good AI society. https://ssrn.com/abstract=3198732 or http://dx.doi.org/10.2139/ssrn.3198732

Floridi, L., Cowls, J., King, T. C., & Taddeo, M. (2020). How to design AI for social good: Seven essential factors. Science and Engineering Ethics.

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689-707.

Gal, D. (2019). Perspectives and approaches in AI ethics: East Asia. In M. Dubber, F. Pasquale, & S. Das (Eds.), Oxford Handbook of Ethics of Artificial Intelligence. Oxford University Press, forthcoming. https://ssrn.com/abstract=3400816 or http://dx.doi.org/10.2139/ssrn.3400816

Green, B. (2019). "Good" is not good enough. AI for Social Good Workshop, NeurIPS 2019. https://www.benzevgreen.com/wp-content/uploads/2019/11/19-ai4sg.pdf

Hagendorff, T. (2019). The ethics of AI ethics: An evaluation of guidelines. arXiv. https://arxiv.org/abs/1903.03425

Hongladarom, S. (2004). Growing science in Thai soil: Culture and development of scientific and technological capabilities in Thailand. Science, Technology and Society, 9(1), 51-73.

Hongladarom, S. (2020). The Ethics of AI and Robotics: A Buddhist Viewpoint. Rowman & Littlefield.

Hughes, J. (2012). Compassionate AI and selfless robots: A Buddhist approach. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press.

Kearns, M., & Roth, A. (2019). The Ethical Algorithm: The Science of Socially Aware Algorithm Design. Oxford University Press.

Mittal, V. (2017). Top 15 deep learning applications that will rule the world in 2018 and beyond. Medium.com. https://medium.com/breathe-publication/top-15-deep-learning-applications-that-will-rule-the-world-in-2018-and-beyond-7c6130c43b01

Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751-752. DOI: 10.1126/science.aat5991

The European Commission's High-Level Expert Group on Artificial Intelligence (2018). Draft Ethics Guidelines for Trustworthy AI: Working Document for Stakeholders' Consultation. Brussels, 18 December 2018.

Moralizing and Regulating Artificial Intelligence: Does Technology Uncertainty and Social Risk Tolerance Matter in Shaping Ethical Guidelines and Regulatory Frameworks?

M. Jae Moon
Iljoo Park
Institute for Future Government, Yonsei University

Introduction

Artificial intelligence (AI) is considered one of the most powerful developments in computer science, affecting every aspect and sector of society. While we are increasingly paying attention to its significance and impact, we do not yet know how and to what extent it will affect the replacement and creation of jobs, industrial transformation, and lifestyle changes. These unknowns create uncertainties and risks, and with them a growing demand for regulating and moralizing AI so as to minimize AI-caused harms and sustain AI's positive impact on society as a whole. With growing social fears and uncertainties, there has been increasing demand for a specific and proactive approach towards dealing with AI. Responding to these demands, governments and key international actors have attempted to provide regulatory frameworks and ethical guidelines for this rapidly developing technology. This study aims to review the uncertainty and risk issues of disruptive technologies such as AI, and to assess their socio-economic and political impacts on society. This study will also discuss how key stakeholders (i.e., governments, industries, international organizations, NGOs, etc.) craft ethical guidelines and principles, and review how different countries establish AI regulatory frameworks, particularly for autonomous vehicles (AVs).

Tzur (2017) argues that technological advancements fundamentally change the paradigm
of regulatory mechanisms, while a conventional regulatory political framework (Wilson,
1980) seems to fail to offer an effective explanation for the nature of emerging disruptive
technologies (i.e., AI, gene editing, blockchain, etc.), simply because defining who should
benefit and who should bear the costs is quite uncertain and dynamic. Because of
uncertainties regarding cost-benefit distributions as well as the opportunities and risks
of emerging disruptive technologies, many countries appear to have adopted differing
regulatory approaches to these technologies. For instance, national regulatory positions
vary widely among different countries regarding the acceptance of cryptocurrencies (e.g., Bitcoin) as legal tender and the banning, regulation, or encouragement of cryptocurrency


exchanges. Notably, some countries such as Japan and the US have relatively light regulatory positions towards cryptocurrencies, while others, including China and Korea, have very restrictive policies. Likewise, regulations of disruptive technologies also differ in content and intensity from country to country. While some governments are in a strict regulatory position, others remain in an active deregulatory position by introducing regulatory sandboxes. Furthermore, the uncertainty and new forms of risk posed by these technologies (Slovic, 1987) demand social, industrial, and often international agreement, as well as discussion on ethical requirements and technological standards, to ensure the maximization of social benefits and the minimization of social risks of these disruptive technologies.

In general, governments enact regulations to correct market failures, pursue collective and public interest goals, and prevent potential social problems caused by the excessive pursuit of private interests. However, individual regulations do not always meet public expectations or help achieve intended social goals. Regulatory decisions on disruptive technologies are often not timely, primarily because of the lag between the emergence of technology-driven social issues and regulatory policy decision-making. Views regarding the regulation of novel technologies also often vary widely because of country-specific contextual factors—including legal systems, the influence of various interest groups, and the ethical perspectives of the general public, which determine the social risk perceptions of the public.

This study uses cross-country comparative case studies by examining the similarities and differences of regulatory actions caused by levels of certainty, as well as the tolerance of social risks for technologies in given countries. As an example, this study will examine regulatory approaches to AVs, which are a product of AI and robotics technology. We will examine the US and three Asian countries, namely China, Japan, and Korea. The three Asian countries are major economic players in the region, and are all interested in disruptive technologies for their potential implications for economic and social development. The US has been included as a base for comparison since it is more market-oriented than the other countries, while the three Asian countries are somewhat paternalistic.

Due to the disruptive nature of emerging technologies such as AI and related technologies including robots, AVs, drones, etc., there is no particular consensus regarding how disruptive technologies should be regulated and moralized, despite broad social interest in those technologies as well as research interest in the intertwined relationship between technological advancements and regulations. Despite growing interest in disruptive technologies and related ethical guidelines and regulations, limited research has been conducted in this field. In particular, comparative analyses of ethical guidelines for AI and of different national responses to disruptive technologies have been somewhat lacking, primarily because there is no clear measure of regulatory stringency as the basis for comparative studies of regulation politics (Brunel & Levinson, 2013). In order to fill this research gap, this study aims to look at key ethical elements of AI, and then determine how and why countries develop different regulatory approaches to the same technologies.

Along with the growing interest in AI, governments, research institutes, international organizations, and industries initially began to pay attention to ethical frameworks for AI, as many are puzzled about the potential consequences and ethical dilemmas. For example, an ethical dilemma on this subject is how an autonomous vehicle should deal with an unavoidable accident, where the car must decide whether to kill an innocent bystander or the five passengers inside the vehicle. It is also imperative to question who should be responsible for an incident involving an autonomous vehicle, among AI programmers, vehicle manufacturers, vehicle sellers, drivers, and others.

As proposed by a group of experts on AI commissioned by the OECD, an ethical guideline specifies and addresses core values in developing, manufacturing, and using AI and AI-loaded machines. In fact, it will not be long before ethical guidelines and principles for AI are offered by governments, international organizations, private companies, and NGOs. Reviewing 84 documents of ethical principles and guidelines, Jobin et al. (2019) found that most of these documents (88%) were released after 2016, by private companies (22.6%) and government agencies (21.4%).

We will first discuss technology uncertainty and social risk in the context of disruptive technologies. Then, we will review the development of ethical guidelines for AI by different actors as a loosely institutional effort to moralize AI technologies. Next, we specifically examine the different regulatory positions of four selected countries towards AVs. Finally, policy implications are discussed and policy recommendations are presented.

Determinants of Regulating and Moralizing Disruptive Technologies: Technology Uncertainty and Social Risk Tolerance

Disruptive technologies: benefits and risks

Since being presented by the World Economic Forum in 2016, there has been growing interest in disruptive technologies, which are often proposed as technological engines for the fourth industrial revolution. Figure 1 shows the different levels of expected benefits and costs from each technology. The World Economic Forum (2016) surveyed professionals in each country, asking about their perceptions of the benefits and negative consequences of 12 major emerging disruptive technologies. Participants perceived AI and robotics as the most beneficial and risky technologies, while they perceived blockchain technology as moderately beneficial and risky. Moreover, people tend to perceive both biotechnologies and neuro-technologies to be more beneficial and riskier than blockchain technology.

Despite variations in the perceived benefits and risks of those disruptive technologies, many stakeholders have raised their concerns over the potential risks of such technologies. As such, they have demanded alternative ways of moderating and minimizing the risks, which often results in informal/unofficial forms of ethical principles and formal/official forms of regulation. While the former is presented as a set of soft, suggestive, and general principles, the latter is a set of hard, legally binding, and specific rules. The former is discussed and manufactured by various stakeholders of different sectors (private, non-profit, and public) at different levels (i.e., local, national, and international), whereas the latter tends to be made by executive or legislative branches through formal rule-making and legislative processes, because each country makes its own regulatory decisions as technological risks and interest conflicts among stakeholders gradually mount. Recently, ethical standards and regulations have been discussed and proposed in the European Union (EU), the OECD, and other economic communities to moralize as well as control (regulate) technologies. While there is a general consensus on the nature and scope of ethical principles for AI, there is no consensus on regulatory frameworks among different countries. Moreover, the governmental regulatory decision can fall even further behind when the potential costs and benefits of a technology are uncertain.

Philosophical point of view for social implementation

[Figure: a scatter plot of the 12 emerging technologies by perceived benefit (horizontal axis, average 5.56) and perceived negative consequence (vertical axis, average 3.80). Artificial intelligence and robotics sit highest on both axes; the other plotted technologies are proliferation and ubiquitous presence of linked sensors, biotechnologies, geoengineering, blockchain and distributed ledger, new computing technologies, virtual and augmented realities, neurotechnologies, advanced materials and nanomaterials, 3D printing, space technologies, and energy capture, storage and transmission.]

Figure 1: Perceived benefits and negative consequences of 12 emerging technologies
(Source: World Economic Forum Global Risks Perception Survey 2016)


Regulatory lag and regulatory paternalism

Regulators are often uncertain as to whether or how to address the risks (World Bank, 2016). In particular, regulators are uncertain and unclear about assessing the potential benefits and risks of emerging technologies, which makes regulating disruptive technologies even more challenging than regulating conventional technologies (Hunt and Mehta, 2013). Generally, regulations tend to be reactive rather than proactive, which often causes regulatory lag. While regulatory lag is partially a result of a market-based, non-interventionist policy position, it often produces tardy responses to problems that could have been addressed in advance.

On the contrary, regulatory paternalism also plays an important role in driving proactive regulations to minimize potential risks. Paternalism originally referred to the ideological belief that governments should intervene to protect people, much as parents protect their children. Thus, regulatory paternalism involves paternalistic regulatory action on the part of governments. Paternalism lies behind many regulatory measures beyond specific instances (e.g., seatbelt and safety helmet laws); it is also the driving force behind the prohibition or control of certain risk-generating products and services. In fact, citizens of contemporary risk-obsessed societies expect their governments to provide them with protection (Ogus, 2005). To overcome excessive regulations formulated under regulatory paternalism, some countries have recently adopted temporary deregulation schemes such as the regulatory sandbox, a testing ground shielded from otherwise applicable regulation. This supports a flexible and lenient regulatory position that aims to maximize the potential economic and social benefits of various disruptive technologies.

Determinants of cross-country regulation differences

Based on the "psychometric paradigm," Slovic, Fischhoff, and Lichtenstein (1982) conducted a classic study of people's risk perception and offered a solid framework for understanding cross-country differences in the regulation of disruptive technologies. They suggest two significant factors that distinguish technologies: dreadfulness and unfamiliarity. Dreadfulness refers to the extent to which a technology can be kept from becoming catastrophic, and is understood as a measure of technological risk. Unfamiliarity refers to how observable a technological risk is, and is considered a measure of technology uncertainty. This implies that subjective perception, besides objective criteria, is an important factor in the classification of technologies. It should be noted that these terms are relative rather than absolute. For instance, nuclear power can be a more dreadful and less unknown technology than dynamite, which is less dreadful but better known.

While "uncertainty" and "social risk" are considered independent, they are somewhat related, since technology uncertainty often raises the perceived social risk of a particular technology in a society. As a result, the social tolerance of a particular risk is a significant factor within a country: responses to the same technology differ across countries even though the objective technological risk is identical. This leads to country-specific regulatory positions for different technologies, because certain countries may want to control the potential technological risk and take various regulatory measures (e.g., law enactments) to restrict the reckless research, development, and utilization of a technology.


1. Technology uncertainty

Technological "unfamiliarity" (Slovic et al., 1982) is somewhat similar to technology "uncertainty", though the term "uncertainty" may not be used in a strictly defined sense, since it is commonly used by many people in different senses (Downey and Slocum, 1975) or is often poorly understood (Fleming, 2001). Despite this poor understanding of "uncertainty", it is generally accepted that the degree of technology uncertainty may vary depending on controllability, which is directly related to the level of safety and potential risk of a particular technology. According to Milliken (1987), the three common definitions of "uncertainty" derived from psychology and economics are (1) "an inability to assign probabilities as to the likelihood of future events", (2) "a lack of information about cause-effect relationship", and (3) "an inability to predict accurately what the outcomes of a decision might be". Similarly, we can define technology uncertainty as "the inability to measure the likelihood of a future event and the outcome with probabilistic function and to infer the causal outcome made by a particular disruptive technology".

We argue that uncertainty about the spillover effects from technologies themselves results in cross-country variation in regulatory decisions on disruptive technology. For example, the difficulty of predicting the costs and benefits of a technology causes regulatory lag, since it obstructs timely regulation. Governments are likely to identify disruptive technologies based on the extent to which the expected costs and benefits are easily measured. If the costs and benefits derived from a technology can be predicted quickly, regulatory policies can be developed more promptly. Otherwise, governments may postpone strict regulatory decisions if a technology has the potential to cause harm in ways that cannot be foreseen during the innovation process, preventing them from quickly predicting the costs and benefits the technology could generate. We define such technologies as "uncertain technologies". It should be noted that the regulation of uncertain technologies is also affected by the degree of uncertainty that a particular society should and can tolerate (Kolacz et al., 2019).

In contrast to the uncertainty of expected outcomes from any given technology, responsiveness to global consensus is a significant factor in the convergence of similar regulatory positions. Although it may be challenging to reach a public consensus between scientists and the general public (Kahan, Jenkins-Smith, & Braman, 2011), existing consensus or standards can apply to regulatory decisions regarding emerging technologies. Recently, a global consensus led by international and regional organizations such as the EU, the OECD, and the WHO has also emerged, which shapes the regulatory positions even of countries that are not obligated to follow the global standard (Kerwer, 2005).

2. Social risk tolerance

Another reason for differences in regulatory responses between countries is that countries have different levels of tolerance for social risks. Uncertainty about a technology makes people eager to prepare for potential risks or hazards, and the level of preparation for an uncertain technology can differ by country. Social risk tolerance is closely related to uncertainty avoidance; people who prioritize avoiding uncertainty are likely to control uncertain situations by imposing strong schemes such as regulations. Empirical studies in various areas (e.g., Kanagaretnam et al., 2011) examine the relationship between high risk perception and low uncertainty avoidance.

Hofstede's 6-D model of national culture is considered one of the major measurements of the general public's uncertainty avoidance. It attempts to measure the degree to which members of a society feel uncomfortable with uncertainty and ambiguity (Hofstede, 2015). According to Hofstede's scores out of 100, Japan (92) and Korea (85) have considerably higher uncertainty avoidance than China (30) and the US (46). Note that this index must be interpreted cautiously, because Hofstede originally developed his theory from a management perspective to recognize differences between diverse cultures. That said, it helps to draw a better understanding of cultural differences among countries in many aspects, such as uncertainty avoidance. Uncertainty avoidance is different from risk avoidance, but is related


to anxiety and distrust towards the unknown (and vice versa), together with the desire for fixed practices and rituals and for understanding reality (Hofstede, 2015).

Exploring the determinants of social risk tolerance levels could provide substantial insight into cross-country differences in regulatory decisions regarding disruptive technologies; however, discussion of such an approach in prior research is scarce. We identify the following three main factors that define countries' different tolerance of social risk: (1) legal traditions and the efficiency of legally challenging regulations, (2) competition among interest groups, and (3) ethical concerns.

First, legal traditions and the efficiency of legally challenging regulations can generate differences in regulatory decisions among countries. Numerous studies, including Beck et al. (2002) and Hail and Leuz (2006), examine the relationship between countries' legal origins and levels of economic development, finding that nations' legal origins significantly impact their financial development. In particular, Beck et al. (2002) suggest that differences in countries' legal origins help explain differences in their levels of financial development.

Furthermore, some empirical studies have identified differences between common law and civil law countries in terms of regulatory decisions. For instance, Djankov et al. (2002) find that, at comparable levels of development, French civil law countries tend to have heavier regulations, less secure property rights, and fewer political freedoms than common law countries. Moreover, Charron et al. (2012) also mention that countries' legal origins could explain cross-country differences in judicial independence and government regulation of economic life, which can be summarized as the quality of institutions, as well as low degrees of corruption and high degrees of the rule of law, which in essence are desirable social and economic outcomes. They suggest that, because of stronger legal protections for outside investors and less state intervention, countries with a common law tradition have achieved higher economic prosperity and quality of life than civil law countries. La Porta et al. (2008) even summarize their series of articles (La Porta et al. 1997, 1998, 1999) on the prevalent impact of nations' legal traditions on a wide range of desirable organizations and social outcomes, along with other related articles, to develop the so-called "Legal Origins Theory" (Charron et al. 2012).

Competition among interest groups can also generate differences in countries' regulatory decisions. Gai et al. (2019) explain that regulatory complexity is a consequence of lobbying. They focus on the fact that lobbyists may be able to persuade policymakers or politicians to give their interests more favorable regulatory treatment, which leads to additional complexity and fragmentation across countries, especially when it comes to financial regulation. In addition to the appeals of individual groups, conflict among many interest groups can significantly affect countries' regulatory decisions. For instance, interest-group politics are heavily involved in cryptocurrency regulation; debates regarding the use of cryptocurrency worldwide are intense, and many stakeholders are involved in this discussion. According to Houben and Snyers (2018), numerous players are involved in the cryptocurrency debate and they all play particular roles: cryptocurrency users, miners, cryptocurrency exchanges, trading platforms, wallet providers, coin inventors, and coin offerors. In addition to these players, policymakers such as the International Monetary Fund (IMF), the Bank for International Settlements, and the World Bank have their own views on cryptocurrency. The groups who utilize cryptocurrency are expected to experience the associated benefits, costs, and discussions, which are still ongoing.

Ethical concerns can also lead to differences in countries' regulatory decisions. Such concerns may be related to general public safety or the religious views of various groups. In particular, regulations regarding genetically modified organisms (GMOs) are affected by the ethical perspectives of countries' citizens. Such perspectives can be shaped by religious beliefs or general views of human morality. Globus and Qimron (2018) investigate the regulations and


cultural perceptions of different countries regarding GMO approval. Their study found that regulatory and supervisory procedures for GM crops and the foods produced from these crops differ because governmental approaches reflect the differing responses of citizens and scientific communities. These policies also reflect a variety of cultures, environmental conditions, political pressures, and the interests of different groups such as farmers, agricultural companies, and environmental activists or agencies.

To summarize, we suggest that the regulation of disruptive technology may vary as a result of technology uncertainty and social risk tolerance, and that several socio-economic factors may generate variation in uncertainty and risk tolerance. Two different approaches have been suggested: (1) moralizing technologies based on ethical standards and (2) regulating technologies based on legal mechanisms. The former refers to the efforts of various stakeholders to promote desirable status or conditions through codes of conduct or moral principles, which are often voluntary rather than mandatory. The latter refers to legal actions by governments to mandate and enforce particular actions, or to prohibit illegal actions, which in many cases lead to penalty or punishment. In the next section, we examine the evolution of ethical principles for AI and then survey regulatory actions regarding three selected disruptive technologies that pose different degrees of risk in four developed countries.

              Ethical approach                      Legal approach

Mechanism     Ethical standards                     Regulatory laws
Actor(s)      Various stakeholders                  Government(s)
Nature        Voluntary; broadly defined and        Mandatory; specifically defined and
              widely applied                        narrowly applied
Consequences  Moral blaming                         Punishment or penalty

Table 1: Comparison of ethical approach and legal approach


Moralizing Disruptive Technologies: Ethical Guidelines and Principles for AI

Ethical AI (Jobin et al., 2019), trustworthy AI (European Commission, 2018), and responsible AI (Microsoft, 2018) have been proposed and discussed among various stakeholders (e.g., academics, industries, governments, and international organizations), as AI has been presented as a main driver of radical and disruptive changes (Jobin et al., 2019). Although terms such as "ethical", "trustworthy", and "responsible" are used in documents that cover ethical guidance and principles, they all explain that we must handle AI in a lawful, ethical, and robust way throughout its entire lifecycle. Such guidelines cover design, development, deployment, and usage (European Commission, 2018) by recognizing, preparing for, and resolving the potential risks and negative impacts of AI in a society.

Ethical AI is often considered a starting point for moderating any potential negative social and economic impacts of AI and AI-loaded devices, including automation and job replacement, intentional misuse and malevolent consequences, dissemination and reinforcement of social bias, and an undermining of fairness (Jobin et al., 2019). Reviewing and scoping 84 documents of ethical guidelines and principles, Jobin and her colleagues (2019) suggest that several key ethical principles are commonly identified, including transparency, justice and fairness, non-maleficence, responsibility, and privacy. That said, there is no consensus on how these principles are interpreted and applied in the course of designing, developing, and using AI and AI-loaded devices.

Presenting trustworthy AI, the European Commission (2018) proposed three elements constituting trustworthiness: lawful AI, ethical AI, and robust AI. Lawful AI refers to the fact that AI should be bound by existing legal systems at the local, national, regional, and international levels, so that they bind any processes and activities involving the entire AI lifecycle. The European Commission (2018) suggests that lawful AI "should not be interpreted with reference to what cannot be done, but also with reference to what should be done and what may be done". In addition to legal compliance as a basic minimal requirement, ethical AI emphasizes reference to ethical norms in particular, because legal systems often lag behind and do not keep up with technological developments. Robust AI is presented to avoid or minimize the possible unintended negative consequences of AI in a society.

As shown in Figure 2, the European Commission (2018) suggests that all stakeholders, including developers, deployers, and end-users, should meet critical requirements for realizing trustworthy AI. Seven requirements are presented as follows: (1) human agency and oversight (fundamental rights, human agency, and human oversight); (2) technical robustness and safety (resilience to attack and security, fallback plan and general safety, accuracy, and reliability and reproducibility); (3) privacy and data governance (privacy and data protection, quality and integrity of data, and access to data); (4) transparency (traceability, explainability, and


[Figure: the seven requirements arranged in a cycle, to be continuously evaluated and addressed throughout the AI system's life cycle: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; accountability.]

Figure 2: Seven requirements for trustworthy AI and their interrelationship
(Source: European Commission (2018), p. 15.)

communication); (5) diversity, non-discrimination and fairness (avoidance of unfair bias, accessibility and universal design, and stakeholder participation); (6) societal and environmental wellbeing (sustainable and environmentally friendly AI, social impact, and society and democracy); and (7) accountability (auditability, minimization and reporting of negative impacts, trade-offs, and redress) (European Commission, 2018).

Similar to the Ethics Guidelines for Trustworthy AI by the European Commission, many organizations and governments have offered ethics guidelines and principles for AI. As summarized in Table 2, many documents have been formulated by private companies, government agencies, and academic institutions, many of them based in the US, the UK, and EU institutions. Table 2 shows the breakdown of ethical guidelines and principles for AI by type, geographical location, and target audience.


Type and Geographical Location        Classifications

Type of issuing organizations*        19 private companies (22.6%), 18 government agencies (21.4%),
                                      9 academic and research institutions (10.7%), 8 inter-governmental
                                      or supra-national organizations (9.5%), 7 non-profit organizations
                                      and professional associations (8.3%), 4 private sector alliances
                                      (4.8%), 1 research alliance (1.2%), 1 scientific foundation (1.2%),
                                      1 federation of worker unions, 1 political party, 4 others

Geographical location of              20 USA (23.8%), 16 international organizations, 14 UK (16.7%),
issuing organizations**               6 EU institutions, 4 Japan, 3 Germany, 3 France, 3 Finland,
                                      2 Netherlands, 1 Iceland, 1 India, 1 Singapore, 1 Norway,
                                      1 South Korea, 1 Spain, 1 UAE, 1 Australia, 1 Canada

Target audience***                    27 for multiple stakeholder groups (32.1%), 24 for companies' own
                                      employees (self-directed) (28.6%), 10 for the public sector (11.9%),
                                      5 for the private sector (6.0%), 3 for developers or designers (3.6%),
                                      1 for organizations, 1 for researchers

* 4 documents are double counted and 4 are not classified.
** 3 are not classified.
*** 13 are not classified.

Source: Compiled by author from Jobin et al. (2019).

Table 2: Ethical guidelines and principles by type and geographical location

Based on content analysis, Jobin and her colleagues identified 11 key ethical principles along with related values. Some key findings on ethical principles from the content analysis by Jobin and her colleagues (2019) are summarized in the following table. As the table indicates, transparency and related values (73/84) appeared the most, followed by justice/fairness (68/84), among 11 key ethical principles including transparency, justice/fairness, non-maleficence, responsibility, privacy, beneficence, freedom/autonomy, trust, sustainability, dignity, and solidarity. Non-maleficence and responsibility are also primary principles, each found in 60 out of 84 documents.


Ethical Principles    No. of Documents    Related Values

Transparency          73                  Explainability, explicability, understandability,
                                          interpretability, communication, disclosure, showing
Justice/fairness      68                  Consistency, inclusion, equality, equity, (non-)bias,
                                          (non-)discrimination, diversity, plurality, accessibility,
                                          reversibility, remedy, redress, challenge, access and
                                          distribution
Non-maleficence       60                  Security, safety, harm, protection, precaution,
                                          prevention, integrity (bodily or mental), non-subversion
Responsibility        60                  Accountability, liability, acting with integrity
Privacy               47                  Personal or private information
Beneficence           41                  Benefits, well-being, peace, social good, common good
Freedom/autonomy      34                  Freedom, autonomy, consent, choice, self-determination,
                                          liberty, empowerment
Trust                 28
Sustainability        14                  Environment (nature), energy, resources
Dignity               13
Solidarity            6                   Social security, cohesion

Table 3: Ethical principles and related values
(Source: Jobin et al. (2019), p. 7.)
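As a quick illustration (this calculation is ours, not part of the source analysis), the counts in the table above can be converted into the share of the 84 reviewed documents that mention each principle:

```python
# Illustrative sketch (not from the report): convert the Table 3 counts into
# the percentage of the 84 documents reviewed by Jobin et al. (2019) that
# mention each principle.
TOTAL_DOCS = 84
principle_counts = {
    "transparency": 73, "justice/fairness": 68, "non-maleficence": 60,
    "responsibility": 60, "privacy": 47, "beneficence": 41,
    "freedom/autonomy": 34, "trust": 28, "sustainability": 14,
    "dignity": 13, "solidarity": 6,
}

# Share of documents mentioning each principle, rounded to one decimal.
coverage = {p: round(100 * n / TOTAL_DOCS, 1) for p, n in principle_counts.items()}

print(coverage["transparency"])  # 86.9
print(coverage["solidarity"])    # 7.1
```

Seen this way, even the most common principle (transparency) is absent from roughly one document in eight, which underlines the paper's point that there is broad but not universal consensus.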


As noted in the earlier section, international organizations such as the EU have been actively working on formulating ethical guidelines for AI. For example, the European Parliament took an initial action by asking the European Commission to assess AI's social impacts, which led to a set of "recommendations on civil law rules on robotics" in early 2017 (Madiega, 2019). This was followed by the Commission's coordinated plan on AI for EU member countries, which was later endorsed by the EU Council and then became a foundation for the Commission's Ethics Guidelines for Trustworthy AI (Madiega, 2019). The guideline formulated by the High-Level Expert Group on AI of the Commission is considered one of the most comprehensive frameworks for offering critical principles that various stakeholders should consider in designing, developing, and deploying AI. In particular, the guideline emphasizes the core nature of a "human-centric approach", which has been widely accepted beyond the EU. The nature of this human-centric approach to AI is summarized as follows:

    The human-centric approach to AI strives to ensure that human values are central to the way in which AI systems are developed and deployed, used and monitored, by ensuring respect for fundamental rights, including those set out in the Treaties of the European Union and Charter of Fundamental Rights of the European Union, all of which are united by reference to a common foundation rooted in respect for human dignity, in which the human being enjoys a unique and inalienable moral status. This also entails consideration of the natural environment and of other living beings that are part of the human ecosystem, as well as a sustainable approach enabling the flourishing of future generations to come.1

Emphasizing the lawfulness, ethics, and robustness of a trustworthy AI system from a lifecycle perspective, the guideline essentially promotes ethical principles for ensuring reliable and trustworthy AI. The guideline emphasizes seven key requirements for EU member countries: (1) human agency and oversight, (2) robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) societal and environmental well-being, and (7) accountability (Madiega, 2019).

Regulating AI: The Case of Autonomous Vehicles in Different Countries

As noted earlier, regulatory instruments and levels of regulation vary widely from country to country. We conduct an exploratory comparison of the regulatory approaches of four major countries (China, Japan, Korea, and the US) in terms of the regulatory intensity applied to AVs. The three Asian countries were selected because they are considered economic leaders while also representing countries at different levels of economic development in the region. The US was selected as a basis for comparison, as the country represents market-based and relatively non-interventionist regulation policies.

1. Current status of autonomous vehicle technology development

An autonomous vehicle (AV) is a vehicle that can navigate by itself without human intervention (Taeihagh & Lim, 2019). According to SAE International (originally the Society of Automotive Engineers), automated driving can be divided into six levels, from 0 to 5 (the higher the level, the more automated the vehicle), based on the level of sophistication and automation. As Figure 3 summarizes, AVs are equipped with various autonomous features: driver support systems range from automatic emergency braking (Level 0) to lane centering systems (Level 2: partial "hands off" automation), while "automated driving systems" range from traffic jam "chauffeurs" (Level 3: conditional "eyes off" automation) to the highest level of complete driverless taxis in all conditions (Level 5: full "steering wheel" automation) (QVRTZ, 2019). Several companies, including Waymo, are already using Level 4 AVs in some areas for ride-sharing or delivery services, but these vehicles have not yet entered the retail market. It has been said that the substantive impact of AVs might begin when driverless automobiles are introduced in local areas.

1. Glossary section of the Ethics Guidelines for Trustworthy AI (2019). Requoted from Madiega (2019), p. 3.


[Figure: the SAE levels of driving automation, Levels 0-5.

Levels 0-2 are driver support features: the human is driving whenever these features are engaged, even when not steering, and must constantly supervise them, steering, braking, or accelerating as needed to maintain safety.
• Level 0: warnings and momentary assistance only (e.g., automatic emergency braking, blind spot warning, lane departure warning).
• Level 1: steering OR brake/acceleration support (e.g., lane centering OR adaptive cruise control).
• Level 2: steering AND brake/acceleration support (e.g., lane centering AND adaptive cruise control at the same time).

Levels 3-5 are automated driving features: the human is not driving when these features are engaged, even if seated in the driver's seat.
• Level 3: drives the vehicle under limited conditions; the human must take over when the feature requests (e.g., traffic jam chauffeur).
• Level 4: drives under limited conditions and will not require the human to take over (e.g., local driverless taxi; pedals and steering wheel may or may not be installed).
• Level 5: same as Level 4, but the feature can drive everywhere in all conditions.]

Figure 3: Levels of autonomous vehicles
(Source: SAE (2018). https://www.sae.org/news/press-room/2018/12/sae-international-releases-updated-visual-chart-for-its-%E2%80%9Clevels-of-driving-automation%E2%80%9D-standard-for-self-driving-vehicles)
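The six-level classification above can be summarized in a small sketch (an illustration of the chart, not part of the SAE standard itself): the key distinction is between driver support features (Levels 0-2), where the human drives and must supervise, and automated driving features (Levels 3-5), where the human is not driving while the feature is engaged.

```python
# Illustrative sketch (not from the SAE standard): the six levels of Figure 3
# as data, split into driver-support (0-2) and automated-driving (3-5) features.
SAE_LEVELS = {
    0: "warnings and momentary assistance only",
    1: "steering OR brake/acceleration support",
    2: "steering AND brake/acceleration support",
    3: "drives under limited conditions; human must take over on request",
    4: "drives under limited conditions; no human takeover required",
    5: "drives everywhere in all conditions",
}

def human_is_driving(level: int) -> bool:
    """True for driver-support levels (0-2), where the human drives and supervises."""
    if level not in SAE_LEVELS:
        raise ValueError(f"unknown SAE level: {level}")
    return level <= 2

print(human_is_driving(2))  # True  (driver support: e.g., lane centering + ACC)
print(human_is_driving(4))  # False (automated driving: e.g., local driverless taxi)
```

The regulatory comparison that follows turns precisely on this boundary: the liability and licensing questions the four countries answer differently arise once the human is no longer driving.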

2. Regulating autonomous vehicles

Table 4 presents a cross-country comparison of the specific regulations for AVs, particularly focusing on AV driving in China, Japan, Korea, and the US. We consider four regulatory issues: (1) whether the government permits autonomous driving, (2) whether the enforcement is legally binding, (3) whether the government can hold people liable based on laws or guidelines, and (4) whether the government provides any guidelines for users. We will not discuss licensing issues, since these are debated at the national level. It should also be noted that no global consensus currently exists and nation states generally have strict requirements for drivers.

The three Asian countries under examination have prohibited autonomous driving when the driving is not for testing, and enforcement is legally binding. The US, however, has placed no strict restraints on autonomous driving; a bill that would establish the federal government's role in ensuring the safety of highly automated vehicles has been referred to a federal committee. All countries except China can hold persons (rather than AVs) liable based on these laws or guidelines; as it stands, China has no official guidelines regarding the issue. Furthermore, people who want to take autonomous driving tests

Moralizing and Regulating Artificial Intelligence:
Does Technology Uncertainty and Social Risk Tolerance Matter in Shaping Ethical Guidelines and Regulatory Frameworks?

              Prohibiting free       Legally binding       Holding persons          Offering
              autonomous             enforcement           liable based on the      guidelines
              driving itself                               laws or guidelines       for users

China         Yes                    Yes                   No                       Yes

Japan         Yes                    Yes                   Yes                      No

Korea         Yes                    Yes                   Yes                      No

US            No                     No                    Yes                      Yes
              (no strict             (bill referred to                              (those who want to
              restraint)             the Committee)                                 test AVs must obtain
                                                                                    state-designated
                                                                                    insurance)

Table 4: The status of autonomous vehicle driving regulations (as of August 2019)
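Table 4's yes/no comparison can be encoded as a small lookup for quick filtering. A sketch in Python (the field names are our own shorthand; the booleans are transcribed from the table):

```python
# Table 4's cross-country comparison, encoded for programmatic filtering.
# Field names are our own shorthand; booleans are transcribed from the table.
av_rules = {
    "China": {"prohibits_free_av_driving": True,  "legally_binding": True,
              "holds_persons_liable": False, "user_guidelines": True},
    "Japan": {"prohibits_free_av_driving": True,  "legally_binding": True,
              "holds_persons_liable": True,  "user_guidelines": False},
    "Korea": {"prohibits_free_av_driving": True,  "legally_binding": True,
              "holds_persons_liable": True,  "user_guidelines": False},
    "US":    {"prohibits_free_av_driving": False, "legally_binding": False,
              "holds_persons_liable": True,  "user_guidelines": True},
}

def countries_where(field):
    """Countries for which a given regulatory condition holds."""
    return [country for country, rules in av_rules.items() if rules[field]]
```

For example, `countries_where("user_guidelines")` recovers the observation made in the text that only China and the US provide user guidelines.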

must obtain state-designated insurance in Korea. Governments' provision of user guidelines for autonomous driving demonstrates their interest in the development of autonomous driving technology and commercialization. China and the US have user guidelines while Japan and Korea do not.

The US Congress passed a bill titled the Safely Ensuring Lives Future Deployment and Research in Vehicle Evolution Act (more commonly known as the "SELF DRIVE Act") in 2017. Proponents of the bill claim that by encouraging the testing and deployment of AVs, the bill establishes a federal role in ensuring the safety of highly automated vehicles. It has been received in the Senate, read twice, and referred to the Committee on Commerce, Science, and Transportation (The US Congressional Research Service, 2019). In addition to this bill, the US is the first country to introduce legislation to permit the testing of automated vehicles (UK Department for Transport, 2015). It has also introduced "A Vision for Safety 2.0," federal guidelines for the automobile industry and individual states regarding automated driving systems (ADSs) that builds on the National Highway Traffic Safety Administration's 2016 guidelines. This document has two sections—voluntary guidance and technical assistance for states. The new guidelines focus on Levels 3 to 5 of the SAE International's automation classification, stipulating that entities do not need to wait to test or deploy an ADS, revising the elements of safety self-assessments, aligning federal guidelines with the latest developments and terminology, and clarifying the role of the federal and state governments. The guidelines emphasize their voluntary nature and do not include compliance requirements or enforcement mechanisms. They represent an attempt to establish best practices for state legislatures, outlining the common safety-related components of ADSs that states should consider incorporating into their legislation. Additionally, they include the US Department of Transportation's view regarding federal and state roles and offer best practices for highway safety officials.

China is also preparing regulations to ensure safe AV testing. Notably, Chinese regulations and policies regarding autonomous driving are seen as relatively moderate compared to their strict control of some other aspects of driving, such as restrictions stating that public maps can only be accurate to a scale of

Philosophical point of view for social implementation

50 meters at most, and that drivers must keep both hands on the steering wheel at all times (KPMG International, 2018). The road-testing regulation was established in April 2018, and the guidelines for building safe, closed test sites were released in July 2018 (Xinying, 2019). The Chinese do not appear to be very concerned with safety and liability issues; their concerns focus on the technological availability of AVs and economic considerations related to their use (Dickinson, 2018).

Likewise, Japan is preparing for the commercialization of level 3 AVs and will enact a new legal amendment for autonomous driving. The National Diet of Japan passed a bill amending the current Road Transport Vehicle Act to include "automatic operating devices" as a vehicle in May 2019. In addition, it passed another bill that allows people to use level 3 AVs in certain conditions and to use cell phones during autonomous driving (Matsuda et al., 2019). Although there has been some progress in AV-related regulations thanks to the May 2019 amendments of Japan's Road Traffic Act, Matsuda and his colleagues (as quoted below) stressed that there are still several issues to be resolved in the future.

"… One of the main outstanding issues is determination of the rules for criminal and civil liabilities in the event of a traffic accidents involving self-driving vehicles. Because these provisions have not yet been updated, a driver may still be held responsible for criminal or civil liabilities for a traffic accident caused by a vehicle under automated driving even if the driver operated the self-driving vehicle properly. This issue affects not only drivers but also manufactures and insurance companies, and is therefore likely one of the thornier issues remaining to be resolved" (Matsuda et al., 2019).

In Korea, the Road Traffic Act, Automobile Management Act, and Automobile Damages Guarantee Act currently regulate the use of automobiles, but that will change in 2020 when the Act on the Promotion and Support of the Commercialization of Self-driving Cars comes into force. The Road Traffic Act regulates traffic problems and establishes rules for safe operation. It presumes the presence of a driver who is required to manipulate the steering wheel and braking system. However, the Automobile Control Act defines AVs as cars that can be operated without any driver or passenger input. The Enforcement Rules of the Act, enacted in 2016, specify the requirements for the safe operation and testing of AVs, meaning that the laws are in conflict with each other to some extent regarding whether "a driver" can refer to an automated system. At present, the Ministry of Land, Infrastructure, and Transport requires a temporary operation permit for the testing of AVs, and the "Requirements for Safe Operation of Autonomous Vehicles and Trial Operation Regulations (as of March 31, 2017)" stipulate that a preliminary test of 5,000 km must be conducted (Ministry of Science and ICT and KISTEP, 2018).

KPMG International's annual reports provide insight into the current state of AV testing. The reports evaluate countries' AV readiness and AV testing restrictions, giving countries scores based on reviews of media articles, government press releases, and government regulations. A higher score indicates that the country's regulations support AV use and impose fewer restrictions on when, where, and how testing of AVs can occur (KPMG International, 2019). According to the report, among the four countries considered in this study, Japan has the strictest regulations on AV testing with a score of 0.333, while Korea and the US have somewhat fewer restrictions on AV testing, both receiving scores of 0.833; China's score was 0.5 in AV regulation (KPMG International, 2019). The 2018 scores are largely the same, although a different scale was used (KPMG International, 2018).² Similar to the AV regulation score, Korea and the US have higher scores than China and Japan in terms of institutional responsibility for AVs (KPMG International, 2019). According to KPMG International's indicator of the AV-focused government agency, South Korea's score is 0.857 and the US's is 0.714, while China's is 0.643 and Japan's is 0.571, the lowest among the four countries (KPMG International, 2019). Considering that regulations are often affected and influenced by the voices of private businesses, the number of AV firms in a country might be a factor closely associated with the nature and level of regulations on AV test driving and safety. According

2. According to the 2018 scores on AV regulation, Japan, China, Korea, and the US were scored at 3, 4, 6, and 6, respectively (KPMG International, 2018).


[Radar chart (0 to 1 scale) comparing China, Japan, Korea, and the US on five dimensions: Consumer Acceptance, Population of AV Testing Areas, AV Institutional Responsibility, AV Regulation, and AV Tech Firm Headquarters.]

Figure 4: Regulatory and social dimensions for autonomous vehicles


(Source: Made by the author based on the data from KPMG International (2019))

to the index representing the number of AV technology firms' headquarters based on KPMG International (2019), the US has the highest score of 0.176, followed by Korea (0.043). Japan's score is 0.029, while China (0.005) scored the lowest among the four countries (KPMG International, 2019).

In addition to AV regulations, social acceptance of AVs appears to differ among countries. As part of the consumer AV acceptance index, a consumer AV acceptance score—based on a branded research online consumer panel survey—shows that China scored the highest with 0.783, followed by South Korea's score of 0.725 (KPMG International, 2019). Japan and the US scored 0.442 and 0.103 respectively (KPMG International, 2019). In addition, the proportion of the population living in AV testing areas (cities) varies because the numbers and areas of designated testing sites differ among countries. The US scored 0.355 for the highest percentage of people living in an AV testing area, followed by Japan with a score of 0.301; China and Korea scored 0.043 and 0.020 respectively (KPMG International, 2019).

The regulatory and social dimension scores of AV regulation for these four countries are compared in Figure 4.

The figure suggests that the US and Korea are very proactive and less restrictive about AVs, and have good institutional support for AV test driving. Japan is somewhat passive and cautious, with less institutional arrangement for AVs from the government. However, it is interesting to note that although Korean consumers are relatively receptive to AVs, test driving is limited to certain areas (the smallest population living in test-driving areas). Chinese consumers are the most receptive to AVs, while the US, despite scoring lowest on consumer acceptance, allows test driving in more areas than the three other countries, as indicated by the proportion of the population in test areas. This suggests that the US is the least strict country when it comes to autonomous driving. It has not enacted specific legislation regarding AVs, but has instead established guidelines based on SAE International standards that are used when establishing policies. In the US and Germany, AVs have already been put into operation on public roads.
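The KPMG International (2019) index values quoted in this section are easier to compare side by side. The sketch below (Python) tabulates the four dimensions for which the text reports all four countries; the dictionary keys are our own shorthand, and the values are transcribed from the paragraphs above:

```python
# KPMG International (2019) AV readiness sub-scores on a 0-1 scale,
# transcribed from the text above (dimension keys are our own shorthand).
scores = {
    "China": {"av_regulation": 0.500, "consumer_acceptance": 0.783,
              "testing_area_population": 0.043, "firm_headquarters": 0.005},
    "Japan": {"av_regulation": 0.333, "consumer_acceptance": 0.442,
              "testing_area_population": 0.301, "firm_headquarters": 0.029},
    "Korea": {"av_regulation": 0.833, "consumer_acceptance": 0.725,
              "testing_area_population": 0.020, "firm_headquarters": 0.043},
    "US":    {"av_regulation": 0.833, "consumer_acceptance": 0.103,
              "testing_area_population": 0.355, "firm_headquarters": 0.176},
}

def rank(dimension):
    """Countries ordered from highest to lowest on one dimension."""
    return sorted(scores, key=lambda country: scores[country][dimension],
                  reverse=True)

for dim in ("av_regulation", "consumer_acceptance",
            "testing_area_population", "firm_headquarters"):
    print(f"{dim}: " + " > ".join(rank(dim)))
```

Ranking by dimension makes the contrast plain: the US leads on testing footprint and firm headquarters while scoring lowest on consumer acceptance.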


Meanwhile, Japan has not yet passed legislation, but is preparing for Level-5 autonomous vehicle testing in advance of the Tokyo Olympics (Lee, 2018). Both China and Japan have declared their intentions to boost autonomous vehicle commercialization, and both have already passed related bills to allow test driving in limited areas. Additionally, Japan allows people to use cell phones while engaged in level 3 autonomous driving. Korea has also established a new law that addresses the commercialization of AVs, which is similar to the law for testing AVs. Despite the differences in regulating AVs, countries are similarly moving toward developing regulatory frameworks by introducing restrictions, limiting driving tests, and providing terms of technical standards. That said, there are still differences among these four countries' regulations in terms of technology-supported driving and safety measures.

Conclusions and Policy Recommendations

As governments consider disruptive technologies a source of future economic competitiveness, many have been shifting from a regulatory paternalistic position to a somewhat deregulatory position, as seen in sandbox initiatives. While the regulation of disruptive technologies has weakened worldwide because many people believe that regulation can harm the development of novel technologies, the risks and uncertainties associated with disruptive technologies remain valid and require some form of regulation. At the same time, ethical guidelines often precede specific and formal regulations due to the uncertain nature of those novel technologies. This study suggests there are two distinct approaches to new disruptive technologies—an ethical approach and a legal/regulatory approach. Examining the ethical guidelines of AI and the regulatory positions on AVs, this study suggests an ethical approach as an informal and unofficial guideline with key principles, which is often introduced before specific and formal regulations are adopted by governments. The ethical approach offers a broad range of key values to be considered in the design, development, deployment, and use of particular disruptive technologies. This study also suggests that regulatory decisions on disruptive technologies are often affected by uncertainties regarding the expected outcomes and by social risk tolerance in relation to a specific technology. The regulatory positions of different countries might vary, primarily because of the expected roles of governments and market competition.

Regulatory schemes for novel technologies are not necessarily different from those for conventional technologies in a society, because regulatory politics are often similarly applied regardless of the type of technology. However, we believe that disruptive technologies might create new regulatory dynamics in a country because of their novelty as well as their social risks and perceived uncertainty. Considering the implications of ethical and regulatory approaches, as well as their strengths and weaknesses, societies must manage disruptive technologies by carefully adopting and designing both approaches in order to address their uncertainties and perceived social risk. The following recommendations are proposed:


Recommendation 1: Moralizing disruptive technologies should precede regulating them, and should be fully discussed and shared among different stakeholders. Before a society adopts and enacts specific regulatory frameworks for disruptive technologies, ethical guidelines (i.e., AI principles or AI ethical guidelines) must be jointly formulated based upon a thorough deliberation of particular disruptive technologies by different stakeholders representing industries, researchers, consumers, NGOs, international organizations, and policymakers.

Recommendation 2: AI ethical guidelines should support sustainable and human-centric societies by minimizing the negative socio-economic and international consequences of disruptive technologies (i.e., inequality, unemployment, psychological problems, etc.), while maximizing their potential benefits for environmental sustainability and quality of life, among others.

Recommendation 3: Once a general consensus is reached on general ethical guidelines, they should be elaborated and specified in detail, targeting individual stakeholder groups representing different actors and sectors. Specific AI ethical guidelines should be developed and customized for AI designers, developers, adopters, users, etc., based on the AI lifecycle. In addition, industry- and sector-specific ethical guidelines should be developed and applied to each sector (care industry, manufacturing industry, service industry, etc.).

Recommendation 4: In regulating AI and other disruptive technologies, governments should align regulations with the key values and goals embedded in various AI ethical guidelines (transparency, trustworthiness, lawfulness, fairness, security, accountability, robustness, etc.) and aim to minimize the potential social risks and negative consequences of AI by preventing and restricting possible data abuses or misuses and by ensuring fair and transparent algorithms, in addition to establishing institutional and financial mechanisms through which the negative consequences of AI are systematically corrected.

Recommendation 5: Governments should ensure the quality of AI ecosystems by increasing government and non-government investment in R&D and human resources for AI, by maintaining fair market competition among AI-related private companies, and by promoting AI utilities for social and economic benefits.

Recommendation 6: Governments should carefully design and introduce regulatory sandbox approaches that prevent unnecessarily strict and obstructive regulations that may impede AI industries, while also facilitating the development of AI and the exploration of AI-related innovative business models.


References

Aghion, P., Algan, Y., Cahuc, P., & Shleifer, A. (2010). Regulation and distrust. The
Quarterly Journal of Economics, 125(3), 1015-1049.

Chang, I. (2019). US Legislative Trends and Implications for Gene Editing Technology.
Study on The American Constitution, 30(1), 213-242.

Choe, Y. S., & Jeong, J. (1993). Charitable Contributions by Low- and Middle-Income
Taxpayers: Further Evidence with a New Method. National Tax Journal, 46, 33–39.

Beck, T., Levine, R., & Demirgüç-Kunt, A. (2002). Law and finance: why does legal origin
matter? The World Bank.

Becker, G. S., & Stigler, G. J. (1974). Law enforcement, malfeasance, and compensation
of enforcers. The Journal of Legal Studies, 3(1), 1-18.

Black, J. (1998). Regulation as Facilitation: Negotiating the Genetic Revolution. Mod. L. Rev., 61, 621.

Berkhout, J., & Lowery, D. (2010). The changing demography of the EU interest system
since 1990. European Union Politics, 11(3), 447-461.

Borges, B. J. P., Arantes, O. M. N., Fernandes, A., Broach, J. R., Fernandes, P., & Bueno, M.
(2018). Genetically Modified Labeling Policies: Moving Forward or Backward? Frontiers
in bioengineering and biotechnology, 6, 181.

Brodsky, J. S. (2016). Autonomous vehicle regulation: How an uncertain legal landscape may hit the brakes on self-driving cars. Berkeley Tech. L.J., 31, 851.

Brunel, C., & Levinson, A. (2013). Measuring Environmental Regulatory Stringency. OECD
Trade and Environment Working Papers, 2013(5), 0_1.

Castor, A. (11 May 2018). How Japan Is Creating a Template for Cryptocurrency
Regulation. Bitcoin magazine. https://bitcoinmagazine.com/articles/how-japan-
creating-template-cryptocurrency-regulation

Charo, R. A. (2016). The legal and regulatory context for human gene editing. Issues in
Science and Technology, 32(3), 39.

Charron, N., Dahlström, C., & Lapuente, V. (2012). No law without a state. Journal of
Comparative Economics, 40(2), 176-193.

Cheon, C. (2018). Global ICO Regulation Trends and Implications. Korea Capital Market
Institute Issue report, 18-06.


Cohen, J. (19 March 2019). WHO panel proposes new global registry for all CRISPR
human experiments, Science. https://www.sciencemag.org/news/2019/03/who-panel-
proposes-new-global-registry-all-crispr-human-experiments

ComplyAdvantage. (2018). Cryptocurrency Regulations Around The World. https://complyadvantage.com/blog/cryptocurrency-regulations-around-world

Cook, K., Shortell, S. M., Conrad, D. A., & Morrisey, M. A. (1983). A theory of organizational
response to regulation: the case of hospitals. Academy of Management Review, 8(2),
193-205.

Cyranoski, D. (2016). CRISPR gene-editing tested in a person for the first time. Nature.
doi:10.1038/nature.2016.20988

Cyranoski, D., & Ledford, H. (2018). Genome-edited baby claim provokes international
outcry. Nature. https://www.nature.com/articles/d41586-018-07545-0

Cyranoski, D. (2019). China to tighten rules on gene editing in humans. Nature. https://www.nature.com/articles/d41586-019-00773-y

Cyranoski, D. (2019). China announces hefty fines for unauthorized collection of DNA.
Nature. https://www.nature.com/articles/d41586-019-01868-2

Cyranoski, D. (2019). Japan approves first human-animal embryo experiments. Nature. https://www.nature.com/articles/d41586-019-02275-3

Das, S. (2017). China’s Central Bank Completes Digital Currency Trial on a Blockchain,
CCN. https://www.ccn.com/chinas-central-bank-completes-digital-currency-trial-
blockchain

De Bruycker, I., & Beyers, J. (2015). Balanced or biased? Interest groups and legislative
lobbying in the European news media. Political Communication, 32(3), 453-474.

Deng, C. (2018, January 11). China Quietly Orders Closing of Bitcoin Mining Operations,
The Wall Street Journal. https://www.wsj.com/articles/china-quietly-orders-closing-of-
bitcoin-mining-operations-1515594021

Dickinson, S. (2018, July 17). Self Driving Cars in China: The Absence of Non-Technical
Barriers, China Law Blog. https://www.chinalawblog.com/2018/07/self-driving-cars-in-
china-the-absence-of-non-technical-barriers.html

Djankov, S., La Porta, R., Lopez-de-Silanes, F., & Shleifer, A. (2002). The regulation of
entry. The quarterly Journal of economics, 117(1), 1-37.

Downey, H. K., & Slocum, J. W. (1975). Uncertainty: Measures, research, and sources of
variation. Academy of Management journal, 18(3), 562-578.


European Central Bank. (2012). Virtual Currency Schemes. https://www.ecb.europa.eu/pub/pdf/other/virtualcurrencyschemes201210en.pdf

European Parliament. (2018). Report on three-dimensional printing, a challenge in the fields of intellectual property rights and civil liability (2017/2007(INI)).

Fleming, L. (2001). Recombinant Uncertainty in Technological Search. Management Science, 47(1), 117-132.

Fordham, B., & McKeown, T. (2003). Selection and Influence: Interest Groups and
Congressional Voting on Trade Policy. International Organization, 57(3), 519-549.

Gai, P., Kemp, M., Sánchez Serrano, A., & Schnabel, I. (2019). Regulatory complexity and
the quest for robust regulation (No. 8). European Systemic Risk Board.

Glaeser, E. L., & Shleifer, A. (2002). Legal origins. The Quarterly Journal of Economics,
117(4), 1193-1229.

Globus, R., & Qimron, U. (2018). A technological and regulatory outlook on CRISPR crop
editing. Journal of cellular biochemistry, 119(2), 1291-1298.

Go, J. (December 2). [Genetic Editing Baby Controversy] A rekindled debate on human embryo research. Dong-A Science. http://dongascience.donga.com/news.php?idx=25463

Hacker, P., & Thomale, C. (2018). Crypto-Securities Regulation: ICOs, Token Sales and
Cryptocurrencies under EU Financial Law. European Company and Financial Law Review,
15(4), 645-696.

Hail, L., & Leuz, C. (2006). International differences in the cost of equity capital: Do legal
institutions and securities regulation matter?. Journal of accounting research, 44(3),
485-531.

Hofstede, G. (2015). The 6-D model of national culture. https://geerthofstede.com/culture-geert-hofstede-gert-jan-hofstede/6d-model-of-national-culture/

Houben, R., & Snyers, A. (2018). Cryptocurrencies and blockchain: legal context and
implications for financial crime, money laundering and tax evasion. Policy Department
for Economic, Scientific and Quality of Life Policies, European Parliament.

Hunt, G., & Mehta, M. (Eds.) (2013). Nanotechnology: “Risk, Ethics and Law”. Routledge.


Hwang, Y. (2018, August 29). Deregulation of human embryo research and other...There’s
going to be a controversy over bioethics. http://www.hani.co.kr/arti/society/
health/859834.html

Jeung, T. (2018, October 10). Japan Is Drafting a Rulebook for Ethically Editing the Genes
of Human Embryos: Which country will be first to create a CRISPR baby? https://www.
inverse.com/article/49725-governments-regulate-human-embryo-gene-editing

Kahan, D. M., Jenkins‐Smith, H., & Braman, D (2011). Cultural cognition of scientific
consensus. Journal of risk research, 14(2), 147-174.

Kerwer, D. (2005). Rules that many use: Standards and global regulation. Governance,
18(4), 611-632.

Kim, B. (2015). A Study on Uber Taxi and the Fit of It. Chonbuk Law Review, 46, 99-134.

Kisiel, D. (2018). Legal concept of internet currencies. Financial Law Review, 11(3), 81-91.

Kolacz, M. K., Quintavalla A., & Yalnazov. O. (2019). Who Should Regulate Disruptive
Technology? European Journal of Risk Regulation, 10(1), 4-22.

KPMG International. (2018). Autonomous Vehicles Readiness Index - Assessing countries’ openness and preparedness for autonomous vehicles. https://assets.kpmg/content/dam/kpmg/xx/pdf/2018/01/avri.pdf

KPMG International. (2019). 2019 Autonomous Vehicles Readiness Index – Assessing countries’ preparedness for autonomous vehicles. https://assets.kpmg/content/dam/kpmg/xx/pdf/2019/02/2019-autonomous-vehicles-readiness-index.pdf

Kun, L., & Xiaodong, W. (2019, March 1). Rules to be revised on organ donations. China
Daily. http://www.chinadaily.com.cn/a/201903/01/WS5c78936aa3106c65c34ec237.
html

Lander, E. S., Baylis, F., Zhang, F., Charpentier, E., Berg, P., Bourgain, C., Friedrich, B.,
Joung, J. K., Li, J., Liu, D., Naldini, L., Nie, J., Qiu, R., Schoene-Seifert, B., Shao, F., Terry,
S., Wei, W., & Winnacker, E. (2019, March 19). Adopt a moratorium on heritable genome
editing, Nature. https://www.sciencemag.org/news/2019/03/who-panel-proposes-new-
global-registry-all-crispr-human-experiments

La Porta, R., Lopez-de-Silanes, F., Shleifer, A., & Robert, V. (1997). Legal determinants of
external finance. Journal of Finance 52, 1131–1150.


La Porta, R., Lopez-de-Silanes, F., Shleifer, A., & Robert, V. (1998). Law and finance.
Journal of Political Economy 106, 1113–1155.

La Porta, R., Lopez-de-Silanes, F., Shleifer, A., & Robert, V. (1999). The quality of
government. Journal of Law, Economics, and Organization 15, 222–279.

La Porta, R., Lopez-de-Silanes, F., & Shleifer, A. (2008). The economic consequences of
legal origins. Journal of economic literature 46(2), 285-332.

Lee, S. (2018). Issues on Regulatory Reform for Industrial Revitalization of Self-driving Cars. ICT Spot Issue, IITP, S18-06.

Lee, S. & Kim, H. (2018). International Regulatory Trends on Genome Editing Research
Using Human Embryo and Its Implication. Korean Journal of Medicine and Law 26(2),
71-96.

Marchant, G., Meyer, A., & Scanlon, M. (2010). Integrating social and ethical concerns
into regulatory decision-making for emerging technologies. Minn. JL Sci. & Tech., 11,
345.

Marris, C., Langford, I., Saunderson, T., & O’Riordan, T. (1997). Exploring the “psychometric
paradigm”: comparisons between aggregate and individual analyses. Risk analysis,
17(3), 303-312.

Marshall, A. (2018). New York City Goes After Uber and Lyft. Wired. https://www.wired.
com/story/new-york-city-cap-uber-lyft

Martin-Laffon, J., Kuntz, M., & Ricroch, A. E. (2019). Worldwide CRISPR patent landscape
shows strong geographical biases. Nature biotechnology. 37(6), 613-620.

Matsuda, D., Mears, E., & Shimada, Y. (2019). Legalization of Self-Driving Vehicles in
Japan: Progress Made, but Obstacles Remain. DLA Piper. https://www.dlapiper.com/en/
global/insights/publications/2019/06/legalization-of-self-driving-vehicles-in-japan

Milliken, F. J. (1987). Three types of perceived uncertainty about the environment: State,
effect, and response uncertainty. Academy of Management review, 12(1), 133-143.

Ministry of Science and ICT. (2017). Korea Provides Gene Scissors, U.S. Corrects Human
Embryo Gene Mutation. Press Releases.

Ministry of Science and ICT and KISTEP. (2018). Comparative analysis of domestic and
foreign legislation on autonomous vehicles and policy alternatives, In Science, ICT Policy
and Technology Trends, 128.


Molteni, M. (2019, July 30). The World Health Organization Says No More Gene-Edited
Babies, Wired. https://www.wired.com/story/the-world-health-organization-says-no-
more-gene-edited-babies

Nature. (2019, March 13) Hybrid embryos, ketamine drug and dark photons. https://www.
nature.com/articles/d41586-019-00790-x

Normile, D. (2019). China tightens its regulation of some human gene editing, labeling
it ‘high-risk’. Science. https://www.sciencemag.org/news/2019/02/china-tightens-its-
regulation-some-human-gene-editing-labeling-it-high-risk

Normile, D. (2019) Gene-edited foods are safe, Japanese panel concludes, Science.
https://www.sciencemag.org/news/2019/03/gene-edited-foods-are-safe-japanese-
panel-concludes

OECD. (2018). Blockchain Technology and Corporate Governance.

Ogus, A. (2005). Regulatory paternalism: When is it justified? Corporate governance in context: Corporations, states, and markets in Europe, Japan, and the US, 303-320.

Ormond, K. E., Mortlock, D. P., Scholes, D. T., Bombard, Y., Brody, L. C., Faucett, W. A.,
Garrison, N. A., Hercher, L., Isasi, R., Middleton, A., Musunuru, K., Shriner, D., Virani, A., &
Young, C. E. (2017). Human germline genome editing. The American Journal of Human
Genetics. 101(2), 167-176.

Oshiro, Y., & Ohkohchi, N. (2017). Three-dimensional liver surgery simulation: computer-
assisted surgical planning with three-dimensional simulation software and three-
dimensional printing. Tissue Engineering Part A, 23(11-12), 474-480.

Park, T. (2019). Does Uber want to tap the Korean market again? Foreign Taxi Call
Service Initiated. Hankyoreh. http://www.hani.co.kr/arti/economy/it/879525.
html#csidx7f70c05e63c5236a29bda7b854ff47f

Pinto, C. (2012). How autonomous vehicle policy in California and Nevada addresses
technological and non-technological liabilities. Intersect: The Stanford Journal of
Science, Technology, and Society, 5.

Pollock, D. (2018, March 21). G20 and Cryptocurrencies: Baby Steps Towards Regulatory
Recommendations. https://cointelegraph.com/news/g20-and-cryptocurrencies-baby-
steps-towards-regulatory-recommendations

Herskind, N., Lim, C.K., & Hoist, S. (2019). How China will shape the future of
autonomous vehicles. QVARTZ. https://www.sae.org/news/press-room/2018/12/
sae-international-releases-updated-visual-chart-for-its-%E2%80%9Clevels-of-driving-
automation%E2%80%9D-standard-for-self-driving-vehicles

75
Philosophical point of view for social implementation

Roca, J. B., Vaishnav, P., Morgan, M. G., Mendonça, J., & Fuchs, E. (2017). When risks
cannot be seen: Regulating uncertainty in emerging technologies. Research Policy, 46(7),
1215-1233.

Sabel, C., Herrigel, G., & Kristensen, P. H. (2018). Regulation under uncertainty: The
coevolution of industry and regulation. Regulation & Governance, 12(3), 371-394.

Schwinger, A. (2018, March 14). Federal court holds that CFTC can regulate
virtual currencies as commodities, Norton Rose Fulbright website. https://www.
nortonrosefulbright.com/en/knowledge/publications/6c7bcc30/federal-court-holds-
that-cftc-can-regulate-virtual-currencies-as-commodities

Shim, M. (2019, June 13). Legal Issues Related to Genetics Patent. Korea Institute of
Intellectual Property. https://www.kiip.re.kr/board/report/view.do?bd_gb=data&bd_
cd=4&bd_item=0&po_item_gb=5&po_item_cd=&po_no=12504

Shukla-Jones, A., Friedrichs, S., & Winickoff, D. E. (2018). Gene editing in an international
context: Scientific, economic and social issues across sectors. OECD Science,
Technology and Industry Working Papers, 2018(4), 0_1-51.

Siegrist, M. (2010). Psychometric paradigm. Encyclopedia of science and technology


communication, Volume 2, pp. 600-601. SAGE Publications.

Slovic, P. (1987). Perception of risk. Science, 236(4799), 280-285.

Slovic, P., Fischhoff, B., & Lichtenstein, S. (1982). Why study risk perception?. Risk
analysis, 2(2), 83-93.

Starr, C. (1969). Social benefit versus technological risk. Science, 1232-1238.

Taeihagh, A., & Lim, H. S. M. (2019). Governing autonomous vehicles: emerging


responses for safety, liability, privacy, cybersecurity, and industry risks. Transport
Reviews, 39(1), 103-128.

The Francis Crick Institute. (2019). Kathy Niakan: Human embryo genome editing
licence. https://www.crick.ac.uk/research/labs/kathy-niakan/human-embryo-genome-
editing-licence

The Law Library of Congress. (2014). Restrictions on Genetically Modified Organisms.


Global Legal Research Center.

The Library of Congress. (2018, August 16). Regulation of Cryptocurrency Around the
World. https://www.loc.gov/law/help/cryptocurrency/world-survey.php

76
Moralizing and Regulating Artificial Intelligence:
Does Technology Uncertainty and Social Risk Tolerance Matter in Shaping Ethical Guidelines and Regulatory Frameworks?

The Library of Congress. (2018, August 16). Regulation of Cryptocurrency: China.


https://www.loc.gov/law/help/cryptocurrency/china.php

The United Nations Economic and Social Council. (2017). Consolidated Resolution on
the Construction of Vehicles (R E.3).

The US Congressional Research Service. (2019). H.R.3388 - SELF DRIVE Act.


https://www.congress.gov/bill/115th-congress/house-bill/3388

Tomlinson, T. (2018). A crispr future for gene-editing regulation: a proposal for an


updated biotechnology regulatory system in an era of human genomic editing. Fordham
L. Rev., 87, 437.

Tzur, A. (2017). Uber Über regulation? Regulatory change following the emergence of
new technologies in the taxi market. Regulation & Governance. https://doi.org/10.1111/
rego.12170

UK Department for Transport. (2015). The Pathway to Driverless Cars: A detailed review
of regulations for automated vehicle technologies. https://assets.publishing.service.
gov.uk/government/uploads/system/uploads/attachment_data/file/401565/pathway-
driverless-cars-main.pdf

Van Rijssen, W. J., & Morris, E. J. (2018). Safety and Risk Assessment of Food From
Genetically Engineered Crops and Animals: The Challenges. In Genetically Engineered
Foods, pp. 335-368. Academic Press.

Vienna Convention on Road Traffic. (2009). 1968 Vienna Convention on Road Traffic:
Consolidated Resolution on Road Traffic. Revised on 14 August, 2009.

Wilson, J. (1980). Politics of Regulation. New York: Basic Books.

World Bank. (2016). World Development Report 2016: Digital Dividends. Washington, DC:
World Bank. DOI:10.1596/978-1-4648-0671-1

World Economic Forum. (2017). Global Competitiveness Index 2017-2018. whttp://


reports.weforum.org/global-competitiveness-index-2017-2018/?doing_wp_cron=1565
516422.9761869907379150390625

Xinying, Z. (2019, March 1). Ministry to speed development of self-driving vehicles.


http://www.chinadaily.com.cn/a/201903/01/WS5c78992ca3106c65c34ec27d.html

77
Definition and Recognition of AI and its Influence on the Policy: Critical Review, Document Analysis and Learning from History

Kyoung Jun Lee
School of Management, Kyung Hee University

Yujeong Hwangbo
Dept. of Social Network Science, Kyung Hee University

Abstract

Opacity of definitions hinders policy consensus; and while legal and policy measures require agreed definitions, what artificial intelligence (AI) refers to has not been made clear, especially in policy discussions. Incorrect or unscientific recognition of AI is still pervasive and misleads policymakers. Based on a critical review of AI definitions in research and business, this paper suggests a scientific definition of AI. AI is a discipline devoted to making entities (i.e., agents and principals) and infrastructures intelligent. That intelligence is the quality which enables entities and infrastructures to function (not think) appropriately (not humanlike) as an agent, principal, or infrastructure. We report that the Organization for Economic Co-operation and Development (OECD) changed its definition of AI in 2017 and how it has since improved from humanlike to rational and from thinking to action. We perform document analysis of numerous AI-related policy materials, especially those dealing with the job impacts of AI, and find that many documents which view AI as a system that mimics humans are likely to overemphasize the job loss incurred by AI. Most job-loss reports rest on either a “humanlike” definition, a “human-comparable” definition, or no definition at all. We do not find “job loss” reports that rationally define AI, except for Russell (2019). Furthermore, by learning from history, we show that automation technologies such as photography, automobiles, ATMs, and Internet intermediation did not reduce human jobs. Instead, we confirm that automation technologies, as well as AI, create numerous jobs and industries, on which our future AI policies should focus. Just as machine learning systems learn from valid data, AI policymakers should learn from history to gain a scientific understanding of AI and an exact understanding of the effects of automation technologies. Ultimately, good AI policy comes from a good understanding of AI.

Philosophical point of view for social implementation

1. Scientific understanding of AI

How one recognizes something influences their attitude when dealing with it. With AI being a very new concept compared with traditional subjects such as physics, economics, and sociology, there have been numerous misunderstandings; and while these have been overcome by the AI communities themselves, there is still incorrect and unscientific recognition of AI. Definitional ambiguity hampers the possibility of conversation; and although legal and regulatory intervention requires agreed-upon definitions, consensus surrounding the definition of AI has been elusive, especially in policy conversations (Krafft et al., 2020). In the following sections, we attempt to correct this misconception, thereby redefining AI.

1.1. AI is a discipline not an entity

Although AI is a discipline, some view it as a physical thing, in other words, a machine or entity. For example, the physicist Stephen Hawking told the BBC that “[the] development of full artificial intelligence could spell the end of the human race” (Cellan-Jones, 2014). This statement highlights Stephen Hawking’s misunderstanding of AI, which, in turn, can mislead mass media and people. Just as he regarded AI as an entity and not a discipline, the non-AI community and non-professional community sometimes show their misunderstanding of AI by defining it as “machines performing humanlike cognitive functions” (OECD, 2017) or “intellectual machines and systems… that could automatically sense people’s situations or expectations, and offer necessary information before it is required” (Ema et al., 2016). That said, mainstream AI research communities have known AI is an activity devoted to making machines intelligent¹ (Nilsson, 2010), is the science of making machines smart (Hassabis, 2015), and is a discipline. The most frequently used textbook in AI, “Artificial Intelligence: A Modern Approach” (Russell & Norvig, 1995), says that AI is “one of the newest fields in science and engineering”. Textbooks older than this also explain that AI is the study of how to make computers do things which, at the moment, people do better (Rich, Knight & Nair, 1985); the study of mental faculties through the use of computational models (Charniak & McDermott, 1985); and the study of the computations that make it possible to perceive, reason, and act (Winston, 1992).

1.2. AI is not about humans; it should be based on rationalism

The definition of AI should not include the word “human”. Physics is not about humans, chemistry is not about humans; both are natural sciences. History is about humans, sociology is about humans; these are humanities and social science, respectively. AI is the science of the artificial (Simon, 1969); it is not a science about humans. A natural science similar to AI is brain science, which is concerned with how human and animal brains work. AI, however, is not about how the human brain works, since even animals can be intelligent. As such, AI should not deal solely with human intelligence. Including the word “human” in the definition of AI confines the scope of the discipline and misleads academic and practitioner communities. AI is simply an activity that makes certain entities intelligent. It is not about making machines humanlike in intelligence; nor is it about making machines more intelligent than humans, despite numerous non-professionals explaining AI as trying to make something more intelligent than a human (Bostrom, 2014; Cellan-Jones, 2014; Clifford, 2017; Manyika et al., 2017; Niyazov, 2019; John, 2019; Adel, 2019).

We found evidence that even AI researchers, such as Rich and Knight (1991), incorrectly define AI as about making humanlike intelligence or human-comparable intelligence. Defining AI as human-related is a very common mistake in the non-AI and non-professional communities, such as with the aforementioned OECD (2017) and Ema et al. (2016). Merriam-Webster also shows an incorrect understanding of AI by defining it as “the capability of a machine to imitate intelligent human behavior”.

1. AI is the activity devoted to making machines intelligent, and intelligence is that quality which enables an entity to function appropriately and with foresight in its environment
(Nilsson, 2010).


This misconception of AI as “imitating humans” comes from the misunderstanding of Alan Turing’s imitation game, the so-called Turing Test. Alan Turing, the father of computer science, suggested using the test as an operational definition of a “machine that can think”. If a machine can pass the test, then he suggested we can say the machine can think. However, different from his original intention, early AI scholars considered passing the imitation game as the goal of AI. Many AI researchers began to think that the goal of AI was to make a machine that is indiscernible from a human.

However, this outdated belief began to change after Hayes and Ford’s speech at the International Joint Conference on Artificial Intelligence (IJCAI) in Montreal, Canada in 1995. Hayes and Ford asserted that the Turing Test has harmed AI development. They explained how, to be able to fly, it is not necessary for us to construct a bird-like flying machine or a machine that is indiscernible from a bird. Just as aeronautics is based on the Bernoulli equation (Bernoulli, 1738) and not ornithology, AI does not have to be based on brain science. Russell and Norvig (1995) also referred to Hayes and Ford (1995) in their famous book, “Artificial Intelligence: A Modern Approach”.

They propose two dimensions on the view of AI: humanlike or rational, and thinking or acting. In choosing rationality over humanlikeness and acting over thinking, theirs is the first really “modern” approach to AI in comparison with traditional textbooks. As will be discussed in the following sections, the AI community has evolved by overcoming the Turing Test and not emphasizing AI cognition. Gershman et al. (2015) also propose computational rationality as a potential unifying paradigm for intelligence in brains, minds, and machines.

1.3. AI is not only about cognition

Certain explanations of AI emphasize the cognitive aspect (Drum, 2017; Miller-Merrell, 2019; Frey & Osborne, 2017; Manyika et al., 2017). For example, we see plenty of examples of the word “cognitive” or “cognition” being used when defining AI, such as Eysenck et al.’s (1990) definition of AI as the “attempt to develop complex computer programs that will be capable of performing difficult cognitive tasks”. OECD (2017) also defines AI as “machines performing humanlike cognitive functions”. Sometimes this emphasis on cognition stems from attempting to differentiate AI from robotics. However, robotics also deals with cognition. Bostrom’s (2014) definition of superintelligence, as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”, also mistakenly emphasizes cognition. This emphasis on cognition is not only wrong but is also misleading, in that it implies the AI system can think. As Turing tried to explain, we cannot determine whether a thing thinks or not. Instead, he simply suggested a proxy test for the decision. Emphasis on cognition runs the risk of neglecting the action aspect of AI, which is a more important aspect of intelligence.

The traditional explanation of intelligent systems says an intelligent system has three processes: perceptual, cognitive, and motor. The perceptual system consists of sensors and associated memories. The cognitive system receives information from the stores in its working memory and uses previously stored information in long-term memory to make decisions about how to respond. The motor system carries out the response (Card et al., 1983). However, this traditional sandwich (perception-cognition-motor) model has been criticized, for example, by Hurley (1998), and has now evolved into “enactivism”. This is defined as the manner in which a subject of perception creatively matches its actions to the requirements of its situation (Protevi, 2006). Similar to the relatively new enactivism, traditional behaviorism also excludes or doubts the central role of cognition in intelligent systems. As such, the view regarding cognition as the center of intelligence is now being challenged, such as in Auer-Welsbach (2019).² As explained above, there still exists disagreement over the central role of cognition; hence, the definition of AI should not focus only on the word “cognitive”.

2. The fundamental composition of the most advanced intelligent system, the Homo Sapiens system, is not comprised of independent information processing units which interface with
each other via representations. Instead, the system is comprised of independent and parallel producers of activity which all interface directly with the world through perception and
action, rather than interface with each other exclusively. From this perspective, the notions of central and peripheral systems evaporate, as everything is both central and peripheral.
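The perception-cognition-motor pipeline described above can be made concrete with a minimal agent loop. This is a hypothetical sketch: the sensor value, memory contents, and action names are invented for illustration and are not taken from Card et al. (1983).

```python
# Minimal sketch of the traditional perception-cognition-motor pipeline.
# All names and values here are illustrative assumptions.

def perceive(raw_input):
    """Perceptual system: turn a raw sensor reading into a percept."""
    return {"distance": raw_input}

def decide(percept, memory):
    """Cognitive system: combine the percept with stored knowledge to choose a response."""
    threshold = memory["safe_distance"]
    return "brake" if percept["distance"] < threshold else "cruise"

def act(action):
    """Motor system: carry out the chosen response."""
    return f"executing: {action}"

long_term_memory = {"safe_distance": 10.0}

# One cycle of the sandwich model: sense -> decide -> act
percept = perceive(4.2)
action = decide(percept, long_term_memory)
print(act(action))  # executing: brake
```

Enactivism, by contrast, would reject the idea that these three stages are separable modules; the sketch only illustrates the traditional view being criticized.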


1.4. AI should be extended to not just agents

To date, AI applications have been confined to making agents intelligent from the principal-agent perspective, meaning that the agents in AI disciplines refer only to machines, software, and robots that are owned and controlled by human principals. For example, Nilsson’s (2010) definition of AI, as explained earlier, satisfies all three conditions: (1) it is referred to as a discipline, (2) it is not humanlike, and (3) it does not emphasize only cognition. This definition is the most accepted and up-to-date, and is therefore referred to by the comprehensive review and prospect report, “Artificial Intelligence and Life in 2030: One Hundred Year Study on Artificial Intelligence” (Stone et al., 2016).

However, Nilsson’s (2010) definition has one limitation, in that it confines the intelligent entity to only a machine; Hassabis’s (2015) definition shares this limitation. In this paper, we extend Nilsson’s definition, since AI now plays a wide role in society. It is important to remember that AI is a discipline which makes entities and infrastructures intelligent, whereby the entities not only refer to agents such as machines, but also include principals such as humans, organizations, businesses, and nations. Infrastructures include computing elements, which can be embedded into the natural world, such as forests, lakes, and seas, as well as artificial infrastructures such as roads, cities, buildings, and homes. The extension from entities to infrastructures in the definition of AI removes the humanlike feature, since it is nonsense to imagine humanlike roads or buildings. We assume that the agent orientation in defining AI could lead to humanlike orientation, which we can avoid by extending the scope of AI in its definition.

Russell and Norvig’s (1995) approach, which defined AI as making rational agents, was the most pioneering and scientific at the time, which is why their book has been the most widely used at top AI schools around the world for more than 20 years since its publication. That said, it is necessary to extend Nilsson’s (2010) and Russell and Norvig’s (1995) definition and approach from making agents rational to making entities and infrastructures rational. Until now, AI research has concentrated only on optimizing the behavior of agents under a given condition. However, sensors and their networking technologies, such as Internet of things (IoT) technology, and automatic recognition technologies, such as convolutional neural networks (CNNs), enable making infrastructures intelligent. Nowadays, AI needs to deal with the intelligence of not only single entities but also of infrastructures. This enlarged perspective encompasses the efforts for and contributions to human intelligence augmentation; in other words, augmented intelligence and intelligence amplification (Licklider, 1960; Engelbart, 1962).³ Jordan (2018) suggests a new term called intelligent infrastructure (II). Our new AI definition encompasses intelligence amplification (IA) and II, as well as traditional agent-oriented AI.

3. By “augmenting human intellect” we mean increasing the capability of someone to approach a complex problem, to gain comprehension to suit their particular needs, and to derive
solutions to the problem. In this respect, increased capability is taken to mean a mixture of the following: more rapid comprehension, better comprehension, the possibility of gaining
a useful degree of comprehension in a situation that was previously too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before
seemed insoluble (Engelbart 1962).
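The idea of making an infrastructure itself intelligent, rather than a single agent, can be illustrated with a toy sketch of a road intersection that adapts its signal timing to sensed demand. The sensor feed, the timing rule, and all numbers are assumptions made purely for illustration.

```python
# Toy sketch of an "intelligent infrastructure": a road intersection that
# adapts its green-light duration to IoT sensor counts on approach lanes.
# The rule and all parameters are invented for illustration.

def green_duration(vehicle_counts, base=30, per_vehicle=2, cap=90):
    """Map sensed vehicle counts to a green-light duration in seconds."""
    demand = sum(vehicle_counts)
    return min(base + per_vehicle * demand, cap)

# Light demand stays near the base duration; heavy demand is capped.
print(green_duration([2, 3]))    # 40
print(green_duration([40, 25]))  # 90
```

The point of the sketch is only that the optimized object is the infrastructure’s own behavior, not that of any single vehicle-agent using it.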


2. Scientific definition of AI

The simplest definition of AI is a discipline that makes entities and infrastructures intelligent. If we refine that definition, AI is a discipline devoted to making entities and infrastructures intelligent, with intelligence being that quality which enables entities and infrastructures to function appropriately.

2.1. The meaning of functioning appropriately

“To function appropriately” is derived from Nilsson’s (2010) definition. It also means “acting rationally”, as per Russell and Norvig’s (1995) two-by-two matrix. This paper will dispense with a detailed explanation of each quadrant of the matrix because we have already criticized humanlike and cognition emphases when defining AI in an earlier section. Appropriate functioning is necessary for an entity to survive and prosper. Intelligence is evolved for the process of survival and, simultaneously, becomes the result of the prospering of entities. Thus, appropriate functioning is developed through evolution for natural entities and through optimization by a designer for artificial agents and infrastructures. We found that Nilsson’s (2010) “functioning appropriately” comes from Albus’s (1991) definition of intelligence as “the ability of a system to act appropriately in an uncertain environment, where appropriate action is that which increases the probability of success, and success is the achievement of behavioral subgoals that support the system’s ultimate goal”. According to Albus (1991), the criteria of success and the system’s ultimate goal are defined externally to the intelligent system. For an intelligent machine system, the goals and success criteria are typically defined by designers, programmers, and operators. For intelligent biological creatures, the ultimate goal is gene propagation, with success criteria being defined by the processes of natural selection.

Albus (1991) deals with the intelligence of both artificial intelligent systems and intelligent nature. His notion of intelligence corresponds with Anastasi’s (1992) explanation that intelligence is the combination of abilities required for survival and advancement within a particular culture, and with Roth and Dicke’s (2005) definition of intelligence.⁴ In the definition of AI, “appropriate action” is also found in Kubacki (2009).⁵ The recognition of intelligence as an instrument for survival and prosperity has not been popular in AI communities, though the idea was prevalent in evolutionary biology and psychology. However, we can find attempts by AI communities who view AI for the survival and prosperity of entities. Weng (2002) regards the performance of an intelligent entity as keeping the norm defined by social groups,⁶ which can be called “institutional intelligence”. This approach can be called an institutional approach to AI. Since institutional economics is a relatively new discipline in economics, the institutional approach to AI is a novel area to investigate.

4. Intelligence may be defined and measured by the speed and success of how animals, including humans, solve problems to survive in their natural and social environments (Roth &
Dicke 2005).
5. Artificial, “embodied” intelligence refers to the capability of an embodied “agent” to select an appropriate action based on the current, perceived situation (Kubacki 2009).
6. Different age groups of developmental robots have corresponding norms. If a developmental robot has reached the norm of a human group of age k, we can say that it has reached
the equivalent human mental age k (Weng 2002).
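Albus’s (1991) notion that appropriate action is the action which most increases the probability of success, with the success criteria defined externally to the system, can be sketched as a minimal selection rule. The action set and probability estimates below are invented for illustration.

```python
# Sketch of Albus's (1991) "appropriate action": among the available actions,
# choose the one with the highest estimated probability of success. The
# success criteria themselves come from outside the system (e.g., a designer).
# Action names and probabilities are invented for illustration.

def appropriate_action(success_prob):
    """Pick the action whose externally defined success probability is highest."""
    return max(success_prob, key=success_prob.get)

estimates = {"wait": 0.2, "detour": 0.55, "proceed": 0.7}
print(appropriate_action(estimates))  # proceed
```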


2.2. Optimization as the science of functioning appropriately

AI traditionally focuses on optimizing the behaviors of an agent under the conditions and goals given by its principal. Intelligent agents fundamentally seek to form beliefs and plan actions in support of maximizing expected utility (Gershman et al., 2015). Our new definition of AI emphasizes approaches to enabling the appropriate actions of agents, principals, and infrastructures. Hence, AI can be divided into: (1) making agents rational – finding a method of optimizing the behavior of an agent with the goals given by the principal (i.e., the owner of the agent), and (2) making entities and infrastructures function appropriately – finding the optimization method in which the entities survive and prosper while interacting with other entities and the infrastructures in their environment, by making the rational entities and infrastructures learn, adapt, and improve the institutions of the world or society. In either case, it is important to recognize that optimization is the main problem when creating such AIs.

Optimizing the behavior of an agent under a principal has been covered by many studies on optimization systems. It is important to note that there are intractable problems in which the optimal solution cannot be obtained, no matter how good the computer’s performance. Stuart Russell’s recent book, “Human Compatible: Artificial Intelligence and the Problem of Control”, also confirms that the existence of intractable problems gives us reason to think that computers cannot be as intelligent as humans. There is also no reason to assume that humans can solve intractable problems either (Russell, 2019).

Gershman et al. (2015) emphasize that ideal maximizing expected utility (MEU) calculations may be intractable for real-world problems. That is, finding optimal solutions can be intractable, even though optimization can be effectively approximated by rational algorithms which maximize a more general expected utility incorporating the costs of computation. Thus, even though AI methodology improves, there are still certain optimization problems which cannot be solved under limited time and resources.

Judd (1990) proved that learning in neural networks is NP-complete, and thus demonstrated that it has no efficient general solution. Goodfellow et al. (2015) also confirmed neural networks cannot avoid local minima.⁷ A Google-developed quantum computer solved in three minutes a problem for which the IBM Summit, the most powerful supercomputer in existence, would require a calculation time of 10,000 years (Arute et al., 2019). If quantum computing, which is 1 billion times faster than current supercomputing, is well developed and widely used for optimization problems, it may become possible to solve problems considered intractable. If so, the range of problems that mankind could solve would be drastically expanded. Russell (2019) confirms that quantum computation helps slightly in solving intractable problems, but not enough to change the basic conclusion that there is no reason to suppose that humans can solve intractable problems.

On the other hand, if such developments are not realized, AI will still be forced to solve numerous problems incompletely, creating systems that make occasional mistakes. Such incomplete systems should be used safely under human control. Although the performance of deep learning algorithms has improved, mistakes (i.e., local optima) have not gone away, which is the main problem of deep learning. Since deep learning is simply a neural network, it inherits the characteristics of a neural network, such as a lack of explainability and the inevitability of error. Research into increasing explainability continues, and automatic recognition by deep learning is evolving; however, there is still a danger of recognition error. Therefore, it is only suitable for use in areas where mistakes are not fatal and statistically good results are achieved. Current AI methodology is essentially a system that is able to make mistakes (Szegedy et al., 2014; Nguyen et al., 2016). Thus, Facebook researchers (Bordes et al., 2015) emphasize research and development through artificial tasks, just as an artificial task, such as XOR (exclusive OR) (Minsky & Papert, 1969), led to the birth of the multi-layer perceptron (Rumelhart et al., 1986).

7. Do neural networks enter and escape a series of local minima? Do they move at varying speed as they approach and then pass a variety of saddle points? [...] we present evidence
strongly suggesting that the answer to all of these questions is no (Goodfellow et al., 2015).
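The local-optima problem discussed above can be demonstrated on a one-dimensional toy loss: plain gradient descent converges to whichever minimum is nearest its starting point, not necessarily the best one. The function and hyperparameters below are illustrative assumptions, not taken from any of the cited papers.

```python
# Gradient descent on a simple nonconvex loss, illustrating how an optimizer
# that only follows the local slope can settle in a suboptimal minimum.
# The function is a toy example invented for illustration.

def f(x):       # nonconvex loss; its global minimum lies near x = -1
    return (x**2 - 1)**2 + 0.3 * x

def grad(x):    # derivative of f
    return 4 * x * (x**2 - 1) + 0.3

def descend(x, lr=0.01, steps=5000):
    """Plain gradient descent: follows the local slope only."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x_right = descend(2.0)   # started right of the barrier: trapped near x = +1
x_left = descend(-2.0)   # started left: reaches the better minimum near x = -1
print(f(x_right) > f(x_left))  # True: same algorithm, worse solution
```

Nothing in the update rule signals that a better minimum exists elsewhere, which is the essence of the “mistakes have not gone away” point above.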


2.3. An AI approach defined as an optimization Libratus (Brown and Sandholm, 2017), the first AI
problem system to defeat top humans in heads-up no-limit
Texas hold ’em poker, formulates itself by finding
An AI algorithm is an algorithm which can find an the optimal strategy for solving subgames. While
optimal path to a preferred goal node, provided that Libratus may not be able to arrive at an equilibrium by
the heuristic function satisfies certain conditions (Hart independently analyzing subtrees, it may be possible
et al., 1968). Genetic or evolutionary algorithms are a to improve the strategies in those subtrees when the
type of optimization algorithm, meaning they are used original base strategy is suboptimal, as is typically the
to find the maximum or minimum of a function (Carr, case when abstraction is applied. DeepMind’s AlphaGo
2014) called a “fitness function” – often a black-box in is also based on the optimization perspective,
real-world applications. Automated theorem proving claiming that all games of perfect information have an
also finds proofs via application of optimization optimal value function, which determines the outcome
methods (Yang et al., 2016). of the game from every board position or state, under
perfect play by all players (David et al, 1986).
Most machine learning problems, once formulated,
can be solved as optimization problems, with the On the other hand, IBM’s Watson is not based on the
essence of most machine learning algorithms being to build an optimization model and learn the parameters in the objective function from the given data (Sun et al., 2019). Sun et al. (2019) formulate supervised learning, semi-supervised learning, unsupervised learning, and reinforcement learning as optimization problems. For example, in supervised learning, the goal is to find an optimal mapping function that minimizes the loss function over the training samples. Deep learning, without nonlinearity in the hidden layers, would reduce to a generalized linear model. As such, minimizing the nonlinear and nonconvex loss functions is difficult, and at best we seek good local optima (Efron and Hastie, 2016). Reinforcement learning is a branch of machine learning whereby an agent interacts with the environment through a trial-and-error mechanism and learns an optimal policy by maximizing cumulative rewards (Sutton and Barto, 1998). Dialogue can also be considered as optimal decision making (Gao et al., 2018). The goal of dialogue learning for realizing conversational AI is to find optimal policies that maximize expected rewards in a reinforcement learning framework.

2.4. Successful AI applications in the pursuit of optimization

Successful AI applications and developments include the optimization perspective in their explanations.

Watson is a knowledge-based decision support tool that suffers from the requirement to manually craft and encode formal logical models of the target domain. This should be evolved into an interactive decision support capability that strikes a balance between a search system and a formal knowledge-based system (Ferrucci, 2012). IBM’s Watson has not been successfully deployed, experiencing repeated failures, particularly in the medical field (Brown, 2017; Herper, 2017; Bloomberg, 2017; Strickland, 2019).

Softbank’s Pepper is not formulated as an optimized machine either. As a result, Pepper is rather limited in how it can help customers, and its answers do not seem that helpful (Mogg, 2018). Pepper’s failure was predicted (Lee, 2014) and widely reported on (Alpeyev & Amano, 2016; Bivens, 2016; Boxall, 2017; Nichols, 2018). Hanson Robotics’ robot, Sophia, is a typical example of AI being based on the incorrect humanlike perspective, rather than the rational optimization perspective. As such, it only makes jokes and cannot have meaningful conversations (Campanella, 2016). Similarly, Honda’s ASIMO business operation has also been stopped (Ulanoff, 2018). Humanoids such as Pepper, Sophia, and ASIMO all failed because they were based on a humanlike paradigm and not on an optimization framework.
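The optimization framing of supervised learning described in this section (finding a mapping that minimizes a loss over training samples) can be made concrete with a short sketch. This is our own illustration, not code from any cited work; the data, learning rate, and step count are invented:

```python
# Supervised learning framed as optimization (a minimal, illustrative sketch):
# fit y ≈ w * x by gradient descent on the mean squared loss.

def fit_linear(xs, ys, lr=0.05, steps=200):
    """Minimize L(w) = mean((w*x - y)^2) over the training samples."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradient of the mean squared loss with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad  # step toward a (local) optimum
    return w

# Toy training data generated from y = 3x; the optimizer recovers w ≈ 3.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [3 * x for x in xs]
w = fit_linear(xs, ys)
print(round(w, 3))
```

The same framing extends to reinforcement learning, where the quantity being optimized is the expected cumulative reward rather than a training loss.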

Philosophical point of view for social implementation

3. OECD’s redefinition of AI

Of the aforementioned perspectives, the OECD (2017) definition of AI is the most inaccurate, as it includes all three misconceptions. OECD (2017) defined AI as “Machines performing humanlike cognitive functions”, thereby mistaking AI as an entity and not a discipline and incorrectly believing that AI should be humanlike. When defining AI, OECD (2017) also only emphasized cognition – a common misconception. This critical mistake in the definition of AI by the world-leading policy organization could have resulted in misguided policy decisions. In 2017, OECD was advised by one of this paper’s authors to revise its definition. Interestingly, OECD (2018) changed it to: “Equipping systems with cognitive functions that allow them to function appropriately and with foresight in their environment”. From this, it is apparent that OECD (2018) adopted Nilsson’s (2010) definition. In the new definition, OECD (2018) avoided the humanlike criterion, stating that AI is an activity, rather than simply objects such as machines. Unfortunately, OECD (2018) unnecessarily added the word “cognitive”, meaning that even this definition was inaccurate. In 2019, the definition was revised again, removing the word “cognitive”, to read: “An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy”.

In the OECD (2019) definition, it is worth noting the phrase “given set of human-defined objectives”. Since rationalization refers to optimization under human-defined objectives, the OECD (2019) definition can be seen as taking the “rational” perspective. It is also explained that AI technologies can only deliver value if they are part of the organization’s strategy and are used in the right way (Hippold, 2019). This also corresponds to the phrase “given set of human-defined objectives” in OECD (2019). Gartner’s criticism of AI misconceptions shows its “rational” approach to AI. It also criticizes humanlike AI, explaining that while some forms of AI might give the impression of being clever, it is unrealistic to think that current AI is similar or equivalent to human intelligence (Hippold, 2019).

4. Identifying the definition of AI’s influence on policy: Document analysis

Through our document analysis we were able to find research that was very close to ours. Krafft et al. (2020) compare AI researchers’ recognition of AI with policy reports’ perspective of AI. Similar to our claim in this paper, Krafft et al. (2020) criticize the human emphasis in the definition of AI in most AI policy reports, while noting that AI researchers’ recognition is more inclined to rational emphasis. Krafft et al. (2020) found that 28% of definitions by AI researchers and 62% from published policy documents use the word “human”. There was more disagreement over whether existential threats are relevant (42% agreed) – an issue more relevant to (hypothetical) humanlike AI. In our paper, we analyze AI policy-related reports and classify resources according to their definition or perspective on AI. We particularly focus on resources which define AI as humanlike (thinking or acting) entities.

For the analysis, we had planned to perform document analysis to investigate their position on: (1) the concern, fear, peril, threat, and danger of AI; (2) the fairness of AI (discrimination, oppression, and inequality); and (3) unemployment and job loss. However, it was difficult to obtain systematic results, since it is very time consuming to analyze the perspectives of reports only by human reading. At first, we considered automatic document analysis using AI techniques. However, it is still difficult to automate document analysis to replace human reading, although there is research on the subject, such as Hermann et al. (2015). In the near future, AI-based document analysis software will help human researchers perform this kind of research. With such AI discipline-based software, human researchers will be able to improve their performance and reduce the necessary research time. During our research, because we could not find such software for our purposes, we narrowed our focus to job-related reports only, then analyzed them by keyword search and human reading. Krafft et al.’s (2020) study also seems to be based on this method.
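The keyword-search triage step described above can be sketched in a few lines. The keyword lists and the classification rule below are our own hypothetical illustration, not the actual procedure used in this study or in Krafft et al. (2020):

```python
# Illustrative only: flag whether a report's definition of AI leans
# "humanlike" or "rational" by crude keyword counts, before human reading.
# The keyword lists are invented for this sketch.

HUMANLIKE = ("human", "humanlike", "mimic", "imitate", "simulate")
RATIONAL = ("rational", "optimiz", "objective", "goal")

def lean(text):
    """Classify a definition by keyword counts; ties fall back to 'unclear'."""
    t = text.lower()
    h = sum(t.count(k) for k in HUMANLIKE)
    r = sum(t.count(k) for k in RATIONAL)
    if h > r:
        return "humanlike"
    if r > h:
        return "rational"
    return "unclear"

print(lean("AI is software that can mimic humanlike thinking."))  # humanlike
print(lean("AI systems optimize human-defined objectives."))      # rational
```

In practice, such crude counts only triage documents; ambiguous cases still require human reading, as noted above.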

Definition and Recognition of AI and its Influence on the Policy: Critical Review,
Document Analysis and Learning from History

4.1. Relationship between the perception of AI and the expectation of job loss

We investigate the relationship between the perception of AI and the expectation of job loss incurred by AI. We conjecture that a policymaker who believes or defines AI as something that thinks or acts in a humanlike manner will be likely to overemphasize AI’s negative impact on job creation. We were able to find numerous reports using humanlike AI definitions, such as Miller-Merrell (2019), Molla (2019), and Hawksworth et al. (2018). For example, Miller-Merrell (2019) describes AI as a branch of computer science that uses machine learning algorithms which “mimic” cognitive functions, making machines more humanlike, while Molla (2019) explains machine learning as something that can make humanlike decisions.

4.2. AI-induced job loss expectation defining AI as humanlike

Policy reports
The report “Australia’s Future Workforce?” by the Committee for Economic Development of Australia (CEDA, 2015) recognizes the ability of computers to emulate human thought patterns, claiming that AI is able to take over intellectual tasks, as well as routine ones. Hindi (2017) argues that the real issue facing governments today is the failure to transition to a sustainable AI society, which will lead to massive job loss and economic downturn. Hindi (2017) defines AI as the ability for a machine to reproduce human behavior. Daniel (2020), pointing to the pace at which AI is replacing the way humans work, forecasts the future to be fully automated, even to the extent that jobs for humans will no longer exist. She explains that intelligent AI models are trained to enable them to “act like a human” in real-world situations and that machines “think like human minds”.

Business websites
Many business websites also make similar mistakes. For example, John (2019) defines AI as computers or devices that mimic humanlike movements, and expects that with automation – the real essence of the AI revolution – robots will take over several jobs, although not all careers will be destroyed. Balatayan (2018) claims even white-collar jobs are being cut due to technological advancements, defining an AI system as any software that can mimic a rudimentary form of thinking.

McClelland (2020) explains that the impact of AI and automation will be profound, and that we need to prepare for a future where job loss reaches 99%. His definition of AI is based on the following two assumptions: (1) we will continue making progress in building more intelligent machines, and (2) human intelligence arises from physical processes. With this in mind, McClelland (2020) concludes that we will build machines which have human-level or higher intelligence. However, these assumptions were criticized by George Zarkadakis in his seminal book, In Our Own Image, in which he describes six metaphors that people have used over the past 2,000 years to try and explain human intelligence. Zarkadakis (2015) shows that each metaphor simply reflected the most advanced thinking of the time.

Consulting and research institute reports
Bughin et al. (2017) at McKinsey define AI as the ability of machines to exhibit humanlike intelligence, and explain that AI-powered automation could have a profound impact on jobs and wages. The Digital Marketing Institute (2019) raises the question of whether AI will really steal our jobs in the future, and characterizes AI systems as being able to do things that humans can do and imitate the way we think. Wisskirchen et al. (2017) of the IBA Global Employment Institute describe AI as the work processes of machines that would require intelligence if performed by humans, asserting that both blue-collar and white-collar sectors will be affected.

Media reports
Dai and Jing (2018) of the South China Morning Post refer to Oxford-Yale AI impact research – based on a survey of 352 machine learning experts – which estimates that there is a 50% chance of AI outperforming humans in all tasks in just 45 years, and which could take over every job in the next


century. The research explains that AI is the science of “simulating” intelligent behavior in computers, enabling the latter to exhibit humanlike behavioral traits such as knowledge, reasoning, common sense, learning, and decision making. Knapton (2016) of the Telegraph reports that the rise of robots could lead to unemployment rates greater than 50%, and that many middle-class professionals’ jobs would be outsourced to machines within the next few decades, leaving workers with more leisure time than ever. Such comments are common misconceptions of people who see AI as being humanlike. The report itself also uses the term humanlike robots. Kelly (2019) of Forbes maintains that AI, robotics, and technology will displace millions of workers, and defines AI as the ability of a machine to mimic human behavior.

Adel (2019) of Medium states that AI’s effect on work will be disruptive, and predicts a future in which robots take jobs from human workers. Adel (2019) also defines AI as the act of “simulating the human brain” in a machine, i.e., creating an artificial human mind far more powerful than an actual human one. Wadhwa (2016) of FactorDaily argues that we are facing a jobless future because AI systems emulate the functioning of the human brain’s neural networks. Xu (2017) of Northeastern’s J-school’s Ruggle Media reports that computers have become substitutes for various types of jobs for numerous reasons, such as recent developments in AI machine learning. Machine learning will not only reduce the demand for labor in tasks that can be routinized through pattern recognition, it will also increase the demand for labor performing tasks that are not subject to computerization. Xu (2017) recognizes that every aspect of learning or any other feature of intelligence can, in principle, be so precisely described that a machine can be made to simulate it.

4.3. AI-induced job loss expectation regarding AI as a super-intelligent entity

Through the document analysis, we found a number of reports that regard AI as a competitor to humans, i.e., a superhuman entity. Although the reports do not explicitly describe AI as being humanlike, they also belong to the humanlike category. Cellan-Jones (2014) refers to Stephen Hawking’s fears on the consequences of creating something that can match or surpass humans (who are limited by slow biological evolution), as well as the concerns that clever machines, capable of undertaking tasks performed by humans up until now, will swiftly destroy millions of jobs. Clifford (2017) refers to Elon Musk’s belief that a machine could be far smarter than a human, that robots will be able to do jobs better than humans, and that there will certainly be job disruption. Manyika et al. (2017) of McKinsey are of a similar opinion, saying that “machines already exceed human performance”. Finally, Niyazov (2019) assumes that AI algorithms and automated manufacturing are much better at performing tasks.

4.4. AI-induced job loss expectation without a specific definition of AI

There are also claims of job loss by AI without a specific definition of AI (Brynjolfsson & McAfee, 2011; Kurzweil Network, 2012; Frey & Osborne, 2013; World Economic Forum, 2016; Acemoglu & Restrepo, 2017; Frey & Osborne, 2017; Rieley, 2018; Lambert & Cone, 2019; Ambika, 2019; The Week, 2019; Muro et al., 2019). For example, Krafft et al. (2020) mention that over 40% of policy reports do not have a definition of AI. Frey and Osborne (2013) of Oxford Martin School report that 47% of total US employment is in the high-risk category, and that associated occupations are


potentially automatable over an unspecified number of years – perhaps a decade or two. The World Economic Forum (2016) holds that current trends could lead to a net employment impact of more than 5.1 million jobs lost to disruptive labor market changes from 2015–2020, with a total loss of 7.1 million jobs – two thirds of which are concentrated in the office and administrative job family – and a total gain of 2 million jobs in several smaller job families.

Using a model in which robots compete against human labor in various tasks, Acemoglu and Restrepo (2017) of the Massachusetts Institute of Technology (MIT) and Brown University show that robots may reduce employment and wages, and that the local labor market effects of robots can be estimated by regressing the change in employment and wages on the exposure to robots in each local labor market – defined from the national penetration of robots into each industry and the local distribution of employment across industries. Frey and Osborne (2017) of Oxford Martin School claim that recent developments in machine learning will put a substantial share of employment at risk across a wide range of occupations in the near future, and that nearly half of all US jobs were at risk from AI-powered automation. Rieley (2018) of the US Bureau of Labor Statistics also asserts that employment of bookkeepers is projected to decline 1.5% from 2016–2026, representing a loss of 25,200 jobs.

Ambika (2019) also maintains that AI technologies being adopted around the globe will replace numerous jobs currently being done by humans. The Week (2019) reports that over the next decade, automation and AI could put 54 million Americans out of work. Muro et al. (2019) of the Brookings Institution report that although robots are not replacing everyone, a quarter of US jobs will be severely disrupted as AI accelerates the automation of existing work. Lambert and Cone (2019) of OxfordEconomics.com claim that with the rise of robots in business models, many sectors will be seriously disrupted and millions of existing jobs will be lost, with 20 million manufacturing jobs set to be lost to robots by 2030.

Most job loss reports have either a “humanlike” definition, a “human-comparable” definition, or “no definition”. According to our definition of AI, we claim that job loss reports make mistakes due to the incorrect recognition and understanding of the characteristics of AI. We were unable to find job loss reports that define AI as rational, except for Russell (2019). Russell is a very respectable AI pioneer who wrote an innovative textbook on AI (Russell & Norvig, 1995). However, even though he makes an attempt, he confesses not to be qualified to opine on the job issue. Other AI experts, such as Lee (2018a), also make similar mistakes when defining AI by incorrectly emphasizing “humanlike” and “cognitive”. AI policies are too important to leave entirely to technical AI experts. As Russell (2019) asserts, the job issue is too important to leave entirely to economists. For example, Martin Ford, a journalist who is not an AI expert, wrote a book exaggerating job loss from AI (Ford, 2015). However, he seems to have changed his mind after interviewing numerous world-renowned AI experts (Ford, 2018). It is therefore necessary for us to explain AI to policy experts, as well as promote collaboration among AI and policy experts.
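The estimation strategy attributed to Acemoglu and Restrepo (2017) – regressing the change in employment on local robot exposure – reduces, in its simplest single-regressor form, to ordinary least squares. The sketch below uses invented numbers purely for illustration; the actual study uses a far richer specification and real labor market data:

```python
# Illustrative only: one-variable OLS of employment change on robot exposure,
# with made-up data (not the study's dataset or full specification).

def ols_slope_intercept(xs, ys):
    """Least-squares fit of y = a + b*x, with b = cov(x, y) / var(x)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    b = cov / var
    a = my - b * mx
    return a, b

# Hypothetical local labor markets: robot exposure vs. employment change (%).
exposure = [0.5, 1.0, 2.0, 3.0]
emp_change = [-0.2, -0.5, -1.1, -1.6]
a, b = ols_slope_intercept(exposure, emp_change)
print(round(b, 3))  # negative slope: higher exposure, larger employment decline
```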


5. Automation creates more jobs than it eliminates: Learning from history

5.1. AI creates more jobs than it eliminates

Throughout the research, we found numerous reports claiming that AI will not eliminate jobs. Shrive (2018) claims that AI cannot replace humans in performing all tasks, especially in the property management domain. AI has been specifically developed to simplify repetitive and time-consuming processes, thereby freeing up time for property managers, letting agents, and contractors to deal with more pressing problems. Lokitz (2018) asserts that with every job taken over by a machine, there will be an equal number of opportunities for jobs to be done by people. Furthermore, in many cases, humans and machines will find themselves in symbiotic relationships, helping each other to do what they do best.

The World Economic Forum (2018) asserts that 38% of businesses surveyed expect to extend their workforce to new productivity-enhancing roles, more than a quarter expect automation to lead to the creation of new roles in their enterprise, about half of today’s core jobs – making up the bulk of employment across industries – will remain stable up to 2022, and current estimates suggest a decline of 0.98 million jobs and a gain of 1.74 million jobs. Atkinson (2018) asserts that there is no reason to believe that this coming technology wave will be any different in pace and magnitude than previous waves. Each past wave has led to improved technology in a few key areas (e.g., steam engines, railroads, steel, electricity, chemical processing, and information technology), and these were then used by many sectors and processes. Within manufacturing, for example, each wave has led to important improvements; however, there have always been many other processes that have required human labor. The British Academy (2018) maintains that while there is now a consensus that AI does not spell the end of work, neither will the transition be painless for all. Although human-level intelligence (‘general AI’) receives significant media attention, it is still some time away from being delivered, and it is unclear when it might be possible. Krafft et al. (2020) point out that hype surrounding general AI centers on humanlike AI, and that it is a problem that many policy analysts think of it in this way.

AdextAI (2019) explains that, as the technology has evolved, unemployment rates have decreased as a result of the new jobs created. Naudé (2019) holds that, in the foreseeable future, AI is unlikely to cause huge job losses (or job creation), at least in advanced economies. The main reasons for this conclusion are: (1) the methods used to calculate potential job losses are sensitive to assumptions; (2) automation may affect tasks more significantly, rather than the jobs within which they are performed; (3) net job creation can be positive because automation stimulates the creation of new jobs or jobs elsewhere; (4) diffusion of AI may be much slower than is thought or assumed; and (5) the tempo of innovation in AI is slowing down. Thomas (2019) explains that AI is poised to eliminate millions of current jobs and create millions of new ones – some of which have yet to be invented. Liang (2019) describes that recent advances in AI, while seemingly impressive, are very narrow in scope and require a lot of human supervision and input to work in real applications. While as many as 47% of current jobs contain tasks that may be automatable, less than 5% of jobs will be fully automatable by 2030. As with many new technologies that came before, AI tools will augment and not replace workers by automating subtasks of a job.


5.2. Automation proved more of a blessing than a threat

Garry Kasparov says that he is the first knowledge worker whose job was threatened by a machine (Knight, 2020). Referring to Kasparov, Knight (2020) claims that technology destroys jobs before creating new ones. This story has been repeated since the Industrial Revolution in the 19th century. For example, with the emergence and popularity of machines in 19th century Britain, many workers lost their jobs. Luddism centered around the defense of hand trades in the textile industry in the face of innovation which threatened jobs (Beckett, 2012). Led by artisans who felt their jobs were being threatened by the increased use of machines in the production process, Luddites began destroying machines as a form of protest. An agricultural manifestation of Luddism occurred during the Swing Riots of 1830, which saw the destruction of threshing machines. Although automation freed people from mundane and repetitive tasks, it caused some people to lose their jobs.

William Lee was an English clergyman and inventor who, in 1589, devised the first stocking frame knitting machine, the design of which was used for centuries. Having perfected his design and desiring to secure the patronage of Queen Elizabeth I, whose partiality for knitted silk stockings was well known, Lee went to London to exhibit the loom before the Queen. However, her reaction was not what he had expected. She is said to have opposed the invention on the grounds that it would deprive a large number of poor people of their employment of hand knitting (Smiles, 2005).

Although people have always been afraid of new automation technologies, they have always proved more of a blessing than a threat. As machine learning systems learn from data, intelligent human beings should learn from history. In 1790, 90% of Americans were farmers. Nowadays that number is less than 2% (Dimitri et al., 2005). So, has American agriculture disappeared? The answer is no; it has simply become more automated. The US has transformed from an agricultural economy to an industrial economy, then to a service economy, and now to an information economy. Dimitri et al. (2005) conclude that automation creates far more jobs than it eliminates. Even if automation takes on a variety of professional roles, it does not always take away people’s jobs.

5.3. The camera created more jobs and industries than it eliminated

Invented roughly 200 years ago, cameras began to be distributed about 100 years ago. At the time, many people thought that there would be no more need for artists as a result. However, cameras allowed for the development of modern art, and many painters used cameras in their studios. Even early contributors to the invention of photography and the camera were painters themselves, such as Leonardo da Vinci, who used the camera obscura for his painting, and Louis-Jacques-Mandé Daguerre, who was a theatre set painter and inventor of the daguerreotype process of photography (Daval, 1982). With cameras, i.e., the new automation technology of the time, painters were able to dramatically reduce the time needed for painting and sell photos of their works to more customers. The existing skills needed for drawing portraits simply became the basis for becoming a better photographer. In other words, the new technology became an opportunity to expand the existing portrait market into the photography market (Benjamin, 1969).

In addition, the invention of the camera allowed related industries to develop. New industries emerged, such as film manufacturing, camera manufacturing, film sales, photo album production, photo studios, photographic development, photo distribution,


newspapers, magazines, advertising, and publishing industries, etc. Cameras also contributed to the development of other industries. For example, as more people began to take cameras with them when they travelled, the photos being taken encouraged more people to travel. Cameras also had an impact on the movie industry (Jeong, 2015), while the influence of celebrities such as Marilyn Monroe and John F. Kennedy was greater as a result of photography. Today, not only do people take pictures with their smartphones, but the continued development of photography has created new businesses such as Facebook and Instagram.

5.4. Automobiles created jobs and industries

A photograph taken on 5th Avenue in New York in 1900 shows the horse and cart to be the predominant mode of transport. By 1913, in little more than a decade, the automobile had replaced the horse as the main form of transport. In turn, this led to the development of related industries, such as automobile manufacturers, mechanics, and automobile salesmen. In addition to the development of personal automobiles, the city bus, intercity bus, express bus, taxi, and trucking industries all developed. At the same time, the construction of roads and car parks resulted in an increase in jobs (Lee, 2018). Not only did automobiles spark a desire for long-distance travel, but by shortening travel times, the travel industry and related transportation, lodging, and restaurant industries also developed alongside one another.

5.5. Digital typesetting created more jobs by promoting publishing

Physical typesetting is the composition of text through the arranging of metal “types” and is most well known from the production of newspapers in the late 19th century. Being a typesetter was a highly skilled position, so much so that when the Hankyoreh newspaper in Korea was founded in 1988, it was unable to find a skilled typesetter. To solve the problem, the newspaper introduced an innovative technology called the Computerized Typesetting System (CTS). Starting with the Hankyoreh newspaper, many newspapers in Korea soon adopted this system, leading to a lot of typesetters losing their jobs. At the same time, however, demand for digital typesetting increased, a skill that the traditional typesetters quickly learned, becoming desktop publishing professionals (Lee et al., 2012).

5.6. ATMs created jobs by contributing to bank expansion

When Automated Teller Machines (ATMs) were first invented in the 1970s, there were serious concerns about the layoffs of tellers. In the 1980s, US banks introduced ATMs to improve work efficiency, with the number of employees per branch decreasing to one third as a result. Between 1995 and 2010, the number of ATMs in the US surged from 100,000 to 400,000. However, there was no massive unemployment, since the number of bank branches increased by more than 40%. Furthermore, by 2015, the number of bank employees had increased from 250,000 to 500,000. As the introduction of ATMs reduced the cost of creating new branches, banks were able to expand and hire more employees than in the past. In addition, with ATMs replacing simple deposit and withdrawal


services, banks were able to focus on developing profitable financial products such as loan counselling and insurance. As a result, bankers were freed up to perform more important tasks than ever before. Not only were new jobs created when ATMs took over performing simple and repetitive tasks, bankers were also able to take charge of tasks requiring high-level capabilities (James, 2015; Deloitte, 2018).

5.7. Internet intermediaries created jobs by reintermediation

Baen and Guttery (1997) predicted that increased use of the Internet and information technology would have a dramatic and negative impact on the real estate industry in terms of both income and employment levels. They argued that buyers and sellers with access to information available via the Internet would have no need for traditional “infomediaries”, and that several other players in real estate support positions would also be disintermediated by the Internet. The authors predicted job losses in sectors directly related to real estate, including sales agents and developers, as well as sectors involved in the support of real estate transactions, such as legal services and banking. Muhanna and Wolf (2002) revisited Baen and Guttery’s (1997) examination of technology’s effect on the real estate industry and found that, in general, their most ominous predictions of income and employment loss have not materialized. In the years since their 1997 article, according to the Bureau of Labor’s statistics, the real estate industry, like most sectors in the US, has experienced steady growth. Specifically, more workers were employed as real estate agents, developers, and legal service providers.

It is often argued that as electronic markets lower the cost of market transactions, traditional roles for intermediaries will be eliminated, leading to “disintermediation”. Bailey and Bakos (1997) discuss the findings of an exploratory study of intermediaries in electronic markets which suggests that markets do not necessarily become disintermediated as they become facilitated by information technology. Middle businesses, functions, or people need to move up the food chain to create new value or face being disintermediated. However, the “reintermediation” opportunities are greater than the disintermediation perils (Tapscott, 1997). Yoon (2015) also explains that attention should be paid to reintermediation, where the value of brokerage functions has recently been created. There will be an opportunity to create new value for middlemen connecting consumers and suppliers.

These examples show that new technology does not threaten the existence of someone’s job. Just as a painter adapted to the invention of the camera and found a new job in a related field, so it will be in the case of AI. People currently engaged in fields such as health care, architecture, and law, where AI is expected to be applied, will acquire AI-related skills and take on new jobs.


6. Summary and Conclusion


Incorrect or unscientific understanding of AI is still pervasive and misleads policymakers. While ambiguity in definition has hampered conversation, legal and regulatory intervention requires agreed-upon definitions. However, consensus over the definition of AI has been elusive thus far, especially in policy conversations (Krafft et al., 2020). In this study, we reviewed numerous definitions of AI and, based on our critical review, we suggest a scientific definition: AI is a discipline devoted to making entities and infrastructures intelligent, with intelligence being that quality which enables agents, principals, and infrastructures to function appropriately. We have also observed how, since 2017, the OECD has continued to update its definition of AI, improving it from humanlike to rational and from thinking to action.

We investigated numerous AI-related policy documents, particularly those dealing with the impact of AI on jobs, and found that those which view AI as a system that mimics humans are likely to overemphasize the job loss incurred by AI as an automation technology. In addition, most job loss reports rest on a “humanlike” definition, a “human-comparable” definition, or no definition at all; we were unable to find job loss reports that defined AI as rational. Through our historical review, we showed that automation technologies such as photography, automobiles, ATMs, and the Internet (as an automatic intermediation technology) did not reduce human jobs. Instead, they created numerous jobs and industries. AI will likewise create a wide range of jobs and industries, and it is on these that our future AI policies should focus.

Just as machine learning systems learn from valid data, AI policymakers should learn from history to gain a scientific understanding of AI and an exact understanding of the effects of automation technologies. Ultimately, good AI policy comes from a good understanding of AI.

We suggest four policy recommendations as follows:

Recommendation 1: Policy experts should be well educated about what AI is and what is really happening in AI research and business. In particular, AI should be understood as a discipline that makes entities and infrastructures intelligent, where intelligence is the quality that enables agents, principals, and infrastructure to function appropriately. AI should not be regarded as a human-like or superhuman system, and past AI policies based on the old paradigm should be rewritten.

Recommendation 2: Governments should establish programs for educating administrative officials, policy experts in publicly owned research institutes, and lawmakers in the national assembly.

Recommendation 3: Just as machine learning systems learn from data, policymakers should learn from history and data. Policymakers should recognize the positive impacts of automation technology and establish new AI policy on the basis of that recognition.

Recommendation 4: Government and society should recognize the characteristics of AI as an optimization system, in order to gain more public benefit and faster business outcomes, with fewer risks, from AI adoption.

Acknowledgements

We would like to thank the Association of Pacific Rim Universities (APRU) for initiating the “AI for Social Good” project, of which this study is a part. We would like to thank Prof. Jiro Kokuryo of Keio University, Japan, the Principal Investigator of this project, for giving us the opportunity to be involved in such an exciting project. Our thanks must also go to Christina Schönleber, Director for Policy and Programs, APRU, as well as all of our colleagues on the project, from whom we have learned a great deal.

Definition and Recognition of AI and its Influence on the Policy: Critical Review,
Document Analysis and Learning from History

References
Acemoglu, D., & Restrepo, P. (2017). Robots and Jobs: Evidence from US Labor Markets.
NBER Working Paper No. 23285.

Adel, K. (2019). The Future of Jobs in Artificial Intelligence Era. Medium. Retrieved from
https://medium.com/analytics-vidhya/the-future-of-jobs-in-artificial-intelligence-era-93e34c33c25f

Adext AI. (2019). “How Many Jobs Will Be Lost Because of Artificial Intelligence?” Is the
Wrong Question. Adext AI. Retrieved from https://blog.adext.com/jobs-lost-artificial-intelligence/

Albus, J. (1991). Outline for a theory of intelligence. IEEE Transactions on Systems, Man, and
Cybernetics, 21(3).

Alpeyev, P., & Amano, T. (2016). A Japanese Billionaire’s Robot Dreams Are on Hold.
Bloomberg. Retrieved from https://www.bloomberg.com/news/articles/2016-10-27/a-japanese-billionaire-s-robot-dreams-are-on-hold

Anastasi, A. (1992). What Counselors Should Know About the Use and Interpretation of
Psychological Tests. Journal of Counseling & Development, 70(5).

Arute, F., Arya, K., Babbush, R., Bacon, D., Bardin, J. C., Barends, R., . . . Collins, R. (2019).
Quantum supremacy using a programmable superconducting processor. Nature, 574(7779),
505-510.

Atkinson, R. (2018). Shaping structural change in an era of new technology. policynetwork.org.

Auer-Welsbach, C. (2019). Interview with cognitive scientist Newton Howard on AI. Medium.

Baen, J. S., & Guttery, R. S. (1997). The Coming Downsizing of Real Estate: Implications of
Technology. The Journal of Real Estate Portfolio Management, 3(1), 1-18.

Bailey, J. P., & Bakos, Y. (1997). An exploratory study of the emerging role of electronic
intermediaries. International Journal of Electronic Commerce, 1(3), 7-20.

Baltayan, A. (2018). Robots, Automation & Technology Taking Over – Is Your Job at Risk?
Money Crashers. Retrieved from https://www.moneycrashers.com/robots-automation-technology-replacing-jobs/

Beckett, J. (2012). Luddites. Retrieved from The Nottinghamshire Heritage Gateway:
http://www.nottsheritagegateway.org.uk/people/luddites.htm


Bellman, R. E. (1978). An Introduction to Artificial Intelligence: Can Computers Think? San
Francisco: Boyd & Fraser Pub. Co.

Benjamin, W. (1969). Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit.
Frankfurt am Main: Suhrkamp.

Bernoulli, D. (1738). Hydrodynamica. Strasbourg: Johann Reinhold Dulsecker.

Bessen, J. (2015). Toil and Technology. Finance and Development, 52(1).

Biven, M. (2016). Pepper Salé: Lessons from the bitter Aldebaran / SoftBank project.
Retrieved from https://markbivens.com/m/archives/pepper-sale-lessons-from-the-bitter-aldebaran-softbank-project

Bloomberg, J. (2017). Is IBM Watson a Joke? Forbes. Retrieved from
https://www.forbes.com/sites/jasonbloomberg/2017/07/02/is-ibm-watson-a-joke/

Bordes, A., Weston, J., Chopra, S., Mikolov, T., Joulin, A., Rush, S., & Bottou, L. (2015). Artificial
Tasks for Artificial Intelligence. Facebook AI Research ICLR.

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. UK: Oxford University Press.

Boxall, A. (2017). Pepper is everywhere in Japan, and nobody cares. Should we feel bad for
robots?

British Academy. (2018). The impact of artificial intelligence on work. royalsociety.org.

Brown, J. (2017). Why Everyone Is Hating on IBM Watson—Including the People Who
Helped Make It. Gizmodo. Retrieved from https://gizmodo.com/why-everyone-is-hating-on-watson-including-the-people-w-1797510888

Brown, N., & Sandholm, T. (2017). Safe and Nested Endgame Solving for Imperfect-
Information Games. In Workshops at the Thirty-First AAAI Conference on Artificial Intelligence.

Brynjolfsson, E., & McAfee, A. (2011). Race against the machine: how the digital revolution is
accelerating innovation, driving productivity, and irreversibly transforming employment and the
economy. Digital Frontier Press.

Bughin, J., Hazan, E., Ramaswamy, S., Chui, M., Allas, T., Dahlstrom, P., . . . Trench, M. (2017).
Artificial intelligence: the next digital frontier? McKinsey and Company Global Institute, 1-80.

Choudhury, A. (2019). AI May Kill These 5 Jobs By 2030, Say Experts. Analytics India
Magazine. Retrieved from https://analyticsindiamag.com/ai-may-kill-these-5-jobs-by-2030-say-experts/


Campanella, E. (2016). Meet Sophia, the human-like robot that wants to be your friend and
‘destroy humans’. Retrieved from Global News: https://globalnews.ca/news/2888337/meet-sophia-the-human-like-robot-that-wants-to-be-your-friend-and-destroy-humans/

Card, S., Moran, T., & Newell, A. (1983). The Psychology of Human-Computer Interaction.
Hillsdale, NJ: Lawrence Erlbaum Associates.

Carr, J. A. (2014). An Introduction to Genetic Algorithms. Senior Project, 1(40), 7.

CEDA. (2015). Australia’s future workforce? CEDA.

Cellan-Jones, R. (2014). Stephen Hawking warns artificial intelligence could end mankind.
Retrieved from BBC: https://www.bbc.com/news/technology-30290540

Charniak, E., & McDermott, D. (1985). Introduction to Artificial Intelligence. United States:
Addison-Wesley Longman Publishing Co., Inc.

Clifford, C. (2017). Elon Musk: ‘Robots will be able to do everything better than us’.

Dai, S., & Jing, M. (2018). Worried AI will replace your job? Here’s an explainer to prepare for
that day. Retrieved from SCMP: https://www.scmp.com/tech/innovation/article/2131339/worried-ai-will-replace-your-jobheres-explainer-prepare-day

Daniel, E. (2020). Role of Artificial Intelligence in Human Revolution. Retrieved from Thrive
Global: https://thriveglobal.com/stories/role-of-artificial-intelligence-in-human-revolution/

Daval, J.-L. (1982). Photography History of an Art. First American Edition.

Rumelhart, D., & McClelland, J. (1986). Parallel distributed processing: Explorations in the
microstructure of cognition (Vol. 1). MIT Press.

Deloitte. (2018). The Future of Work. One and One Books.

Digital Marketing Institute. (2019). The Rise of AI: Will It Take or Create Digital Jobs? Retrieved
from DMI Blog: https://digitalmarketinginstitute.com/blog/the-rise-of-ai-will-it-take-or-create-digital-jobs

Dimitri, C., Effland, A., & Conklin, N. (2005). The 20th Century Transformation of U.S.
Agriculture and Farm Policy. Economic Information Bulletin, 17.

Drum, K. (2017). You Will Lose Your Job to a Robot—and Sooner Than You Think. Retrieved
from Mother Jones: https://www.motherjones.com/politics/2017/10/you-will-lose-your-job-to-a-robot-and-sooner-than-you-think/


Efron, B., & Hastie, T. (2016). Computer Age Statistical Inference: Algorithms, Evidence, and
Data Science. UK: Cambridge University.

Ema, A., Akiya, N., Osawa, H., Hattori, H., Oie, S., Ichise, R., & Kanzaki, N. (2016). Future
Relations between Humans and Artificial Intelligence: A Stakeholder Opinion Survey in
Japan. IEEE Technology and Society Magazine, 35(4), 68-75.

Engelbart, D. C. (1962). Augmenting Human Intellect: A Conceptual Framework. Stanford
Research Institute.

Eysenck, M. W., Hunt, E., Ellis, A., & Johnson-Laird, P. N. (1991). The Blackwell Dictionary of
Cognitive Psychology. UK: Wiley-Blackwell.

Ferrucci, D. (2012). Introduction to “This is Watson”. IBM Journal of Research and
Development.

Ford, M. (2015). Rise of the Robots: Technology and the Threat of a Jobless Future. Basic
Books.

Ford, M. (2018). Architects of Intelligence: The Truth about AI from the People Building it. Packt
Publishing Ltd.

Frey, C. B., & Osborne, M. A. (2013). The Future of Employment: How Susceptible are Jobs to
Computerisation? The Oxford Martin Programme on Technology and Employment.

Frey, C. B., & Osborne, M. A. (2017). The Future of Employment: How Susceptible are Jobs to
Computerisation? Technological Forecasting and Social Change, 114, 254-280.

Gao, J., Galley, M., & Li, L. (2018). Neural Approaches to Conversational AI. Retrieved from
https://arxiv.org/pdf/1809.08267.pdf

Gershman, S. J., Horvitz, E. J., & Tenenbaum, J. B. (2015). Computational rationality: A
converging paradigm for intelligence in brains, minds, and machines. Science, 349(6245),
273-278.

Goodfellow, I. J., Vinyals, O., & Saxe, A. M. (2015). Qualitatively characterizing neural network
optimization problems. arXiv.

Haugeland, J. (1985). Artificial intelligence: The very idea. MIT Press.

Hart, P. E., Nilsson, N. J., & Raphael, B. (1968). A Formal Basis for the Heuristic Determination
of Minimum Cost Paths. IEEE Transactions on Systems Science and Cybernetics, 4(2), 100-
107.


Hassabis, D. (2015). DeepMind Technologies - The Theory of Everything. Retrieved from
Google Zeitgeist.

Hawksworth, J., Berriman, R., & Cameron, E. (2018). Will robots really steal our jobs? An
international analysis of the potential long term impact of automation. PwC.

Hayes, P., & Ford, K. M. (1995). Turing Test Considered Harmful. International Joint
Conference on Artificial Intelligence.

Hermann, K. M., Kočiský, T., Grefenstette, E., Espeholt, L., Kay, W., Suleyman, M., & Blunsom,
P. (2014). Teaching Machines to Read and Comprehend. In Advances in neural information
processing systems, 1693-1701.

Herper, M. (2017). MD Anderson Benches IBM Watson In Setback For Artificial
Intelligence In Medicine. Retrieved from Forbes: https://www.forbes.com/sites/matthewherper/2017/02/19/md-anderson-benches-ibm-watson-in-setback-for-artificial-intelligence-in-medicine/

Hindi, R. (2017). How my research in AI put my dad out of a job. Retrieved from Medium:
https://medium.com/snips-ai/how-my-research-in-ai-put-my-dad-out-of-a-job-1a4c80ede1b0

Hippold, S. (2019). Gartner Debunks Five Artificial Intelligence Misconceptions. Retrieved from
Gartner: https://www.gartner.com/en/newsroom/press-releases/2019-02-14-gartner-debunks-five-artificial-intelligence-misconce

Hurley, S. (1998). Consciousness in Action. United States: Harvard University Press.

Jeong, M. S. (2015). Humanities travel through film. Kyungsung University.


12 Jobs that Will Be Soon Replaced by AI. (2019). Retrieved from apruve:
https://blog.apruve.com/12-jobs-that-will-be-soon-replaced-by-ai

Jordan, M. (2019). Artificial Intelligence—The Revolution Hasn’t Happened Yet. Harvard Data
Science Review.

Judd, J. S. (1990). Neural Network Design and the Complexity of Learning. MIT Press.

Kelly, J. (2019). Unbridled Adoption of Artificial Intelligence May Result in Millions of Job
Losses and Require Massive Retraining for Those Impacted. Retrieved from Forbes:
https://www.forbes.com/sites/jackkelly/2019/09/30/unbridled-adoption-of-artificial-intelligence-may-result-in-millions-of-job-losses-and-require-massive-retraining-for-those-impacted/


Knapton, S. (2016). Robots will take over most jobs within 30 years, experts warn. Retrieved
from The Telegraph: https://www.telegraph.co.uk/news/science/science-news/12155808/Robots-will-take-over-most-jobs-within-30-years-experts-warn.html

Knight, W. (2020). Defeated Chess Champ Garry Kasparov Has Made Peace with AI. Retrieved
from Wired: https://www.wired.com/story/defeated-chess-champ-garry-kasparov-made-peace-ai/

Krafft, P. M., Young, M., Katell, M., Huang, K., & Bugingo, G. (2019). Defining AI in Policy
versus Practice. arXiv.

Kubacki, J. (2009). Artificial intelligence. Retrieved from SpringerLink.

Kurzweil, R. (1990). The Age of Intelligent Machines. MIT Press.

Kurzweil Network. (2012). 2 Billion Jobs to Disappear by 2030. Retrieved from Kurzweil
Accelerating Intelligence: https://www.kurzweilai.net/2-billion-jobs-to-disappear-by-2030

Lambert, J., & Cone, E. (2019). How Robots Change the World. Oxford Economics.

Lee, K. (2014). Human Robot Era is Far yet. Money Today.

Lee, K.-F. (2018a). AI Superpowers: China, Silicon Valley, and the New World Order. Houghton
Mifflin Harcourt.

Lee, Y. J., Jang, S. L., & Kim, W. J. (2012). Gutenberg’s Return. Idambooks (Korean).

Lee, S. (2018b). The Future of the 4th Industrial Revolution. One and One Books (Korean).

Liang, J., Ramanauskas, B., & Kurenkov, A. (2019). Job Loss Due To AI — How Bad Is It
Going To Be? Retrieved from Skynet Today: https://www.skynettoday.com/editorials/ai-automation-job-loss

Licklider, J. (1960). Man-computer symbiosis. IRE Transactions on Human Factors in
Electronics.

Lokitz, J. (2018). The future of work: How humans and machines are evolving to work
together. Retrieved from businessmodelsinc.com: https://www.businessmodelsinc.com/machines/

Luger, G. F., & Stubblefield, W. A. (1993). Artificial Intelligence: Structures and Strategies for
Complex Problem Solving (2nd ed.). Benjamin-Cummings Publishing Co., Inc.


Manyika, J., Lund, S., Chui, M., Bughin, J., Woetzel, J., Batra, P., . . . Sanghvi, S. (2017). Jobs
lost, jobs gained: What the future of work will mean for jobs, skills, and wages. McKinsey
Global Institute, 1-160.

McClelland, C. (2020). The Impact of Artificial Intelligence - Widespread Job Losses.
Retrieved from iotforall.com: https://www.iotforall.com/impact-of-artificial-intelligence-job-losses/

Minsky, M., & Papert, S. (1969). Perceptrons: an introduction to computational geometry. MIT
Press.

Miller-Merrell, J. (2019). How Artificial Intelligence (AI) is Changing Human Resources.
Retrieved from Randstad RiseSmart: https://www.randstadrisesmart.com/blog/how-artificial-intelligence-ai-changing-human-resources

Mogg, T. (2018). Pepper the robot fired from grocery store for not being up to the job.
Retrieved from Digital Trends: https://www.digitaltrends.com/cool-tech/pepper-robot-grocery-store/

Molla, R. (2019). “Knowledge workers” could be the most impacted by future automation.
Retrieved from Vox: https://www.vox.com/recode/2019/11/20/20964487/white-collar-automation-risk-stanford-brookings

Muro, M., Maxim, R., & Whiton, J. (2019). Automation and Artificial Intelligence: How
machines are affecting people and places. Retrieved from Brookings Metropolitan Policy
Program: https://www.brookings.edu/research/automation-and-artificial-intelligence-how-machines-affect-people-and-places/

Naudé, W. (2019). The Race against the Robots and the Fallacy of the Giant Cheesecake:
Immediate and Imagined Impacts of Artificial Intelligence. IZA Discussion Paper no. 12218.

Nguyen, A., Yosinski, J., & Clune, J. (2015). Deep Neural Networks are Easily Fooled: High
Confidence Predictions for Unrecognizable Images. arXiv.

Muhanna, W. A. (2002). The Impact of E-Commerce on the Real Estate Industry: Baen and
Guttery Revisited. Journal of Real Estate Portfolio Management, (2), 141-152.

Nichols, G. (2018). Robot fired from grocery store for utter incompetence. Retrieved
from ZDNet: https://www.zdnet.com/article/robot-fired-from-grocery-store-for-utter-incompetence/

Nilsson, N. J. (2010). The Quest for Artificial Intelligence: A History of Ideas and
Achievements. UK: Cambridge University Press.


Niyazov, S. (2019). How the Replacement of Blue-Collar Jobs by AI Will Impact the Economy.
Retrieved from iotforall.com: https://www.iotforall.com/how-ai-replacing-blue-collar-jobs-impact-economy/

OECD. (2017). OECD Science, Technology and Industry Scoreboard 2017. OECD.

OECD. (2018). AI: Intelligent machines, smart policies. OECD Digital Economy Papers, 0-33.

Protevi, J. (2006). A Dictionary of Continental Philosophy. United States: Yale University
Press.

OECD. (2019). Recommendation of the Council on Artificial Intelligence. Retrieved from
OECD: https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449

Rich, E., Knight, K., & Nair, S. (1985). Artificial intelligence. New York: McGraw-Hill.

Rich, E., Knight, K., & Nair, S. (1991). Artificial Intelligence. New York: McGraw-Hill.

Rieley, M. (2018). In the money: occupational projections for the financial industry. Beyond
the Numbers, 7(16).

Roth, G., & Dicke, U. (2005). Evolution of the brain and intelligence. TRENDS in Cognitive
Sciences, 9(5), 250-257.

Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning Internal Representations by
Error Propagation. MIT Press.

Russell, S. J., & Norvig, P. (1995). Artificial Intelligence: A Modern Approach. United States:
Prentice Hall.

Russell, S. J. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.

Schalkoff, R. J. (1990). Artificial Intelligence Engine. United States: McGraw-Hill, Inc.

Page, S. E. (2018). The Model Thinker: What You Need to Know to Make Data Work for You.
United States: Basic Books.

Shrive, T. (2018). AI will never replace jobs in the property market. Retrieved from Finance
Digest: https://www.financedigest.com/ai-will-never-replace-jobs-in-the-property-market.html


Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Driessche, G. v., . . . Sutsk. (2016).
Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587).

Simon, H. A. (1969). The Sciences of the Artificial. MIT Press.

Smiles, S. (2005). Rev. William Lee, inventor of the Stocking Frame. Retrieved from
victorianweb.org: http://www.victorianweb.org/technology/inventors/lee.html

Stone, P., Brooks, R., Brynjolfsson, E., Calo, R., Etzioni, O., Hager, G., . . . Tambe, M. (2016).
Artificial Intelligence and Life in 2030: One Hundred Year Study on Artificial Intelligence.
Report of the 2014 study panel, Stanford University.

Strickland, E. (2019). IBM Watson, heal thyself: How IBM overpromised and underdelivered
on AI health care. IEEE Spectrum, 56(4), 24-31.

Sun, S., Cao, Z., Zhu, H., & Zhao, J. (2019). A Survey of Optimization Methods from a
Machine Learning Perspective. arXiv.

Sutton, R. S., & Barto, A. G. (1998). Reinforcement Learning: An Introduction. MIT Press.

Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., & Fergus, R. (2014).
Intriguing properties of neural networks. arXiv.

Tapscott, D. (1997). Strategy in the new economy. Strategy & leadership, 25(6), 8-15.

The Week. (2019). Will you lose your job to a robot? Retrieved from The Week:
https://theweek.com/articles/866339/lose-job-robot

Thomas, M. (2019). Artificial Intelligence’s Impact on the Future of Jobs. Retrieved from
builtin.com: https://builtin.com/artificial-intelligence/ai-replacing-jobs-creating-jobs

Turing, A. (1950). Computing Machinery and Intelligence. Mind.

Ulanoff, L. (2018). Say Hello to Our Disappointing Robot Future. Retrieved from Medium:
https://medium.com/@LanceUlanoff/say-hello-to-our-disappointing-robot-future-e6f7d1d42e24

Wadhwa, V. (2016). We are heading towards a jobless future: is it good or bad? Retrieved
from Factor Daily: https://factordaily.com/artificial-intellgence-automation-india-jobless-future/

Weng, J. (2002). A theory for mentally developing robots. Proceedings 2nd International
Conference on Development and Learning, 131-140.


Winston, P. H. (1992). Artificial Intelligence (Third edition). United States: Addison-Wesley
Longman Publishing.

Wisskirchen, G., Biacabe, B. T., Bormann, U., Muntz, A., Niehaus, G., Soler, G., & Brauchitsch, B.
v. (2017). Artificial Intelligence and Robotics and Their Impact on the Workplace. IBA Global
Employment Institute.

World Economic Forum. (2016). The Future of Jobs Report 2016. World Economic Forum.

World Economic Forum. (2018). The Future of Jobs Report 2018. World Economic Forum.

Xu, J. (2017). Will human beings lose jobs due to AI? Ruggles Media.

Yang, L.-A., Liu, J.-P., Chen, C.-H., & Chen, Y.-p. (2016). Automatically Proving Mathematical
Theorems with Evolutionary Algorithms and Proof Assistants. arXiv.

Yoon, S. (2015). Digital Economy Leadership by Don Tapscott. Retrieved from
https://www.mk.co.kr/news/business/view/2015/11/1081476/

Zarkadakis, G. (2016). In Our Own Image: Savior or Destroyer? The History and Future of
Artificial Intelligence. Pegasus Books.

Institutional and Technological Design Development Through Use of Case Based Discussion

Regulatory Interventions For Emerging Economies Governing The Use Of Artificial Intelligence In Public Functions

Arindrajit Basu, Elonnai Hickok and Amber Sinha

Introduction

Background and Scope

The use of artificial intelligence (AI) driven decision making in public functions has
been touted around the world as a means of augmenting human capacities, removing
bureaucratic fetters, and benefiting society. Yet, with concerns over bias, fairness, and a
lack of algorithmic accountability, it is being increasingly recognized that algorithms have
the potential to exacerbate entrenched structural inequality and threaten core constitutional
values. While these concerns are applicable to both the private and public sector, this
paper focuses on recommendations for public sector use, as standards of comparative
constitutional law dictate that the state must abide by the full scope of fundamental rights
articulated both in municipal and international law. For example, as per Article 13 of the
Indian Constitution, whenever the government is exercising a “public function”, it is bound by
the entire range of fundamental rights articulated in Part III of the Constitution.

However, the definition and scope of “public function” is yet to be clearly defined in any
jurisdiction, and certainly has no uniformity across countries. This poses a unique challenge
to the regulation of AI projects in emerging economies. Due to a lack of government
capacity to implement these projects in their entirety, many private sector organizations are
involved in functions which were traditionally identified in India as public functions, such
as policing, education, and banking. The extent of their role in any public sector project
poses a set of important regulatory questions: to what extent can the state delegate the
implementation of AI in public functions to the private sector, and to what extent and how
can both state and private sector actors be held accountable in such cases?


AI-driven solutions are never “one-size-fits-all” and exist in symbiosis with the socio-economic context in which they are devised and implemented. As such, it is difficult to create a single overarching regulatory framework for the development and use of AI in any country, especially in countries with diverse socio-economic demographics like India. Configuring the appropriate regulatory framework for AI correctly is important. Heavy-handed regulation or regulatory uncertainty might act as a disincentive for innovation due to compliance fatigue or fear of liability. Similarly, regulatory laxity or forbearance might result in the dilution of safeguards, resulting in a violation of constitutional rights and human dignity. Therefore, we have sought to conceptualize optimal regulatory interventions based on key constitutional values and human rights that the state should seek to protect when creating a regulatory framework for AI. To devise these interventions, we identify a decision-making framework consisting of a set of core questions that can be used to determine the extent of regulatory intervention required to protect these values and rights.

We have arrived at the framework by identifying key values and rights, and analyzing AI use cases to understand how different uses and configurations of AI can challenge these values and rights. Specifically, the paper examines:

1. Use of AI in predictive policing by law enforcement;
2. Use of AI in credit rating by means of establishment of the Public Credit Registry (PCR) in India; and
3. Use of AI in improving crop yields for farmers.

This paper is divided into three sections. In the first section, we look at various models of regulation. In the second section, we expand on the use cases we chose to study in detail and the policy. In the third section, we identify core constitutional values that any regulatory framework on AI in the public sector should look to protect. In this section, we also highlight key regulatory interventions that need to be made to protect these values by developing a set of guiding questions.

We chose to work on the Indian ecosystem for three substantive reasons, apart from the convenience of geographic proximity, which allowed us to conduct our primary research. First, in terms of public policy advancement, we feel that working in India is important, as the technology and its governance frameworks are both in their nascent stages and the potential for the use of these technologies and their impact on the populace, especially those emerging technologically, is immense. Second, the constitutional framework in India on key issues such as privacy, discrimination, and exclusion has both a legacy of jurisprudence and is at a critical juncture, as they evolve and adapt with respect to emerging technologies. Finally, we believe that focusing on India allows us to make a unique contribution to the existing literature, as it charts out a potential regulatory model for other similarly placed emerging economies.

Our framework limits itself to decision making by a regulator when designing or deploying the AI solution. It does not delve into the adaptive regulatory strategy that needs to be devised as the AI project is implemented. It is also not an exhaustive framework, as many context-specific questions will alter its application. The objective is limited to framing broad questions that can guide specific regulatory interventions as decision makers choose to adopt AI.


Methodology

From the outset, we realize that the term “artificial intelligence” is used in multiple ways, and its definition is often contested. For the purposes of this paper, we define AI as a dynamic learning system where a certain level of decision-making power is being delegated to the machine (Basu & Hickok, 2018). In doing so, we distinguish AI from automation, where a machine is being made to perform a repetitive task.

The first stage of our research involved studying three applications of AI in public functions. Through primary interviews and desk research, we sought to understand:

• How the decision was arrived at to devise an AI-based solution
• Relevant policy or political enablers or detractors
• What preparatory research or field work was done before implementing the solution
• How the data was gathered and collected
• Impact assessment frameworks or evaluation metrics used to determine the success of the project by the developers and implementers
• External assessments of the impact

Using what was learnt from these case studies, we created a decision-making framework that relied on key threshold questions, as well as possible regulatory tools that could be applied.

Section I: Regulatory Models for AI

Privatizing Public Functions

Across the world, activities traditionally undertaken by the state, including running prisons, policing, solving disputes, and providing housing and health services, are increasingly being delegated to private actors, often either private firms operating transnationally (Palmer, 2008) or quasi-governmental actors (Scott, 2017a). It is not only a shift in the extent of legislative discretion but the creation of formal and rule-based arrangements that were not needed in the welfare state model, where the state delivered all services directly (Scott, 2017a). Braithwaite’s conception of a regulatory state combines state oversight with the commodification of service provision, where the citizen is treated as a consumer (Braithwaite, 2000). Businesses must deliver services with state oversight, but the extent of oversight and the modes of regulation must be determined contextually (Scott, 2017b).

The increasing privatization of public functions throws up two key constitutional questions. First, to what extent can public functions be delegated to a private actor? Little jurisprudence exists on this, as there have been very few challenges to privatization across jurisdictions. The Indian Supreme Court in Nandini Sundar and Ors vs State of Chattisgarh (2011), which banned the state designated private police organization, Salwa Judum, held that “modern constitutionalism posits that no wielder of power should be allowed to claim the right to perpetuate state’s violence… unchecked by law, and notions of

Institutional and technological design development through use cases based discussion

innate human dignity of every individual” (Sundar and Ors v State of Chattisgarh, 2011). The Court went on to criticize the state of Chattisgarh’s “policy of privatization” that was the cause of income disparity and non-allocation of adequate financial resources in the region, which in turn was responsible for the Maoist/Naxalite insurgency. However, there was no clarification on what services are “governmental” and cannot be delegated. The only clear carve-out was the state’s monopoly on the use of violence, which could under no circumstances be delegated, although some indication of where to draw the line comes from the following dictum of the Supreme Court in Nandini Sundar:

“Policies of rapid exploitation of resources by the private sector, without credible commitments to equitable distribution of benefits and costs, and environmental sustainability, are necessarily violative of principles that are ‘fundamental to governance’, and when such a violation occurs on a large scale, they necessarily also eviscerate the promise of equality before law, and equal protection of the laws, promised by Article 14, and the dignity of life assured by Article 21.”

The Israeli Supreme Court in Academic Center of Law and Business vs Minister of Finance (2006) had also invalidated a statute allowing for the privatization of prisons by reading its Basic Law. The judges in the majority opinion did not embark on an inquiry into whether private prisons worked better than those run by the government (Academic Center of Law and Business v Minister of Finance, 2006). Instead, there was an assumption made that privatization was illegal because private actors inherently harmed human rights more than public providers.1 The Court argued that only the state itself had the right to deprive people of their liberty and dignity. The minority opinion countered this proposition by claiming that if the private sector was in fact able to maintain better prison conditions than the public sector, then privatizing prisons may actually further human dignity instead of undermining it.2 This is a valid concern for emerging economies, as there are various circumstances, including AI deployment, where private actors can deliver services more efficiently than an overstretched state. However, given the implications for human rights and dignity, it is conceptually difficult to draw an objective line on delegation. The Court “assumed there is no constitutional impediment to privatization of a vast majority of services provided by the state”.3

In the US, no bar to privatization exists and the market for private actors providing prison services is booming (Pelaez, 2019). In fact, a US appellate judge has stated that a prisoner only “had a legally protected interest in the conduct of his keeper, not in the keeper’s identity” (Pischke v Litscher, 1999). This lack of clarity on the definition, scope, and delegation of public functions means that when deciding the extent to which an AI use case can be delegated to a private actor, a number of other context-specific factors must be considered. These will be developed and discussed in Section III.

The second constitutional question hinges on the extent to which the state or a private actor can be held accountable for a violation of fundamental rights. The state action doctrine in the US formulates an apparently clear principle: constitutional rights apply to the state and not to private action (except in certain situations, such as Habeas Corpus).4 State action, simply put, includes all government action, which includes acts by the executive, legislature, and judiciary at both the central and state levels (Jaggi, 2017). However, the doctrine has a clear “public function” exception. As per this exception, a private actor may be considered a state actor if it “performs the customary functions of government” (Lloyd Corp Ltd v Tanner, 1972) or if it performs a function that is “traditionally exclusively reserved to the state” (Barrows v Jackson, 1953). The Indian Constitution is similar in that Article 12 states:

1. Para. 18 (Procaccia)
2. Id. ¶¶ 2, 4
3. Id. ¶ 65 (Beinisch). However, Justice Jowell did note that policing, defence, treaty-making, prosecution, and dissolving Parliament may be core governmental powers (¶¶ 29–30).
4. First articulated in The Civil Rights Cases (1883)


“Definition. In this part, unless the context otherwise requires, the State includes the Government and Parliament of India and the Government and the Legislature of each of the States and all local or other authorities within the territory of India or under the control of the Government of India.”

The question of whether private actors performing “public functions” come under “other authorities” has come up before the Supreme Court. Questions have revolved around the status of the Board of Control for Cricket in India (BCCI). In Zee Telefilms vs Union of India (2005), the Supreme Court held that the BCCI is not discharging a public function, although it did not reject the public function test. The dissenting judges in Zee Telefilms recognized that with privatization and liberalization, as governmental functions are being delegated to private bodies, these private bodies must safeguard fundamental rights when discharging public functions. In 2015, the Supreme Court held that the BCCI is, in fact, performing a public function and therefore can be held accountable under Article 12 (Sethia, 2015). More recently, the Supreme Court held that a private university can be held accountable for violation of fundamental rights, as it performs a public function or public duty by imparting education. Therefore, it is fair to say that Indian courts have adopted the public function exemption. Yet, given the lack of clarity on the definition of “public function”, a context-specific approach is needed when ensuring that appropriate accountability, grievance redressal mechanisms, and liability are imposed in such cases. One test we recommend for the purpose of classification is linking the public function back to recognized aspects of the “right to life” enshrined in Article 21 of the Indian Constitution. The Supreme Court has held that “the right to life includes the right to live with human dignity and all that goes along with it, namely, the bare necessities of life such as adequate nutrition, clothing and shelter, and facilities for reading, writing, and expressing oneself in diverse forms, freely moving about, and mixing and commingling with fellow human beings” (Francis Coralie Mullin v UT of Delhi, 1981). While recognizing that the magnitude and scope of this right is contingent on economic development, the Court stressed that the basic necessities of life, and the right to carry on such functions, are essential for basic human autonomy. Therefore, any entity carrying out a function that has implications for any of the functions described could be treated as performing a “public function”, although this cannot operate as a hard and fast rule.

Challenges to Regulating AI

Regulation is often designed to avert, mitigate, or limit risks (Haines, 2017) to human health or safety, or more broadly, to the effective functioning of a society. However, the risks that AI poses are only just being discovered and will continue to be realized as a greater number of use cases are designed and implemented. Importantly, the risks posed by AI cannot be determined only by evaluating the technology at hand. A genuine assessment of risk must contextualize the technology within the socio-economic, cultural, and demographic space within which it is being applied. The same AI technology or solution used for a specific use case in the defense industry may pose very different risks when used in the educational sector.

Scherer charts out four problems with regulating AI development ex ante (Scherer, 2016): “discreetness”, which means that AI projects could be developed in the absence of large-scale institutional frameworks; “diffuseness”, which entails that AI projects could be devised by a number of diffuse actors in various parts of the world; “discreteness”, which means that projects will use discrete components, and the full potential or risk of the AI system may not be apparent until the system finally comes together; and “opacity”, which means that the technologies underpinning the system may be opaque to most regulators (Scherer, 2016).


Given these challenges, several academics have advocated applying Ayres and Braithwaite’s proposition of responsive regulation to AI development (Terry, 2019). Simply put, responsive regulation suggests that appropriate regulatory interventions should be determined based on the regulatory environment and the conduct of the regulated (Ayres & Braithwaite, 1992). The crux of the idea lies in a pyramid of enforcement measures, with the most interventionist command-and-control regulations at the apex and less intrusive measures such as self-regulation making up the base (Ayres & Braithwaite, 1992). For all matters, Ayres and Braithwaite believe it is better to start at the bottom of the pyramid and escalate up the structure if the regulatory objectives are not being met. This way, the government signals a willingness to regulate more intrusively while averting the negative impacts of more interventionist regulation at the very outset (Ayres & Braithwaite, 1992).

However, when deploying AI in public functions, moving along a spectrum from leniency to intrusiveness in all instances is fraught with risks to core constitutional values and human rights. This holds particularly true when the project is in its design stage or just about to be implemented, and the impact is not entirely known. We therefore advocate for “smart regulation” – a notion of regulatory pluralism that fosters flexible and innovative regulatory frameworks by using multiple policy instruments, strategies, techniques, and opportunities to complement each other (Gunningham & Sinclair, 2017). Based on certain threshold questions that help identify the risks posed by a specific use case to core values, we attempt to provide guidance as to what different instruments, strategies, techniques, and opportunities could mitigate these risks associated with AI development and use.

Modes of Regulation

Broadly speaking, “regulation” can be conceptualized as governing with a certain intention across a number of often-complex situations (Doekler, 2010) where competing interests are at stake (Kleinsteuber, n.d.). Traditionally, regulation has been determined by the sovereign, although market actors are increasingly determining their own regulatory frameworks, either through self-devised codes of conduct or in conjunction with sovereign entities. The decentralization of regulation away from a solely government-driven model is being spurred on by the fact that governments have incomplete information and expertise, and do not have the financial or human resources to devise, implement, and enforce regulation when emerging technologies propel rapid change and consequent uncertainty (Guihot, Matthew, & Suzor, 2017).

Primary (Government-driven) Regulation

Traditionally, governments have various tools at their disposal to implement legislation: nodality, authority, treasure, and organization (Hood & Margetts, 2008). Nodality refers to the government’s pivotal role as a receiver and distributor of vast sources of information, which enables it to ensure implementation of the law by detecting breaches and subsequently passing sanctions (Hood & Margetts, 2008). Authority bestows on the government the power to enforce sanctions and “demand, forbid, guarantee, and adjudicate” in a manner that is respected by all stakeholders (Hood & Margetts, 2008). In governmental regulation, implementation is through force and punitive sanctions for non-compliance, with the regulated not necessarily having a clear say in the framing of the regulation (Doekler, 2010). Treasure, or the “treasure chest”, refers to the variety of resources, both monetary and infrastructural, at the disposal of the government to carry out any task (Hood & Margetts, 2008). Organization is the bureaucratic structure which enables the government to actualize the three other elements.

However, all of these elements may not necessarily apply to the multifarious nature of tasks that need to be examined when regulating AI-driven


solutions, particularly in economies as diverse and heterogeneous as India. The challenges in keeping up with the rapid pace of technological evolution have been better understood by private companies such as Google and Microsoft, who have taken the lead both in bank-rolling and implementing a variety of AI-driven solutions (Basu & Hickok, 2018). They possess the requisite expertise and human resources to conceptualize and incorporate various tools of regulation into the governance of AI. Therefore, in the regulatory domain, these companies are driving the rules of the game by creating codes of conduct for themselves and their peers in industry.

Peer Regulation or Self-regulation

Jessop describes self-regulation as a system of bottom-up governance that allows private actors to limit the role of regulatory bodies by adopting a “reflexive self-organization of independent actors involved in complex relations of reciprocal interdependence, with such self-organization being based on continuing dialogue and resource sharing to develop mutually beneficial joint projects, and to manage the contradictions and dilemmas inevitably involved in such situations” (Jessop, 2003). In a self-regulatory ecosystem, actors conceptualize and voluntarily comply with their own set of codes, thereby serving as a form of informal regulation, with no punitive sanction for non-compliance (Fjeld, Achten, Hilligoss, Nagy, & Srikumar, 2020). Self-regulation can be one of two types. The first, a more standardized form, describes situations where industry-wide organizations set rules, standards, and codes for all actors operating in that industry. The second, voluntarism, occurs when an individual firm chooses to regulate itself and create its own code of conduct without any coercion (Gunningham & Sinclair, 2017).

Attempts at self-regulation have already started in the governance of AI. A recent study (Fjeld, Achten, Hilligoss, Nagy, & Srikumar, 2020) by the Berkman-Klein Center at Harvard University (hereinafter the “Berkman-Klein study”) identified eight sets of “ethical” AI principles put forward by a range of multi-national companies, including Microsoft, Google (Pichai, 2018), and IBM. Each of these sets of guidelines espouses principles that defer to, but fail to explicitly incorporate, standards of domestic or international law (Basu & Pranav, 2019). For example, to protect the Right to Equality, the Google AI principles merely seek to avoid “unjust impacts on people, particularly those related to sensitive characteristics”, without referring explicitly to the various contours of, and jurisprudence related to, the Right to Equality across jurisdictions.

As the European Commission High Level Expert Group has identified, even after legal frameworks have been complied with, “ethical reflection can help us understand how the development, deployment, and use of AI systems may implicate fundamental rights and their underlying values, and can help provide more fine-grained guidance when seeking to identify what we should do rather than what we (currently) can do with technology” (European Commission, n.d.). However, Mittelstadt argues that ethical frameworks are prone to fail in regulating AI solutions because, unlike other fields where ethics are used as regulatory interventions, AI lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms (Mittelstadt, 2019). Further, ethical guidelines devised by multi-national corporations often do not apply in the specific societal or legal contexts across jurisdictions (Arun, 2019).

Therefore, reliance on self-regulation through ethical AI guidelines may not be adequate to appropriately regulate the variety of ways in which AI may be deployed in public functions and to genuinely protect core values and human rights.


Co-regulation

A decentralized understanding of regulation entails an acknowledgement of the fact that states cannot be the only regulators, and the complexity, fragmentation, and clashes in power and control ensure that regulation is hybrid, multi-faceted, and often indirect (Black, 2001). Co-regulation has a variety of definitions. Often referred to as “regulated self-regulation” (Schulz & Held, 2001), co-regulation is founded on a legal framework through which private entities govern their affairs through codes of conduct or sets of rules (Doekler, 2010). The formation of the legal framework can be done in a multitude of ways but generally considers a link between state and non-state regulation. The European Commission has arrived at the following elements of co-regulation (Schulz & Thorsten, 2006):

1. The system is created to attain public policy objectives directed at societal processes;
2. There is a connection between the state and non-state regulatory system;
3. Some level of discretionary power is left to the non-state regulatory system;
4. There is an adequate level of supervision and involvement by the state.

In a co-regulatory framework, governments and private actors share responsibilities (Schulz & Thorsten, 2006). One way of doing this would be to divide up tasks. Government could set the high-level goals but enable the industry to set standards while still retaining some supervisory discretion.

Co-regulation is widely present in the US. For example, the Network Advertising Initiative (NAI) runs as a self-regulatory body that is then approved by the Federal Trade Commission (Federal Trade Commission Staff, 2009). Another form of co-regulation is when the government and private sector perform a number of tasks together. This may include both creation and enforcement of standards, such as in the case of the California Occupational Health and Safety Administration, which created a program where it worked with representatives from both management and labor to create and implement safety standards for construction sites (Freeman, 2000).

Through discussion and feedback, co-regulation would see the fostering of effective ideas over a period of time. A co-regulation approach to developing and implementing tools in AI governance would allow for the symbiosis of private sector technical expertise with public sector law-making experience. The potential problem with co-regulation is the creation of a culture of continuous lobbying, through which an already stretched public sector is compelled to respond to various pressure groups with conflicting agendas.

As we move from hierarchical regulation to more hands-off self-regulation, regulatory intervention becomes less rigid and binding, but also more participatory, and can potentially mitigate a far broader range of harms. Simply put, the greater the uncertainty and ambiguity in a type of intervention, the greater the range of cases it is able to regulate. The characteristics of each form of intervention are summarized in the table below.


Legislation
• Enforceability: Highest. Binding law, along with clearly defined sanctions for non-compliance.
• Rigidity: Highest. Clearly defined standards of municipal law, with any ambiguity ideally being resolved by the judiciary.
• Creation: Top-down. Devised by the legislator with optional consultation.
• Applicability: Lowest common denominator. Would only prevent directly identifiable harms resulting from AI, and would also require production of adequate evidence and causality.

Co-Regulation
• Enforceability: Middle. Decentralized regulatory process may lead to a binding outcome.
• Rigidity: Not unique. Could be clearly defined or vague depending on the outcome.
• Creation: Participatory. Government, civil society, and industry meaningfully engage in this process.
• Applicability: May have wide or narrow applicability to actors, situations, and individuals depending on the context.

Self-Regulation
• Enforceability: Lowest. Enforceable at the organizational level but not binding. Reliance on “soft sanctions” with no clearly defined sanctions for non-compliance.
• Rigidity: Lowest. Less clearly articulated frameworks, with greater ambiguity and more scope for manipulation.
• Creation: Participatory. Devised through high-level consultations among industry and civil society but with an absence of government actors.
• Applicability: All AI that is ethical is necessarily legal; however, ethical frameworks have a broader applicability to harms that are outside the rigid confines of the law.

Table 1: Modes of regulation
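The escalation logic of responsive regulation, and the ordering captured in Table 1, can be sketched as a minimal decision procedure: start at the least intrusive rung and move up only while regulatory objectives remain unmet. This is purely our own illustrative sketch; the rung names mirror Table 1, and the `objectives_met` predicate stands in for whatever empirical test a regulator would actually apply, so none of it should be read as Ayres and Braithwaite's own formalism.

```python
# Illustrative sketch (not from any cited framework): the Ayres & Braithwaite
# enforcement pyramid as an escalation ladder. The regulator starts at the
# least intrusive rung and escalates only while objectives are unmet.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Rung:
    name: str       # e.g. "self-regulation" (base) ... legislation (apex)
    binding: bool   # whether sanctions for non-compliance are enforceable


# Base of the pyramid first, apex last, mirroring Table 1.
PYRAMID: List[Rung] = [
    Rung("self-regulation", binding=False),
    Rung("co-regulation", binding=True),
    Rung("command-and-control legislation", binding=True),
]


def escalate(objectives_met: Callable[[Rung], bool]) -> Rung:
    """Return the least intrusive rung at which regulatory objectives are
    met; fall back to the apex if no rung suffices."""
    for rung in PYRAMID:
        if objectives_met(rung):
            return rung
    return PYRAMID[-1]


# Example: suppose only binding interventions achieve the objective.
chosen = escalate(lambda rung: rung.binding)
print(chosen.name)  # co-regulation
```

The point of the sketch is the ordering, not the predicate: the paper's argument is precisely that for AI in public functions this bottom-up traversal can itself be risky, which motivates the "smart regulation" mix of instruments instead.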


Section II: Use Cases of AI in Public Functions

This chapter revolves around the governance of specific use cases that we studied concerning the use of AI in public functions in India. As the definition of “public function” remains unclear, we adopted a broad remit of use cases – from core governmental functions, which channel the state’s monopoly over the use of violence (as discussed in Nandini Sundar), to credit rating, which is seeing increased private sector involvement and does not easily fit into the notion of a core state function such as lawmaking or policing.

The policy ecosystem in India has sought to promote AI adoption with a number of policy instruments, underscoring the need to instrumentalize AI and create broad-stroke frameworks and focus areas. These include the discussion paper for the National Strategy on Artificial Intelligence, published by India’s government think tank NITI AAYOG (Kumar, Shukla, Sharan, & Mahindru, 2018), as well as the Report of Task Force on Artificial Intelligence (Department for Promotion of Industry and Internal Trade, 2018) – a task force set up by the Ministry of Commerce. There are three main policy levers we can take away from the National Strategy. First, it suggests that the government should set up a multi-disciplinary committee to create a national data marketplace, so that organizations looking to derive data-driven insights can benefit from this data. Second, it proposes an “AI+X” approach that articulates the long-term policy vision for India. Instead of replacing existing processes in their entirety, decision-making AI should always look to identify a specific gap in an existing process (X) and add AI to augment efficiency. Third, it envisions the use of India as a “garage”, or test bed, for emerging economies, which we feel is a risky approach as it treats Indian citizens as guinea pigs without considering the potential impact on constitutional rights (Basu, 2019). Instead, India can set the tone for emerging economies by devising appropriate regulatory interventions that bring the best out of the technology without posing significant harms.

Without delving into the appropriate regulatory strategy for each use case, we explain each by looking at the following questions:

• How the decision was arrived at to devise an AI-based solution;
• Relevant policy or political enablers or detractors;
• What preparatory research or field work was done before implementing the solution;
• How the data was gathered and collected;
• Impact assessment frameworks or evaluation metrics used to determine the success of the project by the developers and implementers;
• External assessments of the impact;
• Extent of involvement of the private sector;
• Regulatory framework in the sector.

Predictive Policing in Government/Law Enforcement

Predictive policing is making great strides in various Indian states, including Delhi, Punjab, Uttar Pradesh, and Maharashtra. A brainchild of the Los Angeles Police Department, predictive policing is the use of analytical techniques such as machine learning to identify probable targets for intervention, in order to prevent crime or to solve past crime through statistical predictions (Berg, 2014). Conventional approaches to predictive policing begin by using algorithms to analyze aggregated data sets to map locations where crimes are concentrated (hot spots). Police in Uttar Pradesh (Sharma S., 2018) and Delhi (Das, 2017) have partnered with the Indian Space Research Organization (ISRO) in a memorandum of understanding (MoU) that allows ISRO’s Advanced Data Processing Research Institute to map, visualize, and compile reports about crime-related incidents.

There are also major developments on the facial recognition front. The Punjab Police, in association with Gurugram-based start-up Staqu, has begun


implementing the Punjab Artificial Intelligence System (PAIS), which uses digitized criminal records and automated facial recognition to retrieve information on a suspected criminal (Desai, 2019). Staqu has worked with police in a number of other states, including Uttar Pradesh, Uttarakhand, and Rajasthan (Ganguly, 2020).

It is important to acknowledge that bias existed in policing well before data-driven decision making came into the picture. Studies conducted in several states point to a disproportionately high representation of minorities and vulnerable communities in prisons (Common Cause, 2018). Muslims in particular have been impacted by this trend and have also reported the highest rates of contact with the police among any community (17%) (Common Cause, 2018). Courts have often found that incarceration has taken place based on false implications, which highlights flaws in the decision-making processes adopted by the police (Common Cause, 2018). This creates potentially flawed feedback loops, where increased police presence in certain areas leads to more crime being detected, in turn leading to further police surveillance.5

The thinking behind devising and implementing predictive policing systems appears to be trust in the improved accuracy that data-driven decision making can provide. One official is reported as saying that “the key to [predictive policing] is the massive data on previous crimes and how best our people are able to analyze and correlate them with the present crimes” (Sharma, 2017).

A detailed analysis by Marda and Narayan of Delhi Police’s predictive policing system, the Crime Mapping, Analytics and Predictive System (CMAPS), is very useful in understanding how this data is collected (Marda & Narayan, 2020a). The source of the input data was calls received by the Delhi Police Dial 100 call center. Unfortunately, the input data at this level is often flawed. The call taker is expected to enter the details of the crime into the “PA 100 form”, which records information received from the caller into one of 130 pre-determined categories, or into “miscellaneous” if it is too difficult to slot it in cleanly. If more than one crime is reported, such as purse snatching and murder, only the more grievous crime is recorded. This is then escalated to the “Green Diary”, which is often at the mercy of the police officer recording the incident. Police officers commonly believe that complaints by women are usually false (Marda & Narayan, 2020b). Marda and Narayan’s study confirms that the gathering of this information has been selective and subjective. Among police officers there is “a general apathy towards individuals living in slums and more forgiving outlooks with respect to individuals living in posh parts” (Marda & Narayan, 2020c).

The systems are shrouded in opacity, with CMAPS being outside the remit of the Right to Information Act, and appear to lack standard operating procedures or grievance redressal mechanisms. There is no legislation, policy, or guideline that regulates and guides the operation of these systems, and no framework for evaluation. Reports indicate that there was no preparatory work or empirical research undertaken by the police to identify how concerns raised by multiple studies in other parts of the world, where predictive systems have been deployed, might play out in India. As Marda and Narayan point out, the greater number of calls from poorer parts of Delhi might not be indicative of a higher crime rate than in the relatively richer areas; it could simply be a cry of desperation from vulnerable communities who do not have access to other governance institutions (Khanikar, 2018). Given the current state of data curation practices, data-driven decision making might not provide a fair or accurate outcome.

5. Insights gained from primary interview
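The feedback loop described above – where more recorded crime attracts more patrols, which in turn produce more recorded crime – can be illustrated with a toy simulation. This is purely our own sketch: the number of areas, true crime rates, and detection probabilities are all invented, and it models none of the actual systems (CMAPS, PAIS) discussed in this chapter.

```python
# Illustrative only: a toy model of the hot-spot feedback loop. All numbers
# are invented. Every area has the SAME true crime rate; area 0 merely starts
# with slightly more *recorded* crime. Because patrols go to the area with
# the highest recorded count, and patrols raise detection, area 0's record
# compounds its head start.

import random

random.seed(0)

AREAS = 4                # a toy city with four areas
TRUE_CRIMES = 10         # identical true crimes per period in every area
BASE_DETECTION = 0.3     # detection probability without extra patrols
PATROL_BOOST = 0.4       # extra detection probability in the patrolled area

recorded = [5, 4, 4, 4]  # historical records; area 0 starts marginally higher

for period in range(20):
    # "Hot spot" = the area with the most recorded crime so far.
    hot_spot = max(range(AREAS), key=lambda a: recorded[a])
    for area in range(AREAS):
        p = BASE_DETECTION + (PATROL_BOOST if area == hot_spot else 0.0)
        recorded[area] += sum(random.random() < p for _ in range(TRUE_CRIMES))

print(recorded)  # area 0's record ends far ahead despite equal true rates
```

Even though every area commits the same number of crimes per period, the recorded counts diverge sharply: the data ends up confirming the patrol allocation that generated it, which is exactly the “flawed feedback loop” the text warns about.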


While there has been considerable political excitement about the use of AI and machine learning in law enforcement over the last few years (Basu & Hickok, 2018), there has also been a parallel discourse advocating a need for caution about the use of such techniques. This cautionary note is even more pronounced in the use of machine learning by the state for public functioning, particularly where it leads to decision-making that impacts individual rights and entitlements. The intended use of AI by law enforcement in India to infer individual affect and attitude offers a ripe opportunity to consider the opacity of such techniques. Even though the framers of the constitution deliberately kept the words "due process of law" out of the Indian Constitution, subsequent years of jurisprudence have adopted versions of the US constitutional law doctrines of "procedural due process" and "substantive due process" within the meaning of "procedure established by law" under Article 21. In criminal law, statutes that define offences and prescribe punishments are considered "substantive", while those that relate to matters of process are considered "procedural". It is now accepted law that a procedural law which deprives "personal liberty" has to be "fair, just, and reasonable, not fanciful, oppressive, or arbitrary" (Maneka Gandhi v Union of India, 1978). During investigations, as per the criminal procedure code, law enforcement officers can take certain actions on the basis of "reasonable suspicion" and "reasonable grounds".

In the life cycle of actions by law enforcement agencies and the courts, starting from the opening of an investigation, followed by arrest, trial, conviction, and sentencing, we see that as the individual is subjected to increasing incursions or sanctions by the state, a higher standard of certainty about wrongdoing and a higher burden of proof are required. Actions taken by law enforcement agencies, such as surveillance or arrests based on the use of sentiment analysis, would be subject to the standard of due process. However, there is no way to judicially examine the reasonableness of such an action if the process is not explainable.

The standard in US law for search and seizure under the Fourth Amendment is also one of "reasonable suspicion", and we can look at US jurisprudence on this term for guidance. This standard was defined as requiring law enforcement agencies to "be able to point to specific and articulable facts which, taken together with rational inferences from those facts, reasonably warrant that [actions]" (Terry v Ohio, 1968). In the case of informant tips, US jurisprudence considers an informant's veracity, reliability, and basis of knowledge as relevant factors (Illinois v Gates, 1983). The standard of "reasonable suspicion" under the Fourth Amendment protection is not met by all tips. For instance, anonymous tips need to be detailed, timely, and individualized (Alabama v White, 1990). The grounds of reasonable complaint and credible knowledge in Section 49 of the Code of Criminal Procedure in India speak to a similar expectation of reliability and basis of knowledge.6 It has also been clearly held that "reasonable suspicion" is not the same as the subjective satisfaction of a law enforcement officer (Partap Singh (Dr) v Director of Enforcement, Foreign Exchange Regulation Act, 1985), and clearly requires a good faith element on the part of the law enforcement agency (State of Punjab v Balbir Singh, 1994). In the case of a reliance upon an algorithm to substitute the role of tips, it is therefore necessary to evolve legal standards that can test the reliability and basis of an algorithmic technique, its suitability to the context, and the relevance of the dataset in use. However, where these techniques are opaque, as Marda and Narayan have demonstrated, the capacity of law enforcement agencies to make informed decisions, as well as the ability of the judiciary to examine their use, is severely limited. When a law enforcement officer relies on tips to arrive at a good faith understanding, there is a clear way for a reviewing officer or a judge to evaluate the nexus between the available facts, the good faith understanding, and the decisions taken; this is the basis of the review. The same is not possible in the case of an opaque algorithmic tool.

6. Section 49 (1) (a) of the Code of Criminal Procedure states as follows: “When police may arrest without warrant. (1) Any police officer may without an order from a Magistrate and
without a warrant, arrest any person (a) who has been concerned in any cognisable offence, or against whom a reasonable complaint has been made, or credible information has been
received, or a reasonable suspicion exists, of his having been so concerned.”
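The concern noted earlier, that crime data sourced from calls received by the Delhi Police may not yield fair or accurate outcomes, can be illustrated with a toy simulation. All rates below are invented for demonstration; the point is only that differential recording of complaints skews the dataset before any predictive model is ever trained on it.

```python
import random

# Illustrative only: two groups with identical true incident counts, but
# calls from group B are logged less often than calls from group A.
random.seed(0)

def recorded_rate(true_incidents, recording_prob, population):
    """Fraction of the population that appears as a recorded incident."""
    recorded = sum(1 for _ in range(true_incidents)
                   if random.random() < recording_prob)
    return recorded / population

POP = 10_000
TRUE_INCIDENTS = 500  # same underlying incidence in both groups

rate_a = recorded_rate(TRUE_INCIDENTS, 0.9, POP)  # calls usually logged
rate_b = recorded_rate(TRUE_INCIDENTS, 0.4, POP)  # calls often dismissed

# Although the true incidence is identical, the recorded data makes
# group B's complaints look far rarer than group A's.
biased_gap = rate_a - rate_b
```

Any model trained on such records inherits the gap: it learns the recording behavior of the call receiver, not the underlying pattern of crime.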

118
Regulatory Interventions For Emerging Economies Governing The Use Of Artificial Intelligence In Public Functions

There are also significant issues with judicial and law enforcement application of due process laws in India. For instance, despite having laws on admissibility and strict legal standards on what evidence is admissible, these rules are often set aside.7 Even more alarming is the legal position on warrantless arrests, where the courts have held that police officers are not accountable for the discretion of arriving at the conclusion of reasonable suspicion while conducting a search on a suspect.8 The lack of these protections makes it harder to hold police accountable for excessive or unlawful use of predictive policing methods. Laws such as the Unlawful Activities (Prevention) Act (UAPA) are notorious for placing wide and unaccountable discretionary powers in the hands of law enforcement agencies (Khaitan N., 2019). In the UAPA, for instance, the term "unlawful activities" includes "disclaiming" or "questioning" the territorial integrity of India, and causing "disaffection" against India. The egregiously broad wording of such provisions comes close to criminalizing not just unlawful acts but also objectionable beliefs and thoughts. In this context, the derivation of the likelihood of an individual to commit crime through an opaque and unreliable technique such as predictive policing poses key challenges for decision makers.

Credit Rating

AI is being harnessed by lenders to calculate credit scores and develop credit profiles. With the use of AI algorithms that draw from various data entries, such as an individual's banking transactions, their past decisions, their spending and earning habits, familial history, and mobile data, firms can make fast credit decisions for typical and atypical applicants (ICICI Bank, 2020). For example, Loan Frame uses AI and machine learning to examine a borrower's profile and evaluate their creditworthiness (Loan Frame, 2020). Similarly, start-ups such as Lending Kart (2020) and Capital Float (2020) use AI to assess the creditworthiness of micro, small, and medium enterprises (MSMEs) to help reduce the risk of defaulting. Kaleidofin is another start-up that has attempted to solve the many challenges of financial inclusion in rural and semi-rural areas. They have used algorithms to analyze a variety of data and "recommend a single, seamless package of insurance and investment solutions" (Randazzo, 2013).

Companies and public sector banks assert that using AI has enabled them to bolster financial inclusion by including those who lack a formal credit history (Vishav, 2019, as cited in Singh & Prasad, 2020). Flaws in credit rating have existed across countries for some time (Smith, 2018), with the creditworthiness of an individual being contingent on local social and cultural notions of who "ought" to get loans, rather than simple number crunching (Kar, 2018a). Known as redlining, these practices have had deleterious financial and social impacts on minorities, particularly the African-American community in the US (Pearson, 2017; Corbett-Davies et al., 2017).

In a detailed exposition of what she terms the "moral economy of credit" in West Bengal, Kar demonstrates that biases in conceptions of "credit-worthiness" are entrenched among loan-givers across microfinance institutions (MFIs) (Kar, 2018a). She argues that "capacity was invoked as an ethical judgment [by the loan officer] of a borrower's ability to repay a loan, and was understood not through a seemingly objective analysis of financial data but through repeated exchanges with the borrowers during the verification process" (Kar, 2018b). She identifies five categories of exclusion driven by loan officers at microfinance institutions: religion, caste, class, language barriers, and location. Discrimination is "inter-sectional" (Kar, 2018b). "A number of Muslim dominated neighborhoods in Kolkata are discriminated against both because of their religion and because they are non-Bengali – largely migrants from the central Indian states of Uttar Pradesh or Bihar" (Kar, 2018b). The lack of data on individuals operating on the margins of or outside the formal financial system, combined with these entrenched patterns of exclusion, has ignited enthusiasm for data-driven decision making in this field.

7. See Umesh Kumar vs State of AP (2013) 10 SCC 591 (“It is a settled legal proposition that even if a document is procured by improper or illegal means, there is no bar to its
admissibility if it is relevant and its genuineness is proved. If the evidence is admissible, it does not matter how it has been obtained. However, as a matter of caution, the court in
exercise of its discretion may disallow certain evidence in a criminal case if the strict rules of admissibility would operate unfairly against the accused.”)
8. Section 165 of the Code of Criminal Procedure
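The style of algorithmic credit decisioning described above can be sketched, in highly simplified form, as a scoring function over proxy features such as phone-bill payment behavior and cash-flow ratios. Everything in this sketch (feature names, weights, bias term) is invented for illustration and does not reflect any actual lender's model:

```python
import math

# Hypothetical proxy features and hand-set weights, for illustration only.
WEIGHTS = {
    "telecom_bill_paid_on_time": 1.2,    # proxy for willingness to repay
    "monthly_inflow_outflow_ratio": 2.0,  # proxy for ability to repay
    "utility_account_age_years": 0.15,    # proxy for identity/stability
}
BIAS = -2.5

def credit_score(features: dict) -> float:
    """Logistic score in (0, 1): higher means lower predicted default risk."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

applicant = {
    "telecom_bill_paid_on_time": 1.0,     # 1.0 = always on time
    "monthly_inflow_outflow_ratio": 1.4,  # earns 1.4x what they spend
    "utility_account_age_years": 6.0,
}
score = credit_score(applicant)
```

The proxy structure is the crux of the fairness problem discussed in this section: whatever social patterns are embedded in these stand-in features flow directly into the score.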


Machine learning algorithms are trained on curated datasets, often referred to as "training data". For the purposes of fintech lending, this could be datasets that contain information about people's behavior online, spending patterns, living conditions, geolocation, etc. As mentioned above, some fintech companies in India have publicly acknowledged that the number of data points is often around 20,000 (Nag, 2016). Machine learning-enabled credit scoring works by collecting, identifying, and analyzing data that can be used as proxies, as mentioned above, for the three key questions in any credit-scoring model: a) identity, b) ability to repay, and c) willingness to repay (Capon, 1982). With the advent of big data and greater digitization and datafication of information, new data sources such as telecom data, utilities data, retailer and wholesale data, and government data are available. Traditionally, credit-scoring algorithms consider set categories of data, such as an individual's payment history, debt-to-credit ratio, length of credit history, new credit, and types of credit in use.

The Reserve Bank of India is in the process of establishing the Public Credit Registry (PCR) for India – a comprehensive database of verified and granular information that will create a "financial information infrastructure" for providing credit at a national level. Chugh and Raghavan (2019) identified five limitations in the functioning of the existing information infrastructure, which the PCR seeks to remedy. These include a lack of comprehensive data, fragmented information, dependence on self-disclosure by borrowers, authenticity of the data, dated information, and inefficiencies due to multiple reporting (Chugh & Raghavan, 2019). Speaking about the registry, Dr. Viral Acharya, Deputy Governor, explained that "in an emerging economy like India, it is always felt that the smaller entrepreneurs, mostly operating under the informal economy do not get enough credit as they are informationally opaque to their lenders" (FinDev Gateway, 2019).

With the introduction of new forms of data, the richness of data may theoretically increase the predictive power of the algorithm (Ranger, 2018). However, narratives on greater accuracy presume both the suitability of the input data towards the desired output, as well as faith that past attributes or activities that are used as training data do not lead to unintended outcomes (Joshi, 2020). There have been concerns that a combination of a vast variety of data points and the correlations recommended by machine learning processes will produce discriminatory outcomes that are not apparent and cannot be scrutinized in a court of law (Langenbucher, 2020). When a model relies on generalizations reflected in the data, the final result for the individual will be determined by shared data on the relative group that the system assigns to them, rather than the specific circumstances of the individual (Barocas & Selbst, 2016). Algorithmic


credit scores can remove bias only as much as the data that fuels them. Often, an assessment of the assigned group is also flawed. The development of "risk profiles" for individuals by the car insurance industry is a useful example (Kahn, 2020). Data might indicate that accidents are more likely to take place in inner-city areas where the roads are narrower. Racial and ethnic minorities tend to reside more in these areas, which effectively means that the data indicates that racial and ethnic minorities, writ large, are more likely to get into accidents. Software engineers are responsible for constructing the mined datasets, defining the parameters, and designing the decision trees. Therefore, as Citron and Pasquale put it, "the biases and values of system developers and software programmers are embedded into each and every step of development" (Citron & Pasquale, 2014).

The roll-out of algorithmic credit rating in India must be preceded by studies that map the possible disparate impacts of this practice and avoid some of the adverse impacts that have been experienced in other countries. Some companies have started taking individual steps to conduct grassroots-level efforts (Kaleidofin, n.d.), but a larger industry-wide effort that is supported and endorsed by the government would be useful given India's depth and diversity. The government also needs to ensure regulatory certainty, so that start-ups are cognizant of the legal ecosystem within which they are operating.

Credit rating in India is governed by the Credit Information Companies (Regulation) Act, 2005 and the regulations issued in 2006 (Government of India, 2006). The Act defines credit information as any information relating to the amounts and nature of loans, the nature of securities taken, guarantees furnished, or any other funding-based facility given by a credit institution that is used to determine the credit-worthiness of a borrower. Given the variety of data that can be analyzed using algorithms, the definition might need revisiting (Goudarzi, Hickok, & Sinha, 2018).

As per Regulation 9.5.5 of the Credit Information Companies Regulation, 2006, it is mandatory for a bank that has rejected a loan on the basis of a credit information company report to: (1) send the borrower a written rejection notice within 30 days of the decision, along with (2) the specific reasons for rejection, (3) a copy of the credit information report, and (4) the details of any credit information company that constructed the report. If the decision has been rendered by crunching data through algorithms, the results must be human-scrutable to the extent that a coherent explanation can be provided.
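A disparate-impact study of the kind recommended above could start from a simple screening statistic. The sketch below computes a disparate-impact ratio over hypothetical loan decisions for two neighbourhoods that proxy for group membership, as in the redlining example; the data and the 80% ("four-fifths") threshold, borrowed from US employment-testing practice, are illustrative only and are not a statement of Indian law:

```python
# Illustrative disparate-impact screen over hypothetical loan decisions.
def approval_rate(decisions):
    """Fraction of decisions that were approvals (1 = approved)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(decisions_group_a, decisions_group_b):
    """Ratio of the lower group's approval rate to the higher group's."""
    ra = approval_rate(decisions_group_a)
    rb = approval_rate(decisions_group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical decisions for two neighbourhoods (invented numbers).
inner_city = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # 30% approved
suburb     = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]   # 80% approved

ratio = disparate_impact_ratio(inner_city, suburb)
flagged = ratio < 0.8  # fails the illustrative four-fifths screen
```

A screen like this only flags a gap; it does not explain it. That is why the section argues for studies, audits, and explanation duties rather than a single numeric test.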


Improving Crop Yields for Farmers

A variety of initiatives have been taken by the government, in collaboration with large technology companies, to equip farmers with more accurate information on weather patterns and ideal sowing dates for the generation of optimal crop yields (Gurumurthy & Bharthur, 2019).

IBM's Internet of Things (IoT) platform has been used in many states in collaboration with NITI Aayog – the Indian government's development think tank. The technology uses a "data fusion" approach which aggregates remote sensing meteorological data from The Weather Company, which is affiliated with IBM, along with satellite and field data (NASSCOM, 2018). In the state of Andhra Pradesh, Microsoft has collaborated with ICRISAT to develop an AI sowing app powered by the Microsoft Cortana Intelligence Suite. It sends advisories to farmers, providing them with information on the optimal date to sow by sending them text messages on their phones in their native languages. The government of Karnataka has signed an MoU with Microsoft to use predictive analytics for the forecasting of commodity pricing (UN ESCAP, 2019).

Despite being critical to India's economic development, the Indian agricultural sector continues to face a vast array of challenges (Indian Express, 2018). Some of them are associated with labor and resources, including migration to urban areas, overuse of groundwater, access to viable and quality seeds, a lack of balance in the use of fertilizers, and storage; some with infrastructure, including a lack of access to reliable credit, marketplaces, and technologies such as the Internet; and some with information, including a lack of access to reliable information about weather, markets, and pricing (Nayak, 2015). There is information asymmetry in price modelling and forecasting, as well as in weather and sowing conditions. In Karnataka specifically, the agricultural sector is characterized by a combination of drought-prone regions and areas that receive abundant irrigation (Deshpande, 2002). Compared to other states, Karnataka distinctively comprises a disproportionately large share of drought-prone areas (Deshpande, 2002). Farmer distress in Karnataka typically arises out of stress factors such as uncertainty in climatic factors and crop prices (Deshpande, 2002). These conditions have often induced farmers to take miscalculated steps that result in onerous debts and sheer inability to meet family requirements (Deshpande, 2002). In addition, a study conducted in 2002 by the Karnataka State Agricultural Prices Commission identified that a large section of farmers (71%) did not end up selling their yield through regulated markets (Chatterjee & Kapur, 2016). This was because of an acute lack of knowledge (8%) of regulated markets (Chatterjee & Kapur, 2016).

Data-driven decision-making was pursued by the state governments of both Andhra Pradesh and Telangana to address this specific gap (UN ESCAP, 2019). The implementation of the MoU was initiated through the development of an AI sowing app powered by the Microsoft Cortana Intelligence Suite, reported on June 9, 2016 (Reddy, 2016). Cortana Intelligence helps increase the value of data by converting it into readily actionable forms (Heerdt, n.d.). This facilitates the expedient availability of information in achieving innovative outcomes within the agricultural industry. Using this intelligence, the app was able to interface


with models to forecast weather prepared by Where Inc. – a software company in the US. The app used extensive data mapping, including rainfall over the past 45 years in the Kurnool District (IANS, 2016; Reddy, 2016). The information was combined with data collected in the Andhra Pradesh Primary Sector Mission, popularly known as the Rythu Kosam Project (ICRISAT, n.d.). Launched with the objective of promoting productivity in the primary sector, the project involved the collection of household survey data relating, among other things, to crop yields (Charyulu, Shyam, Wani, & Raju, 2017). The combined data was downscaled in order to enable forecasting that could guide farmers in identifying the ideal week for the purpose of sowing (IANS, 2016).

The datasets considered relevant for the AI solution include yield-related information, weather, sowing area, and production. Part of the data was manually collected from farms in 13 districts in Karnataka by field officers deployed by ICRISAT during the aforementioned Rythu Kosam Project. The information was made available to Microsoft's Azure Cloud (Express Web Desk, 2017) and subsequently downscaled to the village level in order to achieve the greatest possible precision, which was particularly useful for farmers in improving their decision-making capabilities. The machine learning software acquired by ICRISAT includes Cortana Intelligence and a personalized village advisory dashboard that uses business intelligence tools, both of which are prepared by Microsoft (ICRISAT, 2017).

In the pilot implemented in Andhra Pradesh, the sowing period was estimated on the basis of datasets concerning the climate of the Devanakonda area in Andhra Pradesh, historically spanning a period of 30 years (1986–2015) (ICRISAT, 2017). The estimation involved computing data to forecast a future moisture adequacy index (MAI) based on data concerning daily rainfall, which was accumulated and reported by the AP State Development Planning Society (ICRISAT, 2017).

However, there were infrastructure-related hurdles to the successful implementation of both projects. As of December 2017, the overall Internet penetration in India was around 64.84% (20.26% in rural areas) (Agarwal, 2018). This meant that the AI intervention had to be very targeted. Since 77% of the bottom quintile owned a mobile phone (Bhattacharya, 2016), the output needed to be sent as text messages and not through an app that required the user to have a smartphone.

NITI Aayog reported that in both Karnataka and Andhra Pradesh there was an increase in crop yield of between 10% and 30% due to the ICRISAT sowing advisory app (NITI Aayog, 2018). As a result of the MoU, the government can reportedly get price forecasts for essential commodities three months in advance in order to decide the minimum support price (IANS, 2017). The first impact assessment, conducted in Devanakonda Mandal in Andhra Pradesh, reflected a significant increase (30%) in yield per hectare for farmers using the app (ICRISAT, n.d.). However, there are no publicly available reports on a holistic impact assessment of this project. Furthermore, the calculations undertaken to arrive at the 10–30% increase have also not been furnished.
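The report does not publish ICRISAT's exact formula, but a moisture-adequacy-style calculation can be sketched as follows. Here the MAI is approximated as weekly rainfall divided by potential evapotranspiration (PET), capped at 1.0, and the advisory picks the first forecast week whose index crosses an assumed threshold; all figures and the threshold are hypothetical:

```python
# Simplified, illustrative moisture adequacy index (MAI) sketch.
def weekly_mai(rainfall_mm, pet_mm):
    """MAI per week: rainfall / PET, capped at 1.0 (PET assumed > 0)."""
    return [min(r / p, 1.0) for r, p in zip(rainfall_mm, pet_mm)]

def recommended_sowing_week(rainfall_mm, pet_mm, threshold=0.5):
    """1-based index of the first week whose MAI >= threshold, else None."""
    for week, mai in enumerate(weekly_mai(rainfall_mm, pet_mm), start=1):
        if mai >= threshold:
            return week
    return None

# Hypothetical six-week forecast around monsoon onset (mm per week).
rain = [4.0, 9.0, 18.0, 31.0, 42.0, 38.0]
pet  = [40.0, 41.0, 42.0, 44.0, 45.0, 44.0]

week = recommended_sowing_week(rain, pet)
```

In the pilot, the equivalent of `week` would be downscaled to the village level and delivered as a text-message advisory rather than computed on the farmer's device.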


Section III:
Regulatory Interventions
To determine the optimal levels of regulation, we have arrived at a set of principles that enable the policymaker to define how the solution can work in consonance with existing values and constitutional frameworks as applicable to emerging economies. Transformative constitutionalism is a new brand of scholarship in comparative constitutional law, which celebrates the crucial role of the state and the judiciary in bringing about emancipatory change and rooting out structural inequality. Originally conceptualized as a Global South concept (Christiansen, 2011), designed as a counter-model to the individual rights-driven model of Northern constitutions, the idea has since been traced by scholars to emancipatory provisions in several Western constitutions, such as Germany's (Hailbronner, 2017). India's Constitution is one such example. The origins of constitutional order in India were designed to "bring the alien and powerful machine like that of the state under the control of human will" (Khilnani, 2004) and to eliminate the inequality of "status, facilities, and opportunities" (Kannabiran, 2012).

Therefore, a transformational approach necessarily considers the power asymmetries between the decision maker, implementer, and affected party, respectively. The questions for guiding regulation are an entry point for remedying the inherent asymmetries that play out in a variety of contexts.

As public authorities begin to adopt AI into decision-making processes for public functions, and begin to determine the ideal form of intervention(s), the extent to which, and the way in which, decision-making capabilities can be and are delegated to AI needs to be questioned from the perspective of their transformative impact on justice, civil liberties, and human rights.

A framework of high-level articulation of values and guiding questions can help to guide these determinations. We curated the values based on an assessment both of India's constitutional ethos and an evaluation of values and rights that might inherently be tested by, and therefore need to be explicitly protected when there is, algorithmic decision making. This section contains an explanation of how we selected these questions and how they protect these values. It then goes on to draw out what an illustrative regulatory strategy might look like in response to these questions.

Agency

Across jurisdictions, the concept of inherent dignity is connected to human agency – the capacity to make choices as one deems fit and pursue one's conception of a healthy life. Dignity reflected in agency does not require a specific set of criteria to define itself (Rao, 2013). It focuses on human capacities such as individuality, rationality, autonomy, and self-respect, and eschews focusing on the exercise of these traits (Rao, 2013). The Supreme Court of India has recognized the importance of the principle of autonomy in our constitutional schema and held that no discrimination by the state can undermine the personal autonomy of an individual (Bhatia, 2017).9 Of the instruments demarcating ethical uses of AI, 69% have adopted a principle of human control. This essentially requires that key decisions delegated to AI remain under human review with a "human-in-the-loop" (Fjeld, Achten, Hilligoss, Nagy, & Srikumar, 2020).

Where stakeholders have sufficient agency to inform their use or interaction with AI, there is a presumption of limited regulatory intervention required. The less the agency of a stakeholder in dealing with AI, the greater the regulatory intervention needed.

9. Naz Foundation vs NCT of Delhi, (2009) 160 DLT 277 (High Court of Delhi). (“The grounds that are not specified in Article 15 but are analogous to those specified therein will be those
which have the potential to impair the personal autonomy of an individual… Section 377 IPC in its application to sexual acts of consenting adults in privacy discriminates a section
of people solely on the ground of their sexual orientation which is analogous to prohibited ground of sex.”), see Tarunabh Khaitan, ‘Reading Swaraj into Article 15: A New Deal for the
Minorities’ (2009) 2 NUJSLR 419


Explanation

If adoption of an AI solution is mandatory, individual autonomy is immediately surrendered and the state determines the contours of individual agency. This is happening at present with the mandatory adoption of contact-tracing applications in light of the COVID-19 pandemic (Agrawal, 2020). During times of emergency or otherwise, if the state limits individual autonomy, then unique regulatory solutions that check the powers of the state must be deployed.

For AI solutions such as predictive policing, the primary users are state agents attempting to discharge their functions, whereas the impacted party is someone who is identified and evaluated by algorithmic decision-making. However, in the case of farmers receiving weather alerts, the farmer is both the primary user and the impacted party. To use another example, if the marketing and sales wing of a company uses sentiment analysis to analyze the user reviews of its products, the primary user, as well as the beneficiary or adversely impacted party of the analysis, is the company itself. On the other hand, if the same techniques are used for the assessment of college application essays, the primary user is the university, but the parties who have to bear its adverse impact are the student applicants. Such a distinction must be made to determine whether the potential risk of the algorithmic system is being borne by the stakeholders who choose to use it or by other stakeholders who become unwitting victims of risks undertaken by others, as this influences the impacted individual's ability to question the outcome or seek redress. Where parties choose to use systems marked by opacity and risk for commercial gains, there is a strong argument for regulatory restraint, unless the risks of such opaque decisions begin to percolate to others. In cases where the primary user and the impacted party are the same, there is some opportunity for the user to play a role in deciding whether the inferences are used or not. In cases where they are not the same, the impacted party has no agency in this decision-making, and the further removed their role is, the more their ability to question the decision decreases once it is delegated to an algorithm.

Questions

The following questions can help guide determinations of agency:

• Is the adoption of the solution mandatory?
• Does the solution allow for end-user control?
• What is the relationship between the primary user and the impacted party?

Recommended Regulatory Strategy

Adoption of the solution must be made mandatory only in exceptional circumstances. Compelling a farmer to adopt a technological solution constrains choice and undermines agency. Through primary legislation or judicial decisions, we recommend that all states ensure that government entities at all levels adopt clear parameters for when any technological solution can be made mandatory. These must ensure that: (1) there is a pressing need in the public interest, (2) there is no reasonably available alternative, and (3) adequate measures of compensation, oversight, and grievance redressal are provided.

Even if the adoption of the solution is not mandatory, the power asymmetry between the user and the impacted party needs to be closely considered. Where the power asymmetry is vast, such as police using AI to conduct surveillance in certain areas without the knowledge or consent of the people impacted, there needs to be far greater regulatory scrutiny. Ideally, this scrutiny should be multi-stakeholder, and civil society groups, especially those representing vulnerable communities, should be allowed to exercise vigilance by inputting into the design of the project before it is launched, auditing evaluation reports, engaging with targeted populations, and providing input as the project progresses. Furthermore, training must be mandated for the public servants implementing the solution, thereby enabling them to understand the socio-economic complexities of those with whom they are engaging. Marda and Narayan observed a lack of sensitization and empathy in the case of the Delhi police dealing with vulnerable communities (Marda & Narayan, 2020a), while Kar observed the same with loan officers passing judgement on "credit-


worthiness” (Kar, 2018a). Appropriate grievance 2011). Cheney-Lippold argues that algorithmic agents
redressal mechanisms that provide access for create identities for us on their own terms, rarely
the vulnerable must be created. This should all be with input from the subjects of the algorithm itself
mandated through a top-down policy that is devised (Cheney-Lippold, 2017) and terms this construction a
by the central government and made applicable to all measurable (a data equivalent of Weber’s ideal type)
government entities thinking of adopting AI solutions construct of conceptual purity that does not occur
that have a great disparity between the end user and in reality (Cheney-Lippold, 2017). Moreover, Rouvroy
impacted party. argues that the operation of the algorithm in terms
of mathematical precision ignores the embodied
Equality, Dignity, and Non-discrimination individual and replaces him with a datafied substrate
that can in no way capture the complexities of his
Background and Explanation character (Rouvroy, 2013). This leads to mathematical
Human dignity is a core value recognized the world conclusions on the features of a certain group that
over, which the state should guarantee. In the Indian might not reflect reality. Yet, the datafied substrate,
Constitution, dignity is mentioned in the Preamble and replete with assumptions compounded by hidden
nowhere else. However, the Supreme Court has used layers, is used for making targeted decisions.
the inclusion of the concept in the Preamble to interpret
the guarantee of life and personal liberty to include a These ramifications are amplified in the case of
variety of traits associated with dignity. These include minorities and other vulnerable communities.
not only the bare necessities of life such as adequate Algorithmic discrimination has been a concern among
nutrition, clothing, and shelter but also facilities for both legal experts and technologists for some time.
reading, writing, expressing oneself, and interacting with Hao explains three phases at which some form of
other human beings without fear (Mullin v And’r, Union algorithmic bias might play out (Hao, 2019). The first
Territory of Delhi, 1981). stage comes with the framing of the problem. As
soon as developers create a deep-learning model,
When algorithms model and predict human behavior, they decide what output they want the model to
there are important implications for the dignity of the provide and the rules needed to achieve this output.
individuals targeted. Modelling of human behavior However, as discussed earlier, notions of “credit-
includes use cases where the intent is either to predict worthiness”, “recruitability”, “suspicious”, or “at risk” are
or understand the activities, motivations, or proclivities often subject to cognitive bias. This makes it difficult
of human beings. This is true even for cases where to devise screening algorithms, which fairly portray
the intent is not to model human behavior but the society and the conglomeration of identities, and power
clear implication is on decisions taken regarding asymmetries that define it (Basu, 2019).
human beings, due to systemic factors involved in
data collection and labelling, use of algorithms, and The second stage is the data collection phase. As we
impact of inferences, etc. As an individual’s data is saw with the predictive policing setup in Delhi, often
manipulated and formatted to extract a pattern about data does not adequately represent reality. As crime
that individual’s world, the individual or their data no rates are determined based on the number of calls
longer exists for itself (Cheney-Lippold, 2017), but are that come into the Delhi Police call center, the quality
massaged into various categories. Amoore terms this of the dataset is highly dependent on how seriously
a “data-derivative”, which is an abstract conglomeration the receiver takes each call (Marda & Narayan, 2020a).
of data that continuously shapes our futures (Amoore, Calls from women from lower socio-economic groups

126
Regulatory Interventions For Emerging Economies Governing The Use Of Artificial Intelligence In Public Functions

alleging sexual violence are often not taken seriously (Marda & Narayan, 2020a). A related problem is that datasets that are well curated and readily available are often very limited. For example, the data used for Natural Language Processing Systems for Parts of Speech (POS) tagging in the US come from popular newspapers such as The Wall Street Journal. However, accuracy of these datasets would decrease if the speech used by Wall Street Journal writers were applied to individuals or ethnic minorities who speak with a very different style (Blackwell, 2015).

The final stage is that of data preparation, where the developer selects the parameters which they want the algorithm to consider. For example, when determining credit-worthiness, the candidate’s type of employment might be a parameter. It could be argued that someone working in the informal economy may be less likely to financially sustain themselves and thus would be deemed less credit-worthy. However, many individuals working in the informal economy in India are from lower caste communities (Kar, 2018a). Thus, working in the informal economy is an ostensibly neutral proxy for discriminating against a specific caste, thereby violating the right to equality when the data is being sorted during the machine learning process (Prince & Schwarcz, 2020).

The right to equality has been enshrined in several international human rights instruments and in the Equality Code of the Indian Constitution. The dominant approach to interpreting this right appears to focus on the grounds of discrimination in Article 15(1), thereby eschewing unintentional discrimination and disparate impact on certain communities. However, as Bhatia highlights (Bhatia, 2016), a few cases have considered indirect discrimination to some extent – an approach that is critical in the case of data-driven decision-making. Hence, we articulate the specific question on evaluating potential impact on minority groups, so that developers think of the potentially negative consequences of supposedly well-intentioned decisions.

Guiding Questions
The following questions help guide regulations on agency, dignity, and non-discrimination:

• Is the AI solution modelling or predicting human behavior?
• Is the AI solution likely to impact individuals or communities, in particular minority, protected, or at-risk groups?

Recommended Regulatory Strategy
If AI is modelling or predicting human behavior, the state must be compelled to justify why this is necessary and proportionate to the objective. This justification must mandatorily be provided by any entity choosing to apply AI for this purpose, and must be enforced through either legislation or executive order. If a private sector actor such as Staqu is involved in partnership with the government, it must go through a process of accreditation, which should be determined by a co-regulatory body. All projects must also go through a mandatory impact assessment that considers the possibility of disparate impact or proxy discrimination. This must be mandated through co-regulatory guidelines framed by the government in consultation with private sector actors. We believe that a co-regulatory framework with regular consultations works best if a private sector actor is involved with the technology, as the government alone might not fully understand the implications of this technology. We also recommend that the private sector actor not be involved with the final decision. For instance, with credit rating, a number of private sector firms are involved in crunching data from the traditionally financially under-served and predicting their behavior. However, the final decision to sanction or reject a loan must be taken by a loan officer from a bank.
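The proxy mechanism described above can be made concrete with a small simulation. The sketch below is our own illustration, not drawn from the report or any real dataset: in a hypothetical population where membership of a protected group correlates with informal-economy employment, a screening rule that never sees the protected attribute still produces sharply different approval rates.

```python
# Illustrative sketch of proxy discrimination; all numbers are hypothetical.
import random

random.seed(0)

# Hypothetical population: protected-group membership is correlated with
# informal-economy employment, mirroring the caste example in the text.
population = []
for _ in range(10_000):
    protected = random.random() < 0.3
    informal = random.random() < (0.8 if protected else 0.2)
    population.append({"protected": protected, "informal": informal})

def credit_screen(person):
    # An "ostensibly neutral" rule: approve only formal-economy applicants.
    # It never reads person["protected"].
    return not person["informal"]

def approval_rate(group):
    return sum(credit_screen(p) for p in group) / len(group)

protected_group = [p for p in population if p["protected"]]
others = [p for p in population if not p["protected"]]

print(f"approval rate, protected group: {approval_rate(protected_group):.2f}")
print(f"approval rate, others:          {approval_rate(others):.2f}")
# The rule mentions only employment type, yet the two approval rates diverge
# sharply: disparate impact arrives through a proxy variable.
```

The gap persists however accurate the model is, because the proxy itself carries the correlation; this is why the impact-assessment question asks about disparate impact rather than only about explicit use of protected attributes.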

Institutional and technological design development through use cases based discussion

Safety, Security, and Human Impact

The fundamental principle that guides regulatory decisions in this case is that of safety, security, and human impact. Where the use of AI has the potential for direct, adverse, or large-scale human impact, greater regulatory intervention is required. In the Berkman-Klein study, safety and security of AI systems are present in 81% of documents espousing ethical AI (Fjeld, Achten, Hilligoss, Nagy, & Srikumar, 2020). Therefore, the following broad questions need to be asked:

• Is there either a high likelihood or high severity of potential adverse human impact of the AI solution?
• Can the likelihood or severity of adverse impact be reasonably ascertained with existing scientific knowledge?

While we acknowledge that both likelihood and severity of impact, and the risks posed therein, are contextual, we believe that certain trends are worth noting. When AI systems model human behavior, they are much more likely to have an impact on the human beings in question, or on those who may be seen by the algorithm as belonging to the same group or category. An AI solution that could cause greater harm if applied erroneously, such as one deployed for predictive policing, should be subject to more stringent standards, audits, and oversight than an AI solution designed to create a learning path for a student in the education sector. There could also be cases where the behavior being modelled is not human, yet it could lead to significant human impact. For instance, an AI system that makes predictions about weather or environmental factors does not model human behavior but could be used to make assessments that directly impact human beings.

When considering the impact, it is imperative to look at both the severity and likelihood of the adverse impact. A high “likelihood” of harm indicates a high probability of the human rights, quality of life, and core value clusters being negatively impacted due to multiple pre-deployment factors, such as corrupted data sets or lack of awareness among users. Scale of harm indicates the extent of impact, which is determined by factors such as the number of individuals impacted, while severity of harm can be determined by aspects such as clamping down on civil liberties or causing socio-economic distress.

In some cases, the likelihood of the adverse impact on human beings may be low, yet in the remote eventuality that it does lead to an adverse impact, its severity could be very high. Examples include the use of autopilot systems in aircraft navigation, or controlled trials where the number of people impacted is limited. Attention to both aspects of risk is essential, as justifications for risky systems are often based on low likelihood alone. However, even in cases where there is a low likelihood of human harm, if the severity is high enough, it may still call for greater regulatory scrutiny.

In situations where the likelihood or severity of harm cannot be reasonably ascertained, we recommend adopting the precautionary principle from environmental law and suggest that the solution not be implemented until scientific knowledge reaches a stage where it can reasonably be ascertained (Kriebel, et al., 2001).


Regulatory Strategy
The following table contains a list of possible impact scenarios and regulatory strategies.

A) High Likelihood, High Severity
Explanation of outcome: Scenarios where the state is involved in predicting human behavior (predictive policing, credit rating, predicting school dropouts) but training data is incomplete and a thorough impact assessment has not been conducted.
Recommended regulatory strategy: Ban or proscribe until underlying issues are solved to reduce likelihood of harm. If likelihood or severity cannot be gauged, then the solution must not be deployed.

B) Low Likelihood, High Severity
Explanation of outcome: Scenarios where training data is robust but individuals relying on the use case (flood prediction, crop price forecasting) may face dire economic consequences if the solution works incorrectly.
Recommended regulatory strategy: State-run human rights impact assessment that externally verifies compliance.

C) High Likelihood, Low Severity
Explanation of outcome: Possible in pilot cases where data, methodology, and funding are not yet clear and safeguards have not been appropriately devised, or where AI is not directly impacting civil liberties or socio-economic rights (traffic management).
Recommended regulatory strategy: Strong redressal mechanisms that enable even one impacted individual to receive compensation, particularly if the initial estimation of severity is too low.

D) Low Likelihood, Low Severity
Explanation of outcome: Where data is robust; methodology, troubleshooting, and outreach have been clearly devised; and the use case is not directly impacting civil liberties or socio-economic rights.
Recommended regulatory strategy: Possible regulatory forbearance, with strong industry-driven codes for standardization, evaluation, and redressal if the private sector is involved.

Table 2: Impact thresholds
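The thresholds in Table 2 amount to a simple decision rule. As an illustrative sketch only (the function name, enum labels, and strategy strings are our shorthand, not a prescribed standard), the mapping could be encoded as:

```python
# Sketch of the Table 2 impact-threshold logic; labels are illustrative.
from enum import Enum

class Level(Enum):
    LOW = "low"
    HIGH = "high"
    UNKNOWN = "unknown"  # cannot be reasonably ascertained

def regulatory_strategy(likelihood: Level, severity: Level) -> str:
    """Map a (likelihood, severity) assessment to a recommended strategy."""
    # Precautionary principle: if either dimension cannot be ascertained,
    # do not deploy until scientific knowledge allows an assessment.
    if Level.UNKNOWN in (likelihood, severity):
        return "do not deploy (precautionary principle)"
    if likelihood is Level.HIGH and severity is Level.HIGH:
        return "ban or proscribe until underlying issues are solved"
    if likelihood is Level.LOW and severity is Level.HIGH:
        return "state-run human rights impact assessment, externally verified"
    if likelihood is Level.HIGH and severity is Level.LOW:
        return "strong redressal mechanisms for impacted individuals"
    return "possible regulatory forbearance with industry-driven codes"

# Example assessments drawn from the scenarios in Table 2:
print(regulatory_strategy(Level.HIGH, Level.HIGH))    # predictive policing, incomplete data
print(regulatory_strategy(Level.LOW, Level.HIGH))     # flood or crop price forecasting
print(regulatory_strategy(Level.UNKNOWN, Level.HIGH)) # impact not yet ascertainable
```

Note that the UNKNOWN branch is checked first: under the precautionary principle, an unascertainable risk dominates any known one.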


Accountability, Oversight, and Redress

Background and Explanation
This principle attempts to grapple with two challenges to fostering accountability. The first challenge lies in the delegation of human decision making at some level to an algorithm, which creates an algorithmic “black box” through which inputs are processed and outputs are generated (Pasquale F., 2015). A certain level of transparency is key to fostering accountability frameworks for algorithmic decision-making. Any algorithmic decision-making framework in the public sector should reasonably be able to explain its decision to anyone impacted by its working. However, there may be a trade-off between the capacity or complexity of a model and the extent to which it can render a reasonably understandable explanation (Oswald, 2018).

Retrospective adequation is a legal standard we propose to promote algorithmic accountability (Sinha & Mathews, 2020). Essentially, this means that whenever inferences from machine learning algorithms influence decision making in public functions, they can do so only if a human agent is able to look at the existing data and discursively arrive at the same conclusion. This differs from the right to explanation under the General Data Protection Regulation (GDPR), which only includes “meaningful information about the logic involved, as well as the significance of the envisaged consequences of processing”,10 and which, as opposed to retrospective adequation, does not tell us how an inference has been reached. This approach essentially draws from standards of due process and accountability evolved in administrative law, where decisions taken by public bodies must be supported by recorded justifications. Since the Maneka Gandhi vs Union of India judgment in 1978, the Supreme Court of India has clearly espoused the idea of both procedural and substantive fairness. A further extension of this principle is the need for administrative authorities to record reasons to exclude or minimize arbitrariness (A Vedachalal Mudaliar v State of Madras, 1952). In some jurisdictions such as the UK and US, there are statutory obligations that require administrative authorities to give reasoned orders.11 While there is no such corresponding statutory provision in India, the case law is fairly instructive in imposing similar obligations on quasi-judicial authorities (Travancore Rayons v Union of India, 1971; Siemens Engineering and Manufacturing Co. of India v Union of India, 1976). As Pasquale argues, explainability is important because reason-giving is intrinsic to the judicial process and cannot be jettisoned on account of algorithmic processing (Pasquale, F.A., 2017). The same principles equally apply to all administrative bodies, as it is a well-settled principle of administrative law that all decisions must be arrived at after a thorough application of mind. Much like a court of law, these decisions must be accompanied by reasons to qualify as a “speaking order”. Where administrative decisions are informed by an algorithmic process opaque enough to prevent this, the next logical question is whether a system can be built in such a way that it flags relevant information for independent human assessment to verify the machine’s inferences. Only then will the requirements of what we call a speaking order be in any position to be satisfied.

Our assessment of opportunity for human supervision is based on the idea that where inferences are inherently opaque, the system must provide sufficient information about the model and data analyzed, such that a human supervisor is in a position to apply analogue

10. Art.15 GDPR
11. Section 12 of the (UK) Tribunals and Enquiries Act, 1958; Section of the (US) Federal Administrative Procedural Act, 1946

modes of analysis to the information available in order to conduct an independent assessment. For instance, where AI systems are used to detect hate speech for takedown from online platforms, it is possible to make the inferences available to a human supervisor who can apply her mind independently to the speech in question, based on legal rules and standards on hate speech and relevant contextual information.

The increased role of the private sector in designing and deploying AI systems poses a challenge. As established earlier, there remains no clear threshold for demarcating public functions from private ones. With an increase in for-profit private actors playing a role in the discharge of functions that may be public, a liability mechanism that enables redress for adversely impacted individuals needs to be thought through. A potentially thorny issue may be the proprietary nature of the source code, which the private sector developer may not want to share. This makes it imperative to think of unique regulatory interventions to constrain the private sector actor within the framework of the rule of law. This is particularly significant for start-ups, such as those involved in credit rating, who want to do “social good” but do not have the financial resources or bandwidth to create their own voluntary compliance strategy. Therefore, regulatory certainty that clearly demarcates scope of activity, liability, and evaluation metrics for private sector actors is vital.

The following questions help determine accountability, oversight, and redress:

• To what extent is the AI solution built with human-in-the-loop supervision prospects?
• Are there reliable means for retrospective adequation?
• Is the private sector partner involved with either the design of the AI solution, its deployment, or both?

Smart Regulation Strategy
Since an empirical mapping of the potential loopholes in AI implementation across India’s socio-economic demographics does not exist, all AI solutions must be built with human-in-the-loop supervision. Essentially, this means that while AI can aggregate and analyze data on a certain issue, the final decision will need to be taken by a human being. As our case studies showed, human bias in decision-making was prevalent well before machine learning came into the picture. However, human beings can be questioned, engaged with, and held accountable through legal proceedings – something that cannot be done with an AI system. In addition, human beings also retain the flexibility to make broader policy interventions. For example, if it is observed that crime rates are higher among a certain community, instead of merely trying to stamp out crime, a human being might try to identify the root cause of the crime, which might lie in higher rates of unemployment or poverty in the area. Therefore, they may look to intervene by devising social welfare programs instead of merely conducting enhanced surveillance. As such, human-in-the-loop must be made mandatory through top-down legislation.

Retrospective adequation is necessary for imposing accountability on AI systems discharging public functions and impacting citizens’ rights. We recommend the evolution of technical standards from the private sector actors operating in India, which are then discussed and affirmed by a co-regulatory body such as the Bureau of Indian Standards.

If a private sector actor is involved with the design or deployment of the AI solution, then it must first be considered whether the activity in question falls within a reasonable and contextual understanding of a “public function”. It is clear that private sector actors should

not deploy solutions when it comes to three core governmental functions: foreign relations, any form of violence or provision of security, and legislation. This essentially means that once the final decision is taken, any follow-up action must be decided and acted upon by a government entity.

Actors such as Staqu are involved in the design and development of the AI solution, even though the police implement the recommended outcome. Moreover, cases of public service delivery that have clear implications for the realization of the right to life could be considered public functions. Either the state or the private actor must be held liable if rights are violated in the process. To encourage private actors to participate, the state may choose to soak up some of the liability for damages. However, clear mechanisms for assignment of liability must exist – something that was not done for Microsoft’s partnership with the government of Karnataka. In such cases, consistent obligations must be imposed on the private sector. To this end, we recommend:

• Clearly drafted contracts with private sector developers that specify modes of liability and the nature and frequency of audits and impact assessments, as well as clarification that their source code and training data may need to be made public if the algorithmic decision-making is challenged in a court of law.
• Internal decision-making processes within the organization must be scrutinized for conformity with constitutional standards and human rights.
• The organization must ensure that it will not interfere with core government decision-making processes, such as deciding when to use violence in the interest of public order.
• In cases where private actors are involved with any function that violates civil and political or socio-economic rights, and an aggrieved individual(s) challenges the violation in a court of law, the court must treat this as a “public function” and hold the private sector actor to the same level of scrutiny as the government. If the government wants to shield the private actor from this liability, then it must be explicitly stated in the contract. These contracts must also be made public.
• The private sector actor must provide the needed capacity building to public sector actors to ensure they can understand the functioning and outputs of the system.

Privacy and Data Protection

Explanation
It is often argued that for emerging economies, the right to privacy should take a backseat to development. However, as we have highlighted in this paper, the poor and vulnerable are the most likely to have their civil liberties infringed by data-driven decision-making. When affirming the right to privacy as a fundamental right, the Indian Supreme Court strongly rebutted this, arguing that civil and political rights are important for every individual regardless of income (K. Puttaswamy v Union of India, 2017). The Court also affirmed that placing socio-economic rights over civil and political rights has been done away with by constitutional courts. Since this judgement in 2017, India has sought to formulate a data protection law – tabling a bill in Parliament in December 2019 (Basu & Sherman, 2020). While the obligations on private data processors in the bill are similar, it does some disservice to individual rights by granting the government a wide range of exceptions.


Section 35 states that exceptions can be made to collection rules, reporting requirements, and other requirements whenever the government feels that it is “necessary or expedient” in the “interests of sovereignty and integrity of India, national security, friendly relations with foreign states, and public order”. The “necessary and expedient” standard replaces the “necessary and proportionate” standard laid down by the Puttaswamy judgement and reflected in a previous version of the bill tabled by the Justice B.N. Srikrishna Committee.

Another concern has been the bill’s treatment of non-personal data (Basu & Sherman, 2020). Section 91(2) states that the government is allowed to direct data collectors to hand over anonymized personal information or other “non-personal data” for the purpose of “evidence-based policy making”. Non-personal data is defined with little clarity as anything that is not personal data. There has been a policy push towards channelizing as much data as possible towards social and economic development. The draft e-commerce policy defined data as “community data” to be owned and used for the benefit of all Indians (Government of India, 2019). On the other hand, chapter four of the Economic Survey treats data as a “public good”, with no analysis of how this framework protects privacy rights. These concerns have been amplified as a result of the COVID-19 pandemic, where Indian citizens are being compelled to surrender personal data to the state through a contact-tracing app that has now become mandatory to download. Privacy is the most widely protected value across AI instruments – present in 97% of documents identified by the Berkman-Klein study (Fjeld, Achten, Hilligoss, Nagy, & Srikumar, 2020).

Questions
• Does the AI solution collect, use, and/or share personal data, even in anonymized form?
• Can the identity of an individual be ascertained even if the system is not directly collecting or using personal information?

Regulatory Strategy
Whenever personal data is processed, there must be a national data protection law that demarcates user rights and redressal mechanisms in case of violations by both government and private sector actors. A specialized tribunal dealing with grievances under this law may be a co-regulatory, multi-stakeholder endeavor with representatives from government, the private sector, and civil society. However, its decisions must be binding and enforced through primary, hierarchical legislation.

Applying Regulatory Strategy to the Studied Use Cases

The following tables apply the regulatory strategies to the facts in the studied use cases. While not exhaustive, they indicate ways in which smart regulation that intervenes based on the guiding questions can arrive at a comprehensive regulatory strategy that mitigates potential harms while enabling innovation. The regulatory interventions described in these tables are by no means an exhaustive framework that adequately tackles all systemic issues that some of these use cases may raise. Instead, they should act as illustrative guidelines that can help policymakers devise targeted interventions while simultaneously tackling larger societal questions and challenges through widespread structural changes.
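The guiding questions developed across the preceding sections can be read as a checklist. The sketch below encodes them as one, purely as an illustration of how a developer or regulator might operationalize the framework; the field names and intervention strings are our own shorthand, not an official schema.

```python
# Illustrative checklist derived from the guiding questions in this paper.
from dataclasses import dataclass

@dataclass
class UseCase:
    models_human_behavior: bool
    may_impact_vulnerable_groups: bool
    processes_personal_data: bool
    human_in_the_loop: bool
    retrospective_adequation_demonstrated: bool
    private_sector_involved: bool

def interventions(uc: UseCase) -> list[str]:
    """Return candidate regulatory interventions for a use case."""
    out = []
    if uc.models_human_behavior:
        out.append("necessity-and-proportionality justification from the state")
    if uc.may_impact_vulnerable_groups:
        out.append("mandatory impact assessment for disparate impact")
    if uc.processes_personal_data:
        out.append("compliance with a national data protection law")
    if not uc.human_in_the_loop:
        out.append("redesign: final decision must rest with a human being")
    if not uc.retrospective_adequation_demonstrated:
        out.append("demonstrate retrospective adequation before deployment")
    if uc.private_sector_involved:
        out.append("accreditation and contract specifying liability and audits")
    return out

# A predictive-policing-style use case, following the analysis above:
policing = UseCase(
    models_human_behavior=True,
    may_impact_vulnerable_groups=True,
    processes_personal_data=True,
    human_in_the_loop=True,
    retrospective_adequation_demonstrated=False,
    private_sector_involved=True,
)
for item in interventions(policing):
    print("-", item)
```

The point of the sketch is that the questions compose: each answer switches a distinct intervention on or off, which is exactly how the use-case tables are organized.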


Regulatory interventions for predictive policing

Value: Agency

Question: Is adoption of the solution mandatory?
Predictive policing: Mandatory for all police officers depending on the decision made by police chief functionaries, and mandatory for individuals that the police decide to use the solution on.
Regulatory intervention: Regular consultation and feedback from all levels within the police hierarchy, in particular officers who directly engage with victims on the ground and the public. Notice to individuals when a decision about them has been taken using an AI system. Human rights impact assessment.

Question: Does the solution allow for end-user control?
Predictive policing: Yes, as the police officer using it is the end user.
Regulatory intervention: N/A

Question: Is there a vast disparity between the primary user and the impacted party?
Predictive policing: Yes, between police officers and suspected criminals.
Regulatory intervention: Mandatory certification for all police officers both working with the algorithm and implementing it on the ground (through notification). Statistical standards for accuracy. Evidentiary weight of decisions informed by an AI system.

Table 3a: Regulatory interventions for predictive policing


Value: Equality, Dignity, and Non-Discrimination

Question: Is the AI solution modelling or predicting human behavior?
Predictive policing: Modelling criminality.
Regulatory intervention: Needs assessment from the decision maker on why modelling human behavior is proportionate to the objective of reducing crime, also demonstrating why no other reasonable alternatives exist.

Question: Is the AI solution likely to impact minority, protected, or at-risk groups?
Predictive policing: Possible disparate impact.
Regulatory intervention: Awareness, sensitization, and creation of grievance redressal mechanisms and anti-discrimination regulations protecting vulnerable groups.

Value: Safety, Security, and Human Impact

Question: Is there a high likelihood or high severity of potential adverse human impact as a result of the AI solution?
Predictive policing: Possible high likelihood and high severity, unless data collection practices are improved.
Regulatory intervention: Proscription of the solution until data curation and analysis is improved and standardized. The use of the system should be guided by the principles of necessity, proportionality, and least intrusive means. Compliance with international security standards.

Question: Can the likelihood or severity of adverse impact be reasonably ascertained with existing scientific knowledge?
Predictive policing: Yes, through empirical research.
Regulatory intervention: Government and the private sector should undertake regular empirical assessments of potential impact.

(Cont.) Table 3a: Regulatory interventions for predictive policing


Value: Accountability, Oversight, and Redress

Question: To what extent is the AI solution built with “human-in-the-loop” supervision prospects?
Predictive policing: Human-in-the-loop exists.

Question: Are there reliable means for retrospective adequation?
Predictive policing: No publicly available information.
Regulatory intervention: The private actor involved should mandatorily demonstrate the possibility of retrospective adequation.

Question: Is the private sector partner involved with either the design of the AI solution, its deployment, or both?
Predictive policing: Yes.
Regulatory intervention: Contract as described above. Final implementation of the decision should continue to be done by the police.

Value: Privacy and Data Protection

Question: Does the AI solution use personal data, even in anonymized form?
Predictive policing: Yes.
Regulatory intervention: Any data collection must comply with a national data protection law that clearly separates personal and non-personal data.

(Cont.) Table 3a: Regulatory interventions for predictive policing


Regulatory interventions for credit rating

Value: Agency

Question: Is adoption of the solution mandatory?
Credit rating: Optional for loan-providers from banks. They can potentially switch to a credit rating company that does not use AI.
Regulatory intervention: Banks should have an internal regulatory strategy on the adoption of AI. Human rights impact assessment.

Question: Does the solution allow for end-user control?
Credit rating: Yes, as the company/bank engaging in credit rating is the end-user.
Regulatory intervention: N/A

Question: Is there a vast disparity between the primary user and the impacted party?
Credit rating: Yes, there is a disparity between those generating the scores and those they are scoring.
Regulatory intervention: Self-regulation: loan officers and credit rating companies should communicate clearly to potential candidates the decision-making process, how AI is being used, and possible implications.

Value: Equality, Dignity, and Non-Discrimination

Question: Is the AI solution modelling or predicting human behavior?
Credit rating: It is determining “credit-worthiness”.
Regulatory intervention: Mandatory needs assessment from the bank clarifying why algorithmic decision-making is more accurate than traditional credit scoring methods, as well as full transparency on the data being used and curation methods.

Question: Is the AI solution likely to impact minority, protected, or at-risk groups?
Credit rating: Possible disparate impact.
Regulatory intervention: Awareness, sensitization, training, and creation of grievance redressal mechanisms targeting vulnerable groups.

Table 3b: Regulatory interventions for credit rating


Value: Safety, Security, and Human Impact

Question: Is there a high likelihood or high severity of potential adverse human impact as a result of the AI solution?
Credit rating: Possible high likelihood and high severity.
Regulatory intervention: Mandatory pilot projects and standardization of data curation practices certified by a co-regulatory committee.

Question: Can the likelihood or severity of adverse impact be reasonably ascertained with existing scientific knowledge?
Credit rating: Yes.

Value: Accountability, Oversight, and Redress

Question: To what extent is the AI solution built with “human-in-the-loop” supervision prospects?
Credit rating: Human-in-the-loop exists.

Question: Are there reliable means for retrospective adequation?
Credit rating: No publicly available information.
Regulatory intervention: Retrospective adequation should comply with Indian credit regulations.

Question: Is the private sector partner involved with either the design of the AI solution, its deployment, or both?
Credit rating: Both.
Regulatory intervention: Contract as described above. If the private sector partner is a start-up, the state may choose to cushion some of the liability. The final decision must be independently taken by the bank sanctioning the loan.

Value: Privacy and Data Protection

Question: Does the AI solution use personal data, even in anonymized form?
Credit rating: Yes.
Regulatory intervention: Any data collection must comply with a national data protection law that clearly separates personal and non-personal data.

(Cont.) Table 3b: Regulatory interventions for credit rating


Regulatory interventions for AI in agriculture

Agency
Question: Is adoption of the solution mandatory?
Agriculture: No, farmers may opt out.
Regulatory intervention: Pros and cons of adopting the solution should be clearly communicated in an understandable format to the farmer (self-regulation).

Question: Does the solution allow for end-user control?
Agriculture: Yes, the farmer using the solution is the end-user.
Regulatory intervention: N/A

Question: Is there a vast disparity between the primary user and the impacted party?
Agriculture: No, the farmer is the end-user and feels the impact of the solution.
Regulatory intervention: A co-regulatory consultative body should be set up to organize regular consultations between the users and the developers of the project.

Equality, Dignity, and Non-Discrimination
Question: Is the AI solution modelling or predicting human behavior?
Agriculture: It is modelling crop patterns and weather data.

Question: Is the AI solution likely to impact minority, protected, or at-risk groups?
Agriculture: No; while there may be a negative impact, it is unlikely to specifically impact minorities.
Regulatory intervention: All farmers may not benefit equally from the app. Government and private sector partners must mandatorily provide training, set up pre-requisite infrastructure to the extent possible, and also study trends on why certain farmers may not be benefitting.

Table 3c: Regulatory interventions for AI in agriculture


Safety, Security, and Human Impact
Question: Is there a high likelihood or high severity of potential adverse human impact as a result of the AI solution?
Agriculture: Depending on the quality of the data curated, there is possible low likelihood and low severity.
Regulatory intervention: Mandatory pilot projects and standardization of data curation practices as certified by a co-regulatory committee.

Question: Can the likelihood or severity of adverse impact be reasonably ascertained with existing scientific knowledge?
Agriculture: Yes.
Regulatory intervention: The private sector partner could publish research on preliminary scientific studies (voluntarism).

Accountability, Oversight, and Redress
Question: To what extent is the AI solution built with "human-in-the-loop" supervision prospects?
Agriculture: Unclear.
Regulatory intervention: More public information about the working of the app should be disclosed to the public and to the farmers concerned.

Question: Are there reliable means for retrospective adequation?
Agriculture: No publicly available information.
Regulatory intervention: The private sector partner should be able to provide retrospective adequation for all decisions.

Question: Is the private sector partner involved with either the design of the AI solution, its deployment, or both?
Agriculture: Both.
Regulatory intervention: There needs to be a contract clearly imposing liability on the private sector partner in case of negligence. If the private sector partner is a start-up, the state may choose to cushion some of the liability.

Privacy and Data Protection
Question: Does the AI solution use personal data, even in anonymized form?
Agriculture: Yes.
Regulatory intervention: Any data collection must comply with a national data protection law that clearly separates personal and non-personal data.

(Cont.) Table 3c: Regulatory interventions for AI in agriculture
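Read together, Tables 3a–3c amount to a reusable checklist: each row pairs a question about a deployment with a finding and, where triggered, a regulatory intervention. The sketch below shows one way such a checklist could be encoded for programmatic review. The class and field names are our own illustrative choices, not part of the paper's framework, and the two sample rows are transcribed (abridged) from Table 3c.

```python
# Illustrative sketch only: encodes the question -> finding -> intervention
# structure of the paper's tables as data, so a use case can be walked
# through the checklist programmatically. Names are hypothetical.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ChecklistItem:
    value: str                          # e.g. "Agency"
    question: str                       # question asked of the deployment
    finding: str                        # answer for this use case
    intervention: Optional[str] = None  # regulatory response, if triggered


def interventions_for(items: List[ChecklistItem]) -> List[str]:
    """Collect the regulatory interventions triggered for a use case."""
    return [i.intervention for i in items if i.intervention]


# Two rows from Table 3c (AI in agriculture), abridged.
agriculture = [
    ChecklistItem(
        value="Agency",
        question="Is adoption of the solution mandatory?",
        finding="No, farmers may opt out.",
        intervention="Communicate pros and cons clearly to the farmer "
                     "(self-regulation).",
    ),
    ChecklistItem(
        value="Privacy and Data Protection",
        question="Does the AI solution use personal data?",
        finding="Yes.",
        intervention="Communicate that data collection must comply with a "
                     "national data protection law separating personal and "
                     "non-personal data.",
    ),
]

print(len(interventions_for(agriculture)))  # prints 2
```

A review body could maintain one such list per use case and diff the triggered interventions as the deployment evolves; this is only one possible encoding of the tables, not a prescription from the paper.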


Conclusion
The application of regulatory interventions to use cases brought up a number of similarities. While predictive policing is a core government function that could involve violence further down the line, its modus operandi, and therefore the potential threats it poses to core constitutional values, are similar to those in credit rating. The fundamental difference between these two use cases and the agricultural case study is that these involved two sets of human beings – one group being in a position of power that is attempting to predict how less powerful human beings will act. Thus, the regulatory interventions needed to optimally govern AI stem from those necessary to remedy structural injustices in society. The danger, however, in both India and other parts of the world, stems from technological solutionism, which assumes that existing societal fissures can be occluded through data-driven decision making. The reality is quite different, with data-driven decision making needing to adapt the same values that were required to fairly govern society in a pre-AI world. This is compounded by a lack of effective public oversight and consultation on both policymaking and technological implementation. There are no publicly scrutable external impact assessments post-deployment or publicly available empirical socio-economic assessments prior to deploying the solution.

Our paper establishes a framework for adapting these values through a series of questions that identify critical junctures at which core constitutional values and human rights may be under threat due to algorithmic decision-making. Our framework is by no means exhaustive and is meant to be read as a set of guidelines for decision makers and technologists looking to devise their own set of frameworks. The set of regulatory tools mapped out by Freiberg (2010) may remain relevant and need to be applied across contexts – often in response to knowledge that may be gained as the AI solution is implemented, evaluated, and adapted.

The five sets of values that we felt merited protection – (1) agency; (2) equality, dignity, and non-discrimination; (3) safety, security, and human impact; (4) accountability, oversight, and redress; and (5) privacy and data protection – were selected not only from a study of India's constitutional fiber but also through an assessment of AI policy instruments released by a variety of stakeholders around the world. As such, we feel that our framework – although researched and developed in an Indian context – applies across emerging economies that seek to improve the government's role in public service delivery while still mitigating negative impacts.

A core challenge continues to be the complex question of the involvement of the private sector in functions that have traditionally been the government's prerogative, and often those that have implications for fundamental rights. One of the most important recommendations of our paper centers around the need to hold the private sector accountable in these instances through uniformly worded contracts that adequately impose liability along with the delegation of any responsibility. However, given the lack of government capacity to entirely identify, design, and deploy an AI-driven solution, some regulatory room must be given to these actors to innovate.

Appropriate regulation therefore does not fit neatly into the division of the modes of hierarchical regulation, co-regulation, and self-regulation. A smart regulatory strategy would require a combination of all three.

Going forward, we feel the need for more empirical assessment of use cases in emerging economies, as much of the literature, on both the technology and regulatory frameworks, is devised in a Western context and therefore not entirely applicable to emerging economies. That said, our paper shows that algorithmic decision-making is becoming more commonplace in emerging economies. Through a close analysis of the information gained from these empirical assessments and a strong commitment to the values described, we believe that adequate ex ante regulation can mitigate harms while also enabling the realization of prospects for social good.


Acknowledgements
This paper was shaped by several helpful conversations with practitioners and scholars who
were incredibly generous with their time. We would like to thank Malavika Raghavan, Srikara
Prasad, Vidushi Marda, Sushant Kumar, and Anita Srinivasan. The paper also benefited from
feedback received after presentations at the Tamil Nadu e-governance agency and Microsoft
Research in Bengaluru. We were honored to be a part of the excellent cohort and benefited
greatly from the support offered by colleagues involved with this Association of Pacific Rim
Universities (APRU) project.

This paper was greatly improved by edits and feedback provided by Vipul Kharbanda, Nikhil
Dave, and Divij Joshi. We would also like to thank Nikhil Dave for some excellent research
assistance on this paper. All errors remain our own.

References
A Vedachalal Mudaliar v State of Madras, AIR Mad. 276 (1952)

Academic Center of Law and Business v Minister of Finance, Isr. (2006, Aug 20)

Agarwal, S. (2018, February 20). Internet users in India expected to reach 500 million by
June: IAMAI. Retrieved from The Economic Times: https://economictimes.indiatimes.
com/tech/internet/internet-users-in-india-expected-to-reach-500-million-by-june-iamai/
articleshow/63000198.cms

Agrawal, A. (2020, May 1). Lockdown Extension: Aarogya Setu Mandatory for All Employees
and in Containment Zones. Retrieved from MEDIANAMA: https://www.medianama.
com/2020/05/223-coronavirus-lockdown-extended-by-2-weeks-country-divided-into-red-
orange-and-green-zones/

Alabama v White, 496 U.S. 325 (1990)

Amoore, L. (2011). Data Derivatives: On the Emergence of a Security Risk Calculus for Our Times. SAGE journal, 24, 27. Retrieved from https://journals.sagepub.com/doi/10.1177/0263276411417430

Arun, C. (2019). AI and the Global South: Designing for Other Worlds. In M. D. Dubber, F.
Pasquale, & S. Das (Eds.), the Oxford Handbook of Ethics of AI. Oxford University Press.
Retrieved from https://ssrn.com/abstract=3403010

Ayres, I., & Braithwaite, J. (1992). Responsive Regulation. Oxford University Press.

Barocas, S., & Selbst, A. D. (2016). Big Data’s Disparate Impact. 104 California Law Review 671.

Barrows v Jackson, 346 U.S. 249 (1953)


Basu, A. (2019, October 12). We Need a Better AI Vision. Fountainink. Retrieved from
Fountain Ink: https://fountainink.in/essay/we-need-a-better-ai-vision-

Basu, A., & Hickok, E. (2018). Artificial Intelligence in the Governance Sector in India.
India: The Centre for Internet and Society. Retrieved from https://cis-india.org/internet-
governance/ai-and-governance-case-study-pdf

Basu, A., & Pranav, M. (2019, July 21). What is the problem with ‘Ethical AI’? An Indian
Perspective . Retrieved from The Centre for Internet and Society: https://cis-india.org/
internet-governance/blog/what-is-the-problem-with-2018ethical-ai2019-an-indian-per

Basu, A., & Sherman, J. (2020, January 23). Key Takeaways from India’s Revised Personal
Data Protection Bill. Lawfare.

Berg, N. (2014, June 25). Predicting Crime, LAPD style. Retrieved from The Guardian:
https://www.theguardian.com/cities/2014/jun/25/predicting-crime-lapd-los-angeles-
police-data-analysis-algorithm-minority-report

Bhatia, G. (2016). Retrieved from Indian Constitutional Law and Philosophy: https://indconlawphil.wordpress.com/tag/indirect-discrimination/

Bhatia, G. (2017). Equal moral membership: Naz Foundation and the refashioning of equality
under a transformative constitution. Indian Law Review, 115-144.

Bhattacharya, P. (2016, December 5). 88% of households in India have a mobile phone. Retrieved from Livemint: https://www.livemint.com/Politics/kZ7j1NQf5614UvO6WURXfO/88-of-households-in-India-have-a-mobile-phone.html

Black, J. (2001). Decentring Regulation: Understanding the Role of Regulation and Self-Regulation in a 'Post-Regulatory' World. Current Legal Problems, 54.

Blackwell, A. F. (2015). Interacting with an Inferred World: The Challenge of Machine Learning for Humane Computer Interaction. Proceedings of the Fifth Decennial Aarhus Conference on Critical Alternatives, 179.

Braithwaite, J. (2000). The New Regulatory State and the Transformation of Criminology.
British Journal of Criminology, 40, 222-38. http://doi:10.1093/bjc/40.2.222

Capital Float (2020). Retrieved from Capital Float: https://capitalfloat.com/

Capon, N. (1982). Credit Scoring Systems: A Critical Analysis. Journal of Marketing, 46(2),
82-91. Retrieved from https://www.jstor.org/stable/3203343

Charyulu, D. K., Shyam, D. M., Wani, S. P., & Raju, K. (2017). Rythu Kosam: Andhra Pradesh Primary Sector Mission. Coastal Andhra Region Baseline Summary Report. ICRISAT Development Center. Retrieved from http://111.93.2.168/idc/wp-content/uploads/2018/01/IDC-Report-No-13-Rythu-Kosam.pdf

Chatterjee, S., & Kapur, D. (2016). Understanding Price Variation in Agricultural Commodities
in India: MSP, Government Procurement, and Agriculture Markets. India Policy Forum.
Retrieved from http://www.ncaer.org/events/ipf-2016/IPF-2016-Paper-Chatterjee-Kapur.pdf

Cheney-Lippold, J. (2017). We Are Data: Algorithms and the Making of Our Digital Selves.
NYU Press.

Christiansen, E. C. (2011, January 1). Transformative Constitutionalism in South Africa: Creative Uses of Constitutional Court Authority to Advance Substantive Justice. SSRN. Retrieved from https://ssrn.com/abstract=1890885

Chugh, B., & Raghavan, M. (2019, June 18). The RBI’s proposed Public Credit Registry
and its implications for the credit reporting system India. Retrieved from Dvara Research:
https://www.dvara.com/blog/2019/06/18/the-rbis-proposed-public-credit-registry-and-its-
implications-for-the-credit-reporting-system-in-india/

Citron, D. K., & Pasquale, F. A. (2014). The Scored Society: Due Process for Automated
Predictions. Washington Law Review, 14, 89.

European Commission. (n.d.). Ethics Guidelines for Trustworthy AI. Retrieved from European Commission: https://ec.europa.eu/futurium/en/ai-alliance-consultation

Common Cause. (2018). Status of Policing in India Report 2018: A Study of Performance
and Perceptions. Common Cause & Lokniti - Centre for the Study Developing Societies
(CSDS). Retrieved from https://www.commoncause.in/pdf/SPIR-2018-c-v.pdf

Corbett-Davies, S. (2017). Algorithmic Decision-making and the Cost of Fairness. Stanford University. Retrieved from http://www.antoniocasella.eu/nume/Corbett-Davies_2017.pdf

Das, S. (2017, March 21). How Predictive Analytics Helps Indian Police Fight Crime.
Retrieved from http://www.computerworld.in/feature/how-predictive-analytics-helps-
indian-police-fight-crim

Department for Promotion of Industry and Internal Trade. (2018). Report of Task Force on
Artificial Intelligence. Government of India. Retrieved from https://dipp.gov.in/whats-new/
report-task-force-artificial-intelligence

Desai, K. (2019, March 31). Now Police Use Apps to Catch a Criminal. Retrieved from Times
of India: https://timesofindia.indiatimes.com/home/sunday-times/now-police-use-apps-to-
catch-a-criminal/articleshow/68649118.cms


Deshpande, R. S. (2002, June 29). Suicide by Farmers in Karnataka Agrarian Distress and
Possible Alleviatory Steps. Economic and Political Weekly, pp. 2601-2604. Retrieved from
http://shreeindia.info/rsdeshpande.com/wp-content/uploads/2014/03/Suicide_by_
Farmers_in_Karnataka.pdf

Doekler, A. (2010). Self-regulation and Co-regulation: Prospects and Boundaries in an Online Environment. Master of Law thesis, University of British Columbia. Retrieved from https://open.library.ubc.ca/cIRcle/collections/ubctheses/24/items/1.0071207

Express Web Desk. (2017, October 27). Karnataka govt inks MoU with Microsoft to use Artificial Intelligence for digital agriculture. Retrieved from The Indian Express: https://indianexpress.com/article/india/karnataka-govt-inks-mou-with-microsoft-to-use-artificial-intelligence-for-digital-agriculture-4909470/

Federal Trade Commission Staff. (2009). Report on Self-regulatory Principles for Online
Behavioral Advertising. Retrieved from https://www.ftc.gov/sites/default/files/documents/
reports/federal-trade-commission-staff-report-self-regulatory-principles-online-beh

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020, January 15). Principled
Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to
Principles for AI. Berkman Klein Center Research. Retrieved from https://ssrn.com/
abstract=3518482

Francis Coralie Mullin v UT of Delhi, AIR 746 (1981).

Freeman, J. (2000). The Private Role in Public Governance. NYULR, 75, 543, 547, 651–53.

Freiberg, A. (2010). Restocking the Regulatory Tool-kit. Dublin. Retrieved from http://www.regulation.upf.edu/dublin-10-papers/1I1.pdf

Ganguly, S. (2020, March 31). Gurugram-based Start-up Staqu Has Notified AI-powered
JARVIS to Battle Coronavirus. Retrieved from Your Story: https://yourstory.com/2020/03/
gurugram-ai-startup-staqu-jarvis-coronavirus

Gateway, F. (2019, January 29). India: Reserve Bank of India Is Working on Public Credit
Registry to Improve Access to Micro Credit. Retrieved from FinDev Gateway: https://www.
findevgateway.org/news/india-reserve-bank-india-working-public-credit-registry-improve-
access-micro-credit

Goudarzi, S., Hickok, E., & Sinha, A. (2018). AI in Banking. India: The Centre for Internet and
Society. Retrieved from https://cis-india.org/internet-governance/files/ai-in-banking-and-
finance

Government of India. (2006). Notification. India. Retrieved from https://rbidocs.rbi.org.in/rdocs/Content/PDFs/69700.pdf


Government of India. (2019). Data “Of the People, By the People, For the People.”

Government of India. (2019). Draft National E-Commerce Policy. Retrieved from https://dipp.gov.in/sites/default/files/DraftNational_e-commerce_Policy_23February2019.pdf

Guihot, M., Matthew, A. F., & Suzor, N. P. (2017). Nudging Robots: Innovative Solutions to
Regulate Artificial Intelligence. VJETL, 20(2), 385, 429. Retrieved from http://www.jetlaw.
org/wp-content/uploads/2017/12/2_Guihot-Article_Final-Review-Complete_Approved.pdf

Gunningham, N., & Sinclair, D. (2017). Smart Regulation. In P. Drahos (Ed.), Regulatory
Theory: Foundations and Applications (p. 115). ANU Press.

Gurumurthy, A., & Bharthur, D. (2019). Taking Stock of AI in Indian Agriculture. IT for Change.
Retrieved from https://itforchange.net/sites/default/files/1664/Taking-Stock-of-AI-in-
Indian-Agriculture.pdf

Hailbronner, M. (2017, November 22). Transformative Constitutionalism: Not Only in the Global South. American Journal of Comparative Law, 65(3), 527-556. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2777695

Haines, F. (2017). Regulation and Risk. In P. Drahos (Ed.), Regulatory Theory, Foundations
and Applications (p.181). ANU Press.

Hao, K. (2019, February 4). This is how AI bias really happens—and why it’s so hard
to fix. Retrieved from MIT Technology Review: https://www.technologyreview.
com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/

Heerdt, J. (n.d.). Transform your data into intelligent action with Cortana Analytics Suite.
Retrieved from Sogeti: https://www.sogeti.nl/sites/default/files/Transform%20your%20
data%20into%20intelligent%20action%20with%20Microsoft%20Cortana%20Analytics%20
Platform.pdf

Hood, C. C., & Margetts, H. Z. (2008). The Tools of Government in the Digital Age. Palgrave
Macmillan.

IANS. (2016, June 9). Microsoft develop sowing app for Andhra Pradesh farmers. Retrieved
from Financial Express: https://www.financialexpress.com/industry/technology/microsoft-
develop-sowing-app-for-andhra-pradesh-farmers/279171/

IANS. (2017, December 19). #GoodNews: Indian Farmers Go the AI Way to Increase Crop
Yields. Retrieved from the quint: https://www.thequint.com/news/india/good-news-indian-
farmers-use-ai-for-higher-crop-yields

ICICI Bank. (2020, January 1). Artificial Intelligence in Loan Assessment: How does it Work?’
Retrieved from ICICI Bank: https://www.icicibank.com/blogs/personal-loan/artificial-
intelligence-in-loan-assessment-how-does-it-work.page?


ICRISAT. (2017, January 9). Microsoft and ICRISAT’s Intelligent Cloud Pilot for Agriculture
in Andhra Pradesh Increase Crop Yield for Farmers. Retrieved from ICRISAT: http://www.
icrisat.org/microsoft-and-icrisats-intelligent-cloud-pilot-for-agriculture-in-andhra-pradesh-
increase-crop-yield-for-farmers/

ICRISAT. (2017, January 13). New Sowing Application Increases Yield by 30%. Retrieved
from ICRISAT: http://www.icrisat.org/new-sowing-application-increases-yield-by-30/

ICRISAT. (n.d.). Microsoft CEO Speaks on Collaboration with ICRISAT. Retrieved from
ICRISAT: http://www.icrisat.org/microsoft-ceo-speaks-on-collaboration-with-icrisat/

ICRISAT. (n.d.). Rythu Kosam. Retrieved from ICRISAT: http://www.icrisat.org/tag/rythu-kosam

Illinois v Gates, 462 U.S. 213 (1983)

Indian Express (2018, March 16). Why are India’s Farmers Committing Suicide? Retrieved
from Indian Express: http://www.newindianexpress.com/nation/2018/mar/15/why-are-
indias-farmers-committing-suicide-1787539.html.

Jaggi, S. (2017). State Action Doctrine. Max Planck Encyclopedia of Comparative Constitutional Law. Retrieved from https://oxcon.ouplaw.com/view/10.1093/law-mpeccol/law-mpeccol-e473

Jessop, R. (2003). Governance and Metagovernance: On Reflexivity, Requisite Variety, and Requisite Irony. Sociology. Lancaster University. Retrieved from https://www.lancaster.ac.uk/fass/resources/sociology-online-papers/papers/jessop-g

Joshi, D. (2020, February 6). Welfare Automation in the Shadow of the Indian Constitution.
Retrieved from Socio-Legal Review: https://www.sociolegalreview.com/post/welfare-
automation-in-the-shadow-of-the-indian-constitution

K. Puttaswamy v Union of India (I), (2017) 10 SCC 1

Kahn, J. (2020, February 11). A.I. and tackling the risk of “digital redlining”. Retrieved from
Fortune: https://fortune.com/2020/02/11/a-i-fairness-eye-on-a-i/

Kaleidofin. (n.d.). About Us. Retrieved from Kaleidofin: https://kaleidofin.com/about-us/

Kannabiran, K. (2012). Tools of Justice: Non-Discrimination and the Indian Constitution. New York: Routledge.

Kar, S. (2018-a). Financializing Poverty: Labour and Risk in Indian Microfinance. Stanford
University Press, 153.

Kar, S. (2018-b). Financializing Poverty: Labour and Risk in Indian Microfinance. Stanford
University Press, 154.


Khaitan, N. (2019, October 25). New Act UAPA: Absolute Power to State. Retrieved from
Frontline: https://frontline.thehindu.com/cover-story/article29618049.ece

Khaitan, T. (2009). Reading Swaraj into Article 15: A New Deal for the Minorities.
NUJS Law Review.

Khanikar, S. (2018). State Violence and Legitimacy in India, 321.

Khilnani, S. (2004). The Idea of India. New Delhi: Penguin.

Kleinsteuber, H. J. (n.d.). Self-regulation, Co-regulation, State Regulation. Retrieved from https://www.osce.org/fom/13844?download=true

Kriebel, D., Tickner, J., Epstein, P., Lemons, J., Levins, R., Loechler, E. L., . . . Stoto, M. (2001). The Precautionary Principle in Environmental Science. Environmental Health Perspectives, 871-876.

Kumar, A., Shukla, P., Sharan, A., & Mahindru, T. (2018). National Strategy for AI Discussion Paper. NITI Aayog. Retrieved from https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf

Langenbucher, K. (2020). Responsible A.I. Credit Scoring – A Legal Framework. 25 Euro. L. Rev. 1.

Lending Kart (2020). Retrieved from Lending Kart: https://www.lendingkart.com/

Lloyd Corp Ltd v Tanner, 407 U.S. 551 (1972)

Loan Frame (2020). Retrieved from Loan Frame: https://www.loanframe.com/

Maneka Gandhi v Union of India, SCR (2) 621 (1978)

Marda, V., & Narayan, S. (2020a). Data in New Delhi's Predictive Policing System. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency. Barcelona, Spain: ACM. Retrieved from https://doi.org/10.1145/3351095.3372865

Marda, V., & Narayan, S. (2020b). Data in New Delhi's Predictive Policing System. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (p. 321). Barcelona, Spain: ACM. Retrieved from https://doi.org/10.1145/3351095.3372865

Marda, V., & Narayan, S. (2020c). Data in New Delhi's Predictive Policing System. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (p. 322). Barcelona, Spain: ACM. Retrieved from https://doi.org/10.1145/3351095.3372865

Mittelstadt, B. (2019, May 20). AI Ethics – Too Principled to Fail? Nature Machine
Intelligence. Retrieved from Nature Machine Intelligence: https://papers.ssrn.com/sol3/
papers.cfm?abstract_id=3391293


Mullin v Administrator, Union Territory of Delhi, India, 2 S.C.R. 516, 518 (1981)

Nag, R. (2016, June 10). How Matrix Backed FinTech Startup Finomena is Disrupting the $8
Bn Youth Loan Market. Retrieved from Inc 42: https://inc42.com/startups/finomena/

NASSCOM. (2018). Agritech In India – Maxing India Farm Output. Retrieved from NASSCOM:
https://www.nasscom.in/knowledge-center/publications/agritech-india-%E2%80%93-
maxing-india-farm-output

Nayak, N. D. (2015, May 3). Agricultural sector needs technological intervention to face
challenges. Retrieved from The Hindu: https://www.thehindu.com/news/national/
karnataka/agricultural-sector-needs-technological-intervention-to-face-challenges/
article7166263.ece

NITI Aayog. (2018). National Strategy fir Artificial Intelligence. 33-34. Retrieved from
http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-
Discussion-Paper.pdf

Oswald, M. (2018). Algorithm-Assisted Decision-Making in the Public Sector: Framing the Issues Using Administrative Law Rules Governing Discretionary Power. SSRN.

Palmer, S. (2008, July–October). Public Functions and Private Services: A Gap in Human
Rights Protection. International Journal of Constitutional Law, 6(3-4), 585-60.

Partap Singh (Dr) v Director of Enforcement, Foreign Exchange Regulation Act, AIR SC 989
(1985)

Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and
Information. Harvard University Press.

Pasquale, F. A. (2017). Toward a Fourth Law of Robotics: Preserving Attribution, Responsibility, and Explainability in an Algorithmic Society. SSRN, 78. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3002546

Pearson, J. (2017). AI Could Resurrect a Racist Housing Policy. Retrieved from https://www.
vice.com/en_us/article/4x44dp/ai-could-resurrect-a-racist-housing-policy

Pelaez, V. (2019). The Prison Industry in the United States: Big Business or a New Form
of Slavery? Global Research. Retrieved from https://www.globalresearch.ca/the-prison-
industry-in-the-united-states-big-business-or-a-new-form-of-slavery/8289

Pichai, S. (2018, June 7). AI at Google: Our Principles. Retrieved from Google: The Keyword:
https://www.blog.google/technology/ai/ai-principles/

Pischke v Litscher, 178 F.3d 497, 500 (7th Cir. 1999)


Prince, A., & Schwarcz, D. (2020). Proxy Discrimination in the Age of Artificial Intelligence and
Big Data. 105 Iowa Law Review 1257. Retrieved from https://ssrn.com/abstract=3347959

Randazzo, A. (2013). Can a Disruptive Fin-tech create a Mass Market for Savings and
Investment in India? Retrieved from Kaleidofin: https://kaleidofin.com/kaleidofin-can-a-
disruptive-fin-tech-create-a-mass-market-for-savings-and-investment-in-india

Ranger, C. (2018, November 13). Using machine learning to improve lending in the
emerging markets. Retrieved from Harvard Business School - Technology and Operations
Management: https://digital.hbs.edu/platform-rctom/submission/using-machine-learning-
to-improve-lending-in-the-emerging-markets/

Rao, N. (2013). Three Concepts of Dignity in Constitutional Law. Notre Dame Law Review, 200.

Reddy, B. D. (2016, June 9). Microsoft, Icrisat develop new sowing app for farmers using Al
and Azure cloud. Business Standard. Retrieved from https://www.business-standard.com/
article/companies/microsoft-icrisat-develop-new-sowing-app-for-farmers-using-al-and-
azure-cloud-116060900752_1.html

Rouvroy, A. (2013). The End(s) of Critique: Data Behaviourism versus Due Process. In M.
Hildebrandt, & K. De Vries, Privacy, Due Process and the Computational Turn: The Philosophy
of Law Meets the Philosophy of Technology.

Scherer, M. U. (2016). Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies. Harvard Journal of Law & Technology, 29, 354, 357, 259.

Schulz, W. (2006). Final Report: Study on Co-regulation Measures in the Media Sector, Study for the European Commission. Directorate Information Society and Media. Retrieved from http://ec.europa.eu/avpolicy/docs/library/studies/coregul/final_rep_

Schulz, W., & Held, T. (2001). Regulated self-regulation as a form of modern government.
Indiana University Press.

Scott, C. (2017a). The Regulatory State and Beyond. In P. Drahos (Ed.), Regulatory Theory: Foundations and Applications (p. 269). ANU Press.

Scott, C. (2017b). The Regulatory State and Beyond. In P. Drahos (Ed.), Regulatory Theory: Foundations and Applications (pp. 269-270). ANU Press.

Sethia, A. (2015, March 21). The BCCI Case on "Public Function" and Its Implications on Sports Governance. Retrieved from iconnectblog: http://www.iconnectblog.com/2015/03/bcci-case-on-public-function/


Sharma, S. (2018, July 9). How ISRO is Helping Uttar Pradesh Police Map and Predict Crime. Retrieved from Tech Circle: https://www.techcircle.in/2018/07/09/how-isro-is-helping-uttar-pradesh-police-map-and-predict-crime/

Sharma, V. (2017, September 23). Indian Police to be armed with Big Data Software to
Predict Crime. Retrieved from The New Indian Express: https://www.newindianexpress.
com/nation/2017/sep/23/indian-police-to-be-armed-with-big-data-software-to-predict-
crime-1661708.html

Siemen Engineering and Manufacturing Co. of India v Union of India, AIR Sc 1785 (1976)

Singh, A., & Prasad, S. (2020). Artificial Intelligence in Digital Credit in India. Dvara Research.
Retrieved from, https://www.dvara.com/blog/2020/04/13/artificial-intelligence-in-digital-
credit-in-india/

Sinha, A., & Mathews, H. V. (2020). Use of algorithmic techniques for law enforcement: An
analysis of scrutability for juridical purposes. 55(23). Retrieved from, https://www.epw.in/
journal/2020/23/special-articles/use-algorithmic-techniques-law-enforcement.html

Smith, C. A. (2018). The Colour of Creditworthiness: Debt, Race, and Democracy in the 21st
Century. Baltimore, Maryland: Johns Hopkins University. Retrieved from https://jscholarship.
library.jhu.edu/bitstream/handle/1774.2/60992/FORSTER-SMITH-DISSERTATION-2018.
pdf?sequence=1&isAllowed=y

State of Punjab v Balbir Singh, 3 SCC 299 (1994)

Sundar and Ors v State of Chattisgarh, 7 S.C.C, 547 para. 73 (2011)

Terry, N. (2019). Of Regulating Healthcare AI and Robots. Yale Journal of Law & Technology,
21, 18. Retrieved from https://yjolt.org/sites/default/files/21_yale_j.l._tech._special_
issue_133.pdf

Terry v Ohio, 392 U.S. 1 (1968)

Travancore Rayons v Union of India, AIR SC 862 (1971)

UN ESCAP. (2019). Artificial Intelligence in the Delivery of Public Services. Retrieved from
https://www.unescap.org/sites/default/files/publications/AI%20Report.pdf

Zee Telefilms v Union of India, AIR, SC 2677 (2005)

151
Institutional and technological design development through use cases based discussion

Appendix: Examples of Regulatory Tools for AI

Accountability, Oversight, and Redress


• Clear, funded, and appropriate mechanisms for redress.
• Systematic and bottom-up impact assessment of potential harms to civil liberties and human rights.
• Detection, mitigation, and response mechanisms for possible errors as a result of initial training and self-learning.
• In-built audit mechanisms and possibility of verification by an independent third party.
• Clearly articulated liability structures for situations that involve the use of an AI system.
• Mechanisms for consistent and regular evaluation and review of AI systems, including inclusive and bottom-up mechanisms for tracking impact.
• Communication of changes to AI systems resulting from monitoring and evaluation.
• Capacity-building and awareness of data-driven decision making in courts at national, regional, and district levels.
• Clear framework for working with the private sector, including enabling access to training data held by the private actor, opening up source code, and assigning clear modes of contractual liability.
• Certification schemes and trainings for end users.
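The audit and third-party verification items above can be made concrete. One common design – offered here as an illustrative sketch only, not a mechanism this report prescribes – is a hash-chained decision log: each entry commits to its predecessor, so an independent reviewer can detect any after-the-fact alteration of recorded AI decisions.

```python
import hashlib
import json

def chain_entry(prev_hash: str, record: dict) -> dict:
    """Create an append-only audit entry that commits to the previous entry,
    so tampering with any earlier record breaks every later hash."""
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"record": record, "prev": prev_hash, "hash": entry_hash}

def verify(log: list) -> bool:
    """Independent third-party check: recompute the whole chain from scratch."""
    prev = "genesis"
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

# Record two hypothetical automated decisions, then tamper with one.
log, prev = [], "genesis"
for decision in [{"case": 1, "output": "approve"}, {"case": 2, "output": "deny"}]:
    entry = chain_entry(prev, decision)
    log.append(entry)
    prev = entry["hash"]

assert verify(log)                      # untouched log verifies
log[0]["record"]["output"] = "deny"     # silently rewrite an old decision
assert not verify(log)                  # tampering is detected
```

The same idea scales to real deployments by periodically publishing or escrowing the latest hash with an oversight body, which supports both the audit and the independent-verification items without exposing the underlying records.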

Equality, Dignity, and Non-discrimination


• Anti-discrimination standards in compliance with constitutional and international human rights laws.
• Diversity assessment for members of the development/implementation team.
• Written standard operating procedures (SOPs) during curation of the data and training of the algorithm.
• Mechanism for incorporation of citizen voices and feedback throughout implementation.
• Framework for assessing disparate impact on specific vulnerable communities.


Safety, Security, and Human Impact


• Impact assessment of all cyber threats to which the AI system could be vulnerable.
• Risk assessment towards identifying unintended consequences prior to development, including in unpredictable environments.
• Existing cyber security frameworks at a national level.
• Depending on the severity of impact, clear safety controls for a human to override the AI system or reject a prompt, recommendation, or decision by the AI.
• Regular security audits, patches, etc.
• Framework for data breach notifications and bug bounty programs.

Privacy and Data Protection


• Compliance with national and global protocols on data protection and governance, including consent principles, control over data use, restriction of processing, right to erasure, and rectification.
• Clear regulatory frameworks for personal and non-personal data in existing data sets.
• Adoption of necessity, proportionality, and “least intrusive” standards to guide the design, development, and use of AI systems.
• Built-in mechanisms for notice and consent, with the possibility to revoke.
• Ethical practices in collecting and accessing data for training purposes.
• Oversight mechanisms for collection, storage, processing, and use – particularly for real-time and long-term collection and use of data.

Agency
• Comprehensive notice framework that accounts for passive and active data collection.
• Comprehensive transparency frameworks for data inputs, data training and curation,
and use of decisions.
• Retrospective adequation.
• Opt-out options for individuals.
• Gradients of human-in-the-loop.
• Standards for accuracy.

AI Technologies, Information Capacity, and Sustainable South World Trading

Mark Findlay
Centre for AI and Data Governance,
School of Law,
Singapore Management University

This research is supported by the National Research Foundation, Singapore under its Emerging Areas Research
Projects (EARP) Funding Initiative. Any opinions, findings, and conclusions or recommendations expressed in this
material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.

Abstract

This paper represents a unique research methodology for testing the assumption that
AI-assisted information technologies can empower vulnerable economies in trading
negotiations. This is a social good outcome, enhanced when it also enables these
economies to employ the technology for evaluating more sustainable domestic market
protections. The paper is in two parts. The first presents the argument and its underpinning
assumption that information asymmetries jeopardize vulnerable economies in trade
negotiations and decisions about domestic sustainability. We seek to use AI-assisted
information technologies to upend situations where power is the unfair discriminator in
trade negotiations because of structural information deficits, and where the outcome
of such deficits is the economic disadvantage of vulnerable stakeholders. The research
question is the following: How is power dispersal in trade negotiations, and consequent
market sustainability, to be achieved by greater information access within the boundaries of
resource limitations and data exclusivity? The second section is a summary of the empirical
work which pilots a more expansive engagement with trade negotiators and AI developers.
The empirical project provides a roadmap for policymakers convinced of the value of the
exercise to then adopt the model reflections arising out of the focus groups and translate
these into a real-world experience. The research method we propose has three phases,
designed to include a diverse set of stakeholders – a scoping exercise, a solution exercise,
and a strategic policy exercise. The empirical achievement of this paper is the validation
of the proposed methodology through a “shadowing” pilot method. It explains how the
representative groups engaged in their role plays, and summarizes general findings from the
two focus groups conducted.


Analytical Purpose
This paper represents a unique research methodology for testing the assumption that AI-assisted information technologies can empower vulnerable economies in trading negotiations. This is a social good outcome, enhanced when it also enables these economies to employ the technology for evaluating more sustainable domestic market protections.

The paper is in two parts: the initial discursive analysis presents the argument underpinning the assumption; the second section is a summary of the empirical work which pilots a more expansive engagement with trade negotiators and AI providers. This division allows a policy audience to concentrate on the justifications for the assumption, the challenges facing implementation, and the speculated consequences of its successful achievement. Researchers and evaluators will find interest in the details of the pilot methodology.

The paper demonstrates and tests our confidence in the methodology to positively establish the analytical assumptions regarding power dispersal and sustainable domestic market analysis. We advance speculative policy recommendations that can be drawn from the critical experience of the pilot methodology. The paper’s commitment to empowerment through policy engagement and recipient ownership makes prescriptive policy inappropriate without a full application of the method in real market decision-making.

Consistent with the overarching project brief, we have identified a need and proposed an AI-assisted answer to that need at theoretical and policy levels. As such, a social deficit is established and a social good through AI is proposed, which is consistent with a major head of the ESCAP development goals. Recognizing resource limitations and time constraints, the empirical project in the second part provides a roadmap for policymakers convinced of the value of the exercise, to then adopt the model reflections arising out of the focus groups and translate these into a real-world experience.

In more detail, the policy and research assumption is that by employing AI-assisted information sourcing, sorting, and analyzing technologies to improve the information access and evaluation underpinning economic decision-making, vulnerable economies can better determine sustainable domestic market policy against enhanced trade bargaining capacity. The availability of AI information-assistance technologies (and associated expertise/education)1 will, it is argued, provide the material and understandings necessary (but currently absent or under-developed) for selecting contexts of domestic market protection to promote sustainability, and for more competently valuing trade bargaining positions in the case of transnational exchange markets.

At a more macro consideration of economic reliance, this policy decision-making enhancement will reduce the reliance on market surplus dumping from more powerful trading partners and its anti-subsistence consequences. As domestic market sustainability is more strategically prioritized, these vulnerable economies will better weather post-growth or de-growth global economic trends.

As for enhanced trading capacity, and specifically empowered trade bargaining positioning, AI information-assistance technologies for data access, automated data management, and analysis will, it is argued, offer social good outcomes to presently disempowered multi-stakeholder trading players who currently negotiate under information deficits and resultant weakened bargaining capacity. AI information-assistance technologies will strengthen bargaining power, which will increase trading revenue and make more achievable aspirations for “world peace through trade” (Dikowitz, 2014).

1. It is not the intention of the paper to specify these technologies. In fact, essential for our belief in recipient “ownership”, any eventual policy applications should involve recipient
economies in a dialogue with AI technical resource personnel and donor agencies, to determine the technologies best suited to need on a case-by-case basis.
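To make the kind of “information sourcing, sorting, and analyzing” capacity envisaged above slightly more tangible, consider a deliberately toy sketch (all figures and commodity names below are invented for illustration; consistent with the footnote above, the paper itself does not specify any technology): a simple tool that benchmarks a price offered in a negotiation against a distribution of world unit values and flags offers sitting well below it.

```python
# Toy sketch: flag potentially undervalued export offers by comparing the
# offered unit price against a benchmark distribution of world unit values.
# All numbers are invented; a real tool would draw on live trade statistics.
from statistics import mean, stdev

world_unit_values = {          # USD per tonne, hypothetical
    "cocoa":  [2400, 2550, 2480, 2620, 2510],
    "coffee": [3900, 4100, 4050, 3980, 4200],
}

def undervaluation_flag(commodity: str, offered_price: float,
                        z_threshold: float = -1.0):
    """Return the offer's z-score against the benchmark and whether it
    falls far enough below the benchmark mean to warrant scrutiny."""
    prices = world_unit_values[commodity]
    mu, sigma = mean(prices), stdev(prices)
    z = (offered_price - mu) / sigma
    return z, z < z_threshold

z, flagged = undervaluation_flag("cocoa", 2200)
print(f"cocoa offer at 2200 USD/t: z-score {z:.2f}, flag for review: {flagged}")
```

Even this trivial screen illustrates the paper’s point: the arithmetic is cheap once the underlying data are accessible, and it is access to (and critical appreciation of) such data, rather than the computation, that vulnerable negotiating parties currently lack.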


Background
The foundations of our thinking grow from the following propositions, which can be viewed as policy underpinnings:

1. General principles can be identified as governing successful trading bargains;2
2. Trade negotiations usually reflect the relative market power and positioning of participants;
3. Trading partners from more vulnerable economies may require external bargaining support if structural power asymmetries are to be dispersed in their favor;
4. A “free trade model”3 has negative impacts in weaker economies being required to open up their markets and remove protections over domestic social production.4 This trade liberalization has meant that domestic market subsistence and economic sustainability are diminished in favor of trading exploitation;
5. Weaker economies have been adversely affected by discriminatory trading arrangements and exclusionist trading alliances, particularly as their trade commodities are undervalued, and their attractiveness as preferred partners is equally so;
6. Automated data management5, access to big data6, and artificial intelligence technology capabilities7, if affordably available to weaker trading economies, offer capacities to strengthen their positioning in certain trading arrangements;
7. A protectionist regression in domestic trade arrangements among major trading powers, and moves from multi-lateral to bi-lateral trading alliances, both designed to reduce individual trade deficits and to penalize offending trading partners, may offer opportunities for weaker trading economies to assert domestic social production and bi-lateral advantage. The reasoning behind this view is that domestic market liberalization North to South World, ignoring how vulnerable the target domestic resource market may be, leaves vulnerable economies even more exposed to trade discrimination when major global trading nations are reverting to selective and self-interested tariff protectionism;
8. The paradox between free trade open market liberalization, and intellectual property and data transfer protection, disadvantages weaker economies with lower levels of IP “ownership” and effective data transfer controls.

Taking these fundamentals as given8, the first part of the paper builds the following argument:

• Employing bargaining theory, a typology of successful trade bargaining can be established and the significant factors prioritized;
• Anticipating that information deficit regarding key aspects and dimensions of any particular trade bargain will further disadvantage weaker parties,9 access to information and critically appreciating its analytical value will level the bargaining power asymmetries;

2. What is meant by “trade bargains” or “trade negotiations” here is specific trade deals rather than prevailing or permanent trade agreements and partnerships.
3. As a policy to eliminate discrimination against imports and exports, the free trading model has never fully been achieved globally. In such an ideal trading frame, buyers and sellers
from different economies may voluntarily trade without a government applying tariffs, quotas, subsidies, or prohibitions on goods and services. Free trade is, therefore, proposed as the
opposite of trade protectionism or economic isolationism. Instead of freedom and fairness, having attained comparative advantage in production, the hegemon is typically impaired by
artificial trade barriers in its quest to penetrate the domestic economies of competing states. Thus, as a state rises from the core to hegemony, it will progressively favor lower tariffs
and move towards a free trade doctrine for import receiving markets, while at the same time resorting to tariffs on imports where they are deemed to correct trade imbalances against
their benefit. In de Oliver M. (1993) “The Hegemonic Cycle and Free Trade: the US and Mexico” Political Geography 2/5: 457-474.
4. Can social production at home be an adequate substitute for market production from producers abroad, particularly when it comes to high-tech commodities and services? The same
could be asked about specialist natural resources which are the material life blood of high technology, and as such, trading priorities. We advance here that trade is necessary for
balanced development, but trade deals need not crowd out domestic social production through the export dumping of subsidized or cheap replications of sustainable domestic social
production.
5. This refers to the application of algorithmic technologies in cataloguing and mapping data at rest and in action, thereby lessening the prospect of “drowning in big data”, https://erwin.com/blog/automated-data-management-stop-drowning-data/
6. The term “big data” has come to mean some form of “value-added” data application potentials. Simply, big data refers to extremely large datasets which may be analyzed
computationally to reveal patterns, trends, and associations, particularly concerning human behavior and interactions. The size of these sets and their capacity to cross fertilize
creates negative challenges to evaluating data sources and their progressive integrity.
7. The paper prefers the definition provided by Stuart Russell and Peter Norvig (2010) Artificial Intelligence: A Modern Approach (3rd edition), New Jersey: Prentice Hall; “the designing
and building of intelligent agents that receive precepts from the environment and take actions that affect that environment”. This approach connects with a key idea relevant to the
present discussion, that AI is not the same as information – it is technology that helps us process information to take actions in the world.
8. It is possible for each of these assumptions to be empirically tested and contextually validated. However, for our initial purposes, they are designed to form the foundations of wider
analytical projections.
9. Rather than talking about economies in terms of stages of development, this paper distinguishes participation in economic decision-making and trade bargaining in terms of the
relative strength and weakness of participants. Vulnerability is the approach taken here as an empirical measure of relative market power, which can be corrected through more equal
access to the information underpinning strategic economic decision-making.


• Understanding the dynamics of a global free-trading model, and its critique in the recent return to protectionism, projections could be offered regarding how weaker trading economies might be advantaged by interventions to improve their individual bargaining power, and at the same time strategically protecting their sustainable domestic social production;
• Information deficits regarding crucial trade bargain variables disadvantage parties10 with reduced or restricted access to such information;
• Automated data management, access to big data using artificial intelligence technology, and enhanced analytical expertise/education can provide external assistance to disempowered trading parties when seeking to improve their bargaining status;
• Such information access capacity is made more viable through enhanced internet access;
• Aid and development agencies, international organizations, and private philanthropic entities can provide the financial backing to finance the necessary technology for trade information empowerment. Additionally, multi-stakeholder trading arrangements could fund AI information technology capacity to advance aspirations for “world peace through trade”;
• Access to information alone will not rebalance trading power asymmetries. Along with more access, there is a need to invest in critical and resilient analytical capacity.

Each of the paper’s policy underpinnings represents a commitment to the greater trading sustainability of small and less powerful trading economies, in a global context where these economies can teach the North World much about sustainability in a post-growth or de-growth trading age. In addition, more encompassing policy eventualities directed to sustainability for vulnerable economies will be enriched by this research through the suggested potentials it offers to enhance informed decision-making about what domestic resources should be retained in domestic markets, and where these markets can be opened up to trade without endangering the resilience of such economies.

Part I

The Analytical Challenge

Trade has become essential for the viability of today’s exchange economies, big and small. Global trade that produces benefits for all is also seen as a positive aspect of global governance and peacemaking. Commodities traded will vary, largely depending on the demographics of the economy and its historical development. If we accept that “property is a fundamental social practice” and “ownership is indeterminate” (Humbach, 2017), then there needs to operate a sustainable frame for things traded between parties who want the property and ownership they claim to work best for their complex social needs.

Unfortunately, as Joseph Stiglitz has observed at the forefront of free trade policy marketing, operating from a beggar-thy-neighbor perspective to beggar-thyself (Stiglitz, 2002a), the “free trade” panacea did not realize universal benefits across the globe.

International economic justice requires that the developed countries take action to open themselves up to fair trade and equitable relationships with developing countries without recourse to the bargaining table or attempts to extract concessions for doing so (Stiglitz, 2002b).

Implicit in this recognition of requiring fair trade initiatives driven from the rich and powerful down to the poor and powerless is pragmatic structural and process cautioning about unequal bargaining relationships. The cynic might say that fair trade is a non-sequitur. A good bargain benefits one to the detriment of the other. If this is the inevitability of trade, at a global level it explains the inequitable and destructive trajectories of contemporary global economic imperialism (Hardt & Negri, 2001). This paper does not proceed within any such inevitability. Nor does the paper ignore that the introduction of AI-assisted information technology can have the

10. Parties to economic decision-making and trade negotiations may be state actors, commercial agents, or multi-participant stakeholders.


unintended adverse consequences of increasing unfairness if the nature of trading biases based on wider hegemonic disempowerment is not appreciated. Laws against protectionism and promoting free trade North to South worlds often give “fairness” a low priority. Along with more access to information, we would encourage the development of legal regimes respectful of, and not simply exploiting, global economic disparity.

When reflecting on the problems associated with transferring misunderstood or misconceived concepts of “fairness” into complex socio-technical systems, Xiang and Raji conclude that “fairness” is a mutual enterprise between AI-creators and legal policymakers:

If the goal is for machine learning models to operate effectively within human systems, they must be compatible with human laws. In order for ML researchers to produce impactful work and for the law to accurately reflect technical realities of algorithmic bias, these disparate communities must recognize each other as partners to collaborate with closely and allies to aid in building a shared understanding of algorithmic harms and the appropriate interventions, ensuring that they are compatible with real-world legal systems (Xiang & Raji, 2019).

New Global Economic Models

Sustainable world trade in an era of post-growth or de-growth11 is facing challenges from the push for protectionism and isolationism against trade liberalization and the “wealth of nations”. National self-sufficiency has incrementally been downgraded by free trade imperatives in favor of the internationalization of economic activities. Populist backlash would selectively reverse the forces of global economic engagement in preference for trading imperatives governed by domestic surplus and offshore relative disempowerment.

The potential downsides of free trade are said to be mitigated by:

• Allowing for innovation and structural change;
• Increasing employability and enabling life-long learning; and
• Redistributing globalization gains more equally in domestic economies through taxation (Reichel, 2018).

Debate these eventualities if you will, but their achievement is no doubt dependent on which side of the globalization engine one sits on – is it for prosperity and peace, or alternatively, for intra-country wealth through production chains skewed to stronger economic bargainers?

The political and economic reality of current trade agendas is that vulnerable economies will be negatively impacted via protectionist policies enforced by major trading nations, in different ways but to similarly disabling extents as they were when forced to expose their own markets to the unbalanced influence of North World free trade expansionism. The inequalities of free trade and selective protectionism, operating on profound imbalances in trade capacity, represent the context for the policy reform advocated in the remainder of the paper.

Specifically, the policy reform advocated in this analysis involves:

• Recognizing that sustainable global economies will not be advanced by a heavy regression to selective protectionism or a blind adherence to discriminatory and unbalanced trade liberalization.
• Appreciating that free trade can continue as a dimension of positive global engagement where free trade agreements allow for domestic social production and thereby advance the aspiration for world peace through trade.

11. There are several definitions of de-growth, which largely focus on economic policy that concentrates less on economic stimulus than on sustainable social welfare. For this paper, the concept also incorporates “post-growth” – economic inevitabilities which see growth slowing or flattening irrespective of political and market intervention. See Azam G. (2017) “Growth to De-Growth; a brief history” https://www.localfutures.org/growth-degrowth-brief-history/. “[De-growth] challenges both capitalism and socialism, and the political left and right. It questions any civilization that conceives freedom and emancipation as something achieved by tearing oneself away from and dominating nature, and that sacrifices individual and collective autonomy on the altar of unlimited production and the consumption of material wealth. Capitalism has brought further ills such as the expropriation of livelihoods, the submission of labor to the capitalist order and the commodification of nature (for the South World in particular). This project to establish rational control over the world, humanity and nature is now collapsing.”


• Realizing that the current financial sustainability of vulnerable South World economies, despite those being economies more likely to adjust successfully to post-growth or de-growth regimes,12 will be enhanced if their bargaining power in trading arrangements, and their capacity to discriminate between what should be traded and what should remain a domestic resource, is empowered through greater information access and analysis.13

The next section looks at a model of bargaining dynamics. In particular, it identifies the importance of access to information for empowering bargain participants.

Bargaining Theory14

What factors determine the outcomes of specific trade negotiations? What are the sources of bargaining power? What strategies can help in improving a party’s bargaining power?

Trade bargains can be epitomized as at least two parties engaging for the purpose of some beneficial outcome (which might or might not be mutual) but who have conflicting interests over terms. The common interests are in cooperating for trade; the conflict lies in how to cooperate.

Taking a more contextual approach, understanding the dynamics of bargaining from the perspective of disadvantaged parties in particular provides an opportunity to appreciate market dynamics and relationships (internal to the bargain) as well as the influence of political and economic policies repositioning transactions (external). Interrogating the essential features of the bargain requires more than disentangling reasons for agreement or disagreement. A power analysis is at the core of bargaining theory, governing the imperatives for gaining the best benefit, and often at the cost of fairness or other more universal normative considerations.

Practically, issues of efficiency and distribution are important. Efficiency is at risk if the agreement fails or can only be reached after costly compromise and delay. Distribution relates to how gains emerge from co-operation between the two parties. To these issues identified by Muthoo, we would add sustainability. It is rare that trade relationships are “one-offs”. They usually lead on to the establishment of enduring market connections, or they have ramifications for the parties involved which stretch beyond the commercial terms of the deal.

What are the determinants of the bargaining outcome?

A. Impatience, or the pressures of time
Each player values time. The preference is to agree to the price today rather than tomorrow. The value given to time will be subjective and relative. In particular, it may be as disproportionate and incremental as it is exaggerated by other external cost pressures. Weaker players may have less time to bargain, or stronger players may exert the pressures of time if the rapid conclusion of the bargain is essential for other bargains to follow.

Apparent impatience can lead to a weakened bargaining posture or a breakdown of other rational communication essentials. In order to avoid the exposure of impatience, bargaining theory suggests that the vulnerable party should decrease their haggling costs and/or increase the haggling costs of the other party. One way of achieving such a differential is for the otherwise impatient party to possess and understand the richest range of information and data that constructs (or constricts) the other party’s bargaining context.

Because the wealth and power differentials between trading parties are structural (and often not temporal or spatial), a basic principle of bargaining theory is that economies are unlikely to converge in wealth and income solely through international trading policy.

12. Some say that developing economies need the benefits of growth before adopting a largely North World economic countermovement like de-growth. There is an alternative argument that the conditions required for rethinking the place of the economy within the social, and for prioritizing social rather than material goods, are more apparent and resilient in less modernized and less materially dependent societies. The debate is usefully discussed in Lang M. (2017) “Degrowth: Unsuited for the Global South?” Alternautas. http://www.alternautas.net/blog/2017/7/17/degrowth-unsuited-for-the-global-south. In any case, we are not requiring de-growth, but rather post-growth approaches to sustainability that accept growth as a priority for the South World, but in the context that economic growth is repositioning as a global economic agenda.
13. In advancing this thesis, we are mindful that information access alone will not empower market stake-holding. The quality of that information (i.e., its relevance, immediacy, and analytical transparency) depends on more than technological facilitation. The factors on which information empowerment relies are contextually important when evaluating the significance and sustainability of technological facilitation.
14. The following summary draws heavily on Muthoo, A. (2000) “A Non-technical Introduction to Bargaining Theory”, World Economics 1/2: 145-166.

AI Technologies, Information Capacity, and Sustainable South World Trading

Features integral to bargaining dynamics such as information deficit, we argue, have greater potential to counterbalance prevailing structural inequalities that determine patience to let negotiations run their natural course.

B. Risk of breakdown
If, while bargaining, the players perceive that the negotiation might break down into disagreement because of some exogenous and uncontrollable factors, then bargaining dynamics will alter. Risk of breakdown can be raised through a range of variables, from human incompatibility to the intervention of third parties.

This risk perception is where strategies to increase risk aversion are important. Information available to parties concerning the nature of the risk and its impact on the other side becomes important if a weaker party wants to shield through risk aversion.

C. Outside options
Here, the principle is that a party’s bargaining power will be increased if their outside option is sufficiently attractive – that is, where alternative trading/bargaining arrangements may parallel the first instance bargaining. Weaker parties are often devoid of any other option, outside or otherwise, or because of not fully understanding the values and variables at play in their bargain, feel trapped within a trade that is anything but to their advantage. The outside option principle is directly impacted by the amount of information either or both parties have about the bargain in play and the outside option relative to the first instance bargain. The valuation of an outside option will depend not only on the conditions and characteristics of that option, but as much or more on its consequences for the bargain in play.

D. Parties’ relationships
There is much in bargaining theory which concerns the significance of connections between the parties in contexts outside the bargain in hand. These externalities (such as cultural familiarity and political bonds) may impress so deeply into every other condition of the bargain that negotiations cannot break free from responsibilities and obligations inherent within any such prevailing relationship.

Again, information imbalance, or data access restrictions built into such extraneous relationships, will further exacerbate the information deficit retarding knowledgeable participation in the eventual agreement struck.

E. Parties’ interests and preferencing
Individuals and organizations seeking to influence economic decisions or to achieve success in a trade bargain approach the enterprise with pre-formed preferences and exhibiting internalized interests. The decisions or bargains that result will be colored by such preferences and interests in the same way that any market choice is in part the product of preference gratification, interest containment, or satisfaction. Pound (Grossman, 1935) would see the contest over interests as settling on individual claims, demands, or desires. How any of these features is given preference through a bargain or decision will reveal the relative power exercised by individual stakeholders, and by dominating any conflict over interests, the power differential may be increased.

In trade negotiations, the interests of stakeholders will range well beyond the remit of what is to be bargained or decided. Therefore, if the influence of pre-existing preferences and interests is going to weigh significantly on the negotiation or decision-making dynamics, then the more each party has detailed and informed knowledge about these preferences and interests, the less likely these will distort outcomes in ways which could not be planned for or at least anticipated by negotiating parties on both sides.
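The outside-option principle (C, above) has a compact textbook expression in the symmetric Nash bargaining solution, where each party receives its disagreement payoff plus half of the net surplus. A minimal sketch with invented numbers — an illustration in the spirit of the non-technical treatment cited in the footnotes (Muthoo, 2000), not a model of any actual negotiation:

```python
# Symmetric Nash bargaining over a fixed surplus ("pie") when each party
# holds an outside option (its disagreement payoff). Illustrative sketch:
# the numbers are invented and the model is the two-player textbook case.

def nash_split(pie, outside_a, outside_b):
    """Payoffs (u_a, u_b) maximizing (u_a - d_a)(u_b - d_b)
    subject to u_a + u_b = pie, where d_* are the outside options."""
    surplus = pie - outside_a - outside_b
    if surplus < 0:
        # No agreement beats the outside options; the parties walk away.
        return outside_a, outside_b
    # The symmetric solution splits the net surplus equally.
    return outside_a + surplus / 2, outside_b + surplus / 2

# A weak party with no outside option against a stronger counterpart...
print(nash_split(100, 0, 40))   # -> (30.0, 70.0)
# ...and the same party after securing an alternative worth 20:
print(nash_split(100, 20, 40))  # -> (40.0, 60.0)
```

A better outside option shifts the split even though the pie is unchanged — the formal counterpart of the observation that weaker parties without alternatives feel trapped in disadvantageous trades.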

Institutional and technological design development through use cases based discussion

F. Commitment tactics
In many bargaining situations, the players often take actions prior to or during the negotiation process which partially commit them to some strategically chosen bargaining positions. If these commitments are partial in that they are revocable, depending on how far down the line of negotiation they have been struck, this may progress the appearance of intractability and therefore the costs associated with their revocation. Many of these commitments may have been orchestrated in order to increase the “bluff” (e.g., the limitations on a party to negotiate freely beyond the terms of another commitment). The power of bluff is always dependent on contrary information or any suspension of disbelief in the bluff.

G. Asymmetric information
It might be accepted bargaining practice that one party will always know something the other does not. How such an information disparity should be valued is relative to the significance of the information for the vital terms of agreement (or disagreement). Information asymmetries affect both the values and pricing on offer as conditions of a deal, as well as when the agreement might be concluded for the maximum mutual benefit.

In general, an absence of complete information will lead to inefficient bargaining outcomes, even for those who benefit from an information surplus. The logic behind this view rests on an acceptance that the more information available to both parties, the earlier synergies will be established and bargains struck.

The message is that treating information in some exclusionist or proprietorial manner may produce a short-term bargain benefit for the information owners (renters and possessors), but at the risk of an unsustainable trading market vulnerable to misrepresentation, exploitation, corruption, and the retarding of any natural propensity for market competition.

Therefore, policies to defeat information asymmetries in trading arrangements, we argue, offer empowerment potentials for weaker players and, on the strength of power dispersal through information access and sharing, more sustainable trading markets ongoing.

In seeking power dispersal via information access and analysis, this paper is not requiring some egalitarian levelling of market engagement. As Rawls argued, social inequality will not always be the product of power abuse or discrimination (Grcic, 2007). What we are seeking to attack are those situations where power is the unfair discriminator because of structural information deficits, and economic abuse of vulnerable stakeholders is the outcome.

From this review of bargaining dynamics, the essential research question emerges: How is power dispersal in trade negotiations, and consequent market sustainability, to be achieved by greater information access within the boundaries of resource limitations and data exclusivity?

Access through AI, Automated Data Management, and Big Data – Some Critical Considerations

As suggested in the brief reflection on fairness (above), it is necessary to preface any consideration of the relationship between improved data access and improved bargaining power in trading arrangements with the caution that more data and better automated data management courtesy of AI technologies will not automatically empower weaker trading partners. In fact, increased technological capacity to access data, unconnected with significant advances in data appreciation and contextualization, may simply further fog the understanding of smaller stakeholders and exacerbate bargaining disempowerment.15

In addition, bargaining tactics may prefer privacy when information is applied, sought, withheld, or

15. The focus group discussions in the Methodology section enunciate this concern.


exchanged. The bargaining attitude that bargaining power is lessened if information is mutualized has to be addressed with the argument that, for market sustainability and not just a single bargain advantage, fairer information access will make for more robust economic engagement. Once again, we return to the externalities of economic fairness.

How market stakeholders accommodate and benefit from information abundance is at the heart of any policy derivatives designed to improve trading balance in a hegemonic global trading model, intensified in its potential to discriminate as a consequence of selective and politicized protectionism. A feature of the methodology to follow is the potential to better understand how information needs can be met with enhanced information access to address specific bargaining decisions.

An important consideration, which informs the policy projection for trade empowerment, is its timeliness. With the major trading partners at war over tariffs, trade imbalances, protectionism, and, perversely, secrecy when it comes to tech transfer and IP, the conditions may be right for smaller trading economies to rebalance their domestic sustainability without the backlash of free trade essentialism.16 From that stance, an informed and economic evaluation of what remains open for trading will provide a more stable platform for trade bargaining.

Access to information, complemented by increased analytical capacity, will enable more nuanced distinctions between protection for domestic sustainability and competitive positioning in regional and international trading. Yet, strategic analytical capacity does not simply depend on more devices and bigger technologies. In fact, the savvier information-users are navigating away from an over-reliance on devices and are becoming aware of how algorithms affect their lives. In any case, even those market players who have less information are relying more on algorithms to guide their decisions, whether they realize it or not. In the current technologized world environment, it is axiomatic that new digital literacy is not about more skillfully using a computer or being on the Internet on call, but understanding and evaluating the consequences of an always-plugged-in lifestyle for every aspect of social and economic engagement. In societies and cultures that still place social relations much above digital connections, the introduction of AI capacity is never, as we see it, meant to diminish or downplay the dominant role of human agency.

Over two thirds of the world’s population either live outside or can only partially participate in the digital age. Digital access and digital literacy are now recognized as fundamental human rights. However, when it comes to fair trading practice, a level playing field in terms of information engagement is not only a long way off, but some might argue is a misunderstanding of bargaining behavior and advantage (UNCTAD, 2019).

In 2014, the UN General Assembly adopted resolution 69/204 “Information and Communication Technologies for Development”. Most relevant for this paper is the reference to:

“…information and communications technologies have the potential to provide new solutions to development challenges, particularly in the context of globalization, and can foster sustained, inclusive and equitable economic growth and sustainable development, competitiveness, access to information and knowledge, poverty eradication and social inclusion that will help to expedite the integration of all countries, especially developing countries, in particular the least developed countries, into the global economy” (UNCTAD, 2015);

This paper is not solely concerned about a “digital divide” between those who have access to computers and the Internet and those who do not. As digital devices proliferate, the divide is not just about access

16. As noted earlier, there has been much political hypocrisy surrounding the “freedom” of free trade, and as such, a re-balancing of domestic sustainability and regional/international
competitiveness will not necessarily require a wholesale rejection of more open cross border commercial engagement.
17. The mirror image of this divide is the incapacity of algorithm designers to appreciate the complexity and sometimes intentional ubiquity in the social circumstances and human
decisions to which they are applied.


or available technologies. How individuals and organizations deal with information overload and the plethora of algorithmic decisions that permeate every aspect of their lives is an even more relevant discriminator when turning a power analysis to the global trade divide (Susaria, 2019). The new digital divide is wedged over understanding how algorithms can and should guide decision-making.17

The “empowerment through data access and analysis” model that is advocated here depends on the availability of technological facilitation in identifying relevant data, determining its legitimacy and fitness-for-purpose, alongside enhanced analytical capacity and an upgraded appreciation of how AI as information technology can enhance essential economic and trade decision-making.18 Along with this external impetus for empowerment in decision-making is a concurrent challenge for information users in vulnerable trade economies to more clearly determine who decides what technologies should be preferred and whether such technologies offer decision-making options that are fair/legitimate/fit-for-purpose.

Syncing AI potentials with the information needed for domestic and trans-national trade bargaining and economic decision-making is not singularly a question of sourcing and supplying technological capacity presently unavailable to weaker market stakeholders. Along with improved access and analytical technologies, there is a need to target the utility of such technologies and the information they produce to domestic economic sustainability (through trade protection) and increased trading profitability (through sharper trans-national bargaining).

In identifying the necessity for a more level playing field over data access and analysis in trade negotiations, this paper is not traversing discussions of “data trade” and its regulation, nor are we focusing on data-driven economies.19 The policy product of the research to follow is also not seeking to challenge even the most discriminatory IP and data protection regimes, though such challenges might successfully advance market sustainability in an era of access revolution (Findlay, 2017). Rather, the purpose of the research method to follow is to scope the type of information necessary for successful domestic market discrimination and trade negotiations, and the manner in which the provision of access and analytics technology (via AI potentials) can enhance the decision-making benefits which sustainable domestic market analysis and invigorated trading negotiations offer for empowering and assisting vulnerable economies at a time of world trade transition.20

Bargaining-empowerment Through Technologized Information Access and Analysis

Bargaining-empowerment through information access may occur in several ways. Recognizing there is a difference between:

1. access to information helping an individual actor to bargain better, and
2. access to information assisting this actor to locate other stake-holder participants, and together they bargain better (because they share information and they act as a more influential bargaining unit);
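Both modes of access rest on closing the kind of information gap described under asymmetric information (G, above). How costly such a gap can be shows up even in a one-shot textbook pricing exercise — a hypothetical sketch, not this paper’s analysis: a buyer values a good at 1.0, the seller’s reservation price r is uniform on [0, 1], and the buyer makes a take-it-or-leave-it offer.

```python
# One-shot "take it or leave it" pricing under asymmetric information.
# Buyer's value is 1.0; the seller accepts any offer >= r, with r uniform
# on [0, 1]. Textbook illustration only, not the paper's model.

def uninformed_expected_surplus(price):
    """Expected buyer surplus knowing only the distribution of r:
    (1 - price) * P(accept) = (1 - price) * price."""
    return (1 - price) * price

# Grid-search the uninformed buyer's best single offer.
grid = [i / 1000 for i in range(1001)]
best_price = max(grid, key=uninformed_expected_surplus)

informed_surplus = 0.5  # E[1 - r]: offering exactly r, every trade happens
uninformed_surplus = uninformed_expected_surplus(best_price)

print(best_price, uninformed_surplus)  # -> 0.5 0.25
# The uninformed buyer's best offer loses every trade with r > 0.5:
# half of the mutually beneficial bargains are never struck.
```

This is the inefficiency claimed earlier: incomplete information does not merely redistribute surplus, it destroys agreements that would have benefited both sides.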

18. In identifying this decision-making “space”, we recognize the importance of determining how to increase domestic market sustainability, while at the same time evaluating what
should be traded beyond the domestic market and at what value.
19. For an interesting discussion of these two themes and their intersection, see Ciuriak D. (2018) “Digital Trade: Is data treaty-ready?” CIGI Papers No.162.
https://www.cigionline.org/sites/default/files/documents/Paper%20no.162web.pdf
20. In talking of trading arrangements in terms of state-to-state dialogue, we are, for the purposes of this research, simplifying the trading demographics wherein private sector players
may be as significant or more so when vulnerable stakeholders in trade negotiations expose their domestic markets and resources to the interests of external multi-national traders.
This paper was settled prior to the impact of the COVID-19 pandemic on global economic relations and as such cannot take these influences into account for this analysis.
21. We recognize that these quality-control problems are exacerbated the bigger and more interconnected are the datasets.


Once information has been identified, its sources need to be understood, and the prudential pathways through which it has passed, if relevance and reliability are to be measured.21 The quality of information matters in terms of its decision-making value, and information offers diminishing decision-making returns as that quality is less open to testing and verification. A small amount of high-quality information is likely more useful than an abundance of low-quality information. In that regard, access and analysis must be accompanied by easy methods for data evaluation against simple matrices. An example of the variables to be considered would be (where visible) completeness, timeliness, uniqueness, accuracy, validity, and consistency (IT Pro team, 2020).

Next comes the issue of information overload. Unleashing masses of information, high quality or not, will swamp vulnerable users without the capacity to process it. The other side of this problem is where AI and data analysis technologies can respond for social good.

Is this assertion confirmed by the literature? The studies associated with improved bargaining power as a consequence of greater information analysis are heavily concentrated on labor mobilization.22 Analogies are usefully drawn from this literature insofar as it has a distinct interest in negotiations, bargaining, and decision-making modelling.

It is not novel to suggest that AI technologies can enable better trading outcomes for vulnerable economies. The United Nations Conference on Trade and Development (UNCTAD) recently introduced a new AI tool to speed up trading negotiations by simplifying complexity. As part of the Intelligent Tech and Trading Initiative23, UNCTAD and the International Chamber of Commerce have produced a prototype of what they call the Cognitive Trade Advisor.

“Developing countries and least developed countries have limited resources to prepare for trade negotiations,” said Pamela Coke-Hamilton, Director of International Trade and Commodities of UNCTAD.

“The amount of information that negotiators and their teams need to process is proliferating, and often they need the information on a timely and rapid basis,” she said. The Cognitive Trade Advisor uses an understanding of natural language to provide cognitive solutions to improve the way delegates prepare for and carry out their negotiations.

“The texts of the agreements are getting longer and longer,” Ms Coke-Hamilton said. “In the 1950s, an average trade agreement was around 5,000 words long. In the current decade, this has increased to more than 50,000 words. Dealing with such amounts of information takes a lot of time” (UNCTAD, 2018).

Interesting as this development might be, our policy frame has a more restricted but no less impactful intention. As mentioned earlier, we are not touching on preferential trading arrangements, or the understanding of their complex documentation.24 Instead, our remit is more contained and, as such, attainable without new technologies. The direction of the policy to follow is the employment of presently available AI technologies for accessing and analyzing information that can better position vulnerable negotiators by reducing crucial information deficits. The UNCTAD initiative is to develop new AI tools in order to make the attainment of the Sustainable Development Goals more likely in under-developed regions. This paper shares the desire to see AI supporting progress to these goals by reducing negotiating inequalities. On the way to achieving this aim, the poverty in AI experience with currently available technologies in vulnerable markets and societies will hamper developments towards these goals even before new, affordable, user-friendly, and sustainable technologies are more readily available.
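Earlier in this section, evaluating data “against simple matrices” was proposed, with completeness, timeliness, uniqueness, accuracy, validity, and consistency as candidate variables. A minimal sketch of two of those scores over invented trade records — the field names, records, and 0-1 scoring convention are all assumptions for illustration:

```python
# Score a small tabular dataset on two quality dimensions discussed above:
# completeness (share of non-missing cells) and uniqueness (share of
# distinct record keys). Records and field names are invented.

def completeness(rows, fields):
    cells = [row.get(f) for row in rows for f in fields]
    return sum(v is not None for v in cells) / len(cells)

def uniqueness(rows, key_field):
    keys = [row.get(key_field) for row in rows]
    return len(set(keys)) / len(keys)

shipments = [
    {"id": "A1", "commodity": "copra", "tonnes": 120},
    {"id": "A2", "commodity": "copra", "tonnes": None},   # missing value
    {"id": "A2", "commodity": "vanilla", "tonnes": 3},    # duplicate key
]

scores = {
    "completeness": completeness(shipments, ["id", "commodity", "tonnes"]),
    "uniqueness": uniqueness(shipments, "id"),
}
print(scores)  # completeness ~0.89, uniqueness ~0.67
```

The remaining dimensions (timeliness, accuracy, validity, consistency) would need reference data or timestamps, but fit the same pattern: one score per dimension, read off a simple matrix rather than a black-box ranking.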

22. An example is https://turkopticon.ucsd.edu/


23. Information retrieved from https://itti-global.org
24. For example, see Alschner, W., Seiermann J., & Skougarevskiy, D. (2017) “Text-as-data analysis of preferential trade agreements: Mapping the PTA landscape” UNCTAD Research
Paper No. 5. https://unctad.org/en/pages/PublicationWebflyer.aspx?publicationid=1838


Employing AI-assisted technologies for information access may or may not be in itself a neutral endeavor. In advocating this progress, there needs to be sensitivity to political and cultural parameters in offering AI technologies to analyze and prioritize economic and trading decision-making. Many post-colonial vulnerable economies do not respond well to top-down capacity building from the North World, especially when North/South disempowerment is identified in these economies as the root cause of their trade problems in the first place.

It is not the intention of this paper to provide a pre-packaged menu of preferred technological options to enhance access and analysis. As the methodology section to follow sets out, “ownership” of this selection should be offered through a scoping exercise which identifies context-specific needs and solutions. Ultimately, the preferred technology should be seen by the potential user as at base beneficial and manageable within the specific dynamics of their decision-making and bargaining ecology.

In seeking to identify the types of AI-assisted information technology that would best support vulnerable economies in domestic resource economic decision-making and trade negotiations, the following factors are important selection criteria and determine how the policy suggestions in this analysis should be implemented:

• The technology needs to be affordable. Even if its purchase is subsidized, there are running and maintenance costs which will fall to the user and, as such, these need at least to be defrayed by cost-savings through improved decision-making outcomes and bargain positioning.
• It must be user-friendly and explicable so that institutional, cultural, or administrative resistance to new technologies, or suspicions about the hidden agendas they might translate from donors, can be overcome.
• The technology must be robust and resilient. The anticipated user population will not be sufficiently resourced with sophisticated tech support to manage frequent and constant hardware and software upgrading.
• It should be capable of timely employment in the various vital stages of decision-making and bargaining.
• It must have rapid analytical capacities.
• Its operational language must be in sync with the language of the bargainers and decision-makers.
• On the basis of the information it accesses and analyses, it should provide cognitive solutions from which the participants can draw informed choices.

On the nature of information absent for access by policymakers looking at trade and domestic resource market sustainability from the perspective of vulnerable emerging economies, the imperial influence of platform distributors over raw data is an important reflection in the empowerment equation.

The commercialization and monetarized analysis of raw data through the big platforms presents a significant challenge when approaching the issue of more open access as an empowerment policy (UNCTAD, 2019). Accepting that there will always be sensitive metadata driving information technologies, and linking through even simple keyword searching to an array of mediations over raw data for commercial purposes, the present project cannot neutralize this phenomenon, but it can flag it as a further level of potential disempowerment and seek transparency and explainability of data sourcing and technological translation in a language that the end user can appreciate and take into account when relying on information.

It is with this caution in mind that the project methodology is advanced.
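The selection criteria above translate naturally into a weighted decision matrix once a scoping exercise has supplied weights and candidate options. In this sketch the weights, option names, and 0-5 scores are invented placeholders, not recommendations:

```python
# Weighted scoring of candidate AI tools against the selection criteria
# listed above. All weights, options, and scores are illustrative; real
# values would come from the scoping focus groups.

CRITERIA = {            # criterion: weight (weights sum to 1.0)
    "affordability": 0.25,
    "usability":     0.20,
    "robustness":    0.20,
    "timeliness":    0.10,
    "speed":         0.10,
    "language_fit":  0.15,
}

def weighted_score(scores):
    """Combine 0-5 criterion scores into one weighted value."""
    return sum(CRITERIA[c] * s for c, s in scores.items())

options = {
    "hosted_nlp_service": {"affordability": 2, "usability": 4, "robustness": 4,
                           "timeliness": 4, "speed": 5, "language_fit": 2},
    "offline_toolkit":    {"affordability": 4, "usability": 3, "robustness": 5,
                           "timeliness": 3, "speed": 3, "language_fit": 4},
}

ranked = sorted(options, key=lambda o: weighted_score(options[o]), reverse=True)
print(ranked)  # -> ['offline_toolkit', 'hosted_nlp_service']
```

Making the weights explicit is itself part of the “ownership” argument above: users can see, contest, and re-weight the criteria rather than inherit a donor’s hidden ranking.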


Part II
Methodology

The project’s methodology involves a pilot stage, the results of which are summarized in the conclusion of this section. Having satisfied ourselves that the focus group methodology is appropriate to test the analytical underpinnings, the project-proper methodology is described for later implementation.

The methodology has two clear underpinnings. First, to adopt a top-down approach to empowerment, with stakeholders already distrustful of the motivations which may underlie the actions of parties who in the past have been seen as complicit in the disempowerment reality, would endanger the sustainability of the support provided. Aligned with this is the second concern: that both the research and the policy outcomes it supports should form stages in the empowerment process.

Therefore, the initial context for designing the first component of a research plan is to appreciate the nature of decision-making vulnerability that trading policy will need to address and that sustainability evaluations will need to constantly monitor. Vulnerability is not to be viewed only in terms of power imbalance, or as a substitute for terms such as “weakness”, “disadvantage”, and “discrimination” (Fineman, 2019).

Applying this individualist conceptualization of vulnerability to economies, markets, and societies, we can imagine a research method that appreciates the forces which create and maintain vulnerability, and provides a voice to the disempowered that resultant policy is designed to enable. In particular, and working from our earlier review of bargaining theory, the research should test whether decision-makers from vulnerable economies realize information deficit in terms of decision-making need, can articulate the sources and substance of information that would be useful to them, and from there speculate on how such information needs to be analyzed, validated, and sustained. Once this “needs analysis” has been trialed, it then becomes the task for information technologists, with an understanding of information disadvantage and its decision-making context, to suggest AI-assisted information options that could empower sustainable decision-making.

The research design in its post-pilot phase has three components:

Participant Focus Groups
In the format of a facilitated focus group, a series of hypotheticals designed to provoke situations of vulnerability in economic decision-making and trade bargaining will be put to a meeting of negotiators and policymakers who have experienced disempowerment through information deficit. Recognizing the risk of “digital imperialism” in designing research experience from an external AI-focused context, these hypotheticals will have been previously discussed, critiqued, and settled by a small working party drawn from scholars, negotiators, and policy people with a familiarity with South World economic disempowerment. In particular, the advice on drafting the hypotheticals will be taken in the first instance from experts in mediation and negotiation with hands-on experience of South World decision-making styles. Added to this will be the impressions from policymakers and negotiators working in South World trade and economic environments.


a) Policymakers Focus Group, Scoping Exercise
Participants in this focus group will be drawn from five nominated economies with currently unsustainable domestic resources, limited trading advantage, and who, it might be argued, have suffered as a consequence of North/South World market liberalization.25 The output from this focus group would be a clear understanding of the information needs of participants, the situations in which the absence of specific information on which to rest decisions or make bargains is deemed to disempower, suggestions concerning what information access needs to be prioritized, and the types of cognitive options that participants would think helpful and why.

b) Expert Focus Group, Solution Exercise
Armed with the information disadvantages identified in the first focus group, experts in the fields of AI technology, development capacity building, trade negotiation and economic decision-making, mediation, and multi-participant stakeholder policy work would be charged to apply particular AI technologies to the problems represented in the first set of hypotheticals. In addition, members of focus group 2 will have access to a structured transcript of the discussions emerging out of focus group 1. Using the same hypotheticals across both focus groups will offer some qualitative consistency and comparability. Participants in the first focus group could be invited to attend and observe these discussions. The output from this focus group would be the preparation of a set of AI technology options nominated against the particular information deficits identified by the first focus group. In preparing and costing these options, participants would be asked to reflect on the list of selection criteria described above.

c) Implementation Focus Group, Strategic Policy Exercise
The final focus group would involve academic experts in vulnerability and social justice, as well as negotiation/mediation, policy regulation, and social development, similar to those drawn together to formulate and test the hypotheticals. The Emory/Leeds Vulnerability Initiative and scholars with interests in law and development, negotiation and mediation, and information systems would facilitate a policy forum designed to produce a workable policy agenda for information empowerment and market sustainability in the five nominated vulnerable economies. Additionally, experts from global information and communication organizations with abilities to fund a pilot scheme, representatives from ESCAP with responsibilities for promoting the UN Sustainable Development Goals, and interested participants from the previous two focus groups would add to the policymaking dynamics. The policy yield from this workshop would be to roll out a pilot program that would enable an empirical evaluation of the impact of AI technology capacity building on the achievement of better trade bargaining benefits and sustainable economic decisions regarding the safeguarding of domestic resources in vulnerable economies.

“Shadowing” Pilot Focus Group Method-validation Exercise26

“Shadowing” is a style of simulation where the survey population is brought together (usually at a pilot stage) to represent the intended actual survey population for the purposes of testing whether the research methodology is promising and potentially reliable. Shadow methodology is where the survey population is asked to assume the roles and responsibilities of an actual population and, where possible, to follow the progress of that population as it performs a particular decision task. This method has a history in jury research in the US.

For the purposes of piloting, a combination simulation/shadow methodology was applied through two focus groups. The first identified information deficiencies in trade bargaining and domestic resource sustainability among trade and development policy personnel. The

25. APRU member institutions and their affiliates will be helpful in identifying and facilitating participants.
26. Due to pressures of time and limited resources, the pilot was not able to target policy makers in vulnerable economies (focus group 1). However, it was possible to engage with
young AI technical experts in focus group 2. The implementation focus group 3 was not necessary at the pilot stage.

AI Technologies, Information Capacity, and Sustainable South World Trading

second group, with the benefit of such identification, then offered AI-assisted technology options. Hypotheticals enabled information and decision-making simulation to be experienced and monitored, and assembling a group of participants who were instructed “in-character” provided the “shadow” survey population with the capacity to identify information disadvantage prompted by the hypotheticals.

For both groups, a small number of participants with similar social demographics27 were drawn together.28 The first group was asked to take on the character of a trade negotiator, or a trade and development policy officer, in a nominated emerging economy.29 Along with the identified role and jurisdiction, each participant was assigned a particular strategic concern in performing their function. That concern was connected back to limitations in the information base, or information deficits affecting the potential of each player to make knowledgeable trade policy or bargaining decisions, and determinations about domestic resource market sustainability.

From that perspective, and prior to focus group 1, each participant was encouraged to research their role with limited direction from the focus group administrators. The reason that this research stage is unstructured relates to the expectation that members of the actual population (trade policymakers, trade bargainers, and domestic resource decision-makers) will possess different levels of knowledge and experience depending on personal, professional, and information-centered variables, as well as differing degrees of self-reflection.30 The only direction given for this independent, individual research phase was the necessity to focus on what information sources, technologies, and analytical capacities exist in the nominated jurisdiction. Even the actual population would be differentially challenged to know and identify what information is missing and what is needed, no doubt due to varied personal experience and confronting variables (structural and functional) that are sometimes difficult to enunciate.

The second focus group was drawn from participants with special skills in AI-assisted information technology options. This group, while not specifically researching the information disadvantages existing in the five vulnerable economies, were required to reflect on these in a general sense and were assisted by the transcript from the first focus group, as a discussion and reflection resource.

The hypotheticals designed for group 1 to elicit information deficit and information need contextualized knowledgeable decision-making and empowered bargaining/resource-retention determinations in three specific directions: natural resources trading, consortium-sponsored foreign direct investment, and cash cropping diversification and regional security (see Appendix 1).31 With the identification of information/analysis need, focus group 2 was asked to suggest and design practical options from available and affordable AI-assisted technologies directed to trade bargaining and the trade/domestic resource balance. Sustainability for these options is a priority.

Simulation/shadow survey populations are a compromise at the pilot stage but, with the participants applying sufficient dedication and immersion to their role-play, the discussions unfolded as a useful test-pad for whether this method should be applied in the more resource-demanding environment of actual survey populations. In particular, we wanted to explore whether participants can see the issues with what should be available to them, what they do not know, what is hidden, what leads to what in any information chain, and, once more information is available, how it can empower the decision-making/analytical challenge. It proved possible to elicit responses along these lines in the context of the hypotheticals (see General Findings). It also emerged, in moving from group 1 to group 2, that information deficit, once identified, could be met by technological enhancement, leading to more empowered bargaining/decision-making capacity and outcomes.

27. A group of young, tertiary educated men and women with varied knowledge of the essential population experience, but briefed to take on a character within a defined context.
28. Once the participants for focus groups 1 and 2 were identified, they were separately briefed as to the purpose of the shadow simulation and were assigned characters and tasks to
research and adopt.
29. The economies selected were Papua New Guinea, the Philippines, Vietnam, Cambodia, and Myanmar.
30. The need for self-reflection is a central tension in the exercise of any focus group method.
31. The testing of hypothetical utility would be another feature of the focus group experience and ownership.

Institutional and technological design development through use cases based discussion

Focus Group 1 – General Findings

Starting out with the MNC/natural resource trading scenario, the initial information need centered on sufficient knowledge about the bargaining partner and the possibility of developing a relationship of trust. In addition to what might be found on the commercial public record, it was suggested to use already existing public and private sector trading networks, and to explore previous case-study instances of the operation of the MNC in the region under similar trading conditions. Reservations were expressed about asking the MNC directly, based on different interpretations of power imbalance.

Next, it was deemed necessary to identify major decision sites and bargaining points in the commercial supply chain if the deal progressed. It was noted that some information along the chain might be protected as commercial knowledge. Mention was made about information access costs (material and representational) in contacting third parties and seeking commercial data. Would there be available historical aggregated data on harvesting, processing, marketing, and consumption, and where and how could it be accessed? There seemed limited possibilities across other government and private sector agencies in each economy, as the natural resource in question was yet to be commercially exploited. International organizations may have relevant data, but because each economy was not already linked to the international standardization networks for this resource, this information might not be easy to access.

Recognizing that this bargain had to be considered against competitive offers (or even exploitation by the state itself), how could other potential investors be approached without damaging the confidence of the deal on the table? What information would be necessary to identify markets for the natural resource, possible market prices, and features of alternative deals that should be anticipated?

Much of this information could come from the MNC itself, but commercial confidentiality may limit this as a source. In any case, information from a trade bargainer would require third-party validation. How might this be achieved?

Several participants wanted to ensure that any such trade deal should be the first stage in a commercially sustainable arrangement. Aligned with this concern was interest in the sustainability of the natural resource, and the impacts on a pre-existing subsistence economy relying on the natural resource. Without any detailed natural resource surveys or environmental impact evaluation capacity, what were the external analysis options to fill these information deficits?

Assuming that information regarding the MNC, the supply chain, and resource sustainability is available, the group discussed other information needs that impacted on relative bargaining power. It appeared that previous experience in natural resource trading was an important variable. Furthermore, general prevailing trade policy impediments such as nationalized industrial development, institutional corruption, exchange rate interference, and weak trade positioning against major trading economies were identified.

There was discussion about necessary bargaining conditions, besides those already identified, before negotiations could be progressed. A framework for economic growth and social benefits was identified, but the necessary information on how it might be formulated was uncertain. It seemed clear that, in order to see the bargain as having long-term benefit, confidence had to be developed in the MNC’s commercial intentions; and again, that would depend on knowledge about the breadth and depth of those intentions. One participant specified the importance of tech transfer as part of the deal, and the development of feedback loops, so that ongoing information deficit would not simply exacerbate misunderstanding and mistrust.


Looking at the consortium scenario, the problem of power imbalances through compounded interests was a recurrent theme. A sense emerged that the bargaining interests of different players were more than they seemed, and how to reveal these was a central information question. Participants wanted to know more about what they were not being told. The experience of other states in dealing with consortium members was suggested as a data source, but problems with confidentiality agreements would arise. In particular, information about the bank’s standing within the international financial sector, along with more detail about the terms of the loan and the penalties for default, was required. The issue of imported labor for the construction company was not considered acceptable because it was not explained beyond skills, and if local labor was not involved there would be no instructional benefit through the exercise. The immigration law implications would lead to a need for “whole of government” information sharing.

The precluded options for power development provoked a need for much more data about the proposed nuclear option, as well as its risks and benefits. Additionally, the rejection of solar options would not be acceptable without some comparative market/environmental analysis.

Particularly when it came to the push for 5G technology, participants felt totally disadvantaged by the knowledge deficit concerning the technology and the implications of coincidental obsolescence. As there was no indication by the consortium of the sustainability of this new technology following its introduction (data about which only the consortium could furnish), participants had no position from which to evaluate cost/benefit. Local business concerns needed development so that they could be put to the consortium proposer for its response, which then would require external evaluation.

When invited to dissect the consortium offer, the fear was expressed that to cherry-pick might mean the loss of desperately needed foreign direct investment. Without environmental impact evaluation for the medium and long term, it was difficult to assess whether the costs attendant on the FDI would outweigh the boost to foreign capital, particularly in the absence of clear capacity building concessions.

The final hypothetical, canvassing cash crop diversification, also presented regional relations issues. The question of crop security was not addressed and needed to be. However, as with most of the information deficit pertaining to this proposal, there would be a disempowering reliance on data sourced from the other bargaining party. This situation emphasized a perennial concern about data validation.

As the arrangement could degenerate into little more than the participant states providing the “farm” for all the offshore commercial benefits, there needed to be information on plans for sustainability, and benefits for the domestic economy. This scenario presented tensions between macro and micro policy desires (diversified cash cropping vs. domestic security and reputational issues), and information was needed in the form of projections on the wider socio-political consequences going forward. Special mention was made about the importance of labor-force benefits, not just through the proposed (but unspecified) R&D injection, but more generally regarding associated agricultural labor transition and mobility. For instance, what would be the concentration on planting/harvesting technology? Participants felt empowered at least to require a detailed business plan from the proposer. Worries about the development of a parallel underground economy and the exacerbation of already-existing drug problems required thinking through.


Focus Group 2 – General Observations

At the outset, there was general comment that reflected the concerns of those in the first focus group, without then often moving to advance specific information technology solutions. This reluctance could have been a consequence of insufficient clarity that we were not looking just to throw tech at any information deficit. In addition, it reflected the group’s belief that data, and associated information technology, gain their relevance from the questions first asked about need.

The most significant takeaway for the facilitator was the need for a two-pronged approach to information disempowerment, which marries mundane data collection and access devices/routines with capacity enhancement among those who will apply the information to the decision-task. This does not come, originate, or exist as a generalized application; instead it requires purposeful design, modification, and infrastructure support, which we did not get to specify in every hypothetical area of need identified in the first focus group. The main impression for the rapporteur was the regular reference to needing to know what the data problem was that required a data resolution: meaning that both the initial need, and whatever data is collected to satisfy it, should be clearly specified. Obviously, these observations return to a knowledge gap issue and the requirement for capacity building rather than information tech on its own.

An important qualification about the information empowerment thesis is its present over-emphasis, in the current project design, on state capacity building. Participants mentioned the not-uncommon situation where a state can use information enhancement for purposes which may advance economic interests at social cost. There was also discussion of the need to ensure information empowerment for the private sector, where trade bargaining and resource retention are matters shared between the state and commerce. Finally, in order that the use of data for trading and resource retention decision-making is for social good, any information enhancement project should not leave out civil society, if civil society is to have the capacity to keep the other two market players accountable.

Following on from identifying need and sourcing data, discussions included validation and evaluation approaches. With diversity in sources of data, how does one deal with bias? Questions were raised about maintaining the currency and value of data. Original difficulties with knowing what questions to ask might translate into not knowing in what format to employ, store, and order data, or even what the data can accomplish. Added to these are problems of granularity, and the potentially high costs of storage and analysis systems. Connected were worries about giving more data back to companies through information loops and thereby entrenching the information asymmetries in bargaining relativity even further.

On building tech capacity domestically, the data market in the hypotheticals is now situated around identifiable information management needs, so perhaps we are moving into a world where start-ups can be generated without too much capacity required, and these innovators could contribute home-grown information enhancement technologies. How hard would that be? Is it possible to seed something like this? The simplest sustainable solution for information enhancement is to raise capacity within these vulnerable economies to create purpose-designed tech solutions.

On the standardization of data collection vs. having a problem to resolve and then standardizing data afterwards, some participants emphasized “big is best”: the more data you have, the better the standardization will be, as well as its application to progressive information needs. A basic observation was made about the utility of producing mundane data from documenting various stages of the supply chain/trade decision-making/auditing processes. Being involved in data production internal to the decision process enables participants to feel that they own that data and understand it better.

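Some of the validation questions raised in focus group 2 (bias from one source dominating a pooled dataset, stale records, missing values) can be made concrete with simple, auditable indicators that a negotiating team could compute before trusting any supplied dataset. The following Python sketch is illustrative only: the record fields, the one-year freshness threshold, and the fish-catch example are assumptions for demonstration, not artifacts of the pilot.

```python
from collections import Counter
from datetime import date

def quality_report(records, today, max_age_days=365):
    """Simple, auditable quality indicators for a pooled dataset.

    Each record is assumed to carry three fields:
      'source'    -- who collected the observation
      'collected' -- date of collection
      'value'     -- the observation itself (None if missing)
    """
    n = len(records)
    by_source = Counter(r["source"] for r in records)
    stale = sum(1 for r in records
                if (today - r["collected"]).days > max_age_days)
    missing = sum(1 for r in records if r["value"] is None)
    return {
        "records": n,
        "sources": len(by_source),
        # Share held by the single largest source: a crude warning
        # sign for source-concentration bias.
        "dominant_source_share": round(max(by_source.values()) / n, 2),
        "stale_fraction": round(stale / n, 2),
        "missing_fraction": round(missing / n, 2),
    }

# Hypothetical pool: catch reports supplied by an MNC and a local co-op.
pool = [
    {"source": "MNC",  "collected": date(2020, 1, 10), "value": 14.2},
    {"source": "MNC",  "collected": date(2020, 2, 11), "value": 15.0},
    {"source": "MNC",  "collected": date(2018, 3, 1),  "value": 13.1},
    {"source": "coop", "collected": date(2020, 2, 20), "value": None},
]
print(quality_report(pool, today=date(2020, 6, 1)))
```

Indicators like `dominant_source_share` do not remove bias, but they give a team a cheap, transparent first check on whether data supplied by a bargaining partner should be used without third-party validation.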

Policy Reflections from the Focus Group Deliberations

Information asymmetries on which the project is based:

• Relationship of the trade bargainer with external partners
– Necessary to have knowledge about possible trading partners consequent on any trade negotiation, and to build contacts with such partners
– Knowledge about external companies that are offering trade relationships

• Domestic market information gaps
– Knowledge about demographics of certain markets: e.g., different fishing practices, how fish stocks may be implicated by trade negotiations
– Knowledge of existing needs of businesses and commercial relationships, with or without a trade bargain

• Knowledge gaps in technology
– Emerging technology: target vulnerable economies may be hampered by limited information about these technologies, including about servicing and maintenance in the long run. In turn, this traps technology recipients into a long-run relationship with an external organization, which might compromise the sustainability of the suggested tech aid and increase information dependencies

Capacity building considerations regarding asymmetries and dependencies:

• Capacity building to address knowledge gaps in technology and to enable maintenance of technology in the long run, or to shift away from an over-reliance on single service providers
– Work to address dependencies concerning data sources, data integrity, and the accountability of tech development. If people do not know what kinds of questions to ask, it will have consequences for data collection, cleaning, processing, and the AI products chosen and employed. In addition, haphazard or careless data collection may entrench information asymmetries with external data collectors even further and lead to greater inequalities
– A working knowledge of technology would aid clarification of when not to use technology
• For trade negotiators (and the wider associated organizations) working in targeted vulnerable economies that still have limited digitalization and technological capacity, consider steps that would make the collection of mundane data more efficient (and not technology dependent) in the near or mid-term.

Before the injection of AI-assisted information technology

• At the initiation of the project, an intensive needs analysis must be commenced, grounded in developing skills around what questions to ask about information deficit; these will then translate into learning about what format to store and order data in, and what data can accomplish in trading negotiations and domestic market sustainability.
• Capacity building within the target vulnerable economies will help the identification of major decision sites and bargaining points in the entire supply chain, so that negotiators will see where information deficit needs to be addressed.
• International organizations can assist in capacity building, as they do not have commitments to either side of any trade bargain. However, due to the lack of relationships between the target vulnerable economy and IOs (consequent on the absence of commercial trading markets on which they may have advised, as well as failure by the target economy in the past to implement international standards), these relationships may need to be project-specific.
• Associated with assistance from international organizations, the target vulnerable economy needs to have access to knowledge in the public domain about natural resources and the demographics of different harvesting practices, and how the relative sustainability of natural resource stocks is impacted


by trading and domestic market decisions. This information access could be provided through aid agencies’ connections with national scientific repositories and regional databases.
• Target economies must be trained in all areas of government information retention, usage, and exchange, rather than operate with information locked in certain ministries.
• While information technologies are a priority for advanced consumerist economies, this is not the experience in target vulnerable economies. Therefore, prior to the roll-out of such technologies, sponsors and providers should supplement limited local information about servicing a technology, and about the dangers inherent in locking into a provider/client relationship in the long run.
• Along with technological “needs and potential” training, target vulnerable economies and countries in their region will have limited basic market information with which to do cost-benefit analysis. However, these economies can supplement this information, if provided by aid agencies and IOs, with essential local knowledge of the social/political contexts in which information is best contextualized.
• Through any phase of externally supported capacity building, there is a need to ensure civil society remains in the loop, to understand business needs on the ground.

At the introduction of AI-assisted information technology, and following

• Product sustainability is essential and takes certain crucial forms that must be ensured: data sources – who is collecting data and originally for what purposes; data integrity and validation – how is information to be accredited and verified on an ongoing basis?; accountability – ensuring that civil society is informed about the type of information that is being collected and provided to governments (particularly important when local farms and fisheries are part of the data production chain); technical sustainability of a technical product – who maintains it? These issues require allied services from sponsors, providers, advisers, and locally trained experts.
• Mission creep: if we want to avoid the monetization of technical applications, developers need a clear and disciplined purpose which is struck in agreement with the local end users.
• At the time of introduction, there should be stimulated public debate about intentions around information access and use. Civil society will then be involved in a holistic and integrated approach to data empowerment.
• As a condition of the technology contract, home-grown sector development and in-country training in technology development must be offered. This could be coordinated and stimulated by a developer-centric branch of government.
• Recognize the importance, and resource the extension, of internet penetration into social networks within the economy, particularly those that provide rich sources of resource and market data.
• Recognize the necessity for introduced technology to be affordable, maintainable, and anti-obsolescent.
• Make efforts at standardization in the collection of data from multiple sources, which may then enable more actions on the data to be taken in the analysis phase. This endeavor will include leveraging existing collection methods active and accessible prior to the information roll-out, and the identification of mundane data built into the consciousness of people who are currently promoting trade and measuring markets (internal and external to the economy).

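The sustainability checklist in the recommendations above (who collects the data, for what original purpose, and whether it has been validated) can be carried directly into a standardized record format, so that data arriving from multiple sources keeps its provenance attached as it moves between ministries, negotiators, and civil society. The Python sketch below is a hypothetical illustration; the schema, field names, and the ministry adapter are assumptions, not part of the report’s design.

```python
from dataclasses import dataclass, asdict

@dataclass
class StandardRecord:
    """Shared schema for observations arriving from multiple sources.

    The provenance fields mirror the sustainability checklist: who
    collects the data, for what original purpose, and whether it has
    been independently validated. All names here are illustrative.
    """
    indicator: str   # e.g. "fish_catch_tonnes"
    period: str      # reporting period, e.g. "2020-Q1"
    value: float
    unit: str
    collector: str   # data sources: who is collecting
    purpose: str     # ...and originally for what purpose
    validated: bool  # data integrity: third-party verified yet?

def from_ministry_row(row):
    """Adapter for one hypothetical source: a fisheries ministry
    spreadsheet reporting catches in kilograms per quarter."""
    return StandardRecord(
        indicator="fish_catch_tonnes",
        period=row["quarter"],
        value=row["catch_kg"] / 1000.0,  # normalize kg -> tonnes
        unit="t",
        collector="fisheries_ministry",
        purpose="quota_monitoring",
        validated=False,  # flip only after third-party validation
    )

rec = from_ministry_row({"quarter": "2020-Q1", "catch_kg": 8500})
print(asdict(rec))
```

One adapter per source keeps local conventions (units, field names) at the edge, while everything downstream (analysis, auditing, civil-society review) works against the one shared, provenance-bearing schema.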

Concluding Reflections

The rhetoric of what AI can and cannot do continues to be shrouded in mysticism, technological elitism, post-colonial reticence, and just the type of knowledge/power differentials that this project set out to address (de Saint Laurent, 2018). Along with confusion in language and application, AI-assisted decision-making technologies largely remain the province of powerful economies and as such increase their advantages in trade bargaining. In this paper, we have laid out a set of selection criteria for identifying AI-enabled technologies applicable to assist and restore some balance to trade negotiations by opening up pathways of information and analysis. The criteria for selecting technologies lead on to broader proposals for information access and analysis that will need to be thoroughly contextualized for each vulnerable economy eventually selected for the real-world project. The method we have piloted has proved useful for enabling the articulation of known unknown factors influencing the relationship between information access and trading power, and these in turn will better enable sustainable trade negotiations (through power dispersal and sharper market discrimination with more information). Another key contribution this multi-disciplinary method offers is the identification of pre-existing unknown knowns (such as alternative sites for data access and/or management, and contextual variables which impact information availability and access, irrespective of AI-assisted technologies).

Acknowledging that AI-assisted information technology access alone will not level the trade bargaining horizon or open up understandings of domestic market sustainability, the scoping and solution exercises suggested some essential pre-conditions: (1) participants in the first group were advantaged as they were familiar with, or had a working knowledge of, current ML technologies; and (2) technical experts in the second had a similar working knowledge of trade bargaining theory, so as to prevent technological solutionism that ignored important social, political, and economic contextual variables influencing capacities to seek out and understand information asymmetries. Some working technical knowledge, connected with contextual sensitivities, would ensure that people from both groups are speaking, if not in the same language, then at least on common ground; and that the articulation of need, and the sustainability of technological solutions, can be as precise as possible. In addition to domestic education and training programs, international organizations have a larger role to play in addressing and reversing the knowledge deficits around technologies in trading and domestic market situations as yet deprived of AI-assisted information pathways. International organizations such as UNCTAD and/or the WTO should thus formulate education policies crafted to enable such productive forms of knowledge exchange to be initiated before the commencement of the first scoping exercise. More research can be done here to determine productive forms of such exchanges, and their trajectories. Private sector participants looking for a more resilient global trading and sustainable market future also have a role to play here, as do the large information platform providers in helping to achieve the ESCAP sustainability goals.

The potential for enhancing regional cooperation through this method, in addition to the identification of data pathways, can also serve as a route towards increasing industry standardization and state-to-state data flows, particularly where regional sustainability issues are in the trading conversation. Empowerment approaches beyond nation-state priorities are more likely to achieve scalable deployment and interoperability across countries, and can be significantly aided by international coordination bodies such as UNCTAD and/or the WTO working together with standard-setting bodies such as the IEEE.32 At this policy level, trading benefit will be viewed as more than only a national concern. Regional approaches to information empowerment and technological capacity building are a realistic recognition that the information which may assist vulnerable economies often knows no jurisdictional boundaries.

32. For example, the IEEE’s Data Trading System Initiative. https://standards.ieee.org/industry-connections/datatradingsystem.html


References

de Saint Laurent, C. (2018). In Defence of Machine Learning: Debunking the Myths of Artificial Intelligence. Europe’s Journal of Psychology, 14(4), 734-737.

Dikowitz, S. (2014). World Peace Through World Trade. Retrieved from Hinrich Foundation: https://hinrichfoundation.com/blog/global-trade-world-peace-through-world-trade/

Findlay, M. (2017). Law’s Regulatory Relevance? Property, Power and Market Economies.

Fineman, M. A. (2019). Vulnerability and Social Justice. Valparaiso University Law Review, 53.

Grcic, J. (2007). Hobbes and Rawls on Political Power. Ethics & Politics. Retrieved from https://core.ac.uk/download/pdf/41174053.pdf

Grossman, W. L. (1935). The Legal Philosophy of Roscoe Pound. Yale Law Journal, 605-618.

Hardt, M., & Negri, A. (2001). Empire. Cambridge: Harvard University Press.

Humbach, J. A. (2017). Property as Prophesy: Legal Realism and the Indeterminacy of Ownership. Case Western Reserve Journal of International Law, 211-225.

IT Pro team. (2020, March 2). How to measure data quality. Retrieved from ITPro: https://www.itpro.co.uk/business-intelligence-bi/29773/how-to-measure-data-quality

Reichel, A. (2018). De-growth and Free Trade. Retrieved from https://www.andrereichel.de/2016/10/18/degrowth-and-free-trade/

Stiglitz, J. (2002a). Globalization and its Discontents. New York: Penguin, 107.

Stiglitz, J. (2002b). Globalization and its Discontents. New York: Penguin, 246.

Susarla, A. (2019). The New Digital Divide is People who Opt out of Algorithms and People who. Retrieved from The Telegraph: https://www.thetelegraph.com/news/article/The-new-digital-divide-is-between-people-who-opt-13773963.php

UNCTAD. (2015). General Assembly: Resolution adopted by the General Assembly on 19 December 2014. United Nations. Retrieved from https://unctad.org/en/PublicationsLibrary/ares69d204_en.pdf

UNCTAD. (2018, October 15). Small economies welcome AI-enabled trade tool but worries remain. Retrieved from UNCTAD: https://unctad.org/en/pages/newsdetails.aspx?OriginalVersionID=1881

UNCTAD. (2019). Digital Economy Report 2019: Value Creation and Capture: Implications for Developing Countries. United Nations. Retrieved from https://unctad.org/en/PublicationsLibrary/der2019_en.pdf

UNCTAD. (2019, July 19). Fairer trade can strike a blow against rising inequality. Retrieved from UNCTAD: https://unctad.org/en/pages/newsdetails.aspx?OriginalVersionID=2154

Xiang, A., & Raji, I. D. (2019). On the Legal Compatibility of Fairness Definitions. Retrieved from https://arxiv.org/abs/1912.00761


Appendix 1: Hypotheticals
Instructions

Remember your character and your professional location. Reflect on the facts of the following hypotheticals from the perspective of your character and what you understand to be the “knowledge capacity” of trade and sustainability decision-making in your professional location.

Read the following hypotheticals and imagine you are required to participate and to make decisions as instructed with the information provided. At each nominated decision stage, think about what additional information might be useful in making a more effective choice as the factors of the bargain/retention policy are set out.

Clearly, it is difficult to speculate on what you do not know or what is being withheld from you. In this context, common sense as well as experience are useful measures in determining how your decision/bargain would be more empowered through the information available to you. One way of approaching this is to think about the issue/problem that you are confronting, where a source of information you currently do not possess might be found, and the form that information might take.

Finally, you are not entirely unfamiliar with information technology. Even though official data, retrieval, and analysis capacity in your professional context is limited, you have a sense of what technological enhancements and information databases those in better resourced administrations and commercial arrangements can access and use to their benefit (and perhaps your detriment). Therefore, you are concerned with information deficit and what information access might enable. You are also interested in how information can be analyzed and applied to make your professional experience more efficient and sustainable.

Hypothetical 1

A large multi-national corporation has commenced discussion with your government to have access to fishing grounds in your territorial waters. Due to the tariff war between several other much larger fishing nations, the price of fish products has grown incrementally in the last economic quarter. The multi-national is also attracted to a trading arrangement with you because your national regulation of fishing practice is neither detailed nor unduly restrictive. In fact, global fishing quotas have largely had little impact on your domestic fishing practice because of its up-until-now subsistence format.

The multi-national has not divulged its intended market for the fish products it would acquire from your waters, but you have some general intelligence that Japan would be a principal third-party trader. In Japan, you are aware that the consumer appetite for one particular fish product which is abundant in your waters is high, and the prices that can be fetched seem to you extraordinary. You have no developed trade arrangement with Japan, and you have no detailed understanding of its fish product consumer markets.

The multi-national has also expressed interest in using local labor, the price of which is under-valued due to limited local employment opportunities in the sector. In preliminary meetings, the multi-national has talked of building canning factories for fish processing in two of your major ports where female unemployment is particularly high.

Institutional and technological design development through use cases based discussion

Fish are a dietary staple for many of your citizens living in coastal regions, who practice small-scale, indigenous fishing. Your fisheries and wildlife department has not done any study on the fish stocks in your territorial waters or on the impact of large-scale commercial fishing on these stocks. You do not have up-to-date information on the multi-national’s practices in the harvesting and use of natural resources. In these negotiations, you would be dealing with a subsidiary of the larger multi-national set up specifically for this trading exercise and registered in the Republic of Ireland for beneficial taxation concessions.

You have been asked:

a) To further the preliminary negotiations with the multi-national;
b) To oversee an environmental impact assessment of the proposal;
c) To draft conditions under which specific trade negotiations might be structured;
d) To address concerns from local indigenous fishing communities.

Hypothetical 2

A consortium of foreign investors has approached your government with the intention of structuring and implementing some foreign direct investment (FDI) infrastructure projects in your country. The consortium consists of a major Chinese banking group, an international construction company, a major power generator, and a telecommunications provider. The types of projects being discussed are very attractive to your under-capitalized transport and communications sector.

A condition of the foreign direct investment portfolio is that your government signs up to various loan agreements offered by the Chinese bank. As a condition of the loans, your government will agree to having any disputes arising between your state and the consortium arbitrated in China under Chinese commercial law.

The international construction company will design and build a new dam over a large natural river system. Water resources are a major concern for your country. Because of what they refer to as ‘technology considerations’, the construction company intends only to use its own imported labor.

Your state is in desperate need of power generating facilities. The major power generator in the consortium is happy to finance the construction and operation of a nuclear power plant within your territory, provided that you allow half of the power generated in that grid to be independently traded by the consortium into neighboring states. In addition, the consortium wants your government to cease discussions with other neighboring states for the shared construction of wind farms on your border.

The telecommunications provider will invest in 5G technologies throughout your state. Most of your communication capacity at present is not fully 4G compliant. There have been concerns expressed in your business community that such a rapid convergence into 5G might produce significant secondary costs through unnecessary technological obsolescence. Furthermore, talk from the telecommunications provider about linking your 5G capacity to developments in the Internet of Things (IoT) in China seems obscure and unclear.

You have been asked:

a) To further the preliminary negotiations with the consortium;
b) To oversee an economy-wide evaluation of the impact of the proposed FDI;
c) To draft conditions under which specific investment negotiations might be structured;
d) To address concerns from local businesses such as the domestic power provider, domestic telcos, and local trade unions regarding medium-term sustainability issues.


Hypothetical 3

In an effort to improve your trade imbalance, your government over recent decades has implemented an agricultural policy of transition from subsistence to cash cropping. In particular, palm oil plantations have been incentivized, and major regional companies have invested in concessions for palm oil production. A political consequence has been push-back from smaller farmers who are unable to match the economies of scale of the bigger plantations. To confront this resistance, the government has operated a subsidy system to encourage small farmers to cash crop, and to compensate for their market disadvantage.

Both the bigger producers and the small farmers employ slash-and-burn clearing techniques, which have caused air pollution with associated damage to the health of the domestic population and neighboring states.

The government is worried about its growing dependence on a single export crop, when global market vulnerability is difficult to predict. Entrepreneurs from Canada, which recently legalized the growing and use of marijuana, are in discussions with your government to invest in major hemp farms in your country for export back to Canada and California, where they say the market is expanding. Governments in your region with tough anti-drug laws have lobbied your government against the initiative. Marijuana is currently a prescribed drug in your jurisdiction, but popular opinion would be tolerant of decriminalization for medical and economic reasons.

The Canadian investors have also indicated – to improve the attractiveness of their agricultural intentions – that they would bring with them a significant research and development investment that could stimulate the growth of a generic drug industry in your country; namely, processing the medical constituents of marijuana. This industry would, they say, offer employment mobility for semi-skilled workers currently occupied in low-paid sweatshop garment-making, which is another diminishing domestic export industry here.

You have been asked:

a) To further the preliminary negotiations with the Canadian investors;
b) To oversee a comparative environmental impact assessment of the proposal relative to existing cash cropping practices;
c) To draft conditions under which investment negotiations might be structured;
d) To address concerns on the relationship between trade and regional foreign policy.

Governing Data-driven Innovation for Sustainability: Opportunities and Challenges of Regulatory Sandboxes for Smart Cities

Masaru Yarime¹
Division of Public Policy, The Hong Kong University of Science and Technology

1. I would like to thank Gleb Papyshev for his assistance in preparing this report.

Abstract

Data-driven innovation plays a crucial role in tackling sustainability issues. Governing data-
driven innovation is a critical challenge in the context of accelerating technological progress
and deepening interconnection and interdependence. AI-based innovation becomes robust
by involving the stakeholders who will interact with the technology early in development,
obtaining a deep understanding of their needs, expectations, values, and preferences,
and testing ideas and prototypes with them throughout the entire process. The approach
of regulatory sandboxes will particularly play an essential role in governing data-driven
innovation in smart cities, which inevitably faces a difficult challenge of collecting, sharing,
and using various kinds of data for innovation while addressing societal concerns about
privacy and security. How regulatory sandboxes are designed and implemented can be locally
adjusted, based on the specificities of the economic and social conditions and contexts,
to maximize the effect of learning through trial and error. Regulatory sandboxes need to
be both flexible to accommodate the uncertainties of innovation, and precise enough to
impose society’s preferences on emerging innovation, functioning as a nexus of top-down
strategic planning and bottom-up entrepreneurial initiatives. Data governance is critical
to maximizing the potential of data-driven innovation while minimizing risks to individuals
and communities. With data trusts, the organizations that collect and hold data permit
an independent institution to make decisions about who has access to data under what
conditions, how that data is used and shared and for what purposes, and who can benefit
from it. Alternatively, a data linkage platform can facilitate close coordination between the
various services provided and the data stored in a distributed manner, without maintaining
an extensive central database. The data governance systems of smart cities should be open,
transparent, and inclusive. As the provision of personal data would require the consent of
people, it needs to be clear and transparent to relevant stakeholders how decisions can be
made in procedures concerning the use of personal data for public purposes. The process
of building a consensus among residents needs to be well-integrated into the planning of
smart cities, with the methodologies and procedures for consensus-building specified and
institutionalized in an open and inclusive manner. It is also essential to respect the rights
of those residents who do not want to participate in the data governance scheme of smart
cities. As APIs play a crucial role in facilitating interoperability and data flow in smart cities,
open APIs will facilitate the efficient connection of various kinds of data and sophisticated
services. International cooperation will be critically important to develop common policy
frameworks and guidelines for facilitating open data flow while maintaining public trust
among smart cities across the globe.


Introduction
Data-driven innovation plays a crucial role in tackling sustainability challenges such as reducing air pollution, increasing energy efficiency, eliminating traffic congestion, improving public health, and maintaining resilience to accidents and natural disasters (Yarime, 2017). These multifaceted challenges, which are interconnected and interdependent in complex ways, require the effective use of various kinds of data concerning environmental, economic, social, and technological aspects that are increasingly available through sophisticated equipment and devices in smart cities. Innovation based on artificial intelligence (AI) can make the best use of these data to accelerate learning and improve performance. It is of critical importance to establish adaptive governance systems that allow experimentation and flexibility to deal with the uncertainty and unpredictability of technological change, while addressing societal concerns such as security and privacy, incorporating local contexts and conditions. Novel forms of technology governance, such as testbeds, living laboratories, and regulatory sandboxes, are required for policymakers to address the evolving nature of data-based innovation.

In this paper, we examine key opportunities and challenges in the governance of data-driven innovation in the context of smart cities. First, we discuss the major characteristics of data-driven innovation and highlight the importance of learning and adaptation through the actual use of technologies in real situations. Next, we examine the approach of regulatory sandboxes to facilitate innovation by taking previous examples of introducing them to the field of finance and other sectors with their experiences and implications. Then we consider emerging cases of applying regulatory sandboxes to stimulate novel technologies utilizing AI in cyber-physical systems such as drones, autonomous vehicles, and smart cities. Finally, we discuss critical challenges in designing and implementing regulatory sandboxes for AI-based innovation, with a particular focus on data governance. Implications are explored for data governance to promote the collection, sharing, and use of data for innovation while taking appropriate measures to address societal concerns, including safety, security, and privacy. Recommendations for policymakers are considered to facilitate the engagement of relevant stakeholders in society so that various kinds of data collected in smart cities are appropriately used to govern innovation based on AI.

Characteristics of Data-driven Innovation

The emergence of data-driven innovation based on the rapid advancement in the Internet of Things (IoT) and AI creates exciting opportunities as well as considerable challenges in promoting societal benefits while regulating the associated risks. As a vast amount of diverse kinds of data is increasingly available from various sources that were not previously accessible, a wide range of sectors are currently undergoing significant transformation. In energy, smart grid systems lower costs, integrate renewable energies, and balance loads. In transportation, dynamic congestion-charging systems adjust traffic flows and offer incentives to use park-and-ride schemes, depending upon real-time traffic levels and air quality. Car-to-car communication can manage traffic to minimize transit times and emissions, and eliminate road deaths from collisions (Curley, 2016). The speed of technological advancement is accelerating, and those technologies that used to be separate are increasingly interconnected and interdependent with one another, creating a significant degree of uncertainty in their impacts and consequences.

The process of data-driven innovation has three key components: data collection, data analysis, and decision making (Organisation for Economic Co-operation and Development, 2015a). Data-driven innovation critically depends on the efficient and effective collection, exchange, and sharing of large


amounts of high-quality data. New technologies such as drones, IoT, and satellite images can now provide vast amounts of data that were not previously available or accessible. The big data collected through various sources and channels are analyzed by applying data science. Sophisticated methodologies and tools are increasingly possible due to the recent technological advancement in AI, particularly the rapid progress in machine learning. For decision making, it is critical to integrate the findings of data analytics with the domain expertise that would be specific to the sector in which you are involved, such as energy, health, or transportation. Increasingly, cyber systems are merging with physical machines and instruments, as in manufacturing, and such cyber-physical systems are particularly important in dealing with sustainability issues in the context of smart cities.

Data-driven innovation is accelerated by deriving new and significant insights from the vast amount of data generated during the delivery of services every day. Hence training, the ability to learn from real-world use and experience, and adaptation, the capability to improve the performance, would be key to creating data-driven innovation (Food and Drug Administration, 2019). The development of cyber-physical systems such as smart cities is facilitated through the ready availability of and accessibility to data, as well as its mutual exchange and sharing with stakeholders in different sectors. Unlike the traditional model of innovation, which tends to rely on closed, well-established relationships between enterprises in a specific industry, the new mode of data-driven innovation requires open, dynamic interactions with stakeholders possessing and generating various kinds of data. Close cooperation and collaboration on data become crucial in the innovation process, from the development of novel technologies to deployment through field experimentation and legitimation in society.

There are difficult challenges to policymakers in facilitating data-driven innovation in cyber-physical systems. The speed of technological change of AI is remarkably fast, which has been particularly demonstrated in the case of image recognition (Russakovsky, Deng, Su, Krause, Satheesh, Ma, Huang, Karpathy, Khosla, Bernstein, Berg & Fei-Fei, 2015). That leads to remarkable progress in the performance of AI and, at the same time, accompanies a significant degree of uncertainty in consequences and side effects. Various kinds of technologies are increasingly interconnected and interdependent through data exchange and sharing among multiple sectors, such as energy, buildings, transportation, and health. These characteristics make it difficult to explain or understand the process of innovation and contribute to a widening gap between technological and institutional changes. It is critical to establish a proper system to govern data-driven innovation in the context of accelerating technological progress and deepening interconnection and interdependence. New policy approaches are required to stimulate data-driven innovation in cyber-physical systems by facilitating coordination and integration of emerging technologies while addressing societal concerns such as safety, security, and privacy.

As the introduction of AI systems is relatively new, our understanding of the behavior of such systems in real-life situations is still minimal. As machines powered by AI increasingly mediate our economic and social interactions, understanding the behavior of AI systems is essential to our ability to control their actions, reap their benefits, and minimize their harms (Rahwan, Cebrian, Obradovich, Bongard, Bonnefon, Breazeal, Crandall, Christakis, Couzin, Jackson, Jennings, Kamar, Kloumann, Larochelle, Lazer, McElreath, Mislove, Parkes, Pentland, Roberts, Shariff, Tenenbaum & Wellman, 2019). AI systems cannot be entirely separate from the underlying data on which they are trained or developed. Hence it is critical to understand how machine behaviors vary with altered environmental inputs, just as biological agents’ behaviors vary depending on the environments in which they exist. Our understanding of the behavior of AI-based systems can benefit from an experimental examination of human-machine interactions in real-world settings.
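The collection, analysis, and decision-making steps described above can be sketched as a minimal feedback loop. The sketch below is illustrative only: the sensor feed, the rolling-average analysis, and the charging threshold are invented for the example, loosely in the spirit of the dynamic congestion-charging systems mentioned earlier.

```python
from collections import deque
from statistics import mean

class CongestionCharger:
    """Toy collect-analyze-decide loop for a dynamic congestion charge."""

    def __init__(self, window: int = 3, threshold: float = 70.0):
        self.readings = deque(maxlen=window)  # collection: rolling sensor buffer
        self.threshold = threshold            # congestion level that triggers a charge

    def collect(self, congestion_level: float) -> None:
        """Collection step: ingest one reading (e.g., % road occupancy)."""
        self.readings.append(congestion_level)

    def analyze(self) -> float:
        """Analysis step: smooth noisy readings with a rolling average."""
        return mean(self.readings)

    def decide(self) -> str:
        """Decision step: raise the charge only when smoothed congestion is high."""
        return "raise_charge" if self.analyze() > self.threshold else "base_charge"

charger = CongestionCharger()
for level in (60.0, 80.0, 90.0):  # simulated real-time traffic readings
    charger.collect(level)
print(charger.decide())           # rolling mean is about 76.7, so "raise_charge"
```

In a real deployment each stage would be far richer (sensor fusion for collection, machine learning for analysis, policy rules for decisions), but the feedback structure of the loop is the same.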


The experience of using an AI system in clinics in Thailand for the detection of diabetic eye disease is one of the few cases that provide valuable lessons and implications (Beede, Baylor, Hersch, Iurchenko, Wilcox, Ruamviboonsuk & Vardoulakis, 2020). While deep learning algorithms promise to improve clinician workflows and patient outcomes, these gains have not been sufficiently demonstrated in real-world clinical settings. The Ministry of Health in Thailand has set a goal to screen 60% of its diabetic population for diabetic retinopathy (DR), which is caused by chronically high blood sugar that damages blood vessels in the retina. Reaching this goal, however, is a challenge due to a shortage of clinical specialists. That limits the ability to screen patients and also creates a treatment backlog for those found to have DR. Thus, nurses conduct DR screenings when patients come in for diabetes check-ups by taking photos of the retina and sending them to an ophthalmologist for review. A deep learning algorithm has been developed to provide an assessment of diabetic retinopathy, avoiding the need to wait weeks for an ophthalmologist to review the retinal images. This algorithm has been shown to have specialist-level accuracy for the detection of referable cases of diabetic retinopathy. Currently, there are no requirements for AI systems to be evaluated through observational clinical studies, nor is it common practice. That is problematic because the success of a deep learning model does not rest solely on its accuracy, but also on its ability to improve patient care.

This experience provides critical recommendations for continued product development and guidance on deploying AI in real-world scenarios (Beede, 2020). The functioning of AI systems in healthcare is affected by workflows, system transparency, and trust, as well as environmental factors such as lighting, which vary among clinics and can impact the quality of images. AI systems need to be trained to handle these situations. An AI system might conservatively determine some images having blurs or dark areas to be ungradable because they might obscure critical anatomical features required to provide a definitive result. On the other hand, the gradability of an image may vary depending on a clinician’s experience or physical set-up. Any disagreements between the AI system and the clinician can create problems. The research protocol has been subsequently revised, and now eye specialists review such ungradable images alongside the patient’s medical records, instead of automatically referring patients with ungradable images to an ophthalmologist. This helped to ensure a referral was necessary and reduced unnecessary travel, missed work, and anxiety about receiving a possible false-positive result. In addition to evaluating the performance, reliability, and clinical safety of an AI system, we also need to consider the human impacts of integrating an AI system into patient care. The AI system could empower nurses to confidently and immediately identify a positive screening, resulting in quicker referrals to an ophthalmologist.

This case highlights that, in addition to the accuracy of the algorithm itself, the interactions between end-users and their environment determine how a new system based on AI will be implemented, which cannot always be controlled through careful planning. Even


when a deep learning system performs a relatively straightforward task, for example, just analyzing retinal images, organizational or socio-environmental factors are likely to impact the performance of the system. Many environmental factors that negatively impact model performance in the real world might be reduced or eliminated by technical measures, such as through lighting adjustments and camera repairs. However, these types of modifications could be costly and even infeasible in low-resource settings, making it even more critical to engage with contextual phenomena from the start. AI-based innovation becomes robust by involving the stakeholders who will interact with the technology early in development, obtaining a deep understanding of their needs, expectations, values, and preferences, and testing ideas and prototypes with them throughout the entire process.

The findings of the actual case of implementing AI-based innovation provide useful implications for technology policy and governance. As policymakers are required to respond to technological change in real-life situations, technology governance becomes an integral part of the innovation process itself to steer emerging technologies towards better collective outcomes. Governments need to anticipate significant changes induced by autonomous vehicles, drone technologies, and widespread IoT solutions, as well as to consider their implications for public policy. AI technologies offer opportunities to improve economic efficiency and quality of life, but they also bring many uncertainties, unintended consequences, and risks. As such, this calls for more anticipatory and participatory modes of governance (OECD, 2018).

Anticipatory governance acts on a variety of inputs to manage emerging knowledge-based technologies and the missions built upon them, while such management is still possible (Guston, 2014). It requires government foresight, engagement, and reflexivity to facilitate public acceptance of new technologies, while at the same time assessing, discussing, and preparing for their intended and unintended economic and societal effects. Anticipatory approaches can help explore, consult widely on, and steer the consequences of innovation at an early stage and incorporate public values and concerns, mitigating potential backlash against technology. Traditional policy tools would not be able to deal with situations where the future direction of technological innovation cannot be determined. In contrast, new policy tools such as regulatory sandboxes emphasize the benefits of environments that facilitate learning to help understand the regulatory implications of and responses to emerging technologies. Participatory approaches can provide a wide range of stakeholders, including citizens, with adequate opportunities to appraise and shape technology pathways (OECD, 2018). These practices can help ensure that the goals, values, and concerns of society are continuously enforced in emerging technologies, and shape technological designs and trajectories without unduly constraining innovators. This will contribute to supporting efforts to promote responsible innovation, which has integrated dimensions of anticipation, reflexivity, inclusion, and responsiveness (Stilgoe, Owen & Macnaghten, 2013).
modes of governance (OECD, 2018).


The Approach of Regulatory Sandboxes


The approach of regulatory sandboxes has recently been proposed to stimulate innovation by allowing experimental trials of novel technologies and systems that cannot currently operate under the existing regulations, by specifically designating geographical areas or sectoral domains. Regulatory sandboxes provide a limited form of regulatory waiver or flexibility for firms to test new products or business models with reduced regulatory requirements, while preserving some safeguards to ensure appropriate consumer protection (Organisation for Economic Co-operation and Development, 2019). Potential benefits include facilitating greater data availability, accessibility, and usability for innovators, and reducing the time and cost of getting innovative ideas to market by reducing regulatory constraints and ambiguities (Financial Conduct Authority, 2015). The approach aims to provide a symbiotic environment for innovators to test new technologies and for regulators to understand their implications for industrial innovation and consumer protection. The aim is to help identify and better respond to regulatory breaches by enhancing flexibility and adjustment in regulations, which would be particularly relevant in highly regulated industries, such as the finance, energy, transport, and health sectors.

Regulatory sandboxes were initially introduced in the financial sector in efforts to encourage fintech by providing a regulatory safe space for innovative financial institutions and activities underpinned by technology (Zetzsche, Buckley, Barberis & Arner, 2017). While the sandbox creates an environment for businesses to test products with less risk of being punished by the regulator for non-compliance, regulators require applicants to incorporate appropriate safeguards to insulate the market from the risks of their innovative business. In early 2016, the Financial Conduct Authority (FCA) of the UK initiated a fintech regulatory sandbox to encourage innovation in the field of financial technology. The sandbox aimed to provide the conditions for businesses to test innovative products and services in a controlled environment without incurring the regulatory consequences of pilot projects (Financial Conduct Authority, 2015). A fintech supervisory sandbox was also launched by the Hong Kong Monetary Authority in September 2016, followed by other fintech sandboxes in Australia, Canada, and Singapore. The concept has also been embraced by a growing number of developing-world regulators.

There are some lessons learned from the experience of regulatory sandboxes in fintech (Financial Conduct Authority, 2017). Working closely with the FCA has allowed firms to develop their business models with consumers in mind and mitigate risks by implementing appropriate safeguards to prevent harm. A set of standard safeguards has been put in place for all sandbox tests. All firms in the sandbox are required to develop an exit plan to ensure that the test can be terminated whenever it is necessary to stop potential harm to participating consumers. The sandbox has allowed the agency to work with innovators to build appropriate consumer protection safeguards into new products and services.

The approach of regulatory sandboxes has gone beyond the field of finance and has been applied in other sectors involving cyber-physical systems, which more directly concern safety, human health, and public security. In the energy sector, the Office of Gas and Electricity Markets (Ofgem) of the UK started its Innovation Link service in February 2017 as a one-stop shop offering rapid advice on energy regulation to businesses looking to launch new products or business models (Office of Gas and Electricity Markets, 2018a). When regulatory barriers prevent launching a product or service that would benefit consumers, a regulatory sandbox can be granted to enable a trial.
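Safeguards of the kind described in these sandbox programs, such as a mandatory exit plan so a test can be stopped, plus caps on trial duration and on the number of participating consumers, can be expressed as a simple admission check. The field names and numeric limits below are illustrative assumptions, not drawn from any regulator's actual rules.

```python
from dataclasses import dataclass

@dataclass
class SandboxApplication:
    """Illustrative sandbox trial application with common safeguards."""
    has_exit_plan: bool   # test can be terminated to stop consumer harm
    trial_months: int     # requested duration of the trial
    max_consumers: int    # number of consumers participating in the trial

def admit(app: SandboxApplication,
          duration_cap: int = 6, consumer_cap: int = 500) -> list:
    """Return the list of unmet safeguards; an empty list means admissible."""
    issues = []
    if not app.has_exit_plan:
        issues.append("missing exit plan")
    if app.trial_months > duration_cap:
        issues.append("trial duration exceeds cap")
    if app.max_consumers > consumer_cap:
        issues.append("too many participating consumers")
    return issues

print(admit(SandboxApplication(True, 6, 200)))    # [] means admissible
print(admit(SandboxApplication(False, 12, 200)))  # two unmet safeguards
```

Real sandbox criteria also weigh qualitative factors (innovativeness, consumer benefit, the need for rule changes), which is why admission remains a regulator's judgment rather than a mechanical test like this one.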


The Energy Market Authority (EMA) in Singapore also launched a regulatory sandbox, in October 2017, to encourage experimentation with new products and services in the electricity and gas sectors (Energy Market Authority, 2017). EMA, as the industry regulator, assesses the impact of new products and services before deciding on the appropriate regulatory treatment. Innovators submit their ideas to EMA for testing, and a successful application allows the plan to be applied in the market while being subject to relaxed regulatory requirements. Safeguards such as limiting the duration of the trial or the maximum number of consumers can be introduced to minimize risks to consumers and industry. The evaluation criteria when applying for the regulatory sandbox include using technologies or products in an innovative way, addressing a problem or bringing benefits to consumers or the energy sector, requiring some changes to existing rules, and having assessed and mitigated foreseeable risks. The regulatory sandbox complements ongoing energy research and development (R&D) initiatives by providing a platform for R&D projects to be tested on a broader scale in the country.

The experience of introducing regulatory sandboxes to the energy sector offers a number of lessons and implications. Ofgem's officials spent time talking to innovators to understand their business and to locate and interpret the rules that affected them. Through an iterative process, they effectively worked with innovators to co-create feasible sandbox trials (Office of Gas and Electricity Markets, 2018b). It was not always clear to innovators what they could or could not do, nor always easy for them to find rules or interpret them. Hence advice from the agency helped the innovators figure out which regulations would be relevant for their technologies or services. Sometimes proposals were not allowed for trials, as some institutional requirements, including industry norms, systems, charging arrangements, codes, and licenses, became obstacles. While the sandbox was introduced to facilitate time-limited trials with the temporary relaxation of rules, most innovators would like to continue to operate after the test and to see the experience of regulatory sandboxes used to change the existing policies and regulations.

The approach of regulatory sandboxes can play an essential role in governing data-driven innovation, which inevitably faces the difficult challenge of collecting, sharing, and using various kinds of data for innovation while addressing societal concerns about privacy and security. The Information Commissioner's Office (ICO) in the UK has recently introduced a regulatory sandbox designed to support start-ups, SMEs, and large organizations across the private, public, and voluntary sectors, on the condition that they use personal data to develop products and services which are innovative and have demonstrable public benefits (Information Commissioner's Office, 2019). The regulatory sandbox enables participants to consider how they use personal data in their projects, as well as provides some comfort from enforcement action and increases public reassurance that innovative products and services are not in breach of data protection legislation. As these products and services are considered to be on the cutting edge of innovation and operating in particularly challenging areas of data protection, there is a significant degree of uncertainty about adequately complying with the relevant regulations. Participants in the regulatory sandbox can become use cases, and, subsequently, the ICO would be able to revise public guidance and provide necessary resources for compliance.

An important issue in designing and implementing regulatory sandboxes is how to manage regulatory arbitrage. Regulatory sandboxes aim to stimulate innovation by relaxing relevant regulations so that entrepreneurs can experiment with novel technologies without being constrained too much by the existing regulatory environment. This creates opportunities for regulatory arbitrage, which refers collectively to the strategies that can be used to achieve an economically equivalent outcome to a regulated

Institutional and technological design development through use cases based discussion

activity while avoiding the legal constraints (Fleischer, 2010). It is a legal planning technique used to avoid regulatory requirements such as taxes, accounting rules, and securities disclosure, and possibly other requirements such as safety and privacy rules as well. Jurisdictionally speaking, regulatory arbitrage means that a firm chooses a location where a more favorable regulatory treatment is available to its business activities (Allen, 2019). While national borders do not constrain the development and deployment of AI-based products and services, regulatory sandboxes have only been created at national or sub-national levels. This discrepancy can lead to what is known as the race to the bottom, a phenomenon in which jurisdictions compete to lower their regulatory standards in order to attract innovative companies, which could potentially result in negative consequences for consumer protection with regard to safety and privacy.

The challenges of regulatory arbitrage and the race to the bottom can be tackled if the regulators in different locations can coordinate with one another to share the information necessary to formulate appropriate policy measures and commit to agreements to apply consistently high regulatory standards (Allen, 2020). Regulators, however, have their specific policy preferences and strong incentives to keep information within individual regulatory sandboxes, rather than share it with other sandboxes in different locations. Social license and the bundling of laws and resources could work as constraining forces on regulatory arbitrage (Pollman, 2019). Aggressive regulatory arbitrage can erode social license and create a costly environment for sustainable operation, especially when social costs are widely recognized in the community. Also, as an opportunity for regulatory arbitrage would arise not in isolation but within a system of laws, and in light of other considerations such as investment capital and workforce talent, the bundling of relevant laws and regulations would leave less room available for regulatory arbitrage. If the existing laws create a regulatory environment that is prohibitive to a particular type of innovation, companies may try to focus on changing the legal environment rather than merely arbitraging regulatory differences. A complex set of factors and considerations would influence decisions about regulatory arbitrage, including transparency of information to the public and the ability of a company to mobilize its resources for regulatory change.

Moving in a more positive direction, an increasing number of enterprises actually try to advance innovative technologies by strategically engaging in regulatory arbitrage. One example is Cyberdyne, a Japanese company that developed a medical and healthcare robot, HAL (Ikeda and Iizuka, 2019). Under the Japanese product classification system, HAL could be categorized as a medical device or an assistive device, each of which would be regulated by different institutions. Although the company initially planned to commercialize the robot as a medical device with public medical insurance coverage, that required the product to comply with rigorous medical safety regulations, including clinical trials. Considering the regulatory environment, Cyberdyne first chose to commercialize HAL as an assistive device, which usually requires proof of safety certified by a third party on voluntary terms. The Robot Safety Centre, a public institution located in the Tsukuba International Strategic Zone, Tokku, supported the company in conducting the necessary testing and producing evidence for proof of safety. During this process the company was able to accumulate experience to improve the product, which was eventually certified by the Japan Quality Assurance Organization and commercialized as an assistive device.

On the other hand, Cyberdyne chose to commercialize HAL as a medical device in Germany first (Iizuka and Ikeda, 2019). From the beginning there was an expectation that it would take a long time to receive approval from the Ministry of Health, Labour and Welfare (MHLW) in Japan because there was no precedent product similar to the new robot. In Germany, in contrast, a new health device like HAL


is categorized solely as a medical device, strictly by its function and regardless of its risk levels concerning safety. As the review of medical devices is certified by a private certification body, the procedure is codified, open, and transparent, and the time required for approval of new medical devices is substantially less than in Japan. HAL has been certified as a medical device in Germany and subsequently commercialized in Europe. After that, the robot was approved by the Pharmaceuticals and Medical Devices Agency (PMDA) in Japan and commercialized with public insurance coverage.

At the same time, Cyberdyne also engaged in developing ISO standards for the safety of personal care robots, including healthcare robots (Iizuka & Ikeda, 2019). As there had not been robots like HAL before, there was no regulation in place to protect users, and international standards were considered to be crucial for establishing confidence in these products. Also, while these new standards can guarantee the company an early-mover advantage with the global recognition of its brand, they level the playing field for new entrants to the emerging industry. As Cyberdyne was already developing personal care robotics and was experimenting with prototype safety measures, the evidence created during this process became a basis for establishing ISO standards on robotics safety.

This case demonstrates the possibility that regulatory arbitrage can actually function to promote innovation. As a start-up with limited resources, Cyberdyne did not attempt to directly influence the relevant regulations. The company instead tried to cope with the regulatory obstacle by commercializing the new robot in the domestic market as an assistive device first and further developing the technology as a medical device overseas. The company also participated in setting up the institutional environment in which the new product is recognized properly. Hence regulatory arbitrage can also mean that enterprises strategically take advantage of differences in regulatory systems to develop and commercialize innovative products while contributing to establishing institutions to facilitate market creation.

Cases of Regulatory Sandboxes for AI-based Innovation

For smart city development, demonstration projects play an increasingly crucial role in testing novel technologies and raising awareness among the general public. These projects are mainly aimed at examining promising but unproven technologies concerning various aspects of cities, including energy, transportation, buildings, health, environment, and infrastructure. Existing policies and regulations, however, may not necessarily be able to properly deal with certain unexpected novel features of technologies. Hence entrepreneurs and innovators would have difficulties in conducting field testing of emerging technologies on the ground, particularly when other stakeholders, including local communities and residents, are involved. Regulatory sandboxes can relax or adjust some of the relevant regulations so that these new technologies can be tested for actual adoption and use. How regulatory sandboxes are designed and implemented can be locally adjusted, based on the specificities of the economic and social conditions and contexts, to maximize the effect of learning through trial and error. Various types of new promising technologies can be verified, adopted, and integrated, effectively improving technological performance, reliability, and integration, as well as contributing to cost reduction.

In particular, regulatory sandboxes can improve the understanding of how AI systems may react in specific contexts and satisfy human needs. As AI-based innovation involves rapid technological change, uncertain market development, and diverse social norms, there are many economic, ethical, and legal issues involving various interests and preferences. It is necessary to have a regulatory framework that is flexible enough to accommodate the uncertainties of innovation and, at the same time, clear enough to impose society's preferences on emerging innovation. This requires a specific form of governance that incorporates both elements of top-down legal framing and bottom-up empowerment of individual actors (Pagallo, Aurucci, Casanovas, Chatila, Chazerand, Dignum, Luetge, Madelin, Schafer & Valcke, 2019). Regulatory sandboxes can function as a nexus of top-down strategic planning and bottom-up entrepreneurial initiatives.

The current regulations in the fields of autonomous vehicles, drones, and medical devices show that rules on AI are significantly dependent upon the context of locations and sectors (Pagallo, Aurucci, Casanovas, Chatila, Chazerand, Dignum, Luetge, Madelin, Schafer & Valcke, 2019). In the case of the EU, for example, in addition to the rules on data protection, the testing and use of self-driving cars needs to comply with a complex legal network involving three directives and one regulation: Council Directive 85/374/EEC on the approximation of the laws, regulations, and administrative provisions of the Member States concerning liability for defective products; Directive 1999/44/EC on certain aspects of the sale of consumer goods and associated guarantees, such as repair and replacement, and price reduction and termination; Directive 2009/103/EC relating to insurance against civil liability in respect of the use of motor vehicles, and the enforcement of the obligation to insure against such liability; and Regulation 2018/858 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components, and separate technical units intended for such vehicles. The testing and use of drones requires compliance with one regulation, Regulation (EU) 2018/1139 on common rules in the field of civil aviation and establishing a European Aviation Safety Agency, and two European Commission implementing and delegated acts, Delegated Regulation 2019/945 and Implementing Regulation 2019/947, in addition to several opinions and guidelines of the European Aviation Safety Agency (EASA). Medical devices based on AI need to deal with contractual and tort liability in national regulations of the EU member states.

Given the rapid progress and unpredictable evolution of AI-based innovation, some countries have established special deregulated zones as living labs to allow testing and experimentation of new technologies in actual fields. In Japan, the National Strategic Special Zones system was introduced in 2013 to enhance economic growth by implementing regulatory reforms. So far, ten areas have been designated as special zones, and more than 60 reforms have been realized, with over 350 projects currently ongoing as a result of these regulatory reforms (Secretariat for the Promotion of Regional Development, 2019). In these special zones, regulatory exceptions have been introduced without amending the laws by taking into account specific local circumstances, and municipalities and private companies have proposed voluntary plans. Specifically targeting self-driving vehicles, in October 2017 the government introduced the National Strategic Special Zones for Level 4 Automated Vehicles Deployment Project on public roads. With the aim of establishing social and legal systems for future technological development, public road safety demonstration experiments were conducted. Based on the experience of building these special zones, the Japanese government initiated a new framework for regulatory sandboxes in March 2018, covering financial services, the healthcare industry, mobility, and transportation.

In Singapore, the Road Traffic Act was amended in February 2017 to recognize that a motor vehicle need not have a human driver. The Minister for Transport is able to create new rules on trials of autonomous vehicles, acquire the data from the trials, and set standards for autonomous vehicle designs (Taeihagh & Lim, 2019). A five-year regulatory sandbox was created to ensure that innovation is not stifled, and the government intends to enact further legislation in the future. Autonomous vehicles must pass safety assessments, robust plans for accident mitigation must be developed before road testing, and the default requirement for a human driver can be waived


once the autonomous vehicle demonstrates sufficient competency to the Land Transport Authority. After displaying higher competencies, autonomous vehicles can undergo trials on increasingly complex roads.

In 2017, the United States Federal Aviation Administration (FAA) launched the Unmanned Aircraft System (UAS) Integration Pilot Program (IPP), with fixed-term regulatory exemptions and adaptive regulations, to test the safe application of drones (Federal Aviation Administration, 2019). The program has helped the Department of Transportation and the FAA develop new rules that support more complex low-altitude operations by addressing security and privacy risks and accelerating the approval of operations that currently require special authorizations. Ten public-private partnerships have been chosen to test the use of unmanned aerial vehicles (UAVs), or drones, in potentially useful ways that are currently illegal under federal law without a waiver (Boyd, 2018). The program encouraged applicants to submit proposals for test cases that would obtain data that could be applied to broader use cases, with the understanding that the Department of Transportation and the FAA would waive certain restrictions to make these projects viable. The IPP Lead Participants are evaluating a host of operational concepts, including night operations, flights over people and beyond the pilot's line of sight, package delivery, detect-and-avoid technologies, and the reliability and security of data links between pilot and aircraft, with potential opportunities for application in commerce, photography, emergency management, agricultural support, and infrastructure inspections.

In Germany, emphasis is placed on the energy sector to encourage innovative solutions for a future energy system based on renewable energy and higher energy efficiency through digitalization. The Economic Affairs Ministry has set up a large-scale regulatory sandbox entitled Smart Energy Showcases – Digital Agenda for the Energy Transition (SINTEG). It offers temporary spaces in which solutions for technical, economic, and regulatory challenges relating to the energy transition can be developed and demonstrated (Federal Ministry for Economic Affairs and Energy, 2019). Moreover, a scheme for regulatory sandboxes has been established to test technical and non-technical innovations in real life and on an industrial scale in critical areas of the energy transition. As the smart cities project aims to test various possibilities for digitalization and ensure a good fit with sustainable and integrated urban development, the Federal Ministry of the Interior, Building, and Community has been funding the project since 2019.

For autonomous vehicles, the Federal Ministry of Transport and Digital Infrastructure (BMVI) established the Digital Motorway Test Bed to allow testing of the latest automated driving technology in a real-life setting. The Hamburg Electric Autonomous Transportation project (HEAT) investigates how fully autonomous or self-driving electric minibuses can be safely deployed to transport passengers on urban roads. Since the test vehicles are powered vehicles with highly or fully automated driving functions, the implementation of the project and registration of the cars necessitates applications, with exemptions, under the German Road Vehicles Registration and Licensing Regulations. Regulatory sandboxes can also be designed as testbeds for broad-based participation. The Baden-Württemberg Autonomous Driving Testbed is a regulatory sandbox for mobility concepts that permits companies and research establishments to test technologies and services in the field of connected and automated driving. The combination of various elements of relevance to mobility and the consortium of scientific and municipal partners creates a platform on which key insights and momentum can be gained for the ongoing development of legislation and policy for autonomous driving.
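The graduated trial regime described for Singapore's autonomous vehicles — safety assessments first, harder road environments unlocked by demonstrated competency, and the human-driver requirement waived only after that — can be sketched as a simple staged check. The stage names and rules below are our illustration, not the Land Transport Authority's actual scheme.

```python
# Hypothetical sketch: stage names and rules are illustrative only, not
# the Land Transport Authority's actual scheme. A vehicle unlocks harder
# trial environments, and a waiver of the human-driver requirement, only
# after demonstrating competency at earlier stages.

def allowed_environment(passed_stages: list) -> str:
    """Return the hardest trial environment currently permitted."""
    if "simple_roads" in passed_stages:
        return "trials on increasingly complex roads"
    if "safety_assessment" in passed_stages:
        return "trials on simple roads"
    return "no road trials permitted"

def human_driver_required(passed_stages: list) -> bool:
    """The default human-driver requirement is waived only once
    sufficient competency has been demonstrated on simple roads."""
    return "simple_roads" not in passed_stages

print(allowed_environment(["safety_assessment"]))    # → trials on simple roads
print(human_driver_required(["safety_assessment"]))  # → True
```

The point of the staging is that relaxation of rules is earned incrementally: each demonstrated competency widens what the sandbox permits, rather than granting a blanket exemption up front.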


The approach of regulatory sandboxes has been identified as an essential policy instrument for promoting responsible innovation in the national strategy for AI of Norway (Norwegian Ministry of Local Government and Modernisation, 2020). In this strategy, the concept refers to legislative amendments that allow trials within a limited geographical area or period, as well as more comprehensive measures in areas where the relevant supervisory authority needs close monitoring and supervision. The government has established regulatory sandboxes in the field of transportation in the form of legislative amendments that allow testing activities. An act to enable pilot projects on autonomous vehicles came into force in January 2018. Maritime authorities established the first test bed for autonomous vessels in 2016, and two more test beds have been approved since then. In 2019, parliament adopted a new Harbours and Fairways Act, which permits autonomous coastal shipping. Such permission allows sailing in specific fairways, subject to compulsory pilotage or in areas where no pilotage services are provided. Where pilot projects deviate from applicable laws and regulations, they can be conducted with statutory authority under special rules. Alternatively, under the Pilot Schemes in Public Administration Act, public administration can apply to the Ministry of Local Government and Modernisation to deviate from laws and regulations to test new ways of organizing their activities or performing their tasks for a period of up to four years.

In the UK, technology suppliers and their National Health Service (NHS) partners who were delivering machine learning applications in diagnostic pathways have begun work on a regulatory sandbox (Care Quality Commission, 2020). The Care Quality Commission (CQC) formed a team with members from across different functions, as well as a governance committee to oversee the work. The National Institute for Clinical Excellence (NICE), the Medicines and Healthcare Products Regulatory Agency (MHRA), and the NHSX – a joint unit between the NHS and the Department of Health and Social Care to drive the digital transformation of health care – were also included as government partners in this sandbox. They have been working to explore new guidance for NHS providers on AI systems with the Information Commissioner's Office. The first output from the regulatory sandbox process is a common understanding of what should be present to help deliver high-quality care when using machine learning applications in clinical diagnostics. Developing this shared view of quality with people who use services, providers, technology suppliers, and system partners has been the basis of their work in the sandbox.

In Europe, deregulated special zones have mainly been applied in the fields of self-driving cars and drones. The Swedish government sponsored the world's first large-scale autonomous driving pilot project in 2016. In Belgium, the first special zone for the testing of drones in open labs was established in Antwerp harbor in January 2019. The Russian government has also announced that a new experimental legal framework will be applied to the city of Moscow for AI experimentation.

Given that these various initiatives to create regulatory sandboxes for AI-based innovation have only recently been introduced, it is difficult to make concrete judgments about what impacts have been made by the regulatory sandboxes. There are only limited empirical data from which to draw any conclusions as to the extent regulatory sandboxes have succeeded in creating innovation as expected. At the same time, we do not yet fully comprehend the scope of privacy violations or security risks that consumers may be subjected to by AI algorithms.

Regulatory Sandboxes for Data Governance in Smart Cities

Although empirical findings are still limited, we can identify a number of key challenges in designing and implementing regulatory sandboxes for AI-based innovation in real-life settings. These include: how to guarantee compliance with regulations for safety, health, environment, security, and privacy, and to what extent regulations can be modified; how to share responsibility between the public and private sectors when accidents or problems have occurred; and how to manage accessibility, sharing, ownership, and use of data. In particular, data governance is a critical


challenge in fully utilizing the approach of regulatory sandboxes for AI-based innovation in the context of smart cities.

Various sectors are undergoing significant transformations through the introduction of data-driven innovation in smart cities. In the energy sector, distributed energy systems with peer-to-peer exchange of energy have become possible through blockchain technology, with photovoltaics provided through Solar-as-a-Service (SaaS). Smart meters and IoT technologies are providing highly sophisticated services for energy, health, and security to buildings and houses. In transportation, connected, autonomous, shared, and electrified (CASE) challenges are radically changing the technologies and systems in the sector, and Mobility-as-a-Service (MaaS) is being explored aggressively through alliances among key players across the globe. In the health sector, Software as a Medical Device (SaMD) is being explored, and the diagnosis of cancers based on image recognition is considered especially promising.

An essential approach to stimulating data-driven innovation in smart cities is to foster data collection and sharing. A vast amount of data of various kinds would be collected from energy systems, public transportation, individual vehicles, and buildings, and many benefits would be expected from using that data for different types of innovation. For example, while the data collected through smart meters on energy consumption in households would be useful for optimizing energy use, that data could also be used to provide other services such as home delivery. The data could tell delivery operators when residents would be at home, allowing them to adjust when to visit the house (Ohsugi & Koshizuka, 2018). The same data could also be used to provide health and security services to the residents of the house.

An open data approach facilitates collaborative efforts among stakeholders to create innovation for smart cities. In comparison to the conventional model of open innovation, which focuses on bilateral collaboration between firms, open innovation 2.0 is a new mode of innovation based on integrated collaboration through experimentation with a wide range of actors in different sectors (Curley & Salmelin, 2018). Open data initiatives are increasingly considered as defining elements of emerging smart cities, which can be characterized as open innovation economies enabled by the participation of city residents, civic society, software developers, and local small- and medium-sized enterprises (SMEs) (Ojo, Curry & Zeleti, 2015). A recent study which analyzed patent applications in smart cities across the globe suggests that smart city policies have a positive impact on the rate of innovation, particularly in the high-tech sector (Caragliu & Del Bo, 2019).

There are many issues that we need to consider when implementing open data in smart cities. These include the types of data collected, who owns and has access to the data, for what purposes the data can be used, how the data are managed, and what incentives are provided to encourage data sharing to stimulate innovation while addressing concerns about privacy and security in smart cities. Although laboratory-level attempts have been made to integrate various types of datasets and sources on research data scattered across organizations, the scope and amount of data collected and shared needs to be expanded to scale up innovative initiatives for actual implementation in smart cities. The quality control, error monitoring, and cleaning of data, as well as interoperability between various data standards, must be maintained to secure reliability. Organizational and legal frameworks need to be established concerning the ownership and accessibility of data, and to protect privacy and sensitive data. At the same time, it is also essential to keep a balance between open and proprietary data (Organisation for Economic Co-operation and Development, 2015b).

The collection and use of an extensive range of data, in particular, raises societal concerns in developing smart cities. The case of Sidewalk Toronto – a smart city project initiated on Toronto's waterfront by Alphabet, the parent company of Google – illustrates the seriousness of the concerns among citizens. There are various benefits expected to be provided to the residents and workers in the area, such as


ubiquitous high-speed Internet, intelligent traffic lights, smart shades in public spaces, underground delivery robots, and smart energy grids (Knight, 2019). The smart city plan would generate large quantities of data that could be used to optimize and improve technologies and services. However, some citizen groups were very concerned about the management of the collected data, and the Canadian Civil Liberties Association sued the City of Toronto in an attempt to block the project. After extensive consultation with citizens and companies in the city, the Master Innovation and Development Plan (MIDP) for Toronto was released in June 2019 (Sidewalk Labs, 2019a). The new plan emphasized community engagement and understanding of local needs in response to the concerns raised about building smart cities that are capable of tracking their inhabitants in unprecedented detail. Despite these efforts, the smart city project was eventually terminated (Doctoroff, 2020).

In trying to establish appropriate systems of data governance, it is useful to classify the various types of data available in smart cities. Urban data can be defined as including personal, non-personal, aggregate, and de-identified data collected and used in physical or community spaces where meaningful consent before collection and use is difficult to obtain (Sidewalk Labs, 2019b). Non-personal data does not identify an individual and can include other types of non-identifying data not concerning people, such as machine-generated data about weather and temperature, and data on maintenance needs for industrial equipment. Aggregate data is about people in the aggregate and not about a particular individual, and is useful for answering research questions about populations or groups of people. Aggregate counts of people in an office space, for example, can be used in combination with other data, such as weather data, to develop an energy-efficiency program. De-identified data concerns an individual that was identifiable when

that could be used to identify an individual or that is associated with an identifiable individual. Individuals typically share their personal data with governments and businesses when applying for a license, shopping, or ordering a delivery service.

Digital transparency can be enhanced by providing easy-to-understand language that clearly explains the nature of data and the privacy implications of digital technologies to citizens in smart cities (Lu, 2019). Through digital transparency, people are able to understand how and why data is being collected and used in the public realm through a visual language. For example, one hexagon conveys the purpose of the technology; another shows the logo of the entity responsible for the technology; and a third contains a QR code that takes the individual to a digital channel where they can learn more. In situations where identifying information is collected, a privacy-related colored hexagon can also be displayed by combining the technology type (video, image, audio, or otherwise) with the way that identifiable information is used (yellow for identifiable and blue for de-identified before first use, among others). This kind of approach could facilitate citizens' understanding and engagement in smart city projects.

A key question is what would be an appropriate governance system for urban data to maximize the potential of data-driven innovation while minimizing risks to individuals and communities. One approach is to establish a data trust, which is defined as a legal structure that provides for independent stewardship of data (Hardinges, Wells, Blandford, Tennison & Scott, 2019). With data trusts, the organizations that collect and hold data permit an independent institution to make decisions about who has access to data under what conditions, how that data is used and shared and for what purposes, and who can benefit from it. An independent urban data trust would be able to
the data was collected but has subsequently been manage urban data and make it publicly accessible by
made non-identifiable. Third-party apps and services default if appropriately de-identified (Sidewalk Labs,
can use properly de-identified data for research 2019b). An accountable and transparent process
purposes, such as comparing neighborhood energy for approving the use or collection of urban data
usage across a city. Personal data is usually the would ensure that local companies, entrepreneurs,
subject of privacy laws and includes any information researchers, and civic organizations can use urban

194
Governing Data-driven Innovation for Sustainability :
Opportunities and Challenges of Regulatory Sandboxes for Smart Cities

data. These data would be kept by the data trust and not be sold, used for advertising, or shared without the residents' permission.

In Japan, the Super City Initiative was started in October 2018 in an attempt to respond to the challenge posed by the fourth industrial revolution involving AI and IoT (Secretariat for the Promotion of Regional Development, 2020). The initiative requires that projects go beyond demonstrating a single technology, such as autonomous vehicles in a specific field, and integrate it with other advanced services, such as cashless transactions and once-only application for administrative procedures, to comprehensively address a societal issue in a city. It also emphasizes that projects should incorporate the views and perspectives of the people living there, not simply the ideas promoted by the developers and suppliers of technologies. The Super City Initiative provides a particular legal procedure for deregulation that is specifically designed to simultaneously support regulatory reforms in different fields in an integrated manner. The broad regulatory changes involved in building smart cities often require dealing with multiple government agencies. In such cases, a top-down approach is taken: if a municipality obtains approval for its smart city plans from its residents, the prime minister in the central government can direct agencies to make exceptions to the relevant regulations as needed. In June 2020, Japan's parliament passed the "super city" bill, and the government is expected to soon begin taking applications from municipalities, with approvals starting in the summer (Miki, 2020).

In a super city, a data linkage platform plays a crucial role in facilitating close coordination among various services as the operating system (OS) of the city (Secretariat for the Promotion of Regional Development, 2020). A data linkage platform would be developed by professional vendors and operated by local governments, whereas private service providers would offer various services. As long as the residents of the super city agree, it would also be possible for either public agencies or private enterprises to provide both the services and the platform, making consent by the residents particularly crucial in data governance. For example, when there are two separate systems for making taxi reservations and doctors' appointments, a data linkage platform can optimize taxi dispatching and appointment scheduling by connecting the relevant data in the two systems. The data linkage platform does not necessarily need to maintain an extensive central database, as data can be stored in separate databases in a distributed way. The providers of digital data and services are required to make their application program interfaces (APIs) open to the public, so that any information system can be developed through the data linkage platform. The Super City Initiative provides the operator of the data linkage platform with a right to request national and local governments and private enterprises to provide necessary data.

Several issues need to be addressed concerning data governance in smart cities through regulatory sandboxes. For the use of sophisticated services available in smart cities, personal data will be required on various aspects of the residents' lives. In the case of introducing an app connecting taxi–hospital reservations, the data linkage platform would ask the national or local government for personal data on the address, health status, and level of care needed by the elderly. The provision of such data would require the consent of the person in question in accordance with the law. On the other hand, relevant laws might allow the provision of such data without the permission of the person if there is a particular reason, such as contributing to the public interest. As local governments, businesses, or regional councils would make decisions in such cases, clear, transparent, and inclusive procedures are necessary for relevant stakeholders.

Another issue is how to reach a consensus among residents in smart cities. As residents are expected to agree on what kind of city they would like, and which areas they would target, the process of building a consensus needs to be well-integrated into the planning process. Furthermore, the methodologies and procedures for consensus-building need to be specified and institutionalized in an open and inclusive manner. It is also essential to consider how
to protect the rights of those residents who do not want to participate in the data governance scheme of smart cities. Residents need to form a consensus on where the balance should lie between the convenience of the advanced services that rely upon personal data and the risk of the data being used without their consent.

At the same time, the openness and interoperability of data in smart cities need to be secured. In smart cities, it is often challenging to provide a cross-sectoral service because, typically, data is independent for each field and organization. Reusing and deploying such services in other cities is also difficult because the data system is specialized for each city. Moreover, the cost and labor required for functional expansion in a conventional data system increase, and services cannot easily be expanded to a larger scale. The provision of various services will be improved through close linkage and coordination of data in other systems and cities. APIs play a particularly significant role in facilitating interoperability and data flow. The design process of APIs defines conventions of data exchange that influence interactions among the stakeholders involved (Raetzsch, Pereira, Vestergaard & Brynskov, 2019). It is essential to make APIs open, secure, and transparent, so that various kinds of data and sophisticated services are connected efficiently and effectively.

Coordinated efforts to share experiences with regulatory sandboxes at the international level will help to foster openness and interoperability to promote data sharing and use for innovation and transparency, as well as trust in managing and governing data to address concerns about privacy and security. So far, no global policy framework has been established on how to govern data for smart cities (Russo, 2019). For example, there is no shared set of rules concerning how sensor data collected in public spaces, such as by traffic cameras, should be used. It is of critical importance to explore guidelines and principles for the development and deployment of emerging technologies for smart cities by sharing good practices. As an international initiative to address these challenges, the G20 Global Smart Cities Alliance on Technology Governance was launched in October 2019. The initiative aims to establish global standards for data collection and use, foster greater transparency and public trust, and promote best practices in smart city governance (World Economic Forum, 2019). Working together with municipal, regional, and national governments, as well as private-sector partners and city residents, the alliance intends to co-design, pilot, and scale up policy solutions to help cities responsibly implement data-driven innovation. Such an international initiative will contribute to developing a global policy framework for smart cities by examining key issues concerning data governance, including privacy, transparency, openness, and interoperability, based on experiences with regulatory sandboxes in different locations.

Conclusion

Data-driven innovation plays a crucial role in tackling sustainability challenges. As the development of AI is accelerated by deriving new and significant insights from the vast amount of data generated during the delivery of services every day, training and adaptation are key to creating data-driven innovation. The development of cyber-physical systems such as smart cities is facilitated through the ready availability of and accessibility to data, and its mutual exchange and sharing with stakeholders in different sectors. Hence the new mode of data-driven innovation requires open, dynamic interactions with stakeholders possessing and generating various kinds of data. Close cooperation and collaboration in regard to data are crucial in the innovation process, from the development of novel technologies to deployment through field experimentation and legitimation in society.

It is critical to establish a proper system to govern data-driven innovation in the context of accelerating technological progress and deepening interconnection and interdependence. The speed of technological change with AI is remarkably fast, and it is accompanied by a significant degree of uncertainty in terms of consequences and side effects. Various types of technologies are increasingly becoming
interconnected and interdependent through data exchange and sharing among multiple sectors in smart cities, such as energy, buildings, transportation, and health. These characteristics make it difficult to explain or understand the process of innovation, and contribute to a widening gap between technological and institutional change. AI-based innovation becomes robust by involving the stakeholders who will interact with the technology early in development, obtaining a deep understanding of their needs, expectations, values, and preferences, and testing ideas and prototypes with them throughout the entire process.

Specifically designating geographical areas or sectoral domains, in the form of regulatory sandboxes, can facilitate data-driven innovation by allowing experimental trials of novel technologies and systems that cannot currently operate under the existing regulations. They provide a limited form of regulatory waiver or flexibility for firms to test new products or business models with reduced regulatory requirements, while preserving certain safeguards to ensure appropriate consumer protection. The aim is to provide a symbiotic environment for innovators to test new technologies, and for regulators to understand their implications for industrial innovation and consumer protection. Regulatory sandboxes help to identify and better respond to regulatory breaches by enhancing flexibility and adjustment in regulations, which is particularly relevant in highly regulated industries, such as the finance, energy, transport, and health sectors.

The approach of regulatory sandboxes will play an especially essential role in governing data-driven innovation in smart cities, which inevitably face the difficult challenge of collecting, sharing, and using various kinds of data for innovation while addressing societal concerns about privacy and security. Regulatory sandboxes can relax or adjust some of the relevant regulations, so that these new technologies can be tested for actual adoption and use. How regulatory sandboxes are designed and implemented can be locally adjusted, based on the specificities of the economic and social conditions and contexts, to maximize the effect of learning through trial and error. Various types of new, promising technologies can be verified, adopted, and integrated, effectively improving technological performance, reliability, and integration, as well as contributing to cost reduction. As AI-based innovation involves rapid technological change, uncertain market developments, and diverse social norms, it raises many economic, ethical, and legal issues involving various interests and preferences. Regulatory sandboxes need to be flexible enough to accommodate the uncertainties of innovation, and precise enough to impose society's preferences on emerging innovation, functioning as a nexus of top-down strategic planning and bottom-up entrepreneurial initiatives.

Emerging cases of regulatory sandboxes for smart cities show that data governance is critical to maximizing the potential of data-driven innovation while minimizing risks to individuals and communities. With data trusts, the organizations that collect and hold data permit an independent institution to make decisions about who has access to data under what conditions, how that data is used and shared and for what purposes, and who can benefit from it. Alternatively, a data linkage platform can facilitate close coordination between the various services provided and the data stored in a distributed manner, without maintaining an extensive central database. The operator of the data linkage platform would require a right to request that national and local governments and private enterprises provide necessary data. The APIs linking data and services need to be open to the public so that any information system can be developed through the data linkage platform.

It is critically important that the data governance systems of smart cities are open, transparent, and inclusive. While the provision of personal data would require the consent of the person in question, the relevant law might allow the provision of such data without the permission of the person if there is a particular reason, such as contributing to the public interest. As local governments, businesses, or regional councils would be expected to make such decisions, clear, transparent, and inclusive procedures are necessary
for relevant stakeholders. The process of building a consensus among residents needs to be well-integrated into the planning of smart cities, with the methodologies and procedures for consensus-building specified and institutionalized in an open and inclusive manner. It is also essential to respect the rights of those residents who do not want to participate in the data governance scheme of smart cities. As APIs play a crucial role in facilitating interoperability and data flow in smart cities, open APIs will facilitate the efficient connection of various kinds of data and sophisticated services. International cooperation will be critically important to develop common policy frameworks and guidelines for facilitating open data flow while maintaining public trust among smart cities across the globe.

Policy Recommendations

Recommendation 1: New policy approaches are required to govern data-driven innovation in the context of accelerating technological progress and deepening interconnection and interdependence.

Recommendation 2: Regulatory sandboxes should be established to facilitate data-driven innovation by allowing experimental trials of novel technologies and systems that cannot currently operate under the existing regulations, through specifically designating geographical areas or sectoral domains.

Recommendation 3: Stakeholders should be involved from the early stages of technological development in order to obtain a deep understanding of their needs, expectations, values, and preferences, and to test ideas and prototypes with them throughout the entire process.

Recommendation 4: Regulatory sandboxes should be designed and implemented by incorporating the specificities of local economic and social conditions and contexts to maximize the effect of learning through trial and error.

Recommendation 5: Regulatory sandboxes need to be flexible to accommodate the uncertainties of innovation, and precise enough to impose society's preferences on emerging innovation, functioning as a nexus of top-down strategic planning and bottom-up entrepreneurial initiatives.

Recommendation 6: Data governance systems of smart cities should be open, transparent, and inclusive to facilitate data sharing and integration for data-driven innovation while addressing societal concerns about security and privacy.

Recommendation 7: The procedures for obtaining consent on the collection and management of personal data should be clear and transparent to relevant stakeholders, with specific conditions for the use of such data for public purposes.

Recommendation 8: The process of building a consensus among residents should be well-integrated into the planning of smart cities, with the methodologies and procedures for consensus-building specified and institutionalized in an open and inclusive manner.

Recommendation 9: Application programming interfaces (APIs) should be open to facilitate interoperability and data flow for efficient connection of various kinds of data and sophisticated services in smart cities.

Recommendation 10: Common policy frameworks should be explored to develop guidelines for data collection and use, foster greater transparency and public trust, and promote interoperability and open data flow among smart cities across the globe.


References
Allen, H. J. (2019). Regulatory Sandboxes. George Washington Law Review, 87(3), 579-645.

Allen, H. J. (2020). Sandbox Boundaries. Vanderbilt Journal of Entertainment & Technology Law, Forthcoming, 22(2), 299-321.

Beede, E. (2020, April 25). Healthcare AI systems that put people at the center. Retrieved from Google Blog: https://www.blog.google/technology/health/healthcare-ai-systems-put-people-center/

Beede, E., Baylor, E., Hersch, F., Iurchenko, A., Wilcox, L., Ruamviboonsuk, P., & Vardoulakis, L.
M. (2020). A Human-Centered Evaluation of a Deep Learning System Deployed in Clinics for
the Detection of Diabetic Retinopathy. Proceedings of the 2020 CHI Conference on Human
Factors in Computing Systems, 1-12.

Boyd, A. (2018, May 9). 10 Drone Programs Get Federal OK To Break The Rules. Retrieved from Nextgov: https://www.nextgov.com/emerging-tech/2018/05/10-drone-programs-get-federal-ok-break-rules/148098/

Caragliu, A., & Del Bo, C. F. (2019). Smart innovative cities: The impact of Smart City policies on urban innovation. Technological Forecasting and Social Change, 142, 373-383.

Care Quality Commission. (2020, March). Using machine learning in diagnostic services: A
report with recommendations from CQC’s regulatory sandbox. Care Quality Commission.

Curley, M. (2016). Twelve principles for open innovation 2.0. Nature.

Curley, M., & Salmelin, B. (2018). Data-Driven Innovation. In Open Innovation 2.0: The New
Mode of Digital Innovation for Prosperity and Sustainability. Cham: Springer International
Publishing.

Doctoroff, D. L. (2020, May 7). Why we're no longer pursuing the Quayside project — and what's next for Sidewalk Labs. Retrieved from Medium: https://medium.com/sidewalk-talk/why-were-no-longer-pursuing-the-quayside-project-and-what-s-next-for-sidewalk-labs-9a61de3fee3a

Energy Market Authority. (2017, October 23). Launch of Regulatory Sandbox to Encourage
Energy Sector Innovations. EMA.


Federal Aviation Administration. (2019, December 10). UAS Integration Pilot Program. Retrieved from United States Department of Transportation: https://www.faa.gov/uas/programs_partnerships/integration_pilot_program/

Federal Ministry for Economic Affairs and Energy. (2019, July). Making Space for Innovation:
The handbook for regulatory sandboxes. BMWi.

Financial Conduct Authority. (2015). Regulatory Sandbox. FCA.

Financial Conduct Authority. (2017, October). Regulatory Sandbox Lessons Learned Report.
FCA.

Fleischer, V. (2010). Regulatory Arbitrage. Texas Law Review, 89(2), 227-289.

Food and Drug Administration. (2019, April). Proposed Regulatory Framework for
Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a
Medical Device (SaMD) - Discussion Paper and Request for Feedback. FDA.

Guston, D. H. (2014). Understanding 'anticipatory governance'. Social Studies of Science, 44(2), 218-242.

Hardinges, J., Wells, P., Blandford, A., Tennison, J., & Scott, A. (2019, April). Data trusts:
lessons from three pilots. Open Data Institute.

Iizuka, M., & Ikeda, Y. (2019). “Regulation and innovation under Industry 4.0: Case of
medical/healthcare robot, HAL by Cyberdyne.” Working Paper Series #2019-038, Maastricht
Economic and Social Research Institute on Innovation and Technology (UNU-MERIT).

Ikeda, Y., & Iizuka, M. (2019, October). "International Rule Strategies for Implementing Innovation in Society: A Case Study of the Medical Healthcare Robot HAL." RIETI Policy Discussion Paper Series 19-P-016, Research Institute of Economy, Trade and Industry.

Information Commissioner’s Office. (2019). ICO opens Sandbox beta phase to enhance data
protection and support innovation. ICO.

Knight, W. (2019). Alphabet's smart city will track citizens, but promises to protect their data. MIT Technology Review.

Lu, J. (2019, April 19). How can we bring transparency to urban tech? These icons are a first step. Retrieved from Medium: https://medium.com/sidewalk-talk/how-can-we-make-urban-tech-transparent-these-icons-are-a-first-step-f03f237f8ff0


Miki, R. (2020, May 13). Coronavirus pushes Japan closer to high-tech 'super cities'. Retrieved from Nikkei Asian Review: https://asia.nikkei.com/Politics/Coronavirus-pushes-Japan-closer-to-high-tech-super-cities

Norwegian Ministry of Local Government and Modernisation. (2020). National Strategy for
Artificial Intelligence. H-2458 EN.

OECD. (2015a). Data-Driven Innovation: Big Data for Growth and Well-Being. OECD
Publishing.

OECD. (2015b). Making Open Science a Reality. OECD.

OECD. (2018). OECD Science, Technology and Innovation Outlook 2018.

OECD. (2019). Digital Innovation: Seizing Policy Opportunities. OECD.

Office of Gas and Electricity Markets. (2018a). Enabling trials through the regulatory
sandbox. ofgem.

Office of Gas and Electricity Markets. (2018b). Insights from running the regulatory sandbox.
ofgem.

Ohsugi, S., & Koshizuka, N. (2018). Delivery Route Optimization Through Occupancy
Prediction from Electricity Usage. 2018 IEEE 42nd Annual Computer Software and
Applications Conference (COMPSAC), 842-849.

Ojo, A., Curry, E., & Sanaz-Ahmadi, F. (2015). A Tale of Open Data Innovations in Five Smart
Cities. 2015 48th Annual Hawaii International Conference on System Sciences (HICSS-48),
2326-2335.

Pagallo, U., Casanovas, P., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., . . . Valcke, P.
(2019). On Good AI Governance: 14 Priority Actions, A S.M.A.R.T. Model of Governance, and
a Regulatory Toolbox. AI4People.

Pollman, E. (2019). Tech, Regulatory Arbitrage, and Limits. European Business Organization
Law Review, 20(3), 567-590.

Raetzsch, C., Pereira, G., Vestergaard, L. S., & Brynskov, M. (2019). Weaving seams with data: Conceptualizing City APIs as elements of infrastructures. Big Data & Society, 6(1).


Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C., . . . Larochelle, H. (2019). Machine behaviour. Nature, 568(7753), 477-486.

Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., . . . Fei-Fei, L. (2015).
ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer
Vision, 115(3), 211-252.

Russo, A. (2019). World Economic Forum to Lead G20 Smart Cities Alliance on Technology
Governance. World Economic Forum.

Secretariat for the Promotion of Regional Development. (2019). The National Strategic Special Zones. Retrieved from Cabinet Office, Prime Minister's Office of Japan: https://www.kantei.go.jp/jp/singi/tiiki/kokusentoc/supercity/supercityforum2019/supercityforum2019_EnglishVer.html

Secretariat for the Promotion of Regional Development. (2020). About the Super City
Initiative. Cabinet Office, Prime Minister’s Office of Japan.

Sidewalk Labs. (2019a). Sidewalk Labs Publishes Comprehensive Blueprint for the
Neighbourhood of the Future. Sidewalk Labs.

Sidewalk Labs. (2019b). Toronto Tomorrow: A new approach for inclusive growth, Volume 2.
Sidewalk Labs.

Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568-1580.

Taeihagh, A., & Lim, H. S. (2019). Governing autonomous vehicles: emerging responses for
safety, liability, privacy, cybersecurity, and industry risks. Transport Reviews, 39(1), 103-128.

World Economic Forum. (2019). Forum-led G20 Smart Cities Alliance will create the first
global framework for smart city governance. World Economic Forum.

Yarime, M. (2017). Facilitating data-intensive approaches to innovation for sustainability: opportunities and challenges in building smart cities. Sustainability Science, 12(6), 881-885.

Zetzsche, D. A., Buckley, R. P., Arner, D. W., & Barberis, J. N. (2017). Regulating a Revolution:
From Regulatory Sandboxes to Smart Regulation. Fordham Journal of Corporate and
Financial Law, 23(1), 31-103.

How to Expand the Capacity of AI to Build Better Society

Including Women in AI-enabled Smart Cities: Developing Gender-inclusive AI Policy and Practice in the Asia-Pacific Region

Caitlin Bentley*
Lecturer in AI-enabled Information Systems, Information School, University of Sheffield
Honorary Fellow, 3A Institute, Australian National University

Katrina Ashton, Brenda Martin, Elizabeth Williams, Ellen O'Brien, Alex Zafiroglu, and Katherine Daniell
3A Institute, Australian National University

* Caitlin Bentley was Research Fellow at the 3A Institute of Australian National University when completing this paper. She is now Honorary Fellow at the 3A Institute. Since 2020, she is also Lecturer in AI-enabled Information Systems at the Information School of the University of Sheffield.

1. Introduction

Heralded as the answer to rapid urbanization and related environmental, social, and
governance challenges, smart city developments are proliferating across the Asia-Pacific
region. Sensors, artificial intelligence (AI), machine learning algorithms, actuators, and
other advanced technologies are being built into city infrastructures. AI-enabled systems
undertake advanced data analytics, feeding into predictions and automated decision-
making that are enacted through actuators or other system structures. These new AI-
enabled systems are designed to tackle pressing urban issues such as air pollution, traffic
congestion, and public safety.1
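The loop just described, in which sensors feed data analytics and predictions are enacted through actuators, can be reduced to a few lines of code. The sketch below is purely illustrative and is not drawn from any system discussed in this report: the class name, its parameters, and the moving-average stand-in for a learned model are all our own assumptions.

```python
from collections import deque

class AdaptiveSignal:
    """Toy sense-predict-actuate loop for one intersection approach.

    Sensors report vehicle counts per signal cycle; a moving average of
    recent counts stands in for a learned demand model, and the decision
    is enacted through the actuator: the green time itself.
    """

    def __init__(self, base_green=30, per_vehicle=1.5, window=4):
        self.history = deque(maxlen=window)  # recent per-cycle counts (sense)
        self.base_green = base_green         # minimum green time, in seconds
        self.per_vehicle = per_vehicle       # extra seconds per predicted vehicle

    def sense(self, vehicle_count):
        self.history.append(vehicle_count)

    def predict_demand(self):
        # A real deployment would use a trained model; an average suffices here.
        return sum(self.history) / len(self.history) if self.history else 0.0

    def actuate(self, max_green=90):
        # The prediction is turned into a concrete, bounded control action.
        green = self.base_green + self.per_vehicle * self.predict_demand()
        return min(round(green), max_green)

signal = AdaptiveSignal()
for count in [10, 14, 18, 22]:  # rising congestion observed by the sensor
    signal.sense(count)
print(signal.actuate())  # prints 54: green time grows with predicted demand
```

In a real deployment the prediction step would be a trained model and the actuation step a traffic-controller command, but the sense-predict-actuate shape remains the same.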

However, not everyone has benefited from smart city developments. For instance, Cathelat
(2019) demonstrates that a gender dimension is lacking within smart city plans, even
when there is an expressed commitment to social inclusion. Broadly, women are also less
connected to the Internet, and technology access inequalities by gender are on the rise
in Asia-Pacific (Sey & Hafkin, 2019). AI-enabled systems introduce new risks and security
concerns that may disproportionately affect women (Finlay, 2019). It is important to
understand and promote effective ways to design, develop, manage, and regulate AI-enabled
systems more inclusively with and for women.

AI-enabled systems affect multiple aspects of women's lives, as computational modelling increasingly informs numerous areas of urban governance. Women may interact with AI-enabled smart cities through multiple touchpoints, including embedded sensors, Internet
and mobile networks, and other networks (workplaces, healthcare, transportation, retail
centers, etc.). Ultimately, data streams record women’s behaviors, preferences, locations,
and values. Data streams may then be analyzed and incorporated into machine learning
algorithms, through which specific predictions are made. Due to the capacity for real-time
analytics, as well as the dominant focus of these systems on prediction and prevention,
AI-enabled smart cities suggest the need for an approach that is cognizant of the social
dynamics at play and of the cultural richness and diversity of our communities.
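One concrete way to make such pipelines less individually revealing is to aggregate and de-identify data streams before they reach a predictive model, in the spirit of the aggregate and de-identified urban data categories discussed earlier in this report. The sketch below is our own illustration rather than a method proposed by the authors; the function name, the k = 5 suppression threshold, and the data shapes are all assumptions.

```python
K_ANON = 5  # suppression threshold; a hypothetical policy parameter

def aggregate_presence(events, k=K_ANON):
    """Convert an individual-level event stream into group-level counts.

    `events` is a list of (person_id, zone) tuples, the kind of stream a
    smart-city touchpoint might record. Identifiers are dropped, and any
    zone seen for fewer than k distinct people is suppressed, so the
    output describes groups rather than individuals.
    """
    people_per_zone = {}
    for person_id, zone in events:
        people_per_zone.setdefault(zone, set()).add(person_id)
    return {zone: len(people)
            for zone, people in people_per_zone.items()
            if len(people) >= k}

events = [(f"p{i}", "station") for i in range(8)] + [("p99", "clinic")]
print(aggregate_presence(events))  # prints {'station': 8}; the lone clinic visit is suppressed
```

The design choice illustrated here is that suppression happens at ingestion, so downstream analytics never see small, potentially identifying groups at all.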

This work has two interrelated goals: to include the voices, theories, experiences, and
histories of female and feminist scholars and activists in developing better policies for AI-
enabled smart cities; and to evaluate a practical and concrete framework that policymakers
can use to support women, while taking into account the specific opportunities and risks
introduced by AI in smart city initiatives. Towards these ends, this paper critically reviews
the extant literature, focusing specifically on the status of AI-enabled smart city initiatives
across multiple countries in the Asia-Pacific region. We then analyze two key applications of AI for social good used within smart city initiatives: public safety and transportation. In general, we find limited evidence of gender-responsive policymaking and practice, and little empirical research concerning how AI contributes to safer public spaces or more effective transportation systems for women. We argue that greater integration between the technical capacity of AI-enabled systems and diverse communities of women is needed.

1. The ASEAN (2018) Smart City Network progress report gives a good overview of the 26 pilot initiatives underway across eight countries.

How to expand the capacity of AI to build better society

We introduce and evaluate the 3A Framework as an effective approach to leading and forming such integration holistically. Policymakers need practical and concrete ways to support women, whilst taking into account the specific technological shifts underway due to AI. This Framework provides a set of core questions that can be used as a starting point. This research maps out key insights generated from interviews with leading female and feminist scholars and activists who have significant knowledge and experience of working in Asia-Pacific. The experts reflected on what inclusive practice means when it comes to working at the intersection of gender and advanced technologies. We examine how these insights can be used to elaborate on the Framework, thereby establishing a method for inclusive policymaking and practice.

2. Smart cities in the Asia-Pacific region: are they inclusive to women?

Hojer and Wangel (2015) argue that the idea of a smart city has its roots in concepts of “cybernetically planned cities” developed in the 1960s, in which networked and computational capabilities would be built into urban development plans starting in the 1980s, mostly within the US and Europe. The concept has raised significant worldwide debate due to the tensions in its instrumental meaning versus associated intended outcomes (Allwinkle & Cruickshank, 2011; Hollands, 2008; Kitchin, 2014). AI-enabled smart cities are increasingly common and are tied to the spread of connected Internet of Things (IoT) devices and advances in computing power. That said, such systems are envisaged, implemented, and regulated in diverse ways across the varied social, political, and economic landscapes of the Asia-Pacific region.

Whether Asia-Pacific smart cities are inclusive to women depends on what we mean by being “inclusive to women”, and on the management model that is implemented. Concerning being “inclusive to women”, we adopt the UN DESA (n.d.) definition:

Social inclusion is the process by which efforts are made to ensure equal opportunities so that everyone, regardless of their background, can achieve their full potential in life. Such efforts include policies and actions that promote equal access to (public) services, as well as enable citizens’ participation in the decision-making processes that affect their lives.

This definition considers the inclusion of women as a societal issue, falling under the remit of multiple actors and institutions. It does not mean that women’s issues and perspectives are favored over men’s; rather, it says that women’s access to services and decision-making processes needs to be considered in context and in relation to others. However, we acknowledge that the UN definition may privilege notions of “equality” and “access” over “equity” and “outcomes.” Roces (2010) details how international feminist discourses have conflicted and resonated in different ways with various Asian feminist movements. Our research examines the 3A Framework as a means for policymakers and practitioners across a range of Asia-Pacific cultures to generate context-specific goals, definitions, and outcomes of gender inclusiveness, and to better understand how they play out in smart city contexts.

Across the Asia-Pacific region, many countries are organizing state-level smart city initiatives, with many making provisions for social inclusion within them (Table 1). We find that there are often no principles or programs identified within these high-level initiatives defining or standardizing how social inclusion should be implemented. Similarly, many Asia-Pacific countries have published national AI strategies, such as India’s National Strategy for AI (NITI Aayog, 2018) or Thailand’s Industry 4.0 Policy (Baxter, 2017). Countries have also enacted data privacy and protection laws, though it is not clear how all of these policy areas are mediated. For instance, the Australian Human Rights Commission (2019) identified gaps in current law, application of law, regulatory measures, and education and training when evaluating the adequacy of existing laws in protecting human rights in the context of AI. The lack of clarity surrounding national law and policy for AI-enabled smart cities across the region casts doubt over whether the inclusion of women has been a priority, and raises questions regarding the bases of inclusion.

Including Women in AI-enabled Smart Cities:
Developing Gender-inclusive AI Policy and Practice in the Asia-Pacific Region

Cross-country comparison of national smart city plans and social inclusion provisions. Columns: Country/Region (ordered by GDP); National Smart City Policy and Plans; Social Inclusion Notes.

China
Policy and plans: China incorporated its Smart City Initiative into national policy, which is a main reason for the accelerated development of over 500 smart city pilots in the country (Long, Zhang, Zhang, Chen, & Chen, 2019).
Social inclusion notes: Chan and Anderson (2015) reported a transition in China from technology-centered to human-centered smart cities, with a focus on increasing public participation in the country. No details about gender inclusion in national policy were found.

Japan
Policy and plans: Launch of a “super-smart city” initiative called Society 5.0 in 2016: a national framework outlining how AI, IoT devices, and robots will transition Japan from an information society to an AI-enabled society, bringing about a human-centered society (Cabinet Office, Government of Japan, n.d.).
Social inclusion notes: Society 5.0 plans explicitly mention social inclusion goals, focusing on optimal and tailored services for individuals, whilst overcoming national challenges such as the ageing population, social polarization, and depopulation (UNESCO, 2019).

India
Policy and plans: In 2015, the Indian Government pledged to create 100 smart cities by 2020. Only a portion of allocated funds have been used so far, and the timeframe has been extended to 2023. The Smart Cities Mission is the responsibility of the Ministry of Housing and Urban Affairs.
Social inclusion notes: The Smart Cities Mission Statement and Guidelines (2015, p. 6) includes 10 core infrastructure elements, one of which is the “safety and security of citizens, particularly women, children, and the elderly”. This is the only context in which gender issues are specifically mentioned. Another document provides examples of citizen engagement activities, including ways to be inclusive (e.g., placing Wi-Fi hotspots in slums) (Government of India – Ministry of Urban Development, 2015).

South Korea
Policy and plans: In 2013, the federal government launched an initiative to construct ubiquitous cities, which has transitioned through two additional phases to connect and decentralize smart city development across the nation. Since 2018, national policy has incorporated testbeds, living labs, and implementation of AI technology (Ministry of Information and Communications, 2019b).
Social inclusion notes: Explicit aim to make South Korean citizens’ lives happy and inclusive in smart cities. A five-year, mid-to-long-term roadmap was established, incorporating this vision into its plans (Ministry of Information and Communications, 2019b). No details in reference to women or gender inclusion were found.

Table 1: Cross-country comparison of national smart city plans and social inclusion provisions


Australia
Policy and plans: The Australian Smart City Plan was released by the Department of the Prime Minister and Cabinet in 2016 and now sits with the Department of Transport, Infrastructure, Regional Development, and Communications (Department of the Prime Minister and Cabinet, 2016).
Social inclusion notes: No mention of social inclusion, gender, or citizen participation.

Indonesia
Policy and plans: In 2017, the national government created the “100 Smart Cities Movement”, initiated by the Ministry of Communication and Information Technology of the Republic of Indonesia, and in 2018 focused on improving public services and increasing regional competitiveness (Davy, 2019).
Social inclusion notes: Equality is mentioned in a press release on the first phase of the smart cities program (Ministry of Communication and Information – Public Relations Bureau, 2017). Individual city master plans (Laksmi, 2018) include some specific mentions of preventing violence against women and children. Sustainability, when mentioned, includes a focus on social dimensions. Citizen participation is an important part of city and district planning.

Thailand
Policy and plans: Smart City Thailand (2018) is a national program designed to roll out smart city services to all 76 provinces and Bangkok by 2022. It incorporates multiple government divisions, is managed by a dedicated unit called the Digital Economy Promotion Agency (DEPA), and involves multiple private sector actors.
Social inclusion notes: One of the seven dimensions of Thailand’s plan is centered on building “Smart People” by improving knowledge and skills of residents in order to “decrease social and economic inequality and provide new opportunities for creativity, innovation, and public participation” (Smart City Thailand, 2018, p. 9). No details were given in reference to women or gender differences specifically.

Hong Kong
Policy and plans: Hong Kong’s Office of the Government Chief Information Officer has a Smart City Blueprint focused on embracing technology towards strengthening the economy and achieving a high quality of life (Innovation and Technology Bureau, 2017).
Social inclusion notes: The Blueprint focuses on application areas and does not mention specifics in relation to women. One goal is to nurture young talent to gain skills in science, technology, engineering, and mathematics (STEM), but there is no discussion of gender differences.

(Cont.) Table 1: Cross-country comparison of national smart city plans and social inclusion provisions


Malaysia
Policy and plans: Malaysia’s Ministry of Housing and Local Government (2018) released a national framework, outlining its definition, key smart city challenges, national policy, strategic areas of application, indicators, governance arrangements, and pilot project descriptions.
Social inclusion notes: A main criterion given is gender empowerment and inclusivity of vulnerable groups. The seventh of 16 city policies given is “Social inclusion, especially gender equality shall be given emphasis in smart city development” (Ministry of Housing and Local Government, 2018, p. 36). This includes supportive physical infrastructure and programs, as well as participation in decision-making.

Singapore
Policy and plans: Smart Nation Singapore (2020) outlined three pillars of action surrounding the digital economy, digital government, and digital society, led by the Smart Nation and Digital Government Office and the Infocomm Media Development Authority.
Social inclusion notes: The digital society blueprint refers to inclusion in terms of digital inclusion but does not refer to the specific needs of women. The digital government likewise tracks citizen satisfaction with its services, but does not explain how it addresses gender differences, if at all (Smart Nation Singapore, 2018).

Vietnam
Policy and plans: Ministries and agencies are currently researching and completing building guidelines, mechanisms, and policies for smart cities (Ministry of Information and Communications, n.d.), with a first project launched in October 2019 focusing on air and water quality monitoring, renewable energy, public transport, and others, taking part in the ASEAN network (ASEAN, 2018; Ministry of Information and Communications, 2019a).
Social inclusion notes: No mention of social inclusion, gender, or citizen participation.

Samoa
Policy and plans: Samoa’s National Urban Policy (2013) is a good example of how Pacific Islands may instead prioritize issues of sustainability, resilience, and inclusion over technologically-centered smart cities.
Social inclusion notes: Whilst inclusivity is a core mission statement, the Policy does not elaborate on what this means.

(Cont.) Table 1: Cross-country comparison of national smart city plans and social inclusion provisions


The conflicted national-level policy space means that social inclusion is often implemented at the initiative level, within a particular city or for a specific purpose. A range of what can be classified as “top-down” and “bottom-up” smart city models can be observed. We refer to a “top-down” approach as one where the locus of control over the design, governance, sensing, computation, and/or acting in an AI-enabled system is centralized in some way. For instance, data streams from various sensors are aggregated onto a “dashboard”. Taweesaengsakulthai et al. (2019) compared the top-down projects led by the central government in Nakhon Nayok, Phuket, and Chiang Mai provinces with the more locally-driven, bottom-up approach of the Khon Kaen smart city initiative. They noted that the smart cities in Phuket and Chiang Mai put a strong emphasis on supporting the tourism industry rather than their citizens, and speculated that these provinces were chosen by the central government for the smart city initiative because they are both highly attractive tourist destinations.

Nevertheless, top-down approaches may facilitate widespread integration and use of computational resources across a network when centralized in some way. For example, where environmental sustainability is concerned, centralized aggregation is being explored for monitoring emissions flows and making continuous adaptations to optimize these emissions (Giest, 2017). This sort of aggregated analysis and anticipatory policymaking may not work well for the inclusion of women because there are fewer known “levers” that enable decision-makers to determine exactly how to respond to certain issues. Some countries are therefore implementing public participation processes to facilitate deliberative decision-making in smart city systems (Chan & Anderson, 2015). However, the examined approaches have not yet addressed how such processes may need to change in the context of AI, nor how unequal power relations between men, women, and LGBTQI+ people are addressed.

Alternatively, “bottom-up” approaches typically place participation by, and accountability towards, marginalized people, including women, at their core. Bottom-up approaches are characterized by participatory processes, highlighting how local citizens may know best how to respond to the issues they are confronting in their local area, as with Sadoway and Shekhar’s (2014) examination of Transparent Chennai’s community-driven approach to smart city governance. In contrast, Trencher (2019) analyses another “bottom-up” smart city initiative in Aizuwakamatsu, Japan, noting that the high level of citizen participation was driven by skilled corporate professionals. Bottom-up approaches may also fail to take into account large sets of interdependent factors, as well as the plural intents, interests, and power relations of the people involved. Moreover, AI could be used to scale applications and services that have wide benefit potential to complement grassroots engagement. A clear national strategy that embraces the benefits and minimizes the drawbacks of both top-down and bottom-up approaches could enable better outcomes for women.

Another complicating factor for women’s inclusion is the breadth and diversity of actors involved in planning and managing smart city initiatives. Public-private partnerships (PPPs) are common in Asia-Pacific, with examples in India (SCC India Staff, 2018), Thailand (Huawei Enterprise, 2019), China, South Korea, and Japan (Thrive, 2018). Large technology companies are increasingly expanding their roles from suppliers to smart city co-investors, designers, and managers (Cathelat, 2019). Lam and Yang (2020) examine why PPPs occur, specifically in Hong Kong. They find that in the public sector, the most important criteria were the availability of needed data, the availability of expertise, and the possibility of maintaining transparency of procurement and monitoring of operations. In the private sector, the most important criteria were the possibility of maintaining transparency of procurement and monitoring of operations, the complexity of coordinating government


departments, and the availability of expertise. It is vital to note the lack of mention or consideration of community relations within this study. It appears that whilst PPPs are crucial for the acquisition of resources and expertise, private sector actors may not hold any responsibility towards citizens.

As a result, many countries are pursuing complementary approaches to address social inclusion concerns. For example, Pune in India developed its own framework to engage citizens as part of its smart city initiative, with mixed success (Ministry of Housing and Urban Affairs, 2015). In contrast, Marsal-Llacuna (2015) and Panori et al. (2019) discuss indicators and multi-dimensional poverty indexes, respectively, as a means to foster socially inclusive outcomes. In other fields, it is well established that participatory citizen engagement processes can help to meet social inclusion objectives, but often only if they are negotiated into the design and implementation in a manner cognizant of these objectives; otherwise, citizen engagement processes can perpetuate existing power structures, inequalities, and exclusion of certain participant groups (Musadat, 2019; Daniell, 2012). Thus, there is still a need to understand how to design such engagement processes in a way that women’s perspectives will not remain marginalized, and so that women have the opportunity to influence AI-enabled smart city development.

3. AI for social good? Opportunities and risks of AI smart city technology for women

An increasing number of AI-enabled smart city initiatives are aiming to improve the well-being and quality of life of residents and visitors. We are particularly interested in applications that hold significant opportunity and risk for women. Based on our cross-country review of smart city progress in Asia-Pacific, we selected two key smart city applications that have seen substantial AI implementation. The following sections unpack the AI components of these two key applications: public safety and transportation.

3.1. Improving the safety and security of women in public spaces through facial recognition technology

Public safety and security issues differ greatly across Asia-Pacific urban contexts. However, there are some safety and security issues that affect certain genders disproportionately (Heise et al., 2002; Jackman, 2006). Multiple accounts across the region reflect the risks and fear that women experience due to sexual harassment, assault, and violence in public spaces (Baruah, 2020; Plan International, 2016; Rao, 2017; UN Women, 2017). In Indonesia, women are 13 times more likely to be harassed in public places than men (Widadio, 2019). In Mumbai, India, Bharucha and Khatri (2018) found that 30% of the women surveyed had been groped in public. Human trafficking and forced labor are two other significant safety and security issues affecting women in the Asia-Pacific region (Global Slavery Index, 2019; World Vision Australia, 2007). These issues also affect men, but trafficking for sexual exploitation makes up a large proportion of human trafficking, and in these cases women and girls are usually the victims (Lee, 2005; Piper, 2005). An increasingly common strategy to reduce levels of violence and support intra-regional efforts to curb human trafficking and forced labor is to embed automated facial recognition technology (AFRT) into smart city initiatives. For example, in 2019, thanks to AFRT, India celebrated the matching of 10,561 missing children with those living in institutions (Zaugg, 2019).

However, AFRT is not a silver bullet, having generated significant public debate around the technical limitations of the underpinning AI technology and its implications for individual privacy and the centralization of power in urban governance. Public opinions on these matters are nuanced across the region; yet, in all contexts, better informed decisions can be made by understanding the components of this AI system. There are two types of facial recognition systems: verification and identification (Grother et al., 2019). Verification seeks to determine if two images of a face match, whereas identification matches a face shown in an image with potential matches in a database of


images. The main way that AFRT assists in reducing violence in public places is its ability to identify assailants post hoc. Similarly, tackling human trafficking and forced labor also depends on authorities having records of victims and being able to match or identify victims. However, there are still concerns about how well this technology works for different demographics, as well as about possible side effects and how effective systems incorporating AFRT are at solving the problems they seek to address. For instance, the National Institute of Standards and Technology (NIST) found that women were significantly more likely to be misidentified than men, with false positive rates two to five times higher (Grother et al., 2019). We outline the potential reasons for misidentification in Appendix 1.

When considering the needs and perspectives of women, there are still many substantial gaps in the knowledge. For instance, it is not clear whether post-hoc identification of perpetrators actually has any bearing on the safety and security of women. New AI applications to detect unusual behavior, rather than matching perpetrators post hoc, may be beneficial in that regard (see Huawei Enterprise, 2019). However, these applications are in the early stages of development and there is no evidence to support their effectiveness (Barrett et al., 2019). It is also not clear what happens to women once they are identified as victims of human trafficking or forced labor, and whether there are other applications of AI technology to identify trafficking patterns (such as one solution discussed in Section 5.1). Evidence outlining the effectiveness of such systems on crime reduction in the Asia-Pacific region is also lacking.

Lastly, little attention has been paid to data security issues, which may also impinge on the safety and security of women when misuse of the system or data breaches occur. There is also the question of how the information will be used: does an alert go to a human, or will there be an automated intervention? Very little discussion has taken place regarding how the images are stored and for how long, which becomes a significant issue when data is centralized (security risks) and/or used for multiple purposes (various issues around consent and biometric data ownership). There is a need to consider how AFRT will contribute to socially good outcomes for women by examining AI as part of a wider smart city system.

3.2. Increasing mobility for women through AI-enabled transportation systems

Traffic congestion and mobility are significant challenges in the rapidly growing cities of Asia-Pacific, and are commonly found on the wish list of problems to address within smart city initiatives. These issues impact citizen well-being, leading to long hours of commuting, increased air pollution, and inaccessibility of city services. Two of the AI-enabled responses often deployed in smart cities are smart traffic lights and smart public transport.

Smart transportation systems often require major infrastructure works and the development of one or multiple systems to manage and optimize transport at various levels of scale and complexity (see Appendix 1 for a breakdown of these). Much empirical research and development in this field focuses on optimizing traffic flows based on real-time monitoring of traffic conditions (Javaid, 2018; Ghazal et al., 2016; Zhao et al., 2012). Data on traffic conditions is collected using vehicle detection sensors and either used to determine optimal timing for a single traffic light, or transmitted over the Internet to a data processing center where it is automatically analyzed to determine optimal traffic light timings for a broader system. Efficiency gains in smart public transport are envisaged in a similar manner (Hörold et al., 2015; Haque et al., 2013). Public transportation services can be integrated within the same traffic management system both to prioritize public transport vehicles over private vehicles at intersections and to inform route optimization so that popular routes are serviced effectively and congestion is avoided. As such, smart traffic management systems usually include an end-to-end platform to which users have access (usually a mobile application).
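To make the single-intersection case described above concrete, here is a minimal sketch of allocating green time from vehicle-detection counts. This is an illustration only, not any city’s deployed system; the approach names, cycle length, and minimum green time are all hypothetical parameters.

```python
# Minimal sketch: set signal timing at one intersection from
# vehicle-detection sensor counts, as described above.
# All parameters are hypothetical; real deployments add pedestrian
# phases, safety interlocks, and network-wide coordination.

MIN_GREEN = 10  # seconds: floor so low-traffic approaches are still served
CYCLE = 90      # seconds of green time to distribute per cycle

def allocate_green(counts: dict[str, int]) -> dict[str, int]:
    """Split CYCLE seconds of green time across approaches in
    proportion to sensed vehicle counts, above a minimum floor."""
    total = sum(counts.values())
    spare = CYCLE - MIN_GREEN * len(counts)
    greens = {}
    for approach, n in counts.items():
        share = (n / total) * spare if total else spare / len(counts)
        greens[approach] = MIN_GREEN + round(share)
    return greens

# A heavier northbound flow receives a longer green phase.
print(allocate_green({"north": 30, "south": 10, "east": 5, "west": 5}))
# prints: {'north': 40, 'south': 20, 'east': 15, 'west': 15}
```

Note that the only objective encoded here is vehicle throughput; any other priority, such as pedestrian waiting time or safety at night, has to be written into the optimization explicitly, which is exactly the kind of design decision this chapter argues is rarely made visible.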

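Returning to the distinction drawn in Section 3.1, the difference between facial verification (1:1) and identification (1:N) can be sketched with a toy example. This assumes a face-embedding model has already mapped each image to a vector (a step not shown); the vectors, the similarity measure (cosine), and the threshold are illustrative assumptions, not details of the systems NIST evaluated.

```python
import math

# Toy sketch of the two facial recognition modes described in Section 3.1.
# Embeddings, similarity measure, and threshold are illustrative only.

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify(probe, claimed, threshold=0.8):
    """Verification (1:1): do two face images show the same person?"""
    return cosine(probe, claimed) >= threshold

def identify(probe, database, threshold=0.8):
    """Identification (1:N): best match in a database, or None.
    Lowering the threshold catches more true matches but raises the
    false positive rate, the metric on which women fared worse in the
    NIST evaluation cited above."""
    best_id, best_score = None, threshold
    for person_id, emb in database.items():
        score = cosine(probe, emb)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id

db = {"A": [1.0, 0.0, 0.1], "B": [0.0, 1.0, 0.0]}  # hypothetical gallery
probe = [0.9, 0.1, 0.1]                            # hypothetical query face
print(verify(probe, db["A"]), identify(probe, db))  # prints: True A
```

The threshold is the policy-relevant knob here: where it is set, and on whose faces it was tuned, determines who is falsely matched.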

It is often not clear how system engineers have encoded priorities into the optimization of transport systems. Women may have particular mobility patterns and concerns that have not been factored into optimization algorithms. Data collection in smart city initiatives is often aggregated across genders, which renders women’s specific patterns and needs invisible. Inequities persist in Asian cities despite longstanding evidence of gendered differences in transport and several initiatives to address these issues (Thynell, 2016). According to Singh (2019), women often make more complex multi-purpose trips using different modes of transport, travelling at off-peak hours. Women also place a higher priority on safety and security in their transport than men, and this can lead them to take more costly or less efficient modes of transport (Gekoski et al., 2017). Little to no attention has been paid to understanding how and why women’s mobility can be supported and affected by AI-enabled systems. As such, it is often assumed that smart traffic lights and AI-enabled public transport will serve the interests of women because of efficiency gains in transportation systems. Rather, this needs to be tested, and women’s preferences on system objectives factored into optimization functions or data sets for learning algorithms – even if these need to initially be synthesized for training purposes.

There has been work done, using AI techniques, on issues women find important, such as how to make public transport safer. In Australia, Transport New South Wales (2020) recently proposed a challenge to seek tenders for solutions to make travelling in Sydney safer for women at night, with a focus on data and suggested solution areas including “Deep Technology.” There have also been non-technical solutions proposed, such as women-only subway carriages, although some argue that such solutions do not address the root of the problem and instead reinforce divisions between the sexes (Thynell, 2016). More work is needed to better integrate the needs and aspirations of women in AI-fueled transportation systems.

4. Addressing women’s needs and aspirations in AI-enabled smart cities

Overall, we find a lack of clarity in national smart city policymaking concerning the presence and inclusion of women. Our review of two AI for social good applications likewise finds significant gaps in the knowledge concerning how these technologies contribute to making public spaces safer and transportation systems more effective for diverse women. To address the needs and aspirations of diverse women, our approach synthesizes principles, practices, and concerns of female and feminist scholars, activists, and practitioners with significant expertise in supporting women. We sought to interview scholars with experience working at the intersection of women and technology, but also included feminists with broader-ranging experience across Asia-Pacific contexts. We conducted 12 interviews with 13 selected scholars, activists, and practitioners (Table 2). We contacted another 23 experts, but in each case we were unable to schedule an interview, received no response, or the invitee chose not to participate. Due to the diverse range of knowledge and experience of the selected participants, interview questions centered on their background, knowledge, and experience in implementing intersectional notions of identity, how their thinking has evolved in the context of rapid technological change, their specific recommendations for smart city initiatives, and any insights regarding transnational or regional change. The study was granted approval by the Australian National University human ethics committee under protocol 2019/732. The experts provided their informed consent to participate in the study and to the use of their full names in this publication.


Name and Organization (country/region of knowledge/experience discussed for this study in parentheses):

• Diane Bell, Distinguished Honorary Professor, Anthropology, ANU College of Asia and the Pacific (Australia)
• Genevieve Bell, Distinguished Professor, Florence Violet McKenzie Chair, Director of the 3A Institute, Australian National University, and Vice President, Senior Fellow, Intel Corporation (Australia)
• Nandini Chami, Deputy Director, IT for Change (India)
• Melissa Gregg, Principal Engineer and Research Director, Client Computing Group, Intel (Asia-Pacific)
• Anita Gurumurthy, Executive Director, IT for Change (India)
• Sue Keay, Research Director for Cyber-Physical Systems, Data61 (Australia)
• Padmini Ray Murray, Founder, Design Beku (India)
• Nimita Pandey, Research and Information System for Developing Countries (India)
• Ruhiya Kristine Seward, Senior Programme Officer, Networked Economies, International Development Research Centre (Asia-Pacific)
• Araba Sey, Principal Researcher, Research ICT Africa (Asia-Pacific)
• Hannah Thinyane, Principal Research Fellow, UN University Institute in Macau (Thailand)
• Amanda H. A. Watson, Research Fellow, Department of Pacific Affairs, ANU College of Asia and the Pacific (Papua New Guinea)
• Joanna Zubrzyki, Associate Professor of Social Work, Australian Catholic University (Australia)

Table 2: List of interviewees and spread of knowledge/experience across Asia-Pacific


How to expand the capacity of AI to build better society

We carried out structural coding of the interview transcripts to categorize sections of the interviews into themes of inquiry (MacQueen, McLellan, Kay, & Milstein, 1998). This is a particularly useful strategy when the research is exploratory in nature (Miles & Huberman, 1994), as in this case. In a second round of analysis, we selected quotations where there was a high level of agreement, difference, or nuanced opinion amongst the experts. We included illustrative examples to give richness to the theme where possible.

4.1. The 3A Framework

The themes of inquiry we selected were based on a new framework being developed, tested, and iterated by the Agency, Autonomy, Assurance (3A) Institute, called the 3A Framework. The 3A Framework is structured around six themes, each grappling with a core question to unpack the interplay between people, technology, and the environment:

• Agency: How much agency do we give technology?
• Autonomy: How do we design for an autonomous world?
• Assurance: How do we preserve our safety and values?
• Indicators: How do we measure performance and success?
• Interfaces: How will technologies, systems, and humans work together?
• Intent: Why, by whom, and for what purposes has the system been constructed?

The 3A Framework was developed by Genevieve Bell, Director of the 3A Institute, and is based on her more than 20 years of experience working at Intel Corporation. From 2017 to 2020, it was expanded and tested by the staff of the 3A Institute. To date, the Framework has been used and clarified through qualitative case study research, partnership work with industry, and a series of educational experiments, including micro-credentials and a prototype Masters in Applied Cybernetics – supported by Microsoft, KPMG, and Macquarie Bank – that involves two cohorts of highly skilled, multi-disciplinary, and diverse professionals. In this paper we evaluate the appropriateness of the Framework to guide inclusive policy and practice with and for women in the context of AI-enabled smart cities. The following section details our findings.

5. Findings

This section outlines the findings of our interviews with experts in relation to the 3A Framework.

5.1. Agency: The need to reconstitute AI technology design processes

Across the two cases of AI for social good identified above, there are tasks that can be performed without human oversight. In some instances, women can be personally identified on the street and a prediction made about where they are going or what actions they will take (Huawei Enterprise, 2019). Likewise, in the case of mobility, sensors and cameras, combined with machine learning algorithms, monitor and manage traffic flows. Policymakers will need to work through whether these functionalities are desirable or empowering for women.

A main problem the experts mentioned is that, by the time a technology has been developed and implemented, it is often too late to consider what it should and should not do. Experts cautioned that the design phase of the AI technologies we considered needs to be reconstructed. Actors need to make explicit the types of problems viewed as being important (or profitable) enough to solve, the underlying assumptions made, and who is included in the process of defining and solving problems. However, there were differing opinions regarding how diverse women should be represented in this process. Genevieve Bell, who grappled with these issues in her role as a Senior Fellow at Intel, explained that:

“[It’s] not just about having more women in the room when the decision is being made. It’s about structuring the way the decision is made completely differently because [there’s] no point if you’re still driving to building a technology in one place and scaling it to the planet. It doesn’t matter how many other voices you’ve got in the room, if they’re not the right voices it makes no difference. And thinking about who the right voices would be. That doesn’t just mean having women in the room. It means having women for whom this might be their community… you have to change the nature of how decisions were made, how conversations were constituted, how you thought about hearing different voices in the room and making room for people, and how you thought about what the logic was under which you are operating.”

We debate the topic of representing women further in Section 5.3. However, what we emphasize here is how the design process of an AI technology might be structured and, ultimately, what technology is meant to do (i.e., the intent behind it; see Section 5.6). Genevieve Bell argues that clarifying decision-making processes and increasing diversity in thoughtful and intentional ways precedes decisions about what AI technology can and cannot do.

There is another thread related to the importance of incorporating intersectional theory (Crenshaw, 1991) into design practice, as articulated by Joanna Zubrzycki, a lecturer in social work from Australian Catholic University:

“I think it’s really important not to also essentialize or stereotype that all women will have the same sets of values just because they’re women…. One of the really important contributions I think of postmodern feminism was to say that you just cannot make global assumptions about the lived experience of all women and therefore the values of all women.”

Padmini Ray Murray, founder of Design Beku, a collective working at the intersection of design and technology in India, with substantial experience implementing intersectional notions of identity in smart city design, reflected on how difficult this can be: “Histories of feminism in this country have been articulated and published by the dominant caste, and so therefore what is seen as ‘Indian feminism’ is kind of seen through the lenses of the savarna woman, who embodies the dominant caste woman”. Approaches to balancing dominant voices are also discussed in Section 5.3; here, we note how challenging it can be to resolve these sorts of issues.

Design processes also need to incorporate a context-integration phase. Genevieve Bell is wary of the temptation to “build a global thing and then just have localization strategies”, and suggests that “[creating] a series of locally inflected designs that have some common threads” is more achievable using a bottom-up approach rather than a top-down one. One advantage of such an approach is that “you hear what the genuine set of problems that people feel are, that need to be solved... sometimes that what you think you know about the place isn’t what is the problem people want to solve locally” (Genevieve Bell).

A good example of how design processes can integrate these insights when working on problems of high relevance to women is Hannah Thinyane’s Apprise system (Box 1). Thinyane and her team at UN University Macau have been exploring how digital technology can be used to reduce the exploitation of workers in four sectors of employment in Thailand: manufacturing, fishing, forced begging, and sex work. Following a values-sensitive design approach, Thinyane developed Apprise, a multilingual expert system, to support frontline responders (labor inspectors, police officers, community organization representatives) in identifying victims of forced labor and human trafficking. Frontline responders access the application on their phone, using a question list that has been developed to screen for potential vulnerability:

“The question is a yes or no question, which makes it easy for us to compute afterwards the vulnerability of the situation there. So, how exploitation looks different in different sectors. So, the kinds of questions I might ask in different industry sectors, say in fishing, ‘I might not let you off your boat’, when you’re in port, that’s a way of confining you. And in sex work it might be that I won’t allow you to choose your own customers.”

According to Thinyane and Bhat (2019), there is a gap in our understanding of how many workers are placed in challenging situations that are not clearly forced labor: situations that have begun on consensual and mutually beneficial terms but devolve into abusive work relationships. It takes strong community relationships and cultural sensitivity to be able to tease out whether a worker is vulnerable or not. For instance, in the sex industry, Thinyane explains that the question list was developed in consultation with sex workers, community-based organizations (CBOs), and human rights lawyers. It also incorporates empirical evidence drawn from research with over 3,000 sex workers to identify the top four practices of exploitation within sex work in Thailand (Empower Foundation, 2012).

Another approach emphasizes women’s empowerment, rather than focusing the design process on addressing specific problems. Empowerment broadly refers to capabilities to control one’s life choices or the decisions that affect one’s life, with the literature defining numerous dimensions and structural aspects to consider (Friedmann, 1992; Oakley, 2001; Perkins & Zimmerman, 1995). Anita Gurumurthy and Nandini Chami are from IT for Change, a leading civil society organization headquartered in Bangalore, India. IT for Change is engaged in research, policy advocacy, and field practice at the intersections of digital and data technologies, with social justice and equality at the international, national, and local levels. Their approach to women’s empowerment rejects one-size-fits-all solutions, enabling women and girls to define what empowerment means for themselves:

“The team that works in schools has sought to build a curriculum that uses the Internet and digital media to create spaces for self-reflection and collective reflection among adolescent girls, where they can chart out their own definitions and descriptions of what it means to become empowered through technology.”

Box 1: As pictured in (a), a frontline responder will give a smartphone to a worker to select a language. A series of questions are spoken to the worker in their own language whilst they are wearing headphones, so that they can respond without scrutiny from the responder or a translator. Thinyane notes in earlier research that translators were often not trusted, or were corrupt. Workers likewise felt embarrassed to answer the questions honestly out loud to the responder. Figures (b) and (c) show the interface that workers see when answering the questions. Once the worker answers all of the questions, they hand the phone back to the responder and it displays a categorization of the worker for the responder to review (Figure (d)). They may then offer additional options or avenues of support. In the future, Thinyane’s team hopes to use Apprise to identify patterns of exploitation, which machine learning algorithms may facilitate.
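The yes/no screening flow described in Box 1 can be sketched in a few lines of code. The sketch below is an illustrative reconstruction, not the actual Apprise implementation: the question texts, sector names, scoring rule, and category labels are hypothetical stand-ins for the expert-validated question lists described above.

```python
# Illustrative sketch of a sector-specific yes/no screening flow,
# loosely modeled on the Apprise process described in Box 1.
# All questions, the scoring rule, and the category labels are
# hypothetical stand-ins, not the validated Apprise content.

QUESTIONS = {
    # A "yes" answer marks a potential indicator of exploitation.
    "fishing": [
        "Are you prevented from leaving the boat while in port?",
        "Are your identity documents held by someone else?",
    ],
    "sex work": [
        "Are you prevented from choosing your own customers?",
        "Are your earnings withheld from you?",
    ],
}

def screen_worker(sector, answers):
    """Tally yes/no answers and return a categorization for the
    frontline responder to review (step (d) in Box 1)."""
    questions = QUESTIONS[sector]
    if len(answers) != len(questions):
        raise ValueError("every question must be answered")
    markers = sum(answers)  # each True counts as 1
    # Hypothetical rule: any marker triggers a follow-up.
    category = "potentially vulnerable" if markers else "no markers reported"
    return {"sector": sector, "markers": markers, "category": category}

print(screen_worker("fishing", [True, False])["category"])
# potentially vulnerable
```

The value of the yes/no format, as Thinyane notes in the quotation above, is that vulnerability can be computed on the device immediately, so the responder reviews a categorization rather than interpreting raw answers on the spot.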

The premise that individual reflections should factor into the design decisions of AI technology is complex, indicating that a new area of research is warranted. However, protecting and supporting such spaces for reflection is also important when it comes to enabling women’s participation in AI-enabled smart cities:

“I think that what AI is going to do for women’s empowerment, and what that would mean for the ideal of gender equal or feminist AI futures, is not only about women’s safety in smart cities. It’s really about the idea of that city in terms of many different citizenship planes… If you look at AI as being integrated into a larger economic ecosystem, or AI also re-architecting these larger economic and social systems, we see that AI becomes part of that important ingredient which is in a dialectic with society, policy, politics, and economics” (Anita Gurumurthy).

Gurumurthy stressed that attending to how AI technology contributes to women’s participation in different “citizenship planes” is crucial for women’s empowerment. IT for Change has therefore confronted these issues by researching critical pedagogies, capacities, and impacts, which is then incorporated into their advocacy and policy work, and filters back into their practice engaging directly with communities.

Overall, we find that there are a number of conditions that need to be addressed in the design phase of AI technology before a discussion can take place about what AI can and cannot do. This reinforces the need to simultaneously investigate issues across the 3A Framework (see Sections 5.2 and 5.3 in particular). However, what we conclude is that inclusive AI requires greater attention and transparency regarding the decision-making processes surrounding its development, particularly when identifying problems and intended outcomes. There were differing opinions regarding which actors should be involved in design decisions. Some experts believe strongly in the need to incorporate participatory democratic technology design processes, underpinned by women’s empowerment objectives. At the very least, there is a need for designers of AI technology to generate stronger links directly with diverse members of a community, as all the experts interviewed supported intersectional notions of identity.

5.2. Autonomy: Building for the diverse realities of women

Designing for an autonomous world will involve being sensitive to the practical realities women face. This involves many facets of a woman’s life and various aspects of her identity, not just those directly implicated in an AI-enabled system. As outlined in Section 5.1, one of the points of agreement amongst the experts interviewed regarded the importance of context-driven feminist praxis: “You’re talking about a region that is essentially both on the infrastructure side, socio-economic development side, as well as the digital innovation side very, I would say, highly fragmented. You’re not really seeing one picture. And therefore, there can be no one-size-fits-all” (Anita Gurumurthy). This theme focuses on understanding what processes and relationships AI is automating, and what they reveal about the power and position of women in smart cities at the time. There are three key insights from the experts, concerning the impacts of underlying infrastructure; differences in access and abilities across marginalized populations of women; and the need to educate women about the changes AI introduces as part of the process of including women in smart city development.

Our literature review shows that it is common for Asia-Pacific countries to pilot smart city initiatives in places where it may be easier to implement AI-enabled systems in terms of the required underlying infrastructure. Experts discussed how exclusions can take place on three levels: the country level, the city level, and within cities. At the country level, a lack of power and Internet infrastructure rules out many AI for social good applications. Amanda Watson, a Research Fellow who has been researching mobile phone use in Papua New Guinea for 12 years, provided a useful criterion for systems development there:

“Every single time someone mentions a possible project idea to me, which has happened many times over the years, I frequently say, first of all, can it work offline? So, if there is an Internet or cloud element, can it still function if you have no Internet? For instance, can the information be stored locally… what’s the battery life and power if there’s some sort of device because electricity does go down.”

Many citizens in Papua New Guinea do not have electricity, and the electricity grids that do exist may depend on solar energy. Indeed, many remote Australian towns and cities face similar constraints.

Exclusions also happen at the city level, as Nimita Pandey, a Research Associate working for the New Delhi-based Research and Information System (RIS) for Developing Countries, with expertise in science, technology, and innovation policy perspectives, described in relation to the choices India has made:

“There is a huge list of criteria and processes that they opt in picking up cities, in order to make them smart. And the idea of making them ‘smart’ is to make them ‘sustainable’ in terms of energy, in terms of infrastructure, in terms of quality of living. But while doing this, the idea of ‘sustainability’ is lost; it actually causes ‘exclusion’. And this exclusion is not merely from the gender perspective, but in terms of the socioeconomic demographic angle as well. Most of these smart cities are not accessible to everyone who is part of the city.”

Pandey argued that women are also excluded in heterogeneous ways within cities. In infrastructure-poor locations, smart cities may need to either decentralize the management of autonomous systems or find specific ways to include marginalized women. Anita Gurumurthy, reflecting on the Indian context, felt the latter was critically important:

“And so smart city data for energy management or water management or housing, each of these is not going to be managed in silos. The city will manage all of this data in an integrated way and therefore it is basically a question of where are women in a participatory democracy? Is the data management system reflecting their concerns? What is it that women have to say about water consumption in the city? Which women’s voices are being captured by the system? Is it covering the voices of the women who are waking up early in the morning to fill their pots in these slums and then rushing as domestic help to work in somebody’s house? And struggling to send their daughters to school whose safety they can’t ensure? And also finding city transport, creaking under the pressure of efficiency.”

Nandini Chami likewise felt that new models of ownership are required:

“We need to think deeply about the design of smart city projects. In the data systems being set up in these projects through public-private partnerships, who should be the trustees for the management of common data resources? Can we assume private companies will automatically uphold public accountability or do we need completely new arrangements for the stewardship of citizen data? We need a radical overhaul of data governance frameworks.”

The ownership of data resources is a particularly sticky topic where AFRT and mobility pattern recognition are concerned, which is further discussed in Section 5.3. However, Chami gave the example of South Korea, which opted to create its own mapping platform to map various resources, not just locations (see Korea Legislation Research Institute, 2019). This strategy may support Asia-Pacific countries in adopting heterogeneous models of integration for autonomous systems, which could address the needs of diverse women. In the case of South Korea’s mapping platform, contributing actors need to be able to frame their service in terms of the platform’s aims and how the benefits would be shared publicly. This would encourage companies to make explicit how their service responds to particular populations of women in a specific context.

A second aspect that needs consideration relates to how diverse women have differing abilities to both understand and interact with automated processes and technologies. There is limited empirical evidence regarding how women across the Asia-Pacific might have different capacities to engage with CCTV or smart public transportation systems. We do know from experience that marginalized women have drastically different needs, stemming from divergent cultures and capabilities of interacting with technology such as smartphones. IT for Change has been supporting women across rural and urban settings in India, investigating how technology can be used to empower girls, all the way up to elderly women. They have learned to adapt their engagement strategies to various levels of digital literacy and technological usage patterns. Speaking of a project based in Mysore with older women, Chami mentioned post-literacy approaches for empowerment education: “for example, you cannot use a lot of text-based aids or learning materials. One would have to rely a lot more on highly audio-visual tools: videos, digital stories, [and] voice messages on mobile”. This implies a necessity to account for skill and cultural diversity when embedding automated technologies and processes into an environment.

Padmini Murray, having conducted one of the only studies on the experiences of girls in the smart city, also found that girls in Delhi were reporting new risks that required mitigation: “I think what was most visible was that patriarchy enacts itself through digital vectors as well as through the material. So, you would have things like girls complaining about being sent pornography, harassment on platforms themselves”. Melissa Gregg, along with Genevieve and Diane Bell, likewise expressed the importance of ethnographic fieldwork as a means to understand the particular challenges experienced by women as new autonomous technologies are introduced. However, Gregg cautioned that at times it may not be obvious what processes AI is automating: “What I wonder though is… how much do people even know about what’s being collected about them right now. So, [for] me, my first question is how are people even made aware of how they are tracked?” As discussed further in Section 5.5, Ruhiya Seward argued that new education programs are needed.

In sum, whilst the AI technology and the integrated systems needed to implement these technologies in Asia-Pacific cities are important, our research emphasizes that inclusive practice comprises three aspects. Firstly, when taking autonomous processes and systems to scale, policymakers need to make clear links between plans to reduce infrastructure inequalities and plans to develop smart city initiatives. Secondly, regardless of successful pilot tests, gender and cultural diversity are clearly factors that will impact on the roll-out of autonomous systems. Greater attention and planning must be devoted to accompanying implementation with research and refinement, to customize and problem-solve across contexts. Thirdly, citizen education programs are urgently needed to raise awareness of the myriad impacts and implications that automated processes have.

5.3. Assurance: Ensuring diverse women’s needs and values are heard

Assurance refers to the practices, processes, institutions, and rules that ensure safety and respect for societal values – in this case, especially from the perspectives of diverse women. It is therefore not structured by one relationship alone, but by a system of relationships between all the actors involved – including AI technologies and systems.

If assurance is conceptualized as a system of relationships, the experts interviewed have worked tirelessly and consistently to ensure that women are key actors whose voices have a right to be heard in such a system. The main difficulty in the context of AI-enabled smart cities is the lack of clear roles and opportunities to participate in decision-making surrounding how these initiatives impact on women’s lives. There are lessons to be learned from the struggles that the experts interviewed have confronted in their own lives and careers. For instance, Diane Bell, an acclaimed Australian feminist anthropologist, recounted the struggles she endured to pursue her education and to gain access to scholarships and grants as a single parent:

“I was the first woman to do anthropological fieldwork in Australia with two children as a single parent. There had been women in the field, but as a wife looking after his children, or it had been a woman just for a very short period or somebody had taken the kids. All the major women who’d worked in the field were single and had no children.”

Moreover, the experiences of the experts also highlight what it means for women to claim greater accountability for the conditions and quality of life imbued by AI-enabled smart cities. As Diane Bell expressed, concerning her experience working with Aboriginal women in Australia:

“How do we get all those voices to the table? How do we hear from those people? How do we make the conditions so that all of those are there? But why should it be ‘we’ making the conditions? How do we have it so that those people are saying, ‘This is my issue too’. … How do we get that consciousness of who’s at the table? To understand how these broader issues are interrelated? An ‘Aboriginal issue’ is not just about where do I live because I’m Aboriginal and how is my language and my culture respected, but why am I not at the table on issues of national security for instance? Where should my understanding and my history be understood? [And] it should be right across the board.”

Sue Keay pointed to ethics panels, especially in a medical context, as a good example of consulting with people who are representative of a diverse community. Joanna Zubrzycki talked about her work with Indigenous people and noted that you cannot always get everyone to the table at the same time, so “[you’ve] got to reach out and ensure that you are listening... and find those diverse perspectives... you’ve got to make the effort to go to people to consult”.

Yet, in the current phase of technological development and implementation, we have seen limited evidence of consultation or participation in decision-making, reducing the scope of the effective local governance of which Diane Bell speaks. Therefore, the experts speculated about mechanisms that may return attention to questions and issues of participation in smart city governance. One area that emerged regarded data ownership and governance. Araba Sey explained:

“What should happen, or what might be more practical, is for government and civil society organizations to find ways to partner somehow with the commercial or corporate entities to ethically get access to the data that they automatically generate, and try and use it in ways that go beyond just making profit. That may be an arrangement that could possibly at least share the responsibility, and make sure that it’s not just the corporate bodies that have access to the data and use it only for economic gain.”

In contrast, Anita Gurumurthy reflected on her organization’s experience developing a community-based water management app in Bangalore, India, and the steps taken to enable collective ownership of data and the skills needed for citizens, women and men alike, to use the system to claim greater accountability from local officials:

“This is the idea of [a] smart city that we think should really be replicated, not necessarily to scale in a homogenized fashion, but in context-appropriate ways based on the particular needs of communities. There should actually be a way by which communities can manage their data and engage with local authorities for claims-making, with the complete knowledge of how data interfaces work.”

However, as Padmini Murray pointed out, referencing Baud et al. (2014), the ways in which similarly participatory democratic processes have been implemented in smart city initiatives have tended to over-index the perspectives of the middle class, leading to significant bias in interpretation and inclusion. To work towards “resolving intersectionality with consensus”, Murray, along with Mozilla Fellow Divij Joshi, is developing an automated decision-making system precisely for this purpose. They are constructing an interactive platform that “demystifies how automated decision-making is done in the smart city. What technologies are used, what is the data that those technologies [are using], what are the assumptions, rather, that are being built into those technologies to take the decisions that they do”.

Another essential intervention strategy is to significantly increase evaluations, including social audits, of AI-enabled smart city initiatives. As Anita Gurumurthy argued:

“About four years ago, after the very unfortunate event of a young woman student in Delhi being raped, a fund was set up by the government, and then UN Women and many other actors then got on board to initiate action on women and safety. Many apps were introduced as part of such action and I’m not really sure whether the assessments and evaluations of these really do exist. I haven’t seen many. We work on the whole idea of feminism in technology and I do think that we should really be having many more evaluations.”

Whilst we elaborate on the potential purposes of evaluations in the next section, generally speaking, the assurance theme highlighted that voice, representation, participation in decision-making, and community ownership are of great consequence to including women in AI-enabled smart cities. There is reason to explore innovative ways to address these processes and topics, as Murray is doing. Indeed, this thematic area seems a critical one for further empirical research.

5.4. Indicators: Addressing root causes rather than symptoms of gender inequality, and the concept of equity

When AI technologies are embedded within urban infrastructures, they may be designed and evaluated with a certain purpose in mind. Measuring the performance of a remote sensing system for traffic flow management might focus on indicators related to time or congestion. Likewise, facial recognition systems might monitor error rates and positive identification rates. In either of these cases, performance measures emphasize the envisioned purposes of the technology and its overarching efficiencies. Nevertheless, these technologies affect critical infrastructure and the social fabric within which urban living takes shape. Moreover, the inclusion of women in this context implies that gender relations will be rebalanced in the process. Yet, the needs of diverse women and men are complex and are particularly challenging to measure.

The gender inequalities observed both in access to technology and in its related industries are the main challenges, and the experts are no strangers to them. Women tend to have less access to technology across four basic access indicators: computer use, mobile phone ownership, mobile phone use, and access to the Internet (Sey & Hafkin, 2019). Women also constitute less than 35% of information and communications technology (ICT) and related professions, with substantially fewer in leadership positions (Sey & Hafkin, 2019). It is this persistent awareness of the severe gendered imbalances in access and usage patterns, affordability, workplaces, and industry representation that propels experts to engage in generating knowledge and praxis to bridge divides. Araba Sey is a scholar who has worked for the last three years on the UN’s Equals in Tech initiative. Prior to that, she spent more than a decade investigating inequality between nations in terms of ICT infrastructure and uses, as well as between socioeconomic groups within countries. As someone who understands these imbalances all too well, Sey expressed frustration that our knowledge of the issues has yielded little progress towards resolving inequalities:

“I feel like some of the things we’re measuring need to start at a much, much earlier age, and may not all be as quantitative as the current trends in the collection. I feel that a lot of what happened could be addressed at early stages, so at the primary elementary school level and then in the home, so that things like [a] parent’s attitude towards gender… or towards [their children’s] career [choices].”
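The four basic access indicators cited from Sey and Hafkin (2019), computer use, mobile phone ownership, mobile phone use, and access to the Internet, lend themselves to a simple parity calculation that smart city programs could track over time. The sketch below is illustrative only; the percentage figures are invented placeholders, not data from that report.

```python
# Gender parity across the four basic access indicators named in the
# text. All percentage figures are hypothetical placeholders, not
# data from Sey & Hafkin (2019).

access = {
    # indicator: (% of women with access, % of men with access)
    "computer use":           (30.0, 40.0),
    "mobile phone ownership": (70.0, 80.0),
    "mobile phone use":       (75.0, 85.0),
    "access to the Internet": (35.0, 50.0),
}

def parity_ratio(women_pct, men_pct):
    """Women's access relative to men's: 1.0 is parity; values
    below 1.0 indicate women are under-served."""
    return round(women_pct / men_pct, 2)

for indicator, (w, m) in access.items():
    print(f"{indicator}: parity ratio {parity_ratio(w, m)}")
```

Tracked per district as well as per city, a falling ratio on any indicator would flag the kind of within-city exclusion the experts describe in Section 5.2, before an AI-enabled service is scaled on top of it.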

222
Including Women in AI-enabled Smart Cities:
Developing Gender-inclusive AI Policy and Practice in the Asia-Pacific Region

Sey’s advice is to concentrate efforts on addressing the root causes of gender inequality rather than treating symptoms down the line. However, although it might seem out of scope to address gender inequality issues within smart city initiatives per se, it could be an important mitigation strategy.

In contrast, Diane Bell spent decades researching and advocating for Aboriginal Australian women. Her experience highlights how the international framing of gender equality within the Sustainable Development Goals (SDGs) may not be an appropriate standard to set. On speaking of her fieldwork from the 1970s in central Australia:

“They had very independent lives. They hunted and gathered for their own food. Some of that food would go to their menfolk, but they were self-sufficient in themselves. They had their own camps that were organized according to their relationships to country and they had their own ceremonies which were organized by themselves… The notion that there was a feminist perspective on practices that might be underwritten by shared values and principles but were pursued in separate spaces was very clear to me. And that was a difficult thing to explain within the white women’s movement at the time, which wanted equality and integration. And I was saying there are other models. There are models with independent bases of power and standing.”

Alternative models (to equality) based on independence and freedom to define one’s measures of success are similar to IT for Change’s approach to women’s empowerment discussed in Section 5.1. Both require sufficient trust and time to establish as a means to protect “independent bases of power and standing”.

Trusting relationships are indeed critical to developing measures of success shared across organizations. Ruhiya Seward, based in the Amman, Middle East office of the International Development Research Centre and working on the technology and innovation area in the Networked Economies group, has been working to improve gender-related outcomes across her team. She has also been overseeing feminist projects including the Gender and Technology Network, led by the Association for Progressive Communications (APC). She reflected on the specific challenges of working collaboratively across institutions:

223
How to expand the capacity of AI to build better society

“Feminism is, in a way, depending on how broad your umbrella is, what we might call kind of participatory democracy or even democratic socialism. It takes time to activate. And yet there are the realities of getting work done and being responsive and doing stuff and forging forward and having a strategy – these challenges don’t always lend themselves to an amoebic participatory/collaborative management.… This can be a challenge when it comes to policy ecosystems versus feminist ecosystems… You actually need policy outcomes in order to show that it’s valid and worthwhile and that you’re spending public money in good ways.”

There may be some indicators that can be negotiated, whilst others cannot. This may be why it is also beneficial to establish shared principles of success. Nimita Pandey’s organization, RIS, developed a framework to contextualize responsible research and innovation (RRI) in India. She mentioned that the framework provides a principled basis to examine the social dimension, spanning multiple projects and contexts:

“From a developing country perspective, we proposed the [Access, Equity, Inclusion] framework… because while reflecting at gender under the project(s), it has emerged as a very critical issue; even there have been mandates across different departments, particularly the Department of Science and Technology. Studies would definitely add to our methodology, in order to develop an exhaustive list of indicators to assess and evaluate programs and initiatives, in order to find the enablers or barriers, which are critical for gender inclusion.”

Initiatives such as these could potentially be integrated into smart city projects as a means to monitor and evaluate gender issues across projects.

Lastly, many of the experts agreed that including women in AI-enabled smart cities depends on the participation of women in the relevant skilled professions, policy spheres, public services, and leadership roles. Sue Keay is the only female research director (of three) at Australia’s Commonwealth Scientific and Industrial Research Organisation (CSIRO). She leads four group leaders, none of whom are women. CSIRO joined the Science in Australia Gender Equity (SAGE) program, which is a partnership between the Australian Academy of Science and the Australian Academy of Technology and Engineering. Its vision is to “improve gender equity in STEMM in the Australian higher education and research sector by building a sustainable and adaptable Athena SWAN model for Australia” (SAGE, n.d.). Such a model provides a charter of principles to ensure that their policies, practices, action plans, and culture reduce gender inequality. Sharing data on these matters enables some accountability for this issue from the organization. However, Keay felt that a great deal more needs to happen:

“With these initiatives that I personally was following or kind of I was asked to do, I guess unfortunately, they’re all things that I’ve decided to do. I would prefer if that was just a priority for the area that I work in, but at the moment it’s not… I’m increasingly feeling that it actually has to be something that is mandated, that it’s compulsory that there is no ifs, buts, or maybes, people just have to do it. And it doesn’t actually matter the reason, people [just] know that they have to think about safety in the workplace, they should also have to be thinking about inclusion in the workplace… I certainly believe we must be publishing metrics.”

In sum, indicators designed to establish and track progress towards various levels of reducing gender inequality within AI-enabled systems are needed. The experts flagged three scales of complexity to consider: firstly, indicators relating to global gender equality targets (or, alternatively, independently defined targets); secondly, indicators relating to specific projects or programs; and, thirdly, evaluations must seek to uncover how AI-enabled smart cities address the root causes, not only the symptoms, of gender inequity.

5.5. Interfaces: Defining boundaries and considering accessibility

As outlined in Sections 5.1 and 5.2, the AI-enabled smart city technologies we consider are rarely designed in a manner that aligns with the feminist praxis discussed in the interviews. The experts
identified how interfaces within AI-enabled smart cities are a crucial element to consider where diverse women are concerned. The most-marginalized women within Asia-Pacific cities are potentially concealed and further disadvantaged when they lack the accessibility and knowledge to interact with the interfaces of a system. Women’s public spaces are also increasingly occupied by the sensors and cameras needed to operate smart traffic systems and CCTV systems. This occupation has implications on the definition and communication of boundaries to acknowledge where interfaces begin and end.

Regarding accessibility, Ruhiya Seward remarked that severe access inequalities necessarily impact on how people may experience interfaces with a system: “So many people in the world don’t connect [to the Internet] at all, which means that they don’t show up in the data. If you don’t show up in the data, you don’t matter to AI”. On the other hand, by building AI technologies into city infrastructures, women may have less of a chance to decide whether or how to connect with a system. Hannah Thinyane referred specifically to this point:

“What if they don’t actually have an ID document? What if they don’t want to be known? There are all of these things you have to consider when designing a system that will be citywide. I guess how does it also work with disabilities? How does it include disabled people? And then with migrants, and Thailand has such a huge population of migrant workers, how have people (documented or undocumented migrant workers) [been] included in a design of a system like that? Especially if it’s got anything to do with identity.”

If actors managing AI-enabled processes do not incorporate inclusive practice, there may be no way to tell if interfaces with a system actually function, or are desirable for diverse women.

The occupation of public spaces by new interfaces with AI-enabled systems is also a concern. Melissa Gregg reflected on some of the challenges emerging from her involvement in the research and development of smart home devices, primarily in the US:

“One of the things that really struck me, [for example], is how services like Amazon Alexa, the Echo, and other technologies were being brought into the home with a very gendered voice. As a sort of idea and subservience that is very familiar for women in domestic environments. Having that background let me think about what is being normalized by the design of these devices. But then as the ecosystem developed towards Amazon’s ties to the Ring doorbell, for example, it made me stop to think about the role of the household within a neighborhood... It really worried me when I started to realize that Ring had arrangements with local authorities in certain neighborhoods that there was subject to screening from some of those law enforcement officials. The idea of the state in the US again is a little dis-aggregated from your local street. So, [for] me, that’s a clear example of how if there is a thread of how the woman at home is under threat and technologies are designed to enable a certain kind of efficiency of monitoring, whether they’re in that home or outside of its perimeter. I don’t know that is the thing that concerns me a lot, which is what has been traditionally gender roles of care and nurturing and support and community relations becoming instrumented in these data gathering devices.”

Gregg directs us to some of the more entrenched impacts of integrated services combined with AI-enabled devices, and how women’s voices may suggest care and nurturing, yet the involvement of law enforcement may be otherwise experienced. Whilst she noted differences in relations between women and the state based on her experience working across the US and parts of Asia, such as China, Japan, and Korea, what is important here is the capacity for women to be embedded in very seductive, or what Gregg calls “normalizing”, activities enabled by AI in the name of well-being, without understanding when these devices are interfacing with new sets of actors, such as local police. Moreover, Padmini Murray’s research, with Prof. Ayona Datta, uncovered how in India, when young women chronicled their engagement with the smart city in Delhi through daily WhatsApp diaries, they often found it difficult to draw boundaries between their experience with the city and “the smart city”:


“So, I think we found that they would often tell us about ways in which the infrastructure of the city would let them down. During the monsoon, Delhi would flood very easily and how that would cause a lot of difficulty, even would cause deaths because of electrocution and things like that. So, the picture that we got from their journals was just basically that they were always at war with the city. But what wasn’t immediately available to us was how does the smart city impact... It’s not really possible for them to parse what the city is doing to them through the lens of what the smart city is doing. And it also depends on what we mean by the ravages of the smart city.”

In this case, interfaces are often difficult to identify or disentangle from the broader city living experience. The majority of the experts interviewed argued for greater transparency and education opportunities to help women understand and claim their rights in this context, as Ruhiya Seward stated:

“Most people just don’t really understand data ecosystems. They just don’t have a fundamental understanding of their own human rights, of what data can do… Inclusion is having the skills to know what your rights are, and activating those rights, and working with them.”

The interfaces theme draws out concerns about whether AI-enabled systems have designed interaction experiences for diverse women, especially the most marginalized who often lack accessibility to technology that is used to gather data for AI. More importantly, there is a need to make the interfaces of a system visible and to debate the terms of informed consent in this context.

5.6. Intent: Examining power relations and potential misuses

A central concern raised by the experts reflects the dual nature of AI technology used within smart city initiatives. Even if facial recognition can be used for stated purposes related to safety and security, it enables other outcomes that may be experienced as harmful, such as increased surveillance and control, lack of freedom of expression, and unknown data privacy management practices.

Diane Bell spoke of the need to question what problems AI is meant to solve, and having the capacity to debate whether or not it serves the collective interests of citizens, including those of diverse women:

“AI has enormous capacity to improve our lives, but is it being developed within a framework where the narrative is one of rights and responsibilities, or is it developed because we can do it, therefore we’ll do it? Not, why should we do it? Well, we can do it, but should we do it? There’s many things we can do but should we?”

Genevieve Bell spoke of the reality underlying the development of many smart city initiatives:

“So, if you imagine that most technical systems are not built because someone has a generous whim, they are mostly built because they are either designed to perpetuate power, or general capital, or both… So, it’s not surprising in that sense that most technologies sit within systems of disenfranchisement because that will be the flip side of power and money.”

These quotes challenge policymakers and practitioners to expose power-relations within a system, and to ensure that the intent of AI is balanced by a framework of rights and responsibilities.

Furthermore, almost everyone pointed to the challenges of acknowledging intersectional differences in power and access in context, where the intents of the more powerful or directly implicated are at play. Genevieve Bell gave the following example to highlight this point:

“The classic example for me about the place that went horribly wrong... might be Chicago, certainly Illinois… [where] they had a smart traffic lights system… [that] was being run not by the police but by an outside third party. And in order to hit their revenue targets every quarter, they used to vary the traffic signal rate. So, the amount of time the light was yellow used to diminish towards the end of the quarter so they could catch more people running red lights.”


She argued that we need to ask questions that are not necessarily about gender but about the problems that the system is intended to resolve: “How do you start to imagine what is safe, right? Because what a government decides is safe may not be what its citizens decide is safe.” Joanna Zubrzycki agreed, noting that it is often in working through the intent of a system that policymakers may begin to deal with the complexity of AI-enabled systems:

“I mean if you look now at the sort of issues or just last week with the tragedies around domestic violence which people are starting again to grapple with. I think when people start naming those different problems, those intersections become very clear. And I think that’s when policymakers start to realize that they’ve actually got to deal with multiple dimensions of the problem and women’s experiences.”

Alternatively, in some countries, the powerful classes of actors may have little power to define their own intents and purposes. As Amanda Watson explained:

“Many of the Pacific Island nations do have donor funding or if you count the donor dollars themselves going in. It’s a huge percentage of the overall budget or the overall money that’s spent in these countries. So, I guess that’s why or one of the reasons why so many of these things would end up being donor projects, because the governments themselves don’t necessarily have money to even run their health and education systems.”

Power-relations are essential to unpack the intent of AI-enabled systems, as well as to situate actors within them and their capacity to address core issues. Working through issues surrounding intent may also gather insights into potential misuses of AI. Ruhiya Seward’s thoughts encapsulate comments from a number of experts:

“I mean essentially, it’s kind of a big brother issue, and I don’t see any other way of framing it… I think actually this really speaks to the tension of technology in general, broadly considered, in that there are all these potential advantages (and disadvantages) that come with security. [Say] a woman is harassed or attacked. If you have big brother surveillance, it can identify the attacker, track them down, and ensure he or she is brought to justice. That improves the lives of people vulnerable to harassment. On the other side of that security, if you have a state that doesn’t believe in free expression, this same technology can be used to track down people who are dissenting, who are protesting, who might not want to be identified or singled out... We know that [democratic systems are] being threatened all over the world... So how do we grapple with this big brother that’s here, that’s arrived – where we want safer cities, but we don’t want our freedoms curbed. Basically, it seems like it’s a trade off right now.”

Seward’s framing of the multifarious intentions that are purposeful and emergent within AI-enabled systems suggests that potential harms, specifically to women, are vital to evaluate. As Araba Sey related:

“How do we ensure that those that do have access might not abuse them… This is more about those that have access to the system and ensuring that they are ethical, or… that there are measures in place to ensure that the potential for [misuse] is limited. Because women tend to be the predominant victims of abuse, I think, it becomes definitely a gender-related issue. Women and people of other non-masculine genders tend to be the ones that are victimized more often, so I think there’s a definite gender component.”

There are likewise many components and levels of an AI-enabled system that must be considered. Hannah Thinyane spoke about how her design decisions ripple throughout a system, and how they can be taken advantage of. Her thought process was:

“If we captured this extra information, how could that be abused? So, for example, we were asked from very early on if we could capture a camera photo, because say the NGOs would say, if you think of workers from Myanmar, they all have the same name. And if you have five people who were on the boat and they all have the same name, how would you know which one you talked to? … Any system that has corruption, if someone can make a few extra bucks and they don’t feel like [they’re] paid enough, well they might give that information to someone else.”


Altogether, the intent theme captured the experts’ attention to power-relations in context, as well as how these are expressed. When considering the power and position of women, strategies to hold powerful actors to account and to protect against the misuse of AI-enabled systems are needed.

6. Securing voice and recasting participation: Examining roles and responsibilities for the inclusion of women in AI-enabled systems

The following two sections discuss our findings and generate key policy recommendations for the roles and responsibilities required to include women in AI-enabled smart cities. We also review our exercise of elaborating on the 3A Framework to address inclusion concerns and reflect on its application as a tool for future policymaking in this area.

6.1. Increasing voice and participation of women in smart city initiatives

The past decade has seen growing support for the notion of “inclusion” in the rhetoric of smart city initiatives, yet key decisions that affect women’s lives continue to be made without adequate consideration, consultation, or differentiation, especially when it comes to diverse women across various sections of society in the Asia-Pacific region. Why has the rise in the rhetoric of inclusion not coincided with greater scope and attention to the voices of diverse women, especially the most marginalized? How do AI for social good applications change the methods and practice of participation? The rise of AI has occurred simultaneously with some advances in methods and approaches designed for greater citizen engagement in smart city initiatives, such as deliberative decision-making, citizen juries, and public consultations. There is, however, limited evidence that these approaches have been rolled out extensively, internalized, or that they have influenced wider policy or programmatic budgeting and decision-making within AI-enabled smart cities.

As Joanna Zubrzycki poignantly stated, the inclusion agenda risks essentializing women, and can be used to disempower women as much as the reverse. Padmini Murray reminded us that some forms of participation can actually widen the gap between “inclusion” and “exclusion” when certain classes of women are favored over others during consultation processes. All of the experts were likewise in agreement that most women lack knowledge to engage in data ecosystems that underpin AI applications. In Section 5.1, Hannah Thinyane highlighted that vulnerable women are also hesitant to share their perspectives when trusting relationships are lacking. It seems clear that a main purpose of inclusive practice is to support the most marginalized women in smart city design and implementation.

Intersectional feminist theory (Bhavnani, Foran, Kurian, & Munshi, 2016; Crenshaw, 1991) has provided a language to understand the social, cultural, and economic factors that influence the power and position of the most-marginalized women in relation to others, including men. Although all of the experts endorsed this framework for understanding a woman’s power and position, it remains challenging to adopt in practice. Examples discussed by the experts incorporating ethnographic accounts (Padmini Murray, Melissa Gregg), participatory models (Anita Gurumurthy, Nandini Chami), and values-sensitive design (Hannah Thinyane, Genevieve Bell) strengthen understandings of women’s realities as multi-dimensional, intersectional, and dynamic. These methods may facilitate the inclusion of women’s voices in large smart city projects. However, disjoints between rich accounts of women’s experiences and the design of AI technologies and smart city infrastructures are still common.

Why is it that intersectional feminism has not entered the mainstream in terms of framing and delivering public services such as AI-enabled public transportation and CCTV systems? Typically, the needs and aspirations of the most marginalized have been served by specialist bodies and organizations, such as social workers, community-based organization representatives, and care workers. A
promising solution might be to educate these front-line workers on the opportunities and risks afforded by AI-enabled systems, and to support their roles as advocates to move this agenda forward. As co-author and a trained social worker, Brenda Martin (2019, p.6) wrote:

“In Australia, social workers are often in a unique position to witness the impacts of new socio-technical systems on the lives of our most vulnerable individuals and communities, to analyze structural inequities, to educate, to elevate the voices and experiences of those excluded from public debate, to influence public policy, and to advocate for change. As social workers, we need to develop the language and understanding to be meaningful and powerful contributors to the debate on the current and future roles of AI and cyber-physical systems.”

Such workers and organizations can provide critical questioning and feedback into a system to highlight specific and systemic biases and risks of AI technologies. That said, this policy alone may place greater stress and pressure on an already over-worked professional base, which may spread their responsibilities for women too thin. In the next section, we consider how else to build responsibilities for the protection and empowerment of women into AI-enabled systems, and what these roles and responsibilities might look like.

Moreover, it is not likely that increasing participation of women in smart city initiatives through deliberative decision-making, citizen juries, or otherwise will be enough in the context of AI-enabled smart city initiatives. Particularly in the cases of using AI to increase safety and mobility of women in smart cities, there will be difficulties in establishing the trust and close relationships necessary for an open discussion to share their views and preferences with authorities. Seemingly endless histories of violence against women and social control of women’s behavior exist in most contexts across Asia-Pacific. It seems dubious to suggest that women’s participation in decision-making processes would be valued and embraced. It also takes time to experience and reflect on how AI developments will interact with power-relations, attitudes, and behaviors in context. Whilst participatory democratic processes should certainly be prioritized, the costs and technical expertise required to implement many AI-enabled smart city systems put pressure on authorities to ensure strategic returns on investments. There is still a need to develop checks and balances, along with rewards and incentives, within a wider network of smart city actors.

6.2. Roles and responsibilities in an interlaced network of actors: The value of applying the 3A framework

This research elaborates on the 3A Framework as a tool to outline the contours of inclusive practice within AI-enabled smart city systems, whilst taking into account the culture and values of diverse women. Our review of the literature and analysis of the interviews with experts points to the key issues that the experts suggested considering, which we summarize here. Too often the issues raised are seen to have technical fixes or, as discussed in the previous section, as warranting participatory processes which may not adequately address the scope and scale of AI. We argue that the 3A Framework enables policymakers and practitioners to work through the issues holistically, and to identify relevant actors and responsibilities needed to include women in AI-enabled smart cities.

Returning to the two applications of AI for social good, CCTV and smart transportation systems integrate complex AI applications (Section 3). Socially good outcomes, especially for women, are not guaranteed. Developing and implementing these applications frequently involves multiple government, private sector, and community-based organizations, and they build on prior systems and infrastructures that are culturally and context-specific. Working towards socially good outcomes for diverse women, particularly the most marginalized, requires an effective distribution of roles and responsibilities across an interlaced network of actors.


The views expressed in Section 5 illustrate that, from a big picture standpoint, a range of actors belonging to national government institutions, international organizations, inter-organizational coordination bodies, and workers’ unions, etc. have a role to play in making conditions and opportunities more equitable for women across Asia-Pacific in the long run. The experts clearly articulated that root causes, rather than symptoms of inequity, need to be addressed. However, viewed from such a vantage point, one might consider that different actors may attribute particular exclusion issues to different root causes. From an institutional perspective, this may be due to varied missions and objectives. Moreover, as Ruhiya Seward pointed out, institutions operate according to their own organizational logic and may have specific challenges to which they must attend. This suggests that national governments have a role to play in clarifying roles and responsibilities, especially in terms of the commitments they hold to gender equity: for instance, by creating stronger and more explicit connections in policy roadmaps between smart city plans and the achievement of the SDGs related to gender equality (e.g., Goals 5 and 11), and by making this information readily available and accessible.

In terms of the internal dynamics of AI-enabled systems, the most critical issue for the experts related to the need to expose the power relations at play. Both CCTV and smart transportation systems may be used for surveillance, and it is not clear what measures are in place to inform the public or take any of the unique concerns women hold for their safety and well-being to heart. The 3A Framework facilitates discussions surrounding power differentials to take shape, including a range of individual, community, and place-based aspects. A major impediment is that relationships between key decision makers of smart city initiatives and women are not well-established. Successful examples provided by the experts reflected how women, particularly the most marginalized, are more comfortable forming relationships within their communities, as in Anita Gurumurthy’s example of community-driven water sanitation; or when there is greater trust and transparency, as with Hannah Thinyane’s example of the Apprise system for frontline workers (Section 5.1).

In parallel, private sector actors, such as those managing CCTV or transportation systems and intermediating between government and citizen groups, have an important facilitating role to play. These actors need to take time to understand local dynamics and ultimately help broaden and deepen the design, implementation, and management of AI technology in context, primarily by interacting with critical actors such as women’s activist groups and community groups. It is only when system operators are aware of the interlaced network of actors and patterns of exclusion that they have the opportunity to use their power to encourage and provide entry points to systemic decision-making processes. Nevertheless, as Sue Keay reminds us, such actors are not likely to take on such responsibility unless these tasks are mandated and reported on. National and municipal governments must set high expectations of private sector actors to work more collaboratively with community groups. Sanctions could also be instituted as a means to hold industry partners accountable for more than delivering technologies and systems alone.

What Thinyane’s research also demonstrates, however, is that AI technology may also assist in developing trusting, inclusive relationships if designed responsively and supported holistically. The 3A Framework does not discriminate between human or technological actors, or collections of these. There is scope for future work developing AI to find patterns of exclusion, to look for risks and breaches in a system, or to find patterns that are exclusive to marginalized women and which may assist them to a greater degree than other sections of a society: for example, by suggesting a public transport route, or by optimizing routes so that stops are permitted between scheduled stops at night, enabling women to disembark closer to their homes. Padmini Murray’s work points to innovation in designing AI to build consensus in local governance, which may facilitate rebalancing the age-old power
issues that have plagued participatory decision-making for diverse women. On the other hand, IT for Change has been exploring community ownership of data resources, which has likewise improved inclusion outcomes. Progress in these areas suggests that AI, in terms of its design and features, will have a role and certain responsibilities in making AI-enabled smart cities more inclusive to women.

Further research is needed to identify the roles and responsibilities that will enable the holistic integration of both the big and more granular pictures in smart city developments. We argue that a new class of practitioners able to mobilize and circulate across the network of actors is needed. These practitioners will require a plethora of knowledge and skills to translate between perspectives and make suggestions and improvements about how AI is designed, managed, and regulated in context. Another aspect identified by Anita Gurumurthy requiring further research is the influence of more powerful countries in Asia-Pacific on nations that have less capacity and resources to shape and control their own AI futures. Such intra-regional development may well impact on how the inclusion of women is taken up across the region (if, for instance, all countries begin to adopt the same AFRT system, and states are unable to modify or adapt it to their local context). Nevertheless, the 3A Framework may still be a useful tool for policymakers to use to navigate such tensions and global developments.

How to expand the capacity of AI to build better society

References
Allwinkle, S., & Cruickshank, P. (2011). Creating Smart-er Cities: An Overview. Journal of Urban Technology, 18(2), 1–16. http://doi.org/10.1080/10630732.2011.601103

ASEAN. (2018). ASEAN Smart Cities Network: Smart City Action Plans. Singapore: ASEAN.

Australian Human Rights Commission. (2019). Human Rights and Technology Discussion Paper launches. Retrieved May 1, 2020, from https://www.humanrights.gov.au/about/news/human-rights-and-technology-discussion-paper-launches

Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M., & Pollak, S. D. (2019). Emotional Expressions Reconsidered: Challenges to Inferring Emotion from Human Facial Movements. Psychological Science in the Public Interest, 20, 1–68. doi:10.1177/1529100619832930

Baruah, N. (2020). Her Right to the City: Must Women Tread in Fear? Retrieved May 1, 2020, from https://asiafoundation.org/2020/02/05/her-right-to-the-city-must-women-tread-in-fear/

Baud, I., Scott, D., Pfeffer, K., Sydenstricker-Neto, J., & Denis, E. (2014). Digital and spatial knowledge management in urban governance: Emerging issues in India, Brazil, South Africa, and Peru. Habitat International, 44, 501–509.

Baxter, W. (2017). Thailand 4.0 and the future of work in the Kingdom. Retrieved May 1, 2020, from https://www.ilo.org/wcmsp5/groups/public/---dgreports/---dcomm/documents/meetingdocument/wcms_549062.pdf

Bharucha, J., & Khatri, R. (2018). The sexual street harassment battle: perceptions of women in urban India. The Journal of Adult Protection, 20(2), 101–109. https://doi.org/10.1108/JAP-12-2017-0038

Bhavnani, K.-K., Foran, J., Kurian, P. A., & Munshi, D. (Eds.). (2016). Feminist Futures: Reimagining Women, Culture and Development (2nd ed.). London: Zed Books Ltd.

Cabinet Office, Government of Japan. (n.d.). Society 5.0. Retrieved May 1, 2020, from https://www8.cao.go.jp/cstp/english/society5_0/index.html

Cathelat, B. (2019). Smart Cities: Shaping the Society of 2030. Paris: UNESCO and NETEXPLO.


Chan, J. K.-S., & Anderson, S. (2015). Rethinking Smart Cities–ICT for New-type Urbanization and Public Participation at the City and Community Level in China. Beijing: Intel & UNDP.

Crenshaw, K. (1991). Mapping the Margins: Intersectionality, Identity Politics, and Violence against Women of Color. Stanford Law Review, 43(6). http://doi.org/10.2307/1229039

Daniell, K. A. (2012). Co-engineering and participatory water management: organisational challenges for water governance. UNESCO International Hydrology Series. Cambridge, UK: Cambridge University Press.

Davy, J. (2019). What lies ahead of Indonesia's 100 smart cities movement? Retrieved May 1, 2020, from https://www.thejakartapost.com/life/2019/12/05/what-lies-ahead-of-indonesias-100-smart-cities-movement.html

Department of the Prime Minister and Cabinet. (2016). Smart Cities Plan. Canberra: Commonwealth of Australia.

Empower Foundation. (2012). Hit & Run: Sex Worker's Research on Anti trafficking in Thailand. Bangkok: Empower Foundation.

Finlay, A. (Ed.). (2019). Artificial Intelligence: Human Rights, Social Justice and Development. New York: APC, Sida & Article 19.

Friedmann, J. (1992). Empowerment: The Politics of Alternative Development (1st ed.). Oxford: Wiley-Blackwell.

Gekoski, A., Gray, J. M., Adler, J. R., & Horvath, M. A. (2017). The prevalence and nature of sexual harassment and assault against women and girls on public transport: an international review. Journal of Criminological Research, Policy and Practice, 3(1), 3–16. https://doi.org/10.1108/JCRPP-08-2016-0016

Ghazal, B., ElKhatib, K., Chahine, K., & Kherfan, M. (2016, April). Smart traffic light control system. In 2016 Third International Conference on Electrical, Electronics, Computer Engineering and their Applications (EECEA) (pp. 140–145). IEEE.

Giest, S. (2017). Big data analytics for mitigating carbon emissions in smart cities: Opportunities and challenges. European Planning Studies, 25(6), 941–957. http://doi.org/10.1080/09654313.2017.1294149


Global Slavery Index. (2019). Asia and the Pacific. Retrieved May 1, 2020, from https://www.globalslaveryindex.org/2018/findings/regional-analysis/asia-and-the-pacific/

Grother, P., Ngan, M., & Hanaoka, K. (2019). Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects. Washington, DC: U.S. Department of Commerce, National Institute of Standards and Technology.

Haque, M. M., Chin, H. C., & Debnath, A. K. (2013). Sustainable, safe, smart—three key elements of Singapore's evolving transport policies. Transport Policy, 27, 20–31.

Heise, L., Ellsberg, M., & Gottmoeller, M. (2002). A global overview of gender-based violence. International Journal of Gynecology & Obstetrics, 78, S5–S14.

Höjer, M., & Wangel, J. (2015). Smart sustainable cities: Definition and challenges. In L. Hilty & B. Aebischer (Eds.), ICT Innovations for Sustainability (Vol. 310, pp. 333–349). Cham: Springer.

Hollands, R. G. (2008). Will the real smart city please stand up? City, 12(3), 303–320. http://doi.org/10.1080/13604810802479126

Hörold, S., Mayas, C., & Krömker, H. (2015, August). Towards paperless mobility information in public transport. In International Conference on Human-Computer Interaction (pp. 340–349). Springer, Cham.

Huawei Enterprise. (2019). Smart City Framework and Guidance for Thailand: Smart City services for Phuket. Retrieved May 1, 2020, from https://www.huawei.com/th/industry-insights/technology/smart-city-framework-and-guidance-for-thailand-smart-city-services-for-phuket

Innovation and Technology Bureau. (2017). Hong Kong Smart City Blueprint. Hong Kong: Innovation and Technology Bureau.

Jackman, M. R. (2006). Gender, violence, and harassment. In B. Risman, C. Froyum, & W. J. Scarborough (Eds.), Handbook of the Sociology of Gender (pp. 275–317). Boston: Springer.

Javaid, S., Sufian, A., Pervaiz, S., & Tanveer, M. (2018). Smart traffic management system using Internet of Things. In 2018 20th International Conference on Advanced Communication Technology (ICACT) (pp. 393–398). IEEE.

Kitchin, R. (2014). The real-time city? Big data and smart urbanism. GeoJournal, 79(1), 1–14. http://doi.org/10.1007/s10708-013-9516-8


Korea Legislation Research Institute. (2019). Spatial Data Industry Promotion Act. Retrieved May 1, 2020, from http://elaw.klri.re.kr/eng_service/lawView.do?hseq=38429&lang=ENG

Laksmi, S. (2018). Buku 2: Master Plan Smart City Kabupaten Pati [Book 2: Smart City Master Plan for Pati Regency]. Retrieved May 1, 2020, from https://www.patikab.go.id/v2/uploaded/2019/Buku%202%20-%20MasterPlan%20Smart%20City%20Pati.pdf

Lam, P. T., & Yang, W. (2020). Factors influencing the consideration of Public-Private Partnerships (PPP) for smart city projects: Evidence from Hong Kong. Cities, 99, 102606. https://doi.org/10.1016/j.cities.2020.102606

Lee, J. J. (2005). Human trafficking in East Asia: current trends, data collection, and knowledge gaps. International Migration, 43(1–2), 165–201.

Long, Y., Zhang, E., Zhang, Y., Chen, Y., & Chen, Y. (2019). Brief Review for Smart Cities of the Planet: Research Report (pp. 1–23). Beijing: Hitachi China & Tsinghua University.

MacQueen, K. M., McLellan, E., Kay, K., & Milstein, B. (1998). Codebook development for team-based qualitative analysis. Cultural Anthropology Methods, 10(2), 31–36.

Marsal-Llacuna, M.-L. (2015). Building Universal Socio-cultural Indicators for Standardizing the Safeguarding of Citizens' Rights in Smart Cities. Social Indicators Research, 130(2), 563–579. http://doi.org/10.1007/s11205-015-1192-2

Martin, B. (2019). A Social Worker in the New Applied Science: An Individual Portfolio for CECS 6001, Fundamentals of a New Applied Science 1. Canberra: Australian National University.

Miles, M. B., & Huberman, A. M. (1994). Qualitative Data Analysis: An Expanded Sourcebook (2nd ed.). London: Sage.

Ministry of Communication and Information – Public Relations Bureau. (2017). Tahap Pertama Gerakan Menuju 100 Smart City 2017, 24 Kota/Kabupaten Berhasil Menyelesaikan Smart City Masterplan [First stage of the Movement Toward 100 Smart Cities 2017: 24 cities/regencies complete their smart city master plans]. Retrieved May 1, 2020, from https://www.kominfo.go.id/content/detail/11489/siaran-pers-no-223hmkominfo112017-tentang-tahap-pertama-gerakanmenuju-100-smart-city-2017-24-kotakabupaten-berhasil-menyelesaikan-smart-city-masterplan/0/siaran_pers

Ministry of Housing and Local Government. (2018). Malaysia Smart City Framework. Kuala Lumpur: Ministry of Housing and Local Government, Government of Malaysia.


Ministry of Housing and Urban Affairs. (2015). Smart Pune: Creation of a Vision Community. New Delhi: Government of India.

Ministry of Information and Communications. (2019a). Groundbreaking ceremony for first smart city project in Hanoi. Retrieved May 1, 2020, from https://english.mic.gov.vn/Pages/TinTuc/139806/Groundbreaking-ceremony-for-first-smart-city-project-in-Hanoi.html

Ministry of Information and Communications. (2019b). Korean Smart Cities. Seoul: Government of South Korea.

Ministry of Information and Communications. (n.d.). Smart city challenges Vietnamese Gov't and localities. Retrieved April 26, 2020, from https://english.mic.gov.vn/Pages/TinTuc/139821/Smart-city-challenges-Vietnamese-Gov-t-and-localities.html

Ministry of Urban Development. (2015). Citizen Consultations to Prepare Smart Cities Proposals (SCP). New Delhi: Government of India.

Ministry of Urban Development. (2015). Smart Cities – Mission Statement & Guidelines. New Delhi: Government of India.

Musadat, A. (2019). Participatory Planning and Budgeting in Decentralised Indonesia: Understanding Participation, Responsiveness and Accountability. PhD Thesis, Australian National University. https://openresearch-repository.anu.edu.au/bitstream/1885/167001/1/Musadat2019PhDThesis.pdf

NITI Aayog. (2018). National Strategy for Artificial Intelligence. New Delhi: Government of India.

Oakley, P. (Ed.). (2001). Evaluating Empowerment: Reviewing the Concept and Practice (INTRAC NGO Management & Policy). Oxford: INTRAC.

Panori, A., Kakderi, C., & Tsarchopoulos, P. (2019). Designing the Ontology of a Smart City Application for Measuring Multidimensional Urban Poverty, 1–20. http://doi.org/10.1007/s13132-017-0504-y

Perkins, D. D., & Zimmerman, M. A. (1995). Empowerment theory, research, and application. American Journal of Community Psychology, 23(5), 569–579. http://doi.org/10.1007/BF02506982

Piper, N. (2005). A problem by a different name? A review of research on trafficking in South-East Asia and Oceania. International Migration, 43(1–2), 203–233.

Plan International. (2016). A Right to the Night: Australian Girls on their Safety in Public Spaces. Sydney: Plan International Australia and Our Watch.


Planning and Urban Management Agency. (2013). The Samoa National Urban Policy. Apia: Government of Samoa.

Rao, T. (2017). Women's Safety Audit Walk Commences 16 Days. Retrieved May 1, 2020, from https://asiapacific.unwomen.org/en/news-and-events/stories/2017/11/womens-safety-audit-walk-commences-16-days

Roces, M. (2010). Asian feminisms: Women's movements from the Asian perspective. In M. Roces & L. Edwards (Eds.), Women's Movements in Asia: Feminisms and Transnational Activism. London: Routledge.

Sadoway, D., & Shekhar, S. (2014). (Re)prioritizing citizens in smart cities governance: examples of smart citizenship from urban India. The Journal of Community Informatics, 10(3).

SAGE. (2018, May 23). Science in Australia Gender Equity (SAGE). Retrieved May 1, 2020, from https://www.sciencegenderequity.org.au/

SCC India Staff. (2019). Financing smart cities in India. Retrieved May 1, 2020, from https://india.smartcitiescouncil.com/article/financing-smart-cities-india

Sey, A., & Hafkin, N. (Eds.). (2019). Taking Stock: Data and Evidence on Gender Equality in Digital Access, Skills, and Leadership (pp. 1–340). Tokyo: United Nations University.

Singh, Y. J. (2019). Is smart mobility also gender-smart? Journal of Gender Studies. http://doi.org/10.1080/09589236.2019.1650728

Smart City Thailand. (2018). Smart City Thailand: Annual Report 2018. Bangkok: Smart City Thailand.

Smart Nation Singapore. (2018). Digital Government Blueprint. Singapore: Smart Nation Singapore.

Smart Nation Singapore. (2020). Pillars of smart nation. Retrieved May 1, 2020, from https://www.smartnation.gov.sg/why-Smart-Nation/pillars-of-smart-nation

Taweesaengsakulthai, S., Laochankham, S., Kamnuansilpa, P., & Wongthanavasu, S. (2019). Thailand Smart Cities: What is the Path to Success? Asian Politics & Policy, 11(1), 144–156.

Thinyane, H., & Bhat, K. S. (2019). Apprise (pp. 1–14). Presented at the 2019 CHI Conference, New York, NY: ACM Press. http://doi.org/10.1145/3290605.3300385

Thrive. (2018). How the private sector can partner on smart cities. Retrieved May 1, 2020, from https://thrive.dxc.technology/asia/2018/07/17/how-the-private-sector-can-partner-on-smart-cities/


Thynell, M. (2016). The quest for gender-sensitive and inclusive transport policies in growing Asian cities. Social Inclusion, 4(3), 72–82.

Transport New South Wales. (2020). The Challenge | Future Transport. Retrieved May 1, 2020, from https://future.transport.nsw.gov.au/technology/roadmap-in-delivery/transport-digital-accelerator/challenge

Trencher, G. (2019). Towards the smart city 2.0: Empirical evidence of using smartness as a tool for tackling social challenges. Technological Forecasting and Social Change, 142, 117–128. http://doi.org/10.1016/j.techfore.2018.07.033

UN DESA. (n.d.). Social Inclusion. Retrieved May 1, 2020, from https://www.un.org/development/desa/socialperspectiveondevelopment/issues/social-integration.html

UN Women. (2017). Safe Cities and Safe Public Spaces: Global Results Report. New York: UN Women.

UNESCO. (2019, February 22). Japan pushing ahead with Society 5.0 to overcome chronic social challenges. Retrieved May 1, 2020, from https://en.unesco.org/news/japan-pushing-ahead-society-50-overcome-chronic-social-challenges

Widadio, N. A. (2019). Many Indonesian women face sexually harassment: survey. Retrieved May 1, 2020, from https://www.aa.com.tr/en/asia-pacific/many-indonesian-women-face-sexually-harassment-survey/1658677

World Vision Australia. (2007). Human trafficking in Asia: Policy brief. Retrieved May 1, 2020, from https://www.worldvision.com.au/docs/default-source/publications/human-rights-and-trafficking/people-trafficking-in-the-asia-region.pdf

Zaugg, J. (2019). India is trying to build the world's largest facial recognition system. Retrieved May 1, 2020, from https://edition.cnn.com/2019/10/17/tech/india-facial-recognition-intl-hnk/index.html

Zhao, D., Dai, Y., & Zhang, Z. (2012). Computational intelligence in urban traffic signal control: A survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(4), 485–494.

Including Women in AI-enabled Smart Cities:
Developing Gender-inclusive AI Policy and Practice in the Asia-Pacific Region

Appendix 1: Technical specifications of AFRT and smart transportation systems

AFRT: Overview of algorithms and factors influencing performance

Within AFRT systems, after the image(s) have been obtained for processing, they are transformed into a mathematical representation which is then compared with other representations of faces to obtain a similarity score. The similarity score is essentially the probability of a match between two faces (the sensed face and the previously recorded face). Various deep neural network methods have been developed to produce a similarity score. The vast majority of these use a labelled dataset to train the neural network to produce the correct result. In deployment, the neural network is then used to recognize patterns based on similar features found in previously classified images when compared to unseen images (Masi et al., 2018).

Therefore, these systems only perform well on pictures which are drawn from distributions similar to that of the training dataset. When software developers use biased or unrepresentative datasets to train an algorithm, error rates increase. This is especially problematic when AFRT systems are developed in foreign cultural contexts. For instance, for facial recognition systems developed in the US, the false match rate is the highest in East Asian populations, whereas for many (but not all) systems developed in East Asia, false positive matches between people born in East Asian countries are lower.

The demographics of the people included in the dataset are not the only factor which influences the generalizability of the dataset. If the images exhibit systematic biases, then these can also be learnt by the algorithm (and if they are also not present during implementation this will lead to error). For example, in the NIST report (Grother et al., 2019) underexposure of photographs of dark-skinned individuals was identified as a possible source of bias. The types of cameras used can mitigate possible sources of bias by providing more consistent images. For instance, verification systems that take images using infrared sensors provide more consistent illumination in different lighting conditions. Some commercial face verification algorithms (such as Apple's Face ID) instead use a depth image or are used in conjunction with a colored or monochrome image. Including depth information reduces false matches and makes it harder to spoof such systems by, for example, printing an image of a person's face.2 Depth can also be used for identification systems, but getting accurate and high-resolution depth is harder when the person's face is far away from the sensor. However, it is important to make sure that the system is trained using the same type of images that will be used during deployment.

Another factor which can influence performance is the threshold, which determines how similar two images need to be before they are considered a match (according to the similarity score outlined earlier). The system is not likely to make perfect predictions, so trade-offs occur between the number of false and true matches3 – suitable trade-offs depend on the application.

2. Including depth is not the only way to combat spoof attacks; see Ramachandra and Busch (2017) for an overview of spoof detection methods.
3. The Relative Operating Characteristic (ROC) curve is one way of examining the trade-offs for various thresholds for binary classification problems.
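The pipeline described above, a network producing face embeddings whose similarity score is compared against a threshold, can be illustrated in a few lines. This is a toy sketch, not a real AFRT implementation: the `embed` function substitutes seeded random 128-dimensional vectors for the output of a trained deep network, and both threshold values are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(identity: int, noise: float = 0.0) -> np.ndarray:
    """Stand-in for a trained deep network: maps a face image (represented
    here only by an identity seed) to a unit-length feature vector; images
    of the same identity yield similar vectors."""
    base = np.random.default_rng(identity).normal(size=128)
    v = base + noise * rng.normal(size=128)
    return v / np.linalg.norm(v)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embeddings: the similarity score."""
    return float(a @ b)

def is_match(a: np.ndarray, b: np.ndarray, threshold: float) -> bool:
    """Declare a match only when the score clears the chosen threshold."""
    return similarity(a, b) >= threshold

enrolled = embed(42)                 # previously recorded face
same_person = embed(42, noise=0.3)   # a new, noisier image of the same person
other_person = embed(7, noise=0.3)   # an image of somebody else

HIGH_THRESHOLD = 0.9  # e.g. phone unlock: false matches costly, retries cheap
LOW_THRESHOLD = 0.3   # e.g. shortlisting candidates for human review

for t in (HIGH_THRESHOLD, LOW_THRESHOLD):
    print(t,
          is_match(enrolled, same_person, t),
          is_match(enrolled, other_person, t))
```

Raising the threshold trades missed true matches for fewer false matches; the ROC curve mentioned in footnote 3 summarizes exactly this trade-off across all possible thresholds.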


Consider, for example, unlocking a phone using facial verification: setting a high threshold is feasible because the user can retry at different angles and a back-up method exists for unlocking, such as a pin. In contrast, if facial identification is used to search for trafficked women or perpetrators of violence, a lower threshold may be appropriate, especially if combined with a human review before intervention. Developers also try to improve performance by grouping images into demographics, which essentially sorts the images before they are analyzed. However, classifying individuals into demographics can be hurtful to people if they are misclassified4 and the number of demographics which can be usefully defined is likely to be limited (e.g., by the availability of training data for each demographic), and so useful demographics may never be suitable for everyone.

AFRT: Extensions

There is speculation about the other possible functionalities that could be built into AFRT systems to support public safety and security. An integrated CCTV system in Shenzhen, China, has been designed to supposedly "formulate behavior prediction based on facial and behavioral reaction" (Huawei Enterprise, 2019, p. 74). However, a major review found no evidence that emotional states can be accurately inferred from the analysis of facial movements alone, without reference to culture or context (Barrett et al., 2019). There are also vision-based systems for detecting unusual behavior which have been proposed in academia (Xiang and Gong, 2008; Wiliem et al., 2012) and implemented in commercial products (Rhombus Systems, 2019). Behavioral prediction algorithms, which may help to identify struggles, health crises, or other aspects, often use unsupervised learning techniques to detect "unusual" behavior, and would still need human interpretation. There is also no evidence to suggest that the situations which women face are being factored into technological design and development of such systems.

Smart transportation systems: objectives and constraints

The efficiency gains derived from smart traffic lights focus on optimizing traffic flows based on real-time monitoring of traffic conditions. Data on traffic conditions is collected using vehicle detection sensors, and is either used to determine optimal timing for a single traffic light or transmitted over the Internet to a data processing center where it is automatically analyzed to determine optimal traffic lights for a broader system. What "optimal" means will depend on how designers have encoded the priorities to optimize for into the system. For instance, there will be a trade-off between efficiency of the overall traffic (which has environmental implications) and incentives designers may want to introduce, such as prioritizing cyclists, public transport vehicles,5 or emergency vehicles (Javaid et al., 2018; Ghazal et al., 2016). It is common for smart traffic lights to control a single traffic light without connection to a larger network, thus taking into account the volume of traffic to shorten or lengthen the amount of time a light remains green. As the system becomes more complex (e.g., controlling multiple lights, balancing multiple priorities, monitoring performance for a variety of well-travelled and less-travelled routes, and ensuring that people on less-travelled routes do not have to wait unreasonable amounts of time) more advanced algorithms and computational resources are required. This is the primary application of AI in this context.

4. For example, gender detectors can be hurtful to members of the transgender community (Hamidi et al., 2018; Keyes, 2018).
5. Copenhagen is a good example of this – State of Green (2016); Rasmussen (2018); Copenhagen Technical and Environmental Administration (2011).
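A single adaptive intersection of the kind this appendix describes, one that lengthens or shortens its green phase according to detected demand and can encode priorities such as favouring buses or emergency vehicles, can be sketched as follows. This is illustrative only: every constant below is invented for the example and does not come from any deployed system.

```python
# Toy model of a single "smart" traffic light driven by vehicle-detection
# sensor readings. All parameter values are invented for illustration.

BASE_GREEN = 20      # seconds of green when no vehicles are detected
PER_VEHICLE = 1.5    # extra seconds per waiting vehicle the sensors report
PRIORITY_BONUS = 10  # extra seconds when a bus or emergency vehicle is waiting
MAX_GREEN = 60       # hard cap so cross traffic is never starved

def green_time(waiting_vehicles: int, priority_vehicle_waiting: bool) -> float:
    """Next green-phase duration computed from vehicle-detection readings."""
    t = BASE_GREEN + PER_VEHICLE * waiting_vehicles
    if priority_vehicle_waiting:
        t += PRIORITY_BONUS
    return min(t, MAX_GREEN)

print(green_time(0, False))    # quiet intersection: base green only
print(green_time(12, False))   # heavier demand lengthens the phase
print(green_time(12, True))    # a waiting bus adds the priority bonus
print(green_time(100, True))   # demand beyond the cap is truncated
```

In a networked deployment the same idea generalizes: counts from many intersections feed a central optimizer that balances the encoded priorities across the whole system, which is where the more advanced methods surveyed by Zhao et al. come in.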


Zhao et al. (2012) found that fuzzy logic, artificial neural network, evolutionary and swarm, reinforcement learning and adaptive dynamic programming, and agent and game methods are common. Given the complexity and development time required to test and implement solutions to complex traffic flow management problems, it seems problematic that gendered preferences and perspectives have not been considered here.

Efficiency gains in smart public transport are envisaged in a similar manner. Public transportation services can be integrated within the same traffic management system to both prioritize public transport vehicles over private vehicles at intersections, as well as to inform route optimization to service popular routes effectively and avoid congestion. As such, smart traffic management systems usually include an end-to-end platform that users have access to (usually a mobile application). People can use the platform to plan, book, and pay for their journeys, as well as access real-time information about their transport (Hörold et al., 2015). The platform can include a range of transport options beyond traditional buses and trains, such as bike/car sharing or hire options. Singapore, for instance, has a system in place that manages its public trains and buses, and integrates private shuttle buses servicing social housing and condo blocks (Haque et al., 2013). Its payment system functions across these services and enables monitoring of journeys from start to finish. In these systems, data protection practices would need to be carefully designed and incorporated to comply with privacy laws and consumer expectations. In the case of smart public transportation, efficiency gains are built into existing systems and networks. If there are mobility issues that a woman experiences which are not addressed in the existing system, there do not seem to be any specific functions or procedures in place to address them.

When considering potential smart traffic systems, a critical aspect is the underlying infrastructure requirements that affect both the traffic management system performance and how diverse women may benefit differently from it. For complex traffic management systems, it is crucial to have a strong and reliable Internet network for the sensors, control center, and traffic lights to be able to communicate in real-time and to be responsive to the current conditions. All smart traffic light systems depend fundamentally on a high density and dispersion of networked vehicle sensors to provide enough real-time data for meaningful decision-making. These may include microwave radar (Ho and Chung, 2016), video (Javaid et al., 2018), motion sensors (e.g., using infrared transmitters and receivers) (Ghazal et al., 2016; Jagadeesh et al., 2015), and under-road sensors – including induction loops and various weigh-in-motion estimation systems, which can be based on technologies such as piezoelectric, capacitive mats, bending plates, load cells, and optical (Hancke and Hancke, 2013) – with some sensors focused on detecting pedestrian and cyclist traffic. Some work also suggests using smartphones as distributed sensors (Anagnostopoulos et al., 2016; Wang et al., 2012; Jayapal and Roy, 2016), although this usually relies on the cooperation of the smartphone owners and could disadvantage those who do not own or regularly carry a smartphone. Such extensive infrastructure and resource requirements have severe implications on the types of roads and neighborhoods in which these systems can be built. Women with the most need for mobility support may, in contrast, live in places where it is not possible to construct these systems.
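One concrete mechanism the chapter raises is optimizing routes when stops are permitted in between fixed stops at night, so that women can disembark closer to home. A toy sketch of the underlying calculation follows, assuming a route modelled as kilometre positions along a straight line and an invented 0.2 km drop-off spacing rule (neither is taken from any real system).

```python
def nearest_drop_off(route_positions, destination, spacing=0.2):
    """Pick the permitted drop-off point (every `spacing` km between the
    first and last fixed stop) closest to the rider's destination."""
    start, end = min(route_positions), max(route_positions)
    candidates = []
    p = start
    while p <= end + 1e-9:          # small tolerance for float accumulation
        candidates.append(round(p, 3))
        p += spacing
    return min(candidates, key=lambda c: abs(c - destination))

# Fixed stops at km 0, 1 and 2 along the route; the rider lives near km 1.35.
stops = [0.0, 1.0, 2.0]
print(nearest_drop_off(stops, 1.35))  # 1.4 km: closer than either fixed stop
```

A production system would work on a road network rather than a line, and would need to weigh safety, traffic, and timetable constraints alongside walking distance.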


References
Anagnostopoulos, T., Ferreira, D., Samodelkin, A., Ahmed, M., & Kostakos, V. (2016). Cyclist-aware traffic lights through distributed smartphone sensing. Pervasive and Mobile Computing, 31, 22–36.

Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M., & Pollak, S. D. (2019). Emotional Expressions Reconsidered: Challenges to Inferring Emotion from Human Facial Movements. Psychological Science in the Public Interest, 20, 1–68. doi:10.1177/1529100619832930

Copenhagen Technical and Environmental Administration. (2011). Good, Better, Best: The City of Copenhagen's Bicycle Strategy 2011-2025. Retrieved May 1, 2020, from https://www.eltis.org/sites/default/files/case-studies/documents/copenhagens_cycling_strategy.pdf

Ghazal, B., ElKhatib, K., Chahine, K., & Kherfan, M. (2016, April). Smart traffic light control system. In 2016 Third International Conference on Electrical, Electronics, Computer Engineering and their Applications (EECEA) (pp. 140–145). IEEE.

Grother, P., Ngan, M., & Hanaoka, K. (2019). Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects. Washington, DC: U.S. Department of Commerce, National Institute of Standards and Technology.

Haque, M. M., Chin, H. C., & Debnath, A. K. (2013). Sustainable, safe, smart—three key elements of Singapore's evolving transport policies. Transport Policy, 27, 20–31.

Hamidi, F., Scheuerman, M. K., & Branham, S. M. (2018). Gender recognition or gender reductionism? The social implications of embedded gender recognition systems. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–13).

Hancke, G. P., & Hancke Jr, G. P. (2013). The role of advanced sensing in smart cities. Sensors, 13(1), 393–425.

Ho, T. J., & Chung, M. J. (2016). Information-aided smart schemes for vehicle flow detection enhancements of traffic microwave radar detectors. Applied Sciences, 6(7), 196.

Hörold, S., Mayas, C., & Krömker, H. (2015, August). Towards paperless mobility information in public transport. In International Conference on Human-Computer Interaction (pp. 340–349). Springer, Cham.

Huawei Enterprise. (2019). Smart City Framework and Guidance for Thailand: Smart City services for Phuket. Retrieved May 1, 2020, from https://www.huawei.com/th/industry-insights/technology/smart-city-framework-and-guidance-for-thailand-smart-city-services-for-phuket


Javaid, S., Sufian, A., Pervaiz, S., & Tanveer, M. (2018). Smart traffic management system using Internet of Things. In 2018 20th International Conference on Advanced Communication Technology (ICACT) (pp. 393–398). IEEE.

Jayapal, C., & Roy, S. S. (2016). Road traffic congestion management using VANET. In 2016 International Conference on Advances in Human Machine Interaction (HMI) (pp. 1–7). IEEE.

Keyes, O. (2018). The misgendering machines: Trans/HCI implications of automatic gender recognition. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1–22.

Masi, I., Wu, Y., Hassner, T., & Natarajan, P. (2018). Deep face recognition: A survey. In 2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI) (pp. 471–478). IEEE.

Ramachandra, R., & Busch, C. (2017). Presentation attack detection methods for face recognition systems: A comprehensive survey. ACM Computing Surveys (CSUR), 50(1), 1–37.

Rasmussen, S. (2018). Green mobility and traffic safety in Copenhagen. Retrieved May 1, 2020, from https://transops.s3.amazonaws.com/uploaded_files/2018%20AASHTO%20International%20Day%20-%20Panel%201%20-%20Rasmussen%20-%20City%20of%20Copenhagen.pdf

Rhombus Systems. (2019). Introducing Unusual Behavior Detection (UBD) – Human Stance, Behavior, and Fall Detection. Retrieved May 1, 2020, from https://www.rhombussystems.com/blog/ai/introducing-unusual-behavior-detection-ubd-%E2%80%93-human-stance-behavior-and-fall-detection/

State of Green. (2016). Sustainable Urban Transportation. Retrieved May 1, 2020, from https://stateofgreen.com/en/uploads/2016/06/Sustainable-Urban-Transportation.pdf

Wang, W. Q., Zhang, X., Zhang, J., & Lim, H. B. (2012). Smart traffic cloud: An infrastructure for traffic applications. In 2012 IEEE 18th International Conference on Parallel and Distributed Systems (pp. 822–827). IEEE.

Wiliem, A., Madasu, V., Boles, W., & Yarlagadda, P. (2012). A suspicious behaviour detection using a context space model for smart surveillance systems. Computer Vision and Image Understanding, 116(2), 194–209.

Xiang, T., & Gong, S. (2008). Incremental and adaptive abnormal behaviour detection. Computer Vision and Image Understanding, 111(1), 59–73.

Zhao, D., Dai, Y., & Zhang, Z. (2012). Computational intelligence in urban traffic signal control: A survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(4), 485–494.

243
AI and the Future of Work:
A Policy Framework for Transforming Job Disruption into Social Good for All

Wilson Wong
Associate Professor, Data Science and Policy Studies Programme,
The Chinese University of Hong Kong

Introduction: Policy as the Key to AI Promises

This paper examines the impact of artificial intelligence (AI) on the future of work to develop
a policy framework for transforming job disruption caused by AI into social good for all.
With the rapid advancement and progress of AI technology, there is little doubt that the era
of AI will have an unprecedented impact on societies, economies, and governments in a
significant and profound ways, with long-term effects and implications (Kaplan, 2016; OECD,
2019a). Among these is the effect on the employment market through job disruption: the
general process of existing jobs being replaced by AI automation, with the simultaneous
potential of re-creating new opportunities and positions. This process is the primary focus
of this study.

Although the ideal state of AI is clearly desirable, and its promised returns to society are
attractive and potentially enormous, it should never be taken for granted or assumed to
be implemented effortlessly and automatically. The enabling factors in terms of good
governance and sound policies are often less emphasized and frequently neglected in the
current discussion. The cost of not paying serious attention to the issues and problems of
job disruption can be too high to bear as it would mean the possibility of countries not being
able to make a successful and smooth transition to the AI economy (Deming, 2017). In the
absence of equity and fairness, even if an AI economy is achieved, the goals of AI for social
good and using AI to empower all people can be severely compromised. Without smart and
effective policies to meet the AI challenge of job disruption, the disadvantaged and high-risk
members of society would be displaced by AI automation and face economic hardship and
social marginalization.

A major goal of this paper is to set up a policy framework on the role of the government as
well as the policy responses it should make in order to address the concerns and challenges
brought by AI job disruption. According to Kai-Fu Lee, a world-renowned AI expert and
venture capitalist, the disruption of patterns of work and employment could cost an alarming
estimated 40% of current jobs to AI (Lee & Moon, 2019). His estimate is echoed by the
statistics of the Organisation for Economic Co-operation and Development (OECD)
(see Figure 1). The combined share of jobs at high risk of automation and at significant risk of
automation is higher than 40% on average across OECD countries. Even for countries such as
Norway and Finland, which face a relatively lower risk than the global standard, the share of
jobs threatened by AI automation is still over 30%. At the higher end, countries such as Greece,
Turkey, Lithuania, and Slovakia stand at around 60%. Shockingly, even for an advanced economy
such as Japan, the share of jobs at risk is still more than 50%, meaning that one out of two
members of the labor force would be affected by AI automation.

How to expand the capacity of AI to build better society

Figure 1: Jobs at risk of automation in OECD countries
(Source: OECD, The Future of Work, 2019)
[Bar chart, "Large shares of jobs are at risk of automation or significant change": for each OECD country, the share of jobs (%) at high risk of automation and at risk of significant change, ranging from roughly 30% (Norway, New Zealand, Finland) to roughly 60% (Greece, Turkey, Lithuania, Slovakia).]

In theory, with the widespread deployment of AI, nations and societies should win in the long
run due to efficiency and productivity gains (OECD, 2018). However, with so much employment
at risk under AI, in the short run it is increasingly inevitable that there could be losers, including
countries and citizens who are ill-prepared for the impact of AI. AI should be capable of
creating a win-win outcome for all members of society (Lee & Moon, 2019). Any trade-off
between labor rights and automation, as well as tension between winners and losers, should
be a false dilemma. The key is whether proper policies are formulated and implemented to
ensure all members of society can capture the benefits of AI.

As seen in Figure 2, the OECD report The Future of Work (2019b) finds that six out of ten
adults lack the ICT skills necessary for the emerging jobs generated by AI. Another alarming
finding in the same report is that the most vulnerable population, whose jobs are at high risk
under AI, are not being offered re-training or re-skilling opportunities. For example, among
adults whose jobs face a high risk of automation, less than 20% are receiving re-training.
Ironically, among adults whose jobs face low automation risk, close to 70% are receiving
re-training. Similarly, less than 20% of low-skilled adults are receiving re-training, whereas
up to 70% of high-skilled adults are. All of these figures clearly show a policy mismatch
and a mistargeting in the allocation of resources for countries and governments in their
transition to AI. Unless proper government policies are implemented in time, the future of AI
could mean more inequalities within societies and across nations, with the ambition of the
technology unfulfilled.


Figure 2: Skills and the future of work
(Source: OECD Employment Outlook 2019: The Future of Work)
[Infographic, "Many adults do not have the right skills for emerging jobs": the share of highly-skilled jobs has increased by 25% over the last two decades; low-skilled jobs have also increased, but middle-skilled jobs have decreased; six out of ten adults lack basic ICT skills or have no computer experience. A bar panel shows adult participation in training by skill level (low vs. high), automation risk (high vs. low), and employment status (self-employed vs. full-time permanent), with the note that adult training should better target the disadvantaged. Technology can improve work-life balance, create new opportunities, automate tedious and dangerous tasks, improve health and safety, and boost productivity, but learning new skills is key.]
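The mistargeting pattern described above can be made concrete with a toy calculation. This is a sketch for illustration only: the group shares are the approximate percentages quoted in the text, while the helper function and its name are ours, not the OECD's.

```python
# Toy illustration of the re-training mismatch reported by the OECD (2019b):
# the groups most exposed to automation receive the least re-training.
# Shares are the approximate percentages quoted in the text.
training_coverage = {
    "high automation risk": 0.20,   # "less than 20%" receive re-training
    "low automation risk": 0.70,    # "close to 70%"
    "low-skilled adults": 0.20,     # "less than 20%"
    "high-skilled adults": 0.70,    # "up to 70%"
}

def mistargeting_gap(coverage, exposed, sheltered):
    """Positive when re-training flows to the sheltered group rather than
    the exposed one, i.e. resources are mistargeted."""
    return coverage[sheltered] - coverage[exposed]

gap = mistargeting_gap(training_coverage,
                       "high automation risk", "low automation risk")
print(f"Re-training coverage gap (low-risk minus high-risk): {gap:.0%}")
```

On the quoted figures the gap is 50 percentage points in favor of the group that needs re-training least, which is the mismatch the text describes.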

At the same time, a considerable gap has been observed between the demand for policy
solutions and the supply of current knowledge on this topic. While there is a substantial
amount of research and discussion on the impact of AI on economic growth and employment,
there is relatively less research on what governments should do to turn the risk and threat
of AI into job opportunities and social good for all. The literature review conducted in this
project found an evident shortage of relevant studies in the public policy and public
administration literature examining and analyzing the proper role of governments and the
required policy responses for addressing the impact of AI on the job market. This finding is
concerning: although many people believe that AI will have a major impact on the job market,
including job losses and job elimination through automation, there is limited knowledge of
what governments can do to address these adverse consequences (Kaplan, 2016). This is one
of the key reasons why both policymakers and scholars must make greater efforts in preparing
society for the AI era, especially because of its impact on policy, governance, and society
(Desouza, 2018; Partnership for Public Service, 2018, 2019).

In bridging this gap, this paper will accomplish two major tasks. First, it builds on the typology
of job replacement and AI to set up a policy framework on the role of government and the
policy responses needed to address various concerns and challenges. On the principle of
"rise with AI, not race with it" (World Bank, 2018), governments must play active or even
aggressive roles not only in re-training, knowledge and skill building, and job re-creation, but
also in social protection and a fair re-allocation of resources. Second, this paper conducts a
survey of national AI strategies to assess the extent to which the AI policy of job disruption is
taken seriously by countries. It reveals that many countries, especially developing ones, are
not well-prepared for AI, and most seem to be overlooking fairness and equity issues. In
response, this paper provides actionable policy recommendations to national governments
and international authorities.

It is important to recognize that this paper is not an isolated effort in addressing these
important questions and issues. Instead, it is a new step in a series of efforts by researchers
and scholars of related projects to generate knowledge and findings substantiated by solid
research on the social impact of AI and technology. More specifically, this is the second
publication by the Association of Pacific Rim Universities (APRU) on technology and the
transformation of work. It is hoped that this will build upon and extend the insights and
findings of the first report, "Transformation of Work in Asia-Pacific in the 21st Century",
published in 2019. This paper moves the collective project to the next stage by adopting a
policy-oriented focus and a governmental approach to examine what governments should do
to transform the threats and uncertainties of AI job disruption into opportunities for achieving
social good for all.

PART I: AI Impact on the Job Market

The Typology of Job Replacement

Among the attempts to understand and theorize the impact of AI on the future of work, one
of the most useful and best-known frameworks for analyzing the effect of AI on the job market
is the typology of job replacement developed by Lee Kai-Fu (2018) in his book "AI Super-Powers".
His typology is shown in Figure 3. In basic terms, Lee analyzes jobs along two major
dimensions: the social nature of the job (social vs. non-social) and the degree to which the job
can be replaced by automation (optimization-based vs. creativity or strategy based). Under this
typology, four types of jobs with different effects under AI job replacement can be identified
as below:

Figure 3: A typology of risk of replacement by jobs
(Source: Lee (2019) and Author)
[Quadrant diagram: vertical axis Social vs. Asocial; horizontal axis Creativity or strategy based vs. Optimization based. Quadrants: "Safe Zone" (social, creativity/strategy based), "Human Veneer" (social, optimization-based), "Slow Creep" (asocial, creativity/strategy based), "Danger Zone" (asocial, optimization-based).]

I. Danger Zone (non-social and optimization-based)
As evident from the title, jobs in the "Danger Zone" are those facing the highest risk of being
replaced by AI automation (e.g., customer service representatives, drivers, basic translators,
telemarketers, garment factory workers, chefs, and so on). These jobs face the most
immediate danger of being replaced by


AI, and therefore should receive the highest policy priority. Low-skilled labor groups are often
the most vulnerable, as they have limited access to re-training opportunities. Providing
re-skilling and re-training to this group of people should create a win-win outcome. For society,
higher efficiency can be yielded by eliminating "Danger Zone" jobs and replacing them with AI
technology. For the workers concerned, re-training and re-skilling let them shift to job
opportunities found in the other three quadrants, where they will experience higher
productivity by taking advantage of AI and, as a result, enjoy higher wages.

II. Human Veneer (social and optimization-based)
"Human Veneer" is a mixed and somewhat tricky category. In principle, most of the functions,
tasks, and duties can already be done by AI, but the key social interactive element of the job
makes it difficult to fully automate (e.g., cafe waiters, wedding planners, teachers, doctors,
hotel receptionists, and so on). Even if the behind-the-scenes optimization work were
completely taken over by AI, human actors would still be required as the social interface (the
veneer) for clients and customers, representing the delicate performance balance and intricate
symbolic relationship between AI and humans. This is exactly why bank tellers were not
eliminated when the automated teller machine (ATM) was invented: human interaction was
still valued and preferred by many customers (Kang & Francisco, 2019).

According to Lee (2018), two factors determine what percentage of jobs in the "Human Veneer"
quadrant would be replaced by AI and how quickly: the capability of restructuring the task and
making AI more human-like in performing it, and how open and receptive customers are to
interacting with AI. Since the second factor can vary across cultures and social contexts, we
can expect to see variations across countries in the type, degree, and pace of jobs being
replaced by AI under "Human Veneer". In formulating a proper policy response to job disruption,
this quadrant underscores the importance of enhancing the social intelligence of workers in
skill upgrading and re-training, as social intelligence is a capability which cannot be performed
or replaced by AI (OECD, 2018).

III. Slow Creep (non-social and creativity / strategy based)
The "Slow Creep" quadrant includes jobs which do not rely on human social skills but require
another dimension of capacities which currently cannot be performed by AI: dexterity, strategic
thinking, creativity, and the ability to adapt to an unstructured environment (OECD, 2018; Frey
and Osborne, 2017). Examples of jobs under this category include aerospace mechanics,
scientists, artists, columnists, graphic designers, and security guards. The category is labelled
"Slow Creep" because it is generally believed that, given the progress of AI technology and the
advent of Big Data for AI training, it is plausible for AI to gradually narrow the gap with humans
in terms of creativity and adaptation to uncertainties and contingencies. The pace of job
elimination in this quadrant would depend less on process innovation in companies and
organizations (a major factor affecting job elimination in the "Human Veneer" quadrant) and
more on the progress and advancement of AI technology.

The special nature of "Slow Creep" helps to accentuate the important principle advocated by
the World Bank (2018) for the development of AI: "Rise with – not against – the Machine". In
other words, humans should "rise with AI, not race with it" (World Bank, 2018). From a policy
standpoint, it is pointless and fruitless for humans to compete directly with AI, which is also
contrary to the intention of inventing new technology. Machines and technology are invented
to aid humans; competing with or replacing humans is not the objective. The development of
AI should be human-centric, elevating the performance and strengthening the capacity of
humans. Those in the "Slow Creep" category should be equipped with knowledge and skills of
AI in order to enhance their ability to become more productive and creative.

IV. Safe Zone (social and creativity / strategy based)
Jobs in the "Safe Zone" quadrant are those which possess two of the three major "engineering
bottlenecks" (i.e., elements which cannot be easily automated by AI), such as social and creative


intelligence (Frey and Osborne, 2017). Some major examples of jobs under this category
include CEOs, social workers, PR directors, dog trainers, physical therapists, and hair stylists.
It is estimated that all of these jobs, due to their nature and the limitations of current AI
capacities, are unlikely to be replaced by AI automation in the near and foreseeable future.

Nevertheless, it would be a mistake to treat the "Safe Zone" as a "No-Action Zone" from a
policy perspective. Job disruption policy should adopt a balanced, two-way approach: help
those at a high risk of job replacement, while also expanding job opportunities and enhancing
the performance of people in the low-risk zone by upgrading their AI capacities. This should
be the path leading to the overall goal of "AI for Social Good" and "AI for All", benefiting and
empowering all members of society. Although people with jobs in the "Safe Zone" quadrant
face a much lower risk of losing their positions to AI, this does not exclude them from
benefiting from AI itself. In this regard, workers and professionals in the "Safe Zone" should
also be offered AI knowledge and skills through policy responses so that they can delegate
more of their routine tasks to AI and fully concentrate on areas and duties in which they
out-perform AI. In the meantime, many professionals and staff in this quadrant are themselves
leaders and changemakers in companies, governments, and non-profit organizations who can
provide leadership and foresight in the development and adoption of AI in society through
sectoral collaboration and other cooperative and engagement platforms.

Job Disruption and the Generic Approach

Understanding the impact of job disruption is a critical step towards formulating effective and
appropriate policy responses. In this connection, some common misunderstandings and
misperceptions about the effects of job disruption should be addressed here. First, job
disruption impacts both physical and cognitive labor. All the above examples under each
quadrant are taken from Lee's book "AI Super-Powers" (2018), which includes jobs of both
classifications. While there are debates and controversies about the suitability and correctness
of each example, Lee's typology provides a useful framework for concretely and analytically
understanding the effect of AI on the job market and for providing a rigorous, scientifically
based estimation of the effects on job loss, job elimination, and job disruption in the era of AI.
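The 2x2 structure of Lee's typology can be expressed as a small lookup. This is an illustrative sketch only: the quadrant names and the two dimensions come from the text, and the example jobs are taken from the chapter's own lists.

```python
def quadrant(social: bool, optimization_based: bool) -> str:
    """Map Lee's two dimensions onto the four risk quadrants (Figure 3)."""
    if social:
        return "Human Veneer" if optimization_based else "Safe Zone"
    return "Danger Zone" if optimization_based else "Slow Creep"

# Example jobs and their (social, optimization-based) coding, as read
# from the chapter's lists of examples for each quadrant.
jobs = {
    "telemarketer": (False, True),      # asocial, optimization-based
    "teacher": (True, True),            # social, routine tasks automatable
    "graphic designer": (False, False), # asocial, creativity-based
    "social worker": (True, False),     # social, creativity-based
}

for job, (social, opt) in jobs.items():
    print(f"{job}: {quadrant(social, opt)}")
```

The point of the lookup is that risk is not a single scale: moving along either dimension alone changes the quadrant, and hence the appropriate policy response.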

Figure 4: Change of jobs by skill level (low, middle, high) in OECD countries (1995-2015)
(Source: OECD Employment Outlook 2017)
[Bar chart: change in the relative share of low-, middle-, and high-skill jobs (percentage points, roughly -20 to +20) for each OECD country, from Austria through the Czech Republic; the relative share of middle-skill jobs decreased while the shares of low- and high-skill jobs increased in most countries.]

A second common misperception is that AI automation would only replace low-skilled jobs.
Since cognitive labor is also at risk under AI automation, this is simply an oversimplification
and is untrue. As shown by the examples provided in the above discussion, high-skilled and
professional jobs such as teachers, doctors, and financial planners can still be replaced by AI.
Skill level is not the most accurate and reliable indicator of whether a job would be disrupted
by AI. It is still the two main factors, social intelligence and creative intelligence, which
measure the risk of replacement. These two are limitations of the current technology of AI
(Frey and Osborne, 2017), meaning that humans can keep their jobs as long as they can
out-perform AI in terms of capacities and cost. To further substantiate this point (see Figure 4),
between 1995 and 2015 middle-skill jobs were "disappearing", leading to a notable and
intriguing situation of job polarization in the employment market in OECD countries. The
average decrease in middle-skill jobs across OECD countries during this twenty-year period
was about 10%. In contrast, low-skilled and high-skilled jobs grew by about 2% and 7%,
respectively.

The above numbers should be considered together with the change in manufacturing and
non-manufacturing employment in OECD countries in the same period. Figure 5 shows
significant shrinking of the manufacturing sector in many industries while there was
remarkable growth elsewhere, such as in the service industry. According to OECD (2019b),
between 1995 and 2015, employment in the manufacturing sector declined by 20%, while
increasing by 27% in the service sector. For example, employment in hotels and restaurants
increased by over 40% and rose by about 20% in finance and insurance. After interpreting
these figures, there are some key messages to take into consideration. First, most
manufacturing jobs belong to the "Danger Zone" quadrant, which would explain their massive
decline as a result of AI automation. Although jobs are disappearing in this quadrant, new
opportunities are being generated in other quadrants such as "Human Veneer" and "Safe Zone".
This is why the non-manufacturing and service sectors are showing strong and robust growth,
as many of the new jobs created belong to the other three quadrants.
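The polarization pattern quoted above can be checked with a back-of-the-envelope sketch. The numbers are the approximate OECD-average figures cited in the text; the variable names and the simple test are ours.

```python
# Approximate 1995-2015 changes in relative job shares, OECD average,
# in percentage points, as quoted in the text.
change_in_share = {"low-skill": 2.0, "middle-skill": -10.0, "high-skill": 7.0}

# Job polarization: both ends of the skill distribution gain relative
# share while the middle loses it.
polarized = (
    change_in_share["low-skill"] > 0
    and change_in_share["high-skill"] > 0
    and change_in_share["middle-skill"] < 0
)
print("Job polarization pattern:", polarized)
```

Note that because these are changes in relative shares, the middle-skill loss is roughly offset by the combined low- and high-skill gains.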

Figure 5: The decline of the manufacturing sector in total employment within industry in OECD countries (1995-2015)
(Source: OECD, The Future of Work, 2019)
[Bar chart: percentage change in employment by industry, roughly -80 to +80. Non-manufacturing industries such as real estate, renting and business activities, hotels and restaurants, finance and insurance, and construction grew faster than average industry growth, while manufacturing industries, from food products and beverages down to textiles, textile products, leather and footwear, declined.]


Recognizing these misperceptions, a more discerning and cautionary approach should be
adopted in translating the findings on job disruption into policy implications. Even if
employment sees a net and overall increase, there can be policy problems at both the personal
and country levels. For individuals, the government should offer re-skilling and re-training
opportunities. At the country level, the government should invest heavily and strategically in
AI infrastructure in order to build a labor force with AI knowledge and skills. When new job
opportunities are created by AI, there is no guarantee that those jobs will necessarily be
available in the country where the old positions were eliminated. New job opportunities
pushed by AI can be created in advanced and developed countries, while poor and developing
countries would likely suffer huge job losses as a result of AI automation.

For this reason, without proper policies, AI automation can generate more inequities among
individuals within society and internationally. In a free and global market, jobs and investment
can move across national boundaries, so both individuals and countries must be AI-ready
before they can receive the benefits of AI (OECD, 2019b; World Economic Forum, 2018). This
also reminds us of the importance and relevance of context in assessing the impact of
technology upgrades for any particular country (Kang & Francisco, 2019). For a country with
poor AI infrastructure and a workforce with low AI readiness, the rise of AI could potentially be
devastating: it could cause a large-scale elimination of jobs, while new opportunities flow out
to other countries with a higher AI advantage.

While job loss and elimination under AI is inevitable, it can represent a "creative destruction"
of the job market, as technology evolves and makes progress to create a brighter future for
humankind (Schumpeter, 1942). Lee (2018) also gives a generally positive view of a future in
which AI and humans can coexist in the labor market. As shown in Figure 6, the only quadrant
in which the co-existence of humans and AI is not possible is the "Danger Zone". In the other
three quadrants, AI and humans can co-exist and reinforce each other in different modes and
combinations in order to enhance performance and outcomes.

Figure 6: Human – AI co-existence in the labor market
(Source: Lee (2018))
[Quadrant diagram on the same axes as Figure 3: AI alone occupies the "Danger Zone", while the other three quadrants are staffed by humans alone or by combinations of humans and AI.]

To address the disruptive impact of AI on the job market, a generic "3R" approach has been
developed: Reduce, Redistribute, and Retrain (Lee, 2018). With regard to "Reduce", automation
in the "Danger Zone" would reduce the working hours of many people. People would work less
but still enjoy the same standard of living, a symbol of the progress and prosperity of a society
in which AI provides more comfort and affluence. In principle, we can use redistribution
("Redistribute"), through means such as taxation and public expenditure, to shift resources
from those who are still working (with higher performance) to those whose jobs have been
replaced by AI. At the same


time, if there are still people who would like to stay in the job market, they can be "Retrained"
(the third "R") to pick up the skills and knowledge required in the AI era (World Economic
Forum, 2018).

The typology by Lee is consistent with and complementary to other frameworks set up for
evaluating the impact of AI on job disruption. Frey and Osborne (2017) identify three types of
tasks which cannot be easily replaced by AI and automation: perception and manipulation
tasks, creative intelligence tasks, and social intelligence tasks. These three sets of tasks create
serious challenges for codification and have become known as "engineering bottlenecks".
Perception and manipulation tasks are performed in unstructured, complex situations and
involve handling irregular objects, such as operating in cramped work spaces. Creative
intelligence tasks require original ideas. Social intelligence tasks need an understanding of
other people's reactions in social contexts, or require assisting and caring for others.

Acemoglu and Autor (2011) have developed a helpful framework for assessing the impact of AI
on wages and employment. Essentially, they divide technologies into two major types: enabling
technologies and replacing technologies. Enabling technologies help to expand the
productivity of labor and therefore increase wages and job opportunities. Replacing
technologies, such as manufacturing robots, allow machines to be substituted for labor,
resulting in job losses and wage reductions. From the standpoint of society as a whole, both
technologies are important to its progress. Yet, in formulating labor and employment policy,
the desirable direction should be to train an AI-competent labor force to work and rise with
technologies, allowing workers to benefit from the enabling technologies. This idea follows the
guiding principle of "rise with AI, not race with it" (World Bank, 2018). Directing the labor force
to compete with robots and AI in tasks related to replacing technologies would only be a fatal,
counter-productive, and irrational strategy (Kang and Francisco, 2017).

The Policy Framework: Responses and Enabling Factors

When we consider the 3Rs in real-world settings, with real politics and policies, the situation
becomes much more complicated (Howlett & Ramesh, 1998; Kingdon, 1984; Lindblom, 2004).
Many difficulties and obstacles would be encountered in addressing the impacts of a
technology such as AI on the job market in a complex and dynamic political environment
(Ferro et al., 2013; Kitchin, 2014). For example, many people with jobs in the "Danger Zone"
are believed to belong to the poor, older, and less educated segments of the population. For
them, "retrain" and "redistribute" may not be preferable or politically feasible. Since they are
older and less educated, re-training may not be realistic or affordable for them. In addition,
poor people are often under-represented in politics, so it would be unlikely for them to
influence the government to adopt a redistribution policy that compensates for job losses
caused by AI and funds re-training programs for them. Resources used for redistribution
must be generated from certain sources, such as those already benefiting from AI technology.
However, the companies and people profiting from AI are generally believed to be rich and
powerful. It is therefore politically difficult to tax them in order to generate new resources to
compensate those who need help and assistance in adapting to the AI era.

To conclude, the 3Rs underestimate the complexity and oversimplify the difficulties of the
real-world policymaking process. Importantly, a well-developed and comprehensive framework
does not yet exist, and therefore the 3Rs cannot be translated into effective and actionable
policy responses. We also argue that a fourth R, "Rethink" (making the 3Rs into 4Rs), may be
required in order to develop proper policy responses to address the challenges of AI.

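The "Redistribute" arm of the 3Rs is, at bottom, a transfer calculation. The sketch below is a deliberately simplified and entirely hypothetical model (all numbers and names are invented for illustration and appear nowhere in the report): a flat tax on the wages of those still employed funds a basic income for displaced workers.

```python
def flat_tax_rate_for_basic_income(employed, displaced, avg_wage, basic_income):
    """Flat tax rate on wages needed so that transfers cover a basic income
    for every displaced worker (toy model: no behavioral effects, no other
    revenue sources)."""
    required = displaced * basic_income   # total transfer bill
    taxable = employed * avg_wage         # total wage base
    return required / taxable

# Hypothetical economy: 60 workers still employed at an average wage of
# 50,000; 40 workers displaced, each guaranteed a basic income of 15,000.
rate = flat_tax_rate_for_basic_income(employed=60, displaced=40,
                                      avg_wage=50_000, basic_income=15_000)
print(f"Required flat tax rate: {rate:.1%}")
```

Even this toy version shows why the politics are hard: the larger the displaced share relative to the employed wage base, the higher the tax rate that must be extracted from those still benefiting from AI.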

[Figure 7 is a two-by-two quadrant chart of job disruption, with one axis running from "Social" to "Asocial" and the other from "Creativity or strategy based" to "Optimization based". The major policy modes shown in the quadrants are Social Intelligence Training, Sectoral Collaboration, Universal Basic Income, and Tech-enabled Re-skilling.]

Figure 7: Job disruption and policy responses (Source: Lee (2019) and Author)

Figure 7, Table 1, and Table 2 represent some of the initial but major efforts to set up a policy framework to address the policy issues and problems of AI on the job market. Figure 7 shows the major mode of policy response under each quadrant of job disruption. This does not exclude the possibility that there are many other complementary and compatible responses in each quadrant. However, the major mode represents the crux of the issues and concerns regarding the nature of the job category in the quadrant which should receive the most attention from policymakers.

Universal Basic Income (UBI) should be the major mode of policy when addressing the "Danger Zone". Even if these workers can take up new jobs in other quadrants after re-training and re-skilling, UBI would also be needed during the re-training period to support their lives and maintain their income. For the vulnerable population in the "Danger Zone", for whom re-training and re-skilling would be less feasible due to age, education, and other limiting factors such as health issues, UBI should become a long-term and stable source of income. In fact, this is closer to the original ideal of UBI, in which all members of society should be unconditionally guaranteed a basic level of income, as AI should give rise to a rich society and provide a better quality of living for everyone.

Re-training is the key policy direction for both "Human Veneer" and "Slow Creep". That said, there is a subtle but important difference between the policy responses of the two. While the re-training in "Human Veneer" represents how to make humans more people-oriented, the re-training in "Slow Creep" should place more emphasis on enhancing the human capacity for mastering AI. This will enable them to be more creative and perform better at human functions and capacities that are unattainable by AI. In sum, re-training is more "human-oriented" in "Human Veneer" but should be more "technology-oriented" in "Slow Creep". It is untrue that the government has no role to play in the last quadrant, the "Safe Zone". To push AI technology forward and make sure it benefits future society, a partnership and collaboration among different sectors, including governments, NGOs, universities, and industries, should be formed in order to lead the future development and application of AI technologies, rather than reacting to them passively.

Table 1 examines the impact of AI job disruption and policy responses by identifying the challenges and difficulties by type of job disruption and major policy mode. For example, it is expected that there would be significant problems regarding the politics of readjustment, transformation, and redistribution from vested interests after adopting innovative (but also controversial) policies such as UBI (Haggard, 1990; Polidano, 2001; Przeworski & Limongi, 1993; Rodrik, 1992). Interest groups are powerful, and their rent-seeking activities often prevent the adoption of new technologies and slow down the progress and development of societies (Evans, 1995; Johnson, 1982; Kruger, 1974; Olson, 1982).

AI and the Future of Work: A Policy Framework for Transforming Job Disruption into Social Good for All

Types of Disruption (ranked in terms of time urgency) | Policy Responses | Politics and Challenges

Danger Zone (Reduce and Redistribute)
Policy responses:
• Universal Basic Income (UBI)
• Taxing AI and analysis of the vulnerable population
Politics and challenges:
• Politics of adjustment and transformation (sectoral vested interests)
• Politics of redistribution

Human Veneer (Retrain)
Policy responses:
• Retraining and education (social intelligence)
• Life-long education (long-term education contract)
Politics and challenges:
• Government partnerships with universities
• Reforming curriculum to eliminate the wall and divide between AI and human dimensions

Slow Creep (Retrain)
Policy responses:
• Retraining and education (making humans more AI-equipped)
• Life-long education (long-term education contract)
Politics and challenges:
• Reforming curriculum to eliminate the wall and divide between AI and human dimensions
• Government partnerships with universities

Safe Zone (Rethink)
Policy responses:
• Exploring the opportunities, potential, and threats of AI
• Providing foresight and leadership
Politics and challenges:
• Collaboration between multiple sectors (universities, governments, and industries)
• Balancing multiple and competing values in the process (including profit vs. social good)

Table 1: Policy and challenges in AI and job disruption

(Source: Author)

As seen in Table 1, the changes required do not necessarily relate only to politics and institutional change; changes of role and mindset are equally important. In this regard, universities play an irreplaceable role in leading AI technology and the creation of a knowledge-based learning society (Asian Development Bank, 2018; Florida, 2002). One of the major aspects that requires a new mindset and fresh perspective is treating university education as a long-term contract between universities and citizens rather than a four-year commitment. "Students" are expected to return to campus much more frequently than before for training and education as new technologies arise. Moreover, the wall dividing human-centric liberal arts education from technology-based STEM education should no longer be relevant or sensible in a world of AI. Critical revamping and radical restructuring of university curricula would be necessary to integrate the two into a single, coherent body of knowledge and skills, enabling the new generation to be fully equipped for the challenge and impact of AI (Tam, 2019; Yahya, 2019).


Enabling Factors: Environment and Context

Domestic level
• Transparency and accountability in governance
• Participation and inclusive governance
• Fairness and justice in distribution and re-distribution
• Top-level government commitment
• Interagency task force
• Mechanisms for collaboration across sectors
• A knowledge-based learning society
• An active university education sector
• Platform for learning and communication across universities, industries, society, and the government

International level
• A reliable and trustworthy international organization for learning and knowledge diffusion
• A regulation and enforcement framework on basic principles of AI
• International advice and support to eliminate the gap of the "AI divide" between AI-rich and AI-poor countries

Table 2: Enabling factors – domestic and international levels

(Source: Author)

Table 2 identifies the enabling factors for generating the policy responses in Table 1; these factors are consistent with the major principles of good governance in the relevant studies and literature (Anderson, 2015; Cairney, 2016; Cath, 2018; Painter & Pierre, 2005). The factors can be divided into two major levels: domestic and international. At the domestic level, transparency, accountability, and participation should be some of the key elements of the public administration apparatus and decision-making process for formulating effective and appropriate policy responses to AI job disruption. There should also be an inclusive and open process to ensure the involvement of all major stakeholders and actors in making all important policies. This would ascertain that the policy solutions are comprehensive and broadly supported for the welfare and benefit of all members of society, regardless of their political status and economic wealth. To facilitate the communication and collaboration of all actors and participants, a cross-sectoral platform should also be set up as the nexus of interaction and policymaking.

At the international level, organizations such as the United Nations (UN) and the OECD should take the lead in major areas and capacities. That said, concrete and specific policy decisions should be made at the country level to respect each country's sovereignty while enabling it to design solutions that best fit its context (Welch & Wong, 1998). Despite this division of labor, international organizations and authorities can still make an outstanding and significant contribution to learning and knowledge diffusion by becoming a major hub of international AI cooperation (Straub, 2009). There is also a key role for them to take up in establishing a regulatory framework on the basic principles of AI. If there is any area in which international organizations should have a more direct and close partnership with countries, it would be to provide resources and support to developing countries, which are most vulnerable to AI job disruption. Eliminating huge and detrimental international inequalities, such as the "AI divide" between AI-rich and AI-poor countries, should be a new and fundamental mission of international organizations in the AI era.


PART II: Policy in Action – National AI Strategies


The Survey

To assess the extent to which the AI policy of job disruption is considered by major countries around the world, the second part of this paper conducts a survey of national AI strategies. The following major research and policy questions will be examined in this study. First, it attempts to find out whether the impact of AI is being seriously considered at the country level, which can be readily reflected by whether or not the country has produced any open national document on AI strategy. If an AI strategy document exists, we further examine its content and major initiatives, particularly the role of state and market in developing AI technology. In this regard, there are several possibilities and combinations: AI policy led by government, AI policy led by the market, or AI policy led by a coalition of both government and market (a hybrid type of governance). Since the general goal of the market is profit-making, it is unlikely that a national AI strategy led mainly by the market would be fair and equitable. With this in mind, we would also like to discover whether equity and social protection are among the key areas emphasized in the national strategies and, if so, what policy positions and solutions the country has raised to address these issues and concerns.

As an increasing number of countries prepare for the socio-economic transformation generated by AI, strategic documents are issued at various levels, crystallizing and encapsulating the vision and perspectives of top policymakers. A wide array of working group papers, consultations, guidelines, and reports precede and inform the design of a national strategy, but our analysis primarily focuses on governmental strategies or national programs in their final form. These national AI strategy documents represent policy consensus that is carefully worded, influential, and committed. As a result, AI strategy documents issued by non-state actors have not been included in this study. Preliminary, discussion, and consultation national documents on AI were also not selected, as they reflect "work-in-progress" or "initial thinking" rather than an adopted national policy position on AI. The documents selected should also be dedicated exclusively to AI, as opposed to AI being listed together with other digital and ICT technologies. As the policy issue and concern is our center of attention, progress reports following up on national AI strategies have not been included in the study; these documents do not introduce new policy positions and mostly cover technical tools and the implementation details of the strategies. Furthermore, because the focus and scope of analysis of our study is national governments, documents issued by international organizations such as the UN, EU, and OECD have not been included. Despite this decision, the major content of relevant AI documents from these international organizations will still be summarized as a reference in the following sections.

This study follows a two-step methodological approach. First, starting from a comprehensive list of all national strategies compiled from online research, all those which have an English version are selected according to the criteria stated above. The earliest national AI strategy document released was produced by South Korea, dating back to as early as April 2016. The latest one included in the analysis is the National AI Strategy of Singapore, which was published in November 2019. After this selection, the national AI strategies are analyzed in accordance with our research questions. A total of 15 documents by 12 countries were identified, collected, and analyzed (see Table 3). It should be noted that the actual number of documents would be much higher if some of our selection criteria were relaxed. Because countries will continue to produce AI strategy documents, no list of such documents can be exhaustive. Since national AI strategy documents are a major policy communication tool for citizens, international partners, and stakeholders, we are confident that our study has included many important documents. They should also provide a representative sample of the state of national AI strategies for most countries throughout the world.
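For concreteness, the selection rules above can be restated as a single predicate. The sketch below is purely illustrative: the record fields and the `is_selected` helper are hypothetical names that restate the criteria in the text, not part of the study's actual instrument.

```python
# Illustrative restatement of the survey's document-selection criteria.
# Field names and helper are hypothetical, not the authors' coding scheme.

def is_selected(doc: dict) -> bool:
    """Return True only if a document satisfies every selection criterion."""
    return bool(
        doc["issued_by_national_government"]    # not IOs or non-state actors
        and doc["dedicated_exclusively_to_ai"]  # not bundled with other ICT policy
        and doc["final_form"]                   # not preliminary or consultation drafts
        and not doc["progress_report"]          # follow-up reports are excluded
        and doc["english_version_available"]
    )

example = {
    "issued_by_national_government": True,
    "dedicated_exclusively_to_ai": True,
    "final_form": True,
    "progress_report": False,
    "english_version_available": True,
}
print(is_selected(example))  # True: meets all criteria, so the document is kept
```

A document failing any single rule (for example, a progress report) is dropped, which is why relaxing the criteria would enlarge the corpus considerably.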


Using qualitative content analysis and comparative methods, the national AI strategies of Canada, China, Finland, France, Germany, Japan, Russia, Singapore, South Korea, Sweden, the United Kingdom, and the United States have been assessed to unveil their articulation of AI and its impact. This also includes their country-level policy responses to job disruption caused by AI automation. The analysis is driven by theoretical insights from the governance and ICT literature (Fountain, 2001; Norris, 2012; Wong et al., 2006), and thus should contribute to the current policy discussions by conceptually structuring the debates and offering a critical perspective on AI governance and the future of work.
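The coding itself was qualitative, but as a purely hypothetical illustration of how such themes could be flagged mechanically in strategy texts, a naive keyword scan might look like the following (the theme names and keyword lists are assumptions for illustration, not the author's codebook):

```python
# Hypothetical keyword-based theme flagging for strategy texts.
# Theme names and keyword lists are illustrative assumptions only.

THEMES = {
    "competitiveness": ["competitiveness", "leadership", "innovation"],
    "r_and_d": ["research", "investment", "talent"],
    "equity_social_protection": ["equity", "social protection", "basic income"],
}

def flag_themes(text: str) -> dict:
    """Return, per theme, whether any of its keywords appears in the text."""
    lowered = text.lower()
    return {theme: any(kw in lowered for kw in kws)
            for theme, kws in THEMES.items()}

sample = "We will make long-term investments in AI research to secure competitiveness."
print(flag_themes(sample))
```

Real qualitative content analysis reads passages in context rather than matching strings; a sketch like this only shows why a document can emphasize competitiveness and R&D while never touching equity at all.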

Date | Name of strategy | Country

April 2016 | AI Information Industry Development Strategy | South Korea
October 2016 | The National Artificial Intelligence Research and Development Strategic Plan | United States
March 2017 | Pan-Canadian AI Strategy | Canada
May 2017 | AI Program | Finland
May 2017 | AI Technology Strategy | Japan
July 2017 | Next Generation AI Development Plan | China
March 2018 | AI Sector Deal | United Kingdom
March 2018 | AI for Humanity | France
May 2018 | National Approach to AI | Sweden
November 2018 | Federal Government's AI Strategy | Germany
February 2019 | Executive Order on Maintaining American Leadership in AI; and American AI Initiative | United States
May 2019 | Beijing AI Principles | China
June 2019 | The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update | United States
October 2019 | On the Development of AI in the Russian Federation | Russia
November 2019 | National AI Strategy | Singapore

Table 3: National AI strategies included in the analysis

(Source: Author)
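The headline counts reported in the text (15 documents from 12 countries, spanning April 2016 to November 2019) can be cross-checked directly against Table 3; a minimal sketch:

```python
# Cross-check the counts reported in the text against Table 3.
# (country, year-month) pairs transcribed from the table above.
documents = [
    ("South Korea", "2016-04"), ("United States", "2016-10"),
    ("Canada", "2017-03"), ("Finland", "2017-05"), ("Japan", "2017-05"),
    ("China", "2017-07"), ("United Kingdom", "2018-03"),
    ("France", "2018-03"), ("Sweden", "2018-05"), ("Germany", "2018-11"),
    ("United States", "2019-02"), ("China", "2019-05"),
    ("United States", "2019-06"), ("Russia", "2019-10"),
    ("Singapore", "2019-11"),
]

countries = {country for country, _ in documents}
print(len(documents), len(countries))  # 15 documents, 12 countries
# ISO-style year-month strings sort chronologically:
print(min(d for _, d in documents), max(d for _, d in documents))
```

The three US entries and two China entries account for the difference between the 15 documents and the 12 countries.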


The AI Governance Landscape: Major Themes and Principles

AI has become a key focus of both national and international strategies, as documents have been produced by individual countries and by international organizations and are open to the public. For the latter, the OECD published its "OECD Principles on AI" document in May 2019 and the EU released its "White Paper on Artificial Intelligence" in February 2020. Since international organizations generally have no jurisdiction over their member countries, their AI strategy documents tend to be guiding documents and a commitment to collaboration beyond state borders. They usually stand for agreement about continuing discussions on AI R&D and promoting cooperation to reach a human-centered AI society as well as reducing the risks of AI. They also include non-binding, principle-driven commitments that frame the international debate, highlighting the need to work together in order to remain competitive in AI.

The two recent documents by the OECD and the EU provide excellent examples of the major points and observations above. In the "OECD AI Principles", two of the five major principles are: "AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being", and "AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards—for example, enabling human intervention where necessary—to ensure a fair and just society". Similar statements, declarations, and principles have also been made by the EU. The EU "White Paper on AI" shares and promotes the EU's vision of the benefits of AI to citizens, businesses, and the public interest. For citizens, the EU believes that they should be able "to reap new benefits for example improved health care, fewer breakdowns of household machinery, safer and cleaner transport systems, better and more accountable public services". In respect of the public interest, the EU expects better and more efficient public services: "for services of public interest, for example by reducing the costs of providing services (transport, education, energy and waste management), by improving the sustainability of products and by equipping law enforcement authorities with appropriate tools to ensure the security of citizens, with proper safeguards to respect their rights and freedoms."

National strategies, on the other hand, tend to be dominant, prescriptive approaches. They are unifying governmental documents that outline directions and priorities for domestic efforts and the allocation of resources. In some cases, they may apply to different levels of government in an uncoordinated manner, such as in the US. AI has generated an unprecedented number of national strategies and frameworks in a relatively short period of time. Although the field of AI can be dated back to the 1950s, the current development of AI strategy and regulation closely mirrors that of the Internet (Radu, 2019). This similarity can be linked to the fact that the Internet remains a key vehicle for "feeding" AI devices and for real-time experimentation with large amounts of data (Schonberger & Cukier, 2013).

It is not difficult to understand the background to the sudden surge of national AI strategies in recent years. Widely recognized as a disruptive technology (Bower & Christensen, 1995), AI is at the center of societal transformation, technology innovation, risk assessment, and governance debates. The ubiquity and extensive applications of AI correspond with the focus of attention in AI discussions, which ranges from designing efficient systems and ensuring competitiveness to constructing ethical frameworks, risk assessment, legal responsibility, and certainly the impact on the human labor market and job disruption as AI advances.

With reference to the first research question, the findings of our study are both striking and alarming. The number of countries which have national AI strategies (as defined by our selection criteria) is much smaller than expected: only 12, to be exact. The UN currently has 193 member states, which means that only around 6% of them have a formal and well-articulated national AI


strategy to take advantage of AI and cope with its potentially negative impact; these figures cast doubt on their readiness for AI.

It is not only the small number of countries which causes concern here; the type of countries with or without a national AI strategy is also worth discussing. A gap can be found between the early adopters of AI strategies and countries which are still in the process of drafting a national policy. The former tend to be AI leaders and developed countries (e.g., the US, Germany, Japan, and South Korea) rather than developing countries (e.g., Laos, Nepal, Nigeria, and Myanmar). This validates and confirms the existence of an "AI divide" on a global scale. Of the 12 countries in our survey, only China may still be considered a developing country. However, this nation is clearly an exception rather than the norm given its economic power and international influence. A closer look at China would reveal that it is not a developing country in the typical sense, as it has attained the standard of many developed countries in many major aspects, such as research and technology, and is a rising global power.

Since AI would impact both developed and developing countries, the poor preparation and low readiness of developing countries for AI automation should be a priority for the global policy agenda. Without proper policy responses at both country and international levels, it can be predicted that there would be a global AI divide between developed and developing countries. There is a "race to the top" among AI-rich countries, but a "race to the bottom" among AI-poor countries. These two concurrent and parallel global races will eventually converge and quickly degenerate into enormous economic and social inequalities across countries. Similar gaps, such as the digital divide, have been observed from the differences in rates of progress, diffusion, and adoption of new technologies (Ake, 2001; Wong & Welch, 2004; Welch, Hinnant & Moon, 2005). They essentially reflect the contextual and institutional factors of the countries rather than the technical content and nature of the technology itself (Haque, 1996; Fountain, 2001; North, 1990; Painter & Pierre, 2005; Pollitt & Bouckaert, 2011; Wong, 2013).

The need for international cooperation is recognized by the majority of countries. Among EU member states, there is coherence around the perceived regional influence and work conducted at the supra-national level. Surprisingly, the relationship with developing countries is rarely mentioned. One exception is Germany, whose national strategy, the "Federal Government's Artificial Intelligence Strategy", has an action point to build up capacities and knowledge about AI in developing countries to promote economic cooperation and utilize economic and social opportunities.


Highlights of major principles and objectives in the national strategies

Country | Major Principles and Objectives


South Korea
• Foster an intelligent information society on the basis of public-private partnership,
with businesses and citizens playing leading roles and the government and research
community providing support.
• Devise and implement a balanced policy regime that encompasses technologies,
industries, and society and shapes the development of a more humane society.
• Provide strategic support for the prompt securement of the rights and access to
Intelligent IT and other related resources to ensure and foster industrial competitiveness
in advance.
• Reform policies and expand the social security net on the basis of social consensuses.

United States
Strategy 1: Make long-term investments in AI research


Strategy 2: Develop effective methods for human-AI collaboration
Strategy 3: Understand and address the ethical, legal, and societal implications of AI
Strategy 4: Ensure the safety and security of AI systems
Strategy 5: Develop shared public datasets and environments for AI training and testing
Strategy 6: Measure and evaluate AI technologies through benchmarks and standards
Strategy 7: Better understand the national AI R&D workforce needs
Strategy 8: Expand public-private partnerships in AI to accelerate advances in AI

Canada
The strategy has five major goals:


• Build a critical mass of talent within existing geographic areas of research excellence
• Increase the number of outstanding faculty in deep AI nationwide
• Dramatically increase the number of Canadian graduate and undergraduate students
being trained in deep AI
• Create national programs that build a pan-Canadian AI community
• Position Canada as scientific leaders in AI research, and build on this science to ensure
continuing prosperity and progress for all Canadians

Finland
Eleven key actions:


1. Enhance business competitiveness through the use of AI
2. Effectively utilize data in all sectors
3. Ensure that AI can be adopted more quickly and easily
4. Ensure top-level expertise and attract top experts
5. Make bold decisions and investments
6. Build the world’s best public services
7. Establish new models for collaboration
8. Make Finland a frontrunner in the age of AI
9. Prepare for AI to change the nature of work
10. Steer AI development into a trust-based, human-centric direction
11. Prepare for security challenges

Table 4: Highlights of major principles and objectives in the national strategies


Japan
Basic Philosophies:


• Human-centered society
• Share guidelines as non-binding soft law with stakeholders internationally
• Ensure balance of benefits and risks
• Avoid hindering technologies or imposing excessive burdens on developers

9 Principles:
• Principle of collaboration
• Principle of transparency
• Principle of controllability
• Principle of safety
• Principle of security
• Principle of privacy
• Principle of user assistance
• Principle of accountability
• Principle of ethics (respect human dignity and individual autonomy)

China
Beijing AI Principles:


• The R&D of AI should observe the following principles:
do good; for humanity; be responsible; control risks; be ethical; be diverse and inclusive;
open and share
• The use of AI should observe the following principles:
use wisely and properly; informed-consent; education and training
• The governance of AI should observe the following principles:
optimizing employment; harmony and cooperation; adaptation and moderation;
subdivision and implementation; long-term planning

United Kingdom

Five Foundations
• Ideas - the world's most innovative economy
• People - good jobs and greater earning power for all
• Infrastructure - a major upgrade to the UK's infrastructure
• Business environment - the best place to start and grow a business
• Places - prosperous communities across the UK

Four Grand Challenges


• AI and Data Economy - We will put the UK at the forefront of the AI and data revolution
• Future of Mobility - We will become a world leader in the way people, goods and services
move
• Clean Growth - We will maximize the advantages for UK industry from the global shift to
clean growth
• Ageing Society - We will harness the power of innovation to help meet the needs of an
ageing society

France Primary themes:


1. Developing an aggressive data policy [to improve access to big data];
2. Targeting four strategic sectors [healthcare, environment, transport, and defense];
3. Boosting the potential of French research [and investing in talent];
4. Planning for the impact of AI on labor;
5. Making AI more environmentally friendly;
6. Opening up the black boxes of AI; and
7. Ensuring that AI supports inclusivity and diversity.


Sweden
The government’s goals are to develop standards and principles – while acknowledging
existing national and international regulations and norms – for ethical, sustainable, and
safe AI; to continue to improve digital infrastructure to leverage opportunities in AI; to
increase access to data; and to play an active role in the EU’s digitization efforts.

Germany
The strategy pursues the following three objectives:


1. Making Germany and Europe global leaders on the development and use of AI
technologies and securing Germany’s competitiveness in the future;
2. Safeguarding the responsible development and use of AI which serves the good of
society; and
3. Integrating AI in society in ethical, legal, cultural, and institutional terms in the context
of a broad societal dialogue and active political measures.

Russia
Basic Principles of the Development and Use of AI Technologies:


a) The protection of human rights and liberties
b) Security
c) Transparency
d) Technological sovereignty
e) Innovation cycle integrity
f) Reasonable thrift
g) Support for competition

Singapore
This strategy serves three purposes:


1. Identify areas to focus attention and resources at a national level.
2. Set out how governments, companies, and researchers can work together to realize the
positive impact of AI.
3. Address areas where attention is needed to manage change and/or manage new
forms of risks that arise when AI becomes more pervasive.

Vision:
By 2030, Singapore will be a leader in developing and deploying scalable, impactful AI
solutions, in key sectors of high value and relevance to our citizens and businesses (Smart
Nation).

Approach:
1. Emphasize deployment
2. Focus on key sectors
3. Strengthen the AI Deployment Loop
4. Adopt a human-centric approach


The major content of exemplary national AI documents is summarized in Table 4. The strategies analyzed here vary in scope and length, ranging from visions of development in the sector to full-fledged industrial strategies or comprehensive, all-sector approaches. Notwithstanding these minor differences, there is in general a strong market orientation, as the private sector traditionally takes the lead in AI research and development. For example, one of the major AI strategies of the US is to "expand public-private partnerships in AI to accelerate advances in AI". There is also an overwhelming and implicit assumption underlying all of these documents: the ability of AI to generate net positive social benefits. They also focus mostly on economic growth, national competitiveness, and research and investment. For the US, the number one strategy is to make long-term investments in AI research. In the same vein, Canada's top goal is to "build a critical mass of talent within existing geographic areas of research excellence." In the UK, a key national strategy for AI is transforming itself into "the world's most innovative economy." Unfortunately, equity and social protection are clearly not a significant topic in these national AI strategies, which seems quite alarming.

Global politics and international competition are major factors driving the increase in national AI strategies. The "global AI race" is often linked to the "great powers" discourse, which includes countries such as the US, Russia, and China, who are constantly competing for global dominance and supremacy (Lee, 2018). Apart from the prevailing global powers, other major countries are eager to join the AI race. There is often a co-existence of a dual image in the documents: technical and political. On one hand, AI is presented in technical language as relying on neural network modelling to mathematically analyze huge amounts of data for scientific and industrial revolutions. Politically, AI development is considered crucial for the new race to the top among powerful nations. For instance, Canada would like to position itself as "a scientific leader in artificial intelligence research, and build on this science to ensure continuing prosperity and progress for all Canadians." For Germany, its AI strategy pursues the objective of "making Germany and Europe global leaders on the development and use of AI technologies and securing Germany's competitiveness in the future." For France, one of the primary themes of its AI strategy is "developing an aggressive data policy to improve access to big data." For the UK, its government aims to put the country "at the forefront of the artificial intelligence and data revolution."

National strategies are the first crucial step towards setting up a policy direction for AI. Their construction, articulation, production, and presentation in the public domain is a powerful political statement and a demonstration of national pride and supremacy. All major countries have the ambition of becoming the world leaders of this technology. Furthermore, some countries, such as China, have even taken a further step by highlighting their intention to drive the global governance of AI. From a historical perspective, the centrality of the nation state in AI debates is rather new (Radu, 2019). While international relations scholars have long reflected on the networked aspect of governance, where the state can be an orchestrator or partner, AI discourse at the national level brings forward a new dimension of state involvement in emerging technology regulation, in line with recent efforts to command control over strategic areas.

The Missing Piece: Equity and Social Protection

Whilst national AI strategies focus primarily on economic growth, national competitiveness, and research and development, equity and social
protection is an important missing piece. It has never been a major topic or focus in the national AI strategy documents surveyed, and in some cases, it was simply ignored or forgotten. For example, no country has raised the idea of UBI, and social policy and re-distribution was not a key topic in any of the national AI strategy documents reviewed. Overall, social policy and readjustment in welfare programs do not seem to be the main concern. The job disruption problem is generally understood and framed as a re-training problem. It is also assumed that if significant wealth can be generated from AI development, there would be sufficient resources to handle other problems generated subsequently and naturally.

The mode and role of national AI strategies are among the main elements reinforcing this negligence and inattentiveness towards equity and social protection under AI automation and job disruption. In most countries, hybrid governance—an alliance between government and market—is the primary driver of AI strategy for economic growth and national competitiveness. As reflected by the priorities of national AI strategies, the most urgent and primary concern of most countries is joining the private sector in the AI race to avoid being overtaken by other countries. With the market as the major partner, it is unlikely that a national AI strategy would result in a fair and equitable society.

In effect, AI governance is highly dominated by corporate interests. Overall, AI R&D continues to be driven by multinationals with headquarters concentrated in a few countries, while policy directions appear to be more reactive than anticipatory. Judging from the patenting behavior of the largest companies between 2012 and 2014, the overwhelming majority (93%) of AI patents were registered in Japan (33%), Republic of Korea (20%), USA (18%), Taiwan, China (8%), Germany (3%), and France (2%) (UNESCO, 2014). In international patent applications, China came second after the US (WIPO, 2018). A few companies from these two countries also have the largest AI research investments and development of standards, which have been further integrated into their products and services.

Consistent with the above trend, it is also common for the state to work alongside companies for financial investments in R&D. For example, the Canadian strategy focuses exclusively on research leadership and points to the use of government investment as a catalyst for investments from other levels of government and from the private sector. Following a similar approach, the UK and Germany mentioned export support for innovative AI and data businesses, as well as specific programs to attract such companies to establish headquarters on their territory, in addition to the use of trade missions abroad for their promotion. Moreover, the continuing interest and involvement of private actors is visible in the composition of oversight bodies or organizations driving the AI policy mandates, while nonprofit organizations and rights groups tend not to be equally well-represented (e.g., in the UK and Canada).

Under the heavy influence of market ideology and the chief orientation on economic growth and national competitiveness, there is a strong tendency of using re-training and education in lieu of social policy and re-distribution in national AI strategies. The resulting lack of serious concern and in-depth discussion on AI job disruption and the related remedial policies can be seen in the examples shown in Table 5.


Examples of explicit wording regarding education and social protection (if any) in national strategies

Country Examples
South Korea “Policy objective: Reform and tailor education, employment, and welfare services in
response to changes in order to ensure that all citizens are able to enjoy the benefits of
the intelligent information society.”

“Foster and educate active workers capable of leading the intelligent information society
based on their creativity and emotional intelligence. Ensure opportunities for a decent
and humane standard of living by supporting the re-training of personnel and improving
the employment and welfare environments.”

United States “Attaining the needed AI R&D advances outlined in this strategy will require a sufficient
AI R&D workforce. Nations with the strongest presence in AI R&D will establish
leading positions in the automation of the future. They will become the frontrunners
in competencies like algorithm creation and development; capability demonstration;
and commercialization. Developing technical expertise will provide the basis for these
advancements.” (The National Artificial Intelligence Research and Development Strategic
Plan, 2016)

“The American AI Initiative is accelerating our Nation’s leadership in AI. By driving
technological breakthroughs in AI, breaking barriers to AI innovation, preparing our
workforce for the jobs of the future, and protecting America’s advantage in AI we are
ensuring that AI technologies continue to improve the lives of our people, create jobs,
reflect our Nation’s values, and keep Americans safe at home and abroad.” (The American
AI Initiative, 2019)

“The United States must train current and future generations of American workers with
the skills to develop and apply AI technologies to prepare them for today’s economy
and jobs of the future.” (Executive Order on Maintaining American Leadership in Artificial
Intelligence, 2019)

Finland “The prerequisite for the broad-based utilization of artificial intelligence is that the
population for the most part has a command of the skills and knowledge needed for
its application. The requirements for the age of artificial intelligence should be visible in
study content throughout the entire education system. At the moment, it is believed that
the importance of skills related to social intelligence will grow.

The social security system must function flawlessly as people’s working careers become
diversified. Transitions between paid labor and entrepreneurship should be more flexible.
Earnings-level insurance against misfortune allows for risk-taking in the broad sense. On the
other hand, comprehensive earnings security insurance inevitably involves incentive
problems. The long-term objective should be to increase the inventiveness of both social
and unemployment security and improve the strengths related to these.”

Table 5: Examples of explicit wording regarding education and social protection (if any) in
national strategies


China “Vigorously strengthen training for the labor force working in AI. Accelerate the study of how
AI affects the employment structure, the change of employment methods and the skills
demands of new occupations and jobs. Establish lifelong learning and employment training
systems to meet the needs of intelligent economy and intelligent society, and support
institutions of higher learning, vocational schools and socialization training institutions
to carry out AI skills training, substantially increasing the professional skills of workers to
meet the demands of the high-quality jobs in China’s AI research. Encourage enterprises
and organizations to provide AI skills training for employees. Strengthen re-employment
training and guidance for workers to ensure the smooth transition of workers whose simple
and repetitive work is displaced by AI.” (Next Generation AI Development Plan, 2017)

“Optimizing Employment: An inclusive attitude should be taken towards the potential impact
of AI on human employment. A cautious attitude should be taken towards the promotion
of AI applications that may have huge impacts on human employment. Explorations
on Human-AI coordination and new forms of work that would give full play to human
advantages and characteristics should be encouraged.” (Beijing AI Principles, 2019)

United Kingdom “People
• Establish a technical education system that rivals the best in the world to stand
alongside our world-class higher education system
• Invest an additional £406 million in mathematics, digital and technical education,
helping to address the shortage of science, technology, engineering and mathematics
(STEM) skills
• Create a new National Retraining Scheme that supports people to re-skill, beginning with
a £64 million investment for digital and construction training”

France “Human Capital

To ensure a smooth transition towards an AI-oriented economy, a thorough
transformation of learning paths is needed, involving both reforms to the initial education
of upcoming generations and opportunities of vocational training and lifelong learning for
the current and upcoming workforce.

The AI for Humanity strategy highlights two important prerequisites for the successful
development of human capital in AI. A first prerequisite relates to the inclusion of effective
and compulsory digital and AI-related disciplines at all levels of the education and training
curricula. This requires both reforms to the course content and to the teaching methods
used. A second prerequisite is that the proposed education pathways should be free of
any social inequality. This could be achieved by setting up incentive policies to ensure
more diversity and to achieve more equality in participation rates, with a special attention
to counteract any form of gender stereotyping (e.g. by incentivizing participation of
women into digital and AI courses).”

(Cont.) Table 5: Examples of explicit wording regarding education and social protection (if any) in
national strategies


Sweden “Training: AI creates an increased need for lifelong learning. Opportunities for relevant
continuing education and further education for those already in the workforce are
therefore necessary.”

Germany “World of work and labor market: shaping structural change:

The potential for AI to serve society as a whole lies in its promise of productivity gains
going hand in hand with improvements for the workforce, delegating monotonous or
dangerous tasks to machines so that human beings can focus on using their creativity
to resolve problems. This requires a proactive approach to the design of the future of work”;
“The draft legislation wants to give employees whose jobs are at risk of becoming lost to
technologies, those otherwise affected by structural changes, and those wishing to train
for a profession for which labor is scarce, an opportunity to acquire the skills they need.”

Russia “The protection of human rights and liberties:

…ensuring the protection of the human rights and liberties guaranteed by Russian and
international laws, including the right to work, and affording individuals the opportunity to
obtain the knowledge and acquire the skills needed in order to successfully adapt to the
conditions of a digital economy.”

Singapore “Adopt a human-centric approach

We will build an AI-ready population and workforce. At the societal level, as part of the
overall promotion of digital literacy, we will raise awareness of AI, so that citizens are
prepared for technological change, and are engaged in thinking about AI’s benefits
and implications for the nation’s future. At the workforce level, we will prepare our
professionals to adapt to new ways of working, in which workers are augmented by AI
capabilities.”

(Cont.) Table 5: Examples of explicit wording regarding education and social protection (if any) in
national strategies

For South Korea, the main preparation for job disruption in the labor market is to: “Foster and educate active workers capable of leading the intelligent information society based on their creativity and emotional intelligence” to “ensure opportunities for a decent and humane standard of living by supporting the re-training of personnel and improving the employment and welfare environments.” Similar content can be found in the AI strategy of Germany: “to give employees whose jobs are at risk of becoming lost to technologies, those otherwise affected by structural changes, and those wishing to train for a profession for which labor is scarce, an opportunity to acquire the skills they need.” In the AI strategy of France, instead of guaranteeing a good standard of living under AI, what is promised is only indiscriminate and equal access to re-training and education opportunities: “A second prerequisite is that the proposed education pathways should be free of any social inequality. This could be achieved by setting up incentive policies to ensure more diversity and to achieve more equality in participation rates, with a special attention to counteract any form of gender stereotyping (e.g., by incentivizing participation of women into digital and AI courses).”


We have also examined the national AI strategies of Western welfare states, such as Finland and Sweden, as well as Asian countries with Confucian tradition and family values, such as China and Japan. These countries have been compared to others with regards to equity and social protection under AI job disruption. Surprisingly, little difference was found, meaning that a market-based and non-social-policy approach is the dominant and cross-cutting theme of most national AI strategies. For Finland, its national strategy states that: “The prerequisite for the broad-based utilization of artificial intelligence is that the population for the most part has a command of the skills and knowledge needed for its application. The requirements for the age of artificial intelligence should be visible in study content throughout the entire education system.” Perhaps what is even more surprising is that, instead of assuring the provision of social protection in an AI society, it has pointed out the drawbacks and limitations of such schemes: “On the other hand, comprehensive earnings security insurance inevitably involves incentive problems. The long-term objective should be to increase the inventiveness of both social and unemployment security and improve the strengths related to these.”

In China (an Asian, Confucian, and Socialist country), employment and re-training is still preferred to social protection: “Vigorously strengthen training for the labor force working in AI. Accelerate the study of how AI affects the employment structure, the change of employment methods and the skills demands of new occupations and jobs. Establish lifelong learning and employment training systems to meet the needs of intelligent economy and intelligent society, and support institutions of higher learning, vocational schools and socialization training institutions to carry out AI skills training, substantially increasing the professional skills of workers to meet the demands of the high-quality jobs in China’s AI research.”

Asian and Western countries typically have two different welfare state models, in which the state in the latter provides a much better and more generous protection of income and welfare to citizens (Aspalter, 2006). As a result, with Asian countries taking the same approach as the West in favoring education and re-training over improving social protection, the net impact of AI job disruption on the labor force could be much more extensive in Asia, with workers absorbing a higher share of the negative economic effects.

Despite the fact that the AI strategies of international organizations tend to be more prescriptive and guiding in nature, no significant differences between national AI strategies and those of international organizations regarding equity and social protection were found in our analysis. This means that taking a market-oriented approach and deploying re-training and education programs as a replacement for strengthening social protection is currently a well-accepted international norm. The “OECD Principles on AI” recommend: “Empower people with the skills for AI and support workers for a fair transition.” The EU White Paper on AI also recognizes “skills” as the most important hurdle in the transition to the AI society: “The European approach to AI will need to be underpinned by a strong focus on skills to fill competence shortages”; “Initiatives could also include the support of sectoral regulators to enhance their AI skills in order to effectively and efficiently implement relevant rules.” In addition, “The Plan will also increase awareness of AI at all levels of education in order to prepare citizens for informed decisions that will be increasingly affected by AI.”

Presumably, the use of market and re-training in lieu of an explicit social policy for addressing job disruption builds on two tenets. First, it assumes that the market is self-regulating, and therefore could fix itself and take care of most issues and concerns, including unemployment caused by AI job disruption. For example, the labor force could seek re-training opportunities by themselves, or those opportunities would be provided by firms and employers. Second, a two-stage development strategy may be used in AI strategy. In the first stage, technological advancement and economic growth should be the main concern and focus. As society grows richer and accumulates more wealth through AI development, the government would have more resources to address the equity and social protection issues at a later stage. Nevertheless,
by past experience of technological change and international development, these two scenarios are more likely to be flawed and over-optimistic. For instance, labor in the “Danger Zone” might have limited access to re-training opportunities, and companies are likely to shift their investment to AI-rich countries rather than paying to train the labor force of a particular country. Training and education should also be public goods, which are mostly provided by the government, not by the market.

Since equity is one of the major market failures, relying on market self-adjustment alone for resolving equity issues is unrealistic and defies economic theory (Stiglitz, 2000). Empirically, a country having plentiful resources is not necessarily correlated with those resources being re-allocated through governmental actions (i.e., taxation and public expenditure) (Acemoglu & Robinson, 2012). Many studies have provided abundant evidence that economic inequalities persist in many well-developed and advanced countries (Aspalter, 2006). State-driven debates concerning the rise of AI should be complemented by a call for reform and modernization of the governmental apparatus and services to respond to the new needs of the digital society (Cheung, 2005; Dunleavy et al., 2008). This leads to the conclusion that we should not assume the economic power of a nation would automatically translate into a fair and equitable society in the AI era.

Conclusion: The Future Direction - Policy Gap and Recommendations

As we approach the era of AI job disruption, there is a policy gap between the demand for policy solutions and the supply of the current wealth of knowledge on the future of work. While there is a large amount of research and discussion on the impact of AI on economic growth and employment, there is relatively less research on what governments should do to turn the risk and threat of AI into job opportunities and social good for all. On the principle of “rise with AI, not race with it” (World Bank, 2018), governments must play an active or even aggressive role not only on economic growth and national competitiveness, but also on social protection and a fair re-allocation of resources. However, this paper finds that many countries, especially developing ones, are not well-prepared for AI, and most countries seem to be overlooking fairness and equity issues under job disruption. The ideal state of AI will not be realized without a certain amount of effort, and the absence of proper policies and enabling factors could easily lead to an “AI Divide” between AI-rich countries and AI-poor countries. Policymakers must work hard to ensure those enabling factors, which include institutions and societal conditions, do exist for making sure their governments and countries are well prepared for the arrival of AI and its major impact on society, turning all possible threats into opportunities in order to bring progress and prosperity.

As revealed by analyzing various national AI strategies, focusing only on economic growth and national competitiveness whilst ignoring equity and social protection is a flawed and dangerous proposition. The proposition has an implicit assumption that as long as more wealth can be created by AI, equity issues can be resolved easily and over a certain period of time. This implicit assumption has overlooked a few very major and important points. First, equity is a market failure, and inequalities exist even in rich societies, so the role of the government in ensuring equity and fairness under AI job disruption is necessary. Second, as education and re-training have positive externalities and can even be taken as a “public good”, a major and targeted investment headed by the government on education and re-training is necessary. In addition, some segments of the population may be vulnerable and cannot be easily retrained for AI (e.g., the older population). To them, new social protection programs such as UBI may be the best and only solution.

It is noteworthy that UBI was first raised in the book “Utopia” (1516) by Sir Thomas More, who also coined the word. In this connection, in the era of AI and job disruption, policies of equity and social protection would determine the difference between a utopian and dystopian future. They can further draw the line between job destruction and creative destruction. “Creative destruction” is the concept proposed by the famous economist Joseph Schumpeter (1942), which refers
to the process of industrial mutation that incessantly revolutionizes the economic structure from within to create a new and better one with more opportunities and resources. Paradoxically, without a reinforced state’s role on equity and social protection, we can only see the destruction of jobs but never the creation of new opportunities brought by innovation and technology.

In the hybrid governance analyzed in this paper, it is hard to disentangle efforts to steer national policies in a particular direction from business interests. It seems rather unfortunate to see that job disruption and counteracting policies—especially towards those who may not be able to adjust—are all missing in the AI strategies of major countries. New technology in AI requires a changing role for the state—including new capacities and integrated functions, allowing for fairness and equity. Future studies expanding on the knowledge frontier of the societal impacts of AI automation should pave the way towards understanding the intended and unintended consequences of the disruptive changes and shift of power brought by AI. In this regard, three major policy recommendations are made in the following.

Recommendation 1: Theory and Practice

Governments should have more alignment and integration between theory and policy in formulating their AI strategies. Only by breaking the wall between academic research and policy discussion can there be a possibility for the formulation of effective policies well-supported by research and well-grounded in knowledge and theories. For example, governments should discuss how to prepare their labor force to rise with AI by equipping them with skills and capacities to work with enabling technologies rather than replacing technologies. Education and training in schools and the labor force should put more emphasis on social intelligence and creative intelligence, which are not going to be replaced by AI in the future of work.

Recommendation 2: International Organizations and Developing World

AI impacts both developed and developing countries. That said, many developing countries are ill-prepared due to limitations in resources, technology know-how, and policy capacity. National AI strategies have only been released by developed countries and global powers; no developing country has set up a comprehensive AI strategy. Context and institutions also matter in determining the ability of a nation to embrace and survive job disruption by AI. Unlike the welfare states of Western countries, the social protection system of many developing countries is feeble and depends much more on self-reliance, the vitality of the economic system, and family support. This means that the ability of individuals to sustain economic instability and downturn caused by AI job disruption would be weak and non-sustainable. Understanding the limited capacities and resource concerns of developing countries, it is recommended that global and international organizations such as the World Bank, UN, and World Economic Forum take the lead in offering advice and support for developing countries to craft their own AI strategies.

Recommendation 3: AI for All

A good AI policy should ensure that all members of society benefit from this powerful technology. To build on the major theme of “AI for Social Good”, there should also be “AI for All” – benefiting and empowering all members of society. It is inevitable that some people, especially the older population, will likely find it difficult to re-train for the AI era. As society gets richer and wealthier with AI, how this vulnerable population should be protected and funded will require some tough decisions, which can be delayed but never avoided. In this connection, equity, social security, and fair re-distribution (e.g., introducing UBI to protect the vulnerable population) should be critical and essential elements in all future AI policy responses.


References
Acemoglu, D., & Autor, D. (2011). Skills, Tasks and Technologies: Implications for
Employment and Earnings. Handbook of Labor Economics.

Acemoglu, D., & Robinson, J. A. (2012). Why Nations Fail: The Origins of Power, Prosperity,
and Poverty. NY: Crown Business.

Grönlund, Å. (2011). Connecting E-government to Real Government – The Failure of UN
E-participation Index. International Conference on Electronic Government, 26-37.

Anderson, J. (2015). Public Policy-Making. Stamford: Cengage.

Asian Development Bank. (2018). Asian Development Outlook 2018: How Technology Affects
Jobs. Asian Development Bank.

Aspalter, C. (2006). The East Asian Welfare Model. International Journal of Social Welfare,
290-301.

Bower, J. L., & Christensen, C. M. (1995). Disruptive Technologies: Catching the Wave.
Harvard Business Review, 73(1), 43-53.

Cairney, P. (2016). The Politics of Evidence-Based Policy Making. NY: Palgrave Macmillan.

Cath, C. (2018). Governing Artificial Intelligence: Ethical, Legal and Technical Opportunities
and Challenges. Philosophical Transactions of the Royal Society A.

Cheung, A. (2005). The Politics of Administrative Reforms in Asia: Paradigms and Legacies,
Paths and Diversities. Governance, 18(2), 257-282.

Deming, D. J. (2017). The Growing Importance of Social Skills in the Labor Market.
The Quarterly Journal of Economics, 132(4), 1593-1640.

Desouza, K. C. (2018). Delivering Artificial Intelligence in Government: Challenges and
Opportunities. Washington, D.C.: IBM Center for The Business of Government.

Dunleavy, P., Margetts, H., Bastow, S., & Tinkler, J. (2008). Digital Era Governance: IT
Corporations, the State and E-government. NY: Oxford University Press.

Evans, P. (1995). Embedded Autonomy: States and Industrial Transformation. NJ: Princeton
University Press.

Ferro, E., Loukis, E. N., Charalabidis, Y., & Osella, M. (2013). Policy Making 2.0: From Theory
to Practice. Government Information Quarterly, 359-368.


Florida, R. (2002). The Rise of the Creative Class. NY: Basic Books.

Fountain, J. (2001). Building the Virtual State: Information Technology and Institutional
Change. Washington D.C.: Brookings Institution Press.

Frey, C. B., & Osborne, M. A. (2017). The Future of Employment: How Susceptible are Jobs to
Computerization? Technological Forecasting and Social Change, 114, 254-280.

Haggard, S. (1990). Pathways from the Periphery: The Politics of Growth in the Newly
Industrializing Countries. NY: Cornell University Press.

Haque, M. S. (1996). The Contextless Nature of Public Administration in Third World
Countries. International Review of Administrative Sciences, 62(3), 315-329.

Howlett, M., & Ramesh, M. (1998). Policy Subsystem Configurations and Policy Change:
Operationalizing the Postpositivist Analysis of the Politics of the Policy Process. Policy
Studies Journal, 26(3), 466-481.

Johnson, C. (1982). MITI and the Japanese Miracle: The Growth of Industrial Policy, 1925-
1975. CA: Stanford University Press.

Kang, J., & Francisco, J. P. (2019). Automation and the Future of Work in Developing
Countries. Transformation of Work in Asia-Pacific in the 21st Century.

Kaplan, J. (2016). Artificial Intelligence: What Everyone Needs to Know. NY: Oxford University
Press.

Kingdon, J. (1984). Agendas, Alternatives, and Public Policies. NY: Harper Collins.

Kitchin, R. (2014). The Real-Time City? Big Data and Smart Urbanism. GeoJournal, 1-14.

Krueger, A. O. (1974). The Political Economy of the Rent-Seeking Society. The American
Economic Review, 64(3), 291-303.

Lee, J., & Moon, M. J. (2019). Coming Age of Digital Automation: Backgrounds and
Prospects. Transformation of Work in Asia-Pacific in the 21st Century.

Lee, K.-F. (2018). AI Super-Powers: China, Silicon Valley and the New World Order.
NY: Mariner.

Lindblom, C. (2004). “The Science of Muddling Through”. In J. Shafritz, A. Hyde, & S. Parkes
(Eds.), Classics of Public Administration (5th ed.). Belmont, CA: Wadsworth.

Norris, P. (2012). Digital Divide: Civic Engagement, Information Poverty, and the Internet
Worldwide. NY: Cambridge University Press.


North, D. (1990). Institutions, Institutional Change and Economic Performance.
NY: Cambridge University Press.

OECD. (2018). Putting Faces to the Jobs at Risk of Automation. Policy Brief on the Future
of Work.

OECD. (2019a). Artificial Intelligence in Society. OECD.

OECD. (2019b). Employment Outlook 2019: The Future of Work. OECD.

Olson, M. (1982). The Rise and Decline of Nations. New Haven: Yale University Press.

Painter, M., & Pierre, J. (2005). Unpacking Policy Capacity: Issues and Themes. In
Challenges to State Policy Capacity: Global Trends and Comparative Perspectives (pp. 1-18).
UK: Palgrave Macmillan.

Partnership for Public Service. (2018). Using Artificial Intelligence to Transform Government.
IBM Center for the Business of Government.

Partnership for Public Service. (2019). More Than Meets AI. IBM Center for the Business of
Government.

Polidano, C. (2001). Don’t Discard State Autonomy: Revisiting the East Asian Experience of
Development. Political Studies, 49, 513-527.

Pollitt, C., & Bouckaert, G. (2011). Public Management Reform: A Comparative Analysis –
New Public Management, Governance, and the Neo-Weberian State. UK: Oxford University
Press.

Przeworski, A., & Limongi, F. (1993). Political Regimes and Economic Growth. Journal of
Economic Perspectives, 7(3), 51-69.

Radu, R. (2019). Negotiating Internet Governance. Oxford: Oxford University Press.

Rodrik, D. (1992). Political Economy and Development Policy. European Economic Review,
36(2-3), 329-336.

Mayer-Schönberger, V., & Cukier, K. (2013). Big Data: A Revolution That Will Transform How
We Live, Work, and Think. Boston: Houghton Mifflin Harcourt.

Schumpeter, J. (1942). Capitalism, Socialism and Democracy. NY: Harper & Brothers.

Stiglitz, J. (2000). Economics of the Public Sector. NY: W. W. Norton.


Straub, E. T. (2009). Understanding Technology Adoption: Theory and Future Directions for
Informal Learning. Review of Educational Research, 625-649.

Tam, K. Y. (2019). Digital Transformation in the 21st Century: Implications and Policy.
Transformation of Work in Asia-Pacific in the 21st Century.

Welch, E. W., Hinnant, C. C., & Moon, M. J. (2005). Linking Citizen Satisfaction with
E-Government and Trust in Government. Journal of Public Administration Research and
Theory, 15(3), 371-391.

Welch, E., & Wong, W. (1998). Public Administration in a Global Context: Bridging the Gaps
of Theory and Practice between Western and Non-Western Nations. Public Administration
Review, 58(1), 40-49.

Wong, W. (2013). The Search for a Model of Public Administration Reform in Hong Kong:
Weberian Bureaucracy, New Public Management or Something Else. Public Administration
and Development, 33(4), 297-310.

Wong, W., & Welch, E. (2004). Does E-Government Promote Accountability? A Comparative
Analysis of Website Openness and Government Accountability in Fourteen Countries.
Governance, 275-297.

Wong, W., Welch, E., & Moon, M. (2006). What Drives Global E-Governance: An Exploratory
Study at a Macro Level. Proceedings of the 38th Annual Hawaii International Conference on
System Sciences, 275-294.

The World Bank. (2018). The Future of Work: Race with – not against – the Machine.
Research and Policy Briefs, World Bank Malaysia Hub No. 16.

World Economic Forum. (2018). The Future of Jobs Report 2018. World Economic Forum.

World Intellectual Property Organisation. (2018). China Drives International Patent
Applications to Record Heights; Demand Rising for Trademark and Industrial Design
Protection. WIPO.

Yahya, F. B. (2019). Preparing the Future Workforce – Reskilling, Retraining and Redeploying
and the Transformation of the Education System. Transformation of Work in Asia-Pacific in
the 21st Century.

Bios of Project Authors
BASU, Arindrajit, Centre for Internet & Society, India
Arindrajit Basu is a research manager at the Centre for Internet & Society, India, where he
focuses on the geopolitics and constitutionality of emerging technologies. He is a lawyer by
training and holds a BA, LLB (Hons) degree from the National University of Juridical Sciences,
Kolkata, and an LLM in public international law from the University of Cambridge, U.K.

BENTLEY, Caitlin, Australian National University & Sheffield University


Caitlin Bentley joined the 3A Institute in 2018, and assisted in the development of a new
branch of engineering through her role as Research Fellow teaching the 3Ai Master of
Applied Cybernetics until 2020. She is now a lecturer in AI-enabled Information Systems
at Sheffield University’s iSchool. Caitlin conducts research on cyber-physical systems, and
how to make them more socially inclusive. With a research career spanning from Canada to
the UK, Africa, Southeast Asia and Australia, Caitlin has contributed to a number of projects
focused on enhancing learning and accountability through ICT, open development, the
platform economy and artificial intelligence. Caitlin holds a PhD in Human Geography from
Royal Holloway University of London, UK, an MA in Educational Technology from Concordia
University, Canada, and a BA in Computer Science from McGill University, Canada.

FINDLAY, Mark, Singapore Management University


Mark Findlay is a Professor of Law at Singapore Management University and Deputy
Director of its Centre for AI and Data Governance. In addition, he holds honorary Chairs at
the Australian National University and the University of New South Wales. Professor Findlay
is the author of 29 monographs and collections and over 150 refereed articles and book
chapters. He has held Chairs in Australia, Hong Kong, Singapore, England and Ireland.
For over 20 years he was at the University of Sydney as Chair in Criminal Justice and
Director of the Institute of Criminology. His most recent publications include Law's
Regulatory Relevance and Principled International Criminal Justice: Lessons from Tort Law.

HICKOK, Elonnai, Centre for Internet & Society, India


Elonnai Hickok is Chief Operating Officer at the Centre for Internet & Society, India (CIS).
She graduated from the University of Toronto, where she studied international development
and political science. Elonnai leads the privacy, surveillance and cyber security work at the
Centre and has written extensively on issues pertaining to intermediary liability, digital
rights, identity, cyber security and DNA profiling.

HONGLADAROM, Soraj, Chulalongkorn University
Soraj Hongladarom is Professor of Philosophy and Director of the Center for Ethics of
Science and Technology at Chulalongkorn University in Bangkok, Thailand. He has published
books and articles on such diverse issues as bioethics, computer ethics, and the roles that
science and technology play in the culture of developing countries. His main concern is
how science and technology can be integrated into the life-world of people in so-called
Third World countries, and what ethical considerations can be drawn from that relation.
His most recent book is The Ethics of AI and Robotics: A Buddhist Viewpoint,
forthcoming this year from Rowman & Littlefield. He is also the author of The Online Self
and A Buddhist Theory of Privacy, both published by Springer. His articles have appeared
in The Information Society, AI & Society, Philosophy in the Contemporary World, and Social
Epistemology, among others.

LEE, Kyoung Jun, Kyung Hee University


Kyoung Jun Lee is a professor at Kyung Hee University. He earned a BS/MS/PhD in
Management Science from KAIST and completed an MS/PhD course in Public Administration
at Seoul National University. He won Innovative Applications of Artificial Intelligence Awards
in 1995, 1997, and 2020. He was a visiting scholar at CMU, MIT, and UC Berkeley. He was
the 2017 president of the Korean Intelligent Information Systems Society and is the current
president of the Korean Association for Business Communication. He is currently the
Director of the Big Data Research Center and the International Center for Electronic
Commerce. He received the 2017 Presidential Award for e-government of Korea.

MOON, M. Jae, Yonsei University


M. Jae Moon is Dean of the College of Social Sciences and Director of the Institute for
Future Government at Yonsei University. His research interests include digital government,
public management, and comparative public administration. He is an elected Fellow of the
National Academy of Public Administration (NAPA). Recently, he received the Highest
Research Award of Yonsei University in 2019 and the Stone Award of the American Society
for Public Administration in 2020. He was also named one of the world's 100 most
influential people in digital government in both 2018 and 2019 by Apolitical, a leading
London-based non-profit organization.

SINHA, Amber, Centre for Internet & Society, India
Amber Sinha is the Executive Director at the CIS. He works on issues surrounding privacy,
big data, and cyber security. Amber is interested in the impact of emerging technologies like
artificial intelligence and learning algorithms on existing legal frameworks, and how they
need to evolve in response. He studied humanities and law at the National Law School of
India University, Bangalore.

WONG, Wilson, Chinese University of Hong Kong


Wilson Wong is an Associate Professor in the Department of Government and Public
Administration and Director of the Data Science and Policy Studies (DSPS) Programme of
the Social Science Faculty at the Chinese University of Hong Kong. He received his
bachelor's degree from the Chinese University of Hong Kong, and a Master of Public
Administration and a PhD in Public Administration, both from Syracuse University. Professor
Wong's major areas of research include ICT and e-governance, big data, AI and public
policy, public management, and comparative public policy. He has served as a visiting
fellow at the Brookings Institution and Harvard University.

YARIME, Masaru, Hong Kong University of Science and Technology


Masaru Yarime is an Associate Professor in the Division of Public Policy at the Hong Kong
University of Science and Technology. He also holds appointments as Honorary Reader at
the Department of Science, Technology, Engineering and Public Policy at University College
London and Visiting Associate Professor at the Graduate School of Public Policy at the
University of Tokyo. He received a BEng in Chemical Engineering from the University of
Tokyo, an MS in Chemical Engineering from the California Institute of Technology, and a
PhD in Economics and Policy Studies of Innovation and Technological Change from
Maastricht University in the Netherlands.

Acknowledgement
We would like to thank the people who made this publication possible: first and foremost,
the authors who contributed to the AI for Social Good project and shared their insights and
expertise on the application of AI in support of inclusive and sustainable development, and
our chief editors and advisory board members, who provided guidance and reviewed the
papers. Thanks also go to our partners, UNESCAP and Google, for their support and
constructive advice.

We are grateful to Ms. Christy Yeung for her unconditional support as our copyeditor, to our
designer, Mr. Andrew Tang, to the proofreading team from English Editorial Solutions, and,
last but not least, to the project teams from UNESCAP, Google, APRU and Keio University,
who facilitated the project and brought the publication to fruition. We sincerely hope this
publication will benefit all stakeholders in the digital age.

Partnership
We particularly thank United Nations ESCAP and Google for their continuing support of
this project.

United Nations ESCAP


The Economic and Social Commission for Asia and the Pacific (ESCAP) serves as the
United Nations’ regional hub promoting co-operation among countries to achieve inclusive
and sustainable development. It is the largest regional intergovernmental platform with 53
Member States and 9 Associate Members. The Commission’s strategic focus is to deliver on
the 2030 Agenda for Sustainable Development, through reinforcing and deepening regional
co-operation and integration to advance connectivity, financial co-operation and market
integration. ESCAP, through its research and analysis, policy advisory services, capacity
building and technical assistance, aims to support sustainable and inclusive development in
member countries.

Google
Google’s mission is to organize the world’s information and make it universally accessible
and useful. We believe that AI is a powerful tool to explore and address difficult challenges
such as better predicting natural disasters, or improving accuracy of medical diagnoses. In
2018, we launched AI for Social Good to meaningfully contribute to these solutions, drawing
on the scale of our products and services, investment in AI research, and our commitment
to empowering the social sector with AI resources and funding.

ISBN 979-988-77283-0-6
Publisher: Association of Pacific Rim Universities Limited
Co-publisher: Keio University
