
A Call for Proactive Policies for Informatics and Artificial Intelligence Technologies

Jim Samuel, Rutgers University


(Preprint version)
Artificial Intelligence (AI) presents human society with a paradigm-shifting opportunity unlike any other technology dimension in the history of the human race. The reason is evident: past scientific and technological revolutions replaced human muscle power, and this increased the value of human intelligence. Thus, in spite of labor displacement, mass human capital remained relevant and vital. In the information age, computers helped create, store, process and share vast quantities of digitized information, and human intelligence came to be valued even more highly for its ability to manage computing meaningfully and profitably. However, the dawn of the AI technology dimension challenges human identity as never before, because AI has begun competing with humans for the crown of superior "intelligence." AI is on a winning trajectory in many of these contests, as seen in gaming through the sustained victories of AI agents over the best human Chess and Go grandmasters.
The powerful evolution, pervasive growth and ubiquitous opportunities of these AI technologies present humanity with many risks and challenges, some of which we understand and others we have only begun to identify. In a simplified view, Artificial Intelligence is a set of technologies that mimic the functions and expressions of human intelligence, specifically cognition and logic, while informatics is advanced, technology-driven big data analytics (Samuel et al., 2021). The general sense is that we have yet to perceive the best and the worst impacts of the AI technology dimension. The 2021 Stanford AI100 report (Littman et al., 2021) states that few nations "have moved definitively to regulate AI specifically". The critical question, then, is whether governments and organizations can continue to rely on the post-hoc, catch-up strategy used for pre-AI technologies to effectively govern AI technologies and their applications.
Unparalleled Power Calls for Great Responsibility
Artificial Intelligence may sound futuristic to some, but it is already ubiquitous: it is employed in cellphones and personal computers, and in services from companies such as Facebook, Amazon, Netflix, Apple and Google. Multiple times a day, humans use these technologies and, in turn, these technologies gather information and "learn" about our habits, preferences, and personal lives. Human-intelligence-driven data science and analytics are increasingly being replaced by the scaled automation of analytical processes and by informatics and AI technologies that drive powerful AI applications; recent research shows the power of big data informatics and machine learning in tracking and classifying individual and aggregate emotions (Samuel et al., 2020). Many corporations across a broad range of domains, such as global financial institutions, are increasingly leveraging AI and informatics in this way without creating sufficient customer awareness of which algorithms are applied to vast quantities of personal data.
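To illustrate how accessible this kind of emotion classification has become, the following is a minimal, hypothetical sketch assuming the scikit-learn library and a made-up toy dataset; it is not the pipeline used in the cited research.

# Minimal illustrative sketch: a text-emotion classifier of the kind used to
# label individual and aggregate sentiment. The tiny dataset and labels below
# are hypothetical; real studies train on very large corpora (e.g., tweets).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training texts with coarse emotion labels.
texts = [
    "I am terrified about what happens next",
    "This news is wonderful, I feel hopeful",
    "Everything feels uncertain and scary right now",
    "So grateful and optimistic about the future",
]
labels = ["fear", "positive", "fear", "positive"]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Classify new, unseen text; at scale this is applied to millions of records.
print(model.predict(["I am really worried about this", "Feeling great today"]))

The point of the sketch is that the technical barrier is low; what gives such systems their societal reach is the volume of harvested personal data they are applied to.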


This AI-fueled harvesting of information potentially threatens the rights of ordinary individuals and can have serious implications for individual sovereignty and "self-ownership." For example, the use of AI and informatics may generate deep insights into customers' personal behaviors without the customers' knowledge, and those insights may be used to manipulate customer behavior and decision making, providing unfair and unjustified advantage and control to the companies using these technologies.
Dobbs et al., of the McKinsey Global Institute, contrast AI-driven change with the industrial revolution and emphasize its magnitude as a technological dimension, describing it as "happening ten times faster and at 300 times the scale, or roughly 3,000 times the impact". Imagine the control of all of this Artificial Intelligence-driven power to manipulate humans concentrated in the hands of an "elite" and subject to the whims of a few billionaires who own global technology corporations, without appropriate public governance policies and laws in place to safeguard the interests of the masses. Policymakers who recognize the depth of this growing problem should focus on three critical areas to shore up the safety of technology users:

• Curbing misuse: Regulating the abusive use of Artificial Intelligence technologies by those who have extensive AI and informatics capabilities must be a top priority for policy makers. One example is the unethical practice of using deep insights, derived from harvested personal information, to clandestinely manipulate human behavior at scale.

• Mitigating inherent risk: There are many examples of AI gone wrong: AI facial recognition systems have misidentified persons accused of crimes, AI credit scoring has demonstrated gender bias, AI-driven housing and benefits applications have amplified discriminatory language, and many AI development projects have failed outright, such as IBM's Watson for Oncology project, which burned through around $62 million before being abandoned. There is a need for comprehensive policy development covering multiple levels of AI technology risk, including ethics, performance and equity.

• Educating the public: Developing AI education and transparency policies directed at organizations, requiring all AI implementations to be accompanied by educational materials that provide technology transparency, would give end-users the opportunity to make educated decisions. Multiple levels of AI education across disciplines must be prioritized.
It is true that society must foster entrepreneurship, risk taking, and other business-supportive practices to ensure a vibrant economy. While it can be useful to reward risk, it is counterproductive to encourage recklessness and disregard for people's rights. The potential for business applications of AI is vast, and ultimately, profitability will not be disproportionately affected by policies and regulations that secure consumers' rights. A balance between innovative enterprise and regulation is necessary for human-centric AI, and that balance is currently lacking globally.


The risks of AI and informatics will need to be addressed through an array of tactics, and one critical component is the front-end framing, development and implementation of appropriate public policies. Every human being must be empowered with the choice not to be subject to the tyranny of gargantuan and often monopolistic AI systems implemented on a "take it as it is or suffer" basis. For example, organizations using AI should be required by government to provide every end-user the opportunity to opt out of being subject to their algorithms, without coercive penalties. Given the enormous implications for individuals and for society at large, it is critical that governments and organizations adopt a significantly different and renewed policy strategy: it will not be sufficient to reuse the strategies employed to govern the diffusion of information-age technologies. With Artificial Intelligence technologies, given the scope, speed and scale at which damage can occur, it is compellingly necessary to implement forward-thinking policies now to ensure the future safety and sustainability of human rights and the human way of life.

Cite as: Samuel, J. (2021), “A Call for Proactive Policies for Informatics and Artificial Intelligence
Technologies”, Scholars Strategy Network. Url: https://scholars.org/contribution/call-proactive-
policies-informatics-and

References & Readings:


Michael L. Littman, et al., “Gathering Strength, Gathering Storms: The One Hundred Year Study on
Artificial Intelligence (AI100) 2021 Study Panel Report,” Stanford University, 2021;
Samuel, J., Kashyap, R., Samuel, Y. and Pelaez, A. (2022) “Adaptive Cognitive Fit: Artificial
Intelligence Augmented Management of Information Facets and Representations.”.
Samuel, J., Ali, G. G., Rahman, M., Esawi, E., & Samuel, Y. (2020). Covid-19 public sentiment
insights and machine learning for tweets classification. Information, 11(6), 314.

Dobbs, et al. (2015), “The four global forces breaking all the trends”, McKinsey Global Institute

Samuel, J. (2021), “Artificial Intelligence – Science Without Philosophy”, EDGECON-2021.


Samuel, J. (2022), “Selfish Artificial Intelligence? Motivations, Expectations and Risks”, AIKC-
2022.
McCarthy, J. (2008). The philosophy of AI and the AI of philosophy. In Philosophy of
Information (pp. 711-740). North-Holland.
