The Hindu AI
P.J. Narayanan
is a researcher in computer vision and Professor and (ex-officio) Director of the International Institute of Information Technology (IIIT) Hyderabad. He was the President of the Association for Computing Machinery (ACM) India, and currently serves on the global Technology Policy Council of the ACM.
Artificial Intelligence (AI) has been dominating the headlines for its triumphs, and also for the fears expressed by many, including some of the best minds in AI. The Association
for Computing Machinery released a statement in October 2022 on ‘Principles for
Responsible Algorithmic Systems’. Algorithmic systems are a broader class that includes AI systems.
Several leading AI experts and thinkers have been part of different cautionary messages
about AI, issued by the Future of Life Institute, the Association for the Advancement of
Artificial Intelligence and the Center for AI Safety. There is deep concern about AI among
many who know it. What is behind this?
The performance and utility of AI systems improve as the task is narrowed, making
them valuable assistants to humans. Speech recognition, translation, and even
identifying common objects in photographs are just a few of the tasks that AI systems
tackle today, even exceeding human performance in some instances. Their
performance and utility degrade on more “general” or ill-defined tasks. They are weak at integrating inferences across situations using the common sense that humans have.
Artificial General Intelligence (AGI) refers to intelligence that is not limited or narrow.
Think of it as the human “common sense” that AI systems lack. Common sense will make a person act to save their life in a life-threatening situation, while a robot may remain
unmoved. There are no credible efforts towards building AGI yet. Many experts believe
AGI will never be achieved by a machine; others believe it could be in the far future.
A big moment for AI was the release of ChatGPT in November 2022. ChatGPT is a
generative AI tool that uses a Large Language Model (LLM) to generate text. LLMs are
large artificial neural networks that ingest vast amounts of digital text to build a
statistical “model”. Several LLMs have been built by Google, Meta, Amazon, and others.
ChatGPT’s stunning success in generating flawless paragraphs caught the world’s
attention. Writing could now be outsourced to it. Some experts even saw “sparks of AGI” in GPT-4, suggesting that AGI could emerge from a bigger LLM in the near future.
Other experts refute this vociferously, based on how LLMs work. At the basic level, LLMs
merely predict the most probable or relevant word to follow a given sequence of words,
based on the learned statistical model. They are just “stochastic parrots,” with no sense
of meaning. They famously “hallucinate” facts, confidently and wrongly, awarding Nobel prizes generously and conjuring credible citations to non-existent academic papers.
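For intuition, here is a minimal sketch in Python of that statistical idea, using a made-up three-sentence corpus and simple word-pair counts. Everything in it is illustrative: real LLMs replace this counting table with a neural network trained on sub-word tokens across billions of parameters, but the principle of sampling the next word from learned probabilities is the same.

```python
import random
from collections import Counter, defaultdict

# A made-up toy corpus; real LLMs ingest vast amounts of digital text.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word: a tiny statistical "model".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=8):
    """Extend the text by repeatedly sampling a probable next word."""
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:  # no statistics for this word; stop generating
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```

The output looks fluent but carries no understanding; the model can only echo the statistics of its corpus, which is precisely the sense in which critics call LLMs “stochastic parrots”.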
True AGI will be a big deal if and when it arrives. Machines already outperform humans in many physical tasks, and AGI may lead to AI “machines” bettering humans in many
intellectual or mental tasks. Bleak scenarios of super-intelligent machines enslaving
humans have been imagined. AGI systems could be a superior species created by
humans outside of evolution. AGI will indeed be a momentous development that the
world must prepare for seriously.
I believe current LLMs and their successors are not even close to AGI. But will AGI arrive
some day? I reserve my judgement. However, the hype and panic about LLMs or AI
leading directly to human extinction are baseless. The odds of the successors of the
current tools “taking over the world” are zero.
Does that mean we can live happily without worrying about the impact of AI? I see three
possible types of dangers arising from AI.
Malicious humans with powerful AI: AI tools are relatively easy to build. Even narrow AI
tools can cause serious harm when matched with malicious intent. LLMs can generate believable untruths as fake news and inflict deep mental anguish that can lead to self-harm.
Public opinion can be manipulated to affect democratic elections. AI tools work globally,
taking little cognisance of boundaries and barriers. Individual malice can instantly
impact the globe. Governments may approve or support such actions against
“enemies”. We have no effective defence against malicious human behaviour. Well-
meaning people have expressed concern about AI-powered “smart” weapons in the
military. Unfortunately, calls for a ban are not effective in such situations. I do not see
any easy defence against the malicious use of AI.
Highly capable and inscrutable AI: AI systems will continue to improve and will be
employed to assist humans. Yet they may end up harming some sections of society more than others, despite the best intentions of their creators. These systems are created using Machine Learning on data drawn from the world, and can perpetuate the shortcomings of that data. They may exhibit asymmetric behaviours that disadvantage certain groups.
Camera-based face recognition systems have been shown to be more accurate on fair-
skinned men than on dark-skinned women. Such unintended and unknown bias can be
catastrophic in AI systems that steer autonomous cars and diagnose medical
conditions. Privacy is a critical concern as algorithmic systems watch the world
constantly. Every person can be tracked at all times, violating the fundamental right to privacy.
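One way to surface such asymmetric behaviour before deployment is disaggregated evaluation, that is, measuring a system’s accuracy separately for each group it will affect. The Python sketch below uses made-up groups, labels, and predictions, purely for illustration; in practice, the predictions would come from the system under test and the groups from annotated evaluation data.

```python
from collections import defaultdict

# (group, true_label, predicted_label) triples: hypothetical evaluation data.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, prediction in records:
    total[group] += 1
    correct[group] += int(truth == prediction)

for group in sorted(total):
    print(f"{group}: accuracy {correct[group] / total[group]:.0%}")
# Prints 75% for group_a and 50% for group_b; a gap of this kind is
# exactly the asymmetric behaviour described above.
```

A gap like this, found before deployment, is far cheaper to fix than the same bias discovered after an autonomous car or a diagnostic system has harmed someone.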
A third worry is about who develops these technologies, and how. Most recent
advances took place in companies with huge computational, data, and human
resources. ChatGPT was developed by OpenAI, which began as a non-profit and transformed into a for-profit entity. Other players in the AI game are Google, Meta,
Microsoft, and Apple. Commercial entities with no effective public oversight are the
centres of action. Do they have the incentive to keep AI systems just?
Many a social media debate rages about AI leading to destruction. Amidst doomsday
scenarios, solutions such as banning or pausing research and development in AI — as
suggested by many — are neither practical nor effective. They may draw attention away
from the serious issues posed by insufficient scrutiny of AI. We need to talk more about
the unintentional harm AI may inflict on some or all of humanity. These are solvable,
but concerted efforts are needed.
Awareness and debate on these issues are largely absent in India. The adoption of AI
systems is low in the country, but those in use are mostly made in the West. We need
systematic evaluation of their efficacy and shortcomings in Indian situations. We need
to establish mechanisms of checks and balances before large-scale deployment of AI
systems. AI holds tremendous potential in different sectors such as public health,
agriculture, transportation, and governance. As we exploit India’s advantages in these sectors,
we need more discussions to make AI systems responsible, fair, and just to our society.
The European Union is on the verge of enacting an AI Act that proposes regulations
based on a stratification of potential risks. India needs a framework for itself, keeping in
mind that regulations have been heavy-handed as well as lax in the past.