
Artificial intelligence

Link:

The urgent risks of runaway AI — and what to do about them:

https://www.youtube.com/watch?v=JL5OFXeXenA

The big idea: Should we worry about artificial intelligence?


Mon 29 Nov 2021, adapted from The Guardian - Steven Poole

Ever since Garry Kasparov lost his second chess match against IBM’s Deep Blue in 1997, the writing
has been on the wall for humanity. Or so some like to think. Advances in artificial intelligence will
lead – by some estimates, in only a few decades – to the development of superintelligent, sentient
machines. Movies from The Terminator to The Matrix have portrayed this prospect as rather
undesirable. But is this anything more than yet another sci-fi “Project Fear”? (…)

How do we get there from here, assuming we want to? Modern AI employs machine learning (or
deep learning): rather than programming rules into the machine directly we allow it to learn by itself.
In this way, AlphaZero, the chess-playing entity created by the British firm Deepmind (now part of
Google), played millions of training matches against itself and then trounced its top competitor. (…)

Machine learning works by training the machine on vast quantities of data. But datasets are not
simply neutral repositories of information; they often encode human biases in unforeseen ways.
Recently, Facebook’s news feed algorithm asked users who saw a news video featuring black men if
they wanted to “keep seeing videos about primates”. So-called “AI” is already being used in several
US states to predict whether candidates for parole will reoffend, with critics claiming that the data
the algorithms are trained on reflects historical bias in policing.

Computerised systems (as in aircraft autopilots) can be a boon to humans, so the flaws of existing
"AI" aren't in themselves arguments against the principle of designing intelligent systems to help us
in fields such as medical diagnosis. The more challenging sociological problem is that adoption of
algorithm-driven judgments is a tempting means of passing the buck, so that no blame attaches to
the humans in charge – be they judges, doctors or tech entrepreneurs. Will robots take all the jobs?
(…)

The existential problem, meanwhile, is this: if computers do eventually acquire some kind of
god-level self-aware intelligence – something that is explicitly in Deepmind’s mission statement, for
one (“our long-term aim is to solve intelligence” and build an AGI) – will they still be as keen to be of
service? If we build something so powerful, we had better be confident it will not turn on us. (…)

AI wouldn't have to be actively malicious to cause catastrophe. This is illustrated by the philosopher
Nick Bostrom's famous "paperclip problem". Suppose you tell the AI to make paperclips. What could
be more boring? Unfortunately, you forgot to tell it when to stop making paperclips. So it turns all
the matter on Earth into paperclips, having first disabled its off switch because allowing itself to be
turned off would stop it pursuing its noble goal of making paperclips.

That is an example of the general “problem of control”, subject of AI pioneer Stuart Russell’s
excellent Human Compatible: AI and the Problem of Control, which argues that it is impossible to
fully specify any goal we might give a superintelligent machine so as to prevent such disastrous
misunderstandings. In his Life 3.0: Being Human in the Age of Artificial Intelligence, meanwhile, the
physicist Max Tegmark emphasises the problem of "value alignment" – how to ensure the
machine’s values line up with ours. This too might be an insoluble problem, given that thousands of
years of moral philosophy have not been sufficient for humanity to agree on what “our values” really
are. (…)

As the computer scientist Drew McDermott suggested in a paper as long ago as 1976, perhaps after
all we have less to fear from artificial intelligence than from natural stupidity.

Vocabulary: Translate the selected words (in bold only).

Interesting structures: Read and understand the passages in italics:

yet another

rather than + Ving

In this way,

by + Ving

to claim that

So

The challenging problem is that

Be they

auxiliary (do/does/did) + base verb (emphatic form; see text: "if computers do eventually acquire …")

to be keen to + V

S + had better + base verb

This is illustrated by

That is an example of

to argue that

so as to + V

given that

As …suggested,

Cultural knowledge + context:

What are Deep Blue, AlphaGo, AlphaZero, OpenAI, ChatGPT and DALL-E?

What is deep learning?


What are deepfakes?

Comprehension:

1-Underline the relevant passages and answer in English (written answers):

-According to the article, what is there to fear from artificial intelligence?

-Explain the issue with datasets.

-According to the journalist, to what extent is there a "problem of control" when it comes to AI?

2-Explain in English (written answers):

-the writing has been on the wall for humanity

-(these films) have portrayed this prospect as rather undesirable

-(AlphaZero) trounced its top competitor

-So-called “AI” is already being used in several US states to predict whether candidates for
parole will reoffend

-Computerised systems … can be a boon to humans

-adoption of algorithm-driven judgments is a tempting means of passing the buck, so that no
blame attaches to the humans in charge

-to ensure the machine’s values line up with ours

Think about it:

-Do the benefits of artificial intelligence outweigh the risks?

-Should we fear artificial intelligence?
