AI
Link:
https://www.youtube.com/watch?v=JL5OFXeXenA
Ever since Garry Kasparov lost his second chess match against IBM’s Deep Blue in 1997, the writing
has been on the wall for humanity. Or so some like to think. Advances in artificial intelligence will
lead – by some estimates, in only a few decades – to the development of superintelligent, sentient
machines. Movies from The Terminator to The Matrix have portrayed this prospect as rather
undesirable. But is this anything more than yet another sci-fi “Project Fear”? (…)
How do we get there from here, assuming we want to? Modern AI employs machine learning (or
deep learning): rather than programming rules into the machine directly we allow it to learn by itself.
In this way, AlphaZero, the chess-playing entity created by the British firm Deepmind (now part of
Google), played millions of training matches against itself and then trounced its top competitor. (…)
Machine learning works by training the machine on vast quantities of data. But datasets are not
simply neutral repositories of information; they often encode human biases in unforeseen ways.
Recently, Facebook’s news feed algorithm asked users who saw a news video featuring black men if
they wanted to “keep seeing videos about primates”. So-called “AI” is already being used in several
US states to predict whether candidates for parole will reoffend, with critics claiming that the data
the algorithms are trained on reflects historical bias in policing.
Computerised systems (as in aircraft autopilots) can be a boon to humans, so the flaws of existing
“AI” aren’t in themselves arguments against the principle of designing intelligent systems to help us
in fields such as medical diagnosis. The more challenging sociological problem is that adoption of
algorithm-driven judgments is a tempting means of passing the buck, so that no blame attaches to
the humans in charge – be they judges, doctors or tech entrepreneurs. Will robots take all the jobs?
(…)
The existential problem, meanwhile, is this: if computers do eventually acquire some kind of
god-level self-aware intelligence – something that is explicitly in Deepmind’s mission statement, for
one (“our long-term aim is to solve intelligence” and build an AGI) – will they still be as keen to be of
service? If we build something so powerful, we had better be confident it will not turn on us. (…)
That is an example of the general “problem of control”, subject of AI pioneer Stuart Russell’s
excellent Human Compatible: AI and the Problem of Control, which argues that it is impossible to
fully specify any goal we might give a superintelligent machine so as to prevent such disastrous
misunderstandings. In his Life 3.0: Being Human in the Age of Artificial Intelligence, meanwhile, the
physicist Max Tegmark emphasises the problem of “value alignment” – how to ensure the
machine’s values line up with ours. This too might be an insoluble problem, given that thousands of
years of moral philosophy have not been sufficient for humanity to agree on what “our values” really
are. (…)
As the computer scientist Drew McDermott suggested in a paper as long ago as 1976, perhaps after
all we have less to fear from artificial intelligence than from natural stupidity.
Useful language:
- yet another
- In this way,
- by + Ving
- to claim that
- So
- Be they
- to be keen to + V
- S + had better + BV
- This is illustrated by
- That is an example of
- to argue that
- so as to + V
- given that
- As … suggested,
What are Deep Blue, AlphaGo, AlphaZero, OpenAI, ChatGPT and DALL-E?
Comprehension:
- According to the journalist, to what extent is there a “problem of control” when it comes to AI?
- So-called “AI” is already being used in several US states to predict whether candidates for parole will reoffend.
Think about it: