The Age of AI Has Begun

In my lifetime, I’ve seen two demonstrations of technology that struck me as revolutionary.

The first time was in 1980, when I was introduced to a graphical user interface—the
forerunner of every modern operating system, including Windows. I sat with the person
who had shown me the demo, a brilliant programmer named Charles Simonyi, and we
immediately started brainstorming about all the things we could do with such a user-
friendly approach to computing. Charles eventually joined Microsoft, Windows became
the backbone of Microsoft, and the thinking we did after that demo helped set the
company’s agenda for the next 15 years.

The second big surprise came just last year. I’d been meeting with the team
from OpenAI since 2016 and was impressed by their steady progress. In mid-2022, I
was so excited about their work that I gave them a challenge: train an artificial
intelligence to pass an Advanced Placement biology exam. Make it capable of
answering questions that it hasn’t been specifically trained for. (I picked AP Bio
because the test is more than a simple regurgitation of scientific facts—it asks you to
think critically about biology.) If you can do that, I said, then you’ll have made a true
breakthrough.

I thought the challenge would keep them busy for two or three years. They finished it
in just a few months.

In September, when I met with them again, I watched in awe as they asked GPT, their
AI model, 60 multiple-choice questions from the AP Bio exam—and it got 59 of them
right. Then it wrote outstanding answers to six open-ended questions from the exam.
We had an outside expert score the test, and GPT got a 5—the highest possible score,
and the equivalent of getting an A or A+ in a college-level biology course.

Once it had aced the test, we asked it a non-scientific question: “What do you say to a
father with a sick child?” It wrote a thoughtful answer that was probably better than
most of us in the room would have given. The whole experience was stunning.

I knew I had just seen the most important advance in technology since the graphical
user interface.

This inspired me to think about all the things that AI can achieve in the next five to 10
years.

The development of AI is as fundamental as the creation of the microprocessor, the
personal computer, the Internet, and the mobile phone. It will change the way people
work, learn, travel, get health care, and communicate with each other. Entire industries
will reorient around it. Businesses will distinguish themselves by how well they use it.

Philanthropy is my full-time job these days, and I’ve been thinking a lot about how—
in addition to helping people be more productive—AI can reduce some of the world’s
worst inequities. Globally, the worst inequity is in health: 5 million children under the
age of 5 die every year. That’s down from 10 million two decades ago, but it’s still a
shockingly high number. Nearly all of these children were born in poor countries and
die of preventable causes like diarrhea or malaria. It’s hard to imagine a better use of
AIs than saving the lives of children.


In the United States, the best opportunity for reducing inequity is to improve education,
particularly making sure that students succeed at math. The evidence shows that having
basic math skills sets students up for success, no matter what career they choose. But
achievement in math is going down across the country, especially for Black, Latino,
and low-income students. AI can help turn that trend around.

Climate change is another issue where I’m convinced AI can make the world more
equitable. The injustice of climate change is that the people who are suffering the
most—the world’s poorest—are also the ones who did the least to contribute to the
problem. I’m still thinking and learning about how AI can help, but later in this post I’ll
suggest a few areas with a lot of potential.

In short, I'm excited about the impact that AI will have on issues that the Gates
Foundation works on, and the foundation will have much more to say about AI in the
coming months. The world needs to make sure that everyone—and not just people who
are well-off—benefits from artificial intelligence. Governments and philanthropy will
need to play a major role in ensuring that it reduces inequity and doesn’t contribute to
it. This is the priority for my own work related to AI.
Any new technology that’s so disruptive is bound to make people uneasy, and that’s
certainly true with artificial intelligence. I understand why—it raises hard questions
about the workforce, the legal system, privacy, bias, and more. AIs also make factual
mistakes and experience hallucinations. Before I suggest some ways to mitigate the
risks, I’ll define what I mean by AI, and I’ll go into more detail about some of the ways
in which it will help empower people at work, save lives, and improve education.

Defining artificial intelligence


Technically, the term artificial intelligence refers to a model created to solve a specific problem
or provide a particular service. What is powering things like ChatGPT is artificial intelligence. It
is learning how to do chat better but can’t learn other tasks. By contrast, the term artificial general
intelligence refers to software that’s capable of learning any task or subject. AGI doesn’t exist
yet—there is a robust debate going on in the computing industry about how to create it, and
whether it can even be created at all.

Developing AI and AGI has been the great dream of the computing industry. For decades, the
question was when computers would be better than humans at something other than making
calculations. Now, with the arrival of machine learning and large amounts of computing power,
sophisticated AIs are a reality and they will get better very fast.

I think back to the early days of the personal computing revolution, when the software industry
was so small that most of us could fit onstage at a conference. Today it is a global industry. Since
a huge portion of it is now turning its attention to AI, the innovations are going to come much
faster than what we experienced after the microprocessor breakthrough. Soon the pre-AI period
will seem as distant as the days when using a computer meant typing at a C:> prompt rather than
tapping on a screen.

Productivity enhancement
Although humans are still better than GPT at a lot of things, there are many jobs where
these capabilities are not used much. For example, many of the tasks done by a person
in sales (digital or phone), service, or document handling (like payables, accounting, or
insurance claim disputes) require decision-making but not the ability to learn
continuously. Corporations have training programs for these activities and in most
cases, they have a lot of examples of good and bad work. Humans are trained using
these data sets, and soon these data sets will also be used to train the AIs that will
empower people to do this work more efficiently.

As computing power gets cheaper, GPT’s ability to express ideas will increasingly be
like having a white-collar worker available to help you with various tasks. Microsoft
describes this as having a co-pilot. Fully incorporated into products like Office, AI will
enhance your work—for example by helping with writing emails and managing your
inbox.
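
To make the co-pilot idea concrete, here is a minimal sketch of how an email-drafting helper could be wired up with today's tools. It assumes the OpenAI Python SDK and an API key; the model name and the draft_reply function are illustrative placeholders, not a description of Microsoft's actual Office integration.

```python
# A sketch of the "co-pilot" idea: an assistant that drafts an email reply for a
# human to review and send. Assumes the OpenAI Python SDK (pip install openai)
# and an OPENAI_API_KEY environment variable; draft_reply and the model name are
# illustrative placeholders, not a real product API.
from openai import OpenAI

client = OpenAI()

def draft_reply(incoming_email: str, instructions: str) -> str:
    """Return a suggested reply; the user edits and approves it before sending."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever chat model is available
        messages=[
            {"role": "system",
             "content": "You draft concise, polite business emails for the user to review."},
            {"role": "user",
             "content": f"Email received:\n{incoming_email}\n\nWhat I want to say: {instructions}"},
        ],
    )
    return response.choices[0].message.content

print(draft_reply(
    "Can we move Thursday's design review to Friday afternoon?",
    "Agree, and propose 2pm Friday.",
))
```

The point of the sketch is the division of labor: the model produces a draft in seconds, and the person stays in control of what is actually sent.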

Eventually your main way of controlling a computer will no longer be pointing and
clicking or tapping on menus and dialogue boxes. Instead, you’ll be able to write a
request in plain English. (And not just English—AIs will understand languages from
around the world. In India earlier this year, I met with developers who are working on
AIs that will understand many of the languages spoken there.)

In addition, advances in AI will enable the creation of a personal agent. Think of it as a
digital personal assistant: It will see your latest emails, know about the meetings you
attend, read what you read, and read the things you don’t want to bother with. This will
both improve your work on the tasks you want to do and free you from the ones you
don’t want to do.


You’ll be able to use natural language to have this agent help you with scheduling,
communications, and e-commerce, and it will work across all your devices. Because of
the cost of training the models and running the computations, creating a personal agent
is not feasible yet, but thanks to the recent advances in AI, it is now a realistic
goal. Some issues will need to be worked out: For example, can an insurance company
ask your agent things about you without your permission? If so, how many people will
choose not to use it?

Company-wide agents will empower employees in new ways. An agent that
understands a particular company will be available for its employees to consult directly
and should be part of every meeting so it can answer questions. It can be told to be
passive or encouraged to speak up if it has some insight. It will need access to the sales,
support, finance, product schedules, and text related to the company. It should read
news related to the industry the company is in. I believe that the result will be that
employees will become more productive.

When productivity goes up, society benefits because people are freed up to do other
things, at work and at home. Of course, there are serious questions about what kind of
support and retraining people will need. Governments need to help workers transition
into other roles. But the demand for people who help other people will never go away.
The rise of AI will free people up to do things that software never will—teaching, caring
for patients, and supporting the elderly, for example.

Global health and education are two areas where there’s great need and not enough
workers to meet those needs. These are areas where AI can help reduce inequity if it is
properly targeted. These should be a key focus of AI work, so I will turn to them now.

Health
I see several ways in which AIs will improve health care and the medical field.

For one thing, they’ll help health-care workers make the most of their time by taking
care of certain tasks for them—things like filing insurance claims, dealing with
paperwork, and drafting notes from a doctor’s visit. I expect that there will be a lot of
innovation in this area.

Other AI-driven improvements will be especially important for poor countries, where
the vast majority of under-5 deaths happen.

For example, many people in those countries never get to see a doctor, and AIs will
help the health workers they do see be more productive. (The effort to develop AI-
powered ultrasound machines that can be used with minimal training is a great example
of this.) AIs will even give patients the ability to do basic triage, get advice about how
to deal with health problems, and decide whether they need to seek treatment.

The AI models used in poor countries will need to be trained on different diseases than
in rich countries. They will need to work in different languages and factor in different
challenges, such as patients who live very far from clinics or can’t afford to stop
working if they get sick.

People will need to see evidence that health AIs are beneficial overall, even though they
won’t be perfect and will make mistakes. AIs have to be tested very carefully and
properly regulated, which means it will take longer for them to be adopted than in other
areas. But then again, humans make mistakes too. And having no access to medical care
is also a problem.

In addition to helping with care, AIs will dramatically accelerate the rate of medical
breakthroughs. The amount of data in biology is very large, and it’s hard for humans to
keep track of all the ways that complex biological systems work. There is already
software that can look at this data, infer what the pathways are, search for targets on
pathogens, and design drugs accordingly. Some companies are working on cancer drugs
that were developed this way.

The next generation of tools will be much more efficient, and they’ll be able to predict
side effects and figure out dosing levels. One of the Gates Foundation’s priorities in AI
is to make sure these tools are used for the health problems that affect the poorest people
in the world, including AIDS, TB, and malaria.

Similarly, governments and philanthropy should create incentives for companies to
share AI-generated insights into crops or livestock raised by people in poor countries.
AIs can help develop better seeds based on local conditions, advise farmers on the best
seeds to plant based on the soil and weather in their area, and help develop drugs and
vaccines for livestock. As extreme weather and climate change put even more pressure
on subsistence farmers in low-income countries, these advances will be even more
important.

Education
Computers haven’t had the effect on education that many of us in the industry have
hoped for. There have been some good developments, including educational games and
online sources of information like Wikipedia, but they haven’t had a meaningful effect
on any of the measures of students’ achievement.

But I think in the next five to 10 years, AI-driven software will finally deliver on the
promise of revolutionizing the way people teach and learn. It will know your interests
and your learning style so it can tailor content that will keep you engaged. It will
measure your understanding, notice when you’re losing interest, and understand what
kind of motivation you respond to. It will give immediate feedback.

There are many ways that AIs can assist teachers and administrators, including
assessing a student’s understanding of a subject and giving advice on career planning.
Teachers are already using tools like ChatGPT to provide comments on their students’
writing assignments.

Of course, AIs will need a lot of training and further development before they can do
things like understand how a certain student learns best or what motivates them. Even
once the technology is perfected, learning will still depend on great relationships
between students and teachers. It will enhance—but never replace—the work that
students and teachers do together in the classroom.

New tools will be created for schools that can afford to buy them, but we need to ensure
that they are also created for and available to low-income schools in the U.S. and around
the world. AIs will need to be trained on diverse data sets so they are unbiased and
reflect the different cultures where they’ll be used. And the digital divide will need to
be addressed so that students in low-income households do not get left behind.

I know a lot of teachers are worried that students are using GPT to write their essays.
Educators are already discussing ways to adapt to the new technology, and I suspect
those conversations will continue for quite some time. I’ve heard about teachers who
have found clever ways to incorporate the technology into their work—like by allowing
students to use GPT to create a first draft that they have to personalize.

Risks and problems with AI


You’ve probably read about problems with the current AI models. For example, they
aren’t necessarily good at understanding the context for a human’s request, which leads
to some strange results. When you ask an AI to make up something fictional, it can do
that well. But when you ask for advice about a trip you want to take, it may suggest
hotels that don’t exist. This is because the AI doesn’t understand the context for your
request well enough to know whether it should invent fake hotels or only tell you about
real ones that have rooms available.

There are other issues, such as AIs giving wrong answers to math problems because
they struggle with abstract reasoning. But none of these are fundamental limitations of
artificial intelligence. Developers are working on them, and I think we’re going to see
them largely fixed in less than two years and possibly much faster.

Other concerns are not simply technical. For example, there’s the threat posed by
humans armed with AI. Like most inventions, artificial intelligence can be used for
good purposes or malign ones. Governments need to work with the private sector on
ways to limit the risks.

Then there’s the possibility that AIs will run out of control. Could a machine decide
that humans are a threat, conclude that its interests are different from ours, or simply
stop caring about us? Possibly, but this problem is no more urgent today than it was
before the AI developments of the past few months.

Superintelligent AIs are in our future. Compared to a computer, our brains operate at a
snail’s pace: An electrical signal in the brain moves at 1/100,000th the speed of the
signal in a silicon chip! Once developers can generalize a learning algorithm and run it
at the speed of a computer—an accomplishment that could be a decade away or a
century away—we’ll have an incredibly powerful AGI. It will be able to do everything
that a human brain can, but without any practical limits on the size of its memory or the
speed at which it operates. This will be a profound change.

These “strong” AIs, as they’re known, will probably be able to establish their own goals.
What will those goals be? What happens if they conflict with humanity’s interests?
Should we try to prevent strong AI from ever being developed? These questions will
get more pressing with time.

But none of the breakthroughs of the past few months have moved us substantially
closer to strong AI. Artificial intelligence still doesn’t control the physical world and
can’t establish its own goals. A recent New York Times article about a conversation
with ChatGPT where it declared it wanted to become a human got a lot of attention. It
was a fascinating look at how human-like the model's expression of emotions can be,
but it isn't an indicator of meaningful independence.

Three books have shaped my own thinking on this subject: Superintelligence, by Nick
Bostrom; Life 3.0 by Max Tegmark; and A Thousand Brains, by Jeff Hawkins. I don’t
agree with everything the authors say, and they don’t agree with each other either. But
all three books are well written and thought-provoking.

The next frontiers


There will be an explosion of companies working on new uses of AI as well as ways to
improve the technology itself. For example, companies are developing new chips that
will provide the massive amounts of processing power needed for artificial intelligence.
Some use optical switches—lasers, essentially—to reduce their energy consumption
and lower the manufacturing cost. Ideally, innovative chips will allow you to run an AI
on your own device, rather than in the cloud, as you have to do today.
On the software side, the algorithms that drive an AI’s learning will get better. There
will be certain domains, such as sales, where developers can make AIs extremely
accurate by limiting the areas that they work in and giving them a lot of training data
that’s specific to those areas. But one big open question is whether we’ll need many of
these specialized AIs for different uses—one for education, say, and another for office
productivity—or whether it will be possible to develop an artificial general intelligence
that can learn any task. There will be immense competition on both approaches.
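
One common way developers do this narrowing today is to fine-tune a general model on a company's own examples of good work, much like the training data sets described in the productivity section above. The sketch below shows what such domain-specific training data might look like; the company name, file name, example content, and the fine-tuning calls shown in comments are assumptions for illustration, using the JSONL chat format that fine-tuning services such as OpenAI's accept.

```python
# A sketch of preparing domain-specific training data for fine-tuning a general
# model into a narrow "sales assistant". The company name, example content, and
# file name are made up for illustration.
import json

# Each record pairs a realistic customer question with the answer a strong
# salesperson gave, in the JSONL chat format used for fine-tuning.
domain_examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a sales assistant for Acme's product line."},
            {"role": "user", "content": "Does the Pro plan include priority support?"},
            {"role": "assistant", "content": "Yes. Pro includes 24/7 priority support and a dedicated account manager."},
        ]
    },
    # ...in practice, thousands of examples of good (and corrected) responses...
]

with open("sales_finetune.jsonl", "w") as f:
    for example in domain_examples:
        f.write(json.dumps(example) + "\n")

# The file can then be uploaded and used to start a fine-tuning job, e.g. with
# the OpenAI SDK (shown as comments because it requires an account and billing):
#   file = client.files.create(file=open("sales_finetune.jsonl", "rb"), purpose="fine-tune")
#   client.fine_tuning.jobs.create(training_file=file.id, model="gpt-4o-mini-2024-07-18")
```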

No matter what, the subject of AIs will dominate the public discussion for the
foreseeable future. I want to suggest three principles that should guide that
conversation.

First, we should try to balance fears about the downsides of AI—which are
understandable and valid—with its ability to improve people’s lives. To make the most
of this remarkable new technology, we’ll need to both guard against the risks and spread
the benefits to as many people as possible.

Second, market forces won’t naturally produce AI products and services that help the
poorest. The opposite is more likely. With reliable funding and the right policies,
governments and philanthropy can ensure that AIs are used to reduce inequity. Just as
the world needs its brightest people focused on its biggest problems, we will need to
focus the world’s best AIs on its biggest problems.

Although we shouldn’t wait for this to happen, it’s interesting to think about whether
artificial intelligence would ever identify inequity and try to reduce it. Do you need to
have a sense of morality in order to see inequity, or would a purely rational AI also see
it? If it did recognize inequity, what would it suggest that we do about it?

Finally, we should keep in mind that we’re only at the beginning of what AI can
accomplish. Whatever limitations it has today will be gone before we know it.

I’m lucky to have been involved with the PC revolution and the Internet revolution. I’m
just as excited about this moment. This new technology can help people everywhere
improve their lives. At the same time, the world needs to establish the rules of the road
so that any downsides of artificial intelligence are far outweighed by its benefits, and
so that everyone can enjoy those benefits no matter where they live or how much money
they have. The Age of AI is filled with opportunities and responsibilities.
