Artificial Intelligence

Artificial intelligence (AI) refers to machines that can mimic human cognitive functions like learning, problem-solving, and reasoning. A subset of AI is machine learning, which allows computer programs to automatically learn from large amounts of data without being explicitly programmed. The goals of AI include developing systems that can learn, reason, and perceive like humans. While AI is often associated with robots, current AI systems are focused on narrow tasks rather than general human-level intelligence.

What Is Artificial Intelligence (AI)?

Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are

programmed to think like humans and mimic their actions. The term may also be applied to any

machine that exhibits traits associated with a human mind such as learning and problem-solving.

The ideal characteristic of artificial intelligence is its ability to rationalize and take actions that

have the best chance of achieving a specific goal. A subset of artificial intelligence is machine

learning (ML), which refers to the concept that computer programs can automatically learn from

and adapt to new data without being assisted by humans. Deep learning techniques enable this

automatic learning through the absorption of huge amounts of unstructured data such as text,

images, or video.

When most people hear the term artificial intelligence, the first thing they usually think of is

robots. That's because big-budget films and novels weave stories about human-like machines that

wreak havoc on Earth. But nothing could be further from the truth.

Artificial intelligence is based on the principle that human intelligence can be defined precisely enough for a machine to mimic it and execute tasks, from the simplest to the most complex. The goals of artificial intelligence include mimicking human cognitive activity.

Researchers and developers in the field are making surprisingly rapid strides in mimicking

activities such as learning, reasoning, and perception, to the extent that these can be concretely

defined. Some believe that innovators may soon be able to develop systems that exceed the

capacity of humans to learn or reason out any subject. But others remain skeptical because all

cognitive activity is laced with value judgments that are subject to human experience.
As technology advances, previous benchmarks that defined artificial intelligence become

outdated. For example, machines that calculate basic functions or recognize text through optical character recognition are no longer considered to embody artificial intelligence, since these functions are now taken for granted as inherent computer capabilities.

AI is continuously evolving to benefit many different industries. These systems are built using a cross-disciplinary approach that draws on mathematics, computer science, linguistics, psychology, and more.
How does AI work?

As the hype around AI has accelerated, vendors have been scrambling to promote how their

products and services use AI. Often what they refer to as AI is simply one component of AI, such

as machine learning. AI requires a foundation of specialized hardware and software for writing

and training machine learning algorithms. No one programming language is synonymous with

AI, but a few, including Python, R and Java, are popular.

In general, AI systems work by ingesting large amounts of labeled training data, analyzing the

data for correlations and patterns, and using these patterns to make predictions about future

states. In this way, a chatbot that is fed examples of text chats can learn to produce lifelike

exchanges with people, or an image recognition tool can learn to identify and describe objects in

images by reviewing millions of examples.
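
To make that learn-from-examples loop concrete, here is a minimal, hypothetical sketch in plain Python: it "trains" on a handful of example chat lines by counting which word tends to follow which, then uses those counts to continue a prompt. Real chatbots rely on far larger models and datasets, but the same ingest-data, find-patterns, predict cycle applies.

from collections import defaultdict, Counter
import random

# Toy "training data": example chat lines (stand-ins for a real corpus).
chats = [
    "hello how are you today",
    "hello how can i help you",
    "i am fine thank you",
    "thank you for your help today",
]

# Learning step: count which word follows which (a simple bigram model).
follows = defaultdict(Counter)
for line in chats:
    words = line.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

# Prediction step: continue a prompt by repeatedly picking a likely next word.
def reply(start_word, length=5):
    word, out = start_word, [start_word]
    for _ in range(length):
        if word not in follows:
            break
        candidates = list(follows[word].keys())
        weights = list(follows[word].values())
        word = random.choices(candidates, weights=weights, k=1)[0]
        out.append(word)
    return " ".join(out)

print(reply("hello"))  # e.g. "hello how are you today"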

AI programming focuses on three cognitive skills: learning, reasoning and self-correction.

Learning processes. This aspect of AI programming focuses on acquiring data and creating rules

for how to turn the data into actionable information. The rules, which are called algorithms,

provide computing devices with step-by-step instructions for how to complete a specific task.

Reasoning processes. This aspect of AI programming focuses on choosing the right algorithm to

reach a desired outcome.

Self-correction processes. This aspect of AI programming is designed to continually fine-tune

algorithms and ensure they provide the most accurate results possible.
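
Taken together, the three skills can be read as a loop: learn models from data, reason about which algorithm best fits the task, and self-correct by measuring errors and keeping whichever version performs better. The sketch below illustrates that loop with scikit-learn, which is an assumed dependency; the dataset and candidate models are purely illustrative.

# A minimal learn / reason / self-correct loop, assuming scikit-learn is installed.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Reasoning": candidate algorithms for the same task.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=3),
}

# "Learning" plus "self-correction": fit each candidate, score it on held-out
# folds, and keep whichever performs best.
best_name, best_score, best_model = None, -1.0, None
for name, model in candidates.items():
    score = cross_val_score(model, X_train, y_train, cv=5).mean()
    if score > best_score:
        best_name, best_score, best_model = name, score, model

best_model.fit(X_train, y_train)
print(best_name, "test accuracy:", best_model.score(X_test, y_test))
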
Why is artificial intelligence important?

AI is important because it can give enterprises insights into their operations that they may not

have been aware of previously and because, in some cases, AI can perform tasks better than

humans. Particularly when it comes to repetitive, detail-oriented tasks like analyzing large

numbers of legal documents to ensure relevant fields are filled in properly, AI tools often

complete jobs quickly and with relatively few errors.

This has helped fuel an explosion in efficiency and opened the door to entirely new business

opportunities for some larger enterprises. Prior to the current wave of AI, it would have been

hard to imagine using computer software to connect riders to taxis, but today Uber has become

one of the largest companies in the world by doing just that. It utilizes sophisticated machine

learning algorithms to predict when people are likely to need rides in certain areas, which helps

proactively get drivers on the road before they're needed. As another example, Google has

become one of the largest players in a range of online services by using machine learning to understand how people use its services and then improve them. In 2017, the company's CEO, Sundar Pichai, declared that Google would operate as an "AI first" company.

Today's largest and most successful enterprises have used AI to improve their operations and gain an advantage over their competitors.


What are the advantages and disadvantages of artificial intelligence?

Artificial neural networks and deep learning artificial intelligence technologies are quickly

evolving, primarily because AI processes large amounts of data much faster and makes

predictions more accurately than humanly possible.

While the huge volume of data being created on a daily basis would bury a human researcher, AI

applications that use machine learning can take that data and quickly turn it into actionable

information. As of this writing, the primary disadvantage of using AI is that it is expensive to

process the large amounts of data that AI programming requires.

Advantages

Good at detail-oriented jobs;

Reduced time for data-heavy tasks;

Delivers consistent results; and

AI-powered virtual agents are always available.

Disadvantages

Expensive;

Requires deep technical expertise;

Limited supply of qualified workers to build AI tools;

Only knows what it's been shown; and

Lack of ability to generalize from one task to another.


Categories of AI

Artificial intelligence can be divided into two different categories: weak and strong.

Weak AI, also known as narrow AI, is an AI system that is designed and trained to complete a

specific task. Industrial robots and virtual personal assistants, such as Apple's Siri, use weak AI.

Weak artificial intelligence embodies a system designed to carry out one particular job. Weak AI

systems include video games such as chess programs and personal assistants such

as Amazon's Alexa and Apple's Siri. You ask the assistant a question, and it answers it for you.

Strong AI, also known as artificial general intelligence (AGI), describes programming that can

replicate the cognitive abilities of the human brain. When presented with an unfamiliar task, a

strong AI system can use fuzzy logic to apply knowledge from one domain to another and find a

solution autonomously. In theory, a strong AI program should be able to pass both a Turing Test

and the Chinese room test. Strong artificial intelligence systems are systems that carry on the

tasks considered to be human-like. These tend to be more complex and complicated systems.

They are programmed to handle situations in which they may be required to problem solve

without having a person intervene. These kinds of systems can be found in applications like self-

driving cars or in hospital operating rooms.


What are the 4 types of artificial intelligence?

Arend Hintze, an assistant professor of integrative biology and computer science and engineering

at Michigan State University, explained in a 2016 article that AI can be categorized into four

types, beginning with the task-specific intelligent systems in wide use today and progressing to

sentient systems, which do not yet exist. The categories are as follows:

Type 1: Reactive machines. These AI systems have no memory and are task specific. An

example is Deep Blue, the IBM chess program that beat Garry Kasparov in the 1990s. Deep Blue

can identify pieces on the chessboard and make predictions, but because it has no memory, it

cannot use past experiences to inform future ones.

Type 2: Limited memory. These AI systems have memory, so they can use past experiences to

inform future decisions. Some of the decision-making functions in self-driving cars are designed

this way.

Type 3: Theory of mind. Theory of mind is a psychology term. When applied to AI, it means that

the system would have the social intelligence to understand emotions. This type of AI will be

able to infer human intentions and predict behavior, a necessary skill for AI systems to become

integral members of human teams.

Type 4: Self-awareness. In this category, AI systems have a sense of self, which gives them

consciousness. Machines with self-awareness understand their own current state. This type of AI

does not yet exist.


What are examples of AI technology and how is it used today?

AI is incorporated into a variety of different types of technology. Here are six examples:

Automation. When paired with AI technologies, automation tools can expand the volume and

types of tasks performed. An example is robotic process automation (RPA), a type of software

that automates repetitive, rules-based data processing tasks traditionally done by humans. When

combined with machine learning and emerging AI tools, RPA can automate bigger portions of

enterprise jobs, enabling RPA's tactical bots to pass along intelligence from AI and respond to

process changes.

Machine learning. This is the science of getting a computer to act without being explicitly programmed. Deep learning is a subset of machine learning that, in very simple terms, can be thought of as the automation of predictive analytics. There are three types of machine learning algorithms (a brief sketch contrasting them follows the list):

Supervised learning. Data sets are labeled so that patterns can be detected and used to label new

data sets.

Unsupervised learning. Data sets aren't labeled and are sorted according to similarities or

differences.

Reinforcement learning. Data sets aren't labeled but, after performing an action or several

actions, the AI system is given feedback.
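
As promised above, here is a brief, hedged sketch contrasting the first two types on the same data; scikit-learn is an assumed dependency and the synthetic dataset is purely illustrative. Reinforcement learning is noted only in a comment because it needs an interactive environment rather than a fixed dataset.

# Supervised vs. unsupervised learning on the same points, assuming scikit-learn is installed.
from sklearn.datasets import make_blobs
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=150, centers=3, random_state=0)

# Supervised: labels (y) are provided, so the model learns to predict them.
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print("supervised prediction for first point:", clf.predict(X[:1]))

# Unsupervised: no labels are given; the algorithm groups points by similarity.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("unsupervised cluster assignments:", km.labels_[:10])

# Reinforcement learning differs from both: an agent takes actions in an
# environment and learns from reward feedback rather than from a fixed dataset.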

Machine vision. This technology gives a machine the ability to see. Machine vision captures and

analyzes visual information using a camera, analog-to-digital conversion and digital signal

processing. It is often compared to human eyesight, but machine vision isn't bound by biology
and can be programmed to see through walls, for example. It is used in a range of applications

from signature identification to medical image analysis. Computer vision, which is focused on

machine-based image processing, is often conflated with machine vision.
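
As a small illustration of the capture-and-analyze pipeline described above, the hypothetical sketch below reads a stored frame, converts it to grayscale, and runs edge detection. It assumes the opencv-python (cv2) and numpy packages are installed, and the file name is a placeholder.

# A minimal machine-vision sketch; "parts.jpg" stands in for a camera frame.
import cv2
import numpy as np

image = cv2.imread("parts.jpg")  # stand-in for the capture step
if image is None:
    raise FileNotFoundError("parts.jpg not found")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # reduce to one channel
blurred = cv2.GaussianBlur(gray, (5, 5), 0)     # smooth away sensor noise
edges = cv2.Canny(blurred, 50, 150)             # digital signal processing: find edges

# A crude "analysis": how much of the frame is made up of edges?
edge_ratio = np.count_nonzero(edges) / edges.size
print(f"edge pixels: {edge_ratio:.1%} of the frame")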

Natural language processing (NLP). This is the processing of human language by a computer

program. One of the older and best-known examples of NLP is spam detection, which looks at

the subject line and text of an email and decides if it's junk. Current approaches to NLP are based

on machine learning. NLP tasks include text translation, sentiment analysis and speech

recognition.
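
Spam detection is, at heart, the kind of supervised text classification described above: turn the subject line and body into word counts, then learn which words signal junk. A minimal sketch follows, assuming scikit-learn is installed; the example messages and labels are invented for illustration.

# A toy spam detector using a bag-of-words model and naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now", "limited offer claim your reward",
    "meeting moved to 3pm", "lunch tomorrow with the team",
]
labels = ["spam", "spam", "ham", "ham"]

# Convert text to word counts, then learn which words are associated with spam.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["claim your free reward now"]))  # likely ['spam']
print(model.predict(["are we still on for lunch"]))   # likely ['ham']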

Robotics. This field of engineering focuses on the design and manufacturing of robots. Robots

are often used to perform tasks that are difficult for humans to perform or perform consistently.

For example, robots are used in assembly lines for car production or by NASA to move large

objects in space. Researchers are also using machine learning to build robots that can interact in

social settings.

Self-driving cars. Autonomous vehicles use a combination of computer vision, image recognition

and deep learning to build automated skill at piloting a vehicle while staying in a given lane and

avoiding unexpected obstructions, such as pedestrians.


What are the applications of AI?

The applications for artificial intelligence are endless. The technology can be applied to many

different sectors and industries. AI is being tested and used in the healthcare industry for dosing

drugs and doling out different treatments tailored to specific patients, and for aiding in surgical

procedures in the operating room.

Other examples of machines with artificial intelligence include computers that play chess and

self-driving cars. Each of these machines must weigh the consequences of any action they take,

as each action will impact the end result. In chess, the end result is winning the game. For self-

driving cars, the computer system must account for all external data and compute it to act in a

way that prevents a collision.

Artificial intelligence also has applications in the financial industry, where it is used to detect and

flag activity in banking and finance such as unusual debit card usage and large account deposits

—all of which help a bank's fraud department. Applications for AI are also being used to help

streamline and make trading easier. This is done by making supply, demand, and pricing of

securities easier to estimate.

Artificial intelligence has made its way into a wide variety of markets. Here are nine examples.

AI in healthcare. The biggest bets are on improving patient outcomes and reducing costs.

Companies are applying machine learning to make better and faster diagnoses than humans. One

of the best-known healthcare technologies is IBM Watson. It understands natural language and
can respond to questions asked of it. The system mines patient data and other available data

sources to form a hypothesis, which it then presents with a confidence scoring schema. Other AI

applications include using online virtual health assistants and chatbots to help patients and

healthcare customers find medical information, schedule appointments, understand the billing

process and complete other administrative processes. An array of AI technologies is also being

used to predict, fight and understand pandemics such as COVID-19.

AI in business. Machine learning algorithms are being integrated into analytics and customer

relationship management (CRM) platforms to uncover information on how to better serve

customers. Chatbots have been incorporated into websites to provide immediate service to

customers. Automation of job positions has also become a talking point among academics and IT

analysts.

AI in education. AI can automate grading, giving educators more time. It can assess students and

adapt to their needs, helping them work at their own pace. AI tutors can provide additional

support to students, ensuring they stay on track. And it could change where and how students

learn, perhaps even replacing some teachers.

AI in finance. AI in personal finance applications, such as Intuit Mint or TurboTax, is disrupting

financial institutions. Applications such as these collect personal data and provide financial

advice. Other programs, such as IBM Watson, have been applied to the process of buying a

home. Today, artificial intelligence software performs much of the trading on Wall Street.
AI in law. The discovery process -- sifting through documents -- in law is often overwhelming

for humans. Using AI to help automate the legal industry's labor-intensive processes is saving

time and improving client service. Law firms are using machine learning to describe data and

predict outcomes, computer vision to classify and extract information from documents and

natural language processing to interpret requests for information.

AI in manufacturing. Manufacturing has been at the forefront of incorporating robots into the

workflow. For example, industrial robots that were once programmed to perform single tasks and kept separated from human workers increasingly function as cobots: smaller, multitasking robots that collaborate with humans and take on responsibility for more parts of the job in warehouses, on factory floors and in other workspaces.

AI in banking. Banks are successfully employing chatbots to make their customers aware of

services and offerings and to handle transactions that don't require human intervention. AI virtual

assistants are being used to improve and cut the costs of compliance with banking regulations.

Banking organizations are also using AI to improve their decision-making for loans, and to set

credit limits and identify investment opportunities.


AI in transportation. In addition to AI's fundamental role in operating autonomous vehicles, AI

technologies are used in transportation to manage traffic, predict flight delays, and make ocean

shipping safer and more efficient.

Security. AI and machine learning are at the top of the buzzword list security vendors use today

to differentiate their offerings. Those terms also represent truly viable technologies.

Organizations use machine learning in security information and event management (SIEM)

software and related areas to detect anomalies and identify suspicious activities that indicate

threats. By analyzing data and using logic to identify similarities to known malicious code, AI

can provide alerts to new and emerging attacks much sooner than human employees and

previous technology iterations. The maturing technology is playing a big role in helping

organizations fight off cyber attacks.


Ethical use of artificial intelligence

While AI tools present a range of new functionality for businesses, the use of artificial

intelligence also raises ethical questions because, for better or worse, an AI system will reinforce

what it has already learned.

This can be problematic because machine learning algorithms, which underpin many of the most

advanced AI tools, are only as smart as the data they are given in training. Because a human

being selects what data is used to train an AI program, the potential for machine learning bias is

inherent and must be monitored closely.

Anyone looking to use machine learning as part of real-world, in-production systems needs to

factor ethics into their AI training processes and strive to avoid bias. This is especially true when using AI algorithms that are inherently unexplainable, as in deep learning and generative adversarial network (GAN) applications.

Explainability is a potential stumbling block to using AI in industries that operate under strict

regulatory compliance requirements. For example, financial institutions in the United States

operate under regulations that require them to explain their credit-issuing decisions. When a

decision to refuse credit is made by AI programming, however, it can be difficult to explain how

the decision was arrived at because the AI tools used to make such decisions operate by teasing
out subtle correlations between thousands of variables. When the decision-making process

cannot be explained, the program may be referred to as black box AI.

How Is AI Used Today?

AI is used extensively across a range of applications today, with varying levels of sophistication.

Recommendation algorithms that suggest what you might like next are popular AI

implementations, as are chatbots that appear on websites or in the form of smart speakers (e.g.,

Alexa or Siri). AI is used to make predictions in areas such as weather and financial forecasting, to streamline production processes, and to cut down on various forms of redundant cognitive labor (e.g., tax accounting or editing). AI is also used to play games, operate autonomous vehicles, process language, and much, much more.
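
Recommendation algorithms of the kind mentioned above often boil down to similarity arithmetic over past ratings. The hypothetical sketch below, which assumes only numpy, scores items by how similarly users have rated them and suggests the unseen item closest to one the user liked; the item names and ratings are invented.

# A toy item-to-item recommender based on cosine similarity of rating columns.
import numpy as np

items = ["action movie", "space opera", "romcom", "documentary"]
# Rows are users, columns are items; 0 means "not rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 0, 1],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# How similar is each item to each other item, judged by rating patterns?
columns = ratings.T
similarity = [[cosine(a, b) for b in columns] for a in columns]

# Recommend the unseen item most similar to one the user liked.
liked, seen = 0, {0}
best = max((sim, name) for i, (sim, name) in enumerate(zip(similarity[liked], items)) if i not in seen)
print("recommended:", best[1])  # likely "space opera"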



Despite potential risks, there are currently few regulations governing the use of AI tools, and

where laws do exist, they typically pertain to AI indirectly. For example, as previously

mentioned, United States Fair Lending regulations require financial institutions to explain credit

decisions to potential customers. This limits the extent to which lenders can use deep learning

algorithms, which by their nature are opaque and lack explainability.

The European Union's General Data Protection Regulation (GDPR) puts strict limits on how

enterprises can use consumer data, which impedes the training and functionality of many

consumer-facing AI applications.

In October 2016, the National Science and Technology Council issued a report examining the

potential role governmental regulation might play in AI development, but it did not recommend

specific legislation be considered.

Crafting laws to regulate AI will not be easy, in part because AI comprises a variety of

technologies that companies use for different ends, and partly because regulations can come at

the cost of AI progress and development. The rapid evolution of AI technologies is another

obstacle to forming meaningful regulation of AI. Technology breakthroughs and novel

applications can make existing laws instantly obsolete. For example, existing laws regulating the privacy of conversations and recorded conversations do not cover the challenge posed by voice assistants like Amazon's Alexa and Apple's Siri, which gather conversations but do not distribute them -- except to the companies' technology teams, which use them to improve machine learning algorithms.

And, of course, the laws that governments do manage to craft to regulate AI don't stop criminals

from using the technology with malicious intent.


The history of artificial intelligence

The history of artificial intelligence dates back to antiquity with philosophers mulling over the

idea that artificial beings, mechanical men, and other automatons had existed or could exist in

some fashion.

Thanks to early thinkers, artificial intelligence became increasingly more tangible throughout the

1700s and beyond. Philosophers contemplated how human thinking could be artificially

mechanized and manipulated by intelligent non-human machines. The thought processes that

fueled interest in AI originated when classical philosophers, mathematicians, and logicians

considered the mechanical manipulation of symbols, eventually leading to the invention of early electronic digital computers such as the Atanasoff-Berry Computer (ABC) in the 1940s. This invention inspired scientists to move forward with the idea of creating an "electronic brain," or an artificially intelligent being.

Nearly a decade passed before icons in AI aided in the understanding of the field we have today.

Alan Turing, a mathematician and early computer scientist, proposed a test that measured a machine's ability to replicate human actions to a degree that was indistinguishable from human behavior.

Later that decade, the field of AI research was founded during a summer conference at

Dartmouth College in the mid-1950s, where John McCarthy, computer and cognitive scientist,

coined the term “artificial intelligence.”


From the 1950s forward, many scientists, programmers, logicians, and theorists aided in

solidifying the modern understanding of artificial intelligence as a whole. With each new decade

came innovations and findings that changed people’s fundamental knowledge of the field of

artificial intelligence and how historical advancements have catapulted AI from being an

unattainable fantasy to a tangible reality for current and future generations.

Between 380 BC and the late 1600s: Various mathematicians, theologians, philosophers,

professors, and authors mused about mechanical techniques, calculating machines, and numeral

systems that all eventually led to the concept of mechanized “human” thought in non-human

beings.

Early 1700s: Depictions of all-knowing machines akin to computers were more widely discussed

in popular literature. Jonathan Swift’s novel “Gulliver’s Travels” mentioned a device called the

engine, which is one of the earliest references to modern-day technology, specifically a

computer. This device’s intended purpose was to improve knowledge and mechanical operations

to a point where even the least talented person would seem to be skilled – all with the assistance

and knowledge of a non-human mind (mimicking artificial intelligence).


1872: Author Samuel Butler’s novel “Erewhon” toyed with the idea that at an indeterminate

point in the future machines would have the potential to possess consciousness.

AI from 1900-1950

Once the 1900s hit, the pace with which innovation in artificial intelligence grew was significant.

1921: Karel Čapek, a Czech playwright, released his science fiction play “Rossum’s Universal

Robots” (English translation). His play explored the concept of factory-made artificial people

who he called robots – the first known reference to the word. From this point onward, people

took the “robot” idea and implemented it into their research, art, and discoveries.
1927: The sci-fi film Metropolis, directed by Fritz Lang, featured a robotic girl who was

physically indistinguishable from the human counterpart from which it took its likeness. The

artificially intelligent robot-girl then attacks the town, wreaking havoc on a futuristic city. This film holds significance because it is the first on-screen depiction of a robot and thus lent inspiration to other famous non-human characters such as C-3PO in Star Wars.


1929: Japanese biologist and professor Makoto Nishimura created Gakutensoku, the first robot to

be built in Japan. Gakutensoku translates to “learning from the laws of nature,” implying the

robot’s artificially intelligent mind could derive knowledge from people and nature. Some of its

features included moving its head and hands as well as changing its facial expressions.

1939: John Vincent Atanasoff (physicist and inventor), alongside his graduate student assistant

Clifford Berry, created the Atanasoff-Berry Computer (ABC) with a grant of $650 at Iowa State

University. The ABC weighed over 700 pounds and could solve up to 29 simultaneous linear

equations.
1949: Computer scientist Edmund Berkeley’s book “Giant Brains: Or Machines That Think”

noted that machines have increasingly been capable of handling large amounts of information

with speed and skill. He went on to compare machines to a human brain made of "hardware and wire instead of flesh and nerves," likening machine ability to that of the human mind and stating that "a machine, therefore, can think."

AI in the 1950s

The 1950s proved to be a time when many advances in the field of artificial intelligence came to

fruition with an upswing in research-based findings in AI by various computer scientists among

others.

1950: Claude Shannon, “the father of information theory,” published “Programming a Computer

for Playing Chess,” which was the first article to discuss the development of a chess-playing

computer program.
1950: Alan Turing published “Computing Machinery and Intelligence,” which proposed the idea

of the Imitation Game – a way of asking whether machines can think. This proposal later became the Turing Test, which measured machine (artificial) intelligence by testing whether a machine could converse indistinguishably from a human. The Turing Test became an

important component in the philosophy of artificial intelligence, which discusses intelligence,

consciousness, and ability in machines.

1952: Arthur Samuel, a computer scientist, developed a checkers-playing computer program –

the first to independently learn how to play a game.


1955: John McCarthy and colleagues created a proposal for a workshop on "artificial intelligence." When the workshop took place in 1956, the official birth of the term was attributed to McCarthy.

1955: Allen Newell (researcher), Herbert Simon (economist), and Cliff Shaw (programmer) co-

authored Logic Theorist, the first artificial intelligence computer program.

1958: McCarthy developed Lisp, which became one of the most popular and enduring programming languages for artificial intelligence research.

1959: Samuel coined the term "machine learning" when speaking about programming a computer to play checkers better than the human who wrote its program.

AI in the 1960s

Innovation in the field of artificial intelligence grew rapidly through the 1960s. The creation of

new programming languages, robots and automatons, research studies, and films that depicted

artificially intelligent beings increased in popularity. This heavily highlighted the importance of

AI in the second half of the 20th century.


1961: Unimate, an industrial robot invented by George Devol in the 1950s, became the first to

work on a General Motors assembly line in New Jersey. Its responsibilities included transporting

die castings from the assembly line and welding the parts on to cars – a task deemed dangerous

for humans.

1961: James Slagle, computer scientist and professor, developed SAINT (Symbolic Automatic

INTegrator), a heuristic problem-solving program whose focus was symbolic integration in

freshman calculus.

1964: Daniel Bobrow, computer scientist, created STUDENT, an early AI program written in

Lisp that solved algebra word problems. STUDENT is cited as an early milestone of AI natural

language processing.
1965: Joseph Weizenbaum, computer scientist and professor, developed ELIZA, an interactive

computer program that could functionally converse in English with a person. Weizenbaum's goal was to demonstrate that communication between an artificially intelligent mind and a human mind was "superficial," but he discovered that many people attributed anthropomorphic characteristics to ELIZA.

1966: Shakey the Robot, developed by Charles Rosen with the help of 11 others, was the first

general-purpose mobile robot, also known as the “first electronic person.”


1968: The sci-fi film 2001: A Space Odyssey, directed by Stanley Kubrick, is released. It

features HAL (Heuristically programmed ALgorithmic computer), a sentient computer. HAL

controls the spacecraft’s systems and interacts with the ship’s crew, conversing with them as if

HAL were human until a malfunction changes HAL’s interactions in a negative manner.

1968: Terry Winograd, professor of computer science, created SHRDLU, an early natural

language computer program.


AI in the 1970s

Like the 1960s, the 1970s saw accelerated advancements, particularly in robots and automatons. However, artificial intelligence in the 1970s faced challenges, such as reduced

government support for AI research.

1970: WABOT-1, the first anthropomorphic robot, was built in Japan at Waseda University. Its

features included moveable limbs, ability to see, and ability to converse.

1973: James Lighthill, an applied mathematician, reported on the state of artificial intelligence research to the British Science Research Council, stating: "in no part of the field have discoveries made so far produced the major impact that was then promised," which led to significantly reduced support for AI research from the British government.


1977: Director George Lucas’ film Star Wars is released. The film features C-3PO, a humanoid

robot who is designed as a protocol droid and is “fluent in more than seven million forms of

communication.” As a companion to C-3PO, the film also features R2-D2 – a small, astromech

droid who is incapable of human speech (the inverse of C-3PO); instead, R2-D2 communicates

with electronic beeps. Its functions include small repairs and co-piloting starfighters.

1979: The Stanford Cart, a remote-controlled, TV-equipped mobile robot, was created by then-mechanical engineering grad student James L. Adams in 1961. In 1979, a "slider," or mechanical

swivel that moved the TV camera from side-to-side, was added by Hans Moravec, then-PhD

student. The cart successfully crossed a chair-filled room without human interference in

approximately five hours, making it one of the earliest examples of an autonomous vehicle.
AI in the 1980s

The rapid growth of artificial intelligence continued through the 1980s. Despite the advancements and excitement around AI, caution grew about an impending "AI Winter," a period of reduced funding and interest in artificial intelligence.

1980: WABOT-2 was built at Waseda University. This iteration of the WABOT allowed the humanoid to communicate with people as well as read musical scores and play music on an electronic organ.
1981: The Japanese Ministry of International Trade and Industry allocated $850 million to the

Fifth Generation Computer project, whose goal was to develop computers that could converse,

translate languages, interpret pictures, and express humanlike reasoning.

1984: The film Electric Dreams, directed by Steve Barron, is released. The plot revolves around

a love triangle between a man, woman, and a sentient personal computer called “Edgar.”

1984: At a meeting of the Association for the Advancement of Artificial Intelligence (AAAI), Roger Schank (AI theorist) and Marvin Minsky (cognitive scientist) warned of a coming AI winter, a period in which interest in and funding for artificial intelligence research would decrease. Their warning came true within three years.

1986: Mercedes-Benz built and released a driverless van equipped with cameras and sensors

under the direction of Ernst Dickmanns. It was able to drive at up to 55 mph on a road with no obstacles or other drivers.


1988: Computer scientist and philosopher Judea Pearl published “Probabilistic Reasoning in

Intelligent Systems.” Pearl is also credited with inventing Bayesian networks, a “probabilistic

graphical model” that represents sets of variables and their dependencies via directed acyclic

graph (DAG).

1988: Rollo Carpenter, programmer and inventor of two chatbots, Jabberwacky and Cleverbot

(released in the 1990s), developed Jabberwacky to "simulate natural human chat in an

interesting, entertaining and humorous manner." This is an example of AI via a chatbot

communicating with people.

AI in the 1990s

The end of the millennium was on the horizon, but this anticipation only helped artificial

intelligence in its continued stages of growth.

1995: Computer scientist Richard Wallace developed the chatbot A.L.I.C.E (Artificial Linguistic

Internet Computer Entity), inspired by Weizenbaum's ELIZA. What differentiated A.L.I.C.E.

from ELIZA was the addition of natural language sample data collection.
1997: Computer scientists Sepp Hochreiter and Jürgen Schmidhuber developed Long Short-

Term Memory (LSTM), a type of recurrent neural network (RNN) architecture used for

handwriting and speech recognition.

1997: Deep Blue, a chess-playing computer developed by IBM, became the first system to win a

chess game and match against a reigning world champion.

1998: Dave Hampton and Caleb Chung invented Furby, the first “pet” toy robot for children.
1999: In line with Furby, Sony introduced AIBO (Artificial Intelligence RoBOt), a $2,000

robotic pet dog crafted to “learn” by interacting with its environment, owners, and other AIBOs.

Its features included the ability to understand and respond to 100+ voice commands and

communicate with its human owner.

AI from 2000-2010
The new millennium was underway – and after the fears of Y2K died down – AI continued

trending upward. As expected, more artificially intelligent beings were created as well as

creative media (film, specifically) about the concept of artificial intelligence and where it might

be headed.

2000: The Y2K problem, also known as the year 2000 problem, was a class of computer bugs related to the formatting and storage of electronic calendar data beginning on 01/01/2000. Because much of the software then in use had been written in the 1900s and stored years as only two digits, some systems were expected to have trouble handling dates in 2000 and beyond, since "00" could be misread as 1900 – a challenge for technology and those who used it.

2000: Professor Cynthia Breazeal developed Kismet, a robot that could recognize and simulate

emotions with its face. It was structured like a human face with eyes, lips, eyelids, and eyebrows.
2000: Honda releases ASIMO, an artificially intelligent humanoid robot.

2001: Sci-fi film A.I. Artificial Intelligence, directed by Steven Spielberg, is released. The movie

is set in a futuristic, dystopian society and follows David, an advanced humanoid child that is

programmed with anthropomorphic feelings, including the ability to love.


2002: iRobot released Roomba, an autonomous robot vacuum that cleans while avoiding obstacles.
2004: NASA's robotic exploration rovers Spirit and Opportunity navigate Mars’ surface without

human intervention.

2004: Sci-fi film I, Robot, directed by Alex Proyas, is released. Set in the year 2035, humanoid

robots serve humankind while one individual is vehemently anti-robot, given the outcome of a

personal tragedy (determined by a robot).


2006: Oren Etzioni (computer science professor), Michele Banko, and Michael Cafarella

(computer scientists), coined the term “machine reading,” defining it as unsupervised

autonomous understanding of text.

2007: Computer science professor Fei-Fei Li and colleagues assembled ImageNet, a database of

annotated images whose purpose is to aid in object recognition software research.

2009: Google began secretly developing a driverless car. By 2012, it had passed Nevada's self-driving test.

AI 2010 to present day

The current decade has been immensely important for AI innovation. From 2010 onward,

artificial intelligence has become embedded in our day-to-day existence. We use smartphones

that have voice assistants and computers that have “intelligence” functions most of us take for

granted. AI is no longer a pipe dream and hasn’t been for some time.
2010: ImageNet launched the ImageNet Large Scale Visual Recognition Challenge (ILSVRC),

their annual AI object recognition competition.

2010: Microsoft launched Kinect for Xbox 360, the first gaming device that tracked human body

movement using a 3D camera and infrared detection.

2011: Watson, a natural language question answering computer created by IBM, defeated two

former Jeopardy! champions, Ken Jennings and Brad Rutter, in a televised game.

2011: Apple released Siri, a virtual assistant on Apple iOS operating systems. Siri uses a natural-

language user interface to infer, observe, answer, and recommend things to its human user. It

adapts to voice commands and projects an “individualized experience” per user.


2012: Jeff Dean and Andrew Ng (Google researchers) trained a large neural network running on 16,000 processors to recognize images of cats, without being told what a cat is, by showing it 10 million unlabeled images from YouTube videos.

2013: A research team from Carnegie Mellon University released Never Ending Image Learner

(NEIL), a semantic machine learning system that could compare and analyze image

relationships.

2014: Microsoft released Cortana, their version of a virtual assistant similar to Siri on iOS.
2014: Amazon released Alexa, a voice-controlled home assistant built into smart speakers that function as personal assistants.

2015: Elon Musk, Stephen Hawking, and Steve Wozniak, among 3,000 others, signed an open letter calling for a ban on the development and use of autonomous weapons for purposes of war.

2015-2017: Google DeepMind’s AlphaGo, a computer program that plays the board game Go,

defeated various (human) champions.

2016: A humanoid robot named Sophia is created by Hanson Robotics. She is known as the first

“robot citizen.” What distinguishes Sophia from previous humanoids is her likeness to an actual

human being, with her ability to see (image recognition), make facial expressions, and

communicate through AI.


2016: Google released Google Home, a smart speaker that uses AI to act as a “personal

assistant” to help users remember tasks, create appointments, and search for information by

voice.

2017: The Facebook Artificial Intelligence Research lab trained two “dialog agents” (chatbots) to

communicate with each other in order to learn how to negotiate. However, as the chatbots

conversed, they diverged from human language (they had been trained in English) and invented their own shorthand to communicate with one another – an unintended result of how they had been trained.

2018: Alibaba's (the Chinese tech group's) language-processing AI outscored humans on a Stanford reading-comprehension test, scoring "82.44 against 82.30 on a set of 100,000 questions" – a narrow defeat for humans, but a defeat nonetheless.
2018: Google developed BERT, the first “bidirectional, unsupervised language representation

that can be used on a variety of natural language tasks using transfer learning.”

2018: Samsung introduced Bixby, a virtual assistant. Bixby's functions include Voice, where the user can speak to it to ask questions and get recommendations and suggestions; Vision, where Bixby's "seeing" ability is built into the camera app and can see what the user sees (i.e., object identification, search, purchase, translation, landmark recognition); and Home, where Bixby uses app-based information to help it interact with the user (e.g., weather and fitness applications).
A Brief History of Artificial Intelligence

The idea of inanimate objects coming to life as intelligent beings has been around for a long

time. The ancient Greeks had myths about robots, and Chinese and Egyptian engineers built

automatons.

The beginnings of modern AI can be traced to classical philosophers' attempts to describe human

thinking as a symbolic system. But the field of AI wasn't formally founded until 1956, at a

conference at Dartmouth College, in Hanover, New Hampshire, where the term "artificial

intelligence" was coined.

MIT cognitive scientist Marvin Minsky and others who attended the conference were extremely

optimistic about AI's future. "Within a generation [...] the problem of creating 'artificial

intelligence' will substantially be solved," Minsky is quoted as saying in the book "AI: The Tumultuous History of the Search for Artificial Intelligence" (Basic Books, 1994).

But achieving an artificially intelligent being wasn't so simple. After several reports criticizing

progress in AI, government funding and interest in the field dropped off – a period from 1974–80

that became known as the "AI winter." The field later revived in the 1980s when the British

government started funding it again in part to compete with efforts by the Japanese.

The field experienced another major winter from 1987 to 1993, coinciding with the collapse of

the market for some of the early general-purpose computers, and reduced government funding.
But research began to pick up again after that, and in 1997, IBM's Deep Blue became the first

computer to beat a chess champion when it defeated Russian grandmaster Garry Kasparov. And

in 2011, the computer giant's question-answering system Watson won the quiz show "Jeopardy!"

by beating reigning champions Brad Rutter and Ken Jennings.

In 2014, the talking computer "chatbot" Eugene Goostman captured headlines for tricking judges

into thinking it was a real flesh-and-blood human during a Turing test, a competition developed by

British mathematician and computer scientist Alan Turing in 1950 as a way to assess whether a

machine is intelligent.

But the accomplishment has been controversial, with artificial intelligence experts saying that

only a third of the judges were fooled, and pointing out that the bot was able to dodge some

questions by claiming it was an adolescent who spoke English as a second language.

Many experts now believe the Turing test isn't a good measure of artificial intelligence.

"The vast majority of people in AI who've thought about the matter, for the most part, think it’s a

very poor test, because it only looks at external behavior," AI researcher Perlis told Live Science.

In fact, some scientists now plan to develop an updated version of the test. But the field of AI has

become much broader than just the pursuit of true, humanlike intelligence.
References

https://www.investopedia.com/terms/a/artificial-intelligence-ai.asp

https://www.techtarget.com/searchenterpriseai/definition/AI-Artificial-Intelligence

https://www.g2.com/articles/history-of-artificial-intelligence

https://www.livescience.com/49007-history-of-artificial-intelligence.html
