Artificial Intelligence
Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are
programmed to think like humans and mimic their actions. The term may also be applied to any
machine that exhibits traits associated with a human mind such as learning and problem-solving.
The ideal characteristic of artificial intelligence is its ability to rationalize and take actions that
have the best chance of achieving a specific goal. A subset of artificial intelligence is machine
learning (ML), which refers to the concept that computer programs can automatically learn from
and adapt to new data without being assisted by humans. Deep learning techniques enable this
automatic learning through the absorption of huge amounts of unstructured data such as text,
images, or video.
When most people hear the term artificial intelligence, the first thing they usually think of is
robots. That's because big-budget films and novels weave stories about human-like machines that
wreak havoc on Earth. But nothing could be further from the truth.
Artificial intelligence is based on the principle that human intelligence can be defined in a way
that a machine can easily mimic it and execute tasks, from the most simple to those that are even
more complex. The goals of artificial intelligence include mimicking human cognitive activity.
Researchers and developers in the field are making surprisingly rapid strides in mimicking
activities such as learning, reasoning, and perception, to the extent that these can be concretely
defined. Some believe that innovators may soon be able to develop systems that exceed the
capacity of humans to learn or reason out any subject. But others remain skeptical because all
cognitive activity is laced with value judgments that are subject to human experience.
As technology advances, previous benchmarks that defined artificial intelligence become
outdated. For example, machines that calculate basic functions or recognize text through optical
character recognition are no longer considered to embody artificial intelligence, since this
function is now taken for granted as an inherent computer function.
AI is continuously evolving to benefit many different industries. Machines are wired using a
cross-disciplinary approach based on mathematics, computer science, linguistics, psychology,
and more.
How does AI work?
As the hype around AI has accelerated, vendors have been scrambling to promote how their
products and services use AI. Often what they refer to as AI is simply one component of AI, such
as machine learning. AI requires a foundation of specialized hardware and software for writing
and training machine learning algorithms. No one programming language is synonymous with
AI, but a few, including Python, R, and Java, are popular among AI developers.
In general, AI systems work by ingesting large amounts of labeled training data, analyzing the
data for correlations and patterns, and using these patterns to make predictions about future
states. In this way, a chatbot that is fed examples of text chats can learn to produce lifelike
exchanges with people, or an image recognition tool can learn to identify and describe objects in
images. AI programming focuses on three cognitive skills, illustrated in the brief sketch after this
list: learning, reasoning, and self-correction.
Learning processes. This aspect of AI programming focuses on acquiring data and creating rules
for how to turn the data into actionable information. The rules, which are called algorithms,
provide computing devices with step-by-step instructions for how to complete a specific task.
Reasoning processes. This aspect of AI programming focuses on choosing the right algorithm to
reach a desired outcome.
Self-correction processes. This aspect of AI programming is designed to continually fine-tune
algorithms and ensure they provide the most accurate results possible.
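To make these three processes concrete, the short Python sketch below is a minimal, illustrative example rather than anything taken from the sources cited here; it assumes the scikit-learn library and invented toy data, and loosely maps fitting a model to learning, prediction to reasoning, and accuracy checking to self-correction.

    # Minimal sketch of the learn / reason / self-correct cycle (illustrative only).
    from sklearn.linear_model import LogisticRegression

    # Learning: labeled training data (hours studied, hours slept) -> passed exam?
    X_train = [[2, 9], [1, 5], [5, 6], [8, 8], [7, 5], [3, 4]]
    y_train = [0, 0, 1, 1, 1, 0]
    model = LogisticRegression().fit(X_train, y_train)

    # Reasoning: apply the learned rules to a new, unseen case.
    print(model.predict([[6, 7]]))        # e.g. [1] -> predicted to pass

    # Self-correction: measure accuracy; retrain or tune if the score is poor.
    print(model.score(X_train, y_train))

In a real system the data would be far larger and the accuracy check would be run on held-out examples, but the overall loop is the same.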
Why is artificial intelligence important?
AI is important because it can give enterprises insights into their operations that they may not
have been aware of previously and because, in some cases, AI can perform tasks better than
humans. Particularly when it comes to repetitive, detail-oriented tasks like analyzing large
numbers of legal documents to ensure relevant fields are filled in properly, AI tools often
complete jobs quickly and with relatively few errors.
This has helped fuel an explosion in efficiency and opened the door to entirely new business
opportunities for some larger enterprises. Prior to the current wave of AI, it would have been
hard to imagine using computer software to connect riders to taxis, but today Uber has become
one of the largest companies in the world by doing just that. It utilizes sophisticated machine
learning algorithms to predict when people are likely to need rides in certain areas, which helps
proactively get drivers on the road before they're needed. As another example, Google has
become one of the largest players for a range of online services by using machine learning to
understand how people use their services and then improving them. In 2017, the company's
CEO, Sundar Pichai, pronounced that Google would operate as an "AI first" company.
Today's largest and most successful enterprises have used AI to improve their operations and
gain advantage over their competitors.
Artificial neural networks and deep learning artificial intelligence technologies are quickly
evolving, primarily because AI processes large amounts of data much faster and makes
predictions more accurately than humanly possible.
While the huge volume of data being created on a daily basis would bury a human researcher, AI
applications that use machine learning can take that data and quickly turn it into actionable
information.
Advantages of AI include its speed and consistency on detail-oriented, data-heavy tasks and its
round-the-clock availability. Disadvantages include its expense, the deep technical expertise it
requires, and the fact that an AI system only knows what it has been shown in its training data.
Artificial intelligence can be divided into two different categories: weak and strong.
Weak AI, also known as narrow AI, is an AI system that is designed and trained to complete a
specific task. Industrial robots and virtual personal assistants, such as Apple's Siri, use weak AI.
Weak artificial intelligence embodies a system designed to carry out one particular job. Weak AI
systems include video games, such as chess programs, and personal assistants such
as Amazon's Alexa and Apple's Siri. You ask the assistant a question, and it answers it for you.
Strong AI, also known as artificial general intelligence (AGI), describes programming that can
replicate the cognitive abilities of the human brain. When presented with an unfamiliar task, a
strong AI system can use fuzzy logic to apply knowledge from one domain to another and find a
solution autonomously. In theory, a strong AI program should be able to pass both a Turing Test
and the Chinese room test. Strong artificial intelligence systems are systems that carry on the
tasks considered to be human-like. These tend to be more complex and complicated systems.
They are programmed to handle situations in which they may be required to problem solve
without having a person intervene. These kinds of systems can be found in applications like self-
driving cars or in hospital operating rooms.
Arend Hintze, an assistant professor of integrative biology and computer science and engineering
at Michigan State University, explained in a 2016 article that AI can be categorized into four
types, beginning with the task-specific intelligent systems in wide use today and progressing to
sentient systems, which do not yet exist. The categories are as follows:
Type 1: Reactive machines. These AI systems have no memory and are task specific. An
example is Deep Blue, the IBM chess program that beat Garry Kasparov in the 1990s. Deep Blue
can identify pieces on the chessboard and make predictions, but because it has no memory, it
cannot use past experiences to inform future ones.
Type 2: Limited memory. These AI systems have memory, so they can use past experiences to
inform future decisions. Some of the decision-making functions in self-driving cars are designed
this way.
Type 3: Theory of mind. Theory of mind is a psychology term. When applied to AI, it means that
the system would have the social intelligence to understand emotions. This type of AI will be
able to infer human intentions and predict behavior, a necessary skill for AI systems to become
integral members of human teams.
Type 4: Self-awareness. In this category, AI systems have a sense of self, which gives them
consciousness. Machines with self-awareness understand their own current state. This type of AI
does not yet exist.
AI is incorporated into a variety of different types of technology. Here are six examples:
Automation. When paired with AI technologies, automation tools can expand the volume and
types of tasks performed. An example is robotic process automation (RPA), a type of software
that automates repetitive, rules-based data processing tasks traditionally done by humans. When
combined with machine learning and emerging AI tools, RPA can automate bigger portions of
enterprise jobs, enabling RPA's tactical bots to pass along intelligence from AI and respond to
process changes.
Machine learning. This is the science of getting a computer to act without being explicitly
programmed. Deep learning is a subset of machine learning that, in very simple terms, can be
thought of as the automation of predictive analytics. There are three types of machine learning
algorithms (a brief sketch contrasting the first two follows the list):
Supervised learning. Data sets are labeled so that patterns can be detected and used to label new
data sets.
Unsupervised learning. Data sets aren't labeled and are sorted according to similarities or
differences.
Reinforcement learning. Data sets aren't labeled but, after performing an action or several
actions, the AI system is given feedback.
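The fragment below is a hedged illustration of the first two categories (reinforcement learning is omitted for brevity). It assumes the scikit-learn library and its built-in iris data set, neither of which is mentioned in the text, and simply contrasts learning from labels with grouping by similarity.

    # Illustrative contrast between supervised and unsupervised learning.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.cluster import KMeans

    X, y = load_iris(return_X_y=True)

    # Supervised: the labels y guide training, so new samples can be labeled.
    clf = DecisionTreeClassifier().fit(X, y)
    print(clf.predict(X[:1]))                      # predicted class for one sample

    # Unsupervised: no labels; samples are grouped purely by similarity.
    clusters = KMeans(n_clusters=3, n_init=10).fit_predict(X)
    print(clusters[:10])                           # cluster ids, not class names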
Machine vision. This technology gives a machine the ability to see. Machine vision captures and
analyzes visual information using a camera, analog-to-digital conversion and digital signal
processing. It is often compared to human eyesight, but machine vision isn't bound by biology
and can be programmed to see through walls, for example. It is used in a range of applications
from signature identification to medical image analysis. Computer vision, which is focused on
machine-based image processing, is often conflated with machine vision.
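As a rough illustration of the capture-and-analyze pipeline described above, the sketch below uses the OpenCV library (an assumption; the text does not name any particular tool) to load a hypothetical image file, convert it to grayscale, and extract edges.

    # Simple machine vision sketch: read an image, pre-process it, find edges.
    import cv2

    image = cv2.imread("part_photo.jpg")        # hypothetical input image
    if image is None:
        raise SystemExit("image file not found")

    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)            # digital pre-processing
    edges = cv2.Canny(gray, threshold1=100, threshold2=200)   # outline detection
    cv2.imwrite("part_edges.png", edges)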
Natural language processing (NLP). This is the processing of human language by a computer
program. One of the older and best-known examples of NLP is spam detection, which looks at
the subject line and text of an email and decides if it's junk. Current approaches to NLP are based
on machine learning. NLP tasks include text translation, sentiment analysis and speech
recognition.
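To show how a machine-learning approach to a task like spam detection might look, here is a small, assumed sketch (the libraries and the four-message data set are illustrative, not from the text): email text is turned into word counts and a simple classifier learns which patterns signal junk.

    # Toy spam detector: bag-of-words features plus a naive Bayes classifier.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    emails = ["win a free prize now", "meeting agenda for monday",
              "cheap loans click here", "lunch tomorrow?"]
    labels = [1, 0, 1, 0]                      # 1 = spam, 0 = not spam

    vectorizer = CountVectorizer()             # subject/body text -> word counts
    features = vectorizer.fit_transform(emails)
    classifier = MultinomialNB().fit(features, labels)

    test = vectorizer.transform(["free prize waiting, click here"])
    print(classifier.predict(test))            # expected: [1] (flagged as junk)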
Robotics. This field of engineering focuses on the design and manufacturing of robots. Robots
are often used to perform tasks that are difficult for humans to perform or perform consistently.
For example, robots are used in assembly lines for car production or by NASA to move large
objects in space. Researchers are also using machine learning to build robots that can interact in
social settings.
Self-driving cars. Autonomous vehicles use a combination of computer vision, image recognition
and deep learning to build automated skill at piloting a vehicle while staying in a given lane and
avoiding unexpected obstructions, such as pedestrians.
The applications for artificial intelligence are endless. The technology can be applied to many
different sectors and industries. AI is being tested and used in the healthcare industry for dosing
drugs and doling out different treatments tailored to specific patients, and for aiding in surgical
procedures in the operating room.
Other examples of machines with artificial intelligence include computers that play chess and
self-driving cars. Each of these machines must weigh the consequences of any action they take,
as each action will impact the end result. In chess, the end result is winning the game. For self-
driving cars, the computer system must account for all external data and compute it to act in a
way that prevents a collision.
Artificial intelligence also has applications in the financial industry, where it is used to detect and
flag activity in banking and finance such as unusual debit card usage and large account deposits
—all of which help a bank's fraud department. Applications for AI are also being used to help
streamline and make trading easier. This is done by making supply, demand, and pricing of
securities easier to estimate.
Artificial intelligence has made its way into a wide variety of markets. Here are nine examples.
AI in healthcare. The biggest bets are on improving patient outcomes and reducing costs.
Companies are applying machine learning to make better and faster diagnoses than humans. One
of the best-known healthcare technologies is IBM Watson. It understands natural language and
can respond to questions asked of it. The system mines patient data and other available data
sources to form a hypothesis, which it then presents with a confidence scoring schema. Other AI
applications include using online virtual health assistants and chatbots to help patients and
healthcare customers find medical information, schedule appointments, understand the billing
process and complete other administrative processes. An array of AI technologies is also being
used to predict, fight and understand pandemics such as COVID-19.
AI in business. Machine learning algorithms are being integrated into analytics and customer
relationship management (CRM) platforms to uncover information on how to better serve
customers. Chatbots have been incorporated into websites to provide immediate service to
customers. Automation of job positions has also become a talking point among academics and IT
analysts.
AI in education. AI can automate grading, giving educators more time. It can assess students and
adapt to their needs, helping them work at their own pace. AI tutors can provide additional
support to students, ensuring they stay on track. And it could change where and how students
learn, perhaps even replacing some teachers.
AI in finance. AI applied to personal finance applications is disrupting
financial institutions. Applications such as these collect personal data and provide financial
advice. Other programs, such as IBM Watson, have been applied to the process of buying a
home. Today, artificial intelligence software performs much of the trading on Wall Street.
AI in law. The discovery process -- sifting through documents -- in law is often overwhelming
for humans. Using AI to help automate the legal industry's labor-intensive processes is saving
time and improving client service. Law firms are using machine learning to describe data and
predict outcomes, computer vision to classify and extract information from documents, and
natural language processing to interpret requests for information.
AI in manufacturing. Manufacturing has been at the forefront of incorporating robots into the
workflow. For example, industrial robots that were at one time programmed to perform single
tasks and were separated from human workers increasingly function as cobots: smaller,
multitasking robots that collaborate with humans and take on responsibility for more parts of the
job in warehouses, factory floors, and other workspaces.
AI in banking. Banks are successfully employing chatbots to make their customers aware of
services and offerings and to handle transactions that don't require human intervention. AI virtual
assistants are being used to improve and cut the costs of compliance with banking regulations.
Banking organizations are also using AI to improve their decision-making for loans, to set credit
limits, and to identify investment opportunities.
AI in transportation. In addition to AI's fundamental role in operating autonomous vehicles, AI
technologies are used in transportation to manage traffic, predict flight delays, and make ocean
shipping safer and more efficient.
Security. AI and machine learning are at the top of the buzzword list security vendors use today
to differentiate their offerings. Those terms also represent truly viable technologies.
Organizations use machine learning in security information and event management (SIEM)
software and related areas to detect anomalies and identify suspicious activities that indicate
threats. By analyzing data and using logic to identify similarities to known malicious code, AI
can provide alerts to new and emerging attacks much sooner than human employees and
previous technology iterations. The maturing technology is playing a big role in helping
organizations fight off cyber attacks.
While AI tools present a range of new functionality for businesses, the use of artificial
intelligence also raises ethical questions because, for better or worse, an AI system will reinforce
what it has already learned.
This can be problematic because machine learning algorithms, which underpin many of the most
advanced AI tools, are only as smart as the data they are given in training. Because a human
being selects what data is used to train an AI program, the potential for machine learning bias is
inherent and must be monitored closely.
Anyone looking to use machine learning as part of real-world, in-production systems needs to
factor ethics into their AI training processes and strive to avoid bias. This is especially true when
using AI algorithms that are inherently unexplainable, as in deep learning and generative
adversarial network (GAN) applications.
Explainability is a potential stumbling block to using AI in industries that operate under strict
regulatory compliance requirements. For example, financial institutions in the United States
operate under regulations that require them to explain their credit-issuing decisions. When a
decision to refuse credit is made by AI programming, however, it can be difficult to explain how
the decision was arrived at because the AI tools used to make such decisions operate by teasing
out subtle correlations between thousands of variables. When the decision-making process
cannot be explained, the program may be referred to as black box AI.
AI is used extensively across a range of applications today, with varying levels of sophistication.
Recommendation algorithms that suggest what you might like next are popular AI
implementations, as are chatbots that appear on websites or in the form of smart speakers (e.g.,
Alexa or Siri). AI is used to make predictions in terms of weather and financial forecasting, to
streamline production processes, and to cut down on various forms of redundant cognitive labor
(e.g., tax accounting or editing). AI is also used to play games, operate autonomous vehicles,
process language, and much more.
Despite potential risks, there are currently few regulations governing the use of AI tools, and
where laws do exist, they typically pertain to AI indirectly. For example, as previously
mentioned, United States Fair Lending regulations require financial institutions to explain credit
decisions to potential customers. This limits the extent to which lenders can use deep learning
algorithms, which by their nature are opaque and lack explainability.
The European Union's General Data Protection Regulation (GDPR) puts strict limits on how
enterprises can use consumer data, which impedes the training and functionality of many
consumer-facing AI applications.
In October 2016, the National Science and Technology Council issued a report examining the
potential role governmental regulation might play in AI development, but it did not recommend
that specific legislation be considered.
Crafting laws to regulate AI will not be easy, in part because AI comprises a variety of
technologies that companies use for different ends, and partly because regulations can come at
the cost of AI progress and development. The rapid evolution of AI technologies is another
obstacle to forming meaningful regulation of AI: technology breakthroughs and novel
applications can make existing laws instantly obsolete. For example, existing laws regulating the
privacy of conversations and recorded conversations do not cover the challenge posed by voice
assistants like Amazon's Alexa and Apple's Siri that gather but do not distribute conversation --
except to the companies' technology teams which use it to improve machine learning algorithms.
And, of course, the laws that governments do manage to craft to regulate AI don't stop criminals
from using the technology with malicious intent.
The history of artificial intelligence dates back to antiquity with philosophers mulling over the
idea that artificial beings, mechanical men, and other automatons had existed or could exist in
some fashion.
Thanks to early thinkers, artificial intelligence became increasingly more tangible throughout the
1700s and beyond. Philosophers contemplated how human thinking could be artificially
mechanized and manipulated by intelligent non-human machines. The thought processes that
fueled interest in AI originated when classical philosophers, mathematicians, and logicians
considered the mechanical manipulation of symbols, eventually leading to the invention of an
early electronic digital computer, the Atanasoff-Berry Computer (ABC), in the 1940s. This
specific invention inspired scientists to move forward with the idea of creating an “electronic
brain,” or an artificially intelligent being.
Nearly a decade passed before icons in AI aided in the understanding of the field we have today.
Alan Turing, a mathematician among other things, proposed a test that measured a machine’s
ability to replicate human actions to a degree that was indistinguishable from human behavior.
Later that decade, the field of AI research was founded during a summer conference at
Dartmouth College in the mid-1950s, where John McCarthy, computer and cognitive scientist,
coined the term “artificial intelligence,” solidifying the modern understanding of artificial
intelligence as a whole. With each new decade came innovations and findings that changed
people’s fundamental knowledge of the field of artificial intelligence and showed how historical
advancements have catapulted AI from being an unattainable fantasy to a tangible reality for
current and future generations.
Between 380 BC and the late 1600s: Various mathematicians, theologians, philosophers,
professors, and authors mused about mechanical techniques, calculating machines, and numeral
systems that all eventually led to the concept of mechanized “human” thought in non-human
beings.
Early 1700s: Depictions of all-knowing machines akin to computers were more widely discussed
in popular literature. Jonathan Swift’s novel “Gulliver’s Travels” mentioned a device called the
engine, one of the earliest references to modern-day technology, specifically a computer. The
device’s intended purpose was to improve knowledge and mechanical operations to a point
where even the least talented person would seem to be skilled – all with the assistance and
knowledge of a non-human mind (mimicking artificial intelligence).
1872: Author Samuel Butler’s novel “Erewhon” toyed with the idea that at an indeterminate
point in the future machines would have the potential to possess consciousness.
AI from 1900-1950
Once the 1900s hit, the pace with which innovation in artificial intelligence grew was significant.
1921: Karel Čapek, a Czech playwright, released his science fiction play “Rossum’s Universal
Robots” (English translation). His play explored the concept of factory-made artificial people
who he called robots – the first known reference to the word. From this point onward, people
took the “robot” idea and implemented it into their research, art, and discoveries.
1927: The sci-fi film Metropolis, directed by Fritz Lang, featured a robotic girl who was
physically indistinguishable from the human counterpart from which it took its likeness. The
artificially intelligent robot-girl then attacks the town, wreaking havoc on a futuristic Berlin. This
film holds significance because it is the first on-screen depiction of a robot, and it thus lent
inspiration to other famous non-human characters, such as C-3PO in Star Wars.
1929: Japanese biologist and professor Makoto Nishimura created Gakutensoku, the first robot
to be built in Japan. Gakutensoku translates to “learning from the laws of nature,” implying the
robot’s artificially intelligent mind could derive knowledge from people and nature. Some of its
features included moving its head and hands as well as changing its facial expressions.
1939: John Vincent Atanasoff (physicist and inventor), alongside his graduate student assistant
Clifford Berry, created the Atanasoff-Berry Computer (ABC) with a grant of $650 at Iowa State
University. The ABC weighed over 700 pounds and could solve up to 29 simultaneous linear
equations.
1949: Computer scientist Edmund Berkeley’s book “Giant Brains: Or Machines That Think”
noted that machines have increasingly been capable of handling large amounts of information
with speed and skill. He went on to compare machines to a human brain if it were made of
“hardware and wire instead of flesh and nerves,” likening machine ability to that of the human
mind.
AI in the 1950s
The 1950s proved to be a time when many advances in the field of artificial intelligence came to
fruition, with an uptick in research-based findings in AI by various computer scientists, among
others.
1950: Claude Shannon, “the father of information theory,” published “Programming a Computer
for Playing Chess,” which was the first article to discuss the development of a chess-playing
computer program.
1950: Alan Turing published “Computing Machinery and Intelligence,” which proposed the idea
of The Imitation Game – a question that considered if machines can think. This proposal later
became The Turing Test, which measured machine (artificial) intelligence. Turing’s
development tested a machine’s ability to think as a human would. The Turing Test became an
important component in the philosophy of artificial intelligence, which discusses intelligence,
consciousness, and ability in machines.
1952: Arthur Samuel, computer scientist, developed a checkers-playing computer program – the
first program to independently learn how to play a game.
1955: John McCarthy and a team of collaborators created a proposal for a workshop on “artificial
intelligence.” In 1956 when the workshop took place, the official birth of the word was attributed
to McCarthy.
1955: Allen Newell (researcher), Herbert Simon (economist), and Cliff Shaw (programmer) co-
authored Logic Theorist, the first artificial intelligence computer program.
1958: McCarthy developed Lisp, the most popular and still favored programming language for
artificial intelligence research.
1959: Samuel coined the term “machine learning” when speaking about programming a
computer to play a game of chess better than the human who wrote its program.
AI in the 1960s
Innovation in the field of artificial intelligence grew rapidly through the 1960s. The creation of
new programming languages, robots and automatons, research studies, and films that depicted
artificially intelligent beings increased in popularity, heavily highlighting the importance of
artificial intelligence during this period.
1961: Unimate, an industrial robot invented by George Devol in the 1950s, became the first to
work on a General Motors assembly line in New Jersey. Its responsibilities included transporting
die castings from the assembly line and welding the parts on to cars – a task deemed dangerous
for humans.
1961: James Slagle, computer scientist and professor, developed SAINT (Symbolic Automatic
INTegrator), a heuristic problem-solving program whose focus was symbolic integration in
freshman calculus.
1964: Daniel Bobrow, computer scientist, created STUDENT, an early AI program written in
Lisp that solved algebra word problems. STUDENT is cited as an early milestone of AI natural
language processing.
1965: Joseph Weizenbaum, computer scientist and professor, developed ELIZA, an interactive
computer program that could functionally converse in English with a person. Weizenbaum’s goal
was to demonstrate how communication between an artificially intelligent mind versus a human
mind was “superficial,” but discovered many people attributed anthropomorphic characteristics
to ELIZA.
1966: Shakey the Robot, developed by Charles Rosen with the help of 11 others, was the first
general-purpose mobile robot, also known as the “first electronic person.”
1968: The sci-fi film 2001: A Space Odyssey, directed by Stanley Kubrick, is released. It features
HAL, a sentient computer. HAL
controls the spacecraft’s systems and interacts with the ship’s crew, conversing with them as if
HAL were human until a malfunction changes HAL’s interactions in a negative manner.
1968: Terry Winograd, professor of computer science, created SHRDLU, an early natural
language computer program.
AI in the 1970s
Like the 1960s, the 1970s gave way to accelerated advancements, particularly focusing on robots
and automatons. However, artificial intelligence in the 1970s faced challenges, such as reduced
government funding for AI research.
1970: WABOT-1, the first anthropomorphic robot, was built in Japan at Waseda University. Its
features included moveable limbs, the ability to see, and the ability to converse.
1973: James Lighthill, applied mathematician, reported the state of artificial intelligence research
to the British Science Council, stating: “in no part of the field have discoveries made so far
produced the major impact that was then promised,” which led to significantly reduced support
for AI research from the British government.
1977: The film Star Wars, directed by George Lucas, is released. It features C-3PO, a humanoid
robot who is designed as a protocol droid and is “fluent in more than seven million forms of
communication.” As a companion to C-3PO, the film also features R2-D2 – a small, astromech
droid who is incapable of human speech (the inverse of C-3PO); instead, R2-D2 communicates
with electronic beeps. Its functions include small repairs and co-piloting starfighters.
1979: The Stanford Cart, a remote-controlled, TV-equipped mobile robot, was created by then-
mechanical engineering grad student James L. Adams in 1961. In 1979, a “slider,” or mechanical
swivel that moved the TV camera from side-to-side, was added by Hans Moravec, then-PhD
student. The cart successfully crossed a chair-filled room without human interference in
approximately five hours, making it one of the earliest examples of an autonomous vehicle.
AI in the 1980s
The rapid growth of artificial intelligence continued through the 1980s. Despite advancements
and excitement behind AI, caution surrounded an inevitable “AI Winter,” a period of reduced
public interest in and funding for artificial intelligence research.
1980: WABOT-2 was built at Waseda University. This iteration of the WABOT allowed the
humanoid to communicate with people as well as read musical scores and play music on an
electronic organ.
1981: The Japanese Ministry of International Trade and Industry allocated $850 million to the
Fifth Generation Computer project, whose goal was to develop computers that could converse,
translate languages, interpret pictures, and express humanlike reasoning.
1984: The film Electric Dreams, directed by Steve Barron, is released. The plot revolves around
a love triangle between a man, woman, and a sentient personal computer called “Edgar.”
1984: At the Association for the Advancement of Artificial Intelligence (AAAI), Roger Schank
(AI theorist) and Marvin Minsky (cognitive scientist) warn of the AI winter, the first instance
where interest in and funding for artificial intelligence research would decrease. Their warning
came true three years later.
1986: Mercedes-Benz built and released a driverless van equipped with cameras and sensors
under the direction of Ernst Dickmanns. It was able to drive up to 55 mph on a road with no
obstacles or human drivers.
1988: Computer scientist and philosopher Judea Pearl published “Probabilistic Reasoning in
Intelligent Systems.” Pearl is also credited with inventing Bayesian networks, a “probabilistic
graphical model” that represents sets of variables and their dependencies via directed acyclic
graph (DAG).
1988: Rollo Carpenter, programmer and inventor of the chatbots Jabberwacky and Cleverbot
(released in the 1990s), developed Jabberwacky to simulate natural, entertaining human
conversation.
AI in the 1990s
The end of the millennium was on the horizon, but this anticipation only helped artificial
intelligence in its continued growth.
1995: Computer scientist Richard Wallace developed the chatbot A.L.I.C.E. (Artificial Linguistic
Internet Computer Entity), inspired by Weizenbaum’s ELIZA. What differentiated A.L.I.C.E.
from ELIZA was the addition of natural language sample data collection.
1997: Computer scientists Sepp Hochreiter and Jürgen Schmidhuber developed Long Short-
Term Memory (LSTM), a type of recurrent neural network (RNN) architecture used for
handwriting and speech recognition.
1997: Deep Blue, a chess-playing computer developed by IBM, became the first system to win a
chess game and match against a reigning world champion, Garry Kasparov.
1998: Dave Hampton and Caleb Chung invented Furby, the first “pet” toy robot for children.
1999: In line with Furby, Sony introduced AIBO (Artificial Intelligence RoBOt), a $2,000
robotic pet dog crafted to “learn” by interacting with its environment, owners, and other AIBOs.
Its features included the ability to understand and respond to more than 100 voice commands
and communicate with its human owner.
AI from 2000-2010
The new millennium was underway – and after the fears of Y2K died down – AI continued
trending upward. As expected, more artificially intelligent beings were created as well as
creative media (film, specifically) about the concept of artificial intelligence and where it might
be headed.
2000: The Y2K problem, also known as the year 2000 problem, was a class of computer bugs
related to the formatting and storage of electronic calendar data beginning on 01/01/2000.
Because much of the software then in use stored years with only their final two digits, some
systems risked interpreting the year 2000 as 1900 – a challenge for technology and those who
used it.
2000: Professor Cynthia Breazeal developed Kismet, a robot that could recognize and simulate
emotions with its face. It was structured like a human face with eyes, lips, eyelids, and eyebrows.
2000: Honda releases ASIMO, an artificially intelligent humanoid robot.
2001: Sci-fi film A.I. Artificial Intelligence, directed by Steven Spielberg, is released. The movie
is set in a futuristic, dystopian society and follows David, an advanced humanoid child
programmed with anthropomorphic feelings, including the ability to love.
2002: i-Robot released Roomba, an autonomous robot vacuum that cleans while avoiding
obstacles.
2004: NASA's robotic exploration rovers Spirit and Opportunity navigate Mars’ surface without
human intervention.
2004: Sci-fi film I, Robot, directed by Alex Proyas, is released. Set in the year 2035, humanoid
robots serve humankind while one individual is vehemently anti-robot, given the outcome of a
personal tragedy involving a robot.
2007: Computer science professor Fei-Fei Li and colleagues assembled ImageNet, a database of
annotated images intended to aid object recognition research.
2009: Google secretly developed a driverless car. By 2014, it passed Nevada’s self-driving test.
AI from 2010 to the present
The most recent decade has been immensely important for AI innovation. From 2010 onward,
artificial intelligence has become embedded in our day-to-day existence. We use smartphones
that have voice assistants and computers that have “intelligence” functions most of us take for
granted. AI is no longer a pipe dream and hasn’t been for some time.
2010: ImageNet launched the ImageNet Large Scale Visual Recognition Challenge (ILSVRC),
an annual competition for object recognition and image classification algorithms.
2010: Microsoft launched Kinect for Xbox 360, the first gaming device that tracked human body
movement using a 3D camera and infrared detection.
2011: Watson, a natural language question answering computer created by IBM, defeated two
former Jeopardy! champions, Ken Jennings and Brad Rutter, in a televised game.
2011: Apple released Siri, a virtual assistant on Apple iOS operating systems. Siri uses a natural-
language user interface to infer, observe, answer, and recommend things to its human user. It
adapts to voice commands and offers an individualized experience to each user.
2013: A research team from Carnegie Mellon University released Never Ending Image Learner
(NEIL), a semantic machine learning system that could compare and analyze image
relationships.
2014: Microsoft released Cortana, their version of a virtual assistant similar to Siri on iOS.
2014: Amazon created Amazon Alexa, a home assistant that developed into smart speakers that
function as personal assistants.
2015: Elon Musk, Stephen Hawking, and Steve Wozniak, among 3,000 others, signed an open
letter urging a ban on the development and use of autonomous weapons for purposes of war.
2015-2017: Google DeepMind’s AlphaGo, a computer program that plays the board game Go,
defeated a variety of human champions, including world champion Lee Sedol in 2016.
2016: A humanoid robot named Sophia is created by Hanson Robotics. She is known as the first
“robot citizen.” What distinguishes Sophia from previous humanoids is her likeness to an actual
human being, with her ability to see (image recognition), make facial expressions, and
communicate through AI.
2016: Google released Google Home, a smart speaker that uses AI to act as a “personal
assistant” to help users remember tasks, create appointments, and search for information by
voice.
2017: The Facebook Artificial Intelligence Research lab trained two “dialog agents” (chatbots) to
communicate with each other in order to learn how to negotiate. However, as the chatbots
conversed, they diverged from human language (programmed in English) and invented their own
language to communicate with one another – exhibiting artificial intelligence to a great degree.
2018: Alibaba’s (Chinese tech group) language processing AI outscored humans on a
Stanford reading and comprehension test. The Alibaba model scored “82.44
against 82.30 on a set of 100,000 questions” – a narrow defeat, but a defeat nonetheless.
2018: Google developed BERT, the first “bidirectional, unsupervised language representation
that can be used on a variety of natural language tasks using transfer learning.”
2018: Samsung introduced Bixby, a virtual assistant. Bixby’s functions include Voice, where the
user can speak to and ask questions, recommendations, and suggestions; Vision, where Bixby’s
“seeing” ability is built into the camera app and can see what the user sees (i.e. object
identification, search, purchase, translation, landmark recognition); and Home, where Bixby uses
app-based information to help utilize and interact with the user (e.g. weather and fitness
applications.)
A Brief History of Artificial Intelligence
The idea of inanimate objects coming to life as intelligent beings has been around for a long
time. The ancient Greeks had myths about robots, and Chinese and Egyptian engineers built
automatons.
The beginnings of modern AI can be traced to classical philosophers' attempts to describe human
thinking as a symbolic system. But the field of AI wasn't formally founded until 1956, at a
conference at Dartmouth College, in Hanover, New Hampshire, where the term "artificial
intelligence" was coined.
MIT cognitive scientist Marvin Minsky and others who attended the conference were extremely
optimistic about AI's future. "Within a generation [...] the problem of creating 'artificial
intelligence' will substantially be solved," Minsky is quoted as saying in the book "AI: The
Tumultuous History of the Search for Artificial Intelligence" (Basic Books, 1993).
But achieving an artificially intelligent being wasn't so simple. After several reports criticizing
progress in AI, government funding and interest in the field dropped off – a period from 1974–80
that became known as the "AI winter." The field later revived in the 1980s when the British
government started funding it again in part to compete with efforts by the Japanese.
The field experienced another major winter from 1987 to 1993, coinciding with the collapse of
the market for some of the early general-purpose computers, and reduced government funding.
But research began to pick up again after that, and in 1997, IBM's Deep Blue became the first
computer to beat a chess champion when it defeated Russian grandmaster Garry Kasparov. And
in 2011, the computer giant's question-answering system Watson won the quiz show "Jeopardy!"
In 2014, the talking computer "chatbot" Eugene Goostman captured headlines for tricking judges
into thinking he was a real flesh-and-blood human during a Turing test, a competition developed by
British mathematician and computer scientist Alan Turing in 1950 as a way to assess whether a
machine is intelligent.
But the accomplishment has been controversial, with artificial intelligence experts saying that
only a third of the judges were fooled, and pointing out that the bot was able to dodge some
questions by claiming it was an adolescent who spoke English as a second language.
Many experts now believe the Turing test isn't a good measure of artificial intelligence.
"The vast majority of people in AI who've thought about the matter, for the most part, think it’s a
very poor test, because it only looks at external behavior," AI researcher Don Perlis told Live
Science.
In fact, some scientists now plan to develop an updated version of the test. But the field of AI has
become much broader than just the pursuit of true, humanlike intelligence.
Reference
https://www.investopedia.com/terms/a/artificial-intelligence-ai.asp
https://www.techtarget.com/searchenterpriseai/definition/AI-Artificial-Intelligence
https://www.g2.com/articles/history-of-artificial-intelligence
https://www.livescience.com/49007-history-of-artificial-intelligence.html