
Artificial Intelligence (BCS515B) Module 1

MODULE 1
Chapter - 1 – INTRODUCTION
1. What is AI?
2. The State of the Art

1. What is AI?
➢ According to the father of Artificial Intelligence, John McCarthy, Artificial
Intelligence is “The science and engineering of making intelligent machines,
especially intelligent computer programs”.
➢ Artificial Intelligence is a way of making a computer, a computer-controlled robot,
or software think intelligently, in a manner similar to the way intelligent humans think.
➢ AI is accomplished by studying how the human brain thinks and how humans learn,
decide, and work while trying to solve a problem, and then using the outcomes of
this study as a basis for developing intelligent software and systems.

Four Approaches to AI:

1. Acting humanly: The Turing Test approach


➢ The Turing Test, proposed by Alan Turing (1950), was designed to provide
a satisfactory operational definition of intelligence.
➢ A computer passes the test if a human interrogator, after posing some written
questions, cannot tell whether the written responses come from a person or
from a computer.

➢ The computer would need to possess the following capabilities:
a. Natural language processing to enable it to communicate
successfully in English.
b. Knowledge representation to store what it knows or hears.
c. Automated Reasoning to use the stored information to answer
questions and to draw new conclusions.
d. Machine Learning to adapt to new circumstances and to detect and
extrapolate patterns.
➢ Total Turing Test includes a video signal so that the interrogator can test the
subject’s perceptual abilities, as well as the opportunity for the interrogator
to pass physical objects “through the hatch.”
➢ To pass the total Turing Test, the computer will need:
a. Computer Vision to perceive objects.
b. Robotics to manipulate objects and move about.
➢ These six disciplines compose most of AI, and Turing deserves credit for
designing a test that remains relevant 60 years later.
2. Thinking humanly: The cognitive modeling approach
➢ We need to get inside the actual workings of the human mind:
a. Through introspection - trying to catch our own thoughts as they go
by.
b. Through psychological experiments - observing a person in action.
c. Through brain imaging - observing the brain in action.
➢ Allen Newell and Herbert Simon, who developed GPS, the “General
Problem Solver”, tried to compare the traces of its reasoning steps to traces of
human subjects solving the same problems.
➢ The interdisciplinary field of cognitive science brings together computer
models from AI and experimental techniques from psychology to construct
precise and testable theories of the human mind.
➢ Cognitive science is a fascinating field in itself, worthy of several textbooks
and at least one encyclopedia (Wilson and Keil, 1999).
➢ Real cognitive science, however, is necessarily based on experimental
investigation of actual humans or animals.
➢ Both AI and cognitive science continue to develop rapidly.

➢ The two fields continue to fertilize each other, most notably in computer
vision, which incorporates neurophysiological evidence into computational
models.
3. Thinking rationally: The “laws of thought” approach
➢ The Greek philosopher Aristotle was one of the first to attempt to codify
“right thinking”, that is, irrefutable reasoning processes.
➢ His syllogisms provided patterns for argument structures that always
yielded correct conclusions when given correct premises.
➢ For example, “Socrates is a man; all men are mortal; therefore, Socrates is
mortal”.
➢ These laws of thought were supposed to govern the operation of the mind;
their study initiated a field called logic.
➢ There are two main obstacles to this approach:
1. First, it is not easy to take informal knowledge and state it in the
formal terms required by logical notation, particularly when the
knowledge is less than 100% certain.
2. Second, there is a big difference between solving a problem “in
principle” and solving it in practice.
4. Acting rationally: The rational agent approach
➢ An agent is something that acts.
➢ Computer agents are not mere programs, but they are expected to have the
following attributes also:
a. Operating under autonomous control
b. Perceiving their environment
c. Persisting over a prolonged time period
d. Adapting to change
e. Creating and pursuing goals
➢ A rational agent is one that acts so as to achieve the best outcome or, when
there is uncertainty, the best expected outcome.
➢ The rational-agent approach has two advantages over the other approaches.
1. First, it is more general than the “laws of thought” approach because
correct inference is just one of several possible mechanisms for
achieving rationality.

2. Second, it is more amenable to scientific development than are
approaches based on human behavior or human thought.
2. The State of the Art

A few applications of AI

Robotic vehicles: A driverless robotic car named STANLEY sped through the rough
terrain of the Mojave Desert at 22 mph, finishing the 132-mile course first to win the 2005
DARPA Grand Challenge. STANLEY is a Volkswagen Touareg outfitted with cameras,
radar, and laser rangefinders to sense the environment and onboard software to command the
steering, braking, and acceleration (Thrun, 2006). The following year CMU’s BOSS won the
Urban Challenge, safely driving in traffic through the streets of a closed Air Force base,
obeying traffic rules and avoiding pedestrians and other vehicles.

Speech recognition: A traveler calling United Airlines to book a flight can have the entire
conversation guided by an automated speech recognition and dialog management system.

Autonomous planning and scheduling: A hundred million miles from Earth, NASA’s
Remote Agent program became the first on-board autonomous planning program to control
the scheduling of operations for a spacecraft (Jonsson et al., 2000). REMOTE AGENT
generated plans from high-level goals specified from the ground and monitored the execution of
those plans—detecting, diagnosing, and recovering from problems as they occurred.
Successor program MAPGEN (Al-Chang et al., 2004) plans the daily operations for NASA’s Mars
Exploration Rovers, and MEXAR2 (Cesta et al., 2007) did mission planning—both logistics
and science planning—for the European Space Agency’s Mars Express mission in 2008.

Game playing: IBM’s DEEP BLUE became the first computer program to defeat the
world champion in a chess match when it bested Garry Kasparov by a score of 3.5 to 2.5 in
an exhibition match (Goodman and Keene, 1997). Kasparov said that he felt a “new kind of
intelligence” across the board from him. Newsweek magazine described the match as “The
brain’s last stand.” The value of IBM’s stock increased by $18 billion. Human champions
studied Kasparov’s loss and were able to draw a few matches in subsequent years, but the
most recent human-computer matches have been won convincingly by the computer.
Spam fighting: Each day, learning algorithms classify over a billion messages as spam,
saving the recipient from having to waste time deleting what, for many users, could
comprise 80% or 90% of all messages, if not classified away by algorithms. Because the
spammers are continually updating their tactics, it is difficult for a static programmed
approach to keep up, and learning algorithms work best (Sahami et al., 1998; Goodman and
Heckerman, 2004).

Logistics planning: During the Persian Gulf crisis of 1991, U.S. forces deployed a
Dynamic Analysis and Replanning Tool, DART (Cross and Walker, 1994), to do automated
logistics planning and scheduling for transportation. This involved up to 50,000 vehicles,


cargo, and people at a time, and had to account for starting points, destinations, routes, and
conflict resolution among all parameters. The AI planning techniques generated in hours
a plan that would have taken weeks with older methods. The Defense Advanced Research
Project Agency (DARPA) stated that this single application more than paid back DARPA’s
30-year investment in AI.

Robotics: The iRobot Corporation has sold over two million Roomba robotic vacuum
cleaners for home use. The company also deploys the more rugged PackBot to Iraq and
Afghanistan, where it is used to handle hazardous materials, clear explosives, and identify
the location of snipers.

Machine translation: A computer program automatically translates from Arabic to
English, allowing an English speaker to see the headline “Ardogan Confirms That Turkey
Would Not Accept Any Pressure, Urging Them to Recognize Cyprus.” The program uses a
statistical model built from examples of Arabic-to-English translations and from examples of
English text totaling two trillion words (Brants et al., 2007). None of the computer scientists
on the team speak Arabic, but they do understand statistics and machine learning algorithms.
These are just a few examples of artificial intelligence systems that exist today. Not
magic or science fiction—but rather science, engineering, and mathematics, to which this
book provides an introduction.

Chapter - 2 – INTELLIGENT AGENTS
1. Agents and Environments
2. Concept of Rationality
3. The Nature of Environments
4. The Structure of agents

Agents and Environments


➢ An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through actuators.
➢ A human agent has eyes, ears, and other organs for sensors and hands, legs,
vocal tract, and so on for actuators.
➢ A robotic agent might have cameras and infrared range finders for sensors and
various motors for actuators.
➢ A software agent receives keystrokes, file contents, and network packets as
sensory inputs and acts on the environment by displaying on the screen, writing
files, and sending network packets.
➢ The below figure shows how agents interact with environments through sensors
and actuators.

➢ We use the term percept to refer to the agent’s perceptual inputs at any given
instant.
➢ An agent’s percept sequence is the complete history of everything the agent has
ever perceived.
➢ An agent’s behavior is described by the agent function that maps any given
percept sequence to an action.

➢ The agent function for an artificial agent will be implemented by an agent
program. It is important to keep these two ideas distinct. The agent function is
an abstract mathematical description; the agent program is a concrete
implementation, running within some physical system (a minimal sketch
contrasting the two follows this list).
➢ To illustrate these ideas, consider the example—the vacuum cleaner world
shown in Figure 2.2.

➢ This particular world has just two locations: squares A and B.


➢ The vacuum agent perceives which square it is in and whether there is dirt in the
square.
➢ It can choose to move left, move right, suck up the dirt, or do nothing.
➢ One very simple agent function is the following: if the current square is dirty,
then suck; otherwise, move to the other square.
➢ A partial tabulation of this agent function is shown in Figure 2.3.
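To keep the function/program distinction concrete, here is a minimal Python sketch (an
illustration, not the book's own code): the lookup table plays the role of the agent
function, with a few entries as in the partial tabulation of Figure 2.3, and the
surrounding code is one possible agent program implementing it.

    # Agent function (abstract): a table mapping percept sequences to actions.
    # Only a few illustrative entries are shown.
    table = {
        (("A", "Clean"),): "Right",
        (("A", "Dirty"),): "Suck",
        (("B", "Clean"),): "Left",
        (("B", "Dirty"),): "Suck",
    }

    percepts = []  # the percept sequence observed so far

    def table_driven_agent(percept):
        # Agent program (concrete): append the new percept and look up
        # the whole percept sequence in the table.
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")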

Good Behavior: The Concept of Rationality


➢ A rational agent is one that does the right thing—conceptually speaking, every
entry in the table for the agent function is filled out correctly.
➢ When an agent is plunked down in an environment, it generates a sequence of actions
according to the percepts it receives.
➢ This sequence of actions causes the environment to go through a sequence of states.
➢ If the sequence is desirable, then the agent has performed well.

➢ This notion of desirability is captured by a performance measure that evaluates any
given sequence of environment states.
Rationality
What is rational at any given time depends on four things:
1. The performance measure that defines the criterion of success.
2. The agent’s prior knowledge of the environment.
3. The actions that the agent can perform.
4. The agent’s percept sequence to date.
A rational agent can be defined as follows: “For each possible percept sequence, a rational agent
should select an action that is expected to maximize its performance measure, given the
evidence provided by the percept sequence and whatever built-in knowledge the agent has”.
➢ Consider the simple vacuum-cleaner agent that cleans a square if it is dirty and
moves to the other square if not; this is the agent function tabulated in Figure 2.3.
➢ Is this a rational agent? That depends! First, we need to say what the performance
measure is, what is known about the environment, and what sensors and actuators
the agent has.
➢ Let us assume the following:
a. The performance measure awards one point for each clean square at each time
step, over a “lifetime” of 1000 time steps.
b. The “geography” of the environment is known a priori (Figure 2.2) but the
dirt distribution and the initial location of the agent are not. Clean squares stay
clean and sucking cleans the current square. The Left and Right actions move
the agent left and right except when this would take the agent outside the
environment, in which case the agent remains where it is.
c. The only available actions are Left, Right, and Suck.
d. The agent correctly perceives its location and whether that location contains
dirt.
➢ We claim that under these circumstances the agent is indeed rational.
➢ The same agent would be irrational under different circumstances.
➢ For example, once all the dirt is cleaned up, the agent will oscillate needlessly back
and forth; the agent will fare poorly. A better agent for this case would do nothing
once it is sure that all the squares are clean. If clean squares can become dirty again,
the agent should occasionally check and re-clean them if needed.

Omniscience, learning, and autonomy
➢ An omniscient agent knows the actual outcome of its actions and can act
accordingly; but omniscience is impossible in reality.
➢ Rationality maximizes expected performance, while perfection maximizes actual
performance.
➢ Doing actions in order to modify future percepts—sometimes called information
gathering—is an important part of rationality.
➢ Our definition requires a rational agent not only to gather information but also to
learn as much as possible from what it perceives. The agent’s initial configuration
could reflect some prior knowledge of the environment, but as the agent gains
experience this may be modified and augmented.
➢ To the extent that an agent relies on the prior knowledge of its designer rather than
on its own percepts, we say that the agent lacks autonomy. A rational agent should
be autonomous—it should learn what it can to compensate for partial or incorrect
prior knowledge.

The Nature of Environments


Task environments are essentially the “problems” to which rational agents are the
“solutions.”
Specifying the task environment
➢ In designing an agent, the first step must always be to specify the task environment
as fully as possible.
➢ For the acronymically minded, we call this the PEAS (Performance, Environment,
Actuators, Sensors) description.
➢ The below table summarizes the PEAS description for the taxi’s task environment.

➢ The performance measure of an automated driver includes getting to the correct
destination; minimizing fuel consumption and wear and tear; minimizing the trip
time or cost; minimizing violations of traffic laws and disturbances to other drivers;
maximizing safety and passenger comfort; and maximizing profits.

➢ The driving environment that the taxi will face includes a variety of roads, ranging
from rural lanes and urban alleys to 12-lane freeways. The roads contain other
traffic, pedestrians, stray animals, road works, police cars, puddles, and potholes.
➢ The actuators for an automated taxi include those available to a human driver:
control over the engine through the accelerator and control over steering and
braking. In addition, it will need output to a display screen or voice synthesizer to
talk back to the passengers, and perhaps some way to communicate with other
vehicles, politely or otherwise.
➢ The basic sensors for the taxi will include one or more controllable video cameras
so that it can see the road; it might augment these with infrared or sonar sensors to
detect distances to other cars and obstacles. To avoid speeding tickets, the taxi
should have a speedometer, and to control the vehicle properly, especially on curves,
it should have an accelerometer. To determine the mechanical state of the vehicle, it
will need the usual array of engine, fuel, and electrical system sensors.
➢ Below Figure shows the basic PEAS elements for a number of additional agent types.


Properties of Task Environments


1. Fully observable vs. Partially observable:
➢ If an agent’s sensors give it access to the complete state of the environment
at each point in time, then we say that the task environment is fully
observable.
➢ A task environment is effectively fully observable if the sensors detect all
aspects that are relevant to the choice of action; relevance, in turn, depends
on the performance measure.
➢ An environment might be partially observable because of noisy and
inaccurate sensors or because parts of the state are simply missing from the
sensor data - for example, a vacuum agent with only a local dirt sensor cannot
tell whether there is dirt in other squares, and an automated taxi cannot see
what other drivers are thinking.
2. Single agent vs. Multiagent:
➢ An agent solving a crossword puzzle by itself is clearly in a single-agent
environment, whereas an agent playing chess is in a two-agent environment.
➢ Chess is a competitive multiagent environment.
➢ In the taxi-driving environment, on the other hand, avoiding collisions
maximizes the performance measure of all agents, so it is a partially cooperative
multiagent environment.
3. Deterministic vs. Stochastic:
➢ If the next state of the environment is completely determined by the current
state and the action executed by the agent, then we say the environment is
deterministic; otherwise, it is stochastic.
➢ Taxi driving is clearly stochastic in this sense, because one can never predict
the behavior of traffic exactly; moreover, one’s tires blow out and one’s
engine seizes up without warning. The vacuum world is deterministic, but
variations can include stochastic elements such as randomly appearing dirt
and an unreliable suction mechanism.

4. Episodic vs. Sequential:
➢ In an episodic task environment, the agent’s experience is divided into
atomic episodes.
➢ In each episode the agent receives a percept and then performs a single
action.
➢ Crucially, the next episode does not depend on the actions taken in previous
episodes.
➢ For example, an agent that has to spot defective parts on an assembly line
bases each decision on the current part, regardless of previous decisions;
moreover, the current decision doesn’t affect whether the next part is
defective.
➢ In sequential environments, the current decision could affect all future
decisions.
➢ Chess and taxi driving are sequential: in both cases, short-term actions can
have long-term consequences.
5. Static vs. Dynamic:
➢ If the environment can change while an agent is deliberating, then we say the
environment is dynamic for that agent; otherwise, it is static.
➢ Static environments are easy to deal with because the agent need not keep
looking at the world while it is deciding on an action, nor need it worry about
the passage of time.
➢ Dynamic environments, on the other hand, are continuously asking the agent
what it wants to do.
➢ Taxi driving is clearly dynamic: the other cars and the taxi itself keep moving
while the driving algorithm dithers about what to do next. Crossword puzzles
are static.
6. Discrete vs. Continuous:
➢ If there is a limited number of distinct, clearly defined states of the
environment, the environment is discrete (for example, chess); otherwise, it
is continuous. Taxi-driving actions are also continuous (steering angles,
etc.).
7. Known vs. Unknown:
➢ This distinction refers not to the environment itself but to the agent’s (or
designer’s) state of knowledge about the environment.
➢ In a known environment, the outcomes for all actions are given.

➢ Obviously, if the environment is unknown, the agent will have to learn how
it works in order to make good decisions.
Below Figure lists the properties of a number of familiar environments.

The Structure of Agents


➢ An agent’s structure can be viewed as: Agent = Architecture + Agent Program.
➢ The architecture is the machinery (a computing device with physical sensors and
actuators) that the agent program runs on.
➢ The agent program is an implementation of the agent function.
Agent Programs
➢ They take the current percept as input from the sensors and return an action to the
actuators.
➢ For example, the agent program for the vacuum agent whose agent function is
tabulated in Figure 2.3 is sketched below.
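Since the figure itself is not reproduced here, the following is a minimal Python
sketch of that agent program, assuming percepts of the form (location, status):

    def reflex_vacuum_agent(percept):
        # Percept is (location, status), e.g. ("A", "Dirty").
        location, status = percept
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        else:  # location == "B"
            return "Left"

Note that this program uses only the current percept and ignores the rest of the
percept history, which is exactly the simple reflex behavior described next.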

Simple reflex agents


➢ The simplest kind of agent, which has only limited intelligence, is the simple reflex agent.
➢ This works only in a fully observable environment.
➢ These agents select actions on the basis of the current percept, ignoring the rest of
the percept history.

➢ Imagine yourself as the driver of the automated taxi.
➢ If the car in front brakes and its brake lights come on, then you should notice this
and initiate braking. In other words, some processing is done on the visual input to
establish the condition we call “The car in front is braking.” Then, this triggers some
established connection in the agent program to the action “initiate braking.”
➢ We call such a connection a condition–action rule, written as
if car-in-front-is-braking then initiate-braking.
➢ The below figure shows the schematic diagram of simple reflex agent.

➢ We use rectangles to denote the current internal state of the agent’s decision process,
and ovals to represent the background information used in the process.
➢ The agent program is shown in Figure 2.10.

➢ The INTERPRET-INPUT function generates an abstracted description of the current
state from the percept, and the RULE-MATCH function returns the first rule in the
set of rules that matches the given state description (a minimal sketch follows this
list).
➢ The agent in Figure 2.10 will work only if the environment is fully observable.
➢ For example, the braking rule given earlier assumes that the condition car-in-front-
is-braking can be determined from the current percept—a single frame of video. This
works if the car in front has a centrally mounted brake light.
➢ If, however, the brake lights of the car in front cannot be reliably identified from a
single frame of video (older models have different configurations of taillights and
brake lights), the agent may brake unnecessarily or, worse, never brake at all.
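The following minimal Python sketch shows the skeleton described above; the percept
abstraction and the rule representation (condition functions paired with actions) are
illustrative assumptions, not the book's own code.

    def interpret_input(percept):
        # Generate an abstracted description of the current state from the
        # percept; here the percept is assumed to already be usable as-is.
        return percept

    def rule_match(state, rules):
        # Return the action of the first rule whose condition holds.
        for condition, action in rules:
            if condition(state):
                return action
        return "NoOp"

    rules = [
        (lambda state: state == "car-in-front-is-braking", "initiate-braking"),
    ]

    def simple_reflex_agent(percept):
        state = interpret_input(percept)
        return rule_match(state, rules)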

Model-based reflex agents
➢ The most effective way to handle partial observability is for the agent to keep track
of the part of the world it can’t see now.
➢ That is, the agent should maintain some sort of internal state that depends on the
percept history and thereby reflects at least some of the unobserved aspects of the
current state.
➢ Updating this internal state information as time goes by requires two kinds of
knowledge to be encoded in the agent program.
➢ First, we need some information about how the world evolves independently of the
agent.
➢ Second, we need some information about how the agent’s own actions affect the
world.
➢ This knowledge about “how the world works” is called a model of the world. An
agent that uses such a model is called a model-based agent.
➢ Figure 2.11 gives the structure of the model-based reflex agent with internal state,
showing how the current percept is combined with the old internal state to generate
the updated description of the current state, based on the agent’s model of how the
world works.
➢ The agent program is shown in Figure 2.12.

➢ The interesting part is the function UPDATE-STATE, which is responsible for
creating the new internal state description.
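A minimal Python sketch of this structure is given below; the state representation and
the model interface (a callable returning updated state facts) are illustrative
assumptions. The agent keeps its internal state and last action between calls and
updates them from each new percept.

    state, last_action = {}, None

    def update_state(state, action, percept, model):
        # Combine the old internal state, the last action, and the new
        # percept, using the model of how the world works (the details
        # vary with the application).
        new_state = dict(state)
        new_state.update(model(state, action, percept))
        return new_state

    def model_based_reflex_agent(percept, model, rules):
        global state, last_action
        state = update_state(state, last_action, percept, model)
        for condition, action in rules:  # rule matching, as before
            if condition(state):
                last_action = action
                return action
        last_action = "NoOp"
        return "NoOp"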
Goal-based agents
➢ Knowing something about the current state of the environment is not always enough
to decide what to do.
➢ For example, at a road junction, the taxi can turn left, turn right, or go straight on.
The correct decision depends on where the taxi is trying to get to.
➢ In other words, as well as a current state description, the agent needs some sort of
goal information that describes situations that are desirable.
➢ Figure 2.13 shows the goal-based agent’s structure.

➢ Search and planning are the subfields of AI devoted to finding action sequences that
achieve the agent’s goals.
➢ Notice that decision making of this kind is fundamentally different from the
condition–action rules of reflex agents, in that it involves consideration of the
future—both “What will happen if I do such-and-such?” and “Will that make me
happy?” (a minimal sketch follows this list).
➢ The reflex agent brakes when it sees brake lights. A goal-based agent, in principle,
could reason that if the car in front has its brake lights on, it will slow down.
➢ Although the goal-based agent appears less efficient, it is more flexible because the
knowledge that supports its decisions is represented explicitly and can be modified.
➢ For the reflex agent, on the other hand, we would have to rewrite many condition–
action rules.
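As a minimal sketch of this kind of decision making (the names are illustrative, and
only one step of lookahead is shown where real agents search over action sequences),
the agent simulates each action with a model and keeps one whose predicted state
satisfies the goal:

    def goal_based_choice(state, actions, predict, goal_test):
        for action in actions:
            # "What will happen if I do this action?"
            if goal_test(predict(state, action)):
                return action
        return None  # no single action suffices; search/planning is needed

For example, at the road junction, predict could map ("at-junction", "turn-left") to
the resulting location, and goal_test could check whether that location lies on the
way to the passenger's destination.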

Utility-based agents
➢ Goals alone are not enough to generate high-quality behavior in most environments.
➢ A more general performance measure should allow a comparison of different world
states according to exactly how happy they would make the agent.
➢ “Happy” does not sound very scientific, so economists and computer scientists use
the term utility instead.
➢ An agent’s utility function is essentially an internalization of the performance
measure.
➢ If the internal utility function and the external performance measure are in
agreement, then an agent that chooses actions to maximize its utility will be rational
according to the external performance measure.
➢ A rational utility-based agent chooses the action that maximizes the expected utility
of the action outcomes—that is, the utility the agent expects to derive, on average,
given the probabilities and utilities of each outcome (a worked sketch follows this
list).
➢ The utility-based agent structure appears in Figure 2.14.
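As a worked Python sketch of maximizing expected utility (the function names are
illustrative assumptions): for each action, weight the utility of every possible
outcome by its probability and sum, then pick the action with the largest sum.

    def expected_utility(action, outcomes, utility):
        # outcomes(action) yields (probability, resulting_state) pairs.
        return sum(p * utility(s) for p, s in outcomes(action))

    def utility_based_choice(actions, outcomes, utility):
        # Choose the action whose expected utility is highest.
        return max(actions, key=lambda a: expected_utility(a, outcomes, utility))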

Learning agents
➢ A learning agent can be divided into four conceptual components, as shown in
Figure 2.15.
➢ The most important distinction is between the learning element, which is
responsible for making improvements, and the performance element, which is
responsible for selecting external actions.
➢ The performance element is what we have previously considered to be the entire
agent: it takes in percepts and decides on actions.

➢ The learning element uses feedback from the critic on how the agent is doing and
determines how the performance element should be modified to do better in the
future.
➢ The critic tells the learning element how well the agent is doing with respect to a
fixed performance standard.
➢ The critic is necessary because the percepts themselves provide no indication of the
agent’s success.
➢ The last component of the learning agent is the problem generator which is
responsible for suggesting actions that will lead to new and informative experiences.
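The following minimal Python sketch (all names here are illustrative assumptions, not
the book's own code) shows one way the four components could be wired together:

    class LearningAgent:
        def __init__(self, performance_element, learning_element,
                     critic, problem_generator):
            self.performance_element = performance_element  # selects external actions
            self.learning_element = learning_element        # improves the performance element
            self.critic = critic                            # scores behavior against a fixed standard
            self.problem_generator = problem_generator      # suggests exploratory actions

        def step(self, percept):
            # The critic provides feedback; the learning element uses it to
            # modify the performance element.
            feedback = self.critic(percept)
            self.learning_element(feedback, self.performance_element)
            # Occasionally try something new and informative.
            exploratory = self.problem_generator(percept)
            if exploratory is not None:
                return exploratory
            return self.performance_element(percept)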
