
UNIT-I (14 Periods)

Introduction to AI: What is AI?, Foundations of AI, History of AI, State of the Art.
Intelligent Agents: Agents and Environments, Good Behavior: Concept of
Rationality, The Nature of Environments And The Structure of Agents.
Solving Problems by Searching: Problem Solving Agents, Searching for Solutions,
Uninformed Search Strategies: Breadth First Search, Uniform Cost Search, Depth
First Search, Iterative Deepening DFS and Bi-directional Search.
Informed (Heuristics) Search Strategies: Greedy BFS, A* Algorithm.
AND-OR Graphs.
Constraint Satisfaction Problems: Defining Constraint Satisfaction Problems,
Local Search in CSPs.
UNIT I
Introduction: What Is AI?, The Foundations of
Artificial Intelligence, The History of Artificial
Intelligence, The State of the Art, Agents and
Environments, Good Behavior: The Concept of
Rationality, The Nature of Environments, The
Structure of Agents.
Artificial Intelligence: AI is the study of making computers do things
intelligently.
Examples: 1. Chess game
2. Driverless car
3. Robotics
Artificial: anything created by humans.
Intelligence: the capacity to understand, think and learn.
Intelligent human behaviour is simulated in machines to make them intelligent.
AI programs range from simple AI programs to expert AI programs.
What is Artificial Intelligence?

 Definitions of AI vary.
 Artificial Intelligence is the study of systems that
   • think like humans      • think rationally
   • act like humans        • act rationally
WHAT IS AI?
 A system is rational if it does the “right thing,” given what it knows.
1) Acting humanly: The Turing Test approach

 The Turing Test, proposed by Alan Turing (1950), was designed to provide a
satisfactory operational definition of intelligence.

The computer would need to possess the following capabilities:

a) natural language processing to enable it to communicate successfully in English.

b) knowledge representation to store what it knows or hears.

c) automated reasoning to use the stored information to answer questions and to draw new
conclusions.

d) machine learning to adapt to new circumstances and to detect and extrapolate patterns.
To pass the total Turing Test, the computer will need

e) computer vision to perceive objects, and


f) robotics to manipulate objects and move about.

These six disciplines compose most of AI.


2) Thinking humanly: The cognitive modeling approach

 If we are going to say that a given program thinks like a human, we must have
some way of determining how humans think.

The interdisciplinary field of cognitive science brings together computer models from AI
and experimental techniques from psychology to construct precise and testable theories of
the human mind.
3) Thinking rationally: The “laws of thought” approach

 Logical syllogisms yield correct conclusions when given correct premises;

for example, "Socrates is a man; all men are mortal; therefore, Socrates is mortal."
4) Acting rationally: The rational agent approach

 An agent is just something that acts.

 A rational agent is one that acts so as to achieve the best outcome or, when there is
uncertainty, the best expected outcome.
THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

1. Philosophy

2. Mathematics

3. Economics

4. Neuroscience

5. Psychology

6. Computer engineering

7. Control theory and cybernetics

8. Linguistics
1. Philosophy

• Can formal rules be used to draw valid conclusions?


• How does the mind arise from a physical brain?
• Where does knowledge come from?
• How does knowledge lead to action?

Materialism which holds that the brain’s operation according to the laws of physics
constitutes the mind.

The confirmation theory of Carnap and Carl Hempel (1905–1997) attempted to analyze the
acquisition of knowledge from experience.
2. Mathematics
• What are the formal rules to draw valid conclusions?
• What can be computed?
• How do we reason with uncertain information?

Besides logic and computation, the third great contribution of mathematics to AI is the
theory of probability.

Thomas Bayes (1702–1761) proposed a rule for updating probabilities in the light of new
evidence.

Bayes’ rule underlies most modern approaches to uncertain reasoning in AI systems.
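
In symbols (a standard statement of the rule, not taken from the slides):

P(H | E) = P(E | H) · P(H) / P(E)

where H is a hypothesis, E is the new evidence, P(H) is the prior probability, and
P(H | E) is the updated (posterior) probability of H after seeing E.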

3. Economics
How should we make decisions so as to maximize payoff?
• How should we do this when others may not go along?
• How should we do this when the payoff may be far in the future?
4. Neuroscience
How do brains process information?

 Neuroscience is the study of the nervous system, particularly the brain.

 brain consists of nerve cells, or neurons


 Brains and digital computers have somewhat different properties.

 Figure 1.3 shows that computers have a cycle time that is a million times faster than a brain.
5. Psychology
 How do humans and animals think and act?

6. Computer engineering

 For artificial intelligence to succeed, we need two things: intelligence and an artifact.
The computer has been the artifact of choice.
7. Control theory and cybernetics
How can artifacts operate under their own control?

8. Linguistics

 Understanding language requires an understanding of the subject matter and context, not
just an understanding of the structure of sentences.
The History Of Artificial Intelligence
1) The gestation of artificial intelligence (1943–1955)

2) The birth of artificial intelligence (1956)

3) Early enthusiasm, great expectations (1952–1969)

4) A dose of reality (1966–1973)

5) Knowledge-based systems: The key to power? (1969–1979)

6) AI becomes an industry (1980–present)

7) The return of neural networks (1986–present)

8) AI becomes a science (1987–present)

9) The emergence of intelligent agents (1995–present)


THE STATE OF THE ART

A few applications of AI:

1) Autonomous planning and scheduling:

2. Game playing:

3. Autonomous control:

4. Diagnosis:

5. Logistics Planning:

6. Robotics:

7. Language understanding and problem solving:


1) Autonomous planning and scheduling:

NASA's Remote Agent program became the first on-board autonomous planning program
to control the scheduling of operations for a spacecraft.

Remote Agent generated plans from high-level goals specified from the ground, and it
monitored the operation of the spacecraft as the plans were executed: detecting,
diagnosing, and recovering from problems as they occurred.

2) Game playing:

 IBM's Deep Blue became the first computer program to defeat the
world champion in a chess match when it bested Garry Kasparov by a
score of 3.5 to 2.5 in an exhibition match.
3. Autonomous control:
The ALVINN computer vision system was trained to steer a car to keep it
following a lane.
 It was placed in CMU's NAVLAB computer-controlled minivan and used to navigate
across the United States; for 2850 miles it was in control of steering the vehicle 98% of
the time.

4. Diagnosis:

Medical diagnosis programs based on probabilistic analysis have been able to perform at
the level of an expert physician in several areas of medicine.
5. Logistics Planning:

During the Persian Gulf crisis of 1991, U.S. forces deployed a Dynamic Analysis and
Replanning Tool, DART (Cross and Walker, 1994), to do automated logistics planning and
scheduling for transportation.

6. Robotics: Many surgeons now use robot assistants in microsurgery.

7. Language understanding and problem solving:

PROVERB (Littman et al., 1999) is a computer program that solves crossword puzzles
better than most humans, using constraints on possible word fillers, a large database of
past puzzles, and a variety of information sources including dictionaries and online
databases such as a list of movies and the actors that appear in them.
AGENTS AND ENVIRONMENTS
 An agent is anything that can be viewed as perceiving its
environment through sensors and acting upon that environment
through actuators.
 A human agent has eyes, ears, and other organs for sensors and
hands, legs, mouth, and other body parts for actuators.

 A robotic agent might have cameras and infrared range finders for sensors and various
motors for actuators.

 A software agent receives keystrokes, file contents, and network packets as sensory
inputs and acts on the environment by displaying on the screen, writing files, and
sending network packets.
 We use the term percept to refer to the agent’s perceptual inputs at any given
instant.
 An agent’s percept sequence is the complete history of everything the agent has
ever perceived.
An agent’s behavior is described by the agent function that maps any given percept
sequence to an action.
 Internally, the agent function for an artificial agent will be implemented by an
agent program.
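
To make the distinction concrete, here is a minimal Python sketch (our own illustration,
not from the text) of a table-driven agent program: the agent function is the abstract
mapping from percept sequences to actions, and the program below is one concrete, if
impractical, way to implement it.

def table_driven_agent(table):
    """Return an agent program that looks up actions by percept sequence.
    'table' maps tuples of percepts to actions (a partial tabulation of the
    agent function); unknown sequences fall back to doing nothing."""
    percepts = []                       # the percept sequence observed so far
    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), 'NoOp')
    return program
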
 To illustrate these ideas, we use a very simple example—the vacuum-cleaner world
shown in Figure 2.2.
Two locations: squares A and B.
The vacuum agent perceives which square it is in and whether there is dirt in the square.

It can choose to move left, move right, suck up the dirt, or do nothing.

One very simple agent function is the following: if the current square is dirty, then suck;
otherwise, move to the other square.

A partial tabulation of this agent function is shown in Figure 2.3 and an agent program that
implements it appears in Figure 2.8
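
A minimal Python sketch of this agent function, in the spirit of the agent program in
Figure 2.8 (the function name, percept format, and action names are our own assumptions):

def reflex_vacuum_agent(percept):
    """Act in the two-square vacuum world.
    'percept' is a (location, status) pair, e.g. ('A', 'Dirty')."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:
        return 'Left'

# Example: in a dirty square A the agent sucks; in a clean square A it moves right.
# reflex_vacuum_agent(('A', 'Dirty'))  ->  'Suck'
# reflex_vacuum_agent(('A', 'Clean'))  ->  'Right'
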
Good Behavior: The Concept of Rationality

1. rational agent

2. Performance measures

3. Rationality

4. Omniscience, learning, and autonomy


 A rational agent is one that does the right thing: conceptually speaking, every entry in
the table for the agent function is filled out correctly.

 The right action is the one that will cause the agent to be most
successful.

 Therefore, we will need some way to measure success.

A performance measure embodies the criterion for success of an agent's behaviour.

Rule for performance measure


 As a general rule, it is better to design performance measures
according to what one actually wants in the environment, rather
than according to how one thinks the agent should behave.
Rationality

What is rational at any given time depends on four things:

1. The performance measure that defines the criterion of success.


2. The agent's prior knowledge of the environment.
3. The actions that the agent can perform.
4. The agent's percept sequence to date.

This leads to a definition of a rational agent:

 For each possible percept sequence, a rational agent should select an action that is
expected to maximize its performance measure, given the evidence provided by the
percept sequence and whatever built-in knowledge the agent has.
The performance measure awards one point for each clean square
at each time step, over a "lifetime" of 1000 time steps.

The "geography" of the environment is known a priori (Figure 2.2)


but the dirt distribution and the initial location of the agent are not.

The only available actions are Left, Right, Suck, and NoOp (do
nothing).

The agent correctly perceives its location and whether that location
contains dirt.
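
As an illustration of how such a performance measure could be computed, here is a hedged
Python sketch of a simulation of this environment (the scoring loop and defaults are our
own simplification; in particular, dirt never reappears here):

def run_vacuum_world(agent, steps=1000, start_location='A'):
    """Run 'agent' for 'steps' time steps and award one point per clean
    square per time step, as described above."""
    dirt = {'A': True, 'B': True}       # both squares start dirty
    location = start_location
    score = 0
    for _ in range(steps):
        status = 'Dirty' if dirt[location] else 'Clean'
        action = agent((location, status))
        if action == 'Suck':
            dirt[location] = False
        elif action == 'Right':
            location = 'B'
        elif action == 'Left':
            location = 'A'
        score += sum(1 for square in dirt if not dirt[square])
    return score

# With the reflex vacuum agent sketched earlier, this yields 1998 of a possible
# 2000 points under these particular starting conditions:
# run_vacuum_world(reflex_vacuum_agent)
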
Omniscience, learning, and autonomy
Meaning: omniscience = having unlimited knowledge; the state of knowing everything.

 An omniscient agent knows the actual outcome of its actions and can act accordingly.
 Our definition requires a rational agent not only to gather
information, but also to learn as much as possible from what it
perceives.

 A rational agent should be autonomous: it should learn what it can to compensate for
partial or incorrect prior knowledge.

 For example, a vacuum-cleaning agent that learns to foresee where and when additional
dirt will appear will do better than one that does not.
The Nature of Environments
task environments  which are essentially the "problems"
rational agents  which are the "solutions."

1. Specifying the task environment

2. Properties of task environments


a) Fully observable vs. partially observable.

b) Deterministic vs. stochastic.

c) Episodic vs sequential

d) Static vs dynamic.

e) Discrete vs continuous.

f) Single agent vs multiagent.


1. Specifying the task environment
 We group the performance measure, the environment, and the agent's actuators and
sensors together under the heading of the task environment, and call this the PEAS
(Performance, Environment, Actuators, Sensors) description.
The textbook also tabulates PEAS elements for a number of additional agent types.
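
For example, following the textbook's automated-taxi illustration (entries paraphrased), a
PEAS description for a taxi driver might look like this:

Performance measure: safe, fast, legal, comfortable trip, maximize profits
Environment:         roads, other traffic, pedestrians, customers
Actuators:           steering, accelerator, brake, signal, horn, display
Sensors:             cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
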
2. Properties of task environments

a) Fully observable vs. partially observable.

Fully observable
If an agent's sensors give it access to the complete state of the environment at each
point in time, then we say that the task environment is fully observable.

A task environment is effectively fully observable if the sensors detect all aspects
that are relevant to the choice of action.

partially observable

An environment might be partially observable because of noisy and inaccurate sensors or
because parts of the state are simply missing from the sensor data.

 for example, a vacuum agent with only a local dirt sensor cannot tell whether there
is dirt in other squares, and an automated taxi cannot see what other drivers are
thinking.
b) Deterministic vs. stochastic.
 If the next state of the environment is completely determined by the current
state and the action executed by the agent, then we say the environment is
deterministic; otherwise, it is stochastic.

 If the environment is partially observable, however, then it could appear to be
stochastic.

 Taxi driving is clearly stochastic in this sense, because one can never predict the
behaviour of traffic exactly.
c) Episodic vs sequential
 In an episodic task environment, the agent's experience is divided
into atomic episodes. Each episode consists of the agent perceiving
and then performing a single action. Crucially, the next episode
does not depend on the actions taken in previous episodes.

In episodic environments, the choice of action in each episode depends only on the
episode itself.

Sequential
 In sequential environments, on the other hand, the current decision could affect all
future decisions.

Chess and taxi driving are sequential: in both cases, short-term actions can have
long-term consequences.

 Episodic environments are much simpler than sequential environments because the
agent does not need to think ahead.
d) Static vs dynamic.
 If the environment can change while an agent is deliberating,
then we say the environment is dynamic for that agent; otherwise, it
is static.

If the environment itself does not change with the passage of time but
the agent's performance score does, then we say the environment
is semidynamic.

Chess, when played with a clock, is semidynamic.

Taxi driving is clearly dynamic

Chess, Crossword puzzles are static.


e) Discrete vs continuous.
The discrete/continuous distinction can be applied to the state of the
environment, to the way time is handled, and to the percepts and
actions of the agent.

For example, a discrete-state environment such as a chess game has a finite number of
distinct states.
Chess also has a discrete set of percepts and actions.

Taxi driving is a continuous-state and continuous-time problem: speed and location sweep
through a range of continuous values.
f) Single agent vs multiagent.

 For example, an agent solving a crossword puzzle by itself is clearly in a single-agent
environment, whereas an agent playing chess is in a two-agent environment.
The Structure of Agents

 The job of AI is to design the agent program that implements the agent function
mapping percepts to actions.

 We assume this program will run on some sort of computing device with physical
sensors and actuators; we call this the architecture:

agent = architecture + program

Types of Agents:
1) Simple reflex agents;

2) Model-based reflex agents;

3) Goal-based agents; and

4) Utility-based agents.

We then explain in general terms how to convert all these into learning agents.
1) Simple reflex agents

 The simplest kind of agent is the simple reflex agent.

 These agents select actions on the basis of the current percept, ignoring the rest of the
percept history.

For example, the vacuum agent whose agent function is tabulated in Figure 2.3 is a simple
reflex agent, because its decision is based only on the current location and on whether
that location contains dirt.
 Figure 2.9 gives the structure of this general program in schematic
form, showing how the condition-action rules allow the agent to
make the connection from percept to action.

 We use rectangles to denote the current internal state of the agent's decision process
and ovals to represent the background information used in the process.
 Imagine yourself as the driver of the automated taxi.
If the car in front brakes, and its brake lights come on, then you should
notice this and initiate braking.

In other words, some processing is done on the visual input to establish the condition we
call "The car in front is braking."

Then, this triggers some established connection in the agent program to the action
"initiate braking." We call such a connection a condition-action rule, written as:

if car-in-front-is-braking then initiate-braking.


The agent in Figure 2.10 will work only if the correct decision can be made on the basis
of only the current percept; that is, only if the environment is fully observable.
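
A minimal Python sketch of this rule-matching loop (the rule format, the interpret_input
helper, and the braking rule below are our own simplifications, not the textbook's program):

def simple_reflex_agent(rules, interpret_input):
    """Build an agent program from condition-action rules.
    'rules' is a list of (condition, action) pairs, where each condition is a
    predicate on the abstracted state; 'interpret_input' turns a raw percept
    into that state description."""
    def program(percept):
        state = interpret_input(percept)
        for condition, action in rules:
            if condition(state):           # first matching rule wins
                return action
        return 'NoOp'                      # no rule matched
    return program

# Example rule, in the spirit of "if car-in-front-is-braking then initiate-braking":
# rules = [(lambda state: state.get('car_in_front_is_braking', False), 'initiate-braking')]
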
2) Model-based reflex agents
The most effective way to handle partial observability is for the
agent to keep track of the part of the world it can't see now. That is, the
agent should maintain some sort of internal state that depends on the
percept history and thereby reflects at least some of the unobserved
aspects of the current state.

• Key difference (w.r.t. simple reflex agents):

• Agents have internal state, which is used to keep track of past states of the world.

• Agents have the ability to represent change in the world.
Figure 2.11. gives the structure of the model reflex agent with internal
state, showing how the current percept is combined with the old
internal state to generate the updated description of the current state.

Figure 2.11 A model-based reflex agent.


 The function UPDATE-STATE is responsible for creating the new internal state
description.
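
A hedged Python sketch of this structure (the update_state signature, the world model, and
all names here are our own placeholders for the components shown in Figure 2.11):

def model_based_reflex_agent(rules, update_state, model, initial_state):
    """Build an agent program that keeps internal state across percepts.
    'update_state' combines the old state, the last action, the new percept,
    and the model of how the world evolves into an updated state."""
    state = initial_state
    last_action = None
    def program(percept):
        nonlocal state, last_action
        state = update_state(state, last_action, percept, model)
        for condition, action in rules:
            if condition(state):
                last_action = action
                return action
        last_action = 'NoOp'
        return last_action
    return program
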
Goal-based agents

 Knowing about the current state of the environment is not always enough to decide
what to do.

 For example, at a road junction, the taxi can turn left, turn right, or
go straight on.

 The correct decision depends on where the taxi is trying to get to. In other words, as
well as a current state description, the agent needs some sort of goal information that
describes situations that are desirable; for example, being at the passenger's destination.
The agent keeps track of the world state as well as the set of goals it is trying to achieve,
and chooses actions that will (eventually) lead to the goal(s).
• Key difference w.r.t. model-based agents:

• In addition to state information, the agent has goal information that describes
desirable situations to be achieved.

• Agents of this kind take future events into consideration.

• What sequence of actions can I take to achieve certain goals?

• Choose actions so as to (eventually) achieve a (given or computed) goal.
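
A minimal Python sketch of goal-based action selection with one-step lookahead (the
goal_test, actions, and result functions are our own assumptions; a real goal-based agent
would search or plan over whole action sequences):

def goal_based_choice(state, goal_test, actions, result):
    """Pick an action whose predicted outcome satisfies the goal, if any.
    'result(state, action)' is the agent's model of what each action does."""
    for action in actions(state):
        if goal_test(result(state, action)):
            return action
    return None   # no single action reaches the goal; deeper search is needed
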
Utility-based agents
 Goals alone are not really enough to generate high-quality
behaviour in most environments.

 For example, there are many action sequences that will get the taxi to its destination
(thereby achieving the goal), but some are quicker, safer, more reliable, or cheaper than
others.

 When there are multiple possible alternatives, how do we decide which one is best?

 Use decision theoretic models: e.g., faster vs. safer.
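
A hedged Python sketch of utility-based selection (utility, actions, and result are our own
placeholders; with uncertain outcomes one would maximize expected utility instead):

def utility_based_choice(state, actions, result, utility):
    """Choose the action whose predicted outcome has the highest utility."""
    return max(actions(state), key=lambda action: utility(result(state, action)))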


Learning agents

Figure 2.15 A general model of learning agents.


Learning Agents
Four conceptual components
 Learning element
 Makes improvements
 Performance element
 Selects external actions
 Critic
 Tells the learning element how well the agent is doing with
respect to a fixed performance standard.
(Feedback from the user or from examples: good or not?)
 Problem generator
 Suggests actions that will lead to new and informative
experiences.
Learning agents adapt and improve over time: they have the ability to improve
performance through learning.

A learning agent can be divided into four conceptual components, as shown in Figure 2.15.

The most important distinction is between the learning element, which is responsible for
making improvements, and the performance element, which is responsible for selecting
external actions.

The performance element is what we have previously considered to be the entire agent:
it takes in percepts and decides on actions.

The learning element uses feedback from the critic on how the agent
is doing and determines how the performance element should be
modified to do better in the future.
The critic tells the learning element how well the agent is doing with
respect to a fixed performance standard.

The critic is necessary because the percepts themselves provide no indication of the
agent's success.

The last component of the learning agent is the problem generator. It is responsible for
suggesting actions that will lead to new and informative experiences.

The point is that if the performance element had its way, it would keep
doing the actions that are best, given what it knows.
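
As a schematic illustration of these four components, here is a hedged Python skeleton
(all class and method names are our own placeholders, not a standard API, and the control
flow is deliberately simplified relative to Figure 2.15):

class LearningAgent:
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element   # selects external actions
        self.learning_element = learning_element         # improves the performance element
        self.critic = critic                             # scores behaviour vs. a fixed standard
        self.problem_generator = problem_generator       # proposes exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)
        self.learning_element(self.performance_element, feedback)
        # Consult the problem generator first; if it proposes no exploratory
        # action, act normally via the performance element.
        exploratory = self.problem_generator()
        return exploratory if exploratory is not None else self.performance_element(percept)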
