AI Unit 1 BEC Part 1 Final
Introduction to AI: What is AI?, Foundations of AI, History of AI, State of the Art.
Intelligent Agents: Agents and Environments, Good Behavior: The Concept of Rationality, The Nature of Environments, and The Structure of Agents.
Solving Problems by Searching: Problem-Solving Agents, Searching for Solutions.
Uninformed Search Strategies: Breadth-First Search, Uniform-Cost Search, Depth-First Search, Iterative Deepening DFS, and Bidirectional Search.
Informed (Heuristic) Search Strategies: Greedy Best-First Search, A* Algorithm.
AND-OR Graphs.
Constraint Satisfaction Problems: Defining Constraint Satisfaction Problems, Local Search in CSPs.
UNIT I
Introduction: What Is AI?, The Foundations of Artificial Intelligence, The History of Artificial Intelligence, The State of the Art, Agents and Environments, Good Behavior: The Concept of Rationality, The Nature of Environments, The Structure of Agents.
Artificial Intelligence: AI is the study of making computers do things intelligently.
Examples:
1. Chess-playing programs
2. Driverless cars
3. Robotics
Artificial: anything created by humans.
Intelligence: the capacity to understand, think, and learn.
The goal is to simulate intelligent human behaviour in machines.
AI programs range from simple programs to expert systems.
What is Artificial Intelligence?
Definitions of AI vary along two dimensions: whether they are concerned with thought processes or with behaviour, and whether they measure success against human performance or against rationality. This gives four kinds of systems: systems that think like humans, systems that think rationally, systems that act like humans, and systems that act rationally.
WHAT IS AI?
A system is rational if it does the “right thing,” given what it knows.
1) Acting humanly: The Turing Test approach
The Turing Test, proposed by Alan Turing (1950), was designed to provide a
satisfactory operational definition of intelligence.
The computer would need to possess the following capabilities:
a) natural language processing to enable it to communicate successfully;
b) knowledge representation to store what it knows or hears;
c) automated reasoning to use the stored information to answer questions and to draw new conclusions;
d) machine learning to adapt to new circumstances and to detect and extrapolate patterns.
2) Thinking humanly: The cognitive modeling approach
If we are going to say that a given program thinks like a human, we must have some way of determining how humans think.
The Foundations of Artificial Intelligence
1. Philosophy
2. Mathematics
3. Economics
4. Neuroscience
5. Psychology
6. Computer engineering
7. Linguistics
1. Philosophy
Materialism holds that the brain's operation according to the laws of physics constitutes the mind.
The confirmation theory of Carnap and Carl Hempel (1905–1997) attempted to analyze the
acquisition of knowledge from experience.
2. Mathematics
• What are the formal rules to draw valid conclusions?
• What can be computed?
• How do we reason with uncertain information?
Besides logic and computation, the third great contribution of mathematics to AI is the
theory of probability.
Thomas Bayes (1702–1761) proposed a rule for updating probabilities in the light of new evidence.
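In its standard form (stated here as a worked illustration, not part of the original notes), Bayes' rule updates a prior probability P(H) to a posterior P(H | E) after observing evidence E:
\[
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
\]
For example, with prior P(H) = 0.01, likelihood P(E | H) = 0.9, and evidence probability P(E) = 0.1, the posterior is 0.9 × 0.01 / 0.1 = 0.09.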
3. Economics
• How should we make decisions so as to maximize payoff?
• How should we do this when others may not go along?
• How should we do this when the payoff may be far in the future?
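Decision theory answers the first question with the principle of maximum expected utility (a standard formulation, added here for illustration): choose the action a that maximizes
\[
EU(a) = \sum_{s'} P(s' \mid a)\, U(s')
\]
where P(s' | a) is the probability of outcome s' under action a and U(s') is its utility. For instance, if an action yields payoff 10 with probability 0.3 and payoff 2 with probability 0.7, its expected utility is 0.3 × 10 + 0.7 × 2 = 4.4.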
4. Neuroscience
How do brains process information?
Figure 1.3 shows that computers have a cycle time that is a million times faster than a brain.
5. Psychology
How do humans and animals think and act?
6. Computer engineering
7. Linguistics
Understanding language requires an understanding of the subject matter and context, not
just an understanding of the structure of sentences.
The History of Artificial Intelligence
1) The gestation of artificial intelligence (1943–1955)
The State of the Art
1. Autonomous planning and scheduling:
2. Game playing:
3. Autonomous control:
4. Diagnosis:
5. Logistics planning:
6. Robotics:
2. Game playing:
IBM's Deep Blue became the first computer program to defeat the
world champion in a chess match when it bested Garry Kasparov by a
score of 3.5 to 2.5 in an exhibition match.
3. Autonomous control:
The ALVINN computer vision system was trained to steer a car to keep it
following a lane.
It was placed in CMU's NAVLAB computer-controlled minivan and used to navigate across the United States; for 2850 miles it was in control of steering the vehicle 98% of the time.
4. Diagnosis:
Medical diagnosis programs based on probabilistic analysis have been able to perform at the level of an expert physician in several areas of medicine.
Agents and Environments
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. Consider a simple vacuum-cleaner world with two squares, A and B: the vacuum agent perceives which square it is in and whether that square contains dirt.
It can choose to move left, move right, suck up the dirt, or do nothing.
One very simple agent function is the following: if the current square is dirty, then suck;
otherwise, move to the other square.
A partial tabulation of this agent function is shown in Figure 2.3, and an agent program that implements it appears in Figure 2.8; a sketch of such a program appears below.
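A minimal sketch of this agent program in Python (the logic follows the description above; the function and value names are illustrative, not from the original notes):

def reflex_vacuum_agent(location, status):
    """Simple reflex agent for the two-square vacuum world.
    location is 'A' or 'B'; status is 'Dirty' or 'Clean'."""
    if status == 'Dirty':
        return 'Suck'      # clean the current square first
    elif location == 'A':
        return 'Right'     # otherwise move to the other square
    else:
        return 'Left'

print(reflex_vacuum_agent('A', 'Dirty'))  # -> Suck
print(reflex_vacuum_agent('B', 'Clean'))  # -> Left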
Good Behavior: The Concept of Rationality
1. Rational agent
2. Performance measures
3. Rationality
The right action is the one that will cause the agent to be most
successful.
The only available actions are Left, Right, Suck, and NoOp (do
nothing).
The agent correctly perceives its location and whether that location
contains dirt.
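As an illustration of scoring behaviour against a performance measure, here is a small simulation sketch (the one-point-per-clean-square-per-time-step measure is an assumption made for illustration; reflex_vacuum_agent is the sketch given earlier):

def run_vacuum(agent, env, steps=10):
    """Run an agent in the two-square world; award one point per
    clean square per time step (an assumed performance measure)."""
    score, loc = 0, 'A'
    for _ in range(steps):
        score += sum(1 for s in env.values() if s == 'Clean')
        action = agent(loc, env[loc])
        if action == 'Suck':
            env[loc] = 'Clean'
        elif action == 'Right':
            loc = 'B'
        elif action == 'Left':
            loc = 'A'
    return score

print(run_vacuum(reflex_vacuum_agent, {'A': 'Dirty', 'B': 'Dirty'}))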
Omniscience, learning, and autonomy
Omniscience means having unlimited knowledge: the state of knowing everything. An omniscient agent knows the actual outcome of its actions and can act accordingly, but omniscience is impossible in reality. Rationality is therefore not the same as perfection: rationality maximizes expected performance, while perfection maximizes actual performance.
The Nature of Environments: properties of task environments
a) Fully observable vs partially observable.
b) Deterministic vs stochastic.
c) Episodic vs sequential.
d) Static vs dynamic.
e) Discrete vs continuous.
Fully observable
If an agent's sensors give it access to the complete state of the environment at each
point in time, then we say that the task environment is fully observable.
A task environment is effectively fully observable if the sensors detect all aspects
that are relevant to the choice of action.
Partially observable
For example, a vacuum agent with only a local dirt sensor cannot tell whether there
is dirt in other squares, and an automated taxi cannot see what other drivers are
thinking.
b) Deterministic vs. stochastic.
If the next state of the environment is completely determined by the current
state and the action executed by the agent, then we say the environment is
deterministic; otherwise, it is stochastic.
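A toy sketch of the distinction (the transition functions below are hypothetical, chosen only for illustration):

import random

def deterministic_step(position, move):
    # The next state is fully determined by the current state and action.
    return position + move

def stochastic_step(position, move):
    # The same action can lead to different next states
    # (e.g., a wheel slips with some probability).
    return position + move + random.choice([-1, 0, 0, 0, 1])

print(deterministic_step(5, 1))  # always 6
print(stochastic_step(5, 1))     # usually 6, sometimes 5 or 7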
c) Episodic vs sequential
In an episodic environment, the agent's experience is divided into atomic episodes, and the choice of action in each episode depends only on the episode itself. In sequential environments, on the other hand, the current decision could affect all future decisions.
d) Static vs dynamic
If the environment can change while the agent is deliberating, it is dynamic for that agent; otherwise it is static. If the environment itself does not change with the passage of time but the agent's performance score does, then we say the environment is semidynamic.
The Structure of Agents
3) Goal-based agents.
Knowing about the current state of the environment is not always enough to decide what to do.
For example, at a road junction, the taxi can turn left, turn right, or
go straight on.
The correct decision depends on where the taxi is trying to get to. In other words, as well as a current state description, the agent needs some sort of goal information that describes situations that are desirable; for example, being at the passenger's destination.
The agent keeps track of the world state as well as the set of goals it is trying to achieve, and chooses actions that will (eventually) lead to the goal(s).
• Key difference with respect to model-based reflex agents: the goal-based agent considers the future ("what will happen if I do this?") and chooses actions that achieve its goals.
4) Utility-based agents.
Goals alone are not enough to generate high-quality behaviour in most environments.
For example, there are many action sequences that will get the taxi to its destination (thereby achieving the goal), but some are quicker, safer, more reliable, or cheaper than others. A sketch contrasting a binary goal test with a utility function appears below.
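A compact sketch of the distinction (all names and weights here are hypothetical, chosen only to show that a goal test is binary while a utility function ranks alternatives):

def goal_test(state, destination):
    """Goal-based view: either the taxi is at the destination or not."""
    return state == destination

def utility(route):
    """Utility-based view: score a route by time, risk, and cost
    (the weights are illustrative assumptions)."""
    return -(route['time'] + 100 * route['risk'] + route['cost'])

routes = [
    {'name': 'highway',    'time': 20, 'risk': 0.10, 'cost': 5},
    {'name': 'back roads', 'time': 35, 'risk': 0.02, 'cost': 2},
]
print(max(routes, key=utility)['name'])  # picks the highest-utility route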
The learning element uses feedback from the critic on how the agent
is doing and determines how the performance element should be
modified to do better in the future.
The critic tells the learning element how well the agent is doing with
respect to a fixed performance standard.
The point is that if the performance element had its way, it would keep
doing the actions that are best, given what it knows.
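A schematic sketch of how these components interact (the structure is illustrative only; the component names follow the text above, and the threshold rule is an assumption for demonstration):

class LearningAgent:
    """Schematic learning agent: the critic scores behaviour against a
    fixed performance standard, and the learning element adjusts the
    performance element accordingly."""

    def __init__(self):
        self.bias = 0.0  # stand-in for the performance element's parameters

    def performance_element(self, percept):
        # Chooses an action given a percept (here: a trivial threshold rule).
        return 'act' if percept + self.bias > 0 else 'wait'

    def critic(self, percept, action):
        # Compares behaviour with a fixed performance standard.
        desired = 'act' if percept > 0 else 'wait'
        return 1.0 if action == desired else -1.0

    def learning_element(self, percept, feedback):
        # Uses the critic's feedback to modify the performance element.
        if feedback < 0:
            self.bias += 0.1 if percept > 0 else -0.1

    def step(self, percept):
        action = self.performance_element(percept)
        self.learning_element(percept, self.critic(percept, action))
        return action

agent = LearningAgent()
for p in [-0.5, 0.3, -0.2, 0.8]:
    print(p, agent.step(p))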