AI Unit-4 notes
Syllabus
ARTIFICIAL INTELLIGENCE
UNIT - I
Problem Solving by Search-I: Introduction to AI, Intelligent Agents
Problem Solving by Search –II: Problem-Solving Agents, Searching for Solutions,
Uninformed Search Strategies: Breadth-first search, Uniform cost search, Depth-first search,
Iterative deepening Depth-first search, Bidirectional search, Informed (Heuristic) Search
Strategies: Greedy best-first search, A* search, Heuristic Functions, Beyond Classical Search:
Hill-climbing search, Simulated annealing search, Local Search in Continuous Spaces,
Searching with Non-Deterministic Actions, Searching with Partial Observations, Online
Search Agents and Unknown Environments.
UNIT - II
Problem Solving by Search-II and Propositional Logic
Adversarial Search: Games, Optimal Decisions in Games, Alpha–Beta Pruning, Imperfect
Real-Time Decisions.
Constraint Satisfaction Problems: Defining Constraint Satisfaction Problems, Constraint
Propagation, Backtracking Search for CSPs, Local Search for CSPs, The Structure of
Problems.
Propositional Logic: Knowledge-Based Agents, The Wumpus World, Logic, Propositional
Logic, Propositional Theorem Proving: Inference and proofs, Proof by resolution, Horn
clauses and definite clauses, Forward and backward chaining, Effective Propositional Model
Checking, Agents Based on Propositional Logic.
UNIT - III
Logic and Knowledge Representation
First-Order Logic: Representation, Syntax and Semantics of First-Order Logic, Using First-
Order Logic, Knowledge Engineering in First-Order Logic.
Inference in First-Order Logic: Propositional vs. First-Order Inference, Unification and
Lifting, Forward Chaining, Backward Chaining, Resolution.
Knowledge Representation: Ontological Engineering, Categories and Objects, Events.
Mental Events and Mental Objects, Reasoning Systems for Categories, Reasoning with
Default Information.
UNIT - IV
Planning
Classical Planning: Definition of Classical Planning, Algorithms for Planning with State-
Space Search, Planning Graphs, other Classical Planning Approaches, Analysis of Planning
approaches.
Planning and Acting in the Real World: Time, Schedules, and Resources, Hierarchical
Planning, Planning and Acting in Nondeterministic Domains, Multiagent Planning.
UNIT - V
Uncertain knowledge and Learning
Uncertainty: Acting under Uncertainty, Basic Probability Notation, Inference Using Full
Joint Distributions, Independence, Bayes’ Rule and Its Use.
Probabilistic Reasoning: Representing Knowledge in an Uncertain Domain, The Semantics
of Bayesian Networks, Efficient Representation of Conditional Distributions, Approximate
Inference in Bayesian Networks.
TEXT BOOKS
1. Artificial Intelligence A Modern Approach, Third Edition, Stuart Russell and Peter
Norvig, Pearson Education.
REFERENCES:
1. Artificial Intelligence, 3rd Edn., E. Rich and K. Knight (TMH)
2. Artificial Intelligence, 3rd Edn., Patrick Henry Winston, Pearson Education.
3. Artificial Intelligence, Shivani Goel, Pearson Education.
4. Artificial Intelligence and Expert systems – Patterson, Pearson Education
LECTURE NOTES
Planning
Classical Planning: Definition of Classical Planning
Classical planning concentrates on problems where most actions leave most things unchanged.
Think of a world consisting of a collection of objects on a flat surface.
The two main planning methods used to solve AI problems are: 1. Planning with State-Space
Search and 2. Goal Stack Planning.
Algorithms for Planning with State-Space Search
We start with the problem’s initial state and consider sequences of actions until we reach a goal
state. This forward (progression) search is set up as follows (a minimal code sketch follows the list):
i. The initial state of the search is the initial state of the planning problem. In general, each state is
a set of positive ground literals; literals not appearing in the state are false.
ii. The actions applicable in a state are all those whose preconditions are satisfied. The
successor state resulting from an action is generated by adding the positive effect literals and deleting
the negative effect literals.
iii. The goal test checks whether the state satisfies the goal of the planning problem.
iv. The step cost of each action is typically 1. Although it would be easy to allow different costs for
different actions, this was seldom done by STRIPS planners.
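This formulation maps directly onto ordinary graph search. Below is a minimal Python sketch of
forward (progression) planning under the assumptions above; the Action record and the tiny
one-plane cargo domain at the end are hypothetical illustrations, not the STRIPS planner itself.

from collections import deque, namedtuple

# A STRIPS-style ground action; every field is a set of ground literals,
# represented here as plain strings (a simplification for this sketch).
Action = namedtuple("Action", ["name", "precond", "add_list", "del_list"])

def applicable(state, action):
    # (ii) an action is applicable when all of its preconditions hold
    return action.precond <= state

def result(state, action):
    # successor state = (state - negative effects) + positive effects
    return frozenset((state - action.del_list) | action.add_list)

def forward_plan(initial, goal, actions):
    """Breadth-first forward state-space search; every step costs 1 (iv)."""
    start = frozenset(initial)            # (i) a state is a set of literals
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                 # (iii) goal test: goal literals hold
            return plan
        for a in actions:
            if applicable(state, a):
                nxt = result(state, a)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [a.name]))
    return None                           # no plan exists

# Hypothetical one-plane, one-cargo toy problem:
acts = [
    Action("Load(C1,P1,A)", {"At(C1,A)", "At(P1,A)"}, {"In(C1,P1)"}, {"At(C1,A)"}),
    Action("Fly(P1,A,B)", {"At(P1,A)"}, {"At(P1,B)"}, {"At(P1,A)"}),
    Action("Unload(C1,P1,B)", {"In(C1,P1)", "At(P1,B)"}, {"At(C1,B)"}, {"In(C1,P1)"}),
]
print(forward_plan({"At(C1,A)", "At(P1,A)"}, {"At(C1,B)"}, acts))
# -> ['Load(C1,P1,A)', 'Fly(P1,A,B)', 'Unload(C1,P1,B)']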
Since function symbols are not present, the state space of a planning problem is finite; therefore,
any complete graph search algorithm, such as A*, will be a complete planning algorithm.
From the early days of planning research it has been known that unguided forward state-space search
is too inefficient to be practical. This is mainly because of a large branching factor: forward search
does not restrict itself to relevant actions; all applicable actions are considered. Consider, for
example, an air cargo problem with 10 airports, where each airport has 5 planes and 20 pieces of cargo.
The goal is to move all the cargo at airport A to airport B. There is a simple solution to the problem:
load the 20 pieces of cargo into one of the planes at A, fly the plane to B, and unload the cargo. But
finding the solution can be difficult because the average branching factor is huge: each of the 50 planes
can fly to 9 other airports, and each of the 200 packages can either be unloaded (if it is loaded) or
loaded into any plane at its airport (if it is unloaded).
On average, say there are about 1000 possible actions per state, so the search tree up to the depth of
the obvious 41-step solution (20 loads, 1 flight, 20 unloads) has about 1000^41 nodes. It is thus clear
that a very accurate heuristic is needed to make this kind of search efficient.
Backward (regression) search, in contrast, requires computing a description of the possible
predecessors of the set of goal states. The STRIPS representation makes this quite easy, because sets
of states can be described by the literals that must be true in those states.
The main advantage of backward search is that it allows us to consider only relevant actions. An action
is relevant to a conjunctive goal if it achieves one of the conjuncts of the goal. For example, the goal in
our 10-airport air cargo problem is to have all 20 pieces of cargo at airport B, or more precisely,
At(C1, B) ∧ At(C2, B) ∧ ... ∧ At(C20, B).
We may note that there are many irrelevant actions that can also lead to a goal state. For example,
we can fly an empty plane from Mumbai to Chennai; this action reaches a goal state from a
predecessor state in which the plane is at Mumbai and all the goal conjuncts are already satisfied. A
backward search that allows irrelevant actions will still be complete, but it will be much less
efficient; if a solution exists, it will also be found by a backward search that allows only relevant actions.
This restriction to relevant actions means that backward search often has a much lower branching
factor than forward search. For example, our air cargo problem has about 1000 actions leading forward
from the initial state, but only 20 actions working backward from the goal (one unload action per piece
of cargo). Hence backward search is more efficient than forward search.
Searching backwards is also called regression planning. The principal question in regression planning
is: what are the states from which applying a given action leads to the goal? Computing the description
of these states is called regressing the goal through the action. To see how it works, once again
consider the air cargo example. We have the goal
At(C1, B) ∧ At(C2, B) ∧ ... ∧ At(C20, B).
The conjunct At(C1, B) can be achieved by the relevant action Unload(C1, p, B) for some plane p. In
contrast, the action Load(C2, p) would not be consistent with the current goal, because it would negate
the literal At(C2, B) (verify).
Given these definitions of relevance and consistency, we can now describe the general process of
constructing predecessors for backward search. Given a goal description G, let A be an action that is
relevant and consistent. The regressed (predecessor) goal description is then
G' = (G − ADD(A)) ∪ Precond(A),
that is, the positive effects of A (which A itself achieves) are removed from the goal, and the
preconditions of A are added.
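A minimal Python sketch of this regression step, reusing the hypothetical Action record from the
forward-search sketch above:

def relevant(goal, action):
    # the action achieves at least one conjunct of the goal...
    return bool(action.add_list & goal)

def consistent(goal, action):
    # ...and negates (deletes) none of the goal conjuncts
    return not (action.del_list & goal)

def regress(goal, action):
    """Predecessor description: G' = (G - ADD(A)) | Precond(A)."""
    assert relevant(goal, action) and consistent(goal, action)
    return (goal - action.add_list) | action.precond

# Hypothetical usage: regressing the goal At(C1,B) through Unload(C1,P1,B)
unload = Action("Unload(C1,P1,B)", {"In(C1,P1)", "At(P1,B)"}, {"At(C1,B)"}, {"In(C1,P1)"})
print(regress({"At(C1,B)"}, unload))   # {'In(C1,P1)', 'At(P1,B)'}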
Planning Graphs
A planning graph is a layered graph that alternates between levels of literals and levels of actions:
the first literal level holds the literals of the initial state, the first action level holds the actions
applicable there, and so on. Mutual exclusion (mutex) links record pairs of actions or pairs of literals
that cannot hold together at the same level; a small code sketch of the action-mutex tests follows the
lists below.
Actions
Two actions at the same level are mutex if any of the following holds:
◮ Inconsistent effects: one action negates an effect of the other.
◮ Interference: an effect of one action negates a precondition of the other action.
◮ Competing needs: a precondition of one action is mutually exclusive with a
precondition of the other action.
Literals
Two literals at the same level are mutex if either of the following holds:
◮ one literal is the negation of the other one.
◮ Inconsistent support: every pair of actions achieving the two literals is mutually
exclusive.
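The action-mutex tests can be written down almost verbatim. A minimal Python sketch, again over
the hypothetical Action record; a '~' prefix marks a negated literal, and the competing-needs test is
simplified here to direct negation between preconditions:

def negate(lit):
    # literal negation via a '~' prefix (a convention assumed by this sketch)
    return lit[1:] if lit.startswith("~") else "~" + lit

def actions_mutex(a1, a2):
    # Inconsistent effects: one action deletes (negates) an effect of the other
    if (a1.del_list & a2.add_list) or (a2.del_list & a1.add_list):
        return True
    # Interference: a delete effect of one action negates a precondition of the other
    if (a1.del_list & a2.precond) or (a2.del_list & a1.precond):
        return True
    # Competing needs: a precondition of one is mutually exclusive with a
    # precondition of the other (approximated here by direct negation)
    return any(negate(p) in a2.precond for p in a1.precond)

# Hypothetical usage: Fly(P1,A,B) interferes with Load(C1,P1,A),
# since flying deletes At(P1,A), a precondition of the load.
fly = Action("Fly(P1,A,B)", {"At(P1,A)"}, {"At(P1,B)"}, {"At(P1,A)"})
load = Action("Load(C1,P1,A)", {"At(C1,A)", "At(P1,A)"}, {"In(C1,P1)"}, {"At(C1,A)"})
print(actions_mutex(fly, load))   # True (interference)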
Analysis of Planning Approaches
Planning is an area of great current interest within AI. One reason for this is that it combines
the two major areas of AI we have covered so far: search and logic. That is, a planner can be seen
either as a program that searches for a solution or as one that (constructively) proves the existence of a
solution. The cross-fertilization of ideas from the two areas has led both to improvements in
performance amounting to several orders of magnitude in the last decade and to an increased use of
planners in industrial applications. Unfortunately, we do not yet have a clear understanding of which
techniques work best on which kinds of problems. Quite possibly, new techniques will emerge that
dominate existing methods. Planning is foremost an exercise in controlling combinatorial explosion. If
there are p primitive propositions in a domain, then there are 2^p states.
For complex domains, p can grow quite large. Consider that objects in the domain have
properties (Location, Color, etc.) and relations (At, On, Between, etc.). With d objects in a domain
with ternary relations, we get 2^(d^3) states. We might conclude that, in the worst case, planning is
hopeless. Against such pessimism, the divide-and-conquer approach can be a powerful weapon. In the
best case (full decomposability of the problem), divide-and-conquer offers an exponential speedup.
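A throwaway Python check makes these counts concrete (the values of p and d below are arbitrary):

# If a domain has p primitive propositions, it has 2**p states.
p = 30
print(2 ** p)                     # 1073741824: about 10**9 states from only 30 propositions

# With d objects and one ternary (arity-3) relation, there are d**3 ground
# propositions, hence 2**(d**3) states.
d = 10
print(d ** 3)                     # 1000 ground propositions
print(len(str(2 ** (d ** 3))))   # 302: 2**1000 has 302 decimal digits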
Planning and Acting in the Real World: Time, Schedules, and Resources
♦ So far we have looked only at classical planning, i.e. environments that are fully observable, static,
and deterministic. We also assumed that action descriptions are correct and complete.
♦ This is unrealistic in many real-world applications: we don't know everything, and may even hold
incorrect information; actions can go wrong.
♦ Distinction: bounded vs. unbounded indeterminacy: can the possible preconditions and effects be
listed at all? Unbounded indeterminacy is related to the qualification problem.
♦ The real world includes temporal and resource constraints:
• classical planning talks about what to do, and in what order, but cannot talk about time (how long
an action takes);
• an airline has a limited number of staff, and staff who are on one flight cannot be on another at the
same time.
Critical path method: used to determine the possible start and end times of each action. The critical
path is the path whose total duration is longest. Each action has a window [ES, LS] of possible start
times, where ES is the earliest possible start time and LS the latest possible start time. Given actions
A and B with A ≺ B (A must precede B):
ES(Start) = 0
ES(B) = max over A ≺ B of [ES(A) + Duration(A)]
LS(Finish) = ES(Finish)
LS(A) = min over B ≻ A of [LS(B) − Duration(A)]
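A minimal Python sketch of these recursions over a hypothetical toy action network (the action names
and durations are made up); actions whose ES equals their LS lie on the critical path:

# Hypothetical toy network: each action's duration and its required predecessors.
duration = {"Start": 0, "AddEngine": 30, "AddWheels": 20, "Inspect": 10, "Finish": 0}
preds = {"Start": set(), "AddEngine": {"Start"}, "AddWheels": {"Start"},
         "Inspect": {"AddEngine", "AddWheels"}, "Finish": {"Inspect"}}
succs = {a: {b for b in preds if a in preds[b]} for a in preds}

def es(a):
    # ES(Start) = 0; ES(B) = max over A < B of [ES(A) + Duration(A)]
    return max((es(p) + duration[p] for p in preds[a]), default=0)

def ls(a):
    # LS(Finish) = ES(Finish); LS(A) = min over B > A of [LS(B) - Duration(A)]
    if not succs[a]:
        return es(a)
    return min(ls(b) for b in succs[a]) - duration[a]

for a in duration:
    mark = "  <- on the critical path" if es(a) == ls(a) else ""
    print(f"{a}: ES={es(a)}, LS={ls(a)}{mark}")
# Start, AddEngine, Inspect, and Finish form the critical path (total 40);
# AddWheels has slack: it may start anywhere in [0, 10].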
Hierarchical Planning
Principle
hierarchical organization of 'actions'
complex and less complex (or: abstract) actions
the lowest level reflects directly executable actions
Procedure
planning starts with the complex action on top
the plan is constructed through action decomposition: substitute each complex action with a plan of
less complex actions (using pre-defined plan schemata, or learning of plans/plan abstraction)
the overall plan must generate the effect of the complex action; a minimal decomposition sketch follows
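A minimal Python sketch of the decomposition procedure, assuming a hypothetical table of
pre-defined plan schemata; decomposition bottoms out at directly executable actions:

# Hypothetical schema table: complex action -> pre-defined plan (its refinement).
schemata = {
    "BuildHouse": ["GetPermit", "Construct"],
    "Construct": ["LayFoundation", "BuildWalls", "AddRoof"],
}

def decompose(action):
    """Recursively substitute complex actions until only directly
    executable (lowest-level) actions remain."""
    if action not in schemata:          # directly executable action
        return [action]
    plan = []
    for sub in schemata[action]:
        plan.extend(decompose(sub))
    return plan

print(decompose("BuildHouse"))
# -> ['GetPermit', 'LayFoundation', 'BuildWalls', 'AddRoof']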
Multiagent planning
Between the purely single-agent and the truly multiagent cases lies a wide spectrum of problems that
exhibit various degrees of decomposition of the monolithic agent:
Multieffector planning: managing each effector while handling positive and negative interactions among
the effectors.
Multibody planning: the effectors are physically decoupled into detached units, as in a fleet of delivery
robots in a factory.
Decentralized planning: multiple reconnaissance robots covering a wide area may often be out of radio
contact with each other and should share their findings during times when communication is feasible.
Assume perfect synchronization: each action takes the same amount of time, and the actions at each
point in the joint plan are simultaneous. A joint action is ⟨a1, ..., an⟩, where ai is the action taken by
the i-th actor (a small example follows below).
Loosely coupled problems: the aim is to decouple the actors to the extent possible, so that the
complexity of the problem grows linearly with n rather than exponentially.
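A tiny Python illustration of a joint plan under the perfect-synchronization assumption (the actor and
action names are hypothetical):

# A joint action <a1, ..., an> is modeled as a tuple: component i belongs to actor i.
joint_plan = [
    ("Go(A, RightBaseline)", "Go(B, Net)"),  # step 1: both actors move at once
    ("Hit(A, Ball)",         "NoOp(B)"),     # step 2: actor B explicitly does nothing
]

for t, joint_action in enumerate(joint_plan, start=1):
    # perfect synchronization: every component of a step happens simultaneously
    print(f"t={t}:", " || ".join(joint_action))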
https://nptel.ac.in/courses/106/105/106105077/21
https://nptel.ac.in/courses/106/105/106105077/22
https://nptel.ac.in/courses/106/105/106105077/23
https://nptel.ac.in/courses/106/105/106105077/24
https://nptel.ac.in/courses/106/105/106105077/25
https://nptel.ac.in/courses/106/105/106105077/26
https://nptel.ac.in/courses/106/105/106105077/27
https://nptel.ac.in/courses/106/105/106105077/28
UNIVERSITY QUESTIONS
Objective Questions
Which instruments are used for perceiving and acting upon the environment?
a) Sensors and Actuators
b) Sensors
c) Perceiver
d) None of the mentioned.
Answer: a
A problem-solving agent is one kind of goal-based agent, in which the agent
decides what to do by finding sequences of actions that lead to desirable
states. Once the agent understands the definition of the problem, it is relatively
straightforward to construct a search process for finding solutions, which
implies that a problem-solving agent should be an intelligent agent that
maximizes the performance measure.
Human Agent: A human agent has eyes, ears, and other organs for sensors and hands,
legs, mouth and other body parts for actuators.
Robotic Agent: A robotic agent has cameras and infrared range finders for the
sensors and various motors for the actuators.
Software Agent: A software agent has encoded bit strings as its percepts and
actions.
Internally, the agent function for an artificial agent will be implemented by an agent
program. An agent program is a function that implements the agent mapping from
percepts to actions. It is a concrete implementation, running on the agent architecture.
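As a concrete illustration, here is a minimal sketch of an agent program as a mapping from percept
sequences to actions, written as a hypothetical table-driven agent (the percepts and table below are
invented for a two-cell vacuum world):

def make_table_driven_agent(table):
    """Returns an agent program: a function from the percept sequence
    observed so far to an action, looked up in a pre-built table."""
    percepts = []                        # the agent's percept history
    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")
    return program

# Hypothetical two-cell vacuum world: percepts are (location, status) pairs.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))   # Right
print(agent(("B", "Dirty")))   # Suck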
An agent should act as a rational agent. A rational agent is one that does the right thing;
that is, it takes the right actions, which cause the agent to be most successful in the environment.
Performance measures
Rationality
Blooms Taxonomy
1) Define Searching?
Searching is the process of finding a particular goal node, starting from the root node as the
initial node.
2) Explain Depth First Search (DFS).
The depth first search (DFS) algorithm starts with the initial node of the graph G and
goes deeper and deeper until it finds the goal node or a node which has
no children. The algorithm then backtracks from the dead end towards the most
recent node that is not yet completely explored.
The data structure used in DFS is a stack. The process is similar to the
BFS algorithm. In DFS, the edges that lead to an unvisited node are called
discovery edges, while the edges that lead to an already visited node are called
block edges.
Algorithm (a Python transcription is given after the steps)
o Step 1: SET STATUS = 1 (ready state) for each node in G
o Step 2: Push the starting node A on the stack and set its STATUS = 2
(waiting state)
o Step 3: Repeat Steps 4 and 5 until STACK is empty
o Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed
state)
o Step 5: Push on the stack all the neighbours of N that are in the ready state
(whose STATUS = 1) and set their
STATUS = 2 (waiting state)
[END OF LOOP]
o Step 6: EXIT
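A direct Python transcription of the steps above, for a hypothetical adjacency-list graph; STATUS
values 1, 2, and 3 mirror the ready, waiting, and processed states:

READY, WAITING, PROCESSED = 1, 2, 3

def dfs(graph, start):
    """Stack-based DFS over an adjacency-list graph {node: [neighbours]}."""
    status = {node: READY for node in graph}      # Step 1: all nodes ready
    stack = [start]
    status[start] = WAITING                       # Step 2: push starting node
    order = []
    while stack:                                  # Step 3: loop until empty
        n = stack.pop()                           # Step 4: pop and process N
        status[n] = PROCESSED
        order.append(n)
        for m in graph[n]:                        # Step 5: push ready neighbours
            if status[m] == READY:
                stack.append(m)
                status[m] = WAITING
    return order                                  # Step 6: exit

# Hypothetical usage:
g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(dfs(g, "A"))   # e.g. ['A', 'C', 'D', 'B']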
Blooms Taxonomy
Topic: AI
Based on capabilities, AI can be divided into the following types:
1. Narrow AI:
o Narrow AI is a type of AI which is able to perform a dedicated task with intelligence; it is
the most common and currently available kind of AI.
o IBM's Watson supercomputer also comes under narrow AI, as it uses an expert
system approach combined with machine learning and natural language processing.
o Some examples of narrow AI are playing chess, purchasing suggestions on e-commerce
sites, self-driving cars, speech recognition, and image recognition.
2. General AI:
o General AI is a type of intelligence which could perform any intellectual task as
efficiently as a human.
o The idea behind general AI is to make a system which could be smarter and
think like a human on its own.
o Currently, no system exists that comes under general AI and can
perform any task as well as a human.
o Researchers worldwide are now focused on developing machines with general AI.
o Systems with general AI are still under research, and it will take a great deal of effort and
time to develop such systems.
3. Super AI:
o Super AI is a level of intelligence of systems at which machines could surpass
human intelligence and perform any task better than a human, with cognitive
properties. It is an outcome of general AI.
o Some key characteristics of super AI include the ability to think, to
reason, solve puzzles, make judgments, plan, learn, and communicate on its own.
o Super AI is still a hypothetical concept of artificial intelligence; developing such
systems in the real world remains a world-changing task.
Based on functionality, AI can be divided into the following types:
1. Reactive Machines
o Purely reactive machines are the most basic type of artificial intelligence.
o Such AI systems do not store memories or past experiences for future actions.
o These machines focus only on current scenarios and react to them with the best possible
action.
o IBM's Deep Blue system is an example of a reactive machine.
o Google's AlphaGo is also an example of a reactive machine.
2. Limited Memory
o Limited memory machines can store past experiences or some data for a short
period of time.
o These machines can use stored data for a limited time period only.
o Self-driving cars are one of the best examples of Limited Memory systems. These
cars can store recent speed of nearby cars, the distance of other cars, speed limit,
and other information to navigate the road.
3. Theory of Mind
o Theory of Mind AI should understand human emotions, people, and beliefs, and be
able to interact socially like humans.
o This type of AI machine has not yet been developed, but researchers are making lots of
efforts and improvements towards developing such AI machines.
4. Self-Awareness
o Self-awareness AI is the future of Artificial Intelligence. These machines will be super
intelligent, and will have their own consciousness, sentiments, and self-awareness.
o These machines will be smarter than the human mind.
o Self-awareness AI does not yet exist in reality; it is still a hypothetical concept.