
21CS502 ARTIFICIAL INTELLIGENCE

UNIT II
PROBLEM SOLVING

Heuristic search strategies - Heuristic functions - Game Playing - Minimax Algorithm -
Optimal decisions in games - Alpha-beta search - Monte-Carlo search for games - Constraint
satisfaction problems - Constraint propagation - Backtracking search for CSP - Local search for
CSP - Structure of CSP

Heuristic search strategies

 Informed search (heuristic search) strategies use problem-specific knowledge beyond
the definition of the problem itself, and can therefore find solutions more efficiently than an
uninformed strategy.
 The general approach is called best-first search, in which a node is selected
for expansion based on an evaluation function, f(n). The evaluation function is
construed as a cost estimate, so the node with the lowest
evaluation is expanded first.
 Most best-first algorithms include as a component of f a heuristic function, denoted
h(n):

h(n) = estimated cost of the cheapest path from the state at node n to a goal state.

The various Heuristic Search Strategies are:

1. Greedy Best First Search


2. A* Search

(1) Greedy Best first search:

 Greedy best-first search tries to expand the node that is closest to the goal.

 This is likely to lead to a solution quickly.

 Thus, it evaluates nodes by using just the heuristic function, h(n).

Evaluation Function: f(n) = h(n)

 h(n) estimates how close node 'n' is to the goal.


Greedy Best First Search Algorithm:

Step 1: Place the starting node into the OPEN list.

Step 2: If the OPEN list is empty, Stop and return failure.

Step 3: Remove the node n with the lowest value of h(n) from the OPEN list and place it in
the CLOSED list.

Step 4: Expand the node n and generate the successors of node n.

Step 5: Check each successor of node n to find whether any of them is a goal node. If any
successor is a goal node, then return success and terminate the search; otherwise
proceed to Step 6.

Step 6: For each successor node, the algorithm computes the evaluation function f(n) and
checks whether the node is already in the OPEN or CLOSED list. If it is in neither list,
add it to the OPEN list.

Step 7: Return to Step 2.

( Open List – contains Nodes to be Explored, Closed List – contains Nodes already Explored)
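A minimal Python sketch of the steps above (an illustrative addition, not part of the original notes; neighbors(n) is assumed to return the successors of node n, and for simplicity the goal test is applied when a node is removed from the OPEN list):

    import heapq

    def greedy_best_first_search(start, goal, neighbors, h):
        """Greedy best-first search: always expand the node with the lowest h(n)."""
        open_list = [(h(start), start)]      # OPEN list as a priority queue ordered by h(n)
        closed_list = set()                  # CLOSED list: nodes already explored
        parent = {start: None}

        while open_list:                     # Step 2: stop with failure if OPEN is empty
            _, n = heapq.heappop(open_list)  # Step 3: remove the node with the lowest h(n)
            if n == goal:                    # goal found: reconstruct the path
                path = []
                while n is not None:
                    path.append(n)
                    n = parent[n]
                return list(reversed(path))
            closed_list.add(n)
            for s in neighbors(n):           # Step 4: generate the successors of n
                if s not in closed_list and s not in parent:  # Step 6: skip nodes already seen
                    parent[s] = n
                    heapq.heappush(open_list, (h(s), s))      # add new successors to OPEN
        return None                          # failure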

Advantages:

 Best-first search can switch between BFS-like and DFS-like behaviour, thereby gaining
the advantages of both algorithms.
 This algorithm is generally more efficient than uninformed BFS and DFS.

Disadvantages:

 It can behave as an unguided depth-first search in the worst case scenario.


 Like DFS, it can get stuck in loops.
 This algorithm is not optimal.

Performance Evaluation of Greedy Best First Search:

1. Completeness: No, greedy best-first search is not complete even in a finite state
space. Like depth-first search, it can get stuck in loops.
2. Optimality: No, it is not guaranteed to return the lowest-cost solution.
3. Time Complexity: O(b^m) in the worst case, but a good heuristic can give dramatic
improvement (m is the maximum depth of the search space).
4. Space Complexity: O(b^m), since it keeps all nodes in memory (m is the maximum depth of the search space).
(2) A* Search Algorithm:

 The most widely known form of best-first search is called A* search

Evaluation Function f(n) = g(n) + h(n)

where,

f(n) is the estimated cost of the cheapest solution through node n.

g(n) – actual cost to reach node n from start node.

h(n) – estimated cost to reach goal node from node n.

 It evaluates nodes by combining g(n)-the actual path cost to reach the node, and h(n) -

the estimated cost to get from the node to the goal.

 A* algorithm is similar to Uniform Cost Search except that it uses g(n)+h(n) instead of
g(n).
 In the A* search algorithm, we use the search heuristic as well as the cost to reach the node.
Hence we combine both costs as above, and this sum is called the fitness number.

Algorithm of A* search:

Step1: Place the starting node in the OPEN list.

Step 2: Check whether the OPEN list is empty or not; if the list is empty then return failure
and stop.

Step 3: Select the node n from the OPEN list which has the smallest value of the evaluation
function (g + h). If node n is the goal node, then return success and stop; otherwise proceed to Step 4.

Step 4: Expand node n and generate all of its successors, and put n into the CLOSED list. For
each successor n', check whether n' is already in the OPEN or CLOSED list; if not, then compute
the evaluation function for n' and place it into the OPEN list.

Step 5: Else, if node n' is already in the OPEN or CLOSED list, then it should be attached to the back
pointer which reflects the lowest g(n') value.

Step 6: Return to Step 2.

( Open List – contains Nodes to be Explored, Closed List – contains Nodes already Explored)
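A minimal Python sketch of the algorithm above (an illustrative addition; neighbors(n) is assumed to return (successor, step-cost) pairs). The short usage fragment at the end uses road distances and straight-line-distance values from the standard Romania map that appears in the worked example later in this section, and returns the 418-cost route through Pitesti.

    import heapq

    def a_star_search(start, goal, neighbors, h):
        """A* search: always expand the node with the lowest f(n) = g(n) + h(n)."""
        open_list = [(h(start), 0, start)]       # OPEN list entries: (f, g, node)
        closed_list = set()                      # CLOSED list: nodes already expanded
        g_cost = {start: 0}                      # best known g(n) for each node
        parent = {start: None}

        while open_list:                         # Step 2: fail if OPEN is empty
            f, g, n = heapq.heappop(open_list)   # Step 3: node with smallest g + h
            if g > g_cost.get(n, float("inf")):  # stale queue entry with an outdated g value
                continue
            if n == goal:                        # goal reached: reconstruct the path
                path = []
                while n is not None:
                    path.append(n)
                    n = parent[n]
                return list(reversed(path)), g
            closed_list.add(n)
            for s, step_cost in neighbors(n):    # Step 4: successors with their step costs
                new_g = g + step_cost
                if new_g < g_cost.get(s, float("inf")):  # Step 5: keep the lowest g(n')
                    g_cost[s] = new_g
                    parent[s] = n
                    heapq.heappush(open_list, (new_g + h(s), new_g, s))
        return None, float("inf")                # failure

    # Illustration on the Romania-map fragment used in the example below
    # (road distances and straight-line-distance heuristic values as in AIMA):
    graph = {
        "Arad": [("Sibiu", 140)],
        "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
        "Fagaras": [("Bucharest", 211)],
        "Rimnicu Vilcea": [("Pitesti", 97)],
        "Pitesti": [("Bucharest", 101)],
        "Bucharest": [],
    }
    sld = {"Arad": 366, "Sibiu": 253, "Fagaras": 176,
           "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

    path, cost = a_star_search("Arad", "Bucharest", graph.get, sld.get)
    # path -> ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], cost -> 418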
Advantages:

o A* search generally performs better than other search algorithms for finding least-cost paths.


o A* search algorithm is optimal and complete.
o This algorithm can solve very complex problems.

Disadvantages:

o If the heuristic is not admissible, it may not produce the shortest path, since it relies on
heuristic estimates and approximation.
o A* search algorithm has some complexity issues.
o The main drawback of A* is memory requirement as it keeps all generated nodes in the
memory, so it is not practical for various large-scale problems.

Performance Evaluation of A* Algorithm:

1. Completeness: Yes, A* is complete (unless there are infinitely many nodes to explore in the
search space).
2. Optimality: Yes, it is guaranteed to find a shortest path if one exists, provided the heuristic is admissible.
3. Time Complexity: O(b^d), where b is the branching factor and d is the depth of the shallowest solution.
4. Space Complexity: O(b^d), since all generated nodes are kept in memory.

 It is important to note that the optimality of A* relies on the admissibility of the heuristic
function.
 An admissible heuristic never overestimates the true cost to reach the goal from any given
node. If the heuristic is not admissible, A* may return a suboptimal solution.
 In summary, A* is a complete and optimal path-finding algorithm when used with an
admissible heuristic.
Example: Find a route from Arad to Bucharest using Greedy Best-First Search and A* Search for
the following map.
GREEDY BEST FIRST SEARCH

Nodes are labeled with their h-values.


A* SEARCH

Nodes are labeled with f = g + h

Comparing all leaf nodes, the f-value of 'Sibiu' is lower than the others. So, Sibiu is expanded next,

and the f-values of the other nodes are kept in memory.

Comparing all leaf nodes, the f-value of 'Rimnicu Vilcea' is lower than the others. So, Rimnicu

Vilcea is expanded next, and the f-values of the other nodes are kept in memory.

Comparing all leaf nodes, the f-value of 'Fagaras' is lower than the others, so there may be an

optimal path to the goal node via 'Fagaras'. Fagaras is expanded next, and the f-values of the

other nodes are kept in memory.


Though a path to Bucharest (the goal node) exists via Fagaras with an estimated cost of

450, we have to check whether a cheaper path exists. So, the leaf node

Pitesti is expanded next, as it has the least f-value.

Here the Goal node Bucharest is reached in two possible ways:

Arad -> Sibiu -> Fagaras->Bucharest

 Estimated Cost of cheapest solution is 450

Arad ->Sibiu -> Rimnicu Vilcea ->Pitesti-> Bucharest

 Estimated Cost of cheapest solution is 418

Therefore, the Optimal path from Arad to Bucharest is

Arad ->Sibiu -> Rimnicu Vilcea ->Pitesti-> Bucharest


MEMORY BOUNDED HEURISTIC SEARCH

• The simplest way to reduce memory requirements for A∗ is the iterative-deepening A∗


(IDA∗) algorithm.

• The main difference between IDA∗ and standard iterative deepening is that the cutoff
used is the f -cost (g + h) rather than the depth; at each iteration, the cutoff value is
the smallest f -cost of any node that exceeded the cutoff on the previous iteration
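A minimal sketch of IDA* (an illustrative addition; neighbors(n) is assumed to return (successor, step-cost) pairs, as in the A* sketch earlier):

    def ida_star(start, goal, neighbors, h):
        """Iterative-deepening A*: depth-first search bounded by an f-cost cutoff."""

        def dfs(node, g, cutoff, path):
            f = g + h(node)
            if f > cutoff:
                return f, None                    # report the f-cost that exceeded the cutoff
            if node == goal:
                return f, path                    # solution path found
            next_cutoff = float("inf")
            for s, step_cost in neighbors(node):
                if s in path:                     # avoid cycles along the current path
                    continue
                t, found = dfs(s, g + step_cost, cutoff, path + [s])
                if found is not None:
                    return t, found
                next_cutoff = min(next_cutoff, t)
            return next_cutoff, None

        cutoff = h(start)                          # first cutoff is f(start) = h(start)
        while True:
            t, found = dfs(start, 0, cutoff, [start])
            if found is not None:
                return found
            if t == float("inf"):
                return None                        # no solution exists
            cutoff = t                             # smallest f-cost that exceeded the old cutoff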

Memory Bounded Algorithms:

• RECURSIVE BEST FIRST SEARCH

• MA* (Memory-bounded A*)

PERFORMANCE EVALUATION - RBFS

 RBFS is an optimal algorithm if the heuristic function h(n) is admissible. Its space
complexity is linear in the depth of the deepest optimal solution
 But its time complexity is rather difficult to characterize: it depends both on the
accuracy of the heuristic function and on how often the best path changes as nodes
are expanded.

DIFFERENCE BETWEEN RBFS AND IDA*

IDA*: Between iterations, IDA* retains only a single number: the current f-cost limit.

RBFS: RBFS retains more information in memory, but it uses only linear space: even if more
memory were available, RBFS has no way to make use of it.

• IDA∗ and RBFS suffer from using too little memory.

• Because they forget most of what they have done, both algorithms may end up re-
expanding the same states many times over. Furthermore, they suffer the potentially
exponential increase in complexity associated with redundant paths in graphs

MEMORY-BOUNDED A* ALGORITHMS

In order to make use of all available memory, two algorithms were devised:

• Memory-bounded A* (MA*) algorithm

• Simplified memory-bounded A* (SMA*) algorithm
SIMPLIFIED MEMORY A * ALGORITHM

• SMA∗ proceeds just like A∗, expanding the best leaf until memory is full.

• At this point, it cannot add a new node to the search tree without dropping an old one.

• SMA∗ always drops the worst leaf node—the one with the highest f -value.

• Like RBFS, SMA∗ then backs up the value of the forgotten node to its parent. In this
way, the ancestor of a forgotten subtree knows the quality of the best path in that
subtree.

• SMA∗ regenerates the subtree only when all other paths have been shown to look worse
than the path it has forgotten.

• To avoid selecting the same node for deletion and expansion, SMA∗ expands the newest
best leaf and deletes the oldest worst leaf.

• If the leaf is not a goal node, then even if it is on an optimal solution path, that
solution is not reachable with the available memory. Therefore, the node can be
discarded exactly as if it had no successors.

PERFORMANCE EVALUATION

 SMA∗ is complete if there is any reachable solution—that is, if d, the depth of the
shallowest goal node, is less than the memory size (expressed in nodes).
 It is optimal if any optimal solution is reachable; otherwise, it returns the best
reachable solution.

ADVANTAGE

 SMA∗ is a fairly robust choice for finding optimal solutions, particularly when the
state space is a graph, step costs are not uniform, and node generation is expensive
compared to the overhead of maintaining the frontier and the explored set.
HEURISTIC FUNCTIONS

 A heuristic function estimates the approximate cost of solving a task.


 Example : Determining the shortest driving distance to a particular location.
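For instance, the straight-line (Euclidean) distance to the goal location is a commonly used heuristic of this kind. A small sketch (the location names and coordinates below are illustrative assumptions):

    import math

    # Hypothetical (x, y) coordinates for a few locations; any consistent units work.
    coords = {"A": (0, 0), "B": (3, 4), "Goal": (6, 8)}

    def straight_line_distance(n, goal="Goal"):
        """h(n): straight-line distance from n to the goal, a lower bound on the
        true driving distance and hence an admissible estimate of it."""
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x2 - x1, y2 - y1)

    # straight_line_distance("A") -> 10.0, straight_line_distance("B") -> 5.0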

8 PUZZLE PROBLEM SOLUTION USING HEURISTIC FUNCTION

• The 8-puzzle is an example of a heuristic search problem.

• The object of the puzzle is to slide the tiles horizontally or vertically into the empty
space until the configuration matches the goal configuration

• The average cost for a randomly generated 8-puzzle instance is about 22 steps.

• The branching factor is about 3. (When the empty tile is in the middle, there are four
possible moves; when it is in a corner there are two; and when it is along an edge there
are three.)

• This means that an exhaustive search to depth 22 would look at about 3^22 ≈ 3.1 × 10^10
states.

The two commonly used heuristic functions for the 8-puzzle are :

(1) h1 = the number of misplaced tiles.

(2) h2 = the sum of the distances of the tiles from their goal positions.

Example :

Solving the 8-puzzle problem using the heuristic function H = sum of the distances of the tiles
from their goal positions (Manhattan distance).

Heuristic value of the initial state based on Manhattan distance = 1 + 1 + 1 = 3 (tile 4 is one
position away from its goal, tile 5 is one position away from its goal, and tile 8 is one position
away from its goal).
Using the heuristic function, at each step the state with the lowest heuristic value is expanded,
so the solution can be reached quickly.
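A small sketch of the two heuristics h1 and h2 (an illustrative addition; the state representation, a tuple of nine entries read row by row with 0 for the blank, and the sample layouts are assumptions):

    def misplaced_tiles(state, goal):
        """h1: number of tiles (excluding the blank, 0) not in their goal position."""
        return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

    def manhattan_distance(state, goal):
        """h2: sum of horizontal + vertical distances of each tile from its goal position."""
        total = 0
        for idx, tile in enumerate(state):
            if tile == 0:                       # skip the blank
                continue
            goal_idx = goal.index(tile)
            total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
        return total

    goal  = (1, 2, 3, 8, 0, 4, 7, 6, 5)         # an assumed goal layout (0 = blank)
    state = (2, 8, 3, 1, 6, 4, 7, 0, 5)         # an assumed start layout
    # misplaced_tiles(state, goal) -> 4 ; manhattan_distance(state, goal) -> 5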

GAME PLAYING - OPTIMAL DECISION IN GAMES


• Game playing is an abstract and pure form of competition that seems to require
intelligence
• It is easy to represent states and actions
• To implement game playing little knowledge is required
• A game can be formally defined as a kind of search problem with the following
elements:
• S0: The initial state, which specifies how the game is set up at the start.
• PLAYER(s): Defines which player has the move in a state.
• ACTIONS(s): Returns the set of legal moves in a state.
• RESULT (s, a): The transition model, which defines the result of a move.
• TERMINAL-TEST(s): A terminal test, which is true when the game is over and
false otherwise. States where the game has ended are called terminal states.
• UTILITY(s, p): A utility function (also called an objective function or payoff
function), which defines the final numeric value for a game that ends in terminal state
s for a player p. In chess, the outcome is a win, loss, or draw, with values +1, 0,
or 1/2.
 Chess, Tic-Tac-Toe and Checkers are two-player, zero-sum games. These games use the
Minimax algorithm and Alpha-Beta pruning to make optimal decisions in games.
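A minimal Python interface sketch mirroring the formal elements listed above (an illustrative assumption, not part of the original notes); a concrete game such as Tic-Tac-Toe would subclass it and implement each method.

    from abc import ABC, abstractmethod

    class Game(ABC):
        """Abstract two-player game, mirroring the formal elements listed above."""

        @abstractmethod
        def initial_state(self):        # S0: how the game is set up at the start
            ...

        @abstractmethod
        def player(self, s):            # PLAYER(s): which player has the move in state s
            ...

        @abstractmethod
        def actions(self, s):           # ACTIONS(s): the set of legal moves in state s
            ...

        @abstractmethod
        def result(self, s, a):         # RESULT(s, a): the transition model
            ...

        @abstractmethod
        def terminal_test(self, s):     # TERMINAL-TEST(s): True when the game is over
            ...

        @abstractmethod
        def utility(self, s, p):        # UTILITY(s, p): payoff for player p in terminal state s
            ...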
1. MINIMAX ALGORITHM
 The minimax algorithm computes the minimax decision from the current state.
 It uses a simple recursive computation of the minimax values of each successor
state, directly implementing the defining equations.
 The recursion proceeds all the way down to the leaves of the tree, and then the
minimax values are backed up through the tree as the recursion unwinds.

Working of Min-Max Algorithm:

 The working of the minimax algorithm can be easily described using an example.
Below we have taken an example of a game tree representing a two-player game.

 In this example, there are two players: one is called Maximizer and the other is called
Minimizer.

 Maximizer will try to get the maximum possible score, and Minimizer will try to get
the minimum possible score.

 This algorithm applies DFS, so in this game tree we have to go all the way down
to the leaves to reach the terminal nodes.

 At the terminal nodes, the terminal values are given, so we compare those values
and back them up through the tree until the initial state (root) is reached.
Example:
Consider the following game tree, with the MAX node A at the root, MIN nodes B and C below it,
MAX nodes D, E, F, G at the next level, and terminal (leaf) values -1, 4, 2, 6, -3, -5, 0, 7.

Step 1:
Initialize the lowest-level MAX nodes (D, E, F, G) with -∞ and the MIN nodes (B, C) with ∞, then
back up the terminal values level by level as follows.
Step 2:
Node D = max (-∞ , -1) = -1, max(-1,4) = 4 (Update Node D with 4)
Node E = max (-∞ , 2) = 2, max(2,6) = 6 (Update Node E with 6)
Node F = max (-∞ , -3) = -3, max(-3,-5) = -3 (Update Node F with -3)
Node G = max (-∞ , 0) = 0, max(0,7) = 7 (Update Node G with 7)
Step 3:
Node B = min (∞,4) = 4, min(4,6) = 4 (Update Node B with 4)
Node C = min (∞,-3) = -3, min(-3,7) = -3 (Update Node C with -3)

Step 4:
Node A = max (-∞ , 4) = 4, max(4,-3) = 4 (Update Node A with 4)
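The steps above can be reproduced with a short recursive sketch (an illustrative addition; the explicit dictionary representation of the game tree is an assumption):

    # Game tree from the example above: internal nodes map to child names,
    # leaves are terminal utility values.
    tree = {
        "A": ["B", "C"],                          # MAX
        "B": ["D", "E"], "C": ["F", "G"],         # MIN
        "D": [-1, 4], "E": [2, 6], "F": [-3, -5], "G": [0, 7],  # MAX over leaves
    }

    def minimax(node, maximizing):
        """Return the minimax value of `node`, alternating MAX and MIN levels."""
        if not isinstance(node, str):             # leaf: terminal utility value
            return node
        values = [minimax(child, not maximizing) for child in tree[node]]
        return max(values) if maximizing else min(values)

    # minimax("A", True) -> 4, matching Step 4 of the example above.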
Performance Evaluation of Minimax Algorithm:

• Complete - The Min-Max algorithm is complete. It will definitely find a solution (if one
exists) in a finite search tree.

• Optimal - The Min-Max algorithm is optimal if both opponents play optimally.

• Time complexity - As it performs a DFS over the game tree, the time complexity of the
Min-Max algorithm is O(b^m), where b is the branching factor of the game tree and m is
the maximum depth of the tree.

• Space complexity - The space complexity of the Min-Max algorithm is similar to that of DFS,


which is O(bm), i.e. linear in the depth of the tree.

2. ALPHA BETA PRUNING

Alpha-beta pruning returns the same move as minimax, but prunes away branches of the game
tree that cannot possibly influence the final decision. It maintains two parameters: α, the best
(highest) value found so far along the path for MAX, and β, the best (lowest) value found so far
along the path for MIN.

Example: Consider applying alpha-beta pruning to the game tree used in the minimax example
above; a sketch is given below.
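A minimal sketch of alpha-beta pruning (an illustrative addition, reusing the `tree` dictionary from the minimax sketch above):

    def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
        """Minimax with alpha-beta pruning over the `tree` defined in the minimax sketch."""
        if not isinstance(node, str):            # leaf: terminal utility value
            return node
        if maximizing:
            value = float("-inf")
            for child in tree[node]:
                value = max(value, alphabeta(child, False, alpha, beta))
                alpha = max(alpha, value)
                if alpha >= beta:                # beta cutoff: MIN will never allow this branch
                    break
            return value
        else:
            value = float("inf")
            for child in tree[node]:
                value = min(value, alphabeta(child, True, alpha, beta))
                beta = min(beta, value)
                if beta <= alpha:                # alpha cutoff: MAX will never allow this branch
                    break
            return value

    # alphabeta("A", True) -> 4, the same decision as plain minimax, but the
    # subtree under node G is never examined because C's value is already <= alpha.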
CONSTRAINT SATISFACTION PROBLEMS

 A constraint satisfaction problem consists of three components, X,D, and C:

o X is a set of variables, {X1, . . . ,Xn}.

o D is a set of domains, {D1, . . . ,Dn}, one for each variable.

o C is a set of constraints that specify allowable combinations of values.

Examples of Constraint Satisfaction Problems:

1. Map Coloring Problem


2. Crypt Arithmetic Problem

1. MAP COLORING PROBLEM

 Map Coloring Problem is a type of Constraint Satisfaction Problem.


Constraint Graph
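A minimal backtracking sketch for a map-colouring CSP (an illustrative addition; the Australia map instance, its variable names and the three-colour domain below are assumptions chosen for illustration):

    # Variables, domains and constraint graph for an assumed map-colouring instance
    # (the Australian states/territories commonly used to illustrate CSPs).
    variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
    domains = {v: ["red", "green", "blue"] for v in variables}
    neighbours = {                              # edges of the constraint graph
        "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
        "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
        "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
    }

    def consistent(var, value, assignment):
        """Binary constraint: adjacent regions must be coloured differently."""
        return all(assignment.get(n) != value for n in neighbours[var])

    def backtrack(assignment):
        """Plain backtracking search: assign one variable at a time, undo on failure."""
        if len(assignment) == len(variables):
            return assignment                            # every variable assigned
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            if consistent(var, value, assignment):
                result = backtrack({**assignment, var: value})
                if result is not None:
                    return result
        return None                                      # no value works: backtrack

    solution = backtrack({})
    # e.g. {'WA': 'red', 'NT': 'green', 'SA': 'blue', 'Q': 'red',
    #       'NSW': 'green', 'V': 'red', 'T': 'red'}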

-----------------------------------------------------------------------------------------------------------

2. CRYPT ARITHMETIC PROBLEM


 Crypt arithmetic problem is a type of Constraint Satisfaction Problem.

Variables: {A –Z}

Domain : {0 – 9}

Constraints:

 A digit from 0-9 is assigned to each letter.


 Each different letter is assigned a unique digit.
 The sum of the numbers must agree with the arithmetic shown in the problem.
 There should be only one carry forward.
 The leading letter of any number (including the sum) should not be '0'.

Example:
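A minimal brute-force sketch for a cryptarithmetic puzzle (an illustrative addition; the classic SEND + MORE = MONEY instance is assumed here, since the specific puzzle of the example is not stated), enforcing the constraints listed above:

    from itertools import permutations

    def solve_send_more_money():
        """Search over digit assignments satisfying SEND + MORE = MONEY."""
        letters = "SENDMORY"                     # the 8 distinct letters in the puzzle
        for digits in permutations(range(10), len(letters)):
            a = dict(zip(letters, digits))       # each letter gets a unique digit
            if a["S"] == 0 or a["M"] == 0:       # leading letters must not be 0
                continue
            send  = a["S"] * 1000 + a["E"] * 100 + a["N"] * 10 + a["D"]
            more  = a["M"] * 1000 + a["O"] * 100 + a["R"] * 10 + a["E"]
            money = (a["M"] * 10000 + a["O"] * 1000 + a["N"] * 100
                     + a["E"] * 10 + a["Y"])
            if send + more == money:             # the arithmetic constraint
                return a
        return None

    # solve_send_more_money() ->
    # {'S': 9, 'E': 5, 'N': 6, 'D': 7, 'M': 1, 'O': 0, 'R': 8, 'Y': 2}
    # i.e. 9567 + 1085 = 10652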
