
MSAI 348: Intro to Artificial Intelligence


Instructor: Mohammed A. Alam
Intelligence, insofar as its artificial variety goes, seems to be about making decisions or
predictions given a circumstance (loosely, a "problem").

Any decision-making, any prediction, any thought, any utterance…


…can be thought of as a mapping from the problem to a solution from a vast solution space.

Such a mapping requires some operation.

The operation can be a search process. It can be reasoning like we do. It can be direct computation of
the solution. Or something else.

2
One thing seems certain: all these operations have one thing in common.

They all start with representation of whatever needs to be operated on.

Representation can take many forms. It can be words in a language. It can be vectors of values
that represent those words. It can be pixels in an image. It can be digitized soundwaves.

It all starts with representation. Let’s look at one from machine learning.

3
Say, we want our computer to learn about fish and we care only about these features of fish:

Length ― small, medium, large (3 values)


Width ― narrow, wide (2 values)
Shade ― dark, light (2 values)
Dotted ― yes, no (2 values)
Dorsal fin ― yes, no (2 values)
Hazel eyes ― yes, no (2 values)

4
Here’s a machine learning dataset showing examples or instances of fish:

Length Width Shade Dotted Dorsal Fin Hazel Eyes Label

small narrow dark yes no yes salmon

medium narrow dark no no yes trout

small wide light no yes yes bass

large wide dark yes yes no salmon

small narrow dark yes no yes salmon

How many unique instances can there be?

5
Here’s a machine learning dataset showing examples or instances of fish:

Length Width Shade Dotted Dorsal Fin Hazel Eyes Label

small narrow dark yes no yes salmon

medium narrow dark no no yes trout

small wide light no yes yes bass

large wide dark yes yes no salmon

small narrow dark yes no yes salmon

How many unique instances can there be? 3 × 2 × 2 × 2 × 2 × 2 = 96

6
Imagine a table with 96 rows, each a unique instance. The salmon are indicated by a 1.

Length Width Shade Dotted Dorsal Fin Hazel Eyes Label

small narrow dark yes no yes 1

medium narrow dark no no yes 0

small wide light no yes yes 0

large wide dark yes yes no 1

… … … … … … …

large narrow light no yes no 1

Each such table denotes a concept. How many unique concepts can there be?

7
Imagine a table with 96 rows, each a unique instance. The salmon are indicated by a 1.

Length Width Shade Dotted Dorsal Fin Hazel Eyes Label

small narrow dark yes no yes 1

medium narrow dark no no yes 0

small wide light no yes yes 0

large wide dark yes yes no 1

… … … … … … …

large narrow light no yes no 1

Each such table denotes a concept. How many unique concepts can there be? 2^96
Of all those concepts, the table above is the concept of salmon, at least per the given features.

8
Now imagine a table with 2^96 rows, each a unique concept. Allowed concepts are indicated by a 1.

Concept Allow

1 1

2 0

3 1

4 1

… …

2^96 0

Each such table denotes a hypothesis. How many unique hypotheses can there be?

9
Now imagine a table with 2^96 rows, each a unique concept. Allowed concepts are indicated by a 1.

Concept Allow

1 1

2 0

3 1

4 1


… …

2^96 0

Each such table denotes a hypothesis. How many unique hypotheses can there be? 2^(2^96)
This number is larger than the number of atoms in the universe.

10
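As a quick arithmetic check, here are the three counts in Python (a minimal sketch; the feature sizes come from the earlier slide):

    # Values per feature: length has 3; the other five are binary.
    feature_sizes = [3, 2, 2, 2, 2, 2]

    n_instances = 1
    for size in feature_sizes:
        n_instances *= size            # 3 * 2 * 2 * 2 * 2 * 2
    print(n_instances)                 # 96

    # A concept labels each of the 96 instances 0 or 1:
    n_concepts = 2 ** n_instances      # 2^96, a 29-digit number

    # A hypothesis allows or disallows each of the 2^96 concepts:
    # n_hypotheses = 2 ** n_concepts   # 2^(2^96): far too large to ever evaluate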
Now imagine a decision or thought is plucked out of the hypothesis space we just saw.
Imagine a search of that hypothesis space.

If a mere handful of features can spawn such a gigantic number of hypotheses, how much more
gigantic is the hypothesis space we humans search? Or do we?

We’ll answer that question later, but let’s see how algorithmic search can be performed.

11
Search can be done upfront (given complete information and feasibility). That would be akin to
planning followed by execution.
Or actions can be interleaved with search.

Search can occur in a geographical space: think finding a route from point A to point B.

Search can also be fruitful for abstract spaces (e.g., concept spaces).

12
8-Puzzle

Start State    Goal State
5 4 _          1 2 3
6 1 8          8 _ 4
7 3 2          7 6 5


8-Puzzle moves can be represented as a tree.
One possible solution:
1–3–7–14–26–46

14
Breadth-first Search (BFS) – one way to search the tree

First search the breadth of this level

Then this level

Then this one

And so on…

15
Depth-first Search (DFS) – another way to search the tree

First search this node

Then this node

Then this one

And so on…

16
Tree Search

Goal: Find the path from the start state to the goal state
Strategy: Methodically search until the goal is reached

Shared theme of all tree search methods (algorithms):

1. Check node for goal.


2. If goal not found, expand node.

3. Repeat.

Call this the Check-and-expand Routine.

17
States vs. Nodes

A state is a (representation of) a configuration of the world.

A node is a data structure constituting part of a search tree.
It typically includes state, parent node, action, path cost, and depth.

The expand function yields new nodes, either by reading from a map or by using a successor
function to create new nodes and fill in the various fields.

(Figure: a node with a pointer to its parent and to its state, annotated depth = 6, path cost = 7.)

18
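A minimal sketch of such a node as a Python data structure (the field names follow the slide; the successor-function interface is an assumption):

    from dataclasses import dataclass
    from typing import Any, Optional

    @dataclass
    class Node:
        state: Any                       # the configuration of the world
        parent: Optional["Node"] = None  # the node whose expansion created this one
        action: Any = None               # the action that led here from the parent
        path_cost: float = 0.0           # cost of the path from the root
        depth: int = 0                   # number of edges from the root

    def expand(node, successors):
        """Yield child nodes via a successor function returning (action, state, cost) triples."""
        for action, next_state, step_cost in successors(node.state):
            yield Node(state=next_state, parent=node, action=action,
                       path_cost=node.path_cost + step_cost,
                       depth=node.depth + 1)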
Tree Search Algorithm Pseudocode

19
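The pseudocode itself was an image and didn't survive extraction. In its place, here is a minimal Python sketch of the check-and-expand loop described on the next slide, with the fringe discipline left as a parameter (all names are mine):

    def tree_search(start, goal_test, expand, pop):
        """Generic tree search: pop a node, check it, expand it, repeat."""
        fringe = [start]
        while fringe:
            node = pop(fringe)           # which node? the fringe discipline decides
            if goal_test(node):          # 1. check node for goal
                return node
            fringe.extend(expand(node))  # 2. if goal not found, expand node
        return None                      # 3. repeat until the fringe is empty

BFS and DFS then differ only in pop: lambda f: f.pop(0) treats the fringe as a queue, while lambda f: f.pop() treats it as a stack.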
There’s this thing called the fringe (aka frontier).
It’s what stores the nodes slated to be checked.

We check a node for the goal. If we don’t find it, we expand.


We check again. We expand again.
We rinse and repeat.
But which nodes do we check?
Answer: The ones in the fringe.

20
How the fringe is tracked differentiates between BFS and DFS:

BFS uses a queue: FIFO (first in, first out).
Think of people in line at the polls: the first one in will be the first one out.

DFS uses a stack: LIFO (last in, first out).
Think of plates in a stack: the last one going on will be the first one off.

21
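A small illustration of the two disciplines with Python's collections.deque (the values are arbitrary):

    from collections import deque

    fringe = deque([1, 2, 3])

    # Queue (BFS): first in, first out
    fringe.append(4)      # fringe: 1 2 3 4
    fringe.popleft()      # -> 1, the oldest item

    # Stack (DFS): last in, first out
    fringe.append(5)      # fringe: 2 3 4 5
    fringe.pop()          # -> 5, the newest item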
Let’s do BFS. Say, our goal state is what’s there in node 16.
Remember, we’ll use a queue.

We push node 1 into our empty fringe.


We pop our fringe and get 1.
We check 1. We don’t find the goal.
We expand and get 2, 3, 4.
We push them into our fringe.

Fringe now:
???
22
Let’s do BFS. Say, our goal state is what’s there in node 16.
Remember, we’ll use a queue.

We push node 1 into our empty fringe.


We pop our fringe and get 1.
We check 1. We don’t find the goal.
We expand and get 2, 3, 4.
We push them into our fringe.

Fringe now:
2, 3, 4
23
We pop our fringe and get 2.
We check 2. We don’t find the goal.
We expand and get 5.
We push it into our fringe.

Fringe now:
???
24
We pop our fringe and get 2.
We check 2. We don’t find the goal.
We expand and get 5.
We push it into our fringe.

Fringe now:
3, 4, 5
25
We pop our fringe and get 3.
We check 3. We don’t find the goal.
We expand and get 6, 7, 8.
We push them into our fringe.

Fringe now:
???
26
We pop our fringe and get 3.
We check 3. We don’t find the goal.
We expand and get 6, 7, 8.
We push them into our fringe.

Fringe now:
4, 5, 6, 7, 8
27
We pop our fringe and get 4.
We check 4. We don’t find the goal.
We expand and get 9.
We push it into our fringe.

Fringe now:
???
28
We pop our fringe and get 4.
We check 4. We don’t find the goal.
We expand and get 9.
We push it into our fringe.

Fringe now:
5, 6, 7, 8, 9
29
We pop our fringe and get 5.
We check 5. We don’t find the goal.
We expand and get 10, 11.
We push them into our fringe.

Fringe now:
???
30
We pop our fringe and get 5.
We check 5. We don’t find the goal.
We expand and get 10, 11.
We push them into our fringe.

Fringe now:
6, 7, 8, 9, 10, 11
31
After a while, we’re done checking 7.
Note that in the fringe, 7 is followed by 8.

We pop our fringe and get 8.


We check 8. We don’t find the goal.
We expand and get 16, 17.
We push them into our fringe.

Fringe now:
???
32
After a while, we’re done checking 7.
Note that in the fringe, 7 is followed by 8.

We pop our fringe and get 8.


We check 8. We don’t find the goal.
We expand and get 16, 17.
We push them into our fringe.

Fringe now:
9, 10, 11, 12, 13, 14, 15, 16, 17
33
With BFS, we’ve always expanded the shallowest unexpanded node.

After a while more, we check 16.


We find our goal state and end our search.

What remains in the fringe?

34
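Here is the walkthrough replayed in code. The children map encodes only the expansions the slides reveal or imply via the fringe contents; the rest of the tree is unknown, so treat it as an assumption:

    from collections import deque

    children = {1: [2, 3, 4], 2: [5], 3: [6, 7, 8], 4: [9],
                5: [10, 11], 6: [12, 13], 7: [14, 15], 8: [16, 17]}

    def bfs(root, goal):
        fringe = deque([root])             # the queue
        while fringe:
            node = fringe.popleft()        # always check the oldest node
            if node == goal:
                return node
            fringe.extend(children.get(node, []))

    bfs(1, 16)  # checks 1, 2, 3, 4, 5, ... and reaches 16, as in the slides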
Now let’s do DFS. Say, our goal state is again what’s there in node 16.
Remember, we’ll use a stack now.

We push node 1 onto our empty fringe.

We pop our fringe and get 1.


We check 1. We don’t find the goal.
We expand and get 2, 3, 4.
We push them onto our fringe.

Fringe now:
[ 2, 3, 4
Stack bottom

35
Halt!!! Hold your horses!
What do we get if we pop our fringe?

We get 4. At some point, we'll get 3. Later, 2.

So, we end up searching right to left.


How do we search left to right?
We push onto the fringe right to left.
Let’s do that.

Fringe now:
[ 4, 3, 2

36
We pop our fringe and get 2.

We check 2. We don’t find the goal.


We expand and get 5.
We push it onto our fringe.

Fringe now:
[ ???

37
We pop our fringe and get 2.

We check 2. We don’t find the goal.


We expand and get 5.
We push it onto our fringe.

Fringe now:
[ 4, 3, 5

38
We pop our fringe and get 5.

We check 5. We don’t find the goal.


We expand and get 10, 11.
We push them onto our fringe.

Fringe now:
[ ???

39
We pop our fringe and get 5.

We check 5. We don’t find the goal.


We expand and get 10, 11.
We push them onto our fringe.

Fringe now:
[ 4, 3, 11, 10

40
Notice that we’re going down the leftmost branch.
So far, we’ve checked 1, 2, and 5.

We pop our fringe and get 10.

We check 10. We don’t find the goal.


We expand and get 20.
We push it onto our fringe.

Fringe now:
[ ???

41
Notice that we’re going down the leftmost branch.
So far, we’ve checked 1, 2, and 5.

We pop our fringe and get 10.

We check 10. We don’t find the goal.


We expand and get 20.
We push it onto our fringe.

Fringe now:
[ 4, 3, 11, 20

42
We pop our fringe and get 20.

We check 20. We don’t find the goal.


We expand and get 34, 35.
We push them onto our fringe.

Fringe now:
[ ???

43
We pop our fringe and get 20.

We check 20. We don’t find the goal.


We expand and get 34, 35.
We push them onto our fringe.

Fringe now:
[ 4, 3, 11, 35, 34

44
We pop our fringe and get 34.

We check 34. We don’t find the goal.


We CAN’T expand more.

Fringe now:
[ ???

45
We pop our fringe and get 34.

We check 34. We don’t find the goal.


We CAN’T expand more.

Fringe now:
[ 4, 3, 11, 35

46
Notice that we’re done going down the leftmost branch.
We’ve now shifted focus to the right branch below 20.

We pop our fringe and get 35.

We check 35. We don’t find the goal.


We CAN’T expand more here either.

Fringe now:
[ ???

47
Notice that we’re done going down the leftmost branch.
We’ve now shifted focus to the right branch below 20.

We pop our fringe and get 35.

We check 35. We don’t find the goal.


We CAN’T expand more here either.

Fringe now:
[ 4, 3, 11

48
We’ve now searched the entire left branch under 5.
As before, we’ll now shift focus to the right branch under 5.

We pop our fringe and get 11.

We check 11. We don’t find the goal.


We expand and get 21, 22, 23.
We push them onto our fringe.

Fringe now:
[ ???

49
We’ve now searched the entire left branch under 5.
As before, we’ll now shift focus to the right branch under 5.

We pop our fringe and get 11.

We check 11. We don’t find the goal.


We expand and get 21, 22, 23.
We push them onto our fringe.

Fringe now:
[ 4, 3, 23, 22, 21

50
As before, we’ll now go down the left branch under 11, then right.
So, we’re searching left to right.

The main idea is that we search down to the deepest level we can reach on the left before
coming back up to search on the right.

We could choose to do the same from right to left.

51
With DFS, we’ve always expanded the deepest unexpanded node.

After a while more, we check 16.


We find our goal state and end our search.

What remains in the fringe?

52
Why have both BFS and DFS and not just one?
It has to do with memory and computation speed.

As such, we worry about space complexity and time complexity.


BFS and DFS differ in both ways.

We worry about completeness and optimality as well.

53
Our worries, tabulated:

The worry    What it means for BFS and DFS
Space        Max no. of nodes stored in memory at any one time
Time         Approx. no. of nodes generated (not necessarily examined)
Complete?    If a solution exists, the algorithm will find it.
Optimal?     A found solution will be the best/optimal one (often the shallowest).

54
BFS and DFS compared:

The worry    What it means for BFS   What it means for DFS
Space        O(b^(d+1))              O(bm)
Time         O(b^(d+1))              O(b^m)
Complete?    Yes                     No
Optimal?     Yes                     No

Space and time complexities use Big O notation, depicting the worst case.
b = branching factor
d = depth of shallowest solution
m = max depth of tree

55
BFS and DFS compared:

In plain English,
BFS has a high memory requirement.
DFS is neither necessarily complete nor optimal.

So, if we have plenty of memory, do we just go with BFS?


If memory is scant, do we simply go with DFS?
Not quite in either case. Let’s see a couple of examples.

56
A tree for 8-Puzzle captures all possible move sequences.
Only some paths from root to leaf will lead to the goal. We want that path.
We don’t need an exhaustive search of the whole tree. We can use DFS.

A tree of your LinkedIn network captures all your connections, level (degree) by level.
We want to find the shortest path to a connection.
We can do BFS. If we find the connection, we know that’s the shortest path.

57
We want this:
Optimality and completeness of BFS + memory-friendliness of DFS

Two ideas to do that:


1. Depth-limited DFS
2. Iterative Deepening

58
Depth-limited DFS is DFS but with a depth limit of L specified.

Depth-limited DFS characteristics:

The worry    Depth-limited DFS
Space        O(bL)
Time         O(b^L)
Complete?    Not if the solution is deeper than L
Optimal?     No, for the same reason DFS isn't

b = branching factor
L = depth limit
59
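A minimal recursive sketch, reusing a children map like the one in the BFS example (it returns the path if a goal is found within the limit):

    def depth_limited_dfs(node, goal, limit, children):
        """DFS that refuses to expand any node below depth `limit`."""
        if node == goal:
            return [node]
        if limit == 0:
            return None                # cut off: don't expand further
        for child in children.get(node, []):
            path = depth_limited_dfs(child, goal, limit - 1, children)
            if path is not None:
                return [node] + path
        return None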
Iterative Deepening is Depth-limited DFS with iterative increase in depth, i.e., 0, 1, 2, … , ∞.

Blends the benefits of BFS and DFS:


§ Searches in a similar order to BFS (optimal)
§ But has the memory requirements of DFS.

Will find a solution when L is the depth of the shallowest goal.

60
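And Iterative Deepening as a thin wrapper over the depth-limited sketch above (max_depth is just an arbitrary safety cap):

    def iterative_deepening(root, goal, children, max_depth=50):
        """Re-run depth-limited DFS with L = 0, 1, 2, ... until a goal appears."""
        for limit in range(max_depth + 1):
            path = depth_limited_dfs(root, goal, limit, children)
            if path is not None:
                return path            # found at the shallowest possible depth
        return None

Each iteration is a fresh DFS, so memory stays DFS-like while the order in which depths are completed matches BFS.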
Iterative Deepening is wasteful, but that’s not such a bad thing, really.

Cost for L = 0 is 1
Cost for L = 1 is 1 + b
Cost for L = 2 is 1 + b + b^2
Cost for L = 3 is 1 + b + b^2 + b^3

Cost for L = d is 1 + b + b^2 + b^3 + … + b^d

Overall, the cost ends up being O(b^d):

The cost of the repeated lower-depth searches is dominated by the cost at the deepest level.

61
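A quick check of that claim, counting generated nodes with illustrative values of b and d (level L is regenerated once per iteration that reaches it, i.e., d - L + 1 times):

    b, d = 10, 5

    iddfs_nodes = sum((d - L + 1) * b**L for L in range(d + 1))
    single_nodes = sum(b**L for L in range(d + 1))

    print(iddfs_nodes, single_nodes)   # 123456 111111
    print(iddfs_nodes / single_nodes)  # ~1.11: only about 11% extra work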
Iterative Deepening DFS characteristics:

The worry    Iterative Deepening DFS
Space        O(bd)
Time         O(b^d)
Complete? Yes
Optimal? Yes

b = branching factor
d = depth of shallowest solution

62
BFS and DFS are uninformed search strategies.
They use only the information available in the basic problem definition, i.e., the tree of possible states.
They don’t look at the states themselves except to ask: Is this the goal? Have I been here before?

In summary:

The worry    BFS          DFS      Depth-limited DFS   Iterative Deepening
Space        O(b^(d+1))   O(bm)    O(bL)               O(bd)
Time         O(b^(d+1))   O(b^m)   O(b^L)              O(b^d)
Complete?    Yes          No       No                  Yes
Optimal?     Yes          No       No                  Yes

63
Can we do better than uninformed search? Can we make an intelligent (think informed) choice?
We can, with heuristics.

An algorithm is a set of well-defined (unambiguous) steps for solving a problem.


It generally comes to a stop.
It usually aims to achieve optimality, completeness, accuracy, or precision.

A heuristic is a rule of thumb or a set of those.


It usually trades the qualities above (optimality, completeness, accuracy, precision) for speed.
It is expected to give a good enough result in a short time.

64
Two heuristics for 8-Puzzle

1. Tiles out of place (Hamming Distance)
2. Sum of distances out of place (Manhattan Distance)

(Figure: a start state shown beside the goal state, 1 2 3 / 8 _ 4 / 7 6 5, with both heuristic
values worked out for the start state.)

65
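Hedged sketches of the two heuristics, assuming states are 9-tuples read row by row with 0 for the blank, and the goal layout shown above:

    GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)   # 1 2 3 / 8 _ 4 / 7 6 5

    def hamming(state):
        """Tiles out of place (the blank doesn't count)."""
        return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

    def manhattan(state):
        """Sum of each tile's row + column distance from its goal square."""
        total = 0
        for i, tile in enumerate(state):
            if tile == 0:
                continue
            j = GOAL.index(tile)
            total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
        return total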
Best-first Search

Idea: Use an evaluation function f(n) for each node


§ f(n) is an estimate of a node's goodness
§ The seemingly best unexpanded node is expanded first

Implementation: Order the nodes in the fringe in decreasing order of desirability

Special cases:
§ Greedy best-first search
§ A* search

66
Greedy Best-first Search (GBFS)

Idea: Expand the best node in the fringe


Evaluation function: f(n) = h(n)
§ h(n) = estimated cost from n to goal

Just like BFS uses a queue and DFS uses a stack, GBFS uses a priority queue.
A priority queue stays sorted based on the priority of its items, e.g., from smallest to largest distance.

67
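A minimal GBFS sketch using Python's heapq as the priority queue (goal_test, successors, and h are assumed problem-specific callables):

    import heapq
    from itertools import count

    def greedy_best_first(start, goal_test, successors, h):
        """Always check the fringe node with the smallest h(n)."""
        tie = count()              # tiebreaker so equal-h entries never compare states
        fringe = [(h(start), next(tie), start)]
        while fringe:
            _, _, state = heapq.heappop(fringe)
            if goal_test(state):
                return state
            for nxt in successors(state):
                heapq.heappush(fringe, (h(nxt), next(tie), nxt))
        return None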
A* Search

Idea: Avoid expanding paths that are already expensive


Evaluation function: f(n) = g(n) + h(n)
§ g(n) = cost so far to reach n
§ h(n) = estimated cost from n to goal
§ f(n) = estimated total cost of path through n to goal, i.e., the evaluation of the desirability of n

68
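The same skeleton becomes A* once the priority is f(n) = g(n) + h(n). A minimal sketch, assuming successors(state) yields (next_state, step_cost) pairs:

    import heapq
    from itertools import count

    def a_star(start, goal_test, successors, h):
        """Always check the fringe node with the smallest f(n) = g(n) + h(n)."""
        tie = count()
        fringe = [(h(start), next(tie), 0, start, [start])]
        best_g = {start: 0}                     # cheapest known g per state
        while fringe:
            _, _, g, state, path = heapq.heappop(fringe)
            if goal_test(state):
                return path, g                  # solution path and its cost
            for nxt, step in successors(state):
                g2 = g + step
                if g2 < best_g.get(nxt, float("inf")):  # found a cheaper route
                    best_g[nxt] = g2
                    heapq.heappush(fringe,
                                   (g2 + h(nxt), next(tie), g2, nxt, path + [nxt]))
        return None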
A* Search outperforms BFS and DFS for 8-Puzzle

(Figure comparing the searches performed by BFS, DFS, and A* Search on the same 8-Puzzle instance.)

69
In Romania, what are ways to go from Sibiu to Bucharest? Easy to figure out.

§ Sibiu — Fagaras — Bucharest


§ Sibiu — Rimnicu Vilcea — Pitesti — Bucharest
§ Sibiu — Rimnicu Vilcea — Craiova — Pitesti — Bucharest

70
But what about Oradea to Neamt? Not so easy now, is it?

Well, we can use A* Search.

71
How can search handle combinatorial explosion?

Search can utilize heuristics that make the task tractable.


A* uses straight-line distance (aka Euclidean distance) as a heuristic.

Let’s explore the beautiful A* Search algorithm.


The following slides are based on Karl Matthes’ nice article, Pathfinding: A* Search Algorithm.

72
Let’s A* Search through a map.

Say, we want to go from A to L in the map below.


We set up some data structures: a table with one row per vertex (A through L) and columns for
Distance from Start, Heuristic Distance, Sum, and Previous Vertex, plus two sets:

Open = {}
Closed = {}

(Figure: a 4×3 grid of vertices, A B C D on top, E F G H in the middle, I J K L on the bottom;
edges between orthogonal neighbors cost 10 and diagonal edges cost 14. Each vertex is annotated
with its distance from the start, its heuristic distance to L, and their sum.)

73
We add A to the Open set. It’s our current vertex.
We measure A’s distance from the start, which is 0. We record it in the table.
We also record the heuristic distance from A to L, which is 36.1.
We then record the sum, which is also 36.1.

Vertex  Dist. from Start  Heuristic Dist.  Sum   Previous Vertex
A       0                 36.1             36.1

Open = {A}
Closed = {}

74
We now want to look at A’s neighbors, so we add B, E, and F to the Open set.
We record the distance from the start for each node: 10 for B and E, 14 for F.
We record the heuristic distances and the sums.
We record A as each node’s previous vertex. Vertex Distance Heuristic Sum Previous
from Start Distance Vertex
A 0 36.1 36.1
B 10 28.3 38.3 A
C
0 36.1 10 28.3 0 36.1 0 36.1
D
36.1 38.3 36.1 36.1 E 10 31.6 41.6 A
A B C D
F 14 22.4 36.4 A
10 31.6 14 22.4 0 36.1 0 36.1 Open = {A, B, E, F}
G
41.6 36.4 36.1 36.1 H
E F G H Closed = {} I
0 36.1 0 36.1 0 36.1 0 36.1 J

36.1 36.1 36.1 36.1 K


I J K L L

75
Now that everything involving A is complete, we move it to the Closed set.
We now need a new current vertex. We look at the sums of B, E, and F.
F has the lowest sum, so it’s our new current vertex.
(Table unchanged from the previous slide.)

Open = {B, E, F}
Closed = {A}

76
We now want to look at F’s 8 neighbors: A, B, C, E, G, I, J, and K.
A is in Closed. We add the other 7 to Open. It already has B and E, so we end up adding the other 5.
We record the distances, sums, and previous vertices of all 7.
The values for B and E don’t change. Why? Vertex Distance Heuristic Sum Previous
from Start Distance Vertex
A 0 36.1 36.1
B 10 28.3 38.3 A
C 28 22.4 50.4 F
0 36.1 10 28.3 28 22.4 0 36.1
D
36.1 38.3 50.4 36.1 E 10 31.6 41.6 A
A B C D
F 14 22.4 36.4 A
10 31.6 14 22.4 24 14.1 0 36.1 Open = {B, C, E, F, G, I, J, K}
G 24 14.1 38.1 F
41.6 36.4 38.1 36.1 H
E F G H Closed = {A} I 28 30 58 F
28 30 24 20 28 10 0 36.1 J 24 20 44 F

58 44 38 36.1 K 28 10 38 F
I J K L L

77
Since B is F’s neighbor, B’s distance from the start via F is 14 + 10 = 24.
However, B already has a shorter distance from the start. As such, we don’t update B.
Likewise for E.
(Table and sets unchanged from the previous slide.)

78
Now that everything involving F is complete, we move it to the Closed set.
We now need a new current vertex again. K has the lowest sum, so it’s our new current vertex.
(When two nodes tie on the sum, we can use the heuristic distance to break the tie.)
(Table unchanged from the previous slide.)

Open = {B, C, E, G, I, J, K}
Closed = {A, F}

79
We now want to look at K’s 5 neighbors: F, G, H, J, and L.
Previously looked-at neighbors either are in Closed or have shorter distances from the start.
So, only H and L get updated.
Vertex  Dist. from Start  Heuristic Dist.  Sum   Previous Vertex
A       0                 36.1             36.1
B       10                28.3             38.3  A
C       28                22.4             50.4  F
E       10                31.6             41.6  A
F       14                22.4             36.4  A
G       24                14.1             38.1  F
H       42                10               52    K
I       28                30               58    F
J       24                20               44    F
K       28                10               38    F
L       38                0                38    K

Open = {B, C, E, G, H, I, J, K, L}
Closed = {A, F}

80
Now that everything involving K is complete, we move it to the Closed set.
We now need a current vertex yet again. L has the lowest sum, and it’s also our goal state!
The path from A to L can be traced backward using the previous vertices!
The solution path: A–F–K–L

(Table unchanged from the previous slide.)

Open = {B, C, E, G, H, I, J, L}
Closed = {A, F, K}

81
Something curious to notice: C’s distance from the start
It’s 28. Why not 20?!

Because we went to C via F.

A* Search still finds the shortest path!
(C's suboptimal entry doesn't matter, because C isn't on the path to L.)

(Table and sets unchanged from the previous slide.)

82
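For a concrete run, here is the map encoded under the assumptions above (a 4×3 grid, orthogonal edges cost 10, diagonals 14, straight-line distance to L as the heuristic), fed to the a_star sketch from the earlier slide:

    import math

    # Vertex -> (x, y); A B C D on top, one grid step = 10 (assumed layout).
    pos = {v: (i % 4 * 10, 20 - i // 4 * 10) for i, v in enumerate("ABCDEFGHIJKL")}

    def neighbors(v):
        x, y = pos[v]
        for u, (ux, uy) in pos.items():
            dx, dy = abs(ux - x), abs(uy - y)
            if (dx, dy) in ((10, 0), (0, 10)):
                yield u, 10            # orthogonal edge
            elif (dx, dy) == (10, 10):
                yield u, 14            # diagonal edge

    def h(v):                          # straight-line distance to L
        (x, y), (gx, gy) = pos[v], pos["L"]
        return math.hypot(x - gx, y - gy)

    a_star("A", lambda v: v == "L", neighbors, h)
    # -> (['A', 'F', 'K', 'L'], 38), matching the walkthrough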
All trees are graphs, but not all graphs are trees.
A tree is a Directed Acyclic Graph (DAG) in which every node except the root has exactly one parent.

(Figure: examples of graphs that are not trees, alongside trees, which are also graphs.)

83
The graph shown here is not a tree but can be traversed like one, which is tree search.
Caveat: The same node may be visited more than once.

The graph can also be searched using graph search.


Caveat: Memory cost to maintain a closed list to avoid repeated visits

Which of tree search and graph search is appropriate has to do with the type of heuristic used:

1. Admissible heuristics
2. Consistent heuristics
3. Inconsistent heuristics

84
Admissible heuristics:

A heuristic, h(n), is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach
the goal state from n.

It never overestimates the cost to reach the goal, i.e., it is optimistic.

Example:
h_SLD(n) (straight-line distance, or "as the crow flies") never overestimates the actual driving
distance.

85
Consistent heuristics:

A heuristic, h(n), is consistent, or monotone, if for every node n and each successor p,
h(n) ≤ c(n, p) + h(p), where c(n, p) is the actual cost to reach p from n.

In other words, while moving along a given path, the total estimated cost for that path (the real
cost discovered so far plus the remaining estimated cost, i.e., f(n) = g(n) + h(n)) never decreases.

86
Some facts about heuristics:

§ Most heuristics are admissible and consistent.


§ If h(n) is admissible, A* is guaranteed to give an optimal solution.
§ If h(n) is consistent, A* Tree Search and A* Graph Search both work.
§ If h(n) is inconsistent, only A* Tree Search will give an optimal solution.
• Graph search, having not completely carried through some paths, may miss the optimal
path in this case.

87
Adversarial search is one way in which computers play games.
A two-player game (think chess) typically has an adversary, hence the name.
Strategic thinking is key in playing such games.

Does strategic thinking constitute intelligence?

88
Here’s Tic-Tac-Toe as a graph of moves:

(Figure: the game tree, annotated with the initial state, the successor function, the terminal
states, and their utilities.)

89
Pseudocode for Minimax, an optimal-strategy algorithm for games like Tic-Tac-Toe:

90
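The pseudocode was an image and didn't survive extraction. In its place, a minimal Python sketch of the idea (is_terminal, utility, and moves are assumed game-specific helpers):

    def minimax(state, is_max_turn, is_terminal, utility, moves):
        """Value of `state` if both players play optimally from here on."""
        if is_terminal(state):
            return utility(state)
        values = [minimax(s, not is_max_turn, is_terminal, utility, moves)
                  for s in moves(state)]
        return max(values) if is_max_turn else min(values)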
Max(imizer) and Min(imizer) both assume each other is playing optimally.

Max’s goal is to maximize the value of its every action.


Min’s goal is to minimize the value of Max’s every action.

Minimax applies the assumption and the goal to each move of the game to foresee every move
sequence resulting in a win or a loss, picking moves accordingly.

The beauty of Minimax is in its simple elegance.


Note that Max and Min are not the namesakes of Minimax. Instead, Minimax seeks to minimize, via
Max’s actions, the maximum loss posed by Min’s strategy.

91
Minimax has a shortcoming: searching the whole tree is wasteful.
But depth-limiting this DFS-style search may miss out on the good moves that are beyond the frontier
of cut-off. This is the horizon effect.

We can prune the tree, i.e., eliminate parts of it from consideration, using Alpha-Beta Pruning:
It prunes away branches that can't possibly influence the final decision.

Consider a node n:
If a player has a better choice m (at a parent or further up), then n will never be reached.
So, once we know enough about n by looking at some successors, we can prune it.

92
Pseudocode for Minimax with Alpha-Beta Pruning:

93
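As with Minimax, the pseudocode image didn't survive extraction; here is a minimal sketch with the two pruning tests marked (same assumed helpers as before):

    import math

    def alphabeta(state, is_max_turn, alpha, beta, is_terminal, utility, moves):
        """Minimax value of `state`, skipping branches that cannot matter."""
        if is_terminal(state):
            return utility(state)
        if is_max_turn:
            value = -math.inf
            for s in moves(state):
                value = max(value, alphabeta(s, False, alpha, beta,
                                             is_terminal, utility, moves))
                alpha = max(alpha, value)
                if alpha >= beta:      # Min above will never let play reach here
                    break              # prune the remaining successors
            return value
        else:
            value = math.inf
            for s in moves(state):
                value = min(value, alphabeta(s, True, alpha, beta,
                                             is_terminal, utility, moves))
                beta = min(beta, value)
                if alpha >= beta:      # Max above already has something better
                    break
            return value

Called at the root as alphabeta(start, True, -math.inf, math.inf, ...), mirroring the α = -∞, β = ∞ initialization in the walkthrough that follows.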
Walkthrough of Minimax with Alpha-Beta Pruning:
(Figure: the example game tree before any leaf is evaluated; the Max root, the Min node below it,
and the nodes below that all carry α = -∞, β = ∞.)

94
Walkthrough of Minimax with Alpha-Beta Pruning:
(Figure: the first leaf is evaluated; the deepest node on the left updates to β = 3, while every
node above still carries α = -∞, β = ∞.)

95
Walkthrough of Minimax with Alpha-Beta Pruning:
(Figure: both leaves under that node, 3 and 17, have now been seen; the node keeps β = 3.)

96
Walkthrough of Minimax with Alpha-Beta Pruning:
(Figure: the node's value, 3, is passed up; its parent updates to α = 3.)

97
Walkthrough of Minimax with Alpha-Beta Pruning:
(Figure: the parent's right child is visited next, inheriting α = 3 with β = ∞.)

98
Walkthrough of Minimax with Alpha-Beta Pruning:
(Figure: that child's first leaf, 2, sets β = 2; now β ≤ α, so its remaining leaves are pruned.)

99
Walkthrough of Minimax with Alpha-Beta Pruning:
(Figure: the pruned node returns value 2.)

100
Walkthrough of Minimax with Alpha-Beta Pruning:
(Figure: the Max node above settles on value 3, and the Min node at the top level updates to β = 3.)

101
Walkthrough of Minimax with Alpha-Beta Pruning:
(Figure: the Min node's right branch is visited next, inheriting β = 3 with α = -∞.)

102
Walkthrough of Minimax with Alpha-Beta Pruning:
(Figure: its first leaf, 15, is evaluated.)

103
Walkthrough of Minimax with Alpha-Beta Pruning:
(Figure: the node above the 15 updates to α = 15; now α ≥ β, so its remaining successors are
pruned and it returns 15.)

104
Walkthrough of Minimax with Alpha-Beta Pruning:
(Figure: the 15 is passed up; the Min node keeps β = 3, since min(3, 15) stays 3.)

105
Walkthrough of Minimax with Alpha-Beta Pruning:
(Figure: the Min node settles on value 3; the Max root updates to α = 3, making 3 the minimax
value of the game.)

106
What did we see about searching through a space of possibilities?

We saw a variety of search methods with their own pros and cons.
Above all, we got a sense of the suitability of each method to a problem.

107
