
MODULE 2

Chapter 3: Solving Problems by Searching

When the correct action to take is not immediately obvious, an agent may need to plan ahead: to
consider a sequence of actions that form a path to a goal state. Such an agent is called a
problem-solving agent, and the computational process it undertakes is called search.

3.1 Problem-solving agents


• Imagine an agent enjoying a touring vacation in Romania. Suppose the agent is currently in the
city of Arad and has a nonrefundable ticket to fly out of Bucharest the following day.
• The agent observes street signs and sees that there are three roads leading out of Arad: one
toward Sibiu, one to Timisoara, and one to Zerind. If the agent has no additional
information, that is, if the environment is unknown, then the agent can do no better than
to execute one of the actions at random.
• We assume the agent has information about the world, such as the map in Figure 3.1.

• With that information, the agent can follow this four-phase problem-solving process:
1. Goal formulation: The agent adopts the goal of reaching Bucharest. Goals organize
behavior by limiting the objectives and hence the actions to be considered.
2. Problem formulation: is the process of deciding what actions and states to consider, given
a goal.
3. The process of looking for a sequence of actions that reaches the goal is called search.
4. A search algorithm takes a problem as input and returns a solution in the form of an action
sequence. Once a solution is found, the actions it recommends can be carried out. This is
called the execution phase.
3.1.1 Well-defined problems and solutions
• A problem can be defined formally by five components:
The initial state that the agent starts in. For example, the initial state for our agent in
Romania might be described as In(Arad).
• A description of the possible actions available to the agent: given a state s, ACTIONS(s)
returns the set of actions that can be executed in s.
• A description of what each action does; the formal name for this is the transition model,
specified by a function RESULT(s, a) that returns the state that results from doing action
a in state s. We also use the term successor to refer to any state reachable from a given
state by a single action. For example, we have
RESULT(In(Arad), Go(Zerind)) = In(Zerind).
• Together, the initial state, actions, and transition model implicitly define the state space
of the problem—the set of all states reachable from the initial state by any sequence of
actions.

• The goal test, which determines whether a given state is a goal state. The agent’s goal in
Romania is the singleton set {In(Bucharest)}.
• A path cost function that assigns a numeric cost to each path. The problem-solving agent
chooses a cost function that reflects its own performance measure. For the agent trying
to get to Bucharest, time is of the essence, so the cost of a path might be its length in
kilometers.

3.1.2 Formulating problems

• Our formulation of the problem of getting to Bucharest is a model (an abstract mathematical
description) and not the real thing. The process of removing detail from a representation is
called abstraction. A good problem formulation has the right level of detail.
• Level of abstraction: The choice of a good abstraction involves removing as much detail as
possible while retaining validity and ensuring that the abstract actions are easy to carry out.
3.2 EXAMPLE PROBLEMS
• The problem-solving approach has been applied to a vast array of task environments. We
list some of the best known here, distinguishing between toy (standardized) and real-world
problems. A toy problem is intended to illustrate or exercise various problem-solving
methods.
• It can be given a concise, exact description and hence is usable by different researchers
to compare the performance of algorithms. A real-world problem, such as robot navigation,
is one whose solutions people actually use, and whose formulation is idiosyncratic,
not standardized, because, for example, each robot has different sensors that produce
different data.

3.2.1 Toy problems


• The first example we examine is the vacuum world first introduced in Chapter 2. (See
Figure 2.2.) This can be formulated as a problem as follows:
• States: The state is determined by both the agent location and the dirt locations. The agent
is in one of two locations, each of which might or might not contain dirt. Thus, there are
2 × 2^2 = 8 possible world states. A larger environment with n locations has n · 2^n states.
• Initial state: Any state can be designated as the initial state.

• Actions: In this simple environment, each state has just three actions: Left, Right, and
Suck. Larger environments might also include Up and Down.
• Transition model: The actions have their expected effects, except that moving Left in the
leftmost square, moving Right in the rightmost square, and Sucking in a clean square have
no effect. The complete state space is shown in Figure 3.3.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
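The formulation above can be sketched directly in code. This is a minimal illustration of our own, not an implementation from the text; the state encoding (agent location plus one dirt flag per square) is an assumption chosen to mirror the components listed.

```python
from itertools import product

# A state is (agent_location, dirt_in_A, dirt_in_B); this encoding is
# illustrative, chosen to mirror the formulation above.
STATES = [(loc, a, b) for loc, a, b in product("AB", [True, False], [True, False])]

def result(state, action):
    """Transition model RESULT(s, a) for the two-square vacuum world."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)      # no effect if already in the leftmost square
    if action == "Right":
        return ("B", dirt_a, dirt_b)      # no effect if already in the rightmost square
    if action == "Suck":
        if loc == "A":
            return (loc, False, dirt_b)   # no effect if the square is already clean
        return (loc, dirt_a, False)
    raise ValueError(f"unknown action: {action}")

def goal_test(state):
    """Goal test: every square is clean."""
    return not state[1] and not state[2]
```

As a check, `len(STATES)` is 2 × 2^2 = 8, matching the state count given above.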
The 8-puzzle, an instance of which is shown in Figure 3.4, consists of a 3×3 board with
eight numbered tiles and a blank space. A tile adjacent to the blank space can slide into the
space. The object is to reach a specified goal state, such as the one shown on the right of the
figure. The standard formulation is as follows:

• States: A state description specifies the location of each of the eight tiles and the blank
in one of the nine squares.
• Initial state: Any state can be designated as the initial state.
• Actions: The simplest formulation defines the actions as movements of the blank space
Left, Right, Up, or Down.
• Transition model: Given a state and action, this returns the resulting state; for example,
if we apply Left to the start state in Figure 3.4, the resulting state has the 5 and the blank
switched.
• Goal test: This checks whether the state matches the goal configuration shown in Figure
3.4. (Other goal configurations are possible.)
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
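The 8-puzzle transition model can be sketched as follows. This is a minimal version of our own, assuming states are 9-tuples read row by row with 0 marking the blank; it is not code from the text.

```python
def result(state, action):
    """8-puzzle transition model: move the blank (0) Left/Right/Up/Down.

    `state` is a tuple of 9 ints read row by row; 0 marks the blank.
    Returns the resulting state, or the unchanged state if the move
    would push the blank off the board.
    """
    i = state.index(0)                 # position of the blank
    row, col = divmod(i, 3)
    moves = {"Left": (0, -1), "Right": (0, 1), "Up": (-1, 0), "Down": (1, 0)}
    dr, dc = moves[action]
    r, c = row + dr, col + dc
    if not (0 <= r < 3 and 0 <= c < 3):
        return state                   # illegal move: no effect
    j = 3 * r + c
    s = list(state)
    s[i], s[j] = s[j], s[i]            # slide the adjacent tile into the blank
    return tuple(s)
```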

The goal of the 8-queens problem is to place eight queens on a chessboard such that no
queen attacks any other. (A queen attacks any piece in the same row, column or diagonal.) Figure
3.5 shows an attempted solution that fails: the queen in the rightmost column is attacked by the
queen at the top left.
• There are two main kinds of formulation.

1. Incremental formulation
2. complete-state formulation

• An incremental formulation involves operators that augment the state description, starting
with an empty state; for the 8-queens problem, this means that each action adds a queen to
the state.
• A complete-state formulation starts with all 8 queens on the board and moves them
around

The first incremental formulation one might try is the following:

• States: Any arrangement of 0 to 8 queens on the board is a state.


• Initial state: No queens on the board.
• Actions: Add a queen to any empty square.
• Transition model: Returns the board with a queen added to the specified square.
• Goal test: 8 queens are on the board, none attacked.

• In this formulation, we have 64 × 63 × ··· × 57 ≈ 1.8 × 10^14 possible sequences to
investigate.

• A better formulation would prohibit placing a queen in any square that is already attacked:

• States: All possible arrangements of n queens (0 ≤ n ≤ 8), one per column in the leftmost n
columns, with no queen attacking another.

• Actions: Add a queen to any square in the leftmost empty column such that it is not attacked
by any other queen.
• This formulation reduces the 8-queens state space from 1.8 × 10^14 to just 2,057, and solutions
are easy to find.

• On the other hand, for 100 queens the reduction is from roughly 10^400 states to about 10^52
states, a big improvement, but not enough to make the problem tractable.
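Both counts for the 8-queens formulations can be checked with a short program. This is a sketch of our own, not part of the text: the naive count multiplies the choices of squares for each successive queen, and the improved count enumerates every valid partial board by backtracking.

```python
from math import prod

# Naive incremental formulation: choose a square for each of 8 queens in turn.
naive_sequences = prod(range(57, 65))      # 64 * 63 * ... * 57

def count_states(n=8):
    """Count states in the improved formulation: 0..n queens, one per
    leftmost column, with no queen attacking another."""
    def attacked(queens, row):
        col = len(queens)
        return any(r == row or abs(r - row) == col - c
                   for c, r in enumerate(queens))

    count = 0
    def dfs(queens):
        nonlocal count
        count += 1                         # every valid partial board is a state
        if len(queens) == n:
            return
        for row in range(n):
            if not attacked(queens, row):
                dfs(queens + [row])
    dfs([])
    return count
```

Running this, `naive_sequences` evaluates to 178,462,987,637,760 ≈ 1.8 × 10^14 and `count_states(8)` returns 2,057, matching the figures quoted above.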

• A simple algorithm is available that solves even the million-queens problem with ease.
Our final toy problem was devised by Donald Knuth (1964) and illustrates how infinite
state spaces can arise. Knuth conjectured that, starting with the number 4, a sequence of
factorial, square root, and floor operations can reach any desired positive integer. For example,
we can reach 5 from 4 by taking factorials, then square roots, then the floor:
floor(sqrt(sqrt(sqrt(sqrt(sqrt((4!)!)))))) = 5.

The problem definition is very simple:


• States: Positive numbers.
• Initial state: 4

• Actions: Apply factorial, square root, or floor operation (factorial for integers only).
• Transition model: As given by the mathematical definitions of the operations.
• Goal test: State is the desired positive integer.
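The claim that 5 is reachable from 4 can be verified numerically; one operator sequence that works (factorial twice, then five successive square roots, then floor) is sketched below. This is our own check, not code from the text.

```python
import math

x = math.factorial(math.factorial(4))   # (4!)! = 24!, about 6.2 * 10^23
for _ in range(5):                      # five successive square roots
    x = math.sqrt(x)                    # ends up at roughly 5.54
print(math.floor(x))                    # floor yields 5
```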

3.2.2 Real-world problems

• We have already seen how the route-finding problem is defined in terms of specified
locations and transitions along links between them. Route-finding algorithms are used in
a variety of applications. Some, such as Web sites and in-car systems that provide driving
directions, are relatively straightforward extensions of the Romania example. Others, such
as routing video streams in computer networks, military operations planning, and airline
travel-planning systems, involve much more complex specifications.
• Consider the airline travel problems that must be solved by a travel-planning Web site:
➢ States: Each state obviously includes a location (e.g., an airport) and the current
time. Furthermore, because the cost of an action (a flight segment) may depend on
previous segments, their fare bases, and their status as domestic or international,
the state must record extra information about these “historical” aspects.
➢ Initial state: The user’s home airport.
➢ Actions: Take any flight from the current location, in any seat class, leaving after the
current time, leaving enough time for within-airport transfer if needed.
➢ Transition model: The state resulting from taking a flight will have the flight’s
destination as the current location and the flight’s arrival time as the current time.
➢ Goal test: Are we at the final destination specified by the user?
➢ Path cost: This depends on monetary cost, waiting time, flight time, customs and
immigration procedures, seat quality, time of day, type of airplane, frequent-flyer
mileage awards, and so on.
• Touring problems describe a set of locations that must be visited, rather than a single goal
destination.
• The traveling salesperson problem (TSP) is a touring problem in which every city must
be visited exactly once. The aim is to find the shortest tour.
• A VLSI layout problem requires positioning millions of components and connections on
a chip to minimize area, minimize circuit delays, minimize stray capacitances, and
maximize manufacturing yield.
• Robot navigation is a generalization of the route-finding problem described earlier.

• Automatic assembly sequencing of complex objects by a robot was first demonstrated


by FREDDY (Michie, 1972). Progress since then has been slow but sure, to the point
where the assembly of intricate objects such as electric motors is economically feasible.
3.3 SEARCHING FOR SOLUTIONS

• A search algorithm takes a search problem as input and returns a solution, or an indication of
failure. We consider algorithms that superimpose a search tree over the state-space graph,
forming various paths from the initial state, trying to find a path that reaches a goal state.
• Each node in the search tree corresponds to a state in the state space and the edges in the search
tree correspond to actions.
• The root of the tree corresponds to the initial state of the problem.
• The state space describes the (possibly infinite) set of states in the world, and the actions
that allow transitions from one state to another.
• The search tree describes paths between these states, reaching towards the goal.
• The search tree may have multiple paths to any given state, but each node in the tree has a
unique path back to the root.
• Figure 3.6 shows the first few steps in finding a path from Arad to Bucharest.
3.3.1 Infrastructure for search algorithms
• Search algorithms require a data structure to keep track of the search tree that is being
constructed. For each node n of the tree, we have a structure that contains four
components:
o n.STATE: the state in the state space to which the node corresponds;
o n.PARENT: the node in the search tree that generated this node;
o n.ACTION: the action that was applied to the parent to generate the node;
o n.PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial
state to the node, as indicated by the parent pointers.

Given the components for a parent node, it is easy to see how to compute the necessary components
for a child node. The function CHILD-NODE takes a parent node and an action and returns the
resulting child node:
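A minimal Python rendering of this node structure and of CHILD-NODE is sketched below. The `problem.result` and `problem.step_cost` method names are our own assumptions about the problem interface, not fixed by the text.

```python
class Node:
    """Bookkeeping structure for one search-tree node."""
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state          # n.STATE
        self.parent = parent        # n.PARENT
        self.action = action        # n.ACTION
        self.path_cost = path_cost  # n.PATH-COST, traditionally g(n)

def child_node(problem, parent, action):
    """CHILD-NODE: apply `action` to `parent` and build the resulting node."""
    state = problem.result(parent.state, action)
    cost = parent.path_cost + problem.step_cost(parent.state, action)
    return Node(state, parent, action, cost)

def solution(node):
    """Follow PARENT pointers back to the root to recover the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```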

• A node is a bookkeeping data structure used to represent the search tree.


• A state corresponds to a configuration of the world.
• Thus, nodes are on particular paths, as defined by PARENT pointers, whereas states are not.
• Now that we have nodes, we need somewhere to put them. The frontier needs to be stored in
such a way that the search algorithm can easily choose the next node to expand according to
its preferred strategy.
• The appropriate data structure for this is a queue.
The operations on a queue are as follows:
a.) EMPTY?(queue) returns true only if there are no more elements in the queue.
b.) POP(queue) removes the first element of the queue and returns it.
c.) INSERT(element, queue) inserts an element and returns the resulting queue.

• Queues are characterized by the order in which they store the inserted nodes.

• Three common variants are the first-in, first-out or FIFO queue, which pops the oldest
element of the queue; the last-in, first-out or LIFO queue (also known as a stack), which
pops the newest element of the queue; and the priority queue, which pops the element of
the queue with the highest priority according to some ordering function.
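The three queue variants map directly onto standard Python data structures; a small sketch:

```python
from collections import deque
import heapq

# FIFO queue: pops the oldest element.
fifo = deque()
fifo.append("a")
fifo.append("b")
oldest = fifo.popleft()        # "a"

# LIFO queue (stack): pops the newest element.
lifo = []
lifo.append("a")
lifo.append("b")
newest = lifo.pop()            # "b"

# Priority queue: pops the element with the best priority
# (here, the lowest number) according to an ordering function.
pq = []
heapq.heappush(pq, (3, "c"))
heapq.heappush(pq, (1, "a"))
best = heapq.heappop(pq)       # (1, "a")
```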

3.3.2 Measuring problem-solving performance


• We can evaluate an algorithm’s performance in four ways:
o Completeness: Is the algorithm guaranteed to find a solution when there is one?
o Optimality: Does the strategy find the optimal solution?
o Time complexity: How long does it take to find a solution?
o Space complexity: How much memory is needed to perform the search?

• In AI, the graph is often represented implicitly by the initial state, actions, and transition
model and is frequently infinite. For these reasons, complexity is expressed in terms of
three quantities: b, the branching factor or maximum number of successors of any node;
d, the depth of the shallowest goal node (i.e., the number of steps along the path from the
root); and m, the maximum length of any path in the state space.
• Time is often measured in terms of the number of nodes generated during the search, and
space in terms of the maximum number of nodes stored in memory.

3.4 UNINFORMED SEARCH STRATEGIES


• Uninformed search is also called blind search: the term means that the strategies have no
additional information about states beyond that provided in the problem definition.
• All they can do is generate successors and distinguish a goal state from a non-goal state.
All search strategies are distinguished by the order in which nodes are expanded.
Strategies that know whether one non-goal state is “more promising” than another are
called informed search or heuristic search strategies.
3.4.1 Breadth-first search

• Breadth-first search is a simple strategy in which the root node is expanded first, then
all the successors of the root node are expanded next, then their successors, and so on.
Imagine searching a uniform tree where every state has b successors. The root of the search tree
generates b nodes at the first level, each of which generates b more nodes, for a total of b^2 at the
second level. Each of these generates b more nodes, yielding b^3 nodes at the third level, and so on.
Now suppose that the solution is at depth d. In the worst case, it is the last node generated at that
level.
• Then the total number of nodes generated is b + b^2 + b^3 + ··· + b^d = O(b^d) .
• As for space complexity: For breadth-first graph search in particular, every node
generated remains in memory.
Limitations of BFS

1. The memory requirements are a bigger problem for breadth-first search than
is the execution time.
2. Time is still a major factor. If a problem has a solution at depth 16, then at a rate
of one million nodes generated per second it would take breadth-first search
about 350 years to find it.
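Breadth-first search with a FIFO frontier and early goal testing can be sketched as follows. This is a minimal version of our own; `successors` is a caller-supplied function, not something defined in the text.

```python
from collections import deque

def breadth_first_search(start, goal_test, successors):
    """BFS over an implicit graph; returns the path of states, or None.

    `successors(state)` yields the states reachable in one action.
    """
    if goal_test(start):
        return [start]
    frontier = deque([start])            # FIFO queue
    parent = {start: None}               # also serves as the explored set
    while frontier:
        state = frontier.popleft()
        for s in successors(state):
            if s not in parent:
                parent[s] = state
                if goal_test(s):         # goal test on generation
                    path = [s]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return list(reversed(path))
                frontier.append(s)
    return None
```

For example, with successors(n) = {n + 1, 2n} starting from 1, BFS finds a shallowest path to 10: 1, 2, 4, 5, 10.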

3.4.3 Depth-first search

• Depth-first search always expands the deepest node in the current frontier of the search
tree. The progress of the search is illustrated in Figure 3.16. The search proceeds
immediately to the deepest level of the search tree, where the nodes have no successors.
As those nodes are expanded, they are dropped from the frontier, so then the search
“backs up” to the next deepest node that still has unexplored successors.
• The depth-first search algorithm is an instance of the graph-search algorithm in Figure
3.7; whereas breadth-first-search uses a FIFO queue, depth-first search uses a LIFO
queue.
• The time complexity of depth-first graph search is bounded by the size of the state space. A
depth-first tree search, on the other hand, may generate all of the O(b^m) nodes in the search
tree, where m is the maximum depth of any node; this can be much greater than the size of
the state space.

• A variant of depth-first search called backtracking search uses still less memory.
Backtracking search facilitates yet another memory-saving (and time-saving) trick: the idea
of generating a successor by modifying the current state description directly rather than
copying it first.

• This reduces the memory requirements to just one state description and O(m) actions.
For this to work, we must be able to undo each modification when we go back to generate the
next successor. For problems with large state descriptions, such as robotic assembly, these
techniques are critical to success.
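A stack-based, depth-limited depth-first search can be sketched as follows. This is a minimal illustration of our own; the depth `limit` guards against infinite paths, in the spirit of the remarks above about infinite state spaces.

```python
def depth_first_search(start, goal_test, successors, limit=50):
    """Iterative depth-first tree search using an explicit LIFO stack.

    `limit` bounds the path length so the search cannot descend forever
    in an infinite (or very deep) state space.
    """
    frontier = [(start, [start])]        # LIFO queue (stack)
    while frontier:
        state, path = frontier.pop()     # deepest node first
        if goal_test(state):
            return path
        if len(path) <= limit:
            for s in successors(state):
                frontier.append((s, path + [s]))
    return None
```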
3.5 Informed Search Strategies

Greedy best-first search


• The greedy best-first search algorithm always selects the path that appears best at the
moment.
• Best-first search combines advantages of depth-first and breadth-first search: it uses a
heuristic function to decide which node to explore next.
• With the help of best-first search, at each step, we can choose the most promising node.
• In the greedy best-first search algorithm, we expand the node that appears closest to the goal
node, where the closeness is estimated by a heuristic function,

i.e. f(n) = h(n),

where h(n) = estimated cost from node n to the goal.

The greedy best-first algorithm is implemented with a priority queue.

Best first search algorithm:

Step 1: Place the starting node into the OPEN list.

Step 2: If the OPEN list is empty, Stop and return failure.

Step 3: Remove from the OPEN list the node n with the lowest value of h(n), and place it
in the CLOSED list.

Step 4: Expand the node n, and generate the successors of node n.

Step 5: Check each successor of node n to see whether it is a goal node. If any
successor is a goal node, then return success and terminate the search; otherwise proceed to Step 6.

Step 6: For each successor node, the algorithm evaluates f(n) and then checks
whether the node is already in the OPEN or CLOSED list. If it is in neither list, then
add it to the OPEN list.

Step 7: Return to Step 2.
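The steps above can be sketched as follows. This is a minimal version of our own: unlike Step 5, it applies the goal test when a node is popped from OPEN rather than when successors are generated, which simplifies the code without changing which solutions can be found.

```python
import heapq

def greedy_best_first(start, goal_test, successors, h):
    """Greedy best-first search: always expand the node with the
    smallest heuristic value f(n) = h(n), using a priority queue."""
    open_list = [(h(start), start, [start])]         # Step 1: OPEN holds the start
    closed = set()
    while open_list:                                  # Step 2: fail if OPEN empties
        _, state, path = heapq.heappop(open_list)     # Step 3: lowest h(n)
        if goal_test(state):
            return path
        if state in closed:
            continue
        closed.add(state)                             # move n to CLOSED
        for s in successors(state):                   # Step 4: expand n
            if s not in closed:                       # Step 6: skip already-seen nodes
                heapq.heappush(open_list, (h(s), s, path + [s]))
    return None
```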


Advantages:

• Best-first search can switch between BFS-like and DFS-like behavior, gaining the
advantages of both algorithms.
• This algorithm is often more efficient than plain BFS or DFS.

Disadvantages:

• It can behave as an unguided depth-first search in the worst-case scenario.


• It can get stuck in a loop, like DFS.
• This algorithm is not optimal.

Example:

Consider the search problem below, which we will traverse using greedy best-first search. At each
iteration, each node is expanded using the evaluation function f(n) = h(n), whose values are
given in the accompanying table.

In this search example, we use two lists, OPEN and CLOSED. The iterations for
traversing the example are as follows.
Expand node S and put it in the CLOSED list:

Initialization: Open [A, B], Closed [S]

Iteration 1: Open [A], Closed [S, B]

Iteration 2: Open [E, F, A], Closed [S, B]

: Open [E, A], Closed [S, B, F]

Iteration 3: Open [I, G, E, A], Closed [S, B, F]

: Open [I, E, A], Closed [S, B, F, G]

Hence the final solution path will be: S ---> B ---> F ---> G
Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).

Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m),
where m is the maximum depth of the search space.

Complete: Greedy best-first search is incomplete, even if the given state space is finite.

Optimal: Greedy best first search algorithm is not optimal.


Greedy best-first search tries to expand the node that is closest to the goal, on the grounds
that this is likely to lead to a solution quickly. Thus, it evaluates nodes by using just the
heuristic function; that is, f(n) = h(n). Let us see how this works for route-finding problems
in Romania; we use the straight line distance heuristic, which we will call hSLD . If the goal
is Bucharest, we need to know the straight-line distances to Bucharest, which are shown in
Figure 3.22. For example, hSLD (In(Arad)) = 366. Notice that the values of hSLD cannot
be computed from the problem description itself. Moreover, it takes a certain amount of
experience to know that hSLD is correlated with actual road distances and is, therefore, a
useful heuristic. Figure 3.23 shows the progress of a greedy best-first search using hSLD to
find a path from Arad to Bucharest.

The first node to be expanded from Arad will be Sibiu because it is closer to
Bucharest than either Zerind or Timisoara. The next node to be expanded will be Fagaras
because it is closest. Fagaras in turn generates Bucharest, which is the goal. For this
particular problem, greedy best-first search using hSLD finds a solution without ever
expanding a node that is not on the solution path; hence, its search cost is minimal. It is not
optimal, however: the path via Sibiu and Fagaras to Bucharest is 32 kilometers longer than
the path through Rimnicu Vilcea and Pitesti. This shows why the algorithm is called
“greedy”—at each step it tries to get as close to the goal as it can. Greedy best-first tree
search is also incomplete even in a finite state space, much like depth-first search. Consider
the problem of getting from Iasi to Fagaras. The heuristic suggests that Neamt be expanded
first because it is closest to Fagaras, but it is a dead end. The solution is to go first to Vaslui—
a step that is actually farther from the goal according to the heuristic—and then to continue
to Urziceni, Bucharest, and Fagaras. The algorithm will never find this solution, however,
because expanding Neamt puts Iasi back into the frontier, Iasi is closer to Fagaras than
Vaslui is, and so Iasi will be expanded again, leading

to an infinite loop. (The graph search version is complete in finite spaces, but not in infinite
ones.) The worst-case time and space complexity for the tree version is O(b^m), where m is
the maximum depth of the search space. With a good heuristic function, however, the

complexity can be reduced substantially. The amount of the reduction depends on the
particular problem and on the quality of the heuristic.

Topic 02: A* search


• A* search is the most commonly known form of best-first search.
• It uses heuristic function h(n), and cost to reach the node n from the start state g(n).
• It has combined features of UCS and greedy best-first search, by which it solves the problem
efficiently.
• A* search algorithm finds the shortest path through the search space using the heuristic
function. This search algorithm expands less search tree and provides optimal result faster.
• A* algorithm is similar to UCS except that it uses g(n)+h(n) instead of g(n).

In the A* search algorithm, we use a search heuristic as well as the cost to reach the node. Hence, we
can combine both costs as follows: f(n) = g(n) + h(n). This sum is called the fitness number.
Algorithm of A* search:

Step1: Place the starting node in the OPEN list.

Step 2: Check if the OPEN list is empty or not; if the list is empty, then return failure and stop.

Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function

(g + h). If node n is the goal node, then return success and stop; otherwise go to Step 4.

Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each

successor n', check whether n' is already in the OPEN or CLOSED list; if not, then compute the

evaluation function for n' and place it into the OPEN list.

Step 5: Otherwise, if node n' is already in OPEN or CLOSED, then attach it to the back

pointer which reflects the lowest g(n') value.

Step 6: Return to Step 2.
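The A* steps above can be sketched as follows. This is a minimal version of our own: it keeps the lowest g(n') seen so far for each state, in the spirit of Step 5, and `successors` yielding (state, step_cost) pairs is an assumed interface, not one fixed by the text.

```python
import heapq

def astar(start, goal_test, successors, h):
    """A* search: expand the node with the smallest f(n) = g(n) + h(n).

    `successors(state)` yields (next_state, step_cost) pairs.
    Returns (path, cost), or None on failure.
    """
    open_list = [(h(start), 0, start, [start])]      # entries are (f, g, state, path)
    best_g = {start: 0}                              # lowest g(n') found per state
    while open_list:
        f, g, state, path = heapq.heappop(open_list) # smallest f(n) first
        if goal_test(state):
            return path, g
        if g > best_g.get(state, float("inf")):
            continue                                 # stale entry; a cheaper path exists
        for s, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(s, float("inf")):     # keep only the lowest g(n')
                best_g[s] = g2
                heapq.heappush(open_list, (g2 + h(s), g2, s, path + [s]))
    return None
```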

Advantages:

• The A* search algorithm performs better than many other search algorithms.

• A* search algorithm is optimal and complete.

• This algorithm can solve very complex problems.

Disadvantages:

• It does not always produce the shortest path, as it is partly based on heuristics and
approximation.

• A* search algorithm has some complexity issues.

• The main drawback of A* is memory requirement as it keeps all generated nodes in the

memory, so it is not practical for various large-scale problems.

Example:

In this example, we will traverse the given graph using the A* algorithm. The heuristic value of

all states is given in the accompanying table, so we will calculate the f(n) of each state using the

formula f(n) = g(n) + h(n), where g(n) is the cost to reach the node from the start state.

Here we will use OPEN and CLOSED list.

Solution:
Initialization: {(S, 5)}

Iteration1: {(S--> A, 4), (S-->G, 10)}

Iteration2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}

Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7), (S-->G, 10)}

Iteration 4 gives the final result: S ---> A ---> C ---> G, which is the optimal path with cost 6.

Points to remember:

• The A* algorithm returns the first path found and does not search all remaining paths.

• The efficiency of the A* algorithm depends on the quality of the heuristic.

• The A* algorithm expands all nodes which satisfy the condition f(n) < C*, where C* is the

cost of the optimal solution.

Complete: The A* algorithm is complete as long as:


the branching factor is finite, and

every action has a fixed, positive cost.

Optimal: The A* search algorithm is optimal if it satisfies the following two conditions:

Admissibility: the first condition required for optimality is that h(n) be an admissible

heuristic for A* tree search. An admissible heuristic never overestimates the cost to reach
the goal and is therefore optimistic in nature.

Consistency: the second condition, required only for A* graph search, is that h(n) be consistent.

If the heuristic function is admissible, then A* tree search will always find the least-cost path.

Time Complexity: The time complexity of the A* search algorithm depends on the heuristic function,

and the number of nodes expanded is exponential in the depth of the solution d. So, the time complexity

is O(b^d), where b is the branching factor.

Space Complexity: The space complexity of the A* search algorithm is O(b^d).


Topic 03: Heuristic functions
• A heuristic function helps the search reach the goal node more directly.

• There are typically several pathways in a search tree from the current node to the

goal node.

• The selection of a good heuristic function matters greatly.

• A good heuristic function is judged by its efficiency.

• The more information the heuristic captures about the problem, the more effectively it

guides the search, though more informative heuristics can take longer to compute.

Example:

• Consider the following 8-puzzle problem where we have a start state and a goal state.

• Our task is to slide the tiles of the current/start state to arrange them in the order of the

goal state. There are four possible moves: left, right, up, or down.

• There can be several ways to convert the current/start state to the goal state, but we can use

a heuristic function h(n) to solve the problem more efficiently.


A heuristic function for the 8-puzzle problem is defined below:

h(n)=Number of tiles out of position.

So, there are a total of three tiles out of position, i.e., 6, 5, and 4 (the empty tile is not

counted), giving h(n) = 3. Now, we want to reduce the value of h(n) to 0.

We can construct a state-space tree to minimize the h(n) value to 0, as shown below:

• It can be seen from the above state-space tree that h(n) is reduced from 3 to 0 along the path
to the goal state. However, we can create and use several heuristic functions as per the requirement.
• It is also clear from the above example that a heuristic function h(n) can be defined as the
information required to solve a given problem more efficiently.
• The information can be related to the nature of the state, cost of transforming from one
state to another, goal node characteristics, etc., which is expressed as a heuristic function.
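The misplaced-tiles heuristic is easy to state in code. This is a sketch of our own, assuming states are 9-tuples with 0 for the blank; the start state used to check it below is an illustrative example, not the one from the figure.

```python
def misplaced_tiles(state, goal):
    """h(n) = number of tiles out of position; the blank (0) is not counted."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)
```

By construction, h(goal) = 0, which is the value the search tries to reach.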
Properties of a Heuristic search Algorithm

The use of a heuristic function in a heuristic search algorithm leads to the following properties of a


heuristic search algorithm:

• Admissible Condition: An algorithm is said to be admissible, if it returns an optimal


solution.
• Completeness: An algorithm is said to be complete, if it terminates with a solution (if the
solution exists).
• Dominance Property: If there are two admissible heuristic algorithms A1 and A2 having
heuristic functions h1 and h2, then A1 is said to dominate A2 if h1(n) ≥ h2(n) for all
nodes n.
• Optimality Property: If an algorithm is complete, admissible, and dominates the other
algorithms, it will be the best one and will definitely give an optimal solution.
