
Unit-2 : Problem Solving Agent Definition

 A problem-solving agent is a goal-based agent that focuses on achieving a goal, using a group of algorithms and techniques to solve a well-defined problem.
 An agent may need to plan a sequence of actions that form a path to a goal state.
 Such an agent is called a problem-solving agent, and the computational process it undertakes is called search.

1.1 Steps performed by Problem-solving agent


 Goal Formulation:
o It is the first and simplest step in problem-solving.
o It selects one goal out of multiple possible goals, along with the actions needed to achieve that goal. Goal formulation is based on the current situation and the agent's performance measure.
 Problem Formulation:
o It is the most important step of problem-solving; it decides what actions should be taken to achieve the formulated goal.
o Components in problem formulation:
1. Initial State: It is the starting state or initial step of the agent towards its goal.
2. Actions: It is a description of the possible actions available to the agent.
3. Transition Model: It describes what each action does.
4. Goal Test: It determines whether a given state is a goal state.
5. Path cost: It assigns a numeric cost to each path. The problem-solving agent selects a cost function that reflects its performance measure.
o The initial state, actions, and transition model together implicitly define the state space of the problem.
o The state space of a problem is the set of all states that can be reached from the initial state by any sequence of actions.
o The state space forms a directed map or graph where the nodes are states, the links between the nodes are actions, and a path is a sequence of states connected by a sequence of actions.
 Search:
o It identifies the best possible sequence of actions to reach the goal state from the current state. It takes a problem as input and returns a solution as output.
 Solution:
o A solution is the action sequence found by search; the best of the candidate solutions, the one with the lowest path cost, is the optimal solution.
 Execution:
o The agent executes the actions of the chosen solution, one at a time, to reach the goal state from the current state.

1.2 Types of problem approaches:


 Toy Problem or Standardized Problem:
o It is a concise and exact description of a problem, used by researchers to compare the performance of algorithms.
Example Problem 1 - 8 Puzzle Problem

 Consider a 3x3 matrix with movable tiles numbered from 1 to 8 and one blank space.
 A tile adjacent to the blank space can slide into that space.
 The objective is to reach a specified goal state, as shown in the figure below.
 The task is to convert the current state into the goal state by sliding tiles into the blank space.

The problem formulation is as follows:


 States:
o A state describes the location of each numbered tile and the blank.
 Initial State:
o Any state can be designated as the initial state.
 Actions:
o Actions are defined as movements of the blank space: Left, Right, Up or Down.
 Transition Model:
o It returns the state resulting from applying a given action in a given state.
 Goal test:
o It checks whether the current state matches the goal state.
 Path cost:
o The path cost is the number of steps in the path, where each step costs 1.
Note: The 8-puzzle problem is a type of sliding-block problem, which is used for testing new search algorithms in artificial intelligence.
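
As a rough, illustrative sketch of this formulation (not part of the original notes), the Python class below encodes a state as a tuple of nine entries in row-major order, with 0 standing for the blank; the class name EightPuzzle and its method names are assumptions chosen only for illustration.

```python
# Illustrative 8-puzzle formulation: states, actions, transition model,
# goal test and step cost, matching the components listed above.

class EightPuzzle:
    def __init__(self, initial, goal=(1, 2, 3, 4, 5, 6, 7, 8, 0)):
        self.initial = initial                  # initial state
        self.goal = goal                        # goal state

    def actions(self, state):
        """Legal moves of the blank: 'Up', 'Down', 'Left', 'Right'."""
        moves = ['Up', 'Down', 'Left', 'Right']
        row, col = divmod(state.index(0), 3)
        if row == 0: moves.remove('Up')
        if row == 2: moves.remove('Down')
        if col == 0: moves.remove('Left')
        if col == 2: moves.remove('Right')
        return moves

    def result(self, state, action):
        """Transition model: the state produced by sliding the blank."""
        blank = state.index(0)
        delta = {'Up': -3, 'Down': 3, 'Left': -1, 'Right': 1}[action]
        s = list(state)
        s[blank], s[blank + delta] = s[blank + delta], s[blank]
        return tuple(s)

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action):
        return 1                                # every move costs 1

# Usage: list the legal moves from a sample state with the blank in the middle.
p = EightPuzzle((1, 2, 3, 4, 0, 5, 6, 7, 8))
print(p.actions(p.initial))                     # ['Up', 'Down', 'Left', 'Right']
```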

Example Problem 2 - Vacuum World

The problem formulation is as follows:


 States:
o For the vacuum world, the objects are the agent and any dirt. In the simple
two-cell version, the agent can be in either of the two cells, and each cell
can either contain dirt or not.
 Initial state:
o Any state can be designated as the initial state.
 Actions:
o In the two-cell world there are three actions: Suck, Move Left, and Move Right. In a larger two-dimensional world the agent may instead need Forward, Backward, TurnRight, and TurnLeft.
 Transition model:
o Suck removes any dirt from the agent's cell.
o Forward moves the agent ahead one cell in the direction it is facing, unless it hits a wall, in which case the action has no effect.
o Backward moves the agent in the opposite direction.
o TurnRight and TurnLeft change the direction the agent is facing by 90°.
 Goal states:
o The states in which every cell is clean.
 Action cost:
o Each action costs 1.
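
A minimal sketch of the two-cell version of this transition model is given below (illustrative only; the state encoding is an assumption, not part of the original notes).

```python
# Two-cell vacuum world: a state is (agent_location, dirt_in_A, dirt_in_B),
# where the location is 'A' or 'B'. Only the two-cell actions are modeled.

def result(state, action):
    loc, dirt_a, dirt_b = state
    if action == 'Suck':                  # Suck removes dirt from the agent's cell
        if loc == 'A':
            dirt_a = False
        else:
            dirt_b = False
    elif action == 'Left':                # moving into a wall would have no effect
        loc = 'A'
    elif action == 'Right':
        loc = 'B'
    return (loc, dirt_a, dirt_b)

def is_goal(state):
    _, dirt_a, dirt_b = state             # goal: every cell is clean
    return not dirt_a and not dirt_b

# Usage: clean cell A, move right, clean cell B.
s = ('A', True, True)
for a in ['Suck', 'Right', 'Suck']:
    s = result(s, a)
print(s, is_goal(s))                      # ('B', False, False) True
```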

Example Problem 3 – Route-Finding Problem


 A route-finding problem is defined in terms of specified locations and transitions along edges between them.
The problem formulation is as follows:
 Consider the airline travel problem that must be solved by a travel-planning Web site:
• States:
o Each state obviously includes a location (e.g., an airport) and the current
time.
• Initial state:
o The user’s home airport.
• Actions:
o Take any flight from the current location, in any seat class, leaving after the
current time, leaving enough time for within-airport transfer if needed.
• Transition model:
o The state resulting from taking a flight will have the flight's destination as the new location and the flight's arrival time as the new time.
• Goal state:
o A destination city.
• Action cost:
o A combination of monetary cost, waiting time, flight time, customs and immigration procedures, seat quality, time of day, type of airplane, frequent-flyer reward points, and so on.
Algorithm – Problem solving agent

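The algorithm figure from the original is not reproduced here. A minimal Python sketch of the same idea is given below; the helper names formulate_goal, formulate_problem and search stand for the steps listed in Section 1.1 and are assumptions made for illustration.

```python
# Sketch of a simple problem-solving agent: formulate a goal and a problem,
# search for a solution, then return the action sequence to be executed.

def simple_problem_solving_agent(percept, formulate_goal, formulate_problem, search):
    state = percept                              # current (observed) state
    goal = formulate_goal(state)                 # 1. goal formulation
    problem = formulate_problem(state, goal)     # 2. problem formulation
    plan = search(problem)                       # 3. search returns an action sequence
    return plan or []                            # 4-5. the agent executes this plan

# Usage with trivial stand-ins (purely illustrative):
plan = simple_problem_solving_agent(
    percept='start',
    formulate_goal=lambda s: 'goal',
    formulate_problem=lambda s, g: (s, g),
    search=lambda problem: ['action1', 'action2'],
)
print(plan)                                      # ['action1', 'action2']
```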
1.3 Search Algorithm
o A search algorithm takes a search problem as input and returns a solution.

1.4 Types of Search Algorithm

1.4.1 UNINFORMED SEARCH ALGORITHMS:


• Uninformed search is also called blind search.
• These algorithms can only generate successors and distinguish a goal state from a non-goal state.
• This type of search has no additional information about how close a state is to the goal, which is why it is known as blind search.
• Types of uninformed search algorithms are shown in Figure 1.13.
1.4.1.1 Depth First Search
1.4.1.2 Depth Limited Search
1.4.1.3 Breadth First Search
1.4.1.4 Iterative Deepening Search
1.4.1.5 Uniform Cost Search
1.4.1.6 Bidirectional Search

Each of these algorithms will have:

 A problem graph, containing the start node S and the goal node G.
 A strategy, describing the manner in which the graph will be traversed
to get to G.
 A fringe, which is a data structure used to store all the possible states (nodes) reachable from the current states.
 A tree that results while traversing to the goal node.
 A solution plan, which is the sequence of nodes from S to G.

DEPTH FIRST SEARCH:
• Depth-first search (DFS) is an algorithm for traversing or searching tree or
graph data structures.
• The algorithm starts at the root node and explores as far as possible along
each branch before backtracking.
• It uses a last-in, first-out (LIFO) strategy and hence is implemented using a stack.
Notation: d = the depth of the search tree (the number of levels of the search tree); n_i = the number of nodes in level i.

The performance measure of DFS


 Completeness: DFS is not guaranteed to reach the goal state (it is incomplete on infinite-depth state spaces).
 Optimality: It does not give an optimal solution, as it expands nodes deeply along one direction first.
 Space complexity: It needs to store only a single path from the root node to a leaf node, along with the unexpanded siblings of each node on that path. Therefore, DFS has O(b × m) space complexity, where b is the branching factor (i.e., the number of child nodes a parent node can have) and m is the maximum length of any path.
 Time complexity: DFS has O(b^m) time complexity.

Advantage of Depth-first Search:
 DFS uses very little memory.
 It can reach the goal node in less time than BFS if the solution lies deep along an early-explored branch.

Disadvantage of Depth-first Search:


 There's a chance that many states will recur, and there's no certainty that a
solution will be found.
 The DFS algorithm performs deep searching and may occasionally enter
an infinite cycle.
Example:

Algorithm
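
The original algorithm figure is not reproduced here. A minimal stack-based DFS sketch over an explicit graph is shown instead (the graph, node names and helper depth_first_search are illustrative assumptions).

```python
# Depth-first search with an explicit stack (LIFO). Returns the first path
# found from start to goal, or None if the goal is unreachable.

def depth_first_search(graph, start, goal):
    stack = [(start, [start])]                  # frontier of (node, path so far)
    visited = set()
    while stack:
        node, path = stack.pop()                # LIFO: expand the most recent node
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push children in reverse so the first-listed child is expanded first
        for child in reversed(graph.get(node, [])):
            if child not in visited:
                stack.append((child, path + [child]))
    return None

# Usage on a small illustrative graph.
graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['G'], 'C': [], 'D': [], 'G': []}
print(depth_first_search(graph, 'S', 'G'))      # ['S', 'B', 'G'] (A's branch is tried first)
```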

DEPTH-LIMITED SEARCH ALGORITHM
 A depth-limited search algorithm is similar to depth-first search with a
predetermined limit.
 Depth-limited search can overcome the drawback of infinite paths in depth-first search.
 In this algorithm, a node at the depth limit is treated as if it has no further successor nodes.
 Depth-limited search can terminate with two kinds of failure:
o Standard failure value: It indicates that the problem has no solution at all.
o Cutoff failure value: It indicates that no solution exists within the given depth limit.

Depth-limited search Algorithm


 Set a variable NODE to the initial state, i.e., the root node.
 Set a variable GOAL which contains the value of the goal state.
 Set a variable LIMIT which carries a depth-limit value.
 Loop over the nodes, traversing in DFS manner up to the depth-limit value.
 While looping, remove elements from the stack in LIFO order.
 If the goal state is found, return it; otherwise terminate the search with a cutoff or failure value. A sketch is given below.
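
A minimal recursive sketch of depth-limited search follows (illustrative; it returns 'cutoff' when the limit is hit and None on standard failure).

```python
# Depth-limited search: DFS that treats nodes at the depth limit as having
# no successors. Returns a path, 'cutoff', or None (standard failure).

def depth_limited_search(graph, node, goal, limit, path=None):
    path = (path or []) + [node]
    if node == goal:
        return path                             # goal test
    if limit == 0:
        return 'cutoff'                         # cutoff failure: limit reached
    cutoff_occurred = False
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1, path)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return result
    return 'cutoff' if cutoff_occurred else None  # standard failure: no solution at all

# Usage on a small illustrative graph (G lies at depth 2).
graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['G'], 'C': [], 'G': []}
print(depth_limited_search(graph, 'S', 'G', limit=1))   # 'cutoff'
print(depth_limited_search(graph, 'S', 'G', limit=2))   # ['S', 'B', 'G']
```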
The performance measure of Depth-limited search
 Completeness: Depth-limited search is not guaranteed to reach the goal node (the goal may lie beyond the depth limit).
 Optimality: It does not give an optimal solution, as it only expands nodes up to the depth limit.
 Space Complexity: The space complexity of depth-limited search is O(b × l), where l is the depth limit.
 Time Complexity: The time complexity of depth-limited search is O(b^l).
Advantages:
 Depth-limited search is memory efficient.
Disadvantages:
 Depth-limited search also has a disadvantage of incompleteness.
 It may not be optimal if the problem has more than one solution.

Example

Figure 1.15 – Depth Limited Search

BREADTH FIRST SEARCH:


• Breadth-first search (BFS) is an algorithm for traversing or searching
tree or graph data structures.
• It starts at the tree root and explores all of the neighbor nodes at the present depth prior to moving on to the nodes at the next depth level (refer Figure 1.16).
• It is implemented using a FIFO queue.
Notation: d = the depth of the shallowest solution; n_i = the number of nodes in level i.

The performance measure of BFS is as follows:


o Completeness: It is a complete strategy, as it is guaranteed to find the goal state (if one exists and the branching factor is finite).
o Optimality: It gives an optimal solution if the cost of each step is the same.
o Space Complexity: The space complexity of BFS is O(b^d), i.e., it requires a huge amount of memory. Here, b is the branching factor and d denotes the depth of the shallowest solution.
o Time Complexity: The time complexity of BFS is also O(b^d), so it consumes a lot of time to reach the goal node for large instances.

Breadth First Search Algorithm:
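
The algorithm figure from the original is not reproduced here. A minimal queue-based sketch is given instead (the graph and node names are illustrative).

```python
# Breadth-first search with a FIFO queue. Expands the shallowest nodes first
# and reconstructs the path from a parent map.
from collections import deque

def breadth_first_search(graph, start, goal):
    if start == goal:
        return [start]
    frontier = deque([start])                   # FIFO queue
    parents = {start: None}                     # also acts as the visited set
    while frontier:
        node = frontier.popleft()               # expand the shallowest node first
        for child in graph.get(node, []):
            if child not in parents:
                parents[child] = node
                if child == goal:               # goal test when the node is generated
                    path = [child]
                    while parents[path[-1]] is not None:
                        path.append(parents[path[-1]])
                    return list(reversed(path))
                frontier.append(child)
    return None

# Usage on a small illustrative graph.
graph = {'S': ['A', 'D'], 'A': ['B'], 'D': ['G'], 'B': ['C'], 'C': [], 'G': []}
print(breadth_first_search(graph, 'S', 'G'))    # ['S', 'D', 'G']
```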

Advantages of Breadth-first Search:

 If a solution is available, BFS will provide it.


 If a problem has multiple solutions, BFS will find the one with the fewest steps.
Disadvantages of Breadth-first Search:
 It necessitates a large amount of memory.
 If the solution is located far from the root node, BFS will take a long time.

Example:
Which solution would BFS find to move from node S to node G if run on the graph below?

Solution

Path: S -> D -> G

ITERATIVE DEEPENING DEPTH-FIRST SEARCH / ITERATIVE DEEPENING SEARCH
 This search is a combination of BFS and DFS: BFS guarantees reaching the goal node, while DFS occupies less memory space.
 Therefore, iterative deepening search combines these two advantages of BFS and DFS to reach the goal node.
It gradually increases the depth limit from 0, 1, 2 and so on until the goal node is reached.

Figure 1.17 – Iterative Deepening Search

 In the above Figure 1.17, the goal node is H and the initial depth limit is [0-1].
 So, it will expand levels 0 and 1 and will terminate with the sequence A->B->C.

Further, when the depth limit is changed to [0-3], it again expands the nodes from level 0 up to level 3 and the search terminates with the sequence A->B->D->F->E->H, where H is the desired goal node.

Iterative deepening search Algorithm

 Explore the nodes in DFS order.
 Set a LIMIT variable with a limit value.
 Loop over the nodes up to the limit value, then increase the limit value and repeat.
 Terminate the search when the goal state is found. A sketch is given below.
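
A minimal sketch of iterative deepening search, built on a small internal depth-limited search, is shown below (the graph and max_depth cap are illustrative assumptions).

```python
# Iterative deepening search: run depth-limited search with the limit
# increased from 0 upward until the goal is found.

def iterative_deepening_search(graph, start, goal, max_depth=50):
    def dls(node, limit, path):                 # inner depth-limited search
        if node == goal:
            return path
        if limit == 0:
            return None
        for child in graph.get(node, []):
            found = dls(child, limit - 1, path + [child])
            if found is not None:
                return found
        return None

    for limit in range(max_depth + 1):          # limit = 0, 1, 2, ...
        result = dls(start, limit, [start])
        if result is not None:
            return result
    return None

# Usage on a small illustrative graph.
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': [], 'D': ['F'], 'E': ['H'], 'F': [], 'H': []}
print(iterative_deepening_search(graph, 'A', 'H'))      # ['A', 'B', 'E', 'H']
```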

The performance measure of Iterative deepening search
 Completeness: Iterative deepening search is complete when the branching factor is finite.
 Optimality: It gives an optimal solution when the path cost is a nondecreasing function of depth (e.g., when all step costs are equal); otherwise it is not always optimal.
 Space Complexity: Like DFS, it needs only O(b × d) space, which is linear in the depth.
 Time Complexity: It has O(b^d) time complexity.

Disadvantages of Iterative deepening search


 The drawback of iterative deepening search is that it seems wasteful, because it generates the same states multiple times; in practice the repeated work is relatively small, since most nodes lie in the deepest level.

Uniform Cost Search Algorithm
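
The original notes give only the heading here. Uniform-cost search expands the frontier node with the lowest path cost g(n), using a priority queue; a minimal sketch over a weighted graph follows (the graph and its edge costs are illustrative assumptions).

```python
# Uniform-cost search: always expand the frontier node with the lowest
# path cost g(n), using a priority queue (min-heap).
import heapq

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]            # entries are (g, node, path)
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost                   # goal test on expansion => optimal cost
        for child, step_cost in graph.get(node, []):
            new_cost = cost + step_cost
            if new_cost < best_cost.get(child, float('inf')):
                best_cost[child] = new_cost
                heapq.heappush(frontier, (new_cost, child, path + [child]))
    return None, float('inf')

# Usage on a small illustrative weighted graph.
graph = {
    'S': [('A', 1), ('D', 4)],
    'A': [('B', 2)],
    'B': [('G', 2)],
    'D': [('G', 5)],
    'G': [],
}
print(uniform_cost_search(graph, 'S', 'G'))     # (['S', 'A', 'B', 'G'], 5)
```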

The performance measure of Uniform-cost search


 Completeness: It is guaranteed to reach the goal state (provided every action cost is positive).
 Optimality: It gives the solution with the optimal path cost.
Space and time complexity: The worst-case space and time complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)), where C* is the cost of the optimal solution and ε is the minimum action cost.

Advantages of Uniform-cost Search:


o The path with the lowest cost is chosen at each step.
o UCS is complete if the number of states is finite and there is no loop of zero-cost actions.
o UCS is optimal provided no action has a negative cost.
Disadvantages of Uniform-cost Search:
o It is concerned only with the cost of the path, not with how close a node is to the goal, so it may explore many low-cost but irrelevant paths.

Example:
Which solution would UCS find to move from node S to node G if run on the graph
below?

Solution.

Path: S -> A -> B -> G

Cost: 5

BIDIRECTIONAL SEARCH
The strategy behind bidirectional search is to run two searches simultaneously: one forward from the initial state and one backward from the goal, hoping that the two searches will meet in the middle.

As soon as the two searches intersect one another, the bidirectional search terminates.

This search is implemented by replacing the goal test with a check of whether the frontiers of the two searches intersect; if they do, a solution has been found.
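
A minimal sketch of the idea, using breadth-first expansion from both ends of an undirected graph and stopping as soon as the two frontiers intersect, is given below (illustrative only; a real implementation must also be able to generate predecessors when searching backwards from the goal).

```python
# Bidirectional search: breadth-first from the start and from the goal at the
# same time; stop as soon as the two searches meet at a common node.
from collections import deque

def bidirectional_search(graph, start, goal):
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def build_path(meet):                       # stitch the two half-paths together
        path, n = [], meet
        while n is not None:
            path.append(n)
            n = parents_f[n]
        path.reverse()
        n = parents_b[meet]
        while n is not None:
            path.append(n)
            n = parents_b[n]
        return path

    while frontier_f and frontier_b:
        for frontier, parents, other in ((frontier_f, parents_f, parents_b),
                                         (frontier_b, parents_b, parents_f)):
            node = frontier.popleft()
            for child in graph.get(node, []):
                if child not in parents:
                    parents[child] = node
                    if child in other:          # the two searches intersect
                        return build_path(child)
                    frontier.append(child)
    return None

# Usage on a small undirected illustrative graph (edges listed in both directions).
graph = {'S': ['A'], 'A': ['S', 'B'], 'B': ['A', 'C'], 'C': ['B', 'G'], 'G': ['C']}
print(bidirectional_search(graph, 'S', 'G'))    # ['S', 'A', 'B', 'C', 'G']
```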

The performance measure of Bidirectional search


 Complete: Bidirectional search is complete (when both component searches are breadth-first).
 Optimal: It gives an optimal solution (under the same conditions as BFS, e.g., unit step costs).
 Time and space complexity: Bidirectional search has O(b^(d/2)) time and space complexity.
Disadvantage of Bidirectional Search
It requires a lot of memory space.

INFORMED (HEURISTIC) SEARCH STRATEGIES:
 Here, the algorithms have information on the goal state, which
helps in more efficient searching.
 This information is obtained by something called a heuristic.

Search Heuristics:
In an informed search, a heuristic is a function h(n) that estimates how close a state is to the goal state.

h(n) = estimated cost of the cheapest path from the state at node n to a goal state.

Types of Informed search algorithms

 Greedy Search
 A* Tree Search
 A* Graph Search

GREEDY SEARCH:
In greedy search, we expand the node that appears closest to the goal node. The "closeness" is estimated by a heuristic h(x).

Heuristic: A heuristic h is defined as:

h(x) = estimate of the distance of node x from the goal node.

The lower the value of h(x), the closer the node is to the goal.
Strategy: Expand the node closest to the goal state, i.e., expand the node with the lowest h value.
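
A minimal sketch of greedy (best-first) search using a priority queue ordered by h alone is shown below; the graph and heuristic values are illustrative assumptions.

```python
# Greedy best-first search: always expand the frontier node with the lowest
# heuristic value h(n), ignoring the path cost so far.
import heapq

def greedy_search(graph, h, start, goal):
    frontier = [(h[start], start, [start])]     # priority queue ordered by h
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in graph.get(node, []):
            if child not in visited:
                heapq.heappush(frontier, (h[child], child, path + [child]))
    return None

# Usage on a small illustrative graph with illustrative heuristic values.
graph = {'S': ['A', 'D'], 'A': ['B'], 'D': ['B', 'E'], 'B': ['C'], 'E': ['G'], 'C': [], 'G': []}
h = {'S': 7, 'A': 9, 'D': 5, 'B': 4, 'E': 3, 'C': 2, 'G': 0}
print(greedy_search(graph, h, 'S', 'G'))        # ['S', 'D', 'E', 'G']
```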

The performance measure of the Greedy Best-first search algorithm:


 Completeness: Greedy best-first (tree) search is incomplete even in a finite state space, since it can get stuck in loops.
 Optimality: It does not guarantee an optimal solution.
 Time and Space complexity: It has O(b^m) worst-case time and space complexity, where m is the maximum depth of the search tree. With a good heuristic function, the complexities can be reduced substantially.

 Advantage: Works well with informed search problems,
with fewer steps to reach a goal.
 Disadvantage: Can turn into unguided DFS in the worst case.

Example:
Question. Find the path from S to G using greedy search. The heuristic values h of each node are shown below the name of the node.

Solution

Starting from S, we can traverse to A(h=9) or D(h=5). We choose D, as it has the


lower heuristic cost. Now from D, we can move to B(h=4) or E(h=3). We choose E
with a lower heuristic cost. Finally, from E, we go to G(h=0). This entire traversal is
shown in the search tree below, in blue.

Path: S -> D -> E -> G

A* TREE SEARCH:
A* Tree Search, or simply known as A* Search, combines the strengths of uniform-cost
search and greedy search.

In this search, the evaluation function is the sum of the path cost used in UCS, denoted by g(x), and the heuristic used in greedy search, denoted by h(x). The summed cost is denoted by f(x).

Heuristic:
The following points should be noted with heuristics in A* search.
f(x) = g(x) + h(x)

 h(x) is called the forward cost and is an estimate of the distance of the
current node from the goal node.
 g(x) is called the backward cost and is the cumulative cost of a node
from the root node.
 A* search is optimal when, for every node, the forward cost h(x) underestimates the actual cost h*(x) of reaching the goal.
This property of the A* heuristic is called admissibility. Admissibility: 0 <= h(x) <= h*(x)

Strategy: Choose the node with the lowest f(x) value.

The performance measure of A* search


Completeness: A* search is complete; it is guaranteed to reach the goal node (if one exists, the branching factor is finite, and action costs are bounded above zero).
Optimality: An admissible (underestimating) heuristic always gives an optimal solution.
Space and time complexity: A* search has O(b^d) worst-case space and time complexity.
Example: Find the path to reach from S to G using A* search.

Solution.
Starting from S, the algorithm computes g(x) + h(x) for all nodes in the fringe at
each step, choosing the node with the lowest sum.

The entire work is shown in the table below

A* algorithm
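
The A* algorithm figure from the original is not reproduced here. A minimal A* tree-search sketch using f(x) = g(x) + h(x) is given instead; the weighted graph and heuristic values are illustrative assumptions (the heuristic shown is admissible).

```python
# A* tree search: expand the frontier node with the lowest f = g + h, where
# g is the backward (path) cost and h the forward (heuristic) cost.
import heapq

def a_star_tree_search(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]  # entries are (f, g, node, path)
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for child, step_cost in graph.get(node, []):
            new_g = g + step_cost
            heapq.heappush(frontier, (new_g + h[child], new_g, child, path + [child]))
    return None, float('inf')

# Usage on a small illustrative weighted graph with an admissible heuristic.
graph = {
    'S': [('A', 1), ('B', 4)],
    'A': [('B', 2), ('C', 5)],
    'B': [('C', 2)],
    'C': [('G', 3)],
    'G': [],
}
h = {'S': 7, 'A': 6, 'B': 4, 'C': 2, 'G': 0}
print(a_star_tree_search(graph, h, 'S', 'G'))   # (['S', 'A', 'B', 'C', 'G'], 8)
```

Note that in this tree-search sketch the same node (e.g., B or C) can be placed on the frontier along several branches; the graph-search variant in the next subsection removes that duplication.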

A* Graph Search:

 In A* tree search, if the same node is reachable in different branches of the search tree, A* search might explore both of those branches, thus wasting time.
 A* Graph Search, or simply Graph Search, removes this limitation by adding the rule: do not expand the same node more than once.
 Heuristic: Graph search is optimal when, for every pair of successive nodes A and B, the drop in forward cost, given by h(A) – h(B), is less than or equal to the backward cost of the step between those two nodes, g(A -> B). This property of the graph-search heuristic is called consistency.
 Consistency: h(A) – h(B) <= g(A -> B)
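
A minimal sketch of the same search with a closed set, so that no node is expanded more than once, is shown below (reusing the illustrative graph and consistent heuristic from the A* sketch above).

```python
# A* graph search: identical to A* search except that a node is never
# expanded more than once; with a consistent heuristic it remains optimal.
import heapq

def a_star_graph_search(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]  # entries are (f, g, node, path)
    expanded = set()                            # closed set of expanded nodes
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node in expanded:
            continue                            # rule: do not expand a node twice
        expanded.add(node)
        if node == goal:
            return path, g
        for child, step_cost in graph.get(node, []):
            if child not in expanded:
                new_g = g + step_cost
                heapq.heappush(frontier, (new_g + h[child], new_g, child, path + [child]))
    return None, float('inf')

# Usage on the same illustrative weighted graph and consistent heuristic.
graph = {
    'S': [('A', 1), ('B', 4)],
    'A': [('B', 2), ('C', 5)],
    'B': [('C', 2)],
    'C': [('G', 3)],
    'G': [],
}
print(a_star_graph_search(graph, {'S': 7, 'A': 6, 'B': 4, 'C': 2, 'G': 0}, 'S', 'G'))
# (['S', 'A', 'B', 'C', 'G'], 8)
```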

Example:
Question. Use graph search to find a path from S to G in the following graph.

Solution

Path: S -> D -> B -> E -> G

Cost: 7
