
AI UNIT 2

The document introduces various data structures and search algorithms through a narrative set in the Kingdom of Data, where different regions represent stacks, queues, trees, and graphs. It outlines group assignments for simulations of search techniques such as Depth-First Search (DFS) and Breadth-First Search (BFS), and discusses the characteristics, pros, and cons of various search algorithms, including uninformed and informed strategies. Additionally, it explains the concept of problem-solving agents and the steps involved in formulating and executing search problems.

Uploaded by mainak1331sen

Unit 2 - Introduction to Data Structure and Search Algorithms

Kaviyaraj R
The Story
In the mystical Kingdom of Data, a priceless treasure has been hidden. However, the kingdom is divided into four unique regions, each organized in its own special way:
• The Stacked Tower (stacks), where items are piled one on top of the other,
• The Queueing Quay (queues), where everyone lines up in order,
• The Tree of Knowledge (trees), which branches out like a family tree, and
• The sprawling city of Graphopolis (graphs), with complex interconnections.
Your mission is to use the unique properties of each region and the power of search algorithms to uncover clues and ultimately find the treasure.
Group Formation & Role Assignment
Assign Regions:
• Group 1: Stacked Tower (DFS simulation)
• Group 2: Queueing Quay (BFS simulation)
• Group 3: Tree of Knowledge (tree traversals / binary search simulation)
• Group 4: Graphopolis (general graph search – choose DFS, BFS, or Dijkstra's if weights are introduced)
Analogies:
• Stack: Imagine a pile of plates; the last plate you put on is the first you remove.
• Queue: Think of a line at a ticket counter; the first person in line gets served first.
• Tree: Visualize a family tree where you start from the root and branch out.
• Graph: Consider a social network where everyone is connected through various links.
Highlight Search Algorithms:
Explain that many search techniques (like Depth-First Search for stacks or Breadth-First Search for queues) are designed to work naturally with these structures.
Questions
▪ For the Stack groups: How did using a last-in-first-out approach affect your search? What did you do when you reached a dead end?
▪ For the Queue groups: How did processing clues in the order received help (or hinder) your search?
▪ For the Tree groups: How did your chosen traversal order affect the outcome? Would a different order have worked better?
▪ For the Graph groups: How did you decide which path to follow? How did your chosen search algorithm help you cover more ground?
General Search Algorithms
General search algorithms are systematic procedures used to traverse or explore a problem's state space in order to find a goal state or an optimal solution.
Objective: To transform a problem into a structured search problem and then apply an algorithm that systematically examines possible states until a solution is found.
A search problem is defined by five components:
• Initial State: The starting point of the problem.
• Actions: All the possible moves or transitions available from any given state.
• Transition Model: A description (or function) that defines what state results from applying a given action to a state.
• Goal Test: A condition that determines whether a state is a solution.
• Path Cost: A numerical cost associated with reaching a particular state from the initial state.
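The five components above can be made concrete in a short sketch. The toy route map `ROADS`, its costs, and the `RouteProblem` class below are invented for illustration, not part of the original material.

```python
# A minimal sketch of a search problem, assuming a made-up
# route-finding task (states are towns, actions are road hops).
ROADS = {  # state -> {neighbor: step cost}
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "D": 5},
    "C": {"D": 1},
    "D": {},
}

class RouteProblem:
    def __init__(self, initial, goal):
        self.initial = initial          # Initial State
        self.goal = goal

    def actions(self, state):           # Actions available in a state
        return list(ROADS[state])

    def result(self, state, action):    # Transition Model
        return action                   # moving toward a neighbor lands on it

    def goal_test(self, state):         # Goal Test
        return state == self.goal

    def step_cost(self, state, action): # used to accumulate Path Cost
        return ROADS[state][action]

problem = RouteProblem("A", "D")
print(problem.actions("A"))
print(problem.goal_test("D"))
```

Any of the search algorithms in this unit can then be written against this one interface, differing only in the order they expand states.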
Types of Search Algorithms
Uninformed (Blind) Search Algorithms:
Breadth-First Search (BFS): Explores all nodes at the current depth level before moving to the next level.
Pros: Complete (if the branching factor is finite) and optimal when the path cost is uniform.
Cons: Can require large amounts of memory.
Depth-First Search (DFS): Explores as far as possible along one branch before backtracking.
Pros: Uses less memory than BFS.
Cons: Can get trapped in deep or infinite paths; not guaranteed to find the optimal solution.
Iterative Deepening Search (IDS): Combines the benefits of BFS and DFS by gradually increasing the depth limit.
Pros: Memory-efficient like DFS, complete like BFS.
Informed (Heuristic) Search Algorithms:
Greedy Best-First Search: Selects the next node based solely on the heuristic (an estimate of the distance to the goal).
Pros: Often finds a solution quickly.
Cons: Not necessarily optimal; may overlook shorter paths.
A* Search: Uses the function f(n) = g(n) + h(n), where g(n) is the cost from the initial state to node n, and h(n) is the heuristic estimate from n to the goal.
Pros: With an admissible heuristic, A* is both complete and optimal.
Cons: Can be computationally intensive for large state spaces.
Problem-Solving Agents
A problem-solving agent is a type of intelligent agent that uses search to find a sequence of actions leading from an initial state to a goal state.
▪ Operates in a (relatively) predictable environment.
▪ Performs goal-directed behavior: it knows its objective clearly.
▪ Uses an internal representation of states, actions, and transitions.
Steps in Problem-Solving
Goal Formulation
▪ Decide what the agent's goals are, e.g., "reach location X."
Problem Formulation
▪ Define the initial state (where you start), the actions available, the transition model (the result of each action), and the goal test (the conditions for success).
▪ Sometimes also define the path cost function if cost minimization is necessary.
Search
▪ Explore the state space systematically to find a path from the initial state to the goal state.
Solution Execution
▪ Once a path (sequence of actions) is found, the agent executes these actions in the environment.
Example:
A robot wants to go from point A to point B.
Goal: Arrive at point B.
Problem Formulation: The robot's initial coordinates (x0, y0), allowed moves (e.g., up, down, left, right), obstacle constraints, and a goal test (reaching (xB, yB)).
Search: Use a systematic approach to find a path around obstacles.
Execution: The robot travels along the path found.
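The robot example can be worked end to end with a BFS-style search. The 4x4 grid, obstacle layout, and start/goal coordinates below are made-up assumptions for the demonstration.

```python
from collections import deque

# Hypothetical 4x4 grid: 0 = free cell, 1 = obstacle.
GRID = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def find_path(start, goal):
    """Search step: BFS from start, recording each cell's parent."""
    frontier = deque([start])
    parent = {start: None}            # doubles as the visited set
    while frontier:
        x, y = frontier.popleft()
        if (x, y) == goal:            # goal test
            path, node = [], goal
            while node is not None:   # walk parents back to the start
                path.append(node)
                node = parent[node]
            return path[::-1]
        for dx, dy in MOVES:          # transition model: one step per move
            nx, ny = x + dx, y + dy
            if (0 <= nx < 4 and 0 <= ny < 4 and GRID[nx][ny] == 0
                    and (nx, ny) not in parent):
                parent[(nx, ny)] = (x, y)
                frontier.append((nx, ny))
    return None

path = find_path((0, 0), (2, 2))
print(path)
```

Because BFS expands level by level, the path it returns uses the fewest moves that the obstacles allow; execution would then be the robot replaying that coordinate list.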
Searching for Solutions
What is Search?
Search is the process of expanding or exploring possible states in a structured way to discover a path (action sequence) that satisfies a goal. The search space can be represented as a tree or graph of states, where edges link states reachable by a single action.
Elements of Search
State Space: All states reachable from the initial state by applying any sequence of actions.
Nodes and Edges: A node represents a state (plus any additional data such as path cost, depth, etc.); an edge represents an action that takes you from one state to another.
Frontier (Open List): A data structure that holds the unexplored nodes during the search process.
Visited / Explored Set (Closed List): A record of nodes (states) that have been explored, kept to avoid revisiting and looping.
Control Strategies
A control strategy (or search strategy) refers to how we pick the next node from the frontier to expand. It determines the order of exploration and has a major impact on:
▪ Time Complexity: How quickly we find a solution or exhaust the space.
▪ Space Complexity: How large the frontier (open list) can grow.
▪ Completeness: Whether the strategy can find a solution if one exists.
▪ Optimality: Whether it can find the best (least-cost) solution.
Examples of control strategies:
▪ FIFO queue: BFS
▪ LIFO stack: DFS
▪ Priority queue ordered by path cost: UCS
▪ Priority queue ordered by cost + heuristic: A*
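One way to see that the control strategy is nothing more than the frontier's pop policy: the same search loop yields BFS or DFS depending on which end of the frontier we pop. The tiny graph here is invented for the demonstration.

```python
from collections import deque

GRAPH = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}  # toy graph

def search(start, goal, frontier_pop):
    """Generic graph search; the pop policy IS the control strategy."""
    frontier = deque([start])
    explored = set()
    order = []                      # expansion order, to compare strategies
    while frontier:
        node = frontier_pop(frontier)
        if node in explored:
            continue
        explored.add(node)
        order.append(node)
        if node == goal:
            return order
        for child in GRAPH[node]:
            frontier.append(child)
    return order

bfs_order = search("S", "G", lambda f: f.popleft())  # FIFO -> BFS
dfs_order = search("S", "G", lambda f: f.pop())      # LIFO -> DFS
print(bfs_order, dfs_order)
```

Swapping the deque for a priority queue keyed on path cost gives UCS, and keying on cost plus heuristic gives A*, without touching the rest of the loop.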
Uninformed (Blind) Search
Breadth-First Search (BFS)
▪ An uninformed search that explores all nodes at a particular depth (or "distance" in terms of edges) before moving to deeper nodes.
▪ Uses a queue (FIFO) to track the frontier.
▪ Finds the shortest path in terms of number of hops if the graph is unweighted.
▪ Potentially large memory usage in wide or deep state spaces.
▪ Guaranteed to find the shortest path (fewest edges) in an unweighted scenario.
▪ Simple to implement (queue-based).
▪ Can blow up in time/memory if the search space is large.
▪ Ignores actual path costs (just counts edges as 1 each).
Where Used
▪ Small or moderate grids/puzzles with uniform cost.
▪ Situations where you only need the fewest-edges solution.
Uniform-Cost Search (UCS) / Dijkstra’s
▪ An uninformed search that expands the node with the lowest cumulative path cost so far.
▪ Often implemented with a priority queue, using the cost as priority.
▪ Finds the optimal path with respect to actual path cost if all costs are non-negative.
▪ Equivalent to Dijkstra’s algorithm in a general graph.
▪ Guaranteed to find a minimum-cost path.
▪ Works for weighted grids/graphs (where edges have
varying costs).
▪ Can expand many partial paths if costs are similar,
leading to high time complexity.

Where Used
▪ Routing in graphs with varying step costs
(e.g., different terrain costs).
▪ Any unweighted problem if you want a cost-based
approach (although BFS is simpler if all edges = 1).
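A minimal UCS sketch, assuming a small invented weighted graph; note how the cheapest path wins even though it has more hops than the direct routes.

```python
import heapq

# Invented weighted graph: edge costs vary, so BFS's hop count misleads.
COSTS = {
    "S": {"A": 1, "B": 5},
    "A": {"B": 1, "G": 10},
    "B": {"G": 2},
    "G": {},
}

def uniform_cost_search(start, goal):
    frontier = [(0, start, [start])]     # (path cost so far, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)  # cheapest path first
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for child, step in COSTS[node].items():
            heapq.heappush(frontier, (cost + step, child, path + [child]))
    return None

print(uniform_cost_search("S", "G"))  # → (4, ['S', 'A', 'B', 'G'])
```

The two-hop routes S-A-G (cost 11) and S-B-G (cost 7) are both beaten by the three-hop S-A-B-G (cost 4), which is exactly the case BFS gets wrong.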
Depth-First Search (DFS)
▪ An uninformed search that always expands the deepest node in the frontier first.
▪ Uses a stack (LIFO) or recursion.
▪ May find a solution quickly if it is “down” one path but may also get stuck exploring long or irrelevant
branches.
▪ Not guaranteed the shortest or minimal-cost path.
▪ Low memory usage (only tracks path + visited).
▪ Can be fast if the goal is along the first branch explored.
▪ No guarantee of finding a short or optimal path.
▪ Potentially visits deep irrelevant branches.
Where Used
▪ Depth-based puzzle solving, or if you only need any
solution quickly and space is limited.
▪ Often part of other algorithms (e.g., Depth-Limited, ID-DFS).
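A recursive DFS sketch on an invented graph with a deep dead-end branch, showing the backtracking behavior described above.

```python
# Invented graph: one long irrelevant branch before the goal branch.
GRAPH = {
    "S": ["A", "B"],
    "A": ["A1"], "A1": ["A2"], "A2": [],   # deep dead end
    "B": ["G"], "G": [],
}

def dfs(node, goal, visited=None):
    """Recursive DFS; returns the first path found, not necessarily shortest."""
    if visited is None:
        visited = set()
    visited.add(node)                      # avoid revisiting / looping
    if node == goal:
        return [node]
    for child in GRAPH[node]:
        if child not in visited:
            sub = dfs(child, goal, visited)
            if sub:                        # a child's subtree reached the goal
                return [node] + sub
    return None                            # dead end: backtrack

print(dfs("S", "G"))
```

Here DFS first burns through the whole A-A1-A2 branch before backtracking to B, illustrating both the low memory footprint (just the recursion stack and visited set) and the risk of deep irrelevant exploration.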
Depth-Limited Search (DLS)
▪ A variation of DFS that cuts off exploration deeper than a specified depth limit.
▪ If the goal is not found within that depth, the search fails for that limit.
▪ Avoids DFS's risk of going infinitely deep in certain graphs.
▪ Controls the maximum exploration depth.
▪ Can reduce time/memory if you suspect the goal is within a certain depth.
▪ May fail if the goal is below the cutoff depth.
▪ Not guaranteed to find the shortest path, or any solution if the limit is too small.
Where Used
▪ When you have a known or estimated depth limit.
▪ As a component of Iterative Deepening Search.
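DLS, and the iterative-deepening loop built on it, can be sketched as follows. The toy graph is invented and acyclic; a production version would also guard against cycles.

```python
GRAPH = {"S": ["A", "B"], "A": ["C"], "B": [], "C": ["G"], "G": []}

def depth_limited(node, goal, limit):
    """DLS: give up (return None) once the depth limit is exhausted."""
    if node == goal:
        return [node]
    if limit == 0:
        return None                    # cutoff reached, fail for this limit
    for child in GRAPH[node]:
        sub = depth_limited(child, goal, limit - 1)
        if sub:
            return [node] + sub
    return None

def iterative_deepening(start, goal, max_depth=10):
    """IDS: rerun DLS with limits 0, 1, 2, ... until a solution appears."""
    for limit in range(max_depth + 1):
        path = depth_limited(start, goal, limit)
        if path:
            return limit, path
    return None

print(depth_limited("S", "G", 2))      # goal sits at depth 3, so this fails
print(iterative_deepening("S", "G"))
```

The repeated shallow passes look wasteful, but because each level roughly dominates the cost of all previous ones, IDS keeps DFS-like memory while regaining BFS-like completeness.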
Generate & Test (Naive Random Walk)
▪ A naive, uninformed approach that tries random paths from the start to see if you eventually reach the goal.
▪ Often repeated multiple times, or up to a maximum number of attempts.
▪ No systematic expansion or cost consideration.
▪ May succeed by luck, but can also wander or cycle.
▪ Very simple to code and conceptually straightforward.
▪ Might quickly find a solution in a small space if you're "lucky."
▪ Extremely inefficient in large or complex spaces.
▪ No guarantee of finding a path or an optimal route.
Where Used
▪ Teaching or demonstrating the most basic "try a random solution" approach.
▪ Rarely used in serious pathfinding.
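A minimal generate-and-test random walk, assuming a tiny invented graph; the fixed seed only makes the demonstration repeatable.

```python
import random

GRAPH = {"S": ["A", "B"], "A": ["S", "G"], "B": ["S"], "G": []}  # toy graph

def random_walk(start, goal, max_steps=50):
    """One attempt: wander randomly; succeed only if we stumble on the goal."""
    node, path = start, [start]
    for _ in range(max_steps):
        if node == goal:
            return path
        if not GRAPH[node]:
            return None                 # dead end with no outgoing edges
        node = random.choice(GRAPH[node])
        path.append(node)
    return None

def generate_and_test(start, goal, attempts=100):
    """Generate candidate walks and test each one; no systematic expansion."""
    for _ in range(attempts):
        path = random_walk(start, goal)
        if path:
            return path
    return None

random.seed(0)  # repeatable demonstration
path = generate_and_test("S", "G")
print(path)
```

Even on this four-node graph the returned walk may loop through S several times before reaching G, which is exactly the wandering behavior the bullets above describe.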
Informed Search

Best-First Search (Greedy)
▪ An informed search that expands the node closest to the goal based on a heuristic h(n) (e.g., straight-line or Manhattan distance).
▪ Ignores the cost so far; only uses the heuristic.
▪ Often fast to find a path, focusing expansions near the goal.
▪ Not guaranteed to find an optimal or even feasible path if the heuristic is misleading.
▪ Simple: just sort the frontier by h(n).
▪ Fewer expansions than BFS in large spaces if the heuristic is good.
▪ Can be tricked by obstacles or by ignoring real costs.
▪ Not guaranteed the shortest route in actual cost or steps.
Where Used
▪ Large state spaces where a heuristic can guide expansions, but optimal cost is not crucial.
▪ Approximate or "greedy" solutions.
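A greedy best-first sketch; the graph and the heuristic values in `H` are invented, and the frontier is ordered by h(n) alone, with path cost ignored entirely.

```python
import heapq

# Invented graph plus a made-up heuristic h(n): estimated distance to G.
GRAPH = {"S": ["A", "B"], "A": ["G"], "B": ["C"], "C": ["G"], "G": []}
H = {"S": 3, "A": 1, "B": 2, "C": 1, "G": 0}

def greedy_best_first(start, goal):
    frontier = [(H[start], start, [start])]   # ordered by h(n) only
    explored = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for child in GRAPH[node]:
            heapq.heappush(frontier, (H[child], child, path + [child]))
    return None

print(greedy_best_first("S", "G"))
```

The search beelines through A because h(A) is smallest, never asking what the edges actually cost; replacing the key H[child] with accumulated cost plus H[child] is all it takes to turn this into A*.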
A* Search
▪ An informed search that uses f(n) = g(n) + h(n), where g(n) = cost so far and h(n) = heuristic estimate to the goal.
▪ With an admissible heuristic (one that never overestimates), A* finds the optimal path in cost.
▪ Typically fewer expansions than UCS because it prunes irrelevant routes.
▪ The quality of the heuristic strongly influences performance.
▪ Optimal if the heuristic is admissible and consistent.
▪ Usually expands fewer nodes than UCS.
▪ Requires a good heuristic to be efficient.
▪ If the heuristic is bad or overestimates, you can lose optimality or slow down.
Where Used
▪ Common in games, robotics, and map routing with good heuristics (e.g., Euclidean or Manhattan distance on grids).
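An A* sketch on an invented weighted graph with an admissible heuristic `H` (each value is at or below the true remaining cost); the frontier is ordered by f(n) = g(n) + h(n).

```python
import heapq

# Invented weighted graph and an admissible heuristic (never overestimates).
COSTS = {"S": {"A": 1, "B": 4}, "A": {"B": 2, "G": 12}, "B": {"G": 3}, "G": {}}
H = {"S": 6, "A": 5, "B": 3, "G": 0}

def a_star(start, goal):
    frontier = [(H[start], 0, start, [start])]  # (f = g + h, g, node, path)
    explored = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)  # lowest f(n) first
        if node == goal:
            return g, path
        if node in explored:
            continue
        explored.add(node)
        for child, step in COSTS[node].items():
            g2 = g + step                           # cost so far
            heapq.heappush(frontier, (g2 + H[child], g2, child, path + [child]))
    return None

print(a_star("S", "G"))
```

Compared with the UCS sketch earlier in this unit, the only change is adding H[child] to the priority; that one term is what lets A* skip paths UCS would have expanded.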
