AI_Unit 2 notes
1. Uninformed Search Strategies
Uninformed search strategies operate without any additional information about the problem
beyond the problem definition. They explore the search space systematically until they find a
solution.
o Breadth-First Search (BFS):
Description: Expands all nodes at the current depth level before moving
deeper.
o Uniform-Cost Search:
Description: Expands the least costly node first, ensuring the lowest-cost
solution.
Complexity: Similar to BFS, but may require more computational resources due
to cost evaluation.
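The two strategies above can be sketched in Python. This is a minimal illustration, not from the notes; the graph format (adjacency dicts with edge costs) and function names are my own:

```python
import heapq
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expands all nodes at one depth before going deeper."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nbr in graph.get(path[-1], {}):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(path + [nbr])
    return None

def uniform_cost_search(graph, start, goal):
    """Expands the least-cost node first; returns the lowest-cost path."""
    frontier = [(0, [start])]          # priority queue ordered by path cost
    best = {start: 0}
    while frontier:
        cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return cost, path
        for nbr, step in graph.get(node, {}).items():
            new_cost = cost + step
            if new_cost < best.get(nbr, float('inf')):
                best[nbr] = new_cost
                heapq.heappush(frontier, (new_cost, path + [nbr]))
    return None
```

Note how the two differ only in the frontier: BFS uses a FIFO queue (fewest edges), UCS a priority queue (lowest cost), so on a weighted graph they can return different paths.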
2. Informed Search Strategies
Informed searches use problem-specific knowledge (heuristics) to guide the search, reducing the
number of nodes explored and finding solutions more efficiently.
Heuristics:
o Definition: A heuristic is an estimate of the cost from a current state to the goal.
o Purpose: Heuristics prioritize paths that are likely to reach the goal sooner or more
cost-effectively.
o Greedy Best-First Search:
Description: Selects the node with the lowest heuristic value, indicating
proximity to the goal.
Characteristics: Fast but not always optimal, as it doesn’t consider path cost.
Use Case: Suitable for problems where quick, approximate solutions are
acceptable.
o A* Search:
Description: Combines the path cost g(n) and the heuristic estimate h(n) for
each node, selecting the node with the lowest f(n) = g(n) + h(n).
Use Case: Widely used in pathfinding and planning, where optimality is crucial.
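A* can be written as uniform-cost search with the frontier ordered by f(n) = g(n) + h(n). A minimal sketch (the graph format and heuristic are illustrative assumptions, not from the notes):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: orders the frontier by f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, [start])]   # (f, g, path)
    best_g = {start: 0}
    while frontier:
        f, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return g, path
        for nbr, step in graph.get(node, {}).items():
            g2 = g + step
            if g2 < best_g.get(nbr, float('inf')):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h(nbr), g2, path + [nbr]))
    return None
```

With h(n) = 0 for all n, this degenerates to uniform-cost search, which shows why A* is optimal whenever h never overestimates.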
3. Heuristic Functions
Heuristics play a central role in informed search and optimization, helping to guide searches
towards promising regions in the search space.
Admissible Heuristic:
o Definition: A heuristic is admissible if it never overestimates the true cost of
reaching the goal; A* with an admissible heuristic is guaranteed to return an
optimal solution.
Consistent Heuristic:
o Definition: A heuristic is consistent if, for every node n and successor n',
h(n) ≤ c(n, n') + h(n'), i.e., the estimate never decreases by more than the
step cost between the two nodes.
o Example: In graph search, consistent heuristics ensure that once a node is expanded,
the optimal path to it is found.
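The consistency condition h(n) ≤ c(n, n') + h(n') can be checked mechanically over every edge of a graph. A small sketch, assuming the same adjacency-dict graph format used above (my own convention, not from the notes):

```python
def is_consistent(graph, h):
    """Check h(n) <= c(n, n') + h(n') for every edge (n, n')."""
    return all(h[n] <= c + h[m]
               for n, nbrs in graph.items()
               for m, c in nbrs.items())
```

Every consistent heuristic is also admissible (the reverse does not hold), which is why consistency is the stronger property to verify.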
4. Local Search
Local search focuses on finding solutions by exploring neighboring states and is particularly useful
in optimization problems with large or infinite state spaces.
o Hill Climbing: Repeatedly moves to the best neighboring state and stops when
no neighbor improves; fast but can get stuck in local optima, plateaus, and
ridges.
o Simulated Annealing: Occasionally accepts worse moves with a probability that
decreases over time (the "temperature"), which helps escape local optima.
o Genetic Algorithms: Maintain a population of candidate solutions and evolve
them using selection, crossover, and mutation.
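Hill climbing is the simplest of these to sketch. A minimal version for maximization (the objective and neighbor functions here are illustrative assumptions):

```python
def hill_climbing(f, neighbors, start):
    """Greedy local search: move to the best neighbor until no improvement."""
    current = start
    while True:
        best = max(neighbors(current), key=f, default=current)
        if f(best) <= f(current):
            return current            # local maximum reached
        current = best
```

Simulated annealing would differ only in the acceptance rule: a worse neighbor is still accepted with probability exp(-Δ/T), where T decreases over time.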
5. Optimization Problems
Optimization problems seek to find the “best” solution based on a specific objective or set of
objectives, often in scenarios where multiple solutions exist.
o Gradient Descent: Iteratively steps in the direction of steepest descent of the
objective function; widely used for continuous optimization.
o Linear Programming: Optimizes a linear objective subject to linear constraints;
solvable efficiently by methods such as simplex.
o Evolutionary Algorithms: Population-based stochastic search methods, inspired
by natural evolution, useful when gradients are unavailable.
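Gradient descent reduces to one repeated update, x ← x − η·∇f(x). A minimal one-dimensional sketch (the learning rate and example objective are illustrative choices, not from the notes):

```python
def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Repeatedly step against the gradient to minimize an objective."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)          # move downhill by lr * slope
    return x
```

For f(x) = (x − 2)², the gradient is 2(x − 2), so starting from x = 0 the iterates converge geometrically toward the minimum at x = 2.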
6. Searching with Partial Observations
In partially observable environments, the agent cannot directly observe the complete state
and must reason over sets of possible states.
Characteristics:
o Uncertain Environment: The agent only has partial visibility or information about
the state space.
o Belief States: Instead of representing a single state, the agent keeps track of a set of
possible states, known as a belief state, to estimate where it might be.
Strategies:
o Sensor-Based Search: The agent uses limited sensory input to update its belief
state.
Example:
o Robot Navigation: A robot may lack full information about an environment (e.g.,
obstacles or walls) and must use sensors to infer its location while navigating.
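A belief-state update has two steps: predict where each possible state leads under the action, then keep only states consistent with the new percept. A minimal sketch (the corridor world, transition, and sensor models are my own illustrative assumptions):

```python
def update_belief(belief, action, percept, transition, sensor):
    """Filter a belief state: apply the action, then keep states matching the percept."""
    predicted = {transition(s, action) for s in belief}
    return {s for s in predicted if sensor(s) == percept}
```

For example, a robot in a 5-cell corridor that moves right and senses "no wall ahead" can rule out being at the far end, shrinking its belief state.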
7. Constraint Satisfaction Problems (CSPs)
A Constraint Satisfaction Problem is a problem where the solution must satisfy a set of
constraints or restrictions. CSPs are commonly used for tasks that involve assigning values to
variables under specific conditions, such as puzzles, scheduling, and layout design.
Components of a CSP:
o Variables: Elements that need to be assigned values (e.g., colors in a map coloring
problem).
o Domains: The set of values each variable may take (e.g., the available colors).
o Constraints: Rules that restrict combinations of values for the variables (e.g.,
adjacent regions must have different colors).
Types of Constraints:
o Binary Constraints: Apply between pairs of variables (e.g., "If X is blue, Y cannot be
blue").
o Global Constraints: Apply to a set of variables (e.g., "All variables must have unique
values").
Solving Techniques:
o Heuristic Approaches: Use heuristics like the Minimum Remaining Values (MRV) to
improve efficiency.
Examples:
o Sudoku: Each cell is a variable with the domain (1-9), and constraints prevent
duplicate values in rows, columns, and boxes.
o Timetable Scheduling: Variables represent time slots, with constraints like “no two
events at the same time for the same room.”
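A binary CSP can be represented as a map from variable pairs to predicate functions, and a partial assignment checked against it. A minimal sketch, using the Australia map-coloring regions as an illustrative example (not from the notes):

```python
def consistent(assignment, constraints):
    """True if no constraint between two assigned variables is violated."""
    return all(ok(assignment[x], assignment[y])
               for (x, y), ok in constraints.items()
               if x in assignment and y in assignment)
```

Checking partial assignments this way is the core test inside backtracking search: an assignment is extended only while `consistent` still holds.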
8. Constraint Propagation
Constraint Propagation is a technique used within CSPs to further reduce the search space by
applying constraints repeatedly until no further changes are possible.
Concept:
o Propagation: Starts with an initial set of constraints, and as variables are assigned
values, constraints propagate to neighboring variables, removing invalid options.
o Goal: Reduce the search space by eliminating choices that would lead to conflicts
early on, making the search process faster and more efficient.
Techniques:
Definition: A variable is arc-consistent if, for every value in its domain, there is
a consistent value in the domain of each connected variable.
Node Consistency: Ensures that individual variable domains satisfy any unary
constraints.
o Domain Reduction: Values in the domains of variables are reduced as constraints are
applied, which can simplify or solve the problem without further search.
Example:
o Map Coloring: In a map coloring CSP, constraint propagation can eliminate invalid
colors for each region as assignments are made, making it easier to find a valid
coloring without extensive backtracking.
9. Backtracking Search
Backtracking search assigns values to variables one at a time and backtracks as soon as an
assignment violates a constraint.
Key Concepts:
o Efficiency Techniques: Variable-ordering heuristics such as Minimum Remaining
Values (MRV), value-ordering heuristics, and forward checking reduce the number
of assignments explored.
Example Applications:
o Sudoku and N-Queens Problem: Places values or objects while ensuring constraints
are not violated.
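The assign-check-undo pattern is easiest to see on N-Queens. A minimal sketch that places one queen per row and backtracks on column or diagonal conflicts:

```python
def solve_n_queens(n):
    """Backtracking: place one queen per row, undo the placement on conflict."""
    solutions = []
    def place(cols, row):
        if row == n:
            solutions.append(list(cols))
            return
        for col in range(n):
            # a column is safe if no earlier queen shares it or a diagonal
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                cols.append(col)      # assign
                place(cols, row + 1)  # recurse
                cols.pop()            # undo (backtrack)
    place([], 0)
    return solutions
```

Representing a placement as one column index per row builds the "one queen per row" constraint into the encoding, so only column and diagonal conflicts need explicit checks.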
10. Game Playing
Game playing in AI involves creating agents that can play games like chess, tic-tac-toe, and Go.
These games are modeled as search problems where agents aim to make optimal moves.
Characteristics:
o Competitive Environment: Games typically involve two opposing players (agent and
opponent).
o Zero-Sum: In many games, one player’s gain is equivalent to the other’s loss.
Representation of Games:
o Game Trees: Nodes represent game states, and edges represent possible moves.
Minimax Algorithm:
o Description: Minimax is a decision rule for minimizing the possible loss in a worst-
case scenario.
o Function: MAX nodes choose the child with the highest value, MIN nodes (the
opponent) choose the lowest, and values propagate up from the leaves.
o Outcome: The algorithm selects moves that yield the best guaranteed result,
assuming the opponent also plays optimally.
Complexity:
o Time Complexity: O(b^d), where b is the branching factor and d is the depth of
the game tree.
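The minimax rule fits in a few lines of recursion. A minimal sketch over an explicit game tree (the tree encoding and example values in the test are illustrative assumptions, not from the notes):

```python
def minimax(state, depth, maximizing, moves, evaluate):
    """Return the best achievable value assuming optimal play by both sides."""
    options = moves(state)
    if depth == 0 or not options:     # leaf or depth cutoff
        return evaluate(state)
    values = (minimax(s, depth - 1, not maximizing, moves, evaluate)
              for s in options)
    return max(values) if maximizing else min(values)
```

The `maximizing` flag alternates at every ply, which is exactly the "opponent also plays optimally" assumption stated above.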
11. Alpha-Beta Pruning
Alpha-Beta Pruning is an optimization for the Minimax algorithm that reduces the number of nodes
evaluated, making it feasible to search deeper in the game tree.
Concept:
o Alpha: the best value the maximizing player can guarantee so far.
o Beta: the best value the minimizing player can guarantee so far.
How It Works:
o As the algorithm evaluates each node, it updates alpha and beta values.
o Pruning: If a move at a node cannot possibly alter the outcome, that branch is cut off.
Effectiveness:
o With good move ordering, alpha-beta reduces the effective time complexity from
O(b^d) to roughly O(b^(d/2)), allowing the search to look about twice as deep.
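Alpha-beta adds the two bounds to plain minimax and cuts a branch as soon as they cross. A sketch using the same explicit-tree convention as the minimax example (an illustrative assumption, not from the notes):

```python
def alphabeta(state, depth, alpha, beta, maximizing, moves, evaluate):
    """Minimax with alpha-beta pruning: skip branches that cannot change the result."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    if maximizing:
        value = float('-inf')
        for s in options:
            value = max(value, alphabeta(s, depth - 1, alpha, beta,
                                         False, moves, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                 # beta cutoff: MIN will avoid this branch
        return value
    value = float('inf')
    for s in options:
        value = min(value, alphabeta(s, depth - 1, alpha, beta,
                                     True, moves, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break                     # alpha cutoff: MAX will avoid this branch
    return value
```

Pruning never changes the value returned at the root; it only skips subtrees whose result is already bounded out, which is why evaluating the most promising moves first maximizes the savings.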
12. Stochastic Games
Stochastic games introduce randomness into the decision-making process, requiring agents to
handle uncertainty and consider probabilities when making moves.
Characteristics:
o Random Events: The outcome of moves may depend on random events, such as dice
rolls.
o Chance Nodes: In addition to decision nodes, stochastic games include chance nodes
representing random events.
Approach:
o Expectiminimax: Extends minimax with chance nodes whose value is the
probability-weighted average (expected value) of their children's values.
Examples:
o Card Games: Games like poker have uncertain outcomes due to hidden information
and random card draws.
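Expectiminimax differs from minimax only at chance nodes, which average their children by probability. A minimal sketch over a tagged-tuple tree (the node encoding is an illustrative convention of mine):

```python
def expectiminimax(node):
    """Evaluate a tree of ('max'|'min'|'chance'|'leaf', body) nodes."""
    kind, body = node
    if kind == 'leaf':
        return body                   # body is the payoff value
    if kind == 'chance':
        # body is a list of (probability, child) pairs
        return sum(p * expectiminimax(child) for p, child in body)
    values = [expectiminimax(child) for child in body]
    return max(values) if kind == 'max' else min(values)
```

For instance, a choice between a fair coin flip paying 10 or 0 and a sure payoff of 4 evaluates the chance node to 5, so the maximizer prefers the gamble.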