AI_Unit 2 notes

Uploaded by Shwetank Rai

1. Uninformed (Blind) Search Strategies

Uninformed search strategies operate without any additional information about the problem
beyond the problem definition. They explore the search space systematically until they find a
solution.

 Types of Uninformed Search:

o Breadth-First Search (BFS):

 Description: Expands all nodes at the current depth level before moving
deeper.

 Characteristics: Guaranteed to find the shortest path if costs are uniform.

 Complexity: Time and space complexity are O(b^d), where b is the branching
factor and d is the depth of the solution.

o Depth-First Search (DFS):

 Description: Explores as far as possible along each path before backtracking.

 Characteristics: Memory-efficient, but may not find the shortest path.

 Complexity: Time complexity is O(b^m) and space complexity is O(b·m), where m
is the maximum depth of the search tree.

o Uniform-Cost Search:

 Description: Expands the least costly node first, ensuring the lowest-cost
solution.

 Characteristics: Optimal whenever step costs are positive; useful in
variable-cost environments.

 Complexity: Similar to BFS, but may require more computational resources due
to cost evaluation.
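The level-by-level expansion described for BFS can be sketched in Python on an explicit graph (the graph and node names below are illustrative):

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand nodes level by level.

    Returns the shortest path (fewest edges) from start to goal,
    or None if the goal is unreachable.
    """
    frontier = deque([[start]])       # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()     # shallowest path first
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

# Toy graph: shortest route from 'A' to 'F'
graph = {
    'A': ['B', 'C'],
    'B': ['D'],
    'C': ['E'],
    'D': ['F'],
    'E': ['F'],
}
print(bfs(graph, 'A', 'F'))  # ['A', 'B', 'D', 'F']
```

Because the frontier is a FIFO queue, the first path that reaches the goal is guaranteed to use the fewest edges, matching the optimality claim for uniform step costs.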

2. Informed (Heuristic) Search Strategies

Informed searches use problem-specific knowledge (heuristics) to guide the search, reducing the
number of nodes explored and finding solutions more efficiently.

 Heuristics:

o Definition: A heuristic is an estimate of the cost from a current state to the goal.

o Purpose: Heuristics prioritize paths that are likely to reach the goal sooner or more
cost-effectively.

 Types of Informed Search:

o Greedy Best-First Search:

 Description: Selects the node with the lowest heuristic value, indicating
proximity to the goal.

 Characteristics: Fast but not always optimal, as it doesn’t consider path cost.

 Use Case: Suitable for problems where quick, approximate solutions are
acceptable.

o A* Search:

 Description: Combines the path cost g(n) and heuristic estimate h(n) for each
node, selecting the node with the lowest f(n) = g(n) + h(n).

 Characteristics: Both optimal and complete if the heuristic is admissible
(never overestimates the true cost).

 Use Case: Widely used in pathfinding and planning, where optimality is crucial.
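The f(n) = g(n) + h(n) rule above can be sketched with a priority queue; the weighted graph and heuristic values here are illustrative:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search on a weighted graph.

    graph: dict mapping node -> list of (neighbor, step_cost)
    h:     dict mapping node -> heuristic estimate of cost to goal
    Returns (path, cost), or (None, inf) if no path exists.
    """
    # Priority queue ordered by f(n) = g(n) + h(n)
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = new_g
                heapq.heappush(
                    frontier,
                    (new_g + h[neighbor], new_g, neighbor, path + [neighbor]))
    return None, float('inf')

graph = {
    'A': [('B', 1), ('C', 4)],
    'B': [('C', 1), ('D', 5)],
    'C': [('D', 2)],
}
h = {'A': 3, 'B': 2, 'C': 2, 'D': 0}  # admissible: never overestimates
path, cost = a_star(graph, h, 'A', 'D')
print(path, cost)  # ['A', 'B', 'C', 'D'] 4
```

Note that the direct move B→D (cost 5) is skipped in favour of B→C→D (cost 3), which is exactly what the g(n) term contributes over greedy best-first search.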

3. Heuristics in Problem Solving

Heuristics play a central role in informed search and optimization, helping to guide searches
towards promising regions in the search space.

 Admissible Heuristic:

o Definition: A heuristic is admissible if it never overestimates the true cost
to reach the goal.

o Example: For a grid-based pathfinding problem, the Manhattan distance (sum of
horizontal and vertical distances) is an admissible heuristic.

 Consistent Heuristic:

o Definition: A heuristic h is consistent if, for every node n and each
successor n′, h(n) ≤ c(n, n′) + h(n′): the estimate never drops by more than
the step cost between the two nodes.

o Example: In graph search, consistent heuristics ensure that once a node is expanded,
the optimal path to it is found.
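The Manhattan distance mentioned above is a one-liner (grid coordinates here are illustrative):

```python
def manhattan(cell, goal):
    """Manhattan distance on a grid: |dx| + |dy|.

    Admissible for 4-connected grids with unit step cost, because any
    path must make at least this many horizontal/vertical moves.
    """
    (x1, y1), (x2, y2) = cell, goal
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan((0, 0), (3, 4)))  # 7
```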

4. Local Search Algorithms

Local search focuses on finding solutions by exploring neighboring states and is particularly useful
in optimization problems with large or infinite state spaces.

 Types of Local Search Algorithms:

o Hill Climbing:

 Description: Iteratively moves to the neighboring state with a higher value
(closer to the goal).

 Characteristics: Fast but can get stuck in local optima.

 Variants: Stochastic hill climbing, where a random move is selected from
among the better neighbors.

o Simulated Annealing:

 Description: Introduces randomness to escape local optima by allowing some
“downhill” moves, with a probability that decreases over time.

 Characteristics: Useful in large search spaces and often leads to near-optimal
solutions.

 Application: Optimization problems like scheduling, traveling salesman
problem.

o Genetic Algorithms:

 Description: Mimics natural selection by creating a population of solutions,
applying crossover and mutation to evolve better solutions.

 Characteristics: Effective for large, complex search spaces where other
methods may struggle.

 Use Case: Design optimization, AI-based game playing.
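The hill-climbing loop above can be sketched generically; the objective function and neighborhood below are illustrative:

```python
def hill_climb(value, neighbors, start):
    """Steepest-ascent hill climbing: move to the best neighbor until
    no neighbor improves on the current state (a local optimum)."""
    current = start
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current          # no uphill move left
        current = best

# Maximize f(x) = -(x - 3)^2 over the integers, stepping +-1
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climb(value, neighbors, start=10))  # 3
```

On this single-peaked objective the climb always reaches the global maximum; with multiple peaks the same loop would stop at whichever local optimum is nearest the start, which is the weakness simulated annealing addresses.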

5. Optimization Problems

Optimization problems seek to find the “best” solution based on a specific objective or set of
objectives, often in scenarios where multiple solutions exist.

 Types of Optimization Problems:

o Single-Objective Optimization: Optimizes a single criterion (e.g., shortest
path, highest profit).

o Multi-Objective Optimization: Balances trade-offs between multiple criteria
(e.g., cost vs. quality).

 Common Optimization Techniques:

o Gradient Descent:

 Description: Finds the minimum of a function by iteratively moving in the
direction of steepest descent.

 Application: Training machine learning models, especially in neural networks.

o Linear Programming:

 Description: Solves problems with linear constraints and a linear objective
function.

 Application: Resource allocation, supply chain management.

o Evolutionary Algorithms:

 Description: Include genetic algorithms; use natural-selection-inspired
methods to search for optimal solutions.

 Application: Suitable for large, complex problems without precise mathematical
formulations.

o Branch and Bound:

 Description: Systematically divides the search space and bounds unpromising
areas to avoid exploring them.

 Application: Solving combinatorial optimization problems, like the traveling
salesman problem.
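The gradient descent update described above can be sketched in a few lines; the objective, learning rate, and step count are illustrative:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function by repeatedly stepping against its gradient."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)      # move in the direction of steepest descent
    return x

# Minimize f(x) = (x - 2)^2, whose gradient is 2(x - 2); minimum at x = 2
grad = lambda x: 2 * (x - 2)
x_min = gradient_descent(grad, x0=10.0)
print(round(x_min, 4))  # 2.0
```

Each step shrinks the error by a constant factor here; in neural-network training the same update is applied to millions of parameters, with the gradient computed by backpropagation.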

6. Searching with Partial Observations


In many real-world AI applications, an agent does not have complete information about the
environment or may encounter uncertainty regarding the outcome of actions. This type of search
is known as searching with partial observations.

 Characteristics:

o Uncertain Environment: The agent only has partial visibility or information about
the state space.

o Non-Deterministic Actions: Actions may not have predictable outcomes, adding
complexity to the search process.

o Belief States: Instead of representing a single state, the agent keeps track of a set of
possible states, known as a belief state, to estimate where it might be.

 Strategies:

o Sensor-Based Search: The agent uses limited sensory input to update its belief
state.

o Probabilistic Search: Incorporates probabilities to estimate the likelihood
of being in a certain state (e.g., in robotics, sensors may estimate positions
based on probability).

o Partial Observability Algorithms: Algorithms like Partially Observable Markov
Decision Processes (POMDPs) are used to handle uncertainty and make decisions
based on belief states.

 Example:

o Robot Navigation: A robot may lack full information about an environment (e.g.,
obstacles or walls) and must use sensors to infer its location while navigating.
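A minimal sketch of a belief-state update: the set of candidate states shrinks to those consistent with a new observation. The corridor world and sensor model below are hypothetical:

```python
def update_belief(belief, observation, observe):
    """Keep only the states whose predicted sensor reading matches
    the actual observation."""
    return {s for s in belief if observe(s) == observation}

# Hypothetical 1-D corridor: positions 0..4; a wall ahead is visible
# only from positions 3 and 4.
observe = lambda pos: pos >= 3
belief = {0, 1, 2, 3, 4}          # initially, the robot could be anywhere
belief = update_belief(belief, True, observe)
print(sorted(belief))  # [3, 4]
```

A probabilistic version would weight each surviving state by the likelihood of the observation rather than filtering outright, which is the idea behind POMDP belief updates.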

7. Constraint Satisfaction Problems (CSPs)

A Constraint Satisfaction Problem is a problem where the solution must satisfy a set of
constraints or restrictions. CSPs are commonly used for tasks that involve assigning values to
variables under specific conditions, such as puzzles, scheduling, and layout design.

 Components of a CSP:

o Variables: Elements that need to be assigned values (e.g., colors in a map coloring
problem).

o Domains: Possible values that each variable can take.

o Constraints: Rules that restrict combinations of values for the variables (e.g.,
adjacent regions must have different colors).

 Types of Constraints:

o Unary Constraints: Apply to a single variable (e.g., "Variable X cannot be red").

o Binary Constraints: Apply between pairs of variables (e.g., "If X is blue, Y
cannot be blue").

o Global Constraints: Apply to a set of variables (e.g., "All variables must
have unique values").

 Solving Techniques:

o Backtracking Search: Systematically assigns values to variables, backtracking
when a constraint is violated.

o Forward Checking: After assigning a variable, eliminates values from
neighboring variables that would conflict with this assignment.

o Heuristic Approaches: Use heuristics like the Minimum Remaining Values (MRV) to
improve efficiency.

 Examples:

o Sudoku: Each cell is a variable with the domain (1-9), and constraints prevent
duplicate values in rows, columns, and boxes.

o Timetable Scheduling: Variables represent time slots, with constraints like “no two
events at the same time for the same room.”
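The three CSP components can be written down directly; the region names below follow the classic Australia map-coloring example and are illustrative:

```python
# Map-coloring CSP: variables, domains, and binary constraints
variables = ['WA', 'NT', 'SA']
domains = {v: {'red', 'green', 'blue'} for v in variables}
# Each pair of adjacent regions must receive different colors
constraints = [('WA', 'NT'), ('WA', 'SA'), ('NT', 'SA')]

def satisfies(assignment):
    """Check that every binary constraint holds for a full assignment."""
    return all(assignment[a] != assignment[b] for a, b in constraints)

print(satisfies({'WA': 'red', 'NT': 'green', 'SA': 'blue'}))  # True
print(satisfies({'WA': 'red', 'NT': 'red', 'SA': 'blue'}))    # False
```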

8. Constraint Propagation

Constraint Propagation is a technique used within CSPs to further reduce the search space by
applying constraints repeatedly until no further changes are possible.

 Concept:

o Propagation: Starts with an initial set of constraints, and as variables are assigned
values, constraints propagate to neighboring variables, removing invalid options.

o Goal: Reduce the search space by eliminating choices that would lead to conflicts
early on, making the search process faster and more efficient.

 Techniques:

o Arc Consistency (AC):

 Definition: A variable is arc-consistent if, for every value in its domain, there is
a consistent value in the domain of each connected variable.

 Algorithm: The AC-3 algorithm is commonly used to enforce arc consistency.

o Node and Path Consistency:

 Node Consistency: Ensures that individual variable domains satisfy any unary
constraints.

 Path Consistency: Ensures consistency across three or more variables by
adjusting their domains.

o Domain Reduction: Values in the domains of variables are reduced as constraints are
applied, which can simplify or solve the problem without further search.

 Example:
o Map Coloring: In a map coloring CSP, constraint propagation can eliminate invalid
colors for each region as assignments are made, making it easier to find a valid
coloring without extensive backtracking.
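The AC-3 idea can be sketched as a queue of arcs that is re-examined whenever a domain shrinks; the two-region example below is illustrative:

```python
from collections import deque

def ac3(domains, neighbors, constraint):
    """Enforce arc consistency (AC-3): repeatedly revise each arc (x, y),
    removing values of x with no consistent partner left in y's domain."""
    queue = deque((x, y) for x in domains for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        removed = {vx for vx in domains[x]
                   if not any(constraint(vx, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False              # a domain emptied: inconsistent
            for z in neighbors[x] - {y}:  # re-check arcs pointing into x
                queue.append((z, x))
    return True

# Two adjacent regions must differ; one is already fixed to 'red'
domains = {'A': {'red'}, 'B': {'red', 'green'}}
neighbors = {'A': {'B'}, 'B': {'A'}}
ac3(domains, neighbors, lambda a, b: a != b)
print(domains['B'])  # {'green'}
```

Here propagation alone solves the problem: 'red' is removed from B's domain without any search.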

9. Backtracking Search

Backtracking is a search algorithm used in constraint satisfaction problems
(CSPs) and combinatorial optimization. It finds solutions by exploring possible
assignments and backtracking upon encountering constraint violations.

 Key Concepts:

o Incremental Approach: Variables are assigned values one at a time. If a
constraint is violated, the algorithm backtracks by undoing the last
assignment.

o Recursive Structure: Backtracking uses recursion, exploring deeper levels in
the search tree.

o Efficiency Techniques:

 Forward Checking: After each assignment, eliminates choices in neighboring
variables that would lead to conflicts.

 Constraint Propagation: Applies constraints to reduce domains of unassigned
variables.

 Example Applications:

o Sudoku and N-Queens Problem: Places values or objects while ensuring constraints
are not violated.
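The assign/recurse/undo pattern can be sketched on a small map-coloring instance (region names and colors are illustrative):

```python
def backtrack(assignment, variables, domains, conflicts):
    """Assign variables one at a time; undo and try the next value
    whenever an assignment violates a constraint."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if not conflicts(var, value, assignment):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, conflicts)
            if result is not None:
                return result
            del assignment[var]          # backtrack: undo the assignment
    return None

# Map coloring: three mutually adjacent regions, three colors
variables = ['A', 'B', 'C']
domains = {v: ['red', 'green', 'blue'] for v in variables}
adjacent = {('A', 'B'), ('A', 'C'), ('B', 'C')}

def conflicts(var, value, assignment):
    """A value conflicts if any already-assigned adjacent region has it."""
    return any(assignment.get(other) == value
               for other in variables
               if (var, other) in adjacent or (other, var) in adjacent)

print(backtrack({}, variables, domains, conflicts))
# {'A': 'red', 'B': 'green', 'C': 'blue'}
```

The same skeleton solves Sudoku or N-Queens once `variables`, `domains`, and `conflicts` are redefined for that problem.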

10. Game Playing

Game playing in AI involves creating agents that can play games like chess, tic-tac-toe, and Go.
These games are modeled as search problems where agents aim to make optimal moves.

 Characteristics:

o Competitive Environment: Games typically involve two opposing players (agent and
opponent).

o Zero-Sum: In many games, one player’s gain is equivalent to the other’s loss.

o Deterministic or Stochastic: Some games have uncertain outcomes, such as dice
rolls (stochastic games), while others are entirely predictable.

 Representation of Games:

o Game Trees: Nodes represent game states, and edges represent possible moves.

o Utility Function: Evaluates the desirability of a game state for an agent.

11. Optimal Decisions in Games


Optimal decision-making in games involves choosing moves that maximize an agent's chances of
winning, assuming the opponent also plays optimally.

 Minimax Algorithm:

o Description: Minimax is a decision rule for minimizing the possible loss in a worst-
case scenario.

o Function:

 Maximizing Player: Chooses moves that maximize their score.

 Minimizing Player: Opponent aims to minimize the maximizing player’s score.

o Outcome: The algorithm selects moves that yield the best guaranteed result,
assuming the opponent also plays optimally.

 Complexity:

o Time Complexity: O(b^d), where b is the branching factor and d is the depth
of the game tree.

o Space Complexity: O(b·d) for a depth-first implementation, though the
exponential time cost still makes exhaustive search infeasible for large games.
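The alternating max/min rule can be sketched on a tiny hand-built game tree (the tree and leaf utilities below are illustrative):

```python
def minimax(state, maximizing, moves, utility):
    """Minimax: the maximizer picks the highest-value move, assuming
    the minimizer always replies with the lowest-value one."""
    options = moves(state)
    if not options:                      # terminal state
        return utility(state)
    values = [minimax(s, not maximizing, moves, utility) for s in options]
    return max(values) if maximizing else min(values)

# Tiny two-ply game tree given as a dict; leaves hold their utilities
tree = {'root': ['L', 'R'], 'L': ['L1', 'L2'], 'R': ['R1', 'R2']}
leaf_value = {'L1': 3, 'L2': 5, 'R1': 2, 'R2': 9}
moves = lambda s: tree.get(s, [])
utility = lambda s: leaf_value[s]
print(minimax('root', True, moves, utility))  # 3
```

The maximizer picks L (guaranteed 3) over R, because an optimal opponent would answer R with R1 and concede only 2.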

12. Alpha-Beta Pruning

Alpha-Beta Pruning is an optimization for the Minimax algorithm that reduces the number of nodes
evaluated, making it feasible to search deeper in the game tree.

 Concept:

o Alpha (α): The best value the maximizer can guarantee.

o Beta (β): The best value the minimizer can guarantee.

o Pruning Mechanism: Alpha-beta pruning skips exploring branches that cannot
influence the final decision.

 How It Works:

o As the algorithm evaluates each node, it updates alpha and beta values.

o Pruning: If a move at a node cannot possibly alter the outcome, that branch is cut off.

 Effectiveness:

o Reduces time complexity to O(b^{d/2}) in the best case (with perfect move
ordering), allowing the search to go roughly twice as deep for the same
computational cost.

o Application: Used in games like chess to enhance efficiency.
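Adding the α/β bookkeeping to minimax is a small change; the same illustrative tree as before shows a cutoff in action:

```python
def alphabeta(state, maximizing, moves, utility,
              alpha=float('-inf'), beta=float('inf')):
    """Minimax with alpha-beta pruning: stop scanning a node's children
    as soon as alpha >= beta, since the branch cannot affect the result."""
    options = moves(state)
    if not options:
        return utility(state)
    if maximizing:
        value = float('-inf')
        for s in options:
            value = max(value, alphabeta(s, False, moves, utility, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                    # beta cutoff
        return value
    value = float('inf')
    for s in options:
        value = min(value, alphabeta(s, True, moves, utility, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break                        # alpha cutoff
    return value

# Same toy tree as plain minimax; leaf R2 is pruned after seeing R1 = 2,
# because the maximizer already has 3 guaranteed from branch L.
tree = {'root': ['L', 'R'], 'L': ['L1', 'L2'], 'R': ['R1', 'R2']}
leaf_value = {'L1': 3, 'L2': 5, 'R1': 2, 'R2': 9}
print(alphabeta('root', True, lambda s: tree.get(s, []), leaf_value.get))  # 3
```

The pruned result is identical to plain minimax; only the amount of work differs.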

13. Stochastic Games

Stochastic games introduce randomness into the decision-making process, requiring agents to
handle uncertainty and consider probabilities when making moves.
 Characteristics:

o Random Events: The outcome of moves may depend on random events, such as dice
rolls.

o Chance Nodes: In addition to decision nodes, stochastic games include chance nodes
representing random events.

o Utility Calculation: Agents calculate expected utility, balancing possible
outcomes based on probabilities.

 Approach:

o Expectiminimax Algorithm: An extension of Minimax that accounts for chance
nodes, calculating expected values for uncertain moves.

o Complexity: Higher computational cost than deterministic games, as all
possible outcomes must be considered for each chance node.

 Examples:

o Backgammon: Involves dice rolls that introduce randomness.

o Card Games: Games like poker have uncertain outcomes due to hidden information
and random card draws.
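A minimal sketch of expectiminimax over a tiny hand-built tree (the node encoding as tagged tuples is illustrative):

```python
def expectiminimax(node):
    """Expectiminimax: max/min at decision nodes, probability-weighted
    average at chance nodes, raw utility at leaves."""
    kind, payload = node
    if kind == 'leaf':
        return payload
    if kind == 'chance':
        # payload: list of (probability, child) pairs
        return sum(p * expectiminimax(child) for p, child in payload)
    children = [expectiminimax(child) for child in payload]
    return max(children) if kind == 'max' else min(children)

# A maximizer chooses between a sure 3 and a fair coin flip paying 0 or 10
sure = ('leaf', 3)
gamble = ('chance', [(0.5, ('leaf', 0)), (0.5, ('leaf', 10))])
print(expectiminimax(('max', [sure, gamble])))  # 5.0
```

The gamble's expected utility (0.5·0 + 0.5·10 = 5) beats the sure 3, so a risk-neutral maximizer takes the chance node, which is how a backgammon agent weighs dice outcomes.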
