Module-5{MQP Chatgpt Answers}

Backward Chaining is an inference method that starts with a goal and works backward to find supporting facts from a knowledge base. The process involves identifying applicable rules, creating sub-goals, and recursively validating them against known facts. An example demonstrates this method by proving whether 'West is a criminal' using specific rules and facts from the knowledge base.

Uploaded by Rishee
© All Rights Reserved

ChatGPT

chatgpt.com/c/678badb3-9630-8009-bd0a-3633a28d924c

Explain Backward Chaining with example.

Backward Chaining
Backward Chaining is an inference method that starts from the goal (query) and works
backward to find known facts that support the proof. It attempts to deduce the goal by
chaining through rules in reverse, ensuring all conditions for the goal are satisfied by the
knowledge base (KB). This method is useful in logic programming and expert systems.

Steps of Backward Chaining:

1. Initialization: Start with the query as the initial goal.
2. Find Applicable Rules: Look for rules in the KB whose conclusions match the current goal.
3. Sub-Goal Creation: Replace the current goal with the rule's premises (sub-goals).
4. Recursive Resolution: Solve each sub-goal recursively by repeating steps 2 and 3.
5. Validation: If all sub-goals are satisfied, the original query is true; otherwise, it is false.

Example: Is West a criminal?

Knowledge Base (KB):

1. Rules:
Rule 1: Criminal(x)←American(x)∧Weapon(y)∧Sells(x,y,z)∧Hostile(z)
Rule 2: Sells(x,y,z)←Missile(y)∧Owns(z,y)
2. Facts:
American(West)
Weapon(M1)
Hostile(Nono)
Missile(M1)
Owns(Nono,M1)

Query:

Is Criminal(West) true?

Steps to Solve:

1. Start with the query: Criminal(West).


Apply Rule 1 since Criminal(x) is the conclusion of the rule.

2. Break into sub-goals:

American(West)
Weapon(M1)
Sells(West,M1,Nono)
Hostile(Nono)
3. Check each sub-goal:

American(West): True (from KB).


Weapon(M1): True (from KB).
Hostile(Nono): True (from KB).
Sells(West,M1,Nono): Apply Rule 2.
4. Sub-goal for Sells(West,M1,Nono):
Using Rule 2:

Missile(M1): True (from KB).


Owns(Nono,M1): True (from KB).
Therefore, Sells(West,M1,Nono) is true.
5. Conclusion:
Since all premises of Rule 1 are satisfied, Criminal(West) is true.

Visualization:
1. Query: Criminal(West)
2. Apply Rule 1:
American(West): True
Weapon(M1): True
Sells(West,M1,Nono): Apply Rule 2
Missile(M1): True
Owns(Nono,M1): True
Hostile(Nono): True
3. Criminal(West): True

Backward Chaining successfully deduces that West is a criminal based on the given
knowledge base.
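The recursive procedure above can be sketched in a few lines of Python. This is a simplified, propositional-style illustration: the rules are pre-instantiated with the example's constants, so no unification of variables is needed (a full first-order prover would unify at each step).

```python
# Facts and rules from the "West is a criminal" example. Each rule maps a
# conclusion to one or more lists of premises that establish it.
facts = {"American(West)", "Weapon(M1)", "Hostile(Nono)",
         "Missile(M1)", "Owns(Nono,M1)"}

rules = {
    "Criminal(West)": [["American(West)", "Weapon(M1)",
                        "Sells(West,M1,Nono)", "Hostile(Nono)"]],
    "Sells(West,M1,Nono)": [["Missile(M1)", "Owns(Nono,M1)"]],
}

def backward_chain(goal, visited=frozenset()):
    """Return True if goal follows from the facts and rules."""
    if goal in facts:                      # base case: goal is a known fact
        return True
    if goal in visited:                    # guard against cyclic rules
        return False
    for premises in rules.get(goal, []):   # try each rule concluding the goal
        if all(backward_chain(p, visited | {goal}) for p in premises):
            return True
    return False

print(backward_chain("Criminal(West)"))  # → True
```

An unprovable goal such as `backward_chain("Criminal(Nono)")` simply returns False; the `visited` set guards against cyclic rules, which can otherwise cause infinite loops.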

This structured, recursive method of solving goals is a hallmark of backward chaining,
commonly implemented in logic programming languages like Prolog​.

If any answer contains any image that is in the pdf then just tell me in
which page number it is present.

Understood! If any relevant answer requires referencing an image from the uploaded
PDF, I'll provide the page number where the image is located. Let me know your next
query!

Explain Graph Plan Algorithm

Graph Plan Algorithm

The Graph Plan Algorithm is a planning method used in artificial intelligence to find a
sequence of actions that transitions a system from an initial state to a goal state while
respecting constraints. It builds a planning graph, a compact representation of possible
actions and states, and searches it to extract a valid plan.

Key Concepts of Graph Plan:

1. Planning Graph: A layered data structure consisting of alternating levels of:

State Levels (Si): Nodes represent literals (conditions) that can hold in the
state.
Action Levels (Ai): Nodes represent actions applicable at the preceding
state level.
Mutex Relationships: Constraints indicating that certain actions or states
cannot coexist.
2. Mutex Types:

Inconsistent Effects: Two actions have contradictory effects.


Interference: An action's effect negates another's precondition.
Competing Needs: Two actions have preconditions that are mutex.
3. Termination: Graph construction stops when two consecutive state levels (and their mutex relations) are identical — the graph has "leveled off".

Steps of the Graph Plan Algorithm:

1. Construct Initial Planning Graph:

Begin with S0​, the initial state, containing literals true in the starting state.
Add applicable actions (A0​) based on S0​, their effects to S1​, and mutex
relations.
2. Expand Planning Graph:

Alternate between action levels (Ai​) and state levels (Si​).


Add mutex links to capture conflicting actions or states.
3. Check Goal Achievement:

At each state level (Sn​), check if all goal conditions are present and non-
mutex.
If achievable, attempt to extract a valid plan.
4. Extract Solution:

Perform a backward search starting from the goal conditions at the last
state level.
Select actions to satisfy goals and recursively add their preconditions as sub-
goals.
Ensure selected actions are non-mutex.
5. Repeat or Expand:

If a solution is not found, expand the graph to the next level and retry.
Continue until the graph levels off or a solution is found.
6. Return Result:

If a valid plan is extracted, return it.


If the graph levels off without finding a solution, report failure.

Example:

Problem:

Initial State (S0​): Have(Cake)
Goal: Have(Cake) ∧ Eaten(Cake)
Actions:
Eat(Cake): Precondition: Have(Cake); Effect: ¬Have(Cake) ∧ Eaten(Cake)
Bake(Cake): Precondition: ¬Have(Cake); Effect: Have(Cake)
Persistence (no-op) actions maintain each literal unchanged.

Execution:

1. Graph Construction:

S0​: Have(Cake)
A0​: Eat(Cake), persistence of Have(Cake)
S1​: Eaten(Cake), ¬Have(Cake), Have(Cake)
2. Goal Check:

At S1​, both Have(Cake) and Eaten(Cake) are present but mutex, so the goal is not yet achievable.
3. Graph Expansion:

Add A1​ and S2​, and re-evaluate mutex constraints; at S2​ the two goal literals are no longer mutex.
4. Solution Extraction:

Backward search selects Eat(Cake) at A0​ to achieve Eaten(Cake), and Bake(Cake) at A1​ to re-achieve Have(Cake); both goals then hold together at S2​.
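A toy version of the first expansion can be sketched as follows. The literal encoding (strings with a "~" prefix for negation) is an illustrative assumption; only the S0 → A0 → S1 step and the "inconsistent effects" mutex test are shown, and full mutex propagation and solution extraction are omitted.

```python
# One level of planning-graph expansion for the cake example.
S0 = {"Have(Cake)"}

actions = {
    "Eat(Cake)":  {"pre": {"Have(Cake)"},
                   "eff": {"Eaten(Cake)", "~Have(Cake)"}},
    "NoOp(Have)": {"pre": {"Have(Cake)"},       # persistence (no-op) action
                   "eff": {"Have(Cake)"}},
}

def applicable(state, name):
    return actions[name]["pre"] <= state

def expand(state):
    """One level: the applicable actions, and the union of their effects
    (plus the persisted literals) as the next state level."""
    level_actions = [a for a in actions if applicable(state, a)]
    next_state = set(state)
    for a in level_actions:
        next_state |= actions[a]["eff"]
    return level_actions, next_state

def inconsistent_effects(a1, a2):
    """Mutex test: one action deletes a literal the other adds."""
    e1, e2 = actions[a1]["eff"], actions[a2]["eff"]
    return (any("~" + lit in e2 for lit in e1) or
            any("~" + lit in e1 for lit in e2))

A0, S1 = expand(S0)
print(sorted(S1))                                       # the S1 literals
print(inconsistent_effects("Eat(Cake)", "NoOp(Have)"))  # → True: mutex pair
```

The mutex result mirrors the goal check above: at S1, the action producing Eaten(Cake) conflicts with the one preserving Have(Cake).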

Advantages:

Efficient heuristic estimates for planning problems.


Encodes both actions and constraints in a compact representation.
Guarantees admissibility by not overestimating steps to the goal.

Limitations:

Inefficient for highly complex or dynamic environments.


Requires explicit enumeration of all possible actions and states.

For more details, refer to the explanation of the algorithm in Page 10 of the PDF​.

Explain the concept of Resolution in AI.

Resolution in Artificial Intelligence

Resolution is a powerful inference rule used in first-order logic and propositional logic to
deduce conclusions from a given set of premises or clauses. It is commonly used in
automated theorem proving and logic programming. Resolution operates on formulas in
Conjunctive Normal Form (CNF), which is a conjunction of disjunctions of literals.

Key Concepts:

1. Clauses: A clause is a disjunction of literals, e.g., A∨B∨¬C.

2. Literals: A literal is an atomic proposition or its negation, e.g., A or ¬A.
3. Complementary Literals: A pair of literals where one is the negation of the other,
e.g., A and ¬A.

The Resolution Rule:


The resolution inference rule is defined as:

A∨B,   ¬A∨C
────────────
B∨C

A∨B: Clause 1
¬A∨C: Clause 2
B∨C: Resolvent (resulting clause)

The rule combines two clauses by eliminating a pair of complementary literals to produce a new clause, called the resolvent.

Steps in Resolution:

1. Convert to CNF:

Rewrite all statements into an equivalent CNF form.


Use logical equivalences, such as eliminating implications and moving
negations inward.
2. Standardize Variables:

Rename variables to avoid confusion during substitution.


3. Resolve Clauses:

Identify complementary literals between two clauses.


Eliminate these literals to produce a resolvent.
4. Repeat:

Apply the resolution rule iteratively to all possible pairs of clauses.


5. Check for Empty Clause:

If the empty clause (□) is derived, the original set of clauses is unsatisfiable
(proof by contradiction).
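For the propositional case, the loop in steps 3–5 can be sketched as follows. Clauses are sets of literals with "~" marking negation; this saturation approach is illustrative, not optimized.

```python
def negate(lit):
    """~A for A, and A for ~A."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses (one per complementary pair)."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

def unsatisfiable(clauses):
    """Saturate under resolution; True iff the empty clause is derived."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:              # empty clause: contradiction
                        return True
                    new.add(frozenset(r))
        if new <= clauses:                 # no new clauses: no refutation
            return False
        clauses |= new

# Prove C from (A∨B), (¬A∨C), (¬B∨C) by refuting the negation ¬C:
print(unsatisfiable([{"A", "B"}, {"~A", "C"}, {"~B", "C"}, {"~C"}]))  # → True
```

The final call adds the negated goal `{"~C"}` to the clause set, exactly as in step 5: deriving the empty clause proves the original goal by contradiction.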

Example:

Problem:

Prove: "Curiosity killed the cat."

Knowledge Base (KB):

1. ∀x [∀y Animal(y) ⇒ Loves(x,y)] ⇒ [∃y Loves(y,x)]
2. ∀x [∃z Animal(z) ∧ Kills(x,z)] ⇒ [∀y ¬Loves(y,x)]
3. ∀x Animal(x) ⇒ Loves(Jack,x)
4. Kills(Jack,Tuna) ∨ Kills(Curiosity,Tuna)
5. Cat(Tuna)
6. ∀x Cat(x) ⇒ Animal(x)

Negated Goal: ¬Kills(Curiosity,Tuna)

Resolution Steps:

1. Convert all statements into CNF (Skolemizing the existentials in 1).
2. Combine relevant clauses:
From 5 and 6, infer Animal(Tuna).
From 4 and the negated goal, infer Kills(Jack,Tuna).
3. Derive a contradiction:
Since Jack kills an animal (Tuna), sentence 2 gives ¬Loves(y,Jack) for every y.
But sentences 1 and 3 together entail that someone loves Jack.
4. Conclude:
The empty clause is derived, so the negated goal is false and Kills(Curiosity,Tuna) is true.

Applications:
Theorem Proving: Automated systems for verifying logical statements.
Logic Programming: Core of languages like Prolog.
Problem Solving: Derives solutions by reasoning over constraints.

For a detailed explanation of the resolution process, refer to Page 9 of the PDF​.

How state space search is done in AI discuss

State Space Search in AI

State space search is a problem-solving technique in artificial intelligence (AI) where the
problem is represented as a graph or tree structure. Each node in the graph represents
a state of the system, and the edges represent actions that transition from one state to
another. The goal is to find a path from an initial state to a goal state.

Key Components of State Space Search:

1. State: A representation of the current situation in the problem.

Example: In a block-stacking problem, a state might describe the positions of


all blocks.
2. Initial State: The starting point of the search.

Example: Blocks A, B, and C are on the table.


3. Goal State: The desired outcome of the problem.

Example: Blocks stacked in the order A on B, B on C.


4. Operators/Actions: Possible actions that transform one state into another.

Example: Moving a block from the table to another block.


5. Search Tree/Graph: A structure representing all possible states and transitions.

Nodes: States
Edges: Actions

Types of State Space Searches:

1. Forward (Progression) Search:

Starts from the initial state and explores actions to generate new states until the
goal state is reached.
Steps:
1. Start with the initial state.
2. Apply actions to generate successor states.
3. Use a search strategy (e.g., BFS, DFS) to explore the state space.
Example: Solving a maze by starting at the entrance and exploring paths to the
exit.

2. Backward (Regression) Search:

Starts from the goal state and works backward to the initial state.
Steps:
1. Start with the goal state.
2. Identify actions that could lead to the goal state.
3. Use a search strategy to find a sequence of actions that leads back to the
initial state.
Example: Finding how a robot could have reached its current position.

3. Bidirectional Search:

Performs search simultaneously from the initial state and the goal state, meeting in
the middle.
Steps:
1. Start one search from the initial state and another from the goal state.
2. Expand both searches until they intersect.

Heuristic State Space Search:

Heuristics are used to guide the search process by estimating the cost to reach the goal
from a given state.

1. Best-First Search: Expands nodes based on a heuristic function h(s) that


estimates the cost to the goal.
2. A*: Orders nodes by f(s) = g(s) + h(s), the actual cost so far plus the estimated cost to reach the goal.

Example: Block-Stacking Problem

Problem:

Initial State: Block A is on the table, Block B is on the table, Block C is on Block A.
Goal State: Block A on Block B, Block B on Block C.

Forward Search:

1. Start at Initial State: A on table, B on table, C on A.


2. Apply Actions:
Move C to the table.
Move B onto C.
Move A onto B.
3. Goal Achieved.
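The forward-search steps above can be run as a breadth-first search over block configurations. The state encoding (pairs of block and support, with C implicitly ending on the table) is an assumption made for illustration.

```python
from collections import deque

# States are frozensets of (block, support) pairs; the table can hold any
# number of blocks, so it is always considered clear.
BLOCKS = ("A", "B", "C")

def is_clear(state, x):
    """A position is clear if no block sits on it (the table always is)."""
    return x == "Table" or all(sup != x for (_, sup) in state)

def successors(state):
    """Yield (action name, next state) for every legal Move."""
    for (b, sup) in state:
        if not is_clear(state, b):
            continue                        # only a clear block can move
        for dest in BLOCKS + ("Table",):
            if dest not in (b, sup) and is_clear(state, dest):
                yield (f"Move({b},{sup},{dest})",
                       frozenset(state - {(b, sup)} | {(b, dest)}))

def bfs(initial, goal):
    """Breadth-first progression search; returns a shortest plan or None."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

initial = frozenset({("A", "Table"), ("B", "Table"), ("C", "A")})
goal    = frozenset({("A", "B"), ("B", "C"), ("C", "Table")})
print(bfs(initial, goal))
# → ['Move(C,A,Table)', 'Move(B,Table,C)', 'Move(A,Table,B)']
```

Because BFS explores states level by level, the plan it returns is the shortest one, matching the three moves listed above.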

Backward Search:

1. Start at Goal State: A on B, B on C.

2. Identify actions that could have produced it:
Move(A,Table,B) achieves On(A,B); its preconditions become sub-goals.
Move(B,Table,C) achieves On(B,C); its preconditions become sub-goals.
3. Continue regressing preconditions until they match the Initial State.

Applications:
Pathfinding: Finding routes in maps (e.g., GPS).
Game Solving: Chess, checkers, and other strategic games.
Robotics: Navigating robots in dynamic environments.
Planning Problems: Logistics, scheduling, and task planning.

For a detailed explanation of state space search methods like Forward and Backward
Search, refer to Page 10 of the PDF​.

Explain Backward Chaining algorithm with an example.

Backward Chaining Algorithm


The Backward Chaining Algorithm is a goal-driven inference technique in artificial
intelligence. It starts with the goal (query) and works backward to determine if the goal
can be deduced from the knowledge base. It recursively tries to prove the goal by
finding rules that lead to the goal and then proving the premises of those rules.

Steps of Backward Chaining:

1. Start with the Goal:


Take the goal as the query to be proven.
2. Find Matching Rules:
Identify rules in the knowledge base whose conclusion matches the goal.
3. Add Premises as Sub-Goals:
Replace the goal with the premises of the matching rule(s) as new sub-goals.
4. Recursively Prove Sub-Goals:
Repeat the process for each sub-goal.
5. Validate Against Facts:
If a sub-goal matches a fact in the knowledge base, mark it as proven.
6. Combine Results:
If all sub-goals are proven, the original goal is true; otherwise, it is false.

Example: Proving "Is West a Criminal?"

Knowledge Base (KB):

1. Rules:
Rule 1: Criminal(x)←American(x)∧Weapon(y)∧Sells(x,y,z)∧Hostile(z)
Rule 2: Sells(x,y,z)←Missile(y)∧Owns(z,y)
2. Facts:
American(West)
Weapon(M1)
Hostile(Nono)
Missile(M1)
Owns(Nono,M1)

Query:

Is Criminal(West) true?

Execution:
1. Goal: Prove Criminal(West).

Look for rules with Criminal(x) in the conclusion. Use Rule 1.


2. Decompose Goal: Based on Rule 1:

Sub-goals:
American(West)
Weapon(M1)
Sells(West,M1,Nono)
Hostile(Nono)
3. Check Sub-Goals:

American(West): Matches a fact in the KB → Proven.


Weapon(M1): Matches a fact in the KB → Proven.
Hostile(Nono): Matches a fact in the KB → Proven.
Sells(West,M1,Nono): Apply Rule 2.
4. Sub-Goal Sells(West,M1,Nono): Based on Rule 2:

Sub-goals:
Missile(M1)
Owns(Nono,M1)
5. Check Sub-Goals for Sells:

Missile(M1): Matches a fact in the KB → Proven.


Owns(Nono,M1): Matches a fact in the KB → Proven.

6. Conclusion:

All sub-goals for Criminal(West) are proven.


Criminal(West) is true.

Proof Tree:


Criminal(West)
├── American(West) → True
├── Weapon(M1) → True
├── Sells(West, M1, Nono)
│ ├── Missile(M1) → True
│ └── Owns(Nono, M1) → True
└── Hostile(Nono) → True

Advantages:

Efficient for proving specific goals.


Does not explore irrelevant parts of the knowledge base.

Limitations:

Can enter infinite loops if not handled carefully (e.g., cyclic dependencies).
Performance depends on the structure of the knowledge base.

For further details on Backward Chaining, refer to Page 3 of the PDF​.

Define Classical Planning. With the blocks world example explain the
same in detail.

Classical Planning in AI

Classical planning is a branch of artificial intelligence that focuses on devising a
sequence of actions to transition a system from an initial state to a goal state. It operates
under certain assumptions:

1. Deterministic Actions: Each action has a single predictable outcome.


2. Fully Observable State: The planner has complete knowledge of the system's
state at all times.
3. Static Environment: The environment does not change except due to planned
actions.
4. Factored Representation: States are represented as a collection of logical facts.

Classical planning uses logic-based representations and searches for solutions in a


structured state space.

Blocks World Example

Problem:

The Blocks World domain consists of:

A table (surface).
A set of blocks (e.g., A, B, C).
Actions to move blocks from one position to another.

The goal is to achieve a specific configuration of blocks, starting from an initial


configuration.

Example Scenario:

1. Initial State:

Block A is on the table.


Block B is on the table.
Block C is on Block A.
Representation:

On(A,Table), On(B,Table), On(C,A), Clear(B), Clear(C)


2. Goal State:

Block A is on Block B.
Block B is on Block C.
Representation:

On(A,B), On(B,C), Clear(A)

3. Actions:

Move(x,y,z): Move block x from y to z.


Preconditions:

Clear(x): Block x is clear.


Clear(z): Block z is clear.
On(x,y): Block x is on y.
Effects:

On(x,z): Block x is now on z.


¬On(x,y): Block x is no longer on y.
Clear(y): Block y is now clear.
¬Clear(z): Block z is no longer clear.

Steps to Achieve the Goal:

1. Move Block C to the Table:

Preconditions:
Clear(C), On(C,A)
Action:
Move(C,A,Table)
Effects:
On(C,Table), Clear(A), ¬On(C,A)
(The table is treated as always clear, so ¬Clear(Table) is not asserted.)
2. Move Block B onto Block C:

Preconditions:
Clear(B), Clear(C), On(B,Table)
Action:
Move(B,Table,C)
Effects:
On(B,C), ¬On(B,Table), ¬Clear(C)
3. Move Block A onto Block B:

Preconditions:
Clear(A), Clear(B), On(A,Table)
Action:
Move(A,Table,B)
Effects:
On(A,B), ¬On(A,Table), ¬Clear(B)

Final Plan:

1. Move(C,A,Table)
2. Move(B,Table,C)
3. Move(A,Table,B)
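The plan can be checked mechanically with a small STRIPS-style simulator: each step verifies the action's preconditions against the current state, then applies its add and delete effects. The string encoding of facts is an illustrative assumption, and the table is treated as always clear.

```python
def clear(state, x):
    """The table is always clear; a block is clear if Clear(x) holds."""
    return x == "Table" or f"Clear({x})" in state

def move(state, x, y, z):
    """Apply Move(x, y, z) after checking its preconditions."""
    assert f"On({x},{y})" in state and clear(state, x) and clear(state, z), \
        f"preconditions of Move({x},{y},{z}) not met"
    new = set(state)
    new -= {f"On({x},{y})"}          # delete effect: x no longer on y
    if z != "Table":
        new -= {f"Clear({z})"}       # delete effect: z no longer clear
    new |= {f"On({x},{z})"}          # add effect: x now on z
    if y != "Table":
        new |= {f"Clear({y})"}       # add effect: y becomes clear
    return new

state = {"On(A,Table)", "On(B,Table)", "On(C,A)", "Clear(B)", "Clear(C)"}
for step in [("C", "A", "Table"), ("B", "Table", "C"), ("A", "Table", "B")]:
    state = move(state, *step)

print(sorted(state))
# → ['Clear(A)', 'On(A,B)', 'On(B,C)', 'On(C,Table)']
```

The resulting state contains exactly the goal literals On(A,B), On(B,C), and Clear(A), confirming the three-step plan.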

Representation in Planning Domain Definition Language (PDDL):

pddl

Action(Move(x, y, z),
PRECOND: Clear(x) ∧ Clear(z) ∧ On(x, y),
EFFECT: On(x, z) ∧ Clear(y) ∧ ¬On(x, y) ∧ ¬Clear(z))

Complexity of Classical Planning:

PlanSAT: Determines if a plan exists; belongs to PSPACE.


Bounded PlanSAT: Checks if a solution of a given length exists; remains
decidable with function symbols.

Classical planning methods include Forward State-Space Search, Backward State-


Space Search, and Planning Graphs.

For more detailed information, refer to Page 10 of the PDF​.

Write appropriate quantifiers for the following: (i) Some students read well (ii) Some students like some books (iii) Some students like all books (iv) All students like some books (v) All students like no books. Explain the concept of Resolution in First Order Logic with appropriate procedure.

Appropriate Quantifiers:
1. Some students read well:

∃x (Student(x)∧ReadsWell(x))

2. Some students like some books:

∃x∃y (Student(x)∧Book(y)∧Likes(x,y))
3. Some students like all books:

∃x (Student(x)∧∀y (Book(y)→Likes(x,y)))
4. All students like some books:

∀x (Student(x)→∃y (Book(y)∧Likes(x,y)))
5. All students like no books:

∀x (Student(x)→∀y (Book(y)→¬Likes(x,y)))

Resolution in First Order Logic (FOL)


Resolution in First Order Logic (FOL) is a method for deducing conclusions by
systematically combining logical clauses. It operates on formulas converted to
Conjunctive Normal Form (CNF) and is widely used in automated reasoning.

Procedure for Resolution in FOL:

1. Convert to CNF:

Eliminate implications (A⇒B becomes ¬A∨B).


Move negations inward using De Morgan's laws.
Standardize variable names to avoid conflicts.
Skolemize to eliminate existential quantifiers (∃x) by introducing Skolem
constants or functions.
Drop universal quantifiers (∀x) since they are implicit in FOL.
Represent the formula as a conjunction of disjunctions (CNF).
2. Negate the Goal:

Add the negation of the statement to be proven to the knowledge base.


3. Unify Clauses:

Identify complementary literals (e.g., P(x) and ¬P(y)).


Unify the literals by finding a substitution that makes them identical.
4. Resolve Clauses:

Eliminate complementary literals to produce a new clause (resolvent).


Repeat until:
An empty clause (□) is derived (proves inconsistency and thus
validates the goal).
No new clauses can be generated (goal cannot be proven).

5. Check for Contradiction:

If the empty clause is derived, the original set of clauses is unsatisfiable,


proving the negated goal.
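Step 3 (unification) can be sketched as follows. The term representation (tuples for compound terms, lowercase strings for variables) is an assumption made for illustration, and the occurs check is omitted for brevity.

```python
def is_var(t):
    """Variables are lowercase strings; constants/predicates are capitalized."""
    return isinstance(t, str) and t[:1].islower()

def substitute(t, s):
    """Apply substitution s to term t, following chains of bindings."""
    if is_var(t):
        return substitute(s[t], s) if t in s else t
    if isinstance(t, tuple):
        return tuple(substitute(a, s) for a in t)
    return t

def unify(t1, t2, s=None):
    """Return a substitution making t1 and t2 identical, or None."""
    s = {} if s is None else s
    t1, t2 = substitute(t1, s), substitute(t2, s)
    if t1 == t2:
        return s
    if is_var(t1):
        return {**s, t1: t2}
    if is_var(t2):
        return {**s, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

# Unify Kills(x, Tuna) with Kills(Jack, z):
print(unify(("Kills", "x", "Tuna"), ("Kills", "Jack", "z")))
# → {'x': 'Jack', 'z': 'Tuna'}
```

The returned substitution {x/Jack, z/Tuna} is exactly what the resolution step needs before complementary literals can be cancelled.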

Example:

Problem: Prove: "Curiosity killed the cat."

Knowledge Base (KB):

1. ∀x (∀y (Animal(y)→Loves(x,y))→∃y Loves(y,x))


2. ∀x (∃z (Animal(z)∧Kills(x,z))→∀y ¬Loves(y,x))
3. ∀x (Animal(x)→Loves(Jack,x))
4. Kills(Jack,Tuna)∨Kills(Curiosity,Tuna)
5. Cat(Tuna)
6. ∀x (Cat(x)→Animal(x))

Negated Goal: ¬Kills(Curiosity,Tuna)

Steps:

1. Convert to CNF (Skolem functions F and G replace the existentials in sentence 1):

Clause 1a: Animal(F(x)) ∨ Loves(G(x),x)
Clause 1b: ¬Loves(x,F(x)) ∨ Loves(G(x),x)
Clause 2: ¬Animal(z) ∨ ¬Kills(x,z) ∨ ¬Loves(y,x)
Clause 3: ¬Animal(x) ∨ Loves(Jack,x)
Clause 4: Kills(Jack,Tuna) ∨ Kills(Curiosity,Tuna)
Clause 5: Cat(Tuna)
Clause 6: ¬Cat(x) ∨ Animal(x)
2. Add Negated Goal:

¬Kills(Curiosity,Tuna)
3. Unify and Resolve:

Resolving Clause 4 with the negated goal yields Kills(Jack,Tuna).
Resolving Clauses 5 and 6 yields Animal(Tuna).
Resolving these with Clause 2 yields ¬Loves(y,Jack) for every y.
But Clauses 1 and 3 together entail that someone loves Jack.
4. Empty Clause:

Resolving the two results produces the empty clause; the contradiction confirms Kills(Curiosity,Tuna) is true.

Advantages:

Systematic and sound inference method.


Completeness ensures all valid conclusions can be derived.

Applications:

Automated theorem proving.


Logic programming (e.g., Prolog).
Natural language understanding.

For detailed examples, refer to Page 9 of the PDF​.

Explain the two approaches to searching for a plan in detail.

Two Approaches to Searching for a Plan

Planning in AI often involves finding a sequence of actions to achieve a goal from a


given initial state. Two main approaches for searching for a plan are:

1. Forward (Progression) State-Space Search


2. Backward (Regression) State-Space Search

1. Forward (Progression) State-Space Search

Description:

Starts from the initial state and explores all possible actions to generate new
states until the goal state is reached.
Moves "forward" by applying actions that transform the current state.

Steps:

1. Initialization: Start with the initial state S0​.


2. Action Selection: Apply an action that satisfies the current state’s preconditions to
generate successor states.
3. State Evaluation: Evaluate successor states using a search strategy (e.g., BFS,
DFS, A*).
4. Repeat: Continue expanding states until the goal state is reached.

Example:

Blocks World Problem:

Initial State: Block A is on Block B, and Block B is on the table.


Goal State: Block B is on Block A, and Block A is on the table.

Steps:

1. Start from S0​: On(A,B),On(B,Table).


2. Apply Move(A,B,Table): On(A,Table),On(B,Table).
3. Apply Move(B,Table,A): On(B,A),On(A,Table).
4. Goal achieved.

Advantages:

Easy to implement.
Explicitly constructs the state space in the direction of the solution.

Disadvantages:

May explore irrelevant states (e.g., actions that do not contribute to the goal).
Computationally expensive in large state spaces.

2. Backward (Regression) State-Space Search

Description:

Starts from the goal state and works backward, determining which actions could
have led to the goal.
Considers only those states that are relevant to achieving the goal.

Steps:

1. Initialization: Start with the goal state Sg​.


2. Action Selection: Identify actions that could produce the goal state.
3. Precondition Evaluation: Add the preconditions of these actions as new sub-
goals.
4. Repeat: Continue regressing until the initial state is reached.

Example:

Blocks World Problem:

Initial State: Block A is on Block B, and Block B is on the table.


Goal State: Block B is on Block A, and Block A is on the table.

Steps:

1. Start with Sg​: On(B,A), On(A,Table).
2. To achieve On(B,A): regress through Move(B,Table,A), whose preconditions Clear(A), Clear(B), On(B,Table) become sub-goals.
3. To achieve On(A,Table): regress through Move(A,B,Table), whose preconditions Clear(A), On(A,B) become sub-goals.
4. Chain preconditions until the initial state S0​: On(A,B), On(B,Table) is reached.
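A single regression step can be sketched as set operations on a STRIPS action: the regressed goal is the current goal minus the action's add effects, plus its preconditions, provided the action is relevant (achieves part of the goal) and consistent (deletes none of it). The action encoding here is an illustrative assumption.

```python
def regress(goal, action):
    """Regress a goal set through an action; None if not relevant/consistent."""
    if not (goal & action["add"]):      # relevance: achieves part of the goal
        return None
    if goal & action["delete"]:         # consistency: must not undo the goal
        return None
    return (goal - action["add"]) | action["pre"]

# Move(B, Table, A) from the blocks example above:
move_B_Table_A = {
    "pre":    {"On(B,Table)", "Clear(A)", "Clear(B)"},
    "add":    {"On(B,A)"},
    "delete": {"On(B,Table)", "Clear(A)"},
}

goal = {"On(B,A)", "On(A,Table)"}
print(sorted(regress(goal, move_B_Table_A)))
# → ['Clear(A)', 'Clear(B)', 'On(A,Table)', 'On(B,Table)']
```

The output is the sub-goal set from step 2 above together with the still-unachieved On(A,Table); backward search repeats this until the sub-goal set holds in S0.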

Advantages:

Focuses on relevant actions, ignoring irrelevant parts of the state space.


Often more efficient for goal-oriented problems.

Disadvantages:

Difficult to handle cases where multiple goals interact.


Requires the ability to compute the "inverse" of actions.

Comparison:

Feature                   Forward Search                    Backward Search
Direction                 Initial state to goal             Goal state to initial state
Exploration               Explores all successor states     Considers only relevant actions
Efficiency                Less efficient in large spaces    More efficient for focused goals
Use Cases                 Pathfinding, navigation problems  Planning with specific goals
Handling Multiple Goals   Handles naturally                 Can be complex

Example Problem:

Planning Robot Actions:

Initial State: Robot at Location A, Box at Location B.


Goal State: Robot at Location C with the Box.

Forward Search:

1. Start at Location A.
2. Move to Location B.

3. Pick up the box.
4. Move to Location C.

Backward Search:

1. Start with At(Robot,C)∧Has(Box).


2. Determine action Move(Robot,B,C) and its precondition At(Robot,B)∧Has(Box).
3. Regress to PickUp(Box) and its precondition At(Box,B)∧At(Robot,B).
4. Chain preconditions until S0​.

For more details about Forward and Backward State-Space Search, refer to Page 10 of
the PDF​.


