11 Classical Planning

Classical Planning

Introduction
Problem-solving agents can find sequences of actions that result in a
goal state
But they work with atomic states and require domain-specific heuristics
A hybrid propositional logical agent can find plans without domain-
specific heuristics
But it relies on ground (variable-free) propositional inference
Factored representation – the state of the world is represented by a
collection of variables, using PDDL (Planning Domain Definition
Language)
PDDL
State – represented as a conjunction of fluents that are ground,
functionless atoms
Eg. At(Truck1, Melbourne) ∧ At(Truck2, Sydney)
Database semantics is used – any fluent not mentioned is false
States may also be represented as a set of fluents, which can be manipulated
with set operations
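A minimal sketch of this set-of-fluents representation, assuming each ground fluent is encoded as a tuple of predicate name and arguments (the encoding itself is an assumption of this sketch, not part of PDDL):

```python
# A state is a set of ground fluents; under database semantics,
# any fluent not in the set is false.
state = frozenset({
    ("At", "Truck1", "Melbourne"),
    ("At", "Truck2", "Sydney"),
})

def holds(state, fluent):
    """A ground fluent is true iff it is present in the state."""
    return fluent in state

# ("At", "Truck1", "Sydney") is not mentioned, so it is false.
```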
Actions – a set of action schemas that implicitly define the functions
ACTIONS(s) and RESULT(s, a)
Classical planning focuses on problems where most actions leave most things
unchanged
PDDL
Action Schema – can represent a set of ground (variable-free)
actions.
Eg. Flying a plane from one place to another

An action schema includes the action name, a list of variables, a
precondition, and an effect
An example ground action
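A sketch of the usual Fly(p, from, to) schema as plain data, and of grounding it into a ground action by substituting constants for variables. The literal encoding (sign, predicate, argument-tuple) is an assumption of this sketch:

```python
from dataclasses import dataclass

@dataclass
class ActionSchema:
    name: str
    variables: tuple       # variable names
    precondition: tuple    # conjunction of literals: (sign, predicate, args)
    effect: tuple

fly = ActionSchema(
    name="Fly",
    variables=("p", "from", "to"),
    precondition=(
        (True, "At", ("p", "from")),
        (True, "Plane", ("p",)),
        (True, "Airport", ("from",)),
        (True, "Airport", ("to",)),
    ),
    effect=(
        (False, "At", ("p", "from")),  # negative literal: delete list
        (True, "At", ("p", "to")),     # positive literal: add list
    ),
)

def ground(schema, binding):
    """Substitute a variable binding to obtain a ground action's effect."""
    return tuple((sign, pred, tuple(binding[v] for v in args))
                 for sign, pred, args in schema.effect)
```

For example, `ground(fly, {"p": "P1", "from": "SFO", "to": "JFK"})` yields the effect literals of the ground action Fly(P1, SFO, JFK).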
PDDL
The precondition and effect are each a conjunction of literals
An action a can be executed in state s if s entails the precondition of
a.
Entailment can be expressed as: s |= q iff every positive literal in
q is in s and every negated literal in q is not

(a ∈ ACTIONS(s)) ⇔ s |= PRECOND(a),
where any variables in a are universally quantified


Eg.
PDDL
The result of executing action a in state s is defined as a state s′
It is represented by the set of fluents formed by starting with s,
removing the fluents that appear as negative literals in the action’s
effects (the delete list, DEL(a)), and adding the fluents that are
positive literals in the action’s effects (the add list, ADD(a)):
RESULT(s, a) = (s − DEL(a)) ∪ ADD(a)
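A sketch of this semantics with plain Python sets, where a fluent is a tuple and an action is assumed to carry precondition, add, and delete fluent sets:

```python
def applicable(state, precond_pos, precond_neg):
    """s |= q iff every positive literal of q is in s and no negated one is."""
    return precond_pos <= state and not (precond_neg & state)

def result(state, add, delete):
    """RESULT(s, a) = (s - DEL(a)) | ADD(a)."""
    return (state - delete) | add

s = {("At", "P1", "SFO")}
precond = {("At", "P1", "SFO")}
if applicable(s, precond, set()):
    # Fly(P1, SFO, JFK): delete At(P1, SFO), add At(P1, JFK)
    s = result(s, add={("At", "P1", "JFK")}, delete={("At", "P1", "SFO")})
```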

The goal is just like a precondition: a conjunction of literals
(positive or negative) that may contain variables
Example: Air Cargo Transport

Solution: [Load(C1, P1, SFO), Fly(P1, SFO, JFK), Unload(C1, P1, JFK),
Load(C2, P2, JFK), Fly(P2, JFK, SFO), Unload(C2, P2, SFO)]
Example: The Spare Tire Problem

Solution: [Remove(Spare, Trunk), Remove(Flat, Axle), PutOn(Spare, Axle)]
The Blocks World
Algorithms for Planning as State-Space Search
Forward Search
Backward Search
Forward State-Space Search
From the earliest days, forward search was thought to be too
inefficient to be practical
It is prone to exploring irrelevant actions
State spaces are large (Eg. an air cargo problem with 10 airports, 5
planes per airport, and 20 pieces of cargo at each airport)
Accurate domain-independent heuristics can be derived, which makes
forward search feasible
Backward relevant-states search
We start at the goal and apply actions backward until we find a
sequence of steps that reaches the initial state
The PDDL representation makes it easy to regress actions
Given a ground goal description g and a ground action a, the
regression from g over a gives us a state description g′ defined by:
g′ = (g − ADD(a)) ∪ PRECOND(a)
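A one-function sketch of regression over sets of ground fluents; the Unload precondition and add list below follow the usual air-cargo schema and are assumptions of this sketch:

```python
def regress(goal, precond, add):
    """Regressing goal g over action a gives g' = (g - ADD(a)) | PRECOND(a)."""
    return (goal - add) | precond

goal = {("At", "C1", "SFO")}
# Unload(C1, P1, SFO): precondition In(C1, P1) ∧ At(P1, SFO); adds At(C1, SFO)
g_prime = regress(goal,
                  precond={("In", "C1", "P1"), ("At", "P1", "SFO")},
                  add={("At", "C1", "SFO")})
```

The regressed description g′ asks only for what the action needs: the achieved goal fluent is dropped and the precondition takes its place.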
Partially uninstantiated actions and states
If the goal is to deliver a piece of cargo to SFO: At(C2, SFO)

Suggested action: Unload(C2, p′, SFO)

The regressed state description:
In(C2, p′) ∧ At(p′, SFO)


Choosing the actions
In backward search, we need to choose actions that are relevant
At least one of the action’s effects must unify with an element of the goal
The action must not have any effect that negates an element of the goal
Eg. Given a goal containing At(C2, SFO), several instantiations of Unload
are relevant
We can reduce the branching factor by always using the action
formed by substituting the most general unifier
Consider the goal Own(0136042597), given an initial state with 10
billion ISBNs and the single action schema
Action(Buy(i), PRECOND: ISBN(i), EFFECT: Own(i))
Heuristics for Planning
An admissible heuristic can be derived by defining a relaxed
problem that is easier to solve
Domain-independent heuristics can be defined for planning
Heuristics:
Ignore pre-conditions heuristic
Ignore delete-lists heuristic
Ignore Preconditions Heuristic
Drops all preconditions from actions
Every action becomes applicable in every state, and any single goal
fluent can be achieved in one step
This suggests that the number of steps required to solve the
relaxed problem is the number of unsatisfied goals
But it may be inaccurate because:
some actions may achieve multiple goals
some actions may undo the effects of others
We need to count the minimum number of actions required such
that the union of those actions’ effects satisfies the goal (a set-cover
problem)
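Minimum set cover is NP-hard, so in practice a greedy approximation is used; a sketch (the action-to-effects mapping below is hypothetical data for illustration):

```python
def greedy_set_cover(goals, action_effects):
    """Greedy set cover: action_effects maps an action name to the set of
    goal fluents its effects achieve. Returns an action count, or None if
    the goals cannot be covered. Not guaranteed optimal."""
    uncovered = set(goals)
    count = 0
    while uncovered:
        # Pick the action whose effects cover the most remaining goals.
        best = max(action_effects,
                   key=lambda a: len(action_effects[a] & uncovered))
        if not action_effects[best] & uncovered:
            return None  # no action helps: goals not coverable
        uncovered -= action_effects[best]
        count += 1
    return count

h = greedy_set_cover({"g1", "g2", "g3"},
                     {"a": {"g1", "g2"}, "b": {"g2"}, "c": {"g3"}})
```

Here the greedy choice picks action a (covering g1 and g2) and then c (covering g3), giving a heuristic value of 2.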
Example
The 8-puzzle can be encoded as:
Action(Slide(t, s1, s2),
PRECOND: On(t, s1) ∧ Tile(t) ∧ Blank(s2) ∧ Adjacent(s1, s2)
EFFECT: On(t, s2) ∧ Blank(s1) ∧ ¬On(t, s1) ∧ ¬Blank(s2))
If we remove the preconditions Blank(s2) ∧ Adjacent(s1, s2), any tile can
move in one action to any space (the number-of-misplaced-tiles heuristic)
If we remove only Blank(s2), we get the Manhattan-distance heuristic
Ignore delete-lists Heuristic
A relaxed problem can be formed by removing all negative literals
from effects
This makes it possible to make monotonic progress towards the goal
There are no dead-ends so there is no need for backtracking

The relaxed problem is simpler, but finding its optimal solution is
still expensive; approximate solutions can be found in polynomial time
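A sketch of why delete relaxation gives monotonic progress: with the delete list ignored, applying an action can only add fluents, so the state only grows and no dead ends arise:

```python
def relaxed_result(state, add, delete):
    """RESULT under the ignore-delete-lists relaxation: the delete list
    is intentionally unused, so fluents are never removed."""
    return state | add

s0 = {"p"}
s1 = relaxed_result(s0, add={"q"}, delete={"p"})
# s0 is a subset of s1: the state grows monotonically toward the goal.
```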


Other Heuristics
Relaxing actions does not reduce the number of states
State abstraction – a many-to-one mapping from states in the ground
representation to states in an abstract representation
One way to abstract a state is to ignore some fluents
Eg. an air cargo problem with 10 airports, 50 planes and 200 pieces of cargo
can be reduced if all the cargo starts at just 5 of the airports
Other Heuristics
Decomposition – Dividing a problem into parts, solving each part
independently and combining the parts
Suppose the goal is a set of fluents G, which we divide into disjoint
subsets G1, …, Gn
We then find plans P1, …, Pn that solve the respective subgoals
The heuristic estimate can be the maximum or the sum of COST(Pi)
The maximum is admissible but may badly underestimate the true
cost, whereas the sum is not admissible in general
If Pi and Pj are known to be independent, COST(Pi) + COST(Pj) is
admissible
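The two ways of combining subplan costs can be sketched in a line each (the costs below are hypothetical numbers for illustration):

```python
# COST(P1..P3) for three disjoint subgoals G1..G3 (hypothetical values).
subplan_costs = [3, 2, 4]

h_max = max(subplan_costs)  # always admissible, may badly underestimate
h_sum = sum(subplan_costs)  # admissible only if the subplans are independent
```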
Planning Graphs
Planning graphs can be used to give better heuristic estimates than
the previous techniques
A planning graph is organized into levels
The first level S0, for the initial state, consists of nodes representing each
fluent that holds initially
The second level A0 consists of nodes for each ground action applicable in S0
Then come alternating levels Si followed by Ai
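A simplified sketch of expanding the state levels of a planning graph, with mutex links omitted for brevity; each action is a hypothetical (precondition, add-list) pair of fluent sets:

```python
def expand_levels(s0, actions):
    """Return the sequence of state levels S0, S1, ... until two consecutive
    levels are identical (levelling off). Persistence actions carry every
    fluent forward; delete lists and mutexes are ignored in this sketch."""
    levels = [frozenset(s0)]
    while True:
        current = levels[-1]
        nxt = set(current)                 # persistence actions
        for precond, add in actions:
            if precond <= current:         # action applicable at this level
                nxt |= add
        nxt = frozenset(nxt)
        if nxt == current:                 # levelled off: stop
            return levels
        levels.append(nxt)

levels = expand_levels({"a"}, [({"a"}, {"b"}), ({"b"}, {"c"})])
```

Here S0 = {a}, S1 = {a, b}, S2 = {a, b, c}, after which the levels repeat.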
Example Problem
Planning Graph
Planning Graph
Actions a and b are mutex iff:
Inconsistent effects: one action negates an effect of the other
Interference: one action deletes a precondition of the other
Competing needs: the actions have mutually exclusive preconditions
Literals a and b are mutex iff:
Inconsistency: one is the negation of the other
Inconsistent support: every pair of actions achieving them is mutex
The graph terminates when two consecutive levels are identical
(levelling-off)
Heuristic Estimation from Planning Graphs
If any goal literal fails to appear in the final level of the graph, the
problem is unsolvable
We can estimate the cost of achieving a goal literal gi from state s as
the level at which gi appears – level cost of gi
To estimate the number of actions, a serial planning graph may be
used – one in which every pair of non-persistence actions at the same
level is marked mutex, so only one action can occur per level
To estimate the cost of conjunction of goals, three approaches are
used:
Max-level heuristic – the maximum level cost of any goal (admissible)
Level sum heuristic – the sum of the level costs of the goals (can be
inadmissible, but often accurate when subgoals are nearly independent)
Set-level heuristic – the level at which all goal literals appear together
with no pair of them mutex (admissible)
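A sketch of reading the first two estimates off a list of state levels (a hypothetical list of fluent sets standing in for a planning graph, with mutexes ignored):

```python
def level_cost(levels, g):
    """Level at which fluent g first appears, or None if it never does."""
    for i, level in enumerate(levels):
        if g in level:
            return i
    return None

def max_level(levels, goals):
    """Admissible: the largest level cost among the goal fluents."""
    costs = [level_cost(levels, g) for g in goals]
    return None if None in costs else max(costs)

def level_sum(levels, goals):
    """Often accurate for nearly independent subgoals; can be inadmissible."""
    costs = [level_cost(levels, g) for g in goals]
    return None if None in costs else sum(costs)

lv = [{"a"}, {"a", "b"}, {"a", "b", "c"}]
```

A `None` result corresponds to a goal literal that never appears in the graph, which signals that the problem is unsolvable.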
GraphPlan Algorithm
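GraphPlan alternates between trying to extract a solution from the current graph and expanding the graph by one level. A pseudocode sketch of the control loop, following the usual textbook outline (EXPAND-GRAPH and EXTRACT-SOLUTION are assumed helper routines, not shown):

```
function GRAPHPLAN(problem) returns solution or failure
    graph ← INITIAL-PLANNING-GRAPH(problem)
    goals ← conjuncts of problem.GOAL
    nogoods ← an empty hash table         // memoizes failed goal sets
    loop
        if goals all appear non-mutex in the last state level of graph then
            solution ← EXTRACT-SOLUTION(graph, goals, nogoods)
            if solution ≠ failure then return solution
        if graph and nogoods have both levelled off then return failure
        graph ← EXPAND-GRAPH(graph, problem)
```

Termination relies on the levelling-off test: once neither the graph nor the nogoods table changes between iterations, no solution can ever be extracted.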
Mutex Actions
Inconsistent effects: Remove(Spare, Trunk) is mutex with
LeaveOvernight because one has the effect At(Spare, Ground) and the other
has its negation
Interference: Remove(Flat, Axle) is mutex with LeaveOvernight because
one has the precondition At(Flat, Axle) and the other has its negation as an
effect
Competing needs: PutOn(Spare, Axle) is mutex with Remove(Flat, Axle)
because one has At(Flat, Axle) as a precondition and the other has its
negation
Inconsistent support: At(Spare, Axle) is mutex with At(Flat, Axle) in S2
because the only way of achieving At(Spare, Axle) is by PutOn(Spare, Axle),
and that is mutex with the persistence action that is the only way of
achieving At(Flat, Axle)
