AIES Unit-2

The document outlines the course structure for CS 332 Artificial Intelligence, detailing its objectives, outcomes, and syllabus. Key topics include knowledge representation, propositional and predicate logic, inference methods, and various planning strategies. The course aims to equip students with the ability to apply intelligent agents and design smart systems for real-world problem-solving.


CS 332 Artificial Intelligence

SCHOOL OF COMPUTER ENGINEERING AND TECHNOLOGY



CS 332 Artificial Intelligence
Teaching Scheme Credits: 2 + 1 = 3
Theory: 4 Hrs / Week Practical: 2 Hrs / Week
Course Objectives:

1) To understand the concept of Artificial Intelligence (AI)


2) To learn various search strategies for AI
3) To develop the ability to solve real world problems unconventionally and optimally

Course Outcomes:

1) Identify and apply suitable Intelligent agents for various AI applications


2) Design smart systems using different informed / uninformed search or heuristic approaches.
3) Identify the knowledge associated with a given problem and represent it by ontological engineering to plan a strategy for solving the problem.



Syllabus

Knowledge Representation and Planning


Propositional logic and predicate logic, Knowledge Representation structures such as frames, Conceptual dependencies, Semantic networks and scripts, Resolution in predicate logic, Unification algorithm, Forward and Backward chaining, Logic Programming
Planning: Forward and Backward planning, Goal Stack Planning, Hierarchical
Planning.



Knowledge Representation and Planning



Contents

▪ Propositional logic and predicate logic


▪ Knowledge Representation structures such as frames, Conceptual dependencies, Semantic networks and scripts
▪ Resolution in predicate logic
▪ Unification algorithm
▪ Forward and Backward chaining
▪ Logic Programming
▪ Forward and Backward planning
▪ Goal Stack Planning and Hierarchical Planning



Knowledge bases

Knowledge Representation involves:
– Facts: the things we want to represent; truths in some relevant world.
– Representations of those facts in a chosen formalism.



A knowledge-based agent
▪ A knowledge-based agent includes a knowledge base and an inference system.
▪ A knowledge base is a set of representations of facts of the world.
▪ Each individual representation is called a sentence.
▪ The sentences are expressed in a knowledge representation language.
The agent operates as follows:
1. It TELLs the knowledge base what it perceives.
2. It ASKs the knowledge base what action it should perform.
3. It performs the chosen action.
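The TELL/ASK cycle can be sketched in a few lines of Python. This is only an illustrative skeleton: the KnowledgeBase class, the tuple-based sentence encoding, and the lookup-only ask() are assumptions made for the sketch, not a design prescribed by these slides.

```python
# Illustrative skeleton of the TELL/ASK agent loop (names and the trivial
# lookup-based ASK are assumptions for this sketch).

class KnowledgeBase:
    def __init__(self):
        self.sentences = set()            # each element is one sentence

    def tell(self, sentence):
        """Store a representation of a fact in the knowledge base."""
        self.sentences.add(sentence)

    def ask(self, query):
        """Naive inference: succeed only if the query is stored verbatim."""
        return query in self.sentences

def kb_agent_step(kb, percept, candidate_actions):
    kb.tell(("percept", percept))                 # 1. TELL the KB what it perceives
    for action in candidate_actions:
        if kb.ask(("should_do", action)):         # 2. ASK the KB what to do
            return action                         # 3. perform the chosen action
    return "no-op"

kb = KnowledgeBase()
kb.tell(("should_do", "grab"))
print(kb_agent_step(kb, "glitter ahead", ["move", "grab"]))   # grab
```
A real knowledge-based agent would replace ask() with a proper inference procedure (e.g., the chaining methods covered later).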



Logic
▪ Drawing reasonable conclusions from a set of data (observations, beliefs, etc) seems key to
intelligence
▪ Logic is a powerful and well-developed approach to this, and one that is highly regarded
▪ Logic is also a strong formal system that we can program for computers to use
▪ Maybe we can reduce any AI problem to figuring out how to represent it in logic and apply
standard proof techniques to generate solutions.
Intelligent agents should have capacity for:
• Perceiving, that is, acquiring information from environment,
• Knowledge Representation, that is, representing its understanding of the world,
• Reasoning, that is, inferring the implications of what it knows and of the choices it has, and
• Acting, that is, choosing what it wants to do and carrying it out
Continued…
● Knowledge can also be represented by the symbols of logic, which is the study of the rules of
exact reasoning.

● Logic is also of primary importance in expert systems in which the inference engine reasons
from facts to conclusions.

● A descriptive term for logic programming and expert systems is automated reasoning
systems.



Logic in general
Logics are formal languages for representing information such that conclusions can be
drawn
Syntax defines the sentences in the language
Semantics define the "meaning" of sentences
◦ i.e., define truth of a sentence in a world

E.g., the language of arithmetic
◦ x+2 ≥ y is a sentence; x2+y > {} is not a sentence
◦ x+2 ≥ y is true iff the number x+2 is no less than the number y
◦ x+2 ≥ y is true in a world where x = 7, y = 1
◦ x+2 ≥ y is false in a world where x = 0, y = 6



Representation, reasoning, and logic

▪ The object of knowledge representation is to express knowledge in a computer-tractable form, so that agents can perform well.
▪ A knowledge representation language is defined by its syntax, which defines all possible sequences of symbols that constitute sentences of the language (e.g., sentences in a book, bit patterns in computer memory), and its semantics, which determines the facts in the world to which the sentences refer.
▪ Each sentence makes a claim about the world.
▪ An agent that holds a sentence is said to believe that sentence about the world.

Types of Logic
There are a number of logical systems with different syntax and
semantics.
1. Propositional logic
2. Predicate logic
(Exercise: elaborate on propositional logic and predicate logic with examples.)



Propositional Logic
▪Propositional logic is a symbolic logic for manipulating propositions
▪Propositional logic is concerned with declarative sentences that can be classified
as either true or false
▪A sentence whose truth value can be determined is called a statement or
proposition
▪A statement is also called a closed sentence because its truth value is not open to
question
▪Statements that cannot be answered absolutely are called open sentences
▪A compound statement is formed by using logical connectives on individual
statements



Propositional logic
In propositional logic (PL) a user defines a set of propositional symbols, like P and Q, and defines the semantics of each of these symbols. For example,
◦ P means "It is hot"
◦ Q means "It is humid"
◦ R means "It is raining"

Examples of PL sentences:
o (P ^ Q) => R (here meaning "If it is hot and humid, then it is raining")
o Q => P (here meaning "If it is humid, then it is hot")
o Q (here meaning "It is humid.")



Truth Table
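The table for the example sentences on the previous slide can be generated mechanically. A minimal sketch follows; implies() encodes ⇒ as ¬a ∨ b (the layout of the printout is an implementation choice, not from the slides).

```python
from itertools import product

def implies(a, b):
    """Material implication: a => b is equivalent to (not a) or b."""
    return (not a) or b

# Truth table for (P ∧ Q) => R and Q => P over all eight assignments.
print("P      Q      R      (P∧Q)=>R   Q=>P")
for P, Q, R in product([True, False], repeat=3):
    print(f"{P!s:<6} {Q!s:<6} {R!s:<6} {implies(P and Q, R)!s:<10} {implies(Q, P)!s}")
```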



Inference rules in Propositional logic

1. Idempotent rule:
P ˄ P ==> P
P ˅ P ==> P
2. Commutative rule:
P ˄ Q ==> Q ˄ P
P ˅ Q ==> Q ˅ P
3.Associative rule:
P ˄ (Q ˄ R) ==> (P ˄ Q) ˄ R
P ˅ (Q ˅ R) ==> (P ˅ Q) ˅ R



Continued…
4. Distributive Rule:
P ˅ (Q ˄ R) ==> (P ˅ Q) ˄ (P ˅ R)
P ˄ (Q ˅ R) ==> (P ˄ Q) ˅ (P ˄ R)
5. De Morgan's Rule:
¬(P ˅ Q) ==> ¬P ˄ ¬Q
¬(P ˄ Q) ==> ¬P ˅ ¬Q
6. Implication elimination:
P → Q ==> ¬P ˅ Q



Continued…
7. Bidirectional Implication elimination:
(P ⬄ Q) ==> (P → Q) ˄ (Q → P)
8. Contrapositive rule:
P → Q ==> ¬Q → ¬P
9. Double Negation rule:
¬(¬P) ==> P
10. Absorption Rule:
P ˅ (P ˄ Q) ==> P
P ˄ (P ˅ Q) ==> P
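These equivalences can be verified by brute force. A small sketch below checks De Morgan's rule and the contrapositive rule (P → Q ≡ ¬Q → ¬P) over every truth assignment; the helper implies() is an assumption of the sketch.

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# De Morgan: ¬(P ∨ Q) ≡ ¬P ∧ ¬Q, checked for every assignment of P and Q.
demorgan_ok = all((not (p or q)) == ((not p) and (not q))
                  for p, q in product([True, False], repeat=2))

# Contrapositive: P → Q ≡ ¬Q → ¬P.
contrapositive_ok = all(implies(p, q) == implies(not q, not p)
                        for p, q in product([True, False], repeat=2))

print(demorgan_ok, contrapositive_ok)   # True True
```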



Continued…
Example:
“I will get wet if it rains and I go out of the house”
Let Propositions be:
W : “I will get wet “
R : “it rains “
S : “I go out of the house”
(S ˄ R) 🡪 W



Pros and cons of propositional logic
▪ Propositional logic is declarative
▪ Propositional logic allows partial/disjunctive/negated
information
▪ Propositional logic is compositional
▪ Propositional logic has very limited expressive power



Predicate Logic
• Representing simple facts (propositions):
“SOCRATES IS A MAN”
SOCRATESMAN ---------(1)
“PLATO IS A MAN”
PLATOMAN ---------(2)
• This fails to capture the relationship between Socrates and man; we get no information about the objects involved.
Ex: if asked the question “who is a man?” we cannot get an answer. Using Predicate Logic, however, we can represent the above facts as: Man(Socrates) and Man(Plato)
1. Marcus was a man. man(Marcus)
Continued…
Representation and Mapping
• Spot is a dog
dog(Spot)
• Every dog has a tail
∀x: dog(x) → hastail(x)
• Therefore, Spot has a tail
hastail(Spot)



Quantifiers
1. Universal quantifier (∀)
• ∀x: means “for all” x
• It is used to represent phrase “ for all”.
• It says that something is true for all possible values of
a variable.
• Ex. “ John loves everyone”
∀x: loves(John , x)



Continued…
2. Existential quantifier ( ∃ ):

• Used to represent the fact “ there exists some”


• Ex:
• “some people like reading and hence they gain good
knowledge”
∃ x: { [person(x) ∧ like (x , reading)] 🡪gain(x, knowledge) }
• “lord Haggins has a crown on his head”
• ∃ x: crown(x) ∧ onhead (x , Haggins)



Continued…
3. Nested Quantifiers
• We can use both ∀ and ∃ separately
• Ex: “ everybody loves somebody ”
∀x: ∃y: loves ( x , y)
• Connection between ∀ and ∃
• “ everyone dislikes garlic”
∀ x: ¬ like ( x , garlic )
⮚ This can also be said as:
“there does not exist someone who likes garlic”
¬ ∃x: like (x, garlic)



Continued…
All Romans were either loyal to Caesar or hated him.
∀x: Roman(x) → loyalto(x, Caesar) ∨ hate(x, Caesar)
Everyone is loyal to someone.
∀x: ∃y: loyalto(x, y)   (note that ∃y: ∀x: loyalto(x, y) is a different, stronger reading; see the sketch below)
People only try to assassinate rulers they are not loyal to.
∀x: ∀y: person(x) ∧ ruler(y) ∧ tryassassinate(x, y) → ¬loyalto(x, y)
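Over a small finite domain the two quantifier orders can be checked mechanically. The sketch below uses an invented set of people and an invented loves relation purely for illustration.

```python
# A tiny, invented domain and "loves" relation, used only to contrast the two
# quantifier orders: ∀x ∃y loves(x, y) versus ∃y ∀x loves(x, y).
people = {"John", "Jane", "Pat"}
loves = {("John", "Jane"), ("Jane", "Pat"), ("Pat", "Pat")}

# ∀x ∃y loves(x, y): everybody loves somebody
print(all(any((x, y) in loves for y in people) for x in people))   # True

# ∃y ∀x loves(x, y): there is one person whom everybody loves
print(any(all((x, y) in loves for x in people) for y in people))   # False
```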



Inference Methods

• Unification (prerequisite)
• Forward Chaining
• Backward Chaining
• Logic Programming (Prolog)
• Resolution
• Transform to CNF (conjunctive normal form)
• Generalization of Prop. Logic resolution



Forward Chaining
• Forward Chaining
◦ Start with atomic sentences in the KB and apply Modus Ponens in the
forward direction, adding new atomic sentences, until no further inferences
can be made.
◦ P implies Q and P is asserted to be true, so therefore Q must be true
• Given a new fact, generate all consequences
• Assumes all rules are of the form C1 ∧ C2 ∧ … ∧ Cn ⇒ Conclusion (definite clauses)
• Each rule & binding generates a new fact
• This new fact will “trigger” other rules
• Keep going until the desired fact is generated
Example Knowledge Base
The law says that it is a crime for an American to sell weapons to hostile nations.
The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Col. West, who is an American.

Prove that Col. West is a criminal.



Example Knowledge Base
• It is a crime for an American to sell weapons to hostile nations
American(x) ∧ Weapon(y) ∧ Sells(x,y,z) ∧ Hostile(z) ⇒ Criminal(x)
• Nono … has some missiles
∃x Owns(Nono, x) ∧ Missile(x)
Owns(Nono, M1) and Missile(M1)
• All of its missiles were sold to it by Col. West
∀x Missile(x) ∧ Owns(Nono, x) ⇒ Sells(West, x, Nono)
• Missiles are weapons
Missile(x) ⇒ Weapon(x)



Example Knowledge Base
An enemy of America counts as “hostile”
Enemy( x, America ) ⇒ Hostile(x)

Col. West who is an American


American( Col. West )

The country Nono, an enemy of America


Enemy(Nono, America)
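With the facts and rules in place, forward chaining can derive Criminal(West). Below is a hand-grounded, propositional sketch (West, M1 and Nono are the only constants, so the first-order rules are instantiated by hand; a full first-order chainer would also use the unification procedure shown later). The set/tuple encoding is a choice made for the sketch.

```python
facts = {"American(West)", "Owns(Nono,M1)", "Missile(M1)", "Enemy(Nono,America)"}

rules = [  # (premises, conclusion), already instantiated
    ({"Missile(M1)"}, "Weapon(M1)"),
    ({"Missile(M1)", "Owns(Nono,M1)"}, "Sells(West,M1,Nono)"),
    ({"Enemy(Nono,America)"}, "Hostile(Nono)"),
    ({"American(West)", "Weapon(M1)", "Sells(West,M1,Nono)", "Hostile(Nono)"},
     "Criminal(West)"),
]

def forward_chain(facts, rules):
    """Apply Modus Ponens repeatedly until no new fact can be added."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print("Criminal(West)" in forward_chain(set(facts), rules))   # True
```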





Backward Chaining
• Consider the item to be proven a goal
• Find a rule whose head is the goal (and bindings)
• Apply bindings to the body, and prove these (subgoals) in turn
• If you prove all the subgoals, increasing the binding set as you go,
you will prove the item.
• Logic Programming (cprolog, on CS)



Backward Chaining Example
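A propositional sketch of the procedure on the same hand-grounded crime KB: the goal is treated as something to prove, and a rule's subgoals are proved recursively until only known facts remain. The encoding mirrors the forward-chaining sketch and is an assumption of the example.

```python
rules = [  # (premises, conclusion); facts are rules with no premises
    (set(), "American(West)"), (set(), "Owns(Nono,M1)"),
    (set(), "Missile(M1)"), (set(), "Enemy(Nono,America)"),
    ({"Missile(M1)"}, "Weapon(M1)"),
    ({"Missile(M1)", "Owns(Nono,M1)"}, "Sells(West,M1,Nono)"),
    ({"Enemy(Nono,America)"}, "Hostile(Nono)"),
    ({"American(West)", "Weapon(M1)", "Sells(West,M1,Nono)", "Hostile(Nono)"},
     "Criminal(West)"),
]

def backward_chain(goal, rules, seen=frozenset()):
    """Prove the goal by proving some matching rule's premises in turn."""
    if goal in seen:                       # guard against circular subgoals
        return False
    return any(conclusion == goal and
               all(backward_chain(p, rules, seen | {goal}) for p in premises)
               for premises, conclusion in rules)

print(backward_chain("Criminal(West)", rules))   # True
```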



Unification

• It’s a matching procedure that compares two literals and discovers whether there exists a set of
substitutions that can make them identical.
• E.g. 1: Hate(marcus, X) and Hate(marcus, caesar) can be unified with the substitution {caesar/X}
• E.g. 2: Hate(X, Y) and Hate(john, Z) could be unified as: {john/X, Y/Z}



UNIFY(p, q) = unifier θ where SUBST(θ, p) = SUBST(θ, q)
∀x: knows(John, x) → hates(John, x)
knows(John, Jane)
∀y: knows(y, Leonid)
∀y: knows(y, mother(y))
∀x: knows(x, Elizabeth)
UNIFY(knows(John ,x) ,knows(John, Jane)) = {Jane/x}
UNIFY(knows(John, x), knows(y, Leonid)) = {Leonid/x, John/y}
UNIFY(knows(John, x), knows(y, mother(y))) = {John/y, mother(John)/x}
UNIFY(knows(John, x), knows(x, Elizabeth)) = FAIL
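A compact sketch of the unification procedure follows. The representation choices (variables as strings starting with '?', compound terms as tuples) are assumptions for the sketch, and the occurs check is omitted for brevity.

```python
def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def substitute(t, theta):
    """Apply a substitution to a term."""
    if is_var(t):
        return substitute(theta[t], theta) if t in theta else t
    if isinstance(t, tuple):
        return tuple(substitute(a, theta) for a in t)
    return t

def unify(p, q, theta=None):
    """Return a substitution making p and q identical, or None on failure."""
    if theta is None:
        theta = {}
    p, q = substitute(p, theta), substitute(q, theta)
    if p == q:
        return theta
    if is_var(p):
        return {**theta, p: q}
    if is_var(q):
        return {**theta, q: p}
    if isinstance(p, tuple) and isinstance(q, tuple) and len(p) == len(q):
        for a, b in zip(p, q):
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None

print(unify(("knows", "John", "?x"), ("knows", "John", "Jane")))         # {'?x': 'Jane'}
print(unify(("knows", "John", "?x"), ("knows", "?y", ("mother", "?y")))) # {'?y': 'John', '?x': ('mother', 'John')}
print(unify(("knows", "John", "?x"), ("knows", "?x", "Elizabeth")))      # None (FAIL)
```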



Knowledge Representation structure
• Knowledge types :
1. Declarative
2. Procedural
•Declarative knowledge deals with factoid questions (what is the
capital of India? Who won the Wimbledon in 2005? Etc.)
• Procedural knowledge deals with “How”
• Procedural knowledge can be embedded in declarative
knowledge
Classification
Knowledge representation schemes are commonly classified as unstructured, structured, or primitive oriented. The schemes covered here are: Predicate calculus, Semantic Nets, Frames, Scripts, and Conceptual Dependencies.



Predicate Calculus
By Gottlob Frege

Key points
◦ Simplest type of representation
◦ Fully logic based
◦ Deduction, Abduction and Induction
◦ Resolution and Refutation
Application: In rule-based systems



Semantic nets
▪ A semantic network is an irregular graph that has concepts in vertices and relations on arcs.
▪ Relations can be ad-hoc, but they can also be quite general, for example, “is a” ( ISA), “a kind
of” (AKO), “an instance of”, “part of”.
▪ Relations often express physical properties of objects (colour, length, and lots of others).
▪ Most often, relations link two concepts.
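A semantic network can be stored directly as a set of (node, relation, node) triples. The toy sketch below, with invented bird facts, shows property lookup that inherits along ISA arcs.

```python
triples = {
    ("Canary", "ISA", "Bird"),
    ("Bird", "ISA", "Animal"),
    ("Canary", "colour", "yellow"),
    ("Bird", "has_part", "wings"),
    ("Animal", "can", "breathe"),
}

def value_of(node, relation):
    """Look up a relation on a node, climbing the ISA hierarchy if necessary."""
    for s, r, o in triples:
        if s == node and r == relation:
            return o
    for s, r, o in triples:
        if s == node and r == "ISA":
            return value_of(o, relation)
    return None

print(value_of("Canary", "colour"))     # yellow   (stored on Canary itself)
print(value_of("Canary", "has_part"))   # wings    (inherited from Bird)
print(value_of("Canary", "can"))        # breathe  (inherited from Animal)
```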



Frames
▪ Proposed by Marvin Minsky in the 1970s

▪Evolution of Frame System

▪Definition- A collection of attributes and associated values that describe some entity in the world

▪ Differs from semantic nets in that frames may involve procedural embedding in place of attribute values (which are called fillers)



Continued…
▪A frame represents a concept.
▪ A frame system represents an organization of knowledge about a set of related concepts.
▪A frame has slots that denote properties of objects. Some slots have default fillers, some are
empty (may be filled when more becomes known about an object).
▪Frames are linked by relations of specialization/generalization and by many ad-hoc relations.
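Frames map naturally onto nested dictionaries: slots with fillers, defaults inherited from a parent frame, and a procedural attachment ("if-needed" function) in place of a value. The hotel-room frames below are invented purely to illustrate the mechanism.

```python
import datetime

frames = {
    "Hotel-Room": {"ISA": None, "bed": "queen", "price": None},
    "Room-101": {
        "ISA": "Hotel-Room",
        "price": 120,
        "date": lambda: datetime.date.today(),   # procedural filler
    },
}

def get_slot(frame, slot):
    """Return a slot's filler: run procedures, else fall back to the parent frame."""
    value = frames[frame].get(slot)
    if callable(value):
        return value()                                   # if-needed procedure
    if value is None and frames[frame].get("ISA"):
        return get_slot(frames[frame]["ISA"], slot)      # inherit default filler
    return value

print(get_slot("Room-101", "price"))   # 120   (own filler)
print(get_slot("Room-101", "bed"))     # queen (default inherited from Hotel-Room)
print(get_slot("Room-101", "date"))    # computed by the attached procedure
```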





Conceptual Dependency (CD)

●CD theory was developed by Schank in 1973 to 1975 to represent the meaning
of NL sentences.
− It helps in drawing inferences
− It is independent of the language

●The CD representation of a sentence is not built using the words in the sentence; rather, it is built using conceptual primitives which give the intended meanings of the words.
●CD provides structures and specific set of primitives from which
representation can be built.



Primitive Acts of CD theory
●ATRANS Transfer of an abstract relationship (i.e. give)
●PTRANS Transfer of the physical location of an object (e.g., go)
●PROPEL Application of physical force to an object (e.g. push)
●MOVE Movement of a body part by its owner (e.g. kick)
●GRASP Grasping of an object by an actor (e.g. throw)
●INGEST Ingesting of an object by an animal (e.g. eat)
●EXPEL Expulsion of something from the body of an animal (e.g. cry)
●MTRANS Transfer of mental information (e.g. tell)
●MBUILD Building new information out of old (e.g decide)
●SPEAK Producing of sounds (e.g. say)
●ATTEND Focusing of a sense organ toward a stimulus (e.g. listen)
Some Conceptualizations of CD

●Dependency structures are themselves conceptualizations and can serve as components of larger dependency structures.
●The dependencies among conceptualizations correspond to semantic relations among the underlying concepts.
●We will list the most important ones allowed by CD.
●The remaining ones can be found in the book.



Conceptual category

●There are four conceptual categories

−ACT Actions {one of the CD primitives}


−PP Objects {picture producers}
−AA Modifiers of actions {action aiders}
−PA Modifiers of PP’s {picture aiders}



Example
●I gave a book to the man. CD representation is as follows:

I ⇔ ATRANS (tense P), with object (O) book and recipient (R): to the man, from I

●It should be noted that this representation is the same for different sentences with the same meaning. For example:
− I gave the man a book,
− The man got a book from me,
− The book was given to the man by me, etc.



Few conventions

●Arrows indicate directions of dependency


●Double arrow indicates two way link between actor and action.
O – for the object case relation
R – for the recipient case relation
P – for past tense
D - destination



Conceptual graph of the sentence “The dog scratches its ear with its paw.” (Figure from Luger, Artificial Intelligence, 6th edition, © Pearson Education Limited, 2009.)
Conceptual Graph



Continued…
▪ General semantic relations help represent the meaning of simple sentences in a
systematic way.
▪ A sentence is centered on a verb that expects certain arguments.
▪ For example, verbs usually denote actions (with agents) or states (with passive experiencers, for example, “he dreams” or “he is sick”).





What is planning in AI?

▪ Planning in Artificial Intelligence is about the decision-making tasks performed by robots or computer programs to achieve a specific goal.
▪ The execution of planning is about choosing a sequence of actions with a high likelihood of completing the specific task.



Motivation

•Many AI agents operate in an environment. Two issues:


•The agent wants to find a “good” sequence of actions that will take it from an initial to a goal
state
•Every action of the agent changes some part of the state of the environment, keeping the
remaining part unchanged.
•After every action, how do we determine which part changed and which did not? This is the frame problem.



A planning agent
▪ An agent interacts with the world via perception and actions.
▪ Perception involves sensing the world and assessing the situation, creating some internal representation of the world
▪ Actions are what the agent does in the domain. Planning involves reasoning about actions that the agent intends to carry out
▪ Planning is the reasoning side of acting
▪ This reasoning involves the representation of the world that the agent has, as well as the representation of its actions.
▪ The objectives may be hard constraints, which have to be achieved completely for success
▪ The objectives could also be soft constraints, or preferences, to be achieved as much as possible



Example

•A robot has to pick up an object fallen on the floor at point A and keep it at point
B on the floor in a room.
•State: positions of objects in the room + own position
•Robot actions: move one step LEFT, RIGHT, FORWARD, BACK, pick up
object from floor, put down the object in its arm on the floor.
•Robot needs to find a “safe” way from point A to B, by exploring the
environment
•Robot has a map of the entire room; knows all the objects in the room, and their
positions.



Example: Travel Planning

•“Please plan an economy class round trip air-travel from Pune to France lasting 9
days covering at least 3 different locations. I want to spend at least 2 days in each
location. I am interested in nature, history and art.”



Planning Types
1. STRIPS
2. Forward and Backward State Space Planning
3. Goal Stack Planning
4. Plan Space Planning
5. A Unified Framework For Planning



Planning in Blocks World

▪ There is a flat surface on which blocks can be placed.


▪ There are many square blocks, all of the same size.
▪ Each block has a unique ID (we use A, B, C, … as IDs)
▪ Blocks can be placed on top of each other.
▪ There is a robot arm that can perform several actions related to moving the blocks around
▪ The arm can hold at most one block at a time.
▪ Assumption:
Each block can have at most 1 other block on top of it.



Predicates

A given “situation” (“state”) can be described using a formula made up of the following predicates:
1. ON(A, B): block A is on block B
2. ONTABLE(A): block A is on the table
3. CLEAR(A): there is nothing on top of block A
4. HOLDING(A): the robot arm is holding block A
5. ARMEMPTY: the robot arm is holding nothing





Actions the Robot Arm Can Perform

▪ Each action (operator) transforms one state into another.


▪ Each action has associated with it:
PRECONDITION: list of predicates that must be TRUE before the
action can be applied
ADD: list of predicates that become TRUE after the action is applied
DELETE: list of predicates that become FALSE after the action is applied
Any predicate not included in either the ADD or the DELETE list is
assumed to be unaffected by the action



Actions…

UNSTACK(x, y): pick up block x from its current position on block y.


–PRECONDITION: ARMEMPTY ∧ON(x, y) ∧ CLEAR(x)
–DELETE: ARMEMPTY ∧ ON(x, y)
–ADD: HOLDING(x) ∧ CLEAR(y)
–Note: CLEAR(x) is still true, even after the action occurs!
STACK(x, y):place block x on block y.
–PRECONDITION: CLEAR(y) ∧ HOLDING(x)
–DELETE: CLEAR(y) ∧ HOLDING(x)
–ADD: ARMEMPTY ∧ ON(x, y) ∧ CLEAR(x)



Actions…

PICKUP(x): pick up block x from the table and hold it.


–PRECONDITION: ARMEMPTY ∧ CLEAR(x) ∧ ONTABLE(x)
–DELETE: ARMEMPTY ∧ ONTABLE(x)
–ADD: HOLDING(x)
PUTDOWN(x): put down block x on the table.
–PRECONDITION: HOLDING(x)
–DELETE: HOLDING(x)
–ADD: ARMEMPTY ∧ ONTABLE(x)
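These PRECONDITION/ADD/DELETE lists translate directly into data. A minimal sketch, assuming facts are encoded as plain strings such as "ON(A,B)" (the encoding is a choice made here, not prescribed by the slides):

```python
UNSTACK_A_B = {
    "pre":    {"ARMEMPTY", "ON(A,B)", "CLEAR(A)"},   # must hold before
    "add":    {"HOLDING(A)", "CLEAR(B)"},            # become true afterwards
    "delete": {"ARMEMPTY", "ON(A,B)"},               # become false afterwards
}

def apply_op(state, op):
    """Apply an operator if its preconditions hold; otherwise return None."""
    if not op["pre"] <= state:
        return None
    return (state - op["delete"]) | op["add"]

state = {"ON(A,B)", "ONTABLE(B)", "CLEAR(A)", "ARMEMPTY"}
print(apply_op(state, UNSTACK_A_B))
# {'ONTABLE(B)', 'CLEAR(A)', 'HOLDING(A)', 'CLEAR(B)'}  (set order may vary)
```
Note that CLEAR(A) is untouched by the operator and therefore stays true, exactly as the slide points out.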





The Planning Problem

Given an initial state S0, and a final (goal) state S1,


identify the “best” sequence of actions that will transform S0 into S1.
Each action can only be one of: UNSTACK, STACK, PICKUP, PUTDOWN.



Forward planning

Start with the initial state (current world state).


Select actions whose preconditions match with the current state
description (before-action state) through unification.
Apply actions to the current world state. Generate possible
follow-states (after-action states) according to the add and
delete lists of the action description.
Repeat for every generated new world state.
Stop when a state is generated which includes the goal formula.
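The loop above is essentially a state-space search. Below is a self-contained sketch: a breadth-first progression planner over a tiny two-block world (block names, start state and goal are made up for the example; the precondition/add/delete encoding matches the earlier STRIPS sketch).

```python
from collections import deque
from itertools import permutations

def ground_actions(blocks):
    """All ground PICKUP/PUTDOWN/STACK/UNSTACK actions for the given blocks."""
    acts = []
    for x in blocks:
        acts.append({"name": f"PICKUP({x})",
                     "pre": {"ARMEMPTY", f"CLEAR({x})", f"ONTABLE({x})"},
                     "add": {f"HOLDING({x})"},
                     "del": {"ARMEMPTY", f"ONTABLE({x})"}})
        acts.append({"name": f"PUTDOWN({x})",
                     "pre": {f"HOLDING({x})"},
                     "add": {"ARMEMPTY", f"ONTABLE({x})"},
                     "del": {f"HOLDING({x})"}})
    for x, y in permutations(blocks, 2):
        acts.append({"name": f"STACK({x},{y})",
                     "pre": {f"HOLDING({x})", f"CLEAR({y})"},
                     "add": {"ARMEMPTY", f"ON({x},{y})", f"CLEAR({x})"},
                     "del": {f"HOLDING({x})", f"CLEAR({y})"}})
        acts.append({"name": f"UNSTACK({x},{y})",
                     "pre": {"ARMEMPTY", f"ON({x},{y})", f"CLEAR({x})"},
                     "add": {f"HOLDING({x})", f"CLEAR({y})"},
                     "del": {"ARMEMPTY", f"ON({x},{y})"}})
    return acts

def forward_plan(start, goal, actions):
    """Apply applicable actions breadth-first until a state contains the goal."""
    frontier = deque([(frozenset(start), [])])
    visited = {frozenset(start)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan
        for a in actions:
            if a["pre"] <= state:                       # preconditions satisfied
                nxt = frozenset((state - a["del"]) | a["add"])
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [a["name"]]))
    return None

start = {"ONTABLE(A)", "ON(B,A)", "CLEAR(B)", "ARMEMPTY"}
goal = {"ON(A,B)"}
print(forward_plan(start, goal, ground_actions(["A", "B"])))
# ['UNSTACK(B,A)', 'PUTDOWN(B)', 'PICKUP(A)', 'STACK(A,B)']
```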



Backward planning

Start with a given goal state description.


Check unsatisfied (pre)conditions of the goal description.
Select and apply actions which have those conditions as effects.
Proceed in the same way backwards, trying to fulfill
preconditions of each new action by recursively choosing
actions which achieve those preconditions.
Stop when the precondition is part of the initial state, and thus a sequence of actions is found leading from the initial state to the goal state.
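One regression step can be written out directly. This is a sketch reusing the string/set encoding from the earlier STRIPS examples; STACK_A_B is the ground "stack A on B" action. Regressing a goal through an action yields (goal − ADD) ∪ PRECONDITION, provided the action achieves part of the goal and deletes none of it.

```python
STACK_A_B = {
    "pre": {"HOLDING(A)", "CLEAR(B)"},
    "add": {"ARMEMPTY", "ON(A,B)", "CLEAR(A)"},
    "del": {"HOLDING(A)", "CLEAR(B)"},
}

def regress(goal, op):
    """Subgoal that must hold before op so that goal holds after it."""
    if not goal & op["add"]:      # op does not achieve any part of the goal
        return None
    if goal & op["del"]:          # op would undo part of the goal
        return None
    return (goal - op["add"]) | op["pre"]

print(regress({"ON(A,B)"}, STACK_A_B))   # {'HOLDING(A)', 'CLEAR(B)'} (order may vary)
```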



Goal Stack Planning
● Problem-solving is searching and moving through a state
space.
● Planning is searching for successful paths through a state
space
✔ Planning = problem solving in advance.
✔ Planning is important if solutions cannot be undone.
✔ If the universe is not predictable, then a plan can fail, requiring dynamic plan revision.
Goal Stack planning
Π := φ                  // plan is initially empty
C := start state        // C is the current state
Push the goal state on the stack
Push all its subgoals on the stack (in any order)
Repeat until the stack is empty:
    X := Pop the top of the stack
    IF X is a compound goal THEN
        push its subgoals which are unsatisfied in C on the stack
    ELSE IF X is a single goal FALSE in C THEN
        push an action Q that satisfies X
        push all preconditions of Q
    ELSE IF X is an action THEN
        execute X in current state C, changing to the new current state C using the action's effects; add X to plan Π
    ELSE IF X is a goal which is TRUE in current state C THEN
        do NOTHING



Goal Stack Planning algorithm

1. Push the goal onto the stack
⮚ If the top is a compound goal, push its subgoals onto the stack
⮚ If the top is a single unsatisfied goal, replace it by an action that achieves it
⮚ Push the action's preconditions
⮚ If the top is an action, pop it and add it to the plan
⮚ If the top is an already-satisfied goal, pop it; when the stack is empty, the plan is complete



Rules
R1: pickup(x)
    preconditions: armempty, OnTable(x), clear(x)
R2: putdown(x)
    preconditions: holding(x)
R3: stack(x, y)
    preconditions: holding(x), clear(y)
R4: unstack(x, y)
    preconditions: armempty, on(x,y), clear(x)



Continued...

Planning = generating a sequence of actions to achieve the goal from the start
Continued...
⮚Actions:
1. UNSTACK(A, B)
2. STACK(A, B)
3. PICKUP(A)
4. PUTDOWN(A)
Continued...
⮚Conditions and results:
1. ON(A, B)
2. ONTABLE(A)
3. CLEAR(A)
4. HOLDING(A)
5. ARMEMPTY
Hierarchical planning
Hierarchical decomposition is an idea that pervades almost all attempts to manage complexity.
For example, complex software is created from a hierarchy of subroutines or object classes; armies operate as a hierarchy of units; governments and corporations have hierarchies of departments, subsidiaries, and branch offices.
The key benefit of hierarchical structure is that, at each level of the hierarchy, a computational task, military mission, or administrative function is reduced to a small number of activities at the next lower level, so the computational cost of finding the correct way to arrange those activities for the current problem is small.



Hierarchical Planning

Principle
▪ hierarchical organization of 'actions'
▪ complex and less complex (or: abstract) actions
▪ lowest level reflects directly executable actions
Procedure
▪ planning starts with complex action on top
▪ plan constructed through action decomposition
▪ substitute complex action with plan of less complex actions (pre-defined plan schemata; or learning of plans/plan abstraction)
▪ overall plan must generate effect of complex action
Continued...

Hierarchical Planning / Plan Decomposition

Plans are organized in a hierarchy. Links between nodes at different levels in the hierarchy denote a decomposition of a “complex action” into more primitive actions (operator expansion).
Example (operator expansion):
move(x, y, z)  →  pickup(x, y), putdown(x, z)
The lowest level corresponds to executable actions of the agent.
Hierarchical Plan - Example

Travel(source, dest.)
  – alternative refinements: Take-Plane, Take-Bus, Take-Car
  – Take-Bus decomposes into: Goto(bus, source), Buy-Ticket(bus), Hop-on(bus), Leave(bus, dest.)
  – Buy-Ticket(bus) decomposes into: Goto(counter), Request(ticket), Pay(ticket)
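A small sketch of how such a decomposition can be represented and expanded: the method table below mirrors the travel example (task names are simplified stand-ins), and decompose() expands a task until only primitive actions remain.

```python
methods = {
    "Travel": ["Take-Bus"],                                       # one chosen refinement
    "Take-Bus": ["Goto-bus-stop", "Buy-Ticket", "Hop-on", "Leave"],
    "Buy-Ticket": ["Goto-counter", "Request-ticket", "Pay"],
}

def decompose(task):
    """Recursively replace complex actions by their sub-plans."""
    if task not in methods:            # primitive, directly executable action
        return [task]
    plan = []
    for subtask in methods[task]:
        plan.extend(decompose(subtask))
    return plan

print(decompose("Travel"))
# ['Goto-bus-stop', 'Goto-counter', 'Request-ticket', 'Pay', 'Hop-on', 'Leave']
```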


