Ch6: Knowledge Representation Using Rules
Slide 1
Procedural vs. Declarative Knowledge
Q Declarative representation
–Knowledge is specified, but the use is not given.
–Need a program that specifies what is to be done to the knowledge and how.
–Example:
• Logical assertions and a resolution theorem prover
–A different way: logical assertions can be viewed as a program, rather than as data to a program.
=> Logical assertions = procedural representation of knowledge
Slide 2
Procedural vs. Declarative Knowledge
Q Procedural representation
–The control information that is necessary to use the knowledge is considered to be embedded in the knowledge itself.
–Need an interpreter that follows the instructions given in the knowledge.
Slide 3
Procedural knowledge Declarative knowledge
Slide 6
Procedural vs. Declarative Knowledge
•This rule sets up a subgoal to find a man. Again the statements are examined from the beginning, and now Marcus is found to satisfy the subgoal and thus also the goal.
•So Marcus is reported as the answer.
•There is no clear-cut answer to whether declarative or procedural knowledge representation frameworks are better.
Slide 7
Logic Programming
•Logic Programming is a programming language paradigm in which logical assertions are viewed as programs.
•A PROLOG program is described as a series of logical assertions, each of which is a Horn clause.
Prolog program = {Horn clauses}
–Horn clause: a disjunction of literals of which at most one is a positive literal
p, ¬p ∨ q, and p → q are Horn clauses.
=> Prolog programs are decidable
–Control structure: Prolog interpreter = backward reasoning + depth-first search with backtracking
Slide 8
Logic Programming
Q Logic:
∀X: pet(X) ∧ small(X) → apartmentpet(X)
∀X: cat(X) ∨ dog(X) → pet(X)
∀X: poodle(X) → dog(X) ∧ small(X)
poodle(fluffy)
Q Prolog:
apartmentpet(X) :- pet(X), small(X).
pet(X) :- cat(X).
pet(X) :- dog(X).
dog(X) :- poodle(X).
small(X) :- poodle(X).
poodle(fluffy).
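The Prolog interpreter's control strategy on this program (backward reasoning, depth-first with backtracking) can be sketched in Python. This is a minimal illustration over ground goals only; the `RULES`/`FACTS` encoding and the `prove` function are hypothetical, and real Prolog additionally performs unification of variables.

```python
# Each goal predicate maps to alternative rule bodies (lists of subgoals);
# ground facts are stored separately as (predicate, argument) pairs.
RULES = {
    "apartmentpet": [["pet", "small"]],
    "pet":          [["cat"], ["dog"]],
    "dog":          [["poodle"]],
    "small":        [["poodle"]],
}
FACTS = {("poodle", "fluffy")}

def prove(pred, arg):
    """Backward chaining, depth-first: try facts first, then each rule body."""
    if (pred, arg) in FACTS:
        return True
    for body in RULES.get(pred, []):           # alternatives tried in order
        if all(prove(sub, arg) for sub in body):
            return True                        # first successful body proves the goal
    return False                               # every alternative failed: backtrack

print(prove("apartmentpet", "fluffy"))         # True
```

The query `apartmentpet(fluffy)` succeeds because `pet` fails via `cat` but succeeds via `dog` and `poodle(fluffy)`, mirroring Prolog's backtracking through alternative clauses.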
Slide 9
Logic Programming
Q Prolog vs. Logic
–Quantification is provided implicitly by the way the variables are interpreted.
• Variables: begin with an UPPERCASE letter
• Constants: begin with a lowercase letter or a number
–There is an explicit symbol for AND (,), but there is none for OR. Instead, a disjunction must be represented as a list of alternative statements.
–"p implies q" is written as q :- p.
Slide 10
Logic Programming
Slide 11
Forward vs. Backward Reasoning
•Reason backward from the goal states: Begin building a tree of move sequences that might be solutions by starting with the goal configurations at the root of the tree. Generate the next level of the tree by finding all the rules whose right sides match the root node. Use the left sides of the rules to generate the nodes at this second level of the tree. Generate the next level of the tree by taking each node at the previous level and finding all the rules whose right sides match it. Then use the corresponding left sides to generate the new nodes. Continue until a node that matches the initial state is generated. This method of reasoning backward from the desired final state is often called goal-directed reasoning.
Slide 12
Forward vs. Backward Reasoning
Q Forward: from the start states.
Q Backward: from the goal states.
•Reason forward from the initial states: Begin building a tree of move sequences that might be solutions by starting with the initial configurations at the root of the tree. Generate the next level of the tree by finding all the rules whose left sides match the root node and using their right sides to create the new configurations. Continue until a configuration that matches the goal state is generated.
•Reason backward from the goal states: Begin building a tree of move sequences that might be solutions by starting with the goal configurations at the root of the tree. Generate the next level of the tree by finding all the rules whose right sides match the root node.
Slide 13
Forward vs. Backward Reasoning
Q Four factors influence whether to reason forward or backward:
–Move from the smaller set of states to the larger set of states
–Proceed in the direction with the lower branching factor
–Proceed in the direction that corresponds more closely with the way the user will think
–Proceed in the direction that corresponds more closely with the way the problem-solving episodes will be triggered
Slide 14
Forward vs. Backward Reasoning
Q To encode the knowledge for reasoning, we need 2 kinds of rules:
– Forward rules: to encode knowledge about how to respond to certain input.
– Backward rules: to encode knowledge about how to achieve particular goals.
Slide 15
KR Using rules
IF .. THEN rules
ECA (Event Condition Action) rules
APPLICATIONS
EXAMPLES
1. If flammable liquid was spilled, call the fire department.
2. If the pH of the spill is less than 6, the spill material is an acid.
3. If the spill material is an acid, and the spill smells like vinegar, the spill material is acetic acid.
(→ arrows are used to represent rules in the figures)
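The match-execute cycle over these three spill rules can be sketched as a simple forward-chaining loop. The string encodings of facts and rules below are illustrative only, not a real ECA rule engine.

```python
# The three spill rules encoded as (antecedents, consequent) pairs.
rules = [
    ({"flammable liquid spilled"},                      "fire department called"),
    ({"pH of spill < 6"},                               "spill is an acid"),
    ({"spill is an acid", "spill smells like vinegar"}, "spill is acetic acid"),
]
facts = {"pH of spill < 6", "spill smells like vinegar"}

changed = True
while changed:                                  # repeat the match-execute cycle
    changed = False
    for antecedents, consequent in rules:
        if antecedents <= facts and consequent not in facts:
            facts.add(consequent)               # a new fact may match later rules
            changed = True

print(sorted(facts))
```

On the first cycle rule 2 fires, adding "spill is an acid"; on the second cycle that new fact lets rule 3 fire, exactly the behavior shown in Fig. 3 and Fig. 6.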
[Figure: the match-execute cycle — FACTS are MATCHed against RULES, which then EXECUTE]
Fig. 3: Facts added by rules can match rules — a new fact added to the KB by an executed rule can match further rules.
Fig. 4: Rule execution can affect the real world — e.g., executing rule 1 means the fire dept is called.
Fig. 6: An example of forward chaining — "The pH of the spill is < 6" leads, via rule 2, to "The spill material is an acid", which together with "Spill smells like vinegar" leads, via rule 3, to "The spill material is acetic acid".
Fig. 7: Inference chain produced by Fig. 6.
[Figure: a step-by-step trace (steps 1–8) of the FACTS set growing as forward chaining fires rules]
Slide 26
Matching
– RETE gains efficiency from three major sources:
– The temporal nature of data: rules usually do not alter the state description radically. Instead, a rule will add one or two elements, or delete one or two elements, but otherwise the state remains the same. RETE maintains a network of rule conditions, and it uses changes in the state description to determine which new rules might apply.
– Structural similarity in rules: e.g., one rule concludes jaguar(x) if mammal(x), feline(x), carnivorous(x), and has-spots(x). Another rule concludes tiger(x) and is identical to the first rule except that it replaces has-spots with has-stripes. If the two rules are matched independently, a lot of work is repeated unnecessarily. RETE stores rules so that they share structures in memory; sets of conditions that appear in several rules are matched once per cycle.
Slide 27
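The first two sources of efficiency above can be illustrated with a toy sketch: rules are indexed by the conditions they mention, so adding a fact re-examines only the rules affected by that change, and the shared conditions of the jaguar/tiger rules appear once in the index. The encoding is hypothetical and omits RETE's actual network of alpha/beta nodes.

```python
from collections import defaultdict

# The two structurally similar rules from the text, as condition lists.
rules = {
    "jaguar": ["mammal", "feline", "carnivorous", "has-spots"],
    "tiger":  ["mammal", "feline", "carnivorous", "has-stripes"],
}

# Index each condition to the rules that test it; the three shared
# conditions each point to both rules, so they are stored once.
index = defaultdict(list)
for head, conds in rules.items():
    for c in conds:
        index[c].append(head)

facts = set()

def add_fact(fact):
    """Re-examine only the rules mentioning the new fact, not the whole rule base."""
    facts.add(fact)
    return [h for h in index[fact] if all(c in facts for c in rules[h])]

for f in ["mammal", "feline", "carnivorous"]:
    add_fact(f)
print(add_fact("has-spots"))   # ['jaguar']
```

Adding "has-spots" consults only the single rule indexed under that condition, rather than re-matching every rule against every fact each cycle.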
Matching
– Persistence of variable binding consistency: while all the individual preconditions of a rule might be met, there may be variable binding conflicts that prevent the rule from firing.
Given son(Mary, Joe) and son(Bill, Bob), the individual preconditions of the rule
son(x, y) ∧ son(y, z) → grandparent(x, z)
can be matched, but not in a manner that satisfies the constraint imposed by the variable y.
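The binding conflict can be demonstrated with a minimal check; `grandparent_matches` is a hypothetical helper written for this illustration, not part of any rule engine.

```python
from itertools import product

# son(Mary, Joe) and son(Bill, Bob) match the two preconditions of
#   son(x, y) AND son(y, z) -> grandparent(x, z)
# individually, but no pairing keeps y consistent, so the rule cannot fire.
son_facts = [("Mary", "Joe"), ("Bill", "Bob")]

def grandparent_matches(facts):
    """Return (x, z) pairs for which some y satisfies son(x, y) and son(y, z)."""
    return [(x, z) for (x, y1), (y2, z) in product(facts, facts) if y1 == y2]

print(grandparent_matches(son_facts))                     # []  (y conflicts)
print(grandparent_matches(son_facts + [("Joe", "Ann")]))  # [('Mary', 'Ann')]
```

Only after a fact like son(Joe, Ann) is added does a consistent binding for y exist, letting the rule fire.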
Slide 28
Matching
Q Approximate matching
–Rules should be applied if their preconditions approximately match the current situation
–Example: a speech-understanding program
• Rules: map a description of a physical waveform to phones (a, e, …)
• Physical signal: differences in the way individuals speak, the result of background noise, …
Slide 29
Matching
ELIZA (Weizenbaum, 1966):
Person: Men are all alike.
ELIZA: In what way?
Person: They're always bugging us about something or other.
ELIZA: Can you think of a specific example?
Person: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here.
Person: He says I'm depressed much of the time.
ELIZA: I am sorry to hear you are depressed.
Slide 30
Matching
ELIZA (Weizenbaum, 1966):
Person: It's true. I am unhappy.
ELIZA: Do you think coming here will help you not to be unhappy?
Person: I need some help, that much seems certain.
ELIZA: What would it mean to you if you got some help?
Person: Perhaps I could learn to get along with my brother.
ELIZA: Tell me more about your family.
....
Slide 31
Matching
ELIZA (Weizenbaum, 1966):
(X me Y) → (X you Y)
(I remember X) → (Why do you remember X just now?)
(My {family-member} is Y) → (Who else in your family is Y?)
(X {family-member} Y) → (Tell me more about your family)
....
Person: My brother is mean to me.
ELIZA: Who else in your family is mean to you?
....
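Transformation rules of this kind can be sketched with regular expressions: each pattern captures segments and splices them into a response template. The patterns, templates, and pronoun swap below are simplified illustrations in the style of the 1966 program, not its actual rule set.

```python
import re

# ELIZA-style (pattern -> template) rules; the last rule is a catch-all.
RULES = [
    (r"I remember (.*)",                        "Why do you remember {0} just now?"),
    (r"My (brother|sister|father|mother) (.*)", "Who else in your family {1}?"),
    (r"(.*)",                                   "Tell me more."),
]

def swap_pronouns(text):
    """Crude first-to-second person mapping, as in the (X me Y) -> (X you Y) rule."""
    return re.sub(r"\bmy\b", "your", re.sub(r"\bme\b", "you", text))

def respond(sentence):
    for pattern, template in RULES:
        m = re.fullmatch(pattern, sentence.rstrip("."))
        if m:
            return template.format(*map(swap_pronouns, m.groups()))

print(respond("My brother is mean to me."))   # Who else in your family is mean to you?
```

Note that the rules are tried in order and the captured segment Y is echoed back with pronouns swapped, reproducing the exchange shown above.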
Slide 32
Matching
Conflict resolution:
The result of the matching process is a list of rules whose antecedents are satisfied; conflict resolution decides which of these rules to fire.
–Preferences based on rules:
• Specificity of rules
• Physical order of rules
–Preferences based on objects:
• Importance of objects
• Position of objects
–Preferences based on action:
• Evaluation of states
Slide 33
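The rule-based preferences can be sketched with a specificity strategy over the spill example: among the rules whose antecedents are all satisfied, the one with the most antecedents fires. The rule encodings and the "neutralize" action are invented for illustration.

```python
# Conflict resolution by specificity (illustrative encoding): the rule
# with more antecedents is preferred; Python's max keeps the first
# maximum, so physical order of rules serves as the tie-breaker.
rules = [
    ({"spill is an acid"},                              "neutralize with a base"),
    ({"spill is an acid", "spill smells like vinegar"}, "treat as acetic acid"),
]
facts = {"spill is an acid", "spill smells like vinegar"}

# MATCH: collect every rule whose antecedents are all satisfied.
conflict_set = [(conds, action) for conds, action in rules if conds <= facts]

# RESOLVE: pick the most specific rule from the conflict set.
_, chosen = max(conflict_set, key=lambda r: len(r[0]))
print(chosen)   # treat as acetic acid
```

Both rules match, but the more specific acetic-acid rule wins, which is usually the desired behavior since it uses more of the available evidence.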
Control Knowledge
Knowledge about which paths are most likely to lead quickly to a goal state is often called search control knowledge.
– Which states are more preferable to others.
– Which rule to apply in a given situation.
– The order in which to pursue subgoals.
– Useful sequences of rules to apply.
Slide 34
Control Knowledge
There are a number of AI systems that represent their control knowledge with rules, e.g. SOAR and PRODIGY.
SOAR is a general architecture for building intelligent systems.
Slide 35
Control Knowledge
PRODIGY is a general-purpose problem-solving system that incorporates several different learning mechanisms.
It can acquire control rules in a number of ways:
– Through hand coding by programmers.
– Through a static analysis of the domain's operators.
– Through looking at traces of its own problem-solving behavior.
PRODIGY learns control rules from its experience, but unlike SOAR it learns from its failures.
If PRODIGY pursues an unfruitful path, it will try to come up with an explanation of why that path failed. It will then use that explanation to build control knowledge that will help it avoid fruitless search paths in the future.
Slide 36
Control Knowledge
Two issues concerning control rules:
• The first issue is called the utility problem. As we add more and more control knowledge to a system, the system is able to search more judiciously, but if there are many control rules, simply matching them all can be very time-consuming.
• The second issue concerns the complexity of the production system interpreter.
Slide 37