10. Forward Chaining
FORWARD CHAINING
• The forward-chaining algorithm for propositional definite clauses is based on a simple idea.
• Start with the atomic sentences in the knowledge base and apply Modus Ponens
in the forward direction, adding new atomic sentences, until no further
inferences can be made.
• Definite clauses such as Situation ⇒ Response are especially useful for systems
that make inferences in response to newly arrived information.
• Many systems can be defined this way, and forward chaining can be implemented
very efficiently; a minimal sketch of the basic loop appears below.
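The following is a minimal Python sketch of this loop for propositional definite clauses, in the spirit of an agenda-based algorithm. The encoding of rules as (premise set, conclusion) pairs, the symbol names, and the function name are illustrative assumptions, not a fixed interface.

```python
from collections import deque

def forward_chain(facts, rules, query):
    """Agenda-based forward chaining for propositional definite clauses.

    facts: atomic symbols known to be true, e.g. {"A", "B"}.
    rules: list of (premises, conclusion) pairs, each standing for a
           definite clause  p1 ∧ ... ∧ pn ⇒ conclusion (premises is a set).
    query: the atomic symbol we want to prove.
    """
    # count[i] = number of premises of rule i not yet known to be true
    count = [len(premises) for premises, _ in rules]
    inferred = set()
    agenda = deque(facts)

    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        # Modus Ponens in the forward direction: a rule fires (its conclusion
        # is added to the agenda) once all of its premises have been inferred.
        for i, (premises, conclusion) in enumerate(rules):
            if p in premises:
                count[i] -= 1
                if count[i] == 0:
                    agenda.append(conclusion)
    return False  # no further inferences can be made; the query was never derived

# Illustrative knowledge base: P ⇒ Q, L ∧ M ⇒ P, B ∧ L ⇒ M, A ∧ P ⇒ L, A ∧ B ⇒ L, facts A, B.
rules = [({"P"}, "Q"), ({"L", "M"}, "P"), ({"B", "L"}, "M"),
         ({"A", "P"}, "L"), ({"A", "B"}, "L")]
print(forward_chain({"A", "B"}, rules, "Q"))  # True
```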
First-order definite clauses
• First-order definite clauses closely resemble propositional definite clauses:
they are disjunctions of literals of which exactly one is positive.
• Starting from the known facts, the forward-chaining algorithm triggers all the rules
whose premises are satisfied, adding their conclusions to the known facts.
• The process repeats until the query is answered (assuming that just one
answer is required) or no new facts are added; a simplified sketch of this loop
appears after this list.
• This simple procedure can be inefficient in several ways. First, the “inner loop”
of the algorithm involves finding all possible unifiers such that the premise of a
rule unifies with a suitable set of facts in the knowledge base; this pattern
matching can be very expensive.
• Second, the algorithm rechecks every rule on every iteration, even if very few
new facts were added on the previous iteration.
• Finally, the algorithm might generate many facts that are irrelevant to
the goal.
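As a concrete illustration, here is a naive first-order forward-chaining sketch in Python. It assumes a deliberately simplified representation: facts are ground atoms written as tuples such as ("Missile", "M1"), variables are all-lowercase strings such as "x", and rules are (premises, conclusion) pairs. The helper names unify_atom, substitute, and fol_forward_chain are illustrative, not from the text; the knowledge base is the crime example referred to later in these notes.

```python
def is_var(t):
    # Convention for this sketch: variables are all-lowercase strings like "x".
    return isinstance(t, str) and t.islower()

def unify_atom(pattern, fact, theta):
    """Try to extend substitution theta so that pattern matches the ground fact."""
    if pattern[0] != fact[0] or len(pattern) != len(fact):
        return None
    theta = dict(theta)
    for p, f in zip(pattern[1:], fact[1:]):
        if is_var(p):
            if p in theta and theta[p] != f:
                return None
            theta[p] = f
        elif p != f:
            return None
    return theta

def substitute(atom, theta):
    return (atom[0],) + tuple(theta.get(t, t) for t in atom[1:])

def fol_forward_chain(facts, rules, max_iters=100):
    """Naive first-order forward chaining: repeatedly trigger every rule whose
    premises can be satisfied by known facts, until no new facts are added."""
    facts = set(facts)
    for _ in range(max_iters):
        new = set()
        for premises, conclusion in rules:
            # "Inner loop": find every substitution satisfying all premises.
            thetas = [{}]
            for prem in premises:
                thetas = [t2 for t in thetas for f in facts
                          if (t2 := unify_atom(prem, f, t)) is not None]
            for theta in thetas:
                c = substitute(conclusion, theta)
                if c not in facts:
                    new.add(c)
        if not new:                  # no further inferences can be made
            return facts
        facts |= new
    return facts

# The crime example: the query Criminal(West) becomes derivable.
facts = {("American", "West"), ("Missile", "M1"),
         ("Owns", "Nono", "M1"), ("Enemy", "Nono", "America")}
rules = [
    ([("Missile", "x")], ("Weapon", "x")),
    ([("Missile", "x"), ("Owns", "Nono", "x")], ("Sells", "West", "x", "Nono")),
    ([("Enemy", "x", "America")], ("Hostile", "x")),
    ([("American", "x"), ("Weapon", "y"), ("Sells", "x", "y", "z"), ("Hostile", "z")],
     ("Criminal", "x")),
]
print(("Criminal", "West") in fol_forward_chain(facts, rules))  # True
```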
Efficient forward chaining
• We can address these sources of inefficiency in turn.
1. Matching rules against known facts
• Consider matching a premise such as Owns(Nono, x) ∧ Missile(x): we could find all
the objects owned by Nono and then check whether each one is a missile. If the
knowledge base contains many objects owned by Nono and very few missiles, however,
it would be better to find all the missiles first and then check whether they are
owned by Nono.
• This is the problem of conjunct ordering: choose an order for the conjuncts of the
rule premise that minimizes the cost of matching (a small cost comparison appears
after this list).
• Matching a rule premise against a set of facts can itself be hard: a constraint
satisfaction problem such as map coloring can be expressed as a single definite
clause whose conclusion is Colorable(). Clearly, the conclusion Colorable() can be
inferred only if the CSP has a solution, so rule matching is NP-hard in the worst case.
• On the positive side, algorithms for solving tree-structured CSPs can be applied
directly to the problem of rule matching, so rules with tree-structured premises can
be matched efficiently.
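Below is a small sketch of the effect of conjunct ordering, reusing unify_atom from the sketch above. It counts how many candidate facts the inner matching loop examines under the two orderings, for a hypothetical knowledge base in which Nono owns many objects but only one of them is a missile.

```python
def count_examined(premises, facts):
    """Return how many (partial match, fact) pairs the inner loop examines
    when the premises are matched in the given order (a rough cost measure)."""
    thetas, examined = [{}], 0
    for prem in premises:
        next_thetas = []
        for theta in thetas:
            for fact in facts:
                examined += 1
                t2 = unify_atom(prem, fact, theta)   # from the sketch above
                if t2 is not None:
                    next_thetas.append(t2)
        thetas = next_thetas
    return examined

# Hypothetical KB: Nono owns 1000 ordinary objects and one missile, M1.
facts = ({("Missile", "M1"), ("Owns", "Nono", "M1")}
         | {("Owns", "Nono", f"Obj{i}") for i in range(1000)})

owned_first    = [("Owns", "Nono", "x"), ("Missile", "x")]
missiles_first = [("Missile", "x"), ("Owns", "Nono", "x")]
print(count_examined(owned_first, facts))     # about a million candidate checks
print(count_examined(missiles_first, facts))  # about two thousand candidate checks
```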
Efficient forward chaining
2. Incremental forward chaining
• Incremental forward chaining is used to eliminate redundant rule-matching attempts
in the forward-chaining algorithm.
• For example, on the second iteration, the rule Missile(x) ⇒ Weapon(x) matches against
Missile(M1) (again), and of course the conclusion Weapon(M1) is already known, so nothing
happens.
• Such redundant rule matching can be avoided if we make the following observation:
every new fact inferred on iteration t must be derived from at least one new
fact inferred on iteration t - 1.
• This is true because any inference that does not require a new fact from iteration
t - 1 could already have been made at iteration t - 1 (a sketch of the resulting
incremental loop follows).
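Here is a sketch of that incremental variant, again reusing unify_atom and substitute from the earlier sketch. On each iteration a rule is considered only if at least one of its premises can unify with a fact newly inferred on the previous iteration. This rule-level filter is deliberately coarse (a fuller version would also pin that premise to the new fact when matching), but it illustrates the idea.

```python
def incremental_forward_chain(facts, rules, max_iters=100):
    """Incremental forward chaining: on iteration t, skip any rule that has
    no premise unifying with a fact newly inferred on iteration t - 1."""
    facts = set(facts)
    new = set(facts)                 # on the first iteration, every fact counts as new
    for _ in range(max_iters):
        delta = set()
        for premises, conclusion in rules:
            # Filter: by the observation above, this rule cannot yield anything
            # new unless one of its premises matches a newly inferred fact.
            if not any(unify_atom(p, f, {}) is not None
                       for p in premises for f in new):
                continue
            thetas = [{}]
            for prem in premises:
                thetas = [t2 for t in thetas for f in facts
                          if (t2 := unify_atom(prem, f, t)) is not None]
            for theta in thetas:
                c = substitute(conclusion, theta)
                if c not in facts:
                    delta.add(c)
        if not delta:
            return facts
        facts |= delta
        new = delta                  # only these facts are "new" on the next iteration
    return facts
```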
Efficient forward chaining
• Our crime example is rather too small to show this effectively, but notice
that a partial match is constructed on the first iteration between the rule
American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) ⇒ Criminal(x)
and the fact American(West).
• This partial match is then discarded and rebuilt on the second iteration
(when the rule succeeds).
• It would be better to retain such partial matches and complete them incrementally
as new facts arrive, rather than discarding them (a small bookkeeping sketch follows).
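The following sketch illustrates that alternative: retaining partial matches and extending them as new facts arrive, in the spirit of rete-style matching. It reuses unify_atom and substitute from the earlier sketch; the class name and the bookkeeping (one list of substitutions per premise prefix, with no duplicate suppression) are illustrative choices, not a standard implementation.

```python
class RuleMatcher:
    """Retains partial matches for one rule and extends them incrementally,
    instead of discarding and rebuilding them on every iteration."""

    def __init__(self, premises, conclusion):
        self.premises = premises
        self.conclusion = conclusion
        self.facts = set()
        # partial[i] = substitutions that satisfy the first i premises
        self.partial = [[] for _ in range(len(premises) + 1)]
        self.partial[0].append({})

    def add_fact(self, fact):
        """Incorporate one newly arrived fact; return conclusions it completes."""
        derived = set()
        self.facts.add(fact)
        for i, prem in enumerate(self.premises):
            # Extend every retained match of the first i premises with the new fact.
            frontier = [t2 for t in self.partial[i]
                        if (t2 := unify_atom(prem, fact, t)) is not None]
            self.partial[i + 1].extend(frontier)
            # Complete the extensions against the facts already known.
            for j in range(i + 1, len(self.premises)):
                frontier = [t2 for t in frontier for f in self.facts
                            if (t2 := unify_atom(self.premises[j], f, t)) is not None]
                self.partial[j + 1].extend(frontier)
            derived |= {substitute(self.conclusion, t) for t in frontier}
        return derived

# The partial match {x: West} built from American(West) is retained and extended
# as the remaining facts arrive, rather than discarded and rebuilt.
rule = RuleMatcher(
    [("American", "x"), ("Weapon", "y"), ("Sells", "x", "y", "z"), ("Hostile", "z")],
    ("Criminal", "x"))
for f in [("American", "West"), ("Weapon", "M1"),
          ("Sells", "West", "M1", "Nono"), ("Hostile", "Nono")]:
    print(f, "->", rule.add_fact(f))   # the last fact completes Criminal(West)
```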