10. Forward Chaining

FORWARD CHAINING

FORWARD CHAINING
• The forward-chaining algorithm for propositional definite clauses is based on a simple idea.

• Start with the atomic sentences in the knowledge base and apply Modus Ponens
in the forward direction, adding new atomic sentences, until no further
inferences can be made (see the sketch at the end of this slide).

• Here, we apply the same idea to first-order definite clauses.

• Definite clauses such as Situation ⇒ Response are especially useful for systems
that make inferences in response to newly arrived information.

• Many systems can be defined this way, and forward chaining can be implemented
very efficiently.
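A minimal Python sketch of the propositional procedure described above (the symbols P, Q, R, S and the rules are invented purely for illustration):

def forward_chain(facts, rules):
    """Apply Modus Ponens in the forward direction until nothing new is added."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)    # all premises known, so add the conclusion
                changed = True
    return known

# Illustrative rules: P ∧ Q ⇒ R and R ⇒ S
rules = [(["P", "Q"], "R"), (["R"], "S")]
print(forward_chain({"P", "Q"}, rules))  # {'P', 'Q', 'R', 'S'}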
First-order definite clauses
• First-order definite clauses closely resemble propositional definite clauses:
they are disjunctions of literals of which exactly one is positive.

• A definite clause either is atomic or is an implication whose antecedent is a conjunction of positive literals and whose consequent is a single positive literal.

• The following are first-order definite clauses:
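For instance (standard textbook examples, shown here as a reconstruction):

King(x) ∧ Greedy(x) ⇒ Evil(x)
King(John)
Greedy(y)

The first is an implication with a conjunctive antecedent and a single positive consequent; the last two are atomic sentences.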


First-order definite clauses
Consider the following problem:
• The law says that it is a crime for an American to sell weapons to
hostile nations. The country Nono, an enemy of America, has some
missiles, and all of its missiles were sold to it by Colonel West, who is
American.

• We will prove that Colonel West is a criminal.

• First, we will represent these facts as first-order definite clauses.

• Next, we will see how the forward-chaining algorithm solves the problem.


First-order definite clauses
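One standard formulation of these facts as first-order definite clauses (a reconstruction; M1 is assumed to name one of Nono's missiles):

"... it is a crime for an American to sell weapons to hostile nations":
American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) ⇒ Criminal(x)

"Nono ... has some missiles":
Owns(Nono, M1) and Missile(M1)

"All of its missiles were sold to it by Colonel West":
Missile(x) ∧ Owns(Nono, x) ⇒ Sells(West, x, Nono)

We also need to know that missiles are weapons:
Missile(x) ⇒ Weapon(x)

and that an enemy of America counts as "hostile":
Enemy(x, America) ⇒ Hostile(x)

"West, who is American":
American(West)

"The country Nono, an enemy of America":
Enemy(Nono, America)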
A simple forward-chaining algorithm
• The first forward-chaining algorithm is simple.

• Starting from the known facts, it triggers all the rules whose premises
are satisfied, adding their conclusions to the known facts.

• The process repeats until the query is answered (assuming that just one
answer is required) or no new facts are added.

• Notice that a fact is not “new” if it is just a renaming of a known fact.


A simple forward-chaining algorithm
• One sentence is a renaming of another if they are identical except for the
names of the variables.

• For example, Likes(x, IceCream) and Likes(y, IceCream) are renamings of each other because they differ only in the choice of x or y; their meanings are identical: everyone likes ice cream.
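A small Python sketch of this renaming test (the tuple representation of atoms and the lower-case-variable convention are assumptions of the sketch):

def is_variable(term):
    # Convention assumed for this sketch: variables are lower-case strings.
    return isinstance(term, str) and term[:1].islower()

def is_renaming(atom1, atom2):
    """True if the atoms (tuples like ('Likes', 'x', 'IceCream')) are identical
    except for a one-to-one correspondence between their variable names."""
    if len(atom1) != len(atom2) or atom1[0] != atom2[0]:
        return False
    forward, backward = {}, {}
    for t1, t2 in zip(atom1[1:], atom2[1:]):
        if is_variable(t1) and is_variable(t2):
            if forward.setdefault(t1, t2) != t2 or backward.setdefault(t2, t1) != t1:
                return False         # inconsistent variable correspondence
        elif t1 != t2:
            return False             # constants must match exactly
    return True

print(is_renaming(('Likes', 'x', 'IceCream'), ('Likes', 'y', 'IceCream')))  # True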
A simple forward-chaining algorithm
• We use our crime problem to illustrate how FOL-FC-ASK works.
• The implication sentences are (9.3), (9.6), (9.7), and (9.8).
• Two iterations are required:
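Assuming the reconstructed knowledge base above (with M1 as Nono's missile), the two iterations proceed roughly as follows:

• Iteration 1: Missile(x) ⇒ Weapon(x) adds Weapon(M1); Missile(x) ∧ Owns(Nono, x) ⇒ Sells(West, x, Nono) adds Sells(West, M1, Nono); Enemy(x, America) ⇒ Hostile(x) adds Hostile(Nono).

• Iteration 2: the premises of American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) ⇒ Criminal(x) are all satisfied with {x/West, y/M1, z/Nono}, adding Criminal(West), which answers the query.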
A simple forward-chaining algorithm
• The proof tree generated by forward chaining on the crime example.
• The initial facts appear at the bottom level, facts inferred on the first iteration in the middle
level, and facts inferred on the second iteration at the top level.
Efficient forward chaining
• The forward-chaining algorithm has three possible sources of
inefficiency.

• First, the “inner loop” of the algorithm involves finding all possible
unifiers such that the premise of a rule unifies with a suitable set of facts
in the knowledge base.

• This is often called pattern matching and can be very expensive.


Efficient forward chaining
• Second, the algorithm rechecks every rule on every iteration to see
whether its premises are satisfied, even if very few additions are made to
the knowledge base on each iteration.

• Finally, the algorithm might generate many facts that are irrelevant to
the goal.
Efficient forward chaining
• We can address these issues in three ways:

1. Matching rules against known facts


• The problem of matching the premise of a rule against the facts in the
knowledge base might seem simple enough.
• For example, suppose we want to apply the rule
Missile(x) ⇒ Weapon(x)
• Then we need to find all the facts that unify with Missile(x); in a suitably
indexed knowledge base, this can be done in constant time per fact.
• Now consider a rule such as
Missile(x) ∧ Owns(Nono, x) ⇒ Sells(West,x,Nono) .
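A minimal Python sketch of such predicate indexing (the IndexedKB class and the facts fed to it are illustrative, not from the slides):

from collections import defaultdict

class IndexedKB:
    """Ground facts bucketed by predicate name, so all candidates for a premise
    such as Missile(x) can be retrieved in constant time per fact."""
    def __init__(self):
        self.index = defaultdict(list)   # predicate name -> list of ground facts

    def tell(self, fact):                # a fact is a tuple, e.g. ('Missile', 'M1')
        self.index[fact[0]].append(fact)

    def candidates(self, predicate):     # facts that might unify with a premise
        return self.index[predicate]

kb = IndexedKB()
kb.tell(('Missile', 'M1'))
kb.tell(('Owns', 'Nono', 'M1'))
print(kb.candidates('Missile'))          # [('Missile', 'M1')]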
Efficient forward chaining
• Again, we can find all the objects owned by Nono in constant time per
object;

• Then, for each object, we could check whether it is a missile.

• If the knowledge base contains many objects owned by Nono and very
few missiles, however, it would be better to find all the missiles first and
then check whether they are owned by Nono.

• This is the conjunct ordering problem.


Efficient forward chaining
• The goal is to find an ordering of the conjuncts in the rule premise so that the total cost of matching is minimized.

• It turns out that finding the optimal ordering is NP-hard, but good heuristics are available.

• For example, the minimum-remaining-values (MRV) heuristic would suggest ordering the conjuncts so as to look for missiles first if fewer missiles than objects are owned by Nono (see the sketch below).
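A greedy, MRV-style conjunct ordering can be sketched as follows (the counts are illustrative; true MRV would re-evaluate the counts as variables become bound):

def order_conjuncts(premises, count_facts):
    """Greedy ordering: conjuncts whose predicate has the fewest candidate
    facts are matched first (an approximation of the MRV heuristic)."""
    return sorted(premises, key=lambda premise: count_facts(premise[0]))

counts = {'Missile': 1, 'Owns': 50}      # few missiles, many objects owned by Nono
premises = [('Owns', 'Nono', 'x'), ('Missile', 'x')]
print(order_conjuncts(premises, counts.get))
# [('Missile', 'x'), ('Owns', 'Nono', 'x')]: missiles are looked for first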
Efficient forward chaining

(a) Constraint graph for coloring the map of Australia.
(b) The map-coloring CSP expressed as a single definite clause. Each map region is represented as a variable whose value can be one of the constants Red, Green, or Blue.
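A standard reconstruction of the clause in Figure (b) is:

Diff(wa, nt) ∧ Diff(wa, sa) ∧ Diff(nt, q) ∧ Diff(nt, sa) ∧ Diff(q, nsw) ∧ Diff(q, sa) ∧ Diff(nsw, v) ∧ Diff(nsw, sa) ∧ Diff(v, sa) ⇒ Colorable()

together with the ground facts
Diff(Red, Blue), Diff(Red, Green), Diff(Green, Red), Diff(Green, Blue), Diff(Blue, Red), Diff(Blue, Green).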
Efficient forward chaining
• The connection between pattern matching and constraint satisfaction is
actually very close.

• We can view each conjunct as a constraint on the variables that it contains; for example, Missile(x) is a unary constraint on x.

• Extending this idea, we can express every finite-domain CSP as a single definite clause together with some associated ground facts.
Efficient forward chaining
• Consider the map-coloring problem shown in Figure (a).

• An equivalent formulation as a single definite clause is given in Figure (b).

• Clearly, the conclusion Colorable() can be inferred only if the CSP has a solution.

• For example, if we delete South Australia from the map, the resulting clause is

Diff(wa, nt) ∧ Diff(nt, q) ∧ Diff(q, nsw) ∧ Diff(nsw, v) ⇒ Colorable()

which corresponds to the reduced CSP.

• Algorithms for solving tree-structured CSPs can be applied directly to the problem of
rule matching.
Efficient forward chaining
2. Incremental forward chaining
• Incremental forward chaining is used to eliminate redundant rule-matching attempts
in the forward-chaining algorithm.

• For example, on the second iteration, the rule Missile(x) ⇒ Weapon(x) matches against
Missile(M1) (again), and of course the conclusion Weapon(M1) is already known, so nothing
happens.

• Such redundant rule matching can be avoided if we make the following observation:
Every new fact inferred on iteration t must be derived from at least one new
fact inferred on iteration t - 1.

This is true because any inference that does not require a new fact from iteration t - 1 could have been done at iteration t - 1 already.
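A propositional Python sketch of this observation, in which a rule is re-checked only if at least one of its premises was inferred on the previous iteration (rule and fact names are illustrative):

def incremental_forward_chain(facts, rules):
    """Forward chaining that re-checks a rule only when one of its premises
    was newly inferred on the previous iteration."""
    known = set(facts)
    new = set(facts)                     # on the first pass, every fact counts as new
    while new:
        added = set()
        for premises, conclusion in rules:
            if conclusion in known or not any(p in new for p in premises):
                continue                 # rule cannot have become applicable
            if all(p in known for p in premises):
                added.add(conclusion)
        known |= added
        new = added
    return known

rules = [(["P", "Q"], "R"), (["R"], "S")]
print(incremental_forward_chain({"P", "Q"}, rules))  # {'P', 'Q', 'R', 'S'}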
Efficient forward chaining
• Our crime example is rather too small to show this effectively, but notice
that a partial match is constructed on the first iteration between the rule
American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) ⇒ Criminal (x)
and the fact American(West).

• This partial match is then discarded and rebuilt on the second iteration
(when the rule succeeds).

• It would be better to retain and gradually complete the partial matches as new facts arrive, rather than discarding them.
Efficient forward chaining
3. Irrelevant facts
• The final source of inefficiency in forward chaining appears to be
intrinsic to the approach and also arises in the propositional context.

• Forward chaining makes all allowable inferences based on the known facts, even if they are irrelevant to the goal at hand.

• In our crime example, there were no rules capable of drawing irrelevant conclusions, so the lack of directedness was not a problem.
Efficient forward chaining

• In other cases (e.g., if many rules describe the eating habits of Americans and the prices of missiles), FOL-FC-ASK will generate many irrelevant conclusions.

• The ways to avoid drawing irrelevant conclusions are to restrict forward chaining to a selected subset of rules, or to use backward chaining.
