Module 3
Knowledge Representation
Knowledge-based agents, Agents based on Propositional Logic, First-order logic.
INSTRUCTIONAL OBJECTIVE
At the completion of this lecture, students will be able to describe knowledge-based agents, agents based on propositional logic, and first-order logic.
Knowledge-based agents
From these tables, the truth value of any sentence a can be computed with respect to any
model m by a simple recursive evaluation.
For example, the sentence ¬P1,2 ∧ (P2,2 ∨ P3,1), evaluated in m1, gives true ∧ (false ∨ true) = true ∧ true = true.
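The recursive evaluation just described can be sketched in code. This is a minimal illustration; the tuple-based sentence representation is an assumption of the sketch, not something defined in the text:

```python
# A minimal sketch of recursive truth evaluation. Sentences are assumed to be
# nested tuples: ("not", s), ("and", s1, s2), ("or", s1, s2), or a symbol name.
def pl_true(sentence, model):
    """Return the truth value of a propositional sentence in a model
    (a dict mapping symbol names to True/False)."""
    if isinstance(sentence, str):          # proposition symbol
        return model[sentence]
    op, *args = sentence
    if op == "not":
        return not pl_true(args[0], model)
    if op == "and":
        return all(pl_true(a, model) for a in args)
    if op == "or":
        return any(pl_true(a, model) for a in args)
    raise ValueError(f"unknown operator {op!r}")

# The sentence ¬P12 ∧ (P22 ∨ P31), evaluated in a model m1 that makes
# P12 false, P22 false, and P31 true:
m1 = {"P12": False, "P22": False, "P31": True}
s = ("and", ("not", "P12"), ("or", "P22", "P31"))
print(pl_true(s, m1))  # True
```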
Propositional Logic - Semantics
Propositional logic does not require any relation of causation or relevance between P and
Q.
The sentence "5 is odd implies Tokyo is the capital of Japan" is a true sentence of
propositional logic even though it is a decidedly odd sentence of English.
Another point of confusion is that any implication is true whenever its antecedent is false.
For example, "5 is even implies Sam is smart" is true, regardless of whether Sam is smart.
"P = Q" as saying, "If P is true, then I am claiming that Q is true.
The only way for this sentence to be false is if P is true but Q is false.
The biconditional, P ⇔ Q, is true whenever both P ⇒ Q and Q ⇒ P are true. In English, this is often written as "P if and only if Q."
For example, a square is breezy if a neighboring square has a pit, and a square is breezy
only if a neighboring square has a pit. So we need a biconditional,
B1,1 ⇔ (P1,2 ∨ P2,1), where B1,1 means that there is a breeze in [1,1].
A Simple Knowledge base
Construct a knowledge base for the wumpus world by first focusing on its immutable aspects, leaving the mutable aspects for later.
For now, we need the following symbols for each [x, y] location:
Px,y is true if there is a pit in [x, y].
Wx,y is true if there is a wumpus in [x, y], dead or alive.
Bx,y is true if the agent perceives a breeze in [x, y].
Sx,y is true if the agent perceives a stench in [x, y].
The following sentences are sufficient to derive ¬P1,2 (there is no pit in [1,2]). We label each sentence R:
• There is no pit in [1, 1]:
R1: ¬P1,1
• A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square; for now, we
include just the relevant squares:
R2: B1,1 ⇔ (P1,2 ∨ P2,1)
R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
• The preceding sentences are true in all Wumpus worlds. Now we include the breeze percepts for the first two squares
visited in the specific world the agent is in, leading up to the situation in Figure 7.3(b).
R4: ¬B1,1
R5 : B2,1
A Simple Inference Procedure
The goal is to decide whether KB ╞ α for some sentence α.
For example, is P1,2 entailed by the KB?
The first algorithm for inference is a model-checking approach that is a direct
implementation of the definition of entailment: enumerate the models, and check that α is
true in every model in which KB is true.
Models are assignments of true or false to every proposition symbol.
Consider the wumpus-world example, where the relevant proposition symbols are B1,1,
B2,1, P1,1,P1,2, P2,1, P2,2 and P3,1.
With seven symbols, there are 2^7 = 128 possible models; in three of these, the KB is true (Figure 7.9).
In all three of those models, ¬P1,2 is true; hence there is no pit in [1,2]. On the other hand, P2,2 is true in two of the three models and false in one, so we cannot yet tell whether there is a pit in [2,2].
Figure 7.9 represents a more precise form for the reasoning illustrated in Figure 7.5.
The KB is true exactly when R1 through R5 are all true, i.e., when R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5 holds, which occurs in 3 of the 128 rows.
A general algorithm for deciding entailment in propositional logic is shown in Figure 7.10.
The algorithm is sound because it implements directly the definition of entailment, and complete because it
works for any KB and α and always terminates—there are only finitely many models to examine.
If KB and α contain n symbols, then there are 2^n models. Thus, the time complexity of the algorithm is O(2^n).
The space complexity is only O(n) because the enumeration is depth-first.
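The enumeration algorithm can be sketched as follows (a brute-force illustration of TT-ENTAILS?; the lambda-based sentence encoding is an assumption of this sketch):

```python
from itertools import product

# A model-checking sketch of entailment: enumerate all 2^n assignments
# to the n symbols and check that alpha holds wherever the KB holds.
def tt_entails(kb, alpha, symbols):
    """Return True if alpha is true in every model that satisfies kb.
    Sentences are functions from a model (dict) to True/False."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False
    return True

# Tiny wumpus fragment: R1 = ¬P11, R2 = B11 ⇔ (P12 ∨ P21), R4 = ¬B11.
kb = lambda m: (not m["P11"]) and (m["B11"] == (m["P12"] or m["P21"])) and (not m["B11"])
alpha = lambda m: not m["P12"]            # is there no pit in [1,2]?
print(tt_entails(kb, alpha, ["P11", "P12", "P21", "B11"]))  # True
```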
Propositional Inference
Propositional Theorem Proving
Entailment can be done by theorem proving by applying rules of inference directly to the
sentences in the knowledge base to construct a proof of the desired sentence without
constructing models.
If the number of models is large, then theorem proving can be more efficient than model
checking.
Several concepts related to entailment are useful. The first is logical equivalence: two sentences α and β are logically equivalent if they have the same truth value in all possible models. This relationship is denoted by the symbol "≡" and written α ≡ β.
In other words, two propositions are logically equivalent if they are always true or always
false together.
For example, the propositions "P ∧ Q" and "Q ∧ P" are logically equivalent because they
have the same truth value for all possible values of P and Q. Similarly, "¬(P ∨ Q)" and "(¬P)
∧ (¬Q)" are logically equivalent.
Two sentences α and β are equivalent if and only if each of them entails the other: α ≡ β if and only if α ╞ β and β ╞ α.
The second concept is validity.
A sentence is valid if it is true in all models; e.g., P ∨ ¬P is valid.
Valid sentences are also known as tautologies—they are necessarily true. Because the sentence True is true in
all models, every valid sentence is logically equivalent to True.
Validity is connected to inference via the Deduction Theorem:
For any sentences α and β, α╞ β if and only if α →β is valid.
The final concept is satisfiability. A sentence is satisfiable if it is true in, or satisfied by, some model.
For example, the knowledge base given earlier, (R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5), is satisfiable because there are three models in which it is true.
Satisfiability can be checked by enumerating the possible models until one is found that satisfies the sentence.
The problem of determining the satisfiability of sentences in propositional logic is the SAT problem, which was the first problem proved to be NP-complete.
Validity and satisfiability are connected: α is valid iff ¬α is unsatisfiable; contrapositively, α is satisfiable iff ¬α is not valid. We also have the following useful result:
α ╞ β if and only if the sentence (α ∧ ¬β) is unsatisfiable.
Checking the unsatisfiability of (α ∧ ¬β) corresponds exactly to the mathematical proof technique of reductio ad absurdum ("reduction to an absurd thing"). It is also called proof by refutation or proof by contradiction.
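The connection between entailment and unsatisfiability can be checked by brute-force enumeration; a minimal sketch, with sentences encoded as Boolean functions (an assumption of the sketch, not part of the text):

```python
from itertools import product

# Brute-force check of the refutation identity:
# alpha entails beta  iff  (alpha ∧ ¬beta) is unsatisfiable.
def satisfiable(sentence, symbols):
    """True if some assignment to the symbols makes the sentence true."""
    return any(sentence(dict(zip(symbols, vals)))
               for vals in product([True, False], repeat=len(symbols)))

def entails(alpha, beta, symbols):
    # alpha |= beta iff (alpha and not beta) has no satisfying model
    return not satisfiable(lambda m: alpha(m) and not beta(m), symbols)

syms = ["P", "Q"]
alpha = lambda m: m["P"] and m["Q"]
beta = lambda m: m["P"]
print(entails(alpha, beta, syms))                           # True: (P ∧ Q) |= P
print(satisfiable(lambda m: m["P"] and not m["P"], ["P"]))  # False: P ∧ ¬P
```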
Inference and Proofs
Inference rules can be applied to derive a proof, a chain of conclusions that leads to the desired goal. The best-known rule is called Modus Ponens and is written
(α ⇒ β, α) ⊢ β:
if α and α ⇒ β are both given, then β can be inferred.
With R8 and the percept R4, we can now apply Modus Ponens to get R9: ¬(P1,2 ∨ P2,1).
We can now infer the absence of pits in [2,2] and [1,3] (remember that [1,1] is already known to be pitless) using the same approach that led to R10 (¬P1,2 ∧ ¬P2,1, obtained from R9 by De Morgan's rule) earlier.
To obtain the fact that there is a pit in [1,1], [2,2], or [3,1], apply biconditional elimination to R3, followed by Modus Ponens with R5, yielding R15: P1,1 ∨ P2,2 ∨ P3,1.
Now the resolution rule can be applied for the first time: the literal ¬P2,2 in R13 resolves with the literal P2,2 in R15, yielding the resolvent R16: P1,1 ∨ P3,1.
In English: if a pit exists in one of [1,1], [2,2], or [3,1], and it is not in [2,2], then it is in [1,1] or [3,1]. Similarly, the literal ¬P1,1 in R1 resolves with the literal P1,1 in R16 to give R17: P3,1.
Proof by Resolution
In English: if a pit exists in either [1,1] or [3,1], and not in [1,1], then it is in [3,1]. These last two inference steps are examples of the unit resolution inference rule,
(l1 ∨ ... ∨ lk), m ⊢ l1 ∨ ... ∨ l(i-1) ∨ l(i+1) ∨ ... ∨ lk,
where each l is a literal and li and m are complementary literals (one is the negation of the other).
Thus, the unit resolution rule takes a clause (a disjunction of literals) and a literal and produces a new clause. A single literal, known as a unit clause, can be viewed as a disjunction of one literal.
The full resolution rule generalizes this to any two clauses:
(l1 ∨ ... ∨ lk), (m1 ∨ ... ∨ mn) ⊢ l1 ∨ ... ∨ l(i-1) ∨ l(i+1) ∨ ... ∨ lk ∨ m1 ∨ ... ∨ m(j-1) ∨ m(j+1) ∨ ... ∨ mn,
where li and mj are complementary literals. That is, when two clauses are resolved, the new clause contains all of the literals of the two original clauses except the two complementary literals.
Proof by Resolution
There is one additional technical aspect of the resolution rule: each literal should appear only once in the resulting clause.
Factoring is the removal of multiple copies of a literal.
For example, resolving (A ∨ B) with (A ∨ ¬B), we obtain (A ∨ A), which is reduced to A.
The soundness of the resolution rule is easy to see by considering the literal li that is complementary to the literal mj in the other clause.
If li is true, then mj is false, and hence m1 ∨ ... ∨ m(j-1) ∨ m(j+1) ∨ ... ∨ mn must be true, because m1 ∨ ... ∨ mn is given.
If li is false, then l1 ∨ ... ∨ l(i-1) ∨ l(i+1) ∨ ... ∨ lk must be true, because l1 ∨ ... ∨ lk is given.
Now li is either true or false, so one or the other of these conclusions holds, exactly as the resolution rule states.
Conjunctive Normal Form (CNF)
Every sentence of propositional logic is logically equivalent to a conjunction of clauses.
A sentence expressed as a conjunction of clauses is said to be in conjunctive normal form or CNF.
Procedure for converting to CNF, illustrated by converting the sentence B1,1 ⇔ (P1,2 ∨ P2,1) into CNF. The steps are as follows:
1. Eliminate ⇔, replacing α ⇔ β with (α ⇒ β) ∧ (β ⇒ α).
2. Eliminate ⇒, replacing α ⇒ β with ¬α ∨ β.
3. Move ¬ inwards by repeated application of double-negation elimination and De Morgan's rules.
4. Distribute ∨ over ∧ wherever possible.
The second row of the figure shows clauses obtained by resolving pairs in the first row.
Then, when P1,2 is resolved with ¬P1,2, we obtain the empty clause, shown as a small square.
Figure 7.13 reveals that many resolution steps are pointless.
For example, the clause B1,1 ∨ ¬B1,1 ∨ P1,2 is equivalent to True ∨ P1,2, which is equivalent to True.
Therefore, any clause in which two complementary literals appear can be discarded.
A Resolution Algorithm
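The resolution procedure can be sketched as follows. Representing clauses as sets of string literals, with "~" marking negation, is an assumption of this sketch:

```python
# A sketch of propositional resolution refutation. Clauses are frozensets
# of literals; a literal is a string, with "~" marking negation.
def resolve(c1, c2):
    """Return all resolvents of two clauses (factoring happens for free,
    since frozensets collapse duplicate literals)."""
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            resolvents.append((c1 - {lit}) | (c2 - {comp}))
    return resolvents

def pl_resolution(clauses, query):
    """Decide KB |= query by refutation: add ¬query, then resolve
    until the empty clause appears or no new clauses can be made."""
    clauses = {frozenset(c) for c in clauses}
    negated = query[1:] if query.startswith("~") else "~" + query
    clauses.add(frozenset({negated}))
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:            # empty clause: contradiction found
                        return True
                    new.add(frozenset(r))
        if new <= clauses:               # fixed point, no contradiction
            return False
        clauses |= new

# KB in CNF: ¬P11, (P11 ∨ P22 ∨ P31), ¬P22.  Does KB entail P31?
kb = [{"~P11"}, {"P11", "P22", "P31"}, {"~P22"}]
print(pl_resolution(kb, "P31"))  # True
```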
Horn Clauses and Definite Clauses
A restricted form is the definite clause, a disjunction of literals of which exactly one is positive.
For example, the clause (¬L1,1 ∨ ¬Breeze ∨ B1,1) is a definite clause, whereas (¬B1,1 ∨ P1,2 ∨ P2,1) is not.
A Horn clause is a disjunction of literals of which at most one is positive. So all definite clauses are Horn clauses, as are clauses with no positive literals; the latter are called goal clauses.
Horn clauses are closed under resolution: if you resolve two Horn clauses, you get back a Horn clause.
Knowledge bases containing only definite clauses are interesting for three reasons:
1. Every definite clause can be written as an implication whose premise is a conjunction of positive literals and whose conclusion is a single positive literal.
For example, the definite clause (¬L1,1 ∨ ¬Breeze ∨ B1,1) can be written as the implication (L1,1 ∧ Breeze) ⇒ B1,1. It says that if the agent is in [1,1] and there is a breeze, then [1,1] is breezy.
In Horn form, the premise is called the body and the conclusion is called the head. A sentence consisting of a single positive literal, such as L1,1, is called a fact. It can be written in implication form as True ⇒ L1,1.
2. Inference with Horn clauses can be done through the forward-chaining and backward-chaining algorithms.
3. Deciding entailment with Horn clauses can be done in time linear in the size of the knowledge base.
Definite Clause Examples: P ∨ ¬R ∨ ¬S (equivalently, R ∧ S ⇒ P), and the fact P.
Goal Clause Example: ¬R ∨ ¬S
Forward and Backward Chaining
The forward-chaining algorithm PL-FC-ENTAILS?(KB, q) determines whether a single proposition symbol q, the query, is entailed by a knowledge base of definite clauses.
It begins from known facts (positive literals) in the knowledge base and if all the premises
of an implication are known, then its conclusion is added to the set of known facts.
For example, if L1,1 and Breeze are known and (L1,1 ∧ Breeze) ⇒ B1,1 is in the knowledge base, then B1,1 can be added. This process continues until the query q is added or until no further inferences can be made.
Figure 7.16(a) shows a simple knowledge base of Horn clauses with A and B as known
facts.
Figure 7.16(b) shows the same knowledge base drawn as an AND–OR graph. In AND–OR
graphs, multiple links joined by an arc indicate a conjunction—every link must be proved—
while multiple links without an arc indicate a disjunction—any link can be proved.
The known leaves (A and B) are set, and inference propagates up the graph as far as
possible.
Wherever a conjunction appears, the propagation waits until all the conjuncts are known
before proceeding.
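The forward-chaining procedure just described can be sketched as follows (a simplified version of PL-FC-ENTAILS?; the rule representation is an assumption of the sketch):

```python
from collections import deque

# A sketch of forward chaining over definite clauses. Each rule is a pair
# (premises, conclusion); facts are known positive literals.
def fc_entails(rules, facts, query):
    """Repeatedly fire any rule whose premises are all known,
    until the query appears or nothing new can be inferred."""
    count = {i: len(prem) for i, (prem, _) in enumerate(rules)}
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (prem, concl) in enumerate(rules):
            if p in prem:
                count[i] -= 1
                if count[i] == 0:      # all premises known: fire the rule
                    agenda.append(concl)
    return False

# (L11 ∧ Breeze) ⇒ B11, plus the facts L11 and Breeze:
rules = [({"L11", "Breeze"}, "B11")]
print(fc_entails(rules, ["L11", "Breeze"], "B11"))  # True
```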
Forward Chaining - Algorithm
Forward chaining is sound: every inference is essentially an application of Modus Ponens.
Forward chaining is also complete: every entailed atomic sentence will be derived (after the algorithm reaches a fixed point where no new inferences are possible).
Forward chaining is an example of data-driven reasoning—that is, reasoning in which the
attention starts with the known data.
It can be used within an agent to derive conclusions from incoming percepts, often without
a specific query in mind.
For example, the wumpus agent might TELL its percepts to the knowledge base using an
incremental forward-chaining algorithm in which new facts can be added to the agenda to
initiate new inferences.
In humans, a certain amount of data-driven reasoning occurs as new information arrives.
Backward Chaining
The backward-chaining algorithm works backward from the query.
If the query q is known to be true, then no work is needed. Otherwise, the algorithm finds
those implications in the knowledge base whose conclusion is q.
If all the premises of one of those implications can be proved true (by backward chaining),
then q is true.
When applied to the query q in Figure 7.16, it works back down the graph until it reaches a
set of known facts, A and B, that forms the basis for a proof.
Backward chaining is a form of goal-directed reasoning.
It is useful for answering specific questions such as "What shall I do now?" and "Where are
my keys?"
Often, the cost of backward chaining is much less than linear in the size of the knowledge
base, because the process touches only relevant facts.
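Backward chaining can be sketched as a recursive procedure (the rule representation and the Figure 7.16-style symbols below are assumptions of the sketch):

```python
# A sketch of backward chaining over definite clauses. Each rule is a pair
# (premises, conclusion); facts are known positive literals.
def bc_entails(rules, facts, query, _visited=None):
    """Prove the query by finding a rule that concludes it and
    recursively proving all of that rule's premises."""
    if query in facts:
        return True
    visited = _visited or set()
    if query in visited:               # avoid looping on cyclic rules
        return False
    visited = visited | {query}
    for prem, concl in rules:
        if concl == query and all(
                bc_entails(rules, facts, p, visited) for p in prem):
            return True
    return False

# A Figure 7.16-style KB (symbol names are illustrative): A and B are facts;
# A ∧ B ⇒ L, L ∧ B ⇒ M, L ∧ M ⇒ P, P ⇒ Q.
rules = [({"A", "B"}, "L"), ({"L", "B"}, "M"), ({"L", "M"}, "P"), ({"P"}, "Q")]
print(bc_entails(rules, {"A", "B"}, "Q"))  # True
```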
Other inference rules:
Modus Tollens: from A ⇒ B and ¬B, infer ¬A.
Disjunctive Rule: from A ∨ B and ¬A, infer B.
Addition: from A, infer A ∨ B.
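Each of these rules can be verified as sound by truth-table enumeration; a minimal sketch:

```python
from itertools import product

# A rule is sound if, in every model where all its premises hold,
# the conclusion holds too. Premises and conclusions are encoded as
# Boolean functions of the two variables a and b.
def sound(premises, conclusion, n_vars=2):
    return all(conclusion(*vals)
               for vals in product([True, False], repeat=n_vars)
               if all(p(*vals) for p in premises))

# Modus Tollens: from A ⇒ B and ¬B, infer ¬A.
print(sound([lambda a, b: (not a) or b, lambda a, b: not b],
            lambda a, b: not a))                        # True
# Disjunctive Rule: from A ∨ B and ¬A, infer B.
print(sound([lambda a, b: a or b, lambda a, b: not a],
            lambda a, b: b))                            # True
# Addition: from A, infer A ∨ B.
print(sound([lambda a, b: a], lambda a, b: a or b))     # True
```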
Problems to Practice in Propositional Logic
Predicate Logic (First Order Logic)
Propositional logic is a declarative language because its semantics is based on a truth
relation between sentences and possible worlds.
Propositional logic lacks the expressive power to describe an environment with many
objects.
Propositional logic assumes that there are facts that either hold or do not hold in the
world.
Each fact can be in one of two states: true or false, and each model assigns true or false
to each proposition symbol.
First-order logic assumes more; namely, that the world consists of objects with certain
relations among them that do or do not hold.
FOL is sufficiently expressive to represent natural language statements in a concise way.
First-order logic is also known as predicate logic or first-order predicate logic.
It is a powerful language that describes information about objects in a natural way and expresses the relationships between those objects.
Syntax and Semantics of First Order Logic
• First-order logic assumes not only that the world contains facts, as propositional logic does, but also the following things in the world:
• Objects, which are things with individual identities
• Properties of objects that distinguish them from other objects
• Relations that hold among sets of objects
• Functions, which are a subset of relations where there is only one "value" for any given "input"
• The symbols come in three kinds: constant symbols, which stand for objects; predicate symbols, which stand for relations; and function symbols, which stand for functions.
• Examples:
• Objects: Students, lectures, companies, cars ...
• Relations: Brother-of, bigger-than, outside, part-of, has-color, occurs-after, owns,
visits, precedes, ...
• Properties: blue, oval, even, large, ...
• Functions: father-of, best-friend, second-half, one-more-than ...
Syntax and Semantics of First Order Logic
"Squares neighboring the wumpus are smelly."
Objects: wumpus, squares; Property: smelly; Relation: neighboring.
"Evil King John ruled England in 1200."
Objects: John, England, 1200; Relation: ruled; Properties: evil, king.
Figure 8.2 shows a model with five objects:
Syntax and Semantics of First Order Logic
Richard the Lionheart, King of England from 1189 to 1199; his younger brother, the evil
King John, who ruled from 1199 to 1215; the left legs of Richard and John; and a crown.
The objects in the model may be related in various ways.
In the figure, Richard and John are brothers. A relation is the set of tuples of objects that are related. (A tuple is a collection of objects arranged in a fixed order and is written with angle brackets surrounding the objects.) Thus, the brotherhood relation in this model is the set
{ <Richard the Lionheart, King John>, <King John, Richard the Lionheart> }.
The crown is on King John's head, so the "on head" relation contains just one tuple, <the crown, King John>.
The "brother" and "on head" relations are binary relations—that is, they relate pairs of
objects. The model also contains unary relations, or properties: the "person" property is
true of both Richard and John: the "king" property is true only of John (presumably
because Richard is dead at this point); and the "crown' property is true only of the crown.
Syntax and Semantics of First Order Logic
Certain relationships are best considered as functions, in which a given object must be related to exactly one object.
For example, each person has one left leg, so the model has a unary "left leg" function that includes the following mappings:
<Richard the Lionheart> → Richard's left leg
<King John> → John's left leg
Strictly speaking, models in first-order logic require total functions: there must be a value for every input tuple.
Thus, the crown must have a left leg, and so must each of the left legs.
A model in first-order logic consists of a set of objects and an interpretation that maps
constant symbols to objects, predicate symbols to relations on those objects, and function
symbols to functions on those objects.
Syntax and Semantics of First Order Logic
• Variable symbols
• E.g., x, y
• Connectives
• Same as in PL: not (¬), and (∧), or (∨), implies (⇒), if and only if (biconditional ⇔)
• Quantifiers
• Universal ∀x or (Ax)
• Existential ∃x or (Ex)
Atomic Sentences
An atomic sentence is formed from a predicate symbol optionally followed by a
parenthesized list of terms, Predicate (term1, term2, ......, term n).
Example:
Ravi and Ajay are brothers: Brothers(Ravi, Ajay)
Chinky is a cat: Cat(Chinky)
Richard the Lionheart is the brother of King John:
Brother(Richard, John)
An atomic sentence is true in a given model if the relation referred to by the predicate
symbol holds among the objects referred to by the arguments.
Complex Sentences
• Atomic sentences can have complex terms as arguments.
Richard the Lionheart's father is married to King John's mother:
Married(Father(Richard), Mother(John))
• Logical connectives are used to construct complex sentences, with the same syntax and
semantics.
The four sentences that are true in the model of Figure 8.2 under intended interpretation:
¬Brother(LeftLeg(Richard), John)
Brother(Richard, John) ∧ Brother(John, Richard)
King(Richard) ∨ King(John)
¬King(Richard) ⇒ King(John)
Universal Quantifier (∀)
First-order logic contains two standard quantifiers, called universal and existential.
Consider rules such as "Squares neighboring the wumpus are smelly" and "All kings are persons". The first is written in first-order logic as
∀x,y((Square(x)∧Wumpus(y)∧Adjacent(x,y))→Smelly(x))
The second rule, "All kings are persons," is written in first-order logic as
∀x King(x) ⇒ Person(x)
∀x is usually pronounced "For all x", "For each x", or "For every x". Thus, the sentence says, "For all x, if x is a king, then x is a person." The symbol x is called a variable. Variables are written as lowercase letters. A variable can also serve as the argument of a function, for example, LeftLeg(x).
A term with no variables is called a ground term.
The sentence ∀x P, where P is any logical expression, says that P is true for every object x.
More precisely, ∀x P is true in a given model if P is true in all possible extended
interpretations constructed from the interpretation given in the model where each
extended interpretation specifies a domain element to which x refers.
Universal Quantifier (∀)
Extending the interpretation in each of the five ways is illustrated below:
Richard the Lionheart is a king ⇒ Richard the Lionheart is a person.
King John is a king ⇒ King John is a person.
Richard’s left leg is a king ⇒ Richard’s left leg is a person.
John’s left leg is a king ⇒ John’s left leg is a person.
The crown is a king ⇒ the crown is a person
We see that the implication is true whenever its premise is false—regardless of the truth
of the conclusion.
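The same check can be carried out programmatically by letting x range over every object in the domain (the object and relation names below are illustrative):

```python
# Checking ∀x King(x) ⇒ Person(x) in the five-object model by trying
# every extended interpretation, i.e., every object that x can denote.
domain = ["Richard", "John", "RichardsLeftLeg", "JohnsLeftLeg", "Crown"]
king = {"John"}                         # unary "king" relation
person = {"Richard", "John"}            # unary "person" relation

# ∀x P is true iff P holds for every object x in the domain; the
# implication holds vacuously whenever its premise King(x) is false.
forall_holds = all((x not in king) or (x in person) for x in domain)
print(forall_holds)  # True
```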
1. ∀x bird(x) ⇒ fly(x)
2. ∀x man(x) ⇒ respects(x, parent)
3. ∃x boys(x) ∧ play(x, cricket)
4. ¬∀x [student(x) ⇒ like(x, Mathematics) ∧ like(x, Science)]
5. ∃x [student(x) ∧ failed(x, Mathematics) ∧ ∀y [(¬(x = y) ∧ student(y)) ⇒ ¬failed(y, Mathematics)]]
Free and Bound Variables
• Quantifiers interact with the variables that appear in a sentence. There are two types of variables in first-order logic, which are given below: