
KNOWLEDGE

REPRESENTATION
UNIT 4
Contents
Propositional Logic:
• Representation
• Inference
• Reasoning Patterns
• Resolution
• Forward and Backward Chaining.
First order Logic:
• Representation
• Inference
• Reasoning Patterns
• Resolution
• Forward and Backward Chaining.
Knowledge representation:
• Knowledge representation and reasoning (KR, KRR) is the part of Artificial Intelligence concerned with how AI agents think and how thinking contributes to the intelligent behavior of agents.

Representation and Reasoning System


• A Representation and Reasoning System (RRS) is made up of:
➢syntax: specifies the symbols used, and how they can be combined to form legal
sentences
➢semantics: specifies the meaning of the symbols
➢reasoning theory or proof procedure: a (possibly nondeterministic) specification
of how an answer can be produced
Propositional Definite Clauses: Syntax
• An atom is a symbol starting with a lower case letter.
• A body is an atom or is of the form b1 ∧ b2 where b1 and b2 are
bodies.
• A definite clause is an atom or is a rule of the form h ← b, where h is an atom and b is a body. We read this as “h if b”.
• A knowledge base is a set of definite clauses.
Propositional Definite Clauses: Semantics
• Semantics allows you to relate the symbols in the logic to the domain you’re
trying to model.
• An interpretation I assigns a truth value to each atom.
• We can use the interpretation to determine the truth value of clauses and
knowledge bases:
• A body b1 ∧ b2 is true in I if b1 is true in I and b2 is true in I.
• A rule h ← b is false in I if b is true in I and h is false in I. The rule is true
otherwise.
• A knowledge base KB is true in I if and only if every clause in KB is true in I.
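As an illustration of these semantics, the following Python sketch (with made-up atom names) checks whether a small definite-clause knowledge base is true in a given interpretation I:

# A minimal sketch: atoms are strings, a clause is (head, [body atoms]), a fact has an empty body.
def body_true(body, I):
    # A body b1 ∧ ... ∧ bn is true in I if every bi is true in I.
    return all(I[b] for b in body)

def clause_true(head, body, I):
    # A rule h ← b is false in I only when b is true in I and h is false in I.
    return not (body_true(body, I) and not I[head])

def kb_true(kb, I):
    # A knowledge base KB is true in I iff every clause in KB is true in I.
    return all(clause_true(h, b, I) for h, b in kb)

kb = [("wet", ["rain", "outside"]), ("rain", [])]     # wet ← rain ∧ outside;  rain.
I = {"rain": True, "outside": True, "wet": True}      # one interpretation
print(kb_true(kb, I))                                 # True: every clause holds in I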
Propositional logic in Artificial intelligence
• Propositional logic (PL) is the simplest form of logic where all the statements
are made by propositions. A proposition is a declarative statement which is
either true or false. It is a technique of knowledge representation in logical and
mathematical form.

• Example:
a) It is Sunday.
b) The Sun rises from the West. (False proposition)
c) 3 + 3 = 7 (False proposition)
d) 5 is a prime number.
Following are some basic facts about propositional logic:
• Propositional logic is also called Boolean logic as it works on 0 and 1.
• In propositional logic, we use symbolic variables to represent the logic, and we can use any symbol to represent a proposition, such as A, B, C, P, Q, R, etc.
• A proposition can be either true or false, but not both.
• Propositional logic consists of propositions and logical connectives.
• These connectives are also called logical operators.
• The propositions and connectives are the basic elements of the propositional logic.
• Connectives can be said as a logical operator which connects two sentences.
• A proposition formula which is always true is called tautology, and it is also called a
valid sentence.
• A proposition formula which is always false is called Contradiction.
• Statements which are questions, commands, or opinions, such as "Where is Rohini?", "How are you?", and "What is your name?", are not propositions.
Syntax of propositional logic:
• The syntax of propositional logic defines the allowable sentences for the knowledge
representation. There are two types of Propositions:
• Atomic Propositions
• Compound propositions
• Atomic Proposition: Atomic propositions are the simple propositions. It consists of a
single proposition symbol. These are the sentences which must be either true or false.
1.Example:
a) "2 + 2 is 4" is an atomic proposition, and it is a true fact.
b) "The Sun is cold" is also an atomic proposition, and it is a false fact.
• Compound proposition: Compound propositions are constructed by combining
simpler or atomic propositions, using parenthesis and logical connectives.
• Example:
a) "It is raining today, and street is wet."
b) "Ankit is a doctor, and his clinic is in Mumbai."
Logical Connectives:
• Logical connectives are used to connect two simpler propositions or to represent a sentence logically.
We can create compound propositions with the help of logical connectives. There are mainly five
connectives, which are given as follows:
• Negation: A sentence such as ¬ P is called negation of P. A literal can be either Positive literal or
negative literal.
• Conjunction: A sentence which has ∧ connective such as, P ∧ Q is called a conjunction.
• Example: Rohan is intelligent and hardworking. It can be written as,
• P= Rohan is intelligent,Q= Rohan is hardworking. → P∧ Q.
• Disjunction: A sentence which has ∨ connective, such as P ∨ Q. is called disjunction, where P and Q
are the propositions.
• Example: "Ritika is a doctor or Engineer",
• Here P = Ritika is a doctor, Q = Ritika is an engineer, so we can write it as P ∨ Q.
• Implication: A sentence such as P → Q is called an implication; here the truth of Q depends on P. Implications are also known as if-then rules. It can be represented as
• If it is raining, then the street is wet.
• Let P = It is raining, and Q = Street is wet, so it is represented as P → Q.
• Biconditional: A sentence such as P ⇔ Q is a biconditional sentence; P depends on Q and vice versa. Example: I am breathing if and only if I am alive.
• P = I am breathing, Q = I am alive; it can be represented as P ⇔ Q.
Following is the summarized table for Propositional
Logic Connectives:
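For a quick check of these connectives, a small Python sketch (illustrative only) prints their truth table for every combination of P and Q:

from itertools import product

def implies(p, q):
    return (not p) or q          # P → Q is false only when P is true and Q is false

# Print the truth table of the five connectives for every combination of P and Q.
print("P      Q      ¬P     P∧Q    P∨Q    P→Q    P⇔Q")
for p, q in product([True, False], repeat=2):
    row = [p, q, not p, p and q, p or q, implies(p, q), p == q]
    print("  ".join(f"{str(v):<5}" for v in row))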
Question:
1. Express the following sentences in Proposition Logic.
a. "It is raining today, and street is wet."
b. "Ankit is a doctor, and his clinic is not in Mumbai."
c. Ritika is a doctor or Engineer
d. If it is raining, then the street is wet.
e. I am alive. I am breathing
f. Today is Monday
g. Today is not a Monday .
h. Ram plays both cricket and Hockey.
i. Ram leaves for Chennai or Mumbai
j. If it is Sunday (P) then I will go to Movie (Q).
k. If I have 1000 rupees then only will I go to the hotel; I will go to the hotel if I have 1000 rupees.
Truth Table:
Logical equivalence:
• Logical equivalence is one of the features of propositional logic. Two
propositions are said to be logically equivalent if and only if the columns in
the truth table are identical to each other.
• Let's take two propositions A and B; for logical equivalence, we write A ⇔ B. In the truth table below we can see that the columns for ¬A ∨ B and A → B are identical, hence ¬A ∨ B is equivalent to A → B.

Properties of Operators:
• Commutativity:
• P∧ Q= Q ∧ P, or
• P ∨ Q = Q ∨ P.
• Associativity:
• (P ∧ Q) ∧ R= P ∧ (Q ∧ R),
• (P ∨ Q) ∨ R= P ∨ (Q ∨ R)
• Identity element:
• P ∧ True = P,
• P ∨ True= True.
• Distributive:
• P∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R).
• P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R).
• DE Morgan's Law:
• ¬ (P ∧ Q) = (¬P) ∨ (¬Q)
• ¬ (P ∨ Q) = (¬ P) ∧ (¬Q).
• Double-negation elimination:
• ¬ (¬P) = P.
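These properties can be verified mechanically by enumerating every truth assignment; a brief Python sketch:

from itertools import product

# Exhaustively verify a few of the listed equivalences over all assignments to P, Q, R.
laws = {
    "Commutativity":   lambda P, Q, R: (P and Q) == (Q and P),
    "De Morgan (AND)": lambda P, Q, R: (not (P and Q)) == ((not P) or (not Q)),
    "Distributivity":  lambda P, Q, R: (P and (Q or R)) == ((P and Q) or (P and R)),
    "Double negation": lambda P, Q, R: (not (not P)) == P,
}
for name, law in laws.items():
    holds = all(law(P, Q, R) for P, Q, R in product([True, False], repeat=3))
    print(name, "holds:", holds)      # every property prints True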
Limitations of Propositional logic:
• We cannot represent relations like ALL, some, or none with
propositional logic. Example:
• All the girls are intelligent.
• Some apples are sweet.
• Propositional logic has limited expressive power.
• In propositional logic, we cannot describe statements in terms of their
properties or logical relationships.
Inference:
• In artificial intelligence, we need intelligent computers which can create new logic
from old logic or by evidence, so generating the conclusions from evidence and
facts is termed as Inference.
Inference rules:
• Inference rules are the templates for generating valid arguments. Inference rules are
applied to derive proofs in artificial intelligence, and the proof is a sequence of the
conclusion that leads to the desired goal.
• In inference rules, the implication among all the connectives plays an important role.
Following are some terminologies related to inference rules:
• Implication: It is one of the logical connectives which can be represented as P → Q. It
is a Boolean expression.
• Converse: The converse of implication, which means the right-hand side proposition
goes to the left-hand side and vice-versa. It can be written as Q → P.
• Contrapositive: The negation of converse is termed as contrapositive, and it can be
represented as ¬ Q → ¬ P.
• Inverse: The negation of implication is called inverse. It can be represented as ¬ P →
¬ Q.
• From the above term some of the compound statements are equivalent to each
other, which we can prove using truth table:

Hence from the above truth table, we can prove that P → Q is equivalent to ¬ Q
→ ¬ P, and Q→ P is equivalent to ¬ P → ¬ Q.
Types of Inference rules:
1. Modus Ponens:
• The Modus Ponens rule is one of the most important rules of inference, and it
states that if P and P → Q is true, then we can infer that Q will be true. It can
be represented as:
• Example:
• Statement-1: "If I am sleepy then I go to bed" ==> P→ Q
Statement-2: “I am sleepy" ==>P
Conclusion: "I go to bed." ==> Q.
Hence, we can say that, if P→ Q is true and P is true then Q will be true.
• Proof by Truth table:
2. Modus Tollens:
• The Modus Tollens rule states that if P → Q is true and ¬Q is true, then ¬P will also be true. It can be represented as:

Statement-1: "If I am sleepy then I go to bed" ==> P→ Q


Statement-2: "I do not go to the bed."==> ~Q
Statement-3: Which infers that "I am not sleepy" => ~P

• Proof by Truth table:


3. Hypothetical Syllogism:
• The Hypothetical Syllogism rule states that if P → Q is true and Q → R is true, then P → R is true. It can be represented with the following notation:
• Example:
• Statement-1: If you have my home key then you can unlock my home. P→Q
Statement-2: If you can unlock my home then you can take my money. Q→R
Conclusion: If you have my home key then you can take my money. P→R
• Proof by truth table:
4. Disjunctive Syllogism:
• The Disjunctive Syllogism rule states that if P∨Q is true, and ¬P is true, then Q
will be true. It can be represented as:

• Example:
• Statement-1: Today is Sunday or Monday. ==>P∨Q
Statement-2: Today is not Sunday. ==> ¬P
Conclusion: Today is Monday. ==> Q
• Proof by truth-table:
5. Addition:
• The Addition rule is one of the common inference rules, and it states that if P is true,
then P∨Q will be true.

• Example:
• Statement-1: I have a vanilla ice-cream. ==> P
Statement-2: I have a chocolate ice-cream. ==> Q
Conclusion: I have vanilla or chocolate ice-cream. ==> (P∨Q)
• Proof by Truth-Table:
6. Simplification:
• The Simplification rule states that if P ∧ Q is true, then Q or P will also be true.
It can be represented as:

• Proof by Truth-Table:
7. Resolution:
• The Resolution rule states that if P∨Q and ¬P∨R are true, then Q∨R will
also be true. It can be represented as:

• Proof by Truth-Table:
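Each of the seven rules above is valid because the corresponding "premises imply conclusion" formula is a tautology. The following Python sketch (the rule names are labels for this illustration only) brute-force checks every truth assignment:

from itertools import product

def implies(a, b):
    return (not a) or b

# A rule is valid iff (premises → conclusion) is true under every truth assignment.
rules = {
    "Modus Ponens":           lambda P, Q, R: implies(P and implies(P, Q), Q),
    "Modus Tollens":          lambda P, Q, R: implies(implies(P, Q) and not Q, not P),
    "Hypothetical Syllogism": lambda P, Q, R: implies(implies(P, Q) and implies(Q, R), implies(P, R)),
    "Disjunctive Syllogism":  lambda P, Q, R: implies((P or Q) and not P, Q),
    "Addition":               lambda P, Q, R: implies(P, P or Q),
    "Simplification":         lambda P, Q, R: implies(P and Q, P),
    "Resolution":             lambda P, Q, R: implies((P or Q) and ((not P) or R), Q or R),
}
for name, rule in rules.items():
    print(name, "is valid:", all(rule(*v) for v in product([True, False], repeat=3)))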
Reasoning in Artificial intelligence
• Reasoning:
• The reasoning is the mental process of
deriving logical conclusion and making
predictions from available knowledge,
facts, and beliefs. Or we can say,
"Reasoning is a way to infer facts from
existing data." It is a general process of
thinking rationally, to find valid
conclusions.
• In artificial intelligence, the reasoning is
essential so that the machine can also think
rationally as a human brain, and can
perform like a human.
Types of Reasoning

• Deductive reasoning
• Inductive reasoning
• Abductive reasoning
• Common Sense Reasoning
• Monotonic Reasoning
• Non-monotonic Reasoning
Terminologies
• A hypothesis is an assumption, an idea that is proposed for the sake of argument so
that it can be tested to see if it might be true.
• Premise means a statement taken to be true and used as a basis for argument or
reasoning.
Premise 1: John does not like any sour things.
Premise 2: All lemons are sour.
Conclusion: John does not like lemons.
• Patterns are a set of statements arranged in a sequence such that they are related to
each other in a specific rule. These rules define a way to calculate or solve problems.
For example, in a sequence of 3,6,9,12,_, each number is increasing by 3.
• An axiom is an unprovable rule or first principle accepted as true because it is self-evident or particularly useful.
• Heuristic knowledge represents the knowledge of experts in a field or subject. It consists of rules of thumb based on previous experience and awareness of approaches that usually work well but are not guaranteed.
1. Deductive reasoning:
• Deductive reasoning is deducing new information from logically related known
information. It is the form of valid reasoning, which means the argument's conclusion
must be true when the premises are true.
• Deductive reasoning is a type of propositional logic in AI, and it requires various rules
and facts. It is sometimes referred to as top-down reasoning, and contradictory to
inductive reasoning.
• In deductive reasoning, the truth of the premises guarantees the truth of the
conclusion.
• Example:
• Premise-1: All humans eat veggies.
• Premise-2: Suresh is human.
• Conclusion: Suresh eats veggies.
The general process of deductive reasoning is given below:
2. Inductive Reasoning:
• Inductive reasoning is a form of reasoning to arrive at a conclusion using limited sets of facts
by the process of generalization. It starts with the series of specific facts or data and reaches to
a general statement or conclusion.
• Inductive reasoning is a type of propositional logic, which is also known as cause-effect
reasoning or bottom-up reasoning.
• In inductive reasoning, we use historical data or various premises to generate a generic rule,
for which premises support the conclusion.
• Example:
• Premise: All of the pigeons we have seen in the zoo are white.
• Conclusion: Therefore, we can expect all the pigeons to be white.
3. Abductive reasoning:
• Abductive reasoning is a form of logical reasoning which starts with single
or multiple observations then seeks to find the most likely explanation or
conclusion for the observation.
• Abductive reasoning is an extension of deductive reasoning, but in abductive
reasoning, the premises do not guarantee the conclusion.
• Example:
• Implication: Cricket ground is wet if it is raining
• Axiom: Cricket ground is wet.
• Conclusion: It is raining.
Detailed Examples Of Abductive Reasoning
1. Dew On Morning Grass
• Scenario: “When I went outside this morning, the grass was completely covered with
dew. It must have rained last night.”
• Here, we can see how the hypothesis put forward, i.e., ‘it must have rained last
night,’ is an inference or an incomplete observation. The person making the
observation did not in fact witness that it had rained the previous night. Rather, she is
inferring or using abductive logic to arrive at the best possible explanation for the
given conclusion.
• Given that the grass is wet (conclusion) it is probable that it rained last night
(hypothesis).
• Notice though that there could be some other alternative explanation for the grass
being wet; it’s possible, for example, that she forgot that the sprinklers turn on
automatically in the morning.
• This illustrates how abductive reasoning can sometimes result in an incorrect
assumption or explanation as to why something took place.
Detailed Examples Of Abductive Reasoning
2. Getting Home Late From Work
• Scenario: Usually, my partner gets home from work at around 6. It’s now 7
o’clock, so she must be stuck in bad traffic.
• The explanation that has the most explanatory power, so to speak, is that she
is late to get home because she must be stuck in traffic.
• Though the person cannot conclude this is true until he or she confirms it with
their partner, it’s a reasonable and logical assumption to make.
4. Common Sense Reasoning
• Common sense reasoning is an informal form of reasoning, which can be
gained through experiences.
• Common sense reasoning simulates the human ability to make presumptions
about events which occur every day.
• It relies on good judgment rather than exact logic and operates on heuristic
knowledge and heuristic rules.
• Example:
1.One person can be at one place at a time.
2.If I put my hand in a fire, then it will burn.
• The above two statements are the examples of common sense reasoning
which a human mind can easily understand and assume.
5. Monotonic Reasoning:
• In monotonic reasoning, once the conclusion is taken, then it will remain the same even if
we add some other information to existing information in our knowledge base. In
monotonic reasoning, adding knowledge does not decrease the set of propositions that can
be derived.
• To solve monotonic problems, we can derive the valid conclusion from the available facts
only, and it will not be affected by new facts.
• Monotonic reasoning is not useful for the real-time systems, as in real time, facts get
changed, so we cannot use monotonic reasoning.
• Monotonic reasoning is used in conventional reasoning systems, and a logic-based system
is monotonic.
• Any theorem proving is an example of monotonic reasoning.
• Example:
• Earth revolves around the Sun.
• It is a true fact, and it cannot be changed even if we add another sentence in knowledge
base like, "The moon revolves around the earth" Or "Earth is not round," etc.
Monotonic Reasoning example:
• This reasoning refers to the part of reasoning in which facts or information will not change under any condition or circumstance, even if millions of new records are added to the data. We can consider them universal truths.
• For example, 2 + 3 = 5 will not change, no matter how many other records are added. Other examples: "Gravity always pulls objects towards itself", or "The Sun rises in the East and sets in the West". These conclusions will never change, no matter how much additional information is added to the database.
Advantages and Disadvantages of Monotonic
Reasoning:
• Advantages of Monotonic Reasoning:
• In monotonic reasoning, each old proof will always remain valid.
• If we deduce some facts from available facts, then it will remain valid for
always.
• Disadvantages of Monotonic Reasoning:
• We cannot represent the real world scenarios using Monotonic reasoning.
• Hypothesis knowledge cannot be expressed with monotonic reasoning,
which means facts should be true.
• Since we can only derive conclusions from the old proofs, so new knowledge
from the real world cannot be added.
6. Non-monotonic Reasoning
• In Non-monotonic reasoning, some conclusions may be invalidated if we add some
more information to our knowledge base.
• Logic will be said as non-monotonic if some conclusions can be invalidated by
adding more knowledge into our knowledge base.
• Non-monotonic reasoning deals with incomplete and uncertain models.
• "Human perceptions for various things in daily life, "is a general example of non-
monotonic reasoning.
• Example: Let us suppose the knowledge base contains the following knowledge:
• Birds can fly
• Penguins cannot fly
• Pitty is a bird
• So from the above sentences, we can conclude that Pitty can fly.
• However, if we add another sentence to the knowledge base, "Pitty is a penguin", which concludes "Pitty cannot fly", the above conclusion is invalidated.
Non-monotonic Reasoning example
• This reasoning refers to the part of reasoning in which information may change
in some condition, or after adding some additional information, previously
added facts can be changed.
• For Example, consider the following statements:
1.John loves Ice-Cream and Chocolates.
2.John hates nuts and cannot eat them in any condition.
3.“Temptations” is a chocolate.
• So, till here, if we apply reasoning, we can deduce that John loves Temptations.
• But, now if I add one more statement, that is “Temptations contains fruits and
nuts”, so, now our previously deduced conclusion doesn’t hold here. So, as our
conclusion changed with addition of more information, that is why it belongs to
the category of Non-Monotonic reasoning.
Advantages and Disadvantages of Non-monotonic
reasoning:
• Advantages of Non-monotonic reasoning:
• For real-world systems such as Robot navigation, we can use non-monotonic
reasoning.
• In Non-monotonic reasoning, we can choose probabilistic facts or can make
assumptions.
• Disadvantages of Non-monotonic Reasoning:
• In non-monotonic reasoning, the old facts may be invalidated by adding new
sentences.
• It cannot be used for theorem proving.
First-Order Logic in Artificial intelligence
• In the topic of propositional logic, we saw how to represent statements using propositional logic. Unfortunately, in propositional logic we can only represent facts, which are either true or false.
• PL is not sufficient to represent the complex sentences or natural language
statements. The propositional logic has very limited expressive power.
Consider the following sentence, which we cannot represent using PL logic.
• "Some humans are intelligent", or
• "Sachin likes cricket."
• To represent the above statements, PL is not sufficient, so we require a more powerful logic, such as first-order logic.
First-Order logic:
• First-order logic is another way of knowledge representation in artificial intelligence.
It is an extension to propositional logic.
• FOL is sufficiently expressive to represent the natural language statements in a
concise way.
• First-order logic is also known as Predicate logic or First-order predicate logic.
First-order logic is a powerful language that expresses information about objects in a more natural way and can also express the relationships between those objects.
• First-order logic (like natural language) does not only assume that the world contains facts, as propositional logic does, but also assumes the following things in the world:
• Objects: A, B, people, numbers, colors, wars, theories, squares, pits, wumpus, ......
• Relations: It can be a unary relation such as: red, round, is adjacent, or an n-ary relation such as: the sister of, brother of, has color, comes between
• Function: Father of, best friend, third inning of, end of, ......
• As a natural language, first-order logic also has two main parts:
1. Syntax
2. Semantics
Syntax of First-Order logic:
• The syntax of FOL determines which collection of symbols is a logical
expression in first-order logic. The basic syntactic elements of first-order logic
are symbols. We write statements in short-hand notation in FOL.
• Following are the basic elements of FOL syntax:

Constant:    1, 2, A, John, Mumbai, cat, ....
Variables:   x, y, z, a, b, ....
Predicates:  Brother, Father, >, ....
Function:    sqrt, LeftLegOf, ....
Connectives: ∧, ∨, ¬, ⇒, ⇔
Equality:    ==
Quantifier:  ∀, ∃
Atomic sentences:
• Atomic sentences are the most basic sentences of first-order logic. These sentences are
formed from a predicate symbol followed by a parenthesis with a sequence of terms.
• We can represent atomic sentences as Predicate (term1, term2, ......, term n).
Example: Ravi and Ajay are brothers: => Brothers(Ravi, Ajay).
Chinky is a cat: => cat (Chinky).
Complex Sentences:
• Complex sentences are made by combining atomic sentences using connectives.
First-order logic statements can be divided into two parts:
• Subject: Subject is the main part of the statement.
• Predicate: A predicate can be defined as a relation, which binds two atoms together in a
statement.
Consider the statement: "x is an integer.", it consists of two parts, the first part x is the
subject of the statement and second part "is an integer," is known as a predicate.
Truth in first-order logic
• Sentences are true with respect to a model and an interpretation
• Model contains ≥ 1 objects (domain elements) and relations among them
• Interpretation specifies referents for
➢constant symbols → objects
➢predicate symbols → relations
➢function symbols → functional relations
• An atomic sentence predicate(term1, . . . , termn) is true
• iff the objects referred to by term1, . . . , termn are in the relation referred
to by predicate
Models for FOL: Example
Quantifiers in First-order logic:
• A quantifier is a language element which generates quantification, and
quantification specifies the quantity of specimen in the universe of discourse.
• These are the symbols that permit to determine or identify the range and scope
of the variable in the logical expression. There are two types of quantifier:
• Universal Quantifier, (for all, everyone, everything)
• Existential quantifier, (for some, at least one).
Universal Quantifier:
• Universal quantifier is a symbol of logical representation, which specifies that
the statement within its range is true for everything or every instance of a
particular thing.
• The Universal quantifier is represented by a symbol ∀, which resembles an
inverted A.
• If x is a variable, then ∀x is read as:
• For all x
• For each x
• For every x.
Example:
• All men drink coffee.
• Let a variable x refer to a man, so all x can be represented in the UOD as below:

∀x man(x) → drink(x, coffee).

It will be read as: For all x, if x is a man, then x drinks coffee.
Existential Quantifier:
• Existential quantifiers are the type of quantifiers, which express that the
statement within its scope is true for at least one instance of something.
• It is denoted by the logical operator ∃, which resembles an inverted E. When it is used with a predicate variable, it is called an existential quantifier.
• If x is a variable, then the existential quantifier will be ∃x or ∃(x). And it will be read as:
• There exists an 'x.'
• For some 'x.'
• For at least one 'x.'
Example:

∃x: boys(x) ∧ intelligent(x)


It will be read as: There are some x where x is a boy who is intelligent.
Points to remember:
• The main connective for universal quantifier ∀ is implication →.
• The main connective for existential quantifier ∃ is and ∧.
Properties of Quantifiers:
• In universal quantifier, ∀x∀y is similar to ∀y∀x.
• In Existential quantifier, ∃x∃y is similar to ∃y∃x.
• ∃x∀y is not similar to ∀y∃x.
Some Examples of FOL using quantifier:
1. All birds fly.
In this question the predicate is "fly(bird)."
And since there are all birds who fly so it will be represented as follows.
∀x bird(x) →fly(x).

2. Every man respects his parent.


In this question, the predicate is "respect(x, y)," where x=man, and y= parent.
Since the statement applies to every man, we will use ∀, and it will be represented as follows:
∀x man(x) → respects (x, parent).
3. Some boys play cricket.
In this question, the predicate is "play(x, y)," where x= boys, and y= game.
Since there are some boys so we will use ∃, and it will be represented as:
∃x: boys(x) ∧ play(x, cricket).

4. Not all students like both Mathematics and Science.


In this question, the predicate is "like(x, y)," where x= student, and y= subject.
Since not all students are included, we will use ∀ together with negation, giving the following representation:
¬∀ (x) [ student(x) → like(x, Mathematics) ∧ like(x, Science)].
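Over a finite universe of discourse, such quantified formulas can be checked directly with all() and any(); a tiny Python sketch using made-up individuals and predicates:

# A hypothetical finite universe of discourse, to illustrate ∀ (all) and ∃ (any).
domain = ["ravi", "tweety", "ajay"]
bird = {"tweety"}                  # bird(x)
fly = {"tweety"}                   # fly(x)
boy = {"ravi", "ajay"}             # boys(x)
intelligent = {"ravi"}             # intelligent(x)

# ∀x bird(x) → fly(x): every bird in the domain flies.
print(all((x not in bird) or (x in fly) for x in domain))       # True

# ∃x boys(x) ∧ intelligent(x): at least one boy is intelligent.
print(any((x in boy) and (x in intelligent) for x in domain))   # True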
Free and Bound Variables:
• The quantifiers interact with variables which appear in a suitable way. There
are two types of variables in First-order logic which are given below:
• Free Variable: A variable is said to be a free variable in a formula if it occurs
outside the scope of the quantifier.
• Example: ∀x ∃y [P(x, y, z)], where z is a free variable.
• Bound Variable: A variable is said to be a bound variable in a formula if it
occurs within the scope of the quantifier.
• Example: ∀x ∀y [A(x) ∧ B(y)], here x and y are the bound variables.
Knowledge representation using FOL
• In knowledge representation, a domain is just some part of the world about
which we wish to express some knowledge.
• We will begin with a brief description of the TELL/ASK interface for first-
order knowledge bases. Then we will look at the domains of family
relationships, numbers, sets, and lists, and at the wumpus world.
✓Assertions and queries in first-order logic
✓The kinship domain
✓Numbers, sets, and lists
✓Wumpus World
Assertions and queries in first-order logic
• Sentences are added to a knowledge base using TELL, exactly as in
propositional logic. Such sentences are called assertions.
• For example, we can assert that John is a king and that kings are persons:
• TELL(KB, King(John)) .
• TELL(KB, ∀ x King(x) ⇒ Person(x)) .
• We can ask questions of the knowledge base using ASK.
• For example, ASK(KB, King(John)) returns true.
Queries and goals
• Questions asked using ASK are called queries or goals.
• Generally speaking, any query that is logically entailed by the knowledge base
should be answered affirmatively.
• For example, given the two assertions in the preceding paragraph, the query
ASK(KB, Person(John))
• should also return true. We can also ask quantified queries,
such as ASK(KB, ∃ x Person(x)) .
• The answer to this query could be true,
• The standard form for an answer of this sort is a substitution or binding list,
which is a set of variable/term pairs.
• In this particular case, given just the two assertions, the answer would be
{x/John}.
• If there is more than one possible answer, a list of substitutions can be returned.
The kinship domain

• The first example we consider is the domain of family relationships, or kinship. This domain includes facts such as "Elizabeth is the mother of Charles" and "Charles is the father of William", and rules such as "One's grandmother is the mother of one's parent."
• Clearly, the objects in our domain are people. We will have two unary
predicates, Male and Female. Kinship relations—parenthood, brotherhood,
marriage, and so on—will be represented by binary predicates: Parent, Sibling,
Brother, Sister , Child, Daughter, Son, Spouse, Wife, Husband, Grandparent,
Grandchild, Cousin, Aunt, and Uncle.
• We will use functions for Mother and Father, because every person has exactly
one of each of these (at least according to nature’s design).
• We can go through each function and predicate, writing down what we know
in terms of the other symbols.
For example,
• Brothers are siblings
• ∀x, y Brother(x, y) ⇒ Sibling(x, y)
• Male and female are disjoint categories:
• ∀ x Male(x) ⇔ ¬Female(x) .
• Parent and child are inverse relations:
• ∀ p, c Parent(p, c) ⇔ Child(c, p) .
Numbers, sets, and list
• Numbers are perhaps the most vivid example of how a large theory can be built
up from a tiny kernel of axioms.
• We will describe here the theory of natural numbers or nonnegative integers.
• We need a predicate NatNum that will be true of natural numbers; we need one
constant symbol, 0; and we need one function symbol, S (successor).
• Natural numbers are defined recursively:
• NatNum(0) .
• ∀ n NatNum(n) ⇒ NatNum(S(n)) .
• That is, 0 is a natural number, and for every object n, if n is a natural number
then S(n) is a natural number. So the natural numbers are 0, S(0), S(S(0)), and so
on.
• The domain of sets is also fundamental to mathematics as well as to commonsense
reasoning. (In fact, it is possible to build number theory on top of set theory.) We
want to be able to represent individual sets, including the empty set.
• We need a way to build up sets by adding an element to a set or taking the union or
intersection of two sets.
• We will want to know whether an element is a member of a set and to be able to
distinguish sets from objects that are not sets.
• We will use the normal vocabulary of set theory as syntactic sugar. The empty set
is a constant written as { }.
• There is one unary predicate, Set, which is true of sets. The binary predicates are
x∈ s (x is a member of set s) and s1 ⊆ s2 (set s1 is a subset, not necessarily proper,
of set s2).
• The binary functions are s1 ∩s2 (the intersection of two sets), s1 ∪ s2 (the union of
two sets), and {x|s} (the set resulting from adjoining element x to set s).
Example:
1.Two sets are equal if and only if each is a subset of the other:
• ∀ s1, s2 (s1 = s2) ⇔ (s1 ⊆ s2 ∧ s2 ⊆ s1) .
2. An object is in the intersection of two sets if and only if it is a member of
both sets:
• ∀ x, s1, s2 x∈(s1 ∩s2) ⇔ (x∈s1 ∧ x∈ s2) .
3. An object is in the union of two sets if and only if it is a member of
either set:
• ∀ x, s1, s2 x∈(s1 ∪s2) ⇔ (x∈s1 ∨ x∈ s2) .
• Lists are similar to sets. The differences are that lists are ordered and the
same element can appear more than once in a list.
• We can use the vocabulary of Lisp for lists: Nil is the constant list with no
elements; Cons, Append, First, and Rest are functions; and Find is the
predicate that does for lists what Member does for sets.
• List? is a predicate that is true only of lists.
• The empty list is [ ].
• The term Cons(x, y), where y is a nonempty list, is written as [x|y].
• The term Cons(x, Nil) (i.e., the list containing only the element x) is written as [x].
• A list of several elements, such as [A, B, C], corresponds to the nested
term Cons(A, Cons(B, Cons(C, Nil))).
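For concreteness, the nested Cons terms can be mirrored in Python, with Nil represented as None and Cons(x, y) as a pair; this is only an illustration of the encoding:

Nil = None                          # the empty list

def cons(x, y):
    return (x, y)                   # Cons(x, y): element x followed by list y

def first(lst):
    return lst[0]

def rest(lst):
    return lst[1]

# [A, B, C] corresponds to Cons(A, Cons(B, Cons(C, Nil))):
abc = cons("A", cons("B", cons("C", Nil)))
print(first(abc), first(rest(abc)))   # A B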
Wumpus world
Following is the Simple KB for wumpus world when an agent moves from room
[1, 1], to room [2,1]:

• Here in the first row, we have mentioned the propositional variables for room [1,1], which show that the room does not have a wumpus (¬W11), no stench (¬S11), no pit (¬P11), no breeze (¬B11), no gold (¬G11), is visited (V11), and the room is safe (OK11).
• In the second row, we have mentioned propositional variables for room
[1,2], which is showing that there is no wumpus, stench and breeze are
unknown as an agent has not visited room [1,2], no Pit, not visited yet, and
the room is safe.
• In the third row we have mentioned propositional variable for room[2,1],
which is showing that there is no wumpus(¬ W21), no stench (¬S21), no
Pit (¬P21), Perceives breeze(B21), no glitter(¬G21), visited (V21), and room
is safe (OK21).
Inference in First-Order Logic
• Inference in First-Order Logic is used to deduce new facts or sentences from
existing sentences. Before understanding the FOL inference rule, let's
understand some basic terminologies used in FOL.
• Substitution:
• Substitution is a fundamental operation performed on terms and formulas. It
occurs in all inference systems in first-order logic. The substitution is complex
in the presence of quantifiers in FOL. If we write F[a/x], so it refers to
substitute a constant "a" in place of variable "x".
Equality:
• First-Order logic does not only use predicate and terms for making atomic
sentences but also uses another way, which is equality in FOL. For this, we can
use equality symbols which specify that the two terms refer to the same object.
• Example: Brother (John) = Smith.
• As in the above example, the object referred by the Brother (John) is similar to
the object referred by Smith. The equality symbol can also be used with
negation to represent that two terms are not the same objects.
• Example: ¬(x=y) which is equivalent to x ≠y.
FOL inference rules for quantifier:

• As in propositional logic, we also have inference rules in first-order logic; following are some basic inference rules in FOL:
• Universal Generalization
• Universal Instantiation
• Existential Instantiation
• Existential introduction
1. Universal Generalization:
• Universal generalization is a valid inference rule which states that if premise
P(c) is true for any arbitrary element c in the universe of discourse, then we can
have a conclusion as ∀ x P(x).
• It can be represented as:

• This rule can be used if we want to show that every element has a similar
property.
• In this rule, x must not appear as a free variable.
• Example: Let's represent, P(c): "A byte contains 8 bits", so for ∀ x P(x) "All
bytes contain 8 bits.", it will also be true.
2. Universal Instantiation:
• Universal instantiation, also called universal elimination or UI, is a valid inference rule. It can be applied multiple times to add new sentences.
• The new KB is logically equivalent to the previous KB.
• As per UI, we can infer any sentence obtained by substituting a ground
term for the variable.
• The UI rule state that we can infer any sentence P(c) by substituting a ground
term c (a constant within domain x) from ∀ x P(x) for any object in the
universe of discourse.
• It can be represented as:
• Example:1.
• IF "Every person like ice-cream"=> ∀x P(x) so we can infer that
"John likes ice-cream" => P(c)
• Example: 2.
• Let's take a famous example,
• "All kings who are greedy are Evil." So let our knowledge base contains this
detail as in the form of FOL:
• ∀x king(x) ∧ greedy (x) → Evil (x),
• So from this information, we can infer any of the following statements using
Universal Instantiation:
• King(John) ∧ Greedy (John) → Evil (John),
• King(Richard) ∧ Greedy (Richard) → Evil (Richard),
• King(Father(John)) ∧ Greedy (Father(John)) → Evil (Father(John)),
3. Existential Instantiation:
• Existential instantiation, also called Existential Elimination, is a valid inference rule in first-order logic.
• It can be applied only once to replace the existential sentence.
• The new KB is not logically equivalent to old KB, but it will be satisfiable if
old KB was satisfiable.
• This rule states that one can infer P(c) from the formula given in the form of ∃x
P(x) for a new constant symbol c.
• The restriction with this rule is that c used in the rule must be a new term for
which P(c ) is true.
• It can be represented as:
Example:
• From the given sentence: ∃x Crown(x) ∧ OnHead(x, John),
• So we can infer: Crown(K) ∧ OnHead( K, John), as long as K does not
appear in the knowledge base.
• The above used K is a constant symbol, which is called Skolem constant.
• The Existential instantiation is a special case of Skolemization process.
4. Existential introduction
• An existential introduction is also known as an existential generalization, which
is a valid inference rule in first-order logic.
• This rule states that if there is some element c in the universe of discourse
which has a property P, then we can infer that there exists something in the
universe which has the property P.
• It can be represented as:

• Example: Let's say that,


"Priyanka got good marks in English."
"Therefore, someone got good marks in English."
What is Unification?
• Unification is a process of making two different logical atomic expressions
identical by finding a substitution. Unification depends on the substitution
process.
• It takes two literals as input and makes them identical using substitution.
• Let Ψ1 and Ψ2 be two atomic sentences and 𝜎 be a unifier such that,
Ψ1𝜎 = Ψ2𝜎, then it can be expressed as UNIFY(Ψ1, Ψ2).
• Example: Unify{King(x), King(John)}
• Let Ψ1 = King(x), Ψ2 = King(John); then the substitution σ = {John/x} unifies them.
Conditions for Unification:
Following are some basic conditions for unification:
• Predicate symbol must be same, atoms or expression with different
predicate symbol can never be unified.
• Number of Arguments in both expressions must be identical.
• Unification will fail if there are two similar variables present in the
same expression.
Algorithm: Unify(Ψ1, Ψ2)
Step. 1: If Ψ1 or Ψ2 is a variable or constant, then:
a) If Ψ1 and Ψ2 are identical, then return NIL.
b) Else if Ψ1is a variable,
a. then if Ψ1 occurs in Ψ2, then return FAILURE
b. Else return { (Ψ2/ Ψ1)}.
c) Else if Ψ2 is a variable,
a. If Ψ2 occurs in Ψ1 then return FAILURE,
b. Else return {( Ψ1/ Ψ2)}.
d) Else return FAILURE.
Step.2: If the initial Predicate symbol in Ψ1 and Ψ2 are not same, then return FAILURE.
Step. 3: IF Ψ1 and Ψ2 have a different number of arguments, then return FAILURE.
Step. 4: Set Substitution set(SUBST) to NIL.
Step. 5: For i=1 to the number of elements in Ψ1.
a) Call Unify function with the ith element of Ψ1 and ith element of Ψ2, and put the result into S.
b) If S = FAILURE, then return FAILURE.
c) If S ≠ NIL, then do:
a. Apply S to the remainder of both Ψ1 and Ψ2.
b. SUBST = APPEND(S, SUBST).
Step.6: Return SUBST.
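The algorithm above can be written as a short recursive procedure. The sketch below follows the same ideas (identity check, variable binding with an occurs check, matching predicate symbols and argument counts); for this illustration, variables are assumed to be lowercase strings and compound expressions are tuples of the form (predicate, arg1, ..., argn):

def is_variable(t):
    # Convention for this sketch: variables are lowercase strings; constants start uppercase.
    return isinstance(t, str) and t[:1].islower()

def occurs(v, t):
    # Occurs check: does variable v appear anywhere inside term t?
    return v == t or (isinstance(t, tuple) and any(occurs(v, a) for a in t))

def unify_var(v, t, subst):
    if v in subst:
        return unify(subst[v], t, subst)
    if occurs(v, t):
        return None                       # FAILURE: v occurs inside t
    return {**subst, v: t}

def unify(x, y, subst=None):
    # Return a substitution {variable: term} that unifies x and y, or None on failure.
    if subst is None:
        subst = {}
    if x == y:
        return subst
    if is_variable(x):
        return unify_var(x, y, subst)
    if is_variable(y):
        return unify_var(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y) and x[0] == y[0]:
        for a, b in zip(x[1:], y[1:]):    # same predicate symbol, same number of arguments
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                           # different predicate symbols or arities: FAILURE

print(unify(("King", "x"), ("King", "John")))   # {'x': 'John'}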
Normal Forms
1.Disjunctive Normal Form (DNF)
2.Conjunctive Normal Form
Conjunctive Normal Form:
• A formula is in conjunctive normal form (CNF) when it is a conjunction (AND) of clauses, each clause being a disjunction (OR) of literals.
• If p, q are two statements, then "p and q" is a compound statement, denoted by p ∧ q and referred to as the conjunction of p and q. The conjunction of p and q is true only when both p and q are true; otherwise, it is false.

p q p∧q
T T T
T F F
F T F
F F F

• Example: if statement p is "6 < 7" and statement q is "-3 > -4", then the conjunction of p and q is true, as both p and q are true statements.
Disjunctive Normal Form (DNF):
• A formula is in disjunctive normal form (DNF) when it is a disjunction (OR) of terms, each term being a conjunction (AND) of literals.
• If p, q are two statements, then "p or q" is a compound statement, denoted by p ∨ q and referred to as the disjunction of p and q. The disjunction of p and q is true whenever at least one of the two statements is true, and it is false only when both p and q are false.

p q p∨q
T T T
T F T
F T T
F F F

Example: if p is "4 is a positive integer" and q is "√5 is a rational number", then p ∨ q is true as statement p is true, although statement q is false.
Resolution Method in AI
• Resolution method is an inference rule which is used in both Propositional as
well as First-order Predicate Logic in different ways. This method is basically
used for proving the satisfiability of a sentence.
• In resolution method, we use Proof by Refutation technique to prove the
given statement.
• The key idea for the resolution method is to use the knowledge base and
negated goal to obtain null clause (which indicates contradiction).
• Resolution method is also called Proof by Refutation. Since the knowledge
base itself is consistent, the contradiction must be introduced by a negated
goal.
• As a result, we have to conclude that the original goal is true.
Following steps used to convert into CNF:
In propositional logic, the resolution method is applied only to those clauses
which are disjunction of literals. There are following steps used to convert into
CNF:
1) Eliminate bi-conditional implication by replacing A ⇔ B with (A → B) ∧ (B → A).
2) Eliminate implication by replacing A → B with ¬A ∨ B.
3) In CNF, negation (¬) appears only in literals, therefore we move it inwards as:
• ¬(¬A) ≡ A (double-negation elimination)
• ¬(A ∧ B) ≡ (¬A ∨ ¬B) (De Morgan)
• ¬(A ∨ B) ≡ (¬A ∧ ¬B) (De Morgan)
4) Finally, apply the distributive law to the sentences and form the CNF as:
• (A1 ∨ B1) ∧ (A2 ∨ B2) ∧ .... ∧ (An ∨ Bn).
• Note: CNF can also be described as an AND of ORs.
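If the SymPy library is available, such a conversion can be checked mechanically; a small sketch with an arbitrary illustrative formula:

from sympy.abc import A, B, C
from sympy.logic.boolalg import Equivalent, to_cnf

# Convert A ⇔ (B ∨ C) to CNF: the biconditional and implications are eliminated,
# negation is moved inwards, and ∧ is distributed over ∨.
expr = Equivalent(A, B | C)
print(to_cnf(expr))   # e.g. (A | ~B) & (A | ~C) & (B | C | ~A)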
Resolution Method in Propositional Logic
• In propositional logic, resolution method is the only inference rule which gives a
new clause when two or more clauses are coupled together.
• Using propositional resolution, it becomes easy to make a theorem prover that is sound and complete.
• The process followed to convert the propositional logic into resolution method
contains the below steps:
1. Convert the given axiom into clausal form, i.e., disjunction form.
2. Apply and prove the given goal using the negation rule.
3. Use those literals which are needed to prove.
4. Solve the clauses together and achieve the goal.
• But, before solving problems using Resolution method, let’s understand two
normal forms
Example OF Propositional Resolution
• Consider the following Knowledge Base:
1.The humidity is high or the sky is cloudy.
2.If the sky is cloudy, then it will rain.
3.If the humidity is high, then it is hot.
4.It is not hot.
• Goal: It will rain.
• Use propositional logic and apply resolution method to prove that the
goal is derivable from the given knowledge base.
Solution
Let’s construct propositions of the given sentences one by one:
Let,
1) P: Humidity is high.
Q: Sky is cloudy.
It will be represented as P V Q.
2) Q: Sky is cloudy. …from(1)
Let, R: It will rain.
It will be represented as Q → R.
3) P: Humidity is high. …from(1)
Let, S: It is hot.
It will be represented as P → S.
4) ¬S: It is not hot.
Applying resolution method:
• In (2), Q → R will be converted to (¬Q V R)
• In (3), P → S will be converted to (¬P V S)
• Negation of Goal (¬R): It will not rain.
• Finally, apply the rule as shown below:

After applying Proof by Refutation (Contradiction) on the negated goal, the problem is solved, and it has terminated with a null clause (Ø). Hence, the goal is achieved. Thus, it will rain.
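A sketch of the same refutation in code: clauses are sets of literals (a plain string, or ('not', string) for a negated atom), and pairs of clauses are resolved until the empty clause appears. The clause set below is the CNF of the humidity/rain knowledge base together with the negated goal:

from itertools import combinations

def negate(lit):
    return lit[1] if isinstance(lit, tuple) else ("not", lit)

def resolvents(c1, c2):
    # All clauses obtainable by resolving c1 with c2 on one complementary literal.
    return [frozenset((c1 - {lit}) | (c2 - {negate(lit)}))
            for lit in c1 if negate(lit) in c2]

def refute(clauses):
    # Saturate the clause set; the goal is proved when the empty clause is derived.
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolvents(c1, c2):
                if not r:
                    return True            # empty clause: contradiction found
                new.add(r)
        if new <= clauses:
            return False                   # nothing new can be derived
        clauses |= new

# CNF of the KB: P∨Q, ¬Q∨R, ¬P∨S, ¬S, plus the negated goal ¬R ("it will not rain").
kb = [frozenset({"P", "Q"}),
      frozenset({("not", "Q"), "R"}),
      frozenset({("not", "P"), "S"}),
      frozenset({("not", "S")}),
      frozenset({("not", "R")})]
print(refute(kb))                          # True: the goal "it will rain" follows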
Resolution Method in FOPl/ Predicate Logic
• Resolution method in FOPL is an uplifted version of propositional resolution
method.
• In FOPL, the process to apply the resolution method is as follows:
• Convert the given axiom into CNF, i.e., a conjunction of clauses. Each clause
should be dis-junction of literals.
• Apply negation on the goal given.
• Use literals which are required and prove it.
• Unlike propositional logic, FOPL literals are complementary if one unifies with
the negation of other literal.
Example :
1.John likes all kind of food.
2.Apple and vegetable are food
3.Anything anyone eats and not killed is food.
4.Anil eats peanuts and still alive
5.Harry eats everything that Anil eats.

Prove by resolution that:

6.John likes peanuts.


Step-1: Conversion of Facts into FOL
1. ∀x: food(x) → likes(John, x)
2. food(Apple) ∧ food(vegetables)
3. ∀x ∀y: eats(x, y) ∧ ¬killed(x) → food(y)
4. eats(Anil, Peanuts) ∧ alive(Anil)
5. ∀x: eats(Anil, x) → eats(Harry, x)
6. ∀x: ¬killed(x) → alive(x) (added predicate)
7. ∀x: alive(x) → ¬killed(x) (added predicate)
8. likes(John, Peanuts) (conclusion to be proved)
Step-2: Conversion of FOL into CNF
• In first-order logic resolution, it is required to convert the FOL statements into CNF, as the CNF form makes resolution proofs easier.
• Eliminate the existential quantifier ∃ by Skolemization.
In this step, we eliminate the existential quantifier ∃; this process is known as Skolemization. In this example problem there is no existential quantifier, so all the statements remain the same in this step.
• Drop universal quantifiers.
In this step we drop all universal quantifiers, since all the statements are implicitly universally quantified and the quantifier symbols are no longer needed.
• ¬ food(x) V likes(John, x)
• food(Apple)
• food(vegetables)
• ¬ eats(y, z) V killed(y) V food(z)
• eats (Anil, Peanuts)
• alive(Anil)
• ¬ eats(Anil, w) V eats(Harry, w)
• killed(g) V alive(g)
• ¬ alive(k) V ¬ killed(k)
• likes(John, Peanuts)

• Distribute conjunction ∧ over disjunction ∨.
This step will not make any change in this problem.
Step-3: Negate the statement to be proved
• In this statement, we will apply negation to the conclusion statements, which will be written as
¬likes(John, Peanuts)
Step-4: Draw Resolution graph:
• Now in this step, we will solve the problem by resolution tree using substitution. For the above
problem, it will be given as follows:

Hence the negation of the conclusion leads to a complete contradiction with the given set of statements, so the conclusion "John likes peanuts" is proved.
Explanation of Resolution graph:
• In the first step of resolution graph, ¬likes(John, Peanuts) , and likes(John,
x) get resolved(canceled) by substitution of {Peanuts/x}, and we are left
with ¬ food(Peanuts)
• In the second step of the resolution graph, ¬ food(Peanuts) , and food(z) get
resolved (canceled) by substitution of { Peanuts/z}, and we are left with ¬
eats(y, Peanuts) V killed(y) .
• In the third step of the resolution graph, ¬ eats(y, Peanuts) and eats (Anil,
Peanuts) get resolved by substitution {Anil/y}, and we are left
with Killed(Anil) .
• In the fourth step of the resolution graph, Killed(Anil) and ¬ killed(k) get resolved by the substitution {Anil/k}, and we are left with ¬ alive(Anil).
• In the last step of the resolution graph ¬ alive(Anil) and alive(Anil) get
resolved.
Forward Chaining and backward chaining in AI
Inference engine : The inference engine is the component of the intelligent system
in artificial intelligence, which applies logical rules to the knowledge base to infer
new information from known facts. Inference engine commonly proceeds in two
modes, which are:
1.Forward chaining
2.Backward chaining
What is Forward Chaining?
• Forward chaining is a method of reasoning in artificial intelligence in which inference rules are
applied to existing data to extract additional data until an endpoint (goal) is achieved.
• In this type of chaining, the inference engine starts by evaluating existing facts, derivations, and
conditions before deducing new information. An endpoint (goal) is achieved through the
manipulation of knowledge that exists in the knowledge base.
Properties of forward chaining
• The process uses a down-up approach (bottom to top).
• It starts from an initial state and uses facts to make a conclusion.
• This approach is data-driven.
• It’s employed in expert systems and production rule systems.
Examples of forward chaining
• A simple example of forward chaining can be explained in the following
sequence.
•A
• A->B
•B
• A is the starting point. A->B represents a fact. This fact is used to achieve a
decision B.
• A practical example will go as follows;
• Tom is running (A)
• If a person is running, he will sweat (A->B)
• Therefore, Tom is sweating. (B)
Example
Forward Chaining in AI : Artificial Intelligence
• Forward chaining is the concept of moving from data to decision: from the available data, keep expanding until a decision is reached.
• In artificial intelligence, we have two different methods to use forward chaining.
1. Forward Chaining in Propositional Logic
2. Forward Chaining in Predicate Logic/(FOPL)
Example:Forward Chaining in Propositional Logic

• Let’s see an example:


1.If D barks and D eats bone, then D is a dog.
2.If V is cold and V is sweet, then V is ice-cream.
3.If D is a dog, then D is black.
4.If V is ice-cream, then it is Vanilla.
• Derive forward chaining using the given known facts to prove Tomy is
black.
• Tomy barks.
• Tomy eats bone.
Solution:
• Given Tomy barks.
• From (1), it is clear:
• If Tomy barks and Tomy eats bone, then Tomy is a dog.
• From (3), it is clear:
• If Tomy is a dog, then Tomy is black.
• Hence, it is proved that Tomy is black.
• Note: There is an advantage of forward chaining that is, we can draw new
inference from the given facts.
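A minimal data-driven sketch of the same derivation: rules are (premises, conclusion) pairs over ground atoms (the atom names below are made up for the Tomy example), and rules keep firing until no new fact can be added:

# Ground definite clauses for the Tomy example: (set of premises, conclusion).
rules = [
    ({"barks(Tomy)", "eats_bone(Tomy)"}, "dog(Tomy)"),     # rule (1) instantiated for Tomy
    ({"dog(Tomy)"}, "black(Tomy)"),                         # rule (3) instantiated for Tomy
]
facts = {"barks(Tomy)", "eats_bone(Tomy)"}                  # the known facts

changed = True
while changed:                     # keep firing rules until no new fact is added
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("black(Tomy)" in facts)      # True: Tomy is black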
Forward Chaining in Predicate Logic/ FOPL

• Forward Chaining in Predicate Logic is different from forward chaining in Propositional Logic. In FOPL, forward chaining is efficiently implemented on first-order definite clauses.
Example:
"As per the law, it is a crime for an American to sell weapons to hostile
nations. Country A, an enemy of America, has some missiles, and all the
missiles were sold to it by Robert, who is an American citizen."
• Prove that "Robert is criminal."
To solve the above problem, first, we will convert all the above facts into first-
order definite clauses, and then we will use a forward-chaining algorithm to
reach the goal.
Facts Conversion into FOL:
It is a crime for an American to sell weapons to hostile nations. (Let's say p, q, and r are variables)
American (p) ∧ weapon(q) ∧ sells (p, q, r) ∧ hostile(r) → Criminal(p) ...(1)
Country A has some missiles. It can be written in two definite clauses by using Existential
Instantiation, introducing new Constant T1.
Owns(A, T1) ......(2)
Missile(T1) .......(3)
All of the missiles were sold to country A by Robert.
∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
Missiles are weapons.
Missile(p) → Weapons (p) .......(5)
Enemy of America is known as hostile.
Enemy(p, America) →Hostile(p) ........(6)
Country A is an enemy of America.
Enemy (A, America) .........(7)
Robert is American
American(Robert). ..........(8)
Forward chaining proof:
Step-1:
In the first step we will start with the known facts and will choose the sentences
which do not have implications, such as: American(Robert), Enemy(A,
America), Owns(A, T1), and Missile(T1). All these facts will be represented
as below.
Step-2:
• At the second step, we will see those facts which infer from available facts and
with satisfied premises.
• Rule-(1) does not satisfy premises, so it will not be added in the first iteration.
• Rule-(2) and (3) are already added.
• Rule-(4) satisfy with the substitution {p/T1}, so Sells (Robert, T1, A) is added,
which infers from the conjunction of Rule (2) and (3).
• Rule-(6) is satisfied with the substitution {p/A}, so Hostile(A) is added, which is inferred from Rule-(7).
Step-3:
• At step-3, as we can check Rule-(1) is satisfied with the substitution {p/Robert,
q/T1, r/A}, so we can add Criminal(Robert) which infers all the available
facts. And hence we reached our goal statement.
• Hence it is proved that Robert is Criminal using forward chaining
approach.
Advantages
• It can be used to draw multiple conclusions.
• It provides a good basis for arriving at conclusions.
• It’s more flexible than backward chaining because it does not have a
limitation on the data derived from it.
Disadvantages
• The process of forward chaining may be time-consuming. It may take a lot of
time to eliminate and synchronize available data.
• Unlike backward chaining, the explanation of facts or observations for this type of chaining is not very clear; backward chaining uses a goal-driven method that arrives at conclusions efficiently.
What is Backward chaining
• Backward chaining is a concept in artificial intelligence that involves backtracking from the
endpoint or goal to steps that led to the endpoint. This type of chaining starts from the goal
and moves backward to comprehend the steps that were taken to attain this goal.
Properties of backward chaining
• The process uses an up-down approach (top to bottom).
• It’s a goal-driven method of reasoning.
• The endpoint (goal) is subdivided into sub-goals to prove the truth of facts.
• A backward chaining algorithm is employed in inference engines, game theories, and
complex database systems.
• The modus ponens inference rule is used as the basis for the backward chaining process.
This rule states that if both the conditional statement (p->q) and the antecedent (p) are true,
then we can infer the subsequent (q).
Example of backward chaining
Backward chaining can be explained in the following sequence.
•B
• A->B
•A
• B is the goal or endpoint, that is used as the starting point for backward tracking. A
is the initial state. A->B is a fact that must be asserted to arrive at the endpoint B.
• A practical example of backward chaining will go as follows:
• Tom is sweating (B).
• If a person is running, he will sweat (A->B).
• Tom is running (A).
Example of backward chaining

Goal state: Z
Facts (rules):
• F & B -> Z
• C & D -> F
• A -> D
Let's assume the initial database contains A, B, C, E.
Backward Chaining in AI: Artificial Intelligence

• Backward Chaining is an approach which works in the backward direction: it begins its journey from the goal and works back towards the known facts.
• Like, forward chaining, we have backward chaining for Propositional logic as
well as Predicate logic followed by their respective algorithms.
Backward Chaining in Propositional Logic

• In propositional logic, backward chaining begins from the goal and, using the given propositions, proves the asked goal.
• There is a backward chaining algorithm which is used to perform backward
chaining for the given axioms.
• The first algorithm given is known as the Davis-Putnam algorithm.
• It was proposed by Martin Davis and Hilary Putnam in 1960. In 1962,
Davis, Logemann and Loveland presented a new version of the Davis-
Putnam algorithm.
• It was named under the initials of all four authors as DPLL. The versioned
algorithm (DPLL) takes an input of a statement as CNF.
Example of Backward Chaining in Propositional
Logic
• Given that:
1.If D barks and D eats bone, then D is a dog.
2.If V is cold and V is sweet, then V is ice-cream.
3.If D is a dog, then D is black.
4.If V is ice-cream, then it is Vanilla.
• Derive backward chaining using the given known facts to prove
Tomy is black.
• Tomy barks.
• Tomy eats bone.
Solution:
1.On replacing D with Tomy in (3), it becomes:
• If Tomy is a dog, then Tomy is black.
• Thus, the goal is matched with the above axiom.
• Now, we have to prove Tomy is a dog. …(new goal)
• Replace D with Tomy in (1), it will become:
• If Tomy barks and Tomy eats bone, then Tomy is a dog. …(new goal)
• Again, the goal is achieved.
• Now, we have to prove that Tomy barks and Tomy eats bone. …(new goal)
• As we can see, the goal is a combination of two sentences which can be further divided as:
• Tomy barks.
• Tomy eats bone.
• From (1), it is clear that Tomy is a dog.
• Hence, Tomy is black.
• Note: Statements (2) and (4) are not used in proving the given goal. It is clear that the goal never matches the negated versions of the axioms; Modus Ponens is always used, rather than Modus Tollens.
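The same example can be run goal-driven: a short recursive sketch that proves a goal either because it is a known fact or because some rule concludes it and all of that rule's premises can be proved in turn (cycle detection is omitted for brevity, and the atom names match the forward-chaining sketch above):

# The same ground rules and facts as in the forward-chaining sketch.
rules = [
    ({"barks(Tomy)", "eats_bone(Tomy)"}, "dog(Tomy)"),
    ({"dog(Tomy)"}, "black(Tomy)"),
]
facts = {"barks(Tomy)", "eats_bone(Tomy)"}

def prove(goal, depth=0):
    print("  " * depth + "goal:", goal)
    if goal in facts:
        return True                                 # the goal is a known fact
    # Otherwise try every rule whose conclusion matches; its premises become sub-goals.
    return any(all(prove(p, depth + 1) for p in premises)
               for premises, conclusion in rules if conclusion == goal)

print(prove("black(Tomy)"))                          # True: Tomy is black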
Backward Chaining in FOPL

• In FOPL, backward chaining works backward from the goal, applying the rules to the known facts which could support the proof.
• Backward Chaining is a type of AND/OR search because we can prove the
goal by applying any rule in the knowledge base.
• A backward chaining algorithm is used to process the backward chaining.
Example:
"As per the law, it is a crime for an American to sell weapons to hostile
nations. Country A, an enemy of America, has some missiles, and all the
missiles were sold to it by Robert, who is an American citizen."
• Prove that "Robert is criminal."
To solve the above problem, first, we will convert all the above facts into first-order definite clauses, and then we will use a backward-chaining algorithm to reach the goal.
Facts Conversion into FOL:
It is a crime for an American to sell weapons to hostile nations. (Let's say p, q, and r are variables)
American (p) ∧ weapon(q) ∧ sells (p, q, r) ∧ hostile(r) → Criminal(p) ...(1)
Country A has some missiles. It can be written in two definite clauses by using Existential
Instantiation, introducing new Constant T1.
Owns(A, T1) ......(2)
Missile(T1) .......(3)
All of the missiles were sold to country A by Robert.
∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
Missiles are weapons.
Missile(p) → Weapons (p) .......(5)
Enemy of America is known as hostile.
Enemy(p, America) →Hostile(p) ........(6)
Country A is an enemy of America.
Enemy (A, America) .........(7)
Robert is American
American(Robert). ..........(8)
Backward-Chaining proof:

• In backward chaining, we will start with our goal predicate, which is Criminal(Robert), and then infer further rules.
• Step-1:
• At the first step, we will take the goal fact. And from the goal fact, we will
infer other facts, and at last, we will prove those facts true. So our goal fact is
"Robert is Criminal," so following is the predicate of it.
Step-2:
• At the second step, we will infer other facts from the goal fact which satisfy the
rules. So as we can see in Rule-1, the goal predicate Criminal (Robert) is
present with substitution {Robert/P}. So we will add all the conjunctive facts
below the first level and will replace p with Robert.
• Here we can see American (Robert) is a fact, so it is proved here.
Step-3:
At step-3, we will extract the further fact Missile(q), which is inferred from Weapon(q), as it satisfies Rule-(5). Weapon(q) is also true with the substitution of the constant T1 for q.
Step-4:
At step-4, we can infer the facts Missile(T1) and Owns(A, T1) from Sells(Robert, T1, r), which satisfies Rule-(4), with the substitution of A in place of r. So these two statements are proved here.
Step-5:
At step-5, we can infer the fact Enemy(A, America) from Hostile(A) which
satisfies Rule- 6. And hence all the statements are proved true using backward
chaining.
Advantages
• The result is already known, which makes it easy to deduce inferences.
• It’s a quicker method of reasoning than forward chaining because the endpoint
is available.
• In this type of chaining, correct solutions can be derived effectively if pre-
determined rules are met by the inference engine.
Disadvantages
• The process of reasoning can only start if the endpoint is known.
• It doesn’t deduce multiple solutions or answers.
• It only derives data that is needed, which makes it less flexible than forward
chaining.
Difference between backward chaining and forward chaining

1. Forward chaining starts from known facts and applies inference rules to extract more data until it reaches the goal. Backward chaining starts from the goal and works backward through inference rules to find the required facts that support the goal.
2. Forward chaining is a bottom-up approach. Backward chaining is a top-down approach.
3. Forward chaining is known as a data-driven inference technique, as we reach the goal using the available data. Backward chaining is known as a goal-driven technique, as we start from the goal and divide it into sub-goals to extract the facts.
4. Forward chaining reasoning applies a breadth-first search strategy. Backward chaining reasoning applies a depth-first search strategy.
5. Forward chaining tests all the available rules. Backward chaining tests only the few required rules.
6. Forward chaining is suitable for planning, monitoring, control, and interpretation applications. Backward chaining is suitable for diagnostic, prescription, and debugging applications.
7. Forward chaining can generate an infinite number of possible conclusions. Backward chaining generates a finite number of possible conclusions.
8. Forward chaining operates in the forward direction. Backward chaining operates in the backward direction.
9. Forward chaining is aimed at any conclusion. Backward chaining is aimed only at the required data.
Questions

One-mark questions:
• Write the commutative equivalence of AND and OR.
• How are quantifiers used in FOL?
• What is theorem proving?
• What is unification?
• How is an atomic sentence represented?
• Mention the disadvantage of Propositional Logic.
• When is a sentence called satisfiable?
• How are complex sentences represented?
• Represent the sentence "Brothers are siblings" in first order logic.
• What factors justify whether the reasoning is to be done by forward or backward reasoning?
• What is the need of incremental chaining?

Long questions:
1. Write and explain the forward chaining algorithm.
2. Explain proof by resolution in Propositional Logic.
3. Discuss the syntax and semantics of Propositional Logic.
4. Differentiate between forward and backward reasoning.
5. Express the following sentences in Proposition Logic:
a. "It is raining today, and street is wet."
b. "Ankit is a doctor, and his clinic is not in Mumbai."
c. Ritika is a doctor or Engineer.
d. If it is raining, then the street is wet.
e. I am alive. I am breathing.
6. Narrate the resolution process in First Order Logic.
7. Write and explain the backward chaining algorithm.
8. Explain assertion, queries, and domain in FOL.
9. Illustrate the use of first order logic to represent knowledge.
10. Explain the forward chaining process in detail with an example.
END OF UNIT 4
