
Module 3

Knowledge Representation
Knowledge-based agents, Agents based on Propositional Logic, First-order logic.
INSTRUCTIONAL OBJECTIVE
Knowledge-based agents
At the completion of this lecture, students will be able to:
• Understand the definition of knowledge-based agents
• Work through an example: the Wumpus World
KNOWLEDGE-BASED AGENTS
• An intelligent agent needs knowledge about the real world for taking decisions
and reasoning to act efficiently.
• Knowledge-based agents are those agents who have the capability
of maintaining an internal state of knowledge, reason over that knowledge,
update their knowledge after observations and take actions. These agents can
represent the world with some formal representation and act intelligently.
• Knowledge-based agents are composed of two main parts:
• Knowledge-base and
• Inference system.
A knowledge-based agent must be able to do the following:
• Represent states, actions, etc.
• Incorporate new percepts
• Update the internal representation of the world
• Deduce hidden properties of the world
• Deduce appropriate actions
KNOWLEDGE-BASED AGENTS
Knowledge base: The knowledge base is a central component of a knowledge-based agent; it is also known as the KB. It is a collection of sentences.
• These sentences are expressed in a language called a knowledge representation language.
• The knowledge base of a KBA stores facts about the world.
Inference system:
• Inference means deriving new sentences from old. The inference system allows us to add new sentences to the knowledge base.
• A sentence is a proposition about the world. The inference system applies logical rules to the KB to deduce new information.
• The inference system generates new facts so that the agent can update the KB.
KNOWLEDGE-BASED AGENTS
• The central component of a knowledge-based agent is its knowledge base, or KB.
• A knowledge base is a set of sentences.
• Each sentence is expressed in a language called a knowledge representation
language and represents some assertion about the world, and sometimes we
dignify a sentence with the name axiom, when the sentence is taken as given
without being derived from other sentences.
• There must be a way to add new sentences to the knowledge base and a way to
query what is known. The standard names for these operations are TELL and ASK,
respectively.
• Both operations may involve inference—that is, deriving new sentences from old.
• Inference must obey the requirement that when one ASKs a question of the
knowledge base, the answer should follow from what has been told (or TELLed) to
the knowledge base previously.
KNOWLEDGE-BASED AGENTS
• Figure 7.1 shows the outline of a knowledge-based agent program.
• Like agents, it takes a percept as input and returns an action. The agent maintains a
knowledge base, KB, which may initially contain some background knowledge. Each time
the agent program is called, it does three things:
First, it TELLS the knowledge base what it perceives.
Second, it ASKS the knowledge base what action it should perform. In the process of
answering this query, extensive reasoning may be done about the current state of the
world, about the outcomes of possible action sequences, and so on.
Third, the agent program TELLS the knowledge base which action was chosen, and the
agent executes the action.
KNOWLEDGE-BASED AGENTS
• The details of the representation language are hidden inside three functions that
implement the interface between the sensors and actuators on one side and the core
representation and reasoning system on the other.
• MAKE-PERCEPT-SENTENCE constructs a sentence asserting that the agent perceived the given percept at the given time.
• MAKE-ACTION-QUERY constructs a sentence that asks what action should be done at
the current time.
• Finally, MAKE-ACTION-SENTENCE constructs a sentence asserting that the chosen action
was executed.
• The details of the inference mechanisms are hidden inside TELL and ASK.
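To make the interface concrete, here is a minimal Python sketch of this agent program. The class and helper names are our own placeholders, not a standard library; the sentence representation and the inference inside tell() and ask() are left abstract, exactly as the text describes.

```python
# A minimal sketch of the generic knowledge-based agent of Figure 7.1.
class KnowledgeBase:
    def __init__(self, background=None):
        self.sentences = list(background or [])   # may hold background knowledge

    def tell(self, sentence):
        self.sentences.append(sentence)           # add a sentence to the KB

    def ask(self, query):
        raise NotImplementedError                 # inference mechanism hidden here

def make_percept_sentence(percept, t):
    return ("Percept", percept, t)     # "the agent perceived `percept` at time t"

def make_action_query(t):
    return ("ActionQuery", t)          # "what action should be done at time t?"

def make_action_sentence(action, t):
    return ("Action", action, t)       # "action `action` was executed at time t"

def kb_agent_program(kb):
    t = 0
    def program(percept):
        nonlocal t
        kb.tell(make_percept_sentence(percept, t))   # 1. TELL what it perceives
        action = kb.ask(make_action_query(t))        # 2. ASK what to do
        kb.tell(make_action_sentence(action, t))     # 3. TELL which action was chosen
        t += 1
        return action
    return program
```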
KNOWLEDGE-BASED AGENTS
• The agent in Figure 7.1 appears quite similar to the agents with internal state described earlier. Because of the definitions of TELL and ASK, however, the knowledge-based agent is not an arbitrary program for calculating actions.
• It is amenable to a description at the knowledge level, where we need specify only what the agent knows and what its goals are in order to fix its behavior.
• For example, an automated taxi might have the goal of taking a passenger from San Francisco to Marin County and might know that the Golden Gate Bridge is the only link between the two locations.
• Then we can expect it to cross the Golden Gate Bridge because it knows that that will achieve its goal.
• This analysis is independent of how the taxi works at the implementation level. It doesn't matter whether
its geographical knowledge is implemented as linked lists or pixel maps, or whether it reasons by
manipulating strings of symbols stored in registers or by propagating noisy signals in a network of
neurons.
• A knowledge-based agent can be built simply by TELLing it what it needs to know.
• Starting with an empty knowledge base, the agent designer can TELL sentences one by one until the agent
knows how to operate in its environment. This is called the declarative approach to system building. In
contrast, the procedural approach encodes desired behaviors directly as program code.
• A successful agent often combines both declarative and procedural elements in its design, and that
declarative knowledge can often be compiled into more efficient procedural code.
The WUMPUS World
• The wumpus world is a cave consisting of rooms connected by passageways. Lurking
somewhere in the cave is the terrible wumpus, a beast that eats anyone who enters its
room.
• The wumpus can be shot by an agent, but the agent has only one arrow. Some rooms
contain bottomless pits that will trap anyone who wanders into these rooms (except for
the wumpus, which is too big to fall in).
• The only mitigating feature of this bleak environment is the possibility of finding a heap
of gold. A sample wumpus world is shown in Figure 7.2.
The WUMPUS World
The definition of the task environment is given, by the PEAS description:
• Performance measure: +1000 for climbing out of the cave with the gold, – 1000 for
falling into a pit or being eaten by the wumpus, –1 for each action taken and –10 for
using up the arrow. The game ends either when the agent dies or when the agent climbs
out of the cave.
• Environment: A 4 x 4 grid of rooms. The agent always starts in the square labeled [1,1], facing to the right. The locations of the gold and the wumpus are chosen randomly, with a uniform distribution, from the squares other than the start square. In addition, each square other than the start can be a pit, with probability 0.2.
• Actuators: The agent can move Forward, TurnLeft by 90°, or TurnRight by 90°. The agent dies a miserable death if it enters a square containing a pit or a live wumpus. (It is safe, albeit smelly, to enter a square with a dead wumpus.) If an agent tries to move forward and bumps into a wall, then the agent does not move. The action Grab can be used to pick up the gold if it is in the same square as the agent. The action Shoot can be used to fire an arrow in a straight line in the direction the agent is facing. The arrow continues until it either hits (and hence kills) the wumpus or hits a wall. The agent has only one arrow, so only the first Shoot action has any effect. Finally, the action Climb can be used to climb out of the cave, but only from square [1,1].
The WUMPUS World
Sensors: The agent has five sensors, each of which gives a single bit of information:
– In the square containing the wumpus and in the directly (not diagonally) adjacent squares, the
agent will perceive a Stench.
– In the squares directly adjacent to a pit, the agent will perceive a Breeze.
– In the square where the gold is, the agent will perceive a Glitter.
– When an agent walks into a wall, it will perceive a Bump.
– When the Wumpus is killed, it emits a woeful Scream that can be perceived anywhere in the cave.
The percepts will be given to the agent program in the form of a list of five symbols; for example, if there is a stench and a breeze, but no glitter, bump, or scream, the agent program will get [Stench, Breeze, None, None, None].
The wumpus environment can be characterized along various dimensions as discrete, static, and single-agent.
It is sequential, because rewards may come only after many actions are taken.
It is partially observable, because some aspects of the state are not directly perceivable: the agent's location,
the wumpus's state of health, and the availability of an arrow. As for the locations of the pits and the
wumpus: we could treat them as unobserved parts of the state that happen to be immutable—in which case,
the transition model for the environment is completely known; or we could say that the transition model
itself is unknown because the agent doesn't know which Forward actions are fatal—in which case,
discovering the locations of pits and wumpus completes the agent's knowledge of the transition model.
The WUMPUS World
A knowledge-based wumpus agent uses an informal knowledge representation language consisting of writing down symbols in a grid (as in Figures 7.3 and 7.4).
The WUMPUS World
The agent's initial knowledge base contains the rules of the environment, and it knows that it is in [1,1] and that [1,1] is a safe square; we denote these with an "A" and "OK," respectively, in square [1,1].
The first percept is [None, None, None, None, None], from which the agent can conclude that its neighboring squares, [1,2] and [2,1], are free of dangers; they are OK. Figure 7.3(a) shows the agent's state of knowledge at this point. A cautious agent will move only into a square that it knows to be OK.
Let us suppose the agent decides to move forward to [2,1]. The agent perceives a breeze (denoted by "B") in [2,1], so there must be a pit in a neighboring square. The pit cannot be in [1,1], by the rules of the game, so there must be a pit in [2,2] or [3,1] or both. The notation "P?" in Figure 7.3(b) indicates a possible pit in those squares. At this point, there is only one known square that is OK and that has not yet been visited. So the prudent agent will turn around, go back to [1,1], and then proceed to [1,2].
The agent perceives a stench in [1,2], resulting in the state of knowledge shown in Figure 7.4(a). The stench in
[1,2] means that there must be a wumpus nearby. But the wumpus cannot be in [1,1], by the rules of the game,
and it cannot be in [2,2] (or the agent would have detected a stench when it was in [2,1]). Therefore, the agent
can infer that the Wumpus is in [1,3]. The notation W indicates this inference. Moreover, the lack of a breeze in
[1,2] implies that there is no pit in [2,2]. The agent has already inferred that there must be a pit in either [2,2]
or [3,1], so this means it must be in [3,1]. This is a fairly difficult inference, because it combines knowledge
gained at different times in different places and relies on the lack of a percept to make one crucial step.
The agent has now proved to itself that there is neither a pit nor a wumpus in [2,2], so it is OK to move there. We do not show the agent's state of knowledge at [2,2]; we just assume that the agent turns and moves to [2,3], giving us Figure 7.4(b). In [2,3], the agent detects a glitter, so it should grab the gold and then return home.
INSTRUCTIONAL OBJECTIVE
At the completion of this lecture, students will be able to understand:
• Logic
• Propositional Logic
Logic
The knowledge bases consist of sentences. These sentences are expressed according to the syntax of the representation language, which specifies all the sentences that are well formed.
In ordinary arithmetic: "x + y = 4" is a well-formed sentence, whereas "x4y+ =" is not.
A logic must also define the semantics or meaning of sentences. The semantics defines the truth of each sentence with respect to each possible world.
For example, the semantics for arithmetic specifies that the sentence "x + y = 4" is true in a world where x is 2 and y is 2, but false in a world where x is 1 and y is 1.
In standard logics, every sentence must be either true or false in each possible world—there is no "in
between."
More precisely, the term model can be used in place of "possible world." Models are mathematical abstractions, each of which simply fixes the truth or falsehood of every relevant sentence.
Informally, we may think of a possible world as, for example, having x men and y women sitting at a table playing bridge, and the sentence x + y = 4 is true when there are four people in total.
Formally, the models are just all possible assignments of real numbers to the variables x and y. Each such assignment fixes the truth of any sentence of arithmetic whose variables are x and y.
If a sentence α is true in model m, we say that m satisfies α, or sometimes that m is a model of α. We use the notation M(α) to mean the set of all models of α.
Logic and Logical Reasoning
Entailment means that one thing follows from another. Entailment or logical entailment
between sentences is that a sentence follows logically from another sentence.
In mathematical notation, "the sentence α entails the sentence β" is written
α ╞ β if and only if, in every model in which α is true, β is also true.
Using the notation M(·), we can write the semantics of entailment as:
α ╞ β iff M(α) ⊆ M(β)
Logical Reasoning – Wumpus World Problem
Consider the situation in Figure 7.3(b): the agent has detected nothing in [1,1] and a breeze in [2,1].
These percepts, combined with the agent's knowledge of the rules of the wumpus world, constitute the KB.
The agent is interested in whether the adjacent squares [1,2], [2,2], and [3,1] contain pits. Each of the three squares might or might not contain a pit, so there are 2³ = 8 possible models, shown in Figure 7.5.
The KB can be thought of as a set of sentences or as a single sentence that asserts all the individual sentences.
The KB is false in models in which [1,2] contains a pit, because there is no breeze in [1,1].
There are three models in which the KB is true, and these are shown surrounded by a solid line.
Consider two possible conclusions: 1: α1 = "There is no pit in [1,2]."
In every model in which KB is true, α1 is also true; hence KB ╞ α1.
Logical Reasoning – Wumpus World Problem
2: α2 = "There is no pit in [2,2] “
In some models, in which KB is true, α2 is false. Hence, the agent cannot conclude that there is no pit in [2,2].
Logical Reasoning – Wumpus World Problem
The inference algorithm illustrated in Figure 7.5 is called model checking, because it enumerates all possible models to check that α is true in all models in which KB is true, that is, M(KB) ⊆ M(α).
If an inference algorithm i can derive α from KB, we write KB ⊢ᵢ α, which is pronounced "α is derived from KB by i" or "i derives α from KB."
An inference algorithm that derives only entailed sentences is called sound or truth preserving.
An inference algorithm is complete if it can derive any sentence that is entailed.
A reasoning process is described whose conclusions are guaranteed to be true in any world in which the
premises are true;
in particular, if KB is true in the real world, then any sentence α derived from KB by a sound inference
procedure is also true in the real world. This correspondence between world and representation is illustrated in
Figure 7.6.
Logical representation
• Logical representation is a language with some concrete rules which deals with
propositions and has no ambiguity in representation.
• Logical representation means drawing a conclusion based on various conditions.
• It consists of precisely defined syntax and semantics which supports the sound inference.
Each sentence can be translated into logics using syntax and semantics.
• Syntax is the set of rules that decide how we can construct legal sentences in the logic.
• Semantics is the set of rules by which we can interpret the sentences of the logic.
• Semantics also involves assigning a meaning to each sentence.
Logical representation can be categorized into mainly two types:
• Propositional Logic
• Predicate logic
Propositional Logic - Syntax
The syntax of propositional logic defines the allowable sentences.
The atomic sentences consist of a single proposition symbol.
Each symbol stands for a proposition that can be true or false.
The symbols start with an uppercase letter and may contain other letters or subscripts; for example: P, Q, R, H71, North.
The names are arbitrary but are often chosen to have some mnemonic value; we use W1,3 to stand for the proposition that the wumpus is in [1,3].
There are two proposition symbols with fixed meanings:
True is the always-true proposition and False is the always-false proposition.
Complex sentences are constructed from simpler sentences, using parentheses and logical connectives.
Propositional Logic - Syntax
There are five connectives in common use:
¬ (not): A sentence such as ¬W1,3 is called the negation of W1,3. A literal is either an atomic sentence (a positive literal) or a negated atomic sentence (a negative literal).
∧ (and): A sentence whose main connective is ∧, such as W1,3 ∧ P3,1, is called a conjunction; its parts are the conjuncts. (The ∧ looks like an "A" for "And.")
∨ (or): A sentence using ∨, such as (W1,3 ∧ P3,1) ∨ W2,2, is a disjunction of the disjuncts (W1,3 ∧ P3,1) and W2,2.
⇒ (implies): A sentence such as (W1,3 ∧ P3,1) ⇒ W2,2 is called an implication (or conditional). Its premise or antecedent is (W1,3 ∧ P3,1), and its conclusion or consequent is W2,2. Implications are also known as rules or if-then statements.
⇔ (if and only if): The sentence W1,3 ⇔ W2,2 is a biconditional.
Propositional Logic - Syntax
Figure 7.7 gives a formal grammar of propositional logic.
A sentence with several operators can be parsed by the grammar in multiple ways.
To eliminate the ambiguity we define a precedence for each operator.
The "not" operator (¬) has the highest precedence; the full precedence order, from highest to lowest, is ¬, ∧, ∨, ⇒, ⇔.
Propositional Logic - Semantics
The semantics defines the rules for determining the truth of a sentence with respect to a
particular model.
In propositional logic, a model simply fixes the truth value true or false for every proposition
symbol.
For example, if the sentences in the knowledge base make use of the proposition symbols P1,2, P2,2, and P3,1, then one possible model is
m1 = {P1,2 = false, P2,2 = false, P3,1 = true}
With three proposition symbols, there are 2³ = 8 possible models.
𝑃1,2 is just a symbol; it means “There is a pit in [1,2]”.
The semantics for propositional logic must specify how to compute the truth value of any
sentence, given a model. This is done recursively.
All sentences are constructed from atomic sentences and the five connectives; therefore, we
need to specify how to compute the truth of atomic sentences and how to compute the
truth of sentences formed with each of the five connectives.
Propositional Logic - Semantics
Atomic sentences are easy:
• True is true in every model and False is false in every model.
• The truth value of every other proposition symbol must be specified directly in the model.
For example, in the model given earlier, P1,2 is false.
For complex sentences, there are five rules, which hold for any subsentences P and Q in any
model m.
• ¬P is true iff P is false in m.
• P ∧ Q is true iff both P and Q are true in m.
• P ∨ Q is true iff either P or Q is true in m.
• P ⇒ Q is true unless P is true and Q is false in m.
• P ⇔ Q is true iff P and Q are both true or both false in m.
The rules can also be expressed with truth tables that specify the truth value of a complex
sentence for each possible assignment of truth values to its components.
Propositional Logic - Semantics
Truth tables for the five connectives are given in Figure 7.8.

From these tables, the truth value of any sentence α can be computed with respect to any model m by a simple recursive evaluation.
For example, the sentence ¬P1,2 ∧ (P2,2 ∨ P3,1), evaluated in m1, gives true ∧ (false ∨ true) = true ∧ true = true.
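The recursive evaluation can be written directly in Python. The nested-tuple sentence representation below is our own convention for illustration:

```python
# Recursive truth evaluation of a propositional sentence in a model.
# Sentences are nested tuples: ('not', s), ('and', s1, s2), ('or', s1, s2),
# ('implies', s1, s2), ('iff', s1, s2); atoms are symbol strings.

def pl_true(sentence, model):
    if isinstance(sentence, str):                 # atomic sentence
        return model[sentence]
    op, *args = sentence
    if op == 'not':
        return not pl_true(args[0], model)
    if op == 'and':
        return pl_true(args[0], model) and pl_true(args[1], model)
    if op == 'or':
        return pl_true(args[0], model) or pl_true(args[1], model)
    if op == 'implies':
        return (not pl_true(args[0], model)) or pl_true(args[1], model)
    if op == 'iff':
        return pl_true(args[0], model) == pl_true(args[1], model)
    raise ValueError(f"unknown operator: {op}")

# The example from the text: ¬P1,2 ∧ (P2,2 ∨ P3,1) evaluated in m1.
m1 = {'P12': False, 'P22': False, 'P31': True}
s = ('and', ('not', 'P12'), ('or', 'P22', 'P31'))
print(pl_true(s, m1))   # True
```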
Propositional Logic - Semantics
Propositional logic does not require any relation of causation or relevance between P and
Q.
The sentence "5 is odd implies Tokyo is the capital of Japan" is a true sentence of
propositional logic even though it is a decidedly odd sentence of English.
Another point of confusion is that any implication is true whenever its antecedent is false.
For example, "5 is even implies Sam is smart" is true, regardless of whether Sam is smart.
"P = Q" as saying, "If P is true, then I am claiming that Q is true.
The only way for this sentence to be false is if P is true but Q is false.
The biconditional, P <=> Q, is true whenever both P and Q are true. In English, this is
often written as "P if and only if Q.
For example, a square is breezy if a neighboring square has a pit, and a square is breezy
only if a neighboring square has a pit. So we need a biconditional,
B1,1 <=> (P1,2 V P2,1) , where B1,1 means that there is a breeze in [1,1].
A Simple Knowledge base
Construct a knowledge base for the wumpus world, focusing first on the immutable aspects of the wumpus world and leaving the mutable aspects aside.
For now, we need the following symbols for each [x, y] location:
Px,y is true if there is a pit in [x, y].
Wx,y is true if there is a wumpus in [x, y], dead or alive.
Bx,y is true if the agent perceives a breeze in [x, y].
Sx,y is true if the agent perceives a stench in [x, y].
The sentences below are sufficient to derive ¬P1,2 (there is no pit in [1,2]). We label each sentence R:
• There is no pit in [1,1]:
R1: ¬P1,1
• A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square; for now, we include just the relevant squares:
R2: B1,1 ⇔ (P1,2 ∨ P2,1)
R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
• The preceding sentences are true in all wumpus worlds. Now we include the breeze percepts for the first two squares visited in the specific world the agent is in, leading up to the situation in Figure 7.3(b):
R4: ¬B1,1
R5: B2,1
A Simple Inference Procedure
The goal is to decide whether KB ╞ α for some sentence α.
For example, is ¬P1,2 entailed by the KB?
The first algorithm for inference is a model-checking approach that is a direct
implementation of the definition of entailment: enumerate the models, and check that α is
true in every model in which KB is true.
Models are assignments of true or false to every proposition symbol.
Consider the wumpus-world example, where the relevant proposition symbols are B1,1, B2,1, P1,1, P1,2, P2,1, P2,2, and P3,1.
With seven symbols, there are 2⁷ = 128 possible models; in three of these, KB is true (Figure 7.9).
In those three models, ¬P1,2 is true, hence there is no pit in [1,2]. On the other hand, P2,2 is true in two of the three models and false in one, so we cannot yet tell whether there is a pit in [2,2].
Figure 7.9 represents in a more precise form the reasoning illustrated in Figure 7.5.
A Simple Inference Procedure
KB is true when R1 through R5 are all true, i.e., KB = R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5; this occurs in three of the 128 rows.
A Simple Inference Procedure
A general algorithm for deciding entailment in propositional logic is shown in Figure 7.10.
The algorithm is sound because it implements directly the definition of entailment, and complete because it works for any KB and α and always terminates: there are only finitely many models to examine.
If KB and α contain n symbols in all, then there are 2ⁿ models. Thus, the time complexity of the algorithm is O(2ⁿ).
The space complexity is only O(n) because the enumeration is depth-first.
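A minimal Python version of this truth-table enumeration, reusing pl_true from the earlier sketch and encoding the wumpus sentences R1 through R5:

```python
from itertools import product

def tt_entails(kb, alpha, symbols):
    """Truth-table enumeration: KB |= alpha iff alpha is true in every
    model in which KB is true. O(2^n) time in the number of symbols."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if pl_true(kb, model) and not pl_true(alpha, model):
            return False
    return True

# Wumpus KB R1..R5 in the tuple syntax used above.
R1 = ('not', 'P11')
R2 = ('iff', 'B11', ('or', 'P12', 'P21'))
R3 = ('iff', 'B21', ('or', 'P11', ('or', 'P22', 'P31')))
R4 = ('not', 'B11')
R5 = 'B21'
KB = ('and', R1, ('and', R2, ('and', R3, ('and', R4, R5))))

syms = ['B11', 'B21', 'P11', 'P12', 'P21', 'P22', 'P31']
print(tt_entails(KB, ('not', 'P12'), syms))   # True: no pit in [1,2]
print(tt_entails(KB, ('not', 'P22'), syms))   # False: [2,2] is still uncertain
```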
Propositional Inference
Propositional Theorem Proving
Entailment can be decided by theorem proving: applying rules of inference directly to the sentences in the knowledge base to construct a proof of the desired sentence, without consulting models.
If the number of models is large but the length of the proof is short, then theorem proving can be more efficient than model checking.
Several concepts related to entailment are used in theorem proving. The first concept is
Logical equivalence: Two sentences α and β are logically equivalent if they have the same truth value in all possible models. This relationship is written α ≡ β.
In other words, two propositions are logically equivalent if they are always true or always false together.
For example, the propositions "P ∧ Q" and "Q ∧ P" are logically equivalent because they have the same truth value for all possible values of P and Q. Similarly, "¬(P ∨ Q)" and "(¬P) ∧ (¬Q)" are logically equivalent.
An alternative definition: two sentences α and β are equivalent if and only if each of them entails the other:
α ≡ β if and only if α ╞ β and β ╞ α
Propositional Theorem Proving
The second concept is validity.
A sentence is valid if it is true in all models, e.g., P V ¬ P is valid.
Valid sentences are also known as tautologies—they are necessarily true. Because the sentence True is true in
all models, every valid sentence is logically equivalent to True.
Validity is connected to inference via the Deduction Theorem:
For any sentences α and β, α╞ β if and only if α →β is valid.
The final concept is satisfiability. A sentence is satisfiable if it is true in, or satisfied by, some model.
For example, the knowledge base given earlier, (R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5), is satisfiable because there are three models in which it is true.
Satisfiability can be checked by enumerating the possible models until one is found that satisfies the sentence.
The problem of determining the satisfiability of sentences in propositional logic is the SAT problem, which was the first problem proved to be NP-complete.
Validity and satisfiability are connected: α is valid iff ¬α is unsatisfiable; contrapositively, α is satisfiable iff ¬α is not valid. We also have the following useful result:
α ╞ β if and only if the sentence (α ∧ ¬β) is unsatisfiable.
Checking the unsatisfiability of (α ∧ ¬β) corresponds exactly to the mathematical proof technique of reductio ad absurdum ("reduction to an absurd thing"). It is also called proof by refutation or proof by contradiction.
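Continuing the same sketch, validity and satisfiability can be checked by enumeration, illustrating that α is valid iff ¬α is unsatisfiable, and that KB ╞ α iff KB ∧ ¬α is unsatisfiable (this reuses pl_true, product, KB, and syms from the earlier blocks):

```python
def tt_satisfiable(sentence, symbols):
    """A sentence is satisfiable if it is true in some model."""
    return any(pl_true(sentence, dict(zip(symbols, vals)))
               for vals in product([True, False], repeat=len(symbols)))

def tt_valid(sentence, symbols):
    """alpha is valid iff not-alpha is unsatisfiable."""
    return not tt_satisfiable(('not', sentence), symbols)

print(tt_valid(('or', 'P', ('not', 'P')), ['P']))     # True: P ∨ ¬P is a tautology
# KB |= ¬P1,2  iff  KB ∧ P1,2 is unsatisfiable (proof by refutation):
print(not tt_satisfiable(('and', KB, 'P12'), syms))   # True
```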
Inference and Proofs
Inference rules are applied to derive a proof: a chain of conclusions that leads to the desired goal. The best-known rule is called Modus Ponens:
From α ⇒ β and α, infer β. (If α and α ⇒ β are both true, then β must be true.)
Ex: If it rains then I will carry an umbrella. It rains.
Inference: I will carry an umbrella.
If (WumpusAhead ∧ WumpusAlive) ⇒ Shoot and (WumpusAhead ∧ WumpusAlive) are given, then Shoot may be deduced.
And-Elimination is another inference rule, which states that any of the conjuncts can be inferred from a conjunction:
From α ∧ β, infer α; and from α ∧ β, infer β.
Ex: From (WumpusAhead ∧ WumpusAlive), WumpusAlive is inferred.
Modus Ponens and And-Elimination can be shown to be sound once and for all by evaluating the truth values of α and β. The rules may then be applied to any particular situation, yielding sound conclusions.
The equivalence for biconditional elimination produces two inference rules:
From α ⇔ β, infer (α ⇒ β) ∧ (β ⇒ α); and from (α ⇒ β) ∧ (β ⇒ α), infer α ⇔ β.
Inference and Proofs
Consider the wumpus environment, where these equivalences and inference rules may be applied.
We begin with the knowledge base containing R1 through R5 and derive ¬P1,2, i.e., that [1,2] does not contain a pit.
To generate R6, first apply biconditional elimination to R2 to obtain:
R6: (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
After that, we apply And-Elimination to R6 to get R7:
R7: (P1,2 ∨ P2,1) ⇒ B1,1
Logical equivalence for contrapositives yields R8:
R8: ¬B1,1 ⇒ ¬(P1,2 ∨ P2,1)
With R8 and the percept R4 (i.e., ¬B1,1), we can now apply Modus Ponens to get R9:
R9: ¬(P1,2 ∨ P2,1)
Finally, apply De Morgan's rule to arrive at the conclusion R10:
R10: ¬P1,2 ∧ ¬P2,1
That is to say, neither [1,2] nor [2,1] contains a pit.
Propositional Theorem Proving
Any of the search techniques may be used to produce a proof-like sequence of steps. All we have to do is define a proof problem:
• Initial state: the initial knowledge base.
• Actions: the set of actions consists of all the inference rules applied to all the sentences that match the top half of the inference rule.
• Result: the result of an action is to add the sentence in the bottom half of the inference rule.
• Goal: the goal is a state that contains the sentence we are trying to prove.
Thus, searching for proofs is an alternative to enumerating models.
Another property of logical systems is monotonicity, which says that the set of entailed
sentences can only increase as information is added to the knowledge base.
Monotonicity means that inference rules can be applied whenever suitable premises are
found in the knowledge base—the conclusion of the rule must follow regardless of what
else is in the knowledge base.
Proof by Resolution
Search algorithms such as iterative deepening search are complete because they reach
the goal, but if the available inference rules are inadequate, then the goal is not
reachable.
For example, if we removed the biconditional elimination rule, the preceding proof would not go through.
A single inference rule, called resolution is introduced, that yields a complete inference
algorithm when coupled with any complete search algorithm.
Consider the example of the wumpus world using the resolution rule.
Consider the steps leading up to Figure 7.4(a): the agent returns from [2,1] to [1,1] and then goes to [1,2], where it perceives a stench, but no breeze. We add the following facts to the knowledge base:
Proof by Resolution
R11: ¬B1,2
R12: S1,2
We can now infer the absence of pits in [2,2] and [1,3] (remember that [1,1] is already known to be pitless) using the same approach that led to R10 earlier:
R13: ¬P2,2
R14: ¬P1,3
To obtain the fact that there is a pit in [1,1], [2,2], or [3,1], apply biconditional elimination to R3, followed by Modus Ponens with R5, as follows:
R15: P1,1 ∨ P2,2 ∨ P3,1
The resolution rule is now applied for the first time: the literal ¬P2,2 in R13 resolves with the literal P2,2 in R15, yielding the resolvent
R16: P1,1 ∨ P3,1
In English: if a pit exists in one of [1,1], [2,2], or [3,1], and it is not in [2,2], then it is in [1,1] or [3,1]. Similarly, the literal ¬P1,1 in R1 resolves with the literal P1,1 in R16 to give
R17: P3,1
Proof by Resolution
In English: if a pit exists in either [1,1] or [3,1], and not in [1,1], then it is in [3,1]. These last two inference steps use the unit resolution inference rule:
from (l1 ∨ ... ∨ lk) and m, infer (l1 ∨ ... ∨ li−1 ∨ li+1 ∨ ... ∨ lk),
where each l is a literal and li and m are complementary literals (that is, one is the negation of the other).
Thus, the unit resolution rule creates a new clause from a clause (a disjunction of literals) and a literal. A single literal, known as a unit clause, can be viewed as a disjunction of one literal.
The unit resolution rule can be generalized to the full resolution rule:
from (l1 ∨ ... ∨ lk) and (m1 ∨ ... ∨ mn), infer
(l1 ∨ ... ∨ li−1 ∨ li+1 ∨ ... ∨ lk ∨ m1 ∨ ... ∨ mj−1 ∨ mj+1 ∨ ... ∨ mn),
where li and mj are complementary literals. This means that when two clauses are resolved, the new clause contains all of the literals of the two original clauses except the two complementary literals.
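Here is a small Python sketch of this resolution step, with clauses represented as sets of literals (our own representation; factoring, discussed next, falls out automatically because clauses are sets):

```python
def pl_resolve(ci, cj):
    """Return all resolvents of two clauses. A clause is a frozenset of
    literals; a literal is ('P', True) for P or ('P', False) for ¬P.
    Duplicate literals are dropped automatically by the set union."""
    resolvents = []
    for (sym, pos) in ci:
        if (sym, not pos) in cj:                   # complementary pair found
            resolvents.append(frozenset((ci - {(sym, pos)}) |
                                        (cj - {(sym, not pos)})))
    return resolvents

# R13 = ¬P2,2 resolved with R15 = P1,1 ∨ P2,2 ∨ P3,1 gives R16 = P1,1 ∨ P3,1:
R13 = frozenset({('P22', False)})
R15 = frozenset({('P11', True), ('P22', True), ('P31', True)})
print(pl_resolve(R13, R15))    # [frozenset({('P11', True), ('P31', True)})]
```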
Proof by Resolution
There is one additional technical feature of the resolution rule: each literal should appear only once in the resulting clause.
Factoring is the removal of multiple copies of a literal.
For example, resolving (A ∨ B) with (A ∨ ¬B), we obtain (A ∨ A), which is reduced to A.
The soundness of the resolution rule can be seen by considering the literal li that is complementary to the literal mj in the other clause.
If li is true, then mj is false, and hence (m1 ∨ ... ∨ mj−1 ∨ mj+1 ∨ ... ∨ mn) must be true, because (m1 ∨ ... ∨ mn) is given.
If li is false, then (l1 ∨ ... ∨ li−1 ∨ li+1 ∨ ... ∨ lk) must be true, because (l1 ∨ ... ∨ lk) is given.
Now li is either true or false, so one or the other of these conclusions holds, exactly as the resolution rule states.
Conjunctive Normal Form (CNF)
Every sentence of propositional logic is logically equivalent to a conjunction of clauses.
A sentence expressed as a conjunction of clauses is said to be in conjunctive normal form or CNF.
The procedure for converting to CNF is illustrated by converting the sentence B1,1 ⇔ (P1,2 ∨ P2,1) into CNF. The steps are as follows:
1. Eliminate ⇔, replacing α ⇔ β with (α ⇒ β) ∧ (β ⇒ α):
(B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
2. Eliminate ⇒, replacing α ⇒ β with ¬α ∨ β:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬(P1,2 ∨ P2,1) ∨ B1,1)
3. Move ¬ inwards using De Morgan's rules:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ ((¬P1,2 ∧ ¬P2,1) ∨ B1,1)
4. Distribute ∨ over ∧:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)
The original sentence is now in CNF, as a conjunction of three clauses.
A Resolution Algorithm
Inference procedures based on resolution work by using the principle of proof by contradiction:
to show that KB ╞ α, we show that (KB ∧ ¬α) is unsatisfiable. We do this by deriving a contradiction.
A resolution algorithm is shown in Figure 7.12.
A Resolution Algorithm
First, (KB ∧ ¬α) is converted into CNF.
Then, the resolution rule is applied to the resulting clauses. Each pair that contains
complementary literals is resolved to produce a new clause, which is added to the set if it is not
already present. The process continues until one of two things happens:
■ there are no new clauses that can be added, in which case KB does not entail α; or,
■ two clauses resolve to yield the empty clause, in which case KB entails α.
The empty clause—a disjunction of no disjuncts—is equivalent to False because a disjunction is
true only if at least one of its disjuncts is true.
Another way to see that an empty clause represents a contradiction is to observe that it arises only from resolving two complementary unit clauses such as P and ¬P.
We can apply the resolution procedure to a very simple inference in the wumpus world.
When the agent is in [1,1], there is no breeze, so there can be no pits in neighboring squares.
The relevant knowledge base is
KB = R2 ∧ R4 = (B1,1 ⇔ (P1,2 ∨ P2,1)) ∧ ¬B1,1, and we want to prove α = ¬P1,2.
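A compact Python sketch of this resolution procedure, reusing pl_resolve and the clause representation from the earlier block:

```python
from itertools import combinations

def pl_resolution(kb_clauses, negated_query_clauses):
    """Resolution refutation: KB |= alpha iff KB ∧ ¬alpha resolves to the
    empty clause."""
    clauses = set(kb_clauses) | set(negated_query_clauses)
    while True:
        new = set()
        for ci, cj in combinations(clauses, 2):
            for resolvent in pl_resolve(ci, cj):
                if not resolvent:          # empty clause: contradiction found
                    return True
                new.add(resolvent)
        if new <= clauses:                 # saturation: no new clauses
            return False
        clauses |= new

# KB = R2 ∧ R4 in CNF: (¬B11 ∨ P12 ∨ P21), (¬P12 ∨ B11), (¬P21 ∨ B11), ¬B11.
kb = [frozenset({('B11', False), ('P12', True), ('P21', True)}),
      frozenset({('P12', False), ('B11', True)}),
      frozenset({('P21', False), ('B11', True)}),
      frozenset({('B11', False)})]
# Query alpha = ¬P1,2, so we add its negation, the unit clause P1,2:
print(pl_resolution(kb, [frozenset({('P12', True)})]))    # True: KB |= ¬P1,2
```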
A Resolution Algorithm
When we convert (KB ∧ ¬α) into CNF, we obtain the clauses shown at the top of Figure 7.13.
The second row of the figure shows clauses obtained by resolving pairs in the first row.
Then, when P1,2 is resolved with ¬P1,2, we obtain the empty clause, shown as a small square.
Figure 7.13 reveals that many resolution steps are pointless.
For example, the clause B1,1 ∨ ¬B1,1 ∨ P1,2 is equivalent to True ∨ P1,2, which is equivalent to True.
Therefore, any clause in which two complementary literals appear can be discarded.
Horn Clauses and Definite Clauses
A restricted form is the definite clause, which is a disjunction of literals of which exactly one is positive.
For example, the clause (¬L1,1 ∨ ¬Breeze ∨ B1,1) is a definite clause, whereas (¬B1,1 ∨ P1,2 ∨ P2,1) is not.
A Horn clause is a disjunction of literals of which at most one is positive. So all definite clauses are Horn clauses, as are clauses with no positive literals; the latter are called goal clauses.
Horn clauses are closed under resolution: if you resolve two Horn clauses, you get back a Horn clause.
Knowledge bases containing only definite clauses are interesting for three reasons:
1. Every definite clause can be written as an implication whose premise is a conjunction of positive literals and whose conclusion is a single positive literal.
For example, the definite clause (¬L1,1 ∨ ¬Breeze ∨ B1,1) can be written as the implication (L1,1 ∧ Breeze) ⇒ B1,1. It says that if the agent is in [1,1] and there is a breeze, then [1,1] is breezy.
In Horn form, the premise is called the body and the conclusion is called the head. A sentence consisting of a single positive literal, such as L1,1, is called a fact. It can be written in implication form as True ⇒ L1,1.
2. Inference with Horn clauses can be done through the forward-chaining and backward-chaining algorithms.
3. Deciding entailment with Horn clauses can be done in time linear in the size of the knowledge base.
Definite clause examples: P ∨ ¬R ∨ ¬S (equivalently, R ∧ S ⇒ P) and the fact P.
Goal clause example: ¬R ∨ ¬S.
Forward and Backward Chaining
The forward-chaining algorithm PL-FC-ENTAILS?(KB, q) determines if a single proposition symbol q, the query, is entailed by a knowledge base of definite clauses.
It begins from known facts (positive literals) in the knowledge base; if all the premises of an implication are known, then its conclusion is added to the set of known facts.
For example, if L1,1 and Breeze are known and (L1,1 ∧ Breeze) ⇒ B1,1 is in the knowledge base, then B1,1 can be added. This process continues until the query q is added or until no further inferences can be made.
Figure 7.16(a) shows a simple knowledge base of Horn clauses with A and B as known
facts.
Figure 7.16(b) shows the same knowledge base drawn as an AND–OR graph. In AND–OR
graphs, multiple links joined by an arc indicate a conjunction—every link must be proved—
while multiple links without an arc indicate a disjunction—any link can be proved.
The known leaves (A and B) are set, and inference propagates up the graph as far as
possible.
Wherever a conjunction appears, the propagation waits until all the conjuncts are known
before proceeding.
Forward Chaining - Algorithm
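A minimal Python sketch of this forward-chaining procedure over definite clauses (our naming; the (premises, conclusion) clause representation is an assumption for illustration). The example encodes a Figure 7.16-style knowledge base: P⇒Q, L∧M⇒P, B∧L⇒M, A∧P⇒L, A∧B⇒L, with A and B as known facts.

```python
from collections import deque

def pl_fc_entails(clauses, facts, q):
    """Forward chaining for definite clauses. Each clause is a pair
    (premises, conclusion); `facts` are known positive literals."""
    count = {i: len(premises) for i, (premises, _) in enumerate(clauses)}
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (premises, conclusion) in enumerate(clauses):
            if p in premises:
                count[i] -= 1
                if count[i] == 0:          # all premises known: fire the rule
                    agenda.append(conclusion)
    return False

clauses = [(['P'], 'Q'), (['L', 'M'], 'P'), (['B', 'L'], 'M'),
           (['A', 'P'], 'L'), (['A', 'B'], 'L')]
print(pl_fc_entails(clauses, ['A', 'B'], 'Q'))    # True
```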
Forward chaining is sound: every inference is essentially an application of Modus Ponens.
Forward chaining is also complete: every entailed atomic sentence will be derived (after the algorithm reaches a fixed point where no new inferences are possible).
Forward chaining is an example of data-driven reasoning—that is, reasoning in which the
attention starts with the known data.
It can be used within an agent to derive conclusions from incoming percepts, often without
a specific query in mind.
For example, the wumpus agent might TELL its percepts to the knowledge base using an
incremental forward-chaining algorithm in which new facts can be added to the agenda to
initiate new inferences.
In humans, a certain amount of data-driven reasoning occurs as new information arrives.
Backward Chaining
The backward-chaining algorithm works backward from the query.
If the query q is known to be true, then no work is needed. Otherwise, the algorithm finds those implications in the knowledge base whose conclusion is q.
If all the premises of one of those implications can be proved true (by backward chaining),
then q is true.
When applied to the query q in Figure 7.16, it works back down the graph until it reaches a
set of known facts, A and B, that forms the basis for a proof.
Backward chaining is a form of goal-directed reasoning.
It is useful for answering specific questions such as "What shall I do now?" and "Where are
my keys?"
Often, the cost of backward chaining is much less than linear in the size of the knowledge
base, because the process touches only relevant facts.
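A matching backward-chaining sketch over the same (premises, conclusion) clause representation and example clauses as the forward-chaining block; the visited set is our guard against looping on cyclic rules.

```python
def pl_bc_entails(clauses, facts, q, visited=frozenset()):
    """Goal-directed backward chaining: prove q by proving all premises
    of some rule whose conclusion is q."""
    if q in facts:
        return True
    if q in visited:                       # avoid infinite regress on cycles
        return False
    for premises, conclusion in clauses:
        if conclusion == q and all(
                pl_bc_entails(clauses, facts, p, visited | {q})
                for p in premises):
            return True
    return False

print(pl_bc_entails(clauses, ['A', 'B'], 'Q'))    # True, touching only relevant rules
```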
Other useful propositional inference rules include:
Modus Tollens: from α ⇒ β and ¬β, infer ¬α.
Disjunctive Syllogism: from α ∨ β and ¬α, infer β.
Addition: from α, infer α ∨ β.
Problems to Practice in Propositional Logic
Predicate Logic (First Order Logic)
Propositional logic is a declarative language because its semantics is based on a truth
relation between sentences and possible worlds.
Propositional logic lacks the expressive power to describe an environment with many
objects.
Propositional logic assumes that there are facts that either hold or do not hold in the
world.
Each fact can be in one of two states: true or false, and each model assigns true or false
to each proposition symbol.
First-order logic assumes more; namely, that the world consists of objects with certain
relations among them that do or do not hold.
FOL is sufficiently expressive to represent the natural language statements in a concise
way.
First-order logic is also known as Predicate logic or First-order predicate logic.
It is a powerful language that represents information about objects in a natural way and can express the relationships between those objects.
Syntax and Semantics of First Order Logic
• First-order logic does not only assume that the world contains facts, as propositional logic does, but also assumes the following things in the world:
• Objects, which are things with individual identities
• Properties of objects that distinguish them from other objects
• Relations that hold among sets of objects
• Functions, which are a subset of relations where there is only one "value" for any given "input"
• The symbols of the language come in three kinds: constant symbols, which stand for objects; predicate symbols, which stand for relations; and function symbols, which stand for functions.
• Examples:
• Objects: Students, lectures, companies, cars ...
• Relations: Brother-of, bigger-than, outside, part-of, has-color, occurs-after, owns,
visits, precedes, ...
• Properties: blue, oval, even, large, ...
• Functions: father-of, best-friend, second-half, one-more-than ...
Syntax and Semantics of First Order Logic
"Squares neighboring the wumpus are smelly."
Objects: wumpus, squares; Property: smelly; Relation: neighboring.
"Evil King John ruled England in 1200."
Objects: John, England, 1200; Relation: ruled; Properties: evil, king.
Figure 8.2 shows a model with five objects:
Syntax and Semantics of First Order Logic
Richard the Lionheart, King of England from 1189 to 1199; his younger brother, the evil
King John, who ruled from 1199 to 1215; the left legs of Richard and John; and a crown.
The objects in the model may be related in various ways.
In the figure, Richard and John are brothers. Formally speaking, a relation is the set of tuples of objects that are related. (A tuple is a collection of objects arranged in a fixed order and is written with angle brackets surrounding the objects.) Thus, the brotherhood relation in this model is the set
{⟨Richard the Lionheart, King John⟩, ⟨King John, Richard the Lionheart⟩}
The crown is on King John's head, so the "on head" relation contains just one tuple,
⟨the crown, King John⟩
The "brother" and "on head" relations are binary relations; that is, they relate pairs of objects. The model also contains unary relations, or properties: the "person" property is true of both Richard and John; the "king" property is true only of John (presumably because Richard is dead at this point); and the "crown" property is true only of the crown.
Syntax and Semantics of First Order Logic
Certain kinds of relationships are best considered as functions, in which a given object must be related to exactly one object.
For example, each person has one left leg, so the model has a unary "left leg" function that includes the following mappings:
⟨Richard the Lionheart⟩ → Richard's left leg
⟨King John⟩ → John's left leg
Strictly speaking, models in first-order logic require total functions; that is, there must be a value for every input tuple.
Thus, the crown must have a left leg, and so must each of the left legs.
A model in first-order logic consists of a set of objects and an interpretation that maps
constant symbols to objects, predicate symbols to relations on those objects, and function
symbols to functions on those objects.
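To make this concrete, here is a tiny Python sketch of such a model in the spirit of Figure 8.2: a domain of objects plus an interpretation for constant, predicate, and function symbols. All names here are our own illustration.

```python
objects = {'Richard', 'John', 'RLeg', 'JLeg', 'TheCrownObj'}

constants = {'Richard': 'Richard', 'John': 'John', 'TheCrown': 'TheCrownObj'}
relations = {
    'Brother': {('Richard', 'John'), ('John', 'Richard')},
    'OnHead':  {('TheCrownObj', 'John')},
    'Person':  {('Richard',), ('John',)},
    'King':    {('John',)},
    'Crown':   {('TheCrownObj',)},
}
functions = {'LeftLeg': {'Richard': 'RLeg', 'John': 'JLeg'}}

def holds(predicate, *terms):
    """An atomic sentence is true iff the tuple of referents is in the relation."""
    referents = tuple(constants.get(t, t) for t in terms)
    return referents in relations[predicate]

print(holds('Brother', 'Richard', 'John'))   # True
print(holds('OnHead', 'TheCrown', 'John'))   # True
# A quantified query: ∃x Crown(x) ∧ OnHead(x, John), a disjunction over objects.
print(any(holds('Crown', x) and holds('OnHead', x, 'John') for x in objects))  # True
```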
Syntax and Semantics of First Order Logic
• Variable symbols
• E.g., x, y
• Connectives
• Same as in PL: not (¬), and (∧), or (∨), implies (⇒), if and only if (biconditional ⇔)
• Quantifiers
• Universal: ∀x
• Existential: ∃x
Atomic Sentences
An atomic sentence is formed from a predicate symbol optionally followed by a parenthesized list of terms: Predicate(term1, term2, ..., termn).
Example:
Ravi and Ajay are brothers: => Brothers(Ravi, Ajay)
Chinky is a cat: => cat (Chinky)
Richard the Lionheart is the brother of King John: => Brother(Richard, John)
An atomic sentence is true in a given model if the relation referred to by the predicate
symbol holds among the objects referred to by the arguments.
Complex Sentences
• Atomic sentences can have complex terms as arguments.
Richard the Lionheart's father is married to King John's mother:
Married(Father(Richard), Mother(John))
• Logical connectives are used to construct complex sentences, with the same syntax and
semantics.
The following four sentences are true in the model of Figure 8.2 under the intended interpretation:
¬Brother(LeftLeg(Richard), John)
Brother(Richard, John) ∧ Brother(John, Richard)
King(Richard) ∨ King(John)
¬King(Richard) → King(John)
Universal Quantifier (∀)
First-order logic contains two standard quantifiers, called universal and existential.
Rules such as "Squares neighboring the wumpus are smelly" and "All kings are persons“.
∀x,y((Square(x)∧Wumpus(y)∧Adjacent(x,y))→Smelly(x))
The second rule, "All kings are persons," is written in first-order logic as
∀x King(x) ⇒ Person(x)
∀ is usually pronounced "For all x, For each x, For every x". Thus, the sentence says, "For
all x, if x is a king, then x is a person." The symbol x is called a variable. Variables are
lowercase letters. A variable can also serve as the argument of a function—for example,
LeftLeg(x).
A term with no variables is called a ground term.
The sentence ∀x P, where P is any logical expression, says that P is true for every object x.
More precisely, ∀x P is true in a given model if P is true in all possible extended
interpretations constructed from the interpretation given in the model where each
extended interpretation specifies a domain element to which x refers.
Universal Quantifier (∀)
The extended interpretations in five ways are illustrated below:
Richard the Lionheart is a king ⇒ Richard the Lionheart is a person.
King John is a king ⇒ King John is a person.
Richard’s left leg is a king ⇒ Richard’s left leg is a person.
John’s left leg is a king ⇒ John’s left leg is a person.
The crown is a king ⇒ the crown is a person
We see that the implication is true whenever its premise is false—regardless of the truth
of the conclusion.

Example: All men drink coffee.
∀x man(x) → drink(x, coffee)
Existential Quantifier
∃x is pronounced "There exists an x such that ..." or "For some x ...".
Some boys are intelligent:
∃x: boys(x) ∧ intelligent(x)
For example, to say that John has a crown on his head, we write
∃x Crown(x) ∧ OnHead(x, John)
The sentence ∃ x P says that P is true for at least one object x. More precisely, ∃x P is true
in a given model if P is true in at least one extended interpretation that assigns x to a
domain element. That is, at least one of the following is true:
Richard the Lionheart is a crown ∧ Richard the Lionheart is on John’s head;
King John is a crown ∧ King John is on John’s head;
Richard’s left leg is a crown ∧ Richard’s left leg is on John’s head;
John’s left leg is a crown ∧ John’s left leg is on John’s head;
The crown is a crown ∧ the crown is on John’s head.
The fifth assertion is true in the model, so the original existentially quantified sentence is true in the model.
Nested Quantifiers
Complex sentences can be expressed using multiple quantifiers. The simplest case is where
the quantifiers are of the same type.
For example, Brothers are siblings can be written as ∀ x ∀ y Brother (x, y) ⇒ Sibling(x, y).
Consecutive quantifiers of the same type can be written as one quantifier with several
variables.
Ex. siblinghood is a symmetric relationship, we can write ∀ x,y Sibling(x,y) ⇔ Sibling(y, x) .
In other cases we will have mixtures.
Everybody loves somebody means that for every person, there is someone that person
loves: ∀x ∃y Loves(x, y).
On the other hand, There is someone who is loved by everyone, we write ∃y ∀x Loves(x, y).
The order of quantification is therefore very important. It becomes clearer if we insert parentheses: ∀x (∃y Loves(x, y)) says that everyone loves someone.
On the other hand, ∃y (∀x Loves(x, y)) says that there is someone who is loved by everybody.
Quantifiers
Some confusion can arise when two quantifiers are used with the same variable name.
Consider the sentence ∀x (Crown(x) ∨ (∃x Brother (Richard, x))).
Here the x in Brother (Richard, x) is existentially quantified.
The rule is that the variable belongs to the innermost quantifier that mentions it: it will
not be subject to any other quantification.
Another way to think of it is this: ∃x Brother (Richard, x) is a sentence about Richard (that
he has a brother), not about x; so putting a ∀x outside it has no effect.
It could equally well have been written ∃z Brother (Richard, z).
Because this can be a source of confusion, we will always use different variable names
with nested quantifiers.
Connections between ∀ and ∃
The two quantifiers are connected with each other, through negation.
Asserting that everyone dislikes parsnips is the same as asserting there does not exist
someone who likes them, and vice versa:
∀x ¬Likes(x,Parsnips ) is equivalent to ¬∃x Likes(x,Parsnips)
We can go one step further: Everyone likes ice cream means that there is no one who
does not like ice cream:
∀ x Likes(x,IceCream) is equivalent to ¬∃x ¬Likes(x,IceCream)
Because ∀ is really a conjunction over the universe of objects and ∃ is a disjunction, it is not surprising that they obey De Morgan's rules. The De Morgan rules for quantified and unquantified sentences are as follows:
∀x ¬P ≡ ¬∃x P ¬(P ∨ Q) ≡ ¬P ∧ ¬Q
¬∀x P ≡ ∃x ¬P ¬(P ∧ Q) ≡ ¬P ∨ ¬Q
∀x P ≡ ¬∃x ¬P P ∧ Q ≡ ¬(¬P ∨ ¬Q)
∃x P ≡ ¬∀x ¬P P ∨ Q ≡ ¬(¬P ∧ ¬Q)
Equality
First-order logic includes one more way to make atomic sentences, other than using a
predicate and terms.
We can use the Equality symbol to signify that two terms refer to the same object.
For example, Father (John) = Henry says that the object referred to by Father (John) and
the object referred to by Henry are the same.
The equality symbol can be used to state facts about a given function, as we just did for
the Father symbol.
It can also be used with negation to insist that two terms are not the same object.
To say that Richard has at least two brothers, we would write
∃x,y Brother(x, Richard) ∧ Brother(y, Richard) ∧ ¬(x = y)
Properties of Quantifiers
• The main connective for the universal quantifier ∀ is implication →
• The main connective for the existential quantifier ∃ is and ∧
• For universal quantifiers, ∀x∀y is the same as ∀y∀x
Ex: (∀x)(∀y)P(x,y) ↔ (∀y)(∀x)P(x,y)
• For existential quantifiers, ∃x∃y is the same as ∃y∃x
Ex: (∃x)(∃y)P(x,y) ↔ (∃y)(∃x)P(x,y)
• ∃x∀y is not the same as ∀y∃x
Properties of Quantifiers
• Universal quantifiers are often used with "implies" to form "rules":
(∀x) student(x) → smart(x) means "All students are smart"
• Universal quantification is rarely used to make blanket statements about every individual in the world:
(∀x) student(x) ∧ smart(x) means "Everyone in the world is a student and is smart"
• Existential quantifiers are usually used with "and" to specify a list of properties about an individual:
(∃x) student(x) ∧ smart(x) means "There is a student who is smart"
We can relate sentences involving ∀ and ∃ using De Morgan's laws:
(∀x) ¬P(x) ↔ ¬(∃x) P(x)
¬(∀x) P(x) ↔ (∃x) ¬P(x)
(∀x) P(x) ↔ ¬(∃x) ¬P(x)
(∃x) P(x) ↔ ¬(∀x) ¬P(x)
Quantified inference rules
• Universal instantiation
∀x P(x) ⊢ P(A)
• Universal generalization
P(A) ∧ P(B) ∧ ... ⊢ ∀x P(x)
• Existential instantiation
∃x P(x) ⊢ P(F), for a Skolem constant F
• Existential generalization
P(A) ⊢ ∃x P(x)
Universal Instantiation
(a.k.a. universal elimination)
• If (x) P(x) is true, then P(C) is true, where C is any constant in the
domain of x.
• Example:
(x) eats(Ziggy, x)  eats(Ziggy, IceCream)
• The variable symbol can be replaced by any ground term, i.e., any
constant symbol or function symbol applied to ground terms only.
Existential Instantiation
(a.k.a. existential elimination)
• From (x) P(x) infer P(C)
• Example:
(x) eats(Ziggy, x)  eats(Ziggy, Stuff)
• Note that the variable is replaced by a brand-new constant not
occurring in this or any other sentence in the KB.
• Also known as skolemization; constant is a skolem constant.
Existential Generalization
(a.k.a. existential introduction)
• If P(C) is true, then (x) P(x) is inferred.
• Example
eats(Ziggy, IceCream)  (x) eats(Ziggy, x)
• All instances of the given constant symbol are replaced by the new
variable symbol.
• Note that the variable symbol cannot already exist anywhere in the
expression.
Translating English to FOL
Every gardener likes the sun.
∀x gardener(x) → likes(x, Sun)
You can fool some of the people all of the time.
∃x ∀t person(x) ∧ time(t) → can-fool(x, t)
You can fool all of the people some of the time.
∀x ∃t (person(x) → time(t) ∧ can-fool(x, t))
∀x (person(x) → ∃t (time(t) ∧ can-fool(x, t)))    (equivalent)
All purple mushrooms are poisonous.
∀x (mushroom(x) ∧ purple(x)) → poisonous(x)
No purple mushroom is poisonous.
¬∃x purple(x) ∧ mushroom(x) ∧ poisonous(x)
∀x (mushroom(x) ∧ purple(x)) → ¬poisonous(x)    (equivalent)
There are exactly two purple mushrooms.
∃x ∃y mushroom(x) ∧ purple(x) ∧ mushroom(y) ∧ purple(y) ∧ ¬(x=y) ∧ ∀z (mushroom(z) ∧ purple(z)) → ((x=z) ∨ (y=z))
Clinton is not tall.
¬tall(Clinton)
X is above Y iff X is directly on top of Y or there is a pile of one or more other objects directly on top of one another starting with X and ending with Y.
∀x ∀y above(x,y) ↔ (on(x,y) ∨ ∃z (on(x,z) ∧ above(z,y)))
Practice
1. All birds fly.
2. Every man respects his parent.
3. Some boys play cricket.
4. Not all students like both Mathematics and Science.
5. Only one student failed in Mathematics.
Solution

1. ∀x bird(x) → fly(x)
2. ∀x man(x) → respects(x, parent)
3. ∃x boys(x) ∧ play(x, cricket)
4. ¬∀x [student(x) → like(x, Mathematics) ∧ like(x, Science)]
5. ∃x [student(x) ∧ failed(x, Mathematics) ∧ ∀y [¬(x=y) ∧ student(y) → ¬failed(y, Mathematics)]]
Free and Bound Variables
• The quantifiers interact with variables which appear in a suitable
way. There are two types of variables in First-order logic which are
given below:

• Free variable: A variable is said to be free in a formula if it occurs outside the scope of any quantifier binding it.
Example: ∀x ∃y [P(x, y, z)], where z is a free variable.
• Bound variable: A variable is said to be bound in a formula if it occurs within the scope of a quantifier binding it.
Example: ∀x ∃y [A(x) ∧ B(y)], where x and y are bound variables.

Exercise
1. John likes cricket or football.
2. John lives in a house and the color of the house is green.
3. If the car belongs to John then it is green.
4. John did not write Ramayana.
5. All elephants are gray.
6. Some courses in CSE are easy.
Solution
• likes(John, Cricket) ∨ likes(John, Football)
• lives(John, House) ∧ colour(House, Green)
• belongs(Car, John) → colour(Car, Green)
• ¬write(John, Ramayana)
• ∀x: elephant(x) → colour(x, Gray)
• ∃x: course(x, CSE) ∧ easy(x)
Exercise
• Venkatesh only likes easy courses.
• Science courses are not easy.
• All the courses in the arts department are easy.
• B.Sc is a science course.
Solution
• Venkatesh only likes easy courses
∀x: likes(Venkatesh, x) → easycourse(x)
• Science courses are not easy
∀x: sciencecourse(x) → ¬easycourse(x)
• All the courses in the arts department are easy
∀x: course(x) ∧ arts(x) → easy(x)
• B.Sc is a science course
sciencecourse(B.Sc)
Solution
• Mani only likes easy games
∀x: likes(Mani, x) → easygame(x)
• Boxing is hard
hard(Boxing)
• All the indoor games are easy
∀x: indoorgame(x) → easygame(x)
• Table Tennis is an indoor game
indoorgame(TableTennis)
Exercise
• Karate fighters are very strong.
• Mani is a karate fighter.
• Mani broke some body-part of every other karate fighter.
• Mani broke the leg of the karate fighter who broke the jaw of Raja.
• Raja is a boxer.
• Boxers are not as strong as karate fighters.
Solution
• Karate fighters are very strong
∀x: karatefighter(x) → verystrong(x)
• Mani is a karate fighter
karatefighter(Mani)
• Mani broke some body-part of every other karate fighter
∀y ∃x: karatefighter(y) ∧ ¬(y = Mani) → bodypart(x) ∧ broke(Mani, y, x)
• Mani broke the leg of the karate fighter who broke the jaw of Raja.
∀x: karatefighter(x) ∧ broke(x, Raja, Jaw) → broke(Mani, x, Leg)
• Raja is a boxer
boxer(Raja)
• Boxers are not as strong as karate fighters
∀x ∀y: boxer(x) ∧ karatefighter(y) → ¬strong_as(x, y)
Exercise
• Vishnu likes all kinds of fruits.
• Mani is a good student.
• All good students have high grades.
• All students with high grades are bright.
• Narayanan likes seeing English movies.
Exercise
1. Marcus was a man.
man(Marcus)
2. Marcus was a Pompeian.
Pompeian(Marcus)
3. All Pompeians were Romans.
∀x: Pompeian(x) → Roman(x)
4. Caesar was a ruler.
ruler(Caesar)
5. All Romans were either loyal to Caesar or hated him.
∀x: Roman(x) → loyalto(x, Caesar) ∨ hate(x, Caesar)
Exercise
6. Everyone is loyal to someone.
∀x ∃y: loyalto(x, y)
7. People only try to assassinate rulers they are not loyal to.
∀x ∀y: person(x) ∧ ruler(y) ∧ tryassassinate(x, y) → ¬loyalto(x, y)
8. Marcus tried to assassinate Caesar.
tryassassinate(Marcus, Caesar)
9. All men are people.
∀x: man(x) → person(x)
Exercise
1. Marcus was a man.
2. Marcus was a Pompeian.
3. All Pompeians were Romans
4. Caesar was a ruler
5. All Romans were either loyal to Caesar or hated him
6. Everyone is loyal to someone
7. People only try to assassinate rulers they are not loyal to
8. Marcus tried to assassinate Caesar
9. All men are people
Q: Was Marcus loyal to Caesar?
Exercise
Q: Was Marcus loyal to Caesar? Q: Did Marcus hate Caesar?
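One possible line of reasoning for these queries, sketched from facts 1-9 above (the original slides presumably showed this as a proof tree):
1. tryassassinate(Marcus, Caesar) (fact 8).
2. man(Marcus) with fact 9 gives person(Marcus) (facts 1, 9).
3. ruler(Caesar) (fact 4).
4. Steps 1-3 with fact 7 give ¬loyalto(Marcus, Caesar), so Marcus was not loyal to Caesar.
5. Pompeian(Marcus) with fact 3 gives Roman(Marcus) (facts 2, 3).
6. Step 5 with fact 5 gives loyalto(Marcus, Caesar) V hate(Marcus, Caesar).
7. Steps 4 and 6 together give hate(Marcus, Caesar), so Marcus hated Caesar.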
Exercise
• Marcus was a man
man(Marcus)
• Marcus was a Pompeian
Pompeian(Marcus)
• Marcus was born in 40 A.D
born(Marcus,40)
• All men are mortal
∀x : man(x) → mortal(x)
• All Pompeians died when the volcano erupted in 79 A.D
erupted(volcano, 79) Ʌ ∀x: [pompeian(x) → died(x, 79)]
Exercise
• No mortal lives longer than 150 years
∀x : ∀t1: ∀t2: mortal(x) Ʌ born(x,t1) Ʌ gt(t2-t1,150) → dead(x,t2)
• It is now 1991
now=1991
• Alive means not died
∀x : ∀t: [alive(x,t) → ¬dead(x,t)] Ʌ [¬ dead(x,t) → alive(x,t)]
• If someone dies, then he is dead at all later times.
∀x : ∀t1: ∀t2: died(x,t1) Ʌ gt(t2,t1) → dead(x,t2)
Exercise
1. Marcus was a man
2. Marcus was a Pompeian
3. Marcus was born in 40 A.D
4. All men are mortal
5. All Pompeians died when the volcano erupted in 79 A.D
6. No mortal lives longer than 150 years
7. It is now 1991
8. Alive means not died
9. If someone dies, then he is dead at all later times.
Q: Was Marcus dead?
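A sketch of the reasoning (the slides presumably showed this as a proof graph):
1. Pompeian(Marcus) with fact 5 gives died(Marcus, 79) (facts 2, 5).
2. Since it is now 1991 (fact 7) and gt(1991, 79), fact 9 gives dead(Marcus, 1991), so Marcus was dead.
Alternatively, born(Marcus, 40) with gt(1991 - 40, 150), mortal(Marcus) (from facts 1 and 4) and fact 6 also gives dead(Marcus, 1991).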
Exercise
• Mani only likes easy games.
• Boxing is hard.
• All the indoor games are easy.
• Table Tennis is an indoor game.
Q: Does Mani like table tennis?
Resolution
Resolution is a powerful technique that reduces the proof process to the repeated application of a single inference rule.
Resolution is also called proof by refutation: to prove a statement, resolution attempts to show that the negation of the statement produces a contradiction with the known statements.
Resolution requires that all statements first be converted to clause form, i.e. into conjunctive normal form (CNF).
Algorithm for Resolution
1. Convert the facts to first-order logic (FOL).
2. Convert the FOL statements into CNF.
3. Negate the statement to be proved and add it as a clause to the set of clauses in the knowledge base.
4. Select two clauses that contain complementary literals. Call these the parent clauses.
5. Resolve them together; the resulting clause is called the resolvent.
6. Draw the resolution graph.
7. If the resolvent is the empty clause, a contradiction has been found and the statement is proved.
Resolution in Propositional logic
Given axioms          CNF conversion
P                     P
(P Ʌ Q) → R           ¬P V ¬Q V R
(S V T) → Q           (¬S V Q) Ʌ (¬T V Q)
T                     T
Prove R.
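The refutation can be carried out mechanically. Below is a minimal propositional resolution sketch in Python (an illustration added for clarity, not part of the original notes): clauses are frozensets of string literals, '~' marks negation, and the negated goal ~R is added to the knowledge base.

def negate(lit):
    # "~P" <-> "P"
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    # Yield every resolvent of two clauses.
    for lit in c1:
        if negate(lit) in c2:
            yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

def refute(clauses):
    # Return True if the clause set is unsatisfiable (the goal is proved).
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a is not b:
                    for r in resolve(a, b):
                        if not r:          # empty clause: contradiction found
                            return True
                        new.add(r)
        if new <= clauses:                 # nothing new: goal not provable
            return False
        clauses |= new

kb = [
    frozenset({"P"}),              # P
    frozenset({"~P", "~Q", "R"}),  # (P Ʌ Q) → R
    frozenset({"~S", "Q"}),        # (S V T) → Q, first clause
    frozenset({"~T", "Q"}),        # (S V T) → Q, second clause
    frozenset({"T"}),              # T
    frozenset({"~R"}),             # negated goal
]
print(refute(kb))                  # True, so R follows from the axioms

Running this resolves T with ¬T V Q to get Q, then Q and P against ¬P V ¬Q V R to get R, which clashes with the negated goal ¬R and yields the empty clause.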
Algorithm – Proof by Refutation
1. Eliminate →, using the fact that a → b is equivalent to ¬a V b.
∀x: Pompeian(x) → Roman(x) becomes ∀x: ¬Pompeian(x) V Roman(x)
2. Reduce the scope of each ¬ to a single term, using ¬(¬p) = p, ¬(a Ʌ b) = ¬a V ¬b, ¬(a V b) = ¬a Ʌ ¬b,
¬∀x: P(x) = Ǝx: ¬P(x) and ¬Ǝx: P(x) = ∀x: ¬P(x)
3. Standardize the variables so that each quantifier binds a unique variable.
∀x: P(x) V ∀x: Q(x) is converted to ∀x: P(x) V ∀y: Q(y)
4. Move all quantifiers to the left of the formula without changing their relative order.
5. Replace each existential quantifier with a Skolem constant or function and eliminate the corresponding quantifier.
Ǝy: sum(y) can be transformed into sum(d), where d is a new constant
6. Drop the universal quantifiers.
7. Convert the statement into a conjunction of disjunctions, using (a Ʌ b) V c = (a V c) Ʌ (b V c).
8. Create a separate clause corresponding to each conjunct, using (∀x: P(x) Ʌ Q(x)) = ∀x: P(x) Ʌ ∀x: Q(x).
9. Standardize apart the variables in the set of clauses.
Proof by Refutation
1. Eliminate →.
2. Reduce the scope of each ¬ to a single term.
3. Standardize variables so that each quantifier binds a unique variable.
4. Move all quantifiers to the left without changing their relative order.
5. Eliminate Ǝ (Skolemization).
6. Drop ∀.
7. Convert the formula into a conjunction of disjuncts.
8. Create a separate clause corresponding to each conjunct.
9. Standardize apart the variables in the set of obtained clauses.
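For the propositional part of this conversion, the result can be sanity-checked with the SymPy library (a quick check only, assuming SymPy is installed; SymPy's to_cnf does not handle quantifiers or Skolemization):

# Quick CNF check with SymPy (propositional only).
from sympy import Implies
from sympy.abc import P, Q, R, S, T
from sympy.logic.boolalg import to_cnf

print(to_cnf(Implies(P & Q, R)))   # R | ~P | ~Q
print(to_cnf(Implies(S | T, Q)))   # (Q | ~S) & (Q | ~T)

These match the CNF conversions in the propositional resolution example above (SymPy may order the literals differently).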
Unification Algorithm
• Unification is a procedure that compares literals and discovers whether there exists a set of substitutions that makes them identical.
• To attempt to unify two literals, we first check whether their predicate symbols are the same. If not, the literals cannot be unified.
• If the predicate symbols match, we check the arguments one pair at a time. If the first pair unifies, we continue with the second, and so on.
• A variable can match another variable, any constant, or a function expression, provided the variable does not occur inside that expression (the occurs check).
• The substitution must be consistent across the whole literal.
Ex: hate(x, y)
hate(marcus, caesar) unify under (marcus/x, caesar/y)
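As an illustration (not part of the original notes), the procedure can be sketched in Python. Literals are nested tuples whose first element is the predicate symbol; by a convention used only in this sketch, short lowercase strings such as x or y are variables, and the occurs check is omitted for brevity.

def is_variable(t):
    # Convention for this sketch only: short lowercase names are variables.
    return isinstance(t, str) and t.islower() and len(t) <= 2

def unify(a, b, subst=None):
    # Return a substitution making literals a and b identical, or None.
    if subst is None:
        subst = {}
    if a == b:
        return subst
    if is_variable(a):
        return unify_var(a, b, subst)
    if is_variable(b):
        return unify_var(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):        # predicate symbol first, then arguments
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

def unify_var(var, term, subst):
    if var in subst:
        return unify(subst[var], term, subst)
    return {**subst, var: term}       # occurs check omitted for brevity

# hate(x, y) unifies with hate(Marcus, Caesar):
print(unify(("hate", "x", "y"), ("hate", "Marcus", "Caesar")))
# {'x': 'Marcus', 'y': 'Caesar'}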
Example
The slides walk a worked example through each stage: conversion of the facts to FOL; conversion of the FOL to CNF (move negation inward, rename/standardize variables, eliminate existential quantifiers by instantiation, drop universal quantifiers); negation of the statement to be proved, ¬likes(John, Peanuts); and construction of the resolution graph.
Forward Chaining
• It is a bottom-up approach, as it moves from known facts up to the goal.
• It is the process of drawing conclusions from known facts or data, starting from the initial state and working towards the goal state.
• The forward-chaining approach is also called data-driven, as we reach the goal using the available data.
• Forward chaining is commonly used in expert systems (such as CLIPS) and in business and production rule systems.
• Consider the following example for both approaches:
Example:
"As per the law, it is a crime for an American to sell weapons to hostile
nations. Country A, an enemy of America, has some missiles, and all the
missiles were sold to it by Robert, who is an American citizen."
• Prove that "Robert is criminal."
Facts Conversion into FOL
• It is a crime for an American to sell weapons to hostile nations. (p, q, r are variables)
American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)
• Country A has some missiles.
Ǝp: Owns(A, p) ∧ Missile(p). This can be written as two definite clauses by using Existential
Instantiation, introducing the new constant T1:
Owns(A, T1) ......(2)
Missile(T1) .......(3)
• All of the missiles were sold to country A by Robert.
∀p: Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
• Missiles are weapons.
∀p: Missile(p) → Weapon(p) .......(5)
• Enemy of America is known as hostile.
∀p: Enemy(p, America) → Hostile(p) ........(6)
• Country A is an enemy of America.
Enemy(A, America) .........(7)
• Robert is American.
American(Robert) ..........(8)
Proof – Forward Chaining
Step-1:
In the first step we start with the known facts and choose the sentences which do not have implications: American(Robert), Enemy(A, America), Owns(A, T1), and Missile(T1). These facts form the first level of the forward-chaining proof graph.
Proof – Forward Chaining
Step-2:
At the second step, we add the facts that can be inferred from the available facts whose premises are satisfied.
Rule (1) does not yet have its premises satisfied, so it is not used in this iteration.
Facts (2) and (3) are already added.
Rule (4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added; it is inferred from the conjunction of facts (2) and (3).
Rule (6) is satisfied with the substitution {p/A}, so Hostile(A) is added; it is inferred from fact (7).
Proof – Forward Chaining
Step-3:
• At step 3, Rule (1) is satisfied with the substitution {p/Robert, q/T1, r/A}, so Criminal(Robert) can be added; it follows from all the available facts, and hence we have reached our goal statement.
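The whole derivation can be reproduced with a short program. Below is a toy forward-chaining engine in Python (an illustrative sketch added here, not part of the original slides): facts are ground tuples, rule variables are strings beginning with '?', and the facts and rules (1)-(8) above are encoded directly.

facts = {
    ("American", "Robert"),      # (8)
    ("Enemy", "A", "America"),   # (7)
    ("Owns", "A", "T1"),         # (2)
    ("Missile", "T1"),           # (3)
}

rules = [
    ([("Missile", "?p")], ("Weapon", "?p")),                          # (5)
    ([("Missile", "?p"), ("Owns", "A", "?p")],
     ("Sells", "Robert", "?p", "A")),                                 # (4)
    ([("Enemy", "?p", "America")], ("Hostile", "?p")),                # (6)
    ([("American", "?p"), ("Weapon", "?q"),
      ("Sells", "?p", "?q", "?r"), ("Hostile", "?r")],
     ("Criminal", "?p")),                                             # (1)
]

def match(pattern, fact, subst):
    # Try to extend subst so that the pattern matches the ground fact.
    if len(pattern) != len(fact) or pattern[0] != fact[0]:
        return None
    subst = dict(subst)
    for p, f in zip(pattern[1:], fact[1:]):
        if p.startswith("?"):
            if subst.get(p, f) != f:
                return None
            subst[p] = f
        elif p != f:
            return None
    return subst

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                   # repeat until no new fact is added
        changed = False
        for premises, conclusion in rules:
            substs = [{}]            # all substitutions satisfying the premises
            for prem in premises:
                substs = [s2 for s in substs for f in facts
                          for s2 in [match(prem, f, s)] if s2 is not None]
            for s in substs:
                new = tuple(s.get(t, t) for t in conclusion)
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

print(("Criminal", "Robert") in forward_chain(facts, rules))   # True

The first pass adds Weapon(T1), Sells(Robert, T1, A) and Hostile(A), exactly as in Step-2, and the second pass adds Criminal(Robert), as in Step-3.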
Backward Chaining
A backward chaining algorithm is a form of reasoning, which starts with the
goal and works backward, chaining through rules to find known facts that
support the goal.
Properties of backward chaining:
• It is known as a top-down approach.
• Backward chaining is based on the modus ponens inference rule.
• In backward chaining, the goal is broken into sub-goals, which are proved true in turn.
• It is called a goal-driven approach, as the list of goals decides which rules are selected and used.
• The backward-chaining algorithm is used in game theory, automated theorem-proving tools, inference engines, proof assistants, and various AI applications.
• The backward-chaining method mostly uses a depth-first search strategy for proof.
Backward Chaining - Proof
• In backward chaining, we start with the goal predicate, which is Criminal(Robert), and then work backward through the rules.
Step-1:
• At the first step, we take the goal fact; from it we infer other facts, which we finally prove true. Our goal fact is "Robert is criminal", so its predicate form is Criminal(Robert).
Backward Chaining - Proof
Step-2:
• At the second step, we infer other facts from the goal fact that satisfy the rules. In Rule (1), the goal predicate Criminal(Robert) is present with the substitution {Robert/p}. So we add all the conjunctive premises below the first level and replace p with Robert.
Here we can see that American(Robert) is a known fact, so it is proved.
Backward Chaining - Proof
Step-3:
At step 3, we extract the further sub-goal Missile(q), from which Weapon(q) is inferred, as it satisfies Rule (5). Weapon(q) is then true with the substitution of the constant T1 for q.
Backward Chaining - Proof
Step-4:
• At step 4, we can infer the facts Missile(T1) and Owns(A, T1) from Sells(Robert, T1, r), which satisfies Rule (4) with the substitution of A in place of r. So these two statements are proved.
Backward Chaining - Proof
Step-5:
• At step 5, we can infer the fact Enemy(A, America) from Hostile(A), which satisfies Rule (6). Hence all the statements are proved true using backward chaining.
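For comparison, here is a minimal backward-chaining sketch over the same knowledge base as the forward-chaining example (again an illustrative addition, not a production inference engine). Variables begin with '?', rule variables are standardized apart with a counter, and unification is the flat-term special case; naive back-chaining like this can loop on recursive rules.

import itertools

facts = [
    ("American", "Robert"),
    ("Enemy", "A", "America"),
    ("Owns", "A", "T1"),
    ("Missile", "T1"),
]

rules = [
    ([("Missile", "?p")], ("Weapon", "?p")),
    ([("Missile", "?p"), ("Owns", "A", "?p")], ("Sells", "Robert", "?p", "A")),
    ([("Enemy", "?p", "America")], ("Hostile", "?p")),
    ([("American", "?p"), ("Weapon", "?q"),
      ("Sells", "?p", "?q", "?r"), ("Hostile", "?r")], ("Criminal", "?p")),
]

counter = itertools.count()

def is_var(t):
    return t.startswith("?")

def walk(t, s):
    # Follow variable bindings in substitution s.
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    # Unify two flat atoms (tuples of symbols) under substitution s.
    if len(a) != len(b):
        return None
    for x, y in zip(a, b):
        x, y = walk(x, s), walk(y, s)
        if x == y:
            continue
        if is_var(x):
            s = {**s, x: y}
        elif is_var(y):
            s = {**s, y: x}
        else:
            return None
    return s

def rename(atom, suffix):
    # Standardize apart: give the rule's variables fresh names.
    return tuple(t + suffix if is_var(t) else t for t in atom)

def prove(goal, s):
    # Yield every substitution under which the goal is provable.
    for fact in facts:                        # 1. try the known facts
        s2 = unify(goal, fact, s)
        if s2 is not None:
            yield s2
    for premises, conclusion in rules:        # 2. back-chain through the rules
        suffix = "_%d" % next(counter)
        s2 = unify(goal, rename(conclusion, suffix), s)
        if s2 is None:
            continue
        def prove_all(prems, s3):
            if not prems:
                yield s3
            else:
                for s4 in prove(rename(prems[0], suffix), s3):
                    yield from prove_all(prems[1:], s4)
        yield from prove_all(premises, s2)

print(any(True for _ in prove(("Criminal", "Robert"), {})))   # True

The search mirrors Steps 1-5 above: Criminal(Robert) matches Rule (1) with {Robert/p}, and the sub-goals American(Robert), Weapon(q), Sells(Robert, q, r) and Hostile(r) are proved depth-first against the facts and remaining rules.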
Resolution graph using FOL and the unification algorithm