Module4AI

Module 4 of the Artificial Intelligence course focuses on First Order Logic (FOL), covering its syntax, semantics, and application in knowledge representation. It discusses key components such as objects, predicates, quantifiers, and logical connectives, emphasizing the advantages of FOL over propositional logic. The module also explores the relationship between formal and natural languages, highlighting the importance of balancing expressiveness and unambiguous reasoning.


Artificial Intelligence

Module 4

Artificial Intelligence Module 4


BCS545B
Syllabus
• First Order Logic
• Inference in First Order Logic

– Artificial Intelligence: Stuart Russell, Peter Norvig


Chapter 8 (8.1, 8.2, 8.3, 8.4) and
Chapter 9 (9.1, 9.2, 9.3)

First Order Logic
Chapter 1

Topics Covered in this Chapter

Representation Revisited
Syntax and Semantics of First-Order Logic
Using First-Order Logic
Knowledge Engineering in First-Order Logic



Representation Revisited
• First-Order Logic (FOL) is an advanced formal system of logic that
builds on the strengths of propositional logic but introduces a
much richer language to describe complex statements. Unlike
propositional logic, which only handles simple true or false
propositions, FOL allows us to express and reason about
relationships between objects, their properties, and the existence
or non-existence of certain conditions.
• Key Components of First-Order Logic
– Objects: Objects represent the entities we’re talking about in a domain. These could
be specific people, places, numbers, or items, like "Alice," "Room 101," "3," or
"Wumpus" in a grid world.
– Predicates: Predicates represent properties of objects or relationships between them.
For example, "Loves(Alice, Bob)" could represent that Alice loves Bob, or "Pit(Room
101)" could represent that there is a pit in Room 101. Predicates are functions that
return true or false based on their inputs.
Representation Revisited
– Quantifiers: Quantifiers expand the expressive power of FOL, allowing statements
about collections of objects or specifying conditions that must be true for some or all
objects. The two main quantifiers are:
• Universal Quantifier ( ∀ ): Represents "for all" (e.g., "∀x, Pit(x) → Breezy(x)" means "All pits
make their surrounding squares breezy").
• Existential Quantifier ( ∃ ): Represents "there exists" (e.g., "∃x, Pit(x)" means "There exists at
least one pit").
– Logical Connectives: Like in propositional logic, FOL uses connectives to build
complex statements:
AND ( ∧ ), OR ( ∨ ), NOT ( ¬ ), IMPLIES ( → ), and IF AND ONLY IF ( ↔ ).
– Functions: Functions are like predicates but return objects rather than true/false
values. For example, "Parent(Mary)" could return "Alice" if Mary is Alice's parent.

The Language of Thought
Expressiveness of Natural Language: Natural languages are highly expressive, and we often use them to communicate complex ideas. The idea here is that if we could fully understand and formalize the rules of natural language, we might be able to use it in systems for reasoning and representation, potentially leveraging the vast amounts of existing human knowledge expressed in natural language.
Communication vs. Representation: The current perspective views natural language more as a tool for communication than as a straightforward system of representation. A single phrase in natural language often requires context to make sense. For example, saying “Look!” communicates an idea based on situational context (e.g., pointing to something interesting or alarming). This reliance on context complicates using natural language as a purely representational system because it demands that the system also understand contextual information.
Ambiguity in Language: Natural languages are filled with ambiguity, meaning that a word or phrase can have multiple interpretations depending on context. For instance, the word “spring” could refer to a season, a metal coil, or a source of water. This ambiguity is a challenge for using natural language as a precise knowledge representation system, as one needs to resolve these ambiguities for clear reasoning.

The Language of Thought
Sapir–Whorf Hypothesis: This hypothesis suggests that the language we speak shapes how we understand and categorize the world. It argues that speakers of different languages may experience the world differently due to linguistic differences. While this hypothesis has been debated, studies have shown some cognitive effects based on language, such as how speakers of different languages might perceive directions or describe objects differently.
Experiments and Cognitive Science: Various experiments support the idea that people don't retain exact sentences in memory but instead remember the core meaning or idea. This implies that the mind abstracts away from the exact wording, which suggests that there might be an underlying nonverbal representation of knowledge. Additionally, certain languages influence how people navigate, describe locations, and categorize objects, further supporting the idea that language influences cognitive processes.
Modern Imaging Research: Advances in imaging technologies like fMRI are beginning to show that there may be common patterns of knowledge representation across individuals, though this research is still in early stages. These findings open the door to better understanding how knowledge is represented in the human brain and whether it resembles logical forms, potentially bridging the gap between natural language and formal systems.
Combining the Best of Formal and Natural Languages
• Foundations of Formal Logic: Formal logic, especially propositional logic, offers
a context-independent and unambiguous way to represent knowledge. This
clarity is essential for building reliable systems that reason and make
inferences. However, propositional logic is limited in expressiveness—it doesn’t
capture the richness of relationships, functions, or general rules as natural
language does.
• Objects, Relations, and Functions: To increase expressiveness, first-order logic
introduces concepts similar to those found in natural language. It allows
reasoning about:
– Objects (e.g., people, numbers, places)
– Relations (e.g., "is a friend of," "is larger than")
– Functions (e.g., "the father of," "one plus two")
These elements enable first-order logic to capture more complex ideas, closer
to the richness of natural language, while still maintaining the structure and
rigor needed for logical reasoning.

Combining the Best of Formal and Natural Languages
• Expressing General Laws and Rules: Unlike propositional logic, which handles
individual facts, first-order logic can express universal statements (e.g., “All
squares neighboring the wumpus are smelly”). This allows for generalizations
and rules about categories of objects, which is a significant advantage when
creating systems that reason about the world in a structured way.
• Ontological Commitment: Different logical systems make different assumptions
about the world. This concept, known as ontological commitment, reflects what
a system believes exists in the world. For example:
– Propositional logic assumes the world consists of facts that are either true or false.
– First-order logic assumes a richer world of objects and relations among them.
– Temporal logic assumes that facts hold at specific times.
– Probability theory and fuzzy logic introduce degrees of belief or truth.



Combining the Best of Formal and Natural Languages
• Epistemological Commitment: The epistemological commitment of a system
refers to what an agent believes about the facts in the world. For example, in
propositional and first-order logic, an agent can consider a fact to be true, false,
or unknown. In contrast, probabilistic systems can represent varying degrees of
belief (e.g., 75% confidence).
• Balancing Formal Logic and Natural Language: The section concludes by
suggesting that a practical reasoning system should incorporate the strengths
of both formal and natural languages. By doing so, it can achieve a balance
between expressiveness (capturing complex ideas) and unambiguous reasoning
(ensuring precise interpretation and inference).

Syntax and Semantics of First-Order Logic
• Syntax defines the rules for forming valid sentences in FOL:
– Constants: Specific objects (e.g., John).
– Variables: Represent general objects (e.g., x, y).
– Predicates: Properties or relationships (e.g., Likes(John, IceCream)).
– Functions: Map objects to other objects (e.g., FatherOf(John)).
– Logical Connectives: ¬ (not), ∧ (and), ∨ (or), → (implies), ↔ (iff).
– Quantifiers:
• Universal (∀): Applies to all (e.g., ∀x Likes(x, IceCream)).
• Existential (∃): Applies to some (e.g., ∃x Likes(x, IceCream)).
• Semantics assigns meaning to sentences under an interpretation:
– Interpretation: Defines the domain of objects, assigns objects to constants, true/false relationships to predicates, and maps to functions.
– Truth Values:
• Atomic formulas: True if the relationship holds in the domain.
• Complex formulas: Combined using connectives and quantifiers.
– Quantifiers:
• ∀x φ(x): True if φ(x) holds for all x.
• ∃x φ(x): True if φ(x) holds for some x.
Example: ∀x (Student(x) → Likes(x, Math)) states that every student likes math.
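The truth conditions above can be checked mechanically over a small finite interpretation. A minimal Python sketch (the domain, predicate extensions, and the example sentence are all illustrative, not from the source):

```python
# Hypothetical finite interpretation: a domain of objects, a unary
# predicate as a set, and a binary predicate as a set of tuples.
domain = {"Alice", "Bob", "Carol"}
student = {"Alice", "Bob"}                      # Student(x) holds for these
likes = {("Alice", "Math"), ("Bob", "Math"), ("Carol", "Art")}

# Truth of the example sentence: for all x, Student(x) -> Likes(x, Math).
# An implication p -> q is (not p) or q.
claim = all((x not in student) or ((x, "Math") in likes) for x in domain)
print(claim)  # True: every student in this domain likes Math
```

Carol makes the implication vacuously true because she is not a student, which is exactly how ∀ with → behaves over objects outside the antecedent.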

Models for First-Order Logic
Models in first-order logic (FOL) provide formal structures that represent possible worlds. Each model maps the vocabulary of a logical language (such as predicates, constants, functions) to elements of the world, allowing us to determine the truth of logical sentences within that world.
• Key components of a FOL model:
– Domain (D): The set of objects (or domain elements) in a model. The domain is always nonempty—there
must be at least one object in every model.
• For example, in a model with five objects, we could have Richard the Lionheart, King John, their left legs, and a crown.
– Relations: Relations describe how objects are connected. These can be:
• Binary relations: A set of tuples representing connections between two objects. For example, the "brother" relation
between Richard and John could be represented as {(Richard, John), (John, Richard)}.
• Unary relations (properties): A property that applies to one object, such as the "person" property that applies to both
Richard and John, or the "king" property that applies only to John.
– Functions: A function assigns exactly one object to another. For example, a "left leg" function maps each
person to their left leg:
• Richard → Richard’s left leg
• John → John’s left leg
In FOL, total functions are required, meaning each input must have exactly one output. If an object like the crown doesn't have a left leg, a special "invisible" leg could be used to satisfy this requirement.
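A minimal sketch of such a model in Python, with relations as sets of tuples and LeftLeg as a total function implemented as a dict (the object names and the stand-in "invisible" leg are illustrative):

```python
# Domain of five objects, as in the slide's example model.
domain = {"Richard", "John", "RichardsLeftLeg", "JohnsLeftLeg", "Crown"}

brother = {("Richard", "John"), ("John", "Richard")}  # binary relation
person = {"Richard", "John"}                          # unary relation (property)
king = {"John"}

# Functions must be total: every domain element needs exactly one output,
# so objects without a real left leg map to a stand-in "invisible" leg.
left_leg = {
    "Richard": "RichardsLeftLeg",
    "John": "JohnsLeftLeg",
    "RichardsLeftLeg": "InvisibleLeg",
    "JohnsLeftLeg": "InvisibleLeg",
    "Crown": "InvisibleLeg",
}

print(("Richard", "John") in brother)  # True: the atom Brother(Richard, John) holds
print(left_leg["John"])                # JohnsLeftLeg
```

Totality is visible directly: every element of `domain` appears as a key of `left_leg`.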

Symbols and Representations
• In first-order logic (FOL), the syntax is made up of three main types of symbols:
– Constant Symbols: Represent specific objects in the domain (e.g., Richard, John).
– Predicate Symbols: Represent relations between objects (e.g., Brother, OnHead,
Person).
– Function Symbols: Represent functions mapping objects to other objects (e.g., LeftLeg).
• Interpretation
– A model in FOL includes an interpretation, which maps these symbols to specific
elements in the domain of the model. The interpretation ensures that we know what
each constant, predicate, and function symbol represents within the model. For
example, the interpretation might be:
• Constant Symbols: "Richard" refers to Richard the Lionheart, and "John" refers to King John.
• Predicate Symbols: "Brother" refers to the set of tuples { (Richard, John), (John, Richard) },
indicating that Richard and John are brothers.
• Function Symbols: "LeftLeg" refers to a function that maps Richard to his left leg and John to his
left leg.
– There can be many possible interpretations. For example, Richard and John could be
mapped to the same object, or different objects, depending on the model. Not all objects
need to have a name, and an object can have multiple names in different interpretations.

Terms
• A term in first-order logic refers to an object in the domain of the model. There
are two main types of terms:
– Constant Symbols: These are simple terms that directly refer to specific objects in the
domain (e.g., Richard, John).
– Function Symbols: These allow more complex terms that refer to objects derived from
other objects. A function symbol followed by a list of terms forms a complex term. For
example, LeftLeg(John) refers to the object representing King John's left leg.
• Complex Terms: A complex term consists of a function symbol followed by a list
of arguments, which are themselves terms. For example:
– LeftLeg(John) is a complex term where LeftLeg is the function symbol, and John is the
argument (a constant symbol).
• It’s important to note that a complex term is not a “subroutine call” like in
programming languages; it is simply a way to name an object. The term
LeftLeg(John) doesn’t call a subroutine to return a value; it simply refers to a
specific object (King John’s left leg).

Atomic Sentences
• An atomic sentence (or atom) is a basic logical statement that expresses a fact
about the domain of a model. It is formed by combining a predicate symbol with
a list of terms. The structure of an atomic sentence can be described as:
– Predicate Symbol: Represents a relation or property in the model.
– Terms: Represent objects in the domain, either as constant symbols or complex terms.
• Example of an Atomic Sentence:
– Brother(Richard, John) states that Richard the Lionheart is the brother of King John.
– Here:
• Brother is a predicate symbol representing the brotherhood relation.
• Richard and John are constant symbols referring to specific objects in the domain.
• Atomic Sentences with Complex Terms: Atomic sentences can also include
complex terms as arguments. For example:
– Married(Father(Richard), Mother(John)) states that Richard the Lionheart’s father is
married to King John’s mother.

Complex Sentences
• Complex sentences in first-order logic are constructed by
combining atomic sentences using logical connectives. These
connectives are the same as those used in propositional logic (such
as negation, conjunction, disjunction, and implication), but they
apply to sentences that may include variables, terms, and
predicates.
• Example of Complex Sentences:
– Negation: ¬Brother(LeftLeg(Richard), John) states that the LeftLeg of Richard is not
the brother of John.
– Conjunction: Brother(Richard, John) ∧ Brother(John, Richard) states that Richard is
the brother of John, and John is the brother of Richard (symmetric relationship).
– Disjunction: King(Richard) ∨ King(John) states that either Richard or John (or both) is
a king.
– Implication: ¬King(Richard) ⇒ King(John) states that if Richard is not a king, then
John is a king.

Quantifiers
• First-order logic allows us to make statements about entire collections of
objects using quantifiers. These are symbols that enable us to express
generalizations over objects without explicitly naming them. There are two
types of quantifiers:
– Universal Quantification (∀):
• The symbol ∀ means "for all." It allows us to make statements about all objects in the domain.
• Example: ∀x (King(x) ⇒ Person(x)) states that for every object x, if x is a king, then x must be a
person.
– Existential Quantification (∃):
• The symbol ∃ means "there exists" or "there is at least one." It allows us to make statements about
the existence of some object.
• Example: ∃x (Crown(x) ∧ OnHead(x, John)) states that there exists some object x, such that x is a
crown and x is on John's head.
– Nested Quantifiers: First-order logic allows nested quantifiers, where multiple
quantifiers are used in conjunction. These can be either of the same type or mixed.
• Same-type Quantifiers:
– "Brothers are siblings" can be written as: ∀ x ∀ y (Brother(x, y) ⇒ Sibling(x, y))
• Mixed Quantifiers:
• "Everybody loves somebody" means that for every person, there exists someone they love: ∀ x ∃ y
(Loves(x, y))
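Over a finite domain, quantified sentences can be evaluated by exhaustive checking. The sketch below (with made-up people and a made-up Loves relation) also shows why quantifier order matters in mixed nesting:

```python
# Illustrative finite domain and Loves relation.
people = {"Alice", "Bob", "Carol"}
loves = {("Alice", "Bob"), ("Bob", "Carol"), ("Carol", "Carol")}

# "Everybody loves somebody": forall x exists y Loves(x, y)
everybody_loves_somebody = all(
    any((x, y) in loves for y in people) for x in people)

# Swapping the quantifiers changes the meaning:
# "There is somebody whom everybody loves": exists y forall x Loves(x, y)
somebody_loved_by_all = any(
    all((x, y) in loves for x in people) for y in people)

print(everybody_loves_somebody)  # True: each person loves at least one person
print(somebody_loved_by_all)     # False: no single person is loved by all three
```

The two results differ on the same relation, which is the standard demonstration that ∀x ∃y and ∃y ∀x are not interchangeable.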

Equality
• Equality provides a way to express that two terms refer to the
same object, using the equality symbol (=). For example:
– Father(John) = Henry means that the object referred to by Father(John)
is the same as the object referred to by Henry.
• This is useful for establishing relationships between terms and can
also be used with negation to express that two terms do not refer
to the same object. For instance:
– ∃ x, y Brother(x, Richard) ∧ Brother(y, Richard) ∧ ¬(x = y) means that
Richard has at least two distinct brothers, ensuring x and y are not the
same object.
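The "at least two distinct brothers" sentence can be checked the same way over a finite set of facts. A small sketch (the brother facts are illustrative, not from the source):

```python
# Illustrative Brother facts: Brother(x, Richard) holds for these pairs.
brother = {("Geoffrey", "Richard"), ("John", "Richard")}

def has_two_brothers(person):
    """exists x, y: Brother(x, person) and Brother(y, person) and not (x = y)"""
    bros = {x for (x, p) in brother if p == person}
    # The inequality ¬(x = y) is what forces two *distinct* witnesses.
    return any(x != y for x in bros for y in bros)

print(has_two_brothers("Richard"))  # True: Geoffrey and John are distinct
print(has_two_brothers("John"))     # False: no Brother(x, John) facts at all
```

Without the `x != y` test, a single brother could serve as both witnesses, so the sentence would wrongly hold for anyone with one brother.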

Alternate Semantics
• A proposed alternative semantics in the context of
first-order logic is database semantics, which
relies on a few key assumptions:
– Unique-Names Assumption (UNA): Every constant
symbol refers to a distinct object. For example, John
and Henry represent different objects.
– Closed-World Assumption (CWA): If an atomic
sentence is not known to be true, it is considered false.
– Domain Closure: The domain only contains objects that
are explicitly named by constants in the language.
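The closed-world assumption can be illustrated in a few lines. A toy sketch over an illustrative fact set:

```python
# Database semantics sketch: the KB is a finite set of ground atoms.
kb = {("King", "John"), ("Person", "John"), ("Person", "Richard")}

def holds(pred, arg):
    # Closed-World Assumption: an atom not stored in the KB is false,
    # rather than unknown as in standard FOL semantics.
    return (pred, arg) in kb

print(holds("King", "John"))     # True: asserted
print(holds("King", "Richard"))  # False: not asserted, so assumed false
```

Under standard FOL semantics, `King(Richard)` would be unknown; under CWA its absence from the database is itself the answer.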

Using First-Order Logic

Assertions and Queries
The Kinship Domain
Numbers, Sets and Lists
The Wumpus World



Assertions and Queries in first-order logic
• Assertions (TELL): These are sentences added to a knowledge
base. For example:
– TELL(KB, King(John))
– TELL(KB, Person(Richard))
– TELL(KB, ∀x (King(x) ⇒ Person(x)))
• Queries (ASK): These are questions asked of the knowledge base, and answers
are derived based on the assertions. For example:
– ASK(KB, King(John)) // Returns true
– ASK(KB, Person(John)) // Also returns true
• Quantified Queries (ASKVARS): These allow queries with
variables, returning possible values that satisfy the query.
– ASKVARS(KB, Person(x)) // Returns {x/John, x/Richard}
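A minimal TELL/ASK sketch over ground facts, with the single rule King(x) ⇒ Person(x) hard-coded (a real system would run general inference; this is only an illustrative toy):

```python
# Toy knowledge base: ground facts as (predicate, argument) tuples.
kb = set()

def tell(fact):
    kb.add(fact)

def ask(fact):
    """True if the fact is stored or follows from King(x) => Person(x)."""
    if fact in kb:
        return True
    pred, arg = fact
    return pred == "Person" and ("King", arg) in kb

def askvars(pred):
    """All arguments x for which (pred, x) holds, including via the rule."""
    answers = {arg for (p, arg) in kb if p == pred}
    if pred == "Person":
        answers |= {arg for (p, arg) in kb if p == "King"}
    return answers

tell(("King", "John"))
tell(("Person", "Richard"))
print(ask(("Person", "John")))  # True, derived via the rule
print(askvars("Person"))        # both John and Richard
```

ASKVARS returns the substitution set {x/John, x/Richard} from the slide, here simply as the set of satisfying arguments.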

The Kinship Domain
• This domain involves relationships between people, such as
parenthood, marriage, and siblinghood. Some key points:
– Functions and Predicates: Represent relationships (e.g., Mother(c) = m ⇔
Female(m) ∧ Parent(m, c))
– Example Axioms:
• A mother is a female parent.
• A husband is a male spouse.
• Parent and child are inverse relations.
• A grandparent is a parent of one's parent.
– Theorems: Some sentences, like "siblinghood is symmetric," are derived logically from axioms.
– Axioms vs Theorems: Not all facts in a knowledge base are axioms. Some can be derived (theorems) based on the axioms.

Numbers, Sets and Lists
• Natural Numbers (Peano Axioms):
– NatNum(0): 0 is a natural number.
– ∀n NatNum(n) ⇒ NatNum(S(n)): If n is a natural number, then S(n) (the successor of
n) is also a natural number.
• Addition:
– ∀m NatNum(m) ⇒ +(0, m) = m: Adding 0 to a natural number returns the number
itself.
– ∀m, n NatNum(m) ∧ NatNum(n) ⇒ +(S(m), n) = S(+(m, n)): Addition is defined
recursively.
• Set Theory:
– Set(s): A set is defined either as the empty set or as a set formed by adding an
element to another set.
– Set Operations: Various axioms govern set membership (∈), subset (⊆), union (∪),
and intersection (∩).
• Lists:
– Lists are ordered and can have repeated elements. Functions like Cons, Append, and
First are used to represent lists.

The Wumpus World
• The Wumpus world is a classic problem in AI, where an agent
navigates a grid to avoid dangers and find treasure. The
knowledge representation includes:
– Percepts: These are the sensory inputs the agent receives, represented as
a list of sensory data (e.g., Percept([Stench, Breeze, Glitter, None, None],
5)).
– Actions: The agent can perform actions like Turn(Right), Move(Forward),
etc.
– Queries and Reasoning: The agent can ask the knowledge base about the
best actions to take based on its percepts and previous actions, using
queries like ASKVARS(∃ a BestAction(a, 5)).
– Inference Rules: The system uses rules like:
• ∀t, s, g, m, c Percept([s, Breeze, g, m, c], t) ⇒ Breeze(t): If the percept is "Breeze"
at time t, then the knowledge base can infer that Breeze(t) is true.

Knowledge Engineering in FOL
• The process of knowledge engineering in first-order logic involves creating
formal representations of knowledge in a specific domain, allowing for
reasoning and problem-solving. The approach involves several stages, which are
outlined below, using the example of an electronic circuit domain:
– Identify the Task: The first step is to define the scope of the knowledge base and the
types of questions it will answer. In the case of the electronic circuit example, questions
could involve determining whether a circuit performs correctly, finding the outputs
given certain inputs, and understanding the structure of the circuit.
– Assemble the Relevant Knowledge: At this stage, the knowledge engineer works to
collect information about the domain. This might involve extracting knowledge from
experts or understanding the domain from the perspective of the task at hand. For
digital circuits, this includes knowing the behavior of gates (AND, OR, XOR, NOT) and
how signals flow through wires and gates.
– Decide on a Vocabulary: Here, the knowledge engineer defines the concepts and their
relationships in logical terms. For digital circuits, predicates and functions like Gate(x),
Type(x), Signal(x), Connected(x, y) are used to represent gates, their types, signal
states, and connections. Constants like AND, OR, XOR, and NOT are used to describe
specific gate types.

Knowledge Engineering in FOL
– Encode General Knowledge of the Domain: This stage involves writing axioms that
define the relationships between the concepts in the vocabulary. For the circuit domain,
general knowledge might include axioms about how gates behave (e.g., an AND gate
outputs 1 only if all inputs are 1, while an XOR gate outputs 1 only when its inputs are
different).
– Example axioms for gates:
• AND Gate: If any input is 0, the output is 0.
• OR Gate: If any input is 1, the output is 1.
• XOR Gate: The output is 1 if inputs are different.
• NOT Gate: The output is the inverse of the input.
– Encode a Description of the Specific Problem Instance: Once the general domain knowledge is encoded, specific instances of problems are written using the defined vocabulary. For example, in the adder circuit instance, the specific gates and connections are defined, including their types (XOR, AND, OR) and how they are connected.
– Pose Queries to the Inference Procedure: After the knowledge base is constructed,
queries are posed to check the correctness of the system. For example, to check when
the sum and carry outputs of the circuit will be 0 and 1, respectively.

Knowledge Engineering in FOL
– Debug the Knowledge Base: Once the queries are tested, errors in the knowledge base might emerge. Missing axioms, incorrect facts, or logical inconsistencies could cause incorrect results. Debugging involves identifying and fixing these errors, ensuring the knowledge base reflects the true behavior of the system.

Inference in First-Order Logic
Chapter 2

Topics Covered in this Chapter
• Propositional Versus First Order Inference
• Unification and Lifting
• Forward Chaining

Propositional vs. First-Order Inference
• Inference rules for quantifiers
– Universal Instantiation (UI):
• This rule allows replacing variables in a universally quantified sentence (e.g., ∀x) with specific terms,
known as ground terms. This creates concrete, non-quantified sentences for reasoning.
• Example:
From ∀x (King(x) ∧ Greedy(x) ⇒ Evil(x)), we can infer:
King(John) ∧ Greedy(John) ⇒ Evil(John).
King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard).
– Existential Instantiation (EI):
• This rule replaces variables in an existentially quantified sentence (e.g., ∃x) with a unique constant called a
Skolem constant. This gives a name to the object without ambiguity.
• Example:
From ∃x (Crown(x) ∧ OnHead(x, John)), we infer:
Crown(C1) ∧ OnHead(C1, John).
Here, "C1" must be a new, unused constant in the knowledge base.
• Existential instantiation replaces an existentially quantified variable with a new constant called the Skolem
Constant. The new constant must not conflict with names of existing objects in the knowledge base.
• Application of Existential Instantiation:
– Once the existential quantifier is replaced with a Skolem constant, the original quantified sentence is no longer needed.
– Example: From ∃x (Kill(x, Victim)), infer Kill(Murderer, Victim). The quantified sentence is replaced by this specific
instance.
• Inferential Equivalence:
– The updated knowledge base (after existential instantiation) is not logically equivalent to the original but is inferentially
equivalent.
– Meaning: Both knowledge bases are satisfiable under the same conditions.
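Both instantiation rules amount to applying a substitution to a sentence. A sketch with an illustrative term encoding (tuples for compound expressions; fresh constants C1, C2, ... stand in for Skolem constants):

```python
import itertools

# Illustrative encoding of the UI example sentence:
# forall x (King(x) and Greedy(x)) => Evil(x)
rule = ("implies", ("and", ("King", "x"), ("Greedy", "x")), ("Evil", "x"))

def subst(term, theta):
    """SUBST: replace variables in a term according to substitution theta."""
    if isinstance(term, tuple):
        return tuple(subst(a, theta) for a in term)
    return theta.get(term, term)

# Universal Instantiation: substitute any ground term for x.
print(subst(rule, {"x": "John"}))
# ('implies', ('and', ('King', 'John'), ('Greedy', 'John')), ('Evil', 'John'))

# Existential Instantiation: substitute a *fresh* Skolem constant that
# does not yet appear anywhere in the knowledge base.
_counter = itertools.count(1)
def skolem_constant():
    return f"C{next(_counter)}"

crown = ("and", ("Crown", "x"), ("OnHead", "x", "John"))
print(subst(crown, {"x": skolem_constant()}))
# ('and', ('Crown', 'C1'), ('OnHead', 'C1', 'John'))
```

UI can be applied repeatedly with different ground terms; EI is applied once, and the counter is what guarantees the constant is new, as the slide requires.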

Propositional vs. First-Order Inference
• Reduction to Propositional Logic & Challenges
– Reduction to Propositional Logic:
• First-order inference can be reduced to propositional inference by replacing quantified
sentences with all possible ground terms from the knowledge base.
• Example Knowledge Base: ∀x (King(x) ∧ Greedy(x) ⇒ Evil(x)), King(John), Greedy(John).
• Propositionalized: King(John) ∧ Greedy(John) ⇒ Evil(John)
King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard).
– Challenges:
• Infinite Ground Terms: Function symbols, like Father(Father(John)), create an infinite
number of nested terms, making propositional reasoning impractical.
• Herbrand’s Theorem: If a sentence is entailed, a proof exists using a finite subset of ground
terms. These terms are generated iteratively: constants → terms of depth 1 → terms of depth
2.
– Semi-Decidability:
• First-order inference is semi-decidable:
– It can always confirm entailment if true.
– It cannot always confirm non-entailment, as it might loop infinitely without conclusion.

Unification and Lifting
• Unification and lifting address inefficiencies in propositionalizing first-order
logic for inference tasks. Propositionalization involves converting all possible
variable substitutions into ground sentences, which is computationally
expensive and often unnecessary.
• The process of unification is fundamental to this approach, as it identifies
substitutions (unifiers) that make logical expressions identical. For instance,
when querying "Who does John know?" unification allows matching knowledge
base sentences such as "John knows Jane" or "Everyone knows someone." This
avoids redundancy and focuses computation on relevant substitutions.
• By introducing lifting, inference mechanisms operate on abstract, variable-
inclusive logic, bypassing the need for exhaustive ground sentence
enumeration. This method preserves efficiency and scales better for complex
logical systems. Together, unification and lifting represent a significant
advancement in automating logical reasoning within AI systems.

First Order Inference rule
• Inference with Substitution: Generalized Modus Ponens uses a substitution θ to align premises from the knowledge base with an implication rule. For example, given ∀y Greedy(y), the substitution θ = {x/John, y/John} ensures that Greedy(x) and Greedy(y) align, enabling the inference Evil(John).
• Formal Rule: It combines n premises with a single implication to derive a conclusion, applying substitution θ. This allows the premises and the implication's antecedents to match, making the rule effective for first-order logic.
• Soundness: Generalized Modus Ponens is sound because universal instantiation ensures any substitution θ preserves logical equivalence. This validates that SUBST(θ, q) logically follows.
• Lifting: By generalizing Modus Ponens to first-order logic, lifting avoids exhaustive enumeration of ground instances (propositionalization), improving efficiency in reasoning systems.
Unification
• Unification Process: Unification simplifies logical expressions by
applying substitutions to align two terms. For instance, unifying
Knows(John,x) with Knows(John,Jane) leads to {x/Jane}, making the two
expressions identical.
• Handling Conflicts with Standardizing Apart: When variable names
overlap (e.g., x in both sentences), renaming one sentence's variables
avoids unification failure. This ensures logical equivalence remains
intact.
– UNIFY(Knows(John, x), Knows(x,Elizabeth)) = fail
• Most General Unifier (MGU): The most general unifier imposes the
fewest constraints, allowing maximum flexibility for further inferences.
In the example Knows(John,x) and Knows(y,z), the MGU is {y/John,x/z},
avoiding unnecessary restrictions.
• Occur Check and Efficiency: The occur check prevents inconsistencies, such as S(x) unifying with S(S(x)). However, it adds computational cost, making it a tradeoff between soundness and performance.
Unification Algorithm

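A compact sketch of such a unification procedure: lowercase strings are variables, uppercase strings are constants, tuples are compound expressions, and the occur check is included. This is an illustrative reconstruction, not the textbook's exact pseudocode; standardizing apart is left to the caller.

```python
def is_var(t):
    # Convention for this sketch: variables are lowercase strings.
    return isinstance(t, str) and t[:1].islower()

def walk(t, s):
    # Follow variable bindings in substitution s until a non-bound term.
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    # Occur check: does variable v appear inside term t under s?
    t = walk(t, s)
    if v == t:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, s) for a in t)

def unify(x, y, s=None):
    """Return a most general unifier of x and y extending s, or None."""
    if s is None:
        s = {}
    x, y = walk(x, s), walk(y, s)
    if x == y:
        return s
    if is_var(x):
        return None if occurs(x, y, s) else {**s, x: y}
    if is_var(y):
        return unify(y, x, s)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

print(unify(("Knows", "John", "x"), ("Knows", "John", "Jane")))   # {'x': 'Jane'}
print(unify(("Knows", "John", "x"), ("Knows", "x", "Elizabeth"))) # None (fail)
print(unify(("Knows", "John", "x"), ("Knows", "y", "z")))         # MGU {'y': 'John', 'x': 'z'}
print(unify(("S", "x"), ("S", ("S", "x"))))  # None: occur check rejects x = S(x)
```

The second call reproduces the slide's standardizing-apart failure (x cannot be both John and Elizabeth), and the third returns the most general unifier rather than an over-constrained one like {y/John, x/John, z/John}.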
Storage and Retrieval
• Underlying the TELL and ASK functions used to inform and interrogate a knowledge base are the more primitive STORE and FETCH functions. STORE(s) stores a sentence s into the knowledge base, and FETCH(q) returns all unifiers such that the query q unifies with some sentence in the knowledge base.
• The simplest way to implement STORE and FETCH is to keep all the facts in one
long list and unify each query against every element of the list. Such a process
is inefficient, but it works.
• To make the process of FETCH more efficient during unification in AI systems,
certain strategies can be employed to avoid unnecessary attempts at unifying
sentences that have no chance of matching.
• A subsumption lattice is a structure that organizes sentences or queries based
on their logical specificity and generality. The lattice represents all possible
queries that can unify with a given fact or sentence, with nodes connected by
subsumption relationships.

Forward Chaining
• What is Forward Chaining?
– Forward chaining is a data-driven reasoning approach where inference begins with
known facts and applies inference rules to derive new facts until a goal or conclusion
is reached.
– Used in rule-based systems and expert systems.
– It starts with given data and moves toward a goal.
• First-order definite clauses are similar to propositional definite clauses but
extend to include variables.
• A first-order definite clause:
– Is atomic or an implication.
– The antecedent is a conjunction of positive literals.
– The consequent is a single positive literal.
• Variables are universally quantified by default in first-order literals.
• Example:
– Atomic clause: King(John).
– Implication: King(x) ∧ Greedy(x) ⇒ Evil(x).

Forward Chaining Algorithm
Forward Chaining Problem
• The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is an American. An enemy of America counts as "hostile."
• Prove: "West is Criminal."
– ∀x ∀y ∀z: American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) ⇒ Criminal(x)
– ∀x: Missile(x) ∧ Owns(Nono, x) ⇒ Sells(West, x, Nono)
– ∀x: Enemy(x, America) ⇒ Hostile(x)
– ∀x: Missile(x) ⇒ Weapon(x)
• American(West)
• Enemy(Nono, America)
• Owns(Nono, M1)
• Missile(M1)
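A sketch of forward chaining on this knowledge base, with rules encoded as (premises, conclusion) pairs and lowercase strings as variables (the encoding is illustrative; a real system would use full unification and an agenda):

```python
# Ground facts as tuples: (predicate, arg1, arg2, ...).
facts = {("American", "West"), ("Enemy", "Nono", "America"),
         ("Owns", "Nono", "M1"), ("Missile", "M1")}

rules = [
    ([("American", "x"), ("Weapon", "y"), ("Sells", "x", "y", "z"),
      ("Hostile", "z")], ("Criminal", "x")),
    ([("Missile", "x"), ("Owns", "Nono", "x")], ("Sells", "West", "x", "Nono")),
    ([("Enemy", "x", "America")], ("Hostile", "x")),
    ([("Missile", "x")], ("Weapon", "x")),
]

def is_var(t):
    return t[:1].islower()

def match(premises, theta):
    """Yield every substitution that satisfies all premises against facts."""
    if not premises:
        yield theta
        return
    first, rest = premises[0], premises[1:]
    for fact in facts:
        if fact[0] != first[0] or len(fact) != len(first):
            continue
        t = dict(theta)
        if all(t.setdefault(a, b) == b if is_var(a) else a == b
               for a, b in zip(first[1:], fact[1:])):
            yield from match(rest, t)

# Repeat until no rule adds a new fact (a fixed point is reached).
derived = True
while derived:
    derived = False
    new_facts = set()
    for premises, concl in rules:
        for theta in match(premises, {}):
            new = (concl[0],) + tuple(theta.get(a, a) for a in concl[1:])
            if new not in facts:
                new_facts.add(new)
    if new_facts:
        facts |= new_facts
        derived = True

print(("Criminal", "West") in facts)  # True
```

The first round derives Weapon(M1), Hostile(Nono), and Sells(West, M1, Nono); the second round fires the crime rule to derive Criminal(West), matching the hand proof the slide asks for.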

Effective Forward Chaining
• Efficient forward chaining aims to improve the traditional forward-
chaining algorithm by addressing its inefficiencies in rule
matching, redundant computations, and irrelevant fact generation.
These improvements make it suitable for practical applications
such as expert systems and deductive databases.
• Challenges in Standard Forward Chaining
– Pattern Matching Complexity: Matching rule premises with facts in the knowledge
base can be computationally expensive. Optimizing the order of conjuncts using
heuristics like the Minimum-Remaining-Values (MRV) heuristic minimizes this cost.
• Example: If there are fewer missiles than objects owned by Nono, processing missiles first
reduces search space.
– Redundant Rule Rechecking: The naive algorithm rechecks all rules in every
iteration, even when only a few new facts are added. This creates unnecessary
overhead in scenarios with large knowledge bases.
– Generation of Irrelevant Facts: Forward chaining infers all possible conclusions,
including those unrelated to the current goal. This wastes computational resources
and slows down inference in large systems.

Effective Forward Chaining
• Strategies for Efficiency
– Incremental Forward Chaining: Rules are re-evaluated only when a newly
added fact can impact their premises. This prevents redundant rechecking
and ensures the algorithm focuses on relevant facts. Partial matches of rule
premises are retained and updated incrementally, rather than being rebuilt
for every new fact.
– Rete Algorithm: A specialized dataflow network preprocesses rules to store
partial matches, avoiding redundant computations. Variables and bindings
are filtered dynamically through nodes corresponding to rule
literals. Applications: Production systems like XCON, which configure
computer components, use the Rete algorithm for scalable reasoning.
– Magic Sets in Deductive Databases: Forward chaining is restricted to
relevant subsets of facts. This hybrid approach combines forward inference
with backward preprocessing to optimize performance.
