
Natural Language Processing

By
Dr. L. Lakshmi
Course Code: DS311
Module-4
Module-4 Contents

• Representing Meaning
• Meaning Structure of Language
• First Order Predicate Calculus
• Representing Linguistically Relevant Concepts
• Syntax Driven Semantic Analysis
• Semantic Attachments
• Syntax Driven Analyzer
• Robust Analysis
• Lexemes and Their Senses
• Internal Structure
• Word Sense Disambiguation
• Information Retrieval.
Representing Meaning
• The text introduces an approach to semantics that revolves around the idea
that the meaning of linguistic expressions can be represented using formal
structures, referred to as meaning representations.
• These representations serve as a way to capture the meaning of language
systematically.
• To achieve this, specialized frameworks are employed to define both the
syntax (the structure) and the semantics (the meaning) of these
representations.
• These frameworks are known as meaning representation languages.
• The text also draws an analogy between meaning representation languages
and the representations used in earlier discussions for phonological,
morphological, and syntactic analysis.
• Just as those representations are used to capture sound, word formation,
and sentence structure, meaning representation languages are used to
model the meaning of linguistic expressions.
Representing Meaning
• The text explains that meaning representations are necessary because raw
linguistic inputs and the structures derived from them are not sufficient for
the kind of semantic processing required in various tasks.
• In particular, meaning representations are needed to connect linguistic
inputs to the non-linguistic knowledge of the world required to understand
and interpret language meaningfully.
• The text highlights that many real-world tasks involve this type of semantic
processing, and it provides examples of such tasks:
• Answering essay questions on an exam: This requires understanding the
meaning of questions and formulating responses.
• Deciding what to order at a restaurant by reading a menu: Interpreting
menu items involves understanding language and associating it with real-
world knowledge about food.
• Learning to use a new piece of software by reading the manual: This
involves comprehending instructional text and applying it to interact with
software.
Representing Meaning
• Realizing that you’ve been insulted: Recognizing the meaning and intention
behind the words used.
• Following recipes: Understanding the instructions and converting linguistic
inputs into actions in the real world.
• These examples illustrate that meaning representations bridge the gap
between language and the knowledge needed for practical, everyday tasks
involving language comprehension.
• The text emphasizes that phonological, morphological, and syntactic
representations alone are insufficient for successfully completing tasks that
require deeper language understanding.
• To accomplish such tasks, one needs meaning representations that connect
linguistic elements to non-linguistic world knowledge.
• For instance, in order to complete the tasks mentioned earlier (like
answering exam questions or following recipes), one must have access to
both the language and the relevant real-world knowledge.
• The text suggests that world knowledge plays a crucial role in making
meaning out of linguistic inputs and is necessary for completing the tasks.
Representing Meaning
• Some examples of the necessary world knowledge for specific tasks are
provided:
• Answering and grading essay questions: This requires understanding the
topic, the expected knowledge level of the students, and how essay
questions are typically answered. It involves applying general knowledge
about the subject and evaluation criteria.
• Reading a menu and deciding what to order, giving dining advice, following
or creating recipes: These tasks require knowledge about food, its
preparation, people's preferences, and the typical experience in
restaurants. This helps in making informed decisions about meals.
• Learning to use software by reading a manual or giving related advice: One
must have knowledge of computers, the specific software, similar
applications, and general user experiences. Understanding the technical
context and user behavior is critical for interpreting the manual and using
the software effectively.
• These examples highlight that real-world, non-linguistic knowledge is vital
for connecting language to meaningful action and decision-making in
everyday tasks.
Representing Meaning
• The text describes a representational approach to semantics, suggesting that
linguistic expressions have associated meaning representations that align with
the everyday commonsense knowledge of the world.
• The creation and assignment of these representations to linguistic inputs is
referred to as semantic analysis.
• To illustrate these concepts, the text refers to a figure (Fig. 17.1) that
presents sample meaning representations for the sentence "I have a car" using
four different meaning representation languages.
Representing Meaning
• First-Order Logic: This representation uses logical symbols and is designed
to express facts and relationships clearly.
• Semantic Network: This graphical representation connects concepts and
their relationships, providing a visual way to represent meaning.
• Conceptual Dependency Diagram: This type of representation emphasizes
the relationships between actions and entities.
• Frame-Based Representation: This organizes knowledge into structured
frameworks that capture typical situations and their associated roles.
• Overall, the text sets the stage for exploring how different representation
languages can effectively capture the meanings of linguistic expressions in
relation to common knowledge.
Representing Meaning
• Common Foundation: Despite differences, all approaches to meaning
representation share a core idea: structures are built from a set of symbols
or representational vocabulary.
• Symbol Structures: These structures correspond to objects, their
properties, and relationships among them, representing a specific state of
affairs.
• Shared Example: In the example "I have a car," all approaches use symbols
representing the speaker, the car, and the possession relationship between
the two.
• Dual Perspective:
• The representations can be viewed in two ways:
• As representations of the meaning of a specific linguistic input ("I have a
car").
• As representations of a state of affairs in the world (ownership of a car).
• Linking to the World: This dual perspective connects linguistic inputs to
real-world knowledge, allowing for meaningful interpretation.
Computational Desiderata for Representations
• "Desiderata" is a plural noun that means "things wanted or needed". It comes
from the Latin word desideratum.
• Let us see the importance of meaning representations and their role in natural
language processing.
• To illustrate this, the task of providing restaurant advice to tourists is used as a case
study. In this scenario:
• A computer system is designed to handle spoken language queries from tourists.
• The system generates appropriate responses by utilizing a knowledge base of relevant
domain knowledge (e.g., information about restaurants).
• The text introduces examples to demonstrate the basic requirements that meaning
representations must meet, which include:
• Interpreting the meaning of user requests accurately.
• Connecting the linguistic input to relevant real-world knowledge (in this case,
restaurant information).
• Additionally, the examples highlight the complications that arise when designing
meaning representations, such as:
• Ensuring that the system can handle various types of queries.
• Dealing with the complexity of human language and context.
• The text emphasizes that meaning representations play a crucial role in the system's
ability to interpret user requests and connect them to its knowledge base.
Verifiability
• The concept of verifiability, a key requirement for meaning representations in
natural language processing.
• Verifiability refers to the ability of a system to compare the meaning of a
sentence with the state of affairs represented in a knowledge base to determine
its truth.
• Example: The question "Does Maharani serve vegetarian food?" is used to
illustrate verifiability.
• The meaning representation for this question is simplified as:
• Serves(Maharani, Vegetarian Food)
• Basic Requirement: A meaning representation must allow the system to
determine the truth of the sentence by comparing it to known facts.
• Process: The system matches the meaning representation against its knowledge
base (which stores facts about restaurants). If the system finds a matching fact
(e.g., Maharani does serve vegetarian food), it responds affirmatively.
• If no match is found, the system can either respond negatively (if it has complete
knowledge) or indicate that it does not know (if its knowledge is incomplete).
• Verifiability: The core idea is that the system must be able to compare the
representation of a sentence with the real-world facts in its knowledge base to
verify the truth of the statement.
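• A minimal sketch of this verification step is shown below (all names and facts
are invented for illustration, not taken from the text):
```python
# A toy knowledge base: a set of ground facts (predicate, arg1, arg2, ...).
KB = {
    ("Serves", "Maharani", "VegetarianFood"),
    ("Restaurant", "Maharani"),
}

def verify(fact, kb, closed_world=True):
    """Return True/False/None for a ground fact.

    Under the closed-world assumption (complete knowledge), anything not
    in the KB is False; otherwise an absent fact is unknown (None)."""
    if fact in kb:
        return True
    return False if closed_world else None

print(verify(("Serves", "Maharani", "VegetarianFood"), KB))  # True
print(verify(("Serves", "Maharani", "MeatDishes"), KB))      # False (closed world)
```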
Unambiguous Representations
• The importance of unambiguous representations in meaning representation
languages, addressing the challenge of ambiguity and its counterpart,
vagueness, in semantic processing.
• Ambiguity in Semantics: Linguistic expressions can have multiple legitimate
meaning representations depending on context.
• For example, the sentence "I wanna eat someplace that’s close to ICSI" can
be interpreted as the speaker wanting to eat nearby, or in an exaggerated
context, wanting to "devour" the location.
• Handling Ambiguity:
• Systems need a way to determine which interpretation is preferable.
• While the following chapters will discuss techniques to handle these
ambiguities, the focus here is that final meaning representations must be
unambiguous for reasoning and action purposes.
• Unambiguous Representation Requirement:
• Regardless of how ambiguous the input is, the system must provide a single
unambiguous interpretation in the final meaning representation.
Unambiguous Representations
• Vagueness vs. Ambiguity:
• Vagueness involves a lack of specificity but does not lead to multiple
interpretations.
• For instance, "I want to eat Italian food" is vague, as it does not specify
exactly what the user wants to eat, but it is not ambiguous.
• Ambiguity leads to multiple distinct interpretations.
• Vagueness is context-dependent, and in some cases, a vague
representation may suffice, while in others, a more specific representation
is necessary.
• Distinguishing Ambiguity from Vagueness:
• It can be difficult to differentiate between the two, but tests provided by
Zwicky and Sadock (1975) offer diagnostic methods for distinguishing them.
• In summary, meaning representation languages must support unambiguous
interpretations while also accommodating vagueness when needed,
depending on the context.
Canonical Form
• Canonical form in semantic representation aims to address the challenge of
assigning the same meaning representation to different linguistic expressions
that essentially convey the same message, as well as handling word sense
disambiguation and syntactic variation.
• Ambiguity in Inputs:
• Different inputs, even if expressed using various words and syntactic
structures, can convey the same meaning.
• For example, the sentences:
• Does Maharani serve vegetarian food?
• Do they have vegetarian food at Maharani?
• Are vegetarian dishes served at Maharani?
• Does Maharani serve vegetarian fare?
• While these sentences use different words and structures, they should
ideally receive the same meaning representation, as they ask the same
fundamental question.
Canonical Form
• Canonical Form:
• The principle that distinct inputs conveying the same meaning should be
assigned the same meaning representation.
• This simplifies reasoning tasks and helps maintain consistency in a system's
knowledge base.
• Storing alternative representations for each input would lead to complexity
and redundancy.
• Challenges in Canonical Representation:
• The system must recognize that different expressions such as vegetarian
food, vegetarian dishes, and vegetarian fare refer to the same concept in
context.
• The system must also understand that different verbs like having and
serving are equivalent in this context.
• Syntax variations, such as active versus passive voice (Maharani serves
vegetarian dishes vs. Vegetarian dishes are served by Maharani), must still
result in the same underlying meaning.
Canonical Form
• Word Sense Disambiguation:
• Different words like food, dishes, and fare can have multiple senses, but a
system should choose the appropriate shared sense to assign a common
meaning representation.
• This process is called word sense disambiguation (similar to part-of-speech
tagging).
• Syntactic Variations:
• Different syntactic structures, like active and passive voice, can have
systematically related meanings.
• Understanding how to assign meaning roles (e.g., Maharani as the server
and vegetarian dishes as the food being served) despite syntactic
differences is crucial for accurate semantic representation.
• In summary, the doctrine of canonical form enables a system to assign
identical meaning representations to different inputs that convey the same
meaning.
• This approach helps simplify the system's reasoning capabilities while
addressing challenges like word sense disambiguation and syntactic
variation.
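• As a toy illustration of canonicalization (the lookup tables below are
invented; a real system would rely on word sense disambiguation and a semantic
grammar rather than hand-written tables):
```python
# Hand-written tables collapse lexical variants onto shared symbols.
SAME_SENSE = {"food": "VegetarianFood", "dishes": "VegetarianFood",
              "fare": "VegetarianFood"}        # head nouns of "vegetarian X"
SAME_PREDICATE = {"serve": "Serves", "have": "Serves"}  # 'having' == 'serving' here

def canonical_form(verb, restaurant, head_noun):
    """Distinct paraphrases receive one identical meaning representation."""
    return (SAME_PREDICATE[verb], restaurant, SAME_SENSE[head_noun])

# All four paraphrases of the Maharani question collapse to the same tuple:
print(canonical_form("serve", "Maharani", "food"))
print(canonical_form("have", "Maharani", "dishes"))
# -> ('Serves', 'Maharani', 'VegetarianFood') in both cases
```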
Inference and Variables
• The importance of inference and the use of variables in meaning
representations, especially when simple matching approaches fall short.
• Inference and Background Knowledge:
• When dealing with a more complex request like "Can vegetarians eat at
Maharani?", it's clear that this is not just about finding a canonical form
match.
• The question does not mean the same thing as the previous examples (e.g.,
"Does Maharani serve vegetarian food?"), even if it might lead to the same
answer.
• The connection between vegetarian food and Maharani’s offerings involves
commonsense knowledge about what vegetarians eat and what Maharani
serves.
• A system needs the ability to infer conclusions from its background
knowledge rather than relying solely on matching meaning representations.
• Inference involves drawing logical conclusions from existing knowledge,
allowing the system to answer questions not directly addressed by facts
stored in the knowledge base.
Inference and Variables
• Requests with Variables:
• A more general request like "I’d like to find a restaurant where I can get vegetarian
food" presents a different challenge, as it does not specify any particular restaurant.
• This requires a variable-based approach because the system needs to match the
request to unknown or unnamed entities (in this case, restaurants serving vegetarian
food).
• The meaning representation would involve a variable, such as:
• Serves(x, VegetarianFood)
• Here, x stands for any restaurant, and the system must replace the variable with
specific entities from the knowledge base that fit the description (e.g., a restaurant
serving vegetarian food).
• Handling Indefinite References:
• Many linguistic inputs contain indefinite references, and meaning representation
systems must handle these.
• The use of variables allows the system to generalize meaning representations and
connect them to specific knowledge, making the system more flexible and capable of
answering more complex, indirect queries.
• In summary, inference enables systems to derive conclusions from background
knowledge, while variables allow them to handle indefinite references in more
complex queries, leading to more dynamic and contextually appropriate responses.
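• A small sketch of how a variable such as x in Serves(x, VegetarianFood) might
be matched against a knowledge base (the Var class and the facts are
illustrative assumptions):
```python
class Var:
    """A named placeholder standing for an unspecified object."""
    def __init__(self, name):
        self.name = name

KB = {
    ("Serves", "Maharani", "VegetarianFood"),
    ("Serves", "AyCaramba", "MexicanFood"),
    ("Serves", "Leaf", "VegetarianFood"),
}

def find_bindings(query, kb):
    """Yield one {variable-name: constant} binding per matching fact."""
    for fact in kb:
        binding = {}
        if len(fact) == len(query) and all(
            (binding.setdefault(q.name, f) == f) if isinstance(q, Var) else q == f
            for q, f in zip(query, fact)
        ):
            yield binding

for b in find_bindings(("Serves", Var("x"), "VegetarianFood"), KB):
    print(b)   # {'x': 'Maharani'} and {'x': 'Leaf'}
```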
Expressiveness
• Expressiveness Requirement:
• A meaning representation system must be expressive enough to handle a
wide range of subject matter.
• Ideally, a single meaning representation language should be able to
represent the meaning of any natural language utterance.
• Limitations of a Single System:
• Although it would be ideal to have one universal system, it is unlikely that a
single representational framework can cover every possible natural
language utterance.
• First-Order Logic:
• Despite these limitations, First-Order Logic (FOL) is highlighted as being
sufficiently expressive to handle many of the necessary aspects of natural
language representation.
In summary, while no single meaning representation language may cover all
natural language needs, First-Order Logic offers a robust framework capable of
handling a significant portion of required expressiveness.
Context Free Grammars for English
• Constituency:
• While the entire noun phrase can occur before a verb, individual words within the phrase
cannot be separated or reordered arbitrarily.
• Examples:
• Grammatical: "Three parties from Brooklyn arrive."
• Non-Grammatical: "From arrive." "As attracts." "The is." "Spot is."
• These non-grammatical examples highlight that constituents must remain intact to
maintain proper sentence structure.
• Preposing and Postposing: Constituents can be moved to different parts of a sentence,
maintaining their integrity.
• Examples:
• Preposed: "On September seventeenth, I'd like to fly from Atlanta to Denver."
• Postposed: "I'd like to fly from Atlanta to Denver on September seventeenth."
• Non-Grammatical Examples:
• "On September, I'd like to fly seventeenth from Atlanta to Denver."
• "I'd like to fly on September from Atlanta to Denver seventeenth."
• Explanation: The integrity of the phrase is crucial. The entire phrase can be moved, but its
internal order cannot be changed without losing grammaticality.
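• Constituency of this kind can be made concrete with a toy context-free
grammar. The sketch below uses NLTK (a real library; the grammar fragment
itself is invented for this example and covers only the sentence shown):
```python
import nltk  # pip install nltk

grammar = nltk.CFG.fromstring("""
  S   -> NP VP
  NP  -> Num N PP | Num N | 'Brooklyn'
  PP  -> P NP
  Num -> 'three'
  N   -> 'parties'
  P   -> 'from'
  VP  -> 'arrive'
""")

parser = nltk.ChartParser(grammar)
# The whole NP "three parties from Brooklyn" moves as a unit before the verb;
# splitting it (e.g., "from arrive") yields no parse at all.
for tree in parser.parse("three parties from Brooklyn arrive".split()):
    print(tree)
```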
Structure of the Language
• Semantics: Semantics deals with the meaning of words, phrases, and sentences.
• In terms of the structure of language, it focuses on how meaning is constructed
from linguistic units, both at the word level and at the sentence level.
• A. Lexical Semantics
• Lexical semantics studies the meaning of individual words and how they relate
to one another.
• It involves analyzing:
• Synonymy: Words with similar meanings (e.g., "big" and "large").
• Antonymy: Words with opposite meanings (e.g., "hot" and "cold").
• Hyponymy: Hierarchical relations (e.g., "dog" is a hyponym of "animal").
• Polysemy: Words with multiple related meanings (e.g., "bank" as a financial
institution or a riverbank).
• Homonymy: Words with the same form but unrelated meanings (e.g., "bat" for
an animal and "bat" used in sports).
• In NLP, lexical semantics is crucial for tasks such as word sense disambiguation
(determining which meaning of a word is intended based on context) and
semantic similarity (measuring the closeness of meaning between words or
phrases).
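• These lexical relations can be explored directly with NLTK's WordNet
interface; the snippet also runs the simplified Lesk disambiguation algorithm
that ships with NLTK (the example words are chosen for illustration):
```python
# Requires: pip install nltk, then nltk.download('wordnet')
from nltk.corpus import wordnet as wn
from nltk.wsd import lesk

dog = wn.synsets("dog")[0]            # first noun sense, Synset('dog.n.01')
print(dog.hypernyms())                # hyponymy: a dog IS-A canine / domestic animal
print(wn.synsets("big")[0].lemmas())  # lemmas of one synset act as near-synonyms

# Simplified Lesk: pick the sense whose dictionary gloss best overlaps
# the context words.
context = "I deposited money at the bank".split()
print(lesk(context, "bank", "n"))     # one nominal sense of "bank"; Lesk is a
                                      # heuristic and may not pick the intuitive one
```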
Structure of the Language
• B. Compositional Semantics
• Compositionality: The meaning of a sentence is derived from the meanings
of its parts (words or phrases) and how they are syntactically combined.
• For instance, in "The cat sat on the mat," the overall meaning is built from
the meanings of the individual words and their syntactic relationships.
• Predicate-Argument Structure: This is a key concept where a predicate (e.g.,
a verb like "eat") takes one or more arguments (e.g., "John eats pizza").
• The structure of the sentence impacts the meaning by specifying the
relationship between subjects, objects, and verbs.
• Compositional semantics is fundamental for sentence parsing in NLP,
allowing systems to understand and generate coherent sentences by
combining smaller units of meaning.
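• A minimal sketch of predicate-argument structure as a data structure (the
class and the composition rule are illustrative conventions, not a standard
formalism):
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Predication:
    predicate: str
    args: tuple

def compose(subject, verb, obj):
    """Compositionality in miniature: the sentence meaning is a function of
    the meanings of the parts and how they are syntactically combined."""
    return Predication(predicate=verb.capitalize(), args=(subject, obj))

print(compose("John", "eats", "Pizza"))
# Predication(predicate='Eats', args=('John', 'Pizza'))
```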
• C. Truth-Conditional Semantics
• Truth conditions specify under what circumstances a sentence would be true
or false.
• Example: "The cat is on the mat" is true if and only if there is a cat, and it is
indeed located on a mat.
Structure of the Language
• In this approach, the structure of a sentence in terms of subjects, verbs,
and objects helps map the sentence to a model of the world where we can
evaluate its truthfulness.
• In NLP, truth-conditional semantics aids in formal logic representations,
such as First-Order Logic (FOL), to ensure that language can be linked to
real-world meanings.
• D. Semantic Roles and Thematic Roles
• Semantic roles (or thematic roles) describe the relationship between a verb
and its arguments in terms of roles like:
• Agent: The doer of an action (e.g., "John" in "John opened the door").
• Patient: The entity affected by the action (e.g., "the door").
• Instrument: The means by which the action is performed (e.g., "with a
key").
• Understanding these roles is essential in semantic role labeling in NLP,
helping systems to extract who did what to whom in a sentence.
Structure of the Language
• Pragmatics: Pragmatics extends beyond the literal meaning of words and
sentences, focusing on how context influences meaning.
• It deals with the interaction between speaker, listener, and situational
factors in communication.
• A. Speaker Intent and Context
• In pragmatics, the meaning of an utterance often depends on the speaker's
intent and the shared knowledge between the speaker and the listener.
• Example: "Can you pass the salt?" is syntactically a question but
pragmatically a polite request.
• The structure of dialogue in pragmatics involves the way sentences and
phrases build meaning based on the conversational flow, including what is
explicitly said and what is implied.
• In NLP, dialogue systems and chatbots must consider pragmatic structures
to respond appropriately, going beyond literal interpretation to capture
implied meaning.
Structure of the Language
• B. Speech Acts
• Speech acts are communicative actions performed with language, such as
making requests, giving orders, or making promises.
• Locutionary act: The literal statement (e.g., "I promise to come").
• Illocutionary act: The intention behind the statement (e.g., the act of promising).
• Perlocutionary act: The effect on the listener (e.g., the listener's belief that you
will come).
• In NLP, understanding speech act theory is crucial for creating systems that can
accurately interpret or generate human-like language interactions, particularly in
virtual assistants.
• C. Implicature
• Implicature refers to meaning that is implied but not explicitly stated.
• Example: "It’s cold in here" may imply a request to close a window.
• Grice’s Maxims (of quantity, quality, relation, and manner) help govern
conversational implicatures:
• Quantity: Be as informative as required.
• Quality: Say what is true.
Structure of the Language
• Relation: Be relevant.
• Manner: Be clear and orderly.
• In NLP, implicature recognition is vital for systems dealing with natural
language understanding (NLU), particularly in tasks where indirect
language is used (e.g., sarcasm or politeness).
• D. Anaphora and Coreference Resolution
• Anaphora refers to the use of expressions (like pronouns) that depend on
other parts of the sentence for their meaning.
• Example: In "John took his book. He put it on the table," "He" refers to John
and "it" refers to the book.
• Coreference resolution is the task of determining which entities in a text
refer to the same thing.
• In NLP, resolving anaphora is crucial for understanding the relationships
between sentences and for maintaining discourse coherence in tasks like
summarization or question answering.
First Order Logic
• First-Order Logic (FOL) is a well-established, flexible, and computationally
efficient method for representing knowledge.
• It meets key requirements for a meaning representation language, including
verifiability, inference, and expressiveness.
• FOL also supports a sound model-theoretic semantics, ensuring that it provides
a robust framework for understanding meaning.
• A major advantage of FOL is that it makes minimal specific commitments about
how knowledge should be represented, making it versatile and adaptable.
• The basic structure in FOL revolves around objects, properties of objects, and
relations among objects—concepts commonly shared with other
representational systems.
• FOL has a clear and structured syntax, allowing for precise formalization of
knowledge.
• The section introduces the basics of how FOL’s syntax works and how it
connects with the semantics of meaning representation.
• FOL is also applicable for representing events, which are critical in natural
language processing.
• This ability allows FOL to model dynamic and temporal aspects of meaning.
First Order Logic
• Basic Elements of First Order Logic:
• Here we introduce First-Order Logic (FOL) and explain how its atomic
elements can be combined to build complex meaning representations.
• 1. Objects:
• FOL provides three fundamental ways to represent objects:
• Constants: These refer to specific objects in the world (e.g., Maharani,
Harry). Constants are unique and typically written as capitalized words or
letters.
• Functions: These represent relationships or concepts, such as possessive
expressions like “Frasca’s location,” which in FOL might be written as
LocationOf(Frasca).
• Functions return unique objects, making them convenient for referring to
unnamed but specific entities.
• Variables: These refer to unspecified objects. Variables are denoted by
lower-case letters and allow assertions about unknown objects or groups of
objects.
First Order Logic
• 2. Predicates:
• Predicates express relationships between objects or properties of objects.
• They can take different numbers of arguments:
• Two-place predicates: Relate two objects. For example, "Maharani serves
vegetarian food" becomes Serves(Maharani, VegetarianFood).
• One-place predicates: Assert a property of a single object. For example,
"Maharani is a restaurant" becomes Restaurant(Maharani).
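• One possible Python encoding of these elements (an illustrative convention
invented for this sketch, not a standard library):
```python
class Var:
    """A variable: refers to an unspecified object."""
    def __init__(self, name): self.name = name
    def __repr__(self): return f"?{self.name}"

Maharani = "Maharani"                        # constant: a specific named object
frascas_location = ("LocationOf", "Frasca")  # function term LocationOf(Frasca)
x = Var("x")                                 # variable

serves = ("Serves", Maharani, "VegetarianFood")  # two-place predicate
is_restaurant = ("Restaurant", Maharani)         # one-place predicate
print(serves, is_restaurant, frascas_location, x)
```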
First Order Logic
• 3. Composite Representations:
• Atomic Formulas: These are the simplest representations built from
constants, functions, and predicates.
• FOL allows the creation of composite representations by combining atomic
formulas using logical connectives.
• 4. Logical Connectives:
• Conjunction (^) and negation (¬) are examples of logical connectives used
to combine formulas.
• For example, the sentence "I only have five dollars and I don’t have a lot of
time" can be represented as:
• Have(Speaker, FiveDollars) ^ ¬Have(Speaker, LotOfTime)
• These connectives allow an infinite number of logical formulas to be
created, providing FOL with vast expressive power.
• 5. Expressiveness:
• The recursive structure of FOL, as outlined in Fig. 17.3, means that an
infinite number of complex representations can be generated using a finite
set of rules.
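• A sketch of how such composite formulas could be evaluated recursively
(formulas encoded as nested tuples; the knowledge base is invented):
```python
KB = {("Have", "Speaker", "FiveDollars")}

def holds(formula, kb):
    """Evaluate a formula built with ^ (and), V (or), ¬ (not) over ground facts."""
    op = formula[0]
    if op == "and":
        return holds(formula[1], kb) and holds(formula[2], kb)
    if op == "or":
        return holds(formula[1], kb) or holds(formula[2], kb)
    if op == "not":
        return not holds(formula[1], kb)
    return formula in kb     # atomic formula: look it up directly

# Have(Speaker, FiveDollars) ^ ¬Have(Speaker, LotOfTime)
f = ("and", ("Have", "Speaker", "FiveDollars"),
            ("not", ("Have", "Speaker", "LotOfTime")))
print(holds(f, KB))   # True
```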
First Order Logic
• Variables and Quantifiers:
• As noted above, variables are used in two ways in FOL: to refer to particular
anonymous objects and to refer generically to all objects in a collection.
• Here let us discuss how quantifiers in First-Order Logic (FOL) handle
variables to make assertions about unknown or unspecified objects. The
two primary quantifiers in FOL are:
• 1. Existential Quantifier (∃):
• Usage: Used when referring to at least one anonymous object that satisfies
a condition.
• It is signaled by the presence of indefinite noun phrases in natural
language.
• Example: In the sentence "a restaurant that serves Mexican food near ICSI,"
we are referring to an unknown restaurant that meets specific conditions.
• Representation:
• The FOL representation would be:
• ∃x Restaurant(x) ^ Serves(x, MexicanFood) ^ Near(LocationOf(x),
LocationOf(ICSI)).
First Order Logic
• This means that for the sentence to be true, there must be at least one
object (e.g., a restaurant) such that if we substitute it for x, the resulting
logical formula is true.
• For instance, if "AyCaramba" is such a restaurant, substituting it for x
results in a true statement.
• Restaurant(AyCaramba) ^ Serves(AyCaramba, MexicanFood) ^
Near(LocationOf(AyCaramba), LocationOf(ICSI))
• 2. Universal Quantifier (∀):
• Usage: Used when referring to all objects in a collection, asserting that the
formula is true for any possible substitution.
• Example: In the sentence "All vegetarian restaurants serve vegetarian
food," we are making a universal statement about every restaurant in the
category of vegetarian restaurants.
• Representation: The FOL representation would be:
• ∀x VegetarianRestaurant(x) → Serves(x, VegetarianFood).
First Order Logic
• This means that the sentence is true if, for every substitution of x with a vegetarian
restaurant, the consequent (Serves(x, VegetarianFood)) is true.
• VegetarianRestaurant(Maharani) → Serves(Maharani, VegetarianFood)
• For example, substituting "Maharani" results in a true sentence because Maharani
is a vegetarian restaurant that serves vegetarian food.
• False Antecedents: If we substitute a non-vegetarian restaurant (like "AyCaramba")
for x, the antecedent (VegetarianRestaurant(AyCaramba)) is false.
• VegetarianRestaurant(AyCaramba) → Serves(AyCaramba, VegetarianFood)
• However, this still satisfies the universal quantifier because an implication with a
false antecedent is always true.
• In other words, there is no restriction of x to restaurants or concepts
related to them: VegetarianRestaurant(Carburetor) → Serves(Carburetor, VegetarianFood).
• Here the antecedent is still false, and hence the rule remains true under
this kind of irrelevant substitution.
• To review, variables in logical formulas must be either existentially or universally
quantified.
• To satisfy an existentially quantified variable, there must be at least one
substitution that results in a true sentence.
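• Over a finite domain, both quantifiers reduce to iteration, as the following
sketch illustrates (domain and facts invented for this example):
```python
DOMAIN = {"Maharani", "Leaf", "AyCaramba", "Carburetor"}
VEGETARIAN_RESTAURANT = {"Maharani", "Leaf"}
SERVES_VEG = {"Maharani", "Leaf"}
SERVES_MEX = {"AyCaramba"}

# ∃x Serves(x, MexicanFood): true if at least one substitution works.
exists = any(x in SERVES_MEX for x in DOMAIN)

# ∀x VegetarianRestaurant(x) → Serves(x, VegetarianFood): true if every
# substitution works. An implication with a false antecedent is true, so
# irrelevant objects like "Carburetor" do not falsify the rule.
forall = all((x not in VEGETARIAN_RESTAURANT) or (x in SERVES_VEG)
             for x in DOMAIN)

print(exists, forall)   # True True
```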
First Order Logic
• First-Order Logic (FOL) Semantics:
• Objects, properties, and relations in FOL correspond to elements in the
external world being modeled.
• FOL meanings are established by mapping expressions to real-world
elements using set-theoretic concepts.
• Objects in the world are represented as FOL terms that denote domain
elements.
• Properties are sets of domain elements, and relations are sets of tuples of
these elements.
• Example:
• The sentence "Centro is near Bacaro" can be expressed in FOL as:
• Near(Centro, Bacaro).
• The meaning depends on whether the elements (Centro and Bacaro) are
part of the relation Near in the model.
• Logical operators like and (^), or (V), not (¬), and implies (→) combine
formulas and their meanings.
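• A miniature model-theoretic interpretation of Near(Centro, Bacaro) (the
model below, including the domain elements, is invented):
```python
# A model maps each constant to a domain element and each relation to a
# set of tuples of domain elements.
model = {
    "domain": {"c", "b", "m"},
    "constants": {"Centro": "c", "Bacaro": "b", "Maharani": "m"},
    "Near": {("c", "b")},          # the denotation of the Near relation
}

def interpret_near(a, b, m):
    """Near(a, b) is true exactly when the denoted pair is in the relation."""
    return (m["constants"][a], m["constants"][b]) in m["Near"]

print(interpret_near("Centro", "Bacaro", model))    # True
print(interpret_near("Maharani", "Bacaro", model))  # False
```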
First Order Logic
• First-Order Logic (FOL) Semantics:
• Variables and Quantifiers: Variables are substituted by elements in the
domain to evaluate formulas.
• Existential quantifier (∃): Formula is true if at least one substitution makes
it true.
• Universal quantifier (∀): Formula is true for all possible substitutions.
• Some operators like or (V) and implies (→) do not exactly match their
everyday English usage.
First Order Logic
• Inference in FOL:
• Inference allows adding valid new propositions to a knowledge base or
determining the truth of propositions not explicitly present.
• Modus Ponens:
• A form of inference based on "if-then" reasoning.
• Form: If a and a ⇒ b, then b can be inferred.
• Example:
• If a restaurant is vegetarian (antecedent), it serves vegetarian food
(consequent).
• Given: VegetarianRestaurant(Leaf)
• Using modus ponens: infer that Serves(Leaf, VegetarianFood).
First Order Logic
• Inference in FOL:
• Forward Chaining:
• Starts with known facts and applies rules to derive new facts.
• Facts are added to the knowledge base as soon as they are deduced.
• Advantage: Inference is done in advance, making future queries faster.
• Disadvantage: Unnecessary facts may be inferred and stored.
• Backward Chaining:
• Starts with a query and checks if it can be proven based on the knowledge
base.
• Searches for rules where the consequent matches the query and proves the
antecedent.
• Example:
• To prove Serves(Leaf, VegetarianFood), find a rule with this as the
consequent, then prove the antecedent.
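• A tiny forward-chaining sketch using the running example (the rule encoding
is an illustrative toy, not a full theorem prover):
```python
facts = {("VegetarianRestaurant", "Leaf")}

def veg_rule(fact):
    """∀x VegetarianRestaurant(x) → Serves(x, VegetarianFood)"""
    if fact[0] == "VegetarianRestaurant":
        return ("Serves", fact[1], "VegetarianFood")
    return None

def forward_chain(facts, rules):
    """Apply rules to known facts until no new fact is produced."""
    changed = True
    while changed:
        changed = False
        for f in list(facts):
            for rule in rules:
                new = rule(f)
                if new is not None and new not in facts:
                    facts.add(new)      # store each deduction as soon as it is made
                    changed = True
    return facts

print(forward_chain(facts, [veg_rule]))
# {('VegetarianRestaurant', 'Leaf'), ('Serves', 'Leaf', 'VegetarianFood')}
```
• Backward chaining would instead start from the query Serves(Leaf,
VegetarianFood), find a rule whose consequent matches it, and recursively try
to prove the antecedent.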
First Order Logic
• Inference in FOL:
• Difference between Backward Chaining and Reasoning Backwards:
• Backward chaining proves queries by finding matching rules.
• Reasoning backwards from known consequents (i.e., assuming antecedents
are true because the consequent is true) is called abduction and is often
used for plausible reasoning but is logically invalid.
• Resolution:
• A sound and complete inference method that can find all valid inferences.
• Disadvantage: It is computationally expensive.
• Practical Use:
• Forward and backward chaining are more commonly used, and knowledge
base developers must structure knowledge to support necessary
inferences.
