AIML-Unit 2 Notes-Assignment 2

The document provides an introduction to Knowledge Representation and Reasoning (KRR) in artificial intelligence, focusing on how knowledge is structured and utilized for reasoning tasks. It discusses key concepts such as logical agents, knowledge-based agents, and the Wumpus World, illustrating how these systems use formal logic for decision-making and problem-solving. Additionally, it covers propositional and first-order logic as methods for knowledge representation, highlighting their applications and challenges in AI.


INTRODUCTION TO ARTIFICIAL INTELLIGENCE & MACHINE LEARNING

UNIT- II
Knowledge–Representation and Reasoning: Logical Agents: Knowledge based agents, the Wumpus
world, logic. Patterns in Propositional Logic, Inference in First-Order Logic-Propositional vs first
order inference, unification and lifting.
================================================================================
Introduction
Knowledge Representation and Reasoning (KRR) refers to the field in artificial intelligence (AI) that
focuses on how to represent knowledge about the world in a way that a machine can understand
and use for reasoning tasks. The aim is to allow computers to perform reasoning processes such as
deduction, decision-making, and problem-solving based on the knowledge represented.

Key Concepts:
1. Knowledge Representation (KR): This involves creating structures that capture the knowledge
about objects, relationships, events, and processes. Common KR frameworks include:
o Logic-based representations: Use formal languages (like propositional logic, predicate
logic) to represent knowledge and derive conclusions through formal deduction.
o Frames and semantic networks: Hierarchical structures that represent objects, their
attributes, and relationships.
o Ontologies: Formal representations of a set of concepts within a domain and the
relationships between them.
2. Reasoning: Reasoning involves deriving new knowledge from existing knowledge. Types of
reasoning include:
o Deductive Reasoning: Drawing conclusions from general rules or facts, ensuring that
conclusions are logically valid.
o Inductive Reasoning: Drawing general conclusions from specific observations, often
probabilistic or uncertain.
o Abductive Reasoning: Inferring the best possible explanation for a set of observations.
o Non-monotonic Reasoning: Reasoning where new information can change previous
conclusions (e.g., updates in belief systems).
3. Challenges:
o Expressiveness: Balancing how richly knowledge is represented with the computational
efficiency of reasoning.
o Uncertainty and Incomplete Information: Many real-world scenarios involve
uncertainty, and reasoning must account for incomplete or ambiguous knowledge.
o Scalability: Representing and reasoning over large volumes of knowledge.
4. Applications:
o Expert Systems: Use KRR to model human expertise for decision-making.
o Natural Language Processing: Understanding and generating human language by
interpreting the knowledge embedded in text.
o Robotics: Representing the knowledge of the world and using reasoning to navigate,
plan, and make decisions.
KRR is essential for enabling machines to understand the world, make informed decisions, and solve
complex problems, and it plays a crucial role in AI and intelligent systems. The following kinds of
knowledge need to be represented in AI systems:
o Objects: All the facts about objects in our world domain. E.g., guitars contain strings;
trumpets are brass instruments.
o Events: Events are the actions which occur in our world.
o Performance: It describes behaviour, i.e., knowledge about how to do things.
o Meta-knowledge: It is knowledge about what we know.
o Facts: Facts are the truths about the real world and what we represent.
o Knowledge base: The central component of a knowledge-based agent is the knowledge
base, represented as KB. The knowledge base is a group of sentences (here, "sentence"
is used as a technical term and is not identical to a sentence in the English
language).

Logical Agents
Logical Agents are artificial intelligence systems that use formal logic to represent knowledge and
make decisions. These agents apply logical reasoning to derive conclusions, make inferences, and act
based on the knowledge they have about the world.

Key Concepts:
1. Knowledge Representation: Logical agents represent knowledge using formal logical languages,
such as propositional logic or first-order logic. This allows the agent to clearly express facts,
rules, and relationships about the world.
2. Reasoning: Logical agents perform reasoning using logical rules and inference techniques to
deduce new facts or make decisions. Common reasoning methods include:
o Deductive reasoning: Drawing conclusions from known facts and logical rules.
o Inference: Using rules to derive new knowledge from existing knowledge.
3. Actions: Based on the reasoning process, logical agents can make decisions or take actions that
bring about desired outcomes, such as problem-solving, planning, or reacting to changes in their
environment.
4. Examples of Logical Agents:
o Propositional Logic Agents: Use simple true/false statements to represent knowledge
and derive conclusions.
o First-Order Logic Agents: Use more expressive logical systems that allow for quantifiers,
such as "for all" and "there exists," to represent more complex relationships.
Logical Agents are intelligent systems that use formal logic to reason about their environment and
make decisions or take actions based on that reasoning.

Knowledge based agents


Knowledge-Based Agents are AI systems that use a structured knowledge base to make decisions
and perform actions. These agents rely on explicit representations of knowledge about the world,
which they use to reason, solve problems, and interact with their environment.
Key Concepts:
1. Knowledge Base (KB): A collection of facts, rules, and information about the world that the
agent uses to understand its environment. The knowledge can be represented using logical
statements, semantic networks, or other formal methods.
2. Inference Mechanism: The agent uses reasoning techniques to infer new knowledge from the
existing knowledge base. This can involve deductive reasoning, where the agent derives
conclusions based on the known facts and rules.
3. Action: The agent can take actions based on its reasoning, aiming to achieve specific goals or
solve problems. These actions are typically determined by a decision-making process that
evaluates the possible outcomes based on the knowledge it has.
4. Autonomy: Knowledge-based agents can operate autonomously by making decisions and taking
actions without continuous human intervention. They adapt to new information or changes in
the environment by updating their knowledge base and re-evaluating their goals.
5. Examples:
o Expert Systems: Knowledge-based agents designed to simulate human expertise in
specific domains.
o Autonomous Robots: Robots that use a knowledge base to navigate and perform tasks
in dynamic environments.
Knowledge-Based Agents are intelligent systems that use a structured set of knowledge and
reasoning capabilities to understand, act in, and adapt to their environment, often making decisions
autonomously.

The Wumpus World


The Wumpus World is a classic problem used in the field of artificial intelligence to demonstrate
concepts of logic, reasoning, and decision-making. It is a simple grid-based environment where an
agent must navigate and make decisions based on limited information. It's often used to illustrate
concepts like logic-based reasoning, agents making decisions under uncertainty, and how to deal
with incomplete information.

In the Wumpus World:


 The Wumpus world is a simple example world used to illustrate the worth of a knowledge-based
agent and to demonstrate knowledge representation.
 It was inspired by the video game Hunt the Wumpus by Gregory Yob (1973).
 The world contains some rooms with pits, one room with the Wumpus, and one agent starting
at square [1, 1] of the world.
 The agent starts in a specific grid cell and can move around, but there are hazards (like pits
and a Wumpus, a dangerous creature).
 The goal is to find the gold and exit without encountering the Wumpus or falling into a pit.
 The agent receives sensory feedback from its surroundings: "stench" if the Wumpus is
nearby, "breeze" if there's a pit nearby, and other cues to help make decisions.
 The agent must reason about the environment, using the limited clues available, to avoid
dangers and achieve its goal.
 Following is a sample diagram for representing the Wumpus world.

 There are also some components which can help the agent to navigate the cave.
 These components are given as follows:
a) The rooms adjacent to the Wumpus room are smelly, so the agent will perceive some
stench there.
b) The rooms adjacent to a pit have a breeze, so if the agent reaches a room next to a pit,
it will perceive the breeze.
c) There will be glitter in a room if and only if the room has gold.
d) The Wumpus can be killed by the agent if the agent is facing it, and the Wumpus will then
emit a horrible scream which can be heard anywhere in the cave.
PEAS description of Wumpus world
To explain the Wumpus world we have given PEAS description as below:
 Performance measure:
• +1000 reward points if the agent comes out of the cave with the gold.
• -1000 points penalty for being eaten by the Wumpus or falling into a pit.
• -1 for each action, and -10 for using an arrow.
• The game ends when the agent either dies or comes out of the cave.
 Environment:
• A 4*4 grid of rooms.
• The agent is initially in square [1, 1], facing toward the right.
• The locations of the Wumpus and the gold are chosen randomly, excluding the first square [1,1].
• Each square of the cave can be a pit with probability 0.2, except the first square.
 Actuators:
• Left turn,
• Right turn
• Move forward
• Grab
• Release
• Shoot
 Sensors:
• The agent will perceive the stench if he is in the room adjacent to the Wumpus. (Not
diagonally).
• The agent will perceive breeze if he is in the room directly adjacent to the Pit.
• The agent will perceive the glitter in the room where the gold is present.
• The agent will perceive the bump if he walks into a wall.
• When the Wumpus is shot, it emits a horrible scream which can be perceived
anywhere in the cave.
• These percepts can be represented as a five-element list, with a different indicator for
each sensor.
• For example, if the agent perceives stench and breeze, but no glitter, no bump, and no
scream, it can be represented as: [Stench, Breeze, None, None, None].
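As a hedged illustration (the function name and representation are assumptions, not part of any standard library), the five-element percept list can be built in Python as follows:

# Hypothetical sketch: representing a Wumpus-world percept as a five-element list.
# The order of indicators is [Stench, Breeze, Glitter, Bump, Scream].

def make_percept(stench=False, breeze=False, glitter=False, bump=False, scream=False):
    """Return the percept in the [Stench, Breeze, None, None, None] style used above."""
    labels = ["Stench", "Breeze", "Glitter", "Bump", "Scream"]
    flags = [stench, breeze, glitter, bump, scream]
    return [label if flag else None for label, flag in zip(labels, flags)]

# Agent perceives stench and breeze, but no glitter, bump, or scream:
print(make_percept(stench=True, breeze=True))
# -> ['Stench', 'Breeze', None, None, None]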

Exploring the Wumpus world


Agent's First step:
• Initially, the agent is in the first room, square [1,1], and it is already known that this room is
safe for the agent, so to represent that the room is safe we will add the symbol OK.
• Symbol A is used to represent agent, symbol B for the breeze, G for Glitter or gold, V for the
visited room, P for pits, W for Wumpus.
• At Room [1,1] agent does not feel any breeze or any Stench which means the adjacent
squares are also OK.
Agent's second Step:
• Now agent needs to move forward, so it will either move to [1, 2], or [2,1].
• Let's suppose the agent moves to room [2, 1]; in this room the agent perceives some breeze,
which means a pit is near this room.
• The pit can be in [3, 1] or [2, 2], so we will add the symbol P? to mark these squares as
possible pit rooms.
• Now the agent will stop and think and will not make any harmful move.
• The agent will go back to the [1, 1] room.
• The room [1,1], and [2,1] are visited by the agent, so we will use symbol V to represent the
visited squares.
Agent's third step:
• At the third step, now agent will move to the room [1,2] which is OK.
• In the room [1,2] agent perceives a stench which means there must be a Wumpus nearby.
• But the Wumpus cannot be in room [1,1] by the rules of the game, and also not in [2,2]
(the agent had not detected any stench when it was at [2,1]).
• Therefore agent infers that Wumpus is in the room [1,3], and in current state, there is no
breeze which means in [2,2] there is no Pit and no Wumpus.
• So it is safe, and we will mark it OK, and the agent moves further in [2,2].

Agent's fourth step:


• At room [2,2] there is no stench and no breeze, so let's suppose the agent decides to
move to [2,3].
• At room [2,3] the agent perceives glitter, so it grabs the gold and climbs out of the cave.
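The safety inference used in these steps can be sketched in a few lines of Python. This is only an illustrative sketch under the assumption that percepts are lists of indicator strings as above; the function names are invented for the example:

# Illustrative sketch of the agent's safety inference in a 4x4 Wumpus world.

def neighbours(x, y, size=4):
    """Orthogonally adjacent squares inside the grid (no diagonals)."""
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(i, j) for i, j in cand if 1 <= i <= size and 1 <= j <= size]

def mark_safe(square, percept, ok):
    """If the square has no breeze and no stench, all its neighbours are OK."""
    if "Breeze" not in percept and "Stench" not in percept:
        ok.update(neighbours(*square))
    ok.add(square)  # the square the agent is standing on is safe

ok = set()
mark_safe((1, 1), [], ok)   # at [1,1]: no breeze, no stench
print(sorted(ok))           # -> [(1, 1), (1, 2), (2, 1)]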
Propositional logic
• Propositional logic (PL) is the simplest form of logic where all the statements are made by
propositions.
• A proposition is a declarative statement which is either true or false.
• It is a technique of knowledge representation in logical and mathematical form.
o Example:
o It is Sunday.
o The Sun rises from West (False proposition)
o 3+3= 7 (False proposition)
o 5 is a prime number.
• Propositional logic is also called Boolean logic as it works on 0 and 1.
• In propositional logic, we use symbolic variables to represent the logic, and we can use any
symbol for representing a proposition, such as A, B, C, P, Q, R, etc.
• A proposition can be either true or false, but it cannot be both.
• Propositional logic consists of proposition symbols and logical connectives.
• These connectives are also called logical operators.
• A proposition formula which is always true is called a tautology, and it is also called a valid
sentence.
• A proposition formula which is always false is called a contradiction.

Syntax of propositional logic:


 The syntax of propositional logic defines the allowable sentences for the knowledge
representation.
 There are two types of Propositions:
a. Atomic Propositions
b. Compound propositions

Atomic Proposition:
 Atomic propositions are the simple propositions.
 It consists of a single proposition symbol.
 These are the sentences which must be either true or false.
 Example:
a) 2+2 is 4, it is an atomic proposition as it is a true fact.
b) "The Sun is cold" is also a proposition as it is a false fact.

Compound proposition:
 Compound propositions are constructed by combining simpler or atomic propositions using
parentheses and logical connectives.
 Example:
a) "It is raining today, and the street is wet."
b) "Ankit is a doctor, and his clinic is in Mumbai."

Logical Connectives:
 Logical connectives are used to connect two simpler propositions or representing a sentence
logically.
 We can create compound propositions with the help of logical connectives.
 There are mainly five connectives as given in the following table:

Truth tables:
 A truth table is a mathematical table used in logic—specifically in connection with Boolean
algebra, Boolean functions, and propositional calculus—which sets out the functional values of
logical expressions on each of their functional arguments, that is, for each combination of values
taken by their logical variables.
 In particular, truth tables can be used to show whether a propositional expression is true for all
legitimate input values, that is, logically valid.
 A truth table has one column for each input variable (for example, A and B), and one final
column showing all of the possible results of the logical operation that the table represents (for
example, A XOR B).
 Each row of the truth table contains one possible configuration of the input variables (for
instance, A=true, B=false), and the result of the operation for those values.
 A truth table is a structured representation that presents all possible combinations of truth
values for the input variables of a Boolean function and their corresponding output values.
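The following Python sketch enumerates a truth table for a compound proposition; the chosen formula, (A ∧ B) → C, is just an illustrative assumption:

from itertools import product

def truth_table(variables, formula):
    """Print one row per assignment of truth values and the value of the formula."""
    print(" ".join(variables) + " | result")
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        row = " ".join("T" if env[v] else "F" for v in variables)
        print(row + " | " + ("T" if formula(env) else "F"))

# (A AND B) -> C; the implication p -> q is written as (not p) or q
truth_table(["A", "B", "C"], lambda e: (not (e["A"] and e["B"])) or e["C"])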
First-order logic (FOL)
First-order logic (FOL) is a formal system used in mathematics, philosophy, linguistics, and computer
science. It is a system of logic that allows you to reason about objects, their properties, and
relationships between them.
 First-order logic is another way of knowledge representation in artificial intelligence. It is an
extension of propositional logic.
 First-order logic is also known as Predicate logic or First-order predicate logic. First-order logic is
a powerful language that expresses information about objects in a more natural way and can
also express the relationships between those objects.
 First-order logic (like natural language) does not only assume that the world contains facts, as
propositional logic does, but also assumes the following things in the world:
o Objects: A, B, people, numbers, colors, wars, theories, squares, pits, wumpus, ...
o Relations: unary relations such as red, round, is adjacent, or n-ary relations such as
the sister of, brother of, has color, comes between.
o Functions: father of, best friend, third inning of, end of, ...
FOL is powerful for expressing and reasoning about a wide variety of concepts, and its formal
structure helps in proving statements or deriving conclusions systematically.

Syntax of First-Order logic


The syntax of FOL determines which collection of symbols is a logical expression in first-order logic.

Basic Elements of First-order logic: The basic elements of FOL syntax are constant symbols, variable
symbols, predicate symbols, function symbols, logical connectives, quantifiers, and the equality symbol.

Atomic sentences:
 Atomic sentences are the most basic sentences of first-order logic. These sentences are
formed from a predicate symbol followed by a parenthesis with a sequence of terms.
 We can represent atomic sentences as Predicate (term1, term2, ......, term n).
 Example:
Ravi and Ajay are brothers: => Brothers(Ravi, Ajay).
Chinky is a cat: => cat (Chinky).

Complex Sentences:
 Complex sentences are made by combining atomic sentences using connectives. First-order
logic statements can be divided into two parts:
 Subject: Subject is the main part of the statement.
 Predicate: A predicate can be defined as a relation, which binds two atoms together in a
statement.
 Consider the statement "x is an integer." It consists of two parts: the first part, x, is the
subject of the statement, and the second part, "is an integer," is known as the predicate.
Quantifiers in First-order logic:
 A quantifier is a language element which generates quantification, and quantification
specifies the quantity of specimens in the universe of discourse.
 These are the symbols that permit to determine or identify the range and scope of the
variable in the logical expression.
 There are two types of quantifier:
a. Universal Quantifier, (for all, everyone, everything)
b. Existential quantifier, (for some, at least one).

Universal Quantifier:
 Universal quantifier is a symbol of logical representation, which specifies that the statement
within its range is true for everything or every instance of a particular thing.
 The Universal quantifier is represented by a symbol ∀, which resembles an inverted A.

Existential Quantifier:
 Existential quantifiers are the type of quantifiers, which express that the statement within
its scope is true for at least one instance of something.
 It is denoted by the logical operator ∃, which resembles as inverted E.

Points to remember:
 The main connective used with the universal quantifier ∀ is implication →.
 The main connective used with the existential quantifier ∃ is conjunction ∧.
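Over a finite universe of discourse the two quantifiers can be checked directly; Python's built-in all() and any() mirror ∀ and ∃. The domain and predicates below are assumed purely for illustration:

# Checking quantified statements over a finite domain (illustrative sketch).
domain = [2, 4, 6, 8]

def even(x):
    return x % 2 == 0

print(all(even(x) for x in domain))   # "for all x in the domain, x is even"       -> True
print(any(x > 7 for x in domain))     # "there exists an x in the domain with x>7" -> True

# The usual quantifier patterns: ∀x (P(x) → Q(x)) and ∃x (P(x) ∧ Q(x))
P = lambda x: x > 3   # assumed predicate P
Q = even              # assumed predicate Q
print(all((not P(x)) or Q(x) for x in domain))   # ∀x (P(x) → Q(x)) -> True
print(any(P(x) and Q(x) for x in domain))        # ∃x (P(x) ∧ Q(x)) -> True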
FOL inference rules for quantifier:
 As in propositional logic, there are inference rules in first-order logic.
 Following are some basic inference rules in FOL:
o Universal Generalization
o Universal Instantiation
o Existential Instantiation
o Existential introduction

1. Universal Generalization:
 Universal generalization is a valid inference rule which states that if P(c) is true for any
element c in the universe of discourse, then we can have a conclusion as ∀ x P(x).
 This rule can be used if we want to show that every element has a similar property.
 Example:
Let's represent, P(c): "A byte contains 8 bits",
so for ∀ x P(x)
"All bytes contain 8 bits.", it will also be true.
2. Universal Instantiation:
 Universal instantiation, also called universal elimination or UI, is a valid inference rule.
 It can be applied multiple times to add new sentences.
 The new KB is logically equivalent to the previous KB.
 Example:
If "Every person likes ice-cream", ∀x P(x),
then we can infer that "John likes ice-cream", i.e., P(c) for a particular person c (here John).
3. Existential Instantiation:
 Existential instantiation, also called Existential Elimination, is a valid inference
rule in first-order logic.
 It can be applied only once to replace the existential sentence.
 Example:
From ∃x Crown(x) ∧ OnHead(x, John) we can infer Crown(K) ∧ OnHead(K, John), where K is a new
constant symbol that does not appear elsewhere in the KB.
4. Existential introduction
 An existential introduction is also known as an existential generalization, which is a valid
inference rule in first-order logic.
 Example:
Let's say that "Priyanka got good marks in English."
Therefore, "someone got good marks in English."
In FOL: from GoodMarks(Priyanka, English) we can infer ∃x GoodMarks(x, English).

Inference in First-Order Logic


 In First-Order Logic, inference is used to derive new facts or sentences from existing ones.
 Before getting into the FOL inference rule, it's important to understand some basic FOL
terminology.

Substitution:
 Substitution is a basic procedure that is applied to terms and formulas.
 It can be found in all first-order logic inference systems.
 When there are quantifiers in FOL, the substitution becomes more complicated.
 When we write F[a/x], we are referring to the substitution of a constant "a" for the variable
"x."
 [ Note: first-order logic can convey facts about some or all of the universe's objects.]
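A substitution such as [a/x] can be modelled as a dictionary from variable names to terms. The following minimal sketch (the term representation is an assumption chosen for the example) applies a substitution recursively:

# Minimal sketch: terms are strings (variables or constants) or tuples (function, args...).

def substitute(term, subst):
    """Apply a substitution {variable: term} recursively, i.e. compute F[a/x]."""
    if isinstance(term, str):
        return subst.get(term, term)          # replace the variable if it is bound
    functor, *args = term
    return (functor,) + tuple(substitute(a, subst) for a in args)

# F = Knows(x, Mother(x)); apply the substitution [John/x]
F = ("Knows", "x", ("Mother", "x"))
print(substitute(F, {"x": "John"}))
# -> ('Knows', 'John', ('Mother', 'John'))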

Equality:
 In First-Order Logic, atomic sentences are formed not only through the use of predicates and
terms, but also through the application of equality.
 We can do this by using the equality symbol, which indicates that two terms refer to the
same thing.
 Example: Brother(John) = Smith.
 In the above example, the object referred to by Brother(John) is the same as the object
referred to by Smith.
 The equality symbol can be used with negation to portray that two terms are not the same
objects.
 Example: ¬(x=y) which is equivalent to x ≠y

Propositional logic vs FOL

Feature | Propositional Logic | First-Order Logic (FOL)
Basic Elements | Propositions (simple statements) | Predicates, terms, and quantifiers
Atomic Statements | Simple propositions, e.g., p, q | Predicates applied to terms, e.g., P(a), R(x,y)
Quantification | None | Yes (Universal ∀, Existential ∃)
Complexity | Simple, only involves logical connectives | More complex, involving relationships and quantification
Example Statement | "It is raining." p | "John is a student." Student(John)
Logical Connectives | ∧ (AND), ∨ (OR), ¬ (NOT), → (IMPLIES) | ∧, ∨, ¬, ∀ (for all), ∃ (there exists)
Expressiveness | Can only express simple true/false values | Can express relationships between objects, properties, and quantification
Example with Connectives | "It is raining AND the ground is wet" p∧q | "If it is raining, then the ground is wet" ∀x(Raining(x)→Wet(Ground(x)))
Expressing Relationships | No relationships between objects | Can express relationships, e.g., "John is friends with Mary" Friends(John, Mary)
Limitations | Cannot express relationships or properties of objects | Can express complex relationships and properties using predicates

Unification and Lifting

1. Unification in FOL
 Unification is the process of finding a substitution (a mapping) of variables that makes two
logical expressions (or terms) identical. It’s a key operation in FOL, particularly in automated
reasoning and logic programming.
 Goal: The goal of unification is to find a substitution that, when applied to both terms,
results in identical expressions.
 How it works:
o You compare two terms (which can be predicates, functions, or individual constants)
and attempt to make them identical by substituting variables.
o If you can find such a substitution, the terms are unified; otherwise, they are
incompatible.
o Example:
Term 1: P(x,y)
Term 2: P(a,b)
 A unification would occur if you substitute x=a and y=b, making both terms identical. So, the
substitution {x↦a,y↦b} unifies the two terms.
 Let Ψ1 and Ψ2 be two atomic sentences and 𝜎 be a unifier such that Ψ1𝜎 = Ψ2𝜎; then this is
expressed as UNIFY(Ψ1, Ψ2) = 𝜎.
 E.g. Let's say there are two different expressions, P(x, y), and P(a, f(z)).
 In this example, we need to make both above statements identical to each other.
 For this, we will perform the substitution.
o P(x, y)......... (i)
o P(a, f(z))......... (ii)
 Substitute x with a, and y with f(z) in the first expression, and it will be represented as a/x
and f(z)/y.
 With both the substitutions, the first expression will be identical to the second expression
and the substitution set will be: [a/x, f(z)/y].
 Applications: Unification is used in:
o Resolution (in automated theorem proving)
o Logic programming (e.g., in Prolog)
o Automated reasoning (where you need to match terms to infer new information)

2. Lifting in FOL
 Lifting, in the context of FOL, typically refers to the process of generalizing or extending a
function or operation from the ground level (such as a specific term or constant) to more
complex expressions (involving variables or predicates).
 Goal: The aim of lifting is to extend operations or properties defined over simple terms to a
broader scope, often involving quantified expressions.
 How it works:
 Lifting usually involves generalizing an operation or rule from specific elements (like
constants) to variable-bound elements or quantified terms.
 It can involve extending an interpretation from the domain of individual constants to a
broader domain of variables, or from ground terms to terms that include quantifiers.
 Example:
o If you have a property f(x), lifting might refer to generalizing it so that it can apply to
all x, i.e., ∀x f(x).
 Another example could be extending a rule, say, a property of constants like "every constant
is even," to a more general statement that holds for variables, like "for all x, x is even."

Generalized Modus Ponens (GMP)


 The GMP rule is a logical rule of inference that extends the standard Modus Ponens rule.
 It allows reasoning with more complex statements, particularly those that involve quantified
expressions in First-Order Logic (such as ∀ "for all" or ∃ "there exists").
 The GMP rule is used to infer conclusions when you have a universally quantified conditional
statement and facts that match (unify with) its premise.

Standard Modus Ponens (MP):


In classical logic (propositional logic), Modus Ponens is a basic rule of inference that states:
• If p→q (If p then q)
• And p is true
• Then q must be true
Formally:
1) p→q
2) p
3) Therefore, q

Example 1:
 Universal Statement: ∀x (Human(x)→Mortal(x))
This means: "For all x, if x is human, then x is mortal."
 Specific Instance: Human(Socrates)
This means: "Socrates is a human."
 Apply GMP: By Generalized Modus Ponens, since Human(Socrates) is true and
∀x (Human(x)→Mortal(x)) is also true, we can infer:
 Conclusion: Mortal(Socrates)
 This means: "Socrates is mortal."

 GMP rule is a powerful extension of Modus Ponens that allows inference from universal
statements by instantiating the variable with a specific object.
 This rule is essential for reasoning in First-Order Logic where quantified statements are often
used.

Example 2:
 Universal Statement: ∀x (Bird(x)→CanFly(x))
(All birds can fly.)
 Specific Instance: Bird(Tweety)
(Tweety is a bird.)
 Apply GMP: Since Tweety is a bird and the universal statement says all birds can fly, we can
apply GMP:
• Conclusion: CanFly(Tweety)
(Tweety can fly.)
Applications of Generalized Modus Ponens:
 Medical Diagnosis: In medical reasoning systems, rules such as "If a patient has fever, they
might have an infection" can be generalized. If a specific patient (say, "John") has a fever, GMP
allows the system to infer that John might have an infection.
 Expert Systems: Expert systems in various domains (like law, finance, or engineering) rely on
universal rules such as "All engineers understand physics" or "All lawyers are trained in law." If
the system knows a specific person is an engineer or a lawyer, it can apply GMP to infer
additional knowledge about their expertise.
 Prolog: In logic programming languages like Prolog, facts and rules are often written as universal
statements, and the inference engine applies GMP to derive conclusions; a minimal sketch of this
idea is shown below.
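A minimal Python sketch of GMP-style inference over simple unary predicates; the facts, rules, and function name are assumptions chosen for illustration, not a real Prolog engine:

# Hedged sketch of Generalized Modus Ponens over unary predicates.
# A rule ("Bird", "CanFly") encodes the universal statement ∀x (Bird(x) → CanFly(x)).

facts = {("Bird", "Tweety"), ("Human", "Socrates")}
rules = [("Bird", "CanFly"), ("Human", "Mortal")]

def apply_gmp(facts, rules):
    """Instantiate each rule with every matching fact and return the new conclusions."""
    derived = set()
    for premise, conclusion in rules:
        for predicate, individual in facts:
            if predicate == premise:                    # e.g. Bird(Tweety) matches Bird(x)
                derived.add((conclusion, individual))   # infer CanFly(Tweety)
    return derived

print(apply_gmp(facts, rules))
# -> {('CanFly', 'Tweety'), ('Mortal', 'Socrates')}  (set order may vary)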

Summary of Key Differences:


 Unification is about making two terms identical by finding a suitable substitution of variables.
 Lifting is about generalizing or extending an operation or property from simpler terms
(constants or specific values) to more complex expressions involving variables or quantifiers.
[The following questions were asked in previous question papers for Unit 2; the questions and
answers are given below.]

1) Explain unification algorithm with suitable example.


Unification is a process used in logic and computer science, particularly in automated reasoning and
programming languages, where two logical expressions or terms are made identical by finding a
suitable substitution for variables. The unification algorithm is fundamental to systems like Prolog
and other logic programming languages.
What is Unification?
Unification involves finding a substitution (or mapping) of variables that makes two terms (which
could be complex expressions) identical. A substitution is typically a mapping of variables to terms.

Basic Terminology:
 Terms: Expressions that can include constants, variables, and functions.
 Substitution: A replacement of variables with terms.
 Unifier: A substitution that makes two terms identical.
 Most General Unifier (MGU): The simplest unifier (in terms of variables) that can unify two
terms.

The Unification Algorithm:


The algorithm works by comparing two terms and progressively applying substitutions to make them
identical. If two terms cannot be unified, the algorithm fails. The process involves:
1. Variable Matching: If one term is a variable, replace it with the other term.
2. Function Matching: If both terms are functions (or compound terms), unify their arguments
recursively.
3. Constant Matching: If both terms are constants, they must be identical for unification to
succeed.

Steps of the Unification Algorithm:


1. Base case: If both terms are identical, no further unification is needed.
2. Case 1: If one term is a variable and the other is a complex term, the variable can be
replaced by that term, provided it doesn't lead to circular references.
3. Case 2: If both terms are functions, unify their corresponding arguments.
4. Case 3: If both terms are constants, they must be identical for unification to succeed.
5. Failure: If neither of the above steps is possible, unification fails.

Example:
Let's consider an example to illustrate unification.
Problem:
We want to unify the two terms:
Term 1: f(X, g(Y))
Term 2: f(a, g(b))
Step-by-step process:
1. Compare the outermost structure:
Both terms are of the form f(A, B), where A and B are the arguments. Since the functor f is the
same in both terms, we proceed to unify the arguments.
2. Unify the first argument:
We need to unify X with a.
This gives the substitution {X → a}.
3. Unify the second argument:
We need to unify g(Y) with g(b).
Both terms have the same functor g, so we now unify their arguments. We need to unify Y with
b, giving the substitution {Y → b}.
4. Combine the substitutions:
The combined substitution from the two steps is:
o {X → a, Y → b}
Thus, the unifier for f(X, g(Y)) and f(a, g(b)) is {X → a, Y → b}.

The unification algorithm succeeds with the substitution {X → a, Y → b}, meaning the two terms are
unified under this substitution. The process would fail if, for example, the terms were f(X) and f(X, Y),
since no substitution could make both terms identical.
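The algorithm above can be sketched directly in Python. This is an illustrative implementation, not a library routine: variables are strings starting with an uppercase letter (Prolog convention), constants are other strings, and compound terms are tuples such as ("f", "X", ("g", "Y")).

def is_variable(t):
    return isinstance(t, str) and t[:1].isupper()

def occurs(var, term):
    """Occurs check: does the variable appear inside the term?"""
    if term == var:
        return True
    if isinstance(term, tuple):
        return any(occurs(var, arg) for arg in term[1:])
    return False

def unify(t1, t2, subst=None):
    """Return the most general unifier as a dict, or None on failure."""
    if subst is None:
        subst = {}
    t1, t2 = subst.get(t1, t1), subst.get(t2, t2)    # dereference bound variables
    if t1 == t2:                                      # base case: already identical
        return subst
    if is_variable(t1):
        return None if occurs(t1, t2) else {**subst, t1: t2}
    if is_variable(t2):
        return None if occurs(t2, t1) else {**subst, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and \
            t1[0] == t2[0] and len(t1) == len(t2):    # same functor and arity
        for a1, a2 in zip(t1[1:], t2[1:]):            # unify arguments recursively
            subst = unify(a1, a2, subst)
            if subst is None:
                return None
        return subst
    return None                                       # mismatched constants or shapes

print(unify(("f", "X", ("g", "Y")), ("f", "a", ("g", "b"))))
# -> {'X': 'a', 'Y': 'b'}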

-------------------------------------------------------------------------------------------------------------------------------------

2) Mention the categories of hill climbing search. What are the reasons that hill climbing often
gets stuck?

Hill climbing is a simple search algorithm used in artificial intelligence (AI) to find the best solution by
iteratively moving toward a peak (optimum) in the problem space. It can be classified into different
categories based on how it approaches the search space. Additionally, it has certain limitations that
can cause it to get stuck during the search process.
Categories of Hill Climbing Search:
1. Simple Hill Climbing:
o In this version, the algorithm moves to the first neighbouring state that improves the
current state. It evaluates the neighbouring nodes one by one and selects the first one
that is better.
o Characteristics: Simple, greedy approach.
o Example: If you're trying to find the highest point in a terrain, you check the immediate
neighbours and climb to the highest one that you find.
2. Steepest-Ascent Hill Climbing:
o Here, the algorithm evaluates all the neighbours of the current state and selects the one
with the highest value (the steepest ascent).
o Characteristics: More thorough than simple hill climbing, but can still get stuck in local
optima.
o Example: If you're climbing a mountain and you look at all the nearby hills, you choose
the one with the greatest height difference from your current position.
3. Stochastic Hill Climbing:
o Instead of evaluating all neighbours or just the first improving neighbour, stochastic hill
climbing picks a random neighbour and moves to it if it improves the current state. If no
better neighbours are found, it may try others.
o Characteristics: More flexible than steepest ascent but still prone to getting stuck.
o Example: If you're randomly exploring different routes on a mountain and taking the
first route you find that is better than your current position.
4. First-Choice Hill Climbing:
o In this variation, the algorithm randomly picks a neighbour and if it’s better, it moves to
it. If not, it picks another random neighbour, continuing the process until a better
solution is found.
o Characteristics: Randomized version of the search, which can potentially avoid some
pitfalls of other hill climbing methods.
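A compact sketch of steepest-ascent hill climbing (category 2 above) combined with random restarts; the objective function and neighbourhood are toy assumptions used only to show how the algorithm can get stuck at a local maximum and how restarting helps:

import random

def hill_climb(start, objective, neighbours):
    """Steepest-ascent hill climbing: move to the best neighbour until none improves."""
    current = start
    while True:
        best = max(neighbours(current), key=objective)
        if objective(best) <= objective(current):
            return current                    # local maximum (or plateau) reached
        current = best

def random_restart(objective, neighbours, starts):
    """Run hill climbing from several random starts and keep the best result."""
    results = [hill_climb(s, objective, neighbours) for s in starts]
    return max(results, key=objective)

# Toy objective with a local maximum at x=2 and a global maximum at x=8.
objective = lambda x: -(x - 2) ** 2 + 3 if x < 5 else -(x - 8) ** 2 + 10
neighbours = lambda x: [x - 1, x + 1]
starts = [random.randint(0, 10) for _ in range(5)]
print(random_restart(objective, neighbours, starts))   # usually 8, sometimes 2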

Reasons Hill Climbing Gets Stuck:


1. Local Maximum:
o Hill climbing may get stuck at a local maximum — a point that is higher than its
neighbours but not the highest point in the entire search space. The algorithm might
think it has found the optimal solution because all neighbouring states seem worse, but
there's a higher peak elsewhere that is not reachable from the current position.
2. Plateau:
o A plateau is a flat area in the search space where the neighbouring states have the same
value (i.e., there is no direction of improvement). If the algorithm lands on a plateau, it
may struggle to find a better path or decide on the next move, thus failing to make
progress.
3. Ridges:
o Sometimes, there are ridges in the search space: areas where a straight move in any
direction leads to a lower value, but small steps along the ridge could eventually lead to
a higher value. Hill climbing often misses these small steps and ends up stuck at lower
points.
4. Greedy Nature:
o Hill climbing is inherently a greedy algorithm — it always chooses the next move that
appears to be the best option at the moment. This can lead to suboptimal solutions
because the algorithm does not look ahead or consider the bigger picture.
5. Lack of Memory:
o In many hill climbing variants, the algorithm doesn't keep track of past states. This can
be a problem because it may repeatedly visit the same state, especially if the landscape
of the search space has many symmetrical areas.
6. No Backtracking:
o Hill climbing typically does not perform backtracking, meaning if the algorithm makes a
move that leads to a worse state (or gets stuck), it can't go back and try a different
approach. This lack of flexibility makes it harder to escape local maxima or plateaus.

Solutions to Overcome Hill Climbing Limitations:


 Random Restart Hill Climbing: The algorithm is restarted multiple times from different initial
states, increasing the chances of finding a global maximum.
 Simulated Annealing: This technique allows occasional "downhill" moves (even if they lead to
worse states) to escape local maxima and find a global optimum.
 Genetic Algorithms: These use a population of solutions and evolve them over time, helping to
explore a broader area of the search space.

Hill climbing is a simple and fast approach to optimization, but it is prone to getting stuck at local
maxima, plateaus, and ridges. More sophisticated algorithms or strategies are often required for
finding a global optimum.

-------------------------------------------------------------------------------------------------------------------------------------

3. What are the best-first search strategy and the depth-first search strategy? Give a comparison.

Best-First Search (abbreviated BFS here; not to be confused with Breadth-First Search) and Depth-First
Search (DFS) are two common strategies used in search algorithms for exploring state spaces. Both
strategies have different approaches and trade-offs. The two strategies are explained and compared
below.

1. Best-First Search (BFS)


 Approach: Best-First Search explores a search tree by expanding the most promising node
first. It uses an evaluation function (often called a heuristic function, denoted as h(n)) to
decide which node to explore next. The node with the lowest evaluation value is expanded
first.
 Variants: A popular variant of Best-First Search is A* Search, which combines both the path
cost and the heuristic to decide the best node to expand.
 Optimality: Best-First Search can find the optimal solution if combined with an appropriate
heuristic (like in A*).
 Completeness: BFS can be incomplete if the search space is infinite, depending on the
heuristic used.
2. Depth-First Search (DFS)
 Approach: Depth-First Search explores the search tree by going as deep as possible along
one branch before backtracking. It starts at the root node and explores a path until it
reaches a leaf or a dead-end. Then, it backtracks and explores other paths.
 Stack-Based: DFS uses a stack (either explicitly or via recursion) to manage the nodes to be
explored.
 Optimality: DFS does not guarantee the optimal solution because it may explore suboptimal
paths first.
 Completeness: DFS can be incomplete if the search space is infinite or if there are loops in
the search space that cause it to go into infinite recursion.
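A minimal greedy best-first search using a priority queue keyed on h(n); the tiny graph and heuristic values are illustrative assumptions:

import heapq

def best_first_search(start, goal, graph, h):
    """Greedy best-first search: always expand the frontier node with the lowest h(n)."""
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))
    return None

# Assumed toy graph and heuristic estimates of distance to the goal 'G'.
graph = {"S": ["A", "B"], "A": ["G"], "B": ["A", "G"]}
h = {"S": 5, "A": 2, "B": 4, "G": 0}
print(best_first_search("S", "G", graph, h))   # -> ['S', 'A', 'G']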

Comparison
Criteria | Best-First Search (BFS) | Depth-First Search (DFS)
Exploration Strategy | Expands the most promising node based on a heuristic h(n) | Explores deeply along one path before backtracking
Data Structure | Priority queue (min-heap, to prioritize based on the heuristic) | Stack (can be implemented via recursion)
Completeness | Can be complete, depending on the heuristic and search space | Can be incomplete if the search space is infinite or has loops
Optimality | Can be optimal if the heuristic is consistent and admissible (e.g., in A* Search) | Not guaranteed to find the optimal solution
Memory Usage | Can have high memory usage since all nodes are stored in the priority queue | Typically uses less memory (only the current path and visited nodes)
Time Complexity | Depends on the quality of the heuristic; generally higher time complexity compared to DFS | O(b^d), where b is the branching factor and d is the depth of the solution
Space Complexity | O(b^d), because all generated nodes may be stored in the priority queue | O(b*d), since only the current path needs to be stored
Handling Loops | Can handle loops by using closed lists or visited nodes | May get stuck in infinite loops without cycle detection
Best Use Case | When an evaluation function is available and finding an optimal solution is a priority | When memory is limited, or a solution is needed quickly without considering optimality
Risk of Getting Stuck | Can get stuck in local minima if the heuristic is poorly designed | Can get stuck in infinite loops without a mechanism to check for cycles
Key Differences and Use Cases:
 Best-First Search is ideal when you have a good heuristic function, as it guides the search
towards the goal more efficiently. However, it can use a lot of memory and might not always
be complete or optimal unless combined with specific constraints (e.g., A*).
 Depth-First Search is memory-efficient and works well for problems with deep solutions or
when a quick solution is needed, but it may miss the optimal path and is not guaranteed to
find a solution if there are infinite paths or loops.
-------------------------------------------------------------------------------------------------------------------------------------

4. Describe the role of CNF /DNF in resolution.

In resolution-based inference, Conjunctive Normal Form (CNF) and Disjunctive Normal Form (DNF)
play important roles in structuring logical expressions for automated reasoning, particularly in logic
programming, automated theorem proving, and artificial intelligence. Each form is explained below
along with its relevance to the resolution process:

1. Conjunctive Normal Form (CNF)


 Definition: A logical formula is in Conjunctive Normal Form (CNF) if it is a conjunction (AND) of
clauses, where each clause is a disjunction (OR) of literals (propositions or their negations).
Essentially, a CNF formula is a set of clauses that are connected by AND, and each clause is a set
of literals connected by OR.
o Example: The formula (A ∨ B) ∧ (¬A ∨ C) ∧ (B ∨ ¬C) is in CNF.
 Role in Resolution:
o In resolution, CNF plays a crucial role because resolution operates by combining two
clauses, each containing a pair of complementary literals, and deriving a new clause.
o CNF simplifies the structure of logical formulas, enabling the resolution algorithm to
work systematically.
o Resolution is primarily defined for CNF formulas because it relies on the idea that we
can resolve (or eliminate) complementary literals (like A and ¬A), which simplifies the
clauses progressively until we reach a contradiction (empty clause, denoted as ⊥) or a
solution.
Steps in CNF-based Resolution:
o Convert the formula into CNF if it's not already in that form.
o Apply the resolution rule: Resolve two clauses that contain complementary literals.
o Continue resolving until either a contradiction is found (indicating that the formula is
unsatisfiable) or no further resolutions are possible.
o Example: Suppose we have two clauses in CNF:
 (A ∨ B) and (¬A ∨ C).
 The complementary literals are A and ¬A, so we can resolve them and produce
the new clause (B ∨ C).
o Use in Automated Theorem Proving: In automated reasoning, especially in systems like
Prolog or SAT solvers, CNF is used because it allows the resolution process to efficiently
prune search spaces and reach conclusions through systematic simplification.
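A small Python sketch of the single resolution step described above. Clauses are modelled as sets of literal strings, with negation written as a leading '~'; these conventions are assumptions made for the example:

# Illustrative sketch of one propositional resolution step on CNF clauses.

def negate(literal):
    return literal[1:] if literal.startswith("~") else "~" + literal

def resolve(c1, c2):
    """Return all resolvents of two clauses (an empty frozenset means a contradiction)."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:                         # found complementary literals
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return resolvents

c1 = frozenset({"A", "B"})        # (A ∨ B)
c2 = frozenset({"~A", "C"})       # (¬A ∨ C)
print(resolve(c1, c2))            # -> [frozenset({'B', 'C'})], i.e. (B ∨ C)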
2. Disjunctive Normal Form (DNF)
 Definition: A logical formula is in Disjunctive Normal Form (DNF) if it is a disjunction (OR) of
conjunctions (AND) of literals. Each term in a DNF formula is a conjunction of literals, and the
overall formula is a disjunction of these conjunctions.
o Example: The formula (A ∧ B) ∨ (¬A ∧ C) is in DNF.
 Role in Resolution:
o While DNF is not commonly used directly in resolution, it plays a role in understanding
the satisfiability of a formula.
o DNF can represent logical formulas in a way that makes it easier to check satisfiability
because each conjunction in the DNF can be treated as a potential solution.
o Resolution is more naturally suited to CNF, as it focuses on the resolution of
complementary literals, and DNF typically involves disjunctions that do not easily lend
themselves to this form of simplification.
Steps in DNF-based Resolution:
o While resolution is not typically performed directly on DNF formulas, converting a DNF
formula to CNF is a common pre-processing step when applying resolution-based
methods.

Summary of the Role of CNF and DNF in Resolution:


Aspect | Conjunctive Normal Form (CNF) | Disjunctive Normal Form (DNF)
Definition | A conjunction (AND) of clauses, where each clause is a disjunction (OR) of literals. | A disjunction (OR) of conjunctions (AND) of literals.
Role in Resolution | CNF is the primary form used in resolution. It allows the application of the resolution rule (eliminating complementary literals). | DNF is not directly used in resolution, but it can be helpful for satisfiability checking.
Satisfiability | CNF formulas are commonly used for unsatisfiability checking in automated theorem proving. | DNF formulas can represent satisfiable solutions directly, since each conjunction is a possible solution.
Resolution | CNF formulas are the basis for the resolution rule, which works by resolving complementary literals across clauses. | DNF formulas are not typically involved in resolution but might be converted to CNF for resolution.
Usage | Used extensively in automated reasoning, SAT solvers, and logic programming for resolution-based inference. | Used in logic circuits and satisfiability checking but not directly in resolution.

Conversion between CNF and DNF:


 In practice, CNF is preferred for resolution because of its structure, which lends itself to the
resolution rule. DNF, on the other hand, is typically used for satisfiability testing or model
checking but is not well-suited for resolution due to its disjunctive nature.
Conclusion:
 CNF plays a central role in resolution because it allows for the systematic application of the
resolution rule to simplify logical formulas and determine satisfiability or prove unsatisfiability.
 DNF is useful for satisfiability checking but doesn't directly apply to resolution, as it doesn't offer
a natural way to resolve complementary literals across clauses.
-------------------------------------------------------------------------------------------------------------------------------------

5. What do you mean by rational agents? Explain.

Rational Agents in AI
A rational agent is an entity in artificial intelligence (AI) that perceives its environment and takes
actions to maximize its chances of achieving a specific goal, given its available knowledge and
capabilities. The core idea behind rational agents is that they are designed to act in the most
beneficial or optimal way based on their current state of knowledge, reasoning, and decision-making
processes.

Definition of a Rational Agent


A rational agent is one that:
 Perceives its environment through sensors (which could be, for example, cameras, microphones,
or other types of data-gathering systems).
 Takes actions through actuators (e.g., motors, outputs, or other systems that enable it to
interact with its environment).
 Makes decisions based on the information it perceives, aiming to maximize its expected
performance measure.
In simple terms, a rational agent always tries to choose the action that it believes will lead to the
best possible outcome, according to its goal or objective.

Key Characteristics of a Rational Agent


1. Autonomy:
o A rational agent has the ability to make decisions independently based on its
observations, reasoning, and goals, without needing human intervention.
2. Goal-Oriented:
o The agent is designed to achieve a certain goal or set of goals. These goals define what
"rational" behaviour means for the agent. For instance, a robot in a warehouse might
have the goal of delivering items to specific locations.
3. Perception and Action:
o It perceives the environment (through sensors) and then acts (using actuators) based on
that perception. The actions taken by the agent are based on its current state and
knowledge of how to achieve its goals.
4. Decision-Making Process:
o Rational agents use various decision-making algorithms to choose the best course of
action based on their current state, available knowledge, and prediction of future
consequences.
5. Performance Measure:
o A rational agent has a performance measure, which is a criterion used to evaluate how
well it is achieving its goal(s). For example, a chess-playing agent's performance measure
could be winning the game.
6. Learning:
o Many rational agents are capable of learning from their experiences. This means that,
over time, the agent can improve its ability to make decisions and adapt to new
environments by analysing past actions and outcomes.

Formal Definition of Rationality


A rational agent’s behaviour can be described as:
 It selects the best action based on the current perception and its knowledge of the
environment.
 It uses a performance measure to evaluate the desirability of different states and selects
actions that are expected to maximize that measure.
In other words, given a set of possible actions, a rational agent chooses the one that is expected to
result in the best performance, based on what it knows and its understanding of the environment.

Components of a Rational Agent


A rational agent typically consists of the following components:
1. Sensors:
o Sensors collect information from the environment. For example, a self-driving car's
sensors may include cameras, radar, GPS, etc.
2. Actuators:
o Actuators are used to take actions that affect the environment. In the case of the self-
driving car, the actuators would include the steering wheel, brakes, accelerator, and
other components that control the car’s movement.
3. Performance Measure:
o This is a quantitative way to evaluate how well the agent is achieving its goals. For
example, the performance measure for a recommendation system could be the user’s
satisfaction with the recommended items.
4. Environment:
o The environment is the context in which the agent operates. It is what the agent
perceives and acts upon. For example, the environment for a robot could be the physical
space in which it is operating, including obstacles, tasks, and other objects.
5. Agent Function:
o The agent function determines the agent’s behaviour based on its perception and state.
It is essentially a mapping from the percept history (the agent’s entire past sequence of
perceptions) to an action.
6. Agent Program:
o The agent program implements the agent function. It runs on the physical hardware (like
a robot or computer) and processes the agent's perceptions to produce actions.

Types of Rational Agents


Rational agents can vary based on their complexity and the environments in which they operate.
Agents are generally classified into the following five types:
1. Simple Reflex Agents:
o These agents act based on the current perception, often using condition-action rules (if-
then rules). For example, a thermostat could be a simple reflex agent that heats or cools
a room based on temperature readings.
2. Model-Based Reflex Agents:
o These agents keep track of the world state using an internal model, enabling them to
make decisions based on the history of their actions and perceptions. This allows the
agent to handle more complex tasks and adapt to dynamic environments.
3. Goal-Based Agents:
o These agents act to achieve specific goals. They search for a path or strategy that brings
them closer to a desired state or goal. For example, an autonomous vehicle navigating to
a specific destination is a goal-based agent.
4. Utility-Based Agents:
o These agents not only seek to achieve goals but also try to maximize their utility
(satisfaction). They choose actions that lead to the most beneficial outcome, taking into
account multiple goals or criteria, such as efficiency, safety, and cost.
5. Learning Agents:
o These agents can improve their performance over time by learning from their
environment and previous actions. They can adjust their behaviour based on feedback
to adapt to new situations.
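A minimal sketch of a simple reflex (condition-action) agent in the thermostat style mentioned above; the threshold values and action names are assumed for illustration:

# Hedged sketch of a simple reflex (condition-action) thermostat agent.

def thermostat_agent(percept, target=22.0, tolerance=1.0):
    """Map the current temperature percept directly to an action via if-then rules."""
    if percept < target - tolerance:
        return "heat"
    if percept > target + tolerance:
        return "cool"
    return "idle"

for temperature in (18.5, 22.3, 25.0):
    print(temperature, "->", thermostat_agent(temperature))
# 18.5 -> heat, 22.3 -> idle, 25.0 -> cool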

Example of a Rational Agent:


Let's take a self-driving car as an example of a rational agent:
 Perception: The car perceives the environment using sensors (e.g., cameras, Lidar, GPS) that
detect the car’s surroundings (other cars, pedestrians, traffic signs).
 Actions: Based on its perception, the car can take actions such as steering, accelerating, braking,
or signalling.
 Performance Measure: The performance measure could be minimizing travel time while
ensuring safety, obeying traffic laws, and reducing fuel consumption.
 Goal: The car's goal is to drive safely and efficiently from its starting point to its destination.

The self-driving car makes rational decisions based on the information it gathers from its sensors, its
goal of reaching a destination efficiently and safely, and its performance measure.
A rational agent is an AI entity that perceives its environment, acts based on that perception to
achieve specific goals, and chooses actions that maximize its performance based on available
information and capabilities. Rational agents form the foundation of many AI systems, as they
enable autonomous decision-making in a wide range of applications.
ASSIGNMENT 2
(Submit within a week)
1) Write about logical agents and its representation.
2) Explain the following: (i) Knowledge based agents (ii)Logical Agents
3) Discuss about patterns and pattern representation Propositional Logic.
4) Discuss about Inference in Propositional Logic.
5) Write about Inference in First-Order Logic.
6) Compare inference in propositional logic with inference in first order logic.
7) Explain the predicate logic representation with suitable example.
8) What are the merits and demerits of propositional logic in Artificial Intelligence?
9) Explain the predicate logic representation and inference in predicate logic with a suitable
example.
10) What are the standard quantifiers used in first order logic? Explain them with examples.
11) Explain unification algorithm with suitable example.
12) Mention the categories of hill climbing search. What are the reasons that hill climbing often gets
stuck?
13) Compare best first search strategy with depth first strategy.
14) Describe the role of CNF/DNF in resolution.
15) What do you mean by rational agents? Explain.
