PE_1_AI_UNIT_4
REPRESENTATION
UNIT 4
Contents
Propositional Logic:
• Representation
• Inference
• Reasoning Patterns
• Resolution
• Forward and Backward Chaining.
First order Logic:
• Representation
• Inference
• Reasoning Patterns
• Resolution
• Forward and Backward Chaining.
Knowledge representation:
•Knowledge representation and reasoning (KR, KRR) is the part of Artificial intelligence
which is concerned with how AI agents think and how thinking contributes to the
intelligent behavior of agents.
• Example (propositions):
a) It is Sunday.
b) The Sun rises from the West. (false proposition)
c) 3 + 3 = 7 (false proposition)
d) 5 is a prime number.
Following are some basic facts about propositional logic:
• Propositional logic is also called Boolean logic as it works on 0 and 1.
• In propositional logic, we use symbolic variables to represent the logic, and we can
use any symbol to represent a proposition, such as A, B, C, P, Q, R, etc.
• A proposition can be either true or false, but not both.
• Propositional logic consists of objects, relations or functions, and logical
connectives.
• These connectives are also called logical operators.
• The propositions and connectives are the basic elements of the propositional logic.
• A connective is a logical operator that connects two sentences.
• A proposition formula which is always true is called tautology, and it is also called a
valid sentence.
• A proposition formula which is always false is called Contradiction.
• Statements that are questions, commands, or opinions, such as "Where is Rohini?",
"How are you?", and "What is your name?", are not propositions.
Syntax of propositional logic:
• The syntax of propositional logic defines the allowable sentences for the knowledge
representation. There are two types of Propositions:
• Atomic Propositions
• Compound propositions
• Atomic Proposition: Atomic propositions are simple propositions. Each consists of a
single proposition symbol and must be either true or false.
Example:
a) "2 + 2 = 4" is an atomic proposition, as it is a true fact.
b) "The Sun is cold" is also an atomic proposition, as it is a false fact.
• Compound proposition: Compound propositions are constructed by combining
simpler or atomic propositions, using parenthesis and logical connectives.
• Example:
a) "It is raining today, and street is wet."
b) "Ankit is a doctor, and his clinic is in Mumbai."
Logical Connectives:
• Logical connectives are used to connect two simpler propositions or to represent a sentence logically.
We can create compound propositions with the help of logical connectives. There are mainly five
connectives, which are given as follows:
• Negation: A sentence such as ¬P is called the negation of P. A literal can be either a positive literal or a
negative literal.
• Conjunction: A sentence which has ∧ connective such as, P ∧ Q is called a conjunction.
• Example: Rohan is intelligent and hardworking. It can be written as,
• P = Rohan is intelligent, Q = Rohan is hardworking → P ∧ Q.
• Disjunction: A sentence which has ∨ connective, such as P ∨ Q. is called disjunction, where P and Q
are the propositions.
• Example: "Ritika is a doctor or Engineer",
• Here P = Ritika is a doctor, Q = Ritika is an engineer, so we can write it as P ∨ Q.
• Implication: A sentence such as P → Q is called an implication; here sentence (Q) depends on
sentence (P). Implications are also known as if-then rules. It can be represented as:
• If it is raining, then the street is wet.
• Let P = It is raining and Q = The street is wet, so it is represented as P → Q.
• Biconditional: A sentence such as P ⇔ Q is a biconditional sentence; here sentences (P) and (Q) depend
on each other. Example: If I am breathing, then I am alive, and vice versa.
• P = I am breathing, Q = I am alive; it can be represented as P ⇔ Q.
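The five connectives above can be checked mechanically. Below is a minimal Python sketch (not part of the original slides; function names such as implies and iff are my own) that encodes each connective over Boolean values and evaluates the implication example.

```python
# A minimal sketch (illustrative, not from the slides) of the five connectives
# as Python functions, assuming propositions are plain Boolean values.

def neg(p):            # Negation:      ¬P
    return not p

def conj(p, q):        # Conjunction:   P ∧ Q
    return p and q

def disj(p, q):        # Disjunction:   P ∨ Q
    return p or q

def implies(p, q):     # Implication:   P → Q (false only when P is true and Q is false)
    return (not p) or q

def iff(p, q):         # Biconditional: P ⇔ Q (true when P and Q have the same value)
    return p == q

# "If it is raining (P), then the street is wet (Q)."
P, Q = True, True
print(implies(P, Q))   # True
```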
Following is the summarized table for Propositional Logic connectives:

Word              Symbol   Technical term   Example
AND               ∧        Conjunction      P ∧ Q
OR                ∨        Disjunction      P ∨ Q
Implies           →        Implication      P → Q
If and only if    ⇔        Biconditional    P ⇔ Q
Not               ¬        Negation         ¬P
Question:
1. Express the following sentences in Proposition Logic.
a. "It is raining today, and street is wet."
b. "Ankit is a doctor, and his clinic is not in Mumbai."
c. Ritika is a doctor or Engineer
d. If it is raining, then the street is wet.
e. I am alive. I am breathing
f. Today is Monday
g. Today is not Monday.
h. Ram plays both cricket and Hockey.
i. Ram leaves for Chennai or Mumbai
j. If it is Sunday (P) then I will go to Movie (Q).
k. Only if I have 1000 rupees will I go to the hotel, and I will go to the hotel if I have 1000 rupees.
Truth Table:
Logical equivalence:
• Logical equivalence is one of the features of propositional logic. Two
propositions are said to be logically equivalent if and only if the columns in
the truth table are identical to each other.
• Let's take two propositions A and B; for logical equivalence we can write it as
A⇔B. In the truth table below, the columns for ¬A ∨ B and A → B are identical,
hence ¬A ∨ B is equivalent to A → B (the sketch that follows prints this table).
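As an illustration, the following small Python sketch (illustrative code, not from the slides) prints the truth table referred to above and asserts that the columns for ¬A ∨ B and A → B match.

```python
# A small sketch that prints the truth table for ¬A ∨ B and A → B and checks
# that the two columns are identical (helper name 'implies' is my own).
from itertools import product

def implies(a, b):
    return (not a) or b

print(" A     B     ¬A∨B   A→B")
for a, b in product([True, False], repeat=2):
    lhs = (not a) or b
    rhs = implies(a, b)
    print(f"{a!s:5} {b!s:5} {lhs!s:6} {rhs!s:5}")
    assert lhs == rhs          # columns match, so ¬A ∨ B ≡ A → B
```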
Properties of Operators:
• Commutativity:
• P ∧ Q = Q ∧ P
• P ∨ Q = Q ∨ P
• Associativity:
• (P ∧ Q) ∧ R = P ∧ (Q ∧ R)
• (P ∨ Q) ∨ R = P ∨ (Q ∨ R)
• Identity element:
• P ∧ True = P
• P ∨ True = True
• Distributive:
• P ∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R)
• P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R)
• De Morgan's Law:
• ¬(P ∧ Q) = (¬P) ∨ (¬Q)
• ¬(P ∨ Q) = (¬P) ∧ (¬Q)
• Double-negation elimination:
• ¬(¬P) = P
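These properties can be verified by brute force over all truth assignments. The sketch below (illustrative Python, written for this purpose and not part of the slides) asserts each listed property; any property that failed would raise an AssertionError.

```python
# A sketch that brute-force checks the listed operator properties over all
# truth assignments of P, Q, R.
from itertools import product

for p, q, r in product([True, False], repeat=3):
    assert (p and q) == (q and p)                          # Commutativity
    assert ((p and q) and r) == (p and (q and r))          # Associativity
    assert (p and True) == p and (p or True) == True       # Identity element
    assert (p and (q or r)) == ((p and q) or (p and r))    # Distributivity
    assert (not (p and q)) == ((not p) or (not q))         # De Morgan's law
    assert (not (not p)) == p                              # Double negation
print("All properties hold for every truth assignment.")
```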
Limitations of Propositional logic:
• We cannot represent relations like ALL, some, or none with
propositional logic. Example:
• All the girls are intelligent.
• Some apples are sweet.
• Propositional logic has limited expressive power.
• In propositional logic, we cannot describe statements in terms of their
properties or logical relationships.
Inference:
• In artificial intelligence, we need intelligent computers that can create new logic
from old logic or from evidence; generating conclusions from evidence and facts is
termed Inference.
Inference rules:
• Inference rules are the templates for generating valid arguments. Inference rules are
applied to derive proofs in artificial intelligence, and the proof is a sequence of the
conclusion that leads to the desired goal.
• In inference rules, the implication among all the connectives plays an important role.
Following are some terminologies related to inference rules:
• Implication: It is one of the logical connectives which can be represented as P → Q. It
is a Boolean expression.
• Converse: The converse of implication, which means the right-hand side proposition
goes to the left-hand side and vice-versa. It can be written as Q → P.
• Contrapositive: The negation of converse is termed as contrapositive, and it can be
represented as ¬ Q → ¬ P.
• Inverse: The negation of implication is called inverse. It can be represented as ¬ P →
¬ Q.
• From the above terms, some of the compound statements are equivalent to each
other, which we can prove using a truth table:
Hence from the above truth table, we can prove that P → Q is equivalent to ¬ Q
→ ¬ P, and Q→ P is equivalent to ¬ P → ¬ Q.
Types of Inference rules:
1. Modus Ponens:
• The Modus Ponens rule is one of the most important rules of inference, and it
states that if P and P → Q are true, then we can infer that Q will be true. It can
be represented as: P → Q, P ∴ Q.
• Example:
• Statement-1: "If I am sleepy then I go to bed" ==> P→ Q
Statement-2: “I am sleepy" ==>P
Conclusion: "I go to bed." ==> Q.
Hence, we can say that, if P→ Q is true and P is true then Q will be true.
• Proof by Truth table:
2. Modus Tollens:
• The Modus Tollens rule states that if P → Q is true and ¬Q is true, then ¬P
will also be true. It can be represented as: P → Q, ¬Q ∴ ¬P.
• Example:
• Statement-1: "If I am sleepy then I go to bed." ==> P → Q
Statement-2: "I do not go to bed." ==> ¬Q
Conclusion: "I am not sleepy." ==> ¬P
• Proof by truth-table:
5. Addition:
• The Addition rule is one of the common inference rules, and it states that if P is true,
then P ∨ Q will be true. It can be represented as: P ∴ P ∨ Q.
• Example:
• Statement: I have a vanilla ice-cream. ==> P
Statement-2: I have Chocolate ice-cream.
Conclusion: I have vanilla or chocolate ice-cream. ==> (P∨Q)
• Proof by Truth-Table:
6. Simplification:
• The Simplification rule states that if P ∧ Q is true, then P (and likewise Q) will also be true.
It can be represented as: P ∧ Q ∴ P.
• Proof by Truth-Table:
7. Resolution:
• The Resolution rule states that if P ∨ Q and ¬P ∨ R are true, then Q ∨ R will
also be true. It can be represented as: P ∨ Q, ¬P ∨ R ∴ Q ∨ R.
• Proof by Truth-Table:
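Each of the truth-table proofs referenced above (Modus Ponens, Modus Tollens, Addition, Simplification, Resolution) amounts to checking that "premises imply conclusion" is a tautology. The following sketch is illustrative Python; is_tautology is a hypothetical helper written here, not a library function.

```python
# A sketch of a generic truth-table prover: an inference rule is valid when
# the formula "premises → conclusion" is true under every assignment.
from itertools import product

def implies(a, b):
    return (not a) or b

def is_tautology(formula, n_vars):
    return all(formula(*vals) for vals in product([True, False], repeat=n_vars))

# Modus Ponens:   (P ∧ (P → Q)) → Q
print(is_tautology(lambda p, q: implies(p and implies(p, q), q), 2))          # True
# Modus Tollens:  ((P → Q) ∧ ¬Q) → ¬P
print(is_tautology(lambda p, q: implies(implies(p, q) and not q, not p), 2))  # True
# Addition:       P → (P ∨ Q)
print(is_tautology(lambda p, q: implies(p, p or q), 2))                       # True
# Simplification: (P ∧ Q) → P
print(is_tautology(lambda p, q: implies(p and q, p), 2))                      # True
# Resolution:     ((P ∨ Q) ∧ (¬P ∨ R)) → (Q ∨ R)
print(is_tautology(lambda p, q, r: implies((p or q) and ((not p) or r), q or r), 3))  # True
```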
Reasoning in Artificial intelligence
• Reasoning:
• Reasoning is the mental process of deriving logical conclusions and making
predictions from available knowledge, facts, and beliefs. Or we can say,
"Reasoning is a way to infer facts from existing data." It is a general process of
thinking rationally to find valid conclusions.
• In artificial intelligence, reasoning is essential so that the machine can also think
rationally like a human brain, and can perform like a human.
Types of Reasoning
• Deductive reasoning
• Inductive reasoning
• Abductive reasoning
• Common Sense Reasoning
• Monotonic Reasoning
• Non-monotonic Reasoning
Terminologies
• A hypothesis is an assumption, an idea that is proposed for the sake of argument so
that it can be tested to see if it might be true.
• Premise means a statement taken to be true and used as a basis for argument or
reasoning.
Premise 1: John does not like any sour things.
Premise 2: All lemons are sour.
Conclusion: John does not like lemons.
• Patterns are a set of statements arranged in a sequence such that they are related to
each other in a specific rule. These rules define a way to calculate or solve problems.
For example, in a sequence of 3,6,9,12,_, each number is increasing by 3.
• An axiom is an unprovable rule or first principle accepted as true because it is self-
evident or particularly useful.
• Heuristic knowledge represents the knowledge of experts in a field or subject.
Heuristic knowledge consists of rules of thumb based on previous experiences and
awareness of approaches that tend to work but are not guaranteed.
1. Deductive reasoning:
• Deductive reasoning is deducing new information from logically related known
information. It is the form of valid reasoning, which means the argument's conclusion
must be true when the premises are true.
• Deductive reasoning is a type of propositional logic in AI, and it requires various rules
and facts. It is sometimes referred to as top-down reasoning, and contradictory to
inductive reasoning.
• In deductive reasoning, the truth of the premises guarantees the truth of the
conclusion.
• Example:
• Premise-1: All humans eat veggies.
• Premise-2: Suresh is human.
• Conclusion: Suresh eats veggies.
The general process of deductive reasoning is given below:
2. Inductive Reasoning:
• Inductive reasoning is a form of reasoning that arrives at a conclusion using a limited set of facts
by the process of generalization. It starts with a series of specific facts or data and reaches a
general statement or conclusion.
• Inductive reasoning is a type of propositional logic, which is also known as cause-effect
reasoning or bottom-up reasoning.
• In inductive reasoning, we use historical data or various premises to generate a generic rule,
for which premises support the conclusion.
• Example:
• Premise: All of the pigeons we have seen in the zoo are white.
• Conclusion: Therefore, we can expect all the pigeons to be white.
3. Abductive reasoning:
• Abductive reasoning is a form of logical reasoning which starts with one or more
observations and then seeks to find the most likely explanation or conclusion for the
observations.
• Abductive reasoning is an extension of deductive reasoning, but in abductive
reasoning, the premises do not guarantee the conclusion.
• Example:
• Implication: Cricket ground is wet if it is raining
• Axiom: Cricket ground is wet.
• Conclusion: It is raining.
Detailed Examples Of Abductive Reasoning
1. Dew On Morning Grass
• Scenario: “When I went outside this morning, the grass was completely covered with
dew. It must have rained last night.”
• Here, we can see how the hypothesis put forward, i.e., ‘it must have rained last
night,’ is an inference or an incomplete observation. The person making the
observation did not in fact witness that it had rained the previous night. Rather, she is
inferring or using abductive logic to arrive at the best possible explanation for the
given conclusion.
• Given that the grass is wet (conclusion) it is probable that it rained last night
(hypothesis).
• Notice though that there could be some other alternative explanation for the grass
being wet; it’s possible, for example, that she forgot that the sprinklers turn on
automatically in the morning.
• This illustrates how abductive reasoning can sometimes result in an incorrect
assumption or explanation as to why something took place.
Detailed Examples Of Abductive Reasoning
2. Getting Home Late From Work
• Scenario: Usually, my partner gets home from work at around 6. It’s now 7
o’clock, so she must be stuck in bad traffic.
• The explanation that has the most explanatory power, so to speak, is that she
is late to get home because she must be stuck in traffic.
• Though the person cannot conclude this is true until he or she confirms it with
their partner, it’s a reasonable and logical assumption to make.
4. Common Sense Reasoning
• Common sense reasoning is an informal form of reasoning, which can be
gained through experiences.
• Common Sense reasoning simulates the human ability to make presumptions
about events which occur every day.
• It relies on good judgment rather than exact logic and operates on heuristic
knowledge and heuristic rules.
• Example:
1.One person can be at one place at a time.
2.If I put my hand in a fire, then it will burn.
• The above two statements are the examples of common sense reasoning
which a human mind can easily understand and assume.
5. Monotonic Reasoning:
• In monotonic reasoning, once a conclusion is drawn, it will remain the same even if
we add some other information to the existing information in our knowledge base. In
monotonic reasoning, adding knowledge does not decrease the set of propositions that can
be derived.
• To solve monotonic problems, we can derive the valid conclusion from the available facts
only, and it will not be affected by new facts.
• Monotonic reasoning is not useful for the real-time systems, as in real time, facts get
changed, so we cannot use monotonic reasoning.
• Monotonic reasoning is used in conventional reasoning systems, and a logic-based system
is monotonic.
• Any theorem proving is an example of monotonic reasoning.
• Example:
• Earth revolves around the Sun.
• It is a true fact, and it cannot be changed even if we add another sentence to the knowledge
base like, "The moon revolves around the earth" or "Earth is not round," etc.
Monotonic Reasoning example:
• This reasoning refers to the kind of reasoning in which facts or information
will not change under any condition or circumstance, even if you add millions of new
records to the data. We can consider them universal truths.
• For example, if we have 2 + 3 = 5, then it will not change, even after the
addition of many other records. Other examples: "Gravity always pulls objects
towards itself", or "The Sun rises in the East and sets in the West". These conclusions
will never change, no matter how much additional information is added to the
database.
Advantages and Disadvantages of Monotonic
Reasoning:
• Advantages of Monotonic Reasoning:
• In monotonic reasoning, each old proof will always remain valid.
• If we deduce some facts from available facts, then they will always remain
valid.
• Disadvantages of Monotonic Reasoning:
• We cannot represent the real world scenarios using Monotonic reasoning.
• Hypothesis knowledge cannot be expressed with monotonic reasoning,
which means facts should be true.
• Since we can only derive conclusions from the old proofs, new knowledge
from the real world cannot be added.
6. Non-monotonic Reasoning
• In Non-monotonic reasoning, some conclusions may be invalidated if we add some
more information to our knowledge base.
• Logic will be said as non-monotonic if some conclusions can be invalidated by
adding more knowledge into our knowledge base.
• Non-monotonic reasoning deals with incomplete and uncertain models.
• "Human perceptions for various things in daily life, "is a general example of non-
monotonic reasoning.
• Example: Suppose the knowledge base contains the following knowledge:
• Birds can fly
• Penguins cannot fly
• Pitty is a bird
• So from the above sentences, we can conclude that Pitty can fly.
• However, if we add another sentence to the knowledge base, "Pitty is a penguin",
then we conclude "Pitty cannot fly", which invalidates the above conclusion.
Non-monotonic Reasoning example
• This reasoning refers to the part of reasoning in which information may change
in some condition, or after adding some additional information, previously
added facts can be changed.
• For Example, consider the following statements:
1.John loves Ice-Cream and Chocolates.
2.John hates nuts and cannot eat them in any condition.
3.“Temptations” is a chocolate.
• So, till here, if we apply reasoning, we can deduce that John loves Temptations.
• But now, if I add one more statement, "Temptations contains fruits and
nuts", our previously deduced conclusion no longer holds. Since our
conclusion changed with the addition of more information, this belongs to
the category of non-monotonic reasoning (a small sketch follows).
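A tiny sketch of this behaviour (my own framing, using a hypothetical john_loves helper and a dictionary knowledge base) shows a conclusion being withdrawn when new information is added:

```python
# Illustrative sketch of non-monotonic behaviour: the conclusion is recomputed
# from the knowledge base and can be invalidated by a newly added fact.
def john_loves(item, kb):
    # Default rule: John loves chocolates, unless the item is known to contain nuts.
    props = kb.get(item, set())
    return "chocolate" in props and "contains_nuts" not in props

kb = {"Temptations": {"chocolate"}}
print(john_loves("Temptations", kb))     # True  -- concluded from current knowledge

kb["Temptations"].add("contains_nuts")   # new information arrives
print(john_loves("Temptations", kb))     # False -- the earlier conclusion is withdrawn
```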
Advantages and Disadvantages of Non-monotonic
reasoning:
• Advantages of Non-monotonic reasoning:
• For real-world systems such as Robot navigation, we can use non-monotonic
reasoning.
• In Non-monotonic reasoning, we can choose probabilistic facts or can make
assumptions.
• Disadvantages of Non-monotonic Reasoning:
• In non-monotonic reasoning, the old facts may be invalidated by adding new
sentences.
• It cannot be used for theorem proving.
First-Order Logic in Artificial intelligence
• In the topic of propositional logic, we have seen how to represent
statements using propositional logic. But unfortunately, in propositional logic,
we can only represent facts, which are either true or false.
• PL is not sufficient to represent complex sentences or natural language
statements. Propositional logic has very limited expressive power.
Consider the following sentences, which we cannot represent using propositional logic:
• "Some humans are intelligent", or
• "Sachin likes cricket."
• To represent the above statements, propositional logic is not sufficient, so we require
a more powerful logic, such as first-order logic.
First-Order logic:
• First-order logic is another way of knowledge representation in artificial intelligence.
It is an extension to propositional logic.
• FOL is sufficiently expressive to represent the natural language statements in a
concise way.
• First-order logic is also known as Predicate logic or First-order predicate logic.
First-order logic is a powerful language that expresses information about the objects
in a more natural way and can also express the relationships between those objects.
• First-order logic (like natural language) does not only assume that the world contains
facts like propositional logic but also assumes the following things in the world:
• Objects: A, B, people, numbers, colors, wars, theories, squares, pits, wumpus, ......
• Relations: It can be a unary relation such as: red, round, is adjacent; or an n-ary relation such
as: the sister of, brother of, has color, comes between
• Function: Father of, best friend, third inning of, end of, ......
• As a natural language, first-order logic also has two main parts:
1. Syntax
2. Semantics
Syntax of First-Order logic:
• The syntax of FOL determines which collection of symbols is a logical
expression in first-order logic. The basic syntactic elements of first-order logic
are symbols. We write statements in short-hand notation in FOL.
• Following are the basic elements of FOL syntax: constant symbols, variables,
predicate symbols, function symbols, connectives, equality, and the quantifiers ∀ and ∃.
Inference in First-Order Logic uses the following quantifier rules:
1. Universal Generalization:
• Universal Generalization states that if premise P(c) is true for any arbitrary element c
in the universe of discourse, then we can conclude ∀x P(x).
• This rule can be used if we want to show that every element has a similar
property.
• In this rule, x must not appear as a free variable.
• Example: Let P(c): "A byte contains 8 bits"; since this holds for an arbitrary byte c,
∀x P(x), "All bytes contain 8 bits", is also true.
2. Universal Instantiation:
• Universal Instantiation, also called universal elimination or UI, is a valid
inference rule. It can be applied multiple times to add new sentences.
• The new KB is logically equivalent to the previous KB.
• As per UI, we can infer any sentence obtained by substituting a ground
term for the variable.
• The UI rule states that we can infer any sentence P(c), obtained by substituting a ground
term c (a constant within the domain of x) for the variable in ∀x P(x), for any object in the
universe of discourse.
• It can be represented as: from ∀x P(x), infer P(c).
• Example:1.
• IF "Every person like ice-cream"=> ∀x P(x) so we can infer that
"John likes ice-cream" => P(c)
• Example: 2.
• Let's take a famous example,
• "All kings who are greedy are Evil." So let our knowledge base contains this
detail as in the form of FOL:
• ∀x King(x) ∧ Greedy(x) → Evil(x)
• So from this information, we can infer any of the following statements using
Universal Instantiation:
• King(John) ∧ Greedy (John) → Evil (John),
• King(Richard) ∧ Greedy (Richard) → Evil (Richard),
• King(Father(John)) ∧ Greedy (Father(John)) → Evil (Father(John)),
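Universal Instantiation is essentially substitution of ground terms for the quantified variable. A small illustrative Python sketch follows (the template string and term list are assumptions made here, not from the slides):

```python
# Sketch of Universal Instantiation: substitute ground terms for the
# universally quantified variable x in King(x) ∧ Greedy(x) → Evil(x).
rule_template = "King({x}) ∧ Greedy({x}) → Evil({x})"
ground_terms = ["John", "Richard", "Father(John)"]

for term in ground_terms:
    print(rule_template.format(x=term))
# King(John) ∧ Greedy(John) → Evil(John)
# King(Richard) ∧ Greedy(Richard) → Evil(Richard)
# King(Father(John)) ∧ Greedy(Father(John)) → Evil(Father(John))
```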
3. Existential Instantiation:
• Existential Instantiation, also called Existential Elimination, is a valid
inference rule in first-order logic.
• It can be applied only once to replace the existential sentence.
• The new KB is not logically equivalent to old KB, but it will be satisfiable if
old KB was satisfiable.
• This rule states that one can infer P(c) from the formula given in the form of ∃x
P(x) for a new constant symbol c.
• The restriction with this rule is that c used in the rule must be a new term for
which P(c) is true.
• It can be represented as: from ∃x P(x), infer P(c), for a new constant symbol c.
Example:
• From the given sentence: ∃x Crown(x) ∧ OnHead(x, John),
• So we can infer: Crown(K) ∧ OnHead( K, John), as long as K does not
appear in the knowledge base.
• The constant symbol K used above is called a Skolem constant.
• The Existential instantiation is a special case of Skolemization process.
4. Existential introduction
• An existential introduction is also known as an existential generalization, which
is a valid inference rule in first-order logic.
• This rule states that if there is some element c in the universe of discourse
which has a property P, then we can infer that there exists something in the
universe which has the property P.
• It can be represented as: from P(c), infer ∃x P(x).
Truth table for conjunction (p ∧ q):
p     q     p ∧ q
T     T     T
T     F     F
F     T     F
F     F     F

Truth table for disjunction (p ∨ q):
p     q     p ∨ q
T     T     T
T     F     T
F     T     T
F     F     F
Forward Chaining example:
• Goal state: Z
• Rules:
F ∧ B → Z
C ∧ D → F
A → D
• Let's assume the initial database contains A, B, C, E.
• Forward chaining fires A → D (adding D), then C ∧ D → F (adding F), then F ∧ B → Z, reaching the goal Z (see the sketch below).
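A minimal Python sketch of this forward-chaining run (rule and fact names follow the example above; the code itself is illustrative, not an algorithm taken from the slides):

```python
# Sketch of propositional forward chaining on the example rules and database.
rules = [(("F", "B"), "Z"),   # F ∧ B → Z
         (("C", "D"), "F"),   # C ∧ D → F
         (("A",),     "D")]   # A → D
facts = {"A", "B", "C", "E"}  # initial database
goal = "Z"

changed = True
while changed and goal not in facts:
    changed = False
    for premises, conclusion in rules:
        if conclusion not in facts and all(p in facts for p in premises):
            facts.add(conclusion)   # fire the rule and add the new fact
            print(f"{' ∧ '.join(premises)} → {conclusion} fires; database = {sorted(facts)}")
            changed = True

print("Goal reached!" if goal in facts else "Goal not derivable.")
```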
Backward Chaining in AI: Artificial Intelligence
• In propositional logic, backward chaining begins from the goal and using the
given propositions, it proves the asked goal.
• There is a backward chaining algorithm which is used to perform backward
chaining for the given axioms.
• The first algorithm given is known as the Davis-Putnam algorithm.
• It was proposed by Martin Davis and Hilary Putnam in 1960. In 1962,
Davis, Logemann and Loveland presented a new version of the Davis-
Putnam algorithm.
• It was named DPLL after the initials of all four authors. The revised
algorithm (DPLL) takes as input a statement in CNF (conjunctive normal form).
Example of Backward Chaining in Propositional
Logic
• Given that:
1.If D barks and D eats bone, then D is a dog.
2.If V is cold and V is sweet, then V is ice-cream.
3.If D is a dog, then D is black.
4.If V is ice-cream, then it is Vanilla.
• Apply backward chaining using the given known facts to prove that Tomy is black.
The known facts are:
• Tomy barks.
• Tomy eats bone.
Solution:
1.On replacing D with Tomy in (3), it becomes:
• If Tomy is a dog, then Tomy is black.
• Thus, the goal is matched with the above axiom.
• Now, we have to prove Tomy is a dog. …(new goal)
• Replace D with Tomy in (1), it will become:
• If Tomy barks and Tomy eats bone, then Tomy is a dog. …(new goal)
• Again, the goal is achieved.
• Now, we have to prove that Tomy barks and Tomy eats bone. …(new goal)
• As we can see, the goal is a combination of two sentences which can be further divided as:
• Tomy barks.
• Tomy eats bone.
• From (1), it is clear that Tomy is a dog.
• Hence, Tomy is black.
• Note: Statements (2) and (4) are not used in proving the given goal, and the goal never
matches the negated versions of the axioms. Modus Ponens is always used, rather than Modus Tollens.
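A small propositional backward-chaining sketch for this example (illustrative Python; the prove helper and the crude replacement of the placeholder D by Tomy are my own simplifications, not the slides' algorithm):

```python
# Sketch of backward chaining for the Tomy example: rules are written as
# (conclusion, [premises]) and a goal is proved by recursively proving sub-goals.
rules = [("D is a dog",     ["D barks", "D eats bone"]),   # (1)
         ("V is ice-cream", ["V is cold", "V is sweet"]),  # (2)
         ("D is black",     ["D is a dog"]),               # (3)
         ("V is vanilla",   ["V is ice-cream"])]           # (4)
facts = {"Tomy barks", "Tomy eats bone"}

def prove(goal):
    if goal in facts:                      # known fact: goal proved
        return True
    for conclusion, premises in rules:
        # crude "unification": replace the placeholder D with Tomy
        conclusion = conclusion.replace("D", "Tomy")
        premises = [p.replace("D", "Tomy") for p in premises]
        if conclusion == goal and all(prove(p) for p in premises):
            return True
    return False

print(prove("Tomy is black"))   # True: Tomy barks ∧ eats bone ⇒ dog ⇒ black
```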
Backward Chaining in FOPL
• In FOPL, backward chaining works backward from the goal, applying the rules to the
known facts that could support the proof.
• Backward chaining is a type of AND/OR search: the goal can be proved by any one rule
in the knowledge base (OR), but all the premises of that rule must be proved (AND).
• A backward chaining algorithm is used to process the backward chaining.
Example:
"As per the law, it is a crime for an American to sell weapons to hostile
nations. Country A, an enemy of America, has some missiles, and all the
missiles were sold to it by Robert, who is an American citizen."
• Prove that "Robert is criminal."
To solve the above problem, first, we will convert all the above facts into first-
order definite clauses, and then we will use the backward-chaining algorithm to
reach the goal.
Facts Conversion into FOL:
It is a crime for an American to sell weapons to hostile nations. (Let's say p, q, and r are variables.)
American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)
Country A has some missiles. This can be written as two definite clauses by using Existential
Instantiation, introducing the new constant T1.
Owns(A, T1) ...(2)
Missile(T1) ...(3)
All of the missiles were sold to country A by Robert.
∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ...(4)
Missiles are weapons.
Missile(p) → Weapon(p) ...(5)
An enemy of America is known as hostile.
Enemy(p, America) → Hostile(p) ...(6)
Country A is an enemy of America.
Enemy(A, America) ...(7)
Robert is American.
American(Robert) ...(8)
Backward-Chaining proof:
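The proof can also be mechanized. Below is an illustrative first-order backward-chaining sketch with a tiny unifier (my own code, assuming flat terms, i.e., constants and "?"-prefixed variables only, which suffices for clauses (1) to (8)):

```python
# Sketch of FOL backward chaining with a minimal unifier for the Robert KB.
rules = [  # (premises, conclusion) for each definite clause (1)-(8) above
    ([("American", "?p"), ("Weapon", "?q"), ("Sells", "?p", "?q", "?r"), ("Hostile", "?r")],
     ("Criminal", "?p")),
    ([], ("Owns", "A", "T1")),
    ([], ("Missile", "T1")),
    ([("Missile", "?p"), ("Owns", "A", "?p")], ("Sells", "Robert", "?p", "A")),
    ([("Missile", "?p")], ("Weapon", "?p")),
    ([("Enemy", "?p", "America")], ("Hostile", "?p")),
    ([], ("Enemy", "A", "America")),
    ([], ("American", "Robert")),
]

def walk(theta, t):
    while t.startswith("?") and t in theta:   # follow variable bindings
        t = theta[t]
    return t

def unify(a, b, theta):
    if len(a) != len(b):
        return None
    for x, y in zip(a, b):
        x, y = walk(theta, x), walk(theta, y)
        if x == y:
            continue
        if x.startswith("?"):
            theta = {**theta, x: y}
        elif y.startswith("?"):
            theta = {**theta, y: x}
        else:
            return None                        # two different constants
    return theta

fresh = [0]
def rename(rule):
    fresh[0] += 1                              # fresh variables for each rule use
    r = lambda t: t + str(fresh[0]) if t.startswith("?") else t
    premises, conclusion = rule
    return [tuple(map(r, p)) for p in premises], tuple(map(r, conclusion))

def prove(goals, theta):
    if not goals:
        return theta
    first, rest = goals[0], goals[1:]
    for rule in rules:
        premises, conclusion = rename(rule)
        t2 = unify(first, conclusion, theta)
        if t2 is not None:
            result = prove(premises + rest, t2)   # depth-first over sub-goals
            if result is not None:
                return result
    return None

print(prove([("Criminal", "Robert")], {}) is not None)   # True: Robert is a criminal
```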
Difference between Forward Chaining and Backward Chaining:
   Forward Chaining | Backward Chaining
3. Forward chaining is known as a data-driven inference technique, as we reach the goal using the available data. | Backward chaining is known as a goal-driven technique, as we start from the goal and divide it into sub-goals to extract the facts.
4. Forward chaining reasoning applies a breadth-first search strategy. | Backward chaining reasoning applies a depth-first search strategy.
5. Forward chaining tests all the available rules. | Backward chaining tests only the few required rules.
6. Forward chaining is suitable for planning, monitoring, control, and interpretation applications. | Backward chaining is suitable for diagnostic, prescription, and debugging applications.
7. Forward chaining can generate an infinite number of possible conclusions. | Backward chaining generates a finite number of possible conclusions.
8. It operates in the forward direction. | It operates in the backward direction.
9. Forward chaining is aimed at any conclusion. | Backward chaining is aimed only at the required data.
Questions:
3. Discuss the syntax and semantics of Propositional Logic.
4. Differentiate between forward and backward reasoning.
5. Express the following sentences in Propositional Logic.
• a. "It is raining today, and the street is wet."
• b. "Ankit is a doctor, and his clinic is not in Mumbai."
• c. Ritika is a doctor or an engineer.
• d. If it is raining, then the street is wet.
• e. I am alive. I am breathing.
6. Narrate the resolution process in First Order Logic.
7. Write and explain the backward chaining algorithm.

One-mark questions:
• What is theorem proving?
• What is unification?
• How is an atomic sentence represented?
• Mention the disadvantages of Propositional Logic.
• When is a sentence called satisfiable?
• How are complex sentences represented?
• Represent the sentence "Brothers are siblings" in first-order logic.
• What factors justify whether the reasoning is to be done in forward or backward reasoning?