AI-UNIT-2 PPT

UNIT - II

• Advanced Search: Constructing Search Trees, Stochastic Search, AO* Search Implementation, Minimax Search, Alpha-Beta Pruning
• Basic Knowledge Representation and Reasoning: Propositional Logic, First-Order Logic, Forward Chaining and Backward Chaining, Introduction to Probabilistic Reasoning, Bayes Theorem
Constructing Search Trees
Constructing a search tree lets a program find a potential solution to an unknown question by examining the available data in an organized manner, one piece (or "node") at a time.
• Suppose one is writing a program for an airline to find a way for a customer to get from City A to City B, given a list of all of the possible flights between any two cities. The program will need to go through the list and determine which flight (or connecting series of flights) the customer should take. This is implemented using a search tree. City A (where the customer is currently located) will be the initial piece of information, or "initial node." The search tree will consist of this initial node at the top of the tree. Next, the program looks at all of the flights beginning at City A. These flights lead to the cities the customer could reach in just one flight, so these cities (each city a node itself) are on the next level, below the initial node. Each of these cities has flights as well, which lead to the third level, and the tree continues in this way.
• Choosing a search strategy (e.g., breadth-first or depth-first) tells the program how to progress down the search tree, i.e., down the hierarchy of nodes that has been built from the information. There are clearly many different ways one can get from City A to City B, so the search strategy is the key to determining which path will be selected.
• Search trees, with search strategies, are also used to find solutions in puzzle solving and simple games, such as the Eight Queens Problem, where one tries to place eight queens on a chessboard so that no queen can attack another.
Therefore:
• Create search trees to solve Artificial Intelligence questions. Use each piece of information as a node and construct a hierarchy of nodes based on how the individual pieces of information connect together, so that one can follow a path through a series of nodes to get from one to another.

• Use a SEARCH STRATEGY to determine how to travel down the hierarchy of the search tree. HEURISTICS can be used to give weights or "costs" to the different paths between nodes and help the strategy decide which path to ultimately choose. A small sketch follows.
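Below is a minimal Python sketch of the airline idea, assuming a small, made-up flight table (the city names and routes are illustrative, not from the slides). Breadth-first search expands the implicit search tree level by level from City A until City B is found.

from collections import deque

# Hypothetical flight table: each city maps to the cities reachable in one flight.
flights = {
    "A": ["C", "D"],
    "C": ["B", "E"],
    "D": ["E"],
    "E": ["B"],
    "B": [],
}

def bfs_route(start, goal):
    """Breadth-first search over the implicit search tree of connecting flights."""
    frontier = deque([[start]])          # each entry is a path from the start city
    while frontier:
        path = frontier.popleft()
        city = path[-1]
        if city == goal:
            return path                  # first path found uses the fewest flights
        for nxt in flights.get(city, []):
            if nxt not in path:          # avoid revisiting cities on this path
                frontier.append(path + [nxt])
    return None

print(bfs_route("A", "B"))               # e.g. ['A', 'C', 'B']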
Stochastic Search
• Adversarial search, or game-tree search, is a technique for analyzing
an adversarial game in order to try to determine who can win the
game and what moves the players should make in order to win.
Adversarial search is one of the oldest topics in Artificial Intelligence.
The original ideas for adversarial search were developed by Shannon
in 1950 and independently by Turing in 1951, in the context of the
game of chess—and their ideas still form the basis for the techniques
used today
Game playing:
• A good example of stochastic search is game playing.
• It is an interesting topic because it requires:
intelligence
logical thinking
a rational mind
searching algorithms
• At the same time, we do not know the opponent's choices and ideas.
• It is a multi-agent environment.
• Game playing works on consistent values (a utility factor).
• It is a state-space search, so we use BFS and DFS.
• In this context, one level of depth is called a ply.
• The bigger the game tree is, the more search has to be done.
2-Person Games:
• Players: We call them Max and Min.
• Initial State: Includes board position and whose turn it is.
• Operators: These correspond to legal moves.
• Terminal Test: A test applied to a board position which
determines whether the game is over. In chess, for example, this
would be a checkmate or stalemate situation.
• Utility Function: A function which assigns a numeric value to a
terminal state. For example, in chess the outcome is win (+1),
lose (-1) or draw (0). Note that by convention, we always
measure utility relative to Max.
• Example: Tic-Tac-Toe (game tree with alternating MAX and MIN levels; figure not shown)
AO* Algorithm
• The AO* algorithm performs a best-first search. The AO* method divides a given difficult problem into a smaller group of problems that are then solved using the AND-OR graph concept. AND-OR graphs are specialized graphs used for problems that can be divided into smaller sub-problems. The AND side of the graph represents a set of tasks that must all be completed to achieve the main goal, while the OR side of the graph represents alternative methods for accomplishing the same main goal.
Working of AO* algorithm:
• The evaluation function in AO* looks like this:
f(n) = g(n) + h(n)
f(n) = actual cost so far + estimated cost to go
here,
f(n) = the estimated total cost of a solution through node n.
g(n) = the cost from the initial node to the current node.
h(n) = the estimated cost from the current node to the goal state.
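The slides do not include code, so the following Python fragment is only a sketch, with hypothetical node names, heuristic values and unit edge costs. It illustrates the cost-revision idea behind f(n) = g(n) + h(n) on an AND-OR graph: an OR choice takes the cheapest alternative, while an AND group sums the costs of all of its parts. It is not a full AO* implementation.

# Hypothetical heuristic estimates h(n) for each node.
heuristic = {"A": 1, "B": 5, "C": 2, "D": 4, "E": 7, "F": 4, "H": 0, "I": 0, "J": 0}

# Each entry lists the successor groups of a node: an OR node has several
# one-element groups (alternatives); an AND group has several members.
successors = {
    "A": [["B"], ["C", "D"]],        # A is solved by B, OR by (C AND D)
    "B": [["E"], ["F"]],
    "C": [["H"]],
    "D": [["I", "J"]],
}
EDGE_COST = 1                        # assume every edge costs 1

def revised_cost(node):
    """f(n): heuristic for leaves, otherwise the cheapest successor group,
    where an AND group sums the costs of all its members."""
    if node not in successors:
        return heuristic[node]
    return min(sum(EDGE_COST + revised_cost(m) for m in group)
               for group in successors[node])

print(revised_cost("A"))             # backed-up estimate for the start node (5 here)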
• Game playing mainly works with 2 algorithms:
1. Minimax search
2. Alpha-beta pruning

Minimax search
• It mainly works with the help of a backtracking algorithm.
• A best-move strategy is used:
Max will try to maximize the value (the best move for Max)
Min will try to minimize the value (the worst move for Max)
Minimax Search
1. Generate the whole game tree.
2. Apply the utility function to leaf nodes to get their values.
3. Use the utility of nodes at level n to derive the utility of nodes at
level n-1.
4. Continue backing up values towards the root (one layer at a
time).
5. Eventually the backed up values reach the top of the tree, at
which point Max chooses the move that yields the highest value.
This is called the minimax decision because it maximises the
utility for Max on the assumption that Min will play perfectly to
minimise it.
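As an illustration of these steps, here is a minimal recursive minimax sketch in Python. The game tree is assumed to be given explicitly as nested lists whose innermost values are the leaf utilities (an illustrative tree, not one from the slides).

def minimax(node, maximizing=True):
    """Back up leaf utilities: MAX takes the largest child value, MIN the smallest."""
    if not isinstance(node, list):        # a leaf: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Hypothetical 2-ply tree: MAX to move, each sublist is a MIN node over leaf utilities.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree))                      # 3: the MIN nodes return 3, 2, 2 and MAX picks 3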
Basic Example 1
Example 2:
• Let us assume a MAX node starts at -∞ and a MIN node starts at +∞ (the tree figure is not shown).
Max move (nodes just above the leaves, each initialized to +∞ and taking the minimum of its children):
min(∞, 3) = 3; min(3, 2) = 2; min(2, -1) = -1
min(∞, 1) = 1; min(1, 0) = 0; min(0, 2) = 0
min(∞, 5) = 5; min(5, 4) = 4; min(4, 1) = 1
min(∞, 7) = 7; min(7, 5) = 5
Min move (next level up, each node initialized to -∞ and taking the maximum of its children):
max(-∞, -1) = -1; max(-1, 0) = 0
max(-∞, 1) = 1; max(1, 5) = 5
Max move (root, initialized to +∞):
min(∞, 0) = 0; min(0, 5) = 0, so the backed-up value at the root is 0.
Example 3:
• We have utility values -1, 5, -2, -4, 1, -1, 2, 4.
• Now it is our turn to move, so we choose the best (maximum) value.
• On the next turn the opponent will always choose the value which is worst for us (i.e., for the first player).
Example: Tic-Tac-Toe (minimax)
• Properties of minimax:
• Complete: Yes (if the tree is finite)
• Optimal: Yes (against an optimal opponent)
• Time complexity: O(b^m)
• Space complexity: O(bm) (depth-first exploration)
• For chess, b ≈ 35 and m ≈ 100 for "reasonable" games, so an exact solution is completely infeasible.
Limitations
• Not always feasible to traverse the entire tree
• Time limitations
Alpha-Beta pruning
• In the minimax algorithm we explore all nodes in the given search tree, so the amount of exploration is large.
• We can therefore use an algorithm that reduces the expansion of nodes: alpha-beta pruning.
• Here, pruning means ignoring the branches we do not need to explore.
• It is a method of cutting off the search by exploring a smaller number of nodes.
• In simple words, alpha-beta pruning is defined as cutting off the parts of the tree which we do not need.
• It basically uses 2 values:
• Alpha (α) and Beta (β)
• Alpha = the max value; in the worst case alpha = -∞
• Beta = the min value; in the worst case beta = +∞
• At a MAX level only the value of alpha changes, and the beta value remains constant.
• At a MIN level only the value of beta changes, and the alpha value remains constant.
• Alpha and beta values are never passed back up the tree, i.e., they are only passed down to child nodes.
• We always need to check the condition at each node whether
• Alpha ≥ Beta.
• If it is, we can prune the next path to travel, so that we reduce the expansion of nodes; the resulting cut-offs are called the alpha cut-off and the beta cut-off. A small sketch follows.
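Here is a minimal alpha-beta sketch in Python, using the same nested-list tree representation as the minimax sketch above (a didactic sketch, not an optimized game engine).

import math

def alphabeta(node, maximizing=True, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta cut-offs on a nested-list game tree."""
    if not isinstance(node, list):               # leaf: return its utility
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                    # beta cut-off: MIN will never allow this
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:                    # alpha cut-off: MAX will never allow this
                break
        return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree))                           # 3, the same answer as plain minimax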
Example for alpha beta pruning
• After the terminal (leaf) nodes we have a MAX level.
• At MAX:
Initial value of alpha = -∞ (alpha changes at a MAX level) and beta = +∞ (remains constant)
max(-∞, 2) = 2
max(2, 3) = 3
At node E the value is 3 (E ≥ 3).
• At MIN:
The next node up is B (its value is ≤ 3).
To evaluate the next part, initially alpha = -∞ and beta = +∞.
But now alpha = -∞ and
beta = 3 (beta changes at a MIN level).
At node B the value is 3 (≤ 3).
• At MAX:
MAX node 'F' takes over the values from node 'B': alpha = -∞, beta = 3.
max(-∞, 5) = 5
Now alpha = 5, which changes at node 'F' (F ≥ 5), and beta = 3 (remains constant at a MAX level).
Now check the condition α ≥ β:
5 ≥ 3 is true,
so prune the remaining path below F.
• Now again at B: min(3, 5) = 3 (B ≤ 3).
• At 'A' (MAX):
alpha = 3 (A ≥ 3)
beta = +∞
• At 'C' (MIN):
alpha = 3
beta = +∞
• At 'G' (MAX):
alpha = 3
beta = +∞
But 'G' has only one possible child, so there is nothing to compare: G = 0.
• At 'C' (MIN):
alpha = 3
beta = 0
α ≥ β (3 ≥ 0) is true, so prune the remaining children of C.

• At node 'I' (MAX):
Initially alpha = 3 and beta = +∞.
Now at 'I', alpha = 3 and beta = +∞.
The backed-up value is I = 3.
• At node 'D' (MIN): D = 3 so far.
Initially alpha = 3 and beta = +∞.
Now at node 'D', alpha = 3 and beta = 2.
α ≥ β (3 ≥ 2) is true, so prune the remaining children of D.
• The best case of alpha-beta pruning is O(b^(d/2)).
• d is the depth of the tree (ply).
• O(b^(d/2)) is much smaller than the O(b^d) cost of plain minimax.
Basic Knowledge Representation
and Reasoning:
• Intelligence: a machine requires intelligence to perform a task, i.e., the ability to use knowledge.
• The important factor of intelligence is knowledge:
How to represent knowledge
How to give knowledge to a machine
How to store it on a machine
• Reasoning: it is defined in different ways, such as the processing of knowledge, the capability of thinking, analysing, and reaching valid conclusions.
• Syntax: the grammar of a language.
• Semantics: the meaning of a sentence in a language.
If there is no proper or good representation, problems show up in these two forms, i.e., syntax and semantics.

The inference mechanism reads the environment, interprets the knowledge represented in a suitable language, and acts accordingly.

In simple words, knowledge representation requires:
a language, and
a method to use that language.

• So there must be a method or technique to represent the knowledge; this is called logic (propositional logic and first-order predicate logic).
Propositional Logic
• It is one of the knowledge representation techniques.
• Propositional logic is one of the simplest methods of knowledge representation.
• A proposition is simply a declarative sentence written in an English-like, programming, or mathematical language.
• It is a language with concrete rules which deals with propositions and has no ambiguity in representation.
• Precisely, it consists of atomic and complex sentences; these are the two kinds of representation handled by its syntax and semantics.
• This representation supports inference and can be translated into logic using syntax and semantics.
• Syntax: a well-formed sentence in the language with the proper structure.
• Semantics: defines the truth or meaning of a sentence.
• A proposition is a declarative statement that is either TRUE or FALSE.
• Connectives

Word              Symbol   Example
Not               ¬        ¬A
And               ∧        A ∧ B
Or                ∨        A ∨ B
Implies           →        A → B
If and only if    ↔        A ↔ B

Truth tables:

P   Q   P ↔ Q          P   ¬P
T   T     T            T    F
T   F     F            F    T
F   T     F
F   F     T
"If the roads are wet then it rains" is meaningless, because there might be other reasons for the roads to be wet.

"I go to the mall if I have to do shopping" (the direction of the implication matters).
Example 1
• A: It is hot
• B: It is humid
• C: It is raining
Write the propositional statements:
If it is humid then it is hot: B → A
If it is hot and humid then it is not raining: (A ∧ B) → ¬C
Example 2:
P: You can access the internet from campus
Q: You are a CSE student
R: You are a freshman
You can access the internet from campus only if you are a CSE student or you are not a freshman: P → (Q ∨ ¬R)
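As a small illustration, the following Python sketch enumerates the truth table of the Example 2 sentence P → (Q ∨ ¬R), treating implication as "(not P) or ..." (the helper function name is ours, not from the slides).

from itertools import product

def implies(a, b):
    """Material implication: a -> b is false only when a is true and b is false."""
    return (not a) or b

print("P     Q     R     P -> (Q v ~R)")
for p, q, r in product([True, False], repeat=3):
    value = implies(p, q or (not r))
    print(f"{p!s:5} {q!s:5} {r!s:5} {value}")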
First-Order Logic
• It is another way of representing knowledge in AI, and is considered an extension of propositional logic (PL).
• FOL is also known as predicate logic.
• FOL is a powerful language that expresses information about objects in a natural way, can express the relationships between those objects, and can state facts about some or all of the objects in a domain.
• Simply put, it assumes the world contains objects, relations and functions.
• Like a natural language, FOL has two parts: syntax and semantics.
• The basic syntax elements are: constants (terms), variables (terms), functions (terms), predicates, connectives, equality, and quantifiers.
• Terms: for example, Tommy is a term; Men is a term.
• Predicates:
• These are sentences formed from a predicate symbol followed by parentheses () containing a sequence of terms.
• The representation is:
Predicate(term1, term2, ..., termN)
• Example:
• Hari and Raghu are brothers → brothers(Hari, Raghu)
• Tommy is a dog → dog(Tommy)
• Quantifiers:
• A quantifier is a language element which generates quantification.
• These are symbols that allow us to determine or identify the range and scope of a variable in a logical expression.
• The universal quantifier (∀) is a symbol of logical representation which specifies that the statement within its range is true for everything.
• With universal quantifiers we typically use implication (→) as the connective.
Example: if x is a variable, then ∀x is read in different ways, such as "for all x", "for each x", or "for every x".
• The existential quantifier (∃) expresses that the statement within its scope is true for at least one instance of something.
• With existential quantifiers we typically use conjunction (∧) as the connective.
Example: if x is a variable, then ∃x is read in different ways, such as "there exists an x", "for some x", or "for at least one x".
Example:
• John likes all kinds of food.
• Apples and vegetables are food.
• Anything anyone eats and is not killed by is food.
• Anil eats peanuts and is still alive.
• Harry eats everything that Anil eats.
• John likes peanuts.

Convert all the above statements into predicate logic (FOL). (The same statements are converted in the resolution example below.)
Inference in FOL:
• Inference is used to deduce new facts or sentences from existing sentences.
• Before looking at the FOL inference rules, let us understand some basic terminology:
Substitution:
A fundamental operation performed on terms and formulas.
Equality:
Atomic sentences are formed not only from predicates and terms but also using equality.
Example:
Brother(John) = Smith
¬(x = y) is equivalent to x ≠ y
Inference rules for quantifiers
• Universal Generalization
• Universal Instantiation
• Existential Instantiation
• Existential Introduction
• Universal Generalization:
A valid inference rule which states that if premise P(c) is true for an arbitrary element c in the universe of discourse, then ∀x P(x) can be concluded. It is represented as:
P(c)
therefore ∀x P(x)
• Universal Instantiation:
Also called universal elimination; it can be applied multiple times to add new sentences.
∀x P(x)
therefore P(c)
• Existential Instantiation:
Also called existential elimination; it can be applied only once, to replace the existential sentence.
∃x P(x)
therefore P(c), for a new constant c
• Existential Introduction:
States that if there is some element c in the universe of discourse which has property P, then ∃x P(x) follows.
P(c)
therefore ∃x P(x)
• Example 1: Let P(c) be "A byte contains 8 bits"; then ∀x P(x), "All bytes contain 8 bits", will also be true (Universal Generalization).
• Example 2: If "Every person likes ice-cream", ∀x P(x), then we can infer that "John likes ice-cream", P(John) (Universal Instantiation).
• Example 3: From the given sentence ∃x Crown(x) ∧ OnHead(x, John) we can infer Crown(K) ∧ OnHead(K, John), as long as K does not appear elsewhere in the knowledge base (Existential Instantiation).
• Example 4: "Priyanka got good marks in English", therefore "Someone got good marks in English" (Existential Introduction).
Resolution in FOL:
• Resolution is a theorem-proving technique which proves statements by contradiction (refutation).
• It is used when there are several statements and we need to prove a conclusion from those statements.
• Unification is the key concept used in proving the conclusion.
• Resolution is a single inference rule which can operate efficiently on Horn clauses and definite clauses written in conjunctive normal form (clausal form).
Clause: a disjunction of literals.
Conjunctive Normal Form (CNF): a sentence represented as a conjunction of clauses.
Steps for resolution:
• Convert the sentences into FOL.
• Convert the FOL statements into CNF.
• Negate the statement to be proved (proof by contradiction).
• Draw the resolution graph (using unification).
Example:
John likes all kinds of food.
Apples and vegetables are food.
Anything anyone eats and is not killed by is food.
Anil eats peanuts and is still alive.
Harry eats everything that Anil eats.
Prove by resolution that John likes peanuts.
Step 1: Convert the given statements into FOL.
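One possible conversion is sketched below (the predicate names food, likes, eats, killed and alive are assumptions chosen for illustration, not taken from the slides):
∀x food(x) → likes(John, x)
food(Apple) ∧ food(Vegetables)
∀x ∀y (eats(x, y) ∧ ¬killed(x)) → food(y)
eats(Anil, Peanuts) ∧ alive(Anil)
∀x eats(Anil, x) → eats(Harry, x)
Conclusion to prove: likes(John, Peanuts)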
• Step 2: Conversion of FOL into CNF
(i) Eliminate all implications and rewrite.
(ii) Move negation inwards and rewrite.
(iii) Rename (standardize) variables.
(iv) Eliminate existential quantifiers: not needed here, because none of the statements contains an existential quantifier.
(v) Drop universal quantifiers.
Step 3: Negate the statement to be proved.
Here we apply negation to the conclusion statement, which is represented as:

~likes(John, Peanuts)

Step 4: Draw the resolution graph (unification).
Here we solve the problem with a resolution tree, using substitution; the resolution graph is built from the clauses obtained above.
Forward chaining and Backward
chaining
• In artificial intelligence, forward and backward chaining are important topics, but before understanding forward and backward chaining, let us first see where these two terms come from.
Inference engine:
• The inference engine is the component of an intelligent system in artificial intelligence which applies logical rules to the knowledge base to infer new information from known facts. The first inference engines were part of expert systems. An inference engine commonly proceeds in two modes, which are:
1. Forward chaining
2. Backward chaining
• Horn clause and definite clause: a definite clause is a clause with exactly one positive literal, and a Horn clause is a clause with at most one positive literal; forward and backward chaining operate on knowledge bases of such clauses.
• Forward chaining is also known as forward deduction or the forward reasoning method when using an inference engine. Forward chaining is a form of reasoning which starts with the atomic sentences in the knowledge base and applies inference rules (Modus Ponens) in the forward direction to extract more data until a goal is reached.

• The forward-chaining algorithm starts from known facts, triggers all rules whose premises are satisfied, and adds their conclusions to the known facts. This process repeats until the problem is solved.
Forward chaining mechanism
Properties of Forward-Chaining:

• It is a bottom-up approach, as it moves from the bottom to the top.
• It is a process of making conclusions based on known facts or data, starting from the initial state and reaching the goal state.
• The forward-chaining approach is also called data-driven, as we reach the goal using the available data.
• The forward-chaining approach is commonly used in expert systems (such as CLIPS), business rule systems, and production rule systems.
• Consider the following example:
Rule 1: If A and C then F
Rule 2: If A and E then G
Rule 3: If B then E
Rule 4: If G then D

Problem: prove that if A and B are true then D is true (IF A and B THEN D).

In the database (facts) we have A and B.
In the knowledge base:
• A ∧ C → F
• A ∧ E → G
• B → E
• G → D
Firing B → E adds E, then A ∧ E → G adds G, and G → D adds D; we have finally reached the goal state D from the initial states A and B. A small sketch of this follows.
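A minimal forward-chaining sketch in Python for this example, with the rules written as (premises, conclusion) pairs (this representation is an assumption chosen for illustration):

# Rules of the example: (set of premises, conclusion)
rules = [
    ({"A", "C"}, "F"),
    ({"A", "E"}, "G"),
    ({"B"}, "E"),
    ({"G"}, "D"),
]
facts = {"A", "B"}          # database of known facts
goal = "D"

# Repeatedly fire every rule whose premises are satisfied until nothing new is added.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(goal in facts, facts)  # True, {'A', 'B', 'E', 'G', 'D'} (set order may vary)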
Example 2
• Goal state: Z
• Facts/database: A, B, E, C
• Rules:
Rule 1: F ∧ B → Z
Rule 2: C ∧ D → F
Rule 3: A → D
Firing A → D adds D, then C ∧ D → F adds F, and F ∧ B → Z adds Z; we have finally reached the conclusion state Z.
• Backward-chaining is also known as a backward deduction or
backward reasoning method when using an inference engine. A
backward chaining algorithm is a form of reasoning, which starts with
the goal and works backward, chaining through rules to find known
facts that support the goal.
• In other words
Backward chaining mechanism
Properties of backward chaining:

• It is known as a top-down approach.


• Backward-chaining is based on modus ponens inference rule.
• In backward chaining, the goal is broken into sub-goal or sub-goals to
prove the facts true.
• It is called a goal-driven approach, as a list of goals decides which rules
are selected and used.
• Backward-chaining algorithms are used in game theory, automated theorem-proving tools, inference engines, proof assistants, and various AI applications.
• The backward-chaining method mostly used a depth-first search
strategy for proof.
Example:
• Goal state: Z
• Facts/database: A, E, B, C
• Rules (knowledge base):
Rule 1: F ∧ B → Z
Rule 2: C ∧ D → F
Rule 3: A → D
• To prove Z we need F and B; B is a fact, and F needs C and D; C is a fact, and D follows from the fact A. So Z is finally reached and added into the facts/database, as sketched below.
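A minimal backward-chaining sketch in Python for the same example; it recursively tries to prove the goal, depth-first, from the rules and known facts (this representation is an assumption chosen for illustration):

facts = {"A", "E", "B", "C"}
rules = [
    ({"F", "B"}, "Z"),       # Rule 1: F & B -> Z
    ({"C", "D"}, "F"),       # Rule 2: C & D -> F
    ({"A"}, "D"),            # Rule 3: A -> D
]

def prove(goal, seen=frozenset()):
    """Try to establish goal: either it is a known fact, or some rule
    concludes it and all of that rule's premises can themselves be proved."""
    if goal in facts:
        return True
    if goal in seen:                       # avoid looping on circular rules
        return False
    for premises, conclusion in rules:
        if conclusion == goal and all(prove(p, seen | {goal}) for p in premises):
            return True
    return False

print(prove("Z"))                          # True: Z <- F & B, F <- C & D, D <- A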
Introduction to probabilistic
reasoning
• Reasoning:
Atomic events
• In probability theory, an elementary event, also called
an atomic event or sample point, is an event which contains only a
single outcome in the sample space. Using set theory terminology, an
elementary event is a singleton.
Posterior probability: the probability calculated after all prior information has been taken into account. The posterior probability is the combination of the prior probability and the new information.
• Example: rolling a die (6 faces)
• Sample space (Ω) = {1, 2, 3, 4, 5, 6}
• Events: even = {2, 4, 6} and odd = {1, 3, 5}
P(6) = 1/6
P(even) = 3/6 = 1/2
P(odd) = 3/6 = 1/2
Conditional probability: what is the probability of getting a 6 given that the number is even?
P(A|B) = P(A ∩ B)/P(B) = (1/6)/(1/2) = 1/3
Bayes' theorem
• Bayes' theorem is also known as Bayes' rule, Bayes' law,
or Bayesian reasoning, which determines the probability of an event
with uncertain knowledge. In probability theory, it relates the
conditional probability and marginal probabilities of two random
events.

• Bayes Theorem: an important branch of applied statistics, called Bayes analysis, can be developed out of conditional probability. Given the outcome of the second of two events in a sequence, it is possible to determine the probability of the various possibilities for the first event.
Bayes Theorem
• From conditional probability:
P(A|B) = P(A ∧ B)/P(B), so P(A|B)·P(B) = P(A ∧ B)   ...(1)
P(B|A) = P(B ∧ A)/P(A), so P(B|A)·P(A) = P(B ∧ A)   ...(2)
Since (1) = (2):
• P(A|B)·P(B) = P(B|A)·P(A), from which we get P(A|B) and P(B|A):
P(A|B) = P(B|A)·P(A)/P(B)
P(B|A) = P(A|B)·P(B)/P(A)   (Bayes' rule)

• Calculate the hypothesis of having flu based on symptoms. GIVEN:

P(A): prior probability of flu = 0.00001
P(B|A): probability of these symptoms given flu = 0.95
P(B): probability of the symptoms = 0.01
• P(A|B) = P(B|A)·P(A)/P(B)
= 0.95 × 0.00001 / 0.01 = 0.00095 (less than 1 in a thousand people)
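A tiny Python check of this calculation, using the numbers assumed above:

def bayes(p_b_given_a, p_a, p_b):
    """Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

print(bayes(0.95, 0.00001, 0.01))   # about 0.00095 -> less than 1 in a thousand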

Example of conditional probability. Given:

P(C) = 70%: children who like chocolates (0.7)
P(C and S) = 35%: children who like both chocolates and strawberry (0.35)
• Calculate the conditional probability that a child who likes chocolates also likes strawberry:
P(S|C) = P(C ∧ S)/P(C)
= 0.35/0.7 = 0.5 (50% of those who like chocolates also like strawberry)
• Example – How to buy a used car
• I am thinking of buying a used car at Honest Ed’s. In order to make an
informed decision, I look up the records in an auto magazine of the car type I
am interested in and find that unfortunately 30% have faulty transmissions.
• To get more information on this particular car at Honest Ed’s I hire a
mechanic who can make a shrewd guess on the basis of a quick drive around
the block. Of course, he isn’t always right but he does have an excellent
record. Of all the faulty cars he has examined in the past, he correctly
pronounced 90% “faulty”. In other words, he wrongly pronounced only 10%
“ok”.
• He has almost as good a record in judging good cars. He has correctly
pronounced 80%”ok”, while he wrongly pronounced only 20% “faulty”.

• "faulty" (in quotation marks) describes the mechanic's opinion.
• faulty (with no quotation marks) describes the actual state of the car.
• Example – How to buy a used car
• What is the chance that the car I’m thinking of buying has a faulty
transmission:
1. Before I hire the mechanic?
2. If the mechanic pronounces it “faulty”?
3. If the mechanic pronounces it “ok”?
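A hedged sketch of how these three questions could be answered with Bayes' rule under the stated figures (30% prior faulty, 90%/10% mechanic accuracy on faulty cars, 80%/20% on good cars):

p_faulty = 0.30                      # prior: 30% of this car type have faulty transmissions
p_good = 1 - p_faulty
p_say_faulty_given_faulty = 0.90     # mechanic says "faulty" when the car is faulty
p_say_faulty_given_good = 0.20       # mechanic wrongly says "faulty" for a good car

# 1. Before hiring the mechanic: just the prior, 0.30.
# 2. If he pronounces it "faulty":
p_say_faulty = (p_say_faulty_given_faulty * p_faulty
                + p_say_faulty_given_good * p_good)
p_faulty_given_say_faulty = p_say_faulty_given_faulty * p_faulty / p_say_faulty

# 3. If he pronounces it "ok":
p_say_ok = 1 - p_say_faulty
p_faulty_given_say_ok = (1 - p_say_faulty_given_faulty) * p_faulty / p_say_ok

print(round(p_faulty_given_say_faulty, 3))   # about 0.659
print(round(p_faulty_given_say_ok, 3))       # about 0.051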
Assignment:
