Lecture 1
Introduction to Artificial Intelligence
Artificial Intelligence (AI) is the branch or field of computer science that is concerned with
the automation of intelligent behavior. Major AI researchers and textbooks define this field as
“the study and design of intelligent agents”, in which an intelligent agent is a system that
perceives its environment and takes actions that maximize its chances of success. John
McCarthy, who coined the term in 1955, defines it as “the science and engineering of making
intelligent machines”. AI has a long history but is still constantly and actively growing and
changing. It has become an essential part of the technology industry, providing the heavy
lifting for many of the most challenging problems in computer science and many other fields.
Commercial Products of AI
There are many areas of study in AI. We will try to list some of them:
• Game Playing
o Playing games using a well-defined set of rules, such as checkers, chess and the 15-puzzle.
• Knowledge Representation
o Representing information about the world in a form that a computer system can utilize
to solve complex tasks such as diagnosing a medical condition, holding a dialog in a natural
language, intelligent assistants, real-time problem solving and internet agents.
• Expert Systems
• Vision
• Robotics
Importance of AI
▪ AI may well be one of the most important developments in the world. It will affect
governments and private companies interested in the development of computer products,
robotics and related fields.
▪ The Japanese realized that many of their goals, to produce systems that can converse in a
natural language, understand speech and visual scenes, learn and refine their knowledge,
make decisions, and exhibit other intelligent traits, can be achieved.
▪ The British initiated a plan called the Alvey project with a reasonable budget.
▪ France, Canada, Russia, Italy, Australia, and Singapore have committed to some extent
to funding research and development.
▪ In 1983, VLSI technology was developed for use in AI technologies.
▪ MCC (Microelectronics & Computer Technology Corporation) is headquartered in
Austin, Texas.
▪ Second, DARPA (the Defense Advanced Research Projects Agency) has increased its
funding for research in AI and supported three significant programs:
1. The development of an autonomous land vehicle (ALV), a driverless military vehicle.
2. The development of a pilot's associate (an expert system which provides assistance to a
fighter pilot).
3. The strategic computing program (an AI-based military supercomputer project).
Goals of AI
The goal of AI is to develop working computer systems that are truly capable of performing
tasks that require high levels of intelligence. The programs are not necessarily meant to imitate
human senses and thought processes. Indeed, in performing some tasks differently, they may
actually exceed human capabilities. The important point is that the systems all be capable of
performing intelligent tasks effectively and efficiently.
AI Techniques
1. Search: provides a way of solving problems for which no more direct approach is
available, as well as a framework into which any direct techniques that are available
can be embedded.
2. Use of knowledge: provides a way of solving complex problems by exploiting the
structures of the objects that are involved.
3. Abstraction: provides a way of separating important features and variations from the
many unimportant ones that would otherwise overwhelm any process.
The techniques of AI often require that the problem be defined in some specific way,
for example by breaking a complex decision into a series of simpler subproblems that lead to
the final solution. The presentation of a problem in a simple, easily processable form then
aids in the development of a solution.
An algorithm is a specific set of operations, procedures and decisions which is guaranteed to
yield correct results (Gloriose & Osorio, 1990), whereas a heuristic is a rule of thumb, trick,
strategy, simplification or any other method that aids the solution of complex problems.
One of the differences between a heuristic and an algorithm is that while a heuristic generally
aids in finding a solution, it does not guarantee an optimal solution, or indeed any solution at
all. With an algorithm, however, one can be sure of finding the correct result.
(3) Search
Inference mechanism: how to represent information, process it, and reach conclusions.
• Basic concepts
• Directed graph
[Figure: a directed graph drawn as a tree, labelling the root node, the child nodes, and the leaf (last) nodes.]
• Path: an ordered sequence of nodes [N1, N2, …, Nk] such that each pair (Ni, Ni+1) represents
an arc; a path which consists of K nodes is said to have order K.
[Figure: an example graph in which the path A → B → C → D is marked.]
• Cyclic path: a path that contains a node more than once is said to be cyclic.
[Figure: an example of a cyclic path in which node B is visited twice.]
• State space representation: represents all the states that may occur in the problem space.
[Figure: a state space with start state S; applying operators (Op#1, …, Op#i) generates successor
states N1, N2, …, Nt. The path shown is S → Ni → Nt.]
Example:
[Figure: a small graph with nodes a, b, c, d, e.]
Note:
To reach e from a we can find more than one path, and possibly a cyclic path.
Therefore the better way is to use a tree.
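As a small illustration, a directed graph for a state space can be written in Python as an
adjacency dictionary; the node names and arcs below are hypothetical, not taken from the
figures above.

# A hypothetical state-space graph as an adjacency dictionary:
# each key is a node, each value is the list of its child nodes.
graph = {
    "a": ["b", "c"],
    "b": ["d", "e"],
    "c": ["e"],
    "d": [],
    "e": ["a"],   # this arc makes a cyclic path such as a -> c -> e -> a possible
}

def is_path(graph, nodes):
    """Check that consecutive nodes in the list are joined by arcs."""
    return all(n2 in graph[n1] for n1, n2 in zip(nodes, nodes[1:]))

print(is_path(graph, ["a", "b", "e"]))       # True, a path of order 3
print(is_path(graph, ["a", "c", "e", "a"]))  # True, but cyclic (node "a" repeats)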
Example:
8-puzzle problem
[Figure: the initial state of the 8-puzzle (a 3×3 board of numbered tiles with one blank square) and
the goal state.]
Solution:
[Figure: the search tree of board configurations generated from the initial state, expanding the
boards reachable by sliding a tile into the blank position until the goal configuration is reached.]
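A minimal Python sketch of how the successor states of an 8-puzzle board can be generated;
the sample board is an assumed configuration, since the exact tile layout of the figure is not
reproduced here.

# Minimal 8-puzzle move generation; 0 marks the blank square.
def successors(state):
    """Return the boards reachable by sliding one tile into the blank."""
    moves = []
    i = state.index(0)                       # position of the blank
    row, col = divmod(i, 3)
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = r * 3 + c
            board = list(state)
            board[i], board[j] = board[j], board[i]
            moves.append(tuple(board))
    return moves

start = (1, 4, 3, 7, 0, 6, 5, 8, 2)          # assumed example configuration
for s in successors(start):
    print(s)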
Search algorithms
(1) Systematic Search (Blind Search)
Algorithm depth-first
Begin
Initialize: open = [start]; closed = [ ]; parent[start] = nil; found = false;
While open <> [ ] do
Begin
- Remove the first state from the left of open, call it X;
- If X is the goal then found = true, break;
- Generate the children of X and put them in list L;
- Put X on closed;
- Eliminate from L any state already in closed;
- Eliminate from open any state already in L;
- Append L to the left of open;
- For each child Y in L, set parent[Y] = X;
- Empty L;
End;
End;
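A runnable Python version of the depth-first algorithm above, written as a sketch: it assumes
the search space is given as an adjacency dictionary, and the example tree is reconstructed
from the trace that follows rather than copied from the lecture figure.

def depth_first(graph, start, goal):
    """Depth-first search following the pseudocode above: open is used as a stack,
    closed records expanded states, and parent pointers let us rebuild the path."""
    open_list = [start]
    closed = []
    parent = {start: None}
    while open_list:
        x = open_list.pop(0)                  # remove the first state from the left of open
        if x == goal:
            path = []                         # trace parent pointers back to the start
            while x is not None:
                path.append(x)
                x = parent[x]
            return list(reversed(path))
        closed.append(x)
        children = [c for c in graph.get(x, [])
                    if c not in closed]       # eliminate states already in closed
        open_list = [s for s in open_list if s not in children]  # drop duplicates from open
        open_list = children + open_list      # append children to the left of open
        for y in children:
            parent[y] = x
    return None                               # open became empty: no solution

# Example search space reconstructed from the lecture trace (assumed structure).
tree = {"A": ["B", "C", "D", "E"], "B": ["F", "C"], "F": ["I", "J"],
        "I": ["N"], "C": ["G", "H"], "G": ["K", "L"]}
print(depth_first(tree, "A", "K"))            # ['A', 'B', 'C', 'G', 'K']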
Example:
[Figure: the search space for this example. A has children B, C, D and E; B has children F and C;
F has children I and J; I has child N; C has children G and H; G has children K and L. The goal
is K.]
#1 Closed= [A]
Open= [B, C, D, E]
#2 X=B
Closed= [B, A]
Open= [F, C, D, E]
#3 X=F
Closed= [F, B, A]
Open= [I, J, C, D, E]
#4 X=I
Closed=[I, F, B, A]
Open=[N, J, C, D, E]
#5 X=N
Closed= [N, I, F, B, A]
Open= [J, C, D, E]
#6 X=J
Closed= [J, N, I, F, B, A]
Open= [C, D, E]
#7 X=C
Closed= [C, J, N, I, F, B, A]
Open= [G, H, D, E]
#8 X=G
Closed= [G, C, J, N, I, F, B, A]
Open= [K, L, H, D, E]
#9 X=K, found = true
Parent pointers: parent[A] = nil, parent[B] = A, parent[C] = B, parent[F] = B, parent[I] = F,
parent[J] = F, parent[N] = I, parent[G] = C, parent[K] = G.
Lecture 2
Breadth-First Search
Breadth-first search expands nodes in order of their distance from the root, generating one
level of the tree at a time until a solution is found. It is most easily implemented by maintaining
a queue of nodes, initially containing just the root, and always removing the node at the head
of the queue, expanding it, and adding its children to the end of the queue.
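A minimal Python sketch of breadth-first search using a FIFO queue, reusing the small tree
assumed in the depth-first sketch above.

from collections import deque

def breadth_first(graph, start, goal):
    """Breadth-first search: expand nodes level by level using a FIFO queue."""
    queue = deque([start])
    parent = {start: None}
    while queue:
        x = queue.popleft()                  # always remove the node at the head of the queue
        if x == goal:
            path = []
            while x is not None:
                path.append(x)
                x = parent[x]
            return list(reversed(path))
        for child in graph.get(x, []):
            if child not in parent:          # not yet generated
                parent[child] = x
                queue.append(child)          # add children to the end of the queue
    return None

# Same assumed tree as in the depth-first sketch:
tree = {"A": ["B", "C", "D", "E"], "B": ["F", "C"], "F": ["I", "J"],
        "I": ["N"], "C": ["G", "H"], "G": ["K", "L"]}
print(breadth_first(tree, "A", "K"))         # ['A', 'C', 'G', 'K']

Note that, unlike the depth-first version, this finds the shorter path to K through C.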
Example:
X= A
#5 Closed=[E, D, C, B, A]
Open=[F, G, H, M, R]
X=F
#6 Closed=[F, E, D, C, B, A ]
Open=[ G, H, M, R, I, J]
X=G
#7 Closed= [G, F, E, D, C, B, A]
Open= [H, M, R, I, J, K, L]
X=H
#8 Closed= [H, G, F, E, D, C, B, A]
Open= [M, R, I, J, K, L]
X=M
#9 Closed= [M, H, G, F, E, D, C, B, A]
Open= [R, I, J, K, L]
X=R
10# Closed= [R, M, H, G, F, E, D, C, B, A]
Open= [I, J, K, L]
X=I
11# Closed= [I, R, M, H, G, F, E, D, C, B, A]
Open= [J, K, L, N]
X=J
12# Closed= [J, R, M, H, G, F, E, D, C, B, A]
Open= [K, L, N]
Path: A B C G K
Depth-first search gets quickly into a deep search space. If it is known that the solution path
will be long, depth-first search will not waste time searching a large number of "shallow" states
in the graph. On the other hand, depth-first search can get "lost" deep in a graph, missing
shorter paths to a goal or even becoming stuck in an infinitely long path that does not lead to
a goal. Depth-first search is much more efficient for searching spaces with many branches,
since it does not have to keep all the nodes at a given level in memory on the "open" list, so it
requires less memory space. Unlike breadth-first search, depth-first search does not
guarantee to find the shortest path to a state the first time that state is encountered, whereas
breadth-first search always finds the shortest path to a goal node. Breadth-first search will not
get trapped exploring a long, unfruitful path.
Heuristic Search
All of the search methods in the preceding section are uninformed, in that they do not take
into account the location of the goal.
The heuristic function is a way to inform the search about the direction to a goal. A heuristic
function h(n) is supposed to estimate the cost of a path from the state at node n to a goal
node.
Hill climbing (HC) algorithm is a technique for certain classes of optimization problems.
The idea is to start with a sub-optimal solution to a problem (i.e., start at the base of a hill)
and then repeatedly improve the solution (walk up the hill) until some condition is maximized
(the top of the hill is reached).
Algorithm hill climbing
Begin
CS = start; open = [start]; stop = false; path = [ ];
While (not stop) do
Begin
Add CS to path;
If (CS = goal) then return (path);
Empty open;
Generate all possible children of CS and put them into open;
If open = [ ] then
stop = true /* dead end reached */
Else
Begin
Let X = CS;
For each state Y in open do
Begin
Compute the heuristic value of Y, h(Y);
If Y is better than X then X = Y;
End; /* endfor */
If X is better than CS then
CS = X
Else
stop = true /* local optimum */
End
End; /* endwhile */
Return (fail)
End
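A Python sketch of the hill-climbing procedure above. The tree structure and heuristic values
are taken from the trace of the example that follows where they are legible; the rest is an
assumption of this sketch.

def hill_climbing(tree, h, start, goal):
    """Hill climbing following the pseudocode: at each step move to the best child
    (the one with the lowest heuristic value) if it improves on the current state."""
    cs = start
    path = []
    while True:
        path.append(cs)
        if cs == goal:
            return path
        children = tree.get(cs, [])
        if not children:
            return None                      # dead end reached
        best = min(children, key=h)          # child with the best (smallest) h value
        if h(best) < h(cs):
            cs = best
        else:
            return None                      # local optimum: no child improves on cs

# Heuristic values read off the example trace; structure partly assumed.
h_values = {"A": 100, "B": 38, "C": 33, "D": 30, "E": 42, "G": 25, "H": 32,
            "F": 15, "K": 20, "I": 10, "J": 8, "L": 11, "M": 5}
tree = {"A": ["B", "C", "D"], "D": ["E", "G", "H"], "G": ["F", "K"],
        "F": ["I", "J"], "J": ["L", "M"]}
print(hill_climbing(tree, lambda n: h_values[n], "A", "M"))
# ['A', 'D', 'G', 'F', 'J', 'M'], which matches the path in the trace below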
Example:
[Figure: a search tree in which every node carries a heuristic value. The root is A(100); the values
legible in the figure include B(38), C(33), D(30), E(42), G(25), H(32), F(15), K(20), I(10), J(8),
L(11) and M(5). The trace below follows the path A → D → G → F → J → M.]
#0 CS=A, open = [A], stop = false
Open= [B38, C33, D30], X= D30
…
CS= F15
#4 path= [F, G, D, A], Open= [I10, J8], X= J8
CS= M5
#6 path= [M, J, F, G, D, A]
CS= Goal, stop.
Best-first search (no cost function)
Begin
Input the start node S and the set of goal nodes;
Open = [S]; closed = [ ];
Pred[S] = null; found = false;
While (open is not empty) and (found = false) do Begin
L = the set of nodes on open for which h has the best value;
If L is a set with a single element then let X be that element
Else
If there are any goal nodes in L then let X be one of them
Else let X be the first element of L;
Endif
Remove X from open and put it in closed;
If X is a goal node then found = true
Else
Begin
Generate the set Succ of children of X;
For each child Y in Succ do
If Y is not already in open or closed then
Begin
Compute h[Y]; pred[Y] = X;
Insert Y in open;
Endif
Endfor
End
Endif
Endwhile
If found = false then output failure
Else trace the pointers in pred to compute the solution path
Endif
End.
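A Python sketch of greedy best-first search (no cost function), using a priority queue ordered
by h. The graph and heuristic values are reconstructed from the trace in the example below
and are partly assumptions.

import heapq

def best_first(graph, h, start, goals):
    """Greedy best-first search: always expand the open node with the best
    (lowest) heuristic value, keeping predecessor pointers for the path."""
    open_heap = [(h[start], start)]
    closed = set()
    pred = {start: None}
    while open_heap:
        _, x = heapq.heappop(open_heap)      # node with the best h value
        if x in closed:
            continue
        closed.add(x)
        if x in goals:
            path = []
            while x is not None:
                path.append(x)
                x = pred[x]
            return list(reversed(path))
        for y in graph.get(x, []):
            if y not in closed and y not in pred:
                pred[y] = x
                heapq.heappush(open_heap, (h[y], y))
    return None

# Values reconstructed from the trace below (partly assumed).
h = {"A": 100, "B": 25, "C": 33, "D": 30, "E": 35, "F": 38, "G": 37,
     "H": 32, "I": 20, "L": 3, "M": 5}
graph = {"A": ["B", "C", "D"], "B": ["E"], "C": ["F"], "D": ["G", "H"],
         "E": ["I"], "I": ["L", "M"]}
print(best_first(graph, h, "A", {"M"}))      # ['A', 'B', 'E', 'I', 'M']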
Example:
#1 X=A100
Closed= [A100]
Open = [B25, C33, D30]
P[B]= A, P[C]= A, P[D]= A
#2 X=B25
Closed= [B25, A100]
Open = [E35, C33, D30]
P[E]= B
#3 X=D30
Closed= [D30, B25, A100]
Open = [G37, H32, E35, C33]
P[G]= D, P[H]= D
#4 X=H32
Closed= [H32, D30, B25, A100]
Open = [G37, E35, C33]
#5 X=C33
Closed= [C33, H32, D30, B25, A100]
Open = [F38, G37, E35]
P[F]= C
#6 X=E35
Closed = [E35, C33, H32, D30, B25, A100]
Open = [I20, F38, G37]
P[I]= E
#7 X=I20
Closed = [I20, E35, C33, H32, D30, B25, A100]
Open = [L3, M5, F38, G37]
P[L]= I, P[M]= I
#8 X=L3
Closed = [L3, I20, E35, C33, H32, D30, B25, A100]
Open = [M5, F38, G37]
#9 X=M5
Found = true
Path: A B E I M
Lecture 3
Cost Function
The heuristic function is a way to inform the search about the direction to a goal. It provides
an informed way to guess which neighbor of a node will lead to a goal. There is no general
theory for finding heuristics, because every problem is different.
Another way to measure the cost from the start state to the goal state is the evaluation
function (cost function). For a node n it is defined as
f(n) = g(n) + h(n)
where g(n) is the cost of the path already travelled from the start node to n, and h(n) is the
heuristic estimate of the cost from n to the goal. Thus for two nodes n1 and n2 on the way to
the goal, f(n1) = g(n1) + h(n1) and f(n2) = g(n2) + h(n2).
Example:
Consider the state space graph in the figure below, with initial state A and goal state M. Find
the path using the A* algorithm.
[Figure: the state space graph. Each node carries a heuristic value h (A=100, B=38, C=33, D=35,
E=22, F=15, G=25, H=32, I=40, J=8, K=26, M=5, N=8) and each arc carries a step cost, for
example 7 from A to B, 9 from A to C and 14 from A to D.]
State   Pred      h(state)   g(state)      f(state)
A       Null      100        0             100
B       A         38         7             45
C       A         33         9             42
D       A         35         14            49
F       C, E, G   15         34, 33, 30    49, 48, 45
E       B         22         19            41
I       E         40         28            68
J       F         8          51, 48        95, 56
G       D         25         22            47
H       D         32         29            61
K       G         26         32            58
M       J         5          62            67
N       J         8          54            62

#0 open = [A100], closed = [ ]
#1 X=A100
Open = [B45, C42, D49]
Closed = [A]
__________________________
#2 X= C42
Closed = [C42, A100]
Open = [F49, B45, D49]
__________________________
#3 X= B45
Closed = [B45, C42, A100]
Open = [E41, F49, D49]
__________________________
#4 X= E41
Closed = [E41, B45, C42, A100]
Open = [F48, I68, D49]        (F is re-reached from E with a lower f value)
#12 X= N62
Closed = [N62 , H61 , K58 , J56 , F45 , G47 , D49 , F48 , E41 , B45 , C42 , A100 ]
Open = [M67, I68]
_______________________________
#13 X= M67, found = true (the goal M is reached)
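A short Python sketch of A* search using f(n) = g(n) + h(n). The small graph, costs and
heuristic values here are illustrative assumptions, not the figure from the example above.

import heapq

def a_star(graph, h, start, goal):
    """A* search: order the open list by f(n) = g(n) + h(n), where g is the cost
    of the path from the start and h is the heuristic estimate to the goal."""
    open_heap = [(h[start], 0, start, None)]     # (f, g, state, predecessor)
    pred, g_best = {}, {}
    while open_heap:
        f, g, x, p = heapq.heappop(open_heap)
        if x in g_best and g >= g_best[x]:
            continue                             # a better path to x was already found
        g_best[x], pred[x] = g, p
        if x == goal:
            path = []
            while x is not None:
                path.append(x)
                x = pred[x]
            return list(reversed(path)), g
        for y, cost in graph.get(x, []):
            heapq.heappush(open_heap, (g + cost + h[y], g + cost, y, x))
    return None

# Assumed example values:
h = {"A": 10, "B": 6, "C": 4, "M": 0}
graph = {"A": [("B", 3), ("C", 7)], "B": [("C", 2)], "C": [("M", 5)]}
print(a_star(graph, h, "A", "M"))                # (['A', 'B', 'C', 'M'], 10)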
_______________________________________________________________
Logic Representation
In order to determine appropriate actions to take to achieve goals, an intelligent system needs
to compactly represent information about the world and draw conclusions based on general
world knowledge and specific facts.
Logic representation takes two forms: propositional logic and predicate logic.
Propositional Logic
Propositional symbols are used to represent facts; each symbol can mean whatever we want
it to mean. Each fact can be either true or false. Propositional symbols P, Q, etc. represent
specific facts about the world. For example,
P1 = “Water is a liquid”.
P2= “Today is Monday”.
P3= “It is hot”
Propositions are combined with logical connectives to generate sentences with more complex
meaning. The connectives are ¬ (not), ∧ (and), ∨ (or) and → (implies); their truth tables are:
p    q    ¬p    p∧q    p∨q    p→q
T    T    F     T      T      T
T    F    F     F      T      F
F    T    T     F      T      T
F    F    T     F      F      T
For example, a proof of the equivalence of P → Q and ¬P ∨ Q is given by the truth table:
Answer:
P    Q    ¬P    ¬P∨Q    P→Q
T    T    F     T       T
T    F    F     F       F
F    T    T     T       T
F    F    T     T       T
Example: Use a truth table to list all possible truth value assignments to the propositions of the
expression (P ∧ Q) ∨ (¬Q ∨ P).
Answer
P    Q    P∧Q    ¬Q    ¬Q∨P    (P∧Q)∨(¬Q∨P)
T    T    T      F     T       T
T    F    F      T     T       T
F    T    F      F     F       F
F    F    F      T     T       T
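Tables like these can also be generated mechanically. A minimal Python sketch (the function
name truth_table and the column layout are just illustrative choices):

from itertools import product

def truth_table(formula, variables):
    """Print every truth assignment for the variables and the formula's value."""
    print(*variables, "result")
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        print(*("T" if v else "F" for v in values),
              "T" if formula(env) else "F")

# (P ∧ Q) ∨ (¬Q ∨ P), the expression from the example above.
truth_table(lambda e: (e["P"] and e["Q"]) or ((not e["Q"]) or e["P"]), ["P", "Q"])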
Lecture 4
Predicate Logic (First-Order logic)
In propositional logic, we can only represent facts, which are either true or false.
Propositional logic is not sufficient to represent complex sentences or natural language
statements; it has very limited expressive power. Some statements cannot be expressed in
propositional logic, for example statements about all or some of the objects in a domain.
Predicate logic can express these statements and make inferences on them.
Predicate logic is also known as first-order logic. First-order logic is a powerful language
that expresses information about objects in a more natural way and can also express the
relationships between those objects.
𝑷(𝒙, 𝒚)
- A predicate P describes a relation or property.
- Variables (x, y) can take arbitrary values from some domain.
- Still have two truth values for statements (T and F).
- When we assign values to x and y, then P has a truth value.
Examples:
• Let Q(x, y) denote “x=y+3”.
What are truth values of:
- Q (1, 2) false
- Q (3, 0) true
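The same predicate can be sketched as a Boolean-valued Python function:

# The predicate Q(x, y) meaning "x = y + 3", written as a Python function:
def Q(x, y):
    return x == y + 3

print(Q(1, 2))   # False: 1 is not 2 + 3
print(Q(3, 0))   # True:  3 is 0 + 3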
In predicate logic, a proposition such as q = “water is liquid” is written as a predicate applied
to an argument:
liquid (water)
liquid (milk)
Generality: predicate logic allows the use of variables, and so we can represent general
statements such as
∀X weather(X, rain), X ∈ {Saturday, …, Friday}
in place of a separate proposition for each day, such as P7 = “it rained on Monday”.
Definitions:
Constant: A constant refers to a specific object. A constant starts with a lower case letter.
Variable: A variable is used to refer to general classes of objects. A variable starts with an
upper case letter.
Function: a function name starts with a lower case letter. It has an associated number of
arguments and returns a value.
Term: a term is either a constant, a variable or a function. The English alphabet (a…z, A…Z),
the digits 0…9 and the underscore character are used to construct terms.
Atomic sentence: an atomic sentence is formed from a predicate symbol followed by a list of
terms enclosed in parentheses and separated by commas. Predicate logic sentences are
delimited by a period (.).
Clauses: a clause is one or more literals combined using the connectives above. An atomic
sentence is called a unit clause (it consists of one literal).
Horn clause: a clause of the form
a( ) ← b1( ) ∧ b2( ) ∧ … ∧ bn( )
where b1( ) … bn( ) and a( ) are all positive literals. a( ) is called the head of the horn clause and
b1( ) … bn( ) is called the body of the horn clause. Two special cases arise:
1- a( ) alone (the horn clause has no body); in this case the clause is called a fact.
2- b1( ) ∧ b2( ) ∧ … ∧ bn( ) alone (the horn clause has no head); in this case it is regarded as a
set of subgoals to be proved.
Thus, a horn clause can be defined as a clause (a disjunction of literals) with at most one
positive literal.
Quantifiers
• These are the symbols that permit us to determine or identify the range and scope of a
variable in a logical expression. There are two types of quantifier: the universal quantifier (∀)
and the existential quantifier (∃).
Common Identifiers
Unification
• Unification is the process of making a set of literals with the same predicate symbol match
each other exactly, by finding a substitution F for their variables
such that L1F = L2F = ⋯ = LkF.
Unification algorithm:
Begin
Generate the first disagreement set D;
Repeat
While (D is a disagreement set) do
If none of the terms in D consists of a variable by itself, then
* stop and report failure *
Else
* convert the variable into the term by adding a pair of the form (ti, vi) to the substitution
and perform the substitution on the predicates immediately *
Until the predicates are identical;
End;
Examples:
1. Find the MGU of L1 = p(X, f(y)) and L2 = p(a, f(g(z))).
Solution:
Step1:
D={x, a}
L1= p (a, f(y))
L2 = p (a, f (g (z)))
Step2:
D= {y, g (z)}
L”1= p (a, f (g (z)))
L”2= p (a, f (g (z))) Unified Successfully.
Another example:
Step 2:
D= {f (a), g (y)}   Unification failed (neither term is a variable and the function symbols f
and g differ).
Another example:
Step 2:
D= {X, f (Y)}
L’1= p (b, f(Y), f(Y))
L’2= p (b, f(Y), f (g (b)))
Step 3:
D= {Y, g (b)}
L”1= p (b, f (g (b)), f (g (b)))
L”2= p (b, f (g (b)), f (g (b)))   Unified successfully.
Another example:
Step 2:
D= {Z, f (Z)}   Unification failed (the variable Z occurs inside the term f(Z), so it cannot be
bound to it).
6. Find the MGU of L1= Q (a, g(x, a), f(y)), L2= Q (a, g (f (b), a), x)
Step1
D= {x, f (b)}
L1= Q (a, g (f (b), a), f(y))
L2= Q (a, g (f (b), a), f (b))
Step2
D= {y, b}
L’1= Q (a, g (f (b), a), f (b))
L’2= Q (a, g (f (b), a), f (b)) successfully unified
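A compact Python sketch of unification with an occurs check. Terms are written as nested
tuples (a functor followed by its arguments) and, following the convention above, variables
start with an upper-case letter; this representation is an assumption of the sketch, not part of
the lecture.

def is_var(t):
    # By the convention above, variables start with an upper-case letter.
    return isinstance(t, str) and t[:1].isupper()

def substitute(t, subst):
    """Apply a substitution to a term."""
    if is_var(t) and t in subst:
        return substitute(subst[t], subst)
    if isinstance(t, tuple):
        return (t[0],) + tuple(substitute(a, subst) for a in t[1:])
    return t

def occurs(v, t):
    """Occurs check: a variable cannot be unified with a term that contains it."""
    if v == t:
        return True
    return isinstance(t, tuple) and any(occurs(v, a) for a in t[1:])

def unify(t1, t2, subst=None):
    """Return the most general unifier (a dict) of two terms, or None on failure."""
    subst = dict(subst or {})
    t1, t2 = substitute(t1, subst), substitute(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        return None if occurs(t1, t2) else {**subst, t1: t2}
    if is_var(t2):
        return None if occurs(t2, t1) else {**subst, t2: t1}
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        for a1, a2 in zip(t1[1:], t2[1:]):
            subst = unify(a1, a2, subst)
            if subst is None:
                return None
        return subst
    return None            # constants or functors disagree: unification fails

# Example 1 above, with y and z written as the variables Y and Z:
L1 = ("p", "X", ("f", "Y"))
L2 = ("p", "a", ("f", ("g", "Z")))
print(unify(L1, L2))            # {'X': 'a', 'Y': ('g', 'Z')}
print(unify("Z", ("f", "Z")))   # None: fails by the occurs check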
Lecture 5
Skolemization
Skolemization removes existential quantifiers. An existentially quantified variable that is not
inside the scope of any universal quantifier is replaced by a new constant (a Skolem constant);
one that is inside the scope of universal quantifiers is replaced by a new function (a Skolem
function) of those universally quantified variables.
Example:
∃X father(X, ali).   becomes   father(a, ali), where a is a Skolem constant.
Similarly, in ∀X father(a, X) the constant a is a Skolem constant replacing a variable that was
existentially quantified outside the scope of ∀X.
∀X ∀Y ∃Z ∃W p(X, Y, Z, W)   becomes   ∀X ∀Y p(X, Y, f(X, Y), g(X, Y)), where f and g are
Skolem functions.
∀X ∀Y ∃Z ∀W p(X, Y, Z, W): here Z is replaced by f(X, Y), giving ∀X ∀Y ∀W p(X, Y, f(X, Y), W).
∀X{¬[p(X) ∧ q(X)] ∨ [r(X, a) ∧ ∃Y(¬∃Z r(Y, Z) ∨ S(X, Y))]} ∨ ∀X t(X)
Modus ponens
This rule states that:
given p → q
and p
___________________
conclusion: q
Example: “If it is raining then the ground is wet. It is raining.”
What conclusion can be drawn? (Conclusion: the ground is wet.)
Solution:
Let
p: “it is raining”
q: “the ground is wet”
p → q
p
∴ q is true
Another example:
1. ∀X[man(X) → mortal(X)]
2. man(socrates)
___________________
∴ mortal(socrates)
Resolution
Definition: resolution is the process of choosing two clauses in normal form such that one of
them contains a literal (p) whose negation (¬p) appears in the other clause. The result is a
clause called the resolvent, which consists of the disjunction of all the literals in the two
clauses except the literal p and its negation ¬p.
For example, resolving (q ∨ p ∨ r) with (¬q ∨ s) on the literal q gives the resolvent p ∨ r ∨ s.
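For ground (propositional) clauses, a single resolution step can be sketched in Python as
follows; representing clauses as frozensets of literal strings is an assumption of this sketch.
First-order resolution additionally applies unification, as described earlier.

def resolve(c1, c2):
    """Return the resolvents of two clauses. A clause is a frozenset of literals;
    negation is written with a leading '-'."""
    resolvents = []
    for lit in c1:
        neg = lit[1:] if lit.startswith("-") else "-" + lit
        if neg in c2:
            # keep every literal except the resolved pair
            resolvents.append((c1 - {lit}) | (c2 - {neg}))
    return resolvents

c1 = frozenset({"q", "p", "r"})
c2 = frozenset({"-q", "s"})
print(resolve(c1, c2))   # one resolvent containing p, r and s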
Example 1:
Given the following:
1. “Fido is a dog”.
2. “All dogs are animals”.
3. “All animals will die”.
Goal: Fido will die.
Solution:
In clause form:
C1: dog(fido)
C2′: ¬dog(X1) ∨ animal(X1)
C3′: ¬animal(X2) ∨ die(X2)
g: ¬die(fido)        (the negated goal)
1) g: ¬die(fido) resolved with C3′: ¬animal(X2) ∨ die(X2), substitution (fido, X2),
   gives C4: ¬animal(fido)
2) C4: ¬animal(fido) resolved with C2′: ¬dog(X1) ∨ animal(X1), substitution (fido, X1),
   gives C5: ¬dog(fido)
3) C5: ¬dog(fido) resolved with C1: dog(fido) gives the empty clause: the goal is proved.
Example 2:
Consider the following story:
“Anyone passing his history exams and winning the lottery is happy. Anyone who studies or
is lucky passes all his exams. Ali did not study but he is lucky. Anyone who is lucky wins
the lottery”.
Solution:
C3: ¬study(ali) ∧ lucky(ali), which splits into the unit clauses:
C3.1: ¬study(ali).
C3.2: lucky(ali).
1) g: ¬happy(Z) resolved with
   C1: ¬pass(X1, history) ∨ ¬win(X1, lottery) ∨ happy(X1), substitution (Z, X1),
   gives C5: ¬pass(Z, history) ∨ ¬win(Z, lottery)
5) C8: ¬lucky(ali) resolved with
   C3.2: lucky(ali) gives the empty clause: the goal is proved.
Example 3:
“All people who are not poor and smart are happy. People who can read are not stupid.
John can read and is wealthy. Happy people have exciting lives”.
Can anyone be found with an exciting life?
C1: ∀X(¬poor(X) ∧ smart(X) → happy(X))
C2: ∀Y(read(Y) → smart(Y))
C3: read(john) ∧ ¬poor(john)
C4: ∀Z(happy(Z) → exciting(Z))
g: ∃W(exciting(W))
In clause form:
C1: poor(X) ∨ ¬smart(X) ∨ happy(X)
C2: ¬read(Y) ∨ smart(Y)
C3.1: read(john)
C3.2: ¬poor(john)
C4: ¬happy(Z) ∨ exciting(Z)
g: ¬exciting(W)        (the negated goal)
1) g: ¬exciting(W) resolved with C4: ¬happy(Z) ∨ exciting(Z), substitution (W, Z),
   gives C5: ¬happy(W)
2) C5: ¬happy(W) resolved with C1: poor(X) ∨ ¬smart(X) ∨ happy(X), substitution (W, X),
   gives C6: poor(W) ∨ ¬smart(W)
5) C8: poor(john) resolved with C3.2: ¬poor(john) gives the empty clause: the goal is proved.
Lecture 6
1) Breadth-First Strategy
In this strategy, each clause in the base set (the starting set of clauses) is compared for resolution
with every other clause in the first round. On the second round, the new clauses produced in the
first round plus all the clauses in the base set are compared for resolution.
For the nth round, all previously generated clauses are added to the base set and all clauses are
compared for resolution.
In this strategy, the number of clauses to be compared can become extremely large in the early
rounds of large problems, which makes it inefficient. However, this strategy is guaranteed to find
the shortest path to a solution.
Example (with a base set of clauses that yields the resolutions below):
C1: t(X1) ∨ ¬r(X1)
C2: ¬d(X2) ∨ ¬t(X2)
C3.1: d(a)
C3.2: h(a)
g: ¬h(Z) ∨ r(Z)
Round 1:
1) Resolve C1 & C2:
   C4: ¬r(X1) ∨ ¬d(X1)    {(X1, X2)}
2) C1 & g:
   C5: t(X1) ∨ ¬h(X1)    {(X1, Z)}
3) C2 & C3.1:
   C6: ¬t(a)    {(a, X2)}
4) C3.2 & g:
   C7: r(a)    {(a, Z)}
Round 2:
5) C1 & C6:
   C8: ¬r(a)    {(a, X1)}
6) C1 & C7:
   C9: t(a)    {(a, X1)}
7) C2 & C5:
   C10: ¬d(X1) ∨ ¬h(X1)    {(X1, X2)}
8) C3.1 & C4:
   C11: ¬r(a)    {(a, X1)}
9) C3.2 & C5:
   C12: t(a)    {(a, X1)}
10) g & C4:
   C13: ¬d(X1) ∨ ¬h(X1)    {(X1, Z)}
11) C4 & C7:
   C14: ¬d(a)    {(a, X1)}
12) C5 & C6:
   C15: ¬h(a)    {(a, X1)}
Round 3:
13) Resolve C2 & C9:
   C16: ¬d(a)    {(a, X2)}
[Figure: the resolution tree produced by the breadth-first strategy.]
2) Set of Support Strategy
This strategy is based on the principle that the negation of what we want to prove is going to be
responsible for generating the empty clause (the contradiction).
The set of support consists initially of the negation of the goal. Subsequently, any resolvent
whose parent is in the set of support becomes a member of the set of support. Thus, this
strategy forces resolution between clauses of which at least one is either the negation of the
goal or a clause one of whose ancestors is the negation of the goal.
Example:
Solution:
1) S = {g}
   g & C4 gives C5: ¬happy(X3)    {(X3, W)}
2) S = {g, C5}
   C5 & C1 gives C6: poor(X1) ∨ ¬smart(X1)    {(X1, X3)}
3) S = {g, C5, C6}
   C6 & C2 gives C7: ¬read(X1) ∨ poor(X1)    {(X1, X2)}
   C6 & C3.2 gives C8: ¬read(ali)    {(ali, X1)}
4) S = {g, C5, C6, C7, C8}
   C7 & C3.1 gives C9: poor(ali)    {(ali, X1)}
   C7 & C3.2 gives C10: ¬read(ali)    {(ali, X1)}
   C8 & C3.1 gives C11: the empty clause; stop.
[Figure: the resolution tree for the previous example.]
Example:
C1: poor(X) ∨ ¬smart(X) ∨ happy(X)
C2: ¬read(Y) ∨ smart(Y)
C3.1: read(john)
C3.2: ¬poor(john)
C4: ¬happy(Z) ∨ exciting(Z)
g: ¬exciting(W)
Solution:
g resolved with C4 (W = X3);
C5 resolved with C1 (X3 = X1);
C6 resolved with C3.2;
with the accumulated substitutions X1 = ali, X2 = ali,
∴ W = ali
3) Linear Input Strategy
In this strategy, the negated goal is resolved with one of the original clauses. The resulting
clause is then resolved with one of the original clauses to produce a new clause, which is again
resolved with the original set of input clauses, and so on until the empty clause is generated.
This strategy sometimes fails to generate the empty clause even when the original goal is true.
Example:
C1: ¬a ∨ b
C2: ¬a ∨ ¬b
C3: a ∨ b
g: ¬a ∧ b
Solution:
The negated goal gives the clause a ∨ ¬b.
a ∨ ¬b resolved with C3: a ∨ b gives a;
a resolved with C1: ¬a ∨ b gives b;
b resolved with C2: ¬a ∨ ¬b gives ¬a;
and so on. Each new resolvent can only be resolved with one of the input clauses, so the
derivation keeps producing the unit clauses a, b, ¬a, ¬b and the empty clause is never
generated, even though the goal does follow from the clause set.