ARTIFICIAL INTELLIGENCE
&
MACHINE LEARNING
Lecture Notes
B.TECH 6th SEMESTER
Prepared By
Sonali Kar
Assistant Professor
Module-III: (6 hours)
UNCERTAINTY – Acting under Uncertainty, Basic Probability Notation, The Axioms of
Probability, Inference Using Full Joint Distributions, Independence, Bayes' Rule and its Use,
PROBABILISTIC REASONING – Representing Knowledge in an Uncertain Domain, The Semantics of
Bayesian Networks, Efficient Representation of Conditional Distributions, Exact Inference in
Bayesian Networks, Approximate Inference in Bayesian Networks
Books:
[1] Elaine Rich, Kevin Knight, and Shivashankar B. Nair, Artificial Intelligence, 3rd ed., McGraw Hill, 2009.
[2] Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 4th ed., Pearson, 2020.
[3] Nils J. Nilsson, Artificial Intelligence: A New Synthesis, Morgan Kaufmann, 2000.
[4] Dan W. Patterson, Introduction to Artificial Intelligence and Expert Systems, PHI, 2010.
[5] S. Kaushik, Artificial Intelligence, 1st ed., Cengage Learning, 2011.
1 Introduction
1.1 What is Artificial Intelligence?
• It is the branch of computer science that emphasizes the development of intelligent machines that think
and work like humans and are able to make decisions. It is also known as Machine Intelligence.
• According to the father of Artificial Intelligence, John McCarthy, it is “The science and engineering of
making intelligent machines, especially intelligent computer programs”.
• Artificial Intelligence is a way of making a computer, a computer-controlled robot, or a piece of software
think intelligently, in a manner similar to how intelligent humans think.
• AI is accomplished by studying how human brain thinks and how humans learn, decide, and work while
trying to solve a problem, and then using the outcomes of this study as a basis of developing intelligent
software and systems.
1.2 Why do we need AI?
• To create expert systems: systems that exhibit intelligent behaviour, with the capability to learn,
demonstrate, explain, and advise their users.
• To implement human intelligence in machines: creating systems that understand, think, learn, and behave
like humans; helping machines find solutions to complex problems the way humans do, and expressing those
solutions as algorithms in a computer-friendly manner.
1.3 What is intelligence in AI?
• The ability of a system to calculate, perceive relationships and analogies, learn from experience, store and
retrieve information from memory, solve problems, use natural language fluently, classify, and adapt to new
situations.
• Intelligence is intangible.
• It is composed of
a) Reasoning
b) Learning
c) Problem solving
d) Perception
e) Linguistic intelligence
a) Reasoning − The set of processes that enables us to provide a basis for judgement, decision-making,
and prediction.
b) Learning − The activity of gaining knowledge or skill by studying, practising, being taught, or
experiencing something. Learning enhances awareness of the subject of study.
c) Problem solving − The process of working towards a desired goal; it includes decision-making, which is
selecting the best-suited alternative from the multiple alternatives available.
d) Perception − It is the process of acquiring, interpreting, selecting, and organizing sensory information.
e) Linguistic Intelligence − One's ability to use, comprehend, speak, and write verbal and written
language.
1.4 Advantages and Disadvantages of AI
Advantages:
➢ High accuracy with fewer errors: AI systems are less prone to errors and achieve high accuracy, as they
make decisions based on prior experience or information.
➢ High speed: AI systems can operate at very high speed and make fast decisions.
➢ High reliability: AI machines are highly reliable and can perform the same action multiple times with
high accuracy.
➢ Useful in risky areas: AI machines can be helpful in situations such as defusing a bomb or exploring the
ocean floor, where employing a human would be risky.
➢ Digital assistance: AI can provide digital assistants to users; for example, AI technology is currently
used by various e-commerce websites to show products matching customer requirements.
➢ Useful as a public utility: AI can be very useful in public utilities, such as self-driving cars that
make journeys safer and hassle-free, facial recognition for security purposes, and natural language
processing for communicating with humans in human language.
Disadvantages:
➢ High cost: The hardware and software requirements of AI are very costly, and AI systems require a great
deal of maintenance to meet current-world requirements.
➢ Can't think outside the box: Even though we are making machines smarter with AI, they still cannot think
outside the box; a robot will only do the work for which it is trained or programmed.
➢ Unemployment: As AI automates tasks previously done by people, it can displace human jobs.
➢ No feelings and emotions: An AI machine can be an outstanding performer, but it has no feelings, so it
cannot form any kind of emotional attachment with humans, and it may sometimes be harmful to users if
proper care is not taken.
➢ Increased dependency on machines: As technology advances, people are becoming more dependent on devices,
and hence may lose some of their own mental capabilities.
➢ No original creativity: Humans are creative and can imagine new ideas, but AI machines cannot yet match
this power of human intelligence; they are not inherently creative and imaginative.
1.5 Applications of AI
i. Gaming
ii. Natural language processing
iii. Expert systems
iv. Speech Recognition
v. Handwriting Recognition
vi. Intelligent robots
vii. Computer vision etc
Module-1 Lecture-2
Learning Objective:
2 Agents in Artificial Intelligence
2.1 What is an Agent?
2.2 What is an Intelligent Agent?
Learning Objective:
3 Structure of an Agent
3 Structure of an Agent
The structure of an intelligent agent is a combination of architecture and agent program.
It can be viewed as:
Agent = Architecture + Agent program
Architecture: The machinery that the AI agent executes on.
Agent program: An implementation of the agent function (f) for an artificial agent. An
agent's behaviour is described by the agent function, which maps any given percept sequence to an action.
The agent function f maps from percept histories to actions:
f: P* → A
The part of the agent taking an action on the environment is called an actuator.
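The mapping f: P* → A can be illustrated with a small table-driven agent program. The vacuum-world percepts, actions, and table entries below are assumptions made purely for illustration:

```python
# A table-driven sketch of the agent function f: P* -> A. Percepts are
# (location, status) pairs and actions are strings; both are assumed here.
def make_table_driven_agent(table):
    percepts = []                     # the percept history, an element of P*
    def agent(percept):
        percepts.append(percept)
        # f maps the whole history (not just the last percept) to an action
        return table.get(tuple(percepts), 'NoOp')
    return agent

# Example table for a toy two-square vacuum world (assumed).
table = {
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'),): 'Right',
    (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
}
```

Note that the table is indexed by the entire percept sequence, which is why table-driven agents are conceptually simple but impractically large for real environments.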
Learning Objective:
4 Agent Environments
4.1 Features of Environment
4 Agent Environments
• An environment is everything in the world which surrounds the agent, but it is not a part of an agent itself.
• An environment can be described as a situation in which an agent is present.
• The environment is where the agent lives and operates; it provides the agent with something to sense and
act upon.
4. Deterministic vs Stochastic:
• If the agent's current state and selected action completely determine the next state of the environment,
then the environment is called deterministic.
• A stochastic environment is random in nature and cannot be completely determined by the agent.
5. Single-agent vs Multi-agent:
• If only one agent is involved in an environment and operates by itself, the environment is called a
single-agent environment.
• However, if multiple agents are operating in an environment, then such an environment is called a multi-
agent environment.
• The agent design problems in the multi-agent environment are different from single agent environment.
6. Episodic vs Sequential:
• In an episodic environment, there is a series of one-shot actions, and only the current percept is required
for each action.
• In an episodic environment, the agent's experience is divided into atomic episodes.
• The next episode does not depend on the actions taken in previous episodes.
• In a sequential (non-episodic) environment, however, an agent requires memory of past actions to
determine the next best action.
7. Known vs Unknown:
• Known and unknown are not strictly features of an environment; they describe the agent's state of
knowledge about it.
• In a known environment, the results of all actions are known to the agent, while in an unknown
environment the agent needs to learn how the environment works in order to perform an action.
8. Accessible vs Inaccessible:
• If an agent can obtain complete and accurate information about the environment's state, the environment
is called accessible; otherwise it is called inaccessible.
• An empty room whose state can be defined by its temperature is an example of an accessible
environment.
• Information about an event on earth is an example of Inaccessible environment
Module-1 Lecture-5
Learning Objective:
5 Good Behaviour: The Concept of Rationality
5.1 What is Rational Agent?
5.2 What is Rationality?
5.3 The Nature of Environments
6 Types of Agents
In artificial intelligence, agents are entities that sense their surroundings and act to accomplish predetermined
objectives. From basic reactive reactions to complex decision-making, these agents display a wide range of
behaviors and skills.
Agents can be grouped into five classes based on their degree of perceived intelligence and capability.
These are given below:
Simple Reflex Agents
Model-Based Reflex Agents
Goal-Based Agents
Utility-Based Agents
Learning Agents
Example: An investment advisor algorithm suggests investment options by considering factors such as
potential returns, risk tolerance, and liquidity requirements, with the goal of maximizing the investor's long-
term financial satisfaction.
The process of looking for a sequence of actions that reaches the goal is called search.
A search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a
solution is found, the actions it recommends can be carried out. This is called the execution phase.
Thus, we have a simple “formulate, search, execute” design for the agent. After formulating a goal and a
problem to solve, the agent calls a search procedure to solve it.
It then uses the solution to guide its actions, doing whatever the solution recommends as the next thing to
do (typically, the first action of the sequence) and then removing that step from the sequence. Once the
solution has been executed, the agent formulates a new goal.
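The "formulate, search, execute" design can be sketched as a small agent class. The `search_fn` parameter and the action names in the usage below are assumptions; any search procedure can be plugged in:

```python
class SimpleProblemSolvingAgent:
    """Sketch of the 'formulate, search, execute' loop described above."""
    def __init__(self, goal, search_fn):
        self.goal = goal              # the formulated goal
        self.search_fn = search_fn    # any search procedure: (state, goal) -> actions
        self.seq = []                 # the action sequence found by search

    def __call__(self, percept):
        if not self.seq:
            # formulate a problem from the current percept, then search
            self.seq = self.search_fn(percept, self.goal)
        # execute: do the first recommended action and remove it from the plan
        return self.seq.pop(0) if self.seq else None
```

Once the stored sequence is exhausted, the next call formulates and searches again, matching the cycle described in the text.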
8 Searching
• In AI/ML, searching is the process of finding a solution or a path from an initial state to a goal state,
usually within a search space.
• A search space is essentially a set of all possible states or configurations of a system or environment.
In AI, search is a key technique used for problems such as puzzle solving, pathfinding, decision
making, and optimization.
• Searching allows an agent or algorithm to explore various possible solutions in an intelligent manner
and find the optimal or desired solution.
• There are different types of search algorithms, mainly categorized into:
a) Uninformed Search (Blind Search)
b) Informed Search (Heuristic Search)
Time Complexity: The time complexity of the BFS algorithm is O(b^d), where b is the branching factor and d is the depth of the shallowest solution.
Space Complexity: The space complexity of the BFS algorithm is also O(b^d).
Advantages:
➢ Simplicity: This algorithm is easy to understand and implement using a queue.
➢ Systematic Exploration: Explores all nodes level by level, ensuring no node is missed within the
same depth before moving deeper.
➢ Wide Range of Applications: BFS is versatile, applied in areas like web crawling, social network
analysis, and AI-based problem-solving.
Disadvantages:
➢ High Memory Usage: BFS requires storing all nodes at the current level in memory, which can grow
significantly in large or densely connected graphs.
➢ Slow for Deep Solutions: If the solution lies deep in the graph, BFS can become inefficient as it
explores all shallower nodes first.
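A minimal Python sketch of BFS over an adjacency-list graph, following the queue-based, level-by-level exploration described above (the example graph in the usage is assumed):

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search; returns a path with the fewest edges, or None."""
    frontier = deque([[start]])           # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()         # shallowest path is expanded first
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):   # expand all neighbours at this level
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(path + [nbr])
    return None                           # goal not reachable
```

Storing a whole path per queue entry keeps the sketch short; it also makes the memory cost noted above concrete, since every frontier node carries its path.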
Module-1 Lecture-10
Learning Objective:
8 Searching
8.1 Uniformed Search
8.1.2 Depth-first Search
Algorithm:
Step 1: PUSH the starting node into the stack.
Step 2: If the stack is empty then stop and return failure.
Step 3: If the top node of the stack is the goal node, then stop and return success.
Step 4: Else POP the top node from the stack and process it. Find all its neighbours that are in ready state and
PUSH them into the stack in any order.
Step 5: If stack is empty Go to step 6 else Go to step 3.
Step 6: Exit
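The PUSH/POP steps above can be sketched in Python as follows. As a small variant (assumed here for simplicity), the goal test is performed when a node is popped rather than when it is on top of the stack:

```python
def dfs(graph, start, goal):
    """Iterative depth-first search using an explicit stack."""
    stack = [start]                    # Step 1: PUSH the starting node
    visited = set()
    while stack:                       # Step 2: empty stack means failure
        node = stack.pop()             # Step 4: POP the top node
        if node == goal:               # Step 3: goal test
            return True
        if node in visited:
            continue
        visited.add(node)
        # push neighbours; reversed() so they are explored in listed order
        for nbr in reversed(graph.get(node, [])):
            if nbr not in visited:
                stack.append(nbr)
    return False                       # Step 6: exit with failure
```

The `visited` set guards against the recurring-state and infinite-loop problems listed in the disadvantages below.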
Example:
Let us take an example for implementing DFS algorithm.
Time Complexity: The time complexity of the DFS algorithm is O(b^d).
Space Complexity: The space complexity of the DFS algorithm is O(bd), where b is the branching factor and d is the maximum depth.
Advantages:
➢ DFS consumes very little memory.
➢ It can reach the goal node in less time than BFS if it happens to traverse the right path.
➢ It may find a solution without examining much of the search space, because we may get the desired
solution in the very first attempt.
Disadvantages:
➢ It is possible that many states keep recurring, and there is no guarantee of finding the goal node.
➢ Sometimes the search may also enter an infinite loop.
Module-1 Lecture-11
Learning Objective:
9 Searching
9.2 Informed(Heuristic) Search
9.2.1 Greedy best-first Search
9.2.2 A* Search
Algorithm:
Step 1: Place the starting node or root node into the queue.
Step 2: If the queue is empty, then stop and return failure.
Step 3: If the first element of the queue is our goal node, then stop and return success.
Step 4: Else, remove the first element from the queue. Expand it and compute the estimated goal distance
for each child. Place the children in the queue in ascending order of estimated goal distance.
Step 5: Go to step-3
Step 6: Exit.
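The queue-based procedure above can be sketched with a heap as the priority queue, so that the node with the smallest heuristic value h(n) is always removed first. The toy graph and heuristic values in the usage are assumptions:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the node with smallest h(n)."""
    frontier = [(h[start], start, [start])]   # heap ordered by h only
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None
```

Because the ordering ignores the path cost so far, the search is fast but, as noted below, not optimal.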
Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).
Space Complexity: The worst-case space complexity of greedy best-first search is also O(b^m), where m is
the maximum depth of the search space.
Advantage:
➢ It is more efficient than BFS and DFS.
➢ The time complexity of best-first search is much less than that of breadth-first search.
Disadvantages:
➢ In the worst case it can behave like an unguided depth-first search.
➢ Like DFS, it can get stuck in a loop.
➢ This algorithm is not optimal.
Module-1 Lecture-12
Learning Objective:
9 Searching
9.2 Informed(Heuristic) Search
9.2.1 Greedy best-first Search
9.2.2 A* Search
9.2.2 A* Search
A* is a powerful graph traversal and pathfinding algorithm widely used in artificial intelligence and computer
science. This algorithm is a specialization of best-first search.
It is mainly used to find the shortest path between two nodes in a graph, given the estimated cost of getting
from the current node to the destination node.
A* requires a heuristic function to evaluate the cost of the path that passes through a particular state.
The algorithm is complete if the branching factor is finite and every action has a fixed cost. Its
evaluation function is defined by the following formula.
f(n) = g(n)+h(n)
Where,
f(n): The estimated total cost of the path through node n from the start state to the goal state.
g(n): The actual cost of the path from the start state to the current node n.
h(n): The estimated (heuristic) cost of the path from the current node n to the goal state.
Algorithm:
Step-1: Place the starting node in the OPEN list.
Step-2: If OPEN list is empty, then stop and return failure.
Step-3: Select the node from the OPEN list which has the smallest value of evaluation function (g+h), if node
n is goal node then return success and stop, otherwise.
Step-4: Expand node n and generate all of its successors, and put n into the closed list. For every successor
n', check whether n' is already in the OPEN or CLOSED list, if not then compute evaluation function for n'
and place into OPEN list.
Step 5: Else, if node n' is already in OPEN or CLOSED, attach it to the back-pointer that reflects the
lowest g(n') value.
Step 6: Return to Step 2.
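The algorithm can be sketched in Python using a heap for the OPEN list. The graph edges and heuristic values below are taken from the worked example that follows; h(A) = 10 is an assumption, since the example never evaluates it:

```python
import heapq

# Edge costs and heuristic values from the worked example (h(A) assumed).
graph = {'A': [('B', 6), ('F', 3)], 'F': [('G', 1), ('H', 7)],
         'G': [('I', 3)], 'I': [('E', 5), ('H', 2), ('J', 3)],
         'B': [], 'E': [], 'H': [], 'J': []}
h = {'A': 10, 'B': 8, 'F': 6, 'G': 5, 'H': 3, 'I': 1, 'E': 3, 'J': 0}

def a_star(start, goal):
    """A* search; returns (path, cost) for the cheapest path found."""
    open_list = [(h[start], 0, start, [start])]   # entries: (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)   # smallest f = g + h
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for nbr, cost in graph[node]:
            if nbr not in closed:
                g2 = g + cost
                heapq.heappush(open_list, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, None
```

Running `a_star('A', 'J')` reproduces the path A → F → G → I → J with cost 10 derived step by step below.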
Example:-
Find the most cost effective path to reach from start state A to final state J using A* Algorithm.
Ans:-
The numbers written on edges represent the distance between the nodes.
The numbers written on nodes represent the heuristic value.
Step-1:
We start with node A. Node B and Node F can be reached from node A.
Here we calculate f(B) and f(F) by using A* Algorithm.
f(B) = 6 + 8 = 14
f(F) = 3 + 6 = 9
As f(F) < f(B), we go to node F.
Path: A → F
Step-2:
Node G and node H can be reached from node F.
Here we calculate f(G) and f(H):
f(G) = (3 + 1) + 5 = 9
f(H) = (3 + 7) + 3 = 13
As f(G) < f(H), we go to node G.
Path: A → F → G
Step-3:
Node I can be reached from node G.
Here we calculate f(I):
f(I) = (3 + 1 + 3) + 1 = 8
So we go to node I.
Path: A → F → G → I
Step-4:
Node E, H and J can be reached from node I.
We calculate f(E), f(H) and f(J):
f(E) = (3 + 1 + 3 + 5) + 3 = 15
f(H) = (3 + 1 + 3 + 2) + 3 = 12
f(J) = (3 + 1 + 3 + 3) + 0 = 10
As f(J) is the least, we go to node J.
Path: A → F → G → I → J
Time Complexity: The time complexity of the A* search algorithm is O(b^d).
Space Complexity: The space complexity of the A* search algorithm is O(b^d).
Advantages:
➢ The A* search algorithm performs better than many other search algorithms.
➢ A* search algorithm is optimal and complete.
➢ This algorithm can solve very complex problems.
Disadvantages:
➢ It does not always produce the shortest path, since it relies on heuristics and approximations; optimality requires an admissible heuristic.
➢ A* search algorithm has some complexity issues.
➢ The main drawback of A* is memory requirement as it keeps all generated nodes in the memory, so it
is not practical for various large-scale problems.
Module-1 Lecture-13
Learning Objective:
10 Constraint Satisfaction Problems (CSP)
10.1 Crypt Arithmetic Problem
Step-2:
Next we have T + G = U and O + O = T.
We will solve O + O = T first.
We have O = 1, so T = O + O = 1 + 1 = 2.

LETTER DIGIT
T 2
O 1
G
U
Step-3:
Next we have T + G = U.
We have T = 2, so 2 + G = U.
This column must generate a carry (the hundreds digit O), so 2 + G must be 10 or greater.
First option: if we take G = 9, then 2 + 9 = 11, giving U = 1. But we cannot choose U = 1, as 1 is already
assigned to O.
Second option: if we take G = 8, then 2 + 8 = 10, giving U = 0. This can be chosen, and we can then tally
the answer as follows:
  2 1
+ 8 1
-----
1 0 2

LETTER DIGIT
T 2
O 1
G 8
U 0
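The deduction above can be cross-checked by brute force. A small sketch that searches all digit assignments for the TO + GO = OUT puzzle:

```python
from itertools import permutations

def solve():
    """Brute-force the crypt-arithmetic puzzle TO + GO = OUT."""
    for T, O, G, U in permutations(range(10), 4):  # all distinct digit choices
        if T == 0 or G == 0 or O == 0:             # leading digits can't be 0
            continue
        if (10*T + O) + (10*G + O) == 100*O + 10*U + T:
            return {'T': T, 'O': O, 'G': G, 'U': U}
    return None
```

The search confirms that T = 2, O = 1, G = 8, U = 0 (i.e. 21 + 81 = 102) is the solution found by hand.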
Module-1 Lecture-14
Learning Objective:
11 Means-End-Analysis
11 Means-End-Analysis:
• Means-Ends Analysis combines search strategies that can reason either forward or backward. For many
problems, one direction or the other must be chosen, but a mixture of the two directions is appropriate for
solving complex and large problems.
• Such a mixed strategy makes it possible to first solve the major parts of a problem and then go back
and solve the smaller problems that arise while combining the big parts. This technique is called
Means-Ends Analysis.
• Means-Ends Analysis is a problem-solving technique used in artificial intelligence to limit search in
AI programs.
• It is a mixture of backward and forward search.
• The means-ends analysis process centres on detecting differences between the current state and
the goal state.
How means-ends analysis Works:
The means-ends analysis process can be applied recursively for a problem. It is a strategy to control search in
problem-solving.
Following are the main Steps which describe the working of MEA techniques for solving a problem.
1. First, evaluate the difference between Initial State and final State.
2. Select the various operators which can be applied for each difference.
3. Apply the operator at each difference, which reduces the difference between the current state and goal state.
Operator Subgoaling:
In the means-ends analysis process, we detect the differences between the current state and the goal state.
Once a difference is found, we can apply an operator to reduce it. Sometimes, however, an operator cannot
be applied to the current state. In that case we create a subproblem of reaching a state in which the
operator can be applied. This kind of backward chaining, in which operators are selected and then subgoals
are set up to establish the preconditions of the operator, is called Operator Subgoaling.
Algorithm of Means-Ends Analysis
Step 1: Compare CURRENT to GOAL, if there are no differences between them then return.
Step-2: Otherwise, select the most important difference and reduce it by doing the following steps until
success or failure occurs:
a) Select a new operator O which is applicable for the current difference, and if there is no such operator,
then signal failure.
b) Attempt to apply operator O to CURRENT. Make a description of two states:
i) O-START, a state in which O's preconditions are satisfied.
ii) O-RESULT, the state that would result if O were applied in O-START.
c) If
(First-Part MEA (CURRENT, O-START))
And
(LAST-Part MEA (O-Result, GOAL)),
are successful, then signal Success and return the result of combining FIRST-PART, O, and LAST-
PART.
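The algorithm can be sketched as a small recursive Python function. States are sets of facts; the operators (with preconditions, add lists, and delete lists) are toy assumptions made for illustration:

```python
def mea(state, goal, ops, depth=0):
    """Return a list of operator names turning state into a superset of goal."""
    if depth > 8:                          # simple recursion guard
        return None
    if goal <= state:                      # no differences: success
        return []
    diff = sorted(goal - state)[0]         # pick one difference to reduce
    for op in ops:
        if diff not in op['add']:          # operator must reduce the difference
            continue
        # operator subgoaling: first achieve op's preconditions (O-START)
        pre_plan = mea(state, op['pre'], ops, depth + 1)
        if pre_plan is None:
            continue
        s = state
        for name in pre_plan:              # replay the sub-plan
            o = next(o for o in ops if o['name'] == name)
            s = (s - o['del']) | o['add']
        s = (s - op['del']) | op['add']    # apply the operator itself (O-RESULT)
        rest = mea(s, goal, ops, depth + 1)
        if rest is not None:
            return pre_plan + [op['name']] + rest
    return None                            # signal failure

# Toy operators (assumed): you must be dressed before you can drive to work.
ops = [
    {'name': 'dress', 'pre': {'at_home'}, 'add': {'dressed'}, 'del': set()},
    {'name': 'drive', 'pre': {'dressed'}, 'add': {'at_work'}, 'del': {'at_home'}},
]
```

In the toy run, reducing the `at_work` difference selects `drive`, whose unmet precondition `dressed` triggers operator subgoaling and inserts `dress` first.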
Example:
Let's take an example where we know the initial state and goal state as given below. In this problem, we need
to get the goal state by finding differences between the initial state and goal state and applying operators.
Solution:
To solve the above problem, we will first find the differences between initial states and goal states, and for
each difference, we will generate a new state and will apply the operators. The operators we have for this
problem are:
• Move
• Delete
• Expand
1. Evaluating the initial state: In the first step, we will evaluate the initial state and will compare the initial
and Goal state to find the differences between both states.
2. Applying the Delete operator: The first difference is that the goal state has no dot symbol, while the
initial state does; so we first apply the Delete operator to remove the dot.
3. Applying the Move operator: After applying the Delete operator, a new state is produced, which we again
compare with the goal state. The remaining difference is that the square is outside the circle, so we apply
the Move operator.
4. Applying the Expand operator: A new state is generated in the third step, and we compare it with the
goal state. The remaining difference is the size of the square, so we apply the Expand operator, which
finally generates the goal state.
Module-2 Lecture-15
Learning Objective:
12 Adversarial Search
12.1 Game Playing
12.2 Game Tree
12 Adversarial Search:
• Adversarial search is a game-playing technique in which the agents operate in a competitive
environment.
• The agents (multi-agent) are given conflicting goals; they compete with one another and try to
defeat one another in order to win the game.
• Such conflicting goals give rise to the adversarial search.
• Here, game playing means games where human intelligence and logic are the deciding factors, excluding
factors such as luck. Tic-tac-toe, chess, checkers, etc., are games where no luck is involved; only the
mind works.
• Mathematically, this search is based on the concept of game theory, in which a game is played between
two players: for the game to end, one has to win and the other automatically loses.
Example: Tic-tac-toe game tree. The following figure shows part of the game tree for tic-tac-toe.
Following are some key points of the game:
• There are two players MAX and MIN.
• Players have an alternate turn and start with MAX.
• MAX maximizes the result of the game tree.
• MIN minimizes the result.
Explanation:
• From the initial state, MAX has 9 possible moves, as he starts first. MAX places x and MIN places o, and
the players move alternately until we reach a leaf node where one player has three in a row or all squares
are filled.
• For each node, both players compute the minimax value, which is the best achievable utility
against an optimal adversary.
• Suppose both the players are well aware of the tic-tac-toe and playing the best play. Each player is doing
his best to prevent another one from winning. MIN is acting against Max in the game.
• So the game tree has a layer for MAX and a layer for MIN, and each layer is called a ply. MAX places x,
then MIN places o to prevent MAX from winning, and the game continues until a terminal node is reached.
• In this either MIN wins, MAX wins, or it's a draw. This game-tree is the whole search space of possibilities
that MIN and MAX are playing tic-tac-toe and taking turns alternately.
In a given game tree, the optimal strategy can be determined from the minimax value of each node, written
MINIMAX(n). MAX prefers to move to a state of maximum value and MIN prefers to move to a state of minimum
value, so:
MINIMAX(n) =
UTILITY(n), if n is a terminal state
max of MINIMAX(s) over all successors s of n, if n is a MAX node
min of MINIMAX(s) over all successors s of n, if n is a MIN node
Module-2 Lecture-16
Learning Objective:
13.Mini-Max Algorithm
13.Mini-Max Algorithm
• The mini-max algorithm is a recursive (backtracking) algorithm used in decision-making and game
theory. It provides an optimal move for the player, assuming that the opponent also plays optimally. The
algorithm uses recursion to search through the game tree.
• The algorithm computes the minimax decision for the current state.
• Two players play the game: one is called MAX and the other is called MIN.
• Each player tries to ensure that the opponent gets the minimum benefit while they themselves get the
maximum benefit.
• The two players are opponents of each other: MAX selects the maximized value
and MIN selects the minimized value.
• The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backtracks up
the tree as the recursion unwinds.
Step 2: First we find the utility values for the Maximizer. Its initial value is -∞, so we compare each
terminal value with the Maximizer's initial value and keep the higher one, finding the maximum among them
all.
For node D: max(-∞, -1, 8) = 8
For node E: max(-∞, -3, -1) = -1
For node F: max(-∞, 2, 1) = 2
For node G: max(-∞, -3, 4) = 4
Step 3: In the next step, it's a turn for minimizer, so it will compare all nodes value with +∞, and will find the
3rd layer node values.
For node B: min(+∞, 8, -1) = -1
For node C: min(+∞, 2, 4) = 2
Step 4: Now it's a turn for Maximizer, and it will again choose the maximum of all nodes value and find the
maximum value for the root node.
In this game tree, there are only 4 layers, hence we reach immediately to the root node, but in real games,
there will be more than 4 layers.
For node A: max(-∞, -1, 2) = 2
That is the complete workflow of the minimax two-player game.
Time complexity − As it performs a DFS of the game tree, the time complexity of the min-max algorithm is
O(b^m), where b is the branching factor of the game tree and m is its maximum depth.
Space complexity − The space complexity of the mini-max algorithm is similar to DFS: O(bm).
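The worked example above can be reproduced with a short recursive implementation. The nested lists encode the example tree: MAX at the root A, MIN at B and C, MAX at D–G, and the terminal utilities at the leaves:

```python
# The example tree: A -> (B, C); B -> (D, E); C -> (F, G); leaves are utilities.
tree = [[[-1, 8], [-3, -1]],     # B's subtrees D and E
        [[2, 1], [-3, 4]]]       # C's subtrees F and G

def minimax(node, maximizing):
    """Recursive minimax over a nested-list game tree."""
    if isinstance(node, int):            # terminal node: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)
```

Evaluating `minimax(tree, True)` gives 2, the root value derived step by step above.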
Module-2 Lecture-17
Learning Objective:
14. Optimal decisions in multiplayer games
Fig:- The first three plies of a game tree with three players (A, B, C). Each node is labelled with values from
the viewpoint of each player. The best move is marked at the root.
Module-2 Lecture-18
Learning Objective:
15. Alpha-Beta Pruning
Let's take an example of two-player search tree to understand the working of Alpha-beta pruning.
Step 1: The MAX player starts with the first move from node A, where α = -∞ and β = +∞. These values of
alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values
to its child D.
Step 2: At node D, the value of α is calculated, as it is MAX's turn. The value of α is compared first with
2 and then with 3; max(2, 3) = 3 becomes the value of α at node D, and the node value will also be 3.
Step 3: The algorithm now backtracks to node B, where the value of β changes, as this is MIN's turn.
β = +∞ is compared with the available node value: min(+∞, 3) = 3, so at node B now α = -∞ and β = 3.
In the next step, the algorithm traverses the next successor of node B, which is node E, and the values
α = -∞ and β = 3 are passed down.
Step 4: At node E, MAX takes its turn, and the value of alpha changes. The current value of alpha is
compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3. Since α >= β, the right successor of
E is pruned: the algorithm will not traverse it, and the value at node E will be 5.
Step 5: Next, the algorithm again backtracks the tree, from node B to node A. At node A, alpha changes to
the maximum available value, 3, as max(-∞, 3) = 3, with β = +∞. These two values are now passed to the
right successor of A, which is node C.
At node C, α = 3 and β = +∞, and the same values are passed on to node F.
Step 6: At node F, the value of α is again compared with the left child, 0: max(3, 0) = 3; and then with
the right child, 1: max(3, 1) = 3. α remains 3, but the node value of F becomes 1.
Step 7: Node F returns node value 1 to node C. At C, α = 3 and β = +∞; here beta changes, comparing with 1:
min(+∞, 1) = 1. Now at C, α = 3 and β = 1, which again satisfies the condition α >= β, so the next child of
C, which is G, is pruned, and the algorithm does not compute the entire subtree of G.
Step 8: C returns the value 1 to A, and the best value for A is max(3, 1) = 3. The final game tree shows
which nodes were computed and which were never computed. Hence
the optimal value for the maximizer is 3 for this example.
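A compact implementation of the pruning logic described above, run on the example tree. The values of the pruned leaves are not given in the example, so the placeholders 9, 7, and 5 are assumptions; pruning guarantees they cannot change the result:

```python
import math

# Example tree: A(max) -> B, C (min); D, E, F, G (max); leaves are utilities.
tree = [[[2, 3], [5, 9]],        # B -> D, E (E's right child gets pruned)
        [[0, 1], [7, 5]]]        # C -> F, G (G is pruned entirely)

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning over a nested-list game tree."""
    if isinstance(node, int):
        return node
    if maximizing:
        v = -math.inf
        for child in node:
            v = max(v, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, v)
            if alpha >= beta:        # beta cut-off: prune remaining children
                break
        return v
    v = math.inf
    for child in node:
        v = min(v, alphabeta(child, True, alpha, beta))
        beta = min(beta, v)
        if alpha >= beta:            # alpha cut-off: prune remaining children
            break
    return v
```

Running `alphabeta(tree, True)` returns 3, matching the optimal value for the maximizer derived above, without visiting the pruned leaves.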
Module-2 Lecture-18
Learning Objective:
16. Logical agent
17. Knowledge based Agent
1. Knowledge level
The knowledge level is the first level of a knowledge-based agent. At this level, we specify what the
agent knows and what its goals are; with these specifications, we can fix its behaviour. For example,
suppose an automated taxi agent needs to go from station A to station B, and it knows the way from A to
B; this knowledge belongs at the knowledge level.
2. Implementation level
This is the physical representation of logic and knowledge. At the implementation level, the agent performs
actions according to the logic and knowledge level. At this level, the automated taxi agent actually
implements its knowledge and logic to reach the destination.
Module-2 Lecture-18
Learning Objective:
18. Logic
18.1 Propositional Logic
18.1.1 Syntax of propositional logic
18.1.2 Logical Connectives
18.1.3 Truth Table
18. Logic:
In AI/ML, logic is a formal and structured approach to reasoning that allows machines (computers, robots, or
systems) to make decisions, solve problems, or draw conclusions based on a set of rules, facts, or knowledge.
Logic helps machines perform reasoning tasks the way humans do, by following certain principles of logic (such
as true/false, and/or/not, etc.).
Types of Logic in AI/ML:
There are two main types of logic used in AI/ML:
1. Propositional Logic (PL)
2. First-Order Logic (FOL) or Predicate Logic
Example: "5 is a prime number" is a proposition.
➢ A proposition is a declarative statement that can either be True (T) or False (F) but not both.
➢ Propositional Logic uses five main logical connectives to connect statements. The connectives are:
NOT (negation), AND (conjunction), OR (disjunction), IMPLIES (implication), and BICONDITIONAL.
➢ Every propositional logic statement must be clear and unambiguous.
➢ When combining two or more propositions, always use parentheses to avoid confusion.
➢ When combining propositions, you must always follow the truth table to evaluate the logic.
➢ Avoid writing statements that contradict each other.
➢ The implication represents a cause-effect relationship. Always ensure the cause comes before the
effect, e.g. (P → Q).
➢ Always write complex propositional logic in standard form.
1. NOT(Negation): A sentence such as ¬ P is called negation of P. A literal can be either Positive literal or
negative literal.
Example: It is raining.
P=It is raining.
¬P
2. AND(Conjunction): A sentence which has ∧ connective such as, P ∧ Q is called a conjunction.
Example: Rohan is intelligent and hardworking. It can be written as,
P= Rohan is intelligent,
Q= Rohan is hardworking. → P∧ Q.
3. OR(Disjunction): A sentence which has ∨ connective, such as P ∨ Q is called disjunction, where P and
Q are the propositions.
Example: "Ritika is a doctor or Engineer",
Here P= Ritika is Doctor.
Q= Ritika is Engineer, so we can write it as P ∨ Q.
4. IMPLIES(Implication): A sentence such as P → Q, is called an implication. Implications are also
known as if-then rules. It can be represented as
If it is raining, then the street is wet.
Let P= It is raining,
Q= Street is wet, so it is represented as P → Q
5. IF AND ONLY IF (Biconditional): A sentence such as P ↔ Q is a biconditional sentence.
Example: "An angle is a right angle if and only if it measures 90 degrees."
Let P = An angle is a right angle,
Q = The angle measures 90 degrees.
It can be represented as P ↔ Q.
Following are the summarized truth tables for the logical connectives.
For Negation:
P ¬P
T F
F T
For Conjunction:
P Q P∧ Q
T T T
T F F
F T F
F F F
For Disjunction:
P Q P∨ Q
T T T
T F T
F T T
F F F
For Implication:
P Q P→Q
T T T
T F F
F T T
F F T
For Biconditional:
P Q P ↔Q
T T T
T F F
F T F
F F T
De Morgan's Law: ¬(A ∨ B) ≅ ¬A ∧ ¬B
This states that NOT (A OR B) is logically equivalent to (NOT A AND NOT B).
A B A∨B ¬(A∨B) ¬A ¬B ¬A∧¬B
T T T F F F F
T F T F F T F
F T T F T F F
F F F T T T T
Since the columns for ¬(A∨B) and ¬A∧¬B are identical, the two expressions are logically
equivalent.
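Truth-table equivalences such as De Morgan's law can also be checked mechanically by enumerating all assignments; this small sketch does exactly what the table above does:

```python
from itertools import product

def equivalent(f, g, n=2):
    """True if f and g agree on every assignment of n boolean variables."""
    return all(f(*vals) == g(*vals) for vals in product([True, False], repeat=n))

# De Morgan's law: NOT (A OR B) is equivalent to (NOT A) AND (NOT B).
lhs = lambda a, b: not (a or b)
rhs = lambda a, b: (not a) and (not b)
```

The same checker works for any of the equivalence laws listed later in this section by swapping in the corresponding lambdas.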
Tautologies:
A proposition P is a tautology if it is true under all circumstances, i.e. its truth table contains only T
in the final column.
Example: (P → Q) ↔ (¬Q → ¬P)
P Q P→Q ¬Q ¬P ¬Q→¬P (P→Q)↔(¬Q→¬P)
T T T F F T T
T F F T F F T
F T T F T T T
F F T T T T T
Contradiction:
A statement that is always false is known as a contradiction.
P ¬P P∧¬P
T F F
F T F
1. Commutative Law: A ∨ B ≅ B ∨ A
   A ∧ B ≅ B ∧ A
2. Associative Law: A ∨ (B ∨ C) ≅ (A ∨ B) ∨ C
   A ∧ (B ∧ C) ≅ (A ∧ B) ∧ C
4. Distributive Laws: A ∨ (B ∧ C) ≅ (A ∨ B) ∧ (A ∨ C)
   A ∧ (B ∨ C) ≅ (A ∧ B) ∨ (A ∧ C)
6. Absorption Laws: A ∨ (A ∧ B) ≅ A
   A ∧ (A ∨ B) ≅ A
   A ∨ (¬A ∧ B) ≅ A ∨ B
   A ∧ (¬A ∨ B) ≅ A ∧ B
7. Idempotence Law: A ∨ A ≅ A
   A ∧ A ≅ A
We cannot represent relations like ALL, some, or none with propositional logic.
Example:
Translating Negation
¬p
¬¬q
¬s
¬t
Translating Conjunction
a. It is raining and Mary is sick
(p ∧ q)
(t ∧ s)
(¬r ∧ ¬p)
(s ∧ ¬q)
translation 1: It is not the case that both it is raining and Mary is sick
¬(p ∧ q)
(¬p ∧ q)
Translating Disjunction
a. It is raining or Mary is sick
(p ∨ q)
((r ∧ p) ∨ s)
or (r ∧ (p ∨ s))
c. Mary is sick or Mary isn’t sick
(q ∨ ¬q)
((s ∨ q) ∨ p)
or (s ∨ (q ∨ p))
e. It is not the case that Mary is sick or Bob stayed up late last night
¬(q ∨ t)
Translating Implication
a. If it is raining, then Mary is sick
(p → q)
(s → p)
c. Mary is sick and it is raining implies that Bob stayed up late last night
((q ∧ p) → t)
¬(p → ¬s)
(p ↔ q)
b. If Mary is sick then it is raining, and vice versa
((p → q) ∧ (q → p))
or (p ↔ q)
(p ↔ s)
¬(p ↔ s)