
ARTIFICIAL INTELLIGENCE

&
MACHINE LEARNING
Lecture Notes
B.TECH 6th SEMESTER
Prepared By
Sonali Kar
Assistant Professor

Department of Computer Science Engineering


Balasore College of Engineering & Technology
NH - 16, Sergarh, Balasore, 756060 - Odisha, India
6th Semester
Artificial Intelligence & Machine Learning

Module-I: (12 hours)

INTRODUCTION – The Foundations of Artificial Intelligence; INTELLIGENT AGENTS – Agents and
Environments, Good Behaviour: The Concept of Rationality, the Nature of Environments, the Structure of
Agents; SOLVING PROBLEMS BY SEARCH – Problem-Solving Agents, Formulating Problems, Searching
for Solutions, Uninformed Search Strategies, Breadth-first Search, Depth-first Search, Searching with Partial
Information, Informed (Heuristic) Search Strategies, Greedy Best-first Search, A* Search, CSP, Means-Ends
Analysis.

Module-II: (12 hours)


ADVERSARIAL SEARCH – Games, The Minimax Algorithm, Optimal Decisions in Multiplayer
Games, Alpha-Beta Pruning, Evaluation Functions, Cutting off Search; LOGICAL AGENTS – Knowledge-
Based Agents, Logic, Propositional Logic, Reasoning Patterns in Propositional Logic, Resolution, Forward
and Backward Chaining; FIRST-ORDER LOGIC – Syntax and Semantics of First-Order Logic, Using First-
Order Logic, Knowledge Engineering in First-Order Logic; INFERENCE IN FIRST-ORDER LOGIC –
Propositional vs. First-Order Inference, Unification and Lifting, Forward Chaining, Backward Chaining,
Resolution

Module-III: (6 hours)
UNCERTAINTY – Acting under Uncertainty, Basic Probability Notation, The Axioms of
Probability, Inference Using Full Joint Distributions, Independence, Bayes' Rule and its Use;
PROBABILISTIC REASONING – Representing Knowledge in an Uncertain Domain, The Semantics of
Bayesian Networks, Efficient Representation of Conditional Distributions, Exact Inference in
Bayesian Networks, Approximate Inference in Bayesian Networks

Module-IV: (10 hours)


LEARNING METHODS – Statistical Learning, Learning with Complete Data, Learning with
HiddenVariables, Rote Learning, Learning by Taking Advice, Learning in Problem-solving, learningfrom
Examples: Induction, Explanation-based Learning, Discovery, Analogy, FormalLearning Theory,Neural Net
Learning and Genetic Learning. Expert Systems: Representingand Using DomainKnowledge, Expert System
Shells, Explanation, Knowledge Acquisition.

Books:
[1] Elaine Rich, Kevin Knight, & Shivashankar B. Nair, Artificial Intelligence, McGraw Hill, 3rd ed., 2009.
[2] Stuart Russell & Peter Norvig, Artificial Intelligence: A Modern Approach, 4th ed., Pearson, 2020.
[3] Nils J. Nilsson, Artificial Intelligence: A New Synthesis, Morgan Kaufmann, 2000.
[4] Dan W. Patterson, Introduction to Artificial Intelligence & Expert Systems, PHI, 2010.
[5] S. Kaushik, Artificial Intelligence, 1st ed., Cengage Learning, 2011.

Digital Learning Resources:


Course Name: Artificial Intelligence: Search Methods for Problem Solving
Course Link: https://swayam.gov.in/nd1_noc20_cs81/preview
Course Instructor: Prof. D. Khemani, IIT Madras

Course Name: Fundamentals of Artificial Intelligence
Course Link: https://swayam.gov.in/nd1_noc20_me88/preview
Course Instructor: Prof. S. M. Hazarika, IIT Guwahati

Course Name: Introduction to Machine Learning
Course Link: https://nptel.ac.in/courses/106/105/106105152
Course Instructor: Prof. S. Sarkar, IIT Kharagpur

Course Name: Machine Learning
Course Link: https://nptel.ac.in/courses/106/106/106106202
Course Instructor: Prof. Carl Gustaf Jansson, IIT Madras
Module-1 Lecture-1
Introduction
Learning Objective:
1 Introduction
1.1 What is Artificial Intelligence?
1.2 Why do we need AI?
1.3 What is intelligence in AI?
1.4 Advantages and Disadvantages of AI
1.5 Applications of AI

1 Introduction
1.1 What is Artificial Intelligence?
• It is the branch of computer science that emphasizes the development of intelligent machines that think
and work like humans and are able to make decisions. It is also known as Machine Intelligence.
• According to the father of Artificial Intelligence, John McCarthy, it is “The science and engineering of
making intelligent machines, especially intelligent computer programs”.
• Artificial Intelligence is a way of making a computer, a computer-controlled robot, or a piece of software
think intelligently, in a manner similar to how intelligent humans think.
• AI is accomplished by studying how the human brain thinks and how humans learn, decide, and work
while trying to solve a problem, and then using the outcomes of this study as a basis for developing
intelligent software and systems.
1.2 Why do we need AI?
• To create expert systems: systems which exhibit intelligent behaviour, with the capability to learn,
demonstrate, explain, and advise their users.
• To implement human intelligence in machines: creating systems that understand, think, learn, and behave
like humans, helping machines find solutions to complex problems the way humans do, and applying these
solutions as algorithms in a computer-friendly manner.
1.3 What is intelligence in AI?
• The ability of a system to calculate, perceive relationships and analogies, learn from experience, store and
retrieve information from memory, solve problems, use natural language fluently, and classify and adapt
to new situations.
• Intelligence is intangible.
• It is composed of

a) Reasoning
b) Learning
c) Problem solving
d) Perception
e) Linguistic intelligence
a) Reasoning − It is the set of processes that enables us to provide a basis for judgement, decision-making,
and prediction.
b) Learning − It is the activity of gaining knowledge or skill by studying, practising, being taught, or
experiencing something. Learning enhances awareness of the subjects of study.
c) Problem solving − Problem solving also includes decision making, which is the process of selecting the
best alternative from among the multiple available alternatives to reach the desired goal.
d) Perception − It is the process of acquiring, interpreting, selecting, and organizing sensory information.
e) Linguistic Intelligence − It is one's ability to use, comprehend, speak, and write verbal and written
language.
1.4 Advantages and Disadvantages of AI
Advantages:
➢ High accuracy with fewer errors: AI systems make fewer errors and achieve high accuracy because they
take decisions based on prior experience and information.
➢ High speed: AI systems can operate at very high speed and make decisions quickly.
➢ High reliability: AI machines are highly reliable and can perform the same action multiple times with
high accuracy.
➢ Useful for risky areas: AI machines can be helpful in situations such as defusing a bomb or exploring
the ocean floor, where employing a human would be risky.
➢ Digital assistance: AI can provide digital assistance to users; for example, various e-commerce websites
currently use AI to show products matching customer requirements.
➢ Useful as a public utility: AI can be very useful in public utilities, such as self-driving cars which can
make our journeys safer and hassle-free, facial recognition for security purposes, and natural language
processing to communicate with humans in human language.
Disadvantages:
➢ High cost: The hardware and software requirements of AI are very costly, and systems require
substantial maintenance to meet current-world requirements.
➢ Can't think outside the box: Even though we are building smarter machines with AI, they cannot work
outside the box; a robot will only do the work for which it is trained or programmed.
➢ Unemployment: As AI automates routine tasks, it can displace human workers.
➢ No feelings and emotions: An AI machine can be an outstanding performer, but it has no feelings, so it
cannot form any kind of emotional attachment with humans, and it may sometimes be harmful to users
if proper care is not taken.
➢ Increased dependency on machines: As technology advances, people are becoming more dependent on
devices, and hence may lose some of their mental capabilities.
➢ No original creativity: Humans are highly creative and can imagine new ideas, but AI machines cannot
match this power of human intelligence; they cannot be truly creative and imaginative.
1.5 Applications of AI
i. Gaming
ii. Natural language processing
iii. Expert systems
iv. Speech Recognition
v. Handwriting Recognition
vi. Intelligent robots
vii. Computer vision, etc.
Module-1 Lecture-2

Learning Objective:
2 Agents in Artificial Intelligence
2.1 What is an Agent?
2.2 What is an Intelligent Agent?

2 Agents in Artificial Intelligence


2.1 What is an Agent?
• An agent is anything that can perceive its environment through sensors and act upon that environment
through actuators.
• An agent can be:
Human agent: A human agent has eyes, ears, and other organs which work as sensors, and hands, legs,
the vocal tract, and other body parts which work as actuators.
Robotic agent: A robotic agent can have cameras, infrared range finders, and NLP for sensors, and
various motors for actuators.
Software agent: A software agent can have keystrokes and file contents as sensors, and it acts by
displaying output on the screen, writing files, etc.
Before moving forward, we should first know about sensors, effectors, and actuators.
• Sensor: A sensor is a device which detects changes in the environment and sends the information to other
electronic devices. An agent observes its environment through sensors.
• Actuators: Actuators are the components of machines that convert energy into motion. The actuators are
responsible for moving and controlling a system. An actuator can be an electric motor, gears, a brake,
etc.
• Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels, arms,
fingers and display screen.

Fig: Agents interact with environments through sensors and actuators.


2.2 What is an Intelligent Agent?
• An intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators
for achieving goals.
• An intelligent agent may learn from the environment to achieve its goals. An intelligent agent is also
called a rational agent, which is one that does the right thing.
• An intelligent agent can transform perception into actions rationally.
Following are the main four rules for an AI agent:
Rule 1: An AI agent must have the ability to perceive the environment.
Rule 2: The observation must be used to make decisions.
Rule 3: Decision should result in an action.
Rule 4: The action taken by an AI agent must be a rational action.
Module-1 Lecture-3

Learning Objective:
3 Structure of an Agent

3 Structure of an Agent
The structure of an intelligent agent is a combination of architecture and agent program.
It can be viewed as:
Agent = Architecture + Agent program
Architecture: Architecture is the machinery that the agent program executes on.
Agent program: The agent function (f) for an artificial agent will be implemented by an agent program. An
agent's behaviour is described by the agent function that maps any given percept sequence to an action.
The agent function f maps from percept histories to actions:
f: P* → A
The part of the agent taking an action on the environment is called an actuator.

Fig: Structure of an Agent


Example:
Simple example-the vacuum-cleaner world: This particular world has just two locations: squares A and B.
The vacuum agent perceives which square it is in and whether there is dirt in the square. It can choose to move
left, move right, suck up the dirt, or do nothing. One very simple agent function is the following: if the current
square is dirty, then suck; otherwise, move to the other square. A partial tabulation of this agent function is
shown in the figure below. An agent program that implements it is sketched below.
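Here is a minimal Python sketch of such an agent program (modelled on the reflex vacuum agent in Russell & Norvig); the location names 'A' and 'B' and the percept format are assumptions for illustration.

# A simple reflex vacuum agent: the percept is a (location, status) pair.
def reflex_vacuum_agent(percept):
    location, status = percept          # e.g. ('A', 'Dirty')
    if status == 'Dirty':
        return 'Suck'                   # clean the current square
    elif location == 'A':
        return 'Right'                  # move to the other square
    else:
        return 'Left'

# Partial tabulation of the agent function:
for percept in [('A', 'Clean'), ('A', 'Dirty'), ('B', 'Clean'), ('B', 'Dirty')]:
    print(percept, '->', reflex_vacuum_agent(percept))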
Module-1 Lecture-4

Learning Objective:
4 Agent Environments
4.1 Features of Environment

4 Agent Environments
• An environment is everything in the world which surrounds the agent, but it is not a part of the agent itself.
• An environment can be described as the situation in which an agent is present.
• The environment is where the agent lives and operates; it provides the agent with something to sense and
act upon.

4.1 Features of Environment


An environment can have various features from the point of view of an agent.

1. Fully observable vs Partially Observable


2. Static vs Dynamic
3. Discrete vs Continuous
4. Deterministic vs Stochastic
5. Single-agent vs Multi-agent
6. Episodic vs sequential
7. Known vs Unknown
8. Accessible vs Inaccessible

1. Fully observable vs Partially Observable:


• If an agent's sensors can access the complete state of the environment at each point in time, then it is a
fully observable environment; otherwise it is partially observable.
• A fully observable environment is easy to deal with, as there is no need to maintain an internal state to
keep track of the history of the world.
• If an agent has no sensors at all, the environment is called unobservable.
2. Static vs Dynamic:
• If an environment does not undergo any change while an agent is busy performing a specific task, then
the environment is said to be static; otherwise it is dynamic.
• Taxi driving is an example of a dynamic environment whereas Crossword puzzles are an example of a
static environment.
3. Discrete vs Continuous:
• If there are a finite number of percepts and actions that can be performed within an environment, then it
is called a discrete environment; otherwise it is called a continuous environment.
• A chess game comes under discrete environment as there is a finite number of moves that can be
performed.
• A self-driving car is an example of a continuous environment.

4. Deterministic vs Stochastic:

• If an agent's current state and selected action completely determine the next state of the environment,
then such an environment is called a deterministic environment.
• A stochastic environment is random in nature and cannot be completely determined by an agent.
5. Single-agent vs Multi-agent:
• If only one agent is involved in an environment, and operating by itself then such an environment is
called single agent environment.
• However, if multiple agents are operating in an environment, then such an environment is called a multi-
agent environment.
• The agent design problems in the multi-agent environment are different from single agent environment.
6. Episodic vs Sequential:
• In an episodic environment, there is a series of one-shot actions, and only the current percept is required
for the action.
• In an episodic environment, the agent's experience is divided into atomic episodes.
• The next episode does not depend on the actions taken in previous episodes.
• In a sequential (non-episodic) environment, however, an agent requires memory of past actions to
determine the next best action.

7. Known vs Unknown:
• Known and unknown are not actually features of an environment; they describe the agent's state of
knowledge for performing an action.
• In a known environment, the results for all actions are known to the agent. While in unknown
environment, agent needs to learn how it works in order to perform an action.
8. Accessible vs Inaccessible:
• If an agent can obtain complete and accurate information about the environment's state, then it is called
an accessible environment; otherwise it is called inaccessible.
• An empty room whose state can be defined by its temperature is an example of an accessible
environment.
• Information about an event on earth is an example of an inaccessible environment.
Module-1 Lecture-5

Learning Objective:
5 Good Behaviour: The Concept of Rationality
5.1 What is Rational Agent?
5.2 What is Rationality?
5.3 The Nature of Environments

5.1 What is Rational Agent?


● A rational agent is an agent which has clear preference, models uncertainty, and acts in a way to maximize
its performance measure with all possible actions.
● A rational agent is said to perform the right thing.
● AI is about creating rational agents to use for game theory and decision theory for various real-world
scenarios.
● For an AI agent, rational action is most important because in reinforcement learning, an agent gets a
positive reward for each best possible action and a negative reward for each wrong action.
5.2 What is Rationality?
● Rationality is concerned with expected actions and results depending upon what the agent has perceived.
Performing actions with the aim of obtaining useful information is an important part of rationality.
● The rationality of an agent is measured by its performance measure. Rationality can be judged on the
basis of following points:

➢ The performance measures, which determine the degree of success.


➢ The agent’s prior knowledge about the environment.
➢ The actions that the agent can perform.
➢ Agent’s Percept Sequence till now.
This leads to a definition of a rational agent:
● For each possible percept sequence, a rational agent should select an action that is expected to maximize
its performance measure, given the evidence provided by the percept sequence and whatever built-in
knowledge the agent has.
● A rational agent always performs the right action, where the right action means the one that causes the
agent to be most successful given the percept sequence.
● The problem the agent solves is characterized by Performance Measure, Environment, Actuators, and
Sensors (PEAS).
5.3 The Nature of Environments
● To design a rational agent, we must specify the task environment.
● The performance measure, the environment, and the agent’s actuators and sensors are grouped as the task
environment, and called as PEAS (Performance measure, Environment, Actuators, Sensors).
Task Environment: PEAS for self-driving cars:
For a self-driving car, the PEAS representation will be:
Performance Measures: Safety, time, legal drive, comfort
Environment: Roads, other vehicles, road signs, pedestrian
Actuators: Steering, accelerator, brake, signal, horn
Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar.
PEAS for Medical Diagnosis:
Performance Measures: Healthy patient, minimized cost
Environment: Patient, hospital, staff
Actuators: Tests, treatments
Sensors: Keyboard (entry of symptoms)
PEAS for Vacuum Cleaner:
Performance Measures: Cleanliness, efficiency, battery life, security
Environment: Room, table, wood floor, carpet
Actuators: Wheels, brushes, vacuum extractor
Sensors: Camera, dirt detection sensor, cliff sensor, bump sensor, infrared wall sensor
Module-1 Lecture-6
Learning Objective:
6 Types of Agents
6.1 Simple Reflex Agents
6.2 Model-Based Reflex Agents
6.3 Goal-Based Agents

6 Types of Agents

In artificial intelligence, agents are entities that sense their surroundings and act to accomplish predetermined
objectives. From basic reflex responses to complex decision-making, these agents display a wide range of
behaviors and skills.
Agents can be grouped into five classes based on their degree of perceived intelligence and capability. These
are given below:
Simple Reflex Agents
Model-Based Reflex Agents
Goal-Based Agents
Utility-Based Agents
Learning Agents

6.1 Simple Reflex Agents


Simple reflex agents are the simplest agents. These agents take decisions on the basis of the current
percepts and ignore the rest of the percept history. They have no internal state or memory and respond instantly
to the current situation.
Example:- An automatic door sensor is a simple reflex agent. When the sensor detects movement near the
door, it triggers the mechanism to open. The rule is: if movement is detected near the door, then open the
door. It does not consider any additional context, such as who is approaching or the time of day, and will
always open whenever movement is sensed.

6.2 Model-Based Reflex Agents


Model-based agents are more sophisticated than simple reflex agents. These agents are capable of tracking the
situation and working in a partially observable environment.
A model-based agent has two important factors:
a) Model: It is knowledge about "how things happen in the world," so it is called a Model-based agent.
b) Internal State: It is a representation of the current state based on percept history.
These agents have the model, "which is knowledge of the world" and based on the model they perform actions.
Example:- A robot vacuum cleaner such as a Roomba, which maps a room and remembers obstacles like
furniture. It ensures cleaning without repeatedly bumping into the same spots.

6.3 Goal-Based Agents


Goal-based agents have predefined objectives or goals that they aim to achieve. By combining descriptions of
goals and models of the environment, these agents plan to achieve different objectives, like reaching particular
destinations. They use search and planning methods to create sequences of actions that enhance decision-
making in order to achieve goals. Goal-based agents differ from reflex agents by including forward-thinking
and future-oriented decision-making processes.
Example: A delivery robot tasked with delivering packages to specific locations. It analyzes its current
position, destination, available routes, and obstacles to plan an optimal path towards delivering the package.
Module-1 Lecture-7
Learning Objective:
6.4 Utility-Based Agents
6.5 Learning Agents

6.4 Utility-Based Agents


These agents are comparable to goal-based agents, but they add an extra component of utility measurement,
which distinguishes them by providing a measure of success at a given state.
The Utility-based agent is useful when there are multiple possible alternatives, and an agent has to choose in
order to perform the best action.

Example: An investment advisor algorithm suggests investment options by considering factors such as
potential returns, risk tolerance, and liquidity requirements, with the goal of maximizing the investor's long-
term financial satisfaction.

6.5 Learning Agents


In artificial intelligence, a learning agent is an agent that possesses the ability to learn from its past experiences.
It starts with basic knowledge and is then able to act and adapt automatically through learning.
A learning agent has mainly four conceptual components:
Learning element: It is responsible for making improvements by learning from the environment.
Critic: The learning element takes feedback from the critic, which describes how well the agent is doing
with respect to a fixed performance standard.
Performance element: It is responsible for selecting external action
Problem generator: This component is responsible for suggesting actions that will lead to new and
informative experiences.
Learning agents are able to learn, analyze performance, and look for new ways to improve the performance.
Examples:
Chatbots: AIML is frequently used to develop chatbots that can simulate conversation with users. These
chatbots use pattern matching to respond to user inputs. A classic example is the A.L.I.C.E (Artificial
Linguistic Internet Computer Entity) chatbot, which learns from user interactions to provide more accurate
and helpful responses.
Recommendation Systems: AIML can also be used to create recommendation systems. These agents analyze
user preferences and behaviors to suggest products, services, or content. For instance, an online shopping
website might use an AIML-based learning agent to recommend items to customers based on their browsing
history and purchase patterns.
Module-1 Lecture-8
Learning Objective:
7 Solving Problems By Search
7.1 Problem-Solving Agents
7.2 Formulating problems
7.3 Searching for Solutions

7.1 Problem-Solving Agents:


A problem is a set of information that the agent will utilize to make decisions. Problem-solving refers to
reaching a definite goal from a present state or condition.
In computer science, problem-solving is a part of artificial intelligence which includes various approaches,
including heuristics and algorithms.
Solving a problem requires the following steps:
a. Goal Formulation: It is the first step in problem solving and based on the current situation and
the agent’s performance measure.
b. Problem Formulation: It is the process of deciding what actions and states to consider, given a
goal.

The process of looking for a sequence of actions that reaches the goal is called search.
A search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a
solution is found, the actions it recommends can be carried out. This is called the execution phase.
Thus, we have a simple “formulate, search, execute” design for the agent. After formulating a goal and a
problem to solve, the agent calls a search procedure to solve it.
It then uses the solution to guide its actions, doing whatever the solution recommends as the next thing to do
(typically, the first action of the sequence) and then removing that step from the sequence. Once the solution
has been executed, the agent will formulate a new goal.

Problem solving components


A problem can be defined formally by five components
i. Initial state: The first component that describes the problem is the initial state that the agent starts in.
ii. Action: A description of the possible actions available to the agent. Given a particular state s,
ACTIONS(s) returns the set of actions that can be executed in s.
iii. Transition Model: A description of what each action does; the formal name for this is the transition
model, specified by a function RESULT(s, a) that returns the state that results from doing action a in
state s. We also use the term successor to refer to any state reachable from a given state by a single
action.
Together, the initial state, actions, and transition model implicitly define the state space of the problem.
The state space forms a directed network or graph in which the nodes are states and the links between
nodes are actions. A path in the state space is a sequence of states connected by a sequence of actions
iv. Goal Test: It determines whether a given state is a goal state.
v. Path Cost: A path cost function that assigns a numeric cost to each path. The problem-solving agent
chooses a cost function that reflects its own performance measure.
Example problems: the 8-puzzle, the 8-queens problem, the travelling salesman problem, etc.
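To make these five components concrete, here is a minimal Python sketch that formulates the two-square vacuum world (from Lecture 3) as a search problem; the state encoding and function names are illustrative assumptions, not a standard API.

# A state is (agent_location, dirt_at_A, dirt_at_B).
INITIAL_STATE = ('A', True, True)                 # i.   initial state

def actions(state):                               # ii.  ACTIONS(s)
    return ['Left', 'Right', 'Suck']

def result(state, action):                        # iii. transition model RESULT(s, a)
    loc, dirt_a, dirt_b = state
    if action == 'Suck':
        return (loc, dirt_a and loc != 'A', dirt_b and loc != 'B')
    return ('A' if action == 'Left' else 'B', dirt_a, dirt_b)

def goal_test(state):                             # iv.  goal test: no dirt anywhere
    return not state[1] and not state[2]

def path_cost(action_sequence):                   # v.   path cost: one unit per action
    return len(action_sequence)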

7.2 Formulating problems


The formulation is reasonable, but it is still a model, an abstract mathematical description and not the real
thing.
The process of removing detail from a representation is called abstraction.

7.3 Searching for Solutions


• A solution is an action sequence, so search algorithms work by considering various possible action
sequences.
• The possible action sequences starting at the initial state form a search tree with the initial state at the root;
the branches are actions and the nodes correspond to states in the state space of the problem.
• Expanding the current state means applying each legal action to it, thereby generating a new set of states.
The current state is the parent node; the newly generated states are child nodes.
• Leaf node is a node with no children in the tree. The set of all leaf nodes available for expansion at any
given point is called the frontier.
• The process of expanding nodes on the frontier continues until either a solution is found or there are no
more states to expand.
• Search algorithms all share this basic structure; they vary primarily according to how they choose which
state to expand next, a choice called the search strategy.
Module-1 Lecture-9
Learning Objective:
8 Searching
8.1 Uniformed Search
8.1.1 Breadth-first Search

8 Searching
• In AI and ML, searching is the process of finding a solution or a path from an initial state to a goal state,
usually within a search space.
• A search space is essentially a set of all possible states or configurations of a system or environment.
In AI, search is a key technique used for problems such as puzzle solving, pathfinding, decision
making, and optimization.
• Searching allows an agent or algorithm to explore various possible solutions in an intelligent manner
and find the optimal or desired solution.
• There are different types of search algorithms, mainly categorized into:
a) Uninformed Search (Blind Search)
b) Informed Search (Heuristic Search)

8.1 Uninformed Search (Blind Search):


• Uninformed search, also called blind search, has no additional information about the goal beyond the
problem definition. All it can do is generate successors and distinguish a goal state from a non-goal
state.
• They explore the search space systematically without any heuristics to guide the search.
• There are different types of uninformed search algorithms:
1) Breadth-first search
2) Depth-first search
3) Uniform-cost search
4) Depth-limited search
5) Iterative deepening depth-first search
6) Bidirectional search

8.1.1 Breadth-first Search


• Breadth-first search is the most common search strategy in which the root node is expanded first, then
all the successors of the root node are expanded next, then their successors, and so on.
• Here all the nodes are expanded at a given depth in the search tree before any nodes at the next level
are expanded.
• Breadth-first search is an instance of the general graph-search algorithm in which the shallowest
unexpanded node is chosen for expansion. This is achieved very simply by using a FIFO queue for the
frontier. The new nodes go to the back of the queue, and old nodes, which are shallower than the new
nodes, get expanded first. There is one slight tweak on the general graph-search algorithm, which is
that the goal test is applied to each node when it is generated rather than when it is selected for
expansion.
• Thus, breadth-first search always has the shallowest path to every node on the frontier.
Algorithm:
Step 1: Place the starting node on the queue.
Step 2: If the queue is empty, return failure and stop.
Step 3: If the first element on the queue is a goal node return success and stop. Otherwise,
Step 4: Remove and expand the first element from the queue and place all the children at the end of the queue
in any order.
Step 5: If queue is empty Go to Step 6 else go to step 3.
Step 6: Exit.

Time Complexity: The time complexity of the BFS algorithm is O(b^d).
Space Complexity: The space complexity of the BFS algorithm is O(b^d), where b is the branching factor and
d is the depth of the shallowest solution.
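As a concrete illustration, here is a minimal Python sketch of BFS with a FIFO queue for the frontier and the goal test applied when a node is generated, as described above; the small example graph is an assumption for demonstration.

from collections import deque

def breadth_first_search(graph, start, goal):
    # Returns a path from start to goal, or None if no path exists.
    if start == goal:
        return [start]
    frontier = deque([[start]])            # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()          # shallowest path first
        for child in graph[path[-1]]:
            if child in explored:
                continue
            if child == goal:              # goal test on generation
                return path + [child]
            explored.add(child)
            frontier.append(path + [child])
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(breadth_first_search(graph, 'A', 'D'))   # ['A', 'B', 'D']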
Advantages:
➢ Simplicity: This algorithm is easy to understand and implement using a queue.
➢ Systematic Exploration: Explores all nodes level by level, ensuring no node is missed within the
same depth before moving deeper.
➢ Wide Range of Applications: BFS is versatile, applied in areas like web crawling, social network
analysis, and AI-based problem-solving.
Disadvantages:
➢ High Memory Usage: BFS requires storing all nodes at the current level in memory, which can grow
significantly in large or densely connected graphs.
➢ Slow for Deep Solutions: If the solution lies deep in the graph, BFS can become inefficient as it
explores all shallower nodes first.
Module-1 Lecture-10
Learning Objective:
8 Searching
8.1 Uniformed Search
8.1.2 Depth-first Search

8.1.2 Depth-first Search


• Depth-first search (DFS) is a recursive algorithm for searching all the vertices of a graph or tree data
structure. The algorithm traverses a graph in a depthward motion and uses a stack to remember the next
vertex from which to continue the search when a dead end occurs in any iteration.
• DFS uses a stack data structure for its implementation.

Algorithm:
Step 1: PUSH the starting node into the stack.
Step 2: If the stack is empty then stops and return failure.
Step 3: If the top node of the stack is the goal node, then stop and return success.
Step 4: Else POP the top node from the stack and process it. Find all its neighbours that are in ready state and
PUSH them into the stack in any order.
Step 5: If stack is empty Go to step 6 else Go to step 3.
Step 6: Exit
Time Complexity: The time complexity of the DFS algorithm is O(b^m), where m is the maximum depth of
the search tree.
Space Complexity: The space complexity of the DFS algorithm is O(bm).
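A minimal iterative Python sketch of DFS using an explicit stack; the example graph is again an illustrative assumption.

def depth_first_search(graph, start, goal):
    # Iterative DFS with an explicit stack; returns a path or None.
    stack = [[start]]                      # LIFO stack of paths
    visited = set()
    while stack:
        path = stack.pop()                 # deepest path first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in graph[node]:          # push neighbours in any order
            stack.append(path + [child])
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(depth_first_search(graph, 'A', 'D'))   # e.g. ['A', 'C', 'D']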

Advantages:
➢ DFS consumes much less memory than BFS.
➢ It can reach the goal node faster than BFS if it happens to traverse the right path.
➢ It may find a solution without examining much of the search space, because we may reach the desired
solution at the very first attempt.
Disadvantages:
➢ It is possible that many states keep reoccurring. There is no guarantee of finding the goal node.
➢ Sometimes the states may also enter into infinite loops.
Module-1 Lecture-11
Learning Objective:
9 Searching
9.2 Informed(Heuristic) Search
9.2.1 Greedy best-first Search
9.2.2 A* Search

9.2 Informed(Heuristic) Search


• Informed search algorithms are a type of search algorithm that uses heuristic functions to guide the
search process.
• A heuristic function is a function that maps from problem state descriptions to a measure of desirability,
usually represented as a number. The purpose of the heuristic function is to guide the search process in
the most profitable directions by suggesting which path to follow first when more than one is available.
• Generally, the term heuristic is used for any advice that is often effective but is not guaranteed to work
in every case. For example, in the travelling salesman problem (TSP) we use a heuristic to choose the
nearest neighbour. A heuristic is a method that provides a better guess about the correct choice to make
at any junction than would be achieved by random guessing. This technique is useful for solving tough
problems which could not be solved any other way, where exact solutions would take practically infinite
time to compute.
• Two commonly used informed search techniques are:
Greedy Best-first Search/ Best first Search
A* Search

9.2.1 Greedy best-first Search/ Best first Search


• Best-first search is an instance of the general graph-search algorithm in which a node is selected for
expansion based on an evaluation function f(n). Traditionally, the node with the lowest evaluation is
selected for expansion, because the evaluation measures distance to the goal.
• Best-first search can be implemented within the general search framework via a priority queue, a data
structure that maintains the fringe in ascending order of f values.
• It combines aspects of depth-first and breadth-first search.
• Best-first search is often referred to as a greedy algorithm because it pursues the most desirable path as
soon as its heuristic weight becomes the most desirable.

Algorithm:
Step 1: Place the starting node or root node into the queue.
Step 2: If the queue is empty, then stop and return failure.
Step 3: If the first element of the queue is our goal node, then stop and return success.
Step 4: Else, remove the first element from the queue. Expand it and compute the estimated goal distance
for each child. Place the children in the queue in ascending order of estimated goal distance.
Step 5: Go to step-3
Step 6: Exit.
Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).
Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the
maximum depth of the search space.
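A minimal Python sketch of greedy best-first search using a priority queue (heapq) ordered by the heuristic value h(n); the graph and heuristic values below are made-up illustrations.

import heapq

def greedy_best_first_search(graph, h, start, goal):
    # Always expand the node with the lowest heuristic value h(n).
    frontier = [(h[start], [start])]       # priority queue ordered by h
    visited = set()
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in graph[node]:
            heapq.heappush(frontier, (h[child], path + [child]))
    return None

graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
h = {'S': 5, 'A': 3, 'B': 4, 'G': 0}       # assumed heuristic values
print(greedy_best_first_search(graph, h, 'S', 'G'))   # ['S', 'A', 'G']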
Advantages:
➢ It is more efficient than BFS and DFS.
➢ The time complexity of best-first search is much less than that of breadth-first search.
Disadvantages:
➢ It can behave as an unguided depth-first search in the worst case scenario.
➢ It can get stuck in a loop as DFS.
➢ This algorithm is not optimal.
Module-1 Lecture-12
Learning Objective:
9 Searching
9.2 Informed(Heuristic) Search
9.2.1 Greedy best-first Search
9.2.2 A* Search

9.2.2 A* Search
A* is a powerful graph traversal and pathfinding algorithm widely used in artificial intelligence and computer
science. This algorithm is a specialization of best-first search.
It is mainly used to find the shortest path between two nodes in a graph, given the estimated cost of getting
from the current node to the destination node.
A* requires a heuristic function to evaluate the cost of the path that passes through a particular state. The
algorithm is complete if the branching factor is finite and every action has a fixed cost. Its evaluation
function is defined by the following formula:
f(n) = g(n) + h(n)
Where,

f(n): The estimated total cost of the cheapest path from the start state to the goal state through node n.
g(n): The actual cost of the path from the start state to the current node n.
h(n): The estimated (heuristic) cost of the path from the current node n to the goal state.
Algorithm:
Step-1: Place the starting node in the OPEN list.
Step-2: If OPEN list is empty, then stop and return failure.
Step-3: Select the node from the OPEN list which has the smallest value of evaluation function (g+h), if node
n is goal node then return success and stop, otherwise.
Step-4: Expand node n and generate all of its successors, and put n into the closed list. For every successor
n', check whether n' is already in the OPEN or CLOSED list, if not then compute evaluation function for n'
and place into OPEN list.
Step 5: Else if node n' is already in OPEN and CLOSED, then it should be attached to the back pointer which
reflects the lowest g(n') value.
Step 6: Return to Step 2.
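Below is a minimal Python sketch of the algorithm, keeping the OPEN list as a priority queue ordered by f = g + h. The graph encoded here is an assumption reconstructed from the worked example that follows (only the edges and heuristic values it reveals are included; h(A) is arbitrary, since it does not affect the result).

import heapq

def a_star_search(graph, h, start, goal):
    # graph[n] is a list of (neighbour, step_cost) pairs; h maps nodes to
    # heuristic estimates. Returns (path, cost) or None.
    open_list = [(h[start], 0, [start])]        # entries: (f, g, path)
    best_g = {start: 0}                         # cheapest g found so far
    while open_list:
        f, g, path = heapq.heappop(open_list)   # node with smallest f = g + h
        node = path[-1]
        if node == goal:
            return path, g
        for child, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(child, float('inf')):
                best_g[child] = g2
                heapq.heappush(open_list, (g2 + h[child], g2, path + [child]))
    return None

graph = {'A': [('B', 6), ('F', 3)], 'B': [], 'E': [],
         'F': [('G', 1), ('H', 7)], 'G': [('I', 3)], 'H': [],
         'I': [('E', 5), ('H', 2), ('J', 3)], 'J': []}
h = {'A': 10, 'B': 8, 'F': 6, 'G': 5, 'H': 3, 'I': 1, 'E': 3, 'J': 0}
print(a_star_search(graph, h, 'A', 'J'))        # (['A', 'F', 'G', 'I', 'J'], 10)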
Example:-
Find the most cost-effective path from start state A to final state J using the A* algorithm.

Ans:-
The numbers written on edges represent the distance between the nodes.
The numbers written on nodes represent the heuristic value.
Step-1:
We start with node A. Node B and Node F can be reached from node A.
Here we calculate f(B) and f(F) by using A* Algorithm.
f(B) = 6+8=14
f(F) = 3+6=9
As f(F) < f(B), we go to node F.
Path: A → F
Step-2:
Node G and node H can be reached from node F.
Here we calculates f(G) and f(H)
f(G) = (3+1)+5= 9
f(H) = (3+7)+3= 13
As f(G)<f(H), we go to node G.
Path: A → F → G
Step-3:
Node I can be reached from node G.
Here we calculate f(I)
f(I)= (3+1+3)+1= 8
Here we go to node I
Path: A → F → G → I
Step-4:
Node E,H and J can be reached from node I.
We calculate f(E),f(H) and f(J)
f(E) = (3+1+3+5)+3=15
f(H) = (3+1+3+2)+3=12
f(J) = (3+1+3+3)+0 =10
As f(J) is the least, we go to node J.
Path: A → F → G → I → J
Time Complexity: The time complexity of the A* search algorithm is O(b^d).
Space Complexity: The space complexity of the A* search algorithm is O(b^d).

Advantages:
➢ A* often outperforms other search algorithms, expanding fewer nodes when guided by a good heuristic.
➢ A* search algorithm is optimal and complete.
➢ This algorithm can solve very complex problems.
Disadvantages:
➢ It does not always produce the shortest path when the heuristic is inaccurate or inadmissible, as it relies
on heuristics and approximation.
➢ A* search algorithm has some complexity issues.
➢ The main drawback of A* is memory requirement as it keeps all generated nodes in the memory, so it
is not practical for various large-scale problems.
Module-1 Lecture-13
Learning Objective:
10 Constraint Satisfaction Problems (CSP)
10.1 Crypt Arithmetic Problem

10 Constraint Satisfaction Problems (CSP)


Constraint Satisfaction Problems (CSP) play a crucial role in artificial intelligence (AI), as they solve various
problems that require decision-making under certain constraints. CSPs represent a class of problems where
the goal is to find a solution that satisfies a set of constraints. These problems are commonly encountered in
fields like scheduling, planning, resource allocation, and configuration.
A constraint satisfaction problem consists of three components, X, D, and C:
X is a set of variables, {X1,...,Xn}.
D is a set of domains, {D1,...,Dn}, one for each variable.
C is a set of constraints that specify allowable combinations of values.

Popular Problems with CSP


The following problems are some of the popular problems that can be solved using CSP:
1. Crypt Arithmetic Problem (Coding alphabets to numbers.)
2. n-Queens Problem
3. Sudoku
4. Map Coloring
5. Crossword
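As an illustration of the (X, D, C) formulation, here is a minimal Python backtracking sketch for a small map-colouring CSP (item 4 above); the regions and adjacency used are illustrative assumptions.

def backtrack(assignment, variables, domains, neighbours):
    # Depth-first assignment of values to variables, undoing on failure.
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Constraint: adjacent regions must receive different colours.
        if all(assignment.get(n) != value for n in neighbours[var]):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, neighbours)
            if result is not None:
                return result
            del assignment[var]            # undo and try the next value
    return None

X = ['WA', 'NT', 'SA', 'Q']                                  # variables
D = {v: ['red', 'green', 'blue'] for v in X}                 # domains
C = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],            # constraints
     'SA': ['WA', 'NT', 'Q'], 'Q': ['NT', 'SA']}             # (adjacency)
print(backtrack({}, X, D, C))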
10.1 Crypt Arithmetic Problem
A cryptarithmetic problem is a type of constraint satisfaction problem in which the game is about digits and
their unique replacement with alphabets or other symbols. In a cryptarithmetic problem, letters or symbols
stand in for the digits (0-9). The task is to assign a digit to each letter so that the resulting arithmetic is
correct.
Rules:
The rules or constraints on a cryptarithmetic problem are as follows:
• Each letter should be replaced by a unique digit, and each digit should be assigned to only one letter.
• No two letters may have the same value.
• The result should satisfy the predefined arithmetic rules, i.e., 2 + 2 = 4, nothing else.
• Digits should be from 0-9 only.
• A carry of at most one should be generated while performing the addition operation.
• The problem can be solved from either side, i.e., the left-hand side (L.H.S.) or the right-hand side (R.H.S.).
Example:
Let's understand the cryptarithmetic problem and its constraints better with the help of an example:
TO
+ GO
-----------
OUT
Step-1:
The alphabets are replaced by numbers such that all the constraints are satisfied. Initially all spaces are blank.
We start from the leftmost side, where the leftmost symbol is O. It is the letter generated by a carry, and the
carry generated can only be one, so we have O = 1. (When we add n-digit numbers and the result has n+1
digits, the value of the extra leading letter is always 1, as it comes from the carry.)

LETTER  DIGIT
T       -
O       1
G       -
U       -

Step-2:
Next we have T + G = U and O + O = T. We take O + O = T first.
We have O = 1, so T = O + O = 1 + 1 = 2.

LETTER  DIGIT
T       2
O       1
G       -
U       -

Step-3:
Next we have T + G = U.
We have T = 2, so 2 + G = U.
Here we know the U column must generate a carry, so 2 + G must be 10 or greater; we must add to 2 a number
large enough that a carry is generated.
First option: if we take G = 9, then 2 + 9 = 11, giving U = 1. But we cannot choose U = 1, as 1 is already
assigned to O.
Second option: if we take G = 8, then 2 + 8 = 10, giving U = 0. This can be chosen, and we can tally the
answer as follows:

    2 1        (TO)
+   8 1        (GO)
-----------
  1 0 2        (OUT)

LETTER  DIGIT
T       2
O       1
G       8
U       0
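The solution can also be verified mechanically. Below is a small brute-force Python sketch (an illustration, not the classroom method) that tries every assignment of distinct digits to T, O, G, U and prints the ones satisfying TO + GO = OUT.

from itertools import permutations

# Try all assignments of distinct digits to T, O, G, U.
for T, O, G, U in permutations(range(10), 4):
    if T == 0 or G == 0 or O == 0:
        continue                     # no leading zeros in TO, GO, OUT
    if (10*T + O) + (10*G + O) == 100*O + 10*U + T:
        print(f"T={T}, O={O}, G={G}, U={U}: "
              f"{10*T + O} + {10*G + O} = {100*O + 10*U + T}")
# Prints the unique solution: T=2, O=1, G=8, U=0 (21 + 81 = 102).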
Module-1 Lecture-14
Learning Objective:
11 Means-End-Analysis

11 Means-End-Analysis:
• Search strategies can reason either forward or backward, but often one direction or the other must be
chosen for a given problem. For a complex and large problem, however, a mixture of the two directions
is appropriate.
• Such a mixed strategy makes it possible first to solve the major parts of a problem and then to go back
and solve the smaller problems that arise while combining the big parts. Such a technique is called
Means-Ends Analysis.
• Means-Ends Analysis is a problem-solving technique used in artificial intelligence to limit search in AI
programs.
• It is a mixture of backward and forward search techniques.
• The means-ends analysis process centres on the detection of differences between the current state and
the goal state.
How means-ends analysis Works:
The means-ends analysis process can be applied recursively for a problem. It is a strategy to control search in
problem-solving.
Following are the main Steps which describe the working of MEA techniques for solving a problem.
1. First, evaluate the difference between Initial State and final State.
2. Select the various operators which can be applied for each difference.
3. Apply the operator at each difference, which reduces the difference between the current state and goal state.

Operator Subgoaling:
In the means-ends analysis process, we detect the differences between the current state and the goal state.
Once these differences are found, we can apply an operator to reduce them. Sometimes, however, the operator
cannot be applied to the current state. So we create a subproblem of the current state in which the operator
can be applied. This type of backward chaining, in which operators are selected and then subgoals are set up
to establish the preconditions of the operator, is called Operator Subgoaling.
Algorithm of Means-Ends Analysis
Step 1: Compare CURRENT to GOAL, if there are no differences between them then return.
Step-2: Otherwise, select the most important difference and reduce it by doing the following steps until
success or failure occurs:
a) Select a new operator O which is applicable for the current difference, and if there is no such operator,
then signal failure.
b) Attempt to apply operator O to CURRENT. Make a description of two states.
i) O-START, a state in which O’s preconditions are satisfied.
ii) O-RESULT, the state that would result if O were applied in O-START.
c) If
(First-Part MEA (CURRENT, O-START))
And
(LAST-Part MEA (O-Result, GOAL)),
are successful, then signal Success and return the result of combining FIRST-PART, O, and LAST-
PART.
Example:
Let's take an example where we know the initial state and goal state as given below. In this problem, we need
to get the goal state by finding differences between the initial state and goal state and applying operators.

Solution:
To solve the above problem, we will first find the differences between initial states and goal states, and for
each difference, we will generate a new state and will apply the operators. The operators we have for this
problem are:
• Move
• Delete
• Expand
1. Evaluating the initial state: In the first step, we will evaluate the initial state and will compare the initial
and Goal state to find the differences between both states.

2. Applying Delete operator: As we can check the first difference is that in goal state there is no dot symbol
which is present in the initial state, so, first we will apply the Delete operator to remove this dot.

3. Applying Move Operator: After applying the Delete operator, the new state occurs which we will again
compare with goal state. After comparing these states, there is another difference that is the square is outside
the circle, so, we will apply the Move Operator.
4. Applying Expand Operator: Now a new state is generated in the third step, and we will compare this state
with the goal state. After comparing the states there is still one difference which is the size of the square, so,
we will apply Expand operator, and finally, it will generate the goal state.
Module-2 Lecture-15
Learning Objective:
12 Adversarial Search
12.1 Game Playing
12.2 Game Tree

12 Adversarial Search:
• Adversarial search is a game-playing technique in which agents operate in a competitive environment.
• A conflicting goal is given to the agents (multi-agent). These agents compete with one another and try to
• A conflicting goal is given to the agents (multi agent). These agents compete with one another and try to
defeat one another in order to win the game.
• Such conflicting goals give rise to the adversarial search.
• Here, game-playing means discussing those games where human intelligence and logic factor is used,
excluding other factors such as luck factor. Tic-tac-toe, chess, checkers, etc., are such type of games where
no luck factor works, only mind works.
• Mathematically, this search is based on the concept of 'game theory.' According to game theory, a game
is played between two players. To complete the game, one has to win and the other automatically loses.

12.1 Game Playing


• Game playing is an important domain of AI.
• Games do not require much knowledge; the only knowledge we need to provide is the rules, the legal
moves, and the conditions for winning or losing the game.
• The most common search techniques in game playing are:
(i) the Minimax algorithm
(ii) Alpha-Beta pruning

12.2 Game Tree


A game tree is a tree whose nodes are game states and whose edges are the moves made by players. A game
tree involves an initial state, an action/successor function, and a result/utility function.
Optimal Decisions in Games:
We will consider games with two players, whom we will call MAX and MIN. MAX moves first, and then
they take turns moving until the game is over. At the end of the game, points are awarded to the winning player
and penalties are given to the loser. A game can be formally defined as a kind of search problem with the
following components:
➢ The initial state, which includes the board position and identifies the player to move
➢ A successor function, which returns a list of (move, state) pairs, each indicating a legal move and the
resulting state.
➢ A terminal test, which determines when the game is over. States where the game has ended are called
terminal states.
➢ A utility function (also called an objective function or payoff function), which gives a numeric value
for the terminal states. In chess, the outcome is a win, loss, or draw, with values +1,-1, or 0.
The initial state and the legal moves for each side define the game tree for the game. The below figure shows
part of the game tree for tic-tac-toe (noughts and crosses). From the initial state, MAX has nine possible
moves. Play alternates between MAX placing an X and MIN placing an O until we reach leaf nodes
corresponding to terminal states, where one player has three in a row or all the squares are filled. The number
on each leaf node indicates the utility value of the terminal state from the point of view of MAX: high values
are assumed to be good for MAX and bad for MIN (which is how the players get their names). It is MAX's
job to use the search tree (particularly the utility of terminal states) to determine the best move.

Example: Tic-tac-toe game tree: The following figure shows part of the game tree for the tic-tac-toe game.
Following are some key points of the game:
• There are two players MAX and MIN.
• Players have an alternate turn and start with MAX.
• MAX maximizes the result of the game tree.
• MIN minimizes the result.

Fig:- Game Tree of Tic-Tac-Toe game

Explanation:
• From the initial state, MAX has 9 possible moves as he starts first. MAX places X and MIN places O, and
both players play alternately until we reach a leaf node where one player has three in a row or all squares
are filled.
• For each node, both players compute the minimax value, which is the best achievable utility against an
optimal adversary.
• Suppose both the players are well aware of the tic-tac-toe and playing the best play. Each player is doing
his best to prevent another one from winning. MIN is acting against Max in the game.
• So in the game tree, we have a layer of Max, a layer of MIN, and each layer is called as Ply. Max place x,
then MIN puts o to prevent Max from winning, and this game continues until the terminal node.
• In this either MIN wins, MAX wins, or it's a draw. This game-tree is the whole search space of possibilities
that MIN and MAX are playing tic-tac-toe and taking turns alternately.
In a given game tree, the optimal strategy can be determined from the minimax value of each node, written
MINIMAX(n). MAX prefers to move to a state of maximum value and MIN prefers to move to a state of
minimum value, so:
MINIMAX(n) =
  UTILITY(n)                                if n is a terminal state
  max of MINIMAX(s) over successors s of n  if n is a MAX node
  min of MINIMAX(s) over successors s of n  if n is a MIN node
Module-2 Lecture-16
Learning Objective:
13.Mini-Max Algorithm

13.Mini-Max Algorithm
• Mini-max algorithm is a recursive or backtracking algorithm which is used in decision-making and game
theory. It provides an optimal move for the player assuming that opponent is also playing optimally. This
algorithm uses recursion to search through the game-tree.
• This Algorithm computes the minimax decision for the current state.
• In this algorithm two players play the game, one is called MAX and other is called MIN.
• Both players fight it out, each trying to obtain the maximum benefit while leaving the opponent the
minimum benefit.
• The two players are opponents of each other: MAX will select the maximized value and MIN will select
the minimized value.
• The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backtracks up
the tree as the recursion unwinds.

Working of Min-Max Algorithm:


Step-1: In the first step, the algorithm generates the entire game-tree and apply the utility function to get the
utility values for the terminal states.
In the below tree diagram, let's take A is the initial state of the tree. Suppose maximizer takes first turn which
has worst case initial value = -∞, and minimizer will take next turn which has worst-case initial value = +∞.

Step 2: Now, first we find the utility values for the Maximizer. Its initial value is -∞, so we compare each
terminal value with the Maximizer's initial value and determine the higher node values: the maximum among
them all.
For node D: max(-1, -∞) => max(-1, 8) = 8
For node E: max(-3, -∞) => max(-3, -1) = -1
For node F: max(2, -∞) => max(2, 1) = 2
For node G: max(-3, -∞) => max(-3, 4) = 4
Step 3: In the next step, it's a turn for minimizer, so it will compare all nodes value with +∞, and will find the
3rd layer node values.
For node B: min(8, -1) = -1, i.e., min(8, +∞) => min(8, -1) = -1
For node C: min(2, 4) = 2, i.e., min(2, +∞) => min(2, 4) = 2

Step 4: Now it's the Maximizer's turn again, and it will choose the maximum of all node values to find the
value of the root node.
In this game tree there are only 4 layers, so we reach the root node immediately, but in real games there will
be more than 4 layers.
For node A: max(-1, 2) = 2, i.e., max(-1, -∞) => max(-1, 2) = 2
That was the complete workflow of the minimax two-player game.
Time Complexity: As it performs a DFS of the game tree, the time complexity of the minimax algorithm is
O(b^m), where b is the branching factor of the game tree and m is its maximum depth.
Space Complexity: The space complexity of the minimax algorithm is O(bm), similar to DFS.
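A compact recursive Python sketch of minimax over the game tree used in the example above; the dict-based tree encoding and the node names are assumptions for illustration.

def minimax(node, maximizing, tree, utility):
    # Returns the minimax value of a node in a dict-based game tree.
    if node not in tree or not tree[node]:     # terminal node
        return utility[node]
    values = [minimax(child, not maximizing, tree, utility)
              for child in tree[node]]
    return max(values) if maximizing else min(values)

# Terminal utilities -1, 8, -3, -1, 2, 1, -3, 4 as in the worked example.
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': ['d1', 'd2'], 'E': ['e1', 'e2'],
        'F': ['f1', 'f2'], 'G': ['g1', 'g2']}
utility = {'d1': -1, 'd2': 8, 'e1': -3, 'e2': -1,
           'f1': 2, 'f2': 1, 'g1': -3, 'g2': 4}
print(minimax('A', True, tree, utility))       # 2, as computed above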
Module-2 Lecture-17
Learning Objective:
14. Optimal decisions in multiplayer games

14. Optimal decisions in multiplayer games


● Many popular games allow more than two players. Let us examine how to extend the minimax idea to
multiplayer games. This is straightforward from the technical viewpoint, but raises some interesting new
conceptual issues.
● First, we need to replace the single value for each node with a vector of values. For example, in a three-
player game with players A, B, and C, a vector (vA, vB, vC ) is associated with each node.
● For terminal states, this vector gives the utility of the state from each player’s viewpoint. (In two-player,
zero-sum games, the two-element vector can be reduced to a single value because the values are always
opposite.) The simplest way to implement this is to have the UTILITY function return a vector of utilities.
● Now we have to consider non terminal states. Consider the node marked X in the game tree shown in the
figure below.
● In that state, player C chooses what to do. The two choices lead to terminal states with utility vectors
(vA = 1, vB = 2, vC = 6) and (vA = 4, vB = 2, vC = 3). Since 6 is bigger than 3, C should choose the first
move. This means that if state X is reached, subsequent play will lead to a terminal state with utilities
(vA = 1, vB = 2, vC = 6). Hence, the backed-up value of X is this vector.
● The backed-up value of a node n is always the utility vector of the successor state with the highest value
for the player choosing at n.
● Anyone who plays multiplayer games, such as Diplomacy, quickly becomes aware that much more is
going on than in two-player games. Multiplayer games usually involve alliances, whether formal or
informal, among the players. Alliances are made and broken as the game proceeds.
● For example, suppose A and B are in weak positions and C is in a stronger position. Then it is often optimal
for both A and B to attack C rather than each other, lest C destroy each of them individually. In this way,
collaboration emerges from purely selfish behavior. As soon as C weakens under the joint onslaught, the
alliance loses its value, and either A or B could violate the agreement. In some cases, explicit alliances
merely make concrete what would have happened anyway. In other cases, a social stigma attaches to
breaking an alliance, so players must balance the immediate advantage of breaking an alliance against the
long-term disadvantage of being perceived as untrustworthy.
● If the game is not zero-sum, then collaboration can also occur with just two players. Suppose, for example,
that there is a terminal state with utilities vA =1000, vB =1000 and that 1000 is the highest possible utility
for each player. Then the optimal strategy is for both players to do everything possible to reach this state—
that is, the players will automatically cooperate to achieve a mutually desirable goal.

Fig:- The first three plies of a game tree with three players (A, B, C). Each node is labelled with values from
the viewpoint of each player. The best move is marked at the root.
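As a sketch of how the vector-valued backup works, the Python fragment below is an illustration only, not taken from the figure: the single node X and its two terminal utility vectors are the only assumed data, and the players are assumed to move in a fixed rotation A → B → C.

PLAYERS = 3   # players A, B, C, indexed 0, 1, 2, moving in fixed rotation

def multi_minimax(node, player, tree, utility):
    # Terminal state: return its utility vector (vA, vB, vC).
    if node not in tree:
        return utility[node]
    children = [multi_minimax(c, (player + 1) % PLAYERS, tree, utility)
                for c in tree[node]]
    # The player to move picks the successor maximizing their own component.
    return max(children, key=lambda v: v[player])

# Node X from the text: player C (index 2) chooses between two terminals.
tree = {'X': ['t1', 't2']}
utility = {'t1': (1, 2, 6), 't2': (4, 2, 3)}
print(multi_minimax('X', 2, tree, utility))   # (1, 2, 6): C prefers vC = 6 over 3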
Module-2 Lecture-18
Learning Objective:
15. Alpha-Beta Pruning

15. Alpha-Beta Pruning


• Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique for the
minimax algorithm.
• It reduces the computation time by a huge factor. This allows us to search much faster and even go into
deeper levels in the game tree. It cuts off branches (reducing the size of the search tree) in the game tree
which need not be searched because there already exists a better move available.
• Hence there is a technique by which, without checking each node of the game tree, we can compute the correct minimax decision; this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta Algorithm.
• Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree leaves but also entire sub-trees.
• The two-parameter can be defined as:
Alpha: The best (highest-value) choice we have found so far at any point along the path of Maximizer.
The initial value of alpha is -∞.
Beta: The best (lowest-value) choice we have found so far at any point along the path of Minimizer.
The initial value of beta is +∞.
• Alpha-beta pruning applied to a standard minimax tree returns the same move as the standard algorithm does, but it removes all the nodes that do not really affect the final decision yet make the algorithm slow. Hence, by pruning these nodes, it makes the algorithm fast.

Condition for Alpha-beta pruning:

The main condition required for alpha-beta pruning is α ≥ β.


Key points about alpha-beta pruning:

➢ The Max player will only update the value of alpha.


➢ The Min player will only update the value of beta.
➢ While backtracking the tree, the node values will be passed to upper nodes instead of the values of alpha and beta.
➢ We will only pass the alpha, beta values to the child nodes.
Working of Alpha-Beta Pruning:

Let's take an example of two-player search tree to understand the working of Alpha-beta pruning.

Step 1: In the first step, the Max player makes the first move from node A, where α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values to its child D.

Step 2: At node D, the value of α is calculated, as it is Max's turn. The value of α is compared first with 2 and then with 3; max(2, 3) = 3 becomes the value of α at node D, and the node value will also be 3.
Step 3: Now the algorithm backtracks to node B, where the value of β will change, as it is Min's turn. β = +∞ is compared with the available successor node value, i.e. min(+∞, 3) = 3; hence at node B now α = -∞ and β = 3.
In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed down as well.
Step 4: At node E, Max takes its turn, and the value of alpha changes. The current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3. Since α ≥ β, the right successor of E is pruned, the algorithm does not traverse it, and the value at node E is 5.

Step 5: Next, the algorithm again backtracks the tree, from node B to node A. At node A, the value of alpha is changed to the maximum available value, 3, as max(-∞, 3) = 3, with β = +∞. These two values are now passed to the right successor of A, which is node C.

At node C, α = 3 and β = +∞, and the same values are passed on to node F.

Step 6: At node F, the value of α is again compared, first with the left child, 0, giving max(3, 0) = 3, and then with the right child, 1, giving max(3, 1) = 3. So α remains 3, but the node value of F becomes 1.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of beta changes, as it is compared with 1: min(+∞, 1) = 1. Now at C, α = 3 and β = 1, which again satisfies the condition α ≥ β, so the next child of C, which is G, is pruned, and the algorithm does not compute the entire sub-tree of G.
Step 8: C now returns the value 1 to A, where the best value for A is max(3, 1) = 3. The final game tree shows which nodes were computed and which were never computed. Hence the optimal value for the maximizer is 3 for this example.
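The eight steps above translate directly into code. Below is a minimal Python sketch (illustrative, not part of the lecture). The nested-list tree mirrors the example; the leaf values under the branches that end up pruned (E's right child and both leaves of G) are arbitrary placeholders, since the walkthrough never needs them.

import math

def alphabeta(node, is_max, alpha, beta):
    # Leaves are plain numbers; inner nodes are lists of children.
    if isinstance(node, (int, float)):
        return node
    if is_max:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:      # cut-off: Min above will never allow this path
                break              # remaining children are pruned
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:          # cut-off: Max above will never allow this path
            break
    return value

# A(Max) -> B, C(Min) -> D, E, F, G(Max) -> leaves, as in the walkthrough.
# The 9 under E and both leaves of G are placeholders; they get pruned.
A = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
print(alphabeta(A, True, -math.inf, math.inf))   # 3, matching Step 8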
Module-2 Lecture-18
Learning Objective:
16. Logical agent
17. Knowledge based Agent

16. Logical agent:


A logical agent in Artificial Intelligence is an intelligent agent that makes decisions based on logical reasoning.
It uses knowledge representation and inference mechanisms to derive conclusions from available information.
Logical agents operate using propositional logic or first-order logic (FOL) to make deductions and solve
problems systematically.
17. Knowledge based Agent
• The central component of a knowledge-based agent is its knowledge base, or KB. A knowledge base is
a set of sentences. Each sentence is expressed in a language called a knowledge representation language
and represents some assertion about the world. Sometimes we dignify a sentence with the name axiom,
when the sentence is taken as given without being derived from other sentences.
• There must be a way to add new sentences to the knowledge base and a way to query what is known.
The standard names for these operations are TELL and ASK, respectively. Both operations may involve inference, that is, deriving new sentences from old.
• Inference must obey the requirement that when one ASKs a question of the knowledge base, the answer
should follow from what has been told to the knowledge base previously.
• The agent maintains a knowledge base, KB, which may initially contain some background knowledge.
• Each time the agent program is called, it does three things.
➢ First, it TELLs the knowledge base what it perceives.
➢ Second, it ASKs the knowledge base what action it should perform. In the process of
answering this query, extensive reasoning may be done about the current state of the world,
about the outcomes of possible action sequences, and so on.
➢ Third, the agent program TELLs the knowledge base which action was chosen, and the agent
executes the action.
• The details of the representation language are hidden inside three functions that implement the interface
between the sensors and actuators on one side and the core representation and reasoning system on the
other.
• The functions are discussed below:
a. MAKE-PERCEPT-SENTENCE(): This function returns a sentence which tells the perceived
information by the agent at a given time.
b. MAKE-ACTION-QUERY(): This function returns a sentence which tells what action the agent must
take at the current time.
c. MAKE-ACTION-SENTENCE(): This function returns a sentence which tells that an action has been selected and executed.
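The three-step TELL–ASK–TELL loop can be sketched in Python as follows. This is only an illustration: the TrivialKB class and the sentence-building helpers below are hypothetical stand-ins for a real representation language and inference engine.

def make_percept_sentence(percept, t):
    # Sentence asserting what the agent perceived at time t.
    return f"Percept({percept}, {t})"

def make_action_query(t):
    # Query asking which action should be performed at time t.
    return f"BestAction?({t})"

def make_action_sentence(action, t):
    # Sentence recording that `action` was chosen at time t.
    return f"Action({action}, {t})"

class TrivialKB:
    """A placeholder KB: stores sentences and answers queries naively."""
    def __init__(self, background=()):
        self.sentences = list(background)   # initial background knowledge
    def tell(self, sentence):
        self.sentences.append(sentence)
    def ask(self, query):
        # A real KB would run inference here; we just return a dummy action.
        return "NoOp"

class KBAgent:
    def __init__(self, kb):
        self.kb = kb
        self.t = 0                          # time step counter
    def __call__(self, percept):
        self.kb.tell(make_percept_sentence(percept, self.t))   # 1. TELL percept
        action = self.kb.ask(make_action_query(self.t))        # 2. ASK for action
        self.kb.tell(make_action_sentence(action, self.t))     # 3. TELL chosen action
        self.t += 1
        return action

agent = KBAgent(TrivialKB(["background axiom"]))
print(agent("stench"))    # 'NoOp' from the placeholder KB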
Various levels of knowledge-based agent:

1. Knowledge level

Knowledge level is the first level of a knowledge-based agent. At this level, we need to specify what the agent knows and what the agent's goals are; with these specifications, we can fix its behavior. For example, suppose an automated taxi agent needs to go from station A to station B and it knows the way from A to B; this comes at the knowledge level.

2. Implementation level

This is the physical representation of logic and knowledge. At the implementation level, the agent performs actions as per the logical and knowledge levels. At this level, an automated taxi agent actually implements its knowledge and logic so that it can reach the destination.
Module-2 Lecture-18
Learning Objective:
18. Logic
18.1 Propositional Logic
18.1.1 Syntax of propositional logic
18.1.2 Logical Connectives
18.1.3 Truth Table

18. Logic:
In AIML, logic is the formal and structured approach to reasoning that allows machines (computers, robots, or systems) to make decisions, solve problems, or draw conclusions based on a set of rules, facts, or knowledge. Logic helps machines perform reasoning tasks the way humans do, by following certain principles of logic (such as true/false, and/or/not, etc.).
Types of Logic in AIML:
There are two main types of logic used in AIML, these are
1. Propositional Logic (PL)
2. First-Order Logic (FOL) or Predicate Logic

18.1 Propositional Logic (PL) :


• Propositional Logic is also known as Boolean Logic, is the simplest form of logic used in Artificial
Intelligence (AI) to express facts, statements, and conditions.
• It uses propositions (statements) that can be either True (T) or False (F).
Example of Propositional Logic:
a) It is Saturday.
b) The Sun rises from West (False proposition)

c) 13+2= 67(False proposition)

d) 5 is a prime number.

Rules for Writing Propositional Logic:

➢ A proposition is a declarative statement that can either be True (T) or False (F) but not both.
➢ Propositional Logic uses five main logical connectives to connect statements. The connectives are: NOT (Negation), AND (Conjunction), OR (Disjunction), IMPLIES (Implication), and IF AND ONLY IF (Biconditional).
➢ Every propositional logic statement must be clear and unambiguous.
➢ When combining two or more propositions, always use parentheses to avoid confusion.
➢ When combining propositions, you must always follow the truth table to evaluate the logic.
➢ Avoid writing statements that contradict each other.
➢ The implication represents a cause-effect relationship. Always ensure the cause happens before the
effect.
Example (P → Q)
➢ Always write complex propositional logic in standard form.

18.1.1 Syntax of Propositional Logic:


The syntax of propositional logic defines the allowable sentences for the knowledge representation.
There are two types of Propositions:
a) Atomic Propositions /Atomic Sentence
b) Complex propositions/Complex Sentence

a) Atomic Proposition/Atomic Sentence:


Atomic propositions are simple propositions, each consisting of a single proposition symbol. These are sentences that must be either true or false.
Example:
"5 + 2 is 7" is an atomic proposition, as it is a true fact.
"The Sun is cold" is also an atomic proposition, as it is a false fact.
b) Complex propositions/complex Sentence:
Complex propositions are constructed by combining simpler or atomic propositions, using
parentheses and logical connectives.
Example:
"It is raining today, and street is wet."
"Ankit is a doctor, and his clinic is in Mumbai."

18.1.2 Logical Connectives:


Logical connectives are used to connect two simpler propositions or represent a sentence logically.
We can create complex propositions with the help of logical connectives.
There are mainly five connectives, which are given as follows:

1. NOT (Negation): A sentence such as ¬P is called the negation of P. A literal can be either a positive literal or a negative literal.
Example: Let P = It is raining. Then "It is not raining" is written as ¬P.
2. AND(Conjunction): A sentence which has ∧ connective such as, P ∧ Q is called a conjunction.
Example: Rohan is intelligent and hardworking. It can be written as,
P= Rohan is intelligent,
Q= Rohan is hardworking. → P∧ Q.
3. OR(Disjunction): A sentence which has ∨ connective, such as P ∨ Q is called disjunction, where P and
Q are the propositions.
Example: "Ritika is a doctor or Engineer",
Here P= Ritika is Doctor.
Q= Ritika is Engineer, so we can write it as P ∨ Q.
4. IMPLIES(Implication): A sentence such as P → Q, is called an implication. Implications are also
known as if-then rules. It can be represented as
If it is raining, then the street is wet.
Let P= It is raining,
Q= Street is wet, so it is represented as P → Q
5. IF AND ONLY IF (Biconditional): A sentence such as P ↔ Q is a biconditional sentence.
Example: An angle is right if and only if it measures 90 degrees.
Let P = The angle is right,
Q = The angle measures 90 degrees.
It can be represented as P ↔ Q.
Following is the summarized table for Logical Connectives:

Word | Technical Term | Symbol | Meaning | Example
NOT | NEGATION | ¬ | Reverses the truth value | ¬A is True if A is False
AND | CONJUNCTION | ∧ | True if both operands are true | (A ∧ B) is True only if A and B are both True
OR | DISJUNCTION | ∨ | True if at least one operand is true | (A ∨ B) is True if A or B (or both) are True
IMPLIES | IMPLICATION | → | True unless the first operand is true and the second is false | (A → B) is False only if A is True and B is False
IF AND ONLY IF | BICONDITIONAL | ↔ | True if both operands have the same truth value | (A ↔ B) is True if A and B are both True or both False

18.1.3 Truth Table:


A truth table is a table used in logic and Boolean algebra to show all possible truth values of logical
expressions based on their inputs. It lists all possible combinations of truth values for variables and shows the
result of applying logical operators.
For Negation:

P ¬P
T F
F T
For Conjunction:

P Q P∧ Q
T T T
T F F
F T F
F F F

For Disjunction:

P Q P∨ Q
T T T
T F T
F T T
F F F

For Implication:

P Q P→Q
T T T
T F F
F T T
F F T

For Biconditional:

P Q P ↔Q
T T T
T F F
F T F
F F T

Truth table with three propositions:


We can build a proposition composing three propositions P, Q, and R. This truth table is made up of 2³ = 8 rows, as we have taken three proposition symbols.

P Q R (P ∨ Q) ∧ R
TRUE TRUE TRUE TRUE
TRUE TRUE FALSE FALSE
TRUE FALSE TRUE TRUE
TRUE FALSE FALSE FALSE
FALSE TRUE TRUE TRUE
FALSE TRUE FALSE FALSE
FALSE FALSE TRUE FALSE
FALSE FALSE FALSE FALSE
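Such tables are easy to generate mechanically. The short Python sketch below (illustrative only, not part of the notes) enumerates all 2³ assignments and evaluates (P ∨ Q) ∧ R, reproducing the table above.

from itertools import product

print("P      Q      R      (P v Q) ^ R")
for p, q, r in product([True, False], repeat=3):
    value = (p or q) and r        # the compound proposition
    print(f"{p!s:6} {q!s:6} {r!s:6} {value}")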


Module-2 Lecture-19
Learning Objective:
18. Logic
18.1 Propositional Logic
18.1.4 Precedence of Logical Connectives
18.1.5 Evaluation Rules
18.1.6 Logical Equivalence
18.1.7 Equivalence Laws
18.1.8 Limitations of Propositional logic
18.1.9 Translate English sentences into Propositional Logic

18.1.4 Precedence of Logical Connectives:


The precedence of logical connectives determines the order in which operations are evaluated in a logical
expression, similar to operator precedence in arithmetic.

18.1.5 Evaluation Rules:


➢ Operators with higher precedence are evaluated first unless parentheses dictate otherwise.
➢ Parentheses override precedence, ensuring that the enclosed operations are computed first.

Precedence Order (Highest to Lowest)

Precedence Operator Name


1 (Highest) ¬ NOT (Negation)
2 ∧ AND (Conjunction)
3 ∨ OR (Disjunction)
4 → IMPLICATION (If-Then)
5 (Lowest) ↔ BICONDITIONAL (If and Only If)
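Conveniently, Python's Boolean operators follow the same relative order for the first three connectives (not binds tighter than and, which binds tighter than or), so an unparenthesized formula parses the same way; implication and biconditional have no Python operator and must be spelled out. A small illustrative check (the values chosen for P, Q, R are arbitrary):

P, Q, R = False, True, True
# Parsed as (¬P) ∨ (Q ∧ R), because not > and > or in precedence.
print(not P or Q and R)        # True
# Explicit parentheses force a different grouping and a different result.
print(not (P or (Q and R)))    # False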

18.1.6 Logical Equivalence:


Logical equivalence means that two logical expressions always produce the same truth values for all
possible inputs. If two statements A and B are logically equivalent, we write:

A≅ B

This means that A and B have the same truth table.


Example:

1. Prove ¬(A ∨ B) ≅ (¬A ∧ ¬B)

Ans:- ¬(A ∨ B) ≅ (¬A ∧ ¬B)

This states that NOT (A OR B) is logically equivalent to (NOT A AND NOT B).

A B A∨B ¬(A∨B) ¬A ¬B ¬A ∧ ¬B
T T T F F F F
T F T F F T F
F T T F T F F
F F F T T T T

Since the columns for ¬(A∨B) and ¬A ∧ ¬B are identical, the two expressions are logically equivalent.

Tautologies:
A proposition P is a tautology if it is true under all circumstances, i.e. its truth table contains only T in the final column.

Example: Prove that the statement (P → Q) ↔ (∼Q → ∼P) is a tautology.

P Q P→Q ∼Q ∼P ∼Q→∼P (P→Q)↔(∼Q→∼P)

T T T F F T T

T F F T F F T

F T T F T T T

F F T T T T T
Contradiction:
A statement that is always false is known as a contradiction.

Example: Show that the statement P ∧∼P is a contradiction.

P ∼P P ∧∼P

T F F

F T F
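All three notions — equivalence, tautology, and contradiction — can be checked mechanically by enumerating every truth assignment. The Python sketch below is illustrative (the helper names are our own, not standard library functions); it verifies the three examples above.

from itertools import product

def equivalent(f, g, n):
    # True iff f and g agree on all 2^n truth assignments.
    return all(f(*vs) == g(*vs) for vs in product([True, False], repeat=n))

def tautology(f, n):
    # True iff f is true under every assignment.
    return all(f(*vs) for vs in product([True, False], repeat=n))

def contradiction(f, n):
    # True iff f is false under every assignment.
    return not any(f(*vs) for vs in product([True, False], repeat=n))

def implies(a, b):
    return (not a) or b

# De Morgan: ¬(A ∨ B) ≅ ¬A ∧ ¬B
print(equivalent(lambda a, b: not (a or b),
                 lambda a, b: (not a) and (not b), 2))   # True
# (P → Q) ↔ (¬Q → ¬P) is a tautology
print(tautology(lambda p, q: implies(p, q) == implies(not q, not p), 2))  # True
# P ∧ ¬P is a contradiction
print(contradiction(lambda p: p and (not p), 1))         # True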

18.1.7 Equivalence Laws:


Equivalence Laws or Relations are used to reduce or simplify a given well formed formula or to derive a new
formula from the existing formula. These laws can be verified using the truth table approach.

Some of the important equivalence laws are given below.

Sl. No Name of Relation Equivalence Relations

1. Commutative Law A∨ B≅ B ∨A
A∧ B≅ B ∧A

2. Associative Law A ∨ (B ∨ C) ≅ (A ∨ B) ∨ C
A ∧ (B∧C) ≅ (A ∧ B) ∧C

3. Double Negation Law ¬(¬A) ≅ A

4. Distributive Laws A ∨ (B ∧ C) ≅ (A ∨ B) ∧ (A ∨ C)
A ∧ (B ∨ C) ≅ (A ∧ B) ∨ (A ∧ C)

5. De Morgan’s Laws ¬(A ∨ B) ≅ ¬A ∧ ¬B
¬(A ∧ B) ≅ ¬A ∨ ¬B

6. Absorption Laws A ∨ (A ∧ B) ≅ A
A ∧ (A ∨ B) ≅ A
A ∨ (¬A ∧ B) ≅ A ∨ B
A ∧ (¬A ∨ B) ≅ A ∧ B
7. Idempotence Law A ∨A ≅A
A ∧ A≅ A

8. Excluded Middle Law A ∨ ¬A≅ T(True)

9. Contradiction Law A ∧ ¬A ≅ F(False)

10. Commonly Used Equivalence Relations
A ∨ F ≅ A
A ∨ T ≅ T
A ∧ T ≅ A
A ∧ F ≅ F
A → B ≅ ¬A ∨ B
A ↔ B ≅ (A → B) ∧ (B → A) ≅ (A ∧ B) ∨ (¬A ∧ ¬B)
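Each of these laws can be spot-checked with the brute-force checker sketched in the previous section (assuming the equivalent and implies helpers defined there are still in scope); for example:

# Absorption: A ∨ (¬A ∧ B) ≅ A ∨ B
print(equivalent(lambda a, b: a or ((not a) and b),
                 lambda a, b: a or b, 2))                # True
# Implication rewrite: A → B ≅ ¬A ∨ B
print(equivalent(lambda a, b: implies(a, b),
                 lambda a, b: (not a) or b, 2))          # True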

18.1.8 Limitations of Propositional logic:



We cannot represent relations like ALL, some, or none with propositional logic.

Example:

a. All the girls are intelligent.

b. Some apples are sweet.

Propositional logic has limited expressive power.


In propositional logic, we cannot describe statements in terms of their properties or logical relationships.

18.1.9 Translate English sentences into Propositional Logic


Example:
a. Let p = It is raining
b. Let q = Mary is sick
c. Let t = Bob stayed up late last night
d. Let r = Paris is the capital of France
e. Let s = John is a loud-mouth
Translating Negation
a. It isn’t raining

¬p

b. It is not the case that Mary isn’t sick

¬ ¬q

c. Paris is not the capital of France


¬r

d. John is in no way a loud-mouth

¬s

e. Bob did not stay up late last night

¬t

Translating Conjunction
a. It is raining and Mary is sick

(p ∧ q)

b. Bob stayed up late last night and John is a loud-mouth

(t ∧ s)

c. Paris isn’t the capital of France and It isn’t raining

(¬r ∧ ¬p)

d. John is a loud-mouth but Mary isn’t sick

(s ∧ ¬q)

e. It is not the case that it is raining and Mary is sick

translation 1: It is not the case that both it is raining and Mary is sick

¬(p ∧ q)

translation 2: Mary is sick and it is not the case that it is raining

(¬p ∧ q)

Translating Disjunction
a. It is raining or Mary is sick

(p ∨ q)

b. Paris is the capital of France and it is raining or John is a loud-mouth

((r ∧ p) ∨ s)

or (r ∧ (p ∨ s))
c. Mary is sick or Mary isn’t sick

(q ∨ ¬q)

d. John is a loud-mouth or Mary is sick or it is raining

((s ∨ q) ∨ p)

or (s ∨ (q ∨ p))

e. It is not the case that Mary is sick or Bob stayed up late last night

¬(q ∨ t)

Translating Implication
a. If it is raining, then Mary is sick
(p → q)

b. It is raining, when John is a loud-mouth

(s → p)
c. Mary is sick and it is raining implies that Bob stayed up late last night

((q ∧ p) → t)

d. It is not the case that if it is raining then John isn’t a loud-mouth

¬(p → ¬s)

Translating Equivalence or Biconditional Statement


a. It is raining if and only if Mary is sick

(p ↔ q)
b. If Mary is sick then it is raining, and vice versa

((p → q) ∧ (q → p))

or (p ↔ q)

c. It is raining is equivalent to John is a loud-mouth

(p ↔ s)

d. It is raining is not equivalent to John is a loud-mouth

¬(p ↔ s)
