
AIML

NOTES
Unit 1

Ms. Deepika Yadav


Assistant Professor
CSE Department
Agents in Artificial Intelligence
An AI system can be defined as the study of a rational agent and its environment. The agents sense the environment through sensors and act on the environment through actuators. An AI agent may have mental properties such as knowledge, beliefs, and intentions.

What is an Agent?
An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting. An agent can be:

o Human agent: A human agent has eyes, ears, and other organs that work as sensors, and hands, legs, and the vocal tract that work as actuators.
o Robotic agent: A robotic agent can have cameras and infrared range finders as sensors and various motors as actuators.
o Software agent: A software agent receives keystrokes and file contents as sensory input, acts on those inputs, and displays output on the screen.

Hence, the world around us is full of agents such as thermostats, cellphones, and cameras; even we ourselves are agents.

Before moving forward, we should first know about sensors, effectors, and actuators.

Sensor: A sensor is a device that detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.

Actuators: Actuators are the components of machines that convert energy into motion. Actuators are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.

Effectors: Effectors are the devices that affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, and display screens.
Intelligent Agents:
An intelligent agent is an autonomous entity that acts upon an environment using sensors and actuators to achieve goals. An intelligent agent may learn from the environment to achieve its goals. A thermostat is an example of an intelligent agent.

Following are the main four rules for an AI agent:

o Rule 1: An AI agent must have the ability to perceive the environment.
o Rule 2: The observations must be used to make decisions.
o Rule 3: Decisions should result in an action.
o Rule 4: The action taken by an AI agent must be a rational action.

Rational Agent:
A rational agent is an agent that has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions.

A rational agent is said to do the right thing. AI is about creating rational agents for use in game theory and decision theory in various real-world scenarios.

For an AI agent, rational action is most important because in reinforcement learning, the agent gets a positive reward for each best possible action and a negative reward for each wrong action.

Note: Rational agents in AI are very similar to intelligent agents.


Rationality:
The rationality of an agent is measured by its performance measure. Rationality can be judged on the basis of the following points:

o The performance measure, which defines the success criterion.
o The agent's prior knowledge of its environment.
o The best possible actions that the agent can perform.
o The sequence of percepts.

Note: Rationality differs from omniscience because an omniscient agent knows the actual outcome of its actions and acts accordingly, which is not possible in reality.

Structure of an AI Agent
The task of AI is to design an agent program which implements the agent function. The
structure of an intelligent agent is a combination of architecture and agent program. It
can be viewed as:

1. Agent = Architecture + Agent program

Following are the main three terms involved in the structure of an AI agent:

Architecture: Architecture is the machinery that the AI agent executes on.

Agent Function: The agent function maps a percept sequence to an action:

1. f : P* → A

Agent program: The agent program is an implementation of the agent function. The agent program executes on the physical architecture to produce the function f.
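To make the mapping concrete, here is a minimal Python sketch of a table-driven agent program, assuming a hypothetical two-location vacuum world; the percept table and names are illustrative, not part of the notes:

```python
# A minimal table-driven agent program: the table realizes the agent
# function f : P* -> A for a hypothetical two-location vacuum world.
TABLE = {
    ("A", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
}

def agent_program(percept):
    """Maps a percept (location, status) to an action."""
    return TABLE[percept]

print(agent_program(("A", "Dirty")))   # -> Suck
```

The architecture is then whatever machinery (robot hardware or a software runtime) feeds percepts to agent_program and carries out the returned actions.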

PEAS Representation
PEAS is a model on which an AI agent works. When we define an AI agent or rational agent, we can group its properties under the PEAS representation model. It is made up of four terms:

o P: Performance measure
o E: Environment
o A: Actuators
o S: Sensors
Here performance measure is the objective for the success of an agent's behavior.

PEAS for self-driving cars:

Consider a self-driving car; its PEAS representation will be:

Performance: Safety, time, legal drive, comfort

Environment: Roads, other vehicles, road signs, pedestrian

Actuators: Steering, accelerator, brake, signal, horn

Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar.

Example of Agents with their PEAS representation

1. Medical Diagnose
o Performance measure: Healthy patient, minimized cost
o Environment: Patient, hospital, staff
o Actuators: Tests, treatments
o Sensors: Keyboard (entry of symptoms)

2. Vacuum Cleaner
o Performance measure: Cleanness, efficiency, battery life, security
o Environment: Room, table, wood floor, carpet, various obstacles
o Actuators: Wheels, brushes, vacuum extractor
o Sensors: Camera, dirt detection sensor, cliff sensor, bump sensor, infrared wall sensor

3. Part-picking Robot
o Performance measure: Percentage of parts in correct bins
o Environment: Conveyor belt with parts, bins
o Actuators: Jointed arms, hand
o Sensors: Camera, joint angle sensors

Turing Test in AI
In 1950, Alan Turing introduced a test to check whether a machine can think like a human or not; this test is known as the Turing Test. In this test, Turing proposed that a computer can be said to be intelligent if it can mimic human responses under specific conditions.

The Turing Test was introduced in Turing's 1950 paper, "Computing Machinery and Intelligence," which considered the question, "Can machines think?"

The Turing test is based on a party game, the "imitation game," with some modifications. This game involves three players: one player is a computer, another is a human responder, and the third is a human interrogator, who is isolated from the other two players and whose job is to identify which of the two is the machine.

Consider: Player A is a computer, Player B is a human, and Player C is the interrogator. The interrogator is aware that one of them is a machine but needs to identify which one on the basis of questions and their responses.

The conversation between all players is via keyboard and screen, so the result does not depend on the machine's ability to render words as speech.

The test result does not depend on getting each answer correct, but only on how closely the responses resemble human answers. The computer is permitted to do everything possible to force a wrong identification by the interrogator.

The questions and answers can be like:

Interrogator: Are you a computer?

PlayerA (Computer): No

Interrogator: Multiply two large numbers such as (256896489*456725896)

Player A: Long pause and give the wrong answer.

In this game, if the interrogator is not able to identify which is the machine and which is the human, then the computer passes the test successfully, and the machine is said to be intelligent and able to think like a human.

In 1991, the New York businessman Hugh Loebner announced a prize competition, offering a $100,000 prize for the first computer to pass the Turing test. However, to date, no AI program has come close to passing an undiluted Turing test.

Chatbots to attempt the Turing test:


ELIZA: ELIZA was a natural language processing computer program created by Joseph Weizenbaum. It was created to demonstrate the possibility of communication between machines and humans. It was one of the first chatterbots to attempt the Turing Test.

Parry: Parry was a chatterbot created by Kenneth Colby in 1972. Parry was designed to simulate a person with paranoid schizophrenia (a common chronic mental disorder). Parry was described as "ELIZA with attitude." Parry was tested using a variation of the Turing Test in the early 1970s.

Eugene Goostman: Eugene Goostman was a chatbot developed in Saint Petersburg in 2001. The bot has competed in a number of Turing tests. In June 2012, at an event promoted as the largest-ever Turing test contest, Goostman won the competition, convincing 29% of the judges that it was human. Goostman was portrayed as a 13-year-old virtual boy.
The Chinese Room Argument:
Many philosophers have disagreed with the whole concept of Artificial Intelligence. The most famous argument on this list is the "Chinese Room."

In 1980, John Searle presented the "Chinese Room" thought experiment in his paper "Minds, Brains, and Programs," arguing against the validity of the Turing Test. According to his argument, "Programming a computer may make it appear to understand a language, but it will not produce real understanding of language or consciousness in a computer."

He argued that machines such as ELIZA and Parry could easily pass the Turing test by manipulating keywords and symbols, but they had no real understanding of language, so this cannot be described as the "thinking" capability of a machine in the way a human thinks.

Features required for a machine to pass the Turing test:


o Natural language processing: NLP is required to communicate with the interrogator in a general human language such as English.
o Knowledge representation: To store and retrieve information during the test.
o Automated reasoning: To use the previously stored information to answer the questions.
o Machine learning: To adapt to new changes and detect generalized patterns.
o Vision (for the total Turing test): To recognize the interrogator's actions and other objects during the test.
o Motor control (for the total Turing test): To act upon objects if requested.
Problem Solving Techniques in AI
The process of problem-solving is frequently used to achieve objectives or resolve particular situations. In computer science, the term "problem-solving" refers to artificial intelligence methods, which may include formulating problems appropriately, using algorithms, and conducting root-cause analyses to identify reasonable solutions. Artificial intelligence (AI) problem-solving often involves investigating potential solutions through reasoning techniques, making use of polynomial and differential equations, and carrying them out using modelling frameworks. The same problem may have a number of solutions, each accomplished by a different algorithm; certain problems also have unique remedies. Everything depends on how the particular situation is framed.

Cases involving Artificial Intelligence Issues


Artificial intelligence is used by programmers all around the world to automate systems for effective resource and time management. Games and puzzles pose some of the most frequent problems in daily life, and AI algorithms can tackle them effectively. Various problem-solving methods are used to create solutions for a variety of complex puzzles, including mathematical challenges such as crypto-arithmetic and magic squares, logical puzzles such as Boolean formulae and N-Queens, and well-known games like Sudoku and Chess. The following are some of the most common problems that artificial intelligence has addressed:

o Chess
o N-Queen problem
o Tower of Hanoi Problem
o Travelling Salesman Problem
o Water-Jug Problem
Problem formulation in artificial intelligence (AI) is the process of
determining what actions and states to consider to achieve a goal. It's a step in problem
definition that can be complex if there are multiple ways to reach the goal.

Here are some components of problem formulation:

 Initial state: The state from which the AI agent starts toward the goal
 Actions: The set of all possible actions available from a given state
 Transition model: Describes the state that results from applying an action in a given state

 The process of looking for a sequence of actions that reaches the goal is called
search. A search algorithm takes a problem as input and returns a solution in the form
of an action sequence.
 Problem formulation can be complicated and may cause confusion and reduced efficiency. Some complications include too many steps, too many paths, confusion, and reduced efficiency.

Search Algorithms in Artificial Intelligence


Problem-solving agents:
In Artificial Intelligence, search techniques are universal problem-solving methods. Rational agents or problem-solving agents in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result. Problem-solving agents are goal-based agents that use atomic representations. In this topic, we will learn various problem-solving search algorithms.

Search Algorithm Terminologies:


o Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can have three main factors:
a. Search Space: The search space represents the set of possible solutions that a system may have.
b. Start State: The state from which the agent begins the search.
c. Goal test: A function which observes the current state and returns whether the goal state is achieved or not.
o Search tree: A tree representation of a search problem is called a search tree. The root of the search tree is the root node, which corresponds to the initial state.
o Actions: A description of all the actions available to the agent.
o Transition model: A description of what each action does; it can be represented as a transition model.
o Path Cost: A function which assigns a numeric cost to each path.
o Solution: An action sequence which leads from the start node to the goal node.
o Optimal Solution: A solution that has the lowest cost among all solutions.

Properties of Search Algorithms:


Following are the four essential properties of search algorithms, used to compare their efficiency:

Completeness: A search algorithm is said to be complete if it is guaranteed to return a solution whenever at least one solution exists for any random input.

Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all solutions, then it is said to be an optimal solution.

Time Complexity: Time complexity is a measure of the time an algorithm takes to complete its task.

Space Complexity: The maximum storage space required at any point during the search, as a function of the complexity of the problem.
Types of search algorithms
Based on the search problems we can classify the search algorithms into
uninformed (Blind search) search and informed search (Heuristic search)
algorithms.

Uninformed/Blind Search:
Uninformed search does not use any domain knowledge, such as the closeness or location of the goal. It operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify leaf and goal nodes. Uninformed search traverses the search tree without any information about the search space beyond the initial state, the operators, and the goal test, so it is also called blind search. It examines each node of the tree until it reaches the goal node.

It can be divided into five main types:

o Breadth-first search
o Uniform cost search
o Depth-first search
o Iterative deepening depth-first search
o Bidirectional Search

Informed Search
Informed search algorithms use domain knowledge. In an informed search, problem
information is available which can guide the search. Informed search strategies can find
a solution more efficiently than an uninformed search strategy. Informed search is also
called a Heuristic search.

A heuristic is a technique that is not always guaranteed to find the best solution, but is guaranteed to find a good solution in reasonable time.

Informed search can solve complex problems that could not be solved in another way.

An example problem for informed search algorithms is the travelling salesman problem.

1. Greedy Search
2. A* Search

Uninformed Search Algorithms


Uninformed search is a class of general-purpose search algorithms which operates
in brute force-way. Uninformed search algorithms do not have additional
information about state or search space other than how to traverse the tree, so it is
also called blind search.

Following are the various types of uninformed search algorithms:

1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search

1. Breadth-first Search:
o Breadth-first search is the most common search strategy for traversing a tree or graph. This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
o The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
o The breadth-first search algorithm is an example of a general-graph search algorithm.
o Breadth-first search is implemented using a FIFO queue data structure.

Advantages:

o BFS will provide a solution if any solution exists.
o If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e., the one requiring the least number of steps.

Disadvantages:

o It requires lots of memory since each level of the tree must be saved into memory to
expand the next level.
o BFS needs lots of time if the solution is far away from the root node.

Example:
In the tree structure below, we show the traversal of the tree using the BFS algorithm from the root node S to the goal node K. The BFS algorithm traverses in layers, so it will follow the path shown by the dotted arrow, and the traversed path will be:

1. S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K

Time Complexity: The time complexity of the BFS algorithm can be obtained from the number of nodes traversed in BFS until the shallowest node, where d is the depth of the shallowest solution and b is the branching factor at every state:

T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)

Space Complexity: The space complexity of BFS is given by the memory size of the frontier, which is O(b^d).
Completeness: BFS is complete, which means that if the shallowest goal node is at some finite depth, then BFS will find a solution.

Optimality: BFS is optimal if the path cost is a non-decreasing function of the depth of the node.
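The following is a minimal Python sketch of BFS using a FIFO queue, as described above; the example graph is hypothetical, not the figure from the notes:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand all nodes of one level before the next."""
    frontier = deque([[start]])      # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path              # shallowest path to the goal
        for succ in graph.get(node, []):
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])
    return None                      # no solution exists

graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E"], "E": ["K"]}
print(bfs(graph, "S", "K"))          # ['S', 'B', 'E', 'K']
```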

2. Depth-first Search
o Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
o It is called depth-first search because it starts from the root node and follows each path to its greatest depth before moving to the next path.
o DFS uses a stack data structure for its implementation.
o The process of the DFS algorithm is similar to the BFS algorithm.

Note: Backtracking is an algorithm technique for finding all possible solutions using recursion.

Advantage:

o DFS requires much less memory, as it only needs to store the stack of nodes on the path from the root node to the current node.
o It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).

Disadvantage:

o There is the possibility that many states keep re-occurring, and there is no guarantee of finding the solution.
o The DFS algorithm searches deep down and may sometimes enter an infinite loop.

Example:
In the search tree above, we show the flow of depth-first search, which follows the order:

Root node ---> Left node ---> Right node.

It starts searching from root node S and traverses A, then B, then D and E. After traversing E, it backtracks the tree, as E has no other successor and the goal node has not yet been found. After backtracking, it traverses node C and then G, where it terminates, as it has found the goal node.

Completeness: The DFS algorithm is complete within a finite state space, as it will expand every node within a limited search tree.

Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm. It is given by:

T(n) = 1 + n + n^2 + n^3 + ... + n^m = O(n^m)

where m is the maximum depth of any node, which can be much larger than d (the depth of the shallowest solution).

Space Complexity: The DFS algorithm needs to store only a single path from the root node, so the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).

Optimality: The DFS algorithm is non-optimal, as it may take a large number of steps or incur a high cost to reach the goal node.
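A minimal recursive DFS sketch in Python, following the description above (the graph is again hypothetical):

```python
def dfs(graph, start, goal, visited=None):
    """Depth-first search: follow each path to its greatest depth first."""
    if visited is None:
        visited = set()
    visited.add(start)
    if start == goal:
        return [start]
    for succ in graph.get(start, []):
        if succ not in visited:
            path = dfs(graph, succ, goal, visited)
            if path:
                return [start] + path
    return None                      # dead end: backtrack

graph = {"S": ["A", "C"], "A": ["B"], "B": ["D", "E"], "C": ["G"]}
print(dfs(graph, "S", "G"))          # ['S', 'C', 'G'] after backtracking from A's subtree
```

Recursion implicitly provides the stack that the notes mention; an explicit stack gives an equivalent iterative version.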

3. Depth-Limited Search Algorithm:


A depth-limited search algorithm is similar to depth-first search with a predetermined limit. Depth-limited search can overcome the drawback of the infinite path in depth-first search. In this algorithm, the node at the depth limit is treated as if it has no further successor nodes.

Depth-limited search can be terminated with two Conditions of failure:

o Standard failure value: It indicates that problem does not have any solution.
o Cutoff failure value: It defines no solution for the problem within a given depth limit.

Advantages:

Depth-limited search is Memory efficient.

Disadvantages:

o Depth-limited search also has the disadvantage of incompleteness.
o It may not be optimal if the problem has more than one solution.

Completeness: The DLS algorithm is complete if the solution is above the depth limit.

Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).

Space Complexity: The space complexity of the DLS algorithm is O(b×ℓ).

Optimality: Depth-limited search can be viewed as a special case of DFS, and it is also not optimal, even if ℓ > d.
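A sketch of depth-limited search with the two failure conditions distinguished (standard failure returns None; cutoff failure returns a sentinel value):

```python
CUTOFF = "cutoff"        # cutoff failure: no solution within the depth limit

def dls(graph, node, goal, limit):
    """Depth-limited DFS; nodes at the limit are treated as having no successors."""
    if node == goal:
        return [node]
    if limit == 0:
        return CUTOFF
    cutoff_occurred = False
    for succ in graph.get(node, []):
        result = dls(graph, succ, goal, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return CUTOFF if cutoff_occurred else None   # None = standard failure
```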
4. Uniform-cost Search Algorithm:
Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This algorithm comes into play when a different cost is available for each edge. The primary goal of uniform-cost search is to find a path to the goal node which has the lowest cumulative cost. Uniform-cost search expands nodes according to their path cost from the root node. It can be used to solve any graph/tree where the optimal cost is in demand. The uniform-cost search algorithm is implemented with a priority queue, which gives maximum priority to the lowest cumulative cost. Uniform-cost search is equivalent to the BFS algorithm if the path cost of all edges is the same.

Advantages:

o Uniform cost search is optimal because at every state the path with the least cost is
chosen.

Disadvantages:

o It does not care about the number of steps involved in the search; it is only concerned with path cost, due to which this algorithm may get stuck in an infinite loop.

Completeness:

Uniform-cost search is complete, such as if there is a solution, UCS will find it.

Time Complexity:
Let C* be the cost of the optimal solution and ε the minimum cost of each step toward the goal node. Then the number of steps is C*/ε + 1 (we add +1 because we start from state 0 and end at C*/ε).

Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).

Space Complexity:

The same logic applies to space complexity, so the worst-case space complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).

Optimal:

Uniform-cost search is always optimal as it only selects a path with the lowest path cost.
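A uniform-cost search sketch using Python's heapq as the priority queue; the weighted graph below is hypothetical:

```python
import heapq

def ucs(graph, start, goal):
    """Uniform-cost search: always expand the frontier node with the
    lowest cumulative path cost. graph: node -> [(step_cost, successor)]."""
    frontier = [(0, start, [start])]     # priority queue ordered by path cost
    best = {}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in best and best[node] <= cost:
            continue                     # already expanded more cheaply
        best[node] = cost
        for step, succ in graph.get(node, []):
            heapq.heappush(frontier, (cost + step, succ, path + [succ]))
    return None

graph = {"S": [(1, "A"), (5, "B")], "A": [(2, "B")], "B": [(1, "G")]}
print(ucs(graph, "S", "G"))              # (4, ['S', 'A', 'B', 'G'])
```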

5. Iterative deepening depth-first Search:


The iterative deepening algorithm is a combination of DFS and BFS algorithms. This
search algorithm finds out the best depth limit and does it by gradually increasing the
limit until a goal is found.

This algorithm performs depth-first search up to a certain "depth limit", and it keeps
increasing the depth limit after each iteration until the goal node is found.

This Search algorithm combines the benefits of Breadth-first search's fast search and
depth-first search's memory efficiency.

The iterative deepening search algorithm is a useful uninformed search when the search space is large and the depth of the goal node is unknown.

Advantages:

o It combines the benefits of BFS and DFS search algorithm in terms of fast search and
memory efficiency.

Disadvantages:

o The main drawback of IDDFS is that it repeats all the work of the previous phase.

Example:
The following tree structure shows iterative deepening depth-first search. The IDDFS algorithm performs successive iterations until it finds the goal node. The iterations performed by the algorithm are:

1st Iteration ----> A
2nd Iteration ----> A, B, C
3rd Iteration ----> A, B, D, E, C, F, G
4th Iteration ----> A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.

Completeness:

This algorithm is complete if the branching factor is finite.

Time Complexity:

If b is the branching factor and d is the depth of the goal, the worst-case time complexity is O(b^d).

Space Complexity:

The space complexity of IDDFS is O(bd).

Optimality:

The IDDFS algorithm is optimal if the path cost is a non-decreasing function of the depth of the node.
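IDDFS can be sketched directly on top of the depth-limited search shown earlier: run dls with limits 0, 1, 2, ... until the result is no longer a cutoff (this reuses the dls and CUTOFF definitions from the depth-limited search sketch above):

```python
def iddfs(graph, start, goal, max_depth=50):
    """Iterative deepening: depth-limited search with a growing limit."""
    for limit in range(max_depth + 1):
        result = dls(graph, start, goal, limit)   # from the DLS sketch above
        if result != CUTOFF:
            return result     # a path, or None if no solution exists at all
    return None
```

Each iteration repeats the work of the previous one, which is the drawback noted above, but the repeated shallow levels are cheap compared to the deepest level.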

6. Bidirectional Search Algorithm:


The bidirectional search algorithm runs two simultaneous searches, one from the initial state, called forward search, and the other from the goal node, called backward search, to find the goal node. Bidirectional search replaces a single search graph with two small subgraphs: one starts the search from the initial vertex and the other starts from the goal vertex. The search stops when the two graphs intersect each other.

Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.

Advantages:

o Bidirectional search is fast.
o Bidirectional search requires less memory.

Disadvantages:

o Implementation of the bidirectional search tree is difficult.
o In bidirectional search, one should know the goal state in advance.

Example:
In the search tree below, the bidirectional search algorithm is applied. This algorithm divides one graph/tree into two sub-graphs. It starts traversing from node 1 in the forward direction and from goal node 16 in the backward direction.

The algorithm terminates at node 9, where the two searches meet.

Completeness: Bidirectional search is complete if we use BFS in both searches.

Time Complexity: The time complexity of bidirectional search using BFS is O(b^(d/2)), since each search only needs to reach half the depth.

Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).

Optimality: Bidirectional search is optimal.
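A sketch of bidirectional search using BFS on both frontiers, assuming an undirected graph so that the backward search can follow edges in reverse; the graph format matches the earlier BFS sketch:

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Grow a BFS frontier from each end; stop when they intersect."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other):
        node = frontier.popleft()
        for succ in graph.get(node, []):
            if succ not in parents:
                parents[succ] = node
                frontier.append(succ)
                if succ in other:          # the two searches meet here
                    return succ
        return None

    while frontier_f and frontier_b:
        meet = expand(frontier_f, parents_f, parents_b) or \
               expand(frontier_b, parents_b, parents_f)
        if meet:                            # stitch the two half-paths together
            path, n = [], meet
            while n is not None:
                path.append(n); n = parents_f[n]
            path.reverse()
            n = parents_b[meet]
            while n is not None:
                path.append(n); n = parents_b[n]
            return path
    return None
```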

What is Heuristics?
A heuristic is a technique used to solve a problem faster than classic methods. These techniques are used to find an approximate solution when classical methods fail to find an exact one in reasonable time. Heuristics are problem-solving techniques that result in practical and quick solutions.

Heuristics are strategies that are derived from past experience with similar problems.
Heuristics use practical methods and shortcuts used to produce the solutions that may
or may not be optimal, but those solutions are sufficient in a given limited timeframe.
History
Psychologists Daniel Kahneman and Amos Tversky developed the study of heuristics in human decision-making in the 1970s and 1980s. However, the concept was first introduced by the Nobel laureate Herbert A. Simon, whose primary object of research was problem-solving.

Why do we need heuristics?


Heuristics are used in situations where a short-term solution is required. When facing complex situations with limited resources and time, heuristics can help companies make quick decisions through shortcuts and approximate calculations. Most heuristic methods involve mental shortcuts based on past experience.

The heuristic method might not always provide the finest solution, but it is assured of finding a good solution in reasonable time.

Based on context, different heuristic methods correlate with the problem's scope. The most common heuristic methods are trial and error, guesswork, the process of elimination, and historical data analysis. These methods use readily available information that is not specific to the problem but is broadly applicable. They can include the representativeness, affect, and availability heuristics.

Heuristic search techniques in AI (Artificial Intelligence)

Heuristic techniques can be divided into two categories:

Direct Heuristic Search techniques in AI


This category includes blind search, uninformed search, and blind control strategies. These search techniques are not always feasible, as they require much memory and time. They search the complete space for a solution and use an arbitrary ordering of operations.

The examples of Direct Heuristic search techniques include Breadth-First Search (BFS) and
Depth First Search (DFS).

Weak Heuristic Search techniques in AI


This category includes informed search, heuristic search, and heuristic control strategies. These techniques are helpful when applied properly to the right types of tasks, and they usually require domain-specific information.

The examples of Weak Heuristic search techniques include Best First Search (BFS) and A*.

Before describing certain heuristic techniques, let's see some of the techniques listed
below:

o Bidirectional Search
o A* search
o Simulated Annealing
o Hill Climbing
o Best First search
o Beam search

First, let's talk about the Hill climbing in Artificial intelligence.

Hill Climbing Algorithm

o The hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain or the best solution to the problem. It terminates when it reaches a peak where no neighbor has a higher value.
o The hill climbing algorithm is a technique used for optimizing mathematical problems. One of the widely discussed examples is the travelling salesman problem, in which we need to minimize the distance travelled by the salesman.
o It is also called greedy local search, as it only looks to its immediate neighbor states and not beyond them.
o A node in the hill climbing algorithm has two components: state and value.
o Hill climbing is mostly used when a good heuristic is available.
o In this algorithm, we don't need to maintain a search tree or graph, as it only keeps a single current state.
Features of Hill Climbing:
Following are some main features of Hill Climbing Algorithm:

o Generate and test variant: Hill climbing is a variant of the generate-and-test method. The generate-and-test method produces feedback which helps decide which direction to move in the search space.
o Greedy approach: The hill climbing search moves in the direction which optimizes the cost.
o No backtracking: It does not backtrack the search space, as it does not remember previous states.

State-space Diagram for Hill Climbing:


The state-space landscape is a graphical representation of the hill climbing algorithm, showing a graph between the various states of the algorithm and the objective function/cost.

On the Y-axis we take the function, which can be an objective function or a cost function, and on the X-axis the state space. If the function on the Y-axis is cost, then the goal of the search is to find the global minimum (or a local minimum). If the function on the Y-axis is an objective function, then the goal of the search is to find the global maximum (or a local maximum).
Different regions in the state space landscape:
Local Maximum: A local maximum is a state which is better than its neighbor states, but there is another state, elsewhere in the landscape, which is higher still.

Global Maximum: Global maximum is the best possible state of state space landscape.
It has the highest value of objective function.

Current state: It is a state in a landscape diagram where an agent is currently present.

Flat local maximum: It is a flat space in the landscape where all the neighbor states of
current states have the same value.

Shoulder: It is a plateau region which has an uphill edge.

Types of Hill Climbing Algorithm:


o Simple hill Climbing:
o Steepest-Ascent hill-climbing:
o Stochastic hill Climbing:

1. Simple Hill Climbing:


Simple hill climbing is the simplest way to implement a hill climbing algorithm. It evaluates one neighbor node state at a time, selects the first one which improves the current cost, and sets it as the current state. It checks only one successor state at a time, and if that successor is better than the current state, it moves there; otherwise it stays in the same state. This algorithm has the following features:

o Less time consuming
o Less optimal solution, and the solution is not guaranteed

Algorithm for Simple Hill Climbing:

o Step 1: Evaluate the initial state; if it is the goal state, then return success and stop.
o Step 2: Loop until a solution is found or there is no new operator left to apply.
o Step 3: Select and apply an operator to the current state.
o Step 4: Check the new state:
a. If it is the goal state, then return success and quit.
b. Else if it is better than the current state, then assign the new state as the current state.
c. Else if it is not better than the current state, then return to step 2.
o Step 5: Exit.
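A minimal sketch of simple hill climbing on a toy one-dimensional objective; the state space, neighbor function, and objective below are illustrative, not from the notes:

```python
def simple_hill_climbing(initial, neighbors, value, max_steps=1000):
    """Move to the first neighbor that improves the current state;
    stop at a state with no better neighbor (a peak or local maximum)."""
    current = initial
    for _ in range(max_steps):
        improved = False
        for candidate in neighbors(current):
            if value(candidate) > value(current):   # first better successor wins
                current, improved = candidate, True
                break
        if not improved:
            return current        # no operator improves the state
    return current

f = lambda x: -(x - 3) ** 2       # toy objective with a single peak at x = 3
step = lambda x: [x - 1, x + 1]   # integer neighbor states
print(simple_hill_climbing(0, step, f))   # climbs to 3
```

Steepest-ascent hill climbing (next section) differs only in examining all neighbors and moving to the best one, e.g. max(neighbors(current), key=value), rather than the first improvement.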

2. Steepest-Ascent hill climbing:


The steepest-ascent algorithm is a variation of the simple hill climbing algorithm. This algorithm examines all the neighboring nodes of the current state and selects the neighbor node which is closest to the goal state. It consumes more time, as it searches multiple neighbors.

Algorithm for Steepest-Ascent hill climbing:

o Step 1: Evaluate the initial state; if it is the goal state, then return success and stop; else make the current state the initial state.
o Step 2: Loop until a solution is found or the current state does not change.
a. Let SUCC be a state such that any successor of the current state will be better than it.
b. For each operator that applies to the current state:
a. Apply the operator and generate a new state.
b. Evaluate the new state.
c. If it is the goal state, then return it and quit; else compare it to SUCC.
d. If it is better than SUCC, then set the new state as SUCC.
e. If SUCC is better than the current state, then set the current state to SUCC.
o Step 3: Exit.

3. Stochastic hill climbing:


Stochastic hill climbing does not examine all of its neighbors before moving. Instead, this search algorithm selects one neighbor node at random and decides whether to move to it or to examine another.

Problems in Hill Climbing Algorithm:


1. Local Maximum: A local maximum is a peak state in the landscape which is better than each of its neighboring states, but there is another state, elsewhere in the landscape, which is higher than the local maximum.

Solution: The backtracking technique can be a solution to the local maximum in the state-space landscape. Create a list of promising paths so that the algorithm can backtrack through the search space and explore other paths as well.

2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the current state have the same value; because of this, the algorithm cannot find the best direction to move. A hill climbing search might get lost in the plateau area.

Solution: The solution to the plateau is to take big steps (or very little steps) while searching. Randomly select a state which is far away from the current state, so it is possible for the algorithm to find a non-plateau region.

3. Ridges: A ridge is a special form of local maximum. It is an area which is higher than its surrounding areas, but which itself has a slope and cannot be reached in a single move.

Solution: With the use of bidirectional search, or by moving in different directions, we can mitigate this problem.
Simulated Annealing:
A hill climbing algorithm which never makes a move toward a lower value is guaranteed to be incomplete, because it can get stuck on a local maximum. If the algorithm instead applies a pure random walk, moving to a random successor, it may be complete but is not efficient. Simulated annealing is an algorithm which yields both efficiency and completeness.

In mechanical terms, annealing is the process of heating a metal or glass to a high temperature and then cooling it gradually, which allows the material to reach a low-energy crystalline state. The same idea is used in simulated annealing, in which the algorithm picks a random move instead of picking the best move. If the random move improves the state, the algorithm accepts it. Otherwise, it accepts the move with a probability of less than 1 that depends on how bad the move is and on the current temperature, allowing occasional downhill moves.
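A sketch of simulated annealing on the same toy objective used in the hill climbing sketch; the cooling schedule and parameters are illustrative assumptions:

```python
import math, random

def simulated_annealing(initial, neighbor, value, t0=10.0, cooling=0.95, t_min=1e-3):
    """Pick a random move; always accept improvements, and accept worse
    moves with probability exp(delta / T), which shrinks as T cools."""
    current, t = initial, t0
    while t > t_min:
        candidate = neighbor(current)
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate          # downhill moves allowed, mostly early on
        t *= cooling                     # gradually lower the temperature
    return current

f = lambda x: -(x - 3) ** 2
move = lambda x: x + random.choice([-1, 1])
print(simulated_annealing(0, move, f))   # usually ends at or near 3
```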
Best-first Search Algorithm (Greedy Search):
The greedy best-first search algorithm always selects the path which appears best at the moment. It is a combination of depth-first search and breadth-first search algorithms, guided by a heuristic function. Best-first search allows us to take the advantages of both algorithms: at each step, we can choose the most promising node. In the best-first search algorithm, we expand the node which is closest to the goal node, where the closeness is estimated by the heuristic function, i.e.

1. f(n) = h(n)

where h(n) = estimated cost from node n to the goal.

The greedy best-first algorithm is implemented with a priority queue.

Best first search algorithm:

o Step 1: Place the starting node in the OPEN list.
o Step 2: If the OPEN list is empty, stop and return failure.
o Step 3: Remove the node n with the lowest value of h(n) from the OPEN list and place it in the CLOSED list.
o Step 4: Expand node n and generate its successors.
o Step 5: Check each successor of node n to see whether it is a goal node. If any successor is a goal node, return success and terminate the search; else proceed to Step 6.
o Step 6: For each successor node, the algorithm evaluates the evaluation function f(n) and then checks whether the node is already in the OPEN or CLOSED list. If the node is in neither list, add it to the OPEN list.
o Step 7: Return to Step 2.
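A sketch of greedy best-first search with the OPEN list as a priority queue keyed on h(n). The graph and heuristic values below are assumptions, reconstructed so the trace matches the worked example that follows:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Expand the OPEN node with the lowest h(n); CLOSED holds expanded nodes."""
    open_list = [(h[start], start, [start])]
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)   # lowest h(n) first
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for succ in graph.get(node, []):
            if succ not in closed:
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))
    return None

graph = {"S": ["A", "B"], "B": ["E", "F"], "F": ["I", "G"]}
h = {"S": 13, "A": 12, "B": 4, "E": 8, "F": 2, "I": 9, "G": 0}
print(greedy_best_first(graph, h, "S", "G"))   # ['S', 'B', 'F', 'G']
```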

Advantages:

o Best first search can switch between BFS and DFS by gaining the advantages of
both the algorithms.
o This algorithm is more efficient than BFS and DFS algorithms.

Disadvantages:

o It can behave as an unguided depth-first search in the worst-case scenario.
o It can get stuck in a loop, like DFS.
o This algorithm is not optimal.

Example:
Consider the search problem below, which we will traverse using greedy best-first search. At each iteration, each node is expanded using the evaluation function f(n) = h(n), which is given in the table below.

In this search example, we are using two lists which are OPEN and CLOSED Lists.
Following are the iteration for traversing the above example.

Expand the nodes of S and put them in the CLOSED list:

Initialization: Open [A, B], Closed [S]

Iteration 1: Open [A], Closed [S, B]

Iteration 2: Open [E, F, A], Closed [S, B]
             Open [E, A], Closed [S, B, F]

Iteration 3: Open [I, G, E, A], Closed [S, B, F]
             Open [I, E, A], Closed [S, B, F, G]

Hence, the final solution path will be: S ---> B ---> F ---> G

Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).

Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search space.

Completeness: Greedy best-first search is incomplete, even if the given state space is finite.

Optimality: The greedy best-first search algorithm is not optimal.

2.) A* Search Algorithm:


A* search is the most commonly known form of best-first search. It uses the heuristic function h(n) together with the cost g(n) to reach node n from the start state. It combines features of UCS and greedy best-first search, by which it solves problems efficiently. The A* search algorithm finds the shortest path through the search space using the heuristic function, expands fewer nodes of the search tree, and provides optimal results faster. The A* algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n).

In the A* search algorithm, we use the search heuristic as well as the cost to reach the node. Hence we can combine both costs as follows; this sum is called the fitness number:

f(n) = g(n) + h(n)

At each point in the search space, only the node with the lowest value of f(n) is expanded, and the algorithm terminates when the goal node is found.

Algorithm of A* search:
Step 1: Place the starting node in the OPEN list.

Step 2: Check whether the OPEN list is empty; if it is, return failure and stop.

Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function (g+h); if node n is the goal node, return success and stop; otherwise:

Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, compute the evaluation function for n' and place it into the OPEN list.

Step 5: Else, if node n' is already in OPEN or CLOSED, attach it to the back pointer which reflects the lowest g(n') value.

Step 6: Return to Step 2.
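A sketch of A* with the OPEN list ordered by f(n) = g(n) + h(n). The weighted graph and heuristic below are assumptions, reconstructed from the iteration trace in the example later in this section, so the printed result matches it:

```python
import heapq

def a_star(graph, h, start, goal):
    """A*: expand the node with the lowest f(n) = g(n) + h(n).
    graph: node -> [(step_cost, successor)]; h: node -> estimated cost to goal."""
    open_list = [(h[start], 0, start, [start])]    # (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        for cost, succ in graph.get(node, []):
            g2 = g + cost
            if succ not in best_g or g2 < best_g[succ]:   # keep lowest g(n')
                best_g[succ] = g2
                heapq.heappush(open_list, (g2 + h[succ], g2, succ, path + [succ]))
    return None

graph = {"S": [(1, "A"), (10, "G")],
         "A": [(1, "C"), (2, "B")],
         "C": [(3, "D"), (4, "G")]}
h = {"S": 5, "A": 3, "B": 4, "C": 2, "D": 6, "G": 0}
print(a_star(graph, h, "S", "G"))   # (6, ['S', 'A', 'C', 'G'])
```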

Advantages:

o The A* search algorithm performs better than other search algorithms.
o The A* search algorithm is optimal and complete.
o This algorithm can solve very complex problems.

Disadvantages:

o It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
o A* search algorithm has some complexity issues.
o The main drawback of A* is memory requirement as it keeps all generated nodes
in the memory, so it is not practical for various large-scale problems.

Example:
In this example, we will traverse the given graph using the A* algorithm. The heuristic
value of all states is given in the below table so we will calculate the f(n) of each state
using the formula f(n)= g(n) + h(n), where g(n) is the cost to reach any node from start
state.
Here we will use OPEN and CLOSED list.
Solution:

Initialization: {(S, 5)}

Iteration1: {(S--> A, 4), (S-->G, 10)}

Iteration2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}

Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7), (S-->G, 10)}

Iteration 4 will give the final result, as S--->A--->C--->G it provides the optimal path
with cost 6.
Points to remember:

o The A* algorithm returns the path which occurs first; it does not search for all remaining paths.
o The efficiency of the A* algorithm depends on the quality of the heuristic.
o The A* algorithm expands all nodes which satisfy the condition f(n) < C*, where C* is the cost of the optimal solution.

Complete: A* algorithm is complete as long as:

o The branching factor is finite.
o Every action has a fixed, positive cost.

Optimal: A* search algorithm is optimal if it follows below two conditions:

o Admissible: The first condition required for optimality is that h(n) should be an admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature, i.e., it never overestimates the true cost to the goal.
o Consistency: The second condition, consistency, is required only for A* graph search.

If the heuristic function is admissible, then A* tree search will always find the least cost
path.

Time Complexity: The time complexity of the A* search algorithm depends on the heuristic function; the number of nodes expanded is exponential in the depth of the solution d. So the time complexity is O(b^d), where b is the branching factor.

Space Complexity: The space complexity of the A* search algorithm is O(b^d).


Constraint Satisfaction Problem (CSP)
A Constraint Satisfaction Problem in artificial intelligence involves a set of variables,
each of which has a domain of possible values, and a set of constraints that define the
allowable combinations of values for the variables. The goal is to find a value for each
variable such that all the constraints are satisfied.

More formally, a CSP is defined as a triple (X,D,C), where:

 X is a set of variables {x1, x2, ..., xn}.
 D is a set of domains {D1, D2, ..., Dn}, where each Di is the set of possible values for xi.
 C is a set of constraints {C1, C2, ..., Cm}, where each Ci is a constraint that restricts the values that can be assigned to a subset of the variables.

The goal of a CSP is to find an assignment of values to the variables that satisfies all the
constraints. This assignment is called a solution to the CSP.
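As an illustration of the (X, D, C) formulation, here is a minimal backtracking sketch for map coloring, a classic CSP; the regions, adjacencies, and colors are the usual Australia example and are illustrative assumptions:

```python
variables = ["WA", "NT", "SA", "Q", "NSW", "V"]                     # X
domains = {v: ["red", "green", "blue"] for v in variables}          # D
adjacent = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),  # C: binary
            ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"),  # constraints
            ("NSW", "V")]                                           # (neighbors differ)

def consistent(var, val, assignment):
    """Check the binary constraints touching var against the partial assignment."""
    for a, b in adjacent:
        other = b if a == var else a if b == var else None
        if other is not None and assignment.get(other) == val:
            return False
    return True

def backtrack(assignment):
    """Assign variables one at a time, undoing choices that lead to dead ends."""
    if len(assignment) == len(variables):
        return assignment                 # complete and consistent: a solution
    var = next(v for v in variables if v not in assignment)
    for val in domains[var]:
        if consistent(var, val, assignment):
            result = backtrack({**assignment, var: val})
            if result:
                return result
    return None                           # no value works: backtrack

print(backtrack({}))
```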

Solving Constraint Satisfaction Problems


The State Space

Solving a CSP typically involves searching for a solution in the state space of possible
assignments to the variables. The state-space is a set of all possible configurations of
variable assignments, each of which is a potential solution to the problem. The state
space can be searched using various algorithms, including backtracking, forward
checking, and local search.

The Notion of the Solution

The notion of a solution in CSP depends on the specific problem being solved. In
general, a solution is a complete assignment of values to all the variables in a way that
satisfies all the constraints. For example, in a scheduling problem, a solution would be a
valid schedule that satisfies all the constraints on task scheduling and resource
allocation.

Domain Categories within CSP


The domain of a variable in a Constraint satisfaction problem in artificial intelligence can
be categorized into three types: finite, infinite, and continuous. Finite domains have a
finite number of possible values, such as colors or integers. Infinite domains have an
infinite number of possible values, such as real numbers. Continuous domains have an
infinite number of possible values, but they can be represented by a finite set of
parameters, such as the coefficients of a polynomial function.

In mathematics, a continuous domain is a set of values that can be described as a continuous range of real numbers. This means that there are no gaps or interruptions in the values between any two points in the set.

On the other hand, an infinite domain refers to a set of values that extends indefinitely
in one or more directions. It may or may not be continuous, depending on the specific
context.

Types of Constraints in CSP


Several types of constraints can be used in a Constraint satisfaction problem in artificial
intelligence, including:

 Unary Constraints:
A unary constraint is a constraint on a single variable. For example, Variable A
not equal to “Red”.
 Binary Constraints:
A binary constraint involves two variables and specifies a constraint on their
values. For example, a constraint that two tasks cannot be scheduled at the same
time would be a binary constraint.
 Global Constraints:
Global constraints involve more than two variables and specify complex
relationships between them. For example, a constraint that no two tasks can be
scheduled at the same time if they require the same resource would be a global
constraint.

Basic workflow of constraint satisfaction problem


 We need to analyze the problem thoroughly.
 We need to derive the constraints given in the problem.
 Then we need to derive a solution from the given constraints.
 Check whether we have reached the goal state. If we have not reached the goal state, we need to make a guess, and that guess has to be added as a new constraint.
 After adding the new constraint, we again evaluate the solution. We solve the problem using this added constraint and again check whether we have reached the goal state; if yes, a solution has been found.
Algorithm : Constraint satisfaction
1. Propagate available constraints. To do this, first set OPEN to the set of all objects that must have values assigned to them in a complete solution. Then do the following until an inconsistency is detected or until OPEN is empty:
a. Select an object OB from OPEN. Strengthen as much as possible the set of constraints that apply to OB.
b. If this set is different from the set that was assigned the last time OB was examined, or if this is the first time OB has been examined, then add to OPEN all objects that share any constraints with OB.
c. Remove OB from OPEN.
2. If the union of the constraints discovered above defines a solution, then quit and report
the solution.
3. If the solution of the constraints discovered above defines a contradiction, then return
failure.
4. If neither of the above occurs, then it is necessary to make a guess at something in order
to proceed. To do this, loop until a solution is found or all possible solutions have been
eliminated:
a. Select an object whose value is not yet determined and select a way of
strengthening the constraints on that object.
b. Recursively invoke constraint satisfaction with the current set of
constraints augmented by the strengthening constraint just selected.

Example:
Cryptarithmetic puzzles:

  SEND      DONALD      CROSS
+ MORE    + GERALD    + ROADS
------    --------    -------
 MONEY      ROBERT     DANGER

Algorithm Working:
Consider the cryptarithmetic problem shown below.

Problem:

  SEND
+ MORE
------
 MONEY

Assign a decimal digit to each of the letters in such a way that the answer to the problem is correct. If the same letter occurs more than once, it must be assigned the same digit each time. No two different letters may be assigned the same digit.

Constraints:-
1. No two digits can be assigned to the same letter.
2. Only a single digit can be assigned to each letter.
3. No two letters can be assigned the same digit.
4. Assumptions can be made at various levels such that they do not contradict each other.
5. The problem can be decomposed into several constraints, and a constraint satisfaction approach may be used.
6. Any of the search techniques may be used.
7. Backtracking may be performed as applicable to the applied search technique.
8. The rules of arithmetic must be followed.

Initial state of problem.


D=? E=? Y=? N=? R=? O=? S=? M=? C1=? C2=? C3=? C4=?

C1, C2, C3, C4 stand for the carry variables.

Goal State: The digits must be assigned to the letters in such a manner that the sum is satisfied.

 The solution process proceeds in cycles.
 At each cycle, two important things are done:
i. Constraints are propagated by using rules that correspond to the properties of arithmetic.
ii. A value is guessed for some letter whose value is not yet determined.
[Search tree of the constraint-propagation cycles: initial propagation yields M=1, S=8 or 9, O=0, N=E+1, C2=1, and N+R>8; guessing E=2 then yields N=3, R=8 or 9, and 2+D=Y or 2+D=10+Y, leading to the branches D=8, Y=0 and D=9, Y=1.]
C1, C2, C3, and C4 indicate the carry bits out of the columns, numbered from the right.

Rules for propagating constraints generate the following additional constraints:

 M=1, since two single-digit numbers plus a carry cannot total more than 19.
 S=8 or 9, since S+M+C3>9 (to generate the carry) and M=1; thus S+1+C3>9, so S+C3>8, and C3 is at most 1.
 O=0, since S+M(=1)+C3(≤1) must be at least 10 to generate a carry, and it can be at most 11. But M is already 1, so O must be 0.
 N=E or E+1, depending on the value of C2. But N cannot have the same value as E, so N=E+1 and C2 is 1.
 In order for C2 to be 1, the sum N+R+C1 must be greater than 8.
 N+R cannot be greater than 18, even with a carry in, so E cannot be 9.
Assume that no more constraints can be generated. To make progress at this point, a guess is required: suppose E is assigned the value 2. Now the next cycle begins.

The constraint propagator then shows that:

 N=3, since N=E+1.
 R=8 or 9, since R+N(=3)+C1(=1 or 0)=2 or 12. But since N is already 3, the sum of these nonnegative numbers cannot be less than 3; thus R+3+(0 or 1)=12, and R=8 or 9.
 2+D=Y or 2+D=10+Y, from the sum in the rightmost column.

Assuming no more constraints can be generated, a guess is again required. Suppose C1 is chosen and guessed to be 1; we then reach a dead end, as shown in the figure. When this happens, the process backtracks and sets C1=0.
In this algorithm for constraint satisfaction, chronological backtracking is used when guessing leads to an inconsistent set of constraints. A more efficient variant leaves alone those generated constraints that are independent of the guess that caused the failure; this approach is called dependency-directed backtracking (DDB).
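For comparison with the constraint-propagation cycles above, here is a brute-force Python sketch that solves SEND + MORE = MONEY by trying assignments of distinct digits directly; it is far less efficient than propagation but easy to verify:

```python
from itertools import permutations

def solve_send_more_money():
    letters = "SENDMORY"                     # the eight distinct letters
    for perm in permutations(range(10), len(letters)):
        d = dict(zip(letters, perm))
        if d["S"] == 0 or d["M"] == 0:       # leading digits cannot be zero
            continue
        send  = int("".join(str(d[c]) for c in "SEND"))
        more  = int("".join(str(d[c]) for c in "MORE"))
        money = int("".join(str(d[c]) for c in "MONEY"))
        if send + more == money:
            return send, more, money
    return None

print(solve_send_more_money())               # (9567, 1085, 10652)
```

The solution 9567 + 1085 = 10652 is consistent with the propagated constraints M=1, O=0, S∈{8, 9}, and N=E+1 (here E=5, N=6); the guess E=2 explored above eventually fails and is backtracked.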
State Space Search in Artificial Intelligence
State space search is a problem-solving technique used in Artificial Intelligence (AI) to find the
solution path from the initial state to the goal state by exploring the various states. The state
space search approach searches through all possible states of a problem to find a solution. It is an
essential part of Artificial Intelligence and is used in various applications, from game-playing
algorithms to natural language processing.

Introduction
A state space is a way to mathematically represent a problem by defining all the possible states
in which the problem can be. This is used in search algorithms to represent the initial state, goal
state, and current state of the problem. Each state in the state space is represented using a set of
variables.

The efficiency of the search algorithm greatly depends on the size of the state space, and it is
important to choose an appropriate representation and search strategy to search the state space
efficiently.

One of the most well-known state space search algorithms is the A* algorithm. Other commonly used state space search algorithms include breadth-first search (BFS), depth-first search (DFS), hill climbing, simulated annealing, and genetic algorithms.

Features of State Space Search


State space search has several features that make it an effective problem-solving technique in
Artificial Intelligence. These features include:

 Exhaustiveness:
State space search explores all possible states of a problem to find a solution.
 Completeness:
If a solution exists, state space search will find it.
 Optimality:
Depending on the algorithm used, searching through a state space can yield an optimal solution.
 Uninformed and Informed Search:
State space search in artificial intelligence is classified as uninformed if no additional information about the problem is available beyond its definition.

In contrast, informed search uses additional information, such as heuristics, to guide the
search process.
Steps in State Space Search
The steps involved in state space search are as follows:

 To begin the search process, we set the current state to the initial state.
 We then check if the current state is the goal state. If it is, we terminate the algorithm and
return the result.
 If the current state is not the goal state, we generate the set of possible successor states that
can be reached from the current state.
 For each successor state, we check if it has already been visited. If it has, we skip it, else we add
it to the queue of states to be visited.
 Next, we set the next state in the queue as the current state and check if it's the goal state. If it
is, we return the result. If not, we repeat the previous step until we find the goal state or explore
all the states.
 If all possible states have been explored and the goal state has still not been found, we return with no solution.
State Space Representation
State space representation involves defining an INITIAL STATE and a GOAL STATE and then determining a sequence of actions that leads from one to the other.

 State:
A state can be an Initial State, a Goal State, or any other possible state that can be generated by
applying rules between them.
 Space:
In an AI problem, space refers to the exhaustive collection of all conceivable states.
 Search:
This technique moves from the beginning state to the desired state by applying good rules while
traversing the space of all possible states.
 Search Tree:
To visualize the search issue, a search tree is used, which is a tree-like structure that represents
the problem. The initial state is represented by the root node of the search tree, which is the
starting point of the tree.
 Transition Model:
This describes what each action does, while Path Cost assigns a cost value to each path, an
activity sequence that connects the beginning node to the end node. The optimal option has the
lowest cost among all alternatives.

Example of State Space Search


The 8-puzzle problem is a commonly used example of a state space search. It is a sliding puzzle
game consisting of 8 numbered tiles arranged in a 3x3 grid and one blank space. The game aims
to rearrange the tiles from their initial state to a final goal state by sliding them into the blank
space.

To represent the state space in this problem, we use the nine tiles in the puzzle and their
respective positions in the grid. Each state in the state space is represented by a 3x3 array with
values ranging from 1 to 8, and the blank space is represented as an empty tile.

The initial state of the puzzle represents the starting configuration of the tiles, while the goal
state represents the desired configuration. Search algorithms utilize the state space to find a
sequence of moves that will transform the initial state into the goal state.
An exhaustive search such as BFS guarantees a solution but can become very slow for larger state spaces. Alternatively, other algorithms, such as A* search, use heuristics to guide the search more efficiently.

Our objective is to move from the current state to the target state by sliding the numbered tiles
through the blank space. Let's look closer at reaching the target state from the current state.
To summarize, our approach involved exhaustively exploring all reachable states from the
current state and checking if any of these states matched the target state.
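A sketch of state space search on the 8-puzzle: states are 9-tuples (0 is the blank), successors are generated by sliding a tile into the blank, and an exhaustive BFS explores reachable states exactly as in the steps listed above. The start configuration is an easy illustrative instance:

```python
from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)        # 0 represents the blank space

def successors(state):
    """Yield every state reachable by sliding one tile into the blank."""
    i = state.index(0)
    row, col = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = r * 3 + c
            s = list(state)
            s[i], s[j] = s[j], s[i]        # swap blank with the adjacent tile
            yield tuple(s)

def solve(start):
    """Exhaustive BFS over the 8-puzzle state space; returns the state sequence."""
    frontier, visited = deque([(start, [start])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if state == GOAL:
            return path
        for succ in successors(state):
            if succ not in visited:
                visited.add(succ)
                frontier.append((succ, path + [succ]))
    return None                             # the start configuration is unsolvable

path = solve((1, 2, 3, 4, 5, 6, 0, 7, 8))
print(len(path) - 1, "moves")               # 2 moves for this easy instance
```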

Applications of State Space Search


 State space search algorithms are used in various fields, such as robotics, game playing,
computer networks, operations research, bioinformatics, cryptography, and supply chain
management. In artificial intelligence, state space search algorithms can solve problems
like pathfinding, planning, and scheduling.
 They are also useful in planning robot motion and finding the best sequence of actions to
achieve a goal. In games, state space search algorithms can help determine the best move for a
player given a particular game state.
 State space search algorithms can optimize routing and resource allocation in computer
networks and operations research.
 In Bioinformatics, state space search algorithms can help find patterns in biological data and
predict protein structures.
 In Cryptography, state space search algorithms are used to break codes and find cryptographic
keys.

Iterative Deepening Search


The Iterative Deepening Search (IDS) algorithm is an iterative graph searching strategy that uses much less memory in each iteration (similar to depth-first search) while benefiting from the completeness of the breadth-first search (BFS) strategy.
IDS achieves this by imposing a depth limit on DFS, which reduces the possibility of becoming stuck in an infinite or very long branch. It traverses each node's branch from left to right until it reaches the required depth. After that, IDS returns to the root node and investigates another branch, similar to DFS.
Let’s use the DFS example again to see how IDS works
Pictorial Representation
In this graph, we use the stack data structure S2 to keep track of the nodes we have visited.
☑️Assume node 'A' is the source node.
☑️Assume that node 'D' is the solution node.
☑️ Whenever S2 becomes empty, no solution has been found, and the maximum depth has not been reached, set the depth limit L to 1 and restart the search from the beginning.

☑️S2:
 Initially, only node A is reachable. So put it in S2 and mark it as visited.
 The current level is 0.
☑️S2: A
 After exploring A, three nodes are now accessible: B, C, and D.
 Assume we begin our exploration with node B.
 B should be pushed into S2 and marked as visited.
 The current level is one.

☑️S2: B, A
 Node B will be treated as having no successor because the current level is already the
limited depth L.
 As a result, nothing is reachable.
 Take B from S2.
 The current level is 0.
☑️S2: A
 Explore A once more.
 There are two unvisited nodes, C and D, that can be reached.
 Assume we begin our exploration with node C.
 C is pushed into S2 and marked as visited.
 The current level is one.

☑️S2: C, A
 Because the current level already has the limited depth L, node C is considered to have no
successor.
 As a result, nothing is reachable.
 Take C from S2.
 The current level is 0.
☑️S2: A
 Explore A once more.
 There is only one unvisited node D, that can be reached.
 D should be pushed into S2 and marked as visited.
 The current level is one.

☑️S2: D, A
 D is explored, but no new nodes are found.
 Take D from S2.
 The current level is 0.
☑️S2: A
 Explore A once more.
 There is no new reachable node.
 Take A from S2.

☑️ Similarly, at depth limit 2, IDS will have explored all the nodes reachable from A; if the solution exists in the graph, it will have been found.
