Ai Unit2 Updated12
GPS
• The General Problem Solver (GPS) was an AI program proposed
by Herbert Simon, J.C. Shaw, and Allen Newell.
• It was one of the first influential programs in the AI world. The goal was to make it work as a universal problem-solving machine. Of course, many software programs existed before it, but those programs performed specific tasks.
• GPS was the first program that was intended to solve any general
problem. GPS was supposed to solve all the problems using the same
base algorithm for every problem.
Examples of Problems in Artificial
Intelligence
• Travelling Salesman Problem
• Tower of Hanoi Problem
• Chess
• Sudoku
• Logical Puzzles and so on.
Problem-Solving Agents
• Intelligent agents are supposed to act in such a way that the environment
goes through a sequence of states that maximizes the performance
measure.
Unfortunately, this specification is difficult to translate into a successful
agent design. The task is simplified if the agent can adopt a goal and aim to
satisfy it.
• Intelligence requires knowledge, and knowledge has some less desirable
properties:
• It is enormous.
• It is tough to characterize precisely.
• It is dynamic.
• It is structured in a way that matches the way it will be used.
Formulating problems
• Problem formulation is the process of deciding what actions and
states to consider.
• In general, an agent with several intermediate options of unknown
value can decide what to do by first examining different possible
sequences of actions that lead to states of known value and then
choosing the best one.
• This process is called search. A search algorithm takes a problem as
input and returns a solution in the form of an action sequence. Once
a solution is found, the actions it recommends can be carried out.
This is called the execution phase. Hence, we have a simple
"formulate, search, execute" design for the agent.
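The "formulate, search, execute" loop can be sketched in Python. The helper names (goal_test, actions, result) and the toy integer world are illustrative assumptions, with breadth-first search standing in for the search step:

```python
from collections import deque

def simple_problem_solving_agent(initial_state, goal_test, actions, result):
    """Formulate-search-execute: search returns an action sequence,
    which the agent can then carry out in the execution phase."""
    # Search phase: breadth-first exploration of the state space.
    frontier = deque([(initial_state, [])])
    explored = {initial_state}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path  # the solution: an action sequence to execute
        for action in actions(state):
            nxt = result(state, action)
            if nxt not in explored:
                explored.add(nxt)
                frontier.append((nxt, path + [action]))
    return None  # no solution found

# Usage: a tiny world where states are integers and actions add/subtract 1.
plan = simple_problem_solving_agent(
    initial_state=0,
    goal_test=lambda s: s == 3,
    actions=lambda s: ["inc", "dec"],
    result=lambda s, a: s + 1 if a == "inc" else s - 1,
)
print(plan)  # ['inc', 'inc', 'inc']
```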
Formulating problems
• Formulating problems is an art. First, we look at the different
amounts of knowledge that an agent can have concerning its actions
and the state that it is in. This depends on how the agent is
connected to its environment.
• There are four essentially different types of problems.
• single state
• multiple state
• contingency
• exploration
• First, suppose that the agent's sensors give it enough information to
tell exactly which state it is in (i.e., the world is completely accessible)
and suppose it knows exactly what each of its actions does.
• Then it can calculate exactly what state it will be in after any
sequence of actions. For example, if the initial state is 5, then it can
calculate the result of the action sequence {right, suck}.
• This simplest case is called a single-state problem.
• Now suppose the agent knows all of the effects of its actions, but world
access is limited.
• For example, in the extreme case, the agent has no sensors so it knows
only that its initial state is one of the set {1,2,3,4,5,6,7,8}.
• In this simple world, the agent can succeed even though it has no sensors.
It knows that the action {right} will cause it to be in one of the states
{2,4,6,8}. In fact, it can discover that the sequence {right, suck, left, suck} is
guaranteed to reach a goal state no matter what the initial state is.
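The guarantee can be checked by simulating belief states directly. The tuple encoding (location, dirt-left, dirt-right) below is an assumed stand-in for the numbered states 1–8:

```python
from itertools import product

# State = (loc, dirt_left, dirt_right); loc is "L" or "R".
def result(state, action):
    loc, dl, dr = state
    if action == "Right": return ("R", dl, dr)
    if action == "Left":  return ("L", dl, dr)
    # Suck cleans the square the agent is currently on.
    return (loc, False, dr) if loc == "L" else (loc, dl, False)

def goal(state):
    _, dl, dr = state
    return not dl and not dr  # both squares clean

# Sensorless agent: the initial belief state is all 8 physical states.
belief = set(product("LR", [True, False], [True, False]))
for action in ["Right", "Suck", "Left", "Suck"]:
    belief = {result(s, action) for s in belief}

# Every remaining state is a goal state, whatever the start was.
print(all(goal(s) for s in belief))  # True
```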
• In this case, when the world is not fully accessible, the agent must reason
about sets of states that it might get to, rather than single states. This is
called the multiple-state problem.
• The agent can solve the problem if it can perform sensing actions during
execution.
• For example, starting from one of {1,3}: first suck dirt, then move right,
then suck only if there is dirt there. In this case the agent must calculate a
whole tree of actions rather than a single sequence, i.e., plans now have
conditionals in them that are based on the results of sensing actions.
• For this reason, we call this a contingency problem. Many problems in the
real world are contingency problems. This is why most people keep their
eyes open when walking and driving around.
• Single-state and multiple-state problems can be handled by similar search
techniques. Contingency problems require more complex algorithms.
• They also lend themselves to an agent design in which planning and
execution are interleaved.
• The last type of problem is the exploration problem.
• In this type of problem, the agent has no information about the
effects of its actions. The agent must experiment, gradually
discovering what its actions do and what sorts of states exist.
• This is search in the real world rather than in a model.
State Space Search
• State space search is a process used in the field of computer science,
including artificial intelligence (AI), in which
successive configurations or states of an instance are considered,
with the intention of finding a goal state with the desired property.
• Problems are often modelled as a state space, a set of states that a
problem can be in.
• The state space of a problem is the set of all states reachable from
the initial state by executing any sequence of actions. A state is a
representation of one possible configuration of the problem.
Components of problem formulation
• Initial state
• Actions
• Transition model (successor function)
• Goal test
• Path cost
1) The initial state : The initial state for our agent in this case (Romania)
might be described as In(Arad).
2) Actions : It gives the possible actions from the current state; in this case
the possible actions are {Go(Sibiu), Go(Timisoara), Go(Zerind)}.
3) Transition Model : This is specified by a function RESULT(s, a) that
returns the state that results from doing action a in state s, e.g.
RESULT(In(Arad), Go(Zerind)) = In(Zerind).
4) Path Cost : The path cost function assigns a numeric cost to each path; in
the present case it is the distance in kilometers.
5) Goal Test : It determines whether a given state is a goal state.
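A minimal sketch of this formulation in Python, restricted to the roads mentioned above (distances taken from the standard Romania map; the function names are illustrative):

```python
# Romania problem formulated with the five components above.
roads = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Zerind": {"Arad": 75, "Oradea": 71},
}

initial_state = "Arad"                  # 1) Initial state

def actions(state):                     # 2) Actions available in a state
    return [f"Go({city})" for city in roads.get(state, {})]

def result(state, action):              # 3) Transition model RESULT(s, a)
    return action[3:-1]                 # "Go(Zerind)" -> "Zerind"

def goal_test(state):                   # 4) Goal test
    return state == "Bucharest"

def step_cost(state, action):           # 5) Path cost: distance in km
    return roads[state][result(state, action)]

print(result("Arad", "Go(Zerind)"))     # Zerind
print(step_cost("Arad", "Go(Sibiu)"))   # 140
```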
8 Queens Problem
• Completeness:
• An algorithm is said to be complete if it is guaranteed to find a
solution to the problem whenever one exists.
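As an illustration of completeness, a backtracking search for the 8-queens problem systematically enumerates every placement, so it finds every solution that exists (a minimal sketch; the one-queen-per-row board encoding is an assumption):

```python
def solve_n_queens(n=8):
    """Backtracking search placing one queen per row; it enumerates
    all solutions, so it is complete for this problem."""
    solutions = []
    def place(row, cols):
        if row == n:
            solutions.append(cols[:])      # full placement found
            return
        for col in range(n):
            # Safe if no earlier queen shares a column or diagonal.
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                cols.append(col)
                place(row + 1, cols)
                cols.pop()                  # backtrack
    place(0, [])
    return solutions

print(len(solve_n_queens(8)))  # 92 solutions to the 8-queens problem
```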
BFS vs DFS
1. Stands for: BFS stands for Breadth First Search; DFS stands for Depth First Search.
2. Data structure: BFS uses a queue; DFS uses a stack.
3. Definition: BFS is a traversal approach in which we first walk through all nodes on the same level before moving on to the next level. DFS is a traversal approach in which the traversal begins at the root node and proceeds through the nodes as far as possible until we reach a node with no unvisited neighbours.
4. Technique: BFS can be used to find a single-source shortest path in an unweighted graph because, in BFS, we reach a vertex with the minimum number of edges from the source vertex. In DFS, we might traverse through more edges to reach a destination vertex from the source.
5. Conceptual difference: BFS builds the tree level by level; DFS builds the tree sub-tree by sub-tree.
6. Approach used: BFS works on the concept of FIFO (First In First Out); DFS works on the concept of LIFO (Last In First Out).
7. Suitable for: BFS is more suitable for searching vertices closer to the given source; DFS is more suitable when there are solutions away from the source.
8. Decision trees: BFS considers all neighbours first and is therefore not suitable for decision-making trees used in games or puzzles. DFS is more suitable for game or puzzle problems: we make a decision, then explore all paths through this decision, and if the decision leads to a win situation, we stop.
9. Visiting of siblings/children: In BFS, siblings are visited before the children; in DFS, children are visited before the siblings.
10. Removal of traversed nodes: In BFS, nodes that are traversed several times are deleted from the queue. In DFS, the visited nodes are added to the stack and then removed when there are no more nodes to visit.
11. Backtracking: In BFS there is no concept of backtracking; DFS is a recursive algorithm that uses the idea of backtracking.
12. Applications: BFS is used in applications such as bipartite graphs, shortest paths, etc.; DFS is used in applications such as acyclic graphs, topological ordering, etc.
13. Memory: BFS requires more memory; DFS requires less memory.
14. Optimality: BFS is optimal for finding the shortest path; DFS is not.
15. Space complexity: In BFS, the space complexity is more critical as compared to time complexity. DFS has lower space complexity because, at any time, it needs to store only a single path from the root to the leaf node.
16. Speed: BFS is slow as compared to DFS; DFS is fast as compared to BFS.
17. When to use? When the target is close to the source, BFS performs better; when the target is far from the source, DFS is preferable.
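The queue-versus-stack distinction above can be seen on a small directed graph (an assumed example; the node names are arbitrary):

```python
from collections import deque

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
         "D": [], "E": ["F"], "F": []}

def bfs(start):                 # queue (FIFO): visits level by level
    order, queue, seen = [], deque([start]), {start}
    while queue:
        node = queue.popleft()
        order.append(node)
        for nb in graph[node]:
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return order

def dfs(start):                 # stack (LIFO): visits sub-tree by sub-tree
    order, stack, seen = [], [start], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        for nb in reversed(graph[node]):  # keep left-to-right order
            stack.append(nb)
    return order

print(bfs("A"))  # ['A', 'B', 'C', 'D', 'E', 'F']
print(dfs("A"))  # ['A', 'B', 'D', 'E', 'F', 'C']
```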
Informed Search Algorithms
• So far we have talked about the uninformed search algorithms which
looked through search space for all possible solutions of the problem
without having any additional knowledge about search space.
• An informed search algorithm, by contrast, has additional knowledge such
as how far we are from the goal, the path cost, how to reach the goal
node, etc. This knowledge helps agents explore less of the search space
and find the goal node more efficiently.
• Informed search algorithms are more useful for large search spaces.
• Informed search algorithms use the idea of a heuristic, so they are also
called heuristic search.
Known as: Informed search is also known as Heuristic Search; uninformed search is also known as Blind Search.
Using knowledge: Informed search uses knowledge for the searching process; uninformed search does not.
Direction: In informed search, there is a direction given about the solution; in uninformed search, no suggestion is given regarding the solution.
Closed list after each expansion in the best-first search example (graph figure omitted):
A
A, D
A, D, E
A, D, E, J
Time Complexity: The worst-case time complexity of best-first search is
O(b^m).
Space Complexity: The worst-case space complexity of best-first search is
O(b^m), where m is the maximum depth of the search space and b is the
branching factor.
Complete: Best-first search is incomplete, even if the given state space is
finite.
Optimal: The best-first search algorithm is not optimal.
A* Search Algorithm
• A* evaluates nodes by combining the cost to reach the node and the heuristic estimate: f(n) = g(n) + h(n).
Step 1: Place the starting node in the OPEN list.
Step 2: Check if the OPEN list is empty or not; if the list is empty, then return failure and stop.
Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function (g + h). If
node n is the goal node, then return success and stop; otherwise go to Step 4.
Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor
n', check whether n' is already in the OPEN or CLOSED list; if not, compute the evaluation function for n'
and place it into the OPEN list.
Step 5: Else, if node n' is already in OPEN or CLOSED, attach it to the back pointer
which reflects the lowest g(n') value.
Step 6: Return to Step 2.
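The OPEN/CLOSED-list procedure above can be sketched with a priority queue ordered by f(n) = g(n) + h(n); the toy graph and heuristic values are assumptions:

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* sketch: `neighbors(s)` yields (successor, step_cost) pairs,
    and h is the heuristic estimate to the goal."""
    open_list = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}                           # lowest g seen per state
    while open_list:
        f, g, state, path = heapq.heappop(open_list)
        if state == goal:
            return path, g
        for nxt, cost in neighbors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):  # keep lowest-g entry
                best_g[nxt] = g2
                heapq.heappush(open_list, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Usage on a toy graph with an admissible heuristic (assumed values).
edges = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)],
         "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 3, "B": 1, "G": 0}
path, cost = astar("S", "G", lambda s: edges[s], lambda s: h[s])
print(path, cost)  # ['S', 'A', 'B', 'G'] 4
```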
Points to remember:
• The A* algorithm returns the first path found and does not search all remaining
paths.
• The efficiency of the A* algorithm depends on the quality of the heuristic.
• The A* algorithm expands every node that satisfies the condition f(n) < C*, where C* is the
cost of the optimal solution.
Complete: A* is complete as long as the branching factor is finite and every action has a
fixed, positive cost.
Optimal: The first condition required for optimality is that h(n) should be an admissible
heuristic, i.e. it never overestimates the true cost to reach the goal.
If the heuristic function is admissible, then A* tree search will always find the least-cost path.
Time Complexity: The time complexity of the A* search algorithm depends on the heuristic function;
in the worst case, the number of nodes expanded is exponential in the depth of the solution d. So the time
complexity is O(b^d), where b is the branching factor.
Space Complexity: The space complexity of the A* search algorithm is O(b^d).
Advantages:
○ The A* search algorithm often performs better than other search algorithms.
○ The A* search algorithm is optimal and complete.
○ This algorithm can solve very complex problems.
Disadvantages:
○ It does not always produce the shortest path, as it relies on heuristics and
approximation.
○ The A* search algorithm has some complexity issues.
○ The main drawback of A* is its memory requirement: it keeps all generated nodes in
memory, so it is not practical for many large-scale problems.
Iterative Deepening A* algorithm (IDA*)
● Iterative deepening A* (IDA*) is a graph traversal and path-finding method that can
determine the shortest route in a weighted graph between a defined start node and
any one of a group of goal nodes.
● It is a kind of iterative deepening depth-first search that adopts the A* search
algorithm idea of using a heuristic function to assess the remaining cost to reach the
goal.
● A memory-limited version of A* is called IDA*. It performs all operations that A* does
and has optimal features for locating the shortest path, but it occupies less memory.
● Iterative Deepening A Star uses a heuristic to choose which nodes to explore and at
which depth to stop, as opposed to Iterative Deepening DFS, which utilizes simple
depth to determine when to end the current iteration and continue with a higher
depth.
How does the IDA* algorithm work?
Step 1: Initialization
Set the root node as the current node, and find its f-score.
Step 2: Set threshold
Set the cost limit as a threshold for a node, i.e. the maximum f-score allowed for that node for further
exploration.
Step 3: Node expansion
Expand the current node to its children and find their f-scores.
Step 4: Pruning
If the f-score of any node exceeds the threshold, prune that node because it is considered too expensive,
and store it in the visited-node list.
Step 5: Return path
If the goal node is found, return the path from the start node to the goal node.
Step 6: Update the threshold
If the goal node is not found, repeat from Step 2, setting the threshold to the minimum
pruned f-score from the visited-node list. Continue until you reach the goal node.
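These steps can be sketched as follows; the toy graph and heuristic values are assumptions, and `neighbors(s)` is a hypothetical helper returning (successor, cost) pairs:

```python
def ida_star(start, goal, neighbors, h):
    """IDA* sketch: deepen the f-score threshold to the smallest
    pruned value each iteration until the goal is reached."""
    def search(path, g, threshold):
        node = path[-1]
        f = g + h(node)
        if f > threshold:
            return f           # pruned: report f as a candidate threshold
        if node == goal:
            return path        # found: return the path itself
        minimum = float("inf")
        for nxt, cost in neighbors(node):
            if nxt not in path:            # avoid cycles on the current path
                t = search(path + [nxt], g + cost, threshold)
                if isinstance(t, list):
                    return t
                minimum = min(minimum, t)
        return minimum
    threshold = h(start)
    while True:
        t = search([start], 0, threshold)
        if isinstance(t, list):
            return t
        if t == float("inf"):
            return None        # no solution exists
        threshold = t          # smallest pruned f becomes the new threshold

edges = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)],
         "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 3, "B": 1, "G": 0}
print(ida_star("S", "G", lambda s: edges[s], lambda s: h[s]))
# ['S', 'A', 'B', 'G']
```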
Example (search tree, figure omitted): root S with f(n)=100; children A, B, C with
f(n)=120, 130, 120; grandchildren D, G, E, F with f(n)=140, 125, 140, 125.
Iteration 1 covers level 0; iteration 2 covers levels 1 and 2, where E (140 > 120)
and F (125 > 120) are pruned against the threshold of 120.
Disadvantages
● It is a slow process.
● It cannot tell whether the solution found is optimal.
● Some other method may also be required.
Beam Search Algorithm
A heuristic search algorithm that examines a graph by extending the
most promising node in a limited set is known as the beam search
algorithm.
The beam width B is the number of nodes kept at each level.
With a beam width of 1, beam search reduces to hill climbing; with an
infinite beam width, it becomes best-first search.
The algorithm keeps only a limited number of nodes on the open list.
Components of Beam Search
A beam search takes three components as its input:
1. The problem usually represented as graph and contains a set of
nodes in which one or more of the nodes represents a goal.
2. The set of heuristic rules for pruning: are rules specific to the
problem domain and prune unfavorable nodes from memory
regarding the problem domain.
3. A memory with a limited available capacity: the memory is where
the "beam" is stored. When the memory is full and a node is to be added to the
beam, the most costly node is deleted, so that the memory limit
is not exceeded.
• Time Complexity of Beam Search
• The time complexity of the Beam Search algorithm depends on the following things, such
as:
• The accuracy of the heuristic function.
• In the worst case, the heuristic function leads Beam Search to the deepest level in the
search tree.
• The worst-case time = O(B*m)
B is the beam width, and m is the maximum depth of any path in the search tree.
• Space Complexity of Beam Search
• The space complexity of the Beam Search algorithm depends on the following things, such
as:
• Beam Search's memory consumption is its most desirable trait.
• Since the algorithm only stores B nodes at each level in the search tree.
• The worst-case space complexity = O(B*m)
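A minimal sketch of beam search under these descriptions; the graph and heuristic values are illustrative assumptions, and only the `beam_width` best successors survive each level:

```python
def beam_search(start, goal, neighbors, h, beam_width=2):
    """Beam search sketch: at each level, keep only the `beam_width`
    most promising nodes (by heuristic h) on the open list."""
    beam = [start]
    visited = {start}
    while beam:
        successors = []
        for node in beam:
            for nxt in neighbors(node):
                if nxt == goal:
                    return True            # goal reached
                if nxt not in visited:
                    visited.add(nxt)
                    successors.append(nxt)
        # Prune: keep only the beam_width most promising nodes.
        successors.sort(key=h)
        beam = successors[:beam_width]
    return False                           # beam emptied without a goal

# Usage on an assumed toy graph with assumed heuristic values.
edges = {"S": ["A", "B", "C"], "A": ["D"], "B": ["G"], "C": [], "D": [], "G": []}
h = {"S": 3, "A": 2, "B": 1, "C": 2, "D": 9, "G": 0}
print(beam_search("S", "G", lambda s: edges[s], lambda s: h[s]))  # True
```

Note that pruning is what makes beam search incomplete: if the goal lies only beyond a pruned node, it will never be found.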
Genetic algorithm
A genetic algorithm is an adaptive heuristic search algorithm inspired by "Darwin's theory of
evolution in Nature."
They are commonly used to generate high-quality solutions for optimization problems and search
problems.
Basic terminologies
○ Population: The population is the subset of all possible or probable solutions that can
solve the given problem.
○ Chromosome: A chromosome is one of the solutions in the population for the given problem;
a collection of genes makes up a chromosome.
○ Gene: A gene is a single element of a chromosome.
Search space
The population of individuals is maintained within the search space. Each individual
represents a solution in the search space for the given problem. Each individual is coded as
a finite-length vector of components, analogous to a chromosome; these variable
components are analogous to genes. Thus a chromosome (individual) is composed
of several genes (variable components).
○ Fitness Function: The fitness function is used to determine an individual's fitness level in the
population, i.e. the ability of an individual to compete with other individuals. In every
iteration, individuals are evaluated based on their fitness.
○ Genetic Operators: In a genetic algorithm, the best individuals mate to produce offspring
better than the parents. Genetic operators play the role of changing the genetic composition of
the next generation.
○ Selection: After calculating the fitness of every individual in the population, a selection process is
used to determine which individuals in the population will get to reproduce and
produce the offspring that will form the next generation.
Genetic algorithms are based on an analogy with the genetic structure and behaviour of
chromosomes in a population.
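These terms fit together as in the following toy GA for the OneMax problem (maximize the number of 1-genes). The population size, mutation rate, and tournament selection are illustrative choices, not prescribed by the text:

```python
import random

def genetic_algorithm(gene_length=12, pop_size=20, generations=100):
    """Toy GA for OneMax, illustrating population, chromosome, gene,
    fitness function, selection, crossover, and mutation."""
    random.seed(0)                            # deterministic for the demo
    def fitness(chrom):                       # fitness function
        return sum(chrom)
    def select(pop):                          # tournament selection (size 3)
        return max(random.sample(pop, 3), key=fitness)
    pop = [[random.randint(0, 1) for _ in range(gene_length)]
           for _ in range(pop_size)]          # initial population
    for _ in range(generations):
        next_gen = []
        for _ in range(pop_size):
            p1, p2 = select(pop), select(pop)
            cut = random.randrange(1, gene_length)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.1:                # mutation: flip one gene
                i = random.randrange(gene_length)
                child[i] = 1 - child[i]
            next_gen.append(child)
        pop = next_gen
    return max(pop, key=fitness)

best = genetic_algorithm()
print(sum(best))  # should converge to the all-ones chromosome (fitness 12)
```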
Belief State
• When the environment is partially observable, and the agent doesn't
know for sure what state it is in; and
• when the environment is nondeterministic, the agent doesn't know what
state it transitions to after taking an action
An agent might think: "I'm either in state s1 or s2, and if I do action a, I will end up
in state s2, s4, or s5."
• The set of physical states that the agent believes are possible is called a belief state.
Conditional Plan
● In partially observable and nondeterministic environments, the solution
to a problem is no longer a sequence,
● but rather a conditional plan (sometimes called a contingency plan or a
strategy)
● that specifies what to do, depending on what percepts the agent receives,
while executing the plan.
AO* algorithm
The AO* algorithm performs best-first search. It divides a given difficult
problem into a smaller group of problems that are then solved using the AND-OR graph
concept.
AND OR graphs are specialized graphs that are used in problems that can be divided into
smaller problems.
The AND side of the graph represents a set of tasks that must be completed to achieve the
main goal, while the OR side of the graph represents different methods for accomplishing the
same main goal.
Working of AO* algorithm:
● The A* and AO* algorithms both work on best-first search.
● Both are informed searches and work on given heuristic values.
● A* always gives the optimal solution, but AO* does not guarantee an optimal solution.
● Once AO* finds a solution, it does not explore all possible paths, whereas A* explores all paths.
● When compared to the A* algorithm, the AO* algorithm uses less memory.
● Unlike the A* algorithm, the AO* algorithm cannot go into an endless loop.
In the examples, the value shown at each node is its heuristic value h(n), and each edge length is
taken as 1.
The Erratic
(Unpredictable)
Vacuum World
● For example, in the erratic vacuum world, the Suck action in state 1
cleans up either just the current location, or both locations:
● RESULTS(1, Suck) = {5, 7}
Conditional Plan
● In the erratic vacuum world, a solution is a conditional plan (not a fixed
action sequence).
● A conditional plan can contain if-then-else steps.
● [Suck, if State = 5 then [Right, Suck] else [ ]].
● Here, solutions are trees rather than sequences.
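The plan above can be executed against a sketch of the erratic vacuum world. The tuple encoding (location, dirt-left, dirt-right) is an assumption, with state 1 = ("L", True, True), state 5 = ("L", False, True), and state 7 = ("L", False, False):

```python
import random

# State = (loc, dirt_left, dirt_right); state 1 = ("L", True, True),
# state 5 = ("L", False, True), state 7 = ("L", False, False).
def erratic_result(state, action):
    loc, dl, dr = state
    if action == "Right": return ("R", dl, dr)
    if action == "Left":  return ("L", dl, dr)
    # Erratic Suck on a dirty square: cleans it, and sometimes the other too.
    both = random.random() < 0.5
    if loc == "L": return ("L", False, dr and not both)
    return ("R", dl and not both, False)

def execute(plan, state):
    """Run a conditional plan: plain actions, or ("if", test, then, else)."""
    for step in plan:
        if isinstance(step, tuple):          # conditional branch
            _, test, then_plan, else_plan = step
            state = execute(then_plan if state == test else else_plan, state)
        else:
            state = erratic_result(state, step)
    return state

# [Suck, if State = 5 then [Right, Suck] else [ ]]
plan = ["Suck", ("if", ("L", False, True), ["Right", "Suck"], [])]
final = execute(plan, ("L", True, True))     # start in state 1
print(final in {("R", False, False), ("L", False, False)})  # True: both clean
```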
AND-OR Search Trees
● This agent stores its map in a table, result[s, a], that records the state
resulting from executing action a in state s.
● When exploring the map, the difficulty comes when the agent has tried all the
actions in a state.
● To avoid dead ends,
● the algorithm keeps another table that lists, for each state, the parent
state to which the agent has not yet backtracked.
● If the agent has run out of states to which it can backtrack, then its
search is complete.
● It keeps just one current state in memory, and it can do random walk to
explore the environment.
● A random walk simply selects at random one of the available actions from the
current state;
● Preference can be given to actions that have not yet been tried.
● This process will continue until it finds a goal or completes its exploration.
● Advantage
○ It is guaranteed to succeed, provided that the space is finite and safely explorable.
● Drawback
○ The process can be very slow.
● There are environments in which a random walk will take exponentially many
steps to find the goal: for example, a chain of states in which, for each state
in the top row except S, backward progress is twice as likely as forward
progress.
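A random walk of this kind can be sketched on a small corridor of states (an assumed example); it eventually reaches the goal, though possibly after many steps:

```python
import random

def random_walk(start, goal, neighbors, max_steps=10_000):
    """Random-walk exploration: repeatedly pick a random available
    action until a goal is found (or we give up)."""
    random.seed(1)                       # deterministic for the demo
    state, steps = start, 0
    while state != goal and steps < max_steps:
        state = random.choice(neighbors[state])
        steps += 1
    return state == goal, steps

# A small corridor of 6 cells: from each cell you can step left or right.
corridor = {i: [max(0, i - 1), min(5, i + 1)] for i in range(6)}
found, steps = random_walk(0, 5, corridor)
print(found)  # True: the goal is eventually reached
```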
Learning Real-time A* (LRTA*) search Algorithm