AI Unit-III
Step 2: First, we find the utility values for the Maximizer. Its initial value is -∞, so each terminal-state value is compared with the Maximizer's initial value and the higher value is kept. The algorithm finds the maximum among them all.
For node D: max(-1, -∞) = -1, then max(-1, 4) = 4
For node E: max(2, -∞) = 2, then max(2, 6) = 6
For node F: max(-3, -∞) = -3, then max(-3, -5) = -3
For node G: max(0, -∞) = 0, then max(0, 7) = 7
Step 3: In the next step, it is the Minimizer's turn, so it compares each node's value with +∞ and determines the third-layer node values.
For node B: min(4, 6) = 4
For node C: min(-3, 7) = -3
Step 4: Now it is the Maximizer's turn again; it chooses the maximum of all node values and finds the value for the root node. This game tree has only four layers, so we reach the root node immediately, but in real games there will be many more layers.
For node A: max(4, -3) = 4
That was the complete workflow of the minimax two-player game.
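The walkthrough above can be sketched in a few lines of Python; the nested lists below encode the example's terminal values under D, E, F and G:

```python
# Minimax on the example tree: A (Max) -> B, C (Min) -> D, E, F, G (Max) -> leaves.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):      # terminal state: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Leaves exactly as in the walkthrough: D=[-1,4], E=[2,6], F=[-3,-5], G=[0,7].
tree = [[[-1, 4], [2, 6]],    # subtree under B
        [[-3, -5], [0, 7]]]   # subtree under C

print(minimax(tree, True))    # value at root A: 4
```

Running it reproduces the steps: D, E, F, G evaluate to 4, 6, -3, 7; B and C to 4 and -3; and root A to 4.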
Properties of Mini-Max algorithm:
Complete- The Min-Max algorithm is complete. It will definitely find a solution (if one exists) in a finite search tree.
Optimal- The Min-Max algorithm is optimal if both opponents play optimally.
Time complexity- Since it performs a DFS of the game tree, the time complexity of the Min-Max algorithm is O(b^m), where b is the branching factor of the game tree and m is the maximum depth of the tree.
Space complexity- The space complexity of the minimax algorithm is, like that of DFS, O(bm).
Limitation of the minimax algorithm:
The main drawback of the minimax algorithm is that it becomes very slow for complex games such as chess and Go. These games have a huge branching factor, and the player has many choices to consider. This limitation of the minimax algorithm can be mitigated by alpha-beta pruning, which is discussed in the next topic.
Alpha-Beta Pruning
Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization
technique for the minimax algorithm.
As we have seen, the number of game states the minimax search algorithm has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can cut it in half. There is a technique by which we can compute the correct minimax decision without checking every node of the game tree; this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta algorithm.
Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree's leaves but also entire sub-trees.
The two parameters can be defined as:
1. Alpha: The best (highest-value) choice we have found so far at any point along the path
of Maximizer. The initial value of alpha is -∞.
2. Beta: The best (lowest-value) choice we have found so far at any point along the path of
Minimizer. The initial value of beta is +∞.
Alpha-beta pruning applied to a standard minimax algorithm returns the same move as the standard algorithm, but it removes all the nodes that do not really affect the final decision and only slow the algorithm down. By pruning these nodes, it makes the algorithm fast.
Note: To better understand this topic, kindly study the minimax algorithm.
Step 2: At node D, the value of α is calculated, as it is Max's turn. The value of α is compared first with 2 and then with 3; max(2, 3) = 3 becomes the value of α at node D, and the node value will also be 3.
Step 3: The algorithm now backtracks to node B, where the value of β changes, as it is Min's turn. Now β = +∞ is compared with the available successor node's value: min(∞, 3) = 3, hence at node B, α = -∞ and β = 3.
In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed down as well.
Step 4: At node E, Max takes its turn, and the value of alpha changes. The current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3. Since α >= β, the right successor of E is pruned, the algorithm does not traverse it, and the value at node E will be 5.
Step 5: In the next step, the algorithm backtracks the tree again, from node B to node A. At node A, the value of alpha changes to the maximum available value, 3, since max(-∞, 3) = 3, while β = +∞. These two values are now passed to the right successor of A, which is node C.
At node C, α = 3 and β = +∞, and the same values are passed on to node F.
Step 6: At node F, the value of α is again compared with the left child, which is 0: max(3, 0) = 3. It is then compared with the right child, which is 1: max(3, 1) = 3, so α remains 3, but the node value of F becomes 1.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of beta changes, as it is compared with 1: min(∞, 1) = 1. Now at C, α = 3 and β = 1, which again satisfies the condition α >= β, so the next child of C, which is G, is pruned and the algorithm does not compute its entire sub-tree.
Step 8: C now returns the value 1 to A, where the best value for A is max(3, 1) = 3. The final game tree shows which nodes were computed and which were never computed. Hence the optimal value for the maximizer is 3 in this example.
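Steps 2-8 correspond to the following sketch. The leaf values are the ones used in the walkthrough; the pruned leaves (E's right child and the whole of G's subtree) are given arbitrary values here, since the algorithm never examines them:

```python
# Alpha-beta pruning on the example tree from Steps 2-8.
def alphabeta(node, maximizing, alpha=float('-inf'), beta=float('inf')):
    if isinstance(node, (int, float)):      # terminal node: return its value
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # cut-off: prune remaining children
                break
        return value
    value = float('inf')
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:                   # cut-off condition α >= β
            break
    return value

# A (Max) -> B, C (Min); leaves: D=[2,3], E=[5,9], F=[0,1], G=[7,5].
# The 9 under E and all of G are never visited, exactly as in the walkthrough.
tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
print(alphabeta(tree, True))                # optimal value for the maximizer: 3
```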
Move Ordering in Alpha-Beta pruning:
The effectiveness of alpha-beta pruning is highly dependent on the order in which each node is
examined. Move order is an important aspect of alpha-beta pruning.
It can be of two types:
Worst ordering: In some cases, the alpha-beta pruning algorithm does not prune any leaves of the tree and works exactly like the minimax algorithm. It also consumes more time because of the alpha-beta bookkeeping; such an ordering is called worst ordering. In this case, the best move occurs on the right side of the tree. The time complexity for such an order is O(b^m).
Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a lot of pruning happens in the tree and the best moves occur on the left side of the tree. Since DFS is applied, the left side of the tree is searched first, reaching twice the depth of the minimax algorithm in the same amount of time. The complexity in ideal ordering is O(b^(m/2)).
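To get a feel for the difference, a quick back-of-the-envelope comparison (illustrative numbers only, assuming a chess-like branching factor b = 35 and depth m = 6):

```python
# Worst vs ideal move ordering: number of nodes examined, roughly.
b, m = 35, 6                 # illustrative chess-like branching factor and depth

worst = b ** m               # O(b^m): no pruning at all
ideal = b ** (m // 2)        # O(b^(m/2)): best moves always examined first

print(worst)                 # 1838265625
print(ideal)                 # 42875 -> the same budget searches twice as deep
```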
Rules to find good ordering:
Following are some rules to find good ordering in alpha-beta pruning:
Try the best move from the shallowest node first.
Order the nodes in the tree such that the best nodes are checked first.
Use domain knowledge while finding the best move. E.g., for chess, try this order: captures first, then threats, then forward moves, then backward moves.
We can bookkeep the states, as there is a possibility that states may repeat.
What is planning in AI?
Planning in Artificial Intelligence is about the decision-making tasks performed by robots or computer programs to achieve a specific goal.
The execution of planning is about choosing a sequence of actions with a high likelihood of completing the specific task.
Blocks-World planning problem
The blocks-world problem is known as the Sussman Anomaly.
Noninterleaved planners of the early 1970s were unable to solve this problem, hence it is considered anomalous.
When two subgoals G1 and G2 are given, a noninterleaved planner produces either a plan for G1 concatenated with a plan for G2, or vice versa.
In the blocks-world problem, three blocks labeled 'A', 'B', and 'C' are allowed to rest on a flat surface. The given condition is that only one block can be moved at a time to achieve the goal.
The start state and goal state are shown in the following diagram.
A planning system must be able to:
Choose the best rule to apply next, based on the best available heuristics.
Apply the chosen rule to compute the new problem state.
Detect when a solution has been found.
Detect dead ends so that they can be abandoned and the system's effort directed in more fruitful directions.
Detect when an almost correct solution has been found.
Goal stack planning
This is one of the most important planning algorithms, which is specifically used by STRIPS.
The stack is used in the algorithm to hold the actions and the goals to be satisfied. A knowledge base is used to hold the current state and the actions.
Goal stack is similar to a node in a search tree, where the branches are created if there is a
choice of an action.
The important steps of the algorithm are as stated below:
i. Start by pushing the original goal onto the stack. Repeat the following until the stack becomes empty: if the stack top is a compound goal, push its unsatisfied subgoals onto the stack.
ii. If the stack top is a single unsatisfied goal, replace it with an action and push the action's preconditions onto the stack so they can be satisfied.
iii. If the stack top is an action, pop it from the stack, execute it, and change the knowledge base according to the effects of the action.
iv. If the stack top is a satisfied goal, pop it from the stack.
Non-linear planning
Non-linear planning uses a goal set rather than a goal stack, and its search space includes all possible subgoal orderings. It handles goal interactions by interleaving.
It has a larger search space, since all possible goal orderings are taken into consideration.
It is a complex algorithm to understand.
Algorithm:
1. Choose a goal 'g' from the goalset.
2. If 'g' does not match the state, then
     Choose an operator 'o' whose add-list matches goal 'g'
     Push 'o' on the opstack
     Add the preconditions of 'o' to the goalset
3. While all preconditions of the operator on top of the opstack are met in the state:
     Pop operator 'o' from the top of the opstack
     state = apply(o, state)
     plan = [plan; o]
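The goalset/opstack loop above can be sketched in Python. This is a minimal illustration, not a full non-linear planner: the two STRIPS-style operators PICKUP(A) and STACK(A,B) are made up for the example, and backtracking over goal and operator choices is omitted.

```python
# Sketch of the goalset/opstack algorithm: state is a set of predicates,
# each operator is a dict with name, preconditions, add-list and delete-list.
def plan(state, goals, operators):
    state = set(state)
    goalset = list(goals)
    opstack, actions = [], []
    while goalset or opstack:
        # Step 3: pop and apply any operator whose preconditions are now met.
        while opstack and set(opstack[-1]['preconds']) <= state:
            o = opstack.pop()
            state = (state - set(o['delete'])) | set(o['add'])
            goalset = [g for g in goalset if g not in o['preconds']]
            actions.append(o['name'])
        if not goalset:
            break
        g = goalset.pop()                  # Step 1: choose a goal g
        if g in state:                     # already satisfied
            continue
        # Step 2: choose an operator whose add-list matches g.
        o = next(op for op in operators if g in op['add'])
        opstack.append(o)
        goalset.extend(o['preconds'])      # preconditions become new goals

    return actions, state

ops = [{'name': 'PICKUP(A)',
        'preconds': ['CLEAR(A)', 'ONTABLE(A)', 'ARMEMPTY'],
        'add': ['HOLDING(A)'], 'delete': ['ONTABLE(A)', 'ARMEMPTY']},
       {'name': 'STACK(A,B)',
        'preconds': ['CLEAR(B)', 'HOLDING(A)'],
        'add': ['ON(A,B)', 'ARMEMPTY'], 'delete': ['HOLDING(A)', 'CLEAR(B)']}]

steps, final = plan({'CLEAR(A)', 'CLEAR(B)', 'ONTABLE(A)', 'ONTABLE(B)', 'ARMEMPTY'},
                    ['ON(A,B)'], ops)
print(steps)    # ['PICKUP(A)', 'STACK(A,B)']
```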
This is how the problem goes: there is a table on which some blocks are placed. Some blocks may or may not be stacked on other blocks. We have a robot arm to pick up or put down the blocks. The robot arm can move only one block at a time, and no other block may be stacked on top of the block which is to be moved by the robot arm.
Our aim is to change the configuration of the blocks from the Initial State to the Goal State, both of
which have been specified in the diagram above.
Goal Stack Planning is one of the earliest methods in artificial intelligence in which we
work backwards from the goal state to the initial state.
We start at the goal state and try to fulfil the preconditions required to achieve it. These preconditions in turn have their own preconditions, which must be satisfied first. We keep solving these “goals” and “sub-goals” until we finally arrive at the Initial State. We make use of a stack to hold these goals that need to be fulfilled, as well as the actions that we need to perform for the same.
Apart from the “Initial State” and the “Goal State”, we maintain a “World State” configuration as
well. Goal Stack uses this world state to work its way from Goal State to Initial State. World State on
the other hand starts off as the Initial State and ends up being transformed into the Goal state.
At the end of this algorithm we are left with an empty stack and a sequence of actions which takes us from the Initial State to the Goal State.
Predicates can be thought of as a statement which helps us convey the information about a
configuration in Blocks World.
Given below is the list of predicates as well as their intended meaning:
1. ON(A,B) : Block A is on block B
2. ONTABLE(A) : Block A is on the table
3. CLEAR(A) : Nothing is on top of block A
4. HOLDING(A) : The robot arm is holding block A
5. ARMEMPTY : The robot arm is not holding anything
Using these predicates, we represent the Initial State and the Goal State in our example like this:
(diagram: predicate lists for the Initial State and the Goal State)
Thus a configuration can be thought of as a list of predicates describing the current scenario.
All four operations (STACK, UNSTACK, PICKUP, and PUTDOWN) have certain preconditions which must be satisfied before they can be performed.
These preconditions are represented in the form of predicates.
The effect of these operations is represented using two lists, ADD and DELETE. The DELETE list contains the predicates which will cease to be true once the operation is performed. The ADD list, on the other hand, contains the predicates which will become true once the operation is performed.
The Precondition, ADD and DELETE lists for each operation are rather intuitive and are listed below.
For example, to perform the STACK(X,Y) operation, i.e. to stack block X on top of block Y, no other block should be on top of Y (CLEAR(Y)) and the robot arm should be holding block X (HOLDING(X)).
Once the operation is performed, these predicates cease to be true, so they are included in the DELETE list as well. (Note: the Precondition and DELETE lists need not be exactly the same.)
On the other hand, once the operation is performed, the robot arm will be free (ARMEMPTY) and block X will be on top of Y (ON(X,Y)), so these predicates go into the ADD list.
The other three operators follow similar logic, and this part is the cornerstone of Goal Stack Planning.
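Written out as data, the four operators' lists look as follows (a sketch following the usual STRIPS-style Blocks World formulation; details such as whether STACK also adds CLEAR(X) vary slightly between textbooks):

```python
# Precondition / ADD / DELETE lists for the four Blocks World operators
# (standard STRIPS-style formulation; X and Y stand for arbitrary blocks).
OPERATORS = {
    'STACK(X,Y)':   {'preconds': ['CLEAR(Y)', 'HOLDING(X)'],
                     'add':      ['ARMEMPTY', 'ON(X,Y)'],
                     'delete':   ['CLEAR(Y)', 'HOLDING(X)']},
    'UNSTACK(X,Y)': {'preconds': ['ON(X,Y)', 'CLEAR(X)', 'ARMEMPTY'],
                     'add':      ['HOLDING(X)', 'CLEAR(Y)'],
                     'delete':   ['ON(X,Y)', 'ARMEMPTY']},
    'PICKUP(X)':    {'preconds': ['CLEAR(X)', 'ONTABLE(X)', 'ARMEMPTY'],
                     'add':      ['HOLDING(X)'],
                     'delete':   ['ONTABLE(X)', 'ARMEMPTY']},
    'PUTDOWN(X)':   {'preconds': ['HOLDING(X)'],
                     'add':      ['ONTABLE(X)', 'ARMEMPTY'],
                     'delete':   ['HOLDING(X)']},
}

# Effect of an operation: DELETE-list predicates cease to hold, ADD-list begin to.
def apply_op(state, op):
    return (set(state) - set(op['delete'])) | set(op['add'])

print(apply_op({'CLEAR(Y)', 'HOLDING(X)'}, OPERATORS['STACK(X,Y)']))
```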
Hierarchical Planning in AI
Hierarchical planning is a planning method based on Hierarchical Task Network (HTN) or
HTN planning.
It combines ideas from Partial Order Planning & HTN Planning.
HTN planning is often formulated with a single “top level” action called Act, where the aim is
to find an implementation of Act that achieves the goal.
In HTN planning, the initial plan is viewed as a very high level description of what is to be
done.
This plan is refined by applying decomposition actions.
Each action decomposition reduces a higher-level action to a partially ordered set of lower-level actions.
This decomposition continues until only primitive actions remain in the plan.
Consider the example of a hierarchical plan to travel from a certain source to a destination.
(diagram: Hierarchical Plan to travel from a certain Source to Destination by Bus)
In the above hierarchical planner diagram, suppose we are Travelling from source “Mumbai”
to Destination “Goa”.
Then, you can plan how to travel: whether by Plane, Bus, or a Car. Suppose, you choose to
travel by “Bus”.
Then, the “Take-Bus” plan can be further broken down into a set of actions: Goto Mumbai Bus-stop, Buy-Ticket for Bus, Hop-on Bus, and Leave for Goa.
Now, each of the four actions in the previous point can be individually broken down. Take “Buy-Ticket for Bus”: it can be decomposed into Goto Bus-stop counter, Request Ticket, and Pay for Ticket.
Thus, each of these actions can be decomposed further, until we reach the level of actions that
can be executed without deliberation to generate the required motor control sequences.
Here, we also use the concept of a “One-level Partial Order Planner”: say, if you plan to take a trip, you need to decide on a location first. This can be done by a one-level planner as:
Switch on computer > Start web browser > Open Redbus website > Select date > Select class >
Select bus > … so on
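The bus-trip refinement described above can be sketched as a recursive decomposition over a method table (the task names are hypothetical and follow the example; any task without an entry in METHODS is treated as primitive):

```python
# HTN-style refinement: repeatedly decompose high-level actions via a method
# table until only primitive actions (no METHODS entry) remain in the plan.
METHODS = {
    'Travel(Mumbai, Goa)': ['Take-Bus'],
    'Take-Bus': ['Goto Mumbai Bus-stop', 'Buy-Ticket for Bus',
                 'Hop-on Bus', 'Leave for Goa'],
    'Buy-Ticket for Bus': ['Goto Bus-stop counter', 'Request Ticket',
                           'Pay for Ticket'],
}

def refine(task):
    if task not in METHODS:            # primitive action: executable as-is
        return [task]
    plan = []
    for subtask in METHODS[task]:      # one decomposition step
        plan.extend(refine(subtask))
    return plan

print(refine('Travel(Mumbai, Goa)'))
```

The result is the fully primitive plan: go to the bus stop, buy the ticket (itself three primitive steps), hop on, and leave for Goa.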
Advantages of Hierarchical Planning:
1. The key benefit of hierarchical structure is that, at each level of the hierarchy, the plan is reduced to a small number of activities at the next lower level, so the computational cost of finding the correct way to arrange those activities for the current problem is small.
2. HTN methods can create the very large plans required by many real-world applications.
3. Hierarchical structure makes it easy to fix problems in case things go wrong.
4. For complex problems hierarchical planning is much more efficient than single level
planning.
Disadvantages of Hierarchical Planning: