
SUBJECT: ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
Unit-II
UNIT II (8 Hrs)
Adversarial Search and Games: Game Theory, Optimal
Decisions in Games, Heuristic Alpha-Beta Tree Search,
Stochastic Games, Limitations of Game Search
Algorithms. Genetic Algorithms.
Adversarial Search and Games
Adversarial Search:
Adversarial search examines the problems that arise when we try to plan
ahead in a world where other agents are planning against us.
•In previous topics, we studied search strategies that involve only a single
agent aiming to find a solution, often expressed as a sequence of actions.
•But there are situations where more than one agent is searching for a
solution in the same search space; this situation usually occurs in game
playing.
•An environment with more than one agent is termed a multi-agent
environment, in which each agent is an opponent of the others and plays
against them. Each agent needs to consider the actions of the other agents and
the effect of those actions on its own performance.
•So, searches in which two or more players with conflicting goals explore the
same search space for a solution are called adversarial searches, often known
as games.
•Games are modeled as a search problem together with a heuristic evaluation
function; these are the two main factors which help to model and solve games in AI.
Types of Games in AI:
                        | Deterministic                  | Chance moves
Perfect information     | Chess, Checkers, Go, Othello   | Backgammon, Monopoly
Imperfect information   | Battleship, blind tic-tac-toe  | Bridge, Poker, Scrabble, nuclear war

•Perfect information: A game with perfect information is one in which agents can see the complete board. Agents have
all the information about the game, and they can also see each other's moves. Examples are Chess, Checkers, Go, etc.

•Imperfect information: If agents do not have all the information about the game and are not aware of what's going on,
such games are called games with imperfect information, e.g. blind tic-tac-toe, Battleship, Bridge, Poker, etc.

•Deterministic games: Deterministic games follow a strict set of rules, and there is no randomness associated with
them. Examples are Chess, Checkers, Go, tic-tac-toe, etc.

•Non-deterministic games: Non-deterministic games have various unpredictable events and a factor of chance or luck,
introduced by either dice or cards. They are random, and the outcome of each action is not fixed. Such games are also
called stochastic games.
Example: Backgammon, Monopoly, Poker, etc.
Formalization of the problem:
A game can be defined as a type of search problem in AI, formalized with the following
elements:

•Initial state: It specifies how the game is set up at the start.
•Player(s): It specifies which player has the move in a state.
•Actions(s): It returns the set of legal moves in a state.
•Result(s, a): The transition model, which specifies the state that results from making move a in state s.
•Terminal-Test(s): The terminal test is true if the game is over and false otherwise. States
where the game ends are called terminal states.
•Utility(s, p): A utility function gives the final numeric value for a game that ends in terminal
state s for player p. It is also called a payoff function. For Chess, the outcomes are a win, loss,
or draw, with payoff values +1, 0, or ½. For tic-tac-toe, the utility values are +1, -1, and 0.
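This formalization can be sketched as a small Python interface; the class and method names below are illustrative, not something given in the slides.

    class Game:
        def initial_state(self): ...        # how the game is set up at the start
        def player(self, s): ...            # which player has the move in state s
        def actions(self, s): ...           # set of legal moves in state s
        def result(self, s, a): ...         # transition model: state after move a in s
        def terminal_test(self, s): ...     # True iff the game is over in state s
        def utility(self, s, p): ...        # payoff for player p in terminal state s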
Game tree:
A game tree is a tree whose nodes are game states and whose edges are
the moves made by players. A game tree involves the initial state, the
actions function, and the result function.
Example: Tic-Tac-Toe game tree:
The following figure shows part of the game tree for the tic-tac-toe
game. Some key points of the game:
•There are two players, MAX and MIN.
•Players alternate turns, starting with MAX.
•MAX maximizes the result of the game tree.
•MIN minimizes the result.
Example Explanation:
•From the initial state, MAX has 9 possible moves as he starts first. MAX places x and MIN
places o, and both players play alternately until we reach a leaf node where one player has
three in a row or all squares are filled.
•For each node, both players compute the minimax value, which is the best achievable
utility against an optimal adversary.
•Suppose both players know tic-tac-toe well and play their best. Each player does his best to
prevent the other from winning. MIN acts against MAX in the game.
•So in the game tree we have a layer for MAX and a layer for MIN, and each layer is called a
ply. MAX places x, then MIN puts o to prevent MAX from winning, and the game continues
until a terminal node.
•In the end, either MIN wins, MAX wins, or it's a draw. This game tree is the whole search
space of possibilities when MIN and MAX play tic-tac-toe taking turns alternately.
Hence, adversarial search with the minimax procedure works as follows:
•It aims to find the optimal strategy for MAX to win the game.
•It follows the approach of depth-first search.
•In the game tree, the optimal leaf node could appear at any depth of the tree.
•It descends to the terminal nodes and then propagates the minimax values
back up the tree.
In a given game tree, the optimal strategy can be determined from the
minimax value of each node, written MINIMAX(n). MAX prefers to move to
a state of maximum value and MIN prefers to move to a state of minimum
value, so:
MINIMAX(s) = UTILITY(s) if s is a terminal state; the maximum over actions
a of MINIMAX(RESULT(s, a)) if it is MAX's turn in s; and the minimum over
actions a of MINIMAX(RESULT(s, a)) if it is MIN's turn in s.
Mini-Max Algorithm in Artificial
Intelligence
•The mini-max algorithm is a recursive (backtracking) algorithm used in decision-making and game
theory. It provides an optimal move for the player, assuming that the opponent also plays optimally.
•The mini-max algorithm uses recursion to search through the game tree.
•The min-max algorithm is mostly used for game playing in AI, such as Chess, Checkers, tic-tac-toe, Go, and
various other two-player games. The algorithm computes the minimax decision for the current state.
•In this algorithm, two players play the game; one is called MAX and the other is called MIN.
•The two players play against each other: each tries to get the maximum benefit while leaving the opponent
with the minimum benefit.
•Both players are opponents of each other, where MAX will select the maximized value and
MIN will select the minimized value.
•The minimax algorithm performs a depth-first search to explore the complete game tree.
•The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backtracks up the
tree as the recursion unwinds.
Pseudo-code for MinMax Algorithm:

function minimax(node, depth, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node
    if maximizingPlayer then                     // for Maximizer player
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth-1, false)
            maxEva = max(maxEva, eva)            // maximum of the values
        return maxEva
    else                                         // for Minimizer player
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth-1, true)
            minEva = min(minEva, eva)            // minimum of the values
        return minEva

Initial call: minimax(node, 3, true)
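Below is a minimal runnable Python version of the pseudocode above; the nested-list tree encoding is an illustrative assumption (a leaf is its static evaluation, an inner list holds a node's children), with leaf values taken from the worked example that follows.

    import math

    def minimax(node, depth, maximizing_player):
        # Terminal node or depth cut-off: return the static evaluation.
        if depth == 0 or not isinstance(node, list):
            return node
        if maximizing_player:
            max_eva = -math.inf
            for child in node:
                max_eva = max(max_eva, minimax(child, depth - 1, False))
            return max_eva
        else:
            min_eva = math.inf
            for child in node:
                min_eva = min(min_eva, minimax(child, depth - 1, True))
            return min_eva

    # Tree from the worked example: A -> B, C; B -> D, E; C -> F, G;
    # leaf pairs D = (-1, 4), E = (2, 6), F = (-3, -5), G = (0, 7).
    tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
    print(minimax(tree, 3, True))   # -> 4, matching the value of node A below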
Working of Min-Max Algorithm:
•The working of the minimax algorithm can be easily described using an example. Below
we take an example game tree representing a two-player game.
•In this example there are two players, one called Maximizer and the other called
Minimizer.
•Maximizer will try to get the maximum possible score, and Minimizer will try to get the
minimum possible score.
•The algorithm applies DFS, so in this game tree we have to go all the way down to the
leaves to reach the terminal nodes.
•At the terminal nodes the terminal values are given, so we compare those values and
backtrack up the tree until the initial state is reached. Following are the main steps involved in
solving the two-player game tree:
Step 1: In the first step, the algorithm generates the entire game tree and applies the utility function to get the utility
values for the terminal states. In the tree diagram below, let A be the initial state of the tree. Suppose the maximizer
takes the first turn, with worst-case initial value -∞, and the minimizer takes the next turn, with worst-case initial
value +∞.
Step 2: Now we first find the utility values for the Maximizer. Its initial value is -∞, so we compare each terminal
value with the Maximizer's initial value and take the maximum among them:

•For node D: max(-1, 4) = 4
•For node E: max(2, 6) = 6
•For node F: max(-3, -5) = -3
•For node G: max(0, 7) = 7
Step 3: In the next step it is the minimizer's turn, so it compares all node values with +∞ and determines the
third-layer node values:
•For node B: min(4, 6) = 4
•For node C: min(-3, 7) = -3
Step 4: Now it is the Maximizer's turn again; it chooses the maximum of all node values to find the value of the root
node. In this game tree there are only 4 layers, so we reach the root node immediately, but in real games there will
be more than 4 layers.
•For node A: max(4, -3) = 4

That is the complete workflow of the minimax two-player game.
Properties of Mini-Max algorithm:
•Complete: The min-max algorithm is complete. It will definitely find a solution (if one
exists) in a finite search tree.
•Optimal: The min-max algorithm is optimal if both opponents play optimally.
•Time complexity: Since it performs DFS on the game tree, the time complexity of the
min-max algorithm is O(b^m), where b is the branching factor of the game tree and m is the
maximum depth of the tree.
•Space complexity: The space complexity of the mini-max algorithm is similar to DFS,
which is O(bm).
Limitation of the minimax algorithm:
The main drawback of the minimax algorithm is that it gets really slow for complex
games such as Chess, Go, etc. These games have a huge branching factor, and the
player has many choices to consider. This limitation can be mitigated by
alpha-beta pruning, which is discussed in the next topic.
Heuristic Alpha-Beta Tree Search Algorithm:

The Heuristic Alpha-Beta Tree Search algorithm is a recursive algorithm that explores the
game tree to find the best move for the AI agent. It uses alpha-beta pruning to discard
branches that are guaranteed to be worse than previously explored branches. This
optimization significantly reduces the number of nodes to explore, making the algorithm
more efficient. A heuristic evaluation function is used to estimate the value of each game
state and prioritize exploring more promising branches.
Alpha-Beta Pruning
•Alpha-beta pruning is a modified version of the minimax algorithm; it is an optimization technique for the minimax
algorithm.
•As we have seen, the number of game states the minimax search algorithm has to examine is exponential in the
depth of the tree. We cannot eliminate the exponent, but we can effectively cut it in half: there is a technique by
which we can compute the correct minimax decision without checking every node of the game tree, and this technique
is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-
beta pruning. It is also called the alpha-beta algorithm.
•Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only leaves but also
entire sub-trees.
•The two parameters are defined as:
• Alpha: the best (highest-value) choice we have found so far at any point along the path of the Maximizer. The
initial value of alpha is -∞.
• Beta: the best (lowest-value) choice we have found so far at any point along the path of the Minimizer. The
initial value of beta is +∞.
•Applying alpha-beta pruning to a standard minimax algorithm returns the same move as the standard algorithm, but
it removes the nodes that do not really affect the final decision and only make the algorithm slow. By pruning
these nodes, it makes the algorithm fast.
Pseudo-code for Alpha-Beta Pruning:

function minimax(node, depth, alpha, beta, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node
    if maximizingPlayer then                     // for Maximizer player
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth-1, alpha, beta, false)
            maxEva = max(maxEva, eva)
            alpha = max(alpha, maxEva)
            if beta <= alpha then
                break
        return maxEva
    else                                         // for Minimizer player
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth-1, alpha, beta, true)
            minEva = min(minEva, eva)
            beta = min(beta, eva)
            if beta <= alpha then
                break
        return minEva
Working of Alpha-Beta Pruning:
Let's take an example of a two-player search tree to understand the working of alpha-beta pruning.
Step 1: In the first step, the MAX player makes the first move from node A, where α = -∞ and β = +∞. These values of
alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values to its child D.
Step 2: At node D, the value of α is calculated, as it is MAX's turn. The value of α is compared first with 2 and then
with 3; max(2, 3) = 3 becomes the value of α at node D, and the node value will also be 3.

Step 3: The algorithm now backtracks to node B, where the value of β changes, as this is MIN's turn. Now β = +∞ is
compared with the available successor node value, i.e. min(∞, 3) = 3; hence at node B, α = -∞ and β = 3.

In the next step, the algorithm traverses the next successor of node B,
which is node E, and the values α = -∞ and β = 3 are passed down as well.
Step 4: At node E, MAX takes its turn, and the value of alpha changes. The current value of alpha is compared with 5,
so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3. Since α >= β, the right successor of E is pruned and the algorithm
will not traverse it; the value at node E will be 5.
Step 5: Next, the algorithm backtracks the tree again, from node B to node A. At node A, alpha changes to the
maximum available value, 3, since max(-∞, 3) = 3, and β = +∞. These two values are now passed to the right successor of
A, which is node C.
At node C, α = 3 and β = +∞, and the same values are passed on to node F.
Step 6: At node F, the value of α is again compared with the left child, which is 0: max(3, 0) = 3. It is then compared
with the right child, which is 1: max(3, 1) = 3, so α remains 3, but the node value of F becomes 1.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of beta changes: it is compared
with 1, so min(∞, 1) = 1. Now at C, α = 3 and β = 1, which again satisfies the condition α >= β, so the next child of C,
which is G, is pruned, and the algorithm does not compute the entire sub-tree under G.
Step 8: C now returns the value 1 to A, where the best value for A is max(3, 1) = 3. The final game tree shows which
nodes were computed and which were never computed. Hence, the optimal value for the maximizer is 3 in
this example.
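The same steps can be reproduced with a minimal runnable Python sketch of the pseudocode above; the nested-list tree is illustrative, and the leaves hidden by pruning (9 under E; 7 and 5 under G) are arbitrary assumptions, since the slides never reveal them.

    import math

    def alphabeta(node, depth, alpha, beta, maximizing_player):
        # Terminal node or depth cut-off: return the static evaluation.
        if depth == 0 or not isinstance(node, list):
            return node
        if maximizing_player:
            max_eva = -math.inf
            for child in node:
                eva = alphabeta(child, depth - 1, alpha, beta, False)
                max_eva = max(max_eva, eva)
                alpha = max(alpha, max_eva)
                if beta <= alpha:   # beta cut-off: MIN above will never allow this path
                    break
            return max_eva
        else:
            min_eva = math.inf
            for child in node:
                eva = alphabeta(child, depth - 1, alpha, beta, True)
                min_eva = min(min_eva, eva)
                beta = min(beta, eva)
                if beta <= alpha:   # alpha cut-off: MAX above already has a better option
                    break
            return min_eva

    # Tree from the example: D = (2, 3), E = (5, 9), F = (0, 1), G = (7, 5).
    tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
    print(alphabeta(tree, 3, -math.inf, math.inf, True))   # -> 3, as in Step 8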
Move Ordering in Alpha-Beta Pruning:
The effectiveness of alpha-beta pruning depends heavily on the order in which nodes
are examined. Move ordering is an important aspect of alpha-beta pruning.
It can be of two types:
•Worst ordering: In some cases the alpha-beta pruning algorithm does not prune any
leaves of the tree and works exactly like the minimax algorithm. It then also consumes
more time because of the alpha-beta bookkeeping; such an ordering is called worst ordering. In
this case, the best move occurs on the right side of the tree. The time complexity for such an
ordering is O(b^m).
•Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a lot of pruning
happens in the tree and the best moves occur on the left side of the tree. We apply DFS, so it
searches the left of the tree first and can go twice as deep as the minimax algorithm in the same
amount of time. The complexity for ideal ordering is O(b^(m/2)).
Rules to find good ordering:
Following are some rules to find good ordering in alpha-beta pruning (a small sketch follows this list):
•Make the best move occur at the shallowest node possible.
•Order the nodes in the tree such that the best nodes are checked first.
•Use domain knowledge while finding the best move. E.g. for Chess, try this order:
captures first, then threats, then forward moves, then backward moves.
•We can bookkeep the states, as there is a possibility that states may repeat.
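As a small sketch of the second rule, the children of a node can be sorted by an evaluation heuristic before the alpha-beta loop visits them; heuristic here is an assumed user-supplied scoring function (higher = better for MAX), not something defined in the slides.

    def ordered_children(node, heuristic, maximizing_player):
        # Explore likely-best moves first (descending scores for MAX, ascending
        # for MIN) so that beta <= alpha cut-offs happen as early as possible.
        return sorted(node, key=heuristic, reverse=maximizing_player)

    # Usage inside alphabeta: for child in ordered_children(node, h, True): ...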
Stochastic Games
Stochastic games include an element of chance, such as dice rolls in Backgammon or shuffled cards in Monopoly and
Poker. The game tree therefore contains chance nodes in addition to MAX and MIN nodes, and the minimax value is
generalized to an expected value over the possible random outcomes (the expectiminimax value).
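A minimal runnable sketch of this idea, under an illustrative tuple encoding that is not from the slides: a node is a number (leaf value), ('max', children), ('min', children), or ('chance', [(probability, child), ...]).

    def expectiminimax(node):
        # Leaf: return its static value.
        if not isinstance(node, tuple):
            return node
        kind, children = node
        if kind == 'max':      # MAX node: best value for the maximizer
            return max(expectiminimax(c) for c in children)
        if kind == 'min':      # MIN node: best value for the minimizer
            return min(expectiminimax(c) for c in children)
        # Chance node: expected value over random outcomes (e.g. dice rolls).
        return sum(p * expectiminimax(c) for p, c in children)

    # Toy example: MAX chooses between a sure 3 and a fair coin flip between 0 and 10.
    tree = ('max', [3, ('chance', [(0.5, 0), (0.5, 10)])])
    print(expectiminimax(tree))   # -> 5.0, so the gamble beats the sure 3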
Limitations of Game Search Algorithms
Game search algorithms in AI have some limitations, and it's important to be aware of these when designing or
analyzing game-playing systems. Some common limitations include:

1. Exponential Growth of Game Tree:
- As the depth of the game tree increases, the number of possible states or nodes grows exponentially. This leads to
a combinatorial explosion, making it computationally expensive to explore the entire tree.
- Example: In chess, there are roughly 35 legal moves per turn on average, so the number of positions grows
exponentially with depth. Algorithms like minimax may struggle to explore all possible outcomes.

2. Memory Requirements:
- Storing the entire game tree in memory may not be feasible for complex games. The memory requirements for
large game trees can be prohibitive, especially when dealing with limited computational resources.
- Example: In games like Go, where the board size is large, storing the entire game tree in memory becomes
challenging. The sheer number of possible positions and moves requires significant memory resources.
3. Time Constraints:
- Real-time constraints can be challenging for certain game scenarios. In applications where quick decisions are required,
exhaustive search may not be possible, and algorithms need to balance between exploration and exploitation efficiently.
- Example: In real-time strategy games, decisions need to be made quickly. Algorithms like minimax with alpha-beta pruning
may not explore the entire tree within the time constraints, leading to suboptimal decisions.
4. Incomplete Information:
- Game search algorithms assume complete knowledge of the game state. In real-world scenarios, information might be
incomplete or uncertain, making it challenging to accurately evaluate and predict future states.
- Example: In card games like poker, players don't have complete information about the opponent's cards. Game search
algorithms that assume complete knowledge may struggle in such scenarios.
5. Heuristic Quality:
- The effectiveness of heuristic evaluation functions directly impacts the performance of game-playing algorithms. Designing a
good heuristic that accurately reflects the game's dynamics is challenging and may require domain-specific knowledge.
- Example: In the game of checkers, designing a heuristic that accurately captures the positional advantage can be challenging. A
poorly designed heuristic may lead to suboptimal move selection.
6. Limited Horizon Effect:
- Algorithms with a fixed depth limit may suffer from the horizon effect, where they fail to see critical moves or strategies
beyond their search depth. This can lead to suboptimal decision-making.
- Example: In a real-time strategy game, if the search depth is limited, the algorithm may fail to see potential threats or
opportunities beyond a certain horizon, leading to suboptimal decisions.

7. Non-Optimal Moves:
- Game search algorithms might not always find the optimal move due to limitations in the search space exploration.
Suboptimal moves may be chosen if the search is cut off prematurely or if the evaluation function does not capture the true value
of a state.
- Example: In the game of Tic-Tac-Toe, if a search algorithm has a limited depth, it may not find the optimal move, allowing the
opponent to force a draw or win.

8. Adversarial Nature:
- Game search algorithms assume that opponents play optimally or near-optimally. In reality, opponents may exhibit suboptimal
behavior, randomness, or strategic patterns that the algorithm may not anticipate.
- Example: In games like poker, opponents may bluff or deviate from optimal strategies. Algorithms assuming optimal
opponents may struggle to adapt to unpredictable opponent behavior.
9. Dynamic Environments:
- Game search algorithms may struggle in dynamic or changing environments where the optimal strategy evolves over time.
Adapting to unforeseen changes in the game dynamics can be challenging.
- Example: In a dynamic strategy game where the environment changes over time, algorithms may have difficulty adapting to
new conditions, as they are designed based on a static view of the game.

10. Computational Complexity:
- Some game search algorithms, especially those based on more sophisticated techniques like Monte Carlo Tree Search (MCTS),
can be computationally intensive. This limits their applicability in resource-constrained environments.
- Example: In games like 3D chess, more advanced algorithms like Monte Carlo Tree Search (MCTS) may require significant
computational resources, limiting their applicability in resource-constrained environments.
Genetic Algorithms
Genetic algorithms (GAs) are a type of computational optimization technique inspired by the principles of natural selection and
genetics. They are used to solve complex problems by mimicking the process of evolution to iteratively improve a population of
potential solutions. These algorithms operate on a set of candidate solutions encoded as strings of binary digits or other data
structures.
• Genetic Algorithms (GAs) are adaptive heuristic search algorithms that belong to the larger class of evolutionary algorithms.
• Genetic algorithms are based on the ideas of natural selection and genetics.
• They are an intelligent exploitation of random search, using historical data to direct the search into regions of better
performance in the solution space.
• They are commonly used to generate high-quality solutions for optimization and search problems.
• Genetic algorithms simulate the process of natural selection: those individuals that can adapt to changes in their
environment are able to survive, reproduce, and go on to the next generation.
• In simple words, they simulate "survival of the fittest" among individuals of consecutive generations to solve a problem.
• Each generation consists of a population of individuals, and each individual represents a point in the search space and a
possible solution. Each individual is represented as a string of characters/integers/floats/bits. This string is analogous to
the chromosome.
Foundation of Genetic Algorithms
Genetic algorithms are based on an analogy with the genetic structure and behaviour of
chromosomes in a population. Following is the foundation of GAs based on this
analogy:
1. Individuals in the population compete for resources and mates.
2. Those individuals who are successful (fittest) then mate to create more offspring
than others.
3. Genes from the "fittest" parents propagate throughout the generation; that is,
sometimes parents create offspring that are better than either parent.
4. Thus each successive generation is better suited to its environment.
Search space
The population of individuals is maintained within a search space.
Each individual represents a solution in the search space for the given problem.
Each individual is coded as a finite-length vector of components (analogous to a chromosome).
These variable components are analogous to genes.
Thus a chromosome (individual) is composed of several genes (variable components), as illustrated below.
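For instance, an individual might be coded as a fixed-length bit vector; the knapsack reading below is purely illustrative.

    # One individual (chromosome): a finite-length vector of genes.
    # Illustrative reading: gene i = 1 means "take item i" in a 5-item knapsack.
    chromosome = [1, 0, 1, 1, 0]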
Fitness Score
• A fitness score is given to each individual to show its ability to "compete".
• Individuals having optimal (or near-optimal) fitness scores are sought.
• The GA maintains a population of n individuals (chromosomes/solutions) along with their fitness scores.
• Individuals with better fitness scores are given more chances to reproduce than others.
• The individuals with better fitness scores are selected to mate and produce better offspring by combining
the chromosomes of the parents.
• The population size is static, so room has to be created for new arrivals. Some individuals die and are
replaced by new arrivals, eventually creating a new generation when all the mating opportunities of the old population
are exhausted.
• It is hoped that over successive generations better solutions will arise while the least fit die out.
• Each new generation has, on average, more "good genes" than the individuals of previous generations.
• Thus each new generation has better "partial solutions" than previous generations.
• Once the offspring produced show no significant difference from the offspring produced by previous populations,
the population has converged. The algorithm is then said to have converged to a set of solutions for the problem.
Operators of Genetic Algorithms
Once the initial generation is created, the algorithm evolves the generation using the following operators:
1) Selection Operator: The idea is to give preference to individuals with good fitness scores and allow them to pass their
genes on to successive generations.
2) Crossover Operator: This represents mating between individuals. Two individuals are selected using the selection operator and
crossover sites are chosen randomly. The genes at these crossover sites are then exchanged, creating completely new
individuals (offspring). For example, with a crossover site after the fourth gene:

    Parent 1:  1 1 0 1 | 0 0 1      Offspring 1:  1 1 0 1 1 1 0
    Parent 2:  0 1 1 0 | 1 1 0      Offspring 2:  0 1 1 0 0 0 1

3) Mutation Operator: The key idea is to insert random genes into offspring to maintain diversity in the population and avoid
premature convergence. For example, flipping the third gene:

    Before mutation:  1 1 0 1 1 1 0      After mutation:  1 1 1 1 1 1 0
The whole algorithm can be summarized as:
1) Randomly initialize population p
2) Determine fitness of population
3) Until convergence, repeat:
   a) Select parents from population
   b) Crossover and generate new population
   c) Perform mutation on new population
   d) Calculate fitness for new population
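Below is a minimal runnable Python sketch of this loop under illustrative assumptions: binary chromosomes, a toy "count the ones" fitness function, tournament selection, single-point crossover, and bit-flip mutation. All names and parameter values are hypothetical, not from the slides.

    import random

    GENES, POP, GENERATIONS, MUT_RATE = 20, 30, 50, 0.01

    def fitness(ind):                      # 2) determine fitness of an individual
        return sum(ind)                    # toy objective: maximize the number of 1s

    def select(pop):                       # 3a) tournament selection of one parent
        return max(random.sample(pop, 3), key=fitness)

    def crossover(p1, p2):                 # 3b) single-point crossover
        site = random.randint(1, GENES - 1)
        return p1[:site] + p2[site:]

    def mutate(ind):                       # 3c) bit-flip mutation
        return [1 - g if random.random() < MUT_RATE else g for g in ind]

    # 1) randomly initialize population p
    pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
    for _ in range(GENERATIONS):           # 3) repeat (fixed budget stands in for convergence)
        pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP)]

    best = max(pop, key=fitness)           # 3d) fitness of the final population
    print(fitness(best), best)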
