Lecture Adversarial Searches

The document discusses adversarial search in artificial intelligence, focusing on competitive environments where agents have conflicting goals, commonly represented as games. Key concepts covered include the Minimax algorithm for optimal decision-making in games and Alpha-Beta pruning to enhance search efficiency. The document outlines the structure of games, the importance of utility functions, and the process of evaluating moves to maximize outcomes against an opponent.


Artificial Intelligence

ADVERSARIAL SEARCH

1
ADVERSARIAL SEARCH
• In which we examine the problems that arise when we try to plan ahead in a world where other agents are planning against us.
• In this lecture we will cover competitive environments, in which the agents’ goals are in conflict, giving rise to adversarial search problems—often known as games.
• We will cover the following:
– GAMES
– OPTIMAL DECISIONS IN GAMES
– MiniMax Algorithm
– ALPHA–BETA PRUNING

2
Adversarial Search
• Adversarial search examines the problems that arise when we try to plan ahead in a world where other agents are planning against us.
• In previous topics, we studied search strategies involving only a single agent that aims to find a solution, often expressed as a sequence of actions.
• However, there are situations where more than one agent is searching for a solution in the same search space; this typically occurs in game playing.
Adversarial Search
• An environment with more than one agent is termed a multi-agent environment, in which each agent is an opponent of the others and plays against them. Each agent must consider the actions of the other agents and the effect of those actions on its own performance.
• Searches in which two or more players with conflicting goals explore the same search space for a solution are called adversarial searches, often known as games.
• Games are modeled using a search problem and a heuristic evaluation function; these are the two main components used to model and solve games in AI.
Games
• A game can be defined as a search problem with the following elements:
– S0: The initial state, which specifies how the game is set up at the start.
– PLAYER(s): Defines which player has the move in state s.
– ACTIONS(s): Returns the set of legal moves in state s.
5
Games
– RESULT(s, a): The transition model, which defines the result of taking action a in state s.
– TERMINAL-TEST(s): A terminal test, which is true when the game is over in state s and false otherwise.
– UTILITY(s, p): A utility function (also called an objective function or payoff function), which defines the final numeric value for a game that ends in terminal state s for player p.
6
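
To make these elements concrete, here is a minimal Python sketch of one possible encoding of this formulation for Tic-Tac-Toe. The board representation and function names below are illustrative assumptions, not part of the original slides.

```python
# A minimal sketch of the game formulation for Tic-Tac-Toe.
# The state is a tuple of 9 cells holding 'X', 'O', or None.

S0 = (None,) * 9  # initial state: empty board

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def player(s):
    """PLAYER(s): X moves first, so X is to move whenever the counts are equal."""
    return 'X' if s.count('X') == s.count('O') else 'O'

def actions(s):
    """ACTIONS(s): indices of empty cells are the legal moves."""
    return [i for i, cell in enumerate(s) if cell is None]

def result(s, a):
    """RESULT(s, a): place the current player's mark in cell a."""
    board = list(s)
    board[a] = player(s)
    return tuple(board)

def winner(s):
    """Return 'X' or 'O' if that player has completed a line, else None."""
    for i, j, k in LINES:
        if s[i] is not None and s[i] == s[j] == s[k]:
            return s[i]
    return None

def terminal_test(s):
    """TERMINAL-TEST(s): true when someone has won or the board is full."""
    return winner(s) is not None or all(cell is not None for cell in s)

def utility(s, p):
    """UTILITY(s, p): +1 if p wins, -1 if p loses, 0 for a draw."""
    w = winner(s)
    if w is None:
        return 0
    return 1 if w == p else -1
```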
Games
• The initial state, ACTIONS function, and RESULT function define the game
tree—a tree where the nodes are game states and the edges are moves.
The figure shows part of the game tree for Tic-Tac-Toe.

7
Simple Approach in Tic-Tac-Toe
• In a simple algorithm, calculate all the possible moves from the current position. For a game of Noughts and Crosses the result might look like:
• Expand each of these new possible moves for the other player.
• Continue this expansion until a winning position for the player is found.
8
Simple Approach in Tic-Tac-Toe

– This algorithm will work and locate a series of winning moves, but at the cost of enormous calculation: one board at first, 9 at the next level, 9∗8 at the next, and so on. In total, 9 ∗ 8 ∗ 7 ∗ … ∗ 1 = 9! = 362,880 complete move sequences.
– This is not so big that we cannot calculate it, but it is alarming since Noughts and Crosses is such a simple game.
9
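
As a quick check of this count, a small Python sketch follows (an illustrative addition; the figures ignore games that end early with a win, which would prune some branches):

```python
from math import factorial

# 1 board at depth 0, 9 at depth 1, 9*8 at depth 2, and so on down to depth 9.
boards_per_level = [1]
for remaining in range(9, 0, -1):
    boards_per_level.append(boards_per_level[-1] * remaining)

print(boards_per_level[-1])   # 362880, i.e. 9! complete move sequences
print(factorial(9))           # 362880, the same figure
print(sum(boards_per_level))  # 986410 boards generated across all levels
```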
OPTIMAL DECISIONS IN GAMES
• An optimal strategy leads to outcomes at least as good as any other strategy when one is playing an infallible opponent.
• Two-Player Games:
– Consider a zero-sum game, in which the gain of one player is exactly balanced by the loss of the other player.
– Static evaluation function: f(n) = (rows/columns/diagonals still open for X) − (rows/columns/diagonals still open for O), as sketched in the code after this slide.
10
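
As an illustration, a minimal Python sketch of this static evaluation for Tic-Tac-Toe. It reuses the board encoding assumed in the earlier sketch; a row, column, or diagonal counts as "open" for a player if the opponent has no mark on it.

```python
# Rows, columns, and diagonals of the 3x3 board (same encoding as the earlier sketch).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def open_lines(s, p):
    """Count the lines that the opponent has not blocked for player p."""
    opponent = 'O' if p == 'X' else 'X'
    return sum(1 for line in LINES if all(s[i] != opponent for i in line))

def f(s):
    """Static evaluation: open lines for X minus open lines for O."""
    return open_lines(s, 'X') - open_lines(s, 'O')

# Example: X alone in the centre leaves 8 lines open for X and 4 for O, so f = 4.
board = (None, None, None,
         None, 'X',  None,
         None, None, None)
print(f(board))  # 4
```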
OPTIMAL DECISIONS IN GAMES
– The higher the result of f(n), the closer the move is to a win. In the figure, three moves result in 3, but only one of them actually wins for X.
– This f(n) is useful, but another heuristic is necessary to pick the move with the highest f(n) while protecting against a loss on the next move.
– For this purpose, the Minimax algorithm given next is used, in which the algorithm's opponent will be trying to minimize whatever value the algorithm is trying to maximize (hence, "Minimax").
– Thus, the computer should make the move which leaves its opponent capable of doing the least damage.

11
Minimax algorithm
• Minimax uses one of two basic strategies:
– The entire game tree is searched to the leaf nodes, or
– The tree is searched to a predefined depth and then evaluated.
• We can pursue the tree by making guesses as to how the opponent will play.
• A cost function can be used to evaluate how the opponent is likely to play.
12
Minimax algorithm
• After evaluating some number of moves ahead, we examine the total value of the cost to each player.
• The goal is to find a move which maximizes the value of our own moves and minimizes the value of the opponent's moves.
• The algorithm used is the Minimax search procedure presented next.
13
The Minimax algorithm

14
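
The original slide gives the procedure as a figure. In its place, a minimal Python sketch of depth-limited Minimax is shown below; it assumes the Tic-Tac-Toe interface (player, actions, result, terminal_test, utility) and the evaluation f from the earlier sketches, so it is an illustration rather than the slide's exact pseudocode.

```python
def minimax_value(s, depth):
    """Minimax value of state s from X's point of view, searching 'depth' plies ahead."""
    if terminal_test(s):
        return 100 * utility(s, 'X')   # scale wins/losses so they dominate f(s)
    if depth == 0:
        return f(s)                    # cutoff reached: fall back to the static evaluation
    values = [minimax_value(result(s, a), depth - 1) for a in actions(s)]
    return max(values) if player(s) == 'X' else min(values)

def minimax_decision(s, depth=9):
    """Pick the legal move in s that is best for the player to move."""
    best = max if player(s) == 'X' else min
    return best(actions(s), key=lambda a: minimax_value(result(s, a), depth - 1))

# Usage: a full-depth search from the empty board chooses an optimal opening move for X.
print(minimax_decision(S0))
```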
Partial Example Tree For Minimax
Algorithm

15
Example

16
Example

17
ALPHA–BETA PRUNING
• Minimax search, like a depth-first search of the game tree, examines a number of states that is exponential in the depth of the tree. The exponent cannot be eliminated, but it can effectively be cut in half.
• The idea of pruning can be used to eliminate large parts of the tree from consideration; the particular technique we examine is called Alpha–Beta Pruning.
• When applied to a standard Minimax tree, it returns the same move as Minimax would, but prunes away branches that cannot possibly influence the final decision.
18
ALPHA–BETA PRUNING
• Consider the two-ply game tree shown in the figure given next.
• Let’s go through the calculation of the optimal decision once more, this time paying careful attention to what we know at each point in the process.

19
ALPHA–BETA PRUNING

20
ALPHA–BETA PRUNING
• Alpha–beta search updates the values of α and β as it goes along and prunes the remaining branches at a node (i.e., terminates the recursive call) as soon as the value of the current node is known to be worse than the current α or β value for MAX or MIN, respectively.
• The complete algorithm is given next. It is instructive to trace its behavior when applied to the tree in the figure.
21
ALPHA–BETA PRUNING

22
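
This slide, too, presents the complete algorithm as a figure. A minimal Python sketch of alpha–beta search, written against the same hypothetical Tic-Tac-Toe interface as the earlier Minimax sketch, could look like this:

```python
import math

def alphabeta_value(s, depth, alpha, beta):
    """Minimax value of s from X's point of view, with alpha-beta pruning.

    alpha is the best value MAX can already guarantee on this path,
    beta the best value MIN can already guarantee.
    """
    if terminal_test(s):
        return 100 * utility(s, 'X')   # scale wins/losses so they dominate f(s)
    if depth == 0:
        return f(s)
    if player(s) == 'X':               # MAX node
        value = -math.inf
        for a in actions(s):
            value = max(value, alphabeta_value(result(s, a), depth - 1, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:          # MIN already has a better option elsewhere: prune
                break
        return value
    else:                              # MIN node
        value = math.inf
        for a in actions(s):
            value = min(value, alphabeta_value(result(s, a), depth - 1, alpha, beta))
            beta = min(beta, value)
            if beta <= alpha:          # MAX already has a better option elsewhere: prune
                break
        return value

def alphabeta_decision(s, depth=9):
    """Return the same move Minimax would choose, with large parts of the tree pruned."""
    best = max if player(s) == 'X' else min
    return best(actions(s),
                key=lambda a: alphabeta_value(result(s, a), depth - 1, -math.inf, math.inf))
```

With perfect move ordering, alpha–beta examines roughly O(b^(m/2)) nodes instead of Minimax's O(b^m), which is the sense in which the exponent is "cut in half".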
Example

23
Example

24
Example

25
