Unit 2 Artificial Intelligence - Problem-solving Through Searching

This document discusses problem-solving agents, particularly goal-based and problem-solving agents, which utilize search algorithms to navigate complex environments. It outlines various search strategies, including uninformed and informed algorithms, and details the characteristics of production systems in AI. Additionally, it covers the formulation of problems, the components of production rule systems, and the importance of heuristic functions in guiding search processes.


Unit 2: Problem-solving through searching
Problem-solving agents

 In the last unit, we discussed reflex agents, which base their actions on a direct mapping from states to actions.
 Such agents cannot operate well in environments for which this mapping would be too large to store and would take too long to learn.
 Now we will discuss goal-based agents, which consider future actions and the desirability of their outcomes.
 One kind of goal-based agent is called a problem-solving agent.
 Problem-solving agents use atomic representations, i.e., states of the world are considered as wholes, with no internal structure visible to the problem-solving algorithms.
 Goal-based agents that use more advanced factored or structured representations are usually called planning agents.
Search Algorithms

 Informed search algorithms: can do quite well given some guidance on where to look for solutions.

 Uninformed search algorithms: algorithms that are given no information about the problem other than its definition.
 Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving.
 Problem formulation: the process of deciding what actions and states to consider, given a goal.
 An agent with several immediate options of unknown value can decide what to do by first examining future actions that eventually lead to states of known value.
Well-defined problems and solutions

A problem can be defined formally by five components:
 Initial state
 Actions: a description of the possible actions available to the agent.
 Transition model: a description of what each action does.
 Goal test: determines whether a given state is a goal state.
 Path cost: assigns a numeric cost to each path.
 The problem-solving agent chooses a cost function that reflects its own performance measure.
 In this chapter, it is assumed that the cost of a path can be described as the sum of the costs of the individual actions along the path.
 The step cost of taking action a in state s to reach state s' is denoted by c(s, a, s').
 A solution to a problem is an action sequence that leads from the initial state to a goal state.

 Solution quality is measured by the path cost function.

 An optimal solution has the lowest path cost among all solutions.
Production System

 A production system or production rule system is based on a set of rules about behavior.
 These rules are a basic representation found helpful in expert systems, automated planning, and action selection.
 It is a computer program, typically used to provide some form of AI, that consists primarily of a set of rules about behavior but also includes the mechanism necessary to follow those rules as the system responds to states of the world.
Playing Chess

A Water-Jug Problem
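The figure for this slide is not reproduced here. In the classic formulation (a 4-gallon and a 3-gallon jug, goal: measure exactly 2 gallons), the problem can be solved by breadth-first search over (big, small) states. A minimal sketch, with a hypothetical function name:

```python
from collections import deque

def water_jug(goal=2, cap=(4, 3)):
    start = (0, 0)
    parents = {start: None}       # maps each state to the state it came from
    frontier = deque([start])     # FIFO queue for breadth-first search
    while frontier:
        x, y = frontier.popleft()
        if goal in (x, y):        # the goal amount measured in either jug
            path, s = [], (x, y)
            while s is not None:  # reconstruct the path of states
                path.append(s)
                s = parents[s]
            return path[::-1]
        pour_xy = min(x, cap[1] - y)   # amount pourable big -> small
        pour_yx = min(y, cap[0] - x)   # amount pourable small -> big
        for s in [(cap[0], y), (x, cap[1]),    # fill a jug
                  (0, y), (x, 0),              # empty a jug
                  (x - pour_xy, y + pour_xy),  # pour big into small
                  (x + pour_yx, y - pour_yx)]: # pour small into big
            if s not in parents:
                parents[s] = (x, y)
                frontier.append(s)
    return None
```

Because BFS expands states level by level, the returned path uses the fewest fill/empty/pour operations.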

Components of Production Rule System

 Global database: the central data structure.
 Set of production rules: the production rules operate on the global database. Each rule usually has a precondition that is either satisfied or not by the global database.
 A control system: chooses which applicable rule should be applied and ceases computation when a termination condition on the database is satisfied.

Features of Production System in AI

 Simplicity: the structure of each sentence in a production system is unique and uniform, as rules use the "IF-THEN" structure. This structure provides simplicity in knowledge representation.
 Modularity: the production rules encode the available knowledge in discrete pieces.
 Modifiability: the facility for modifying rules. It allows production rules to be developed in a skeletal form first and then refined to suit a specific application.
 Knowledge-intensive: the knowledge base of the production system stores pure knowledge.
Production System Rules

 Deductive inference rules

 Abductive inference rules

 If (Condition) Then (Action): production rules are also known as condition-action, antecedent-consequent, pattern-action, situation-response, or feedback-result pairs.
Classes of Production System in AI

 Monotonic PS: a production system in which the application of a rule never prevents the later application of another rule that could also have been applied at the time the first rule was selected.
 Partially commutative PS: a type of production system in which, if the application of a sequence of rules transforms state X into state Y, then any allowable permutation of those rules also transforms state X into state Y.
 Non-monotonic PS: these are useful for solving ignorable problems. These systems are important from an implementation standpoint because they can be implemented without the ability to backtrack to previous states when it is discovered that an incorrect path was followed.
 Commutative PS: these are usually useful for problems in which changes occur but can be reversed and in which the order of operations is not critical.
PS Characteristics

Control/Search Strategies

 How would you decide which rule to apply while searching for a solution to a problem? There are certain requirements for a good control strategy to keep in mind:
 The first requirement is that it should cause motion.
 The second requirement is that it should be systematic.
 Finally, it must be efficient in order to find a good answer.
Example Problems

 Toy Problem: vacuum cleaner world problem

Toy problem continues…

 States: determined by both the agent location and the dirt locations.
 Initial state
 Actions: Left, Right, and Suck (or also Up and Down).
 Transition model: the actions have their expected effects.
 Goal test: checks whether all the squares are clean.
 Path cost: each step costs 1, so the path cost is the number of steps in the path.
 8-puzzle problem

 8-queens problem

Real-world problems

 Route-finding problem
 Touring problem
 Traveling salesman problem
 A water-jug problem
 VLSI layout
 Robot navigation
 Automatic assembly sequencing
Searching for solutions

 A solution is an action sequence, so search algorithms work by considering various possible action sequences.
 The possible action sequences starting at the initial state form a search tree with the initial state at the root; the branches are actions and the nodes correspond to states in the state space of the problem.
Measuring problem-solving performance

 Completeness: is the algorithm guaranteed to find a solution when there is one?

 Optimality: does the strategy find the optimal solution?

 Time complexity

 Space complexity
Search Algorithms

 Uninformed search algorithms: algorithms that are given no information about the problem other than its definition.

 Informed search algorithms: can do quite well given some guidance on where to look for solutions.
Uninformed search strategies

Also called blind search:

1. Breadth-first search
2. Uniform-cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening DFS
6. Bidirectional search
1. Breadth-first search

 BFS is a simple strategy in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on.
 In general, all the nodes are expanded at a given depth in the search tree before any nodes at the next level are expanded.
 BFS is an instance of the general graph-search algorithm in which the shallowest unexpanded node is chosen for expansion.
 A FIFO queue is used for the frontier.
 New nodes, which are always deeper than their parents, go to the back of the queue, and old nodes, which are shallower than the new nodes, get expanded first.
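As a sketch, BFS with a FIFO frontier can be written as follows (the adjacency-dict graph below is hypothetical):

```python
from collections import deque

def bfs(graph, start, goal):
    frontier = deque([[start]])   # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft() # shallowest node is expanded first
        node = path[-1]
        if node == goal:
            return path
        for succ in graph[node]:
            if succ not in explored:
                explored.add(succ)
                frontier.append(path + [succ])
    return None

# Hypothetical example graph: two routes from A to D of equal depth.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```

Because new nodes go to the back of the queue, all depth-1 nodes are expanded before any depth-2 node, matching the level-by-level behavior described above.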

2. Uniform-cost search

 UCS expands the node n with the lowest path cost g(n). This is done by storing the frontier as a priority queue ordered by g.
 UCS expands nodes in order of their optimal path cost.
 Differences from BFS:
 1. The goal test is applied to a node when it is selected for expansion, rather than when it is first generated.
 2. A test is added in case a better path is found to a node currently on the frontier.
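A sketch with a binary-heap priority queue ordered by g(n); the weighted graph below is hypothetical, and note that the goal test is applied when a node is selected for expansion:

```python
import heapq

def ucs(graph, start, goal):
    frontier = [(0, start, [start])]   # priority queue of (g, state, path)
    best_g = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:               # goal test at expansion, not generation
            return g, path
        for succ, cost in graph[node]:
            new_g = g + cost
            if succ not in best_g or new_g < best_g[succ]:
                best_g[succ] = new_g   # a better path to succ was found
                heapq.heappush(frontier, (new_g, succ, path + [succ]))
    return None

# Hypothetical weighted graph: the direct A->C edge is costlier than A->B->C.
graph = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": []}
```

Here UCS returns the cost-2 path A-B-C rather than the direct cost-5 edge, illustrating expansion in order of optimal path cost.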

3. Depth-First Search

 DFS always expands the deepest node in the current frontier of the search tree.
 The search proceeds immediately to the deepest level of the search tree, where the nodes have no successors.
 A LIFO stack is used for the frontier.
 The DFS algorithm is an instance of the graph-search algorithm.
 The properties of DFS depend strongly on whether the graph-search or tree-search version is used.
 The graph-search version, which avoids repeated states and redundant paths, is complete in finite state spaces because it will eventually expand every node.
 The tree-search version is not complete.
 A variant of DFS called backtracking search uses still less memory.
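The graph-search version can be sketched with an explicit LIFO stack (the `explored` set is what distinguishes it from the tree-search version; the example graph is hypothetical):

```python
def dfs(graph, start, goal):
    stack = [[start]]            # LIFO stack of paths
    explored = set()             # avoids repeated states (graph search)
    while stack:
        path = stack.pop()       # deepest node is expanded first
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for succ in graph[node]:
            if succ not in explored:
                stack.append(path + [succ])
    return None

# Hypothetical example graph.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```

Dropping the `explored` set gives the tree-search version, which may loop forever on graphs with cycles.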

4. Depth-limited Search

 The embarrassing failure of DFS in infinite state spaces can be alleviated by supplying DFS with a predetermined depth limit l.
 Nodes at depth l are treated as if they have no successors.
 DLS solves the infinite-path problem.
 DFS can be viewed as a special case of DLS with l = ∞.
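A recursive sketch: nodes at the depth limit are treated as if they had no successors, and the distinguished value `"cutoff"` signals that the limit (rather than exhaustion) ended the search. The example graph is hypothetical.

```python
def dls(graph, node, goal, limit, path=None):
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return "cutoff"          # depth limit reached: pretend no successors
    cutoff = False
    for succ in graph[node]:
        result = dls(graph, succ, goal, limit - 1, path)
        if result == "cutoff":
            cutoff = True
        elif result is not None:
            return result
    return "cutoff" if cutoff else None

# Hypothetical chain graph: the goal C sits at depth 2.
graph = {"A": ["B"], "B": ["C"], "C": []}
```

With limit 1 the search is cut off before reaching C; with limit 2 it succeeds.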

5. Iterative deepening DFS

 Iterative deepening DFS is a general strategy, often used in combination with depth-first search, that finds the best depth limit.
 It does this by gradually increasing the limit – first 0, then 1, then 2, and so on – until a goal is found.
 This will occur when the depth limit reaches d, the depth of the shallowest goal node.
 Iterative deepening search may seem wasteful because states are generated multiple times.
 IDS is the preferred uninformed search method when the search space is large and the depth of the solution is not known.
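The limit-0-then-1-then-2 loop can be sketched as follows (a self-contained tree-search version over a hypothetical acyclic graph):

```python
def iddfs(graph, start, goal, max_depth=50):
    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return None          # depth limit reached
        for succ in graph[node]:
            result = dls(succ, limit - 1, path + [succ])
            if result is not None:
                return result
        return None

    for limit in range(max_depth + 1):   # limit 0, then 1, then 2, ...
        result = dls(start, limit, [start])
        if result is not None:
            return result
    return None

# Hypothetical acyclic example graph; the goal D is at depth 2.
graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
```

The shallow levels are regenerated on every iteration, but since most nodes of a tree live at the deepest level, the repeated work adds only a constant factor.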

6. Bidirectional Search

 The idea behind bidirectional search is to run two simultaneous searches – one forward from the initial state and the other backward from the goal – hoping that the two searches meet in the middle.

Comparing Uninformed Search Strategies
Informed search strategies

 Also called heuristic search.
 Given some guidance on where to look for solutions.
 A heuristic is a technique that improves the efficiency of a search process, possibly by sacrificing claims of completeness.
 Heuristics are like tour guides.
 They are good to the extent that they point in generally interesting directions; they are bad to the extent that they may miss points of interest.
 An informed search algorithm uses knowledge such as how far we are from the goal, the path cost, how to reach the goal node, etc.
 Best-first search is an instance of the general tree-search or graph-search algorithm in which a node is selected for expansion based on an evaluation function, f(n).
 The evaluation function is constructed as a cost estimate, so the node with the lowest evaluation is expanded first.
 Most best-first algorithms include as a component of f a heuristic function, denoted h(n).
Heuristic Function

 A heuristic function maps from problem state descriptions to measures of desirability, usually represented as numbers.
 It finds the most promising path.
 The purpose of a heuristic function is to guide the search process in the most profitable direction by suggesting which path to follow first when more than one is available.
 It takes the current state of the agent as its input and produces an estimate of how close the agent is to the goal.
 h(n) = estimated cost of the cheapest path from the state at node n to a goal state; it is always nonnegative.
 A heuristic function takes a node as input, but unlike g(n) in uniform-cost search, it depends only on the state at that node.
 Best-first search
 A* algorithm
Best-first Search (Greedy Search)

 Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly.
 Thus, it evaluates nodes by using just the heuristic function; that is, f(n) = h(n).
 Best-first search is a way of combining the advantages of both DFS and BFS into a single method.
 Step 1: Place the starting node into the OPEN list.
 Step 2: If the OPEN list is empty, stop and return failure.
 Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and place it in the CLOSED list.
 Step 4: Expand the node n, and generate the successors of node n.
 Step 5: Check each successor of node n to find whether any is a goal node. If any successor is a goal node, then return success and terminate the search; else proceed to Step 6.
 Step 6: For each successor node, the algorithm checks the evaluation function f(n), and then checks whether the node is in either the OPEN or CLOSED list. If the node is in neither list, then add it to the OPEN list.
 Step 7: Return to Step 2.
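The OPEN/CLOSED steps above can be sketched with a priority queue ordered by h(n) alone, i.e. f(n) = h(n); the graph and heuristic values below are hypothetical:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    open_list = [(h[start], start, [start])]   # OPEN: priority queue on h(n)
    closed = set()                             # CLOSED: already-examined nodes
    while open_list:
        _, node, path = heapq.heappop(open_list)  # lowest h(n) first
        if node == goal:
            return path
        closed.add(node)
        for succ in graph[node]:
            if succ not in closed:
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))
    return None

# Hypothetical graph and heuristic: A looks closer to the goal than B.
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 3, "A": 1, "B": 2, "G": 0}
```

The search follows the smaller h-values straight to S-A-G, ignoring path costs entirely, which is exactly why greedy search is fast but not optimal.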

OR Graphs

 It is sometimes important to search a graph instead of a tree so that duplicate paths will not be pursued.
 An algorithm to do this operates by searching a directed graph in which each node represents a point in the problem space.
 Each node contains, in addition to a description of the problem state it represents and an indication of how promising it is, a parent link that points back to the best node from which it came, and a list of the nodes that were generated from it.
 The parent link makes it possible to recover the path to the goal once the goal is found.
 The list of successors makes it possible, if a better path is found to an already existing node, to propagate the improvement down to its successors.
 A graph of this sort is called an OR graph, since each of its branches represents an alternative problem-solving path.
We will need to use two lists of nodes for pure heuristic search:
 OPEN – nodes that have been generated and have had the heuristic function applied to them but which have not yet been examined.
 It is actually a priority queue in which the elements with the highest priority are those with the most promising value of the heuristic function.
 CLOSED – nodes that have already been examined. We need to keep these nodes in memory if we want to search a graph rather than a tree, since whenever a new node is generated, we need to check whether it has been generated before.
Advantages and disadvantages

 Advantages:
 Best-first search can switch between BFS and DFS, gaining the advantages of both algorithms.
 This algorithm is more efficient than BFS and DFS.
 Disadvantages:
 It can behave as an unguided depth-first search in the worst case.
 It can get stuck in a loop, like DFS.
 This algorithm is not optimal.
Measuring Greedy best-first search

 Time complexity: O(b^m), where m is the maximum depth of the search space

 Space complexity: O(b^m)

 Completeness: incomplete, even if the given state space is finite

 Optimal: not optimal
A* Search Algorithm

 The most widely known form of best-first search is called A* search.
 It evaluates nodes by combining g(n), the cost to reach the node, and h(n), the cost to get from the node to the goal:
f(n) = g(n) + h(n)
 Since g(n) gives the path cost from the start node to node n, and h(n) is the estimated cost of the cheapest path from n to the goal, f(n) is the estimated cost of the cheapest solution through n.
 It combines features of UCS and greedy best-first search, by which it solves the problem efficiently.
 The A* search algorithm finds the shortest path through the search space using the heuristic function.
 This search algorithm expands a smaller search tree and provides an optimal result faster.
 We use the search heuristic as well as the cost to reach the node.
A* Algorithm steps
1: Place the starting node in the OPEN list.
2: Check if the OPEN list is empty or not; if the list is empty, then return failure and stop.
3: Select the node from the OPEN list which has the smallest value of the evaluation function (g + h); if node n is the goal node, then return success and stop, otherwise
4: Expand node n and generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, then compute the evaluation function for n' and place it into the OPEN list.
5: Else, if node n' is already in OPEN or CLOSED, attach it to the back pointer which reflects the lowest g(n') value.
6: Return to Step 2.
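The steps above can be sketched with a priority queue ordered by f(n) = g(n) + h(n); the weighted graph and (admissible) heuristic below are hypothetical:

```python
import heapq

def a_star(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}                           # lowest g found per state
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        for succ, cost in graph[node]:
            new_g = g + cost
            if succ not in best_g or new_g < best_g[succ]:
                best_g[succ] = new_g   # keep only the cheapest path to succ
                heapq.heappush(open_list,
                               (new_g + h[succ], new_g, succ, path + [succ]))
    return None

# Hypothetical graph: greedy would pick A (h=5 path looks short from S),
# but the cheapest solution goes through B with total cost 5.
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 5, "B": 1, "G": 0}
```

Because h never overestimates the true remaining cost here, A* returns the optimal cost-5 path S-B-G.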

Advantages and disadvantages

Advantages:
 The A* search algorithm performs better than other search algorithms.
 The A* search algorithm is optimal and complete.
 This algorithm can solve very complex problems.

 Disadvantages:
 It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
 The A* search algorithm has some complexity issues.
 The main drawback of A* is its memory requirement: as it keeps all generated nodes in memory, it is not practical for various large-scale problems.
Measuring the A* Algorithm

 Time complexity: O(b^d), where d is the depth of the solution and b the branching factor
 Space complexity: O(b^d)
 Complete: the A* algorithm is complete as long as:
 the branching factor is finite, and
 the cost of every action is fixed.
 Optimal: the A* search algorithm is optimal if it satisfies two conditions:
 Admissibility: the first condition required for optimality is that h(n) should be an admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature.
 Consistency: the second required condition is consistency, for A* graph search only.
Conditions of Optimality

 An admissible heuristic is one that never overestimates the cost to reach the goal.
 The tree-search version of A* is optimal if h(n) is admissible, while the graph-search version is optimal if h(n) is consistent.
 Because g(n) is the actual cost to reach n along the current path, and f(n) = g(n) + h(n), we have as an immediate consequence that f(n) never overestimates the true cost of a solution along the current path through n.
 A second, slightly stronger condition called consistency (or sometimes monotonicity) is required only for applications of A* to graph search.
 A heuristic h(n) is consistent if, for every node n and every successor n' of n generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n':
h(n) <= c(n, a, n') + h(n')
This is a form of the general triangle inequality, which stipulates that each side of a triangle cannot be longer than the sum of the other two sides.

Local Search: Hill Climbing Algorithm

 The hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain or the best solution to the problem. It terminates when it reaches a peak where no neighbor has a higher value.
 The hill climbing algorithm is a technique used for optimizing mathematical problems.
 It is also called greedy local search, as it only looks to its good immediate neighbor state and not beyond that.
 A node of the hill climbing algorithm has two components: state and value.
 Hill climbing is mostly used when a good heuristic is available.
 In this algorithm, we don't need to maintain and handle a search tree or graph, as it only keeps a single current state.
Features of Hill Climbing

 Generate-and-test variant: hill climbing is a variant of the generate-and-test method. The generate-and-test method produces feedback which helps to decide which direction to move in the search space.
 Greedy approach: the hill-climbing search moves in the direction which optimizes the cost.
 No backtracking: it does not backtrack through the search space, as it does not remember previous states.
State-space diagram for Hill Climbing

The state-space landscape is a graphical representation of the hill-climbing algorithm, showing a graph between the various states of the algorithm and the objective function/cost.
 Local maximum: a state which is better than its neighbor states, but there is also another state which is higher than it.

 Global maximum: the best possible state of the state-space landscape. It has the highest value of the objective function.

 Current state: the state in the landscape diagram where the agent is currently present.

 Flat local maximum: a flat space in the landscape where all the neighbor states of the current state have the same value.

 Shoulder: a plateau region which has an uphill edge.
Types of Hill Climbing Algorithm

 Simple hill climbing

 Steepest-ascent hill climbing

 Simulated annealing
1. Simple Hill Climbing

 It evaluates one neighbor node state at a time, selects the first one which improves the current cost, and sets it as the current state.
 Features:
 Less time consuming
 Less optimal solution, and the solution is not guaranteed
2. Hill Climbing: Steepest-Ascent

 The steepest-ascent algorithm is a variation of the simple hill climbing algorithm.
 This algorithm examines all the neighboring nodes of the current state and selects the neighbor node which is closest to the goal state.
 This algorithm consumes more time, as it searches for multiple neighbors.
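The steepest-ascent variant can be sketched over a hypothetical one-dimensional objective: keep a single current state, examine all neighbors, and move to the best one until none is higher.

```python
def hill_climb(value, neighbors, state):
    while True:
        # Examine all neighbors and pick the best one (steepest ascent).
        best = max(neighbors(state), key=value, default=state)
        if value(best) <= value(state):
            return state          # peak reached: no neighbor is higher
        state = best

# Toy landscape (hypothetical): f(x) = -(x - 3)^2 peaks at x = 3;
# the neighbors of x are x - 1 and x + 1.
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
```

On this single-peak landscape the climb always ends at the global maximum; on a landscape with several peaks it would stop at whichever local maximum is uphill from the start state.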

3. Hill Climbing: Simulated Annealing

 Simulated annealing allows occasional downhill moves, chosen with a probability that decreases over time, so the search can escape local maxima while retaining the efficiency of hill climbing.
Problem Reduction

 We will know how to get from a node to a goal state if we can discover how to get from that node to a goal state along any one of the branches leaving it.
AND-OR Graphs

 The AND-OR graph (or tree) is useful for representing the solution of problems that can be solved by decomposing them into a set of smaller problems, all of which must then be solved.
 This decomposition, or reduction, generates arcs that we call AND arcs.
 One AND arc may point to any number of successor nodes, all of which must be solved in order for the arc to point to a solution.
 In order to describe an algorithm for searching an AND-OR graph, we need to exploit a value that we call FUTILITY.
 Just as in an OR graph, several arcs may emerge from a single node, indicating a variety of ways in which the original problem might be solved.
 This is why the structure is called an AND-OR graph.
 AND arcs are indicated with a line connecting all the components.
 In order to find a solution in an AND-OR graph, we need an algorithm similar to best-first search but with the ability to handle the AND arcs appropriately.
 It may be necessary to get to more than one solution state, since each arm of an AND arc must lead to its own solution node.
 There is another important way in which an algorithm for searching an AND-OR graph must differ from one for searching an OR graph.

 Individual paths from node to node cannot be considered independently of the paths through other nodes connected to the original ones by AND arcs.

 In the best-first search algorithm, the desired path from one node to another was always the one with the lowest cost. But this is not always the case when searching an AND-OR graph.
 Limitation: it fails to take into account any interaction between subgoals.
The AO* Algorithm

 Nilsson calls the problem-reduction algorithm the AO* algorithm.
 Rather than the two lists, OPEN and CLOSED, that were used in the A* algorithm, the AO* algorithm uses a single structure GRAPH, representing the part of the search graph that has been explicitly generated so far.
 Each node in the graph points both down to its immediate successors and up to its immediate predecessors.
 Each node in the graph also has associated with it an h' value, an estimate of the cost of a path from itself to a set of solution nodes.
 Here, we do not store g, the cost of getting from the start node to the current node.
Adversarial Search

 Adversarial search is a search where we examine the problem which arises when we try to plan ahead in a world where other agents are planning against us.
 An environment with more than one agent is termed a multi-agent environment, in which each agent is an opponent of the other agents and plays against them. Each agent needs to consider the actions of the other agents and the effect of those actions on its own performance.
 So, searches in which two or more players with conflicting goals are trying to explore the same search space for the solution are called adversarial searches, often known as games.
 Games are modeled as a search problem plus a heuristic evaluation function, and these are the two main factors which help to model and solve games in AI.
Mini-max Algorithm

 The mini-max algorithm is a recursive or backtracking algorithm which is used in decision-making and game theory. It provides an optimal move for the player, assuming that the opponent is also playing optimally.
 The mini-max algorithm uses recursion to search through the game tree.
 The min-max algorithm is mostly used for game playing in AI, such as chess, checkers, tic-tac-toe, Go, and various two-player games. This algorithm computes the minimax decision for the current state.
 In this algorithm two players play the game; one is called MAX and the other is called MIN.
 Both players fight it out such that the opponent gets the minimum benefit while they get the maximum benefit.
 Both players of the game are opponents of each other, where MAX selects the maximized value and MIN selects the minimized value.
 The minimax algorithm performs a depth-first search for the exploration of the complete game tree.
 The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backs the values up the tree as the recursion unwinds.

Measuring properties of Mini-Max

 Complete: the min-max algorithm is complete. It will definitely find a solution (if one exists) in a finite search tree.
 Optimal: the min-max algorithm is optimal if both opponents are playing optimally.
 Time complexity: as it performs DFS on the game tree, the time complexity of the min-max algorithm is O(b^m), where b is the branching factor of the game tree and m is the maximum depth of the tree.
 Space complexity: the space complexity of the mini-max algorithm is similar to DFS, which is O(bm).
 Limitation: the main drawback of the minimax algorithm is that it gets really slow for complex games such as chess, Go, etc. These games have a huge branching factor, and the player has lots of choices to decide among.

 This limitation of the minimax algorithm can be improved by alpha-beta pruning, which is discussed in the next topic.
Alpha-Beta Pruning

 Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique for the minimax algorithm.
 As we have seen, the number of game states the minimax search algorithm has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can effectively cut it in half. There is a technique by which we can compute the correct minimax decision without checking each node of the game tree, and this technique is called pruning.
 It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also called the alpha-beta algorithm.
 Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree leaves but also entire subtrees.
 The two parameters can be defined as:
 Alpha: the best (highest-value) choice we have found so far at any point along the path of the maximizer. The initial value of alpha is -∞.
 Beta: the best (lowest-value) choice we have found so far at any point along the path of the minimizer. The initial value of beta is +∞.

 Alpha-beta pruning returns the same move as the standard minimax algorithm does, but it removes all the nodes which do not really affect the final decision and only make the algorithm slow. Hence, by pruning these nodes, it makes the algorithm fast.

Constraint Satisfaction Problems

 This topic describes a way to solve a wide variety of problems more efficiently. We use a factored representation for each state: a set of variables, each of which has a value.
 A problem is solved when each variable has a value that satisfies all the constraints on the variable.
 A problem described this way is called a constraint satisfaction problem, or CSP.
 CSP search algorithms take advantage of the structure of states and use general-purpose rather than problem-specific heuristics to enable the solution of complex problems.
 The main idea is to eliminate large portions of the search space all at once by identifying variable/value combinations that violate the constraints.
Defining CSP

 Each domain Di consists of a set of allowable values {v1, …, vk} for variable Xi. Each constraint Ci consists of a pair <scope, rel>, where scope is a tuple of variables that participate in the constraint and rel is a relation that defines the values that those variables can take on.
 A relation can be represented as an explicit list of all tuples of values that satisfy the constraint, or as an abstract relation that supports two operations: testing whether a tuple is a member of the relation, and enumerating the members of the relation.
 For example, if X1 and X2 both have the domain {A, B}, then the constraint saying the two variables must have different values can be written as <(X1, X2), [(A, B), (B, A)]> or as <(X1, X2), X1 ≠ X2>.
 To solve a CSP, we need to define a state space and the notion of a solution. Each state in a CSP is defined by an assignment of values to some or all of the variables, {Xi = vi, Xj = vj, ...}.
 An assignment that does not violate any constraints is called a consistent or legal assignment. A complete assignment is one in which every variable is assigned, and a solution to a CSP is a consistent, complete assignment.
 A partial assignment is one that assigns values to only some of the variables.
Example problem: Map coloring

 We are given the task of coloring each region of Australia either red, green, or blue in such a way that no neighboring regions have the same color. To formulate this as a CSP, we define the variables to be the regions:
X = {WA, NT, Q, NSW, V, SA, T}

 The domain of each variable is the set:
Di = {red, green, blue}

 The constraints require neighboring regions to have distinct colors. Since there are nine places where regions border, there are nine constraints: SA ≠ WA, SA ≠ NT, SA ≠ Q, SA ≠ NSW, SA ≠ V, WA ≠ NT, NT ≠ Q, Q ≠ NSW, NSW ≠ V.
 The nodes of the graph correspond to variables of the problem, and a link
connects any two variables that participate in a constraint.
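A small backtracking sketch of this map-coloring CSP: variables are the regions, domains are the three colors, and the constraints are "neighbors differ". The `backtrack` function name and data layout are illustrative, not a fixed API.

```python
def backtrack(assignment, variables, domains, neighbors):
    if len(assignment) == len(variables):
        return assignment                      # complete, consistent assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Constraint check: no already-assigned neighbor has this color.
        if all(assignment.get(n) != value for n in neighbors[var]):
            result = backtrack({**assignment, var: value},
                               variables, domains, neighbors)
            if result is not None:
                return result
    return None                                # no value works: backtrack

variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
             "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": []}
solution = backtrack({}, variables, domains, neighbors)
```

Each recursive call extends a consistent partial assignment by one variable, so whole subtrees of the search space are pruned as soon as any constraint is violated.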
