Chapter04.3-4 4e

Local search algorithms operate by iteratively improving a single state or small set of states rather than exploring the full search space. This includes hill-climbing, simulated annealing, and genetic algorithms. These methods are useful for optimization problems where the goal is to find the best configuration rather than the path. Continuous state spaces can be handled by discretization or gradient-based methods. Nondeterministic or partially observable environments require maintaining a belief state about possible world states rather than a single state.

Uploaded by Nazia Enayet

Artificial Intelligence: A Modern Approach

Fourth Edition, Global Edition

Chapter 4

Search in Complex Environments

Chapter 4, Sections 3–4 1

© 2022 Pearson Education Ltd.


Outline
♦ Local Search and Optimization Problems
♦ Hill-climbing
♦ Simulated annealing
♦ Genetic algorithms
♦ Local search in continuous spaces
♦ Search with Nondeterministic Actions
♦ Search in Partially Observable Environments



Local Search and Optimization Problems

In many optimization problems the path is irrelevant; the goal state itself is the solution

Then state space = set of “complete” configurations; find the optimal configuration, e.g., TSP, or find a configuration satisfying constraints, e.g., a timetable

In such cases, one can use iterative improvement algorithms: keep a single “current” state and try to improve it

Local search algorithms operate by searching from a start state to neighboring states, without keeping track of the paths or the set of states that have been reached.

They are not systematic: they might never explore a portion of the search space where a solution actually resides.



Example: Travelling Salesperson Problem
Start with any complete tour, perform pairwise exchanges

Variants of this approach get within 1% of optimal very quickly with thousands of cities
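The pairwise-exchange idea is commonly implemented as 2-opt: reverse the tour segment between two cities whenever doing so shortens the tour. A minimal sketch (the tour representation and distance-matrix interface are illustrative assumptions, not from the slide):

```python
def tour_length(tour, dist):
    """Total length of the closed tour, given a distance matrix dist."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Hill climbing on complete tours: keep applying pairwise (2-opt)
    exchanges while any of them shortens the tour."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n):
                # Reversing tour[i+1..j] replaces edges (i, i+1) and (j, j+1)
                candidate = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour, improved = candidate, True
    return tour
```

Starting from any complete tour, this terminates at a tour no single exchange can improve, which is exactly the local-optimum behavior the slide describes.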



Example: n-queens

Put n queens on an n × n board with no two queens on the same row, column, or diagonal

Move a queen to reduce the number of conflicts

[Figure: three board states with h = 5, h = 2, and h = 0 conflicts]

Almost always solves n-queens problems almost instantaneously for very large n, e.g., n = 1 million



Hill-climbing (or gradient ascent/descent)
“Like climbing Everest in thick fog with amnesia”

function Hill-Climbing(problem) returns a state that is a local maximum
  inputs: problem, a problem
  local variables: current, a node
                   neighbor, a node

  current ← Make-Node(Initial-State[problem])
  loop do
      neighbor ← a highest-valued successor of current
      if Value[neighbor] ≤ Value[current] then return State[current]
      current ← neighbor
  end
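A minimal Python sketch of this pseudocode, specialized to n-queens (the state encoding, one queen per column, is an illustrative choice): the value being climbed is −conflicts, and the loop returns the first state that no neighbor improves on.

```python
def conflicts(state):
    """Number of attacking queen pairs; state[c] = row of the queen in column c."""
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def hill_climbing(state):
    """Steepest-ascent hill climbing: move to the best neighbor until no
    neighbor is better.  May stop at a local maximum, not a solution."""
    while True:
        n = len(state)
        # Neighbors: move any one queen to any other row in its column
        neighbors = [state[:c] + [r] + state[c + 1:]
                     for c in range(n) for r in range(n) if r != state[c]]
        best = min(neighbors, key=conflicts)
        if conflicts(best) >= conflicts(state):
            return state
        state = best
```

As the next slide illustrates, the returned state can be a local maximum with conflicts remaining, which is what motivates random restarts and sideways moves.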



Hill-climbing contd.
Useful to consider the state-space landscape

[Figure: one-dimensional landscape of the objective function over the state space, showing the current state, a shoulder, a “flat” local maximum, a local maximum, and the global maximum]

Random-restart hill climbing overcomes local maxima; it is trivially complete

Random sideways moves escape from shoulders but loop forever on flat maxima



Simulated annealing
Idea: escape local maxima by allowing some “bad” moves
but gradually decrease their size and frequency

function Simulated-Annealing(problem, schedule) returns a solution state
  inputs: problem, a problem
          schedule, a mapping from time to “temperature”
  local variables: current, a node
                   next, a node
                   T, a “temperature” controlling the probability of downward steps

  current ← Make-Node(Initial-State[problem])
  for t ← 1 to ∞ do
      T ← schedule[t]
      if T = 0 then return current
      next ← a randomly selected successor of current
      ∆E ← Value[next] − Value[current]
      if ∆E > 0 then current ← next
      else current ← next only with probability e^(∆E/T)
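A direct Python transcription of this pseudocode. The one-dimensional toy objective and the exponential cooling schedule used below are illustrative assumptions, not part of the slide:

```python
import math
import random

def simulated_annealing(initial, value, successors, schedule):
    """Follows the slide's pseudocode: always accept uphill moves, and
    accept downhill moves with probability e^(dE/T)."""
    current = initial
    t = 1
    while True:
        T = schedule(t)
        if T == 0:
            return current
        nxt = random.choice(successors(current))
        dE = value(nxt) - value(current)
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt
        t += 1
```

For example, maximizing value(x) = −(x − 3)² over the integers with successors x ± 1 wanders early at high temperature, then settles at x = 3 once the temperature is low, since downhill acceptance probability decays to zero.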



Properties of simulated annealing
At fixed “temperature” T, the state occupation probability reaches the Boltzmann distribution

    p(x) = αe^(E(x)/kT)

T decreased slowly enough ⇒ always reach the best state x*,
because e^(E(x*)/kT) / e^(E(x)/kT) = e^((E(x*)−E(x))/kT) ≫ 1 for small T

Is this necessarily an interesting guarantee??

Devised by Metropolis et al., 1953, for modelling physical processes

Widely used in VLSI layout, airline scheduling, etc.



Local beam search
Idea: keep k states instead of 1; choose top k of all their
successors
Not the same as k searches run in parallel!
Searches that find good states recruit other searches to join them
Problem: quite often, all k states end up on same local hill
Idea: choose k successors randomly, biased towards good ones
Observe the close analogy to natural selection!
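A compact sketch of the idea on a toy one-dimensional objective (the objective and neighbor function below are illustrative assumptions):

```python
import heapq

def local_beam_search(starts, value, neighbors, steps=50):
    """Keep the k best states among the current states and all their
    successors; the beam concentrates wherever the good states are."""
    beam = list(starts)
    k = len(beam)
    for _ in range(steps):
        pool = set(beam) | {s for state in beam for s in neighbors(state)}
        beam = heapq.nlargest(k, pool, key=value)
    return max(beam, key=value)
```

Note the contrast with k independent searches: because the top k are chosen from the *pooled* successors, a start near a good region quickly draws the whole beam to it, which is exactly the "recruiting" behavior (and the crowding problem) described above.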



Genetic algorithms
= stochastic local beam search + generate successors from pairs of states

[Figure: 8-queens population, one state per row, read left to right:]

Initial state   Fitness / Selection   Selected pair   Cross-over   Mutation
24748552        24 (31%)              32752411        32748552     32748152
32752411        23 (29%)              24748552        24752411     24752411
24415124        20 (26%)              32752411        32752124     32252124
32543213        11 (14%)              24415124        24415411     24415417



Genetic algorithms contd.
GAs require states encoded as strings (GPs use programs)

Crossover helps iff substrings are meaningful components

[Figure: crossover of two 8-queens board states, combining the left part of one parent with the right part of the other]

GAs ≠ evolution: e.g., real genes also encode the replication machinery!
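A minimal GA for n-queens in the spirit of the figure. The population size, mutation rate, and the +1 fitness smoothing below are illustrative choices, not from the text:

```python
import random

def fitness(state):
    """Non-attacking queen pairs; n(n-1)/2 means a solution (28 for n = 8)."""
    n = len(state)
    attacks = sum(1 for i in range(n) for j in range(i + 1, n)
                  if state[i] == state[j] or abs(state[i] - state[j]) == j - i)
    return n * (n - 1) // 2 - attacks

def genetic_algorithm(population, generations=1000, p_mutate=0.1):
    n = len(population[0])
    perfect = n * (n - 1) // 2
    for _ in range(generations):
        if any(fitness(s) == perfect for s in population):
            break
        # Fitness-proportional selection (+1 keeps weights from all being zero)
        weights = [fitness(s) + 1 for s in population]
        next_gen = []
        for _ in range(len(population)):
            x, y = random.choices(population, weights=weights, k=2)  # selection
            c = random.randrange(1, n)                               # crossover point
            child = x[:c] + y[c:]                                    # cross-over
            if random.random() < p_mutate:                           # mutation
                child[random.randrange(n)] = random.randrange(n)
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)
```

Each generation reproduces the figure's pipeline: fitness, selection of pairs, crossover at a random point, and occasional mutation of one digit.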



Continuous state spaces
Suppose we want to site three airports in Romania:
– 6-D state space defined by (x1, y1), (x2, y2), (x3, y3)
– objective function f(x1, y1, x2, y2, x3, y3) =
  sum of squared distances from each city to the nearest airport

Discretization methods turn continuous space into discrete space,
e.g., empirical gradient considers ±δ change in each coordinate

Gradient methods compute

    ∇f = (∂f/∂x1, ∂f/∂y1, ∂f/∂x2, ∂f/∂y2, ∂f/∂x3, ∂f/∂y3)

to increase/reduce f, e.g., by x ← x + α∇f(x)

Sometimes can solve ∇f(x) = 0 exactly (e.g., with one city).
Newton–Raphson (1664, 1690) iterates x ← x − Hf⁻¹(x)∇f(x)
to solve ∇f(x) = 0, where Hij = ∂²f/∂xi∂xj
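A sketch of the gradient method on this objective. The city coordinates and the two-airport setup below are hypothetical; and since f is being *minimized* (sum of squared distances), the update takes the descent form x ← x − α∇f(x). Each city contributes to the gradient only through its nearest airport:

```python
# Hypothetical city coordinates (two clusters), not the Romania map
cities = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0),
          (4.0, 4.0), (5.0, 4.0), (4.0, 5.0)]

def f(airports):
    """Sum of squared distances from each city to its nearest airport."""
    return sum(min((cx - ax) ** 2 + (cy - ay) ** 2 for ax, ay in airports)
               for cx, cy in cities)

def grad(airports):
    """Analytic gradient: each city pulls only on its nearest airport."""
    g = [[0.0, 0.0] for _ in airports]
    for cx, cy in cities:
        i, (ax, ay) = min(enumerate(airports),
                          key=lambda p: (cx - p[1][0]) ** 2 + (cy - p[1][1]) ** 2)
        g[i][0] += 2 * (ax - cx)
        g[i][1] += 2 * (ay - cy)
    return g

def gradient_descent(airports, alpha=0.05, steps=200):
    """x <- x - alpha * grad f(x), repeated for a fixed number of steps."""
    for _ in range(steps):
        g = grad(airports)
        airports = [(ax - alpha * gx, ay - alpha * gy)
                    for (ax, ay), (gx, gy) in zip(airports, g)]
    return airports
```

With the city-to-airport assignment held fixed, ∇f(x) = 0 is solvable exactly (each airport sits at the centroid of its cities), which is the "sometimes can solve exactly" case noted above; gradient descent converges to the same configuration.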



Search with Nondeterministic Actions
After an action, the agent may not know which state it has transitioned to, because the environment is nondeterministic.

Rather, it knows the set of states it might be in; this set is called the “belief state”.

Examples:
• The erratic vacuum world: solutions are conditional plans with if-then-else steps, where the if tests the current state.
• AND-OR search trees: the agent’s own action choices give OR nodes; the environment’s choice of outcome gives AND nodes.
• Try, try again: a cyclic plan counts as a solution provided the minimum condition holds (every leaf is a goal state, and a leaf is reachable from every point in the plan).



Search in Partially Observable Environments
The problem of partial observability: the agent’s percepts are not enough to pin down the exact state.

Searching with no observation: the agent’s percepts provide no information at all; this is a sensorless problem (or a conformant problem).
• Solution: a sequence of actions, not a conditional plan

Searching in partially observable environments requires a function that monitors or estimates the environment in order to maintain the belief state.



Summary
Local search methods keep only a small number of states in memory; they are useful for optimization problems.

In nondeterministic environments, agents can apply AND-OR search to generate contingency plans that reach the goal regardless of which outcomes occur during execution.

A belief state is the set of possible states the agent might be in; it is the state representation for partially observable environments.

Standard search algorithms can be applied directly to belief-state space to solve sensorless problems.
