(Week 2) Lecture 2b Search 2 2022

Lecture Two B

Search within a Complex Environment
(We don't care about the sequence of actions, just provide a viable solution)

The CSC415 Team


Main Text for this Course
Title: Artificial Intelligence: A Modern Approach (2020, 4th Edition)
Authors: Stuart Russell and Peter Norvig


Preamble

 In Lecture 2(A), we discussed search techniques in fully observable, deterministic, static, and known environments.
 However, not all problems have these characteristics, especially real-world ones.
Optimization Problems
 An optimization problem is a computational problem defined by the following:
 A search space of possible solutions to the problem
 A set of constraints
 These govern the feasibility of solutions: a solution that violates a hard constraint cannot be regarded as valid.
 An objective function
 This can either be maximized or minimized, depending on the nature of the problem.
Optimization Problems (2)

 They are generally divided into two:
 Discrete optimization problems
 The variables to select from are discrete (countable): 0, 1, …, n
 Continuous optimization problems
 The variables to select from are not discrete
 The search space is effectively infinite

 The goal of an optimization agent is to select/find the best solution among the set of feasible solutions
Neighbour: A solution that can be generated from another solution by making a single move, evaluated w.r.t. a heuristic function

Global maximum/minimum: The highest (or lowest) point amongst all the objective function values

Local maximum: The highest peak in a neighbourhood of solutions, but not the actual best (the global maximum)

State space: The set of all feasible solutions to an optimization problem

Shoulder: A flat area of the landscape from which progress is still possible. "Sideways moves" are needed here.

Examples of optimization problems include: integrated-circuit design, factory floor layout, job-shop scheduling, telecommunications network optimization, crop planning, portfolio management, and the travelling salesman problem.
Local Search for Optimization Problems

 Local search algorithms operate by searching from a start state to neighbouring states
 without keeping track of the paths,
 nor the set of states that have been reached.
 This means they are not systematic; they might never explore a portion of the search space where a solution actually resides.
Advantages of Local Search Algorithms

1. They use very little memory.
2. They can often find reasonable solutions in large or infinite state spaces for which systematic algorithms are unsuitable.
Hill Climbing Search
(Just ascend, we might hit gold)

function HILL-CLIMBING(problem) returns a state that is a local maximum
  current ← problem.INITIAL
  while true do
    neighbour ← a highest-valued successor state of current
    if not Is-Better(VALUE(neighbour), VALUE(current)) then return current
    current ← neighbour

The idea of hill climbing is that it keeps track of one current state and on each iteration moves to the neighbouring state with the highest value, that is, it heads in the direction that provides the steepest ascent.

It terminates when it reaches a "peak" where no neighbour has a higher value.
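The loop above can be sketched in a few lines of Python. This is a minimal illustration, not code from the text; the toy problem (maximizing f(x) = -(x - 3)^2 over the integers, with neighbours x ± 1) is an assumption for demonstration:

```python
def hill_climbing(initial, value, neighbours):
    """Steepest-ascent hill climbing: repeatedly move to the best
    neighbour until no neighbour improves on the current state."""
    current = initial
    while True:
        best = max(neighbours(current), key=value)
        if value(best) <= value(current):   # peak reached: no uphill move
            return current
        current = best

# Toy maximization problem: f peaks at x = 3.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climbing(0, f, step))  # climbs 0 -> 1 -> 2 -> 3, then stops
```

Because the toy objective has a single peak, the climber always finds the global maximum; on a multi-peaked landscape it would stop at whichever local maximum it reached first.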
Hill-Climbing and the 8-Queens Problem

 In the 8-queens problem, the goal is to place 8 queens on a chessboard so that no queen attacks another.
 A heuristic function h is defined as:
 The number of pairs of queens that are in an attacking position
 Any configuration with h = 0 is a solution to the problem.
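The heuristic translates directly into Python. A sketch, assuming the usual one-queen-per-column encoding where board[i] gives the row of the queen in column i:

```python
def h(board):
    """Number of pairs of queens attacking each other.
    board[i] = row of the queen in column i (one queen per column)."""
    attacking = 0
    n = len(board)
    for i in range(n):
        for j in range(i + 1, n):
            same_row = board[i] == board[j]
            same_diag = abs(board[i] - board[j]) == j - i
            if same_row or same_diag:
                attacking += 1
    return attacking

print(h([0, 0, 0, 0, 0, 0, 0, 0]))  # all queens in one row: 28 attacking pairs
print(h([0, 4, 7, 5, 2, 6, 1, 3]))  # a known solution: h = 0
```

With one queen per column, only rows and diagonals need checking; columns can never clash.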
Current configuration

Queen 1 = 1 (row) + 2 (diagonal) = 3
Queen 2 = 2 (row) + 1 (diagonal) = 3
H = Queen 1 + Queen 2 + … + Queen 8

[Image at the centre: the current configuration of an 8-queens problem; the numbers shown depict the heuristic function values after moving a queen within its column to the spot where the number is.]

What move will a hill-climber make?

Can you compute the value of h for the configuration on the far right?
Simulated Annealing
function SIMULATED-ANNEALING(problem, schedule) returns a solution state
  current ← problem.INITIAL
  for t = 1 to ∞ do
    T ← schedule(t)
    if T = 0 then return current
    next ← a randomly selected successor of current
    ΔE ← VALUE(current) – VALUE(next)
    if ΔE > 0 then current ← next
    else current ← next only with probability e^(ΔE/T)

(Here VALUE is a cost to be minimized, as in gradient descent, so ΔE > 0 means next is an improvement.)
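A Python sketch of the procedure above, keeping the minimization reading of VALUE. The toy cost function, neighbour step, and cooling schedule are assumptions for illustration, not part of the algorithm itself:

```python
import math
import random

def simulated_annealing(initial, value, successor, schedule, rng):
    """Accept improving moves always; accept worsening moves with
    probability e^(dE/T), where dE = value(current) - value(next) <= 0."""
    current = initial
    t = 1
    while True:
        T = schedule(t)
        if T <= 0:
            return current
        nxt = successor(current, rng)
        dE = value(current) - value(nxt)   # > 0 means nxt has lower cost
        if dE > 0 or rng.random() < math.exp(dE / T):
            current = nxt
        t += 1

cost = lambda x: (x - 5) ** 2                     # minimum at x = 5
step = lambda x, rng: x + rng.choice([-1, 1])     # random neighbour
cooling = lambda t: 0 if t > 3000 else 2.0 * 0.99 ** t
result = simulated_annealing(50, cost, step, cooling, random.Random(0))
```

Early on, the high temperature lets the walk accept uphill (worse) moves; as T decays the acceptance probability e^(dE/T) collapses and the search freezes near a minimum.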
Evolutionary Algorithms

 Evolutionary algorithms can be seen as variants of stochastic beam search that are explicitly motivated by the metaphor of natural selection in biology:
 There is a population of individuals (states), in which the fittest (highest-value) individuals produce offspring (successor states) that populate the next generation, a process called recombination.
Core Concepts in Evolutionary Algorithms (EAs)
 Generation
 An iteration/epoch of an EA run.
 Population size
 The number of individuals, i.e. chromosomes, each encoding/representing a solution to the problem.
 Selection
 The process of selecting which individuals will become the parents of the next generation.
Core Concepts in Evolutionary Algorithms (EAs)
 Recombination/Crossover
 The act of combining two parents to produce offspring
 Mutation
 Randomly altering a small portion of an individual
 Elitism
 The practice of carrying the fittest individuals over to the next generation unchanged, so that the best solutions found so far are never lost.
Various forms of EA

 Genetic Algorithm (GA)
 Genetic Programming (GP)
 Evolution Strategy (ES)
 Differential Evolution (DE)
 Memetic Algorithm (MA)
 Learning Classifier Systems (LCS)
function GENETIC-ALGORITHM(population, fitness) returns an individual
  repeat
    weights ← WEIGHTED-BY(population, fitness)
    population2 ← empty list
    for i = 1 to SIZE(population) do
      parent1, parent2 ← WEIGHTED-RANDOM-CHOICES(population, weights, 2)
      child ← REPRODUCE(parent1, parent2)
      if (small random probability) then child ← MUTATE(child)
      add child to population2
    population ← population2
  until some individual is fit enough, or enough time has elapsed
  return the best individual in population, according to fitness

function REPRODUCE(parent1, parent2) returns an individual
  n ← LENGTH(parent1)
  c ← random number from 1 to n
  return APPEND(SUBSTRING(parent1, 1, c), SUBSTRING(parent2, c+1, n))
Genetic Algorithm and the 8-Queens Problem

 To use a GA to solve the 8-queens problem, we can do the following:
 Use a string of 8 digits to represent an individual solution (a chromosome)
 The i-th digit represents the row where the queen in column i is placed.
 For example, the chromosome below means: place the queen in column 1 on the 3rd row, place the queen in column 2 on the 4th row, and so on…

3 4 4 5 2 1 7 8
Fitness function = Number of non-attacking pairs = Total number of pairs (28) – Number of attacking pairs

 Fitness of individual 1 in the initial population = 24, Selection rate = 30.77%
 Fitness of individual 2 in the initial population = 23, Selection rate = 29.49%
 Fitness of individual 3 in the initial population = 20, Selection rate = 25.64%
 Fitness of individual 4 in the initial population = 11, Selection rate = 14.10%
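The selection rates above are each individual's fitness divided by the population's total fitness (24 + 23 + 20 + 11 = 78). A quick check in Python, assuming this fitness-proportionate selection scheme:

```python
def selection_rates(fitnesses):
    """Fitness-proportionate selection: each individual's share
    of the total fitness of the population."""
    total = sum(fitnesses)
    return [f / total for f in fitnesses]

fits = [24, 23, 20, 11]                      # the four individuals above
rates = selection_rates(fits)
print([round(100 * r, 2) for r in rates])    # [30.77, 29.49, 25.64, 14.1]
```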
Programming Project 1
GA for solving a travelling salesman problem

 Remember the TSP requires finding a Hamiltonian cycle: each city in the graph is visited exactly once, and the traveller/agent returns to the original city.
 For this task, you are to implement a GA to solve a TSP of up to 20 cities/nodes.
Programming Project 1 - Task

 Generate a random graph with the following properties:
 The number of nodes should be determined randomly within the range [15, 20]
 The weights on the edges should be generated randomly within the range [1, 20]
 Implement a GA that will attempt to generate an optimal solution for your TSP graph.
 Run the GA for up to 1000 iterations/generations
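The graph described above can be generated in a few lines of Python. This is a sketch under my own assumptions: an adjacency-dict representation and a complete graph (the TSP is usually posed on a complete graph so that every tour exists):

```python
import random

def random_tsp_graph(seed=None):
    """Complete undirected graph: 15-20 nodes, edge weights in [1, 20]."""
    rng = random.Random(seed)
    n = rng.randint(15, 20)
    graph = {i: {} for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            w = rng.randint(1, 20)
            graph[i][j] = w      # undirected: store the weight both ways
            graph[j][i] = w
    return graph

g = random_tsp_graph(seed=42)
print(len(g))  # number of cities, somewhere in [15, 20]
```

Passing a seed makes runs reproducible, which helps when comparing GA configurations across the 1000 generations.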
Programming Project 1 – Task (2)

 Use a chromosome length of n to encode the solution of the TSP
 where n is the number of nodes in your generated graph
 For the mutation rate, use 10% or 0.1
 This means that about 10% of the genes that make up a chromosome will be mutated
Search with Non-Deterministic Actions

 In this case, the outcome of an action is uncertain: the agent does not know for sure which state it will be in after acting.
 The set of possible states the agent might be in is called a belief state.
 The solution is usually referred to as a conditional plan or a strategy.
A Vacuum Cleaner in a Non-Deterministic Environment

 There are eight states in the sample environment (see next slide).
 There are three possible actions:
 Suck: attempt to suck up the dirt on the floor
 Right: move to the right side of the room
 Left: move to the left side of the room
 The goal of the vacuum cleaner is to completely clean the floor (no dirt in any area).
In the erratic vacuum world, the Suck action works as follows:
 When applied to a dirty square, the action cleans the square and sometimes cleans up dirt in an adjacent square, too.
 When applied to a clean square, the action sometimes deposits dirt on the carpet.

 Conditional plan for State 1:
 [Suck, if State = 5 then [Right, Suck] else []]

The solutions are normally trees rather than sequences, due to the possibility of if-then-else steps.

[Figure: eight possible states of the erratic vacuum world]
Solving Non-Deterministic Environments Using AND-OR Search Trees

 This kind of tree fits a non-deterministic environment because the actions of an agent are non-deterministic.
 Due to the nature of the environment, an agent has to find a plan for each possible state that it could be in after taking an action.
Solving Non-Deterministic Environments Using AND-OR Search Trees (Cont'd)

 Firstly, an agent can take one action out of the possible actions for a state.
 This can be modelled using an OR node.
 However, an action can now lead to several possible belief states, and these states need to be catered for.
Solving Non-Deterministic Environments Using AND-OR Search Trees (Cont'd)

 For example, the Suck action in state 1 results in the belief state {5, 7}, so the agent would need to find a plan for state 5 and for state 7.
 This can be modelled using an AND node.
 The OR and AND nodes alternate, leading to a tree referred to as an AND-OR search tree.
Explanation of the AND-OR Search Tree
 State nodes are OR nodes where some action must be chosen.
 At the AND nodes, shown as circles, every outcome must be handled, as indicated by the arc linking the outgoing branches.

 The search tree algorithm is on the next slide.

[Figure: AND-OR search tree for the erratic vacuum world problem]
function AND-OR-SEARCH(problem) returns a conditional plan or failure
  return OR-SEARCH(problem, problem.INITIAL, [])

function OR-SEARCH(problem, state, path) returns a conditional plan or failure
  if problem.IS-GOAL(state) then return the empty plan
  if IS-CYCLE(path) then return failure
  for each action in problem.ACTIONS(state) do
    plan ← AND-SEARCH(problem, RESULTS(state, action), [state] + path)
    if plan ≠ failure then return [action] + plan
  return failure

function AND-SEARCH(problem, states, path) returns a conditional plan or failure
  for each si in states do
    plani ← OR-SEARCH(problem, si, path)
    if plani = failure then return failure
  return [if s1 then plan1 else if s2 then plan2 else … if sn−1 then plann−1 else plann]
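The pseudocode above can be sketched in Python for the erratic vacuum world. The state encoding (location, dirt-left, dirt-right), the transition model, and the plan representation (an [action, {outcome: subplan}] nesting) are my own assumptions for illustration:

```python
def results(state, action):
    """Non-deterministic transition model for the erratic vacuum world.
    state = (loc, dirt_left, dirt_right), loc 0 = left square, 1 = right."""
    loc, dl, dr = state
    if action == "Left":
        return {(0, dl, dr)}
    if action == "Right":
        return {(1, dl, dr)}
    # Suck: 'here' is the current square's dirt, 'there' the other square's.
    here, there = (dl, dr) if loc == 0 else (dr, dl)
    if here:  # dirty square: cleaned, and sometimes the other square too
        outcomes = {(False, there), (False, False)}
    else:     # clean square: sometimes dirt is deposited
        outcomes = {(False, there), (True, there)}
    return {(loc, h, t) if loc == 0 else (loc, t, h) for h, t in outcomes}

def is_goal(state):
    return not state[1] and not state[2]

def or_search(state, path):
    """OR node: try each action; succeed if any action's outcomes all succeed."""
    if is_goal(state):
        return []                          # empty plan
    if state in path:
        return None                        # cycle -> failure
    for action in ["Suck", "Right", "Left"]:
        plan = and_search(results(state, action), path + [state])
        if plan is not None:
            return [action, plan]
    return None

def and_search(states, path):
    """AND node: every possible outcome state must have a plan."""
    branches = {}
    for s in states:
        p = or_search(s, path)
        if p is None:
            return None
        branches[s] = p
    return branches

plan = or_search((0, True, True), [])      # both squares dirty, agent on the left
print(plan[0])  # the plan begins with Suck
```

Starting from the state with both squares dirty, the returned plan mirrors the conditional plan on the earlier slide: Suck first, then Right and Suck in the branch where the right square is still dirty.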
Online Search Agents, Unknown Environments

Offline Search vs. Online Search

 In offline search, the possible paths to a goal are all mapped out before an agent takes the first action.
 That's not the case with online search: the agent's next action is based on the environment's response to its previous action.
 Think of a game in which an agent takes an action based on the move of its opponent.
Online Search

 Online search is a good idea in dynamic or semi-dynamic environments,
 where there is a penalty for sitting around and computing for too long.
 When an online agent takes an action, the percepts received from the environment determine the next action to take.
 Unlike offline search agents, which explore a model of the state space, online search agents act in the real (unpredictable) world.
Self-Study/Activity

 Study the algorithm tagged "ONLINE-DFS-AGENT" in the main text (Figure 4.21, page 137) and do the following:
 Do a walk-through of the algorithm on the maze problem depicted in Figure 4.19, page 136 of the main text.
 Can you think of/search for a better algorithm that could be used to solve the problem?
Appendix

 Is-Better(y, x) can mean two things depending on the context:
 For minimisation problems, it is interpreted as: is the value of y less than x?
 For maximisation problems, it is interpreted as: is the value of y greater than x?
