OTM Module 1_Part 2
2. Introduction to Meta-Heuristic and Evolutionary Algorithms

Searching the Decision Space for Optimal Solutions
• The goal of solving an optimization problem is finding a solution in the decision
space whose value of the objective function is the best among all possible solutions.

• One of the procedures applicable for finding the optimum in a decision space is
sampling or trial‐and‐error search.

• The methods that apply trial‐and‐error search include

(1) sampling grid

(2) random sampling

(3) targeted sampling


Sampling Grid
• The goal of a sampling grid is evaluating all possible solutions and choosing the best
one.

• If the problem is discrete, the sampling grid method evaluates all possible solutions and the associated constraints.

• The solution that satisfies all the constraints and has the best objective function
value among all feasible solutions is chosen as the optimum.

• When the decision space of a discrete problem is large, the computational burden
involved in evaluating the objective function and constraints could be prohibitive.

• Therefore, the sampling grid method is practical for relatively small problems only.
Sampling Grid (Contd…)
• When an optimization problem is continuous, testing all solutions is not possible because there are infinitely many of them.

• In this situation the continuous problem is transformed into a discrete problem by overlaying a grid on the decision space, as shown in Figure 2.1.

Figure 2.1: Sampling grid on a two-dimensional decision space
Sampling Grid (Contd…)
• The intersections of the grid are points that are evaluated.

• After discretization of the decision space, the procedure followed is the same as that employed for discrete problems.

• In this method, reducing the size of the grid interval improves the accuracy of
the search while increasing the computational burden.

• It is generally impossible to find solutions that are very near the global optimum
of a complex optimization problem because to achieve that it would be
necessary to choose a very small grid for which the computational burden would
in all probability be prohibitive.
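The method can be illustrated with a short Python sketch; the objective function f, the feasibility test, the bounds, and the grid density below are hypothetical stand-ins for a concrete problem:

```python
import itertools

# Hypothetical two-dimensional objective to minimize (not from the text)
def f(x, y):
    return (x - 1.3) ** 2 + (y + 0.7) ** 2

# Hypothetical constraint: the feasible region is x + y <= 2
def is_feasible(x, y):
    return x + y <= 2

# Overlay a grid on the decision space [-5, 5] x [-5, 5]
steps = 101  # a smaller grid interval improves accuracy but adds computation
axis = [-5 + 10 * i / (steps - 1) for i in range(steps)]

best, best_val = None, float("inf")
for x, y in itertools.product(axis, axis):   # every grid intersection
    if is_feasible(x, y) and f(x, y) < best_val:
        best, best_val = (x, y), f(x, y)

print(best, best_val)   # grid point nearest the true optimum (1.3, -0.7)
```

Even in this toy case the grid requires 101 × 101 = 10,201 evaluations; the count grows exponentially with the number of decision variables, which is the computational burden noted above.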
Random Sampling
• In the random sampling method, possible solutions are chosen randomly and their objective functions are evaluated.

• The best solution among the chosen possible solutions is designated as the
optimum.

• Suppose that there are S possible solutions, among which exactly one is the optimal solution, and that K possible solutions are chosen randomly from the S possible ones to be evaluated.

• First, let us consider that the random selection is done without replacement, and
let Z denote the number of optimal solutions found in the randomly chosen sample
of K possible solutions (Z can only take the value 0 or 1 in this instance).
Random Sampling (Contd…)
• The probability that one of the K chosen possible solutions is the optimal one is
equal to P(Z = 1) = K/S.

• Therefore, if there are S = 10⁶ possible solutions and K = 10⁵ possible solutions are randomly chosen, the probability of selecting the optimal solution among those in the randomly chosen sample is only 0.10 (10.0%), in spite of the computational effort of evaluating 10⁵ possible solutions.

• Random selection can also be done with replacement. In this case, the probability that at least one of the K tested solutions is the optimal solution of the optimization problem equals P(Z ≥ 1) = 1 − (1 − 1/S)^K, which for the example above is about 0.095, slightly smaller than K/S.
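These probabilities are easy to check numerically; the following is a minimal Python sketch using the two formulas above:

```python
S = 10**6   # number of possible solutions
K = 10**5   # number of randomly sampled solutions

# Without replacement: P(Z = 1) = K / S
p_without = K / S

# With replacement: P(Z >= 1) = 1 - (1 - 1/S)**K
p_with = 1 - (1 - 1 / S) ** K

print(f"without replacement: {p_without:.4f}")  # 0.1000
print(f"with replacement:    {p_with:.4f}")     # ~0.0952, slightly lower
```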
Random Sampling (Contd…)
• One of the key shortcomings of the sampling grid and random sampling methods is that they require the entire decision space to be searched.

• This imposes a high and wasteful computational effort.

• In these two methods, the evaluation of any new possible solution is done
independently of previously tested solutions.

• In other words, there is no learning from the history of previous computations to guide the search for the optimal solution more efficiently as the search algorithm progresses.
Targeted Sampling
• Unlike the sampling grid and random sampling methods, targeted sampling searches the decision space taking into account the knowledge gained from previously tested possible solutions, and selects the next sample of solutions based on those results.

• Thus, targeted sampling gradually focuses on areas of the decision space where the optimum may be found with high probability.

• Targeted sampling is the basis of all meta-heuristic and evolutionary algorithms that rely on a systematic search to find an optimum.
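A minimal sketch of the idea (the objective f and the shrinking-neighbourhood rule are illustrative assumptions, not a specific published algorithm): each new sample is drawn around the best solution found so far, so the search gradually concentrates on promising regions.

```python
import random

def f(x):                       # hypothetical one-dimensional objective to minimize
    return (x - 2.0) ** 2

best = random.uniform(-10, 10)  # random initial solution
best_val = f(best)
width = 10.0                    # current search neighbourhood half-width

for _ in range(200):
    # sample near the incumbent: previous tests guide where to look next
    candidate = best + random.uniform(-width, width)
    if f(candidate) < best_val:
        best, best_val = candidate, f(candidate)
    width *= 0.98               # gradually focus on the promising region

print(best, best_val)           # approaches the optimum x = 2
```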
Targeted Sampling (Contd…)
• Meta-heuristic and evolutionary algorithms of the targeted sampling type are capable of solving well-posed real-world and complex problems that other types of optimization methods, such as linear and nonlinear programming, dynamic programming, and stochastic dynamic programming, cannot solve.

• Meta-heuristic and evolutionary algorithms have become a preferred solution approach for most complex engineering optimization problems.

• Meta-heuristic and evolutionary algorithms may prove to be computationally intensive in finding an exact solution, but sometimes a near-optimal solution is sufficient.
Definition of Terms of Meta‐Heuristic and Evolutionary Algorithms
• Meta‐heuristic and evolutionary algorithms are problem‐independent
techniques that can be applied to a wide range of problems.

• Algorithms are made of iterative operations or steps that are terminated when a
stated convergence criterion is reached.

• Figure 2.2 shows a general schematic of an algorithm.

Figure 2.2: General schematic of a simple algorithm
• Meta‐heuristic and evolutionary algorithms start from an initial state and initial
data.

• The purpose of these algorithms is finding appropriate values for the decision
variables of an optimization problem so that the objective function is optimized.

• Although there are differences between meta-heuristic and evolutionary algorithms, they all require initial data and feature an initial state, iterations, final state, decision variables, state variables, simulation model, constraints, objective function, and fitness function.
Initial State

• Each meta‐heuristic and evolutionary algorithm starts from an initial state of variables.

• This initial state can be predefined, randomly generated, or deterministically calculated from formulas.
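For example, a randomly generated initial state for a problem with n bounded decision variables might be produced as in the following sketch (all names are illustrative):

```python
import random

def random_initial_state(n, lower, upper):
    # one randomly generated initial solution within the variable bounds
    return [random.uniform(lower[i], upper[i]) for i in range(n)]

# e.g. three decision variables, each bounded in [0, 10]
x0 = random_initial_state(3, [0, 0, 0], [10, 10, 10])
print(x0)
```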

Iterations

• Algorithms perform operations iteratively in the search for a solution. Evolutionary or meta-heuristic algorithms start their iterations with one or several initial solutions of the optimization problem.

• Next, sequential operations are performed to generate new solution(s).

• An iteration ends when a new possible solution is generated. The newly generated solution(s) serve(s) as the initial solution(s) for the next iteration of the algorithm.
Final State
After satisfying the chosen termination criteria, the algorithm stops and reports the
best or final generated solution(s) of an optimization problem.

Termination criteria are defined in several different forms: (1) the number of
iterations, (2) the improvement threshold of the value of solution between
consecutive iterations, and (3) the run time of the optimization algorithm.

The first criterion refers to a predefined number of iterations that the algorithm is
allowed to execute. The second criterion sets a threshold for improving the solution
between consecutive steps. The third criterion stops the algorithm after a defined
run time and the best solution available at that time is reported.
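The three criteria can be combined in a single stopping test; a minimal sketch with hypothetical parameter names:

```python
import time

def should_stop(iteration, max_iterations,
                improvement, improvement_threshold,
                start_time, max_run_time):
    """Return True if any of the three termination criteria is met."""
    if iteration >= max_iterations:               # (1) number of iterations
        return True
    if improvement < improvement_threshold:       # (2) improvement between iterations
        return True
    if time.time() - start_time > max_run_time:   # (3) run time, in seconds
        return True
    return False
```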
Initial Data (Information)
Initial information is classified into two categories:

(1) data about the optimization problem, which are required for simulation, and

(2) parameters of the algorithm, which are required for its execution.

Decision Variables
• Decision variables are those whose values are calculated by execution of the algorithm, and their values are reported as the solution of the optimization problem upon reaching the stopping criterion.
State Variables
The state variables are related to the decision variables. In fact, the values of
the state variables change as the decision variables change.

Objective Function
The objective function determines the optimality of solutions. An objective
function value is assigned to each solution of an optimization problem.
Simulation Model

• A simulation model is a single function or a set of mathematical operations that evaluates the values of the state variables in response to the values of the decision variables.

• The simulation model is a mathematical representation of a real problem or system that forms part of an optimization problem.
• The mathematical representation is in terms of numerical and logical operations
programmed in the solution algorithm implemented for an optimization problem.
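As a toy illustration of such a mapping (the reservoir-style storage example is invented for this sketch, not taken from the text), a simulation model receives the decision variables and returns the state variables:

```python
def simulate(releases, inflows, initial_storage):
    """Toy simulation model: decision variables (releases) in,
    state variables (storages) out, via a mass-balance equation."""
    storages = []
    storage = initial_storage
    for release, inflow in zip(releases, inflows):
        storage = storage + inflow - release   # numerical/logical operations
        storages.append(storage)
    return storages

# decision variables -> simulation model -> state variables
print(simulate([5, 4, 6], [3, 7, 5], initial_storage=20))
```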
Constraints

• Constraints delimit the feasible space of solutions of an optimization problem and are considered in meta-heuristic and evolutionary algorithms.

• Constraints influence the desirability of each possible solution. After the objective function and the state variables related to each solution are evaluated, the constraints are calculated; they define conditions that must be satisfied for any possible solution to be feasible.

• If the constraints are satisfied, the solution is accepted and is called a feasible solution; otherwise the solution is removed or modified.
Fitness Function

The value of the objective function is not always the chosen measure of
desirability of a solution.

For example, the algorithm may employ a transformed form of the objective function, obtained by adding penalties that discourage the violation of constraints, in which case the transformed function is called the fitness function.

The fitness function is then employed to evaluate the desirability of possible solutions.
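A minimal sketch of a penalty-based fitness function for a minimization problem (the penalty weight and the constraint convention g(x) ≤ 0 are illustrative assumptions):

```python
def fitness(x, objective, constraints, penalty_weight=1e3):
    """Transformed objective: add a penalty for each violated constraint.
    Each constraint g is assumed to satisfy g(x) <= 0 when it holds."""
    value = objective(x)
    for g in constraints:
        violation = max(0.0, g(x))           # zero if the constraint holds
        value += penalty_weight * violation  # infeasible solutions look worse
    return value

# example: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0
print(fitness(0.5, lambda x: x**2, [lambda x: 1 - x]))  # heavily penalized
print(fitness(1.0, lambda x: x**2, [lambda x: 1 - x]))  # feasible: just x^2
```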
Principles of Meta‐Heuristic and Evolutionary Algorithms
Figure 2.3 depicts the relation between the simulation model and the optimization algorithm in an
optimization problem.
Principles of Meta-Heuristic and Evolutionary Algorithms (Contd…)
• The decision variables are inputs to the simulation model.

• Then the state variables, which are outputs of the simulation model, are evaluated. Thereafter, the objective function is evaluated.

• In the next step, the problem constraints are evaluated, and lastly the fitness value of the current decision variables is calculated.

• At this point, the optimization algorithm generates a new possible solution (new values of the decision variables) to continue the iterations if a termination criterion has not been reached.

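Putting these steps together, the generic loop can be sketched as follows; every function named here (simulate, objective, fitness, generate_new_solutions, should_stop) is a placeholder for a problem- or algorithm-specific component, not a prescribed API:

```python
def optimize(initial_solutions, simulate, objective, fitness,
             generate_new_solutions, should_stop):
    """Generic loop shared by meta-heuristic and evolutionary algorithms."""
    solutions = initial_solutions
    iteration = 0
    while True:
        evaluated = []
        for decision_vars in solutions:
            state_vars = simulate(decision_vars)            # simulation model
            obj = objective(decision_vars, state_vars)      # objective function
            fit = fitness(decision_vars, state_vars, obj)   # constraints -> fitness
            evaluated.append((fit, decision_vars))
        iteration += 1
        if should_stop(iteration, evaluated):               # termination criteria
            return min(evaluated, key=lambda pair: pair[0]) # best (fitness, solution)
        # the algorithm-specific step: learn from old solutions, propose new ones
        solutions = generate_new_solutions(evaluated)
```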

Principles of Meta-Heuristic and Evolutionary Algorithms (Contd…)

• Meta-heuristic and evolutionary algorithms are independent of the simulation model; they only employ the values of the current state variables.

• In other words, these algorithms execute their operations independently of the equations and calculations executed by the simulation model.

• The main difference between the various meta-heuristic and evolutionary algorithms is how they generate new solution(s) in their iterative procedure, wherein they apply elements of artificial intelligence by learning from previous experience (old possible solutions) and employ the accumulated information to generate new possible solutions.


Figure 2.4 portrays the process of optimization by meta-heuristic and evolutionary algorithms.

In summary, meta-heuristic and evolutionary algorithms first generate a set of initial solutions. The simulation model then calculates the state variables corresponding to the decision variables (the current possible solutions), with which the objective function is evaluated. The fitness values corresponding to the current decision variables are evaluated based on the calculated objective function.
Classification of Meta-Heuristic and Evolutionary Algorithms

Nature-Inspired and Non-Nature-Inspired Algorithms

• Some algorithms are inspired by natural processes, such as the genetic algorithm (GA), ant colony optimization (ACO), honey-bee mating optimization (HBMO), and so on.

• On the other hand, there are other types of algorithms, such as tabu search (TS), whose origins are unrelated to natural processes. It is sometimes difficult to clearly assign an algorithm to one of these two classes (nature- and non-nature-inspired), and many recently developed algorithms do not fit either class or may feature elements of both.
Classification of Meta-Heuristic and Evolutionary Algorithms

Population-Based and Single-Point Search Algorithms

• Some algorithms iteratively calculate one possible solution to an optimization problem. This means that the algorithm generates a single solution and attempts to improve that solution in each iteration. Algorithms that work on a single solution are called trajectory methods and encompass local search-based meta-heuristics, such as TS.

• In contrast, population-based algorithms perform search processes that describe the evolution of a set of solutions in the search space. The GA is a good example of a population-based algorithm. The contrast between the two styles is sketched below.
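A schematic sketch of the two search styles (the perturbation rule and step size are illustrative assumptions):

```python
import random

def trajectory_step(x, objective, step=0.1):
    # single-point (trajectory) search: perturb the one incumbent solution
    candidate = [xi + random.uniform(-step, step) for xi in x]
    return candidate if objective(candidate) < objective(x) else x

def population_step(population, objective, step=0.1):
    # population-based search: a whole set of solutions evolves each iteration;
    # real population methods (e.g. the GA) also exchange information among
    # members, for instance via crossover
    return [trajectory_step(x, objective, step) for x in population]
```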
Classification of Meta-Heuristic and Evolutionary Algorithms

Memory‐Based and Memory‐Less Algorithms


• A key feature of some meta‐heuristic and evolutionary algorithms is
that they resort to the search history to guide the future search for
an optimal solution.
• Memory‐less algorithms apply a Markov process to guide the search
for a solution as the information they rely upon to determine the
next action is the current state of the search process.
• There are several ways of using memory, which is nowadays
recognized as one of the fundamental capabilities of advanced
meta‐heuristic and evolutionary algorithms.
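A minimal sketch of memory use, in the spirit of the tabu list in TS (the fixed list length and the move representation are illustrative assumptions):

```python
from collections import deque

tabu = deque(maxlen=50)     # bounded memory of recently visited solutions

def is_allowed(candidate):
    # memory-based: consult the search history before accepting the next move
    return candidate not in tabu

def record(candidate):
    tabu.append(candidate)  # oldest entries are forgotten automatically
```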
