OTM Module 1_Part 2
Introduction to Meta-Heuristic and Evolutionary Algorithms
Searching the Decision Space for Optimal Solutions
• The goal of solving an optimization problem is finding a solution in the decision
space whose objective function value is the best among all possible solutions.
Sampling Grid
• One of the procedures applicable for finding the optimum in a decision space is
sampling, or trial-and-error search.
• If the problem is discrete, a sampling grid evaluates all possible solutions against
the objective function and the constraints.
• The solution that satisfies all the constraints and has the best objective function
value among all feasible solutions is chosen as the optimum.
• When the decision space of a discrete problem is large, the computational burden
involved in evaluating the objective function and constraints could be prohibitive.
• Therefore, the sampling grid method is practical for relatively small problems only,
as the sketch below illustrates.
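To make the procedure concrete, here is a minimal Python sketch of an exhaustive sampling-grid search over a small discrete decision space. The objective function, the single constraint, and the variable ranges are hypothetical placeholders chosen for illustration, not taken from the slides.

```python
from itertools import product

def objective(x, y):
    return (x - 3) ** 2 + (y - 2) ** 2   # hypothetical objective; minimize

def feasible(x, y):
    return x + y <= 8                    # hypothetical example constraint

# Evaluate every possible combination of the discrete decision variables.
best, best_value = None, float("inf")
for x, y in product(range(10), range(10)):
    if feasible(x, y) and objective(x, y) < best_value:
        best, best_value = (x, y), objective(x, y)

print(best, best_value)  # (3, 2) 0 -- the feasible solution with the best value
```

Even this toy problem requires 100 evaluations; the count grows exponentially with the number of decision variables, which is why the method suits small problems only.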
Sampling Grid (Contd…)
• When an optimization problem is continuous, testing all possible solutions is
impossible because there are infinitely many of them.
• After discretizing the decision space, the procedure followed is the same as
that employed for discrete problems.
• In this method, reducing the size of the grid interval improves the accuracy of
the search while increasing the computational burden.
• It is generally impossible to find solutions very near the global optimum of a
complex optimization problem this way, because doing so would require a very
small grid interval, for which the computational burden would in all probability
be prohibitive. The sketch below illustrates this accuracy/cost trade-off.
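A minimal sketch of grid search after discretizing a continuous decision space, showing how halving the grid interval improves accuracy while multiplying the number of evaluations. The objective function and search bounds are hypothetical placeholders.

```python
import numpy as np

def objective(x, y):
    return (x - 0.7) ** 2 + (y + 0.3) ** 2  # hypothetical; optimum at (0.7, -0.3)

# Finer grid intervals give better accuracy but far more evaluations.
for interval in (0.5, 0.1, 0.02):
    xs = np.arange(-2, 2 + interval, interval)
    grid = [(objective(x, y), x, y) for x in xs for y in xs]
    value, x, y = min(grid)
    print(f"interval={interval}: best=({x:.2f}, {y:.2f}), "
          f"f={value:.4f}, evaluations={len(grid)}")
```

Going from an interval of 0.5 to 0.02 moves the reported optimum much closer to the true one, but the evaluation count grows from 81 to over 40,000 in just two dimensions.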
Random sampling
• In the random sampling method, possible solutions are chosen randomly and their
objective function values are evaluated.
• The best solution among the chosen possible solutions is designated as the
optimum.
• Suppose that there are S possible solutions, of which exactly one (r = 1) is
optimal, and K possible solutions are chosen randomly among the S possible ones to
be evaluated.
• First, let us consider that the random selection is done without replacement, and
let Z denote the number of optimal solutions found in the randomly chosen sample
of K possible solutions (Z can only take the value 0 or 1 in this instance). The
sketch below simulates this setup.
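A minimal Python sketch of random sampling without replacement. The values of S and K and the objective function are hypothetical placeholders used only to illustrate the method.

```python
import random

S, K = 1_000_000, 100_000
objective = lambda s: abs(s - 123_456)  # hypothetical; unique optimum at 123456

sample = random.sample(range(S), K)     # K distinct candidates, no replacement
best = min(sample, key=objective)       # best among the sampled candidates
print(best, objective(best))            # hits the true optimum only by chance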
Random sampling (Contd…)
• The probability that one of the K chosen possible solutions is the optimal one is
equal to P(Z = 1) = K/S.
• Therefore, if there are S = 10⁶ possible solutions and K = 10⁵ possible solutions are
randomly chosen, the probability of selecting the optimal solution among those in
the randomly chosen sample is only 0.10 (10%), in spite of the computational effort
of evaluating 10⁵ possible solutions.
• Random selection can also be done with replacement. In this method, the
probability that at least one of the K tested solutions is the optimal solution of the
optimization problem equals P(Z ≥ 1) = 1 − (1 − 1/S)^K. The sketch below
evaluates both probabilities numerically.
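A quick numeric check of the two selection schemes for the example above (S = 10⁶, K = 10⁵):

```python
S, K = 10**6, 10**5

p_without = K / S              # without replacement: P(Z = 1) = K/S
p_with = 1 - (1 - 1/S)**K      # with replacement: P(Z >= 1)

print(f"without replacement: {p_without:.4f}")  # 0.1000
print(f"with replacement:    {p_with:.4f}")     # ~0.0952
```

Sampling with replacement gives a slightly lower probability (about 9.5% versus 10%) because repeated draws can waste evaluations on solutions already tested.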
Random sampling (Contd…)
• One of the key shortcomings of the sampling grid and random sampling
methods is that they require searching the entire decision space.
• In these two methods, the evaluation of any new possible solution is done
independently of previously tested solutions, so earlier results do not guide the search.
• Targeted sampling, in contrast, uses the results of previous evaluations to focus
the search gradually on areas of the decision space where the optimum may be
found with a high probability; a sketch of this idea appears after this list.
• Meta-heuristic and evolutionary algorithms are made of iterative operations or
steps that are terminated when a stated convergence criterion is reached.
• The purpose of these algorithms is finding appropriate values for the decision
variables of an optimization problem so that the objective function is optimized.
• Each meta-heuristic and evolutionary algorithm starts from an initial state, in
which initial solution(s) are generated randomly or computed from formulas.
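One simple way to realize targeted sampling, shown here as an illustrative scheme rather than a specific algorithm from the slides, is to sample around the best solution found so far and shrink the sampling radius each round. The objective function and all parameter values are hypothetical placeholders.

```python
import random

def objective(x):
    return (x - 4.2) ** 2                     # hypothetical; optimum at x = 4.2

center, radius = 0.0, 10.0
for _ in range(20):
    # Sample candidates around the current best, then keep any improvement.
    candidates = [center + random.uniform(-radius, radius) for _ in range(50)]
    best = min(candidates, key=objective)
    if objective(best) < objective(center):
        center = best
    radius *= 0.7                             # focus on the promising region

print(center, objective(center))              # converges close to 4.2
```

Unlike grid or random sampling, each round here exploits what previous evaluations revealed, which is the defining feature of targeted sampling.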
Iterations
• An iteration ends when a new possible solution is generated. The newly generated
solution(s) serve as initial solution(s) for the next iteration of the algorithm.
Final State
After satisfying the chosen termination criteria, the algorithm stops and reports the
best or final generated solution(s) of an optimization problem.
Termination criteria are defined in several different forms: (1) the number of
iterations, (2) the improvement threshold of the value of solution between
consecutive iterations, and (3) the run time of the optimization algorithm.
The first criterion allows the algorithm to execute a predefined number of
iterations. The second criterion stops the algorithm when the improvement in the
solution between consecutive iterations falls below a set threshold. The third
criterion stops the algorithm after a defined run time, and the best solution
available at that time is reported. A skeleton combining these three criteria is
sketched below.
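A minimal skeleton of a meta-heuristic run, wiring together the initial state, the iteration loop, and the three termination criteria named above. The objective function, the neighborhood move, and all parameter values are hypothetical placeholders.

```python
import random, time

def objective(x):
    return (x - 1.5) ** 2                   # hypothetical; minimize

MAX_ITERS, IMPROVE_EPS, MAX_SECONDS = 10_000, 1e-9, 2.0

x = random.uniform(-10, 10)                 # initial state: a random solution
start = time.time()
for _ in range(MAX_ITERS):                  # criterion 1: iteration budget
    candidate = x + random.gauss(0, 0.5)    # generate a new possible solution
    improvement = objective(x) - objective(candidate)
    if improvement > 0:
        x = candidate                       # the new solution seeds the next iteration
    if 0 < improvement < IMPROVE_EPS:       # criterion 2: improvement threshold
        break
    if time.time() - start > MAX_SECONDS:   # criterion 3: run-time limit
        break

print(x, objective(x))                      # final state: best solution found
```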
Initial Data (Information)
Initial information is classified into two categories:
(1) data about the optimization problem, which are required for simulation, and
(2) parameters of the algorithm, which are required for its execution.
Decision Variables
• Decision variables are those whose values are calculated by execution of the algorithm;
their values are reported as the solution of the optimization problem once the
stopping criterion is reached.
State Variables
The state variables are related to the decision variables. In fact, the values of
the state variables change as the decision variables change.
Objective Function
The objective function determines the optimality of solutions. An objective
function value is assigned to each solution of an optimization problem.
Simulation Model
The value of the objective function is not always the chosen measure of the
desirability of a solution; a simulation model may be needed to compute the state
variables on which that measure depends.
• First, the optimization algorithm generates a possible solution, i.e., a set of values
for the decision variables, and passes it to the simulation model.
• Then, the state variables, which are outputs of the simulation model, are evaluated.
• In the next step, the problem constraints are evaluated, and lastly the fitness value of
the solution is calculated.
• At this time, the optimization algorithm generates a new possible solution of decision
variables, and the cycle repeats until a termination criterion is satisfied.
• Optimization algorithms do not require the details of the simulation model; they
only employ the values of the current state variables. The sketch below illustrates
this loop.
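A minimal sketch of the optimize-simulate loop: the algorithm proposes decision variables, a black-box simulation model returns state variables, and the constraints and fitness are evaluated on those states. Every function and value here is a hypothetical placeholder.

```python
import random

def simulation_model(decision):
    # Black box from the algorithm's point of view: decision -> state variables.
    return {"state": 2.0 * decision - 1.0}

def constraints_ok(states):
    return states["state"] <= 10.0          # hypothetical constraint on a state

def fitness(states):
    return abs(states["state"] - 3.0)       # hypothetical fitness; minimize

best, best_fit = None, float("inf")
for _ in range(1000):
    decision = random.uniform(-5.0, 5.0)    # generate a possible solution
    states = simulation_model(decision)     # evaluate the state variables
    if not constraints_ok(states):          # evaluate the problem constraints
        continue
    f = fitness(states)                     # lastly, compute the fitness value
    if f < best_fit:
        best, best_fit = decision, f

print(best, best_fit)                       # decision near 2.0 (state near 3.0)
```

Note that the loop never looks inside `simulation_model`; it only consumes the current state variables, matching the black-box relationship described above.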