Studies in Computational Intelligence 975
Yaochu Jin
Handing Wang
Chaoli Sun
Data-Driven Evolutionary Optimization
Integrating Evolutionary Computation, Machine Learning and Data Science
Studies in Computational Intelligence
Volume 975
Series Editor
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
The series “Studies in Computational Intelligence” (SCI) publishes new develop-
ments and advances in the various areas of computational intelligence—quickly and
with a high quality. The intent is to cover the theory, applications, and design
methods of computational intelligence, as embedded in the fields of engineering,
computer science, physics and life sciences, as well as the methodologies behind
them. The series contains monographs, lecture notes and edited volumes in
computational intelligence spanning the areas of neural networks, connectionist
systems, genetic algorithms, evolutionary computation, artificial intelligence,
cellular automata, self-organizing systems, soft computing, fuzzy systems, and
hybrid intelligent systems. Of particular value to both the contributors and the
readership are the short publication timeframe and the world-wide distribution,
which enable both wide and rapid dissemination of research output.
Indexed by SCOPUS, DBLP, WTI Frankfurt eG, zbMATH, SCImago.
All books published in the series are submitted for consideration in Web of Science.
Yaochu Jin
Department of Computer Science
University of Surrey
Guildford, UK

Handing Wang
School of Artificial Intelligence
Xidian University
Xi’an, China

Chaoli Sun
School of Computer Science and Technology
Taiyuan University of Science and Technology
Taiyuan, China
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature
Switzerland AG 2021
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Foreword
process. Finally, purely data-driven optimization, where only data collected in real
life is available for optimization and neither user-designed computer simulations nor
physical experiments are allowed; examples include the optimization of a complex
industrial process or a social system. In all the above cases, the amount of collected
data may be either small or big, and the data may be heterogeneous, noisy, erroneous,
incomplete, ill-distributed or incremental.
Clearly, data-driven evolutionary optimization involves three different but
complementary scientific disciplines, namely evolutionary computation, machine
learning and deep learning, and data science. To effectively and efficiently solve
a data-driven optimization problem, data must be properly pre-processed. Mean-
while, machine learning techniques become indispensable for handling big data,
data paucity and various degrees of uncertainty in the data. Finally, solving the opti-
mization problem becomes extremely demanding when the optimization problem is
high-dimensional or large-scale, multi-objective and time-varying.
This book aims to provide researchers, including postgraduate research students,
and industrial practitioners with a comprehensive description of the state-of-the-art
methods developed for data-driven evolutionary optimization. The book is divided
into 12 chapters. For the self-containedness of the book, a brief introduction to care-
fully selected important topics and methods in optimization, evolutionary computa-
tion and machine learning is provided in Chaps. 1–4. Chapter 5 provides the funda-
mentals of data-driven optimization, including heuristics and acquisition function-
based surrogate management, followed by Chap. 6, presenting ideas that use
multiple surrogates for single-objective optimization. Representative evolutionary
algorithms for solving multi- and many-objective optimization problems and
surrogate-assisted data-driven evolutionary multi- and many-objective optimization
are described in Chaps. 7 and 8, respectively. Approaches to high-dimensional data-
driven optimization are elaborated in Chap. 9. A plethora of techniques for transfer-
ring knowledge from unlabelled to labelled data, from cheap objectives to expensive
ones, and from cheap problems to expensive ones are presented in Chap. 10, with the
help of semi-supervised learning, transfer learning and transfer optimization. Since
data-driven optimization is a strongly application-driven research area, offline data-
driven evolutionary optimization is treated in Chap. 11, exemplified with real-world
optimization problems such as airfoil design optimization, crude oil distillation opti-
mization and trauma system optimization. Finally, deep neural architecture search as
a data-driven expensive optimization problem is highlighted in Chap. 12. Of the 12
chapters, §3.5–3.6, §4.2, §5.2, §6.4–6.5, §7.2–7.3, §9.6–9.7, §11.1, §11.3 and
Chap. 12 are written by Handing Wang, and §3.7–3.8, §5.4.1, §5.5, §6.2–6.3,
§9.2–9.3 and Chap. 10 by Chaoli Sun. Handing worked as a Postdoctoral Associate
during 2015–18, and Chaoli first as an Academic Visitor during 2012–13 and then
as a Postdoctoral Associate during 2015–17, in my group at Surrey.
To make it easier for the reader to understand and use the algorithms intro-
duced in the book, the source code for most data-driven evolutionary algorithms
presented in Chaps. 5–12 is made available (http://www.soft-computing.de/DDEO/
DDEO.html) and all baseline multi-objective evolutionary algorithms introduced in
this book are implemented in PlatEMO, an open-source software tool for evolutionary
multi-objective optimization (https://github.com/BIMK/PlatEMO).
This book would not have been possible without the support of many previous
colleagues, collaborators and Ph.D. students of mine. First of all, I would like to
thank Prof. Dr. Bernhard Sendhoff and Prof. Dr. Markus Olhofer, with both of whom
I closely worked at the Honda Research Institute Europe during 1999–2010. After
I joined Surrey in 2010, Markus and I still maintained close collaboration on a
number of research projects on evolutionary optimization. I would also like to thank Prof.
Kaisa Miettinen from the University of Jyväskylä, Finland, with whom I worked
closely as Finland Distinguished Professor during 2015–17 on evolutionary multi-
objective optimization. Thanks go to Prof. Tianyou Chai and Prof. Jinliang Ding from
Northeastern University, China, with whom I also collaborate on evolutionary
optimization as Changjiang Distinguished Professor. The following collaborators
and previous or current Ph.D. students of mine have contributed to part of the work
presented in this book: Prof. Yew-Soon Ong, Prof. Jürgen Branke, Prof. Qingfu
Zhang, Prof. Xingyi Zhang, Prof. Aimin Zhou, Prof. Ran Cheng, Prof. Xiaoyan Sun,
Dr. Ingo Paenke, Dr. Tinkle Chugh, Mr. John Doherty, Dr. Dan Guo, Dr. Cuie Yang,
Dr. Ye Tian, Dr. Cheng He, Dr. Dudy Lim, Dr. Minh Nghia Le, Dr. Jie Tian, Dr.
Haibo Yu, Dr. Guo Yu, Dr. Michael Hüsken, Ms. Huiting Li, Ms. Xilu Wang, Ms.
Shufen Qin, Mr. Hao Wang, Ms. Guoxia Fu, Mr. Peng Liao, Mr. Sebastian Schmitt,
Ms. Kailai Gao, Dr. Jussi Hakanen, Dr. Tatsuya Okabe, Dr. Yanan Sun, Dr. Jan
O. Jansen, Mr. Martin Heiderich, Dr. Yuanjun Huang and Dr. Tobias Rodemann.
I would also like to take this opportunity to thank Prof. Xin Yao, Prof. Gary Yen,
Prof. Kay Chen Tan, Prof. Mengjie Zhang, Prof. Richard Everson, Prof. Jonathon
Fieldsend, Prof. Dr. Stefan Kurz, Prof. Edgar Körner and Mr. Andreas Richter for
their kind support over the past two decades. Finally, financial support from EPSRC
(UK), TEKES (Finland), National Natural Science Foundation of China, Honda
Research Institute Europe, Honda R&D Europe and Bosch Germany is gratefully
acknowledged.
1 Introduction to Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Definition of Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Mathematical Formulation . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2 Convex Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.3 Quasi-convex Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.4 Global and Local Optima . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Types of Optimization Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Continuous Versus Discrete Optimization . . . . . . . . . . . . 5
1.2.2 Unconstrained Versus Constrained Optimization . . . . . . 7
1.2.3 Single Versus Multi-objective Optimization . . . . . . . . . . . 7
1.2.4 Deterministic Versus Stochastic Optimization . . . . . . . . . 8
1.2.5 Black-Box and Data-Driven Optimization . . . . . . . . . . . . 8
1.3 Multi-objective Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3.1 Mathematical Formulation . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3.2 Pareto Optimality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3.3 Preference Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.3.4 Preference Articulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.4 Handling Uncertainty in Optimization . . . . . . . . . . . . . . . . . . . . . . . 17
1.4.1 Noise in Evaluations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.4.2 Robust Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.4.3 Multi-scenario Optimization . . . . . . . . . . . . . . . . . . . . . . . . 21
1.4.4 Dynamic Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.4.5 Robust Optimization Over Time . . . . . . . . . . . . . . . . . . . . 24
1.5 Comparison of Optimization Algorithms . . . . . . . . . . . . . . . . . . . . . 27
1.5.1 Algorithmic Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.5.2 Performance Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.5.3 Reliability Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.5.4 Statistical Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.5.5 Benchmark Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
1.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
Acronyms
RL Reinforcement learning
RM-MEDA Regularity model-based multiobjective estimation of distribution
algorithm
RMSE Root mean square error
ROOT Robust optimization over time
SA-COSO Surrogate-assisted cooperative swarm optimization algorithm
SBX Simulated binary crossover
SGD Stochastic gradient descent
SL-PSO Social learning particle swarm optimization
SOM Self-organizing map
SOP Single-objective optimization problem
SP Spacing
SQP Sequential quadratic programming
SVDD Support vector domain description
SVM Support vector machine
TLSAPSO Two-layer surrogate-assisted particle swarm optimization
TSP Travelling salesman problem
UCB Upper confidence bound
UMDA Univariate marginal distribution algorithm
Symbols
Chapter 1
Introduction to Optimization
x1 + x2 = 10.0, (1.2)
x1 − x2 < 5.0 (1.3)
0 ≤ x1, x2 < 10.0 (1.4)
Of the above constraints, Eq. (1.2) is called an equality constraint, Eq. (1.3)
is called an inequality constraint, and Eq. (1.4) is known as a boundary constraint,
which defines the lower and upper bounds of the decision variables. For an optimization
problem, one combination of the decision variables is known as a solution. For
example, for the optimization problem in (1.1), (x1 = 5.0, x2 = 5.0) is a solution of
the problem, and since this solution satisfies both constraints, it is called a feasible
solution. By contrast, (x1 = 1.0, x2 = 8.0) does not satisfy the equality constraint, and
therefore it is known as an infeasible solution. The task of the above optimization
problem is to find a feasible solution that minimizes the objective function (1.1).
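The feasibility rules above can be sketched in a few lines of Python; the tolerance on the equality constraint is an implementation choice (exact equality is fragile with floating-point arithmetic), not something prescribed by the text:

```python
# Feasibility check for the constraints in Eqs. (1.2)-(1.4).
# The equality constraint is tested with a small tolerance, as is common
# in numerical work.

def is_feasible(x1, x2, tol=1e-9):
    """Return True if (x1, x2) satisfies Eqs. (1.2)-(1.4)."""
    equality_ok = abs(x1 + x2 - 10.0) <= tol        # Eq. (1.2)
    inequality_ok = (x1 - x2) < 5.0                 # Eq. (1.3)
    bounds_ok = 0 <= x1 < 10.0 and 0 <= x2 < 10.0   # Eq. (1.4)
    return equality_ok and inequality_ok and bounds_ok

print(is_feasible(5.0, 5.0))  # the feasible solution from the text
print(is_feasible(1.0, 8.0))  # violates the equality constraint
```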
Generally, a minimization problem can be formulated as

min f(x)
subject to: g_j(x) ≤ 0, j = 1, 2, . . . , J,
h_k(x) = 0, k = 1, 2, . . . , K,
x^L ≤ x ≤ x^U, (1.5)

where f(x) is the objective function, x ∈ Rn is the decision vector, x^L and x^U are
the lower and upper bounds of the decision vector, n is the number of decision
variables, g_j(x) are the inequality constraints, h_k(x) are the equality constraints,
and J, K are the numbers of inequality and equality constraints, respectively. If a solution
x satisfies all constraints, it is called a feasible solution, and the feasible solutions that achieve
the minimum objective value are called the optimal solutions or minima. For a differentiable
objective function, a solution at which all partial derivatives equal zero is called
a stationary point, which may be a local minimum, a local maximum, or a saddle
point, as illustrated in Fig. 1.1.
Problem formulation in solving real-world optimization problems is of extreme
importance but challenging. The main reasons include:
• In the optimization of complex systems, there are large numbers of decision variables,
objectives and constraints, and it is impractical to optimize the overall system at
once. For example, designing a car may be divided into conceptual design, design
of components and design of parts. Note that the amounts of time and cost allowed
for these different design stages are different.
• Even for a subsystem, it is not easy for the user to determine whether a particular
requirement should be considered as an objective or as a constraint. In addition,
some objectives or constraints are added only after some preliminary optimization
Definition 1.1 The optimization problem in Eq. (1.5) is called convex, if for all x1,
x2 ∈ Rn, and for all α + β = 1, α ≥ 0, β ≥ 0, the following condition holds:

f(αx1 + βx2) ≤ αf(x1) + βf(x2).
A solution x∗ ∈ F is a global optimum if f(x∗) ≤ f(x), ∀x ∈ F. It is a local
optimum if there exists ρ > 0 such that f(x∗) ≤ f(x), ∀x ∈ F and ‖x − x∗‖ < ρ.
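The convexity condition of Definition 1.1, f(αx1 + βx2) ≤ αf(x1) + βf(x2), can be spot-checked numerically for a given function. The quadratic below is a stand-in example chosen for illustration; a random check of this kind is evidence, not a proof:

```python
import random

# Numerical spot-check of the convexity condition in Definition 1.1 for the
# sample function f(x) = x**2, which is known to be convex.

def f(x):
    return x * x

def convexity_holds(f, x1, x2, alpha):
    """Check f(a*x1 + b*x2) <= a*f(x1) + b*f(x2) with a + b = 1."""
    beta = 1.0 - alpha
    # A tiny tolerance guards against floating-point round-off.
    return f(alpha * x1 + beta * x2) <= alpha * f(x1) + beta * f(x2) + 1e-12

random.seed(0)
checks = [convexity_holds(f, random.uniform(-5, 5), random.uniform(-5, 5),
                          random.uniform(0, 1)) for _ in range(1000)]
print(all(checks))
```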
Fig. 1.3 An example of a continuous optimization problem. a Illustration of a jet engine turbine
blade. b A 3D B-spline representation of a blade using two 2D sections
• Decision variables. Parameters defining the geometry of the turbine blade. Note
that many different methods can be used to represent a geometry, e.g., based on
parameterization or B-splines, as shown in Fig. 1.3. Here, the decision variables are
related to the blade geometry and therefore are continuous. So this is a continuous
optimization problem. There is typically a trade-off between completeness and
compactness in representing a design. A complete representation is able to represent
all different structures, but may have a large number of decision variables.
By contrast, a compact representation has a small number of decision variables,
but may be limited in representing complex structures. Other requirements may include
causality and locality (Jin & Sendhoff, 2009).
• Constraints. Here the constraints may include mechanical constraints, among
others.
• Objectives. Usually, minimization of the pressure loss is the main target in design of
a turbine blade, which is closely related to energy efficiency. However, objectives
such as the deviation from a desired inflow angle, or the variance of the flow rate at the
outlet, may also be included in the optimization. Note that the pressure loss cannot
be calculated using an analytic mathematical function and can be evaluated using
numerical simulations or wind tunnel experiments only.
The TSP is a classical combinatorial optimization problem which aims to find the
shortest path connecting cities visited by a travelling salesman on his sales route.
Each city can be visited only once. The TSP can be formulated as follows.
• Decision variables. Here the order in which the cities are visited is the
decision variable. For example, in the 10-city example shown in Fig. 1.4, one solution
is A → B → C → D → E → F → G → H → I → J → A.
• Constraints. The constraints are that each city can be visited only once and that
the salesman must return to the starting city A if he starts from there.
• Objectives. The objective is to minimize the total length of the travelling path.
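The tour-length objective can be computed directly from a permutation of the cities. The sketch below uses a smaller 5-city instance with made-up coordinates (the coordinates of the 10-city example in Fig. 1.4 are not given in the text) and solves it by brute force, which is only feasible for tiny instances:

```python
import itertools
import math

# Hypothetical 2D city coordinates for a small TSP instance.
cities = {"A": (0, 0), "B": (1, 2), "C": (3, 1), "D": (4, 3), "E": (2, 4)}

def tour_length(order):
    """Total length of the closed tour visiting cities in the given order."""
    total = 0.0
    for a, b in zip(order, order[1:] + order[:1]):  # close the loop back to start
        (x1, y1), (x2, y2) = cities[a], cities[b]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

# Brute force over all tours starting at A.
best = min(tour_length(["A"] + list(p)) for p in itertools.permutations("BCDE"))
print(round(best, 3))
```

Because the tour is closed, any rotation of the same visiting order yields the same length, which is why fixing the start city A loses no generality.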
The problem in Eq. (1.5) has one objective function; such optimization
problems are therefore known as single-objective optimization problems (SOPs). In the real
world, however, most problems have more than one objective to optimize, and these
objectives usually conflict with each other. These problems are called multi-
objective optimization problems (MOPs). In the evolutionary computation commu-
nity, MOPs with more than three objectives are termed many-objective optimiza-
tion problems (MaOPs), mainly because they become dramatically more challenging
for algorithms designed for solving two- or three-objective problems.
There are also cases where the task of optimization is to find a feasible solution for
a highly constrained optimization problem, which is called constraint satisfaction.
Usually, it is assumed that a given optimization problem is time-invariant and that there is no uncer-
tainty in the objective or constraint functions, nor in the decision variables. These
optimization problems are called deterministic optimization problems. In contrast to
deterministic problems, stochastic problems are subject to uncertainty in the objec-
tive and/or constraint functions, and in decision variables. The uncertainty may come
from different sources, such as noise in sensors, randomness in performing numerical
simulations, and changes in the operating condition. In these cases, one is interested
in finding a solution that is less sensitive to the uncertainty, which is an important
research topic of robust optimization and dynamic optimization to be elaborated in
Sect. 1.4.
It should be noted that stochastic optimization may also refer to methodologies
in which the decision variables are treated as random variables or the search
allows random changes. Here, by a stochastic problem, we mean an optimization
problem that is subject to various types of uncertainty.
In the discussions so far, we assume that the objective of an optimization problem can
be described analytically by a mathematical expression. However, this is not always
possible for solving real-world problems, where the objective function is unknown
or cannot be written in an analytic form. These optimization problems are sometimes
called black-box optimization problems.
Typically, the objective value of a black-box optimization problem is evaluated
using computer simulations. There are also cases when the objective is evaluated by
performing physical or chemical experiments, although the system to be optimized is
not necessarily a black-box. For example, the aerodynamic performance of a turbine
engine or a vehicle is well understood scientifically, although the aerodynamic
performance of such systems can be evaluated only by solving a large number of
differential equations. In addition, in the era of data science and big data, many
optimization problems are solved based on collected data, which may be incomplete,
noisy, and heterogeneous. Therefore, these problems are better called data-driven
optimization problems (Jin et al., 2018), where optimization relies on data collected
from computer simulations, physical or chemical experiments, or daily life.
For example in Fig. 1.6, solution A dominates solutions C and D, while solution
B is not dominated by solution A. On the other hand, solution B does not dominate
A either. Therefore, solutions A and B are non-dominated, and similarly, solutions
B, C and D, do not dominate one another. If solutions A and B are not dominated
by any feasible solutions, they are Pareto optimal solutions.
Definition 1.6 A solution x∗ is called Pareto optimal, if there does not exist any
feasible solution x ∈ Rn such that x ≺ x∗ .
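The Pareto dominance relation used in Definition 1.6 can be coded directly for minimization. The objective vectors below are hypothetical stand-ins for solutions A to D in Fig. 1.6 (the figure's exact values are not given in the text):

```python
# A minimal Pareto dominance check for minimization: x dominates y if x is
# no worse in every objective and strictly better in at least one.

def dominates(fx, fy):
    """True if objective vector fx Pareto-dominates fy (minimization)."""
    return (all(a <= b for a, b in zip(fx, fy))
            and any(a < b for a, b in zip(fx, fy)))

# Hypothetical objective vectors mimicking the relations described for Fig. 1.6.
A, B, C, D = (1.0, 3.0), (3.0, 1.0), (2.0, 4.0), (1.5, 5.0)

print(dominates(A, C), dominates(A, D))  # A dominates C and D
print(dominates(A, B), dominates(B, A))  # A and B are mutually non-dominated
```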
The image of all Pareto optimal solutions in the objective space is called the
Pareto front or Pareto frontier, and the set of all Pareto optimal solutions in the decision space is
called the Pareto optimal set. In the example shown in Fig. 1.7, the Pareto set consists of
two connected piece-wise linear curves. In the figure, a few particular points that are often
used in optimization are marked: the ideal point, the nadir point and the knee point.
Definition 1.7 The ideal point, denoted by zideal , is constructed from the best objec-
tive values of the Pareto set.
Definition 1.8 The nadir point, denoted by znadir , is constructed from the worst
objective values over the Pareto optimal set.
In practice, a utopian point is often defined as follows:

z_i^utopian = z_i^ideal − ε_i, for all i = 1, 2, . . . , m, (1.21)

where ε_i > 0 is a small positive scalar.
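Given a finite approximation of the Pareto front, the ideal and nadir points of Definitions 1.7 and 1.8, and the utopian point of Eq. (1.21), follow from per-objective minima and maxima. The front below is a hypothetical three-point example:

```python
# Ideal, nadir and utopian points from a finite set of Pareto optimal
# objective vectors (minimization). eps is the small margin of Eq. (1.21).

def ideal_nadir_utopian(front, eps=1e-3):
    m = len(front[0])
    ideal = [min(f[i] for f in front) for i in range(m)]    # best per objective
    nadir = [max(f[i] for f in front) for i in range(m)]    # worst per objective
    utopian = [z - eps for z in ideal]                      # Eq. (1.21)
    return ideal, nadir, utopian

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]  # hypothetical Pareto front
ideal, nadir, utopian = ideal_nadir_utopian(front)
print(ideal, nadir, utopian)
```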
Fig. 1.7 Illustration of the Pareto set (left panel) and Pareto front (right panel), together with the
ideal point, nadir point, knee point (C), boundary points (A and B) and extreme points (C and D)
where

g_i(x1, x2) = f_i(x1) − f_i(x2) + Σ_{j≠i} α_ij ( f_j(x1) − f_j(x2) ), (1.25)

and α_ij is a parameter defining the trade-off rate between the i-th and j-th objectives.
From the above definitions, we can see that both ε-dominance and α-dominance
strengthen the Pareto dominance relation, enabling a solution to dominate more
solutions.
An illustration of the Pareto dominance, ε-dominance and α-dominance relations is
given in Fig. 1.8. In Fig. 1.8a, the region that solution A Pareto-dominates is shaded;
therefore, solution B is not dominated by A. Figure 1.8b shows the region that solution
A ε-dominates, which is larger than the region dominated under Pareto dominance.
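One common variant of ε-dominance is additive: x ε-dominates y if f_i(x) − ε ≤ f_i(y) in every objective, which enlarges the dominated region relative to plain Pareto dominance. This is a sketch of that variant (the exact formulation used in the book's figure may differ):

```python
# Additive epsilon-dominance versus plain Pareto dominance (minimization).

def dominates(fx, fy):
    return (all(a <= b for a, b in zip(fx, fy))
            and any(a < b for a, b in zip(fx, fy)))

def eps_dominates(fx, fy, eps):
    """Additive epsilon-dominance: fx shifted down by eps is no worse than fy."""
    return all(a - eps <= b for a, b in zip(fx, fy))

A, B = (1.0, 3.0), (0.9, 3.05)  # hypothetical objective vectors
print(dominates(A, B))           # A does not Pareto-dominate B
print(eps_dominates(A, B, 0.2))  # but A epsilon-dominates B for eps = 0.2
```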
g(x) = Σ_{i=1}^{m} w_i f_i(x) (1.26)
Similar to goals, however, it is hard for the user to specify accurate weights
without a good understanding of the problem. It is particularly tricky when the Pareto
front is non-convex, non-uniform, discrete, or degenerate.
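The weighted-sum scalarization of Eq. (1.26) collapses the m objectives into one function. The two quadratic objectives below are hypothetical stand-ins, and the coarse grid search is only meant to show the effect of the weights, not a recommended solver:

```python
# Weighted-sum scalarization of Eq. (1.26) with two illustrative objectives
# whose individual minima are at x = 0 and x = 2.

def f1(x):
    return x * x

def f2(x):
    return (x - 2.0) ** 2

def weighted_sum(x, weights, objectives):
    """g(x) = sum_i w_i * f_i(x)."""
    return sum(w * f(x) for w, f in zip(weights, objectives))

# A coarse grid search on the scalarized problem; equal weights favour a
# compromise between the two individual minima.
grid = [i / 100.0 for i in range(-100, 301)]
best_x = min(grid, key=lambda x: weighted_sum(x, [0.5, 0.5], [f1, f2]))
print(best_x)  # the midpoint x = 1.0 minimizes the equally weighted sum
```

Varying the weight vector traces out different Pareto optimal solutions, but, as the text notes, on a non-convex front some Pareto optimal solutions cannot be reached by any weight choice.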
• Reference vectors. Reference vectors (Wang et al., 2017b) are similar to goals
and weights in that they are meant to provide an expectation of the objectives.
Unlike the goals (reference points), reference vectors provide information
about the directions in the objective space in which solutions are desirable. Sim-
ilar to weights, reference vectors can also be used to convert a multi-objective
optimization problem into single-objective optimization problems.
• Preference relation. Another natural way of expressing preferences over different objectives
is to describe the relative importance between pairs of objectives using human lin-
guistic variables. For example, one may indicate that objective 1 is more important
than objective 2, or that objectives 1 and 2 are equally important.
In optimization, a preferred order of the objectives can be converted into weights
or weight intervals. However, one main disadvantage is that the preference relation
cannot handle non-transitivity.
• Utility functions. Utility functions can be used to represent user preferences, where the prefer-
ence information is implicitly embedded in the objective function to rank solutions.
Unlike preference relations, the utility function ranks solutions rather than the
objectives. For example, given N solutions x1, x2, . . ., xN, the user is required to
give his or her preferences over the solutions by ranking them in order. Then, an
imprecisely specified multi-attribute value theory formulation is employed to infer
the relative importance of the objectives. However, utility functions are based on
a strong assumption that all attributes of the preferences are independent, thereby
being unable to handle non-transitivity.
• Outranking. Neither preference relations based on the importance of objectives
nor the utility functions based on solutions are able to handle non-transitivity. An
alternative is the so-called outranking approach, which allows for non-transitivity. To determine an
outranking, preference and indifference thresholds for each objective are given
by a preference ranking organization method for enrichment evaluations (Brans
et al., 1986), according to which every two solutions are compared. Consequently,
a preference ranking is obtained, which can be used for search of preferred solu-
tions. However, the outranking based methods require a large number of parameter
settings, which is non-trivial for the user for problems having a large number of
objectives (Brans et al., 1986).
• Implicit preferences. Articulation of preferences is always challenging for the
user when there is little knowledge about the problem to be solved. In this case,
knee solutions, around which a small improvement of any objective causes a large
degradation of others, are always of interest to the user. In addition to knee points,
extreme points or the nadir point can work as a special form of preferences. With
the help of extreme points or the nadir point, the user can acquire knowledge about
the range of the Pareto front so as to describe their preferences more accurately.
Once the user preferences are properly modeled, they must be included in the opti-
mization process to obtain the preferred solutions, although this is not straightfor-
ward. In the following, we elaborate various preference articulation methods widely
used in multi-objective optimization.
min max_{i=1,2,...,m} w_i | f_i(x) − z_i^utopian | + ρ Σ_{i=1}^{m} | f_i(x) − z_i^utopian |, (1.30)
subject to: x ∈ Rn. (1.31)
where Δx and Δa are perturbations in the decision variables and environmental param-
eters, respectively, and z ∼ N(0, σ²) is additive noise, where σ² is the variance of the
noise.
Different methods have been proposed to handle different types of uncertainty in opti-
mization. For additive noise in fitness evaluations, the typical method is to evaluate
the fitness multiple times to reduce the influence of the noise (Rakshit et al., 2017).
For addressing the non-additive perturbations in the decision variables and envi-
ronmental parameters, one approach is to find a solution that is robust to changes
in environmental parameters or in the decision variables, which is typically known
as robust optimization. However, if the operating conditions change significantly,
a more realistic and effective approach will be multi-scenario optimization. If the
changes cannot be captured by a probability distribution, for example, there are con-
tinuous or periodic changes in the environment or decision variables, then dynamic
optimization may be more effective. Finally, robust optimization over time aims to
find the best compromise between performance and robustness, as well as
the cost that may be incurred in switching designs.
The most straightforward method to cancel out the additive noise in fitness evalua-
tions is averaging either over time or over space, i.e., sampling the same solution multiple
times, or sampling multiple solutions around it, respec-
tively, and then using the average of the sampled objective values as the final objective
value (Liu et al., 2014).
Since evolutionary algorithms are population-based search methods, other
approaches to noise reduction have also been proposed, including using a large
population size together with proportional selection, introducing recombination, and
introducing a threshold in selection. Since fitness evaluations can be computationally
intensive, noise reduction based on re-sampling or using a large population can be
impractical. Thus, efficient re-sampling during the optimization becomes impor-
tant. For example, adapting the sample size is an effective approach. Typically, one
can use a smaller sample size in the early search stage and a larger sample size in
the later stage; alternatively, a large sample size can be used for a more promising
solution, and a small sample size is used for a poor solution. Apart from adaptive
sampling, use of local regression models to estimate the true fitness value is also
possible.
Fig. 1.10 Illustration of robust solutions against changes in the decision variable (left panel) and
the environmental parameter (right panel)
The robustness of the solutions against perturbations in the decision variables and
environmental parameters is a basic requirement in practice. Figure 1.10 illustrates
two different situations of robustness. In the left panel, A and B are two local optima of
the objective function. Without considering the robustness of the solutions, solution
A has better performance than solution B if we consider a minimization
problem. However, when there is a perturbation Δx in the decision variable, f(xA)
will be much worse than f(xB). Therefore, we consider solution B to be more robust
than solution A. By contrast, in the right panel, solution A has the best performance at
the normal operating condition when a = a∗. However, when a becomes larger or
smaller, its performance worsens very quickly. In contrast to solution A, solution B
has worse performance at the normal operating condition; however,
its performance degrades more slowly. As a result, solution B performs better than
A over a ∈ R2 except for a ∈ R1. Thus, solution B is more robust than solution A
against changes in the environmental parameter a. Before we discuss methods for
obtaining robust optimal solutions, we first introduce a few widely used definitions
for robustness in the context of optimization (Beyer & Sendhoff, 2007; Jin & Branke,
2005).
• Expectation-based robustness measure. If we consider the perturbations in the
decision variables only, the expected objective function will be:

F_E(x) = ∫ f(x + δ, a) p(δ) dδ.    (1.38)
• Dispersion-based robustness measure. The robustness can also be defined using
the dispersion of the objective function in the presence of uncertainty, e.g.,

F_D(x) = ∫ ( f(x + δ) − f(x) )² p(δ) dδ,    (1.39)
Methods for the search of robust optimal solutions are based on the above robustness
definitions and their variants. That is, if one is interested in finding a robust optimal
solution, one of the above robustness measures can be used to replace the original
objective function. Since estimating the robustness, no matter which of the above
definitions is used, requires evaluating the objective value multiple times for one
solution, several ideas for reducing the computational cost have been proposed by
taking advantage of the population-based search algorithm. For example, for the
expected objective function, the explicit averaging approach can calculate the mean
objective value to approximate the expected objective:
f̄(x) = (1/N) Σ_{i=1}^{N} f(x + δ_i),    (1.40)

where δ_i is the i-th sampled perturbation. Similarly, the expectation- and
dispersion-based robustness measures can be estimated from N sampled points
x_i = x + δ_i:

F_E(x) = (1/N) Σ_{i=1}^{N} f(x_i),    (1.42)

F_D(x) = (1/N) Σ_{i=1}^{N} [ f(x_i) − F_E(x) ]².    (1.43)
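A minimal Monte Carlo sketch of these explicit-averaging estimates might look as follows (my illustration, not the authors' implementation; the quadratic test function and the Gaussian perturbation distribution are assumed):

```python
import random

def f(x):
    # Toy objective (minimization): one-dimensional quadratic (assumed).
    return (x - 1.0) ** 2

def robustness_estimates(x, n=1000, sigma=0.05, seed=0):
    # Estimate F_E (Eq. 1.42) and F_D (Eq. 1.43) by sampling N
    # perturbed copies x_i = x + delta_i of the solution x.
    rng = random.Random(seed)
    samples = [f(x + rng.gauss(0.0, sigma)) for _ in range(n)]
    f_e = sum(samples) / n                           # expected objective
    f_d = sum((s - f_e) ** 2 for s in samples) / n   # dispersion
    return f_e, f_d
```

Ranking candidate solutions by F_E rather than by f alone is what steers the search toward the flatter, more robust optimum.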
Fig. 1.11 The wave drag coefficient distribution of three RAE5225 airfoil designs obtained by
single- and multi-scenario optimization methods
Robust optimization and dynamic optimization are two very different philosophies
for handling uncertainty in optimization. While robust optimization assumes that
one single robust optimal solution can handle all uncertainty to be experienced in
the lifetime of the design, dynamic optimization hypothesizes that the algorithm is
Fig. 1.12 a Illustration of a moving Pareto set (left), and a model predicting its center (right). b
The moving Pareto set of a dynamic multi-objective optimization problem (left), the real trajectory
of the center of the Pareto set denoted by red circles, and the predicted trajectory of the center in
different environments denoted by green diamonds and blue squares (right)
always able to track the moving optimum rapidly and in a timely manner, and that
switching to a new solution incurs no additional cost. Clearly, robust optimization
and dynamic optimization represent two extreme situations, and neither set of basic
assumptions holds in many real-world situations.
To bridge the gap between robust optimization and dynamic optimization, robust
optimization over time (ROOT) was suggested (Jin et al., 2013; Yu et al., 2010). The
main hypotheses of ROOT can be summarized as follows:
• An optimal solution should not be switched to a new solution even if an environ-
mental change is detected, so long as its performance is not worse than the worst
performance the user can tolerate.
• Switching solutions is subject to additional cost.
• Once the performance of a solution becomes worse than the worst performance the
user can accept, a new optimal solution should be sought. However, the algorithm
will not search for the solution having the best performance in the current environment;
instead, it searches for an optimal solution that remains acceptable
in a maximum number of environments, to reduce the number of solution changes.
From the above hypotheses, one can see that ROOT is a compromise
between robust optimization and dynamic optimization. In contrast to the robustness
definition in Eq. (1.37), robustness over time can be generically defined as follows:
F(x) = ∫_{t=t_0}^{t_0+T} ∫_{−∞}^{+∞} f(x + δ, a(t)) p(δ) p(a(t)) dδ da dt,    (1.46)
where p(a(t)) is the probability density function of a(t) at time t, T is the length
of the time interval, and t0 is the given starting time. From the above definition, we
can see that ROOT takes into account not only the uncertainties in the decision and
parameter spaces, but also the effect of these uncertainties in the time domain.
The ROOT definition in Eq. (1.46) is very generic: it measures the average perfor-
mance over the time interval T. However, it needs to be rewritten if one is interested
in finding a ROOT solution that can be used in as many environments as possible.
Given the sequence of L problems described in (1.45), the following optimization
problem can be defined to find the robust solution over time:
maximize R = l,    (1.47)
s.t. f(x, a(t)) ≤ δ, t ∈ [t_c, t_c + l],    (1.48)
where δ is the worst performance the user can accept, t_c is the starting time (the time
instant when the solution is adopted), and l is the number of environments in which the
solution will be used. In other words, the robustness is simply defined as the number
of environments in which the solution can be used, which is to be maximized.
Note that similar to the conventional robust optimization, there will also be a
trade-off between the robustness defined in (1.47) and the average fitness over the
whole time period [0, T ], or between the robustness and the switching cost. Thus,
ROOT can also be formulated as a bi-objective optimization problem (Huang et al.,
2017):
maximize R = l,    (1.49)
minimize C = ||x − x∗||,    (1.50)
s.t. f(x, a(t)) ≤ δ, t ∈ [t_c, t_c + l],    (1.51)
where C is the cost of switching from the previous optimal solution x∗ to the new
optimal solution x, defined as the Euclidean distance between the two solutions.
Of course, other definitions of the switching cost are possible.
As a result, a set of Pareto optimal solutions will be found for each environment.
In practice, one of these solutions should be chosen to be implemented, e.g., the one
that maximizes the ratio between robustness and switching cost (Huang et al., 2020).
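As a hedged sketch of this selection step (illustrative only; the toy objective and the environment sequence below are assumptions, not from the book), robustness R = l can be computed by counting consecutive acceptable environments, the switching cost C as the Euclidean distance, and the final choice made by maximizing the ratio R/C:

```python
import math

def robustness(candidate, envs, f, delta):
    # R = l: number of consecutive environments, starting from the
    # current one, in which f(candidate, a) stays within tolerance delta.
    l = 0
    for a in envs:
        if f(candidate, a) > delta:
            break
        l += 1
    return l

def switching_cost(candidate, previous):
    # C: Euclidean distance between the new and the previous solution.
    return math.sqrt(sum((c - p) ** 2 for c, p in zip(candidate, previous)))

def pick_root_solution(candidates, previous, envs, f, delta, eps=1e-9):
    # Choose the candidate maximizing the robustness/cost ratio R / C.
    return max(candidates,
               key=lambda x: robustness(x, envs, f, delta)
                             / (switching_cost(x, previous) + eps))
```

In practice the future environments a(t) are unknown, so `envs` would be replaced by predicted environmental parameters, which is exactly where the prediction difficulty discussed next arises.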
It should be pointed out that finding the ROOT solution is non-trivial, as it
requires predicting the performance of a solution in the future based on historical data.
An alternative is to find all optima in the current environment using a multi-
population strategy and then choose the ROOT solution from them according to the
properties of these solutions before and after the environmental change.