Content overview: This paper introduces a new algorithm, the steady-state and generational evolutionary algorithm (SGEA), for handling dynamic multiobjective optimization problems (DMOPs). Unlike existing methods, SGEA combines the fast change-tracking ability of steady-state algorithms with the good diversity preservation of generational algorithms. When an environmental change is detected, SGEA not only reuses a portion of the old solutions but also relocates part of the population based on information from both the previous and the new environments, thereby adapting to changes quickly. Experimental results show that SGEA performs well on a range of bi- and three-objective benchmark problems with different dynamic characteristics, and is particularly strong in periodic environments.
Intended audience: researchers and engineers interested in evolutionary computation, multiobjective optimization, and their applications, especially those studying dynamic optimization problems.
Use cases and goals: (1) practical applications that require solving dynamically changing multiobjective optimization problems, such as scheduling and control; (2) improving an algorithm's responsiveness and adaptability to environmental changes so that it remains efficient and stable in dynamic environments.
Additional notes: although SGEA performs well on the test problems, it still faces challenges when changes involve strong variable linkages or cause significant diversity loss. Future work includes applying new constraint-handling techniques to dynamic constrained problems, introducing new operators to evolve the population, and developing new dynamic benchmarks and performance metrics to further advance research on dynamic multiobjective evolutionary algorithms.
A Steady-State and Generational Evolutionary
Algorithm for Dynamic Multiobjective
Optimization
Shouyong Jiang and Shengxiang Yang, Senior Member, IEEE
Abstract—This paper presents a new algorithm, called
steady-state and generational evolutionary algorithm, which
combines the fast and steadily tracking ability of steady-state
algorithms and good diversity preservation of generational algo-
rithms, for handling dynamic multiobjective optimization. Unlike
most existing approaches for dynamic multiobjective optimiza-
tion, the proposed algorithm detects environmental changes and
responds to them in a steady-state manner. If a change is detected,
it reuses a portion of outdated solutions with good distribution
and relocates a number of solutions close to the new Pareto front
based on the information collected from previous environments
and the new environment. This way, the algorithm can quickly
adapt to changing environments and thus is expected to provide a
good tracking ability. The proposed algorithm is tested on a num-
ber of bi- and three-objective benchmark problems with different
dynamic characteristics and difficulties. Experimental results
show that the proposed algorithm is very competitive for dynamic
multiobjective optimization in comparison with state-of-the-art
methods.
Index Terms—Change detection, change response, dynamic
multiobjective optimization, steady-state and generational evo-
lutionary algorithm.
I. INTRODUCTION
MANY real-world multiobjective optimization problems (MOPs) are dynamic in nature, whose objective func-
tions, constraints, and/or parameters may change over time.
Due to the presence of dynamisms, dynamic MOPs (DMOPs)
pose big challenges to evolutionary algorithms (EAs) since
any environmental change may affect the objective vector,
constraints, and/or parameters. As a result, the Pareto-optimal
set (POS), which is a set of mathematical solutions to MOPs,
and/or the Pareto-optimal front (POF) that is the image of
POS in the objective space, may change over time. Then, the
Manuscript received November 13, 2015; revised March 7, 2016 and
May 3, 2016; accepted May 10, 2016. Date of publication August 1, 2016;
date of current version January 26, 2017. This work was supported by the
Engineering and Physical Sciences Research Council (EPSRC) of U.K. under
Grant EP/K001310/1 and the National Natural Science Foundation (NNSF)
of China under Grant 61673331. (Corresponding author: Shengxiang Yang.)
The authors are with the Centre for Computational Intelligence, School of Computer Science and Informatics, De Montfort University, Leicester, U.K.
This paper has supplementary downloadable material available at
http://ieeexplore.org, provided by the authors. This supplementary material
provides the formulation of the test problems used in the paper and some
supplementary experimental results. This material is 102 KB in size.
Color versions of one or more of the figures in this paper are available
online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TEVC.2016.2574621
optimization goal is to track the moving POF and/or POS and
obtain a sequence of approximations over time.
DMOPs can be defined in different ways, according to the
nature of dynamisms [15], [41], [54]. In this paper, we mainly
consider the following kind of DMOPs:
min F(x, t) = (f_1(x, t), ..., f_M(x, t))^T
s.t.  h_i(x, t) = 0,  i = 1, ..., n_h
      g_i(x, t) ≥ 0,  i = 1, ..., n_g
      x ∈ Ω_x,  t ∈ Ω_t                                  (1)
where M is the number of objectives, n_h and n_g are the numbers of equality and inequality constraints, respectively, Ω_x ⊆ R^n is the decision space, t is the discrete time instant, Ω_t ⊆ R is the time space, and F(x, t): Ω_x × Ω_t → R^M is the objective function vector that evaluates solution x at time t.
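For illustration only, the following is a minimal Python sketch of a time-varying bi-objective problem in the spirit of the FDA1 benchmark [15]; the parameter values and helper names are illustrative assumptions, and the exact formulations used in the experiments are given in the supplementary material.

```python
import numpy as np

def time_index(tau, n_t=10, tau_t=30):
    """Discrete time t derived from the generation counter tau:
    n_t controls change severity and tau_t the change frequency
    (values here are illustrative, not the paper's settings)."""
    return (1.0 / n_t) * (tau // tau_t)

def fda1_like(x, t):
    """An FDA1-style DMOP with two objectives and no constraints:
    the POS moves with G(t) while the POF f2 = 1 - sqrt(f1) stays fixed.
    x[0] lies in [0, 1]; the remaining variables lie in [-1, 1]."""
    G = np.sin(0.5 * np.pi * t)             # time-dependent optimum of x[1:]
    f1 = x[0]
    g = 1.0 + np.sum((x[1:] - G) ** 2)      # distance from the moving POS
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return np.array([f1, f2])
```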
In the past few years, there has been an increasing amount
of research interest in the field of evolutionary multiob-
jective optimization as many real-world applications, like
thermal scheduling [42] and circular antenna design [3], have
at least two objectives that conflict with each other, i.e.,
they are MOPs. Due to multiobjectivity, the goal of solv-
ing MOPs is not to find a single optimal solution but to
find a set of tradeoff solutions. When an MOP involves
time-dependent components, it can be regarded as a DMOP.
Many real-life problems in nature are DMOPs, such as
planning [8], scheduling [12], [35], and control [15], [50].
There have been a number of contributions made to several
important aspects of this field, including dynamism classi-
fication [15], [41], test problems [4], [15], [20], [23]–[26],
performance metrics [9], [15], [17]–[19], [41], [55], and
algorithm design [9], [12], [15], [18], [21], [28], [54], [55].
Among these, algorithm design is the most important issue
as it is the problem-solving tool for DMOPs.
Due to the presence of dynamisms, the design of a dynamic
multiobjective optimization EA (DMOEA) is different from
that of a multiobjective optimization EA (MOEA) for static
MOPs. Specifically, DMOEAs should not only have a fast
convergence performance (which is crucial to their tracking
ability), but also be able to address diversity loss whenever
there is an environmental change in order to explore the new
search space. Besides, if changes are not assumed to be know-
able, DMOEAs should be able to detect them in order not
to mislead the optimization process. This is because, when a
change occurs, the previously discovered POS may not remain
optimal for the new environment.
In principle, a change can be detected by re-evaluating
dedicated detectors [12], [18], [47], [54], [55] or assessing
algorithm behaviors [15], [32], [37]. The former is an easy-to-use mechanism and allows “robust detection” [37] if a high enough number of detectors is used, but it may require additional cost since detectors have to be re-evaluated at every generation, and it may not be accurate when there is noise in function evaluations. The latter does not need additional function evaluations, but it may cause false positives and thus make algorithms overreact when no change occurs. Neither of them can guarantee that changes are detected [37].
On the other hand, whenever a change is detected, it is
often inefficient to restart the optimization process from
scratch, although the restart strategy may be a good choice if
the environmental change is considerably severe [7]. In the
literature, various approaches have been proposed to handle
environmental changes, and they can be mainly categorized
into diversity-based approaches and convergence-based
approaches, according to their algorithm behaviors. Diversity-
based approaches focus on maintaining population diversity
whereas convergence-based ones aim to achieve a fast con-
vergence performance so that algorithms’ tracking ability are
guaranteed. Generally, population diversity can be handled by
increasing diversity using mutation of selected old solutions
or random generation of some new solutions upon the detec-
tion of environmental changes [12], [18], [55], maintaining
diversity throughout the optimization process [1], [2], [6], or
employing multipopulation schemes [18], [40]. Proper
diversity is helpful for exploring promising search
regions, but too much diversity may cause evolutionary
stagnation [5].
Convergence-based approaches try to exploit past infor-
mation for better tracking performance [7], especially when
the new POS is somewhat similar to the previous one or
environmental changes exhibit regular patterns. Accordingly,
recording relevant past information to be reused at a later stage
may be helpful for tracking the new POF as quickly as possi-
ble. The reuse of past information is closely related to the type
of environmental change and hence can be helpful for different
purposes [6]. If the environment changes periodically, relevant
information of the current POS can be stored in a memory
and can be directly reintroduced into the evolving population
when needed. This kind of strategy is often called memory-
based approaches and has been extensively studied in dynamic
multiobjective optimization [7], [8], [18], [22], [52]. In con-
trast, if the environmental change follows a regular pattern, past
information can be collected and used to model the movement
of the changing POF/POS. Hence, the location of the new
POS can be predicted, helping the population quickly track
the moving POF. Prediction-based approaches have received
massive attention because most existing benchmark DMOPs
(e.g., the FDA test suite [15]) involve predictable charac-
teristics; studies along this direction can be found in [22], [28], [32], [33], [36], [47], [54], and [55].
Aside from the above-mentioned approaches, some stud-
ies concentrate on finding an insensitive robust POF instead
of closely tracking the moving POF [16], [27], [38].
Algorithm 1 Framework of SGEA
1: Input: N (population size)
2: Output: a series of approximated POFs
3: Create an initial parent population P := {x_1, ..., x_N};
4: (A, P̄) := EnvironmentSelection(P);
5: while stopping criterion not met do
6:   for i := 1 to N do
7:     if change detected and not responded then
8:       ChangeResponse();
9:     end if
10:    y := GenerateOffspring(P, A);
11:    (P, A) := UpdatePopulation(y);
12:  end for
13:  (A, P̄) := EnvironmentSelection(P ∪ P̄);
14:  Set P := P̄;
15: end while
Robustness-based approaches assume that, when the environment changes, previously obtained solutions can still be used in the new environment as long as their quality is acceptable [27].
However, the criterion for an acceptable optimal solution is
quite problem-specific, which may hinder the wide application
of these approaches.
Although a number of approaches have been proposed for
solving DMOPs, the development of DMOEAs is a rela-
tively young field and more studies are greatly needed. In this
paper, a new algorithm, called steady-state and generational
EA (SGEA), is proposed for efficiently handling DMOPs.
SGEA makes the most of the advantages of steady-state EAs in dynamic environments [48] for environmental change detec-
tion and response. If a change is detected, SGEA reuses a
portion of old solutions with good diversity and exploits infor-
mation collected from both previous environments and the new
environment to relocate a part of its evolving population. At
the end of every generation, like conventional generational
EAs [13], [56], SGEA performs environmental selection to
preserve good individuals for the next generation. By mixing
the steady-state and generational manners, SGEA can adapt
to dynamic environments quickly whenever a change occurs,
providing very promising tracking ability for DMOPs.
The rest of this paper is organized as follows. Section II
describes the framework of the proposed SGEA, together with
detailed descriptions of each component of the algorithm.
Section III is devoted to presenting experimental settings for
comparison. Section IV provides experimental results and comparisons of the tested algorithms. A further discussion of the
algorithm is offered in Section V. Section VI concludes this
paper with discussions on future work.
II. PROPOSED SGEA
The basic framework of the proposed SGEA is presented
in Algorithm 1. SGEA starts with an initial population P and
the initialization of an elitist population P̄ and an archive A
through environmental selection. In every generational cycle,
SGEA detects possible environmental changes and evolves the
population in a steady-state manner. For each population mem-
ber, if a change is detected, then a change response mechanism
is adopted to handle the detected change. After that, genetic
operation is applied to produce one offspring solution for the
population member, which is then used to update the parent

JIANG AND YANG: SGEA FOR DYNAMIC MULTIOBJECTIVE OPTIMIZATION 67
Algorithm 2 EnvironmentSelection(Q)
1: Input: Q (a set of solutions)
2: Output: A (archive), P̄ (N elitists preserved)
3: Set A := ∅ and P̄ := ∅;
4: Assign a fitness value to each member in Q;
5: for i := 1 to |Q| do
6:   if F(i) < 1 then
7:     Copy x_i from Q to A;
8:   end if
9: end for
10: if |A| < N then
11:   Copy the best N individuals in terms of their fitness values from Q to P̄;
12: else
13:   if |A| == N then
14:     Set P̄ := A;
15:   else
16:     Prune A to a set of N individuals by any truncation operator and copy the truncated A to P̄;
17:   end if
18: end if
population P and archive A. At the end of each generation,
P and P̄ are combined. Similar to generational EAs [13], [56]
or speciation techniques used in niching [5], [29], a genera-
tional environmental selection is conducted on the combined
population to preserve a population of good solutions for the
next generation. This way, SGEA can be regarded as a steady-
state and generational MOEA. In the following sections, the
implementation of each component of SGEA will be detailed
step by step.
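As a reading aid, a minimal Python rendering of the SGEA main loop is sketched below. The helper routines (environment_selection, change_detected, change_response, generate_offspring, update_population) are placeholders for Algorithms 2–4 and the dynamism-handling procedures of Section II-D, with simplified signatures; this is a sketch of the control flow, not the reference implementation.

```python
import random

def sgea(problem, N, max_generations, seed=0):
    """Skeleton of Algorithm 1: steady-state offspring generation and
    change handling inside each generation, followed by a generational
    environmental selection at the end of the generation."""
    rng = random.Random(seed)
    P = [problem.random_solution(rng) for _ in range(N)]   # initial parents
    A, P_bar = environment_selection(P, N)                 # archive and elitists
    approximations = []                                    # series of approximated POFs
    for _ in range(max_generations):
        responded = False                                  # one response per detected change
        for _ in range(N):                                 # steady-state cycle
            if change_detected(problem, P) and not responded:
                P, A = change_response(problem, P, A, rng)
                responded = True
            y = generate_offspring(P, A, rng)              # one offspring at a time
            P, A = update_population(P, A, y)
        A, P_bar = environment_selection(P + P_bar, N)     # generational selection
        P = list(P_bar)
        approximations.append(list(A))                     # record the current archive
    return approximations
```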
A. Environmental Selection
The environmental selection procedure (Algorithm 2),
which aims to preserve a fixed number of elitists from a
solution set Q after every generational cycle, starts with fit-
ness assignment. Each individual i of Q is assigned a fitness
value F(i), which is defined as the number of individuals that
dominate [56] it, as follows:
F(i) =|{j ∈ Q|j ≺ i}| (2)
where |·| denotes the cardinality of a set and j ≺ i indi-
cates that j dominates i. It should be noted that various fine-grained methods proposed in [14], [45], and [56] can
be used to assign fitness values for individuals. However, the
fitness assignment method used in this paper is relatively sim-
ple and computationally efficient. Most importantly, when an
external individual e enters the set Q, the update of F(i) needs
only one dominance comparison between individuals e and i.
The easy-to-update property of this method will be clearly
embodied in the population update procedure (to be described
in Section II-C).
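As a minimal sketch of this bookkeeping, assuming minimization and objective vectors stored as NumPy arrays, the dominance-count fitness of (2) and its one-comparison update when an external individual enters the set could look as follows (the function names are illustrative):

```python
import numpy as np

def dominates(a, b):
    """Pareto dominance for minimization: a is no worse than b in every
    objective and strictly better in at least one."""
    return bool(np.all(a <= b) and np.any(a < b))

def fitness_counts(objs):
    """F(i) = number of members of the set that dominate individual i (Eq. 2)."""
    return [sum(dominates(objs[j], objs[i]) for j in range(len(objs)) if j != i)
            for i in range(len(objs))]

def update_on_entry(objs, counts, new_obj):
    """When an external individual enters the set, each existing count is
    refreshed with a single dominance comparison, and the newcomer's count
    is accumulated in the same pass."""
    new_count = 0
    for k, o in enumerate(objs):
        if dominates(new_obj, o):
            counts[k] += 1
        elif dominates(o, new_obj):
            new_count += 1
    objs.append(new_obj)
    counts.append(new_count)
```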
Afterwards, individuals having a fitness value of zero are identified as nondominated solutions and then copied to an archive A. If |A| is smaller than the population size N, the best N individuals (including both dominated and nondominated ones) in terms of their fitness values are preserved in an elitist population P̄. Otherwise, there can be two situations: either the number of nondominated solutions fits exactly the population size, or there are too many nondominated solutions.
Algorithm 3 GenerateOffspring(P, A)
1: Input: P (parent population), A (archive population)
2: Output: y (offspring solution)
3: if rnd < 0.5 then
4:   Perform binary tournament selection on P to select two distinct individuals as the mating parents;
5: else
6:   Randomly pick an individual from A and perform binary tournament selection on P to select another distinct individual as the mating parents;
7: end if
8: Apply genetic operators to generate a new solution y;
In the first case, all nondominated solutions are copied to P̄. In the second case, a truncation technique is needed to reduce A to a population of N nondominated solutions such that the truncated A has the best diversity possible. In SGEA, the kth nearest neighbor truncation technique proposed in the strength Pareto EA 2 (SPEA2) [56] is used to perform the truncation operation, although we recognize that there are other options, e.g., the farthest first method [10], [11], which can also serve this purpose. After that, solutions in the truncated A are copied to P̄.
Note that, like classical generational MOEAs, such as the
nondominated sorting genetic algorithm II (NSGA-II) [13]
and SPEA2 [56], SGEA performs environmental selection at
the end of each generation. Thus, SGEA can be generally
categorized into generational MOEAs.
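For illustration, a compact sketch of Algorithm 2 on a list of objective vectors is given below, reusing fitness_counts from the earlier sketch; truncate stands for any diversity-preserving truncation (SPEA2's kth nearest neighbor method in the paper) and is assumed to be supplied by the caller.

```python
def environment_selection(objs, N, truncate):
    """Sketch of Algorithm 2: return the indices of the archive A
    (nondominated members) and of the elitist population P_bar
    (N preserved individuals) from a list of objective vectors."""
    F = fitness_counts(objs)                       # dominance counts (Eq. 2)
    A = [i for i, f in enumerate(F) if f < 1]      # nondominated solutions
    if len(A) < N:
        # fewer nondominated members than N: keep the N best by fitness
        P_bar = sorted(range(len(objs)), key=lambda i: F[i])[:N]
    elif len(A) == N:
        P_bar = list(A)
    else:
        A = truncate(A, objs, N)                   # prune A to N diverse members
        P_bar = list(A)
    return A, P_bar
```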
B. Mating Selection and Genetic Operators
Mating selection is an important operation before the pro-
duction of new offspring (line 10 of Algorithm 1). In this
paper, mating parents can be selected either from the parent
population P or the archive population A. The benefit of such a
mating selection method has been extensively investigated on
static MOPs in a number of studies [30], [34], [44], [57]. While
selecting mating parents from P can maintain good population
diversity, selecting parents from A can significantly improve
the convergence speed of the population, which is highly desirable in fast-changing environments. If a mating parent
is to be selected from P, SGEA performs a binary tourna-
ment selection according to individuals’ fitness values. If not,
the mating parent can be randomly selected from the archive
population A.
Following the mating selection, genetic operators are
applied on the mating parents to generate a new offspring solu-
tion. In SGEA, simulated binary crossover and polynomial
mutation are chosen as the recombination and mutation oper-
ators, respectively. The reproduction procedure is presented in
Algorithm 3.
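A sketch of the reproduction step is given below; sbx_crossover and polynomial_mutation are assumed helpers standing in for the standard simulated binary crossover and polynomial mutation operators (not shown), and fitness holds the dominance counts of P.

```python
def binary_tournament(P, fitness, rng):
    """Return the individual with the better (lower) dominance-count
    fitness out of two distinct randomly drawn members of P."""
    a, b = rng.sample(range(len(P)), 2)
    return P[a] if fitness[a] <= fitness[b] else P[b]

def generate_offspring(P, A, fitness, rng):
    """Sketch of Algorithm 3: with probability 0.5 both mating parents are
    chosen from P by binary tournament; otherwise one parent is drawn at
    random from the archive A and the other from P."""
    if rng.random() < 0.5:
        p1 = binary_tournament(P, fitness, rng)
        p2 = binary_tournament(P, fitness, rng)
    else:
        p1 = rng.choice(A)
        p2 = binary_tournament(P, fitness, rng)
    child = sbx_crossover(p1, p2, rng)        # assumed SBX helper (not shown)
    return polynomial_mutation(child, rng)    # assumed PM helper (not shown)
```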
C. Population Update
In SGEA, population update (line 11 of Algorithm 1) is
conducted on both the parent population P and archive popula-
tion A, which is detailed in Algorithm 4. The update operation
on P is in fact replacing the worst solution of P with the newly
generated solution y while the update on A is using y to update
the archived nondominated set. First, if y is not a duplicate

Algorithm 4 UpdatePopulation(y)
1: Input: y (offspring solution)
2: Output: P (updated parent population), A (updated archive population)
3: Set the fitness value of y to zero: F(y) := 0;
4: for i := 1 to |P| do
5:   if y == x_i then
6:     Return;
7:   end if
8:   if y ≺ x_i then
9:     Add one to the fitness value of x_i: F(i) := F(i) + 1;
10:  end if
11:  if x_i ≺ y then
12:    Add one to the fitness value of y: F(y) := F(y) + 1;
13:  end if
14: end for
15: Find the individual î in P having the highest fitness value: î := argmax_{1≤i≤|P|} F(i);
16: if F(y) ≤ F(î) then
17:   Set x_î := y and F(î) := F(y);
18:   if F(y) < 1 then
19:     Remove all solutions in A that are dominated by y, and add y to A if A is not full;
20:   end if
21: end if
solution, it will be compared with each member x_i of P for the dominance relation (lines 4–14 of Algorithm 4). If y dominates x_i (denoted as y ≺ x_i), the fitness value of x_i is increased by one. If y is dominated by x_i (denoted as x_i ≺ y), the fitness value of y is increased by one. Then, the worst individual in P, i.e., the one with the highest fitness value, is identified; if there are two or more such individuals, one of them is selected at random. If y is not worse than the identified individual x_î in terms of the fitness value, the solution replacement takes place, as shown in line 17 of Algorithm 4. Besides, if y is not dominated by any member in P (which means its fitness value is zero), it is further considered for updating the archive population A if A is not full. This means that the archive update occurs only when y successfully enters the parent population. It can be observed that the fitness assignment method used here makes it easy to update an individual's fitness value, which helps SGEA conduct solution replacement in the parent population and archive update in an efficient manner.
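A sketch of this update step is given below, with the parent population P carried alongside its objective vectors objs and fitness list, the archive A held as a bounded list of archived objective vectors, and dominates taken from the fitness sketch above; the data layout is an assumption made for illustration.

```python
import numpy as np

def update_population(P, objs, fitness, A, y, y_obj, archive_cap, rng):
    """Sketch of Algorithm 4: replace the worst member of P by offspring y
    if y is no worse in dominance count, then use y to update archive A."""
    if any(np.array_equal(y_obj, o) for o in objs):      # discard duplicates
        return P, objs, fitness, A
    F_y = 0
    for i, o in enumerate(objs):                         # one dominance pass
        if dominates(y_obj, o):
            fitness[i] += 1
        elif dominates(o, y_obj):
            F_y += 1
    worst_val = max(fitness)
    worst = rng.choice([i for i, f in enumerate(fitness) if f == worst_val])
    if F_y <= worst_val:
        P[worst], objs[worst], fitness[worst] = y, y_obj, F_y
        if F_y < 1:                                      # y is nondominated in P
            A = [a for a in A if not dominates(y_obj, a)]
            if len(A) < archive_cap:
                A.append(y_obj)
    return P, objs, fitness, A
```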
D. Dynamism Handling
This section discusses two main aspects of dynamism han-
dling. One is change detection, a step to detect whether a change has occurred during the evolutionary process. The other
is known as change response or change reaction, which takes
actions to quickly react to environmental changes so that the
population adapts to new environments rapidly.
1) Change Detection: Change detection can be
performed by either re-evaluating a portion of existing
solutions [12], [18], [47], [54], [55] or assessing some
statistical information of some selected population mem-
bers [15], [32], [37]. Since both methods choose a small
proportion of population members as detectors, detection
may fail if changes only affect nondetectors. On the other hand, it is computationally expensive if all population members are chosen as detectors. Therefore, a good detection
method should strike a balance between the detection ability
and efficiency.
The proposed algorithm detects changes in a steady-state
manner, as shown in line 7 of Algorithm 1. In every gen-
eration, population members (in random order) are checked
one by one for discrepancy between their previous objective
values and re-evaluated ones. If a discrepancy exists in a pop-
ulation member, we assume a change is successfully detected
and there is no need to do further checks for the rest of popula-
tion members. When a change is detected, SGEA immediately
reacts to it in a steady-state manner. This detection method enables prompt and steady change reaction, albeit at a higher computational cost. For efficiency, the number
of individuals re-evaluated for change detection is restricted
to a small percentage of the population size. It is worth not-
ing that re-evaluation-based change detection methods assume that there is no noise in function evaluations, i.e., they are not robust to noise. Thus, the proposed method may not be suitable for
detecting changes in noisy environments.
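A minimal sketch of this steady-state detection step is shown below, assuming noise-free evaluations; problem.evaluate, problem.time, and the 10% detector ratio are illustrative assumptions rather than the paper's exact experimental settings.

```python
import numpy as np

def change_detected(problem, P, stored_objs, rng, detector_ratio=0.1):
    """Re-evaluate a small random subset of the population and compare the
    results against the stored objective values; any discrepancy signals
    an environmental change."""
    n_detectors = max(1, int(detector_ratio * len(P)))
    for i in rng.sample(range(len(P)), n_detectors):
        if not np.allclose(problem.evaluate(P[i], problem.time), stored_objs[i]):
            return True           # first discrepancy is enough; stop checking
    return False
```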
2) Change Response: If a change is successfully detected,
some actions should be taken to react to the environmental
change. A good change response mechanism must be able
to maintain a good level of population diversity and relocate
the population in promising areas that are close to the new
POS. Simply discarding old solutions and randomly reinitial-
izing the population is beneficial to population diversity but may make it time-consuming for the algorithm to converge. Likewise, fully reusing old solutions for the new environment might be misleading if the landscapes of two consecutive environments are significantly different. Also, this may cause a loss of population diversity. As a consequence, algorithms may get trapped in local optima or fail to find all POF regions in the new environment. For these reasons, in this paper the population for the new environment consists of half old solutions and half reinitialized solutions. The old half is selected by the farthest first selection method [11], [43], which was originally proposed to reduce an approximation set to the maximum allowable size. The farthest first selection method has been reported to provide better approximations than NSGA-II's crowding distance [13] for unconstrained and constrained static MOPs [10], [11]. This method selects the half of the old solutions that maximizes diversity in the objective space (line 3 of Algorithm 5). The other half of the new population is produced by reinitialization, guided by a guess of the new location of the changed POS. To make a correct, or at least reasonable, guess, one must know two things: the moving direction and the movement step size. The following paragraphs describe how to compute them.
Let C_t be the centroid of the POS and A_t be the obtained approximation set at time step t. Then C_t can be computed by

    C_t = (1/|A_t|) Σ_{x ∈ A_t} x.                       (3)

The movement step size S_t toward the new location of the changed POS at time step t + 1 can be estimated by

    S_t = ||C_t − C_{t−1}||                              (4)
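A hedged sketch combining (3), (4), and the half-and-half response described above is given below; farthest_first and relocate are assumed placeholders for the farthest first selection and for the prediction-based relocation (using the moving direction and the step size) detailed in the remainder of this section.

```python
import numpy as np

def centroid(A):
    """Eq. (3): centroid of the approximation set A_t, computed over the
    decision vectors of its members."""
    return np.mean(np.asarray(A), axis=0)

def step_size(A_t, A_prev):
    """Eq. (4): movement step size estimated as the distance between the
    centroids of two consecutive approximation sets."""
    return np.linalg.norm(centroid(A_t) - centroid(A_prev))

def change_response(P, objs, N, farthest_first, relocate, rng):
    """Outline of the response step: retain half of the old population by
    farthest-first selection in objective space, and reinitialize the other
    half via `relocate`, a placeholder for the prediction-based relocation."""
    keep = farthest_first(objs, N // 2)                  # indices of retained members
    moved = [relocate(rng) for _ in range(N - len(keep))]
    return [P[i] for i in keep] + moved
```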