
Applied Intelligence (2023) 53:1115–1131
https://doi.org/10.1007/s10489-022-03353-2

A two stages prediction strategy for evolutionary dynamic multi-objective optimization

Hao Sun(1,2) · Xuemin Ma(1,2) · Ziyu Hu(1,2) · Jingming Yang(1,2) · Huihui Cui(1,2)

Accepted: 7 February 2022 / Published online: 25 April 2022
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022

Abstract
Dynamic multi-objective problems (DMOPs) are widely involved in many engineering and scientific research processes. They are quite challenging, as they involve multiple conflicting objectives that change over time or with the environment. The main task in solving DMOPs is to track the Pareto front as quickly as possible when the objectives change over time. To accelerate the tracking process, a two stages prediction strategy (SPS) for DMOPs is proposed. To improve prediction accuracy, population prediction is divided into center point prediction and manifold prediction when a change is detected. Due to the limitations of the support vector machine, in the early stage the new population is predicted by combining the elite solutions from the previous environment with a Kalman filter. Experimental results show that the proposed algorithm performs better in convergence and distribution when dealing with nonlinear problems, especially on problems where the environmental change occurs frequently.

Keywords Dynamic multi-objective problems · Evolutionary algorithm · Kalman filter · Support vector machine

1 Introduction

In the real world, there exist a large number of optimization problems that consist of multiple objectives, and those objectives conflict with each other. This kind of problem is usually known as a multi-objective optimization problem (MOP). However, problems in both industrial applications and scientific research often contain many uncertain and dynamic factors. For example, air traffic scheduling problems are usually affected by unexpected factors such as bad weather or emergencies; a rolling schedule is always affected by the environment and the steel species; and the single-valued neutrosophic (SVN) problem is affected by multiple attribute decision making (MADM), while MADM problems are affected by attributes and the attributes' weights [3, 9, 21, 22]. Therefore, the solutions cannot remain optimal for long. If constraints or parameters change over time, MOPs become DMOPs [7]. The main task in solving DMOPs is to track the Pareto front as quickly as possible when the objectives change over time. Such dynamic factors include stock price fluctuations in the economy, mechanical failures on industrial production lines, dynamic scheduling of workshop production, dynamic scheduling of the water and electricity mix, and bi-level optimization in machine learning [15, 25]. Compared with classical methods, evolutionary algorithms (EAs), which evolve multiple solutions in a single run, are good candidates for solving MOPs. When dealing with DMOPs, however, traditional EAs cannot adapt quickly to environmental changes once the solution set has converged to the true front. Therefore, research on dynamic multi-objective evolutionary algorithms (DMOEAs) has gathered the attention of evolutionary computation researchers and is significant in both theory and practice [8, 19].

✉ Ziyu Hu
[email protected]

1 School of Electrical Engineering, Yanshan University, Qinhuangdao, Hebei, 066004, People's Republic of China
2 Engineering Research Center of the Ministry of Education for Intelligent Control System and Intelligent Equipment, Yanshan University, Qinhuangdao, Hebei, 066004, People's Republic of China

In DMOPs, how to adapt to frequent environmental changes is the critical research question, under the premise of effectively balancing distribution and convergence [4]. Change detection, change reaction and problem optimization are the main research directions of DMOPs, which are introduced as follows:
1. Change Detection: The function of this operator is to detect whether a change has occurred; if so, a corresponding processing strategy is adopted. Usually, there are two strategies: re-evaluating solutions [2] and checking population statistical information [27]. The first approach is easy to implement, but it assumes that there is no uncertainty when evaluating objective functions. The second strategy can overcome the shortcoming caused by such uncertainty, but the algorithm may need additional parameters to evaluate the objective functions. Most existing work still focuses on detecting whether a change has happened or not. A good strategy should further estimate the degree of change, which might help the next two steps.

2. Change Reaction: This part mainly focuses on taking actions when a change is detected. The actions include: 1) Memory maintenance: individual points or information extracted from the current population are added to the stored population and old information is deleted; 2) Parameter tuning: algorithm parameters, such as the mutation probability, are adapted; 3) Population re-initialization: the population is re-initialized when an environmental change is detected. The following techniques are widely utilized: reusing the elite solutions from previous populations; mutating the previous population by a heuristic method; predicting a new population with a model [17, 18]; and applying different crossover and mutation operators [12].

3. Problem Optimization: This part tackles the current MOP as a traditional optimization problem. An MOEA developed for solving stationary MOPs is usually applied directly or with slight modifications [5]. The task of these modifications is to enhance the diversity of the population using various techniques [20], including random migration, which mainly injects randomly generated points into the population in each generation; recrudescence, which generates solutions with high crossover and mutation probabilities [6]; memory mechanisms, which maintain a portion of dominated solutions; and multiple populations or parallel computing.

Among the above three components, the change reaction is the core of solving DMOPs. At present, most prediction methods are used to generate new populations so that the predicted population is as close as possible to the true PF [1]. Qingya Li proposed a predictive strategy based on special points for evolutionary dynamic multi-objective optimization [14], Fei Zou et al. proposed a knee-guided prediction approach for dynamic multi-objective optimization [29], and Xuemin Ma et al. proposed a feature information prediction algorithm for solving DMOPs [16]. These algorithms make use of special points in the PF, and their prediction models are linear: a simple linear model processes the historical data and predicts the PF in the next environment. The prediction accuracy of a linear model can meet the requirements when the problem is simple, but it is not satisfactory when the changes are large and complex. To improve the accuracy of population prediction, artificial intelligence is introduced into the environmental change strategy, and the least squares support vector machine (LSSVM) is adopted to improve the speed and accuracy of tracking the PF. Overall, the main contributions of this paper include:

1. A two stages prediction strategy is presented, which predicts new solutions according to the characteristics of different models and problems at different stages of evolution.
2. In the early stage of evolution, with insufficient samples and poor convergence, the Kalman Filter (KF) model is adopted for prediction. It is fast and needs few samples, but its accuracy is low.
3. In the later stage of evolution, where more attention is paid to accuracy, the LSSVM is selected, which requires more samples but fewer iterations, and fits both linear and nonlinear problems well.
4. Since the two models effectively exert their respective advantages at different stages, the algorithm can quickly track the front and maintain a good distribution.

The rest of this paper is organized as follows. Section II presents the definitions of DMOPs. Section III describes the prediction strategy in detail. Section IV introduces the test instances and performance metrics. Section V analyzes the experimental results. Section VI outlines the conclusions and suggestions for future research.

2 Related definitions

According to the essential features of the uncertain factors in optimization problems, DMOPs are classified into different categories [11]. We focus on the following type of DMOPs in this paper:

$$\min_{x \in \mathbb{R}^n} F(x, t) = [f_1(x, t), f_2(x, t), \ldots, f_m(x, t)], \quad \text{s.t. } a_i \le x_i \le b_i \qquad (1)$$

where $F(x, t)$ consists of m real-valued objective functions, each of which is continuous with respect to x over [a, b]; m is the number of objectives; t represents the time index; $\mathbb{R}^n$ is the decision space; x represents the decision vector; n is the number of decision variables; and $[a_i, b_i]$ represents the feasible region of the decision space, which may also change over time in the presence of time-varying constraints.

For a given multi-objective optimization problem, the definitions are listed as follows:

Definition 1 (Pareto Dominance) A decision vector $x_1$ dominates another decision vector $x_2$, denoted by $x_1 \prec x_2$, when the following two conditions hold: (1) $x_1$ is at least as good as $x_2$ for all the objectives, i.e. $f_j(x_1) \le f_j(x_2)$, $j = 1, 2, \ldots, m$; (2) $x_1$ is strictly better than $x_2$ for at least one objective.
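Definition 1 maps directly onto a few lines of code. The following is a minimal Python sketch (the function name and the minimization convention are ours, not from the paper):

```python
import numpy as np

def dominates(f1, f2):
    """Pareto dominance test of Definition 1 (minimization assumed):
    f1 dominates f2 if it is no worse in every objective and strictly
    better in at least one."""
    f1, f2 = np.asarray(f1, dtype=float), np.asarray(f2, dtype=float)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))
```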

Definition 2 (Pareto optimal set) The Pareto optimal set, denoted as PS, is formed by the set of solutions that are non-dominated in the objective space, i.e. $\mathrm{PS} = \{x \in \Omega \mid \neg\exists x^* \in \Omega, x^* \prec x\}$.

Definition 3 (Pareto optimal front) The Pareto optimal front, denoted as PF, is the set of non-dominated solutions with respect to the objective space, i.e. $\mathrm{PF} = \{f(x) = (f_1(x), f_2(x), \ldots, f_m(x)) \mid x \in \mathrm{PS}\}$.
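Definitions 2 and 3 can likewise be illustrated by filtering a finite population down to its non-dominated members, reusing the `dominates` helper sketched above (a simple O(N²) scan; naming is ours):

```python
def non_dominated_indices(F):
    """Indices of the non-dominated rows of an (N, m) objective matrix,
    i.e. the members of the PS/PF approximation in Definitions 2-3."""
    return [i for i in range(len(F))
            if not any(dominates(F[j], F[i]) for j in range(len(F)) if j != i)]
```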

Naturally, the $\mathrm{PS}_t$ and $\mathrm{PF}_t$ of dynamic multi-objective problems can be defined as $\mathrm{PS}_t = \{x \in \Omega \mid \neg\exists x^*_t \in \Omega, x^*_t \prec x_t\}$ and $\mathrm{PF}_t = \{f(x, t) = (f_1(x, t), f_2(x, t), \ldots, f_m(x, t)) \mid x \in \mathrm{PS}_t\}$.

Fig. 1 The prediction method of objective position and solution in different space
Given the PS of a continuous MOP with m objectives in the t-th environment, we can divide $\mathrm{PS}^t$ into two parts, a center of mass $\bar{x}^t$ and a manifold $\tilde{C}^t$ centered at the origin, which is defined as follows:

$$\mathrm{PS}^t \cong \bar{x}^t + \tilde{C}^t \qquad (2)$$

The $\bar{x}^t$ is estimated as:

$$\bar{x}^t = \frac{1}{|P^t|} \sum_{x^t \in P^t} x^t \qquad (3)$$

where $P^t = \{x^t\}$ is the output of the t-th stationary MOP, which is an approximation to $\mathrm{PS}^t$, and $|P^t|$ is the cardinality of $P^t$. Each point $x^t \in P^t$ can be formulated as:

$$x^t = \bar{x}^t + \tilde{x}^t \qquad (4)$$

The manifold of $\mathrm{PS}^t$ with the center of mass at the origin is approximated as:

$$\tilde{C}^t = \{\tilde{x}^t\} \qquad (5)$$

Figure 1(a) gives the movement of PFs in bi-objective problems, and Fig. 1(b) gives the movement of PS centers. This idea is inspired by the estimation of distribution algorithm [24]. Prediction based on environmental and historical information can strongly promote the prediction of the true Pareto front and solutions [10].

Accordingly, there are four possible ways DMOPs can change over time according to the changes of $\mathrm{PS}_t$ and $\mathrm{PF}_t$:

Type I: The $\mathrm{PS}_t$ changes, whereas the $\mathrm{PF}_t$ is fixed.
Type II: Both $\mathrm{PS}_t$ and $\mathrm{PF}_t$ change.
Type III: The $\mathrm{PF}_t$ changes, whereas the $\mathrm{PS}_t$ is fixed.
Type IV: Both $\mathrm{PS}_t$ and $\mathrm{PF}_t$ are fixed, although the problem can dynamically change.

Problems of the fourth type occur in a particular case that is generally considered a stable or static MOP, and a general static MOEA can be used to solve them. In addition to the four types mentioned above, there is another possibility: when a change occurs, several changes of the above types may happen at the same time. In the discussion, the first three types are usually considered.
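To make the decomposition in (2)-(5) concrete: splitting a stored population into its center of mass and centered manifold is just mean subtraction. A minimal NumPy sketch (our naming, not code from the paper):

```python
import numpy as np

def center_and_manifold(P):
    """Split a PS approximation P, an (N, n) array of decision vectors,
    into the center of mass (Eq. 3) and the centered manifold (Eqs. 4-5),
    so that P ~ x_bar + C as in Eq. (2)."""
    P = np.asarray(P, dtype=float)
    x_bar = P.mean(axis=0)   # Eq. (3): centroid of the population
    C = P - x_bar            # Eqs. (4)-(5): residuals x - x_bar
    return x_bar, C
```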
The environmental change detection operator adopts the method of re-evaluation, which assumes there is no noise in function evaluations and is easy to implement. The strategy of population prediction is used for the environmental change response. MOEA/D-DE, a multi-objective evolutionary algorithm based on decomposition, is adopted as the static multi-objective algorithm [13]. Its computational framework is shown in Table 1.

Table 1 A general dynamic MOEA framework

3 Center points subsection prediction strategy

In this section, we focus on re-initializing the population when a change is detected, using SPS. The basic idea of SPS is to utilize historical information to predict an initial population that is close to the new Pareto solution set. To achieve this, a population is divided into two parts: a center point and a manifold [26]. Each new environment creates a center point, and as the environment changes, a set of center points is accumulated. A sequence of center points is maintained to predict the next center point, while the previous manifolds are used to approximate the next manifold. The new population is obtained by combining the predicted center and the estimated manifold, and this predicted population serves as the initial population when the next change is detected. The accuracy of the center point prediction directly affects the accuracy of the PS.

In this paper, we focus on improving the accuracy of the center point prediction. Most classic dynamic multi-objective optimization algorithms use linear models to solve dynamic problems. These models are only applicable when environmental changes are not frequent. However, most real-world problems are nonlinear or their environments change frequently. The LSSVM is suitable for solving not only linear problems but also nonlinear problems. The fly in the ointment is that the LSSVM needs more learning samples for prediction. Therefore, in the early stage of prediction, a simple and fast Kalman model is used. The properties and further details of the LSSVM and Kalman models are introduced in the following.

3.1 Least squares support vector machine

The LSSVM is used to model the historical centers. Each dimension of the center requires its own support vector machine model. Take the prediction process of the i-th dimension of the center as an example:

$$S = \{s_j \mid s_j = (X_j, Y_j), X_j \in \mathbb{R}^P, Y_j \in \mathbb{R}\}_{j=1}^{l} \qquad (6)$$

where $X_j = (\bar{x}_i^{t-j}, \bar{x}_i^{t-j-1}, \bar{x}_i^{t-j-2}, \ldots, \bar{x}_i^{t-j-P+1})$, $\bar{x}_i^t$ is the i-th dimension of the center at time t, and $Y_j = \bar{x}_i^{t-j+1}$ is the i-th dimension of the center at time $t-j+1$; $i = 1, 2, \ldots, n$, where n is the dimension of the decision vector; $j = 1, 2, \ldots, l$, where l is the number of samples; and P is the support vector machine input dimension. So we can get a regression function:

$$Y = w\varphi(X) + b \qquad (7)$$

where $\varphi(X)$ is the feature mapping and w and b are the undetermined coefficients.

The LSSVM method is equivalent to solving the following minimization problem:

$$\min Q(w, e) = \frac{1}{2}\|w\|^2 + \frac{\gamma}{2}\sum_{j=1}^{l} e_j^2, \quad \text{s.t. } Y_j = w\varphi(X_j) + b + e_j, \; j = 1, 2, \ldots, l \qquad (8)$$

where $\gamma$ represents the regularization parameter. The Lagrange function of the minimization problem (8) is:

$$L(w, b, e, a) = \frac{1}{2}\|w\|^2 + \frac{\gamma}{2}\sum_{j=1}^{l} e_j^2 - \sum_{j=1}^{l} a_j \left( w\varphi(X_j) + b + e_j - Y_j \right) \qquad (9)$$

where $a = (a_1, a_2, \ldots, a_l)^T$. The optimality conditions of (9) are:

$$\frac{\partial L}{\partial w} = w - \sum_{j=1}^{l} a_j \varphi(X_j) = 0, \quad \frac{\partial L}{\partial b} = -\sum_{j=1}^{l} a_j = 0, \quad \frac{\partial L}{\partial e_j} = \gamma e_j - a_j = 0, \quad \frac{\partial L}{\partial a_j} = w\varphi(X_j) + b + e_j - Y_j = 0 \qquad (10)$$

Then:

$$\begin{bmatrix} I & 0 & 0 & -Z \\ 0 & 0 & 0 & -\bar{1}^T \\ 0 & 0 & \gamma I & -I \\ Z & \bar{1} & I & 0 \end{bmatrix} \begin{bmatrix} w \\ b \\ e \\ a \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ Y \end{bmatrix} \qquad (11)$$

where $Z = [\varphi(X_1), \varphi(X_2), \ldots, \varphi(X_l)]^T$, $Y = [Y_1, Y_2, \ldots, Y_l]^T$, $\bar{1} = [1, 1, \ldots, 1]^T \in \mathbb{R}^l$, $e = [e_1, e_2, \ldots, e_l]^T$, and $a = [a_1, a_2, \ldots, a_l]^T$. With $w = \sum_{j=1}^{l} a_j \varphi(X_j)$ and $e_j = \frac{1}{\gamma} a_j$, the following linear equations are obtained:

$$\begin{bmatrix} 0 & \bar{1}^T \\ \bar{1} & ZZ^T + \gamma^{-1}I \end{bmatrix} \begin{bmatrix} b \\ a \end{bmatrix} = \begin{bmatrix} 0 \\ Y \end{bmatrix} \qquad (12)$$

Let $\Omega = ZZ^T$ with $\Omega_{ij} = K(X_i, X_j)$, where $K(\cdot, \cdot)$ represents the kernel function; in this paper the radial basis kernel function $K(X_i, X_j) = \exp\left(-\frac{\|X_i - X_j\|^2}{2\sigma}\right)$ is adopted. $\Omega + \gamma^{-1}I$ is the correlation matrix; let $B \equiv \Omega + \gamma^{-1}I$, so:

$$\begin{bmatrix} 0 & \bar{1}^T \\ \bar{1} & B \end{bmatrix} \begin{bmatrix} b \\ a \end{bmatrix} = \begin{bmatrix} 0 \\ Y \end{bmatrix} \qquad (13)$$

To sum up, we can get:

$$b = \frac{\bar{1}^T B^{-1} Y}{\bar{1}^T B^{-1} \bar{1}} \qquad (14)$$

$$a = B^{-1}(Y - b\bar{1}) \qquad (15)$$

Combining (14) and (15), we can summarize:

$$Y = w\varphi(X) + b = \sum_{j=1}^{l} a_j K(X_j, X) + b \qquad (16)$$

Thus, the center point at time t+1 can be predicted. Taking the center points up to time t as training samples for the LSSVM, the center point at the next moment is predicted by (16), with the parameters b and a in (16) calculated according to (14) and (15).
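Since B is just a dense l-by-l matrix, (14)-(16) amount to two linear solves per center dimension. A minimal sketch, with γ = 40 and σ = 0.7 taken from Table 4 (variable names are ours; one such model is fitted per dimension i):

```python
import numpy as np

def rbf(xi, xj, sigma=0.7):
    """Radial basis kernel K(Xi, Xj) = exp(-||Xi - Xj||^2 / (2*sigma))."""
    return np.exp(-np.sum((np.asarray(xi) - np.asarray(xj)) ** 2) / (2.0 * sigma))

def lssvm_fit(X, Y, gamma=40.0, sigma=0.7):
    """Solve Eqs. (14)-(15) for one center dimension: X holds the l lagged
    input vectors (each of length P), Y the l next-step targets."""
    l = len(Y)
    Omega = np.array([[rbf(X[i], X[j], sigma) for j in range(l)] for i in range(l)])
    B = Omega + np.eye(l) / gamma                                  # B = Omega + gamma^-1 I
    ones = np.ones(l)
    b = ones @ np.linalg.solve(B, Y) / (ones @ np.linalg.solve(B, ones))  # Eq. (14)
    a = np.linalg.solve(B, Y - b * ones)                                  # Eq. (15)
    return a, b

def lssvm_predict(x_new, X, a, b, sigma=0.7):
    """Eq. (16): Y = sum_j a_j K(X_j, x_new) + b."""
    return float(sum(a[j] * rbf(X[j], x_new, sigma) for j in range(len(a))) + b)
```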
3.2 Kalman filter model

The prediction accuracy of the LSSVM is closely related to the number of training samples. If the sample size of the predicted object is too small, the prediction may not be accurate, even if all the samples are support vectors. Therefore, the prediction of the LSSVM will be biased when the learning samples cover less than one complete learning cycle. The KF, however, can estimate the state of a process without any such learning time and can correct itself based on subsequently made measurements. Thus we take the KF to cover the shortage of learning samples. Due to space limitations, only the main formulas of Kalman prediction are given herein; for the specific calculation process, see [18]. The calculation proceeds as follows.

The equations for the time update step are:

$$\hat{x}_t^- = A\hat{x}_{t-1} \qquad (17)$$
$$P_t^- = AP_{t-1}A^T + Q \qquad (18)$$

And the equations for the measurement update step are:

$$K_t = P_t^-(P_t^- + R)^{-1} \qquad (19)$$
$$\hat{x}_t = \hat{x}_t^- + K_t(z_t - \hat{x}_t^-) \qquad (20)$$
$$P_t = (I - K_t)P_t^- \qquad (21)$$

where A is the state-transition matrix that relates the state at the previous time step t−1 to the state at the current step t, x is the state vector to be estimated by the KF, P is the error covariance estimate, and z denotes the measurement of the state vector. The process and measurement noise covariance matrices are Q and R, respectively, and K is the KF gain. A measurement $z \in \mathbb{R}^m$ is given by

$$z_t = x_t + v_t \qquad (22)$$
$$p(v) \sim N(0, R) \qquad (23)$$

The current estimates are obtained using the previous predictions and the current observation. The KF designed for prediction has two variants: a 2-D KF (2 by 2 KF) and a 3-D KF (3 by 3 KF). There are no control inputs in this system. Moreover, the process noise is Gaussian with zero mean, and the observation noise is Gaussian with an assumed variance; R and Q can be calculated as in [18].

3.3 PS manifold estimation

$\tilde{C}_{t+1}$ is calculated from $\tilde{C}_t$ and $\tilde{C}_{t-1}$ [26], and the process is as follows:

$$\tilde{x}_i^{t+1} = \tilde{x}_i^t + \varepsilon_i^m \qquad (24)$$
$$\varepsilon_i^m \sim N(0, \sigma_i^m) \qquad (25)$$
$$\sigma_i^m = \frac{1}{n} D(\tilde{C}_t, \tilde{C}_{t-1})^2 \qquad (26)$$

where $\tilde{x}^t \in \tilde{C}_t$, $i = 1, 2, \ldots, n$, n is the number of decision variables, and D(A, B) represents the distance between manifolds A and B, defined as:

$$D(A, B) = \frac{1}{|A|} \sum_{x \in A} \min_{y \in B} \|x - y\| \qquad (27)$$

where |A| represents the number of individuals in population A and $\|x - y\|$ is the Euclidean distance between x and y.

3.4 SPS process

The SPS procedure can be integrated into the MOEA framework. It aims to initialize a new population $P_t$ when a change happens at the beginning of time t. In the SPS strategy, the LSSVM model requires sufficient samples to complete the prediction calculation, so the process is divided into two stages. When the number of stored samples is less than 2p, the historical center samples are insufficient: some new individuals are directly generated from the previous generation, while the others are generated by the KF prediction model. When the number of samples is greater than 2p, there are enough samples to adopt the LSSVM model. The SPS procedure is summarized in Table 2 and its flow chart is shown in Fig. 2. The input dimension of the LSSVM is p; the maximum number of samples is M; and, according to Table 2, the minimum number of samples is also p.

Table 2 SPS implementation process

Fig. 2 The flow chart of SPS
4 Test instances and performance metrics

This section introduces the test instances and performance metrics used in the experiments. Then the validity of the SPS is verified by predicting center points.

4.1 Test instances

Benchmark problems play important roles in assessing the performance of an algorithm and guiding the algorithm design. Table 3 lists all six test problems with their PFs and PSs in detail. There are many test functions for DMOPs. In these test functions, the PF or the PS changes with time, while the number of decision variables, the number of objectives, and the boundaries of the search space stay fixed throughout the run. Furthermore, the geometric shapes of the PSs are 1-D line segments for bi-objective problems and 2-D rectangles for tri-objective problems. On one hand, there is no reason that the PS of a real-world problem should be so simple; on the other hand, this property may favor some offspring reproduction procedures, as discussed in [24]. For this reason, in addition to testing the linear functions F1-F3, we also test three newer DMOP test instances, F5-F7, proposed in [26], which have a nonlinear correlation between the decision variables.

4.2 Performance metrics

To evaluate the convergence and distribution of the algorithms, the inverted generational distance (IGD) [28] and the mean value of the IGD (MIGD) [26] are adopted for DMOPs:

$$IGD(P_t^*, P_t) = \frac{\sum_{v \in P_t^*} d(v, P_t)}{|P_t^*|} \qquad (28)$$

$$d(v, P_t) = \min_{u \in P_t} \|F(v) - F(u)\| \qquad (29)$$

$$MIGD = \frac{1}{|T|} \sum_{t \in T} IGD(P_t^*, P_t) \qquad (30)$$

where $P_t^*$ is a set of uniformly distributed Pareto optimal points on the Pareto optimal front at time t, $P_t$ is the approximate Pareto front obtained by the algorithm under consideration, $d(v, P_t)$ represents the minimum Euclidean distance between the point v on the true PF and the approximate PF, T is a set of discrete time points in a run, and |T| is the cardinality of T.

A lower value of IGD means that the algorithm has better optimization performance. To obtain a low value of IGD, $P_t$ must be very close to the true PF and cannot miss any part of it, as can be seen from (28) and (29). MIGD is likewise a metric for evaluating the performance of algorithms regarding convergence and distribution: the smaller the MIGD, the better the algorithm performs.
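Eqs. (28)-(30) translate directly into code. A short sketch (the reference front must be sampled from the known true PF; naming is ours):

```python
import numpy as np

def igd(PF_true, PF_approx):
    """Eqs. (28)-(29): mean distance from each reference point on the
    true front to its nearest neighbour in the approximate front."""
    PF_true = np.asarray(PF_true, dtype=float)
    PF_approx = np.asarray(PF_approx, dtype=float)
    return float(np.mean([np.min(np.linalg.norm(PF_approx - v, axis=1))
                          for v in PF_true]))

def migd(true_fronts, approx_fronts):
    """Eq. (30): IGD averaged over the sampled time steps in T."""
    return float(np.mean([igd(pt_star, pt)
                          for pt_star, pt in zip(true_fronts, approx_fronts)]))
```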
Table 3 Test instances used in experiments

F1: Search space $[0,1] \times [-1,1]^{n-1}$. Objectives: $f_1(x,t) = x_1$, $f_2(x,t) = g(1 - \sqrt{f_1/g})$, with $g = 1 + \sum_{i=2}^{n}(x_i - G(t))^2$, $G = \sin(0.5\pi t)$, $t = \frac{1}{n_T}\lfloor \tau/\tau_T \rfloor$. $PS(t)$: $0 \le x_1 \le 1$, $x_i = G$ for $i = 2, \ldots, n$. $PF(t)$: $f_2 = 1 - \sqrt{f_1}$, $0 \le f_1 \le 1$. Remarks: PF is fixed but PS changes; two objectives.

F2: Search space $[0,1] \times [-1,1]^{n-1}$. Objectives: $f_1(x,t) = x_1$, $f_2(x,t) = g(1 - (f_1/g)^H)$, with $g = 1 + 9\sum_{i=2}^{n} x_i^2$, $H = 1.25 + 0.75\sin(0.5\pi t)$, $t = \frac{1}{n_T}\lfloor \tau/\tau_T \rfloor$. $PS(t)$: $0 \le x_1 \le 1$, $x_i = 0$ for $i = 2, \ldots, n$. $PF(t)$: $f_2 = 1 - f_1^H$, $0 \le f_1 \le 1$. Remarks: PF changes but PS is fixed; two objectives.

F3: Search space $[0,1] \times [-1,1]^{n-1}$. Objectives: $f_1(x,t) = x_1$, $f_2(x,t) = g(1 - (f_1/g)^H)$, with $g = 1 + \sum_{i=2}^{n}(x_i - G)^2$, $G = \sin(0.5\pi t)$, $H = 1.25 + 0.75\sin(0.5\pi t)$, $t = \frac{1}{n_T}\lfloor \tau/\tau_T \rfloor$. $PS(t)$: $0 \le x_1 \le 1$, $x_i = G$ for $i = 2, \ldots, n$. $PF(t)$: $f_2 = 1 - f_1^H$, $0 \le f_1 \le 1$. Remarks: both PF and PS change; two objectives.

F5: Search space $[0,5]^n$. Objectives: $f_1(x,t) = |x_1 - a|^H + \sum_{i \in I_1} y_i^2$, $f_2(x,t) = |x_1 - a - 1|^H + \sum_{i \in I_2} y_i^2$, with $y_i = x_i - b - 1 + |x_1 - a|^{H + i/n}$, $H = 1.25 + 0.75\sin(\pi t)$, $a = 2\cos(\pi t) + 2$, $b = 2\sin(2\pi t) + 2$, $t = \frac{1}{n_T}\lfloor \tau/\tau_T \rfloor$, $I_1 = \{i \mid 1 \le i \le n, i \text{ odd}\}$, $I_2 = \{i \mid 1 \le i \le n, i \text{ even}\}$. $PS(t)$: $a \le x_1 \le a+1$, $x_i = b + 1 - |x_1 - a|^{H + i/n}$ for $i = 2, \ldots, n$. $PF(t)$: $f_1 = s^H$, $f_2 = (1-s)^H$, $0 \le s \le 1$. Remarks: both PF and PS change; two objectives.

F6: Same as F5 except $a = 2\cos(1.5\pi t)\sin(0.5\pi t) + 2$ and $b = 2\cos(1.5\pi t)\cos(0.5\pi t) + 2$. Remarks: both PF and PS change; two objectives.

F7: Same as F5 except $a = 1.7(1 - \sin(\pi t))\sin(\pi t) + 3.4$ and $b = 1.4(1 - \sin(\pi t))\cos(\pi t) + 2.1$. Remarks: both PF and PS change; two objectives.
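As an illustration of how these instances are evaluated, the sketch below implements F5 from Table 3. Since $y_i$ is only defined for $i = 2, \ldots, n$, we restrict the odd/even index sets to that range, and τ_T (the change frequency) is an assumed setting rather than one fixed by the table:

```python
import numpy as np

def f5(x, tau, nT=5, tauT=30):
    """Objective values of test instance F5 at generation tau (Table 3)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    t = np.floor(tau / tauT) / nT
    H = 1.25 + 0.75 * np.sin(np.pi * t)
    a = 2.0 * np.cos(np.pi * t) + 2.0
    b = 2.0 * np.sin(2.0 * np.pi * t) + 2.0
    i = np.arange(2, n + 1)                               # indices 2..n
    y = x[1:] - b - 1.0 + np.abs(x[0] - a) ** (H + i / n)
    f1 = np.abs(x[0] - a) ** H + np.sum(y[i % 2 == 1] ** 2)        # I1: odd i
    f2 = np.abs(x[0] - a - 1.0) ** H + np.sum(y[i % 2 == 0] ** 2)  # I2: even i
    return float(f1), float(f2)
```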
Table 4 Parameter settings

Parameter | PPS | MOEA/D-KF | SPS | MRCDMO
Dimension of decision variables | 20 | 20 | 20 | 20
Neighborhood size | 20 | 20 | 20 | 20
Decomposition method | Tchebycheff | Tchebycheff | Tchebycheff | Tchebycheff
Differential evolution | CR=1.0, F=0.5 | CR=1.0, F=0.5 | CR=1.0, F=0.5 | CR=1.0, F=0.5
Length of the history center point series M | 23 | 23 | 23 | *
Severity of changes (nT = 5) | 5 | 5 | 5 | 5
Model order p | 3 | 3 | 3 | *
γ | * | * | 40 | *
σ | * | * | 0.7 | *
Process noise | * | Gaussian N(0, 0.04) | Gaussian N(0, 0.04) | *
Observation noise | * | Gaussian N(0, 0.01) | Gaussian N(0, 0.01) | *
Number of runs | 30 | 30 | 30 | 30
Number of changes | 40 | 40 | 40 | 40

5 Experiment

5.1 Center points prediction experiment

To verify the effectiveness of the prediction algorithm, the prediction accuracy is simulated and analyzed using the real Pareto fronts at different times. Because SPS can solve both linear and nonlinear problems, we mainly show results on the nonlinear test functions. For the same sample data, the model combining the KF and LSSVM proposed in this paper and the univariate autoregression (AR) model used by the population prediction strategy (PPS) are run separately to predict the center points, and the results are statistically analyzed. Black lines with + mark the true PS of the objective functions, blue lines with squares mark SPS, and red lines with circles mark PPS. The parameters are set as shown in Table 4. In addition, a glossary of abbreviations is provided in the Appendix.

The comparison of the center point predictions for functions F5-F7 is shown in Figs. 3, 4 and 5 for nT = 15, 10 and 5. As seen from the three figures, as the intensity of environmental change increases, the SPS predictions and the true PS still coincide, whereas there is an obvious deviation between the values predicted by the AR model and the true PS. The mean absolute error between predicted and true points in different dimensions is shown in Table 5 for nT = 5. From the expressions of the test functions, we know that in the real PS the solution sets in dimensions x2 to xn are identical, so only the prediction comparison for dimensions x1 and x2 is given.

We also test F1-F3, which have a linear correlation between the decision variables. Due to space limitations, the comparison of the center point predictions for F1-F3 is shown in Fig. 6 for nT = 5, and the cases nT = 10 and 15 are shown in the supplementary material. Since the true Pareto solution sets of these objective functions are relatively simple and the dimension remains fixed, the PPS and SPS prediction centers overlap with the true Pareto solution set, as shown in Fig. 6. It can be seen from Table 5 that on F1-F3 the prediction accuracy of the AR model is higher than that of the combined KF and LSSVM model, but only slightly. On F6 and F7, however, the advantage of SPS is obvious, and it grows with the intensity of environmental change, especially when nT = 5.

From the above, it can be concluded that the SPS proposed in this paper has higher prediction accuracy and better applicability, especially when the environment changes more dramatically.

5.2 SPS prediction experiment

To further demonstrate the superiority of SPS, the SPS, MOEA/D-KF, PPS and multi-regional co-evolutionary dynamic multiobjective optimization (MRCDMO) [17] algorithms (with MOEA/D-DE [23] as the underlying static multi-objective optimization algorithm) are each run 30 times independently and analyzed quantitatively under the same test functions and hardware conditions. The algorithm parameter settings are shown in Table 4. The center point prediction plays a very important role in the whole population prediction, so we focus on the cases nT = 10 and 5 in this paper. To visualize the quality of the obtained populations, the run with the minimum MIGD among the 30 runs is chosen and the IGD trends of the algorithms are given.

The IGD trend comparison of SPS, MOEA/D-KF, PPS and MRCDMO over time of changes on F5-F7 is shown in Fig. 7.
Fig. 3 The center points prediction comparison simulation figure of F5-F7 when nT = 15

MOEA/D-KF takes the KF model, which is just a simple linear filtering model, and can provide reasonable performance as a prediction method. MRCDMO adopts multi-regional center points to predict solutions when environmental changes occur. However, both show the same shortcoming as PPS when dealing with nonlinear problems, so we only analyze SPS and PPS in detail. The population predicted by SPS obtains a small IGD when nT = 10 and 5. Due to space limitations, the IGD trend charts of F5-F7 are given in this paper, and those of F1-F3 are shown in the addenda. On F1-F3, PPS obtains slightly better performance than SPS; SPS performs about as well as PPS on linear functions. The IGD of SPS is much better than that of PPS on F5-F7 when nT = 10 and 5. It is not difficult to explain why SPS performs worse than PPS on F1-F3: the AR(n) model adopted in PPS is a linear model, the functions F1-F3 are linear while F5-F7 are nonlinear, and a linear model naturally predicts linear functions accurately.
Fig. 4 The center points prediction comparison simulation figure of F5-F7 when nT = 10
Fig. 5 The center points prediction comparison simulation figure of F5-F7 when nT = 5

On the contrary, the LSSVM model shows many unique advantages in small-sample, nonlinear, and high-dimensional pattern recognition.

The comparison between the approximate PF and the true PF obtained by MOEA/D-KF, PPS and SPS is presented in Figs. 8 and 9. Although there is a small beat on F1 and F3 in the early stage, SPS is much better than PPS on F2 and F5-F7. The convergence precision on F2 and F5-F7 obtained by SPS is better than that obtained by PPS and MOEA/D-KF. From Fig. 8, it is observed that SPS performs better than PPS on F5-F7, though not obviously so. The worst part is that the convergence accuracy on F1 and F3 is not as good as that of PPS and MOEA/D-KF in the PF comparison diagrams in the supplementary material. However, the approximate PF of F5-F7 obtained by SPS in Fig. 9 has better convergence than PPS. That is to say, SPS is still able to work well on nonlinear functions when the environment changes frequently. The mean and variance of the IGD over 30 runs in the same environment are given in Table 6.

The PF of F1 is fixed but the PS changes after a change has occurred. Both algorithms have good convergence and distribution in the later stage, as can be seen from Figs. S5 and S6 in the supplementary material. The IGD of PPS converges faster than that of SPS, and there is a big beat at about t = 10 in SPS in Fig. S3 in the supplementary material. It can also be seen in Table 6 that the MIGD of PPS is smaller than that of SPS, and PPS has higher computational stability. The best results are shown in boldface in Table 5 and Table 6.

The PF of F2 changes after a change has occurred, but the PS is fixed. The algorithms perform well in convergence and distribution, as shown in Figs. S3, S5 and S6 in the supplementary material. Compared with PPS, the MIGD of SPS is smaller, which means SPS has higher convergence accuracy and wider distribution. The variance of SPS is also smaller, which shows that the computational stability of SPS is high.

For F3, the PF and PS both change, as for F1. The IGD shows a big beat after convergence in SPS. This is not difficult to explain: in the early stage of the algorithm, the function changes less frequently, but the stored approximate PF differs from the true PF. Compared with the AR linear model, the LSSVM, which belongs to the intelligent models, is more dependent on the samples; although the Kalman filter has been added for prediction, in the case of large samples there will be a big mutation in the early stages.

Table 5 The mean of absolute error between the true and predicted center points

Test Function | Models | Error in x1 | Error in x2
F1 | PPS | 1.15e-16 | 1.46e-04
F1 | SPS | 0 | 3.85e-04
F2 | PPS | 7.12e-17 | 0
F2 | SPS | 0 | 0
F3 | PPS | 9.68e-17 | 1.25e-16
F3 | SPS | 0 | 3.85e-04
F5 | PPS | 1.12e-15 | 6.97e-03
F5 | SPS | 3.94e-04 | 7.68e-04
F6 | PPS | 1.00e-02 | 4.31e-03
F6 | SPS | 3.11e-04 | 1.39e-04
F7 | PPS | 7.37e-02 | 2.78e-02
F7 | SPS | 3.19e-04 | 4.24e-04
Fig. 6 The center points prediction comparison simulation figure of F1-F3 when nT = 5

With the iteration of the algorithm, the approximate PF gets much closer to the true PF, which provides the LSSVM with a more accurate learning model; the prediction accuracy and prediction stability are then improved simultaneously.

With the increasing complexity of the true Pareto solution set, the prediction accuracy of the linear prediction model used by PPS is significantly reduced on F5-F7. From Fig. 7, the IGD obtained by SPS converges by t = 40 on F5, while that of PPS finally converges at t = 75. The MIGD obtained by SPS is smaller in both mean and variance, which indicates that SPS is superior to the PPS algorithm in convergence, distribution and computational stability.

Fig. 7 IGD trend comparison of MOEA/D-KF, MRCDMO, PPS and SPS over time of changes on F5-F7
Fig. 8 PF obtained by MOEA/D-KF, MRCDMO, PPS, SPS at t = 149, 150, 151, 152, 153 on F5-F7 when nT = 10

The changes of F6 and F7 are more complicated. The prediction accuracy of the AR linear model used in PPS is low (as described above), which leads the predicted population to deviate seriously from the true PS. Since the ideal approximate PF is not identified before the environmental change (40 iterations), the distortion of the learning samples increases and the prediction accuracy of the model decreases. Therefore, the IGD of the PPS algorithm fluctuates continuously throughout the calculation cycle. It is obvious that the prediction accuracy of the Pareto solution set of SPS is higher than that of PPS on F5-F7, which indicates that the LSSVM, the artificial intelligence model used in this algorithm, performs better for complex DMOPs.

Fig. 9 PF obtained by MOEA/D-KF, MRCDMO, PPS, SPS at t = 149, 150, 151, 152, 153 on F5-F7 when nT = 5
Table 6 MIGD values of four strategies on F1-F3, F5-F7 (per instance and model: Maximum / Minimum / Mean / STD, first for nT = 10 and then for nT = 5)

F1 MOEA/D-KF | 1.83e-02 / 1.74e-02 / 1.78e-02 / 2.35e-04 | 5.26e-02 / 4.64e-02 / 4.92e-02 / 1.77e-03
F1 MRCDMO | 2.96e-02 / 2.72e-02 / 2.83e-02 / 3.87e-04 | 5.31e-02 / 4.33e-02 / 4.51e-02 / 1.93e-03
F1 PPS | 8.95e-03 / 7.36e-03 / 7.75e-03 / 3.43e-04 | 1.31e-02 / 8.05e-03 / 9.17e-03 / 1.31e-03
F1 SPS | 3.78e-02 / 3.41e-02 / 3.57e-02 / 9.44e-04 | 2.33e-02 / 1.58e-02 / 1.98e-02 / 2.36e-03
F2 MOEA/D-KF | 1.05e-02 / 8.65e-03 / 9.48e-03 / 4.68e-04 | 1.14e-02 / 8.42e-03 / 9.75e-03 / 6.50e-04
F2 MRCDMO | 1.23e-02 / 7.87e-03 / 1.07e-02 / 6.38e-04 | 1.33e-02 / 9.35e-03 / 1.20e-02 / 7.45e-04
F2 PPS | 1.03e-02 / 8.00e-03 / 8.91e-03 / 6.10e-04 | 1.05e-02 / 8.07e-03 / 9.09e-03 / 6.67e-04
F2 SPS | 9.51e-03 / 7.65e-03 / 8.62e-03 / 4.47e-04 | 9.39e-03 / 7.33e-03 / 8.30e-03 / 5.10e-04
F3 MOEA/D-KF | 2.39e-02 / 2.26e-02 / 2.33e-02 / 3.53e-04 | 6.84e-02 / 6.14e-02 / 6.45e-02 / 1.96e-03
F3 MRCDMO | 4.11e+00 / 3.88e+00 / 4.02e+00 / 5.51e-03 | 4.19e+00 / 3.75e+00 / 4.07e+00 / 5.52e-03
F3 PPS | 1.08e-02 / 8.21e-03 / 8.75e-03 / 5.67e-04 | 1.32e-02 / 8.98e-03 / 1.02e-02 / 1.12e-03
F3 SPS | 4.93e-02 / 4.44e-02 / 4.71e-02 / 1.22e-03 | 3.18e-02 / 2.02e-02 / 2.38e-02 / 3.05e-03
F5 MOEA/D-KF | 1.11e+00 / 6.64e-01 / 8.75e-01 / 9.95e-02 | 2.04e+00 / 1.68e+00 / 1.89e+00 / 7.80e-02
F5 MRCDMO | 1.37e+00 / 8.79e-01 / 1.04e+00 / 1.14e-02 | 1.73e+00 / 9.32e-01 / 1.56e+00 / 1.48e-01
F5 PPS | 8.90e-01 / 8.89e-02 / 4.96e-01 / 2.01e-01 | 1.67e+00 / 3.28e-01 / 9.82e-01 / 3.75e-01
F5 SPS | 1.60e-01 / 6.24e-02 / 1.19e-01 / 2.56e-02 | 4.12e-01 / 1.48e-01 / 2.94e-01 / 6.17e-02
F6 MOEA/D-KF | 2.44e-01 / 1.74e-01 / 2.04e-01 / 1.86e-02 | 9.78e-01 / 8.31e-01 / 8.87e-01 / 3.52e-02
F6 MRCDMO | 2.92e-01 / 8.85e-02 / 2.70e-01 / 2.05e-02 | 9.27e-01 / 6.54e-01 / 7.77e-01 / 5.75e-02
F6 PPS | 3.25e-01 / 7.90e-02 / 2.03e-01 / 7.00e-02 | 9.66e-01 / 6.20e-01 / 7.79e-01 / 7.14e-02
F6 SPS | 1.49e-01 / 4.48e-02 / 9.91e-02 / 2.68e-02 | 2.20e-01 / 5.54e-02 / 1.46e-01 / 5.75e-02
F7 MOEA/D-KF | 2.92e-01 / 2.14e-01 / 2.49e-01 / 2.26e-02 | 8.61e-01 / 7.64e-01 / 8.13e-01 / 2.64e-02
F7 MRCDMO | 3.83e-01 / 2.97e-01 / 3.63e-01 / 3.23e-02 | 6.96e-01 / 5.73e-01 / 6.82e-01 / 5.64e-02
F7 PPS | 1.60e-01 / 5.80e-02 / 7.58e-02 / 2.48e-02 | 1.34e+00 / 7.12e-01 / 9.44e-01 / 2.13e-01
F7 SPS | 1.11e-01 / 5.78e-02 / 8.65e-02 / 1.33e-02 | 1.63e-01 / 8.02e-02 / 1.10e-01 / 2.17e-02

Fig. 10 The box plot of IGD values for MOEA/D-KF, MRCDMO, PPS, SPS for the test function F5-F7
Table 7 Mean values of MIGD obtained by SPS on six instances in different parameter settings

Proportion selected from $P^{t-1}$   F1   F2   F3   F5   F6   F7

0.05 3.25e-02 7.59e-02 4.69e-02 7.35e-01 7.68e-01 3.66e-01


0.1 3.14e-02 7.72e-02 4.55e-02 6.95e-01 7.71e-01 4.18e-01
0.15 3.03e-02 8.21e-02 4.58e-02 6.58e-01 8.11e-01 3.65e-01
0.2 3.21e-02 7.68e-02 4.73e-02 6.46e-01 7.49e-01 3.88e-01
0.25 3.16e-02 7.42e-02 4.81e-02 6.31e-01 7.32e-01 3.60e-01
0.3 3.41e-02 7.68e-02 4.77e-02 6.34e-01 8.64e-01 3.33e-01
0.35 3.23e-02 8.02e-02 4.59e-02 6.60e-01 7.47e-01 3.38e-01
0.4 3.27e-02 6.88e-02 4.72e-02 6.55e-01 6.86e-01 3.51e-01
0.45 3.22e-02 7.10e-02 4.53e-02 6.19e-01 8.69e-01 3.67e-01
0.5 3.31e-02 6.89e-02 4.51e-02 6.41e-01 7.23e-01 3.15e-01
0.55 3.33e-02 7.01e-02 4.51e-02 6.37e-01 7.23e-01 3.50e-01
0.6 2.96e-02 7.06e-02 4.72e-02 6.38e-01 8.00e-01 3.23e-01
0.65 3.20e-02 7.04e-02 4.43e-02 6.28e-01 8.83e-01 3.37e-01
0.7 3.02e-02 7.22e-02 4.66e-02 6.67e-01 7.17e-01 3.60e-01
0.75 3.19e-02 6.99e-02 4.43e-02 6.33e-01 9.13e-01 3.49e-01
0.8 3.30e-02 7.21e-02 4.59e-02 6.36e-01 7.80e-01 3.64e-01
0.85 3.52e-02 7.35e-02 4.57e-02 6.50e-01 7.74e-01 3.37e-01
0.9 3.67e-02 6.17e-02 4.80e-02 6.61e-01 8.24e-01 3.72e-01
0.95 4.02e-02 6.24e-02 5.09e-02 6.44e-01 8.16e-01 4.01e-01

Through data analysis, the advantages of SPS grow with the increasing intensity of environmental changes. For more intuitive observation, the box plot of IGD values for PPS, MOEA/D-KF and SPS on F5-F7 when nT = 10 and 5 is given in Fig. 10. From the comparison of the box diagrams, the median and interquartile range obtained by SPS are smaller, which indicates that the accuracy of prediction is higher. In addition, SPS produces obviously fewer outliers than PPS and MOEA/D-KF.

According to a large number of experiments and data, SPS, the algorithm proposed in this paper, is better than PPS and MOEA/D-KF on nonlinear functions in convergence accuracy and speed when the environment changes violently. SPS is also applicable to the prediction of linear functions.

5.3 Comparisons of different parameters in population selection

In the SPS strategy, when 2 ≤ t < 2p, 65% of individuals are generated from $P^{t-1}$ and 35% use the Kalman Filter for prediction. To determine the appropriate parameter in population selection, we consider the six problems F1, F2, F3, F5, F6 and F7. The proportion of individuals selected from $P^{t-1}$ is set from 0.05 to 0.95 in 0.05 intervals. The other parameters are the same as in Table 4. Table 7 gives the statistical results of the MIGD of the solutions obtained by SPS with different parameters on these problems over 30 runs. By averaging all the data in Table 7, SPS performs well when 65% of the individuals are generated from $P^{t-1}$ and 35% use the Kalman Filter for prediction.

6 Conclusions and future work

In this paper, we propose SPS to enhance the performance of MOEAs in dealing with dynamic environments when the environment changes violently. In SPS, we focus on the prediction of the center points. The main advantages of the proposed SPS are as follows.

The LSSVM artificial intelligence model is used to improve the prediction accuracy of the center point, thereby improving the ability of the population to track the new PF in the new environment. The LSSVM model can achieve high prediction accuracy under both linear and nonlinear changes, but its disadvantage is that it is overly dependent on the distribution of training samples: the more the samples used for training cover the entire change period, the higher the prediction accuracy of the trained model. To overcome this shortcoming, the Kalman filter algorithm is introduced. In the case of insufficient samples, the Kalman filter algorithm is used to generate the initial population. By combining the three algorithms, the algorithm achieves
higher convergence accuracy under linear and nonlinear changes. In actual production, many problems need dynamic optimization. For example, in the rolling production process, the rolling speed has a great influence on the rolling efficiency; as the rolling speed increases or decreases, the optimal rolling schedule will also change. Combining the theoretical algorithms studied here with actual production is the main research work for the future.

Acknowledgements This work was supported by the National Natural Science Foundation of China [No. 62003296, 61703361]; the Natural Science Foundation of Hebei [No. E2018203162, F2020203031]; the Science and Technology Research Projects of Hebei [No. QN2020225]; the Post-Doctoral Research Projects of Hebei [No. B2019003021]; and the Hebei Province Graduate Innovation Funding Project [CXZZBS2022134]. The authors would like to thank the editor and anonymous reviewers for their helpful comments and suggestions to improve the quality of this paper.

Declarations

Conflict of Interests The authors declare that they have no conflicts of interest in this work. We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.

Appendix

To facilitate reading, the glossary of abbreviations is presented in Table 8.

Table 8 Glossary of abbreviations

Dynamic multiobjective optimization problems | DMOPs
Two stages prediction strategy | SPS
Multi-objective optimization problems | MOPs
Single-valued neutrosophic | SVN
Multiple attribute decision making | MADM
Evolutionary algorithms | EAs
Dynamic multi-objective evolutionary algorithms | DMOEAs
Least squares support vector machine | LSSVM
Kalman Filter | KF
Inverted generational distance | IGD
Mean inverted generational distance | MIGD
Univariate autoregression | AR
Population prediction strategy | PPS
Multi-regional co-evolutionary dynamic multiobjective optimization | MRCDMO
Pareto optimal set | PS
Pareto optimal front | PF
Minimum number of LSSVM samples | p
Pareto dominance | ≺
Number of objectives | m
Center of mass | $\bar{x}^t$
Manifold | $\tilde{C}^t$
Dimension of decision vector | n
Number of LSSVM samples | l
Dimension of support vector machine input | P
Feature mapping | $\varphi(X)$
Regularization parameter | γ
Kernel function | K(·, ·)
State-transition matrix | A
Error covariance estimate | P
Measurement of the state vector | z
Process noise covariance matrix | Q
Measurement noise covariance matrix | R
KF gain | K
References

1. Azzouz R, Bechikh S, Said LB, Trabelsi W (2018) Handling time-varying constraints and objectives in dynamic evolutionary multi-objective optimization. Swarm Evol Comput 39:222–248. https://doi.org/10.1016/j.swevo.2017.10.005
2. Cheng J, Yen GG, Zhang G (2015) A many-objective evolutionary algorithm with enhanced mating and environmental selections. IEEE Trans Evol Comput 19(4):592–605. https://doi.org/10.1109/TEVC.2015.2424921
3. Coello Coello CA, González Brambila S, Figueroa Gamboa J, Castillo Tapia MG, Hernández Gómez R (2020) Evolutionary multiobjective optimization: open research areas and some challenges lying ahead. Complex Intell Syst 6:221–236. https://doi.org/10.1007/s40747-019-0113-4
4. Cruz C, González JR, Pelta DA (2011) Optimization in dynamic environments: a survey on problems, methods and measures. Soft Comput 15(7):1427–1448. https://doi.org/10.1007/s00500-010-0681-0
5. Das S, Mandal A, Mukherjee R (2014) An adaptive differential evolution algorithm for global optimization in dynamic environments. IEEE Trans Cybern 44(6):966–978. https://doi.org/10.1109/TCYB.2013.2278188
6. Fan R, Wei L, Sun H, Hu Z (2020) An enhanced reference vectors-based multi-objective evolutionary algorithm with neighborhood-based adaptive adjustment. Neural Comput Applic 32:11767–11789. https://doi.org/10.1007/s00521-019-04660-5
7. He C, Tian Y, Wang H, Jin Y (2020) A repository of real-world datasets for data-driven evolutionary multiobjective optimization. Complex Intell Syst 6:189–197. https://doi.org/10.1007/s40747-019-00126-2
8. He Z, Yen GG, Zhang J (2014) Fuzzy-based Pareto optimality for many-objective evolutionary algorithms. IEEE Trans Evol Comput 18(2):269–285. https://doi.org/10.1109/tevc.2013.2258025
9. Hu Z, Wei Z, Sun H, Yang J, Wei L (2021) Optimization of metal rolling control using soft computing approaches: a review. Arch Comput Method Eng 28:405–421. https://doi.org/10.1007/s11831-019-09380-6
10. Hu Z, Yang J, Sun H, Wei L, Zhao Z (2017) An improved multi-objective evolutionary algorithm based on environmental and history information. Neurocomputing 222:170–182. https://doi.org/10.1016/j.neucom.2016.10.014
11. Jin Y, Branke J (2005) Evolutionary optimization in uncertain environments - a survey. IEEE Trans Evol Comput 9(3):303–317. https://doi.org/10.1109/TEVC.2005.846356
12. Koo WT, Chi KG, Tan KC (2010) A predictive gradient strategy for multiobjective evolutionary algorithms in a fast changing environment. Memetic Computing 2(2):87–110. https://doi.org/10.1007/s12293-009-0026-7
13. Li H, Zhang Q (2009) Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II. IEEE Trans Evol Comput 13(2):284–302. https://doi.org/10.1109/tevc.2008.925798
14. Li Q, Zou J, Yang S, Zheng J, Gan R (2019) A predictive strategy based on special points for evolutionary dynamic multi-objective optimization. Soft Comput 23:3723–3739. https://doi.org/10.1007/s00500-018-3033-0
15. Linnala M, Madetoja E, Ruotsalainen H, Hamalainen J (2012) Bi-level optimization for a dynamic multiobjective problem. Eng Optim 44(2):195–207. https://doi.org/10.1080/0305215X.2011.573853
16. Ma X, Yang J, Sun H, Hu Z, Wei L (2021) Feature information prediction algorithm for dynamic multi-objective optimization problems. European Journal of Operational Research. https://doi.org/10.1016/j.ejor.2021.01.028
17. Ma X, Yang J, Sun H, Hu Z, Wei L (2021) Multiregional co-evolutionary algorithm for dynamic multiobjective optimization. Inf Sci 545(4):1–24. https://doi.org/10.1016/j.ins.2020.07.009
18. Muruganantham A, Tan KC, Vadakkepat P (2016) Evolutionary dynamic multiobjective optimization via Kalman filter prediction. IEEE Trans Cybern 46(12):2862. https://doi.org/10.1109/TCYB.2015.2490738
19. Wang H, Wang D, Yang S (2009) A memetic algorithm with adaptive hill climbing strategy for dynamic optimization problems. Soft Comput 13(8-9):763–780. https://doi.org/10.1007/s00500-008-0347-3
20. Yang S, Yao X (2005) Experimental study on population-based incremental learning algorithms for dynamic optimization problems. Soft Comput 9(11):815–834. https://doi.org/10.1007/s00500-004-0422-3
21. Zeng S, Chen S, Fan K (2020) Interval-valued intuitionistic fuzzy multiple attribute decision making based on nonlinear programming methodology and TOPSIS method. Inf Sci 506:424–442. https://doi.org/10.1016/j.ins.2019.08.027
22. Zeng S, Luo D, Zhang C, Li X (2020) A correlation-based TOPSIS method for multiple attribute decision making with single-valued neutrosophic information. Int J Inf Technol Decis Mak 19(1):343–358. https://doi.org/10.1142/S0219622019500512
23. Zhang Q, Li H (2007) MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans Evol Comput 11(6):712–731. https://doi.org/10.1109/TEVC.2007.892759
24. Zhang Q, Zhou A, Jin Y (2008) RM-MEDA: a regularity model-based multiobjective estimation of distribution algorithm. IEEE Trans Evol Comput 12(1):41–63. https://doi.org/10.1109/TEVC.2007.894202
25. Zhang Z (2008) Multiobjective optimization immune algorithm in dynamic environments and its application to greenhouse control. Appl Soft Comput 8(2):959–971. https://doi.org/10.1016/j.asoc.2007.07.005
26. Zhou A, Jin Y, Zhang Q (2014) A population prediction strategy for evolutionary dynamic multiobjective optimization. IEEE Trans Cybern 44(1):40–53. https://doi.org/10.1109/TCYB.2013.2245892
27. Zhou A, Qu BY, Li H, Zhao SZ, Suganthan PN, Zhang Q (2011) Multiobjective evolutionary algorithms: a survey of the state of the art. Swarm Evol Comput 1(1):32–49. https://doi.org/10.1016/j.swevo.2011.03.001
28. Zitzler E, Thiele L, Laumanns M, Fonseca CM, da Fonseca VG (2003) Performance assessment of multiobjective optimizers: an analysis and review. IEEE Trans Evol Comput 7(2):117–132. https://doi.org/10.1109/TEVC.2003.810758
29. Zou F, Yen G, Tang L (2019) A knee-guided prediction approach for dynamic multi-objective optimization. Inf Sci 509:193–209. https://doi.org/10.1016/j.ins.2019.09.016

Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Hao Sun received the Ph.D. degree in control theory and control engineering from Yanshan University, Qinhuangdao, Hebei, China, in 2015. He is a Lecturer with the School of Electrical Engineering, Yanshan University. His current research interests include deep learning, target tracking and identification, rolling process control and intelligent algorithms.

Xuemin Ma is currently pursuing the Ph.D. degree in control science and engineering with the School of Electrical Engineering, Yanshan University, Qinhuangdao, Hebei, China. Her current research interests include dynamic multi-objective optimization and evolutionary computation.

Ziyu Hu received the Ph.D. degree in control science and engineering from Yanshan University, Qinhuangdao, Hebei, China, in 2018. He is an Associate Professor with the School of Electrical Engineering, Yanshan University. His current research interests include complex system modeling, artificial intelligence and multi-objective evolutionary algorithms.

Jingming Yang received the Ph.D. degree in Mechanical Design and Theory from Yanshan University, Qinhuangdao, Hebei, China, in 2000. He is a Professor with the School of Electrical Engineering, Yanshan University. His current research interests include metallurgical machinery integrated automation, rolling process modeling, large PLC and fieldbus control, artificial neural networks and intelligent algorithms.

Huihui Cui received the Master's degree in control science and engineering with the School of Electrical Engineering, Yanshan University, Qinhuangdao, Hebei, China. Her current research interests include dynamic multi-objective optimization and evolutionary computation.