Preference driven multi-objective optimization design
Article history: Received 26 March 2015; Revised 25 July 2015; Accepted 8 December 2015; Available online 30 December 2015.

Keywords: Multi-objective optimization; Controller tuning; Evolutionary multi-objective optimization; Preference handling; Multi-objective optimization design.

∗ Corresponding author. Tel.: +55 4132712579. E-mail address: [email protected] (G. Reynoso-Meza). http://dx.doi.org/10.1016/j.ins.2015.12.002

Abstract: Multi-objective optimization design (MOOD) procedures have shown to be a valuable tool for control engineers. These procedures could be used by designers when (1) it is difficult to find a reasonable trade-off for a controller tuning fulfilling several requirements; and (2) it is worthwhile to analyze the exchange among design objectives between design alternatives. Despite the usefulness of such methods for describing trade-offs among design alternatives (tuning proposals) with the so-called Pareto front, for some control problems finding a pertinent set of solutions could be a challenge. That is, some control problems are complex in the sense of finding the required trade-off among design objectives. In order to improve the performance of MOOD procedures in such situations, preference handling mechanisms could be used to improve the pertinency of solutions in the approximated Pareto front. In this paper an overall MOOD procedure focusing on controller tuning applications and using the designer's preferences is proposed. In order to validate such a procedure, a benchmark control problem is reformulated into a multi-objective problem statement, where different preference handling mechanisms in the optimization process are evaluated and compared. The obtained results validate the overall proposal as a potential tool for industrial controller tuning.
1. Introduction
Multi-objective Optimization Design (MOOD) procedures using Evolutionary Multi-objective Optimization (EMO) have
shown to be a valuable tool for controller tuning applications [41]. They enable the designer or decision maker (DM) to become closely embedded in the design process, since each design objective can be taken into account individually; they also enable comparing design alternatives (i.e. tuning proposals) in order to select a controller fulfilling the expected trade-off among conflicting objectives. This MOOD procedure comprises, at least, three fundamental steps: the multi-objective
problem (MOP) definition, the EMO process and the multicriteria decision making (MCDM) step.
Such procedures have been used with success when (1) it is difficult to find a reasonable trade-off for a controller tuning fulfilling several requirements; and (2) it is worthwhile to analyze the exchange among design objectives between design alternatives.
Despite the usefulness of such methods for describing trade-offs among design alternatives by the so called Pareto front,
for some control problems finding a pertinent set of solutions could be a challenge. In such instances, finding pertinent solutions could be difficult due to the complexity of the process and/or the complexity of the MOP statement.
The former case refers to situations where the process complexity makes it difficult to find desirable solutions, even for 2 or 3 design objectives. That is, the region of the decision (search) space which fulfils the designer's preferences could be difficult to find. In the latter case, designers commonly face the problem of fulfilling several performance objectives and requirements. If the number of design objectives is greater than 3, the designer is said to be dealing with a many-objective optimization instance. This could increase the complexity of the EMO process and the MCDM step, since diversity and convergence properties of a given algorithm usually conflict with each other in the Pareto front approximation.
An alternative to overcome the above mentioned issues is the inclusion of preferences in the EMO process. The inclusion of preferences is exploited by algorithms in order to provide an interesting (useful) Pareto front approximation for designers [7]. This information could be used in the same way to deal effectively with many-objective optimization instances [21], since the algorithm could be able to focus on the interesting regions of the objective space. Furthermore, preferences could be used to bridge any gap between problem definition, optimization and the decision making process [34,41], leading to a holistic design procedure. With this tool, designers could deal more effectively with complex processes, in the sense that it is complex to find a desirable trade-off among their conflicting objectives.
The aim of this paper is twofold. On the one hand, a MOOD procedure taking into account preference handling for controller tuning is proposed, in order to improve the pertinency of solutions when it is difficult to find a desirable (required) trade-off. On the other hand, through the example provided, a controller tuning benchmark is stated in the multi-objective optimization context. The lack of formal benchmark definitions for multi-objective optimization in the control context was noticed in [41]; the statement of such a benchmark will enable the comparison among techniques, methodologies and algorithms in the MOOD procedure context. This situation can motivate further developments of the MOOD procedure in controller tuning applications.
The remainder of this paper is organized as follows: in Section 2 a brief background on MOOD procedures and preference handling is given; in Section 3 the proposal of this paper is presented; Section 4 is devoted to validating the MOOD procedure with preferences for controller tuning; with this aim, two different instances are stated using the boiler control benchmark problem of [28]: a univariable and a multivariable statement. Finally, some concluding remarks are given.
Some notions on multi-objective optimization and preference handling techniques are required. They are provided below, within the controller tuning framework.

A multi-objective problem (MOP), in its minimization form, can be stated as follows (a maximization problem can be converted into a minimization one by applying the transformation max Ji(x) = −min(−Ji(x)) to each objective to be maximized):

min J(x) = [J1(x), J2(x), . . . , Jm(x)]   (1)

subject to:

K(x) ≤ 0   (2)

L(x) = 0   (3)

xi^min ≤ xi ≤ xi^max, i = [1, . . . , n]   (4)

where x = [x1, x2, . . . , xn] is defined as the decision vector; J(x) as the objective vector and K(x), L(x) as the inequality and equality constraint vectors, respectively; xi^min, xi^max are the lower and upper bounds in the decision space.
It has been pointed out that there is no single solution to a MOP, because in general no solution is better than the others in all the objectives. Therefore, a set of solutions, the Pareto set P, is defined. Each solution in the Pareto set defines an objective vector in the Pareto front JP. All the solutions in the Pareto front conform a set of Pareto optimal and non-dominated solutions (Fig. 1):
Definition 1 (Pareto optimality [26]). An objective vector J(x1 ) is Pareto optimal if there is not another objective vector J(x2 )
such that Ji (x2 ) ≤ Ji (x1 ) for all i ∈ [1, 2, . . . , m] and Jj (x2 ) < Jj (x1 ) for at least one j, j ∈ [1, 2, . . . , m].
Definition 2 (Dominance [8]). An objective vector J(x1 ) is dominated by another objective vector J(x2 ) iff Ji (x2 ) ≤ Ji (x1 ) for
all i ∈ [1, 2, . . . , m] and Jj (x2 ) < Jj (x1 ) for at least one j, j ∈ [1, 2, . . . , m].
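To make Definitions 1 and 2 concrete, the following minimal Python sketch tests dominance between two objective vectors under minimization; the function name and the use of NumPy arrays are illustrative choices, not part of the original formulation.

```python
import numpy as np

def dominates(j1, j2):
    """Return True if objective vector j1 dominates j2 (minimization).

    Following Definition 2: j1 is no worse than j2 in every objective
    and strictly better in at least one of them.
    """
    j1, j2 = np.asarray(j1, dtype=float), np.asarray(j2, dtype=float)
    return bool(np.all(j1 <= j2) and np.any(j1 < j2))

# [1.0, 2.0] dominates [1.5, 2.0]; [1.0, 3.0] and [2.0, 1.0] are mutually non-dominated
print(dominates([1.0, 2.0], [1.5, 2.0]))  # True
print(dominates([1.0, 3.0], [2.0, 1.0]))  # False
```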
It is important to notice that the Pareto front is usually unknown, and the DM can only rely on a Pareto front approxi-
mation J ∗P . In order to successfully embed the multi-objective optimization concept into a design process, three fundamental
steps [39] are (at least) required: the MOP definition (measure); the multi-objective optimization process (search); and the
multi-criteria decision making (MCDM) step (decision making). This procedure will be named Multi-objective Optimization
Design (MOOD) procedure (Fig. 2). This procedure has been used for controller tuning applications with success, as com-
mented below.
When addressing a controller tuning problem by means of the MOOD procedure, the two following questions are impor-
tant:
The MOOD procedure is used in controller tuning applications not because controllers are difficult to find, but because it might be a complex task to find a reasonable trade-off. Although there are a lot of well established tuning techniques, the MOOD procedure is an alternative which focuses on providing a reasonable trade-off solution in exchange for expending (investing) more time in the EMO process and the MCDM step. A basic control loop is depicted in Fig. 3. It comprises
transfer functions P(s) and C(s) of a process and a controller respectively. The objective of this control loop is to keep the
process output Y(s) in the desired reference R(s). The control problem consists in selecting proper tuning parameters for
controller C(s) in order to achieve a desirable performance of the process output as well as robust stability margins. This
control problem is well known and it has been addressed with several techniques.
Classic techniques [26] to calculate this set of solutions have been used (such as varying weighting vectors, ε-constraint, and goal programming methods) as well as specialized algorithms (the normal boundary intersection method [12] and the normal constraint method [25], for example). Nevertheless, these problems can sometimes be complex, non-linear and highly constrained, a situation which makes it difficult to find a useful Pareto set approximation. According to this, another way to
approximate the Pareto set is by means of Evolutionary Multi-objective Optimization (EMO), which is useful due to the
flexibility of Multi-objective Evolutionary Algorithms (MOEAs) in dealing with non-convex and highly constrained functions
[8,9]. Such algorithms have been successfully applied in several control engineering [17,41] and engineering design areas
[45]. For this reason, MOEAs will be used in this work and hereafter the optimization process will be performed by means
of EMO in the MOOD procedure.
One potentially desirable characteristic of a MOEA is the mechanism for preference handling in order to calculate perti-
nent solutions. That is, the capacity to obtain a set of interesting solutions from the DM’s point of view. Incorporating the
DM’s preferences into MOEAs has been suggested to improve the pertinency of solutions (see for example [7,11]).
The designer’s preferences could be defined in the MOOD procedure in an a priori, progressive, or a posteriori fashion [29].
• A priori: the DM states his or her preferences before the optimization. In such cases, the DM could be interested in using an algorithm that enables incorporating such preferences into the optimization procedure.
• Progressive: the optimization algorithm embeds the designer into the optimization process, to adjust or change his or her preferences on the fly.
• A posteriori: the DM analyzes the set of solutions obtained and, according to it, defines the preferences in order to select a preferable solution.
It is also possible to classify preference handling techniques into five classes [38] with respect to the question: what is important for the designer?
• Dominance is essential: it is important for the designer to calculate a set of solutions that dominate one or more reference
objective vectors.
• Objective against objective: it is important for the designer to identify which objectives have priority over others through
the EMO process.
• Objective value against objective value: it is important for the designer to identify when the value of a given objective has
priority over the value of others.
• Subset against subset: identifying a combination of objectives and values that are preferred over others.
Some popular techniques include ranking procedures [10,33,49], goal attainment, and fuzzy relations [7]. In any case,
some desirable characteristics of the preference handling mechanism have been stated in [14]:
• It should enable the DM to decide how many solutions are required in the Pareto front approximation, which will be
analyzed in the MCDM step.
In the case of controller tuning, the capability of assuring the dominance is essential feature will be highly appreciated. Several tuning techniques and procedures are available to control engineers. Therefore, and as noticed before, the tuning problem is not about finding a solution, but about finding a solution with the desirable trade-off. Because of this, a priori techniques are compatible with controller tuning within the MOOD context, given that an initial solution to work with is usually available. In this sense, dominance is essential, given that an initial solution is usually available and it is required to improve (at least) its overall performance. It will be assumed that such a reference case controller exists, which gives an idea of what is important to optimize and of which might be the desirable trade-off region for the designer. Therefore, this work will focus on a priori preference definitions where dominance is essential.
Table 1
Interpretations for the Iε indicator.
Iε(J∗p1, J∗p2) < 1 → Every J2(x2) ∈ J∗p2 is strictly dominated by at least one J1(x1) ∈ J∗p1.
Iε(J∗p1, J∗p2) = 1 ∧ Iε(J∗p2, J∗p1) = 1 → J∗p1 = J∗p2.
Iε(J∗p1, J∗p2) > 1 ∧ Iε(J∗p2, J∗p1) > 1 → Neither J∗p1 weakly dominates J∗p2 nor J∗p2 weakly dominates J∗p1.
A preference handling mechanism should be useful through the entire MOOD procedure. That is, regarding the MOP
definition, it should be easy to code such preferences; regarding the EMO, it should be helpful to find a balance between
diversity and pertinency mechanisms; finally, in the MCDM step, it should help to provide a useful Pareto front approx-
imation for the designers. According to this, three different a priori mechanisms will be analyzed: reference points [14],
indicator-based with preferences [52] and global physical programming [38].
In the event that the designer is only interested in prioritizing some of the objectives (objective against objective), mechanisms better suited to that purpose, such as the one proposed in [49], would be used; if the designer is willing to refine a given set of preferences on the fly (progressive scheme), proposals such as [5,6,22,43] would be more appropriate. Both cases are out of the scope of this work.
Definition 3. The weighted Euclidean distance of a solution x to a set of reference points R is defined as:

d(x, R) = min_{Rj ∈ R} sqrt( Σ_{i=1}^{m} wi ((Ji(x) − Rji) / (Ji^max − Ji^min))² )   (5)
It has been used as an auxiliary criterion within the NSGA-II algorithm, to decide which solutions will be pruned (discarded) and which ones will remain in the Pareto front approximation.
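As an illustration of Eq. (5), a minimal Python sketch follows; the function name, the weight vector w and the normalization bounds are illustrative assumptions, and R is given as a matrix whose rows are reference points.

```python
import numpy as np

def ref_point_distance(J, R, w, J_min, J_max):
    """Weighted, normalized Euclidean distance of Eq. (5) from an objective
    vector J to the closest reference point among the rows of R."""
    J, w = np.asarray(J, dtype=float), np.asarray(w, dtype=float)
    R = np.atleast_2d(np.asarray(R, dtype=float))
    span = np.asarray(J_max, dtype=float) - np.asarray(J_min, dtype=float)
    # weighted squared differences, normalized by the objective ranges
    d2 = np.sum(w * ((J - R) / span) ** 2, axis=1)
    return float(np.sqrt(d2.min()))

# Illustrative usage with two reference points in a bi-objective space
R = [[0.2, 0.8], [0.5, 0.5]]
print(ref_point_distance([0.3, 0.7], R, w=[1.0, 1.0], J_min=[0.0, 0.0], J_max=[1.0, 1.0]))
```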
Definition 4. The binary ε-indicator Iε(J∗p1, J∗p2) [53] for two Pareto front approximations J∗p1, J∗p2 is the smallest factor by which every objective vector in J∗p2 is weakly dominated, after scaling, by some objective vector in J∗p1; it can be computed as

Iε(J∗p1, J∗p2) = max_{J2(x2) ∈ J∗p2} min_{J1(x1) ∈ J∗p1} ε_{J1(x1), J2(x2)}

where

ε_{J1(x1), J2(x2)} = max_{1≤l≤m} J1(x1)_l / J2(x2)_l, ∀ J1(x1) ∈ J∗p1, J2(x2) ∈ J∗p2   (8)
The IBEA algorithm is an indicator based MOEA [52], which uses the Iε(J∗p1, J∗p2) indicator [53] to evolve the entire Pareto front approximation. In each iteration, the Pareto front approximation which optimizes the indicator is selected; that is, the population which describes a Pareto front approximation J∗p1 which is better than the previous one J∗p2 according to this index is selected. If a fixed reference point R is used instead of a previous approximation J∗p2, then the population will evolve towards the region which dominates such a reference point.
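A minimal NumPy sketch of the binary ε-indicator of Definition 4 (multiplicative form, minimization, strictly positive objective values assumed) could look as follows; the function name is illustrative.

```python
import numpy as np

def binary_eps_indicator(A, B):
    """Binary epsilon indicator I_eps(A, B) for two Pareto front
    approximations A, B given as arrays of shape (n_points, m)."""
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    # eps[i, j] = max_l A[i, l] / B[j, l], the factor of Eq. (8)
    eps = np.max(A[:, None, :] / B[None, :, :], axis=2)
    # for each point of B, the best (smallest) factor achievable by A;
    # the indicator is the worst case over B
    return float(np.max(np.min(eps, axis=0)))

A = [[1.0, 2.0], [2.0, 1.0]]
B = [[1.5, 2.5], [2.5, 1.5]]
print(binary_eps_indicator(A, B) < 1.0)  # True: A strictly dominates every point of B (cf. Table 1)
```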
Fig. 4. Physical programming (PP) notion. Five preference ranges have been defined: highly desirable (HD), desirable (D), tolerable (T), undesirable (U) and highly undesirable (HU).
Table 2
Preferences set for the benchmark setup. Five preference ranges have been defined: highly desirable (HD), desirable (D), tolerable (T), undesirable (U) and highly undesirable (HU). For each objective, the columns of the preference matrix delimit the HD, D, T, U and HU ranges.
Physical programming (PP) translates the DM's knowledge into classes2 with previously defined ranges3 according to a matrix of preferences. This matrix reveals the DM's wishes using physical units for each of the objectives in the MOP. From this point of view, the problem is moved to a different range where all the variables are independent of the original MOP (see Fig. 4).
For each objective and its range of preferences in a matrix of preferences P, a class function ηq(J(x))|P, q = [1, . . . , m], is built to translate each Jq(x) to a new range where all the objectives are equivalent to each other. A PP index Jpp(J(x)) = Σ_{q=1}^{m} ηq(J(x)) is then calculated.
In [38] the Jpp(J(x)) index is modified, and a global PP (GPP) index Jgpp(ϕ) is defined for a given objective vector ϕ. The main difference between the two is that the latter uses linear functions to build the class functions, while the former uses splines with several requirements to maintain convexity and continuity; the former better fits local optimization algorithms, while the latter better fits (global) stochastic and evolutionary techniques. Furthermore, this GPP index enables encoding K preference conditions, by defining a set of preference matrices K:

Jgpp(ϕ) = min_{Pk ∈ K, k = [1, . . . , K]} Σ_{q=1}^{m} ηq(ϕ)|Pk   (9)
Such an index is helpful as a pruning mechanism in MOEAs, in order to improve the pertinency of the solutions according to the predefined preferences. A typical preference matrix is shown in Table 2. Next, a preference handling mechanism will be selected, in order to state the preference driven MOOD procedure of this work.
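A minimal sketch of the GPP index of Eq. (9) is given below. The piecewise-linear class functions are only illustrated with placeholder per-range penalties; the exact construction of the class functions follows [38], and the bounds shown are hypothetical.

```python
import numpy as np

# Illustrative penalties assigned at the boundaries of the HD, D, T, U, HU
# ranges; the exact class function values follow [38], these are placeholders.
RANGE_PENALTY = np.array([0.0, 1.0, 10.0, 100.0, 1000.0, 10000.0])

def class_function(j, bounds):
    """Piecewise-linear class function eta_q for one objective.
    bounds = [J0, ..., J5] delimit the HD/D/T/U/HU preference ranges."""
    b = np.asarray(bounds, dtype=float)
    return float(np.interp(np.clip(j, b[0], b[-1]), b, RANGE_PENALTY))

def gpp_index(J, preference_sets):
    """GPP index of Eq. (9): sum of class functions over the objectives,
    minimized over the K preference matrices in preference_sets."""
    return min(sum(class_function(j, bounds) for j, bounds in zip(J, P))
               for P in preference_sets)

# Hypothetical preference matrix for a bi-objective problem
P_A = [[0.0, 1.0, 2.0, 4.0, 8.0, 16.0],   # bounds for J1
       [1.0, 1.2, 1.4, 1.6, 1.8, 2.0]]    # bounds for J2
print(gpp_index([1.5, 1.3], [P_A]))
```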
In this section, a preference driven MOOD procedure for controller tuning is proposed. Firstly, an analysis on the above
commented preference handling mechanisms is presented, in order to select one of them for the overall procedure. After-
wards, in order to guarantee the successful implementation of the preference driven MOOD procedure, its three fundamental
steps will be stated for controller tuning purposes: the MOP definition, the EMO process, and the MCDM stage.
2 The original method states 4 classes: 1S (smaller is better); 2S (larger is better); 3S (a value is better); and 4S (a range is better).
3 According to the original method: highly desirable (HD), desirable (D), tolerable (T), undesirable (U) and highly undesirable (HU).
Fig. 5. Comparison of the paths for successive aspiration levels (arrows) for the Jgpp(ϕ) index, d(x, R) and the Iε indicator.
In Fig. 5, a visual comparison of the landscapes generated by the preference handling techniques for a bi-objective problem is depicted. Five successive reference points have been used in each one of them.
On the one hand, the reference points approach can deal with different reference conditions and successive dominated aspiration levels. Nevertheless, a situation like the one depicted in Fig. 5 can arise through the evolution process: a point b is preferred over a point a since it is closer to the reference point R. This situation does not preserve the dominance is essential feature. On the other hand, the Iε indicator, while effective in dealing with multiple reference points, does not handle the successive dominated aspiration levels efficiently. That is, the improvement path for successive aspiration levels is always the same.
According to the above, the GPP index seems to be a practical option, since it can deal with multiple reference conditions and it provides a more flexible path for successive dominated aspiration levels. Furthermore, the linguistic labels used in GPP are helpful for the designer, because they not only provide a meaning for tolerability on design objectives, but also define successive hypervolumes of desirability. Let us define the following vectors and values with these five preference
ranges of Table 2 (see Fig. 6):
From a practical point of view, the tolerable vector T_Vector could be defined as the performance of an available tuning procedure, and the D_Vector, HD_Vector as the following aspiration levels. This makes the proposal fully compatible with the dominance is essential classification and it provides a path for the evolution process, thus enabling a further improvement of the pertinency of the approximated Pareto front according to the designer's preferences. In order to state the preference ranges of Table 2, it is fundamental to have an understanding of the objectives. If the DM has no idea of such values, it could be indicative of a perfunctory or precipitate selection of the design objectives; in that case, perhaps the DM should reconsider the stated design objectives.
For the above commented reasons, the GPP handling mechanism will be used as a pivotal tool in the overall procedure. Its flexibility and benefits compensate for the need to define an overall preference matrix, as shown in [38].
According to the basic control loop of Fig. 3, some common choices in controller tuning [41] for design objectives are:
• Noise sensitivity:

JMu(x) = || C(s)(I + P(s)C(s))^(−1) ||∞   (12)
where r(t), y(t), u(t) are the reference, measured variable and control action in time t.
For frequency domain objectives there are empirical relationships and limits, which are helpful to provide them with meaning. For example, it is known that practical limits for SISO processes exist (1.2 ≤ JMs(x) ≤ 2.0 and 1.0 ≤ JMp(x) ≤ 1.5 [1,31]). In the case of time performance indexes, it is possible to provide meaning to them by using a reference case and normalizing those indicators [28]. Other indexes, such as stabilizing time and overshoot, provide more meaning, but they are tied to the specific process at hand; i.e. they strongly depend on the features of the process to be controlled. In any case, it has been noticed that using both kinds of objectives could lead to more pertinent Pareto front approximations for the designer [19].
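As an illustration of how the time-domain indexes could be computed from a closed-loop simulation of the loop in Fig. 3, a minimal Python sketch follows; it assumes the usual definitions IAE = ∫|e|dt, ITAE = ∫t|e|dt and total variation of the control action TV = Σ|u(k+1) − u(k)|, and the signal arrays are assumed to come from a previously performed simulation.

```python
import numpy as np

def time_domain_indexes(t, r, y, u):
    """IAE, ITAE and TV computed from sampled closed-loop signals:
    t (time), r (reference), y (measured variable), u (control action)."""
    t = np.asarray(t, dtype=float)
    e = np.asarray(r, dtype=float) - np.asarray(y, dtype=float)
    u = np.asarray(u, dtype=float)
    iae = np.trapz(np.abs(e), t)            # integral of absolute error
    itae = np.trapz(t * np.abs(e), t)       # time-weighted absolute error
    tv = np.sum(np.abs(np.diff(u)))         # total variation of control action
    return iae, itae, tv

# As suggested in [28], the indexes of a candidate tuning can then be
# normalized as ratios with respect to a reference tuning.
```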
As has been noticed before, the MOOD procedure could be valuable for designers when it is difficult to find a desirable trade-off. In controller tuning, there are several well established tuning techniques for different control loop strategies. Therefore, any advantage that the MOOD procedure could provide to controller tuning is related to dominating a reference controller (or its surroundings) which is not fulfilling the desired specifications.
As explained before, the GPP index enables stating several preference conditions and can handle successive dominated aspiration levels. The preference ranges of Fig. 4 are defined for the sake of flexibility (as in [38]) to evolve the population towards a pertinent Pareto front. According to this, the typical preference matrix of Table 2 is defined.
The GPP index can be merged with pruning techniques to search actively for the pertinent Pareto front approximation. Furthermore, it can be used to differentiate design objectives for the optimization process from design objectives for the decision making. That is, perhaps the DM is interested in approximating a Pareto front for the most meaningful design objectives (objectives for decision making), but the designer would also like to take other design objectives into account in the optimization stage. For example, in Fig. 7, the DM is interested in performing a decision with design objectives J1(x) and J2(x), but is also interested in taking J3(x) into account in the optimization.
Fig. 7. Difference between design objectives for optimization and for decision making. The GPP index as a pruning mechanism to keep one solution in each spherical sector is an intermediate solution, where a reduced Pareto front is approximated, but taking into account more design objectives.
With an appropriate preference matrix, the GPP index will prefer the other marked solution over the • solution. Even if it appears to be a sub-optimal Pareto solution in the Pareto front approximation for J1(x) and J2(x), it is Pareto optimal for the J1(x), J2(x) and J3(x) objective space. If, for example, a pruning mechanism which keeps one solution for each spherical sector is used, when performing the approximation with two design objectives the algorithm will keep only the • solution; if it is used for three design objectives, it will keep both solutions, but this could potentially increase (unnecessarily) the size of the Pareto front approximation. The GPP index as a pruning mechanism to keep one solution in each spherical sector is an alternative solution in-between, where a Pareto front is approximated in a reduced objective subspace, but taking into account all design objectives.
The spMODE-II algorithm [38]4 will be used because this is an implementation using the GPP index to improve the
pertinency of solutions in the approximated Pareto front. It is a Differential Evolution (DE) based MOEA, which uses a
spherical grid to maintain diversity in the approximated Pareto front. For each spherical sector, usually a norm is used in
order to keep just one design alternative (original spMODE version [37]); nevertheless, this norm is substituted by the GPP
index in the new version of the algorithm.
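A simplified sketch of the pruning idea (not the exact spMODE-II spherical grid) is shown below: objective vectors are normalized, assigned to angular sectors and, per sector, only the solution with the lowest GPP index is retained. The sector construction and all names are illustrative assumptions.

```python
import numpy as np

def spherical_prune(J, gpp, n_sectors=10):
    """Keep one solution per angular sector, choosing the lowest GPP index.

    J   : array (n_points, m) of objective vectors (minimization)
    gpp : array (n_points,) with the GPP index of each solution
    Returns the indices of the retained solutions."""
    J, gpp = np.asarray(J, dtype=float), np.asarray(gpp, dtype=float)
    Jn = (J - J.min(axis=0)) / (np.ptp(J, axis=0) + 1e-12)   # normalize to [0, 1]
    radii = np.linalg.norm(Jn, axis=1) + 1e-12
    # crude angular coordinates of each normalized vector (positive orthant)
    angles = np.arccos(np.clip(Jn[:, :-1] / radii[:, None], -1.0, 1.0))
    sectors = np.minimum((angles / (np.pi / 2) * n_sectors).astype(int), n_sectors - 1)
    keep = {}
    for i, key in enumerate(map(tuple, sectors)):
        if key not in keep or gpp[i] < gpp[keep[key]]:
            keep[key] = i
    return np.array(sorted(keep.values()))
```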
The rule of thumb of ten times the number of objectives for the quantity of solutions required in the approximated Pareto front, based on [24], is adopted. Also, a clear distinction between design objectives and design objectives for decision making is stated, as commented earlier. That is, it is stated in which subspace the DM would like to perform the decision making analysis, by identifying objectives that should be minimized and minded in the search process but that are not meant to be used for decision making.
Different alternatives could be used by practitioners [4,13,20,48], nevertheless Level Diagrams (LD) will be used due
to their capabilities to depict m-dimensional Pareto fronts [3] and for design concepts comparison [35]. The taxonomy to
identify the visualizations is adopted from [35]5 .
The process under consideration is the benchmark for PID control 2012 described by [28]. It proposes a boiler control
problem [16,27] based on the work of [32]. This work improves the model provided in [2] by adding a non-linear combustion
equation with a first order lag to model the excess oxygen in the stack and the stoichiometric air-to-fuel ratio for complete
combustion. The non-linear explicit model is described by the following equations:
ẋ1(t) = c11 x4(t) x1(t)^(9/8) + c12 u1(t − τ1) − c13 u3(t − τ3)   (16)
4 Available at www.mathworks.com/matlabcentral/fileexchange/authors/289050.
5 The taxonomy is LD/front/measure. For example, LD/J∗p/||Ĵ(x)||2 means that a visual representation of the Pareto front approximation J∗p with the 2-norm in LD is presented.
y1 (t ) = c51 x1 (t − τ4 ) + n1 (t ) (20)
y2 (t ) = c61 x1 (t − τ5 ) + n2 (t ) (21)
The reduced single input, single output (SISO) version of the benchmark stated in [28] is used in this example. This
reduced version comprises the steam pressure control by means of manipulating fuel flow. The identified nominal model6
G(s) to be used in the optimization process is:
0.3934
G (s ) = e−3.42s (24)
1 + 45.6794s
In order to control this process, a proportional-integral-derivative controller with a derivative filter (PIDn) is proposed. Its structure is as follows:

C(s) = kp (1 + 1/(Ti s) + Td s / ((Td/N) s + 1))   (25)
where kp is the proportional gain; Ti , Td are the integral and derivative time values; and N the derivative filter. Aims of this
example are:
• Defining a controller tuning MOP statement with multiple preference conditions.
• Evaluating and comparing the performance of the preference handling approach, in order to validate its selection.
The resulting MOP is stated as:

min J(x) = [J1(x), J2(x), J3(x)]   (26)

subject to:

0 ≤ xi ≤ 1, i = [1, 2, 3, 4]   (27)

The design objectives stated are:
J1 (x): IAE performance for a unitary step reference change (Eq. (13)).
J2 (x): Maximum value of sensitivity function Ms for control loop (Eq. (10)).
J3 (x): Maximum value of Mu for noise rejection in the control loop (Eq. (12)).
Two preference matrices for the stated design objectives are defined (Table 3). Preference set A promotes performance, while preference set B promotes robustness. The design objectives used to perform the MCDM stage are J1(x), J2(x). That means that 20 solutions are required and the objective space is partitioned in just two dimensions. It is important to notice that all three objectives are used in the optimization process in order to calculate the GPP index, since they are included in the preference matrix.
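For illustration, a Python sketch of the frequency-domain objectives for this setup follows, evaluating the nominal model of Eq. (24) and the PIDn controller of Eq. (25) on a frequency grid; the function name, the grid and the example parameter values are assumptions, and the IAE objective J1(x) would additionally require a time-domain simulation of the delayed loop.

```python
import numpy as np

def frequency_objectives(kp, Ti, Td, N, w=None):
    """Frequency-domain objectives J2(x) = Ms and J3(x) = Mu for the nominal
    model of Eq. (24) controlled by the PIDn controller of Eq. (25)."""
    if w is None:
        w = np.logspace(-3, 2, 2000)             # frequency grid [rad/s]
    jw = 1j * w
    P = 0.3934 / (1 + 45.6794 * jw) * np.exp(-3.42 * jw)         # Eq. (24)
    C = kp * (1 + 1 / (Ti * jw) + Td * jw / ((Td / N) * jw + 1))  # Eq. (25)
    S = 1 / (1 + P * C)                  # sensitivity function
    Ms = np.max(np.abs(S))               # J2(x): max of sensitivity, Eq. (10)
    Mu = np.max(np.abs(C * S))           # J3(x): noise sensitivity, Eq. (12)
    return Ms, Mu

# Illustrative evaluation of one (arbitrary) tuning proposal
print(frequency_objectives(kp=2.0, Ti=40.0, Td=5.0, N=10.0))
```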
6 This model was obtained from a standard step response experiment using the identification toolbox from Matlab©.
Table 3
Preferences for the univariable benchmark setup. Five preference ranges have been defined: highly desirable (HD), desirable (D), tolerable (T), undesirable (U) and highly undesirable (HU). For each objective Ji, the values Ji0, Ji1, Ji2, Ji3, Ji4, Ji5 delimit the HD, D, T, U and HU ranges.
Table 4
Parameters used for IB-MODE, RP-spMODE and spMODE-II in univariable
benchmark setup. Further details in [34].
• An IBEA [47] using a basic DE algorithm, hereafter denoted as IB-MODE. This strategy is selected because it is a state
of the art technique for handling simultaneous preference conditions. It uses the binary epsilon indicator previously
explained in Section 2.4.2.
• A spMODE algorithm using the reference point based multi-objective optimization technique described in [14] (and com-
mented in Section 2.4.1), hereafter denoted as RP-spMODE. This technique is selected because it is consistent with the
spMODE algorithm structure, and it could be used as a customized norm in its pruning mechanism.
The parameters used in each case are depicted in Table 4. Additionally, the following control experiment is included:
• A pure stochastic sampling approach, using the same function evaluations budget. This is used as a base test to evaluate
the usability of the approaches above.
The aim of this analysis is to evaluate the capabilities of such approaches to approximate a Pareto front in the T_HypV,
D_HypV and HD_HypV in order to validate the usefulness of the spMODE-II7 algorithm for these applications. A standard
CPU8 is used to calculate the Pareto front approximations for this benchmark.
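The T_HypV, D_HypV and HD_HypV measures are understood here as the hypervolume dominated by the Pareto front approximation with respect to the tolerable, desirable and highly desirable vectors, respectively; a minimal bi-objective sketch of such a computation (an assumption about the exact measure used, and only valid for two objectives) could be:

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Hypervolume dominated by `front` (minimization, 2 objectives) and
    bounded by the reference vector `ref` (e.g. the tolerable vector)."""
    F = np.asarray(front, dtype=float)
    F = F[np.all(F <= ref, axis=1)]          # only points dominating ref contribute
    if F.size == 0:
        return 0.0
    F = F[np.argsort(F[:, 0])]               # sweep along the first objective
    hv, prev_j2 = 0.0, ref[1]
    for j1, j2 in F:
        if j2 < prev_j2:                      # non-dominated step of the front
            hv += (ref[0] - j1) * (prev_j2 - j2)
            prev_j2 = j2
    return hv

front = [[1.0, 4.0], [2.0, 2.0], [3.0, 1.0]]
print(hypervolume_2d(front, ref=[4.0, 5.0]))  # 8.0 for this illustrative front
```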
In Fig. 8, distribution plots of the attained HD_HypV, D_HypV are depicted; in Table 5 numerical values for the best,
worst, median, mean and standard deviation of the attained hypervolumes are shown. Statistical significance has been vali-
dated using the Wilcoxon test at 95% with Bonferroni correction [15].
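The statistical comparison could be reproduced along the following lines; since the exact test variant is not stated here, the sketch assumes the rank-sum (Mann–Whitney) form of the Wilcoxon test for independent runs, with a Bonferroni-corrected significance level, and the dictionary of samples is hypothetical.

```python
import numpy as np
from scipy import stats

def compare_hypervolumes(samples, alpha=0.05):
    """Pairwise rank-based comparison of hypervolume samples
    (one 1-D array of runs per approach) with Bonferroni correction."""
    names = list(samples)
    n_tests = len(names) * (len(names) - 1) // 2
    alpha_corr = alpha / n_tests                       # Bonferroni correction
    for i in range(len(names)):
        for k in range(i + 1, len(names)):
            a, b = samples[names[i]], samples[names[k]]
            _, p = stats.mannwhitneyu(a, b, alternative="two-sided")
            verdict = "significant" if p < alpha_corr else "not significant"
            print(f"{names[i]} vs {names[k]}: p = {p:.4f} ({verdict})")

# Hypothetical example with random data for two approaches over 51 runs
rng = np.random.default_rng(0)
compare_hypervolumes({"spMODE-II": rng.normal(7.0, 1.5, 51),
                      "RP-spMODE": rng.normal(1.5, 2.0, 51)})
```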
With the sampling approach it can be noticed that the HD_HypV for preference set B is the most difficult to achieve. Also, according to the provided data, the GPP approach is able to attain the HD_HypV better for both preference sets simultaneously. This, therefore, justifies its use within a preference driven MOOD procedure for controller tuning.
7 Hereafter, the GPP approach for EMO and spMODE-II will be used interchangeably.
8 DELL T1500 computer, Windows 7 system, Intel Core i7 processor, 2.93 GHz, with 8.00 GB RAM.
Fig. 8. Distribution plots for the hypervolume attained by the different approaches used in the univariable benchmark setup.
Table 5
Hypervolume achieved in the univariable benchmark setup (51 runs). Statistical significance has been evaluated according to the Wilcoxon and it is indi-
cated for each case.
HD_HypV Best 5.0280 10.0660 3.3333 4.6682 9.2954 8.8309 6.9039 2.4625
Median 4.0845†,‡ 0.0000,§ 1.1781,§ 3.4513†, ‡ 7.9294‡, § 6.6029‡, § 0.4179, † 0.0000, †
Worst 1.6416 0.0000 0.0000 2.2763 1.9594 0.4360 0.0000 0.0000
Mean 3.9631 0.7838 1.1847 3.4987 7.4335 6.2478 1.6839 0.0744
std 0.7621 2.0332 0.9856 0.6074 1.5662 1.7796 2.0977 0.3572
D_HypV Best 1.01e + 03 1.10e + 03 0.94e + 03 0.89e + 03 0.45e + 05 0.52e + 03 0.43e + 03 0.40e + 03
Median 0.87e + 03§ 0.89e + 03§ 0.75e + 03§ 0.86e + 03, †, ‡ 0.40e + 05‡, § 0.39e + 03‡, § 0.37e + 03, † 0.36e + 03, †
Worst 0.77e + 03 0.64e + 03 0.69e + 03 0.83e + 03 0.30e + 05 0.30e + 03 0.26e + 03 0.30e + 03
Mean 0.90e + 03 0.85e + 03 0.75e + 03 0.86e + 03 0.39e + 05 0.39e + 03 0.36e + 03 0.35e + 03
std 0.08e + 03 0.09e + 03 0.05 + e03 0.01e + 03 0.04e + 05 0.02e + 03 0.03e + 03 0.02e + 03
For the second instance the reduced two inputs, two outputs (TITO) version of the benchmark stated in [28] is used:
[Y1(s); Y3(s)] = [P11(s), P13(s); P31(s), P33(s)] [U1(s); U3(s)] + [P1d(s); P3d(s)] D(s)   (29)
where the inputs are fuel flow U1 (s) [%], air flow U2 (s) [%] and water flow U3 (s) [%], while the outputs are steam pressure
Y1 (s) [%], oxygen level Y2 (s) [%] and water level Y3 (s) [%]. D(s) is a measured load disturbance. This is a verified model, useful
to propose, evaluate and compare different kinds of tuning/control techniques [18,30,42,44,46].
For the sake of simplicity a proportional-integral (PI) controller for a multiple input, multiple output (MIMO) instance
will be used. According to [41], while several works focus on PI-like controller tuning using EMO, few of them deal with
MIMO instances. Furthermore, few of them use some mechanism for pertinency improvement in many-objective optimiza-
tion statements for these problems. Therefore, it is justified to test the MOOD procedure with the proposals contained in
this paper. In all instances, it is assumed that commonly used tuning techniques do not fulfill all the designer's requirements and that, therefore, the MOOD procedure is employed. The use of EAs and MOEAs in PI controller tuning is still an ongoing research field [36,50,51]. The proposed multivariable PI controller structure is:

C(s) = [ kp1 (1 + 1/(Ti1 s)),  0 ;  0,  kp2 (1 + 1/(Ti2 s)) ]   (30)
where kp1 , kp2 are the proportional gains, and Ti1 , Ti2 are the integral time values.
Aims of this example are:
• Providing a many-objective optimization statement for MIMO processes under quasi-real conditions.
• Validating the overall preference driven MOOD procedure for controller tuning under quasi-real conditions.
Quasi-real conditions make reference to the following steps:
(Figure: experimental data and identified model responses — steam pressure [%] and drum water level [%] versus time [secs.].)
subject to:
0 ≤ xi ≤ 1, i = [1, 2, 3, 4] (34)
where x are the proportional gains and integral time values of the PI controllers. In order to bound the controller parameters, the stochastic sampling described in [40] for stabilizing PI controllers is implemented, and therefore the bound constraints are xi ∈ [0, 1] for i = 1, . . . , 4; since such a coding does not imply overall stability for a MIMO system, constraint G1(x) on the eigenvalues λ of the overall system is included. The design objectives stated are:
J1(x): Stabilizing time for Y1(s) in the presence of a step load disturbance D(s).
J2(x): Stabilizing time for Y2(s) in the presence of a step load disturbance D(s).
9 Nominal linear models have been identified using simple step tests with the Matlab© identification toolbox.
Table 6
Preferences set for the multivariable benchmark setup. Five preference ranges have been defined: highly desirable (HD), desirable (D), tolerable (T), undesirable (U) and highly undesirable (HU). For each objective Ji, the values Ji0, Ji1, Ji2, Ji3, Ji4, Ji5 delimit the HD, D, T, U and HU ranges.
Table 7
Parameters used for IB-MODE, RP-spMODE and spMODE-II in the multivariable
benchmark setup. Further details in [34].
J3(x): Biggest log modulus for overall robustness [23] (a numerical sketch of this criterion is given after this list). The criterion is defined as:

Lcm = 20 log | W(s) / (1 + W(s)) | ≤ Lcm^max   (36)

where W(s) = −1 + det(I + P(s)C(s)). This criterion proposes a de-tuning of the proportional gains of each controller, in order to fulfill a maximum value of the closed loop log modulus Lcm^max.
J4(x): Maximum value of the sensitivity function Ms for loop 1 (Eq. (10)).
J5(x): Maximum value of the sensitivity function Ms for loop 2 (Eq. (10)).
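A minimal numerical sketch of the biggest log modulus criterion of Eq. (36) is given below; it assumes that the frequency responses of P(s) and C(s) have already been evaluated on a frequency grid and stored as arrays of 2x2 complex matrices, and the function name is illustrative.

```python
import numpy as np

def biggest_log_modulus(P_freq, C_freq):
    """Biggest log modulus Lcm of Eq. (36).

    P_freq, C_freq: arrays of shape (n_freq, 2, 2) with the frequency
    responses of the process matrix P(jw) and the controller C(jw)."""
    I = np.eye(2)
    # W(jw) = -1 + det(I + P(jw) C(jw)) at every grid frequency
    W = np.array([-1.0 + np.linalg.det(I + P_freq[k] @ C_freq[k])
                  for k in range(len(P_freq))])
    Lcm = 20.0 * np.log10(np.abs(W / (1.0 + W)))
    return float(np.max(Lcm))

# A tuning is accepted, for this criterion, when the returned value does not
# exceed the chosen maximum closed-loop log modulus Lcm_max.
```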
The preference matrix for the design objectives stated is depicted in Table 6. Design objectives to perform a MCDM
stage are J1 (x), J2 (x), J3 (x). That means that 30 solutions are required and the objective space is just partitioned in three
dimensions. It is important to notice that all five objectives are used in the optimization process in order to calculate the
GPP index, since they are included in the preference matrix.
The parameters used in each case are depicted in Table 7. Additionally, the following control experiment is included:
• A pure stochastic sampling approach, using the same function evaluations budget. This is used as a base test to evaluate
the usability of the approaches above. This is because it has been remarked that sampling procedures could be more
effective in many-objective optimization statements [10].
The aim of this analysis is to evaluate the capabilities of such approaches to approximate a Pareto front in the T_HypV, D_HypV and HD_HypV, in order to validate the usefulness of the spMODE-II algorithm for these applications. The same standard CPU is used to calculate the Pareto front approximations for this benchmark.
In Fig. 10, distribution plots of the attained T_HypV are depicted; in Table 8 numerical values for the best, worst, me-
dian, mean and standard deviation of the attained T_HypV are shown. Statistical significance has been validated using the
Wilcoxon test at 95% with Bonferroni correction [15]. As can be noticed, the approach using the GPP approximates the T_HypV better than the reference point, indicator based and stochastic sampling approaches.
Fig. 10. Distribution plots for the tolerable hypervolume attained by the different approaches used in the benchmark setup.
Table 8
Hypervolume achieved in 51 runs. Statistical significance has been evaluated according
to the Wilcoxon.
In Fig. 11, attainment surfaces at 50% are compared using Level Diagrams, following the guidelines of [35] for design concept comparison. In such a visualization, any surface of an approximated Pareto front J∗p1 above 1 is dominated by a surface below 1 of the other Pareto front approximation J∗p2, and vice versa. In Fig. 11a, the GPP approach is compared with the IB-MODE; in this figure it is possible to appreciate that the main difference between the approaches is in the covering of J5, where each one dominates a portion of the other. In the case of Fig. 11b, the GPP approach consistently dominates the RP-spMODE.
Fig. 11. Attainment surfaces comparison using Level Diagrams for the different approaches used in the benchmark setup.
This design alternative has been implemented in the real process, and the performance index defined by the benchmark, Ibenchmark(Ce, Cr, ω), is shown in Table 9. Such an index is an aggregate objective function which combines ratios of the IAE (13), ITAE (14) and TV (15), the latter also known as IADU. In order to evaluate a controller Ce, the indexes are referred to a base case controller Cr, and a weighting factor ω for the ratios of the control action values is included. Further details are available in [28].
In the original benchmark, two PI controllers [k p1 , Ti1 , k p2 , Ti2 ] = [2.5, 50, 1.25, 50] are used as Cr , and the weighting
factor is proposed as ω = 0.25. Two different tests are proposed:
Test 1: performance when the system has to attend to a time-varying load level.
Test 2: performance when the system has to attend to a sudden change in the steam pressure set-point.
Firstly, the selected controller without filtering of the measured signal is employed (Figs. 14 and 16). Notice that the performance indexes related to the ratios of IAE and ITAE are better than those of the reference controller. Nevertheless, the performance
Fig. 12. Pareto front approximation of the benchmark setup. The design alternative with the lowest GPP index and the selected design alternative are depicted.
Fig. 13. Simulation performance of the approximated Pareto front from Fig. 12.
Table 9
Performance achieved by the selected design alternative (no filter) for the benchmark setup. Ratios of IAE (RIAE), ITAE (RITAE) and IADU (RIADU) with respect to the PI reference case are depicted.
RIAE1 RIAE2 RIAE3 RITAE1 RITAE3 RIADU1 RIADU2 Ibenchmark(Ce, Cr, 0.25)
indicator Ibenchmark(Ce, Cr, ω) is worse in the case of Test 1 and has almost the same performance in Test 2 (see Table 9). This is due to the weighting factor used for the control action; the selected design alternative is more sensitive to noise and, therefore, the IADU ratio is larger.
(Figure panels: steam pressure and setpoint (%) and drum water level and setpoint (%) versus time (s); reference case vs. evaluated case.)
Fig. 14. Performance for Test 1 of the PI controller [kp1, Ti1, kp2, Ti2] = [1.533, 29.549, 5.315, 125.778] (without filter) and its comparison with the reference case [kp1, Ti1, kp2, Ti2] = [2.5, 50, 1.25, 50] in the benchmark setup.
(Figure panels: steam pressure and setpoint (%) and drum water level and setpoint (%) versus time (s); reference case vs. evaluated case.)
Fig. 15. Performance for Test 1 of the PI controller [kp1, Ti1, kp2, Ti2] = [1.533, 29.549, 5.315, 125.778] (τf = 10) and its comparison with the reference case [kp1, Ti1, kp2, Ti2] = [2.5, 50, 1.25, 50] in the benchmark setup.
(Figure panels: closed-loop responses versus time (s); reference case vs. evaluated case.)
Fig. 16. Performance for Test 2 of the PI controller [kp1, Ti1, kp2, Ti2] = [1.533, 29.549, 5.315, 125.778] (without filter) and its comparison with the reference case [kp1, Ti1, kp2, Ti2] = [2.5, 50, 1.25, 50] in the benchmark setup.
(Figure panels: steam pressure and setpoint (%) and drum water level and setpoint (%) versus time (s); reference case vs. evaluated case.)
Fig. 17. Performance for Test 2 of the PI controller [kp1, Ti1, kp2, Ti2] = [1.533, 29.549, 5.315, 125.778] (τf = 10) and its comparison with the reference case [kp1, Ti1, kp2, Ti2] = [2.5, 50, 1.25, 50] in the benchmark setup.
Table 10
Performance achieved by the selected design alternative (τf = 10) for the benchmark setup. Ratios of IAE (RIAE), ITAE (RITAE) and IADU (RIADU) with respect to the PI reference case are depicted.
RIAE1 RIAE2 RIAE3 RITAE1 RITAE3 RIADU1 RIADU2 Ibenchmark(Ce, Cr, 0.25)
Using a first order filter with τf = 10 for the measured signal (such a filter still guarantees overall stability in the control loop for the nominal process of Eq. (31)), the performance related to the control action is improved and, as a consequence, so is the overall index Ibenchmark(Ce, Cr, ω) (Figs. 15 and 17 and Table 10). Therefore, the proposed PI controllers have a performance (regarding this metric) which is better than that of the reference controllers.
In summary, the methodology is effective, delivering a controller that fulfills all the requirements with a better performance than the reference controller. A comparison under equal conditions with other control solutions dealing with the boiler benchmark is not possible. In [18] a feedforward mechanism is used that is not included in this proposal; in [44] a data driven approach is used (i.e. an on-the-fly tuning technique); in [30] a 2x2 PI controller matrix is proposed; and finally, the results reported in [46] and [42] are not evaluated under the benchmark guidelines.
5. Conclusion
A MOOD procedure for controller tuning was presented in this paper. Such an approach uses preferences in order to focus the evolutionary search towards the region of interest of the Pareto front. The MOOD procedure is a powerful tool to analyze objective exchanges and select a preferable solution. In particular, it is a valuable tool when (1) it is difficult to find a controller with a reasonable balance among design objectives; and (2) it is worthwhile analyzing the trade-off among controllers (design alternatives).
Three key points for its success are the following:
1. Approximating a compact set of solutions. Ten times the number of objectives seems to be a reasonable size for a Pareto front approximation.
2. Beyond approximating a compact set, approximating a compact and pertinent set. For this, it is fundamental to have an understanding of the objectives (or at least of their tolerable values) in order to define a preference range. If the DM has no idea of such values, that could be indicative of a perfunctory or precipitate selection of the design objectives. Therefore, perhaps the DM should reconsider the stated design objectives.
3. Deciding where to perform the DM stage. In addition to design objectives and constraints, a third category is included
in this work: objectives that should be minimized and taken into account during the search process, but which are not
meant to be used for decision making.
• A strategy to design feedforward compensators should be included in the overall MOOD procedure for controller tuning.
• More complex control tuning strategies should be evaluated and compared.
• It would be worthwhile to evaluate different optimization instances for multivariable controller tuning, such as multidisciplinary or reliability-based statements.
• While the LD visualization is a powerful tool to analyze an m-dimensional Pareto front, it was also required to incorporate information from the time response of the approximated (and pertinent) Pareto front. Therefore, building visualization approaches for the specific application of controller tuning seems to be a promising area for development.
Acknowledgment
This work was partially supported by projects TIN2011-28082, ENE2011-25900 from the Spanish Ministry of Economy
and Competitiveness. First author gratefully acknowledges the partial support provided by the postdoctoral fellowship BJT-
304804/2014-2 from the National Council of Scientific and Technologic Development of Brazil (CNPq) for the development
of this work.
References
[1] K. Åström, H. Panagopoulos, T. Hägglund, Design of PI controllers based on non-convex optimization, Automatica 34 (5) (1998) 585–601.
[2] R. Bell, K.J. Åström, Dynamic models for boiler-turbine alternator units: data logs and parameter estimation for a 160 MW unit, Technical Report ISRN
LUTFD2/TFRT–3192–SE, Department of Automatic Control, Lund University, Sweden, 1987.
[3] X. Blasco, J. Herrero, J. Sanchis, M. Martínez, A new graphical visualization of n-dimensional Pareto front for decision-making in multiobjective opti-
mization, Inf. Sci. 178 (20) (2008) 3908–3924.
[4] R. Cela, M. Bollaín, New cluster mapping tools for the graphical assessment of non-dominated solutions in multi-objective optimization, Chemom.
Intell. Lab. Syst. 114 (0) (2012) 72–86.
[5] M. Chica, Ó. Cordón, S. Damas, J. Bautista, Interactive preferences in multiobjective ant colony optimization for assembly line balancing, Soft Comput.
19 (10) (2015) 2891–2903.
[6] T. Chugh, K. Sindhya, J. Hakanen, K. Miettinen, An interactive simple indicator-based evolutionary algorithm (i-sibea) for multiobjective optimiza-
tion problems, in: A. Gaspar-Cunhah, C. Henggeler Antunes, C.C. Coello (Eds.), Evolutionary Multi-Criterion Optimization, Lecture Notes in Computer
Science, vol. 9018, Springer International Publishing, 2015, pp. 277–291.
[7] C. Coello, Handling preferences in evolutionary multiobjective optimization: a survey, in: Proceedings of the 2000 Congress on Evolutionary Computation, vol. 1, 2000, pp. 30–37.
[8] C.A.C. Coello, G.B. Lamont, Applications of multi-objective evolutionary algorithms, advances in natural computation, vol. 1, World scientific publishing,
2004.
[9] C.A.C. Coello, D.V. Veldhuizen, G. Lamont, Evolutionary algorithms for solving multi-objective problems, Kluwer Academic press, 2002.
[10] D.W. Corne, J.D. Knowles, Techniques for highly multiobjective optimization: some nondominated points are better than others, Proceedings of the
Ninth Annual Conference on Genetic and Evolutionary Computation GECCO ’07, ACM, New York, NY, USA, 2007, pp. 773–780.
[11] D. Cvetkovic, I. Parmee, Preferences and their application in evolutionary multiobjective optimization, IEEE Trans. Evol. Comput. 6 (1) (2002) 42–57.
[12] I. Das, J. Dennis, Normal-boundary intersection: a new method for generating the Pareto surface in non-linear multicriteria optimization problems,
SIAM J. Optim. 8 (1998) 631–657.
[13] A.R. de Freitas, P.J. Fleming, F.G. Guimarães, Aggregation trees for visualization and dimension reduction in many-objective optimization, Inf. Sci. 298
(2015) 288–314.
[14] K. Deb, J. Sundar, N. Udaya Bhaskara Rao, S. Chaudhuri, Reference point based multi-objective optimization using evolutionary algorithms, Int. J.
Comput. Intell. Res. 2 (3) (2006) 273–286.
[15] J. Derrac, S. García, D. Molina, F. Herrera, A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary
and swarm intelligence algorithms, Swarm Evol. Comput. 1 (1) (2011) 3–18.
[16] I. Fernández, C. Rodríguez, J. Guzman, M. Berenguel, Decoupled predictive control algorithm with disturbance compensation for the benchmark of
2009–2010 (in spanish), Rev. Iberoam. Autom. Inf. Ind. 8 (2) (2011) 112–121.
[17] P. Fleming, R. Purshouse, Evolutionary algorithms in control systems engineering: a survey, Control Eng. Pract. 10 (2002) 1223–1241.
[18] J. Garrido, F. Márquez, F. Morilla, Multivariable PID control by inverted decoupling: application to the benchmark PID, in: Proceedings of the IFAC
Conference on Advances in PID Control (PID’12), 2012, 2012.
[19] L. Huang, N. Wang, J.-H. Zhao, Multiobjective optimization for controller design, Acta Autom. Sin. 34 (4) (2008) 472–477.
[20] A. Inselberg, The plane with parallel coordinates, Vis. Comput. 1 (1985) 69–91.
[21] H. Ishibuchi, N. Tsukamoto, Y. Nojima, Evolutionary many-objective optimization: a short review, Evolutionary Computation, 2008. CEC 2008. (IEEE
World Congress on Computational Intelligence). IEEE Congress on, 2008, pp. 2419–2426.
[22] I. Kaliszewski, J. Miroforidis, D. Podkopaev, Interactive multiple criteria decision making based on preference driven evolutionary multiobjective opti-
mization with controllable accuracy, Eur. J. Oper. Res. 216 (1) (2012) 188–199.
[23] W.L. Luyben, Simple method for tuning SISO controllers in multivariable systems, Ind. Eng. Chem. Process Des. 25 (1986) 654–660.
[24] C.A. Mattson, A. Messac, Pareto frontier based concept selection under uncertainty, with visualization, Optim. Eng. 6 (2005) 85–115.
[25] A. Messac, A. Ismail-Yahaya, C. Mattson, The normalized normal constraint method for generating the Pareto frontier, Struct. Multidiscip. Optim. 25
(2003) 86–98.
[26] K.M. Miettinen, Nonlinear multiobjective optimization, Kluwer Academic Publishers, 1998.
[27] F. Morilla, Benchmark 2009-10 grupo temático de ingeniería de control de CEA-IFAC: Control de una caldera (in Spanish), February 2010. Available at www.cea-ifac.es/w3grupos/ingcontrol.
[28] F. Morilla, Benchmark for PID control based on the boiler control problem, internal report, UNED, Spain, 2012. Available at http://www.dia.uned.es/~fmorilla/benchmark09_10/.
[29] M. Munro, B. Aouni, Group decision makers’ preferences modeling within the goal programming model: an overview and a typology, J. Multi-Criteria
Decis. Anal. 19 (3–4) (2012) 169–184.
[30] Y. Ochi, PID controller design for MIMO systems by applying balanced truncation to integral-type optimal servomechanism, in: Proceedings of the
IFAC Conference on Advances in PID Control (PID’12), 2012.
[31] H. Panagopoulos, K. Åström, T. Hägglund, Design of PID controllers based on constrained optimization, Control Theory Appl. IEE Proc. 149 (1) (2002)
32–40.
[32] G. Pellegrinetti, J. Bentsman, Nonlinear control oriented boiler modeling-a benchmark problem for controller design, IEEE Trans. Control Syst. Technol.
4 (1) (1996) 57–64.
[33] B. Qu, P. Suganthan, Multi-objective evolutionary algorithms based on the summation of normalized objectives and diversified selection, Inf. Sci. 180
(17) (2010) 3170–3181. Including special section on virtual agent and organization modeling: theory and applications.
[34] G. Reynoso-Meza, Controller tuning by means of evolutionary multiobjective optimization: a holistic multiobjective optimization design procedure,
Ph.d. thesis. Universitat Politècnica de València, 2014.
[35] G. Reynoso-Meza, X. Blasco, J. Sanchis, J.M. Herrero, Comparison of design concepts in multi-criteria decision-making using level diagrams, Inf. Sci.
221 (2013) 124–141.
[36] G. Reynoso-Meza, X. Blasco, J. Sanchis, M. Martínez, Evolutionary algorithms for PID controller tuning: current trends and perspectives (in Spanish),
Rev. Iberoam. Autom. Inf. Ind. 10 (3) (2013) 251–268.
[37] G. Reynoso-Meza, J. Sanchis, X. Blasco, An adaptive parameter for the differential evolution algorithm, in: J. Cabestany, F. Sandoval, A. Prieto, J.M. Cor-
chado (Eds.), Bio-inspired systems: computational and ambient intelligence, vol. LNCS 5517, Springer-Verlag, 2009, pp. 375–382.
[38] G. Reynoso-Meza, J. Sanchis, X. Blasco, S. García-Nieto, Physical programming for preference driven evolutionary multi-objective optimization, Appl.
Soft Comput. 24 (2014) 341–362.
[39] G. Reynoso-Meza, J. Sanchis, X. Blasco, J.M. Herrero, Multiobjective evolutionary algorithms for multivariable PI controller tuning, Expert Syst. Appl.
39 (2012) 7895–7907.
[40] G. Reynoso-Meza, J. Sanchis, X. Blasco, J.M. Herrero, A stabilizing PID controller sampling procedure for stochastic optimizers, in: Memories of the
19th World Congress IFAC 2014, 2014.
[41] G. Reynoso-Meza, J. Sanchis, X. Blasco, M. Martínez, Controller tuning using evolutionary multi-objective optimization: current trends and applications,
Control Eng. Pract. 1 (2014) 58–73.
[42] J.D. Rojas, F. Morilla, R. Vilanova, Multivariable PI control for a boiler plant benchmark using the virtual reference feedback tuning, in: Proceedings of
the IFAC Conference on Advances in PID Control (PID’12), 2012.
[43] A. Ruiz, M. Luque, K. Miettinen, R. Saborido, An interactive evolutionary multiobjective optimization method: interactive wasf-ga, in: A. Gaspar-Cunha,
C. Henggeler Antunes, C.C. Coello (Eds.), Evolutionary Multi-Criterion Optimization, Lecture Notes in Computer Science, vol. 9019, Springer International
Publishing, 2015, pp. 249–263.
[44] M. Saeki, K. Ogawa, N. Wada, Application of data-driven loop-shaping method to multi-loop control design of benchmark PID 2012, in: Proceedings of
the IFAC Conference on Advances in PID Control (PID’12), 2012.
[45] K. Saridakis, A. Dentsoras, Soft computing in engineering design - a review, Adv. Eng. Inf. 22 (2) (2008) 202–221. Network methods in engineering
[46] A. Silveira, A. Coelho, F. Gomes, Model-free adaptive PID controllers applied to the benchmark PID12, in: Proceedings of the IFAC Conference on
Advances in PID Control (PID’12), 2012.
[47] L. Thiele, K. Miettinen, P.J. Korhonen, J. Molina, A preference-based evolutionary algorithms for multi-objective optimization, Evol. Comput. 3 (2009)
411–436.
[48] T. Tušar, B. Filipič, Visualization of pareto front approximations in evolutionary multiobjective optimization: a critical review and the prosection
method, Evol. Comput. IEEE Trans. 19 (2) (2015) 225–245.
[49] Y. Wang, Y. Yang, Particle swarm optimization with preference order ranking for multi-objective optimization, Inf. Sci. 179 (12) (2009) 1944–1959.
Special Section: Web Search.
[50] J. Zhang, J. Zhuang, H. Du, S. Wang, Self-organizing genetic algorithm based tuning of PID controllers, Inf. Sci. 179 (7) (2009) 1007–1018.
[51] S.-Z. Zhao, M.W. Iruthayarajan, S. Baskar, P. Suganthan, Multi-objective robust PID controller tuning using two lbests multi-objective particle swarm
optimization, Inf. Sci. 181 (16) (2011) 3323–3335.
[52] E. Zitzler, S. Künzli, Indicator-based selection in multiobjective search, in: X. Yao et al. (Eds.), Parallel Problem Solving from Nature - PPSN VIII, Lecture Notes in Computer Science, vol. 3242, Springer, Berlin/Heidelberg, 2004, pp. 832–842, doi:10.1007/978-3-540-30217-9_84.
[53] E. Zitzler, L. Thiele, M. Laumanns, C. Fonseca, V. da Fonseca, Performance assessment of multiobjective optimizers: an analysis and review, IEEE Trans.
Evol. Comput. 7 (2) (2003) 117–132.