A novel efficient multi-objective optimization algorithm for expensive models
Keywords: Multi-objective optimization; Building performance optimization; Energy-efficient buildings; Bayesian optimization; Metamodeling

Abstract

The energy design of a building is often an activity of finding trade-offs between several conflicting goals. However, a large number of expensive simulation runs is usually required to complete a Building Performance Optimization (BPO) process with high confidence in the optimal solutions. Although evolutionary algorithms have been enhanced with surrogate models, complex BPO problems with many design variables still require a prohibitive number of expensive simulations, or lead to solutions with relatively low accuracy. Hence, performing multi-objective optimizations of actual building designs is still one of the most challenging problems in building energy design. A novel efficient multi-objective algorithm for expensive models based on a probabilistic approach is presented in this work. The new algorithm reduces the computational time needed for the optimization process, while increasing the quality of the solutions found. The algorithm was tested on three groups of analytical test functions and on the BPO problem related to the refurbishment of three reference buildings. For the latter case, the efficiency, efficacy, and quality of the Pareto solutions found with the proposed algorithm were compared with the true Pareto front previously sought with a brute force approach. The results show that, for the most complex case among the three reference buildings, the algorithm can find about 50 % of the solutions on the true Pareto front with 100 % accuracy. In comparison, other algorithms tested on the same problem with the same number of expensive simulations are able to find at best 5 % of the solutions on the true Pareto front with an accuracy of around 5-10 %.
optimization models for sustainable building design problems. Machairas et al. [4] note that the number of papers addressing building design optimization is still relatively small compared to building control optimization. Shi et al. [9] approach the same topic from an architect's perspective and find that 60 % of the works on building optimization utilize evolutionary algorithms. According to these reviews, one of the main challenges in BPO is the requirement for many expensive simulations before obtaining satisfactory results. Efficient optimization is crucial to find trade-off solutions in building design and refurbishment and to promote the practical application of BPO. However, an effective algorithm should also avoid premature convergence, which occurs when local optimal solutions (or dominated solutions) are selected instead of global optimal solutions. This limitation often arises from an incomplete analysis of the solution space.

Functional approximation is a common method to improve the optimization's efficiency while maintaining good accuracy. It consists of approximating the expensive model of a building with a mathematical function, hereafter named metamodel, which is then optimized by means of an optimization algorithm [10-17]. Among the different types of metamodels used as surrogates for building models, the most used are:

• Polynomial Regression, a popular metamodel that approximates the relationship between the input variables and the objective functions or constraints. It is a simple and widely used approach, but it may struggle to capture complex nonlinear relationships [18-19].
• Kriging, also known as Gaussian process regression, a powerful metamodel that uses a stochastic process to model the objective function or constraint surface. It provides a flexible framework to capture complex and nonlinear relationships. Kriging models estimate the mean and covariance of the process using training data and generate predictions along with confidence intervals. Kriging is commonly used in engineering design optimization and simulation-based optimization problems [20-21].
• Radial Basis Function (RBF) metamodels, which use a set of radial basis functions to approximate the objective function or constraints. These functions are centered at training points, and their influence on the metamodel is defined by their shape and spread. RBF metamodels can capture both global and local behavior and have been successfully applied in various domains, including engineering design, finance, and environmental modeling [22-23].
• Support Vector Machines (SVM), a well-known machine learning algorithm that can also be used as a metamodel for optimization problems. SVM constructs a hyperplane in a high-dimensional feature space that maximally separates the data points. In the context of metamodeling, an SVM is trained on the input-output pairs generated by the original function and can approximate both linear and nonlinear relationships. SVM-based metamodels have been applied in various fields, including engineering design, finance, and computer science [24-25].
• Artificial Neural Networks (ANN), which have gained increasing attention in recent years [26]. For instance, Kalogirou [27] combined an ANN with a GA to optimize a solar energy system, reducing the time required to find an optimal solution. Yizhe Xu et al. [28] also used an ANN coupled with an optimization algorithm to optimize the building envelope.

Metamodels, i.e., surrogate models, have also been employed to search for optimal solutions by coupling them with either an EA or a PS algorithm, and the results have been compared; in that case, the chosen algorithm was the Genetic Algorithm NSGA-II [29]. Similarly, Yujun Jung et al. [30] successfully performed a multi-objective optimization of a residential building by combining an ANN with NSGA-II, achieving accurate and efficient identification of optimal solutions. Other types of metamodels can be found in the literature. For example, Jianli Chen et al. [31] compared the performance of a Gaussian Process and Multiple Linear Regression in terms of computational time and reliability when applied to a calibration process.

Among the various strategies for integrating metamodels into the optimization process [32], a common approach involves fitting the metamodel to the available data (collected before the start of the optimization process) and then optimizing the metamodel using an EA to find a set of global solutions. These solutions are subsequently evaluated by the expensive model, and the results are added to the existing dataset. The process is repeated iteratively until convergence is reached. The works of Xu et al. [17] and Brownlee and Wright [16] follow this strategy. Nonetheless, some challenges may arise regarding the number of design variables and/or the complexity of the metamodel, especially when it is a global approximation of the expensive model [33]. The higher the complexity of the costly model, the lower the accuracy with which the metamodel can approximate the model throughout the space of variables. For this reason, Knowles [34] applied the Chebyshev function to a multi-objective optimization problem, guiding the selection of new offspring towards a portion of the objective space and thus avoiding the potential issues associated with a globally approximate model. Nguyen et al. [35] analyzed works dealing with simulation-based optimization and concluded that future research should focus on enhancing the efficiency of search techniques.

Nevertheless, even an efficient algorithm may mistakenly identify local solutions as global or yield a set of solutions that do not significantly differ from each other during the optimization process. For this reason, the evaluation of an algorithm considers not only its efficiency but also its efficacy and solution quality. Efficiency measures the computational cost of the algorithm, while efficacy quantifies the distance between the predicted Pareto front (the set of solutions found by the algorithm at the end of the optimization process) and the true Pareto solution (the global solutions to the optimization problem). Finally, solution quality assesses the uniformity of the Pareto front in the objective space.

Efficiency can be affected by the sampling technique chosen for the selection of the initial sample. Sobol Sequence Sampling (SSS) is among the most frequently used techniques. It is based on Sobol low-discrepancy sequences designed to cover the entire search space more evenly compared to random sampling methods. They have a deterministic and quasi-random nature and are well suited for high-dimensional problems. Latin Hypercube Sampling (LHS) is also a widely used method to determine the initial population. LHS is a stratified sampling method that ensures a more even coverage of the search space compared to simple random sampling. Finally, Simple Random Sampling (SRS) is a basic and widely used sampling technique where each candidate solution is selected independently and with equal probability from the search space. Although it lacks the systematic coverage of LHS or Sobol sequences, simple random sampling is computationally efficient and easy to implement.
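As a concrete illustration of these three techniques, the sketch below draws the same initial sample with SSS, LHS, and SRS using scipy.stats.qmc; the bounds and sample size are placeholders, and this is an independent Python example, not code from the paper.

```python
import numpy as np
from scipy.stats import qmc

n_points, n_vars = 64, 4                                  # placeholder sample size
lower, upper = np.zeros(n_vars), np.full(n_vars, 10.0)    # placeholder bounds

# Sobol Sequence Sampling (SSS): deterministic low-discrepancy coverage
sobol = qmc.Sobol(d=n_vars, scramble=True, seed=0)
x_sss = qmc.scale(sobol.random(n_points), lower, upper)

# Latin Hypercube Sampling (LHS): one point per stratum of each variable
lhs = qmc.LatinHypercube(d=n_vars, seed=0)
x_lhs = qmc.scale(lhs.random(n_points), lower, upper)

# Simple Random Sampling (SRS): independent uniform draws
rng = np.random.default_rng(0)
x_srs = lower + rng.random((n_points, n_vars)) * (upper - lower)

# The discrepancy gives a rough idea of how evenly each sample covers [0, 1]^d
for name, x in [("SSS", x_sss), ("LHS", x_lhs), ("SRS", x_srs)]:
    print(name, qmc.discrepancy((x - lower) / (upper - lower)))
```

Lower discrepancy values indicate a more even coverage of the unit hypercube, which is why SSS and LHS are usually preferred over SRS for initializing surrogate-based optimizers.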
The comparison of optimization algorithms in BPO processes is often challenging due to the absence of known true solutions. As a result, different authors employ various methods to evaluate and compare algorithms. One common approach is to set a simulation budget and compare the optimal solutions obtained by each algorithm against the best solution found within that budget. Multiple simulation runs are often performed, and the best solution, along with the standard deviation or the average solution, is reported. Alternatively, some authors compare algorithms based on the number of iterations required for a solution to reach a tolerance error compared to a reference value. Nevertheless, there is no guarantee that algorithms will find any optimal solution within a finite number of iterations when they are evaluated using noisy estimates of solutions. This issue can impact the conclusions drawn when comparing two or more suboptimal solutions and may lead to incorrect assessments of the final performance of algorithms applied to BPO problems, as highlighted by Kämpf et al. [36].

The nature of the parameters influencing building optimization often lends itself to discrete optimization [37]. While geometric building
as minimization problems from this point onward. Then, the points with the lowest weighted mean value are used to fit the metamodel. Since a different set of weights is used for the weighting process in each iteration of the loop, a new metamodel is created and fitted each time with a different set of points. This ensures a higher quality of the metamodel fitting process, as only the most suitable points are selected for its creation. The minimum of the metamodel is determined using a genetic algorithm (GA) at this stage. The values of the design variables associated with the minimum found by the GA are then used in the expensive simulation model to obtain the corresponding objective function values. Finally, the new point found by the GA and its related objective function values are added to the dataset. The algorithm checks if the iteration budget is exceeded and/or if the convergence criterion is met. If either condition is fulfilled, the main loop ends, and the Pareto front is evaluated.

2.1. Weights creation and sampling process

The sets of weights are created with an LHS process before the start of the main loop to investigate the entire objective space. This approach enables the dynamic prioritization of the various objective functions during different stages, thereby addressing the potential issue of the algorithm persistently converging to identical solutions when similar weight values are used. Nevertheless, the weights for the objective functions are determined in a manner that ensures their sum always equals one.

In each iteration of the loop, a different set is utilized to calculate the weighted mean value of the objective functions for the available dataset. To ensure compatibility with the maximum number of possible iterations, the total number of weights' sets must equal n_imax. Within each set, a weight is randomly generated for each of the n_fobb objective functions using the Latin Hypercube Sampling (LHS) process. This set of weights is then applied to the entire dataset during the corresponding iteration.

Additionally, the weights are created in groups of n_LHS elements using the LHS process. Each group is designed to cover the entire objective space independently. This approach enables multiple searches for the Pareto front within the same optimization process. Consequently, a matrix of weights, denoted as W, is constructed. The matrix has a size of n_imax by n_fobb, obtained by concatenating submatrices of weights' sets in an iterative process. Each submatrix is derived through an adjusted LHS design process and has dimensions of n_LHS by n_fobb.

The ratio between n_imax and n_LHS defines how many times the Pareto front is searched for. The typical relationship between the two quantities is:

n_LHS = n_imax / 10    (1)

This means that, if no ending condition is met during the process, the Pareto front is searched for a total of 10 times (n_imax / n_LHS).

In a similar process, the Latin Hypercube Sampling (LHS) method is employed to sample the values of the design variables to create the initial population. These sampled values are stored in the matrix X. If any constraints exist, all points in the sampling set are checked. If a point violates any constraint, it is randomly replaced with another set of values. This replacement process continues until all points satisfy the constraints. Once the sampling set is constraint-compliant, it is evaluated using the expensive model. This evaluation results in the creation of a matrix Y, which contains the values of the objective functions corresponding to each point in the sampling set. The sampled design variable points and their respective objective function values obtained during the sampling process are then combined to form the initial dataset, denoted as D = [X Y]. The dataset matrix is not fixed in size: in each iteration of the main loop, a new set of design variable values and their corresponding objective function values is appended to the dataset. This iterative process allows for the continuous expansion of the dataset throughout the optimization procedure.
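A minimal sketch of this set-up stage is given below, assuming scipy.stats.qmc for the LHS draws. The row normalization used to make each weight set sum to one is our assumption (the paper states only that the weights sum to one), and all function names are illustrative, not the authors' MATLAB code.

```python
import numpy as np
from scipy.stats import qmc

def build_weight_matrix(n_imax, n_lhs, n_fobb, seed=0):
    """Concatenate n_imax / n_lhs LHS-designed groups of weight sets.
    Each row of W is one weight set, normalized to sum to one.
    Assumes n_imax is a multiple of n_lhs."""
    groups = []
    for g in range(n_imax // n_lhs):
        lhs = qmc.LatinHypercube(d=n_fobb, seed=seed + g)
        w = lhs.random(n_lhs)                  # one group covering the objective space
        groups.append(w / w.sum(axis=1, keepdims=True))
    return np.vstack(groups)                   # shape: (n_imax, n_fobb)

def initial_population(n_samp, lower, upper, feasible, expensive_model, seed=0):
    """LHS sample of the design variables; infeasible points are redrawn at random."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    rng = np.random.default_rng(seed)
    lhs = qmc.LatinHypercube(d=len(lower), seed=seed)
    X = qmc.scale(lhs.random(n_samp), lower, upper)
    for i in range(n_samp):
        while not feasible(X[i]):              # random replacement until constraint-compliant
            X[i] = lower + rng.random(len(lower)) * (upper - lower)
    Y = np.array([expensive_model(x) for x in X])
    return np.hstack([X, Y])                   # initial dataset D = [X Y]
```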
2.2. Main loop

The main loop of the algorithm consists of several steps. Firstly, the weighted mean value of the objective functions is calculated by taking the scalar product of matrix Y and the corresponding row of weights from matrix W. This calculation is performed for each point in the dataset D. The resulting vector Ŷ contains the weighted mean values for each point of the search space R^p present in the dataset.

Next, a set of points in matrix X is chosen based on their related values in Ŷ. The points are ranked by their weighted mean value, and those with the lowest values are selected, up to a predetermined number set before the optimization process begins. These selected points are then used for the metamodel fitting process. The metamodel fitted on the selected points during each iteration of the main loop is a multivariate polynomial. The polynomial fitting is performed using the Horseshoe method, a Bayesian approach [38].

Once the metamodel is fitted, a Genetic Algorithm (GA) is employed to search for the minimum of the metamodel. If there are constraints, the GA avoids the infeasible region of the search space by penalizing the associated objective functions. The coordinates of the minimum obtained from the GA are then checked. If these coordinates are not already present in the dataset, they are evaluated using the expensive simulation model, and the resulting objective function values are appended to the existing dataset. However, if the new solution is already in the dataset, the loop proceeds to the next iteration.

In some cases, when many design variables are integers, finding new coordinates that are not already in the dataset may require additional effort. As a solution, the algorithm can optionally search for a new minimum within the hammering distance: if a new solution found with the GA matches a solution in the dataset, a random design variable from the newly found solution is adjusted by adding or subtracting a relative step value. The sign (positive or negative) of the step value is randomly determined. The choice of using the hammering distance option is arbitrary and is decided during the algorithm initialization.

Finally, the ending conditions are checked. If none of the ending criteria are met, the loop proceeds to the next iteration, selecting a new set of weights, calculating the new objective mean values, and repeating the entire process. Once an ending condition is met, the algorithm stops, and the Pareto front is evaluated.

2.3. Convergence criteria

Ending conditions are utilized to interrupt the main loop and advance the algorithm to the next stage, that is, the calculation and printing of the outputs. The code stops if it meets at least one of the three convergence criteria.

1. The first condition requires that no new point is found on the Pareto front in n_end iterations (excluding the simulations of the sampling process). n_end is calculated with different equations depending on whether the hammering distance option is enabled or not, to ensure that the entire Pareto front is searched at least once after the sampling process, with an arbitrary precision related to the choice of the parameter n_LHS (i.e., the number of weights' sets created within an LHS instance of the iterative process that leads to the weights' matrix W):

n_end = n_LHS / 10, if the hammering distance is enabled    (2)

n_end = n_LHS, if the hammering distance is disabled    (3)

2. The second condition is met if all the objective functions' values are less than an arbitrary threshold, defined as the maximum acceptable value for each objective function.
3. Finally, the last ending condition is related to the expensive simulation budget. The code stops if the number of expensive simulations exceeds the allocated budget.
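To fix ideas, the sketch below strings Sections 2.1-2.3 together for one run of the main loop. It is a simplified stand-in under stated assumptions, not the authors' MATLAB implementation: an ordinary least-squares quadratic replaces the Horseshoe-prior polynomial fit of [38], a dense random search stands in for the constrained GA, criterion 2 (the objective thresholds) is omitted, and all names are ours.

```python
import numpy as np

def fit_quadratic(X, y):
    """Least-squares multivariate quadratic (stand-in for the Horseshoe fit)."""
    feats = lambda Z: np.hstack([np.ones((len(Z), 1)), Z, Z**2])
    beta, *_ = np.linalg.lstsq(feats(X), y, rcond=None)
    return lambda Zq: feats(np.atleast_2d(Zq)) @ beta

def is_on_front(D, n_vars, x):
    """True if the dataset point with design variables x is non-dominated."""
    Y = D[:, n_vars:]
    y = Y[np.all(np.isclose(D[:, :n_vars], x), axis=1)][0]
    return not np.any(np.all(Y <= y, axis=1) & np.any(Y < y, axis=1))

def optimize(expensive_model, D, W, lower, upper,
             n_top=30, n_end=20, budget=500, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n_vars, stall = len(lower), 0
    for w in W:                                       # one weight set per iteration
        X, Y = D[:, :n_vars], D[:, n_vars:]
        y_hat = Y @ w                                 # weighted mean of the objectives
        best = np.argsort(y_hat)[:n_top]              # points with the lowest values
        meta = fit_quadratic(X[best], y_hat[best])
        # GA stand-in: minimize the metamodel on a large random cloud
        cand = lower + rng.random((4096, n_vars)) * (upper - lower)
        x_new = cand[np.argmin(meta(cand))]
        if any(np.allclose(x_new, x) for x in X):     # optional "hammering distance" move:
            j = rng.integers(n_vars)                  # perturb one random variable by a
            x_new[j] += rng.choice([-1.0, 1.0]) * 0.05 * (upper[j] - lower[j])
            x_new = np.clip(x_new, lower, upper)      # relative step of random sign
        y_new = np.asarray(expensive_model(x_new))
        D = np.vstack([D, np.concatenate([x_new, y_new])])
        # Ending conditions: criterion 1 (stall on the front) and criterion 3 (budget)
        stall = 0 if is_on_front(D, n_vars, x_new) else stall + 1
        if stall >= n_end or len(D) >= budget:
            break
    return D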
3. Testbench

constraint C1(x, y) = (x − 5)² + y² ≤ 25    (6)

constraint C2(x, y) = (x − 8)² + (y + 3)² ≥ 7.7    (7)

0 ≤ x ≤ 5    (8)

0 ≤ y ≤ 3    (9)

While the second constraint function is redundant, as it is not in the feasible range of values for both variables x and y, the first constraint function does not eliminate any solution from the feasible search space, but rather reduces the density of feasible points present in it [39].

The second test case is relative to the two-bar truss design originally studied by Palli et al. [42]. In this case, three variables (x1, x2, y) representing geometrical properties of a two-bar system (i.e., the cross-sectional areas of the two bars and the vertical distance between the two ends of the bars, respectively - Fig. 2) are used to minimize the volume of the truss (Equation (10)) as well as the stresses along each of the two bars (Equation (11)). Also in this case, the variables are continuous. The constraint function ensures that the truss does not undergo elastic failure if the load specified in Equation (12) is applied to the system.

minimize f1(x1, x2, y) = x1·√(16 + y²) + x2·√(1 + y²)    (10)

12 ≤ x ≤ 60    (18)
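As a concrete reference, the sketch below implements the BNH problem and the truss volume in Python. The two BNH objectives (Equations (4)-(5)) are not reproduced in the excerpt above, so they are taken from the standard Binh and Korn formulation of the benchmark; the truss stress objective and constraint (Equations (11)-(12)) are likewise not reproduced here and are omitted from the code.

```python
import numpy as np

# BNH problem: objectives from the standard Binh and Korn formulation
def bnh_objectives(x, y):
    f1 = 4.0 * x**2 + 4.0 * y**2
    f2 = (x - 5.0)**2 + (y - 5.0)**2
    return f1, f2

def bnh_feasible(x, y):
    c1 = (x - 5.0)**2 + y**2 <= 25.0           # Eq. (6)
    c2 = (x - 8.0)**2 + (y + 3.0)**2 >= 7.7    # Eq. (7), redundant on the box (8)-(9)
    return c1 and c2

# Two-bar truss: volume objective of Eq. (10); x1, x2 are the bar
# cross-sectional areas and y the vertical distance between the bar ends
def truss_volume(x1, x2, y):
    return x1 * np.sqrt(16.0 + y**2) + x2 * np.sqrt(1.0 + y**2)

if __name__ == "__main__":
    print(bnh_objectives(0.0, 0.0))    # f1 = 0: the reported minimum of f1
    print(bnh_objectives(5.0, 3.0))    # f2 = 4: the reported minimum of f2
    print(bnh_feasible(5.0, 3.0))      # True
```

The two printed extremes match the minima reported in Section 4.1 for the BNH problem, which is why the standard formulation is assumed here.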
3.2. Building simulation optimizations

The optimization process for the refurbishment of three buildings was used as a secondary testbench to evaluate the proposed algorithm. In a previous study conducted in 2018, Prada et al. [32] developed an algorithm that combined the multi-objective Genetic Algorithm NSGA-II with various metamodels. The BPO deals with the refurbishment of three specific buildings (Fig. 4): a penthouse (PH), an intermediate flat (IF), and a semi-detached house (SD).

The three reference buildings present different compactness ratios S/V, where S represents the dispersing surface and V the conditioned volume: 0.97 for the semi-detached house, 0.63 for the penthouse, and 0.3 for the intermediate flat in a multi-story building.

The typical envelopes of constructions built before the first Italian energy legislation in 1976, which have not undergone renovation, were chosen for all three reference buildings. They are characterized by an opaque envelope resistance of 0.97 m²K W⁻¹ and single-pane glass with a standard timber frame. The thermal bridges in these reference cases have two-dimensional thermal coupling coefficients calculated according to EN ISO 10211. This calculation results in a linear transmittance of 0.098 W m⁻¹K⁻¹ for corners, 0.182 W m⁻¹K⁻¹ for intermediate floors and walls, and 0.06 W m⁻¹K⁻¹ for the perimeter of windows.

The infiltration rate for all the building reference configurations is determined based on the calculations specified in UNI EN 12207 and EN 15242. The heating system in the reference buildings consists of a standard boiler coupled with radiators and an on-off control system. To represent a climate typical of Northern Italy (climatic zone E in the Italian classification), the weather conditions of Milan were chosen.

The design variables related to the optimization process of the three buildings correspond to the energy-saving measures (ESMs) that can be applied to the non-renovated buildings:

1. Placement of an additional layer of expanded polystyrene with varying thicknesses (ranging from 0 to 20 cm) on the vertical walls, roof, and floor independently. Different initial costs were considered.
2. Replacement of windows: this measure focuses on replacing the existing windows with more efficient glazing systems, such as double or triple pane windows, with either high or low solar heat gain coefficients.
3. Boiler replacement: the existing boiler is replaced with either a modulating or a condensing boiler that includes an outside temperature reset control.
4. Installation of a mechanical ventilation system: this measure involves installing a mechanical ventilation system equipped with a cross-flow heat recovery system.

A cost-optimal framework utilizing a multi-objective optimization approach has been employed to determine the performance of the different retrofit strategies. The first objective focuses on enhancing energy efficiency by minimizing the Primary Energy for Heating (EPH). The second objective involves minimizing the total cost of the building, following the comparative framework methodology of cost-optimal levels. To measure the total cost of the building over a 30-year lifespan, the Net Present Value (NPV) indicator is used. The NPV is calculated by summing the cash flows associated with each intervention over time. The initial costs, derived from a regional price list, are considered for all the energy-saving measures (ESMs), along with annual energy costs, maintenance costs, replacement costs, and residual values for equipment with longer lifespans.
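A generic sketch of such an NPV-style global-cost calculation is shown below; the discount rate, the lifespan handling, and all cost items are illustrative placeholders rather than the paper's actual cost model.

```python
def global_cost(initial_cost, annual_energy_cost, annual_maintenance_cost,
                replacement_costs, residual_value, years=30, discount_rate=0.03):
    """Discounted total cost (NPV-style indicator) of one retrofit package.

    replacement_costs maps year -> cost for equipment replaced within the
    period; residual_value is credited at the end of the lifespan.
    All rates and cost figures here are placeholders.
    """
    d = lambda t: (1.0 + discount_rate) ** (-t)
    total = initial_cost
    total += sum((annual_energy_cost + annual_maintenance_cost) * d(t)
                 for t in range(1, years + 1))
    total += sum(cost * d(t) for t, cost in replacement_costs.items())
    total -= residual_value * d(years)
    return total

# Example: a package whose boiler is replaced once, in year 15
print(round(global_cost(12_000.0, 900.0, 150.0, {15: 3_000.0}, 1_500.0), 2))
```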
A brute force approach was employed to evaluate all possible combinations of the variables listed earlier. For the intermediate flat, there were 630 potential combinations, while for the penthouse there were 13,230 combinations. In the case of the semi-detached house, the number of combinations reached 277,830. By evaluating all possible combinations, it was possible to assess the effectiveness of the models in accurately identifying the true Pareto front.

3.3. Metrics

In this study, some of the metrics proposed by Prada et al. [32] were used to evaluate the performance of the algorithm in solving the building refurbishment optimization problems. However, these metrics could not be applied to the three test function optimization problems, due to the continuous nature of some design variables and the inability to use brute force methods to search for the true Pareto front solutions required for metric assessment.

The metrics are categorized into efficacy, efficiency, and quality metrics. Efficiency metrics aim to assess the effort required by the algorithm to converge, while efficacy metrics measure the distance between the true Pareto front and the one found by the algorithm. Quality metrics evaluate the variety or dissimilarity of the solutions generated by the algorithm [32]. It is important to note that all objective values were normalized with respect to the objective functions of the existing buildings before evaluating these metrics.

For efficiency, the ratio between the number of costly building energy simulations and the total number of combinations in the variable search space (NE) was chosen. NE allows for comparing different algorithms regardless of computational time, as the time required for executing the MATLAB code, or the computational time of the code itself, is negligible compared to the time needed for each expensive simulation.

Efficacy is quantified by two metrics: the fraction of true Pareto front solutions found by the algorithm (PS) and the number of wrong optimal solutions identified by the algorithm (C).

Finally, the selected quality metric is the pure diversity of the Pareto front, which is normalized based on the pure diversity of the true Pareto front (nPD).
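A sketch of how NE, PS, and C can be computed from a run is given below. This is our reading of the definitions above for a discrete design space (where exact matching of design points is adequate); the exact formulas, including the pure diversity used for nPD, are in Prada et al. [32] and are not reproduced here.

```python
import numpy as np

def metrics(found_front, true_front, n_expensive, n_combinations):
    """NE, PS and C for fronts given as arrays of design-variable rows.
    C is reported here as the fraction of found solutions that are not
    on the true Pareto front (an assumption about its normalization)."""
    true_set = {tuple(p) for p in np.asarray(true_front)}
    found_set = {tuple(p) for p in np.asarray(found_front)}
    ne = n_expensive / n_combinations                 # share of the space simulated
    ps = len(found_set & true_set) / len(true_set)    # true Pareto solutions found
    c = len(found_set - true_set) / len(found_set)    # wrong "optimal" solutions
    return ne, ps, c
```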
4. Results

This section presents the results obtained by applying the proposed algorithm to both the analytical test cases and the optimization process for refurbishing the three buildings.

Firstly, the optimization of the analytical test cases is presented, and the algorithm's performance is evaluated based on the minimum relative objectives.
Fig. 4. Reference buildings used in the optimization problems: from the left - intermediate flat (IF), penthouse (PH) and semidetached house (SD) [32].
Whenever possible, the results obtained from applying other algorithms to the same test cases are reported to allow for comparison. Additionally, graphical representations of the Pareto fronts obtained using the proposed algorithm are provided for a direct comparison against the results reported in Deb's work [39].

Subsequently, the optimization of the computationally expensive building models is discussed. In this section, the metrics described earlier are used to compare the performance of the new algorithm against the existing literature.

By presenting these results and conducting the performance comparisons, this section aims to demonstrate the effectiveness and capabilities of the proposed algorithm in addressing the optimization challenges posed by both the analytical test cases and real building refurbishment scenarios.

4.1. Analytical test cases

The algorithm parameters selected for each test case are shown in Table 1. These values were chosen arbitrarily in order to limit the number of expensive simulations with respect to the number of possible combinations of the design variables. A reduced number of weights' sets has been selected for the last test case, to focus the search of optimal solutions on the Pareto front extremities. This was required given the wide range of possible values related to the first objective of the gear train design problem (see Table 2 and Table 3).

Table 1
Values of the algorithm's parameters for each test case.

Analytical test case           n_LHS   n_imax   n_end   Initial population size
BNH problem                     100     1000     100     20
Two-bar truss design problem    100     1000     100     20
Gear train design problem        10     1000     100     20

The first test case to which the algorithm was applied is the BNH problem. The solutions found by Binh and Korn [41] are min(f1) = 0 and min(f2) = 4, respectively when x1 = x2 = 0 and when x1 = 5, x2 = 3.

The Pareto front found by the proposed algorithm is reported in Fig. 5. The black markers represent the objective functions of the points evaluated by the algorithm, while the red dots are the points of the Pareto front. The proposed algorithm was able to quickly find the two objective functions' minimums, as well as many solutions on the Pareto front, before reaching the maximum number of possible evaluations selected for the problem. The graph highlights the high percentage of points found by the algorithm belonging to the Pareto front, thus highlighting the algorithm's effectiveness in simulating only those solutions that are near the Pareto front. It is important to note that the black markers also include the points of the initial population, coming from the sampling process.

The second test case is relative to the two-bar truss design problem. In the original study [42], the ε-constraint method was used to minimize the volume and the tensional stresses in each of the two bars, with minima of min(f1) = 0.00445 m³ and min(f2) = 83268 kPa, respectively. The NSGA-II algorithm was also applied to the same problem and was able to find the following minimums: min(f1) = 0.00407 m³ and min(f2) = 8439 kPa.

The proposed algorithm was able to find lower values for both minima with respect to the NSGA-II results, within the maximum number of possible expensive evaluations: min(f1) = 0.00385 m³ and min(f2) = 8433 kPa, with coordinates x1 = 5.0·10⁻⁴ m², x2 = 9.0·10⁻⁴ m², y = 1.6 m and x1 = 5.6·10⁻³ m², x2 = 1.0·10⁻² m², y = 3 m, respectively. It is worth noting that the minimum of f1 is lower than what was identified by the ε-constraint method and NSGA-II. Furthermore, also in this case, the solutions are mainly located near the Pareto front (Fig. 6). In this case, the algorithm did not reach convergence within the maximum number of evaluations. The reason is most likely the continuous nature of the variables related to the problem, which leads to a high number of possible combinations of the decision variables' values.

Finally, the results relative to the gear train design problem are presented. In this case, the task was to minimize the error between the required and calculated gear ratios, as well as the size of the gears. NSGA-II was able to find the following solutions: min(f1) = 1.83·10⁻⁸ and min(f2) = 13 cm, with coordinates equal to xa = 12, xb = 12, xd = 27, xf = 37 relative to min(f1), and xa = 12, xb = 12, xd = 13, xf = 13 relative to min(f2).

The minimums reached by the proposed algorithm for the two objective functions are min(f1) = 7.78·10⁻⁷ and min(f2) = 12 cm, respectively at xa = 12, xb = 12, xd = 32, xf = 31 and xa = 12, xb = 12, xd = 12, xf = 12. In this case, the proposed algorithm was able to cover most of the Pareto front before meeting a stopping condition, and thus without reaching the maximum number of evaluations, n_imax. The total number of evaluations performed was 499, including the initial sampling, while the number of possible combinations is 5,764,801. The Pareto front is represented in Fig. 7.

4.2. Building simulation optimizations

The optimization problem related to the refurbishment of the three building models - i.e., the penthouse (PH), the intermediate flat (IF), and the semi-detached house (SD) - has been used to test the new algorithm starting from different sample sets, which are the same used for the benchmark tests in Prada et al. [32].

For all building typologies, a number of weights' sets (i.e., n_LHS) of 200 and a maximum number of allowed coordinates which lie outside the Pareto front (n_end) of 200 (no hammering distance) or 20 (hammering distance) are set. The maximum number of evaluations (n_imax) was set to 500 for the intermediate flat (to be less than the number of possible combinations, i.e., 630) and to 1000 for the two other cases. Given the discrete nature of the design variables and the high step-to-feasible-range ratio for some of them, the optimization problem has been solved with and without the hammering distance option enabled. The increase of n_LHS from 100 to 200 allows densifying the solutions in the central part of the Pareto front.

The results are presented by means of the metrics described in Section 3.3 for each building and initial sampling size. Furthermore, the Pareto front found with the new algorithm and the true Pareto front found through a brute force method are shown for the smallest initial sampling size without the hammering distance. The Pareto front found by the algorithm is represented by red dots, the true Pareto front by blue circles, and the black marks represent the coordinates evaluated, including the sample.

Table 2
Efficiency (NE), efficacy (PS and C) and quality (nPD) metrics related to the optimal solutions found by applying the proposed algorithm to the IF optimization problem, both with and without the hammering distance option enabled.

           Without hammering distance       With hammering distance
n_samp    NE (%)  PS (%)  C (%)  nPD (%)   NE (%)  PS (%)  C (%)  nPD (%)
32          15      47      7     130        39      73      5      93
64          19      45      4      87        44      84      2      94
128         26      33      4      66        52      78      2      91
256         44      24     20      82        74      53     13      83

Table 3
Efficiency (NE), efficacy (PS and C) and quality (nPD) metrics related to the optimal solutions found by applying the proposed algorithm to the PH optimization problem, both with and without the hammering distance option enabled.

           Without hammering distance       With hammering distance
n_samp    NE (%)  PS (%)  C (%)  nPD (%)   NE (%)  PS (%)  C (%)  nPD (%)
32          1.1     35      0      98        2.3     62      0     149
64          1.5     42      0      95        2.5     58      0     127
128         1.7     46      0     100        2.8     58      0     146
256         2.4     17      0      89        3.8     56      0      98
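The search-space sizes quoted above can be reproduced with a short count. The factorization below is our inference about the design space (21 insulation thicknesses of 0-20 cm per insulated surface, 5 window options, 3 boiler options, 2 ventilation options, and four gear variables with integer values 12-60 per Equation (18)); it is shown only because it matches all of the numbers quoted in the text.

```python
# Combinations per building: thicknesses ** n_surfaces * windows * boilers * ventilation
thicknesses, windows, boilers, ventilation = 21, 5, 3, 2
for name, n_surfaces in [("IF", 1), ("PH", 2), ("SD", 3)]:
    print(name, thicknesses**n_surfaces * windows * boilers * ventilation)
# IF 630, PH 13230, SD 277830

# Gear train: four integer variables, each in 12..60 (Eq. (18))
print("gear train", (60 - 12 + 1) ** 4)   # 5764801
```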
Fig. 5. Pareto front found by applying the proposed algorithm to the BNH problem.
Fig. 6. Pareto front found by applying the proposed algorithm to the two-bar truss design problem.
4.2.1. Intermediate flat (IF)

The intermediate flat case is characterized by a high percentage of costly building simulations (up to 44 % and 74 %, respectively for the simulations without and with the hammering distance) with respect to the maximum number of possible combinations of the design variables' values. The high percentage of costly building simulations is, however, balanced by a high number of true Pareto solutions found, especially when the hammering distance is used (Fig. 8). Furthermore, if the hammering distance option is not used, the number of costly simulations increases while the number of true Pareto solutions found decreases (Table 2).
Fig. 7. Pareto front found by applying the proposed algorithm to the gear train design problem.
Fig. 8. Solutions found related to the IF optimization problem both with the proposed algorithm (red dots) and with the brute force approach (blue circles – true
Pareto front). The black marks represent the non-optimal solutions found with the proposed algorithm.
This result highlights the key role of the initial sample size and the dependence on the number of possible combinations. Indeed, increasing the sampling size from 64 to 128 does not improve the algorithm's accuracy, but rather worsens its efficiency due to more expensive simulation runs. Similarly, as the initial population increases, the value of the C metric increases, identifying suboptimal solutions that are in reality dominated by others. A clear trend in the quality of the front is not easily identifiable. In fact, while for an initial sample of 32 points the hammering distance worsens the diversity (nPD) of the front solutions, there are weak improvements in the other cases.
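For reference, "dominated" here carries the usual Pareto sense for minimization problems; the short helper below makes the test explicit (an illustrative snippet, not code from the paper).

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse than b everywhere and strictly better somewhere."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def non_dominated(Y):
    """Indices of the non-dominated rows of an (n_points, n_objectives) array."""
    return [i for i, y in enumerate(Y)
            if not any(dominates(z, y) for j, z in enumerate(Y) if j != i)]
```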
4.2.2. Penthouse (PH)

In the PH case, the number of available combinations increases, thus stressing the efficiency metric of the algorithm, which converges with higher accuracy with respect to the cases of the other buildings, simulating less than 3 percent of the possible combinations. Again, the increase in the initial sampling size does not always contribute to solution accuracy if the hammering option is not enabled (Table 3). The coverage metric (C), always equal to 0 %, indicates that all the solutions found by the algorithm are part of the true Pareto front. The case with an initial sampling size of 32 points outperforms all the other cases, which are characterized by a bigger sampling size, a lower percentage of true Pareto solutions found, and a similar diversity metric (Fig. 9). In this test case, unlike the previous one, there is an improvement in solution diversity when the hammering distance is adopted.

Overall, the efficiency has increased substantially, without compromising either efficacy or quality, compared to the previous building case.

4.2.3. Semidetached house (SD)

The third test case is certainly the most challenging, given the greatest number of combinations available (Fig. 10). The data reported in Table 4 show the identification of about 25 % of the Pareto front solutions with less than 1 % of the combinations simulated with the expensive model. The reduced percentage of costly simulations performed does not affect the number of true Pareto solutions found, and, if the hammering distance option is enabled, the PS metrics are comparable with those obtained for the PH, although NE is significantly lower.

Again, the C metric highlights the algorithm's ability to identify only those solutions that actually belong to the Pareto front, thus avoiding the identification of false optimums. As in the PH case, there is a benefit from the hammering distance, which leads to a greater diversity of front solutions.

4.3. Performance compared with other efficient optimization frameworks

In the final step, the results obtained in this study are compared with the results from the previous work conducted by Prada et al. [32]. Since multiple metamodels were tested for the same cases in the previous work, the process is divided into two steps to facilitate the comparison. In the first step, the results for each building category are graphically represented for the metrics NE (number of expensive simulations), PS (efficacy), and C (accuracy). Fig. 11 illustrates these graphical representations, showcasing the comparison between the results obtained in the present work and those from the previous study. In the second step, the comparison focuses on each building typology and each sampling size. The results from the present work are compared with the results from the previous study, taking into consideration the proximity in terms of the number of expensive simulations (NE). By matching the NE number, a direct comparison can be made in terms of efficacy (PS), accuracy (C), and quality (nPD). This two-step comparison process allows for a comprehensive evaluation of the results, enabling a meaningful assessment of the effectiveness, accuracy, and quality of the optimization algorithm employed in the present study in relation to the findings of the previous work conducted by Prada et al. [32].

In Fig. 11, the results of the present work are represented by red dots, while the results of the previous work are represented by light blue dots. The metric PS is reported as the complement to the unit value (1 − PS), meaning that the best solutions are in the bottom-left corner of all graphs. The results related to the IF show similarities in terms of NE, but significant differences in terms of PS. This indicates that, while the proposed algorithm quickly identified the best solutions within the initial simulations, it struggled to find new solutions that were different from the ones already present in the dataset. This behavior is highlighted by the C-NE graphs, which demonstrate high accuracy that is independent of the number of simulations performed. Moving on to the PH, the new algorithm consistently outperforms the previous work in terms of the number of solutions found on the true Pareto front (PS) when NE values are similar. However, due to the relatively small number of possible combinations, if some solutions from the true Pareto front are already found through the sampling process or within a few simulations, the presented algorithm faces difficulties in finding solutions that are not
Fig. 9. Solutions found related to the PH optimization problem both with the proposed algorithm (red dots) and with the brute force approach (blue circles – true
Pareto front). The black marks represent the non-optimal solutions found with the proposed algorithm.
Fig. 10. Solutions found related to the SD optimization problem both with the proposed algorithm (red dots) and with the brute force approach (blue circles – true
Pareto front). The black marks represent the non-optimal solutions found with the proposed algorithm.
yet present in the dataset. As a result, in some cases, certain algorithms from Prada et al. [32] outperform the new algorithm in terms of PS. This observation is validated by the last building category, where the larger number of possible combinations leads to better results for all the algorithms in terms of PS. The sampling process initially struggles to find solutions that are close to or directly on the true Pareto front. However, the new algorithm's greater efficiency and effectiveness in identifying the front can be better appreciated given the limited impact of the initial sampling size. Moreover, the Pareto front found by the new algorithm is almost always included within the true Pareto front (C metric equal to zero). This outcome allows the decision-maker to avoid considering suboptimal solutions, even though they are generally close to the true front. Furthermore, the accuracy of the algorithm is not strongly dependent on the sampling size or the number of expensive simulations (NE).

Table 4
Efficiency (NE), efficacy (PS and C) and quality (nPD) metrics related to the optimal solutions found by applying the proposed algorithm to the SD optimization problem, both with and without the hammering distance option enabled.

           Without hammering distance       With hammering distance
n_samp    NE (%)  PS (%)  C (%)  nPD (%)   NE (%)  PS (%)  C (%)  nPD (%)
32         0.11     25      0      78       0.17     51      0      91
64         0.12     27      0      77       0.19     49      0      74
128        0.13     22      0      76       0.19     52      0      75
256        0.15     16      0      81       0.24     54      0      67

Tables 5, 6 and 7 show the metrics relative to the results obtained with the proposed algorithm with the hammering distance, compared with the results of the previous contribution in cases of similar NE values. By matching the results in terms of NE, it is possible to have a quantitative comparison of the accuracy and quality of the Pareto fronts.

In the case of the IF, when comparing the results in terms of NE, both the nPD and PS metrics are similar to those obtained in the previous work (Table 5). However, the C metric tends to be significantly lower, except for the case with a sampling size of 256 points, where the algorithm performs worse with respect to all metrics. The lower accuracy observed in the C metric can be attributed to the true Pareto solutions found within the sampling process. Since the algorithm has difficulties in finding new solutions close to points that are already on the Pareto front, it struggles to improve upon the solutions obtained through the initial sampling. As a result, the algorithm may not explore the full extent of the Pareto front, leading to lower accuracy compared to the previous work. Overall, while the new algorithm shows similar results to the previous work in terms of nPD and PS when matched in terms of NE, the lower accuracy observed in the C metric highlights the algorithm's limitations in finding new solutions in proximity to the Pareto front.

In the penthouse case, the results with the hammering distance option enabled show significant improvements in terms of the PS and C found by the proposed algorithm for almost all initial population sizes (Table 6). The only exception is the case with a sampling size of 256 points, where the results differ. In this scenario, characterized by a higher number of possible combinations compared to the intermediate flat, there could be two possible explanations why the proposed algorithm cannot perform as well as for the other initial population sizes. Firstly, it is possible that some solutions were found during the sampling process that were already on or close to the true Pareto front. This would explain the greater values of the PS and C metrics found in the previous contribution, which heavily depend on the quality of the initial population. Alternatively, the points selected during the sampling process might have influenced the metamodel to develop with a wrong "shape," affecting the accuracy of the results. This observation is further supported by the trend observed across all three building typologies: the solutions obtained with a sampling size of 256 points are consistently worse compared to the other cases, indicating that a smaller initial population size generally leads to better results. Overall, the hammering distance option proves beneficial in improving the efficacy (PS) and accuracy (C) of the algorithm for the penthouse case, while the quality (nPD) remains comparable.
Fig. 11. Efficiency (NE) and efficacy (PS and C) metrics related to the solutions found with the proposed algorithm (red dots) and with the algorithm in Prada et al. [32], for all three reference buildings (IF, PH and SD).
The exception with the sampling size of 256 points suggests that caution should be exercised when selecting the initial population size, as a smaller size may generally yield better results.

From the analysis of the semidetached house (Table 7), the presented algorithm excels in finding a higher number of true Pareto front solutions with high accuracy compared to the previous work, especially in relation to high-dimensional problems, while the quality of the Pareto front found by the algorithm is comparable, considering the same number of expensive simulations.

Table 5
Metrics related to the solutions of the proposed algorithm with the hammering distance option enabled, compared with those similar in terms of NE found with the algorithm in Prada et al. [32], for the IF case.

Intermediate flat (IF)
Algorithm in Prada et al. [32] | New algorithm

Table 7
Metrics related to the solutions of the proposed algorithm with the hammering distance option enabled, compared with those similar in terms of NE found with the algorithm in Prada et al. [32], for the SD case.

Semidetached house (SD)
Algorithm in Prada et al. [32] | New algorithm

5. Discussion

The results presented in this section reveal several features of the new algorithm, as well as its advantages and disadvantages, especially when compared to the outcomes of a previous contribution on the same optimization problems.
The application of the algorithm to the three analytical test functions demonstrates its capability to find the same or even better solutions compared to well-established algorithms such as NSGA-II and the ε-constraint method. The Pareto fronts depicted in the respective figures further confirm the algorithm's ability to focus on the region of the search space characterized by points near the global minimum. This enables the algorithm to avoid unnecessary expensive simulations, resulting in improved efficiency.

Considering the gear train design problem, even for cases with a wide range of objective function values, the algorithm successfully identifies the Pareto front with a limited number of simulations compared to the total number of possible combinations. This showcases the algorithm's effectiveness in handling complex optimization problems.

By comparing the results with the algorithms in Prada et al. [32] for the optimization problems regarding the refurbishment of the three reference buildings, certain advantages and disadvantages can be observed. One notable difficulty of the algorithm is in finding new solutions near the ones already identified on the true Pareto front. When the number of possible combinations is low, optimal solutions can be found early on, even during the sampling process. In such cases, the algorithm faces challenges in discovering different solutions, and the results deteriorate as the number of sampled points increases. On the other hand, the potential of the new algorithm becomes more apparent when dealing with problems of increased complexity, characterized by a high number of possible combinations and/or design variables. For simpler cases, such as the intermediate flat, the algorithm typically finds initial solutions on the true Pareto front within the first 100 expensive simulations. The initial Pareto front identified by the algorithm may consist of sparse solutions but still effectively covers a significant portion of the objective space. As the iterations progress, the algorithm achieves a higher PS, but at the expense of an increase in NE. The trade-off between PS and NE is influenced by the setting of n_end, which needs to be carefully determined based on whether the goal is to find as many solutions as possible on the true Pareto front or to limit the number of expensive simulations.

It is noteworthy that, even with a low number of simulations, the Pareto front identified by the algorithm is generally accurate, indicating that the solutions lie on the true Pareto front. This observation is reflected in the C metric, which is typically close to zero and shows a weak correlation with the initial population density or NE. Thus, even with a limited number of expensive simulations, the solutions found by the new algorithm are typically reliable. Additionally, the relationship between PS and NE is weak, particularly for complex cases. More expensive simulations do not always lead to an increased PS, especially if the increase is due to a larger number of sampling points. Regardless of the settings, a wider initial population often leads to suboptimal results. This consideration may vary depending on the case, particularly in relation to the number of design variables, but it highlights that a high number of sampling points is often unnecessary or even unfavourable.

In summary, the new algorithm demonstrates promising features and advantages in terms of finding optimal solutions, accurately identifying the Pareto front, and managing the trade-off between PS and NE. However, it also exhibits challenges when encountering solutions near the true Pareto front and in balancing the initial population density and the number of expensive simulations. Understanding these characteristics allows for better utilization and optimization of the algorithm in different problem scenarios.

6. Conclusion

A novel probabilistic optimization algorithm has been developed, leveraging metamodels to reduce the computational burden of expensive simulation runs in optimization processes. The algorithm incorporates multiple approaches to mitigate the number of expensive simulations required to achieve satisfactory results:

• the optimization algorithm employs a metamodel to explore the search space on behalf of the expensive models. By utilizing the metamodel, only the most promising solutions are selected for further evaluation using the expensive models. This selective approach significantly reduces the number of expensive simulations required.
• the metamodel fitting process is based on the Horseshoe method, which is a Bayesian probabilistic approach. This method ensures high-quality fitting by employing a multivariate polynomial fitting process in each iteration, without constraining the search to specific regions within the design variable space.
• as the optimization process progresses, the metamodel fitting process adapts its strategy. Instead of considering all available points, it focuses on the most promising ones, specifically targeting the region around the Pareto front within the design variable space. This adaptive approach further narrows down the search, reducing the reliance on expensive simulations.
• by transitioning from a multi-objective optimization to a single-objective optimization, the quality of the fitting process is significantly enhanced. As a result, the number of expensive simulations performed with design variables that are far from the Pareto front is greatly reduced.

By incorporating these strategies, the new algorithm showcases its ability to effectively reduce the number of expensive simulations required to achieve satisfactory results, while maintaining or even surpassing the performance of established methods like the ε-constraint method and NSGA-II. The algorithm's applicability and advantages are demonstrated across various optimization scenarios, validating its efficacy and highlighting its potential for practical use. To this end, the algorithm's performance was evaluated on three multi-objective test functions with constraints, as well as on the optimization problems relative to the refurbishment of three reference buildings. The results are compared with those obtained using the ε-constraint method and NSGA-II for selected test functions, and, by means of several metrics, with NSGA-II coupled with different metamodels for the three reference building optimization problems.

Through the comparison of the test function and building optimization problem results, it was possible to highlight several key capabilities of the new algorithm:

• the new algorithm successfully identifies minimum values for all test functions that are on par with those obtained using extensively validated algorithms like NSGA-II and the ε-constraint method. This demonstrates the algorithm's ability to achieve competitive optimization results.
• the algorithm exhibits a general ability to focus on the portion of the objective space near the Pareto front. This significantly reduces the need for a large number of expensive simulations, making the optimization process more efficient.
• the application of the algorithm is not limited to problems with objective functions of different magnitudes or specific types of design variables. It can effectively handle continuous, discrete, and categorical variables, making it versatile for a wide range of optimization problems.
• solutions on the true Pareto front are found with a limited number of expensive evaluations. Across all building optimization problems, including the sampling process, fewer than 100 expensive simulations are required to find the minimum values of the objective functions.
• increasing the number of expensive simulations tends to result in a densification of the solutions on the true Pareto front. The number of design variables has a weak relationship with this characteristic, as the complexity of the problems increases without significantly impacting the number of solutions found on the true Pareto front.
Concerning the optimization problems of the three reference buildings, the number of solutions found by the proposed algorithm on the true Pareto front is generally equal to or greater than that found in a previous contribution, except when specific conditions are met. Specifically, for the intermediate flat and penthouse cases with an initial population size of 256, the algorithm may repeatedly find the same solutions when it encounters the true optimal solutions during the sampling process or early iterations. As a result, the number of unique solutions on the true Pareto front may be smaller than in the previous contribution in these specific cases. However, in the other scenarios, characterized by a smaller initial population or a larger number of possible design variable combinations, the proposed algorithm finds a comparable or even greater number of solutions on the true Pareto front with respect to the results available in the literature. It is important to note that the accuracy of the algorithm is not significantly affected by the number of expensive simulations, since the solutions found by the algorithm are usually part of the true Pareto front.

From the results, it is possible to conclude that the proposed algorithm excels at finding satisfactory solutions with high accuracy within a limited number of expensive simulations, especially for high-dimensional optimization problems. In cases where a higher density of solutions on the Pareto front is desired, the algorithm can provide additional solutions at the expense of efficacy, by increasing the number of expensive simulations.

Overall, the algorithm successfully reduces the number of expensive simulations, and thus the computational time required to complete an optimization process, while maintaining a high level of accuracy, which was the main objective of this work.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

Data will be made available on request.

References

[1] Directive 2010/31/EU of the European Parliament and of the Council of 19 May 2010 on the energy performance of buildings, OJ L 153/2010, 2010.
[2] P. Penna, A. Prada, F. Cappelletti, A. Gasparella, Multi-objectives optimization of energy efficiency measures in existing buildings, Energ. Buildings 95 (2015) 57–69.
[3] E. Carlon, M. Schwarz, A. Prada, L. Golicza, V.K. Verma, M. Baratieri, A. Gasparella, W. Haslinger, C. Schmidl, On-site monitoring and dynamic simulation of a low energy house heated by a pellet boiler, Energ. Buildings 116 (2016) 296–306.
[4] V. Machairas, A. Tsangrassoulis, K. Axarli, Algorithms for optimization of building design: A review, Renew. Sustain. Energy Rev. 31 (2014) 101–112.
[5] S. Sharma, V. Kumar, Comprehensive Review on Multi-objective Optimization Techniques: Past, Present and Future, Arch. Comput. Methods Eng. 29 (7) (2022) 5605–5633.
[6] J. Wang, Z.J. Zhai, Y. Jing, C. Zhang, Particle swarm optimization for redundant building cooling heating and power system, Appl. Energy 87 (12) (2010) 3668–3679, https://doi.org/10.1016/j.apenergy.2010.06.021.
[7] S. Attia, M. Hamdy, W. O’Brien, S. Carlucci, Assessing gaps and needs for integrating building performance optimization tools in net zero energy buildings design, Energ. Buildings 60 (2013) 110–124.
[8] R. Evins, A review of computational optimisation methods applied to sustainable building design, Renew. Sustain. Energy Rev. 22 (2013) 230–245.
[9] X. Shi, Z. Tian, W. Chen, B. Si, X. Jin, A review on building energy efficient design optimization from the perspective of architects, Renew. Sustain. Energy Rev. 65 (2016) 872–884.
[10] K. Klemm, W. Marks, A.J. Klemm, Multicriteria optimisation of the building arrangement with application of numerical simulation, Build. Environ. 35 (2000) 537–544.
[11] J.H. Lee, Optimization of indoor climate conditioning with passive and active methods using GA and CFD, Build. Environ. 42 (9) (2007) 3333–3340.
[12] L. Magnier, F. Haghighat, Multiobjective optimization of building design using TRNSYS simulations, genetic algorithm, and Artificial Neural Network, Build. Environ. 45 (3) (2010) 739–746.
[13] B. Eisenhower, Z. O’Neill, S. Narayanan, V.A. Fonoberov, I. Mezić, A methodology for meta-model based optimization in building energy models, Energ. Buildings 47 (2012) 292–301.
[14] E. Tresidder, Y. Zhang, A.I.J. Forrester, Acceleration of building design optimisation through the use of kriging surrogate models, in: BSO12 Proceedings of the 1st IBPSA-England Conference Building Simulation and Optimization, Loughborough, UK, 2012, pp. 118–125.
[15] C.J. Hopfe, M. Emmerich, R. Marijt, J.L.M. Hensen, Robust multi-criteria design optimisation in building design, in: BSO12 Proceedings of the 1st IBPSA-England Conference Building Simulation and Optimization, Loughborough, UK, 2012, pp. 118–125.
[16] A.E.I. Brownlee, J.A. Wright, Constrained, mixed-integer and multi-objective optimisation of building designs by NSGA-II with fitness approximation, Appl. Soft Comput. 33 (2015) 114–126.
[17] W. Xu, A. Chong, O.T. Karaguzel, K.P. Lam, Improving evolutionary algorithm performance for integer type multi-objective building system design optimization, Energ. Buildings 127 (2016) 714–729.
[18] D.C. Montgomery, Design and Analysis of Experiments, John Wiley & Sons, 2017.
[19] R.H. Myers, D.C. Montgomery, C.M. Anderson-Cook, Response Surface Methodology: Process and Product Optimization Using Designed Experiments, John Wiley & Sons, 2016.
[20] T.J. Santner, B.J. Williams, W.I. Notz, The Design and Analysis of Computer Experiments, Springer, 2003.
[21] N.V. Queipo, R.T. Haftka, W. Shyy, T. Goel, R. Vaidyanathan, P.K. Tucker, Surrogate-Based Analysis and Optimization, Prog. Aerosp. Sci. 41 (1) (2005) 1–28.
[22] M.J.D. Powell, Radial Basis Functions for Multivariable Interpolation: A Review, Algorithms for Approximation 2 (1992) 143–167.
[23] I. Babuška, J.M. Melenk, The Partition of Unity Method, Int. J. Numer. Meth. Eng. 40 (4) (1996) 727–758.
[24] I. Steinwart, A. Christmann, Support Vector Machines, Springer, 2008.
[25] V.N. Vapnik, The Nature of Statistical Learning Theory, Springer, 2013.
[26] N.D. Roman, F. Bre, V.D. Fachinotti, R. Lamberts, Application and characterization of metamodels based on artificial neural networks for building performance simulation: A systematic review, Energ. Buildings 217 (2020) 109972.
[27] S. Kalogirou, Optimization of solar systems using neural-networks and genetic algorithms, Appl. Energy 77 (2004).
[28] Y. Xu, G. Zhang, C. Yan, G. Wang, Y. Jiang, K. Zhao, A two-stage multi-objective optimization method for envelope and energy generation systems of primary and secondary school teaching buildings in China, Build. Environ. 204 (2021) 108142, https://doi.org/10.1016/j.buildenv.2021.108142.
[29] K. Deb, Multi-objective optimisation using evolutionary algorithms: an introduction, Springer, London, 2011.
[30] Y. Jung, Y. Heo, H. Lee, Multi-objective optimization of the multi-story residential building with passive design strategy in South Korea, Build. Environ. 203 (2021) 108061, https://doi.org/10.1016/j.buildenv.2021.108061.
[31] X. Gao, Y. Hu, Z. Zeng, Y. Liu, A meta-model-based optimization approach for fast and reliable calibration of building energy models, Energy 188 (2019) 116046, https://doi.org/10.1016/j.energy.2019.116046.
[32] A. Prada, A. Gasparella, P. Baggio, On the performance of meta-models in building design optimization, Appl. Energy 225 (2018), https://doi.org/10.1016/j.apenergy.2018.04.129.
[33] Y. Jin, M. Olhofer, B. Sendhoff, A framework for evolutionary optimization with approximate fitness functions, IEEE Trans. Evol. Comput. 6 (2002) 481–494.
[34] J. Knowles, ParEGO: a hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems, IEEE Trans. Evol. Comput. 10 (1) (2006) 50–66.
[35] A.T. Nguyen, S. Reiter, P. Rigo, A review on simulation-based optimization methods applied to building performance analysis, Appl. Energy 113 (2014) 1043–1058.
[36] J.H. Kämpf, M. Wetter, D. Robinson, A comparison of global optimization algorithms with standard benchmark functions and real-world applications using EnergyPlus, J. Build. Perform. Simul. 3 (2) (2010) 103–120.
[37] D. Tuhus-Dubrow, M. Krarti, Comparative analysis of optimization approaches to design building envelope for residential buildings, ASHRAE Trans. 115 (2000) 554–562.
[38] E. Makalic, D. Schmidt, A Simple Sampler for the Horseshoe Estimator, IEEE Signal Process. Lett. (2015), https://doi.org/10.1109/LSP.2015.2503725.
[39] K. Deb, Multi-Objective Optimization using Evolutionary Algorithms, John Wiley & Sons, 2001.
[40] M. Wetter, J. Wright, A comparison of deterministic and probabilistic optimization algorithms for non-smooth simulation-based optimization, Build. Environ. 39 (8) (2004) 989–999.
[41] T.T. Binh, U. Korn, MOBES: A multiobjective evolution strategy for constrained optimization problems, in: Proceedings of the 3rd International Conference on Genetic Algorithms Mendel, Brno, 1997, pp. 176–182.
[42] N. Palli, S. Azarm, P. McCluskey, R. Sundararajan, An interactive multistage ε-inequality constraint method for multiple objectives decision making, ASME J. Mech. Des. 120 (4) (1999) 678–686.
[43] B.K. Kannan, S.N. Kramer, An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design, ASME J. Mech. Des. 116 (2) (1994) 405–411.