Article
Parallelized A Posteriori Multiobjective Optimization in
RF Design
Jan Míchal and Josef Dobeš *
Department of Radioelectronics, Czech Technical University in Prague, Technická 2, 16627 Praha, Czech Republic;
[email protected]
* Correspondence: [email protected]
Citation: Míchal, J.; Dobeš, J. Parallelized A Posteriori Multiobjective Optimization in RF Design. Electronics 2023, 12, 2343. https://doi.org/10.3390/electronics12102343
Academic Editor: Costas Psychalinos
Received: 24 March 2023; Revised: 17 May 2023; Accepted: 19 May 2023; Published: 22 May 2023
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction
One way of reducing computing time for multiobjective optimization consists of using efficient algorithms based on replacing the explored objective function with a less accurate but more easily computable surrogate objective function, as in [1–4]. Another way is parallelization, which is able to decrease the time even more, depending on the number of parallel processors. While many parallelization attempts have been made in the area of evolutionary multiobjective optimization algorithms [5–11] and other types of metaheuristics [12–16], we are utilizing our original multiobjective optimization method with an asymptotically uniform coverage of the Pareto front, which improves substantially on [17–21]. Part of the published research aims at using alternative computing platforms (GPUs, CUDA) [6,22,23]. However, our main objective in this paper was to develop a technical solution for the synchronization of individual threads run in parallel to avoid collisions when accessing shared resources. It is an asynchronous task-launching scheme (in the meaning introduced in [24]), which allows a new task to be started immediately after any of the previously started ones finishes, without having to wait for the end of all of the parallel runs. For generating tasks to be automatically distributed among a given number of simultaneously running processors (threads), we have efficiently used process synchronization objects.

The paper is organized as follows. In Section 2, we briefly specify our modification of the goal attainment method used as an a posteriori multiobjective optimization method. In Section 3, the method of solving the obtained single-objective problems is defined in detail. Then, in Section 4, a way of utilizing our modification of the method to generate a contour plot of the Pareto front is briefly proposed. Section 5 presents the parallelization scheme and explains its implementation using process synchronization objects. Finally, the application of the method is demonstrated on example RF circuit designs in Sections 6–10, including appropriate graphical results and some comparisons with other procedures.
$$\min_{x \in S}\ \{\, g_1(x),\ g_2(x),\ \ldots,\ g_k(x) \,\},\qquad(1)$$

$$\min_{x \in S}\ \max_{i=1,\ldots,k}\ \frac{g_i(x) - \bar z_i}{z_i^{\mathrm{nad}} - z_i^{*}},\qquad(2)$$
where $\bar z_i$ stands for suitable reference goal values associated with the individual functions $g_i$. In the denominator of (2), $z_i^{*}$ are elements of the ideal (or best-case) vector $z^{*} = [z_1^{*}, z_2^{*}, \ldots, z_k^{*}]$ obtained by independent minimizations:

$$z^{*} = \left[\ \min_{x \in S} g_1(x),\ \min_{x \in S} g_2(x),\ \ldots,\ \min_{x \in S} g_k(x)\ \right].\qquad(3)$$
The symbol $z_i^{\mathrm{nad}}$ represents a component of what is called the nadir (or worst-case) objective vector $z^{\mathrm{nad}}$, formed by the largest respective components $(z_i^{*})_j$ found in all $k$ objective vectors $z_i^{*}$ obtained by independently minimizing the $i$th objective function:

$$z^{\mathrm{nad}} = \left[\ \max_i\, (z_i^{*})_1,\ \ldots,\ \max_i\, (z_i^{*})_k\ \right].\qquad(4)$$
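To make (2)–(4) concrete, the following is a minimal numpy sketch; the two objectives and the grid standing in for the feasible set $S$ are illustrative assumptions of ours, not functions from the paper. It builds the payoff table by independent minimizations, extracts the ideal and nadir vectors, and evaluates the min-max scalarization of (2).

```python
import numpy as np

# Hypothetical two-objective example (illustrative, not from the paper):
def g1(x):  # first objective, minimum 0 at (1, 0)
    return (x[0] - 1.0) ** 2 + x[1] ** 2

def g2(x):  # second objective, minimum 0 at (0, 2)
    return x[0] ** 2 + (x[1] - 2.0) ** 2

objectives = [g1, g2]
grid = np.mgrid[-2:3:81j, -2:3:81j].reshape(2, -1).T  # crude stand-in for S

# Independent minimizations (3): each row of the payoff table is the full
# objective vector evaluated at the minimizer of one objective.
payoff = []
for gi in objectives:
    x_best = min(grid, key=gi)
    payoff.append([g(x_best) for g in objectives])
payoff = np.array(payoff)

z_ideal = payoff.diagonal()    # z*   : best value of each objective
z_nadir = payoff.max(axis=0)   # z_nad: worst values over the payoff table (4)

def scalarized(x, z_ref):
    # min-max scalarization (2) with reference goals z_ref
    return max((g(x) - z_ref[i]) / (z_nadir[i] - z_ideal[i])
               for i, g in enumerate(objectives))
```

Minimizing `scalarized` over `x` for different reference goals `z_ref` then traces out different points of the Pareto front.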
We proved that the following sequence of steps can be used to obtain such a reference point $z_{k-1,1}$:

$$\begin{aligned}
&\textbf{for } i = 1 \textbf{ to } k-1:\\
&\qquad t_i := \sqrt[k-i]{r_i};\\
&\qquad \textbf{for } j = 1 \textbf{ to } k-i:\\
&\qquad\qquad z_{i,j} := (1 - t_i)\, z_{i-1,1} + t_i\, z_{i-1,j+1};
\end{aligned}\qquad(7)$$
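Recursion (7) can be implemented directly. The sketch below assumes (inferred from the surrounding text, not stated explicitly here) that the $r_i$ are independent uniform random numbers in $(0, 1)$ and that $z_{0,1}, \ldots, z_{0,k}$ are the $k$ vertices of the reference simplex; the roots $t_i = \sqrt[k-i]{r_i}$ are what make the coverage of the simplex asymptotically uniform.

```python
import random

def reference_point(vertices, rs):
    """Recursion (7): map k-1 numbers r_i in (0,1) onto a point z_{k-1,1} of
    the simplex spanned by the k reference-set vertices z_{0,1..k}."""
    k = len(vertices)
    z = [list(v) for v in vertices]           # row z_{0,j}, j = 1..k
    for i in range(1, k):                     # i = 1 .. k-1
        t = rs[i - 1] ** (1.0 / (k - i))      # t_i := (k-i)-th root of r_i
        new = []
        for j in range(1, k - i + 1):         # j = 1 .. k-i
            # z_{i,j} := (1 - t_i) z_{i-1,1} + t_i z_{i-1,j+1}
            new.append([(1 - t) * a + t * b for a, b in zip(z[0], z[j])])
        z = new
    return z[0]                               # z_{k-1,1}
```

For $k = 2$ this reduces to a uniform point on the segment between the two vertices; for larger $k$, the sampled barycentric coordinates follow the uniform (flat Dirichlet) distribution on the simplex.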
N.B.: Equations (1)–(7) represent only a very brief description of our method. We have already given more comprehensive descriptions in [25–27], where, e.g., comparisons with other methods such as the weighted sum strategy and the genetic, Nelder–Mead simplex, and simulated annealing algorithms were performed. However, as the main focus of this paper is the parallelization technique, we will not repeat those definitions and comparisons here, but will thoroughly elaborate on the parallelization strategy instead.
$$\min_{(\alpha,\, x) \in S}\ \alpha\qquad(8)$$

$$\begin{aligned}
S = \{\, (\alpha, x) \in \mathbb{R}^{n+1} \mid\ & x^{l} \le x \le x^{u}\\
\land\ & {-f_i(x)} + a_{0i}\,\alpha \ge 0\quad \forall\, i \in I_1\\
\land\ & f_i(x) \ge 0\quad \forall\, i \in I_2 \,\},
\end{aligned}\qquad(9)$$
with $x^{l}$ and $x^{u}$ being given vectors of lower and upper bounds for the components of the decision vector $x$; $I_1$ being the set of all indexes $i$ such that $a_{0i} \ne 0$ (indexes of the components $f_i$ of an objective function to be minimized); and $I_2$ being the set of indexes $i$ such that $a_{0i} = 0$ (indexes of constraint functions $f_i$ to be kept non-negative).

On closer inspection, this form of SOP can be obtained from form (2) by the substitutions $f_i(x) = g_i(x) - \bar z_i$ and $a_{0i} = z_i^{\mathrm{nad}} - z_i^{*}$ for $i \in I_1$.
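The substitution can be checked with a tiny sketch: for a fixed $x$, the smallest $\alpha$ feasible in (9) equals the min-max objective of (2) evaluated at $x$, so minimizing $\alpha$ over $(\alpha, x)$ reproduces the min-max problem. The example functions below are illustrative, not from the paper.

```python
import numpy as np

def smallest_feasible_alpha(x, gs, zbar, a0):
    # Constraints -f_i(x) + a0_i * alpha >= 0 (i in I1) require
    # alpha >= f_i(x)/a0_i, so the smallest feasible alpha is their maximum,
    # i.e., exactly max_i (g_i(x) - zbar_i) / a0_i -- the objective of (2).
    f = np.array([g(x) for g in gs]) - np.asarray(zbar)
    return float(np.max(f / np.asarray(a0)))
```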
where $\Delta x_j$, $j = 1, \ldots, n$ are the decision variables of the linear optimization problem, whose values are to be determined by the exchange algorithm.

$$\begin{aligned}
\min(s_{xj},\ x_j - x_j^{l}) + \Delta x_j &\ge 0\\
\min(s_{xj},\ x_j^{u} - x_j) - \Delta x_j &\ge 0
\end{aligned}\qquad(11)$$

to (10) for each variable $\Delta x_j$, $j = 1, \ldots, n$, with $s_{xj} > 0$ being a sufficiently small number. Note that the bounds on $x$ are enforced here rather than in Step 5.
where

$$g = \max_{i \in I_1} f_i(x)\qquad(13)$$

is the objective function value at the current iterate $x$. This allows adding the $\alpha$ term to the constraints without significantly changing them because, typically, $\alpha \to g$ as the vector $x$ approaches the optimum. (However, the step damping inequalities are still in action.) This modification can be set as always on, on only until the first feasible iterate is found, or permanently switched off.
The level of subtasks that can be run in parallel needs to be chosen such that the
subtasks are independent of each other, but are still as short as possible so that the waiting
time to finish the running task is not too long. This is fulfilled by taking the whole single
SO run for a single reference point as a unit task.
5.2.1. Semaphore
The semaphore is a special kind of counter variable stored in the kernel memory space that can be used to organize access by multiple threads or processes to a shared system resource. This is possible due to the special treatment of a thread or process manipulating the counter's content.
A new semaphore object is created with the function CreateSemaphore(), which also sets the counter to a non-negative initial value and takes an optional identification string. The initial value specifies how many threads/processes can simultaneously dispose of the particular system resource.
The semaphore is decremented by calling the function WaitForSingleObject(), unless its value is already zero; in that case, the current thread/process is put to sleep. When the counter value later becomes nonzero, one of the sleeping threads/processes wakes up and thereby decrements the counter value back to zero.
The semaphore is incremented by means of the function ReleaseSemaphore(). This
also wakes one of the currently sleeping threads/processes (due to the previous call of the
WaitForSingleObject() function), if there are any.
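The semantics above can be sketched portably with Python's threading.Semaphore standing in for CreateSemaphore()/WaitForSingleObject()/ReleaseSemaphore(); the worker function and counters are illustrative, not part of the paper's implementation.

```python
import threading
import time

N_SLOTS = 2                        # initial counter: two resource slots
sem = threading.Semaphore(N_SLOTS)
active = []                        # who currently holds a slot
peak = [0]                         # highest observed concurrency
lock = threading.Lock()            # protects the bookkeeping lists

def worker(i):
    sem.acquire()                  # WaitForSingleObject(): decrement or sleep
    with lock:
        active.append(i)
        peak[0] = max(peak[0], len(active))
    time.sleep(0.01)               # use the shared resource
    with lock:
        active.remove(i)
    sem.release()                  # ReleaseSemaphore(): increment, wake a sleeper

threads = [threading.Thread(target=worker, args=(i,)) for i in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Although six workers contend for the resource, at most `N_SLOTS` of them are ever inside the guarded section at the same time.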
5.2.2. Mutex
A mutex (whose name stands for mutual exclusion) is essentially a special case of a semaphore with a maximum value of one. This means that only one thread/process can dispose of the shared resource at a time (thus indeed having exclusive access to it).
A mutex is created by a call to the CreateMutex() API function, with an optional ID string as an argument. Access to the shared resource is granted by calling WaitForSingleObject(), which enqueues the current thread/process by putting it to sleep if the mutex's value is currently zero (indicating that the protected resource is currently taken). The mutex is released by a call to ReleaseMutex(), which subsequently wakes up one of the sleeping threads/processes (if any).
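A minimal portable sketch of mutex-guarded exclusive access, with Python's threading.Lock standing in for CreateMutex()/WaitForSingleObject()/ReleaseMutex(); the shared list is an illustrative stand-in for a shared file such as the NIS file used later in the paper.

```python
import threading

log = []                    # stand-in for a shared results file
mutex = threading.Lock()    # stand-in for the Win32 mutex

def report(i):
    mutex.acquire()         # WaitForSingleObject(): sleep if already taken
    try:
        log.append("solution %d" % i)   # critical section: shared resource
    finally:
        mutex.release()     # ReleaseMutex(): wake one waiting thread, if any

threads = [threading.Thread(target=report, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Every append happens under the lock, so no two threads ever write to the shared resource simultaneously.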
[Flowchart residue: the main process creates the "NISFmutex" mutex and the processor semaphore, launches the parallel runs in two nested loops over i and j (writing i, j, and Clist[i] to each run's INP file, waiting on ProcessorSemaphore before each launch, and closing the returned process and thread handles), and finally acquires the semaphore Nproc times before closing both handles and ending. Each parallel run reads i, j, and C from "INP", seeds its random generator with fs(i, j), and, once a feasible x is found, opens the mutex, writes to the NIS file under its protection, and releases the semaphore before exiting.]
Figure 2. Flowchart of the parallel process.
Then, two nested loops are used to launch the individual parallel runs of the optimization process, each in its own working subdirectory: the outer loop index i selects the contour parameter value from the predefined list Clist, and the distinct values of the nested loop index j make the pseudorandomly generated inputs of each parallel run unique. Both loop indexes and the contour parameter value are passed to each parallel process in an input file called "INP", written in its subdirectory. Immediately after creating each parallel process, its process and thread handles can be closed by calls to CloseHandle(), as they are no longer needed in the main process.
By the time the whole intended number of parallel processes has been launched, most of them have already finished, but the final batch of up to Nproc processes is still running. Therefore,
the main process waits for all of them to finish using yet another loop, this time controlled
by the p parameter. Then, it can close its semaphore and mutex handles, and thus allow
the operating system to free both of the synchronizing objects. After that, the main process
itself closes down. The resulting set of noninferior solutions is gathered in the NIS file and
is ready for inspection.
Note that no processor (core) apart from the Nproc total needs to be specifically reserved
for the main process itself because it spends most of the time sleeping and waiting for any
of the running processes to finish, and thus, its overall computational demand turns out to
be negligible.
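The launching scheme described above can be sketched as follows. This is a portable approximation using Python threads in place of the Win32 child processes; N_PROC, the contour list, and the task durations are illustrative assumptions. The key point is that the main loop acquires a processor slot before starting each task and the finished task releases it, so a new task starts as soon as any earlier one finishes, with no barrier at the end of a batch.

```python
import random
import threading
import time

N_PROC = 3                                    # number of processor slots
slots = threading.Semaphore(N_PROC)           # ProcessorSemaphore stand-in
nis_mutex = threading.Lock()                  # NISfileMutex stand-in
results = []                                  # shared "NIS file"

def task(i, j, c):
    time.sleep(random.uniform(0.0, 0.02))     # the single-objective run
    with nis_mutex:
        results.append((i, j, c))             # store a noninferior solution
    slots.release()                           # free the processor slot

c_list = [0.25, 0.35, 0.5, 0.7]               # contour parameter values
workers = []
for i, c in enumerate(c_list):                # outer loop: contour parameter
    for j in range(5):                        # inner loop: repeated runs
        slots.acquire()                       # wait until a processor is free
        t = threading.Thread(target=task, args=(i, j, c))
        t.start()
        workers.append(t)

for _ in range(N_PROC):                       # wait for the final batch only
    slots.acquire()
for t in workers:
    t.join()
```

Because every start is gated by `slots.acquire()`, at most `N_PROC` tasks run concurrently, and the main thread sleeps inside the semaphore wait rather than polling, mirroring the negligible overhead noted above.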
[Figure 3: circuit schematic of the power amplifier, including the L and C component models with their stray elements and the power transistor model with its parasitics (C_ISS, C_RSS, body diode, gate inductance and resistance).]
$$V_{gsACm} = V_{inACm}\, \frac{X_i}{\sqrt{X_i^2 + R_d^2}},\qquad(14)$$
This arrangement allows the peak gate voltage to be limited simply by setting the upper bound on Vgs max to its maximum rating value (rather than by means of an additional inequality constraint on the gate voltage).
The parameters of stray phenomena of the L and C component models shown in
Figure 3 are obtained as mere estimates in a simple (but nontrivial) manner depending on
the components’ principal L/C values.
Table 1 contains a complete list of the design variables for the multiobjective optimiza-
tion process. (The ranges are chosen by a user with respect to the intended practical realiza-
tion of the amplifier. The “Coverage type” field specifies whether the random generator of
the initial iterate uniformly covers the specified range on a linear or logarithmic scale.)
Table 1. Design variables for the multiobjective optimization.

No. | Symbol  | Lower Bound | Upper Bound | Unit | Coverage Type
  1 | Vgs max |           2 |          20 | V    | linear
  2 | VgsACm  |         0.4 |          12 | V    | linear
  3 | L1      |           3 |          30 | nH   | logarithmic
  4 | C1      |          10 |         300 | pF   | logarithmic
  5 | C2      |           3 |         300 | pF   | logarithmic
  6 | L2      |           3 |         100 | nH   | logarithmic
  7 | C3      |           3 |         100 | pF   | logarithmic
Table 2. Design goals for the power amplifier. (THD has the preselected values of the objective, the contour parameter.)

No. | Symbol | Type       | Direction or Bound | Optimum     | Unit
  1 | Pout1  | objective  | maximum            | 32.137826   | W
  2 | η      | objective  | maximum            | 81.680676   | %
  3 | THD    | objective  | minimum            | set to 0.25 | %
  4 | Id avg | constraint | ≤ 5                |             | A
  5 | Pdiss  | constraint | ≤ 50               |             | W
$$P_{\mathrm{out1}} = \frac{|v_{\mathrm{out1}}|^2}{2 R_L} = \frac{a_1^2 + b_1^2}{2 R_L},\qquad(16)$$

where $v_{\mathrm{out1}}$ is the phasor of the output voltage $v_{\mathrm{out}}(t)$; $a_k$ and $b_k$ are the coefficients of the $k$-th cosine and sine harmonic series components of the periodic steady-state output voltage $v_{\mathrm{out}}(t)$ of the period $T$:

$$a_k = \frac{2}{T} \int_T v_{\mathrm{out}}(t) \cos\frac{2\pi k}{T}t\ \mathrm{d}t \quad\text{and}\quad b_k = \frac{2}{T} \int_T v_{\mathrm{out}}(t) \sin\frac{2\pi k}{T}t\ \mathrm{d}t.\qquad(17)$$
The integrals over the period T are computed with the trapezoidal rule of numerical integration.
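The computation of (16) and (17) by the trapezoidal rule can be sketched as follows; the sampled waveform and load resistance in the usage are illustrative, not the paper's circuit data.

```python
import numpy as np

def _trapz(y, t):
    # Trapezoidal rule; written out explicitly to avoid depending on a
    # particular numpy version's integration helper.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def fourier_coeff(t, v, k):
    """Coefficients a_k, b_k of (17) for a waveform sampled over one period
    (t must span exactly one period, including both endpoints)."""
    T = t[-1] - t[0]
    w = 2.0 * np.pi * k / T
    a_k = 2.0 / T * _trapz(v * np.cos(w * t), t)
    b_k = 2.0 / T * _trapz(v * np.sin(w * t), t)
    return a_k, b_k

def pout1(t, v, r_load):
    """First-harmonic output power (16)."""
    a1, b1 = fourier_coeff(t, v, 1)
    return (a1 ** 2 + b1 ** 2) / (2.0 * r_load)
```

For a band-limited periodic waveform sampled uniformly over a full period, the trapezoidal rule coincides with the rectangle rule and is essentially exact, which is what makes it adequate here.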
• Power efficiency η, defined as the ratio of output power at the first harmonic frequency
and the sum of average power from DC power supply and power from the (imaginary)
input driving stage. (This definition pushes not only for lower power dissipation on
the transistor, but also for lower input power, and thus for higher power gain.)
• Total harmonic distortion THD of the output voltage vout (t) in %. Although this goal
is considered a minimized objective, it was chosen as the contour parameter and thus
changed into an inequality constraint, as explained in Section 3.
• Average drain current Id avg representing the supply current consumption of the
power amplifier, chosen to be kept below 5 A.
• Dissipated power of the LDMOS transistor Pdiss , representing the undesirable power
loss, and chosen to be kept below 50 W.
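Assuming THD is defined in the usual way from the harmonic magnitudes of (17) (the paper does not spell the formula out here), a minimal sketch:

```python
import numpy as np

def thd_percent(amplitudes):
    """THD in %, where amplitudes[k-1] is the k-th harmonic magnitude
    sqrt(a_k^2 + b_k^2): ratio of the RMS of the higher harmonics to the
    fundamental amplitude."""
    c = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.sqrt(np.sum(c[1:] ** 2)) / c[0]
```

For example, a fundamental of 4 with harmonics 0.03 and 0.04 gives a THD of 1.25%.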
[Figure 4: contour plot of power efficiency η (%) versus output power Pout1 (W) for THD levels of 0.25%, 0.35%, 0.5%, and 0.7%.]
Figure 4. Pareto front in the form of contours for chosen levels of total harmonic distortion. Dotted
curves interpolate between points obtained by computation. Close groups of multiple points are
mutually noninferior and result from subsequent fine refinement optimization reruns with modified
numeric parameters controlling the used algorithm.
[Figure 5: waveforms of the power-supply current and the output voltage versus t (ns).]
Figure 5. The current flowing out of the power supply and the output voltage for the selected
point located on the curve THD = 0.25%. The results of the optimization are Pout1 = 22.512439 W,
η = 68.948036%, THD = 0.24999837%, Id avg = 2.7067285 A, and Pdiss = 8.2391780 W.
Although the complex LU factorization is more difficult than the real one, the com-
putational effort is clearly much lower than that in the time domain. Therefore, in the
frequency domain, the parallelization is much less urgent than in the time domain.
where the constraint condition on Vout implements the requirement of 1 Vpp minimum
guaranteed output voltage span.
Both the weighted sum strategy (WSS) and goal attainment method (GAM) have been
used to solve the problem (18).
$$f_1 = 10\,(\mathrm{SWR} - 1),\quad f_2 = A^{o}_{v\mathrm{dB}} - A_{v\mathrm{dB}},\quad f_3 = \frac{I_{cc}}{1\ \mathrm{mA}},\quad f_4 = \log_{10}\frac{f_m^{o}}{f_m},\qquad(19)$$
$$f_P(x) = \sum_{i=1}^{4} w_i f_i^2(x) + c_1^2(x),\qquad(21)$$

where the choice of weight values is further restricted by the usual condition $\sum_{i=1}^{4} w_i = 1$.
To minimize the objective function f P , our special version of the Levenberg–Marquardt
method was used (which normalizes the Jacobian matrix). Iterations end after the biggest
relative change in the design variables between iterations becomes smaller than 10−4 or
when a maximum allowed number of iterations is reached.
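The scalarization (21) and the stopping test described above can be sketched as follows; the objective and penalty functions in the usage are placeholders, not the paper's circuit functions, and the weights are assumed to be already normalized.

```python
import numpy as np

def f_P(x, w, fs, c1):
    """Weighted sum scalarization (21): sum of w_i * f_i(x)^2 plus the
    squared constraint penalty c_1(x)^2."""
    return sum(wi * fi(x) ** 2 for wi, fi in zip(w, fs)) + c1(x) ** 2

def converged(x_prev, x_new, tol=1e-4):
    """Stopping test: largest relative change of the design variables
    between iterations below tol (1e-4 in the text)."""
    rel = np.abs(x_new - x_prev) / np.maximum(np.abs(x_prev), 1e-30)
    return bool(np.max(rel) < tol)
```

An outer iteration loop would evaluate `f_P` at each Levenberg–Marquardt step and stop once `converged` returns true or the iteration limit is hit.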
complemented with the original constraint penalty function $c_1$. Note that to reduce the number of degrees of freedom, we set all the reference goals $\bar z_i$ equal to the same scalar value $P = 10$. This is possible because of the special choice of the objective functions $f_i$. The resulting scalar objective function to be minimized is then composed as follows:

$$f_P(x) = \gamma^2 + \sum_{i=1}^{4} g_i^2(x) + c_1^2(x).\qquad(23)$$
No. | w1   | w2   | w3   | w4   | SWR (–) | Av (dB) | Icc (mA) | fm (MHz) | S1 (%)
  1 | 0.25 | 0.25 | 0.25 | 0.25 | 1.02    | 40.1    | 0.486    | 128      | –
  2 | 0.4  | 0.2  | 0.2  | 0.2  | 1.00    | 40.0    | 0.485    | 127      | 75
  3 | 0.2  | 0.4  | 0.2  | 0.2  | 1.02    | 40.1    | 0.479    | 126      | 75
  4 | 0.2  | 0.2  | 0.4  | 0.2  | 1.01    | 40.3    | 0.376    | 112      | 50
  5 | 0.2  | 0.2  | 0.2  | 0.4  | 1.02    | 40.0    | 0.487    | 128      | 50
  6 | 0.7  | 0.1  | 0.1  | 0.1  | 1.00    | 40.0    | 0.478    | 125      | 75
  7 | 0.1  | 0.7  | 0.1  | 0.1  | 1.07    | 40.2    | 0.481    | 128      | 50
  8 | 0.1  | 0.1  | 0.7  | 0.1  | 1.00    | 40.2    | 0.311    | 94.3     | 50
  9 | 0.1  | 0.1  | 0.1  | 0.7  | 1.03    | 40.1    | 0.488    | 128      | 75
Single-run average correlation (%)       | 62.5    | 62.5    | 50.0     | 75.0     | 62.5
No. | w1 | w2 | w3 | w4 | SWR (–) | Av (dB) | Icc (mA) | fm (MHz) | S1 (%)
  1 | 1  | 1  | 1  | 1  | 1.07    | 40.0    | 0.505    | 134      | –
  2 | 1  | 0  | 0  | 0  | 1.00    | 32.4    | 0.844    | 302      | 75
  3 | 0  | 1  | 0  | 0  | 1.18    | 40.5    | 0.477    | 129      | 75
  4 | 0  | 0  | 1  | 0  | 1.63    | 30.7    | 0.404    | 174      | 75
  5 | 0  | 0  | 0  | 1  | 1.67    | 30.7    | 1.780    | 583      | 100
  6 | 2  | 1  | 1  | 1  | 1.00    | 38.9    | 0.973    | 131      | 100
  7 | 1  | 2  | 1  | 1  | 1.18    | 40.5    | 0.477    | 129      | 75
  8 | 1  | 1  | 2  | 1  | 1.18    | 35.6    | 0.463    | 154      | 75
  9 | 1  | 1  | 1  | 2  | 1.48    | 35.5    | 0.848    | 296      | 100
 10 | 2  | 2  | 2  | 1  | 1.06    | 40.1    | 0.491    | 130      | 100
 11 | 2  | 2  | 1  | 2  | 1.05    | 40.0    | 0.491    | 130      | 25
 12 | 2  | 1  | 2  | 2  | 1.02    | 35.5    | 0.500    | 203      | 100
 13 | 1  | 2  | 2  | 2  | 1.12    | 40.0    | 0.477    | 129      | 50
Single-run average correlation (%)  | 95.5    | 86.4    | 68.2     | 72.7     | 79.2
Figure 7. Four alternative ways of graphical presentation of the Pareto front in the form of a row of
graphs of contours obtained by a piecewise linear interpolation between computed solution points.
Each alternative row has a different objective chosen as graph parameter ( f m , SWR, Av , and Icc in
the first, second, third, and fourth rows, respectively) and another one as the contour parameter.
Note that the curves have subsequently been carefully smoothed with third-order Bézier curves (implemented in MetaPost) in this final refinement of the graph.
Figure 8. Eight XMeters screenshots showing the utilization of the Intel Core i7-860 threads in the cases of one, two, three, four, five, six, seven, and eight parallel tasks of the multiobjective optimization, respectively.
11. Conclusions
The following achievements described in the paper deserve to be highlighted:
• Our modification of the a posteriori goal attainment method has been devised and presented, which leads to an asymptotically uniform coverage of the reference set by the chosen reference points used to control the exploration of the Pareto front.
• Regarding the actual computation strategy, a way of parallel processing has been chosen such that the tasks run in parallel are automatically started one after another on each of the available processor cores without any slack time, contrary to many other published procedures. This is due to the specific organizing role of the main process starting the child processes.
• The resulting graph (Figure 4) in the form of a contour plot is more informative about the actual Pareto front shape than other frequently used forms of depiction in the literature.
Author Contributions: Algorithm for the multiobjective optimization, software add-on to the simu-
lation program for performing the multiobjective optimization, concept of the nested parallelization,
flowcharts of the main and parallel processes, creation of application example, circuit diagram, J.M.;
computational core of the program for the circuit analysis, algorithm for numerical solution of the stiff
systems of nonlinear differential-algebraic equations, algorithm for steady-state analysis, contour plot
of the Pareto front, J.D. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the Czech Science Foundation under the grant no. GA20-26849S.
Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to their large extent and the grant policy.
Acknowledgments: This paper has been supported by the Czech Science Foundation under the grant
no. GA20-26849S.
Conflicts of Interest: The authors declare no conflict of interest. The funder had no role in the design
of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or
in the decision to publish the results.
References
1. de Winter, R.; Bronkhorst, P.; van Stein, B.; Bäck, T. Constrained Multi-Objective Optimization with a Limited Budget of Function
Evaluations. Memetic Comput. 2022, 14, 151–164. [CrossRef]
2. Akhtar, T.; Shoemaker, C.A. Efficient multi-objective optimization through population-based parallel surrogate search. arXiv
2019, arXiv:1903.02167.
3. Zhang, S.; Yang, F.; Yan, C.; Zhou, D.; Zeng, X. An Efficient Batch-Constrained Bayesian Optimization Approach for Analog
Circuit Synthesis via Multiobjective Acquisition Ensemble. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2021, 41, 1–14.
[CrossRef]
4. Lyu, W.; Yang, F.; Yan, C.; Zhou, D.; Zeng, X. Batch Bayesian optimization via multi-objective acquisition ensemble for automated
analog circuit design. In Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018;
pp. 3306–3314.
5. Deb, K.; Sundar, J. Reference point based multi-objective optimization using evolutionary algorithms. In Proceedings of the 8th
Annual Conference on Genetic and Evolutionary Computation, New York, NY, USA, 8–12 July 2006; pp. 635–642.
6. Gupta, S.; Tan, G. A scalable parallel implementation of evolutionary algorithms for multi-objective optimization on GPUs. In
Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 1567–1574.
7. Zhang, K.; Chen, M.; Xu, X.; Yen, G.G. Multi-objective evolution strategy for multimodal multi-objective optimization. Appl. Soft
Comput. 2021, 101, 107004. [CrossRef]
8. Kimovski, D.; Ortega, J.; Ortiz, A.; Baños, R. Parallel alternatives for evolutionary multi-objective optimization in unsupervised
feature selection. Expert Syst. Appl. 2015, 42, 4239–4252. [CrossRef]
9. Saravana Sankar, S.; Ponnambalam, S.; Gurumarimuthu, M. Scheduling flexible manufacturing systems using parallelization of
multi-objective evolutionary algorithms. Int. J. Adv. Manuf. Technol. 2006, 30, 279–285. [CrossRef]
10. Tian, Y.; Si, L.; Zhang, X.; Cheng, R.; He, C.; Tan, K.C.; Jin, Y. Evolutionary large-scale multi-objective optimization: A survey.
ACM Comput. Surv. (CSUR) 2021, 54, 1–34. [CrossRef]
11. Belaiche, L.; Kahloul, L.; Benharzallah, S. Parallel Dynamic Multi-Objective Optimization Evolutionary Algorithm. In Proceedings
of the 2021 22nd International Arab Conference on Information Technology (ACIT), Muscat, Oman, 21–23 December 2021; pp. 1–6.
12. Alba, E.; Luque, G.; Nesmachnow, S. Parallel metaheuristics: Recent advances and new trends. Int. Trans. Oper. Res. 2013,
20, 1–48. [CrossRef]
13. Benítez-Hidalgo, A.; Nebro, A.J.; García-Nieto, J.; Oregi, I.; Del Ser, J. jMetalPy: A Python framework for multi-objective
optimization with metaheuristics. Swarm Evol. Comput. 2019, 51, 100598. [CrossRef]
14. Blank, J.; Deb, K. Pymoo: Multi-objective optimization in python. IEEE Access 2020, 8, 89497–89509. [CrossRef]
15. Li, X.; Gao, B.; Bai, Z.; Pan, Y.; Gao, Y. An improved parallelized multi-objective optimization method for complex geographical
spatial sampling: AMOSA-II. ISPRS Int. J. Geo-Inf. 2020, 9, 236. [CrossRef]
16. Nemura, M. Parallelization of Multi-Objective Optimization Methods. Ph.D. Thesis, Vilniaus Universitetas, Vilnius, Lithua-
nia, 2021.
17. Miettinen, K. Nonlinear Multiobjective Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012;
Volume 12.
18. Figueira, J.R.; Liefooghe, A.; Talbi, E.G.; Wierzbicki, A.P. A parallel multiple reference point approach for multi-objective
optimization. Eur. J. Oper. Res. 2010, 205, 390–400. [CrossRef]
19. Stroessner, S.; Lucero, R.; Kravits, J.; Russell, A.; Johannes, S.; Baker, K.; Kasprzyk, J.; Popović, Z. Power Amplifier Design Using
Interactive Multi-Objective Visualization. In Proceedings of the 2022 52nd European Microwave Conference (EuMC), Milan, Italy,
27–29 September 2022; pp. 500–503.
20. Bejarano, L.A.; Espitia, H.E.; Montenegro, C.E. Clustering Analysis for the Pareto Optimal Front in Multi-Objective Optimization.
Computation 2022, 10, 37. [CrossRef]
21. Blasco, X.; Reynoso-Meza, G.; Sánchez-Pérez, E.A.; Sánchez-Pérez, J.V.; Jonard-Pérez, N. A Simple Proposal for Including
Designer Preferences in Multi-Objective Optimization Problems. Mathematics 2021, 9, 991. [CrossRef]
22. Janssen, D.M.; Pullan, W.; Liew, A.W.C. Graphics processing unit acceleration of the island model genetic algorithm using the
CUDA programming platform. Concurr. Comput. Pract. Exp. 2022, 34, e6286. [CrossRef]
23. Bharti, V.; Singhal, A.; Saxena, A.; Biswas, B.; Shukla, K.K. Parallelization of corner sort with CUDA for many-objective
optimization. In Proceedings of the Genetic and Evolutionary Computation Conference, Boston, MA, USA, 9–13 July 2022;
pp. 484–492.
24. Yin, S.; Wang, R.; Zhang, J.; Wang, Y. Asynchronous Parallel Expected Improvement Matrix-Based Constrained Multi-objective
Optimization for Analog Circuit Sizing. IEEE Trans. Circuits Syst. II Express Briefs 2022, 69, 3869–3873. [CrossRef]
25. Dobeš, J.; Míchal, J. An implementation of the circuit multiobjective optimization with the weighted sum strategy and goal
attainment method. In Proceedings of the 2011 IEEE International Symposium of Circuits and Systems (ISCAS), Rio de Janeiro,
Brazil, 15–18 May 2011; pp. 1728–1731.
26. Dobeš, J.; Míchal, J.; Biolková, V. Multiobjective Optimization for Electronic Circuit Design in Time and Frequency Domains.
Radioengineering 2013, 22, 136–152.
27. Dobeš, J.; Míchal, J. Comparing the L&S and L-Band Antenna Low-Noise Amplifiers Designed by Multi-Objective Optimization.
In Proceedings of the 2022 International Conference on IC Design and Technology (ICICDT), Hanoi, Vietnam, 21–23 September
2022; pp. 65–68.
28. Bown, G.; Geiger, G. Design and optimisation of circuits by computer. Proc. IEE 1971, 118, 649–661. [CrossRef]
29. Himmelblau, M. Nonlinear Programming; McGraw-Hill: New York, NY, USA, 1972.
30. Press, W.; Flannery, B.; Teukolsky, S.; Vetterling, W. Numerical Recipes in C: The Art of Scientific Computing; Cambridge University
Press: New York, NY, USA, 1992.
31. Richter, J. Advanced Windows; Microsoft Press: Unterschleissheim, Germany, 1996.
32. Downey, A.B. The Little Book of Semaphores; Green Tea Press: Erie, PA, USA, 2016.
33. Dobeš, J.; Míchal, J. Design of Dual-Band Antenna Low-Noise Preamplifiers by Multi-Objective Optimization and Its Verification
with More Precise Measurement Method. In Proceedings of the 2022 Asia-Pacific Microwave Conference (APMC), Yokohama,
Japan, 29 November–2 December 2022; pp. 743–745.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.