


Congreso Anual 2010 de la Asociación de México de Control Automático. Puerto Vallarta, Jalisco, México.

Automatic PID tuning, a heuristic optimization approach
Rodrigo Abrajan-Guerrero, Luis Alejandro Marquez-Martinez
CICESE, Carr. Ensenada-Tijuana #3918, CP 22860, Ensenada B.C. MEXICO
[email protected], [email protected]
Phone: (52)-646-175-0500

Abstract— This paper presents a method to tune a PID controller based on three heuristic optimization techniques: Hooke and Jeeves, Nelder and Mead (simplex), and simulated annealing. The proposed method uses the Ziegler-Nichols tuning formula to obtain a first approximation of the controller, which is then optimized by minimizing a cost function defined by a performance index.

Keywords: PID controllers, optimization, heuristic searches, settling times, performance indices.

I. INTRODUCTION

One of the most widely used controllers in industry today is the proportional-integral-derivative (PID) controller. These controllers are often used because of their simple structure and robustness.

A problem that can arise with these controllers is that a bad tuning may result in poor performance or even lead to instability.

Several attempts have been made to find a tuning formula for PID controllers. The Ziegler-Nichols (ZN) method (Ziegler & Nichols, 1942) gives the parameter values as a function of the ultimate gain (Ku) and period (Tu) of the system. The performance obtained by using ZN is not always the best: the formula was designed to give an overshoot of around 25%, which is why it is convenient to fine-tune the parameters to achieve an acceptable performance.

This has motivated a lot of research, and several methods for PID tuning have been proposed. A modification of the ZN coefficients was proposed in (Chien, Hrones & Reswick, 1952) to obtain a tuning method with improved damping (CHR). A refinement of the ZN formula (RZN) (Astrom et al., 1993) was obtained by adding a fourth parameter β. A tuning method based on a magnitude optimum frequency criterion was proposed in (Vrancic, Peng & Strmenik, 1998). In recent years some tuning has also been done using Genetic Programming techniques (de Almeida et al., 2005).

In this paper we propose the use of three heuristic optimization techniques that have shown good results: Hooke and Jeeves (Hooke & Jeeves, 1961), Nelder and Mead (simplex) (Nelder & Mead, 1965), and simulated annealing (Aarts & Lenstra, 2003).

This paper is organized as follows. Section II recalls the ZN PID tuning method, presents the three optimization methods that will be used in later sections, and introduces several performance indices. Section III describes the proposed implementation of the optimization methods to find the PID parameters. Section IV shows the results of some numerical simulations done using the proposed approach. Section V concludes the paper.
II. MATHEMATICAL PRELIMINARIES

Here we recall the ZN tuning method, define the optimization methods we will use later in the paper, and present the performance indices that will be used with them.

ZN-PID Controller Overview

The considered PID structure is:

    PID = Kp (1 + 1/(Ti s) + Td s)    (1)

There are two Ziegler-Nichols methods to determine the parameters of a PID controller: a step response method and a frequency response method. Here we will show only the second one, which is the most commonly used.

For the frequency response method, the formulas are given as functions of the ultimate gain Ku and the ultimate period Tu. An easy way to find these parameters is to connect a controller to the plant with only proportional control action, that is, Ti = ∞ and Td = 0. Once connected this way, Kp is increased until a sustained oscillation is obtained at the output. The value of Kp required to sustain the oscillation is Ku, and the period of the oscillation is Tu. The formulas for the controller parameters are shown in Table I.

TABLE I
ZIEGLER-NICHOLS FREQUENCY RESPONSE TUNING FORMULAS.

    Controller    K        Ti       Td
    P             0.5Ku
    PI            0.4Ku    0.8Tu
    PID           0.6Ku    0.5Tu    0.12Tu
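
As an illustration of how small these rules are, the Table I formulas reduce to a few lines of code. The sketch below is only an example (Python); the Ku and Tu values used are placeholders, not measurements from the paper.

    # Minimal sketch of the Table I (frequency response) rules. Ku and Tu are
    # assumed to come from the sustained-oscillation experiment described above;
    # the values below are placeholders, not data from the paper.

    def zn_pid(Ku, Tu):
        """Ziegler-Nichols PID rule from Table I: returns (Kp, Ti, Td)."""
        return 0.6 * Ku, 0.5 * Tu, 0.12 * Tu

    def zn_pi(Ku, Tu):
        """Ziegler-Nichols PI rule from Table I: returns (Kp, Ti)."""
        return 0.4 * Ku, 0.8 * Tu

    if __name__ == "__main__":
        Ku, Tu = 3.0, 6.0            # placeholder ultimate gain and period
        Kp, Ti, Td = zn_pid(Ku, Tu)
        print(f"Kp = {Kp:.2f}, Ti = {Ti:.2f}, Td = {Td:.2f}")   # Kp = 1.80, Ti = 3.00, Td = 0.72
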
Optimization Methods

We have selected three heuristic optimization methods: Hooke-Jeeves (HJ), Nelder-Mead (NM), and simulated annealing (SA). They were selected because of their simplicity and ease of implementation; they only require the ability to evaluate a cost function at any given point. Each method is explained next.
Hooke-Jeeves: This method basically consists of the repetition of an exploratory move followed by a pattern move until a stopping criterion is met.

Let f : R^n → R be a cost function and b ∈ R^n a starting point. Define ei, i = 1, 2, ..., n, as an orthonormal basis and ρ as an initial exploratory increment.

The exploratory move works as follows. Let b1 ← b and compute f(b1 + ρe1). If the move from b1 to b1 + ρe1 is an improvement, then b1 ← b1 + ρe1. If it does not improve, compute f(b1 − ρe1); if this improves, then b1 ← b1 − ρe1, otherwise b1 remains the same. The procedure is then repeated replacing e1 by e2, then e3, and so on until en. Now, if b1 = b, make ρ ← ρ/2; if b1 ≠ b, perform a pattern move.

The pattern move consists in moving further in the direction that just yielded an improvement in the cost function. Denoting by b2 the point reached by the exploratory move, compute b3 ← 2b2 − b. If f(b3) < f(b2), then b ← b3, otherwise b ← b2. Then return to an exploratory move unless the stopping criterion is met.
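
The exploratory and pattern moves described above can be transcribed almost literally. The sketch below is one possible reading of them; the cost function f, the starting point b, and the stopping values rho_min and max_iter are example choices (the paper's own stopping criteria are given in Section III).

    import numpy as np

    # Sketch of the Hooke-Jeeves search described above. rho_min and max_iter
    # are illustrative stopping choices, not values from the paper.

    def hooke_jeeves(f, b, rho=0.5, rho_min=1e-4, max_iter=1000):
        b = np.asarray(b, dtype=float)
        n = b.size

        def explore(base):
            """Exploratory move: probe +/- rho along each coordinate direction."""
            x = base.copy()
            for i in range(n):
                for step in (rho, -rho):
                    trial = x.copy()
                    trial[i] += step
                    if f(trial) < f(x):        # keep the first improving probe
                        x = trial
                        break
            return x

        for _ in range(max_iter):
            b1 = explore(b)
            if np.allclose(b1, b):             # no improvement: halve the increment
                rho *= 0.5
                if rho < rho_min:
                    break
            else:                              # pattern move along the improving direction
                b3 = 2.0 * b1 - b
                b = b3 if f(b3) < f(b1) else b1
        return b

For the PID problem, b would be the vector (Kp, Ti, Td) returned by the ZN formulas and f the chosen performance index evaluated on a simulated closed-loop step response.
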
Nelder-Mead (simplex): This method is based on a few basic operations over a simplex. A simplex is defined by n + 1 vertices in R^n that do not belong to a common hyperplane.

To use this method you first need to create an initial simplex. The initial simplex may be created from an initial guess x1; the other n vertices may be taken as xi+1 = x1 + ρei (f(·), ρ and ei are defined as in the previous method).

We define the worst, second worst, and best vertices as follows: xh := {xi | f(xi) ≥ f(xj), ∀ j = 1, ..., n + 1}, xs := {xi | f(xi) ≥ f(xj), ∀ j ≠ h, i ≠ h}, xl := {xi | f(xi) ≤ f(xj), ∀ j = 1, ..., n + 1}. For ease of notation we define fi := f(xi). With x̄ the centroid of all vertices except xh, the valid operations are:

  • Reflection: xr = x̄ + α(x̄ − xh)
  • Expansion: xe = x̄ + γ(x̄ − xh)
  • Contraction: xc = xh + β(x̄ − xh)
  • Reduction: xi ← (xi + xl)/2

Typically α = 1, γ = 2, and β = 0.5 or −0.5.

The algorithm is as follows.
  1) Order the vertices by cost value, so that f(xl) ≤ · · · ≤ f(xs) ≤ f(xh), and calculate x̄.
  2) Reflect; if fl ≤ fr ≤ fs, then xh ← xr and go to 1.
  3) If fr < fl, then expand, else go to 5.
  4) If fe ≤ fr, then xh ← xe and go to 1, else xh ← xr and go to 1.
  5) If fs ≤ fr < fh, then contract with β = 0.5.
  6) If fr ≥ fh, then contract with β = −0.5.
  7) If fc < fh, then xh ← xc, else apply the reduction.
  8) If the stopping criterion is met, then stop, else go to 1.
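
A compact sketch of these steps, with the typical coefficients α = 1, γ = 2, and β = ±0.5, is given below. The construction of the initial simplex from x1 and ρ follows the description above; the iteration cap is an illustrative stopping choice, and ties between the comparison cases are resolved in the simplest way.

    import numpy as np

    # Sketch of the simplex steps listed above, with alpha = 1, gamma = 2 and
    # beta = +/-0.5. max_iter is an illustrative stopping choice.

    def nelder_mead(f, x1, rho=0.5, max_iter=500):
        x1 = np.asarray(x1, dtype=float)
        n = x1.size
        simplex = [x1.copy()] + [x1 + rho * np.eye(n)[i] for i in range(n)]

        for _ in range(max_iter):
            simplex.sort(key=f)                      # order the vertices: x_l, ..., x_s, x_h
            xl, xh = simplex[0], simplex[-1]
            fl, fs, fh = f(xl), f(simplex[-2]), f(xh)
            xbar = np.mean(simplex[:-1], axis=0)     # centroid of all vertices except x_h

            xr = xbar + 1.0 * (xbar - xh)            # reflection
            fr = f(xr)
            if fl <= fr <= fs:
                simplex[-1] = xr
            elif fr < fl:                            # expansion
                xe = xbar + 2.0 * (xbar - xh)
                simplex[-1] = xe if f(xe) <= fr else xr
            else:                                    # contraction with beta = +/-0.5
                beta = 0.5 if fr < fh else -0.5
                xc = xh + beta * (xbar - xh)
                if f(xc) < fh:
                    simplex[-1] = xc
                else:                                # reduction toward the best vertex
                    simplex = [(x + xl) / 2.0 for x in simplex]
        simplex.sort(key=f)
        return simplex[0]
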
Simulated Annealing: This method was inspired by annealing in metallurgy, where a combination of heating and controlled cooling increases the size of the material's crystals and reduces its defects.

Any point or combination of variables of the search space will be called a state, s. The cost function to minimize is treated as the energy of the states, e = E(s).

For this method, the way of changing from one state to another is decided probabilistically. The user must define the neighbors of a state (if the problem is continuous, it has to be discretized) and how new neighbors will be selected.

The user also has to define an acceptance probability function P(e, enew, T), which depends on the energies of the current and new states and on a "temperature" parameter. This probability should be nonzero when enew > e if T ≠ 0, enabling the state to change even to a worse state. As T approaches zero, this probability decreases; when T = 0, the probability should be zero if enew > e.

When enew ≤ e the acceptance probability can be 1, or it may also depend on the parameter T. If the user decides to make it a function of T, it should always be nonzero, and as T approaches zero the acceptance probability should approach 1.

The annealing schedule, that is, how the temperature varies through the iterations, also has to be defined by the user.

Having defined all of this, the algorithm works as follows.
  1) Initialize s ← s0, e ← E(s).
  2) sbest ← s, ebest ← e.
  3) snew ← neighbor(s).
  4) enew ← E(snew).
  5) If enew < ebest, then sbest ← snew, ebest ← enew.
  6) If P(e, enew, T) > random(), then s ← snew, e ← enew.
  7) k ← k + 1.
  8) If the stopping criterion is met or the scheduled time is over, then stop; else go to 3.
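
The loop below follows steps 1)-8) directly. The neighbor rule, the exponential (Metropolis-style) acceptance probability, and the geometric cooling schedule are common example choices supplied here for concreteness; as stated above, the paper leaves all three to the user.

    import math
    import random

    # Sketch of the annealing loop in steps 1)-8). The neighbor rule, the
    # exponential acceptance probability and the geometric cooling schedule are
    # example choices; the paper leaves all three to the user.

    def simulated_annealing(E, s0, neighbor, T0=1.0, cooling=0.95, max_iter=2000):
        s, e = s0, E(s0)                       # 1) initial state and its energy
        s_best, e_best = s, e                  # 2) best state found so far
        T = T0
        for _ in range(max_iter):              # 7)-8) iterate until the schedule ends
            s_new = neighbor(s)                # 3) candidate state
            e_new = E(s_new)                   # 4) its energy
            if e_new < e_best:                 # 5) keep track of the best state
                s_best, e_best = s_new, e_new
            # 6) accept with probability 1 if the move improves, exp(-(e_new - e)/T) otherwise
            accept = 1.0 if e_new <= e else math.exp(-(e_new - e) / max(T, 1e-12))
            if accept > random.random():
                s, e = s_new, e_new
            T *= cooling                       # annealing schedule
        return s_best, e_best
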
Performance Indices

To compare two systems, performance indices will be considered as in (Dorf & Bishop, 1996). A performance index must always be positive or zero, and the best system is defined as the one that minimizes the index. The indices are calculated over a finite period of time T. For the indices we first define the error, e(t) = yref(t) − y(t); we also define a modified version of the error:

    ê(t) = 10 e(t)   for e(t) < 0
    ê(t) = e(t)      for e(t) ≥ 0

Integral of time multiplied by the absolute magnitude of the error (ITAE):

    ITAE = (1/T) ∫_0^T t |e(t)| dt    (2)

Modified integral of time multiplied by the absolute magnitude of the error (MITAE):

    MITAE = (1/T) ∫_0^T t |ê(t)| dt    (3)

Integral of the square of the error (ISE):

    ISE = (1/T) ∫_0^T e²(t) dt    (4)

Modified integral of the square of the error (MISE):

    MISE = (1/T) ∫_0^T ê²(t) dt    (5)

Integral of the absolute magnitude of the error (IAE):

    IAE = (1/T) ∫_0^T |e(t)| dt    (6)
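
For a sampled response, the indices (2)-(6) reduce to simple numerical integrations. The sketch below assumes a uniformly sampled error signal and a trapezoidal rule; the paper does not state how the integrals were evaluated.

    import numpy as np

    # Sketch of the indices (2)-(6) for a sampled step response; the trapezoidal
    # discretization is an assumption, not taken from the paper.

    def performance_indices(t, y, yref=1.0):
        e = yref - y                                   # error signal e(t)
        e_hat = np.where(e < 0, 10.0 * e, e)           # modified error (penalizes overshoot)
        T = t[-1]
        integral = lambda x: np.trapz(x, t) / T
        return {
            "ITAE":  integral(t * np.abs(e)),
            "MITAE": integral(t * np.abs(e_hat)),
            "ISE":   integral(e ** 2),
            "MISE":  integral(e_hat ** 2),
            "IAE":   integral(np.abs(e)),
        }
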
III. PROPOSED APPROACH

We consider the tuning of the PID as an optimization problem. To do this we need the following:

  • Select a performance index from Section II to be used as cost function J.
  • Select an optimization algorithm from Section II.
  • Define the stopping criteria.
  • Use the ZN formulas from Section II to give an initial guess of the controller parameters in the search for an optimum.

The algorithms HJ and NM will stop and return an optimum if any of the following occurs:

  • Ji < Jmax, where Ji is the value of the cost function at the i-th iteration and Jmax is a good-enough cost function value defined by the user.
  • Ji−k − Ji−k+1 ≤ ε ∀ k = 1, 2, ..., nε. This means that J has not improved (by at least ε) for the last nε iterations (nε is user defined).

The SA algorithm will stop if Ji < Jmax or if it reaches a maximum number of iterations defined by the user.
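
Putting the pieces together, the proposed procedure can be summarized as in the sketch below, which chains the ZN initial guess, a performance-index cost, and one of the optimizers, followed by the settling-time pass used in the simulations of Section IV. It reuses the zn_pid(), performance_indices(), and optimizer sketches above; simulate_step() is a placeholder for any routine that returns the sampled closed-loop step response for given PID parameters.

    import numpy as np

    # Sketch of the proposed procedure, reusing the zn_pid(), performance_indices()
    # and optimizer sketches above. simulate_step() is a placeholder for any routine
    # returning the sampled closed-loop step response (t, y) for given PID values.

    def settling_time(t, y, yref=1.0, band=0.05):
        """Last instant at which the response is outside the +/-5% band."""
        outside = np.abs(y - yref) > band * yref
        return t[outside][-1] if outside.any() else t[0]

    def tune_pid(simulate_step, Ku, Tu, optimizer=hooke_jeeves, index="ITAE"):
        p0 = list(zn_pid(Ku, Tu))                      # initial guess from Ziegler-Nichols

        def index_cost(p):                             # performance index as cost function
            t, y = simulate_step(*p)
            return performance_indices(t, y)[index]

        def st_cost(p):                                # settling time as cost function
            t, y = simulate_step(*p)
            return settling_time(t, y)

        p1 = optimizer(index_cost, p0)                 # first pass: ITAE (or IAE) cost
        return optimizer(st_cost, p1)                  # second pass: settling-time cost

With simulated_annealing, a small wrapper supplying a neighbor rule would be used in place of optimizer; the two-pass structure follows the procedure reported in the simulations and conclusions below.
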
IV. NUMERICAL SIMULATIONS

The three optimization algorithms from Section II were implemented in Matlab. The algorithms were used to tune a PID controller for two nominal plants taken from (de Almeida et al., 2005), where a Genetic Programming (GP) algorithm is used to tune the PID controller. The simulations were done using a large time-delay plant and a high-order process. In each case the optimization was carried out with the three algorithms and all the performance indices.

Plant 1, large time-delay system

The first considered system is the following:

    G1(s) = e^(−5s) / (s + 1)²    (7)

A third-order Pade approximation was used for the time delay in the simulations, for comparison with (de Almeida et al., 2005). The PID parameters obtained with ZN are Kp = 0.77, Ti = 13.20, and Td = 2.11. These parameters were used as the initial guess for the optimization methods.
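
The paper's simulations were carried out in Matlab. Purely as an illustration, an equivalent closed-loop setup for G1(s) can be written with the python-control package, as sketched below; the filtered derivative (constant N) is an added assumption, since the PID structure (1) uses a pure derivative, and the time grid and filter value are arbitrary choices.

    import numpy as np
    import control as ct

    # Illustrative reconstruction of the Plant 1 setup (the paper used Matlab):
    # PID structure (1) with a filtered derivative (the filter constant N is an
    # added assumption), a third-order Pade approximation of the 5 s delay, and
    # the settling time read from a +/-5% band around the reference.

    def g1_step_response(Kp, Ti, Td, N=100.0, t_end=30.0):
        pid = ct.tf([Kp], [1]) + ct.tf([Kp], [Ti, 0]) + ct.tf([Kp * Td, 0], [Td / N, 1])
        num_d, den_d = ct.pade(5.0, 3)                       # third-order Pade of e^(-5s)
        plant = ct.tf([1], [1, 2, 1]) * ct.tf(num_d, den_d)  # e^(-5s) / (s + 1)^2
        loop = ct.feedback(pid * plant, 1)                   # unity-feedback closed loop
        resp = ct.step_response(loop, np.linspace(0.0, t_end, 3000))
        return resp.time, np.squeeze(resp.outputs)

    if __name__ == "__main__":
        t, y = g1_step_response(0.77, 13.20, 2.11)           # ZN parameters quoted above
        outside = np.abs(y - 1.0) > 0.05                     # +/-5% band around the reference
        ts = t[outside][-1] if outside.any() else t[0]
        print(f"settling time with the ZN parameters ~ {ts:.2f} s")
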
Table II shows the optimum parameters obtained by the HJ algorithm for each cost function, as well as the settling time corresponding to each parameter configuration. Figure 1 shows the step response of the system with the controller tuned with each cost function; it also includes the one generated by GP from (de Almeida et al., 2005) for comparison. The settling time for GP is 10.697 seconds; its PID parameters are Kp = 0.49, Ti = 3.56, and Td = 0.99.

Fig. 1. Hooke-Jeeves tuned PID step responses for G1(s)

TABLE II
HOOKE-JEEVES OPTIMIZATION RESULTS FOR G1(s).

    Cost Function    Kp       Ti       Td       Settling Time
    ITAE             0.5552   3.6482   1.4411    9.461
    MITAE            0.6450   5.0594   1.2428   20.167
    ISE              0.6255   3.2547   2.3014   22.129
    MISE             0.6216   4.0633   1.7623   17.029
    IAE              0.5864   3.5496   1.7409    8.617
    S.TIME           0.5864   3.5105   1.7721    8.562

The objective is to minimize the settling time; however, using this parameter directly as the cost function leads to a local minimum, which is improved if other cost functions are used. As shown in Table II, the best results were obtained using ITAE and IAE, which is why we show results for the five performance indices only in the first simulation and, in the following simulations, report only ITAE and IAE. Once the best settling time is obtained from either ITAE or IAE, these parameters are used as the initial guess for an optimization using the settling time as cost function.

The results of optimization with NM are shown in Table III and Figure 2.

TABLE III
NELDER-MEAD OPTIMIZATION RESULTS FOR G1(s).

    Cost Function    Kp       Ti       Td       Settling Time
    ITAE             0.4805   3.6542   0.8447   11.023
    IAE              0.6259   4.3192   1.0701   20.4980
    S.TIME           0.4961   3.6542   0.8447   10.604
Fig. 2. Nelder-Mead tuned PID step responses for G1(s)

Table IV and Figure 3 show the parameters and the step response when SA is used to tune the PID controller for plant G1(s).

TABLE IV
SIMULATED ANNEALING OPTIMIZATION RESULTS FOR G1(s).

    Cost Function    Kp      Ti      Td      Settling Time
    ITAE             0.56    3.63    1.49     9.325
    IAE              0.59    3.51    1.78    20.901
    S.TIME           0.58    3.45    1.76     8.625

Fig. 3. Simulated Annealing tuned PID step responses for G1(s)

Plant 2, High-Order Process

The second considered system is:

    G2(s) = 1 / (1 + s)^8    (8)

The ZN formula yields the following PID parameters: Kp = 2.34, Ti = 10.77, and Td = 1.72.

Table V shows the parameters obtained with HJ, and the corresponding step responses are shown in Figure 4. The PID parameters for the GP tuning are Kp = 0.68, Ti = 4.36, and Td = 1.47; its settling time is 11.119 seconds.

TABLE V
HOOKE-JEEVES OPTIMIZATION RESULTS FOR G2(s).

    Cost Function    Kp       Ti       Td       Settling Time
    ITAE             0.8205   4.9223   1.9661    9.423
    IAE              0.9025   4.6762   2.4700   23.196
    S.TIME           0.8205   4.6118   2.4661    9.053

Fig. 4. Hooke-Jeeves tuned PID step responses for G2(s)

The optimum parameters obtained with the NM method are shown in Table VI; the step response for each controller tuned using NM can be seen in Figure 5.

TABLE VI
NELDER-MEAD OPTIMIZATION RESULTS FOR G2(s).

    Cost Function    Kp       Ti       Td       Settling Time
    ITAE             0.8141   5.1203   1.6723    9.679
    IAE              0.9376   5.3935   1.9002   18.613
    S.TIME           0.8702   5.3026   1.9688    9.088

Fig. 5. Nelder-Mead tuned PID step responses for G2(s)

Table VII and Figure 6 show the parameters and step responses resulting from SA optimization.
TABLE VII
SIMULATED ANNEALING OPTIMIZATION RESULTS FOR G2(s).

    Cost Function    Kp      Ti      Td      Settling Time
    ITAE             0.83    4.92    1.98     9.298
    IAE              0.89    4.73    2.43    22.699
    S.TIME           0.90    4.89    2.40     8.343

Fig. 6. Simulated Annealing tuned PID step responses for G2(s)

V. CONCLUSIONS

In this paper we have presented a simple method to tune a PID controller. We give three options of optimization algorithms, all of which are easy to implement. In the simulations we compared the results against a previously reported controller tuned with Genetic Programming, and the results show an improvement in the settling time.

The best results were obtained using ITAE and IAE as cost functions. These can be combined with a settling-time cost function if a small settling time is what we are interested in.

To minimize the settling time, we found that the best results are obtained by starting from a ZN-tuned PID, performing an optimization using ITAE or IAE as the cost function, and then using the result as the starting point for an optimization of the settling-time-based cost function.

Even though all of the simulations were carried out on linear plants, the same method can be used for nonlinear systems and with other types of controllers.
REFERENCES

Aarts, E. & Lenstra, J. K. (2003). Simulated annealing. In Local Search in Combinatorial Optimization, Princeton University Press, 91–120.
Astrom, K. J., Hagglund, T., Hang, C. C. & Ho, W. K. (1993). Automatic tuning and adaptation for PID controller - a survey. Control Engineering Practice 4, 699–714.
Chien, Hrones & Reswick (1952). On the automatic tuning of generalized passive systems. Transactions ASME 74, 175–185.
de Almeida, G. M. et al. (2005). Application of Genetic Programming for Fine Tuning PID Controller Parameters Designed Through Ziegler-Nichols Technique. In Advances in Natural Computation, 313–322.
Dorf, R. C. & Bishop, R. H. (1996). Design Using Performance Indices. In Levine, W. S. (ed.), The Control Handbook, CRC Press, 169–172.
Hooke, R. & Jeeves, T. A. (1961). Direct Search Solution of Numerical and Statistical Problems. J. ACM 8(2), 212–229.
Nelder, J. A. & Mead, R. (1965). A simplex method for function minimization. Computer Journal 7, 308–313.
Vrancic, D., Peng, Y. & Strmenik, S. (1998). A new PID controller tuning method based on multiple integrations. Control Engineering Practice 7, 623–633.
Ziegler, J. G. & Nichols, N. B. (1942). Optimum settings for automatic controllers. Transactions ASME 62, 759–768.

