Automatic PID Tuning: A Heuristic Optimization Approach
Abstract— This paper presents a method to tune a PID controller based on three heuristic optimization techniques: Hooke and Jeeves, Nelder and Mead (simplex), and simulated annealing. The proposed method uses the Ziegler-Nichols tuning formula to obtain a first approximation of the controller, which is then optimized by minimizing a cost function defined by a performance index.

Keywords: PID controllers, optimization, heuristic searches, settling times, performance indices.

I. INTRODUCTION

One of the most widely used controllers in industry today is the proportional-integral-derivative (PID) controller. These controllers are often used because of their simple structure and robustness. A problem that can arise with these controllers is that a bad tuning may result in poor performance or even lead to instability.

Several attempts have been made to find a tuning formula for PID controllers. The Ziegler-Nichols (ZN) method (Ziegler & Nichols, 1942) gives the parameter values as a function of the ultimate gain (Ku) and period (Tu) of the system. The performance obtained by using ZN is not always the best: the formula was designed to give an overshoot of around 25%, which is why it is convenient to fine-tune the parameters to achieve an acceptable performance.

This has motivated a lot of research, and several methods for PID tuning have been proposed. A modification of the ZN coefficients was proposed in (Chien, Hrones & Reswick, 1952) to obtain a tuning method with improved damping (CHR). A refinement to the ZN formula (RZN) (Astrom et al., 1993) was made by adding a fourth parameter β. A tuning method based on a magnitude optimum frequency criterion was proposed in (Vranci, Peng & Strmenik, 1998). In recent years, tuning has also been done using Genetic Programming techniques (de Almeida et al., 2005).

In this paper we propose the use of three heuristic optimization techniques that have shown good results: Hooke and Jeeves (Hooke & Jeeves, 1961), Nelder and Mead (simplex) (Nelder & Mead, 1965), and simulated annealing (Aarts & Lenstra, 2003).

This paper is organized as follows. Section II recalls the ZN PID tuning method and presents the three optimization methods that will be used in later sections, as well as several performance indices. Section III consists of a proposed implementation of the optimization methods to find the PID parameters. Section IV shows the results of some numerical simulations done using the proposed approach. Section V concludes the paper.

II. MATHEMATICAL PRELIMINARIES

Here we recall the ZN tuning method, define the optimization methods we will use later in the paper, and present some performance indices which will be used later with the optimization methods.

ZN-PID Controller Overview

The considered PID structure is:

    PID = Kp (1 + 1/(Ti s) + Td s)    (1)

There are two Ziegler-Nichols methods to determine the parameters of a PID controller: a step response method and a frequency response method. Here we will show only the second one, which is the most commonly used.

For the frequency response method, the formulas are given as functions of the ultimate gain Ku and the ultimate period Tu. An easy way to find these parameters is to connect a controller to the plant with only proportional control action, that is, Ti = ∞ and Td = 0. Once connected this way, Kp is increased until a sustained oscillation is obtained at the output. The value of Kp required to sustain the oscillation is Ku, and the period of the oscillations is Tu. The formulas for the controller parameters are shown in Table I.

TABLE I
ZIEGLER-NICHOLS FREQUENCY RESPONSE TUNING FORMULAS.

    Controller   K        Ti       Td
    P            0.5 Ku
    PI           0.4 Ku   0.8 Tu
    PID          0.6 Ku   0.5 Tu   0.12 Tu
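As a minimal illustration of Table I, the following Python sketch maps a measured ultimate gain Ku and ultimate period Tu to controller parameters (the function name and dictionary output are our own, not part of the paper):

    def zn_frequency_response(Ku, Tu, kind="PID"):
        """Ziegler-Nichols frequency-response tuning formulas of Table I."""
        if kind == "P":
            return {"Kp": 0.5 * Ku}
        if kind == "PI":
            return {"Kp": 0.4 * Ku, "Ti": 0.8 * Tu}
        if kind == "PID":
            return {"Kp": 0.6 * Ku, "Ti": 0.5 * Tu, "Td": 0.12 * Tu}
        raise ValueError("kind must be 'P', 'PI' or 'PID'")

    # Example: zn_frequency_response(Ku=2.0, Tu=5.0)
    # -> {'Kp': 1.2, 'Ti': 2.5, 'Td': 0.6}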
Optimization Methods

We have selected three heuristic optimization methods: Hooke-Jeeves (HJ), Nelder-Mead (NM), and simulated annealing (SA). They were selected because of their simplicity and ease of implementation: they only require the ability to compute a cost function at any given point. Each method is explained next.
Hooke-Jeeves: This method basically consists of the repetition of an exploratory movement followed by a pattern movement until a stopping criterion is met.

Let f : R^n → R be a cost function and b ∈ R^n a starting point. Define ei, for i = 1, 2, ..., n, as a basis of orthonormal vectors, and ρ an initial exploratory increment.

The exploratory movement works as follows. Let b1 ← b and compute f(b1 + ρe1). If the movement from b1 to b1 + ρe1 is an improvement, then b1 ← b1 + ρe1. If it does not improve, compute f(b1 − ρe1); if this improves, then b1 ← b1 − ρe1, otherwise b1 remains the same. The next step is to repeat the previous procedure, replacing e1 with e2, then e3, until finishing with en. Now, if b1 = b, make ρ ← ρ/2. If b1 ≠ b, then do a pattern movement.

The pattern movement basically consists in moving in the same direction that just yielded an improvement in the cost function. Denoting by b2 the point obtained after the exploratory moves, this means computing b3 ← 2b2 − b. If f(b3) < f(b2), then b ← b3, otherwise b ← b2. Now return to an exploratory movement unless the stopping criterion is met.
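A compact Python sketch of this procedure is given below; the stopping rule (halving ρ until it drops below a tolerance) and the helper names are our own choices, not prescribed by the method:

    import numpy as np

    def hooke_jeeves(f, b, rho=0.5, tol=1e-6, max_iter=1000):
        """Hooke-Jeeves search: exploratory moves along the coordinate
        directions e_i, followed by pattern moves b3 = 2*b2 - b."""
        b = np.asarray(b, dtype=float)

        def explore(x):
            x, fx = x.copy(), f(x)
            for i in range(len(x)):          # try +rho, then -rho, along e_i
                for step in (rho, -rho):
                    trial = x.copy()
                    trial[i] += step
                    ft = f(trial)
                    if ft < fx:
                        x, fx = trial, ft
                        break
            return x, fx

        for _ in range(max_iter):
            b2, f2 = explore(b)
            if np.array_equal(b2, b):        # no improvement: halve the increment
                rho /= 2.0
                if rho < tol:
                    break
            else:                            # pattern move in the improving direction
                b3 = 2.0 * b2 - b
                b = b3 if f(b3) < f2 else b2
        return b

    # Example: hooke_jeeves(lambda x: (x[0] - 1)**2 + (x[1] + 2)**2, [0.0, 0.0])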
Nelder-Mead (simplex): This method is based on some basic operations over a simplex. A simplex is defined by n + 1 vertices in R^n that do not lie in a common hyperplane.

To use this method you first need to create an initial simplex. The initial simplex may be created from an initial guess x1; the other n vertices may be taken as xi+1 = x1 + ρei (f(·), ρ and ei are defined as in the previous method).

We define the worst, second worst, and best vertices as follows:

    xh := {xi | f(xi) ≥ f(xj), ∀j = 1, ..., n+1},
    xs := {xi | f(xi) ≥ f(xj), ∀j ≠ h, with i ≠ h},
    xl := {xi | f(xi) ≤ f(xj), ∀j = 1, ..., n+1}.

For ease of notation we define fi := f(xi). Letting x̄ be the centroid of all vertices except xh, the valid operations are:
• Reflection: xr = x̄ + α(x̄ − xh)
• Expansion: xe = x̄ + γ(x̄ − xh)
• Contraction: xc = xh + β(x̄ − xh)
• Reduction: xi = (xi + xl)/2

Typically α = 1, γ = 2, and β = 0.5 or −0.5.

The algorithm is as follows.
1) Order the vertices so that fl ≤ · · · ≤ fs ≤ fh, and calculate x̄.
2) Reflect; if fl ≤ fr ≤ fs, then xh ← xr and go to 1.
3) If fr < fl, then expand, else go to 5.
4) If fe ≤ fr, then xh ← xe and go to 1; else xh ← xr and go to 1.
5) If fs ≤ fr < fh, then contract with β = 0.5.
6) If fr ≥ fh, then contract with β = −0.5.
7) If fc < fh, then xh ← xc; else perform a reduction.
8) If the stopping criterion is met, then stop; else go to 1.
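A Python sketch that follows steps 1)-8) above is given next; the initial-simplex construction, the stopping criterion and the function name are our choices:

    import numpy as np

    def nelder_mead(f, x1, rho=0.5, alpha=1.0, gamma=2.0, tol=1e-8, max_iter=500):
        """Nelder-Mead simplex search following steps 1)-8) above."""
        x1 = np.asarray(x1, dtype=float)
        n = len(x1)
        simplex = [x1] + [x1 + rho * e for e in np.eye(n)]   # x1 and x1 + rho*e_i

        for _ in range(max_iter):
            simplex.sort(key=f)                     # 1) order so that f_l <= ... <= f_s <= f_h
            xl, xs, xh = simplex[0], simplex[-2], simplex[-1]
            fl, fs, fh = f(xl), f(xs), f(xh)
            if fh - fl < tol:                       # 8) stopping criterion (our choice)
                break
            xbar = np.mean(simplex[:-1], axis=0)    # centroid of all vertices except x_h

            xr = xbar + alpha * (xbar - xh)         # 2) reflection
            fr = f(xr)
            if fl <= fr <= fs:
                simplex[-1] = xr
                continue
            if fr < fl:                             # 3)-4) expansion
                xe = xbar + gamma * (xbar - xh)
                simplex[-1] = xe if f(xe) <= fr else xr
                continue
            beta = 0.5 if fr < fh else -0.5         # 5)-6) contraction
            xc = xh + beta * (xbar - xh)
            if f(xc) < fh:                          # 7) accept the contracted point ...
                simplex[-1] = xc
            else:                                   # ... or reduce every vertex toward x_l
                simplex = [(x + xl) / 2.0 for x in simplex]
        return min(simplex, key=f)

In practice one may also call scipy.optimize.minimize(f, x1, method="Nelder-Mead"), whose variant of the method differs in some details (coefficients, shrink and stopping rules).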
Simulated Annealing: This method was inspired by annealing in metallurgy, where a combination of heating and controlled cooling increases the size of the material's crystals and reduces its defects.

Any point, or combination of variables, of the search space will be called a state, s. The cost function to minimize will be the energy of the states, e = E(s).

For this method the way of changing from one state to another is decided probabilistically. The user must define the neighbors of a state; if the problem is continuous, it has to be discretized. It must also be defined how the new neighbors will be selected. All of this is specified by the user.

The user also has to define an acceptance probability function P(e, enew, T), which depends on the energies of the current and new states and on a "temperature" parameter T. This probability should be nonzero when enew > e if T ≠ 0, enabling the state to change even to a worse state. As T approaches zero, this probability decreases, and when T = 0 the probability should be zero if enew > e.

When enew ≤ e the acceptance probability can be 1, or it may also depend on the parameter T. If the user decides to make it a function of T, it should always be nonzero and should approach 1 as T approaches zero.

The annealing schedule, that is, how the temperature varies through the iterations, also has to be defined by the user.

Having defined all of the above, the algorithm works as follows.
1) Initialize s ← s0, e ← E(s).
2) sbest ← s, ebest ← e.
3) snew ← neighbor(s).
4) enew ← E(snew).
5) If enew < ebest, then sbest ← snew, ebest ← enew.
6) If P(e, enew, T) > random(), then s ← snew, e ← enew.
7) k ← k + 1.
8) If the stopping criterion is met or the scheduled time is over, then stop; else go to 3.
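The neighbor rule, acceptance probability and cooling schedule are left to the user; the Python sketch below fills them in with common choices (Gaussian perturbations, Metropolis acceptance exp(-(enew - e)/T), and geometric cooling), which are our assumptions rather than the paper's:

    import math, random

    def simulated_annealing(E, s0, T0=1.0, cooling=0.95, steps_per_T=50,
                            T_min=1e-4, scale=0.1):
        """Simulated annealing following steps 1)-8) above."""
        s = list(s0)
        e = E(s)                                    # 1) initial state and energy
        s_best, e_best = s[:], e                    # 2) best state found so far
        T = T0
        while T > T_min:                            # annealing schedule: T <- cooling * T
            for _ in range(steps_per_T):
                s_new = [x + random.gauss(0.0, scale) for x in s]   # 3) neighbor(s)
                e_new = E(s_new)                    # 4) energy of the new state
                if e_new < e_best:                  # 5) keep track of the best state
                    s_best, e_best = s_new[:], e_new
                # 6) Metropolis acceptance: always if not worse, else with prob exp(-dE/T)
                if e_new <= e or random.random() < math.exp(-(e_new - e) / T):
                    s, e = s_new, e_new
            T *= cooling                            # 7)-8) next temperature / stop when T is small
        return s_best, e_best

    # Example: simulated_annealing(lambda p: (p[0] - 1)**2 + (p[1] + 2)**2, [0.0, 0.0])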
Performance Indices

To compare two systems, performance indices will be considered as in (Dorf & Bishop, 1996). A performance index must always be positive or zero, and the best system is defined as the one that minimizes the index. The indices are calculated over a finite period of time T. To use the indices we first define the error, e(t) = yref(t) − y(t); we also define a modified version of the error:

    ê(t) = 10 e(t)   for e(t) < 0,
    ê(t) = e(t)      for e(t) ≥ 0.

Integral of time multiplied by the absolute magnitude of the error (ITAE):

    ITAE = (1/T) ∫_0^T t |e(t)| dt    (2)

Modified integral of time multiplied by the absolute magnitude of the error (MITAE):

    MITAE = (1/T) ∫_0^T t |ê(t)| dt    (3)

Integral of the square of the error (ISE):

    ISE = (1/T) ∫_0^T e(t)^2 dt    (4)
Modified integral of the square of the error (MISE):

    MISE = (1/T) ∫_0^T ê(t)^2 dt    (5)

Integral of the absolute magnitude of the error (IAE):

    IAE = (1/T) ∫_0^T |e(t)| dt    (6)
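In simulation the response is sampled, so the integrals (2)-(6) are approximated numerically. A possible Python helper (the uniform time grid and the simple rectangle-rule approximation are our assumptions):

    import numpy as np

    def performance_indices(t, y, y_ref=1.0):
        """Discrete approximations of the indices (2)-(6) for a sampled response y(t)."""
        t, y = np.asarray(t, dtype=float), np.asarray(y, dtype=float)
        dt, T = t[1] - t[0], t[-1]              # uniform sampling step, horizon
        e = y_ref - y                           # e(t) = yref(t) - y(t)
        e_hat = np.where(e < 0, 10.0 * e, e)    # modified error: negative errors weighted by 10
        return {
            "ITAE":  np.sum(t * np.abs(e)) * dt / T,
            "MITAE": np.sum(t * np.abs(e_hat)) * dt / T,
            "ISE":   np.sum(e ** 2) * dt / T,
            "MISE":  np.sum(e_hat ** 2) * dt / T,
            "IAE":   np.sum(np.abs(e)) * dt / T,
        }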
III. PROPOSED APPROACH

We consider the tuning of the PID as an optimization problem. To do this we need the following:
• Select a performance index from Section II to be used as cost function J.
• Select an optimization algorithm from Section II.
• Define the stopping criteria.
The ZN tuning formula is used to obtain a first guess of the controller parameters in the search of an optimum. Once the controller with the best settling time has been obtained from either ITAE or IAE, these parameters are used as an initial guess for a further optimization using the settling time as cost function. A sketch of the complete procedure is given after this section.
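The following Python sketch shows one way the whole procedure might be assembled for a plant given by rational transfer-function coefficients (descending powers of s). The closed-loop construction with unity feedback, the helper names, and the use of scipy's Nelder-Mead routine as the optimizer are our choices, not prescribed by the paper:

    import numpy as np
    from scipy import optimize, signal

    def closed_loop_step(params, num_p, den_p, t):
        """Unit-step response of the unity-feedback loop with C(s) = Kp(1 + 1/(Ti s) + Td s)."""
        Kp, Ti, Td = params
        num_c = Kp * np.array([Ti * Td, Ti, 1.0])    # Kp(Ti Td s^2 + Ti s + 1)
        den_c = np.array([Ti, 0.0])                  # Ti s
        num_ol = np.polymul(num_c, num_p)            # open loop C(s) G(s)
        den_ol = np.polymul(den_c, den_p)
        den_cl = np.polyadd(den_ol, num_ol)          # closed loop: CG / (1 + CG)
        _, y = signal.step(signal.TransferFunction(num_ol, den_cl), T=t)
        return y

    def itae_cost(params, num_p, den_p, t):
        e = 1.0 - closed_loop_step(params, num_p, den_p, t)     # unit reference
        return np.sum(t * np.abs(e)) * (t[1] - t[0]) / t[-1]

    def settling_time(t, y, y_ref=1.0, band=0.05):
        """Time of the last sample outside the +/-5% band around the reference."""
        outside = np.where(np.abs(y - y_ref) > band * y_ref)[0]
        return t[outside[-1]] if outside.size else t[0]

    # Example (hypothetical plant 1/(s^2 + s + 1) and hypothetical Ku, Tu from a P-only test):
    # t = np.linspace(0.0, 30.0, 3001)
    # num_p, den_p = [1.0], [1.0, 1.0, 1.0]
    # Ku, Tu = 2.0, 3.6
    # zn_guess = [0.6 * Ku, 0.5 * Tu, 0.12 * Tu]     # Table I
    # res = optimize.minimize(itae_cost, zn_guess, args=(num_p, den_p, t),
    #                         method="Nelder-Mead")

The result of the ITAE (or IAE) run can then be passed as the starting point of a second run that uses settling_time of the closed-loop response as the cost, as described above.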
IV. SIMULATION RESULTS

Plant 1, large time-delay system

The first considered system is the following:

    G1(s) = e^(-5s) / (s + 1)^2    (7)

A third-order Padé approximation was used for the time delay in the simulations, for comparison with (de Almeida et al., 2005). The PID parameters obtained with ZN are: Kp = 0.77, Ti = 13.20, and Td = 2.11. These parameters were used as the initial guess for the optimization methods.
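For instance, the delay in G1(s) can be replaced by the standard third-order [3/3] Padé rational approximation before simulating; the use of scipy.signal here is our choice, and the coefficients below are the textbook ones:

    import numpy as np
    from scipy import signal

    L = 5.0                                          # delay of G1(s) = e^(-5s)/(s + 1)^2
    # [3/3] Pade approximation: e^(-Ls) ~ (120 - 60Ls + 12(Ls)^2 - (Ls)^3) /
    #                                     (120 + 60Ls + 12(Ls)^2 + (Ls)^3)
    num_delay = [-L**3, 12 * L**2, -60 * L, 120.0]   # descending powers of s
    den_delay = [ L**3, 12 * L**2,  60 * L, 120.0]
    num_p = num_delay
    den_p = np.polymul(den_delay, [1.0, 2.0, 1.0])   # (s + 1)^2 = s^2 + 2s + 1
    G1_approx = signal.TransferFunction(num_p, den_p)
    # Cost of the ZN-tuned PID reported above, using the itae_cost sketch from Section III:
    # itae_cost([0.77, 13.20, 2.11], num_p, den_p, np.linspace(0.0, 30.0, 3001))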
[Fig. 2. Nelder-Mead tuned PID step responses for G1(s).]

Table IV and Figure 3 show the parameters and step response when SA is used to tune the PID controller for plant G1(s).

[Fig. 3. Simulated Annealing tuned PID step responses for G1(s).]

Plant 2, High-Order Process

The second considered system is:

    G2(s) = 1 / (1 + s)^8    (8)

The ZN formula yields the following PID parameters: Kp = 2.34, Ti = 10.77, and Td = 1.72. Table V shows the parameters obtained with HJ; the step responses for HJ are shown in Figure 4. The PID parameters for the GP tuning are: Kp = 0.68, Ti = 4.36, and Td = 1.47; its settling time is 11.119 seconds.

TABLE V
HOOKE-JEEVES OPTIMIZATION RESULTS FOR G2(s).

    Cost Function   Kp       Ti       Td       Settling Time (s)
    ITAE            0.8205   4.9223   1.9661   9.423
    IAE             0.9025   4.6762   2.4700   23.196
    S.TIME          0.8205   4.6118   2.4661   9.053

[Fig. 4. Hooke-Jeeves tuned PID step responses for G2(s).]

The optimum parameters obtained with the NM method are shown in Table VI; the step response for each tuned controller using NM can be seen in Figure 5.

TABLE VI
NELDER-MEAD OPTIMIZATION RESULTS FOR G2(s).

    Cost Function   Kp       Ti       Td       Settling Time (s)
    S.TIME          0.8702   5.3026   1.9688   9.088

[Fig. 5. Nelder-Mead tuned PID step responses for G2(s).]

Table VII and Figure 6 show the parameters and step responses resulting from SA optimization.

V. CONCLUSIONS

In this paper we have presented a simple method to tune a PID controller. We give three options of optimization algorithms, all of which are easy to implement. In the simulations we compared against a previously reported controller tuned with Genetic Programming, and the results show an improvement in the settling time.

The best results were obtained using ITAE and IAE as cost functions. These can be combined with a settling time optimization to further reduce the settling time.
TABLE VII
SIMULATED ANNEALING OPTIMIZATION RESULTS FOR G2(s).

    Cost Function   Kp     Ti     Td     Settling Time (s)
    ITAE            0.83   4.92   1.98   9.298
    IAE             0.89   4.73   2.43   22.699
    S.TIME          0.90   4.89   2.40   8.343
[Fig. 6. Simulated Annealing tuned PID step responses for G2(s) (GP, ITAE, IAE, and S.Time tunings vs. reference; output vs. time (s), ±5% settling band shown).]