PID Controllers
a constant, growing, or decaying sinusoid. A human would not do this because we are
adaptive controllers, learning from the process history, but PID controllers do not
have the ability to learn and must be set up correctly. Selecting the correct gains for
effective control is known as tuning the controller.
If a controller starts from a stable state at zero error (PV = SP), then further changes
by the controller will be in response to changes in other measured or unmeasured
inputs that affect the process, and hence the PV. Variables other than the MV that
affect the process are known as disturbances. Generally, controllers are used to reject
disturbances and/or to implement setpoint changes.
Changes in feed water temperature constitute a disturbance to the shower process.
In theory, a controller can be used to control any process which has a measurable
output (PV), a known ideal value for that output (SP) and an input to the process
(MV) that will affect the relevant PV. Controllers are used in industry to regulate
temperature, pressure, flow rate, chemical composition, level in a tank containing
fluid, speed and practically every other variable for which a measurement exists.
Automobile cruise control is an example of a process which utilizes automated
control.
Due to their long history, simplicity, well grounded theory and simple setup and
maintenance requirements, PID controllers are the controllers of choice for many of
these applications.
The controller output is the sum of the contributions of the three terms:

MV(t) = Pout + Iout + Dout

where Pout, Iout, and Dout are the contributions to the output from the PID controller
from each of the three terms, defined as

Pout = Kp e(t)
Iout = Ki ∫0..t e(τ) dτ
Dout = Kd de(t)/dt

where Kp, Ki, and Kd are the proportional, integral, and derivative gains and
e(t) = SP - PV(t) is the error.
The integral term (when added to the proportional term) accelerates the movement of
the process towards setpoint and eliminates the residual steady-state error that occurs
with a proportional only controller. However, since the integral term is responding to
accumulated errors from the past, it can cause the present value to overshoot the
setpoint value (cross over the setpoint and then create a deviation in the other
direction). For further notes regarding integral gain tuning and controller stability, see
the section on Loop Tuning.
The derivative term slows the rate of change of the controller output and this effect is
most noticeable close to the controller setpoint. Hence, derivative control is used to
reduce the magnitude of the overshoot produced by the integral component and
improve the combined controller-process stability. However, differentiation of a signal
amplifies noise and thus this term in the controller is highly sensitive to noise in the
error term, and can cause a process to become unstable if the noise and the derivative
gain are sufficiently large.
Summary
The outputs of the three terms, the proportional, the integral and the derivative terms,
are summed to calculate the output of the PID controller. Defining u(t) as the
controller output, the final form of the PID algorithm is:

u(t) = Kp e(t) + Ki ∫0..t e(τ) dτ + Kd de(t)/dt
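A minimal sketch of how this formula is typically applied in discrete time (the class
name, rectangular integration, and backward-difference derivative are illustrative
choices, not taken from the text above):

    # Minimal discrete PID sketch: u = Kp*e + Ki*integral(e) + Kd*de/dt,
    # using rectangular integration and a backward difference for the derivative.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0      # running integral of the error
            self.prev_error = 0.0    # error from the previous sample

        def update(self, setpoint, pv):
            error = setpoint - pv                              # e(t) = SP - PV(t)
            self.integral += error * self.dt                   # approximate integral term
            derivative = (error - self.prev_error) / self.dt   # approximate de/dt
            self.prev_error = error
            return (self.kp * error
                    + self.ki * self.integral
                    + self.kd * derivative)

    # Example: one update of a controller sampled every 0.1 s
    controller = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
    print(controller.update(setpoint=50.0, pv=42.0))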
If the PID controller parameters (the gains of the proportional, integral and derivative
terms) are chosen incorrectly, the controlled process input can be unstable, i.e. its
output diverges, with or without oscillation, and is limited only by saturation or
mechanical breakage. Tuning a control loop is the adjustment of its control
parameters (gain/proportional band, integral gain/reset, derivative gain/rate) to the
optimum values for the desired control response.
The optimum behavior on a process change or setpoint change varies depending on
the application. Some processes must not allow an overshoot of the process variable
beyond the setpoint if, for example, this would be unsafe. Other processes must
minimize the energy expended in reaching a new setpoint. Generally, stability of
response (the reverse of instability) is required and the process must not oscillate for
any combination of process conditions and setpoints. Some processes have a degree
of non-linearity and so parameters that work well at full-load conditions don't work
when the process is starting up from no-load. This section describes some traditional
manual methods for loop tuning.
There are several methods for tuning a PID loop. The most effective methods
generally involve the development of some form of process model, then choosing P, I,
and D based on the dynamic model parameters. Manual tuning methods can be
relatively inefficient.
The choice of method will depend largely on whether or not the loop can be taken
"offline" for tuning, and the response time of the system. If the system can be taken
offline, the best tuning method often involves subjecting the system to a step change
in input, measuring the output as a function of time, and using this response to
determine the control parameters.
Choosing a Tuning Method

Manual Tuning
  Advantages: no math required; can be done online.
  Disadvantages: requires experienced personnel.
Ziegler-Nichols
  Advantages: proven method; can be done online.
  Disadvantages: upsets the process; involves some trial and error; gives very aggressive tuning.
Software Tools
  Advantages: consistent tuning; can be done online or offline; may include valve and sensor analysis and allow simulation before downloading.
  Disadvantages: some cost and training involved.
Cohen-Coon
  Advantages: based on good process models.
  Disadvantages: requires some math; offline method; only good for first-order processes.
If the system must remain online, one tuning method is to first set the I and D values
to zero. Increase the P until the output of the loop oscillates; then the P should be
set to approximately half of that value for a "quarter amplitude decay" type
response. Then increase I until any offset is corrected in sufficient time for the
process. However, too much I will cause instability. Finally, increase D, if required,
until the loop is acceptably quick to reach its reference after a load disturbance.
However, too much D will cause excessive response and overshoot. A fast PID loop
tuning usually overshoots slightly to reach the setpoint more quickly; however, some
systems cannot accept overshoot, in which case an "over-damped" closed-loop system
is required, which will require a P setting significantly less than half that of the P
setting causing oscillation.
Effects of increasing parameters

Parameter   Rise Time        Overshoot   Settling Time   Steady-State Error
Kp          Decrease         Increase    Small change    Decrease
Ki          Decrease         Increase    Increase        Eliminate
Kd          Small decrease   Decrease    Decrease        None
Ziegler-Nichols method (Kc is the gain at which the loop first oscillates, Pc is the
period of that oscillation)

Control Type   Kp        Ki            Kd
P              0.5 Kc
PI             0.45 Kc   1.2 Kp / Pc
PID            0.6 Kc    2 Kp / Pc     Kp Pc / 8
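These relationships translate directly into a small calculation. The sketch below
assumes the critical gain Kc and oscillation period Pc have already been measured;
the function name and example numbers are illustrative:

    def ziegler_nichols_closed_loop(kc, pc, control_type="PID"):
        """Return (Kp, Ki, Kd) from the critical gain Kc and oscillation period Pc,
        following the classic Ziegler-Nichols table for P, PI, or PID control."""
        if control_type == "P":
            return 0.5 * kc, 0.0, 0.0
        if control_type == "PI":
            kp = 0.45 * kc
            return kp, 1.2 * kp / pc, 0.0
        kp = 0.6 * kc                          # full PID
        return kp, 2.0 * kp / pc, kp * pc / 8.0

    print(ziegler_nichols_closed_loop(kc=4.0, pc=10.0))   # e.g. Kc = 4, Pc = 10 s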
Most modern industrial facilities no longer tune loops using the manual calculation
methods shown above. Instead, PID tuning and loop optimization software are used to
ensure consistent results. These software packages will gather the data, develop
process models, and suggest optimal tuning. Some software packages can even
develop tuning by gathering data from reference changes.
Mathematical PID loop tuning induces an impulse in the system, and then uses the
controlled system's frequency response to design the PID loop values. In loops with
response times of several minutes, mathematical loop tuning is recommended,
because trial and error can literally take days just to find a stable set of loop values.
Optimal values are harder to find. Some digital loop controllers offer a self-tuning
feature in which very small setpoint changes are sent to the process, allowing the
controller itself to calculate optimal tuning values.
Other formulas are available to tune the loop according to different performance
criteria.
Many PID loops control a mechanical device (for example, a valve). Mechanical
maintenance can be a major cost and wear leads to control degradation in the form of
either stiction or a deadband in the mechanical response to an input signal. The rate of
mechanical wear is mainly a function of how often a device is activated to make a
change. Where wear is a significant concern, the PID loop may have an output
deadband to reduce the frequency of activation of the output (valve). This is
accomplished by modifying the controller to hold its output steady if the change
would be small (within the defined deadband range). The calculated output must leave
the deadband before the actual output will change.
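One possible way to implement such an output deadband (a sketch; the deadband width,
names, and the comparison against the last value actually sent are assumptions for
illustration):

    def apply_output_deadband(last_sent_output, calculated_output, deadband):
        """Hold the output steady unless the newly calculated value has moved
        outside the deadband around the value last sent to the valve."""
        if abs(calculated_output - last_sent_output) <= deadband:
            return last_sent_output    # change is small: do not move the valve
        return calculated_output       # change is large enough: pass it through

    # Example: a 0.5% deadband suppresses a 0.3% move but passes a 2% move
    print(apply_output_deadband(40.0, 40.3, 0.5))   # -> 40.0
    print(apply_output_deadband(40.0, 42.0, 0.5))   # -> 42.0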
The proportional and derivative terms can produce excessive movement in the output
when a system is subjected to an instantaneous "step" increase in the error, such as a
large setpoint change. In the case of the derivative term, this is due to taking the
derivative of the error, which is very large in the case of an instantaneous step change.
As a result, some PID algorithms incorporate the following modifications:
derivative of output In this case the PID controller measures the derivative of
the output quantity, rather than the derivative of the error. The output is always
continuous (i.e., never has a step change). For this to be effective, the
derivative of the output must have the same sign as the derivative of the error (a
sketch of this modification appears after this list).
setpoint ramping In this modification, the setpoint is gradually moved from
its old value to a newly specified value using a linear or first order differential
ramp function. This avoids the discontinuity present in a simple step change.
setpoint weighting Setpoint weighting uses different multipliers for the error
depending on which element of the controller it is used in. The error in the
integral term must be the true control error to avoid steady-state control errors.
This affects the controller's setpoint response. These parameters do not affect
the response to load disturbances and measurement noise.
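As referenced above, here is a sketch of the derivative-of-output modification; the
sampling interval, state dictionary, and gains are illustrative assumptions:

    def pid_derivative_on_measurement(setpoint, pv, state, kp, ki, kd, dt):
        """One PID update that differentiates the measurement (PV) instead of the
        error, so a step change in the setpoint produces no derivative kick."""
        error = setpoint - pv
        state["integral"] += error * dt
        d_pv = (pv - state["prev_pv"]) / dt    # derivative of the output (PV)
        state["prev_pv"] = pv
        # Minus sign: with a constant setpoint, a rising PV means a falling error.
        return kp * error + ki * state["integral"] - kd * d_pv

    state = {"integral": 0.0, "prev_pv": 42.0}
    print(pid_derivative_on_measurement(50.0, 42.5, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1))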
In the early history of automatic process control the PID controller was implemented
as a mechanical device. These mechanical controllers used a lever, spring and a mass
and were often energized by compressed air. These pneumatic controllers were once
the industry standard.
Electronic analog controllers can be made from a solid-state or tube amplifier, a
capacitor and a resistance. Electronic analog PID control loops were often found
within more complex electronic systems, for example, the head positioning of a disk
drive, the power conditioning of a power supply, or even the movement-detection
circuit of a modern seismometer. Nowadays, electronic controllers have largely been
replaced by digital controllers implemented with microcontrollers or FPGAs.
Most modern PID controllers in industry are implemented in software in
programmable logic controllers (PLCs) or as a panel-mounted digital controller.
Software implementations have the advantages that they are relatively cheap and are
flexible with respect to the implementation of the PID algorithm.
In the standard form, the gain Kp is applied to the integral and derivative terms as
well as the proportional term:

u(t) = Kp ( e(t) + (1/Ti) ∫0..t e(τ) dτ + Td de(t)/dt )

where
Ti is the Integral Time
Td is the Derivative Time
In the ideal parallel form, shown in the Controller Theory section,
the gain parameters are related to the parameters of the standard form through
Ki = Kp / Ti and Kd = Kp Td. This parallel form, where the parameters are treated as
simple gains, is the most general and flexible form. However, it is also the form where
the parameters have the least physical interpretation and is generally reserved for
theoretical treatment of the PID controller. The "standard" form, despite being slightly
more complex mathematically, is more common in industry.
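A short sketch of the conversion just described, turning standard-form settings
(Kp, Ti, Td) into parallel-form gains; the example numbers are arbitrary:

    def standard_to_parallel(kp, ti, td):
        """Convert standard-form PID settings (gain Kp, integral time Ti,
        derivative time Td) to parallel-form gains: Ki = Kp/Ti, Kd = Kp*Td."""
        return kp, kp / ti, kp * td

    kp, ki, kd = standard_to_parallel(kp=2.0, ti=8.0, td=0.5)
    print(kp, ki, kd)   # 2.0 0.25 1.0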
2. The sum of errors over time. Give us a minute and we will show why simply
looking at the absolute error (proportional) only is a problem. The sum of
errors over time is important and is called the "integral" (I) component of the
PID controller. Every time we run the PID algorithm we add the latest error to
the sum of errors. In other words Sum of Errors = Error1 + Error2 + Error3 +
Error4 + ...
3. The dead time. Dead Time refers to the delay between making a change in the
output and seeing the change reflected in the PV. The classical example is
getting your oven at the right temperature. When you first turn on the heat, it
takes a while for the oven to "heat up". This is the dead time. If you set an
initial temperature, wait for the oven to reach the initial temperature, and then
you determine that you set the wrong temperature -- then it will take a while
for the oven to reach the new temperature setpoint. The "derivative" (D) component
of the PID controller helps deal with this delay: it holds some of the change back
because changes in the output have already been made but are not yet reflected in
the process variable.
In this first example, we assumed that there was no dead time, meaning, that if we
made a change in the output of the controller, the input immediately changed. For
example, zero dead time on our oven means that if we changed the temperature
setpoint on the oven, then the temperature inside the oven instantly changed to the
new setpoint (the oven did not require time to heat up or cool down).
The blue line represents a proportional constant of .1, the magenta line represents a
proportional constant of .2, the yellow line represents a proportional constant of .4,
and the white line represents the setpoint (SP). From this graph, hopefully two things
jump out at you. First, once the output settles out, the outputs (blue, magenta, and
yellow lines) are nowhere near the setpoint (SP) (the white line). Therefore, some
offset has to be added to the output to make the PV reach the SP. Second, the greater
the proportional constant, the less the offset needs to be. For example the yellow line,
with a proportional constant = .4 is closer to the white line than the blue line with a
proportional constant of .1.
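The behaviour described above can be reproduced with a very small simulation. The
process model below (a first-order lag with unit gain) is an assumption standing in
for the spreadsheet's process, but it shows the same effect: a proportional-only
controller settles short of the setpoint, and the offset shrinks as the proportional
constant grows:

    def simulate_p_only(kp, setpoint=50.0, steps=200, dt=1.0, tau=10.0, gain=1.0):
        """Proportional-only control of a first-order lag; returns the settled PV."""
        pv = 0.0
        for _ in range(steps):
            out = kp * (setpoint - pv)             # P-only controller
            pv += dt * (gain * out - pv) / tau     # first-order process response
        return pv

    for kp in (0.1, 0.2, 0.4):
        print(kp, round(simulate_p_only(kp), 1))   # PV settles well below SP = 50

With a unit process gain, the settled value is SP * Kp / (1 + Kp), which is why the
offset never disappears no matter how long you wait.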
If you download the Excel spreadsheet of the PID controller simulator and look at the
effects of increasing dead time you will notice that the outputs settle at the same
output level -- it simply takes longer for the output to reach its final level.
In summary, automatic proportional (only) controllers are not very good because there
is an offset that has to be continually adjusted.
As you can tell, the PI controller is much better than just the P controller. However,
dead time of zero (as shown in the above graph) is not common. So let's take a look
when the dead time equals two.
Now this graph is starting to look more typical of a PID controller. Notice how the
dark blue line quickly goes up to the SP (50) and cycles around 50 a little but quickly
settles down. In contrast, the dark purple line way overshoots the SP of 50, going
above 80, back down to 30, then over 50, and back and forth until it eventually settles
down.
If you download the Excel spreadsheet and look through the different scenarios you
will notice that the P & I parameters that look good for one dead time do not look
optimal for another dead time. In other words, for each process element (valve,
motor, pump, heater, chiller, etc) you are trying to control -- you will have different
process characteristics and will have to determine the optimal P, I, and possibly D
constants. Determining what these constants should be is called "tuning".
Theoretically, you want to minimize the sum of absolute errors, as given in the
spreadsheets.
Let's show one other graph to warn you about a very dangerous condition:
We wanted to show this graph to illustrate what can happen if you choose the wrong
parameters. The green line illustrates an unstable or "out-of-control" controller.
Notice how it continues to get worse and worse. This is not good. This is why you
want to start with very small P, I, and D constants and increase them to improve
performance. If you start with large constants, bad things can happen.
Derivative Control
Derivative control takes into consideration that if you change the output, then it takes
time for that change to be reflected in the input (PV). For example, let's take heating
of the oven. If we start turning up the gas flow, it will take time for the heat to be
produced, the heat to flow around the oven, and for the temperature sensor to detect
the increased heat. Derivative control sort of "holds back" the PID controller because
some increase in temperature will occur without needing to increase the output
further. Setting the derivative constant correctly allows you to be more aggressive
with the P & I constants.
Typically each manufacturer's PID controller acts differently than controllers from
other manufacturers. In other words, do not expect PID controllers from different
manufacturers to act exactly the same.
Many PID controllers today have an "auto tune" feature that will calculate good
values for the P, I, & D constants for a process.
We typically use a product called Expertune, which does exactly what its name
implies.
Remember that as a systems integrator you do not need to be a master of every
technology. You are as good as the resources you manage. Therefore, learn what you
can do, and what you can not do, and then call in the experts listed below to help you
with the difficult applications.
2. Increase the proportional constant until you get a sinusoidal wave with a
constant amplitude (see Excel spreadsheet for examples at different
deadtimes). In our Excel PID simulations, this sinusoidal wave occurs when
the proportional constant = 1.0. See the chart below where the Deadtime = 1,
Proportional constant = 1, and the period = 4. Note that the proportional
constant of 1.0 was determined to give the process a constant amplitude wave.
3. For an optimal P & I controller (no derivative), the proportional constant should
be 0.45 times the proportional constant that produced the constant-amplitude wave.
In this case 0.45 times 1.0 = 0.45.
4. The integral constant is 1.2 / period of the sinusoidal wave. For deadtime = 0,
the period is 2 so the optimal integral constant = 1.2 / 2 = 0.6. For deadtime =
1, the period is 4 so the integral constant = 1.2 / 4 = 0.3. For deadtime = 2, the
period is 6 so the integral constant = 1.2 / 6 = 0.2. The new PI controller
results are shown below.
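The arithmetic in steps 2-4 can be captured in a couple of lines. This sketch simply
encodes the rules quoted above; the example values (critical proportional constant
1.0 and period 4, i.e. deadtime = 1) come from the text:

    def pi_from_oscillation(critical_p, period):
        """PI settings from the proportional constant that gives a constant-amplitude
        oscillation and from the period of that oscillation."""
        p = 0.45 * critical_p    # proportional constant for the PI controller
        i = 1.2 / period         # integral constant
        return p, i

    print(pi_from_oscillation(critical_p=1.0, period=4))   # -> (0.45, 0.3)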
PID Tutorial
John A. Shaw
Process Control Solutions
The PID control algorithm is used for the control of almost all loops in the
process industries, and is also the basis for many advanced control
algorithms and strategies. In order for control loops to work properly, the
PID loop must be properly tuned. Standard methods for tuning loops and
criteria for judging the loop tuning have been used for many years, but
should be reevaluated for use on modern digital control systems.
While the basic algorithm has been unchanged for many years and is used
in all distributed control systems, the actual digital implementation of the
algorithm has changed and differs from one system to another and from
commercial equipment to academia.
We will discuss controller tuning methods and criteria. Also discussed will
be the digital PID control algorithm, how it works, the various
implementation methods and options, and how these affect the operation
and tuning of the controller.
Table of Contents
1. The Control Loop
Basic feedback control
Valve Linearity
Valve Linearity: Installed characteristics
Fail Open Valves
2. Process responses
Steady state: effect of controller output
Steady state: effect of disturbances
Process Dynamics: Simple lag
Process Dynamics: Dead time
Measurement of dynamics
Disturbances
6. Loop tuning
Tuning Criteria
Mathematical criteria
On-line trial tuning
Ziegler Nichols tuning method: open loop reaction rate
Ziegler Nichols tuning method: open loop point of inflection
Ziegler Nichols tuning method: open loop process gain
Ziegler Nichols tuning method: closed loop
Controllability of processes
Flow loops
Valve Linearity
Valves are usually non-linear. That is, the flow through the valve is not directly
proportional to the valve position. Several types of valves exist:
Linear
Same gain regardless of valve position
Equal Percentage
Low gain when valve is nearly closed
High gain when valve is nearly open
Quick Opening
High gain when valve is nearly closed
Low gain when valve is nearly open
As we will see later, the gain of the process, including the valve, affects the tuning.
If the controller is tuned for one process gain, it may not work well for other
process gains.
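As an illustration of the three characteristics listed above, the inherent curves can
be written as simple functions of fractional valve position; the equal-percentage
rangeability of 50 is an assumed, typical value:

    import math

    def linear(x):
        return x                              # flow proportional to position

    def equal_percentage(x, rangeability=50.0):
        return rangeability ** (x - 1.0)      # low gain near closed, high gain near open

    def quick_opening(x):
        return math.sqrt(x)                   # high gain near closed, low gain near open

    for pos in (0.1, 0.5, 0.9):
        print(pos, round(linear(pos), 2),
              round(equal_percentage(pos), 3),
              round(quick_opening(pos), 2))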
Valve Linearity: Installed characteristics
The flow vs. percent open curve changes due to the head loss in the piping.
At low flow, the head loss through the pipes is less, leaving a larger differential
pressure across the valve.
At high flow, the head loss through the pipes is more, leaving a smaller differential
pressure across the valve.
The effect is to increase the nonlinearity of most valves.
Fail Open Valves
Valves are usually either: Fail Closed, air to open, or
Fail Open, air to close.
Regardless of the way the valve operates, the operator is interested in
knowing and adjusting the position of the valve, not the
signal. "Up is always open."
All controllers have some means of indicating the controller output in
terms of the valve position. When the operator increases the output
indicated on the controller, the valve opens.
Indication Inversion
The output indication is inverted.
The controller action takes the valve action into account.
The flow loop is direct acting.
Most analog controllers work like this.
Signal Inversion
The output signal is inverted. The controller action ignores the valve action.
The flow loop is reverse acting. Some distributed control systems work like
this.
When the load changes, either the process value changes or the valve
position must be changed to compensate for the load change.
Dead Time: A delay in the loop due to the time it takes material to move from
one point to another.
Also called: Distance Velocity Lag
Transportation Lag
Measurement of dynamics
The dynamics differ from one loop to another.
However, they usually result in a response curve like this:
Disturbances
Almost all processes contain disturbances.
Disturbances can enter anywhere in the process.
Controller Action
Defines the relationship between changes in the measured variable and changes in
the controller output.
DIRECT: An increase in the measured variable causes an increase in the output.
REVERSE: An increase in the measured variable causes a decrease in the output.
The controller action must be the opposite of the process action.
Auto/Manual
Manual Mode:
The operator adjusts the output to operate the plant.
During startup, this mode is normally used.
Automatic Mode:
The control algorithm manipulates the output to hold the process measurements at
their setpoints.
This should be the most common mode for normal operation.
Key concepts
The PID control algorithm does not "know" the correct output to bring the
process to the setpoint.
It merely continues to move the output in the direction which should move the
process toward the setpoint.
The algorithm must have feedback (process measurement) to perform.
The PID algorithm must be "tuned" for the particular process loop. Without
such tuning, it will not be able to function.
To be able to tune a PID loop, each of the terms of the PID equation must be
understood.
The tuning is based on the dynamics of the process response.
The PID Control Algorithm
The PID control algorithm comprises three elements: Proportional, Integral, and
Derivative. Controllers may use these as:
Proportional only
Proportional and Integral (most common)
Proportional, Integral, and Derivative
Proportional
E = Setpoint - Measurement (reverse action)
Output = E * G + k
The output is equal to the error times the gain plus the manual reset.
If the manual reset stays constant, there is a fixed relationship between the
setpoint, the measurement, and the output.
Proportional units
The proportional or gain term may be calibrated in two ways:
Gain and Proportional Band
Gain = Output / Input
Increasing the gain will cause the output to move more.
Proportional band is the % change in the input which would result in a 100%
change in the output.
Proportional Band = 100 / Gain
We will use gain in this document.
Proportional: Output vs. Measurement
(Reverse acting)
Proportional only control produces an offset. Only the adjustment of the manual
reset removes the offset.
Proportional Offset
Offset can be reduced by increasing gain.
Proportional control with low gain
Proportional control with higher gain
Proportional: Reducing offset with manual reset
Offset can be eliminated by changing manual reset.
Proportional control, different manual reset
Adding automatic reset
With proportional only control, the operator "resets" the controller (to remove
offset) by adjusting the manual reset:
This manual reset may be replaced by automatic reset, which continues to move
the output whenever there is any error:
This is called "Reset" or Integral Action.
Note the use of the positive feedback loop to perform integration.
Reset or integral mode
Reset Contribution:
Out = g x Kr x (integral of error)
where g is gain, Kr is the reset setting in repeats per minute.
Units used to set integral or reset
Assume a controller with proportional and integral only.
Calculation of repeat time (gain and reset terms used in controller):
With the error set to zero (measurement input = setpoint), make a change in the
input and note the immediate change in output. The output will continue to change
(it is integrating the error). Note the time it takes the output, due to the integral
action, to repeat the initial change made by the gain action.
Some control vendors measure reset by repeat time in minutes. This is the time it
takes the reset (or integral) element to repeat the action of the proportional
element.
Others measure reset by "repeats per minute".
Repeats per minute is the inverse of minutes of repeat.
This document will use repeats per minute.
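Since vendors quote reset both ways, a one-line conversion (a trivial sketch) avoids
mistakes when moving settings between controllers:

    def repeats_per_minute(minutes_per_repeat):
        """Convert a reset setting given as repeat time (minutes) to repeats per minute."""
        return 1.0 / minutes_per_repeat

    print(repeats_per_minute(0.5))   # a repeat time of 0.5 min equals 2 repeats/min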
Derivative
First used as a part of a temperature transmitter ("Speed Act", Taylor
Instrument Companies) to overcome lag in transmitter measurement.
Also known as Pre-Act and Rate.
Derivative Contribution:
Out = g x Kd x de/dt
where g is gain, Kd is the derivative setting in minutes.
Response of controller with proportional and derivative:
The amount of time that the derivative action advances the output is known as the
"derivative time", measured in minutes.
All major vendors measure derivative (Derivative, Rate) the same.
Complete PID response
Non-Interactive (textbook) form:
Out = G (e + R x (integral of e dt) + D x de/dt)
Where
G = Gain
R = Reset (repeats per minute)
D = Derivative (minutes)
Note: See Interactive vs. Non-interactive (below)
Non-Interactive (Parallel):
Out = G (e + R x (integral of e dt) + D x de/dt)
Interactive (series):
Out = G (e + R x (integral of e dt)) x (1 + D x d/dt)
To obtain the same response from a non-interactive (textbook) controller, its gain
should be higher, the reset rate lower, and the derivative lower
than on a commercial interactive controller.
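A sketch of the usual textbook series-to-parallel conversion, which makes that
statement concrete (the conversion formulas are not taken from the text above; here
reset is expressed as an integral time in minutes rather than repeats per minute):

    def series_to_noninteractive(gain, ti, td):
        """Convert interactive (series) settings to equivalent non-interactive
        (textbook) settings: the textbook gain is higher, the integral time longer
        (so the reset rate in repeats per minute is lower), and the derivative
        time shorter."""
        g_ni = gain * (ti + td) / ti
        ti_ni = ti + td
        td_ni = (ti * td) / (ti + td)
        return g_ni, ti_ni, td_ni

    print(series_to_noninteractive(gain=2.0, ti=4.0, td=1.0))   # -> (2.5, 5.0, 0.8)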
External feedback
If the input to the positive feedback loop is taken from the signal to the
process, it is called "external feedback" or "reset feedback". At steady state
the controller output is the Gain multiplied by Error added to external
feedback. If the error is zero, the output is equal to the external feedback.
Saturation Properties
Another difference is in the "Saturation Properties"
e.g., what happens when the output has been at the upper or lower limit.
Standard algorithm
Described on previous page.
Loop Tuning
Tuning Criteria
or
"How do we know when its tuned"
Elementary methods
1 The plant didnt blow up.
2 The process measurements stay close enough to the setpoint.
3 They say its OK and you can go home now.
Informal methods
1 Optimum decay ratio (1/4 wave decay).
2 Minimum overshoot.
The choice of methods depends upon the loops place in the process and its
relationship with other loops.
Mathematical criteria
Mathematical methods: minimization of an error index.

Ziegler-Nichols open loop tuning: reaction rate method.
From the reaction curve, measure:
X = change of controller output (%)
R = rate of change at the point of inflection (POI) (%/min)
D = dead time (min)

Controller   Gain            Reset (repeats/min)   Derivative (min)
P            X / (R D)
PI           0.9 X / (R D)   0.3 / D
PID          1.2 X / (R D)   0.5 / D               0.5 D

Ziegler-Nichols open loop tuning: process gain method.
Gp is the process gain - the change in measured value (%) divided by the
change in output (%). L is the process lag (min) and D is the dead time (min).
The gain, reset, and derivative are calculated using:

Controller   Gain             Reset (repeats/min)   Derivative (min)
P            L / (Gp D)
PI           0.9 L / (Gp D)   0.3 / D
PID          1.2 L / (Gp D)   0.5 / D               0.5 D
Ziegler-Nichols closed loop tuning.
Note the gain (Ultimate Gain, Gu) and period (Ultimate Period, Pu).
The Ultimate Gain, Gu, is the gain at which the oscillations continue with a
constant amplitude.

Controller   Gain      Reset (repeats/min)   Derivative (min)
P            0.5 Gu
PI           0.45 Gu   1.2 / Pu
PID          0.6 Gu    2 / Pu                Pu / 8
Controllability of processes
The "controllability" of a process is depends upon the gain which can be
used.
The higher the gain:
the greater rejection of disturbance and
the greater the response to setpoint changes.
The maximum gain which can be used depends upon the ratio of the dead time to the
longest process lag.
From this we can draw two conclusions:
Decreasing the dead time increases the maximum gain and the
controllability.
Increasing the ratio of the longest to the second longest lag also increases
the controllability.
Flow loops
Flow loops are too fast to use the standard methods of analysis and tuning.
Analog vs. Digital control:
Some flow loops using analog controllers are tuned with high gain.
This will not work with digital control.
With an analog controller, the flow loop has a predominant lag (L) of a few
seconds and no subordinate lag.
With a digital controller, the scan rate of the controller can be considered
dead time.
Although this dead time is small, it is large enough when compared to L to
force a low gain.
Typical digital flow loop tuning: Gain = 0.5 to 0.7,
Reset = 15 to 20 repeats/min,
no derivative.
AT A GLANCE
A control loop is a feedback mechanism that
attempts to correct discrepancies between a
measured process variable and the desired setpoint.
The controller applies the necessary corrective
actions via an actuator that can drive the process
variable up or down.
A PID controller using the ideal or ISA standard form of the PID algorithm computes
its output CO(t) according to the formula shown in Figure 1. PV(t) is the process
variable measured at time t and the error e(t) is the difference between the process
variable and the setpoint. The PID formula weights the proportional term by a factor
of P, the integral term by a factor of P/TI, and the derivative term by a factor of
P·TD, where P is the controller gain, TI is the integral time, and TD is the
derivative time.
(Figure 1 shows the formula and its legend:
CO(t) = P [ e(t) + (1/TI) ∫ e(t) dt + TD de(t)/dt ],
where CO(t) is the controller's current output, e(t) = SP - PV(t) is the error between
the setpoint (SP) and the process variable PV(t), P is the controller gain, TI is the
integral time, and TD is the derivative time.)
This terminology bears some explaining. Gain refers to the amount by which the error
signal will gain or lose strength as it passes through the controller en route to
becoming part of the controller's output. A PID controller with a high gain will tend
to generate aggressive corrective actions to eliminate errors.
The integral time refers to a hypothetical sequence of events where the error starts at
zero then abruptly jumps to a fixed value. Such an error would cause an instantaneous
response from the controller's proportional term and a response from the integral term
that starts at zero and increases steadily. The time required for the integral term to
catch up to the unchanging proportional term is the integral time TI. A PID controller
with a long integral time is more heavily weighted towards proportional action
than integral action.
Similarly, the derivative time TD is a measure of the relative influence of the
derivative term in the PID formula. If the error were to start at zero and begin
increasing at a fixed rate, the proportional term would start at zero while the
derivative term assumed a fixed value. The proportional term would then increase
steadily until it caught up to the derivative term at the end of the derivative time. A
PID controller with a long derivative time is more heavily weighted towards
derivative action than proportional action.
Historical note
The very first feedback controllers included just the proportional term. For
mathematical reasons that only became apparent later, a P-only controller tends to
drive the error downward to a small but non-zero value and then quit. Operators
observing this phenomenon would then manually increase the controller's output until
the last vestiges of the error were eliminated. They called this operation resetting the
controller.
An open-loop step test reveals the process's time constant T, dead time d, and
gain K.
Ziegler-Nichols tuning
So how can a control engineer designing a PID loop determine the values for P, TI,
and TD that will work best for a particular application? John G. Ziegler and Nathaniel
B. Nichols of Taylor Instruments (now part of ABB Instrumentation in Rochester,
NY) addressed that question in 1942 when they published two loop-tuning techniques
that remain popular to this day.
Their open-loop technique is based on the results of a bump or step test for which the
controller is taken off-line and manually forced to increase its output abruptly. A strip
chart of the process variable's subsequent trajectory is known as the reaction curve
(see Figure 2).
A sloped line drawn tangent to the reaction curve at its steepest point shows how fast
the process reacted to the step change in the controller's output. The inverse of this
line's slope is the process time constant T which measures the severity of the lag.
The reaction curve also shows how long it took for the process to demonstrate its
initial reaction to the step (the dead time d) and how much the process variable
increased relative to the size of the step (the process gain K). By trial-and-error,
Ziegler and Nichols determined that the best settings for the tuning parameters P, TI,
and TD could be computed from T, d, and K as follows:
P = 1.2 T / (K d)
TI = 2 d
TD = 0.5 d
Once these parameter settings have been loaded into the PID formula and the
controller returned to automatic mode, the controller should be able to eliminate
future errors without causing the process variable to fluctuate excessively.
Ziegler and Nichols also described a closed loop-tuning technique that is conducted
with the controller in automatic mode, but with the integral and derivative actions shut
off. The controller gain is increased until even the slightest error causes a sustained
oscillation in the process variable (see Figure 3).
The smallest controller gain that can cause such an oscillation is called the ultimate
gain Pu. The period of those oscillations is called the ultimate period Tu. The
appropriate tuning parameters can be computed from these two values according to
the following rules:
P = 0.6 Pu
TI = 0.5 Tu
TD = 0.125 Tu
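A small sketch combining the two Ziegler-Nichols recipes described above; the
open-loop formulas are the standard reaction-curve rules given earlier, and the
example numbers are arbitrary:

    def zn_open_loop(t, d, k):
        """Ziegler-Nichols open-loop (reaction curve) settings from the time
        constant T, dead time d, and process gain K of a step test."""
        return 1.2 * t / (k * d), 2.0 * d, 0.5 * d     # P, TI, TD

    def zn_closed_loop(pu, tu):
        """Ziegler-Nichols closed-loop settings from the ultimate gain Pu and
        ultimate period Tu."""
        return 0.6 * pu, 0.5 * tu, 0.125 * tu          # P, TI, TD

    print(zn_open_loop(t=30.0, d=5.0, k=2.0))    # -> (3.6, 10.0, 2.5)
    print(zn_closed_loop(pu=4.0, tu=60.0))       # -> (2.4, 30.0, 7.5)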
Caveats
Unfortunately, PID loop tuning isn't really that simple. Different PID controllers use
different versions of the PID formula, and each must be tuned according to the
appropriate set of rules. The rules also change when:
Recently, I wrote about refrigerators,1 pointing out that there are several ways to control a servo loop, such as
a temperature chamber, or an oven, or a refrigerator using thermoelectric coolers (let's leave bang-bang
controllers and on-off heat-movers out of this). Fuzzy Logic controllers can work pretty well, and so can a P-ID (or PID) controller. That's pronounced "pee-eye-dee", not "pid". Several readers said that they were not very
knowledgeable about PID controllers. They don't teach very much about them in schools these days, I guess.
They asked me, "Please show us a good example of a PID controller." Well, I agree completely that an
example is one of the most powerful tools. I'll show you a couple of examples, so you can see how easy it is to
come up with one yourself. And I will point out that, after a Fuzzy Logic expert showed us his best example of
a nice simple F.L. controller, I had no idea how to make it myself. Do you know how to run a F.L. controller
after seeing an example of one? I don't. I hope that would not be true for my examples.
One example is found in my book on Troubleshooting, 2 where I had to control the temperature of a blast of
heated air. When you apply more watts to the heater, there's a delay before the sensor warms up to its new
temperature. In fact, there are transport delays and thermal lags. This is a fairly interesting kind of system for
closing the loop accurately, but not really difficult. Engineers have known how to do this for many years;
about 140 years, I would say. Back in the 1880s, when most servo loops were mechanical or pneumatic, and
the instrumentation was crude, it was a wise person who understood how to close a loop with good accuracy
and loop stability. But for the last 40 years, when a good operational amplifier costs $22 or even less, it's been
a piece of cake.
Note, a wise old colleague observed that the introduction of the Integral term to Control Theory is credited to
the 1930s. But I found good documentation in my Encyclopedia Britannica3 that a flyball governor with an
added integral term was invented by Sir W. Siemens in 1853.4 Never bet that the British didn't get there first.
However, I can't say that I've seen the Derivative effect exploited in that 1894 Encyclopedia. So maybe the
PID controller is only about 60 years old...
First of all, let's discuss the nomenclature "PID." That stands for Proportional, and Integral, and Derivative.
You can build some controllers using only P and I, and others using only P and D, but when you need good
performance, using all three terms can provide REAL advantages. Let's see how these terms are made, and
how they are used.
First, these functions are used to operate on the error signal, the difference between the feedback parameter
and the desired parameter. Let's spell out an example. Say that we want to define a precision heater controller
perhaps for an electric frying pan, with a sensor for the controlled chamber that puts out 10 mV/°C. The
input command is -350 mV (which corresponds to a desired temperature or "set point" of +35°C), the output of
the temperature sensor is +250 mV (which corresponds to a temperature of 25°C), and the load must be heated
to get the feedback voltage to track (equal but with opposite sign). What's needed, then, is a circuit to operate
on the error, namely (Vout + Vin), or -0.1 V. Op amp A1 does just that (Fig. 1). (Let's keep things simple by
considering primarily linear systems; if the system actually has some nonlinearities, we can address them
later.)
After we generate that error term, you will want to generate a correction signal that's a function of the Error
Signal. As I discussed back in December, you might design your system so that a heater has its watts linearly
Proportional to the temperature error. "If chamber temperature is very cold, turn heat up high" is how the
Fuzzy Logic guys like to say this. This is partly wise, because if the temperature really is too cold, turning on
the heat is one of the good things to do. That is the Proportional term.
Referring to Figure 1, we follow the error-detector amplifier A1 with a Proportional amplifier A2. We can control
the gain of the Proportional path by adjusting the pot P2 at its output, so as to get the right gain going to the
power amplifier, A5. Eventually, we will figure out what to do with A3 and A4, but right now we can set their trim
pots to ZERO, and then they're out of the picture. Let's keep things simple, one step at a time.
Now, let's say you pour a bucket of very cold water into the electric frying pan, where the controller is set for
+35°C. The sensor soon says it's much too cold, so the heater turns on pretty hard. As the temperature of the
sensor gets near the desired temperature (the set point), the heater will eventually be turned off. The problem
is that any heater has a delay before its heat gets to the chamber and load and sensor. So, when the error
gets to zero, and you turn off power to the heater, the chamber still keeps on heating for a while and overshoot
occurs. If you turn the gain down low, this overshoot may be minor. But if you decide that you must have very
high accuracy and turn up the gain, overshoot is certain, and oscillation or bad ringing is likely.
Now, how can we avoid this overshoot by foreseeing this situation and recognizing that the power needs to be
shut down a little early? The best solution is to add in A3 to compute the Derivative of the temperature error
(the rate of change) while the proportional amplifier is computing when the error becomes small. When these
two signals are combined properly, the Derivative signal lets the controller decide, "Whoa, we are getting very
close to the set point, and the sensor's temperature is still rising pretty rapidly; time to cut back on the power."
In practice, this works quite well. This is called P-D control, using just A2 and A3 for the Proportional and
Derivative terms. You trim P3 to the setting that gives good results: not too much overshoot (not enough
derivative) and not too slow (too much derivative term). The Fuzzy Logic guys achieve this same function by
using the words: "If the temperature error is small (negative) and the rate-of-change is small (positive), heating
power should be small." This works, too.
In theory, you can make a differentiator (a rate-of-change computer) by taking op amp A3 and just
connecting an input capacitor Cin and a feedback resistor Rf. But in practice, with real op-amps, this will cause
a local oscillation of the amplifier, due to the lag in the feedback loop. The fix is fairly simple: for most cases,
to prevent local oscillations, add a small resistor R111 in series with Cin, and add a small capacitor (Cf) in
parallel with Rf. In practice, if you make R111 about 1/10 or 1/100 of Rf, and Cf about 1/20 or 1/200 of Cin, that
works pretty well. In other cases, it may be a bit more critical which value you choose, either to prevent local
oscillation or avoid degradation of loop stability.
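In a digital loop, the same cure (limiting the differentiator's bandwidth) usually takes
the form of a low-pass filter on the derivative. The sketch below is only a loose
digital analogy to the analog fix described above, not a circuit simulation; the filter
coefficient and sample values are assumptions:

    def filtered_derivative(x, state, dt, alpha=0.1):
        """Band-limited derivative: differentiate the input, then smooth the result
        with a first-order low-pass so noise is not amplified (roughly what adding
        R111 in series with Cin and Cf across Rf does for the op-amp circuit)."""
        raw = (x - state["prev_x"]) / dt
        state["prev_x"] = x
        state["deriv"] += alpha * (raw - state["deriv"])   # first-order smoothing
        return state["deriv"]

    state = {"prev_x": 0.0, "deriv": 0.0}
    for sample in (0.0, 0.1, 0.5, 0.6, 0.65):
        print(round(filtered_derivative(sample, state, dt=0.01), 2))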
The P-D controller is quite good for many servo control applications. To a large extent, many Fuzzy Logic
controllers are quite analogous to a P-D loop, and often they work well. NOW let me invent a case where the P-D controller (and the simple F.L. controller) begins to work lousy (that's a technical term). Let's take this +35°C
controller outside on a cold day. The water stays warm for a while, but the air starts to cool it off. After a while,
the Proportional path turns up the heater. But there's still an error, always an error. If the fry-pan needs 50 W
to keep the water at about 35°C, then the error will be 50 W divided by the gain.
To avoid a large error, it's natural to just turn up the gain, which is what some people propose to do. But when
you turn up the gain, the loop stability is hurt. How badly? Ahhh, that is hard to predict, hard to model. The
reason is that the thermal transfer from the heater to the water and the sensor isn't a simple model. It's not a
simple lag. The transfer might be similar to a cascade of 5 or 10 lags, so that a step of heat causes a slow
change of the sensor temperature. It's possible to do this in a computer, or within SPICE, but it's not really
easy. Furthermore, you basically have to measure some real response of the system. You can guess, but it's
not really easy. Anyhow, if you take enough data and generate an accurate-enough model, you can show that
you can turn up the Proportional Gain a certain amount. But, if you go any higher your loop will oscillate, or
ring severely. Let's say that you can only set the Proportional gain as high as 50 W/5°C, without oscillation.
Your need is for less than 1°C of error. But there remains about 5°C.
FIRSTLY, you might add as much insulation as you can to cut down the amount of watts needed. But let's say
there is still more than 2°C of error. What can we do?
One alternative (secondly) is to sense the outside temperature. We could then use that to predict that 20 or 40
W of power will be needed. We could add that information into the controller, which does work at times,
sometimes very well. This is known as feedforward.
Another thing we could do (thirdly) is add not just extra insulation, but a heated shell around the experiment,
perhaps at 30°C. That could greatly improve the accuracy, but often this amount of complexity is
unacceptable.
Okay, the fourth option, and a fairly popular and inexpensive one, is to look at that error signal (the output of
A1), and if there's any dc error, just INTEGRATE that signal using A4. Then feed that output through its
adjustment pot (P4) and sum it with the other signals. This beats the conundrum: to get full accuracy in view of
the rule, "You can only turn on the heat high when the error signal is large." In this case, you can turn on the
heat high even if the error signal is NOT large, but you may have to wait a while for the integrator to do its
job. Why not just turn up the GAIN for the Integrator? You can do that to some extent. If you overdo it, that
makes the loop unstable. So don't overdo it.
The best thing about using the Integrator is that it lets you turn down the gain of the Proportional amplifier. If
you turned up the Proportional gain too high to try to cut down the error, that will cause instability as I
mentioned earlier. When you have the integrator working, you can turn down the Proportional Gain and
improve the loop stability a LOT, yet still have infinite gain at dc. You can have ZERO static error.
But, don't turn the Proportional Gain down TOO FAR. If you did that, the I and D terms would act like an L-C
filter with no damping. So if you turn P2 down to zero, that's sure to cause oscillation, too. Now that you have
the Integral path working, the gain setting for the Proportional term acts as a DAMPING FACTOR to prevent
the loop from ringing. As you can imagine, the optimization of such a loop isn't trivial. Also, in many cases,
these loops may be slow, so it's hard to see if any changes you're making are doing more good than harm.
Hint 1: Use a slow strip-chart recorder so that you can see the shape of the loop's
step response, and if you're making any improvements. Or use a storage scope. Or a
Rustrak meter.
Hint 2: Take a little open-loop data to show the delay from a step of heat to a change
of output temperature. Build up a cascaded R-C network to simulate that slow lag
(Fig. 2). Then change the lag's response by a factor of 100 by decreasing all of the
capacitors by a factor of 100. Then design your controller to make that loop stable at
a speed that's easy to observe. Then scale that controller 100:1 slower, and you're
fairly close to having a controller that will work. This is one form of Analog Computer.
Hint 3: If the system changes (if the amount of water in the fry-pan decreases) you
will probably need to change the coefficients of your system. You could turn those
pots, or you could use multiplying DACs in place of P2, P3, and P4, to get the
coefficients you want. Not trivial, because the system won't do it for you; you have
to tell it what you want. But this IS feasible.
Hint 4: This circuit won't work well at all with general-purpose op amps. An LM741 at
A3 or A4 would cause HORRIBLE errors, because the resistors in the differentiator
and in the integrator will be quite high, perhaps 2 or 5 MΩ, forcing you to get op
amps with low bias currents. But, fortunately, good op amps with low input current
(50 pA or 0.05 pA) aren't expensive these days.
Hint 5: When you have all of this worked out and optimized pretty well, you can do the
whole thing with one op amp; you may not need five amplifiers. In Figure 3, op amp
A6 does the whole thing. Of course, you don't have the flexibility of three independent
controls, but in many cases you don't need that. In this case, the output of A6 is a
summation of the Derivative and the Integral terms, with a Proportional (damping
factor) term also included.
Can Fuzzy Logic likewise take advantage of an Integrator to convert from PD to PID? Yes, and pretty easily, if
you figure out the right trick. Of course, if the system is REALLY nonlinear, or a nonlinear controller is really
needed, then nothing is simple, and you might have to write 125 or 343 rules. Still, a small Integral term could
let you effectively turn down the "gain" of the Proportional path and greatly improve the dc accuracy AND the
loop stability. Of course, this doesn't mean that you can easily get fast settling under difficult conditions, such
as "we have no idea how much water is in the pan." But there's still a definite opportunity for improvement by
adding in the integrator5.
As I mentioned in '93, F.L. does not, by itself, compute a derivative. So if you want a derivative term, you have
to generate a derivative and digitize it, then present it to the F.L. controller: OR, you digitize the proportional
signal and take a DIFFERENCE every few seconds, then present THAT as a derivative signal to the F.L.
controller: So, in exactly the same way, the F.L. controller can't generate an integral. But, you can program a
subroutine to compute the integral of the Error Signal and present it to the F.L. OR, you could compute the
Integral term and just ADD it to the Proportional term, and then process that total without any fanfare. If the
system is fairly linear, nobody will ever know that you cheated, and it will probably work perfectly. You may not
have to write 343 rulesmaybe 25 or 49 will work just fine!
So, if you have a few bucks worth of op amps and a little time, you can make a pretty darned good controller:
Much better than a bang-bang controller. When I came to NSC back in '76, I found several Application Notes
where we had recommended a temperature controller. But either the controller had finite gain (i.e., poor low
gain) or else ran bang-bang, with various kinds of noise and inaccuracy, and bad error!! In my App notes, I
recommend that a proportional controller, with stability enhanced by the PID terms, can be fairly simple and
effective.
OF COURSE, if the delay from the heat to the sensor is just too slow, that makes everything much harder.
Locating the sensor where it gets a prompt response to the heat can help a lot. Also, you may get better
results from having two sensors. The one that drives the Differentiator may be located very close to the heater
as an aid to stability. But the one that drives the integrator may live in the "sweet spot," the exact place where
highest precision is needed. If there's an extra lag there, that will certainly make the loop difficult to engineer.
Then the other tricks mentioned above (the feedforward path and the oven-within-the-oven) may be
justifiable. In all of the cases where transport delays occur, system design can be very challenging. But it
needn't be considered impossible or even very difficult. And it's usually not a situation where Fuzzy Logic has
any inherent advantages. In fact, PID usually has advantages over a Fuzzy Logic controller if that controller
tries to do without any Integral term.
Comments invited!/RAP
Robert A. Pease/Engineer
P.S. If you have a heater, such as a gas furnace, you do not want to be turning it ON and OFF every few
seconds because you would wear it out. The same is true for an electromechanical refrigerator. But if you
have a thermoelectric cooler, you can turn that ON to any desired extent by driving the number of amperes the
loop calls for. The same thing applies with dc resistive heaters, but BEWARE: the amount of heat is normally
proportional to the SQUARE of the current. If you use the power transistor AND the resistor as heaters, the
total watts is about linearly proportional to the current, but you have to be careful where you locate the
transistor (a small source) compared to the power resistor, which often is made of wire wrapped all around the
temperature chamber. The management of thermal flow is a very important and tricky subject; I can't give any
easy answersyou really have to study it.
Modulating or controlling a high-power heater, such as a kilowatt of 115-V line power, sounds like it would be
much harder, but actually it's easy. You can drive a power Triac using a MOC3030 (zero-crossing firing circuit)
and add a dither circuit to ensure that the average heating of the power resistor is linearly proportional to the
duty cycle. In my book, I showed a 17-Hz dither that turned the Triac ON and OFF about 17 times per second.
Averaged over the internal time constant of the heater, you can hardly see any noise, and the duty cycle is
very well controlled without much heating in the control circuit. (You certainly don't want to control a kilowatt of
resistive heat with a linear amplifier.)
References:
1. "What's All This Refrigerator Stuff, Anyhow?," Electronic Design, Dec. 19, 1994, p. 122.
2. Troubleshooting Analog Circuits, Robert A. Pease, 1991, p. 109, Butterworth-Heinemann. Available from the
author for $31.95 (inc. tax and mailing).
3. Encyclopedia Britannica, London, 1894, Vol. XXII, "The Steam Engine," pp. 508-509.
4. "A Differential Governor," Sir W. Siemens, Proceedings of the Royal Institute of Mechanical Engineering,
1853.
5. "The Basics of PID on the NLX220," by Adaptive Logic, Address: 800 Charcot Ave., Suite 112, San Jose,
CA 95131. This arrived recently and shows easy techniques to get full PID control with F.L. Ask for info at
(408) 383-7200.
Originally published in Electronic Design, June 26, 1995
RAP's 1997 comments: THIS is not just about PID. THIS is not just about Analog Computers. THIS is not just
about Op-Amps. THIS is not just about Fuzzy Logic. This is about THE REAL WORLD. The whole world. If you
ask for something and you get it, are you happy? If not, why not? I have seen some F.L. scientists who were
wise enough to agree that they could make do with a LOT LESS than 343 rules, by adding the I term or the D
term to the P term. Of course, this works best when there is not much nonlinearity. And when there is not much
nonlinearity, F.L. does not have much advantage over PID controllers.
Minor correction on the circuit referenced in my book: on page 109, I show a good circuit with a "17 Hz triangle
wave." The main function of the triangle wave is so you can see the LED blinking, and guess when the loop is
pegged, or whether the duty cycle is high or low. I built another copy of this, recently. The LED did not seem to
blink right. I checked, and the frequency was 170 Hz. So please mark up that schematic, to change the
capacitor that is connected to "17 Hz" from 0.01 µF to 0.1 µF, so you would see the right kind of blinking. This
is the only error I have found in my book. I think we can get the 1998 printing to have the correct C value.
rap.