PID Controllers
Written by George Gillard
Published: 22-July-2017
Updated: 10-November-2017
Introduction
A PID Controller, if created and tuned well, is a powerful tool in programming for
incredibly efficient and accurate movements. There are three key components
behind the PID Controller: Proportional, Integral, and Derivative, from which the
acronym PID originates. However, you don't strictly need to use all three
together; for example, you could create just a P controller, a PI controller, or a PD
controller.
In this guide, we'll learn about each component, how they work together, and how
to put it all into practice. There will be some pseudocode examples to help out
along the way. PID by nature involves calculus, but don't be put off if you
haven't learned calculus yet, as I've attempted to design this guide to be easy
enough for anyone to understand.
Sections:
1. Background Information
2. P: Proportional
3. I: Integral
4. D: Derivative
5. Tuning
6. Conclusion
This guide is provided to assist those learning how to program VEX Robots. This is
a free document, but I ask that you seek my consent before redistributing it online.
Please feel free to share a link to the original source. This document, along with
my other guides, is available for free download from http://georgegillard.com.
It was not until 1922 that PID controllers were first developed using
a theoretical analysis, by Russian American engineer Nicolas
Minorsky for automatic ship steering. Minorsky was designing
automatic steering systems for the US Navy and based his analysis
on observations of a helmsman, noting the helmsman steered the
ship based not only on the current course error, but also on past
error, as well as the current rate of change; this was then given a
mathematical treatment by Minorsky. His goal was stability, not
general control, which simplified the problem significantly. While
proportional control provides stability against small disturbances,
it was insufficient for dealing with a steady disturbance, notably a
stiff gale (due to steady-state error), which required adding the
integral term. Finally, the derivative term was added to improve
stability and control.
Trials were carried out on the USS New Mexico, with the controller
controlling the angular velocity (not angle) of the rudder. PI control
yielded sustained yaw (angular error) of ±2°. Adding the D element
yielded a yaw error of ±1/6°, better than most helmsmen could
achieve.
- Wikipedia
Typical basic programming used on a robot is along the lines of "run at a constant
power until you reach a certain point, and then stop". In an ideal world, we'd be
able to do this, and stop exactly on the spot. However, in the real world, there are
additional and largely unpredictable factors that will cause our system to
overshoot the setpoint (ideal target), such as momentum (influenced by speed
and hence battery voltage) or other external influences.
[Figure: two plots of speed against distance, comparing a sudden stop at the setpoint with a gradual, controlled approach]
As you'd suspect, to calculate the error, you'd create something as simple as this:
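error = target - sensor value

Here, target is the value we want the sensor to eventually read (the setpoint), and sensor value is its current reading; the exact names are just placeholders.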
To solidify the understanding of the error, have a look at the following table. The
goal is for a robot to drive a total of 1000 units:
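(The values below are illustrative, assuming the robot starts with a sensor reading of 0 units.)

Distance travelled | Target | Error
0                  | 1000   | 1000
250                | 1000   | 750
500                | 1000   | 500
750                | 1000   | 250
1000               | 1000   | 0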
Similarly, if the robot then overshot the setpoint, the error would begin to become
negative (indicating the robot now needs to go in reverse):
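(Again, the values are only illustrative.)

Distance travelled | Target | Error
1050               | 1000   | -50
1100               | 1000   | -100

The simplest way to turn this error into movement is to use it directly as the output power for our motors: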
power = error
However, you may find that the power values don't seem to be scaled right. The
robot may be a bit too gentle approaching the target, and may in fact not have
enough power at all to reach the setpoint when the error becomes small. Or
alternatively, the robot might be a bit aggressive, and it might significantly
overshoot, and then overcorrect, in a never-ending cycle.
To combat this issue, we introduce another value, the proportional constant (kP).
Simply put, we multiply the error by kP when we assign the error to the output
power. Later we'll tune this value to get the desired output, but for now here's how
we'd implement it:
power = error*kP
Up until now, we've skipped over a fairly critical part of the PID (or P, so far)
controller. Currently, we could run the code and it would perform the calculations
once. However, we'd obviously need to keep recalculating these values as our
robot moves, otherwise our error and powers will never update. To fix this, we put
everything in a loop.
Here's some slightly more realistic pseudocode, which with some completion
would work absolutely fine for many situations:
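(This is only a sketch; the exact sensor-reading and motor commands depend on your hardware and programming environment.)

while (the robot has not yet settled at the target)
{
    error = target - sensor value    // how far we still have to go
    power = error * kP               // proportional control
    set motor power to power
    wait a few milliseconds          // give the robot time to move before recalculating
}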
[Figure: error plotted against time, with the area under the curve divided into thin slices, one per loop cycle]
The area under the curve for each cycle of our loop is going to be the current
error, multiplied by the time it takes for that cycle of the loop. It's a rough
approximation, but it works fine for us with such slim slices of time.
area = error * dT
The integral is equal to the sum of all of these areas. At any instant, it is the sum of
the areas of all the previous cycles, so we create a variable (integral), and add on
the new slice of area in each cycle of our loop:
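integral = integral + error

(Strictly speaking, each new slice of area is error * dT, but since we will keep dT constant we can leave it out here and absorb it into our constants later on.)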
Consider a case where our error is decreasing at a nice constant rate (not realistic),
ignoring dT. The following table describes how the integral would be calculated:
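(Illustrative values; the error falls by 200 units each cycle.)

Cycle | Error | Integral
1     | 1000  | 1000
2     | 800   | 1800
3     | 600   | 2400
4     | 400   | 2800
5     | 200   | 3000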
Now, consider what would happen if we had some external influence that caused
our error to reduce more slowly. In the above example it decreased 200 units per
cycle. The next table considers what would happen if that was 100:
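(The same scenario, but with the error now falling by only 100 units each cycle.)

Cycle | Error | Integral
1     | 1000  | 1000
2     | 900   | 1900
3     | 800   | 2700
4     | 700   | 3400
5     | 600   | 4000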
As we can see, in the first example our integral was 3000 after 5 cycles. Now, with
a slower deceleration, it's 4000. This increase in value is our indicator of some
external influence and will help create some more versatile control for our system.
Our code so far would now look something like this pseudocode:
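(As before, this is only a sketch; the sensor-reading and motor commands are placeholders.)

integral = 0
while (the robot has not yet settled at the target)
{
    error = target - sensor value
    integral = integral + error
    power = error * kP + integral * kI
    set motor power to power
    wait 15 milliseconds
}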
The 15 milliseconds wait at the end is really important; this is our dT. Without it,
your integral value will skyrocket to some huge number as numbers get added
together without any break, and 15 milliseconds is quite fast enough for an
accurate integral for most of our purposes. This delay will also prove to be critical
for the derivative term soon, as I'm sure you'll realise when we reach that section.
If you don't use a constant dT, you will need to measure the time per cycle
and account for it in your integral calculation.
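(A sketch of what that might look like, assuming your environment provides a timer reporting the current time in milliseconds:)

dT = current time - previous time    // how long the last cycle actually took
previous time = current time
integral = integral + error * dT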
3.3 Issues
3.3.1 Problem No. 1:
When the error reaches zero, that is, you've made it to the setpoint, the integral is
most likely going to be significant enough to keep the output power high enough
to continue. This can be a nuisance in situations where you don't need any
additional power to hold the position (e.g. a drive train on a flat surface):
if your wheels continue turning, that's somewhat of an issue!
Note: if this PID controller is for a system that needs a bit of help to hold its position
(e.g. an arm lifting up some weight), you absolutely should not try this. When your
error passes the setpoint, the integral value will gradually be diminished and it will
still settle. This is only suitable for systems that maintain their sensor value with
zero power (e.g. wheels on a flat surface).
Solution #1: Limit the value that the integral can reach.
if (integral is huge)
integral = maximum value;
Solution #2: Limit the range in which the integral is allowed to build up (i.e. only
let it accumulate once the error is below a certain value, or once the current
output power is less than a certain value, e.g. 100%).
if (error is big)
    integral = 0;
For this guide, we'll use the second solution using the error as a limiting factor, as
it's a little more useful than the first but still simple to program.
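(Inside the loop, that might look something like the following, where the integral limit is an error value you choose yourself:)

if (absolute value of error < integral limit)
    integral = integral + error
else
    integral = 0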
[Figure: error plotted against time, showing the gradient of the curve between two nearby points]

The derivative is the gradient (slope) of this graph, i.e. the change in Y divided by the change in X between two points:

gradient = (y2 - y1) / (x2 - x1)
For us, our Y axis is our error, and our change in X is dT, so our derivative is:
derivative = (error - previous error) / dT
Just like with the integral, if we treat dT as a constant, we can ignore its effect in
our calculations and merge it in with our constants later on.
To incorporate this into our existing output power, we add our derivative component
onto the end, as follows:
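derivative = error - previous error    // dT treated as constant and folded into kD
power = error * kP + integral * kI + derivative * kD

Putting everything together, a complete PID loop might look something like this rough sketch (the sensor-reading and motor commands are placeholders for whatever your platform provides):

integral = 0
previous error = 0
while (the robot has not yet settled at the target)
{
    error = target - sensor value
    if (absolute value of error < integral limit)
        integral = integral + error
    else
        integral = 0
    derivative = error - previous error
    previous error = error
    power = error * kP + integral * kI + derivative * kD
    set motor power to power
    wait 15 milliseconds
}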
First of all, we'll look into some factors that determine the behaviour and
performance of our PID controller in reality:
Rise time - the time it takes to get from the beginning point to the target point
Overshoot - how far beyond the target your system goes when passing the target
Settling time - the time it takes to settle back down when encountering a change
Steady-state error - the error at the equilibrium, when it's stopped moving
Stability - the smoothness of the motion
Now, let's check out how these are affected by an increase in our three constants:
Parameter | Rise Time         | Overshoot          | Settling Time     | Steady-State Error      | Stability
kP        | Decrease (faster) | Increase (further) | N/A               | Decrease (more precise) | Worsens
kI        | Decrease (faster) | Increase (further) | Increase (slower) | Eliminates              | Worsens
kD        | N/A               | Decrease (less)    | Decrease (faster) | N/A                     | Improves*
* If kD is small enough. Too much kD can make it worse! Since the derivative term
acts in the opposite direction to the proportional and integral components, if the
power produced by the derivative term is too great it will outweigh the
proportional and integral components, causing the robot to slow down and
potentially stop when it shouldn't. When the robot slows down, the derivative
component will weaken and the robot will once again be able to continue, only
until the derivative term becomes strong enough once again to slow the robot
down unnecessarily. The resulting motion looks jumpy, or jittery.
First of all, you set all three constants (kP, kI, kD) to zero. This disables them.
We'll tune them one by one, rather than jumping straight in. We generally tune
in the order of Proportional, Derivative, Integral; that is, we tune in the order of kP,
kD and finally kI. This entire process relies on making a prediction for your
constant (trial), and then adjusting it when it doesn't go to plan (error). It's
important to be prepared to stop your robot (both by disabling it from your
program or a switch, and by physically catching it if necessary), as you'll likely
make a prediction that is far off an appropriate value. So long as you're ready, there
typically isn't too much harm in just experimenting.
1. Increase kP until the robot oscillates just slightly, once or twice. We're
interested in achieving a fast motion to the target here, but not too violent;
it needs to settle, and in a reasonable amount of time!
2. Increase kD until that oscillation is damped out and the robot settles
smoothly at the target.
3. Start adding kI until any minor steady-state error and disturbances are
accounted for. You may need to adjust kD when doing this.
4. Using the knowledge from the table above, keep adjusting the constants
until you end up with a nice, quick but smooth motion that you're happy with.
This can be very frustrating and difficult the first few times, but it gets a lot better
with practice, and you'll be able to guess fairly accurate values for your constants
with a bit of experience.
Just as for the trial & error method, begin by disabling all three constants (set them
to zero). Then, increase kP until the robot oscillates steadily about the target with a
constant amplitude. Record this critical value of kP as kU (the "ultimate gain"), and
measure the period of one oscillation as pU. The constants are then calculated from
kU and pU using the following table (this is known as the Ziegler-Nichols method):
Control Type | kP      | kI          | kD
P            | 0.5*kU  | 0           | 0
PI           | 0.45*kU | 0.54*kU/pU  | 0
PD           | 0.8*kU  | 0           | 0.1*kU*pU
PID          | 0.6*kU  | 1.2*kU/pU   | 0.075*kU*pU
- Wikipedia
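For example, suppose (hypothetically) the robot first oscillated steadily at kP = 2.0, with each oscillation taking 0.5 seconds, so kU = 2.0 and pU = 0.5. For a full PID controller, the table would suggest starting values of kP = 0.6*2.0 = 1.2, kI = 1.2*2.0/0.5 = 4.8, and kD = 0.075*2.0*0.5 = 0.075.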
It's important to note that there are other features you may need to implement in
your code to improve the controller. It's also important to note that you don't need
all three components to create a good controller; depending on your situation, a
P, PI, or PD controller might be just as good, if not more appropriate.
Understanding the fundamentals behind how and why a PID controller works will
aid you tremendously with your programming. It is worthwhile searching for
some examples online for various applications to broaden your knowledge.
I hope this guide has helped you, and I wish you the best of luck with your
programming!