Frequency-Response Design Method Hand-Out
Instrumentation Control Engineering
BSME 5B
Frequency Response – Design Method
1 Introduction
Frequency Response (sometimes called FR) is a key analysis tool for control of some dynamic
systems. This analysis is based on the fact that if the input to a stable process is oscillated at a
frequency ω, the long-time output from the process will also oscillate at a frequency ω, though
with a different amplitude and phase.
Advanced control techniques related to frequency response are crucial to the performance of
devices in the communications industry and other facets of electrical engineering. Imagine what
would happen, for example, if your cellular phone amplified all frequencies, including the high-
frequency noise produced by air moving past the mouthpiece, at the same volume as your voice!
The frequency response technique can also be valuable to mechanical engineers studying
things like airplane wing dynamics or chemical engineers studying diffusion or process dynamics.
As an example of what happens to a system with an oscillatory input, consider a system
consisting of a damped harmonic oscillator (I’ve used symbols usually reserved for masses on
springs here):
m d²x/dt² + b dx/dt + kx = F, (1)
where x is the deviation from the equilibrium position, t is time, b is a friction coefficient, k is the spring constant, and F is
the force applied to the spring (the input). The output is simply the measured position, so y = x.
We will define the input (for convenience) as u = F/m, the force per unit mass. The transfer
function from input to output (assuming zero initial conditions) is thus
G(s) = Y(s)/U(s) = 1/(s² + 2ζω₀s + ω₀²), (2)
where 2ζω₀ = b/m and ω₀ = √(k/m). If we put an oscillatory signal (such as a sine function) as
input to this “plant,” we get an expression for the output, y(t).
The Laplace transform of u(t) = a sin(ωt) is U(s) = aω/(s² + ω²), so
Y(s) = G(s)U(s) = aω/[(s² + ω²)(s² + 2ζω₀s + ω₀²)]. (3)
We can decompose this equation by partial fractions into the following form:
Y(s) = (c₁s + c₂)/(s² + 2ζω₀s + ω₀²) + (Cs + Dω)/(s² + ω²). (4)
The output y(t) is thus of the form
y(t) = β₁e^(−α₁t) cos(γ₁t) + β₂e^(−α₂t) sin(γ₂t) + C cos(ωt) + D sin(ωt), (5)
where αi, βi, and γi are positive real constants. The first two terms eventually decay away, leaving
the last two terms. These two terms oscillate with frequency ω, as expected from the note above.
We can use some trigonometry to express these in a more obvious way:
C cos(ωt) + D sin(ωt) = M cos(ωt − φ), where M = √(C² + D²). (6)
The phase angle can thus be computed as φ = arctan(D/C). The magnitude, M, and the phase, φ,
are intimately linked with the nature of the transfer function and thus of the plant itself.
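As a quick numerical illustration of this long-time behavior, the short Python sketch below integrates Eq. (1) with a sinusoidal input and compares the steady oscillation against a·|G(iω)| and the predicted phase. The parameter values m, b, k, a, and ω here are arbitrary choices for illustration, not values from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Arbitrary illustrative parameters (not taken from the hand-out)
m, b, k = 1.0, 0.4, 4.0        # mass, friction coefficient, spring constant
a, w = 1.0, 1.5                # amplitude and frequency of the input u = a*sin(w*t)

w0 = np.sqrt(k / m)            # natural frequency
zeta = b / (2.0 * m * w0)      # damping ratio

def rhs(t, state):
    """Equation (1) written as two first-order ODEs in (x, v)."""
    x, v = state
    u = a * np.sin(w * t)                            # input (force per unit mass)
    return [v, u - 2.0 * zeta * w0 * v - w0**2 * x]

# Integrate long enough for the decaying terms of Eq. (5) to die out
sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0], max_step=0.01, dense_output=True)
t = np.linspace(150.0, 200.0, 5001)                  # examine only the long-time response
y = sol.sol(t)[0]

# Predicted frequency response: G(i*w) = 1 / (w0^2 - w^2 + 2i*zeta*w0*w)
G = 1.0 / (w0**2 - w**2 + 2j * zeta * w0 * w)
print("simulated long-time amplitude:", 0.5 * (y.max() - y.min()))
print("predicted amplitude a*|G(iw)|:", a * abs(G))
print("predicted phase shift (deg):  ", np.degrees(np.angle(G)))
```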
A useful mathematical object for what follows is the Kronecker delta, defined as
δ[n − m] = 1 if n = m, and 0 otherwise. (7)
The Kronecker delta can be written as δₙₘ instead of δ[n − m]. Either way, it is true that
Σ_{n=−∞}^{∞} aₙ δ[n − m] = aₘ. (8)
This function serves to “pull out” the value of the coefficient in the series aₙ corresponding to
m. This works well for discrete systems, but this delta function is useless for continuous functions,
where integrals instead of sums are involved.
Another type of delta function, introduced by the British mathematician Paul Dirac in the 1920s,
is called the Dirac delta, defined—by analogy to Equation (7)—such that
∫_{−∞}^{∞} f(x) δ(x − a) dx = f(a). (9)
Thus the Dirac delta function serves the same purpose for continuous functions as its partner the
Kronecker delta serves for discrete functions: it “pulls out” the value of the function at that point
in the domain.
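For example, ∫_{−∞}^{∞} e^(−x²) δ(x − 1) dx = e^(−1): the delta simply evaluates the smooth factor e^(−x²) at the point x = 1.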
The Dirac delta function has a few properties similar to those of the Kronecker delta. First, we
can immediately see from Eq. (9) that
∫_{−∞}^{∞} δ(x − a) dx = 1. (10)
Second,
δ(x − a) = lim_{Δx→0} [1/Δx if |x − a| < Δx/2, and 0 otherwise], (11)
meaning that this function can be envisioned as the limit of the Kronecker delta function as the
size of the discrete unit approaches zero, preserving the area of each discrete cell.
The Laplace transform of a time-shifted Dirac delta follows immediately from Eq. (9):
L[δ(t − θ)] = ∫_0^∞ δ(t − θ) e^(−st) dt = e^(−θs). (12)
The convolution of two functions f and g is defined as
(f ∗ g)(t) = ∫_0^t f(τ) g(t − τ) dτ. (13)
This integral is the “generalized multiplication” in the time domain, and it works out that
L[f ∗ g] = F(s)G(s). (14)
It is therefore not true that y(t) = g(t)u(t); it is true that Y(s) = G(s)U(s).
Equation (13) combined with Eq. (12) provides a simple way to derive the Laplace transform
of a function with a time delay: rewrite the delayed function as the convolution of a Dirac delta and
the un-delayed function via Eq. (9), then take the transform of each function and multiply.
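As a worked instance of this recipe: for a delay of θ, the delayed signal can be written as f(t − θ) = ∫_0^t δ(τ − θ) f(t − τ) dτ (for t > θ), which is exactly the convolution of Eq. (13); applying Eq. (14) together with Eq. (12) then gives L[f(t − θ)] = e^(−θs) F(s).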
A closely related family of transforms is the Fourier transform. For a sequence of N samples fₙ, one transform pair is
Fₘ = Σ_{n=0}^{N−1} fₙ e^(−2πinm/N), (15a)
fₙ = (1/N) Σ_{m=0}^{N−1} Fₘ e^(2πinm/N). (15b)
Some people call Eq. (15) the definition of the Finite Fourier Transform¹ or the Discrete Fourier
Transform.
The continuous analogue² for a function of position x is
F(ν̄) = ∫_{−∞}^{∞} f(x) e^(−2πiν̄x) dx, (16a)
f(x) = ∫_{−∞}^{∞} F(ν̄) e^(2πiν̄x) dν̄. (16b)
1 This is occasionally, and somewhat awkwardly, referred to as the FFT by some authors. That term is (almost) universally reserved for the Fast Fourier Transform, the family of algorithms used to compute the discrete transform efficiently.
2 The sign of the i in the exponent is a matter of convention, provided the reverse transform has the opposite sign. The factor of 1/2π can also be moved between the forward and reverse transforms, depending on the desired units of the transformed function.
In this form, x has units of length, and ν̄ has units of inverse length (wavenumbers). A similar
form which is more common is
F(k) = ∫_{−∞}^{∞} f(x) e^(−ikx) dx, (17a)
f(x) = (1/2π) ∫_{−∞}^{∞} F(k) e^(ikx) dk. (17b)
A symmetric variant splits the normalization evenly between the forward and reverse transforms:
F(k) = (1/√(2π)) ∫_{−∞}^{∞} f(x) e^(−ikx) dx, (18a)
f(x) = (1/√(2π)) ∫_{−∞}^{∞} F(k) e^(ikx) dk. (18b)
Which form you use is largely a matter of preference, provided you remember that the factor of 2π is
there to cancel the units of k, which is in radians per unit length. The form defined by Eq. (18) would
therefore have a transformed function with somewhat awkward units involving a half power of radians.
If the function in question is a function of time (instead of distance), then two other forms
come into play. They are analogous to the spatial forms, except that the transformed variable has
units of frequency instead of wavenumber. For frequencies in Hz (cycles/second) and analogous
units:
F(ν) = ∫_{−∞}^{∞} f(t) e^(−2πiνt) dt, (19a)
f(t) = ∫_{−∞}^{∞} F(ν) e^(2πiνt) dν. (19b)
Similarly, for angular frequencies:
F(ω) = ∫_{−∞}^{∞} f(t) e^(−iωt) dt, (20a)
f(t) = (1/2π) ∫_{−∞}^{∞} F(ω) e^(iωt) dω. (20b)
We will take Equation (20) as the definition of the Fourier Transform, F[f(t)], for functions in
the time domain.
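As a small numerical check of the convention in Eq. (20a), the sketch below (the test function e^(−t) and the frequency values are arbitrary choices) verifies that the forward transform of f(t) = e^(−t) for t ≥ 0 comes out to 1/(1 + iω):

```python
import numpy as np

# Test function: f(t) = exp(-t) for t >= 0; under the convention of Eq. (20a)
# its Fourier transform is F(omega) = 1 / (1 + i*omega).
dt = 1e-4
t = np.arange(0.0, 60.0, dt)       # the integrand is negligible beyond t = 60
f = np.exp(-t)

for omega in (0.5, 2.0, 10.0):
    F_num = np.sum(f * np.exp(-1j * omega * t)) * dt   # Eq. (20a) as a Riemann sum
    F_exact = 1.0 / (1.0 + 1j * omega)
    print(omega, np.round(F_num, 4), np.round(F_exact, 4))
```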
Recall that the transfer function of a linear system, G(s), is the Laplace transform of the impulse response, g(t). The transfer function can thus be found by the
definition of the Laplace transform:
G(s) = ∫_0^∞ g(t) e^(−st) dt. (21)
Strictly speaking, this is the unilateral Laplace transform; the bilateral Laplace transform is
integrated over the entire real axis. Since outputs are assumed to be at steady-state (or at least
unknown) prior to time t = 0, these are equivalent for our purposes. If we restrict s to be purely
imaginary,3 we can let s = iω and Eq. (21) becomes
G(iω) = ∫_0^∞ g(t) e^(−iωt) dt = F[g(t)]. (22)
Comparing this to Eq. (20a) leads us to the equality on the right (recalling that g(t) = 0 for t < 0).
In short, G(iω) is the Fourier transform of the impulse response: the value of G(iω) at each value
of ω represents the contribution of a wave of frequency ω to the value of g(t). If we eliminate
the contributions from all other waves (say, by exciting at that frequency and allowing the other
states to relax away), the Fourier transform tells us the final response of the system to that
oscillation, including its magnitude and phase-shift.
To obtain the magnitude and phase directly from the transfer function G(s) (the open-loop
transfer function), we separate G(iω) into real and imaginary parts⁴ (see Seborg et al., pages 337–338):
G(iω) = R(ω) + iI(ω). (23)
The amplitude ratio (AR) and phase lag (φ) of the long-time oscillations are then
AR = |G(iω)| = √(R(ω)² + I(ω)²), φ = ∠G(iω) = arctan(I(ω)/R(ω)). (24)
Note that the magnitude of the actual oscillation will be a·AR, where a is the amplitude of the
exciting oscillation.
3 I mean this in the mathematical sense, not the metaphysical sense, of course.
4 Hint: multiply the numerator and denominator by the complex conjugate of the denominator.
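A minimal numerical sketch of Eqs. (23) and (24), using the second-order plant of Eq. (2) with illustrative values ζ = 0.25 and ω₀ = 2 rad/s (these numbers are arbitrary, not from the text):

```python
import numpy as np

zeta, w0 = 0.25, 2.0                      # illustrative values only
w = np.logspace(-1, 2, 400)               # frequencies, rad/s

# Eq. (2) evaluated at s = i*omega
G = 1.0 / ((1j * w)**2 + 2.0 * zeta * w0 * (1j * w) + w0**2)

R, I = G.real, G.imag                      # real and imaginary parts, Eq. (23)
AR = np.sqrt(R**2 + I**2)                  # amplitude ratio, Eq. (24)
phi = np.degrees(np.arctan2(I, R))         # phase in degrees (arctan2 keeps the correct quadrant)

print("peak AR:", AR.max(), "at w =", w[np.argmax(AR)], "rad/s")
print("phase at w = w0:", np.interp(w0, w, phi), "degrees")   # should be close to -90
```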
5 Analysis Tools
5.1 Bode Plots
Named for Hendrik Bode (properly “boh-duh,” though everyone says “boh-dee”), the Bode diagram is one possible
representation of the long-time magnitude and phase of the output in response to a sinusoidal
input. It consists of two stacked plots with frequency on the horizontal axis.
The upper plot is a log-log plot of magnitude against frequency. The magnitude is nearly
always displayed in units of decibels (dB), such that
M(dB) = 20 log₁₀ M.
Note that a decibel is usually defined in this way, though occasionally the factor in front is 10
instead of 20: the decibel was originally invented to describe loss of power over
telephone cables, and power is proportional to the square of amplitude (so the factor of 10 is for
ratios of power, while the factor of 20 is for ratios of amplitude). It is worth checking which
definition is being used when you find a paper in the literature.
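For example, an amplitude ratio of M = 2 corresponds to 20 log₁₀ 2 ≈ 6 dB, while M = 0.1 corresponds to −20 dB.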
The lower plot is a log-linear plot of phase lead against frequency. The phase is typically
plotted in degrees. For most plants the phase lead is negative (the output lags behind the error signal
rather than leading it).
Examples of Bode diagrams are shown in Figures 1–6.
The Bode plot is very useful for determining various aspects of a process. For example, it is
desirable in most processes to have good disturbance rejection at high frequencies (that is, the
process is not affected overly much if the signal is noisy). This corresponds to a fall-off in
magnitude to the right of the Bode plot. The phase plot gives information about the order of the
process (if such information is unknown): the general rule is that the phase will shift by −90° for each
first-order term in the plant and/or controller. Thus, a third-order system would exhibit a phase
shift of −270◦ at high frequencies. At intermediate frequencies, however, the decay times of each
of the individual first-order terms may or may not be significant, so the phase lag usually follows
a “step” pattern, with the number of steps being equal to the order of the system.
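As a sketch of how such a diagram can be produced numerically (the third-order plant 1/((s + 1)(s + 2)(s + 5)) below is an arbitrary example, not one of the plants plotted in the figures):

```python
import numpy as np
from scipy import signal

# Arbitrary third-order plant: G(s) = 1 / ((s + 1)(s + 2)(s + 5)).
# Its phase should approach -270 degrees at high frequency, as described above.
den = np.polymul([1.0, 1.0], np.polymul([1.0, 2.0], [1.0, 5.0]))
G = signal.TransferFunction([1.0], den)

w = np.logspace(-2, 3, 500)                     # rad/s
w, mag_db, phase_deg = signal.bode(G, w=w)      # magnitude in dB, phase in degrees

print("low-frequency magnitude (dB):", mag_db[0])      # about 20*log10(1/10) = -20 dB
print("high-frequency phase (deg):  ", phase_deg[-1])  # approaches -270
```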
Figure 2: Open-loop Bode plots for second-order systems of the form of Eq. (2). Left: overdamped (ζ = 3). Right: underdamped (ζ = 0.25). The maximum on the underdamped plot corresponds to the resonance frequency of this second-order system.
Figure 3: Open-loop Bode plot for a third-order system.
Figure 4: Bode diagram for a system whose phase remains near −90° (magnitude in dB, frequency in rad/sec).
Figure 5: Bode diagram for a system whose phase remains near +90° (magnitude in dB, frequency in rad/sec).
Now consider placing a controller C(s) in a feedback loop around a plant G(s) of order n. The closed-loop transfer function from setpoint to output is
Gcl(s) = C(s)G(s)/(1 + C(s)G(s)). (25)
If the controller is a simple PI or PID controller (with the integrator on), then C(s) = Kc(1 + 1/(τi s) + τd s).
Multiplying the numerator and denominator of the closed-loop transfer function by τi s yields
Gcl(s) = Kc(τiτd s² + τi s + 1)G(s) / [τi s + Kc(τiτd s² + τi s + 1)G(s)]. (26)
The apparent order of the process is now n + 1. However, certain choices of gains can cause very
odd behavior in even simple plants, as can be seen in Figure 6. The errant behavior is due to
zeros, artifacts of the integrator and differentiator, that were not present in the open-loop
transfer function.
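A sketch of how the closed-loop transfer function of Eq. (26) can be assembled numerically (the plant 1/(s² + 2s + 1) and the gains below are illustrative choices, not the values used in Figure 6):

```python
import numpy as np
from scipy import signal

# Illustrative plant and PID gains (not the values used in Figure 6)
plant_num, plant_den = [1.0], [1.0, 2.0, 1.0]          # G(s) = 1 / (s^2 + 2s + 1)
Kc, tau_i, tau_d = 2.0, 10.0, 0.5

# PID controller over a common denominator, as in Eq. (26):
# C(s) = Kc*(tau_i*tau_d*s^2 + tau_i*s + 1) / (tau_i*s)
ctrl_num = [Kc * tau_i * tau_d, Kc * tau_i, Kc]
ctrl_den = [tau_i, 0.0]

# Open loop C(s)G(s), then closed loop CG / (1 + CG)
ol_num = np.polymul(ctrl_num, plant_num)
ol_den = np.polymul(ctrl_den, plant_den)
cl_num, cl_den = ol_num, np.polyadd(ol_den, ol_num)

w, mag_db, phase_deg = signal.bode(signal.TransferFunction(cl_num, cl_den),
                                   w=np.logspace(-2, 2, 400))
print("closed-loop order:", len(cl_den) - 1)           # n + 1 = 3 for this second-order plant
print("low-frequency magnitude (dB):", mag_db[0])      # near 0 dB thanks to the integrator
```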
6.1 Nyquist Plots
Nyquist plots provide an alternative to Bode diagrams in which the frequency does not appear
explicitly; the Nyquist diagram is a polar plot of magnitude and phase in which the frequency is an implicit
parameter. The Nyquist diagram is simpler than the Bode diagram, and is often used for multiple-loop
control schemes (such as cascade control) for that reason.
The idea of the Nyquist plot is to plot G(iω) in polar coordinates with r = |G(iω)| and θ = ∠G(iω).
An equivalent method is to plot Re[G(iω)] on the horizontal axis and Im[G(iω)] on the
vertical axis. Note that, unlike Bode plots, Nyquist plots must be plotted using the open-loop
transfer functions in order to provide any sort of useful information.
The Nyquist plot provides an immediate determination of loop stability for proposed
interacting loops, for example. These conditions are discussed in the next section. Some examples
of Nyquist plots are shown in Figures 7–9.
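A minimal sketch of how such a plot can be drawn directly from G(iω) (the stable second-order plant below is an arbitrary example, not necessarily the system shown in Figure 7):

```python
import numpy as np
import matplotlib.pyplot as plt

# Arbitrary stable second-order open-loop transfer function
def G(s):
    return 1.0 / (s**2 + 2.0 * s + 1.0)

w = np.logspace(-2, 3, 2000)            # positive frequencies only
Giw = G(1j * w)

plt.plot(Giw.real, Giw.imag, label="omega > 0")
plt.plot(Giw.real, -Giw.imag, "--", label="omega < 0 (mirror image)")
plt.plot([-1.0], [0.0], "rx", label="critical point (-1, 0)")
plt.xlabel("Real Axis")
plt.ylabel("Imaginary Axis")
plt.legend()
plt.title("Nyquist Diagram")
plt.show()
```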
Figure 6: Closed-loop Bode plots for a second-order plant with a P, PI, and PID controller,
respectively. The gains are Kp = 1, τi = 10 s, and τd = 0.5 s.
Figure 7: Nyquist plot for a second-order stable system.
Figure 8: Nyquist plot for a second-order unstable system.
Figure 9: Nyquist plot and corresponding Bode plot for a second-order stable system with a time delay. The time delay actually creates a spiral inward (which MATLAB does a bad job of rendering due to discretization).
6.2 Stability Criteria
The analysis diagrams presented in this section can be used to determine if a proposed control
scheme will be asymptotically stable. These methods are generally preferable to other criteria
such as the Routh criterion (see Seborg et al., Chapter 11), as they handle time delays exactly and
provide an estimate of the relative stability of a process (how stable the system is, not just
whether it is stable).
Gain and Phase Margins The idea with the Bode criterion is to find the roots of 1 + G(s) = 0, the
characteristic equation. Each root must therefore satisfy G(s) = −1 = e^(iπ), which corresponds to a
magnitude of 1 and a phase of −180°. We thus define the crossover frequency as the frequency
at which the magnitude of the transfer function is exactly 1 (0 dB), and the critical
frequency as the frequency at which the phase lag is equal to 180°. The gain margin is defined as
the inverse of the amplitude ratio at the critical frequency. The phase margin is 180° plus the
phase angle at the crossover frequency. The gain margin is a measure of how much the overall
open-loop gain can be increased before the process becomes unstable; the phase margin
provides an indication of how much additional time delay the system can tolerate before becoming unstable.
A guideline from Seborg et al.: “A well-tuned controller should have a gain margin between 1.7
and 4.0 and a phase margin between 30° and 45°.”
Bode Stability Criterion The Bode stability criterion is thus: if G(s) has more poles than zeros and
no poles in the right half-plane (excluding the origin), and if G(iω) has only one critical frequency
ωc and one crossover frequency, then the closed loop is stable if and only if |G(iωc)| < 1. This is
equivalent to saying the gain margin must be greater than 1 (i.e., greater than 0 dB).
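A minimal numpy sketch of these definitions (the open-loop transfer function below, a third-order plant with a small time delay and a gain of 2, is an arbitrary example):

```python
import numpy as np

def G_ol(w):
    """Illustrative open-loop frequency response: gain 2, three first-order lags, 0.1 s delay."""
    s = 1j * w
    return 2.0 * np.exp(-0.1 * s) / ((s + 1.0) * (0.5 * s + 1.0) * (0.2 * s + 1.0))

w = np.logspace(-2, 2, 100000)
resp = G_ol(w)
mag = np.abs(resp)
phase = np.degrees(np.unwrap(np.angle(resp)))

i_co = np.argmin(np.abs(mag - 1.0))      # crossover frequency: |G| = 1 (0 dB)
i_cr = np.argmin(np.abs(phase + 180.0))  # critical frequency: phase = -180 degrees

gain_margin = 1.0 / mag[i_cr]            # inverse of the amplitude ratio at the critical frequency
phase_margin = 180.0 + phase[i_co]       # 180 degrees plus the phase at the crossover frequency

print("crossover frequency (rad/s):", w[i_co])
print("critical frequency (rad/s): ", w[i_cr])
print("gain margin:", gain_margin, "   phase margin (deg):", phase_margin)
```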
Nyquist Stability Criterion The idea with the Nyquist criterion is to count the number of times 1 +
G(iω) encircles the origin, equivalent to the number of times G(iω) encircles the point (−1, 0), in the
clockwise direction. Call this number N. Next, find the number of right half-plane poles of the
open-loop transfer function, P. The number of right half-plane poles of the closed-loop transfer
function (that is, of right half-plane roots of its denominator, including effects due to time delays)
is then Z = N + P; for a stable system, Z must equal zero.
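For example, if the open-loop transfer function has one right half-plane pole (P = 1), the closed loop can only be stable if the Nyquist curve encircles (−1, 0) once in the counterclockwise direction (N = −1), so that Z = N + P = 0.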
The derivation of the Nyquist criterion is in Seborg et al. I have borrowed their notation for
most of this writeup, since that is what you are familiar with.