
FREQUENCY-RESPONSE

DESIGN METHOD
Instrumentation Control Engineering

ARCHINUE, JOHN PAULO A.


DEL CASTILLO, DWIGHT EVAN B.
GAÑOLO, RONALD JR. P.
LLAMERA, ROM JEHNRY LOUIS A.
MADRIDEO, JOSEPH P.
NUYLES, TRIXIE MAE B.

BSME 5B

SUBMITTED TO:

ENGR. MARY JOY R. MANDANE


Instructor

Frequency Response – Design Method

1 Introduction
Frequency Response (sometimes called FR) is a key analysis tool for control of some dynamic
systems. This analysis is based on the fact that if the input to a stable process is oscillated at a
frequency ω, the long-time output from the process will also oscillate at a frequency ω, though
with a different amplitude and phase.
Advanced control techniques related to frequency response are crucial to the performance of
devices in the communications industry and other facets of electrical engineering. Imagine what
would happen, for example, if your cellular phone amplified all frequencies, including the high-
frequency noise produced by air moving past the mouthpiece, at the same volume as your voice!
The frequency response technique can also be valuable to mechanical engineers studying
things like airplane wing dynamics or chemical engineers studying diffusion or process dynamics.
As an example of what happens to a system with an oscillatory input, consider a system
consisting of a damped harmonic oscillator (I’ve used symbols usually reserved for masses on
springs here):

\[ m\,\frac{d^2 x}{dt^2} + b\,\frac{dx}{dt} + kx = F(t), \tag{1} \]

where x is the deviation from the equilibrium position, t is time, b is a friction coefficient, and F is the force applied to the spring (the input). The output is simply the measured position, so y = x. We will define the input (for convenience) as u = F/m, the force per unit mass. The transfer function from input to output (assuming zero initial conditions) is thus

\[ P(s) = \frac{Y(s)}{U(s)} = \frac{1}{s^2 + 2\zeta\omega_0 s + \omega_0^2}, \tag{2} \]

where 2ζω0 = b/m and ω0 = √(k/m). If we put an oscillatory signal (such as a sine function) as input to this "plant," we get an expression for the output, y(t). The Laplace transform of u(t) = sin(ωt) is ω/(s² + ω²), so

\[ Y(s) = P(s)\,U(s) = \frac{\omega}{\left(s^2 + 2\zeta\omega_0 s + \omega_0^2\right)\left(s^2 + \omega^2\right)}. \tag{3} \]
We can decompose this equation by partial fractions into the following form:

\[ Y(s) = \frac{A s + B}{s^2 + 2\zeta\omega_0 s + \omega_0^2} + \frac{C\,\omega + D\,s}{s^2 + \omega^2}. \tag{4} \]

The output y(t) is thus of the form

\[ y(t) = \alpha_1 e^{-\beta_1 t}\sin(\gamma_1 t) + \alpha_2 e^{-\beta_2 t}\cos(\gamma_2 t) + C\sin(\omega t) + D\cos(\omega t), \tag{5} \]

where αi, βi, and γi are positive real constants. The first two terms eventually decay away, leaving
the last two terms. These two terms oscillate with frequency ω, as expected from the note above.
We can use some trigonometry to express these in a more obvious way:

\[ C\sin(\omega t) + D\cos(\omega t) = M\sin(\omega t + \varphi), \qquad M = \sqrt{C^2 + D^2}. \tag{6} \]

The phase angle can thus be computed as φ = arctan(D/C). The magnitude, M, and the phase, φ,
are intimately linked with the nature of the transfer function and thus of the plant itself.
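
As a minimal numerical check of this long-time behavior (a sketch only, with illustrative values ζ = 0.2, ω0 = 1 rad/s, and forcing frequency ω = 0.5 rad/s that do not come from any particular example above), one can simulate the plant of Eq. (2) and compare the steady-state amplitude of y(t) with |P(iω)|:

import numpy as np
from scipy import signal

# Illustrative parameters only: zeta = 0.2, w0 = 1 rad/s, forcing at w = 0.5 rad/s.
zeta, w0, w = 0.2, 1.0, 0.5
P = signal.TransferFunction([1.0], [1.0, 2*zeta*w0, w0**2])   # Eq. (2)

t = np.linspace(0.0, 100.0, 20001)
u = np.sin(w * t)                         # oscillatory input u(t)
_, y, _ = signal.lsim(P, U=u, T=t)        # simulate the plant

# After the exponential terms in Eq. (5) decay, y(t) oscillates at the same
# frequency w; its amplitude should match |P(i*w)|.
y_ss = y[t > 80.0]                        # keep only the long-time portion
M_sim = 0.5 * (y_ss.max() - y_ss.min())
M_theory = abs(1.0 / ((1j*w)**2 + 2*zeta*w0*(1j*w) + w0**2))
print(M_sim, M_theory)                    # the two values agree closely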

2 The Impulse Response


What, exactly, does the function P(s), the "plant transfer function," represent?
That is, if we invert the Laplace transform and obtain the function p(t) = L⁻¹[P(s)], what does p(t)
represent? Remember that P(s) may be a composite of several differential equations! However,
inverting P(s) by itself corresponds to the response due to U(s) = 1; we now need only to find the
function u(t) that transforms to this in the Laplace domain.

2.1 Delta Functions


There are two kinds of so-called delta functions that will be of use to us here. The first is the Kronecker delta, defined such that

\[ \delta[n - m] = \begin{cases} 1, & n = m, \\ 0, & n \neq m. \end{cases} \tag{7} \]

The Kronecker delta can be written as δnm instead of δ[n − m]. Either way, it is true that

\[ \sum_{n=-\infty}^{\infty} a_n\,\delta[n - m] = a_m. \tag{8} \]

This function serves to "pull out" the value of the coefficient am in the series an corresponding to the index m. This works well for discrete systems, but this delta function is useless for continuous functions, where integrals instead of sums are involved.
Another type of delta function, introduced by the British physicist Paul Dirac in the 1920s, is called the Dirac delta, defined (by analogy to Equation (7)) such that

\[ \int_{-\infty}^{\infty} f(t)\,\delta(t - \tau)\,dt = f(\tau). \tag{9} \]

Thus the Dirac delta function serves the same purpose for continuous functions as its partner the Kronecker delta serves for discrete functions: it "pulls out" the value of the function at that point in the domain.
The Dirac delta function has a few properties similar to those of the Kronecker delta. First, we can immediately see from Eq. (9) that

\[ \int_{-\infty}^{\infty} \delta(t - \tau)\,dt = 1. \tag{10} \]

Second, the function has the property that

\[ \delta(t - \tau) = \lim_{\Delta t \to 0} \begin{cases} 1/\Delta t, & \tau \le t < \tau + \Delta t, \\ 0, & \text{otherwise,} \end{cases} \tag{11} \]

meaning that this function can be envisioned as the limit of the Kronecker delta function as the size of the discrete unit approaches zero, preserving the area of each discrete cell.
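
A quick numerical illustration of this limiting picture (the test function f(t) = cos t and the location τ = 1 are chosen purely for illustration): approximate the delta function by a rectangle of width ε and unit area, and watch the sifting integral of Eq. (9) converge to f(τ) as ε shrinks.

import numpy as np

f, tau = np.cos, 1.0
t = np.linspace(0.0, 2.0, 200001)         # fine grid over an interval containing tau
for eps in (0.5, 0.1, 0.01):
    # rectangle of height 1/eps on [tau, tau + eps): unit area, shrinking width
    delta_approx = np.where((t >= tau) & (t < tau + eps), 1.0 / eps, 0.0)
    sift = np.trapz(f(t) * delta_approx, t)
    print(eps, sift, f(tau))              # sift -> f(tau) as eps -> 0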

2.2 Laplace Transform of the Dirac Delta Function


By the property in Eq. (9), we immediately see that the Laplace transform of the delta function is
\[ \mathcal{L}[\delta(t - \tau)] = e^{-\tau s}, \tag{12} \]
meaning that L[δ(t)] = 1. From our discussion earlier, this means that the input that produces a
transformed input U(s) = 1 is none other than u(t) = δ(t). In this context, the Dirac delta function
is called the unit impulse function: it can be modeled as a sudden burst of input with integral 1 at
time zero. The output p(t) resulting from a delta function input is thus termed the impulse
response. This means that the transfer function P(s) is the Laplace transform of the impulse
response, p(t).
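
A small numerical sketch of this statement, using the simple first-order plant P(s) = 1/(s + 1) (chosen only because its inverse transform, p(t) = e^{-t}, is easy to check; it is not a system discussed above):

import numpy as np
from scipy import signal

P = signal.TransferFunction([1.0], [1.0, 1.0])        # P(s) = 1/(s + 1)
t, p = signal.impulse(P, T=np.linspace(0.0, 5.0, 501))
print(np.allclose(p, np.exp(-t), atol=1e-3))          # True: p(t) = L^{-1}[P(s)] = e^{-t}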
At this point, it is important to note that multiplication of transfer functions in the Laplace domain corresponds to convolution of functions in the time domain, where the convolution of two functions f and g is defined as

\[ (f \circ g)(t) = \int_0^t f(\tau)\,g(t - \tau)\,d\tau. \tag{13} \]

This integral is the "generalized multiplication" in the time domain, and it works out that

\[ \mathcal{L}[f \circ g] = F(s)\,G(s). \tag{14} \]

It is therefore not true that y(t) = g(t)u(t); it is true that Y(s) = G(s)U(s).
Equation (13) combined with Eq. (12) provides a simple way to derive the Laplace transform of a function with a time delay: rewrite the delayed function as the convolution of a delta function and the un-delayed function via Eq. (9), then take the transform of each function and multiply.
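
As a short worked example of that recipe, for a generic delay θ ≥ 0 (taking f to be zero for negative arguments):

\[
f(t - \theta) = \int_0^t \delta(\tau - \theta)\, f(t - \tau)\, d\tau = \big(\delta(\,\cdot\, - \theta) \circ f\big)(t),
\]
so that, by Eqs. (12) and (14),
\[
\mathcal{L}\big[f(t - \theta)\big] = \mathcal{L}\big[\delta(t - \theta)\big]\,\mathcal{L}\big[f(t)\big] = e^{-\theta s}\, F(s).
\]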

3 Aside: Fourier Series and Fourier Transforms


3.1 Fourier Series
Fourier series are a method to express any function that is continuous on a given interval as a sum of sines and cosines on that interval. Fourier supposed, and later proved, that if a function f(x) is continuous on the interval (0, L), then one can write

\[ f(x) = \sum_{n=-\infty}^{\infty} \big[\, a_n \sin(k_n x) + b_n \cos(k_n x) \,\big], \]

where kn = nπ/L; the sines and cosines are then mutually orthogonal when integrated over the interval. Therefore if we multiply both sides by sin(mπx/L) or cos(mπx/L) and integrate, we obtain

\[ a_m = \frac{1}{L} \int_0^L f(x)\,\sin\!\left(\frac{m\pi x}{L}\right) dx, \tag{15a} \]

\[ b_m = \frac{1}{L} \int_0^L f(x)\,\cos\!\left(\frac{m\pi x}{L}\right) dx. \tag{15b} \]

Some people call Eq. (15) the definition of the Finite Fourier Transform¹ or the Discrete Fourier Transform.

3.2 The Fourier Transform


The purpose of the Fourier transform is similar to that of the Laplace transform in that it seeks to transform a differential equation into an algebraic equation. However, whereas the Laplace transform uses the complex variable s (which has units of inverse time but is otherwise difficult to interpret), the Fourier transform uses complex waves of the form e^{ikx}, or some variation on that form, to write the transformed function as a sum of waves of all frequencies (represented here by the frequency ν or wavenumber ν̄, or their angular equivalents ω and k).
The Fourier Transform can be defined by several integral pairs, depending on the units of the quantity to be transformed and the desired units of the transformed variable. If f is a function of displacement x, with units of length, then the resulting variable can be defined to have units of wavenumber (ν̄) or angular wavenumber (k). In wavenumber units, we can define²

\[ F(\bar{\nu}) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i \bar{\nu} x}\, dx, \tag{16a} \]

\[ f(x) = \int_{-\infty}^{\infty} F(\bar{\nu})\, e^{2\pi i \bar{\nu} x}\, d\bar{\nu}. \tag{16b} \]

1 This is occasionally, and somewhat awkwardly, referred to as the FFT by some authors. That term is (almost) universally reserved for the Fast Fourier Transform, an algorithm used to take discrete Fourier transforms.


2 Note that each of these transforms can be replaced by its complex conjugate with impunity. That is, the forward transform can have −i or i

in the exponent, provided the reverse transform has the opposite sign. The factor of 1/2π can also be moved between the forward and reverse
transforms, depending on the desired units of the transformed function.
In this form, x has units of length, and ν̄ has units of inverse length (wavenumbers). A similar form which is more common is

\[ F(k) = \int_{-\infty}^{\infty} f(x)\, e^{-ikx}\, dx, \tag{17a} \]

\[ f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(k)\, e^{ikx}\, dk, \tag{17b} \]

where k is an angular wavenumber (radians/unit length). The factor of 2π is sometimes moved to the forward transform; the only important part is that the entire transform pair have a factor of 1/2π somewhere in this form. In fact, some authors (especially physicists) define the transform with these units as

\[ F(k) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x)\, e^{-ikx}\, dx, \tag{18a} \]

\[ f(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} F(k)\, e^{ikx}\, dk. \tag{18b} \]
Which you use is largely a matter of preference, provided you remember that the factor of 2π is there to cancel the units of k, which is in radians per unit length. The form defined by Eq. (18) would therefore have a transformed function with somewhat awkward units involving radians^{1/2}.
If the function in question is a function of time (instead of distance), then two other forms come into play. They are analogous to the spatial forms, except that the transformed variable has units of frequency instead of wavenumber. For frequencies in Hz (cycles/second) and analogous units:

\[ F(\nu) = \int_{-\infty}^{\infty} f(t)\, e^{-2\pi i \nu t}\, dt, \tag{19a} \]

\[ f(t) = \int_{-\infty}^{\infty} F(\nu)\, e^{2\pi i \nu t}\, d\nu. \tag{19b} \]

Similarly, for angular frequencies:

\[ F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt, \tag{20a} \]

\[ f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, e^{i\omega t}\, d\omega. \tag{20b} \]

We will take Equations (20) as the definition of the Fourier Transform, F[f(t)], for functions in the time domain.
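
As a quick numerical sanity check of the convention in Eq. (20a), one can evaluate the transform integral on a grid for a test function; here f(t) = e^{-|t|} is used purely as an example, for which Eq. (20a) gives F(ω) = 2/(1 + ω²).

import numpy as np

t = np.linspace(-50.0, 50.0, 200001)      # wide grid; f is negligible at the edges
f = np.exp(-np.abs(t))
for w in (0.0, 1.0, 2.5):
    F_num = np.trapz(f * np.exp(-1j * w * t), t)   # Eq. (20a), done numerically
    print(w, F_num.real, 2.0 / (1.0 + w**2))       # numerical vs. analytic value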

4 Frequency Response from the Transfer Function


Recall that the open-loop transfer function G(s) = P(s)C(s), where C(s) is the transfer function of
the controller (from error to input, as it were). Also recall (Section 2) that the transfer function is

the Laplace transform of the impulse response. The transfer function can thus be found by the
definition of the Laplace transform:

\[ G(s) = \mathcal{L}[g(t)] = \int_0^\infty g(t)\, e^{-st}\, dt. \tag{21} \]
Strictly speaking, this is the unilateral Laplace transform; the bilateral Laplace transform is
integrated over the entire real axis. Since outputs are assumed to be at steady-state (or at least
unknown) prior to time t = 0, these are equivalent for our purposes. If we restrict s to be purely
imaginary,³ we can let s = iω and Eq. (21) becomes

\[ G(i\omega) = \int_0^\infty g(t)\, e^{-i\omega t}\, dt = \int_{-\infty}^{\infty} g(t)\, e^{-i\omega t}\, dt = \mathcal{F}[g(t)]. \tag{22} \]
Comparing this to Eq. (20a) leads us to the equality on the right (recalling that g(t) = 0 for t < 0).
In short, G(iω) is the Fourier transform of the impulse response: the value of G(iω) at each value
of ω represents the contribution of a wave of frequency ω to the value of g(t). If we eliminate
the contributions from all other waves (say, by exciting at that frequency and allowing the other
states to relax away), the Fourier transform tells us the final response of the system to that
oscillation, including its magnitude and phase-shift.
To obtain the magnitude and phase directly from the transfer function G(s) (the open-loop transfer function), we separate G(iω) into real and imaginary parts⁴ (see Seborg et al., pages 337–338):

\[ G(i\omega) = R(\omega) + i\,I(\omega). \]

The amplitude ratio (AR) and phase lag (φ) of the long-time oscillations are then

\[ \mathrm{AR} = |G(i\omega)| = \sqrt{R^2 + I^2}, \tag{23} \]

\[ \varphi = \angle G(i\omega) = \arctan\!\left(\frac{I}{R}\right). \tag{24} \]

Note that the amplitude of the actual oscillation will be a·AR, where a is the amplitude of the exciting oscillation.
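
A short sketch of this calculation for an assumed first-order plant G(s) = K/(τs + 1) with illustrative values K = 2 and τ = 5 (not a system from above); arctan2 is used instead of a plain arctangent so the phase lands in the correct quadrant.

import numpy as np

K, tau = 2.0, 5.0
w = np.logspace(-2, 2, 5)                 # a few frequencies, rad/time
G = K / (tau * (1j * w) + 1.0)            # G(i*omega)
R, I = G.real, G.imag
AR  = np.sqrt(R**2 + I**2)                # Eq. (23)
phi = np.degrees(np.arctan2(I, R))        # Eq. (24), in degrees
print(AR)                                 # equals K / sqrt(1 + (tau*w)^2)
print(phi)                                # equals -atan(tau*w), in degrees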

3 I mean this in the mathematical sense, not the metaphysical sense, of course.
4 Hint: multiply the numerator and denominator by the complex conjugate of the denominator.
5 Analysis Tools
5.1 Bode Plots
Named for Hendrik Bode (properly pronounced "boh-duh," though nearly everyone says "boh-dee"), the Bode diagram is one possible representation of the long-time modulus and phase of the output in response to a sinusoidal input. It consists of two stacked plots with frequency⁵ on the horizontal axis.
The upper plot is a log-log plot of magnitude against frequency. The magnitude is nearly always displayed in units of decibels (dB), such that

\[ M\ (\mathrm{dB}) = 20 \log_{10} M. \]

Note that a decibel is usually defined in this way, though occasionally the factor in front is 10 instead of 20: the decibel was originally invented to describe loss of power over telephone cables, and power is proportional to the square of amplitude (so the factor of 10 is for ratios of power, while the factor of 20 is for ratios of amplitude). It is worth checking which definition the authors are using when you find such a factor in a paper in the literature.
The lower plot is a log-linear plot of phase lead against frequency. The phase is typically plotted in degrees. The phase lead is usually negative (the output usually lags behind the error signal rather than leading it).
Examples of Bode diagrams are shown in Figures 1–6.
The Bode plot is very useful for determining various aspects of a process. For example, it is desirable in most processes to have good disturbance rejection at high frequencies (that is, the process is not affected too strongly by a noisy signal). This corresponds to a fall-off in magnitude on the right-hand side of the Bode plot. The phase plot gives information about the order of the process (if such information is unknown): the general rule is that the phase will shift by −90° for each first-order term in the plant and/or controller. Thus, a third-order system would exhibit a phase shift of −270° at high frequencies. At intermediate frequencies, however, the decay times of each of the individual first-order terms may or may not be significant, so the phase lag usually follows a "step" pattern, with the number of steps being equal to the order of the system.

5 That’s angular frequency!
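
A Bode diagram along these lines can be sketched numerically; the third-order plant below, G(s) = 1/((s + 1)(2s + 1)(5s + 1)), is only an assumed example (not necessarily the plant behind Figure 3), but it shows the expected fall-off in magnitude and the approach to a −270° phase shift at high frequency.

import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

num = [1.0]
den = np.polymul([1.0, 1.0], np.polymul([2.0, 1.0], [5.0, 1.0]))
G = signal.TransferFunction(num, den)
w, mag_dB, phase_deg = signal.bode(G, w=np.logspace(-2, 2, 500))

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.semilogx(w, mag_dB)                   # magnitude in dB (20*log10 M)
ax2.semilogx(w, phase_deg)                # phase in degrees; tends to -270 deg
ax1.set_ylabel('Magnitude (dB)')
ax2.set_ylabel('Phase (deg)')
ax2.set_xlabel('Frequency (rad/s)')
plt.show()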


Figure 1: Open-loop Bode plot for a first-order system defined by the transfer function .

Figure 2: Open-loop Bode plots for second-order systems defined by the transfer function
. Left: Overdamped (ζ = 3). Right: underdamped
(ζ = 0.25). The maximum on the underdamped plot corresponds to the resonance frequency of
this second-order system.

Figure 3: Open-loop Bode plot for the third-order system defined by the transfer
function.


Figure 4: Open-loop Bode plot for an integrating controller element.


Figure 5: Open-loop Bode plot for a differentiating controller element.

6 Bode Diagrams for Closed-Loop Processes


Usually Bode diagrams are constructed for open-loop processes; this gives information about the relative gains and such. For some processes, however, it is occasionally necessary to resort to constructing the diagram for the closed-loop process. In this case, you oscillate not the error but the set point.
If the plant, P(s), is described by the ratio of two polynomials in s, then P(s) = Pn−1(s)/Pn(s). The transfer function from the set point, r, to the output, y, is

\[ \frac{Y(s)}{R(s)} = \frac{P(s)\,C(s)}{1 + P(s)\,C(s)} = \frac{P_{n-1}(s)\,C(s)}{P_n(s) + P_{n-1}(s)\,C(s)}. \tag{25} \]
If the controller is a simple PI or PID controller (with the integrator on), then C(s) = Kc(1 + 1/(τis) + τds). Multiplying the top and bottom of the closed-loop transfer function by τis yields

\[ \frac{Y(s)}{R(s)} = \frac{K_c\left(\tau_i\tau_d s^2 + \tau_i s + 1\right)P_{n-1}(s)}{\tau_i s\,P_n(s) + K_c\left(\tau_i\tau_d s^2 + \tau_i s + 1\right)P_{n-1}(s)}. \tag{26} \]
The apparent order of the process is now n + 1. However, the right choice of gains can cause very, very odd behavior in even simple plants, as can be seen in Figure 6. The errant behavior is due to zeros, artifacts of the integrator and differentiator, that were not present in the open-loop transfer function.
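
The extra zeros are easy to see numerically. The sketch below builds the closed-loop transfer function of Eq. (26) for an assumed second-order plant P(s) = 1/(s² + 2s + 1) (the plant of Figure 6 is not reproduced here) using the gains quoted in the Figure 6 caption:

import numpy as np
from scipy import signal

Kc, tau_i, tau_d = 1.0, 10.0, 0.5
p_num, p_den = [1.0], [1.0, 2.0, 1.0]               # assumed plant P(s)

c_num = [Kc * tau_i * tau_d, Kc * tau_i, Kc]        # Kc*(tau_i*tau_d*s^2 + tau_i*s + 1)
c_den = [tau_i, 0.0]                                # tau_i * s

ol_num = np.polymul(c_num, p_num)                   # open-loop numerator C*P
ol_den = np.polymul(c_den, p_den)                   # open-loop denominator
cl = signal.TransferFunction(ol_num, np.polyadd(ol_den, ol_num))   # Eq. (26)

print(np.roots(cl.num))   # closed-loop zeros introduced by the controller
print(np.roots(cl.den))   # closed-loop poles; the order is now n + 1 = 3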

6.1 Nyquist Plots
Nyquist plots provide an alternative to Bode diagrams in which the frequency does not appear explicitly: the Nyquist plot is a polar plot of magnitude and phase in which the frequency is an implicit parameter. The Nyquist diagram is simpler than the Bode diagram, and is often used for multiple-loop control schemes (such as cascade control) for that reason.
The idea of the Nyquist plot is to plot G(iω) in polar coordinates with r = |G(iω)| and θ = ∠G(iω). Another equivalent method is to plot Re[G(iω)] on the horizontal axis and Im[G(iω)] on the vertical axis. Note that, unlike Bode plots, Nyquist plots must be plotted using the open-loop transfer functions in order to provide any sort of useful information.
The Nyquist plot provides an immediate determination of loop stability for proposed
interacting loops, for example. These conditions are discussed in the next section. Some examples
of Nyquist plots are shown in Figures 7–9.
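
A minimal way to sketch such a plot is simply to evaluate the open-loop G(iω) on a frequency grid and plot its real part against its imaginary part; the stable plant below, G(s) = 1/(s² + s + 1), is an assumed example rather than the system of Figure 7.

import numpy as np
import matplotlib.pyplot as plt

w = np.logspace(-2, 2, 2000)
s = 1j * w
G = 1.0 / (s**2 + s + 1.0)

plt.plot(G.real, G.imag)            # positive frequencies
plt.plot(G.real, -G.imag, '--')     # mirror image for negative frequencies
plt.plot(-1.0, 0.0, 'rx')           # the critical point (-1, 0)
plt.xlabel('Real Axis')
plt.ylabel('Imaginary Axis')
plt.show()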

Figure 6: Closed-loop Bode plots for a second-order plant with a P, PI, and PID controller,
respectively. The plant is given by . The gains are Kp = 1, τi = 10 s, and τd = 0.5 s.


Figure 7: Nyquist plot for a second-order stable system defined by the transfer function
.

Figure 8: Nyquist plot for a second-order unstable system defined by the transfer function
.


Figure 9: Nyquist plot and corresponding Bode plot for a second-order stable system defined by
the transfer function . The time delay actually creates a spiral inward (which
Matlab does a bad job of rendering due to discretization).
6.2 Stability Criteria
The analysis diagrams presented in this section can be used to determine if a proposed control
scheme will be asymptotically stable. These methods are generally preferable to other criteria
such as the Routh criterion (see Seborg et al., Chapter 11), as they handle time delays exactly and
provide an estimate of the relative stability of a process (how stable the system is, not just
whether it is stable).

Gain and Phase Margins The idea with the Bode criterion is to find the roots of 1 + G(s) = 0, the characteristic equation. Each root must therefore satisfy G(s) = −1 = e^{iπ}, which corresponds to a magnitude of 1 and a phase of −180°. We thus define the crossover frequency as the frequency at which the magnitude of the transfer function is exactly 1 (0 dB). We also define the critical frequency as the frequency at which the phase lag is equal to 180°. The gain margin is defined as the inverse of the amplitude ratio at the critical frequency. The phase margin is 180° plus the phase angle at the crossover frequency. The gain margin is a measure of how much the overall open-loop gain can be increased before the process becomes unstable. The phase margin provides an indication of how much time delay the system can tolerate before becoming unstable. A guideline from Seborg et al.: "A well-tuned controller should have a gain margin between 1.7 and 4.0 and a phase margin between 30° and 45°."
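
These definitions translate directly into a numerical estimate. The open-loop system below, G(s) = 2e^{−0.5s}/((s + 1)(2s + 1)), is an assumed example chosen so that both margins exist; the crossover and critical frequencies are located on a dense frequency grid.

import numpy as np

w = np.logspace(-2, 2, 200000)
s = 1j * w
G = 2.0 * np.exp(-0.5 * s) / ((s + 1.0) * (2.0 * s + 1.0))

AR = np.abs(G)
phase = np.unwrap(np.angle(G))                  # radians, continuous in w

i_co = np.argmin(np.abs(AR - 1.0))              # crossover frequency: |G| = 1
i_cr = np.argmin(np.abs(phase + np.pi))         # critical frequency: phase = -180 deg

gain_margin  = 1.0 / AR[i_cr]                   # inverse of AR at the critical frequency
phase_margin = 180.0 + np.degrees(phase[i_co])  # 180 deg plus phase at crossover
print(gain_margin, phase_margin)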

Bode Stability Criterion The Bode stability criterion is thus: if G(s) has more poles than zeros and no poles in the right half-plane (excluding the origin), and if G(iω) has only one critical frequency ωc and one crossover frequency, then the closed loop is stable if and only if |G(iωc)| < 1. This is equivalent to saying the gain margin must be greater than 1 (that is, greater than 0 dB).

Nyquist Stability Criterion The idea with the Nyquist criterion is to count the number of times 1 + G(iω) encircles the origin, equivalent to the number of times G(iω) encircles the point (−1, 0), in the clockwise direction; call this number N. Next, find the number of right half-plane poles of the open-loop transfer function, P, which will also be poles of the closed-loop transfer function. The total number of right half-plane poles of the closed-loop transfer function (including effects due to time delays) is then Z = N + P, and for a stable closed-loop system Z must be zero.
The derivation of the Nyquist criterion is in Seborg et al. I have borrowed their notation for
most of this writeup, since that is what you are familiar with.
