Chapter 5: Discrete Sampling and Analysis of Time-Varying Signals
Lecture Notes
Unlike analog recording systems, which can record signals continuously in time, digital data-
acquisition systems record signals at discrete times and record no information about the signal
in between these times. Unless proper precautions are taken, this discrete sampling can
cause the experimenter to reach incorrect conclusions about the original analog signal. In this
chapter we introduce restrictions that must be placed on the signal and the discrete sampling
rate. In addition, techniques are introduced to determine the frequency components of time-
varying signals (spectral analysis), which can be used to specify and evaluate instruments and
also determine the required sampling rate and filtering.
To illustrate, consider a 10-Hz sine wave that is sampled at a constant rate, expressed in samples per second (normally expressed in hertz). Figures 5.2 to 5.5 show the sampled values for sampling rates
of 5, 11, 18, and 20.1 samples per second. To infer the form of the original signal, the sample
data points have been connected with straight-line segments.
In examining the data in Figure 5.2, with the sampling rate of 5 Hz, it is reasonable to
conclude that the sampled signal has a constant (dc) value. However, we know that the
original signal is, in fact, a 10-Hz sine wave. The amplitude of the sampled data is also misleading: it
depends on when the first sample was taken. This behavior (a constant value of the output)
occurs if the wave is sampled at any rate that is an integer fraction of the base frequency fm
(e.g., fm, fm/2, fm/3, etc.).
The data in Figure 5.3, sampled at 11 Hz, also appear to be a sine wave, but only one cycle
appears in the time during which 10 cycles of the original signal occurred. The apparent frequency, 1 Hz, is the
difference between the original signal frequency, 10 Hz, and the sampling rate, 11 Hz. The
data in Figure 5.4, sampled at 18 Hz, also represent a periodic wave. The apparent frequency
is 8 Hz, the difference between the sampling rate and the signal frequency, and is again
incorrect relative to the input frequency. These incorrect frequencies that appear in the
output data are known as aliases. Aliases are false frequencies that appear in the output data,
that are simply artifacts of the sampling process, and that do not in any manner occur in the
original data.
Figure 5.5, with a sampling rate of 20.1 Hz, can be interpreted as showing a frequency of 10
Hz, the same as the original data. It turns out that for any sampling rate greater than twice fm
the lowest apparent frequency will be the same as the actual frequency. This restriction on
the sampling rate is known as the sampling-rate theorem.
This theorem simply states that the sampling rate must be greater than twice the highest-
frequency component of the original signal in order to reconstruct the original waveform
correctly. In equation form this is expressed as
fs > 2fm (5.1)
where fm is the signal frequency (or the maximum signal frequency if there is more than one
frequency in the signal) and fs is the sampling rate. The theorem also specifies methods that
can be used to reconstruct the original signal. The amplitude in Figure 5.5 is not correct, but
this is not a major problem, as discussed in Section 5.4.
The sampling-rate theorem has a well-established theoretical basis. There is some evidence
that the concept dates back to the nineteenth-century mathematician Augustin Cauchy
(Marks, 1991). The theorem was formally introduced into modern technology by Nyquist
(1928) and Shannon (1948) and is fundamental to communication theory. The theorem is
often known by the names of the latter two scientists. A comprehensive but advanced
discussion of the subject is given by Marks (1991). In the design of an experiment, to eliminate
alias frequencies in the data sampled, it is necessary to determine a sampling rate and
appropriate signal filtering. This process will be discussed in some detail later in the chapter.
Even if the signal is correctly sampled (i.e., at a frequency greater than twice the signal
frequency), the data can be interpreted to be consistent with specific frequencies that are
higher than the signal frequency. For example, Figure 5.6 shows the same data as in Figure
5.5: a 10-Hz signal sampled at 20.1 samples per second. The sampled data are shown as the
small squares. However, these data are consistent not only with a 10-Hz sine wave but also, in this
case, with a 30.1-Hz sine wave.
Actually, there are an infinite number of higher frequencies that are consistent with the data.
If, however, the requirements of the sampling-rate theorem have been met (perhaps with
suitable filtering), there will be no frequencies less than half the sampling rate that are
consistent with the data except the correct signal frequency. The higher frequencies can be
eliminated from consideration since it is known that they don't exist.
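This behavior is easy to verify numerically. The short sketch below is a minimal illustration (our own code, not from the text); it samples both a 10-Hz and a 30.1-Hz sine wave at 20.1 samples per second and shows that the two sets of samples are indistinguishable.

```python
import numpy as np

fs = 20.1          # sampling rate, samples per second
n = np.arange(50)  # sample indices
t = n / fs         # sample times

s10 = np.sin(2 * np.pi * 10.0 * t)    # 10-Hz sine sampled at 20.1 S/s
s301 = np.sin(2 * np.pi * 30.1 * t)   # 30.1-Hz sine sampled at 20.1 S/s

# The two sets of samples coincide (to round-off), so the sampled data alone
# cannot distinguish 10 Hz from 30.1 Hz (or, in general, 10 + k*20.1 Hz).
print(np.max(np.abs(s10 - s301)))     # on the order of 1e-13
```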
In some cases, the requirements of the sampling-rate theorem may not have been met, and
it is desired to estimate the lowest alias frequency. The lowest is usually the most obvious in
the sampled data. A simple method to estimate alias frequencies involves the folding diagram
as shown in Figure 5.7 [Taylor (1990)]. This diagram enables one to predict the alias
frequencies based on a knowledge of the signal frequency and the sampling rate. To use this
diagram, it is necessary to compute a frequency fN, called the folding frequency, which is half the
sampling rate: fN = fs/2. The use of this diagram is demonstrated in Example 5.1.
Example 5.1
Compute the lowest alias frequencies for the following cases:
(a) fm = 80 Hz and fs = 100 Hz.
(b) fm = 100 Hz and fs = 60 Hz.
(c) fm = 100 Hz and fs = 250 Hz.
Solution:
(a) fN = 100/2 = 50 Hz
fm/fN = 80/50 = 1.6
Find fm/fN on the folding diagram, draw a vertical line down to the intersection with line AB,
and read 0.4 on line AB. The lowest alias frequency can then be determined from
fa = (fa/fN) fN = 0.4 × 50 = 20 Hz (a false frequency)
(b) fN = 60/2 = 30 Hz
fm/fN = 100/30 = 3.333
Finding 3.333 on the folding diagram and drawing the vertical line down to AB, we find
fa/fN = 0.667. The lowest alias frequency is then
fa = 0.667 × 30 = 20 Hz (a false frequency)
(c) fN = 250/2 = 125 Hz
fm/fN = 100/125 = 0.8
This falls on the line AB, so fa/fN = 0.8 and fa = 0.8 × 125 = 100 Hz, which is the same as the signal
frequency.
Comment: In part (a), the sampling frequency is between the signal frequency and the
minimum frequency for correct sampling. The lowest alias frequency is the difference
between the sampling frequency and the signal frequency.
In part (b), the sampling frequency is less than the signal frequency. The folding diagram is the
simplest method to determine the lowest alias frequency.
In part (c), the requirement of the sampling-rate theorem has been met, and the alias
frequency is in fact the signal frequency.
We will always find a lowest frequency using the folding diagram, whether it is a correct
frequency or a false alias. To know that the frequency is correct, we must ensure that the
sampling rate is greater than twice the actual frequency, usually by using a filter to remove any
frequency higher than half the sampling rate.
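The folding-diagram procedure of Example 5.1 can also be carried out arithmetically: the signal frequency is folded about multiples of the sampling rate and then about fN. The sketch below is our own illustration (the function name lowest_alias is not from the text); it reproduces the three results of the example.

```python
def lowest_alias(fm, fs):
    """Lowest apparent (alias) frequency when a component at fm Hz
    is sampled at fs samples per second (folding about fN = fs/2)."""
    r = fm % fs              # frequency shifted into one sampling band
    return min(r, fs - r)    # fold about the Nyquist (folding) frequency

# The three cases of Example 5.1:
print(lowest_alias(80, 100))    # 20 Hz  (false frequency)
print(lowest_alias(100, 60))    # 20 Hz  (false frequency)
print(lowest_alias(100, 250))   # 100 Hz (the true signal frequency)
```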
There are two times in an experimental program when it may be necessary to perform spectral
analysis on a waveform. The first time is in the planning stage and the second is in the final
analysis of the measured data. In planning experiments in which the data vary with time, it is
necessary to know, at least approximately, the frequency characteristics of the measurand in
order to specify the required frequency response of the transducers and other instruments
and to determine the sampling rate required.
While the actual signal from a planned experiment will not be known, data from similar
experiments may be used to determine frequency specifications.
In many time-varying experiments, the frequency spectrum of a signal is one of the primary
results. In structural vibration experiments, for example, acceleration of the vibrating body
may be a complicated function resulting from various resonant frequencies of the system. The
measurement system is thus designed to respond properly to the expected range of
frequencies, and the resulting data are analyzed for the specific frequencies of interest.
To examine the methods of spectral analysis, we first look at a relatively simple waveform, the
1000-Hz sawtooth wave shown in Figure 5.9. At first, one might think that this wave
contains only a single frequency, 1000 Hz. However, it is much more complicated, containing
all frequencies that are an odd-integer multiple of 1000, such as 1000, 3000, and 5000 Hz. The
method used to determine these component frequencies is known as Fourier-series analysis.
The lowest frequency in the periodic wave shown in Figure 5.9, f0 = 1000 Hz, is called the
fundamental or the first harmonic frequency. The fundamental frequency has period T0 and
angular frequency ω0. (Note: the angular frequency is ω = 2πf, where f = 1/T.) As discussed by
Den Hartog (1956), Churchill (1987), and Kamen (1990), any periodic function f(t) can be
represented by the sum of a constant and a series of sine and cosine waves. In symbolic form,
this is written
f(t) = a0 + a1 cos ω0t + a2 cos 2ω0t + … + an cos nω0t
     + b1 sin ω0t + b2 sin 2ω0t + … + bn sin nω0t (5.2)
The constant a0 is simply the time average of the function over the period T. This can be
evaluated from
a0 = (1/T) ∫[0, T] f(t) dt (5.3)
The constants an can be evaluated from
an = (2/T) ∫[0, T] f(t) cos(nω0t) dt (5.4)
and the constants bn can be evaluated from
bn = (2/T) ∫[0, T] f(t) sin(nω0t) dt (5.5)
Although it can be tedious, the constants an and bn can be computed in a straightforward
manner for any periodic function. Of course, Eq. (5.2) is an infinite series, so the constants a
and b can only be determined for a limited number of terms. Since f(t) cannot, in general, be
expressed in equation form, it is normal to evaluate Eqs. (5.3), (5.4), and (5.5) by means of
numerical methods.
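As a concrete sketch of such a numerical evaluation (our own illustration, not the text's algorithm), the code below approximates Eqs. (5.3) through (5.5) with a simple rectangle rule. A unit-amplitude odd triangle wave at 1000 Hz is used as a stand-in for the waveform of Figure 5.9, whose exact shape is not reproduced here; the names f0, M, and w0 are our own.

```python
import numpy as np

f0 = 1000.0                         # fundamental frequency, Hz
T = 1.0 / f0                        # period
M = 4096                            # samples per period for the integration
t = (np.arange(M) + 0.5) * T / M    # midpoint-rule sample times
w0 = 2 * np.pi * f0

# Odd triangle wave of unit amplitude (stand-in for the wave of Figure 5.9)
f = (2 / np.pi) * np.arcsin(np.sin(w0 * t))

a0 = np.mean(f)                                    # Eq. (5.3): time average
for n in range(1, 8):
    an = (2 / M) * np.sum(f * np.cos(n * w0 * t))  # Eq. (5.4), rectangle rule
    bn = (2 / M) * np.sum(f * np.sin(n * w0 * t))  # Eq. (5.5), rectangle rule
    print(n, round(an, 4), round(bn, 4))
# All a's are ~0 (the function is odd); only the odd-n b's are nonzero.
```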
The function f(t) is considered to be an even function if it has the property that f(t) = f(-t); f(t)
is considered to be an odd function if f(t) = -f(-t). If f(t) is even, it can be represented entirely
with a series of cosine terms, which is known as a Fourier cosine series. If f(t) is odd, it can be
represented entirely with a series of sine terms, which is known as a Fourier sine series. Many
functions are neither even nor odd and require both sine and cosine terms. If Eqs. (5.3)
through (5.5) are applied to the sawtooth wave in Figure 5.9 (either using direct integration
or a numerical method), it will be found that all the a's are zero (it is an odd function) and that,
of the first seven b's, only the odd-numbered ones (b1, b3, b5, and b7) are nonzero.
It is not surprising that the a's are zero since the wave in Figure 5.9 looks much more like a
sine wave than a cosine wave. b1, b3, b5, and b7 are the amplitudes of the first, third, fifth, and
seventh harmonics of the function f(t). These have frequencies of 1000, 3000, 5000, and 7000
Hz, respectively. It is useful to present the amplitudes of the harmonics on a plot of amplitude
versus frequency as shown in Figure 5.10. As can be seen, harmonics beyond the fifth have a
very low amplitude. Often, it is the energy content of a signal that is important, and since the
energy is proportional to the amplitude squared, the higher harmonics contribute very little
energy.
Figure 5.11 shows the first and third harmonics and their sum compared with the function f(t).
As can be seen, the sum of the first and third harmonics does a fairly good job of representing
the sawtooth wave. The main problem is apparent as a rounding near the peak - a problem
that would be reduced if the higher harmonics (e.g., fifth, seventh, etc.) were included. Fourier
analysis of this type can be very useful in specifying the frequency response of instruments. If,
for example, the experimenter considers the first-plus-third harmonics to be a satisfactory
approximation to the sawtooth wave, the sensing instrument need only have an upper
frequency limit of 3000 Hz. If a better representation is required, the experimenter can
examine the effects of higher harmonics on the representation of the wave and then select a
suitable transducer.
One problem associated with Fourier-series analysis is that it appears to only be useful for
periodic signals. In fact, this is not the case and there is no requirement that f(t) be periodic
to determine the Fourier coefficients for data sampled over a finite time. We could force a
general function of time to be periodic simply by duplicating the function in time as shown in
Figure 5.12 for the function in Figure 5.8. If we directly apply Eqs. (5.3), (5.4), and (5.5) to a
function taken over a time period T, the resulting Fourier series will have an implicit
fundamental angular frequency ω0 equal to 2π/T. However, if the resulting Fourier series were
used to compute values of f(t) outside the time interval 0 to T, it would result in values that would
not necessarily (and probably would not) resemble the original signal. The analyst must be
careful to select a large enough value of T so that all wanted effects can be represented by
the resulting Fourier series. An alternative method of finding the spectral content of signals is
that of the Fourier transform, discussed next.
The sine and cosine functions in Eq. (5.2) can be written in terms of complex exponentials:
cos nω0t = (e^(jnω0t) + e^(−jnω0t))/2   sin nω0t = (e^(jnω0t) − e^(−jnω0t))/(2j) (5.6)
where j = √-1. These relationships can be used to transform Eq. (5.2) into a complex
exponential form of the Fourier series. The resulting exponential form for the Fourier series
can be stated as
f(t) = Σ (n = −∞ to +∞) cn e^(jnω0t) (5.7)
where the complex coefficients cn are given by
cn = (1/T) ∫[0, T] f(t) e^(−jnω0t) dt (5.8)
Each coefficient, cn, is, in general, a complex number with a real and an imaginary part.
In Section 5.2 we showed how a portion of a nonperiodic function can be represented by a
Fourier series by assuming that the portion of duration T is repeated periodically.
The fundamental angular frequency, ω0, is determined by this selected portion of the signal
(ω0 = 2π/T). If a longer value of T is selected, the lowest frequency will be reduced. This
concept can be extended to make T approach infinity and the lowest frequency approach zero.
In this case, frequency becomes a continuous function. It is this approach that leads to the
concept of the Fourier transform. The Fourier transform of a function f(t) is defined as
F(ω) = ∫[−∞, +∞] f(t) e^(−jωt) dt (5.9)
F(ω) is a continuous, complex-valued function. Once a Fourier transform has been
determined, the original function f(t) can be recovered from the inverse Fourier transform:
f(t) = (1/2π) ∫[−∞, +∞] F(ω) e^(jωt) dω (5.10)
In experiments, a signal is measured only over a finite time period, and with computerized
data-acquisition systems, it is measured only at discrete times. Such a signal is not well suited
to analysis by the continuous Fourier transform. For data taken at discrete times over a finite
time interval, the discrete Fourier transform (DFT) has been defined as
F(kΔf) = Σ (n = 0 to N−1) f(nΔt) e^(−j2πnk/N),   k = 0, 1, ..., N − 1 (5.11)
where N is the number of samples taken during a time period T. The increment of f, Δf, is equal
to 1/T, and the increment of time (the sampling period Δt) is equal to T/N.
The F’s are complex coefficients of a series of sinusoids with frequencies of 0, Δf, 2Δf, 3Δf, ... ,
(N - 1)Δf. The amplitude of F for a given frequency represents the relative contribution of that
frequency to the original signal.
Only the coefficients for the sinusoids with frequencies between 0 and (N/2 - 1) Δf are used in
the analysis of the signal. The coefficients of the remaining frequencies provide redundant
information and have a special meaning, as discussed by Bracewell (2000).
The requirements of the Shannon sampling-rate theorem also prevent the use of any
frequencies above (N/2)Δf. The sampling rate is N/T, so the maximum allowable frequency in
the sampled signal will be less than one-half this value, or N/2T = NΔf/2.
The original signal can also be recovered from the DFT using the inverse discrete Fourier
transform, given by
f(nΔt) = (1/N) Σ (k = 0 to N−1) F(kΔf) e^(j2πnk/N),   n = 0, 1, ..., N − 1 (5.12)
The F values from Eq. (5.11) can be computed by directly evaluating the summation. The amount of
computer time required is roughly proportional to N2. For large values of N, this can be
prohibitive. A sophisticated algorithm called the Fast Fourier Transform (FFT) has been
developed to compute discrete Fourier transforms much more rapidly.
This algorithm requires a time proportional to N log2 N to complete the computations, much
less than the time for direct evaluation. The only restriction is that the value of N be a power
of 2: for example, 128, 256, 512, and so on. Programs to perform fast Fourier transforms are
widely available and are included in major spreadsheet programs. The fast Fourier transform
algorithm is also built into devices called spectrum analyzers, which can discretize an analog
signal and use the FFT to determine its frequency content.
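A direct implementation of Eq. (5.11) makes the N² cost of the summation visible and can be checked against a library FFT. The sketch below is our own minimal illustration (the function name dft and the test signal x are not from the text).

```python
import numpy as np

def dft(samples):
    """Direct evaluation of Eq. (5.11): F(k df) = sum_n f(n dt) e^{-j 2 pi n k / N}.
    The cost of this loop is proportional to N**2."""
    N = len(samples)
    n = np.arange(N)
    F = np.zeros(N, dtype=complex)
    for k in range(N):
        F[k] = np.sum(samples * np.exp(-2j * np.pi * n * k / N))
    return F

# Check against the FFT, which needs a time proportional only to N log2 N.
x = np.random.default_rng(0).standard_normal(128)
print(np.allclose(dft(x), np.fft.fft(x)))   # True
```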
It is useful to examine some of the characteristics of the discrete Fourier transform. To do this,
we will use as an example a function that has 10- and 15-Hz components:
f(t) = 2 sin(2π10t) + sin(2π15t) (5.13)
This function is plotted in Figure 5.13. Since this function is composed of two sine waves with
frequencies of 10 and 15 Hz, we would expect to see large values of F at these frequencies in
the DFT analysis of the signal.
If we discretize one second of the signal into 128 samples and perform an FFT, the result will
be as shown in Figure 5.14, which is a plot of the magnitude of the DFT component, |F(kΔf)|,
versus the frequency, kΔf. As expected, the magnitudes of F at f = 10 Hz and f = 15 Hz are dominant.
However, there are some adjacent frequencies showing appreciable magnitudes. In this case,
these significant magnitudes of F at frequencies not in the signal are due to the relatively small
number of points used to discretize the signal. If we use N = 512, the situation improves
significantly, as shown in Figure 5.15.
It can be noticed that the magnitude of |F| for the 10-Hz component is different in Figures 5.14 and 5.15
and does not equal the amplitude of the first term on the right side of Eq. (5.13). This is a
consequence of the definition of the discrete Fourier transform and the FFT algorithm. To get
the correct amplitude of the input sine wave, |F| should be multiplied by 2/N. In Figure 5.14,
the value of |F| for 10 Hz is 128. If we multiply this by 2/N, we get 128 × 2/128 = 2, the same
as in Eq. (5.13). Similarly, for Figure 5.15, |F| is 512, so the amplitude of the input sine wave is 512
× 2/512 = 2. In many cases (finding a natural frequency, for example), only the relative
amplitudes of the Fourier components are important, so this conversion step is not necessary.
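This scaling is easy to reproduce with a library FFT. The sketch below is our own illustration: the 10-Hz amplitude of 2 comes from the discussion above, while the unit amplitude assumed for the 15-Hz term is our assumption rather than a value stated in the text.

```python
import numpy as np

N = 512                        # number of samples (a power of 2)
T = 1.0                        # record length, s
t = np.arange(N) * T / N       # sample times, dt = T/N

# Two-component test signal: 10-Hz amplitude of 2 (from the text);
# unit amplitude for the 15-Hz term is an assumption here.
x = 2 * np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 15 * t)

F = np.fft.fft(x)              # complex DFT coefficients
freqs = np.arange(N) / T       # k * df, with df = 1/T = 1 Hz

amp = 2 * np.abs(F)[: N // 2] / N   # scale |F| by 2/N to recover amplitudes
print(freqs[10], amp[10], freqs[15], amp[15])   # ~2.0 at 10 Hz, ~1.0 at 15 Hz
```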
It should be noted that the requirements of the Shannon sampling-rate theorem are satisfied
for the results shown in both Figures 5.14 and 5.15. For Figure 5.14, the sampling rate is 128
samples per second, so the maximum frequency in the sampled signal should be less than 64 Hz. The
actual maximum frequency is 15 Hz.
For the FFTs shown in Figures 5.14 and 5.15, the data sample contained integral numbers of
complete cycles of both component sinusoids. In general, the experimenter will not know the
spectral composition of the signal and will not be able to select a sampling time T such that
there will be an integral number of cycles of any frequency in the signal. This complicates the
process of Fourier decomposition. To demonstrate this point, we will modify Eq. (5.13) by
changing the lowest frequency from 10 Hz to 10.3333 Hz:
f(t) = 2 sin(2π10.3333t) + sin(2π15t) (5.14)
An FFT is then performed over a 1-s period with 512 samples. Although 15 complete cycles of
the 15-Hz component are sampled, only 10.3333 cycles of the 10.3333-Hz component are
sampled. The results of the DFT are shown in Figure 5.16.
The first thing we notice is that the Fourier coefficient for 10.3333 Hz is distributed between
10 and 11 Hz, with the magnitude of the 10-Hz component slightly higher than that at 11 Hz. It
should be recognized that, without a priori knowledge, the user would not be able to deduce
whether the signal had separate 10-Hz and 11-Hz components or just a single component at
10.3333 Hz, as in this case.
An unexpected result is that the entire spectrum (outside of 10, 11, and 15 Hz) has
also been altered, yielding significant coefficients at frequencies not present in the original
signal. This effect is called leakage and is caused by the fact that there are a non-integral
number of cycles of the 10.3333-Hz sinusoid. Since one does not know the signal frequency in
advance, this leakage effect will normally occur in a sample.
The actual cause is that the sampled value of a particular frequency component at the start of
the sampling interval is different from the value at the end. A common method to work around
this problem is the use of a windowing function to attenuate the signal at the beginning and
the end of the sampling interval. A windowing function is a waveform that is applied to the
sampled data. A commonly used window function is the Hann function, which is defined as
w(n) = 0.5[1 − cos(2πn/(N − 1))],   n = 0, 1, ..., N − 1 (5.15)
The length of the window, N, is the same as the length of the sampled data. This equation is
plotted in Figure 5.17. The sampled data are multiplied by this window function, producing a
new set of data with smoother edges. Figure 5.18 shows the data used to plot Figure 5.16 both
with and without the Hann window function applied. The Hann function is superimposed
on top of the data with the sinusoidal shape apparent. The central portion of the signal is
unaffected while the amplitude at the edges is gradually reduced to create a smoother
transition.
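A windowed FFT of this kind takes only a few lines; the sketch below is our own illustration and uses NumPy's built-in Hann window (np.hanning), which is consistent with the form given above. The signal amplitudes are the same assumed values used earlier.

```python
import numpy as np

N = 512
T = 1.0
t = np.arange(N) * T / N

# Signal of Eq. (5.14): a non-integral number of 10.3333-Hz cycles per record
# (amplitudes as assumed earlier: 2 for the low-frequency term, 1 for 15 Hz).
x = 2 * np.sin(2 * np.pi * 10.3333 * t) + np.sin(2 * np.pi * 15 * t)

w = np.hanning(N)                    # Hann window, same length as the data
amp_raw = 2 * np.abs(np.fft.fft(x))[: N // 2] / N
amp_win = 2 * np.abs(np.fft.fft(x * w))[: N // 2] / N

# The windowed spectrum is far less smeared (less leakage), but the peak
# amplitudes are reduced because the window suppresses the record's edges.
print(amp_raw[9:17].round(2))
print(amp_win[9:17].round(2))
```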
Figure 5.19 shows the distribution of Fourier coefficients for the windowed data. Compared
to Figure 5.16, the frequency content at 10 and 15 Hz is much more distinct; however, the
amplitude of the signal at these frequencies is clearly underestimated.
This should not come as a surprise as the windowing function clearly suppresses the average
amplitude of the original signal. This tradeoff between frequency resolution and amplitude is
inherent for all window types. Windows that present good resolution in frequency but poor
determination of amplitude are often referred to as being of high resolution with low dynamic
range. A variety of window functions and their characteristics have been defined in the
literature. Each type of window has its own unique characteristics, with the proper choice
depending on the application and preferences of the user. Some common window functions
are rectangular, Hamming, Hann, cosine, Lanczos, Bartlett, triangular, Gauss, Bartlett-Hann,
Blackman, Kaiser, Blackman-Harris, and Blackman-Nuttall. See Engelberg (2008), Lyons (2004),
Oppenheim et al. (1999), and Smith (2003) for more information on the windowing process.
An additional consideration in spectral analysis is the types of plots used to display the data.
In the previous figures, a linear scale has been used both for the amplitude and frequency. It
is common, however, to plot one or both axes on a logarithmic scale. In the case of amplitude,
it is common to plot the power spectral density, which is typically represented in units of
decibels (dB).
The majority of signals encountered in practice will have record lengths much longer than the
examples presented here. Consider a microphone measurement sampled at 40 kHz over a 10-
s period of time. The record length in this case would be 400,000 values. While it is possible
to compute the DFT of a signal with N = 400,000, this is not always optimal and can give
misleading results. In this case, the apparent frequency resolution, Δf, of the FFT would be 0.1
Hz. As already seen, spectral leakage is likely to limit the achievable resolution to much coarser values,
and it is unlikely that this level of resolution would be needed in practice. Rather, it is more
common to use methods such as Bartlett's or Welch's methods. In these methods, the
sampled signal is divided into equal length segments with a window function applied to each
segment. An FFT is then computed for each segment and then averaged together to produce
a single FFT for the entire signal. For example, the 400,000-value record above could
be divided into approximately 390 segments with 1024 values in each segment. The FFT of
each segment would have a frequency resolution of 39.06 Hz (40,000/1024), which would be sufficient for many
applications. The major advantage to this form of analysis is that any uncertainty in the Fourier
coefficients is reduced through the averaging of multiple FFT coefficients. This typically yields
a much smoother curve than a single FFT. In Bartlett's method, there is no overlap between
segments, while in Welch's method some overlap (typically 50%) is allowed, thus taking more
of the signal content into account. See Lyons (2004) and Oppenheim et al. (1999) for more
information on these segment-averaging methods.
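Segment-averaged spectra of this kind are available directly in SciPy. The sketch below is a minimal illustration under assumed conditions (the 1-kHz tone and the noise are hypothetical test inputs, not data from the text); it uses scipy.signal.welch, which applies a window to overlapping segments and averages the resulting periodograms.

```python
import numpy as np
from scipy import signal

fs = 40_000                  # sampling rate, Hz (the microphone example)
t = np.arange(10 * fs) / fs  # 10 s of data -> 400,000 samples

# Hypothetical test signal: a 1-kHz tone buried in broadband noise.
x = np.sin(2 * np.pi * 1000 * t) \
    + np.random.default_rng(1).standard_normal(t.size)

# Welch's method: 1024-sample Hann-windowed segments, 50% overlap,
# periodograms averaged -> frequency resolution of fs/1024 ~ 39 Hz.
f, Pxx = signal.welch(x, fs=fs, window='hann', nperseg=1024, noverlap=512)
print(f[np.argmax(Pxx)])     # peak near 1000 Hz (within one ~39-Hz bin)
```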
fs > 2fm (5.16)
If we set the sampling rate, fs, to a value that is greater than twice fm, we should not only avoid
aliasing but also be able to recover (at least theoretically) the original waveform.
In the foregoing example, in which fm is 360 Hz, we would select a sampling rate, fs, greater
than 720 Hz.
If this sampling-rate restriction is met, the original waveform can be recovered using the
following formula called the cardinal series (Marks, 1991):
f(t) = Σ (n = −∞ to +∞) f(nΔT) · sin[π(t − nΔT)/ΔT] / [π(t − nΔT)/ΔT] (5.17)
In this formula, f(t) is the reconstructed function. The first term after the summation, f(n ΔT),
represents the discretely sampled values of the function, n is an integer corresponding to each
sample, and ΔT is the sampling period, 1/fs. One important characteristic of this equation is
that it assumes an infinite set of sampled data and is hence an infinite series. Real sets of
sampled data are finite. However, the series converges and discrete samples in the vicinity of
the time t contribute more than terms not in the vicinity. Hence, the use of a finite number of
samples can lead to an excellent reconstruction of the original signal.
As an example, consider a function sin(2π 0.45t), that is, a simple sine wave with a frequency
of 0.45 Hz and a peak amplitude of 1.0. It is sampled at a rate of 1 sample per second and 700
samples are collected. Note that fs exceeds 2fm so this requirement of the sampling-rate
theorem is satisfied. A portion of the sampled data is shown in Figure 5.20. The sampled data
have been connected with straight-line segments and do not appear to closely resemble the
original sine wave. Using Eq. (5.17) and the 700 data samples, the original curve has been
reconstructed for the interval of time between 497 and 503 s, as shown by the heavy curve
on Figure 5.20. The original sine wave has been recovered from the data samples with a high
degree of accuracy. Reconstructions with small sample sizes or at the ends of the data samples
will, in general, not be as good as this example. Although Eq. (5.17) can be used to reconstruct
data, other methods, beyond the scope of the present text, are often used (Marks, 1991). In
most cases, the use of very high sampling rates can eliminate the need to use reconstruction
methods to recover the original signal.
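A truncated form of Eq. (5.17) can be applied directly to the finite sample set. The sketch below is our own illustration of this reconstruction for the 0.45-Hz sine of the example, evaluated near the middle of the record where end effects are negligible; the function name reconstruct is not from the text.

```python
import numpy as np

fs = 1.0                      # sampling rate, samples per second
dT = 1.0 / fs                 # sampling period
n = np.arange(700)            # 700 samples, as in the example
samples = np.sin(2 * np.pi * 0.45 * n * dT)

def reconstruct(t, samples, dT):
    """Truncated cardinal series, Eq. (5.17).
    np.sinc(x) = sin(pi x)/(pi x), which matches the kernel in Eq. (5.17)."""
    n = np.arange(len(samples))
    return np.sum(samples * np.sinc((t - n * dT) / dT))

# Reconstruct between 497 and 503 s and compare with the original sine wave.
for t in np.arange(497.0, 503.0, 0.5):
    print(t, round(reconstruct(t, samples, dT), 4),
          round(np.sin(2 * np.pi * 0.45 * t), 4))
```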
When we specify fm, we need to define what is meant by an effectively zero amplitude since
filters can attenuate signals but will not reduce them to zero amplitude.
The question then arises as to what level of attenuation is adequate for practical purposes.
For systems using an analog-to-digital converter, this decision can be based on the
characteristics of the converter. For a given converter, there is an input voltage below which
the converter will produce an output that is the same as would be produced by a zero-input
voltage.
The required signal attenuation can be determined by computing a parameter of the A/D
converter called the dynamic range. For a converter with N bits, the dynamic range is
dynamic range = 20 log10(2^N) ≈ 6.02N dB (5.18)
For monopolar 8- and 12-bit converters, the dynamic ranges are 48 and 72 dB, respectively.
For bipolar converters N is reduced by one since one bit is effectively used to
represent the sign of the input. For 8- and 12-bit bipolar converters, the values of
dynamic range would be respectively 42 and 66 dB.
If we know that the amplitude of an input frequency component does not exceed the input
range of the A/D converter, and we then attenuate that signal component an amount equal
to the converter dynamic range, we can be sure that it will produce no digital output. To select
a filter, we need to define a corner frequency and an attenuation rate. In most cases, we can
select the corner frequency as the maximum frequency of interest, fc. We can then determine
fm based on the filter attenuation rate (dB/octave). Following Taylor (1990), we use the
dynamic range to determine the required sampling rate (or conversely, the required filtering
for a given sampling rate).
The number of octaves required to attenuate a signal by a number of decibels corresponding
to the dynamic range is
number of octaves = dynamic range (dB) / filter attenuation rate (dB/octave) (5.19)
fm can then be evaluated from
fm = 2^(number of octaves) × fc (5.20)
Equation (5.16) is used to set the sampling rate at twice fm. If the sampling rate so determined
is higher than practical, a higher filter attenuation rate must be specified.
This approach is bounding and assumes only that the voltage of the composite signal (sum of
all Fourier components) is scaled to be just within the A/D converter input range. In most
cases, the higher-frequency components of the signal that require attenuation have
amplitudes much lower than the composite signal and hence require less attenuation. Using
a more detailed analysis that takes account of the actual amplitudes of the frequency
components above fc, it is usually possible to use a lower sampling rate or a lesser filter
attenuation rate.
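Equations (5.18) through (5.20) and Eq. (5.16) combine into a short calculation. The sketch below is our own illustration of the bounding approach described above, using assumed values (a hypothetical 12-bit bipolar converter, a 100-Hz corner frequency, and a 48-dB/octave filter).

```python
import math

bits = 12                    # A/D converter resolution (assumed)
bipolar = True               # one bit effectively represents the sign
fc = 100.0                   # corner frequency = highest frequency of interest, Hz (assumed)
atten_rate = 48.0            # filter attenuation rate, dB/octave (8th-order Butterworth)

effective_bits = bits - 1 if bipolar else bits
dynamic_range = 20 * math.log10(2 ** effective_bits)   # Eq. (5.18), ~66 dB

octaves = dynamic_range / atten_rate                    # Eq. (5.19)
fm = fc * 2 ** octaves                                  # Eq. (5.20)
fs = 2 * fm                                             # Eq. (5.16): fs must exceed 2*fm

print(round(dynamic_range, 1), round(octaves, 2), round(fm, 1), round(fs, 1))
# ~66.2 dB, ~1.38 octaves, fm ~ 260 Hz, fs ~ 521 Hz
```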
Antialiasing filters often require substantial rates of attenuation. These relatively high levels
of attenuation present practical problems in actual systems. As discussed in Chapter 3, a first-
order Butterworth filter attenuates at only 6 dB per octave (an octave is a doubling of
frequency) and eight octaves are required to attenuate a signal 48 dB. As a result, higher-order
filters are usually required - an eighth-order Butterworth filter can attenuate a signal 48 dB in
one octave. High-order filters have other effects, however, such as large phase shifts in the
passband. Since there is a trade-off between filter order and sampling rate, a design
compromise is typically required.
Reference
Wheeler, A. J., and Ganji, A. R. (2010). Introduction to Engineering Experimentation, 3rd ed., with contributions by V. V. Krishnan and Brian S. Thurow.