Digfilt
Ricardo A. Losada†
The MathWorks, Inc.
Part I Filter Design

1 Basic FIR Filter Design
1.1 Why FIR filters?
1.2 Lowpass filters
1.2.1 FIR lowpass filters
1.2.2 FIR filter design specifications
1.2.3 Working with Hertz rather than normalized frequency
1.3 Optimal FIR filter design
1.3.1 Optimal FIR designs with fixed transition width and filter order
1.3.2 Optimal equiripple designs with fixed transition width and peak passband/stopband ripple
1.3.3 Optimal equiripple designs with fixed peak ripple and filter order
1.3.4 Constrained-band equiripple designs
1.3.5 Sloped equiripple filters
1.4 Further notes on equiripple designs
1.4.1 Unusual end-points in the impulse response
1.4.2 Transition region anomalies
1.5 Maximally-flat FIR filters
1.6 Summary and look ahead

3 Nyquist Filters
3.1 Design of Nyquist filters
3.1.1 Equiripple Nyquist filters
3.1.2 Minimum-order Nyquist filters
3.2 Halfband filters
3.2.1 IIR halfband filters
3.3 Summary and look ahead
∗This document may be updated from time to time. The latest version can be found at
www.mathworks.com/matlabcentral
Part I
Filter Design
Chapter 1
Overview
In this chapter we discuss the basic principles of FIR filter design. We con-
centrate mostly on lowpass filters, but most of the results apply to other
response types as well. We discuss the basic trade-offs and the degrees of freedom available for FIR filter design. We motivate the use of optimal
designs and introduce both optimal equiripple and optimal least-squares
designs. We then discuss optimal minimum-phase designs as a way of surpassing, in some sense, comparable optimal linear-phase designs. We introduce sloped equiripple designs as a compromise that obtains an equiripple passband together with a non-equiripple (sloped) stopband. We also mention a few caveats
with equiripple filter design. We end with an introductory discussion of
different filter structures that can be used to implement an FIR filter in
hardware.
The material presented here is based on [1] and [2]. However, it has
been expanded and includes newer syntax and features from the Filter
Design Toolbox.
FIR filters have some drawbacks however. The most important is that
they can be computationally expensive to implement. Another is that
they have a long transient response. It is commonly thought that IIR fil-
ters must be used when computational power is at a premium. This is
certainly true in some cases. However, in many cases, the use of multi-
stage/multirate techniques can yield FIR implementations that can com-
pete (and even surpass) IIR implementations while retaining the nice char-
acteristics of FIR filters such as linear-phase, stability, and robustness to
quantization effects.∗ However, these efficient multistage/multirate de-
signs tend to have very large transient responses, so depending on the
requirements of the filter, IIR designs may still be the way to go.
In terms of the long transient response, we will show in Chapter 2
that minimum-phase FIR filters can have a shorter transient response than
comparable IIR filters.
h_LP[n] = sin(ω_c n)/(πn),   −∞ < n < ∞.   (1.2)
[Figure: zero-phase response of an FIR approximation annotated with the passband ripple, transition width, and stopband ripple relative to the ideal lowpass filter; Amplitude vs. Normalized Frequency (×π rad/sample).]
Figure 1.1: Illustration of the typical deviations from the ideal lowpass filter when approximating with an FIR filter, ωc = 0.4π.
Example 1 As an example, consider the design of an FIR filter that meets the
following specifications:
Specifications Set 1
The filter can easily be designed with the truncated-and-windowed impulse re-
sponse algorithm (a.k.a. the “window method”) if we use a Kaiser window∗ :
∗Notice that when specifying frequency values in MATLAB, the factor of π should be
omitted.
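The window method itself is simple enough to sketch outside the toolbox. The following Python/NumPy sketch (an illustrative stand-in for the toolbox design; the order N and the Kaiser shape parameter β are assumptions, not values from Specifications Set 1) truncates the ideal impulse response of Eq. (1.2) and applies a Kaiser window:

```python
import numpy as np

# Window-method sketch: truncate the ideal sinc impulse response and
# taper it with a Kaiser window. The cutoff 0.4*pi matches the text;
# the order N and shape parameter beta are illustrative assumptions.
N = 72                       # filter order (N + 1 taps)
wc = 0.4 * np.pi             # cutoff frequency (rad/sample)
n = np.arange(N + 1) - N / 2
h_ideal = (wc / np.pi) * np.sinc(wc * n / np.pi)   # sin(wc*n)/(pi*n)
h = h_ideal * np.kaiser(N + 1, 3.4)                # windowed impulse response
```

The resulting `h` is linear-phase by construction, since it is symmetric about n = N/2.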
Filter order
The zero-phase response of the filter is shown in Figure 1.3. Note that since we
have fixed the allowable transition width and peak ripples, the order is determined
for us.
Close examination at the passband-edge frequency∗ , ω p = 0.37π, and at the
stopband-edge frequency, ωs = 0.43π, shows that the peak passband/stopband
ripples are indeed within the allowable specifications. Usually the specifications
are exceeded because the order is rounded to the next integer greater than the
actual value required.
∗ The passband-edge frequency is the boundary between the passband and the transition
band. If the transition width is Tw , the passband-edge frequency ω p is given in terms of
the cutoff frequency ωc by ω p = ωc − Tw /2. Similarly, the stopband-edge frequency is
given by ωs = ωc + Tw /2.
[Figure 1.3: zero-phase response of the designed filter; Amplitude vs. Normalized Frequency (×π rad/sample).]
ω = 2πf/f_s
Hf = fdesign.lowpass('Fp,Fst,Ap,Ast',.5,.6,1,80);
H1 = design(Hf);
Hf2 = fdesign.lowpass('Fp,Fst,Ap,Ast',250,300,1,80,1000);
H2 = design(Hf2);
Notice that we don’t add 'Fs' to the string 'Fp,Fst,Ap,Ast' (or any
other specification string) when we specify parameters in Hertz. Simply
appending the sampling frequency to the other design parameters indi-
cates that all frequencies specified are given in Hertz.
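For readers without the toolbox, the same bookkeeping can be illustrated with SciPy's `remez` (a stand-in for the toolbox design functions; the 31-tap order is an assumption): passing band edges in Hertz together with the sampling frequency is equivalent to passing them normalized.

```python
import numpy as np
from scipy.signal import remez

# Same equiripple design expressed two ways: edges normalized so that
# 1.0 corresponds to pi rad/sample (fs = 2), and edges in Hertz with
# fs = 1000 Hz. The ratios edge/fs are identical, so the filters match.
h_norm = remez(31, [0, 0.5, 0.6, 1.0], [1, 0], fs=2.0)
h_hz = remez(31, [0, 250, 300, 500], [1, 0], fs=1000.0)
```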
In order to allow for different peak ripples in the passband and stopband, a weighting function W(ω) is usually introduced,
Linear-phase designs
Equiripple filters
Linear-phase equiripple filters are desirable because they have the small-
est maximum deviation from the ideal filter when compared to all other
linear-phase FIR filters of the same order. Equiripple filters are ideally
suited for applications in which a specific tolerance must be met: for example, when it is necessary to design a filter with a given minimum stopband attenuation or a given maximum passband ripple.
[Figure: passband magnitude detail, approximately 0.96 to 1.06, over Normalized Frequency 0 to 0.35π.]
Figure 1.4: Passband ripple of both the Kaiser-window-designed FIR filter and the equiripple-designed FIR filter.
Figure 1.4 shows the superposition of the passband details for the filters de-
signed with the Kaiser window and with the equiripple design. Clearly the
maximum deviation is smaller for the equiripple design. In fact, since the filter
is designed to minimize the maximum ripple (minimax design), we are guaran-
teed that no other linear-phase FIR filter of 42nd order will have a smaller peak
ripple for the same transition width.
We can measure the passband ripple and stopband attenuation in dB units
using the measure command,
Meq = measure(Heq);
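In SciPy terms (an approximate stand-in: `scipy.signal.remez` implements the same Parks-McClellan exchange; the edges ωp = 0.37π and ωs = 0.43π are taken from the text, and 43 taps corresponds to the 42nd order quoted above), the design and a `measure`-style check look like:

```python
import numpy as np
from scipy.signal import remez, freqz

# 42nd-order (43-tap) equiripple design with the edges quoted in the text.
h = remez(43, [0, 0.37, 0.43, 1.0], [1, 0], fs=2.0)

# Measure peak ripples on a dense frequency grid (a rough analogue of
# the toolbox's measure command).
w, H = freqz(h, worN=8192, fs=2.0)
dp = np.max(np.abs(np.abs(H[w <= 0.37]) - 1))          # peak passband ripple
Ast_db = -20 * np.log10(np.max(np.abs(H[w >= 0.43])))  # stopband attenuation, dB
```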
Least-squares filters
Equiripple designs may not be desirable if we want to minimize the en-
ergy of the error (between ideal and actual filter) in the passband/stopband.
Consequently, if we want to reduce the energy of a signal as much as pos-
sible in a certain frequency band, least-squares designs are preferable.
Example 3 For the same specifications, Hf, as the equiripple design of Example
2, a least-squares FIR design can be computed from
Hls = design(Hf,'firls');
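A SciPy analogue (`scipy.signal.firls` in place of the 'firls' design method, with the text's edges 0.37π/0.43π and an assumed 43 taps) makes the energy comparison concrete:

```python
import numpy as np
from scipy.signal import firls, remez, freqz

bands = [0, 0.37, 0.43, 1.0]
h_ls = firls(43, bands, [1, 1, 0, 0], fs=2.0)   # least-squares design
h_eq = remez(43, bands, [1, 0], fs=2.0)          # equiripple design

# Numerically compare stopband energies on a dense grid.
w, H_ls = freqz(h_ls, worN=8192, fs=2.0)
_, H_eq = freqz(h_eq, worN=8192, fs=2.0)
sb = w >= 0.43
dw = w[1] - w[0]
E_ls = np.sum(np.abs(H_ls[sb]) ** 2) * dw
E_eq = np.sum(np.abs(H_eq[sb]) ** 2) * dw
```

As the text reports, the least-squares stopband energy comes out smaller, while the equiripple design has the smaller peak error.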
The stopband energy for this case is given by
E_sb = (2/(2π)) ∫_{0.43π}^{π} |H(e^{jω})|² dω

where H(e^{jω}) is the frequency response of the filter.
In this case, the stopband energy for the equiripple filter is approximately 3.5214e-004 while the stopband energy for the least-squares filter is 6.6213e-005. (As a reference, the stopband energy for the Kaiser-window design for this order and transition width is 1.2329e-004.)
The stopband details for both equiripple design and the least-squares design
are shown in Figure 1.5.
So while the equiripple design has less peak error, it has more “to-
tal” error, measured in terms of its energy. However, although the least-
squares design minimizes the energy in the ripples in both the passband
and stopband, the resulting peak passband ripple is always larger than
that of a comparable equiripple design. Therefore there is a larger distur-
bance on the signal to be filtered for a portion of the frequencies that the
filter should allow to pass (ideally undisturbed). This is a drawback of
least-squares designs. We will see in Section 1.3.5 that a possible compro-
mise is to design equiripple filters in such a way that the maximum ripple
in the passband is minimized, but with a sloped stopband that can reduce
the stopband energy in a manner comparable to a least-squares design.
[Figure: stopband zero-phase detail of the equiripple design (peak amplitude ≈ 0.0896 near ω = 0.43π) and the least-squares design (largest sidelobe ≈ 0.0352 near ω = 0.486π) vs. Normalized Frequency (×π rad/sample).]
Figure 1.5: Comparison of an optimal equiripple FIR design and an optimal least-squares
FIR design. The equiripple filter has a smaller peak error, but larger overall error.
Using weights
Both equiripple and least-squares designs can be further controlled by
using weights to instruct the algorithm to provide a better approximation
to the ideal filter in certain bands. This is useful if it is desired to have less
ripple in one band than in another.
Example 4 In Example 2 above, the filter that was designed had the same ripples
in the passband and in the stopband. This is because we implicitly were using a
weight of one for each band. If it is desired to have a stopband ripple that is say ten
times smaller than the passband ripple, we must give a weight that is ten times
larger:
Heq2 = design(Hf,'equiripple','Wpass',1,'Wstop',10);
The result is plotted in Figure 1.6.
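With SciPy's `remez` the same effect is obtained through its `weight` argument (edges 0.37π/0.43π from the text; the 43-tap order is an assumption):

```python
import numpy as np
from scipy.signal import remez, freqz

# Weight the stopband 10x more heavily than the passband; the optimal
# equiripple solution then has a stopband ripple ~10x smaller than the
# passband ripple.
h = remez(43, [0, 0.37, 0.43, 1.0], [1, 0], weight=[1, 10], fs=2.0)

w, H = freqz(h, worN=8192, fs=2.0)
dp = np.max(np.abs(np.abs(H[w <= 0.37]) - 1))   # passband ripple
ds = np.max(np.abs(H[w >= 0.43]))               # stopband ripple
```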
It would be desirable to have an analytic relation between the maxi-
mum ripples in a band and the weight in such band. Unfortunately no
such relation exists. If the design specifications require a specific maxi-
mum ripple amount, say δp in the passband and δs in the stopband (both
in linear units, not decibels), for a lowpass filter we can proceed as follows:
[Figure: passband and stopband ripple detail; Amplitude on a roughly ±0.1 scale.]
Figure 1.6: Passband and stopband ripples obtained from weighting the stopband 10 times higher than the passband.
weigh each band in inverse proportion to its desired ripple, for instance W_p = 1/δ_p and W_s = 1/δ_s. Since both the filter order and the transition width are assumed to be fixed, this will not result in the desired ripples unless we are very lucky. However, the amplitude of the passband ripple relative to the stopband ripple will be correct. In order to obtain a ripple of δ_p in the passband and δ_s in the stopband we need to vary either the filter order or the transition width.
The procedure we have just described requires trial-and-error since ei-
ther the filter order or the transition width may need to be adjusted many
times until the desired ripples are obtained. Instead of proceeding in such
manner, later we will describe ways of designing filters for given pass-
band/stopband ripples and either a fixed transition width or a fixed filter
order.
For least-squares designs, the relative weights control not the ampli-
tude of the ripple but its energy relative to the bandwidth it occupies. This
means that if we weigh the stopband ten times higher than the passband,
the energy in the stopband relative to the stopband bandwidth will be 10
times smaller than the energy of the ripples in the passband relative to the
passband bandwidth. For the case of lowpass filters this means that

E_sb/(π − ω_s) = (2/(2π(π − ω_s))) ∫_{ω_s}^{π} |H(e^{jω})|² dω
Minimum-phase designs
One of the advantages of FIR filters, when compared to IIR filters, is the
ability to attain exact linear phase in a straightforward manner. As we
have already mentioned, the linear-phase characteristic implies a symmetry or antisymmetry property for the filter coefficients. Nevertheless, this symmetry of the coefficients constrains the possible designs that are attainable. This should be obvious since for a filter with N + 1 coefficients, only N/2 + 1 of these coefficients are freely assignable (assuming N is even). The remaining N/2 coefficients are immediately determined by the linear-phase constraint.
If one is able to relax the linear phase constraint (i.e. if the application at
hand does not require a linear phase characteristic), it is possible to design
minimum-phase equiripple filters that are superior to optimal equiripple
linear-phase designs based on a technique described in [8].
Example 5 For the same specification set of Example 2 the following minimum-
phase design has both smaller peak passband ripple and smaller peak stopband
ripple∗ than the linear-phase equiripple design of that example:
Hmin = design(Hf,'equiripple','Wpass',1,'Wstop',10,...
'minphase',true);
The filter obtained in this way is guaranteed to have all its zeros on or inside the unit circle.∗ However, the design is optimal in the sense that it satisfies the minimum-phase alternation theorem [9].
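The underlying idea, that every zero outside the unit circle can be reflected to its conjugate-reciprocal position inside without changing the shape of the magnitude response, can be sketched in a few lines of Python (an illustrative toy, not the optimal design method of [8]; the helper name is ours):

```python
import numpy as np

def minphase_from_fir(h):
    """Return a minimum-phase FIR filter with the same magnitude response
    as h: zeros outside the unit circle are reflected inside, and the
    gain is rescaled to match at DC. (Numerically fragile for long
    filters; shown only to illustrate the concept.)"""
    z = np.roots(h)
    outside = np.abs(z) > 1
    z[outside] = 1.0 / np.conj(z[outside])
    hm = np.real(np.poly(z))
    hm *= np.sum(h) / np.sum(hm)   # match the DC gain
    return hm

# Toy example: zeros at -2 (outside) and -0.5 (inside).
h = np.convolve([1.0, 2.0], [1.0, 0.5])
hm = minphase_from_fir(h)
```

`hm` has the same |H(e^{jω})| as `h` but all its zeros on or inside the unit circle; note that, unlike the toolbox design, this conversion keeps the original filter order.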
Having smaller ripples for the same filter order and transition width
is not the only reason to use a minimum-phase design. The minimum-
phase characteristic means that the filter introduces the lowest possible
phase offset (that is, the smallest possible transient delay) to a signal being
filtered.
n = 0:500;
x = sin(0.1*pi*n');
yeq = filter(Heq,x);
ymin = filter(Hmin,x);
The output from both filters are plotted overlaid in Figure 1.7. The delay intro-
duced is equal to the group delay of the filter at that frequency. Since group-
delay is the negative of the derivative of phase with respect to frequency, the
group-delay of a linear-phase filter is a constant equal to half the filter order.
This means that all frequencies are delayed by the same amount. On the other
hand, minimum-phase filters do not have constant group-delay since their phase
response is not linear. The group-delays of both filters can be visualized using
fvtool(Heq,Hmin,'Analysis','Grpdelay');. The plot of the group-delays
is shown in Figure 1.8.
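The same check can be sketched with `scipy.signal.group_delay` (edges from the text; the order is an assumption):

```python
import numpy as np
from scipy.signal import remez, group_delay

# A linear-phase FIR filter of order N delays every frequency by N/2
# samples; here N = 42, so the group delay is 21 samples everywhere.
N = 42
h = remez(N + 1, [0, 0.37, 0.43, 1.0], [1, 0], fs=2.0)
w, gd = group_delay((h, [1.0]), w=512, fs=2.0)
```

A minimum-phase version would instead show a smaller, frequency-dependent delay across the passband.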
[Figure: first 100 samples of the two filtered sinusoids overlaid; Amplitude vs. Sample index (n).]
Figure 1.7: Sinusoid filtered with a linear-phase filter and a minimum-phase filter of the
same order.
[Figure: group delay (in samples) of the linear-phase and minimum-phase filters vs. Normalized Frequency (×π rad/sample).]
Figure 1.8: Group-delay of a linear-phase filter and a minimum-phase filter of the same
order.
[Figure: passband magnitude detail (approximately 0.992 to 1.008) of the Kaiser-window design and the equiripple design vs. Normalized Frequency (×π rad/sample).]
Figure 1.9: Passband ripple details for both the Kaiser-window-designed FIR filter and the equiripple-designed FIR filter. The Kaiser-window design over-satisfies the requirement at the expense of an increased number of taps.
Note that for these designs minimum-phase filters will have a much smaller transient delay not only because of their minimum-phase property but also because their filter order is lower than that of a comparable linear-phase filter. In fact, this is true in general: we had previously seen that even for the same transition width and filter order, a minimum-phase filter has a smaller delay than a corresponding linear-phase filter. Moreover, the minimum-phase filter has smaller ripples, so the two filters are not really comparable. In order to compare apples to apples, the order of the linear-phase filter would have to be increased until its ripples are the same as those of the minimum-phase design. This increase in filter order would of course further increase the delay of the filter.
Example 9 Consider the following equiripple design with the same cutoff frequency as in Example 7. The filter order is set to be the same as that needed for a Kaiser-window design to meet the ripple specifications:
Hf = fdesign.lowpass('N,Fc,Ap,Ast',49,0.375,0.13,60);
Heq = design(Hf,'equiripple');
The comparison of this new design with the Kaiser-window design is shown in
Figure 1.10. The transition width has been reduced from 0.15π to approximately
0.11π.
[Figure 1.10: zero-phase responses of the Kaiser-window design and the equiripple design near the transition region (approximately 0.32π to 0.44π); Amplitude vs. Normalized Frequency (×π rad/sample).]
Example 10 Compared to the 50th order linear-phase design Heq, the following
design has a noticeably smaller transition width:
Hmin = design(Hf,'equiripple','minphase',true);
If the filter order is fixed (for instance, when using specialized hardware), there are two alternatives available in the Filter Design Toolbox for optimal equiripple designs. One possibility is to fix the transition width; the other is to fix the passband ripple.
Example 11 For example, the design specifications of Example 7 call for a stop-
band that extends from 0.45π to π and provide a minimum stopband attenuation
of 60 dB. Instead of a minimum-order design, suppose the filter order available is
40 (41 taps). One way to design this filter is to provide the same maximum pass-
band ripple of 0.13 dB but to give up control of the transition width. The result
will be a filter with the smallest possible transition width for any linear-phase FIR
filter of that order that meets the given specifications.
Hf = fdesign.lowpass('N,Fst,Ap,Ast',40,.45,0.13,60);
Heq = design(Hf,'equiripple');
If instead we want to fix the transition width but not constrain the pass-
band ripple, an equiripple design will result in a filter with the smallest
possible passband ripple for any linear-phase FIR filter of that order that
meets the given specifications.
Hf = fdesign.lowpass('N,Fp,Fst,Ast',40,.3,.45,60);
Heq = design(Hf,'equiripple');
The passband details of the two filters are shown in Figure 1.11. Note that both filters meet Specifications Set 2 because the order used (40) is larger than the minimum order (37) required by an equiripple linear-phase filter to meet such specifications. The filters differ in how they “use” the extra number of taps to better approximate the ideal lowpass filter.
[Figure: passband ripple detail (dB) of the two designs.]
Figure 1.11: Comparison of two optimal equiripple FIR filters of 40th order. Both filters have the same stopband-edge frequency and minimum stopband attenuation. One is optimized to minimize the transition width while the other is optimized to minimize the passband ripple.
Sloped equiripple designs retain the minimax optimality of regular equiripple filters while allowing for a slope in the stopband of the filter. The passband remains equiripple, thus minimizing the distortion of the input signal in that region.
There are many ways of shaping the slope of the stopband. One way [11] is to allow the stopband to decay as (1/f)^k, that is, as a power of the inverse of frequency. This corresponds to a decay of 6k dB per octave. Another way of shaping the slope is to allow it to decay in logarithmic fashion so that the decay appears linear on a dB scale.
Of course there is a price to pay for the sloped stopband. Since the
design provides smaller stopband energy than a regular equiripple design,
the passband ripple, although equiripple, is larger than that of a regular
equiripple design. Also, the minimum stopband attenuation measured in
dB is smaller than that of a regular equiripple design.
[Figure: passband ripple detail (dB), approximately −0.5 to 0.3 dB.]
Figure 1.12: Passband details of a sloped optimal equiripple FIR design and an optimal
least-squares FIR design. The equiripple filter has a smaller peak error or smaller transi-
tion width depending on the interpretation.
Hf = fdesign.lowpass('N,Fp,Fst',42,Fp,Fst);
Hsloped = design(Hf,'equiripple','StopbandShape','1/f',...
'StopbandDecay',2);
results in a stopband energy of approximately 8.4095e-005, not much larger than that of the least-squares design (6.6213e-005), while having a smaller transition width (or peak passband ripple, depending on the interpretation). The passband details
of both the least-squares design and the sloped equiripple design are shown in
Figure 1.12 (in dB). The stopband details are shown in Figure 1.13 (also in dB).
If we constrain the filter order, the passband ripple, and the minimum
stopband attenuation, it is easy to see the trade-off between a steeper slope
and the minimum stopband attenuation that can be achieved. Something
has to give and since everything else is constrained, the transition width
increases as the slope increases as well.
Example 13 Design two filters with the same filter order, same passband ripple,
and same stopband attenuation. The slope of the stopband decay is zero for the
first filter and 40 for the second.
[Figure: stopband detail (dB), approximately −55 to −30 dB, over Normalized Frequency 0.45π to 0.95π.]
Figure 1.13: Stopband details of a sloped optimal equiripple FIR design and an optimal
least-squares FIR design. The overall error of the equiripple filter approaches that of the
least-squares design.
Hf = fdesign.lowpass('N,Fst,Ap,Ast',30,.3,0.4,40);
Heq = design(Hf,'equiripple','StopbandShape','linear',...
'StopbandDecay',0);
Heq2 = design(Hf,'equiripple','StopbandShape','linear',...
'StopbandDecay',40);
The second filter provides better total attenuation throughout the stopband. Since
everything else is constrained, the transition width is larger as a consequence.
This is easy to see with fvtool(Heq,Heq2).
[Figure: impulse response; Amplitude vs. Samples 0 to 40.]
Figure 1.14: Impulse response of equiripple filter showing anomalous end-points. These
end points are the result of an equiripple response.
Example 14 Consider the design of this lowpass filter with band edges that are
quite close to DC:
Hf = fdesign.lowpass('N,Fp,Fst',42,0.1,0.12);
Heq = design(Hf,'equiripple');
fvtool(Heq,'Analysis','impulse')
The impulse response is shown in Figure 1.14. Notice that the two end points seem completely out of place.
[Figure: bandpass magnitude response (dB) showing spikes rising in the second transition band.]
Figure 1.15: Bandpass filter with transition-band anomaly due to having different
transition-band widths.
Example 16 Consider the design of a bandpass filter with a first transition band
0.1π rad/sample wide and a second transition band 0.2π rad/sample wide:
Hf = fdesign.bandpass('Fst1,Fp1,Fp2,Fst2,Ast1,Ap,Ast2',...
.2,.3,.6,.8,60,1,60);
Heq = design(Hf,'equiripple');
The magnitude response is shown in Figure 1.15. The anomalies in the second
“don’t-care” band are obvious. The design can be fixed by making the second
transition band 0.1π rad/sample wide. Since this is a minimum-order design, the
price to pay for this is an increase in the filter order required to meet the modified
specifications:
Hf2 = fdesign.bandpass('Fst1,Fp1,Fp2,Fst2,Ast1,Ap,Ast2',...
.2,.3,.6,.7,60,1,60);
Heq2 = design(Hf2,'equiripple');
cost(Heq)
cost(Heq2)
[Figure: magnitude responses of the 10th-order and 20th-order maximally flat FIR filters; Magnitude vs. Normalized Frequency (×π rad/sample).]
Figure 1.16: Maximally-flat FIR filters. The smaller transition width in one of the filters
is achieved by increasing the filter order.
Example 17 Figure 1.16 shows two maximally-flat FIR filters. Both filters have
a cutoff frequency of 0.3π. The filter with smaller transition width has twice the
filter order as the other.
The maximally-flat stopband of the filter means that its stopband attenuation is very large. However, this comes at the price of a very large transition width. These filters are seldom used in practice (in particular with fixed-point implementations) because when the filter’s coefficients are quantized to a finite number of bits, the large stopband attenuation cannot be achieved (and often is not required anyway) but the large transition band is still a liability.
[Figure: magnitude responses (dB), approximately −90 to −30 dB.]
Figure 1.18: Equiripple filters with passband approximating maximal flatness. The better
the approximation, the larger the transition band.
For fixed filter order and stopband attenuation, the transition width increases as a result. Consider the following two filter designs:
Hf = fdesign.lowpass('N,Fc,Ap,Ast',70,.3,1e-3,80);
Heq = design(Hf,'equiripple');
Hf2 = fdesign.lowpass('N,Fc,Ap,Ast',70,.3,1e-8,80);
Heq2 = design(Hf2,'equiripple');
The two filters are shown in Figure 1.18. It is generally best to allow some passband ripple, as long as the application at hand supports it, given that a smaller transition band results. The passband details are shown in Figure 1.19.
[Figure: passband magnitude detail (dB), on a ×10⁻⁴ scale, over Normalized Frequency 0 to 0.25π.]
Figure 1.19: Passband details of equiripple filters with very small passband ripple. The
flatter passband is obtained at the expense of a larger transition band, i.e. a smaller usable
passband.
A further option is to use multistage and/or multirate techniques that use various FIR filters connected in cascade (in series) in such a way that each filter shares part of the filtering duties while having reduced complexity when compared to a single-stage design. The idea is that for certain specifications the combined complexity of the filters used in a multistage design is lower than the complexity of a comparable single-stage design.
We will be looking at all these approaches in the following chapters.
We will then look into implementation of filters and discuss issues that
arise when implementing a filter using fixed-point arithmetic.
Chapter 2
Basic IIR Filter Design
Overview
One of the drawbacks of FIR filters is that they require a large filter order
to meet some design specifications. If the ripples are kept constant, the filter order grows in inverse proportion to the transition width. By using
feedback, it is possible to meet a set of design specifications with a far
smaller filter order than a comparable FIR filter∗ . This is the idea behind
IIR filter design. The feedback in the filter has the effect that an impulse
applied to the filter results in a response that never decays to zero, hence
the term infinite impulse response (IIR).
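This inverse relationship between FIR order and transition width is easy to check with SciPy's Kaiser order estimate (a design-rule approximation, not an exact bound; the 60 dB spec is illustrative):

```python
from scipy.signal import kaiserord

# Kaiser-window order estimates at a fixed 60 dB ripple spec: halving
# the transition width roughly doubles the required FIR order.
n_wide, _ = kaiserord(60, 0.1)     # transition width 0.1 (x pi rad/sample)
n_narrow, _ = kaiserord(60, 0.05)  # half the width
```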
We will start this chapter by discussing classical IIR filter design. The
design steps consist of designing the IIR filter in the analog-time domain
where closed-form solutions are well known and then using the bilinear
transformation to convert the design to the digital domain. We will see
that the degrees of freedom available are directly linked to the design algorithm chosen. Butterworth filters provide very little control over the
resulting design since it is basically a maximally-flat design. Chebyshev
designs increase the degrees of freedom by allowing ripples in the pass-
band (type I Chebyshev) or the stopband (type II Chebyshev). Elliptic
filters allow for maximum degrees of freedom by allowing for ripples in
both the passband and the stopband. In fact, Chebyshev and Butterworth
designs can be seen as a special case of elliptic designs, so usually one
should only concentrate on elliptic filter design, decreasing the passband
∗ However, see Chapter 5.
[Figure: magnitude responses (dB) vs. Normalized Frequency (×π rad/sample).]
Figure 2.1: 7th order Butterworth filter with a 3-dB frequency of 0.3π. Also shown is an
FIR filter with a cutoff frequency of 0.3π.
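For reference, the four classical designs are a one-liner each in SciPy as well (orders, edges, and ripple values here are illustrative, echoing the figures):

```python
import numpy as np
from scipy.signal import butter, cheby1, cheby2, ellip

# Classical 7th-order lowpass IIR designs (SciPy applies the bilinear
# transform internally); edge at 0.3 x Nyquist, ripple values assumed.
b_b, a_b = butter(7, 0.3)             # maximally flat
b_c1, a_c1 = cheby1(7, 1, 0.3)        # 1 dB passband ripple
b_c2, a_c2 = cheby2(7, 60, 0.3)       # 60 dB stopband attenuation
b_e, a_e = ellip(7, 1, 60, 0.3)       # ripple in both bands
```

All four return transfer-function coefficients; for actual filtering, second-order sections (`output='sos'`) are numerically preferable.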
[Figure: magnitude-squared responses vs. Normalized Frequency (×π rad/sample).]
Figure 2.2: Comparison of a 7th-order and a 12th-order Butterworth filter. The filters
intersect at the 3-dB point.
[Figure 2.3: magnitude-squared responses of the Butterworth and Chebyshev type I filters vs. Normalized Frequency (×π rad/sample).]
Hf = fdesign.lowpass('N,F3db,Ap',7,0.3,1);
Hc = design(Hf,'cheby1');
The magnitude-squared responses are shown in Figure 2.3. The passband ripple
of the Chebyshev filter can be made arbitrarily small, until the filter becomes the
same as the Butterworth filter. That exemplifies the trade-off between passband
ripple and transition width.
With IIR filters, we not only need to consider the ripple/transition-
width trade-offs, but also the degree of phase distortion. We know that
linear-phase throughout the entire Nyquist interval is not possible. Therefore, we want to look at how far from linear the phase response is. A good way to assess this is to look at the group-delay (which ideally would be constant) and see how flat it is.
Of the classical IIR filters considered here∗ , Butterworth filters have the
least amount of phase distortion (phase non-linearity). Since Chebyshev
and elliptic filters can approximate Butterworth filters arbitrarily closely
by making the ripples smaller and smaller, it is clear that the amount of
ripple is related to the amount of phase distortion. Therefore, when we
refer to the trade-off between ripples and transition-width, we should re-
ally say that the trade-off is between the transition-width and the amount
of ripple/phase-distortion. As an example, we show the group-delay of
the Butterworth and Chebyshev type I filters of the previous example in
Figure 2.4. Notice that not only is the group-delay of the Chebyshev fil-
ter less flat, it is also larger for frequencies below the 3-dB point (which
is the region we care about). This illustrates that although the filters have
the same order, a Chebyshev type I filter will have a larger transient-delay
than a Butterworth filter.
[Figure: group delay (in samples) of the Butterworth and Chebyshev type I filters.]
Figure 2.4: Group-delay responses for a 7th-order Butterworth and a 7th-order Cheby-
shev type I filter. Both filters have a 3-dB point of 0.3π.
Example 23 Design a 6th order filter with a 3-dB point of 0.45π. The filter must
have an attenuation of at least 80 dB at frequencies above 0.75π and the passband
ripple must not exceed 0.8 dB.
Hf1 = fdesign.lowpass('N,F3db',6,0.45);
Hf2 = fdesign.lowpass('N,F3db,Ap',6,0.45,0.8);
Hf3 = fdesign.lowpass('N,F3db,Ast',6,0.45,80);
Hb = design(Hf1,'butter');
Hc1 = design(Hf2,'cheby1');
Hc2 = design(Hf3,'cheby2');
The three designs are shown in Figure 2.5. Only the Chebyshev type II filter
reaches the required attenuation of 80 dB by 0.75π.
The group-delay responses for the designs of the previous example are
shown in Figure 2.6. Although the Butterworth’s group-delay is slightly
Figure 2.5: 6th order Butterworth, Chebyshev type I, and Chebyshev type II filters with
a 3-dB point of 0.45π.
[Figure 2.6: group delay (in samples) of the Butterworth, Chebyshev type I, and Chebyshev type II filters vs. Normalized Frequency (×π rad/sample).]
flatter than that of the Chebyshev type II, the latter’s group-delay is smaller
than the former’s for most of the passband.
Even though all three designs in the previous example are of 6th order,
the Chebyshev type II implementation actually requires more multipliers
as we now explain.
[Figure: magnitude responses (dB) vs. Normalized Frequency (×π rad/sample).]
Figure 2.7: Elliptic and Chebyshev type II filters with a 3-dB point of 0.45π.
The resulting filter is shown in Figure 2.7 along with the Chebyshev type II filter
we had already designed. The passband ripple allows for the 80 dB attenuation to
be reached by about 0.63π rad/sample, far before the required 0.75π. If the target application does not benefit from this, the passband ripple can be reduced in order to have smaller passband distortion.
The group-delay of the elliptic filter is similar but even less flat than
that of the Chebyshev type I filter. Figure 2.8 shows the group-delay re-
sponses of the Chebyshev type I design of Example 23 and the elliptic
design of Example 24.
Figure 2.8: Group-delay response of elliptic and Chebyshev type I filters with a 3-dB
point of 0.45π.
Figure 2.9: Magnitude responses of highpass filters designed with all four classical design
methods.
Hf = fdesign.highpass('Fst,Fp,Ast,Ap',.45,.55,60,1);
Hd = design(Hf,'alliir');
The response of all four filters can be visualized using fvtool(Hd) (see Figure
2.9). Since we have designed all four filters, Hd is an array of filters. Hd(1)
is the Butterworth filter, Hd(2) is the Chebyshev type I, Hd(3) the Chebyshev
type II, and Hd(4) the elliptic filter. By inspecting the cost of each filter (e.g.
cost(Hd(2))) we can see that the Butterworth filter takes the most multipliers
(and adders) to implement while the elliptic filter takes the least. The Chebyshev
designs are comparable.
As with Chebyshev type II filters, elliptic filters do not have trivial numerators of the form 1 − 2z−1 + z−2.∗ In contrast, the actual cost of Butterworth and Chebyshev type I filters is a little lower than what is indicated by the cost function, provided we can implement the multiplication by two without using an actual multiplier (the cost function does not assume that this is the case; it is implementation agnostic).
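To see why a trivial numerator saves multipliers, consider a direct-form II biquad with the fixed highpass numerator 1 − 2z−1 + z−2. The following Python sketch (our own illustration, not the book's MATLAB code; the function name is ours) implements one such section; only the denominator coefficients a1 and a2 require true multipliers:

```python
def df2_biquad_trivial_numerator(x, a1, a2):
    # Direct-form II biquad with the fixed highpass numerator 1 - 2 z^-1 + z^-2.
    # The coefficient 2 can be realized as an add/shift (w1 + w1), so only the
    # denominator coefficients a1 and a2 require true multipliers.
    w1 = w2 = 0.0
    y = []
    for s in x:
        w0 = s - a1 * w1 - a2 * w2     # recursive (denominator) part
        y.append(w0 - 2.0 * w1 + w2)   # numerator part: no general multipliers
        w2, w1 = w1, w0
    return y

# With a1 = a2 = 0 the section reduces to the FIR (1 - 2 z^-1 + z^-2)
print(df2_biquad_trivial_numerator([1.0, 0.0, 0.0, 0.0], 0.0, 0.0))
# -> [1.0, -2.0, 1.0, 0.0]
```
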
Fc = 0.0625;
TW = 0.008;
Fp = Fc-TW/2;
Fst= Fc+TW/2;
Ap = 1;
Ast= 80;
Hf = fdesign.lowpass('Fp,Fst,Ap,Ast',Fp,Fst,Ap,Ast);
∗ Notice the negative sign for the term 2z−1 . This is because we are designing highpass
filters rather than lowpass.
Clearly there is no contest between the FIR and the IIR design. In the following chapters we will see that with the use of FIR multirate/multistage techniques we will be able to achieve designs that are more efficient than the elliptic design shown here in terms not of the number of multipliers, but of the number of multiplications per input sample (MPIS). However, we will then go on to show that by using efficient multirate IIR filters based on allpass decompositions, we can achieve even lower MPIS. Note that for the designs discussed so far, both FIR and IIR (being single-rate), the number of MPIS is the same as the number of multipliers.
In addition to computational cost, another common reason for using
IIR filters is their small group-delay when compared to FIR filters. How-
ever, let us compare with minimum-phase FIR filters via an example.
Figure 2.10: Group-delay responses of the Chebyshev type II, linear-phase FIR, and minimum-phase FIR designs.
Figure 2.11: Least pth-norm IIR filter with a 6th-order numerator and a 4th-order denominator. Also shown is the Chebyshev type II filter designed in Example 23.
Nyquist Filters
Overview
Nyquist filters are a special class of filters which are useful for multirate
implementations. Nyquist filters also find applications in digital commu-
nications systems where they are used for pulse shaping (often simultane-
ously performing multirate duties). The widely used raised-cosine filter is
a special case of Nyquist filter that is often used in communications stan-
dards.
Nyquist filters are also called Lth-band filters because the passband of their magnitude response occupies roughly 1/L of the Nyquist interval. The special case L = 2 is widely used; such filters are referred to as halfband filters. Halfband filters can be very efficient for interpolation/decimation by a factor of 2.
Nyquist filters are typically designed via FIR approximations. How-
ever, IIR designs are also possible. We will show FIR and IIR designs
of halfband filters. The IIR case results in extremely efficient filters and
through special structures such as parallel (coupled) allpass-based struc-
tures and wave digital filters can be efficiently implemented for multirate
applications.
Later, in Chapter 5, we will see a special property of Nyquist filters
when used in multirate applications. The cascade of Nyquist interpolators
(or decimators) results in an overall Nyquist interpolator (or decimator).
This makes Nyquist filters highly desirable for multistage designs.
Moreover, as we will see, Nyquist filters have very small passband ripple even when the filter is quantized for fixed-point implementations. This
makes Nyquist filters ideally suitable for embedded applications.
Figure 3.1: Passband ripple details for 5th-band Nyquist FIR filter designed with a Kaiser
window.
for such a branch. The impulse response for the filter designed in Example 30 can be seen in Figure 3.2. Note that every 5th sample is equal to zero except for the middle sample, which is the peak value of the impulse response.
Figure 3.2: Impulse response of 5th-band Nyquist FIR filter. Every 5th sample is equal
to zero (except at the middle).
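The Lth-band property can be reproduced with a plain windowed-sinc design. The sketch below (Python, with a Hamming window standing in as a simple substitute for the Kaiser window used in the book; the function name is ours) builds a 5th-band filter of order 80 and checks that every 5th coefficient away from the middle is zero:

```python
import math

def lth_band_fir(L, M):
    # Windowed-sinc Lth-band (Nyquist) filter of length 2*L*M + 1.
    # A Hamming window stands in for the Kaiser window of the book's designs.
    N = 2 * L * M                      # filter order
    h = []
    for n in range(N + 1):
        m = n - N // 2                 # offset from the middle tap
        ideal = 1.0 / L if m == 0 else math.sin(math.pi * m / L) / (math.pi * m)
        w = 0.54 - 0.46 * math.cos(2.0 * math.pi * n / N)
        h.append(ideal * w)
    return h

h = lth_band_fir(L=5, M=8)             # order 80, like the filter in Figure 3.2
mid = len(h) // 2
# Every 5th sample away from the middle is (numerically) zero; the middle tap is 1/5
assert all(abs(h[mid + 5 * k]) < 1e-12 for k in range(1, 8))
assert abs(h[mid] - 0.2) < 1e-12
```

Windowing the ideal sinc response preserves the zero crossings at multiples of L, which is why window-based Nyquist designs have the Lth-band property exactly.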
The advantages of equiripple designs over Kaiser-window designs are not as clear-cut for Nyquist filters as they are for regular filter designs. In particular, the passband ripples of Kaiser-window designs may be smaller than those of equiripple designs. Moreover, as we will see in Chapter 4, the increasing attenuation in the stopband may be desirable for interpolation and especially for decimation purposes.
While it is clear that the Kaiser-window design has the smallest minimum stop-
band attenuation, it does provide increased stopband attenuation as frequency
increases. The magnitude responses are shown in Figure 3.3.
Moreover, the Kaiser window design results in better passband ripple perfor-
mance than either equiripple design (this can be verified with the measure com-
mand). The passband ripple details are shown in Figure 3.4.
Figure 3.3: Comparison of Kaiser window, equiripple, and sloped equiripple Nyquist
filter designs of 72nd order and a transition width of 0.1π.
Figure 3.4: Passband ripple details for filters designed in Example 31.
Example 32 For the following designs, compare the results of the measure command and the cost command for both the Kaiser-window designs and the equiripple designs.
f = fdesign.nyquist(8,'TW,Ast',0.1,60);
f2 = fdesign.nyquist(4,'TW,Ast',0.2,50);
hk = design(f,'kaiserwin');
he = design(f,'equiripple');
hk2 = design(f2,'kaiserwin');
he2 = design(f2,'equiripple');
Both cases illustrate the trade-off between filter order and passband ripple for
Nyquist filter design choices.
f = fdesign.halfband('N,TW',N,TW);
f2 = fdesign.halfband('N,Ast',N,Ast);
f3 = fdesign.halfband('TW,Ast',TW,Ast);
In all three cases, either Kaiser window or equiripple FIR designs are pos-
sible. Moreover, in the first case, it is also possible to design least-squares
FIR filters.
Unlike the general case, there are usually no convergence issues with equiripple halfband filter designs. In addition, the fact that the passband ripple is the same as the stopband ripple means that, regardless of the design algorithm, the resulting peak-to-peak passband ripples will be about the same.
So for halfband filters, the advantages of optimal equiripple designs over Kaiser-window designs resurface more clearly than for other Nyquist filters. For a given set of specifications, the equiripple designs will have either a larger minimum stopband attenuation, a smaller transition width, or a smaller number of coefficients.
Example 33 Consider Kaiser-window and equiripple designs for the following two cases:
f = fdesign.halfband('N,TW',50,0.1);
f2 = fdesign.halfband('TW,Ast',0.1,75);
Using the measure command in the first case shows that the resulting stopband
attenuation (and consequently the passband ripple) is better in the equiripple case.
Similarly, using the cost command in the second case shows that the same per-
formance is achieved with fewer coefficients in the equiripple case.
Sloped equiripple designs are also possible, but in the case of halfband
filters, they will result in similarly sloped passbands, given the symmetry
constraints on halfband filters.
Figure 3.5: Passband details of quasi-linear phase IIR halfband filter with 70 dB of stop-
band attenuation.
Minimum-order designs
Minimum-order designs provide a good framework for comparing the implementation cost of FIR halfband filters vs. IIR halfbands. Elliptic halfbands are the most efficient, while quasi-linear phase halfbands give up some efficiency in the name of phase linearity. Either case is significantly more efficient than an equiripple FIR halfband.
Figure 3.6: Group-delay comparison for IIR and FIR halfband filters.
Lth sample of their impulse response is zero. They also tend to have very small passband ripples.
Halfband filters are particularly interesting given their high efficiency.
IIR halfband filters are even more efficient and have extremely small pass-
band ripples.
Given all their advantages, Nyquist filters (FIR or IIR) should be the first choice for multirate applications (see Chapter 4). The fact that their cutoff frequency is given by π/L results in transition-band overlap in decimation applications. This, however, is not a problem, given that no aliasing will occur in the band of interest (see Chapter 4 and Appendix B).
Moreover, we will see (Chapter 5) that cascade multirate Nyquist fil-
ters possess an interesting property: the overall equivalent multirate fil-
ter is also Nyquist. For example, a decimation/interpolation filter with a
rate change of say 8 can be implemented as a cascade of three halfband
decimation/interpolation filters (each with a rate change of 2). The three
halfband filters in cascade act as a single Nyquist filter (the equivalent impulse response of the cascade will have a value of zero every 8th sample)
but can be significantly more efficient than a single-stage design. This is
particularly true if IIR halfband designs are used.
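This cascade property can be checked numerically. The sketch below (Python, with windowed-sinc halfbands as simple stand-ins for the book's designs; all names are ours) forms the equivalent filter H1(z)H2(z²) of two cascaded halfband stages and verifies that it is itself a 4th-band Nyquist filter:

```python
import math

def halfband_fir(M):
    # Windowed-sinc halfband filter of length 4*M + 1 (Hamming window stand-in).
    N = 4 * M
    h = []
    for n in range(N + 1):
        m = n - N // 2
        ideal = 0.5 if m == 0 else math.sin(math.pi * m / 2.0) / (math.pi * m)
        w = 0.54 - 0.46 * math.cos(2.0 * math.pi * n / N)
        h.append(ideal * w)
    return h

def upsample(h, L):
    # H(z) -> H(z^L): insert L-1 zeros between coefficients
    g = []
    for c in h[:-1]:
        g.append(c)
        g.extend([0.0] * (L - 1))
    g.append(h[-1])
    return g

def convolve(a, b):
    y = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            y[i + j] += ai * bj
    return y

# Equivalent filter of two cascaded halfband stages: H1(z) * H2(z^2)
h1, h2 = halfband_fir(8), halfband_fir(4)
g = convolve(h1, upsample(h2, 2))
mid = len(g) // 2
# The cascade is itself Nyquist for L = 4: every 4th sample away from the
# middle of the equivalent impulse response is (numerically) zero
assert all(abs(g[mid + 4 * k]) < 1e-12 for k in range(1, 8))
```
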
Multirate Filter Design
Overview
Multirate signal processing is a key tool to use in many signal processing
implementations. The main reason to use multirate signal processing is
efficiency. Digital filters are a key part of multirate signal processing as
they are an enabling component for changing the sampling rate of a signal.
If multirate concepts are not fresh, it may be helpful to read through
Appendix B prior to reading through this chapter.
In this chapter we will talk about designing filters for multirate sys-
tems. Whether we are increasing or decreasing the sampling rate of a sig-
nal, a filter is usually required. The most common filter type used for mul-
tirate applications is a lowpass filter.∗ Nyquist filters, described in Chapter
3, are the preferred design to use for multirate applications.
We start by presenting filters in the context of reducing the sampling
rate of a signal (decimation). We want to emphasize the following: if you
reduce the bandwidth of a signal through filtering, you should reduce its
sampling rate accordingly.
We will see that the above statement applies whether the bandwidth
is decreased by an integer or a fractional factor. Understanding fractional
sampling-rate reduction requires understanding interpolation. We present
first interpolation as simply finding samples lying between existing sam-
ples (not necessarily increasing the sampling rate). We then use this to
∗ However, as we will see, highpass, bandpass and in general any filter that reduces the
bandwidth of a signal may be suitable for multirate applications.
Bx/By = M
Figure 4.1: Reducing the sampling rate after the bandwidth of the signal is reduced.
Figure 4.4: Spectrum of decimated signal with ωst = π/M along with spectral replicas.
is equal to M − 1.
We’d like to compare two different cases, one in which the cutoff fre-
quency is π/M, and another in which the stopband-edge frequency is
π/M. First let’s look at the latter case.
For illustration purposes, assume M = 3 and assume that the spectrum
of the signal to be filtered is flat and occupies the entire Nyquist interval.
This means that the spectrum of the output signal (prior to downsam-
pling) will take the shape of the filter. The conclusions we will reach are
valid for any value of M and any input signal (but the aliasing will be
lower than what we show if the input signal is already somewhat attenu-
ated in the stopband; we are showing the worst-case scenario). The spec-
trum of the decimated signal along with the replicas are shown in Figure
4.4. Aliasing invariably occurs because of the finite amount of stopband
attenuation. However, presumably we select the amount of stopband at-
tenuation so that the aliasing is tolerable. Notice the problem with using
filters with equiripple stopband when decimating. The energy in the stop-
band is large and it all aliases into the spectrum of the signal we are in-
terested in. Also notice that the usable bandwidth extends from zero to
the passband-edge frequency. The passband-edge frequency in this case is
given by π/M − Tw , where Tw is the transition-width.
Figure 4.5: Spectrum of decimated signal with ωc = π/M along with spectral replicas.
Now let’s look at the case where the cutoff frequency is set to π/M.
Recall that for FIR filters the cutoff frequency is typically located at the
center of the transition band. The spectrum of the decimated signal along
with its corresponding replicas for this case are shown in Figure 4.5. Since
the cutoff frequency is in the middle of the transition band, the transition
bands alias into each other. This is not a problem since that region is dis-
torted by the filter anyway so that it should not contain any information
of interest. Notice that we have used a sloped stopband to reduce the to-
tal amount of aliasing that occurs. Of course this could have been done
in the previous case as well. Also, the usable passband extends now to
π/M − Tw /2 where once again Tw is the transition width. This means
that the usable passband is larger in this case than in the case ωst = π/M
by Tw /2. In order to have the same usable passband in both cases, we
would have to make the transition width of the ωst = π/M case half as wide as that of the ωc = π/M case. Given that for FIR filters the filter order grows inversely proportional to the transition width, this would imply increasing the filter order by a factor of about two in order to obtain the same usable bandwidth with ωst = π/M.
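The bookkeeping above can be summarized in a few lines (Python, with an assumed illustrative value for the transition width Tw):

```python
import math

M = 3
Tw = 0.1 * math.pi                 # assumed transition width, for illustration

# Stopband edge at pi/M: the usable passband ends a full transition width below pi/M
usable_stopband_at_piM = math.pi / M - Tw
# Cutoff (mid-transition) at pi/M: only half the transition width is given up
usable_cutoff_at_piM = math.pi / M - Tw / 2.0

# The cutoff-at-pi/M choice buys back exactly Tw/2 of usable passband
assert abs((usable_cutoff_at_piM - usable_stopband_at_piM) - Tw / 2.0) < 1e-12
```
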
While both selections, ωst = π/M and ωc = π/M, are valid, there are clear advantages to using ωc = π/M. In practice, we usually have some
f = fdesign.decimator(4,'Nyquist',4,'TW,Ast',0.1,60);
h = design(f,'kaiserwin');
Ap = 1;
Ast = 80;
Hf = fdesign.decimator(M,'lowpass','Fp,Fst,Ap,Ast',...
Fp,Fst,Ap,Ast);
Hd = design(Hf,'equiripple');
cost(Hd)
Through the use of multirate techniques, we are within a factor of two
of the number of MPIS required to implement an IIR filter that meets the
specifications. However, we still have perfect linear phase and no stability
issues to worry about.
For the case just discussed, a Kaiser-window-designed∗ Nyquist filter would require more computations per input sample than a regular lowpass filter. The main reason is the passband ripple. With the given specifications, the Kaiser-window design will have a mere 0.0015986 dB of passband ripple. The price to pay for this vast over-satisfying of the specifications is a computational cost of 73.6875 MPIS.
In Appendix C we study various approaches to filter design for these
same specifications. We will see that multistage/multirate designs and in
particular multistage Nyquist multirate designs will be the most efficient
of all approaches looked at for these specifications. This is true despite the
fact that such multistage Nyquist designs also will vastly over-satisfy the
passband ripple requirements.
Decimating by a factor of 2
When reducing the bandwidth/decimating by a factor of two, if the cutoff
frequency is set to π/2 (and transition-band aliasing is tolerated), FIR or
IIR halfband filters can be used to very efficiently implement the filter.
In the FIR case, the halfband decimator by 2 takes full advantage of the
fact that about half the filter coefficients are zero.
The halfband is already efficient in that for a filter of length 95, it only requires 49
multipliers. By reducing the sampling rate, a further gain factor of two is obtained
in computational cost. The number of MPIS is 24.5.
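The multiplier count quoted above follows directly from the halfband zero pattern; a quick sanity check in Python (our own arithmetic, not the book's cost command):

```python
length = 95                        # halfband filter length from the example
center = length // 2
# In a halfband filter, every even offset from the center coefficient is zero,
# except the center itself:
zero_taps = [n for n in range(length) if (n - center) % 2 == 0 and n != center]
multipliers = length - len(zero_taps)
# Decimating by 2 halves the work: one filter evaluation per two input samples
mpis = multipliers / 2.0
assert multipliers == 49 and mpis == 24.5
```
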
h1 = design(f,'ellip');
h2 = design(f,'iirlinphase');
cost(h1)
cost(h2)
As usual, the elliptic design is the most efficient, requiring only 6 multipliers and
3 MPIS. If approximate linear phase is desired, the iirlinphase design requires
19 multipliers and 9.5 MPIS.
Example 39 For example, consider using a highpass filter that reduces the band-
width by a factor M = 3. If we do not downsample, and the input to the filter
has a flat spectrum, the output signal will have the spectrum of the filter. Some-
thing like what is shown in Figure 4.7. If we downsample the filtered signal by a
bandpass filter does not meet the restrictions just mentioned, we should
still try to downsample as much as possible.
Example 40 Suppose we are interested in retaining the band between 0.35π and
0.5167π. The bandwidth is reduced by a factor M = 6. However, the band-edges
are not between kπ/M and (k + 1)π/M for M = 4, 5, 6. The band-edges do lie
between π/3 and 2π/3, so if we design a bandpass filter we can at least decimate
by 3.
will be [−π/3, π/3]. Notice that we haven’t been able to remove all the white
space since we could only downsample by 3 even though the bandwidth was re-
duced by 6.
M = 3;
Band = 3;
Fs = 30e3;
Hf = fdesign.decimator(M,'Nyquist',Band,'TW,Ast',1500,65,Fs);
Hd = design(Hf,'kaiserwin');
(Figure: original samples, underlying signal, and fractionally-decimated samples, plotted against time in samples.)
4.2 Interpolation
Roughly speaking, interpolation consists of computing new samples be-
tween existing samples. In the context of signal processing, ideally we
interpolate by finding the desired points on the underlying continuous-
time analog signal that corresponds to the samples we have at hand. This
is done without actually converting the signal back to continuous time.
The process is referred to as sinc interpolation since an ideal lowpass filter
(with a sinc-shaped impulse response) is involved (see Appendix B).
(Figure: underlying analog signal, sampled signal, and fractionally-advanced signal, amplitude vs. time.)
β = 1 − α. Both α and β are fractions between zero and one. Thus, if x[n] is our signal prior to interpolation, the signal consisting of interpolated values is computed by passing the signal through a filter with transfer function
Hfrac(z) = z^α.
Of course advancing a signal in time is a non-causal operation. Since
the advance is less than one sample, we can make it causal by delaying
everything by one sample,
IIR approximations to this filter later, but for now we will look at how a
bank of fractional advance filters can be used to increase the sampling-rate
of a signal.
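As a concrete (and very crude) example of a fractional advance filter, consider the two-tap linear interpolator implied by the weights α and β = 1 − α. This Python sketch (our own illustration; the book develops better FIR and IIR approximations) implements the causal version, z^(α−1):

```python
def fractional_delay_linear(x, alpha):
    # Causal two-tap linear interpolator approximating z^(alpha - 1):
    #   y[n] ~= x[n - 1 + alpha] = (1 - alpha) * x[n - 1] + alpha * x[n]
    beta = 1.0 - alpha
    y, prev = [], 0.0
    for s in x:
        y.append(beta * prev + alpha * s)
        prev = s
    return y

# A ramp is reproduced exactly at the shifted positions: y[n] = n - 0.75
y = fractional_delay_linear([float(n) for n in range(6)], alpha=0.25)
assert all(abs(y[n] - (n - 0.75)) < 1e-12 for n in range(1, 6))
```

Linear interpolation is exact only for piecewise-linear signals; the bank-of-filters approach discussed next uses better approximations to the ideal fractional advance.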
(Figure: underlying analog signal, original samples, and interpolated samples, amplitude vs. time.)
• xT′[5n] = xT[n]
• xT′[5n + 1] = xT[n + 1/5]
• xT′[5n + 2] = xT[n + 2/5]
• xT′[5n + 3] = xT[n + 3/5]
• xT′[5n + 4] = xT[n + 4/5]
Figure 4.14: Bank of filters used for interpolation. Each filter performs a fractional ad-
vance of the input signal and has a different phase. The overall implementation is re-
ferred to as a polyphase interpolation filter. For every input sample, we cycle through all
polyphase branches.
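The equivalence between zero-stuffing followed by filtering and the polyphase structure of Figure 4.14 can be verified directly. In this Python sketch (helper names are ours), branch k holds every Lth coefficient of h starting at k, and each input sample produces L outputs:

```python
def upsample_then_filter(x, h, L):
    # Naive interpolation: insert L-1 zeros after each sample, then filter.
    up = []
    for s in x:
        up.append(s)
        up.extend([0.0] * (L - 1))
    return [sum(hj * up[n - j] for j, hj in enumerate(h) if n - j >= 0)
            for n in range(len(up))]

def polyphase_interpolate(x, h, L):
    # Polyphase interpolation: branch k holds h[k], h[k+L], ...; every input
    # sample drives all L branches, producing L output samples (Figure 4.14).
    branches = [h[k::L] for k in range(L)]
    out = []
    for n in range(len(x)):
        for b in branches:
            out.append(sum(bj * x[n - j] for j, bj in enumerate(b) if n - j >= 0))
    return out

x = [1.0, -0.5, 0.25, 0.125]
h = [0.1 * k for k in range(9)]        # arbitrary coefficients for the check
y1 = upsample_then_filter(x, h, 3)
y2 = polyphase_interpolate(x, h, 3)
assert all(abs(a - b) < 1e-12 for a, b in zip(y1, y2))
```

The polyphase form never multiplies by the stuffed zeros, which is where the computational savings come from.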
Hk (e jω ) = e jωk/L , k = 0, . . . , L − 1
so that each filter Hk (e jω ) is allpass, i.e. |Hk (e jω )| = 1, and has linear phase,
arg{ Hk (e jω )} = ωk/L. ∗
∗ The term polyphase stems from this derivation. Each filter in the filter bank has a
different phase. The interpolator filter consists of L polyphase parallel branches, each
branch tasked with computing one of the L interpolated outputs. The time-domain view
of the ideal interpolation filter thus has the polyphase structure built-in.
−k/L, k = 0, . . . , L − 1
Figure 4.15: FIR Nyquist interpolate by 4 filter designed with a Kaiser window. The
filter removes 3 spectral replicas and operates at four times the input rate.
L = 4; % Interpolation factor
Band = 4; % Band for Nyquist design
Hf = fdesign.interpolator(L,'Nyquist',Band,'TW,Ast',0.1,80);
Hint = design(Hf,'kaiserwin');
The resulting Nyquist filter is shown in Figure 4.15.
For the design from the previous example, it is worth taking a look at
the magnitude response and group-delay of the polyphase components of
the resulting FIR filter. The magnitude response is shown in Figure 4.16
while the group-delay is shown in Figure 4.17. The magnitude response
reveals that only one of the polyphase filters is a perfect allpass, while
the others are approximations that fade-off at high frequency. The magni-
tude responses of the 2nd and 4th polyphase sub-filters are identical. The
group-delays show a fractional advance of 0, 0.25, 0.5, and 0.75 for the
four polyphase sub-filters. This advance is relative to a nominal delay of
M = 13 samples. The filter length is 2LM + 1 = 105. Note that two of the
polyphase components do not have perfectly flat group delays. However,
the nonlinear shape of one compensates for the other so that overall the
interpolation filter has linear phase.
Figure 4.16: Magnitude response of polyphase components for a Nyquist FIR interpolate-
by-four filter.
Figure 4.17: Group-delay responses of the polyphase components for a Nyquist FIR interpolate-by-four filter.
p = polyphase(Hint);
isequal(p(2,1:end-1),Hint.Numerator(2:4:end)) % Returns true
Hf = fdesign.interpolator(2,'halfband','TW,Ast',0.08,55);
Hlin = design(Hf,'iirlinphase');
Hellip = design(Hf,'ellip');
cost(Hlin)
cost(Hellip)
The two polyphase branches for either design are perfect allpass filters. The filters
deviate from the ideal interpolation filter in their phase response. The group-delay
of the polyphase branches for the quasi linear-phase IIR design is shown in Figure
4.19. Note that one of the branches is a pure delay and therefore has perfectly flat
group-delay. The group-delay of the polyphase sub-filters for the elliptic design is
shown in Figure 4.20. Note how neither of the polyphase components has a flat
group-delay in this case.
Figure 4.19: Group-delay of polyphase components for a quasi linear-phase IIR halfband
interpolate-by-two filter.
Figure 4.20: Group-delay of polyphase components for an elliptic IIR halfband interpolate-by-two filter.
Figure 4.21: Magnitude response of halfband filter used to increase the sampling rate by
2.
f = fdesign.interpolator(2,'halfband','TW,Ast',8000,96,96000);
H = design(f,'equiripple');
The magnitude response of the filter can be seen in Figure 4.21. Note that the
passband extends to 20 kHz as desired. The cutoff frequency for the filter is 24
kHz. The spectral replica of the original audio signal centered around 48 kHz will
be removed by the halfband filter.
Figure 4.22: Conceptual configuration for increasing the sampling rate of a signal by a
fractional factor. L > M.
Figure 4.22 illustrates the idea. The role of the upsampler+filter combi-
nation is the same as for the case when we increase the rate by an integer
factor. Once we have increased the rate, we discard samples we don’t
need, keeping only the ones required for the new rate.
While the procedure we have just described is trivial, it is inefficient
since many of the samples that have been computed via interpolation are
subsequently discarded. Instead, to implement fractional interpolation ef-
ficiently we compute only the samples we intend to keep.
Specifically, if the polyphase filters are H0 (z), H1 (z), . . . , HL−1 (z), in-
stead of using H0 (z) to compute the first output, H1 (z) to compute the
second output and so forth, we use H0 (z) for the first output, we skip
M − 1 polyphase filters and use H M (z) for the second output and so forth.
As an example, if L = 3 and M = 2, the sequence of polyphase filters
used are H0 (z), then skip H1 (z), then use H2 (z), then, for the next input,
skip H0 (z), then use H1 (z), then skip H2 (z), and then start again by using
H0 (z).
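The branch-selection rule just described (output m uses overall phase mM) can be written down compactly; a Python sketch with our own function name:

```python
def polyphase_schedule(L, M, num_outputs):
    # For a rate change of L/M, output m is produced by polyphase branch
    # (m*M) mod L, operating on input sample (m*M) // L.
    return [((m * M) // L, (m * M) % L) for m in range(num_outputs)]

# L = 3, M = 2: branches cycle H0, H2, H1, H0, ... exactly as described above
sched = polyphase_schedule(3, 2, 4)
assert [branch for (_, branch) in sched] == [0, 2, 1, 0]
assert [sample for (sample, _) in sched] == [0, 0, 1, 2]
```
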
The general idea is shown in Figure 4.23. The structure resembles that of interpolation by an integer factor, except that for each input sample only the branches that will produce an output we intend to keep are used.
Example 45 We will work with Hertz to illustrate the fact that when we inter-
polate (by either an integer or a fractional factor) we never reduce the bandwidth
of the input signal.
Suppose we have a signal sampled at 500 Hz. The band of interest for the
signal is from 0 to 200 Hz, i.e. a transition width of 100 Hz has been allocated.
Say we want to increase the sampling rate to 1600 Hz. We choose to use a 16th-
band Nyquist filter. The transition width is set to 100 Hz in order to not disturb
information contained in the band of interest.
Figure 4.23: Bank of filters used for fractional interpolation. Unlike interpolation by an integer factor, not all polyphase branches are used for each input sample.
L = 16;
M = 5;
Band = L;
Fs = 500*L; % High sampling rate
TW = 100;
Ast = 80;
Hf = fdesign.rsrc(L,M,'Nyquist',Band,'TW,Ast',TW,Ast,Fs);
Hrsrc = design(Hf,'kaiserwin');
Note that the sampling frequency we had to set for the design was that corre-
sponding to full interpolation by 16, i.e. 16 × 500. This is because, as we have
explained, the filter is designed as a regular interpolate-by-16 filter, but is im-
plemented efficiently so that no sample to be discarded is actually computed. The
magnitude response of the filter is shown in Figure 4.24. The passband of the filter extends to 200 Hz so that the band of interest is left undisturbed. The resulting sampling frequency is 1600 Hz. However, this filter removes 15 spectral replicas, as if the sampling frequency were being increased to 8000 Hz. The downsampling by 5 produces 4 spectral replicas centered around multiples of the resulting sampling frequency, i.e. 1600 Hz.
Figure 4.24: Magnitude response of filter used for fractional interpolation. The passband
extends to 200 Hz so that the band of interest is left untouched.
factor of 7/10.
L = 7;
M = 10;
Band = M;
Fs = 1000*L; % High sampling rate
TW = 140;
Ast = 80;
Hf = fdesign.rsrc(L,M,'Nyquist',Band,'TW,Ast',TW,Ast,Fs);
Hrsrc = design(Hf,'kaiserwin');
Note that once again, the sampling frequency for the filter is set as if full interpolation were to be implemented. The band, however, is chosen as M rather than L (compare to the previous example) so that the resulting filter reduces the bandwidth by a factor of L/M. The transition width is chosen so that the passband of the filter extends to the desired 280 Hz. The computation is simple: fc − TW/2, where fc is the cutoff frequency, which is given by fs/(2 · Band) = 350 Hz. Subtracting half
the transition width, i.e. 70, means the passband of the filter will extend
to 280 Hz as intended. The passband details for the filter can be seen in
Figure 4.25.
If the input signal had a flat spectrum occupying the full Nyquist in-
terval (worst case), after filtering and fractional decimation, the spectrum
of the signal would take the shape of the filter. Its spectral replicas would
be centered around multiples of the new sampling frequency, i.e. 700 Hz.
The baseband spectrum along with the first positive two spectral replicas
is shown in Figure 4.26. Note that as before, we have allowed for transition
band overlap, but the baseband of interest presents no significant aliasing.
Figure 4.25: Passband details of filter designed for fractional decimation. The passband
extends to 280 Hz so that the band of interest is left untouched.
Lowpass filters are the most common filters for multirate applications. However, highpass and bandpass filters can also be used for both decreasing and increasing the sampling rate of a signal. Although we did not touch upon the case of increasing the sampling rate, using a bandpass rather than a lowpass filter in that case is simply a matter of choosing which spectral replicas should be removed.
We have presented Nyquist filters as the preferred type of lowpass filters for multirate applications; however, in principle any lowpass filter can be used.
When using Nyquist filters, equiripple FIR filters are not always the best choice. Kaiser-window designs may have a smaller passband ripple along with an increasing stopband attenuation that may be desirable for removal of spectral replicas/aliasing attenuation.
IIR halfband filters have been presented as an extremely efficient way of implementing decimate-by-two and interpolate-by-two filters.
In the next chapter we will see that multirate Nyquist filters have the
interesting property that cascades of such filters remain Nyquist overall.
As such, we will see that to perform efficient multirate filtering, multistage
designs can be a very attractive solution. In particular, cascading efficient
Figure 4.26: Baseband spectrum and the first two positive spectral replicas of the fractionally decimated signal. The input signal is assumed to have a flat spectrum which occupies the entire Nyquist interval.
Multistage/Multirate Filter
Design
Overview
Generally, given a fixed passband ripple/stopband attenuation, the nar-
rower the transition band of a filter, the more expensive it is to implement.
A very effective way of improving the efficiency of filter designs is to use several stages connected in cascade (series). The idea is that one stage addresses the narrow transition band but manages to do so without requiring a high implementation cost, while subsequent stages make up for the compromises that must be made in order for the first stage to be efficient (usually this means that subsequent stages remove remaining spectral replicas).
In this chapter we start by discussing the so-called interpolated FIR
(IFIR) filter design method. This method breaks the design down into
two stages. It uses upsampling of an impulse response in order to achieve
a narrow transition band while only adding zeros to the impulse response
(therefore not increasing the complexity). The upsampling introduces spec-
tral replicas that are removed by the second-stage filter, a lowpass filter
with a less stringent transition bandwidth requirement.
We will show that multirate implementations of multistage designs are
the most effective way of implementing these filters. The idea follows
upon the notion we have already discussed in Chapter 4 of reducing the
sampling rate whenever the bandwidth of a signal is reduced.
Figure 5.1: The IFIR implementation. An upsampled filter is cascaded with an image
suppressor filter to attain an overall design with a reduced computational cost.
The IFIR approach yields linear phase FIR filters that can meet the given
specifications with a reduced number of multipliers.
The idea is rather simple. Since the length of the filter grows as the
transition width shrinks, we don’t design a filter for a given (small) tran-
sition width. Rather, we design a filter for a multiple L of the transition
width. This filter will have a significantly smaller length than a direct
design for the original (small) transition width. Then, we upsample the im-
pulse response by a factor equal to the multiple of the transition width, L.
Upsampling will cause the designed filter to compress, meeting the origi-
nal specifications without introducing extra multipliers (it only introduces
zeros, resulting in a larger delay). The price to pay is the appearance of
spectral replicas of the desired filter response within the Nyquist interval.
These replicas must be removed by a second filter (called in this context
the interpolation filter or image suppressor filter) that is cascaded with the
original to obtain the desired overall response. Although this extra filter
introduces additional multipliers, it is possible in many cases to still have
overall computational savings relative to conventional designs. The im-
plementation is shown in Figure 5.1.
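The key property behind IFIR is that inserting zeros into an impulse response compresses its frequency response. The book's code is MATLAB, but the property itself is language-neutral; the following Python/NumPy sketch (with a hypothetical window stand-in for the prototype lowpass) verifies that upsampling the impulse response by L maps H(e^{jω}) to H(e^{jωL}):

```python
import numpy as np

def upsample_impulse(h, L):
    """Insert L-1 zeros between the samples of h (adds no multipliers)."""
    hu = np.zeros(L * len(h) - (L - 1))
    hu[::L] = h
    return hu

def freq_resp(h, w):
    """Evaluate H(e^{jw}) at the angular frequencies in w."""
    n = np.arange(len(h))
    return np.array([np.sum(h * np.exp(-1j * wk * n)) for wk in w])

h = np.hanning(11)                  # stand-in prototype lowpass (hypothetical)
hu = upsample_impulse(h, 3)
w = np.linspace(0, np.pi / 3, 50)
# Upsampling by L compresses the response: Hu(e^{jw}) = H(e^{jwL})
assert np.allclose(freq_resp(hu, w), freq_resp(h, 3 * w))
```

The compressed response also contains the spectral replicas mentioned above; the image suppressor's job is to remove them.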
The idea is depicted by example in Figure 5.2 for the case of an upsam-
pling factor of 3. The “relaxed” design is approximately of one third the
length of the desired design, if the latter were to be designed directly. The
upsampled design has the same transition width as the desired design.
All that is left is to remove the spectral replica introduced by upsampling.
This is the job of the image suppressor filter.
As an example of the computational cost savings, consider once again
the design Specifications Set 3. The number of multipliers required for a
single linear phase design was 263. An IFIR design can attain the same
specs with 127 multipliers when using an upsampling factor of 6:
Figure 5.2: Illustration of the IFIR design paradigm (panels: desired design, relaxed
design, upsampled design with image suppressor, resulting design). Two filters are used
to attain stringent transition width specifications with reduced total multiplier count
when compared to a single filter design.
Hf = fdesign.lowpass('Fp,Fst,Ap,Ast',.12,.14,.175,60);
Hifir = design(Hf,'ifir','UpsamplingFactor',6);
cost(Hifir)
The response of the upsampled filter and the image suppressor filter is
shown in Figure 5.3. The overall response, compared to a single linear
phase equiripple design is shown in Figure 5.4.
Figure 5.3: Magnitude response of the upsampled filter and the image suppressor filter
in an IFIR design.
The passband ripples of the two filters can add up, requiring the design
to ensure that the sum of the two peak passband ripples does not exceed
the original specification. Close inspection of the passband of the overall
design in the previous example, shown in Figure 5.5, reveals a rather
chaotic (but certainly within-spec) behavior of the ripple.
Further optimized designs, [4], [28], attain a much cleaner passband
behavior by jointly optimizing the design of the two filters to work better
together. This results in a filter that can meet the specifications set with an
even further reduction in the number of multipliers. The savings are espe-
cially significant for the image suppressor filter, which is greatly simplified
by this joint optimization.
Utilizing this joint optimization, the Specifications Set 3 can be met
with only 74 multipliers, once again for an upsampling factor of 6.
Hf = fdesign.lowpass('Fp,Fst,Ap,Ast',.12,.14,.175,60);
Hifir = design(Hf,'ifir','UpsamplingFactor',6,...
'JointOptimization',true);
cost(Hifir)
The manner in which the two filters work together is best described by the
magnitude response of each filter, shown in Figure 5.6.
Figure 5.4: Overall magnitude response of an IFIR design and a conventional equiripple
design. The IFIR implementation requires 127 multipliers vs. 263 for the conventional
implementation.
Figure 5.5: Passband details of an IFIR design revealing a rather chaotic behavior of the
ripple.
Figure 5.6: Magnitude response of the upsampled filter and the image suppressor filter
in an optimized IFIR design. The two filters are jointly optimized in the design to achieve
a specifications set with a reduced number of multipliers.
Hf = fdesign.decimator(6,'lowpass','Fp,Fst,Ap,Ast',.12,.14,.175,60);
Hifir = design(Hf,'ifir','UpsamplingFactor',6,...
'JointOptimization',true);
cost(Hifir)
Looking at the results from the cost function, we can see that while the num-
ber of multipliers remains 74 (the same as for a single-rate design), the number
of multiplications per input sample has been reduced substantially from 74 to
12.333. The number 12.333 is computed as follows. Of the 29 multipliers used
for the image suppressor filter, because of its efficient decimating implementa-
tion, only 29/6 multiplications are performed on average per input sample. Be-
cause of decimation, only one out of 6 input samples (to the entire cascade) ever
reaches U(z), so per input sample, the total number of multiplications on average
is 29/6 + 45/6 = 12.333.
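The per-input-sample arithmetic above is simple enough to check directly. A Python sketch (the multiplier counts 29 and 45 are taken from the cost breakdown just described):

```python
L = 6                    # upsampling/decimation factor
image_mults = 29         # image suppressor multipliers (from the cost output)
upsampled_mults = 45     # 74 total minus 29 for the image suppressor
# Each filter effectively runs at 1/6 of the input rate in the
# decimating implementation, so divide each count by L.
mpis = image_mults / L + upsampled_mults / L
assert round(mpis, 3) == 12.333
```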
Figure 5.7: Passband details of an optimized IFIR design. The optimized design exhibits
nice equiripple behavior.
Figure 5.9: Interchange of the downsampler and the upsampled filter using the Noble
identities.
Example 47 Consider the design of a lowpass filter with the following specifica-
tions:
Specifications Set 6
The specifications imply that the band of interest extends from 0 to 500 Hz.
Since the bandwidth is being reduced by about a factor of 8, we decide to design a
decimation filter with a decimation factor of 8 to meet the specifications and reduce
the sampling-rate after filtering accordingly.
If we were to design a single-stage equiripple decimator, the design would
take 343 multipliers and about 48 multiplications per input sample. In order to
design a general multistage/multirate filter and allow for the tools to determine
the optimal number of stages we use the following commands:
Hf = fdesign.decimator(8,'Lowpass',...
'Fp,Fst,Ap,Ast',500,600,.1,80,1e4);
Hmulti = design(Hf,'multistage');
cost(Hmulti)
Figure 5.10: Magnitude response of each of the three stages of the multistage design.
Example 48 We can use the Nstages option to control the number of stages in
a design. Consider the following two designs for the same design specifications:
Figure 5.11: Overall magnitude response of multistage design for Specifications Set 6.
Hf = fdesign.decimator(16,'Lowpass',...
'Fp,Fst,Ap,Ast',.05,.06,.5,70);
Hmulti = design(Hf,'multistage');
Hmulti2 = design(Hf,'multistage','Nstages',3);
We already know that Nyquist filters are well-suited for (either deci-
mation or interpolation) multirate applications. The ability to break down
a multirate Nyquist design into several multirate Nyquist stages provides
very efficient designs. In many cases, halfband filters are used for the
individual stages that make up the multistage design. Whenever a halfband
filter is used for a stage, it is possible to use an extremely efficient
IIR multirate halfband design, as long as we allow for IIR filters as part of
the design.
Example 49 As an example, consider the design of a Nyquist filter that deci-
mates by 8. We will compare a single-stage and a multistage design to illustrate the
computational savings that can be afforded by using multistage Nyquist filters.
Hf = fdesign.decimator(8,'nyquist',8,.016,80);
H1 = design(Hf,'kaiserwin');
H2 = design(Hf,'multistage');
H3 = design(Hf,'multistage','Nstages',2);
The computational costs of the three designs are summarized in the following
table:
NMult NAdd Nstates MPIS APIS NStages
H1 551 550 624 68.875 68.75 1
H2 93 90 174 15.625 14.75 3
H3 106 104 182 17.125 16.75 2
The equivalent overall filters for all three designs are 8-band Nyquist filters.
The default multistage design, H2, is a 3-stage filter with each individual stage
being a halfband filter (each halfband filter is different, though). H3 is a 4-band
Nyquist decimator followed by a halfband Nyquist decimator.
Figure 5.12: Magnitude response of the multistage Nyquist FIR and IIR designs.
The magnitude response of the 3-stage Nyquist FIR design along with
the two 3-stage Nyquist IIR designs is shown in Figure 5.12. All three
designs behave similarly overall as expected since the specifications for
each halfband (each stage) are the same. However, although the 3-stage
Figure 5.13: Passband details of the multistage Nyquist FIR and IIR designs.
FIR Nyquist design has a small passband ripple, it is nowhere near the
tiny passband ripple of the IIR designs. The passband details are shown
in Figure 5.13.
As stated before, the computational savings of the elliptic design come
at the expense of phase non-linearity. On the other hand, the group-delay
of the elliptic design is much lower than the other two designs. The pass-
band group-delay is shown in Figure 5.14.
We can use realizemdl(H4) or realizemdl(H5) to generate Simulink
blocks that implement these multistage decimators using halfband IIR fil-
ters in efficient polyphase form.
Figure 5.14: Passband group-delay for multistage Nyquist FIR and IIR filters.
Figure 5.15: Magnitude response of a single-stage design compared with the first stage
of a multistage design.
f = fdesign.interpolator(8,'Nyquist',8,'TW,Ast',312.5,80,1e4);
h = design(f,'kaiserwin');
g = design(f,'multistage','Nstages',3);
Figure 5.16: Overall magnitude response of the single-stage and multistage interpolator
designs.
hm = design(f,'multistage','Nstages',3,...
'HalfbandDesignMethod','iirlinphase');
Figure 5.17: Magnitude response of each of the three stages of the multistage interpolator
design.
Overview
In this chapter we look at some special filters that come in handy for mul-
tirate applications.
We first look at hold interpolators which perform basic interpolation by
simply repeating input samples L times (L being the interpolation factor).
Next, we look at linear interpolators and show how they can be thought
of as two hold interpolators “put together”.
Both linear and hold interpolators are attractive because of their sim-
plicity. We see that they are very crude approximations to an ideal lowpass
filter used for interpolation. Usually, because of their simplicity, they are
used at the last stage of a multistage implementation, operating at high
sampling rates.
We then move on to CIC interpolators and show how these are just
generalizations of hold/linear interpolators. By using multiple (more than
two) sections, CIC interpolators can obtain better attenuation than hold or
linear interpolators. The nice thing about CIC interpolators is that they can
be implemented without using multiplication. This is an attractive feature
for certain hardware such as FPGAs and ASICs, because multipliers take
up quite a bit of area and are difficult to run at very high clock
rates.
CIC filters can also be used for decimation. Unlike the interpolation
case, in decimation, CIC filters are usually used at the first stage of a
multistage design. This is because at that stage the sampling-rate is the
highest, so avoiding multiplications yields the largest savings.
6.1 Hold interpolators
Figure 6.1: Original samples and held-interpolated samples.
Figure 6.2: Magnitude response of a hold interpolator with a factor L = 4. The ideal
interpolator is shown for comparison.
Hm = mfilt.holdinterp(4);
Hm.Numerator
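In the time domain, a hold interpolator simply repeats each input sample L times (its numerator is a vector of L ones). The book's `Hm` object is MATLAB; a minimal Python sketch of the same behavior:

```python
def hold_interpolate(x, L):
    """Hold interpolator: repeat each input sample L times."""
    return [sample for sample in x for _ in range(L)]

assert hold_interpolate([1, 2, 3], 4) == [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]
```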
Figure 6.3: Hold interpolator compared with multistage interpolator for L = 8. For
reference, the ideal interpolator is shown.
For these reasons, hold interpolators are usually not used in isolation,
but rather are used as part of a cascade of interpolators (typically at the last
stage, which operates at the fastest rate and therefore benefits the most from
the fact that no multiplications are required). When used in such cascade
configurations, most of the high frequency content has been removed by
previous stages. Moreover, the distortion of the signal in the band of in-
terest is much less severe since the band of interest will occupy a much
smaller portion of the passband of the hold interpolator.
Example 50 Let us compare the effect of using a hold interpolator for a full in-
terpolation by a factor L = 8 vs. using a hold interpolator for a factor L3 = 2 in
conjunction with two halfband filters, each one interpolating by two.
Figure 6.3 shows a comparison of a possible multistage design including two
halfband filters followed by a hold interpolator vs. a single-stage interpolate-by-8
hold interpolator design. For reference, the ideal interpolate-by-8 filter is also
shown. While the multistage interpolator does allow some high-frequency content
through, it is nowhere near as bad as the hold interpolator acting alone. Also, assuming
the band of interest extends to 0.1π, the multistage interpolation introduces a
maximum distortion of about 0.15 dB while the hold interpolator introduces a
distortion of almost 2 dB.
Figure 6.4: Magnitude response of each of the three stages of the multistage design.
Figure 6.5: Comparison of interpolated data using a hold interpolator and a multistage
interpolator with L = 8.
One thing to keep in mind is that, since interpolation is usually used right
before D/A conversion, the remnant high frequency content can be removed by an
analog (anti-image) post-filter. This will ensure a smooth analog waveform. As is
apparent from Figure 6.3, because of the multistage interpolation, the remaining
high frequency content will be far away from the band of interest (the passband of
the filter), making it easy for a low-order analog lowpass post-filter (with a wide
transition band) to remove said high frequencies.
Figure 6.4 shows the magnitude response of each of the three stages of the
multistage design. Note that because of the prior interpolation by 4 (provided by
the first two stages) the band of interest is only a small fraction of the passband of
the hold interpolator. For this reason, the passband distortion is minimal.
Figure 6.5 shows the result of filtering the same data with the hold interpola-
tor compared to using the multistage interpolator. While it is obvious that some
high frequency content remains in either case, it is also obvious that overall the
multistage-interpolated data is much smoother and therefore has much less high-
frequency content.
Figure 6.6: Original samples and linear-interpolated samples.
For every two existing samples, the interpolated values are
computed by forming a weighted average of the existing samples. The
weights are determined by the proximity of the interpolated values to each
of the existing samples. In this case, the interpolated sample closest to an
existing sample is four times closer to that existing sample than it is to the
existing sample on its other side. Therefore, when we form the weighted
average in order to compute the interpolated sample, we weight one of the
existing samples four times more than the other. This results in coefficients
equal to 0.2 and 0.8 (they always add to 1) for that polyphase branch. The
coefficients for other polyphase branches are similarly computed based
on the ratio of the distance between the new interpolated sample and the
existing surrounding two samples.
This means that for the case L = 5, the 5 polyphase branches can triv-
ially be computed as:
L = 5;
Hl = mfilt.linearinterp(L);
p = polyphase(Hl)
Notice that the last polyphase branch is trivial (its coefficients are 1 and 0).
This allows the existing samples to be retained unchanged within the
interpolated samples. That is, linear interpolators (and hold interpolators)
are special cases of Nyquist filters.
As usual, the overall interpolation filter is a lowpass filter whose coef-
ficients can be formed from the polyphase branches.
Hl.Numerator
ans =
Columns 1 through 6
0.2000 0.4000 0.6000 0.8000 1.0000 0.8000
Columns 7 through 9
0.6000 0.4000 0.2000
Hm = mfilt.holdinterp(5);
Hm.Numerator
ans =
1 1 1 1 1
And of course
1/5*conv(Hm.Numerator,Hm.Numerator)
ans =
Columns 1 through 6
0.2000 0.4000 0.6000 0.8000 1.0000 0.8000
Columns 7 through 9
0.6000 0.4000 0.2000
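The identity just demonstrated in MATLAB (the linear interpolator's numerator is the scaled convolution of two hold-interpolator numerators) can be checked in Python/NumPy as well:

```python
import numpy as np

L = 5
hold = np.ones(L)                        # hold interpolator numerator
linear = np.convolve(hold, hold) / L     # scaled convolution of two holds
expected = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 0.8, 0.6, 0.4, 0.2])
assert np.allclose(linear, expected)     # the linear interpolator's triangle
```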
Figure 6.7: Magnitude responses (dB) of the hold interpolator and the linear interpolator.
z^{-1} H(z) = z^{-1} + z^{-2} + · · · + z^{-L}    (6.2)

Subtracting (6.2) from H(z) term by term gives

H(z)(1 − z^{-1}) = 1 − z^{-L}

so that

H(z) = (1 − z^{-L}) / (1 − z^{-1})
If we think of this as two filters in cascade, H(z) = H1(z)H2(z), with
H1(z) = 1 − z^{-L} (the “comb”, due to its magnitude response) and H2(z) =
1/(1 − z^{-1}) (the integrator).
∗ Post-equalize in the case of decimation.
Figure 6.10: Actual implementation of a CIC interpolator obtained by use of Noble iden-
tities.
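The closed form for H(z) is just the finite geometric series, which can be checked numerically at arbitrary points on the unit circle (a Python sketch):

```python
import numpy as np

L = 8
rng = np.random.default_rng(0)
z = np.exp(1j * rng.uniform(0.1, np.pi, 5))   # points on the unit circle, z != 1
direct = sum(z ** -k for k in range(L))       # 1 + z^-1 + ... + z^-(L-1)
closed = (1 - z ** -L) / (1 - z ** -1)        # comb cascaded with integrator
assert np.allclose(direct, closed)
```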
Example 51 As is often the case, an example is the best way to understand this.
Suppose we want to interpolate by a large factor, say 64, in order to make our life
easy when designing an analog post-filter for D/A conversion. The following is a
possible set of specifications for our interpolator:
f = fdesign.interpolator(64,'Nyquist',64,'TW,Ast',0.0078125,80);
These specifications state that the cutoff frequency for the interpolator is π/64 and
that the passband-edge frequency is π/64 − TW/2 = 0.0117π. The minimum
stopband attenuation for the entire stopband region is 80 dB.
First, let’s design this filter using conventional Nyquist filters and let us con-
strain the design to be done in 3 stages:
Hc = design(f,'multistage','Nstages',3);
cost(Hc)
f2 = fdesign.interpolator(8,'CIC',1,'Fp,Ast',0.0117,80);
Hcic = design(f2); % Results in 4 sections
The following commands can be used to replace the 3rd stage with a CIC and
compare the two multistage designs:
Hc2 = copy(Hc);
Hc2.stage(3) = Hcic;
fvtool(Hc,Hc2)
Figure 6.13: Comparison of a conventional 3-stage interpolation filter and one which
uses a CIC interpolator in its last stage.
Figure 6.14: Magnitude response of each stage of a 3-stage interpolation filter using a
CIC interpolator in its last stage.
Hh = mfilt.holdinterp(3);
fvtool(Hh,'MagnitudeDisplay','Magnitude')
Hl = mfilt.linearinterp(3);
fvtool(Hl,'MagnitudeDisplay','Magnitude')
Therefore the convolution of two hold interpolators has a gain of L^2 = 9.
Since CIC filters are obtained by (unscaled) convolutions of hold in-
terpolators, their gain is given by L^K. If we account for the fact that we
expect any interpolator to have a gain of L, the extra gain introduced by a
CIC interpolator is L^(K−1).∗ For example, for L = 3 and K = 4, we can easily
see that the passband gain is 81:
Hcic = mfilt.cicinterp(3,1,4);
fvtool(Hcic,'MagnitudeDisplay','Magnitude')
The extra gain, that is L^(K−1), can be computed simply by using the gain
command,
gain(Hcic)
ans =
27
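These gain relations are pure arithmetic and easy to verify (Python sketch):

```python
L, K = 3, 4
passband_gain = L ** K          # gain of K unscaled hold-interpolator sections
extra_gain = L ** (K - 1)       # gain beyond the expected factor of L
assert passband_gain == 81
assert extra_gain == 27         # matches the gain(Hcic) output above
```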
Figure 6.15: Final band of interest of signal decimated with a CIC filter.
the final band of interest will occur at f_p. The actual maximum amount of
aliasing at f_p is due to the adjacent replica. For a CIC filter, the larger the
number of sections K, the smaller the gain for the replica at that frequency,
i.e. in order to limit the amount of aliasing, we would like to have a large
number of sections. However, the more sections we have, the larger the
droop of the filter in the band of interest that we need to compensate for.
In order to design a CIC filter, we reverse the previous statement. Namely,
given an amount of aliasing that can be tolerated, determine the minimum
number of sections K that are required. The full design parameters are
listed next.
Example 52 Given these parameters, a CIC decimation filter with the necessary
number of sections K such that the amount of aliasing tolerated at f = f_p is not
exceeded can be designed as follows:
M = 8; % Decimation factor
D = 1; % Differential delay
Fp = 2e6; % 2 MHz
Ast = 80; % 80 dB
Fs = 100e6; % 100 MHz
Hf = fdesign.decimator(M,'CIC',D,'Fp,Ast',Fp,Ast,Fs);
Hcic = design(Hf);
The resulting design that meets the required attenuation consists of 6 sections.
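The section count can also be estimated by hand from the CIC magnitude response. The following Python sketch makes the simplifying assumption that the worst-case aliasing at f_p comes only from the first replica at f_s/M − f_p and neglects the droop at f_p itself; it is a back-of-the-envelope estimate, not necessarily the exact criterion used by the toolbox:

```python
import numpy as np

M, D = 8, 1                      # decimation factor, differential delay
Fp, Fs, Ast = 2e6, 100e6, 80.0   # passband edge, sampling rate, attenuation
f_alias = Fs / M - Fp            # first replica that folds back onto Fp

def cic_section_gain_db(f):
    """Normalized per-section CIC magnitude (dB) at frequency f."""
    x = np.pi * f / Fs
    return 20 * np.log10(abs(np.sin(M * D * x) / (M * D * np.sin(x))))

# Smallest K whose attenuation at the alias frequency meets Ast
K = 1
while -K * cic_section_gain_db(f_alias) < Ast:
    K += 1
assert K == 6   # matches the 6 sections found by the design above
```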
Figure 6.16: Overall two-stage design consisting of CIC decimator and compensator.
In the case of decimation, the larger the ratio of the final passband of
interest f p to the original sampling rate f s , the smaller the amount of droop
since only a very small portion of the passband of the CIC filter is involved.
In some cases, we can do without the compensation altogether, since the
small droop may not warrant equalization.
Similarly, in the case of interpolation, the larger the overall interpolation
factor, the smaller the droop will be in the baseband spectrum. Again,
we may choose not to equalize if the overall multistage interpolation
factor is large enough.
Figure 6.17: Magnitude response of each of the two stages in the design.
We can obtain the resulting overall decimator by cascading the two filters:
Hcas = cascade(Hcic,Hd);
fvtool(Hcas) % Show overall response
Figure 6.16 shows the resulting filter after cascading the CIC decimator and
its compensator. Figure 6.17 shows how the CIC decimator attenuates the spectral
replicas from the compensator. Figure 6.18 shows the passband of the overall two-
stage decimator. Notice how the droop in the passband has been equalized. As
with CIC interpolators, CIC decimators have a large passband gain (for the same
reason). The gain can be found from gain(Hcic) and is once again given by M^K.
In Appendix C, we will compare a multistage decimator design includ-
ing a CIC decimator and a CIC compensator with various other single- and
multistage designs.
Figure 6.18: Equalized passband of two-stage design showing the effect of the CIC com-
pensator on the droop.
Figure 6.19: Two-tap filter that can be used for fractional delay by linear interpolation.
y[n] = (1 − α) x [n − 1] + αx [n]
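The two-tap fractional-delay equation above can be sketched directly in Python (zero initial condition assumed for the hypothetical helper):

```python
def frac_delay_linear(x, alpha):
    """Two-tap fractional delay: y[n] = (1 - alpha)*x[n-1] + alpha*x[n].
    The total delay is 1 - alpha samples; zero initial condition assumed."""
    y, prev = [], 0.0
    for xn in x:
        y.append((1 - alpha) * prev + alpha * xn)
        prev = xn
    return y

# A ramp delayed by 0.25 samples (alpha = 0.75) stays a ramp after the transient
assert frac_delay_linear([0, 1, 2, 3], 0.75) == [0.0, 0.75, 1.75, 2.75]
```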
For cubic interpolation, we use two samples to the left and two samples to the right of where we wish to interpolate.∗ The
situation is depicted in Figure 6.23. Once we have four samples (such as
the first four input samples), we can interpolate, but we compute values
that lie between the two inner-most samples only (between the second
and third from the left). To interpolate, we fit a 3rd-order polynomial to
the four samples (the polynomial is unique) and we evaluate the polyno-
mial at the point we wish to interpolate. Then we advance one sample, the
leftmost sample is discarded and the right-most sample comes into play.
Once again, we fit the unique (new) 3rd-order polynomial to these four
samples. Once we have the polynomial, we interpolate by computing the
value of the polynomial between the two inner-most samples. After this,
we would advance another sample and so on.
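The fit-and-evaluate step described above can be sketched with the Lagrange basis for nodes at n = −1, 0, 1, 2 (a hypothetical helper, not the Farrow implementation itself; the cubic through four samples is unique, so the formula reproduces any cubic exactly):

```python
def lagrange_cubic(samples, d):
    """Evaluate the unique cubic through samples at n = -1, 0, 1, 2
    at the point 0 <= d <= 1 (between the two inner samples)."""
    xm1, x0, x1, x2 = samples
    return (-d * (d - 1) * (d - 2) / 6 * xm1
            + (d + 1) * (d - 1) * (d - 2) / 2 * x0
            - (d + 1) * d * (d - 2) / 2 * x1
            + (d + 1) * d * (d - 1) / 6 * x2)

# Exact for any cubic polynomial, e.g. p(t) = t**3
p = lambda t: t ** 3
assert abs(lagrange_cubic([p(-1), p(0), p(1), p(2)], 0.3) - p(0.3)) < 1e-12
```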
As in the linear case, for higher-order polynomials, the Farrow struc-
ture makes use of Horner’s rule so that all coefficients in the filter are con-
stant while the fractional delay is an input that is tunable at run-time.
∗ As always, we must allow for sufficient delay in practice in order to make things causal.
Figure 6.24: Magnitude response of linear and cubic fractional delays designed via the
Lagrange interpolation formula.
Example 54 Let us compare linear and cubic fractional delays using the La-
grange interpolation formula:
Figure 6.25: Group delay of linear and cubic fractional delays designed via the Lagrange
interpolation formula.
Simulink models for the Farrow filters resulting from these designs can
be obtained as usual by using the realizemdl() command (e.g. realizemdl(Hlin)
or realizemdl(Hcub) in the Example above).
Before the input sample changes, one more output sample will be com-
puted. β_m will take the value 0 and the output will simply be

y[m + 1] = x[n]

Subsequently, the input sample will change, β_m will once again be set
to 0.5, and so forth.
In summary, when increasing the sampling-rate by a factor of two, β_m
will cycle between the values {0.5, 0} twice as fast as the input, producing
an output each time it changes.
In the general case, it is simply a matter of determining which values β
must take. The formula is simply

β_m = (m f_s / f_s′) mod 1

where f_s is the input sampling rate and f_s′ is the output sampling rate.
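The formula is easy to check with exact rational arithmetic (a Python sketch using `fractions`; the rates are scaled to integers):

```python
from fractions import Fraction

def beta(m, fs, fs_out):
    """beta_m = (m * f_s / f_s') mod 1, with integer-scaled rates."""
    return Fraction(m * fs, fs_out) % 1

fs, fs_out = 5, 3          # f_s' = (3/5) f_s, scaled to integers
cycle = [beta(m, fs, fs_out) for m in range(3)]
assert cycle == [Fraction(0), Fraction(2, 3), Fraction(1, 3)]
```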
Example 55 Let's say we want f_s′ = (3/5) f_s. Then β_m will cycle through the values
{0, 2/3, 1/3}. The following code designs such a multirate filter (for the first-
order case) and filters some random data:
Figure 6.27 shows partial plots of the input and output signals assuming f s =
1. Notice that the delay follows the values of β m that we have indicated.
Figure 6.28 shows partial plots for the cubic case rather than the linear case.
The improved smoothness of using a cubic polynomial is apparent from the figure.
Figure 6.27: Partial plots of the input and output signals with the linear interpolant.
Figure 6.28: Partial plots of the input and output signals with the cubic interpolant.
To interpolate by 4, the fractional delay will take on values of {3/4, 1/2, 1/4, 0}
in a cyclic manner. If we set the fractional delay of the filter to each of those
four values and in each case compute the equivalent transfer function:
Hf = fdesign.fracdelay(0.75)
H = design(Hf,'lagrange');
[b0,a0] = tf(H)
H.FracDelay=0.5;
[b1,a1] = tf(H)
H.FracDelay=0.25;
[b2,a2] = tf(H)
H.FracDelay=0;
[b3,a3] = tf(H)
bint = intfilt(L,N,'lagrange');
Figure 6.29: Magnitude response of a maximally flat filter and the ideal interpolation
filter.
Example 56 Suppose we have a signal sampled at 680 kHz, but the band of inter-
est extends to 17 kHz only. We may have oversampled the signal as part of analog-
to-digital conversion for the purpose of simplifying the anti-aliasing analog filter.
We wish to reduce the bandwidth and the rate by decimating. The ideal decima-
tion factor would be M = 17. However, since 17 is a prime number, we can only
design a single-stage decimator such as:
TW = 6e3; % 6 kHz
Ast = 80; % 80 dB
M = 17; % Decimation factor
Fs = 680e3; % 680 kHz
Hf = fdesign.decimator(M,'Nyquist',M,'TW,Ast',TW,Ast,Fs);
Hd = design(Hf,'kaiserwin');
cost(Hd)
Hf = fdesign.decimator(16,'Nyquist',16,'TW,Ast',TW/2,Ast,Fs);
Hmult = design(Hf,'multistage')
cost(Hmult)
f = fdesign.polysrc(16,17);
H = design(f,'lagrange');
We can cascade this filter after the multistage decimator to achieve the goal we
want:
Hcas = cascade(Hmult,H);
Chapter 7
Filter Implementation
There are several factors that influence the selection of the filter structure
used to implement an FIR filter. If the filter length is very large, it may be
preferable to use a frequency-domain implementation based on the FFT.
If it is to be implemented in the time domain, the target hardware is an
important factor.
From a design perspective, there is an important distinction between
minimum-phase and linear phase filters. Minimum-phase FIR filters do
not have any symmetry in their coefficients while linear phase FIR filters
have either symmetric or antisymmetric coefficients. Depending on the
target hardware, it may be possible to implement a linear-phase FIR filter
using fewer multipliers than the minimum-phase filter by taking advantage
of the symmetry, even if the filter length of the linear-phase design is
larger. In other words, the implementation advantages of linear-phase
filters may offset the gains of a minimum-phase design, making it preferable
to use a linear-phase filter even when linearity of phase is not critical
for the application at hand. Of course, there are other reasons that may
make minimum-phase filters compelling. For instance, as we have seen,
minimum-phase filters introduce a minimal transient delay. This may
be important enough to stick with a minimum-phase design.
Hf = fdesign.lowpass('N,Fp,Fst',3,.3,.8);
Heq = design(Hf,'equiripple','FilterStructure','dffir');
The filter order is very low for illustration purposes. The filter struc-
ture that is used to implement the filter is specified by the string 'dffir'
(which is the default) and corresponds to the so-called direct-form struc-
ture (also called a tapped delay line). To visualize the structure we can
create a Simulink block for the resulting filter using the realizemdl com-
mand,
realizemdl(Heq);
Hsym = design(Hf,'equiripple','FilterStructure','dfsymfir');
realizemdl(Hsym);
cost(Hsym)
Figure 7.2: A 4-tap symmetric FIR filter implemented using the symmetric direct-form
filter structure.
For the symmetric structure, the number of multipliers required is half∗ of
the number required for the direct-form structure. Notice that even though
there are only two multipliers, there are still three delays (the same as for
the direct-form structure), since the number of delays corresponds to the
order of the filter.
Htran = design(Hf,'equiripple','FilterStructure','dffirt');
realizemdl(Htran)
Example 57 Let’s compute the SNR when quantizing the impulse response of an
FIR filter using 16 bits to represent the coefficients:
∗ See Appendix D.
Hf = fdesign.lowpass(0.4,0.5,0.5,80);
Hd = design(Hf,'equiripple');
himp = Hd.Numerator; % Non-quantized impulse response
Hd.Arithmetic = 'fixed'; % Uses 16-bit coefficients by default
himpq = Hd.Numerator; % Quantized impulse response
SNR = 10*log10(var(himp)/var(himp-himpq))
SNR =
84.3067
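The same computation is easy to reproduce outside MATLAB. Here is a Python sketch (helper names are mine, and the impulse response is a random stand-in) that quantizes a toy set of coefficients to a 16-bit, fraction-length-15 grid and measures the SNR in the same way:

```python
# Sketch (not the book's MATLAB code): quantize a toy impulse response to
# 16 bits and measure the SNR, mirroring 10*log10(var(h)/var(h - hq)).
import math
import random

def quantize(x, wordlength=16, fraclength=15):
    """Round x to the nearest multiple of 2^-fraclength, clamped to the
    two's-complement range of the given wordlength."""
    step = 2.0 ** (-fraclength)
    lo = -2.0 ** (wordlength - 1 - fraclength)   # -1.0 for 16/15
    hi = -lo - step                              # largest representable value
    return min(max(round(x / step) * step, lo), hi)

def var(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

random.seed(0)
h = [random.uniform(-0.5, 0.5) for _ in range(59)]  # stand-in impulse response
hq = [quantize(c) for c in h]
err = [a - b for a, b in zip(h, hq)]

snr_db = 10 * math.log10(var(h) / var(err))
print(round(snr_db, 1))  # roughly 90 dB for this setup
```

The exact figure depends on the coefficient values, but for 16-bit coefficients the result lands in the same neighborhood as the MATLAB example, consistent with the rule of thumb discussed next.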
When we quantize the impulse response of an FIR filter, we may use
the additive noise model to get an idea of how the frequency response is
affected. If the impulse response is given by h[n], the quantized impulse
response is
hq [n] = h[n] + e[n]
The additive noise, e[n], will affect the frequency response of the quan-
tized impulse response by adding a noise-floor of intensity ε2 /12, distort-
ing both the passband and the stopband of the filter. In most cases, the
effect on the stopband is most critical, reducing the attainable stopband
attenuation of the filter.
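The ε²/12 noise-floor figure comes from modeling the rounding error as uniform on (−ε/2, ε/2], where ε is the quantization step. A quick Python sketch (all names mine) confirms the prediction empirically:

```python
# Sketch: rounding to a grid of step eps yields an error whose variance is
# close to the additive-noise-model prediction eps^2/12.
import random

random.seed(1)
eps = 2.0 ** (-15)                          # quantization step (fraction length 15)
x = [random.uniform(-1, 1) for _ in range(100000)]
e = [v - round(v / eps) * eps for v in x]   # quantization error signal

mean_sq = sum(v * v for v in e) / len(e)    # empirical error power
predicted = eps ** 2 / 12                   # additive-model prediction
print(mean_sq / predicted)                  # close to 1
```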
A good rule-of-thumb is that we can expect about 5 dB of SNR per bit
when quantizing an FIR filter, as long as we use the bits wisely. By this
we mean: scale the coefficients so that the impulse response is representable
with the bits we have without wasting any of them.
The range of values that can be represented with a given wordlength/fraction-
length combination falls between a negative and a positive power of two
(for instance the intervals [-0.125,0.125), [-0.5,0.5), [-1,1), [-8,8), etc.)∗. We
want to choose the smallest such interval that contains the largest (absolute)
value of the impulse response. If we choose an interval larger than the
minimum, we are simply wasting bits. If the largest value of the impulse
response is positive and exactly a power of two, then it is somewhat of a
judgement call whether or not we want to allow a small overflow error for
that one coefficient.
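The interval-selection rule can be written out explicitly. In the following Python sketch (a hypothetical helper, not a toolbox function), the two 0.25 cases mirror the judgement call just described:

```python
# Hypothetical helper: fraction length for the smallest power-of-two
# interval [-2^p, 2^p) covering max|h|, where p = wordlength - 1 - fraclength.
import math

def best_fraclength(max_abs, wordlength=16, allow_edge_overflow=False):
    p = math.floor(math.log2(max_abs)) + 1      # smallest p with 2**p > max_abs
    if allow_edge_overflow and max_abs == 2.0 ** (p - 1):
        p -= 1                                  # accept overflow for the one edge value
    return wordlength - 1 - p

print(best_fraclength(0.434))                           # 16 -> range [-0.5, 0.5)
print(best_fraclength(0.25))                            # 16 -> no overflow allowed
print(best_fraclength(0.25, allow_edge_overflow=True))  # 17 -> range [-0.25, 0.25)
```

The 0.434 case matches the lowpass example later in this chapter, and the 0.25 cases match the Nyquist example that follows.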
Example 58 Consider the following equiripple Nyquist design:
f = fdesign.nyquist(4,'TW,Ast',0.2,80);
h = design(f,'equiripple');
∗ Note the open interval on the right. That’s because we cannot represent exactly that
value. How close we get will depend on the number of bits available.
The filter has a minimum attenuation of 80 dB. Its largest coefficient is 0.25.
Using the 5 dB/bit rule, we need at least 16 bits in order to provide the 80
dB of attenuation. Since the largest coefficient is 0.25, we can choose a fraction
length of 17 to scale the bits so that all values in the interval [-0.25,0.25) are
representable. In this example we choose to live with a small overflow in the
quantization of the largest coefficient (we quantize the value to 0.25 − 2^−17).
Since the value is a power of two, depending on the hardware, another option
would be not to implement this coefficient as a multiplier at all (instead simply
performing a bit shift). This is a nice property of all Nyquist filters: the largest
coefficient is always a power of two.
In order to set a 16-bit wordlength and a 17-bit fraction length, we perform
the following steps:
Note that there is automatic scaling of the coefficient bits by default. It is designed
to avoid overflow in the quantization while minimizing the interval so that the
bits are used as well as possible. As we have said, strictly speaking the quantization
of the coefficient equal to 0.25 overflows with a fraction length of 17, so a fraction
length of 16 is used by default (which, along with the wordlength, means any value
in the interval [-0.5,0.5) is representable without overflow).
The magnitude response of the quantized filter is shown in Figure 7.4. ∗ The
16 bits are adequate in order to preserve quite faithfully the intended magnitude
response of the filter.
To emphasize the point regarding the need to use both the right num-
ber of bits and use them wisely, consider what would have happened in
the previous example if instead of a fraction length of 17, we used a frac-
tion length of 15. The magnitude response for this case is shown in Figure
7.5. Notice that the quantized filter no longer meets the required 80 dB
stopband attenuation (the passband also has greater error than in the pre-
vious case).
∗ Note that this analysis along with most others (including the impulse response) only
takes into account the quantization of the coefficients. It does not take into account the
fact that there may be roundoff/overflow introduced by the multiplications/additions
when actually filtering data.
Figure 7.4: Magnitude response of equiripple Nyquist filter quantized with 16-bit
wordlength and a fraction length of 17.
Example 59 Let’s look again at what happens when we quantize the following
filter:
Figure 7.5: Magnitude response of equiripple Nyquist filter quantized with 16-bit
wordlength and a fraction length of 15.
Hf = fdesign.lowpass(0.4,0.5,0.5,80);
Hd = design(Hf,'equiripple');
Hd.Arithmetic = 'fixed';
B = fi(Hd.Numerator,true,16,16);
The variable B contains the fixed-point numbers corresponding to the filter’s coef-
ficients. Because the largest value of the coefficients is about 0.434, they have been
quantized with a fraction length of 16 (the default wordlength is in turn 16 bits,
which is in line with the requirement for this filter following the 5 dB/bit rule).
Now let’s look at the binary representation of say the first three coefficients:
B.bin
ans =
1111111111011100 1111111110100110 1111111111010100 ...
The repeated 1’s towards the left (the MSB) are all unnecessary in order to rep-
resent the value. Indeed, we could represent the first value simply as 1011100,
the second value as 10100110, and so forth. So in reality we need only 7 bits to
represent the first coefficient, 8 bits to represent the second coefficient, etc.
C1 = fi(-0.00054931640625,true,7,16)
C1.bin
The full 16 bits are used for the largest coefficients. For instance for the middle
one (the largest one):
Hd.Numerator(30)
ans =
0.4339599609375
C30 = fi(0.4339599609375,true,16,16);
C30.bin
ans =
0110111100011000
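The sign-extension argument is easy to check mechanically. Here is a Python sketch (helper name mine) that computes the minimal two's-complement wordlength for the first two binary strings listed above, read as integers on the 2^−16 grid:

```python
# Sketch: minimal two's-complement wordlength for an integer value.
def min_twos_complement_bits(n):
    """Smallest w such that -2**(w-1) <= n < 2**(w-1)."""
    w = 1
    while not (-2 ** (w - 1) <= n < 2 ** (w - 1)):
        w += 1
    return w

# The first two quantized coefficients from B.bin, read as 16-bit
# two's-complement integers (each coefficient equals the integer times 2^-16):
c1 = int('1111111111011100', 2) - 2 ** 16   # -36, i.e. -0.00054931640625
c2 = int('1111111110100110', 2) - 2 ** 16   # -90
print(min_twos_complement_bits(c1))  # 7  ('1011100')
print(min_twos_complement_bits(c2))  # 8  ('10100110')
```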
Consider filtering with an FIR filter implemented in direct form. The only quanti-
zation error is due to the coefficient quantization (the quantization of the
input signal is considered separately, as it is not affected by what happens
within the filter itself).
No overflows will occur in full-precision mode because we assume we
will grow enough bits when adding (see below) to accommodate increasing
signal levels.
Hf = fdesign.lowpass(0.4,0.5,0.5,80);
Hd = design(Hf,'equiripple');
Hd.Arithmetic = 'fixed';
The coefficients are represented with 16 bits. If the input is represented with 16
bits as well, full precision multiplications would mean that the values to be added
are represented with 32 bits. Since there are 58 additions to be performed, the
number of bits we need in order to perform the additions with full precision is
⌊log2 (58)⌋ + 1 = 6
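This bit-growth count is just the number of guard bits needed to accumulate that many sums without overflow; a one-liner in Python (function name mine):

```python
# Guard bits needed so that summing `num_additions` full-precision
# products cannot overflow the accumulator.
import math

def guard_bits(num_additions):
    return math.floor(math.log2(num_additions)) + 1

print(guard_bits(58))  # 6, matching the text
```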
Given that we should know the maximum values the input can take, we
want to determine the maximum values the output can take. To do so, we
use some simple math. We start by taking the absolute value on both sides
of the convolution equation,

|y[n]| = |∑_{m=0}^{N} h[m] x[n − m]|
Let’s say that the input covers the range [− R/2, R/2). This means that
in the worst case,

|y[n]| = (R/2) ∑_{m=0}^{N} |h[m]| = (R/2) ‖h[n]‖₁
Thus the 1-norm of the impulse response provides a gain factor for the
output relative to the input. For example, if R/2 = 1, the maximum value
the output can take is given by ‖h[n]‖₁. This tells us how many bits we
need to grow in order to perform full-precision arithmetic throughout the
filter.
Example 61 For the 59-tap filter Hd we have been using, we can compute the
1-norm of its impulse response as follows:
norm(Hd,'l1')
ans =
1.9904
This means that we need to ensure the additions can represent the range [−2, 2).
Since the multiplications fall in the interval [−0.5, 0.5), it is necessary to add
two bits in order to implement a full-precision filter. If we look at the values for
AccumWordLength and OutputWordLength when we write get(Hd), we see that
indeed these values have been set to 33 bits, i.e. two more bits than what is used
for the multiplications.
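The two extra bits in this example follow from the power-of-two intervals involved. A Python sketch (helper and argument names mine; products lie in [−0.5, 0.5) for the 16-bit formats used here):

```python
# Extra integer bits the accumulator needs: grow from the product range
# [-2^product_range_pow, 2^product_range_pow) to the smallest power-of-two
# interval covering the 1-norm bound on the output.
import math

def extra_integer_bits(l1_norm, product_range_pow=-1):
    p = math.ceil(math.log2(l1_norm))   # output needs [-2^p, 2^p)
    return p - product_range_pow

print(extra_integer_bits(1.9904))  # 2: grow from [-0.5, 0.5) to [-2, 2)
```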
We have seen how we determine the number of bits required for full-
precision implementations. As a matter of fact, this value is still somewhat
conservative because it assumes the worst possible case for the input. If
we were to actually take into account typical values we think the input
can assume, we may even be able to reduce the required number of bits
further.
The only difference has to do with the wordlength necessary for the
state registers. In the direct-form case, the states needed to have the same
wordlength as the input signal in order to avoid roundoff. In the trans-
posed structure, the states must have the same wordlength as the wordlength
used for addition. This is typically more than twice the wordlength of the
input, so that the transposed structure clearly has higher memory require-
ments for a given filter.
(2^(-14))^2/12
ans =
3.1044e-10
We can also find this value by integrating the power-spectral density of
the output signal due to the noise signal e[n] by using (D.4). Note that the
transfer function between the noise input, e[n], and the output is simply
He (z) = 1. The integral of the PSD gives the average power or variance:
P = noisepsd(Hd);
avgpower(P)
ans =
3.0592e-10
This value is close to the theoretical value we expect from the additive
noise model as computed above.∗ The PSD is plotted in Figure 7.8. Note
that the PSD is approximately flat as it should be for white noise (the plot
can be obtained by calling the noisepsd command with no outputs).
∗ The PSD here includes a factor of 1/2π, therefore the intensity of the PSD is given by
the variance divided by 2π. When we integrate in order to compute the average power,
we should keep in mind that the variance has already been scaled by 2π.
• The best we can hope to compute, yd [n]. This is the result of using the
quantized coefficients, but without allowing any further round-off
noise within the filter or at its output. All multiplications/additions
are performed with full-precision.
• What we actually compute, y[n]. This is the result we get with all final
fixed-point settings in our filter.
Figure 7.8: Power spectral density of the quantization noise at the output of an FIR filter
(computed and theoretical PSD).
Clearly, what we actually compute can at best be the same as the best
we can hope to compute. In order to make the best we can hope to com-
pute closer to what we ideally would compute, we would have to use
more bits to represent the filter coefficients (and throughout the filter).
Hf = fdesign.lowpass(0.4,0.5,0.5,80);
Hd = design(Hf,'equiripple');
Hd.Arithmetic = 'fixed';
As we have said, the quantized filter uses 16 bits to represent the coefficients by
default. A plot of the magnitude response of Hd shows that 16 bits are not quite
enough in this case to get the full 80 dB of attenuation throughout the stopband.
Nevertheless, if we choose to use 16 bits for the coefficients, the best we can hope
to achieve can be computed for some test data as follows:
rand('state',0);
x = fi(2*(rand(1000,1)-.5),true,16,15);
Figure 7.9: Magnitude response estimates for full precision and for 16-, 14-, and 12-bit
outputs.
Example 64 Let us continue with the same design we have been using the last
several examples. If we start with full precision, the magnitude response estimate
is almost indistinguishable from the magnitude response:
Figure 7.10: Stopband details of magnitude response estimates for various output
wordlengths.
Hf = fdesign.lowpass(0.4,0.5,0.5,80);
Hd = design(Hf,'equiripple');
Hd.Arithmetic = 'fixed'; % 16-bit coefficients
H = freqz(Hd);
He = freqrespest(Hd,1,'NFFT',8192);
norm(H-He)
ans =
0.0023
Now, we will compute the magnitude response estimate always using the same
16-bit coefficients, but in each case discarding a different number of bits at the
output. Notice that we now use a relatively large number of trials, 50, in order
to get a reliable average from the estimate (the computation takes a while). We’ll
start by discarding 17 bits,
Hd.FilterInternals = 'specifyPrecision';
Hd.OutputWordLength = 16; % 33-16 = 17
Hd.OutputFracLength = Hd.OutputFracLength-17;
He16 = freqrespest(Hd,50,'NFFT',8192);
Hd.OutputWordLength = 14;
Hd.OutputFracLength = 12;
He14 = freqrespest(Hd,50,'NFFT',8192);
Hd.OutputWordLength = 12;
Hd.OutputFracLength = 10;
He12 = freqrespest(Hd,50,'NFFT',8192);
The plot of the various magnitude response estimates is shown in Figure 7.9. Note
that the quantization at the output basically results in reduced stopband attenuation.
The stopband is shown in greater detail in Figure 7.10.
Figure 8.1: A Chebyshev type I filter implemented in second-order sections and with the
transfer function.
The resulting filter order is very high (67). By forming the transfer function
using the tf command, we have completely distorted the response of the filter (see
Figure 8.1). Moreover, the filter has become unstable! The transfer-function
polynomials are completely wrong, as evidenced by the pole/zero plot (Figure 8.2).
Compare with the pole/zero plot of the second-order sections to see what it should
look like (Figure 8.3).
Figure 8.2: Pole/zero plot of a Chebyshev type I filter implemented with the transfer
function.
Figure 8.3: Pole/zero plot of a Chebyshev type I filter implemented with second-order
sections.
Allpass-based implementations have the advantage that they require fewer
multipliers than corresponding second-order sections implementations.
The idea is to implement the filter H(z) as the scaled sum of two allpass
filters, H(z) = (1/2)[A0(z) + A1(z)].
Example 66 To illustrate, consider the elliptic design of Example 26. When us-
ing the direct-form II second-order sections implementation we found that the
number of multipliers required was 20, the number of adders was also 20 and the
number of delays was 10. If we use cascaded low-order allpass sections,
He2 = design(Hf,'ellip','FilterStructure','cascadeallpass');
we can find through the cost function that the number of multipliers reduces to
12. The number of adders increases to 23, and the number of delays is 15.
Example 67 For illustration purposes, we can easily obtain the transfer function∗
of the allpass filters as follows:
Hf = fdesign.lowpass('N,F3dB,Ap,Ast',5,.3,1,60);
He = design(Hf,'ellip','FilterStructure','cascadeallpass');
[b1,a1] = tf(He.stage(1).stage(1)); % A0(z)
[b2,a2] = tf(He.stage(1).stage(2)); % A1(z)
Note that He.stage(1) contains the term A0 (z) + A1 (z) while He.stage(2)
is simply the 1/2 factor. If we look at the numerators b1 and b2, we can see that
they are reversed versions of a1 and a2 respectively. This is an indication of their
allpass characteristic.
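That reversed-coefficient structure is exactly what makes a transfer function allpass: with real coefficients, a numerator that is the flipped denominator gives |A(e^jω)| = 1 at every frequency. A Python sketch with an arbitrary stable denominator of my own choosing:

```python
# Sketch: a filter whose numerator is the reversed denominator is allpass --
# its magnitude response is exactly 1 at every frequency.
import cmath

a = [1.0, -0.6, 0.25]     # example denominator (stable, my choice)
b = list(reversed(a))     # reversed coefficients as numerator

def freq_response(num, den, w):
    z = cmath.exp(1j * w)
    n = sum(c * z ** (-k) for k, c in enumerate(num))
    d = sum(c * z ** (-k) for k, c in enumerate(den))
    return n / d

for w in (0.1, 1.0, 2.5):
    print(abs(freq_response(b, a, w)))  # 1.0 (up to roundoff) at each frequency
```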
Hf = fdesign.lowpass(0.45,0.5,0.5,80);
He = design(Hf,'ellip'); % Already in SOS form
Let’s construct a direct-form II filter using the transfer function equivalent to the
SOS we have:
[b,a] = tf(He);
Htf = dfilt.df2(b,a);
Now let’s quantize the coefficients using 16 bits and compare the resulting mag-
nitude responses.
He.Arithmetic = 'fixed';
Htf.Arithmetic = 'fixed';
The magnitude responses are shown in Figure 8.5. Note that in this case, forming
the double-precision floating-point transfer function does not significantly alter
the magnitude response (try fvtool(b,a)). The deviation that we observe for the
transfer function in Figure 8.5 has to do with the quantization.
It is worth noting that the magnitude responses shown take into ac-
count only the quantization of coefficients. Any other round-off error in-
troduced by the fixed-point settings is not reflected in the magnitude re-
sponse analysis shown in fvtool.
−10
−20
SOS filter
−30 TF filter
Magnitude (dB)
−40
−50
−60
−70
−80
−90
−100
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9
Normalized Frequency (×π rad/sample)
Figure 8.5: Comparison of magnitude response of the same filter quantized to 16-bit
coefficients. In one case, we use second-order sections (SOS) while in the other case we
use the transfer function (TF).
• yr [n], what we ideally would want to compute. This is the result of us-
ing the non-quantized original filter coefficients and performing all
multiplications/additions with infinite precision (or at least double-
precision floating-point arithmetic).
• yd [n], the best we can hope to compute. Since full-precision is not an op-
tion with IIR filters, in order to obtain a baseline, we can use the
double() command on the filter to obtain a filter that has quan-
tized coefficients, but performs all multiplications/additions using
double-precision floating-point arithmetic. ∗
∗ Note that the commands reffilter() and double() differ in that the former uses the orig-
inal floating-point coefficients, while the latter uses the quantized coefficients.
• y[n], what we actually compute. This is the result we get with all final
fixed-point settings in our filter.
Example 69 Let us repeat the previous design with a slight change. By default,
we scale and re-order the second-order sections in order to help obtain good
fixed-point filtering results from the start. However, for now, let’s remove the
scaling/re-ordering to show that a good magnitude response is necessary but not
sufficient for satisfactory results.
Hf = fdesign.lowpass(0.45,0.5,0.5,80);
He = design(Hf,'ellip','SOSScaleNorm',''); % Don't scale
He.Arithmetic = 'fixed';
We’ll compute the ideal output, yr [n], and compare to the best we can hope
to compute, yd [n]. The difference between the two is due solely to coefficient
quantization.
rand('state',0);
x = fi(2*(rand(1000,1)-.5),true,16,15);
Hr = reffilter(He);
yr = filter(Hr,x); % Ideal output
Hd = double(He);
yd = filter(Hd,x); % This is the best we can hope for
var(yr-yd)
ans =
1.5734e-08
Now, let’s enable min/max/overflow logging and filter the same data with the
fixed-point filter:
fipref('LoggingMode','on')
ye = filter(He,x);
qreport(He)
If we look at the results from qreport, we can see that the states have overflowed
37 out of 10000 times. Because of the overflows, it is not worth comparing ye to
yd.
Example 70 Let us design the elliptic filter again, but using ℓ1 -norm scaling this
time. For now, we will disable re-ordering of the second-order sections:
s = fdopts.sosscaling;
s.sosReorder = 'none';
Hl1 = design(Hf,'ellip','SOSScaleNorm','l1',...
'SOSScaleOpts',s);
Hl1.Arithmetic = 'fixed';
Now let’s filter with this scaled structure,
yl1 = filter(Hl1,x);
qreport(Hl1)
Notice from the report that we have no overflows. Therefore we can compare the
output to our baseline in order to evaluate performance,
var(yd-double(yl1))
ans =
1.8339e-04
Example 71 Let’s design the filter and scale using L∞ -norm scaling. For now,
we still do not re-order the sections.
HLinf = design(Hf,'ellip','SOSScaleNorm','Linf',...
'SOSScaleOpts',s);
HLinf.Arithmetic = 'fixed';
Example 72 Let’s once again use L∞ -norm scaling, but this time use automatic
re-ordering of the sections.
s = fdopts.sosscaling;
s.sosReorder = 'auto';
HLinf = design(Hf,'ellip','SOSScaleNorm','Linf',...
'SOSScaleOpts',s);
HLinf.Arithmetic = 'fixed';
Once again, let’s filter the same data, and look at the report.
yLinf = filter(HLinf,x);
qreport(HLinf)
Since we did not overflow, it is meaningful to compare the output to our baseline
to evaluate performance.
var(yd-double(yLinf))
ans =
1.4809e-05
As we can see, the result is more than an order of magnitude better than what we
got with ℓ1 -norm scaling.
A still less stringent scaling is L2 -norm scaling.∗ However, the chances
of overflowing when using that type of scaling are even larger.
It is worth noting that in some applications, it is preferable to allow
for the occasional overflow in order to increase the SNR overall. The as-
sumption is that an overflow once in a while is not critical for the overall
performance of the system.
∗ This norm can be shown to be the same as ℓ2 -norm. This is Parseval’s theorem.
8.2.2 Autoscaling
The scaling we have seen so far is data agnostic. As a result, overflow
sometimes occurs.
We can use the autoscale command in order to optimize the fixed-
point fractional length settings in a filter based on specific input data that
is given. Doing so, we can get even better results than if we use data-
agnostic scaling.
Example 73 Let’s try once again to design the filter without data-agnostic scal-
ing.
Hf = fdesign.lowpass(0.45,0.5,0.5,80);
He = design(Hf,'ellip','SOSScaleNorm',''); % Don't scale
He.Arithmetic = 'fixed';
We know that this filter will overflow with the input data we have. However, if
we use this input data to autoscale,
Hs = autoscale(He,x);
ys = filter(Hs,x);
qreport(Hs)
we can see from the report that now overflow has been avoided. We now evaluate
the output relative to our baseline,
var(yd-double(ys))
ans =
1.4144e-07
var(yd-double(ys2))
ans =
2.3167e-08
Figure 8.7: Passband details of magnitude response estimates for various scaling cases.
Roundoff can also affect the passband response of the filter. The passband
details are shown in Figure 8.7.
Appendices
Appendix A
f = fdesign.lowpass('Fp,Fst,Ap,Ast',0.3,0.4,0.5,75);
help fdesign/responses
f.Specification = 'N,Fc,Ap,Ast';
f.FilterOrder = 60;
...
setspecs(f,'N,Fc,Ap,Ast',60,0.3,0.5,75);
Summary of relevant filter design commands
set(f,'Specification')
designoptions(f,'equiripple')
f = fdesign.lowpass('Fp,Fst,Ap,Ast',1.3e3,1.4e3,0.5,75);
h = design(f,'equiripple','StopbandShape','1/f','StopbandDecay',3);
fvtool(h)
measure(h)
f = fdesign.halfband('TW,Ast',0.01,80);
h = design(f,'equiripple');
cost(h)
ans =
Number of Multipliers : 463 % Number of non-zero and non-one coefficients
Number of Adders : 462 % Total number of additions
Number of States : 922 % Total number of states
MultPerInputSample : 463 % Non-zero/non-one multiplications per input sample
AddPerInputSample : 462 % Additions per input sample
info(h)
f = fdesign.lowpass('N,Fp,Fst',40,0.4,0.5);
h1 = design(f,'Wpass',1,'Wstop',10,'FilterStructure','dffir'); % default
h2 = design(f,'Wpass',1,'Wstop',10,'FilterStructure','dfsymfir');
cost(h1)
cost(h2)
For IIR filters, the default structure is direct-form II. IIR filters are de-
signed by default as cascaded second-order sections (SOS). Direct-form I
SOS structures use more states, but may be advantageous for fixed-point
implementations,
f = fdesign.lowpass('N,F3dB,Ap,Ast',5,0.45,1,60);
h1 = design(f,'FilterStructure','df1sos');
h2 = design(f,'FilterStructure','df2sos');
cost(h1)
cost(h2)
f = fdesign.lowpass;
h1 = design(f,'ellip','FilterStructure','df1sos');
h2 = design(f,'ellip','FilterStructure','df2sos');
s = dspopts.sosview;
s.view='Cumulative'; % Set view to cumulative sections
fvtool(h1,'SOSViewSettings',s)
s.secondaryScaling = true; % Use secondary scaling point for df2sos
fvtool(h2,'SOSViewSettings',s)
This scaling helps avoid overflows when the filter is implemented with
fixed-point arithmetic, but it doesn’t totally eliminate the possibility of them.
To eliminate overflows, the more restrictive l1 -norm scaling can be used (at
the expense of reduced SNR due to under-utilization of the dynamic range):
f = fdesign.lowpass;
h1 = design(f,'ellip','FilterStructure','df1sos','SOSScaleNorm','l1');
s = dspopts.sosview;
s.view = 'Cumulative';
fvtool(h1,'SOSViewSettings',s)
with 16 bits and covering the range [−1, 1). The LSB is scaled by 2^(−15).
(Two’s-complement is assumed for all fixed-point quantities.)
Coefficient quantization affects the frequency (and time) response of
the filter. This is reflected in fvtool and measure,
f = fdesign.decimator(2,'Halfband','TW,Ast',0.01,80);
h = design(f,'equiripple');
fvtool(h)
measure(h)
% Compare to
h.Arithmetic = 'fixed';
fvtool(h)
measure(h)
For FIR filters, in order to change fixed-point settings for the output and/or
additions/multiplications within the filter, change the filter internals first∗ ,
h.FilterInternals = 'SpecifyPrecision'
For example to view the effect solely due to the use of a smaller number
of bits for the output of a filter, one could write something like,
There are a couple of commands that can come in handy for analysis
when converting filters from floating point to fixed point: double, and
reffilter,
f = fdesign.decimator(2,'Halfband','TW,Ast',0.01,80);
h = design(f,'equiripple');
h.Arithmetic = 'fixed';
hd = double(h); % Cast filter arithmetic to double. Use quantized coeffs
hr = reffilter(h); % Get reference double-precision floating-point filter
The difference between hd and hr lies in the filter coefficients. The refer-
ence filter, hr, has the original “ideal” double-precision coefficients, while
the cast-to-double filter, hd, has a double-precision copy of the quantized
coefficients.
These functions can be used to compare what we ideally would like to
compute, with the best we can hope to compute∗ , and with what we can
actually compute. Continuing with the previous two examples:
yr = filter(hr,xq); % What we would ideally like to compute
yd = filter(hd,xq); % Best we can hope to compute. Same as yfull
plot(double([yr-yd,yr-y16]))
realizemdl will build the filter from scratch using basic elements like
adders/multipliers.
The use of the block command will generally result in faster simula-
tion speeds and better code generation. However, realizemdl provides
support for more filter structures and will permit customization of the re-
sulting block diagram (say to reduce the number of bits of some of the
multipliers of a filter or to rewire the filter in some way).
filterbuilder
Note that to design multirate filters, you should first select the response
(lowpass, halfband, etc) and then set the Filter Type to decimator, etc (if
available).
Once designed (and possibly quantized), the filter can be saved to the
workspace as an object named with the variable name specified at the top.
Sampling, Downsampling,
Upsampling, and Analog
Reconstruction
This appendix provides a short review of both theoretical and practical is-
sues related to sampling of a signal and reconstruction of an analog signal
from a sampled version of it. In general, understanding sampling and ana-
log reconstruction is essential in order to understand multirate systems. In
fact, as we will see, sampling an analog signal is just the extreme case of
downsampling a digital signal thereby reducing its sampling rate. Simi-
larly, analog reconstruction is just the edge case of increasing the sampling
rate of a signal.
The material presented here is standard in many textbooks∗ , but there
are a couple of practical issues that are often overlooked and that we wish
to highlight.
First, it is well-known that in order to avoid aliasing when sampling
it is necessary to sample at least twice as often as the highest frequency
component present. What is somewhat less well-known is that in lieu of
ideal brickwall anti-aliasing and reconstructing lowpass filters, we should
always sample a bit higher than twice the maximum frequency component.
As a rule of thumb, anywhere between 10 and 20 percent higher. This is
done to accommodate the transition bands of the various filters involved.
Moreover, if we oversample by a factor of, say, two or four
∗ In particular the textbooks [3] and [11] are recommended in addition to the article [35].
to ease the burden of the analog anti-aliasing filter, we still leave an extra
10-20 percent of room even though at this point we would be sampling
anywhere between 4 and 8 times higher than the highest frequency com-
ponent.
Moreover, since we know that the information in the excess frequency
bands is of no interest for our application, we allow for aliasing to occur
in this band. This enables us to sample at a rate that, while larger than
twice the frequency of interest, does not need to be as large as twice the
stopband-edge of the anti-aliasing filter. Equivalently, this allows the
transition width of the anti-aliasing filter to be twice as large (resulting in a
simpler, lower-order filter) than it would be if we did not permit aliasing
in the excess frequency bands.
As an example consider an audio signal. The frequency range of inter-
est is 0 to 20 kHz. Ideally we would bandlimit the analog signal exactly to
20 kHz prior to sampling with an anti-alias analog filter. We would then
sample at 40 kHz. Subsequently, in order to reconstruct the analog signal
from the samples, we would use a perfect lowpass analog filter with a cut-
off frequency of 20 kHz that removes all spectral replicas introduced by
sampling. In reality, we band-limit the signal with an analog anti-aliasing
filter whose passband extends to 20 kHz but whose stopband goes be-
yond 20 kHz by about 20 to 40 percent. Typical sampling rates for audio
are 44.1 kHz or 48 kHz, meaning that we have an excess bandwidth of
about 10 to 20 percent. The stopband-edge of the anti-aliasing filter will
be at about 24.1 kHz or 28 kHz respectively. These would also be the max-
imum frequency components of the audio signal that is bandlimited with
such an anti-alias filter. Since we don’t quite sample at twice the maximum
frequency component, aliasing occurs. However, aliasing will only occur
in the excess frequency band, 20 kHz to 22.05 kHz or 20 kHz to 24 kHz,
depending on the sampling frequency.
Other sampling rates used for audio are 88.2 kHz, 96 kHz, 176.4 kHz,
and 192 kHz. These sampling rates correspond to 2x or 4x oversampling,
but the important thing is that they still include an extra 10 to 20 percent
cushion in addition to the 2x or 4x oversampling.
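The numbers quoted in this audio example follow from simple arithmetic; a Python sketch (variable names mine):

```python
# Excess-bandwidth arithmetic for the audio example: passband of interest
# 0-20 kHz, sampled at the two common audio rates.
f_pass = 20_000.0                            # highest frequency of interest (Hz)
for fs in (44_100.0, 48_000.0):
    stopband_edge = fs - f_pass              # anti-alias stopband edge: 24.1 / 28 kHz
    alias_band = (f_pass, fs / 2)            # aliasing confined to this excess band
    excess_pct = 100 * (fs / 2 / f_pass - 1) # cushion over Nyquist: ~10% and 20%
    print(stopband_edge, alias_band, round(excess_pct, 1))
```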
The second issue we wish to highlight is that when choosing the stop-
band edge frequency for a decimation filter we can in many cases allow
for transition-band overlap (i.e. aliasing in the transition band). This issue
is related to the excess bandwidth resulting from the 10 to 20 percent over-
sampling that we have just mentioned, and it is the reason that extra 10 to
20 percent of excess bandwidth is so useful.
Xs(f) = (1/T) ∑_{k=−∞}^{∞} Xa(f − k fs)        (B.1)
If we form a sequence x[n] by recording the value of the analog signal
xa(t) at each multiple of T, we can determine the continuous-time Fourier
transform of xs(t) by computing the discrete-time Fourier transform of
x[n],
Xs(f) = X(f) = ∑_{n=−∞}^{∞} x[n] e^{−2πj f nT}.
So even though xs (t) and x [n] are conceptually different, they have the
same frequency response.
Eq. (B.1) is valid for any signal x a (t) whether band-limited or not [3].
The terms Xa ( f − k f s ) are shifted versions of the spectrum of x a (t), and
are called spectral replicas. They are centered around multiples of f s . This
concept is key to understanding multirate signal processing so we’d like
to emphasize it. Any sampled signal has spectral replicas centered around
multiples of its sampling frequency. The higher the sampling frequency,
the further apart the spectral replicas will lie.
In general, when we form the sum of all the spectral replicas as in Eq.
(B.1), the replicas will interfere with each other due to frequency overlap-
ping. This interference is called aliasing. However, if the signal is ban-
dlimited in such a way that only one replica has a non-zero value for any
given frequency, then the replicas will not overlap and they will not inter-
fere with each other when we add them as in Eq. (B.1).
Obviously, if X_a(f) ≠ 0 for all f, the replicas will invariably overlap.
However, if x_a(t) is bandlimited, i.e. its spectrum is equal to zero for
all frequencies above a certain threshold and below a certain (negative)
threshold, then by spreading the replicas far enough apart, i.e. by
choosing a large enough value for f_s, the replicas will not overlap,
thereby avoiding aliasing.
It is easy to see that if the analog signal’s spectrum satisfies
X_a(f) = 0, \quad |f| > \frac{f_s}{2} \qquad (B.2)
then the replicas will not overlap. Aliasing will thus be avoided if the
sampling frequency is at least equal to the two-sided bandwidth.
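This condition is easy to see numerically. The following NumPy sketch (an illustration, not code from the text) samples a 600 Hz tone at f_s = 1000 Hz, violating (B.2); the overlapping replica makes the tone appear at 1000 − 600 = 400 Hz.

```python
import numpy as np

# A 600 Hz tone sampled at fs = 1000 Hz violates (B.2): the replica
# centered at fs folds the tone down to fs - 600 = 400 Hz.
fs = 1000
n = np.arange(1000)
x = np.cos(2 * np.pi * 600 * n / fs)      # f > fs/2: replicas overlap

X = np.abs(np.fft.rfft(x))
f_apparent = np.argmax(X) * fs / len(n)   # bin spacing is fs/N = 1 Hz
# f_apparent is 400.0 Hz, not 600 Hz: the tone has aliased.
```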
Figure B.1: Spectrum of band-limited analog signal with maximum frequency given by
f s /2.
Figure B.2: Spectrum of sampled signal with maximum frequency given by f s /2.
The interval [− f s /2, f s /2] is called the Nyquist interval. Since f s deter-
mines the number of samples per second, when comparing two Nyquist
intervals of different size, it is useful not only to realize that one encom-
passes a larger range of frequencies than the other, but also that it involves
a larger number of samples. Therefore, in general, any digital processing
performed on a signal on a larger Nyquist interval requires more com-
putation than comparable signal processing performed on a signal that
occupies a smaller Nyquist interval. This means that we generally want
to avoid oversampling (i.e. avoid white-space in the spectrum). However,
oversampling can be useful in some particular cases.
The spectral replicas that appear in a sampled signal make the signal
periodic in the frequency domain. This periodicity means that any opera-
tion performed on a signal within the Nyquist interval will automatically
happen in all spectral replicas as well. We can take advantage of this fact
to efficiently process analog bandpass signals, as is explained next.
Now consider the sampling of the bandpass signal depicted in Figure B.4.
If we were to sample at f s = f max − f min = 2 f c2 the resulting signal would
have the spectrum shown in Figure B.5. The white-space between replicas
tells us that the signal has been oversampled. In fact, for this signal we
can sample at a much lower rate as depicted in Figure B.6 and still avoid
aliasing.
x_d[n] = x[Mn].

and

X_d(f) = \frac{1}{T'} \sum_{k=-\infty}^{\infty} X_a(f - k f_s')

X_d(f) = \frac{1}{M} \sum_{k=0}^{M-1} X(f - k f_s'), \qquad (B.3)

where f_s' = f_s/M is the reduced sampling rate and T' = MT.
Note the scaling factor 1/M. When we downsample a signal, the base-
band replica retains the same shape (hence the same information) as long
as no aliasing occurs. However, all replicas are scaled down by the down-
sampling factor.
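For finite-length signals the same aliasing sum can be checked with the DFT. The sketch below (illustrative Python, not from the text) verifies the discrete counterpart of Eq. (B.3): downsampling by M averages M shifted copies of the spectrum, including the 1/M scaling.

```python
import numpy as np

# Discrete check of the aliasing sum (B.3) using the DFT: for
# xd[n] = x[Mn] with x of length N (N a multiple of M),
#   Xd[r] = (1/M) * sum_{k=0}^{M-1} X[r + k*N/M].
rng = np.random.default_rng(0)
N, M = 120, 3
x = rng.standard_normal(N)
xd = x[::M]                          # downsample by M

X = np.fft.fft(x)
Xd = np.fft.fft(xd)                  # length N/M
L = N // M
aliased = sum(X[np.arange(L) + k * L] for k in range(M)) / M
assert np.allclose(Xd, aliased)      # replicas add up, scaled by 1/M
```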
Figure B.7 shows the spectrum of a critically sampled signal, while
Figure B.8 shows the spectrum of the same signal oversampled.
In those figures, the oversampling factor is 3, but the general principle
applies to any integer oversampling factor L. The question is how to in-
crease the sampling rate of a signal without converting the signal back to
the analog domain and resampling at a higher rate.
Increasing the sampling rate is just a digital lowpass filtering problem.
The filter must keep the baseband spectrum and all replicas of Figure B.8,
but remove all remaining replicas, i.e. for the case of 3x oversampling,
remove all replicas in Figure B.7 that are not in Figure B.8.
In order to do so, the lowpass digital filter needs to operate at the high
sampling rate so that it has replicas of its own centered around multiples
of the high sampling rate that will preserve the sampled replicas at such
frequencies.
It is well-known that the cutoff frequency for the lowpass filter should
be π/L. However, rather than memorizing that formula it is helpful to
realize that the cutoff frequency must be selected in such a way that no in-
formation is lost. This should be obvious since we are trying to reconstruct
samples from the analog signal based on the samples we have. Therefore,
the baseband spectrum must be left untouched. The lowpass filter must
remove the L − 1 adjacent replicas.
The (ideal) lowpass filter specifications used to increase the sampling
rate by a factor L are thus,
H(f) = \begin{cases} L & \text{if } |f| \le f_s/2, \\ 0 & \text{if } f_s/2 < |f| \le L f_s/2. \end{cases} \qquad (B.4)
where f s is the sampling rate before we increase the rate and L f s is the
sampling rate that results after increasing the rate. The gain of L in the
passband of the filter is necessary since as we saw the spectrum of the
low-rate signal has a gain that is L times smaller than that of the high-rate
signal.
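The images that the filter of (B.4) must remove are easy to exhibit numerically. In the sketch below (illustrative Python, not from the text), inserting L − 1 zeros between samples leaves the DFT unchanged except that it is tiled L times over the wider Nyquist interval; those extra copies are exactly what the gain-L lowpass filter must remove.

```python
import numpy as np

# Zero-insertion by L replicates the spectrum L times: the DFT of the
# zero-stuffed signal is the original DFT tiled L times.
rng = np.random.default_rng(1)
N, L = 64, 5
x = rng.standard_normal(N)

xu = np.zeros(N * L)
xu[::L] = x                      # insert L-1 zeros between samples

Xu = np.fft.fft(xu)
X = np.fft.fft(x)
assert np.allclose(Xu, np.tile(X, L))   # L-1 extra images appear
```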
The frequency response of the filter used to increase the sampling rate
by a factor L = 5 is shown in Figure B.10. The filter removes L − 1 spec-
tral replicas between the replicas centered around multiples of the high
sampling rate L f s .
Figure B.10: Frequency response of filter used to increase the sampling rate by a factor
L = 5.
X(f) = X_s(f) = \frac{1}{T} \sum_{k=-\infty}^{\infty} X_a(f - k f_s).
Figure B.11: Spectral characteristics of lowpass filter used for analog reconstruction.
B.5.1 Oversampling
As we pointed out in the previous section, the further we can push out the
stopband-edge frequency of the anti-alias analog filter, the easier such a
filter is to build. The idea can be extended to 2x or 4x oversampling
(or more). In such cases, the stopband-edge can be pushed out all the way
to two times or four times the maximum frequency of interest plus an extra
10 to 20 percent cushion.
There are two main consequences to doing this. First, since we over-
sample, we have more samples to deal with than we’d like. Usually we
avoid having more samples than necessary, but in this case we purposely
can be designed so that it does all the work done to get |Y ( f )| in addi-
tion to removing spectral replicas in one step. The resulting magnitude
spectrum of the upsampled signal is represented by |Yu(f)|. The remaining
spectral replicas are now widely separated, so an analog reconstruction
filter with a wide transition band can be used for the final digital-to-analog
conversion.
As mentioned before, the analog reconstruction filter is usually built
in two stages. The staircase reconstruction filter causes some distortion
in the band of interest. The more we increase the sampling rate prior to
reconstruction, the less the distortion. If necessary, the distortion can be
compensated for by pre-equalizing the signal digitally prior to sending it
through the staircase reconstructor. To do so, the filter used for increasing
the rate is designed in such a way that it boosts the signal over a part of
the band of interest that will be attenuated by the staircase reconstructor.
Finally, we mention that in practice noise-shaping sigma-delta quan-
tizers are usually used in conjunction with the techniques outlined here in
order to reduce the number of bits required to represent the signal at high
sampling rates. The noise-shaping allows for the use of a smaller number
of bits without decreasing the signal-to-noise ratio.
Fs = 100e6;            % Sampling frequency
TW = 0.4e6;            % Transition width
Fp = 3.125e6 - TW/2;   % Passband-edge frequency
Fst = 3.125e6 + TW/2;  % Stopband-edge frequency
Ap = 1;                % Peak passband ripple (dB)
Ast = 80;              % Minimum stopband attenuation (dB)
Hf = fdesign.lowpass(Fp,Fst,Ap,Ast,Fs);
220 Case Study: Comparison of Various Design Approaches
IFIR JO denotes that the joint optimization option is used with the IFIR
design. See Chapter 5 for more on this.
Next, we compare 3 designs that include decimation by a factor of 15.
The following command sets the specifications:
Hf = fdesign.decimator(15,'lowpass',Fp,Fst,Ap,Ast,Fs);
             NMult  NAdd  Nstates  MPIS    APIS     NStages  Decim. Factor
equiripple   642    641   630      42.8    42.7333  1        15
IFIR JO      224    222   216      17.333  16.933   2        15
multistage   169    167   161      14.6    14.3333  2        15
Hf = fdesign.decimator(16,'nyquist',16,TW,Ast,Fs);
For the CIC design, we break it down into 3 stages. The first stage, which
is the CIC decimator, will provide a decimation of 4. The next stage, the
CIC compensator, will provide decimation by 2. The final stage will be a
regular halfband, also providing decimation by 2.
The code is as follows:
% Halfband design
M1 = 4; M2 = 2;  % Decimation factors of the CIC and compensator stages
M3 = 2;
Hf3 = fdesign.decimator(M3,'halfband',...
    TW,Ast,Fs/(M1*M2));
Hhalf = design(Hf3,'equiripple');
Hcas = cascade(Hcic,Hcomp,Hhalf); % Overall design
Overview of Fixed-Point Arithmetic

Overview
∗ For a more complete coverage, Chapter 2 of [3] and Chapter 8 of [36] are recommended.
D.1 Some fixed-point basics 223
\underbrace{b_{B-1} \cdots b_2 b_1 b_0}_{B \text{ bits}}
The register has B bits (it has a wordlength of B); the value of the kth bit
(counting from the right) is b_k, which can of course be only 0 or 1. The
bits themselves are not enough to determine the value held in the
register. For this, we need to define the scaling of the bits. Specifically, a
two's complement fixed-point number stored in such a register has a value
given by
\text{value} = -b_{B-1} 2^{B-1-F} + \sum_{k=0}^{B-2} b_k 2^{k-F} \qquad (D.1)
Figure D.1:
For example, with 4 bits available and a fraction length of 8 (so that
the scaling of the LSB is 2^-8 and the scaling of the MSB is 2^-5), the most
negative value that can be represented is -2^-5 and the most positive value
that can be represented is 2^-6 + 2^-7 + 2^-8 (which is equal to 2^-5 - 2^-8).
In total, we can represent 2^4 = 16 numbers between -2^-5 and 2^-5 - 2^-8
with a resolution, or increment, of 2^-8 between each number and the next
consecutive one.
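Eq. (D.1) is easy to evaluate directly. The helper below is an illustrative Python sketch (not code from the text); it enumerates all 16 values of the 4-bit, F = 8 example and confirms the range and the 2^-8 increment.

```python
# Evaluate (D.1): two's complement value of a bit pattern with B bits
# and fraction length F. bits[0] is the LSB, bits[-1] the sign bit.
def fixed_point_value(bits, F):
    B = len(bits)
    value = -bits[-1] * 2 ** (B - 1 - F)   # sign bit carries negative weight
    value += sum(b * 2 ** (k - F) for k, b in enumerate(bits[:-1]))
    return value

B, F = 4, 8
values = sorted(fixed_point_value([(n >> k) & 1 for k in range(B)], F)
                for n in range(2 ** B))
# 16 values from -2^-5 up to 2^-5 - 2^-8, in steps of 2^-8.
```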
In general, the number of values that can be represented within those
limits is given by 2^B. The step size, or increment, between the values repre-
sented is given by ε = 2^-F. So for a given dynamic range to be covered, the
number of bits determines the granularity (precision) with which we can
represent numbers within that range. The more bits we use, the smaller
the quantization error when we round a value to the nearest representable
value. As the number of bits increases, the SNR increases because the
quantization noise decreases.
Specifically, consider Figure D.1. The range that is covered is denoted
by R. The quantization step is denoted ε = 2− F . The number of values that
can be represented is given by 2B . There are only two degrees of freedom
in these quantities which are related by
\frac{R}{2^B} = \varepsilon
If we round all values to their nearest representable value, the maxi-
mum error introduced by such rounding is given by ε/2. If R is constant,
x = randn(10000,1);     % Gaussian white noise
x = x/norm(x,inf);      % Normalize so that the peak value is +-1
xq = fi(x,true,16,15);  % Quantize to 16 bits, 15 of them fractional
Figure D.2: Histogram of a white-noise signal normalized so that the maximum value
is ±1.
SNR = 10*log10(var(x)/var(x-double(xq)))
SNR =
89.9794
The SNR no longer satisfies the 6-dB-per-bit rule. The reason is that
the signal very rarely takes values close to the edges of the available range
(i.e. close to -1 or 1). This can easily be verified by looking at the histogram:
[n,c]=hist(x,100);bar(c,n)
The histogram for the white-noise case is shown in Figure D.2. On
average the signal strength is not that large, while the quantization error
strength remains the same. In other words, the quantization error can be
equally large (on average) whether x takes a small or a large value. Since
x takes small values more often than large ones, the overall SNR is
reduced.
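The experiment can be reproduced without the fi object. The NumPy sketch below (illustrative; it emulates fi(x,true,16,15) by rounding to 15 fractional bits and ignores saturation at the peak) shows the same shortfall relative to the roughly 96 dB that the 6-dB-per-bit rule would predict for 16 bits.

```python
import numpy as np

# Emulate quantizing peak-normalized Gaussian noise to a 16-bit signed
# format with F = 15 (a sketch of the MATLAB fi() experiment above).
rng = np.random.default_rng(42)
x = rng.standard_normal(10000)
x = x / np.max(np.abs(x))              # normalize peak to +-1

F = 15
xq = np.round(x * 2 ** F) / 2 ** F     # round to the 2^-15 grid
snr = 10 * np.log10(np.var(x) / np.var(x - xq))
# snr comes out near 90 dB, well short of the ~96 dB the
# 6-dB-per-bit rule suggests, because the signal is rarely near +-1.
```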
Indeed, if we look at the var(x) for the uniform case, it is much larger
than var(x) in the Gaussian case, while var(x-double(xq)) is the same in
both cases.
While white-noise examples may seem contrived, it is worth keeping in
mind that many practical signals that are quantized, such as speech and
audio signals, can behave similarly to white noise.
In summary, depending on the nature of the signal being quantized,
the achievable SNR will vary. In some cases, depending on whether the
application at hand permits it, it may be preferable to allow for the occa-
sional overflow with the aim of increasing the SNR on average for a given
signal.
Hf = fdesign.highpass('Fst,Fp,Ast,Ap',0.5,0.6,80,.5);
Hd = design(Hf,'equiripple');
h = Hd.Numerator; % Get the impulse response
norm(h,inf) % Largest coefficient magnitude
The impulse response is shown in Figure D.3. The largest sample has a magnitude
of 0.434. Therefore, our fixed-point settings should be chosen to cover the range
[−0.5, 0.5), since this is the smallest power-of-two interval that encompasses the
values of the impulse response. This can be accomplished by setting F equal
to B.
Now let’s compute the SNR for various values of B and see what SNRs we
get. We will perform the following computation for different values of B:
∗ Typically, the middle sample dominates for linear-phase filters.
Figure D.3: Impulse response of highpass filter. The largest coefficient has a magnitude
of 0.434.
For example, suppose we want to add two 4-bit numbers that fall in the
range [−1, 1), say x1 = 0.875 and x2 = 0.75. With F = 3, x1 has the binary
representation 0111 and x2 has 0110. The sum of the two is 1.625. We
need 5 bits to represent it so that no roundoff error is introduced; the
binary representation of the result is 01101. Note that, in general, because
the LSB may equal 1, it is not possible to discard the LSB without
introducing roundoff error; hence the bit growth.
Many filter structures contain a series of additions, one after the other.
However, it is not necessary to increase the wordlength by one bit for ev-
ery addition in the filter in order to avoid roundoff (i.e., in order to have
full precision). If N is the number of additions, it is only necessary to in-
crease the wordlength by ∗
G = ⌊log2 ( N )⌋ + 1. (D.2)
∗ The operator ⌊·⌋ denotes the largest integer smaller than or equal to the value
operated on (i.e. the floor command in MATLAB).
† A similar argument can be made by using the largest negative value.
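Both the 4-bit addition example and formula (D.2) can be checked in integer arithmetic. The Python sketch below is an illustration (not code from the text); with F = 3, the values 0.875 and 0.75 are simply the integers 7 and 6.

```python
from math import floor, log2

# The 4-bit addition example: with F = 3, 0.875 -> 7 (0111) and
# 0.75 -> 6 (0110). Their sum, 13 (01101), needs 5 bits.
F = 3
x1, x2 = int(0.875 * 2 ** F), int(0.75 * 2 ** F)
s = x1 + x2
assert s / 2 ** F == 1.625
assert s.bit_length() + 1 == 5      # magnitude bits + sign bit

# (D.2): extra bits needed to accommodate N full-precision additions.
def bit_growth(N):
    return floor(log2(N)) + 1

assert bit_growth(3) == 2           # adding 4 numbers, as in Figure D.4
```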
Figure D.4: Tree-adder view of adding 4 B-bit numbers. Only B + 2 bits are needed for
a full precision result.
As is the case with additions, in Chapter 7 we will see that for FIR fil-
ters, we can reduce the bit growth required to avoid roundoff when mul-
tiplying by looking at the actual coefficients of the filter.
It is often necessary to re-quantize a quantized signal, i.e. to reduce the
number of bits used to represent it.
Examples where this is common include throwing out bits when mov-
ing data from an accumulator (all the additions in a direct-form FIR filter)
to the output of the filter because we need to preserve the input signal
wordlength at the output. A typical example is keeping only 16 bits for
the output from an accumulator that may have anywhere from 32 to 40
bits.
Another common case is in the feedback portion of an IIR filter since
otherwise the repeated additions would result in unbounded bit growth.
Of course the bits that we throw out when we re-quantize a signal are
the LSBs. This way the same range can be covered after re-quantization
and overflow is avoided. The precision is clearly reduced as we throw out
LSBs: the quantization step increases from 2^-F to 2^-(F-Q), where Q is the
number of bits that are thrown away.
The variance of the quantization noise that we have just derived doesn’t
quite apply for the case of re-quantizing a quantized signal [36]. However,
such variance becomes a very good approximation if at least 3 or 4 bits are
thrown out. Since we typically remove 16 bits or more, the ε²/12 value is
usually used without giving it a second thought.
The value of ε in this case corresponds of course to the quantization step
after re-quantizing.
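The ε²/12 approximation is easy to confirm numerically. The NumPy sketch below (illustrative, not from the text) quantizes a signal to a fine 2^-F grid, re-quantizes it by discarding Q LSBs, and compares the resulting error variance to ε²/12 for the new step ε = 2^-(F-Q).

```python
import numpy as np

# Re-quantization noise: dropping Q LSBs gives an error variance close
# to eps^2/12 once several bits are discarded.
rng = np.random.default_rng(0)
F, Q = 30, 8
x = np.round(rng.uniform(-1, 1, 200_000) * 2 ** F) / 2 ** F  # quantized signal
xr = np.round(x * 2 ** (F - Q)) / 2 ** (F - Q)               # drop Q LSBs

eps = 2.0 ** -(F - Q)            # quantization step after re-quantizing
ratio = np.var(x - xr) / (eps ** 2 / 12)
# ratio comes out very close to 1.
```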
scaled by 2π, i.e. σ_x²/2π. In this case, the PSD of the output will be

P_{yy}(e^{j\omega}) = \frac{\sigma_x^2}{2\pi} \left| H(e^{j\omega}) \right|^2. \qquad (D.3)
Therefore, in this case the PSD will take the spectral shape of the squared
magnitude-response of the filter.
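A quick consequence of (D.3) worth checking: integrating the output PSD over the Nyquist interval gives the output power, so for an FIR filter h driven by white noise, var(y) ≈ var(x) · Σ h[n]² by Parseval's relation. The NumPy sketch below (illustrative, with an arbitrary lowpass h) verifies this.

```python
import numpy as np

# Integrating (D.3) over [-pi, pi]: output power of a white-noise-driven
# FIR filter equals input variance times sum(h^2) (Parseval).
rng = np.random.default_rng(7)
h = np.array([0.25, 0.5, 0.25])          # simple (hypothetical) lowpass FIR
x = rng.standard_normal(500_000)
y = np.convolve(x, h, mode="valid")

predicted = np.var(x) * np.sum(h ** 2)
ratio = np.var(y) / predicted
# ratio comes out very close to 1 for a long white-noise input.
```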
[1] R. A. Losada, “Design finite impulse response digital filters. Part I,”
Microwaves & RF, vol. 43, pp. 66–80, January 2004.
[2] R. A. Losada, “Design finite impulse response digital filters. Part II,”
Microwaves & RF, vol. 43, pp. 70–84, February 2004.
[5] L. R. Rabiner and B. Gold, Theory and Application of Digital Signal Pro-
cessing. Englewood Cliffs, New Jersey: Prentice Hall, 1975.
[9] T. W. Parks and C. S. Burrus, Digital Filter Design. New York, New
York: Wiley-Interscience, 1987.
[11] fred harris, Multirate Signal Processing for Communication Systems. Up-
per Saddle River, New Jersey: Prentice Hall, 2004.
[12] R. A. Losada and V. Pellissier, “Designing IIR filters with a given 3-dB
point,” IEEE Signal Proc. Magazine; DSP Tips & Tricks column, vol. 22,
pp. 95–98, July 2005.
[18] L. Gazsi, “Explicit formulas for wave digital filters,” IEEE Trans. on
Circuits and Systems, vol. CAS-32, pp. 68–88, 1985.
[19] M. Lutovac, D. Tosic, and B. Evans, Filter Design for Signal Processing
Using MATLAB and Mathematica. Upper Saddle River, New Jersey:
Prentice Hall, 2001.
[23] N. J. Fliege, Multirate Digital Signal Processing. New York, New York:
John Wiley & Sons, 1994.