Doc11-B14-dspcourse-lec1
Stephen Roberts
Recommended texts

Lecture 1 - Introduction

1.1 Introduction

1.1.1 Analogue vs. Digital Signal Processing
1.2 Summary/Revision of basic definitions
1.2.1 Linearity
A linear system may be defined as one which obeys the Principle of Superposition. If x1(t) and x2(t) are inputs to a linear system which gives rise to outputs y1(t) and y2(t) respectively, then the combined input ax1(t) + bx2(t) will give rise to an output ay1(t) + by2(t), where a and b are arbitrary constants.
Notes
• Linear systems can be broken down into simpler sub-systems which can be
re-arranged in any order, i.e.
x −→ g1 −→ g2 ≡ x −→ g2 −→ g1 ≡ x −→ g1,2
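A minimal numerical sketch of the superposition property, assuming an example discrete system implemented by convolution with an assumed impulse response g (the signals and constants are illustrative only):

```python
import numpy as np

# Check superposition for an example discrete LTI system
# (convolution with an assumed impulse response g).
rng = np.random.default_rng(0)
g = np.array([0.5, 0.3, 0.2])                     # assumed impulse response
x1, x2 = rng.standard_normal(8), rng.standard_normal(8)
a, b = 2.0, -3.0                                  # arbitrary constants

lhs = np.convolve(a * x1 + b * x2, g)             # response to combined input
rhs = a * np.convolve(x1, g) + b * np.convolve(x2, g)
assert np.allclose(lhs, rhs)                      # superposition holds
```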
1.2.2 Time Invariance
A time-invariant system is one whose properties do not vary with time (i.e. the input signals are treated the same way regardless of their time of arrival); for example, with discrete systems, if an input sequence x(n) produces an output sequence y(n), then the input sequence x(n − n0) will produce the output sequence y(n − n0) for all n0.
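A short sketch of this definition, again assuming an example system realised by convolution (the impulse response and input are assumed examples):

```python
import numpy as np

# Time invariance: delaying the input by n0 samples delays the output by n0.
g = np.array([0.5, 0.3, 0.2])                     # assumed impulse response
x = np.random.default_rng(0).standard_normal(8)
n0 = 3
x_delayed = np.concatenate([np.zeros(n0), x])     # x(n - n0)

y = np.convolve(x, g)
y_delayed = np.convolve(x_delayed, g)
assert np.allclose(y_delayed[n0:], y)             # output is y(n - n0)
```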
1.2.3 Linear Time-Invariant (LTI) Systems
Most of the lecture course will focus on the design and analysis of systems which are both linear and time-invariant. The basic linear time-invariant operation is “filtering” in its widest sense.
1.2.4 Causality
In a causal (or realisable) system, the present output signal depends only upon
present and previous values of the input. (Although all practical engineering
systems are necessarily causal, there are several important systems which are
non-causal (non-realisable), e.g. the ideal digital differentiator.)
1.2.5 Stability
A stable system (over a finite interval T ) is one which produces a bounded output
in response to a bounded input (over T ).
Some of the common signal processing functions are amplification (or attenuation), mixing (the addition of two or more signal waveforms) or un-mixing¹, and filtering. Each of these can be represented by a linear time-invariant “block” with an input-output characteristic which can be defined by:
• The impulse response g(t) in the time domain.
• The transfer function in a frequency domain. We will see that the choice of frequency basis may be subtly different from time to time.
As we will see, there is (for the systems we examine in this course) an invertible mapping between the time and frequency domain representations.

¹This linear unmixing turns out to be one of the most interesting current topics in signal processing.
1.4 Time-Domain Analysis – convolution
Convolution allows the evaluation of the output signal from an LTI system, given its impulse response and input signal.
The input signal can be considered as being composed of a succession of impulse functions, each of which generates a weighted version of the impulse response at the output, as shown in Figure 1.1. The output at time t, y(t), is obtained simply by adding the effect of each separate impulse function – this gives rise to the convolution integral:
$$y(t) = \sum_{\tau}\left\{x(t-\tau)\,d\tau\right\} g(\tau) \;\xrightarrow{\,d\tau \to 0\,}\; \int_{0}^{\infty} x(t-\tau)\,g(\tau)\,d\tau$$
τ is a dummy variable which represents time measured “back into the past” from the instant t at which the output y(t) is to be calculated.
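A minimal numerical sketch of this limit, approximating the convolution integral by a discrete sum of width dτ (the unit-pulse input and exponential impulse response are assumed examples):

```python
import numpy as np

# Approximate y(t) = ∫ x(t - τ) g(τ) dτ by a Riemann sum of width dτ.
dtau = 0.01
t = np.arange(0, 10, dtau)
x = (t < 1.0).astype(float)              # assumed example input: unit pulse
g = np.exp(-t)                           # assumed example impulse response
y = np.convolve(x, g)[: len(t)] * dtau   # discrete sum × dτ ≈ the integral

# Compare with the analytic result for this input/response pair:
y_exact = np.where(t < 1, 1 - np.exp(-t), np.exp(-(t - 1)) - np.exp(-t))
assert np.allclose(y, y_exact, atol=0.02)
```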
Figure 1.1: Convolution of an input with an impulse response: the separate weighted impulse-response components (left) and their total, the output (right).
1.4.1 Notes
• Step response The step function is the time integral of an impulse. As integration (and differentiation) are linear operations, the order of application in an LTI system does not matter:

δ(t) −→ ∫ dt −→ g(t) −→ step response

δ(t) −→ g(t) −→ ∫ dt −→ step response
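A short numerical sketch of this note (the exponential impulse response is an assumed example): integrating the impulse response gives the step response.

```python
import numpy as np

# Step response as the running integral of the impulse response.
dt = 0.01
t = np.arange(0, 10, dt)
g = np.exp(-t)                          # assumed example impulse response
step_response = np.cumsum(g) * dt       # ∫ g(τ) dτ up to time t
# For g(t) = e^{-t} the exact step response is 1 - e^{-t}:
assert np.allclose(step_response, 1 - np.exp(-t), atol=dt)
```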
1.5 Frequency-Domain Analysis
LTI systems, by definition, may be represented (in the continuous case) by linear differential equations (in the discrete case by linear difference equations). Consider the application of the linear differential operator, D, to the function f(t) = e^{st}:

$$Df(t) = sf(t)$$

An equation of this form means that f(t) is an eigenfunction of D. Just like the eigen-analysis you know from matrix theory, this means that f(t) and any linear operation on f(t) may be represented using a set of functions of exponential form, and that these functions may be chosen to be orthogonal. This naturally gives rise to the use of the Laplace and Fourier representations.
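To make the eigenfunction property concrete, take D = d/dt, so that D e^{st} = s e^{st}; the same exponential is an eigenfunction of any LTI differential operator, with a polynomial eigenvalue:

$$\left(\sum_{k=0}^{n} a_k \frac{d^k}{dt^k}\right) e^{st} = \left(\sum_{k=0}^{n} a_k s^k\right) e^{st}$$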
• The Laplace transform:

X(s) −→ Transfer function G(s) −→ Y(s)

where

$$X(s) = \int_0^{\infty} x(t)\,e^{-st}\,dt \qquad \text{(Laplace transform of } x(t)\text{)}$$

$$Y(s) = G(s)X(s)$$

where G(s) can be expressed as a pole-zero representation of the form:

$$G(s) = \frac{A(s - z_1)\ldots(s - z_m)}{(s - p_1)(s - p_2)\ldots(s - p_n)}$$

(NB: the inverse transformation, i.e. obtaining y(t) from Y(s), is not a straightforward mathematical operation.)
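A minimal sketch using an assumed example system, G(s) = 1/(s² + 3s + 2), illustrating the pole representation and a numerical route around the hand inversion:

```python
from scipy import signal

# Assumed example: G(s) = 1 / (s^2 + 3s + 2), poles at s = -1 and s = -2
# (both in the left half of the s-plane, so the system is stable).
G = signal.TransferFunction([1.0], [1.0, 3.0, 2.0])
print(G.poles)                  # [-2. -1.]

# The time response can be recovered numerically rather than by hand inversion:
t, y = signal.step(G)           # step response y(t)
```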
• The Fourier transform:

X(jω) −→ Frequency response G(jω) −→ Y(jω)

where

$$X(j\omega) = \int_{-\infty}^{\infty} x(t)\,e^{-j\omega t}\,dt \qquad \text{(Fourier transform of } x(t)\text{)}$$

and

$$Y(j\omega) = G(j\omega)X(j\omega)$$

The output time function can be obtained by taking the inverse Fourier transform:

$$y(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} Y(j\omega)\,e^{j\omega t}\,d\omega$$
1.5.1 Relationship between time & frequency domains
Theorem
If g(t) is the impulse response of an LTI system, then G(jω), the Fourier transform of g(t), is the frequency response of the system.
Proof
Consider an input x(t) = A cos ωt to an LTI system. Let g(t) be the impulse
response, with a Fourier transform G(jω).
The output is given by convolving the input with the impulse response (noting that g(t) = 0 for t < 0 for a causal system):

$$y(t) = \int g(\tau)\,A\cos\omega(t-\tau)\,d\tau = \frac{A}{2}\left\{e^{j\omega t}\int g(\tau)e^{-j\omega\tau}\,d\tau + e^{-j\omega t}\int g(\tau)e^{j\omega\tau}\,d\tau\right\}$$

$$= \frac{A}{2}\left\{e^{j\omega t}G(j\omega) + e^{-j\omega t}G(-j\omega)\right\}$$
Let G(jω) = Ce^{jφ}, i.e. C = |G(jω)|, φ = arg{G(jω)}. Then

$$y(t) = \frac{AC}{2}\left\{e^{j(\omega t+\phi)} + e^{-j(\omega t+\phi)}\right\} = CA\cos(\omega t + \phi)$$
i.e. an input sinusoid has its amplitude scaled by |G(jω)| and its phase changed
by arg{G(jω)}, where G(jω) is the Fourier transform of the impulse response
g(t).
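A numerical sketch of this result with an assumed first-order example, G(s) = 1/(s + 1): once the transient has decayed, the output is the input sinusoid scaled by |G(jω)| and shifted by arg G(jω).

```python
import numpy as np
from scipy import signal

A, w = 2.0, 3.0
t = np.linspace(0, 20, 4001)
G = signal.TransferFunction([1.0], [1.0, 1.0])   # assumed example: 1/(s + 1)
_, y, _ = signal.lsim(G, U=A * np.cos(w * t), T=t)

Gjw = 1.0 / (1j * w + 1.0)                       # G evaluated at s = jω
y_ss = np.abs(Gjw) * A * np.cos(w * t + np.angle(Gjw))
assert np.allclose(y[-500:], y_ss[-500:], atol=1e-3)  # steady state matches
```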
Theorem
Convolution in the time domain is equivalent to multiplication in the frequency domain, i.e.

$$y(t) = g(t) * x(t) \;\equiv\; \mathcal{F}^{-1}\{Y(j\omega) = G(j\omega)X(j\omega)\}$$

and

$$y(t) = g(t) * x(t) \;\equiv\; \mathcal{L}^{-1}\{Y(s) = G(s)X(s)\}$$
Proof
Consider the general integral (Laplace) transform of a shifted function:

$$\mathcal{L}\{f(t-\tau)\} = \int_t f(t-\tau)\,e^{-st}\,dt = e^{-s\tau}\,\mathcal{L}\{f(t)\}$$

Now consider the Laplace transform of the convolution integral:

$$\mathcal{L}\{f(t)*g(t)\} = \int_t \int_\tau f(t-\tau)\,g(\tau)\,d\tau\; e^{-st}\,dt = \int_\tau g(\tau)\,e^{-s\tau}\,d\tau\;\mathcal{L}\{f(t)\} = \mathcal{L}\{g(t)\}\,\mathcal{L}\{f(t)\}$$
By allowing s → jω we prove the result for the Fourier transform as well.
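The discrete analogue of this theorem can be checked directly with FFTs (the random test signals are assumed examples):

```python
import numpy as np

rng = np.random.default_rng(1)
x, g = rng.standard_normal(64), rng.standard_normal(16)
n = len(x) + len(g) - 1                      # full linear-convolution length

y_time = np.convolve(x, g)                   # convolution in the time domain
y_freq = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(g, n), n)
assert np.allclose(y_time, y_freq)           # multiplication in frequency
```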
1.6 Filters
Figure 1.2: Standard filters: low-pass, band-pass, high-pass and notch magnitude responses |G(ω)| against ω.
1.7 Design of Analogue Filters
We will start with an analysis of analogue low-pass filters, since a low-pass filter
can be mathematically transformed into any other standard type.
Design of a filter may start from consideration of
• The desired frequency response.
• The desired phase response.
The majority of the time we will consider the first case. Consider some desired
response, in the general form of the (squared) magnitude of the transfer function,
i.e. |G(s)|². This response is given as

$$|G(s)|^2 = G(s)G^*(s)$$

where ∗ denotes complex conjugation. If G(s) represents a stable filter (its poles lie in the left half of the s-plane) then G∗(s) is unstable (its poles lie in the right half).
The design procedure then consists of:
• Considering the desired response |G(s)|² as a polynomial in even powers of s.
• Realising the filter with the stable part, G(s), and discarding the unstable part, G∗(s).
This means that, for any given filter response in the positive frequency domain, a mirror image exists in the negative frequency domain.
Figure 1.3: The ideal low-pass filter. Note the requirement of response in the negative frequency domain.

The impulse response of this ideal filter follows from the inverse Fourier transform:

$$g(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} G(j\omega)\,e^{j\omega t}\,d\omega = \frac{1}{2\pi}\int_{-\omega_c}^{\omega_c} 1\cdot e^{j\omega t}\,d\omega = \frac{1}{2\pi j t}\left(e^{j\omega_c t} - e^{-j\omega_c t}\right)$$

hence,

$$g(t) = \frac{\omega_c}{\pi}\left(\frac{\sin\omega_c t}{\omega_c t}\right)$$
Figure 1.4 shows the impulse response for the filter (this is also referred to as
the filter kernel ).
The output starts infinitely long before the impulse occurs – i.e. the filter is
not realisable in real time.
Figure 1.4: Impulse response (filter kernel) for the ILPF. The zero crossings occur at integer multiples of π/ωc.
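A short sketch of the kernel (the cut-off ωc is an assumed example value), confirming the non-zero response before t = 0:

```python
import numpy as np

wc = np.pi                                   # assumed example cut-off frequency
t = np.linspace(-5, 5, 1001)
# g(t) = (ωc/π)·sin(ωc t)/(ωc t); note np.sinc(x) = sin(πx)/(πx)
g = (wc / np.pi) * np.sinc(wc * t / np.pi)
assert np.abs(g[t < 0]).max() > 0            # output before the impulse: non-causal
```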
1.8 Practical Low-Pass Filters
Assume that the low-pass filter transfer function G(s) is a rational function in s.
The type of filter to be considered in the next few pages is the all-pole design, which means that G(s) will be of the form:

$$G(s) = \frac{1}{a_n s^n + a_{n-1} s^{n-1} + \ldots + a_1 s + a_0}$$

or

$$G(j\omega) = \frac{1}{a_n (j\omega)^n + a_{n-1} (j\omega)^{n-1} + \ldots + a_1 j\omega + a_0}$$
The magnitude-squared response is |G(jω)|² = G(jω)·G(−jω); its denominator is hence a polynomial in even powers of ω. The task of approximating the ideal magnitude-squared characteristic is therefore that of choosing a suitable denominator polynomial in ω², i.e. selecting the function H in the following expression:

$$|G(j\omega)|^2 = \frac{1}{1 + H\{(\omega/\omega_c)^2\}}$$

where ωc is the nominal cut-off frequency and H is a rational function of (ω/ωc)².
The function H is chosen such that 1 + H{(ω/ωc)²} remains close to unity for ω < ωc and rises rapidly for ω > ωc.
1.9 The Butterworth Filter
The Butterworth filter takes H{(ω/ωc)²} = (ω/ωc)^{2n}, giving the magnitude-squared response

$$|G(j\omega)|^2 = \frac{1}{1 + (\omega/\omega_c)^{2n}}$$

where n is the order of the filter.

Figure 1.5: Butterworth filter response on (a) linear and (b) log scales, for n = 1, 2, 6. On a log-log scale the response, for ω > ωc, falls off at approximately −20n dB/decade.
1.9.1 Butterworth filter – notes
2. For large n:
• in the region ω < ωc, |G(jω)| ≈ 1;
• in the region ω > ωc, the steepness of |G| is a direct function of n.

3. The response is known as maximally flat, because

$$\left.\frac{d^n G}{d\omega^n}\right|_{\omega=0} = 0 \qquad \text{for } n = 1, 2, \ldots, 2N - 1$$
Proof
$$G(s)G(-s) = \frac{1}{1 + \left(\frac{s}{j\omega_c}\right)^{2n}} \quad\text{or}\quad G(s)G(-s) = \frac{1}{1 + \left(\frac{-js}{\omega_c}\right)^{2n}}$$

The poles of G(s)G(−s) therefore lie where (−js/ωc)^{2n} = −1 = e^{j(2k+1)π}.
Thus

$$\frac{-js}{\omega_c} = e^{j(2k+1)\frac{\pi}{2n}}$$

Since j = e^{jπ/2}, we have the final result:

$$s = \omega_c \exp\left(j\left[\frac{\pi}{2} + (2k+1)\frac{\pi}{2n}\right]\right)$$

i.e. the poles have the same modulus ωc and equi-spaced arguments. For example, for a fifth-order Butterworth low-pass filter (LPF), n = 5:

$$\frac{\pi}{2n} = 18° \;\rightarrow\; \frac{\pi}{2} + (2k+1)\frac{\pi}{2n} = 90° + (18°, 54°, 90°, 126°, \text{etc}\ldots)$$
i.e. the poles are at:

$$\underbrace{108°,\ 144°,\ 180°,\ 216°,\ 252°}_{\text{in L.H. }s\text{-plane, therefore stable}} \qquad \underbrace{288°,\ 324°,\ 360°,\ 396°,\ 432°}_{\text{in R.H. }s\text{-plane, therefore unstable}}$$

We want to design a stable filter. Since each unstable pole is (−1)× a stable pole, we can let the stable ones be in G(s), and the unstable ones in G(−s). Therefore the poles of G(s) are ωc e^{j108°}, ωc e^{j144°}, ωc e^{j180°}, ωc e^{j216°}, ωc e^{j252°}, as shown in Figure 1.6.
Figure 1.6: Pole positions of the fifth-order Butterworth LPF in the s-plane.
$$G(s) = \prod_k \frac{1}{1 - s/p_k} = \frac{1}{\left(1 + \frac{s}{\omega_c}\right)\left\{1 + 2\cos 72°\,\frac{s}{\omega_c} + \left(\frac{s}{\omega_c}\right)^2\right\}\left\{1 + 2\cos 36°\,\frac{s}{\omega_c} + \left(\frac{s}{\omega_c}\right)^2\right\}}$$

$$G(s) = \frac{1}{1 + 3.2361\frac{s}{\omega_c} + 5.2361\left(\frac{s}{\omega_c}\right)^2 + 5.2361\left(\frac{s}{\omega_c}\right)^3 + 3.2361\left(\frac{s}{\omega_c}\right)^4 + \left(\frac{s}{\omega_c}\right)^5}$$
Note that the coefficients are “palindromic” (they read the same in reverse order) – this is true for all Butterworth filters. The poles always lie on the same radius, at π/n angular spacing, with “half-angles” at each end. If n is odd, one pole is real.
n    a1       a2       a3       a4       a5       a6       a7      a8
1  1.0000
2  1.4142   1.0000
3  2.0000   2.0000   1.0000
4  2.6131   3.4142   2.6131   1.0000
5  3.2361   5.2361   5.2361   3.2361   1.0000
6  3.8637   7.4641   9.1416   7.4641   3.8637   1.0000
7  4.4940  10.0978  14.5918  14.5918  10.0978   4.4940   1.0000
8  5.1258  13.1371  21.8462  25.6884  21.8462  13.1371   5.1258  1.0000

Butterworth LPF coefficients for n ≤ 8
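The table rows can be reproduced from the pole positions derived above; a minimal sketch for n = 5 (with ωc = 1):

```python
import numpy as np

# Rebuild the n = 5 Butterworth denominator from its stable poles.
angles = np.deg2rad([108, 144, 180, 216, 252])
poles = np.exp(1j * angles)              # unit-modulus poles, ωc = 1
coeffs = np.real(np.poly(poles))         # (s - p1)...(s - p5) multiplied out
print(np.round(coeffs, 4))               # [1. 3.2361 5.2361 5.2361 3.2361 1.]
```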
1.9.3 Design of Butterworth LPFs
The only design variable for Butterworth LPFs is the order of the filter, n. If a filter is to have an attenuation A at a frequency ωa:

$$|G|^2_{\omega_a} = \frac{1}{A^2} = \frac{1}{1 + (\omega_a/\omega_c)^{2n}}$$

$$\text{i.e.}\quad n = \frac{\log(A^2 - 1)}{2\log\frac{\omega_a}{\omega_c}}$$

$$\text{or, since usually } A \gg 1,\quad n \approx \frac{\log A}{\log\frac{\omega_a}{\omega_c}}$$
Example: find the minimum order n such that the attenuation is at least A = 100 at ωa = 2ωc.
Therefore

$$n \approx \frac{\log_{10} 100}{\log_{10} 2} = \frac{2}{0.301} = 6.64$$

Hence n = 7 meets the specification.
Check: Substitute s = j2 into the transfer function from the above table for
n=7
$$|G(j2)| = \frac{1}{|87.38 - 93.54j|}$$

which gives A = 128.
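This check is easy to reproduce numerically with the n = 7 coefficients from the table above (ωc = 1):

```python
import numpy as np

# Denominator of G(s) for n = 7, highest power of s first.
denom = [1.0, 4.4940, 10.0978, 14.5918, 14.5918, 10.0978, 4.4940, 1.0]
A = abs(np.polyval(denom, 2j))           # |denominator| at s = j2
print(round(A, 1))                       # ≈ 128.0, so |G(j2)| = 1/128 < 1/100
```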