
SIGNAL PROCESSING

B14 Option – 4 lectures

Stephen Roberts
Recommended texts

• Lynn. An Introduction to the Analysis and Processing of Signals. Macmillan.
• Oppenheim & Schafer. Digital Signal Processing. Prentice Hall.
• Orfanidis. Introduction to Signal Processing. Prentice Hall.
• Proakis & Manolakis. Digital Signal Processing: Principles, Algorithms and Applications.
• Matlab Signal Processing Toolbox manual.

Lecture 1 - Introduction

1.1 Introduction

Signal processing is the treatment of signals (information-bearing waveforms or data) so as to extract the wanted information, removing unwanted signals and noise. There are compelling applications as diverse as the analysis of seismic waveforms or the recording of moving rotor-blade pressures and heat-transfer rates.

In these lectures we will consider just the very basics, mainly using filtering as a case example for signal processing.

1.1.1 Analogue vs. Digital Signal Processing

Most of the signal processing techniques mentioned above could be used to process the original analogue (continuous-time) signals or their digital versions (the signals are sampled in order to convert them to sequences of numbers). For example, the earliest type of “voice coder” developed was the channel vocoder, which consists mainly of a bank of band-pass filters; these were originally analogue circuits, but the same processing is now routinely performed digitally. The trend is, therefore, towards digital signal processing systems; even the well-established radio receiver has come under threat. The other great advantage of digital signal processing lies in the ease with which non-linear processing may be performed. Almost all recent developments in modern signal processing are in the digital domain.

This lecture course will start by covering the basics of the design of simple (analogue) filters, as this is a pre-requisite to understanding much of the underlying theory of digital signal processing and filtering.

1.2 Summary/Revision of basic definitions

1.2.1 Linear Systems

A linear system may be defined as one which obeys the Principle of Superposition. If x1(t) and x2(t) are inputs to a linear system which give rise to outputs y1(t) and y2(t) respectively, then the combined input ax1(t) + bx2(t) will give rise to an output ay1(t) + by2(t), where a and b are arbitrary constants.

Notes

• If we represent an input signal by its support in a frequency domain, Fin (i.e. the set of frequencies present in the input), then no new frequency support will be required to model the output, i.e. Fout ⊆ Fin.

• Linear systems can be broken down into simpler sub-systems which can be re-arranged in any order, i.e.

x −→ g1 −→ g2 ≡ x −→ g2 −→ g1 ≡ x −→ g1,2
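The superposition property is easy to check numerically. The following is a minimal sketch in Python; the 5-point moving-average filter is an illustrative choice of LTI system, not one taken from the notes.

import numpy as np

def system(x):
    # 5-point moving average: a simple linear, time-invariant operation
    return np.convolve(x, np.ones(5) / 5, mode="full")

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(100), rng.standard_normal(100)
a, b = 2.0, -3.0

lhs = system(a * x1 + b * x2)          # response to the combined input
rhs = a * system(x1) + b * system(x2)  # combination of the two responses

assert np.allclose(lhs, rhs)           # superposition holds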

1.2.2 Time Invariance

A time-invariant system is one whose properties do not vary with time (i.e. the input signals are treated the same way regardless of their time of arrival); for example, with discrete systems, if an input sequence x(n) produces an output sequence y(n), then the input sequence x(n − n0) will produce the output sequence y(n − n0) for all n0.

1.2.3 Linear Time-Invariant (LTI) Systems

Most of the lecture course will focus on the design and analysis of systems which
are both linear and time-invariant. The basic linear time-invariant operation is
“filtering” in its widest sense.

1.2.4 Causality

In a causal (or realisable) system, the present output signal depends only upon
present and previous values of the input. (Although all practical engineering
systems are necessarily causal, there are several important systems which are
non-causal (non-realisable), e.g. the ideal digital differentiator.)

1.2.5 Stability

A stable system (over a finite interval T ) is one which produces a bounded output
in response to a bounded input (over T ).

1.3 Linear Processes

Some of the common signal processing functions are amplification (or attenuation), mixing (the addition of two or more signal waveforms) or un-mixing¹, and filtering. Each of these can be represented by a linear time-invariant “block” with an input–output characteristic which can be defined by:

• The impulse response g(t) in the time domain.
• The transfer function in a frequency domain. We will see that the choice of frequency basis may be subtly different from time to time.

As we will see, there is (for the systems we examine in this course) an invertible mapping between the time and frequency domain representations.

¹ This linear unmixing turns out to be one of the most interesting current topics in signal processing.

1.4 Time-Domain Analysis – convolution

Convolution allows the evaluation of the output signal from an LTI system, given its impulse response and input signal.

The input signal can be considered as being composed of a succession of impulse functions, each of which generates a weighted version of the impulse response at the output, as shown in Fig. 1.1. The output at time t, y(t), is obtained simply by adding the effect of each separate impulse function; this gives rise to the convolution integral:

$$y(t) = \sum_{\tau}\{x(t-\tau)\,d\tau\}\,g(\tau) \;\xrightarrow{\;d\tau\to 0\;}\; \int_0^{\infty} x(t-\tau)\,g(\tau)\,d\tau$$

τ is a dummy variable which represents time measured “back into the past” from the instant t at which the output y(t) is to be calculated.

Figure 1.1: Convolution as a summation over shifted impulse responses. (Panels labelled “components” and “total” show the weighted impulse-response components and their sum.)

1.4.1 Notes

• Convolution is commutative; thus y(t) is also given by

$$y(t) = \int_0^{\infty} x(\tau)\,g(t-\tau)\,d\tau$$

• For discrete systems convolution is a summation operation (a numerical sketch follows these notes):

$$y[n] = \sum_{k=0}^{\infty} x[k]\,g[n-k] = \sum_{k=0}^{\infty} x[n-k]\,g[k]$$

• Relationship between convolution and correlation. The general form of the convolution integral

$$f(t) = \int_{-\infty}^{\infty} x(\tau)\,g(t-\tau)\,d\tau$$

is very similar² to that of the cross-correlation function relating two variables x(t) and y(t):

$$R_{xy}(\tau) = \int_{-\infty}^{\infty} x(t)\,y(t-\tau)\,dt$$

Convolution is hence an integral over lags at a fixed time, whereas correlation is the integral over time for a fixed lag.

• Step response. The step function is the time integral of an impulse. As integration (and differentiation) are linear operations, the order of application in an LTI system does not matter:

δ(t) −→ ∫ dt −→ g(t) −→ step response
δ(t) −→ g(t) −→ ∫ dt −→ step response

² Note that the lower limit of the integral can be −∞ or 0. Why?
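The discrete convolution sum above can be checked directly; the sketch below (NumPy, with short illustrative signals of my own choosing) evaluates the sum by hand, compares it with np.convolve, and shows the kernel-reversal relationship between correlation and convolution.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])     # input sequence x[k]
g = np.array([0.5, 0.3, 0.2])          # impulse response g[k]

# y[n] = sum_k x[k] g[n-k], evaluated term by term
y_direct = np.array([sum(x[k] * g[n - k]
                         for k in range(len(x)) if 0 <= n - k < len(g))
                     for n in range(len(x) + len(g) - 1)])
assert np.allclose(y_direct, np.convolve(x, g))

# correlation integrates over time at a fixed lag; for real signals it is
# convolution with the kernel reversed:
assert np.allclose(np.correlate(x, g, mode="full"),
                   np.convolve(x, g[::-1]))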

1.5 Frequency-Domain Analysis

LTI systems, by definition, may be represented (in the continuous case) by linear differential equations, and in the discrete case by linear difference equations. Consider the application of the linear differential operator D = d/dt to the function f(t) = e^{st}:

$$D f(t) = s f(t)$$

An equation of this form means that f(t) is an eigenfunction of D. Just like the eigen-analysis you know from matrix theory, this means that f(t) and any linear operation on f(t) may be represented using a set of functions of exponential form, and that these functions may be chosen to be orthogonal. This naturally gives rise to the use of the Laplace and Fourier representations.

• The Laplace transform:

X(s) −→ [Transfer function G(s)] −→ Y(s)

where

$$X(s) = \int_0^{\infty} x(t)\,e^{-st}\,dt \qquad \text{(the Laplace transform of } x(t)\text{)}$$

and

$$Y(s) = G(s)\,X(s)$$

G(s) can be expressed in a pole–zero representation of the form:

$$G(s) = \frac{A\,(s - z_1)\cdots(s - z_m)}{(s - p_1)(s - p_2)\cdots(s - p_n)}$$

(NB: the inverse transformation, i.e. obtaining y(t) from Y(s), is not a straightforward mathematical operation.)

• The Fourier transform:

X(jω) −→ [Frequency response G(jω)] −→ Y(jω)

where

$$X(j\omega) = \int_{-\infty}^{\infty} x(t)\,e^{-j\omega t}\,dt \qquad \text{(the Fourier transform of } x(t)\text{)}$$

and

$$Y(j\omega) = G(j\omega)\,X(j\omega)$$

The output time function can be obtained by taking the inverse Fourier transform:

$$y(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} Y(j\omega)\,e^{j\omega t}\,d\omega$$

1.5.1 Relationship between time & frequency domains

Theorem
If g(t) is the impulse response of an LTI system, then G(jω), the Fourier transform of g(t), is the frequency response of the system.

Proof
Consider an input x(t) = A cos ωt to an LTI system. Let g(t) be the impulse
response, with a Fourier transform G(jω).

Using convolution, the output y(t) is given by:

$$y(t) = \int_0^{\infty} A\cos\omega(t-\tau)\,g(\tau)\,d\tau$$
$$= \frac{A}{2}\int_0^{\infty} e^{j\omega(t-\tau)}\,g(\tau)\,d\tau + \frac{A}{2}\int_0^{\infty} e^{-j\omega(t-\tau)}\,g(\tau)\,d\tau$$
$$= \frac{A}{2}\,e^{j\omega t}\int_{-\infty}^{\infty} g(\tau)\,e^{-j\omega\tau}\,d\tau + \frac{A}{2}\,e^{-j\omega t}\int_{-\infty}^{\infty} g(\tau)\,e^{j\omega\tau}\,d\tau$$

(the lower limit of integration can be changed from 0 to −∞ since g(τ) = 0 for τ < 0)

$$= \frac{A}{2}\left\{e^{j\omega t}\,G(j\omega) + e^{-j\omega t}\,G(-j\omega)\right\}$$

Let G(jω) = Ce^{jφ}, i.e. C = |G(jω)| and φ = arg{G(jω)}. Then

$$y(t) = \frac{AC}{2}\left\{e^{j(\omega t+\phi)} + e^{-j(\omega t+\phi)}\right\} = CA\cos(\omega t + \phi)$$

i.e. an input sinusoid has its amplitude scaled by |G(jω)| and its phase changed by arg{G(jω)}, where G(jω) is the Fourier transform of the impulse response g(t).
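This result can be verified numerically. The sketch below assumes a first-order system G(s) = 1/(s + 1) as an illustrative example (my choice, not from the notes) and checks that, once the transient has decayed, the simulated response to A cos(ω0 t) matches CA cos(ω0 t + φ).

import numpy as np
from scipy import signal

sys = signal.lti([1.0], [1.0, 1.0])         # G(s) = 1/(s + 1)
w0, A = 2.0, 3.0                            # input frequency and amplitude
t = np.linspace(0.0, 30.0, 3001)
_, y, _ = signal.lsim(sys, A * np.cos(w0 * t), t)

_, G = sys.freqresp(w=[w0])                 # G(jw0)
C, phi = np.abs(G[0]), np.angle(G[0])
y_pred = C * A * np.cos(w0 * t + phi)       # predicted steady-state output

# once the transient (~ e^{-t}) has died away, the two agree:
assert np.allclose(y[-500:], y_pred[-500:], atol=1e-2)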

Theorem
Convolution in the time domain is equivalent to multiplication in the frequency domain, i.e.

$$y(t) = g(t) * x(t) \;\equiv\; \mathcal{F}^{-1}\{Y(j\omega) = G(j\omega)X(j\omega)\}$$

and

$$y(t) = g(t) * x(t) \;\equiv\; \mathcal{L}^{-1}\{Y(s) = G(s)X(s)\}$$

Proof
Consider the general integral (Laplace) transform of a shifted function:

$$\mathcal{L}\{f(t-\tau)\} = \int_t f(t-\tau)\,e^{-st}\,dt = e^{-s\tau}\,\mathcal{L}\{f(t)\}$$

Now consider the Laplace transform of the convolution integral:

$$\mathcal{L}\{f(t) * g(t)\} = \int_t \int_\tau f(t-\tau)\,g(\tau)\,d\tau\, e^{-st}\,dt = \int_\tau g(\tau)\,e^{-s\tau}\,d\tau\;\mathcal{L}\{f(t)\} = \mathcal{L}\{g(t)\}\,\mathcal{L}\{f(t)\}$$

By allowing s → jω we prove the result for the Fourier transform as well.
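A quick numerical confirmation of the theorem, in its discrete form: convolving two sequences in the time domain agrees with multiplying their zero-padded DFTs. The test signals are arbitrary.

import numpy as np

rng = np.random.default_rng(1)
x, g = rng.standard_normal(64), rng.standard_normal(16)

y_time = np.convolve(x, g)                  # direct convolution
N = len(x) + len(g) - 1                     # zero-pad to avoid wrap-around
y_freq = np.fft.irfft(np.fft.rfft(x, N) * np.fft.rfft(g, N), N)

assert np.allclose(y_time, y_freq)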
1.6 Filters

Low-pass: to extract the short-term average or to eliminate high-frequency fluctuations (e.g. noise filtering, demodulation, etc.)

High-pass: to follow small-amplitude high-frequency perturbations in the presence of a much larger slowly-varying component (e.g. recording the electrocardiogram in the presence of a strong breathing signal)

Band-pass: to select a required modulated carrier frequency out of many (e.g. radio)

Band-stop: to eliminate single-frequency interference (e.g. mains hum); also known as notch filtering

Figure 1.2: Standard filters: |G(ω)| against ω for low-pass, band-pass, high-pass and notch responses.

1.7 Design of Analogue Filters

We will start with an analysis of analogue low-pass filters, since a low-pass filter can be mathematically transformed into any other standard type.

Design of a filter may start from consideration of:

• The desired frequency response.
• The desired phase response.

The majority of the time we will consider the first case. Consider some desired response, in the general form of the (squared) magnitude of the transfer function, i.e. |G(s)|². This response is given as

$$|G(s)|^2 = G(s)\,G(-s)$$

(for a filter with a real impulse response, G(−jω) = G*(jω), where * denotes complex conjugation). If G(s) represents a stable filter (its poles are in the left half of the s-plane) then G(−s) is unstable (its poles are the mirror images, in the right half).

The design procedure then consists of:

• Considering the desired response |G(s)|² as a polynomial in even powers of s.
• Factorising, and taking the stable part of the factorisation as G(s).

This means that, for any given filter response in the positive frequency domain, a mirror image exists in the negative frequency domain.

1.7.1 Ideal low-pass filter

Figure 1.3: The ideal low-pass filter |G(ω)|, with cut-off at ωc. Note the requirement of response in the negative frequency domain.

Any frequency-selective filter may be described either by its frequency response (more common) or by its impulse response. The narrower the band of frequencies transmitted by a filter, the more extended in time is its impulse response waveform. Indeed, if the support in the frequency domain is decreased by a factor of a (i.e. made narrower) then the required support in the time domain is increased by a factor of a (you should be able to prove this).
Consider an ideal low-pass filter with a “brick wall” amplitude cut-off and no phase shift, as shown in Fig. 1.3.

Calculate the impulse response as the inverse Fourier transform of the frequency response:

$$g(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} G(j\omega)\,e^{j\omega t}\,d\omega = \frac{1}{2\pi}\int_{-\omega_c}^{\omega_c} 1\cdot e^{j\omega t}\,d\omega = \frac{1}{2\pi j t}\left(e^{j\omega_c t} - e^{-j\omega_c t}\right)$$

hence

$$g(t) = \frac{\omega_c}{\pi}\left(\frac{\sin\omega_c t}{\omega_c t}\right)$$

Figure 1.4 shows the impulse response for the filter (this is also referred to as the filter kernel).

The output starts infinitely long before the impulse occurs, i.e. the filter is not realisable in real time.

Figure 1.4: Impulse response (filter kernel) g(t) for the ideal low-pass filter. The zero crossings occur at integer multiples of π/ωc.
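The kernel is easy to evaluate numerically. The sketch below uses an arbitrary choice of ωc and confirms the non-causal tail for t < 0.

import numpy as np

wc = 2.0 * np.pi                            # illustrative cut-off (rad/s)
t = np.linspace(-5.0, 5.0, 1001)
# np.sinc(u) is sin(pi u)/(pi u), so rescale the argument:
g = (wc / np.pi) * np.sinc(wc * t / np.pi)  # g(t) = (wc/pi) sin(wc t)/(wc t)

# substantial response before t = 0 confirms the filter is non-causal:
print(np.abs(g[t < 0]).max())               # first sidelobe, about 0.43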

A delay of time T such that

$$g(t) = \frac{\omega_c}{\pi}\left(\frac{\sin\omega_c(t-T)}{\omega_c(t-T)}\right)$$

would ensure that most of the response occurred after the input (for large T). The use of such a delay, however, introduces a phase lag proportional to frequency, since arg{G(jω)} = −ωT. Even then, the filter is still not exactly realisable; instead, the design of analogue filters involves the choice of the most suitable approximation to the ideal frequency response.

1.8 Practical Low-Pass Filters

Assume that the low-pass filter transfer function G(s) is a rational function in s. The type of filter to be considered in the next few pages is the all-pole design, which means that G(s) will be of the form:

$$G(s) = \frac{1}{a_n s^n + a_{n-1} s^{n-1} + \ldots + a_1 s + a_0}$$

or

$$G(j\omega) = \frac{1}{a_n (j\omega)^n + a_{n-1} (j\omega)^{n-1} + \ldots + a_1 j\omega + a_0}$$

The magnitude-squared response is |G(jω)|² = G(jω) · G(−jω). The denominator of |G(jω)|² is hence a polynomial in even powers of ω. Hence the task of approximating the ideal magnitude-squared characteristic is that of choosing a suitable denominator polynomial in ω², i.e. selecting the function H in the following expression:

$$|G(j\omega)|^2 = \frac{1}{1 + H\{(\omega/\omega_c)^2\}}$$

where ωc is the nominal cut-off frequency and H is a rational function of (ω/ωc)².

The choice of H is determined by functions such that 1 + H{(ω/ωc)²} is close to unity for ω < ωc and rises rapidly after that.

1.9 Butterworth Filters


$$H\{(\omega/\omega_c)^2\} = \left\{(\omega/\omega_c)^2\right\}^n = (\omega/\omega_c)^{2n}$$

i.e.

$$|G(j\omega)|^2 = \frac{1}{1 + (\omega/\omega_c)^{2n}}$$

where n is the order of the filter. Figure 1.5 shows the response on linear (a) and log (b) scales for various orders n.

Figure 1.5: Butterworth filter response |G(ω)| on (a) linear and (b) log scales for n = 1, 2 and 6. On a log–log scale the response for ω > ωc falls off at approximately −20n dB/decade.
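This magnitude response is straightforward to reproduce. The sketch below evaluates the formula and cross-checks it against SciPy's analogue Butterworth design; the order n = 4 and ωc = 1 are illustrative choices.

import numpy as np
from scipy import signal

wc, n = 1.0, 4                              # cut-off and order, illustrative
w = np.logspace(-2, 2, 400)

G_formula = 1.0 / np.sqrt(1.0 + (w / wc) ** (2 * n))

b, a = signal.butter(n, wc, btype="low", analog=True)
_, G_scipy = signal.freqs(b, a, worN=w)

assert np.allclose(G_formula, np.abs(G_scipy))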

1.9.1 Butterworth filter – notes

1. |G| = 1/√2 for ω = ωc (i.e. the magnitude response is 3 dB down at the cut-off frequency).

2. For large n:
• in the region ω < ωc, |G(jω)| ≈ 1
• in the region ω > ωc, the steepness of |G| is a direct function of n.

3. The response is known as maximally flat, because

$$\left.\frac{d^k |G|}{d\omega^k}\right|_{\omega=0} = 0 \qquad \text{for } k = 1, 2, \ldots, 2n-1$$
Proof

Express |G(jω)| as a binomial expansion:

$$|G(j\omega)| = \left\{1 + (\omega/\omega_c)^{2n}\right\}^{-1/2} = 1 - \frac{1}{2}\left(\frac{\omega}{\omega_c}\right)^{2n} + \frac{3}{8}\left(\frac{\omega}{\omega_c}\right)^{4n} - \frac{5}{16}\left(\frac{\omega}{\omega_c}\right)^{6n} + \ldots$$

It is then easy to show that the first 2n − 1 derivatives are all zero at the origin.
1.9.2 Transfer function of Butterworth low-pass filter

$$|G(j\omega)| = \sqrt{G(j\omega)\,G(-j\omega)}$$

Since G(jω) is derived from G(s) using the substitution s → jω, the reverse operation can also be done, i.e. ω → −js:

$$\sqrt{G(s)\,G(-s)} = \frac{1}{\sqrt{1 + (-js/\omega_c)^{2n}}} \qquad \text{or} \qquad G(s)\,G(-s) = \frac{1}{1 + (-js/\omega_c)^{2n}}$$

Thus the poles of $1/\bigl(1 + (-js/\omega_c)^{2n}\bigr)$ belong either to G(s) or to G(−s). The poles are given by:

$$\left(\frac{-js}{\omega_c}\right)^{2n} = -1 = e^{j(2k+1)\pi}, \qquad k = 0, 1, 2, \ldots, 2n-1$$

Thus

$$\frac{-js}{\omega_c} = e^{j(2k+1)\pi/2n}$$

Since j = e^{jπ/2}, we have the final result:

$$s = \omega_c \exp\!\left(j\left[\frac{\pi}{2} + \frac{(2k+1)\pi}{2n}\right]\right)$$

i.e. the poles have the same modulus ωc and equi-spaced arguments. For example, for a fifth-order Butterworth low-pass filter (LPF), n = 5:

$$\frac{\pi}{2n} = 18^\circ \quad\Rightarrow\quad \frac{\pi}{2} + (2k+1)\frac{\pi}{2n} = 90^\circ + (18^\circ, 54^\circ, 90^\circ, 126^\circ, \ldots)$$

i.e. the poles are at 108°, 144°, 180°, 216°, 252° (in the L.H. s-plane, therefore stable) and at 288°, 324°, 360°, 396°, 432° (in the R.H. s-plane, therefore unstable).

We want to design a stable filter. Since each unstable pole is (−1) × a stable pole, we can let the stable ones be in G(s) and the unstable ones in G(−s). Therefore the poles of G(s) are ωc e^{j108°}, ωc e^{j144°}, ωc e^{j180°}, ωc e^{j216°}, ωc e^{j252°}, as shown in Figure 1.6.

Figure 1.6: Stable poles of the 5th-order Butterworth filter in the s-plane.
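The pole formula can be checked against SciPy's normalised Butterworth prototype. The sketch below generates all 2n poles from the formula, keeps the left-half-plane (stable) set, and compares them with scipy.signal.buttap for the n = 5, ωc = 1 case of the worked example.

import numpy as np
from scipy import signal

n, wc = 5, 1.0
k = np.arange(2 * n)
s = wc * np.exp(1j * (np.pi / 2 + (2 * k + 1) * np.pi / (2 * n)))
stable = s[s.real < 0]                      # poles in the L.H. s-plane

z, p, gain = signal.buttap(n)               # SciPy's wc = 1 prototype
stable = stable[np.argsort(np.angle(stable))]
p = p[np.argsort(np.angle(p))]
assert np.allclose(stable, p)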

$$G(s) = \prod_k \frac{1}{1 - s/p_k} = \frac{1}{\left(1 + \frac{s}{\omega_c}\right)\left\{1 + 2\cos 72^\circ \frac{s}{\omega_c} + \left(\frac{s}{\omega_c}\right)^2\right\}\left\{1 + 2\cos 36^\circ \frac{s}{\omega_c} + \left(\frac{s}{\omega_c}\right)^2\right\}}$$

$$G(s) = \frac{1}{1 + 3.2361\frac{s}{\omega_c} + 5.2361\left(\frac{s}{\omega_c}\right)^2 + 5.2361\left(\frac{s}{\omega_c}\right)^3 + 3.2361\left(\frac{s}{\omega_c}\right)^4 + \left(\frac{s}{\omega_c}\right)^5}$$

Note that the coefficients are “palindromic” (they read the same in reverse order); this is true for all Butterworth filters. The poles always lie on the same radius, at π/n angular spacing, with “half-angles” at each end. If n is odd, one pole is real.
n   a1      a2       a3       a4       a5       a6       a7      a8
1   1.0000
2   1.4142  1.0000
3   2.0000  2.0000   1.0000
4   2.6131  3.4142   2.6131   1.0000
5   3.2361  5.2361   5.2361   3.2361   1.0000
6   3.8637  7.4641   9.1416   7.4641   3.8637   1.0000
7   4.4940  10.0978  14.5918  14.5918  10.0978  4.4940   1.0000
8   5.1258  13.1371  21.8462  25.6884  21.8462  13.1371  5.1258  1.0000

Butterworth LPF coefficients (a0 = 1) for n ≤ 8
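The table can be regenerated from the poles: the denominator coefficients are simply those of the polynomial whose roots are the stable poles. A minimal sketch:

import numpy as np
from scipy import signal

for n in range(1, 9):
    _, p, _ = signal.buttap(n)              # stable poles, wc = 1
    a = np.real(np.poly(p))                 # denominator coefficients
    print(n, np.round(a, 4))
    assert np.allclose(a, a[::-1])          # palindromic, as noted above

# n = 5 prints [1. 3.2361 5.2361 5.2361 3.2361 1.]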

1.9.3 Design of Butterworth LPFs

The only design variable for Butterworth LPFs is the order of the filter, n. If a filter is to have an attenuation A at a frequency ωa:

$$|G|^2_{\omega=\omega_a} = \frac{1}{A^2} = \frac{1}{1 + (\omega_a/\omega_c)^{2n}}$$

i.e.

$$n = \frac{\log(A^2 - 1)}{2\log(\omega_a/\omega_c)}$$

or, since usually A ≫ 1,

$$n \approx \frac{\log A}{\log(\omega_a/\omega_c)}$$

Butterworth design – example

Design a Butterworth LPF with at least 40 dB attenuation at a frequency of 1 kHz and 3 dB attenuation at fc = 500 Hz.

Answer

40 dB → A = 100; ωa = 2000π and ωc = 1000π rad/s. Therefore

$$n \approx \frac{\log_{10} 100}{\log_{10} 2} = \frac{2}{0.301} = 6.64$$

Hence n = 7 meets the specification.

Check: substitute s = j2 (i.e. s/ωc = j2) into the transfer function from the above table for n = 7:

$$|G(j2)| = \frac{1}{|87.38 - 93.54j|}$$

which gives A = 128, comfortably exceeding the required 100.
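The worked example can be verified numerically; in the sketch below the normalised design with ωc = 1 and ω/ωc = 2 stands in for the 500 Hz / 1 kHz specification.

import numpy as np
from scipy import signal

A, ratio = 100.0, 2.0                       # 40 dB attenuation at wa = 2 wc
n_exact = np.log10(A**2 - 1) / (2 * np.log10(ratio))
print(n_exact)                              # about 6.64, so choose n = 7

b, a = signal.butter(7, 1.0, analog=True)   # normalised n = 7 design
_, G = signal.freqs(b, a, worN=[2.0])       # evaluate at w/wc = 2
print(1.0 / np.abs(G[0]))                   # about 128 > 100: spec met

SciPy's signal.buttord(1.0, 2.0, 3, 40, analog=True) should arrive at the same order.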
