EC6502 Principles of Digital Signal Processing
PRINCIPLES OF DIGITAL SIGNAL PROCESSING
By
Mr. A. MUTHUKUMAR
ASSISTANT PROFESSOR
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
SASURIE COLLEGE OF ENGINEERING
VIJAYAMANGALAM 638056
TEXT BOOKS
1. John G. Proakis and D. G. Manolakis, Digital Signal Processing: Principles, Algorithms and Applications, Pearson, Fourth Edition, 2007.
2. S. Salivahanan, A. Vallavaraj and C. Gnanapriya, Digital Signal Processing, TMH/McGraw-Hill International, 2007.

REFERENCES
1. E. C. Ifeachor and B. W. Jervis, Digital Signal Processing: A Practical Approach, Second Edition, Pearson, 2002.
2. S. K. Mitra, Digital Signal Processing: A Computer Based Approach, Tata McGraw-Hill, 1998.
3. P. P. Vaidyanathan, Multirate Systems & Filter Banks, Prentice Hall, Englewood Cliffs, NJ, 1993.
4. Johnny R. Johnson, Introduction to Digital Signal Processing, PHI, 2006.
CONTENTS

UNIT I - FREQUENCY TRANSFORMATIONS
1.1 INTRODUCTION
1.2 DIFFERENCE BETWEEN FT & DFT
1.3 CALCULATION OF DFT & IDFT
1.4 DIFFERENCE BETWEEN DFT & IDFT
1.5 PROPERTIES OF DFT
1.6 APPLICATION OF DFT
1.7 FAST FOURIER ALGORITHM (FFT)
1.8 RADIX-2 FFT ALGORITHMS
1.9 COMPUTATIONAL COMPLEXITY: FFT V/S DIRECT COMPUTATION
1.10 BIT REVERSAL
1.11 DECIMATION IN FREQUENCY (DIF FFT)
1.12 GOERTZEL ALGORITHM

UNIT II - IIR FILTER DESIGN
2.1 INTRODUCTION
2.2 TYPES OF DIGITAL FILTER
2.3 STRUCTURES FOR FIR SYSTEMS
2.4 STRUCTURES FOR IIR SYSTEMS
2.5 CONVERSION OF ANALOG FILTER INTO DIGITAL FILTER
2.6 IIR FILTER DESIGN - BILINEAR TRANSFORMATION METHOD (BZT)
2.7 LPF AND HPF ANALOG BUTTERWORTH FILTER TRANSFER FUNCTION
2.8 METHOD FOR DESIGNING DIGITAL FILTERS USING BZT
2.9 BUTTERWORTH FILTER APPROXIMATION
2.10 FREQUENCY RESPONSE CHARACTERISTIC
2.11 FREQUENCY TRANSFORMATION
2.11.1 FREQUENCY TRANSFORMATION (ANALOG FILTER)
2.11.2 FREQUENCY TRANSFORMATION (DIGITAL FILTER)

UNIT III - FIR FILTER DESIGN
3.1 FEATURES OF FIR FILTER
3.2 SYMMETRIC AND ANTI-SYMMETRIC FIR FILTERS
3.3 GIBBS PHENOMENON
3.4 DESIGNING FILTER FROM POLE ZERO PLACEMENT
3.5 NOTCH AND COMB FILTERS
3.6 DIGITAL RESONATOR

UNIT IV - FINITE WORD LENGTH EFFECTS
4.1 NUMBER REPRESENTATION
4.1.1 FIXED-POINT QUANTIZATION ERRORS
4.1.2 FLOATING-POINT QUANTIZATION ERRORS
4.2 ROUNDOFF NOISE
4.3 ROUNDOFF NOISE IN FIR FILTERS
4.4 ROUNDOFF NOISE IN FIXED POINT FIR FILTERS
4.5 LIMIT CYCLE OSCILLATIONS
4.6 OVERFLOW OSCILLATIONS
4.7 COEFFICIENT QUANTIZATION ERROR
4.8 REALIZATION CONSIDERATIONS

UNIT V - APPLICATIONS OF DSP
5.1 SPEECH RECOGNITION
5.2 LINEAR PREDICTION OF SPEECH SYNTHESIS
5.3 SOUND PROCESSING
5.4 ECHO CANCELLATION
5.5 VIBRATION ANALYSIS
5.6 MULTISTAGE IMPLEMENTATION OF DIGITAL FILTERS
5.7 SPEECH SIGNAL PROCESSING
5.8 SUBBAND CODING
5.9 ADAPTIVE FILTER
5.10 AUDIO PROCESSING
5.10.1 MUSIC SOUND PROCESSING
5.10.2 SPEECH GENERATION
5.10.3 SPEECH RECOGNITION
5.11 IMAGE ENHANCEMENT
UNIT I
FREQUENCY TRANSFORMATIONS
1.1 INTRODUCTION
Time domain analysis gives information such as the signal amplitude at each sampling instant, but it does not convey the frequency content or the power and energy spectrum of the signal. Hence frequency domain analysis is used.
For a discrete time signal x(n), the Fourier Transform is denoted by X(ω) and given by

X(ω) = Σ_{n=-∞}^{∞} x(n) e^{-jωn}        FT ...(1)

The DFT is denoted by X(k) and is obtained by evaluating (1) at ω = 2πk/N:

X(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πkn/N},   k = 0, 1, ..., N-1        DFT ...(2)

The IDFT is given by

x(n) = (1/N) Σ_{k=0}^{N-1} X(k) e^{j2πkn/N},   n = 0, 1, ..., N-1        IDFT ...(3)

For calculation of the DFT and IDFT two different methods can be used. The first method uses the defining equations directly; the second uses the 4- or 8-point DFT matrix. If x(n) is a sequence of N samples, define the twiddle factor WN = e^{-j2π/N}.
For N = 4 the DFT matrix is

         [ W4^0  W4^0  W4^0  W4^0 ]     [ 1   1   1   1 ]
[W4]  =  [ W4^0  W4^1  W4^2  W4^3 ]  =  [ 1  -j  -1   j ]
         [ W4^0  W4^2  W4^4  W4^6 ]     [ 1  -1   1  -1 ]
         [ W4^0  W4^3  W4^6  W4^9 ]     [ 1   j  -1  -j ]
Examples:
Q) Compute the DFT of x(n) = {0,1,2,3}. Ans: X(k) = {6, -2+2j, -2, -2-2j}
Q) Compute the DFT of x(n) = {1,0,0,1}. Ans: X(k) = {2, 1+j, 0, 1-j}
Q) Compute the DFT of x(n) = {1,0,1,0}. Ans: X(k) = {2, 0, 2, 0}
Q) Compute the IDFT of X(k) = {2, 1+j, 0, 1-j}. Ans: x(n) = {1,0,0,1}
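As a quick numerical check of the matrix method, the short Python sketch below (NumPy is an assumed dependency) builds the W4 matrix above and reproduces the answers of the worked examples.

```python
import numpy as np

def dft_matrix(N):
    # W_N^(k*n) with W_N = exp(-j*2*pi/N); rows indexed by k, columns by n
    n = np.arange(N)
    return np.exp(-2j * np.pi * np.outer(n, n) / N)

W4 = dft_matrix(4)
x = np.array([0, 1, 2, 3])
X = W4 @ x                      # DFT: X(k) = sum_n x(n) W_N^{kn}
x_back = np.conj(W4) @ X / 4    # IDFT: x(n) = (1/N) sum_k X(k) W_N^{-kn}

print(np.round(X, 6))           # [ 6.+0.j -2.+2.j -2.+0.j -2.-2.j]
print(np.round(x_back, 6))      # recovers the original sequence
```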
3. The DFT is calculated by the equation

   X(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πkn/N}

   and the IDFT by the equation

   x(n) = (1/N) Σ_{k=0}^{N-1} X(k) e^{j2πkn/N}

4. In matrix form the DFT is given by X(k) = [WN][x(n)], and the IDFT by x(n) = (1/N)[WN]^(-1)[X(k)]. The DFT and IDFT differ only in the factor 1/N and the sign of the exponent of the twiddle factor.
2. Linearity

The linearity property states that if

x1(n) ⟷ X1(k)   and   x2(n) ⟷ X2(k)   (N-point DFT)

then

a1 x1(n) + a2 x2(n) ⟷ a1 X1(k) + a2 X2(k)

That is, the DFT of a linear combination of two or more signals is equal to the same linear combination of the DFTs of the individual signals.
A) A sequence is said to be circularly even if it is symmetric about the point zero on the circle, i.e. x(N-n) = x(n).

B) A sequence is said to be circularly odd if it is antisymmetric about the point zero on the circle, i.e. x(N-n) = -x(n).

D) A circular shift in the anticlockwise direction gives a delayed sequence and a shift in the clockwise direction gives an advanced sequence. Thus the delayed or advanced sequence x'(n) is related to x(n) by a circular shift.
This property states that if the sequence is real and even, x(n) = x(N-n), then the DFT becomes

X(k) = Σ_{n=0}^{N-1} x(n) cos(2πkn/N)

This property states that if the sequence is real and odd, x(n) = -x(N-n), then the DFT becomes

X(k) = -j Σ_{n=0}^{N-1} x(n) sin(2πkn/N)

This property states that if the sequence is purely imaginary, x(n) = j xI(n), then the DFT becomes

XR(k) = Σ_{n=0}^{N-1} xI(n) sin(2πkn/N)

XI(k) = Σ_{n=0}^{N-1} xI(n) cos(2πkn/N)
5. Circular Convolution

The circular convolution property states that if

x1(n) ⟷ X1(k)   and   x2(n) ⟷ X2(k)   (N-point DFT)

then

x1(n) ⊛ x2(n) ⟷ X1(k) X2(k)

It means that the circular convolution of x1(n) and x2(n) is equal to the multiplication of their DFTs. Thus the circular convolution of two periodic discrete signals with period N is given by

y(m) = Σ_{n=0}^{N-1} x1(n) x2((m-n))_N        ...(4)

Multiplication of two DFTs in the frequency domain corresponds to circular convolution, not linear convolution, of the corresponding sequences in the time domain. The results of linear and circular convolution are in general different, but they are related to each other.

There are two different methods used to calculate circular convolution:
1) Graphical representation form
2) Matrix approach
4. Linear convolution of two sequences of lengths L and M returns L+M-1 elements. Circular convolution returns the same number of elements as the (equal-length) input sequences.
Q) The two sequences x1(n)={2,1,2,1} & x2(n)={1,2,3,4}. Find out the sequence x3(m)
which is equal to circular convolution of two sequences. Ans: X3(m)={14,16,14,16}
Q) Perform Linear Convolution of x(n)={1,2} & h(n)={2,1} using DFT & IDFT.
Q) Perform Linear Convolution of x(n)={1,2,2,1} & h(n)={1,2,3} using 8 Pt DFT & IDFT.
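A small Python sketch (NumPy assumed) of the circular convolution property and of linear convolution via zero-padded DFTs, reproducing the first two answers above:

```python
import numpy as np

def circular_convolution(x1, x2):
    # Circular convolution via the DFT property: y = IDFT{ DFT(x1) * DFT(x2) }
    X1, X2 = np.fft.fft(x1), np.fft.fft(x2)
    return np.real(np.fft.ifft(X1 * X2))

x1 = [2, 1, 2, 1]
x2 = [1, 2, 3, 4]
print(np.round(circular_convolution(x1, x2)))   # [14. 16. 14. 16.]

# Linear convolution through the DFT: zero-pad both sequences to length L+M-1
x, h = [1, 2], [2, 1]
N = len(x) + len(h) - 1
y = np.real(np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)))
print(np.round(y))                               # [2. 5. 2.]
```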
6. Multiplication

The multiplication property states that if

x1(n) ⟷ X1(k)   and   x2(n) ⟷ X2(k)   (N-point DFT)

then

x1(n) x2(n) ⟷ (1/N) X1(k) ⊛ X2(k)

It means that multiplication of two sequences in the time domain results in circular convolution of their DFTs in the frequency domain.

This means multiplication of the DFT of one sequence and the conjugate DFT of another sequence is equivalent to circular cross-correlation of these sequences in the time domain.
12. Parseval's Theorem

Σ_{n=0}^{N-1} x(n) y*(n) = (1/N) Σ_{k=0}^{N-1} X(k) Y*(k)

This equation gives the energy of a finite duration sequence in terms of its frequency components (with y(n) = x(n) it reduces to Σ|x(n)|² = (1/N) Σ|X(k)|²).
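A quick numerical check of Parseval's theorem (NumPy assumed, random test signal):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
X = np.fft.fft(x)

energy_time = np.sum(np.abs(x) ** 2)
energy_freq = np.sum(np.abs(X) ** 2) / len(x)   # the 1/N factor from Parseval's theorem
print(np.isclose(energy_time, energy_freq))     # True
```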
Consider an input sequence x(n) of length L and an impulse response h(n) of the same system having M samples. The output y(n) of the system then contains N samples, where N = L+M-1. Only if the DFT of y(n) also contains N samples does it uniquely represent y(n) in the time domain. Multiplication of two DFTs is equivalent to circular convolution of the corresponding time domain sequences, but the lengths of x(n) and h(n) are less than N. Hence these sequences are appended with zeros to make their length N; this is called zero padding. The N-point circular convolution and the linear convolution then produce the same sequence, so linear convolution can be obtained through circular convolution. Thus linear filtering can be performed using the DFT.

When the input data sequence is long, it takes a large time to compute the output in one piece. Hence other techniques are used to filter long data sequences. Instead of computing the output of the complete input sequence, it is broken into small length sequences. The outputs due to these small length sequences are computed quickly and are then fitted one after another to get the final output response.
Step 1> In this method L samples of the current segment and M-1 samples of the previous segment form the input data block (this is the overlap-save method).

Step 2> The unit sample response h(n) contains M samples; its length is made N by padding zeros, so h(n) also contains N samples.

Step 3> Let the N-point DFT of h(n) be H(k) and the DFT of the m-th data block be Xm(k). Then the corresponding DFT of the output block is

Y'm(k) = H(k) Xm(k)

Step 4> The sequence ym(n) is obtained by taking the N-point IDFT of Y'm(k). The initial (M-1) samples of each output block must be discarded; the last L samples are the correct output samples. Such blocks are fitted one after another to get the final output.
[Figure: Overlap-save method - the input x(n) of size N is divided into blocks x1(n), x2(n), x3(n) of size L (the first block is preceded by M-1 zeros); the first M-1 points of each output block are discarded and the remaining points are concatenated to form y(n).]
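A compact sketch of this block-filtering procedure (overlap-save), assuming NumPy; the block length L and the test signals are illustrative choices:

```python
import numpy as np

def overlap_save(x, h, L):
    # Overlap-save block convolution: each DFT block holds M-1 samples of the
    # previous segment followed by L new samples; the first M-1 output samples
    # of every block are discarded (they are corrupted by circular wrap-around).
    M = len(h)
    N = L + M - 1
    H = np.fft.fft(h, N)
    x_padded = np.concatenate([np.zeros(M - 1), x,
                               np.zeros((-len(x)) % L)])   # pad to whole blocks
    y = []
    for start in range(0, len(x_padded) - (M - 1), L):
        block = x_padded[start:start + N]
        if len(block) < N:
            block = np.concatenate([block, np.zeros(N - len(block))])
        y_block = np.real(np.fft.ifft(np.fft.fft(block) * H))
        y.append(y_block[M - 1:])            # keep only the last L (correct) samples
    return np.concatenate(y)[:len(x) + M - 1]

x = np.arange(1, 11, dtype=float)
h = np.array([1.0, 2.0, 3.0])
print(np.allclose(overlap_save(x, h, L=4), np.convolve(x, h)))   # True
```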
Step 1> In this method the input sequence is divided into non-overlapping blocks of L samples each; each block is then padded with zeros to length N (this is the overlap-add method). Thus the data blocks are

x1(n) = {x(0), x(1), ..., x(L-1), 0, 0, ..., 0}
x2(n) = {x(L), x(L+1), ..., x(2L-1), 0, 0, ..., 0}
x3(n) = {x(2L), x(2L+1), ..., x(3L-1), 0, 0, ..., 0}

Step 2> The unit sample response h(n) contains M samples; its length is made N by padding zeros, so h(n) also contains N samples.

Step 3> Let the N-point DFT of h(n) be H(k) and the DFT of the m-th data block be Xm(k). Then the corresponding DFT of the output block is Y'm(k) = H(k) Xm(k).
[Figure: Overlap-add method - the input x(n) of size N is divided into blocks of size L, each padded with M-1 zeros; the last M-1 points of each output block overlap with and are added to the first M-1 points of the next block to form y(n).]
The DFT of a signal is used for spectrum analysis. The DFT can be computed on a digital computer or a digital signal processor. The signal to be analyzed is passed through an anti-aliasing filter and sampled at a rate Fs ≥ 2 Fmax; hence the highest frequency component that can be represented is Fs/2.

The frequency spectrum can be plotted by taking N samples of an L-sample record of the waveform. The total frequency range 2π is divided into N points. The spectrum improves if large values of N and L are taken, but this increases the processing time. The DFT can be computed quickly using the FFT algorithm, so fast processing can be done. Thus better frequency resolution is obtained by increasing the number of samples.
1. A large number of applications such as filtering, correlation analysis and spectrum analysis require calculation of the DFT. But direct computation of the DFT requires a large number of computations and keeps the processor busy. Hence special algorithms, called Fast Fourier Transform (FFT) algorithms, were developed to compute the DFT quickly.

2. The radix-2 FFT algorithms are based on the divide and conquer approach. In this method the N-point DFT is successively decomposed into smaller DFTs; because of this decomposition the number of computations is reduced.

The N-point sequence x(n) is split into two N/2-point sequences f1(n) and f2(n): f1(n) contains the even numbered samples of x(n) and f2(n) contains the odd numbered samples. This splitting operation is called decimation. Since it is done on the time domain sequence, it is called Decimation In Time (DIT).
Thus

X(k) = Σ_{n=0}^{N-1} x(n) WN^{kn}        ...(1)

Since the sequence x(n) is split into even numbered and odd numbered samples,

X(k) = Σ_{m=0}^{N/2-1} x(2m) WN^{2mk} + Σ_{m=0}^{N/2-1} x(2m+1) WN^{k(2m+1)}        ...(2)

X(k) = F1(k) + WN^k F2(k)        ...(3)

X(k+N/2) = F1(k) - WN^k F2(k)   (symmetry property)        ...(4)
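Equations (3) and (4) translate directly into a recursive radix-2 DIT FFT. A minimal Python sketch (NumPy assumed, names illustrative):

```python
import numpy as np

def fft_dit(x):
    # Recursive radix-2 decimation-in-time FFT; len(x) must be a power of 2.
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)
    F1 = fft_dit(x[0::2])            # DFT of even-indexed samples
    F2 = fft_dit(x[1::2])            # DFT of odd-indexed samples
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors W_N^k
    return np.concatenate([F1 + W * F2,               # X(k)     = F1(k) + W^k F2(k)
                           F1 - W * F2])              # X(k+N/2) = F1(k) - W^k F2(k)

x = np.arange(8, dtype=float)
print(np.allclose(fft_dit(x), np.fft.fft(x)))   # True
```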
Fig 1 shows that 8-point DFT can be computed directly and hence no reduction in
computation.
Fig 1. DIRECT COMPUTATION FOR N=8
[Figure 2: First stage of DIT decomposition for N=8 - the even-indexed samples x(0), x(2), x(4), x(6) feed an N/2-point DFT producing f1(0), f1(1), f1(2), f1(3).]
Fig 3 shows the N/2-point DFTs further separated into N/4-point DFTs. In this case the equations become

g1(k) = P1(k) + WN^{2k} P2(k)        ...(5)

g1(k+N/4) = P1(k) - WN^{2k} P2(k)        ...(6)

[Figure 4: N/4-point DFT block - x(0) and x(4) feed an N/4-point DFT producing F(0) and F(1).]

Each stage is built from the basic butterfly operation

A = a + WN^r b
B = a - WN^r b

Fig 5. SIGNAL FLOW GRAPH FOR RADIX-2 DIT FFT, N=4
[Figure 6: Signal flow graph for radix-2 DIT FFT, N=8.]

Each butterfly operation takes one complex multiplication and two complex additions. Direct computation requires N² complex multiplications and N²-N complex additions. The butterfly computes

A = a + WN^r b
B = a - WN^r b
From the values a and b the new values A and B are computed. Once A and B are computed, there is no need to store a and b; the same memory locations that held a and b can be used to store A and B. This is called in-place computation, and its advantage is that it reduces the memory requirement.

For the computation of one butterfly, four memory locations are required for storing the two complex numbers A and B. In every stage there are N/2 butterflies, hence a total of 2N memory locations are required per stage. Since the stages are computed successively, these memory locations can be shared between stages. In every stage N/2 twiddle factors are required, hence the maximum storage requirement of an N-point DFT will be (2N + N/2) locations.
For 8 point DIT DFT input data sequence is written as x(0), x(4), x(2), x(6), x(1), x(5), x(3), x(7) and the
DFT sequence X(k) is in proper order as X(0), X(1), X(2), X(3), X(4), x(5), X(6), x(7). In DIF FFT it is
exactly opposite. This can be obtained by bit reversal method.
The table shows the memory address in decimal in the first column and in binary in the second column; the third column gives the bit-reversed values. As the FFT is to be implemented on a digital computer, a simple integer division-by-2 method is used for implementing the bit reversal algorithm. The flow chart for the bit reversal algorithm is as follows:
[Flow chart: bit reversal of a decimal number B - initialize I = 1, B1 = B, BR = 0; repeatedly divide B1 by 2, appending each remainder to BR, and finally store BR as the bit reversal of B.]
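A short Python sketch of the division-by-2 bit reversal (the function name and bit width are illustrative):

```python
def bit_reverse(b, num_bits):
    # Reverse the bit pattern of b using repeated integer division by 2,
    # as in the flow chart: the remainders build up the reversed value.
    br = 0
    for _ in range(num_bits):
        br = (br << 1) | (b & 1)   # append the least significant bit of b to br
        b >>= 1                    # integer division by 2
    return br

# Bit-reversed input order for an 8-point DIT FFT (3 bits):
print([bit_reverse(n, 3) for n in range(8)])   # [0, 4, 2, 6, 1, 5, 3, 7]
```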
In DIF the N-point DFT is split into N/2-point DFTs. X(k) is split into even values of k and odd values of k; this is called Decimation In Frequency (DIF FFT).
X(k) = Σ_{n=0}^{N/2-1} x(n) WN^{kn} + Σ_{n=0}^{N/2-1} x(n+N/2) WN^{k(n+N/2)}        ...(2)

X(k) = Σ_{n=0}^{N/2-1} x(n) WN^{kn} + WN^{kN/2} Σ_{n=0}^{N/2-1} x(n+N/2) WN^{kn}

Since WN^{kN/2} = (-1)^k,

X(k) = Σ_{n=0}^{N/2-1} x(n) WN^{kn} + (-1)^k Σ_{n=0}^{N/2-1} x(n+N/2) WN^{kn}

X(k) = Σ_{n=0}^{N/2-1} [x(n) + (-1)^k x(n+N/2)] WN^{kn}        ...(3)

For even and odd values of k,

X(2k) = Σ_{n=0}^{N/2-1} [x(n) + x(n+N/2)] WN^{2kn}        ...(4)

X(2k+1) = Σ_{n=0}^{N/2-1} [x(n) - x(n+N/2)] WN^{(2k+1)n}        ...(5)

The basic DIF butterfly is

A = a + b
B = (a - b) WN^r
Fig 2 shows signal flow graph and stages for computation of radix-2 DIF FFT algorithm of
N=4
Fig 3 shows signal flow graph and stages for computation of radix-2 DIF FFT algorithm of
N=8
3. Direct computation does not require any splitting operation. In the FFT the splitting operation is done on a time domain basis (DIT) or a frequency domain basis (DIF).

4. As the value of N increases, the efficiency of direct computation decreases, whereas the relative efficiency of the FFT algorithms increases.
1.12 GOERTZEL ALGORITHM:
FFT algorithms are used to compute the N-point DFT of N samples of the sequence x(n). This requires (N/2) log2 N complex multiplications and N log2 N complex additions. In some applications the DFT is to be computed only at selected frequencies, and if the number of selected frequencies is less than log2 N, then direct computation of the DFT becomes more efficient than the FFT. This direct computation of the DFT can be realized through linear filtering of x(n); such linear filtering for computation of the DFT can be implemented using the Goertzel algorithm.
By definition the N-point DFT is given as

X(k) = Σ_{m=0}^{N-1} x(m) WN^{km}        ...(1)

Since WN^{-kN} = 1, this can be written as

X(k) = Σ_{m=0}^{N-1} x(m) WN^{-k(N-m)}        ...(2)

Define the sequence

yk(n) = Σ_{m=-∞}^{∞} x(m) WN^{-k(n-m)} u(n-m)        ...(3)

which is the convolution of x(n) with hk(n) = WN^{-kn} u(n). For the finite-length input,

yk(n) = Σ_{m=0}^{N-1} x(m) WN^{-k(n-m)}        ...(4)

and comparing with (2),

X(k) = yk(n)|_{n=N}

Thus the DFT can be obtained as the output of an LSI system evaluated at n = N. Such a system can give X(k) at selected values of k. Thus the DFT is computed as a linear filtering operation by the Goertzel algorithm.
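A minimal Python sketch of the Goertzel recursion for a single DFT bin (NumPy assumed; the standard second-order, real-coefficient form of the filter is used):

```python
import numpy as np

def goertzel(x, k):
    # Compute the single DFT bin X(k) of the length-N sequence x by running
    # the second-order recursion s(n) = x(n) + 2cos(w) s(n-1) - s(n-2)
    # and reading out the filter at n = N.
    N = len(x)
    w = 2 * np.pi * k / N
    coeff = 2 * np.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # One extra zero-input step gives s(N); then X(k) = s(N) - e^{-jw} s(N-1)
    s = coeff * s_prev - s_prev2
    return s - np.exp(-1j * w) * s_prev

x = np.arange(8, dtype=float)
print(np.allclose(goertzel(x, 3), np.fft.fft(x)[3]))   # True
```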
GLOSSARY:
Fourier Transform:
The transform used to analyze signal or system characteristics in the frequency domain, which is difficult to do in the time domain.

Laplace Transform:
The Laplace transform is the basic continuous-time transform; it was developed to represent continuous signals in the frequency (s) domain.

For analyzing discrete signals, the DTFT (Discrete Time Fourier Transform) is used. In the DTFT the frequency variable is continuous, but the transformed values must be discrete, since digital signal processors cannot work with continuous-frequency signals. So the DFT was developed to represent discrete signals in the discrete frequency domain.
Discrete Fourier Transform is used for transforming a discrete time sequence of finite length N into a
discrete frequency sequence of the same finite length N.
Periodicity:
If a discrete time signal is periodic then its DFT is also periodic. i.e. if a signal or sequence is
repeated after N Number of samples, then it is called periodic signal.
Symmetry:
If a signal or sequence repeats its waveform with a negative sign after N/2 samples, it is called a symmetric sequence or signal.

Linearity:
A system which satisfies the superposition principle is said to be a linear system. The DFT has the linearity property: the DFT of a sum of scaled inputs is equal to the same scaled sum of the DFTs of the individual inputs.

Fast Fourier Transform:
The Fast Fourier Transform is an algorithm that efficiently computes the discrete Fourier transform of a sequence x(n). Direct computation of the DFT requires 2N² evaluations of trigonometric functions, 4N² real multiplications and 4N(N-1) real additions.
UNIT II
IIR FILTER DESIGN
PREREQUISITE DISCUSSION:
Basically a digital filter is a linear time invariant discrete time system. The terms Finite Impulse
response (FIR) and Infinite Impulse Response (IIR) are used to distinguish filter types. The FIR filters are
of Non-Recursive type whereas the IIR Filters are of recursive type.
2.1 INTRODUCTION
To remove or to reduce strength of unwanted signal like noise and to improve the quality of
required signal filtering process is used. To use the channel full bandwidth we mix up two or more
signals on transmission side and on receiver side we would like to separate it out in efficient way.
Hence filters are used. Thus the digital filters are mostly used in
1. Removal of undesirable noise from the desired signals
2. Equalization of communication channels
3. Signal detection in radar, sonar and communication
4. Performing spectral analysis of signals.
In signal processing, the function of a filter is to remove unwanted parts of the signal, such as
random noise, or to extract useful parts of the signal, such as the components lying within a certain
frequency range.
There are two main kinds of filter, analog and digital. They are quite different in their physical
makeup and in how they work.
An analog filter uses analog electronic circuits made up from components such as resistors,
capacitors and op amps to produce the required filtering effect. Such filter circuits are widely used in
such applications as noise reduction, video signal enhancement, graphic equalizers in hi-fi systems, and
many other areas.
In analog filters the signal being filtered is an electrical voltage or current which is the direct
analogue of the physical quantity (e.g. a sound or video signal or transducer output) involved.
A digital filter uses a digital processor to perform numerical calculations on sampled values of the
signal. The processor may be a general-purpose computer such as a PC, or a specialized DSP (Digital
Signal Processor) chip.
The analog input signal must first be sampled and digitized using an ADC (analog to digital
converter). The resulting binary numbers, representing successive sampled values of the input signal, are
transferred to the processor, which carries out numerical calculations on them. These calculations
typically involve multiplying the input values by constants and adding the products together. If
necessary, the results of these
calculations, which now represent sampled values of the filtered signal, are output through a DAC
(digital to analog converter) to convert the signal back to analog form.
In a digital filter, the signal is represented by a sequence of numbers, rather than a voltage or
current.
[Block diagram: analog signal xa(t) → Sampler → Quantizer & Encoder → Digital Filter]
1. Samplers are used for converting a continuous time signal into a discrete time signal by taking samples of the continuous time signal at discrete time instants.

2. The quantizer is used for converting a discrete time, continuous amplitude signal into a digital signal by expressing each sample value as a finite number of digits.

3. In the encoding operation, the quantized sample value is converted to the binary equivalent of that quantization level.

4. The digital filters are the discrete time systems used for filtering of sequences.

These digital filters perform frequency-related operations such as low pass, high pass, band pass and band reject filtering. They are designed with digital hardware and software and are represented by difference equations.
Filters are usually classified according to their frequency-domain characteristic as lowpass, highpass,
bandpass and bandstop filters.
1. Lowpass Filter
A lowpass filter is made up of a passband and a stopband, where the lower frequencies
Of the input signal are passed through while the higher frequencies are attenuated.
[Figure: ideal lowpass magnitude response |H(ω)|, passband -ωc ≤ ω ≤ ωc]
2. Highpass Filter
A highpass filter is made up of a stopband and a passband where the lower frequencies of the input
signal are attenuated while the higher frequencies are passed.
[Figure: ideal highpass magnitude response |H(ω)|, stopband -ωc < ω < ωc]
3. Bandpass Filter
A bandpass filter is made up of two stopbands and one passband so that the lower and higher
frequencies of the input signal are attenuated while the intervening
frequencies are passed.
[Figure: ideal bandpass magnitude response |H(ω)|, passband ω1 ≤ |ω| ≤ ω2]
4. Bandstop Filter
A bandstop filter is made up of two passbands and one stopband so that the lower and
higher frequencies of the input signal are passed while the intervening frequencies are
attenuated. An idealized bandstop filter frequency response has the following shape.
[Figure: ideal bandstop magnitude response |H(ω)|]
5. Multipass Filter
A multipass filter begins with a stopband followed by more than one passband. By default, a
multipass filter in Digital Filter Designer consists of three passbands and
four stopbands. The frequencies of the input signal at the stopbands are attenuated
while those at the passbands are passed.
6. Multistop Filter
A multistop filter begins with a passband followed by more than one stopband. By default, a
multistop filter in Digital Filter Designer consists of three passbands and two stopbands.
1. Ideal filters have a constant gain (usually taken as unity gain) passband
characteristic and zero gain in their stop band.
2. Ideal filters have a linear phase characteristic within their passband.
3. Ideal filters also have constant magnitude characteristic.
4. Ideal filters are physically unrealizable.
4. FIR systems have limited (finite) memory requirements. IIR systems require infinite memory.

5. FIR filters are always stable. For IIR filters stability cannot always be guaranteed.

6. FIR filters can have an exactly linear phase response, so no phase distortion is introduced in the signal by the filter. An IIR filter is usually a more efficient design in terms of computation time and memory requirements; IIR systems usually require less processing time and storage than FIR systems.

7. The effects of using a finite word length to implement the filter (noise and quantization errors) are less severe in FIR than in IIR. Analogue filters can be easily and readily transformed into equivalent IIR digital filters, but the same is not possible for FIR filters because they have no analogue counterpart.

8. FIR filters are all-zero filters. In IIR filters poles as well as zeros are present.

9. FIR filters are generally used if no phase distortion is desired. IIR filters are generally used if a sharp cutoff and high throughput are required.

Example: the system described by y(n) = 0.5 x(n) + 0.5 x(n-1) is an FIR filter with h(n) = {0.5, 0.5}. The system described by y(n) = a y(n-1) + x(n) is an IIR filter with h(n) = a^n u(n) for n ≥ 0.
The convolution of h(n) and x(n) for FIR systems can be written as

y(n) = Σ_{k=0}^{M-1} h(k) x(n-k)        ...(1)

Implementation of the direct form structure of the FIR filter is based upon the above equation.
[Figure: direct form (tapped delay line) realization of an FIR filter - the delayed inputs are multiplied by h(0), h(1), ..., h(M-1) and the products are summed to give y(n).]
1) There are M-1 unit delay blocks. One unit delay block requires one memory location.
Hence direct form structure requires M-1 memory locations.
2) The multiplication of h(k) and x(n-k) is performed for 0 to M-1 terms. Hence M
multiplications and M-1 additions are required.
3) Direct form structure is often called as transversal or tapped delay line filter.
In cascade form, stages are cascaded (connected) in series. The output of one system is the input to another. Thus a total of K stages are cascaded. The total system function H is given by

H(z) = H1(z) H2(z) ... HK(z) = Π_{k=1}^{K} Hk(z)        ...(3)

Each H1(z), H2(z), etc. is a second order section and is realized in direct form as shown in the figure below.

[Figure: direct form realization of one FIR second-order section with coefficients bk0, bk1, bk2.]
2.4 STRUCTURES FOR IIR SYSTEMS

The system function is given as

H(z) = [Σ_{k=0}^{M} bk z^{-k}] / [1 + Σ_{k=1}^{N} ak z^{-k}]        ...(2)

Here H1(z) = Σ_{k=0}^{M} bk z^{-k} and H2(z) = 1 / (1 + Σ_{k=1}^{N} ak z^{-k}).

The overall IIR system can be realized as a cascade of the two functions H1(z) and H2(z). Here H1(z) contains all the zeros of H(z) and H2(z) contains all the poles of H(z).
DIRECT FORM - I
FIG - DIRECT FORM I REALIZATION OF IIR SYSTEM
2. There are M+N-1 unit delay blocks. One unit delay block requires one memory location.
Hence direct form structure requires M+N-1 memory locations.
DIRECT FORM - II
2. Two delay elements of all pole and all zero system can be merged into single delay element.
3. Direct Form II structure has reduced memory requirement compared to Direct form I
structure. Hence it is called canonic form.
[Figure: Direct Form II (canonic) realization of an IIR system - a single shared delay line with feedback coefficients -a1, ..., -aN and feedforward coefficients b0, b1, ..., bN.]
In cascade form, stages are cascaded (connected) in series. The output of one system is the input to another. Thus a total of K stages are cascaded. The total system function H is given by

H(z) = H1(z) H2(z) ... HK(z) = Π_{k=1}^{K} Hk(z)        ...(3)

Each H1(z), H2(z), etc. is a second order section, realized in direct form as shown in the figure below, where

Hk(z) = (bk0 + bk1 z^{-1} + bk2 z^{-2}) / (1 + ak1 z^{-1} + ak2 z^{-2})        ...(2)
FIG - DIRECT FORM REALIZATION OF IIR SECOND ORDER SYSTEM (CASCADE)
[Figure: parallel form realization of an IIR system - the sections H1(z), H2(z), ..., HK(z) share the input x(n) and their outputs are summed to give y(n).]
1. IMPULSE INVARIANCE
2. BILINEAR TRANSFORMATION
3. BUTTERWORTH APPROXIMATION
The Impulse Invariance Method is the simplest method used for designing IIR filters. Important features of this method are:

1. In the impulse invariance method, analog filters are converted into digital filters just by replacing the unit sample response of the digital filter with the sampled version of the impulse response of the analog filter. The sampled signal is obtained by putting t = nT, hence

h(n) = ha(nT),   n = 0, 1, 2, ...

where h(n) is the unit sample response of the digital filter and T is the sampling interval.

2. The main disadvantage of this method is that it does not correspond to a simple algebraic mapping of the s plane to the z plane. The mapping from analog frequency to digital frequency is many to one: the segments (2k-1)π/T ≤ Ω ≤ (2k+1)π/T of the jΩ axis are all mapped onto the unit circle. This takes place because of sampling.

3. Frequency aliasing is the second disadvantage of this method. Because of frequency aliasing, the frequency response of the resulting digital filter will not be identical to the original analog frequency response.

4. Because of these factors, its application is limited to the design of low frequency filters such as LPFs or a limited class of band pass filters.
Z is represented as re^{jω} in polar form, and the relationship between the z plane and the s plane is given as z = e^{sT}, where s = σ + jΩ.

z = e^{sT}   (relationship between the z plane and the s plane)
z = e^{(σ + jΩ)T} = e^{σT} · e^{jΩT}

Comparing this with the polar form we have

r = e^{σT}   and   ω = ΩT

Here we have three conditions:
1) If σ = 0 then r = 1
2) If σ < 0 then 0 < r < 1
3) If σ > 0 then r > 1

Thus
1) The left half of the s plane is mapped inside the unit circle.
2) The right half of the s plane is mapped outside the unit circle.
3) The jΩ axis of the s plane is mapped onto the unit circle.
where pk are the poles of the analog filter and Ck are the coefficients of the partial fraction expansion

Ha(s) = Σ_{k=1}^{n} Ck / (s - pk)        ...(1)

The impulse response ha(t) of the analog filter is obtained by the inverse Laplace transform and is given as

ha(t) = Σ_{k=1}^{n} Ck e^{pk t}        ...(2)

The unit sample response of the digital filter is obtained by uniform sampling of ha(t), h(n) = ha(nT), n = 0, 1, 2, ...:

h(n) = Σ_{k=1}^{n} Ck e^{pk nT}        ...(3)

H(z) = Σ_{k=1}^{N} Ck Σ_{n=0}^{∞} (e^{pk T} z^{-1})^n        ...(4)

Using the standard geometric series relation and comparing equations (1) and (4), the system function of the digital filter follows from the mapping

1/(s - pk)  →  1/(1 - e^{pk T} z^{-1})

2.  (s + a) / [(s + a)² + b²]  →  (1 - e^{-aT} cos(bT) z^{-1}) / (1 - 2e^{-aT} cos(bT) z^{-1} + e^{-2aT} z^{-2})

3.  b / [(s + a)² + b²]  →  (e^{-aT} sin(bT) z^{-1}) / (1 - 2e^{-aT} cos(bT) z^{-1} + e^{-2aT} z^{-2})
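A minimal sketch of this mapping, assuming NumPy/SciPy are available: it expands Ha(s) into partial fractions with scipy.signal.residue and maps each analog pole pk to the digital pole e^{pk T}, following the h(n) = ha(nT) convention used here (some texts additionally scale the result by T).

```python
import numpy as np
from scipy import signal

def impulse_invariant_poles(b, a, T):
    # Expand Ha(s) into partial fractions: Ha(s) = sum_k Ck / (s - pk)
    Ck, pk, _ = signal.residue(b, a)
    # Each analog pole pk maps to the digital pole exp(pk*T); the residues carry
    # over, so H(z) = sum_k Ck / (1 - exp(pk*T) z^-1)
    return Ck, np.exp(pk * T)

# Example: Ha(s) = 1 / (s + 2), sampled with T = 0.1 s
b, a, T = [1.0], [1.0, 2.0], 0.1
Ck, zk = impulse_invariant_poles(b, a, T)
print(Ck, zk)          # residue 1, digital pole exp(-0.2) ~ 0.8187

# Check: the digital impulse response equals the sampled analog one, exp(-2*n*T)
n = np.arange(5)
h_digital = sum(C * z**n for C, z in zip(Ck, zk))
print(np.allclose(h_digital.real, np.exp(-2 * n * T)))   # True
```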
The method of filter design by impulse invariance suffers from aliasing. To overcome this drawback, the bilinear transformation method is used. In the analog domain the frequency axis is an infinitely long straight line, while in the sampled-data z plane it is the unit circle. The bilinear transformation is a method of squashing the infinite straight analog frequency axis so that it becomes finite.
1. Bilinear transformation method (BZT) is a mapping from analog S plane to digital Z plane.
This conversion maps analog poles to digital poles and analog zeros to digital zeros. Thus all
poles and zeros are mapped.
5. But the main disadvantage of frequency warping is that it does change the shape of the desired
filter frequency response. In particular, it changes the shape of the transition bands.
Z is represented as re^{jω} in polar form, and the relationship between the z plane and the s plane in the BZT method is given as

s = (2/T) · (z - 1)/(z + 1)

s = (2/T) · (re^{jω} - 1)/(re^{jω} + 1)

s = (2/T) · [r(cos ω + j sin ω) - 1]/[r(cos ω + j sin ω) + 1]

Separating s = σ + jΩ into real and imaginary parts gives

σ = (2/T) · (r² - 1)/(1 + r² + 2r cos ω)

Ω = (2/T) · (2r sin ω)/(1 + r² + 2r cos ω)

When r = 1, σ = 0 and

Ω = (2/T) · sin ω/(1 + cos ω) = (2/T) tan(ω/2)

ω = 2 tan^{-1}(ΩT/2)
The above equations show that in the BZT the frequency relationship ω = 2 tan^{-1}(ΩT/2) is non-linear (frequency warping), as shown in the plot of ω versus Ω.
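A short numerical sketch of the BZT with prewarping, assuming NumPy/SciPy are available; the 1 kHz cutoff at 10⁴ samples/s mirrors the worked examples in this unit, and a first-order low pass prototype is used for illustration.

```python
import numpy as np
from scipy import signal

# Bilinear transformation of a first-order analog Butterworth LPF, Ha(s) = wc/(s + wc).
Fs = 1e4
wc = 2 * Fs * np.tan(np.pi * 1000 / Fs)     # prewarped analog cutoff, (2/T) tan(w/2)

b_analog, a_analog = [wc], [1.0, wc]        # Ha(s) = wc / (s + wc)
b_dig, a_dig = signal.bilinear(b_analog, a_analog, fs=Fs)
print(b_dig, a_dig)

# Check: the digital filter is about -3 dB at 1 kHz, thanks to the prewarping
w, h = signal.freqz(b_dig, a_dig, worN=[2 * np.pi * 1000 / Fs])
print(20 * np.log10(abs(h[0])))             # approximately -3 dB
```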
3. Impulse invariance is generally used only for low frequency designs, i.e. IIR LPFs and a limited class of bandpass filters. The BZT method is used for designing LPF, HPF and almost all types of bandpass and bandstop filters.

4. In impulse invariance the frequency relationship is linear. In the BZT the frequency relationship is non-linear; frequency warping or frequency compression occurs because of this non-linearity.

5. In impulse invariance all poles are mapped from the s plane to the z plane by the relationship zk = e^{pk T}, but the zeros in the two domains do not satisfy the same relationship. In the BZT all poles and zeros are mapped.
3. For N = 3:   LPF: 1/(s³ + 2s² + 2s + 1);   HPF: s³/(s³ + 2s² + 2s + 1)

Step 1. Prewarp the cutoff frequency: ωc* = (2/Ts) tan(ωc Ts/2)

Step 2. Find the frequency scaled analog transfer function by replacing s with s/ωc* in the normalized transfer function. Then apply the BZT, i.e. replace s by (z-1)/(z+1), and find the desired transfer function of the digital filter.
Example:
Q) Design a first order high pass Butterworth filter whose cutoff frequency is 1 kHz at a sampling frequency of 10⁴ samples/s. Use the BZT method.

H*(s) = s/(s + 0.325)

Step 4. Find the digital filter transfer function by replacing s with (z-1)/(z+1):

H(z) = (z - 1) / (1.325 z - 0.675)

Q) Design a second order low pass Butterworth filter whose cutoff frequency is 1 kHz at a sampling frequency of 10⁴ samples/s.
Q) First order low pass butterworth filter whose bandwidth is known to be 1 rad/sec . Use
BZT method to design digital filter of 20 Hz bandwidth at sampling frequency 60 sps.
Q) Second order low pass butterworth filter whose bandwidth is known to be 1 rad/sec . Use BZT
method to obtain the transfer function H(z) of a digital filter with a 3 dB cutoff frequency of
150 Hz and sampling frequency 1.28 kHz.
Q) The transfer function is given as (s² + 1)/(s² + s + 1). The function is for a notch filter with notch frequency 1 rad/sec. Design a digital notch filter with the following specifications:
(1) Notch Frequency= 60 Hz
(2) Sampling frequency = 960 sps.
The filter passes all frequencies below Ωc; this is called the passband of the filter. The filter blocks all frequencies above Ωc; this is called the stopband of the filter. Ωc is called the cutoff frequency or critical frequency.
No practical filter can provide the ideal characteristic. Hence approximations of the ideal characteristic are used; such approximations are standard and are used for filter design. Three such approximations are regularly used:
a) Butterworth Filter Approximation
b) Chebyshev Filter Approximation
c) Elliptic Filter Approximation
Butterworth filters are defined by the property that the magnitude response is maximally flat in the passband. The squared magnitude function of an analog Butterworth filter is of the form

|Ha(Ω)|² = 1 / [1 + (Ω/Ωc)^{2N}]

where N indicates the order of the filter and Ωc is the cutoff frequency (-3 dB frequency).

At s = jΩ the magnitudes of H(s) and H(-s) are the same, hence

Ha(s) Ha(-s) = 1 / [1 + (-s²/Ωc²)^N]

To find the poles of H(s)H(-s), find the roots of the denominator in the above equation:

-s²/Ωc² = (-1)^{1/N} = e^{j(2k+1)π/N}

s² = -Ωc² e^{j(2k+1)π/N}

s = ± j Ωc [e^{j(2k+1)π/N}]^{1/2} = ± j Ωc e^{j(2k+1)π/2N}

As e^{jπ/2} = j,

Pk = ± Ωc e^{jπ/2} e^{j(2k+1)π/2N}

Pk = ± Ωc e^{j(N+2k+1)π/2N}        ...(1)

This equation gives the pole positions of H(s) and H(-s).
The frequency response characteristic of |Ha(Ω)|² is as shown. As the order of the filter N increases, the Butterworth filter characteristic comes closer to the ideal characteristic. Thus at higher orders, such as N = 16, the Butterworth characteristic closely approximates the ideal filter characteristic; an infinite order filter (N → ∞) would be required to obtain the ideal characteristic exactly.

[Figure: |Ha(Ω)|² for increasing filter order (N = 2, 6, 18), and the magnitude specification - passband level Ap at Ωp, stopband level As at Ωs, with cutoff Ωc.]
At the passband edge Ωp and the stopband edge Ωs the specifications require

1 / [1 + (Ωp/Ωc)^{2N}] ≥ Ap²   and   1 / [1 + (Ωs/Ωc)^{2N}] ≤ As²

To determine the poles and the order of the analog filter, consider these equalities. Then

(Ωs/Ωp)^{2N} = [(1/As²) - 1] / [(1/Ap²) - 1]

N = 0.5 · log{ [(1/As²) - 1] / [(1/Ap²) - 1] } / log(Ωs/Ωp)        ...(2)

If the attenuations are given in dB, As(dB) = -20 log As, so As = 10^{-As(dB)/20} and (As)^{-2} = 10^{0.1 As(dB)}; similarly (Ap)^{-2} = 10^{0.1 Ap(dB)}. Then

N = 0.5 · log{ [10^{0.1 As(dB)} - 1] / [10^{0.1 Ap(dB)} - 1] } / log(Ωs/Ωp)        ...(4)
Q) Design a digital low pass filter using the Butterworth approximation and impulse invariance.

Example specification:
Filter type - Low Pass Filter
Ap = 0.89125
As = 0.17783
Ωp = 0.2π
Ωs = 0.3π

N = 0.5 · log{ [(1/As²) - 1] / [(1/Ap²) - 1] } / log(Ωs/Ωp) = 5.88

Hence N = 6.

Ωc = Ωp / [(1/Ap²) - 1]^{1/2N}

Cutoff frequency Ωc = 0.7032.

The poles are Pk = ± Ωc e^{j(N+2k+1)π/2N}:

k = 0:   P0 = ± 0.7032 e^{j7π/12}  = ±(-0.182 + j0.679)
k = 1:   P1 = ± 0.7032 e^{j9π/12}  = ±(-0.497 + j0.497)
k = 2:   P2 = ± 0.7032 e^{j11π/12} = ±(-0.679 + j0.182)

For a stable filter all poles lying in the left half of the s plane are selected. Hence

Ha(s) = Ωc⁶ / [(s-s1)(s-s1*)(s-s2)(s-s2*)(s-s3)(s-s3*)]

Ha(s) = (0.7032)⁶ / [(s+0.182-j0.679)(s+0.182+j0.679)(s+0.497-j0.497)(s+0.497+j0.497)(s+0.679-j0.182)(s+0.679+j0.182)]

Ha(s) = 0.1209 / {[(s+0.182)² + (0.679)²][(s+0.497)² + (0.497)²][(s+0.679)² + (0.182)²]}
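A short Python check of the order and pole locations computed above (NumPy assumed):

```python
import numpy as np

# Verify the Butterworth design: Ap = 0.89125 at wp = 0.2*pi, As = 0.17783 at ws = 0.3*pi.
Ap, As = 0.89125, 0.17783
wp, ws = 0.2 * np.pi, 0.3 * np.pi

N = 0.5 * np.log10(((1 / As**2) - 1) / ((1 / Ap**2) - 1)) / np.log10(ws / wp)
N = int(np.ceil(N))                                   # 5.88 -> 6
wc = wp / (((1 / Ap**2) - 1) ** (1 / (2 * N)))        # ~0.7032

k = np.arange(N)
poles = wc * np.exp(1j * (N + 2 * k + 1) * np.pi / (2 * N))   # left-half-plane poles
print(N, round(wc, 4))
print(np.round(poles, 3))   # -0.182+0.679j, -0.497+0.497j, -0.679+0.182j and conjugates
```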
Q) Design a second order low pass Butterworth filter whose cutoff frequency is 1 kHz at a sampling frequency of 10⁴ samples/s. Use BZT and the Butterworth approximation.

When the cutoff frequency Ωc of the low pass filter is equal to 1 rad/sec it is called a normalized filter.
Frequency transformation techniques are used to generate High pass filter, Bandpass and bandstop
filter from the lowpass filter system function.
2.11.1 FREQUENCY TRANSFORMATION (ANALOG FILTER)

1. Low Pass to Low Pass:   s → s/Ωlp,   where Ωlp is the passband edge frequency of the new LPF.

2. Low Pass to High Pass:   s → Ωhp/s,   where Ωhp is the passband edge frequency of the HPF.

3. Low Pass to Band Pass:   s → (s² + Ωl Ωh) / [s(Ωh - Ωl)],   where Ωh and Ωl are the higher and lower band edge frequencies.

4. Low Pass to Band Stop:   s → s(Ωh - Ωl) / (s² + Ωh Ωl),   where Ωh and Ωl are the higher and lower band edge frequencies.
2.11.2 FREQUENCY TRANSFORMATION (DIGITAL FILTER)

4. Low Pass to Band Stop:   z⁻¹ → (z⁻² - a1 z⁻¹ + a2) / (a2 z⁻² - a1 z⁻¹ + 1)
Example:
Q) Design high pass butterworth filter whose cutoff frequency is 30 Hz at sampling frequency
of 150 Hz. Use BZT and Frequency transformation.
Step 4. Find the digital filter transfer function by replacing s with (z-1)/(z+1):

H(z) = (z - 1) / (1.7265 z - 0.2735)
Q) Design second order band pass butterworth filter whose passband of 200 Hz and 300
Hz and sampling frequency is 2000 Hz. Use BZT and Frequency transformation.
Q) Design second order band pass butterworth filter which meet following specification
Lower cutoff frequency = 210 Hz
Upper cutoff frequency = 330 Hz
Sampling Frequency = 960 sps
Use BZT and Frequency transformation.
GLOSSARY:
System Design:
Usually, in the IIR Filter design, Analog filter is designed, then it is transformed to a digital filter the
conversion of Analog to Digital filter involves mapping of desired digital filter specifications into equivalent
analog filter.
Warping Effect:
At low frequencies the analog frequency Ω and the digital frequency ω are approximately linearly related, but at high frequencies the relation between ω and Ω becomes non-linear. This introduces distortion in the frequency axis of the digital filter relative to the analog filter. The amplitude and phase responses are affected by this warping effect.
Prewarping:
The Warping Effect is eliminated by prewarping of the analog filter. The analog frequencies are
prewarped and then applied to the transformation.
Infinite Impulse Response (IIR) filters are a type of digital filter with an infinite impulse response. This type of filter is designed from analog filters; the analog filters are then transformed into the digital domain.

In the bilinear transformation method, the transformation of filters from analog to digital is carried out in such a way that the mapping between the analog and digital frequency axes is one-to-one, so no aliasing occurs.
Filter:
A filter is one which passes the required band of signals and stops the other unwanted band of frequencies.
Pass band:
The Band of frequencies which is passed through the filter is termed as passband.
Stopband:
The band of frequencies which are stopped are termed as stop band.
UNIT III
FIR FILTER DESIGN
PREREQUISITE DISCUSSION:
The FIR Filters can be easily designed to have perfectly linear Phase. These filters can be realized
recursively and Non-recursively. There are greater flexibility to control the Shape of their Magnitude
response. Errors due to round off noise are less severe in FIR Filters, mainly because Feed back is not
used.
1. An FIR filter always provides a linear phase response. This means that signals in the passband suffer no dispersion; hence, when the user wants no phase distortion, FIR filters are preferable over IIR. Phase distortion always degrades system performance. In applications such as speech processing and data transmission over long distances, FIR filters are preferred because of this characteristic.
2. FIR filters are inherently stable compared with IIR filters, due to their non-feedback (non-recursive) nature.
3. Quantization noise can be made negligible in FIR filters, and sharp cutoff FIR filters can be designed easily.
4. A disadvantage of FIR filters is that they need a higher order than IIR filters to obtain a similar magnitude response.
A system is stable only if it produces a bounded output for every bounded input; this is the stability definition for any system.

Here the impulse response h(n) = {b0, b1, b2, ...} of the FIR filter is finite, so y(n) is bounded whenever the input x(n) is bounded. This means an FIR system produces a bounded output for every bounded input; hence FIR systems are always stable.
The various methods used for FIR filter design are as follows:
1. Fourier Series method
2. Windowing Method
3. DFT method
4. Frequency sampling Method. (IFT Method)
Consider the ideal LPF frequency response as shown in Fig 1, with a normalized angular cutoff frequency ωc.

1. In the Fourier series method, the limits of the summation index are -∞ to ∞, but the filter must have a finite number of terms. Hence the limits of the summation index are changed to -Q to Q, where Q is some finite integer. This type of truncation may result in poor convergence of the series: abrupt truncation of an infinite series is equivalent to multiplying the infinite series by a rectangular sequence, so at points of discontinuity some oscillation may be observed in the resulting response.

3. This oscillation or ringing is generated because of the side lobes in the frequency response W(ω) of the window function. This oscillatory behavior is called the "Gibbs Phenomenon".
Truncated response and ringing effect is as shown in fig 3.
Rectangular Window: This is the most basic of windowing methods. It does not require any operations because its values are either 1 or 0. It creates an abrupt discontinuity that results in sharp roll-offs but large ripples.

w(n) = 1   for 0 ≤ n ≤ N
     = 0   otherwise
Triangular Window: The computational simplicity of this window, a simple convolution of two
rectangle windows, and the lower sidelobes make it a viable alternative to the rectangular window.
Kaiser Window: This windowing method is designed to generate a sharp central peak. It has reduced
side lobes and transition band is also narrow. Thus commonly used in FIR filter design.
Hamming Window: This windowing method generates a moderately sharp central peak. Its ability to
generate a maximally flat response makes it convenient for speech processing filtering.
Hanning Window: This windowing method generates a maximum flat filter design.
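A minimal window-method FIR design sketch in Python (NumPy assumed); the filter length, cutoff and choice of Hamming window are illustrative assumptions:

```python
import numpy as np

N = 21                         # number of taps (filter length), assumed
wc = 0.4 * np.pi               # cutoff frequency in rad/sample, assumed
n = np.arange(N) - (N - 1) / 2 # center the ideal impulse response for linear phase

# Ideal LPF impulse response h_d(n) = sin(wc*n)/(pi*n); np.sinc handles n = 0
hd = (wc / np.pi) * np.sinc(wc * n / np.pi)

window = np.hamming(N)         # taper that suppresses the Gibbs sidelobe ringing
h = hd * window                # windowed (truncated) impulse response

# Crude check of the magnitude response at DC and at the band edge
w = np.array([0.0, wc])
H = np.array([np.sum(h * np.exp(-1j * wk * np.arange(N))) for wk in w])
print(np.round(np.abs(H), 3))  # close to 1 at DC and roughly 0.5 at the cutoff
```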
3.4 DESIGNING FILTER FROM POLE ZERO PLACEMENT
Filters can be designed from its pole zero plot. Following two constraints should be imposed
while designing the filters.
1. All poles should be placed inside the unit circle in order for the filter to be stable. However, zeros can be placed anywhere in the z plane. FIR filters are all-zero filters, hence they are always stable. IIR filters are stable only when all poles of the filter are inside the unit circle.

2. All complex poles and zeros must occur in complex conjugate pairs in order for the filter coefficients to be real.

In the design of low pass filters, the poles should be placed near the unit circle at points corresponding to low frequencies (near ω = 0), and zeros should be placed near or on the unit circle at points corresponding to high frequencies (near ω = π). The opposite is true for high pass filters.
A notch filter is a filter that contains one or more deep notches or, ideally, perfect nulls in its frequency response characteristic. Notch filters are useful in many applications where specific frequency components must be eliminated. For example, instrumentation and recording systems require that the power-line frequency of 60 Hz and its harmonics be eliminated.

To create a null in the frequency response of a filter at a frequency ω0, simply introduce a pair of complex-conjugate zeros on the unit circle at the angle ω0.
Comb filters are similar to notch filters except that the nulls occur periodically across the frequency band, like periodically spaced teeth. The frequency response characteristic |H(ω)| of a notch filter is as shown.

[Figure: magnitude response of a notch filter with a null at ω0.]
A digital resonator is a special two pole bandpass filter with a pair of complex conjugate poles
located near the unit circle. The name resonator refers to the fact that the filter has a larger magnitude
response in the vicinity of the pole locations. Digital resonators are useful in many applications,
including simple bandpass filtering and speech generations.
Ideal filters are not physically realizable because they are non-causal, and only causal systems are physically realizable.
Proof:
Take the example of an ideal lowpass filter:

H(ω) = 1   for -ωc ≤ ω ≤ ωc
     = 0   elsewhere

The unit sample response of this ideal LPF is obtained by taking the IFT of H(ω):

h(n) = (1/2π) ∫_{-π}^{π} H(ω) e^{jωn} dω        ...(1)

h(n) = (1/2π) ∫_{-ωc}^{ωc} e^{jωn} dω        ...(2)

h(n) = (1/2π) [e^{jωn}/(jn)] evaluated from -ωc to ωc = (1/2πjn)[e^{jωc n} - e^{-jωc n}]

i.e.

h(n) = sin(ωc n)/(πn)   for n ≠ 0
     = ωc/π             for n = 0

An LSI system is causal if its unit sample response satisfies h(n) = 0 for n < 0. In the above result h(n) extends from -∞ to ∞, so h(n) ≠ 0 for n < 0. This means the causality condition is not satisfied by the ideal low pass filter. Hence the ideal low pass filter is non-causal and it is not physically realizable.
The order of a digital filter can be defined as the number of previous inputs (stored in the processor's
memory) used to calculate the current output.
This is illustrated by the filters given as examples in the previous section.
Example (1): yn = xn
This is a zero order filter, since the current output yn depends only on the current
input xn and not on any previous inputs.
Q) For each of the following filters, state the order of the filter and identify the values of its
coefficients:
(a) yn = 2xn - xn-1 A) Order = 1: a0 = 2, a1 = -1
(b) yn = xn-2 B) Order = 2: a0 = 0, a1 = 0, a2 = 1
(c) yn = xn - 2xn-1 + 2xn-2 + xn-3 C) Order = 3: a0 = 1, a1 = -2, a2 = 2, a3 = 1
GLOSSARY:
FIR Filters:
Symmetric FIR filters have their impulse response occurring as a mirror image in the first and second quadrants, or the third and fourth quadrants, or both.

Antisymmetric FIR filters have their impulse response occurring as a mirror image in the first and third quadrants, or the second and fourth quadrants, or both.
Linear Phase:
An FIR filter is said to have linear phase if its phase response is a linear function of frequency; this occurs when the impulse response is symmetric or antisymmetric about its midpoint.
Frequency Response:
The Frequency response of the Filter is the relationship between the angular
frequency and the Gain of the Filter.
Gibbs Phenomenon:
The abrupt truncation of Fourier series results in oscillation in both passband
and stop band. These oscillations are due to the slow convergence of the fourier
series. This is termed as Gibbs Phenomenon.
Windowing Technique:
To avoid the oscillations instead of truncating the fourier co-efficients we
are multiplying the fourier series with a finite weighing sequence called a window
which has non-zero values at the required interval and zero values for other
Elements.
UNIT IV
FINITE WORDLENGTH EFFECTS
4.1 Number Representation
In digital signal processing, (B + 1)-bit fixed-point numbers are usually represented as two's-complement signed fractions in the format

b0 . b-1 b-2 ... b-B

The number represented is then

X = -b0 + b-1 2^-1 + b-2 2^-2 + ... + b-B 2^-B        (3.1)

where b0 is the sign bit and the number range is -1 ≤ X < 1. The advantage of this representation is that the product of two numbers in the range from -1 to 1 is another number in the same range.

Floating-point numbers are represented as

X = (-1)^s m 2^c        (3.2)

where s is the sign bit, m is the mantissa, and c is the characteristic or exponent. To make the representation of a number unique, the mantissa is normalized so that 0.5 ≤ m < 1.

Although floating-point numbers are always represented in the form of (3.2), the way in which this representation is actually stored in a machine may differ. Since m ≥ 0.5, it is not necessary to store the 2^-1-weight bit of m, which is always set. Therefore, in practice numbers are usually stored as

X = (-1)^s (0.5 + f) 2^c        (3.3)

where f is an unsigned fraction, 0 ≤ f < 0.5.

Most floating-point processors now use the IEEE Standard 754 32-bit floating-point format for storing numbers. According to this standard the exponent is stored as an unsigned integer p where

p = c + 126        (3.4)

Therefore, a number is stored as

X = (-1)^s (0.5 + f) 2^(p - 126)        (3.5)

where s is the sign bit, f is a 23-bit unsigned fraction in the range 0 ≤ f < 0.5, and p is an 8-bit unsigned integer in the range 0 < p < 255. The total number of bits is 1 + 23 + 8 = 32. For example, in IEEE format 3/4 is written (-1)^0 (0.5 + 0.25) 2^0, so s = 0, p = 126, and f = 0.25. The value X = 0 is a unique case and is represented by all bits zero (i.e., s = 0, f = 0, and p = 0). Although the 2^-1-weight mantissa bit is not actually stored, it does exist, so the mantissa effectively has 24 bits plus a sign bit.
In fixed-point arithmetic, quantization must be performed either after each multiply or after all products have been summed with double-length precision.
We will examine three types of fixed-point quantization: rounding, truncation, and magnitude truncation. If X is an exact value, then the rounded value will be denoted Qr(X), the truncated value Qt(X), and the magnitude truncated value Qmt(X). If the quantized value has B bits to the right of the decimal point, the quantization step size is

Δ = 2^-B        (3.6)

Since rounding selects the quantized value nearest the unquantized value, it gives a value which is never more than Δ/2 away from the exact value. If we denote the rounding error by

εr = Qr(X) - X        (3.7)

then

-Δ/2 ≤ εr ≤ Δ/2        (3.8)

Truncation simply discards the low-order bits, giving a quantized value that is always less than or equal to the exact value, so

-Δ < εt ≤ 0        (3.9)

Magnitude truncation chooses the nearest quantized value that has a magnitude less than or equal to the exact value, so

-Δ < εmt < Δ        (3.10)
The error resulting from quantization can be modeled as a random variable uniformly distributed over the appropriate error range. Therefore, calculations with roundoff error can be considered error-free calculations that have been corrupted by additive white noise. The mean of this noise for rounding is

m_εr = E{εr} = (1/Δ) ∫_{-Δ/2}^{Δ/2} εr dεr = 0        (3.11)

where E{·} represents the operation of taking the expected value of a random variable. Similarly, the variance of the noise for rounding is

σ²_εr = E{(εr - m_εr)²} = (1/Δ) ∫_{-Δ/2}^{Δ/2} (εr - m_εr)² dεr = Δ²/12        (3.12)

Likewise, for truncation,

m_εt = E{εt} = -Δ/2,   σ²_εt = E{(εt - m_εt)²} = Δ²/12        (3.13)

and, for magnitude truncation,

m_εmt = E{εmt} = 0,   σ²_εmt = E{(εmt - m_εmt)²} = Δ²/3        (3.14)
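A quick empirical check of these statistics in Python (NumPy assumed; the word length B and the input distribution are illustrative):

```python
import numpy as np

B = 8
delta = 2.0 ** -B
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 100_000)

err_round = np.round(x / delta) * delta - x   # rounding error
err_trunc = np.floor(x / delta) * delta - x   # truncation (toward -inf) error

print(err_round.mean(), err_round.var(), delta**2 / 12)   # mean ~ 0, var ~ delta^2/12
print(err_trunc.mean(), -delta / 2)                       # mean ~ -delta/2
```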
fl(X1 + X2) = (X1 + X2)(1 + ε)        (3.23)

where ε is zero-mean with the variance of (3.20).
Consider an FIR filter realized by the convolution summation

y(n) = Σ_{k=0}^{N-1} h(k) x(n - k)        (3.26)

When fixed-point arithmetic is used and quantization is performed after each multiply, the result of the N multiplies is N times the quantization noise of a single multiply. For example, rounding after each multiply gives, from (3.6) and (3.12), an output noise variance of

σ²o = N · 2^-2B / 12        (3.27)
Virtually all digital signal processor integrated circuits contain one or more double-length accumulator registers which permit the sum-of-products in (3.26) to be accumulated without quantization. In this case only a single quantization is necessary following the summation, and the output noise variance is simply

σ²o = 2^-2B / 12        (3.28)
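A small simulation sketch of the fixed-point FIR roundoff noise model of (3.27), assuming NumPy; the coefficients, word length and signal level are illustrative:

```python
import numpy as np

B = 12
delta = 2.0 ** -B

def q(v):
    # round to B fractional bits, modelling the quantizer after each multiply
    return np.round(v / delta) * delta

h = np.array([0.1, 0.2, 0.3, 0.2, 0.1])          # example FIR coefficients
rng = np.random.default_rng(2)
x = rng.uniform(-0.5, 0.5, 50_000)

N = len(h)
y_exact = np.convolve(x, h)[:len(x)]
y_quant = np.zeros_like(y_exact)
for k in range(N):
    xk = np.concatenate([np.zeros(k), x[:len(x) - k]])   # x delayed by k samples
    y_quant += q(h[k] * xk)                              # quantize each product

e = y_quant - y_exact
print(e.var(), N * delta**2 / 12)                # measured vs predicted noise variance
```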
For the floating-point roundoff noise case we will consider (3.26) for N = 4 and then generalize the result to other values of N. The finite-precision output can be written as the exact output plus an error term e(n). Thus,

y(n) + e(n) = ({[h(0)x(n)[1 + ε1(n)] + h(1)x(n-1)[1 + ε2(n)]][1 + ε3(n)]
             + h(2)x(n-2)[1 + ε4(n)]}[1 + ε5(n)]
             + h(3)x(n-3)[1 + ε6(n)])[1 + ε7(n)]        (3.29)

In (3.29), ε1(n) represents the error in the first product, ε2(n) the error in the second product, ε3(n) the error in the first addition, etc. Notice that it has been assumed that the products are summed in the order implied by the summation of (3.26).

Expanding (3.29), ignoring products of error terms, and recognizing y(n) gives

e(n) = h(0)x(n)[ε1(n) + ε3(n) + ε5(n) + ε7(n)]
     + h(1)x(n-1)[ε2(n) + ε3(n) + ε5(n) + ε7(n)]
     + h(2)x(n-2)[ε4(n) + ε5(n) + ε7(n)]
     + h(3)x(n-3)[ε6(n) + ε7(n)]        (3.30)

Assuming that the input is white noise of variance σ²x, so that E{x(n)x(n-k)} is zero for k ≠ 0, and assuming that the errors are uncorrelated,

E{e²(n)} = [4h²(0) + 4h²(1) + 3h²(2) + 2h²(3)] σ²x σ²        (3.31)

In general, for any N,

σ²o = E{e²(n)} = [N h²(0) + Σ_{k=1}^{N-1} (N + 1 - k) h²(k)] σ²x σ²        (3.32)

where σ² is the variance of a single floating-point roundoff error.
Notice that if the order of summation of the product terms in the convolution
summation is changed, then the order in which the h(k)s appear in (3.32)
changes. If the order is changed so that the h(k)with smallest magnitude is first,
followed by the next smallest, etc., then the roundoff noise variance is
minimized. However, performing the convolution summation in nonsequential
order greatly complicates data indexing and so may not be worth the reduction
obtained in roundoff noise.
where e(n) is a random roundoff noise sequence. Since e(n) is injected at the same point as the input, it propagates through a system with impulse response h(n). Therefore, for fixed-point arithmetic with rounding, the output roundoff noise variance from (3.6), (3.12), (3.25), and (3.33) is

σ²o = (Δ²/12) Σ_{n=0}^{∞} h²(n) = (2^-2B/12) Σ_{n=0}^{∞} a^{2n} = (2^-2B/12) · 1/(1 - a²)        (3.36)
With fixed-point arithmetic there is the possibility of overflow following
addition. To avoid overflow it is necessary to restrict the input signal amplitude.
This can be accomplished by either placing a scaling multiplier at the filter
input or by simply limiting the maximum input signal amplitude. Consider the
case of the first-order filter of (3.34). The transfer function of this filter is
H(e^{jω}) = Y(e^{jω}) / X(e^{jω}) = 1 / (e^{jω} - a)        (3.37)

so

|H(e^{jω})|² = 1 / (1 + a² - 2a cos ω)        (3.38)

and

|H(e^{jω})|max = 1 / (1 - |a|)        (3.39)

The peak gain of the filter is 1/(1 - |a|), so limiting input signal amplitudes to |x(n)| ≤ 1 - |a| will make overflows unlikely.
An expression for the output roundoff noise-to-signal ratio can easily be obtained for the case where the filter input is white noise, uniformly distributed over the interval from -(1 - |a|) to (1 - |a|) [4,5]. In this case

σ²x = [1/(2(1 - |a|))] ∫_{-(1-|a|)}^{(1-|a|)} x² dx = (1 - |a|)²/3        (3.40)

so, from (3.25),

σ²y = (1/3) · (1 - |a|)²/(1 - a²)        (3.41)

Combining (3.36) and (3.41) then gives

σ²o/σ²y = [(2^-2B/12) · 1/(1 - a²)] · [3(1 - a²)/(1 - |a|)²] = 3 · 2^-2B / [12 (1 - |a|)²]        (3.42)

Notice that the noise-to-signal ratio increases without bound as |a| → 1.
Similar results can be obtained for the case of the causal second-order filter realized by the difference equation

y(n) = 2r cos(θ) y(n-1) - r² y(n-2) + x(n)        (3.43)

This filter has complex-conjugate poles at r e^{±jθ} and impulse response

h(n) = [r^n sin((n+1)θ) / sin(θ)] u(n)        (3.44)

Due to roundoff error, the output actually obtained is

ŷ(n) = 2r cos(θ) ŷ(n-1) - r² ŷ(n-2) + x(n) + e(n)        (3.45)
There are two noise sources contributing to e(n) if quantization is performed after each multiply, and there is one noise source if quantization is performed after summation. Since

Σ_{n=0}^{∞} h²(n) = [1/(1 - r²)] · (1 + r²) / [(1 + r²)² - 4r² cos²(θ)]        (3.46)

the output roundoff noise is

σ²o = ν · (2^-2B/12) · [1/(1 - r²)] · (1 + r²) / [(1 + r²)² - 4r² cos²(θ)]        (3.47)

where ν = 1 for quantization after summation, and ν = 2 for quantization after each multiply. To obtain an output noise-to-signal ratio we note that

H(e^{jω}) = 1 / (1 - 2r cos(θ) e^{-jω} + r² e^{-j2ω})        (3.48)

and, using the approach of [6], |H(e^{jω})|²max can be written in closed form in terms of r, θ and the saturation function        (3.49)

sat(μ) = 1 for μ > 1;   μ for -1 ≤ μ ≤ 1;   -1 for μ < -1        (3.50)

Following the same approach as for the first-order case then gives the output roundoff noise-to-signal ratio σ²o/σ²y of the second-order filter        (3.51)

Figure 3.1 is a contour plot showing the noise-to-signal ratio of (3.51) for ν = 1 in units of the noise variance of a single quantization, 2^-2B/12. The plot is symmetrical about θ = 90°, so only the range from 0° to 90° is shown. Notice that as r → 1 the roundoff noise increases without bound, and that the noise also increases as θ → 0°.
It is possible to design state-space filter realizations that minimize fixed-point
roundoff noise [7] - [10]. Depending on the transfer function being realized, these
structures may provide a roundoff noise level that is orders-of-magnitude lower
than for a nonoptimal realization. The price paid for this reduction in roundoff
noise is an increase in the number of computations required to implement the
filter. For an Nth-order filter the increase is from roughly 2N multiplies for a direct form realization to roughly (N + 1)² for an optimal realization. However,
if the filter is realized by the parallel or cascade connection of first- and second-
order optimal subfilters, the increase is only to about 4N multiplies. Furthermore,
near-optimal realizations exist that increase the number of multiplies to only
about 3N [10].
[Figure 3.1: contour plot of the normalized fixed-point roundoff noise variance as a function of r (0.01 to 0.99) and θ.]
Notice that while the input is zero except for the first sample, the output oscillates
with amplitude 1/8 and period 6.
Limit cycles are primarily of concern in fixed-point recursive filters. As long as
floating-point filters are realized as the parallel or cascade connection of first- and
second-order subfilters, limit cycles will generally not be a problem since limit
cycles are practically not observable in first- and second-order systems
implemented with 32-b floating-point arithmetic [12]. It has been shown that such
systems must have an extremely small margin of stability for limit cycles to exist
at anything other than underflow levels, which are at an amplitude of less than
10^(−38) [12]. There are at least three ways of dealing with limit cycles when fixed-
point arithmetic is used. One is to determine a bound on the maximum limit cycle
amplitude, expressed as an integral number of quantization steps [13]. It is then
possible to choose a word length that makes the limit cycle amplitude acceptably
low. Alternately, limit cycles can be prevented by randomly rounding calculations
up or down [14]. However, this approach is complicated to implement. The third
approach is to properly choose the filter realization structure and then quantize
the filter calculations using magnitude truncation [15,16]. This approach has the
disadvantage of producing more roundoff noise than truncation or rounding [see
(3.12)-(3.14)].
The limit cycle referred to above is produced, for example, by the rounded second-order section
y(n) = Qr{ (3/4) y(n−1) − (5/8) y(n−2) + x(n) }    (3.72)
with the input
x(n) = (3/4) δ(n) − (5/8) δ(n−1)    (3.73)
where Qr{·} denotes rounding of the computed sum.
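To see how such zero-input limit cycles arise, the following sketch simulates a simple first-order fixed-point section (an illustrative example, not the specific filter of (3.72)) with rounding to 3 fractional bits, and compares it with magnitude truncation, which lets the output decay as described above.

```python
# Minimal sketch: zero-input limit cycle in y(n) = Q{ a*y(n-1) } with B fractional bits.
import numpy as np

def q_round(v, B):
    return np.round(v * 2**B) / 2**B          # rounding to the nearest step 2^-B

def q_mag_trunc(v, B):
    return np.trunc(v * 2**B) / 2**B          # magnitude truncation

def first_order(a, y0, Q, B, nsamples=15):
    y, out = y0, []
    for _ in range(nsamples):                 # input is zero after n = 0
        y = Q(a * y, B)
        out.append(y)
    return out

print(first_order(-0.9, 0.875, q_round, 3))      # settles into a sustained +/-0.5 oscillation
print(first_order(-0.9, 0.875, q_mag_trunc, 3))  # decays to zero, no limit cycle
```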
One way to prevent overflow oscillations is to scale the filter calculations so as to render overflow impossible. However,
this may unacceptably restrict the filter dynamic range. Another method is to
force completed sums-of-products to saturate at ±1, rather than overflowing
[18,19]. It is important to saturate only the completed sum, since intermediate
overflows in two's complement arithmetic do not affect the accuracy of the final
result. Most fixed-point digital signal processors provide for automatic saturation
of completed sums if their saturation arithmetic feature is enabled. Yet
another way to avoid overflow oscillations is to use a filter structure for which
any internal filter transient is guaranteed to decay to zero [20]. Such structures are
desirable anyway, since they tend to have low roundoff noise and be insensitive
to coefficient quantization [21].
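The difference between wrapping and saturating a completed sum can be illustrated with a short sketch. The 4-bit (step 1/8) accumulator below is only an assumed toy format for illustration.

```python
# Sketch: two's complement wraparound versus saturation for a completed sum.
def wrap_q3(v):                     # two's complement overflow (wraps around)
    n = int(round(v * 8))
    n = ((n + 8) % 16) - 8          # keep the 4-bit two's complement range [-8, 7]
    return n / 8.0

def saturate_q3(v):                 # saturation arithmetic
    return max(-1.0, min(7 / 8.0, v))

s = 0.75 + 0.5                      # completed sum = 1.25, outside the number range
print(wrap_q3(s))                   # -0.75: a large-amplitude error that can sustain oscillations
print(saturate_q3(s))               # 0.875: clamped, the error stays small
```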
FIGURE: Realizable pole locations for the difference equation of (3.76).
The sparseness of realizable pole locations near z = 1 will result in a large coefficient
quantization error for poles in this region.
Figure 3.4 gives an alternative structure to (3.77) for realizing the transfer function of (3.76).
Notice that quantizing the coefficients of this structure corresponds to quantizing Xr and Xi.
As shown in Fig. 3.5 from [5], this results in a uniform grid of realizable pole locations.
Therefore, large coefficient quantization errors are avoided for all pole locations.
It is well established that filter structures with low roundoff noise tend to be robust to
coefficient quantization, and vice versa [22]-[24]. For this reason, the uniform grid
structure of Fig. 3.4 is also popular because of its low roundoff noise. Likewise, the low-
noise realizations of [7]- [10] can be expected to be relatively insensitive to coefficient
quantization, and digital wave filters and lattice filters that are derived from low-sensitivity
analog structures tend to have not only low coefficient sensitivity, but also low roundoff
noise [25,26].
It is well known that in a high-order polynomial with clustered roots, the root location is a
very sensitive function of the polynomial coefficients. Therefore, filter poles and zeros can
be much more accurately controlled if higher order filters are realized by breaking them up
into the parallel or cascade connection of first- and second-order subfilters. One exception
to this rule is the case of linear-phase FIR filters in which the symmetry of the polynomial
coefficients and the spacing of the filter zeros around the unit circle usually permits an
acceptable direct realization using the convolution summation.
Given a filter structure it is necessary to assign the ideal pole and zero locations to the
realizable locations. This is generally done by simply rounding or truncating the filter
coefficients to the available number of bits, or by assigning the ideal pole and zero locations
to the nearest realizable locations. A more complicated alternative is to consider the original
filter design problem as a problem in discrete
optimization, and choose the realizable pole and zero locations that give the best
approximation to the desired filter response [27]- [30].
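The pole sensitivity discussed above can be checked numerically. The sketch below compares quantizing the direct-form coefficients 2r·cos(θ) and r² with quantizing the real and imaginary parts Xr and Xi (the coupled-form idea of Fig. 3.4); the 5-bit step and the pole location are arbitrary illustrative choices.

```python
# Sketch: pole error caused by coefficient quantization, direct form vs. coupled form.
import numpy as np

def quant(v, bits=5):
    step = 2.0 ** (-bits)
    return np.round(np.asarray(v) / step) * step

r, theta = 0.99, 0.05                       # a pole close to z = 1
p = r * np.exp(1j * theta)

# (a) direct form: poles are the roots of z^2 - a1*z + a2
a1, a2 = quant(2 * r * np.cos(theta)), quant(r ** 2)
err_direct = np.min(np.abs(np.roots([1.0, -a1, a2]) - p))

# (b) coupled form: the pole is simply Xr + j*Xi
p_coupled = quant(r * np.cos(theta)) + 1j * quant(r * np.sin(theta))
err_coupled = abs(p_coupled - p)

print(err_direct, err_coupled)              # the direct-form error is typically much larger here
```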
A direct-form FIR realization, on the other hand, requires a quantization after every multiply and after every add in the
convolution summation. With 32-b floating-point arithmetic these quantizations introduce a
small enough error to be insignificant for many applications.
When realizing IIR filters, either a parallel or cascade connection of first- and second-order
subfilters is almost always preferable to a high-order direct-form realization. With the
availability of very low-cost floating-point digital signal processors, like the Texas
Instruments TMS320C32, it is highly recommended that floating-point arithmetic be used
for IIR filters. Floating-point arithmetic simultaneously eliminates most concerns regarding
scaling, limit cycles, and overflow oscillations. Regardless of the arithmetic employed, a
low roundoff noise structure should be used for the second- order sections. Good choices
are given in [2] and [10]. Recall that realizations with low fixed-point roundoff noise also
have low floating-point roundoff noise. The use of a low roundoff noise structure for the
second-order sections also tends to give a realization with low coefficient quantization
sensitivity. First-order sections are not as critical in determining the roundoff noise and
coefficient sensitivity of a realization, and so can generally be implemented with a simple
direct form structure.
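A minimal sketch of the recommended cascade-of-second-order-sections realization is given below; each biquad here is a plain direct form II section and the coefficients are made up purely for illustration (they are not the low-noise structures of [2] or [10]).

```python
# Sketch: cascade of second-order sections (biquads), each in direct form II.
import numpy as np

def biquad_df2(x, b, a):
    """One second-order section, b = (b0, b1, b2), a = (1, a1, a2)."""
    w1 = w2 = 0.0
    y = np.zeros(len(x))
    for n, xn in enumerate(x):
        w0 = xn - a[1] * w1 - a[2] * w2              # all-pole part
        y[n] = b[0] * w0 + b[1] * w1 + b[2] * w2     # all-zero part
        w2, w1 = w1, w0
    return y

def cascade(x, sections):
    for b, a in sections:                            # pass the signal through each biquad
        x = biquad_df2(x, b, a)
    return x

# two hypothetical sections, coefficients chosen only for illustration
sections = [((1.0, 2.0, 1.0), (1.0, -0.9, 0.2)),
            ((1.0, -1.0, 0.5), (1.0, -0.5, 0.25))]
y = cascade(np.ones(16), sections)
```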
GLOSSARY:
Quantization:
The total number of bits used to represent a value x is reduced by one of two methods, truncation or rounding. These are known as quantization processes.
Input Quantization Error:
The quantized signal is stored in a b-bit register, so nearby input values are represented by the same digital equivalent. The resulting error is termed the input quantization error.
Product Quantization Error:
The multiplication of a b-bit number with another b-bit number results in a 2b-bit product, but it must be stored in a b-bit register. The error introduced by requantizing the product is termed the product quantization error.
Co-efficient Quantization Error:
When the analog filter coefficients are quantized for the digital implementation, stable poles lying close to the jΩ axis may map to locations outside the unit circle, so the digital filter can become unstable. The error introduced by representing the coefficients with a finite number of bits is termed coefficient quantization error.
Limit Cycle Oscillations:
If the input is made zero the output should decay to zero, but because of quantization effects the system may instead keep oscillating within a certain band of values. These sustained oscillations are called limit cycles.
Overflow limit Cycle oscillations:
Overflow error occurs in addition when the sum of two numbers exceeds the available word length. To avoid overflow oscillations, saturation arithmetic is used.
Dead band:
The band of output amplitudes within which the filter output oscillates (or remains stuck) under zero input is termed the dead band of the filter. The output may settle to a fixed value in this band or oscillate between a positive and a negative value.
Signal scaling:
The inputs of a summer are scaled before the addition is executed, so that no overflow can occur after the addition. The scaling factor s0 is applied to the inputs to avoid overflow.
UNIT V
APPLICATIONS OF DSP
PRE REQUISITE DISCUSSION:
The time domain waveform is transformed to the frequency domain using a filter bank. The strength of
each frequency band is analyzed and quantized based on how much effect they have on the perceived
decompressed signal.
1. In speech recognition system using microphone one can input speech or voice. The analog
speech signal is converted to digital speech signal by speech digitizer. Such digital signal is
called digitized speech.
2. The digitized speech is processed by the DSP system. The significant features of speech, such
as its formants, energy and linear prediction coefficients, are extracted. The template of these
extracted features is compared with standard reference templates. The closest matching template is taken as the recognized
word.
3. Voice operated consumer products like TV, VCR, Radio, lights, fans and voice operated
telephone dialing are examples of DSP based speech recognized devices.
FIGURE: Speech synthesis model. An impulse train generator (for voiced sounds) or a random number generator (for unvoiced sounds) excites a time-varying digital filter, which produces the synthetic speech.
1. For voiced sound, pulse generator is selected as signal source while for unvoiced sounds
noise generator is selected as signal source.
2. The linear prediction coefficients are used as coefficients of digital filter. Depending upon these
coefficients , the signal is passed and filtered by the digital filter.
3. The low pass filter removes high frequency noise if any from the synthesized speech. Because
of linear phase characteristic FIR filters are mostly used as digital filters.
FIGURE: Linear-prediction speech synthesizer. A pulse generator controlled by the pitch period (voiced) or a white noise generator (unvoiced) drives a time-varying digital filter whose filter coefficients are the linear prediction coefficients, producing the synthetic speech.
2. The time domain waveform is transformed to the frequency domain using a filter bank. The
strength of each frequency band is analyzed and quantized based on how much effect they
have on the perceived decompressed signal.
3. The DSP processor is also used in digital video disk (DVD) which uses MPEG-2
compression, Web video content application like Intel Indeo, real audio.
4. Sound synthesis and manipulation, filtering, distortion, stretching effects are also done by DSP
processor. ADC and DAC are used in signal generation and recording.
5.4 ECHO CANCELLATION
In the telephone network, the subscribers are connected to telephone exchange by two wire circuit. The
exchanges are connected by four wire circuit. The two wire circuit is bidirectional and carries signal in
both the directions. The four wire circuit has separate paths for transmission and reception. The hybrid
coil at the exchange provides the interface between the two wire and four wire circuits and also provides
impedance matching between them. If this matching were perfect there would be no echo or reflections on
the lines; in practice the matching is imperfect because it depends on line length, so echoes do occur. Hence,
for echo cancellation, DSP techniques are used as follows.
1. A DSP-based acoustic echo canceller works in the following fashion: it records the sound
going to the loudspeaker and subtracts it from the signal coming from
the microphone. The sound going through the echo loop is transformed and delayed,
and noise is added, which complicates the subtraction process.
2. Let x(n) be the input signal going to the loudspeaker and let d(n) be the signal picked up by the
microphone, which will be called the desired signal. The signal after
subtraction will be called the error signal and will be denoted by e(n). The adaptive filter will
try to identify the equivalent filter seen by the system from the loudspeaker to the
microphone, which is the transfer function of the room the loudspeaker and microphone are in.
3. This transfer function will depend heavily on the physical characteristics of the environment. In
broad terms, a small room with absorbing walls will originate just a few first-order reflections,
so that its transfer function will have a short impulse response. On the other hand, large rooms
with reflecting walls will have a transfer function whose impulse response decays slowly in
time, so that echo cancellation will be much more difficult.
5.5 VIBRATION ANALYSIS:
1. Machines such as motors and ball bearings vibrate at rates that depend on the speed of
their movement.
2. In order to detect faults in the system, spectrum analysis can be performed. A healthy machine shows a fixed
frequency pattern determined by its vibrations. If there is a fault in the machine, the
predetermined spectrum changes: new frequencies representing the fault appear in the spectrum.
3. This spectrum analysis can be performed by DSP system. The DSP system can also be used to
monitor other parameters of the machine simultaneously.
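A minimal sketch of such FFT-based fault detection is shown below; the 1 kHz sampling rate, the 50 Hz "healthy" line, the 120 Hz "fault" line and the detection threshold are all made-up illustrative values.

```python
# Sketch: compare a vibration spectrum against a healthy reference and flag new lines.
import numpy as np

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
healthy = np.sin(2 * np.pi * 50 * t)                      # normal 50 Hz vibration
faulty = healthy + 0.3 * np.sin(2 * np.pi * 120 * t)      # a fault adds a 120 Hz line

def spectrum(x):
    X = np.abs(np.fft.rfft(x * np.hanning(len(x)))) / len(x)
    return np.fft.rfftfreq(len(x), 1 / fs), X

f, H = spectrum(healthy)
_, F = spectrum(faulty)
new_lines = f[(F - H) > 0.05]     # frequencies present only in the faulty spectrum
print(new_lines)                  # roughly 120 Hz, indicating the fault
```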
Speech coding: APC and SBC
Adaptive predictive coding (APC) is a technique used for speech coding, that is, data
compression of speech signals. APC assumes that the input speech signal is repetitive with a
period significantly longer than the average frequency content. Two predictors are used in APC.
The high-frequency components (up to 4 kHz) are estimated using a 'spectral' or 'formant' predictor,
and the low-frequency components (50-200 Hz) by a 'pitch' or 'fine structure' predictor (see Figure 7.4).
The spectral estimator may be of order 1-4 and the pitch estimator about order 10. The
low-frequency components of the speech signal are due to the movement of the tongue and chin, while
the high-frequency components originate from the vocal chords and the noise-like sounds (like
in 's') produced in the front of the mouth.
The output signal y(n), together with the predictor parameters obtained adaptively in the
encoder, is transmitted to the decoder, where the speech signal is reconstructed. The decoder
has the same structure as the encoder, but the predictors are not adaptive and are invoked in the
reverse order. The prediction parameters are adapted for blocks of data corresponding to, for
instance, 20 ms time periods.
APC is used for coding speech at 9.6 and 16 kbits/s. The algorithm works well in noisy
environments, but unfortunately the quality of the processed speech is not as good as for other
methods like CELP, described below.
Sub-band coding (SBC): the sub-band coder splits the speech signal into a number of frequency bands with a bank of band-pass filters,
followed by decimators, encoders (for instance ADPCM) and a multiplexer combining the data bits
coming from the sub-band channels. The output of the multiplexer is then transmitted to the
sub-band decoder, which has a demultiplexer splitting the multiplexed data stream back into N sub-
band channels. Every sub-band channel has a decoder (for instance ADPCM), followed by an
interpolator and a band-pass filter. Finally, the outputs of the band-pass filters are summed and
a reconstructed output signal results.
Sub-band coding is commonly used at bit rates between 9.6 kbits/s and 32 kbits/s and
performs quite well. The complexity of the system may however be considerable if the number
of sub-bands is large. The design of the band-pass filters is also a critical topic when working
with sub-band coding systems.
FIGURE 7.6: The LPC model. A periodic (pitch) excitation source or a noise source, selected by the voiced/unvoiced decision and scaled by a gain, drives the synthesis filter to produce the synthetic speech.
The first vocoder was designed by H. Dudley in the 1930s and demonstrated at
the New York Fair in 1939. Vocoders have become popular as they achieve
reasonably good speech quality at low data rates, from 2.4 kbits/s to 9.6 kbits/s.
There are many types of vocoders (Marven and Ewers, 1993); some of the most
common techniques will be briefly presented below.
Most vocoders rely on a few basic principles. Firstly, the characteristics of the
speech signal are assumed to be fairly constant over a time of approximately 20
ms, hence most signal processing is performed on (overlapping) data blocks of 20-40 ms length.
Secondly, the speech model consists of a time-varying filter
corresponding to the acoustic properties of the mouth and an excitation signal.
The excitation signal is either a periodic waveform, as created by the vocal
chords, or a random noise signal for production of 'unvoiced' sounds, for
example 's' and 'f'. The filter parameters and excitation parameters are assumed
to be independent of each other and are commonly coded separately.
Linear predictive coding (LPC) is a popular method, which has however
been replaced by newer approaches in many applications. LPC works exceedingly
well at low bit rates and the LPC parameters contain sufficient information
about the speech signal to be used in speech recognition applications. The LPC
model is shown in Figure 7.6.
LPC is basically an auto-regressive model (see Chapter 5) and the vocal tract
is modelled as a time-varying all-pole filter (IIR filter) having the transfer
function
H(z) = G / ( 1 + Σ from k=1 to p of a_k z^(−k) )    (7.17)
where p is the order of the filter. The excitation signal e(n), being either noise or
a periodic waveform, is fed to the filter via a variable gain factor G. The output
signal can be expressed in the time domain as
y(n) = G e(n) − a_1 y(n−1) − a_2 y(n−2) − ... − a_p y(n−p)    (7.18)
i.e. the output is a linear combination of past output samples and the excitation signal (linear predictive coding). The filter coefficients a_k are
time varying.
The model above describes how to synthesize the speech given the pitch
information (whether noise or periodic excitation should be used), the gain and the
filter parameters. These parameters must be determined by the encoder or the
analyser, taking the original speech signal x(n) as input.
The analyser windows the speech signal in blocks of 20-40 ms, usually with a
Hamming window (see Chapter 5). These blocks or frames are repeated every
10-30 ms, hence there is a certain overlap in time. Every frame is then analysed
with respect to the parameters mentioned above.
Firstly, the pitch frequency is determined. This also tells whether we are
dealing with a voiced or unvoiced speech signal. This is a crucial part of the
system and many pitch detection algorithms have been proposed. If the segment
of the speech signal is voiced and has a clear periodicity, or if it is unvoiced and
not periodic, things are quite easy. Segments having properties in between these
two extremes are difficult to analyse. No algorithm has been found so far that is
'perfect' for all listeners.
Now, the second step of the analyser is to determine the gain and the filter
parameters. This is done by estimating the speech signal using an adaptive
predictor. The predictor has the same structure and order as the filter in the
synthesizer. Hence, the output of the predictor is
x̂(n) = −a_1 x(n−1) − a_2 x(n−2) − ... − a_p x(n−p)    (7.19)
where x̂(n) is the predicted input speech signal and x(n) is the actual input signal.
The filter coefficients a_k are determined by minimizing the square error
Σ over n of r²(n) = Σ over n of ( x(n) − x̂(n) )²    (7.20)
This can be done in different ways, either by calculating the auto-correlation
coefficients and solving the Yule-Walker equations (see Chapter 5) or by using
some recursive, adaptive filter approach (see Chapter 3).
So, for every frame, all the parameters above are determined and transmitted to
the synthesiser, where a synthetic copy of the speech is generated.
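A minimal sketch of the autocorrelation/Yule-Walker analysis step is given below. It solves for predictor coefficients c_k with x̂(n) = Σ c_k x(n−k); the a_k of (7.19) are −c_k. The synthetic frame used for the check is an assumption made only so the result can be verified.

```python
# Sketch: LPC analysis of one frame via autocorrelation and the Yule-Walker equations.
import numpy as np

def lpc(frame, p):
    frame = frame * np.hamming(len(frame))              # windowed analysis frame
    r = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    c = np.linalg.solve(R, r[1:p + 1])                  # forward predictor coefficients
    gain = np.sqrt(max(r[0] - c @ r[1:p + 1], 0.0))     # residual (excitation) gain
    return c, gain

# synthetic frame: noise driven through a known all-pole filter y(n)=1.3y(n-1)-0.9y(n-2)+e(n)
np.random.seed(0)
frame = np.random.randn(320) * 0.01
for n in range(2, len(frame)):
    frame[n] += 1.3 * frame[n - 1] - 0.9 * frame[n - 2]
c, g = lpc(frame, p=2)
print(c)        # approximately [1.3, -0.9], recovering the synthesis filter
```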
An improved version of LPC is residual excited linear prediction (RELP).
Let us take a closer look at the error or residual signal r(n) resulting from the
prediction in the analyser (equation (7.19)). The residual signal (which we are trying to
minimize) can be expressed as
r(n) = x(n) − x̂(n) = x(n) + a_1 x(n−1) + a_2 x(n−2) + ... + a_p x(n−p)    (7.21)
From this it is straightforward to find out that the corresponding expression using
the z-transforms is
R(z) = X(z) H^(−1)(z)    (7.22)
Hence, the predictor can be regarded as an inverse filter to the LPC model filter.
If we now pass this residual signal to the synthesizer and use it to excite the LPC
filter, that is E(z) = R(z), instead of using the noise or periodic waveform
sources, we get
Y(z) = E(z) H(z) = R(z) H(z) = X(z) H^(−1)(z) H(z) = X(z)    (7.23)
In the ideal case, we would hence get the original speech signal back. When
minimizing the variance of the residual signal (equation (7.20)), we gathered as
much information about the speech signal as possible using this model in the
filter coefficients a_k. The residual signal contains the remaining information. If
the model is well suited for the signal type (speech signal), the residual signal is
close to white noise, having a flat spectrum. In such a case we can get away with
coding only a small range of frequencies, for instance 0-1 kHz, of the residual
signal. At the synthesizer, this baseband is then repeated to generate higher
frequencies. This signal is used to excite the LPC filter.
Vocoders using RELP are used with transmission rates of 9.6 kbits/s. The
advantage of RELP is a better speech quality compared to LPC for the same bit
rate. However, the implementation is more computationally demanding.
Another possible extension of the original LPC approach is to use multipulse
excited linear predictive coding (MLPC). This extension is an attempt to make
the synthesized speech less mechanical, by using a number of different pitches
of the excitation pulses rather than only the two (periodic and noise) used by
standard LPC.
The MLPC algorithm sequentially detects k pitches in a speech signal. As soon
as one pitch is found it is subtracted from the signal and detection starts over
again, looking for the next pitch. Pitch information detection is a hard task and
the complexity of the required algorithms is often considerable. MLPC however
offers a better speech quality than LPC for a given bit rate and is used in systems
working with 4.8-9.6 kbits/s.
Yet another extension of LPC is code excited linear prediction (CELP).
The main feature of CELP compared to LPC is the way in which the filter
coefficients are handled. Assume that we have a standard LPC system, with a
filter of order p. If every coefficient a_k requires N bits, we need to transmit
N·p bits per frame for the filter parameters only. This approach is all right if all
combinations of filter coefficients are equally probable. This is however not the
case. Some combinations of coefficients are very probable, while others may
never occur. In CELP, the coefficient combinations are represented by p-dimensional
vectors. Using vector quantization techniques, the most probable
vectors are determined. Each of these vectors is assigned an index and stored in
a codebook. Both the analyser and synthesizer of course have identical copies of
the codebook, typically containing 256-512 vectors. Hence, instead of
transmitting N·p bits per frame for the filter parameters, only 8-9 bits are needed.
This method offers high-quality speech at low bit rates but requires considerable
computing power to be able to store and match the incoming speech to the
standard sounds stored in the codebook. This is of course especially true if the
codebook is large. Speech quality degrades as the codebook size decreases.
Most CELP systems do not perform well with respect to the higher frequency
components of the speech signal at low bit rates.
There is also a variant of CELP called vector sum excited linear prediction
(VSELP). The main difference between CELP and VSELP is the way the
codebook is organized. Further, since VSELP uses fixed-point arithmetic
algorithms, it is possible to implement it using cheaper DSP chips than CELP requires.
Adaptive Filters
The signal degradation in some physical systems is time varying, unknown, or possibly
both. For example, consider a high-speed modem for transmitting and receiving data over
telephone channels. It employs a filter called a channel equalizer to compensate for the channel
distortion. Since dial-up communication channels have different and time-varying
characteristics on each connection, the equalizer must be an adaptive filter.
An adaptive filter may use an FIR or an IIR structure; an adaptive IIR filter involves both zeros and poles. Unless they are properly controlled, the poles in the filter
may move outside the unit circle and result in an unstable system during the adaptation of
coefficients. Thus, the adaptive FIR filter is widely used for practical real-time applications.
This chapter focuses on the class of adaptive FIR filters.
The most widely used adaptive FIR filter is depicted in Figure 7.2. The filter output signal
is computed as
y(n) = Σ from l=0 to L−1 of w_l(n) x(n − l),    (7.13)
where the filter coefficients w_l(n) are time varying and updated by the adaptive algorithms that will be
discussed next.
We define the input vector at time n as
x(n) = [x(n)x(n - 1) . . . x(n - L + 1)]T , (7.14)
and the weight vector at time n as
w(n) = [w0(n)w1(n) . . . wL-1(n)]T . (7.15)
Equation (7.13) can be expressed in vector form as
y(n) = wT (n)x(n) = xT (n)w(n). (7.16)
The filter output y(n) is compared with the desired signal d(n) to obtain the error signal
e(n) = d(n) − y(n) = d(n) − w^T(n) x(n).    (7.17)
Our objective is to determine the weight vector w(n) to minimize the predetermined performance (or cost)
function.
Performance Function:
The adaptive filter shown in Figure 7.1 updates the coefficients of the digital filter to optimize some
predetermined performance criterion. The most commonly used performance function is
based on the mean-square error (MSE).
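The widely used LMS algorithm (not derived in the text above, but the standard stochastic-gradient solution to the MSE criterion) updates the weights as w(n+1) = w(n) + μ e(n) x(n). A minimal sketch, with an arbitrary step size and a made-up "unknown system" for checking, is given below.

```python
# Sketch: LMS adaptation of the FIR filter of (7.13)-(7.17).
import numpy as np

def lms(x, d, L=8, mu=0.05):
    w = np.zeros(L)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    xbuf = np.zeros(L)                      # x(n), x(n-1), ..., x(n-L+1)
    for n in range(len(x)):
        xbuf = np.r_[x[n], xbuf[:-1]]
        y[n] = w @ xbuf                     # (7.16)
        e[n] = d[n] - y[n]                  # (7.17)
        w = w + mu * e[n] * xbuf            # LMS coefficient update
    return w, y, e

# identify an unknown FIR system from its input and noisy output
np.random.seed(1)
x = np.random.randn(2000)
unknown = np.array([0.5, -0.3, 0.2])
d = np.convolve(x, unknown)[:len(x)] + 0.01 * np.random.randn(len(x))
w, _, _ = lms(x, d, L=8)
print(w[:3])                                # approaches [0.5, -0.3, 0.2]
```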
The path leading from the musician's microphone to the audiophile's speaker is remarkably long. Digital
data representation is important to prevent the degradation commonly associated with analog storage and
manipulation. This is very familiar to anyone who has compared the musical quality of cassette tapes with
compact disks. In a typical scenario, a musical piece is recorded in a sound studio on multiple channels or
tracks. In some cases, this even involves recording individual instruments and singers separately. This is
done to give the sound engineer greater flexibility in creating the final product. The complex process of
combining the individual tracks into a final product is called mix down. DSP can provide several
important functions during mix down, including: filtering, signal addition and subtraction, signal editing,
etc. One of the most interesting DSP applications in music preparation is artificial reverberation. If
the individual channels are simply added together, the resulting piece sounds frail and diluted, much as if
the musicians were playing outdoors. This is because listeners are greatly influenced by the echo or
reverberation content of the music, which is usually minimized in the sound studio. DSP allows artificial
echoes and reverberation to be added during mix down to simulate various ideal listening environments.
Echoes with delays of a few hundred milliseconds give the impression of cathedral-like locations. Adding
echoes with delays of 10-20 milliseconds gives the perception of more modest sized listening rooms.
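A very simple form of such artificial reverberation is a feedback (comb) delay line, sketched below; the 250 ms delay, 0.5 gain and 8 kHz rate are illustrative values only, not a production reverberator.

```python
# Sketch: artificial echo/reverberation by a single feedback delay line.
import numpy as np

def feedback_echo(x, fs, delay_ms=250.0, gain=0.5):
    d = int(fs * delay_ms / 1000.0)          # delay in samples
    y = np.array(x, dtype=float)
    for n in range(d, len(y)):
        y[n] += gain * y[n - d]              # each echo spawns further, weaker echoes
    return y

fs = 8000
dry = np.zeros(fs)
dry[0] = 1.0                                 # impulse input: inspect the echo pattern
wet = feedback_echo(dry, fs)
# echoes appear every 250 ms with amplitudes 0.5, 0.25, 0.125, ...
```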
Speech generation and recognition are used to communicate between humans and machines. Rather than
using your hands and eyes, you use your mouth and ears. This is very convenient when your hands and
eyes should be doing something else, such as: driving a car, performing surgery, or (unfortunately) firing
your weapons at the enemy. Two approaches are used for computer generated speech: digital
recording and vocal tract simulation. In digital recording, the voice of a human speaker is digitized
and stored, usually in a compressed form. During playback, the stored data are uncompressed and
converted back into an analog signal. An entire hour of recorded speech requires only about three
megabytes of storage, well within the capabilities of even small computer systems. This is the most common
method of digital speech generation used today. Vocal tract simulators are more complicated, trying to
mimic the physical mechanisms by which humans create speech. The human vocal tract is an acoustic
cavity with resonant frequencies determined by the size and shape of the chambers. Sound originates in
the vocal tract in one of two basic ways, called voiced and fricative sounds. With voiced sounds, vocal
cord vibration produces near periodic pulses of air into the vocal cavities. In comparison, fricative sounds
originate from the noisy air turbulence at narrow constrictions, such as the teeth and lips. Vocal tract
simulators operate by generating digital signals that resemble these two types of excitation. The
characteristics of the resonant chamber are simulated by passing the excitation signal through a digital
filter with similar resonances. This approach was used in one of the very early DSP success stories, the
Speak & Spell, a widely sold electronic learning aid for children.
The automated recognition of human speech is immensely more difficult than speech generation. Speech
recognition is a classic example of things that the human brain does well, but digital computers do poorly.
Digital computers can store and recall vast amounts of data, perform mathematical calculations at blazing
speeds, and do repetitive tasks without becoming bored or inefficient. Unfortunately, present day
computers perform very poorly when faced with raw sensory data. Teaching a computer to send you a
monthly electric bill is easy. Teaching the same computer to understand your voice is a major
undertaking. Digital Signal Processing generally approaches the problem of voice recognition in two
steps: feature extraction followed by feature matching. Each word in the incoming audio signal is
isolated and then analyzed to identify the type of excitation and the resonant frequencies. These parameters
are then compared with previous examples of spoken words to identify the closest match. Often, these
systems are limited to only a few hundred words; can only accept speech with distinct pauses between
words; and must be retrained for each individual speaker. While this is adequate for many
commercial applications, these limitations are humbling when compared to the abilities of human hearing.
There is a great deal of work to be done in this area, with tremendous financial rewards for those that
produce successful commercial products.
Example 2
The result of the transformation shown in the figure below is to produce a binary image.
s = T(r)
Frequency domain methods
Let g(x, y) be a desired image formed by the convolution of an image f(x, y) and a linear, position
invariant operator h(x, y), that is:
g(x, y) = h(x, y) ∗ f(x, y)
The following frequency domain relationship holds:
G(u, v) = H(u, v) F(u, v)
We can select H(u, v) so that the desired image
g(x, y) = ℑ^(−1){ H(u, v) F(u, v) }
exhibits some highlighted features of f(x, y). For instance, edges in f(x, y) can be accentuated by
using a function H(u, v) that emphasises the high frequency components of F(u, v).
Glossary:
Sampling Rate:
The number of samples taken per second of the signal is termed the sampling rate. The samples occur at equal intervals of time T.
Sampling Theorem:
The sampling theorem states that the sampling frequency should be greater than or equal to twice the highest frequency present in the input message signal.
Sampling Rate Conversion:
The sampling rate of a signal may be increased or decreased as the application requires. This is termed sampling rate conversion.
Decimation:
A decrease in the sampling rate by an integer factor M is termed decimation or down-sampling; only every Mth sample of the input is retained.
Interpolation:
An increase in the sampling rate by an integer factor L is termed interpolation or up-sampling; L−1 zero-valued samples are inserted between successive values of the input, and the result is filtered.
Polyphase Implementation:
The FIR filter of an interpolator or decimator is split into a set of smaller sub-filters of length K. Because the usual up-sampling process inserts L−1 zeros between successive values of x(n), most products in the full-length filter involve zero-valued samples; the polyphase structure computes only the non-zero products, so each output sample requires only K multiplications.
DIGITAL SIGNAL PROCESSING - III YEAR / V SEM ECE
EC 6502 DIGITAL SIGNAL PROCESSING
1. Define DTFT. APRIL/MAY 2008
The discrete-time Fourier transform (DTFT) of a sequence x(n) is usually written
X(e^(jω)) = Σ from n=−∞ to ∞ of x(n) e^(−jωn)
and the inverse relation is
x(n) = (1/2π) ∫ from −π to π of X(e^(jω)) e^(jωn) dω
(As usual, the subscripts are interpreted modulo N; thus, for n = 0, we have x_(N−0) = x_0.)
Second, one can also conjugate the inputs and outputs.
Third, a variant of this conjugation trick, which is sometimes preferable because it requires no
modification of the data values, involves swapping real and imaginary parts (which can be done on a
computer simply by modifying pointers). Define swap(x_n) as x_n with its real and imaginary parts
swapped; that is, if x_n = a + bi then swap(x_n) is b + ai. Equivalently, swap(x_n) equals i·conj(x_n).
where n is an integer and z is, in general, a complex number:
z = A e^(jφ)   or   z = A (cos φ + j sin φ)
where A is the magnitude of z, and φ is the complex argument (also referred to as angle or phase) in
radians.
In signal processing, this definition is used when the signal is causal.
9. Define Region Of Convergence.
The region of convergence (ROC) is the set of points in the complex plane for which the Z-transform
summation converges.
In practice, it is often useful to decompose X(z)/z into partial fractions before multiplying that
quantity by z, to generate a form which has terms with easily computable inverse Z-transforms.
Periodicity, Linearity, Time shift, Frequency shift, Scaling, Differentiation in frequency domain,
Time reversal, Convolution, Multiplication in time domain, Parseval's theorem
14. What is the DTFT of unit sample? NOV/DEC 2010
The DTFT of unit sample is 1 for all values of w.
15. Define Zero padding.
The method of appending zero in the given sequence is called as Zero padding.
16. Define circularly even sequence.
A Sequence is said to be circularly even if it is symmetric about the point zero on
the circle. x(N-n)=x(n),1<=n<=N-1.
17. Define circularly odd sequence.
A Sequence is said to be circularly odd if it is anti symmetric about point x(0) on the circle
Contour integration
Power series expansion
Convolution.
28. Obtain the inverse z-transform of X(z) = 1/(z − a), |z| > |a|. APRIL/MAY 2010
Given X(z) = 1/(z − a) = z^(−1) / (1 − a z^(−1))
By the time shifting property, x(n) = a^(n−1) u(n−1).
16-MARKS
Soln:
X(k) = Σ from n=0 to N−1 of x(n) e^(−j2πkn/N),  k = 0, 1, ..., N−1.   Here N = 8.
Evaluating the sum for k = 0, 1, ..., 7 gives
X(0) = 6
X(1) = −0.707 − j1.707
X(2) = 1 − j
X(3) = 0.707 + j0.293
X(4) = 0
X(5) = 0.707 − j0.293
X(6) = 1 + j
X(7) = −0.707 + j1.707
X(k) = {6, −0.707 − j1.707, 1 − j, 0.707 + j0.293, 0, 0.707 − j0.293, 1 + j, −0.707 + j1.707}
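A short sketch of the direct DFT evaluation is given below. The input sequence is not reproduced in the worked example above; x(n) = {1,1,1,1,1,1,0,0} is assumed here only because it reproduces the listed X(k).

```python
# Sketch: direct evaluation of the N-point DFT, X(k) = sum_n x(n) * W_N^(nk).
import numpy as np

def dft(x):
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # matrix of W_N^(nk)
    return W @ x

x = np.array([1, 1, 1, 1, 1, 1, 0, 0], dtype=complex)   # assumed input sequence
print(np.round(dft(x), 3))
# [ 6.  -0.707-1.707j  1.-1.j  0.707+0.293j  0.  0.707-0.293j  1.+1.j  -0.707+1.707j ]
```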
Soln:
X(k) = Σ from n=0 to N−1 of x(n) e^(−j2πkn/N),  k = 0, 1, ..., N−1.   Here N = 8.
X(0) = 8 and X(1) = X(2) = ... = X(7) = 0, so
X(k) = {8, 0, 0, 0, 0, 0, 0, 0}
Soln:
X(k) = Σ from n=0 to N−1 of x(n) e^(−j2πkn/N),  k = 0, 1, ..., N−1.   Here N = 8.
X(0) = 20
X(1) = −5.828 − j2.414
X(2) = 0
X(3) = −0.172 − j0.414
X(4) = 0
X(5) = −0.172 + j0.414
X(6) = 0
X(7) = −5.828 + j2.414
X(k) = {20, −5.828 − j2.414, 0, −0.172 − j0.414, 0, −0.172 + j0.414, 0, −5.828 + j2.414}
Soln:
x(n) = (1/N) Σ from k=0 to N−1 of X(k) e^(j2πkn/N),  n = 0, 1, ..., N−1.   Here N = 8.
Evaluating the sum for n = 0, 1, ..., 7 gives
x(0) = 1, x(1) = 0.75, x(2) = 0.5, x(3) = 0.25, x(4) = 1, x(5) = 0.75, x(6) = 0.5, x(7) = 0.25
x(n) = {1, 0.75, 0.5, 0.25, 1, 0.75, 0.5, 0.25}
5. Derive and draw the radix -2 DIT algorithms for FFT of 8 points. (16) DEC 2009
RADIX-2 FFT ALGORITHMS
The N-point DFT of an N-point sequence x(n) is
X(k) = Σ from n=0 to N−1 of x(n) W_N^(nk),   W_N = e^(−j2π/N)
Because x(n) may be either real or complex, evaluating X(k) requires on the order of N complex
multiplications and N complex additions for each value of k. Therefore, because there are N values of
X(k), computing an N-point DFT requires N² complex multiplications and additions.
The basic strategy that is used in the FFT algorithm is one of "divide and conquer", which involves
decomposing an N-point DFT into successively smaller DFTs. To see how this works, suppose that the
length of x(n) is even (i.e., N is divisible by 2). If x(n) is decimated into two sequences of length N/2,
computing the N/2-point DFT of each of these sequences requires approximately (N/2)² multiplications and
the same number of additions. Thus, the two DFTs require 2(N/2)² = N²/2 multiplies and adds. Therefore, if it is possible
to find the N-point DFT of x(n) from these two N/2-point DFTs in fewer than N²/2 operations, a
savings has been realized.
Decimation-in-Time FFT
The decimation-in-time FFT algorithm is based on splitting (decimating) x(n) into smaller sequences and
finding X(k) from the DFTs of these decimated sequences. This section describes how this decimation
leads to an efficient algorithm when the sequence length is a power of 2.
Let x(n) be a sequence of length N = 2^v, and suppose that x(n) is split (decimated) into two
subsequences, each of length N/2. As illustrated in Fig., the first sequence, g(n), is formed from the
even-index terms,
g(n) = x(2n),   n = 0, 1, ..., N/2 − 1
and the second, h(n), is formed from the odd-index terms,
h(n) = x(2n + 1),   n = 0, 1, ..., N/2 − 1
In terms of these sequences, the N-point DFT of x(n) is
X(k) = Σ over even n of x(n) W_N^(nk) + Σ over odd n of x(n) W_N^(nk)
     = Σ from l=0 to N/2−1 of g(l) W_N^(2lk) + W_N^k Σ from l=0 to N/2−1 of h(l) W_N^(2lk)
Because W_N^(2lk) = W_(N/2)^(lk),
X(k) = Σ from l=0 to N/2−1 of g(l) W_(N/2)^(lk) + W_N^k Σ from l=0 to N/2−1 of h(l) W_(N/2)^(lk)
Note that the first term is the N/2-point DFT of g(n) and the second is the N/2-point DFT of h(n):
X(k) = G(k) + W_N^k H(k),   k = 0, 1, ..., N − 1
Although the N/2-point DFTs of g(n) and h(n) are sequences of length N/2, the periodicity of the
complex exponentials allows us to write
G(k) = G(k + N/2),   H(k) = H(k + N/2)
Therefore, X(k) may be computed from the N/2-point DFTs G(k) and H(k). Note that because
W_N^(k+N/2) = −W_N^k
then
W_N^(k+N/2) H(k + N/2) = −W_N^k H(k)
and it is only necessary to form the products W_N^k H(k) for k = 0, 1, ..., N/2 − 1. The complex exponentials
multiplying H(k) are called twiddle factors. A block diagram showing the computations that are
necessary for the first stage of an eight-point decimation-in-time FFT is shown in Fig.
If N/2 is even, g(n) and h(n) may again be decimated. For example, G(k) may be evaluated as follows:
G(k) = Σ from n=0 to N/2−1 of g(n) W_(N/2)^(nk) = Σ over even n of g(n) W_(N/2)^(nk) + Σ over odd n of g(n) W_(N/2)^(nk)
FIGURE: A complete eight-point radix-2 decimation-in-time FFT.
Computing an N-point DFT using a radix-2 decimation-in-time FFT is much more efficient
than calculating the DFT directly. For example, if N = 2^v, there are log2 N = v stages of computation.
Because each stage requires N/2 complex multiplies by the twiddle factors W_N^k and N complex
additions, there are a total of (N/2) log2 N complex multiplications and N log2 N complex additions.
From the structure of the decimation-in-time FFT algorithm, note that once a butterfly operation
has been performed on a pair of complex numbers, there is no need to save the input pair. Therefore, the
output pair may be stored in the same registers as the input. Thus, only one array of size N is required,
and it is said that the computations may be performed in place. To perform the computations in place,
however, the input sequence x(n) must be stored (or accessed) in nonsequential order as seen in Fig. The
shuffling of the input sequence that takes place is due to the successive decimations of x(n). The ordering
that results corresponds to a bit-reversed indexing of the original sequence. In other words, if the index
n is written in binary form, the order in which the input sequence must be accessed is found by
reading the binary representation for n in reverse order, as illustrated in the table below for N = 8:
n   Binary   Bit-Reversed Binary   Bit-Reversed n
0   000      000                   0
1   001      100                   4
2   010      010                   2
3   011      110                   6
4   100      001                   1
5   101      101                   5
6   110      011                   3
7   111      111                   7
Alternate forms of FFT algorithms may be derived from the decimation-in-time FFT by
manipulating the flowgraph and rearranging the order in which the results of each stage of the
computation are stored. For example, the nodes of the flowgraph may be rearranged so that the input
sequence x(n) is in normal order. What is lost with this reordering, however, is the ability to perform
the computations in place.
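A compact sketch of the algorithm just derived, bit-reversed reordering followed by the butterfly stages, is given below (illustrative code, not an optimized in-place DSP routine).

```python
# Sketch: iterative radix-2 decimation-in-time FFT with bit-reversed input ordering.
import numpy as np

def dit_fft(x):
    x = np.asarray(x, dtype=complex).copy()
    N = len(x)
    bits = N.bit_length() - 1
    # place the input in bit-reversed order
    for n in range(N):
        r = int(format(n, f'0{bits}b')[::-1], 2)
        if r > n:
            x[n], x[r] = x[r], x[n]
    # log2(N) butterfly stages
    m = 2
    while m <= N:
        Wm = np.exp(-2j * np.pi / m)
        for start in range(0, N, m):
            w = 1.0
            for k in range(m // 2):
                a = x[start + k]
                b = w * x[start + k + m // 2]
                x[start + k] = a + b                 # butterfly
                x[start + k + m // 2] = a - b
                w *= Wm
        m *= 2
    return x

print(np.allclose(dit_fft(np.arange(8)), np.fft.fft(np.arange(8))))  # True
```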
6. Derive DIF radix 2 FFT algorithm
Decimation-in-Frequency FFT
Another class of FFT algorithms may be derived by decimating the output sequence X(k) into smaller
and smaller subsequences. These algorithms are called decimation-in-frequency FFTs and may be
derived as follows. Let N be a power of 2, N = 2^v, and consider separately evaluating the even-index
and odd-index samples of X(k). The even samples are
X(2k) = Σ from n=0 to N−1 of x(n) W_N^(2nk)
Separating this sum into the first N/2 points and the last N/2 points, and using the fact that
W_N^(2nk) = W_(N/2)^(nk), this becomes
X(2k) = Σ from n=0 to N/2−1 of x(n) W_(N/2)^(nk) + Σ from n=0 to N/2−1 of x(n + N/2) W_(N/2)^(nk) W_(N/2)^(kN/2)
Finally, because W_(N/2)^(kN/2) = 1,
X(2k) = Σ from n=0 to N/2−1 of [ x(n) + x(n + N/2) ] W_(N/2)^(nk)
which is the N/2-point DFT of the sequence that is formed by adding the first N/2 points
of x(n) to the last N/2. Proceeding in the same way for the odd samples of X(k) leads to
X(2k + 1) = Σ from n=0 to N/2−1 of [ x(n) − x(n + N/2) ] W_N^n W_(N/2)^(nk)    (7.4)
A flowgraph illustrating this first stage of decimation is shown in Fig. 7-7.
As with the decimation-in-time FFT, the decimation may be continued until
only two-point DFTs remain. A complete eight-point decimation-in-frequency FFT is shown in Fig. 7-8.
The complexity of the decimation-in-frequency FFT is the same as the decimation-in-time, and the computations
may be performed in place. Finally, note that although the input sequence
x(n) is in normal order, the frequency samples X(k) are in bit-reversed order.
PROPERTIES OF DFT
1. Periodicity
Let x(n) and X(k) be a DFT pair. Then
x(n + N) = x(n) for all n, and X(k + N) = X(k) for all k.
2. Linearity
The DFT of a linear combination of two or more signals is equal to the same linear combination of the DFTs of the individual
signals.
A) A sequence is said to be circularly even if it is symmetric about the point zero on the circle, i.e. x(N−n) = x(n).
B) A sequence is said to be circularly odd if it is anti-symmetric about the point zero on the circle, i.e. x(N−n) = −x(n).
D) Anticlockwise rotation gives a delayed sequence and clockwise rotation gives an advanced sequence; thus a
delayed or advanced sequence x'(n) is related to x(n) by a circular shift.
If the sequence is real and odd, x(n) = −x(N−n), then the DFT X(k) is purely imaginary and odd.
If the sequence is purely imaginary, x(n) = j x_I(n), then X(k) = −X*(N−k), i.e. the real part of X(k) is odd and the imaginary part is even.
3. Circular Convolution
The circular convolution property states that if
x1(n) ↔ X1(k) and x2(n) ↔ X2(k) are DFT pairs, then the N-point circular convolution of x1(n) and x2(n) has DFT X1(k)·X2(k), where
y(m) = Σ from n=0 to N−1 of x1(n) x2((m − n))_N,   m = 0, 1, ..., N−1    (4)
Thus, multiplication of the DFTs of two sequences corresponds to circular (not linear) convolution of the sequences in the time domain. The results of linear and circular convolution are in general different, but are related to each other.
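The property can be verified numerically, as in the short sketch below (illustrative 4-point sequences chosen arbitrarily).

```python
# Sketch: circular convolution computed directly and via the DFT (property (4)).
import numpy as np

def circular_convolution(x1, x2):
    N = len(x1)
    return np.array([sum(x1[n] * x2[(m - n) % N] for n in range(N)) for m in range(N)])

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.0, 1.0, 0.0, 0.0])

direct = circular_convolution(x1, x2)
via_dft = np.real(np.fft.ifft(np.fft.fft(x1) * np.fft.fft(x2)))
print(direct, np.round(via_dft, 6))   # both give [5. 3. 5. 7.]
```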
UNIT-II
INFINITE IMPULSE RESPONSE DIGITAL FILTERS
1. Give the expression for the location of poles of a normalized Butterworth filter. (May-07, Nov-10)
The poles of the normalized Butterworth filter lie equally spaced on the unit circle in the s-plane at
s_k = e^(jπ(2k + N − 1)/(2N)),   k = 1, 2, ..., N.
From the given Chebyshev filter specifications we can obtain parameters like the order of the filter N, the ripple ε, the transition ratio k, and the poles of the filter.
3. Find the digital transfer function H(z) by using the impulse invariant method for
the analog transfer function H(s) = 1/(s + 2). Assume T = 0.5 s. (May/June-07)
H(z) = 1 / (1 − e^(−1) z^(−1))
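The mapping can be checked by comparing sampled analog and digital impulse responses, as in the sketch below (it follows the answer above, which omits the optional factor T used in some texts).

```python
# Sketch: impulse invariance check for H(s) = 1/(s+2), T = 0.5 s.
import numpy as np

T = 0.5
n = np.arange(8)
h_sampled = np.exp(-2 * n * T)              # h(nT) of the analog filter, h(t) = e^(-2t)

# impulse response of the digital filter H(z) = 1/(1 - e^(-1) z^(-1))
h_digital = np.zeros(8)
y = 0.0
for k in range(8):
    y = np.exp(-1) * y + (1.0 if k == 0 else 0.0)
    h_digital[k] = y

print(np.allclose(h_sampled, h_digital))    # True: the impulse responses match
```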
6. What is the relationship between analog and digital frequency in the impulse invariant transformation? (Apr/May-08)
ω = ΩT, where Ω is the analog frequency, ω the digital frequency and T the sampling period.
(For comparison, the bilinear transformation uses s = (2/T)(1 − z^(−1))/(1 + z^(−1)).)
9. What is prewarping? (May-09, Apr-10, May-11)
In the bilinear transformation the relation between analog and digital frequency is nonlinear, Ω = (2/T) tan(ω/2), which warps the frequency axis. Prewarping is the modification of the analog filter's critical frequencies according to this relation before the transformation is applied, so that the important frequencies of the digital filter fall at the desired locations.
H(z) = 1 / (z + e^(−0.3))
13. Give the square magnitude function of the Butterworth filter. (Nov-2010)
|H(jΩ)|² = 1 / [ 1 + (Ω/Ωc)^(2N) ],   N = 1, 2, 3, ...
where N is the order of the filter and Ωc is the cutoff frequency. The magnitude response of the Butterworth filter closely approximates the ideal
response as the order N increases. The phase response becomes more nonlinear as N increases.
14. Find the digital transfer function H(z) by using the impulse invariant method
for the analog transfer function H(s) = 1/(s + 1). Assume T = 1 s. (Apr-11)
H(z) = 1 / (1 − e^(−1) z^(−1))
15. Give the equations for the order N and cutoff frequency Ωc of the Butterworth filter. (Nov-06)
The order of the filter
N ≥ log10[ (10^(0.1αs) − 1) / (10^(0.1αp) − 1) ] / [ 2 log10( Ωs/Ωp ) ]
and the cutoff frequency
Ωc = Ωp / ( 10^(0.1αp) − 1 )^(1/(2N))
where αp and αs are the passband and stopband attenuations (in dB) at the frequencies Ωp and Ωs.
i. Direct-form-I structure
ii. Direct-form-II structure
iii. Transposed direct-form II structure
iv. Cascade form structure
v. Parallel form structure
vi. Lattice-Ladder structure
19.Draw the general realization structure in direct-form I of IIR system.(May-
04)
22.Mention any two techniques for digitizing the transfer function of an analog
filter. (Nov11)
The two techniques available for digitizing the analog filter transfer function
are Impulse invariant transformation and Bilinear transformation.
23.Write a brief notes on the design of IIR filter. (Or how a digital IIR filter is
designed?)
For designing a digital IIR filter, first an equivalent analog filter is designed
using any one of the approximation technique for the given specifications. The
result of the analog filter design will be an analog filter transfer function Ha(s).
The analog filter transfer function is transformed to digital filter transfer function
H(z) using either Bilinear or Impulse invariant transformation.
iii. In one approach the design involves designing an analog filter and then transforming the analog filter to a digital filter; in the other, the digital filter is directly designed to achieve the desired specifications.
For the analog low-pass to low-pass frequency transformation,
H1(s) = Hp(s) with s replaced by (Ωp/Ω'p)·s
where
Ωp = normalized cutoff frequency = 1 rad/sec
Ω'p = desired LP cutoff frequency
so that at Ω = Ω'p the response equals H(j1).
For digital filters, the frequency transformation involves replacing the variable z^(−1) by a rational function
g(z^(−1)); while doing this the following properties need to be satisfied:
1. The mapping z^(−1) → g(z^(−1)) must map points inside the unit circle in the z-plane onto points inside the unit circle of the z-plane, so that a stable, causal filter is preserved.
The general form of the function g(·) that satisfies the above requirements is of the "all-pass" type.
The different transformations are shown in the table below.
Let H(s) = 1/(s² + s + 1) represent the transfer function of a low-pass filter (not Butterworth) with a passband of 1 rad/sec. Use frequency transformation to find the transfer functions of the following filters: (Apr/May-08) (12)
1. A LP filter with a passband of 10 rad/sec
2. A HP filter with a cutoff frequency of 1 rad/sec
3. A HP filter with a cutoff frequency of 10 rad/sec
4. A BP filter with a passband of 10 rad/sec and a corner frequency of 100 rad/sec
Solution:
Given H(s) = 1/(s² + s + 1)

a. LP → LP transformation: replace s by s/Ω'p = s/10
Ha(s) = H(s)|_(s → s/10) = 1 / [ (s/10)² + (s/10) + 1 ] = 100 / (s² + 10s + 100)

b. LP → HP (normalized) transformation: replace s by Ωu/s = 1/s
Ha(s) = H(s)|_(s → 1/s) = 1 / [ (1/s)² + (1/s) + 1 ] = s² / (s² + s + 1)

d. LP → BP transformation: replace s by (s² + Ωl Ωu)/(s(Ωu − Ωl)) = (s² + Ωo²)/(s Bo), where Ωo² = Ωl Ωu and Bo = Ωu − Ωl
Ha(s) = H(s)|_(s → (s² + 10^4)/(10s)) = 100 s² / (s⁴ + 10 s³ + 20100 s² + 10^5 s + 10^8)

e. LP → BS transformation: replace s by s(Ωu − Ωl)/(s² + Ωl Ωu) = s Bo/(s² + Ωo²)
Ha(s) = H(s)|_(s → 2s/(s² + 100)) = (s² + 100)² / (s⁴ + 2 s³ + 204 s² + 200 s + 10^4)
Solution:
Note that the resulting filter has zeros at z = ±1 and a pair of poles that depend on the choice of ωl and ωu.
This filter has poles at z = ±j0.713 and hence resonates at ω = π/2.
H(z) = [ Σ from k=0 to M of b_k z^(−k) ] / [ 1 + Σ from k=1 to N of a_k z^(−k) ]
1. Direct form-I
2. Direct form-II
3. Cascade form
4. Parallel form
5. Lattice form
Direct form-I
This is a straight forward implementation of difference equation which
is very simple. Typical Direct form I realization is shown below. The
upper branch is forward path and lower branch is feedback path. The
number of delays depends on presence of most previous input and output
samples in the difference equation.
Direct form-II
H(z) = Y(z)/X(z) = [ V(z)/X(z) ] · [ Y(z)/V(z) ]
V(z)/X(z) = 1 / ( 1 + Σ from k=1 to N of a_k z^(−k) )   ------- all poles
Y(z)/V(z) = 1 + Σ from k=1 to M of b_k z^(−k)   ------- all zeros
v(n) = x(n) − Σ from k=1 to N of a_k v(n−k)
y(n) = v(n) + Σ from k=1 to M of b_k v(n−k)
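The direct form II recursion above can be sketched in a few lines; note the single shared delay line v(n).

```python
# Sketch: direct form II implementation of H(z) = B(z)/A(z).
import numpy as np

def direct_form_2(x, b, a):
    """b = [b0..bM], a = [1, a1..aN]."""
    L = max(len(a), len(b))
    v = np.zeros(L)                                  # v(n), v(n-1), ...
    y = np.zeros(len(x))
    for n, xn in enumerate(x):
        v = np.roll(v, 1)                            # shift the shared delay line
        v[0] = xn - sum(a[k] * v[k] for k in range(1, len(a)))   # all-pole part
        y[n] = sum(b[k] * v[k] for k in range(len(b)))           # all-zero part
    return y

# impulse response of the system of question 6 below:
# y(n) = -0.1 y(n-1) + 0.2 y(n-2) + 3 x(n) + 3.6 x(n-1) + 0.6 x(n-2)
y = direct_form_2(np.r_[1.0, np.zeros(9)], b=[3.0, 3.6, 0.6], a=[1.0, 0.1, -0.2])
print(np.round(y, 4))   # starts 3.0, 3.3, ...
```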
Cascade Form
The transfer function of a system can be expressed as,
H ( z ) = H 1 ( z ) H 2 ( z )....H k ( z )
H_k(z) = ( b_k0 + b_k1 z^(−1) + b_k2 z^(−2) ) / ( 1 + a_k1 z^(−1) + a_k2 z^(−2) )
Where {p_k} are the poles, {A_k} are the coefficients in the partial fraction
expansion, and the constant C is defined as C = b_N/a_N. The system
realization of the above form is shown below.
where H_k(z) = ( b_k0 + b_k1 z^(−1) ) / ( 1 + a_k1 z^(−1) + a_k2 z^(−2) )
For the lattice realization of an all-pole IIR system,
H(z) = 1/A_N(z) = 1 / ( 1 + Σ from k=1 to N of a_N(k) z^(−k) )
OR
x(n) = y(n) + Σ from k=1 to N of a_N(k) y(n−k)
For N = 1
x(n) = y(n) + a_1(1) y(n−1)
We observe x(n) = f_1(n), and
y(n) = x(n) − k_1 y(n−1),   with k_1 = a_1(1)
For N = 2 the direct form is
y(n) = x(n) − a_2(1) y(n−1) − a_2(2) y(n−2)
This output can be obtained from a two-stage lattice filter as shown in the figure below:
f_2(n) = x(n)
f_1(n) = f_2(n) − k_2 g_1(n−1)
g_2(n) = k_2 f_1(n) + g_1(n−1)
f_0(n) = f_1(n) − k_1 g_0(n−1)
g_1(n) = k_1 f_0(n) + g_0(n−1)
y(n) = f_0(n) = x(n) − k_2[ k_1 y(n−1) + y(n−2) ] − k_1 y(n−1)
     = x(n) − k_1(1 + k_2) y(n−1) − k_2 y(n−2)
Similarly
g_2(n) = k_2 y(n) + k_1(1 + k_2) y(n−1) + y(n−2)
We observe
a_2(0) = 1;   a_2(1) = k_1(1 + k_2);   a_2(2) = k_2
f N ( n) = x ( n)
y ( n) = f 0 ( n) = g 0 ( n)
H(z) = 10 (1 − (1/2)z^(−1)) (1 − (2/3)z^(−1)) (1 + 2z^(−1)) / [ (1 − (3/4)z^(−1)) (1 − (1/8)z^(−1)) (1 − ((1/2) + j(1/2))z^(−1)) (1 − ((1/2) − j(1/2))z^(−1)) ]
a). Direct form I
c). Cascade
d). Parallel
Solution:
H(z) = 10 (1 − (1/2)z^(−1)) (1 − (2/3)z^(−1)) (1 + 2z^(−1)) / [ (1 − (3/4)z^(−1)) (1 − (1/8)z^(−1)) (1 − ((1/2) + j(1/2))z^(−1)) (1 − ((1/2) − j(1/2))z^(−1)) ]
     = 10 (1 − (7/6)z^(−1) + (1/3)z^(−2)) (1 + 2z^(−1)) / [ (1 − (7/8)z^(−1) + (3/32)z^(−2)) (1 − z^(−1) + (1/2)z^(−2)) ]
     = 10 (1 + (5/6)z^(−1) − 2z^(−2) + (2/3)z^(−3)) / ( 1 − (15/8)z^(−1) + (47/32)z^(−2) − (17/32)z^(−3) + (3/64)z^(−4) )
Where
H1(z) = (1 − (7/6)z^(−1) + (1/3)z^(−2)) / (1 − (7/8)z^(−1) + (3/32)z^(−2))
H2(z) = 10 (1 + 2z^(−1)) / (1 − z^(−1) + (1/2)z^(−2))
Parallel Form
6. Obtain the direct form I, direct form-II, Cascade and parallel form
realization for the following system, y(n)=-0.1 y(n-1)+0.2y(n-
2)+3x(n)+3.6 x(n-1)+0.6 x(n-2)
(12). (May/june-07)
Solution:
The Direct form realization is done directly from the given i/p o/p
equation, show in below diagram
H(z) = Y(z)/X(z) = ( 3 + 3.6 z^(−1) + 0.6 z^(−2) ) / ( 1 + 0.1 z^(−1) − 0.2 z^(−2) )
For the cascade form,
H(z) = (3 + 0.6 z^(−1))(1 + z^(−1)) / [ (1 + 0.5 z^(−1))(1 − 0.4 z^(−1)) ]
where H1(z) = (3 + 0.6 z^(−1))/(1 + 0.5 z^(−1)) and H2(z) = (1 + z^(−1))/(1 − 0.4 z^(−1))
For the parallel form,
H(z) = −3 + 7/(1 − 0.4 z^(−1)) − 1/(1 + 0.5 z^(−1))
Solution:
Given B_M(z) = 1 + 2z^(−1) + 2z^(−2) + z^(−3)
and A_N(z) = 1 + (13/24)z^(−1) + (5/8)z^(−2) + (1/3)z^(−3)
so a3(0) = 1; a3(1) = 13/24; a3(2) = 5/8; a3(3) = 1/3
k3 = a3(3) = 1/3
Using the recursion
a_(m−1)(k) = [ a_m(k) − a_m(m) a_m(m−k) ] / [ 1 − a_m²(m) ]
a2(2) = [ a3(2) − a3(3) a3(1) ] / [ 1 − a3²(3) ] = [ 5/8 − (1/3)(13/24) ] / (1 − 1/9) = ((45 − 13)/72)(9/8) = 1/2
a2(1) = [ a3(1) − a3(3) a3(2) ] / [ 1 − a3²(3) ] = [ 13/24 − (1/3)(5/8) ] / (8/9) = 3/8
so k2 = a2(2) = 1/2
a1(1) = [ a2(1) − a2(2) a2(1) ] / [ 1 − a2²(2) ] = [ 3/8 − (1/2)(3/8) ] / (1 − 1/4) = 1/4
so k1 = a1(1) = 1/4
The ladder coefficients are obtained from
C_m = b_m − Σ from i=m+1 to M of C_i a_i(i−m),   m = M, M−1, ..., 1, 0
C3 = b3 = 1
C2 = b2 − C3 a3(1) = 2 − (13/24) = 1.4583
C1 = b1 − [ C2 a2(1) + C3 a3(2) ] = 2 − [ (1.4583)(3/8) + (5/8) ] = 0.8281
C0 = b0 − [ C1 a1(1) + C2 a2(2) + C3 a3(3) ]
UNIT III
Advantages:
1. FIR filters have exact linear phase.
2. FIR filters are always stable.
3. FIR filters can be realized in both recursive and non recursive structure.
4. Filters with any arbitrary magnitude response can be realized with FIR filters.
Disadvantages:
1. For the same filter specifications the order of an FIR filter design can be as
high as 5 to 10 times that of an IIR design.
2. Large storage requirements are needed.
3. Powerful computational facilities are required for the implementation.
The optimum equiripple design criterion is used for designing FIR filters with
equal-level ripple throughout the approximation bands.
FIR Filters
In the filter design by Fourier series method the infinite duration impulse response
is truncated to finite duration impulse response at n= (N-1/2). The abrupt
truncation of impulse introduces oscillations in the pass band and stop band. This
effect is known as Gibbs phenomenon.
8.Mention various methods available for the design of FIR filter.Also list a few
window for the design of FIR filters.(May/june-2010)
There are three well-known design techniques for linear phase FIR filters.
They are
FIR filter is always stable because all its poles are at the origin.
15.What are the possible types of impulse response for linear phase FIR
filter?(Nov-11)
There are four types of impulse response for linear phase FIR filters
2. The width of the transition band can be made narrow by increasing the
value of N where N is the length of the window sequence.
3. The attenuation in the stop band is fixed for a given window, except in
case of Kaiser Window where it is variable.
Hamming window: 1. The main lobe width is equal to 8π/N and the peak side lobe level is −41 dB. 2. FIR filters designed with this window have a first side lobe peak of about −53 dB.
Kaiser window: 1. The main lobe width and the peak side lobe level can be varied by varying the parameter β and N. 2. The side lobe peak can be varied by varying the parameter β.
Hd(e^(jω)) = 1   for π/4 ≤ |ω| ≤ π
           = 0   for |ω| < π/4
Solution:
hd(n) = (1/2π) [ ∫ from −π to −π/4 of e^(jωn) dω + ∫ from π/4 to π of e^(jωn) dω ]
hd(n) = (1/πn) [ sin(πn) − sin(πn/4) ]   for −∞ < n < ∞ and n ≠ 0
hd(0) = (1/2π) [ ∫ from −π to −π/4 dω + ∫ from π/4 to π dω ] = 3/4 = 0.75
hd(1) = hd(−1) = −0.225
hd(4) = hd(−4) = 0
Hanning window:
whn(n) = 0.5 + 0.5 cos( 2πn/(M−1) ),   −(M−1)/2 ≤ n ≤ (M−1)/2
       = 0   otherwise
For N = 11:
whn(n) = 0.5 + 0.5 cos( πn/5 ),   −5 ≤ n ≤ 5
whn(0) = 1
whn(1) = whn(−1) = 0.9045
whn(2) = whn(−2) = 0.655
whn(4) = whn(−4) = 0.0945
whn(5) = whn(−5) = 0
h(n) = whn(n) hd(n)
For the low-pass example with cutoff π/4 and M = 7,
hd(n) = sin( (n−3)π/4 ) / ( (n−3)π )
this gives hd(0) = hd(6) = 0.075
hd(1) = hd(5) = 0.159
hd(2) = hd(4) = 0.22
hd(3) = 0.25
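The window method used above is easy to reproduce numerically; the sketch below recomputes the high-pass design (cutoff π/4, N = 11, Hanning window) so the ideal coefficients can be checked against the values listed.

```python
# Sketch: window-method FIR design for the high-pass example above.
import numpy as np

N = 11
n = np.arange(-(N - 1) // 2, (N - 1) // 2 + 1)         # -5 ... 5
wc = np.pi / 4

# ideal high-pass impulse response: hd(0) = 1 - wc/pi, hd(n) = -sin(wc n)/(pi n)
den = np.where(n == 0, 1, n)                           # avoid division by zero at n = 0
hd = np.where(n == 0, 1 - wc / np.pi, -np.sin(wc * n) / (np.pi * den))

w_hann = 0.5 + 0.5 * np.cos(2 * np.pi * n / (N - 1))   # Hanning window of the text
h = hd * w_hann                                        # windowed; shift by 5 for causality
print(np.round(hd[5:8], 3))   # [ 0.75  -0.225 -0.159 ]  = hd(0), hd(1), hd(2)
```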
3. Design a LP FIR filter using the frequency sampling technique having cutoff frequency
of π/2 rad/sample. The filter should have linear phase and length of 17. (12) (May-07)
Hd(e^(jω)) = e^(−j8ω)   for 0 ≤ |ω| ≤ π/2
           = 0          for π/2 < |ω| ≤ π
Selecting ωk = 2πk/M = 2πk/17 for k = 0, 1, ..., 16,
H(k) = Hd(e^(jω)) evaluated at ω = 2πk/17
H(k) = e^(−j16πk/17)   for 0 ≤ 2πk/17 ≤ π/2
     = 0               for π/2 < 2πk/17 ≤ π
i.e. H(k) = e^(−j16πk/17)   for 0 ≤ k ≤ 17/4   (k = 0, 1, ..., 4)
     = 0                    for 17/4 < k ≤ 17/2 (k = 5, ..., 8)
h(n) = (1/17) [ H(0) + 2 Σ from k=1 to 4 of Re( e^(−j16πk/17) e^(j2πkn/17) ) ]
h(n) = (1/17) [ H(0) + 2 Σ from k=1 to 4 of cos( 2πk(n − 8)/17 ) ],   n = 0, 1, ..., 16
b) Hamming window
(6)
Solution:
Hd(e^(jω)) = jω   for −π ≤ ω ≤ π
hd(n) = (1/2π) ∫ from −π to π of jω e^(jωn) dω = cos(πn)/n   for n ≠ 0, and hd(0) = 0
a) Rectangular window
h(n) = hd(n) wr(n)
h(1) = −h(−1) = hd(1) = −1
h(2) = −h(−2) = hd(2) = 0.5
h(3) = −h(−3) = hd(3) = −0.33
thus h(n) = [0.33, −0.5, 1, 0, −1, 0.5, −0.33]
Also, from the equation
Hr(e^(jω)) = 2 Σ from n=0 to (M−3)/2 of h(n) sin( ω((M−1)/2 − n) )
Hr(e^(jω)) = 0.666 sin 3ω − sin 2ω + 2 sin ω
H(e^(jω)) = j Hr(e^(jω)) = j ( 0.666 sin 3ω − sin 2ω + 2 sin ω )
b) Hamming window
h(n) = hd(n) wh(n)
wh(n) = 0.54 + 0.46 cos( 2πn/(M−1) ),   −(M−1)/2 ≤ n ≤ (M−1)/2
      = 0   otherwise
For M = 7: wh(n) = 0.54 + 0.46 cos( πn/3 ),   −3 ≤ n ≤ 3
wh(n) = [0.08, 0.31, 0.77, 1, 0.77, 0.31, 0.08]
Thus h(n), shifted for causality, = [0.0267, −0.155, 0.77, 0, −0.77, 0.155, −0.0267]
Similar to the earlier case of the rectangular window we can write the frequency
response of the differentiator as
H(e^(jω)) = j Hr(e^(jω)) = j ( 0.0534 sin 3ω − 0.31 sin 2ω + 1.54 sin ω )
5.Justify Symmetric and Anti-symmetric FIR filters giving out Linear Phase
characteristics.(Apr-08)
(10)
Symmetry in filter impulse response will ensure linear phase
An FIR filter of length M with i/p x(n) & o/p y(n) is described by the
difference equation:
y(n) = b0 x(n) + b1 x(n−1) + ... + b_(M−1) x(n−(M−1)) = Σ from k=0 to M−1 of b_k x(n−k)    (1)
An FIR filter has linear phase if its unit sample response satisfies the condition
If M is odd
H(z) = h(0) + h(1) z^(−1) + ... + h((M−1)/2) z^(−(M−1)/2) + h((M+1)/2) z^(−(M+1)/2) + h((M+3)/2) z^(−(M+3)/2) + ...
       + h(M−2) z^(−(M−2)) + h(M−1) z^(−(M−1))
     = z^(−(M−1)/2) [ h(0) z^((M−1)/2) + h(1) z^((M−3)/2) + ... + h((M−1)/2) + h((M+1)/2) z^(−1) + h((M+3)/2) z^(−2) + ... + h(M−1) z^(−(M−1)/2) ]
Applying the symmetry conditions for M odd
h(0) = h(M−1)
h(1) = h(M−2)
.
.
h((M−1)/2) = h((M−1)/2)
h((M+1)/2) = h((M−3)/2)
.
.
h(M−1) = h(0)
H(z) = z^(−(M−1)/2) [ h((M−1)/2) + Σ from n=0 to (M−3)/2 of h(n) { z^((M−1−2n)/2) + z^(−(M−1−2n)/2) } ]
similarly for M even
H(z) = z^(−(M−1)/2) Σ from n=0 to M/2−1 of h(n) { z^((M−1−2n)/2) + z^(−(M−1−2n)/2) }
Or equivalently, the system function is
H(z) = Σ from k=0 to M−1 of b_k z^(−k)
where we can identify h(n) = b_n for 0 ≤ n ≤ M−1, and 0 otherwise.
The structures used to realize FIR filters are:
1. Direct form
2. Cascade form
3. Frequency-sampling realization
4. Lattice realization
Direct form
It is non-recursive in structure.
Where Hk(z) = bk0 + bk1 z^-1 + bk2 z^-2,  k = 1, 2, ..., K
and K = integer part of (M+1)/2.
In case of a linear phase FIR filter, the symmetry in h(n) implies that the zeros of H(z) also exhibit a form of symmetry. If zk and zk* are a pair of complex-conjugate zeros, then 1/zk and 1/zk* are also a pair of complex-conjugate zeros. Thus simplified fourth-order sections are formed. This is shown below.
H(z) = (1 - z^-N) (1/N) Σ_{k=0}^{N-1} H(k) / (1 - W_N^{-k} z^-1)

This form can be realized with a cascade of FIR and IIR structures. The term (1 - z^-N) is realized as FIR and the term (1/N) Σ_{k=0}^{N-1} H(k)/(1 - W_N^{-k} z^-1) as an IIR structure.

The realization of the above frequency sampling form shows the necessity of complex arithmetic.
Lattice realization
Lattice structures offer many interesting features:
Consider

H(z) = Y(z)/X(z) = 1 + Σ_{i=1}^{m} a_m(i) z^{-i}

When m = 1:   Y(z)/X(z) = 1 + a1(1) z^-1

f0(n) = r0(n) = x(n)

The outputs are

f1(n) = f0(n) + k1 r0(n-1)      (1a)
r1(n) = k1 f0(n) + r0(n-1)      (1b)

If k1 = a1(1), then f1(n) = y(n).
If m=2
Y(z)/X(z) = 1 + a2(1) z^-1 + a2(2) z^-2

y(n) = x(n) + a2(1) x(n-1) + a2(2) x(n-2)

y(n) = f1(n) + k2 r1(n-1)      (2)

y(n) = f0(n) + k1 r0(n-1) + k2 [ k1 f0(n-1) + r0(n-2) ]
     = f0(n) + k1 r0(n-1) + k2 k1 f0(n-1) + k2 r0(n-2)

Since f0(n) = r0(n) = x(n),

y(n) = x(n) + k1 x(n-1) + k2 k1 x(n-1) + k2 x(n-2)
     = x(n) + (k1 + k1 k2) x(n-1) + k2 x(n-2)      (3)

Comparing coefficients we recognize

a2(1) = k1 + k1 k2
a2(2) = k2

so that

k1 = a2(1) / (1 + a2(2))   and   k2 = a2(2)      (4)

Equation (3) means that the lattice structure for a second-order filter is simply a cascade of two first-order filters with k1 and k2 as defined in eq. (4).
M stages

We recognize h(n) = { 1, 1/3, 1/4, 1/4, 1/3, 1 }.

M is even (= 6), and we observe h(n) = h(M-1-n), i.e. h(n) = h(5-n):
h(0) = h(5), h(1) = h(4), h(2) = h(3).
Solution:
m = 9
h(n) = { 1, 1/4, 1/3, 1/2, 0, -1/2, -1/3, -1/4, -1 }
Odd symmetry.
Solution:
Given a2(1) = 2, a2(2) = 1/3.

m = M, M-1, ..., 2, 1

If m = 2:  k2 = a2(2) = 1/3

If m = 1:  k1 = a1(1)

a1(1) = a2(1) / (1 + a2(2)) = 2 / (1 + 1/3) = 3/2

Hence k1 = a1(1) = 3/2.
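The step-down recursion implied by eq. (4) extends to any filter order. Below is a minimal Python sketch (NumPy assumed; the function name is mine) that converts direct-form FIR coefficients to lattice reflection coefficients and reproduces k1 = 3/2, k2 = 1/3 for the example above.

import numpy as np

def fir_to_lattice(a):
    """Convert A(z) = 1 + a[1] z^-1 + ... + a[M] z^-M into lattice reflection
    coefficients k[1..M] using the step-down recursion
    a_{m-1}(i) = (a_m(i) - k_m * a_m(m-i)) / (1 - k_m^2), with k_m = a_m(m)."""
    a = np.asarray(a, dtype=float)
    M = len(a) - 1
    k = np.zeros(M)
    for m in range(M, 0, -1):
        k[m - 1] = a[m]
        if abs(k[m - 1]) == 1.0:
            raise ValueError("singular step-down (|k| = 1)")
        prev = (a[1:m] - k[m - 1] * a[m - 1:0:-1]) / (1 - k[m - 1] ** 2)
        a = np.concatenate(([1.0], prev))
    return k

# Second-order example from the text: a2(1) = 2, a2(2) = 1/3
print(fir_to_lattice([1.0, 2.0, 1.0 / 3.0]))   # expect [1.5, 0.3333]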
UNIT IV
Sub band coding is a method by which the signal (speech signal) is sub divided in
to several frequency bands and each band is digitally encoded separately.
4.Identify the various factors which degrade the performance of the digital
filter implementation when finite word length is used.(May-07,May-2010)
Truncation is the process of reducing the size of a binary number by discarding all bits less significant than the least significant bit that is retained.
Rounding is the process of reducing the size of a binary number to finite word
sizes of b-bits such that, the rounded b-bit number is closest to the original
unquantized number.
In recursive system when the input is zero or some nonzero constant value, the
nonlinearities due to finite precision arithmetic operation may cause periodic
oscillations in the output. These oscillations are called limit cycles.
i. Input quantization error ii. Product quantization error iii. Coefficient quantization
error.
The IEEE-754 standard for a 32-bit single precision floating point number is given by

Floating point number, Nf = (-1)^S * 2^(E-127) * M

bit 0: S (sign),  bits 1-8: E (exponent),  bits 9-31: M (mantissa)

Fixed point arithmetic:
1. The accuracy of the result is less due to the smaller dynamic range.
4. Fixed point arithmetic can be used for real-time computations.

Floating point arithmetic:
1. The accuracy of the result is higher due to the larger dynamic range.
4. Floating point arithmetic cannot be used for real-time computations.
To prevent overflow, the signal level at certain points in the digital filters
must be scaled so that no overflow occurs in the adder.
14.What are the results of truncation for positive & negative numbers?(Nov-
06)
To truncate these numbers to 4 decimal digits, we only consider the 4 digits to the
right of the decimal point.
The result would be 5.6341, 32.4381, 6.3444. Note that in some cases,
truncating would yield the same result as rounding, but truncation does not round
up or round down the digits; it merely cuts off at the specified digit. The truncation
error can be twice the maximum error in rounding.
16. List out some of the finite word length effects in digital filter.(Apr-06)
1. Signed-magnitude format
2. Ones-complement format
3. Twos-complement format.
In all the three formats, the positive number is same but they differ only in
representing negative numbers.
The floating numbers will have a mantissa part and exponent part. In a given
word size the bits allotted for mantissa and exponent are fixed. The mantissa is used
to represent a binary fraction number and the exponent is a positive or negative
binary integer. The value of the exponent can be adjusted to move the position of
binary point in mantissa. Hence this representation is called floating point. The
floating point number can be expressed as,
In digital computation, the output of multipliers, i.e. the products, is quantized to finite word length in order to store them in registers and to be used in subsequent calculations. The error due to the quantization of the output of multipliers is referred to as product quantization error.

In fixed point addition, overflow occurs when the sum exceeds the finite word length of the register used to store the sum. The overflow in addition may lead to oscillation in the output, which is called an overflow limit cycle.

25. Define white noise. (Dec-06)
A stationary random process is said to be white noise if its power density spectrum is constant. Hence white noise has a flat frequency spectrum.
i) saturation arithmetic
ii) scaling
29. What is meant by A/D conversion noise?(MU-04)
A DSP contains a device, the A/D converter, that operates on the analog input x(t) to produce xq(n), a binary sequence of 0s and 1s. First the signal x(t) is sampled at regular intervals to produce a sequence x(n) of infinite precision. Each sample x(n) is then expressed with a finite number of bits, giving the sequence xq(n). The difference signal e(n) = xq(n) - x(n) is called A/D conversion noise.
16 MARKS
1, Explain in detail about Number Representation.
Number Representation
In digital signal processing, (B + 1)-bit fixed-point numbers are usually
represented as twos- complement signed fractions in the format
b0 . b-1 b-2 ... b-B

The number represented is then

X = -b0 + b-1 2^-1 + b-2 2^-2 + ... + b-B 2^-B      (3.1)

where b0 is the sign bit and the number range is -1 <= X < 1. The advantage of this representation is that the product of two numbers in the range from -1 to 1 is another number in the same range. Floating-point numbers are represented as

X = (-1)^s m 2^c      (3.2)

where s is the sign bit, m is the mantissa, and c is the characteristic or exponent. To make the representation of a number unique, the mantissa is normalized so that 0.5 <= m < 1.

Although floating-point numbers are always represented in the form of (3.2), the way in which this representation is actually stored in a machine may differ. Since m >= 0.5, it is not necessary to store the 2^-1-weight bit of m, which is always set. Therefore, in practice numbers are usually stored as

X = (-1)^s (0.5 + f) 2^c      (3.3)
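A small Python sketch (NumPy assumed; the function and parameter names are mine) makes the fixed-point fraction format concrete by quantizing values to B fractional bits with either rounding or two's-complement truncation.

import numpy as np

def quantize_fixed(x, B, mode="round"):
    """Quantize x in [-1, 1) to a (B+1)-bit two's-complement fraction (step 2^-B)."""
    step = 2.0 ** (-B)
    if mode == "round":
        q = np.round(x / step) * step     # rounding: error magnitude at most step/2
    elif mode == "truncate":
        q = np.floor(x / step) * step     # two's-complement truncation: error in (-step, 0]
    else:
        raise ValueError(mode)
    return np.clip(q, -1.0, 1.0 - step)   # stay inside the representable range

x = np.array([0.30, -0.70, 0.12345])
print(quantize_fixed(x, 3, "round"))      # 3 fractional bits -> step 1/8
print(quantize_fixed(x, 3, "truncate"))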
σfr² = E{ (fr - mfr)² } = (1/A) ∫_{-A/2}^{A/2} (fr - mfr)² dfr = A²/12      (3.12)

Likewise, for truncation,

mft = E{ft} = -A/2
σft² = E{ (ft - mft)² } = A²/12      (3.13)

and, for magnitude truncation,

mfmt = E{fmt} = 0
σfmt² = E{ (fmt - mfmt)² } = A²/3      (3.14)
σεr² = (0.167) 2^-2B = 2^-2B / 6      (3.18)

In practice, the distribution of m is not exactly uniform. Actual measurements of roundoff noise in [1] suggested that

σεr² ≈ 0.23 · 2^-2B      (3.19)

while a detailed theoretical and experimental analysis in [2] determined

σεr² ≈ 0.18 · 2^-2B      (3.20)

From (3.15) we can represent a quantized floating-point value in terms of the unquantized value and the random variable εr using

Qr(X) = X(1 + εr)      (3.21)

Therefore, the finite-precision product X1 X2 and the sum X1 + X2 can be written as

fl(X1 X2) = X1 X2 (1 + εr)      (3.22)

and

fl(X1 + X2) = (X1 + X2)(1 + εr)      (3.23)

where εr is zero-mean with the variance of (3.20).
my = mx Σ_{n=-∞}^{∞} g(n)      (3.24)

and variance

σy² = σx² Σ_{n=-∞}^{∞} g²(n)      (3.25)

Therefore, if g(n) is the impulse response from the point where a roundoff takes place to the filter output, the contribution of that roundoff to the variance (mean-square value) of the output roundoff noise is given by (3.25) with σx² replaced with the variance of the roundoff. If there is more than one source of roundoff error in the filter, it is assumed that the errors are uncorrelated, so the output noise variance is simply the sum of the contributions from each source.
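Equation (3.25) can be applied directly in code. The sketch below (Python/NumPy; names mine) sums g²(n) over a truncated impulse response to estimate the output variance contributed by one rounding source.

import numpy as np

def roundoff_noise_variance(g, B):
    """Output variance contributed by one rounding source with impulse response
    g(n) from the injection point to the output: (2^-2B / 12) * sum g^2(n)."""
    source_var = (2.0 ** (-2 * B)) / 12.0
    return source_var * np.sum(np.asarray(g) ** 2)

# First-order example: g(n) = a^n u(n), so the sum of g^2 approaches 1/(1 - a^2)
a, B = 0.9, 15
g = a ** np.arange(2000)                 # long enough to approximate the infinite sum
print(roundoff_noise_variance(g, B))     # close to (2^-30 / 12) / (1 - 0.81)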
where e(n) is a random roundoff noise sequence. Since e(n) is injected at the same point as the input, it propagates through a system with impulse response h(n). Therefore, for fixed-point arithmetic with rounding, the output roundoff noise variance from (3.6), (3.12), (3.25), and (3.33) is

σo² = (2^-2B / 12) Σ_{n=0}^{∞} h²(n) = (2^-2B / 12) Σ_{n=0}^{∞} a^{2n} = (2^-2B / 12) · 1/(1 - a²)      (3.36)

With fixed-point arithmetic there is the possibility of overflow following addition. To avoid overflow it is necessary to restrict the input signal amplitude. This can be accomplished by either placing a scaling multiplier at the filter input or by simply limiting the maximum input signal amplitude. Consider the case of the first-order filter of (3.34). The transfer function of this filter is

H(e^jω) = Y(e^jω)/X(e^jω) = 1 / (1 - a e^-jω)      (3.37)

so

|H(e^jω)|² = 1 / (1 + a² - 2a cos ω)      (3.38)

and

|H(e^jω)|max = 1 / (1 - |a|)      (3.39)

The peak gain of the filter is 1/(1 - |a|), so limiting input signal amplitudes to |x(n)| ≤ 1 - |a| will make overflows unlikely.

An expression for the output roundoff noise-to-signal ratio can easily be obtained for the case where the filter input is white noise, uniformly distributed over the interval from -(1 - |a|) to (1 - |a|) [4,5]. In this case

σx² = ( 1 / (2(1 - |a|)) ) ∫_{-(1-|a|)}^{(1-|a|)} x² dx = (1/3)(1 - |a|)²      (3.40)

so, from (3.25),

σy² = (1/3) (1 - |a|)² / (1 - a²)      (3.41)

Combining (3.36) and (3.41) then gives

σo²/σy² = ( (2^-2B / 12) · 1/(1 - a²) ) · ( 3(1 - a²)/(1 - |a|)² ) = 3 · 2^-2B / ( 12 (1 - |a|)² )      (3.42)
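The closed-form ratio (3.42), as reconstructed above, is easy to evaluate. A tiny Python sketch (names mine) shows how quickly the roundoff noise-to-signal ratio grows as the pole approaches the unit circle.

def noise_to_signal_first_order(a, B):
    """Noise-to-signal ratio of eq. (3.42) for y(n) = a*y(n-1) + x(n), one rounding
    source, and white input uniform on (-(1-|a|), 1-|a|)."""
    return 3 * 2.0 ** (-2 * B) / (12 * (1 - abs(a)) ** 2)

for a in (0.5, 0.9, 0.99):
    print(a, noise_to_signal_first_order(a, B=15))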
There are two noise sources contributing to e(n) if quantization is performed after
each multiply, and there is one noise source if quantization is performed after
summation. Since
Σ_{n=0}^{∞} h²(n) = (1 + r²) / [ (1 - r²)( (1 + r²)² - 4r² cos²θ ) ]      (3.46)

the output roundoff noise is

σo² = ν (2^-2B / 12) (1 + r²) / [ (1 - r²)( (1 + r²)² - 4r² cos²θ ) ]      (3.47)

where ν = 1 for quantization after summation, and ν = 2 for quantization after each multiply. To obtain an output noise-to-signal ratio we note that

H(e^jω) = 1 / ( 1 - 2r cos θ e^-jω + r² e^-j2ω )      (3.48)

and, using the approach of [6],

|H(e^jω)|²max = 1 / { 4r² [ sat( ((1+r²)/2r) cos θ ) - ((1+r²)/2r) cos θ ]² + (1 - r²)² sin²θ }      (3.49)

where

sat(µ) = 1 for µ > 1;  µ for -1 ≤ µ ≤ 1;  -1 for µ < -1      (3.50)

Following the same approach as for the first-order case then gives

σo²/σy² = 3ν (2^-2B / 12) (1 + r²) / [ (1 - r²)( (1 + r²)² - 4r² cos²θ ) ]
          × 1 / { 4r² [ sat( ((1+r²)/2r) cos θ ) - ((1+r²)/2r) cos θ ]² + (1 - r²)² sin²θ }      (3.51)
Figure 3.1 is a contour plot showing the noise-to-signal ratio of (3.51) for ν = 1 in units of the noise variance of a single quantization, 2^-2B/12. The plot is symmetrical about θ = 90°, so only the range from 0° to 90° is shown. Notice that as r → 1, the roundoff noise increases without bound. Also notice that the noise increases as θ → 0.
It is possible to design state-space filter realizations that minimize fixed-point
roundoff noise [7] - [10]. Depending on the transfer function being realized, these
structures may provide a roundoff noise level that is orders-of-magnitude lower
than for a nonoptimal realization. The price paid for this reduction in roundoff
noise is an increase in the number of computations required to implement the
filter. For an Nth-order filter the increase is from roughly 2N multiplies for a
direct form realization to roughly (N + 1)2 for an optimal realization. However,
if the filter is realized by the parallel or cascade connection of first- and second-
order optimal subfilters, the increase is only to about 4N multiplies. Furthermore,
near-optimal realizations exist that increase the number of multiplies to only
about 3N [10].
Notice that while the input is zero except for the first sample, the output oscillates
with amplitude 1/8 and period 6.
Limit cycles are primarily of concern in fixed-point recursive filters. As long as
floating-point filters are realized as the parallel or cascade connection of first- and
second-order subfilters, limit cycles will generally not be a problem since limit
cycles are practically not observable in first- and second-order systems
implemented with 32-b floating-point arithmetic [12]. It has been shown that such
systems must have an extremely small margin of stability for limit cycles to exist
at anything other than underflow levels, which are at an amplitude of less than
10^-38 [12]. There are at least three ways of dealing with limit cycles when fixed-
point arithmetic is used. One is to determine a bound on the maximum limit cycle
amplitude, expressed as an integral number of quantization steps [13]. It is then
possible to choose a word length that makes the limit cycle amplitude acceptably
low. Alternately, limit cycles can be prevented by randomly rounding calculations
up or down [14]. However, this approach is complicated to implement. The third
approach is to properly choose the filter realization structure and then quantize
the filter calculations using magnitude truncation [15,16]. This approach has the
disadvantage of producing more roundoff noise than truncation or rounding [see
(3.12)-(3.14)].
(Equations (3.72)-(3.74) work this out numerically for a second-order section of the form y(n) = Qr{ a1 y(n-1) + a2 y(n-2) + x(n) } with rounded finite-precision arithmetic and a short two-sample input; the rounded output settles into the sustained oscillation of amplitude 1/8 and period 6 described above instead of decaying to zero.)
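A short Python experiment (an assumption on my part: a first-order section is used here rather than the second-order example of (3.72), and the names are mine) makes a zero-input limit cycle visible. With exact arithmetic the response decays to zero, but with the product rounded to 3 fractional bits the output settles into a sustained ±1/8 oscillation.

import numpy as np

def rounded_first_order(a, y0, B, n_samples):
    """Zero-input response of y(n) = Q_r{a*y(n-1)} with the product rounded to B
    fractional bits; exact arithmetic would decay to zero for |a| < 1."""
    step = 2.0 ** (-B)
    y = [y0]
    for _ in range(n_samples):
        y.append(np.floor(a * y[-1] / step + 0.5) * step)   # round-half-up quantizer
    return np.array(y)

print(rounded_first_order(a=-0.75, y0=0.875, B=3, n_samples=12))
# the tail alternates ..., +0.125, -0.125, +0.125, ... instead of decaying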
One way to prevent overflow oscillations is to scale the filter calculations so as to render overflow impossible. However,
this may unacceptably restrict the filter dynamic range. Another method is to
force completed sums-of- products to saturate at 1, rather than overflowing
[18,19]. It is important to saturate only the completed sum, since intermediate
overflows in twos complement arithmetic do not affect the accuracy of the final
result. Most fixed-point digital signal processors provide for automatic saturation
of completed sums if their saturation arithmetic feature is enabled. Yet
another way to avoid overflow oscillations is to use a filter structure for which
any internal filter transient is guaranteed to decay to zero [20]. Such structures are
desirable anyway, since they tend to have low roundoff noise and be insensitive
to coefficient quantization [21].
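The saturation idea can be expressed in a few lines. The sketch below (plain Python; names mine) shows a fractional adder that clips to the ends of the number range instead of wrapping around, which is the usual cause of overflow oscillations.

def saturating_add(x, y):
    """Two's-complement fractional add that saturates at the ends of [-1, 1)
    instead of wrapping around."""
    s = x + y
    if s >= 1.0:
        return 1.0 - 2.0 ** -15          # largest positive value of a 16-bit fraction
    if s < -1.0:
        return -1.0
    return s

print(saturating_add(0.8, 0.5))          # about 0.99997 instead of wrapping negative
print(saturating_add(-0.9, -0.4))        # -1.0 instead of wrapping positive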
FIGURE: Realizable pole locations for the difference equation of (3.76).
The sparseness of realizable pole locations near z = 1 will result in a large coefficient quantization error for poles in this region.

Figure 3.4 gives an alternative structure to (3.77) for realizing the transfer function of (3.76). Notice that quantizing the coefficients of this structure corresponds to quantizing Xr and Xi. As shown in Fig. 3.5 from [5], this results in a uniform grid of realizable pole locations. Therefore, large coefficient quantization errors are avoided for all pole locations.

It is well established that filter structures with low roundoff noise tend to be robust to coefficient quantization, and vice versa [22]-[24]. For this reason, the uniform grid structure of Fig. 3.4 is also popular because of its low roundoff noise. Likewise, the low-noise realizations of [7]-[10] can be expected to be relatively insensitive to coefficient quantization, and digital wave filters and lattice filters that are derived from low-sensitivity analog structures tend to have not only low coefficient sensitivity, but also low roundoff noise [25,26].

It is well known that in a high-order polynomial with clustered roots, the root location is a very sensitive function of the polynomial coefficients. Therefore, filter poles and zeros can be much more accurately controlled if higher order filters are realized by breaking them up into the parallel or cascade connection of first- and second-order subfilters. One exception to this rule is the case of linear-phase FIR filters, in which the symmetry of the polynomial coefficients and the spacing of the filter zeros around the unit circle usually permit an acceptable direct realization using the convolution summation.

Given a filter structure it is necessary to assign the ideal pole and zero locations to the realizable locations. This is generally done by simply rounding or truncating the filter coefficients to the available number of bits, or by assigning the ideal pole and zero locations to the nearest realizable locations. A more complicated alternative is to consider the original filter design problem as a problem in discrete optimization, and choose the realizable pole and zero locations that give the best approximation to the desired filter response [27]-[30].

FIGURE 3.4: Alternate realization structure.
FIGURE 3.5: Realizable pole locations for the alternate realization structure.
When realizing IIR filters, either a parallel or cascade connection of first- and second-order subfilters is almost
always preferable to a high-order direct-form realization. With the availability of very low-cost floating-point
digital signal processors, like the Texas Instruments TMS320C32, it is highly recommended that floating-point
arithmetic be used for IIR filters. Floating-point arithmetic simultaneously eliminates most concerns regarding
scaling, limit cycles, and overflow oscillations. Regardless of the arithmetic employed, a low roundoff noise
structure should be used for the second- order sections. Good choices are given in [2] and [10]. Recall that
realizations with low fixed-point roundoff noise also have low floating-point roundoff noise. The use of a low
roundoff noise structure for the second-order sections also tends to give a realization with low coefficient
quantization sensitivity. First-order sections are not as critical in determining the roundoff noise and coefficient
sensitivity of a realization, and so can generally be implemented with a simple direct form structure.
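The sensitivity of directly quantized coefficients can be observed numerically. The Python sketch below (NumPy assumed; names mine) quantizes the direct-form coefficients 2r cos θ and r² of one second-order section and reports where the poles actually land; for poles close to z = 1 the displacement is noticeable even with 8 fractional bits.

import numpy as np

def quantized_poles(r, theta, B):
    """Quantize the direct-form coefficients a1 = 2 r cos(theta), a2 = r^2 of
    1/(1 - a1 z^-1 + a2 z^-2) to B fractional bits and return the resulting poles."""
    step = 2.0 ** (-B)
    a1 = np.round(2 * r * np.cos(theta) / step) * step
    a2 = np.round(r ** 2 / step) * step
    return np.roots([1.0, -a1, a2])          # poles of the quantized section

print(quantized_poles(r=0.99, theta=np.deg2rad(5), B=8))
print(0.99 * np.exp(1j * np.deg2rad(5)))     # the intended pole, for comparison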
2 MARKS
Changing one sampling rate to other sampling rate is called sampling rate
conversion.
16 MARKS
Definition.
Given an integer D, we define the downsampling operator S i J D ,shown
in Figure by the following relationship:
>{] - " *[>]
The operator SIW decreases the sampling frequency by a factor of D, by
keeping one sample out of D samples.
Figure Upsampling
jctn]
Figure D o w n s a m p l i n g
Example
Let x = [ ..., 1,2,3,4,5,6,... ] , then j[n] = S(/2x[n] is given by
?[] = [ 1.3,5f7 ]
Sampling Rate Conversion by a Rational Factor
Consider the problem of designing an algorithm for resampling a digital signal x[n] from the original rate Fx (in Hz) into a rate Fy = (L/D)Fx, with L and D integers. For example, we have a signal at telephone quality, Fx = 8 kHz, and we want to resample it at radio quality, Fy = 22 kHz. In this case, clearly L = 11 and D = 4.
First consider two particular cases, interpolation and decimation, where we upsample and downsample by an integer factor without creating aliasing or image frequencies.
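Integer-factor decimation and interpolation are one-liners in code. The sketch below (Python/NumPy; names mine) implements the downsampling operator defined above and the matching zero-insertion upsampler; the anti-aliasing and anti-image filtering discussed in this section is deliberately omitted here.

import numpy as np

def downsample(x, D):
    """Keep one sample out of D: y[n] = x[n*D]."""
    return np.asarray(x)[::D]

def upsample(x, L):
    """Insert L-1 zeros between samples (to be followed by an anti-image filter)."""
    x = np.asarray(x)
    y = np.zeros(len(x) * L, dtype=x.dtype)
    y[::L] = x
    return y

x = np.array([1, 2, 3, 4, 5, 6, 7, 8])
print(downsample(x, 2))     # [1 3 5 7], as in the example above
print(upsample(x[:3], 3))   # [1 0 0 2 0 0 3 0 0]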
Example
As an example of application, suppose you want to design a filter with the following specifications:
Passband Fp = 450 Hz
Stopband Fs = 500 Hz
Sampling frequency Fs = 96 kHz
Notice that the stopband edge is several orders of magnitude smaller than the sampling frequency. This leads to a filter with a very short transition region and hence of high complexity.
Speech signals
From prehistory to the new media of the future, speech has
been and will be a primary form of communication between
humans.
Nevertheless, there often occur conditions under which we
measure and then transform the speech to another form, speech
signal, in order to enhance our ability to communicate.
The speech signal is extended, through technological media such
as telephony, movies, radio, television, and now Internet. This
trend reflects the primacy of speech communication in human
psychology.
Speech will become the next major trend in the personal
computer market in the near future.
Speech signal processing
The topic of speech signal processing can be loosely defined as
the manipulation of sampled speech signals by a digital processor
to obtain a new signal with some desired properties.
Speech signal processing is a diverse field that relies on knowledge of language at several levels:
- Signal processing
- Acoustics
- Phonetics
- Phonology
- Morphology
- Syntax
- Semantics
- Pragmatics
The lower (acoustic and phonetic) layers are largely language-independent, while the higher linguistic layers are language-dependent. Together these layers describe the path from speech to speech signal in terms of digital signal processing.

The acoustic (and perceptual) features of a speech signal are:
- fundamental frequency F0 (pitch)
- amplitude (loudness)
- spectrum (timbre)

Typical voiced speech
signals are well modelled by such features. APC assumes that the input speech signal is repetitive with a period significantly longer than the average frequency content. Two predictors are used in APC. The high-frequency components (up to 4 kHz) are estimated using a spectral or formant predictor and the low-frequency components (50-200 Hz) by a pitch or fine-structure predictor (see Figure 7.4). The spectral estimator may be of order 14 and the pitch estimator about order 10. The low-frequency components of the speech signal are due to the movement of the tongue and chin, and form the spectral envelope (the formants).
The high-frequency components originate from the vocal cords and the noise-like sounds (like in 's') produced in the front of the mouth.
The output signal y(n), together with the predictor parameters obtained adaptively in the encoder, is transmitted to the decoder, where the speech signal is reconstructed. The decoder has the same structure as the encoder, but the predictors are not adaptive and are invoked in the reverse order. The prediction parameters are adapted for blocks of data corresponding to, for instance, 20 ms time periods.
APC is used for coding speech at 9.6 and 16 kbit/s. The algorithm works well in noisy environments, but unfortunately the quality of the processed speech is not as good as for other methods like CELP described below.
Figure 7.6: The LPC model (periodic pitch excitation or noise source, voiced/unvoiced switch, gain, and synthesis filter producing the synthetic speech).
The first vocoder was designed by H. Dudley in the 1930s and demonstrated at the New York Fair in 1939. Vocoders have become popular as they achieve reasonably good speech quality at low data rates, from 2.4 kbit/s to 9.6 kbit/s. There are many types of vocoders (Marven and Ewers, 1993); some of the most common techniques will be briefly presented below.

Most vocoders rely on a few basic principles. Firstly, the characteristics of the speech signal are assumed to be fairly constant over a time of approximately 20 ms, hence most signal processing is performed on (overlapping) data blocks of 20-40 ms length. Secondly, the speech model consists of a time-varying filter corresponding to the acoustic properties of the mouth and an excitation signal. The excitation signal is either a periodic waveform, as created by the vocal cords, or a random noise signal for production of unvoiced sounds, for example 's' and 't'. The filter parameters and excitation parameters are assumed to be independent of each other and are commonly coded separately.

Linear predictive coding (LPC) is a popular method, which has however been replaced by newer approaches in many applications. LPC works exceedingly well at low bit rates and the LPC parameters contain sufficient information about the speech signal to be used in speech recognition applications. The LPC model is shown in Figure 7.6.

LPC is basically an auto-regressive model (see Chapter 5) and the vocal tract is modelled as a time-varying all-pole filter (IIR filter) having the transfer function

H(z) = 1 / ( 1 + Σ_{k=1}^{p} a_k z^-k )      (7.17)

where p is the order of the filter. The excitation signal e(n), being either noise or a periodic waveform, is fed to the filter via a variable gain factor G. The output signal can be expressed in the time domain as

y(n) = G e(n) - a_1 y(n-1) - a_2 y(n-2) - ... - a_p y(n-p)      (7.18)
i.e. as a linear combination of past output samples and the excitation signal (hence linear predictive coding). The filter coefficients a_k are time-varying.

The model above describes how to synthesize the speech given the pitch information (whether noise or periodic excitation should be used), the gain and the filter parameters. These parameters must be determined by the encoder or the analyser, taking the original speech signal x(n) as input.

The analyser windows the speech signal in blocks of 20-40 ms, usually with a Hamming window (see Chapter 5). These blocks or frames are repeated every 10-30 ms, hence there is a certain overlap in time. Every frame is then analysed with respect to the parameters mentioned above.

Firstly, the pitch frequency is determined. This also tells whether we are dealing with a voiced or unvoiced speech signal. This is a crucial part of the system and many pitch detection algorithms have been proposed. If the segment of the speech signal is voiced and has a clear periodicity, or if it is unvoiced and not periodic, things are quite easy. Segments having properties in between these two extremes are difficult to analyse. No algorithm has been found so far that is 'perfect' for all listeners.

Now, the second step of the analyser is to determine the gain and the filter parameters. This is done by estimating the speech signal using an adaptive predictor. The predictor has the same structure and order as the filter in the synthesizer. Hence, the output of the predictor is

x^(n) = -a_1 x(n-1) - a_2 x(n-2) - ... - a_p x(n-p)      (7.19)

where x^(n) is the predicted input speech signal and x(n) is the actual input signal. The filter coefficients a_k are determined by minimizing the square error

Σ_n ( x(n) - x^(n) )²      (7.20)

This can be done in different ways, either by calculating the autocorrelation coefficients and solving the Yule-Walker equations (see Chapter 5) or by using some recursive, adaptive filter approach (see Chapter 3).

So, for every frame, all the parameters above are determined and transmitted to the synthesiser, where a synthetic copy of the speech is generated.
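The autocorrelation / Yule-Walker route mentioned above can be sketched compactly in Python (NumPy assumed; the names are mine, the test frame is synthetic, and the sign convention here is x^(n) = Σ a_k x(n-k), i.e. opposite to (7.19)).

import numpy as np

def lpc_coefficients(frame, p):
    """Estimate order-p LPC coefficients of one speech frame by windowing,
    forming the autocorrelation sequence, and solving the normal equations."""
    frame = np.asarray(frame, dtype=float) * np.hamming(len(frame))
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:]) for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz matrix
    a = np.linalg.solve(R, r[1:])                 # predictor coefficients
    gain = np.sqrt(max(r[0] - np.dot(a, r[1:]), 0.0))
    return a, gain

# Synthetic test frame: a decaying sinusoid standing in for a voiced segment
n = np.arange(240)                                # 30 ms at 8 kHz
frame = (0.98 ** n) * np.sin(2 * np.pi * 0.05 * n)
a, g = lpc_coefficients(frame, p=10)
print(np.round(a, 3), round(float(g), 4))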
An improved version of LPC is residual excited linear prediction (RELP). Let us take a closer look at the error or residual signal r(n) resulting from the prediction in the analyser (equation (7.19)). The residual signal (which we are trying to minimize) can be expressed as

r(n) = x(n) - x^(n) = x(n) + a_1 x(n-1) + a_2 x(n-2) + ... + a_p x(n-p)      (7.21)

From this it is straightforward to find out that the corresponding expression using the z-transforms is

R(z) = X(z) H^-1(z)      (7.22)

Hence, the predictor can be regarded as an inverse filter to the LPC model filter. If we now pass this residual signal to the synthesizer and use it to excite the LPC filter, that is E(z) = R(z), instead of using the noise or periodic waveform sources, we get

Y(z) = E(z) H(z) = R(z) H(z) = X(z) H^-1(z) H(z) = X(z)      (7.23)

In the ideal case, we would hence get the original speech signal back. When minimizing the variance of the residual signal (equation (7.20)), we gathered as much information about the speech signal as possible using this model in the filter coefficients a_k. The residual signal contains the remaining information. If the model is well suited for the signal type (speech signal), the residual signal is close to white noise, having a flat spectrum. In such a case we can get away with coding only a small range of frequencies, for instance 0-1 kHz, of the residual signal. At the synthesizer, this baseband is then repeated to generate higher frequencies. This signal is used to excite the LPC filter.

Vocoders using RELP are used with transmission rates of 9.6 kbit/s. The advantage of RELP is a better speech quality compared to LPC for the same bit rate. However, the implementation is more computationally demanding.

Another possible extension of the original LPC approach is to use multipulse excited linear predictive coding (MLPC). This extension is an attempt to make the synthesized speech less mechanical, by using a number of different pitches of the excitation pulses rather than only the two (periodic and noise) used by standard LPC.

The MLPC algorithm sequentially detects k pitches in a speech signal. As soon as one pitch is found it is subtracted from the signal and detection starts over again, looking for the next pitch. Pitch information detection is a hard task and the complexity of the required algorithms is often considerable. MLPC however offers a better speech quality than LPC for a given bit rate and is used in systems working with 4.8-9.6 kbit/s.
Yet another extension of LPC is code excited linear prediction (CELP). The main feature of CELP compared to LPC is the way in which the filter coefficients are handled. Assume that we have a standard LPC system, with a filter of order p. If every coefficient a_k requires N bits, we need to transmit N·p bits per frame for the filter parameters only. This approach is all right if all combinations of filter coefficients are equally probable. This is however not the case. Some combinations of coefficients are very probable, while others may never occur. In CELP, the coefficient combinations are represented by p-dimensional vectors. Using vector quantization techniques, the most probable vectors are determined. Each of these vectors is assigned an index and stored in a codebook. Both the analyser and synthesizer of course have identical copies of the codebook, typically containing 256-512 vectors. Hence, instead of transmitting N·p bits per frame for the filter parameters, only 8-9 bits are needed.

This method offers high quality speech at low bit rates but requires considerable computing power to be able to store and match the incoming speech to the standard sounds stored in the codebook. This is of course especially true if the codebook is large. Speech quality degrades as the codebook size decreases.

Most CELP systems do not perform well with respect to the higher frequency components of the speech signal at low bit rates. This is counteracted in newer systems. There is also a variant of CELP called vector sum excited linear prediction (VSELP). The main difference between CELP and VSELP is the way the codebook is organized. Further, since VSELP uses fixed-point arithmetic algorithms, it is possible to implement it using cheaper DSP chips than CELP.
Adaptive Filters
The signal degradation in some physical systems is time varying, unknown, or possibly
both. For example,consider a high-speed modem for transmitting and receiving data over
telephone channels. It employs a filter called a channel equalizer to compensate for the channel
distortion. Since the dial-up communication channels have different and time-varying
characteristics on each connection, the equalizer must be an adaptive filter.
4. Explain about Adaptive Filter
Adaptive filters modify their characteristics to achieve certain objectives by
automatically updating their coefficients. Many adaptive filter structures and adaptation
algorithms have been developed for different applications. This chapter presents the most widely
used adaptive filters based on the FIR filter with the least-mean-square (LMS) algorithm. These
adaptive filters are relatively simple to design and implement. They are well understood with
regard to stability, convergence speed, steady-state performance, and finite-precision effects.
Introduction to Adaptive Filtering
An adaptive filter consists of two distinct parts - a digital filter to perform the desired filtering,
and an adaptive algorithm to adjust the coefficients (or weights) of the filter. A general form of
adaptive filter is illustrated in Figure 7.1, where d(n) is a desired (or primary input) signal, y(n)
is the output of a digital filter driven by a reference input signal x(n), and an error signal e(n) is
the difference between d(n) and y(n). The adaptive algorithm adjusts the filter coefficients to
minimize the mean-square value of e(n). Therefore, the filter weights are updated so that the
error is progressively minimized on a sample-bysample basis.
In general, there are two types of digital filters that can be used for adaptive filtering: FIR and
IIR filters. The FIR filter is always stable and can provide a linear-phase response. On the other
hand, the IIR
filter involves both zeros and poles. Unless they are properly controlled, the poles in the filter
may move outside the unit circle and result in an unstable system during the adaptation of
coefficients. Thus, the adaptive FIR filter is widely used for practical real-time applications.
This chapter focuses on the class of adaptive FIR filters.
The most widely used adaptive FIR filter is depicted in Figure 7.2. The filter output signal is computed as

y(n) = Σ_{l=0}^{L-1} w_l(n) x(n - l)      (7.13)
, where the filter coefficients wl (n) are time varying and updated by the adaptive algorithms
that will be discussed next.
We define the input vector at time n as
x(n) = [x(n)x(n - 1) . . . x(n - L + 1)]T , (7.14)
and the weight vector at time n as
w(n) = [w0(n)w1(n) . . . wL-1(n)]T . (7.15)
Equation (7.13) can be expressed in vector form as
y(n) = wT (n)x(n) = xT (n)w(n). (7.16)
The filter outputy(n) is compared with the desired d(n) to obtain the error
signal e(n) = d(n) - y(n) = d(n) - wT (n)x(n). (7.17)
Our objective is to determine the weight vector w(n) to minimize the predetermined
performance (or cost) function.
Performance Function:
The adaptive filter shown in Figure 7.1 updates the coefficients of the digital filter to
optimize some predetermined performance criterion. The most commonly used performance
function is
based on the mean-square error (MSE), defined as ξ(n) = E[e²(n)].
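A minimal LMS sketch (Python/NumPy; the names are mine, and the update w ← w + 2µ·e(n)·x(n) is the standard LMS rule rather than a formula quoted from this text) ties equations (7.13)-(7.17) together in a system-identification setting.

import numpy as np

def lms_filter(x, d, L, mu):
    """Length-L adaptive FIR filter driven by reference x(n), adapted with LMS
    to track the desired signal d(n); returns final weights, output and error."""
    w = np.zeros(L)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(L, len(x)):
        xn = x[n - L + 1:n + 1][::-1]          # x(n), x(n-1), ..., x(n-L+1)
        y[n] = np.dot(w, xn)                   # filter output, eq. (7.16)
        e[n] = d[n] - y[n]                     # error signal, eq. (7.17)
        w = w + 2 * mu * e[n] * xn             # LMS coefficient update
    return w, y, e

# System identification test: d(n) is x(n) passed through an unknown 4-tap FIR filter
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
unknown = np.array([0.5, -0.3, 0.2, 0.1])
d = np.convolve(x, unknown)[:len(x)]
w, _, e = lms_filter(x, d, L=4, mu=0.01)
print(np.round(w, 3))                          # converges close to [0.5 -0.3 0.2 0.1]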
Speech generation
There are two approaches to computer generated speech: digital recording and vocal tract simulation. In digital
recording, the voice of a human speaker is digitized and stored, usually in a compressed form.
During playback, the stored data are uncompressed and converted back into an analog signal.
An entire hour of recorded speech requires only about three megabytes of storage, well within
the capabilities of even small computer systems. This is the most common method of digital
speech generation used today. Vocal tract simulators are more complicated, trying to mimic the
physical mechanisms by which humans create speech. The human vocal tract is an acoustic
cavity with resonate frequencies determined by the size and shape of the chambers. Sound
originates in the vocal tract in one of two basic ways, called voiced and fricative sounds.
With voiced sounds, vocal cord vibration produces near periodic pulses of air into the vocal
cavities. In comparison, fricative sounds originate from the noisy air turbulence at narrow
constrictions, such as the teeth and lips. Vocal tract simulators operate by generating digital
signals that resemble these two types of excitation. The characteristics of the resonate chamber
are simulated by passing the excitation signal through a digital filter with similar resonances.
This approach was used in one of the very early DSP success stories, the Speak & Spell, a
widely sold electronic learning aid for children.
Speech recognition
The automated recognition of human speech is immensely more difficult than speech
generation. Speech recognition is a classic example of things that the human brain does well,
but digital computers do poorly. Digital computers can store and recall vast amounts of data,
perform mathematical calculations at blazing speeds, and do repetitive tasks without becoming
bored or inefficient. Unfortunately, present day computers perform very poorly when faced with
raw sensory data. Teaching a computer to send you a monthly electric bill is easy. Teaching the
same computer to understand your voice is a major undertaking. Digital Signal Processing
generally approaches the problem of voice recognition in two steps: feature extraction
followed by feature matching. Each word in the incoming audio signal is isolated and then
analyzed to identify the type of excitation and resonate frequencies. These parameters are then
compared with previous examples of spoken words to identify the closest match. Often, these
systems are limited to only a few hundred words; can only accept speech with distinct pauses
between words; and must be retrained for each individual speaker. While this is adequate for
many commercial applications, these limitations are humbling when compared to the abilities of
human hearing. There is a great deal of work to be done in this area, with tremendous financial
rewards for those that produce successful commercial products.
1. What is a continuous and discrete time signal?
2. Give the classification of signals?
3. What are the types of systems?
4. What are even and odd signals?
5. What are deterministic and random signals?
6. What are energy and power signal?
7. What are the operations performed on a signal?
8. What are elementary signals and name them?
9. What are the properties of a system?
10. What is memory system and memory less system?
the analog filter into a digital filter. (8)
ii) Determine H(z) using the impulse invariance method for the given transfer function
H(s) = 3 / ((s+1)(s+3)). Assume T = 1 sec.
3. i) Find the convolution of x(n) and h(n) (8)
x(n) = (1/2)^n u(n)
h(n) = (1/3)^n u(n)
ii) Find the z-transform of x(n), x(n) = (1/2)^(n-5) u(n-2) + δ(n-5). (8)
4. i) Find the inverse z-transform of
X(z) = 1/(1 + a z^-1), where a is a constant. (8)
(ii) Find the z-transform of the autocorrelation function. (8)
5. State and prove important properties of the z-transforms.
x (n) =1/4, for 0<=n <=2
0, otherwise
2. Derive the DFT of the sample data sequence x (n) = {1, 1, 2, 2, 3, 3} and compute the
corresponding amplitude and phase spectrum.
3. Given x(n) = {0,1,2,3,4,5,6,7} find X(k) using DIT FFT algorithm.
4. Given X (k) = {28,-4+j9.656,-4+j4,-4+j1.656,-4,-4-j1.656,-4-j4,-4-j9.656}, find x (n)
using inverse DIT FFT algorithm.
5. Find the inverse DFT of X (k) = {1,2,3,4}
UNIT III- IIR FILTER DESIGN
PART-A (2 MARKS)
1. State the steps to design digital IIR filter using bilinear method.
2. What is warping effect?
5. Why impulse invariant method is not preferred in the design of IIR filters other than
low pass filter?
6. What is meant by impulse invariant method?
2) Design a Butterworth digital filter to meet the following constraints:
0.9 ≤ |H(ω)| ≤ 1, 0 ≤ ω ≤ π/2
|H(ω)| ≤ 0.2, 3π/4 ≤ ω ≤ π
Use the bilinear transformation mapping technique. Assume T = 1 sec.
3) Consider the system described by
y(n)-(3/4)y(n-1)+(1/8)y(n-2)=x(n)+(1/3)x(n-1)
Determine and draw all possible realization structures
4) Explain the following terms briefly.
i.Frequency sampling structure. (4)
ii.Lattice structure for IIR filter (4)
iii.Perturbation error (4)
iv.Limit cycles. (4)
18. What are the desirable characteristics of the windows?
19. Compare Hamming window with Kaiser Window.
20. What is the necessary and sufficient condition for linear phase characteristics in FIR
filter?
PART B(16 MARKS)
1. Derive the condition of FIR filter to be linear in phase.
2. Describe briefly the different methods of power spectral estimation?
i. Bartlett method (6)
ii.Welch method (6)
iii.Blackman-Tukey method and its derivation. (4)
3 Design a digital low pass filter FIR filter of length 11,with cut off frequency
10. What is the relationship between truncation error e and the bits b for representing a
decimal into binary?
11. What is meant rounding? Discuss its effect on all types of number representation?
12. What is meant by A/D conversion noise?
13. What is the effect of quantization on pole location?
14. What is meant by quantization step size?
15. How would you relate the steady-state noise power due to quantization and the b bits
representing the binary sequence?
16. What is overflow oscillation?
17. What are the methods used to prevent overflow?
18. What are the two kinds of limit cycle behavior in DSP?
19. Determine "dead band" of the filter.
20. Explain briefly the need for scaling in the digital filter implementation.
21. What are the different buses of TMS320C5X and their functions?
PART B(16 MARKS)
1. Derive the expression for steady state I/P Noise Power and Steady state O/P Noise
power
2. Draw the product quantatization model for first order and second order filter
Write the difference equation and draw the noise model.
3. For the second order filter draw the direct form II realization and find the scaling
factor S0 to avoid over flow
Find the scaling factor from the formula
I = (1 + r²) / [ (1 - r²)(1 - 2r² cos 2θ + r⁴) ]
4. Explain briefly about various number representation in digital computer.
5. Consider the transfer function H(z) = H1(z) H2(z) where H1(z) = 1/(1 - a1 z^-1)
BRANCH/SEM/SEC:CSE/IV/A& B
UNIT I
Part A
1. What do you understand by the terms : signal and signal processing
2. Determine which of the following signals are periodic and compute their
fundamental period (AU DEC 07)
a) sin2 t b)sin20 t+sin5t
3. What are energy and power signals? (MU Oct. 96)
2
a) y(n)=n x (n) b)a x(n)
4. State the convolution property of Z transform
5. Test the following systems for time invariance:
(AU DEC 06)
(DEC 03)
6. Define symmetric and antisymmetric signals. How do you prevent alaising while
sampling a CT signal? (AU MAY 07)(EC 333, May 07)
7. What are the properties of region of convergence(ROC) ?(AU MAY 07)
8. Differentiate between recursive and non recursive difference equations
Part-B
0 elsewhere and
h(n) = 1, 0 ≤ n ≤ 4
0 elsewhere
b) A discrete time system can be static or dynamic, linear or non-linear, Time
variant or time invariant, causal or non causal, stable or unstable. Examine the
following system with respect to the properties also (AU DEC 07)
1) y(n)=cos(x(n))
2) y(n)=x(-n+2)
3) y(n)=x(2n)
4)y(n)=x(n) cos n
2. a) Determine the response of the causal system
y(n)-y(n-1)=x(n)+x(n-1) to inputs x(n)=u(n) and x(n)=2 n u(n).Test its stability
b) Determine the IZT of X(Z)=1/(1-Z-1)(1-Z-1)2
Determine whether each of the following systems defined below is (i) casual (ii)
linear (iii) dynamic (iv) time invariant (v) stable
6. Determine and sketch the magnitude and phase response of the following systems
(a) y(n) = 1/3 [x(n) + x(n-1) + x(n-2)]
(b) y(n) = [x(n) x(n-1)]
(c) y(n) - 1/2y(n-1)=x(n)
7. a) Determine the impulse response of the filter defined by y(n)=x(n)+by(n-1)
b) A system has unit sample response h(n) given by
h(n) = -δ(n+1) + (1/2)δ(n) - (1/4)δ(n-1). Is the system BIBO stable? Is the filter
causal? Justify your answer (DEC 2003)
8. Determine the Fourier transform of the following two signals(CS 331 DEC 2003)
a) a n u(n) for a<1
b) cos n u(n)
9. Check whether the following systems are linear or not (AU APR 05)
a) y(n) = x²(n) b) y(n) = n x(n)
10. For each impulse response listed below, determine if the corresponding system is causal and stable
11. Explain with suitable block diagram in detail about the analog to digital
conversion and to reconstruct the analog signal (AU DEC 07)
13. Determine whether the following systems are linear , time invariant
1) y(n)=A x(n)+B
2) y(n)=x(2n)
Find the convolution of the following sequences: (AU DEC 04)
1) x(n)=u(n) h(n)=u(n-3)
2) x(n)={1,2,-1,1} h(n)={1,0,1,1}
UNIT II
PART A
4. Determine the DTFT of the sequence x(n)=a n u(n) for a<1 (AU DEC 06)
5. Is the DFT of the finite length sequence periodic? If so state the reason
(AU DEC 05)
6. Find the N-point IDFT of a sequence X(k) ={1 ,0 ,0 ,0} (Oct 98)
7. what do you mean by in place computation of FFT? (AU DEC 05)
8. What is zero padding? What are its uses? (AU DEC 04)
9. List out the properties of DFT
10. Compute the DFT of x(n) = δ(n - n0) (MU Oct 95, 98, Apr 2000)
11. Find the DFT of the sequence x(n) = cos(nπ/4) for 0 ≤ n ≤ 3 (MU Oct 98)
12. Compute the DFT of the sequence whose values for one period is given by
x(n)={1,1,-2,-2}. (AU Nov 06,MU Apr 99)
13. Find the IDFT of Y(k)={1,0,1,0} (MU Oct 98)
14. What is zero padding? What are its uses?
15. Define discrete Fourier series.
18. Obtain the circular convolution of the following sequences x(n)={1, 2, 1} and
h(n)={1, -2, 2}
19. Distinguish between DFT and DTFT (AU APR 04)
20. Write the analysis and synthesis equation of DFT (AU DEC 03)
21. Assume two finite duration sequences x1(n) and x2(n) are linearly combined.
What is the DFT of x3(n)?(x3(n)=Ax1(n)+Bx2(n)) (MU Oct 95)
22. If X(k) is a DFT of a sequence x(n) then what is the DFT of real part of x(n)?
23. Calculate the DFT of a sequence x(n)=(1/4)^n for N=16 (MU Oct 97)
24. State and prove time shifting property of DFT (MU Oct 98)
25. Establish the relation between DFT and Z transform (MU Oct 98,Apr 99,Oct 00)
26. What do you understand by Periodic convolution? (MU Oct 00)
27. How the circular convolution is obtained using concentric circle method?
(MU Apr 98)
28. State the circular time shifting and circular frequency shifting properties of DFT
29. State and prove Parsevals theorem
30. Find the circular convolution of the two sequences using matrix method
X1(n)={1, 2, 3, 4} and x2(n)={1, 1, 1, 1}
31. State the time reversal property of DFT
32. If the DFT of x(n) is X(k) then what is the DFT of x*(n)?
33. State circular convolution and circular correlation properties of DFT
34. Find the circular convolution of the following two sequences using concentric
circle method
x1(n)={1, 2, 3, 4} and x2(n)={1, 1, 1, 1}
35. The first five coefficients of X(K)={1, 0.2+5j, 2+3j, 2 ,5 }Find the remaining
coefficients
PART B
1. Find 4-point DFT of the following sequences
(a) x(n)={1,-1,0,0}
(b) x(n)={1,1,-2,-2} (AU DEC 06)
(c) x(n)=2n
(d) x(n)=sin(n/2)
4. Find the circular convolution of the following using matrix method and
concentric circle method
(a) x1(n)={1,-1,2,3}; x2(n)={1,1,1};
Determine the response of the LTI system by radix2 DIT-FFT? (AU Nov 06).
If the impulse response of a LTI system is h(n)=(1,2,3,-1)
6. Determine the impulse response for the cascade of two LTI systems having
impulse responses h1(n)=(1/2)^n* u(n),h2(n)=(1/4)^n*u(n) (AU May 07)
8. Find the output sequence y(n)if h(n)={1,1,1,1} and x(n)={1,2,3,1} using circular
convolution (AU APR 04)
9. State and prove the following properties of DFT (AU DEC 03)
1) Cirular convolution 2) Parsevals relation
2) Find the circular convolution of x1(n)={1,2,3,4} x2(n)={4,3,2,1}
PART A
1. Why FFT is needed? (AU DEC 03) (MU Oct 95,Apr 98)
2. What is FFT? (AU DEC 06)
3. Obtain the block diagram representation of the FIR filter (AU DEC 06)
4. Calculate the number of multiplications needed in the calculation of DFT and FFT
with 64 point sequence. (MU Oct 97, 98).
5. What is the main advantage of FFT?
6. What is FFT? (AU Nov 06)
7. How many multiplications and additions are required to compute N-point DFT
using radix 2 FFT?
8. Draw the direct form realization of FIR system
9. What is decimation-in-time algorithm?
(AU DEC 04)
(AU DEC 04)
(MU Oct 95).
10. What do you mean by in place computation in DIT-FFT algorithm?
(AU APR 04)
11. What is decimation-in-frequency algorithm? (MU Oct 95,Apr 98).
12. Mention the advantage of direct and cascade structures (AU APR 04)
PART B
1. Compute an 8-point DFT of the following sequences using DIT and DIF
algorithms
(a)x(n)={1,-1,1,-1,0,0,0,0}
(b)x(n)={1,1,1,1,1,1,1,1} (AU APR 05)
(c)x(n)={0.5,0,0.5,0,0.5,0,0.5,0}
(d)x(n)={1,2,3,2,1,2,3,2}
(e)x(n)={0,0,1,1,1,1,0,0} (AU APR 04)
2. Compute the 8 point DFT of the sequence x(n)={0.5, 0.5 ,0.5,0.5,0,0,0,0} using
radix 2 DIF and DIT algorithm (AU DEC 07)
4. How do you linear filtering by FFT using save-add method (AU DEC 06)
5. Compute the IDFT of the following sequences using (a)DIT algorithm (b)DIF
algorithms
(a)X(k)={1,1+j,1-j2,1,0,1+j2,1+j}
(b)X(k)={12,0,0,0,4,0,0,0}
(c)X(k)={5,0,1-j,0,1,0,1+j,0}
(d)X(k)={8,1+j2,1-j,0,1,0,1+j,1-j2}
(e)X(k)={16,1-j4.4142,0,1+j0.4142,0,1-j0.4142,0,1+j4.4142}
8. Draw the butterfly diagram using 8 pt DIT-FFT for the following sequences
x(n)={1,0,0,0,0,0,0,0} (AU May 07).
9. a) From first principles obtain the signal flow graph for computing 8 point DFT
10. State and prove circular time shift and circular frequency shift properties of DFT
11. State and prove circular convolution and circular conjugate properties of DFT
12. Explain the use of FFT algorithms in linear filtering and correlation
14. Determine the cascade and parallel form realization of the following system
y(n)=-0.1y(n-1)+0.2y(n-2)+3x(n)+3.6x(n-1)+0.6x(n-2)
Expalin in detail about the round off errors in digital filters (AU DEC 04)
UNIT-III
PART-A
6. Determine the order of the analog butterworth filter that has a -2 dB pass band
attenuation at a frequency of 20 rad/sec and atleast -10 dB stop band attenuation at 30
rad/sec (AU DEC 07)
7. By impulse invariant method obtain the digital filter transfer function and
differential equation of the analog filter H(S)=1/S+1 (AU DEC 07)
8. Give the expression for location of poles of normalized butterworth filter
9. What are the parameters(specifications) of a chebyshev filter
(EC 333, May 07)
(EC 333, May 07)
10. Why impulse invariance method is not preferred in the design of IIR filter other than low
pass filter?
11. What are the advantages and disadvantages of bilinear transformation?(AU DEC 04)
12. Write down the transfer function of the first order butterworth filter having low pass
behavior (AU APR 05)
13. What is warping effect? What is its effect on magnitude and phase response?
14. Find the digital filter transfer function H(Z) by using impulse invariance method for the
analog transfer function H(S)= 1/S+2 (MAY AU 07)
15. Find the digital filter transfer function H(Z) by using bilinear transformation method for
the analog transfer function H(S)= 1/S+3
16. Give the equation for converting a normalized LPF into a BPF with cutoff frequencies l
and u
17. Give the magnitude function of Butterworth filter. What is the effect of varying order of
N on magnitude and phase response?
18. Give any two properties of Butterworth low pass filters. (MU NOV 06).
19. What are the properties of Chebyshev filter? (AU NOV 06).
20. Give the equation for the order of N and cut off frequency c of Butterworth filter.
21. Give the Chebyshev filter transfer function and its magnitude response.
22. Distinguish between the frequency response of Chebyshev Type I filter for N odd and N
even.
23. Distinguish between the frequency response of Chebyshev Type I & Type II filter.
24. Give the Butterworth filter transfer function and its magnitude characteristics for
different order of filters.
25. Give the equations for the order N, major, minor and axis of an ellipse in case of
Chebyshev filter.
26. What are the parameters that can be obtained from the Chebyshev filter specification?
(AU MAY 07).
27. Give the expression for the location of poles and zeros of a Chebyshev Type II filter.
28. Give the expression for location of poles for a Chebyshev Type I filter. (AU MAY 07)
29. Distinguish between Butterworth and Chebyshev Type I filter.
30. How one can design Digital filters from Analog filters.
31. Mention any two procedures for digitizing the transfer function of an analog filter.
(AU APR 04)
32. What are properties that are maintained same in the transfer of analog filter into a digital
filter.
33. What is the mapping procedure between s-plane and z-plane in the method of mapping of
differentials? What is its characteristics?
34. What is mean by Impulse invariant method of designing IIR filter?
35. What are the different types of structures for the realization of IIR systems?
36. Write short notes on prewarping.
37. What are the advantages and disadvantages of Bilinear transformation?
38. What is warping effect? What is its effect on magnitude and phase response?
39. What is Bilinear Transformation?
40. How many numbers of additions, multiplications and memory locations are required to
realize a system H(z) having M zeros and N poles in direct form-I and direct form II
realization?
41. Define signal flow graph.
42. What is the transposition theorem and transposed structure?
43. Draw the parallel form structure of IIR filter.
44. Give the transposed direct form II structure of IIR second order system.
45. What are the different types of filters based on impulse response? (AU 07)
46. What is the most general form of IIR filter?
PART B
1. a) Realize the following system:
y(n) = -0.1x(n-1) + 0.2y(n-2) + 3x(n) + 3.6x(n-1) + 0.6x(n-2)
b) Discuss the limitation of designing an IIR filetr using impulse invariant method
s=0.15 HZ;s=15 dB:F=1Hz.
8. Design (a) a Butterworth and (b) a Chebyshev analog high pass filter that will
pass all radian frequencies greater than 200 rad/sec with no more that 2 dB
attuenuation and have a stopband attenuation of greater than 20 dB for all less
than 100 rad/sec.
9. Design a digital filter equivalent to H(s) = 10/(s² + 7s + 10) using the impulse invariant method. (AU DEC 03)(AU DEC 04)
10. Use impulse invariance to obtain H(Z) if T= 1 sec and H(s) is
1/(s3 +3s2 +4s+1)
1/(s² + √2 s + 1)
11. Use bilinear transformation method to obtain H(Z) if T= 1 sec and H(s) is
1/(s+1)(S+2) (AU DEC 03)
1/(s² + √2 s + 1)
12. Briefly explain about bilinear transformation of digital filter design(AU APR 05)
13. Use the bilinear transformation to design a Butterworth LPF with 3 dB cutoff frequency of 0.2π. (AU APR 04)
15. a) Design a Chebyshev filter with a maximum pass band attenuation of 2.5 dB at Ωp = 20 rad/sec and a stop band attenuation of 30 dB at Ωs = 50 rad/sec.
b)Realize the system given by difference equation
y(n)=-0.1 y(n-1)+0.72y(n-2)+0.7x(n)-0.25x(n-2) in parallel form
(EC 333 DEC 07 )
UNIT IV
PART A
5. What are the design techniques of designing FIR filters?
6. What condition on the FIR sequence h(n) are to be imposed in order that this filter can be
called a Linear phase filter? (AU 07)
7. State the condition for a digital filter to be a causal and stable. (AU 06)
8. What is Gibbs phenomenon? (AU DEC 04) (AU DEC 07)
9. Show that the filter with h(n)={-1, 0, 1} is a linear phase filter
10. Explain the procedure for designing FIR filters using windows.
11. What are desirable characteristics of windows?
12. What is the principle of designing FIR filters using windows? (MU 02)
PART-B
2. i) Prove that FIR filter has linear phase if the unit impulse responsesatisfies the
condition h(n)=h(N-1-n), n=0,1,M-1. Also discuss symmetric and
antisymmetric cases of FIR filter (AU DEC 07)
3. What are the issues in designing FIR filter using window method?(AU APR 04,
DEC 03)
4. ii) Explain the need for the use of window sequences in the design of FIR filter.
Describe the window sequences generally used and compare their properties
5. Derive the frequency response of a linear phase FIR filter when impulse responses
symmetric & order N is EVEN and mention its applications
6. i) Explain the type I design of FIR filter using frequency sampling method
ii) A low pass filter has the desired response as given below:
Hd(e^jω) = e^(-j3ω), 0 ≤ ω ≤ π/2
         = 0,        π/2 < ω ≤ π
Determine the filter coefficients h(n) for M=7 using frequency sampling
technique (AU DEC 07)
7. i) Derive the frequency response of a linear phase FIR filter when impulse responses
antisymmetric & order N is odd
ii) Explain design of FIR filter by frequency sampling technique (AU MAY 07)
0 ; otherwise
Take N=11.
7. Design an approximation to an ideal bandpass filter with magnitude response
H(e^jω) = 1 for π/4 ≤ |ω| ≤ 3π/4
= 0.5 k=5
= 0.25 k=6
= 0.1 k=7
=0 elsewhere
9. Design an ideal band pass digital FIR filter with desired frequency response
H(e^jω) = 1 for 0.25π ≤ |ω| ≤ 0.75π
        = 0 for |ω| < 0.25π and |ω| > 0.75π
11. a) How is the design of linear phase FIR filter done by frequency sampling method?
Explain.
b) Determine the coefficients of a linear phase FIR filter of length N=15 which has
Symmetric unit sample response and a frequency response that satisfies the following
conditions
13. Using a rectangular window technique design a low pass filter with pass band gain of unity
cut off frequency of 1000 Hz and working at a sampling frequency of 5 KHz. The length
of the impulse response should be 7.( EC 333 DEC 07)
16. Design an Ideal Hilbert transformer using rectangular window and Black man window
for N=11. Plot the frequency response in both Cases (EC 333 DEC 07)
0 ; otherwise
Take N = 11. Use Hanning and Hamming windows. (AU DEC 04)
UNIT V
2. Express the fractions 7/8 and -7/8 in sign magnitude, 2's complement and 1's
complement (AU DEC 06)
3. What are the quantization errors due to finite word length registers in digital filters?
(AU DEC 06)
29. What do you mean by quantization step size?
30. Find the quantization step size of the quantizer with 3 bits
31. Give the expression for signal to quantization noise ratio and calculate the
improvement with an increase of 2 bits to the existing bit.
32. Express the following binary numbers in decimal
A) (100111.1110)2 (B) (101110.1111)2 C (10011.011)2
33. Why is rounding preferred to truncation in realizing a digital filter? (EC 333, May 07)
PART-B
1. Draw the quantization noise model for the second order system
H(z) = 1/(1 - 2r cosθ z^-1 + r^2 z^-2), explain it, and find its steady state output noise variance (ECE AU 05)
3. Find the effect of coefficient quantization on the pole locations of the given second
order IIR system when it is realized in direct form I and in cascade form. Assume a
word length of 4 bits obtained by truncation.
H(z) = 1/(1 - 0.9z^-1 + 0.2z^-2) (AU Nov 05)
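A small Python sketch of the kind of check this question asks for, assuming sign-magnitude truncation to 4 fractional bits (the truncate helper is hypothetical):

import numpy as np

def truncate(x, bits=4):
    # Truncate the magnitude of x to 'bits' fractional bits.
    step = 2.0 ** (-bits)
    return np.sign(x) * np.floor(np.abs(x) / step) * step

# H(z) = 1 / (1 - 0.9 z^-1 + 0.2 z^-2): exact poles at z = 0.5 and z = 0.4.
a = np.array([1.0, -0.9, 0.2])
print("exact poles      :", np.roots(a))

# Direct form: quantize the denominator coefficients, then find the poles.
aq = np.array([1.0, truncate(-0.9), truncate(0.2)])
print("direct-form poles:", np.roots(aq))          # 0.9 -> 0.875, 0.2 -> 0.1875

# Cascade form: each first-order section stores one pole, so the poles
# themselves get quantized: 0.5 -> 0.5, 0.4 -> 0.375.
print("cascade poles    :", truncate(0.5), truncate(0.4))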
4. Explain the characteristics of limit cycle oscillations with respect to the system described
by the difference equation
y(n) = 0.95y(n-1) + x(n) and
determine the dead band of the filter (AU Nov 04)
5. i) Describe the quantization errors that occur in rounding and truncation in two's
complement representation
ii) Draw a sample/hold circuit and explain its operation
iii) What is a vocoder? Explain with a block diagram (AU DEC 07)
6. Two first order low pass filters whose system functions are given below are connected in
cascade. Determine the overall output noise power
H1(z) = 1/(1 - 0.9z^-1), H2(z) = 1/(1 - 0.8z^-1) (AU DEC 07)
7. Consider a Butterworth lowpass filter whose transfer function is
H(z)=0.05( 1+z-1)2 /(1-1.2z-1 +0.8 z-2 ).
Compute the pole positions in z-plane and calculate the scale factor So to prevent
overflow in adder 1.
8. Express the following decimal numbers in binary form
(a) 525 (b) 152.1875 (c) 225.3275
Two's complement form.
11. Express the decimal values -6/8 and 9/8 in (i) sign magnitude form, (ii) one's complement
form, (iii) two's complement form
12. Study the limit cycle behavior of the following systems
i. y(n) = 0.7y(n-1) + x (n)
ii. y(n) = 0.65y(n-2) + 0.52y (n-1) + x (n)
13. For the system with system function H(z) = (1 + 0.75z^-1)/(1 - 0.4z^-1), draw the signal flow graph
and find the scale factor S0 to prevent overflow limit cycle oscillations
15. Derive the quantization input noise power and determine the signal to noise ratio of the system
16. Derive the truncation error and round off error noise power and compare both errors
17. Explain product quantization error and coefficient quantization error with examples
18. Derive the scaling factor S0 that prevents the overflow limit cycle oscillations in a second
order filter, and determine the noise power
produced by the quantization noise at the output of the filter if the input is quantized to
1) 8 bits 2) 16 bits (EC 333 DEC 07)
19. Convert the following decimal numbers into binary: (EC 333 DEC 07)
1) (20.675)10  2) (120.75)10
20. Find the steady state variance of the noise in the output due to quantization of the input for the
first order filter y(n) = ay(n-1) + x(n) (EC 333 DEC 07)
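A quick numeric sketch of the standard result usually quoted for this question, assuming the input is rounded to b bits (so the noise variance is 2^(-2b)/12) and |a| < 1:

import numpy as np

def output_noise_variance(a, b):
    sigma_e2 = 2.0 ** (-2 * b) / 12.0       # input quantization noise variance
    # h(n) = a^n u(n), so sum of h(n)^2 is 1/(1 - a^2); output variance scales by it.
    return sigma_e2 / (1.0 - a ** 2)

a, b = 0.5, 8
closed_form = output_noise_variance(a, b)

# Cross-check the impulse-response sum numerically.
n = np.arange(200)
numeric = (2.0 ** (-2 * b) / 12.0) * np.sum((a ** n) ** 2)
print(closed_form, numeric)                  # the two values agree closely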
ANAND INSTITUTE OF HIGHER TECHNOLOGY
KAZHIPATTUR, CHENNAI 603 103
DEPARTMENT OF ECE
Date: 15-05-2009
PART-A QUESTIONS AND ANSWERS
Subject : Digital signal Processing Sub Code : IT1252
Staff Name: Robert Theivadas.J Class : VII Sem/CSE A&B
1. Determine whether the following signals are periodic and compute their fundamental period.
(a) cos(0.01πn)
(b) sin(62πn/10)   Nov/Dec 2008 CSE
(a) cos(0.01πn): ω0 = 0.01π. The frequency is a rational multiple of π, therefore the signal is
periodic.
(b) sin(62πn/10): ω0 = 62π/10.
Fundamental period N = 2π(m/ω0) = 2π(10m/62π)
Choose the smallest value of m that makes N an integer: m = 31
N = 2π(310/62π)
N = 10
Fundamental period N = 10
2. State the sampling theorem Nov/Dec 2008 CSE
A band limited continuous time signal with highest frequency fmax Hz can be uniquely
recovered from its samples provided that the sampling rate Fs > 2fmax samples per second.
3. State the sampling theorem and find the Nyquist rate of the signal
x(t) = 5 sin(250πt) + 6 cos(300πt)   April/May 2008 CSE
A band limited continuous time signal with highest frequency fmax Hz can be
uniquely recovered from its samples provided that the sampling rate Fs > 2fmax samples
per second.
Nyquist rate:
x(t) = 5 sin(250πt) + 6 cos(300πt)
F1 = 125 Hz, F2 = 150 Hz
Fmax = 150 Hz
Fs > 2Fmax = 300 Hz
The Nyquist rate is FN = 300 Hz
5. Determine which of the following signals are periodic and compute their
fundamental period. Nov/Dec 2007 CSE
(a) sin 2πt
(b) sin 20πt + sin 5πt
(a) sin 2πt:
N = 2π[m/ω0] = 2π[m/2π]; with m = 2, N = 2π[2/2π] = 2
(b) sin 20πt + sin 5πt
Y(n)= 15,16,21,15
X(z) =
u(-n-1) = 0 for n > -1
Z{n x(n)} = -z d/dz X(z)
Y1(n)=T[x1(n)]= x1(n)
9. Is the system y(n) = ln[x(n)] linear and time invariant? (MAY 2006 IT)
The system y(n) = ln[x(n)] is non-linear and time invariant.
a ln x1(n) + b ln x2(n) ≠ ln(a x1(n) + b x2(n))  →  non-linear system
ln[x(n-n0)] = y(n-n0)  →  time invariant system
10. Write down the expressions for the discrete time unit impulse and
unit step functions. (APR 2005 IT)
Discrete time unit impulse function:
δ(n) = 1, n = 0
     = 0, n ≠ 0
Discrete time unit step function:
u(n) = 1, for n ≥ 0
     = 0, for n < 0
11. List the properties of DT sinusoids. (NOV 2005 IT)
A DT sinusoid is periodic only if its frequency f is a rational number.
DT sinusoids whose frequencies are separated by an integer multiple of 2π are identical.
12. Determine the response of a system with y(n) = x(n-1) for the input signal
x(n) = |n| for -3 ≤ n ≤ 3
     = 0 otherwise (NOV 2005 IT)
x(n) = {3, 2, 1, 0, 1, 2, 3}
H(z), Y(z) and X(z) are the z-transforms of the system impulse response, output and input respectively.
15. What is the causality condition for an LTI system? (NOV 2004 IT)
Condition for causality:
h(n) = 0 for n < 0
1. Find the DFT of the signal x(n) = δ(n) Nov/Dec 2008 CSE
x(n) = {1, 0, 0, 0}
X(k) = {1, 1, 1, 1}
2. What is meant by bit reversal and in place computation as applied to FFT?
Nov/Dec 2008 CSE
"Bit reversal" is just what it sounds like: reversing the bits in a binary word from
left to right, so the MSBs become LSBs and the LSBs become MSBs. The data
ordering required by radix-2 FFTs turns out to be in "bit reversed" order, so bit-reversed
indexes are used to combine FFT stages.
n  binary  bit-reversed binary  bit-reversed index
0  000     000                  0
1  001     100                  4
2  010     010                  2
3  011     110                  6
4  100     001                  1
5  101     101                  5
6  110     011                  3
7  111     111                  7
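The bit-reversed ordering in the table can be reproduced with a short Python sketch (the helper name is illustrative):

def bit_reverse(index, num_bits):
    # Reverse the num_bits-bit binary representation of index.
    result = 0
    for _ in range(num_bits):
        result = (result << 1) | (index & 1)
        index >>= 1
    return result

# Bit-reversed ordering used to pair inputs/outputs of a radix-2 FFT, N = 8.
print([bit_reverse(k, 3) for k in range(8)])   # [0, 4, 2, 6, 1, 5, 3, 7]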
5. Draw the basic butterfly diagram for radix 2 DIT-FFT and DIF-FFT. Nov/Dec 2007 CSE
Butterfly Structure for DIT FFT MAY 2006 ECESS
&(NOV 2006 ITSS)
The DIT structure can be expressed as a butterfly diagram
The DIF structure expressed as a butterfly diagram
6. What are the advantages of Bilinear mapping April/May 2008 IT
Aliasing is avoided
Mapping the S plane to the Z plane is one to one
The closed left half of the S plane is mapped onto the unit disk of the Z plane
N.
7. How many multiplications and additions are needed for radix-2 FFT? April/May 2008 IT
The number of complex multiplications is (N/2) log2 N and the number of complex additions is
N log2 N.
X(k) = Σ(n=0 to N-1) x(n) e^(-j2πkn/N)
x(n) = (1/N) Σ(k=0 to N-1) X(k) e^(j2πkn/N)
9. Define the complex conjugation property of the DFT. (May/Jun 2007)-ECE
If x(n) ↔ X(k) (N-point DFT), then
x*(n) ↔ X*((-k))N = X*(N-k)
10.Differentiate between DIT and DIF FFT algorithms. (MAY 2006 IT)
Σ(n=0 to N-1) a^n = (1 - a^N)/(1 - a)
X(k) = (1 - a^N e^(-j2πk)) / (1 - a e^(-j2πk/N))
15. What do you mean by in place computation in FFT? (APR 2005 IT)
In FFT algorithms (for computing the DFT when the size N is a power of 2 or a power of 4),
the outputs of each butterfly can be stored in the same memory locations that held its inputs,
so no additional storage is needed; this is called in place computation.
16. Is the DFT of a finite length sequence periodic? If so, state the reason. (APR 2005 ITDSP)
Yes, the DFT of a finite length sequence is periodic (with period N).
X(e^jω) = Σ(n=0 to N-1) x(n) e^(-jωn)
X(e^jω) is continuous and periodic in ω with period 2π; the DFT consists of samples of one
period of X(e^jω), so it repeats with period N.
1. What are the requirements for converting a stable analog filter into a stable digital filter?
Nov/Dec 2008 CSE
The jΩ axis in the s-plane should map onto the unit circle in the z-plane; thus there
will be a direct relationship between the two frequency variables in the two domains.
The left half of the s-plane should map into the inside of the unit circle in the z-plane;
thus a stable analog filter will be converted to a stable digital filter.
2. Distinguish between the frequency response of chebyshev type I and Type II filter
Nov/Dec 2008 CSE
Type I Chebyshev filter
Type II Chebyshev filter
Type I Chebyshev filters are all-pole filters that exhibit equiripple behavior in the pass
band and monotonic behavior in the stop band. Type II Chebyshev filters contain both poles and
zeros and exhibit monotonic behavior in the pass band and equiripple behavior in the stop
band.
3. What is the need for prewarping in the design of IIR filters? Nov/Dec 2008 CSE
The warping effect can be eliminated by prewarping the analog filter. This is done
by finding the prewarped analog frequencies using the formula
Ω = (2/T) tan(ω/2)
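A minimal Python sketch of the prewarping step, assuming a sampling period T and a desired digital edge frequency (the names are illustrative):

import numpy as np

def prewarp(omega_digital, T):
    # Analog frequency that the bilinear transform will map onto omega_digital.
    return (2.0 / T) * np.tan(omega_digital / 2.0)

T = 1.0
omega_c = 0.2 * np.pi                 # desired digital cutoff (rad/sample)
Omega_c = prewarp(omega_c, T)         # prewarped analog cutoff (rad/s)
print(Omega_c)                        # about 0.6498 rad/s instead of 0.2*pi/T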
4. Write the frequency translation for a BPF from a LPF April/May 2008 CSE
Low pass with cut-off frequency Ωc to band pass with lower cut-off frequency Ω1 and
higher cut-off frequency Ω2:
s → Ωc (s^2 + Ω1 Ω2) / [s (Ω2 - Ω1)]
The system function of the band pass filter is then
H(s) = Hp{ Ωc (s^2 + Ω1 Ω2) / [s (Ω2 - Ω1)] }
5. Compare Butterworth and Chebyshev filters April/May 2008 CSE
The poles of the Butterworth filter lie on a circle; the poles of the Chebyshev filter lie on an
ellipse.
6. Determine the order of the analog Butterworth filter that has a -2 dB pass band
attenuation at a frequency of 20 rad/sec and at least -10 dB stop band attenuation at 30
rad/sec. Nov/Dec 2007 CSE
αp = 2 dB; Ωp = 20 rad/sec
αs = 10 dB; Ωs = 30 rad/sec
N ≥ log[(10^(0.1 αs) - 1)/(10^(0.1 αp) - 1)] / [2 log(Ωs/Ωp)]
  = log[(10^1 - 1)/(10^0.2 - 1)] / [2 log(30/20)]
  = 3.37
Rounding up, we get N = 4.
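The same order calculation can be scripted as a quick check; this is only a sketch of the standard Butterworth order formula used above:

import numpy as np

def butterworth_order(alpha_p, alpha_s, omega_p, omega_s):
    # N >= log10[(10^(0.1*as)-1)/(10^(0.1*ap)-1)] / (2*log10(ws/wp))
    ratio = (10 ** (0.1 * alpha_s) - 1) / (10 ** (0.1 * alpha_p) - 1)
    n = np.log10(ratio) / (2 * np.log10(omega_s / omega_p))
    return n, int(np.ceil(n))

print(butterworth_order(2, 10, 20, 30))   # (approximately 3.37, rounded up to 4)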
7. By the impulse invariant method, obtain the digital filter transfer function
and the differential equation of the analog filter H(s) = 1/(s+1) Nov/Dec 2007 CSE
H(s) = 1/(s+1)
Using partial fractions,
H(s) = A/(s+1) = 1/(s-(-1))
Using the impulse invariance method,
H(z) = 1/(1 - e^(-T) z^-1)
Assume T = 1 sec:
H(z) = 1/(1 - e^(-1) z^-1) = 1/(1 - 0.3678 z^-1)
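A quick numerical confirmation of the mapping above, assuming T = 1 s (the pole of H(z) should sit at e^(-T)):

import numpy as np

T = 1.0
analog_pole = -1.0                        # H(s) = 1/(s + 1)
digital_pole = np.exp(analog_pole * T)    # impulse invariance: z_pole = e^(sT)
print(digital_pole)                       # 0.3678..., so H(z) = 1/(1 - 0.3678 z^-1)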
10. What are the advantages and disadvantages of bilinear transformation?
(May/June 2006)-ECE
Advantages: aliasing is avoided, and the mapping from the s-plane to the z-plane is one to one.
Disadvantage: the relationship between the analog and digital frequencies is nonlinear
(frequency warping), so the critical frequencies must be prewarped.
Prewarping gives a linear frequency relationship between the analog frequency and its
transformed digital frequency (cutoff frequency, center frequency) such that when the analog
filter is transformed into the digital filter, the designed digital filter will meet the desired
specifications.
12. Give any two properties of Butterworth filter and Chebyshev filter. (Nov/Dec 2006)
a. The magnitude response of the Butterworth filter decreases monotonically as the
frequency increases.
d. The magnitude response of the Chebyshev type-I filter exhibits ripple in the pass
band.
e. The poles of the Chebyshev type-I filter lie on an ellipse.
Poles=2N=14
16. What is impulse invariant mapping? What is its limitation? (Apr/May 2005)-ECE
The philosophy of this technique is to transform an analog prototype filter into an
IIR discrete time filter whose impulse response h(n) is a sampled version of the analog
filter's impulse response, multiplied by T. This procedure involves choosing the response of
the digital filter as an equi-spaced sampled version of the analog filter's response. Its
limitation is aliasing of the frequency response when the analog filter is not band limited.
17.Give the bilinear transformation. (Nov/Dec 2003)-ECE
N.
The bilinear transformation method overcomes the effect of aliasing that is
caused due to the analog frequency response containing components at or beyond the
nyquist frequency. The bilinear transform is a method of compressing the infinite,
straight analog frequency axis to a finite one long enough to wrap around the unit
va
circle only once.
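As a hedged sketch, SciPy's bilinear helper (assuming scipy.signal is available) can carry out this mapping for the simple analog filter H(s) = 1/(s+1) discussed earlier:

from scipy import signal

# Analog prototype H(s) = 1 / (s + 1): numerator [1], denominator [1, 1].
b_analog, a_analog = [1.0], [1.0, 1.0]

# Bilinear transform with sampling frequency fs = 1 Hz (T = 1 s).
b_digital, a_digital = signal.bilinear(b_analog, a_analog, fs=1.0)
print(b_digital, a_digital)   # digital H(z) coefficients after s = 2*fs*(z-1)/(z+1)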
18. Mention advantages of direct form II and cascade structures. (APR 2004 ITDSP)
(i) The main advantage of the direct form-II structure is that the number of delay
elements is reduced by half, so the system complexity and the number of memory
elements drastically reduce.
(ii) In the cascade structure the system function is expressed as a product of
several sub-system functions. Each sub-system in the cascade structure is realized in
direct form-II. The order of each sub-system may be two or three (or more).
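A sketch of splitting a transfer function into cascaded second order sections, assuming SciPy's tf2sos is available; the coefficients come from the system H(z) = (3 + 3.6z^-1 + 0.6z^-2)/(1 + 0.1z^-1 - 0.2z^-2) used elsewhere in this question bank:

from scipy import signal

b = [3.0, 3.6, 0.6]
a = [1.0, 0.1, -0.2]

# Each row of 'sos' is one second order section: [b0 b1 b2 a0 a1 a2],
# realizable in direct form-II and cascaded with the others.
sos = signal.tf2sos(b, a)
print(sos)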
19. What is prewarping? (Nov/Dec 2003)-ECE
When the bilinear transformation is applied, the discrete time frequency is related
to the analog frequency in a nonlinear way (frequency warping). Prewarping modifies the
analog critical frequencies using Ω = (2/T) tan(ω/2) so that the resulting digital filter meets
the desired specifications.
4. Explain briefly the need for scaling in digital filter realization Nov/Dec 2007 CSE
To prevent overflow, the signal level at certain points in the digital filter must be
scaled so that no overflow occurs in the adder.
5. What are the advantages of FIR filters? April/May 2008 IT
1. FIR filters have exact linear phase.
2. FIR filters are always stable.
3. FIR filters can be realized in both recursive and non-recursive structures.
4. Filters with any arbitrary magnitude response can be tackled using an FIR sequence.
x(n) = A sin(ω0 n + φ)
where A = maximum amplitude of the signal
ω0 = frequency in radians
φ = phase angle
Due to the delay in the system response, the output signal lags in phase but the
frequency remains the same:
y(n) = A sin(ω0(n - k) + φ)
This equation shows that the output is the time delayed (phase lagged) version of the input.
7. State the advantages and disadvantages of FIR filter over IIR filter.
(MAY 2006 IT DSP) & (NOV 2004
ECEDSP)
Advantages of FIR filter over IIR filter
It is a stable filter
It exhibit linear phase, hence can be easily designed.
It can be realized with recursive and non-recursive structures
It is free of limit cycle oscillations when implemented on a finite word length
digital system
Disadvantages of FIR filter over IIR filter
Obtaining narrow transition band is more complex.
Memory requirement is very high
Execution time in processor implementation is very high.
8. List out the different forms of structural realization available for realizing a FIR system.
(MAY 2006 IT DSP)
The different types of structures for realization of FIR system are
1.Direct form-I 2. Direct form-II
9. What are the desirable and undesirable features of FIR filters? (May/June 2006)-ECE
The width of the main lobe should be small and it should contain as much of the total
energy as possible. The side lobes should decrease in energy rapidly as ω tends to π.
10. Define Hanning and Blackman window functions. (May/June 2006)-ECE
The window function of a causal Hanning window is given by
WHann(n) = 0.5 - 0.5 cos(2πn/(M-1)), 0 ≤ n ≤ M-1
         = 0, otherwise
The window function of a non-causal Hanning window is expressed by
WHann(n) = 0.5 + 0.5 cos(2πn/(M-1)), 0 ≤ |n| ≤ (M-1)/2
         = 0, otherwise
The width of the main lobe is approximately 8π/M and the peak of the first side lobe is
at -32 dB.
The window function of a causal Blackman window is expressed by
WB(n) = 0.42 - 0.5 cos(2πn/(M-1)) + 0.08 cos(4πn/(M-1)), 0 ≤ n ≤ M-1
      = 0, otherwise
The window function of a non-causal Blackman window is expressed by
WB(n) = 0.42 + 0.5 cos(2πn/(M-1)) + 0.08 cos(4πn/(M-1)), 0 ≤ |n| ≤ (M-1)/2
      = 0, otherwise
The width of the main lobe is approximately 12π/M and the peak of the first side lobe is
at -58 dB.
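The causal forms above match NumPy's built-in windows, which gives a quick way to tabulate and compare them (a sketch for M = 11):

import numpy as np

M = 11
n = np.arange(M)

# Causal Hanning and Blackman windows written out from the formulas above.
w_hann = 0.5 - 0.5 * np.cos(2 * np.pi * n / (M - 1))
w_black = 0.42 - 0.5 * np.cos(2 * np.pi * n / (M - 1)) + 0.08 * np.cos(4 * np.pi * n / (M - 1))

# Cross-check against numpy's implementations of the same definitions.
print(np.allclose(w_hann, np.hanning(M)))    # True
print(np.allclose(w_black, np.blackman(M)))  # True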
11. What is the condition for linear phase of a digital filter? (APR 2005 ITDSP)
h(n) = h(M-1-n): a linear phase FIR filter with a nonzero response at ω = 0.
The Fourier transform of the window function W(e^jω) should have a small width
of main lobe containing as much of the total energy as possible.
The Fourier transform of the window function W(e^jω) should have side lobes that
decrease in energy rapidly as ω tends to π. Some of the most frequently used window
functions are described in the following sections.
16. Give the Kaiser window function. (Apr/May 2004)-ECE
The Kaiser window function is given by
WK(n) = I0(β) / I0(α), for |n| ≤ (M-1)/2
where α is an independent variable determined by Kaiser and
β = α[1 - (2n/(M-1))^2]^(1/2)
17. What is meant by an FIR filter? Why is it stable? (APR 2004 ITDSP)
FIR stands for Finite Impulse Response. The desired frequency response of an FIR
filter can be represented as
Hd(e^jω) = Σ(n=-∞ to ∞) hd(n) e^(-jωn)
Since h(n) is absolutely summable (Bounded Input Bounded Output), the filter is stable.
18. Mention two transformations to digitize an analog filter. (APR 2004 ITDSP)
(i) Impulse-invariant transformation technique
(ii) Bilinear transformation technique
20. Give the equations specifying the Bartlett and Hamming windows. (NOV 2004 ITDSP)
The Bartlett (triangular) window is
WBart(n) = 1 - 2|n - (M-1)/2|/(M-1), 0 ≤ n ≤ M-1
The Hamming window is
WHamm(n) = 0.54 - 0.46 cos(2πn/(M-1)), 0 ≤ n ≤ M-1
1. Compare fixed point and floating point arithmetic. Nov/Dec 2008 CSE & MAY 2006 IT
Fixed point arithmetic: overflow can occur in addition and errors arise mainly in
multiplication; the dynamic range is small.
Floating point arithmetic: overflow is a rare phenomenon and the dynamic range is large.
2. What are the errors that arise due to truncation in floating point numbers?
Nov/Dec 2008 CSE
1. Quantization error
2. Truncation error, Et = Nt - N (Nt is the truncated value of N)
3. What are the effects of truncating an infinite Fourier series into a finite series?
Nov/Dec 2008 CSE
4. Draw a block diagram to convert a 500 samples/s signal to a 2500 samples/s signal and state the problem
8. What are the types of limit cycle oscillation? April/May 2008 IT
i.Zero input limit cycle oscillation
co
ii.overflow limit cycle oscillation
9. What is meant by overflow limit cycle oscillations? (May/Jun 2006)
In fixed point addition, overflow occurs when the result exceeds the word length of the
register in which it is stored. Due to this overflow, oscillation can occur in the system; this
oscillation is called an overflow limit cycle oscillation.
10. How will you avoid limit cycle oscillations due to overflow in addition? (MAY 2006 IT DSP)
The condition to avoid limit cycle oscillations due to overflow in addition is
|a1| + |a2| < 1
where a1 and a2 are the coefficients of a stable second order filter (stability triangle).
11. What are the different quantization methods? (Nov/Dec 2006)-ECE
Amplitude quantization
Vector quantization
Scalar quantization
12. List the advantages of floating point arithmetic. (Nov/Dec 2006)-ECE
Large dynamic range
Overflow is a rare phenomenon
17. Give the rounding errors for fixed and floating point arithmetic.
(APR 2004 ITDSP)
A number x represented by b bits results in xR after being rounded off to bR bits. The
quantization error εR due to rounding is given by
εR = QR(x) - x
where QR(x) is the rounded (quantized) number.
The rounding error is independent of the type of fixed point arithmetic, since
it involves only the magnitude of the number. The rounding error is symmetric about
zero and falls in the range
-(2^-bR - 2^-b)/2 ≤ εR ≤ (2^-bR - 2^-b)/2
εR may be positive or negative and depends on the value of x.
The error εR incurred due to rounding off a floating point number is in the range
-2^E · 2^-bR/2 ≤ εR ≤ 2^E · 2^-bR/2
18. Define the basic operations in multirate signal processing.
(APR 2004 ITDSP)
The basic operations in multirate signal processing are
(i) Decimation
(ii) Interpolation
Decimation is the process of reducing the sampling rate by a factor D (down-sampling).
Interpolation is the process of increasing the sampling rate by a factor I (up-sampling).
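A minimal sketch of the two operations on a toy sequence; no anti-aliasing or anti-imaging filters are included, which a practical system would add:

import numpy as np

x = np.arange(12)          # toy input sequence
D, I = 3, 2

# Decimation by D: keep every D-th sample (down-sampling).
x_dec = x[::D]

# Interpolation by I: insert I-1 zeros between samples (up-sampling),
# normally followed by a lowpass (anti-imaging) filter.
x_int = np.zeros(len(x) * I)
x_int[::I] = x

print(x_dec)
print(x_int)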
19. Define sub band coding of speech. (APR 2004 ITDSP)
& (NOV 2003 ECEDSP) & (NOV 2005 ECEDSP)
Sub band coding of speech is a method by which the speech signal is
subdivided into several frequency bands and each band is digitally encoded separately.
The digital implementation of a filter has finite accuracy. When numbers are
represented in digital form, errors are introduced due to this finite accuracy. These
errors generate finite precision effects or finite word length effects.
When multiplication or addition is performed in a digital filter, the result has to be
represented with a finite word length (bits). Therefore the result is quantized so that it can
be represented in a finite word register. This quantization error can create noise or
oscillations in the output. These effects are called finite word length effects.
PART B
UNIT-1 - SIGNALS AND SYSTEMS
1. Determine whether the following systems are linear, time invariant, causal and stable
(1) Y(n)=cos[x(n)] Nov/Dec 2008 CSE
(2) Y(n)=x(-n+2)
(3) Y(n)=x(2n)
M
(4) Y(n)=x(n)+nx(n+1)
Refer book : Digital signal processing by Ramesh Babu .(Pg no 1.79)
2. Determine the causal signal x(n) having the Z transform Nov/Dec 2008 CSE
X(z)=
Refer book : Digital signal processing by Ramesh Babu .(Pg no 2.66)
3. Use convolution to find x(n) if X(z) is given by Nov/Dec 2008 CSE
for ROC
Refer book : Digital signal processing by Ramesh Babu .(Pg no 2.62)
4.Find the response of the system if the input is {1,4,6,2} and impulse response of the
system is {1,2,3,1}
April/May2008CSE
(ii) Find convolution of {5,4,3,2} and {1,0,3,2} April/May2008 CSE
(2) y(n)=x(-n+2)
(3) y(n)=x(2n)
(4) y(n)=x(n).cosWo(n)
Nov/Dec 2007 CSE
11.(i) find the convolution and correlation for x(n)={0,1,-2,3,-4} and h(n)={0.5,1,2,1,0.5}.
Refer book : Digital signal processing by Ramesh Babu .(Pg no 1.79)
(ii)Determine the Impulse response for the difference equation
Y(n) + 3 y(n-1)+2y(n-2)=2x(n)-x(n-1) April/May2008 IT
Refer book : Digital signal processing by Ramesh Babu .(Pg no 2.57)
12. (i) Compute the z-transform and hence determine ROC of x(n) where
x(n) = (1/3)^n u(n), n ≥ 0
     = (1/2)^(-n), n < 0
(iii) Prove the property that convolution in the time domain corresponds to multiplication in the
Z-domain April/May 2008 IT
13.Find the response of the system if the input is {1,4,6,2} and impulse response of the
system is {1,2,3,1} April/May2008CSE
and
h(n) = 1, 0 ≤ n ≤ 4
     = 0, elsewhere   Nov/Dec 2007 CSE
Determine the response of the system y(n) - y(n-1) = x(n) + x(n-1) to the inputs x(n) = u(n)
and x(n) = 2^(-n) u(n). Test its stability.
23.a. Find the convolution sum for the x(n) =(1/3)-n u(-n-1) and h(n)=u(n-1)
Refer signals and systems by P. Ramesh babu , page no:3.76,3.77
b. Convolve the following two sequences linearly x(n) and h(n) to get y(n).
x(n)= {1,1,1,1} and h(n) ={2,2}.Also give the illustration
Refer signals and systems by chitode, page no:67
c. Explain the properties of convolution. (NOV2006 ECESS)
Refer signals and systems by chitode, page no:4.43 to 4.45
24. Check whether the following systems are linear or not
1. y(n) = x2(n)
co
2. y(n) = nx(n) (APRIL 2005 ITDSP)
Refer John G Proakis and Dimtris G Manolakis, Digital Signal Processing
Principles, Algorithms and Application, PHI/Pearson Education, 2000, 3rd Edition.
Page number (67)
25.(i)Determine the response of the system described by,
y(n)-3y(n-1)-4y(n- 2)=x(n)+2x(n-1) when the input sequence is x(n)=4n u(n).
Refer signals and systems by P. Ramesh babu , page no:3.23
(ii) Write the importance of ROC in the Z transform and state the relationship between the Z
transform and the Fourier transform. (APRIL 2004 ITDSP)
Refer John G Proakis and Dimtris G Manolakis, Digital Signal Processing
Principles, Algorithms and Application, PHI/Pearson Education, 2000, 3rd Edition.
Page number (153)
1.By means of DFT and IDFT ,Determine the sequence x3(n) corresponding to the circular
convolution of the sequence x1(n)={2,1,2,1}.x2(n)={1,2,3,4}. Nov/Dec 2008 CSE
Refer book : Digital signal processing by Ramesh Babu .(Pg no 3.46)
2. State the difference between overlap save method and overlap Add method
Nov/Dec 2008 CSE
Refer book : Digital signal processing by Ramesh Babu .(Pg no 3.88)
3. Derive the key equation of radix 2 DIF FFT algorithm and draw the relevant flow graph
taking the computation of an 8 point DFT for your illustration Nov/Dec 2008
CSE
Refer book : Digital signal processing by Nagoor Kani .(Pg no 215)
4. Compute the FFT of the sequence x(n)=n+1 where N=8 using the in place radix 2
decimation in frequency algorithm. Nov/Dec 2008 CSE
Refer book : Digital signal processing by Nagoor Kani .(Pg no 226)
5. Find DFT for {1,1,2,0,1,2,0,1} using FFT DIT butterfly algorithm and
plot the spectrum April/May2008
CSE
Refer book : Digital signal processing by Ramesh Babu .(Pg no 4.17)
6. (i)Find IDFT for {1,4,3,1} using FFT-DIF method April/May2008
CSE
(ii)Find DFT for {1,2,3,4,1} (MAY 2006
ITDSP)
Refer book : Digital signal processing by Ramesh Babu .(Pg no 4.29)
7.Compute the eight point DFT of the sequence x(n)={ ,,,,0,0,0,0} using radix2
decimation in time and radix2 decimation in frequency algorithm. Follow exactly the
corresponding signal flow graph and keep track of all the intermediate quantities by
putting them on the diagram. Nov/Dec 2007 CSE
Refer book : Digital signal processing by Ramesh Babu .(Pg no 4.30)
8.(i) Discuss the properties of DFT.
Refer book : Digital signal processing by S.Poornachandra.,B.sasikala.
(Pg no 749)
(ii)Discuss the use of FFT algorithm in linear filtering. Nov/Dec 2007
CSE
Refer book : Digital signal processing by John G.Proakis .(Pg no 447)
10.Derive the equation for radix 4 FFT for N=4 and Draw the butterfly Diagram.
April/May2008 IT
11. (i) Compute the 8 pt DFT of the sequence
Refer John G Proakis and Dimtris G Manolakis, Digital Signal Processing Principles,
Algorithms and Application, PHI/Pearson Education, 2000, 3 rd Edition. Page number
(456 & 465)
12.Find the 8-pt DFT of the sequence x(n)={1,1,0,0} (APRIL 2005
ITDSP)
Refer P. Ramesh babu, Signals and Systems. Page number (8.58)
13.Find the 8-pt DFT of the sequence
x(n)= 1, 0n7
0, otherwise
using Decimation-in-time FFT algorithm (APRIL 2005 ITDSP)
Refer P. Ramesh babu, Signals and Systems.Page number (8.87)
14.Compute the 8 pt DFT of the sequence
x(n)={0.5,0.5,0.5,0.5,0,0,0,0} using DIT FFT (NOV 2005 ITDSP)
Refer P. Ramesh babu, Signals and Systems.Page number (8.89)
15.By means of DFT and IDFT , determine the response of an FIR filter with impulse
response h(n)={1,2,3},n=0,1,2 to the input sequence x(n) ={1,2,2,1}.
(NOV 2005 ITDSP)
Refer P. Ramesh babu, Signals and Systems.Page number (8.87)
16.(i)Determine the 8 point DFT of the sequence
x(n)= {0,0,1,1,1,0,0,0}
Refer P. Ramesh babu, Signals and Systems.Page number (8.58)
(ii)Find the output sequence y(n) if h(n)={1,1,1} and x(n)={1,2,3,4} using circular
convolution (APR 2004 ITDSP)
Refer P. Ramesh babu, Signals and Systems.Page number (8.65)
17. (i)What is decimation in frequency algorithm? Write the similarities and differences
between DIT and DIF algorithms. (APR 2004 ITDSP) & (MAY 2006 ECEDSP)
Refer P. Ramesh babu, Signals and Systems. Page number (8.70-8.80)
18. Determine the 8 pt DFT of x(n) = 1 for -3 ≤ n ≤ 3 using the DIT-FFT algorithm (APR 2004 ITDSP)
Refer P. Ramesh babu, Signals and Systems. Page number (8.58)
19.Let X(k) denote the N-point DFT of an N-point sequence x(n).If the DFT of X(k)is
computed to obtain a sequence x1(n). Determine x1(n) in terms of x(n) (NOV 2004
ITDSP)
Refer John G Proakis and Dimtris G Manolakis, Digital Signal Processing Principles,
Algorithms and Application, PHI/Pearson Education, 2000, 3rd Edition. Page number (456 &
465)
UNIT-III - IIR FILTER DESIGN
1.Design a digital filter corresponding to an analog filter H(s)= using the impulse
invariant method to work at a sampling frequency of 100 samples/sec
4. Design a digital Butterworth filter satisfying the constraints Nov/Dec 2008 CSE
0.707 ≤ |H(e^jω)| ≤ 1 for 0 ≤ ω ≤ π/2
|H(e^jω)| ≤ 0.2 for 3π/4 ≤ ω ≤ π
with T = 1 sec using the bilinear transformation. Realize the same in Direct Form II.
( 1+Z-1)(1+2Z-1+4Z-2)
(ii) Find H(s) for a 3rd order low pass Butterworth filter April/May 2008 CSE
Refer book : Digital signal processing by Ramesh Babu .(Pg no 5.8)
7.(i) Derive bilinear transformation for an analog filter with system function
H(s) =b / (s+a)
Refer book: Digital signal processing by John G.Proakis .(Pg no 676-679)
(ii) Design a single pole low pass digital IIR filter with -3 dB bandwidth of
0.2π by use of the bilinear transformation. Nov/Dec 2007 CSE
8.(i) Obtain the Direct Form I, Direct Form II, cascade and parallel realization for the
following system Y(n)= -0.1y(n-1)+0.2y(n-2)+3x(n)+3.6x(n-1)+0.6x(n-2)
Refer book : Digital signal processing by Ramesh Babu .(Pg no 5.68)
(ii) Discuss the limitations of designing an IIR filter using the impulse
invariant method. Nov/Dec 2007 CSE
April/May 2008 IT
Ha(s) = (s+a)/((s+a)^2 + b^2)
Refer book : Digital signal processing by Ramesh Babu .(Pg no 5.42)
aa
(ii) Determine the order of the Chebyshev filter that meets the following specifications:
(1) 1 dB ripple in the pass band 0 ≤ |ω| ≤ 0.3π
(2) At least 60 dB attenuation in the stop band 0.35π ≤ |ω| ≤ π. Use the bilinear
transformation.
M
13.Explain the method of design of IIR filters using bilinear transform method.
(APRIL 2005 ITDSP)
Refer John G Proakis and Dimtris G Manolakis, Digital Signal Processing
Principles, Algorithms and Application, PHI/Pearson Education, 2000, 3 rd Edition. Page
number (676-8.3.3)
14.Explain the following terms briefly:
(i)Frequency sampling structures
(ii)Lattice structure for IIR filter (NOV 2005 ITDSP)
Refer John G Proakis and Dimtris G Manolakis, Digital Signal Processing
Principles, Algorithms and Application, PHI/Pearson Education, 2000, 3 rd Edition. Page
number (506 &531)
15.Consider the system described by
y(n)-0.75y(n-1)+0.125y(n-2)=x(n)+0.33x(n-1).
Determine its system function (NOV 2005 ITDSP)
Refer John G Proakis and Dimitris G Manolakis, Digital Signal Processing
Principles, Algorithms and Application, PHI/Pearson Education, 2000, 3rd Edition. Page
number (601-7.37)
16. Find the output of an LTI system if the input is x(n) = (n+2) for 0 ≤ n ≤ 3 and h(n) = a^n u(n) for
all n (APR 2004 ITDSP)
Refer signals and systems by P. Ramesh babu , page no:3.38
17.Obtain cascade form structure of the following system:
y(x)=-0.1y(n-1)+0.2y(n-2)+3x(n)+3.6x(n-1)+0.6x(n-2) (APR 2004 ITDSP)
Refer John G Proakis and Dimtris G Manolakis, Digital Signal Processing
Principles, Algorithms and Application, PHI/Pearson Education, 2000, 3 rd Edition.
Page number (601-7.9c)
18.Verify the Stability and causality of a system with
va
H(z)=(3-4Z-1)/(1+3.5Z-1+1.5Z-2) (APR 2004 ITDSP)
Refer John G Proakis and Dimtris G Manolakis, Digital Signal Processing
Principles, Algorithms and Application, PHI/Pearson Education, 2000, 3rd Edition.
Page number (209)
1.Design a FIR linear phase digital filter approximating the ideal frequency response
Nov/Dec 2008 CSE
With T=1 Sec using bilinear transformation .Realize the same in Direct form II
Refer book : Digital signal processing by Nagoor Kani .(Pg no 367)
2.Obtain direct form and cascade form realizations for the transfer function of the system
given by
Nov/Dec 2008
CSE
Refer book : Digital signal processing by Nagoor Kani .(Pg no 78)
3.Explain the type I frequency sampling method of designing an FIR filter.
Nov/Dec 2008
CSE
Refer book : Digital signal processing by Ramesh Babu .(Pg no6.82)
4.Compare the frequency domain characteristics of various window functions .Explain how
a linear phase FIR filter can be used using window method. Nov/Dec 2008 CSE
Refer book : Digital signal processing by Ramesh Babu .(Pg no6.28)
5. Design a LPF for the following response .using hamming window with
N=7
April/May 2008 CSE
6. (i) Prove that an FIR filter has linear phase if the unit sample response satisfies the
condition h(n)= h(M-1-n), n=0,1,.M-1. Also discuss symmetric and antisymmetric cases
va
of FIR filter. Nov/Dec 2007
Hd(e^jw) = e^(-j3w), 0 ≤ w ≤ π/2
         = 0,        π/2 < w ≤ π
Determine the filter coefficients h(n) for M=7 using frequency sampling
method.
Nov/Dec 2007
CSE
8.(i) For FIR linear phase Digital filter approximating the ideal frequency response
Hd(w) = 1, |w| ≤ π/6
      = 0, π/6 < |w| ≤ π
Determine the coefficients of a 5 tap filter using rectangular Window
Refer book : Digital signal processing by A.Nagoor kani .(Pg no 415
(ii) Determine the unit sample response h(n) of a linear phase FIR filter of length M = 4
for which the frequency response at w = 0 and w = π/2 is given as Hr(0) = 1, Hr(π/2) = 1/2
April/May 2008 IT
Refer book : Digital signal processing by A.Nagoor kani .(Pg no 310)
9.(i) Determine the coefficient h(n) of a linear phase FIR filter of length M=5 which has
symmetric unit sample response and frequency response
Hr(k)=1 for k=0,1,2,3
0.4 for k=4
0 for k=5, 6, 7 April/May2008 IT(NOV 2005 ITDSP)
Refer book : Digital signal processing by A.Nagoor kani .(Pg no 308)
(ii) Show that the equation
Σ(n=0 to M-1) h(n) sin(w(α - n)) = 0, where α = (M-1)/2,
is satisfied for a linear phase FIR filter of length 9
April/May 2008 IT
10. Design a linear phase HPF using the Hanning window with N = 9
H(w) = 1 for -π ≤ w ≤ -Wc and Wc ≤ w ≤ π
     = 0 otherwise
April/May 2008 IT
Refer book: Digital signal processing by A. Nagoor Kani (Pg no 301)
11.Explain in detail about frequency sampling method of designing an FIR filter.
va
(NOV 2004 ITDSP) & ( NOV 2005 ITDSP)
Refer John G Proakis and Dimtris G Manolakis, Digital Signal Processing
Principles, Algorithms and Application, PHI/Pearson Education, 2000, 3 rd
Edition. Page number (630)
na
12.Explain the steps involved in the design of FIR Linear phase filter using window method.
(APR 2005 ITDSP)
Refer John G Proakis and Dimtris G Manolakis, Digital Signal Processing
Principles, Algorithms and Application, PHI/Pearson Education, 2000, 3rd
5. (i) Explain how the speech compression is achieved .
co
(ii) Discuss about quantization noise and derive the equation for
finding quantization noise power. April/May2008CSE
Refer book : Digital signal processing by Ramesh Babu.(Pg no 7.9-7.14)
N.
6. Two first order low pass filter whose system functions are given below are connected in
cascade. Determine the overall output noise
power. H1(z) = 1/ (1-0.9z-1) and H2(z) = 1/ (1-0.8z-1) Nov/Dec 2007 CSE
Refer book: Digital signal processing by Ramesh Babu. (Pg no 7.24)
va
7. Describe the quantization errors that occur in rounding and
truncation in twos complement. Nov/Dec 2007 CSE
Refer book : Digital signal processing by John G.Proakis .(Pg no 564)
of the individual section are H1(z)=1/(1-0.9z-1 ) and H2(z) =1/(1-0.8z-1) .Draw the product
quantization noise model of the system and determine the overall output noise power
April/May2008 IT
Refer book : Digital signal processing by A.Nagoor kani .(Pg no 415)
M
9. (i) Show dead band effect on y(n) = .95 y(n-1)+x(n) system restricted to 4 bits .Assume
x(0) =0.75 and y(-1)=0
(NOV 2004 ITDSP)
Refer John G Proakis and Dimtris G Manolakis, Digital Signal Processing
Principles, Algorithms and Application, PHI/Pearson Education, 2000, 3 rd
co
Edition. Page number (790)
15. Write applications of multirate signal processing in Musical sound processing
(NOV 2004 ITDSP)
Refer John G Proakis and Dimtris G Manolakis, Digital Signal Processing
16. With examples illustrate (i) Fixed point addition (ii) Floating point multiplication (iii)
Truncation (iv) Rounding.(APR 2005 ITDSP) & (NOV 2003 ITDSP)
va
Refer John G Proakis and Dimtris G Manolakis, Digital Signal Processing
Principles, Algorithms and Application, PHI/Pearson Education, 2000, 3 rd Edition.
Page number (7.5)
17. Describe a single echo filter using in musical sound processing.
na
2459
B.E./B.Tech. DEGREE EXAMINATION, APRIL/MAY 2008.
Fifth Semester
Time: Three hours   Maximum: 100 marks
Answer ALL questions.
PART A - (10 x 2 = 20 marks)
1. Check for linearity and causality of the system y(n) = cos x(nT).
9. What is interpolation?
11. (a) (i) Represent the signal y(n) = x(2n) + x(n - 1), where x(n) is the input
and y(n) is the output. (8)
(ii) Explain the procedure to perform linear and circular convolution. (8)
Or
(b) Explain in detail the steps in the computation of the FFT using the DIF
algorithm. (16)
12. (a) Design an FIR filter with the following characteristics using a rectangular
window with M = 7 and determine h(n). (16)
Or
(b) Discuss the various window functions available for constructing linear
phase FIR filters. (16)
13. (a) Design a Butterworth filter with the following characteristics using the
bilinear transformation method with T = 1 s. (16)
Or
(b) Explain briefly how cascade and parallel realizations of filters are done.
(16)
14. (a) Explain fixed and floating point representation in detail. (16)
Or
(b) Explain the various errors that occur in a DSP system. (16)
15. (a) With a neat block diagram explain decimation and interpolation. (16)
Or
DEPARTMENT OF ECE
2 MARKS & QUESTION-ANSWERS
Discrete time signal: A discrete time signal is defined only at discrete instants of time.
The independent variable has discrete values only, which are uniformly spaced. A
co
discrete time signal is often derived from the continuous time signal by sampling it at a
uniform rate.
Ans:
Continuous time and discrete time systems
Linear and Non-linear systems
Causal and Non-causal systems
6. What are energy and power signals?
Ans:
Energy signal: a signal is referred to as an energy signal if and only if its total energy
satisfies the condition 0 < E < ∞. The total energy of the continuous time signal x(t)
is given as
E = lim(T→∞) ∫ x^2(t) dt, with integration limits from -T/2 to +T/2.
Power signal: a signal is said to be a power signal if it satisfies the condition 0 < P < ∞.
The average power of a continuous time signal is given by
P = lim(T→∞) (1/T) ∫ x^2(t) dt, with integration limits from -T/2 to +T/2.
va
7. What are the operations performed on a signal?
Ans:
Operations performed on dependent variables:
na
Amplitude scaling: y (t) =cx (t), where c is the scaling factor, x(t) is the continuous time
signal.
Addition: y (t)=x1(t)+x2(t)
Multiplication y (t)=x1(t)x2(t)
aa
Time shifting
Amplitude scaling
Time reversal
m
Invertibility: A system is said to be invertible if the input of the system con be recovered
from the system output.
Time invariance: A system is said to be time invariant if a time delay or advance of the
co
input signal leads to an identical time shift in the output signal.
Linearity: A system is said to be linear if it satisfies the superposition principle,
i.e., R{a x1(t) + b x2(t)} = a R{x1(t)} + b R{x2(t)}
10. What is memory system and memory less system?
Ans:
A system is said to be a memory system if its output signal at any time depends on the past
values of the input signal. Circuits with inductors and capacitors are examples of memory
systems.
A system is said to be a memoryless system if the output at any time depends only on the
present values of the input signal. An electronic circuit with resistors is an example of a
memoryless system.
A system is said to be an invertible system if the input of the system can be recovered from
the system output. The set of operations needed to recover the input acts as a second system
connected in cascade with the given system, such that the output signal of the second
system is equal to the input signal applied to the first system.
M
H-1{y(t)}=H-1{H{x(t)}}.
13. Is the discrete time system described by the input-output relation y[n] = r^n x[n] time
invariant?
Ans:
A system is time invariant if R{x[n-k]} = y[n-k].
R{x[n-k]} = r^n x[n-k] ---------------- (1)
y[n-k] = r^(n-k) x[n-k] ---------------- (2)
Equation (1) ≠ Equation (2)
Hence the system is time variant.
14. Show that the discrete time system described by the input-output relationship y[n]
=nx[n] is linear?
Ans:
For a system to be linear, R{a1 x1[n] + b1 x2[n]} = a1 y1[n] + b1 y2[n].
L.H.S: R{a1 x1[n] + b1 x2[n]} = n(a1 x1[n] + b1 x2[n]) = a1 n x1[n] + b1 n x2[n] ----(1)
R.H.S: a1 y1[n] + b1 y2[n] = a1 n x1[n] + b1 n x2[n] ----(2)
Equation (1) = Equation (2)
Hence the system is linear.
16. What is the output of the system with system function H1 and H2 when connected in
cascade and parallel?
Ans:
When the system with input x(t) is connected in cascade with the systems H1 and H2, the
output of the system is
y(t) = H2{H1{x(t)}}
When the systems are connected in parallel, the output is given by
y(t) = H1{x(t)} + H2{x(t)}.
21.Determine the response y(n), n>=0 of the system described by the second order
difference equation
y(n) - 4y(n-1) + 4y(n-2) = x(n) - x(n-1), when the input is x(n) = (-1)^n u(n) and the
initial conditions are y(-1) = y(-2) = 1.
25. How many multiplication terms are required for computing the DFT by the direct expression
and by the FFT method?
Direct expression: N^2 complex multiplications; FFT: (N/2) log2 N complex multiplications.
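A short sketch tabulating those two counts for a few transform sizes:

import math

# Direct DFT needs N^2 complex multiplications; radix-2 FFT needs (N/2)*log2(N).
for N in (8, 64, 1024):
    direct = N ** 2
    fft = (N // 2) * int(math.log2(N))
    print(N, direct, fft)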
FIR                                        IIR
Impulse response is finite                 Impulse response is infinite
They have perfect linear phase             They do not have perfect linear phase
Non recursive                              Recursive

Analog filter                              Digital filter
Constructed using active or passive        Consists of elements like adders,
components and described by a              subtractors and delay units and described
differential equation                      by a difference equation
Frequency response can be changed by       Frequency response can be changed by
changing the components                    changing the filter coefficients
Processes and generates analog output      Processes and generates digital output
Output varies due to external conditions   Not influenced by external conditions
The expression for the order is N = log[(10^(0.1 αs) - 1)/(10^(0.1 αp) - 1)]^(1/2) / log(Ωs/Ωp)
LPF to LPF: s → s/Ωc
LPF to HPF: s → Ωc/s
LPF to BPF: s → (s^2 + Ωl Ωu) / [s(Ωu - Ωl)]
LPF to BSF: s → s(Ωu - Ωl) / (s^2 + Ωl Ωu)
33. State the equation for finding the poles of a Chebyshev filter
sk = a cos φk + j b sin φk, where φk = π/2 + (2k-1)π/2N
34. State the steps to design a digital IIR filter using the bilinear method
Substitute s by (2/T)(z-1)/(z+1) in H(s) to get H(z), with the analog frequencies prewarped
as Ω = (2/T) tan(ω/2).
36. What is the warping effect?
For smaller values of ω there exists a linear relationship between ω and Ω, but for
larger values of ω the relationship is nonlinear. This introduces distortion in the
frequency axis, which compresses the magnitude and phase responses. This
effect is called the warping effect.
37. Give the bilinear transform equation between s plane and z plane
s = (2/T) (z-1)/(z+1)
38. Why is the impulse invariant method not preferred in the design of IIR filters other
than low pass filters?
39. By the impulse invariant method obtain the digital filter transfer function and the
differential equation of the analog filter H(s) = 1/(s+1)
H(z) = 1/(1 - e^(-T) z^-1)
Y(s)/X(s) = 1/(s+1)
Cross multiplying and taking the inverse Laplace transform we get
dy(t)/dt + y(t) = x(t)
40. What is meant by the impulse invariant method?
In this method of digitizing an analog filter, the impulse response of the resulting
digital filter is a sampled version of the impulse response of the analog filter. For
example, if the transfer function is of the form 1/(s-p), then
H(z) = 1/(1 - e^(pT) z^-1)
1. The magnitude response of the Chebyshev filter exhibits ripple either in the stop
band or in the pass band.
2. The poles of this filter lie on an ellipse.
43. Give the Butterworth filter transfer function and its magnitude characteristic for
different orders of the filter.
The transfer function of the Butterworth filter is given by
H(jΩ) = 1/[1 + j(Ω/Ωc)^N]
44. Give the magnitude function of the Butterworth filter.
The magnitude function of the Butterworth filter is
|H(jΩ)| = 1/[1 + (Ω/Ωc)^(2N)]^(1/2), N = 1, 2, 3, 4, ...
45. Give the equation for the order N and the major and minor axes of the ellipse in the case of a
Chebyshev filter.
The order is given by
N = cosh^-1{[(10^(0.1 αs) - 1)/(10^(0.1 αp) - 1)]^(1/2)} / cosh^-1(Ωs/Ωp)
Minor axis: a = Ωp (μ^(1/N) - μ^(-1/N))/2
Major axis: b = Ωp (μ^(1/N) + μ^(-1/N))/2
46. Give the expression for poles and zeroes of a chebyshev type 2 filters
47. How can you design a digital filter from analog filter?
Digital filter can de designed from analog filter using the following methods
1. Approximation of derivatives
2. Impulse invariant method
3. Bilinear transformation
48. Write down the bilinear transformation.
s = (2/T) (z-1)/(z+1)
50. Differentiate Butterworth and Chebyshev filters.
Butterworth: damping factor 1.414, maximally flat response.
Chebyshev: damping factor 1.06, rippled (damped oscillatory) response.
58. What is the necessary and sufficient condition for the linear phase characteristic of
an FIR filter?
The phase function should be a linear function of ω, which in turn requires
constant group delay and phase delay.
59. List the well known design techniques for linear phase FIR filter design.
Fourier series method and window method
Frequency sampling method
Optimal filter design method
(Antisymmetric impulse responses are used to design Hilbert transformers and differentiators.)
62. For what kind of applications can the symmetrical impulse response be used?
The impulse response which is symmetric and has an odd number of samples can be
used to design all types of filters, i.e., lowpass, highpass, bandpass and band reject.
The symmetric impulse response having an even number of samples can be used
to design lowpass and bandpass filters.
64. What conditions on the FIR sequence h(n) are to be imposed in order that this filter
can be called a linear phase filter?
The conditions are
(i) Symmetric condition: h(n) = h(N-1-n)
(ii) Antisymmetric condition: h(n) = -h(N-1-n)
65. Under what conditions will a finite duration sequence h(n) yield constant group
delay in its frequency response characteristics and not constant phase delay?
If the impulse response is antisymmetric, satisfying the condition
h(n) = -h(N-1-n),
the frequency response of the FIR filter will have constant group delay but not constant
phase delay.
66. State the condition for a digital filter to be causal and stable.
A digital filter is causal if its impulse response h(n) = 0 for n < 0.
A digital filter is stable if its impulse response is absolutely summable, i.e.,
Σ(n=-∞ to ∞) |h(n)| < ∞
1. The FIR filter is always stable.
2. A realizable filter can always be obtained.
3. The FIR filter has a linear phase response.
68. When is the cascade form realization preferred in FIR filters?
The cascade form realization is preferred when the filter has complex zeros with absolute
magnitude less than one.
69. What are the disadvantages of the Fourier series method?
In designing FIR filters using the Fourier series method, the infinite duration impulse
response is truncated at n = ±(N-1)/2. Direct truncation of the series leads to fixed
percentage overshoots and undershoots before and after an approximated discontinuity in
the frequency response.
73. What is the necessary and sufficient condition for linear phase characteristics in an
FIR filter?
The necessary and sufficient condition for linear phase characteristics in an FIR filter
is that the impulse response h(n) of the system should have the symmetry property, i.e.,
h(n) = h(N-1-n)
where N is the duration of the sequence.
75. What is the principle of designing FIR filters using the frequency sampling method?
In the frequency sampling method the desired magnitude response is sampled and a linear
phase response is specified. The samples of the desired frequency response are taken as
DFT coefficients. The filter coefficients are then determined as the IDFT of this set of
samples.
4. Dedicated-register addressing
5. Memory-mapped register addressing
6. Circular addressing
82. What is meant by block floating point representation? What are its advantages?
In block floating point arithmetic the set of signals to be handled is divided into blocks. Each
block has the same value for the exponent. The arithmetic operations within a block
use fixed point arithmetic, and only one exponent per block is stored, thus saving memory.
This representation of numbers is most suitable in certain FFT flow graphs and in digital
audio applications.
83. What are the advantages of floating point arithmetic?
1. Large dynamic range
2. Overflow in floating point representation is unlikely.
84. What are the three quantization errors due to finite word length registers in digital filters?
1. Input quantization error
2. Coefficient quantization error
3. Product quantization error
85. How are multiplication and addition carried out in floating point arithmetic?
In floating point arithmetic, multiplication is carried out as follows:
let f1 = M1·2^c1 and f2 = M2·2^c2. Then f3 = f1·f2 = (M1·M2)·2^(c1+c2).
That is, the mantissas are multiplied using fixed-point arithmetic and the exponents are
added.
The sum of two floating-point numbers is carried out by shifting the bits of the mantissa
of the smaller number to the right until the exponents of the two numbers are equal and
then adding the mantissas.
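A toy sketch of those two rules using (mantissa, exponent) pairs; this is only an illustration of the idea, not a real floating point implementation:

def fp_multiply(m1, e1, m2, e2):
    # Multiply mantissas, add exponents.
    return m1 * m2, e1 + e2

def fp_add(m1, e1, m2, e2):
    # Shift the mantissa of the smaller-exponent number right until exponents match.
    if e1 < e2:
        m1, e1, m2, e2 = m2, e2, m1, e1
    m2 = m2 / (2 ** (e1 - e2))
    return m1 + m2, e1

print(fp_multiply(0.5, 3, 0.75, 2))   # (0.375, 5)  -> 0.375 * 2^5 = 12.0
print(fp_add(0.5, 3, 0.75, 2))        # (0.875, 3)  -> 0.875 * 2^3 = 7.0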
88. What is the relationship between the truncation error e and the number of bits b used for
representing a decimal number in binary?
For a 2's complement representation, the error due to truncation for both positive and
negative values of x is
0 ≥ xt - x > -2^-b
where b is the number of bits and xt is the truncated value of x.
The equation also holds for sign magnitude and 1's complement if x > 0.
If x < 0, then for sign magnitude and for 1's complement the truncation error satisfies
0 ≤ xt - x < 2^-b.
89. What is meant by rounding? Discuss its effect on all types of number representation.
Rounding a number to b bits is accomplished by choosing the rounded result as the b
bit number closest to the original unrounded number.
For fixed point arithmetic, the error made by rounding a number to b bits satisfies the
inequality
-2^-b/2 ≤ xt - x ≤ 2^-b/2
for all three types of number systems, i.e., 2's complement, 1's complement and sign
magnitude.
For floating point numbers the error made by rounding a number to b bits satisfies the
inequality
-2^E · 2^-b/2 ≤ xt - x ≤ 2^E · 2^-b/2
q = 2/2^(b+1) = 2^-b
where q is known as the quantization step size.
94. How would you relate the steady-state noise power due to quantization to the b bits
representing the binary sequence?
Steady state noise power σe^2 = 2^(-2b)/12
where b is the number of bits excluding the sign bit.
99.Explain briefly the need for scaling in the digital filter implementation.
To prevent overflow, the signal level at certain points in the digital filter must be
scaled so that no overflow occurs in the adder.
100.What are the different buses of TMS320C5X and their functions?
The C5X architecture has four buses and their functions are as follows:
Program bus (PB):
It carries the instruction code and immediate operands from program memory
space to the CPU.
Program address bus (PAB):
It provides addresses to program memory space for both reads and writes.
Data read bus (DB):
It interconnects various elements of the CPU to data memory space.
Data read address bus (DAB):
It provides the address to access the data memory space.
Part B
1. Determine the DFT of the sequence
x(n) = 1/4, 0 ≤ n ≤ 2
     = 0, otherwise
Ans: The N point DFT of the sequence x(n) is defined as
X(k) = Σ(n=0 to N-1) x(n) e^(-j2πnk/N), k = 0, 1, 2, ..., N-1
x(n) = {1/4, 1/4, 1/4}
2. Derive the DFT of the sample data sequence x(n) = {1,1,2,2,3,3}and compute the
corresponding amplitude and phase spectrum.
M
X(k) = Σ(n=0 to N-1) x(n) e^(-j2πnk/N), k = 0, 1, ..., N-1
X(0) = 12
X(1) = -1.5 + j2.598
X(2) = -1.5 + j0.866
X(3) = 0
X(4) = -1.5 - j0.866
X(5) = -1.5 - j2.598
X(k) = {12, -1.5 + j2.598, -1.5 + j0.866, 0, -1.5 - j0.866, -1.5 - j2.598}
|X(k)| = {12, 2.999, 1.732, 0, 1.732, 2.999}
∠X(k) = {0, 2π/3, 5π/6, 0, -5π/6, -2π/3}
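The values above can be cross-checked with NumPy's FFT (a quick sketch):

import numpy as np

x = np.array([1, 1, 2, 2, 3, 3])
X = np.fft.fft(x)

print(np.round(X, 3))            # [12, -1.5+2.598j, -1.5+0.866j, 0, -1.5-0.866j, -1.5-2.598j]
print(np.round(np.abs(X), 3))    # magnitude spectrum
print(np.round(np.angle(X), 3))  # phase spectrum in radians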
WN^k = e^(-j(2π/N)k)
W8^0 = 1
W8^1 = 0.707 - j0.707
W8^2 = -j
W8^3 = -0.707 - j0.707
X(k) = {28, -4+j9.656, -4+j4, -4+j1.656, -4, -4-j1.656, -4-j4, -4-j9.656}
Using the inverse DIT FFT algorithm:
WN^k = e^(j(2π/N)k)
W8^0 = 1
W8^1 = 0.707 + j0.707
W8^2 = j
W8^3 = -0.707 + j0.707
x(n) = {0, 1, 2, 3, 4, 5, 6, 7}
x(n) = (1/N) Σ(k=0 to N-1) X(k) e^(j2πnk/N), n = 0, 1, ..., N-1
x(0) = 5/2
x(1) = -1/2 - j1/2
x(2) = -1/2
x(3) = -1/2 + j1/2
x(n) = {5/2, -1/2 - j1/2, -1/2, -1/2 + j1/2}
6. Design an ideal low pass filter with a frequency response Hd(e^jw) = 1 for -π/2 ≤ w ≤ π/2
   = 0 otherwise
Find the values of h(n) for N = 11, find H(z) and plot the magnitude response.
7. Design an ideal low pass filter with a frequency response Hd(e^jw) = 1 for π/4 ≤ |w| ≤ π
   = 0 otherwise
Find the values of h(n) for N = 11, find H(z) and plot the magnitude response.
i. h(0) = 0.75
   h(1) = h(-1) = -0.22
   h(2) = h(-2) = -0.159
   h(3) = h(-3) = -0.075
   h(4) = h(-4) = 0
   h(5) = h(-5) = 0.045
j. Convert h(n) into a finite length sequence by truncation.
k. Find the transfer function H(z), which is not realizable; convert it into a realizable form by
   multiplying by z^-(N-1)/2.
   H(z) obtained is 0.045 - 0.075z^-2 - 0.159z^-3 - 0.22z^-4 + 0.75z^-5 - 0.22z^-6 - 0.159z^-7
   - 0.075z^-8 + 0.045z^-10
l. Find H(e^jw) and plot the amplitude response curve.
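The coefficient values in steps i-k appear to correspond to h(n) = δ(n) - sin(πn/4)/(πn) truncated to |n| ≤ 5; a short sketch that reproduces them (this correspondence is an assumption used only for the check):

import numpy as np

# Ideal response equal to 1 for pi/4 <= |w| <= pi (complement of a lowpass at pi/4).
n = np.arange(-5, 6)
lowpass = 0.25 * np.sinc(0.25 * n)          # sin(pi*n/4)/(pi*n), value 0.25 at n = 0
h = np.where(n == 0, 1.0, 0.0) - lowpass    # delta(n) minus the lowpass part

print(np.round(h, 3))   # 0.045, 0, -0.075, -0.159, -0.225, 0.75, -0.225, -0.159, -0.075, 0, 0.045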
8. Design a band pass filter with a frequency response Hd(e^jw) = 1 for π/3 ≤ |w| ≤ 2π/3
   = 0 otherwise
Find the values of h(n) for N = 11, find H(z) and plot the magnitude response.
10. Derive the condition for an FIR filter to be linear in phase.
The conditions are that the group delay and phase delay should be constant,
which requires h(n) = h(N-1-n); show that this condition is satisfied.
11. Derive the expressions for the steady state input noise power and steady state output noise
power.
Write the derivation.
12. Draw the product quantization noise model for first order and second order filters.
13. For the second order filter, draw the direct form II realization and find the scaling
factor S0 to avoid overflow.
The three types of number representation are
1. Fixed point
2. Floating point
3. Block floating point
Ans: σe^2 = 2^(-2b)/12     (5.43)
16. Explain the architecture of a DSP processor.
17. Describe briefly the different methods of power spectral estimation.
1. Bartlett method
2. Welch method
3. Blackman-Tukey method
and their derivations.
A DSP system contains an A/D converter that operates on the analog input x(t) to
produce xq(t), which is a binary sequence of 0s and 1s.
First the signal x(t) is sampled at regular intervals to produce a sequence x(n) of
infinite precision. Each sample x(n) is then expressed with a finite number of bits, giving
the sequence xq(n). The difference signal e(n) = xq(n) - x(n) is called A/D conversion noise.
+ derivation.
20. Given X(k) = {1,1,1,1,1,1,1,1}, find x(n) using the inverse DIT FFT algorithm.
WN^k = e^(j(2π/N)k)
x(n) = (1/N) Σ(k=0 to N-1) X(k) e^(j2πnk/N), n = 0, 1, ..., N-1
22. Explain the various addressing modes of the TMS processor.
Immediate
Register
Register indirect
Indexed
and their detailed explanation.
23 Derive the expression for steady state I/P Noise Variance and Steady state O/P
Noise Variance
V 4558
B.E./B.Tech. DEGREE EXAMINATION, APRIL/MAY 2008.
Fifth Semester
(Regulation 2004)
(Common to B.E. (Part-Time) Fourth Semester Regulation 2005)
Time: Three hours   Maximum: 100 marks
Answer ALL questions.
PART A - (10 x 2 = 20 marks)
invariant transformation?
5. What are the three types of quantization error that occur in digital systems?
7. What is a periodogram?
11. (a) (i) Discuss in detail the important properties of the Discrete Fourier
Transform. (8)
(ii) x(n) = cos(nπ/4). (8)
Or
(b) (i) Using decimation-in-time, draw the butterfly line diagram for
8 point FFT calculation and explain. (8)
(ii) Compute an 8 point DFT using the DIF FFT radix 2 algorithm. (8)
x(n) = { , 2, 3, 4, 4, 3, 2, 1}
12. (a) (i) Determine the magnitude response of an FIR filter (M = 11) and
show that the phase and group delays are constant. (8)
H(z) = Σ(n=0 to M-1) h(n) z^-n
Or
(b) (i) For the analog transfer function H(s), determine the digital filter
satisfying
0.9 ≤ |H(e^jw)| ≤ 1 for 0 ≤ w ≤ π/2
|H(e^jw)| ≤ 0.2 for 3π/4 ≤ w ≤ π
13. (a) (i) Discuss in detail the truncation error and round-off error for sign
magnitude and two's complement representation. (8)
Or
(b) (i) A digital system is characterized by the difference equation
y(n) = 0.9y(n-1) + x(n)
With x(n) = 0 and initial condition y(-1) = 12, determine the dead
band of the system. (4)
(ii) What is meant by coefficient quantization? Explain. (12)
14. (a) (i) Explain the Bartlett method of averaging periodograms. (8)
(ii) What is the relationship between autocorrelation and power
spectrum? Prove it. (8)
Or
(b) (i) Derive the mean and variance of the power spectral estimate of the
Blackman and Tukey method. (8)
(ii) Obtain the expression for the mean and variance of the autocorrelation
function of random signals. (8)
15. (a) (i) Describe the multiplier and accumulator unit in DSP processors. (6)
(ii) Explain the architecture of the TMS320C5X DSP processor. (10)
Or
(b) (i) Discuss in detail the four phases of the pipeline technique. (8)
C 3189
B.E./B.Tech. DEGREE EXAMINATION, MAY/JUNE 2007.
Fifth Semester
(Regulation 2004)
Answer ALL questions.
PART A - (10 x 2 = 20 marks)
1. The first five DFT coefficients of a sequence x[n] are X(0) = 20, X(1) = 5 + j2,
X(2) = 0, X(3) = 0.2 + j0.4, X(4) = 0. Determine the remaining DFT
coefficients.
2. What are the advantages of the FFT algorithm over direct computation of the DFT?
4. Find the digital transfer function H(z) by using the impulse invariant method for
5. Identify the various factors which degrade the performance of the digital filter
implementation when finite word length is used.
PART B - (5 x 16 = 80 marks)
11. (a) (i) Prove the following properties of the DFT when H[k] is the DFT of an
N-point sequence h[n].
(1) H[k] is real and even when h[n] is real and even.
(2) H[k] is imaginary and odd when h[n] is real and odd. (8)
(ii) Compute the DFT of x[n] = e^(-0.5n), 0 ≤ n ≤ 5. (8)
Or
(b) (i) From first principles obtain the signal flow graph for computing the
8-point DFT using the radix-2 decimation-in-frequency FFT algorithm. (8)
12. (a) A bandpass FIR filter of length 7 is required. It is to have lower and
Or
13. (a) (i) Consider the truncation of negative fraction numbers represented
in (β+1)-bit fixed point binary form including the sign bit. Let (β-b)
bits be truncated. Obtain the range of truncation errors for signed
magnitude, 2's complement and 1's complement representations of
the negative numbers. (8)
Or
(b) (i) Consider a (b+1)-bit (including sign bit) bipolar A/D converter.
Obtain an expression for the signal to quantization noise ratio. State
the assumptions made. (8)
(ii) A causal IIR filter is defined by the difference equation
y[n] = x[n] - 0.9y[n-1]. The unit sample response h[n] is computed
such that the computed values are rounded to one decimal place.
Show that the filter exhibits the dead band effect. Determine the dead
band range. (8)
14. (a) (i) Compute the autocorrelation and power spectral density for the
Or
15. (a) (i) Explain what is meant by instruction pipelining. Explain, with an
example, how pipelining increases the throughput efficiency. (8)
Or
(ii) Explain the operation of the CSSU of the TMS320C54X and explain its use
considering the Viterbi operator. (8)