
Connexions module: m10482 1

Speech Processing: Theory of LPC



Analysis and Synthesis

Douglas L. Jones

Swaroop Appadwedula

Matthew Berry

Mark Haun

Jake Janovetz

Michael Kramer

Dima Moussa

Daniel Sachs

Brian Wade

This work is produced by The Connexions Project and licensed under the
Creative Commons Attribution License †

Abstract
Speech analysis and synthesis with Linear Predictive Coding (LPC) exploit the predictable nature
of speech signals. Cross-correlation, autocorrelation, and autocovariance provide the mathematical tools
to determine this predictability. If we know the autocorrelation of the speech sequence, we can use the
Levinson-Durbin algorithm to find an efficient solution to the least mean-square modeling problem and
use the solution to compress or resynthesize the speech.

1 Introduction
Linear predictive coding (LPC) is a popular technique for speech compression and speech synthesis. The
theoretical foundations of both are described below.

1.1 Correlation coefficients


Correlation, a measure of similarity between two signals, is frequently used in the analysis of speech and
other signals. The cross-correlation between two discrete-time signals x[n] and y[n] is defined as

r_{xy}[l] = \sum_{n=-\infty}^{\infty} x[n] y[n-l]    (1)

∗ Version 2.19: Jun 1, 2009 10:26 am GMT-5


† http://creativecommons.org/licenses/by/1.0

http://cnx.org/content/m10482/2.19/

where n is the sample index and l is the lag, or time shift, between the two signals (Proakis and Manolakis
[1], p. 120). Since speech signals are not stationary, we are typically interested in the similarities between
signals only over a short time duration (< 30 ms). In this case, the cross-correlation is computed only over
a window of time samples and for only a few time delays l = {0, 1, ..., P}.
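As a concrete sketch, the windowed cross-correlation of (1) can be computed directly. This is illustrative Python (NumPy assumed; the function name is hypothetical):

```python
import numpy as np

def cross_correlation(x, y, max_lag):
    """Finite-window estimate of r_xy[l] = sum_n x[n] * y[n - l]
    for lags l = 0, 1, ..., max_lag; x and y are equal-length windows."""
    N = len(x)
    # For lag l, only the overlapping products x[l..N-1] * y[0..N-1-l] exist.
    return np.array([np.dot(x[l:], y[:N - l]) for l in range(max_lag + 1)])
```

As in the text, the lags are restricted to a small set l = {0, ..., P} rather than the infinite sum of (1).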
Now consider the autocorrelation sequence r_{ss}[l], which describes the redundancy in the signal s[n]:

r_{ss}[l] = \frac{1}{N} \sum_{n=0}^{N-1} s[n] s[n-l]    (2)

where s[n], n = {-P, -P+1, ..., N-1} are the known samples (see Figure 1) and 1/N is a normalizing
factor.

Figure 1: Computing the autocorrelation coefficients. The window s[n] spans samples 0 to N-1; the
shifted copy s[n-l] spans -l to N-l-1; the two are multiplied and accumulated to get r_{ss}[l].

Another related method of measuring the redundancy in a signal is to compute its autocovariance

r_{ss}[l] = \frac{1}{N-l} \sum_{n=l}^{N-1} s[n] s[n-l]    (3)

where the summation is over N-l products (the samples {s[-P], ..., s[-1]} are ignored).
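A minimal sketch of both estimates in Python (NumPy assumed). Two simplifying assumptions are made here: samples before n = 0 are treated as zero rather than as the P known past samples, and the autocovariance averages over the N - l fully-overlapping products:

```python
import numpy as np

def autocorrelation(s, max_lag):
    """Eq. (2): r_ss[l] = (1/N) sum_{n=0}^{N-1} s[n] s[n-l],
    with samples before n = 0 taken as zero."""
    N = len(s)
    return np.array([np.dot(s[l:], s[:N - l]) / N for l in range(max_lag + 1)])

def autocovariance(s, max_lag):
    """Eq. (3): only the N - l products with both factors inside the
    window are summed, then normalized by their count."""
    N = len(s)
    return np.array([np.dot(s[l:], s[:N - l]) / (N - l) for l in range(max_lag + 1)])
```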

1.2 Linear prediction model


Linear prediction is a good tool for analysis of speech signals. Linear prediction models the human vocal
tract as an infinite impulse response (IIR) system that produces the speech signal. For vowel sounds
and other voiced regions of speech, which have a resonant structure and a high degree of similarity over time
shifts that are multiples of their pitch period, this modeling produces an efficient representation of the sound.
Figure 2 shows how the resonant structure of a vowel could be captured by an IIR system.


H(z) = \frac{1}{1 - a_1 z^{-1} - a_2 z^{-2} - \dots - a_P z^{-P}}

Figure 2: Linear Prediction (IIR) Model of Speech

The linear prediction problem can be stated as finding the coefficients a_k which result in the best prediction
(minimizing the mean-squared prediction error) of the speech sample s[n] in terms of the past samples
s[n-k], k = {1, ..., P}. The predicted sample \hat{s}[n] is then given by (Rabiner and Juang [2])

\hat{s}[n] = \sum_{k=1}^{P} a_k s[n-k]    (4)

where P is the number of past samples of s[n] which we wish to examine.


Next we derive the frequency response of the system in terms of the prediction coefficients a_k. Writing
the speech sample as the prediction in (4) plus an excitation (prediction-error) term e[n],

s[n] = \sum_{k=1}^{P} a_k s[n-k] + e[n]

Taking the z-transform of both sides,

S(z) = \sum_{k=1}^{P} a_k S(z) z^{-k} + E(z)

so the transfer function from excitation to speech is

H(z) = \frac{S(z)}{E(z)} = \frac{1}{1 - \sum_{k=1}^{P} a_k z^{-k}}    (5)
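In the time domain, (5) is simply the IIR recursion s[n] = e[n] + sum_k a_k s[n-k]. A minimal Python sketch (NumPy assumed, zero initial conditions, function name illustrative):

```python
import numpy as np

def lpc_synthesize(a, excitation):
    """Run an excitation e[n] through H(z) = 1 / (1 - sum_k a_k z^{-k}),
    i.e. s[n] = e[n] + sum_{k=1}^{P} a_k s[n-k]."""
    s = np.zeros(len(excitation))
    for n, e in enumerate(excitation):
        # Past outputs s[n-k] with n-k < 0 are taken as zero.
        s[n] = e + sum(a[k - 1] * s[n - k] for k in range(1, len(a) + 1) if n - k >= 0)
    return s
```

For example, a = [0.5] driven by a unit impulse yields the geometric sequence 1, 0.5, 0.25, 0.125, ...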
The optimal solution to this problem is (Rabiner and Juang [2])

a = (a_1, a_2, \dots, a_P)^T

r = (r_{ss}[1], r_{ss}[2], \dots, r_{ss}[P])^T

R = \begin{pmatrix}
r_{ss}[0] & r_{ss}[1] & \cdots & r_{ss}[P-1] \\
r_{ss}[1] & r_{ss}[0] & \cdots & r_{ss}[P-2] \\
\vdots & \vdots & \ddots & \vdots \\
r_{ss}[P-1] & r_{ss}[P-2] & \cdots & r_{ss}[0]
\end{pmatrix}


a = R^{-1} r    (6)

Due to the Toeplitz property of the matrix R (it is symmetric with equal diagonal elements), an efficient
algorithm is available for computing a without the computational expense of finding R^{-1}. The Levinson-
Durbin algorithm is an iterative method of computing the predictor coefficients a (Rabiner and Juang [2],
p. 115).
Initial step: E_0 = r_{ss}[0]

For i = 1 to P:

1. k_i = \frac{1}{E_{i-1}} \left( r_{ss}[i] - \sum_{j=1}^{i-1} \alpha_{j,i-1} r_{ss}[|i-j|] \right)

2. \alpha_{j,i} = \alpha_{j,i-1} - k_i \alpha_{i-j,i-1}, for j = {1, ..., i-1}

   \alpha_{i,i} = k_i

3. E_i = (1 - k_i^2) E_{i-1}

After the final iteration, the predictor coefficients are a_k = \alpha_{k,P}.
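The steps above can be sketched in Python (NumPy assumed; variable names mirror the recursion). The result can be cross-checked against the direct solution a = R^{-1} r of (6):

```python
import numpy as np

def levinson_durbin(r, P):
    """Levinson-Durbin recursion: solves the Toeplitz system R a = r
    from r[0..P] without forming R^{-1}. Returns the predictor
    coefficients a_k = alpha_{k,P}, the reflection coefficients k_i,
    and the final prediction error E_P."""
    E = r[0]
    alpha = np.zeros(P + 1)          # alpha[j] holds alpha_{j,i} for the current i
    k = np.zeros(P + 1)
    for i in range(1, P + 1):
        # Step 1: reflection coefficient
        k[i] = (r[i] - sum(alpha[j] * r[abs(i - j)] for j in range(1, i))) / E
        # Step 2: update the predictor polynomial
        prev = alpha.copy()
        for j in range(1, i):
            alpha[j] = prev[j] - k[i] * prev[i - j]
        alpha[i] = k[i]
        # Step 3: update the prediction error
        E = (1.0 - k[i] ** 2) * E
    return alpha[1:], k[1:], E
```

The returned reflection coefficients k_i are exactly the lattice coefficients used in the synthesis section below.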

1.3 LPC-based synthesis


It is possible to use the prediction coefficients to synthesize the original sound by applying δ[n], the unit
impulse, to the IIR system with lattice coefficients k_i, i = {1, ..., P}, as shown in Figure 3. Applying
δ[n] to consecutive IIR systems (which represent consecutive speech segments) yields a longer segment of
synthesized speech.
In this application, lattice filters are used rather than direct-form filters since the lattice filter coefficients
have magnitude less than one and, conveniently, are available directly as a result of the Levinson-Durbin
algorithm. If a direct-form implementation is desired instead, the α coefficients must be factored into second-
order stages with very small gains to yield a more stable implementation.

Figure 3: IIR lattice filter implementation. The input x[n] passes through stages with coefficients
k_3, k_2, k_1 (and -k_3, -k_2, -k_1 on the feedback paths), separated by delays D, to produce y[n].
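A sketch of the all-pole lattice synthesis filter in Python (NumPy assumed). The stage equations used here, f_{i-1}[n] = f_i[n] + k_i g_{i-1}[n-1] and g_i[n] = g_{i-1}[n-1] - k_i f_{i-1}[n], follow from the Levinson-Durbin polynomial recursion; the returned state vector can be passed to the next call so the delay boxes are not cleared between segments:

```python
import numpy as np

def lattice_synthesize(k, excitation, g=None):
    """All-pole lattice filter: the excitation enters at stage P and the
    output is s[n] = f_0[n]. k[0..P-1] are the reflection coefficients
    k_1..k_P; g carries the delayed backward signals g_0[n-1]..g_{P-1}[n-1]."""
    P = len(k)
    if g is None:
        g = np.zeros(P)                      # cleared states (start of speech)
    out = np.zeros(len(excitation))
    for n, e in enumerate(excitation):
        f = e                                # f_P[n]
        new_g = np.zeros(P)
        for i in range(P, 0, -1):
            f = f + k[i - 1] * g[i - 1]      # f_{i-1}[n]
            if i <= P - 1:
                new_g[i] = g[i - 1] - k[i - 1] * f   # g_i[n]
        new_g[0] = f                         # g_0[n] = f_0[n] = s[n]
        out[n] = f
        g = new_g
    return out, g
```

Feeding the returned state g back in on the next segment's call implements the continuity fix described in the text.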

When each segment of speech is synthesized in this manner, two problems occur. First, the synthesized
speech is monotonous, containing no changes in pitch, because the δ[n]'s, which represent pulses of air from
the vocal cords, occur with a fixed periodicity equal to the analysis segment length; in normal speech, we vary
the frequency of air pulses from our vocal cords to change pitch. Second, the states of the lattice filter (i.e.,
past samples stored in the delay boxes) are cleared at the beginning of each segment, causing discontinuity
in the output.
To estimate the pitch, we look at the autocorrelation coefficients of each segment. A large peak in the
autocorrelation coefficient at lag l ≠ 0 implies the speech segment is periodic (or, more often, approximately
periodic) with period l. In synthesizing these segments, we recreate the periodicity by using an impulse train
as input and varying the delay between impulses according to the pitch period. If the speech segment does


not have a large peak in the autocorrelation coefficients, then the segment is an unvoiced signal which has
no periodicity. Unvoiced segments such as consonants are best reconstructed by using noise instead of an
impulse train as input.
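A sketch of this voiced/unvoiced decision and pitch estimate in Python (NumPy assumed). The lag search range and the 0.25 voicing threshold, taken from the rule of thumb in the Additional Issues section, are illustrative choices:

```python
import numpy as np

def estimate_pitch(segment, fs, fmin=60.0, fmax=400.0, threshold=0.25):
    """Look for a large autocorrelation peak at lag l != 0.
    Returns (voiced, pitch_period_in_samples or None)."""
    N = len(segment)
    r = np.array([np.dot(segment[l:], segment[:N - l]) for l in range(N)])
    lo, hi = int(fs / fmax), int(fs / fmin)   # plausible pitch lags
    lag = lo + int(np.argmax(r[lo:hi]))
    voiced = r[lag] / r[0] >= threshold       # normalized peak height
    return voiced, (lag if voiced else None)
```

For a segment containing an impulse train with period 80 samples at fs = 8000 Hz, the estimate recovers a pitch period of 80 samples (100 Hz); a segment with no repeated structure falls below the threshold and is declared unvoiced.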
To reduce the discontinuity between segments, do not clear the states of the IIR model from one segment
to the next. Instead, load the new set of reflection coefficients, k_i, and continue with the lattice filter
computation.

2 Additional Issues
• The five Spanish vowel sounds (as in mop, ace, easy, go, but) are easier to recognize using LPC.
• Error can be computed as a^T R a, where R is the autocovariance or autocorrelation matrix of a test
segment and a is the vector of prediction coefficients of a template segment.
• A pre-emphasis filter before LPC, emphasizing frequencies of interest in the recognition or synthesis,
can improve performance.
• The pitch frequency for males (roughly 80-150 Hz) is different from the pitch frequency for females.
• For voiced segments, r_{ss}[T] / r_{ss}[0] ≈ 0.25, where T is the pitch period.
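The a^T R a error measure from the second bullet can be sketched as follows (Python with NumPy assumed; whether a includes the analysis filter's leading 1 is left open in the text, so here a is used exactly as given):

```python
import numpy as np

def lpc_error(a, r):
    """a^T R a: R is the P x P Toeplitz autocorrelation matrix of the
    test segment (built from r[0..P-1]), a the template's coefficients."""
    P = len(a)
    # Toeplitz structure: R[i][j] = r[|i - j|]
    R = np.array([[r[abs(i - j)] for j in range(P)] for i in range(P)])
    return float(a @ R @ a)
```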

References
[1] J. G. Proakis and D. G. Manolakis. Digital Signal Processing: Principles, Algorithms, and Applications.
Prentice-Hall, Upper Saddle River, NJ, 1996.
[2] L. Rabiner and B. H. Juang. Fundamentals of Speech Recognition. Prentice-Hall, Englewood Cliffs, NJ,
1993.
