LPC
Douglas L. Jones
Swaroop Appadwedula
Matthew Berry
Mark Haun
Jake Janovetz
Michael Kramer
Dima Moussa
Daniel Sachs
Brian Wade
This work is produced by The Connexions Project and licensed under the
Creative Commons Attribution License †
Abstract
Speech analysis and synthesis with Linear Predictive Coding (LPC) exploit the predictable nature
of speech signals. Cross-correlation, autocorrelation, and autocovariance provide the mathematical tools
to determine this predictability. If we know the autocorrelation of the speech sequence, we can use the
Levinson-Durbin algorithm to find an efficient solution to the least mean-square modeling problem and
use the solution to compress or resynthesize the speech.
1 Introduction
Linear predictive coding (LPC) is a popular technique for speech compression and speech synthesis. The
theoretical foundations of both are described below.
http://cnx.org/content/m10482/2.19/
Connexions module: m10482 2
The cross-correlation between two signals x[n] and y[n] is

r_xy[l] = (1/N) * Σ_{n=0}^{N−1} x[n] y[n−l]   (1)

where n is the sample index, and l is the lag or time shift between the two signals (Proakis and Manolakis
[1], p. 120). Since speech signals are not stationary, we are typically interested in the similarities between
signals only over a short time duration (< 30 ms). In this case, the cross-correlation is computed only over
a window of time samples and for only a few time delays l = {0, 1, . . . , P}.
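The windowed cross-correlation just described can be sketched in Python. Two details are assumptions not fixed by the text: the window is normalized by its length N, and samples of y before the window (y[n−l] for n < l) are treated as zero.

```python
import numpy as np

def cross_correlation(x, y, P):
    """Windowed cross-correlation r_xy[l] for lags l = 0..P.

    Assumes normalization by the window length N and zero samples
    of y before the window start.
    """
    N = len(x)
    r = np.zeros(P + 1)
    for l in range(P + 1):
        # sum_{n=l}^{N-1} x[n] * y[n-l], divided by N
        r[l] = np.dot(x[l:], y[:N - l]) / N
    return r
```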
Now consider the autocorrelation sequence rss [l], which describes the redundancy in the signal s [n].
rss[l] = (1/N) * Σ_{n=0}^{N−1} s[n] s[n−l]   (2)

where s[n], n = {−P, −P+1, . . . , N−1} are the known samples (see Figure 1) and 1/N is a normalizing
factor.
[Figure 1: the analysis window s[n] spans samples 0 to N−1; the shifted window s[n−l] spans −l to N−l−1, reaching back toward the earliest known sample at −P]
Another related method of measuring the redundancy in a signal is to compute its autocovariance
rss[l] = (1/(N−l)) * Σ_{n=l}^{N−1} s[n] s[n−l]   (3)
where the summation is over N − l products (the samples {s [−P] , . . . , s [−1]} are ignored).
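The two estimates, eq. (2) and eq. (3), can be compared directly in code. A minimal sketch (the array layout for eq. (2) is an assumption: the input holds the P known past samples followed by the N window samples):

```python
import numpy as np

def autocorrelation(s_ext, N, P):
    """Eq. (2): r_ss[l] = (1/N) sum_{n=0}^{N-1} s[n] s[n-l].

    s_ext holds s[-P], ..., s[N-1], so sample index n maps to
    s_ext[n + P] (an assumed layout; the text only says the past
    samples are known).
    """
    r = np.zeros(P + 1)
    for l in range(P + 1):
        r[l] = np.dot(s_ext[P:P + N], s_ext[P - l:P + N - l]) / N
    return r

def autocovariance(s, P):
    """Eq. (3): sums only the N - l in-window products and normalizes
    by N - l, ignoring the samples before the window."""
    N = len(s)
    r = np.zeros(P + 1)
    for l in range(P + 1):
        r[l] = np.dot(s[l:], s[:N - l]) / (N - l)
    return r
```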
1 / (1 − a1 z^{−1} − a2 z^{−2} − . . . − aP z^{−P})
The linear prediction problem can be stated as finding the coefficients ak which result in the best prediction
(which minimizes the mean-squared prediction error) of the speech sample s[n] in terms of the past samples
s[n−k], k = {1, . . . , P}. The predicted sample ŝ[n] is then given by Rabiner and Juang [2]

ŝ[n] = Σ_{k=1}^{P} ak s[n−k]   (4)

In the z-domain, the prediction is

Ŝ(z) = Σ_{k=1}^{P} ak S(z) z^{−k}

which leads to the all-pole synthesis filter

1 / (1 − Σ_{k=1}^{P} ak z^{−k})   (5)
The optimal solution to this problem is (Rabiner and Juang [2])

a = (a1, a2, . . . , aP)^T

r = (rss[1], rss[2], . . . , rss[P])^T

R = | rss[0]     rss[1]     . . .  rss[P−1] |
    | rss[1]     rss[0]     . . .  rss[P−2] |
    | . . .      . . .      . . .  . . .    |
    | rss[P−1]   rss[P−2]   . . .  rss[0]   |
a = R^{−1} r   (6)
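As an illustration of eq. (6), one can build R and r from the autocorrelation values and solve the normal equations directly. This sketch uses NumPy's generic solver; the Levinson-Durbin algorithm discussed next avoids this more expensive general-purpose solve.

```python
import numpy as np

def lpc_direct(rss, P):
    """Solve a = R^{-1} r (eq. 6) by forming the Toeplitz matrix explicitly."""
    R = np.array([[rss[abs(i - j)] for j in range(P)] for i in range(P)])
    r = np.asarray(rss[1:P + 1])
    return np.linalg.solve(R, r)

# Example: a first-order process s[n] = 0.9 s[n-1] + u[n] has
# autocorrelation proportional to 0.9^l, so the predictor recovers a1 = 0.9.
a = lpc_direct([1.0, 0.9, 0.81], 2)  # -> approximately [0.9, 0.0]
```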
Due to the Toeplitz property of the R matrix (it is symmetric with equal diagonal elements), an efficient
algorithm is available for computing a without the computational expense of finding R^{−1}. The Levinson-
Durbin algorithm is an iterative method of computing the predictor coefficients a (Rabiner and Juang [2],
p. 115).
Initial step: E0 = rss[0], i = 1.

For i = 1 to P:

1. ki = (1/E_{i−1}) * ( rss[i] − Σ_{j=1}^{i−1} α_{j,i−1} rss[|i−j|] )
2. α_{i,i} = ki
3. α_{j,i} = α_{j,i−1} − ki α_{i−j,i−1} for j = {1, . . . , i−1}
4. Ei = (1 − ki^2) E_{i−1}

The final predictor coefficients are ak = α_{k,P}, and the ki are the reflection coefficients used by the lattice filter.
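The recursion above can be sketched in Python (a sketch, not the module's own code; it follows the standard Levinson-Durbin recursion and returns both the predictor coefficients and the reflection coefficients):

```python
import numpy as np

def levinson_durbin(rss, P):
    """Levinson-Durbin recursion: predictor coefficients a_k = alpha_{k,P}
    and reflection coefficients k_i from autocorrelation values rss[0..P]."""
    E = rss[0]                  # prediction error energy E_0
    alpha = np.zeros(P + 1)     # alpha[j] = alpha_{j,i} after step i
    k = np.zeros(P + 1)
    for i in range(1, P + 1):
        # Step 1: k_i = (rss[i] - sum_j alpha_{j,i-1} rss[i-j]) / E_{i-1}
        k[i] = (rss[i] - sum(alpha[j] * rss[i - j] for j in range(1, i))) / E
        prev = alpha.copy()
        alpha[i] = k[i]         # Step 2
        for j in range(1, i):   # Step 3
            alpha[j] = prev[j] - k[i] * prev[i - j]
        E *= 1.0 - k[i] ** 2    # Step 4
    return alpha[1:], k[1:]
```

For the first-order example rss = [1, 0.9, 0.81], this yields the same predictor as the direct solve of eq. (6), with k1 = 0.9 and k2 = 0.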
[Figure: lattice-filter realization of the all-pole synthesis filter, with input x[n], output y[n], reflection coefficients k1, k2, k3 on the forward path, −k1, −k2, −k3 on the backward path, and unit delays D]
When each segment of speech is synthesized in this manner, two problems occur. First, the synthesized
speech is monotonous, containing no changes in pitch, because the δ[n]'s, which represent pulses of air from
the vocal cords, occur with fixed periodicity equal to the analysis segment length; in normal speech, we vary
the frequency of air pulses from our vocal cords to change pitch. Second, the states of the lattice filter (i.e.,
past samples stored in the delay boxes) are cleared at the beginning of each segment, causing discontinuity
in the output.
To estimate the pitch, we look at the autocorrelation coefficients of each segment. A large peak in the
autocorrelation coefficient at lag l ≠ 0 implies the speech segment is periodic (or, more often, approximately
periodic) with period l. In synthesizing these segments, we recreate the periodicity by using an impulse train
as input and varying the delay between impulses according to the pitch period. If the speech segment does
not have a large peak in the autocorrelation coefficients, then the segment is an unvoiced signal which has
no periodicity. Unvoiced segments such as consonants are best reconstructed by using noise instead of an
impulse train as input.
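This voiced/unvoiced decision can be sketched as follows. The lag search range and the 0.25 threshold are illustrative choices (the threshold echoes the rule of thumb listed under Additional Issues):

```python
import numpy as np

def estimate_pitch(segment, fs, f_min=60.0, f_max=400.0, threshold=0.25):
    """Voiced/unvoiced decision and pitch period (in samples) from the
    autocorrelation peak. f_min/f_max bound the search to plausible
    pitch lags (assumed values, not from the text)."""
    N = len(segment)
    r = np.correlate(segment, segment, mode="full")[N - 1:]  # r[l], l >= 0
    r = r / r[0]
    l_lo = int(fs / f_max)
    l_hi = min(int(fs / f_min), N - 1)
    l_peak = l_lo + int(np.argmax(r[l_lo:l_hi + 1]))
    if r[l_peak] >= threshold:
        return True, l_peak      # voiced: drive the filter with an impulse train
    return False, None           # unvoiced: drive the filter with noise

# Example: a 100 Hz tone sampled at 8 kHz has a pitch period of 80 samples.
fs = 8000
seg = np.sin(2 * np.pi * 100 * np.arange(800) / fs)
voiced, period = estimate_pitch(seg, fs)  # -> (True, 80)
```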
To reduce the discontinuity between segments, do not clear the states of the IIR model from one segment
to the next. Instead, load the new set of reflection coefficients, ki, and continue with the lattice filter
computation.
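This carry-over of state can be sketched with an all-pole lattice filter whose internal delays persist between calls. The structure follows the figure's convention of +ki on the forward path and −ki on the backward path; an illustrative sketch, not the module's code.

```python
import numpy as np

def lattice_synthesize(u, k, state=None):
    """All-pole lattice filter: excitation u -> synthesized output, with
    reflection coefficients k[0..P-1] (= k_1..k_P).

    state[m] holds the delayed backward signal b_m[n-1]; passing the
    returned state into the next call avoids clearing the delays
    between segments."""
    P = len(k)
    if state is None:
        state = np.zeros(P)
    y = np.empty(len(u))
    f = np.empty(P + 1)
    for n, x in enumerate(u):
        f[P] = x
        for m in range(P, 0, -1):    # forward: f_{m-1} = f_m + k_m b_{m-1}[n-1]
            f[m - 1] = f[m] + k[m - 1] * state[m - 1]
        new_state = np.empty(P)
        new_state[0] = f[0]          # b_0[n] = f_0[n] = y[n]
        for m in range(1, P):        # backward: b_m[n] = b_{m-1}[n-1] - k_m f_{m-1}[n]
            new_state[m] = state[m - 1] - k[m - 1] * f[m - 1]
        state = new_state
        y[n] = f[0]
    return y, state

# Example: with k = (0.9, 0) the filter reduces to y[n] = 0.9 y[n-1] + u[n],
# so an impulse input produces the geometric impulse response.
y, st = lattice_synthesize(np.array([1.0, 0.0, 0.0]), [0.9, 0.0])  # y -> [1, 0.9, 0.81]
```

Because the state is returned and fed back in, splitting the excitation across two calls gives the same output as one call, which is exactly the segment-to-segment continuity the text asks for.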
2 Additional Issues
• Spanish vowels (mop, ace, easy, go, but) are easier to recognize using LPC.
• Error can be computed as a^T R a, where R is the autocovariance or autocorrelation matrix of a test
segment and a is the vector of prediction coefficients of a template segment.
• A pre-emphasis filter before LPC, emphasizing frequencies of interest in the recognition or synthesis,
can improve performance.
• The pitch frequency for males (80-150 Hz) is different from the pitch frequency for females.
• For voiced segments, rss[T] / rss[0] ≈ 0.25, where T is the pitch period.
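The pre-emphasis bullet above is typically realized as a first-order FIR highpass; a minimal sketch (the coefficient 0.95 is a common choice, not a value given in the text):

```python
import numpy as np

def pre_emphasize(s, alpha=0.95):
    """Pre-emphasis filter y[n] = s[n] - alpha * s[n-1], boosting high
    frequencies before LPC analysis (alpha is an assumed value)."""
    s = np.asarray(s, dtype=float)
    return np.concatenate(([s[0]], s[1:] - alpha * s[:-1]))
```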
References
[1] J. G. Proakis and D. G. Manolakis. Digital Signal Processing: Principles, Algorithms, and Applications.
Prentice-Hall, Upper Saddle River, NJ, 1996.
[2] L. Rabiner and B. H. Juang. Fundamentals of Speech Recognition. Prentice-Hall, Englewood Cliffs, NJ,
1993.