The cover illustration captures an experiment first described by Isaac Newton in
Opticks in 1730, showing that white light can be split into its color components and
then synthesized back into white light. It is a physical implementation of a decom-
position of white light into its Fourier components – the colors of the rainbow –
followed by a synthesis to recover the original.
Foundations of Signal Processing
This comprehensive and engaging textbook introduces the basic principles and tech-
niques of signal processing, from the fundamental ideas of signals and systems theory
to real-world applications.
• Introduces students to the powerful foundations of modern signal processing,
including the basic geometry of Hilbert space, the mathematics of Fourier transforms,
and essentials of sampling, interpolation, approximation, and compression.
• Discusses issues in real-world use of these tools such as effects of truncation and
quantization, limitations on localization, and computational costs.
• Includes over 160 homework problems and over 220 worked examples, specifically
designed to test and expand students’ understanding of the fundamentals of signal
processing.
• Accompanied by extensive online materials designed to aid learning, including
Mathematica resources and interactive demonstrations.
JELENA KOVAČEVIĆ is the David Edward Schramm Professor and Head of Electrical and
Computer Engineering, and a Professor of Biomedical Engineering, at Carnegie Mellon
University. She has been awarded the Belgrade October Prize (1986), the E. I. Jury
Award (1991) from Columbia University, and the 2010 Philip L. Dowd Fellowship at
Carnegie Mellon University. She is a former Editor-in-Chief of IEEE Transactions on
Image Processing and a Fellow of the IEEE and EURASIP.
“Finally a wonderful and accessible book for teaching modern signal processing to undergraduate
students.”
Stéphane Mallat, École Normale Supérieure
“This is a major book about a serious subject – the combination of engineering and mathematics
that goes into modern signal processing: discrete time, continuous time, sampling, filtering, and
compression. The theory is beautiful and the applications are so important and widespread.”
Gil Strang, Massachusetts Institute of Technology
“This book (FSP) and its companion (FWSP) bring a refreshing new and comprehensive approach
to teaching the fundamentals of signal processing, from analysis and decompositions, to multi-
scale representations, approximations, and many other aspects that have a tremendous impact
in modern information technology. Whereas classical texts were usually written for students in
electrical or communication engineering programs, FSP and FWSP start from basic concepts in
algebra and geometry, with the benefit of being easily accessible to a much broader set of read-
ers, and also help those readers develop strong abstract reasoning and intuition about signals and
processing operators. A must-read!”
Rico Malvar, Microsoft Research
“This is a wonderful book that connects together all the elements of modern signal processing.
From functional analysis and probability theory, to linear algebra and computational methods,
it’s all here and seamlessly integrated, along with a summary of history and developments in the
field. A real tour-de-force, and a must-have on every signal processor’s shelf!”
Robert D. Nowak, University of Wisconsin–Madison
“Most introductory signal processing textbooks focus on classical transforms, and study how
these can be used. Instead, Foundations of Signal Processing encourages readers to think of sig-
nals first. It develops a ’signal-centric’ view, one that focuses on signals, their representation and
approximation, through the introduction of signal spaces. Unlike most entry-level signal process-
ing texts, this general view, which can be applied to many different signal classes, is introduced
right at the beginning. From this, starting from basic concepts, and placing an emphasis on intu-
ition, this book develops mathematical tools that give the readers a fresh perspective on classical
results, while providing them with the tools to understand many state-of-the-art signal represen-
tation techniques.”
Antonio Ortega, University of Southern California
“Foundations of Signal Processing by Vetterli, Kovačević, and Goyal is a pleasure to read. Draw-
ing on the authors’ rich experience of research and teaching of signal processing and signal rep-
resentations, it provides an intellectually cohesive and modern view of the subject from the geo-
metric point of view of vector spaces. Emphasizing Hilbert spaces, where fine technicalities can
be relegated to backstage, this textbook strikes an excellent balance between intuition and math-
ematical rigor, that will appeal to both undergraduate and graduate engineering students. The last
two chapters, on sampling and interpolation, and on localization and uncertainty, take full advan-
tage of the machinery developed in the previous chapters to present these two very important
topics of modern signal processing, that previously were only found in specialized monographs.
The explanations of advanced topics are exceptionally lucid, exposing the reader to the ideas and
thought processes behind the results and their derivation. Students will learn not only a substantial
body of knowledge and techniques, but also why things work, at a deep level, which will equip
them for independent further reading and research. I look forward to using this text in my own
teaching.”
Yoram Bresler, University of Illinois at Urbana-Champaign
Foundations of Signal Processing
University Printing House, Cambridge CB2 8BS, United Kingdom
www.cambridge.org
Information on this title: www.cambridge.org/9781107038608
© M. Vetterli, J. Kovačević & V. K. Goyal 2014
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.
First published 2014
Printed and bound in the United Kingdom by TJ International Ltd, Padstow, Cornwall
A catalogue record for this publication is available from the British Library
ISBN 978-1-107-03860-8 Hardback
Additional resources for this publication at www.cambridge.org/vetterli
and www.fourierandwavelets.org
Cambridge University Press has no responsibility for the persistence or accuracy of
URLs for external or third-party internet websites referred to in this publication,
and does not guarantee that any content on such websites is, or will remain,
accurate or appropriate.
To Marie-Laure, for her ∞ patience and many other qualities,
Thomas and Noémie, whom I might still convince of the beauty of this material,
and my parents, who gave me all the opportunities one can wish for.
— MV
Quick reference
Abbreviations
AR Autoregressive
ARMA Autoregressive moving average
AWGN Additive white Gaussian noise
BIBO Bounded input, bounded output
CDF Cumulative distribution function
DCT Discrete cosine transform
DFT Discrete Fourier transform
DTFT Discrete-time Fourier transform
DWT Discrete wavelet transform
FFT Fast Fourier transform
FIR Finite impulse response
i.i.d. Independent and identically distributed
IIR Infinite impulse response
KLT Karhunen–Loève transform
LMMSE Linear minimum mean-squared error
LPSV Linear periodically shift-varying
LSI Linear shift-invariant
MA Moving average
MAP Maximum a posteriori probability
ML Maximum likelihood
MMSE Minimum mean-squared error
MSE Mean-squared error
PDF Probability density function
PMF Probability mass function
POCS Projection onto convex sets
rad Radians
ROC Region of convergence
SNR Signal-to-noise ratio
SVD Singular value decomposition
WSCS Wide-sense cyclostationary
WSS Wide-sense stationary
Sets
natural numbers N {0, 1, . . .}
integers Z {. . . , −1, 0, 1, . . .}
positive integers Z+ {1, 2, . . .}
rational numbers Q {p/q : p, q ∈ Z, q ≠ 0}
real numbers R (−∞, ∞)
positive real numbers R+ (0, ∞)
complex numbers C {a + jb or r e^{jθ} with a, b, r, θ ∈ R}
a generic index set I
a generic vector space V
a generic Hilbert space H
closure of a set S S̄
Asymptotic notation
big O x ∈ O(y) 0 ≤ xn ≤ γ yn for all n ≥ n0; some n0 and some γ > 0
little o x ∈ o(y) 0 ≤ xn ≤ γ yn for all n ≥ n0; some n0, for every γ > 0
Omega x ∈ Ω(y) xn ≥ γ yn for all n ≥ n0; some n0 and some γ > 0
Theta x ∈ Θ(y) x ∈ O(y) and x ∈ Ω(y)
asymptotic equivalence x ∼ y lim_{n→∞} xn/yn = 1
Convolution

Discrete time:
linear (h ∗ x)n = Σ_{k∈Z} xk h_{n−k}
circular (N-periodic sequences) (h ⊛ x)n = Σ_{k=0}^{N−1} xk h_{(n−k) mod N}
(h ∗ x)n denotes the convolution result at n.

eigenvector vn:
infinite time vn = e^{jωn}, h ∗ v = H(e^{jω}) v
finite time vn = e^{j2πkn/N}, h ⊛ v = Hk v

eigenvalue corresponding to vn:
infinite time H(e^{jω}) = Σ_{n∈Z} hn e^{−jωn}
finite time Hk = Σ_{n=0}^{N−1} hn e^{−j2πkn/N} = Σ_{n=0}^{N−1} hn WN^{kn}

Continuous time:
linear (h ∗ x)(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ
circular (T-periodic functions) (h ⊛ x)(t) = ∫_{−T/2}^{T/2} x(τ) h(t − τ) dτ
(h ∗ x)(t) denotes the convolution result at t.

eigenfunction v(t):
infinite time v(t) = e^{jωt}, h ∗ v = H(ω) v
finite time v(t) = e^{j2πkt/T}, h ⊛ v = Hk v

eigenvalue corresponding to v(t):
infinite time H(ω) = ∫_{−∞}^{∞} h(t) e^{−jωt} dt
finite time Hk = ∫_{−T/2}^{T/2} h(τ) e^{−j2πkτ/T} dτ
Spectral analysis

Fourier transform x(t) ←FT→ X(ω) X(ω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt
inverse x(t) = (1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω

Fourier series coefficients x(t) ←FS→ Xk Xk = (1/T) ∫_{−T/2}^{T/2} x(t) e^{−j(2π/T)kt} dt
reconstruction x(t) = Σ_{k∈Z} Xk e^{j(2π/T)kt}

discrete-time Fourier transform xn ←DTFT→ X(e^{jω}) X(e^{jω}) = Σ_{n∈Z} xn e^{−jωn}
inverse xn = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω

discrete Fourier transform xn ←DFT→ Xk Xk = Σ_{n=0}^{N−1} xn WN^{kn}
inverse xn = (1/N) Σ_{k=0}^{N−1} Xk WN^{−kn}

z-transform xn ←ZT→ X(z) X(z) = Σ_{n∈Z} xn z^{−n}

where WN = e^{−j2π/N}.
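As a quick numerical sanity check (ours, not the book's), the DFT pair above can be verified directly; this sketch assumes NumPy and the convention WN = e^{−j2π/N} used in the table, which also matches NumPy's FFT convention:

```python
import numpy as np

# DFT pair from the quick reference, with W_N = e^{-j 2*pi/N}:
#   X_k = sum_{n=0}^{N-1} x_n W_N^{k n}
#   x_n = (1/N) sum_{k=0}^{N-1} X_k W_N^{-k n}
N = 8
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # matrix with entries W_N^{k n}

rng = np.random.default_rng(0)
x = rng.standard_normal(N)

X = W @ x                   # analysis (forward DFT)
x_rec = W.conj() @ X / N    # synthesis (inverse DFT) recovers x
```

With this sign convention, `X` agrees with `np.fft.fft(x)`, and the synthesis formula recovers `x` exactly up to rounding.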
Acknowledgments
This book would not exist without the help of many people, whom we attempt to
list below. We apologize for any omissions and welcome corrections and suggestions.
We are grateful to Professor Libero Zuppiroli of EPFL and Christiane Grimm
for the photograph that graces the cover; Professor Zuppiroli proposed an ex-
periment from Newton’s treatise, Opticks [70], as emblematic of the book, and
Ms. Grimm beautifully photographed the apparatus that he designed. Françoise
Behn, Jocelyne Plantefol, and Jacqueline Aeberhard typed parts of the manuscript,
Eric Strattman assisted with some of the figures, Krista Van Guilder designed and
implemented the initial book web site, and Jorge Albaladejo Pomares designed
and implemented the book blog. We thank them for their diligence and patience.
S. Grace Chang and Yann Barbotin helped organize and edit the problem compan-
ion, while Patrick Vandewalle designed and implemented a number of MATLAB® problems available on the book website.¹ We thank them for their expertise and insight. We are indebted to George Beck for his Mathematica® editorial comments on Wolfram Demonstrations inspired by this book. We are grateful to Giovanni
Pacifici for the many useful scripts for automating various book-writing processes.
We thank John Cozzens of the US National Science Foundation for his unwavering
support over the last two decades. At Cambridge University Press, Steven Holt
held us to high standards in our writing and typesetting through many corrections
and suggestions; Elizabeth Horne and Jonathan Ratcliffe coordinated production
expertly; and, last but not least, Phil Meyler repeatedly (and most gracefully) re-
minded us of our desire to finish the book.
Many instructors have gamely tested pre-alpha, alpha, and beta versions of
this manuscript. Of these, Amina Chebira, Filipe Condessa, Mihailo Kolundžija,
Yue M. Lu, Truong-Thao Nguyen, Reza Parhizkar, and Jayakrishnan Unnikrishnan
have done far more than their share in providing invaluable comments and sugges-
tions. We also thank Robert Gray for reviewing the manuscript and providing many
important suggestions, in particular a better approach to covering the Dirac delta
function; Thierry Blu for, among other things, providing a simple proof for a partic-
ular case of the Strang–Fix theorem; Matthew Fickus for consulting on some finer
mathematical points; Michael Unser for his notes and teaching approach; and Zoran
Cvetković, Minh N. Do, and Philip Schniter for teaching with the manuscript and
providing many constructive comments. Useful comments have also been provided
¹ http://www.fourierandwavelets.org
Preface
Our main goals in this book and its companion volume, Fourier and Wavelet Signal
Processing (FWSP) [57], are to enable an understanding of state-of-the-art signal
processing methods and techniques, as well as to provide a solid foundation for those
hoping to advance the theory and practice of signal processing. We believe that the
best way to grasp and internalize the fundamental concepts in signal processing is
through the geometry of Hilbert spaces, as this leverages the great innate human
capacity for spatial reasoning. While using geometry should ultimately simplify the
subject, the connection between signals and geometry is not innate. The reader will
have to invest effort to see signals as vectors in Hilbert spaces before reaping the
benefits of this view; we believe that effort to be well placed.
Many of the results and techniques presented in the two volumes, while rooted
in classic Fourier techniques for signal representation, first appeared during a flurry
of activity in the 1980s and 1990s. New constructions of local Fourier transforms and
orthonormal wavelet bases during that period were motivated both by theoretical
interest and by applications, multimedia communications in particular. New bases
with specified time–frequency behavior were found, with impact well beyond the
original fields of application. Areas as diverse as computer graphics and numerical
analysis embraced some of the new constructions – no surprise given the pervasive
role of Fourier analysis in science and engineering.
Many of these new tools for signal processing were developed in the applied
harmonic analysis community. The resulting high level of mathematical sophistica-
tion was a barrier to entry for many signal processing practitioners. Now that the
dust has settled, some of what was new and esoteric has become fundamental; we
want to bring these new fundamentals to a broader audience. The Hilbert space
formalism gives us a way to begin with the classical Fourier analysis of signals and
systems and reach structured representations with time–frequency locality and their
varied applications. Whenever possible, we use explanations rooted in elementary
analysis over those that would require more advanced background (such as measure
theory). We hope to have balanced the competing virtues of accessibility to the
student, rigor, and adequate analytical power to reach important conclusions.
The book can be used as a self-contained text on the foundations of signal
processing, where discrete and continuous time are treated on equal footing. All
the necessary mathematical background is included, with examples illustrating the
applicability of the results. In addition, the book serves as a precursor to FWSP,
which relies on the framework built here; the two books are thus integrally related.
Foundations of Signal Processing This book covers the foundations for an in-
depth understanding of modern signal processing. It contains material that many
readers may have seen before scattered across multiple sources, but without the
Hilbert space interpretations, which are essential in signal processing. Our aim is
to teach signal processing with geometry, that is, to extend Euclidean geometric in-
sights to abstract signals; we use Hilbert space geometry to accomplish that. With
this approach, fundamental concepts – such as properties of bases, Fourier rep-
resentations, sampling, interpolation, approximation, and compression – are often
unified across finite dimensions, discrete time, and continuous time, thus making
it easier to point out the few essential differences. Unifying results geometrically
helps generalize beyond Fourier-domain insights, pushing the understanding farther,
faster.
Chapter 2, From Euclid to Hilbert, is our main vehicle for drawing out unifying
commonalities; it develops the basic geometric intuition central to Hilbert spaces,
together with the necessary tools underlying the constructions of bases and frames.
The next two chapters cover signal processing on discrete-time and continuous-
time signals, specializing general concepts from Chapter 2. Chapter 3, Sequences
and discrete-time systems, is a crash course on processing signals in discrete time or
discrete space together with spectral analysis with the discrete-time Fourier trans-
form and discrete Fourier transform. Chapter 4, Functions and continuous-time
systems, is its continuous-time counterpart, including spectral analysis with the
Fourier transform and Fourier series.
Chapter 5, Sampling and interpolation, presents the critical link between dis-
crete and continuous domains given by sampling and interpolation theorems. Chap-
ter 6, Approximation and compression, veers from exact representations to approx-
imate ones. The final chapter in the book, Chapter 7, Localization and uncertainty,
considers time–frequency behavior of the abstract representation objects studied
thus far. It also discusses issues arising in applications as well as ways of adapting
the previously introduced tools for use in the real world.
Fourier and Wavelet Signal Processing The companion volume focuses on signal
representations using local Fourier and wavelet bases and frames. It covers the
two-channel filter bank in detail, and then uses it as the implementation vehicle
for all sequence representations that follow. The local Fourier and wavelet methods
are presented side-by-side, without favoring any one in particular; the truth is that
each representation is a tool in the toolbox of the practitioner, and the problem or
application at hand ultimately determines the appropriate one to use. We end with
examples of state-of-the-art signal processing and communication problems, with
sparsity as a guiding principle.
The material grew out of teaching signal processing, wavelets, and applications
in various settings. Two of us (MV and JK) authored a graduate textbook, Wavelets
and Subband Coding (originally with Prentice Hall in 1995, now open access),
which we and others used to teach graduate courses at various US and European
institutions. With the maturing of the field and the interest arising from and for
these topics, the time was right for the three of us to write entirely new texts geared
toward a broader audience. We and others have taught with these books, in their
entirety or in parts, a number of times and to a number of different audiences: from
senior undergraduate to graduate level, and from engineering to mixes that include
life-science students.
The books and their website provide a number of features for teaching and
learning:
Exercises are an integral part of the material and come in two forms: solved
exercises with explicit solutions within the text, and regular exercises that
allow students to test their knowledge. Regular exercises are marked with one of three symbols indicating increasing levels of difficulty.
Notational points To traverse the book efficiently, it will help to know the various
numbering conventions that we have employed. In each chapter, a single counter is
used for definitions, theorems, and corollaries (which are all shaded) and another
for examples (which are slightly indented). Equations, figures, and tables are also
numbered within each chapter. A prefix E in the number of an equation, figure, or table indicates that it is part of a solved exercise in that chapter. The letter P is used similarly for statements of regular exercises.
On rainbows and spectra
Orthonormal bases When the basis vectors form an orthonormal set, the coefficients Xk are obtained from the function x and the basis vectors ϕk through an inner product,

Xk = ⟨x, ϕk⟩. (1.2)
For example, Fourier’s construction of a series representation for periodic functions
with period T = 1 can be written as
x(t) = Σ_{k∈Z} Xk e^{j2πkt}, (1.3a)

where

Xk = ∫_0^1 x(t) e^{−j2πkt} dt. (1.3b)
Figure 1.1 Fourier series basis functions for the interval [0, 1): (a) ϕ0(t) = 1; (b) ϕ1(t) = e^{j2πt}; (c) ϕ2(t) = e^{j4πt}. Real parts are shown with solid lines and imaginary parts with dashed lines.
exactly the same as (1.3b). The basis vectors form an orthonormal set (the first
few are shown in Figure 1.1):
⟨ϕk, ϕi⟩ = ∫_0^1 e^{j2πkt} e^{−j2πit} dt = 1 for i = k, and 0 otherwise. (1.5)
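A small numerical check of (1.5), not from the text: approximate the inner product by an average over a uniform grid on [0, 1) (the grid size 4096 is an arbitrary choice; the sum is exact for integer k and i with |k − i| smaller than the grid size, since the sampled exponentials are roots of unity):

```python
import numpy as np

def fourier_inner(k, i, num=4096):
    # Approximate <phi_k, phi_i> = integral_0^1 e^{j2*pi*k*t} e^{-j2*pi*i*t} dt
    # by a Riemann sum over `num` uniform samples of [0, 1).
    t = np.arange(num) / num
    return np.mean(np.exp(2j * np.pi * (k - i) * t))
```

For k = i the result is 1; for distinct integers the sampled complex exponentials cancel and the result is 0 (up to rounding), confirming orthonormality.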
While the Fourier series is certainly a key orthonormal basis with many out-
standing properties, other bases exist, some of which have their own favorable prop-
erties. Early in the twentieth century, Alfred Haar proposed a basis which looks
quite different from Fourier’s. It is based on a function ψ(t) defined as
ψ(t) = { 1, for 0 ≤ t < 1/2; −1, for 1/2 ≤ t < 1; 0, otherwise. (1.6)
For the interval [0, 1), we can build an orthonormal system by scaling ψ(t) by powers
of 2, and then shifting the scaled versions appropriately, yielding
ψm,n(t) = 2^{−m/2} ψ((t − n 2^m) / 2^m), (1.7)

with m ∈ {0, −1, −2, . . .} and n ∈ {0, 1, . . . , 2^{−m} − 1} (a few are shown in Figure 1.2). It is quite clear from the figure that the various basis functions are indeed
orthogonal to each other, as they either do not overlap, or when they do, one changes
sign over the constant span of the other. We will spend a considerable amount of
time studying this system in the companion volume to this book, [57].
Figure 1.2 Haar series basis functions for the interval [0, 1). The prototype function is ψ(t) = ψ0,0(t).
While the system (1.7) is orthonormal, it cannot be a basis for all functions on
[0, 1); for example, there would be no way to reconstruct a constant 1. We remedy
that by adding the function
ϕ0(t) = { 1, for 0 ≤ t < 1; 0, otherwise, (1.8)
into the mix, yielding an orthonormal basis for the interval [0, 1). This is a very
different basis from the Fourier one; for example, instead of being infinitely differ-
entiable, no ψ m,n is even continuous. We can now define an expansion as in (1.3),
0
2−1
x(t) = x, ϕ 0ϕ0 (t) + Xm,n ψm,n(t), (1.9a)
m=−∞ n=0
where 1
Xm,n = x(t) ψm,n(t) dt. (1.9b)
0
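The Haar functions (1.6)–(1.7) are easy to tabulate numerically. This sketch (ours, not the book's) checks a few of the orthonormality relations on a dyadic grid, where the Riemann sums for the inner products are exact:

```python
import numpy as np

def haar_mother(t):
    # psi(t) from (1.6): +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere
    return np.where((t >= 0) & (t < 0.5), 1.0,
                    np.where((t >= 0.5) & (t < 1.0), -1.0, 0.0))

def haar(m, n, t):
    # psi_{m,n}(t) = 2^{-m/2} psi((t - n 2^m) / 2^m) from (1.7)
    return 2.0 ** (-m / 2) * haar_mother((t - n * 2.0 ** m) / 2.0 ** m)

t = np.arange(2 ** 12) / 2 ** 12       # dyadic grid on [0, 1)
dt = 1.0 / 2 ** 12
ip = lambda f, g: np.sum(f * g) * dt   # inner product as a Riemann sum
```

On this grid, functions at the same scale with different shifts have disjoint supports, and across scales one function changes sign over the constant span of the other, so all cross inner products vanish while each function has unit norm.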
It is natural to ask which basis is better. Such a question does not have a
simple answer, and the answer will depend on the class of functions or sequences we
wish to represent, as well as our goals in the representation. Furthermore, we will
have to be careful in describing what we mean by equality in an expansion such as
(1.3a); otherwise we could be misled the same way Fourier was.
Approximation One way to assess the quality of a basis is to see how well it can
approximate a given function with a finite number of terms. History is again en-
lightening. Fourier series became such a useful tool during the nineteenth century
that researchers built elaborate mechanical devices to compute a function based
on Fourier series coefficients. They built analog computers, based on harmonically
related rotating wheels, where amplitudes of Fourier coefficients could be set and
the sum computed. One such machine, the Harmonic Integrator, was designed by
the physicists Albert Michelson and Samuel Stratton, and it could compute a series
with 80 terms. To the designers’ dismay, the synthesis of a square wave from its
Fourier series led to oscillations around the discontinuity that would not go away
Figure 1.3 Approximation of a box function with a Fourier series basis (1.3a). (a) Series with 9, 65, and 513 terms. (b) Detail of (a).

Figure 1.4 Approximation of a box function (dashed lines) with a Haar basis using the first 8 (m = 0, −1, −2), 64 (m = 0, −1, . . . , −5), and 512 (m = 0, −1, . . . , −8) terms (solid lines, from lightest to darkest), with n = 0, 1, . . . , 2^{−m} − 1. The discontinuity is at the irrational point 1/√2. (a) Series with 8, 64, and 512 terms. (b) Detail of (a).
even as they increased the number of terms; they concluded that a mechanical prob-
lem was at fault. Not until 1899, when Josiah Gibbs proved that Fourier series of
discontinuous functions cannot converge uniformly, was this myth dispelled. The
phenomenon was termed the Gibbs phenomenon, referring to the oscillations ap-
pearing around the discontinuity when using any finite number of terms. Figure 1.3
shows approximations of a box function with a Fourier series basis (1.3a) using Xk ,
k = −K, −K + 1, . . . , K − 1, K .
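The oscillation is easy to reproduce numerically. The following sketch (ours, not the book's) builds the partial Fourier series of the indicator of [0, 1/2) on [0, 1) from the closed-form coefficients of (1.3b); the overshoot of roughly 9% of the jump persists as K grows:

```python
import numpy as np

def fourier_box_approx(t, K, a=0.0, b=0.5):
    # Partial Fourier series sum_{k=-K}^{K} X_k e^{j2*pi*k*t} for the
    # indicator of [a, b), with X_k = integral_a^b e^{-j2*pi*k*t} dt.
    x = np.zeros_like(t, dtype=complex)
    for k in range(-K, K + 1):
        if k == 0:
            Xk = b - a
        else:
            Xk = (np.exp(-2j * np.pi * k * a)
                  - np.exp(-2j * np.pi * k * b)) / (2j * np.pi * k)
        x += Xk * np.exp(2j * np.pi * k * t)
    return x.real

t = np.linspace(0.0, 1.0, 10001)
overshoot = [fourier_box_approx(t, K).max() - 1.0 for K in (32, 128, 512)]
```

Away from the discontinuities the partial sums do converge to the box (the value at t = 0.25 is very close to 1 for large K), but the maximum overshoot near the jump does not shrink with K, which is exactly the Gibbs phenomenon.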
So what would the Haar basis provide in this case? Surely, it seems more appropriate for a box function. Unfortunately, taking the first 2^{−m} terms in the natural ordering (the term corresponding to the function ϕ0(t) plus the 2^{−m} terms corresponding to each scale m = 0, −1, −2, . . .) leads to a similarly poor performance, shown in Figure 1.4. This poor performance is dependent on the position of the discontinuity; approximating a box function with a discontinuity at an integer multiple of 2^{−k} for some k ∈ Z would lead to a much better performance.
Figure 1.5 Approximation of a box function (dashed lines) with a Haar basis using the 8 (light) and 15 (dark) largest-magnitude terms. The 15-term approximation is visually indistinguishable from the target function.

However, changing the approximation procedure slightly makes a big difference. Upon retaining the largest coefficients in absolute value instead of simply
keeping a fixed set of terms, the approximation quality changes drastically, as seen
in Figure 1.5. In this admittedly extreme example, for each m, there is only one n
such that X m,n is nonzero (that for which the corresponding Haar wavelet straddles
the discontinuity). Thus, approximating using coefficients largest in absolute value
allows many more values of m to be included.
Through this comparison, we have illustrated how the quality of a basis for
approximation can depend on the method of approximation. Retaining a predefined
set of terms, as in the Fourier example case (Figure 1.3) or the first Haar example
(Figure 1.4) is called linear approximation. Retaining an adaptive set of terms in-
stead, as in the second Haar example (Figure 1.5), is called nonlinear approximation
and leads to a superior approximation quality.
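Here is a sketch of the linear-versus-nonlinear comparison (ours, with arbitrary grid and depth choices): approximate the indicator of [0, 1/√2) in the Haar system, keeping either the first N terms in the natural ordering or the N largest-magnitude coefficients:

```python
import numpy as np

t = np.arange(2 ** 12) / 2 ** 12        # dyadic grid on [0, 1)
dt = 1.0 / 2 ** 12
x = (t < 1 / np.sqrt(2)).astype(float)  # box with an irrational discontinuity

def psi(u):  # Haar prototype function (1.6)
    return np.where((u >= 0) & (u < 0.5), 1.0,
                    np.where((u >= 0.5) & (u < 1.0), -1.0, 0.0))

# Basis in the natural ordering: phi_0, then scales m = 0, -1, ..., -8.
basis = [np.ones_like(t)]
for m in range(0, -9, -1):
    for n in range(2 ** (-m)):
        basis.append(2.0 ** (-m / 2) * psi((t - n * 2.0 ** m) / 2.0 ** m))
basis = np.array(basis)
coeffs = basis @ x * dt                 # expansion coefficients as in (1.9b)

def mse(keep):
    # Mean-squared error when only the terms indexed by `keep` are retained.
    return np.mean((x - coeffs[keep] @ basis[keep]) ** 2)

N = 16
err_linear = mse(np.arange(N))                        # first N terms
err_nonlinear = mse(np.argsort(-np.abs(coeffs))[:N])  # N largest terms
```

Only one coefficient per scale is nonzero (the one whose wavelet straddles the jump), so the N largest-magnitude terms reach far finer scales than the first N terms in the natural ordering, and the nonlinear error is much smaller.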
Overview of the book The purpose of this book is to develop the framework for
the methods just described, namely expansions and approximations, as well as to
show practical examples where these methods are used in engineering and applied
sciences. In particular, we will see that expansions and approximations are closely
related to the essential signal processing tasks of sampling, filtering, estimation, and
compression.
Chapter 2, From Euclid to Hilbert, introduces the basic machinery of
Hilbert spaces. These are vector spaces endowed with operations that induce intu-
itive geometric properties. In this general setting, we develop the notion of signal
representations, which are essentially coordinate systems for the vector space. When
a representation is complete and not redundant, it provides a basis for the space;
when it is complete and redundant, it provides a frame for the space. A key virtue
for a basis is orthonormality; its counterpart for a frame is tightness.
Chapters 3 and 4 focus our attention on sequence and function spaces for
which the domain can be associated with time, leading to an inherent ordering not
necessarily present in a general Hilbert space. In Chapter 3, Sequences and discrete-time systems, a vector is a sequence that depends on discrete time, and
an important class of linear operators on these vectors is those that are invariant
to time shifts; these are convolution operators. These operators lead naturally to
signal representations using the discrete-time Fourier transform and, for circularly
extended finite-length sequences, the discrete Fourier transform.
Chapter 4, Functions and continuous-time systems, parallels Chapter 3; a vector is now a function that depends on continuous time, and an important
class of linear operators on these vectors are again those that are invariant to time
shifts; these are convolution operators. These operators lead naturally to signal rep-
resentations using the Fourier transform and, for circularly extended finite-length
functions, or periodic functions, the Fourier series. The four Fourier representations
from these two chapters exemplify the diagonalization of linear, shift-invariant op-
erators, or convolutions, in the various domains.
Chapter 5, Sampling and interpolation, makes fundamental connections between Chapters 3 and 4. Associating a discrete-time sequence with a
given continuous-time function is sampling, and the converse is interpolation; these
are central concepts in signal processing since digital computations on continuous-
domain phenomena must be performed in a discrete domain.
Chapter 6, Approximation and compression, introduces many types of
approximations that are central to making computationally practical tools. Ap-
proximation by polynomials and by truncations of series expansions are studied,
along with the basic principles of compression.
Chapter 7, Localization and uncertainty, introduces time, frequency,
scale, and resolution properties of individual vectors; these properties build our
intuition for what might or might not be captured by a single representation coeffi-
cient. We then study these properties for sets of vectors used to represent signals. In
particular, time and frequency localization lead to the concept of a time–frequency
plane, where essential differences between Fourier techniques and wavelet techniques
become evident: Fourier techniques use vectors with equal spacing in frequency
while wavelet techniques use vectors with power-law spacing in frequency; further-
more, Fourier techniques use vectors at equal scale while wavelet techniques use
geometrically spaced scales. We end with examples with real signals to develop
intuition about various signal representations.
Chapter 2 From Euclid to Hilbert

Contents
2.1 Introduction
2.2 Vector spaces
2.3 Hilbert spaces
2.4 Approximations, projections, and decompositions
2.5 Bases and frames
2.6 Computational aspects
2.A Elements of analysis and topology
2.B Elements of linear algebra
2.C Elements of probability
2.D Basis concepts
Chapter at a glance
Historical remarks
Further reading
Exercises with solutions
Exercises
We start our journey into signal processing with different backgrounds and perspec-
tives. This chapter aims to establish a common language, develop the foundations
for our study, and begin to draw out key themes.
There will be more formal definitions in this chapter than in any other, to
approach the ideal of a self-contained treatment. However, we must assume some
background in common: On the one hand, we expect the reader to be familiar with
linear algebra at the level of [93, Ch. 1–5] (see also Appendix 2.B) and probability
at the level of [6, Ch. 1–4] (see also Appendix 2.C). (The textbooks we have
cited are just examples; nothing unique to those books is necessary.) On the other
hand, we are not assuming prior knowledge of general vector space abstractions
or mathematical analysis beyond basic calculus; we develop these topics here to
extend geometric intuition from ordinary Euclidean space to spaces of sequences
and functions. For more details on abstract vector spaces, we recommend books by
Kreyszig [59], Luenberger [64], and Young [111].
2.1 Introduction
This section introduces many topics of the chapter through the familiar setting of
the real plane. In the more general treatment of subsequent sections, the intuition
we have developed through years of dealing with the Euclidean spaces around us
(ℝ² and ℝ³) will generalize to some not-so-familiar spaces. Readers comfortable
with vector spaces, inner products, norms, projections, and bases may skip this
section; otherwise, this will be a gentle introduction to Euclid’s world.
For vectors x = [x₀ x₁]ᵀ and y = [y₀ y₁]ᵀ in ℝ², the inner product is

⟨x, y⟩ = x₀y₀ + x₁y₁.    (2.1)
Other names for the inner product are scalar product and dot product. The inner
product of a vector with itself is simply the squared norm,

⟨x, x⟩ = x₀² + x₁² = ‖x‖².    (2.2)

While the norm is sometimes called the length, we avoid this usage because length
can also refer to the number of components in a vector. A vector of norm 1 is called
a unit vector.
In (2.1), the inner product computation depends on the choice of coordinate
axes. Let us now derive an expression in which the coordinates disappear. Consider
[Figure 2.1: Vectors x and y in ℝ², with angles θx and θy measured counterclockwise from the positive horizontal axis, and the difference vector x − y.]
x and y as shown in Figure 2.1. Define the angle between x and the positive
horizontal axis as θx (measured counterclockwise), and define θy similarly. Using a
little algebra and trigonometry, we get
⟨x, y⟩ = x₀y₀ + x₁y₁
       = (‖x‖ cos θx)(‖y‖ cos θy) + (‖x‖ sin θx)(‖y‖ sin θy)
       = ‖x‖‖y‖ (cos θx cos θy + sin θx sin θy)
       = ‖x‖‖y‖ cos(θx − θy).    (2.3)
Thus, the inner product of the two vectors is the product of their norms and the
cosine of the angle θ = θx − θy between them.
The inner product measures both the norms of the vectors and the similarity
of their orientations. For fixed vector norms, the greater the inner product, the
closer the vectors are in orientation. The orientations are closest when the vectors
are collinear and pointing in the same direction, that is, when θ = 0; they are the
farthest when the vectors are antiparallel, that is, when θ = π. When ⟨x, y⟩ = 0,
the vectors are called orthogonal or perpendicular. From (2.3), we see that ⟨x, y⟩
is zero only when the norm of one vector is zero (meaning that one of the vectors
is the zero vector [0 0]ᵀ) or the cosine of the angle between them is zero (θ = ±π/2).
So, at least in the latter case, this is consistent with the conventional concept of
perpendicularity.
The distance between two vectors is defined as the norm of their difference:
d(x, y) = ‖x − y‖ = √⟨x − y, x − y⟩ = √((x₀ − y₀)² + (x₁ − y₁)²).    (2.4)
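The identities (2.1)–(2.4) are easy to verify numerically. The following is a minimal sketch in Python with NumPy (our choice of tooling; the book itself presents no code), with arbitrary example vectors:

```python
import numpy as np

def inner(x, y):
    """Inner product on R^2, as in (2.1): <x, y> = x0*y0 + x1*y1."""
    return x[0] * y[0] + x[1] * y[1]

x = np.array([3.0, 1.0])
y = np.array([2.0, 2.0])

# Norms via <x, x> = ||x||^2, as in (2.2).
norm_x = np.sqrt(inner(x, x))
norm_y = np.sqrt(inner(y, y))

# Angles from the positive horizontal axis, as in Figure 2.1.
theta_x = np.arctan2(x[1], x[0])
theta_y = np.arctan2(y[1], y[0])

# (2.3): the inner product equals ||x|| ||y|| cos(theta_x - theta_y).
lhs = inner(x, y)
rhs = norm_x * norm_y * np.cos(theta_x - theta_y)
assert np.isclose(lhs, rhs)

# (2.4): the distance is the norm of the difference vector.
d = np.sqrt(inner(x - y, x - y))
assert np.isclose(d, np.hypot(x[0] - y[0], x[1] - y[1]))
```

Note that the check of (2.3) is coordinate-free on the right-hand side: only norms and the angle between the vectors enter.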
[Figure 2.2: (a) Orthogonal projection of x onto the subspace S spanned by ϕ, with θ the angle between ϕ and x. (b) Oblique projection onto S.]
When ϕ is of unit norm, the orthogonal projection of x onto the subspace S spanned by ϕ is

x̂ = ⟨x, ϕ⟩ϕ = (‖x‖‖ϕ‖ cos θ)ϕ =⁽ᵃ⁾ (‖x‖ cos θ)ϕ,    (2.5)

where (a) uses ‖ϕ‖ = 1, and θ is the angle measured counterclockwise from ϕ to x,
as marked in Figure 2.2(a). When ϕ is not of unit norm, the orthogonal projection
onto the subspace specified by ϕ is

x̂ =⁽ᵃ⁾ (‖x‖ cos θ) ϕ/‖ϕ‖ = (‖x‖‖ϕ‖ cos θ) ϕ/‖ϕ‖² =⁽ᵇ⁾ (⟨x, ϕ⟩/‖ϕ‖²) ϕ,    (2.6)

where (a) expresses the orthogonal projection using the unit vector ϕ/‖ϕ‖; and (b)
uses (2.3).
Projection is more general than orthogonal projection; for example, Fig-
ure 2.2(b) illustrates oblique projection. The operator is still linear and vectors
in the subspace are still left unchanged; however, the difference x − x̂ is no longer
orthogonal to S.
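The projection formula (2.6) can be sketched numerically; the vectors below are an arbitrary illustration (note that ϕ is deliberately not of unit norm):

```python
import numpy as np

def project(x, phi):
    """Orthogonal projection of x onto span{phi}, as in (2.6):
    x_hat = (<x, phi> / ||phi||^2) * phi."""
    return (np.dot(x, phi) / np.dot(phi, phi)) * phi

x = np.array([2.0, 1.0])
phi = np.array([3.0, 0.0])        # not a unit vector, on purpose

x_hat = project(x, phi)

# For an orthogonal projection, the residual x - x_hat is orthogonal to phi.
assert np.isclose(np.dot(x - x_hat, phi), 0.0)

# Vectors already in the subspace are left unchanged (idempotence).
assert np.allclose(project(x_hat, phi), x_hat)
```

The idempotence check reflects the defining property of any projection operator; the orthogonality of the residual is what distinguishes orthogonal projection from the oblique projection of Figure 2.2(b).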
[Figure 2.3: (a) Expansion with an orthonormal basis. (b) Expansion with a nonorthogonal basis. (c) Basis {ϕ₀, ϕ₁} and its dual {ϕ̃₀, ϕ̃₁}.]
Orthonormal bases Vectors e₀ = [1 0]ᵀ and e₁ = [0 1]ᵀ constitute the stan-
dard basis and are depicted in Figure 2.3(a). They are orthogonal and of unit norm,
and are thus called orthonormal. We have been using this basis implicitly in that

x = [x₀ x₁]ᵀ = x₀[1 0]ᵀ + x₁[0 1]ᵀ = x₀e₀ + x₁e₁    (2.7)
is an expansion of x with respect to the basis {e₀, e₁}. For this basis, it is obvious
that an expansion exists for any x because the coefficients of the expansion x₀ and
x₁ are simply the entries of x.
The general condition for {ϕ₀, ϕ₁} to be an orthonormal basis for ℝ² is

⟨ϕᵢ, ϕₖ⟩ = δᵢ₋ₖ for i, k ∈ {0, 1}.

From the i ≠ k case, the basis vectors are orthogonal to each other; from the i = k
case, they are of unit norm. With any orthonormal basis {ϕ₀, ϕ₁}, one can uniquely
find the coefficients of the expansion
x = α₀ϕ₀ + α₁ϕ₁    (2.10)

via αₖ = ⟨x, ϕₖ⟩. These coefficients satisfy

α₀² + α₁² = ‖x‖²    (2.11)

by the Pythagorean theorem, because α₀ and α₁ form the sides of a right triangle
with hypotenuse of length ‖x‖ (see Figure 2.3(a)). The equality (2.11) is an example
of a Parseval equality⁶ and is related to Bessel's inequality; these will be formally
introduced in Section 2.5.2.
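An orthonormal expansion and the Parseval equality (2.11) can be verified numerically. The sketch below uses a rotated standard basis (the rotation angle is an arbitrary choice for illustration):

```python
import numpy as np

# An orthonormal basis for R^2: the standard basis rotated by 30 degrees.
t = np.pi / 6
phi0 = np.array([np.cos(t), np.sin(t)])
phi1 = np.array([-np.sin(t), np.cos(t)])

# Orthonormality: <phi_i, phi_k> = delta_{i-k}.
assert np.isclose(np.dot(phi0, phi1), 0.0)
assert np.isclose(np.dot(phi0, phi0), 1.0)
assert np.isclose(np.dot(phi1, phi1), 1.0)

x = np.array([4.0, -2.0])

# Expansion coefficients for (2.10): alpha_k = <x, phi_k>.
a0 = np.dot(x, phi0)
a1 = np.dot(x, phi1)

# Reconstruction, and the Parseval equality (2.11).
assert np.allclose(a0 * phi0 + a1 * phi1, x)
assert np.isclose(a0**2 + a1**2, np.dot(x, x))
```

The coefficients (a0, a1) are the coordinates of x in the rotated basis, and their squared lengths sum to ‖x‖² exactly as (2.11) states.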
An expansion like (2.10) is often termed a change of basis, since it expresses
x with respect to {ϕ₀, ϕ₁} rather than in the standard basis {e₀, e₁}. In other
words, the coefficients (α₀, α₁) are the coordinates of x in the new basis {ϕ₀, ϕ₁}.
The coefficients are computed with the dual basis vectors ϕ̃₀ and ϕ̃₁,

α₀ = ⟨x, ϕ̃₀⟩ and α₁ = ⟨x, ϕ̃₁⟩,

which are shown in Figure 2.3(c). We have thus just derived an instance of the expansion
formula
x = α₀ϕ₀ + α₁ϕ₁ = ⟨x, ϕ̃₀⟩ϕ₀ + ⟨x, ϕ̃₁⟩ϕ₁,    (2.12)

where {ϕ̃₀, ϕ̃₁} is the basis dual to the basis {ϕ₀, ϕ₁}, and the two bases form a
biorthogonal pair of bases. For any basis, the dual basis is unique. The defining
characteristic for a biorthogonal pair is

⟨ϕ̃ᵢ, ϕₖ⟩ = δᵢ₋ₖ for i, k ∈ {0, 1}.    (2.13)
⁶ What we call the Parseval equality in this book is sometimes called Plancherel's equality.
We can check that this is satisfied in our example and that any orthonormal basis
is its own dual. Clearly, designing a biorthogonal basis pair has more degrees of
freedom than designing an orthonormal basis. The disadvantage is that (2.11) does
not hold, and, furthermore, computations can become numerically unstable if ϕ₀
and ϕ₁ are too close to collinear.
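The dual-basis construction behind (2.12)–(2.13) can be checked numerically. In the sketch below (the particular nonorthogonal basis is an arbitrary illustration), the dual vectors are obtained as the columns of the inverse transpose of the basis matrix, which enforces the biorthogonality conditions:

```python
import numpy as np

# A nonorthogonal basis for R^2, as columns of Phi.
phi0 = np.array([1.0, 0.0])
phi1 = np.array([1.0, 1.0])
Phi = np.column_stack([phi0, phi1])

# Dual basis vectors: columns of the inverse transpose of Phi.
# This construction enforces <phi~_i, phi_k> = delta_{i-k}.
Phi_dual = np.linalg.inv(Phi).T
phi0_d, phi1_d = Phi_dual[:, 0], Phi_dual[:, 1]

# Check the biorthogonality conditions (2.13).
assert np.isclose(np.dot(phi0_d, phi0), 1.0)
assert np.isclose(np.dot(phi0_d, phi1), 0.0)
assert np.isclose(np.dot(phi1_d, phi0), 0.0)
assert np.isclose(np.dot(phi1_d, phi1), 1.0)

# Expansion (2.12): x = <x, phi~_0> phi0 + <x, phi~_1> phi1.
x = np.array([2.0, 5.0])
x_rec = np.dot(x, phi0_d) * phi0 + np.dot(x, phi1_d) * phi1
assert np.allclose(x_rec, x)
```

For an orthonormal basis, Phi is orthogonal and the inverse transpose equals Phi itself, which is exactly the statement that an orthonormal basis is its own dual.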
Frames The signal expansion (2.12) has the minimum possible number of terms
to work for every x ∈ ℝ², namely two terms because the dimension of the space is
two. It can also be useful to have an expansion of the form
x = ⟨x, ϕ̃₀⟩ϕ₀ + ⟨x, ϕ̃₁⟩ϕ₁ + ⟨x, ϕ̃₂⟩ϕ₂.    (2.14)
Here, an expansion will exist as long as the vectors {ϕ₀, ϕ₁, ϕ₂} are not all collinear. Then,
even after the set {ϕ₀, ϕ₁, ϕ₂} has been fixed, there are infinitely many dual sets
{ϕ̃₀, ϕ̃₁, ϕ̃₂} such that (2.14) holds for all x ∈ ℝ². Such redundant sets are called
frames and their (nonunique) dual sets are called dual frames. This flexibility can
be used in various ways. For example, setting a component of ϕ̃ᵢ to zero could save
a multiplication and an addition in computing an expansion; or the dual, which is
not unique, could be chosen to make the coefficients as small as possible.
As an example, let us start with the standard basis {ϕ₀ = e₀, ϕ₁ = e₁} and add
to it a vector ϕ₂ = −e₀ − e₁:

ϕ₀ = [1 0]ᵀ, ϕ₁ = [0 1]ᵀ, ϕ₂ = [−1 −1]ᵀ,    (2.15)
and see what happens (see Figure 2.4(a)). As there are now three vectors in ℝ², they
are linearly dependent; indeed, as defined, ϕ₂ = −ϕ₀ − ϕ₁. Moreover, these three
vectors must be able to represent every vector in ℝ² since each two-element subset
is able to do so. To show that, we use the expansion x = ⟨x, ϕ₀⟩ϕ₀ + ⟨x, ϕ₁⟩ϕ₁ and
add a zero to it to give
[Figure 2.4: (a) The frame {ϕ₀, ϕ₁, ϕ₂} from (2.15). (b) A tight frame of three vectors of norm √(2/3), equally spaced in angle.]

Rearranging yields a new set of three vectors {ϕ₀, ϕ₁, ϕ₂}, shown in Figure 2.4(b). By expanding an arbitrary x = [x₀ x₁]ᵀ, we can verify
that x = Σ²ₖ₌₀ ⟨x, ϕₖ⟩ϕₖ holds for any x. The expansion looks like the orthonormal
basis one, where the same set of vectors plays both roles (inside the inner product
and outside). The norm is preserved similarly to what happens with orthonormal
bases (Σ²ₖ₌₀ |⟨x, ϕₖ⟩|² = ‖x‖²), except that the norms of the frame vectors are not
1, but rather √(2/3). A frame with this property is called a tight frame. We could
have renormalized the frame vectors by √(3/2) to make them unit-norm vectors, in
which case Σ²ₖ₌₀ |⟨x, ϕₖ⟩|² = (3/2)‖x‖², where 3/2 indicates the redundancy of the frame
(we have 3/2 times more vectors than needed for an expansion in ℝ²).
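The tight-frame properties above can be verified numerically. The sketch below builds three vectors of norm √(2/3), equally spaced in angle (the particular orientation is an arbitrary choice), and checks the self-dual expansion, the norm preservation, and the redundancy factor 3/2 for the unit-norm version:

```python
import numpy as np

# Three frame vectors of norm sqrt(2/3), equally spaced in angle;
# the 90-degree offset fixes an arbitrary orientation.
angles = np.pi / 2 + 2 * np.pi * np.arange(3) / 3
frame = [np.sqrt(2 / 3) * np.array([np.cos(a), np.sin(a)]) for a in angles]

x = np.array([1.0, -4.0])

# Self-dual expansion: x = sum_k <x, phi_k> phi_k, with no separate dual.
x_rec = sum(np.dot(x, p) * p for p in frame)
assert np.allclose(x_rec, x)

# Norm preservation: sum_k |<x, phi_k>|^2 = ||x||^2 (tight frame).
assert np.isclose(sum(np.dot(x, p) ** 2 for p in frame), np.dot(x, x))

# Unit-norm version: rescaling by sqrt(3/2) gives
# sum_k |<x, phi_k>|^2 = (3/2) ||x||^2, where 3/2 is the redundancy.
unit = [np.sqrt(3 / 2) * p for p in frame]
assert np.isclose(sum(np.dot(x, p) ** 2 for p in unit),
                  1.5 * np.dot(x, x))
```

The key design point is that equal angular spacing makes Σₖ ϕₖϕₖᵀ a multiple of the identity; the scaling √(2/3) makes that multiple exactly 1, which is what allows the frame to act as its own dual.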
Matrix view of bases and frames An expansion with a basis or frame involves
operations that can be expressed conveniently with matrices.
Take the biorthogonal basis expansion formula (2.12). The coefficients in the
expansion are the inner products

α₀ = ⟨x, ϕ̃₀⟩ = ϕ̃₀₀x₀ + ϕ̃₀₁x₁,
α₁ = ⟨x, ϕ̃₁⟩ = ϕ̃₁₀x₀ + ϕ̃₁₁x₁,

where ϕ̃₀ = [ϕ̃₀₀ ϕ̃₀₁]ᵀ and ϕ̃₁ = [ϕ̃₁₀ ϕ̃₁₁]ᵀ. Rewrite the above as a matrix–
vector product,

α = [α₀ α₁]ᵀ = [⟨x, ϕ̃₀⟩ ⟨x, ϕ̃₁⟩]ᵀ = Φ̃ᵀx, where Φ̃ᵀ = [ϕ̃₀₀ ϕ̃₀₁; ϕ̃₁₀ ϕ̃₁₁].

The matrix Φ̃ᵀ, with the dual basis vectors as its rows, computes the expansion
coefficients. The reconstruction is

x = α₀ϕ₀ + α₁ϕ₁ = Φα = ΦΦ̃ᵀx,

where ϕ₀ = [ϕ₀₀ ϕ₀₁]ᵀ and ϕ₁ = [ϕ₁₀ ϕ₁₁]ᵀ. The matrix Φ with ϕ₀ and ϕ₁ as
columns is called the synthesis operator, and left multiplying an expansion coefficient
vector α by it performs the reconstruction of x from (α₀, α₁).
The matrix view makes it obvious that the expansion formula (2.12) holds
for any x ∈ ℝ² when ΦΦ̃ᵀ is the identity matrix. In other words, we must have
Φ̃ᵀ = Φ⁻¹, which is equivalent to (2.13). The inverse exists whenever {ϕ₀, ϕ₁} is a
basis.
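The matrix view can be sketched numerically as well. Below (with an arbitrary example basis), the analysis matrix is taken as Φ⁻¹, so that ΦΦ̃ᵀ is the identity and analysis followed by synthesis recovers any x:

```python
import numpy as np

# Basis vectors as columns of Phi (the synthesis operator).
Phi = np.array([[1.0, 1.0],
                [0.0, 1.0]])

# Analysis matrix: dual basis vectors as rows, Phi_dual_T = Phi^{-1},
# so that Phi @ Phi_dual_T = I and (2.12) holds for every x.
Phi_dual_T = np.linalg.inv(Phi)
assert np.allclose(Phi @ Phi_dual_T, np.eye(2))

x = np.array([3.0, 7.0])
alpha = Phi_dual_T @ x          # analysis: expansion coefficients
x_rec = Phi @ alpha             # synthesis: reconstruction
assert np.allclose(x_rec, x)
```

The one-line identity check is exactly the condition Φ̃ᵀ = Φ⁻¹ derived above; it fails precisely when the columns of Φ do not form a basis, in which case the inverse does not exist.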
Chapter outline
The next several sections follow the progression of topics in this brief introduc-
tion. In Section 2.2, we formally introduce vector spaces and equip them with inner
products and norms. We also give several examples of common vector spaces. In
Section 2.3, we discuss the concept of completeness that turns an inner product
space into a Hilbert space. More importantly, we define the central concept of or-
thogonality and then introduce linear operators. We follow with approximations,
projections, and decompositions in Section 2.4. In Section 2.5, we define bases and
frames. This step gives us the tools to analyze signals and to create approximate
representations. Section 2.5.5 develops the matrix view of basis and frame expan-
sions. Section 2.6 discusses a few algorithms pertaining to the material covered.
The first three appendices review some elements of analysis and topology, linear
algebra, and probability. The final appendix discusses some finer mathematical
points on the concept of a basis.