Compressive_Sensing_Lecture_Notes

This document discusses compressive sensing, a method that captures and represents compressible signals at rates below the Nyquist rate using nonadaptive linear projections. It highlights the inefficiencies of traditional sampling and compression methods and introduces a framework for directly acquiring compressed signal representations. The document also outlines the design of measurement matrices and reconstruction algorithms that enable the recovery of K-sparse signals from fewer measurements, emphasizing the importance of optimization techniques.

[lecture NOTES]
Richard G. Baraniuk

Compressive Sensing

The Shannon/Nyquist sampling theorem specifies that to avoid losing information when capturing a signal, one must sample at least two times faster than the signal bandwidth. In many applications, including digital image and video cameras, the Nyquist rate is so high that too many samples result, making compression a necessity prior to storage or transmission. In other applications, including imaging systems (medical scanners and radars) and high-speed analog-to-digital converters, increasing the sampling rate is very expensive.

This lecture note presents a new method to capture and represent compressible signals at a rate significantly below the Nyquist rate. This method, called compressive sensing, employs nonadaptive linear projections that preserve the structure of the signal; the signal is then reconstructed from these projections using an optimization process [1], [2].

RELEVANCE
The ideas presented here can be used to illustrate the links between data acquisition, compression, dimensionality reduction, and optimization in undergraduate and graduate digital signal processing, statistics, and applied mathematics courses.

PREREQUISITES
The prerequisites for understanding this lecture note material are linear algebra, basic optimization, and basic probability.

PROBLEM STATEMENT

COMPRESSIBLE SIGNALS
Consider a real-valued, finite-length, one-dimensional, discrete-time signal x, which can be viewed as an N × 1 column vector in R^N with elements x[n], n = 1, 2, ..., N. (We treat an image or higher-dimensional data by vectorizing it into a long one-dimensional vector.) Any signal in R^N can be represented in terms of a basis of N × 1 vectors {ψ_i}, i = 1, ..., N. For simplicity, assume that the basis is orthonormal. Using the N × N basis matrix Ψ = [ψ_1 | ψ_2 | ... | ψ_N] with the vectors {ψ_i} as columns, a signal x can be expressed as

    x = Σ_{i=1}^{N} s_i ψ_i    or    x = Ψs,                         (1)

where s is the N × 1 column vector of weighting coefficients s_i = ⟨x, ψ_i⟩ = ψ_i^T x and ^T denotes transposition. Clearly, x and s are equivalent representations of the signal, with x in the time or space domain and s in the Ψ domain.

The signal x is K-sparse if it is a linear combination of only K basis vectors; that is, only K of the s_i coefficients in (1) are nonzero and (N − K) are zero. The case of interest is when K ≪ N. The signal x is compressible if the representation (1) has just a few large coefficients and many small coefficients.

TRANSFORM CODING AND ITS INEFFICIENCIES
The fact that compressible signals are well approximated by K-sparse representations forms the foundation of transform coding [3]. In data acquisition systems (for example, digital cameras) transform coding plays a central role: the full N-sample signal x is acquired; the complete set of transform coefficients {s_i} is computed via s = Ψ^T x; the K largest coefficients are located and the (N − K) smallest coefficients are discarded; and the K values and locations of the largest coefficients are encoded. Unfortunately, this sample-then-compress framework suffers from three inherent inefficiencies. First, the initial number of samples N may be large even if the desired K is small. Second, the set of all N transform coefficients {s_i} must be computed even though all but K of them will be discarded. Third, the locations of the large coefficients must be encoded, thus introducing an overhead.

THE COMPRESSIVE SENSING PROBLEM
Compressive sensing addresses these inefficiencies by directly acquiring a compressed signal representation without going through the intermediate stage of acquiring N samples [1], [2]. Consider a general linear measurement process that computes M < N inner products between x and a collection of vectors {φ_j}, j = 1, ..., M, as in y_j = ⟨x, φ_j⟩. Arrange the measurements y_j in an M × 1 vector y and the measurement vectors φ_j^T as rows in an M × N matrix Φ. Then, by substituting Ψ from (1), y can be written as

    y = Φx = ΦΨs = Θs,                                               (2)

where Θ = ΦΨ is an M × N matrix. The measurement process is not adaptive, meaning that Φ is fixed and does not depend on the signal x. The problem consists of designing a) a stable measurement matrix Φ such that the salient information in any K-sparse or compressible signal is not damaged by the dimensionality reduction from x ∈ R^N to y ∈ R^M and b) a reconstruction algorithm to recover x from only M ≈ K measurements y (or about as many measurements as the number of coefficients recorded by a traditional transform coder).
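To fix notation, here is a small illustrative sketch (the article itself contains no code) of the model in (1) and (2): a signal that is K-sparse in an orthonormal DCT basis Ψ, measured with an iid Gaussian Φ, as in Figure 1(a). The sizes N, M, K and the variable names are choices made for this example only.

# Illustrative sketch (not from the article): the measurement model y = Phi x = Theta s
# of (1)-(2), with x K-sparse in an orthonormal DCT basis and Phi iid Gaussian.
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
N, M, K = 256, 64, 4                      # signal length, measurements, sparsity

# Orthonormal DCT analysis matrix C; its transpose holds the basis vectors psi_i.
C = dct(np.eye(N), norm="ortho", axis=0)
Psi = C.T                                 # x = Psi @ s and s = Psi.T @ x

s = np.zeros(N)                           # K-sparse coefficient vector, as in (1)
s[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
x = Psi @ s

Phi = rng.standard_normal((M, N)) / np.sqrt(N)   # iid Gaussian entries, variance 1/N
y = Phi @ x                               # M < N nonadaptive measurements, as in (2)
Theta = Phi @ Psi
assert np.allclose(y, Theta @ s)          # y = Theta s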

[FIG1] (a) Compressive sensing measurement process with a random Gaussian measurement matrix Φ and discrete cosine transform (DCT) matrix Ψ. The vector of coefficients s is sparse with K = 4. (b) Measurement process with Θ = ΦΨ. There are four columns that correspond to nonzero s_i coefficients; the measurement vector y is a linear combination of these columns.

SOLUTION

DESIGNING A STABLE MEASUREMENT MATRIX
The measurement matrix Φ must allow the reconstruction of the length-N signal x from M < N measurements (the vector y). Since M < N, this problem appears ill-conditioned. If, however, x is K-sparse and the K locations of the nonzero coefficients in s are known, then the problem can be solved provided M ≥ K. A necessary and sufficient condition for this simplified problem to be well conditioned is that, for any vector v sharing the same K nonzero entries as s and for some ε > 0,

    1 − ε ≤ ‖Θv‖_2 / ‖v‖_2 ≤ 1 + ε.                                  (3)

That is, the matrix Θ must preserve the lengths of these particular K-sparse vectors. Of course, in general the locations of the K nonzero entries in s are not known. However, a sufficient condition for a stable solution for both K-sparse and compressible signals is that Θ satisfies (3) for an arbitrary 3K-sparse vector v. This condition is referred to as the restricted isometry property (RIP) [1]. A related condition, referred to as incoherence, requires that the rows {φ_j} of Φ cannot sparsely represent the columns {ψ_i} of Ψ (and vice versa).

Direct construction of a measurement matrix Φ such that Θ = ΦΨ has the RIP requires verifying (3) for each of the (N choose K) possible combinations of K nonzero entries in the vector v of length N. However, both the RIP and incoherence can be achieved with high probability simply by selecting Φ as a random matrix. For instance, let the matrix elements φ_{j,i} be independent and identically distributed (iid) random variables from a Gaussian probability density function with mean zero and variance 1/N [1], [2], [4]. Then the measurements y are merely M different randomly weighted linear combinations of the elements of x, as illustrated in Figure 1(a). The Gaussian measurement matrix Φ has two interesting and useful properties:
■ The matrix Φ is incoherent with the basis Ψ = I of delta spikes with high probability. More specifically, an M × N iid Gaussian matrix Θ = ΦI = Φ can be shown to have the RIP with high probability if M ≥ cK log(N/K), with c a small constant [1], [2], [4]. Therefore, K-sparse and compressible signals of length N can be recovered from only M ≥ cK log(N/K) ≪ N random Gaussian measurements.
■ The matrix Φ is universal in the sense that Θ = ΦΨ will be iid Gaussian and thus have the RIP with high probability regardless of the choice of orthonormal basis Ψ.

DESIGNING A SIGNAL RECONSTRUCTION ALGORITHM
The signal reconstruction algorithm must take the M measurements in the vector y, the random measurement matrix Φ (or the random seed that generated it), and the basis Ψ and reconstruct the length-N signal x or, equivalently, its sparse coefficient vector s. For K-sparse signals, since M < N in (2) there are infinitely many s′ that satisfy Θs′ = y. This is because if Θs = y, then Θ(s + r) = y for any vector r in the null space N(Θ) of Θ. Therefore, the signal reconstruction algorithm aims to find the signal's sparse coefficient vector in the (N − M)-dimensional translated null space H = N(Θ) + s.

■ Minimum ℓ2 norm reconstruction: Define the ℓ_p norm of the vector s as (‖s‖_p)^p = Σ_{i=1}^{N} |s_i|^p. The classical approach to inverse problems of this type is to find the vector in the translated null space with the smallest ℓ2 norm (energy) by solving

    ŝ = argmin ‖s′‖_2   such that   Θs′ = y.                         (4)

This optimization has the convenient closed-form solution ŝ = Θ^T (ΘΘ^T)^{−1} y. Unfortunately, ℓ2 minimization will almost never find a K-sparse solution, returning instead a nonsparse ŝ with many nonzero elements.

■ Minimum ℓ0 norm reconstruction: Since the ℓ2 norm measures signal energy and not signal sparsity, consider the ℓ0 norm that counts the number of nonzero entries in s. (Hence a K-sparse vector has ℓ0 norm equal to K.) The modified optimization

    ŝ = argmin ‖s′‖_0   such that   Θs′ = y                          (5)

can recover a K-sparse signal exactly with high probability using only M = K + 1 iid Gaussian measurements [5]. Unfortunately, solving (5) is both numerically unstable and NP-complete, requiring an exhaustive enumeration of all (N choose K) possible locations of the nonzero entries in s.

■ Minimum ℓ1 norm reconstruction: Surprisingly, optimization based on the ℓ1 norm

    ŝ = argmin ‖s′‖_1   such that   Θs′ = y                          (6)

can exactly recover K-sparse signals and closely approximate compressible signals with high probability using only M ≥ cK log(N/K) iid Gaussian measurements [1], [2]. This is a convex optimization problem that conveniently reduces to a linear program known as basis pursuit [1], [2] whose computational complexity is about O(N^3). Other, related reconstruction algorithms are proposed in [6] and [7].
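To make the contrast concrete, here is a small, self-contained numerical sketch (not from the article) comparing the closed-form minimum-ℓ2 solution of (4) with the ℓ1 program (6) posed as the basis pursuit linear program. The problem sizes, the direct generation of an iid Gaussian Θ (standing in for ΦΨ via the universality property), and the use of scipy.optimize.linprog are choices made for this illustration; practical basis pursuit codes use specialized solvers.

# Illustrative comparison (not from the article) of (4) vs. (6) for a K-sparse s.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, M, K = 128, 40, 4
Theta = rng.standard_normal((M, N)) / np.sqrt(N)   # stands in for Theta = Phi Psi

s = np.zeros(N)
s[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
y = Theta @ s

# (4) Minimum-l2 solution, closed form: s_hat = Theta^T (Theta Theta^T)^{-1} y.
s_l2 = Theta.T @ np.linalg.solve(Theta @ Theta.T, y)

# (6) Minimum-l1 solution via basis pursuit as a linear program:
# write s' = u - v with u, v >= 0, so that ||s'||_1 = sum(u + v) at the optimum.
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([Theta, -Theta]), b_eq=y,
              bounds=(0, None), method="highs")
s_l1 = res.x[:N] - res.x[N:]

print(np.count_nonzero(np.abs(s_l2) > 1e-8))   # roughly N nonzeros: not sparse
print(np.linalg.norm(s_l1 - s))                # near 0: recovery of the K-sparse s

On a typical run the ℓ2 solution has essentially all N entries nonzero, while the ℓ1 solution matches the true K-sparse s to numerical precision.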
DISCUSSION
The geometry of the compressive sensing problem in R^N helps visualize why ℓ2 reconstruction fails to find the sparse solution that can be identified by ℓ1 reconstruction. The set of all K-sparse vectors s in R^N is a highly nonlinear space consisting of all K-dimensional hyperplanes that are aligned with the coordinate axes as shown in Figure 2(a). The translated null space H = N(Θ) + s is oriented at a random angle due to the randomness in the matrix Θ as shown in Figure 2(b). (In practice N, M, K ≫ 3, so any intuition based on three dimensions may be misleading.) The ℓ2 minimizer ŝ from (4) is the point on H closest to the origin. This point can be found by blowing up a hypersphere (the ℓ2 ball) until it contacts H. Due to the random orientation of H, the closest point ŝ will live away from the coordinate axes with high probability and hence will be neither sparse nor close to the correct answer s. In contrast, the ℓ1 ball in Figure 2(c) has points aligned with the coordinate axes. Therefore, when the ℓ1 ball is blown up, it will first contact the translated null space H at a point near the coordinate axes, which is precisely where the sparse vector s is located.

[FIG2] (a) The subspaces containing two sparse vectors in R^3 lie close to the coordinate axes. (b) Visualization of the ℓ2 minimization (4) that finds the non-sparse point of contact ŝ between the ℓ2 ball (hypersphere, in red) and the translated measurement matrix null space (in green). (c) Visualization of the ℓ1 minimization solution that finds the sparse point of contact ŝ with high probability thanks to the pointiness of the ℓ1 ball.

While the focus here has been on discrete-time signals x, compressive sensing also applies to sparse or compressible analog signals x(t) that can be represented or approximated using only K out of N possible elements from a continuous basis or dictionary {ψ_i(t)}, i = 1, ..., N. While each ψ_i(t) may have large bandwidth (and thus a high Nyquist rate), the signal x(t) has only K degrees of freedom and thus can be measured at a much lower rate [8], [9].

PRACTICAL EXAMPLE
As a practical example, consider a single-pixel, compressive digital camera that directly acquires M random linear measurements without first collecting the N pixel values [10]. As illustrated in Figure 3(a), the incident light-field corresponding to the desired image x is reflected off a digital micromirror device (DMD) consisting of an array of N tiny mirrors. (DMDs are present in many computer projectors and projection televisions.) The reflected light is then collected by a second lens and focused onto a single photodiode (the single pixel). Each mirror can be independently oriented either towards the photodiode (corresponding to a 1) or away from the photodiode (corresponding to a 0). To collect measurements, a random number generator (RNG) sets the mirror orientations in a pseudorandom 1/0 pattern to create the measurement vector φ_j. The voltage at the photodiode then equals y_j, which is the inner product between φ_j and the desired image x. The process is repeated M times to obtain all of the entries in y.

[FIG3] (a) Single-pixel, compressive sensing camera. (b) Conventional digital camera image of a soccer ball. (c) 64 × 64 black-and-white image x̂ of the same ball (N = 4,096 pixels) recovered from M = 1,600 random measurements taken by the camera in (a). The images in (b) and (c) are not meant to be aligned.
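The measurement step just described can be mimicked in a few lines. The sketch below is a toy simulation under stated assumptions (a synthetic 64 × 64 scene and Bernoulli(1/2) mirror patterns), not the camera's actual hardware or acquisition code.

# Toy simulation (not the actual camera) of the single-pixel measurement process:
# pseudorandom 0/1 mirror patterns phi_j and measurements y_j = <phi_j, x>.
import numpy as np

rng = np.random.default_rng(2)            # stands in for the camera's RNG
n_side = 64
N = n_side * n_side                       # N = 4,096 pixels, as in Figure 3(c)
M = 1600                                  # number of random measurements

img = np.zeros((n_side, n_side))
img[20:44, 20:44] = 1.0                   # crude bright patch standing in for the scene
x = img.ravel()                           # vectorized image

Phi = (rng.random((M, N)) < 0.5).astype(float)   # each row: one 1/0 DMD pattern
y = Phi @ x                               # photodiode readings, one per pattern

# Recovery would then solve an l1-type problem such as (6), or a total variation
# variant, using y, Phi, and a sparsifying basis; that step is omitted here.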

[dsp TIPS&TRICKS] continued

line structure in the spectrum of the recursive CORDIC, and the phase error correction is not applied to suppress phase error artifacts but rather to complete the phase rotation left incomplete due to the residual phase term in the angle accumulator. This is a very different DDS!

IMPLEMENTATION
As a practical note, there are truncating quantizers between the AGC multipliers and the feedback delay element registers. As such, the truncation error circulates in the registers and contributes an undesired dc component to the complex sinusoid output. This dc component can (and should) be suppressed by using a sigma delta-based dc cancellation loop between the AGC multipliers and the feedback delay elements [6].

CONCLUSIONS
We modified the traditional recursive DDS complex oscillator structure to a tangent/cosine configuration. The tan(θ) computations were implemented by CORDIC rotations, avoiding the need for multiply operations. To minimize output phase angle error, we applied a post-CORDIC clean-up angle rotation. Finally, we stabilized the DDS output amplitude by an AGC loop. The phase-noise performance of the DDS is quite remarkable and we invite you, the reader, to take a careful look at its structure. A MATLAB-code implementation of the DDS is available at http://apollo.ee.columbia.edu/spm/?i=external/tipsandtricks.

ACKNOWLEDGMENT
Thanks to Rick Lyons for patience and constructive criticism above and beyond the call of duty.

AUTHOR
Fred Harris ([email protected]) teaches DSP and modem design at San Diego State University. He holds 12 patents on digital receivers and DSP technology. He has written over 140 journal and conference papers and is the author of the book Multirate Signal Processing for Communication Systems (Prentice Hall Publishing).

REFERENCES
[1] C. Dick, F. Harris, and M. Rice, "Synchronization in software defined radios—Carrier and timing recovery using FPGAs," in Proc. IEEE Symp. Field-Programmable Custom Computing Machines, Napa Valley, CA, Apr. 2000, pp. 195–204.
[2] J. Valls, T. Sansaloni, A. Perez-Pascual, V. Torres, and V. Almenar, "The use of CORDIC in software defined radios: A tutorial," IEEE Commun. Mag., vol. 44, no. 9, pp. 46–50, Sept. 2006.
[3] F. Harris, C. Dick, and R. Jekel, "An ultra low phase noise DDS," presented at the Software Defined Radio Forum Tech. Conf. (SDR-2006), Orlando, FL, Nov. 2006.
[4] R. Lyons, Understanding Digital Signal Processing, 2nd ed. Upper Saddle River, NJ: Prentice Hall, 2004, pp. 576–578.
[5] C. Turner, "Recursive discrete-time sinusoidal oscillators," IEEE Signal Processing Mag., vol. 20, no. 3, pp. 103–111, May 2003.
[6] C. Dick and F. Harris, "FPGA signal processing using sigma-delta modulation," IEEE Signal Processing Mag., vol. 17, no. 1, pp. 20–35, Jan. 2000. [SP]
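As an aside on the column above, the snippet below is a deliberately simplified, hypothetical illustration of why an AGC-style amplitude correction is needed in a recursive oscillator; it is not the authors' CORDIC tangent/cosine DDS and omits the truncating quantizers, dc cancellation, and phase clean-up stages they describe.

# Simplified illustration (not the CORDIC DDS described above): a recursive complex
# oscillator with a cheap per-sample gain correction g = (3 - |w|^2)/2 that keeps
# the amplitude from drifting as numerical errors accumulate.
import numpy as np

fs, f0, n_samp = 48_000.0, 1_000.0, 100_000
rot = np.exp(2j * np.pi * f0 / fs)        # one-sample phase rotation

w = 1.0 + 0.0j
out = np.empty(n_samp, dtype=complex)
for n in range(n_samp):
    w *= rot                              # recursive rotation
    w *= 0.5 * (3.0 - (w.real * w.real + w.imag * w.imag))   # AGC-style correction
    out[n] = w

print(abs(out[-1]))                       # magnitude stays ~1.0 over many samples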

[lecture NOTES] continued from page 120

An image acquired with the single-pixel camera using about 60% fewer random measurements than reconstructed pixels is illustrated in Figure 3(c); compare to the target image in Figure 3(b). The reconstruction was performed via a total variation optimization [1], which is closely related to the ℓ1 reconstruction in the wavelet domain. In addition to requiring fewer measurements, this camera can image at wavelengths where it is difficult or expensive to create a large array of sensors. It can also acquire data over time to enable video reconstruction [10].

CONCLUSIONS: WHAT WE HAVE LEARNED
Signal acquisition based on compressive sensing can be more efficient than traditional sampling for sparse or compressible signals. In compressive sensing, the familiar least squares optimization is inadequate for signal reconstruction, and other types of convex optimization must be invoked.

ACKNOWLEDGMENTS
This work was supported by grants from NSF, DARPA, ONR, AFOSR, and the Texas Instruments (TI) Leadership University Program. Special thanks are due to TI for the DMD array used in the single-pixel camera. Thanks also to the Rice DSP group and Ron DeVore for many enlightening discussions and to Justin Romberg for help with the reconstruction in Figure 3.

AUTHOR
Richard G. Baraniuk ([email protected]) is the Victor E. Cameron Professor of Electrical and Computer Engineering at Rice University. His research interests include multiscale analysis, inverse problems, distributed signal processing, and sensor networks. He is a Fellow of the IEEE.

REFERENCES
Additional compressive sensing resources are available at dsp.rice.edu/cs.
[1] E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inform. Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.
[2] D. Donoho, "Compressed sensing," IEEE Trans. Inform. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.
[3] S. Mallat, A Wavelet Tour of Signal Processing. New York: Academic, 1999.
[4] R.G. Baraniuk, M. Davenport, R. DeVore, and M.B. Wakin, "A simple proof of the restricted isometry property for random matrices (aka the Johnson-Lindenstrauss lemma meets compressed sensing)," Constructive Approximation, 2007 [Online]. Available: http://dsp.rice.edu/cs/jlcs-v03.pdf
[5] D. Baron, M.B. Wakin, M. Duarte, S. Sarvotham, and R.G. Baraniuk, "Distributed compressed sensing," 2005 [Online]. Available: http://dsp.rice.edu/cs/DCS112005.pdf
[6] J. Tropp and A.C. Gilbert, "Signal recovery from partial information via orthogonal matching pursuit," Apr. 2005 [Online]. Available: http://www-personal.umich.edu/~jtropp/papers/TG06-Signal-Recovery.pdf
[7] J. Haupt and R. Nowak, "Signal reconstruction from noisy random projections," IEEE Trans. Inform. Theory, vol. 52, no. 9, pp. 4036–4048, Sept. 2006.
[8] S. Kirolos, J. Laska, M. Wakin, M. Duarte, D. Baron, T. Ragheb, Y. Massoud, and R.G. Baraniuk, "Analog-to-information conversion via random demodulation," in Proc. IEEE Dallas Circuits Systems Workshop, Oct. 2006, pp. 71–74.
[9] M. Vetterli, P. Marziliano, and T. Blu, "Sampling signals with finite rate of innovation," IEEE Trans. Signal Processing, vol. 50, no. 6, pp. 1417–1428, June 2002.
[10] D. Takhar, V. Bansal, M. Wakin, M. Duarte, D. Baron, J. Laska, K.F. Kelly, and R.G. Baraniuk, "A compressed sensing camera: New theory and an implementation using digital micromirrors," in Proc. Comput. Imaging IV SPIE Electronic Imaging, San Jose, Jan. 2006. [SP]
