Chapter 3: The Uncertainty Principle & Time-Bandwidth Product
Vineet Kumar (01D07001), Kashinath Murmu (01D07038)
Introduction
Fourier transforms have long been an invaluable tool in signal processing and its applications. The Fourier transform is essentially required to obtain the information about a signal in the frequency domain.
To get a high resolution of the frequency data of a signal, one needs to observe the signal in the time domain over a long interval. If one observes the signal in the time domain for only a short interval, then its Fourier transform (the frequency data) is more spread out. E.g.: in the DFT computation, if one wishes to have closely spaced frequency components, i.e. a high resolution of samples (n) in the frequency domain, then one requires an equal number of samples (n) in the time domain for analysis. This is equivalent to saying that one should observe the signal for a longer interval.
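This trade-off is easy to see numerically. The sketch below (NumPy; the sampling rate fs is a made-up value) shows that the spacing between DFT frequency samples is fs/N, so taking more time samples directly tightens the frequency grid:

```python
import numpy as np

fs = 1000.0  # assumed sampling rate in Hz

# DFT bin spacing is fs / N: observing longer (more samples at the same
# rate) gives more closely spaced frequency components.
for N in (100, 1000):
    freqs = np.fft.fftfreq(N, d=1.0 / fs)
    print(N, freqs[1] - freqs[0])  # 100 -> 10.0 Hz, 1000 -> 1.0 Hz
```

Ten times the observation length yields ten times finer frequency resolution.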
However, this was not a problem as long as one was restricted to analysis in a single domain (i.e. the time domain or the frequency domain). A problem arises when one starts to analyze signals in composite domains. Composite-domain analysis is studying the given data in two or more domains simultaneously. The need for such analysis arises when one wants to extract the frequency components at different instants of time. For example, if one wants to analyze the composition of the notes played in a solo instrumental performance, then composite-domain analysis is required.
Composite domains in general do not pose a difficulty in analysis. A difficulty arises only when the two domains are inversely related to each other. E.g.: the time and frequency domains are related by t = 1/f. In this case it is seen that if a signal has compact support in time, then its frequency-domain representation does not, and vice versa.
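This behaviour can be checked directly. The sketch below (NumPy; the grid size and pulse width are arbitrary choices) takes a rectangular pulse, which is exactly zero outside a finite time interval, and confirms that its spectrum never dies out completely:

```python
import numpy as np

N = 4096
x = np.zeros(N)
x[:64] = 1.0                     # rectangular pulse: compact support in time

X = np.abs(np.fft.fft(x))
tail = X[N // 4 : N // 2]        # bins far away from DC

# The sinc-like tails decay but never vanish identically, so the signal
# has no compact support in frequency.
print(tail.max() > 0.0)          # True
```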
What is Uncertain?
If we analyze a signal in time and frequency together, then the more we zoom in on time, the equivalent amount we zoom out in frequency, and vice versa.
1) A sine wave extending from $-\infty$ to $+\infty$ has as its Fourier transform an impulse at frequency $\omega$. That is, we are certain in the frequency domain, but the interval in time is large.
2) An impulse signal has as its Fourier transform the constant 1 over frequencies $-\infty$ to $+\infty$. That is, we are certain in time, but the spread in frequency is large.
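Both extremes can be demonstrated with a DFT (a NumPy sketch; the length 1024 and bin index 50 are arbitrary choices):

```python
import numpy as np

N = 1024
n = np.arange(N)

impulse = np.zeros(N)
impulse[0] = 1.0                        # certain in time
sine = np.sin(2 * np.pi * 50 * n / N)   # an integer number of cycles

I = np.abs(np.fft.fft(impulse))
S = np.abs(np.fft.fft(sine))

print(np.allclose(I, 1.0))              # True: impulse spreads flat over frequency
print(int(np.argmax(S[: N // 2])))      # 50: sine concentrates at one bin
```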
Hence if one wishes to zoom into time and frequency simultaneously, one comes across a fundamental limit. This limit on the highest resolution that one can achieve in the time-frequency composite domain is the uncertainty principle. It is a limit imposed by nature.
Since compact support in any one domain dictates that the representation in the other domain has non-compact support, one cannot use support as the measure in composite-domain analysis; it would in any case be of little use. For analysis in the composite domain it is therefore the spread, or variance (i.e. the region where approximately 98% of the energy is concentrated), that is measured.
$$\sigma_t^2 \,\sigma_\omega^2 \;\ge\; \frac{1}{4}$$

I.e. 0.25 is the lower bound on the time-bandwidth product of a finite-energy signal. This inequality is derived using the famous Schwarz inequality. Equality holds for a Gaussian signal, whose time and frequency variances are 0.5 each. So the Gaussian waveform is an optimal waveform.
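The bound can be checked numerically. The sketch below (NumPy; the grid limits and the unit-variance Gaussian are arbitrary choices) estimates $\sigma_t^2\,\sigma_\omega^2$ from samples of a signal and of its DFT, with $\omega$ in rad/s:

```python
import numpy as np

def tb_product(x, t):
    """Estimate sigma_t^2 * sigma_w^2 for a signal centred at t = 0, w = 0."""
    p_t = np.abs(x) ** 2
    var_t = np.sum(t**2 * p_t) / np.sum(p_t)
    X = np.fft.fft(x)
    w = 2 * np.pi * np.fft.fftfreq(len(t), d=t[1] - t[0])  # angular frequency
    p_w = np.abs(X) ** 2
    var_w = np.sum(w**2 * p_w) / np.sum(p_w)
    return var_t * var_w

t = np.linspace(-20, 20, 4001)
gauss = np.exp(-(t**2) / 2)
rect = np.where(np.abs(t) <= 1.0, 1.0, 0.0)

print(round(tb_product(gauss, t), 3))   # 0.25: the Gaussian meets the bound
print(tb_product(rect, t) > 0.25)       # True: the rectangle exceeds it
```

For the rectangular pulse the continuous-time frequency variance actually diverges, so its discrete estimate depends on the grid; it is in every case well above 0.25.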
Proof:
Let us consider a finite-energy signal $x(t)$ with its centres in time and frequency at the origin, i.e. $t_0 = 0$ and $\omega_0 = 0$. Then

$$\sigma_t^2 = \frac{\int_{-\infty}^{\infty} t^2\,|x(t)|^2\,dt}{\|x\|^2} \qquad \text{(Time variance)}$$

$$\sigma_\omega^2 = \frac{\int_{-\infty}^{\infty} \omega^2\,|X(\omega)|^2\,d\omega}{\int_{-\infty}^{\infty} |X(\omega)|^2\,d\omega} \qquad \text{(Frequency variance)}$$

where $\|x\|^2 = \int_{-\infty}^{\infty} |x(t)|^2\,dt$ is the energy of the signal.
The frequency variance can also be written in the time domain (using the differentiation and Parseval properties of the Fourier transform) as

$$\sigma_\omega^2 = \frac{\int_{-\infty}^{\infty} \left|\frac{dx(t)}{dt}\right|^2 dt}{\|x\|^2}$$

so that

$$\sigma_t^2\,\sigma_\omega^2 = \frac{\left(\int_{-\infty}^{\infty} t^2\,|x(t)|^2\,dt\right)\left(\int_{-\infty}^{\infty} \left|\frac{dx(t)}{dt}\right|^2 dt\right)}{\|x\|^4}.$$
By the Schwarz inequality, for any two functions $f(\cdot)$ and $g(\cdot)$,

$$\|f\|^2\,\|g\|^2 \;\ge\; \left|\langle f, g \rangle\right|^2.$$

Taking $f(t) = t\,x(t)$ and $g(t) = \frac{dx(t)}{dt}$,

$$\left(\int t^2\,|x(t)|^2\,dt\right)\left(\int \left|\frac{dx(t)}{dt}\right|^2 dt\right) \;\ge\; \left|\int t\,x^*(t)\,\frac{dx(t)}{dt}\,dt\right|^2 \;\ge\; \left[\int \mathrm{Re}\!\left(t\,x^*(t)\,\frac{dx(t)}{dt}\right) dt\right]^2$$

since $|z| \ge |\mathrm{Re}(z)|$.
Now, $\mathrm{Re}(z) = \tfrac{1}{2}(z + z^*)$. Thus

$$\mathrm{Re}\!\left(x^*(t)\,\frac{dx(t)}{dt}\right) = \frac{1}{2}\left\{x^*(t)\,\frac{dx(t)}{dt} + x(t)\,\frac{dx^*(t)}{dt}\right\} = \frac{1}{2}\,\frac{d}{dt}\,|x(t)|^2,$$

and therefore

$$\sigma_t^2\,\sigma_\omega^2 \;\ge\; \frac{1}{4\,\|x\|^4}\left[\int_{-\infty}^{\infty} t\,\frac{d}{dt}\,|x(t)|^2\,dt\right]^2.$$
Integrating by parts we get

$$\int_{-\infty}^{\infty} t\,\frac{d}{dt}\,|x(t)|^2\,dt = \Big[\,t\,|x(t)|^2\,\Big]_{-\infty}^{\infty} - \int_{-\infty}^{\infty} |x(t)|^2\,dt.$$

Since $\sigma_t^2$ is finite, $\int t^2\,|x(t)|^2\,dt$ is finite, which forces $t^2\,|x(t)|^2 \to 0$, and hence $t\,|x(t)|^2 \to 0$, as $|t| \to \infty$. Therefore the first (boundary) term of the integral goes to zero, leaving

$$\int_{-\infty}^{\infty} t\,\frac{d}{dt}\,|x(t)|^2\,dt = -\|x\|^2.$$
The numerator of the bound is therefore

$$\left[\int_{-\infty}^{\infty} t\,\frac{d}{dt}\,|x(t)|^2\,dt\right]^2 = \|x\|^4,$$

and hence

$$\sigma_t^2\,\sigma_\omega^2 \;\ge\; \frac{1}{4}\,\frac{\|x\|^4}{\|x\|^4} = \frac{1}{4}.$$
dx(t )
dt
where 0 is a constant i.e. the two functions are linearly dependant on each other
dx(t ) 1
tdt
x(t )
0
From Schwartz Inequality we can see that equality is achieved when tx(t )
~
~
x(t ) C0 .e 2 0 Where C0 eC 0
Since we want finite energy, 0 must be negative real
~
C0 Is a constant, leaving it does not matter since it just scales the function. Hence we get
the Optimal function as the Gaussian function, given by
t2
x(t )
Where
2
0
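As a closing check (a SymPy sketch, keeping $\sigma_0$ symbolic), the Gaussian indeed satisfies the equality condition $\frac{dx}{dt} = \lambda_0\, t\, x(t)$ with $\lambda_0 = -1/\sigma_0^2$:

```python
import sympy as sp

t = sp.symbols('t', real=True)
s0 = sp.symbols('sigma_0', positive=True)

x = sp.exp(-t**2 / (2 * s0**2))   # the optimal (Gaussian) waveform
lam0 = -1 / s0**2                 # the negative real constant lambda_0

# dx/dt - lambda_0 * t * x(t) should vanish identically.
print(sp.simplify(sp.diff(x, t) - lam0 * t * x))  # 0
```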