
digital data transmission — 2023/2024

signal space and degrees of freedom

Digital communications consist of mapping bits to symbols and symbols to signals in a clever way such that the information
bits can be recovered from the received signal with low error probability. Signal theory is the mathematical
machinery for representing signals as linear combinations of a set of orthonormal waveforms, namely a vector space
whose elements are signals x(t). The orthonormality condition assures that physical quantities such as energy or
noise have a meaningful representation in the signal space, facilitating the design of communication systems.

1 orthonormal expansions of signals


The fundamental operation in signal space is the inner product between two signals. For two signals x(t) and y(t),
the inner product is defined as

⟨x, y⟩ = ∫_{−∞}^{∞} x(t) y∗(t) dt. (1.1)
A set of waveforms {ϕ1 , . . . , ϕ n } is called an orthonormal set if and only if
⟨ϕ i , ϕ j ⟩ = δ i j , (1.2)
where δ i j is the Kronecker delta, that is, δ i j = 1 for i = j and δ i j = 0 otherwise.
Problem 1.1. Let ϕ1 (t) = ϕ(t) and ϕ2 (t) = ϕ(t − T), where ϕ(t) is the rectangular waveform of amplitude A and
support [0, T]. Is {ϕ1 , ϕ2 } an orthonormal set? What value of A assures orthonormality?
Solution. The waveforms are orthonormal if and only if they satisfy (1.2). Since ϕ1(t)ϕ2(t) = 0 for all t ∈ R, it
follows that they are orthogonal because ⟨ϕ1, ϕ2⟩ = ⟨ϕ2, ϕ1⟩ = 0. However, they are not unit-energy in general, since
⟨ϕ1, ϕ1⟩ = ⟨ϕ2, ϕ2⟩ = ∫_{−∞}^{∞} |ϕ1(t)|² dt = ∫_{−∞}^{∞} |ϕ2(t)|² dt = A²T. For real-valued A, orthonormality is guaranteed if
⟨ϕ1, ϕ1⟩ = ⟨ϕ2, ϕ2⟩ = 1, i.e., when A = 1/√T.
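As a quick numerical sketch of Problem 1.1, we can approximate the inner products by Riemann sums on a fine grid (the values T = 2.0 and the grid step below are arbitrary choices, not part of the problem):

```python
import numpy as np

T = 2.0
A = 1 / np.sqrt(T)              # amplitude that makes each pulse unit-energy
dt = 1e-4
t = np.arange(0, 2 * T, dt)     # grid covering the support of both pulses

phi1 = np.where(t < T, A, 0.0)          # phi(t): rectangle on [0, T]
phi2 = np.where(t >= T, A, 0.0)         # phi(t - T): rectangle on [T, 2T]

def inner(x, y):
    """Riemann-sum approximation of the inner product (1.1)."""
    return np.sum(x * np.conj(y)) * dt

print(inner(phi1, phi2))        # ~0: disjoint supports, hence orthogonal
print(inner(phi1, phi1))        # ~1: unit energy when A = 1/sqrt(T)
print(inner(phi2, phi2))        # ~1
```

The pulses are orthogonal for any amplitude because their supports do not overlap; only the normalization A = 1/√T depends on the choice of T.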

A signal x(t) belongs to the signal space spanned by the set of orthonormal waveforms {ϕ1 , . . . , ϕ n } if and only if
it can be represented as the expansion
x(t) = ∑_{i=1}^{n} x_i ϕ_i(t), (1.3)
where x i are called the coefficients of x(t) and are given by
x i = ⟨x, ϕ i ⟩. (1.4)
It is rather evident that not all signals can be perfectly represented in any orthonormal basis, i.e., in general the space
of signals (1.3) is a subspace of all signals. For example, let the orthonormal function

ϕ1(t) = { 1/√T,  0 ≤ t ≤ T;  0, elsewhere }. (1.5)

Clearly, any signal x(t) that takes a constant value A between 0 and T (and 0 otherwise) belongs to the signal space
spanned by ϕ1(t), since it can be written as x(t) = x1 ϕ1(t) with coefficient x1 = A√T. On the contrary, the signal

x(t) = { t,  0 ≤ t ≤ T;  0, elsewhere } (1.6)

cannot be perfectly represented by ϕ1(t) because there exists no value x1 such that x(t) = x1 ϕ1(t).

Problem 1.2. Calculate the coefficient x1 according to Equation 1.4 for the signal x(t) in (1.6) for the basis wave-
form (1.5). Plot x1 ϕ1 (t) and graphically check that x(t) ≠ x1 ϕ1 (t).
Solution. To calculate the coefficient of x(t), we use Equation (1.4) and the definition of the inner product (1.1) as
x1 = ⟨x, ϕ1⟩ = ∫_{−∞}^{∞} x(t)ϕ1(t) dt = (1/√T) ∫_{0}^{T} t dt = T²/(2√T). (1.7)
Denoting the projection of x(t) onto the signal space spanned by ϕ1 (t) as x̃(t) = x1 ϕ1 (t), we obtain that


x̃(t) = { T/2,  0 ≤ t ≤ T;  0, elsewhere }. (1.8)

Clearly, using x̃(t) we are approximating the triangular signal x(t) by a rectangular pulse whose amplitude is the
average amplitude of x(t), as can be seen in Figure 1.1.
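The computation above can be sketched numerically (T = 1.0 and the grid step are arbitrary test values):

```python
import numpy as np

T = 1.0
dt = 1e-5
t = np.arange(0, T, dt)

x = t.copy()                             # the ramp signal (1.6): x(t) = t on [0, T]
phi1 = np.full_like(t, 1 / np.sqrt(T))   # basis waveform (1.5) on its support

x1 = np.sum(x * phi1) * dt               # Riemann sum for <x, phi1>
print(x1, T**2 / (2 * np.sqrt(T)))       # numerical vs closed-form (1.7)

x_tilde = x1 * phi1                      # projection (1.8): the constant T/2
print(np.max(np.abs(x_tilde - T / 2)))   # ~0
print(np.max(np.abs(x - x_tilde)))       # large: x(t) differs from its projection
```

The last line confirms graphically what Problem 1.2 asks us to check: the ramp is not in the span of ϕ1(t), so the projection error does not vanish.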

Figure 1.1: Representation of Problem 1.2.

The main advantage of orthonormal expansions is that energy and distances have a meaningful representation in
the signal space. If x(t) belongs to the signal space spanned by {ϕ1 , . . . , ϕ n }, then its energy can be calculated as
E_x = ∫_{−∞}^{∞} |x(t)|² dt = ∑_{i=1}^{n} |x_i|². (1.9)

Likewise, the inner product (1.1) between two signals can be expressed in terms of the coefficients of such signals in
an orthonormal basis. Let x(t) and y(t) be two signals belonging to the signal space spanned by a set of orthonormal
functions {ϕ1 , . . . , ϕ n }. Then, if x i = ⟨x, ϕ i ⟩ and y i = ⟨y, ϕ i ⟩, we have that
⟨x, y⟩ = ∑_{i=1}^{n} x_i y_i∗. (1.10)

Equations (1.9) and (1.10) have a nice geometrical interpretation. The coefficients of a signal can be seen as the
coordinates of a vector with respect to the orthonormal basis. We may see this through an example.
Example 1.1. Consider the signal space spanned by {ϕ1 (t), ϕ2 (t)} given by
ϕ1(t) = { √(2/T),  0 ≤ t ≤ T/2;  0, elsewhere },    ϕ2(t) = { −√(2/T),  T/2 ≤ t ≤ T;  0, elsewhere }. (1.11)

We first note that they form an orthonormal set since ⟨ϕ1 , ϕ2 ⟩ = 0 and E ϕ1 = E ϕ2 = 1. Two signals, x(t) and y(t)

Figure 1.2: Two signals for Example 1.1: x(t) takes the value A/2 on [0, T/2] and −A on [T/2, T], while y(t) takes the value A on [0, T/2] and −A/2 on [T/2, T].

belonging to the space spanned by ϕ1 and ϕ2 are the signals in Figure 1.2. Clearly, the two signals can be represented
in the orthonormal basis as x(t) = x1 ϕ1 (t) + x2 ϕ2 (t) where
x1 = ⟨x, ϕ1⟩ = (A/2)√(T/2)   and   x2 = ⟨x, ϕ2⟩ = A√(T/2), (1.12)
while y(t) = y1 ϕ1 (t) + y2 ϕ2 (t) where
y1 = ⟨y, ϕ1⟩ = A√(T/2)   and   y2 = ⟨y, ϕ2⟩ = (A/2)√(T/2). (1.13)
Placing the coefficients as coordinates, we may see the signals as the points x = (x1 , x2 ) and y = (y1 , y2 ) in the plane:

Figure 1.3: Geometric representation of the signal space of Example 1.1.

Problem 1.3. Check that Equations (1.9) and (1.10) are satisfied for the signals x(t) and y(t) in Example 1.1, that is,
calculate E_x, E_y and ⟨x, y⟩ first using the definitions of energy and inner product, then compute the same quantities
using the r.h.s. of (1.9) and (1.10), and check that they coincide.
Solution. Using the definitions of energy and inner product, we first calculate

E_x = ∫_{−∞}^{∞} |x(t)|² dt = ∫_{0}^{T/2} (A²/4) dt + ∫_{T/2}^{T} A² dt = (5/8) A²T, (1.14)

while E_y = (5/8) A²T as well. Also,

⟨x, y⟩ = ∫_{−∞}^{∞} x(t)y∗(t) dt = ∫_{0}^{T/2} (A/2)·A dt + ∫_{T/2}^{T} (−A)·(−A/2) dt = A²T/2. (1.15)

Alternatively,
E_x = ∑_{i=1}^{n} |x_i|² = x1² + x2² = ((A/2)√(T/2))² + (A√(T/2))² = A²T/8 + A²T/2 = (5/8) A²T, (1.16)

E_y = ∑_{i=1}^{n} |y_i|² = y1² + y2² = (A√(T/2))² + ((A/2)√(T/2))² = A²T/2 + A²T/8 = (5/8) A²T, (1.17)

and

⟨x, y⟩ = ∑_{i=1}^{n} x_i y_i∗ = x1 y1 + x2 y2 = ((A/2)√(T/2))·(A√(T/2)) + (A√(T/2))·((A/2)√(T/2)) = A²T/4 + A²T/4 = A²T/2, (1.18)

hence checking that equalities (1.9) and (1.10) are indeed satisfied.
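The same check can be run numerically (T = 1.0 and A = 2.0 are arbitrary test values, not part of the problem):

```python
import numpy as np

T, A = 1.0, 2.0
dt = 1e-5
t = np.arange(0, T, dt)
half = t < T / 2

phi1 = np.where(half, np.sqrt(2 / T), 0.0)   # basis (1.11)
phi2 = np.where(~half, -np.sqrt(2 / T), 0.0)
x = np.where(half, A / 2, -A)                # signals from Figure 1.2
y = np.where(half, A, -A / 2)

ip = lambda u, v: np.sum(u * np.conj(v)) * dt   # Riemann-sum inner product

x1, x2 = ip(x, phi1), ip(x, phi2)
y1, y2 = ip(y, phi1), ip(y, phi2)

print(ip(x, x), x1**2 + x2**2, 5 * A**2 * T / 8)   # E_x three ways
print(ip(x, y), x1 * y1 + x2 * y2, A**2 * T / 2)   # <x, y> three ways
```

All three expressions for E_x agree, and likewise for the inner product, illustrating (1.9) and (1.10).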

We already know that the Fourier transform has the nice properties of linearity (the Fourier transform of αx(t) +
βy(t) is α x̂( f ) + β ŷ( f )) and invertibility (we can always uniquely go back and forth between x(t) and x̂( f )), allowing
us to study the orthonormal expansion of signals described here in a meaningful way. Simply taking the Fourier
transform on both sides of (1.3), we obtain

x̂( f ) = ∑_{i=1}^{n} x_i ϕ̂_i( f ). (1.19)

An important question arises. Is {ϕ̂1 , . . . , ϕ̂ n } still an orthonormal set? The answer lies in the following central
theorem in digital data transmission. Parseval’s theorem states that, for any two signals x(t) and y(t) with finite
energy and Fourier transforms x̂( f ) and ŷ( f ),

⟨x, y⟩ = ⟨x̂, ŷ⟩. (1.20)

Parseval’s theorem says that if {ϕ1, . . . , ϕ_n} satisfies the orthonormality condition (1.2), then the tuple of Fourier transforms {ϕ̂1, . . . , ϕ̂_n}
also does. Equation (1.20) implies that the Fourier transform preserves the inner product, and hence that it
allows us to study orthonormal expansions of signals either in the time or the frequency domain, at our convenience.
Setting x = y, we recover Parseval’s theorem for the energy, a result we already saw in Topic 1.
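A discrete surrogate of (1.20) can be checked with the DFT, whose own Parseval relation mirrors the continuous one (the signal length, step, and random seed below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt = 4096, 1e-3
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
y = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Time-domain inner product, approximating <x, y> = integral of x y*.
time_ip = np.sum(x * np.conj(y)) * dt

# Frequency-domain version: for the DFT, sum x_n y_n* = (1/N) sum X_k Y_k*.
freq_ip = np.sum(np.fft.fft(x) * np.conj(np.fft.fft(y))) / N * dt

print(np.allclose(time_ip, freq_ip))   # the two inner products coincide
```

The 1/N factor reflects the unnormalized DFT convention of `np.fft.fft`; with it in place, the inner product is identical in both domains, exactly as (1.20) states for continuous-time signals.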

2 fourier functions
Above, we inspected simple instances of orthonormal expansions such as the one in Example 1.1, where a pair of
orthonormal waveforms ϕ1(t) and ϕ2(t), respectively constant on the [0, T/2) and [T/2, T) intervals, spans
a signal subspace formed only by signals x(t) that are constant on those intervals. To accommodate any signal shape
in a limited time interval, we consider an infinite number of orthonormal functions to represent a signal, that is

x(t) = ∑_{i=−∞}^{∞} x_i ϕ_i(t), (2.1)

where x_i = ⟨x, ϕ_i⟩ and ⟨ϕ_i, ϕ_k⟩ = δ_ik. We will assume that the r.h.s. of (2.1) converges. We say that a set of or-
thonormal waveforms {. . . , ϕ−1, ϕ0, ϕ1, . . .} is complete over a class of signals if all the signals in that particular
class are in the signal space spanned by the waveforms ϕ_i(t). Clearly, the orthonormal waveforms ϕ1(t) and ϕ2(t)
in Example 1.1 are complete over the class of signals that are constant on the [0, T/2) and [T/2, T) intervals. An
important consequence of completeness is that the energy of a signal belonging to the class of signals spanned by a
complete set of orthonormal waveforms satisfies

E_x = ∫_{−∞}^{∞} |x(t)|² dt = ∑_{i=−∞}^{∞} |x_i|². (2.2)

A result from Calculus is that any periodic function (we assume that the period is T seconds) can be represented
using the Fourier series as a linear combination of sines and cosines. A periodic function such as sin(πt) has infinite
energy, and therefore it cannot be represented using finite-energy orthonormal waveforms. However, the Fourier
functions defined in the [−T/2, T/2] interval do form an orthonormal set of waveforms. For every T > 0 and integer
i ∈ Z, the Fourier functions are defined as


ϕ_i(t) = { (1/√T) e^{j2π i t/T},  −T/2 ≤ t ≤ T/2;  0, elsewhere }. (2.3)

Problem 2.1. Check that the Fourier functions (2.3) satisfy the orthonormality condition.
Solution. The inner product between two Fourier functions ϕ_i(t) and ϕ_k(t) is, substituting (2.3),

⟨ϕ_i, ϕ_k⟩ = (1/T) ∫_{−T/2}^{T/2} e^{j2π i t/T} e^{−j2π k t/T} dt = (1/T) ∫_{−T/2}^{T/2} e^{j2π (i−k) t/T} dt = sin((i−k)π)/((i−k)π) = sinc(i−k) = δ_ik, (2.4)

hence they satisfy the orthonormality condition (1.2).
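We can spot-check (2.4) numerically for a few index pairs (T = 1.0 and the grid step are arbitrary choices):

```python
import numpy as np

T = 1.0
dt = 1e-5
t = np.arange(-T / 2, T / 2, dt)

def phi(i, t):
    """Fourier function (2.3), restricted to its support [-T/2, T/2]."""
    return np.exp(2j * np.pi * i * t / T) / np.sqrt(T)

def inner(i, k):
    """Riemann-sum inner product <phi_i, phi_k> over one period."""
    return np.sum(phi(i, t) * np.conj(phi(k, t))) * dt

for i in (-2, 0, 3):
    for k in (-2, 0, 3):
        print(i, k, np.round(inner(i, k), 6))   # ~1 when i = k, ~0 otherwise
```

On a uniform grid spanning exactly one period, the discrete sums of the complex exponentials vanish for i ≠ k, so the numerical check reproduces the Kronecker delta essentially exactly.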

Problem 2.2. Consider the triangular signal




x(t) = { t,  −T/2 ≤ t ≤ T/2;  0, elsewhere }. (2.5)

Find the coefficients x0 , x−1 and x1 in the Fourier basis. In the same plot, draw and compare x(t) and x̃(t) defined
as x̃(t) = x−1 ϕ−1 (t) + x0 ϕ0 (t) + x1 ϕ1 (t).
Solution. The signal x(t) is time-limited to T seconds and graphically is a straight line crossing the origin,
hence an odd function satisfying x(−t) = −x(t). We start by computing x0, the coefficient given by

x0 = ⟨x, ϕ0⟩ = ∫_{−∞}^{∞} x(t)ϕ0∗(t) dt = (1/√T) ∫_{−T/2}^{T/2} t dt = 0, (2.6)

which is the average value of x(t) in the [−T/2, T/2] interval. Next,

x1 = ⟨x, ϕ1⟩ = ∫_{−∞}^{∞} x(t)ϕ1∗(t) dt = (1/√T) ∫_{−T/2}^{T/2} t e^{−j2π t/T} dt = −j T√T/(2π), (2.7)

where we used that a primitive of t e^{at} is (e^{at}/a²)(at − 1). Similarly, since x(t) is odd, we note that

x_{−1} = ∫_{−∞}^{∞} x(t)ϕ_{−1}∗(t) dt = ∫_{−∞}^{∞} x(−u)ϕ_{−1}∗(−u) du = −∫_{−∞}^{∞} x(u)ϕ1∗(u) du = −x1 = j T√T/(2π), (2.8)

where we made the change of variable u = −t, and used that x(−u) = −x(u) and that ϕ_{−i}(−u) = ϕ_i(u). Defining x̃(t)
as the projection of x(t) onto the subspace spanned only by the three waveforms {ϕ_{−1}, ϕ0, ϕ1}, we obtain

x̃(t) = x_{−1}ϕ_{−1}(t) + x0ϕ0(t) + x1ϕ1(t) = j (T√T/(2π)) (1/√T) e^{−j2π t/T} − j (T√T/(2π)) (1/√T) e^{j2π t/T} = (T/π) sin(2π t/T). (2.9)

Shown in Figure 2.1, x̃(t) is the first-order Fourier series approximation to x(t).
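A numerical sketch of Problem 2.2 (T = 1.0 is an arbitrary test value):

```python
import numpy as np

T = 1.0
dt = 1e-5
t = np.arange(-T / 2, T / 2, dt)
x = t.copy()                           # x(t) = t on [-T/2, T/2], as in (2.5)

def phi(i, t):
    """Fourier function (2.3) on its support."""
    return np.exp(2j * np.pi * i * t / T) / np.sqrt(T)

# Coefficients x_{-1}, x_0, x_1 via Riemann-sum inner products.
coef = {i: np.sum(x * np.conj(phi(i, t))) * dt for i in (-1, 0, 1)}

print(coef[0])                                        # ~0, the zero average (2.6)
print(coef[1], -1j * T * np.sqrt(T) / (2 * np.pi))    # matches (2.7)
print(coef[-1], -coef[1])                             # x_{-1} = -x_1, as in (2.8)

# Projection (2.9): should be real and equal to (T/pi) sin(2 pi t / T).
x_tilde = sum(coef[i] * phi(i, t) for i in (-1, 0, 1))
print(np.max(np.abs(x_tilde.real - T / np.pi * np.sin(2 * np.pi * t / T))))  # ~0
```

The imaginary parts cancel between the i = 1 and i = −1 terms, leaving the real sinusoid of (2.9).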

Figure 2.1: Representation of Problem 2.2.

We next compute the coefficients x i of a finite-energy signal x(t) in the Fourier basis. In general,

x_i = ⟨x, ϕ_i⟩ = ∫_{−∞}^{∞} x(t)ϕ_i∗(t) dt = (1/√T) ∫_{−T/2}^{T/2} x(t) e^{−j2π i t/T} dt. (2.10)

The utility of the Fourier series comes clear now. If the support of x(t) is [−T/2, T/2], then the coefficients x i are
related to the Fourier transform x̂( f ) as
x_i = (1/√T) x̂(i/T). (2.11)
If the support of x(t) is larger than [−T/2, T/2], then (2.11) does not hold, because the integral in (2.10) is not the
Fourier transform of x(t) evaluated at f = i/T but a truncated version of it!

Evaluating the coefficients in the Fourier basis corresponds to uniformly sampling the signal in the frequency domain.
In general we need an infinite number of coefficients x_i to represent a time-limited signal using the Fourier functions.
If we use a finite number of coefficients, we obtain an approximate representation of x(t), or a projection, using
the Fourier functions, as in the example in Problem 2.2.
Problem 2.3. Consider x(t) in Problem 2.2. Plot its Fourier transform, and the location of the samples (2.11).
Solution. The Fourier transform of x(t) satisfies x̂( f ) = 0 for f = 0 and
x̂( f ) = ∫_{−T/2}^{T/2} x(t) e^{−j2π f t} dt = j T cos(π f T)/(2π f ) − j sin(π f T)/(2π² f ²), (2.12)

for f ≠ 0. To obtain (2.12), we used again that (e^{at}/a²)(at − 1) is a primitive of t e^{at}, combined the complex exponentials

into sines and cosines, and simplified. Since x(t) is a real-valued odd function, it turns out that its Fourier transform
x̂( f ) is also odd and pure imaginary. We may then plot only the imaginary part, as shown in Figure 2.2. Now, it
follows that the coefficients of the signal x(t) in the Fourier basis satisfy x0 = 0 and

x_i = (1/√T) x̂(i/T) = j (−1)^i T√T/(2πi), (2.13)

where we used that sin(πi) = 0 and cos(πi) = (−1)^i for i ∈ Z. We may check that x1 and x−1 calculated using (2.13)
coincide with the ones calculated in Problem 2.2. We observe in Figure 2.2 how the Fourier transform is sampled to
obtain the coefficients in the Fourier basis, noting at the same time that x_i ≠ 0 for all i ≠ 0.
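The agreement between the sampled transform (2.11) and the closed form (2.13) is easy to verify for a few indices (T = 1.0 is an arbitrary test value):

```python
import numpy as np

T = 1.0

def xhat(f):
    """Closed-form Fourier transform (2.12) of x(t) = t on [-T/2, T/2], f != 0."""
    return (1j * T * np.cos(np.pi * f * T) / (2 * np.pi * f)
            - 1j * np.sin(np.pi * f * T) / (2 * np.pi**2 * f**2))

for i in (-2, -1, 1, 2, 3):
    sample = xhat(i / T) / np.sqrt(T)                         # x_i via (2.11)
    closed = 1j * (-1)**i * T * np.sqrt(T) / (2 * np.pi * i)  # x_i via (2.13)
    print(i, sample, closed)   # the two expressions coincide
```

At f = i/T the sine term of (2.12) vanishes, so only the cosine term survives, which is exactly where the (−1)^i sign alternation in (2.13) comes from.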

Figure 2.2: Imaginary part of the Fourier transform x̂( f ), scaled by 1/√T, with the samples (2.11) at the multiples of 1/T.

3 degrees of freedom
We end this topic by studying the spectral resources used to transmit information using signals that are time-limited
to T seconds. Suppose that we want to represent signals time-limited to T seconds that are approximately restricted
in frequency to the [−W, W] Hz interval, that is, x̂( f ) ≈ 0 for f ∉ [−W, W]. In the frequency domain, the repre-
sentation of x(t) is the Fourier transform of (2.1), namely

x̂( f ) = ∑_{i=−∞}^{∞} x_i ϕ̂_i( f ), (3.1)

where ϕ̂ i ( f ) is the Fourier transform of ϕ i (t). Since ϕ i (t) is time-limited to T seconds, we know that it spreads
out over a range of frequencies. In other words, after computing the Fourier transform, we obtain

ϕ̂_i( f ) = ∫_{−∞}^{∞} ϕ_i(t) e^{−j2π f t} dt = (1/√T) ∫_{−T/2}^{T/2} e^{−j2π t ( f − i/T)} dt = √T sinc[T( f − i/T)]. (3.2)

Representing (3.2) in Figure 3.1, we observe that ϕ_i(t) in fact has most of its frequency content concentrated
around the frequency i/T. Combining Equations (3.1) and (2.11), we obtain that

x̂( f ) = ∑_{i=−∞}^{∞} x̂(i/T) sinc[T( f − i/T)], (3.3)

i.e., x̂( f ) can be seen as the superposition of sinc(⋅) functions centered at the multiples of 1/T, with amplitudes
that correspond to the Fourier transform of x(t) at those particular frequencies. Therefore, if x(t) is approximately
frequency-limited to W Hz, it can be well represented in the Fourier basis (2.3) using the coefficients x0, x±1, up
to x±m, where m is the last coefficient at the boundary of W Hz, that is, m = W T. In other words, any signal x(t)
that is strictly time-limited to [−T/2, T/2] and approximately frequency-limited to W Hz can be represented as

x(t) = ∑_{i=−W T}^{W T} x_i ϕ_i(t), (3.4)

where ϕ i (t) are the Fourier functions (2.3). In an actual communication system, the transmitted signal is a real-
valued quantity, such as the intensity of light in an optical fiber or the intensity of the electromagnetic field in a
wireless antenna. If x(t) is real, then its Fourier transform enjoys Hermitian symmetry: x̂∗( f ) = x̂(− f ). This
translates to the fact that x0 is real and that the complex-valued coefficients x_i for i ≠ 0 satisfy x_{−i} = x_i∗. Hence, we


Figure 3.1: Fourier transform ϕ̂ i ( f ) of the Fourier functions.

have roughly W T independent complex numbers, which we call degrees of freedom. In essence, we can transmit
W T independent complex numbers using signals of duration T seconds and bandwidth W Hz, and recover them
perfectly in the absence of noise. In the following topics we will discuss how to estimate the transmitted information
in the presence of noise, and what is the maximum rate at which a low probability of error can be attained.
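The noiseless transmit-and-recover claim can be sketched with the expansion (3.4): build a waveform from 2W T + 1 coefficients in the Fourier basis and recover them by projection (T, W, the grid step, and the random seed below are arbitrary choices; no noise is added):

```python
import numpy as np

T, W = 1.0, 8.0        # duration and (approximate) bandwidth, arbitrary values
m = int(W * T)         # highest Fourier index used, m = WT
dt = 1e-4
t = np.arange(-T / 2, T / 2, dt)

def phi(i, t):
    """Fourier function (2.3) on [-T/2, T/2]."""
    return np.exp(2j * np.pi * i * t / T) / np.sqrt(T)

rng = np.random.default_rng(1)
sent = rng.standard_normal(2 * m + 1) + 1j * rng.standard_normal(2 * m + 1)

# Transmitter: build the waveform from the coefficients, as in (3.4).
x = sum(c * phi(i, t) for c, i in zip(sent, range(-m, m + 1)))

# Receiver: project onto each basis function to recover the coefficients.
recovered = np.array([np.sum(x * np.conj(phi(i, t))) * dt
                      for i in range(-m, m + 1)])

print(np.max(np.abs(recovered - sent)))   # ~0: perfect recovery without noise
```

Orthonormality of the basis is what makes the receiver a simple bank of inner products: each projection isolates exactly one of the transmitted numbers.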
