ECE376 Dimensionality of Signals ASK PSK QAM FSK


1. Dimensionality of Signals

Given a signal set of M signals (waveforms) s₁(t), …, s_M(t), we wish to find a set of N orthonormal basis functions φ₁(t), …, φ_N(t), so that we can write s₁(t), …, s_M(t) in terms of φ₁(t), …, φ_N(t). Note that after this orthogonalization, we can represent s₁(t), …, s_M(t) in an N-dimensional space where each (orthogonal) axis corresponds to one of φ₁(t), …, φ_N(t). We choose the indices m and n such that 1 ≤ m ≤ M and 1 ≤ n ≤ N, and it is reasonable to assume that N ≤ M.

It is important to understand that s₁(t), …, s_M(t) constitute the symbols (i.e. the groupings of binary waveforms and their assignments to certain symbols) of a particular modulation scheme.

A formal method of finding φ₁(t), …, φ_N(t) given s₁(t), …, s_M(t) is the Gram-Schmidt orthogonalization procedure, described below in steps.

Gram-Schmidt Orthogonalization Procedure

1. We begin with s₁(t) and set φ₁(t) as follows:

   φ₁(t) = s₁(t) / √ε₁   (1.1)

where ε₁ is the energy in s₁(t), found from

   ε₁ = ∫ s₁²(t) dt   (1.2)

This way φ₁(t) is simply s₁(t) normalized to unit energy. Note that we demand that the orthonormal basis functions φ₁(t), …, φ_N(t) have unit energy to avoid scaling problems (similar to frequency transforms).

2. To find φ₂(t) we proceed as follows. We take s₂(t) and find its projection onto the φ₁(t) axis from

   c₂₁ = ∫ s₂(t) φ₁(t) dt   (1.3)

Then we subtract c₂₁φ₁(t) from s₂(t) to get

   d₂(t) = s₂(t) − c₂₁ φ₁(t)   (1.4)

Now d₂(t) is orthogonal to φ₁(t), and φ₂(t) can be found by normalizing its energy, hence

   φ₂(t) = d₂(t) / √ε₂ ,  ε₂ = ∫ d₂²(t) dt   (1.5)

Note that unlike ε₁, ε₂ does not correspond to the energy in s₂(t); rather it is the energy in d₂(t).

3. In general, the n-th orthonormal basis function is obtained from

   φ_n(t) = d_n(t) / √ε_n ,  d_n(t) = s_n(t) − Σ_{i=1}^{n−1} c_ni φ_i(t) ,
   ε_n = ∫ d_n²(t) dt ,  c_ni = ∫ s_n(t) φ_i(t) dt , i = 1, …, n−1   (1.6)

4. The process in step 3 is continued until we reach n = M, i.e. until all M waveforms are exhausted. Whenever d_n(t) = 0, the corresponding s_n(t) is already represented by the existing basis functions and no new φ(t) is produced, which is why N ≤ M.
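The procedure above can be sketched numerically on sampled waveforms. This is an illustrative sketch only (the function name `gram_schmidt`, the sampling step and the zero-energy tolerance are our own choices, not part of the notes); integrals are approximated by Riemann sums, and the four waveforms are those of Example 1.1 below.

```python
import numpy as np

def gram_schmidt(signals, dt):
    """Orthonormalize sampled waveforms; integrals are Riemann sums (sum * dt)."""
    basis = []
    for s in signals:
        d = s.astype(float)
        for p in basis:
            d = d - (np.sum(s * p) * dt) * p   # subtract projection c_ni * phi_i(t)
        energy = np.sum(d**2) * dt
        if energy > 1e-12:                     # d_n(t) = 0 -> no new basis function
            basis.append(d / np.sqrt(energy))
    return basis

# Example 1.1 waveforms sampled on 0 <= t < 3, as defined in (1.7)
dt = 1e-3
t = np.arange(0, 3, dt)
s1 = np.where(t < 2, 1.0, 0.0)
s2 = np.where(t < 1, 1.0, np.where(t < 2, -1.0, 0.0))
s3 = np.where(t < 1, -1.0, np.where(t < 3, 1.0, 0.0))
s4 = np.where(t < 3, 1.0, 0.0)
basis = gram_schmidt([s1, s2, s3, s4], dt)
print(len(basis))   # 3, i.e. N = 3 < M = 4
```

The printed count confirms that one waveform is linearly dependent on the others, so only three basis functions are produced.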

Example 1.1: Four signal waveforms, i.e. M = 4, named s₁(t), …, s₄(t) are given in Fig. 1.1, and we are asked to find a set of orthonormal basis functions φ₁(t), …, φ_N(t), where N ≤ 4. Before we tackle the solution, it is important to note that our signals are defined piecewise over the given time intervals. This means that calculations must take these intervals into account, otherwise we will get inaccurate results. To be precise, we give the expressions for s₁(t), …, s₄(t), based on the plots in Fig. 1.1a and split into the time intervals they occupy:

   s₁(t) = { 1 for 0 ≤ t ≤ 2 ; 0 otherwise }
   s₂(t) = { 1 for 0 ≤ t ≤ 1 ; −1 for 1 ≤ t ≤ 2 ; 0 otherwise }
   s₃(t) = { −1 for 0 ≤ t ≤ 1 ; 1 for 1 ≤ t ≤ 3 ; 0 otherwise }
   s₄(t) = { 1 for 0 ≤ t ≤ 3 ; 0 otherwise }   (1.7)

Solution: As described above, first we find φ₁(t):

   φ₁(t) = s₁(t) / √ε₁ , where ε₁ = ∫ s₁²(t) dt = 2 , hence φ₁(t) = s₁(t) / √2   (1.8)

Now we proceed to find c₂₁ = 0 from

   c₂₁ = ∫ s₂(t) φ₁(t) dt = 0   (1.9)

This is because the overlapping parts of s₂(t) and φ₁(t) are orthogonal. Then d₂(t) = s₂(t) and

   φ₂(t) = d₂(t) / √ε₂ = s₂(t) / √ε₂ = s₂(t) / √2 , since ε₂ = ∫ s₂²(t) dt = 2   (1.10)

To evaluate φ₃(t), we need to compute c₃₁ and c₃₂ using (1.6):

   c₃₁ = ∫ s₃(t) φ₁(t) dt = −(1/√2) ∫₀¹ dt + (1/√2) ∫₁² dt = 0
   c₃₂ = ∫ s₃(t) φ₂(t) dt = −√2   (1.11)

Then

   d₃(t) = s₃(t) − c₃₂ φ₂(t) = s₃(t) + √2 φ₂(t)   (1.12)

As seen from (1.12), d₃(t) is a waveform extending from t = 2 to t = 3 with unit energy, thus φ₃(t) = d₃(t).

Finally (again using (1.6)), we find that c₄₁ = √2, c₄₂ = 0, c₄₃ = 1, so

   d₄(t) = s₄(t) − c₄₁ φ₁(t) − c₄₂ φ₂(t) − c₄₃ φ₃(t) = s₄(t) − √2 φ₁(t) − φ₃(t) = 0   (1.13)

as can be verified by plotting d₄(t). (1.13) means that we have reached the end of the orthogonalization process and the waveforms s₁(t), …, s₄(t) can adequately be represented by the orthonormal basis functions φ₁(t), …, φ₃(t). So in this example M = 4, N = 3. For general M and N, the waveforms s₁(t), …, s_M(t) can be plotted in an N-dimensional signal space, where any s_m(t) in s₁(t), …, s_M(t) can be written in terms of the orthonormal basis functions φ₁(t), …, φ_N(t) as follows:

   s_m(t) = Σ_{n=1}^{N} s_mn φ_n(t) , m = 1, …, M ,  s_mn = ∫ s_m(t) φ_n(t) dt   (1.14)

(1.14) means s_m(t) can be constructed from the components s_mn along the different φ_n(t). Alternatively, we can say that the coefficients s_mn are the projections of s_m(t) along the axis of φ_n(t). In this manner we define the vectorial representation of s_m(t) as

   s_m = [s_m1, s_m2, …, s_mN] ,  ∫ s_m(t) s_n(t) dt = s_m · s_n ,  d_mn = |s_m − s_n|   (1.15)

The integral in the middle means that the product of two signal waveforms corresponds to the vectorial inner product. For instance, if s_m(t) and s_n(t) are orthogonal, then the inner product will be zero. d_mn refers to the distance between the ends of the vectors s_m and s_n, which will be important for the probability of error performance.

Since s_m(t) and s_m refer to the same signal, the energy ε_m in s_m(t) can be obtained either from s_m(t) or from s_m, thus

   ε_m = ∫ s_m²(t) dt = Σ_{n=1}^{N} s_mn² = |s_m|²   (1.16)

Note that it is important to comprehend these concepts, since they will be used in the detection process at the receiver.

For the example above, the signal vectors s₁, …, s₄ are shown in Fig. 1.2 in the three-dimensional space of φ₁(t), …, φ₃(t). Using (1.14) and (1.15) it is possible to calculate s₁, …, s₄ and their respective distances as

   s₁ = [s₁₁, s₁₂, s₁₃] = [√2, 0, 0] , s₂ = [0, √2, 0] , s₃ = [0, −√2, 1] , s₄ = [√2, 0, 1]

   d₁₂ = |s₁ − s₂| = [(√2 − 0)² + (0 − √2)² + 0²]^0.5 = 2 ,  d₁₃ = |s₁ − s₃| = √5
   d₁₄ = |s₁ − s₄| = 1 , d₂₃ = |s₂ − s₃| = 3 , d₂₄ = |s₂ − s₄| = √5 , d₃₄ = |s₃ − s₄| = 2   (1.17)
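These vectors, distances and energies can be checked with a few lines of code; the following is a verification sketch of (1.16) and (1.17) using our own variable names.

```python
import numpy as np

r2 = np.sqrt(2.0)
# Signal vectors of (1.17), coordinates along phi_1, phi_2, phi_3
s = {1: np.array([r2, 0, 0]), 2: np.array([0, r2, 0]),
     3: np.array([0, -r2, 1]), 4: np.array([r2, 0, 1])}

# Energies via (1.16): eps_m = |s_m|^2
energies = {m: float(v @ v) for m, v in s.items()}
print(energies)   # {1: 2.0, 2: 2.0, 3: 3.0, 4: 3.0}

# Pairwise distances d_mn = |s_m - s_n| as listed in (1.17)
d = {(m, n): float(np.linalg.norm(s[m] - s[n]))
     for m in s for n in s if m < n}
print(d[(1, 2)], d[(2, 3)])   # 2.0 3.0
```

Note that the energies agree with the waveform integrals, e.g. ε₃ = ∫ s₃²(t) dt = 3 = |s₃|².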

[Fig. 1.1a here: plots of the signal waveforms s₁(t), …, s₄(t) of (1.7)]

[Fig. 1.1b here: plots of the orthonormal basis functions φ₁(t), …, φ₃(t)]

Fig. 1.1 Signal waveforms and orthonormal basis functions for Example 1.1.

We do not always have to go to such lengths to find orthonormal basis functions with the Gram-Schmidt orthogonalization procedure. As an alternative, we can use intuition and eye inspection to arrive at a set of orthonormal basis functions. The rule is that orthonormal basis functions φ₁(t), …, φ_N(t) must satisfy three requirements:

1. φ₁(t), …, φ_N(t) should be mutually orthogonal, i.e.

   ∫ φ_i(t) φ_k(t) dt = 0 for i ≠ k , i = 1, …, N , k = 1, …, N   (1.18)

2. φ₁(t), …, φ_N(t) should have unit energy, i.e.

   ∫ φ_n²(t) dt = 1 , n = 1, …, N   (1.19)

3. φ₁(t), …, φ_N(t) should be able to represent (or span) all the signals in the set s₁(t), …, s_M(t) in an N-dimensional signal space; alternatively, we should be able to express every signal in the set s₁(t), …, s_M(t) in terms of φ₁(t), …, φ_N(t).

Bearing these requirements in mind, and particularly paying attention to the time slicing in the waveforms s₁(t), …, s₄(t), we can deduce an alternative set of orthonormal basis functions φ₁ᵃ(t), …, φ₃ᵃ(t), as shown in Fig. 1.3.


[Fig. 1.2 here: three-dimensional signal space diagram showing s₁, …, s₄ and the distances d₁₂, d₁₄, d₂₃, d₃₄]

Fig. 1.2 Signal space diagram of the signal waveforms s₁(t), …, s₄(t) in Example 1.1.

[Fig. 1.3 here: three unit-amplitude rectangular pulses on 0 ≤ t ≤ 1, 1 ≤ t ≤ 2 and 2 ≤ t ≤ 3]

Fig. 1.3 Alternative set of orthonormal basis functions φ₁ᵃ(t), …, φ₃ᵃ(t) for Example 1.1.
Before we start to investigate different modulation types, it is instructive to show the simplified block diagram of the modulator we are referring to. This model is shown in Fig. 1.5, including the preceding analogue-to-digital conversion stage. According to this figure, modulation means taking the unmodulated input of binary waveforms in groups of k = log₂ M and converting (usually called mapping) them into the modulated output of M-ary signals s₁(t), …, s_M(t). The way this modulator functions corresponds to the different types of modulation examined next. Note that except for the case M = 2, the symbol duration T always exceeds the binary waveform duration T_b. For this reason, strictly speaking, the energy of a symbol will increase as M is raised.

The relevant output waveforms from the blocks of Fig. 1.5 can be found in Fig. 1.6, for the sample cases of ASK and PSK at M = 4. Note that the multilevel signal representation is modulation free, or modulationless. Hence the digital modulation block is assumed to be driven by a multilevel signal.
[Fig. 1.5 here: analogue message m(t) → sampling and quantizing → binary waveform assignment (duration T_b) → binary-to-M-ary level conversion → modulator (mapper) → modulated M-ary output s₁(t), …, s_M(t), with symbol duration T = T_b × log₂(M)]

Fig. 1.5 Simplified block diagram of the digital modulator, including the preceding analogue-to-digital conversion.

[Fig. 1.6 here: for the analogue message m(t), the unipolar binary waveforms (duration T_b), the multilevel signals (duration T = 2T_b), and the corresponding ASK waveforms (amplitudes ±A, ±3A) and PSK waveforms for M = 4]

Fig. 1.6 Waveform representation of the outputs of the blocks in Fig. 1.5.

From Fig. 1.6 it is apparent that, compared with ASK, PSK demands more bandwidth, since it slices the symbol duration into two halves.

Below we treat ASK, PSK and QAM in baseband form, since a sinusoidal carrier merely shifts the frequency spectrum of the modulated signal and has no effect on the probability of error analysis.

2. Amplitude Shift Keying (ASK) or Pulse Amplitude Modulation (PAM)

Now we examine different modulation types from the perspective of signal dimensionality. The first and simplest is ASK (or PAM). In this modulation type N = 1, so we say that ASK is one dimensional; thus we need a single orthonormal basis function φ(t), which is usually drawn as a horizontal line along which the signal vectors are placed according to their respective energies. In this sense ASK signals are differentiated by energy differences (essentially amplitude differences) and by their orientation to the left (negative pulse) or to the right (positive pulse).

Taking a symbol duration of T, an ASK example with M = 4 is given in Fig. 2.1.

[Fig. 2.1 here: waveforms s₁(t) = −3A, s₂(t) = −A, s₃(t) = A, s₄(t) = 3A on 0 ≤ t ≤ T; basis function φ(t) = 1/√T on 0 ≤ t ≤ T; one-dimensional signal space with s₁, …, s₄ at −3A√T, −A√T, A√T, 3A√T]

Fig. 2.1 An example of ASK signals, the orthonormal basis function and the signal space diagram, for M = 4.

By looking at Fig. 2.1 and using the formulations of Section 1, we can write the following expressions for the signal waveforms s₁(t), …, s₄(t), the signal vectors s₁, …, s₄ and their respective energies. Note that s₁(t) and s₄(t) have greater energies than s₂(t) and s₃(t):

   s₁(t) = −3A , s₂(t) = −A , s₃(t) = A , s₄(t) = 3A ,  φ(t) = 1/√T , 0 ≤ t ≤ T
   s₁(t) = −3A√T φ(t) , s₂(t) = −A√T φ(t) , s₃(t) = A√T φ(t) , s₄(t) = 3A√T φ(t)
   s₁ = [−3A√T] , s₂ = [−A√T] , s₃ = [A√T] , s₄ = [3A√T]
   ε₁ = |s₁|² = 9A²T , ε₂ = |s₂|² = A²T , ε₃ = |s₃|² = A²T , ε₄ = |s₄|² = 9A²T   (2.1)
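The projections and energies in (2.1) can be confirmed numerically; A and T below are arbitrary illustrative values, not taken from the notes.

```python
import numpy as np

A, T = 1.0, 2.0                 # arbitrary illustrative values
dt = T / 10_000
t = np.arange(0, T, dt)
phi = np.full_like(t, 1 / np.sqrt(T))    # phi(t) = 1/sqrt(T) on [0, T)

amplitudes = [-3 * A, -A, A, 3 * A]      # s_1(t) ... s_4(t) of (2.1)
results = []
for a in amplitudes:
    s = np.full_like(t, a)
    coeff = np.sum(s * phi) * dt         # projection onto phi(t): a * sqrt(T)
    energy = np.sum(s**2) * dt           # waveform energy: a^2 * T
    results.append((coeff, energy))
    print(round(coeff, 4), round(energy, 4))
```

In each case coeff² equals the energy, in agreement with (1.16).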

Mostly to conserve bandwidth and to avoid the intersymbol interference that would occur during transmission, we do not use rectangular waveforms, but instead a shaping waveform g_T(t); we may then set φ(t) = g_T(t). Examples of a rectangular and a sinusoidally shaped g_T(t) are given below:

   Rectangular shaping waveform:  g_TR(t) = { 1/√T for 0 ≤ t ≤ T ; 0 otherwise }

   Sinusoidal shaping waveform:  g_TS(t) = { √(8/(3T)) sin²(πt/T) for 0 ≤ t ≤ T ; 0 otherwise }   (2.2)

In Figs. 2.2 and 2.3 we show the respective time and frequency plots of the rectangular and sinusoidal shaping waveforms of (2.2).
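Both pulses in (2.2) are normalized to unit energy, which can be confirmed by numerical integration (the sampling step below is our own choice):

```python
import numpy as np

T = 1.0
dt = T / 100_000
t = np.arange(0, T, dt)

g_rect = np.full_like(t, 1 / np.sqrt(T))                    # g_TR(t) of (2.2)
g_sin = np.sqrt(8 / (3 * T)) * np.sin(np.pi * t / T)**2     # g_TS(t) of (2.2)

energies = [float(np.sum(g**2) * dt) for g in (g_rect, g_sin)]
print(energies)   # both approximately 1.0
```

The factor √(8/(3T)) follows from ∫₀ᵀ sin⁴(πt/T) dt = 3T/8.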

[Fig. 2.2 here: time plots of g_TR(t) and g_TS(t) ∝ sin²(πt/T)]

Fig. 2.2 Time plots of the rectangular and sinusoidal shaping waveforms.

[Fig. 2.3 here: magnitude spectra G_T(f) of the rectangular and sin²(πt/T) shaping waveforms]

Fig. 2.3 Frequency spectrum plots of the rectangular and sinusoidal shaping waveforms.

3. Phase Shift Keying (PSK) and Quadrature Amplitude Modulation (QAM) – Two-Dimensional Signals

To establish two dimensions, it is natural to use two (orthogonal) axes, which will be φ₁(t) and φ₂(t). Taking a symbol duration of T, it is possible to generate two types of orthogonal sets, φ₁(t), φ₂(t) and φ₁ᵃ(t), φ₂ᵃ(t), as illustrated in Fig. 3.1.

[Fig. 3.1 here: φ₁(t) = +Ā on [0, T/2] and −Ā on [T/2, T]; φ₂(t) = Ā on [0, T]; φ₁ᵃ(t) = Aᵃ on [0, T/2]; φ₂ᵃ(t) = Aᵃ on [T/2, T]]

Fig. 3.1 Two sets of orthogonal functions: φ₁(t), φ₂(t) and φ₁ᵃ(t), φ₂ᵃ(t).

We can write the mathematical expressions of φ₁(t), φ₂(t) and φ₁ᵃ(t), φ₂ᵃ(t) as

   φ₁(t) = +Ā for 0 ≤ t ≤ T/2 , φ₁(t) = −Ā for T/2 ≤ t ≤ T , φ₁(t) = 0 for t < 0 or t > T
   φ₂(t) = +Ā for 0 ≤ t ≤ T , φ₂(t) = 0 for t < 0 or t > T
   φ₁ᵃ(t) = +Aᵃ for 0 ≤ t ≤ T/2 , φ₁ᵃ(t) = 0 for t < 0 or t > T/2
   φ₂ᵃ(t) = +Aᵃ for T/2 ≤ t ≤ T , φ₂ᵃ(t) = 0 for t < T/2 or t > T   (3.1)

It is quite easy to see that

   ∫ φ₁(t) φ₂(t) dt = 0 ,  ∫ φ₁ᵃ(t) φ₂ᵃ(t) dt = 0   (3.2)

So both pairs φ₁(t), φ₂(t) and φ₁ᵃ(t), φ₂ᵃ(t) are orthogonal among themselves. Note that φ₁ᵃ(t) and φ₂ᵃ(t) achieve orthogonality by not overlapping along the time axis, while φ₁(t) and φ₂(t) do overlap. To establish that φ₁(t), φ₂(t) and φ₁ᵃ(t), φ₂ᵃ(t) are orthonormal as well, we demand that their energies are unity:

   ∫ φ₁²(t) dt = 1 , ∫ φ₂²(t) dt = 1 , ∫ [φ₁ᵃ(t)]² dt = 1 , ∫ [φ₂ᵃ(t)]² dt = 1   (3.3)

From the evaluations in (3.3), we get Ā = 1/√T and Aᵃ = √(2/T). So now φ₁(t), φ₂(t) and φ₁ᵃ(t), φ₂ᵃ(t) are both orthogonal and orthonormal.

Now we choose two signal waveforms s₁(t) and s₂(t), similar in shape to φ₁(t) and φ₂(t), as displayed in Fig. 3.2.

[Fig. 3.2 here: s₁(t) = A on [0, T/2] and −A on [T/2, T]; s₂(t) = A on [0, T]]

Fig. 3.2 Two signal waveforms s₁(t) and s₂(t).

If φ₁ᵃ(t) and φ₂ᵃ(t) are selected to represent s₁(t) and s₂(t), then it is possible to write s₁(t) and s₂(t) in terms of the orthonormal basis functions φ₁ᵃ(t) and φ₂ᵃ(t) as

   s₁(t) = A for 0 ≤ t ≤ T/2 , s₁(t) = −A for T/2 ≤ t ≤ T , s₁(t) = 0 for t < 0 or t > T
   s₂(t) = A for 0 ≤ t ≤ T , s₂(t) = 0 for t < 0 or t > T
   s₁(t) = A√(T/2) φ₁ᵃ(t) − A√(T/2) φ₂ᵃ(t) for 0 ≤ t ≤ T , s₁(t) = 0 otherwise
   s₂(t) = A√(T/2) φ₁ᵃ(t) + A√(T/2) φ₂ᵃ(t) for 0 ≤ t ≤ T , s₂(t) = 0 otherwise
   s₁ = [s₁₁, s₁₂] = [A√(T/2), −A√(T/2)] ,  s₂ = [s₂₁, s₂₂] = [A√(T/2), A√(T/2)]   (3.4)

Note that on the first two lines of (3.4) we have intentionally written s₁(t) and s₂(t) as they are seen directly from their waveforms (without φ₁ᵃ(t) and φ₂ᵃ(t)). On the third and fourth lines of (3.4), where s₁(t) and s₂(t) are expressed in terms of φ₁ᵃ(t) and φ₂ᵃ(t), we do not actually need the time-range specifications at the end of the lines, since these time ranges are already built into φ₁ᵃ(t) and φ₂ᵃ(t). On the last line of (3.4) we have the vectorial representations s₁ = [s₁₁, s₁₂] and s₂ = [s₂₁, s₂₂], whose coefficients can be calculated either using the integral in (1.14) or by eye inspection. With these coefficients, it is now possible to construct the signal space diagram shown in Fig. 3.3.
[Fig. 3.3 here: s₁ and s₂ plotted in the (φ₁ᵃ, φ₂ᵃ) plane, both of length A√T, 90° apart, with components ±A√(T/2)]

Fig. 3.3 Signal space diagram for s₁(t) and s₂(t) of Fig. 3.2.

As seen from Fig. 3.3, the two signal vectors are placed at 90° with respect to each other, which is not surprising since s₁(t) and s₂(t) are orthogonal. Additionally, the angle each of s₁(t) and s₂(t) makes with φ₁ᵃ(t) is 45°, since |s₁₁| = |s₁₂| = |s₂₁| = |s₂₂| = A√(T/2). Furthermore, the energies in s₁(t) and s₂(t) can be calculated from the lengths of s₁ and s₂ as well as from the time waveforms, as shown in (3.5); of course, in both cases we arrive at identical results:

   ε₁ = |s₁|² = s₁₁² + s₁₂² = ∫ s₁²(t) dt = A²T
   ε₂ = |s₂|² = s₂₁² + s₂₂² = ∫ s₂²(t) dt = A²T ,  ε₁ = ε₂ = ε_s   (3.5)

The time waveforms s₁(t) and s₂(t) of Fig. 3.2 and the associated signal space diagram in Fig. 3.3 constitute what is called PSK, since here the energies of the signals (thus the lengths of the signal vectors) are the same (denoted commonly as ε_s), and the only differentiating factor is the angular location, corresponding to the phases of s₁(t) and s₂(t). Looking at Fig. 3.3, we see that the two-dimensional signal space is not used efficiently: we can place two more signals, s₃(t) and s₄(t), such that s₃(t) = −s₁(t) and s₄(t) = −s₂(t); hence the vectors s₃ and s₄ are rotated by 180° with respect to s₁ and s₂. The new signal space diagram comprising s₁, s₂, s₃ and s₄ is given in Fig. 3.4. Note that here we have reverted from φ₁ᵃ(t), φ₂ᵃ(t) to φ₁(t), φ₂(t).
[Fig. 3.4 here: four equal-length vectors s₁, …, s₄ at 90° spacing in the (φ₁, φ₂) plane, with a circle through their end points]

Fig. 3.4 Signal space diagram for the 4-PSK signals s₁(t), …, s₄(t).

In Fig. 3.4, M = 4 (corresponding to 4-level signalling); this PSK scheme is also known as quadrature PSK. In the signal space diagram of Fig. 3.3 we had M = 2 (binary). Since in PSK all signal vectors are of the same length (and the same energy), it is customary to draw a circle passing through the signal end points, as indicated in Fig. 3.4. It is of course possible to add more signals to the two-dimensional signal space of PSK. For instance, the case of M = 8 is shown in Fig. 3.5, where we have removed the connections of the signal vector ends to the origin for clarity. It is also possible to go to higher M values. This way, the appearance of the signal space diagram becomes more like a constellation of stars; for this reason, the signal space diagram is also called a constellation diagram. For a general M, the m-th signal waveform s_m(t) and signal vector s_m from the signal set s₁(t), …, s_M(t) can be formulated as

   s_m(t) = A√T [C_c φ₁(t) + C_s φ₂(t)] ,  C_c = cos(2π(m−1)/M) , C_s = sin(2π(m−1)/M)

   φ₁(t) = { √(2/T) for 0 ≤ t ≤ T/2 ; 0 elsewhere } ,  φ₂(t) = { √(2/T) for T/2 ≤ t ≤ T ; 0 elsewhere }

   s_m = [A√T cos(2π(m−1)/M) , A√T sin(2π(m−1)/M)] , m = 1, …, M   (3.6)

Note that the orthonormal basis functions defined in (3.6) are the same as φ₁ᵃ(t) and φ₂ᵃ(t) of Fig. 3.1.
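The vector form of (3.6) amounts to placing M points uniformly on a circle of radius A√T; a minimal sketch (the function name is our own):

```python
import numpy as np

def psk_constellation(M, A=1.0, T=1.0):
    """Signal vectors s_m of (3.6): M points equally spaced in angle
    on a circle of radius A*sqrt(T)."""
    theta = 2 * np.pi * np.arange(M) / M            # angles 2*pi*(m-1)/M
    return A * np.sqrt(T) * np.column_stack([np.cos(theta), np.sin(theta)])

pts = psk_constellation(8)
print(np.round(np.linalg.norm(pts, axis=1), 6))     # all radii equal: PSK energies are equal
```

The distance between adjacent points is 2A√T sin(π/M), which shrinks as M grows, anticipating the probability-of-error discussion later.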
s  (t)
3 2
s s
4
2

s s
5 1
 (t)
1
s  = AT
1

s s
8
6
s
7

Fig. 3.5 Signal space diagram for 8 PSK signals s1  t  s8  t  .

 8

As an alternative to the time-sliced version of representing the signal vectors, we can use the complex plane for the PSK (and also QAM) constellation, such that the positions of the signal vectors are represented on a complex plane, since a complex plane is two dimensional as well. With this arrangement, φ₁(t) is replaced by the real part of a complex exponential and φ₂(t) by the imaginary part of the same exponential. Thus the signal vector s_m of M-ary PSK becomes

   s_m = A√T exp(2πj(m−1)/M) , m = 1, …, M   (3.7)

It is easy to see that even with increasing M, the two-dimensional signal space is still not efficiently used. One solution is to create energy variations as well as phase variations in the signal vectors, thereby achieving a combination of ASK and PSK. This combination is called Quadrature Amplitude Modulation (QAM). An example of 16-QAM is shown in Fig. 3.6. As seen there, QAM constellations are usually arranged in the form of rectangles; although from the probability of error viewpoint this is not the best placement of signal vectors, there is little difference between the rectangular arrangements and the optimum ones. Fig. 3.6 shows that QAM is indeed a combination of ASK and PSK. For instance, in the constellation of Fig. 3.6, the collection of signal vectors s₇, s₆, s₃ and s₂ constitutes 4-ASK, whereas the collection s₄, s₅, s₁₀ and s₁₅ represents 4-PSK. It is equally possible to identify other similar groupings.
[Fig. 3.6 here: 16 points on a square grid in the (φ₁, φ₂) plane; the row s₇, s₆, s₃, s₂ forms 4-ASK and the set s₄, s₅, s₁₀, s₁₅ forms 4-PSK]

Fig. 3.6 Rectangular signal constellation for 16-QAM signals.

Arrangements other than the rectangular type are also possible in QAM. For instance, placing the signal vectors on concentric circles is another option, as illustrated in Fig. 3.7. Of course, the objective here is to find the constellation (distribution of signal vectors) that gives the maximum distances between vector ends for the same total or average energy, since it is this criterion that determines the probability of error performance.

[Fig. 3.7 here: 8-QAM points s₁, …, s₈ on two concentric circles in the (φ₁, φ₂) plane, with the distance d₄₈ marked]

Fig. 3.7 Circular signal constellation for 8-QAM signals.

In practice, QAM is mostly used in radio links.

4. Multidimensional Signals

4.1 Frequency Shift Keying (FSK)

In the context of multidimensional signals, we initially study Frequency Shift Keying (FSK). Although the other modulation types can be represented both in baseband (without a carrier) and in bandpass (with a carrier), FSK can only be written in terms of sinusoidal carriers. Assume that we choose M = 2 and assign frequencies f₁ and f₂ to our message signals s₁(t) and s₂(t); then

   s₁(t) = √2 A cos(2πf₁t) , 0 ≤ t ≤ T_b , T_b: binary waveform duration
   s₂(t) = √2 A cos(2πf₂t) , 0 ≤ t ≤ T_b ,  ε_b = ∫₀^Tb s₁²(t) dt = ∫₀^Tb s₂²(t) dt = A²T_b   (4.1)

Note that in writing these signal waveforms we have used slightly different notation than for ASK and PSK. By setting Δf = f_m − f_{m−1} and adopting a starting frequency f_c such that f_m = f_c + (m − 1)Δf, we can write the m-th signal as

   s_m(t) = √2 A cos(2πf_c t + 2πΔf(m − 1)t) , 0 ≤ t ≤ T

   ε_s = ∫₀^T s_m²(t) dt = A²T = k ε_b = ε_b log₂ M , m = 1, …, M , T = kT_b   (4.2)

It is interesting to examine the variation of Δf (the frequency separation) against T (the symbol duration). To this end we define the correlation coefficient ρ_mn as follows:

   ρ_mn = (1/ε_s) ∫₀^T s_m(t) s_n(t) dt
        = (1/ε_s) ∫₀^T 2A² cos(2πf_c t + 2πΔf(m−1)t) cos(2πf_c t + 2πΔf(n−1)t) dt
        = (1/T) ∫₀^T cos(2πΔf(m−n)t) dt + (1/T) ∫₀^T cos(4πf_c t + 2πΔf(m+n−2)t) dt
        ≈ sin(2πΔf(m−n)T) / (2πΔf(m−n)T)   (4.3)

where on the second line we have substituted from (4.2), and the approximation on the last line (dropping the second, double-frequency integral) is due to f_c ≫ 1/T. A plot of ρ_mn against Δf is given in Fig. 4.1. As seen from this figure, ρ_mn passes through zero at integer multiples of 1/(2T) (taking |m − n| = 1). This means that at these values of Δf the signals s_m(t) and s_n(t) are orthogonal. The minimum value of ρ_mn is −0.217, reached at Δf = 0.715/T. These points are important and form the basis of Orthogonal Frequency Division Multiplexing (OFDM) and Minimum Shift Keying (MSK).
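The zero crossings and the minimum of (4.3) can be located numerically (the grid resolution is our own choice):

```python
import numpy as np

T = 1.0
df = np.linspace(1e-9, 2 / T, 200_001)   # frequency separations up to 2/T
x = 2 * np.pi * df * T                    # argument of (4.3) with |m - n| = 1
rho = np.sin(x) / x                       # correlation coefficient rho_mn

i = int(np.argmin(rho))
print(round(float(rho[i]), 3), round(float(df[i] * T), 3))   # -0.217 0.715
```

The first zero crossing at Δf = 1/(2T) is the minimum tone spacing that keeps the two FSK signals orthogonal, which is exactly the spacing MSK uses.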
[Fig. 4.1 here: ρ_mn versus Δf, starting at 1 for Δf = 0, crossing zero at multiples of 1/(2T), with minimum −0.217 near Δf = 0.715/T]

Fig. 4.1 The variation of the FSK correlation coefficient ρ_mn against Δf.

In ordinary FSK, a phase discontinuity exists at the transitions from s₁(t) to s₂(t), whereas MSK avoids it, as illustrated in Fig. 4.2.

[Fig. 4.2a here: FSK signal switching between s₁(t) and s₂(t), with phase discontinuities at t = T, 2T, 3T]

a) FSK signal with phase discontinuity.

[Fig. 4.2b here: MSK signal switching between f_i = f₁ and f_i = f₂ without phase discontinuity]

b) MSK signal (without phase discontinuity).

Fig. 4.2 FSK and MSK signals illustrating the existence and absence of phase discontinuity.

4.2 Orthogonal Signal Waveforms and Pulse Position Modulation (PPM)

Orthogonal signal waveforms s₁ʷ(t), …, s₄ʷ(t) and pulse position modulation signal waveforms s₁ᵖ(t), …, s₄ᵖ(t) are shown in Fig. 4.3.

[Fig. 4.3 here: first row, the orthogonal (Walsh) waveform set for M = 4, amplitudes ±A on [0, T]; second row, the PPM waveforms for M = 4, pulses of amplitude 2A on the quarter intervals [0, T/4], [T/4, T/2], [T/2, 3T/4], [3T/4, T]; third row, the orthonormal basis functions φ₁ᵖ(t), …, φ₄ᵖ(t) for the PPM waveforms, amplitude 2/√T on the same quarter intervals]

Fig. 4.3 Orthogonal signal waveform set (Walsh waveforms), pulse position modulation (PPM) waveforms, and the orthonormal basis functions for the PPM waveforms.

The orthogonal waveforms displayed on the first row of Fig. 4.3 are derived from Walsh sequences and hence named as such. In both cases of Fig. 4.3, M = N = 4; that is, the number of signals in the set equals the number of dimensions of the set. Therefore we need four orthonormal basis functions to represent s₁ʷ(t), …, s₄ʷ(t) and s₁ᵖ(t), …, s₄ᵖ(t). Those for the PPM set are given on the last row of Fig. 4.3. The PPM waveforms, their expansions in terms of the basis functions φ₁ᵖ(t), …, φ₄ᵖ(t), and their vectorial representations can be written as follows:

   s₁ᵖ(t) = { 2A for 0 ≤ t ≤ T/4 ; 0 otherwise } ,  s₂ᵖ(t) = { 2A for T/4 ≤ t ≤ T/2 ; 0 otherwise }
   s₃ᵖ(t) = { 2A for T/2 ≤ t ≤ 3T/4 ; 0 otherwise } ,  s₄ᵖ(t) = { 2A for 3T/4 ≤ t ≤ T ; 0 otherwise }

   φ₁ᵖ(t) = { 2/√T for 0 ≤ t ≤ T/4 ; 0 otherwise } ,  φ₂ᵖ(t) = { 2/√T for T/4 ≤ t ≤ T/2 ; 0 otherwise }
   φ₃ᵖ(t) = { 2/√T for T/2 ≤ t ≤ 3T/4 ; 0 otherwise } ,  φ₄ᵖ(t) = { 2/√T for 3T/4 ≤ t ≤ T ; 0 otherwise }

   s₁ᵖ(t) = A√T φ₁ᵖ(t) , s₂ᵖ(t) = A√T φ₂ᵖ(t) , s₃ᵖ(t) = A√T φ₃ᵖ(t) , s₄ᵖ(t) = A√T φ₄ᵖ(t)

   s₁ᵖ = [A√T, 0, 0, 0] , s₂ᵖ = [0, A√T, 0, 0] , s₃ᵖ = [0, 0, A√T, 0] , s₄ᵖ = [0, 0, 0, A√T]   (4.4)

It is clear from Fig. 4.3 and the expressions in (4.4) that as we go to higher dimensions, we slice the time axis more finely, which in turn increases the bandwidth requirement. So we can state the following relation:

   Number of dimensions N ∝ Bandwidth   (4.5)

From the above analysis, we summarize our findings as follows:

• ASK is one dimensional (N = 1), PSK and QAM are two dimensional (N = 2), whereas FSK can be multidimensional (N ≥ 2).
• ASK signals differ by energy. PSK signals show only angular variation and no energy variation. QAM signals show energy as well as angular variation.
• For the same symbol rate, i.e. the same symbol duration T, the bandwidth of ASK is less than the bandwidth of PSK and QAM.
• As the dimensionality N increases, the bandwidth requirement increases as well.

5. Detection of Signals in the Presence of Additive White Gaussian Noise – Correlators and Matched Filters

We assume that within a time interval 0 ≤ t ≤ T, our transmitter randomly sends one of the signals s₁(t), …, s_M(t), say s_m(t), and that in the communication channel only additive white Gaussian noise (AWGN) is added to the signal, so that the received signal r(t) is

   r(t) = s_m(t) + n(t) ,  S_n(f) = N₀/2 = σ_n²   (5.1)

where S_n(f) is the noise power spectral density function, and N₀ and σ_n² are the noise spectral density level and the noise variance. It is obvious that S_n(f) is independent of the frequency f, hence the "white" nature of the Gaussian noise.

Such a channel model is known as a (band-unlimited) AWGN channel and is depicted in Fig. 5.1. It is clear that in this channel there is no band limitation, which means that

   C(f) = 1 ,  c(t) = δ(t) ,  δ(t): time-domain delta function   (5.2)

[Fig. 5.1 here: transmitted signal s_m(t) passes through the channel C(f), c(t); noise n(t) is added; received signal r(t) = s_m(t) + n(t)]

Fig. 5.1 AWGN channel model.

The receiver has knowledge of the modulation type and the symbol duration T employed at the transmitter. Furthermore, the receiver also knows the set of signals s₁(t), …, s_M(t), i.e. the alphabet used by the transmitter. Finally, we assume that the receiver is able to extract the beginning of the time interval 0 ≤ t ≤ T, which is called synchronization. So the job of the receiver is to demodulate the incoming signal r(t) and decide correctly which s_m(t) was sent from the transmitter within the time interval 0 ≤ t ≤ T. Note that since we are dealing with a band-unlimited channel, it is sufficient to consider any one symbol interval; here we choose the interval 0 ≤ t ≤ T. To perform the demodulation tasks, we pass the received signal through a correlator, as shown in Fig. 5.2. The operations performed in the correlator of Fig. 5.2 are: feeding the received signal simultaneously to N branches; multiplying the received signal r(t) on each branch by one of the orthonormal basis functions φ₁(t), …, φ_N(t) (the same ones used at the transmitter to construct the signal s_m(t)); integrating the result over one symbol duration and sampling at the end of this duration; forming the vector array r by collecting the individual components r₁, …, r_N; and eventually sending r to a detector to decide which s_m(t) was sent. The operations carried out on the n-th branch of this correlator can be described mathematically as

   ∫₀^T r(t) φ_n(t) dt = ∫₀^T [s_m(t) + n(t)] φ_n(t) dt
   r_n = s_mn + n_n , n = 1, …, N
   s_mn = ∫₀^T s_m(t) φ_n(t) dt ,  n_n = ∫₀^T n(t) φ_n(t) dt   (5.3)

Note that the definition of s_mn is equivalent to the one in (1.14).

Fig. 5.2 Correlator type of demodulator.

Considering all the branches in the correlator of Fig. 5.2, we get

   r = s_m + n   (5.4)

(5.4) means the totality of the operations in the correlator can be treated with the row arrays r, s_m and n, where

   r = [r₁, …, r_n, …, r_N] , s_m = [s_m1, …, s_mn, …, s_mN] , n = [n₁, …, n_n, …, n_N]   (5.5)

It is important to point out that s_m is deterministic, in the sense that it takes one of the values from the set s₁, …, s_M, while n is random. The probability density function (pdf) for the amplitude distribution of one sample n_n of n is the same as that of the input noise, hence

   f(n_n) = (πN₀)^(−0.5) exp(−n_n²/N₀) ,  N₀ = 2σ_n² , n = 1, …, N   (5.6)

where N₀/2 is the noise spectral density and σ_n² is the variance. All samples n_n have zero mean and are uncorrelated, which means

   E[n_n] = ∫₀^T E[n(t)] φ_n(t) dt = 0
   E[n_n n_m] = ∫₀^T ∫₀^T E[n(t) n(τ)] φ_n(t) φ_m(τ) dt dτ
             = ∫₀^T ∫₀^T (N₀/2) δ(t − τ) φ_n(t) φ_m(τ) dt dτ
             = (N₀/2) ∫₀^T φ_n(t) φ_m(t) dt
             = (N₀/2) δ_mn ,  δ_mn = 0 if n ≠ m , δ_mn = 1 if n = m   (5.7)
As a consequence of (5.5) and (5.6)

f(n) = ∏_{n=1}^{N} f(n_n) = (1/(πN₀)^{N/2}) exp(−Σ_{n=1}^{N} n_n²/N₀)

E[r_n] = E[s_mn + n_n] = E[s_mn] + E[n_n] = s_mn + 0 = s_mn

f(r|s_m) = ∏_{n=1}^{N} f(r_n|s_mn) ,  m = 1 … M
f(r_n|s_mn) = (1/(πN₀)^{0.5}) exp(−(r_n − s_mn)²/N₀) ,  n = 1 … N

f(r|s_m) = (1/(πN₀)^{N/2}) exp(−Σ_{n=1}^{N} (r_n − s_mn)²/N₀)
         = (1/(πN₀)^{N/2}) exp(−|r − s_m|²/N₀) ,  m = 1 … M    (5.8)

The development in (5.8) means that (vectorwise) when noise n is added to the incoming signal s_m, the received signal r becomes a Gaussian random vector as well. This way the received signal inherits all properties of the noise, except that the previous zero mean is now shifted to s_m. In a way, this is like adding a DC shift (s_m) to an AC signal n. Note that in (5.5) the noise vector n is shown to be N dimensional. This is because the correlator of Fig. 5.2 takes the projections of the noise signal n(t) onto an N dimensional space. Prior to such an operation, the noise signal n(t) (i.e., the noise in nature) has an infinite number of dimensions.
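The statistics in (5.6) and (5.7) can be checked by Monte Carlo simulation. The sketch below (all parameter values assumed for illustration) projects white Gaussian noise onto two orthonormal basis functions and verifies that the resulting samples have variance N₀/2 and are uncorrelated:

```python
import numpy as np

# Monte Carlo check of (5.6)-(5.7): the projected noise samples
# n_n = integral n(t)*phi_n(t) dt are zero-mean, have variance N0/2,
# and are uncorrelated across branches. Parameter values are assumed.
rng = np.random.default_rng(0)
T, fs, N0 = 1.0, 1000, 0.4
t = np.arange(0, T, 1 / fs)
dt = 1 / fs

phi1 = np.where(t < T / 2, np.sqrt(2 / T), 0.0)
phi2 = np.where(t >= T / 2, np.sqrt(2 / T), 0.0)

trials = 5000
# Sampled white noise with two-sided density N0/2: per-sample variance N0/2*fs
n_t = rng.normal(0.0, np.sqrt(N0 / 2 * fs), size=(trials, t.size))
n1 = n_t @ phi1 * dt       # branch-1 noise sample, one per trial
n2 = n_t @ phi2 * dt       # branch-2 noise sample, one per trial

print(round(float(np.var(n1)), 3))        # close to N0/2 = 0.2
print(round(float(np.mean(n1 * n2)), 3))  # close to 0 (uncorrelated)
```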

It is also possible to perform the demodulation via a matched filter type of demodulator. This is shown in Fig. 5.3. As seen in Fig. 5.3, the previous acts of multiplying the received signal by the orthonormal basis functions and integrating the resulting product are concentrated in the boxes known as matched filters (MFs). This can be proven mathematically by writing the output on the n-th branch of the MF as follows.
y_n(t) = ∫₀ᵗ r(τ) h_n(t − τ) dτ
       = ∫₀ᵗ r(τ) φ_n(T − t + τ) dτ ,  n = 1 … N    (5.9)

After sampling at t = T, we obtain

y_n(T) = ∫₀ᵀ r(τ) φ_n(τ) dτ ,  n = 1 … N    (5.10)

(5.10) will deliver the same result as the first line of the integral in (5.3).

Fig. 5.3 Matched Filter (MF) type of demodulator.

It is instructive to examine the time domain properties of the MF. The impulse response of a filter matched to an input signal s(t) is given by h(t) = s(T − t). Then the response of such a filter would be

y(t) = ∫₀ᵗ s(τ) h(t − τ) dτ = ∫₀ᵗ s(τ) s(T − t + τ) dτ    (5.11)

So the output from the matched filter can be interpreted as the time autocorrelation function of the input signal s(t). An example input and output of the MF are given in Fig. 5.4. As seen from Fig. 5.4c, the matched filter response inside the convolution integral becomes oriented in the same direction as the input, so that maximum similarity (correlation) is established at the output at the instant t = T.
a) Block diagram of matched filter (MF)
b) Orientation of input signal to matched filter (MF)
c) Output from matched filter (MF) via convolution

Fig. 5.4 Block diagram, orientation of input signal and obtaining output signal from MF via
convolution.

To find the output of the MF for the input given in Fig. 5.4, we need the mathematical expressions of s(τ) and s(T − t + τ) appearing in the integration of (5.11). These are

s(τ) = Aτ/T ,  s(T − t + τ) = A(T − t + τ)/T    (5.12)

Again looking at Fig. 5.4c, we identify four different regions of integration for the expression in (5.11), which are

y(t):
  y₁(t) = 0 ,  t < 0
  y₂(t) = ∫₀ᵗ s(τ) s(T − t + τ) dτ ,  0 ≤ t ≤ T
  y₃(t) = ∫_{t−T}^{T} s(τ) s(T − t + τ) dτ ,  T ≤ t ≤ 2T
  y₄(t) = 0 ,  t > 2T    (5.13)

Note that on the first and last lines of (5.13) the result is zero because there is no overlap between s(τ) and s(T − t + τ), while on the second and third lines the integrand is the same, as expected, but the integration limits are adjusted according to the regions of overlap. After using (5.12) in (5.13), we get


 y1 t   0 t 0



 A2t 3 A2t 2

 y t    2  0 t T
 2 6T 2T
y t   
 (5.14)

 A2
3 A2t 5 A2T

 y3 t   t  T    T  t  2T
 6T 2 2 6


 y4 t   0

 t  2T

For the results (meaning the second and third lines) in (5.14), the tests conducted at the check points t = 0, t = T and t = 2T are given in (5.15). Note that these tests, of course, do not guarantee the absolute correctness of the formulations given for y₂(t) and y₃(t) in (5.14). We show the plot of y(t) in Fig. 5.5.

y₂(t = 0) = −A²(0)³/(6T²) + A²(0)²/(2T) = 0    Test for t = 0: OK
y₂(t = T) = −A²T/6 + A²T/2 = A²T/3 = ∫₀ᵀ s²(t) dt = ε_s    Test for t = T: OK
y₃(t = T) = 0 − A²T/2 + 5A²T/6 = A²T/3 = ε_s    Test for t = T: OK
y₃(t = 2T) = A²T/6 − A²T + 5A²T/6 = 0    Test for t = 2T: OK
(5.15)
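The piecewise results in (5.14) and the checks in (5.15) can also be verified numerically. The sketch below (A, T and the sampling rate are assumed values) convolves the ramp input of Fig. 5.4 with its matched filter and confirms that the output peaks at t = T with value ε_s = A²T/3:

```python
import numpy as np

# Numerical check of (5.14)-(5.15): convolve the ramp s(t) = A*t/T with its
# matched filter h(t) = s(T - t) and locate the peak of the output.
A, T, fs = 1.0, 1.0, 2000       # assumed amplitude, duration, sampling rate
dt = 1 / fs
t = np.arange(0, T, dt)
s = A * t / T                   # ramp input of Fig. 5.4
h = s[::-1]                     # h(t) = s(T - t)

y = np.convolve(s, h) * dt      # y(t) on 0 <= t < 2T
eps_s = A**2 * T / 3            # energy of the ramp

peak_index = int(np.argmax(y))
print(round(peak_index * dt, 3))       # peak occurs at t close to T
print(round(float(y[peak_index]), 4))  # peak value close to eps_s = A^2*T/3
```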

(Plot: the MF output y(t), composed of the segments y₁(t) … y₄(t), versus t in units of T; the curve rises from zero, peaks at ε_s at t = T, and returns to zero at t = 2T.)

Fig. 5.5 The output from MF, when the input is as shown in Fig. 5.4.

As seen from Fig. 5.5, the peak of y(t) occurs at t = T. This also explains why we have chosen the sampling instant t = T for the correlator in Fig. 5.2 and the matched filter in Fig. 5.3. It is important to note that the output of the MF comes out in units of energy, whereas we expect it to be in units of voltage (or current). To correct this, we have to divide the output of the MF by the square root of the energy of the input.

Finally we examine the frequency domain interpretation of the MF. To this end we assume that for a signal s(t) the time response of the matched filter is given by h(t) = s(T − t); thus the Fourier transform of s(T − t) will be

H(f) = ∫₀ᵀ s(T − t) exp(−2πjft) dt
     = [∫₀ᵀ s(τ) exp(2πjfτ) dτ] exp(−2πjfT)
     = S*(f) exp(−2πjfT)    (5.16)

The last line in (5.16) means that the frequency response of the MF is equal to the complex conjugate of the spectrum of the input signal multiplied by the phase factor exp(−2πjfT), which represents the time delay of T in s(T − t). The output from the MF will then be

y(t) = ∫_{−∞}^{∞} Y(f) exp(2πjft) df
     = ∫_{−∞}^{∞} |S(f)|² exp(−2πjfT) exp(2πjft) df

y(t = T) = ∫_{−∞}^{∞} |S(f)|² df = ∫_{−∞}^{∞} s²(t) dt = ε_s    (5.17)

where on the last line we have taken into account the sampling at the instant t = T. Then we have used Parseval's relation to establish the energy equivalence of a (time limited) signal along the time and frequency axes. Since it is a bit awkward to have the output of the MF in units of energy, usually we scale the response of the MF by the square root of the energy. Note that such a scaling is already accounted for in the orthonormalized functions φ₁(t) … φ_N(t).

The last line of (5.17) gives the signal output as an amplitude, thus its square will give the output power, that is

P_s = y²(T) = ε_s²    (5.18)

The noise with a spectral density of S_n(f) = N₀/2, when fed to an MF whose frequency response is H(f) = S*(f) exp(−2πjfT), will deliver the output spectral density

S₀(f) = |H(f)|² S_n(f) = |S(f)|² N₀/2    (5.19)

As a result, the noise power at the output of the MF will be

P_n = ∫_{−∞}^{∞} S₀(f) df = (N₀/2) ∫_{−∞}^{∞} |S(f)|² df = ε_s N₀/2    (5.20)

By using (5.18) and (5.20), we can calculate the signal to noise ratio (SNR) at the output as follows

SNR = P_s / P_n = ε_s² / (ε_s N₀/2) = 2ε_s / N₀    (5.21)
6. Optimum Detector
By an optimum detector, we mean a detector that makes the best use of the received (statistical) information and establishes a correct decision as far as possible. As seen from Figs. 5.2 and 5.3, the received vector to be used by the optimum detector is r = [r₁ … r_N]. From (5.8), we know that r is a Gaussian random vector (a property inherited from the noise) with a mean of s_m (a property inherited from the transmitted signal). Vectorwise r = s_m + n. So if the possible number of transmitted signals was M = 4 and the dimensionality of the signal space was N = 3, then the appearance of the signal space diagram would be something like that shown in Fig. 6.1, assuming s₁(t) was transmitted. Here the cloud around each signalling point represents the spherical (since N = 3) noise observed after many receptions. At a given instant of time the received vector r would be as shown.

(Diagram: signal space spanned by φ₁(t), φ₂(t), φ₃(t) with signal points s₁ … s₄, a noise cloud around each point, and a received vector r = s₁ + n.)

Fig. 6.1 The appearance of signal space after AWGN channel.

Now we aim for an optimum detector that will make a decision based on the computation of the posterior probability defined as

P[signal s_m was transmitted given received signal vector r] = P(s_m|r)    (6.1)

Our criterion will be to find the value of m that maximizes P(s_m|r) as m ranges over m = 1 … M. Upon finding the m value that maximizes P(s_m|r), we arrive at the decision that it is most likely that this particular s_m was transmitted. So our optimum decision rule boils down to evaluating P(s_m|r), and is named the maximum a posteriori probability (MAP) criterion.

Using Bayes' rule, we can express P(s_m|r) as

P(s_m|r) = f(r|s_m) P(s_m) / f(r)    (6.2)
f r 

where f(r|s_m) is the conditional pdf of r given that s_m was transmitted, and P(s_m) is the probability that signal s_m was transmitted. f(r) in the denominator of (6.2) is the pdf of the vector r and is given by the following sum

f(r) = Σ_{m=1}^{M} f(r|s_m) P(s_m)    (6.3)

We can take the formulation of f(r|s_m) from (5.8), but even then it is not possible to arrive at a simplified expression for P(s_m|r), since the individual probabilities of the signals sent from the transmitter may differ. If however the transmitter sends all s_m signals, m = 1 … M, with equal probability, then

P(s_m) = 1/M    (6.4)

As a result

Max[P(s_m|r)] = Max[ f(r|s_m) / Σ_{m=1}^{M} f(r|s_m) ] = Max[f(r|s_m)]    (6.5)

The reason that we have been able to write the last expression in (6.5) is that the sum in the denominator of the middle expression remains the same whichever m is selected. Therefore this sum has no role in the determination of Max[P(s_m|r)]. So finding Max[P(s_m|r)] is equivalent to finding Max[f(r|s_m)]; such a reduced decision strategy is called the maximum likelihood (ML) criterion. From (5.8) we see that f(r|s_m) contains a Gaussian exponential, thus it may be easier to work with log_e (denoted by ln), hence

Max[ln f(r|s_m)] = Max[ −(N/2) ln(πN₀) − (1/N₀) Σ_{n=1}^{N} (r_n − s_mn)² ]    (6.6)

We note that terms that do not contain the index m are irrelevant to the maximization; therefore, we take the last term in (6.6) and turn it into a distance metrics, D(r, s_m)

D(r, s_m) = Σ_{n=1}^{N} (r_n − s_mn)²    (6.7)
Because of the minus sign in front of the sum in (6.6), seeking Max[P(s_m|r)] will now be equivalent to seeking Min[D(r, s_m)]. As the name implies, and as also seen from (6.6), when the index m is run from 1 to M, the distance metrics D(r, s_m) calculates one by one the distances from all signals that are likely to be sent by the transmitter to the received vector r. In the end, by selecting Min[D(r, s₁), D(r, s₂), … , D(r, s_M)], we shall have arrived at the optimum decision based on the MAP criterion. Such an operation is carried out for the sample constellation of Fig. 6.1 and illustrated in Fig. 6.2.

(Diagram: the constellation of Fig. 6.1 with the distances D(r, s₁) … D(r, s₄) drawn from the received vector r to each signal point; D(r, s₁) = Min[D(r, s_m)].)
Fig. 6.2 Calculation of distance metrics for the sample constellation of Fig. 6.1.

As clearly seen from Fig. 6.2, this operation will definitely lead to the correct decision, so long as the noise n added to s₁ (remember that we have already assumed that s₁(t) was sent from the transmitter) is at the amplitude and angle shown.

Expanding (6.7), we get

D(r, s_m) = Σ_{n=1}^{N} r_n² − 2 Σ_{n=1}^{N} r_n s_mn + Σ_{n=1}^{N} s_mn²
          = |r|² − 2 r·s_m + |s_m|²
D_a(r, s_m) = −2 r·s_m + |s_m|² ,  C(r, s_m) = 2 r·s_m − |s_m|²    (6.8)

On the second line of (6.8), we have reverted to vectorial notation. On the third line, in the definition of D_a(r, s_m), we have dropped |r|², since it is common to all calculations of the distance metrics and hence has no effect on the result; this defines the new function D_a(r, s_m). Finally, on the third line of (6.8) we have also introduced the correlation metrics C(r, s_m), which is the negative of D_a(r, s_m). Since in the search for Max[P(s_m|r)] we opted to seek Min[D(r, s_m)], and C(r, s_m) is opposite in sign to D(r, s_m) (and consequently to D_a(r, s_m)), searching for Max[P(s_m|r)] (i.e., applying the MAP criterion) must be equivalent to

Max[P(s_m|r)] = Min[D(r, s_m)] = Min[D_a(r, s_m)] = Max[C(r, s_m)]    (6.9)
 

(6.9) means that our optimum detection rule is simply finding the distances between the received vector r and all possible transmitted signals s₁ … s_M and deciding on the s_m which gives the minimum distance, i.e. Min[D_a(r, s_m)], or finding the correlation between the received vector r and all possible transmitted signals s₁ … s_M and deciding on the one which gives the maximum correlation, i.e. Max[C(r, s_m)].

The above development is valid for the situation when all signals are sent from the transmitter with equal probability. If this is not the case, then we go back to (6.2) and (6.3) and keep in mind that the pdf f(r) is a sum that remains constant whichever m is chosen, thus has no effect on the maximization process. Under these circumstances, Max[P(s_m|r)] will become

Max[P(s_m|r)] = Max[f(r|s_m) P(s_m)]    (6.10)

It is clear that by the application of D_a(r, s_m) or C(r, s_m) we start to define a (correct) decision region for the signal s_m(t), which we denote by R_m. Then the probability of error P_e(s_m) for the signal s_m will be given by the integration of f(r|s_m) over the entire area excluding the one belonging to R_m. This area is denoted by R_m^c. Then

P_e(s_m) = ∫_{R_m^c} f(r|s_m) dr    (6.11)

The average probability of error over the total of M signals will be

P_e = (1/M) Σ_{m=1}^{M} P_e(s_m) = (1/M) Σ_{m=1}^{M} ∫_{R_m^c} f(r|s_m) dr
    = (1/M) Σ_{m=1}^{M} [ 1 − ∫_{R_m} f(r|s_m) dr ]    (6.12)

In case the signals s₁ … s_M are not sent with equal probability, i.e. if the MAP criterion applies, (6.12) turns into

P_e = 1 − Σ_{m=1}^{M} P(s_m) ∫_{R_m} f(r|s_m) dr    (6.13)

Now we solve two lengthy examples to illustrate the above points.

Example 6.1: Consider an ASK system where M = 2 and the transmitter uses the signal set and the basis function s₁(t), s₂(t) and φ(t) shown in Fig. 6.3a. s₁(t) and s₂(t) are transmitted with unequal probabilities of p and 1 − p respectively. Determine the metrics, i.e., Max[P(s_m|r)] = Max[f(r|s_m) P(s_m)], for the MAP optimum detector.

s (t) s (t) (t)


1 2

A 1 / T
b
T
b
0 t 0 t 0 t
T T
b b
-A

a) Transmitted signal waveforms, the basis function

b) Block diagram of MF demodulator

Fig. 6.3 Transmitted signal waveforms, the basis function and the block diagram of MF demodulator
for Example 6.1.
Solution: In Fig. 6.3b, the demodulator at the receiver side is shown as a matched filter (MF). Accordingly, for the cases of s₁(t) and s₂(t), y(t) after the MF becomes the following

y_s1(t) = ∫₀ᵗ r(τ) φ(T_b − t + τ) dτ = ∫₀ᵗ s₁(τ) φ(T_b − t + τ) dτ + ∫₀ᵗ n(τ) φ(T_b − t + τ) dτ
y_s2(t) = ∫₀ᵗ r(τ) φ(T_b − t + τ) dτ = ∫₀ᵗ s₂(τ) φ(T_b − t + τ) dτ + ∫₀ᵗ n(τ) φ(T_b − t + τ) dτ    (6.14)

After sampling at t = T_b, (6.14) will be

y_s1(T_b) = ∫₀^{T_b} s₁(τ) φ(τ) dτ + ∫₀^{T_b} n(τ) φ(τ) dτ = A√T_b + n(T_b) = √ε_b + n(T_b)
y_s2(T_b) = ∫₀^{T_b} s₂(τ) φ(τ) dτ + ∫₀^{T_b} n(τ) φ(τ) dτ = −A√T_b + n(T_b) = −√ε_b + n(T_b)
n(T_b) = ∫₀^{T_b} n(τ) φ(τ) dτ ,  ε_b = ∫₀^{T_b} s₁²(t) dt = ∫₀^{T_b} s₂²(t) dt = A²T_b    (6.15)

nTb  is the noise sample that has the same characteristics of n t  in the received signal. Thus nTb 
is Gaussian with zero mean and variance  n  N0 / 2 . From (5.8) and (6.15), we work out the
2

individual f r s1  and f r s2  as

1 
  / N 
2
f r s1   0.5
exp  r   b 0
 N 0  
1 
  / N 
2
f  r s2   exp  r   b
0.5 0
(6.16)
 N 0  

For a detection strategy based on finding Max[P(s_m|r)] = Max[f(r|s_m) P(s_m)], we simply decide as follows

If f(r|s₁) P(s₁) > f(r|s₂) P(s₂), then decide s₁(t) was transmitted
If f(r|s₁) P(s₁) < f(r|s₂) P(s₂), then decide s₂(t) was transmitted    (6.17)

Substituting from (6.16) into (6.17)

If r > (N₀/(4√ε_b)) ln[(1 − p)/p], then decide s₁(t) was transmitted
If r < (N₀/(4√ε_b)) ln[(1 − p)/p], then decide s₂(t) was transmitted    (6.18)

So our decision requires knowledge of N₀, ε_b and p. It is quite possible that the transmitter sends information about the last two parameters, but N₀ has to be measured somehow. Note that if p = 0.5, meaning that s₁(t) and s₂(t) are sent with equal probabilities from the transmitter, then the decision rule of (6.18) becomes independent of these parameters, as shown below.

If r > 0, then decide s₁(t) was transmitted
If r < 0, then decide s₂(t) was transmitted    (6.19)

Example 6.2: Give at least two different sets of time waveforms s₁(t) … s₄(t) for 4 PSK. For these waveforms, find appropriate orthonormalized basis functions, find the representation of s₁(t) … s₄(t) in terms of the orthonormalized basis functions, find the vectors s₁ … s₄ and the distances between the vector ends, draw the constellation diagram and the diagrams of demodulators comprising a correlator and a matched filter. Show that the distance and correlation metrics function properly (i.e. give the correct decision) if s₁(t) was sent from the transmitter and no noise is mixed with the signal at the receiver. Give the correct decision boundaries and find the probability of error if all signals are sent from the transmitter with equal probability.

Solution: Two possible sets of s₁(t) … s₄(t) and φ₁(t), φ₂(t) are given in Figs. 6.4 and 6.5.

(Figure: rectangular waveforms of amplitude ±A√2 on the half-symbol intervals [0, T/2] and [T/2, T], and basis functions of amplitude √(2/T) on the same intervals.)

Fig. 6.4 First possible set of signal waveforms, s1 t  s4 t  and orthonormalized basis functions,

 1 t ,  2 t  for 4 PSK.
(Figure: rectangular waveforms s₁ᵃ(t) … s₄ᵃ(t) of amplitude ±A over full-symbol or half-symbol intervals, and basis functions φ₁ᵃ(t), φ₂ᵃ(t) of amplitude ±1/√T.)

Fig. 6.5 Second possible set of signal waveforms, s₁ᵃ(t) … s₄ᵃ(t) and orthonormalized basis functions, φ₁ᵃ(t), φ₂ᵃ(t) for 4 PSK.

Note that the signal waveforms and orthonormalized basis functions in Figs. 6.4 and 6.5 are interchangeable. Here we continue our solution with the set shown in Fig. 6.4. Initially we write the time waveform expressions for s₁(t) … s₄(t) and φ₁(t), φ₂(t)

s₁(t) = A√2 for 0 ≤ t ≤ T/2, 0 otherwise ;  s₂(t) = A√2 for T/2 ≤ t ≤ T, 0 otherwise
s₃(t) = −A√2 for 0 ≤ t ≤ T/2, 0 otherwise ;  s₄(t) = −A√2 for T/2 ≤ t ≤ T, 0 otherwise
φ₁(t) = √(2/T) for 0 ≤ t ≤ T/2, 0 otherwise ;  φ₂(t) = √(2/T) for T/2 ≤ t ≤ T, 0 otherwise    (6.20)

Now, either by eye inspection or by the Gram-Schmidt Orthogonalization Procedure, we write s₁(t) … s₄(t) in terms of φ₁(t) and φ₂(t). Note that here there is no need to indicate the time intervals, since they are embedded in φ₁(t) and φ₂(t)

s₁(t) = A√T φ₁(t) ,  s₂(t) = A√T φ₂(t) ,  s₃(t) = −A√T φ₁(t) ,  s₄(t) = −A√T φ₂(t)
s₁ = [s₁₁, s₁₂] = [A√T, 0] ,  s₂ = [s₂₁, s₂₂] = [0, A√T]
s₃ = [s₃₁, s₃₂] = [−A√T, 0] ,  s₄ = [s₄₁, s₄₂] = [0, −A√T]
d₁₂ = d₁₄ = d₂₃ = A√(2T) = √(2ε_s) ,  d₁₃ = d₂₄ = 2A√T = 2√ε_s
|s₁| = |s₂| = |s₃| = |s₄| = A√T ,  ε₁ = ε₂ = ε₃ = ε₄ = ε_s = A²T    (6.21)

On the second and third lines of (6.21), we have included the vectorial representation of our signals; on the fourth line we have given the respective distances between the vector ends; on the last line the lengths of the vectors and the energies are given, which can be calculated either from the time signals or from the vectorial representations. As this is PSK, all vector lengths, and thus the energies, are equal. Now we can plot the constellation diagram of s₁ … s₄. This is illustrated in Fig. 6.6.
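The distances and energies listed in (6.21) can be confirmed directly from the vectors (assumed A = 1, T = 1 in this sketch):

```python
import numpy as np

# Geometry check of (6.21): adjacent 4-PSK points are sqrt(2*eps_s) apart,
# opposite points 2*sqrt(eps_s) apart, and each carries energy eps_s = A^2*T.
A, T = 1.0, 1.0                     # assumed values
eps_s = A**2 * T
s1, s2, s3, s4 = (np.array(v) for v in
                  [(A * np.sqrt(T), 0), (0, A * np.sqrt(T)),
                   (-A * np.sqrt(T), 0), (0, -A * np.sqrt(T))])

d12 = float(np.linalg.norm(s1 - s2))    # adjacent pair
d13 = float(np.linalg.norm(s1 - s3))    # opposite pair
print(bool(np.isclose(d12, np.sqrt(2 * eps_s))))   # True
print(bool(np.isclose(d13, 2 * np.sqrt(eps_s))))   # True
print(bool(np.isclose(s1 @ s1, eps_s)))            # True: |s1|^2 = eps_s
```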

(Constellation diagram: s₁ = (A√T, 0), s₂ = (0, A√T), s₃ = (−A√T, 0), s₄ = (0, −A√T) on the φ₁-φ₂ axes, with d₁₂ = A√(2T) = √(2ε_s) and d₁₃ = 2A√T = 2√ε_s.)

Fig. 6.6 Constellation diagram for the 4 PSK in Example 6.2.

Below, we show the block diagrams of the correlator and matched filter types of demodulators.

a) Block diagram of correlator type of demodulator.
