Digital Communication System: DCS Module 1 (Theory Notes)
Error control is accomplished by the channel coding operation, which consists of systematically
adding extra bits to the output of the source coder. These extra bits do not convey any
information but help the receiver to detect and/or correct some of the errors in the
information-bearing bits.
The Channel decoder recovers the information bearing bits from the coded binary stream.
Error detection and possible correction is also performed by the channel decoder.
The important parameters of the coder/decoder are: the method of coding, efficiency, error-control
capability and complexity of the circuit.
1.1.4 Modulator
Modulation is performed for the efficient transmission of the signal over the channel. The modulator
operates by keying shifts in the amplitude, frequency or phase of a sinusoidal carrier wave in
accordance with the channel encoder output. The corresponding digital modulation techniques are
referred to as amplitude-shift keying, frequency-shift keying and phase-shift keying respectively.
The modulator converts the input bit stream into an electrical waveform suitable for transmission
over the communication channel. Modulation can be used effectively to minimize the effects of
channel noise, to match the frequency spectrum of the transmitted signal with the channel
characteristics, and to provide the capability to multiplex many signals.
The detector performs demodulation, thereby producing a signal that follows the time
variations in the channel encoder output. The modulator, channel and detector form a discrete
channel (because both its input and output signals are in discrete form).
1.1.5 Channel
The Channel provides the electrical connection between the source and destination. The
different channels are: Pair of wires, Coaxial cable, Optical fibre, Radio channel, Satellite
channel or combination of any of these.
Communication channels have only finite bandwidth and a non-ideal frequency response, so the
signal often suffers amplitude and phase distortion as it travels over the channel. Also, the
signal power decreases due to the attenuation of the channel. The signal is corrupted by
unwanted, unpredictable electrical signals referred to as noise.
The important parameters of the channel are Signal to Noise power Ratio (SNR), usable
bandwidth, amplitude and phase response and the statistical properties of noise.
1.2. Information
The output of a discrete information source is a message that consists of a sequence of
symbols. The actual message that is emitted by the source during a message interval is
selected at random from a set of possible messages. The communication system is designed
to reproduce at the receiver either exactly or approximately the message emitted by the
source.
To measure the information content of a message quantitatively, we first need an intuitive
concept of the amount of information.
Consider, as an example, the weather forecast for a trip from Minneapolis to Miami, Florida in
the winter time. The possible forecast messages are:
a mild and sunny day,
a cold day,
possible snow flurries.
The amount of information received is obviously different for these messages.
The first message contains very little information since the weather in Miami is mild
and sunny most of the time.
The forecast of a cold day contains more information since it is not an event that
occurs often.
In contrast, the forecast of snow flurries conveys even more information since the
occurrence of snow in Miami is a rare event.
Thus, on an intuitive basis, the amount of information received from the knowledge of the occurrence
of an event is related to the probability (the likelihood) of occurrence of that event. The
message associated with the event least likely to occur contains the most information.
The information content of a message can be expressed quantitatively in terms of
probabilities as follows:
Suppose an information source emits one of 'q' possible messages m1, m2, ..., mq with p1, p2,
..., pq as their probabilities of occurrence. Based on the above intuition, the information content of
the kth message can be written as
I(m_k) ∝ 1/p_k
Another requirement is that when two independent messages are received, the total
information content is the sum of the information conveyed by each of the messages.
Both requirements are satisfied by the logarithmic measure
I(m_k) = log(1/p_k)
The base of the logarithm in the equation determines the unit assigned to the information
content.
Natural logarithm base : ‘nat’
Base - 10 : Hartley / decit
Base - 2 : bit
Using the binary digit as the unit of information is based on the fact that if two possible
binary digits occur with equal probability (p1 = p2 = 1/2), then the correct identification of the
binary digit conveys an amount of information I(m1) = I(m2) = −log2(1/2) = 1 bit. Therefore one
bit is the amount of information that we gain when one of two possible and equally likely
events occurs.
Ex1: A source puts out one of five possible messages during each message interval. The
probabilities of these messages are P1 = 1/2, P2 = 1/4, P3 = 1/8, P4 = 1/16, P5 = 1/16. What is the
information content of these messages?
Solution:
I(m1) = log2(1/(1/2)) = 1 bit
I(m2) = log2(1/(1/4)) = 2 bits
I(m3) = log2(1/(1/8)) = 3 bits
I(m4) = log2(1/(1/16)) = 4 bits
I(m5) = log2(1/(1/16)) = 4 bits
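The same calculation can be checked numerically. A minimal Python sketch (the function name
info_content is illustrative, not from the notes):

    import math

    def info_content(p):
        # information content of a message of probability p, in bits
        return math.log2(1.0 / p)

    probs = [1/2, 1/4, 1/8, 1/16, 1/16]      # probabilities from Ex1
    for k, p in enumerate(probs, start=1):
        print("I(m%d) = %g bits" % (k, info_content(p)))   # 1, 2, 3, 4, 4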
1.3 Entropy
Suppose a source emits one of M possible symbols s1, s2, ..., sM in a
statistically independent sequence. Let p1, p2, ..., pM be the probabilities of occurrence
of the M symbols, respectively. In a long message containing N symbols, the symbol s1
will occur on the average p1N times, the symbol s2 will occur p2N times, and in general the
symbol si will occur piN times. The information content of the ith symbol is I(si) = log2(1/pi) bits.
Therefore the p1N occurrences of symbol s1 contain p1N log2(1/p1) bits of information. Similarly the
p2N occurrences of symbol s2 contain p2N log2(1/p2) bits, and so on. The total information content
of the message is
I_total = N Σ_{i=1}^{M} p_i log2(1/p_i) bits
The average information per symbol is obtained by dividing the total information content of
the message by the number of symbols in the message:
Entropy = H = I_total / N = Σ_{i=1}^{M} p_i log2(1/p_i) bits/symbol
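As a quick numerical check of this definition, the following short Python sketch computes the
entropy of the source of Ex1 above:

    import math

    def entropy(probs):
        # H = sum of p_i * log2(1/p_i), in bits/symbol
        return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

    print(entropy([1/2, 1/4, 1/8, 1/16, 1/16]))   # 1.875 bits/symbol for the Ex1 source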
In Shannon's binary encoding procedure, the q messages are first arranged in non-increasing order
of probability, p1 ≥ p2 ≥ ... ≥ pq, and the cumulative probabilities are formed as
α1 = 0
α2 = p1 = p1 + α1
α3 = p2 + p1 = p2 + α2
α4 = p3 + p2 + p1 = p3 + α3
.
.
α_{q+1} = pq + αq = 1
Determine the smallest integer l_i (length of the code word for the ith message) using the inequality
2^{l_i} ≥ 1/p_i for all i = 1 to q
The code word for the ith message is then the binary expansion of α_i carried out to l_i bits.
Ex: 1. Construct the Shannon’s binary code for the following message symbols
S = {s1, s2, s3, s4} with probabilities P = (0.4, 0.3, 0.2, 0.1).
Solution:
Arranging in non-increasing order of probability: 0.4 > 0.3 > 0.2 > 0.1
α1 = 0
α2 = 0.4
α3 = 0.4 + 0.3 = 0.7
α4 = 0.7 + 0.2 = 0.9
α5 = 0.9 + 0.1 = 1.0
From 2^{l_i} ≥ 1/p_i the code word lengths are l1 = 2, l2 = 2, l3 = 3, l4 = 4, so that
H(S) = 1.8464 bits/symbol and L = Σ p_i l_i = 2.4 bits/symbol
%ηc = H(S)/L × 100 = (1.8464/2.4) × 100 = 76.93%
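The whole procedure is easy to automate. A minimal Python sketch of Shannon's binary encoding
(the function name shannon_code is illustrative; it reproduces the numbers of Ex:1):

    import math

    def shannon_code(probs):
        # Shannon's binary encoding; returns (codewords, average length L, entropy H)
        probs = sorted(probs, reverse=True)           # non-increasing order
        alphas = [0.0]
        for p in probs[:-1]:                          # cumulative probabilities alpha_i
            alphas.append(alphas[-1] + p)
        codes = []
        for p, a in zip(probs, alphas):
            l = math.ceil(math.log2(1.0 / p))         # smallest l with 2**l >= 1/p
            bits, frac = "", a
            for _ in range(l):                        # binary expansion of alpha to l bits
                frac *= 2
                bits += str(int(frac))
                frac -= int(frac)
            codes.append(bits)
        L = sum(p * len(c) for p, c in zip(probs, codes))
        H = sum(p * math.log2(1.0 / p) for p in probs)
        return codes, L, H

    codes, L, H = shannon_code([0.4, 0.3, 0.2, 0.1])
    print(codes, L, H, 100 * H / L)   # ['00', '01', '101', '1110'], L = 2.4, H = 1.846, eff. = 76.93 %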
Ex: 2: Apply Shannon’s binary encoding procedure to the following set of messages and
obtain code efficiency and redundancy.
1/8, 1/16, 3/16, 1/4, 3/8
Solution:
Arranging in non-increasing order of probability: 3/8 > 1/4 > 3/16 > 1/8 > 1/16
H(S) = (3/8) log2(8/3) + (1/4) log2 4 + (3/16) log2(16/3) + (1/8) log2 8 + (1/16) log2 16
H(S) = 2.1085 bits/symbol
From 2^{l_i} ≥ 1/p_i the code word lengths are 2, 2, 3, 3, 4, so that
L = Σ_{i=1}^{q} p_i l_i = (3/8)(2) + (1/4)(2) + (3/16)(3) + (1/8)(3) + (1/16)(4)
L = 2.4375 bits/symbol
%η = H(S)/L × 100 = 86.5%
Redundancy = 100 − %η = 100 − 86.5 = 13.5%
Ex: 3: Repeat the above procedure for the messages (x1, x2, x3) with P = (1/2, 1/5, 3/10).
Solution:
H(S) = (1/2) log2 2 + (3/10) log2(10/3) + (1/5) log2 5
H(S) = 1.4855 bits/symbol
L = Σ_{i=1}^{3} P_i l_i = (1/2)(1) + (3/10)(2) + (1/5)(3)
L = 1.7 bits/symbol
%η = H(S)/L × 100 = 87.38%
1.4 Huffman Coding
Check whether q = r + a(r-1) is satisfied and find the integer 'a', where q is the number of
source symbols and r is the number of symbols used in the code alphabet. The value of 'a'
must be an integer; otherwise add a suitable number of dummy symbols of zero probability of
occurrence to satisfy the equation. This step is not required if we are to determine binary
codes (r = 2).
Arrange the symbols in non-increasing order of probability and combine the last 'r' symbols into
a single composite symbol whose probability of occurrence is equal to the sum of the
probabilities of occurrence of the last r symbols involved in the step.
Repeat the above steps on the resulting set of symbols until, in the final step, exactly
r symbols are left.
The last reduced source with 'r' symbols is encoded with the 'r' different code symbols
0, 1, 2, ..., r-1. In binary coding the last two symbols are encoded with 0 and 1.
As we pass from source to source working backward, one code word is decomposed each time to
form r new code words (two new code words in the binary case).
This procedure is repeated till code words are assigned to all the source symbols of the
alphabet of source S, discarding the dummy symbols.
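A minimal Python sketch of the binary case (r = 2) of this procedure, using a priority queue; the
trinary case follows the same pattern with three groups merged per step (the function name
huffman_code is illustrative):

    import heapq
    import math

    def huffman_code(probs):
        # binary Huffman coding; returns one codeword per probability, in input order
        heap = [(p, i, [i]) for i, p in enumerate(probs)]   # (prob, tie-break, symbol indices)
        heapq.heapify(heap)
        codes = [""] * len(probs)
        counter = len(probs)
        while len(heap) > 1:
            p1, _, group1 = heapq.heappop(heap)             # two least probable groups
            p2, _, group2 = heapq.heappop(heap)
            for idx in group1:                              # grow the codewords from the right
                codes[idx] = "0" + codes[idx]
            for idx in group2:
                codes[idx] = "1" + codes[idx]
            heapq.heappush(heap, (p1 + p2, counter, group1 + group2))
            counter += 1
        return codes

    probs = [1/2, 1/4, 1/8, 1/8]                            # source of Ex:1 below
    codes = huffman_code(probs)
    L = sum(p * len(c) for p, c in zip(probs, codes))
    H = sum(p * math.log2(1 / p) for p in probs)
    print(codes, L, H, 100 * H / L)                         # L = 1.75, H = 1.75, 100 % efficiency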
Ex:1. Construct a Huffman code for symbols having probabilities {1/2, 1/4, 1/8, 1/8}. Also find
the code efficiency and redundancy.
Solution:
q = r + α(r-1) => 4 = 2 + α(1) => α = 2
H(S) = (1/2)(1) + (1/4)(2) + (1/8)(3) + (1/8)(3) = 1.75 bits/symbol
L = 1.75 bits/symbol
%η = H(S)/L × 100 = 100%
Redundancy = 0%
Ex. 2: A source has 9 symbols, each occurring with a probability of 1/9. Construct a binary
Huffman code. Find the efficiency and redundancy of the coding.
Solution:
q = r + α(r-1) => 9 = 2 + α(1) => α = 7
H(S) = log2 9 = 3.17 bits/symbol
L = 3.22 bits/symbol
%η = H(S)/L × 100 = 98.45%
Redundancy = 100 − %η = 1.55%
Ex. 3: Given the messages 𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 , 𝑥5 &𝑥6 with probabilities 0.4, 0.2, 0.2, 0.1, 0.07,
0.03. Construct binary and trinary code by applying Huffman encoding procedure. Also find
efficiency and redundancy.
Solution:
(i) Binary
q = r + α(r-1) => 6 = 2 + α(1) => α = 4
H(S) = 0.4 log2(1/0.4) + 0.2 log2(1/0.2) + 0.2 log2(1/0.2) + 0.1 log2(1/0.1) + 0.07 log2(1/0.07) + 0.03 log2(1/0.03)
H(S) = 2.21 bits/symbol
L = Σ_{i=1}^{6} p_i l_i = 2.3 bits/symbol
%η = H(S)/L × 100 = 96.09%
Redundancy = 100 − %η = 3.91%
(ii) Trinary
q = r + α(r-1) => 6 = 3 + 2α => α = 1.5, which is not an integer, so one dummy symbol of zero
probability is added (q = 7, α = 2)
H(S) = 0.4 log3(1/0.4) + 0.2 log3(1/0.2) + 0.2 log3(1/0.2) + 0.1 log3(1/0.1) + 0.07 log3(1/0.07) + 0.03 log3(1/0.03)
H(S) = 1.394 trinary units/symbol
Ex. 4: Consider a zero-memory source that has an alphabet of 7 symbols with probabilities of
occurrence (0.25, 0.25, 0.125, 0.125, 0.125, 0.0625, 0.0625). Compute the Huffman code
for this source, moving a combined symbol as high as possible. Evaluate the code efficiency
and also construct the code tree.
Solution:
q = r + α(r-1) => 7 = 2 + α(1) => α = 5
H(S) = 2.625 bits/symbol, L = 2.625 bits/symbol, %η = 100%, Redundancy = 0%
A channel is defined as the medium through which the coded signals generated by an
information source are transmitted. In general, the input to the channel is a symbol belonging
to an alphabet 'A' with 'r' symbols, and the output of the channel is a symbol belonging to an
alphabet 'B' with 's' symbols.
Due to errors in the channel, the output symbols may differ from input symbols.
A = {a1, a2, ..., ar}  →  P(bj/ai)  →  B = {b1, b2, ..., bs}
The conditional probabilities come into existence due to the presence of noise in the
channel. Because of the noise there will be some amount of uncertainty in the reception of any
symbol; for this reason there are 's' symbols at the receiver for the 'r' symbols at the
transmitter. In total there are r × s conditional probabilities, represented in the form of a matrix
which is called the Channel Matrix or Noise Matrix.
When a1 is transmitted, it can be received as any one of the output symbols (b1, b2, b3, ..., bs)
=> P(b1/a1) + P(b2/a1) + P(b3/a1) + ... + P(bs/a1) = 1
In general, Σ_{j=1}^{s} P(bj/ai) = 1 for i = 1 to r
Thus the sum of all the elements in any row of the channel matrix is equal to UNITY.
The joint probability between any input symbol ai and any output symbol bj is given by
P(ai ∩ bj) = P(ai, bj) = P(bj/ai) P(ai)
P(ai, bj) = P(ai/bj) P(bj)
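These relations are enough to build the joint probability matrix from the source statistics and the
channel matrix. A minimal Python sketch (the function name joint_matrix is illustrative; the
numbers are those of the binary symmetric channel example treated later in these notes):

    def joint_matrix(p_a, p_b_given_a):
        # P(a_i, b_j) = P(b_j/a_i) * P(a_i); also returns the output probabilities P(b_j)
        for row in p_b_given_a:                    # every row of the channel matrix sums to 1
            assert abs(sum(row) - 1.0) < 1e-9
        jpm = [[p_b_given_a[i][j] * p_a[i] for j in range(len(p_b_given_a[0]))]
               for i in range(len(p_a))]
        p_b = [sum(jpm[i][j] for i in range(len(p_a))) for j in range(len(jpm[0]))]
        return jpm, p_b

    jpm, p_b = joint_matrix([2/3, 1/3], [[3/4, 1/4], [1/4, 3/4]])
    print(jpm)    # [[1/2, 1/6], [1/12, 1/4]]
    print(p_b)    # [7/12, 5/12]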
Properties:
The source entropy is given by H(A) = Σ_{i=1}^{r} P(ai) log2(1/P(ai))
The entropy of the receiver or output is given by H(B) = Σ_{j=1}^{s} P(bj) log2(1/P(bj))
If the average value of the conditional entropy is taken as j varies from 1 to s,
H(A/B) = Σ_{j=1}^{s} Σ_{i=1}^{r} P(bj) P(ai/bj) log2(1/P(ai/bj))
H(A/B) = Σ_{j=1}^{s} Σ_{i=1}^{r} P(ai, bj) log2(1/P(ai/bj)) is the conditional entropy of the
transmitter.
Similarly H(B/A) = Σ_{i=1}^{r} Σ_{j=1}^{s} P(ai, bj) log2(1/P(bj/ai)) is the conditional entropy of the
receiver.
H(A,B) = Σ_{i=1}^{r} Σ_{j=1}^{s} P(ai, bj) log2(1/P(ai, bj)) is the joint entropy.
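All of these entropies follow directly from the joint probability matrix. A minimal Python sketch
(channel_entropies is an illustrative name; the JPM used is that of the binary symmetric channel
example treated later in these notes):

    import math

    def channel_entropies(jpm):
        # returns H(A), H(B), H(A/B), H(B/A), H(A,B) in bits, given P(a_i, b_j)
        p_a = [sum(row) for row in jpm]                 # marginal P(a_i): row sums
        p_b = [sum(col) for col in zip(*jpm)]           # marginal P(b_j): column sums
        H = lambda ps: sum(p * math.log2(1 / p) for p in ps if p > 0)
        H_A, H_B = H(p_a), H(p_b)
        H_AB = H([p for row in jpm for p in row])       # joint entropy H(A,B)
        H_A_given_B = sum(p * math.log2(p_b[j] / p)     # uses P(a_i/b_j) = P(a_i,b_j)/P(b_j)
                          for i, row in enumerate(jpm) for j, p in enumerate(row) if p > 0)
        H_B_given_A = sum(p * math.log2(p_a[i] / p)     # uses P(b_j/a_i) = P(a_i,b_j)/P(a_i)
                          for i, row in enumerate(jpm) for j, p in enumerate(row) if p > 0)
        return H_A, H_B, H_A_given_B, H_B_given_A, H_AB

    vals = channel_entropies([[1/2, 1/6], [1/12, 1/4]])
    print([round(v, 4) for v in vals])   # [0.9183, 0.9799, 0.7497, 0.8113, 1.7296]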
When an average amount of information H(X) is transmitted over a noisy channel, an amount of
information H(X/Y) is lost in the channel. The balance of the information at the receiver is defined
as the Mutual Information I(X,Y):
I(X,Y) = H(X) − H(X/Y) = H(Y) − H(Y/X)
For individual symbols, I(xi) = log(1/P(xi)) and I(xi/yj) = log(1/P(xi/yj)).
The difference between the above two is the information gained through the channel:
I(xi, yj) = log(1/P(xi)) − log(1/P(xi/yj))
I(xi, yj) = log(P(xi/yj)/P(xi))
I(xi, yj) = log(P(xi, yj)/(P(xi) P(yj)))
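Averaging I(xi, yj) over all input-output pairs with weights P(xi, yj) gives the average mutual
information, together with the identities used in the examples below (these follow directly from
the definitions above):
I(X,Y) = Σ_i Σ_j P(xi, yj) log2[P(xi, yj)/(P(xi) P(yj))]
       = H(X) − H(X/Y) = H(Y) − H(Y/X) = H(X) + H(Y) − H(X,Y)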
Properties:
If the source emits symbols at a rate of r_s symbols/sec, the average rate at which information
enters the channel is R_in = r_s · H(X) bits/sec. Due to the errors, it is not possible to
reconstruct the input symbol sequence with certainty from the received sequence; therefore
source information is lost due to the errors.
Therefore the average rate of information transmission is given by R_t = I(X,Y) · r_s bits/sec.
%η_ch = [H(X) − H(X/Y)] / Max[H(X) − H(X/Y)] × 100
Redundancy = 1 − η_ch
1.6.6 Symmetric Channel
A symmetric channel is defined as a channel in which the second and subsequent rows of the
channel matrix contain the same elements as the first row, but in a different order.
∴ H(Y/X) = h, where h is the entropy of any single row. The channel capacity, with r_s = 1
symbol/sec, is given by
C = Max(R_t)
  = Max[I(X,Y)] r_s
  = Max[I(X,Y)]
  = Max[H(Y) − H(Y/X)]
  = Max[H(Y)] − H(Y/X)   (since H(Y/X) = h is the same for every input distribution)
C = Max[H(Y)] − h
H(Y) is the entropy of the received symbols, which becomes maximum if and only if all the received
symbols are equiprobable.
Max[H(Y)] = log2 s
∴ C = log2 s − h
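A minimal Python sketch of this result (symmetric_channel_capacity is an illustrative name; it
assumes r_s = 1, so the capacity comes out in bits/symbol):

    import math

    def symmetric_channel_capacity(row):
        # C = log2(s) - h, where h is the entropy of any single row of the channel matrix
        s = len(row)
        h = sum(p * math.log2(1 / p) for p in row if p > 0)
        return math.log2(s) - h

    print(symmetric_channel_capacity([3/4, 1/4]))   # binary symmetric channel, p = 1/4: ~0.1887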
Ex.1: A transmitter has an alphabet consisting of 5 letters {a1, a2, a3, a4, a5} and the receiver
has an alphabet of four letters {b1, b2, b3, b4}. The joint probabilities of the system are given
below. Compute the different entropies of this channel.
Solution:
Ex.2: A transmitter transmits 5 symbols with probabilities 0.2, 0.3, 0.2, 0.1 and 0.2. Given the
channel matrix P(B/A), calculate H(B) and H(A,B).
Solution:
Ex.3: For the JPM given below, compute individually H(X), H(Y), H(X,Y), H(X/Y), H(Y/X),
I(X,Y) and channel Capacity if r=1000 symbols/sec. Verify the relationship among these
entropies.
Solution:
Verification:
The binary symmetric channel is one of the most commonly and widely used channels; its channel
diagram is given below.
P(Y/X) = [P 1−P; 1−P P] = [P P̄; P̄ P], where P̄ = 1 − P
The matrix is a symmetric matrix; hence the channel is a binary symmetric channel.
For a symmetric channel, H(Y/X) = h = P log2(1/P) + P̄ log2(1/P̄)
∴ C = log2 2 − h = 1 − h bits/sec (with r_s = 1).
Ex.1: A binary symmetric channel has the following noise matrix with source probabilities of
P(x1) = 2/3 and P(x2) = 1/3:
P(Y/X) = [3/4 1/4; 1/4 3/4]
Determine H(X), H(Y), H(X,Y), H(Y/X), H(X/Y), I(X,Y), the channel capacity, the channel efficiency
and the redundancy.
Solution:
P(y1) = P(x1) P(y1/x1) + P(x2) P(y1/x2) = (2/3)(3/4) + (1/3)(1/4) = 7/12, and P(y2) = 5/12
H(Y) = (7/12) log2(12/7) + (5/12) log2(12/5) = 0.9799 bits/symbol
H(Y/X) = h = P log2(1/P) + P̄ log2(1/P̄)
       = (3/4) log2(4/3) + (1/4) log2 4
       = 0.8113 bits/symbol
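The remaining quantities asked for follow from the joint probability matrix in the same way. A short
Python sketch completing the numbers (values rounded; a numerical check rather than a derivation):

    import math

    jpm = [[1/2, 1/6], [1/12, 1/4]]                 # P(x_i, y_j) = P(y_j/x_i) P(x_i)
    p_x = [sum(r) for r in jpm]
    p_y = [sum(c) for c in zip(*jpm)]
    H = lambda ps: sum(p * math.log2(1 / p) for p in ps if p > 0)
    H_X, H_Y, H_XY = H(p_x), H(p_y), H([p for r in jpm for p in r])
    I_XY = H_X + H_Y - H_XY                         # mutual information I(X,Y)
    C = 1 - H([3/4, 1/4])                           # BSC capacity with r_s = 1
    print(round(H_X, 4), round(H_XY, 4), round(H_XY - H_Y, 4))    # 0.9183  1.7296  0.7497
    print(round(I_XY, 4), round(C, 4), round(100 * I_XY / C, 1))  # 0.1686  0.1887  89.3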
The following examples deal with the capacity of a band-limited channel with additive white
Gaussian noise, given by the Shannon-Hartley law C = B log2(1 + S/N) bits/sec, where B is the
channel bandwidth and S/N is the signal-to-noise power ratio.
Ex.2: A CRT terminal is used to enter alphanumeric data into a computer. The CRT is
connected through a voice-grade telephone line having a usable bandwidth of 3 kHz and an
output S/N of 10 dB. Assume that the terminal has 128 characters and that the data is sent in an
independent manner with equal probability.
Ex.3: A voice-grade channel of the telephone network has a bandwidth of 3.4 kHz.
(a) Calculate the channel capacity of the telephone channel for a signal-to-noise ratio of
30 dB.
(b) Calculate the minimum signal-to-noise ratio required to support information
transmission through the telephone channel at the rate of 4800 bits/sec.
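The solutions are not worked out in these notes. A minimal Python sketch of the Shannon-Hartley
computations that Ex.2 and Ex.3 call for (assuming the 3 kHz of Ex.2 means 3000 Hz of usable
bandwidth, and that each of the 128 equiprobable characters carries log2 128 = 7 bits):

    import math

    def shannon_capacity(bandwidth_hz, snr_db):
        # Shannon-Hartley capacity C = B log2(1 + S/N), with S/N given in dB
        snr = 10 ** (snr_db / 10)
        return bandwidth_hz * math.log2(1 + snr)

    # Ex.2: 3 kHz line, S/N = 10 dB, 128 equiprobable characters of 7 bits each
    C2 = shannon_capacity(3000, 10)
    print(C2, C2 / 7)                   # ~10378 bits/sec, i.e. at most ~1482 characters/sec

    # Ex.3(a): 3.4 kHz telephone channel, S/N = 30 dB
    print(shannon_capacity(3400, 30))   # ~33.9 kbits/sec

    # Ex.3(b): minimum S/N so that C >= 4800 bits/sec
    snr_min = 2 ** (4800 / 3400) - 1
    print(snr_min, 10 * math.log10(snr_min))   # ~1.66, i.e. about 2.2 dB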