
Data Communication

ECE 3151
Section B
(2024)
Class Hours
Sunday: 11:20-12:20
Wednesday: 11:20-12:20
Thursday: 11:20-12:20

Recommended Books
1. B. A. Forouzan, “Data Communications and Networking”, 5/e
2. W. Stallings, “Data and Computer Communications”, 10/e
3. B. P. Lathi, “Modern Digital and Analog Communication Systems”, 3/e
4. G. Keiser, “Fiber Optic Communications”, Springer (2021)
Lecture Plan
Topics to be Covered:
Calculation of detection-error probability (3 hr.)
Transmission impairments and noise (1 hr.)
Concept of channel coding and channel capacity (1 hr.)
Error detection and correction codes (3 hrs.)
Communication medium ( 1 hr.)
Basics of fiber-optic communication systems (3 hrs.)
Assessment:
2 class tests (CT)
(the better score will be counted)

CT-1: 20.3.2024
CT-2: 04.04.2024
Examples of Communication Systems
Source → Medium → Destination
•Source originates the message
•Medium carries the message
•Destination unit receives the message
Examples: human voice, television picture, teletype message/data.
Ref. B.P. Lathi, 3/e, p.2


A Typical Communication System
Source → modulation → Medium → demodulation → Destination
The message signal (baseband signal) is modulated before transmission.
Medium: twisted-pair cable, coaxial cable, optical fiber, or a wireless/radio link.


Calculation of detection-error probability

Channel coding (encoder) is applied before modulation; channel coding (decoder) is applied after demodulation.
Error detection and correction codes are used at both ends.
Transmission impairments

SNR (signal-to-noise ratio) = signal power / noise power

Ref. B.P. Lathi, 3/e, p.3
QUIZ: Can an amplifier help improve the SNR of a received signal?
Normal distribution/Gaussian distribution
The sketch of f(x) is a bell curve of peak amplitude A centred at B, with inflection points at B−C and B+C:
f(x) = A·exp(−(x−B)²/(2C²)), with A = 1/(C√(2π)) for a normalized pdf
x = continuous random variable
f(x) = probability density function
A = peak amplitude
B = mean
C = standard deviation
C² = variance
Noise can be modeled by a Gaussian distribution.

Source: Wikipedia

Source: DOI:10.1109/APCC.2012.6388180

Two white noise sources: Spectrum A has a large variance in power distribution, and thus provides a low crest factor. Spectrum B has a high crest factor, so unwanted signal levels are less frequent.

Crest factor = difference between the peak and average level of a signal.

Detection-Error Probability
The signal received at the detector = desired pulse train + a random channel noise.

Assume zero ISI (inter-symbol interference).

Let’s consider a polar transmission with a basic pulse p(t).

Without noise, the sample values are:
1 → Ap, 0 → −Ap
With noise n (n = random noise amplitude), the samples become:
1 → Ap + n; an error occurs if n < −Ap
0 → −Ap + n; an error occurs if n > Ap
The decision threshold is at 0.
Ref. B.P. Lathi, 3/e, p.330

pdf probability of a continuous


random variable falling within a
particular range of values.
Probability density function(pdf) of white noise:
QUIZ: What is pmf?

QUIZ: P(α)=?
P(β)=?, P(Ap)=?,
If 1 and 0 are equally likely: P(-Ap)=?
P(0)=P(1)= 50%

Ref. B.P. Lathi,3/e, p.331


Substituting for the noise variable changes the limit of integration and gives the error probability in terms of the Q-function:
Q(x) = (1/√(2π)) ∫ₓ^∞ e^(−t²/2) dt,  so that P(error | 1 sent) = P(n < −Ap) = Q(Ap/σn).
Binary System: You are sending either 0 or 1.
Mutually exclusive events: those events that do not occur simultaneously; e.g., tossing a coin, you get either Head or Tail.

Sending (2 mutually exclusive events): S0: 0 was sent; S1: 1 was sent; P(S0) + P(S1) = 1
Detection (2 mutually exclusive events): D0: 0 is detected; D1: 1 is detected; P(D0) + P(D1) = 1
Joint events (4 mutually exclusive events):
D0.S0: 0 sent, 0 detected    D1.S1: 1 sent, 1 detected
D1.S0: 0 sent, 1 detected    D0.S1: 1 sent, 0 detected
(P = probability)

P(bit error) = P(D0.S1) + P(D1.S0)
From the product rule:
P(D0|S1) = P(D0.S1)/P(S1)  ⇒  P(D0.S1) = P(S1)·P(D0|S1)
P(D1|S0) = P(D1.S0)/P(S0)  ⇒  P(D1.S0) = P(S0)·P(D1|S0)
Therefore P(bit error) = P(S1)·P(D0|S1) + P(S0)·P(D1|S0)
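The total-probability computation above can be sketched in Python (the slides use MATLAB; the values of Ap and sigma below are illustrative assumptions, not from the slides):

```python
from math import erfc, sqrt

def qfunc(x):
    # Gaussian tail probability Q(x) = P(N > x) for N ~ N(0, 1)
    return 0.5 * erfc(x / sqrt(2))

def bit_error_probability(p_s0, p_s1, p_d1_given_s0, p_d0_given_s1):
    # Total probability of a bit error:
    # P(error) = P(S1)P(D0|S1) + P(S0)P(D1|S0)
    return p_s1 * p_d0_given_s1 + p_s0 * p_d1_given_s0

# Polar signaling with peak amplitude Ap and noise rms sigma: an error
# occurs when the noise sample exceeds Ap in the wrong direction, so
# P(D0|S1) = P(D1|S0) = Q(Ap/sigma).
Ap, sigma = 1.0, 0.2            # illustrative values
p_flip = qfunc(Ap / sigma)      # Q(5) ~ 2.87e-7
pe = bit_error_probability(0.5, 0.5, p_flip, p_flip)
print(pe)   # equals Q(Ap/sigma) when 0 and 1 are equally likely
```

When P(S0) = P(S1) = 0.5 the weighted sum collapses to Q(Ap/σn), which is why the slides quote a single Q value for the polar case.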
Error probability for on-off signal
The samples are Ap (for 1) or 0 (for 0); the decision threshold is set midway, at Ap/2.

Ref. B.P. Lathi, 3/e, p.333
Error probability for bipolar signal
In bipolar signaling, 1 alternates between +Ap and −Ap while 0 is sent as 0, so two decision thresholds are used, at +Ap/2 and −Ap/2.
The figure compares the waveforms and decision thresholds of the polar, on-off, and bipolar signals for the bit pattern 1 0 1.

Ref. B.P. Lathi, 3/e, p.333


MATLAB Output

% Compare the exact Q-function with its asymptotic approximation
% Q(x) ~ (1/(x*sqrt(2*pi)))*(1 - 0.7/x^2)*exp(-x^2/2)
x = 1:0.1:6;
y = qfunc(x);              % exact Q-function (Communications Toolbox)
q1 = 1./(x*sqrt(2*pi));
q2 = 1 - 0.7./x.^2;
q3 = exp(-x.^2/2);
Q = q1.*q2.*q3;            % approximation
semilogy(x, y, x, Q)       % the two curves nearly coincide for larger x
axis tight;
Ref. B.P. Lathi, 3/e, p.454
Q(5.36) = 0.4161×10⁻⁷;  Q(6.12) = 0.4679×10⁻⁹

Ref. B.P. Lathi, 3/e, p.455


If the peak pulse amplitude is k times the noise rms value σn:
For a polar signal, Pe = Q(k); e.g., for k = 5, Pe = Q(5) ≈ 2.87×10⁻⁷, i.e., on average one bit error in every 1/(2.87×10⁻⁷) ≈ 3,484,320 bits.
Ref. B.P. Lathi, 3/e, p.332
Ref. B.P. Lathi, 3/e, p.458
Polar: Pe = Q(Ap/σn);  On-off: Pe = Q(Ap/2σn);  Bipolar: Pe = 1.5·Q(Ap/2σn)

The probability of error is 50% higher for the bipolar case than for the on-off case. This
increase in error probability can be compensated by just a little increase in signal power.

To attain the same Pe as the on-off case with Ap = 10σn, i.e., Q(5) ≈ 0.286×10⁻⁶:
1.5·Q(x) = 0.286×10⁻⁶  ⇒  Q(x) = 1.9066×10⁻⁷  ⇒  x ≈ 5.08, i.e., Ap = 10.16σn
Amplitude increase: (10.16 − 10)/10 = 0.016 = 1.6%
Power increase: (10.16² − 10²)/10² = 0.0323 = 3.23%
This is just a 1.6% increase in the signal amplitude (or a 3.23% increase in signal power) required over the on-off case.
Ref. B.P. Lathi, 3/e, p.333
Pulse energy: E = Ap²; doubling it gives 2E = 2Ap² = (√2·Ap)².

Ref. B.P. Lathi, 3/e, p.334


Concept of Frequency, Spectrum, and Bandwidth

Ref. W. Stalling,10/e, p.74


Ref. W. Stalling,10/e, p.75
Ref. W. Stalling,10/e, p.76
f= fundamental frequency, 3f= harmonic frequency

Average amplitude=0, i.e., no dc component

Ref. W. Stalling,10/e, p.78


Fourier Series/Fourier Transform?
The spectrum of a periodic signal is discrete; the spectrum of an aperiodic signal is continuous.
For the signal built from two sine components, the component amplitudes are 4/π ≈ 1.27 (fundamental) and 4/3π ≈ 0.424 (third harmonic).
For the aperiodic pulse s(t) of amplitude 1 extending from −x/2 to +x/2, the spectrum is continuous.

Ref. W. Stalling, 10/e, p.79

When a dc level of 1 is added, the average amplitude ≠ 0, i.e., there is a dc component; the component amplitudes 4/π ≈ 1.27 and 4/3π ≈ 0.424 are unchanged.

Ref. W. Stalling, 10/e, p.80


Effective bandwidth?

If the waveform represents the repetitive binary stream 0101..., then the duration of each pulse is T/2 = 1/(2f); thus the data rate is 2f bits per second (bps).
The frequency components of the square wave with amplitudes A and −A can be expressed by its Fourier series expansion:
s(t) = (4A/π) × Σ_{k odd} sin(2πkft)/k

Data rate = 2f bps

Fourier series expansion: Ref. W. Stalling, 10/e, p.79


Ref. W. Stalling,10/e, p.81
Original signal and MATLAB output for partial sums with k = 1 to 1, 3, 4, 10, 100, and 1000 terms; the persistent ripples near the transitions are the Gibbs phenomenon.

t = 0:0.001:100;
A = 4;
f = 0.03;
w = 2*pi*f;
sum = 0;
for k = 1:1000
    sum = sum + (sin(w*(2*k-1)*t))/(2*k-1);
end
F1 = (4*A/pi)*sum;   % partial Fourier series of the square wave
plot(t, F1);
grid on;
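The same partial-sum construction can be sketched in Python (the slides use MATLAB; the 1 Hz fundamental below is an illustrative assumption):

```python
from math import sin, pi

def square_wave_partial_sum(t, w, n_terms, A=1.0):
    # Partial Fourier series of a square wave of amplitude A:
    # s(t) = (4A/pi) * sum over odd harmonics of sin((2k-1)*w*t)/(2k-1)
    total = 0.0
    for k in range(1, n_terms + 1):
        total += sin(w * (2 * k - 1) * t) / (2 * k - 1)
    return (4 * A / pi) * total

w = 2 * pi * 1.0   # 1 Hz fundamental (illustrative)
# In the middle of a positive half-cycle (t = 0.25 s) the sum converges
# to +1, but near the jumps it keeps overshooting by roughly 9% of the
# jump no matter how many terms are kept (the Gibbs phenomenon).
print(square_wave_partial_sum(0.25, w, 1000))   # close to 1.0
```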

Zoom in: with T = 1 µs, each pulse lasts 0.5 µs and carries 1 bit, so 1 µs carries 1/0.5 = 2 bits, and 1 s carries 2×10⁶ bits = 2 Mb.
Data rate = 2 Mbps
Ref. W. Stalling, 10/e, p.82
 Any digital waveform has infinite bandwidth, but a channel/medium’s bandwidth is finite.
 For any given medium, the greater the bandwidth transmitted, the greater the cost.
Therefore, economic and practical reasons dictate that digital information be approximated by a signal of limited bandwidth.

Negative effect of limiting the signal bandwidth:


 limiting the bandwidth creates distortions, which makes the task of interpreting the received
signal more difficult.
 The more limited the bandwidth, the greater the distortion, and the greater the potential for
error by the receiver. i.e., BER will increase. [BER=bit-error rate or probability of bit error]

Ref. W. Stalling,10/e, p.83


Ref. W. Stalling,10/e, p.83
 If we want to send data at higher speed, greater bandwidth will be required for transmission.
 The greater the bandwidth of a transmission system, the higher the data rate that can be transmitted over that system.
 For a data rate of 2 kbps, about 2×2 = 4 kHz of bandwidth is required for faithful transmission.
 If the noise is not severe, less bandwidth can do the same job.
TRANSMISSION IMPAIRMENT
 Signals travel through transmission media, which are not perfect. The imperfection causes
signal impairment. This means that the signal at the beginning of the medium is not the
same as the signal at the end of the medium. What is sent is not what is received.
 Three causes of impairment are attenuation, distortion, and noise.
 For digital signals, bit errors may be introduced, such that a binary 1 is transformed into a
binary 0 or vice versa due to these impairments.

Ref. B. A. Forouzan,5/e, p.77


Attenuation
 Attenuation means a loss of energy.
 When a signal travels through a medium, it loses some of its energy in overcoming the
resistance of the medium.
 E.g., A wire carrying electric signals gets warm after a while. Some of the electrical energy
in the signal is converted to heat.
 To compensate for this loss, amplifiers are used to amplify the signal.

Ref. B. A. Forouzan,5/e, p.77


Ref. B. A. Forouzan,5/e, p.78
Example 3.26: Suppose a signal travels through a transmission medium and its power is reduced to one-half. This means that P2 = (1/2)P1. In this case, the attenuation (loss of power) is
10 log10(P2/P1) = 10 log10(0.5) ≈ −3 dB
Ref. W. Stalling, 10/e, p.107

A loss of 3 dB (−3 dB) is equivalent to losing one-half the power.

Example 3.27: A signal travels through an amplifier, and its power is increased 10 times. This means that P2 = 10P1. In this case, the amplification (gain of power) is
10 log10(P2/P1) = 10 log10(10) = 10 dB

Ref. W. Stalling, 10/e, p.108


Loss= Negative gain

LdB = loss, in decibels.

Ref. W. Stalling,10/e, p.108


−30 dBW = 10⁻³ W = 1 mW = 0 dBm

Ref. W. Stalling,10/e, p.109


dBm vs dBW

Source: https://www.giangrandi.org/electronics/anttool/decibel.shtml
Example 3.28: One reason that engineers use the decibel to measure the changes in the strength of a
signal is that decibel numbers can be added (or subtracted) when we are measuring several points
(cascading) instead of just two. In the following figure a signal travels from point 1 to point 4.

Example 3.29: Sometimes the decibel is used to measure signal power in milliwatts. In this case, it is referred to as dBm and is calculated as dBm = 10 log10 Pm, where Pm is the power in milliwatts. Calculate the power of a signal with dBm = −30.
10 log10 Pm = −30  ⇒  log10 Pm = −3  ⇒  Pm = 10⁻³ mW
Ref. B. A. Forouzan, 5/e, p.78
Example 3.30: The loss in a cable is usually defined in decibels per kilometer (dB/km). If the signal at the beginning of a cable with −0.3 dB/km has a power of 2 mW, what is the power of the signal at 5 km? Ref. B. A. Forouzan, 4/e, p.83
The loss in the cable in decibels is 5 × (−0.3) = −1.5 dB. The power at 5 km is therefore 2 mW × 10^(−1.5/10) ≈ 1.4 mW.

Combining decibel quantities:
dBm + dBm ⇒ NO!
dBm + dB ⇒ dBm
dB + dB ⇒ dB

Technique without using a calculator:
1 mW of power: 0 dBm.
Power doubles: add 3 dB.
Power × 10: add 10 dB.

26 dBm = 0 dBm + 10 dB + 10 dB + 3 dB + 3 dB
  Absolute power: 1 mW × 10 × 10 × 2 × 2 = 400 mW
−33 dBm = 0 dBm − 10 dB − 10 dB − 10 dB − 3 dB
  Absolute power: 1 mW × (1/10) × (1/10) × (1/10) × (1/2) = 0.0005 mW
Reverse: 50 mW = 1 mW × 10 × 10 × (1/2)
  In dBm: 0 dBm + 10 dB + 10 dB − 3 dB = 17 dBm

dBm-to-mW conversion:
0 dBm  ⇒ 10^(0/10) mW = 10⁰ mW = 1 mW
10 dBm ⇒ 10^(10/10) mW = 10¹ mW = 10 mW
x dBm  ⇒ 10^(x/10) mW
Source: https://www.giangrandi.org/electronics/anttool/decibel.shtml
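The conversions above can be sketched as two small Python helpers (a minimal sketch; the 3-dB "doubling" shortcut is an approximation, so 26 dBm comes out as ~398 mW rather than exactly 400 mW):

```python
from math import log10

def dbm_to_mw(dbm):
    # x dBm corresponds to 10^(x/10) mW
    return 10 ** (dbm / 10)

def mw_to_dbm(mw):
    # Pm milliwatts corresponds to 10*log10(Pm) dBm
    return 10 * log10(mw)

print(dbm_to_mw(0))     # 1 mW
print(dbm_to_mw(26))    # ~398 mW (the shortcut gives 400 mW)
print(dbm_to_mw(-33))   # ~0.0005 mW
print(mw_to_dbm(50))    # ~17 dBm
```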
Home Work
Q. Suppose a signal with a power of 0 dBm is launched at point 1 as
shown below. Calculate the received power of the signal at point 4.

Ans: 0.631 mW
Ref. B. A. Forouzan,5/e, p.79
Distortion
 Distortion means that the signal changes its form or shape.

 Distortion can occur in a composite signal made of different frequencies. Each signal
component has its own propagation speed through a medium and, therefore, its own delay
in arriving at the final destination.

 Differences in delay may create a difference in phase if the delay is not exactly the same as
the period duration.

 In other words, signal components at the receiver have phases different from what they had
at the sender. The shape of the composite signal is therefore not the same.
Noise

Undesired signals are referred to as noise. Noise is the major limiting factor in communications

system performance. [Ref. W. Stalling, 10/e, P.95]

Ref. B. A. Forouzan,5/e, p.80


[Ref. W. Stalling, 10/e, P.96]
[Ref. W. Stalling, 10/e, P.97]

0 degrees Celsius = 273.15 K; 17 degrees Celsius = 273.15 + 17 ≈ 290 K.
At T ≈ 290 K the thermal noise power spectral density is kT ≈ 4×10⁻²¹ W/Hz, and 10×log(4×10⁻²¹) = −203.98 ≈ −204 dBW/Hz.

The temperature at which a resistor would generate the same noise power as that produced by the circuit or device is known as the effective noise temperature.
[Ref. W. Stalling, 10/e, P.97]
Impulse noise example: at 56,000 b/s, a 0.01 s noise burst destroys 56,000 b/s × 0.01 s = 560 bits.
[Ref. W. Stalling, 10/e, P.97]
[Ref. W. Stalling, 10/e, P.98]
Example 3.31: The power of a signal is 10 mW and the power of the noise is 1 μW; what are the values of SNR and SNRdB?
10 mW = 10,000 μW
SNR (signal-to-noise ratio) = signal power / noise power = 10,000 μW / 1 μW = 10,000
SNRdB = 10 log10(10,000) = 40

Example 3.32: The values of SNR and SNRdB for a noiseless channel are:
SNR = signal power / 0 = ∞ and SNRdB = ∞
We can never achieve this ratio in real life; it is an ideal.

Ref. B. A. Forouzan, 5/e, p.81
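Example 3.31 can be checked with a one-line helper (a minimal sketch; the function name is mine):

```python
from math import log10

def snr_db(signal_power, noise_power):
    # SNR = signal power / noise power, expressed in decibels
    return 10 * log10(signal_power / noise_power)

# Example 3.31: 10 mW signal, 1 uW noise -> SNR = 10,000, SNR_dB = 40
print(10e-3 / 1e-6)         # plain ratio: 10,000
print(snr_db(10e-3, 1e-6))  # 40 dB
```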


Two cases of SNR: a high SNR and a low SNR

QUIZ: Can an amplifier improve the SNR?


Ref. B. A. Forouzan,5/e, p.80
Noise Figure
Noise factor, NF = (Si/Ni)/(So/No): a measure of the degradation of SNR caused by the components of a device or system.
Noise figure, F = 10 log10 NF

Si = signal power at input
So = signal power at output
Ni = noise power at input
No = noise power at output
QUIZ: For a real device, F (in dB) is never negative. Why?

The lower the value of F, the better the performance of the system.
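A minimal sketch of the noise-factor calculation (the numerical values are illustrative assumptions, not from the slides):

```python
from math import log10

def noise_factor(si, ni, so, no):
    # NF = (Si/Ni) / (So/No): ratio of input SNR to output SNR
    return (si / ni) / (so / no)

def noise_figure_db(nf):
    # F = 10*log10(NF); NF >= 1 for a real device, so F >= 0 dB
    return 10 * log10(nf)

# Illustrative: an amplifier with gain 10 that adds its own noise,
# so the output SNR (500) is worse than the input SNR (1000).
nf = noise_factor(si=1.0, ni=0.001, so=10.0, no=0.02)
print(noise_figure_db(nf))   # ~3 dB degradation
```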


Channel Capacity/Data Rate Limits
Noiseless channel: Nyquist bit rate. Noisy channel: Shannon capacity.
A very important consideration in data communications is how fast we can send data, in bits per second, over a channel. The data rate depends on three factors:
1. The bandwidth available
2. The level of the signals we use (signal power)
3. The quality of the channel (the level of noise)
The maximum rate at which data can be transmitted over a given communication path, or channel, under given conditions, is referred to as the channel capacity.
Harry Nyquist (1889-1976), a Swedish-born physicist and electronic engineer.
Ref: [Wikipedia]

Claude Elwood Shannon (1916-2001), an American mathematician, electrical engineer, and cryptographer, also known as a "father of information theory".

Ref. B. A. Forouzan, 5/e, p.81


Noiseless Channel: Nyquist Bit Rate

With multilevel signaling, the Nyquist formulation becomes [Ref. W. Stalling, 10/e, P.98]
C = 2B log2 M   [unit: bps]
= theoretical maximum bit rate (i.e., channel capacity), where B is the bandwidth in Hz and M is the number of discrete signal or voltage levels.

 This limitation is due to the effect of intersymbol interference, such as is produced by delay distortion.
For a given bandwidth, the data rate can be increased by increasing the number of signal levels. However, this places an increased burden on the receiver: instead of distinguishing one of two possible signal levels during each signal time, it must distinguish one of M possible signal levels.
If the number of levels in a signal is just 2, the receiver can easily distinguish between a 0 and a 1. If the number of levels is 64, the receiver must be very sophisticated to distinguish between 64 different levels.
Increasing the number of levels of a signal may reduce the reliability of the system.
(Reliability: the probability that a system performs correctly during a specific time duration.)
Two digital signals: one with 2 signal levels and the other with 4 signal levels.
Baud rate = number of signal units per second; bit rate = number of bits per second.
M = 2: each signal unit carries 1 bit, so baud rate = bit rate = 8 baud at 8 bps.
M = 4: each signal unit carries 2 bits, so baud rate = bit rate/2 = 8 baud at 16 bps.

Ref. B. A. Forouzan, 5/e, p.68


M=? QUIZ M=?

Source: https://elec2pcb.com/2022/04/19/su-khac-biet-giua-toc-do-bit-bit-rate-va-toc-do-truyen-baud-rate/
Example 3.34: Consider a noiseless channel with a bandwidth of 3000 Hz transmitting a signal with two signal levels. The maximum bit rate is
C = 2 × 3000 × log2 2 = 6000 bps

Example 3.35: Consider the same noiseless channel transmitting a signal with four signal levels (for each level, we send 2 bits). The maximum bit rate is
C = 2 × 3000 × log2 4 = 12,000 bps

Ref. B. A. Forouzan, 5/e, p.82
[Ref. W. Stalling, 10/e, P.98]

Example 3.36: We need to send 265 kbps over a noiseless channel with a bandwidth of 20 kHz. How many signal levels do we need? (Here, Forouzan’s L is Stallings’ M.)
265,000 = 2 × 20,000 × log2 L  ⇒  log2 L = 6.625  ⇒  L = 2^6.625 ≈ 98.7

Since this result is not a power of 2, we need to either increase the number of levels or reduce the bit rate. With 2⁷ = 128 levels the bit rate is 2 × 20,000 × 7 = 280 kbps; with 2⁶ = 64 levels it is 2 × 20,000 × 6 = 240 kbps.

Note: 280 kbps vs. 240 kbps is not much degradation, so 64 levels can be a cost-effective solution for this case to avoid receiver complexity.

Ref. B. A. Forouzan, 5/e, p.82
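Examples 3.34-3.36 can be reproduced with a small sketch of the Nyquist formula (function names are mine):

```python
from math import log2

def nyquist_capacity(bandwidth_hz, levels):
    # Noiseless channel: C = 2 * B * log2(M) bits per second
    return 2 * bandwidth_hz * log2(levels)

print(nyquist_capacity(3000, 2))    # Example 3.34: 6000 bps
print(nyquist_capacity(3000, 4))    # Example 3.35: 12000 bps

# Example 3.36: levels needed for 265 kbps over 20 kHz
levels = 2 ** (265_000 / (2 * 20_000))   # ~98.7, not a power of 2
print(levels)
print(nyquist_capacity(20_000, 128))     # 280000 bps with 128 levels
print(nyquist_capacity(20_000, 64))      # 240000 bps with 64 levels
```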


Noisy Channel: Shannon Capacity Formula

Nyquist’s formula indicates that, all other things being equal, doubling the bandwidth doubles the data rate. Now consider the relationship among data rate, noise, and error rate. The presence of noise can corrupt one or more bits. If the data rate is increased, then the bits become “shorter”, so that more bits are affected by a given pattern of noise.

All of these concepts can be tied together neatly in a formula developed by the mathematician Claude Shannon. As we have just illustrated, the higher the data rate, the more damage that unwanted noise can do. For a given level of noise, we would expect that a greater signal strength would improve the ability to receive data correctly in the presence of noise.
Shannon’s result is that the maximum channel capacity, in bits per second, obeys the equation
C = B log2(1 + SNR)
where C is the capacity of the channel in bits per second and B is the bandwidth of the channel in hertz. The Shannon formula represents the theoretical maximum that can be achieved.

[Ref. W. Stalling, 10/e, P.99]
In practice, however, only much lower rates are achieved. One reason for this is that the formula assumes white noise (thermal noise). Impulse noise is not accounted for, nor are attenuation distortion and delay distortion. Even in an ideal white-noise environment, present technology still cannot achieve Shannon capacity due to encoding issues, such as coding length and complexity.

The capacity indicated in the preceding equation is referred to as the error-free capacity. Shannon proved that if the actual information rate on a channel is less than the error-free capacity, then it is theoretically possible to use a suitable signal code to achieve error-free transmission through the channel.

Shannon’s theorem unfortunately does not suggest a means for finding such codes, but it does provide a yardstick by which the performance of practical communication schemes may be measured.

We define the spectral efficiency, also called bandwidth efficiency, of a digital transmission as the number of bits per second of data that can be supported by each hertz of bandwidth. The theoretical maximum spectral efficiency is C/B = log2(1 + SNR); C/B has the dimensions bps/Hz. At SNR = 1, we have C/B = 1. [Ref. W. Stalling, 10/e, P.100]
SNR = signal power / noise power

For a given level of noise, it would appear that the data rate could be increased by increasing either signal strength or bandwidth. However, as the signal strength increases, the effects of nonlinearities in the system also increase, leading to an increase in intermodulation noise. Note also that, because noise is assumed to be white, the wider the bandwidth, the more noise is admitted to the system. Thus, as B increases, SNR decreases.
[Ref. W. Stalling, 10/e, P.101]


SNRdB = 24  ⇒  SNR = 10^(24/10) = 10^2.4 ≈ 251

[Ref. W. Stalling, 10/e, P.102]


Example 3.37: Consider an extremely noisy channel in which the value of the signal-to-noise ratio is almost zero. In other words, the noise is so strong that the signal is faint. For this channel the capacity is
C = B log2(1 + 0) = B × 0 = 0
This means that the capacity of this channel is zero regardless of the bandwidth. In other words, we cannot receive any data through this channel.

Example 3.38: We can calculate the theoretical highest bit rate of a regular telephone line. A telephone line normally has a bandwidth of 3000 Hz. The signal-to-noise ratio is usually 3162. For this channel the capacity is
C = 3000 × log2(1 + 3162) ≈ 3000 × 11.62 = 34,860 bps
This means that the highest bit rate for a telephone line is 34.860 kbps. If we want to send data faster than this, we can either increase the bandwidth of the line or improve the signal-to-noise ratio.

Ref. B. A. Forouzan, 5/e, p.83


Example 3.39: The signal-to-noise ratio is often given in decibels. Assume that SNRdB = 36 and the channel bandwidth is 2 MHz. Then SNR = 10^(36/10) ≈ 3981, and the theoretical channel capacity is
C = 2×10⁶ × log2(1 + 3981) ≈ 2×10⁶ × 11.96 ≈ 24 Mbps

Example 3.40: For practical purposes, when the SNR is very high, we can assume that SNR + 1 is almost the same as SNR, so log2(1 + SNR) ≈ log2(SNR), and the theoretical channel capacity simplifies to
C ≈ B × log2(SNR) ≈ B × SNRdB / 3
For example, the theoretical capacity of the previous example becomes C ≈ 2 MHz × 36/3 = 24 Mbps.

Ref. B. A. Forouzan, 5/e, p.83


Example 3.41: We have a channel with a 1-MHz bandwidth. The SNR for this channel is 63. What are the appropriate bit rate and signal level?

The Shannon formula gives the upper limit: C = 10⁶ × log2(1 + 63) = 6 Mbps. For better performance we choose something lower, 4 Mbps, for example. Then we use the Nyquist formula to find the number of signal levels: 4×10⁶ = 2 × 10⁶ × log2 L  ⇒  L = 4.

Ref. B. A. Forouzan, 5/e, p.83

The Shannon capacity gives us the upper limit; the Nyquist formula tells us how many signal levels we need.
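The two-step procedure of Example 3.41 can be sketched as follows (function names are mine):

```python
from math import log2

def shannon_capacity(bandwidth_hz, snr):
    # Noisy channel: C = B * log2(1 + SNR) bits per second
    return bandwidth_hz * log2(1 + snr)

def nyquist_levels(bandwidth_hz, bit_rate):
    # Levels needed to carry bit_rate over bandwidth_hz: M = 2^(C / 2B)
    return 2 ** (bit_rate / (2 * bandwidth_hz))

# Example 3.41: B = 1 MHz, SNR = 63
print(shannon_capacity(1e6, 63))   # 6 Mbps upper limit
print(nyquist_levels(1e6, 4e6))    # 4 levels for the chosen 4 Mbps
```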

Sample Question
Home Work
Q. Suppose a signal with 0 dBm power is launched at point 1 as shown below. (i) Calculate the received power of the signal at point 4. (ii) If the transmission medium is considered noiseless and can pass a signal with frequencies spanning from 1 kHz to 4 kHz, what is the maximum data rate of this link? Assume a 16-level signaling scheme. (iii) If −61 dBm of noise power is added at point 4, what is the maximum number of bits that can be sent through this link in one second?
Ref. B. P. Lathi, 3/e, p.711
Shannon capacity for unlimited bandwidth: writing C = B log2(1 + S/(N0·B)) and letting B → ∞,
C → (S/N0) log2 e ≈ 1.44 S/N0  [bps], since log2 e = log10 e / log10 2 = 0.434/0.301 ≈ 1.44.
Capacity can be made infinite only by increasing the signal power to infinity, which is not practical.
Ref. B. P. Lathi, 3/e, p.712
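The infinite-bandwidth limit above can be written out as a short derivation (standard calculus, consistent with Lathi's result):

```latex
C \;=\; B\log_2\!\Bigl(1+\frac{S}{N_0 B}\Bigr)
  \;=\; \frac{S}{N_0}\cdot\frac{N_0 B}{S}\log_2\!\Bigl(1+\frac{S}{N_0 B}\Bigr)
  \;\xrightarrow[\;B\to\infty\;]{}\; \frac{S}{N_0}\log_2 e \;\approx\; 1.44\,\frac{S}{N_0}
```

using $\lim_{x\to 0}\frac{1}{x}\log_2(1+x)=\log_2 e$ with $x = S/(N_0 B)$, and $\log_2 e = \log_{10}e/\log_{10}2 = 0.434/0.301 \approx 1.44$.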
Sample Question

"Reduced by" or "reduced to"?

Solution: B = 4000 Hz, S1/N1 = 7, N0 = noise power spectral density, N1 = N0·B = noise power, N2 = N0·B/4.
C = 4000 × log2(1 + 7) = 12,000 bps

Note on wording:
"Reduced by" = the figure by how much.
"Reduced to" = the resultant figure.
E.g., 10 − 2 = 8: here, 10 is reduced by 2 and the resultant is reduced to 8.
Therefore, as per the given solution, the concerned term for this problem should be "reduced to 25%". (Ans.)
Ref. B. A. Forouzan, 5/e, p.84-85
In networking, we use the term bandwidth in two contexts.

 The first, bandwidth in hertz, refers to the range of frequencies in a composite signal or the range of
frequencies that a channel can pass.
 The second, bandwidth in bits per second, refers to the speed of bit transmission in a channel or link.

Example 3.44: A network with bandwidth of 10 Mbps can pass only an average of 12,000 frames per minute with each frame carrying an average of 10,000 bits. What is the throughput of this network?
Throughput = (12,000 × 10,000)/60 = 2,000,000 bps = 2 Mbps
The throughput is almost one-fifth of the bandwidth in this case.
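The arithmetic of Example 3.44 in a few lines:

```python
# Example 3.44: 12,000 frames per minute, 10,000 bits per frame,
# over a link whose nominal bandwidth is 10 Mbps.
frames_per_minute = 12_000
bits_per_frame = 10_000
throughput_bps = frames_per_minute * bits_per_frame / 60
print(throughput_bps)   # 2,000,000 bps = 2 Mbps, ~1/5 of 10 Mbps
```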

Bandwidth vs Data rate vs Throughput

Say, bandwidth of a channel = 5 MHz.

Channel capacity (maximum data rate) = 10 Mbps (assuming a noise-free channel and a 2-level signaling scheme).
What is Channel Coding?
 Channel coding is a process of detecting and correcting bit errors in digital
communication systems.
 The conceptual block diagram of a modern wireless communication system is shown
below, where the channel coding block is shown in the inset of the dotted block.
Data compression Transmit antenna

Data decompression

[Ref.] Saleh Faruque, “Radio Frequency Channel Coding Made Easy”, (2016), p.2
 At the transmit side, channel coding is referred to as encoder, where redundant
bits (parity bits) are added with the raw data before modulation.

 At the receive side, channel coding is referred to as the decoder. This enables
the receiver to detect and correct errors, if they occur during transmission due to
noise, interference and fading*.
*fading is the variation of the attenuation of a signal with various variables such as multipath
propagation, rain, and shadowing from obstacles affecting the wave propagation.

 Since error control coding adds extra bits to detect and correct errors,
transmission of coded information requires more bandwidth.

[Ref.] Saleh Faruque, “Radio Frequency Channel Coding Made Easy”, (2016), p.2
 As the size and speed of digital data networks continue to expand, bandwidth
efficiency becomes increasingly important. This is especially true for broadband
communication, where the digital signal processing is done keeping in mind the
available bandwidth resources.

 Channel coding forms a very important preprocessing step in the transmission of


digital data (bit-stream).

 Since bandwidth is scarce and therefore expensive, a coding technique that


requires fewer redundant bits without sacrificing error performance is highly
desirable.

[Ref.] Saleh Faruque, “Radio Frequency Channel Coding Made Easy”, (2016), p.2
Types of Channel Coding
 Channel coding attempts to utilize redundancy to minimize the effect of
various channel impairments, such as noise and fading, and therefore
increase the performance of the communication system.

 There are two basic ways of implementing redundancy to control errors:

(1) Automatic repeat request (ARQ)


(2) Forward error control coding (FECC), or Forward error correction (FEC)

[Ref.] Saleh Faruque, “Radio Frequency Channel Coding Made Easy”, (2016), p.3
ARQ Technique

[Ref.] Saleh Faruque, “Radio Frequency Channel Coding Made Easy”, (2016), p.3
 The ARQ technique adds parity, or redundant bits, to the transmitted data
stream that are used by the decoder to detect an error in the received data.
When the receiver detects an error, it requests that the data be retransmitted
by the transmitter. This continues until the message is received correctly.
 In ARQ, the receiver does not attempt to correct the error, but rather it
sends an alert to the transmitter in order to inform it that an error was
detected and a retransmission is needed. This is known as a negative
acknowledgement (NAK), and the transmitter retransmits the message upon
receipt. If the message is error-free, the receiver sends an acknowledgement
(ACK) to the transmitter.
 This form of error control is only capable of detecting errors; it has no ability
to correct errors that have been detected.
[Ref.] Saleh Faruque, “Radio Frequency Channel Coding Made Easy”, (2016), p.3
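The ACK/NAK loop described above can be sketched as a toy stop-and-wait simulation (the even-parity check and the `corruption_schedule` list are illustrative assumptions, not part of the referenced text):

```python
def even_parity_encode(bits):
    # Append one parity bit so the total number of 1s is even
    return bits + [sum(bits) % 2]

def parity_ok(codeword):
    # Receiver-side error detection: even parity must hold
    return sum(codeword) % 2 == 0

def stop_and_wait_arq(data_bits, corruption_schedule):
    """Toy stop-and-wait ARQ: retransmit until the receiver's parity
    check passes. corruption_schedule[i] says whether the i-th
    transmission attempt flips a bit (a scripted stand-in for a
    noisy channel)."""
    attempts = 0
    while True:
        frame = even_parity_encode(data_bits)
        corrupted = (corruption_schedule[attempts]
                     if attempts < len(corruption_schedule) else False)
        attempts += 1
        if corrupted:
            frame[0] ^= 1           # single bit flip
        if not parity_ok(frame):
            continue                # receiver sends NAK -> retransmit
        return frame[:-1], attempts # receiver sends ACK -> deliver data

data, tries = stop_and_wait_arq([1, 0, 1, 1],
                                corruption_schedule=[True, True, False])
print(data, tries)   # [1, 0, 1, 1] 3  -- two NAKs, then success
```

Note that, as the slide says, the parity check only detects errors; the receiver never tries to repair the frame itself.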
FECC Technique or FEC
 In a system which utilizes FECC or FEC coding, the data are encoded
with the redundant bits to allow the receiver to not only detect errors,
but to correct them as well.

 In this system, a sequence of data signals is transformed into a longer


sequence that contains enough redundancy to protect the data. This
type of error control is also classified as channel coding, because these
methods are often used to correct errors that are caused by channel
noise.

 The goal of all FECC techniques is to detect and correct as many errors
as possible without greatly increasing the data rate or the bandwidth.
Application: ARQ vs FEC
 The choice of ARQ or FEC depends on the particular
application.

 ARQ is often used where there is a full-duplex (2-way)


channel because it is relatively inexpensive to implement.

 FEC is used where the channel is not full-duplex or where
ARQ is not desirable because real-time operation is required.
Types of Channel Codes
 There are two main categories of channel codes:
(1) Block codes
(2) Convolutional codes.
 Block codes accept a block of k information bits and produce a block of n coded bits.
By predetermined rules, n-k redundant bits are added to the k information bits to form
the n coded bits. Commonly, these codes are referred to as (n,k) block codes. k/n is
known as code rate or code efficiency.
Example: Hamming Codes and Cyclic Codes

 Convolutional codes convert the entire data stream into one single codeword. The
encoded bits n depend not only on the current k information bits but also on the
previous information bits.
Example: Turbo codes
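As an illustration of an (n, k) block code, here is a minimal Hamming (7,4) sketch (k = 4, n = 7, code rate 4/7, so n − k = 3 redundant bits); the bit layout follows a common textbook convention and is an assumption here, not taken from the slides:

```python
def hamming74_encode(d):
    # (7,4) block code: 4 data bits + 3 parity bits, code rate k/n = 4/7
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7

def hamming74_decode(c):
    # The syndrome bits point at the (1-indexed) position of a single error
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 * 1 + s2 * 2 + s3 * 4
    c = c[:]
    if pos:
        c[pos - 1] ^= 1                   # correct the flipped bit
    return [c[2], c[4], c[5], c[6]]       # recover the 4 data bits

code = hamming74_encode([1, 0, 1, 1])
code[4] ^= 1                              # channel flips one bit
print(hamming74_decode(code))             # [1, 0, 1, 1] recovered
```

Unlike the parity-only ARQ check, this FEC code corrects any single bit error per block at the receiver, at the price of 3 extra transmitted bits per 4 data bits.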
Choice of Channel Codes
 The choice of a channel coding scheme for a particular application is a trade-off between
various factors:
(i) Code rate (the ratio between the number of information symbols and the number of code symbols)
(ii) Reliability (the bit or message error probability)
(iii) Complexity (the number of calculations required to perform the encoding and decoding operations).

 Shannon showed in his landmark paper (1948) that virtually error-free communication is
possible at any rate below the channel capacity. However, his result did not include explicit
constructions and allowed for infinite bandwidth and complexity.
 Hence, ever since 1948, scientists and engineers have been working to further develop
coding theory and to find practically implementable coding schemes.
 The introduction of turbo codes in 1993 caused a true revolution in error control coding.
These codes allow transmission rates that closely approach channel capacity.

[Ref.] A book chapter on channel coding, downloadable from https://www.researchgate.net/publication/353917144_Channel_coding
