Ece3151 (B) (14.3.24)
ECE 3151
Section B
(2024)
Class Hours
Sunday: 11:20-12:20
Wednesday: 11:20-12:20
Thursday: 11:20-12:20
Recommended Books
1. B. A. Forouzan, “Data Communications and Networking”, 5/e
2. W. Stallings, “Data and Computer Communications”, 10/e
3. B. P. Lathi, “Modern Digital and Analog Communication Systems”, 3/e
4. G. Keiser, “Fiber Optic Communications”, Springer, 2021
Lecture Plan
Topics to be Covered:
Calculation of detection-error probability (3 hrs.)
Transmission impairments and noise (1 hr.)
Concept of channel coding and channel capacity (1 hr.)
Error detection and correction codes (3 hrs.)
Communication medium (1 hr.)
Basics of fiber-optic communication systems (3 hrs.)
Assessment:
2 class tests (CT)
(best one will be counted)
CT-1: 20.3.2024
CT-2: 04.04.2024
Examples of Communication Systems
Source → Medium → Destination
• Source originates the message
• Medium carries the message
• Destination unit receives the message
Example messages: human voice, television picture, teletype message/data
Channel coding (encoder) is applied before modulation; channel decoding (decoder) is applied after demodulation. Both blocks use error detection and correction codes.
Transmission impairments
[Figure: Gaussian pdf f(x) with peak amplitude A at x = B; horizontal-axis ticks at B − C, B, B + C]
f(x) = A·exp(−(x − B)²/(2C²)), where the peak amplitude A = 1/(C√(2π))
x = continuous random variable
f(x) = probability density function
A = peak amplitude
B = mean
C = standard deviation
C² = variance
Noise can be modeled by a Gaussian distribution.
Source: wikipedia
Source: DOI:10.1109/APCC.2012.6388180
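A minimal MATLAB sketch of this noise model (mean and standard deviation assumed for illustration): a histogram of Gaussian noise samples laid over the density f(x) defined above.
% Gaussian noise model: compare a noise-sample histogram with f(x)
B = 0; C = 1;                                      % assumed mean and std. deviation
n = B + C*randn(1, 1e5);                           % 100,000 Gaussian noise samples
x = linspace(B - 4*C, B + 4*C, 200);
f = 1/(C*sqrt(2*pi)) * exp(-(x - B).^2/(2*C^2));   % peak amplitude A = 1/(C*sqrt(2*pi))
histogram(n, 'Normalization', 'pdf'); hold on;     % empirical pdf of the samples
plot(x, f, 'LineWidth', 2); hold off;              % theoretical Gaussian density
xlabel('x'); ylabel('f(x)');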
QUIZ: P(α) = ? P(β) = ? P(Ap) = ?
If 1 and 0 are equally likely: P(−Ap) = ? [Hint: P(0) = P(1) = 50%]
Q-function: Q(x) = (1/√(2π)) ∫_x^∞ e^(−t²/2) dt = area under the Gaussian tail beyond x
Binary system: you are sending either 0 or 1.
Mutually exclusive events: events that do not occur simultaneously; e.g., tossing a coin, you get either Head or Tail.
Send events (2): S0 = 0 was sent, S1 = 1 was sent; P(S0) + P(S1) = 1
Detect events (2): D0 = 0 is detected, D1 = 1 is detected; P(D0) + P(D1) = 1
Joint events (4): D0·S0 = 0 sent, 0 detected; D1·S1 = 1 sent, 1 detected; D1·S0 = 0 sent, 1 detected; D0·S1 = 1 sent, 0 detected
(P = probability)
P(bit error) = P(D0·S1) + P(D1·S0)
From the product rule:
P(D0|S1) = P(D0·S1)/P(S1) ⇒ P(D0·S1) = P(S1)·P(D0|S1)
P(D1|S0) = P(D1·S0)/P(S0) ⇒ P(D1·S0) = P(S0)·P(D1|S0)
Therefore, P(bit error) = P(S1)·P(D0|S1) + P(S0)·P(D1|S0)
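As a numerical illustration, a minimal MATLAB sketch of this formula, assuming polar signaling with levels ±Ap, decision threshold at 0, Gaussian noise of standard deviation σn with Ap/σn = 5, and equally likely bits:
Q = @(x) 0.5*erfc(x/sqrt(2));        % Q(x) via base-MATLAB erfc (no toolbox needed)
Ap = 1; sigma = 0.2;                 % assumed values, so Ap/sigma = 5
P_S0 = 0.5; P_S1 = 0.5;              % 0 and 1 equally likely
P_D1_given_S0 = Q(Ap/sigma);         % 0 sent, noise carries sample above threshold
P_D0_given_S1 = Q(Ap/sigma);         % 1 sent, noise carries sample below threshold
Pe = P_S1*P_D0_given_S1 + P_S0*P_D1_given_S0   % = Q(5) ~ 2.87e-7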
Error probability for on-off signal
[Figure: bit pattern 1 0 1 transmitted with levels Ap/2 and −Ap/2, the corresponding noisy received waveforms, and the decision thresholds (dashed) midway between the signal levels; samples falling on the wrong side of a threshold cause detection errors]
% Compare the exact Q-function with Lathi's approximation
% Q(x) ~ (1/(x*sqrt(2*pi)))*(1 - 0.7/x^2)*exp(-x^2/2)
x = 1:0.1:6;
y = qfunc(x);              % exact Q(x); requires Communications Toolbox
                           % (equivalently: 0.5*erfc(x/sqrt(2)) in base MATLAB)
q1 = 1./(x*sqrt(2*pi));
q2 = 1 - 0.7./x.^2;
q3 = exp(-x.^2/2);
Q = q1.*q2.*q3;            % approximate Q(x)
semilogy(x, y, x, Q)       % the two curves nearly coincide for x > 2
axis tight;
Ref. B.P. Lathi,3/e, p.454
Q(5.36) = 0.4161×10⁻⁷; Q(6.12) = 0.4679×10⁻⁹
With Pe = Q(5) = 2.87×10⁻⁷: 1/(2.87×10⁻⁷) ≈ 3,484,320, i.e., on average one bit error per roughly 3.5 million bits.
Ref. B.P. Lathi,3/e, p.332
Ref. B.P. Lathi,3/e, p.458
Polar vs. On-off vs. Bipolar
The probability of error is 50% higher for the bipolar case than for the on-off case: Pe,on-off = Q(Ap/2σn) while Pe,bipolar ≈ 1.5·Q(Ap/2σn). This increase in error probability can be compensated by just a small increase in signal power.
To attain the same error probability as on-off with Ap/σn = 10, i.e., Pe = Q(5) ≈ 0.286×10⁻⁶:
1.5·Q(x) = 0.286×10⁻⁶ ⇒ Q(x) = 1.9066×10⁻⁷ ⇒ x ≈ 5.08, i.e., Ap/σn = 10.16
Amplitude increase: (10.16 − 10)/10 = 0.016 = 1.6%
Power increase: (10.16² − 10²)/10² = 0.03225 = 3.23%
This is just a 1.6% increase in the signal amplitude (or a 3.23% increase in signal power) required over the on-off case.
Ref. B.P. Lathi, 3/e, pp.332-333
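The threshold value x ≈ 5.08 can be reproduced numerically; a minimal MATLAB sketch using base-MATLAB erfc and fzero:
Q = @(x) 0.5*erfc(x/sqrt(2));                   % Q-function via erfc
x_onoff = 5;                                    % on-off case: Q(5) ~ 0.287e-6
x_bip = fzero(@(x) 1.5*Q(x) - 0.286e-6, 5)      % ~ 5.08 for the bipolar case
amp_increase = (x_bip - x_onoff)/x_onoff        % ~ 0.016  -> 1.6% in amplitude
pow_increase = (x_bip^2 - x_onoff^2)/x_onoff^2  % ~ 0.0323 -> 3.23% in power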
E = Ap²; 2E = 2Ap² = (√2·Ap)², i.e., doubling the pulse energy corresponds to a √2 increase in amplitude.
[Figure: rectangular pulse s(t) of unit height over −x/2 ≤ t ≤ x/2]
4/(3π) ≈ 0.424
Zoom in: with T = 1 µs and each bit occupying 0.5 µs, 1 µs carries 1/0.5 = 2 bits.
In 1 s: 2×10⁶ bits = 2 Mb, so the data rate = 2 Mbps.
Ref. W. Stalling, 10/e, p.82
Any digital waveform has infinite bandwidth, but a channel/medium's bandwidth is finite. For any given medium, the greater the bandwidth transmitted, the greater the cost. Therefore, economic and practical reasons dictate that digital information be approximated by a signal of limited bandwidth.
Example 3.27: A signal travels through an amplifier, and its power is increased 10 times. This means that P2 = 10·P1. In this case, the amplification (gain of power) can be calculated as
10·log₁₀(P2/P1) = 10·log₁₀(10) = 10 dB
Source: https://www.giangrandi.org/electronics/anttool/decibel.shtml
Example 3.28: One reason that engineers use the decibel to measure the changes in the strength of a signal is that decibel numbers can be added (or subtracted) when we are measuring several points (cascading) instead of just two. In the following figure a signal travels from point 1 to point 4. [In Forouzan's figure, the signal is attenuated 3 dB from point 1 to 2, amplified 7 dB from 2 to 3, and attenuated 3 dB from 3 to 4, so the net change is −3 + 7 − 3 = +1 dB.]
Example 3.29: Sometimes the decibel is used to measure signal power in milliwatts. In this case, it is
referred to as dBm and is calculated as dBm = 10 log10 Pm , where Pm is the power in milliwatts. Calculate
the power of a signal with dBm = −30.
dBm = 10·log₁₀(Pm) = −30 ⇒ log₁₀(Pm) = −3 ⇒ Pm = 10⁻³ mW
Ref. B. A. Forouzan, 5/e, p.78
Example 3.30: The loss in a cable is usually defined in decibels per kilometer (dB/km). If
the signal at the beginning of a cable with −0.3 dB/km has a power of 2 mW, what is the
power of the signal at 5 km? Ref. B. A. Forouzan,4/e, p.83
The loss in the cable in decibels is 5 × (−0.3) = −1.5 dB. We can calculate the received power as
10·log₁₀(P2/P1) = −1.5 ⇒ P2 = 2 mW × 10^(−1.5/10) ≈ 2 × 0.708 ≈ 1.42 mW
Ans: ≈ 1.42 mW
Ref. B. A. Forouzan,5/e, p.79
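The decibel arithmetic of Examples 3.27-3.30 can be verified in MATLAB:
% Example 3.27: power increased 10x
gain_dB = 10*log10(10)                 % = 10 dB
% Example 3.29: dBm = -30 -> power in milliwatts
Pm = 10^(-30/10)                       % = 1e-3 mW
% Example 3.30: 2 mW into a cable losing 0.3 dB/km, power after 5 km
P2 = 2*10^(5*(-0.3)/10)                % ~ 1.42 mW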
Distortion
Distortion means that the signal changes its form or shape.
Distortion can occur in a composite signal made of different frequencies. Each signal
component has its own propagation speed through a medium and, therefore, its own delay
in arriving at the final destination.
Differences in delay may create a difference in phase if the delay is not exactly the same as
the period duration.
In other words, signal components at the receiver have phases different from what they had
at the sender. The shape of the composite signal is therefore not the same.
Noise
Undesired signals are referred to as noise. Noise is the major limiting factor in communications.
Thermal noise power spectral density: N0 = kT ≈ (1.38×10⁻²³ J/K)(290 K) = 4×10⁻²¹ W/Hz
0 degrees Celsius = 273.15 K; 17 degrees Celsius = 273.15 + 17 ≈ 290 K; 10·log(4×10⁻²¹) = −203.98 ≈ −204 dBW/Hz
The temperature at which a resistor would produce thermal noise power equal to the noise power produced by the circuit or device is known as the effective noise temperature.
[Ref. W. Stalling, 10/e, P.97]
Example: a noise burst of 0.01 s duration on a 56,000 b/s link destroys 56,000 b/s × 0.01 s = 560 bits.
[Ref. W. Stalling, 10/e, P.97]
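A quick MATLAB check of the thermal-noise and impulse-noise numbers above (k is Boltzmann's constant):
k = 1.38e-23;                  % Boltzmann's constant (J/K)
T = 273.15 + 17;               % ~ 290 K
N0 = k*T                       % ~ 4e-21 W/Hz
N0_dBW = 10*log10(N0)          % ~ -204 dBW/Hz
bits_lost = 56000*0.01         % 0.01 s burst at 56 kbps -> 560 bits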
[Ref. W. Stalling, 10/e, P.98]
Example 3.31: The power of a signal is 10 mW and the power of the noise is 1 μW; what
are the values of SNR and SNRdB ?
SNR (signal-to-noise ratio) = signal power / noise power
10 mW = 10,000 µW
SNR = 10,000 µW / 1 µW = 10,000
SNRdB = 10·log₁₀(10,000) = 40
Example 3.32: The values of SNR and SNRdB for a noiseless channel are:
SNR = (signal power)/0 = ∞ and SNRdB = ∞; this ideal ratio can never be achieved in real life.
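A two-line MATLAB check of Example 3.31:
SNR = 10e-3/1e-6               % 10 mW / 1 uW = 10,000
SNR_dB = 10*log10(SNR)         % = 40 dB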
With multilevel signaling, the Nyquist formulation becomes [Ref. W. Stalling, 10/e, P.98]
C = 2·B·log₂(M)
= theoretical maximum bit rate (i.e., channel capacity) [Unit: bps]
where M is the number of discrete signal or voltage levels and B is the bandwidth in hertz.
This limitation is due to the effect of intersymbol interference, such as is produced by delay distortion.
For a given bandwidth, the data rate can be increased by increasing the signal levels.
However, this places an increased burden on the receiver: Instead of distinguishing one
of two possible signal levels during each signal time, it must distinguish one of M
possible signal levels.
If the number of levels in a signal is just 2, the receiver can easily distinguish
between a 0 and a 1. If the level of a signal is 64, the receiver must be very
sophisticated to distinguish between 64 different levels.
Increasing the levels of a signal may reduce the reliability of the system.
(Reliability: the probability that a system performs correctly during a specific time duration.)
[Figure: two digital signals, one with two signal levels (M = 2) and one with four signal levels (M = 4); for M = 2, baud rate = bit rate = 8 bps, while for M = 4 each signal unit carries 2 bits, so the baud rate is half the bit rate]
Baud rate = number of signal units per second
Source: https://elec2pcb.com/2022/04/19/su-khac-biet-giua-toc-do-bit-bit-rate-va-toc-do-truyen-baud-rate/
Example 3.34: Consider a noiseless channel with a bandwidth of 3000 Hz transmitting a signal with two signal levels. The maximum bit rate can be calculated as
C = 2 × 3000 × log₂(2) = 6000 bps
Example 3.35: Consider the same noiseless channel transmitting a signal with four signal levels (for each level, we send 2 bits). The maximum bit rate can be calculated as
C = 2 × 3000 × log₂(4) = 12,000 bps
(Here, L = M: Forouzan writes the number of levels as L.)
Example 3.36: We need to send 265 kbps over a noiseless channel with a bandwidth of 20 kHz. How many signal levels do we need?
265,000 = 2 × 20,000 × log₂(L) ⇒ log₂(L) = 6.625 ⇒ L = 2^6.625 ≈ 98.7
Since this result is not a power of 2 (2⁷ = 128; 2⁶ = 64), we need to either increase the number of levels or reduce the bit rate. If we have 128 levels, the bit rate is 2 × 20,000 × 7 = 280 kbps; with 64 levels it is 240 kbps.
Note: 280 kbps vs. 240 kbps is not much degradation, so 64 levels can be a cost-effective solution for this case to avoid receiver complexity.
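The Nyquist numbers in Examples 3.34-3.36, checked in MATLAB (C = 2·B·log₂L):
C1 = 2*3000*log2(2)            % Example 3.34: 6000 bps
C2 = 2*3000*log2(4)            % Example 3.35: 12,000 bps
L  = 2^(265000/(2*20000))      % Example 3.36: ~ 98.7 levels (not a power of 2)
C128 = 2*20000*log2(128)       % 280 kbps with 128 levels
C64  = 2*20000*log2(64)        % 240 kbps with 64 levels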
Nyquist’s formula indicates that, all other things being equal, doubling the bandwidth doubles the data rate. Now
consider the relationship among data rate, noise, and error rate. The presence of noise can corrupt one or more bits.
If the data rate is increased, then the bits become “shorter” so that more bits are affected by a given pattern of noise. Thus, the higher the data rate, the more damage unwanted noise can do. For a given level of noise, we would expect that a greater signal strength would improve the ability to receive data correctly in the presence of noise.
Shannon’s result is that the maximum channel capacity, in bits per second, obeys the equation
C = B·log₂(1 + SNR)
where C is the capacity of the channel in bits per second and B is the bandwidth of the channel in hertz. The formula assumes white noise (thermal noise). Impulse noise is not accounted for, nor are attenuation distortion and delay distortion. Even in an ideal white noise environment, present technology still cannot achieve Shannon capacity due to encoding issues, such as coding length and complexity.
The capacity indicated in the preceding equation is referred to as the error-free capacity. Shannon
proved that if the actual information rate on a channel is less than the error-free capacity, then it is
theoretically possible to use a suitable signal code to achieve error-free transmission through the channel.
Shannon’s theorem unfortunately does not suggest a means for finding such codes, but it does provide a yardstick by which the performance of practical communication schemes may be measured.
We define the spectral efficiency, also called bandwidth efficiency, of a digital transmission as the number of bits per second of data that can be supported by each hertz of bandwidth. The theoretical maximum spectral efficiency can be expressed as C/B = log₂(1 + SNR). C/B has the dimensions bps/Hz. At SNR = 1, we have C/B = 1. [Ref. W. Stalling, 10/e, P.100]
SNR=signal power/noise power
For a given level of noise, it would appear that the data rate could be increased by increasing either the signal strength or the bandwidth. However, because the noise is assumed to be white, the wider the bandwidth, the more noise is admitted to the system. Thus, as B increases, SNR decreases. [Ref. W. Stalling, 10/e, P.101]
Example 3.37: Consider an extremely noisy channel in which the value of the signal-to-noise ratio is almost zero. For this channel, C = B·log₂(1 + 0) = B·log₂(1) = 0.
This means that the capacity of this channel is zero regardless of the bandwidth. In other words, we cannot receive any data through this channel.
Example 3.38: We can calculate the theoretical highest bit rate of a regular telephone
line. A telephone line normally has a bandwidth of 3000 Hz. The signal-to-noise ratio is
usually 3162. For this channel the capacity is calculated as
C = 3000 × log₂(1 + 3162) ≈ 3000 × 11.62 = 34,860 bps
This means that the highest bit rate for a telephone line is 34.860 kbps. If we want to send data faster than this, we can either increase the bandwidth of the line or improve the signal-to-noise ratio.
Example 3.40: For practical purposes, when the SNR is very high, we can assume that SNR + 1 is almost the same as SNR. In these cases, the theoretical channel capacity can be simplified to
C = B × SNRdB / 3
For example, the theoretical capacity of the previous example is C = 3000 × 35/3 = 35,000 bps = 35 kbps (since SNRdB = 10·log₁₀(3162) ≈ 35).
Example 3.41: We have a channel with a 1-MHz bandwidth and an SNR of 63. The Shannon formula gives us C = 10⁶ × log₂(1 + 63) = 6 Mbps, the upper limit. For better performance we choose something lower, 4 Mbps, for example. Then we use the Nyquist formula to find the number of signal levels: 4×10⁶ = 2 × 10⁶ × log₂(L) ⇒ L = 4.
The Shannon capacity gives us the upper limit; the Nyquist formula tells us how many signal levels we need.
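A MATLAB check of the Shannon-then-Nyquist design flow in Examples 3.38-3.41 (values taken from the examples above):
C_tel = 3000*log2(1 + 3162)      % Example 3.38: ~ 34,860 bps
C_apx = 3000*(10*log10(3162))/3  % Example 3.40: ~ 35,000 bps (high-SNR approx.)
C_max = 1e6*log2(1 + 63)         % Example 3.41: 6 Mbps Shannon upper limit
L = 2^(4e6/(2*1e6))              % choose 4 Mbps -> Nyquist gives L = 4 levels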
Sample Question
Home Work
Q. Suppose a signal with 0 dBm power is launched at point 1 as shown below. (i) Calculate the received power of the signal at point 4. (ii) If the transmission medium is considered noiseless and can pass a signal with frequencies spanning from 1 kHz to 4 kHz, what is the maximum data rate of this link? Assume a 16-level signaling scheme. (iii) If −61 dBm of noise power is added at point 4, what is the maximum number of bits that can be sent through this link in one second?
Ref. B. P. Lathi,3/e, p.711
As B → ∞, the Shannon capacity approaches a finite limit:
C = B·log₂(1 + S/(N₀B)) → (S/N₀)·log₂(e) ≈ 1.44·S/N₀ bps
since log₂(e) = log₁₀(e)/log₁₀(2) = 0.434/0.301 ≈ 1.44.
Capacity can be made infinite only by increasing the signal power to infinity, which is not practical.
Ref. B. P. Lathi, 3/e, p.712
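A quick numeric illustration of this saturation, with S/N₀ = 1 assumed for convenience:
S_over_N0 = 1;                          % assumed S/N0 (units of Hz)
B = logspace(0, 6, 200);                % bandwidth sweep: 1 Hz to 1 MHz
C = B.*log2(1 + S_over_N0./B);          % Shannon capacity vs. bandwidth
semilogx(B, C); hold on;
semilogx(B, 1.44*S_over_N0*ones(size(B)), '--'); hold off;  % asymptote 1.44*S/N0
xlabel('B (Hz)'); ylabel('C (bps)');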
Sample Question
by/to?
Solution: B = 4000 Hz, S1/N1 = 7, N0 = noise power spectral density, N1 = N0·B = noise power, N2 = N0·B/4
The first, bandwidth in hertz, refers to the range of frequencies in a composite signal or the range of
frequencies that a channel can pass.
The second, bandwidth in bits per second, refers to the speed of bit transmission in a channel or link.
Example 3.44: A network with bandwidth of 10 Mbps can pass only an average of 12,000 frames per minute with each frame carrying an average of 10,000 bits. What is the throughput of this network?
Throughput = (12,000 × 10,000)/60 s = 2 Mbps
Channel capacity (maximum data rate) = 10 Mbps (assuming a noise-free channel and a 2-level signaling scheme); the actual throughput, 2 Mbps, is only one-fifth of it.
What is Channel Coding?
Channel coding is a process of detecting and correcting bit errors in digital
communication systems.
The conceptual block diagram of a modern wireless communication system is shown below, where the channel coding block is shown in the inset of the dotted block.
[Figure: transmit chain (data compression → channel coding → modulation → transmit antenna) and receive chain (receive antenna → demodulation → channel decoding → data decompression)]
[Ref.] Saleh Faruque, “Radio Frequency Channel Coding Made Easy”, (2016), p.2
At the transmit side, channel coding is referred to as encoder, where redundant
bits (parity bits) are added with the raw data before modulation.
At the receive side, channel coding is referred to as the decoder. This enables
the receiver to detect and correct errors, if they occur during transmission due to
noise, interference and fading*.
*fading is the variation of the attenuation of a signal with various variables such as multipath
propagation, rain, and shadowing from obstacles affecting the wave propagation.
Since error control coding adds extra bits to detect and correct errors,
transmission of coded information requires more bandwidth.
[Ref.] Saleh Faruque, “Radio Frequency Channel Coding Made Easy”, (2016), p.2
As the size and speed of digital data networks continue to expand, bandwidth
efficiency becomes increasingly important. This is especially true for broadband
communication, where the digital signal processing is done keeping in mind the
available bandwidth resources.
[Ref.] Saleh Faruque, “Radio Frequency Channel Coding Made Easy”, (2016), p.2
Types of Channel Coding
Channel coding attempts to utilize redundancy to minimize the effect of
various channel impairments, such as noise and fading, and therefore
increase the performance of the communication system.
[Ref.] Saleh Faruque, “Radio Frequency Channel Coding Made Easy”, (2016), p.3
ARQ Technique
[Ref.] Saleh Faruque, “Radio Frequency Channel Coding Made Easy”, (2016), p.3
The ARQ technique adds parity, or redundant bits, to the transmitted data stream that are used by the decoder to detect an error in the received data. When the receiver detects an error, it requests that the data be retransmitted by the transmitter. This continues until the message is received correctly.
In ARQ, the receiver does not attempt to correct the error, but rather it
sends an alert to the transmitter in order to inform it that an error was
detected and a retransmission is needed. This is known as a negative
acknowledgement (NAK), and the transmitter retransmits the message upon
receipt. If the message is error-free, the receiver sends an acknowledgement
(ACK) to the transmitter.
This form of error control is only capable of detecting errors; it has no ability
to correct errors that have been detected.
[Ref.] Saleh Faruque, “Radio Frequency Channel Coding Made Easy”, (2016), p.3
FECC Technique or FEC
In a system which utilizes FECC or FEC coding, the data are encoded
with the redundant bits to allow the receiver to not only detect errors,
but to correct them as well.
The goal of all FECC techniques is to detect and correct as many errors
as possible without greatly increasing the data rate or the bandwidth.
Application: ARQ vs FEC
The choice of ARQ or FEC depends on the particular
application.
Convolutional codes convert the entire data stream into one single codeword. The n encoded bits depend not only on the current k information bits but also on the previous information bits.
Example: Turbo codes
Choice of Channel Codes
The choice of a channel coding scheme for a particular application is a trade-off between
various factors:
(i) Code rate (the ratio between the number of information symbols and the number of code symbols)
(ii) Reliability (the bit or message error probability)
(iii) Complexity (the number of calculations required to perform the encoding and decoding operations).
Shannon showed in his landmark paper (1948) that virtually error-free communication is possible at any rate below the channel capacity. However, his result did not include explicit code constructions and allowed for unbounded coding delay and complexity.
Hence, ever since 1948, scientists and engineers have been working to further develop
coding theory and to find practically implementable coding schemes.
The introduction of turbo codes in 1993 caused a true revolution in error control coding.
These codes allow transmission rates that closely approach channel capacity.