Significance of Human Communication
◼ Communication is the process of exchanging information: transferring
information from one place to another.
◼ The main barriers are language and distance.
◼ Methods of communication:
1. Face to face
2. Signals
3. Written word (letters)
4. Electrical innovations:
◼ Telegraph
◼ Telephone
◼ Radio
◼ Television
◼ Internet (computer)
Introduction
◼ The words "tele", "phone", and "graph" are derived from Greek.
◼ Tele – means ‘at a distance’
◼ Phone – means sound or speech
◼ Graph - means writing or drawing
◼ Therefore, telecommunication means communication at a distance. This can be
done through wires called transmission lines or through atmosphere by a radio
link. Other examples include:
◼ Telephone – speaking at a distance
◼ Television – seeing at a distance
◼ Telegraph – writing at a distance
1.1 The Block Diagram of Communication System
Figure: a generic communication system.
COMMUNICATIONS SYSTEMS EXAMPLES
Figure: analog telephones connected through IP gateways over a digital
WAN/LAN, and a radio station broadcasting through free space (air).
1.1 The Block Diagram of Communication System
Analog Signals
❑ An analog signal is a smoothly and continuously varying voltage or
current. Examples are:
▪Sine wave
▪Voice
▪Video (TV)
Figure : Analog signals (a) Sine wave “tone.” (b) Voice. (c) Video (TV) signal.
1.1 The Block Diagram of Communication System
Digital Signals
Digital signals change in steps or in discrete increments.
Most digital signals use binary or two-state codes. Examples are:
◼ Telegraph (Morse code)
Figure: Digital signals (a) Telegraph (Morse code). (b) Serial binary code.
(ii) Input Transducer:
▪ A device that converts energy from one form to another; it converts
the input signal into an electrical waveform.
▪ Example: a microphone converts the human voice into an electrical
signal, referred to as the baseband signal or message signal.
WHAT IS BASEBAND?
▪ The baseband signal is the electrical waveform produced by the input
transducer from the (nonelectrical) data, before any modulation.
1.1 The Block Diagram of Communication System
(iv) Channel:
▪ The physical medium through which the transmitter output is sent.
▪ Divided into two basic groups:
• Guided electromagnetic wave channels – e.g. wire, coaxial
cable, optical fiber
• Electromagnetic wave propagation channels – e.g. wireless
broadcast channels, mobile radio channels, satellite links
▪ Introduces distortion, noise and interference – in the channel, the
transmitted signal is attenuated and distorted. Signal attenuation
increases with the length of the channel.
▪ As a result, the receiver (Rx) receives a corrupted version of the
transmitted signal.
Transmission Medium (Guided)
Twisted pair
Unshielded Twisted Pair (UTP)
Shielded Twisted Pair (STP)
Transmission Medium (Guided)
Coaxial
Coaxial cable is a type of transmission line, used to carry high
frequency electrical signals with low losses. It is used in such
applications as telephone trunklines, broadband internet networking
cables, high speed computer data busses, carrying cable television
signals, and connecting radio transmitters and receivers to their
antennas.
Transmission Medium (Guided)
Fiber Optic
An optical fiber cable, also known as a fiber
optic cable, is an assembly similar to an
electrical cable, but containing one or more
optical fibers that are used to carry light.
Data can be transmitted via optical fiber in the form of light.
Transmission Medium (Guided)
Waveguide
A waveguide is a structure that is used to guide electromagnetic waves
such as high frequency radio waves and microwaves.
There is a similar effect in water waves constrained within a canal.
Transmission Medium (Unguided)
1.1 The Block Diagram of Communication System
(v) Receiver
▪ The receiver decodes the received signal back into the message
signal – i.e. it attempts to translate the received signal back into
the original message signal sent by the source.
▪ It reprocesses the signal received from the channel by undoing the
signal modifications made by the transmitter and the channel.
▪ It extracts the desired signal from the received signal and converts
it to a form suitable for the output transducer.
▪ Demodulation takes place in the receiver.
1.1 The Block Diagram of Communication System
Transceivers
A transceiver is an electronic unit that incorporates
circuits that both send and receive signals.
Examples are:
• Telephones
• Fax machines
• Handheld CB radios
• Cell phones
• Computer modems
Block diagram of Communication system
Fundamental Concepts : Frequency and Wavelength
❖ Frequency (f)
❑ A signal is located on the frequency spectrum according to its frequency and wavelength.
❑ Frequency is the number of cycles of a repetitive wave that occur in a given period of time.
❑ Frequency is measured in cycles per second (cps). The unit of frequency is the hertz (Hz).
❖ Wavelength (λ)
❑ Wavelength is the distance occupied by one cycle of a wave and is usually expressed in meters.
❑ Wavelength is also the distance travelled by an electromagnetic wave during the time of one cycle
Fundamental Concepts : Frequency and Wavelength
Solution: for 200 Hz, the wavelength is λ = c/f = 3 × 10⁸/200 =
1.5 × 10⁶ m = 1500 km! The quarter-wave antenna length is
1500 km/4 = 375 km, or around 400 km.
For 3000 Hz, λ = c/f = 3 × 10⁸/3000 = 10⁵ m = 100 km, so the antenna
length is 100 km/4 = 25 km.
Fundamental Concepts : Frequency and Wavelength
Example: A cellular phone is actually a radio transmitter and receiver. You receive an
incoming call in the form of a radio wave of frequency 880.65 MHz. What is the
wavelength (in meters) of this wave?
Solution: λ = c/f = 3 × 10⁸ / (880.65 × 10⁶) ≈ 0.34 m
So the lowest frequencies produce the longest wavelengths and the
highest frequencies produce the shortest wavelengths: the smaller the
signal wavelength, the easier the antenna construction. This is why
low-frequency signals are modulated onto a high-frequency carrier
before transmission, instead of being sent out directly.
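The wavelength and antenna-length arithmetic above can be sketched in a few lines, assuming free-space propagation at c = 3 × 10⁸ m/s as in the examples:

```python
# Sketch: wavelength and quarter-wave antenna length for a given frequency.
# Assumes free-space propagation at c = 3e8 m/s, as in the examples above.

C = 3e8  # speed of light, m/s

def wavelength(freq_hz):
    """Wavelength in meters: lambda = c / f."""
    return C / freq_hz

def quarter_wave_antenna(freq_hz):
    """Length in meters of a quarter-wavelength antenna."""
    return wavelength(freq_hz) / 4

print(wavelength(200))                  # 1500000.0 m (1500 km) for 200 Hz
print(quarter_wave_antenna(3000))       # 25000.0 m (25 km) for 3000 Hz
print(round(wavelength(880.65e6), 2))   # 0.34 m for an 880.65 MHz carrier
```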
Figure: a signal in the time domain and its frequency spectrum, which
is concentrated below a few thousand Hz.
Modulation
Modulation is the process of shifting a low-frequency signal to a high
frequency and then transmitting the high-frequency signal. Generally
the low-frequency signal carrying the original information is called
the modulating signal or baseband signal. The high-frequency signal is
known as the carrier signal. After the carrier signal is modulated by
the modulating signal, the resultant signal is called the modulated
wave.
Modulation
Why Modulation?
➢ Ease of radiation – related to antenna design and smaller antenna
size.
➢ Low loss and dispersion.
➢ Channel assignment (various information sources are not always
suitable for direct transmission over a given channel).
➢ Reduced noise and interference, and overcoming equipment limitations.
➢ Enables multiplexing, i.e. combining multiple signals for
transmission at the same time over the same channel.
Modulation
A modem both transmits and receives signals, so it performs modulation
and demodulation; hence the name modem (MOdulator-DEModulator).
CW Modulation Types
c(t) = A_c·cos(2πf_c·t + θ)
Amplitude Modulation
• Amplitude modulation (AM) is a modulation
technique used in electronic communication,
most commonly for transmitting information
via a radio carrier wave.
• In amplitude modulation, the amplitude
(signal strength) of the carrier wave is varied
in proportion to that of the message signal being
transmitted.
Example: A microphone converts sound
waves (energy) into an electrical signal
(energy) proportional to the sound wave
pressure. These frequencies are too low
to transmit. A high frequency carrier
wave is needed. This can be transmitted.
The carrier wave is modulated (varied) by
the signal from the microphone.
Amplitude Modulation
Figure: the carrier signal and the modulated wave.
Example tone modulation, written with complex exponentials:
m(t) = cos(2π·10·t) = (e^(j2π·10t) + e^(−j2π·10t)) / 2
c(t) = cos(2π·100·t) = (e^(j2π·100t) + e^(−j2π·100t)) / 2
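The idea above can be sketched numerically. This is a minimal illustration, not the slide's exact figure: the 10 Hz message and 100 Hz carrier match the m(t) and c(t) defined earlier, while the modulation index MU is an assumption added for illustration (MU ≤ 1 avoids overmodulation):

```python
import math

# Sketch: amplitude modulation of a 10 Hz tone onto a 100 Hz carrier.
# MU (the modulation index) is an assumed parameter, not from the slides.

FM_MSG, FC = 10.0, 100.0
MU = 0.5  # assumed modulation index

def message(t):
    return math.cos(2 * math.pi * FM_MSG * t)

def carrier(t):
    return math.cos(2 * math.pi * FC * t)

def am(t):
    """Standard AM: the carrier amplitude varies with the message."""
    return (1 + MU * message(t)) * carrier(t)

# The envelope of the AM wave swings between 1 - MU and 1 + MU.
samples = [am(n / 1000.0) for n in range(1000)]
print(max(samples))  # 1.5, reached where message and carrier peak together
```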
Demodulation
Coherent demodulation: the received signal y(t) is multiplied by the
carrier c(t) to give y′(t), which is then lowpass filtered with h(t):
y′(t) * h(t) = m(t), i.e. Y′(f) × H(f) = M(f)
Figure: the waveform before and after the lowpass filter.
Bandwidth note: if the message occupies bandwidth B, the modulated AM
signal occupies bandwidth 2B.
Amplitude Modulation
CW Modulation Types
c(t) = A_c·cos(2πf_c·t + θ)
Frequency Modulation
• To generate a frequency modulated signal, the frequency of the radio carrier is
changed in line with the amplitude of the message signal.
• In frequency modulation, the frequency of the carrier wave is varied
in proportion to the amplitude of the message signal being transmitted:
when the amplitude of the baseband signal decreases, the frequency of
the carrier wave decreases, and vice versa.
Carrier signal
Base-band signal
Modulated signal
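The FM description above can be sketched by phase accumulation: the instantaneous frequency is fc + kf·m(t), and the phase is its running integral. The sample rate FS and the deviation constant KF are illustrative assumptions, not values from the slides:

```python
import math

# Sketch: frequency modulation by phase accumulation.
# FS (sample rate) and KF (Hz of deviation per unit message amplitude)
# are assumed, illustrative parameters.

FS = 10_000.0
FC = 100.0
KF = 30.0

def fm(message_samples):
    """Accumulate the instantaneous frequency FC + KF*m into a phase."""
    out, phase = [], 0.0
    for m in message_samples:
        phase += 2 * math.pi * (FC + KF * m) / FS
        out.append(math.cos(phase))
    return out

msg = [math.cos(2 * math.pi * 5 * n / FS) for n in range(2000)]
wave = fm(msg)
print(len(wave))  # 2000 samples of a constant-amplitude modulated carrier
```

Note the design point this makes concrete: unlike AM, the FM wave's amplitude stays constant; only the spacing of its zero crossings changes with the message.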
Frequency Division Multiplexing
Some terms
Bandwidth
• The signal bandwidth is a measure of the important spectral content
of the signal.
• System bandwidth is the frequency range that the system can use.
• The bandwidth of a communication channel is the difference between
the frequencies allowed by the channel.
Some terms
Figure: a transmitted value x(t0) = 5.0 V is observed as
y(t0) = 5.03 V after the channel adds noise.
• From the observed y(t0) there is no going back to x(t0) = 5.0 V, as
the noise is not known.
Analog vs. Digital Information Systems
Digital transmission
x(t0) = 5 V → {5} in decimal ≡ {0101} in binary → {−A, +A, −A, +A}
Course Overview
• Source → Bits to Waveforms (Transmitter) → Channel → Waveforms to
Bits (Receiver) → Error Correction → Dest. Topics 1-7 cover the
transmitter and receiver, topic 11 the channel and noise, and topics
8-10 error correction.
• The channel (a wire, the air, a fiber optic cable) may modify the
signal as it carries it; the receiver sees the received bits
𝒃′𝟎 𝒃′𝟏 𝒃′𝟐 𝒃′𝟑 𝒃′𝟒 𝒃′𝟓 … for the sent bits 𝒃𝟎 𝒃𝟏 𝒃𝟐 𝒃𝟑 𝒃𝟒 𝒃𝟓.
• Example waveforms, one value per bit time:
voltage (1 = high / 0 = low)
current (1 = positive / 0 = negative)
light (1 = on / 0 = off)
• A voltage or current waveform might be sent over a wire; a light
waveform might be sent over a fiber optic link (Internet) or over
plain air (TV remote).
• Common abbreviations: Tx = Transmitter, Rx = Receiver.
• A bit is a variable that can assume only two possible values or “states”,
commonly denoted by 0 or 1.
Note:
Variables that can assume more than two possible values can be
represented by combinations or sequences of bits.
Examples:
• binary numbers
• ASCII codes for letters and text
Example: a 3-bit binary number b2 b1 b0 has decimal value
N = b2·2² + b1·2¹ + b0·2⁰:
N   b2 b1 b0
1   0  0  1
2   0  1  0
3   0  1  1
4   1  0  0
5   1  0  1
6   1  1  0
7   1  1  1
Notation:
• bN-1 = Most Significant Bit (MSB)
• b0 = Least Significant Bit (LSB)
Examples:
E = 01000101
MSB LSB
b7 b0
C = 01000011
• Answer: 0001001010010110
• What is the decimal value of the bit sequence "1000"? Assume that the
MSB is listed first.
• Answer: 8
Example: the bits of "ECE" transmitted LSB (b0) first:
10100010 11000010 10100010
E        C        E
• The shorter the bit time, the faster we can transmit information
(bits).
Figure: a light-intensity waveform switching between ON and OFF, one
bit per bit time.
Figure: outdoor temperature over a day, once recorded continuously and
once sampled at discrete times of day (12:00, 16:00, 20:00, 00:00,
04:00, 08:00).
A Continuous Time (CT) signal has a known value for all points in a
time interval.
A Discrete Time (DT) signal has a known value only at a discrete
(discontinuous) set of time points.
Ts = sample period: the time between samples (samples occur at
Ts, 2Ts, 3Ts, 4Ts, …).
Fs = sample frequency, related by Fs = 1/Ts.
• Example: if Ts = 0.2 sec, then
Fs = 1 sample / 0.2 sec = 5 samples/sec = 5 Hz
The number of samples N in a time window of length Tw is
N = Tw / Ts = Tw · Fs.
Tradeoff: a higher sample frequency is
• Good: less information lost, since there is less time between samples
• Bad: more storage needed, since there are more samples for a given
length of time
• Answer: 300
• Compact discs record two channels (left and right) of music at a
sampling frequency of Fs = 44.1 kHz. If each sample is encoded with
16 bits, and one byte is 8 bits, how many bytes are required to store
one minute of music?
• Answer: 10584000
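The CD arithmetic above can be checked directly:

```python
# Sketch of the compact-disc storage calculation: 2 channels sampled at
# Fs = 44.1 kHz, 16 bits per sample, for one minute of music.

FS = 44_100          # samples per second per channel
CHANNELS = 2
BITS_PER_SAMPLE = 16
SECONDS = 60

bits = FS * CHANNELS * BITS_PER_SAMPLE * SECONDS
bytes_needed = bits // 8  # one byte is 8 bits
print(bytes_needed)       # 10584000 bytes per minute
```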
Quantization
• Divides the expected signal range, R, into 2ⁿ different levels.
• Quantizes the original signal to the closest level.
3 bits → 2³ = 8 levels
4 bits → 2⁴ = 16 levels
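A minimal sketch of such a uniform quantizer; the range [0, 1) and the rounding-to-nearest-level rule here are illustrative assumptions:

```python
# Sketch: uniform quantization of a value in [0, full_range) to the
# nearest of 2**n_bits levels. The range and rule are illustrative.

def quantize(x, full_range, n_bits):
    """Map x to the closest of 2**n_bits uniformly spaced levels."""
    levels = 2 ** n_bits
    step = full_range / levels
    index = min(levels - 1, max(0, round(x / step)))  # clamp to range
    return index * step

print(2 ** 3, 2 ** 4)          # 8 16 levels for 3 and 4 bits
print(quantize(0.33, 1.0, 3))  # 0.375, the closest of 0, 0.125, ..., 0.875
```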
Figure: the binary waveform for the bits
b0=1 b1=0 b2=1 b3=0 b4=0 b5=0 b6=1 b7=0,
switching between 1 and 0 once per bit time, and its discrete-time
version.
If Fs = 1 MHz, then Ts = 1/Fs = 1 microsecond.
The bit time = SPB · Ts (SPB = samples per bit).
The bit rate = 1 / bit time = Fs / SPB = 1,000,000 / 4 Hz = 250 kHz
• Answer: 160 𝜇𝑠
• Answer: 3
A discrete-time signal x(n) can be represented as:
• Graph: a plot of x(n) against n.
• List, table or vector of values:
n    = [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 … ]
x(n) = [ 1 1 1 1 0 0 0 0 1 1  1  1  0  0  0  0  0  0 … ]
• Sum of unit step functions:
x(n) = u(n) – u(n-4) + u(n-8) – u(n-12) + u(n-24) – u(n-28)
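The step decomposition above can be checked numerically with a minimal sketch:

```python
# Sketch: building the waveform for bits 1,0,1,0,0,0,1,0 at 4 samples
# per bit from shifted unit steps, as in the x(n) decomposition above.

def u(n, d=0):
    """Discrete unit step delayed by d: 1 for n >= d, else 0."""
    return 1 if n >= d else 0

def x(n):
    return (u(n) - u(n, 4) + u(n, 8) - u(n, 12)
            + u(n, 24) - u(n, 28))

print([x(n) for n in range(16)])  # [1,1,1,1, 0,0,0,0, 1,1,1,1, 0,0,0,0]
```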
The unit step function:
u(n) = 1 for n ≥ 0, and 0 for n < 0
The delayed unit step:
u(n − d) = 1 for n ≥ d, and 0 for n < d
Figure: u(n), u(n) − u(n-5) and u(n) − u(n-10); subtracting a delayed
step turns the step back off, producing a pulse.
Figure: the waveform for bits b0=1 b1=0 b2=1 b3=0 b4=0 b5=0 b6=1 b7=0
at 4 samples per bit, plotted against n from 0 to 28.
x(n) = 3 ∙ u(n) + 2 ∙ u(n-2) – 2 ∙ u(n-11) – u(n-14) – u(n-20) + 3 ∙ u(n-27)
• Answer: x(n)=2*u(n+5)+1*u(n-7)-4*u(n-14)+2*u(n-20)
Continuous-time view:
Source (information to send) → Transmitter → x(t) → Channel → y(t) →
Receiver → Dest (information received), where x(t) and y(t) are
physical waveforms.
Discrete-time view:
• Transmitter: Bits to Discrete-time Waveform x(n) → "Hold" circuit →
x(t) → Channel
• Receiver: y(t) → Sampler → y(n) → Discrete-time Waveform to Bits →
received bits
Mathematical model of the channel response: the model response to a
unit step is
s(n) = (1 − a^(n+1))·u(n)
so the model response to x(n) = u(n) − u(n − 5) is s(n) − s(n − 5).
A second, simpler channel model:
Mathematical Model: y(n) = k·x(n – d) + c
Legend:
k = attenuation (k < 1)
d = delay
c = offset
A system y = f(x) is linear if it satisfies homogeneity and additivity.
• Homogeneity: if input x produces output y, then input c·x produces
output c·y.
• Additivity: if x1 produces y1 and x2 produces y2, then x1 + x2
produces y1 + y2.
Note: y = ax is linear; y = ax + b is not linear (unless b = 0).
The step response s(n) is the output of the system when the input is
the unit step u(n).
Figure: response to a single bit. The step response s(n) is an
exponential approach (a = 0.8, k = 1); the response to one bit of
width SPB samples is y(n) = s(n) − s(n − SPB) (blue = input,
black = output).
Course notes of Bertram SHI
Figures: response to a single bit for bit times of 16, 8, 4, 2 and 1
SPB (step response s(n) = exponential approach, a = 0.8, k = 1;
blue = input, black = output). As the bit time shrinks, the output has
less time to approach its final value before the bit ends, so the
response to each bit gets smaller.
Response to a more general input
Figure: the channel response to the bit sequence
b0=1 b1=0 b2=1 b3=0 b4=0 b5=0 b6=1 b7=0.
At the receiver
By linearity and time invariance, if the channel input is
x(n) = u(n) – u(n – 5) + …, the LTI system output is
s(n) – s(n – 5) + …, with an added offset and noise, c + noise(n),
where
s(n) = k(1 – a^(n+1))·u(n)
c = offset
k = amplitude
a = base of the exponential
Channel Response to a Bit Sequence
Figure: channel input and output y(n); the output moves between the
levels c and c + k.
• Training sequence
This topic’s main focus
• Synchronization method
Thresholding
• In our system, for long SPBs at the receiver
− 1 bits usually result in received values close to c+k
− 0 bits usually result in received values close to c
Figure: received training pulse. Values for 1 bits settle near c + k
and values for 0 bits near c; the pulse width is marked in samples.
Length of the training
sequence
• Trade-off in the choice of the pulse width
− Shorter pulse widths mean more time available to transmit
data.
− Longer pulse widths enable better estimates of channel
parameters (c, k)
• The choice of the pulse width is based on an assumption about the value
of “a” in the step response.
Example
• Consider a channel with a step response given by
s(n) = k(1 − a^(n+1))·u(n)
• During transmission, the environment adds a constant offset c, so
the signal at the receiver is
y(n) = c + k(1 − a^(n+1))·u(n)
Figure: the received pulse rising from c toward c + k; W marks the end
of the pulse.
Example
Solution: Let the pulse width be W+1 samples.
Since the maximum occurs at the end of the pulse:
max = c + k(1 − a^(W+1))
To ensure that the response reaches at least 90% of its final swing:
c + k(1 − a^(W+1)) > c + 0.9k
k(1 − a^(W+1)) > 0.9k
1 − a^(W+1) > 0.9
0.1 > a^(W+1)
W > ln(0.1)/ln(a) − 1 ≈ 20.85 (for a = 0.9)
Example
• Suppose the response of a communication
channel to the training sequence is given by
y(n)=6+22u(n−1500)−22u(n−2000).
The response to the training sequence is used to determine a threshold for
estimating the input bit.
Suppose that we receive an output sample with value 15, what would our
estimate of the corresponding input bit be?
• Answer: 0
Example
• Suppose that a discrete time LTI channel has a given step response:
s(n) = 0.9(1 − 0.8^(n+1))·u(n)
Assume that we transmit, as a training sequence, a pulse with unit
height and width W samples. Assume that the pulse starts at sample
index 0. What is the minimum value of W so that the channel response
at the end of the pulse is larger than 0.8?
Answer: A pulse of width W samples that starts at time 0 ends at time
W − 1. For the response of the channel at the end of the pulse to be
greater than 0.8, we must have 0.9(1 − 0.8^W) > 0.8, i.e.
0.8^W < 1/9, so W > ln(1/9)/ln(0.8) ≈ 9.85; the minimum is W = 10.
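A quick numerical sketch of this example: for the step response s(n) = 0.9(1 − 0.8^(n+1))u(n), a pulse of width W ending at n = W − 1 reaches 0.9(1 − 0.8^W), and we want the smallest W for which this exceeds 0.8:

```python
import math

# Sketch: minimum training-pulse width for the example channel
# s(n) = 0.9 * (1 - 0.8**(n+1)) * u(n). A pulse of width W starting at
# n = 0 ends at n = W - 1, where the response is 0.9 * (1 - 0.8**W).

def step_response_end(width):
    return 0.9 * (1 - 0.8 ** width)

# Closed form: 0.9*(1 - 0.8**W) > 0.8  <=>  0.8**W < 1/9
w_min = math.ceil(math.log(1 / 9) / math.log(0.8))
print(w_min)  # 10

assert step_response_end(w_min) > 0.8       # W = 10 is large enough
assert step_response_end(w_min - 1) <= 0.8  # W = 9 is not
```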
Asynchronous Serial
Communication
Figure: an asynchronous serial waveform with SPB samples per bit. The
levels c + k and the threshold c + k/2 are marked; the receiver
samples each bit less than SPB samples from its ideal center.
8 bit block
frame
• In our channel, the output is normally low (0), so we choose the start
bit to be 1.
• The stop bits are chosen so that the received signal is the same as when
the channel is idle so that the new start bit can be detected.
• Using more stop bits provides more time between data blocks
− Advantage: the receiver has more time to process the frame
− Disadvantage: reduces the rate we can send information
Data block: 1 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0
Frame: 1 1 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 (start bit, then the data
block, then a stop bit)
Sample waveform (SPB = 5): after the training sequence, the receiver
detects the start bit, samples/compares 2×SPB−1 samples later and then
once per SPB, deciding each bit b0 … b7 against the threshold c + k/2.
• Answer: 10Q :)
• Bit error rate (BER) – the fraction of bits that are wrongly decoded by the
receiver
− We want this to be low
− Unfortunately, the BER increases for smaller bit times
Bit rate vs. bit time
Figure: channel output and decoding threshold for bit time = 4 SPB
(channel parameters: c = 0, k = 1, a = 0.8).
Figure: intersymbol interference (ISI). The channel responses to the
4-bit patterns 0010, 0100, 0110, 1000, 1010, 1100 and 1110 show that
the response to the last bits ("00", "01", "10", "11") differs
depending on the earlier bits.
• The smaller the bit time (SPB) in comparison with the time it takes for
the channel to respond to a transition, the greater the ISI.
− More past symbols interfere with the current symbol
− We observe a larger variety of responses to a “zero” or “one” bit
Figure: eye diagram construction. The channel response (a = 0.85,
settling time ns = 13, with noise; bit time = 10 SPB) is overlaid in
segments of two bit times; a larger a means a larger settling time ns.
Eye Diagram
Figures: eye diagrams for the channel (exponential step, a = 0.85,
noise added) at decreasing bit times SPB = 10, 6, 4 and 2. The eye
"height" and "width" measure the opening; as SPB decreases, ISI and
noise close the eye.
Receiver processing chain: receive the channel waveform → find the
start bit → skip past the training sequence → estimate the threshold →
decode the data block bits.
Figure: bit error rate (BER) vs. bit time, with waveforms at SPB = 10,
8, 7 and 2 (channel parameters: a = 0.9, k = 1, noise added).
Figure: at small SPB the channel output y(n) no longer resembles the
input x(n); an equalizer placed after the channel produces an estimate
x~(n) of the transmitted waveform.
Modeling the channel:
• channel model: predicts the channel output from the known input
• equalizer: estimates the channel input from the actual output
The step response s(n) = k(1 − a^(n+1))·u(n), with 0 < a < 1, is
equivalent to the recursive model y(n) = a·y(n−1) + (1 − a)·k·x(n);
the block diagram feeds y(n−1) back through a delay and a gain a.
Example (a = 1/2, k = 1): y(n) = ½·y(n−1) + ½·x(n)
n    : 0  1  2    3    4    5     6     7
x(n) : 0  0  1    1    1    0     0     0
y(n) : 0  0  1/2  3/4  7/8  7/16  7/32  7/64
Course notes of Prof. Bertram SHI
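The worked table for a = 1/2, k = 1 can be reproduced exactly with rational arithmetic; a minimal sketch:

```python
from fractions import Fraction

# Sketch: the recursive channel model y(n) = a*y(n-1) + (1-a)*k*x(n)
# with a = 1/2 and k = 1, reproducing the worked table above.

def channel(xs, a=Fraction(1, 2), k=Fraction(1)):
    ys, prev = [], Fraction(0)
    for x in xs:
        prev = a * prev + (1 - a) * k * x
        ys.append(prev)
    return ys

x = [0, 0, 1, 1, 1, 0, 0, 0]
print(channel(x))
# [0, 0, 1/2, 3/4, 7/8, 7/16, 7/32, 7/64]
```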
Effect of the Parameter “a”
y(n) =a·y(n-1) + (1-a)·k·x(n)
• a=0
− no memory of the past
− y(n) = k·x(n)
− the channel output is just the input multiplied by k
• a=1
− infinite memory of the past
− y(n) = y(n-1)
− the channel output is constant, ignores the channel input
Figure: channel followed by equalizer; the equalizer output x~(n)
approximates the channel input x(n).
Modeling the Channel
• In order to reverse the effect of the channel, we start with a
model of the effect of a channel on the input
− Model 2:
› If x(n) is the channel input and y(n) is the output,
y(n) =a·y(n-1) + (1-a)·k·x(n)
Course notes of Prof. Bertram SHI
Intuition for Equalizer
• Due to ISI, the output does not always move far enough to cross
the threshold in response to a change in the bit.
Figure: with SPB = 2, the output does not always cross the threshold
when the input bit changes.
• Thus, looking at the value (or level) of the output is not a reliable
way to determine the input bit.
Inverting the model: the step response is s(n) = k(1 − a^(n+1))·u(n)
with 0 < a < 1, so the recursive model can be solved for the input:
x(n) = [y(n) − a·y(n−1)] / ((1 − a)·k)
This is only true if the output of the channel is exactly described by
this equation; there may be unmodeled effects, such as nonlinearity
and noise, or incorrect parameters. The equalizer therefore produces
only an estimate:
x~(n) = [y(n) − a·y(n−1)] / ((1 − a)·k)
Course notes of Prof. Bertram SHI
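Under the idealized assumption that the channel really is the recursive model with matched parameters (a_eq = a_ch, k_eq = k_ch), the inversion above recovers the input exactly; a minimal sketch:

```python
# Sketch: the equalizer x~(n) = [y(n) - a*y(n-1)] / ((1-a)*k) inverts
# the recursive channel y(n) = a*y(n-1) + (1-a)*k*x(n) exactly when
# the model is exact and the parameters match (an assumption here).

A, K = 0.8, 1.0  # assumed, matched channel/equalizer parameters

def channel(xs):
    ys, prev = [], 0.0
    for x in xs:
        prev = A * prev + (1 - A) * K * x
        ys.append(prev)
    return ys

def equalizer(ys):
    out, prev = [], 0.0  # prev holds y(n-1), with y(-1) = 0
    for y in ys:
        out.append((y - A * prev) / ((1 - A) * K))
        prev = y
    return out

bits = [1, 0, 1, 1, 0, 0, 1, 0]
recovered = equalizer(channel(bits))
print([round(v) for v in recovered])  # [1, 0, 1, 1, 0, 0, 1, 0]
```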
Interpretation in Terms of Changes
x~(n) = [y(n) − a·y(n−1)] / ((1 − a)·k)
      = [(1 − a)·y(n) + a·(y(n) − y(n−1))] / ((1 − a)·k)
      = (1/k)·[ y(n) + (a/(1 − a))·(y(n) − y(n−1)) ]
The equalizer output is the current value plus a weighted difference
term. The weight a/(1 − a) is 0 if a = 0 and grows without bound as
a → 1.
Example
Suppose we have a channel whose step response can be described by
s(n) = ½·(1 − (2/3)^(n+1))·u(n)
where u(n) is the unit step function.
What is the equation for the equalizer for this channel?
Solution:
Step 1: Find the equivalent recursive model. Matching
s(n) = k(1 − a^(n+1))·u(n) ↔ y(n) = a·y(n−1) + (1 − a)·k·x(n)
gives k = 1/2 and a = 2/3, so
y(n) = (2/3)·y(n−1) + (1/6)·x(n)
Course notes of Prof. Bertram SHI
Example
Solution:
Step 2: Invert the recursive model:
y(n) = (2/3)·y(n−1) + (1/6)·x(n)
x(n) = 6·y(n) − 4·y(n−1)
Determine the equation for the equalizer for this channel, by expressing the
equalized waveform x(n) as a function of the received waveform y(n).
Answer: x(n)=4*y(n)-2.5*y(n-1)
• Answer: 1
In the receiver, the equalizer is inserted before bit decoding:
channel: y(n) = a_ch·y(n−1) + (1 − a_ch)·k_ch·x(n)
equalizer: x~(n) = [y(n) − a_eq·y(n−1)] / ((1 − a_eq)·k_eq)
Ideally, a_eq = a_ch and k_eq = k_ch.
Legend: a_ch, k_ch = channel parameters; a_eq, k_eq = equalizer
parameters.
Course notes of Prof. Bertram SHI
Equalization (no noise, a_eq = a_ch)
Figure: eye diagrams without and with equalization at SPB = 10, 5 and
3 (c = 0, no noise, a_ch = a_eq = 0.85, k_ch = k_eq = 1).
Equalization (no noise, a_eq > a_ch)
Figure: eye diagrams without and with equalization at SPB = 10, 5 and
3 (c = 0, no noise, a_ch = 0.85, a_eq = 0.87, k_ch = k_eq = 1).
Equalization (no noise, a_eq < a_ch)
Figure: eye diagrams without and with equalization at SPB = 10, 5 and
3 (c = 0, no noise, a_ch = 0.85, a_eq = 0.6, k_ch = k_eq = 1).
Equalization (noise, a_eq = a_ch)
Figure: eye diagrams without and with equalization at SPB = 10, with
noise added.
Recall: if the input is x(n) = u(n) – u(n – 5) + …, the LTI channel
output is s(n) – s(n – 5) + …, plus an offset and noise, c + noise(n).
Model: the channel output without noise is r(n); additive noise v(n)
gives the received signal y(n) = r(n) + v(n).
• Definitions:
− x(n): channel input
− r(n): channel output without noise
− v(n): noise
− y(n): received signal
• Additive noise moves the received signal away from the channel output without noise.
• If the noise is large enough and in the right direction, the output sample will be on
the wrong side of the threshold!
Course notes of Prof. Bertram SHI
Simplifying Assumptions for BER Analysis
• Perfect synchronization
− We know exactly where to sample the output to decode each bit.
• No ISI
− The channel response depends only on the current bit, and not on
past bits.
Binary decision model:
r = r_min if IN = 0, r = r_max if IN = 1
y = r + v (v = noise)
OUT = 0 if y ≤ T, OUT = 1 if y > T (T = threshold)
• How can we predict the bit error rate for this model?
Figure: the IN bits, the received signal y = r + v fluctuating about
r_min and r_max around the threshold, and the decoded OUT bits. When a
noise excursion crosses the threshold, the decoded bit is wrong: a bit
error!
Course notes of Prof. Bertram SHI
Example
• The figure below shows the transmitted and received signal levels corresponding to
20 bits transmitted over a communication system with additive noise.
• Ideally, (when IN=0, OUT=0) and (when IN=1, OUT=1). In this case,
the BER = 0.
Probabilistic Analysis
Binary channel model: IN = 0 occurs with probability P[IN=0] and is
decoded correctly with probability 1 − Pe0, flipping to OUT = 1 with
probability Pe0; IN = 1 occurs with probability P[IN=1] and flips to
OUT = 0 with probability Pe1, decoding correctly with probability
1 − Pe1.
IN 1 1 0 1 1 0 0 1 1 1 0 1 1 0 1 0 0 1 1 1 1 0 1 1 1
OUT 1 1 0 1 1 0 0 1 1 1 1 1 1 0 1 0 0 1 1 0 1 0 0 1 1
By definition:
BER = (# of errors) / (# of bit pairs) = 3/25 = 12%
Using our formula:
P[IN = 0] = 8/25,  Pe0 = 1/8,  1 − Pe0 = 7/8
P[IN = 1] = 17/25, Pe1 = 2/17, 1 − Pe1 = 15/17
BER = Pe0 × P[IN = 0] + Pe1 × P[IN = 1]
    = (1/8) × (8/25) + (2/17) × (17/25) = 3/25
Course notes of Prof. Bertram SHI
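The worked example above can be checked by counting directly over the IN/OUT sequences:

```python
# Sketch: BER and conditional error probabilities for the IN/OUT
# sequences in the worked example above.

IN  = [1,1,0,1,1,0,0,1,1,1,0,1,1,0,1,0,0,1,1,1,1,0,1,1,1]
OUT = [1,1,0,1,1,0,0,1,1,1,1,1,1,0,1,0,0,1,1,0,1,0,0,1,1]

errors = sum(i != o for i, o in zip(IN, OUT))
ber = errors / len(IN)

pe0 = sum(i == 0 and o == 1 for i, o in zip(IN, OUT)) / IN.count(0)
pe1 = sum(i == 1 and o == 0 for i, o in zip(IN, OUT)) / IN.count(1)

print(errors, ber)  # 3 errors, BER = 3/25 = 0.12
print(pe0, pe1)     # Pe0 = 1/8, Pe1 = 2/17
# The weighted average Pe0*P[IN=0] + Pe1*P[IN=1] recovers the BER:
print(pe0 * IN.count(0) / len(IN) + pe1 * IN.count(1) / len(IN))  # 0.12
```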
Intuition
• Since P[IN=0]+P[IN=1]=1,
− The BER is a weighted average of Pe0 and Pe1
− The BER is between Pe0 and Pe1
− If IN=0 is more likely, the BER is closer to Pe0
− If IN=1 is more likely, the BER is closer to Pe1
− If IN=0 and 1 are equally likely, BER = ½(Pe0 + Pe1)
− If Pe0 = Pe1, BER = Pe0 = Pe1.
Course notes of Prof. Bertram SHI
Example BER Calculation
P[IN = 0] = 0.6, Pe0 = 0.2 (1 − Pe0 = 0.8)
P[IN = 1] = 0.4, Pe1 = 0.3 (1 − Pe1 = 0.7)
Solution: BER = 0.6 × 0.2 + 0.4 × 0.3 = 0.24, which lies between
Pe0 = 0.2 and Pe1 = 0.3 (closer to Pe0, since IN = 0 is more likely).
Noise Leads to Bit Errors
Figure: the IN bits, the received y = r + v around the threshold, and
the decoded OUT bits, showing a bit error where noise crosses the
threshold.
Power Consumption
• Power is energy used per unit time:
power = energy / time
− 1 Watt = unit of power
− Lifting an apple (~100 g) up by 1 m in 1 s requires ~1 W
• For communication, we are interested in how much the signals differ
from their average: r̃ = r − r_ave
• Since r̃ can be both positive and negative, its average value over
many samples is zero: (1/N)·Σ r̃(n) = 0
• The average power is the average squared value over many samples:
P = (1/N)·Σ (r̃(n))²
• If IN = 0:
r̃ = r_min − r_ave = r_min − (½r_min + ½r_max) = −½(r_max − r_min)
• If IN = 1:
r̃ = r_max − r_ave = r_max − (½r_min + ½r_max) = +½(r_max − r_min)
P_signal = ½·(½(r_max − r_min))² + ½·(½(r_max − r_min))²
         = (r_max − r_min)² / 4
Course notes of Prof. Bertram SHI
Gaussian Noise Model
Figure: the binary decision model (r = r_min if IN = 0, r = r_max if
IN = 1; y = r + v; OUT decided against threshold T) and histograms of
received sample values ("% of samples" vs. sample value). The
histogram columns sum to 100%; as the bins get smaller and smaller,
the curve gets smoother and smoother.
Course notes of Prof. Bertram SHI
Gaussian Density Function
• The probability density function of many naturally occurring random
quantities, such as noise, tends to have a bell-like shape, known as a
Gaussian distribution:
f_v(v) = (1/√(2πσ²))·e^(−(v−m)²/(2σ²))
• Applications:
− Noise in communication systems
− Particles in Brownian motion
− Voltage across a resistor
Course notes of Prof. Bertram SHI
Parameters Controlling the Shape
f_v(v) = (1/√(2πσ²))·e^(−(v−m)²/(2σ²))
• The mean (m) is the average value over many samples and the center
location of the pdf; a larger mean shifts the pdf to the right.
• The variance (σ²) measures the spread; a larger variance gives a
wider, flatter pdf (the peak height is 1/(√(2π)·σ)).
Course notes of Prof. Bertram SHI
Calculating Probability by Integrating
• The probability that the noise v is between v1 and v2 is the area
under the probability density function between v1 and v2:
P[v1 ≤ v ≤ v2] = ∫ from v1 to v2 of f_v(v) dv
Course notes of Prof. Bertram SHI
Example Probability Calculation
fv(v)
PDF (B) has a larger mean and larger variance than PDF (A).
PDF (B) has a larger mean and smaller variance than PDF (A).
PDF (B) has a smaller mean and smaller variance than PDF (A).
✓ PDF (B) has a smaller mean and larger variance than PDF (A).
Course notes of Prof. Bertram SHI
Calculating the BER
• If IN = 0, y = r_min + v is Gaussian with mean r_min and variance σ².
• If IN = 1, y = r_max + v is Gaussian with mean r_max and variance σ².
(The noise v is Gaussian with mean 0 and variance σ².)
Course notes of Prof. Bertram SHI
• Pe0 (probability of error if IN = 0): the noise pushes y above T, so
OUT = 1. Pe0 = P[y > T if IN = 0]; Pe0 decreases as T increases.
• Pe1 (probability of error if IN = 1): the noise pushes y below T, so
OUT = 0. Pe1 = P[y ≤ T if IN = 1]; Pe1 increases as T increases.
Course notes of Prof. Bertram SHI
Predicting BER
If 0 and 1 input bits are equally likely,
BER = ½·Pe0 + ½·Pe1
Figure: the conditional densities f(y if IN = 0), centered at r_min,
and f(y if IN = 1), centered at r_max; the tails beyond the threshold
T contribute ½·Pe0 and ½·Pe1 to the BER.
Signal and noise power:
P_signal = (r_max − r_min)² / 4
P_noise = σ²
SNR = P_signal / P_noise = (r_max − r_min)² / (4σ²)
• What is the typical noise power present at the input of a mobile phone?
• When your received signal falls to about this level your phone will lose
its connection
Figure: BER vs. SNR (dB). Raising the signal power
(P_signal = (r_max − r_min)²/4) raises the SNR and lowers the BER.
BER under Changing Noise Power
Figure: BER vs. SNR (dB). Lowering the noise power σ² raises the SNR
and lowers the BER; the curves fall from BER near 10⁰ toward 10⁻³ as
the SNR rises from −10 to 10 dB.
Course notes of Prof. Bertram SHI
Example
Consider a noisy communication channel that takes binary input, IN, and produces an output given by
y=r+v where
a. r=0.4 V if IN = 0.
b. r=0.8 V if IN = 1.
c. v is a Gaussian random variable with zero mean and variance σ2=0.01 V2.
What is the value of the signal to noise ratio in decibels?
Answer: 6.02
What is the probability that v is between 0.4 and 1? Enter the probability as a decimal number between 0
and 1 (100%). (The hint implies a piecewise-linear PDF from an accompanying figure rather than the Gaussian above.)
Hint: No need to integrate. The area of a trapezoid is [(a+b)/2]·h, where a and b are the two bases and h is
the height.
Answer: 0.195
Under the assumptions above, what is the probability of a bit error if IN = 0? Enter the probability as a decimal number
between 0 and 1 (100%).
Answer:
Under the assumptions above, what is the estimated bit error rate? Enter the rate as a decimal number between 0 and
1 (100%).
Answer:
Suppose the assumptions above are changed so that the binary input, IN, is twice as likely to be 0 as 1. All other
assumptions remain unchanged. Would the bit error rate change and if so, how?
The BER would remain the same.
The BER would increase.
There is not enough information to answer this question.
✓ The BER would decrease.
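The numeric parts of this example can be checked in a few lines. The problem does not state the decision threshold, so the midpoint T = 0.6 V is assumed here:

```python
import math

def q(x):
    # Gaussian tail probability Q(x) = P[standard normal > x]
    return 0.5 * math.erfc(x / math.sqrt(2))

r_min, r_max, sigma = 0.4, 0.8, 0.1
T = (r_min + r_max) / 2              # assumed midpoint threshold

snr_db = 10 * math.log10((r_max - r_min) ** 2 / (4 * sigma ** 2))
p_e0 = q((T - r_min) / sigma)        # error probability when IN = 0
p_e1 = q((r_max - T) / sigma)        # error probability when IN = 1
ber = 0.5 * p_e0 + 0.5 * p_e1

print(round(snr_db, 2))              # → 6.02
```

With the midpoint threshold, p_e0 = p_e1 = Q(2) ≈ 0.023, and the estimated BER equals the same value.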
Figure: channel noise v(n) is added to the transmitted waveform; the received waveform is converted back to bits (with errors), which pass through error correction on the way to the destination.
• A codeword of n bits carries k message bits plus n − k extra bits.
• code rate = k / n
• Related terms
− Gross bit rate: the rate at which all bits are sent = Fs / SPB (sampling rate Fs over samples per bit)
• Also called the data signaling rate
− Net bit rate: the rate at which useful bits are sent = code rate × gross bit rate
• The Hamming distance measures the number of bit errors it takes to transform one codeword into another.
− For example, if we use no coding, each bit is represented by one of two code words ("0" and "1").
− Since the Hamming distance is 1, a single-bit error changes one code word into the other.
Figure: a single-bit error flips code word 0 ("heads") into code word 1 ("tails").
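The Hamming distance between two equal-length codewords is straightforward to compute; a small sketch (the function name is illustrative):

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length bit strings differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("0", "1"))        # → 1 (no coding: one bit error flips the code word)
print(hamming_distance("0000", "1111"))  # → 4
```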
Error Detection vs Correction
The Hamming distance determines how powerful the error detection or correction capabilities of a code are.
• Error detection
− We can detect errors
− But, we don’t know how to fix them
• Error correction
− We can detect errors
− And, we can correct them
• Example: repeat each message bit twice, so message bit 0 → 00 and message bit 1 → 11.
− The Hamming distance between code words is d = 2.
− This code can be used to detect errors in up to d − 1 = 1 bit.
− There is an error if the number of received "1" bits in the code word is odd (e.g., 10).
Figure: code words 00 (message bit 0) and 11 (message bit 1), with the single-bit-error word 10 between them.
Detecting Errors
• With the 3-bit repetition code, message bit 0 → 000 and message bit 1 → 111.
• If we receive and observe a codeword with a mixture of 0's and 1's, we know that an error has occurred.
Figure: received words such as 010 or 101 are not code words (000, 111), so they indicate errors.
Correcting Errors
• If we assume that at most a 1-bit error can occur, we can do error correction by decoding each received word to the nearest code word (000 or 111).
Figure: the 3-bit words grouped around the code words 000 (message bit 0) and 111 (message bit 1).
The (4,1,4) Repetition Code
• We can EITHER
− detect errors in up to d − 1 = 3 bits,
OR
− detect and correct errors in up to (d − 1)/2 = 1.5 bits; since only whole bits can be corrected, this means 1 bit.
• For example
− If we observe 1000, then either a 1-bit or a 3-bit error occurred. If we correct, we assume the 3-bit error did not occur.
− If we observe 1001, the code words 0000 and 1111 are equidistant. We have no reliable way to decide which was transmitted.
Figure: code words 0000 (message bit 0) and 1111 (message bit 1), with received words such as 1000, 1001, and 1101 between them.
• Consider a repetition code where codewords are formed by repeating each bit five
times. What is (n, k, d) for this code?
Answer: Each bit is transmitted individually, so the value of k = 1.
The two possible code words are 00000 and 11111, thus n = 5.
The Hamming distance between the two code words is d = 5.
Thus, (n, k, d) = (5, 1, 5).
• Suppose we wish to detect, but not correct errors in each received codeword. What is
the maximum number of bit errors that we can detect?
Answer: The maximum number of bit errors we can detect, but not correct, is d-1=4
• Suppose we wish to detect and correct errors in each received codeword. What is the
maximum number of bit errors that we can detect and correct?
Answer:The maximum number of bit errors we can detect and correct is (d-1)/2=2
• Suppose the received bit stream is 0000011110000111111011000111001111000001. What was the original bit stream?
Answer: Breaking the stream into individual 5-bit codewords, we obtain
00000 11110 00011 11110 11000 11100 11110 00001
Correcting errors by majority voting, we obtain
00000 11111 00000 11111 00000 11111 11111 00000
Thus, the original bit stream was 01010110.
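The majority-voting step above can be sketched in a few lines (the function name is illustrative):

```python
def decode_repetition(stream: str, n: int = 5) -> str:
    """Split the stream into n-bit codewords and decode each by majority vote."""
    bits = []
    for i in range(0, len(stream), n):
        word = stream[i:i + n]
        bits.append('1' if word.count('1') > n // 2 else '0')
    return ''.join(bits)

print(decode_repetition("0000011110000111111011000111001111000001"))  # → 01010110
```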
• The (k+1, k, 2) parity bit code appends one bit so that each codeword has an even number of 1s.
message block: 0 1 1 0 0 1 0 → codeword: 01100101
message block: 1 0 1 1 1 0 0 → codeword: 10111000
code rate = k/(k+1) = 7/8 for k = 7
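The encoder and the single-error check for this parity code can be sketched as follows (function names are illustrative):

```python
def add_parity(block: str) -> str:
    """Append one bit so the codeword has an even number of 1s."""
    return block + ('1' if block.count('1') % 2 else '0')

def parity_violated(codeword: str) -> bool:
    """True if the number of 1s is odd, i.e. an odd number of bit errors occurred."""
    return codeword.count('1') % 2 == 1

print(add_parity("0110010"))        # → 01100101
print(parity_violated("01100100"))  # → True (one bit of 01100101 flipped)
```

Note that an even number of bit errors leaves the parity intact, which is why this code only guarantees detection of single-bit errors.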
Error Detection
• With the (k+1,k,2) parity bit code, we can detect single bit errors.
• A stronger code: arrange k = 4 data bits in a 2×2 square and add four parity bits:
D1 D2 P1
D3 D4 P2
P3 P4
• Rearrange the bits to form the final codeword:
D1 D2 D3 D4 P1 P2 P3 P4, code rate = 4/8 = 1/2
Example
• Arrange the message block 0 1 1 1 into a 2×2 square:
D1 D2 P1
D3 D4 P2
P3 P4
• The Pi have been chosen so that each row and column of the rearranged codeword has an even number of 1 bits.
Codeword: 0 1 1 1 1 0 1 0
• Syndrome bits Si check this condition in the received code word:
D1 D2 P1 S1
D3 D4 P2 S2
P3 P4
S3 S4
− Si = 1 indicates the condition for parity bit Pi is violated.
Example Syndrome Bit Calculations
Rules:
If D1 + D2 + P1 is even then S1 = 0, else S1 = 1
If D3 + D4 + P2 is even then S2 = 0, else S2 = 1
If D1 + D3 + P3 is even then S3 = 0, else S3 = 1
If D2 + D4 + P4 is even then S4 = 0, else S4 = 1
Received squares (rows D1 D2 P1 S1 / D3 D4 P2 S2 / P3 P4 / S3 S4):
0 1 1 0
1 1 0 0
1 0
0 0

0 1 1 0
1 1 1 1
1 0
0 0

0 1 1 0
1 0 0 1
1 0
0 1
Performing Error Correction
• Since d = 3, we can detect and correct (d − 1)/2 = 1 bit errors.
• Check the syndrome bits:
− If all Si = 0, we assume no error.
− If only one Si = 1, we assume an error in parity bit Pi.
• Received square 0 1 1 0 / 1 1 0 0 / 1 0 / 0 0: all Si = 0, so the corrected data are 0 1 1 1 (no errors).
• Received square 0 1 1 0 / 1 1 1 1 / 1 0 / 0 0: only S2 = 1, so P2 is incorrect; the data bits are unchanged and the corrected data are still 0 1 1 1.
• Finally, extract the k = 4 message bits from each corrected codeword, e.g. 0110 1110 1101.
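The syndrome computation for the 2×2 parity square can be sketched as follows (assuming the bit ordering D1 D2 D3 D4 P1 P2 P3 P4 described above; the function name is illustrative):

```python
def syndromes(cw):
    """cw = (D1, D2, D3, D4, P1, P2, P3, P4) as 0/1 integers."""
    D1, D2, D3, D4, P1, P2, P3, P4 = cw
    s1 = (D1 + D2 + P1) % 2   # row 1 parity check
    s2 = (D3 + D4 + P2) % 2   # row 2 parity check
    s3 = (D1 + D3 + P3) % 2   # column 1 parity check
    s4 = (D2 + D4 + P4) % 2   # column 2 parity check
    return s1, s2, s3, s4

print(syndromes((0, 1, 1, 1, 1, 0, 1, 0)))  # → (0, 0, 0, 0): no error
print(syndromes((0, 1, 1, 1, 1, 1, 1, 0)))  # → (0, 1, 0, 0): P2 incorrect
```

A flipped data bit trips one row and one column syndrome, so its position is given by their intersection; a flipped parity bit trips only its own syndrome.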
Summary
• Noise, always present in communication systems, leads to
bit errors.