
EEE360 Communication Systems I

1
Significance of Human Communication
◼ Communication is the process of exchanging information.
◼ Communication : To transfer information from one place to
another
◼ Main barriers are language and distance.
◼ Methods of communication:
1. Face to face
2. Signals
3. Written word (letters)
4. Electrical innovations:
◼ Telegraph

◼ Telephone

◼ Radio

◼ Television

◼ Internet (computer)
Introduction

◼ The words "tele", "phone", and "graph" are derived from Greek.
◼ Tele – means ‘at a distance’
◼ Phone – means sound or speech
◼ Graph - means writing or drawing
◼ Therefore, telecommunication means communication at a distance. This can be
done through wires called transmission lines or through atmosphere by a radio
link. Other examples include:
◼ Telephone – speaking at a distance
◼ Television – seeing at a distance
◼ Telegraph – writing at a distance

3
1.1 The Block Diagram of Communication System

1. Definition – Communication is the transmission of information from a source to a target via some communication link.

Communication System Block Diagram


4
❖ Possible Schemes for Communication Systems

COMM SYSTEM: ANALOG COMM | HYBRID COMM | DIGITAL COMM

SOURCE (analog data or digital data) → COMMUNICATIONS SYSTEM → DESTINATION (analog data or digital data)
5
COMMUNICATIONS SYSTEMS EXAMPLES

DIGITAL → MODEM → ANALOG → MODEM → DIGITAL

ANALOG → IP GATEWAY → WAN/LAN (DIGITAL) → IP GATEWAY → ANALOG

ANALOG → RADIO STATION → FREE SPACE (AIR) → ANALOG

6


1.1 The Block Diagram of Communication System
Main Components of Communication System
(i) Input message can be:
▪ Analog – a continuous signal, i.e., its value varies continuously, e.g., human voice, music, a temperature reading
▪ Digital – discrete symbols, i.e., values limited to a finite set, e.g., data

Figure 2: Analog Vs Digital Signal

7
1.1 The Block Diagram of Communication System
Analog Signals
❑ An analog signal is a smoothly and continuously varying voltage or
current. Examples are:
▪Sine wave
▪Voice
▪Video (TV)

Figure : Analog signals (a) Sine wave “tone.” (b) Voice. (c) Video (TV) signal.
1.1 The Block Diagram of Communication System
Digital Signals
 Digital signals change in steps or in discrete increments.
 Most digital signals use binary or two-state codes. Examples are:
◼ Telegraph (Morse code)

◼ Serial binary code (used in computers)

Figure: Digital signals (a) Telegraph (Morse code). (b) Serial binary code.
(ii) Input Transducer:
▪ A device that converts energy from one form to another: it converts the input signal into an electrical waveform.
▪ Example: a microphone converts the human voice into an electrical signal, referred to as the baseband signal or message signal.
WHAT IS BASEBAND?

Data (nonelectrical) → Electrical waveform

Without any shift in the range of frequencies of the signal: the signal is in its original form, not changed by modulation.

Baseband is the original information that is to be sent.
1.1 The Block Diagram of Communication System

(iii) Transmitter (Tx):


▪ Modifies or converts the baseband signal into a format appropriate for efficient transmission over the channel.
▪ Example: if the channel is a fiber optic cable, the transmitter converts the baseband signal into light, and the transmitted signal is light.
▪ The transmitter is also used to reformat/reshape the signal so that the channel will not distort it as much.
▪ Modulation takes place in the transmitter. It involves the systematic variation of the amplitude, phase, or frequency of a carrier in accordance with the message signal.

11
1.1 The Block Diagram of Communication System
(iv) Channel:
▪ Physical medium through which the transmitter output is sent.
▪ Divided into 2 basic groups:
• Guided Electromagnetic Wave Channel – eg. wire, coaxial
cable, optical fiber
• Electromagnetic Wave Propagation Channel – eg. Wireless
broadcast channel, mobile radio channel, satellite etc.
▪ Introduces distortion, noise and interference – in the channel, the transmitted signal is attenuated and distorted. Signal attenuation increases with the length of the channel.
▪ As a result, the receiver (Rx) receives a corrupted version of the transmitted signal.

12
Transmission Medium (Guided)
Twisted pair
 Unshielded Twisted Pair (UTP)
 Shielded Twisted Pair (STP)

Twisted pair is the ordinary copper wire.


To reduce crosstalk or electromagnetic induction between pairs of wires,
two insulated copper wires are twisted around each other.

13
Transmission Medium (Guided)

Coaxial
Coaxial cable is a type of transmission line, used to carry high
frequency electrical signals with low losses. It is used in such
applications as telephone trunklines, broadband internet networking
cables, high speed computer data busses, carrying cable television
signals, and connecting radio transmitters and receivers to their
antennas.

14
Transmission Medium (Guided)

Fiber Optic
An optical fiber cable, also known as a fiber
optic cable, is an assembly similar to an
electrical cable, but containing one or more
optical fibers that are used to carry light.
Data can be transmitted via fiber optics in the form of light.

15
Transmission Medium (Guided)

Waveguide
A waveguide is a structure that is used to guide electromagnetic waves
such as high frequency radio waves and microwaves.
There is a similar effect in water waves constrained within a canal.

16
Transmission Medium (Unguided)

17
18
1.1 The Block Diagram of Communication System
(v) Receiver
▪ Receiver decodes the received signal back to message signal – i.e
it attempts to translate the received signal back into the original
message signal sent by the source.
▪ Reprocesses the signal received from the channel by undoing the signal modifications made by the transmitter and the channel.
▪ Extracts the desired signal from the received signal and converts it to a form suitable for the output transducer.
▪ Demodulation takes place in the receiver.

(vi) Output transducer


▪ Converts electrical signals back to their original form.

19
1.1 The Block Diagram of Communication System

Transceivers
 A transceiver is an electronic unit that incorporates
circuits that both send and receive signals.
 Examples are:
• Telephones

• Fax machines

• Handheld CB radios

• Cell phones

• Computer modems

20
Block diagram of Communication system

Example : Cell phone system

Person who talks on a phone (message source)



Microphone (sound is converted into an electrical signal)

Sender antenna to base station to receiver antenna
(transmission channel)

Speaker (electrical signal is converted back into sound)

Person who listens (message destination)

21
Fundamental Concepts : Frequency and Wavelength

❖ Frequency (f)
❑ A signal is located on the frequency spectrum according to its frequency and wavelength.
❑ Frequency is the number of cycles of a repetitive wave that occur in a given period of time.
❑ Frequency is measured in cycles per second (cps). The unit of frequency is the hertz (Hz).

❖ Wavelength (λ)
❑ Wavelength is the distance occupied by one cycle of a wave and is usually expressed in meters.

❑ Wavelength is also the distance travelled by an electromagnetic wave during the time of one cycle

22
Fundamental Concepts : Frequency and Wavelength

❑ Wavelength (λ) = speed of light ÷ frequency

Speed of light c = 3 × 10^8 meters/second

For efficient transmission and reception, the antenna length should be comparable to one-quarter of the wavelength (an antenna length of λ/4 gives better radiation).

Example: Telephone-quality speech contains frequencies between 200 Hz and 3000 Hz. Approximately how long should the antenna be?
1. 4 m   2. 40 m   3. 400 m
4. 4 km  5. 40 km  6. 400 km

Solution: for 200 Hz, the wavelength is λ = c/f = 3 × 10^8 / 200 = 1.5 × 10^6 m = 1500 km!
Antenna length is 1500 km / 4 = 375 km, or around 400 km.
For 3000 Hz, λ = c/f = 3 × 10^8 / 3000 = 10^5 m = 100 km.
Antenna length is 100 km / 4 = 25 km.
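These calculations can be scripted; a minimal sketch using c = 3 × 10^8 m/s as above:

```python
# Wavelength and quarter-wave antenna length for a given frequency.
C = 3e8  # speed of light, m/s

def wavelength(f_hz):
    return C / f_hz

def quarter_wave_antenna(f_hz):
    return wavelength(f_hz) / 4

print(quarter_wave_antenna(200) / 1000)    # 375.0 km (200 Hz)
print(quarter_wave_antenna(3000) / 1000)   # 25.0 km (3000 Hz)
print(round(wavelength(880.65e6), 2))      # 0.34 m (880.65 MHz)
```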
23
Fundamental Concepts : Frequency and Wavelength

Example: A cellular phone is actually a radio transmitter and receiver. You receive an
incoming call in the form of a radio wave of frequency 880.65 MHz. What is the
wavelength (in meters) of this wave?
Solution: λ = c/f = 3 × 10^8 / (880.65 × 10^6) = 0.34 m

Exercise: check yourself.

The lowest frequencies produce the longest wavelengths and the highest frequencies produce the shortest wavelengths, so the smaller the signal wavelength, the easier the antenna construction. This is why low-frequency signals are modulated before transmission instead of being sent out directly.

Hints: Modern cell phones use frequencies near 2 GHz.


24
[Figure: a signal waveform (time domain) and its spectrum (frequency axis in Hz, 0–4500 Hz)]

25
Modulation
Modulation is the process of moving a low-frequency signal to a high frequency and then transmitting the high-frequency signal. Generally, the low-frequency signal carrying the original information is called the modulating signal or baseband signal. The high-frequency signal is known as the carrier signal. After the carrier signal is modulated by the modulating signal, the resultant signal is called the modulated wave.

Modulation is the operation performed at the transmitter to achieve efficient and reliable information transmission.

26
Modulation
Why Modulation ?
➢ Ease of radiation - related to antenna design & smaller size.
➢ Low loss and dispersion.
➢ Channel assignment (various information sources are not always
suitable for direct transmission over a given channel)
➢ Reduce noise & interference and overcome equipment limitation.
➢ Enabling the multiplexing i.e. combining multiple signals for TX at
the same time over the same carrier.

27
Modulation

Here the baseband signal comes from an audio/video source or a computer. Baseband signals are also called modulating signals, since they modulate the carrier signal. Carrier signals are high-frequency radio waves, generally produced by a radio-frequency (RF) oscillator. These two signals are combined in the modulator. The modulator takes the instantaneous amplitude of the baseband signal and varies the amplitude/frequency/phase of the carrier signal. The resultant signal is the modulated signal. It goes to an RF amplifier for signal power boosting and is then fed to an antenna or a transmission medium.
28
Demodulation
Demodulation is the reverse process (to modulation) used to recover the message signal m(t) or d(t) at the receiver. It is the process of shifting the passband signal back to the baseband frequency range at the receiver. A radio antenna receives a low-power signal. A coaxial cable end point can also be taken as a signal input. An RF amplifier boosts the signal amplitude. The signal then goes to a demodulator, which does the reverse of modulation and extracts the baseband signal from the carrier. The baseband signal is then amplified to feed an audio speaker or video monitor.

A modem both receives and transmits signals, so it performs modulation and demodulation; hence the name modem (modulator–demodulator).
29
CW Modulation Types
c(t) = A_c cos(2π f_c t + θ)

30
Amplitude Modulation
• Amplitude modulation (AM) is a modulation
technique used in electronic communication,
most commonly for transmitting information
via a radio carrier wave.
• In amplitude modulation, the amplitude
(signal strength) of the carrier wave is varied
in proportion to that of the message signal being
transmitted.
Example: A microphone converts sound
waves (energy) into an electrical signal
(energy) proportional to the sound wave
pressure. These frequencies are too low
to transmit. A high frequency carrier
wave is needed. This can be transmitted.
The carrier wave is modulated (varied) by
the signal from the microphone.

31
Amplitude Modulation

• In amplitude modulation, when the amplitude of the baseband signal decreases, the amplitude of the carrier wave decreases; when the amplitude of the baseband signal increases, the amplitude of the carrier wave increases.

Base-band signal Modulated signal

Carrier signal
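The envelope behavior described above can be sketched numerically. The frequencies and modulation index below are illustrative values, not taken from the slides:

```python
import math

# Minimal AM sketch: the envelope of the modulated signal tracks the
# baseband amplitude.
fm, fc, m_index = 10.0, 100.0, 0.5   # baseband Hz, carrier Hz, mod. index

def am(t):
    baseband = math.cos(2 * math.pi * fm * t)
    carrier = math.cos(2 * math.pi * fc * t)
    return (1 + m_index * baseband) * carrier

# At t = 0 both baseband and carrier are at a peak, so the envelope is at
# its maximum 1 + m_index; half a baseband period later (t = 1/(2*fm)) the
# baseband is at its minimum and the envelope shrinks to 1 - m_index.
print(am(0.0))                        # 1.5
print(round(am(1 / (2 * fm)), 6))     # 0.5
```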

32
Amplitude Modulation

• Modulation with exponentials

m(t) = cos(2π·10t) = (e^(j2π10t) + e^(−j2π10t)) / 2

c(t) = cos(2π·100t) = (e^(j2π100t) + e^(−j2π100t)) / 2

y(t) = m(t) × c(t)

33
Amplitude Modulation

Modulation with exponentials

y(t) = m(t) × c(t)
     = [(e^(j2π10t) + e^(−j2π10t)) / 2] × [(e^(j2π100t) + e^(−j2π100t)) / 2]
     = e^(j2π110t)/4 + e^(−j2π110t)/4 + e^(j2π90t)/4 + e^(−j2π90t)/4

34
Demodulation

y(t) → [× c(t)] → y′(t) → [lowpass filter h(t)] → m(t)

y′(t) * h(t) = m(t)   (time domain)
Y′(f) × H(f) = M(f)   (frequency domain)

35
Demodulation

Demodulation with exponentials


y′(t) = y(t) × c(t)
      = [e^(j2π110t)/4 + e^(−j2π110t)/4 + e^(j2π90t)/4 + e^(−j2π90t)/4] × [(e^(j2π100t) + e^(−j2π100t)) / 2]
      = e^(j2π210t)/8 + e^(j2π10t)/8 + e^(−j2π10t)/8 + e^(−j2π210t)/8 + e^(j2π190t)/8 + e^(−j2π10t)/8 + e^(j2π10t)/8 + e^(−j2π190t)/8
      = e^(j2π10t)/4 + e^(−j2π10t)/4 + e^(j2π210t)/8 + e^(−j2π210t)/8 + e^(j2π190t)/8 + e^(−j2π190t)/8
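This expansion can be verified numerically; a small sketch using the same m(t) = cos(2π·10t) and c(t) = cos(2π·100t), showing that y′(t) equals m(t)/2 plus high-frequency terms at 190 Hz and 210 Hz (which a lowpass filter would remove):

```python
import math

def m(t): return math.cos(2 * math.pi * 10 * t)
def c(t): return math.cos(2 * math.pi * 100 * t)

def y_prime(t):
    return m(t) * c(t) * c(t)          # y'(t) = y(t) * c(t)

def expansion(t):
    # m(t)/2 plus the 190 Hz and 210 Hz terms from the expansion
    hf = (math.cos(2 * math.pi * 210 * t) + math.cos(2 * math.pi * 190 * t)) / 4
    return m(t) / 2 + hf

for t in (0.0, 0.001, 0.0137, 0.2):
    assert math.isclose(y_prime(t), expansion(t), abs_tol=1e-9)
print("expansion matches y'(t)")
```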

36
Demodulation

EEE362 Analog Communication 37


Amplitude Modulation

Message signal bandwidth: B

Modulated signal bandwidth: 2B

38
Amplitude Modulation

• Double side-band amplitude modulation


• Double side-band suppressed carrier amplitude modulation
• Single side-band amplitude modulation
• Vestigial side-band amplitude modulation

39
Amplitude Modulation

Cons of amplitude modulation


• Information is carried in the amplitude of the
modulated signal.
• Channel noise is added to the modulated
signal.
• Noise can vary the amplitude of the
modulated signal.
• Therefore message signal can vary and
disappear.

40
CW Modulation Types
c(t) = A_c cos(2π f_c t + θ)

41
Frequency Modulation
• To generate a frequency modulated signal, the frequency of the radio carrier is changed in line with the amplitude of the message signal.
• In frequency modulation, the frequency of the carrier wave is varied in proportion to the amplitude of the message signal being transmitted: when the amplitude of the baseband signal decreases, the frequency of the carrier wave decreases, and vice versa.

Carrier signal

Base-band signal

Modulated signal

42
Frequency Modulation

• In frequency modulation, the carrier has a center frequency.
  – For example, if an audio signal of frequency 3 kHz modulates a 90 MHz carrier, the center frequency of the modulated signal is 90 MHz.
• The frequency of the modulated signal varies according to the message signal.
• In the modulated signal, the highest frequency is obtained at the maximum amplitude of the message signal, and the lowest frequency at the minimum amplitude of the message signal. If the amplitude of the message signal is zero, the frequency of the modulated signal is the center frequency.
• Transmitters using amplitude modulation (AM) broadcast on short wave (6–18 MHz), long wave (150–350 kHz) and medium wave (550–1600 kHz).
• Frequency modulation is widely used for FM radio broadcasting, at frequencies between 87.5 and 108 MHz.
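The center-frequency behavior above can be sketched with the instantaneous frequency of an FM signal; fc, fm and the modulation index beta below are illustrative values, not from the slides:

```python
import math

# For s(t) = cos(2*pi*fc*t + beta*sin(2*pi*fm*t)), the instantaneous
# frequency is f_inst(t) = fc + beta*fm*cos(2*pi*fm*t): highest at the
# message maximum, lowest at the minimum, fc when the message is zero.
fc, fm, beta = 90e6, 3e3, 5.0

def f_inst(t):
    return fc + beta * fm * math.cos(2 * math.pi * fm * t)

print(f_inst(0.0))           # 90015000.0 (message at max)
print(f_inst(1 / (2 * fm)))  # 89985000.0 (message at min)
```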

43
Frequency Modulation

f_a is the message signal frequency; f_c is the carrier frequency.

44
Frequency Division Multiplexing

45
Some terms

• Modelling of a noisy signal: g(t) = s(t) + n(t)
  (noisy signal = signal + noise)

• If the noise level is high, information will be lost.

46
Some terms

Bandwidth
• The signal bandwidth is a measure of the important spectral components of the signal for positive frequencies.
• System bandwidth is the frequency range that the system can use.
• The bandwidth of a communication channel is the difference between the frequencies allowed by the channel.

47


Analog vs. Digital Information Systems
Analog transmission
Distortion, Noise, ...

x(t) → Channel → y(t)

• Transmitted signal value: x(t0) = 5.0 V
• Received signal value: y(t0) = 5.03 V
• Due to channel noise and/or distortion, y(t0) ≠ x(t0).
• From the observed y(t0) there is no going back to x(t0) = 5.0 V, as the noise is not known.

49
Analog vs. Digital Information Systems
Digital transmission

x(t0) = 5 V ⇒ {5} decimal ≡ {0101} binary ⇒ {−A, +A, −A, +A}

Distortion, Noise, ...

x(t0): −A +A −A +A (bits 0 1 0 1) → Channel (with noise and distortion) → y(t0)

Inspecting y(t0) we conclude that: y(t0) = {0101} binary = {5} decimal = x(t0)

• Theoretically (and practically) it is possible to receive and decode a digital signal such that y(t) = x(t).
• This is possible since there is only a finite number of possible signals that we can transmit (e.g. there are two possible signal levels in the case of binary coding, and M possible signal levels in the case of M-ary coding).
50
Analog vs. Digital Information Systems
Digital Communications
Advantages:
- Inexpensive
- Symbolic representation of signal value
- Privacy preserved (data encrypted)
- Error correction
Disadvantages:
- Larger bandwidth
- Synchronization problem is relatively difficult

Analog Communications
Advantages:
- Smaller bandwidth
- Synchronization problem is relatively easier
Disadvantages:
- Expensive
- No privacy preserved
- No error correction capability

51
Course Overview

Course notes of Bertram SHI


Part 1: Point-to-Point Communication

Source → sent bits → Error Correcting Coding → more bits sent → Bits to Waveforms → sent waveform → Channel (Noise) → received waveform → Waveforms to Bits → received bits → Error Correction → Dest

(Topic 11; Topics 1-7; Topics 8-10)


Basic Communication System
The transmitter takes a sequence of bits (0 or 1) and creates a physical signal or waveform (e.g. a time-varying voltage or light intensity) that is carried over a channel.

Source → sent bits (b0 b1 b2 b3 b4 b5) → [Bits to Waveforms] (Transmitter) → sent signal → Channel

The channel (a wire, the air, a fiber optic cable) may modify the signal as it carries it.

Channel → received signal → [Waveforms to Bits] (Receiver) → received bits → Dest

The receiver tries to figure out what the transmitted bits were from the received signal.


What are Bits?
• A bit is the basic unit of information used in modern computers and communication systems.

• A bit is a variable that can assume only two possible values or “states”, commonly denoted by 0 or 1.

• Intuitively, the bit can be thought of as the answer to a yes/no question.

• More complicated information can be sent with sequences of bits.



Representing Bits
Physically, bits can be represented as two distinct states of a physical variable.

Examples:
voltage (1 = high / 0 = low)
current (1 = positive / 0 = negative) Receiver
light (1 = on / 0 = off)

Transmitter



Representing Bit Sequences as Waveforms
light intensity

b0=1 b1=0 b2=1 b3=0 b4=0 b5=0 b6=1 b7=0


ON
bit time
OFF
time
light intensity

ON

OFF
bit time



Channel
The transmitter sends the waveform representing the bit sequence to the
receiver over a channel.

Examples:
A voltage or current waveform might be sent over a wire
A light waveform might be sent over a fiber optic link (Internet) or over
plain air (TV remote)

transmitter channel receiver



Receiver

Source → Tx → sent waveform (b0 b1 b2 b3 b4 b5 …) → Channel → received waveform (b′0 b′1 b′2 b′3 b′4 b′5 …) → Rx → Dest

Common abbreviations:
Tx = Transmitter
Rx = Receiver


Encoding Information
with Bits



Recap
What are Bits?

• A bit is the basic unit of information used in modern computers and


communication systems.

• A bit is a variable that can assume only two possible values or “states”,
commonly denoted by 0 or 1.

Note:

Variables that can assume more than two possible values can be
represented by combinations or sequences of bits.

Examples:
• binary numbers
• ASCII codes for letters and text



Binary Digits
Each value is the sum of powers of two:

x = Σ_{i=0}^{N−1} 2^i · b_i

Example: if N = 3, x = 2²·b₂ + 2¹·b₁ + 2⁰·b₀

x | b2 b1 b0
0 |  0  0  0
1 |  0  0  1
2 |  0  1  0
3 |  0  1  1
4 |  1  0  0
5 |  1  0  1
6 |  1  1  0
7 |  1  1  1

Notation:
• bN-1 = Most Significant Bit (MSB)
• b0 = Least Significant Bit (LSB)
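The table above can be reproduced with a short conversion routine (a sketch; input is given MSB first, e.g. [b2, b1, b0]):

```python
# Value of an N-bit binary number, x = sum of 2^i * b_i (b0 = LSB).
def bits_to_int(bits_msb_first):
    x = 0
    for b in bits_msb_first:
        x = 2 * x + b   # shift accumulated value left, append next bit
    return x

assert bits_to_int([0, 1, 1]) == 3
assert bits_to_int([1, 0, 1]) == 5
print(bits_to_int([1, 1, 1]))  # 7
```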



ASCII Codes
American Standard Code for Information
Interchange (ASCII) is an 8-bit code
that can represent text symbols.

Examples:
E = 01000101

MSB LSB
b7 b0

C = 01000011



Example
• The ASCII table below gives the ASCII codes for common alphanumeric
characters and symbols listed from MSB to LSB. What is the bit sequence
encoding the message “Hi”? Assume that we transmit the codes of each
character in sequence with the LSB first.

• Answer: 0001001010010110

• What is the decimal value of the bit sequence "1000"? Assume that the
MSB is listed first.

• Answer: 8
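The "Hi" answer above can be reproduced in code (a sketch assuming 8-bit ASCII codes with the LSB transmitted first):

```python
# Take the 8-bit ASCII code of each character and send it LSB first.
def lsb_first_bits(text):
    out = ""
    for ch in text:
        out += format(ord(ch), "08b")[::-1]   # reverse MSB-first -> LSB first
    return out

print(lsb_first_bits("Hi"))  # 0001001010010110
```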



Bit Sequences

Assuming LSB appears first in the sequence, ECE


would be transmitted as a bit sequence.

b0 b1 b2 b3 b4 b5 b6 b7 (per character, LSB first)

10100010 11000010 10100010
   E        C        E



Continuous vs Discrete
Time Waveforms



Recap
Representing Bit Sequences as Waveforms
• A bit sequence can be encoded by changing the value of the physical variable over time.
• Each bit is encoded by holding the state constant over a length of time, known as the bit time.
• The shorter the bit time, the faster we can transmit information (bits).

b0=1 b1=0 b2=1 b3=0 b4=0 b5=0 b6=1 b7=0


light intensity

ON
bit time
OFF
light intensity

ON

OFF
bit time



Continuous and Discrete Time Signals
[Plots: “Air Temperature in Clear Water Bay” (a continuous record) and “Temperature Records from HK Observatory” (hourly samples), 12:00 through 08:00, values around 28–30]

A Continuous Time (CT) signal has a known value for all points in a time interval.
A Discrete Time (DT) signal has a known value only at a discrete (discontinuous) set of time points.


Sampling: Continuous to Discrete
• Obtain a discrete time waveform x(n) by sampling a continuous time waveform xc(t) at regular intervals in time.

Ts = sample period

• Index each sample by an integer sample number, n.
• The nth sample corresponds to the waveform at time t = nTs:

x(n) = xc(nTs)


Discrete to Continuous Time
Given samples x(n), we can obtain a continuous time waveform xh(t) by holding the waveform at x(n) between times nTs and (n+1)Ts.


Sampling Period vs. Frequency
Ts = sample period (time interval between samples)
Typical unit: seconds (s, sec)
Fs = sampling frequency or rate (number of samples in a fixed period of time)
Typical unit: Hertz (Hz, samples per second)
Relationship: Fs = 1 / Ts

Example: Ts = 0.2 sec
Fs = 1 sample / 0.2 sec = 5 samples/sec = 5 Hz


Number of Samples
Sampling a signal of length Tw with a sample period Ts results in N samples, where

N = Tw / Ts = Tw · Fs

Tradeoff:
A higher sample frequency is
• Good: Less information lost since less time between samples
• Bad: More storage needed since more samples for a given
length of time



Example
Suppose we sample a signal at frequency 𝐹𝑠. If we collect 1500 samples in 5 seconds,
what is Fs in Hz (samples/second)?

• Answer: 300

• Compact discs record two channels (left and right) of music at a sampling frequency of Fs = 44.1 kHz. If each sample is encoded with 16 bits, and one byte is 8 bits, how many bytes are required to store one minute of music?

• Answer: 10584000
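The CD answer above, computed step by step:

```python
# Bytes needed for one minute of stereo 16-bit audio at 44.1 kHz.
fs = 44_100            # samples per second, per channel
channels = 2
bits_per_sample = 16
seconds = 60

total_bits = fs * channels * bits_per_sample * seconds
total_bytes = total_bits // 8
print(total_bytes)     # 10584000
```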



Quantization

Binary numbers have a limited number of values, 2^n, where n is the number of bits.

Quantization
• Divides the expected signal range, R, into 2^n different levels,
• Quantizes the original signal to the closest level.

3 bits → 2³ = 8 levels
4 bits → 2⁴ = 16 levels


Resolution
Resolution is measured either as
• the number of bits, e.g., n = 3 bits, or
• the difference between levels:

resolution = Δ = R / (2^n − 1)


Example
• Find the input/output values for a 3-bit, 0–5 V quantization system according to the figure:

Δ = R / (2^n − 1) = (5 − 0) / (2³ − 1) = 5/7 = 0.714 V

Level | Input range    | Output
1     | 0 – Δ/2        | 0
2     | Δ/2 – 3Δ/2     | Δ
3     | 3Δ/2 – 5Δ/2    | 2Δ
4     | 5Δ/2 – 7Δ/2    | 3Δ
5     | 7Δ/2 – 9Δ/2    | 4Δ
6     | 9Δ/2 – 11Δ/2   | 5Δ
7     | 11Δ/2 – 13Δ/2  | 6Δ
8     | 13Δ/2 – 14Δ/2  | 7Δ
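A minimal sketch of this quantizer (round-to-nearest-level, matching the table above):

```python
# 3-bit, 0-5 V quantizer: Delta = R/(2^n - 1), output is the nearest
# level k*Delta.
R, n = 5.0, 3
delta = R / (2 ** n - 1)          # 5/7 ≈ 0.714 V

def quantize(v):
    level = round(v / delta)      # nearest level index, 0..7
    return level * delta

print(round(delta, 3))            # 0.714
print(quantize(0.3))              # below Delta/2 -> level 1, output 0.0
print(round(quantize(0.9), 3))    # in Delta/2..3*Delta/2 -> output Delta
```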



Coding

Nyquist Limit
To retain the high frequency information in the
analog signal, a sufficient number of samples
must be taken so that the waveform is
adequately represented.
It has been found that the minimum sampling
frequency is twice the highest analog frequency
content of the signal.
For example, if the analog signal contains a
maximum frequency variation of 3000 Hz, the
analog wave must be sampled at a rate of at
least twice this, or 6000 Hz.
This minimum sampling frequency is known as
the Nyquist frequency fN.



Discrete Time Bit
Waveforms



Bit Sequences to Bit Waveforms
Continuous time

1
b0=1 b1=0 b2=1 b3=0 b4=0 b5=0 b6=1 b7=0
0
bit time time

Discrete time
1
b0=1 b1=0 b2=1 b3=0 b4=0 b5=0 b6=1 b7=0
0

4 samples per bit



Bit Rate, Sampling Frequency, SPB
• The bit time measures the length of time it takes to send one bit:

  bit time = SPB × Ts = SPB / Fs

• The bit rate measures the number of bits we can send in a given unit of time:

  bit rate = 1 / bit time = 1 / (SPB × Ts) = Fs / SPB

• We generally want the bit rate to be large and the bit time to be small.

(SPB = samples per bit; the figure shows a bit time of SPB = 4 samples.)


Example Bit Rate Calculation

Sample rate Fs = 1 MHz = 1,000,000 samples/second = 10^6 samples/second

If we use 4 samples per bit (SPB = 4), then

Ts = (Fs)^−1 = 10^−6 second = 1 μs = 1 microsecond

The bit time = SPB × Ts = 4 μs

The bit rate = 1 / bit time = Fs / SPB = 1,000,000 / 4 Hz = 250,000 Hz = 250 kHz
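The same calculation in code:

```python
# bit time = SPB * Ts, bit rate = Fs / SPB.
Fs = 1_000_000   # sample rate, samples/second (1 MHz)
SPB = 4          # samples per bit

Ts = 1 / Fs              # sample period, seconds (1 microsecond)
bit_time = SPB * Ts      # 4 microseconds
bit_rate = Fs / SPB      # bits per second

print(bit_rate)          # 250000.0, i.e. 250 kHz
```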



Example
• Consider a system that uses 8-bit ASCII codes to encode letters. How long will it take to transmit
the bit sequence encoding Hello if we use a bit time of 4 samples per bit, and transmit samples at
a rate of 1MHz?

• Answer: 160 𝜇𝑠

• Consider a communication system where the transmitter uses 0 V


to represent bit "0" and 1V to represent bit "1". An example of a
transmitted waveform is given in the following figure.
• Assume that the first bit starts at sample 0. What is the largest possible bit time (in SPB) used in the transmission?

• Answer: 3



Representing Bit
Waveforms



Equivalent Waveform Representations

Verbal: “Encoding of the bit sequence 1,0,1,0,0,0,1 at 4 samples per bit”

Graph: x(n) versus n (pulses over n = 0–3, 8–11 and 24–27)

List, table or vector of values:
n    = [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 … ]
x(n) = [ 1 1 1 1 0 0 0 0 1 1  1  1  0  0  0  0  0  0 … ]

Sum of unit step functions:
x(n) = u(n) − u(n−4) + u(n−8) − u(n−12) + u(n−24) − u(n−28)


Functions to Specify Waveforms
• Graph
x(n)
1
b0=1 b1=0 b2=1 b3=0 b4=0 b5=0 b6=1 b7=0

0 4 8 12 16 20 24 28 n

• One possible formula:

x(n) = 1,    0 ≤ n < 4
       0,    4 ≤ n < 8
       …
       b_k,  k·SPB ≤ n < (k+1)·SPB


Unit Step Function
• To get a better formula to define a bit waveform, define the unit step function u(n):

u(n) = 0, n < 0
       1, n ≥ 0

• Delay the step as follows:

u(n − d) = 0, n < d
           1, n ≥ d


Combining Step Functions
• A single pulse can be described as the difference between
two step functions

Pulse of length 5 Pulse of length 10

u(n) u(n)

0 n 0 n

-u(n-5) -u(n-10)

u(n)-u(n-5) u(n)-u(n-10)

0 n 0 n



Representing Bit Waveforms
• Any bit sequence can be described as the sum and
difference of unit step functions.
• Use one step function per bit change
− If the bit changes from 0 to 1 at sample D, add u(n-D)
− If the bit changes from 1 to 0 at sample D, subtract u(n-D)
− If there is no change, add nothing

1
b0=1 b1=0 b2=1 b3=0 b4=0 b5=0 b6=1 b7=0

0 4 8 12 24 28 n

x(n) = u(n) – u(n-4) + u(n-8) – u(n-12) + u(n-24) – u(n-28)
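The decomposition above can be checked directly:

```python
# Building the bit waveform from unit steps:
# x(n) = u(n) - u(n-4) + u(n-8) - u(n-12) + u(n-24) - u(n-28)
def u(n):
    return 1 if n >= 0 else 0

def x(n):
    return u(n) - u(n - 4) + u(n - 8) - u(n - 12) + u(n - 24) - u(n - 28)

waveform = [x(n) for n in range(32)]
print([waveform[i:i + 4] for i in range(0, 32, 4)])
# groups of 4 samples reproduce the bits 1,0,1,0,0,0,1,0
```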



Example
x(n)

n
x(n) = 3 ∙ u(n) + 2 ∙ u(n-2) – 2 ∙ u(n-11) – u(n-14) – u(n-20) + 3 ∙ u(n-27)



Example
• Draw the waveform corresponding to the function x(n) = 2u(n − 5) − 2u(n − 17).

• Express the waveform using step functions.

• Answer: x(n) = 2u(n + 5) + u(n − 7) − 4u(n − 14) + 2u(n − 20)



The Discrete Time Channel



Communication System

x(t)
Source Transmitter
information to send physical waveform

Channel
y(t)
Dest Receiver
information received physical waveform



Continuous Time Channel

x(n) x(t)
Bits to
“Hold”
Source Discrete-time
sent bits circuit
Waveform

Channel
Transmitter
y(n)
Discrete-time
Dest waveform Sampler y(t)
received bits to Bits

Receiver



Discrete Time Channel

x(n) x(t)

Discrete Time Channel


Bits to
“Hold”
Source Discrete-time
sent bits circuit
Waveform

Channel
Discrete-time
Transmitter
y(n)
Discrete-time
Dest Waveform Sampler y(t)
received to Bits
bits

Discrete-time
Receiver



Mathematical model

x(n) = u(n) − u(n − 5) → Physical Channel → ya(n)   (actual response)
x(n) = u(n) − u(n − 5) → Mathematical Model → ym(n) = (1 − a^(n+1)) u(n) + …   (model response)

We have a good model if:
– The model and actual responses are similar, ym(n) ≈ ya(n)
– The relationship between ym(n) and x(n) is simple (easy to understand and calculate)


Why do engineers use models?
• Understand the operation of the system
− What is the relationship between the input and the
output of the channel?

• Predict the performance of a system


− How fast can I transmit information over the channel?

• Develop modifications to the system that improve


performance
− What can I do to improve the speed?



Effects of the Channel



Possible effects of the channel
The channel may cause the received signal y(n) to differ
from the transmitted signal x(n) in several ways.

1. Attenuation (decrease in amplitude)


2. Delay
3. Offset
4. Blurring of transitions
5. Noise

x(n) y(n)

Channel

n n



Modeling attenuation, delay, and offset

Discrete Time Channel


x(n) 1

y(n) d k

c
n

Legend:
k = attenuation (k < 1)
d = delay
c = offset
Mathematical Model: y(n) = kx(n – d) + c



Blurring of transitions
Caused by the properties of
• The transducer that creates the physical waveform
• The electronics that drive the transducer
• The physical medium that carries the waveform
• The sensor that senses the physical waveform
• The electronics that process the sensor signal

x(t) → Channel → y(t)   (What is a bandlimited channel?)



Effect of bandlimited channel (bit time=20 SPB)

[Figure: channel input (blue) and output (black). Assumes no attenuation,
delay, offset, or noise.]


Effect of bandlimited channel (bit time=10 SPB)

[Figure: channel input (blue) and output (black).]


Effect of bandlimited channel (bit time=5 SPB)

[Figure: channel input (blue) and output (black).]


Developing a bandlimited channel
model
• To predict the output of a bandlimited channel for any input,
  − assume that the channel is linear and time invariant
  − use the fact that any input can be expressed as a sum of unit step
    functions

[Figure: the bit sequence b0…b7 = 1 0 1 0 0 0 1 0 encoded as the waveform
x(n), and the corresponding blurred channel output y(n).]


Linear Time Invariant Systems



Linear Functions
• Function: something that takes in an input number x and produces an
output number y:  x → f → y

• A linear function has the form y = ax

[Figure: graph of y = ax, a line through the origin with slope a.]

Note: y = ax + b is not linear (unless b = 0).


Properties of Linear Functions
• Homogeneity: if input x gives output y = ax, then input cx gives output
a(cx) = cy.

• Additivity: if x1 gives y1 = ax1 and x2 gives y2 = ax2, then x1 + x2
gives a(x1 + x2) = y1 + y2.


Output of Linear Functions
• If you know
  − a function is linear, and
  − the output for any nonzero input
then you can compute the output for any other input using homogeneity
and additivity.

Example: Suppose a linear function has output y = 4 for input x = 2.

- Use additivity to determine the output if x = 4:
  since x = 2 + 2, y = 4 + 4 = 8.

- Use homogeneity to determine the output if x = 6:
  since x = 3 ∙ 2, y = 3 ∙ 4 = 12.
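This inference is one line of code (a sketch; the function name is my own): for y = ax, a single known input/output pair fixes the slope, so the output for any other input follows by homogeneity.

```python
def infer_linear_output(x_known, y_known, x_query):
    """For a linear function y = a*x, a known pair (x_known, y_known)
    determines a = y_known / x_known, so y_query = a * x_query."""
    return (x_query / x_known) * y_known
```

The worked example above: knowing y = 4 at x = 2 gives 8 at x = 4 and 12 at x = 6.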



Systems
System: something that takes in an input waveform x(n) and produces an
output waveform y(n):  x(n) → system → y(n)

[Figure: three example input waveforms and their corresponding output
waveforms.]


Linear Systems
• A linear system is a system that satisfies the same two properties
as a linear function.

• Homogeneity:
  if x(n) → system → y(n), then c·x(n) → system → c·y(n)

• Additivity:
  if x1(n) → system → y1(n) and x2(n) → system → y2(n),
  then x1(n) + x2(n) → system → y1(n) + y2(n)


Homogeneity
• If you scale (multiply) the input by c, the output scales by c.

[Figure: x(n) → system → y(n), and c·x(n) → system → c·y(n).]


Additivity
• The output to the sum of two inputs is the sum of the outputs to each
input applied individually.

[Figure: x1(n) → system → y1(n); x2(n) → system → y2(n);
x1(n) + x2(n) → system → y1(n) + y2(n).]



Time Invariance
• A system is time invariant if, when you delay the input by d, the
output is the same, just delayed by d.

[Figure: x(n) → system → y(n), and x(n − d) → system → y(n − d).]


Linear Time Invariant (LTI) Systems
• LTI system: A system that is both linear and time invariant

Question: Is the channel shown below LTI?

  y(n) = k·x(n − d) + c

[Figure: a unit step input x(n) and the delayed, scaled, offset output
y(n).]


Step Response
• If a system is LTI, then you can find the output to any input just by
knowing its response to almost any single non-zero input function.

• We choose the unit step function as the input.

• Step response s(n): the output to the unit step input

  u(n) → system → s(n)


Computing the Output of an LTI System

Step 1: Write the input as the sum of scaled unit step functions:
  x(n) = u(n) − u(n − 5)

Step 2: Use homogeneity and time invariance to compute the responses to
the individual steps:
  u(n) → system → s(n)
  −u(n − 5) → system → −s(n − 5)

Step 3: Use additivity to combine the individual responses:
  y(n) = s(n) − s(n − 5)

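A minimal sketch of this three-step recipe (using the exponential step response that appears later in these notes as a stand-in for s(n); the names are my own):

```python
def u(n):
    """Unit step function."""
    return 1 if n >= 0 else 0

def s(n, k=1.0, a=0.8):
    """Exponential step response s(n) = k(1 - a^(n+1))u(n)."""
    return k * (1 - a ** (n + 1)) * u(n)

def pulse_response(n, width=5):
    """Response to x(n) = u(n) - u(n - width).

    By homogeneity, time invariance, and additivity:
    y(n) = s(n) - s(n - width).
    """
    return s(n) - s(n - width)
```

During the pulse the output follows s(n); after the pulse the delayed negative step pulls it back toward zero.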


Example
• The equations below give the relationship between the input x(n) and
the output y(n) of four different channels. Are the channels linear
time-invariant?

• y(n) = 6x(n) + 2x(n − 1) + 3x(n − 2) + 4x(n − 3)
• y(n) = 2x(n) + 5
• y(n) = x(n)²
• y(n) = x(n) ∙ sin(n)

• The figure shows the step response of a discrete-time LTI system.
• If the input applied to this LTI system is given as
  x(n) = 1 for 1 ≤ n ≤ 3, and zero otherwise,
  what is the value of the output at n = 5?
• Answer: 0.7
Modeling the Channel



Exponential Step Response
• Changes in amplitude (k)
• Blurring of transitions (a)

  s(n) = k(1 − a^(n+1))u(n)

[Figure: plot of s(n) for k = 1, a = 0.8, n from 0 to 40.]


Different Parameter Settings
s(n) = k(1 − a^(n+1))u(n)

[Figure: step responses for (k = 1, a = 0.8) and (k = 0.5, a = 0.8); the
smaller k halves the final value.]


Response to a single bit (bit times of 32, 16, 8, 4, 2, and 1 SPB)

[Figure: six slides, one per bit time. Each plots the exponential step
response s(n) (a = 0.8, k = 1), the delayed negative step −s(n − SPB),
and the single-bit response y(n) = s(n) − s(n − SPB).
Blue = input, black = output.]
Response to more general input

[Figure: the bit sequence b0…b7 = 1 0 1 0 0 0 1 0 encoded as
x(n) = u(n) − u(n − 5) + u(n − 10) − u(n − 15) …, and the channel output
y(n) = s(n) − s(n − 5) + s(n − 10) − s(n − 15) …]
At the receiver

[Figure: the LTI system produces s(n) − s(n − 5) + …; the environment
adds an offset c and noise(n), so the received signal is
y(n) = c + noise(n) + s(n) − s(n − 5) + …]


Example
• The measured output of a channel from sample n = −2 to 39 is shown
below.
• This output can be approximated mathematically by the expression
  s(n) = c + k(1 − a^(n+1))u(n)
Estimate the values of c and k.

Answer: c = 2, k = 8


Example
• The following four figures plot the step responses of a channel from
sample n = −2 to 39. These can all be described by
  s(n) = (1 − a^(n+1))u(n)
for different values of a ∈ {0.3, 0.5, 0.8, 0.95}.
Which plot corresponds to a = 0.95?
Communication Protocols
Recap
• The output of a communication channel can be modeled as the sum of the
response to the transmitted signal and other variables.

[Figure: x(n) = u(n) − u(n − 5) + … passes through the LTI system, giving
s(n) − s(n − 5) + … with s(n) = k(1 − a^(n+1))u(n); an offset is added,
so y(n) = c + s(n) − s(n − 5) + …]

c = offset
k = amplitude
a = base of the exponential
Channel Response to a Bit Sequence

[Figure: input bit waveform and channel output y(n); the output swings
between c and c + k.]

Question: How do we recover (decode) the original bit stream?


Protocols
• A protocol is an agreement on a set of rules or procedures to follow
during communication.

• Protocols are necessary for any communication system.
  − If the transmitter does not follow the protocol, the receiver may be
    able to "hear" what is being said, but not "understand" it.
Protocols in Data Communication Systems
In data communication, protocols cover all aspects of data representation
and signaling, including
• The representation of text characters
  − ASCII vs Unicode
• The order in which bit sequences are sent
  − LSB or MSB first
• The representation of individual bits
  − e.g. 1 = light on, 0 = light off
• The bit time (SPB) or bit rate
• The training sequence (this topic's main focus)
• The synchronization method (this topic's main focus)
Thresholding
• In our system, for long SPBs at the receiver
  − 1 bits usually result in received values close to c + k
  − 0 bits usually result in received values close to c

• This suggests we can recover the original bits by comparing the
received value with a threshold T
  − Intuitively, a good threshold is halfway between c and c + k,
  − i.e. T = c + k/2
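A sketch of this decision rule (names are my own):

```python
def decode_bit(sample, c, k):
    """Decide a bit by comparing the received sample with the midpoint
    threshold T = c + k/2."""
    return 1 if sample > c + k / 2 else 0
```

For the worked example that appears later (c = 6, k = 22, so T = 17), a received sample of 15 decodes as a 0.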
Training Sequence
• In order to choose a threshold, the receiver needs to know c and k.
• Unfortunately, these may change over time.
• To help the receiver estimate c and k, the transmitter sends a
"training sequence".
Our Training Sequence
• Assuming Fs = 1 MHz, the training sequence consists of 500 μs of 0,
followed by 500 μs of 1, followed by 500 μs of 0.
• Estimating channel parameters from the response:
  − estimate of c = minimum value of the response
  − estimate of k = difference between the maximum and minimum values

[Figure: training sequence and response over samples 0 to 1500; the
response rises from c to c + k during the middle pulse, whose duration is
the pulse width.]
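These two estimates are one line each (a sketch, names mine; it assumes the response list covers the whole training sequence):

```python
def estimate_channel_params(response):
    """Training-sequence estimates: c is the minimum of the response,
    k is the difference between its maximum and minimum."""
    c = min(response)
    k = max(response) - c
    return c, k
```

An idealized response that sits at 2 during the 0-pulses and 10 during the 1-pulse yields (c, k) = (2, 8), matching the earlier worked example.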
Length of the Training Sequence
• Trade-off in the choice of the pulse width
  − Shorter pulse widths mean more time available to transmit data.
  − Longer pulse widths enable better estimates of the channel
    parameters (c, k).

• The choice of the pulse width is based on an assumption about the
value of "a" in the step response:

  s(n) = k(1 − a^(n+1))u(n)

• Question: If the value of "a" is larger (closer to 1), should the pulse
width be made longer or shorter? (Longer — a larger "a" means a slower
step response, so the output needs more time to approach c + k.)

[Figure: training sequence and response; the response rises from c to
c + k over the pulse width.]
Example
• Consider a channel with a step response given by
  s(n) = k(1 − a^(n+1))u(n)
• During transmission, the environment adds a constant offset c, so the
signal at the receiver is
  y(n) = c + k(1 − a^(n+1))u(n)

• Question: How long should the pulse width W be so that the maximum
value of the pulse response is larger than c + 0.9k?

[Figure: step input and received signal y(n); the response rises from c
toward c + k over the pulse width W.]
Example
Solution: Let the pulse width be W + 1 samples.
Since the maximum occurs at the end of the pulse:
  max = c + k(1 − a^(W+1))
To ensure that max > c + 0.9k:
  c + k(1 − a^(W+1)) > c + 0.9k
  k(1 − a^(W+1)) > 0.9k
  1 − a^(W+1) > 0.9
  0.1 > a^(W+1)
  ln 0.1 > (W + 1) ln a
  (ln 0.1)/(ln a) − 1 < W
(The inequality flips in the last step because 0 < a < 1 implies
ln a < 0.)

[Figure: step input and received signal; the response exceeds c + 0.9k at
the end of a pulse of width W.]
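The final inequality can be packaged as a small helper (a sketch, names mine) that returns the smallest integer W satisfying W > ln(0.1)/ln(a) − 1:

```python
import math

def min_pulse_width(a, fraction=0.9):
    """Smallest integer W with 1 - a**(W + 1) > fraction, i.e.
    W > ln(1 - fraction) / ln(a) - 1  (ln(a) < 0 flips the inequality)."""
    bound = math.log(1 - fraction) / math.log(a) - 1
    return math.floor(bound) + 1
```

For a = 0.9 the bound is about 20.85, so the minimum integer pulse width is 21, consistent with the next example.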
Example
Suppose we have a communication channel with c = 0.15, k = 0.75, and
a = 0.9. What is the minimum pulse width so that at the end of the pulse,
the response is greater than c + 0.9k = 0.825?

Solution:
By our prior analysis,
  W > (ln 0.1)/(ln a) − 1 ≈ 20.85

[Figure: step input and received signal rising from 0.15 toward 0.9,
crossing 0.825 at the end of the pulse.]
Example
• Suppose the response of a communication channel to the training
sequence is given by
  y(n) = 6 + 22u(n − 1500) − 22u(n − 2000)
The response to the training sequence is used to determine a threshold
for estimating the input bit.
Suppose that we receive an output sample with value 15. What would our
estimate of the corresponding input bit be?
• Answer: 0 (the estimates are c = 6 and k = 22, so T = c + k/2 = 17,
and 15 < 17)
Example
• Suppose that a discrete-time LTI channel has the step response:
  s(n) = 0.9(1 − 0.8^(n+1))u(n)
Assume that we transmit, as a training sequence, a pulse with unit height
and width W samples, starting at sample index 0. What is the minimum
value of W so that the channel response at the end of the pulse is larger
than 0.8?
Answer: A pulse of width W samples that starts at time 0 ends at time
W − 1. For the response of the channel at the end of the pulse to be
greater than 0.8, we must have 0.9(1 − 0.8^W) > 0.8, which gives
W > ln(1/9)/ln(0.8) ≈ 9.85, i.e. a minimum of W = 10.
Asynchronous Serial Communication



Sub-Sampling
• Every bit is sent using many (SPB) samples.
• To recover each bit, we sub-sample the received signal once every SPB
samples.
  − SUB-SAMPLE = take only a SUBset of the SAMPLES
• We must also determine when to start sub-sampling.

[Figure: an input bit sequence b0…b6 at SPB samples per bit, the channel
output, and the thresholded output; sampling points spaced SPB apart
recover the bits.]


Problem
• Suppose the transmitter only sends information occasionally. If we
receive the following waveform over an asynchronous channel, what was
the input bit sequence?

[Figure: a received waveform with the levels c + k and c + k/2 and the
bit time SPB marked.]


Asynchronous Communication
• In many communication systems, the transmitter and receiver are not
synchronized (aligned in time)
  − syn = same, chron = time
• The receiver does not know when the transmitter will transmit data.
• This type of communication is known as an asynchronous (not
synchronous) link.
• In this type of link, the receiver needs a signal from the transmitter
indicating when it starts to transmit data.

Question: How do we do this as humans?


Framing
• In an asynchronous communication,
  − data bits are first grouped into blocks
  − the blocks are then "framed," i.e., surrounded by extra bits
• Framing bits
  − Start bit – indicates the start of a data transmission
  − Stop bits – allow for time between transmissions
• The data block plus framing bits is called a frame.

[Figure: an 8-bit block surrounded by framing bits forms a frame.]


Start Bits
• In order for the receiver to know the start of a transmitted bit
sequence, we add a start bit before the bit sequence.
• The start bit is chosen to be either 0 or 1, depending upon the normal
received output of the idle channel (when there is no transmission).
• In our channel, the output is normally low (0), so we choose the start
bit to be 1.


Data Block
• For the receiver to know how long to listen, the transmitter and
receiver must agree upon how many bits will follow the start bit. We
will refer to these bits as a character or data block.
• In RS232 serial transmission used in PCs, there are usually 8 data
bits following the start bit.
• We will use blocks of 160 bytes = 1280 bits.
• If the transmitter has
  − too few bits: add zeros to the end (padding)
  − too many bits: split the bits into multiple blocks


Stop Bits
• In some cases, we add one or more stop bits to the end of each frame
to allow time for the receiver to process the frame.
• The stop bits are chosen so that the received signal is the same as
when the channel is idle, so that the next start bit can be detected.
• Using more stop bits provides more time between data blocks
  − Advantage: the receiver has more time to process the frame
  − Disadvantage: reduces the rate at which we can send information


Questions
• If each character in a 45 character text message is encoded using an 8-bit ASCII code,
how many bits would be required to encode the entire message?
✓ Answer: 360 bits
• Assume the bit sequence from Question 1 is transmitted over a communication channel
that follows the following protocol:
– Bit time: Bits last for SPB=50 samples and samples are transmitted at a rate of Fs= 1MHz.
– Block size: Data is divided into blocks of 320 bits. If the data is too long to fit into one block, data is
split into multiple blocks. If there is not enough data to fill a block, zero padding is applied.
– Framing: Each block is framed by one start bit and one stop bit. Frames are transmitted sequentially
with no time between them.
How many data blocks would be required to transmit this bit sequence?
✓ Answer: 2 blocks
• How long (in milliseconds) would it take to transmit the 45 character
text message using the communication protocol indicated above?
✓ Answer: 32.2 milliseconds (2 frames × 322 bits × 50 samples ÷ 1 MHz)
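The arithmetic behind these three answers can be sketched as follows (names mine; the protocol numbers are those stated above):

```python
def transmit_time_ms(n_chars, bits_per_char=8, block_bits=320,
                     framing_bits=2, spb=50, fs=1_000_000):
    """Time to send a padded, framed message, in milliseconds."""
    data_bits = n_chars * bits_per_char        # 45 chars -> 360 bits
    n_blocks = -(-data_bits // block_bits)     # ceiling division -> 2 blocks
    total_bits = n_blocks * (block_bits + framing_bits)
    return total_bits * spb / fs * 1000
```

For 45 characters: 360 data bits, 2 blocks of 322 framed bits, 644 × 50 = 32 200 samples at 1 MHz, i.e. 32.2 ms.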



A Simple Protocol



A Simple Protocol
At the transmitter:
• Take an input sequence of bits.
• Create a block of N bytes (1 byte=8 bits).
− If the input sequence is too short,
› pad it with zeros (add extra zeros to the end)
− If the input sequence is too long,
› cut it to N bytes (throw out the rest), or
› split it into multiple blocks.
• Add start and stop bits to beginning and end of each block
• Encode the resulting bit sequence as a sample waveform with SPB
samples per bit.
• Add the training sequence before transmitting data.



A Simple Protocol
At the transmitter (1 byte = 8 bits):

  Input sequence: "E" = 10100010 (8 bits)
  Data block (2 bytes = 16 bits): 1 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0
  Frame (start bit + block + stop bit):
    1 1 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0
  Sample waveform: each bit is held for SPB = 5 samples
  Finally, the training sequence (1 1 1 1 1 0 0) is added before the
  frame.


A Simple Protocol
At the receiver:
• Use the training sequence to estimate c and k
  − Define threshold = c + k/2
• Skip past the training sequence
• Find the beginning of the start bit.
• Decode (figure out) the bit stream by comparing samples spaced SPB
apart, starting from an estimate of the end of the first bit, which is
the beginning of the start bit plus 2·SPB − 1.

[Figure: received waveform showing the training sequence, the start bit,
and the data bits b0…b7; samples are compared with the threshold c + k/2
at points spaced SPB apart, beginning 2·SPB − 1 samples after the start
bit begins.]

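A sketch of the decoding step (names mine): sub-sample the received waveform once per bit, starting 2·SPB − 1 samples after the beginning of the start bit, and threshold each sample.

```python
def decode_frame(y, start_index, spb, threshold, n_bits):
    """Recover n_bits data bits from received samples y.

    The start bit occupies [start_index, start_index + spb - 1]; the
    first data bit b0 ends at start_index + 2*spb - 1, which is where we
    take the first sample. Subsequent samples are spaced spb apart.
    """
    first = start_index + 2 * spb - 1
    return [1 if y[first + i * spb] > threshold else 0 for i in range(n_bits)]
```

On an ideal (undistorted) waveform with SPB = 2 — a start bit of 1 followed by data bits 1, 0, 1 — the decoder returns [1, 0, 1].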


Question
• A text string is converted to a discrete time waveform according to the following steps:
1. Each character in the string is converted to an 8-bit ASCII codeword according to the table below,
where the MSB is listed first.
2. Each codeword is arranged so that the MSB is transmitted first.
3. The resulting bit stream is divided into 16-bit blocks.
4. Each block is framed with a start bit of '1' and no stop bit.
5. Each bit is represented using two samples.

Suppose we obtain the following sample sequence:

1100001111000000110000111100000000
1100110011000000110000110000000000
1100001111110011000000110011000011

What was the original text string?

• Answer: 10Q :)



Trade-off between Bit Rate and Bit Error Rate



Trade-off between Bit Rate and BER
• Bit rate – the number of bits that can be transmitted per second.
− We want this to be high, i.e. the bit time should be small.

• Bit error rate (BER) – the fraction of bits that are wrongly decoded by the
receiver
− We want this to be low
− Unfortunately, the BER increases for smaller bit times

[Figure: bit rate and bit error rate plotted versus bit time; both
decrease as the bit time grows.]


Effect of Smaller Bit Time

[Figure: channel input and output for bit times of 32, 16, 8, 4, 2, and
1 SPB. Channel parameters: c = 0, k = 1, a = 0.8.]


Bit Sequences with Varying Bit Time

[Figure: input, output, and threshold for bit times of 8, 4, and 2 SPB
over 150 samples. Channel parameters: c = 0, k = 1, a = 0.8.]


Question
• If we choose a longer bit time, which of the following statements
about bit rate and bit error rate (BER) is generally true? Please select
the correct answer.
  Bit rate increases; BER increases
  Bit rate increases; BER decreases
  Bit rate decreases; BER increases
✓ Bit rate decreases; BER decreases


Intersymbol Interference



Responses to a Zero Bit

[Figure: channel outputs for the four-bit input sequences 0000, 0010,
0100, 0110, 1000, 1010, 1100, and 1110, at bit times of 15 SPB and
10 SPB. The response during the final zero bit depends on the bits sent
before it, and the variation is larger for the shorter bit time.]


Intersymbol Interference (ISI)
• The response to a "zero" or "one" bit depends upon what bits were
transmitted before it, because of the time it takes for the channel to
respond to a transition.
• This is referred to as intersymbol interference.
  − Bits = "symbols"
• The smaller the bit time (SPB) in comparison with the time it takes
for the channel to respond to a transition, the greater the ISI.
  − More past symbols interfere with the current symbol
  − We observe a larger variety of responses to a "zero" or "one" bit


Settling Time
• The settling time, ns, of a channel is the time needed for its step
response to reach 90% of its maximum.
• For the exponential step response, larger a → larger ns.

[Figure: step responses for a = 0.75 (ns = 7.0) and a = 0.85 (ns = 13.2),
with the 0.9k level marked.]

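For the exponential step response, the 90% settling time follows directly from solving 1 − a^(ns+1) = 0.9 (a sketch, names mine):

```python
import math

def settling_time(a):
    """Solve k*(1 - a**(ns + 1)) = 0.9*k for ns:
    ns = ln(0.1) / ln(a) - 1."""
    return math.log(0.1) / math.log(a) - 1
```

This reproduces the values on the slide — ns ≈ 7.0 for a = 0.75 and ns ≈ 13.2 for a = 0.85 — confirming that a larger a gives a larger ns.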


Minimizing ISI
• We generally want intersymbol interference (ISI) to be as small as
possible.
  − This makes the response to each bit more predictable.

• To reduce ISI, we can
  − Make the channel faster (reduce ns)
    › However, we may not have control over all aspects of the channel,
      e.g. room acoustics.
    › This may incur extra cost (e.g. faster electronics)
  − Make the bit time longer (increase SPB)
    › However, this reduces the bit rate.


Question
• Suppose we transmit binary bits over a given LTI channel using two different
communication protocols. Protocol A uses a bit time of 5 microseconds and protocol
B uses a bit time of 20 microseconds. All other aspects of the protocols are the same.
Which one of the following statements is correct?
• Please select the correct answer.

✓ More ISI will be seen when using protocol A.


More ISI will be seen when using protocol B.
The same ISI will be seen when using protocols A and B.



Questions
• Suppose we transmit binary bits using a given communication protocol over two
different LTI channels. Both channels have exponential step responses. Channel A
has a settling time of 5 microseconds. Channel B has a settling time of 20
microseconds. Which one of the following statements is correct?
• Please select the correct answer.

More ISI will be seen when using Channel A.


✓ More ISI will be seen when using Channel B.
The same ISI will be seen when using Channels A and B.



Eye Diagrams



Eye Diagrams
• An eye diagram summarizes the effect of intersymbol interference by
showing all responses to "zeros" and "ones" simultaneously.
• We generate an eye diagram by overlaying plots of the channel response
for two bit times.

[Figure: the response to a random bit stream and the resulting eye
diagram. Channel parameters: a = 0.85, ns = 13 (with noise), bit
time = 10 SPB.]

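The overlaying itself is just slicing (a sketch, names mine): cut the response into windows two bit times long, advancing one bit time per window, then plot all windows on common axes.

```python
def eye_segments(y, spb):
    """Two-bit-time windows of the response, stepped one bit time apart.
    Plotting all segments on top of each other produces the eye diagram."""
    return [y[i:i + 2 * spb] for i in range(0, len(y) - 2 * spb + 1, spb)]
```

Each segment spans 2·SPB samples, so every bit appears once in the middle of some window and once at an edge.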


Construction of the Eye Diagram

[Figure: built up over six frames — successive two-bit-time segments of
the response to a random bit stream (SPB = 10) are overlaid. Channel:
exponential step, a = 0.85, noise added.]

Final Eye Diagram

[Figure: the completed eye diagram for SPB = 10, with its height and
width marked.]


Eye Diagram (SPB = 15) and Eye Diagram (SPB = 5)

[Figure: responses to a random bit stream and the resulting eye diagrams
for SPB = 15 and SPB = 5. Channel: exponential step, a = 0.85, noise
added.]

Eye Diagrams (Varying bit time)

[Figure: eye diagrams for SPB = 20, 15, 10, 6, 4, and 2. Channel:
exponential, a = 0.85, noise added. The eye closes as the bit time
decreases.]


Questions
• How does the eye diagram change if we decrease the bit time?
  The eye will be more open.
✓ The eye will be more closed.

• In the diagram below, the left column shows the responses of four
different communication channels to random bit inputs with bit time 20
samples. The right column shows the eye diagrams for those channels with
SPB = 20, but in random order. Match the channels I through IV with
their corresponding eye diagrams labelled A to D.
• Answer: CADB


Equalization



Our Simple Protocol

Transmit chain: data block → add start/stop bits → encode at SPB samples
per bit → add training sequence → transmit into the channel.

Receive chain: receive → estimate threshold → skip past training → find
start bit → decode bits → data block.


Trade-off between Bit Rate and BER

[Figure: bit rate and bit error rate versus bit time; the maximum
tolerable BER sets the smallest usable bit time.]


Eye Diagrams

[Figure: eye diagrams for SPB = 20, 15, 10, 8, 7, and 2. Channel
parameters: a = 0.9, k = 1, noise added.]


Equalization
• The channel introduces intersymbol interference, which causes the eye
to close.

[Figure: the eye diagram of the channel input x(n) is open; the eye
diagram of the channel output y(n) is closed.]


Equalization
• The channel introduces intersymbol interference, which causes the eye
to close.
• The goal of a channel equalizer is to "undo" the effect of the channel.
• This will cause the eye to open.

[Figure: x(n) → channel → y(n) → equalizer → x̃(n); the closed eye at the
channel output reopens at the equalizer output.]


Question
• The relationships between bit time and
the bit rate/bit error rate for a
hypothetical communication channel
are shown in the graph below.
• If the maximum bit error rate we can
tolerate is 0.05%, estimate the
maximum bit rate in Kbps that we can
use for transmission.
• Answer: 100



Question
• What is the function of an equalizer in the
receiver?

To synchronize the transmitter and the receiver.


To estimate the threshold for detecting 0 or 1
bits.
To model the effects of the channel.
✓ To compensate for the effects of the channel.



Developing the Equalizer





Designing the Equalizer
• We have developed a model that enables us to predict the channel
output for any input:

  actual channel input → channel model → predicted channel output

• Can we reverse this model, i.e. estimate the input from the output?

  actual channel output → ??? → estimated channel input


Step Response Model
• We have modeled the communication channel by assuming
  − It is linear and time invariant (LTI)
  − It has a known step response s(n):

  s(n) = k(1 − a^(n+1))u(n),  0 < a < 1

[Figure: plot of s(n) rising from 0 toward k.]


Step Response Model
• Given an input x(n), we can predict the output y(n) by
  − expressing the input as the sum and difference of unit steps:
    x(n) = u(n) − u(n − 5) + u(n − 10) …
  − predicting the output to each step individually
  − combining the individual responses:
    y(n) = s(n) − s(n − 5) + s(n − 10) …

• Unfortunately, it is not easy to reverse this process, i.e. given the
output, estimate the input.


Equivalent Models
• A model of the channel is a way of predicting the channel output given
the channel input.
• Given the same input, equivalent models make the same prediction for
the output, but are expressed in a different way.
• Why do we need more than one model?
  − The more models we have, the better we understand the system.
  − Different models may be better suited for different purposes, e.g.
    developing the equalizer.


Recursive Channel Model



Recursive Models
• Recursive = involving repeated application of a rule

• A recursive model for a discrete time waveform x(n) has two parts
  − a formula that defines the nth sample in terms of the past samples,
    e.g. x(n) = f(x(n-1))
  − an initial (starting) condition, e.g. x(0) = 0

• Generating the waveform by recursion:
  − Given x(0), find x(1) = f(x(0)).
  − Given x(1), find x(2) = f(x(1)).
  − and so on…
Examples
• Can you think of recursive models for the following sequences?

  − x(n) = c (a constant):     x(0) = c,  x(n) = x(n−1)
  − x(n) = n (a linear ramp):  x(0) = 0,  x(n) = x(n−1) + 1
  − x(n) = 0.2n:               x(0) = 0,  x(n) = x(n−1) + 0.2
  − an alternating bit stream (0 1 0 1 0 1 …):
                               x(0) = 0,  x(n) = 1 − x(n−1)
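These all fit one tiny generator (a sketch, names mine): supply the initial condition and the update rule.

```python
def generate(x0, step, n):
    """n samples of the recursive model x(0) = x0, x(n) = step(x(n-1))."""
    seq = [x0]
    for _ in range(n - 1):
        seq.append(step(seq[-1]))
    return seq
```

For instance, `generate(0, lambda v: 1 - v, 6)` yields the alternating stream [0, 1, 0, 1, 0, 1].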



Recursive Model of IR Channel
• It turns out that the response of the IR channel y(n) to an input x(n)
can be described by a recursive formula:

  y(n) = a·y(n−1) + (1−a)·k·x(n)

[Figure: block diagram — x(n) is scaled by (1−a)·k and summed with
a·y(n−1), the delayed output fed back through a gain of a.]

• The parameter a lies between 0 and 1.
• The parameter k scales the input.
• This is also known as a "feedback" system, since the output feeds back
as an input to the system to determine the next output.
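The recursion is a few lines of code (a sketch, names mine). Running it with a unit-step input also confirms the equivalence with the exponential step-response model s(n) = k(1 − a^(n+1))u(n):

```python
def channel_recursive(x, a=0.5, k=1.0, y_prev=0.0):
    """Feedback model y(n) = a*y(n-1) + (1-a)*k*x(n), with y(-1) = y_prev."""
    y = []
    for xn in x:
        y_prev = a * y_prev + (1 - a) * k * xn
        y.append(y_prev)
    return y

# Step input: the recursion reproduces s(n) = k*(1 - a**(n+1))
step_out = channel_recursive([1] * 6, a=0.5, k=1.0)
```

With a = 1/2 and k = 1 this generates 1/2, 3/4, 7/8, 15/16, …, exactly the values of 1 − (1/2)^(n+1).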
Example
• Given the channel model:
  y(n) = a·y(n−1) + (1−a)·k·x(n)

• Assume that a = 1/2 and k = 1, i.e.,
  y(n) = 0.5·y(n−1) + 0.5·x(n)

• Find the output of the channel if the input is

  n    0  1  2    3    4    5     6     7
  x(n) 0  0  1    1    1    0     0     0
  y(n) 0  0  1/2  3/4  7/8  7/16  7/32  7/64
Effect of the Parameter "a"
y(n) = a·y(n−1) + (1−a)·k·x(n)

• The parameter a determines the "memory" in the channel

• a = 0
  − no memory of the past
  − y(n) = k·x(n)
  − the channel output is just the input multiplied by k

• a = 1
  − infinite memory of the past
  − y(n) = y(n−1)
  − the channel output is constant and ignores the channel input


Fact 2: Same Step Response (example)
Assume: y(−1) = 0, a = 1/2, k = 1

Model 1: s(n) = 1 − (1/2)^(n+1)        Model 2: y(n) = (1/2)·y(n−1) + 1/2

n    Model 1                            Model 2
0    s(0) = 1 − (1/2)^1 = 1/2           y(0) = (1/2)·0 + 1/2 = 1/2
1    s(1) = 1 − (1/2)^2 = 3/4           y(1) = (1/2)·(1/2) + 1/2 = 3/4
2    s(2) = 1 − (1/2)^3 = 7/8           y(2) = (1/2)·(3/4) + 1/2 = 7/8
3    s(3) = 1 − (1/2)^4 = 15/16         y(3) = (1/2)·(7/8) + 1/2 = 15/16
4    s(4) = 1 − (1/2)^5 = 31/32         y(4) = (1/2)·(15/16) + 1/2 = 31/32
5    s(5) = 1 − (1/2)^6 = 63/64         y(5) = (1/2)·(31/32) + 1/2 = 63/64
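The agreement between the two models can also be verified programmatically (a small sketch; the loop bound n = 0..5 matches the table):

```python
# Verifying Fact 2 numerically: with a = 1/2 and k = 1, the closed-form step
# response s(n) = 1 - (1/2)**(n+1) matches the recursion y(n) = y(n-1)/2 + 1/2
# started from y(-1) = 0 (a quick check over n = 0..5, as in the table).
y_prev = 0.0                       # y(-1) = 0
for n in range(6):
    y_prev = 0.5 * y_prev + 0.5    # Model 2: y(n) = (1/2)*y(n-1) + 1/2
    s = 1 - 0.5 ** (n + 1)         # Model 1: s(n) = 1 - (1/2)**(n+1)
    assert abs(y_prev - s) < 1e-12, (n, y_prev, s)
print("models agree for n = 0..5; final value:", y_prev)  # final value 0.984375 = 63/64
```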
Example
• Which of the following is an equivalent recursive specification
for the sequence x(n) = 0.5n + 1?
x(0)=0, x(n)=x(n−1)+0.5
x(0)=0, x(n)=0.5x(n−1)
x(n)=0.5⋅(n+2)
✓x(0)=1, x(n)=x(n−1)+0.5
x(0)=1, x(n)=0.5⋅x(n−1)



Example
• The response of a channel, y(n), to an input, x(n), can be
modelled by the following recursive system:

• Assume x(0:4)=[01110], and y(0)=0. Determine the value of the output


y(2).
• Answer:0.72



Intuition for Equalizer



Motivation: Equalization
• The channel introduces intersymbol interference, which causes the “eye” to close.
• The goal of a channel equalizer is to undo the effect of the channel.
• This will cause the “eye” to open.

[Figure: discrete-time channel followed by an equalizer: input x(n) → channel → y(n) → equalizer → x̃(n); the clean binary input waveform and the smeared channel output]
Modeling the Channel
• In order to reverse the effect of the channel, we start with a
model of the effect of a channel on the input

• We have seen that the response of the channel to an input x(n)


can be described (modeled) in two equivalent ways
− Model 1:
› Channel is linear and time invariant
› Channel has step response
s(n) = k·(1 − a^(n+1))·u(n)

− Model 2:
› If x(n) is the channel input and y(n) is the output,
y(n) =a·y(n-1) + (1-a)·k·x(n)
Intuition for Equalizer
• Due to ISI, the output does not always move far enough to cross
the threshold in response to a change in the bit.

[Figure: input bits (SPB = 2), decision threshold, and channel output over 150 samples; channel parameters c = 0, k = 1, a = 0.8, d = 0]

• Thus, looking at the value (or level) of the output is not a reliable
way to determine the input bit.



Intuition for Equalizer
• When the input goes from zero to one,
− The channel output does not move immediately to k
− Rather, the output starts to change from zero to k.

s(n) = k·(1 − a^(n+1))·u(n), with 0 < a < 1
[Figure: step response rising from 0 toward k over n = 0…40]

• Can we do better by looking at how the output is changing, rather


than the output level?

• How might we combine this information with the output level?


Example
• The response of a communication
channel to the training sequence is shown.

• The following figures plot received


waveforms for different bit times
(SPB=10 and SPB=5).
• Assume that
1. The response to the first bit starts at index 0.
2. Bit decisions are made by comparing the last
sample corresponding to each bit to a decision
threshold.
3. The decision threshold is chosen based on the
response to the training sequence
• What is the input bit sequence estimated from the received
waveform with SPB=10?
• Answer: 1010
• What is the bit sequence estimated from the received
waveform with SPB=5?
• Answer: 0010



Derivation of Equalizer



Deriving the Equalizer
• According to Model 2:

y(n) = a·y(n−1) + (1 − a)·k·x(n)


• Solving for x(n):

  x(n) = [y(n) − a·y(n−1)] / ((1 − a)·k)

This is only true if the output of the channel is exactly described by this equation; however, there may be unmodeled effects, such as nonlinearity and noise, or incorrect parameters.

• Thus, this is just an approximation to the input:

  x̃(n) = [y(n) − a·y(n−1)] / ((1 − a)·k)
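A sketch of this equalizer in Python, assuming y(−1) = 0 and that the receiver knows the channel parameters a and k (the helper name equalize is ours):

```python
# A sketch of the equalizer x~(n) = (y(n) - a*y(n-1)) / ((1-a)*k),
# assuming y(-1) = 0 and that the receiver knows the channel parameters a, k.
def equalize(y, a, k):
    x_hat, y_prev = [], 0.0        # y_prev holds y(n-1), starting at y(-1) = 0
    for yn in y:
        x_hat.append((yn - a * y_prev) / ((1 - a) * k))
        y_prev = yn
    return x_hat

# Undo the a = 0.5, k = 1 channel acting on the all-ones input:
print(equalize([0.5, 0.75, 0.875], 0.5, 1.0))  # [1.0, 1.0, 1.0]
```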
Interpretation in Terms of Changes
x(n) = 1/((1 − a)·k) · [y(n) − a·y(n−1)]
     = 1/((1 − a)·k) · [(1 − a)·y(n) + a·y(n) − a·y(n−1)]
     = (1/k) · [ y(n) + (a/(1 − a))·(y(n) − y(n−1)) ]
                 (current channel output)   (change in output)

• The weight a/(1 − a) on the change in output is 0 if a = 0 and grows without bound (→ ∞) as a → 1.

[Figure: plot of a/(1 − a) versus a for 0 ≤ a ≤ 1, rising steeply near a = 1]
Example
Suppose we have a channel whose step response can be described by

s(n) = (1/2)·(1 − (2/3)^(n+1))·u(n)

where u(n) is the unit step function.
What is the equation for the equalizer for this channel?

Solution:
Step 1: Find the equivalent recursive model

s(n) = k·(1 − a^(n+1))·u(n)  ⇔  y(n) = a·y(n−1) + (1 − a)·k·x(n)

With k = 1/2 and a = 2/3:

y(n) = (2/3)·y(n−1) + (1/6)·x(n)
Example
Solution:
Step 2: Invert the recursive model

y(n) = (2/3)·y(n−1) + (1/6)·x(n)

x(n) = 6·y(n) − 4·y(n−1)
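The derived equalizer can be sanity-checked by passing a made-up test sequence through the recursive channel model and then through x(n) = 6·y(n) − 4·y(n−1); the input should be recovered:

```python
# Sanity-checking the derived equalizer: push a made-up bit sequence through the
# channel y(n) = (2/3)*y(n-1) + (1/6)*x(n), then apply x(n) = 6*y(n) - 4*y(n-1);
# the original input should come back (up to floating-point error).
x = [1, 0, 1, 1, 0]
y, y_prev = [], 0.0
for xn in x:
    y_prev = (2 / 3) * y_prev + (1 / 6) * xn
    y.append(y_prev)

x_hat, y_prev = [], 0.0
for yn in y:
    x_hat.append(6 * yn - 4 * y_prev)
    y_prev = yn

assert all(abs(u - w) < 1e-9 for u, w in zip(x, x_hat))
print([round(v) for v in x_hat])  # [1, 0, 1, 1, 0]
```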



Example
Suppose we have a linear time invariant channel whose step response is given by s(n) = (2/3)·(1 − (5/8)^(n+1))·u(n), where u(n) is the unit step function.

Determine the equation for the equalizer for this channel, by expressing the
equalized waveform x(n) as a function of the received waveform y(n).

Answer: x(n)=4*y(n)-2.5*y(n-1)



Example
• The following figure plots a waveform received at the output of this channel introduced
above. Estimate the value of the input at index 15, x(15).

• Answer: 1



Effect of Equalization
on the Eye Diagram



Adding the Equalizer
[Block diagram]
Transmitter (tx): bit seq → add start/stop bits → encode at SPB samples per bit → add training sequence → x(n)
Channel: y(n) = a_ch·y(n−1) + (1 − a_ch)·k_ch·x(n)
Receiver path bs1: y(n) → find start bit → skip past training → estimate threshold → decode bits
Receiver path bs2: y(n) → channel equalizer → x̃(n) → decode bits

Ideally,
x̃(n) = [y(n) − a_eq·y(n−1)] / ((1 − a_eq)·k_eq),  with a_eq = a_ch and k_eq = k_ch

where
a_ch, k_ch = a and k parameters for the channel
a_eq, k_eq = a and k parameters for the equalizer
Equalization (no noise, aeq = ach)
[Figure: eye diagrams without and with equalization for SPB = 10, 5, and 3; parameters c = 0, noise = 0, a_ch = a_eq = 0.85, k_ch = k_eq = 1]
Equalization (no noise, aeq > ach)
[Figure: eye diagrams without and with equalization for SPB = 10, 5, and 3; parameters c = 0, noise = 0, a_ch = 0.85, a_eq = 0.87, k_ch = k_eq = 1]
Equalization (no noise, aeq < ach)
[Figure: eye diagrams without and with equalization for SPB = 10, 5, and 3; parameters c = 0, noise = 0, a_ch = 0.85, a_eq = 0.6, k_ch = k_eq = 1]
Equalization (noise, aeq = ach)
[Figure: eye diagrams without and with equalization for SPB = 10; parameters c = 0, noise ≠ 0, a_ch = a_eq = 0.87, k_ch = k_eq = 1]

• Although equalization has “opened” the eye width, it has also increased the size of the noise, “closing” the eye height. Thus, with noise, we need to be careful.
Summary
• Using a model of the relationship between the channel
input and the channel output, we have developed an
equalizer that “undoes” the effect of the channel.

• This “opens” the eye, which “closes” due to intersymbol


interference.

• The equalizer is robust (still works) even if the parameters


of the channel are not correctly estimated.

• However, it may magnify the effect of noise.



Noise



The Received Signal
The signal at the receiver is the sum of two parts:
• The response to the input, which can be computed from the step
response according to the LTI assumption
• Signals, such as offset (c) and noise, that are introduced by the
environment (e.g. other users, electronic components)

[Diagram: x(n) → LTI system → (+) ← c + noise(n) → y(n)]

x(n) = u(n) − u(n−5) + …  ⇒  LTI response: s(n) − s(n−5) + …

y(n) = c + noise(n) + s(n) − s(n−5) + …   (the c + noise(n) term is our focus today)


Noise
• Noise is one of the most critical and fundamental concepts in
communications systems
• Without noise this course would not exist!
• Noise occurs everywhere and a typical noise signal may look like
• It is essentially a “random signal”



Where does noise come from?
• Noise occurs naturally in nature and the
most common type is thermal noise

• Resistors, devices, and the atmosphere


are all sources of thermal noise

• Thermal noise is simply due to the


ambient heat causing electrons to move
and vibrate and create random voltages
and emissions

• Noise arises internally in systems as


well as externally from such things as
the atmosphere



Why is noise so critical?
• Without noise, you would be able to talk
very, very, very, very quietly and you would
still be heard and understood.

• The amount of noise determines the


minimum signal of what you can understand.

• Noise determines the minimum signal that


can be decoded by radios and receivers.

• We like to use small signals to save energy.

• If the desired signal falls below the noise


level, bit errors increase significantly.



Additive Noise and Its Effect



Additive Noise

[Diagram: x(n) → channel → r(n); additive noise v(n) is summed in, giving y(n) = r(n) + v(n)]

• Definitions:
− x(n): channel input
− r(n): channel output without noise
− v(n): noise
− y(n): received signal

• Additive noise moves the received signal away from the channel output without noise.
• If the noise is large enough and in the right direction, the output sample will be on
the wrong side of the threshold!
Simplifying Assumptions for BER Analysis
• Perfect synchronization
− We know exactly where to sample the output to decode each bit.

• Single sample decoding


− We decode each bit by comparing one output sample with a threshold

• No ISI
− The channel response depends only on the current bit, and not on
past bits.

• Additive “White” Gaussian Noise (AWGN)


− White: the noise varies fast enough that its value at different samples
are unrelated to each other.
− Gaussian: to be defined next time



Simplified Model
• Under these assumptions, we only need to consider one
sample per bit and can analyze each bit in isolation
(independently) of the other bits.

IN = 0/1 → channel → r, where r = rmin if IN = 0 and r = rmax if IN = 1

y = r + v   (v is the noise)

OUT = 0 if y ≤ T, and OUT = 1 if y > T, where T is the threshold

• How can we predict the bit error rate for this model?



No Noise = No Bit Errors
[Figure: input bits IN; received signal y = r + v switching cleanly between rmin and rmax around the threshold; output bits OUT identical to IN]


Noise Leads to Bit Errors
[Figure: input bits IN; noisy received signal y = r + v around the threshold; output bits OUT, with one bit error where the noise pushes y across the threshold]
Example
• The figure below shows the transmitted and received signal levels corresponding to
20 bits transmitted over a communication system with additive noise.

• Assume that bit decisions are


made by comparing the
received signal level with the
threshold shown by the
green dashed line.
• How many bit errors are made?
• Answer: 4
• Based on the data shown in Question 1, estimate the bit error rate (BER) of the
communication channel. Express the BER as a ratio lying between 0 and 1.
• Answer: 0.2
The Binary Channel and
Calculating BER



Binary Channel Model
• To start with, we will further simplify the model
− We ignore details about the noise and received signal levels rmin/rmax
− We look only at the input and output bits

• Binary channel: both input and output have two possible values, 0 or 1 (“bi” = two).

channel r y=r+v >T


IN = 0/1 OUT = 0/1
binary channel v



Binary Channel Behavior
IN = 0/1 binary channel OUT = 0/1

• Ideally, (when IN=0, OUT=0) and (when IN=1, OUT=1). In this case,
the BER = 0.
0 0
IN OUT
1 1

• Unfortunately, due to noise, sometimes (IN=0 but OUT=1) or (IN=1 but


OUT=0). In this case, BER > 0.

0 0
IN OUT
1 1
Probabilistic Analysis
0 0
IN OUT
1 1

• The BER depends upon


− “How often” IN = 0, but OUT = 1
− “How often” IN = 1, but OUT = 0
− “How often” IN = 0.
− “How often” IN = 1.
• We quantify this notion of “how often” using probability theory.
• Intuitively, the probability of something happening (e.g. IN = 0) is the percentage of time that thing happens. For example,
P[IN=0]=0.5 implies that the input bits are zero half the time
• Since there are only two possibilities,
P[IN=0]+P[IN=1]=1 P[IN=1]=1-P[IN=0]=0.5
Modeling the Binary Channel
1-Pe0
0 P[IN=0] Pe0 0

IN Pe1 OUT

1 P[IN=1] 1-Pe1 1

P[IN=0] = probability (% of the time) that IN=0

P[IN=1] = probability (% of the time) that IN=1


Pe0 = probability (% of the time) there is an error when IN=0
= probability (% of the time) that OUT=1 when IN=0

Pe1 = probability (% of the time) there is an error when IN=1


= probability (% of the time) that OUT=0 when IN=1
Computing the BER
1-Pe0
0 P[IN=0] Pe0 0

IN Pe1 OUT

1 P[IN=1] 1-Pe1 1

The BER is the probability of error, Pe


BER=Pe=Pe0 ·P[IN=0] + Pe1·P[IN=1]

probability that probability that


OUT=1 and IN=0 OUT=0 and IN=1

Two types of errors!


Example
• Assume that a student took a final exam with an equal number of "easy" and
"hard" questions. Assume that the student made errors on only 12% of the
"easy" questions, but made errors on 24% of the "hard" questions. If a
question is selected at random (all questions have equal probablity of being
selected), what is the probability that the student made an error on that
question?
• Answer:0.18
• Suppose that the test is made easier. There are twice as many "easy"
questions as "hard" questions. If the probability of error on the "easy" and
"hard" questions remains the same, what is the probability the student makes
an error on a randomly selected question?
• Answer:0.16



Examples



Example
Input/Output Bit Streams
n 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24

IN 1 1 0 1 1 0 0 1 1 1 0 1 1 0 1 0 0 1 1 1 1 0 1 1 1
OUT 1 1 0 1 1 0 0 1 1 1 1 1 1 0 1 0 0 1 1 0 1 0 0 1 1
By definition:

BER ≈ (# of errors) / (# of bit pairs) = 3/25 = 12%

Using our formula:

P[IN = 0] = 8/25,   Pe0 = 1/8,   1 − Pe0 = 7/8
P[IN = 1] = 17/25,  Pe1 = 2/17,  1 − Pe1 = 15/17

BER = Pe0·P[IN = 0] + Pe1·P[IN = 1] = (1/8)·(8/25) + (2/17)·(17/25) = 1/25 + 2/25 = 3/25
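The same counts can be computed programmatically from the two bit streams (a sketch reproducing the numbers above):

```python
# Reproducing the BER bookkeeping directly from the two bit streams.
IN  = [1,1,0,1,1,0,0,1,1,1,0,1,1,0,1,0,0,1,1,1,1,0,1,1,1]
OUT = [1,1,0,1,1,0,0,1,1,1,1,1,1,0,1,0,0,1,1,0,1,0,0,1,1]

n_bits  = len(IN)
errors  = sum(i != o for i, o in zip(IN, OUT))             # 3
zeros   = IN.count(0)                                      # 8
err_in0 = sum(i == 0 and o == 1 for i, o in zip(IN, OUT))  # 1
err_in1 = sum(i == 1 and o == 0 for i, o in zip(IN, OUT))  # 2

ber = errors / n_bits             # 3/25 = 0.12
pe0 = err_in0 / zeros             # 1/8
pe1 = err_in1 / (n_bits - zeros)  # 2/17
p0  = zeros / n_bits              # 8/25

# The direct count and the weighted-average formula agree:
assert abs(ber - (pe0 * p0 + pe1 * (1 - p0))) < 1e-12
print(ber)  # 0.12
```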
Intuition

0 P[IN=0] Pe0 1-Pe0 0

IN Pe1 OUT

1 P[IN=1] 1-Pe1 1

BER=Pe0 ·P[IN=0] + Pe1·P[IN=1]

• Since P[IN=0]+P[IN=1]=1,
− The BER is a weighted average of Pe0 and Pe1
− The BER is between Pe0 and Pe1
− If IN=0 is more likely, the BER is closer to Pe0
− If IN=1 is more likely, the BER is closer to Pe1
− If IN=0 and 1 are equally likely, BER = ½(Pe0 + Pe1)
− If Pe0 = Pe1, BER = Pe0 = Pe1.
Example BER Calculation
P[IN=0]=0.6 1-Pe0=0.8
0 0
Pe0=0.2

IN Pe1=0.3 OUT
1 P[IN=1]=0.4 1-Pe1=0.7 1

Question: What is the BER for the Binary Channel above?

Solution: Since the BER is a weighted average of Pe0 and Pe1, we expect 0.2 < BER < 0.3.

BER = Pe0·P[IN=0] + Pe1·P[IN=1] = 0.2×0.6 + 0.3×0.4 = 0.12 + 0.12 = 0.24
What we need to know to predict BER

0 P[IN=0] Pe0 1-Pe0 0

IN Pe1 OUT

1 P[IN=1] 1-Pe1 1

• In order to predict the BER, we need to know


− P[IN=0] (we can find P[IN=1] = 1 - P[IN=0])
− Pe0
− Pe1
• Usually, the transmitter determines P[IN=0]
− e.g. P[IN=0] = P[IN=1] = 0.5

• Pe0 and Pe1 depend on


− the transmit levels (rmin,rmax)
− the “size” of the noise
Summary
Noise is one of the critical and fundamental concepts in communications.
Without noise, there would be no difficulty in communication!
We started our analysis by considering only input/output bits using a
simple binary channel model.

0/1 binary channel 0/1

IN 0 0 OUT
1 1

We use probability to get a formula for BER

BER=Pe0 ·P[IN=0] + Pe1·P[IN=1]


Usually, the transmitter controls P[IN=0] and P[IN=1]
• e.g. P[IN=0] = P[IN=1] = 0.5
Example
• The figure below shows an example set of input and output bit streams from a binary
channel. Use the data in this figure to answer the four questions below.

• Estimate the bit error rate (BER) of this channel.


• Answer: 0.25
• Estimate the probability the transmitter sends a 0 bit, P[IN=0].
• Answer: 0.45
• Estimate the probability of an error if a 0 bit is transmitted, Pe0
• Answer: 0.33



Average Power in Signals



Binary Channel Model
0 P[IN=0] Pe0 1-Pe0 0

IN Pe1 OUT

1 P[IN=1] 1-Pe1 1

BER=Pe0 ·P[IN=0] + Pe1·P[IN=1]

• Usually, the transmitter determines P[IN=0/1]


− e.g. P[IN=0] = P[IN=1] = 0.5

• Pe0 and Pe1 depend on


− the transmit levels (rmin rmax)
− the power in the noise
− the threshold
Inside the Binary Channel
channel >T
IN = 0/1 r y=r+v OUT = 0/1

binary channel v

• Under our simplifying assumptions, we can consider one bit at a time.


• The channel adds an offset rmin and a scaling by rmax − rmin:
  r = rmin if IN = 0, and r = rmax if IN = 1
• The noise v is additive: y = r + v
• The output is obtained by thresholding y:

OUT = 0 if y ≤ T, and OUT = 1 if y > T  (T = threshold)
Noise Leads to Bit Errors
[Figure: input bits IN; noisy received signal y = r + v around the threshold; output bits OUT, with one bit error where the noise pushes y across the threshold]
Power Consumption
• Power is energy used per unit time:

− power = energy / time
− 1 Watt = Unit of Power
− Lifting an apple (~100g) up by 1m in 1s
requires ~1W

• Batteries contain a fixed amount of energy.


− The higher the power consumption of the
device they are powering, the faster this
energy is used up.
− usable time = energy / power consumption
Power Consumption
• Calculating the amount of energy in a battery
− Batteries are typically rated at fixed voltage
in volts (V) and a charge capacity in
milliamp-hours (mAh)
− Multiplying these together gives the total energy stored in the battery in milliwatt-hours (mWh)
− For example, this mobile phone battery
contains 3700mWh of energy

• Typical power consumption:


− microwave oven 1000W
− desktop computer 120W
− notebook computer 40W
− human brain 10W
− mobile phone 1W
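The usable-time formula can be sketched directly; as an example, the 3700 mWh phone battery mentioned above powering a 1 W mobile phone:

```python
# Sketch of the usable-time calculation: time = stored energy / power draw.
def usable_hours(energy_mWh, power_W):
    return (energy_mWh / 1000.0) / power_W   # convert mWh to Wh, divide by W

# The 3700 mWh phone battery above powering a 1 W mobile phone:
print(usable_hours(3700, 1.0))  # 3.7 (hours)
```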
Average Power in Signals
• For communication, we usually have signals that vary around an average value rave.

• We are interested in how much the signals differ from their average:
  r̃ = r − rave

• Since r̃ can be both positive and negative, its average value over many samples is zero:
  (1/N)·Σ_{n=1..N} r̃(n) = 0

• The average power is the average squared value over many samples:
  P = (1/N)·Σ_{n=1..N} (r̃(n))²


Average Power for Bit Signals
[Figure: bit signal r switching between rmin and rmax around its average rave]

P_signal = (1/N)·Σ_{n=1..N} (r̃(n))²

• If 0’s and 1’s are equally likely,
  rave = (1/2)·rmin + (1/2)·rmax

• If IN = 0,
  r̃ = rmin − rave = rmin − ((1/2)·rmin + (1/2)·rmax) = (1/2)·(rmin − rmax)

• If IN = 1,
  r̃ = rmax − rave = rmax − ((1/2)·rmin + (1/2)·rmax) = (1/2)·(rmax − rmin)

• The average power is
  P_signal = (1/2)·[(1/2)·(rmax − rmin)]² + (1/2)·[(1/2)·(rmax − rmin)]² = (rmax − rmin)²/4
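The closed form P_signal = (rmax − rmin)²/4 can be checked against a direct average over a long equiprobable bit stream (a sketch; the levels rmin = 0.2 and rmax = 1.0 are made up):

```python
# Checking P_signal = (rmax - rmin)**2 / 4 against a direct average over a long
# equiprobable bit stream. The levels rmin = 0.2 and rmax = 1.0 are made up.
rmin, rmax = 0.2, 1.0
r = [rmin, rmax] * 5000                      # equal numbers of 0 and 1 bits
r_ave = sum(r) / len(r)                      # (rmin + rmax) / 2
p_signal = sum((v - r_ave) ** 2 for v in r) / len(r)
assert abs(p_signal - (rmax - rmin) ** 2 / 4) < 1e-9
print(round(p_signal, 6))  # 0.16
```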
Gaussian Noise Model



Inside the Binary Channel

IN = 0/1 → channel → r, where r = rmin if IN = 0 and r = rmax if IN = 1; y = r + v; OUT = 0 if y ≤ T, and OUT = 1 if y > T (T = threshold)

We call the noise, v, a random variable.



Statistics of the Noise
• The value of each noise sample is random, but the statistics of a large
number of samples is predictable.

[Figure: noise waveform v(n) and histogram of 100 samples; axes are sample value and % of samples]



[The same slide repeated with histograms of 1,000, 10,000, and 1,000,000 samples: as the number of samples grows, the histogram converges to a smooth, predictable curve.]


Probability Density Function

• The histogram is not totally smooth, since we count the samples in bins of finite width. [Figure: histogram with the bin width marked; the columns sum to 100%]

• As the bins get smaller and smaller, the curve gets smoother and smoother.

• It approaches a function known as the probability density function (pdf), fv(v). [Figure: smooth pdf fv(v) with total area = 1]
Gaussian Density Function
• The probability density function of many naturally occurring random quantities, such as noise, tends to have a bell-like shape, known as a Gaussian distribution:

  fv(v) = (1/(√(2π)·σ))·e^(−(v−m)²/(2σ²))

  [Figure: bell-shaped curve with area = 1]

• This very important result is called the Central Limit Theorem.

• The Gaussian distribution is so common that it is also called the “normal” distribution.

• Applications:
  − Noise in communication systems
  − Particles in Brownian motion
  − Voltage across a resistor
Parameters Controlling the Shape
fv(v) = (1/(√(2π)·σ))·e^(−(v−m)²/(2σ²))

• The mean (m) is
  − its average value over many samples
  − the center location of the pdf

• The standard deviation (σ) is
  − an indication of the “spread” of the samples
  − a measure of the width of the pdf
  [Figure: the peak height is 1/(√(2π)·σ); at v = m ± σ the curve falls to e^(−0.5) ≈ 60% of the peak]

• The variance (σ²) is
  − the square of the standard deviation
  − the average power over many samples
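These three parameters can be estimated from a large number of samples; a sketch using Python's random module, with made-up values m = 0 and σ = 0.2:

```python
# Estimating the mean and the variance (= average power) of Gaussian noise
# from many samples. The values m = 0 and sigma = 0.2 are made up.
import random

random.seed(0)                     # fixed seed for reproducibility
m, sigma = 0.0, 0.2
v = [random.gauss(m, sigma) for _ in range(100_000)]

mean_est = sum(v) / len(v)
var_est = sum((s - mean_est) ** 2 for s in v) / len(v)
print(round(mean_est, 3), round(var_est, 3))
```

With 100,000 samples the estimates land very close to m = 0 and σ² = 0.04, illustrating that the statistics of many samples are predictable even though each sample is random.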


Changing the Mean and Variance
Changes in mean shift the Changes in variance narrow
center of mass of PDF or broaden the PDF
fv(v)
fv(v)
Smaller 1
Smaller
Variance 2πσ
Mean

0 m=A v m σ v

fv(v)
fv(v)
Larger Larger 1
Mean Variance 2πσ

0 m=B v m σ v
Calculating Probability by Integrating
• The probability that the noise v is between v1 and v2 is the area
under the probability density function between v1 and v2

[Figure: fv(v) with the shaded area between v1 and v2 equal to P[v1 < noise < v2]]
Example Probability Calculation
[Figure: a rectangular pdf fv(v) of height 1/2 and width 2]

• Verify that the overall area is 1:
  − Since the curve defines a rectangle, the area is base × height: 2 × (1/2) = 1

• Find the probability that v is between 0.5 and 1.0:
  − The area of the shaded region is (1/2) × (1/2) = 1/4
  − Thus, P[0.5 ≤ v ≤ 1.0] = 1/4
Example
• Assuming that a laptop's average power consumption is 30 watts and
that it is powered by the battery shown below, how long do you predict
the laptop would operate for if the battery starts out fully charged?
• Answer: 1.6 hours



Example
• Consider the two probability density functions (PDFs) shown below.
Which one of the following statements is correct?

PDF (B) has a larger mean and larger variance than PDF (A).
PDF (B) has a larger mean and smaller variance than PDF (A).
PDF (B) has a smaller mean and smaller variance than PDF (A).
✓ PDF (B) has a smaller mean and larger variance than PDF (A).
Calculating the BER



Binary Channel Model

0 P[IN=0] Pe0 1-Pe0 0

IN Pe1 OUT

1 P[IN=1] 1-Pe1 1

BER=Pe0 ·P[IN=0] + Pe1·P[IN=1]

• The values of Pe0 and Pe1 depend upon


– the transmit levels (rmin, rmax)
– the power in the noise (σ2)
– the threshold (T)
PDF of Received Signal + Noise
• If IN = 0: y = rmin + v, where v is Gaussian with mean 0 and variance σ².
  Then y is Gaussian with mean rmin and variance σ². [Figure: f(y if IN = 0), centered at rmin]

• If IN = 1: y = rmax + v.
  Then y is Gaussian with mean rmax and variance σ². [Figure: f(y if IN = 1), centered at rmax]
Pe0 (Probability of Error if IN=0)



Pe1 (Probability of Error if IN=1)
IN = 1 → channel → rmax; y = rmax + v (v = noise), compared with T.
[Figure: f(y if IN = 1), with the shaded tail below T equal to Pe1]

• There is an error if
  − OUT = 0
  − the noise pushes y below T

Pe1 = P[y < T if IN = 1]

• The probability of error Pe1 increases as T increases. [Figure: Pe1 as a function of T]
Predicting BER
If 0 and 1 input bits are equally likely,

BER = (1/2)·Pe0 + (1/2)·Pe1

[Figure: the scaled conditional pdfs ½·f(y if IN = 0) and ½·f(y if IN = 1); the tail area ½·Pe0 above T and the tail area ½·Pe1 below T sum to the BER]


Changing the Threshold
• Choosing T is a tradeoff between minimizing Pe0 and Pe1.
BER = (1/2)·Pe0 + (1/2)·Pe1

[Figure: BER as a function of the threshold T between rmin and rmax, alongside the scaled pdfs ½·f(y if IN = 0) and ½·f(y if IN = 1) for low, best, and high choices of T]

The best threshold if P[IN = 0] = P[IN = 1] is

T = (1/2)·(rmin + rmax)
The Effect of Signal to
Noise Ratio



Binary Channel Model

0 P[IN=0] Pe0 1-Pe0 0

IN Pe1 OUT

1 P[IN=1] 1-Pe1 1

BER=Pe0 ·P[IN=0] + Pe1·P[IN=1]

• This expression enables us to understand the effect of


– the transmit levels (rmin, rmax)
– the power in the noise (σ2)



Signal-to-Noise Ratio
• It is not the absolute signal or noise power that is important, but rather the Signal-to-Noise Ratio (SNR).

P_signal = (rmax − rmin)²/4
P_noise = σ²

SNR = P_signal / P_noise = (rmax − rmin)²/(4σ²)

• SNR is often measured in decibels (dB):

SNR (dB) = 10·log10(P_signal / P_noise)

0 dB: signal power is equal to noise power
10 dB: signal power is 10 times noise power
20 dB: signal power is 100 times noise power
30 dB: signal power is 1000 times noise power
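A small sketch of the SNR computation, using example values rmin = 0.4 V, rmax = 0.8 V, σ² = 0.01 V² (the same numbers as the worked example that follows):

```python
# Sketch: SNR from the transmit levels and the noise variance, in dB.
import math

def snr_db(rmin, rmax, sigma2):
    p_signal = (rmax - rmin) ** 2 / 4      # signal power for equiprobable bits
    return 10 * math.log10(p_signal / sigma2)

# rmin = 0.4 V, rmax = 0.8 V, sigma^2 = 0.01 V^2  ->  SNR = 4, i.e. ~6.02 dB
print(round(snr_db(0.4, 0.8, 0.01), 2))  # 6.02
```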
Noise Levels in Mobile Phones
• It determines the minimum signal that can be received by radios and
receivers

• What is the typical noise power present at the input of a mobile phone?

• Extremely, extremely small: about 10⁻¹⁵ W

• When your received signal falls to about this level your phone will lose
its connection

• The exact level is determined by


− Quality of circuits and components
− Symbol (bit) rate



Factors affecting SNR
• Received power decreases as the receiver moves away
• Decrease in received signal power leads to decreased SNR
• Once the SNR falls below around 10 dB, the receiver will stop functioning;
for a mobile phone this is around 10⁻¹⁴ W

[Figure: received signal power falling with distance toward the noise floor]



BER under Changing Signal Power
[Figure: conditional pdfs for low and high signal power (rmin and rmax closer together or farther apart), with P_signal = (rmax − rmin)²/4; alongside, BER on a log scale (10⁰ down to 10⁻³) falling as SNR (dB) increases from −10 to 10]
BER under Changing Noise Power
[Figure: conditional pdfs for high and low noise power σ; alongside, BER versus SNR (dB). Lowering the noise power raises the SNR and lowers the BER]
Example
Consider a noisy communication channel that takes binary input, IN, and produces an output given by
y=r+v where
a. r=0.4 V if IN = 0.
b. r=0.8 V if IN = 1.
c. v is a Gaussian random variable with zero mean and variance σ2=0.01 V2.
What is the value of the signal-to-noise ratio in decibels?

Answer: 6.02



Example
Suppose that the noise, v, in a communication channel has the probability density function shown below.
fv(v)=0.5−0.25|v| if −2≤v≤2 and 0 otherwise

What is the probability that v is between 0.4 and 1? Enter the probability as a decimal number between 0
and 1 (100%).
Hint: No need to integrate. The area of a trapezoid is [(a+b)/2]·h, where a and b are the two bases and h is the height.

Answer: 0.195
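The trapezoid answer can be double-checked with a simple numerical integration of the pdf (a midpoint Riemann sum; exact here up to float error, since the pdf is linear on the interval):

```python
# Double-checking the trapezoid answer by numerically integrating the pdf
# f_v(v) = 0.5 - 0.25*|v| on [0.4, 1] with a midpoint Riemann sum.
f = lambda v: 0.5 - 0.25 * abs(v) if -2 <= v <= 2 else 0.0
N, lo, hi = 10_000, 0.4, 1.0
dv = (hi - lo) / N
prob = sum(f(lo + (i + 0.5) * dv) for i in range(N)) * dv
print(round(prob, 3))  # 0.195
```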



An Expression for BER
with Gaussian Noise



The Q-function



Probabilities for Other Gaussians



Expressions for Pe0 and Pe1



Predicting BER
If 0 and 1 input bits are equally likely



Example
Consider a communication system with binary input IN and binary output OUT where
1.The output of the channel is given by y=r+v where
a. r=0.4 V if IN = 0.
b. r=0.9 V if IN = 1.
c. v is a Gaussian random variable with zero mean and variance σ2=0.04 V2.
2.The binary output of the communication system, OUT, is obtained by comparing the channel output y with a threshold
T=0.7.
3.The binary input, IN, is equally likely to be 0 or 1.

Under the assumptions above, what is the probability of a bit error if IN = 0? Enter the probability as a decimal number
between 0 and 1 (100%).
Answer: Pe0 = Q((0.7 − 0.4)/0.2) = Q(1.5) ≈ 0.0668



Under the assumptions above, what is the probability of a bit error if IN = 1? Enter the probability as a decimal number
between 0 and 1 (100%).

Answer: Pe1 = Q((0.9 − 0.7)/0.2) = Q(1.0) ≈ 0.1587

Under the assumptions above, what is the estimated bit error rate? Enter the rate as a decimal number between 0 and
1 (100%).
Answer: BER = (1/2)·Pe0 + (1/2)·Pe1 ≈ 0.1127

Suppose that assumption #3 is changed so that the binary input, IN, is twice as likely to be 0 as 1. All other assumptions
remain unchanged. Would the bit error rate change and if so, how?
The BER would remain the same.
The BER would increase.
There is not enough information to answer this question.
✓ The BER would decrease.
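Assuming the standard Gaussian tail probability Q(x) = 0.5·erfc(x/√2) (the same function tabulated at the end of this section), with Pe0 = Q((T − rmin)/σ) and Pe1 = Q((rmax − T)/σ), the example's numbers can be computed directly:

```python
# Computing the example's answers, assuming the standard Gaussian tail
# probability Q(x) = 0.5*erfc(x/sqrt(2)), with Pe0 = Q((T - rmin)/sigma)
# and Pe1 = Q((rmax - T)/sigma).
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

rmin, rmax, T, sigma = 0.4, 0.9, 0.7, 0.2
pe0 = Q((T - rmin) / sigma)     # Q(1.5), ~0.0668 per the Q table
pe1 = Q((rmax - T) / sigma)     # Q(1.0), ~0.1587 per the Q table
ber = 0.5 * pe0 + 0.5 * pe1     # ~0.1127 for equally likely bits
print(round(pe0, 4), round(pe1, 4), round(ber, 4))  # 0.0668 0.1587 0.1127
```

Since Pe0 < Pe1 here, making a 0 input more likely shifts the weighted average toward Pe0, which is why the BER decreases in the last question.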



E&CE 411, Spring 2009, Table of Q Function 1

Table 1: Values of Q(x) for 0 ≤ x ≤ 9


x Q(x) x Q(x) x Q(x) x Q(x)
0.00 0.5 2.30 0.010724 4.55 2.6823×10−6 6.80 5.231×10−12
0.05 0.48006 2.35 0.0093867 4.60 2.1125×10−6 6.85 3.6925×10−12
0.10 0.46017 2.40 0.0081975 4.65 1.6597×10−6 6.90 2.6001×10−12
0.15 0.44038 2.45 0.0071428 4.70 1.3008×10−6 6.95 1.8264×10−12
0.20 0.42074 2.50 0.0062097 4.75 1.0171×10−6 7.00 1.2798×10−12
0.25 0.40129 2.55 0.0053861 4.80 7.9333×10−7 7.05 8.9459×10−13
0.30 0.38209 2.60 0.0046612 4.85 6.1731×10−7 7.10 6.2378×10−13
0.35 0.36317 2.65 0.0040246 4.90 4.7918×10−7 7.15 4.3389×10−13
0.40 0.34458 2.70 0.003467 4.95 3.7107×10−7 7.20 3.0106×10−13
0.45 0.32636 2.75 0.0029798 5.00 2.8665×10−7 7.25 2.0839×10−13
0.50 0.30854 2.80 0.0025551 5.05 2.2091×10−7 7.30 1.4388×10−13
0.55 0.29116 2.85 0.002186 5.10 1.6983×10−7 7.35 9.9103×10−14
0.60 0.27425 2.90 0.0018658 5.15 1.3024×10−7 7.40 6.8092×10−14
0.65 0.25785 2.95 0.0015889 5.20 9.9644×10−8 7.45 4.667×10−14
0.70 0.24196 3.00 0.0013499 5.25 7.605×10−8 7.50 3.1909×10−14
0.75 0.22663 3.05 0.0011442 5.30 5.7901×10−8 7.55 2.1763×10−14
0.80 0.21186 3.10 0.0009676 5.35 4.3977×10−8 7.60 1.4807×10−14
0.85 0.19766 3.15 0.00081635 5.40 3.332×10−8 7.65 1.0049×10−14
0.90 0.18406 3.20 0.00068714 5.45 2.5185×10−8 7.70 6.8033×10−15
0.95 0.17106 3.25 0.00057703 5.50 1.899×10−8 7.75 4.5946×10−15
1.00 0.15866 3.30 0.00048342 5.55 1.4283×10−8 7.80 3.0954×10−15
1.05 0.14686 3.35 0.00040406 5.60 1.0718×10−8 7.85 2.0802×10−15
1.10 0.13567 3.40 0.00033693 5.65 8.0224×10−9 7.90 1.3945×10−15
1.15 0.12507 3.45 0.00028029 5.70 5.9904×10−9 7.95 9.3256×10−16
1.20 0.11507 3.50 0.00023263 5.75 4.4622×10−9 8.00 6.221×10−16
1.25 0.10565 3.55 0.00019262 5.80 3.3157×10−9 8.05 4.1397×10−16
1.30 0.0968 3.60 0.00015911 5.85 2.4579×10−9 8.10 2.748×10−16
1.35 0.088508 3.65 0.00013112 5.90 1.8175×10−9 8.15 1.8196×10−16
1.40 0.080757 3.70 0.0001078 5.95 1.3407×10−9 8.20 1.2019×10−16
1.45 0.073529 3.75 8.8417×10−5 6.00 9.8659×10−10 8.25 7.9197×10−17
1.50 0.066807 3.80 7.2348×10−5 6.05 7.2423×10−10 8.30 5.2056×10−17
1.55 0.060571 3.85 5.9059×10−5 6.10 5.3034×10−10 8.35 3.4131×10−17
1.60 0.054799 3.90 4.8096×10−5 6.15 3.8741×10−10 8.40 2.2324×10−17
1.65 0.049471 3.95 3.9076×10−5 6.20 2.8232×10−10 8.45 1.4565×10−17
1.70 0.044565 4.00 3.1671×10−5 6.25 2.0523×10−10 8.50 9.4795×10−18
1.75 0.040059 4.05 2.5609×10−5 6.30 1.4882×10−10 8.55 6.1544×10−18
1.80 0.03593 4.10 2.0658×10−5 6.35 1.0766×10−10 8.60 3.9858×10−18
1.85 0.032157 4.15 1.6624×10−5 6.40 7.7688×10−11 8.65 2.575×10−18
1.90 0.028717 4.20 1.3346×10−5 6.45 5.5925×10−11 8.70 1.6594×10−18
1.95 0.025588 4.25 1.0689×10−5 6.50 4.016×10−11 8.75 1.0668×10−18
2.00 0.02275 4.30 8.5399×10−6 6.55 2.8769×10−11 8.80 6.8408×10−19
2.05 0.020182 4.35 6.8069×10−6 6.60 2.0558×10−11 8.85 4.376×10−19
2.10 0.017864 4.40 5.4125×10−6 6.65 1.4655×10−11 8.90 2.7923×10−19
2.15 0.015778 4.45 4.2935×10−6 6.70 1.0421×10−11 8.95 1.7774×10−19
2.20 0.013903 4.50 3.3977×10−6 6.75 7.3923×10−12 9.00 1.1286×10−19
2.25 0.012224
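The table above can be reproduced numerically. A minimal Python sketch, using the standard identity Q(x) = ½·erfc(x/√2) with `math.erfc` from the standard library (the function name `Q` mirrors the table's notation):

```python
import math

def Q(x):
    # Gaussian tail probability Q(x) = P(X > x) for X ~ N(0, 1),
    # computed via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Spot-check against the table entries above
print(round(Q(0.0), 5))   # 0.5
print(round(Q(1.0), 5))   # 0.15866
print(round(Q(3.0), 7))   # 0.0013499
```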
Channel Coding



Bit Errors

  x(n) → [channel] → r(n) → (+) → y(n) = r(n) + v(n)
                             ↑
                            v(n)

• Noise added during transmission causes bit errors in our digital data stream.

• It is usually impossible to completely eliminate errors.

• Often, the best we can do is to bound the probability of bit error.

• Channel coding is a way to detect or correct bit errors by adding redundancy
  to the transmission.
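Such bit errors are commonly modeled as a binary symmetric channel, which flips each transmitted bit independently with some probability p. A small simulation sketch under that assumption (the function name `bsc` and the parameter values are illustrative, not from the notes):

```python
import random

def bsc(bits, p, seed=None):
    # Binary symmetric channel: flip each bit independently with probability p.
    rng = random.Random(seed)
    return "".join(("1" if b == "0" else "0") if rng.random() < p else b
                   for b in bits)

sent = "0" * 20
received = bsc(sent, p=0.1, seed=42)
errors = sum(s != r for s, r in zip(sent, received))
print(errors)  # number of flipped bits in this realization
```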
Channel Coding
[Block diagram]

  Source ──sent bits──▶ [Error Correcting Coding] ──more bits sent──▶
      [Bits to Waveforms] ──waveform──▶ Channel (+ Noise)

  Channel ──received waveform──▶ [Waveforms to Bits] ──bits with errors──▶
      [Error Correction] ──received bits──▶ Dest

• We add redundant information to the transmitted bit stream, so that we can
  detect errors at the receiver.
• Ideally we'd like to
  − correct commonly occurring errors.
  − detect uncommon errors and deal with them by methods like retransmission.
Block Codes



(n,k,d) Block Codes
• Split the message into k-bit blocks.
• Create a codeword by adding (n-k) extra bits to each block.
  − The extra bits are computed based on the message bits.
  − Thus, they contain no new information.

      |<--- k bits --->|<- (n-k) bits ->|
      |  Message bits  |   Extra bits   |
      |<-------- Codeword (n bits) ---->|

• d = minimum Hamming distance between codewords.
• Sometimes we drop the d and indicate only (n,k).



Code Rate
• Code rate: the fraction of sent bits that contain useful information (i.e. the
  message).
• For the (n,k,d) block code:

      |<--- k bits --->|<- (n-k) bits ->|
      |  Message bits  |   Extra bits   |      code rate = k / n

• Related terms
  − Gross bit rate: rate at which all bits are sent = Fs / SPB
    • Also called the data signaling rate
  − Net bit rate: rate at which useful bits are sent = code rate × gross bit rate
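As a quick numeric sketch of these relations (the channel rate below is a made-up example value, not from the notes):

```python
n, k = 8, 4                  # an (n,k) block code, e.g. the (8,4,3) code
code_rate = k / n            # fraction of sent bits carrying the message
gross_bit_rate = 1_000_000   # assumed: channel sends 1 Mbit/s in total
net_bit_rate = code_rate * gross_bit_rate

print(code_rate)      # 0.5
print(net_bit_rate)   # 500000.0
```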



Hamming Distance
• The Hamming Distance between two codewords is the number of bit positions
where the corresponding bits are different.
• For example
− The Hamming distance between (00) and (10) is 1.
− The Hamming distance between (00000000) and (11110011) is 6.

• The Hamming distance measures the number of bit errors it takes to transform
  one codeword into another.
  − For example, if we use no coding, each bit is represented by one of two
    code words (“0” and “1”).
  − Since the Hamming distance is 1, a single-bit error changes one code
    word into the other.

      “heads” 0 ←── single-bit error ──→ 1 “tails”
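The definition above translates directly into code; a minimal helper (the function name `hamming_distance` is our own), checked against the two examples given:

```python
def hamming_distance(a, b):
    # Number of bit positions where two equal-length codewords differ.
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("00", "10"))              # 1
print(hamming_distance("00000000", "11110011"))  # 6
```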
Error Detection vs Correction
Hamming distance determines how powerful the error
correction or error detection capabilities of the code are.

• Error detection
− We can detect errors
− But, we don’t know how to fix them

• Error correction
− We can detect errors
− And, we can correct them

• For a given code, the receiver can choose whether to use the code to detect
  errors or to correct them.



The Error Detection/Correction Capability
• The minimum Hamming distance determines the maximum number of bit
errors the receiver can detect or correct.

• If the minimum Hamming distance is d, the receiver can either
  − Detect but not correct errors in at most d-1 bits of each codeword
  OR
  − Detect and correct errors in at most ⌊(d-1)/2⌋ bits of each codeword
    (rounded down to an integer)

• For example, if d = 3, the receiver can either
  − Detect 1- or 2-bit errors in each codeword.
  − Detect and correct 1-bit errors in each codeword.
    • If a 2-bit error does occur, it will be detected, but incorrectly “corrected”.
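This rule can be captured in a tiny helper (integer division implements the rounding down; the function name is ours):

```python
def capability(d):
    # For minimum Hamming distance d: detect up to d-1 bit errors,
    # OR detect-and-correct up to floor((d-1)/2) bit errors per codeword.
    return {"detect": d - 1, "correct": (d - 1) // 2}

print(capability(3))  # {'detect': 2, 'correct': 1}
print(capability(5))  # {'detect': 4, 'correct': 2}
```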



Repetition Codes



(n,1,n) Repetition Codes
• Repetition codes are block codes, where
− The message is split into 1-bit blocks.
− Blocks are expanded to n bits by repeating the bit n times.

• Examples

− (2,1,2) repetition code
  • message bit 0 → codeword 00
  • message bit 1 → codeword 11
  code rate = 1/2

− (3,1,3) repetition code
  • message bit 0 → codeword 000
  • message bit 1 → codeword 111
  code rate = 1/3
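The encoder for an (n,1,n) repetition code is one line of Python (a sketch; the function name is ours):

```python
def encode_repetition(message, n):
    # (n,1,n) repetition code: repeat each message bit n times.
    return "".join(bit * n for bit in message)

print(encode_repetition("01", 2))  # 0011
print(encode_repetition("01", 3))  # 000111
```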



The (2,1,2) Repetition Code
• Each bit is repeated twice.
  − Each code word has an even number of “1” bits. We refer to this as “even
    parity”.

           01
      00        11      message bit 0 = 00, message bit 1 = 11;
           10           a single-bit error produces 01 or 10

• The Hamming distance between code words is d=2.
• This code can be used to detect errors in up to d-1=1 bit.
  − There is an error if the number of received “1” bits in the code word is odd.


The (3,1,3) Repetition Code
[Cube diagram of all 3-bit words, with the codewords 000 (message bit 0) and
 111 (message bit 1) at opposite corners]

• Each bit is repeated 3 times.
• The Hamming distance between code words is d=3.
• We can EITHER
  – Detect errors in up to d-1=2 bits
  OR
  – Detect and correct errors in up to (d-1)/2 = 1 bit
Detecting Errors
[Cube diagram of all 3-bit words; 000 = message bit 0, 111 = message bit 1]

• If we receive and observe a codeword with a mixture of 0’s and 1’s, we know
  that an error has occurred.

• If we receive 100, either
  − 000 was sent and a 1-bit error occurred.
  − 111 was sent and a 2-bit error occurred.
Correcting Errors
[Cube diagram of all 3-bit words; 000 = message bit 0, 111 = message bit 1]

• If we assume that at most a 1-bit error can occur, we can do error correction.

• If we receive 100, since at most a 1-bit error occurred, 000 must have been
  sent.

• We can correct errors by seeing whether 0 or 1 has the most “votes”.
The (4,1,4) Repetition Code
• We can EITHER
  − Detect errors in up to d-1=3 bits.
  OR
  − Detect and correct errors in up to (d-1)/2 = 1.5 bits?
    In practice: detect and correct 1-bit errors, and detect 2-bit errors.

• For example
  − If we observe 1000, then
    • Either a 1-bit or a 3-bit error occurred.
    • If we correct, we assume the 3-bit error did not occur.
  − If we observe 1001, the codewords 0000 and 1111 are equidistant. We have
    no reliable way to decide which was transmitted.

      0000 ── 1000 ── 1001 ── 1101 ── 1111
  (message bit 0)                (message bit 1)
• Consider a repetition code where codewords are formed by repeating each bit five
times.
Answer: Each message bit is encoded individually, so k=1.
The two possible code words are 00000 and 11111, thus n=5.
The Hamming distance between the two code words is d=5.
Thus, (n,k,d) = (5,1,5).
• Suppose we wish to detect, but not correct errors in each received codeword. What is
the maximum number of bit errors that we can detect?
Answer: The maximum number of bit errors we can detect, but not correct, is d-1=4
• Suppose we wish to detect and correct errors in each received codeword. What is the
maximum number of bit errors that we can detect and correct?
Answer: The maximum number of bit errors we can detect and correct is (d-1)/2=2



• Consider a repetition code where codewords are formed by repeating each bit five
times.
Suppose we receive the following bitstream.
0000011110000111111011000111001111000001
If we assume that we can both detect and correct errors, what was the original bit
stream?

Answer: Breaking the line above into individual 5 bit codewords, we obtain
00000 11110 00011 11110 11000 11100 11110 00001
Correcting errors by majority voting, we obtain
00000 11111 00000 11111 00000 11111 11111 00000
Thus, the original bit stream was 01010110.
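The majority-vote decoding used in this worked example can be sketched as follows (the function name is ours; the received string is the one from the question):

```python
def decode_repetition(bits, n=5):
    # Majority-vote decoding of an (n,1,n) repetition code (n odd).
    assert len(bits) % n == 0
    blocks = [bits[i:i + n] for i in range(0, len(bits), n)]
    return "".join("1" if b.count("1") > n // 2 else "0" for b in blocks)

received = "0000011110000111111011000111001111000001"
print(decode_repetition(received))  # 01010110
```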



Parity Bit Based Codes



Single Parity Bit Code
• Given a k-bit message, we can create a (k+1,k,2) block code by adding a
  single bit.
  − The bit is chosen so that the sum of the (k+1) bits in the codeword is even.
  − Equivalently, the bit is 1 if the sum of the k message bits is odd, and 0
    otherwise.
  − The bit is called a parity bit.
  − The resulting codeword is said to have even parity.

• Example: (8,7,2) code                  code rate = k/(k+1) = 7/8

  message block: 0 1 1 0 0 1 0
  codeword:      01100101      (parity bit = 1)

  message block: 1 0 1 1 1 0 0
  codeword:      10111000      (parity bit = 0)
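The parity-bit encoder is straightforward (a sketch; the function name is ours), reproducing both codewords above:

```python
def add_parity(message):
    # Append a parity bit so the codeword has an even number of "1" bits.
    parity = "1" if message.count("1") % 2 == 1 else "0"
    return message + parity

print(add_parity("0110010"))  # 01100101
print(add_parity("1011100"))  # 10111000
```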
Error Detection
• With the (k+1,k,2) parity bit code, we can detect single bit errors.

• If the received codeword has an even number of ones,
  then we assume no bit errors have occurred;
  otherwise, we assume a one-bit error has occurred.

• Example: (8,7,2) code

  message block:     0110010
  sent codeword:     01100101
  received codeword: 01101101 → single-bit error (detected)
  received codeword: 01111101 → two-bit error (not detected)



An (8,4,3) Code
• Arrange the message block D1 D2 D3 D4 into a 2x2 square.

• Add a parity bit (Pi) to each row and column, so that it has even parity.
  – Choose P1 so row 1 has even parity.
  – Choose P2 so row 2 has even parity.
  – Choose P3 so column 1 has even parity.
  – Choose P4 so column 2 has even parity.

      D1 D2 | P1
      D3 D4 | P2
      ------
      P3 P4

• Rearrange the bits to form the final codeword:

      D1 D2 D3 D4 P1 P2 P3 P4          code rate = 1/2
Example
• Arrange the message block 0 1 1 1 into a 2x2 square.

• Add a parity bit (Pi) to each row and column, so that it has even parity.

      0 1 | 1      (P1 = 1: row 1 even parity)
      1 1 | 0      (P2 = 0: row 2 even parity)
      -----
      1 0          (P3 = 1, P4 = 0: columns even parity)

• Rearrange the bits to form the final codeword: 0 1 1 1 1 0 1 0
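The encoding steps above can be sketched in code (the function name is ours), reproducing this example codeword:

```python
def encode_843(d1, d2, d3, d4):
    # (8,4,3) code: even parity over the rows and columns of the 2x2 square.
    p1 = (d1 + d2) % 2   # row 1
    p2 = (d3 + d4) % 2   # row 2
    p3 = (d1 + d3) % 2   # column 1
    p4 = (d2 + d4) % 2   # column 2
    return [d1, d2, d3, d4, p1, p2, p3, p4]

print(encode_843(0, 1, 1, 1))  # [0, 1, 1, 1, 1, 0, 1, 0]
```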



Syndrome Bits
Received codeword: D1 D2 D3 D4 P1 P2 P3 P4

Rearrange the codeword:

      D1 D2 P1
      D3 D4 P2
      P3 P4

• The Pi have been chosen so that each row and column of the rearranged
  codeword should have an even number of “1” bits.

Compute syndrome bits:

      D1 D2 P1 | S1
      D3 D4 P2 | S2
      P3 P4
      -----
      S3 S4

• Syndrome bits Si check this condition in the received code word.
  − Si = 1 indicates the condition for parity bit Pi is violated.
Example Syndrome Bit Calculations
Rules:
  If D1 + D2 + P1 is even then S1 = 0, else S1 = 1
  If D3 + D4 + P2 is even then S2 = 0, else S2 = 1
  If D1 + D3 + P3 is even then S3 = 0, else S3 = 1
  If D2 + D4 + P4 is even then S4 = 0, else S4 = 1

Layout:            Examples:
  D1 D2 P1 | S1      0 1 1 | 0     0 1 1 | 0     0 1 1 | 0
  D3 D4 P2 | S2      1 1 0 | 0     1 1 1 | 1     1 0 0 | 1
  P3 P4              1 0           1 0           1 0
  S3 S4              0 0           0 0           0 1
Performing Error Correction
• Since d=3, we can detect and correct (d-1)/2 = 1 bit errors.

• Check the syndrome bits
  − If all Si = 0, we assume no error.
  − If only one Si = 1, we assume an error in parity bit Pi.
  − If the syndrome bits for column i and row j are 1, we assume a bit error
    in the data bit at position i,j.

Examples (received codewords rearranged, with syndrome bits):

  0 1 1 | 0     all Si = 0 → no errors; corrected data 0 1 1 1
  1 1 0 | 0
  1 0
  0 0

  0 1 1 | 0     only S2 = 1 → P2 incorrect; data 0 1 1 1
  1 1 1 | 1
  1 0
  0 0

  0 1 1 | 0     S2 = 1 and S4 = 1 (row 2, column 2) → D4 incorrect;
  1 0 0 | 1     corrected data 0 1 1 1
  1 0
  0 1
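The syndrome-based correction procedure can be sketched in code (the function name is ours; the example input is the earlier codeword 01111010 with D4 flipped):

```python
def correct_843(codeword):
    # codeword = [D1, D2, D3, D4, P1, P2, P3, P4]; fixes at most one bit error.
    d1, d2, d3, d4, p1, p2, p3, p4 = codeword
    s = [(d1 + d2 + p1) % 2,  # S1: row 1
         (d3 + d4 + p2) % 2,  # S2: row 2
         (d1 + d3 + p3) % 2,  # S3: column 1
         (d2 + d4 + p4) % 2]  # S4: column 2
    c = list(codeword)
    rows = [i for i in (0, 1) if s[i]]
    cols = [j for j in (0, 1) if s[2 + j]]
    if rows and cols:
        c[2 * rows[0] + cols[0]] ^= 1   # flip data bit at (row, column)
    elif rows:
        c[4 + rows[0]] ^= 1             # flip parity bit P1 or P2
    elif cols:
        c[6 + cols[0]] ^= 1             # flip parity bit P3 or P4
    return c

# Received codeword with D4 flipped: sent 01111010, received 01101010
print(correct_843([0, 1, 1, 0, 1, 0, 1, 0]))  # [0, 1, 1, 1, 1, 0, 1, 0]
```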
Performing Error Detection
• With the (8,4,3) code, since d=3, we can detect 1 or 2 bit errors.

• Check the syndrome bits
  − If all Si = 0, we assume no error.
  − Otherwise, we assume there has been an error in at least one bit.

Examples:

  no errors:    1-bit error:    2-bit error:
  0 1 1 | 0     0 1 1 | 0       0 1 1 | 0
  1 1 0 | 0     1 1 1 | 1       1 0 0 | 1
  1 0           1 0             1 1
  0 0           0 0             0 0
Error Correction Summary
1. Take an input message stream:              011011101101

2. Break the message stream into k-bit
   blocks (e.g. k = 4):                       0110 1110 1101

3. Add (n-k) parity bits to form n-bit
   codewords (e.g. n = 8):                    01101111 11100101 11010110

4. Transmit data through the noisy channel
   and receive codewords with some errors:    01100111 11110101 11000110

5. Perform error correction.

6. Extract the k=4 message bits from each
   corrected codeword:                        0110 1110 1101
Summary
• Noise, always present in communication systems, leads to
bit errors.

• Error Correcting Codes can be used to reduce bit error rate.

• With (n,k,d) block codes, we use n bits to encode k message bits, where n > k.

• The minimum Hamming distance, d, between the codewords indicates how many
  bit errors the code can detect or correct.
