
Lecture 3

Communication: Basics 1
In this module, we will go through some basics of modulation. As I had mentioned earlier, I am
assuming that you have not done any course in communication. So, we will talk about certain
basics of communication so that all of us are on the same page. We will start with analog and
digital signals.

All the naturally occurring signals are analog in nature, where the signal levels change
continuously with time. These are digitized for various purposes through a process called analog-
to-digital conversion. The digitization of an analog signal comprises two processes: (1)
sampling, and (2) quantization.

Let us start with an analog signal, which is shown as the red trace in the figure. We decide
to sample this signal at a specific sampling frequency (fs). The sampling frequency is decided by
the bandwidth of the signal through the Nyquist criterion, which states that the sampling
frequency must be at least twice the largest frequency content of the analog signal. If the largest
frequency content in the signal is at frequency fB, the signal has to be sampled with a frequency
of at least 2fB. The sampling process fixes the time duration between consecutive samples, i.e.,
the sampling interval (1/fs). Thus, the sampling process discretizes the x-axis (time axis), which now
consists of the sampling instants. Next, the range of the amplitude values of the signal (y-axis) is
also discretized into quantization levels. Now, the values of the signal at the sampling instants
are assigned the value corresponding to the nearest quantization level. In this manner, the analog
signal is converted to a digital signal.
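
For illustration, here is a minimal Python sketch of this sampling-and-quantization process; the tone, sampling rate, and number of levels are illustrative choices rather than values from the lecture.

import numpy as np

# Minimal sketch of analog-to-digital conversion (illustrative parameters):
# a 1 kHz tone sampled above the Nyquist rate and uniformly quantized to 8 levels.
fB = 1e3                        # highest frequency content of the signal (Hz)
fs = 8 * fB                     # sampling rate, chosen above the Nyquist minimum of 2*fB
N_levels = 8                    # 8-level quantization -> 3 bits per sample

t = np.arange(0, 5e-3, 1 / fs)          # sampling instants (discretized time axis)
x = np.sin(2 * np.pi * fB * t)          # signal values at the sampling instants

# Quantization: assign each sample to the nearest of the N_levels amplitude levels
levels = np.linspace(-1, 1, N_levels)
x_quantized = levels[np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)]

bits_per_sample = int(np.log2(N_levels))    # b = log2(N) = 3
print(bits_per_sample, x_quantized[:5])

Each quantized sample can then be encoded with 3 bits, which is the b = log2(N) relation used below.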

Let us consider the example shown here. We have considered 8-level quantization, which
requires 3 bits. In general, for N-level quantization, the number of bits required is b = log2(N).
Thus, each quantized level or symbol is represented using b bits. If the bandwidth of the signal is
fB, the minimum sampling rate required according to the Nyquist criterion is 2fB, which is equal to the
symbol rate (symbols/s). Thus, the bit rate of the digital signal is given as follows.

Bit rate = symbol rate × bits per symbol

Bit rate = 2 × fB × log2(N)

The optimum number of quantization levels depends on the amplitude accuracy, or quantization
accuracy, required in the system. The larger the number of levels, the more accurate the signal
representation. On the downside, the larger the number of levels, the more bits are required to
represent the same information. That is the trade-off.

For example, consider the speech signal. Human hearing is most sensitive in the frequency range of
about 2 kHz to 4 kHz. Hence, for telephony, the speech signal is generally band-limited to about 4 kHz.
Now, we perform the calculation of bit rate for a speech signal. Assume that the largest
frequency (fB) in the speech signal to be 4 kHz, and consider 8-bit quantization. This means that
each sample is to be represented by 8 bits, so that the number of quantization levels is N = 2^8 =
256. In this case, the bit rate can be computed to be 2 × 4 × 10^3 × 8 bits/s = 64 kbps. This is the data
rate of a basic telephone signal.
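
The same calculation can be written out numerically; the short sketch below simply reproduces the 64 kbps figure from the values quoted above.

import math

fB = 4e3                         # highest frequency in the speech signal (Hz)
symbol_rate = 2 * fB             # minimum sampling rate by the Nyquist criterion (samples/s)
bits_per_symbol = 8              # 8-bit quantization
N = 2 ** bits_per_symbol         # 256 quantization levels
assert bits_per_symbol == math.log2(N)

bit_rate = symbol_rate * bits_per_symbol     # = 2 * fB * log2(N)
print(f"{bit_rate / 1e3:.0f} kbps")          # prints 64 kbps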

Depending on the kind of signal to be transmitted (image, audio, video), the data content and the
bandwidth may vary, which decides the number of quantization levels and the sampling rate, but
the basic calculation of bit rate remains the same. This is how we move from bandwidth to bit
rate in the analog-to-digital conversion process.

Now, we have sampled and quantized the analog signal to obtain a digital signal. The next thing
you want to do is modulation. The first question is: why do we want to modulate? Before that, let
us understand what happens during modulation.

In modulation, we start with the baseband signal, which is the digital signal that we have
obtained through analog-to-digital conversion, represented by x(t). We multiply the baseband
signal with a sinusoid generated from a local oscillator whose frequency is fC, which results in
x(t) × cos(2πfC t). This is the basic time-domain representation of modulation. We now
visualize the same idea in the frequency domain. The baseband signal gets multiplied by the
sinusoid, and according to the properties of the Fourier transform, a multiplication in the time
domain corresponds to a convolution in the frequency domain. The spectrum of the signal x(t)
is represented as X(f). When this spectrum is convolved with the spectrum of a sinusoid of
frequency fC, we get replicas of X(f) centered at +fC and -fC. Thus, we have moved from a
baseband signal to a passband signal. The centre frequency of the passband signal is decided by
the local oscillator frequency.

Fourier transform of cos(2πfC t) = (1/2)[δ(f − fC) + δ(f + fC)]

Fourier transform of

x(t) cos(2πfC t) = X(f) ∗ (1/2)[δ(f − fC) + δ(f + fC)] = (1/2)[X(f − fC) + X(f + fC)]
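
A small numerical sketch of this frequency-domain picture is given below; the baseband pulse shape, carrier frequency, and simulation rate are illustrative choices, and the FFT is used only to inspect the spectra.

import numpy as np

fs = 100e3                        # simulation sampling rate (Hz)
fC = 20e3                         # carrier (local oscillator) frequency (Hz)
t = np.arange(0, 10e-3, 1 / fs)

x = np.sinc(2e3 * (t - 5e-3))                 # baseband pulse, bandwidth ~1 kHz
s = x * np.cos(2 * np.pi * fC * t)            # x(t) * cos(2*pi*fC*t)

f = np.fft.fftshift(np.fft.fftfreq(t.size, 1 / fs))
X = np.abs(np.fft.fftshift(np.fft.fft(x)))    # baseband spectrum, centred at 0 Hz
S = np.abs(np.fft.fftshift(np.fft.fft(s)))    # passband spectrum, replicas at +/- fC

print(f"baseband peak near {abs(f[np.argmax(X)]):.0f} Hz, "
      f"passband peak near {abs(f[np.argmax(S)]):.0f} Hz")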

This process of modulation enables the frequency-multiplexing of the data from various users.
Different users can be assigned slightly different carrier frequencies, and thus all the users can
transmit the data simultaneously over the same channel without the respective signals getting
mixed.
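
Building on the sketch above, a hypothetical two-user example illustrates the point: each user's baseband signal is moved to a different carrier, so the two passband spectra occupy disjoint bands of the shared channel (the carrier values here are assumptions chosen only for illustration).

import numpy as np

fs = 200e3
t = np.arange(0, 10e-3, 1 / fs)
baseband = np.sinc(2e3 * (t - 5e-3))          # ~1 kHz-wide pulse (stands in for each user's data)

# Hypothetical carrier assignments for two users sharing the same channel
s = (baseband * np.cos(2 * np.pi * 20e3 * t) +
     baseband * np.cos(2 * np.pi * 30e3 * t))

f = np.fft.fftshift(np.fft.fftfreq(t.size, 1 / fs))
S = np.abs(np.fft.fftshift(np.fft.fft(s)))

for fc in (20e3, 30e3):
    band = np.abs(np.abs(f) - fc) < 2e3       # +/- 2 kHz window around each carrier
    print(f"energy near {fc / 1e3:.0f} kHz: {np.sum(S[band] ** 2):.1f}")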

Apart from this, there are other reasons why modulation is required. You may have
learned in electromagnetics that the antenna size required to transmit a signal is decided by the
frequency of operation. The size of the antenna is proportional to the wavelength. So, the lower
the frequency, the longer the wavelength, and hence, the longer the antenna that is required. For a
free-space communication link, the practical limitations on antenna size therefore necessitate
pushing the signals into passbands.
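
As a rough back-of-the-envelope check (assuming a quarter-wave antenna as the rule of thumb, with illustrative frequencies), the antenna lengths at a baseband frequency and at a typical carrier frequency can be compared as follows.

# Quarter-wave antenna length L = lambda/4 = c/(4*f); values are rough estimates.
c = 3e8                                        # speed of light (m/s)

for f_hz, label in [(4e3, "4 kHz baseband"), (900e6, "900 MHz carrier")]:
    wavelength = c / f_hz                      # lambda = c / f
    print(f"{label}: lambda = {wavelength:.3g} m, "
          f"quarter-wave antenna ~ {wavelength / 4:.3g} m")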

For example, 2G or LTE signals have a certain allocated band of frequencies, and the
information from different users gets modulated at slightly different fC’s in that band. This is one
way of multiplexing information. Additionally, the baseband signal cannot be transmitted
directly because there is a lot of environmental noise at low frequencies, for example,
the 50 Hz interference from power transmission lines. Baseband signals from two different
sources also cannot be transmitted over the same channel, because they would interfere. All these
issues of interference and noise can be mitigated by shifting the signal to the passband. So,
that is why modulation is needed.
