
EC6402 COMMUNICATION THEORY L T P C 3 0 0 3

UNIT I AMPLITUDE MODULATION 9


Generation and detection of AM wave-spectra-DSBSC, Hilbert Transform, Pre-envelope &
complex envelope - SSB and VSB –comparison -Superheterodyne Receiver.

UNIT II ANGLE MODULATION 9


Phase and frequency modulation - Narrow Band and Wide Band FM - Spectrum - FM modulation
and demodulation – FM Discriminator- PLL as FM Demodulator - Transmission bandwidth.

UNIT III RANDOM PROCESS 9


Random variables, Central limit Theorem, Random Process, Stationary Processes, Mean,
Correlation & Covariance functions, Power Spectral Density, Ergodic Processes, Gaussian
Process, Transmission of a Random Process Through a LTI filter.

UNIT IV NOISE CHARACTERIZATION 9


Noise sources and types – Noise figure and noise temperature – Noise in cascaded systems.
Narrow band noise – PSD of in-phase and quadrature noise –Noise performance in AM systems –
Noise performance in FM systems – Pre-emphasis and de-emphasis – Capture effect, threshold
effect.

UNIT V INFORMATION THEORY 9


Entropy - Discrete Memoryless Channels - Channel Capacity - Hartley-Shannon Law - Source
Coding Theorem - Huffman & Shannon-Fano Codes

TOTAL: 45 PERIODS
OUTCOMES: At the end of the course, the students would be able to:
 Design AM communication systems.
 Design angle modulated communication systems.
 Apply the concepts of random processes to the design of communication systems.
 Analyze the noise performance of AM and FM systems.

TEXT BOOKS:
1. J.G. Proakis, M. Salehi, "Fundamentals of Communication Systems", Pearson Education, 2006.
2. S. Haykin, "Digital Communications", John Wiley, 2005.

REFERENCES:
1. B.P. Lathi, "Modern Digital and Analog Communication Systems", 3rd Edition, Oxford
University Press, 2007.
2. B. Sklar, "Digital Communications: Fundamentals and Applications", 2nd Edition, Pearson
Education, 2007.
3. H.P. Hsu, "Analog and Digital Communications", Schaum's Outline Series, TMH, 2006.
4. Couch, L., "Modern Communication Systems", Pearson, 2001.


CHAPTER I

AMPLITUDE MODULATION

PREREQUISITES OF MODULATION:


In this chapter we discuss modulation: the process of varying one or more properties of
a periodic waveform, called the carrier signal (a high-frequency signal), with a modulating signal
that typically contains the information to be transmitted.
 Need for modulation:
 Antenna Height
 Narrow Banding
 Poor radiation and penetration
 Diffraction angle
 Multiplexing.
 Functions of the Carrier Wave:
The main function of the carrier wave is to carry the audio or video signal from the transmitter to
the receiver. The wave that results from superimposing the audio signal on the carrier wave is
called the modulated wave.
 Types of modulation:
The sinusoidal carrier wave can be given by the equation,
vc = Vc sin(ωct + θ) = Vc sin(2πfct + θ)
Vc – Maximum Value
fc – Frequency
θ – Phase Relation
Since the three variables are the amplitude, frequency, and phase angle, the modulation can be
done by varying any one of them. Thus there are three modulation types namely:


Amplitude Modulation (AM)
 Frequency Modulation (FM)
 Phase Modulation (PM)
 We have introduced linear modulation. In particular,
 DSB-SC, Double sideband suppressed carrier
 DSB-LC, Double sideband large carrier (AM)


 SSB, Single sideband


 VSB, Vestigial sideband

AMPLITUDE MODULATION:
"Modulation is the process of superimposing a low frequency signal on a high frequency
carrier signal."
OR
"The process of modulation can be defined as varying the RF carrier wave in accordance
with the intelligence or information in a low frequency signal."
OR
"Modulation is defined as the process by which some characteristics, usually amplitude,
frequency or phase, of a carrier is varied in accordance with instantaneous value of some
other voltage, called the modulating voltage."
 Need For Modulation
1. If two musical programs were played at the same time within hearing distance, it would be difficult
for anyone to listen to one source and not hear the second, since all musical sounds
occupy approximately the same frequency range, from about 50 Hz to 10 kHz. If the desired
program is shifted up to a band of frequencies between 100 kHz and 110 kHz, and the
second program is shifted up to the band between 120 kHz and 130 kHz, then both programs
still have a 10 kHz bandwidth and the listener can (by band selection) retrieve the program
of his own choice. The receiver would down-shift only the selected band of frequencies to
the original range of 50 Hz to 10 kHz.
2. A second more technical reason to shift the message signal to a higher frequency is related
to antenna size. It is to be noted that the antenna size is inversely proportional to the


frequency to be radiated. This works out to about 75 meters at 1 MHz, but at 15 kHz it has increased to 5000
meters (just over 16,000 feet); a vertical antenna of this size is impractical (a numerical check follows this list).
3. The third reason for modulating a high frequency carrier is that RF (radio frequency)
energy will travel a greater distance than the same amount of energy transmitted as sound
power.
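As a rough check of the antenna-size argument in point 2, a minimal Python sketch, assuming the quoted lengths refer to a quarter-wave vertical antenna (which the 75 m and 5000 m figures match):

```python
# Quarter-wave vertical antenna length versus frequency (illustrative sketch).
C = 3e8  # speed of light in m/s

def quarter_wave_length(freq_hz: float) -> float:
    """Return the quarter-wavelength in metres for a given frequency."""
    return C / (4.0 * freq_hz)

print(quarter_wave_length(1e6))    # 75.0 m at 1 MHz: a practical vertical antenna
print(quarter_wave_length(15e3))   # 5000.0 m at 15 kHz: impractically tall
```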
 Types of Modulation
The carrier signal is a sine wave at the carrier frequency. The equation below shows that the sine wave
has three characteristics that can be altered.
Instantaneous voltage (E) = Ec(max) sin(2πfct + θ)

The terms that may be varied are the carrier voltage Ec, the carrier frequency fc, and the carrier
phase angle θ. So three forms of modulation are possible.

1. Amplitude Modulation
Amplitude modulation is an increase or decrease of the carrier voltage (Ec), with all other
factors remaining constant.
2. Frequency Modulation
Frequency modulation is a change in the carrier frequency (fc) with all other factors
remaining constant.
3. Phase Modulation
Phase modulation is a change in the carrier phase angle (θ). The phase angle cannot
change without also effecting a change in frequency. Therefore, phase modulation is in
reality a second form of frequency modulation.
 EXPLANATION OF AM:
The method of varying amplitude of a high frequency carrier wave in accordance with the
information to be transmitted, keeping the frequency and phase of the carrier wave unchanged is
called Amplitude Modulation. The information is considered as the modulating signal and it is
superimposed on the carrier wave by applying both of them to the modulator. The detailed
diagram showing the amplitude modulation process is given below.


FIG 1.1 Amplitude Modulation

As shown above, the carrier wave has positive and negative half cycles. Both these cycles are
varied according to the information to be sent. The carrier then consists of sine waves whose
amplitudes follow the amplitude variations of the modulating wave. The carrier is kept in an
envelope formed by the modulating wave. From the figure, you can also see that the amplitude
variation of the high frequency carrier is at the signal frequency and the frequency of the carrier
wave is the same as the frequency of the resulting wave.
 Analysis of Amplitude Modulation Carrier Wave:
Let vc = Vc sin ωct
vm = Vm sin ωmt
vc – Instantaneous value of the carrier
Vc – Peak value of the carrier
ωc – Angular velocity of the carrier
vm – Instantaneous value of the modulating signal
Vm – Maximum value of the modulating signal
ωm – Angular velocity of the modulating signal
fm – Modulating signal frequency
It must be noted that the phase angle remains constant in this process. Thus it can be ignored.
The amplitude of the carrier wave varies at fm. The amplitude modulated wave is given by the
equation A = Vc + vm = Vc + Vm sin ωmt = Vc [1 + (Vm/Vc) sin ωmt]
= Vc (1 + m sin ωmt)


m – Modulation Index. The ratio of Vm/Vc.


The instantaneous value of the amplitude modulated wave is given by the equation
v = A sin ωct = Vc (1 + m sin ωmt) sin ωct
= Vc sin ωct + mVc (sin ωmt sin ωct)
v = Vc sin ωct + [mVc/2 cos (ωc − ωm)t − mVc/2 cos (ωc + ωm)t]
The above equation represents the sum of three sinusoids: one with an amplitude of Vc and a
frequency of ωc/2π, the second with an amplitude of mVc/2 and a frequency of (ωc − ωm)/2π, and
the third with an amplitude of mVc/2 and a frequency of (ωc + ωm)/2π.
In practice the angular velocity of the carrier is much greater than the angular velocity of
the modulating signal (ωc >> ωm). Thus, the second and third components lie close to
the carrier frequency. The equation is represented graphically as shown below.
 Frequency Spectrum of AM Wave:
Lower side frequency – (ωc − ωm)/2π
Upper side frequency – (ωc + ωm)/2π
The frequency components present in the AM wave are represented by vertical lines
located along the frequency axis. The height of each vertical line is drawn in
proportion to its amplitude. Since the angular velocity of the carrier is greater than the angular
velocity of the modulating signal, the amplitude of the side band frequencies can never exceed half of
the carrier amplitude.
Thus there will not be any change in the original carrier frequency, but the side band frequencies (ωc −
ωm)/2π and (ωc + ωm)/2π will change. The former is called the lower side band (LSB)
frequency and the latter is known as the upper side band (USB) frequency.
Since the signal frequency ωm/2π is present in the side bands, it is clear that the carrier voltage
component does not transmit any information.
Two side band frequencies are produced when a carrier is amplitude modulated by a single
frequency. That is, an AM wave has a bandwidth from (ωc − ωm)/2π to (ωc + ωm)/2π, that is, 2ωm/2π,
or twice the signal frequency. When a modulating signal has more than one
frequency, two side band frequencies are produced by every frequency. Similarly, for two
frequencies of the modulating signal, 2 LSB and 2 USB frequencies will be produced.
The side bands present above the carrier frequency will be the same as the ones present
below. The side band frequencies present above the carrier frequency are known as the upper
side band and all those below the carrier frequency belong to the lower side band. The USB
frequencies represent the sum of the carrier and the individual modulating frequencies, and the LSB frequencies


represent the difference between the carrier frequency and the individual modulating frequencies. The total
bandwidth is expressed in terms of the highest modulating frequency and is equal to twice this
frequency.
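A minimal Python sketch, assuming a 10 kHz carrier and a 1 kHz tone, that generates the single-tone AM wave v = Vc(1 + m sin ωmt) sin ωct and checks with an FFT that the spectrum contains the carrier at fc and two side frequencies at fc ± fm of amplitude mVc/2:

```python
import numpy as np

fs = 100_000             # sampling rate (Hz)
fc, fm = 10_000, 1_000   # carrier and modulating frequencies (Hz)
Vc, m = 1.0, 0.5         # carrier amplitude and modulation index

t = np.arange(0, 0.1, 1 / fs)
v = Vc * (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# One-sided amplitude spectrum
spectrum = np.abs(np.fft.rfft(v)) * 2 / len(v)
freqs = np.fft.rfftfreq(len(v), 1 / fs)

for f in (fc - fm, fc, fc + fm):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:>6} Hz : amplitude ≈ {spectrum[idx]:.3f}")
# Expected: ≈ m*Vc/2 = 0.25 at fc ± fm and ≈ Vc = 1.0 at fc
```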
 Modulation Index (m):
The ratio between the amplitude change of carrier wave to the amplitude of the normal carrier
wave is called the modulation index. It is represented by the letter 'm'.
It can also be defined as the extent to which the amplitude of the carrier wave is varied by the
modulating signal: m = Vm/Vc.
Percentage modulation, %m = m × 100 = (Vm/Vc) × 100
The percentage modulation lies between 0 and 80%.
Another way of expressing the modulation index is in terms of the maximum and minimum values
of the amplitude of the modulated carrier wave. This is shown in the figure below.

FIG 1.2 Amplitude Modulation Carrier Wave


2 Vm = Vmax − Vmin
Vm = (Vmax − Vmin)/2
Vc = Vmax − Vm
= Vmax − (Vmax − Vmin)/2
= (Vmax + Vmin)/2
Substituting the values of Vm and Vc in the equation m = Vm/Vc, we get
m = (Vmax − Vmin)/(Vmax + Vmin)


As told earlier, the value of 'm' lies between 0 and 0.8. The value of m determines the strength
and the quality of the transmitted signal. In an AM wave, the signal is contained in the variations
of the carrier amplitude. The audio signal transmitted will be weak if the carrier wave is only
modulated to a very small degree. But if the value of m exceeds unity (overmodulation), the transmitter output
produces severe distortion.
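A minimal sketch of estimating the modulation index from measured envelope extremes Vmax and Vmin, using m = (Vmax − Vmin)/(Vmax + Vmin); the helper name is illustrative:

```python
def modulation_index(v_max: float, v_min: float) -> float:
    """Estimate the AM modulation index from the envelope maximum and minimum."""
    return (v_max - v_min) / (v_max + v_min)

# Example: envelope swings between 1.5 V and 0.5 V
m = modulation_index(1.5, 0.5)
print(f"m = {m:.2f}, percentage modulation = {m * 100:.0f}%")  # m = 0.50, 50%
```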
 Power Relations in an AM wave:
A modulated wave has more power than the carrier wave had before modulation. The total
power in an amplitude modulated wave can be written as:
Ptotal = Pcarrier + PLSB + PUSB
Considering a load resistance R (for example, the antenna resistance):
Pcarrier = (Vc/√2)²/R = Vc²/2R
Each side band has a peak value of mVc/2 and an r.m.s. value of mVc/(2√2). Hence the power in the LSB and USB
can be written as
PLSB = PUSB = [mVc/(2√2)]²/R = (m²/4)(Vc²/2R) = (m²/4) Pcarrier
Ptotal = Pcarrier (1 + m²/2)
In some applications, the carrier is simultaneously modulated by several sinusoidal modulating
signals. In such a case, the total modulation index is given as
mt = √(m1² + m2² + m3² + m4² + ...)
If Ic and It are the r.m.s. values of the unmodulated current and the total modulated current, and R is the
resistance through which these currents flow, then
Ptotal/Pcarrier = (It²R)/(Ic²R) = (It/Ic)²
Ptotal/Pcarrier = 1 + m²/2
It/Ic = √(1 + m²/2)
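A quick numerical check of these power relations, assuming a 1 kW carrier; it shows that even at 100% modulation the two sidebands together carry only one third of the total power:

```python
import math

def am_power_split(p_carrier: float, m: float):
    """Return (total power, power per sideband, sideband fraction) for modulation index m."""
    p_sideband = (m ** 2 / 4) * p_carrier      # PLSB = PUSB = (m^2/4) Pcarrier
    p_total = p_carrier * (1 + m ** 2 / 2)     # Ptotal = Pcarrier (1 + m^2/2)
    return p_total, p_sideband, 2 * p_sideband / p_total

p_total, p_sb, frac = am_power_split(p_carrier=1000.0, m=1.0)
print(p_total, p_sb, round(frac, 3))          # 1500.0 W total, 250.0 W per sideband, 0.333
print(round(math.sqrt(1 + 1.0 ** 2 / 2), 3))  # It/Ic ≈ 1.225 for m = 1
```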
 Limitations of Amplitude Modulation:

1. Low Efficiency – Since the useful power that lies in the side bands is quite small, the efficiency
of an AM system is low.
2. Limited Operating Range – The range of operation is small due to low efficiency. Thus,
transmission of signals is difficult.
3. Noise in Reception – As the radio receiver finds it difficult to distinguish between the amplitude
variations that represent noise and those of the signal, heavy noise is prone to occur in its
reception.


4. Poor Audio Quality – To obtain high fidelity reception, all audio frequencies up to 15 kHz
must be reproduced, but the bandwidth is restricted to about 10 kHz to minimise
interference from adjacent broadcasting stations. Therefore the audio quality of AM
broadcast stations is known to be poor.
AM TRANSMITTERS:
Transmitters that transmit AM signals are known as AM transmitters. These transmitters are used
in medium wave (MW) and short wave (SW) frequency bands for AM broadcast. The MW band
has frequencies between 550 kHz and 1650 kHz, and the SW band has frequencies ranging from
3 MHz to 30 MHz. The two types of AM transmitters that are used based on their transmitting
powers are:
 High Level
 Low Level
High level transmitters use high level modulation, and low level transmitters use low level
modulation. The choice between the two modulation schemes depends on the transmitting power
of the AM transmitter. In broadcast transmitters, where the transmitting power may be of the order
of kilowatts, high level modulation is employed. In low power transmitters, where only a few
watts of transmitting power are required, low level modulation is used.
High-Level and Low-Level Transmitters: The figures below show the block diagrams of high-level and
low-level transmitters. The basic difference between the two transmitters is the power
amplification of the carrier and modulating signals.
Figure (a) shows the block diagram of high-level AM transmitter.


Figure (a) is drawn for audio transmission. In high-level transmission, the powers of the carrier
and modulating signals are amplified before applying them to the modulator stage, as shown in
figure (a). In low-level modulation, the powers of the two input signals of the modulator stage are
not amplified. The required transmitting power is obtained from the last stage of the transmitter,
the class C power amplifier.
The various sections of the figure (a) are:
 Carrier oscillator
 Buffer amplifier
 Frequency multiplier
 Power amplifier
 Audio chain
 Modulated class C power amplifier
 Carrier oscillator
The carrier oscillator generates the carrier signal, which lies in the RF range. The frequency of the
carrier is always very high. Because it is very difficult to generate high frequencies with good
frequency stability, the carrier oscillator generates a sub-multiple of the required carrier
frequency. This sub-multiple frequency is multiplied by the frequency multiplier stage to get the
required carrier frequency. Further, a crystal oscillator can be used in this stage to generate a low
frequency carrier with the best frequency stability. The frequency multiplier stage then increases
the frequency of the carrier to the required value.
 Buffer Amplifier
The purpose of the buffer amplifier is twofold. It first matches the output impedance of the carrier
oscillator with the input impedance of the frequency multiplier, the next stage of the carrier
oscillator. It then isolates the carrier oscillator and frequency multiplier.
This is required so that the multiplier does not draw a large current from the carrier oscillator. If
this occurs, the frequency of the carrier oscillator will not remain stable.
 Frequency Multiplier
The sub-multiple frequency of the carrier signal, generated by the carrier oscillator, is now
applied to the frequency multiplier through the buffer amplifier. This stage is also known as
harmonic generator. The frequency multiplier generates higher harmonics of carrier oscillator
frequency. The frequency multiplier is a tuned circuit that can be tuned to the requisite carrier
frequency that is to be transmitted.


 Power Amplifier
The power of the carrier signal is then amplified in the power amplifier stage. This is the
basic requirement of a high-level transmitter. A class C power amplifier gives high power current
pulses of the carrier signal at its output.
 Audio Chain
The audio signal to be transmitted is obtained from the microphone, as shown in figure (a). The
audio driver amplifier amplifies the voltage of this signal. This amplification is necessary to drive
the audio power amplifier. Next, a class A or a class B power amplifier amplifies the power of the
audio signal.
 Modulated Class C Amplifier
This is the output stage of the transmitter. The modulating audio signal and the carrier signal, after
power amplification, are applied to this modulating stage. The modulation takes place at this
stage. The class C amplifier also amplifies the power of the AM signal to the required
transmitting power. This signal is finally passed to the antenna, which radiates the signal into
space.
Figure (b) shows the block diagram of a low-level AM transmitter.

The low-level AM transmitter shown in the figure (b) is similar to a high-level transmitter, except
that the powers of the carrier and audio signals are not amplified. These two signals are directly
applied to the modulated class C power amplifier.
Modulation takes place at this stage, and the power of the modulated signal is amplified to the
required transmitting power level. The transmitting antenna then transmits the signal.
 Coupling of Output Stage and Antenna
The output stage of the modulated class C power amplifier feeds the signal to the transmitting
antenna. To transfer maximum power from the output stage to the antenna it is necessary that the
impedance of the two sections match. For this, a matching network is required. The matching
between the two should be perfect at all transmitting frequencies. As the matching is required at


different frequencies, inductors and capacitors offering different impedance at different


frequencies are used in the matching networks.
The matching network must be constructed using these passive components. This is shown in
figure (c).

The matching network used for coupling the output stage of the transmitter and the antenna is
called a double π-network. This network is shown in figure (c). It consists of two inductors, L1 and
L2, and two capacitors, C1 and C2. The values of these components are chosen such that the input
impedance of the network between terminals 1 and 1', shown in figure (c), is matched with the output
impedance of the output stage of the transmitter. Further, the output impedance of the network is
matched with the impedance of the antenna.
The double π matching network also filters unwanted frequency components appearing at the
output of the last stage of the transmitter. The output of the modulated class C power amplifier
may contain higher harmonics, such as second and third harmonics, that are highly undesirable.
The frequency response of the matching network is set such that these unwanted higher harmonics
are totally suppressed, and only the desired signal is coupled to the antenna.
 Comparison of AM and FM Signals
Both AM and FM systems are used in commercial and non-commercial applications, such as radio
broadcasting and television transmission. Each system has its own merits and demerits. In a
particular application, an AM system can be more suitable than an FM system. Thus the two are
equally important from the application point of view.
 Advantage of FM systems over AM Systems
The advantages of FM over AM systems are:
 The amplitude of an FM wave remains constant. This provides the system designers an
opportunity to remove the noise from the received signal. This is done in FM receivers by


employing an amplitude limiter circuit so that the noise above the limiting amplitude is
suppressed. Thus, the FM system is considered a noise immune system. This is not
possible in AM systems because the baseband signal is carried by the amplitude variations
themselves and the envelope of the AM signal cannot be altered.
 Most of the power in an FM signal is carried by the side bands. For higher values of the
modulation index, mf, the major portion of the total power is contained in the side bands, and
the carrier signal contains less power. In contrast, in an AM system, only one third of the
total power is carried by the side bands and two thirds of the total power is lost in the form
of carrier power.
 In FM systems, the power of the transmitted signal depends on the amplitude of the
unmodulated carrier signal, and hence it is constant. In contrast, in AM systems, the power
depends on the modulation index ma. The maximum allowable power in AM systems is
100 percent when ma is unity. Such a restriction is not applicable in the case of FM systems.
This is because the total power in an FM system is independent of the modulation index,
mf and frequency deviation fd. Therefore, the power usage is optimum in an FM system.
 In an AM system, the only method of reducing noise is to increase the transmitted power
of the signal. This operation increases the cost of the AM system. In an FM system, you
can increase the frequency deviation in the carrier signal to reduce the noise. If the
frequency deviation is high, then the corresponding variation in amplitude of the baseband
signal can be easily retrieved. If the frequency deviation is small, noise can overshadow
this variation and the frequency deviation cannot be translated into its corresponding
amplitude variation. Thus, by increasing the frequency deviation in the FM signal, the noise
effect can be reduced. There is no provision in an AM system to reduce the noise effect by
any method other than increasing its transmitted power.
 In an FM signal, the adjacent FM channels are separated by guard bands. In an FM system
there is no signal transmission through the spectrum space or the guard band. Therefore,
there is hardly any interference of adjacent FM channels. However, in an AM system,
there is no guard band provided between the two adjacent channels. Therefore, there is
always interference between AM radio stations unless the received signal is strong enough to
suppress the signal of the adjacent channel.
 The disadvantages of FM systems over AM systems
 There are an infinite number of side bands in an FM signal and therefore the theoretical
bandwidth of an FM system is infinite. The bandwidth of an FM system is limited by


Carson's rule, but is still much higher, especially in WBFM. In AM systems, the
bandwidth is only twice the modulating frequency, which is much less than that of WBFM.
This makes FM systems costlier than AM systems.
 The equipment of an FM system is more complex than that of an AM system because of the complex
circuitry of FM systems; this is another reason that FM systems are costlier than AM systems.
 The receiving area of an FM system is smaller than that of an AM system; consequently, FM
channels are restricted to metropolitan areas, while AM radio stations can be received
anywhere in the world. An FM system transmits signals through line-of-sight
propagation, in which the distance between the transmitting and receiving antennas should
not be too large. In an AM system, signals of short wave band stations are transmitted through
atmospheric layers that reflect the radio waves over a wider area.
SSB TRANSMISSION:
There are two methods used for SSB transmission:
1. Filter Method
2. Phase Shift Method
A block diagram of the SSB (balanced ring) modulator is also given below.
 Filter Method:
This is the filter method of SSB generation, in which the carrier and the unwanted sideband are suppressed before transmission (Fig 1.3).

FIG 1.3 Filter Method


1. A crystal controlled master oscillator produces a stable carrier frequency fc (say 100 kHz).
2. This carrier frequency is then fed to the balanced modulator through a buffer amplifier,
which isolates these two stages.
3. The audio signal from the modulating amplifier modulates the carrier in the balanced
modulator. The audio frequency range is 300 to 2800 Hz. The carrier is suppressed in this
stage, and only the two side bands (USB & LSB) are passed.


4. A band pass filter (BPF) allows only a single sideband, either USB or LSB, to pass through it,
depending on our requirements.
5. This side band is then heterodyned in the balanced mixer stage with a frequency (say 12 MHz)
produced by a crystal oscillator or synthesizer, depending upon the requirements of the
transmission. In the mixer stage, the frequency of the crystal oscillator or synthesizer is
added to the SSB signal, and the output frequency is thus raised to the value desired for
transmission.
6. This band is then amplified in the driver and power amplifier stages and fed to the aerial
for transmission.
 Phase Shift Method:
The phasing method of SSB generation uses a phase shift technique that causes one of the side
bands to be canceled out. A block diagram of a phasing type SSB generator is shown in Fig 1.4.

FIG 1.4 Phase Shift Method


It uses two balanced modulators instead of one. The balanced modulators effectively eliminate
the carrier. The carrier oscillator is applied directly to the upper balanced modulator along
with the audio modulating signal. Then both the carrier and modulating signal are shifted in
phase by 90° and applied to the second (lower) balanced modulator. The two balanced
modulator outputs are then added together algebraically. The phase shifting action causes one
side band to be canceled out when the two balanced modulator outputs are combined.
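A minimal numerical sketch of the phasing idea, assuming a 500 Hz audio tone and a 6 kHz carrier, with the 90° shifts produced by a Hilbert transform; adding or subtracting the two balanced-modulator products m(t)cos ωct and m̂(t)sin ωct leaves only one sideband:

```python
import numpy as np
from scipy.signal import hilbert

fs, fc, fm = 48_000, 6_000, 500        # sample rate, carrier, audio tone (Hz)
t = np.arange(0, 0.2, 1 / fs)
m = np.cos(2 * np.pi * fm * t)         # modulating signal
m_hat = np.imag(hilbert(m))            # 90-degree shifted (Hilbert transformed) message

# Upper balanced modulator: m(t)*cos(wc t); lower: m_hat(t)*sin(wc t)
lsb = m * np.cos(2 * np.pi * fc * t) + m_hat * np.sin(2 * np.pi * fc * t)
usb = m * np.cos(2 * np.pi * fc * t) - m_hat * np.sin(2 * np.pi * fc * t)

freqs = np.fft.rfftfreq(len(t), 1 / fs)
for name, sig in (("LSB", lsb), ("USB", usb)):
    spec = np.abs(np.fft.rfft(sig))
    print(name, "peak at", freqs[np.argmax(spec)], "Hz")
# LSB peak at 5500.0 Hz, USB peak at 6500.0 Hz: only one sideband survives in each sum
```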


 Block diagram of SSB:

FIG 1.5 Balanced Ring Modulator


 Operation of the Balanced Ring Modulator:
 Ring modulation is a signal-processing function in electronics, an implementation of
amplitude modulation or frequency mixing, performed by multiplying two signals, where
one is typically a sine-wave or another simple waveform. It is referred to as "ring"
modulation because the analog circuit of diodes originally used to implement this
technique took the shape of a ring. This circuit is similar to a bridge rectifier, except that
instead of the diodes facing "left" or "right", they go "clockwise" or "anti-clockwise". A
ring modulator is an effects unit working on this principle.
 The carrier, which is AC, at a given time, makes one pair of diodes conduct, and reverse-
biases the other pair. The conducting pair carries the signal from the left transformer
secondary to the primary of the transformer at the right. If the left carrier terminal is
positive, the top and bottom diodes conduct. If that terminal is negative, then the "side"
diodes conduct, but create a polarity inversion between the transformers. This action is
much like that of a DPDT switch wired for reversing connections.
 Ring modulators frequency mix or heterodyne two waveforms, and output the sum and
difference of the frequencies present in each waveform. This process of ring modulation
produces a signal rich in partials. As well, neither the carrier nor the incoming signal is
prominent in the outputs, and ideally, not at all.
 Two oscillators, whose frequencies were harmonically related and ring modulated against
each other, produce sounds that still adhere to the harmonic partials of the notes, but
contain a very different spectral make-up. When the oscillators' frequencies are not
harmonically related, ring modulation creates inharmonic partials, often producing bell-like or
otherwise metallic sounds.


 If the same signal is sent to both inputs of a ring modulator, the resultant harmonic
spectrum is the original frequency domain doubled (if f1 = f2 = f, then f2 − f1 = 0 and f2 + f1
= 2f). Regarded as multiplication, this operation amounts to squaring. However, some
distortion occurs due to the forward voltage drop of the diodes.
 Some modern ring modulators are implemented using digital signal processing techniques
by simply multiplying the time domain signals, producing a nearly-perfect signal output.
Before digital music synthesizers became common, at least some analog synthesizers (such
as the ARP 2600) used analog multipliers for this purpose; they were closely related to
those used in electronic analog computers. (The "ring modulator" in the ARP 2600 could
multiply control voltages; it could work at DC.)
 Multiplication in the time domain is the same as convolution in the frequency domain, so
the output waveform contains the sum and difference of the input frequencies. Thus, in the
basic case where two sine waves of frequencies f1 and f2 (f1 < f2) are multiplied, two new
sine waves are created, with one at f1 + f2 and the other at f2 - f1. The two new waves are
unlikely to be harmonically related and (in a well designed ring modulator) the original
signals are not present. It is this that gives the ring modulator its unique tones.
 Intermodulation products can be generated by carefully selecting and changing the
frequency of the two input waveforms. If the signals are processed digitally, the frequency-
domain convolution becomes circular convolution. If the signals are wideband, this will
cause aliasing distortion, so it is common to oversample the operation or low-pass filter the
signals prior to ring modulation.
 One application is spectral inversion, typically of speech; a carrier frequency is chosen to
be above the highest speech frequencies (which are low-pass filtered at, say, 3 kHz, for a
carrier of perhaps 3.3 kHz), and the sum frequencies from the modulator are removed by
more low-pass filtering. The remaining difference frequencies have an inverted spectrum -
High frequencies become low, and vice versa.
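A minimal sketch of digital ring modulation, assuming 440 Hz and 1 kHz input tones; multiplying the two time-domain signals leaves only the sum and difference frequencies:

```python
import numpy as np

fs = 48_000
t = np.arange(0, 0.5, 1 / fs)
f1, f2 = 440.0, 1_000.0                 # input signal and carrier tones (Hz)

ring = np.sin(2 * np.pi * f1 * t) * np.sin(2 * np.pi * f2 * t)  # digital ring modulation

spec = np.abs(np.fft.rfft(ring))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peaks = sorted({float(f) for f in freqs[spec > 0.25 * spec.max()]})
print(peaks)                            # [560.0, 1440.0]: f2 - f1 and f2 + f1, no f1 or f2
```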
 Advantages:
 It allows better management of the frequency spectrum. More transmissions can fit into a
given frequency range than would be possible with double side band (DSB) signals.
 All of the transmitted power is message power; none is dissipated as carrier power.
 Disadvantages:
1. The cost of a single side band (SSB) receiver is higher than that of the double side band (DSB)
counterpart by a ratio of about 3:1.

2. The average radio user wants only to flip a power switch and dial a station. Single side band

SSB receivers require several precise frequency control settings to minimize distortion and
may require continual readjustment during the use of the system.


VESTIGIAL SIDE BAND (VSB) MODULATION:


• The following are the drawbacks of SSB signal generation:
1. Generation of an SSB signal is difficult.
2. Selective filtering is to be done to get the original signal back.

3. The phase shifter should be exactly tuned to 90°.

• To overcome these drawbacks, VSB modulation is used. It can be viewed as a compromise

between SSB and DSB-SC. Figure 1.6 (below) shows all three modulation schemes.


 Spectrum of VSB Signals:

FIG 1.6 Spectrum of VSB Signals


 Vestigial sideband (VSB) transmission is a compromise between DSB and SSB
 In VSB modulation, one sideband is passed almost completely whereas only a residual
portion of the other sideband is retained in such a way that the demodulation process can
still reproduce the original signal.
 VSB signals are easier to generate because some roll-off in the filter edges is allowed. This
results in system simplification, and their bandwidth is only slightly greater than that of
SSB signals (about 25% greater).
 The filtering operation can be represented by a filter H(f) that passes some of the lower (or
upper) sideband and most of the upper (or lower) sideband.


 Heterodyning means the translating or shifting in frequency.


 By heterodyning the incoming signal at ωRF with the local oscillator frequency
ωLO, the message is translated to an intermediate frequency
ωIF, which is equal to either the sum or the difference of ωRF and ωLO.
 If ωIF = 0, the bandpass filter becomes a low-pass filter and the original baseband signal
is presented at the output. This is called homodyning.
 Heterodyning: Image Response:
Methods to solve the image response problem in a heterodyne receiver:
1. Careful selection of intermediate frequency ωIF for a given frequency band.
2. Attenuate the image signal before heterodyning.
 Advantages:
 VSB is a form of amplitude modulation intended to save bandwidth over regular AM.
Portions of one of the redundant sidebands are removed to form a vestigial side band
signal.
 The actual information is transmitted in the sidebands, rather than the carrier; both
sidebands carry the same information. Because LSB and USB are essentially mirror
images of each other, one can be discarded or used for a second channel or for diagnostic
purposes.
 Disadvantages:
 VSB transmission is similar to (SSB) transmission, in which one of the sidebands is
completely removed. In VSB transmission, however, the second sideband is not
completely removed, but is filtered to remove all but the desired range of frequencies.
DSB-SC:
Double-sideband suppressed-carrier transmission (DSB-SC) is transmission in which
frequencies produced by amplitude modulation (AM) are symmetrically spaced above and below
the carrier frequency and the carrier level is reduced to the lowest practical level, ideally being
completely suppressed.
 Spectrum:
DSB-SC is basically an amplitude modulation wave without the carrier, therefore reducing power
waste, giving it a 50% efficiency. This is an increase compared to normal AM transmission
(DSB), which has a maximum efficiency of 33.333%, since 2/3 of the power is in the carrier


which carries no intelligence, and each sideband carries the same information. Single Side Band
(SSB) Suppressed Carrier is 100% efficient.

FIG 1.7 Spectrum plot of a DSB-SC signal


 Generation:
DSB-SC is generated by a mixer. This consists of a message signal multiplied by a carrier signal.
The mathematical representation of this process is shown below, where the product-to-sum
trigonometric identity is used.

FIG 1.8 Generation of DSB-SC signal


 Demodulation:
Demodulation is done by multiplying the DSB-SC signal with the carrier signal, just like the
modulation process. The resultant signal is then passed through a low pass filter to produce a
scaled version of the original message signal. For distortionless recovery, the locally generated
carrier must be synchronized in frequency and phase with the transmitted carrier (coherent detection).


The equation above shows that by multiplying the modulated signal by the carrier signal, the
result is a scaled version of the original message signal plus a second term. Since ωc ≫ ωm,
this second term is much higher in frequency than the original message.
Once this signal passes through a low pass filter, the higher frequency component is removed,
leaving just the original message.
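A minimal sketch of this coherent demodulation chain, assuming a 500 Hz message tone, a 10 kHz carrier, and a Butterworth low-pass filter standing in for the ideal filter:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs, fc, fm = 100_000, 10_000, 500          # sample rate, carrier, message tone (Hz)
t = np.arange(0, 0.05, 1 / fs)
message = np.cos(2 * np.pi * fm * t)

dsb_sc = message * np.cos(2 * np.pi * fc * t)    # modulation: message times carrier
mixed = dsb_sc * np.cos(2 * np.pi * fc * t)      # demodulation: multiply by synchronous carrier
b, a = butter(5, 2 * fm / (fs / 2))              # low-pass filter, cutoff at 2*fm
recovered = 2 * filtfilt(b, a, mixed)            # factor 2 undoes the 1/2 scaling

print(np.max(np.abs(recovered - message)))       # small residual error: message recovered
```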
 Distortion and Attenuation:
For demodulation, the demodulation oscillator's frequency and phase must be exactly the
same as the modulation oscillator's; otherwise, distortion and/or attenuation will occur.
To see this effect, take the following conditions:

 Message signal to be transmitted: Vm cos(ωm t)
 Modulation (carrier) signal: cos(ωc t)
 Demodulation signal (with small frequency and phase deviations ∆ω and φ from the modulation
signal): cos[(ωc + ∆ω)t + φ]

The resultant signal (after low-pass filtering) can then be given by
(Vm/2) cos(∆ω t + φ) cos(ωm t)
The cos(∆ω t + φ) term results in distortion and attenuation of the original
message signal. In particular, the frequency error ∆ω contributes to distortion while the phase error φ adds to the
attenuation.
HILBERT TRANSFORM:
The Hilbert transform x̂(t) of a signal x(t) is defined by the equation
x̂(t) = (1/π) ∫ x(s)/(t − s) ds,


where the integral is the Cauchy principal value integral. The reconstruction formula
x(t) = −(1/π) ∫ x̂(s)/(t − s) ds,

defines the inverse Hilbert transform.


 Hilbert transformer:

FIG 1.9 Block diagram of Hilbert Transform Pair


The pair x(t), x̂(t) is called a Hilbert transform pair. A Hilbert transformer is an LTI system whose transfer function is
H(v) = −j · sgn v, because x̂(t) = (1/πt) * x(t), which, by taking the Fourier transform, implies
X̂(v) = −j (sgn v) X(v).
A Hilbert transformer produces a −90 degree phase shift for the positive frequency components of
the input x(t); the amplitude does not change.
 Properties of the Hilbert transform:
A signal x(t) and its Hilbert transform x̂(t) have
1. the same amplitude spectrum
2. the same autocorrelation function
3. x(t) and x̂(t) are orthogonal
4. the Hilbert transform of x̂(t) is −x(t)
 Pre envelope:
The pre envelope of a real signal x(t) is the complex function
x+(t) = x(t) + j x̂(t).
The pre-envelope is useful in treating band pass signals and systems. This is due to the result
X+(v) = 2 X(v) for v > 0, X(0) for v = 0, and 0 for v < 0.

 Complex envelope:
The complex envelope of a band pass signal x(t) is


x̃(t) = x+(t) e^(−j2πfc t).
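A minimal sketch of these definitions using scipy.signal.hilbert, which returns the pre-envelope (analytic signal) x+(t); its imaginary part is the Hilbert transform, and dividing out the carrier (assumed known) gives the complex envelope:

```python
import numpy as np
from scipy.signal import hilbert

fs, fc, fm = 50_000, 5_000, 200
t = np.arange(0, 0.1, 1 / fs)
x = (1 + 0.5 * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)  # band pass (AM) signal

x_plus = hilbert(x)                                   # pre-envelope x+(t) = x(t) + j x_hat(t)
x_hat = np.imag(x_plus)                               # Hilbert transform of x(t)
x_tilde = x_plus * np.exp(-1j * 2 * np.pi * fc * t)   # complex envelope

print(np.round(np.abs(x_tilde[:5]), 3))   # ≈ 1.5 each: |x_tilde| recovers the envelope 1 + 0.5 cos(2π fm t)
```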

SUPERHETERODYNE RECEIVER:

A superheterodyne receiver (often shortened to superhet) uses frequency mixing to convert a


received signal to a fixed intermediate frequency (IF) which can be more conveniently processed than the
original radio carrier frequency.
 Basic Superheterodyne Block Diagram and Functionality:
The basic block diagram of a basic superhet receiver is shown below. This details the most basic
form of the receiver and serves to illustrate the basic blocks and their function.

FIG 1.10 Block Diagram of a Basic Superheterodyne Radio Receiver


The way in which the receiver works can be seen by following the signal as it passes through the
receiver.
 Front end amplifier and tuning block: Signals enter the front end circuitry from the
antenna. This circuit block performs two main functions:
o Tuning: Broadband tuning is applied to the RF stage. The purpose of this is to
reject the signals on the image frequency and accept those on the wanted
frequency. It must also be able to track the local oscillator so that as the receiver is
tuned, so the RF tuning remains on the required frequency. Typically the selectivity
provided at this stage is not high. Its main purpose is to reject signals on the image
frequency which is at a frequency equal to twice that of the IF away from the
wanted frequency. As the tuning within this block provides all the rejection for the
image response, it must be sufficiently sharp to reduce the image to an
acceptable level. However the RF tuning may also help in preventing strong off-
channel signals from entering the receiver and overloading elements of the
receiver, in particular the mixer or possibly even the RF amplifier.
o Amplification: In terms of amplification, the level is carefully chosen so that it
does not overload the mixer when strong signals are present, but enables the signals


to be amplified sufficiently to ensure a good signal to noise ratio is achieved. The


amplifier must also be a low noise design. Any noise introduced in this block will
be amplified later in the receiver.
 Mixer / frequency translator block: The tuned and amplified signal then enters one port
of the mixer. The local oscillator signal enters the other port. The performance of the
mixer is crucial to many elements of the overall receiver performance. It should be as
linear as possible. If not, then spurious signals will be generated and these may appear as
'phantom' received signals.
 Local oscillator: The local oscillator may consist of a variable frequency oscillator that
can be tuned by altering the setting on a variable capacitor. Alternatively it may be a
frequency synthesizer that will enable greater levels of stability and setting accuracy.
 Intermediate frequency amplifier, IF block : Once the signals leave the mixer they
enter the IF stages. These stages contain most of the amplification in the receiver as well
as the filtering that enables signals on one frequency to be separated from those on the
next. Filters may consist simply of LC tuned transformers providing inter-stage coupling,
or they may be much higher performance ceramic or even crystal filters, dependent upon
what is required.
 Detector / demodulator stage: Once the signals have passed through the IF stages of the
superheterodyne receiver, they need to be demodulated. Different demodulators are
required for different types of transmission, and as a result some receivers may have a
variety of demodulators that can be switched in to accommodate the different types of
transmission that are to be encountered. Different demodulators used may include:
o AM diode detector: This is the most basic form of detector and this circuit block
would simply consist of a diode and possibly a small capacitor to remove any
remaining RF. The detector is cheap and its performance is adequate, requiring a
sufficient voltage to overcome the diode forward drop. It is also not particularly
linear, and finally it is subject to the effects of selective fading that can be apparent,
especially on the HF bands.
o Synchronous AM detector: This form of AM detector block is used where
improved performance is needed. It mixes the incoming AM signal with another on
the same frequency as the carrier. This second signal can be developed by passing
the whole signal through a squaring amplifier. The advantages of the synchronous


AM detector are that it provides a far more linear demodulation performance and it
is far less subject to the problems of selective fading.
o SSB product detector: The SSB product detector block consists of a mixer and a
local oscillator, often termed a beat frequency oscillator, BFO or carrier insertion
oscillator, CIO. This form of detector is used for Morse code transmissions where
the BFO is used to create an audible tone in line with the on-off keying of the
transmitted carrier. Without this the carrier without modulation is difficult to
detect. For SSB, the CIO re-inserts the carrier to make the modulation
comprehensible.
o Basic FM detector: As an FM signal carries no amplitude variations a
demodulator block that senses frequency variations is required. It should also be
insensitive to amplitude variations as these could add extra noise. Simple FM
detectors such as the Foster Seeley or ratio detectors can be made from discrete
components although they do require the use of transformers.
o PLL FM detector: A phase locked loop can be used to make a very good FM
demodulator. The incoming FM signal can be fed into the reference input, and the
VCO drive voltage used to provide the detected audio output.
o Quadrature FM detector: This form of FM detector block is widely used within
ICs. It is simple to implement and provides a good linear output.
 Audio amplifier: The output from the demodulator is the recovered audio. This is passed
into the audio stages where they are amplified and presented to the headphones or
loudspeaker.
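A minimal numerical sketch of the frequency plan described above, assuming a 455 kHz IF (a common choice for AM broadcast receivers): mixing the wanted RF signal with the local oscillator produces the fixed IF, and the image frequency that would produce the same IF lies 2 × IF away from the wanted signal:

```python
def superhet_frequencies(f_rf_khz: float, f_if_khz: float, lo_above: bool = True):
    """Return (local oscillator, image frequency) in kHz for a wanted RF signal."""
    f_lo = f_rf_khz + f_if_khz if lo_above else f_rf_khz - f_if_khz
    f_image = f_rf_khz + 2 * f_if_khz if lo_above else f_rf_khz - 2 * f_if_khz
    return f_lo, f_image

# Example: MW broadcast station at 1000 kHz with a 455 kHz IF and the LO above the signal
print(superhet_frequencies(1000, 455))   # (1455, 1910): the image lies 2 * 455 kHz above the wanted signal
```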
COMPARISON OF VARIOUS AM SCHEMES:

Definition:
 VSB-SC: A vestigial sideband (in radio communication) is a sideband that has been only partly cut off or suppressed.
 SSB-SC: Single-sideband modulation (SSB) is a refinement of amplitude modulation that more efficiently uses electrical power and bandwidth.
 DSB-SC: In radio communications, a sideband is a band of frequencies higher than or lower than the carrier frequency, containing power as a result of the modulation process.

Application:
 VSB-SC: TV broadcasting & radio broadcasting.
 SSB-SC: TV broadcasting & shortwave radio broadcasting.
 DSB-SC: TV broadcasting & radio broadcasting, garage door openers, keyless remotes.

Uses:
 VSB-SC: Transmits TV signals.
 SSB-SC: Short wave radio communications.
 DSB-SC: Two-way radio communications.

APPLICATIONS & USES:


 Radio broadcasting
 TV broadcasting
 Garage door openers, keyless remotes
 Transmission of TV signals
 Short wave radio communications
 Two-way radio communication.


CHAPTER 2

ANGLE MODULATION

PREREQUISITES OF ANGLE MODULATION:


Angle modulation is a class of analog modulation. These techniques are based on altering
the angle (or phase) of a sinusoidal carrier wave to transmit data, as opposed to varying
the amplitude, such as in AM transmission.
Angle Modulation is modulation in which the angle of a sine-wave carrier is varied by a
modulating wave. Frequency Modulation (FM) and Phase Modulation (PM) are two types of
angle modulation. In frequency modulation the modulating signal causes the carrier frequency to
vary. These variations are controlled by both the frequency and the amplitude of the modulating
wave. In phase modulation the phase of the carrier is controlled by the modulating waveform.
The two main types of angle modulation are:
 Frequency modulation (FM), with its digital counterpart frequency-shift keying
(FSK).
 Phase modulation (PM), with its digital counterpart phase-shift keying (PSK).

FREQUENCY & PHASE MODULATION:


Besides using the amplitude of a carrier to carry information, one can also use the angle of a
carrier to carry information. This approach is called angle modulation, and includes frequency
modulation (FM) and phase modulation (PM). The amplitude of the carrier is maintained constant.
The major advantage of this approach is that it allows the trade-off between bandwidth and noise
performance.
An angle modulated signal can be written as
s(t) = A cos θ(t)
where θ(t) is usually of the form θ(t) = 2πfct + φ(t) and fc is the carrier frequency. The signal
φ(t) is derived from the message signal m(t). If φ(t) = kp m(t) for some constant kp, the
resulting modulation is called phase modulation. The parameter kp is called the phase

sensitivity. In telecommunications and signal processing, frequency modulation (FM) is the


encoding of information in a carrier wave by varying the instantaneous frequency of the wave.
(Compare with amplitude modulation, in which the amplitude of the carrier wave varies, while the
frequency remains constant.) Frequency modulation is known as phase modulation when the
carrier phase modulation is the time integral of the FM signal.

If the information to be transmitted (i.e., the baseband signal) is xm(t) and the sinusoidal
carrier is xc(t) = Ac cos(2πfct), where fc is the carrier's base frequency and Ac is the carrier's
amplitude, the modulator combines the carrier with the baseband data signal to get the transmitted
signal:
y(t) = Ac cos( 2πfct + 2πfΔ ∫0..t xm(τ) dτ )
In this equation, fc + fΔ xm(t) is the instantaneous frequency of the oscillator and fΔ is the frequency
deviation, which represents the maximum shift away from fc in one direction, assuming xm(t) is
limited to the range ±1.
While most of the energy of the signal is contained within fc ± fΔ, it can be shown by Fourier

analysis that a wider range of frequencies is required to precisely represent an FM signal.

The frequency spectrum of an actual FM signal has components extending infinitely, although
their amplitude decreases and higher-order components are often neglected in practical design
problems.
Sinusoidal baseband signal:
Mathematically, a baseband modulating signal may be approximated by a sinusoidal continuous
wave signal with a frequency fm, i.e. xm(t) = Am cos(2πfmt).
The integral of such a signal is:
∫0..t xm(τ) dτ = Am sin(2πfmt) / (2πfm)
In this case, the expression for y(t) above simplifies to:
y(t) = Ac cos( 2πfct + (∆f/fm) sin(2πfmt) )


where the amplitude of the modulating sinusoid is represented by the peak deviation ∆f = fΔ Am.
The harmonic distribution of a sine wave carrier modulated by such a sinusoidal signal can
be represented with Bessel functions; this provides the basis for a mathematical understanding
of frequency modulation in the frequency domain.
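A minimal sketch, assuming a 100 kHz carrier, a 1 kHz tone and 5 kHz peak deviation, that builds a single-tone FM signal by numerically integrating the baseband signal as in the expression for y(t) above, and verifies that the instantaneous frequency swings over fc ± fΔ:

```python
import numpy as np

fs = 1_000_000                            # sample rate (Hz)
fc, fm, f_delta = 100_000, 1_000, 5_000   # carrier, modulating tone, peak deviation (Hz)
t = np.arange(0, 0.01, 1 / fs)

xm = np.cos(2 * np.pi * fm * t)                                        # baseband tone, |xm| <= 1
phase = 2 * np.pi * fc * t + 2 * np.pi * f_delta * np.cumsum(xm) / fs  # running integral of xm
y = np.cos(phase)                                                      # FM signal

inst_freq = np.diff(phase) / (2 * np.pi) * fs                          # instantaneous frequency
print(f"{inst_freq.min():.0f} {inst_freq.max():.0f}")                  # ≈ 95000 and 105000 Hz
```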
 Modulation index:
As in other modulation systems, the value of the modulation index indicates by how much the
modulated variable varies around its unmodulated level. For FM, it relates to the variations in the carrier
frequency:
h = ∆f / fm
where fm is the highest frequency component present in the modulating signal xm(t), and ∆f is
the peak frequency deviation, i.e. the maximum deviation of the instantaneous frequency from
the carrier frequency. For a sine wave modulation, the modulation index is seen to be the ratio of
the peak frequency deviation to the frequency of the modulating sine wave.

If h << 1, the modulation is called narrowband FM, and its bandwidth is approximately 2fm.


For digital modulation systems, for example Binary Frequency Shift Keying (BFSK), where a
binary signal modulates the carrier, the modulation index is given by:
h = ∆f / fm = ∆f / (1/(2Ts)) = 2 ∆f Ts
where Ts is the symbol period, and fm = 1/(2Ts) is used as the highest frequency of the
modulating binary waveform by convention, even though it would be more accurate to say it is the
highest fundamental of the modulating binary waveform. In the case of digital modulation, the
carrier fc is never transmitted. Rather, one of two frequencies is transmitted, either
fc + ∆f or fc − ∆f, depending on the binary state 0 or 1 of the modulation signal.
If h >> 1, the modulation is called wideband FM and its bandwidth is approximately 2∆f.
While wideband FM uses more bandwidth, it can improve the signal-to-noise ratio
significantly; for example, doubling the value of ∆f, while keeping fm constant, results in
an eight-fold improvement in the signal-to-noise ratio. (Compare this with chirp spread
spectrum, which uses extremely wide frequency deviations to achieve processing gains
comparable to traditional, better-known spread-spectrum modes).


With a tone-modulated FM wave, if the modulation frequency is held constant and the modulation
index is increased, the (non-negligible) bandwidth of the FM signal increases but the spacing
between spectra remains the same; some spectral components decrease in strength as others
increase. If the frequency deviation is held constant and the modulation frequency increased, the
spacing between spectra increases.
Frequency modulation can be classified as narrowband if the change in the carrier frequency is
about the same as the signal frequency, or as wideband if the change in the carrier frequency is
much higher (modulation index >1) than the signal frequency. For example, narrowband FM is
used for two way radio systems such as Family Radio Service, in which the carrier is allowed to
deviate only 2.5 kHz above and below the center frequency with speech signals of no more than
3.5 kHz bandwidth. Wideband FM is used for FM broadcasting, in which music and speech are
transmitted with up to 75 kHz deviation from the center frequency and carry audio with up to a 20-
kHz bandwidth.
Carson's rule:
BT = 2(∆f + fm).
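A quick check of Carson's rule for the two cases quoted above (narrowband two-way radio FM with 2.5 kHz deviation and about 3.5 kHz speech bandwidth, and broadcast WBFM with 75 kHz deviation and up to 20 kHz audio):

```python
def carson_bandwidth(f_dev_hz: float, f_mod_hz: float) -> float:
    """Carson's rule: approximate FM transmission bandwidth BT = 2(Δf + fm)."""
    return 2 * (f_dev_hz + f_mod_hz)

print(carson_bandwidth(2_500, 3_500))     # 12000  -> about 12 kHz per narrowband channel
print(carson_bandwidth(75_000, 20_000))   # 190000 -> about 190 kHz for broadcast WBFM
```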
PHASE MODULATION:
Phase Modulation (PM) is another form of angle modulation. PM and FM are closely related to
each other. In both the cases, the total phase angle θ of the modulated signal varies. In an FM
wave, the total phase changes due to the change in the frequency of the carrier corresponding to
the changes in the modulating amplitude.
In PM, the total phase of the modulated carrier changes due to the changes in the instantaneous
phase of the carrier keeping the frequency of the carrier signal constant. These two types of
modulation schemes come under the category of angle modulation. However, PM is not as
extensively used as FM.


At time t1, the amplitude of m(t) increases from zero to E1. Therefore, at t1, the phase modulated
carrier also changes corresponding to E1, as shown in Figure (a). This phase remains at the
attained value until time t2, as between t1 and t2, the amplitude of m(t) remains constant at E1. At
t2, the amplitude of m(t) shoots up to E2, and therefore the phase of the carrier again increases
corresponding to the increase in m(t). This new value of the phase attained at time t2 remains
constant up to time t3. At time t3, m(t) goes negative and its amplitude becomes E3.
Consequently, the phase of the carrier also changes and it decreases from the previous value
attained at t2. The decrease in phase corresponds to the decrease in amplitude of m(t). The phase
of the carrier remains constant during the time interval between t3 and t4. At t4, m(t) goes positive
to reach the amplitude E1, resulting in a corresponding increase in the phase of the modulated carrier at
time t4. Between t4 and t5, the phase remains constant. At t5 it decreases to the phase of the
unmodulated carrier, as the amplitude of m(t) is zero beyond t5.
 Equation of a PM Wave:
To derive the equation of a PM wave, it is convenient to consider the modulating signal as a pure
sinusoidal wave. The carrier signal is always a high frequency sinusoidal wave. Consider the
modulating signal, em and the carrier signal ec, as given by, equation 1 and 2, respectively.
em = Em cos ωm t------------ (1)
ec = Ec sin ωc t -------------- (2)


The initial phases of the modulating signal and the carrier signal are ignored in Equations (1) and
(2) because they do not contribute to the modulation process due to their constant values. After
PM, the phase of the carrier will not remain constant. It will vary according to the modulating
signal em maintaining the amplitude and frequency as constants. Suppose, after PM, the equation
of the carrier is represented as:
e = Ec Sin θ ------------ (3)
Where θ is the instantaneous phase of the modulated carrier, which varies sinusoidally in
proportion to the modulating signal. Therefore, after PM, the instantaneous phase of the
modulated carrier can be written as:
θ = ωc t + Kp em ------------------ (4)
Where Kp is the constant of proportionality for phase modulation.
Substituting Equation (1) in Equation (4), you get:
θ = ωc t + Kp Em cos ωm t -------------------- (5)
In Equation (5), the factor Kp Em is defined as the modulation index, and is given as:
mp = Kp Em ------------------------ (6)
where the subscript p signifies that mp is the modulation index of the PM wave. Therefore,
Equation (5) becomes
θ = ωc t + mp cos ωm t -------------------- (7)
Substituting Equation (7) in Equation (3), you get:
e = Ec sin (ωc t + mp cos ωm t) ------------------- (8)
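Equation (8) is easy to evaluate numerically. The following minimal Python sketch (the tone frequencies and modulation index are arbitrary illustrative choices, not values fixed by the text) generates a single-tone PM wave directly from the equation:

import numpy as np

Ec, fc, fm, mp = 1.0, 1000.0, 50.0, 0.8        # illustrative values only
fs = 20000.0                                   # sampling rate well above fc
t = np.arange(0, 0.1, 1/fs)

# e(t) = Ec*sin(wc*t + mp*cos(wm*t))  -- Equation (8)
e = Ec * np.sin(2*np.pi*fc*t + mp*np.cos(2*np.pi*fm*t))

# For PM of a single tone, the peak frequency deviation is mp*fm
print("peak deviation (Hz):", mp * fm)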
NARROW BAND FM MODULATION:
The case where |θm(t)| ≪ 1 for all t is called narrowband FM. Using the approximations
cos x ≃ 1 and sin x ≃ x for |x| ≪ 1, the FM signal can be approximated as:
s(t) = Ac cos[ωc t + θm(t)]
     = Ac cos ωc t cos θm(t) − Ac sin ωc t sin θm(t)
     ≃ Ac cos ωc t − Ac θm(t) sin ωc t
or in complex notation
s(t) ≃ Ac Re{e^(jωc t) (1 + jθm(t))}
This is similar to the AM signal except that the discrete carrier component Ac cos ωc t is 90° out
of phase with the sinusoid Ac sin ωc t multiplying the phase angle θm(t). The spectrum of
narrow band FM is similar to that of AM.
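A quick numerical sanity check of this approximation is sketched below (illustrative tone values, with a small modulation index so that |θm(t)| ≪ 1 holds):

import numpy as np

fc, fm, beta = 10e3, 100.0, 0.1                  # beta << 1 -> narrowband FM
fs = 200e3
t = np.arange(0, 0.05, 1/fs)
theta_m = beta * np.sin(2*np.pi*fm*t)            # phase due to a single modulating tone

exact  = np.cos(2*np.pi*fc*t + theta_m)                          # true FM wave (Ac = 1)
approx = np.cos(2*np.pi*fc*t) - theta_m*np.sin(2*np.pi*fc*t)     # narrowband approximation

print("max approximation error:", np.max(np.abs(exact - approx)))   # small because beta = 0.1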


 The Bandwidth of an FM Signal:
The following formula, known as Carson‘s rule, is often used as an estimate of the FM signal bandwidth:
BT = 2(∆f + fm) Hz
where ∆f is the peak frequency deviation and fm is the maximum baseband message frequency component.
 FM Demodulation by a Frequency Discriminator:
A frequency discriminator is a device that converts a received FM signal into a voltage that is
proportional to the instantaneous frequency of its input, without using a local oscillator and,
consequently, in a non-coherent manner.
• When the instantaneous frequency changes slowly relative to the time-constants of the
filter, a quasi-static analysis can be used.
• In quasi-static operation the filter output has the same instantaneous frequency as the
input but with an envelope that varies according to the amplitude response of the
filter at the instantaneous frequency.
• The amplitude variations are then detected with an envelope detector like the ones
used for AM demodulation.
 An FM Discriminator Using the Pre-Envelope:
When θm(t) is small and band-limited so that cos θm(t) and sin θm(t) are essentially band-limited
signals with cutoff frequencies less than fc, the pre-envelope of the FM signal is
s+(t) = s(t) + jŝ(t) = Ac e^(j(ωc t + θm(t)))
The angle of the pre-envelope is φ(t) = arctan[ŝ(t)/s(t)] = ωc t + θm(t)


The derivative of the phase is
dφ(t)/dt = [s(t) ŝ′(t) − ŝ(t) s′(t)] / [s²(t) + ŝ²(t)] = ωc + dθm(t)/dt
which is exactly the instantaneous frequency. This can be approximated in discrete time by using
FIR filters to form the derivatives and the Hilbert transform. Notice that the denominator is the squared
envelope of the FM signal.
This formula can also be derived by observing that the squared envelope is
s²(t) + ŝ²(t) = Ac²
So
dφ(t)/dt = [s(t) ŝ′(t) − ŝ(t) s′(t)] / Ac²
The bandwidth of an FM discriminator must be at least as great as that of the received FM signal
which is usually much greater than that of the baseband message. This limits the degree of noise
reduction that can be achieved by preceding the discriminator by a bandpass receive filter.
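A minimal discrete-time sketch of this discriminator is given below, using SciPy's Hilbert transform to form the pre-envelope and a first difference of the unwrapped phase as the derivative (the tone parameters are illustrative assumptions):

import numpy as np
from scipy.signal import hilbert

fs, fc, fm, delta_f = 100e3, 10e3, 200.0, 2e3
t = np.arange(0, 0.05, 1/fs)
theta = (delta_f/fm) * np.sin(2*np.pi*fm*t)      # phase of single-tone FM
s = np.cos(2*np.pi*fc*t + theta)                 # received FM signal

s_plus = hilbert(s)                              # pre-envelope s(t) + j*s_hat(t)
phase = np.unwrap(np.angle(s_plus))              # phi(t) = wc*t + theta_m(t)
inst_freq = np.diff(phase) * fs / (2*np.pi)      # instantaneous frequency in Hz

demod = inst_freq[200:-200] - fc                 # recovered message (edges trimmed)
print("peak recovered deviation (Hz):", np.max(np.abs(demod)))   # close to delta_f = 2000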


 Using a Phase-Locked Loop for FM Demodulation:


A device called a phase-locked loop (PLL) can be used to demodulate an FM signal with better
performance in a noisy environment than a frequency discriminator. The block diagram of a
discrete-time version of a PLL as shown in figure,

FIG 2.2 PLL Block diagram


The block diagram of a basic PLL is shown in the figure below. It is basically a feedback loop
consisting of a phase detector, a low pass filter (LPF), and a Voltage Controlled Oscillator (VCO).
The input signal Vi with an input frequency fi is passed through a phase detector. A phase detector
is basically a comparator which compares the input frequency fi with the feedback frequency fo. The
phase detector provides an output error voltage Ver containing components at (fi + fo) and (fi − fo). This
voltage is then passed on to the LPF. The LPF removes the high frequency component and the noise and produces a
steady DC level, Vf, proportional to (fi − fo). Vf also determines the dynamic characteristics of the PLL.
The DC level is then passed on to a VCO. The output frequency of the VCO (fo) is directly
proportional to the input signal. Both the input frequency and output frequency are compared and
adjusted through feedback loops until the output frequency equals the input frequency. Thus the
PLL works in these stages – free-running, capture and phase lock.


As the name suggests, the free running stage refers to the stage when there is no input voltage
applied. As soon as the input frequency is applied, the VCO starts to change and begins producing
an output frequency for comparison; this stage is called the capture stage. The frequency
comparison stops as soon as the output frequency is adjusted to become equal to the input
frequency. This stage is called the phase locked state.
 Comments on PLL Performance:
• The frequency response of the linearized loop has the characteristics of a band-limited differentiator.
• The loop parameters must be chosen to provide a loop bandwidth that passes the desired
baseband message signal but is as small as possible to suppress out-of-band noise.
• The PLL performs better than a frequency discriminator when the FM signal is corrupted by
additive noise. The reason is that the bandwidth of the frequency discriminator must be large
enough to pass the modulated FM signal while the PLL bandwidth only has to be large enough to
pass the baseband message. With wideband FM, the bandwidth of the modulated signal can be
significantly larger than that of the baseband message.
 Bandwidth of FM PLL vs. Costas Loop:
The PLL described in this experiment is very similar to the Costas loop presented in coherent
demodulation of DSBSC-AM. However, the bandwidth of the PLL used for FM demodulation
must be large enough to pass the baseband message signal, while the Costas loop is used to
generate a stable carrier reference signal so its bandwidth should be very small and just wide
enough to track carrier drift and allow a reasonable acquisition time.
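The loop described above can be sketched in discrete time as follows (a deliberately simplified first-order loop for illustration only; the detector and VCO gains are assumptions, not figures from the text):

import numpy as np

fs, fc, fm, delta_f = 100e3, 10e3, 200.0, 2e3
t = np.arange(0, 0.05, 1/fs)
theta = (delta_f/fm) * np.sin(2*np.pi*fm*t)
s = np.cos(2*np.pi*fc*t + theta)                 # incoming FM signal

kd, kv = 1.0, 2*np.pi*8e3                        # detector gain, VCO gain (assumed)
vco_phase = 0.0
v = np.zeros_like(t)                             # control voltage

for n in range(len(t)):
    # multiplier phase detector: input times VCO quadrature output
    pd = -2.0 * s[n] * np.sin(2*np.pi*fc*t[n] + vco_phase)
    v[n] = kd * pd
    vco_phase += (kv * v[n]) / fs                # VCO integrates the control voltage

demod = np.convolve(v, np.ones(64)/64, mode="same")   # crude LPF removes the 2*fc ripple
# When the loop is locked, demod is proportional to d(theta)/dt, i.e. to the message.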
WIDE-BAND FM:
s(t) = Ac cos(2πfc t + φ(t))
Finding its FT is not easy: φ(t) is inside the cosine.

To analyze the spectrum, we use the complex envelope. s(t) can be written as:
s(t) = Re{Ac e^(jφ(t)) e^(j2πfc t)}

Consider single tone FM: s(t) = Ac cos(2πfc t + β sin 2πfm t)


Wideband FM is defined as the situation where the modulation index is above 0.5. Under these
circumstances the sidebands beyond the first two terms are not insignificant. Broadcast FM
stations use wideband FM, and using this mode they are able to take advantage of the wide
bandwidth available to transmit high quality audio as well as other services like a stereo channel,
and possibly other services as well on a single carrier.
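For single-tone FM the carrier and sideband amplitudes are given by Bessel functions Jn(β), as noted at the start of this unit. The short Python sketch below (β chosen arbitrarily) lists the sideband orders that remain significant:

import numpy as np
from scipy.special import jv        # Bessel function of the first kind, J_n

beta = 5.0                          # modulation index (illustrative)
n = np.arange(0, 16)
amps = np.abs(jv(n, beta))          # relative amplitudes of carrier (n=0) and sidebands

print("orders with |Jn(beta)| > 0.01:", n[amps > 0.01])
# The number of significant sideband pairs grows with beta; Carson's rule
# corresponds to keeping roughly the first (beta + 1) pairs.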
The bandwidth of the FM transmission is a means of categorising the basic attributes for the
signal, and as a result these terms are often seen in the technical literature associated with


frequency modulation, and products using FM. This is one area where the figure for modulation
index is used.
 GENERATION OF WIDEBAND FM SIGNALS:
Indirect Method for Wideband FM Generation:
Consider the following block diagram: the message m(t) drives a narrowband FM modulator whose
output gFM(NB)(t) is passed through a non-linear device ( . )^P to produce the wideband signal
gFM(WB)(t). A BPF is assumed to be included in this block to pass the signal with the highest
carrier frequency and reject all others.

FIG 2.3 Block diagram of FM generation


A narrowband FM signal can be generated easily using the block diagram of the narrowband FM
modulator that was described in a previous lecture. The narrowband FM modulator generates a
narrowband FM signal using simple components such as an integrator (an OpAmp), oscillators,
multipliers, and adders. The generated narrowband FM signal can be converted to a wideband FM
signal by simply passing it through a non–linear device with power P. Both the carrier frequency
and the frequency deviation ∆f of the narrowband signal are increased by a factor P. Sometimes,
the desired increase in the carrier frequency and the desired increase in ∆f are different. In this
case, we increase ∆f to the desired value and use a frequency shifter (multiplication by a sinusoid
followed by a BPF) to change the carrier frequency to the desired value.
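The scaling can be checked with simple arithmetic; the sketch below reproduces the numbers used in the two systems that follow (300 kHz carrier and 35 Hz deviation from the narrowband modulator, multiplied up by P = 2200):

# A non-linear device of order P multiplies both fc and delta_f by P.
fc_nb, df_nb, P = 300e3, 35.0, 2200
fc_wb = P * fc_nb                    # 660 MHz -> must then be shifted down to 135 MHz
df_wb = P * df_nb                    # 77 kHz, the desired wideband deviation
print(fc_wb/1e6, "MHz,", df_wb/1e3, "kHz")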
 System 1:
The message (BWm = 5 kHz) drives the narrowband FM modulator, whose output gFM(NB)(t) has
∆f1 = 35 Hz, fc1 = 300 kHz and bandwidth 2 × 5 = 10 kHz. This signal is passed through a non-linear
device of order 2200, giving gFM3(WB)(t) with ∆f3 = 77 kHz, fc3 = 660 MHz and
BW3 = 2(∆f3 + BWm) = 164 kHz. A frequency shifter, formed by multiplication with cos(2π(525 MHz)t)
followed by a BPF with centre frequency 135 MHz and bandwidth 164 kHz, produces the final output
gFM2(WB)(t) with ∆f2 = 77 kHz, carrier frequency 135 MHz and BW2 = 2(∆f2 + BWm) = 164 kHz.

FIG 2.4 Block diagram of FM generation


In this system, we are using a single non–linear device with an order of 2200 or multiple devices
with a combined order of 2200. It is clear that the output of the non–linear device has the correct
∆f but an incorrect carrier frequency, which is corrected using the frequency shifter with an
oscillator that has a frequency equal to the difference between the frequency of its input signal and


the desired carrier frequency. We could also have used an oscillator with a frequency that is the
sum of the frequencies of the input signal and the desired carrier frequency. This system is
characterized by having a frequency shifter with an oscillator frequency that is relatively large.
 System 2:
Here the narrowband FM modulator output gFM(NB)(t) again has ∆f1 = 35 Hz, fc1 = 300 kHz,
BWm = 5 kHz and bandwidth 2 × 5 = 10 kHz. It is first passed through a non-linear device of order 44,
giving gFM3(WB)(t) with ∆f3 = 1540 Hz, fc3 = 13.2 MHz and BW3 = 2(∆f3 + BWm) = 13080 Hz. A
frequency shifter, formed by multiplication with cos(2π(10.5 MHz)t) followed by a BPF with centre
frequency 2.7 MHz and bandwidth 13.08 kHz, produces gFM4(WB)(t) with ∆f4 = 1540 Hz,
fc4 = 135/50 = 2.7 MHz and BW4 = 2(∆f4 + BWm) = 13080 Hz. Finally, a non-linear device of order 50
gives the desired output gFM2(WB)(t) with ∆f2 = 77 kHz, carrier frequency 135 MHz and
BW2 = 2(∆f2 + BWm) = 164 kHz.

FIG 2.5 Block diagram of FM generation


In this system, we are using two non–linear devices (or two sets of non–linear devices) with
orders 44 and 50 (44 × 50 = 2200). There are other possibilities for factorizing 2200, such as
2 × 1100, 4 × 550, 8 × 275, 10 × 220, and so on. Depending on the available components, one of these
factorizations may be better than the others. In fact, in this case, we could have used the same
factorization but put 50 first followed by 44. We want the output signal of the overall system to be
as shown in the block diagram above, so we have to ensure that the input to the non–linear device
with order 50 has the correct carrier frequency such that its output has a carrier frequency of 135
MHz. This is done by dividing the desired output carrier frequency by the non–linearity order of
50, which gives 2.7 MHz. This allows us to figure out the frequency of the required oscillator, which
will in this case be either 13.2 − 2.7 = 10.5 MHz or 13.2 + 2.7 = 15.9 MHz. We are generally free
to choose whichever we like unless the available components dictate the use of one of them and
not the other. Comparing this system with System 1 shows that the frequency of the oscillator that
is required here is significantly lower (10.5 MHz compared to 525 MHz), which is generally an
advantage.


TRANSMISSION BANDWIDTH:

FIG 2.6 Spectrum of FM Bandwidth


FM TRANSMITTER
 Indirect method (phase shift) of modulation
The part of the Armstrong FM transmitter (Armstrong phase modulator) which is shown in
dotted lines describes the principle of operation of an Armstrong phase modulator. It should be
noted, first, that the output signal from the carrier oscillator is supplied to circuits that perform the
task of modulating the carrier signal. The oscillator does not change frequency, as is the case in
direct FM. This points out the major advantage of phase modulation (PM), or indirect FM, over
direct FM: the phase modulator is crystal controlled for frequency.

FIG 2.7 Armstrong Modulator


The crystal-controlled carrier oscillator signal is directed to two circuits in parallel. This signal
(usually a sine wave) is established as the reference phase carrier signal and is assigned a value of
0°. The balanced modulator is an amplitude modulator used to form an envelope of double side-
bands and to suppress the carrier signal (DSBSC). This requires two input signals, the carrier signal
and the modulating message signal. The output of the modulator is connected to the adder circuit;
here the 90° phase-delayed carrier signal will be added back to replace the suppressed carrier.
The act of delaying the carrier phase by 90° does not change the carrier frequency or its wave-
shape. This signal is identified as the 90° carrier signal.

FIG 2.8 Phasor diagram of Armstrong Modulator

The carrier frequency change at the adder output is a function of the output phase shift and is
found from ∆fc = ∆θ fs (in hertz),
where ∆θ is the phase change in radians and fs is the lowest audio modulating frequency. In most
FM radio bands, the lowest audio frequency is 50 Hz. Therefore, the carrier frequency change at
the adder output is 0.6125 × 50 Hz ≈ ±30 Hz. Since 10% AM represents the upper limit of carrier
voltage change, ±30 Hz is the maximum deviation available from the modulator for PM.
The 90° phase shift network does not change the signal frequency because the components and
resulting phase change are constant with time. However, the phase of the adder output voltage is
in a continual state of change brought about by the cyclical variations of the message signal, and
during the time of a phase change, there will also be a frequency change.


In figure (c), during time (a), the signal has a frequency f1 and is at the zero reference phase.
During time (c), the signal has a frequency f1 but has changed phase to θ. During time (b), when
the phase is in the process of changing from 0 to θ, the frequency is less than f1.

 Using Reactance modulator direct method

FIG 2.9 Reactance Modulator


The FM transmitter has three basic sections.
1. The exciter section contains the carrier oscillator, reactance modulator and the buffer
amplifier.
2. The frequency multiplier section, which features several frequency multipliers.
3. The power output section, which includes a low-level power amplifier, the final power
amplifier, and the impedance matching network to properly load the power section with the
antenna impedance.
The essential function of each circuit in the FM transmitter may be described as follows.


 The Exciter
1. The function of the carrier oscillator is to generate a stable sine wave signal at the
rest frequency, when no modulation is applied. It must be able to linearly change
frequency when fully modulated, with no measurable change in amplitude.
2. The buffer amplifier acts as a constant high-impedance load on the oscillator to
help stabilize the oscillator frequency. The buffer amplifier may have a small gain.
3. The modulator acts to change the carrier oscillator frequency by application of the
message signal. The positive peak of the message signal generally lowers the
oscillator's frequency to a point below the rest frequency, and the negative message
peak raises the oscillator frequency to a value above the rest frequency. The greater
the peak-to-peak message signal, the larger the oscillator deviation.
 Frequency multipliers are tuned-input, tuned-output RF amplifiers in which the output
resonant circuit is tuned to a multiple of the input frequency. Common frequency
multipliers are 2x, 3x and 4x multiplication. A 5x frequency multiplier is sometimes
seen, but its extremely low efficiency forbids widespread usage. Note that multiplication is
by whole numbers only. There cannot be a 1.5x multiplier, for instance.
 The final power section develops the carrier power to be transmitted and often has a
low-power amplifier driving the final power amplifier. The impedance matching network
is the same as for the AM transmitter and matches the antenna impedance to the correct
load on the final power amplifier.
 Frequency Multiplier
A special form of class C amplifier is the frequency multiplier. Any class C amplifier is capable
of performing frequency multiplication if the tuned circuit in the collector resonates at some
integer multiple of the input frequency.
For example, a frequency doubler can be constructed by simply connecting a parallel tuned circuit
in the collector of a class C amplifier that resonates at twice the input frequency. When the
collector current pulse occurs, it excites or rings the tuned circuit at twice the input frequency. A
current pulse flows for every other cycle of the input.
A tripler circuit is constructed in the same way except that the tuned circuit resonates at 3 times
the input frequency. In this way, the tuned circuit receives one input pulse for every three cycles
of oscillation it produces. Multipliers can be constructed to increase the input


frequency by any integer factor up to approximately 10. As the multiplication factor gets higher,
the power output of the multiplier decreases. For most practical applications, the best result is
obtained with multipliers of 2 and 3.
Another way to look at the operation of class C multipliers is to remember that the non-sinusoidal
current pulse is rich in harmonics. Each time the pulse occurs, the second, third, fourth, fifth, and
higher harmonics are generated. The purpose of the tuned circuit in the collector is to act as a filter
to select the desired harmonic.

FIG 2.10 Block Diagram of Frequency Multiplier - 1

FIG 2.10 Block Diagram of Frequency Multiplier - 2


In many applications a multiplication factor greater than that achievable with a single multiplier
stage is required. In such cases two or more multipliers are cascaded to produce an overall
multiplication of 6. In the second example, three multipliers provide an overall multiplication of
30. The total multiplication factor is simply the product of individual stage multiplication factors.
 Reactance Modulator
The reactance modulator takes its name from the fact that the impedance of the circuit acts as a
reactance (capacitive or inductive) that is connected in parallel with the resonant circuit of the
oscillator. The varicap can only appear as a capacitance that becomes part of the frequency
determining branch of the oscillator circuit. However, other discrete devices can appear as a
capacitor or as an inductor to the oscillator, depending on how the circuit is arranged. A Colpitts


oscillator uses a capacitive voltage divider as the phase-reversing feedback path and would most
likely use a modulator that appears capacitive, while a Hartley-type circuit uses a tapped coil as the
phase-reversing element in the feedback loop and most commonly uses a modulator that appears inductive.
COMPARISION OF VARIOUS MODULATIONS:
 Comparisons of Various Modulations:
Amplitude modulation:
1. Amplitude of the carrier wave is varied in accordance with the message signal.
2. Much affected by noise.
3. System fidelity is poor.
4. Linear modulation.

Frequency modulation:
1. Frequency of the carrier wave is varied in accordance with the message signal.
2. More immune to noise.
3. Improved system fidelity.
4. Non-linear modulation.

Phase modulation:
1. Phase of the carrier wave is varied in accordance with the message signal.
2. Noise voltage is constant.
3. Improved system fidelity.
4. Non-linear modulation.

 Comparisons of Narrowband and Wideband FM:

Narrowband FM:
1. Modulation index < 1.
2. Bandwidth B = 2 fm Hz.
3. Occupies less bandwidth.
4. Used in FM mobile communication services.

Wideband FM:
1. Modulation index > 1.
2. Bandwidth B = 2(∆f + fm) ≈ 2 ∆f Hz.
3. Occupies more bandwidth.
4. Used in entertainment broadcasting.

APPLICATION & ITS USES:


 Magnetic tape storage.
 Sound.
 FM noise reduction.
 Frequency Modulation (FM) stereo decoders, FM demodulation networks for FM operation.
 Frequency synthesis that provides a multiple of a reference signal frequency.
 Used in motor speed controls, tracking filters.


UNIT – III RANDOM PROCESS

PREREQUISTING ABOUT RANDOM PROCESS:

In probability theory, a stochastic process, or sometimes random process is a collection of


random variables, representing the evolution of some system of random values over time. This is
the probabilistic counterpart to a deterministic process. A random process, or stochastic process,
X(t), is an ensemble of sample functions {X1(t), X2(t), . . . , Xn(t)} together with a
probability rule which assigns a probability to any meaningful event associated with the
observation of these functions. Suppose the sample function Xi(t) corresponds to the sample point
si in the sample space S and occurs with probability Pi.
• The number of sample functions n may be finite or infinite.
• Sample functions may be defined at discrete or continuous time instants.
Random process associated with the Poisson model, and more generally, renewal theory include
 The sequence of inter arrival times.
 The sequence of arrival times.
 The counting process.

RANDOM VARIABLES:
A random variable, usually written X, is a variable whose possible values are numerical outcomes
of a random phenomenon. Random variables are of two types, discrete and continuous. When the
time parameter of a random process takes discrete or continuous values, the process is a discrete-time
or continuous-time (discrete- or continuous-parameter) random process; when the sample-function
values themselves take on discrete or continuous values, the process is a discrete-valued or
continuous-valued random process.


 RANDOM PROCESSES VS. RANDOM VARIABLES:


• For a random variable, the outcome of a random experiment is mapped onto a variable, e.g., a
number.
• For a random process, the outcome of a random experiment is mapped onto a waveform that is
a function of time. Suppose that we observe a random process X(t) at some time t1 to generate the
observation X(t1) and that the number of possible waveforms is finite. If Xi(t1) is observed with
probability Pi, the collection of numbers {Xi(t1)}, i = 1, 2, . . . , n forms a random variable,
denoted by X(t1), having the probability distribution Pi, i = 1, 2, . . . , n. E[·] denotes the ensemble
average operator.
 DISCRETE RANDOM VARIABLES:
A discrete random variable is one which may take on only a countable number of distinct values
such as 0,1,2,3,4,........ Discrete random variables are usually (but not necessarily) counts. If a
random variable can take only a finite number of distinct values, then it must be discrete.
Examples of discrete random variables include the number of children in a family, the Friday
night attendance at a cinema, the number of patients in a doctor's surgery, the number of defective
light bulbs in a box of ten.
 PROBABILITY DISTRIBUTION:
The probability distribution of a discrete random variable is a list of probabilities associated with
each of its possible values. It is also sometimes called the probability function or the probability
mass function. Suppose a random variable X may take k different values, with the probability
that X = xi defined to be P(X = xi) = pi. The probabilities pi must satisfy the following:
1: 0 < pi < 1 for each i
2: p1 + p2 + ... + pk = 1.
All random variables (discrete and continuous) have a cumulative distribution function. It is a
function giving the probability that the random variable X is less than or equal to x, for every
value x. For a discrete random variable, the cumulative distribution function is found by summing
up the probabilities.
CENTRAL LIMIT THEOREM:
In probability theory, the central limit theorem (CLT) states that, given certain conditions, the
arithmetic mean of a sufficiently large number of iterates of independent random variables, each


with a well-defined expected value and well-defined variance, will be approximately normally
distributed.
The Central Limit Theorem describes the characteristics of the "population of the means" which
has been created from the means of an infinite number of random population samples of size (N),
all of them drawn from a given "parent population". The Central Limit Theorem predicts
that regardless of the distribution of the parent population:
[1] The mean of the population of means is always equal to the mean of the parent population
from which the population samples were drawn.
[2] The standard deviation of the population of means is always equal to the standard deviation of
the parent population divided by the square root of the sample size (N).
[3] The distribution of means will increasingly approximate a normal distribution as the size N of
samples increases.
A consequence of Central Limit Theorem is that if we average measurements of a particular
quantity, the distribution of our average tends toward a normal one. In addition, if a measured
variable is actually a combination of several other uncorrelated variables, all of them
"contaminated" with a random error of any distribution, our measurements tend to be
contaminated with a random error that is normally distributed as the number of these variables
increases. Thus, the Central Limit Theorem explains the ubiquity of the famous bell-shaped
"Normal distribution" (or "Gaussian distribution") in the measurements domain.
Examples:
 Uniform distribution
 Triangular distribution
 1/X distribution
 Parabolic distribution
 CLT Summary
 more statistical fine-print
The uniform distribution on the left is obviously non-Normal. Call that the parent distribution.


FIG 3.1 Non uniform distributions


To compute an average, Xbar, two samples are drawn, at random, from the parent distribution and
averaged. Then another sample of two is drawn and another value of Xbar computed. This
process is repeated, over and over, and averages of two are computed. The distribution of
averages of two is shown on the left.

FIG 3.2 Distributions of Xbar


Repeatedly taking three from the parent distribution, and computing the averages, produce
the probability density on the left.

FIG 3.3 Distributions of Xbar
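The behaviour illustrated in these figures can be reproduced with a few lines of simulation (a minimal sketch; the sample sizes are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
parent = rng.uniform(0.0, 1.0, size=(100000, 30))   # uniform (non-normal) parent population

for n in (1, 2, 3, 30):                              # average n samples at a time
    xbar = parent[:, :n].mean(axis=1)
    # CLT: mean of means = parent mean; std of means = parent std / sqrt(n)
    print(n, xbar.mean(), xbar.std(), np.sqrt(1/12)/np.sqrt(n))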


STATIONARY PROCESS:
In mathematics and statistics, a stationary process is a stochastic process whose joint probability
distribution does not change when shifted in time. Consequently, parameters such as


the mean and variance, if they are present, also do not change over time and do not follow any
trends.
Stationarity is used as a tool in time series analysis, where the raw data is often transformed to
become stationary; for example, economic data are often seasonal and/or dependent on a non-
stationary price level. An important type of non-stationary process that does not include a trend-
like behaviour is the cyclostationary process.
Note that a "stationary process" is not the same thing as a "process with a stationary
distribution". Indeed there are further possibilities for confusion with the use of "stationary" in the
context of stochastic processes; for example a "time-homogeneous" Markov chain is sometimes
said to have "stationary transition probabilities". Besides, all stationary Markov random processes
are time-homogeneous.
 Definition:

Formally, let {X(t)} be a stochastic process and let F_X(x(t1+τ), …, x(tk+τ)) represent
the cumulative distribution function of the joint distribution of {X(t)} at
times t1+τ, …, tk+τ. Then {X(t)} is said to be stationary if, for all k, for all τ, and for
all t1, …, tk,

F_X(x(t1+τ), …, x(tk+τ)) = F_X(x(t1), …, x(tk)).

Since τ does not affect F_X(·), F_X is not a function of time.


 Wide Sense Stationary:
A weaker form of stationarity commonly employed in signal processing is known as weak-sense
stationarity, wide-sense stationarity (WSS), covariance stationarity, or second-order stationarity. WSS
random processes only require that the 1st moment and the autocovariance do not vary with respect to
time. Any strictly stationary process which has a mean and a covariance is also WSS.
So, a continuous-time random process x(t) which is WSS has the following restrictions on its
mean function
E[x(t)] = m_x (a constant, independent of t)
and autocovariance function
C_x(t1, t2) = C_x(t1 − t2), i.e. it depends only on the time difference τ = t1 − t2.
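These two WSS properties can be checked with a small simulation (illustrative only; the process used here is filtered white noise, an assumed example):

import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(0.0, 1.0, 200000)
x = np.convolve(w, np.ones(5)/5, mode="same")    # a simple WSS process (moving average of white noise)

print("mean estimate:", x.mean())                # approximately the same at every instant (~0)

def autocov(x, tau):                             # depends only on the lag tau
    return np.var(x) if tau == 0 else np.mean((x[:-tau] - x.mean())*(x[tau:] - x.mean()))

for tau in (0, 1, 2, 5):
    print("C_x(", tau, ") =", round(autocov(x, tau), 3))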


CORRELATION:
In statistics, dependence is any statistical relationship between two random variables or two sets
of data. Correlation refers to any of a broad class of statistical relationships involving dependence.
Familiar examples of dependent phenomena include the correlation between the
physical statures of parents and their offspring, and the correlation between the demand for a


product and its price. Correlations are useful because they can indicate a predictive relationship
that can be exploited in practice. For example, an electrical utility may produce less power on a
mild day based on the correlation between electricity demand and weather. In this example there
is a causal relationship, because extreme weather causes people to use more electricity for heating
or cooling; however, statistical dependence is not sufficient to demonstrate the presence of such a
causal relationship.
Formally, dependence refers to any situation in which random variables do not satisfy a
mathematical condition of probabilistic independence. In loose usage, correlation can refer to any
departure of two or more random variables from independence, but technically it refers to any of
several more specialized types of relationship between mean values. There are several correlation
coefficients, often denoted ρ or r, measuring the degree of correlation. The most common of these
is the Pearson correlation coefficient, which is sensitive only to a linear relationship between two

variables. Other correlation coefficients have been developed to be more robust than the Pearson
correlation that is, more sensitive to nonlinear relationships. Mutual information c an also be
applied to measure dependence between two variables.
 Pearson's correlation coefficient:
The most familiar measure of dependence between two quantities is the Pearson product-moment
correlation coefficient, or "Pearson's correlation coefficient", commonly called simply "the
correlation coefficient". It is obtained by dividing the covariance of the two variables by the
product of their standard deviations. Karl Pearson developed the coefficient from a similar but
slightly different idea by Francis Galton.
The population correlation coefficient ρX,Y between two random variables X and Y with expected
values µX and µY and standard deviations σX and σY is given by
ρX,Y = corr(X, Y) = cov(X, Y) / (σX σY) = E[(X − µX)(Y − µY)] / (σX σY)
where E is the expected value operator, cov means covariance, and, corr a widely used alternative
notation for the correlation coefficient.
The Pearson correlation is defined only if both of the standard deviations are finite and nonzero. It
is a corollary of the Cauchy–Schwarz inequality that the correlation cannot exceed 1 in absolute
value. The correlation coefficient is symmetric: corr(X,Y) = corr(Y,X).


The Pearson correlation is +1 in the case of a perfect direct (increasing) linear relationship
(correlation), −1 in the case of a perfect decreasing (inverse) linear relationship (anticorrelation),
and some value between −1 and 1 in all other cases, indicating the degree of linear
dependence between the variables. As it approaches zero there is less of a relationship (closer to
uncorrelated). The closer the coefficient is to either −1 or 1, the stronger the correlation between
the variables.


If the variables are independent, Pearson's correlation coefficient is 0, but the converse is not true
because the correlation coefficient detects only linear dependencies between two variables. For
example, suppose the random variable X is symmetrically distributed about zero, and Y = X 2.
Then Y is completely determined by X, so that X and Y are perfectly dependent, but their
correlation is zero; they are uncorrelated. However, in the special case when X and Y are jointly
normal, uncorrelatedness is equivalent to independence.
If we have a series of n measurements of X and Y written as xi and yi where i = 1, 2, ..., n, then
the sample correlation coefficient can be used to estimate the population Pearson
correlation r between X and Y:
r = Σ (xi − x̄)(yi − ȳ) / ((n − 1) sx sy)
where x̄ and ȳ are the sample means of X and Y, and sx and sy are the sample standard
deviations of X and Y.
This can also be written as:
r = Σ (xi − x̄)(yi − ȳ) / √( Σ (xi − x̄)² Σ (yi − ȳ)² )
If x and y are results of measurements that contain measurement error, the realistic limits on the
correlation coefficient are not −1 to +1 but a smaller range.
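The sample formula is easy to verify numerically; a minimal sketch (with arbitrary test data) is shown below:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])          # roughly linear in x

xm, ym = x.mean(), y.mean()
r = np.sum((x - xm)*(y - ym)) / np.sqrt(np.sum((x - xm)**2) * np.sum((y - ym)**2))

print("sample r =", r)
print("numpy corrcoef =", np.corrcoef(x, y)[0, 1])   # should agree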


COVARIANCE FUNCTIONS:
In probability theory and statistics, covariance is a measure of how much two variables change
together, and the covariance function, or kernel, describes the spatial covariance of a random
variable process or field. For a random field or stochastic process Z(x) on a domain D, a covariance
function C(x, y) gives the covariance of the values of the random field at the two locations x and y:
C(x, y) = Cov(Z(x), Z(y)) = E[(Z(x) − E[Z(x)])(Z(y) − E[Z(y)])].
The same C(x, y) is called the auto covariance function in two instances: in time series (to denote
exactly the same concept except that x and y refer to locations in time rather than in space), and in
multivariate random fields (to refer to the covariance of a variable with itself, as opposed to the cross
covariance between two different variables at different locations, Cov(Z(x1), Y(x2))).


 Mean & Variance of covariance functions:
For locations x1, x2, …, xN ∈ D the variance of every linear combination
X = Σ (i = 1..N) wi Z(xi)
can be computed as
Var(X) = Σ (i = 1..N) Σ (j = 1..N) wi wj C(xi, xj).
A function is a valid covariance function if and only if this variance is non-negative for all possible
choices of N and weights w1, …, wN. A function with this property is called positive definite.
ERGODIC PROCESS:
In the event that the distributions and statistics are not available we can avail ourselves of the
time averages from a particular sample function. The mean of the sample function Xλo(t),
observed over the interval (−T, T), is referred to as the sample mean of the process X(t) and is defined as
⟨X⟩_T = (1/2T) ∫ from −T to T of Xλo(t) dt.
This quantity is actually a random variable by itself because its value depends on the sample
function over which it was calculated. The sample variance of the random process is defined in the same
way as a time average of [Xλo(t) − ⟨X⟩_T]², and the time-averaged sample ACF is obtained via the relation
R_X(τ)_T = (1/2T) ∫ from −T to T of Xλo(t) Xλo(t + τ) dt.
These quantities are in general not the same as the ensemble averages described before. A
random process X(t) is said to be ergodic in the mean, i.e., first-order ergodic, if the sample
average asymptotically approaches the ensemble mean:
lim (T → ∞) ⟨X⟩_T = m_X.

In a similar sense a random process X(t) is said to be ergodic in the ACF, i.e., second-order
ergodic, if
lim (T → ∞) R_X(τ)_T = R_X(τ).
The concept of ergodicity is also significant from a measurement perspective because in
practical situations we do not have access to all the sample realizations of a random process. We
therefore have to be content in these situations with the time-averages that we obtain from a single
realization. Ergodic processes are signals for which measurements based on a single sample
function are sufficient to determine the ensemble statistics. Random signals for which this property
does not hold are referred to as non-ergodic processes. As before, the Gaussian random signal is an
exception where strict sense ergodicity implies wide sense ergodicity.
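The measurement viewpoint can be illustrated with a short sketch (an assumed ergodic example; the distribution parameters are arbitrary):

import numpy as np

rng = np.random.default_rng(2)

# Ensemble average: many realizations observed at a single time instant
ensemble = rng.normal(loc=1.0, scale=0.5, size=10000)         # X(t0) over 10000 realizations
print("ensemble mean:", ensemble.mean())

# Time average: a single long realization of the same (ergodic) process
one_realization = rng.normal(loc=1.0, scale=0.5, size=10000)  # X(t) sampled over time
print("time-average mean:", one_realization.mean())
# For an ergodic process the two estimates agree as the record length grows.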


GAUSSIAN PROCESS:
A random process X(t) is a Gaussian process if for all n and all (t1 ,t2 ,…,tn ), the random
variables have a jointly Gaussian density function. For Gaussian processes, knowledge of the
mean and autocorrelation; i.e., mX (t) and Rx (t1 ,t2 ) gives a complete statistical description of
the process. If the Gaussian process X(t) is passed through an LTI system, then the output process
Y(t) will also be a Gaussian process. For Gaussian processes, WSS and strict stationary are
equivalent.
A Gaussian process is a stochastic process Xt, t ∈ T, for which any finite linear
combination of samples has a joint Gaussian distribution. More accurately, any
linear functional applied to the sample function Xt will give a normally distributed result. Notation-
wise, one can write X ~ GP(m, K), meaning the random function X is distributed as a GP with mean
function m and covariance function K.[1] When the input vector t is two- or multi-dimensional, a
Gaussian process might also be known as a Gaussian random field.
A sufficient condition for the ergodicity of the stationary zero-mean Gaussian process X(t) is that
its autocorrelation function be absolutely integrable, i.e. ∫ from −∞ to ∞ of |R_X(τ)| dτ < ∞.
 Jointly Gaussian processes:


The random processes X(t) and Y(t) are jointly Gaussian if for all n, m and all (t1 ,t2 ,…,tn ), and
(τ1 , τ2 ,…, τm ), the random vector (X(t1 ),X(t2 ),…,X(tn ), Y(τ1 ),Y( τ2 ),…, Y(τm )) is
distributed according to an (n + m)-dimensional jointly Gaussian distribution.

For jointly Gaussian processes, uncorrelatedness and independence are equivalent.


LINEAR FILTERING OF RANDOM PROCESSES:
• A random process X(t) is applied as input to a linear time-invariant filter of impulse
response h(t),
• It produces a random process Y (t) at the filter output as

X(t) →→→→→h(t)→→→ Y(t)


• Difficult to describe the probability distribution of the output random process Y (t), even when

the probability distribution of the input random process X(t) is completely specified for
−∞ ≤ t ≤ +∞.

• Estimate characteristics like mean and autocorrelation of the output and try to analyse its

behaviour.

• Mean: The input to the above system X(t) is assumed stationary. The mean of the output random
process Y(t) can be calculated as
mY = E[Y(t)] = mX ∫ h(τ) dτ = mX H(0)
where H(0) is the zero frequency response of the system.

 Autocorrelation:
Consider the autocorrelation function of the output random process Y(t). By definition, we have
RY(t, u) = E[Y(t) Y(u)]
where t and u denote the time instants at which the process is observed. We may therefore use the
convolution integral to write
RY(t, u) = ∫∫ h(τ1) h(τ2) RX(t − τ1, u − τ2) dτ1 dτ2.
When the input X(t) is a wide-sense stationary random process, the autocorrelation function of X(t) is only a
function of the difference between the observation times t − τ1 and u − τ2.
Putting τ = t − u, we get
RY(τ) = ∫∫ h(τ1) h(τ2) RX(τ − τ1 + τ2) dτ1 dτ2.
The mean square value of the output random process Y(t) is obtained by putting τ = 0 in the
above equation.


The mean square value of the output of a stable linear time-invariant filter in response to a wide-
sense stationary random process is equal to the integral over all frequencies of the power spectral
density of the input random process multiplied by the squared magnitude of the transfer function
of the filter:
E[Y²(t)] = ∫ from −∞ to ∞ of SX(f) |H(f)|² df.
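This relation can be checked numerically; the sketch below (an illustrative example, not taken from the text) filters white noise with a simple moving-average filter and compares the measured output power with the prediction:

import numpy as np

rng = np.random.default_rng(3)
sigma2 = 1.0                                   # input white-noise power (variance)
x = rng.normal(0.0, np.sqrt(sigma2), 1000000)

h = np.ones(8) / 8.0                           # simple LTI filter (moving average)
y = np.convolve(x, h, mode="valid")

measured = np.var(y)
# For white noise S_X(f) is flat, so E[Y^2] = sigma2 * integral of |H(f)|^2 df,
# which by Parseval's theorem equals sigma2 * sum(h^2).
predicted = sigma2 * np.sum(h**2)
print(measured, predicted)                     # both approximately 0.125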

APPLICATION AND ITS USES:


 A Gaussian process can be used as a prior probability
distribution over functions in Bayesian inference.
 Wiener process (aka Brownian motion) is the integral of a white noise Gaussian process.
It is not stationary, but it has stationary increments.


UNIT – IV

NOISE CHARACTERISATION

PREREQUISTING ABOUT NOISE:

Noise is an inevitable consequence of the working of minerals and is an important health and
safety consideration for those working on the site. Whether it becomes "environmental noise"
depends on whether it disrupts or disturbs people outside the site boundary.

INTRODUCTION:
Noise is often described as the limiting factor in communication systems: indeed if there as no
noise there would be virtually no problem in communications.
Noise is a general term which is used to describe an unwanted signal which affects a wanted
signal. These unwanted signals arise from a variety of sources which may be considered in one of
two main categories:-
a) Interference, usually from a human source (manmade)
b) Naturally occurring random noise.
Interference arises for example, from other communication systems (cross talk), 50 Hz supplies
(hum) and harmonics, switched mode power supplies, thyristor circuits, ignition (car spark plugs)
motors … etc. Interference can in principle be reduced or completely eliminated by careful
engineering (i.e. good design, suppression, shielding etc). Interference is essentially deterministic
(i.e. non-random and predictable).
When the interference is removed, there remains naturally occurring noise which is essentially
random (non-deterministic). Naturally occurring noise is inherently present in electronic
communication systems from either ‗external‘ sources or ‗internal‘ sources.


Naturally occurring external noise sources include atmosphere disturbance (e.g. electric storms,
lighting, ionospheric effect etc), so called ‗Sky Noise‘ or Cosmic noise which includes noise from
galaxy, solar noise and ‗hot spot‘ due to oxygen and water vapour resonance in the earth‘s
atmosphere. These sources can seriously affect all forms of radio transmission and the design of a
radio system (i.e. radio, TV, satellite) must take these into account. The diagram below shows
noise temperature (equivalent to noise power, as we shall discuss later) as a function of frequency for
sky noise.

The upper curve represents an antenna at low elevation (~ 5° above the horizon), the lower curve
represents an antenna pointing at the zenith (i.e. 90° elevation).

Contributions to the above diagram are from galactic noise and atmospheric noise as shown
below. Note that sky noise is least over the band 1 GHz to 10 GHz. This is referred to as a low
noise ‗window‘ or region and is the main reason why satellite links operate at frequencies in this


band (e.g. 4 GHz, 6GHz, 8GHz). Since signals received from satellites are so small it is important
to keep the background noise to a minimum.

Naturally occurring internal noise or circuit noise is due to active and passive electronic devices
(e.g. resistors, transistors ...etc) found in communication systems. There are various mechanism
which produce noise in devices; some of which will be discussed in the following sections.
 THERMAL NOISE (JOHNSON NOISE):
This type of noise is generated by all resistances (e.g. a resistor, semiconductor, the resistance of a
resonant circuit, i.e. the real part of the impedance, cable etc).
Free electrons are in constant random motion for any temperature above absolute zero (0 K,
~ −273 °C). As the temperature increases, the random motion increases, hence thermal
noise, and since moving electrons constitute a current, although there is no net current flow, the
motion can be measured as a mean square noise value across the resistance.

FIGURE 4.1 Circuit Diagram of Thermal Noise Voltage


Experimental results (by Johnson) and theoretical studies (by Nyquist) give the mean square noise
voltage as V̄² = 4 k T B R (volt²)
Where k = Boltzmann‘s constant = 1.38 x 10⁻²³ Joules per K
T = absolute temperature (K)
B = noise bandwidth measured in (Hz)
R = resistance (ohms)
The law relating noise power, N, to the temperature and bandwidth is
N = k T B watts
These equations will be discussed further in a later section.
The equations above hold for frequencies up to ~ 10¹³ Hz (10,000 GHz) and for at least all
practical temperatures, i.e. for all practical communication systems they may be assumed to be


valid. Thermal noise is often referred to as ‗white noise‘ because it has a uniform ‗spectral
density‘.
Note – noise power spectral density is the noise power measured in a 1 Hz bandwidth, i.e. watts
per Hz. A uniform spectral density means that if we measured the thermal noise in any 1 Hz
bandwidth from ~ 0 Hz → 1 MHz → 1 GHz …….. 10,000 GHz etc we would measure the same
amount of noise.
From the equation N = kTB, the noise power spectral density is p₀ = kT watts per Hz.

Graphically figure 4.2 is shown as,
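As a quick numerical illustration of these formulas (the temperature, bandwidth and resistance values below are assumptions chosen for the example, not values from the text):

import math

k, T, B, R = 1.38e-23, 290.0, 10e3, 50.0   # Boltzmann's constant, 290 K, 10 kHz, 50 ohms (assumed)

v2 = 4 * k * T * B * R                     # mean square noise voltage, V^2 = 4kTBR
N  = k * T * B                             # noise power, N = kTB
p0 = k * T                                 # noise power spectral density (W/Hz)

print("RMS noise voltage:", math.sqrt(v2), "V")         # about 89 nV
print("Noise power:", 10*math.log10(N/1e-3), "dBm")     # about -134 dBm
print("PSD:", p0, "W/Hz")                               # about 4e-21 W/Hz (-174 dBm/Hz)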

 SHOT NOISE:
Shot noise was originally used to describe noise due to random fluctuations in electron emission
from cathodes in vacuum tubes (called shot noise by analogy with lead shot). Shot noise also
occurs in semiconductors due to the liberation of charge carriers, which have discrete amount of
charge, in to potential barrier region such as occur in pn junctions. The discrete amounts of charge
give rise to a current which is effectively a series of current pulses.
For pn junctions the mean square shot noise current is
Īn² = 2 (I_DC + 2 I₀) q_e B (amps²)
Where
I_DC is the direct current across the pn junction (amps)
I₀ is the reverse saturation current (amps)
q_e is the electron charge = 1.6 x 10⁻¹⁹ coulombs
B is the effective noise bandwidth (Hz)
Shot noise is found to have a uniform spectral density as for thermal noise.
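A small numerical sketch of the shot-noise formula (the current and bandwidth values are assumptions chosen for illustration):

import math

q_e  = 1.6e-19      # electron charge (C)
I_dc = 1e-3         # 1 mA direct current through the junction (assumed)
I_0  = 1e-9         # 1 nA reverse saturation current (assumed)
B    = 10e3         # 10 kHz effective noise bandwidth (assumed)

i_n2 = 2 * (I_dc + 2*I_0) * q_e * B        # mean square shot noise current
print("RMS shot noise current:", math.sqrt(i_n2), "A")   # about 1.8 nA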


 LOW FREQUENCY OR FLICKER NOISE:


Active devices, integrated circuits, diodes, transistors etc. also exhibit a low frequency noise,
which is frequency dependent (i.e. non uniform), known as flicker noise or ‗one – over – f‘ noise.
The mean square value is found to be proportional to (1/f)ⁿ where f is the frequency and n ≈ 1.
Thus the noise at higher frequencies is less than at lower frequencies. Flicker noise is due to
impurities in the material which in turn cause charge carrier fluctuations.
 EXCESS RESISTOR NOISE:
Thermal noise in resistors does not vary with frequency, as previously noted, but many resistors
also generate an additional frequency dependent noise referred to as excess noise. This noise also
exhibits a (1/f) characteristic, similar to flicker noise.
Carbon resistors generally generate the most excess noise whereas wire wound resistors usually
generate a negligible amount of excess noise. However the inductance of wire wound resistors
limits their frequency range, and metal film resistors are usually the best choice for high frequency
communication circuits where low noise and constant resistance are required.
 BURST NOISE OR POPCORN NOISE:
Some semiconductors also produce burst or popcorn noise with a spectral density which is
proportional to (1/f)².
 GENERAL COMMENTS:
The diagram below illustrates the variation of noise with frequency.

For frequencies below a few KHz (low frequency systems), flicker and popcorn noise are the most
significant, but these may be ignored at higher frequencies where ‗white‘ noise predominates.


Thermal noise is always present in electronic systems. Shot noise is more or less significant
depending upon the specific devices used for example as FET with an insulated gate avoids
junction shot noise. As noted in the preceding discussion, all transistors generate other types of
‗non-white‘ noise which may or may not be significant depending on the specific device and
application. Of all these types of noise source, white noise is generally assumed to be the most
significant and system analysis is based on the assumption of thermal noise. This assumption is
reasonably valid for radio systems which operates at frequencies where non-white noise is greatly
reduced and which have low noise ‗front ends‘ which, as shall be discussed, contribute most of the
internal (circuit) noise in a receiver system. At radio frequencies the sky noise contribution is
significant and is also (usually) taken into account.
Obviously, analysis and calculations only give an indication of system performance.
Measurements of the noise or signal-to-noise ratio in a system include all the noise, from whatever
source, present at the time of measurement and within the constraints of the measurements or
system bandwidth.
Before discussing some of these aspects further an overview of noise evaluation as applicable to
communication systems will first be presented.
 NOISE EVALUATION:
 OVERVIEW:

It has been stated that noise is an unwanted signal that accompanies a wanted signal, and, as
discussed, the most common form is random (non-deterministic) thermal noise.
The essence of calculations and measurements is to determine the signal power to noise power
ratio, i.e. the (S/N) ratio or (S/N) expressed in dB.
i.e. Let S = signal power (mW), N = noise power (mW)
(S/N) as a power ratio = S / N
(S/N) in dB = 10 log₁₀ (S / N)


Powers are usually measured in dBm (or dBW) in communication systems, so the ratio may also be
expressed as (S/N) in dB = S (in dBm) − N (in dBm).
The (S/N) at various stages in a communication system gives an indication of system quality and
performance in terms of error rate in digital data communication systems and ‗fidelity‘ in the case of
analogue communication systems. (Obviously, the larger the (S/N), the better the system will be.)
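A one-line numerical check of the dB relationships (arbitrary example powers):

import math

S_mW, N_mW = 200.0, 0.05                    # example signal and noise powers in mW
snr_db = 10 * math.log10(S_mW / N_mW)       # (S/N) in dB
S_dBm  = 10 * math.log10(S_mW)              # powers in dBm
N_dBm  = 10 * math.log10(N_mW)
print(snr_db, S_dBm - N_dBm)                # both give about 36.0 dB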



Noise which accompanies the signal is usually considered to be additive (in terms of powers) and
it is often described as Additive White Gaussian Noise (AWGN). Noise and signals may also
be multiplicative, and in some systems at some levels of (S/N) this may be more significant than
AWGN. In order to evaluate noise, various mathematical models and techniques have to be used,
particularly concepts from statistics and probability theory, the major starting point being that
random noise is assumed to have a Gaussian or Normal distribution.
We may relate the concept of white noise with a Gaussian distribution as follows:


FIGURE 4.3 Probability of noise voltage vs voltage


Gaussian distribution – ‗graph‘ shows Probability of noise voltage vs voltage – i.e. most probable
noise voltage is 0 volts (zero mean). There is a small probability of very large +ve or –ve noise
voltages.
White noise – uniform noise power from ‗DC‘ to very high frequencies.
Although not strictly consistent, we may relate these two characteristics of thermal noise as
follows:

FIGURE 4.4 Characteristics of Thermal Noise


The probability distribution of the noise amplitude at any frequency, or in any band of frequencies (e.g. 1 Hz, 10 Hz, …, 100 kHz), is Gaussian. Noise may be quantified in terms of its noise power spectral density, p0 watts per Hz, from which the noise power N may be expressed as
N = p0 Bn watts
where Bn is the equivalent noise bandwidth; the equation assumes p0 is constant across the band (i.e. white noise).


Note: Bn is not the 3 dB bandwidth; it is the bandwidth which, when multiplied by p0, gives the actual output noise power N. This is illustrated further below.

FIGURE 4.5 Basic Ideal Low Pass Filter


Ideal low pass filter
Bandwidth B Hz = Bn
N= p0 Bn watts
Practical LPF
3 dB bandwidth shown, but noise does not suddenly cease at B3dB
Therefore, Bn > B3dB, Bn depends on actual filter.
N= p0 Bn
In general the equivalent noise bandwidth is > B3dB.
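
As a numerical illustration of the statement that Bn > B3dB, the short Python sketch below assumes a single-pole (first-order RC) low-pass response, for which the standard result is Bn = (π/2)·B3dB, and then forms N = p0·Bn. The 100 kHz 3 dB bandwidth and the value of p0 are arbitrary illustrative values, not figures from the text.

```python
import math

def first_order_noise_bandwidth(f3db_hz: float) -> float:
    """Equivalent noise bandwidth of a single-pole low-pass filter.

    Integrating |H(f)|^2 = 1 / (1 + (f/f3dB)^2) from 0 to infinity gives
    Bn = (pi/2) * f3dB, i.e. about 1.57 times the 3 dB bandwidth.
    """
    return (math.pi / 2.0) * f3db_hz

p0 = 4e-21      # assumed noise power spectral density, W/Hz (kT at ~290 K)
f3db = 100e3    # assumed 3 dB bandwidth of the filter, Hz

bn = first_order_noise_bandwidth(f3db)
noise_power = p0 * bn   # N = p0 * Bn

print(f"Bn = {bn/1e3:.1f} kHz (vs B3dB = {f3db/1e3:.1f} kHz)")
print(f"N  = {noise_power:.3e} W")
```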

Alternatively, noise may be quantified in terms of the 'mean square noise' voltage V² (its mean square value), which is effectively a power. From this a root mean square (RMS) value for the noise voltage may be determined,
i.e. RMS = √(mean square value).
In order to ease analysis, models based on the above quantities are used. For example, if we imagine noise in a very narrow bandwidth Δf, then as Δf → df the noise approaches a sine wave (with frequency 'centred' in df). Since an RMS noise voltage can be determined, a 'peak' value of the noise may be invented, since for a sine wave
RMS = Peak / √2


Note – the peak value is entirely fictitious, since in theory noise with a Gaussian distribution could have a peak value of +∞ or −∞ volts.
Hence we may relate: mean square → RMS → √2 × (RMS) ≈ peak noise voltage (invented for convenience).
Problems arising from noise are manifested at the receiving end of a system and hence most of the analysis relates to the receiver / demodulator, with transmission path loss and external noise sources (e.g. sky noise), if appropriate, taken into account.
The transmitter is generally assumed to transmit a signal with zero noise (i.e. (S/N) at the Tx → ∞).
General communication system block diagrams to illustrate these points are shown below.

FIGURE 4.6 Block Diagrams of Communication System


Transmission Line
R = repeater (Analogue) or Regenerators (digital)
These systems may facilitate analogue or digital data transfer. The diagram below characterizes
these typical systems in terms of the main parameters.


FIGURE 4.7 Block Diagrams of Communication System


PT represents the output power at the transmitter.
GT represents the Tx aerial gain.
Path loss represents the signal attenuation due to the inverse square law and absorption, e.g. in the atmosphere.
G represents repeater gains.
PR represents the receiver input signal power.
NR represents the received external noise (e.g. sky noise).
GR represents the receiving aerial gain.
(S/N)IN represents the (S/N) at the input to the demodulator.
(S/N)OUT represents the quality of the output signal for analogue systems.
DATA ERROR RATE represents the quality of the output (probability of error) for digital data systems.
ANALYSIS OF NOISE IN COMMUNICATION SYSTEMS:
 Thermal Noise (Johnson noise)
It has been discussed that the thermal noise in a resistance R has a mean square value given by
V² = 4 k T B R   (volt²)
where k = Boltzmann's constant = 1.38 x 10^-23 joules per kelvin
T = absolute temperature
B = bandwidth over which the noise is measured (Hz)


R = resistance (ohms)
This is found to hold for frequencies up to about 10^13 Hz and over a large range of temperature.
This thermal noise may be represented by an equivalent circuit as shown below.

FIGURE 4.7 Equivalent Circuit of Thermal Noise Voltage


This is equivalent to an ideal, noise-free resistor (with the same resistance R) in series with a voltage source of voltage Vn.
Since V² = 4 k T B R (volt²) is a mean square value (a power), then
VRMS = √(V²) = 2√(k T B R) = Vn in the diagram above,
i.e. Vn is the RMS noise voltage.
The above equation indicates that the noise power is proportional to bandwidth. For a given resistance R at a fixed temperature T (kelvin) we have
V² = (4 k T R) B,
where (4 k T R) is a constant – units watts per Hz.
For a given system, with (4 k T R) constant, if we double the bandwidth from B Hz to 2B Hz the noise power will double (i.e. increase by 3 dB). If the bandwidth were increased by a factor of 10, the noise power would increase by a factor of 10. For this reason it is important that the system bandwidth is only just 'wide' enough to allow the signal to pass, in order to limit the noise bandwidth to a minimum.
 i.e. Signal Spectrum, signal power = S
A) System BW = B Hz:  N = constant × B (watts) = KB


B) System BW = 2B Hz:  N = constant × 2B (watts) = K·2B

FIGURE 4.8 Spectrum Diagrams of Communication System


For A, S/N = S/(KB); for B, S/N = S/(K·2B), i.e. the S/N for B is only ½ that for A.
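
A minimal Python sketch of these relationships is given below; the resistance, temperature and bandwidth values are illustrative assumptions, not figures from the text. It evaluates N = kTB and Vn = √(4kTBR), and doubling B shows the 3 dB increase in noise power discussed above.

```python
import math

K_BOLTZMANN = 1.38e-23   # J/K

def thermal_noise_power(temp_k: float, bandwidth_hz: float) -> float:
    """Available thermal noise power N = kTB (watts)."""
    return K_BOLTZMANN * temp_k * bandwidth_hz

def thermal_noise_rms_voltage(temp_k: float, bandwidth_hz: float, resistance_ohm: float) -> float:
    """RMS thermal (Johnson) noise voltage of a resistor, Vn = sqrt(4kTBR)."""
    return math.sqrt(4.0 * K_BOLTZMANN * temp_k * bandwidth_hz * resistance_ohm)

T, R = 290.0, 50.0            # assumed temperature (K) and resistance (ohms)
for B in (1e6, 2e6):          # doubling the bandwidth raises N by 3 dB
    n = thermal_noise_power(T, B)
    vn = thermal_noise_rms_voltage(T, B, R)
    print(f"B = {B/1e6:.0f} MHz: N = {n:.3e} W ({10*math.log10(n/1e-3):.1f} dBm), Vn = {vn*1e6:.3f} uV")
```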
 Noise Voltage Spectral Density
Data sheets often specify the noise voltage spectral density with units of volts per √Hz (volts per root Hz).
This comes from Vn = (2√(kTR))·√B, i.e. Vn is proportional to √B. The quantity in brackets, 2√(kTR), has units of volts per √Hz. If the bandwidth B is doubled, the noise voltage increases by √2; if the bandwidth is increased by a factor of 10, the noise voltage increases by √10.
 Resistance in Series


FIGURE 4.9 Circuit Diagram of Resistance in Series


Assume that R1 is at temperature T1 and R2 at temperature T2. Then
Vn² = Vn1² + Vn2²   (we add noise powers, not noise voltages)
Vn1² = 4 k T1 B R1,   Vn2² = 4 k T2 B R2
Mean square noise:  Vn² = 4 k B (T1 R1 + T2 R2)
If T1 = T2 = T, then Vn² = 4 k T B (R1 + R2),
i.e. resistors in series at the same temperature behave as a single resistor (R1 + R2).
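
The following short sketch evaluates the series-resistor result Vn² = 4kB(T1 R1 + T2 R2); the component values, temperatures and bandwidth are assumed illustrative figures, not taken from the text.

```python
import math

K = 1.38e-23  # Boltzmann's constant, J/K

def series_noise_mean_square(r1, t1, r2, t2, bandwidth_hz):
    """Mean square noise voltage of two series resistors at (possibly) different temperatures:
    Vn^2 = 4*k*B*(T1*R1 + T2*R2)."""
    return 4.0 * K * bandwidth_hz * (t1 * r1 + t2 * r2)

B = 10e3  # assumed measurement bandwidth, Hz
vn_sq = series_noise_mean_square(1e3, 290.0, 2.2e3, 350.0, B)  # assumed example values
print(f"mean square noise = {vn_sq:.3e} V^2, RMS = {math.sqrt(vn_sq)*1e6:.3f} uV")
```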


 Resistance in Parallel

FIGURE 4.10 Circuit Diagram of Resistance in Parallel


Since an ideal voltage source has zero impedance, we can find the output noise voltage Vo1 due to Vn1, and the output voltage Vo2 due to Vn2.


 MATCHED COMMUNICATION SYSTEMS:


In communication systems we are usually concerned with the noise (i.e. S/N) at the receiver end
of the system.

FIGURE 4.11 Circuit Diagram of Transmission Path


The transmission path may be for example:-
a) A transmission line (e.g. coax cable).

FIGURE 4.12 Circuit Diagram of Coaxial Cable


Zo is the characteristic impedance of the transmission line, i.e. the source resistance Rs. This is connected to the receiver with an input impedance RIN.
b) A radio path (e.g. satellite, TV, radio – using aerial)

FIGURE 4.13 Circuit Diagram of Radio Path.


Again Zo=Rs the source resistance, connected to the receiver with input resistance Rin.
Typically Zo is 600 ohm for audio / telephone systems, and 50 ohm or 75 ohm for radio, TV and other radio frequency systems.
An equivalent circuit, when the line is connected to the receiver is shown below. (Note we omit
the noise due to Rin – this is considered in the analysis of the receiver section).


FIGURE 4.14 Equivalent Circuit of Noise Voltage

Note that p0 is independent of frequency, i.e. white noise.

These equations indicate the need to keep the system bandwidth to a minimum, i.e. to that required to pass only the band of wanted signals, in order to minimize the noise power N.
For example, for a resistance at a temperature of 290 K (17 deg C), p0 = kT is 4 x 10^-21 watts per Hz. For a noise bandwidth Bn = 1 kHz, N is 4 x 10^-18 watts (-174 dBW). If the system bandwidth is increased to 2 kHz, N will increase by a factor of 2 (i.e. 8 x 10^-18 watts or -171 dBW), which will degrade the (S/N) by 3 dB. Care must also be exercised when noise or (S/N) measurements are made, for example with a power meter or spectrum analyser, to be clear which bandwidth the noise is measured in, i.e. system or test equipment. For example, assume the system bandwidth is 1 MHz and the measurement instrument bandwidth is 250 kHz.

FIGURE 4.15 Block Diagram of Measurement Instrumentation


In the above example, the noise measured is band limited by the test equipment rather than the system, making the system appear less noisy than it actually is. Clearly, if the relative bandwidths are known (they should be), the measured noise power may be corrected to give the actual noise power in the system bandwidth.
If the system bandwidth were 250 kHz and the test equipment bandwidth 1 MHz, then the measured result would be –150 dBW (i.e. the same as the actual noise) because the noise power monitored by the test equipment has already been band limited to 250 kHz.
 SIGNAL – TO – NOISE :
The signal to noise ratio is given by
S/N = Signal Power / Noise Power
The signal to noise ratio in dB is expressed by
(S/N) dB = 10 log10 (S/N)

NOISE FACTOR – NOISE FIGURE:
Consider the network shown below, in which (S/N)IN represents the (S/N) at the input and (S/N)OUT represents the (S/N) at the output.


FIGURE 4.16 Block Diagram of S/N Ratio


In general (S/N)IN ≥ (S/N)OUT, i.e. the network 'adds' noise (thermal noise etc. from the network devices), so that the output (S/N) is generally worse than the input.
The amount of noise added by the network is embodied in the Noise Factor F, which is defined by
Noise factor F = (S/N)IN / (S/N)OUT
F equals 1 for a noiseless network, and in general F > 1.
The noise figure is the noise factor quoted in dB,
i.e. Noise Figure FdB = 10 log10 F,  FdB ≥ 0 dB.
The noise figure / factor is a measure of how much a network degrades the (S/N)IN; the lower the value of F, the better the network.
The network may consist of active elements, e.g. amplifiers, active mixers etc., i.e. elements with gain > 1, or passive elements, e.g. passive mixers, feeder cables, attenuators, i.e. elements with gain < 1.

NOISE FIGURE – NOISE FACTOR FOR ACTIVE ELEMENTS :
For active elements with power gain G>1, we have

FIGURE 4.17 Circuit Diagram of Noise Factor -1


F = (S/N)IN / (S/N)OUT = (SIN NOUT) / (NIN SOUT)
But SOUT = G SIN
Therefore F = (SIN NOUT) / (NIN G SIN)
F = NOUT / (G NIN)
If NOUT were due only to G times NIN then F would be 1, i.e. the active element would be noise free. Since in general F > 1, NOUT is increased by noise due to the active element, i.e.


Na represents the 'added' noise measured at the output. This added noise may be referred to the input as extra noise; the equivalent diagram is

FIGURE 4.17 Circuit Diagram of Noise Factor -3


Ne is the extra noise due to the active element referred to the input; the element is thus effectively noiseless.
Hence F = NOUT / (G NIN) = G (NIN + Ne) / (G NIN)
Rearranging gives
Ne = (F − 1) NIN

NOISE TEMPERATURE:
NIN is the 'external' noise from the source, i.e. NIN = k TS Bn, where TS is the equivalent noise temperature of the source (usually 290 K).
We may also write Ne = k Te Bn, where Te is the equivalent noise temperature of the element, i.e. with noise factor F and with source temperature TS.


i.e. k Te Bn = (F − 1) k TS Bn
or Te = (F − 1) TS
The noise factor F is usually measured under matched conditions with the noise source at ambient temperature TS, i.e. TS ≈ 290 K is usually assumed; this is sometimes written as
Te = (F290 − 1) 290
This allows us to calculate the equivalent noise temperature of an element with noise factor F measured at 290 K.
For example, an amplifier with noise figure FdB = 6 dB (noise factor F ≈ 3.98) has an equivalent noise temperature Te ≈ 865 K.
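
The conversions between noise figure, noise factor and equivalent noise temperature are easily captured in a few lines of Python; the sketch below reproduces the 6 dB → ≈865 K example above (the round values tabulated in the review section that follows assume exact ratios F = 2, 4, 8, 16).

```python
T0 = 290.0  # standard reference temperature, K

def noise_factor_from_db(f_db: float) -> float:
    """Convert a noise figure in dB to a noise factor (ratio)."""
    return 10.0 ** (f_db / 10.0)

def noise_temperature(f_db: float, ts: float = T0) -> float:
    """Equivalent noise temperature Te = (F - 1) * Ts."""
    return (noise_factor_from_db(f_db) - 1.0) * ts

for f_db in (0.0, 3.0, 6.0, 9.0, 12.0):
    f = noise_factor_from_db(f_db)
    print(f"FdB = {f_db:4.1f} dB -> F = {f:6.3f} -> Te = {noise_temperature(f_db):7.1f} K")
```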


 Comments:-
a) We have introduced the idea of referring the noise to the input of an element, this noise is
not actually present at the input, it is done for convenience in the analysis.
b) The noise power and equivalent noise temperature are related, N=kTB, the temperature T
is not necessarily the physical temperature, it is equivalent to the temperature of a
resistance R (the system impedance) which gives the same noise power N when measured
in the same bandwidth Bn.

c) Noise figure (or noise factor F) and equivalent noise temperature Te are related and both
indicate how much noise an element is producing.
Since, Te = (F-1) TS

Then for F=1, Te = 0, i.e. ideal noise free active element.



NOISE FIGURE – NOISE FACTOR FOR PASSIVE ELEMENTS :
The theoretical argument for passive networks (e.g. feeders, passive mixers, attenuators), that is networks with a gain < 1, is fairly abstract; in essence it shows that the noise at the input, NIN, is attenuated by the network, but the added noise Na contributes to the noise at the output such that NOUT = NIN.
Thus, since F = (SIN NOUT) / (NIN SOUT) and NOUT = NIN,
F = SIN / (G SIN) = 1/G
If we let L denote the insertion loss (ratio) of the network, i.e. insertion loss LdB = 10 log L, then L = 1/G and hence for a passive network
F = L
Also, since Te = (F − 1) TS, then for a passive network

Te = (L-1) TS

Where Te is the equivalent noise temperature of a passive device referred to its input.

REVIEW OF NOISE FACTOR – NOISE FIGURE –TEMPERATURE:

F, FdB and Te are related by FdB = 10 log10 F and Te = (F − 1) 290.
Some corresponding values are tabulated below:

F      FdB (dB)   Te (degrees K)
1      0          0
2      3          290
4      6          870
8      9          2030
16     12         4350

Typical values of noise temperature, noise figure and gain for various amplifiers and attenuators are given below:

Device               Frequency   Te (K)   FdB (dB)   Gain (dB)
Maser amplifier      9 GHz       4        0.06       20
GaAs FET amplifier   9 GHz       330      3.3        6
GaAs FET amplifier   1 GHz       110      1.4        12
Silicon transistor   400 MHz     420      3.9        13
L C amplifier        10 MHz      1160     7.0        50
Type N cable         1 GHz       –        2.0        −2.0

 CASCADED NETWORK:
A receiver system usually consists of a number of passive or active elements connected in series; each element is defined separately in terms of its gain (greater than 1 or less than 1 as the case may be), noise figure or noise temperature, and bandwidth (usually the 3 dB bandwidth). These elements are assumed to be matched.
A typical receiver block diagram is shown below, with an example.


FIGURE 4.18 Circuit Diagram of Cascade System -1

In order to determine the (S/N) at the input, the overall receiver noise figure or noise temperature
must be determined. In order to do this all the noise must be referred to the same point in the
receiver, for example to A, the feeder input or B, the input to the first amplifier.
The equations so far discussed refer the noise to the input of that specific element, i.e. Te or Ne is the noise referred to the input.
To refer the noise to the output we must multiply the input noise by the gain G. For example, for a lossy feeder of loss L we had
Ne = (L − 1) NIN   (noise referred to the input)
or Te = (L − 1) TS   (referred to the input).
Noise referred to the output is gain × noise referred to the input, hence
Ne referred to the output = G Ne = (1/L)(L − 1) NIN = (1 − 1/L) NIN
Similarly, the equivalent noise temperature referred to the output is
Te referred to the output = (1 − 1/L) TS
These points will be clarified later; first the system noise figure will be considered.

SYSTEM NOISE FIGURE:
Assume that a system comprises the elements shown below, each element defined and specified
separately.


FIGURE 4.18 Circuit Diagram of Cascade System -2


The gains may be greater or less than 1; the symbols F denote noise factor (not noise figure, i.e. not in dB). Assume that these elements are now cascaded and connected to an aerial at the input, with NIN = Nae from the aerial.

FIGURE 4.18 Circuit Diagram of Cascade System -3


Note: NIN for each stage is equivalent to a source at a temperature of 290 K, since this is how each device / element is specified.
Now,
NOUT = G3 (NIN3 + Ne3) = G3 (NIN3 + (F3 − 1) NIN)
Since NIN3 = G2 (NIN2 + Ne2) = G2 (NIN2 + (F2 − 1) NIN)
and similarly NIN2 = G1 (Nae + (F1 − 1) NIN),
then
NOUT = G1 G2 G3 Nae + G1 G2 G3 (F1 − 1) NIN + G2 G3 (F2 − 1) NIN + G3 (F3 − 1) NIN
The overall system noise factor is
Fsys = NOUT / (G1 G2 G3 Nae)
     = 1 + (F1 − 1) NIN/Nae + [(F2 − 1)/G1] NIN/Nae + [(F3 − 1)/(G1 G2)] NIN/Nae
If we assume Nae ≈ NIN, i.e. we measure and specify Fsys under similar conditions as F1, F2 etc. (i.e. at 290 K), then for n elements in cascade


F2 1 F3 1 F4 1 Fn 1


F
sys  F1 
   .......... .  G
G1 G1G2 G1G2 G3 G1G2 n1

The equation is called FRIIS Formula.


This equation indicates that the system noise factor depends largely on the noise factor of the first stage if the gain of the first stage is reasonably large. This explains the desire for 'low noise front ends' or low noise mast-head preamplifiers for domestic TV reception. There is a danger, however: if the gain of the first stage is too large, large and unwanted signals are applied to the mixer, which may produce intermodulation distortion. Some receivers apply signals from the aerial directly to the mixer to avoid this problem. Generally a first stage amplifier is designed to have a good noise factor and some gain to give an acceptable overall noise figure.
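
A minimal Python sketch of the Friis formula is given below. The four-stage chain (feeder, LNA, mixer, IF amplifier) and its gains and noise figures are assumed illustrative values, not figures from the text; the feeder is entered with F = L and G = 1/L, as derived for passive elements above. Re-running with the stages reordered (or with less first-stage gain) shows how strongly the first stage dominates the overall result.

```python
import math

def to_ratio(db_value: float) -> float:
    """Convert a dB value (gain or noise figure) to a linear ratio."""
    return 10.0 ** (db_value / 10.0)

def friis_noise_factor(stages):
    """Cascaded noise factor via the Friis formula.

    stages: list of (noise_factor, gain) pairs as linear ratios, ordered from the input.
    Fsys = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ...
    """
    f_sys, running_gain = 0.0, 1.0
    for i, (f, g) in enumerate(stages):
        f_sys += f if i == 0 else (f - 1.0) / running_gain
        running_gain *= g
    return f_sys

# Assumed chain: 2 dB feeder loss, LNA (1.5 dB NF, 20 dB gain),
# mixer (8 dB NF, -6 dB gain), IF amplifier (10 dB NF, 30 dB gain).
chain = [(to_ratio(2.0), to_ratio(-2.0)),
         (to_ratio(1.5), to_ratio(20.0)),
         (to_ratio(8.0), to_ratio(-6.0)),
         (to_ratio(10.0), to_ratio(30.0))]

f_sys = friis_noise_factor(chain)
te_sys = (f_sys - 1.0) * 290.0
print(f"Fsys = {f_sys:.2f} ({10*math.log10(f_sys):.2f} dB), Te,sys = {te_sys:.0f} K")
```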

SYSTEM NOISE TEMPERATURE:



Since Te = (F − 1) TS, i.e. F = 1 + Te/TS,
then Fsys = 1 + Te,sys/TS, where Te,sys is the equivalent noise temperature of the system and TS is the noise temperature of the source.
Also
1 + Te,sys/TS = (1 + Te1/TS) + (1/G1)(Te2/TS) + …  etc.
i.e. from Fsys = F1 + (F2 − 1)/G1 + …  etc.,
which gives
Te,sys = Te1 + Te2/G1 + Te3/(G1 G2) + Te4/(G1 G2 G3) + ………
Te,sys is the receiver system equivalent noise temperature. Again, this shows that the system noise temperature depends on the first stage to a large extent if the gain of the first stage is reasonably large. The equations for Te,sys and Fsys refer the noise to the input of the first stage. This can best be clarified by examining the equation for Te,sys in conjunction with the diagram below.


FIGURE 4.18 Circuit Diagram of Cascade System - 4


Te1 is already referred to the input of the 1st stage.
Te2 is referred to the input of the 2nd stage – to refer this to the input of the 1st stage we must divide Te2 by G1.
Te3 is referred to the input of the 3rd stage – divide by (G1 G2) to refer it to the input of the 1st stage, etc.
It is often more convenient to work with noise temperature rather than noise factor. Given a noise factor we find Te from Te = (F − 1) 290.
Note also that the gains (G1, G2, G3, etc.) may be gains > 1 or gains < 1, i.e. losses L where L = 1/G.
See examples and tutorials for further clarification.

REVIEW AND APPLICATION:
It is important to realize that the previous sections present a technique to enable a receiver
performance to be calculated. The essence of the approach is to refer all the noise contributed at
various stages in the receiver to the input and thus contrive to make all the stages ideal, noise free.
i.e. in practice or reality : -

FIGURE 4.18 Circuit Diagram of Cascade System - 5


The noise gets worse as we proceed from the aerial to the output. The technique outlined is shown below.

FIGURE 4.18 Circuit Diagram of Cascade System - 6

All noise referred to input and all stages assumed noise free.
To complete the analysis consider the system below

FIGURE 4.18 Circuit Diagram of Cascade System - 7


The overall noise temperature = Tsky + Tsys (referred to A).
Noise referred to A = kTB = k (Tsky + Tsys) B watts,
where k = 1.38 x 10^-23 J/K and B is the bandwidth as measured at the output, in this case from the IF amp.
If we call this noise NR, i.e. NR = k (Tsky + Tsys) B, then the noise at the output would be
NOUT = NR (1/L1) G2 (1/L3) G4,
i.e. the noise referred to the input times the gain of each stage. Now consider


FIGURE 4.18 Circuit Diagram of Cascade System - 8


(S/N)R = SR/NR is not the actual (real) ratio that would be measured at A, because NR includes noise referred back from later stages.
The actual ratio at A is SR / (k Tsky BAe), where BAe is the aerial bandwidth.
The signal power at the output would be
SOUT = SR (1/L1) G2 (1/L3) G4
and
(S/N)OUT = SOUT / NOUT = SR / NR
Hence, by referring all the noise to the input and finding NR, we can find SR/NR, which is the same as (S/N)OUT – i.e. we do not need to know all the system gains.
Consider now the diagram below.


A filter of some form often follows the receiver, and we often need to know po, the noise power spectral density. Recall that N = po B = kTB, i.e. po = kT.
We may find po from
po = k (Tsky + Tsys)
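
A small sketch of these two results, with assumed (illustrative) sky and system noise temperatures and IF bandwidth:

```python
K = 1.38e-23  # Boltzmann's constant, J/K

def receiver_noise(t_sky: float, t_sys: float, bandwidth_hz: float):
    """Noise PSD po = k(Tsky + Tsys) and total noise NR = po * B, both referred to the receiver input."""
    p0 = K * (t_sky + t_sys)
    return p0, p0 * bandwidth_hz

# Assumed values: 50 K sky noise, 100 K system noise temperature, 36 MHz IF bandwidth.
p0, n_r = receiver_noise(50.0, 100.0, 36e6)
print(f"po = {p0:.3e} W/Hz, NR = {n_r:.3e} W")
```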

ALGEBRAIC REPRESENTATION OF NOISE:
 General
In order for the effects of noise to be considered in systems – for example the analysis of probability of error as a function of signal to noise for an FSK modulated data system, or the performance of an analogue FM system in the presence of noise – it is necessary to develop a model which allows noise to be considered algebraically.
Noise may be quantified in terms of noise power spectral density po watts per Hz, such that the average noise power in a noise bandwidth Bn Hz is
N = po Bn watts
Thus the actual noise power in a system depends on the system bandwidth, and since we are often interested in the (S/N) at the input to a demodulator, Bn is the smallest noise equivalent bandwidth before the demodulator.
Since average power is proportional to Vrms², we may relate N to a 'peak' noise voltage so that
N = (Vn/√2)² = po Bn
Vn = √(2 po Bn) = 'peak' value of noise
In practice noise is a random signal with (in theory) a Gaussian distribution, and hence peak values up to ±∞, or as otherwise limited by the system dynamic range, are possible. Hence this 'peak' value for noise is a fictitious value which will give rise to the same average noise as the actual noise.


 PHASOR REPRESENTATION OF SIGNAL AND NOISE:


The phasor representation of the carrier is shown below:

FIGURE 4.19 PHASOR Diagram -1

The phasor represents a signal with peak value Vc, rotating with angular frequency ωc rad per second, at an angle φ = ωc t to some reference axis at time t = 0.

If we now consider a carrier with a noise voltage of 'peak' value Vn superimposed, we may represent this as:

FIGURE 4.19 PHASOR Diagram -2

In this case Vn is the peak value of the noise and φn is the phase of the noise relative to the carrier. Both Vn and φn are random variables; the above phasor diagram represents a snapshot at some instant in time.
The resultant or received signal R is the sum of carrier plus noise. If we consider several snapshots overlaid, as shown below, we can see the effects of noise accompanying the signal and how this affects the received signal R.


FIGURE 4.19 PHASOR Diagram -3


Thus the received signal has amplitude and frequency changes (which in practice occur randomly) due to noise. We may draw, for a single instant, the phasor with the noise resolved into 2 components, which are:
a) x(t), in phase with the carrier:  x(t) = Vn cos φn
b) y(t), in quadrature with the carrier:  y(t) = Vn sin φn

FIGURE 4.19 PHASOR Diagram -4

The reason why this is done is that x(t) represents amplitude changes in Vc (amplitude changes affect the performance of AM systems) and y(t) represents phase (i.e. frequency) changes (phase / frequency changes affect the performance of FM/PM systems).
We note that the resultant of x(t) and y(t), i.e.


V n  y(t)2  x(t)2
 V 2 Cos 2 V 2 Sin2
n n n n

 Vn (Since Cos 2  Sin2  1)

We can regard x(t) as a phasor which is in phase with Vc Cos c t , i.e a phasor rotating at c

. i.e. x(t)Cos c t

and by similar reasoning, y(t) in


quadrature i.e. y(t)Sin c t
Hence we may write
Vn t  x(t) Cosc t  y(t) Sinc t
Or – alternative approach
Vn t  Vn Cosc t n 

Vn t  Vn Cosn Cosc t Vn Sinn Sinc t

Vn t  x(t) Cosc t  y(t) Sinc t


This equation is algebraic representation of noise and since

x(t) Vn Cosn = 2 po Bn Cosn

the peak value of x(t) is 2 po Bn (i.e. when Cosn  1 )

Similarly the peak value of y(t) is also 2 po Bn (i.e. when Sinn  0 )


V
 peak 
2

 Vrms 
2

The mean square value in general is  

 2 

2
2  2 po Bn 


 po Bn


and thus the mean square of x(t), i.e x(t)


2  

2
 2 po Bn 
also the mean square value of y(t), i.e y(t)    p B
 
2

2 o n
 
The total noise in the bandwidth, Bn is

V  2
x(t)2 y(t)2
N =  v   po Bn  
 2 2 2

i.e. NOT x(t) 2  y(t) 2 as might be expected.


The reason for this is the cos φn and sin φn relationship in the representation: when, say, 'x(t)' contributes po Bn, the 'y(t)' contribution is zero, i.e. the sum is always equal to po Bn.

The algebraic representation of noise discussed above is quite adequate for the analysis of many
systems, particularly the performance of ASK, FSK and PSK modulated systems.
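
The power relationships above can be checked numerically. The sketch below (NumPy, with arbitrary assumed values of po, Bn and fc) models x(t) and y(t) as independent zero-mean Gaussian sequences each with mean square po Bn, and forms n(t) = x(t) cos ωc t − y(t) sin ωc t; band-limiting of x(t) and y(t) is ignored for simplicity, which does not affect the power check.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative parameters (not from the text).
p0, bn, fc = 1e-12, 10e3, 100e3       # noise PSD (W/Hz), noise bandwidth (Hz), carrier (Hz)
fs, n = 1e6, 200_000                  # sample rate (Hz) and number of samples
t = np.arange(n) / fs

# Quadrature components x(t), y(t): independent Gaussian, each with mean square p0*Bn.
sigma = np.sqrt(p0 * bn)
x = rng.normal(0.0, sigma, n)
y = rng.normal(0.0, sigma, n)
noise = x * np.cos(2 * np.pi * fc * t) - y * np.sin(2 * np.pi * fc * t)

print("E[x^2] =", np.mean(x**2))        # ~ p0*Bn
print("E[y^2] =", np.mean(y**2))        # ~ p0*Bn
print("E[n^2] =", np.mean(noise**2))    # ~ p0*Bn, i.e. E[x^2]/2 + E[y^2]/2
print("p0*Bn  =", p0 * bn)
```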
When considering AM and FM systems, assuming a large (S/N) ratio, i.e Vc>> Vn, the following
may be used.
Considering the general phasor representation below:-

FIGURE 4.19 PHASOR Diagram -5

For AM systems, the signal is of the form [Vc + m(t)] cos ωc t, where m(t) is the message or modulating signal, and the resultant for (S/N) >> 1 is
AM received ≈ [Vc + m(t)] cos ωc t + x(t) cos ωc t

Since AM is sensitive to amplitude changes, changes in the resultant length are predominantly due to x(t).

FIGURE 4.19 PHASOR Diagram -6


For FM systems the signal is of the form Vc cos(ωc t + φ(t)). Noise will produce both amplitude changes (i.e. in Vc) and frequency variations – the amplitude variations are removed by a limiter in the FM receiver. Hence,

FIGURE 4.19 PHASOR Diagram -7

The angle  represents frequency / phase variations in the received signal due to noise. From the
diagram.
 V Sin t 
  tan 1 n n

 
 t
n 
n n Cos  
VcVV
Sinn t 
 Vc 
 tan1  
V 
1 
n
Cos  n t 
 Vc 
Since V >>V (assumed) then Vn Cos t << 1
c n n
V
c
V 
So   tan 1  n
Sin t {which is also obvious from diagram}
 n 

Vc 
Since tan =  for small  and  is small since Vc >>Vn
Vn
Then   Sinn t
V
c

The above discussion for AM and FM serve to show bow the ‗model‘ may be used to describe the
effects of noise.Applications of this model to ASK, FSK and PSK demodulation, and AM and FM
demodulation are discussed elsewhere.

ADDITIVE WHITE GAUSSIAN NOISE:
Noise in Communication Systems is often assumed to be Additive White Gaussian Noise
(AWGN).


 Additive
Noise is usually additive in that it adds to the information bearing signal. A model of the received
signal with additive noise is shown below.

FIGURE 4.20 Block Diagram of White Noise


The signal (information bearing) is at its weakest (most vulnerable) at the receiver input. Noise at other points (e.g. within the receiver) can also be referred to the input. The noise is uncorrelated with the signal, i.e. independent of the signal, and we may state, for average powers,
Output Power = Signal Power + Noise Power = (S + N)
 White
As we have stated noise is assumed to have a uniform noise power spectral density, given that the
noise is not band limited by some filter bandwidth.

We have denoted the noise power spectral density by po(f).
White noise: po(f) = constant.
Also, noise power N = po Bn

FIGURE 4.21 Spectral Density Diagram of White Noise


 GAUSSIAN
We generally assume that noise voltage amplitudes have a Gaussian or Normal distribution.
CLASSIFICATION OF NOISE:
Noise is random, undesirable electrical energy that enters the communications system via the
communicating medium and interferes with the transmitted message. However, some noise is also
produced in the receiver.
(OR)
With reference to an electrical system, noise may be defined as any unwanted form of energy
which tends to interfere with proper reception and reproduction of wanted signal.
Noise may be put into following two categories.
1. External noises, i.e. noise whose sources are external.
External noise may be classified into the following three types:

 Atmospheric noises
 Extraterrestrial noises
 Man-made noises or industrial noises.

2. Internal noise, i.e. noise which gets generated within the receiver or communication system. Internal noise may be put into the following four categories.
 Thermal noise or white noise or Johnson noise
 Shot noise
 Transit time noise
 Miscellaneous internal noise
External noise cannot be reduced except by changing the location of the receiver or the entire system. Internal noise, on the other hand, can be easily evaluated mathematically and can be reduced to a great extent by proper design. Because internal noise can be reduced to a great extent, the study of noise characteristics is a very important part of communication engineering.

Explanation of External Noise
 Atmospheric Noise:
Atmospheric noise or static is caused by lightning discharges in thunderstorms and other natural
electrical disturbances occurring in the atmosphere. These electrical impulses are random in


nature. Hence the energy is spread over the complete frequency spectrum used for radio
communication.
Atmospheric noise accordingly consists of spurious radio signals with components spread over a
wide frequency range. These spurious radio waves constituting the noise get propagated over the
earth in the same fashion as the desired radio waves of the same frequency. Accordingly at a given
receiving point, the receiving antenna picks up not only the signal but also the static from all the
thunderstorms, local or remote.

Extraterrestrial noise:

 Solar noise
 Cosmic noise

Solar noise:
This is the electrical noise emanating from the sun. Under quiet conditions, there is a steady radiation of noise from the sun. This results because the sun is a large body at a very high temperature (exceeding 6000 °C on the surface), and it radiates electrical energy in the form of noise over a very wide frequency spectrum, including the spectrum used for radio communication. The intensity produced by the sun varies with time; in fact, the sun has a repeating 11-year noise cycle. During the peak of the cycle, the sun produces so much noise that it causes tremendous radio signal interference, making many frequencies unusable for communications. During other years the noise is at a minimum level.

Cosmic noise:
Distant stars are also suns and have high temperatures. These stars, therefore, radiate noise in the same way as our sun. The noise received from these distant stars is thermal noise (or black body noise) and is distributed almost uniformly over the entire sky. We also receive noise from the centre of our own galaxy (the Milky Way), from other distant galaxies, and from virtual point sources such as quasars and pulsars.

Man-Made Noise (Industrial Noise):
By man-made noise or industrial noise is meant the electrical noise produced by such sources as automobile and aircraft ignition, electric motors and switchgear, leakage from high voltage lines, fluorescent lights, and numerous other heavy electrical machines. Such noise is produced by the arc discharge taking place during the operation of these machines. Such man-made noise is most intensive in industrial and densely populated areas. Man-made noise in such areas far exceeds all other sources of noise in the frequency range extending from about 1 MHz to 600 MHz.



Explanation of Internal Noise in communication:
 Thermal Noise:
Conductors contain a large number of 'free' electrons and 'ions' strongly bound by molecular forces. The ions vibrate randomly about their normal (average) positions, this vibration being a function of the temperature. Continuous collisions between the electrons and the vibrating
ions take place. Thus there is a continuous transfer of energy between the ions and electrons. This
is the source of resistance in a conductor. The movement of free electrons constitutes a current
which is purely random in nature and over a long time averages to zero. It is this random motion of the electrons which gives rise to a noise voltage called thermal noise. Thus the noise generated in any resistance due to the random motion of electrons is called thermal noise or white or Johnson noise.
The analysis of thermal noise is based on kinetic theory. It shows that the temperature of particles is a way of expressing their internal kinetic energy. Thus the 'temperature' of a body can be said to be equivalent to the statistical rms value of the velocity of motion of the particles in the body. At −273 °C (zero kelvin) the kinetic energy of the particles of a body becomes zero. Thus we can relate the noise power generated by a resistor to be proportional to its absolute temperature. Noise power is also proportional to the bandwidth over which it is measured. From the above discussion we can write
Pn ∝ TB
Pn = kTB  -------(1)
where Pn = maximum noise power output of a resistor
k = Boltzmann's constant = 1.38 x 10^-23 joules per kelvin
T = absolute temperature
B = bandwidth over which the noise is measured


 Transit Time Noise:


Another kind of noise that occurs in transistors is called transit time noise.
Transit time is the duration of time that it takes for a current carrier such as a hole or current to
move from the input to the output.
The devices themselves are very tiny, so the distances involved are minimal. Yet the time it takes for the current carriers to move even a short distance is finite. At low frequencies this time is negligible. But when the frequency of operation is high and the period of the signal being processed is of the same order of magnitude as the transit time, problems can occur. The transit time shows up as a kind of random noise within the device, and this is directly proportional to the frequency of operation.
 Miscellaneous Internal Noises – Flicker Noise:
Flicker noise or modulation noise is the noise appearing in transistors operating at low audio frequencies. Flicker noise is proportional to the emitter current and junction temperature. However, this noise is inversely proportional to frequency. Hence it may be neglected at frequencies above about 500 Hz and it therefore poses no serious problem.
 Transistor Thermal Noise:
Within the transistor, thermal noise is caused by the emitter, base and collector internal

resistances. Out of these three regions, the base region contributes maximum thermal noise.


 Partition Noise:
Partition noise occurs whenever current has to divide between two or more paths, and results from the random fluctuations in the division. It would be expected, therefore, that a diode would be less noisy than a transistor (all other factors being equal), since in a transistor the third electrode also draws current (i.e. the base current). It is for this reason that the inputs of microwave receivers are often taken directly to diode mixers.
 Shot Noise:
The most common type of noise is referred to as shot noise, which is produced by the random arrival of electrons or holes at the output element – at the plate in a tube, or at the collector or drain in a transistor. Shot noise is also produced by the random movement of electrons or holes across a PN junction. Even though current flow is established by external bias voltages, there will still be some random movement of electrons or holes due to discontinuities in the device. An example of such a discontinuity is the contact between the copper lead and the semiconductor material. The interface between the two creates a discontinuity that causes random movement of the current carriers.

Signal to Noise Ratio:
Noise is usually expressed as a power because the received signal is also expressed in terms of power. By knowing the signal and noise powers, the signal to noise ratio can be computed. Rather than expressing the signal to noise ratio as simply a number, you will usually see it expressed in terms of decibels.

A receiver has an input signal power of 1.2 µW. The noise power is 0.80 µW. The signal to noise ratio is
Signal to Noise Ratio = 10 log (1.2/0.8) = 10 log 1.5 = 10 (0.176) = 1.76 dB
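
The calculation above is easily checked with a short Python sketch:

```python
import math

def snr_db(signal_power_w: float, noise_power_w: float) -> float:
    """Signal to noise ratio in dB: (S/N)dB = 10*log10(S/N)."""
    return 10.0 * math.log10(signal_power_w / noise_power_w)

print(f"{snr_db(1.2e-6, 0.8e-6):.2f} dB")   # the worked example above: ~1.76 dB
```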


FIG 4.22 Block diagram S/N Ratio.


4.2.4 Noise Figure:


Noise figure is defined as the ratio of the signal to noise power at the input to the signal to noise power at the output. The device under consideration can be the entire receiver or a single amplifier stage. The noise figure, also called the noise factor, can be computed with the expression F = (signal to noise power at the input) / (signal to noise power at the output). You can express the noise figure as a number, but more often you will see it expressed in decibels.

NOISE IN CASCADE SYSTEMS:


Cascade noise figure calculation is carried out by dealing with gain and noise figure as a ratio
rather than decibels, and then converting back to decibels at the end. As the following equation
shows, cascaded noise figure is affected most profoundly by the noise figure of components
closest to the input of the system as long as some positive gain exists in the cascade. If only loss
exists in the cascade, then the cascaded noise figure equals the magnitude of the total loss. The
following equation is used to calculate cascaded noise figure as a ratio based on ratio values for
gain and noise figure (do not use decibel values).


FIG 4.23 Block diagram of cascaded systems


NARROW BAND NOISE:
Definition: A random process X(t) is a bandpass or narrowband random process if its power spectral density SX(f) is nonzero only in a small neighborhood of some high frequency fc. (Deterministic signals are defined by their Fourier transform; random processes are defined by their power spectral density.)
Notes:
1. Since X(t) is band pass, it has zero mean: E[X(t)] = 0.
2. fc need not be the centre of the signal bandwidth, or in the signal bandwidth at all.
Narrowband Noise Representation:
In most communication systems, we are often dealing with band-pass filtering of signals.
Wideband noise will be shaped into bandlimited noise. If the bandwidth of the
bandlimited noise is relatively small compared to the carrier frequency, we refer to this as
narrowband noise. We can derive the power spectral density Gn(f) and the auto-
correlation function Rnn(τ) of the narrowband noise and use them to analyse the
performance of linear systems. In practice, we often deal with mixing (multiplication),
which is a non-linear operation, and the system analysis becomes difficult. In such a case,
it is useful to express the narrowband noise as n(t) = x(t) cos 2πfct - y(t) sin 2πfct.

where fc is the carrier frequency within the band occupied by the noise. x(t) and
y(t) are known as the quadrature components of the noise n(t). The Hilbert transform of n(t) is n̂(t) = H[n(t)] = x(t) sin 2πfct + y(t) cos 2πfct.


x(t) and y(t) have the following properties:

1. E[x(t) y(t)] = 0. x(t) and y(t) are uncorrelated with each other.
2. x(t) and y(t) have the same means and variances as n(t).
3. If n(t) is Gaussian, then x(t) and y(t) are also Gaussian.
4. x(t) and y(t) have identical power spectral densities, related to the power spectral density of n(t) by Gx(f) = Gy(f) = Gn(f − fc) + Gn(f + fc) for | f | ≤ 0.5B, where B is the bandwidth of n(t).


FM NOISE REDUCTION:

FM CAPTURE EFFECT:
A phenomenon, associated with FM reception, in which only the stronger of two signals at or near
the same frequency will be demodulated

 The complete suppression of the weaker signal occurs at the receiver limiter, where it is
treated as noise and rejected.

 When both signals are nearly equal in strength, or are fading independently, the receiver
may switch from one to the other.


In frequency modulation, the signal can be affected by another frequency modulated signal whose frequency content is close to the carrier frequency of the desired FM wave. The receiver may lock onto such an interference signal and suppress the desired FM wave when the interference signal is stronger than the desired signal. When the strengths of the desired signal and the interference signal are nearly equal, the receiver fluctuates back and forth between them, i.e. the receiver locks onto the interference signal for some time and onto the desired signal for some time, and this goes on randomly.
 This phenomenon is known as the capture effect.
PRE-EMPHASIS & DE-EMPHASIS:
Pre-emphasis refers to boosting the relative amplitudes of the modulating voltage for the higher audio frequencies, from about 2 kHz to approximately 15 kHz.

DE-EMPHASIS:
De-emphasis means attenuating those frequencies by the amount by which they are boosted.
However, pre-emphasis is done at the transmitter and de-emphasis is done in the receiver. The purpose is to improve the signal-to-noise ratio for FM reception. A time constant of 75 µs is specified in the RC or L/R network for pre-emphasis and de-emphasis.
Pre-Emphasis Circuit:
At the transmitter, the modulating signal is passed through a simple network which amplifies the high frequency components more than the low frequency components. The simplest form of such a circuit is a simple high pass filter of the type shown in fig (a). Specifications dictate a time constant of 75 microseconds (µs), where τ = RC. Any combination of resistor and capacitor (or resistor and inductor) giving this time constant will be satisfactory. Such a circuit has a cutoff frequency fco of 2122 Hz. This means that frequencies higher than 2122 Hz will be linearly enhanced; the output amplitude increases with frequency at a rate of 6 dB per octave. The pre-emphasis curve is shown in Fig (b). This pre-emphasis circuit increases the energy content of the higher-frequency signals so that they will tend to become stronger than the high frequency noise components. This improves the signal to noise ratio and increases intelligibility and fidelity.
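
A quick check of the quoted cutoff frequency, fco = 1/(2πτ) with τ = RC = 75 µs, is sketched below; the particular R and C values shown are just one assumed combination giving that time constant.

```python
import math

def cutoff_frequency(tau_seconds: float) -> float:
    """Corner frequency of a first-order pre-emphasis / de-emphasis network: f = 1/(2*pi*tau)."""
    return 1.0 / (2.0 * math.pi * tau_seconds)

tau = 75e-6                                     # 75 us time constant
print(f"fco = {cutoff_frequency(tau):.0f} Hz")  # ~2122 Hz

# One assumed way (of many) to realise tau = RC = 75 us: R = 7.5 kOhm with C = 10 nF.
r, c = 7.5e3, 10e-9
print(f"RC = {r*c*1e6:.1f} us")
```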


De-Emphasis Circuit:


FM THRESHOLD EFFECT:
In an FM receiver, the threshold effect is produced when the desired-signal gain begins to limit the desired signal, and thus the noise (suppression). Note: the FM threshold effect occurs at (and above) the point at which the FM signal-to-noise improvement is measured. The output signal to noise ratio of an FM receiver is valid only if the carrier to noise ratio measured at the discriminator input is high compared to unity. It is observed that as the input noise is increased, so that the carrier to noise ratio is decreased, the FM receiver breaks. At first individual clicks are heard in the receiver output, and as the carrier to noise ratio decreases still further, the clicks rapidly merge into a crackling or sputtering sound. Near the break point the usual output SNR formula begins to fail, predicting values of output SNR larger than the actual ones. This phenomenon is known as the threshold effect. The threshold effect is defined as the minimum carrier to noise ratio that gives an output SNR not less than the value predicted by the usual signal to noise formula assuming a small noise power.
For a qualitative discussion of the FM threshold effect, consider the case when there is no message signal present, so that the carrier is unmodulated. Then the composite signal at the frequency discriminator input is
x(t) = [Ac + nI(t)] cos 2π fc t – nQ(t) sin 2π fc t ------------------- (1)

where nI(t) and nQ(t) are the in-phase and quadrature components of the narrowband noise n(t) with respect to the carrier wave Ac cos 2π fc t. The phasor diagram below shows the phase relations between the various components of x(t) in eqn (1). This effect is shown in the figure below; the calculation is based on the following two assumptions:

1. The output signal is taken as the receiver output measured in the absence of noise. The average output signal power is calculated for a sinusoidal modulation that produces a frequency deviation equal to 1/2 of the IF filter bandwidth B; the carrier is thus enabled to swing back and forth across the entire IF band.


2. The average output noise power is calculated when there is no signal present, i.e. the carrier is unmodulated, with no restriction placed on the value of the carrier to noise ratio.

Assumptions:
Single-tone modulation, i.e. m(t) = Am cos(2π fm t);
The message bandwidth W = fm;
For the AM system, µ = 1;
For the FM system, β = 5 (which is what is used in commercial FM transmission, with ∆f = 75 kHz and W = 15 kHz).
APPLICATION & ITS USES:
 Tape Noise reduction.
 PINK Noise or 1/f noise.
 Noise masking and baby sleep.


UNIT – V

INFORMATION THEORY

5.0 PREREQUISTING ABOUT INFORMATION THEORY:


We would like to develop a usable measure of the information we get from observing the occurrence of an event having probability p. Our first reduction will be to ignore any particular features of the event, and only observe whether or not it happened. Thus we will think of an event as the observance of a symbol whose probability of occurring is p. We will thus be defining the information in terms of the probability p.
1. Information is a non-negative quantity: I(p) ≥ 0.
2. If an event has probability 1, we get no information from the occurrence of the event: I(1) = 0.
3. If two independent events occur (whose joint probability is the product of their individual probabilities), then the information we get from observing the events is the sum of the two informations: I(p1 · p2) = I(p1) + I(p2).
These requirements lead to the standard definition I(p) = log2(1/p) bits.

Information Theory
Information theory is a branch of science that deals with the analysis of a communications system. We will study digital communications, using a file (or network protocol) as the channel. Claude Shannon published a landmark paper in 1948 that was the beginning of the branch of information theory. We are interested in communicating information from a source to a destination. In our case, the messages will be a sequence of binary digits (bits).


FIGURE 5.1 Block Diagram of Information Theory


One detail that makes communicating difficult is noise: noise introduces uncertainty. Suppose I wish to transmit one bit of information; the possibilities are:
tx 0, rx 0 – good;  tx 0, rx 1 – error;  tx 1, rx 0 – error;  tx 1, rx 1 – good.
Two of the cases above have errors – this is where probability fits into the picture. In the case of steganography, the noise may be due to attacks on the hiding algorithm. Claude Shannon introduced the idea of self-information.

Suppose we have an event X, where Xi represents a particular outcome of the event, with self-information I(Xi) = log2(1/Pi).

Consider flipping a fair coin: there are two equiprobable outcomes, say X0 = heads, P0 = 1/2, and X1 = tails, P1 = 1/2. The amount of self-information for any single result is 1 bit. In other words, the number of bits required to communicate the result of the event is 1 bit. When outcomes are equally likely, there is a lot of information in the result. The higher the likelihood of a particular outcome, the less information that outcome conveys. However, if the coin is biased such that it lands with heads up 99% of the time, there is not much information conveyed when we flip the coin and it lands on heads. Now consider flipping a coin with, say, 3 possible outcomes: heads (P = 0.49), tails (P = 0.49), lands on its side (P = 0.02) – (the last likely much higher than in reality).
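
Using I(p) = −log2 p, the outcomes discussed above give the following self-information values (a minimal sketch):

```python
import math

def self_information_bits(p: float) -> float:
    """Self-information of an outcome with probability p: I(p) = -log2(p) bits."""
    return -math.log2(p)

for label, p in [("fair coin, heads", 0.5),
                 ("biased coin, heads (99%)", 0.99),
                 ("3-outcome coin, lands on side", 0.02)]:
    print(f"{label}: I = {self_information_bits(p):.3f} bits")
```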

Information
There is no single exact definition; however: information carries new specific knowledge, which is definitely new for its recipient; information is always carried by some specific carrier in different forms (letters, digits, different specific symbols, sequences of digits, letters and symbols, etc.); and information is meaningful only if the recipient is able to interpret it. According to the Oxford English Dictionary, the earliest historical meaning of the word information in English was the act

https://play.google.com/store/apps/details?id=com.sss.edubuzz360
www.BrainKart.com
Click Here for Communication Theory full study material.
www.edubuzz360.com
of informing, or giving form or shape to the mind. The English word was apparently derived by
adding the common "noun of action" ending "-action the information materialized is a message.


Information is always about something (the size of a parameter, the occurrence of an event, etc.).
Viewed in this manner, information does not have to be accurate; it may be a truth or a lie. Even a
disruptive noise used to inhibit the flow of communication and create misunderstanding would, in
this view, be a form of information. Generally speaking, however, the more information the received
message carries, the more accurate it is.

Information theory asks: How can we measure the amount of information? How can we ensure the
correctness of information? What do we do if information gets corrupted by errors? How much memory
does it require to store information? Basic answers to these questions, which formed the solid
background of modern information theory, were given by the great American mathematician, electrical
engineer and computer scientist Claude E. Shannon in his paper "A Mathematical Theory of
Communication", published in The Bell System Technical Journal in October 1948.

A noiseless binary channel transmits bits without error. What do we do if we have a noisy channel
and want to send information across it reliably?

Information Capacity Theorem (Shannon Limit): the information capacity (or channel capacity) C of a
continuous channel with bandwidth B Hertz, perturbed by additive white Gaussian noise of power
spectral density N0/2 and band-limited to B, is

C = B log2(1 + P/(N0 B)) bits/sec

where P is the average transmitted power, P = Eb Rb (for an ideal system, Rb = C), Eb is the
transmitted energy per bit and Rb is the transmission rate.
ENTROPY:
Entropy is the average amount of information contained in each message received.
Here, message stands for an event, sample or character drawn from a distribution or data stream.
Entropy thus characterizes our uncertainty about our source of information. (Entropy is best
understood as a measure of uncertainty rather than certainty as entropy is larger for more random
sources.) The source is also characterized by the probability distribution of the samples drawn
from it.

Formula for entropy:
We define information strictly in terms of the probabilities of events. Therefore, let us suppose
that we have a set of probabilities (a probability distribution) P = {p1, p2, . . . , pn}. The
entropy of a discrete random variable X with possible values {x1, ..., xn} and probability mass
function P(X) is defined as

H(X) = E[I(X)] = −Σi P(xi) log2 P(xi)

Here E is the expected value operator, and I is the information content of X; I(X) is itself a
random variable. One may also define the conditional entropy of two events X and Y, taking values
xi and yj respectively, as

H(X|Y) = −Σi,j p(xi, yj) log2 p(xi | yj)

where p(xi, yj) is the probability that X = xi and Y = yj.
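A short Python sketch (illustrative, not part of the notes) computing H(X) and H(X|Y) directly from these definitions; the joint distribution used in the example is an assumption:

```python
import math

def entropy(probs):
    """H(X) = -sum p log2 p, the expected self-information in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))     # 1.0 bit: maximum uncertainty for two outcomes
print(entropy([0.99, 0.01]))   # ~0.081 bits: a biased source is far more predictable

def conditional_entropy(joint):
    """H(X|Y) from a joint distribution given as {(x, y): p(x, y)}."""
    # Marginalise the joint distribution over x to get p(y).
    p_y = {}
    for (x, y), p in joint.items():
        p_y[y] = p_y.get(y, 0.0) + p
    return -sum(p * math.log2(p / p_y[y]) for (x, y), p in joint.items() if p > 0)

# Example joint distribution: a fair bit X observed through a channel that flips it 10% of the time.
joint = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}
print(conditional_entropy(joint))   # ~0.469 bits of uncertainty about X remain once Y is seen
```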



Properties:
 If X and Y are two independent experiments, then knowing the value of Y doesn't influence
our knowledge of the value of X (since the two don't influence each other by
independence): H(X|Y) = H(X).

 The entropy of two simultaneous events is no more than the sum of the entropies of each
individual event, and is equal to it if the two events are independent. More specifically, if X
and Y are two random variables on the same probability space, and (X, Y) denotes their
Cartesian product, then H(X, Y) ≤ H(X) + H(Y).

DISCRETE MEMORYLESS CHANNEL:


 Transmission rate over a noisy channel
Repetition code
Transmission rate
 Capacity of a DMC
Capacity of a noisy channel
Examples

All the transition probabilities from xi to yj are gathered in a transition matrix.

The (i, j) entry of the matrix is P(Y = yj | X = xi), which is called the forward transition
probability.

In a DMC the output of the channel depends only on the input of the channel at the
same instant, and not on the inputs before or after.

The input of a DMC is a random variable (RV) X which selects its value from a discrete,
limited set X.

The cardinality of X is the number of points in the constellation used.

In an ideal channel, the output is equal to the input.

In a non-ideal channel, the output can differ from the input with a given
probability.


 Transmission rate:

H(X) is the amount of information per symbol at the input of the channel.
H(Y) is the amount of information per symbol at the output of the channel.
H(X|Y) is the amount of uncertainty remaining about X when Y is known.

The information transmitted is given by: I(X; Y) = H(X) − H(X|Y) bits/channel use.

For an ideal channel X = Y, there is no uncertainty about X when we observe Y, so all the
information is transmitted for each channel use: I(X; Y) = H(X).

If the channel is too noisy, X and Y are independent, so the uncertainty about X remains
the same whether or not Y is known, i.e. no information passes through the channel: I(X; Y) = 0.
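A minimal Python sketch (illustrative; the binary symmetric channel and input distribution are assumptions chosen for the example) evaluating I(X; Y) = H(X) − H(X|Y):

```python
import math

def h(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_information_bsc(q, p):
    """I(X;Y) = H(X) - H(X|Y) for a BSC with P(X=0)=q and crossover probability p."""
    # Joint distribution p(x, y) of the binary input and binary output.
    joint = {
        (0, 0): q * (1 - p),     (0, 1): q * p,
        (1, 0): (1 - q) * p,     (1, 1): (1 - q) * (1 - p),
    }
    p_y = {0: joint[(0, 0)] + joint[(1, 0)], 1: joint[(0, 1)] + joint[(1, 1)]}
    h_x = h([q, 1 - q])
    h_x_given_y = -sum(pxy * math.log2(pxy / p_y[y])
                       for (x, y), pxy in joint.items() if pxy > 0)
    return h_x - h_x_given_y

print(mutual_information_bsc(0.5, 0.0))   # ideal channel: I(X;Y) = H(X) = 1 bit
print(mutual_information_bsc(0.5, 0.5))   # useless channel: I(X;Y) = 0
print(mutual_information_bsc(0.5, 0.1))   # ~0.531 bits per channel use
```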
 Hard and soft decision:

Normally the size of the constellation at the input and at the output is the same, i.e.
|X| = |Y|. In this case the receiver employs hard-decision decoding: the decoder makes a firm
decision about the transmitted symbol.

It is also possible that |X| ≠ |Y|. In this case the receiver employs soft-decision decoding.

Channel models and channel capacity:

FIGURE 5.2 Block Diagram of a Discrete Memoryless Channel (channel encoder → modulator → waveform
channel → demodulator/detector → channel decoder; the inner blocks form a composite discrete-input,
discrete-output channel)


1. The encoding process takes k information bits at a time and maps each k-bit
sequence into a unique n-bit sequence. Such an n-bit sequence is called a code word.
2. The code rate is defined as k/n.
3. If the transmitted symbols are M-ary (for example, M levels), and at the receiver the output of
the detector, which follows the demodulator, has an estimate of the transmitted data symbol with
(a). M levels, the same as that of the transmitted symbols, then we say the detector has made a
hard decision;


(b). Q levels, Q being greater than M, then we say the detector has made a soft decision.

Channel models:
1. Binary symmetric channel (BSC):
Suppose that (a) the channel is an additive noise channel, and (b) the modulator and
demodulator/detector are included as parts of the channel. Furthermore, if the modulator employs
binary waveforms and the detector makes hard decisions, then the channel has a discrete-time binary
input sequence and a discrete-time binary output sequence.

FIGURE 5.3 Transition Diagram of the BSC (0 → 0 and 1 → 1 with probability 1 − p;
crossovers 0 → 1 and 1 → 0 with probability p)


Note that if the channel noise and other interferences cause statistically independent errors in the
transmitted binary sequence with average probability p, the channel is called a BSC. Besides,
since each output bit from the channel depends only upon the corresponding input bit, the channel
is also memoryless.
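A small simulation sketch (illustrative Python, not part of the notes) passing a random bit stream through such a channel and measuring the empirical error rate:

```python
import random

def bsc(bits, p, seed=None):
    """Pass a bit sequence through a binary symmetric channel:
    each bit is flipped independently with probability p."""
    rng = random.Random(seed)
    return [b ^ 1 if rng.random() < p else b for b in bits]

tx = [random.randint(0, 1) for _ in range(10000)]
rx = bsc(tx, p=0.1, seed=42)
errors = sum(t != r for t, r in zip(tx, rx))
print(errors / len(tx))   # empirical error rate, close to the crossover probability 0.1
```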
2. Discrete memoryless channels (DMC):
The channel is the same as above, but with q-ary symbols at the output of the channel encoder and
Q-ary symbols at the output of the detector, where Q ≥ q. If the channel and the modulator are
memoryless, then the channel can be described by a set of qQ conditional probabilities

P(Y = yi | X = xj) = P(yi | xj),  i = 0, 1, ..., Q − 1;  j = 0, 1, ..., q − 1.

Such a channel is called a discrete memoryless channel (DMC).


FIGURE 5.4 Transition Diagram of a Discrete Memoryless Channel (inputs x0, x1, ..., xq−1 mapped to
outputs y0, y1, ..., yQ−1)

If the input to a DMC is a sequence of n symbols u1, u2, ..., un selected from the alphabet X and
the corresponding output is the sequence v1, v2, ..., vn of symbols from the alphabet Y, the joint
conditional probability is

P(Y1 = v1, Y2 = v2, ..., Yn = vn | X1 = u1, X2 = u2, ..., Xn = un) = Π(k=1 to n) P(Yk = vk | Xk = uk).

The set of conditional probabilities P(yi | xj) can be arranged as the probability transition
matrix for the channel.


3. Discrete-input, continuous-output channels:
Suppose the output of the channel encoder has q-ary symbols as above, but the output of the
detector is unquantized (Q → ∞). The channel is then described by the conditional probability
density functions

p(y | X = xk),  k = 0, 1, 2, ..., q − 1.

The AWGN channel is the most important channel of this type:

Y = X + G

or, for a sequence of n uses of the channel,

Yi = Xi + Gi,  i = 1, 2, ..., n.

If, further, the channel is memoryless, then the joint conditional pdf of the detector's output is

p(y1, y2, ..., yn | X1 = u1, X2 = u2, ..., Xn = un) = Π(i=1 to n) p(yi | Xi = ui).

4. Waveform channels:

FIGURE 5.5 Block Diagram of Waveform Channels (channel encoder → modulator → waveform channel →
demodulator/detector → channel decoder)

If such a channel has bandwidth W with ideal frequency response C(f) = 1, and if the
bandwidth-limited input signal to the channel is x(t) and the output signal y(t) of the channel is
corrupted by AWGN, then

y(t) = x(t) + n(t).

The channel can be described by a complete set of orthonormal functions:

y(t) = Σi yi fi(t),  x(t) = Σi xi fi(t),  n(t) = Σi ni fi(t)

where

yi = ∫0..T y(t) fi(t) dt = ∫0..T [x(t) + n(t)] fi(t) dt = xi + ni.

Since the {ni} are uncorrelated and Gaussian, they are statistically independent. So

p(y1, y2, ..., yN | x1, x2, ..., xN) = Π(i=1 to N) p(yi | xi).

Channel Capacity:
Channel model: DMC
Input alphabet: X = {x0, x1, x2, ..., xq−1}
Output alphabet: Y = {y0, y1, y2, ..., yQ−1}

Suppose xj is transmitted and yi is received. The mutual information (MI) provided about the event
{X = xj} by the occurrence of the event {Y = yi} is

log [P(yi | xj) / P(yi)],  where  P(yi) = P(Y = yi) = Σ(k=0 to q−1) P(xk) P(yi | xk).

Hence, the average mutual information (AMI) provided by the output Y about the input X is

I(X; Y) = Σ(j=0 to q−1) Σ(i=0 to Q−1) P(xj) P(yi | xj) log [P(yi | xj) / P(yi)].

To maximize the AMI, we examine the terms of the above equation:

(1) P(yi) represents the probability of the ith output symbol of the detector;
(2) P(yi | xj) represents the channel characteristic, about which we can do nothing;
(3) P(xj) represents the probabilities of the input symbols, which we may control.

Therefore, the channel capacity is defined as

C = max over {P(xj)} of  Σ(j=0 to q−1) Σ(i=0 to Q−1) P(xj) P(yi | xj) log [P(yi | xj) / P(yi)]

subject to the two constraints P(xj) ≥ 0 and Σ(j=0 to q−1) P(xj) = 1.

 Unit of C:
 bits/channel use when log = log2; and
 nats/input symbol when log = loge = ln.
 If a symbol enters the channel every τs seconds (seconds/channel use), the channel capacity
is C/τs (bits/s or nats/s).
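As a numerical illustration (a Python sketch assumed for this write-up, not part of the notes), the maximization over the input distribution can be done by a simple grid search for the binary symmetric channel and compared with the known closed form C = 1 − H2(p):

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_mutual_information(q, p):
    """I(X;Y) for a BSC with P(X=0)=q and crossover probability p."""
    r = q * (1 - p) + (1 - q) * p          # P(Y = 0)
    return h2(r) - h2(p)                   # I(X;Y) = H(Y) - H(Y|X)

def bsc_capacity(p, steps=1000):
    """Capacity as the maximum of I(X;Y) over the input distribution P(x)."""
    return max(bsc_mutual_information(k / steps, p) for k in range(steps + 1))

p = 0.1
print(bsc_capacity(p))      # ~0.531 bits/channel use, attained for equiprobable inputs
print(1 - h2(p))            # closed form C = 1 - H2(p) agrees
```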

CHANNEL CODING THEOREM:


The noisy-channel coding theorem (sometimes called Shannon's theorem) establishes that for any given
degree of noise contamination of a communication channel, it is possible to communicate discrete
data (digital information) nearly error-free up to a computable maximum rate through the channel.
This result was presented by Claude Shannon in 1948 and was based in part on earlier work and
ideas of Harry Nyquist and Hartley. The Shannon limit or Shannon capacity of a communications
channel is the theoretical maximum information transfer rate of the channel, for a particular noise
level.
The theorem describes the maximum possible efficiency of error-correcting methods versus
levels of noise interference and data corruption. Shannon's theorem has wide-ranging

https://play.google.com/store/apps/details?id=com.sss.edubuzz360
www.BrainKart.com
Click Here for Communication Theory full study material.
www.edubuzz360.com

applications in both communications and data storage. This theorem is of foundational importance
to the modern field of information theory. Shannon only gave an outline of the proof. The first
rigorous proof for the discrete case is due to Amiel Feinstein in 1954.
The Shannon theorem states that, given a noisy channel with channel capacity C and information
transmitted at a rate R, if R < C there exist codes that allow the probability of error at the
receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit
information nearly without error at any rate below the limiting rate C.
The converse is also important. If R > C, an arbitrarily small probability of error is not
achievable. All codes will have a probability of error greater than a certain positive minimal level,
and this level increases as the rate increases. So, information cannot be guaranteed to be
transmitted reliably across a channel at rates beyond the channel capacity. The theorem does not
address the rare situation in which rate and capacity are equal.
The channel capacity C can be calculated from the physical properties of a channel; for a band-
limited channel with Gaussian noise this is done using the Shannon–Hartley theorem.
1. For every discrete memoryless channel, the channel capacity has the following property: for any
ε > 0 and R < C, for large enough N, there exists a code of length N and rate ≥ R and a decoding
algorithm, such that the maximal probability of block error is ≤ ε.
2. If a probability of bit error pb is acceptable, rates up to R(pb) are achievable, where

R(pb) = C / (1 − H2(pb))

and H2(pb) = −pb log2(pb) − (1 − pb) log2(1 − pb) is the binary entropy function.
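A small sketch (illustrative Python; the capacity value and tolerated error rate are arbitrary numbers chosen for the example) evaluating this bound:

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def max_rate(capacity, pb):
    """Largest achievable rate when an average bit-error probability pb is tolerated:
    R(pb) = C / (1 - H2(pb))."""
    return capacity / (1 - h2(pb))

print(max_rate(0.5, 0.0))    # 0.5: with no errors tolerated we are back to R < C
print(max_rate(0.5, 0.01))   # ~0.544: tolerating some errors buys a slightly higher rate
```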

SOURCE CODING:
A code is defined as an n-tuple of q elements, where q is any alphabet.
Ex. 1001: n = 4, q = {1, 0}
Ex. 2389047298738904: n = 16, q = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
Ex. (a, b, c, d, e): n = 5, q = {a, b, c, d, e, ..., y, z}
The most common code is the one with q = {1, 0}, known as a binary code.

The purpose of coding: a message can become distorted through a wide range of unpredictable errors,
such as
• Humans
• Equipment failure
• Lightning interference
• Scratches in a magnetic tape

Error-correcting code:
The idea is to add redundancy to a message so the original message can be recovered if it has been
garbled, e.g. message = 10, code = 1010101010 (a repetition code; see the sketch below).
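A minimal sketch (illustrative Python, not from the notes) of this whole-message repetition code with majority-vote decoding:

```python
def rep_encode(msg, n=5):
    """Repeat the whole message n times, e.g. '10' -> '1010101010' for n = 5."""
    return msg * n

def rep_decode(code, msg_len, n=5):
    """Majority vote, position by position, across the n received copies."""
    copies = [code[i * msg_len:(i + 1) * msg_len] for i in range(n)]
    return "".join(
        "1" if sum(c[j] == "1" for c in copies) > n // 2 else "0"
        for j in range(msg_len)
    )

code = rep_encode("10")
print(code)                          # 1010101010
# Even with two of the ten received bits flipped by noise,
# majority voting over the five copies recovers the original message.
print(rep_decode("1010001011", 2))   # 10
```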



Send a message:

FIGURE 5.6 Block Diagram of Source Coding



Source Coding (lossy):
Lossy source coding may consider the semantics of the data and depends on the characteristics of
the data, e.g. DCT, DPCM, ADPCM, colour-model transforms.

A code is distinct if each code word can be distinguished from every other (the mapping is
one-to-one), and uniquely decodable if every code word is identifiable when immersed in a sequence
of code words. For example, with the previous table, the message 11 could be decoded as either
ddddd or bbbbbb, so that code is not uniquely decodable.

Measure of information: consider symbols si and the probability of occurrence of each symbol p(si).

Example: Alphabet = {A, B}, p(A) = 0.4, p(B) = 0.6. The entropy is
H = −0.4 log2 0.4 − 0.6 log2 0.6 ≈ 0.97 bits.
Maximum uncertainty (the largest H) occurs when all probabilities are equal. Redundancy is the
difference between the average codeword length L and the average information content H; if H is
constant, one can simply compare L relative to the optimal value.

Shannon-Fano Algorithm:
• Arrange the character set in order of decreasing probability
• While a probability class contains more than one symbol:

Divide the probability class in two so that the probabilities in the two halves are as nearly
as possible equal.

Assign a '1' to the first probability class, and a '0' to the second
TABLE 5.1
Character    Probability    Code
X6           0.25           11
X3           0.20           10
X4           0.15           011
X5           0.15           010
X1           0.10           001
X7           0.10           0001
X2           0.05           0000



Huffman Encoding:
Huffman coding is a statistical encoding. To determine the Huffman code it is useful to construct a
binary tree: the leaves are the characters to be encoded, and the nodes carry the occurrence
probabilities of the characters belonging to their subtree.

Example: how does a Huffman code look for symbols with statistical occurrence probabilities
P(A) = 8/20, P(B) = 3/20, P(C) = 7/20, P(D) = 2/20?
Step 1: sort all symbols according to their probabilities (left to right), from smallest to largest;
these are the leaves of the Huffman tree.
Step 2: build a binary tree from left to right, always connecting the two smallest nodes together
first (e.g. P(CE) and P(DA) both had probabilities smaller than P(B), hence those two were connected
first).
Step 3: label left branches of the tree with 0 and right branches of the tree with 1.
Step 4: create the Huffman code: Symbol A = 011, Symbol B = 1, Symbol C = 000, Symbol D = 010,
Symbol E = 001.

SHANNON-FANO CODING:
This is a basic information-theoretic algorithm. A simple example will be used to illustrate the
algorithm:

Symbol    A    B    C    D    E
Count     15   7    6    6    5

Encoding for the Shannon-Fano Algorithm:

 A top-down approach


1. Sort symbols according to their frequencies/probabilities, e.g., ABCDE.

2. Recursively divide into two parts, each with approx. same number of counts.

Procedure for the Shannon-Fano Algorithm:
A Shannon–Fano tree is built according to a specification designed to define an effective code
table. The actual algorithm is simple:
1. For a given list of symbols, develop a corresponding list of probabilities or frequency
counts so that each symbol‘s relative frequency of occurrence is known.
2. Sort the lists of symbols according to frequency, with the most frequently occurring
symbols at the left and the least common at the right.
3. Divide the list into two parts, with the total frequency counts of the left part being as close
to the total of the right as possible.
4. The left part of the list is assigned the binary digit 0, and the right part is assigned the digit
1. This means that the codes for the symbols in the first part will all start with 0, and the
codes in the second part will all start with 1.
5. Recursively apply the steps 3 and 4 to each of the two halves, subdividing groups and
adding bits to the codes until each symbol has become a corresponding code leaf on the
tree.

Tree diagram:

FIGURE 5.7 Tree Diagram of Shannon-Fano Coding



Table for Shannon-Fano Coding:
Symbol    Count    log(1/p)    Code    Subtotal (# of bits)
A         15       1.38        00      30
B         7        2.48        01      14
C         6        2.70        10      12
D         6        2.70        110     18
E         5        2.96        111     15

TOTAL NUMBER OF BITS: 89
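A compact Python sketch (illustrative, not part of the notes) of the recursive splitting procedure; for the counts above it reproduces the codes and the 89-bit total in the table:

```python
def shannon_fano(symbols):
    """Recursive Shannon-Fano coding.
    symbols: list of (symbol, count) pairs; returns {symbol: codeword}."""
    def split(items, prefix, codes):
        if len(items) == 1:
            codes[items[0][0]] = prefix or "0"
            return
        total = sum(c for _, c in items)
        # Find the split point that makes the two halves as balanced as possible.
        running, best_i, best_diff = 0, 1, float("inf")
        for i in range(1, len(items)):
            running += items[i - 1][1]
            diff = abs(2 * running - total)
            if diff < best_diff:
                best_i, best_diff = i, diff
        split(items[:best_i], prefix + "0", codes)   # left half gets a 0
        split(items[best_i:], prefix + "1", codes)   # right half gets a 1
    items = sorted(symbols, key=lambda sc: sc[1], reverse=True)
    codes = {}
    split(items, "", codes)
    return codes

counts = [("A", 15), ("B", 7), ("C", 6), ("D", 6), ("E", 5)]
codes = shannon_fano(counts)
print(codes)                                        # {'A': '00', 'B': '01', 'C': '10', 'D': '110', 'E': '111'}
print(sum(c * len(codes[s]) for s, c in counts))    # 89 bits, matching the table above
```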

HUFFMAN CODING:
The Shannon–Fano algorithm doesn't always generate an optimal code. In 1952, David A.
Huffman gave a different algorithm that always produces an optimal tree for any given
probabilities. While the Shannon–Fano tree is created from the root to the leaves, the Huffman
algorithm works from the leaves to the root, in the opposite direction.

Procedure for the Huffman Algorithm:

1. Create a leaf node for each symbol, labelled with its frequency of occurrence, and add it to a
queue.
2. While there is more than one node in the queue:
 Remove the two nodes of lowest probability or frequency from the queue
 Prepend 0 and 1 respectively to any code already assigned to these nodes

 Create a new internal node with these two nodes as children and with probability
equal to the sum of the two nodes' probabilities.
 Add the new node to the queue.
3. The remaining node is the root node and the tree is complete.
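A minimal Python sketch of this procedure (illustrative, not part of the notes); it is applied to the counts from the Shannon-Fano example above, which lets the resulting total be compared with the 89 bits obtained there:

```python
import heapq
from itertools import count

def huffman(freqs):
    """Build a Huffman code from {symbol: count or probability}; returns {symbol: codeword}."""
    tie = count()                       # tie-breaker so the heap never has to compare dicts
    heap = [(f, next(tie), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)     # the two least frequent nodes
        f2, _, c2 = heapq.heappop(heap)
        # Prepend '0' to every code in one subtree and '1' to every code in the other.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

freqs = {"A": 15, "B": 7, "C": 6, "D": 6, "E": 5}
codes = huffman(freqs)
print(codes)
print(sum(freqs[s] * len(codes[s]) for s in freqs))   # 87 bits, versus 89 for Shannon-Fano
```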



Tree diagram:

FIGURE 5.8 Tree Diagram of Huffman coding

SHANNON–HARTLEY THEOREM:

In information theory, the Shannon–Hartley theorem tells the maximum rate at which information
can be transmitted over a communications channel of a specified bandwidth in the presence
of noise. It is an application of the noisy channel coding theorem to the archetypal case of
a continuous-time analog communications channel subject to Gaussian noise. The theorem

establishes Shannon's channel capacity for such a communication link, a bound on the maximum
amount of error-free digital data (that is, information) that can be transmitted with a specified
bandwidth in the presence of the noise interference, assuming that the signal power is bounded,
and that the Gaussian noise process is characterized by a known power or power spectral density.
The law is named after Claude Shannon and Ralph Hartley.


FIGURE 5.9 Characteristics of Channel Capacity and Power Efficiency


Considering all possible multi-level and multi-phase encoding techniques, the Shannon–Hartley
theorem states the channel capacity C, meaning the theoretical tightest upper bound on the
information rate (excluding error-correcting codes) of clean (or arbitrarily low bit-error-rate)
data that can be sent with a given average signal power S through an analog communication channel
subject to additive white Gaussian noise of power N, is:

C = B log2(1 + S/N)

where
C is the channel capacity in bits per second;


B is the bandwidth of the channel in hertz (passband bandwidth in case of a modulated
signal);
S is the average received signal power over the bandwidth (in case of a modulated signal,
often denoted C, i.e. modulated carrier), measured in watts (or volts squared);
N is the average noise or interference power over the bandwidth, measured in watts (or
volts squared); and

S/N is the signal-to-noise ratio (SNR) or the carrier-to-noise ratio (CNR) of the
communication signal to the Gaussian noise interference expressed as a linear power ratio
(not as logarithmic decibels).
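A short numerical sketch (illustrative Python; the bandwidth and SNR are example values, not from the notes) of the Shannon–Hartley formula:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """C = B log2(1 + S/N) in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A telephone-grade channel: about 3 kHz of bandwidth and 30 dB SNR.
snr_db = 30
snr = 10 ** (snr_db / 10)          # convert decibels to a linear power ratio
print(shannon_capacity(3000, snr))  # roughly 29.9 kbit/s
```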


APPLICATIONS & USES:


1. Huffman coding is optimal among symbol-by-symbol prefix codes, though not always optimal among
all compression methods.
2. Modelling of discrete memoryless channels.
3. Achieving coding efficiency close to 100% using these source codes.
