
Department of Electronics and Communication Engineering

II B. TECH I SEMESTER 2021-22


A6404 – RANDOM SIGNAL PRINCIPLES

Hours Per Week   Hours Per Semester   Credits   Assessment Marks
L  T  P          L   T   P            C         CIE  SEE  Total
3  1  0          42  14  0            3         30   70   100

1. Course Description
Course Overview
This is a fundamental course in signal processing and communication engineering. It
provides a foundation in the theory and applications of probability and stochastic
processes, and an understanding of the mathematical techniques relating to random
processes in the areas of signal processing, detection and estimation theory, and
communications. The course also focuses on the application of statistical techniques to
the study of random signals and noise. It forms the basis for the study of advanced
subjects such as Analog and Digital Communications, Radar Communications, Cellular and
Mobile Communications, Digital Image Processing, Speech Processing, and Machine Learning.
Course Pre/corequisites
The course has no specific prerequisites or corequisites.

2. Course Outcomes (COs)


After the completion of the course, the learner will be able to:
CO#  CO Statement                                                                     BL#
CO1  Summarize the knowledge of single and multiple random variables                  L2
CO2  Distinguish temporal and spectral characteristics of stochastic processes        L3
CO3  Apply the concepts of correlation and covariance functions to solve
     engineering problems of signal processing systems                                L3
CO4  Analyze the random signal response of linear systems with the help of
     temporal and spectral characteristics                                            L4
CO5  Develop the knowledge of random noise and model practical noisy sources          L6

5. Course Syllabus
UNIT I:
Concepts of Random variables: single and multiple random variables- definition and
classification, joint distribution and density functions, properties, jointly Gaussian random
variables, central limit theorem, joint moments, joint characteristic functions, sum of random
variables, Transformations of a random variable.
UNIT II:
Random Process - Temporal Characteristics: Random process concept, distribution and
density functions, statistical independence, stationary, time averages and Ergodicity,
correlation functions and properties, covariance functions.
UNIT III:
Random Process-Spectral Characteristics: Power density spectrum and its properties,
relationship between PSD and autocorrelation function, cross-power density spectrum and its
properties, relationship between cross-PSD and cross-correlation function.
UNIT IV:
Random signal response of linear systems: Linear system fundamentals, system response-
convolution, mean and mean squared value of system response, autocorrelation function of
response, cross correlation functions of input and output, spectral characteristics of system
response.
UNIT V:
Random Noise Processes: Noise definitions and classification, white and colored noise, Noise
bandwidth, Band pass, band-limited and Narrow band processes, properties of band-limited
processes, modelling of Noise sources, Incremental modelling of Noisy Networks, modelling
of practical noisy networks.
6. Books and Materials
Text Book(s)

1. Peyton Z. Peebles (2009), Probability, Random Variables and Random Signal Principles,
4th Edition, Tata McGraw Hill, New Delhi, India.
Reference Book(s)

1. Athanasios Papoulis, S. Unnikrishna Pillai (2002), Probability, Random Variables and
Stochastic Processes, 4th Edition, Tata McGraw Hill, New Delhi, India.
2. Steven Kay (2006), Intuitive probability and random processes using MATLAB, Springer
US.

RANDOM SIGNAL PRINCIPLES
QUESTION BANK

UNIT-1
Concept of random variables
1.1 Define random variable. What are the different types of random variables
explain through examples?

Ans:- RANDOM VARIABLE:


 A random variable is defined as a rule that assigns a numerical value to each
possible outcome of a chance experiment.
 Random variable is also called as a mapping function since it is a rule that
assigns a real value to each of the outcome, i.e.
𝑋: 𝑆 → 𝑅
 Random variables are always denoted by capital letters such as 𝑊, 𝑋, or
𝑌, and their real values are denoted by small letters such as 𝑤, 𝑥, or 𝑦.

CLASSIFICATION OF RANDOM VARIABLES


 A continuous random variable is one that takes only continuous values.
Ex: 𝑋 = {1 ≤ 𝑥 ≤ 5}
 A discrete random variable is one that takes only discrete values.
Ex: 𝑋 = {1, 2, 3, 4, 5}
 A mixed random variable is one for which some of its values are discrete
and some are continuous.
Ex: 𝑋 = {1, 2, 3 ≤ 𝑥 ≤ 4, 5}

1.2 Discuss on vector random Variables.


(OR)
Explain the concept of Multiple random variables.

Ans:- VECTOR RANDOM VARIABLES/MULTIPLE RANDOM VARIABLES


 Suppose two random variables X and Y are defined on a sample space S, where
specific values of X and Y are denoted by x and y respectively.
 Then any ordered pair of numbers (x, y) may be considered a random point in
the xy-plane.
 The point may be taken as a vector random variable or random vector.
 The sample space is called joint sample space 𝑆𝐽 and is illustrated in Figure
below.
 The plane of all points (𝑥, 𝑦) in the ranges of 𝑋 and 𝑌 may be considered a new
sample space.
 It is a vector space where the components of any vector are the values of the
random variables 𝑋 and 𝑌.


 The new space is sometimes called the range sample space or two-dimensional
product space.

[Figure 1: Illustration of the joint sample space. Events A = {X ≤ x} and B = {Y ≤ y}
defined on S map to the joint event A ∩ B = {X ≤ x, Y ≤ y}, shown crosshatched in the
joint sample space S_J.]


 As shown in Figure above, let us define an event A by 𝐴 = {𝑋 ≤ 𝑥}, and another
event B by 𝐵 = {𝑌 ≤ 𝑦}.
 Here, events A and B refer to the sample space S, while events {𝑋 ≤ 𝑥} and {𝑌 ≤
𝑦} refer to the joint sample space 𝑺𝑱 .
 Here, event 𝐴 corresponds to all points in 𝑆𝐽 for which the 𝑋 coordinate values
are not greater than 𝑥, and event 𝐵 corresponds to the 𝑌 coordinate values in 𝑆𝐽
not exceeding 𝑦.
 Similarly, event AB defined on S corresponds to the joint event {𝑋 ≤ 𝑥, 𝑌 ≤ 𝑦
} defined on SJ. This joint event is shown as crosshatched in the Figure.
 The probabilities of the two events 𝐴 = {𝑋 ≤ 𝑥} and 𝐵 = {𝑌 ≤ 𝑦} are called
probability distributions denoted by 𝐹𝑋 (𝑥) and 𝐹𝑌 (𝑦) respectively, and are given
by
F_X(x) = P{X ≤ x} = Σ_{n=1}^{N} P(x_n) u(x − x_n)

F_Y(y) = P{Y ≤ y} = Σ_{m=1}^{M} P(y_m) u(y − y_m)

 Now, the probability of the joint event {X ≤ x, Y ≤ y}, which is a function of
x and y, is called the joint probability distribution function and is denoted
F_{X,Y}(x, y). Thus, for discrete random variables,

F_{X,Y}(x, y) = P{X ≤ x, Y ≤ y} = Σ_{n=1}^{N} Σ_{m=1}^{M} P(x_n, y_m) u(x − x_n) u(y − y_m)

 Similarly, for N random variables, the joint probability distribution
function is given by

F_{X1,...,XN}(x_1, x_2, ..., x_N) = P{X_1 ≤ x_1, X_2 ≤ x_2, ..., X_N ≤ x_N}


1.3 Define probability distribution and density functions and discuss their
properties.
(OR)
Differentiate Probability Distribution function and Probability Density function

Ans:- (1) PROBABILITY DISTRIBUTION FUNCTION


 The probability 𝑃{𝑋 ≤ 𝑥} is the probability of the event {𝑋 ≤ 𝑥}.
i.e., it is a number that depends on 𝑥; that is, it is a function of 𝑥.
 We call this function, the cumulative probability distribution function (CDF) of
the random variable 𝑋.
 It is denoted by 𝐹𝑋 (𝑥). Thus,
𝐹𝑋 (𝑥) = 𝑃{𝑋 ≤ 𝑥}
 𝐹𝑋 (𝑥) is often called distribution of 𝑋 and simply denoted as CDF.
 The argument 𝑥 is any real number ranging from −∞ to ∞.
Properties of cumulative distribution function (CDF):
1. 𝐹𝑋 (−∞) = 0
2. 𝐹𝑋 (∞) = 1
3. 0 ≤ 𝐹𝑋 (𝑥) ≤ 1
4. F_X(x_1) ≤ F_X(x_2) if x_1 < x_2, i.e. F_X(x) is non-decreasing.
5. P{x_1 < X ≤ x_2} = F_X(x_2) − F_X(x_1)
6. F_X(x⁺) = F_X(x), i.e. F_X(x) is continuous from the right.

(2) PROBABILITY DENSITY FUNCTION:


 The probability density function (PDF), denoted by 𝑓𝑋 (𝑥), is defined as the
derivative of the distribution function. i.e.,
f_X(x) = d F_X(x) / dx
 𝑓𝑋 (𝑥) is often called just the density function of the random variable X.
 For a discrete random variable 𝑋, PDF is given by
f_X(x) = Σ_{i=1}^{N} P{X = x_i} δ(x − x_i) = Σ_{i=1}^{N} P(x_i) δ(x − x_i)

where δ(x) is the unit impulse function, defined by

δ(x) = { 1, x = 0;  0, else }

and N may be infinite for some random variables.
Properties of probability density function (PDF):
1. f_X(x) ≥ 0 for all x, i.e. the density function is non-negative.
2. ∫_{−∞}^{∞} f_X(x) dx = 1, i.e. the density function has unit area.
3. F_X(x) = ∫_{−∞}^{x} f_X(ξ) dξ
4. P{x_1 ≤ X ≤ x_2} = ∫_{x_1}^{x_2} f_X(x) dx
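As a quick numerical illustration (not part of the original question bank), the four PDF properties above can be checked for a Gaussian density with NumPy; the mean, standard deviation, and grid limits below are arbitrary choices.

```python
import numpy as np

# Illustrative check of the four PDF properties for a Gaussian density.
a, sigma = 1.0, 2.0                        # arbitrary mean and standard deviation
x = np.linspace(-20, 20, 200001)           # fine grid standing in for (-inf, inf)
f = np.exp(-(x - a)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

print(f.min() >= 0)                        # property 1: f_X(x) >= 0 everywhere
print(np.trapz(f, x))                      # property 2: unit area (~1.0)

F = np.cumsum(f) * (x[1] - x[0])           # property 3: F_X(x) as a running integral of f
print(F[-1])                               # F_X(+inf) -> 1

x1, x2 = 0.0, 3.0                          # property 4: P{x1 <= X <= x2} as area under f
mask = (x >= x1) & (x <= x2)
print(np.trapz(f[mask], x[mask]))
```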


1.4 Define joint distribution and joint density functions and discuss their properties.
(OR)
Define the following terms and list their properties:
(i) Joint cumulative distribution function
(ii) Joint probability density function

Ans:- (i) JOINT PROBABILITY DISTRIBUTION FUNCTION (JOINT CDF)


 The probabilities of the two events 𝐴 = {𝑋 ≤ 𝑥} and 𝐵 = {𝑌 ≤ 𝑦} are called
probability distributions denoted by 𝐹𝑋 (𝑥) and 𝐹𝑌 (𝑦) respectively, and are given
by
F_X(x) = P{X ≤ x} = Σ_{n=1}^{N} P(x_n) u(x − x_n)

F_Y(y) = P{Y ≤ y} = Σ_{m=1}^{M} P(y_m) u(y − y_m)

 Now, the probability of the joint event {X ≤ x, Y ≤ y}, which is a function of
x and y, is called the joint probability distribution function and is denoted
F_{X,Y}(x, y). Thus,

F_{X,Y}(x, y) = P{X ≤ x, Y ≤ y} = Σ_{n=1}^{N} Σ_{m=1}^{M} P(x_n, y_m) u(x − x_n) u(y − y_m)

 Similarly, for N random variables, the joint probability distribution
function is given by

F_{X1,...,XN}(x_1, x_2, ..., x_N) = P{X_1 ≤ x_1, X_2 ≤ x_2, ..., X_N ≤ x_N}

 Properties of joint probability distribution function


1. F_{X,Y}(−∞, −∞) = 0, F_{X,Y}(−∞, y) = 0, F_{X,Y}(x, −∞) = 0
2. F_{X,Y}(∞, ∞) = 1
3. 0 ≤ F_{X,Y}(x, y) ≤ 1
4. F_{X,Y}(x, y) is a non-decreasing function of both x and y.
5. F_{X,Y}(x_2, y_2) + F_{X,Y}(x_1, y_1) − F_{X,Y}(x_1, y_2) − F_{X,Y}(x_2, y_1)
   = P{x_1 < X ≤ x_2, y_1 < Y ≤ y_2} ≥ 0
6. F_{X,Y}(x, ∞) = F_X(x) and F_{X,Y}(∞, y) = F_Y(y). Here F_X(x) and F_Y(y) are
called marginal distribution functions.

(ii) JOINT PROBABILITY DENSITY FUNCTION (JOINT PDF)


 The joint density of two random variables 𝑋 and 𝑌 is the second (partial)
derivative of their joint distribution function, denoted by 𝑓𝑋,𝑌 (𝑥, 𝑦).
Mathematically it is given by,
f_{X,Y}(x, y) = ∂² F_{X,Y}(x, y) / (∂x ∂y)

 For any two discrete random variables X and Y, it is defined as

f_{X,Y}(x, y) = Σ_{n=1}^{N} Σ_{m=1}^{M} P(x_n, y_m) δ(x − x_n) δ(y − y_m)


 Similarly, for N random variables, the joint density function is given by

f_{X1,...,XN}(x_1, ..., x_N) = ∂^N F_{X1,...,XN}(x_1, ..., x_N) / (∂x_1 ∂x_2 ... ∂x_N)

 Properties of joint density function:


1. f_{X,Y}(x, y) ≥ 0
2. ∫_{−∞}^{∞} ∫_{−∞}^{∞} f_{X,Y}(x, y) dx dy = 1
3. F_{X,Y}(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} f_{X,Y}(α_1, α_2) dα_2 dα_1
4. F_X(x) = ∫_{−∞}^{x} ∫_{−∞}^{∞} f_{X,Y}(α_1, α_2) dα_2 dα_1 and
   F_Y(y) = ∫_{−∞}^{∞} ∫_{−∞}^{y} f_{X,Y}(α_1, α_2) dα_2 dα_1
5. P{x_1 ≤ X ≤ x_2, y_1 ≤ Y ≤ y_2} = ∫_{x_1}^{x_2} ∫_{y_1}^{y_2} f_{X,Y}(x, y) dy dx
6. f_X(x) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dy and f_Y(y) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dx
   (the marginal density functions)

1.5 Define Gaussian Random Variable. Derive the equation for its normalized case.
(OR)
Explain Gaussian random variable with necessary equations and graph.
(OR)
Write a note on PDF and CDF of Gaussian Random Variable.

Ans:- GAUSSIAN RANDOM VARIABLE


 A random variable X is said to be Gaussian if it has density and distribution
functions of the form

f_X(x) = (1/√(2πσ_X²)) e^{−(x − a_X)²/(2σ_X²)}

F_X(x) = (1/√(2πσ_X²)) ∫_{−∞}^{x} e^{−(t − a_X)²/(2σ_X²)} dt

where σ_X > 0 and −∞ < a_X < ∞ are real constants.

 The sketch of these functions is shown in Figure below.

[Fig. (a) Density and (b) distribution functions of a Gaussian random variable: the
density is a bell curve with maximum 1/√(2πσ_X²) at x = a_X; the distribution rises
from 0 to 1 and passes through 0.5 at x = a_X.]


SPECIAL CASE OF GAUSSIAN DISTRIBUTION FUNCTION:


 Let us consider the normalized case where a_X = 0 and σ_X = 1. Then

F_X(x) = (1/√(2π)) ∫_{−∞}^{x} e^{−t²/2} dt = F(x)    (i)

For negative values of x, we get

F(−x) = 1 − F(x)    (ii)

 Now, let u = (t − a_X)/σ_X; substituting in the general Gaussian distribution
function, we get

F_X(x) = (1/√(2π)) ∫_{−∞}^{(x − a_X)/σ_X} e^{−u²/2} du    (iii)

On comparing (iii) with (i), we can write

F_X(x) = F((x − a_X)/σ_X)    (iv)
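A minimal sketch of relation (iv), assuming the standard identity F(x) = ½(1 + erf(x/√2)) for the normalized Gaussian distribution; the parameter values are arbitrary examples.

```python
import math

def F(x):
    # normalized Gaussian CDF: F(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

a_X, sigma_X = 3.0, 2.0         # arbitrary example constants

# relation (iv): F_X(x) = F((x - a_X) / sigma_X)
x = 5.0
print(F((x - a_X) / sigma_X))    # ~0.8413, one standard deviation above the mean

# relation (ii): F(-x) = 1 - F(x)
print(F(-1.0), 1.0 - F(1.0))     # both ~0.1587
```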

1.6 Write the equation of Joint Gaussian density function and prove that the joint
gaussian density function is equal to the product of marginal densities.

Ans:- JOINT GAUSSIAN DENSITY FUNCTION


 Two random variables X and Y are said to be jointly Gaussian if their joint
density function is of the form

f_{X,Y}(x, y) = (1/(2πσ_X σ_Y √(1 − ρ²)))
  · exp{ −1/(2(1 − ρ²)) [ (x − X̄)²/σ_X² − 2ρ(x − X̄)(y − Ȳ)/(σ_X σ_Y) + (y − Ȳ)²/σ_Y² ] }

where

X̄ = E[X], Ȳ = E[Y],
σ_X² = E[(X − X̄)²], σ_Y² = E[(Y − Ȳ)²], and
ρ = E[(X − X̄)(Y − Ȳ)] / (σ_X σ_Y)

 Appearance of the joint Gaussian density function is shown in figure below

 Its maximum value is located at the point (X̄, Ȳ). The maximum value is obtained
from

f_{X,Y}(x, y) ≤ f_{X,Y}(X̄, Ȳ) = 1/(2πσ_X σ_Y √(1 − ρ²))

 If ρ = 0, the random variables X and Y are uncorrelated. Thus

f_{X,Y}(x, y) = (1/(2πσ_X σ_Y √(1 − 0))) · exp{ −1/(2(1 − 0)) [ (x − X̄)²/σ_X² + (y − Ȳ)²/σ_Y² ] }
  = (1/(2πσ_X σ_Y)) · exp{ −(x − X̄)²/(2σ_X²) } · exp{ −(y − Ȳ)²/(2σ_Y²) }
  = (1/√(2πσ_X²)) exp{ −(x − X̄)²/(2σ_X²) } · (1/√(2πσ_Y²)) exp{ −(y − Ȳ)²/(2σ_Y²) }

∴ f_{X,Y}(x, y) = f_X(x) · f_Y(y)

where f_X(x) and f_Y(y) are the marginal Gaussian density functions of X and Y, given by

f_X(x) = (1/√(2πσ_X²)) exp{ −(x − X̄)²/(2σ_X²) }

f_Y(y) = (1/√(2πσ_Y²)) exp{ −(y − Ȳ)²/(2σ_Y²) }
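The factorization f_{X,Y} = f_X · f_Y for ρ = 0 can be verified numerically on a grid; this is an illustrative sketch using NumPy with arbitrary means and standard deviations.

```python
import numpy as np

# arbitrary means and standard deviations; rho = 0 (uncorrelated case)
mx, my, sx, sy = 0.0, 1.0, 1.0, 2.0

x = np.linspace(-5, 5, 201)
y = np.linspace(-7, 9, 201)
X, Y = np.meshgrid(x, y)

# joint Gaussian density with rho = 0
f_joint = np.exp(-((X - mx)**2 / sx**2 + (Y - my)**2 / sy**2) / 2) / (2 * np.pi * sx * sy)

# product of the two marginal Gaussian densities
f_x = np.exp(-(X - mx)**2 / (2 * sx**2)) / np.sqrt(2 * np.pi * sx**2)
f_y = np.exp(-(Y - my)**2 / (2 * sy**2)) / np.sqrt(2 * np.pi * sy**2)

print(np.max(np.abs(f_joint - f_x * f_y)))   # ~0, up to floating-point round-off
```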

1.7 Write note on moments of a random variable. Derive expression for variance
and skew.
(OR)
Define the following:
a) Moments about the origin
b) Central moment
c) Variance
d) Skew

Ans:- MOMENTS OF A RANDOM VARIABLE


 Usually, two types of moments are defined for a random variable.
 Moments about the Origin and
 Moments about the Mean

(1) MOMENTS ABOUT THE ORIGIN:

 Moments about the origin of a random variable are denoted by m.
 The suffix of m indicates the order of the moment.
 The nth moment about the origin of the random variable X is given by

m_n = E[X^n] = ∫_{−∞}^{∞} x^n f_X(x) dx,   if X is continuous

or

m_n = E[X^n] = Σ_{i=1}^{∞} x_i^n P(x_i),   if X is discrete

 Here,
if n = 0, m_0 = E[X⁰] = 1
if n = 1, m_1 = E[X¹] = X̄ (the mean value)
if n = 2, m_2 = E[X²] (the mean-squared value)

(2) MOMENTS ABOUT THE MEAN:

 Moments about the mean of a random variable are also called central moments.
 Moments about the mean of a random variable are denoted by μ.
 The order of the moment is indicated by the suffix of μ.
 The nth moment about the mean of a random variable is given by

μ_n = E[(X − X̄)^n] = ∫_{−∞}^{∞} (x − m_1)^n f_X(x) dx,   if X is continuous

or

μ_n = E[(X − X̄)^n] = Σ_{i=1}^{∞} (x_i − m_1)^n P(x_i),   if X is discrete

 Here,
if n = 0, μ_0 = E[(X − X̄)⁰] = 1
if n = 1, μ_1 = E[(X − X̄)¹] = E[X] − X̄ = X̄ − X̄ = 0
if n = 2, μ_2 = E[(X − X̄)²] = E[X² − 2XX̄ + X̄²]
  = E[X²] − 2X̄E[X] + X̄² = m_2 − 2·m_1·m_1 + m_1² = m_2 − m_1²
Therefore, μ_2 = m_2 − m_1²

 The second order central moment 𝝁𝟐 is also called the “variance” of the random
variable 𝑋.
 The non-negative square-root of the variance is called the “standard deviation”
of the random variable 𝑋. It is denoted by 𝝈𝑿 .

∴ σ_X = √μ_2 = √(m_2 − m_1²)

 if n = 3, μ_3 = E[(X − X̄)³] = E[X³ − 3XX̄(X − X̄) − X̄³]
  = E[X³ − 3X²X̄ + 3XX̄² − X̄³] = E[X³] − 3X̄E[X²] + 3X̄²E[X] − X̄³
  = E[X³] − 3X̄E[X²] + 2X̄³
∴ μ_3 = m_3 − 3m_1 m_2 + 2m_1³

 The third central moment μ_3 is called the "skew" of the density function.
 It is a measure of the asymmetry of f_X(x) about x = X̄ = m_1.


 The normalized third central moment, μ_3/σ_X³, is known as the coefficient of
skewness of the density function.
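A Monte Carlo sketch of the moment relations above (μ_2 = m_2 − m_1² and μ_3 = m_3 − 3m_1m_2 + 2m_1³), using an exponential density as an arbitrary skewed example; sample size and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(scale=2.0, size=1_000_000)   # skewed density (arbitrary choice)

m1, m2, m3 = (np.mean(X**n) for n in (1, 2, 3))  # moments about the origin

# central moments computed directly from the samples
mu2 = np.mean((X - m1)**2)                        # variance
mu3 = np.mean((X - m1)**3)                        # skew

# the same central moments from the origin moments
print(mu2, m2 - m1**2)                            # mu2 = m2 - m1^2
print(mu3, m3 - 3*m1*m2 + 2*m1**3)                # mu3 = m3 - 3 m1 m2 + 2 m1^3

# coefficient of skewness mu3 / sigma^3 (exponential density: ~2)
print(mu3 / mu2**1.5)
```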

1.8 Write a note on Joint characteristic functions. Write the equation to compute joint
moments using the same.

Ans:- JOINT CHARACTERISTIC FUNCTIONS:


 The joint characteristic function of two random variables X and Y is given by

φ_{X,Y}(ω_1, ω_2) = E[e^{jω_1 X + jω_2 Y}] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^{jω_1 x + jω_2 y} f_{X,Y}(x, y) dx dy

 This expression is recognized as the two-dimensional Fourier transform of the
joint density function. Therefore, from the definition of the inverse Fourier
transform, we have

f_{X,Y}(x, y) = (1/(2π)²) ∫_{−∞}^{∞} ∫_{−∞}^{∞} φ_{X,Y}(ω_1, ω_2) e^{−(jω_1 x + jω_2 y)} dω_1 dω_2

 By setting ω_2 = 0 or ω_1 = 0, the characteristic functions of X and Y are
obtained. They are called marginal characteristic functions. Thus,

φ_{X,Y}(ω_1, 0) = E[e^{jω_1 X + 0}] = E[e^{jω_1 X}] = φ_X(ω_1)

φ_{X,Y}(0, ω_2) = E[e^{0 + jω_2 Y}] = E[e^{jω_2 Y}] = φ_Y(ω_2)

 Joint moments about the origin, m_{nk}, can be found from the joint characteristic
function as follows:

m_{nk} = (−j)^{n+k} ∂^{n+k} φ_{X,Y}(ω_1, ω_2) / (∂ω_1^n ∂ω_2^k) |_{ω_1 = 0, ω_2 = 0}

1.9 Define the following with respect to a random variable.


(i) Moment Generating Function
(ii) Characteristic function

Ans:- (1) MOMENT GENERATING FUNCTION


 The moment generating function (MGF) of a random variable X is defined by

M_X(α) = E[e^{αX}] = ∫_{−∞}^{∞} e^{αx} f_X(x) dx

where α is a real number, −∞ < α < ∞.
 The nth moment about the origin of the random variable X can be computed using

m_n = [d^n M_X(α) / dα^n] |_{α = 0}


(2) CHARACTERISTIC FUNCTION:

 The characteristic function of a random variable X is defined by

Φ_X(ω) = E[e^{jωX}] = ∫_{−∞}^{∞} e^{jωx} f_X(x) dx

where j = √−1 and ω is a real variable, −∞ < ω < ∞.
 Inverting this relation expresses the density function in terms of the
characteristic function:

f_X(x) = (1/2π) ∫_{−∞}^{∞} Φ_X(ω) e^{−jωx} dω

 The above two expressions are known as a Fourier transform pair.
 The nth moment about the origin of the random variable X can be computed
using

m_n = (−j)^n [d^n Φ_X(ω) / dω^n] |_{ω = 0}

 The maximum magnitude of a characteristic function occurs at ω = 0:

|Φ_X(ω)| ≤ Φ_X(0) = 1
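As an illustrative check (assuming the known characteristic function Φ_X(ω) = e^{−ω²/2} of a zero-mean, unit-variance Gaussian), a central finite difference recovers m_2 = E[X²] = 1 from the moment formula above; the step size is an arbitrary small value.

```python
import numpy as np

# characteristic function of a zero-mean, unit-variance Gaussian
phi = lambda w: np.exp(-w**2 / 2.0)

h = 1e-4
# second derivative of phi at w = 0 via central differences (~ -1)
d2 = (phi(h) - 2 * phi(0.0) + phi(-h)) / h**2

# m_2 = (-j)^2 * [d^2 phi / dw^2] at w = 0 = -d2
print(-d2)                                   # ~1.0 = E[X^2]

# |phi(w)| <= phi(0) = 1: the maximum magnitude occurs at w = 0
w = np.linspace(-5, 5, 1001)
print(np.max(np.abs(phi(w))), phi(0.0))
```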

1.10 Explain monotonic transformation of a random variable.


(OR)
If a random variable X is transformed into a new random variable Y using
monotonic transformations, derive an equation for density function of Y.

Ans:- MONOTONIC TRANSFORMATION OF A RANDOM VARIABLE


 Monotonic transformation of a random variable can be obtained by considering
following two cases.

Case A: Monotonically increasing transformation


 A transformation 𝑇 of a continuous random variable 𝑋 is said to be
“Monotonically increasing” if it satisfies the condition: 𝑇( 𝑥1 ) < 𝑇(𝑥2 ) for any
𝑥1 < 𝑥2 ∈ 𝑋.
 Let 𝑌 have a particular value 𝑦0 corresponding to the particular value 𝑥0 of 𝑋 as
shown in Figure below.

 The two numbers are related by


𝑦0 = 𝑇(𝑥0 ) or 𝑥0 = 𝑇 −1 (𝑦0 )
 Now the probability of the event {𝑌 ≤ 𝑦0 } must equal the probability of the
event {𝑋 ≤ 𝑥0 } because of the one-to-one correspondence between random
variables 𝑋 and 𝑌. Thus,
𝑃{𝑌 ≤ 𝑦0 } = 𝑃{𝑋 ≤ 𝑥0 }


 From the properties of the probability density function,

∫_{−∞}^{y_0} f_Y(y) dy = ∫_{−∞}^{x_0} f_X(x) dx

 Next, differentiating with respect to y_0 using Leibniz's rule, we get

f_Y(y_0) = f_X(x_0) dx_0/dy_0

 Generalizing the above equation, we get

f_Y(y) = f_X(x) dx/dy
Case B: Monotonically decreasing transformation

 A transformation 𝑇 of a continuous random variable 𝑋 is said to be


“Monotonically decreasing” if it satisfies the condition: 𝑇( 𝑥1 ) > 𝑇(𝑥2 ) for any
𝑥1 < 𝑥2 ∈ 𝑋.
 Let 𝑌 have a particular value 𝑦0 corresponding to the particular value 𝑥0 of 𝑋 as
shown in Figure below.

 The two numbers are related by 𝑦0 = 𝑇(𝑥0 ) or 𝑥0 = 𝑇 −1 (𝑦0 )


 Now the probability of the event {𝑌 ≤ 𝑦0 } must equal the probability of the
event {𝑋 ≥ 𝑥0 } because of the one-to-one correspondence between random
variables 𝑋 and 𝑌. Thus,
𝑃{𝑌 ≤ 𝑦0 } = 𝑃{𝑋 ≥ 𝑥0 } = 1 − 𝑃{𝑋 ≤ 𝑥0 }

From the properties of the probability density function,

∫_{−∞}^{y_0} f_Y(y) dy = 1 − ∫_{−∞}^{x_0} f_X(x) dx

Next, differentiating with respect to y_0 using Leibniz's rule, we get

f_Y(y_0) = −f_X(x_0) dx_0/dy_0

Generalizing the above equation, we get

f_Y(y) = −f_X(T⁻¹(y)) dT⁻¹(y)/dy

or

f_Y(y) = −f_X(x) dx/dy

Generalizing the above two cases, we get

f_Y(y) = f_X(x) |dx/dy|
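A sketch verifying f_Y(y) = f_X(x)|dx/dy| by simulation: for the (hypothetical) choice Y = e^X with X standard normal, T is monotonically increasing, x = T⁻¹(y) = ln y and |dx/dy| = 1/y. Bin count and range are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal(1_000_000)
Y = np.exp(X)                       # monotonically increasing transformation T(x) = e^x

# histogram estimate of f_Y(y)
counts, edges = np.histogram(Y, bins=200, range=(0.05, 6.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# predicted density: x = T^{-1}(y) = ln(y), |dx/dy| = 1/y
f_X = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
f_Y_pred = f_X(np.log(centers)) / centers

print(np.max(np.abs(counts - f_Y_pred)))   # small -> histogram matches the formula
```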


1.11 List the properties of Gaussian random variables.

Ans:- PROPERTIES OF GAUSSIAN RANDOM VARIABLES


 Gaussian random variables are completely defined through only their first and
second-order moments; that is by their means, variances, and covariance.
 If random variables are uncorrelated, they are also statistically independent.
 Random variables produced by a linear transformation of X_1, X_2, ..., X_N will
also be Gaussian.
 Any k-dimensional marginal density function, obtained by integrating out N − k
of the random variables, will be Gaussian.
 The conditional density f_{X1,...,Xk}(x_1, ..., x_k | X_{k+1} = x_{k+1},
X_{k+2} = x_{k+2}, ..., X_N = x_N) is Gaussian for k < N.

1.12 Prove that the density function of the sum of two statistically independent
random variables is the convolution of their individual density functions.
(OR)
Derive an equation for the density function of sum of two independent random
variables. Also extend the concept for N- random variables.
(OR)
Prove that 𝒇𝑾 (𝒘) = 𝒇𝒀 (𝒚) ∗ 𝒇𝑿 (𝒙) where X and Y are two statistically
independent random variables and W=X+Y.

Ans:- DENSITY FUNCTION OF THE SUM OF TWO STATISTICALLY INDEPENDENT RANDOM


VARIABLES
Let 𝑊 be a random variable equal to the sum of two independent random variables
𝑋 and 𝑌:
𝑊 =𝑋+𝑌
The probability distribution function of W is given by,
𝐹𝑊 (𝑤) = 𝑃{𝑊 ≤ 𝑤} = 𝑃{𝑋 + 𝑌 ≤ 𝑤}
Following figure illustrates the region in the 𝑥𝑦 − plane where 𝑥 + 𝑦 ≤ 𝑤.


The probability corresponding to an elemental area 𝑑𝑥𝑑𝑦 in the 𝑥𝑦 plane located at


the point (𝑥, 𝑦) is 𝑓𝑋,𝑌 (𝑥, 𝑦) 𝑑𝑥 𝑑𝑦.
If we sum all such probabilities over the region where 𝑥 + 𝑦 ≤ 𝑤, we will obtain
𝐹𝑊 (𝑤). Thus,
F_W(w) = ∫_{−∞}^{∞} ∫_{−∞}^{w−y} f_{X,Y}(x, y) dx dy

Because X and Y are independent,

F_W(w) = ∫_{−∞}^{∞} ∫_{−∞}^{w−y} f_X(x) f_Y(y) dx dy = ∫_{−∞}^{∞} f_Y(y) [ ∫_{−∞}^{w−y} f_X(x) dx ] dy

By differentiating the above equation with respect to w using Leibniz's rule, we
obtain the desired density function, i.e.,

f_W(w) = ∫_{−∞}^{∞} f_Y(y) f_X(w − y) dy

This expression is recognized as a convolution integral. That is, the density function
of the sum of two statistically independent random variables is the convolution of
their individual density functions, i.e.,

f_W(w) = f_Y(y) ∗ f_X(x)

 Extending the above concept to the sum of N independent random variables: let W
be a random variable equal to the sum of N independent random variables
W_1, W_2, ..., W_N.
 Then the density function of the new random variable W is given by

f_W(w) = f_{W_N}(w_N) ∗ f_{W_{N−1}}(w_{N−1}) ∗ ... ∗ f_{W_2}(w_2) ∗ f_{W_1}(w_1)
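A numerical sketch of the convolution result: for X and Y independent and uniform on (0, 1), f_W should be the triangular density on (0, 2). The grid step, bin count, and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1_000_000
W = rng.uniform(0, 1, N) + rng.uniform(0, 1, N)   # W = X + Y, X, Y ~ U(0,1) independent

# numerical convolution of the two uniform densities
dx = 0.001
x = np.arange(0, 1, dx)
fX = np.ones_like(x)                               # f_X = 1 on [0, 1]
fW = np.convolve(fX, fX) * dx                      # f_W = f_X * f_Y: triangle on [0, 2]
w = np.arange(len(fW)) * dx

# histogram estimate of f_W from the samples
counts, edges = np.histogram(W, bins=100, range=(0, 2), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(counts - np.interp(centers, w, fW))))   # small deviation
```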

1.13 Discuss the central limit theorem for the sum of a large number of random variables.
(OR)
State the central limit theorem.

Ans:- CENTRAL LIMIT THEOREM


 The central limit theorem states that the probability distribution function of
the sum of a large number of independent random variables approaches a Gaussian
distribution.
 The mean and variance of this Gaussian distribution are, respectively, the
sum of the means and the sum of the variances of all the independent random variables.
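An illustrative simulation of the theorem: standardized sums of n independent uniform variables (mean n/2, variance n/12) reproduce Gaussian probabilities; n and the trial count below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 30, 200_000

# sum of n independent U(0,1) variables; mean n/2, variance n/12
S = rng.uniform(0, 1, size=(trials, n)).sum(axis=1)
Z = (S - n / 2) / np.sqrt(n / 12)                  # standardized sum

# compare standardized-sum probabilities with the Gaussian values
print(np.mean(np.abs(Z) <= 1))                     # ~0.683
print(np.mean(np.abs(Z) <= 2))                     # ~0.954
```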
1.14 Define the following random variables.
(i) Uniform
(ii) Exponential
(iii) Rayleigh
(iv) Binomial
(v) Poisson
(OR)
Define the following distribution functions.

(i) Uniform
(ii) Exponential
(iii) Rayleigh
(iv) Binomial
(v) Poisson

Ans:- (1) UNIFORM RANDOM VARIABLE


 A random variable X is said to be a uniform random variable if it has density
and distribution functions of the form

f_X(x) = 1/(b − a),  a ≤ x ≤ b;  0, elsewhere

F_X(x) = 0, x < a;  (x − a)/(b − a), a ≤ x < b;  1, x ≥ b

for real constants −∞ < a < ∞ and b > a.
 Graphical representation of uniform density function is shown in figure (a) and
uniform distribution is shown in figure (b).

(2) EXPONENTIAL RANDOM VARIABLE


 A random variable X is said to be an exponential random variable if it has
density and distribution functions of the form

f_X(x) = (1/b) e^{−(x−a)/b},  x > a;  0, x < a

F_X(x) = 1 − e^{−(x−a)/b},  x > a;  0, x < a

for real constants −∞ < a < ∞ and b > 0.
 Graphical representation of exponential density function is shown in figure (a)
and exponential distribution is shown in figure (b).


(3) RAYLEIGH RANDOM VARIABLE


 A random variable X is said to be a Rayleigh random variable if it has density
and distribution functions of the form

f_X(x) = (2/b)(x − a) e^{−(x−a)²/b},  x ≥ a;  0, x < a

F_X(x) = 1 − e^{−(x−a)²/b},  x ≥ a;  0, x < a

for real constants −∞ < a < ∞ and b > 0.
 Graphical representation of Rayleigh density function is shown in figure (a) and
Rayleigh distribution is shown in figure (b).

(4) BINOMIAL RANDOM VARIABLE


 A random variable X is said to be a binomial random variable if it has density
and distribution functions of the form

f_X(x) = Σ_{k=0}^{N} (N choose k) p^k (1 − p)^{N−k} δ(x − k)

F_X(x) = Σ_{k=0}^{N} (N choose k) p^k (1 − p)^{N−k} u(x − k)

where 0 < p < 1, N = 1, 2, ..., and (N choose k) = N!/(k!(N − k)!).

 The binomial density can be applied to many games of chance, detection


problems in RADAR and SONAR, and many experiments having only two possible
outcomes on any given trial.
(5) POISSON RANDOM VARIABLE
 A random variable X is said to be a Poisson random variable if it has density
and distribution functions of the form

f_X(x) = Σ_{k=0}^{∞} (e^{−b} b^k / k!) δ(x − k)

F_X(x) = Σ_{k=0}^{∞} (e^{−b} b^k / k!) u(x − k)

where b > 0 is a real constant.


UNIT-2
Random Process-Temporal Characteristics
2.1 Define and classify random processes with suitable examples.
(OR)
What is a random process? Explain classification of random process.

Ans:- RANDOM PROCESS DEFINITION:


A random process (also known as a stochastic process) is defined as an ensemble
(collection) of functions of time, where both the time t and the outcome s of the
underlying experiment are variables. It is usually denoted by X(t, s) or simply X(t).
Interpretations:
 The random process 𝑋(𝑡, 𝑠) is a family of functions where 𝑡 and 𝑠 are
variables.
 If s is fixed, the random process is a function of time t only; it is then a
single time function (a sample function).
 If 𝑡 is fixed, the random process is a function of 𝑠 only and hence the
random process will represent a random variable at time 𝑡.
 If both t and s are fixed, then X(t, s) is merely a number.

CLASSIFICATION OF RANDOM PROCESSES


The random processes are classified based on the type of random variable
(continuous or discrete) and the time (continuous or discrete). They are defined as
follows.
(1) Continuous random process
 A continuous random process is one in which the random variable X is
continuous and time t is continuous.
 Following figure illustrates an example curve for continuous random process.

 The physical examples for continuous random processes are:


 Dissolving of sugar crystals in coffee
 Thermal noise generated in a network
 Fluctuations in air temperature, and air pressure


(2) Discrete random process


 A discrete random process is one in which the random variable 𝑋 is discrete (it
assumes only certain specific values) and time 𝑡 is continuous.
 Following figure illustrates an example of discrete random process.

 As shown in Figure, the voltage available at one end of the switch because of
opening and closing of the switch is a discrete random process.

(3) DISCRETE TIME (DT) RANDOM PROCESS


 A random process for which the random variable X is continuous but the time
t is discrete is called a discrete-time random process. It is often also called
a continuous random sequence.
 Following figure illustrates an example of discrete time random process.

 𝑇𝑠 is sampling interval and the sampling rate is 1/𝑇𝑠 samples per second.
 These types of processes are important in the analysis of various digital signal
processing (DSP) systems.

(4) DISCRETE RANDOM SEQUENCE


 A random process for which both the random variable X and the time t are
discrete is called a discrete random sequence.
 Following figure illustrates an example of such a process.


(5) DETERMINISTIC RANDOM PROCESS


 If the future values of any sample function can be predicted from a knowledge
of the past values, then the random process is known as deterministic random
process.
 For example, consider the random process 𝑋(𝑡) = 𝐴 𝑐𝑜𝑠 (𝜔𝑡 + 𝜃). It consists
of a family of pure sine waves and it is completely specified in terms of the
random variables 𝐴 and 𝜃. Hence, it is a deterministic random process.

(6) NON-DETERMINISTIC RANDOM PROCESS


 If the future values of any sample function cannot be predicted from a
knowledge of the past values, then the random process is known as non-
deterministic random process.
 For example, consider the random process shown in the figure below. Its future
values cannot be determined from past values, so it is a non-deterministic
random process.

2.2 Define stationarity of random process and Classify.

Ans:- STATIONARITY OF RANDOM PROCESS


 A random process 𝑋(𝑡) is said to be stationary or strict sense stationary (SSS)
if 𝑋(𝑡) and 𝑋(𝑡 + 𝜏) possess the same statistical properties for any value of 𝜏.
i.e., the statistical properties of the random processes are not affected by a shift
in the time.
 Random processes that are not stationary are called non-stationary random
processes, and are sometimes also called evolutionary processes.

FIRST-ORDER STATIONARY RANDOM PROCESSES:


 A random process is called first-order stationary or stationary to the first order
if its first order density function does not change with a shift in the time origin.
i.e.,
𝑓𝑋 (𝑥1 , 𝑡1 ) = 𝑓𝑋 (𝑥1 , 𝑡1 + 𝜏)
for any value of 𝜏. Hence it possesses a constant mean value.


SECOND-ORDER STATIONARY RANDOM PROCESSES:


 A random process is called second-order stationary or stationary to order two
if its second order density function does not change with a shift in the time
origin. i.e.,
f_X(x_1, x_2; t_1, t_2) = f_X(x_1, x_2; t_1 + τ, t_2 + τ)
for all t_1, t_2 and τ. Second-order stationarity implies first-order stationarity,
so the process has a constant mean and an autocorrelation that depends only on the
time difference t_2 − t_1.

N- ORDER AND STRICT SENSE STATIONARY RANDOM PROCESS:


 By extending the above discussions to 𝑁 random processes, we say that a
random process is stationary to order 𝑁 if its 𝑁 𝑡ℎ order density function is
invariant to a shift in time origin, i.e.,
𝑓𝑋 (𝑥1 , 𝑥2 … 𝑥𝑁 ; 𝑡1 , 𝑡2 … , 𝑡𝑁 ) = 𝑓𝑋 (𝑥1 , 𝑥2 … 𝑥𝑁 ; 𝑡1 + 𝜏, 𝑡2 + 𝜏 … , 𝑡𝑁 + 𝜏)
for all 𝑡1 , 𝑡2 … , 𝑡𝑁 and 𝜏.
 Stationarity of order N implies stationarity to all orders K ≤ N.
 A process stationary to all orders 𝑁 is called strict-sense stationary.

2.3 Define wide-sense stationarity of random processes.

Ans:- WIDE-SENSE STATIONARY (WSS) RANDOM PROCESSES:


 A random process 𝑋(𝑡) is called WSS random process or sometimes called as
weak-sense stationary process, if it satisfies the following conditions,
i. 𝐸[𝑋(𝑡)] = 𝑚 = 𝑐𝑜𝑛𝑠𝑡𝑎𝑛𝑡
i.e., the mean value of the process is a constant
ii. E[X(t) X(t + τ)] = R_XX(τ)
i.e., its autocorrelation function depends only on τ and not on t.
 A strict sense stationary random process implies a wide sense stationary
random process but not vice versa
 A process stationary to order two is clearly a wide sense stationary, but not vice
versa.
JOINTLY WIDE-SENSE STATIONARY RANDOM PROCESS:
 Two random processes X(t) and Y(t) are said to be jointly wide-sense stationary
if each of them satisfies the following properties:
i. The mean value of the process is constant.
ii. The autocorrelation function depends only on 𝜏 and not on 𝑡.
 In addition to the above properties, their cross-correlation function defined by
𝑅𝑋𝑌 (𝑡1 , 𝑡2 ) = 𝐸[𝑋(𝑡1 ). 𝑌(𝑡2 )] must be a function only of time difference 𝜏 =
𝑡2 − 𝑡1 and not of absolute time, i.e.,
𝑅𝑋𝑌 (𝑡1 , 𝑡2 ) = 𝐸[𝑋(𝑡1 ). 𝑌(𝑡2 )] = 𝑅𝑋𝑌 (𝜏)


2.4 Define and classify Ergodic random processes.


(OR)
Define Time average and Ergodicity of the random process.

Ans:- ERGODIC RANDOM PROCESSES

 A random process is said to be ergodic random process, if the time averages of


the process are equal to its statistical averages and the time autocorrelation is
equal to its statistical autocorrelation. i.e., if
𝑥̅ = 𝑋̅ 𝑎𝑛𝑑 ℛ𝑋𝑋 (𝜏) = 𝑅𝑋𝑋 (𝜏)
 Two random processes are called jointly ergodic if they are individually ergodic
and also have a time cross correlation function that equals the statistical cross
correlation function. i.e.,
ℛ_XY(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} X(t) Y(t + τ) dt = R_XY(τ)
Mean-Ergodic random process
A random process 𝑋(𝑡) with a constant mean value 𝑋̅ is called “mean-ergodic or
ergodic in mean” if its statistical average 𝑋̅ equals the time average 𝑥̅ of any
sample function 𝑥(𝑡), i.e., if,
𝐸[𝑋(𝑡)] = 𝑋̅ = 𝐴[𝑋(𝑡)] = 𝑥̅
Similarly, a discrete sequence is called mean-ergodic if the time average of
samples equals the statistical average
A_X = lim_{N→∞} (1/(2N + 1)) Σ_{n=−N}^{N} X[n] = X̄
Correlation-Ergodic random process
A stationary continuous process 𝑋(𝑡) with autocorrelation function 𝑅𝑋𝑋 (𝜏) is
called autocorrelation-ergodic or ergodic in autocorrelation if and only if its time
autocorrelation equals the statistical autocorrelation, i.e, if
ℛ𝑋𝑋 (𝜏) = 𝑅𝑋𝑋 (𝜏) 𝑓𝑜𝑟 𝑎𝑙𝑙 𝜏
Similarly, a wide-sense stationary sequence 𝑋[𝑛] is said to be auto-correlation
ergodic if and only if
lim_{N→∞} (1/(2N + 1)) Σ_{n=−N}^{N} X[n] X[n + k] = R_XX(k)
Two processes 𝑋(𝑡) and 𝑌(𝑡) may be called cross-correlation ergodic or ergodic in
cross-correlation if the time cross-correlation function is equal to the statistical
cross-correlation function. i.e., if,
ℛ𝑋𝑌 (𝜏) = 𝑅𝑋𝑌 (𝜏)
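As a sketch of these definitions, consider the classic random-phase sinusoid X(t) = A cos(ω₀t + Θ) with Θ uniform on [0, 2π), which is mean- and autocorrelation-ergodic with E[X(t)] = 0 and R_XX(τ) = (A²/2) cos(ω₀τ); the amplitude, frequency, and lag below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)
A, w0 = 2.0, 2 * np.pi * 5.0
t = np.arange(0, 200, 1e-3)               # long observation window (1000 periods)

# one sample function: its time averages should match the ensemble averages
theta = rng.uniform(0, 2 * np.pi)
x = A * np.cos(w0 * t + theta)
print(x.mean())                            # time-average mean ~0 = ensemble mean

# time autocorrelation at lag tau vs statistical R_XX(tau) = (A^2/2) cos(w0 tau)
k = 37                                     # arbitrary lag, in samples
tau = k * 1e-3
print(np.mean(x[:-k] * x[k:]), 0.5 * A**2 * np.cos(w0 * tau))
```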


2.5 Define Autocorrelation function of a random variable and discuss its properties.

Ans:- AUTOCORRELATION FUNCTION


The autocorrelation function provides a measure of the similarity (or coherence)
between a given signal and a replica of the same signal delayed by a particular time.
The autocorrelation function of a random process 𝑋(𝑡) is the correlation of two
random variables 𝑋1 = 𝑋(𝑡1 ) and 𝑋2 = 𝑋(𝑡2 ) defined by the processes at
times 𝑡1 𝑎𝑛𝑑 𝑡2 . Mathematically given by,
𝑅𝑋𝑋 (𝑡1 , 𝑡2 ) = 𝐸[𝑋(𝑡1 ) 𝑋(𝑡2 )]
Let t_1 = t and t_2 = t_1 + τ = t + τ, with τ a real number; we get
𝑅𝑋𝑋 (𝑡, 𝑡 + 𝜏) = 𝐸[𝑋(𝑡) 𝑋(𝑡 + 𝜏)]
If 𝑋(𝑡) is a wide sense stationary random process then, autocorrelation function
depends on only the time difference 𝜏 = 𝑡2 − 𝑡1 . Thus for a WSS process,
𝑅𝑋𝑋 (𝑡, 𝑡 + 𝜏) = 𝐸[𝑋(𝑡) 𝑋(𝑡 + 𝜏)] = 𝑅𝑋𝑋 (𝜏)
The physical significance of the autocorrelation function is that it provides a means
of describing the independence of two random variables obtained by observing a
random process 𝑋(𝑡) at times 𝜏 seconds apart.
Properties of Autocorrelation Function:
1. The maximum value of an autocorrelation function is equal to the value
evaluated at the origin. i.e.,
|𝑅𝑋𝑋 (𝜏)| ≤ 𝑅𝑋𝑋 (0)
2. The autocorrelation function is an even function of 𝜏, i.e.,
𝑅𝑋𝑋 (𝜏) = 𝑅𝑋𝑋 (−𝜏)
3. The value obtained at 𝜏 = 0 of an autocorrelation function gives the power in
the process, i.e.,
𝑅𝑋𝑋 (0) = 𝐸[𝑋 2 (𝑡)]
4. If 𝐸[𝑋(𝑡)] = 𝑋̅ ≠ 0 and 𝑋(𝑡) is ergodic with no periodic components, then
𝑙𝑖𝑚 𝑅𝑋𝑋 (𝜏) = 𝑋̅ 2
|𝜏|→∞

5. If 𝑋(𝑡) has a periodic component, then 𝑅𝑋𝑋 (𝜏) will also have a periodic
component with the same period
6. If 𝑋(𝑡) is ergodic, zero mean, and has no periodic component then,
𝑙𝑖𝑚 𝑅𝑋𝑋 (𝜏) = 0
|𝜏|→∞

7. The autocorrelation function R_XX(τ) cannot have an arbitrary shape.

8. The autocorrelation function R_XX(τ) of a random process X(t) is a finite-energy
function, i.e.,

∫_{−∞}^{∞} R_XX(τ) dτ < ∞


9. If there are two random processes 𝑋(𝑡) and 𝑌(𝑡) such that 𝑍(𝑡) = 𝑋(𝑡) +
𝑌(𝑡), then
𝑅𝑍𝑍 (𝜏) = 𝑅𝑋𝑋 (𝜏) + 𝑅𝑌𝑌 (𝜏) + 𝑅𝑋𝑌 (𝜏) + 𝑅𝑌𝑋 (𝜏)
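A numerical sketch of properties 1–3, estimating R_XX(k) by a time average for an arbitrary WSS model (a 5-point moving average of white Gaussian noise); the model and lags are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
# arbitrary WSS model: 5-point moving average of white Gaussian noise
w = rng.standard_normal(1_000_000)
x = np.convolve(w, np.ones(5) / 5, mode='valid')

def R(k):
    # time-average estimate of R_XX(k); even symmetry handles negative lags
    k = abs(k)
    return np.mean(x[:len(x) - k] * x[k:])

print(R(0), np.mean(x**2))                             # property 3: R(0) = E[X^2], the power
print(all(abs(R(k)) <= R(0) for k in range(1, 10)))    # property 1: |R(k)| <= R(0)
print(R(3), R(-3))                                     # property 2: R is an even function
```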

2.6 Define cross correlation functions and list their properties.

Ans:- CROSS-CORRELATION FUNCTION


The cross-correlation function of two random processes is defined as a measure of
the similarity between a signal and a time delayed version of a second signal, i.e.,
𝑅𝑋𝑌 (𝑡1 , 𝑡2 ) = 𝐸[𝑋(𝑡1 ) 𝑌(𝑡2 )]
Let t_1 = t and t_2 = t_1 + τ = t + τ, with τ a real number; we get
𝑅𝑋𝑌 (𝑡, 𝑡 + 𝜏) = 𝐸[𝑋(𝑡) 𝑌(𝑡 + 𝜏)]
If 𝑋(𝑡) and Y(t) are jointly wide sense stationary random processes then, cross-
correlation function depends on only the time difference 𝜏 = 𝑡2 − 𝑡1 . Thus for
jointly WSS processes,
𝑅𝑋𝑌 (𝑡, 𝑡 + 𝜏) = 𝐸[𝑋(𝑡) 𝑌(𝑡 + 𝜏)] = 𝑅𝑋𝑌 (𝜏)
 If 𝑅𝑋𝑌 (𝑡, 𝑡 + 𝜏) = 0, then 𝑋(𝑡) and 𝑌(𝑡) are called orthogonal processes
 If two processes are statistically independent, then their cross correlation
function becomes
𝑅𝑋𝑌 (𝑡, 𝑡 + 𝜏) = 𝐸[𝑋(𝑡)] 𝐸[𝑌(𝑡 + 𝜏)]
 If, in addition, X(t) and Y(t) are at least wide-sense stationary, then
R_XY(τ) = R_YX(τ) = X̄ Ȳ
Properties of Cross-correlation:
1. The cross-correlation function is not an even function, but it has the symmetry
R_XY(τ) = R_YX(−τ)
2. The maximum value of a cross-correlation function is bounded by
|R_XY(τ)| ≤ √(R_XX(0) R_YY(0))
3. The magnitude of the cross-correlation function also satisfies
|R_XY(τ)| ≤ (1/2)[R_XX(0) + R_YY(0)]
4. If X(t) and Y(t) are independent of each other, then
R_XY(τ) = R_YX(τ) = X̄ Ȳ
5. If the random processes 𝑋(𝑡) and 𝑌(𝑡) have periodic components with a
period 𝑇0 , then their cross correlation functions 𝑅𝑋𝑌 (𝜏) and 𝑅𝑌𝑋 (𝜏) will also
have the same periodic components with the same period 𝑇0 .
2.7 Write a note on covariance functions of random processes.

Ans:- Covariance is defined as a measure of how much two random processes change
together. Covariance can be of two types, namely, auto-covariance and cross-
covariance.

AUTO-COVARIANCE FUNCTION
The auto-covariance is defined as a measure of variation of a random process with
its replica with a time difference 𝜏. It is denoted by 𝐶𝑋𝑋 (𝑡1 , 𝑡2 ). Mathematically it
is given by,
C_XX(t_1, t_2) = E{ [X(t_1) − E[X(t_1)]] [X(t_2) − E[X(t_2)]] }
  = E[X(t_1) X(t_2)] − E[X(t_1)] E[X(t_2)]
  = R_XX(t_1, t_2) − E[X(t_1)] E[X(t_2)]
Let 𝑡1 = 𝑡 and 𝑡2 = 𝑡1 + 𝜏 = 𝑡 + 𝜏 with 𝜏 a real number, we get
𝐶𝑋𝑋 (𝑡, 𝑡 + 𝜏) = 𝑅𝑋𝑋 (𝑡, 𝑡 + 𝜏) − 𝐸[𝑋(𝑡)] 𝐸[𝑋(𝑡 + 𝜏)]
 If 𝑋(𝑡) is a wide-sense stationary random process then, autocorrelation
function and auto covariance function depends on only the time difference
𝜏 = 𝑡2 − 𝑡1. Thus, for a WSS process we can write,
𝐶𝑋𝑋 (𝜏) = 𝑅𝑋𝑋 (𝜏) − 𝐸[𝑋(𝑡)] 𝐸[𝑋(𝑡 + 𝜏)]
 If t_1 = t_2 = t, i.e., C_XX is observed at the same time instant, then
C_XX(t, t) = R_XX(t, t) − E[X(t)] E[X(t)]
  = E[X²(t)] − (E[X(t)])² = E[(X(t) − X̄(t))²] = Var[X(t)]

CROSS-COVARIANCE FUNCTION
The Cross-covariance is defined as a measure of variation of two random process
with each other at two different time instants 𝑡1 and 𝑡2 . It is denoted by 𝐶𝑋𝑌 (𝑡1 , 𝑡2 ).
Mathematically it is given by,
C_XY(t_1, t_2) = E{ [X(t_1) − E[X(t_1)]] [Y(t_2) − E[Y(t_2)]] }
  = E[X(t_1) Y(t_2)] − E[X(t_1)] E[Y(t_2)]
  = R_XY(t_1, t_2) − E[X(t_1)] E[Y(t_2)]
Let 𝑡1 = 𝑡 and 𝑡2 = 𝑡1 + 𝜏 = 𝑡 + 𝜏 with 𝜏 a real number, we get
𝐶𝑋𝑌 (𝑡, 𝑡 + 𝜏) = 𝑅𝑋𝑌 (𝑡, 𝑡 + 𝜏) − 𝐸[𝑋(𝑡)] 𝐸[𝑌(𝑡 + 𝜏)]
 For two random processes, if C_XY(t, t + τ) = 0, they are called uncorrelated
processes. Hence,
R_XY(t, t + τ) = E[X(t)] E[Y(t + τ)]
 If 𝑋(𝑡) and Y(t) are jointly wide sense stationary random processes then,
cross-correlation function and cross covariance function depends on only
the time difference 𝜏 = 𝑡2 − 𝑡1 , and their mean values are constant. Thus
for jointly WSS processes, we can write,
C_XY(τ) = R_XY(τ) − E[X(t)] E[Y(t + τ)] = R_XY(τ) − X̄ Ȳ


UNIT III
Random Process-Spectral Characteristics

3.1 Derive the relation between autocorrelation function and power spectral density
and comment on it.
(OR)
State and prove the Wiener–Khintchine relations.
(OR)
Show that the autocorrelation function and the power spectral density form a
Fourier transform pair.

Ans:- RELATIONSHIP BETWEEN POWER SPECTRUM AND AUTOCORRELATION FUNCTION

 To obtain the relation between the power spectrum and the autocorrelation function,
reconsider the Fourier transform of the portion of the sample function x(t) that
exists between −T and T, i.e.,
X_T(ω) = ∫_{−∞}^{∞} x_T(t) e^{−jωt} dt = ∫_{−T}^{T} x(t) e^{−jωt} dt

 The power spectrum of the random process X(t) is given by

S_XX(ω) = lim_{T→∞} E[|X_T(ω)|²]/(2T) = lim_{T→∞} E[X_T*(ω) X_T(ω)]/(2T)
 = lim_{T→∞} (1/2T) E[ ∫_{−T}^{T} x(t_1) e^{jωt_1} dt_1 ∫_{−T}^{T} x(t_2) e^{−jωt_2} dt_2 ]
 = lim_{T→∞} (1/2T) E[ ∫_{−T}^{T} ∫_{−T}^{T} x(t_1) x(t_2) e^{−jω(t_2 − t_1)} dt_2 dt_1 ]
 = lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} E[x(t_1) x(t_2)] e^{−jω(t_2 − t_1)} dt_2 dt_1

∴ S_XX(ω) = lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} R_XX(t_1, t_2) e^{−jω(t_2 − t_1)} dt_2 dt_1

Taking the inverse Fourier transform on both sides of the above equation, we get

(1/2π) ∫_{−∞}^{∞} S_XX(ω) e^{jωτ} dω
 = (1/2π) ∫_{−∞}^{∞} lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} R_XX(t_1, t_2) e^{−jω(t_2 − t_1)} e^{jωτ} dt_2 dt_1 dω
 = (1/2π) ∫_{−∞}^{∞} lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} R_XX(t_1, t_2) e^{−jω(t_2 − t_1 − τ)} dt_2 dt_1 dω
 = lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} R_XX(t_1, t_2) dt_2 dt_1 [ (1/2π) ∫_{−∞}^{∞} e^{−jω(t_2 − t_1 − τ)} dω ]
 = lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} R_XX(t_1, t_2) dt_2 dt_1 [ (1/2π) ∫_{−∞}^{∞} e^{jω(t_1 + τ − t_2)} dω ]

From the definition of the impulse function,

δ(x) = (1/2π) ∫_{−∞}^{∞} e^{jωx} dω

Therefore,

(1/2π) ∫_{−∞}^{∞} S_XX(ω) e^{jωτ} dω = lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} R_XX(t_1, t_2) δ(t_1 + τ − t_2) dt_2 dt_1

From the even symmetry property of the impulse function, δ(t_1 + τ − t_2) = δ(t_2 − t_1 − τ); thus,

(1/2π) ∫_{−∞}^{∞} S_XX(ω) e^{jωτ} dω = lim_{T→∞} (1/2T) ∫_{−T}^{T} [ ∫_{−T}^{T} R_XX(t_1, t_2) δ(t_2 − (t_1 + τ)) dt_2 ] dt_1

Now, from the sifting (integral) property of the impulse function,

∫_{−∞}^{∞} δ(x − α) φ(x) dx = φ(α)

Thus,

(1/2π) ∫_{−∞}^{∞} S_XX(ω) e^{jωτ} dω = lim_{T→∞} (1/2T) ∫_{−T}^{T} R_XX(t_1, t_1 + τ) dt_1

Generalizing the time variable, we get

(1/2π) ∫_{−∞}^{∞} S_XX(ω) e^{jωτ} dω = lim_{T→∞} (1/2T) ∫_{−T}^{T} R_XX(t, t + τ) dt

The right-side expression is the time average of the autocorrelation function, defined as

A[R_XX(t, t + τ)] = lim_{T→∞} (1/2T) ∫_{−T}^{T} R_XX(t, t + τ) dt

Thus,

(1/2π) ∫_{−∞}^{∞} S_XX(ω) e^{jωτ} dω = A[R_XX(t, t + τ)]

and the direct Fourier transformation of the above equation is

S_XX(ω) = ∫_{−∞}^{∞} A[R_XX(t, t + τ)] e^{−jωτ} dτ

From the above two equations, it can be concluded that the time-averaged
autocorrelation and the power spectrum are a Fourier transform pair.


 If the random process is wide-sense stationary, the time-averaged autocorrelation
depends only on the time difference τ. Thus, the above equations reduce to

R_XX(τ) = (1/2π) ∫_{−∞}^{∞} S_XX(ω) e^{jωτ} dω

S_XX(ω) = ∫_{−∞}^{∞} R_XX(τ) e^{−jωτ} dτ

i.e. the autocorrelation function and the power density spectrum are a Fourier
transform pair. These relations are also known as the Wiener–Khintchine relations.
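A discrete-time sketch of the Wiener–Khintchine relations: averaging periodograms of white noise shaped by an arbitrary FIR filter h estimates S_XX(ω), and the inverse FFT of that estimate recovers R_XX(τ) = Σ_k h[k]h[k+τ]; filter taps, record length, and trial count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
N, trials = 1024, 500
h = np.array([1.0, 0.6, 0.3])          # arbitrary FIR filter shaping white noise

# average the periodogram E[|X_T(w)|^2] / (2T) over realizations -> S_XX(w)
S_est = np.zeros(N)
for _ in range(trials):
    x = np.convolve(rng.standard_normal(N + len(h) - 1), h, mode='valid')
    S_est += np.abs(np.fft.fft(x))**2 / N
S_est /= trials

# Wiener-Khintchine: the inverse transform of S_XX gives R_XX(tau)
R_est = np.real(np.fft.ifft(S_est))

# theory for unit-variance white input: R_XX(tau) = sum_k h[k] h[k + tau]
R_theory = np.correlate(h, h, mode='full')[len(h) - 1:]   # lags 0, 1, 2
print(R_est[:3])
print(R_theory)                                            # [1.45, 0.78, 0.3]
```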

3.2 Derive an expression for total power in terms of power spectral density.
(OR)
Derive an expression for the power spectral density.

Ans:- THE POWER DENSITY SPECTRUM/ POWER SPECTRAL DENSITY (PSD)


For a random process X(t), let x_T(t) be defined as the portion of a sample function
x(t) that exists between −T and T, i.e.,

x_T(t) = x(t), −T < t < T;  0, elsewhere

For finite values of T, x_T(t) will satisfy

∫_{−T}^{T} |x_T(t)| dt < ∞

 Its Fourier transform is given by

X_T(ω) = ∫_{−∞}^{∞} x_T(t) e^{−jωt} dt = ∫_{−T}^{T} x(t) e^{−jωt} dt

 The energy contained in the signal x_T(t) is given by

E(T) = ∫_{−∞}^{∞} x_T²(t) dt = ∫_{−T}^{T} x²(t) dt

 Using Parseval's theorem, with x_T(t) ↔ X_T(ω),

E(T) = ∫_{−T}^{T} x²(t) dt = (1/2π) ∫_{−∞}^{∞} |X_T(ω)|² dω

 The average power P(T) in x(t) over the interval (−T, T) is obtained by dividing
the above expression by 2T, i.e.,

P(T) = (1/2T) ∫_{−T}^{T} x²(t) dt = (1/2π) ∫_{−∞}^{∞} |X_T(ω)|²/(2T) dω

 When considering the entire process X(t) over all time (T → ∞), it is convenient
to take the expected value of x²(t). Hence the average power in the random process
X(t) is given by

P_XX = lim_{T→∞} (1/2T) ∫_{−T}^{T} E[x²(t)] dt = (1/2π) ∫_{−∞}^{∞} lim_{T→∞} E[|X_T(ω)|²]/(2T) dω

From the above equation, the following two things are clear.

1. The average power P_XX in a random process X(t) is given by the time average of
its second moment, i.e.,

P_XX = lim_{T→∞} (1/2T) ∫_{−T}^{T} E[x²(t)] dt = A{E[x²(t)]}

If the process is at least wide-sense stationary, then E[x²(t)] = E[X²], a constant
(the mean-squared value), and hence

P_XX = E[X²]

2. The average power P_XX can be obtained by a frequency-domain integration.
If we define the power density spectrum of the random process as

S_XX(ω) = lim_{T→∞} E[|X_T(ω)|²]/(2T)

then the average power of the random process is given by

P_XX = (1/2π) ∫_{−∞}^{∞} S_XX(ω) dω

3.3 Discuss the properties of the power density spectrum.

Ans:- PROPERTIES OF THE POWER DENSITY SPECTRUM:


i. S_XX(ω) ≥ 0
ii. S_XX(ω) = S_XX(−ω)
iii. S_XX(ω) is real
iv. (1/2π) ∫_{−∞}^{∞} S_XX(ω) dω = A{E[x²(t)]}
v. S_ẊẊ(ω) = ω² S_XX(ω)
i.e. the power density spectrum of the derivative of X(t) is ω² times the power
spectrum of X(t).
vi. (1/2π) ∫_{−∞}^{∞} S_XX(ω) e^{jωτ} dω = A[R_XX(t, t + τ)] and
S_XX(ω) = ∫_{−∞}^{∞} A[R_XX(t, t + τ)] e^{−jωτ} dτ
Thus, the power density spectrum and the time average of the autocorrelation
function form a Fourier transform pair.
For a wide-sense stationary random process, the above relations reduce to

R_XX(τ) = (1/2π) ∫_{−∞}^{∞} S_XX(ω) e^{jωτ} dω

S_XX(ω) = ∫_{−∞}^{∞} R_XX(τ) e^{−jωτ} dτ


3.4 Derive the relation between cross-correlation function and cross power spectral
density and comment on it.
(OR)
Show that the cross-correlation function and the cross-power spectral density form a
Fourier transform pair.

Ans:- RELATIONSHIP BETWEEN CROSS-POWER SPECTRUM AND CROSS-CORRELATION FUNCTION:

To obtain the relation between the cross-power spectrum and the cross-correlation
function, reconsider the Fourier transforms of the portions of the sample functions
x(t) and y(t) that exist between −T and T, written with dummy time variables t_1
and t_2 respectively:

X_T(ω) = ∫_{−∞}^{∞} x_T(t) e^{−jωt} dt = ∫_{−T}^{T} x(t_1) e^{−jωt_1} dt_1

Y_T(ω) = ∫_{−∞}^{∞} y_T(t) e^{−jωt} dt = ∫_{−T}^{T} y(t_2) e^{−jωt_2} dt_2
Now, the cross-power spectrum of the random processes X(t) and Y(t) is given by

S_XY(ω) = lim_{T→∞} E[X_T*(ω) Y_T(ω)]/(2T)
 = lim_{T→∞} (1/2T) E[ ∫_{−T}^{T} x(t_1) e^{jωt_1} dt_1 ∫_{−T}^{T} y(t_2) e^{−jωt_2} dt_2 ]
 = lim_{T→∞} (1/2T) E[ ∫_{−T}^{T} ∫_{−T}^{T} x(t_1) y(t_2) e^{−jω(t_2 − t_1)} dt_2 dt_1 ]
 = lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} E[x(t_1) y(t_2)] e^{−jω(t_2 − t_1)} dt_2 dt_1

∴ S_XY(ω) = lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} R_XY(t_1, t_2) e^{−jω(t_2 − t_1)} dt_2 dt_1
Now, taking the inverse Fourier transform on both sides,

(1/2π) ∫_{−∞}^{∞} S_XY(ω) e^{jωτ} dω
 = (1/2π) ∫_{−∞}^{∞} lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} R_XY(t_1, t_2) e^{−jω(t_2 − t_1)} e^{jωτ} dt_2 dt_1 dω
 = (1/2π) ∫_{−∞}^{∞} lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} R_XY(t_1, t_2) e^{−jω(t_2 − t_1 − τ)} dt_2 dt_1 dω
 = lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} R_XY(t_1, t_2) dt_2 dt_1 [ (1/2π) ∫_{−∞}^{∞} e^{−jω(t_2 − t_1 − τ)} dω ]
 = lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} R_XY(t_1, t_2) dt_2 dt_1 [ (1/2π) ∫_{−∞}^{∞} e^{jω(t_1 + τ − t_2)} dω ]

From the definition of the impulse function,

δ(x) = (1/2π) ∫_{−∞}^{∞} e^{jωx} dω

Therefore,

(1/2π) ∫_{−∞}^{∞} S_XY(ω) e^{jωτ} dω = lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} R_XY(t_1, t_2) δ(t_1 + τ − t_2) dt_2 dt_1

From the even symmetry property of the impulse function, δ(t_1 + τ − t_2) = δ(t_2 − t_1 − τ); thus,

(1/2π) ∫_{−∞}^{∞} S_XY(ω) e^{jωτ} dω = lim_{T→∞} (1/2T) ∫_{−T}^{T} [ ∫_{−T}^{T} R_XY(t_1, t_2) δ(t_2 − (t_1 + τ)) dt_2 ] dt_1

Now, from the sifting (integral) property of the impulse function,

∫_{−∞}^{∞} δ(x − α) φ(x) dx = φ(α)

Thus,

(1/2π) ∫_{−∞}^{∞} S_XY(ω) e^{jωτ} dω = lim_{T→∞} (1/2T) ∫_{−T}^{T} R_XY(t_1, t_1 + τ) dt_1

Generalizing the time variable, we get

(1/2π) ∫_{−∞}^{∞} S_XY(ω) e^{jωτ} dω = lim_{T→∞} (1/2T) ∫_{−T}^{T} R_XY(t, t + τ) dt

The right-side expression is the time average of the cross-correlation function,
defined as

A[R_XY(t, t + τ)] = lim_{T→∞} (1/2T) ∫_{−T}^{T} R_XY(t, t + τ) dt
Thus,

(1/2π) ∫_{−∞}^{∞} S_XY(ω) e^{jωτ} dω = A[R_XY(t, t + τ)]

and the direct Fourier transformation of the above equation is

S_XY(ω) = ∫_{−∞}^{∞} A[R_XY(t, t + τ)] e^{−jωτ} dτ


From the above two equations, it can be concluded that the time-averaged
cross-correlation and the cross-power spectrum are a Fourier transform pair. If the
two random processes are jointly wide-sense stationary, the time cross-correlation
depends only on the time difference τ. Thus, the above equations reduce to

R_XY(τ) = (1/2π) ∫_{−∞}^{∞} S_XY(ω) e^{jωτ} dω

S_XY(ω) = ∫_{−∞}^{∞} R_XY(τ) e^{−jωτ} dτ

i.e. the cross-correlation function and the cross-power density spectrum are a
Fourier transform pair.
Similarly, by repeating the above procedure, we can also prove the following
relations:

S_YX(ω) = ∫_{−∞}^{∞} A[R_YX(t, t + τ)] e^{−jωτ} dτ

A[R_YX(t, t + τ)] = (1/2π) ∫_{−∞}^{∞} S_YX(ω) e^{jωτ} dω

and, for jointly wide-sense stationary processes,

R_YX(τ) = (1/2π) ∫_{−∞}^{∞} S_YX(ω) e^{jωτ} dω

S_YX(ω) = ∫_{−∞}^{∞} R_YX(τ) e^{−jωτ} dτ

3.5 Derive an expression for the bandwidth of power density spectrum.

Ans:- BANDWIDTH OF THE POWER DENSITY SPECTRUM

The bandwidth of the power density spectrum is a measure of its spread and is called
the rms (root-mean-square) bandwidth. It is defined through the second moment of the
spectrum, ∫_{−∞}^{∞} ω² S_XX(ω) dω; upon normalization, the rms bandwidth is given by

W_rms² = ∫_{−∞}^{∞} ω² S_XX(ω) dω / ∫_{−∞}^{∞} S_XX(ω) dω

If a bandpass form of power spectrum is assumed, its significant spectral components
cluster near some frequencies ±ω̄₀, with mean frequency

ω̄₀ = ∫_{0}^{∞} ω S_XX(ω) dω / ∫_{0}^{∞} S_XX(ω) dω

Then the rms bandwidth is given by

W_rms² = 4 ∫_{0}^{∞} (ω − ω̄₀)² S_XX(ω) dω / ∫_{0}^{∞} S_XX(ω) dω
3.6 Derive an expression for total power in terms of cross power spectral density.
(OR)
Derive an expression for the cross power spectral density.

Ans:- CROSS-POWER DENSITY SPECTRUM

To obtain the cross-power density spectra of two random processes X(t) and Y(t),
consider the truncated portions of their sample functions:

x_T(t) = x(t), −T < t < T;  0, elsewhere
y_T(t) = y(t), −T < t < T;  0, elsewhere

where x_T(t) and y_T(t) are the portions of the sample functions x(t) and y(t) of the
random processes X(t) and Y(t) defined over the interval −T < t < T. The Fourier
transforms of the above functions are given by

X_T(ω) = ∫_{−∞}^{∞} x_T(t) e^{−jωt} dt = ∫_{−T}^{T} x(t) e^{−jωt} dt

Y_T(ω) = ∫_{−∞}^{∞} y_T(t) e^{−jωt} dt = ∫_{−T}^{T} y(t) e^{−jωt} dt

The cross power P_XY(T) of the two processes X(t) and Y(t) over the interval
−T < t < T is given by

P_XY(T) = (1/2T) ∫_{−T}^{T} x_T(t) y_T(t) dt = (1/2T) ∫_{−T}^{T} x(t) y(t) dt

By using Parseval's theorem,

P_XY(T) = (1/2T) ∫_{−T}^{T} x(t) y(t) dt = (1/2π) ∫_{−∞}^{∞} X_T*(ω) Y_T(ω)/(2T) dω

The total average cross power P_XY, as T → ∞, can be obtained by taking the
expectation. Thus,

P_XY = lim_{T→∞} E[P_XY(T)] = lim_{T→∞} (1/2T) ∫_{−T}^{T} E[x(t) y(t)] dt
     = (1/2π) ∫_{−∞}^{∞} lim_{T→∞} E[X_T*(ω) Y_T(ω)]/(2T) dω

Since E[x(t) y(t)] = R_XY(t, t), the following expressions can be deduced:

P_XY = lim_{T→∞} (1/2T) ∫_{−T}^{T} R_XY(t, t) dt = A[R_XY(t, t)]

and

P_XY = (1/2π) ∫_{−∞}^{∞} lim_{T→∞} E[X_T*(ω) Y_T(ω)]/(2T) dω

Here, the term lim_{T→∞} E[X_T*(ω) Y_T(ω)]/(2T) is the cross-power spectral density,
denoted S_XY(ω). Thus,

S_XY(ω) = lim_{T→∞} E[X_T*(ω) Y_T(ω)]/(2T)

Therefore, the cross-power formula now becomes

P_XY = (1/2π) ∫_{−∞}^{∞} S_XY(ω) dω

By repeating the above procedure, we can also define another cross-power density
spectrum by

S_YX(ω) = lim_{T→∞} E[Y_T*(ω) X_T(ω)]/(2T)

and its cross-power formula is given by

P_YX = (1/2π) ∫_{−∞}^{∞} S_YX(ω) dω = P_XY*

3.7 Discuss the properties of cross power density spectrum.

Ans:- PROPERTIES OF THE CROSS-POWER SPECTRUM:

1. S_XY(ω) = S_YX(−ω) = S_YX*(ω)
2. Re[S_XY(ω)] and Re[S_YX(ω)] are even functions of ω.
3. Im[S_XY(ω)] and Im[S_YX(ω)] are odd functions of ω.
4. S_XY(ω) = 0 and S_YX(ω) = 0 if X(t) and Y(t) are orthogonal.
5. If X(t) and Y(t) are uncorrelated and have constant means X̄ and Ȳ, then
S_XY(ω) = S_YX(ω) = 2π X̄ Ȳ δ(ω)
6. The time-averaged cross-correlation functions and the cross-power spectra are
Fourier transform pairs, i.e.,
A[R_XY(t, t + τ)] ↔ S_XY(ω)
A[R_YX(t, t + τ)] ↔ S_YX(ω)
If the random processes are jointly wide-sense stationary, these relations become

R_XY(τ) = (1/2π) ∫_{−∞}^{∞} S_XY(ω) e^{jωτ} dω

R_YX(τ) = (1/2π) ∫_{−∞}^{∞} S_YX(ω) e^{jωτ} dω

S_XY(ω) = ∫_{−∞}^{∞} R_XY(τ) e^{−jωτ} dτ

S_YX(ω) = ∫_{−∞}^{∞} R_YX(τ) e^{−jωτ} dτ


UNIT IV
RANDOM SIGNAL RESPONSE OF LINEAR SYSTEMS
4.1 Derive an equation for mean square value of random signal response of linear
systems.

Ans: MEAN SQUARE VALUE OF OUTPUT RESPONSE:


Let a random process 𝑋(𝑡) be applied to a continuous linear time invariant
system whose impulse response is ℎ(𝑡) as shown in below figure. Then the
output response 𝑌(𝑡) is also a random process.

The output response can be expressed by the convolution integral,


𝒀(𝒕) = 𝒉(𝒕) ∗ 𝑿(𝒕)

The mean square value of the output response is

E[Y²(t)] = E[(h(t) ∗ X(t))²]
 = E[(h(t) ∗ X(t))(h(t) ∗ X(t))]
 = E[ ∫_{−∞}^{∞} h(τ_1) X(t − τ_1) dτ_1 ∫_{−∞}^{∞} h(τ_2) X(t − τ_2) dτ_2 ]
 = E[ ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(τ_2) h(τ_1) X(t − τ_2) X(t − τ_1) dτ_1 dτ_2 ]
 = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(τ_2) h(τ_1) E[X(t − τ_2) X(t − τ_1)] dτ_1 dτ_2

where τ_1 and τ_2 are time-shift variables. If the input X(t) is a WSS random process,
then

E[X(t − τ_2) X(t − τ_1)] = R_XX(τ_1 − τ_2)

Therefore,

E[Y²(t)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} R_XX(τ_1 − τ_2) h(τ_1) h(τ_2) dτ_1 dτ_2

This expression is independent of time t and represents the output power.

4.2 Derive an equation for mean value of random signal response of linear systems.

Ans:- Mean Value of Output Response:


Let a random process 𝑋(𝑡) be applied to a continuous linear time invariant
system whose impulse response is ℎ(𝑡) as shown in below figure. Then the
output response 𝑌(𝑡) is also a random process.


The output response can be expressed by the convolution integral

Y(t) = h(t) ∗ X(t)

Consider that the random process X(t) is a wide-sense stationary process, and let the
mean value of the output response be E[Y(t)]. Then

E[Y(t)] = E[h(t) ∗ X(t)]
 = E[ ∫_{−∞}^{∞} h(τ) X(t − τ) dτ ]
 = ∫_{−∞}^{∞} h(τ) E[X(t − τ)] dτ

But E[X(t − τ)] = X̄ = constant, since X(t) is WSS. Then

E[Y(t)] = X̄ ∫_{−∞}^{∞} h(τ) dτ

If H(ω) is the Fourier transform of h(t), then

H(ω) = ∫_{−∞}^{∞} h(t) e^{−jωt} dt

At ω = 0,

H(0) = ∫_{−∞}^{∞} h(t) dt

is called the zero-frequency response of the system. Substituting this, we get

E[Y(t)] = Ȳ = H(0) X̄

which is constant. Thus, the mean value of the output response Y(t) for a WSS input
process is equal to the product of the mean value of the input process and the zero-
frequency response of the system.
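A discrete-time sketch of E[Y(t)] = H(0)X̄: for an FIR system, H(0) = Σ_n h[n], so the output mean should equal the tap sum times the input mean; the filter taps and input mean below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
h = np.array([0.5, 0.3, 0.2])                 # arbitrary impulse response, sum = 1.0
x = 2.0 + rng.standard_normal(1_000_000)      # WSS input with mean X_bar = 2

y = np.convolve(x, h, mode='valid')           # Y = h * X

H0 = h.sum()                                  # zero-frequency response H(0) = sum of h[n]
print(y.mean(), H0 * 2.0)                     # E[Y] ~ H(0) * X_bar = 2.0
```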

4.3 Derive an equation for autocorrelation function of an LTI system having white
noise as its input.

Ans:- AUTOCORRELATION FUNCTION OF OUTPUT RESPONSE:


Let a random process 𝑋(𝑡) be applied to a continuous linear time invariant
system whose impulse response is ℎ(𝑡) as shown in below figure. Then the
output response 𝑌(𝑡) is also a random process.

The output response can be expressed by the convolution integral

Y(t) = h(t) ∗ X(t)

Let the input X(t) be WSS. The autocorrelation function of Y(t) is

R_YY(t, t + τ) = E[Y(t) Y(t + τ)] = E[(h(t) ∗ X(t))(h(t + τ) ∗ X(t + τ))]
 = E[ ∫_{−∞}^{∞} h(τ_1) X(t − τ_1) dτ_1 ∫_{−∞}^{∞} h(τ_2) X(t + τ − τ_2) dτ_2 ]
 = E[ ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(τ_2) h(τ_1) X(t − τ_1) X(t + τ − τ_2) dτ_1 dτ_2 ]
 = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(τ_2) h(τ_1) E[X(t − τ_1) X(t + τ − τ_2)] dτ_1 dτ_2

If the input X(t) is a WSS random process, then

E[X(t − τ_1) X(t + τ − τ_2)] = R_XX(τ + τ_1 − τ_2)

Therefore, on substitution,

R_YY(τ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(τ_2) h(τ_1) R_XX(τ + τ_1 − τ_2) dτ_1 dτ_2

Here, the autocorrelation of the output response R_YY(τ) is the twofold convolution
of the input autocorrelation function with the network's impulse response; that is,

∴ R_YY(τ) = R_XX(τ) ∗ h(τ) ∗ h(−τ)

4.4 Derive an equation for cross-correlation functions of an LTI system having white
noise as its input.

Ans:- CROSS-CORRELATION FUNCTION OF INPUT AND OUTPUT PROCESSES:


Let a random process 𝑋(𝑡) be applied to a continuous linear time invariant
system whose impulse response is ℎ(𝑡) as shown in below figure. Then the
output response 𝑌(𝑡) is also a random process.

The output response can be expressed by the convolution integral,


𝒀(𝒕) = 𝒉(𝒕) ∗ 𝑿(𝒕)

If the input X(t) is a WSS random process, then the cross-correlation function of the input X(t) and the output Y(t) is

R_XY(t, t + τ) = E[X(t) Y(t + τ)] = E[X(t) (h(t + τ) ∗ X(t + τ))]

             = E[ X(t) ∫_{−∞}^{∞} h(τ₁) X(t + τ − τ₁) dτ₁ ]

             = ∫_{−∞}^{∞} h(τ₁) E[X(t) X(t + τ − τ₁)] dτ₁

Since X(t) is a WSS process, E[X(t) X(t + τ − τ₁)] = R_XX(t, t + τ − τ₁) = R_XX(τ − τ₁). Therefore,

R_XY(τ) = ∫_{−∞}^{∞} h(τ₁) R_XX(τ − τ₁) dτ₁

∴ R_XY(τ) = R_XX(τ) ∗ h(τ)

Similarly, it can be shown that

R_YX(τ) = R_XX(τ) ∗ h(−τ)

From the above equations it is clear that the cross-correlation functions depend on τ only and not on absolute time t.
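With white noise at the input, R_XY(τ) = σ²h(τ): the cross-correlation traces out the impulse response, which is the basis of correlation-based system identification. A brief sketch under the same assumptions as before:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma2, N = 1.0, 2_000_000
h = np.array([0.6, 0.25, 0.1, 0.05])     # "unknown" system taps (assumed)

x = rng.normal(0.0, np.sqrt(sigma2), N)
y = np.convolve(x, h, mode="full")[:N]

# R_XY[m] = (1/N) * sum_n x[n] y[n+m]  ->  sigma2 * h[m] for white input
Rxy = np.array([x[: N - m] @ y[m:] / N for m in range(len(h))])
print("estimated h:", np.round(Rxy / sigma2, 4))
print("true h     :", h)
```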
4.5 Derive an equation for the power spectral density of the system response.
(OR)
Derive an equation for the average power of the output response.

Ans: POWER DENSITY SPECTRUM AND AVERAGE POWER OF OUTPUT RESPONSE
Since the power density spectrum and the autocorrelation function form a Fourier transform pair,

S_YY(ω) = ∫_{−∞}^{∞} R_YY(τ) e^{−jωτ} dτ

The autocorrelation function of the output response is given by

R_YY(τ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(τ₁) h(τ₂) R_XX(τ + τ₁ − τ₂) dτ₁ dτ₂

On substitution we get

S_YY(ω) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(τ₁) h(τ₂) R_XX(τ + τ₁ − τ₂) e^{−jωτ} dτ₁ dτ₂ dτ

        = ∫_{−∞}^{∞} h(τ₁) ∫_{−∞}^{∞} h(τ₂) ∫_{−∞}^{∞} R_XX(τ + τ₁ − τ₂) e^{−jωτ} dτ dτ₁ dτ₂

Let τ + τ₁ − τ₂ = α, so that dτ = dα; then

S_YY(ω) = ∫_{−∞}^{∞} h(τ₁) ∫_{−∞}^{∞} h(τ₂) ∫_{−∞}^{∞} R_XX(α) e^{−jω(α − τ₁ + τ₂)} dα dτ₁ dτ₂

        = ∫_{−∞}^{∞} h(τ₁) e^{jωτ₁} dτ₁ · ∫_{−∞}^{∞} h(τ₂) e^{−jωτ₂} dτ₂ · ∫_{−∞}^{∞} R_XX(α) e^{−jωα} dα

∴ S_YY(ω) = H*(ω) H(ω) S_XX(ω)

or

S_YY(ω) = S_XX(ω) |H(ω)|²

where

H*(ω) = ∫_{−∞}^{∞} h(τ₁) e^{jωτ₁} dτ₁

H(ω) = ∫_{−∞}^{∞} h(τ₂) e^{−jωτ₂} dτ₂

S_XX(ω) = ∫_{−∞}^{∞} R_XX(α) e^{−jωα} dα

The average power in the system's output response is given by

P_YY = (1/2π) ∫_{−∞}^{∞} S_YY(ω) dω = (1/2π) ∫_{−∞}^{∞} S_XX(ω) |H(ω)|² dω
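The relation S_YY(ω) = S_XX(ω)|H(ω)|² can be checked empirically by estimating both PSDs with Welch's method and comparing their ratio against |H(f)|². A sketch (the FIR filter and sample rate are arbitrary assumptions):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)
fs = 1000.0                                   # sample rate, Hz (assumed)
h = signal.firwin(64, cutoff=100.0, fs=fs)    # example low-pass FIR (assumed)

x = rng.normal(0.0, 1.0, 500_000)             # white input
y = signal.lfilter(h, [1.0], x)

f, Pxx = signal.welch(x, fs=fs, nperseg=4096)
_, Pyy = signal.welch(y, fs=fs, nperseg=4096)
_, H = signal.freqz(h, worN=f, fs=fs)         # frequency response on same grid

# The ratio of output to input PSD should track |H(f)|^2
for k in [10, 100, 300]:
    print(f"{f[k]:7.1f} Hz  Pyy/Pxx = {Pyy[k]/Pxx[k]:.4e}  |H|^2 = {abs(H[k])**2:.4e}")
```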
4.6 Derive an equation for the cross power spectral density of the input and output response of an LTI system.

Ans: CROSS-POWER DENSITY SPECTRUM OF INPUT AND OUTPUT:

Since the cross-power density spectrum and the cross-correlation function form a Fourier transform pair,

S_XY(ω) = ∫_{−∞}^{∞} R_XY(τ) e^{−jωτ} dτ

We have

R_XY(τ) = ∫_{−∞}^{∞} h(τ₁) R_XX(τ − τ₁) dτ₁

So

S_XY(ω) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(τ₁) R_XX(τ − τ₁) dτ₁ e^{−jωτ} dτ

Let τ − τ₁ = α, so that dτ = dα; then

S_XY(ω) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(τ₁) R_XX(α) e^{−jω(α + τ₁)} dτ₁ dα

        = ∫_{−∞}^{∞} h(τ₁) e^{−jωτ₁} dτ₁ · ∫_{−∞}^{∞} R_XX(α) e^{−jωα} dα

∴ S_XY(ω) = H(ω) S_XX(ω)

Similarly, it can be proved that

S_YX(ω) = H(−ω) S_XX(ω)
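A quick empirical check of S_XY(ω) = H(ω)S_XX(ω) using scipy.signal.csd (a sketch; the filter and rates are assumed, and scipy's csd is taken to follow the conjugate-of-first-argument convention, which matches the definition used here):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(5)
fs = 1000.0
h = signal.firwin(32, cutoff=150.0, fs=fs)    # example FIR (assumed)

x = rng.normal(0.0, 1.0, 500_000)
y = signal.lfilter(h, [1.0], x)

f, Pxy = signal.csd(x, y, fs=fs, nperseg=4096)   # cross-PSD estimate
_, Pxx = signal.welch(x, fs=fs, nperseg=4096)
_, H = signal.freqz(h, worN=f, fs=fs)

# S_XY(f) / S_XX(f) should reproduce the complex frequency response H(f)
err = np.max(np.abs(Pxy / Pxx - H)[f < 120])
print("max passband deviation from H(f):", err)
```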
4.7 Prove the following relations with respect to the random signal response of linear systems:
R_YY(τ) = R_XY(τ) ∗ h(−τ)
R_YY(τ) = R_YX(τ) ∗ h(τ)
Ans: RELATION BETWEEN R_YY(τ) AND R_XY(τ):

We have

R_YY(τ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(τ₁) h(τ₂) R_XX(τ + τ₁ − τ₂) dτ₁ dτ₂

        = ∫_{−∞}^{∞} h(τ₁) [ ∫_{−∞}^{∞} h(τ₂) R_XX(τ + τ₁ − τ₂) dτ₂ ] dτ₁

And from the cross-correlation function,

R_XY(τ) = ∫_{−∞}^{∞} h(τ₁) R_XX(τ − τ₁) dτ₁

so the bracketed inner integral is R_XY evaluated at τ + τ₁. On comparison and substitution, we get

R_YY(τ) = ∫_{−∞}^{∞} h(τ₁) R_XY(τ + τ₁) dτ₁

∴ R_YY(τ) = R_XY(τ) ∗ h(−τ)

Similarly, it can be shown that

R_YY(τ) = R_YX(τ) ∗ h(τ)
4.8 Discuss linear system fundamentals.
(OR)
Show that the output response of an LTI system is equal to the convolution of the input process and the system's impulse response. Also find its spectral representation.
(OR)
Show that the output spectrum of an LTI system is equal to the product of the input spectrum and the system transfer function.
Ans: LINEAR SYSTEM FUNDAMENTALS

Linear System
A system is said to be linear if its response to a sum of inputs xₙ(t), n = 1, 2, …, N, is equal to the sum of the responses to the inputs taken separately. Thus, if xₙ(t) causes a response yₙ(t), n = 1, 2, …, N, then for a linear system

y(t) = L[ Σₙ₌₁ᴺ αₙ xₙ(t) ] = Σₙ₌₁ᴺ αₙ L[xₙ(t)] = Σₙ₌₁ᴺ αₙ yₙ(t)

must hold, where the αₙ are arbitrary constants and N may be infinite. Here L is an operator representing the action of the system on the inputs xₙ(t).
A general linear system block diagram is shown in the figure below.

The output response y(t) is given by

y(t) = L[x(t)]

From the definition and properties of the impulse function we have

x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ

On substitution, we get

y(t) = L[ ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ ] = ∫_{−∞}^{∞} x(τ) L[δ(t − τ)] dτ

Now let L[δ(t − τ)] = h(t, τ) be the impulse response of the linear system; then

y(t) = ∫_{−∞}^{∞} x(τ) h(t, τ) dτ

That is, the response of a linear system is completely determined by its impulse response. The block diagram of a linear system with impulse response h is shown in the figure below.
Linear Time-Invariant Systems:

A general linear system is said to be a linear time-invariant (LTI) system if its impulse response depends only on the time difference t − τ rather than on t and τ individually. Thus, the system impulse response can be represented as

h(t, τ) = h(t − τ)

So the response of the linear system is given by

y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ

The above equation is known as the convolution integral of x(t) and h(t) and is represented as

y(t) = x(t) ∗ h(t)
Time-Invariant System's Transfer Function:

The transfer function of the linear system can be obtained by taking the Fourier transform of the system response and is given by

Y(ω) = ∫_{−∞}^{∞} y(t) e^{−jωt} dt

     = ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} x(τ) h(t − τ) dτ ] e^{−jωt} dt

     = ∫_{−∞}^{∞} x(τ) [ ∫_{−∞}^{∞} h(t − τ) e^{−jω(t−τ)} dt ] e^{−jωτ} dτ

     = ∫_{−∞}^{∞} x(τ) H(ω) e^{−jωτ} dτ

     = H(ω) ∫_{−∞}^{∞} x(τ) e^{−jωτ} dτ

∴ Y(ω) = H(ω) X(ω)
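This frequency-domain product is exactly what FFT-based convolution computes. A minimal numeric sketch (the signal and taps are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(size=256)                  # example input (assumed)
h = np.array([1.0, -0.5, 0.25])           # example impulse response (assumed)

n = len(x) + len(h) - 1                   # full linear-convolution length
Y = np.fft.fft(h, n) * np.fft.fft(x, n)   # Y(w) = H(w) X(w)
y_freq = np.real(np.fft.ifft(Y))

y_time = np.convolve(x, h)                # direct time-domain convolution
print(np.allclose(y_time, y_freq))        # True
```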
UNIT V
Random Noise Processes
5.1 Define and classify noise with respect to communication systems.
(OR)
Classify the different types of internal noise. Explain each.

Ans: Noise is defined, in the electrical sense, as unwanted energy tending to interfere with the reception and reproduction of wanted signals. It represents a basic limitation on the transmission and detection of signals in communication systems. No system can be designed that is free from every kind of noise. Noise is random in nature.

Examples:
• Noise generates a random fuzzy sound in a broadcast receiver.
• In pulse communication systems, noise may produce unwanted pulses or cancel out wanted pulses, which may cause errors in the receiver.
• In radar systems, noise may mask weak echoes and so degrade detection.
• In image processing systems, noise may cause errors in the image capturing system.

Noise in Communication Systems:
Noise is often described as the limiting factor in communication systems: indeed, if there were no noise there would be virtually no problem in communications. Noise is a general term used to describe an unwanted signal that affects a wanted signal.

For example, consider the general block diagram of a communication system shown in the figure.

The noise, being an unwanted signal, arises from a variety of sources which may be considered in one of two main categories:
a) Interference, usually from a human source (man-made)
b) Naturally occurring random noise
TYPES OF NOISE
In an electronic communication system there exists a variety of noises that affect the communication process. The figure below illustrates the noise categories of an electronic communication system.
(i) Industrial noise:

• Industrial noise is man-made noise arising mainly from industrial and urban areas.
• It spreads over frequencies from about 1 to 600 MHz.
• Examples: rotors, stators, gears, fans, vibrating panels, automobile and aircraft ignition, switch gear, electrical arcs in electric motors, etc.
• This electrical noise can inject itself onto analog or digital signals and fool control equipment into thinking the process variable is different from what it actually is.
• This noise interferes with the original message signal and corrupts the parameters of the message signal.
(ii) Atmospheric noise:
• Atmospheric noise is generated by lightning discharges in thunderstorms and other natural electrical disturbances occurring in the atmosphere.
• It is also known as static noise.
• It is random in nature and consists of spurious radio signals distributed over a wide frequency range.
• Atmospheric noise becomes less severe at frequencies above 30 MHz because at those frequencies line-of-sight propagation is used.
• It is mainly caused by cloud-to-ground flashes, as their current is much stronger than that of cloud-to-cloud flashes.
• The field strength of atmospheric noise varies inversely with frequency. Hence atmospheric noise is less severe for TV reception than for radio reception.
• Static from distant sources varies in accordance with variations in propagation conditions. Hence atmospheric noise is higher at night than during the day.
• On a worldwide scale, about 3.5 million lightning flashes occur daily; that is about 40 lightning flashes per second.
• The sum of all these lightning flashes results in atmospheric noise.
(iii) Extraterrestrial noise:

Extraterrestrial noise is electrical noise which occurs due to solar and cosmic activity.

Solar noise:
• Noise that originates from the Sun is called solar noise.
• The Sun is a large body whose surface temperature is about 6000 °C, i.e., very high.
• Hence it acts as a noise source and emits constant noise under quiet conditions.
• This noise spreads roughly equally over all frequencies.
• Under normal conditions there is approximately constant radiation from the Sun due to its high temperature, but solar storms can cause a variety of electrical disturbances. The intensity of solar noise varies over time in a solar cycle.
• Additional large amounts of noise are produced when the Sun passes through disturbances such as corona flares, sunspots, etc.; during a solar storm, massive flares eject electrons, ions and atoms into space, producing bursts of broadband noise.
• The solar cycle repeats these eruptions of electrical disturbances approximately every 11 years.

Cosmic noise:
• Distant stars are large bodies at very high temperatures, but they are far away compared with the Sun. The noise they generate is called cosmic noise.
• While these stars are too far away to individually affect terrestrial communication systems, their large number leads to an appreciable collective effect. The noise produced by an individual star is small, but the overall noise radiated by all stars together is easily felt on the Earth.
• Cosmic noise has been observed in a range from about 8 MHz to 1.43 GHz. This noise spreads nearly uniformly over the entire sky and, being of thermal origin, is also called thermal noise.
• Apart from man-made noise, it is the strongest noise component over the range of about 20 to 120 MHz.
• Little cosmic noise below 20 MHz penetrates the ionosphere, while it effectively disappears at frequencies in excess of 1.5 GHz.
• We also receive noise from our galaxy, and hence it is also known as galactic noise.
(iv) Thermal Noise (Johnson Noise)
• This type of noise is generated by all resistances (e.g. a resistor, a semiconductor, the resistance of a resonant circuit, i.e. the real part of the impedance, a cable, etc.).
• Free electrons are in constant random motion at any temperature above absolute zero (0 K ≈ −273 °C), as shown in the figure below.
• As the temperature increases, the random motion increases and hence the thermal noise. Since moving electrons constitute a current, the motion can be measured as a mean square noise voltage across the resistance, even though there is no net current flow.
• Experimental results (by Johnson) and theoretical studies (by Nyquist) give the mean square noise voltage as

v̄² = 4 k T B R

where
k = Boltzmann's constant = 1.38 × 10⁻²³ joules per kelvin
T = absolute temperature in kelvin
B = noise bandwidth in Hz
R = resistance in ohms

The law relating the noise power Pₙ to the temperature and bandwidth is given by

Pₙ = k T B watts

• The equations above hold for frequencies up to about 10¹³ Hz (10,000 GHz) and for at least all practical temperatures, i.e. for all practical communication systems they may be assumed valid.
• Thermal noise is often referred to as 'white noise' because it has a uniform spectral density.
Note:
• Noise power spectral density is the noise power measured in a 1 Hz bandwidth, i.e. watts per Hz. A uniform spectral density means that the same amount of thermal noise is measured in any 1 Hz bandwidth, whether taken near 0 Hz, 1 MHz, 1 GHz or 10,000 GHz.
• From the equation Pₙ = kTB, the noise power spectral density is p₀ = kT watts per Hz, which plots as a flat (constant) spectrum.
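Plugging in representative numbers (a sketch; the resistance, temperature and bandwidth below are arbitrary example values) shows why "kTB" figures such as −174 dBm/Hz at 290 K are so commonly quoted:

```python
import numpy as np

k = 1.38e-23        # Boltzmann's constant, J/K
T = 290.0           # room temperature, K (assumed)
B = 1e6             # bandwidth, Hz (assumed)
R = 1e3             # resistance, ohms (assumed)

v_rms = np.sqrt(4 * k * T * B * R)      # open-circuit rms noise voltage
P_n = k * T * B                         # available noise power
print(f"rms noise voltage : {v_rms*1e6:.2f} uV")              # ~4 uV
print(f"available power   : {10*np.log10(P_n/1e-3):.1f} dBm") # ~ -114 dBm
```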
(v) Shot Noise

• Shot noise was originally used to describe noise due to random fluctuations in electron emission from cathodes in vacuum tubes (called shot noise by analogy with lead shot).
• Shot noise also occurs in semiconductors due to the liberation of charge carriers, which carry discrete amounts of charge, into potential barrier regions such as occur in pn junctions. The discrete amounts of charge give rise to a current which is effectively a series of current pulses.

For pn junctions the mean square shot noise current is

Īₙ² = 2 (I_DC + 2 I₀) q_e B (amps²)

where
I_DC is the direct current across the pn junction (amps),
I₀ is the reverse saturation current (amps),
q_e is the electron charge = 1.6 × 10⁻¹⁹ C,
B is the effective noise bandwidth (Hz).

Shot noise is found to have a uniform spectral density, as for thermal noise. Shot noise is also called 'Schottky noise' or 'Poisson noise'. The shot noise power spectral density at frequency ω is given by

S(ω) = 2 q_e I₀

where
q_e = electron charge = 1.6 × 10⁻¹⁹ coulombs
I₀ = mean value of the current in amps

Note:
• The rms shot noise current Iₙ in a diode is given by Iₙ = √(2 q_e I₀ B)
• The rms shot noise current in a bipolar junction transistor is given by Iₙ = √(2 q_e I_E B), where I_E is the average emitter current in amps.
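A one-line numeric check of the rms shot-noise formula (the example current and bandwidth are assumed values):

```python
import numpy as np

q_e = 1.6e-19     # electron charge, C
I0 = 1e-3         # mean (DC) current, A (assumed)
B = 1e6           # noise bandwidth, Hz (assumed)

I_n = np.sqrt(2 * q_e * I0 * B)      # rms shot noise current
print(f"rms shot noise current: {I_n*1e9:.1f} nA")   # ~17.9 nA
```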
(vi) Low Frequency or Flicker Noise
• Active devices, integrated circuits, diodes, transistors, etc. also exhibit a low frequency noise which is frequency dependent (i.e. non-uniform), known as flicker noise or 'one-over-f' noise.
• The mean square value is found to be proportional to 1/fⁿ, where f is the frequency and n ≈ 1. Thus the noise at higher frequencies is less than at lower frequencies.
• Flicker noise is due to impurities in the material, which in turn cause charge-carrier fluctuations.

(vii) Excess Resistor Noise
• Thermal noise in resistors does not vary with frequency, as previously noted, but many resistors also generate an additional frequency-dependent noise referred to as excess noise.
• This noise also exhibits a 1/f characteristic, similar to flicker noise.
• Carbon resistors generally generate the most excess noise, whereas wire-wound resistors usually generate a negligible amount of excess noise.
• However, the inductance of wire-wound resistors limits their frequency range, and metal-film resistors are usually the best choice for high-frequency communication circuits where low noise and constant resistance are required.
5.2 With suitable mathematical equations, write a note on white noise. Also define colored noise.
(OR)
Discuss in detail white and colored noise with appropriate equations and sketches.

Ans: WHITE AND COLORED NOISE

• The power spectral density of thermal noise is independent of the operating frequency, similar to white light, which contains equal amounts of all frequencies within the visible band of electromagnetic radiation. That is why thermal noise is also called "white noise".
• Examples are solar noise, cosmic noise and shot noise. All these noises come under the title white noise.
• Alternatively, a sample function n(t) of a wide-sense stationary noise random process N(t) is called white noise if the power density spectrum of N(t) is constant at all frequencies and is given by

S_NN(ω) = N₀/2

where N₀ is a real positive constant.
• The autocorrelation function of N(t) can be obtained by taking the inverse Fourier transform of the above equation, and is given by

R_NN(τ) = (N₀/2) δ(τ)

The above two functions are illustrated in the figure below.

Noise having a nonzero and constant power spectrum over a finite frequency band and zero everywhere else is called band-limited white noise.

Colored Noise:
A noise that is not white is called colored noise. By analogy with colored light, which contains only a portion of the visible-light frequencies, a noise whose power occupies only a portion of the spectrum is called colored noise.
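The δ-shaped autocorrelation is easy to see in simulation: a white Gaussian sequence has sample autocorrelation close to zero at every non-zero lag (a sketch; variance and length are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
N0_over_2 = 1.0                       # PSD level N0/2 (assumed)
n = rng.normal(0.0, np.sqrt(N0_over_2), 1_000_000)   # discrete white noise

for m in range(5):
    R = n[: len(n) - m] @ n[m:] / len(n)   # sample autocorrelation at lag m
    print(f"R_NN[{m}] = {R:+.4f}")         # ~1.0 at lag 0, ~0 elsewhere
```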
5.3 Describe band pass, band-limited and narrow band processes.
(OR)
Define and explain band pass, band-limited and narrow band random processes.

Ans:
Types of Random Processes

Some important types of random processes are
i. Low pass random processes
ii. Band pass random processes
iii. Band limited random processes
iv. Narrow band random processes

Low pass random processes:
A random process X(t) is defined as a low pass random process if its power spectral density S_XX(ω) has its significant components in the low-frequency band around ω = 0, as shown in the figure below.
Band pass random processes:

• A random process X(t) is called a band pass process if its power spectral density S_XX(ω) has its significant components within a bandwidth W that does not include ω = 0.
• In practice, the spectrum may still have a small amount of power near ω = 0, as shown in the figure.

The spectral components outside the band W are very small and can be neglected. For example,
• Modulated signals with carrier frequency ω₀ and bandwidth W are band pass random processes.
• The noise transmitted over a communication channel can be modelled as a band pass process.

Band limited random processes:
• A band pass random process is said to be band limited if its power spectrum components are zero outside the frequency band of width W that does not include ω = 0.
• The power density spectrum of a band limited band pass process is shown in the figure.
Narrow band random processes:

• A band limited random process is said to be a narrow band process if the bandwidth W is very small compared with the band centre frequency, i.e. W ≪ ω₀, where W is the bandwidth and ω₀ is the frequency at which the power spectrum is maximum.
• The power density spectrum of a narrow band process N(t) is shown in Fig. (a).
• The narrow band process can be modelled as a cosine function slowly varying in amplitude and phase at frequency ω₀, as shown in Fig. (b).
• It can be expressed as

N(t) = A(t) cos[ω₀ t + θ(t)]

where A(t) is an amplitude random process and θ(t) is a phase random process.

Representation of narrow band random processes:
The PSD of a narrow band noise process and a typical sample function n(t) of it are sketched in the figures below. The narrow band random process N(t) can be represented as
N(t) = A(t) cos[ω₀ t + θ(t)]

where A(t) is the time-dependent amplitude variation and θ(t) is the time-dependent phase variation. On expansion, we get

N(t) = A(t) cos(ω₀ t) cos(θ(t)) − A(t) sin(ω₀ t) sin(θ(t))
     = X(t) cos(ω₀ t) − Y(t) sin(ω₀ t)

where
X(t) = A(t) cos(θ(t)) is called the in-phase component, and
Y(t) = A(t) sin(θ(t)) is called the quadrature-phase component of the random process N(t).

The amplitude and phase components are given by

A(t) = √(X²(t) + Y²(t))

θ(t) = tan⁻¹(Y(t) / X(t))
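One way to see this decomposition numerically is to band-pass filter white noise and recover A(t) and θ(t) from the analytic signal. A sketch (centre frequency, bandwidth and sample rate are assumed values; scipy.signal.hilbert supplies the analytic signal):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(8)
fs, f0, W = 10_000.0, 1_000.0, 100.0   # sample rate, centre freq, bandwidth (assumed)

# Narrow band noise: white noise through a band pass filter around f0
b, a = signal.butter(4, [f0 - W / 2, f0 + W / 2], btype="bandpass", fs=fs)
n = signal.lfilter(b, a, rng.normal(size=100_000))

z = signal.hilbert(n)                  # analytic signal n(t) + j*n_hat(t)
A = np.abs(z)                          # slowly varying envelope A(t)
t = np.arange(len(n)) / fs
theta = np.unwrap(np.angle(z)) - 2 * np.pi * f0 * t   # slowly varying phase

print("envelope samples :", A[1000:1005].round(3))
print("phase samples    :", theta[1000:1005].round(2))
```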
5.4 List out the properties of band-limited processes.

Ans: PROPERTIES OF BAND LIMITED RANDOM PROCESSES:

Let N(t) = X(t) cos(ω₀ t) − Y(t) sin(ω₀ t) be any band limited WSS random process with zero mean value and power spectral density S_NN(ω). Then some important properties of X(t) and Y(t) are:

1. If N(t) is WSS, then X(t) and Y(t) are jointly wide sense stationary.
2. If N(t) has zero mean, i.e. if E[N(t)] = 0, then E[X(t)] = E[Y(t)] = 0.
3. The mean square values of the processes are equal: E[N²(t)] = E[X²(t)] = E[Y²(t)].
4. Both processes X(t) and Y(t) have the same autocorrelation function: R_XX(τ) = R_YY(τ).
5. The cross-correlation functions of X(t) and Y(t) satisfy R_XY(τ) = −R_YX(τ). If the processes are orthogonal, then R_XY(τ) = R_YX(τ) = 0.
6. Both X(t) and Y(t) have the same power spectral density:
   S_YY(ω) = S_XX(ω) = S_NN(ω − ω₀) + S_NN(ω + ω₀) for |ω| ≤ W, and 0 elsewhere.
7. The cross power spectrums satisfy S_XY(ω) = −S_YX(ω).
8. If N(t) is a Gaussian random process, then X(t) and Y(t) are jointly Gaussian.
9. The relationship between the correlation functions and the power spectrum S_NN(ω) is
   R_XX(τ) = (1/π) ∫₀^∞ S_NN(ω) cos[(ω − ω₀)τ] dω
   R_YX(τ) = (1/π) ∫₀^∞ S_NN(ω) sin[(ω − ω₀)τ] dω
10. If N(t) is zero-mean Gaussian and its PSD S_NN(ω) is symmetric about ±ω₀, then X(t) and Y(t) are statistically independent.
5.5 Show that the available power from a noisy resistor is independent of the source resistance and depends only on its physical temperature.

Ans:
RESISTIVE (THERMAL) NOISE:

• Thermal noise generated by a resistor is proportional to its absolute temperature and to the bandwidth: Pₙ ∝ TB, or

Pₙ = k T B watts

where
k = Boltzmann's constant = 1.38 × 10⁻²³ J/K
T = absolute temperature in kelvin
B = bandwidth in Hz

• Now let us consider a noisy resistor R.
• The noise voltage generated in the noisy resistor may be considered as a noise voltage source and can be modelled by its Thevenin equivalent circuit, as shown in the figure below.

• From the circuit, the mean square value of the noise voltage can be written as

v̄ₙ² = 4 k T B R_eq, where R_eq = R ∥ R_L

Available noise power:
• It is the maximum power that can be drawn from the noise source and is given by

P_a = v̄ₙ² / (4 R_eq) = 4 k T B R_eq / (4 R_eq) = k T B watts

Hence it is proved that the available power from a noisy resistor is independent of the resistance of the source and depends only on its physical temperature (and the bandwidth over which it is measured).
5.6 Explain how the available power gain of a two-port network can be estimated.
(OR)
Define the available power gain of a two-port network and prove the following for a cascade connection:

G_a(ω) = ∏ₘ₌₁ᴹ G_m(ω)

Ans:
AVAILABLE POWER GAIN

For a two-port network, the available power gain is defined as the ratio of the maximum (available) PSD of the signal at the output to that at the input:

G_a(ω) = [max. PSD of the signal at the output of the network] / [max. PSD of the signal at the input of the network]

∴ G_a(ω) = S_so(ω) / S_si(ω)

Therefore the available output power can be written as

P_ao = (1/2π) ∫_{−∞}^{∞} S_so(ω) dω = (1/2π) ∫_{−∞}^{∞} G_a(ω) S_si(ω) dω

Available power gain of M cascaded networks

Now assume that M networks are connected in cascade with individual gains G₁, G₂, …, G_M, where m = 1, 2, …, M indexes the networks, as shown in the figure below. The output of stage m − 1 is the input of stage m, so S_mi(ω) = S_(m−1)o(ω).

The available power gain of the cascade is given by

G_a(ω) = S_Mo(ω) / S_1i(ω)

       = [S_Mo(ω)/S_Mi(ω)] · [S_Mi(ω)/S_1i(ω)]

       = [S_Mo(ω)/S_Mi(ω)] · [S_(M−1)o(ω)/S_(M−1)i(ω)] · [S_(M−1)i(ω)/S_1i(ω)]

       = [S_Mo(ω)/S_Mi(ω)] · [S_(M−1)o(ω)/S_(M−1)i(ω)] · [S_(M−2)o(ω)/S_(M−2)i(ω)] ⋯ [S_1o(ω)/S_1i(ω)]

       = G_M · G_{M−1} · G_{M−2} ⋯ G₁

∴ G_a(ω) = ∏ₘ₌₁ᴹ G_m(ω)

i.e. the available power gain of the cascade is equal to the product of the individual gains.
5.7 Discuss and derive the expression for the noise bandwidth of a low pass filter with white noise as its input.

Ans: Noise bandwidth

• Consider a system having a low pass transfer function H(ω), and assume white noise is applied at the input as shown in the figure below.
• The power density spectrum of the white noise input is

S_XX(ω) = N₀/2

• The power density spectrum of Y(t) is

S_YY(ω) = |H(ω)|² S_XX(ω)

• The total average power emerging from the network is given by

P_YY = (1/2π) ∫_{−∞}^{∞} S_YY(ω) dω = (1/2π) ∫_{−∞}^{∞} S_XX(ω) |H(ω)|² dω = (1/2π) ∫_{−∞}^{∞} (N₀/2) |H(ω)|² dω

Now consider an idealized system that is equivalent to the actual system, which means that
• both produce the same average output power when they are excited by the same white noise source, and
• both have the same value of the power transfer function at mid band, i.e. |H(0)|² is the same in both systems.

The transfer function of the idealized system is defined as

|H_I(ω)|² = |H(0)|² for |ω| < W_N, and 0 for |ω| > W_N

where W_N is a constant selected to make the output powers of the two systems equal. Therefore, the output power of the idealized system is given by

(1/2π) ∫_{−∞}^{∞} (N₀/2) |H_I(ω)|² dω = (N₀/4π) ∫_{−W_N}^{W_N} |H(0)|² dω = (N₀/4π) |H(0)|² · 2W_N = N₀ |H(0)|² W_N / (2π)

By equating both powers, we get

(1/2π) ∫_{−∞}^{∞} (N₀/2) |H(ω)|² dω = N₀ |H(0)|² W_N / (2π)

⟹ W_N = [ (1/2) ∫_{−∞}^{∞} |H(ω)|² dω ] / |H(0)|²

Assuming the actual system's impulse response is real, |H(ω)|² is an even function of ω, so

(1/2) ∫_{−∞}^{∞} |H(ω)|² dω = (1/2) · 2 ∫₀^∞ |H(ω)|² dω = ∫₀^∞ |H(ω)|² dω

Therefore

W_N = ∫₀^∞ |H(ω)|² dω / |H(0)|²

W_N is called the noise bandwidth of the system.

• If the centre-band frequency of a band pass transfer function is ω₀, then the noise bandwidth is given by

W_N = ∫₀^∞ |H(ω)|² dω / |H(ω₀)|²
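For a first-order RC low pass filter, |H(ω)|² = 1/(1 + (ω/ω_c)²), and the integral evaluates to W_N = (π/2)ω_c, i.e. the noise bandwidth is about 1.57 times the 3 dB bandwidth. A numeric sketch (the cutoff value is an arbitrary assumption):

```python
import numpy as np
from scipy.integrate import quad

wc = 2 * np.pi * 1e3          # 3 dB cutoff in rad/s (assumed: 1 kHz)
H2 = lambda w: 1.0 / (1.0 + (w / wc) ** 2)   # |H(w)|^2 of an RC low pass

integral, _ = quad(H2, 0, np.inf)
WN = integral / H2(0.0)       # W_N = int_0^inf |H|^2 dw / |H(0)|^2
print("numeric  W_N:", WN)
print("analytic W_N:", np.pi / 2 * wc)       # (pi/2) * wc
```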
5.8 For a cascade connection of M two-port networks, derive the expression for the overall equivalent noise temperature.
(OR)
For M cascaded networks, show that

T_e = T_e1 + T_e2/G₁ + T_e3/(G₁G₂) + ⋯ + T_eM/(G₁G₂G₃ ⋯ G_{M−1})
Ans:
Equivalent noise temperature of cascaded networks

Let a network be driven by a source with effective noise temperature T_s, and let the noise generated within the network be represented by an effective input noise temperature T_e. In such a case, the total available output noise power is equal to the sum of the powers due to the source temperature and the effective input temperature, as shown in the figure below.

As shown in the figure, the total available noise power is denoted by P_TA, the available noise power due to the source alone by P_SA, and the available noise power due to the network alone by P_NA.

From the definition of noise power due to temperature, the available power due to the source alone is given by

P_SA = G_a k T_s B

and the noise power due to the network alone is given by

P_NA = G_a k T_e B

Therefore, the total available noise power of the network is

P_TA = P_SA + P_NA = G_a k T_s B + G_a k T_e B = G_a k (T_s + T_e) B

The network can now be redrawn as shown in the figure below.

Now let us consider M cascaded networks, each with available power gain G₁, G₂, …, G_M respectively, where m = 1, 2, … indexes the networks, as shown in the figure below.

From the network, the total available power at the output of network 1 is

P_TA1 = G₁ k T_s B + G₁ k T_e1 B

The total available power at the output of network 2 is

P_TA2 = G₂ (G₁ k T_s B + G₁ k T_e1 B) + G₂ k T_e2 B
      = G₁G₂ k T_s B + G₁G₂ k T_e1 B + G₂ k T_e2 B

The total available power at the output of network 3 is

P_TA3 = G₃ (G₁G₂ k T_s B + G₁G₂ k T_e1 B + G₂ k T_e2 B) + G₃ k T_e3 B
      = G₁G₂G₃ k T_s B + G₁G₂G₃ k T_e1 B + G₂G₃ k T_e2 B + G₃ k T_e3 B

Similarly, repeating the procedure for all networks, the total available power at the output of network M can be written as

P_TAM = G₁G₂G₃⋯G_M k T_s B + G₁G₂G₃⋯G_M k T_e1 B + G₂G₃⋯G_M k T_e2 B + ⋯ + G_M k T_eM B
      = G k T_s B + G k T_e1 B + G₂G₃⋯G_M k T_e2 B + ⋯ + G_M k T_eM B

where G = G₁G₂G₃⋯G_M. Now consider the equivalent network of the cascade connection, redrawn as a single network with overall gain G and effective input noise temperature T_e. From the equivalent network we have

G k T_s B + G k T_e B = G k T_s B + G k T_e1 B + G₂G₃⋯G_M k T_e2 B + ⋯ + G_M k T_eM B

G k T_e B = G k T_e1 B + (G/G₁) k T_e2 B + ⋯ + (G/(G₁G₂G₃⋯G_{M−1})) k T_eM B

Therefore, the effective noise temperature is given by

T_e = T_e1 + T_e2/G₁ + T_e3/(G₁G₂) + ⋯ + T_eM/(G₁G₂G₃⋯G_{M−1})

where T_e is the effective input noise temperature of the cascade and G = G₁G₂G₃⋯G_M is the overall gain of the cascaded networks.
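A small helper makes the formula concrete (the stage gains and temperatures below are arbitrary example values, e.g. an LNA followed by a lossy cable and a mixer):

```python
def cascade_noise_temperature(stages):
    """stages: list of (gain_linear, Te_kelvin) tuples, input side first."""
    Te, gain_product = 0.0, 1.0
    for G, Te_m in stages:
        Te += Te_m / gain_product   # each stage's Te divided by preceding gains
        gain_product *= G
    return Te

# Assumed example: LNA (G=100, Te=50 K), cable (G=0.5, Te=290 K),
# mixer (G=10, Te=870 K)
stages = [(100.0, 50.0), (0.5, 290.0), (10.0, 870.0)]
print(f"cascade Te = {cascade_noise_temperature(stages):.1f} K")
# 50 + 290/100 + 870/(100*0.5) = 70.3 K -> the first stage dominates
```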
5.9 In a cascade of M network stages, where the m-th stage has available power gain G_m and the overall operating noise figure is F_OP, show that:

F_OP = F₁ + (F₂ − 1)/G₁ + (F₃ − 1)/(G₁G₂) + ⋯ + (F_M − 1)/(G₁G₂⋯G_{M−1})
Ans: SPOT NOISE FIGURE

Consider a two-port network as shown in the figure below; the total available noise power is denoted by P_TA, the available noise power due to the source alone by P_SA, and the available noise power due to the network alone by P_NA.

• The spot noise figure of a two-port network is defined as the ratio of the total available noise power to the available noise power due to the source alone, and is given by

F = P_TA / P_SA = (P_SA + P_NA) / P_SA = 1 + P_NA / P_SA

• From the definitions of the noise powers, upon substitution we get

F = 1 + P_NA / P_SA = 1 + (G k T_e B) / (G k T_s B)

∴ F = 1 + T_e / T_s

Similarly, for a cascade of M networks:

For network 1,

F₁ = 1 + T_e1/T_s ⟹ T_e1 = (F₁ − 1) T_s

Similarly,

F₂ = 1 + T_e2/T_s ⟹ T_e2 = (F₂ − 1) T_s

and so on. We also have

T_e = T_e1 + T_e2/G₁ + T_e3/(G₁G₂) + ⋯ + T_eM/(G₁G₂G₃⋯G_{M−1})

Substituting T_e = (F_OP − 1)T_s together with the values of T_e1, T_e2, T_e3, … we get

(F_OP − 1) T_s = (F₁ − 1) T_s + (F₂ − 1)T_s/G₁ + (F₃ − 1)T_s/(G₁G₂) + ⋯ + (F_M − 1)T_s/(G₁G₂G₃⋯G_{M−1})

∴ F_OP = F₁ + (F₂ − 1)/G₁ + (F₃ − 1)/(G₁G₂) + ⋯ + (F_M − 1)/(G₁G₂⋯G_{M−1})

Hence proved.
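The same cascade helper pattern applies to noise figures (Friis' formula). A sketch with assumed stage values, converting from the dB figures usually quoted on datasheets:

```python
import math

def friis_noise_figure(stages):
    """stages: list of (gain_linear, F_linear) tuples, input side first."""
    F, gain_product = 0.0, 1.0
    for i, (G, F_m) in enumerate(stages):
        F += F_m if i == 0 else (F_m - 1.0) / gain_product
        gain_product *= G
    return F

db = lambda x: 10 ** (x / 10)   # dB -> linear
# Assumed: LNA with 20 dB gain / 1 dB NF, then amplifier with 10 dB gain / 6 dB NF
stages = [(db(20), db(1.0)), (db(10), db(6.0))]
F = friis_noise_figure(stages)
print(f"overall NF = {10*math.log10(F):.2f} dB")   # close to the first stage's NF
```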
5.10 Two resistors with resistances R₁ and R₂ are connected in parallel and have physical temperatures T₁ and T₂ respectively. Find the effective noise temperature T_s of an equivalent resistor with resistance equal to the parallel combination of R₁ and R₂.
(OR)
Derive an equation for the effective noise temperature of resistors connected in parallel.
Ans: Let us consider two resistors R₁ and R₂ operating at different temperatures and connected in parallel as shown in the figure below. Let G₁ = 1/R₁ and G₂ = 1/R₂ be the corresponding conductances.

The mean square noise current of resistor R₁ is ī²ₙ₁ = 4kT₁BG₁
The mean square noise current of resistor R₂ is ī²ₙ₂ = 4kT₂BG₂

ī²ₙ₁ + ī²ₙ₂ = 4kT₁BG₁ + 4kT₂BG₂ = 4k(T₁G₁ + T₂G₂)B ——(a)

Now, let us assume that the noise temperature across the output terminals is T_s and the equivalent conductance is G_eq = G₁ + G₂. The mean square noise current across the output terminals is then

ī²ₙ = 4kT_s B G_eq ——(b)

Equations (a) and (b) must be equal. Therefore, on equating both currents,

4kT_s B G_eq = 4k(T₁G₁ + T₂G₂)B

⟹ T_s = (T₁G₁ + T₂G₂)/G_eq = (T₁/R₁ + T₂/R₂) / (1/R₁ + 1/R₂)

∴ T_s = (T₁R₂ + T₂R₁)/(R₁ + R₂)
5.11 Find the effective noise temperature T_s of an equivalent resistor with resistance equal to the series combination of R₁ and R₂.

Ans:
Let us consider two resistors R₁ and R₂ operating at different temperatures and connected in series as shown in the figure below.
The mean square noise voltage of resistor R₁ is v̄²ₙ₁ = 4kT₁BR₁
The mean square noise voltage of resistor R₂ is v̄²ₙ₂ = 4kT₂BR₂

The mean square noise voltage across A and B is

v̄²ₙ = v̄²ₙ₁ + v̄²ₙ₂ = 4kT₁BR₁ + 4kT₂BR₂ = 4k(T₁R₁ + T₂R₂)B ——(a)

Now, let us assume that the noise temperature across the terminals A–B is T_s and the equivalent resistance is R_eq = R₁ + R₂. The mean square noise voltage across A–B is then

v̄²ₙ = 4kT_s B R_eq ——(b)

Equations (a) and (b) must be equal. Therefore, on equating both voltages,

4kT_s B R_eq = 4k(T₁R₁ + T₂R₂)B

T_s = (T₁R₁ + T₂R₂)/R_eq = (T₁R₁ + T₂R₂)/(R₁ + R₂)

Here T_s is known as the equivalent or effective noise temperature.
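A quick numeric check of the series and parallel results above (the resistor values and temperatures are arbitrary examples):

```python
def Ts_series(R1, T1, R2, T2):
    # Ts = (T1*R1 + T2*R2) / (R1 + R2): weighted by resistance
    return (T1 * R1 + T2 * R2) / (R1 + R2)

def Ts_parallel(R1, T1, R2, T2):
    # Ts = (T1*R2 + T2*R1) / (R1 + R2): weighted by the *other* resistance
    return (T1 * R2 + T2 * R1) / (R1 + R2)

R1, T1 = 50.0, 290.0     # ohms, kelvin (assumed)
R2, T2 = 100.0, 400.0    # ohms, kelvin (assumed)
print("series   Ts:", Ts_series(R1, T1, R2, T2))    # pulled toward T2 (larger R)
print("parallel Ts:", Ts_parallel(R1, T1, R2, T2))  # pulled toward T1 (smaller R)
```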
5.12 Derive an equation for the average noise figure of practical noisy networks.

Ans: AVERAGE NOISE FIGURE:

For practical noisy networks, it is necessary to calculate the average noise figure, denoted by F̄_OP. It is defined as the total output available noise power from a network divided by the total output noise power due to the source alone:

F̄_OP = P_TO / P_SO

By definition, the total output power due to the source alone is

P_SO = ∫₀^∞ P_SA dω = ∫₀^∞ G k T_s B dω

Similarly, the total output power is

P_TO = ∫₀^∞ P_TA dω = ∫₀^∞ F_op P_SA dω = ∫₀^∞ F_op G k T_s B dω

F̄_OP = P_TO / P_SO = [∫₀^∞ F_op G k T_s B dω] / [∫₀^∞ G k T_s B dω] = [∫₀^∞ F_op G T_s dω] / [∫₀^∞ G T_s dω]

If T_s is assumed to be approximately constant, then

F̄_OP = [∫₀^∞ F_op G dω] / [∫₀^∞ G dω]
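A numeric sketch of this frequency-weighted average (the gain profile G(ω) and spot-noise-figure profile F_op(ω) below are invented illustrative shapes, not values from the text):

```python
import numpy as np
from scipy.integrate import quad

wc = 1e5                                        # gain roll-off, rad/s (assumed)
G = lambda w: 1.0 / (1.0 + (w / wc) ** 2)       # assumed gain profile
Fop = lambda w: 2.0 + np.tanh(w / wc)           # assumed spot noise figure (2 -> 3)

upper = 1e3 * wc                                # G is negligible far beyond wc
num, _ = quad(lambda w: Fop(w) * G(w), 0, upper, limit=200)
den, _ = quad(G, 0, upper, limit=200)
print("average noise figure F_bar_OP:", num / den)
```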
5.13 Derive an equation for the average noise temperature of practical noisy networks.

Ans:
The average noise temperature of a practical system can be calculated in two forms: the average effective source temperature and the average effective input noise temperature.

From the definition of the total available noise power of a network, we have

P_TA = G_a k (T_s + T_e) B

From the definition of the total output noise power of a network, we have

P_TO = ∫₀^∞ P_TA dω = ∫₀^∞ G_a k (T_s + T_e) B dω ——(a)

Now let us define T̄_s as the average effective source temperature and T̄_e as the average effective input noise temperature, which are required to produce the same output noise power; then

P_TO = ∫₀^∞ G_a k (T̄_s + T̄_e) B dω = (T̄_s + T̄_e) k B ∫₀^∞ G_a dω ——(b)

Equating (a) and (b) on a term-by-term basis,

T̄_s ∫₀^∞ G_a k B dω = ∫₀^∞ G_a k B T_s dω

T̄_s = [∫₀^∞ G_a T_s dω] / [∫₀^∞ G_a dω]

Similarly, we get

T̄_e = [∫₀^∞ G_a T_e dω] / [∫₀^∞ G_a dω]