Probability and Stochastic Processes
STOCHASTIC PROCESSES
(20A54403)
LECTURE NOTES
Prepared by:
Dr. S. Muni Rathnam, Professor
Department of Electronics and Communication Engineering
Accredited by NAAC, NBA (EEE, ECE & CSE) & ISO 9001:2015 Certified Institution
Course Outcomes (CO): After completion of the course, the student will be able to:
CO_I: Understand the fundamental concepts of probability theory and random variables, and evaluate probability distribution and density functions for different random variables.
CO_II: Analyze a random variable by calculating its statistical parameters.
CO_III: Analyze multiple random variables by calculating different statistical parameters, and understand the linear transformation of Gaussian random variables.
CO_IV: Analyze a random process in both the time and frequency domains.
CO_V: Derive the response of a linear system to random signals as input, and explain the low-pass and band-pass noise models of a random process.
Unit – I: Probability & Random Variable
Probability through Sets and Relative Frequency: Experiments and Sample Spaces, Discrete and Continuous Sample
Spaces, Events, Probability Definitions and Axioms, Mathematical Model of Experiments, Probability as a Relative
Frequency, Joint Probability, Conditional Probability, Total Probability, Bayes’ Theorem, Independent Events, Problem
Solving.
Random Variable: Definition of a Random Variable, Conditions for a Function to be a Random Variable, Discrete,
Continuous, Mixed Random Variable, Distribution and Density functions, Properties, Binomial, Poisson, Uniform,
Gaussian, Exponential, Rayleigh, Conditional Distribution, Methods of defining Conditioning Event, Conditional
Density, Properties, Problem Solving.
Unit – II: Operations on Random Variables
Operations on Single Random Variable: Introduction, Expectation of a random variable, moments-moments about
the origin, Central moments, Variance and Skew, Chebyshev’s inequality, moment generating function, characteristic
function, transformations of random variable.
Multiple Random Variables: Vector Random Variables, Joint Distribution Function, Properties of Joint Distribution,
Marginal Distribution Functions, Conditional Distribution and Density – Point Conditioning, Interval conditioning,
Statistical Independence, Sum of Two Random Variables, Sum of Several Random Variables, Central Limit Theorem,
(Proof not expected), Unequal Distribution, Equal Distributions.
Unit – III: Operations on Multiple Random Variables
Operations on Multiple Random Variables: Expected Value of a Function of Random Variables, Joint Moments
about the Origin, Joint Central Moments, Joint Characteristic Functions, Jointly Gaussian Random Variables: Two
Random Variables case, N Random Variable case, Properties of Gaussian random variables, Transformations of
Multiple Random Variables, Linear Transformations of Gaussian Random Variables.
Unit – IV: Random Processes
Random Processes-Temporal Characteristics: The Random Process Concept, Classification of Processes,
Deterministic and Nondeterministic Processes, Distribution and Density Functions, concept of Stationarity and
Statistical Independence, First-Order Stationary Processes, Second-Order and Wide-Sense Stationarity, N-Order and
Strict-Sense Stationarity. Time Averages and Ergodicity, Mean-Ergodic Processes, Correlation-Ergodic Processes,
Autocorrelation Function and Its Properties, Cross-
Correlation Function and its Properties, Covariance Functions, Gaussian Random Processes, Poisson Random Process.
Random Processes-Spectral Characteristics: The Power Density Spectrum and its Properties, Relationship between
Power Spectrum and Autocorrelation Function, The Cross-Power Density Spectrum and its Properties, Relationship
between Cross-Power Spectrum and Cross-Correlation Function.
Unit – V: Random Signal Response of Linear Systems
Random Signal Response of Linear Systems: System Response – Convolution, Mean and Mean-Squared Value of
System Response, Autocorrelation Function of Response, Cross-Correlation Functions of Input and Output, Spectral
Characteristics of System Response: Power Density Spectrum of Response, Cross-Power Density Spectra of Input
and Output, Bandpass, Band-Limited and Narrowband Processes, Properties.
Noise Definitions: White Noise, colored noise and their statistical characteristics, Ideal low pass filtered white noise,
RC filtered white noise.
Textbooks:
1. Peyton Z. Peebles, “Probability, Random Variables & Random Signal Principles”, 4th Edition, TMH, 2002.
2. Athanasios Papoulis and S. Unnikrishna Pillai, “Probability, Random Variables and Stochastic Processes”, 4th Edition, PHI, 2002.
Reference Books:
1. Simon Haykin, “Communication Systems”, 3rd Edition, Wiley, 2010.
2. Henry Stark and John W. Woods, “Probability and Random Processes with Application to Signal Processing”, 3rd Edition, Pearson Education, 2002.
3. George R. Cooper, Clare D. McGillem, “Probabilistic Methods of Signal and System Analysis”, 3rd Edition, Oxford, 1999.
TABLE OF CONTENTS
SYLLABUS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i
1 Introduction to Probability 1
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Deterministic signal . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2 Non-Deterministic signal . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Basics in Probability (or) Terminology in Probability . . . . . . . . . . . 2
1.2.1 Outcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.2 Trial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.3 Random experiment . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.4 Random event . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.5 Certain event . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.6 Impossible event . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.7 Elementary event . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.8 Null event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.9 Mutually exclusive event . . . . . . . . . . . . . . . . . . . . . . 3
1.2.10 Equally Likely event . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.11 Exhaustive event . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.12 Union of an event . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.13 Intersection of an event . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.14 Complement of an event . . . . . . . . . . . . . . . . . . . . . . 4
1.2.15 Sample space . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.16 Difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Definition of Probability . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.1 Relative frequency approach . . . . . . . . . . . . . . . . . . . . 5
1.3.2 Classical approach . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.3 Axiomatic approach . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.4 Probability Measure, Theorems . . . . . . . . . . . . . . . . . . . 6
1.3.5 Probability: Playing Cards . . . . . . . . . . . . . . . . . . . . . 12
1.4 Conditional, Joint Probabilities and Independent events . . . . . . . . . . 13
1.4.1 Conditional Probability . . . . . . . . . . . . . . . . . . . . . . . 13
1.4.2 Joint probability . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.4.3 Properties of conditional probability . . . . . . . . . . . . . . . . 15
1.4.4 Joint Properties and Independent events . . . . . . . . . . . . . . 16
1.5 Total Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.6 Bayes’ Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.1.1 Statistical parameters of Binomial R.V . . . . . . . . . . . . . . . 100
3.2 Poisson random variable . . . . . . . . . . . . . . . . . . . . . . . . 106
3.2.1 Statistical parameters of Poisson random variable . . . . . . . . . 108
6.1 Joint Moment about the origin . . . . . . . . . . . . . . . . . . . . . . . 186
6.2 Joint Central Moment (or) Joint Moment about the Mean . . . . . . . . . 188
6.3 Properties of Co-Variance . . . . . . . . . . . . . . . . . . . . . . . . . . 195
6.3.1 Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
6.4 Joint Characteristic function . . . . . . . . . . . . . . . . . . . . . . . . 202
6.4.1 Properties of Joint characteristic function . . . . . . . . . . . . . . 202
6.5 MGF of the sum of independent random variables . . . . . . . . . . . . . 205
6.6 Characteristic function of sum of random variables . . . . . . . . . . . . 206
6.7 Joint PDF of N-Gaussian random variables . . . . . . . . . . . . . . . . . 207
6.7.1 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
6.8 Linear Transformation of Gaussian random variable . . . . . . . . . . . . 213
6.9 Transformation of multiple random variables . . . . . . . . . . . . . . . . 219
8.2.1 Wiener–Khinchin Relation . . . . . . . . . . . . . . . . . . . . . . 248
8.2.2 Properties of Power Spectral Density (PSD) . . . . . . . . . . . . 249
8.3 Types of random process . . . . . . . . . . . . . . . . . . . . . . . . . . 262
8.3.1 Baseband random process . . . . . . . . . . . . . . . . . . . . . . 262
8.3.2 Bandpass random process . . . . . . . . . . . . . . . . . . . . . . 263
8.4 Cross correlation and cross PSD . . . . . . . . . . . . . . . . . . . . . . 268
8.4.1 Wiener–Khinchin Relation . . . . . . . . . . . . . . . . . . . . . . 269
8.4.2 Properties of Cross Power Spectral Density (PSD) . . . . . . . . . 270
8.5 White Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
8.6 Signal to Noise Ratio (SNR) . . . . . . . . . . . . . . . . . . . . . . . . 273
CHAPTER 1
Introduction to Probability
1.1 Introduction
In communication, signals are broadly classified into two types: Deterministic signals
and Non-deterministic signals.
1.1.1 Deterministic signal
The value of the signal can be determined at any instant of time. Deterministic signals convey no information.
Example: M(t) = 10 × t
1.1.2 Non-deterministic signal
The value of the signal cannot be determined at any instant of time. Such signals carry information. They are also called random signals or statistical signals.
Examples:
• Unpredictable information signals
− Audio signal, Video signal, Voice signal, Image signal
• Noise generated in the receiver and channel
• 0’s and 1’s generated by the computer
• stock market
Communication is a process of conveying information from one point to another
point . Communication system consists of three major blocks as shown in Fig. 1.1
[Fig. 1.1: Block diagram of a communication system: a random or statistical signal enters the transmitter (XMTR), passes through the channel with its source of noise, and reaches the receiver (RCVR), which produces the output signal.]
The transmitter (XMTR) and receiver (RCVR) are sources of noise, which is generated by the resistors, diodes, transistors, FETs, etc., used inside them. The channel is a major source of noise, such as man-made noise and interference from other transmitters. Probability concepts are used to analyze the information signal and the noise. Analysis here means calculation of power, energy, frequency, etc.
1.2.1 Outcome
The result of a single trial of a random experiment is called an outcome.
1.2.2 Trial
A single performance of a random experiment is called a trial.
1.2.3 Random experiment
An experiment whose outcome cannot be predicted with certainty in advance is called a random experiment.
1.2.4 Random event
A subset of the sample space of a random experiment is called a random event.
1.2.5 Certain event
If the probability of an event is equal to one (1), it is called a certain event.
Example: the sun rising in the East.
1.2.6 Impossible event
If the probability of an event is equal to zero (0), it is called an impossible event. Example: the sun rising in the West.
1.2.8 Null event
If there is no common element between two events, their intersection is the null event (φ).
Example: In rolling a die, the total outcome is S = {1, 2, 3, 4, 5, 6}.
Getting an even number: Ae = {2, 4, 6}
Getting an odd number: Ao = {1, 3, 5}
∴ Ae ∩ Ao = φ
1.2.9 Mutually exclusive event
Two events A and B are said to be mutually exclusive if they have no common element.
Example: In rolling a die, the total outcome is S = {1, 2, 3, 4, 5, 6}.
Getting an even number: Ae = {2, 4, 6}
Getting an odd number: Ao = {1, 3, 5}
Getting a number less than 4: A4 = {1, 2, 3}
∴ Ae ∩ Ao = φ, so Ae and Ao are mutually exclusive events.
Ae ∩ A4 ≠ φ, so Ae and A4 are not mutually exclusive events.
1.2.10 Equally likely events
If the probabilities of occurrence of events are equal, they are called equally likely events.
Example: In rolling a die, the total outcome is S = {1, 2, 3, 4, 5, 6}.
Getting an even number: Ae = {2, 4, 6}; P(Ae) = 3/6 = 1/2
Getting an odd number: Ao = {1, 3, 5}; P(Ao) = 3/6 = 1/2
So Ae and Ao are equally likely events.
Getting a number less than 5, i.e., A5 = {1, 2, 3, 4}, gives P(A5) = 4/6 = 2/3. So A5 and Ae are not equally likely events.
1.2.12 Union of an event
The union of two events A and B is the set of all outcomes which belong to A or B or both.
Example: Ae or A5 = Ae + A5 = Ae ∪ A5 = {1, 2, 3, 4, 6}
1.2.14 Complement of an event
The complement of an event A is the event containing all the points in S that are not in A.
Example: In rolling a die, the total outcome is S = {1, 2, 3, 4, 5, 6}.
Getting a number less than 5, i.e., A = {1, 2, 3, 4}.
Complement of A is Ā = {5, 6}
1.2.16 Difference
The set consisting of all elements of ‘A’ which do not belongs to ‘B’ is called the
difference of A and B. It is denoted by A − B.
1.3 Definition of Probability
1.3.1 Relative frequency approach
P(A) = lim_{n→∞} (n_A / n);  0 ≤ P(A) ≤ 1    (1.1)
It is also known as the a posteriori probability, i.e., the probability determined after the event.
Consider two events A and B of the random experiment. Suppose we conduct n independent trials of this experiment, and events A and B occur in n_A and n_B trials respectively. If A and B are mutually exclusive, the event A ∪ B (also written A + B) occurs in n_A + n_B trials, and

P(A ∪ B) = P(A + B) = lim_{n→∞} (n_A + n_B)/n = lim_{n→∞} (n_A/n) + lim_{n→∞} (n_B/n) = P(A) + P(B)    (1.2)

Equation (1.2) holds when A and B are mutually exclusive events. If they are not mutually exclusive, then equation (1.3) applies:

P(A ∪ B) = P(A) + P(B) − P(A ∩ B), i.e., P(A + B) = P(A) + P(B) − P(AB)    (1.3)
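The relative-frequency definition (1.1) can be illustrated by simulation. The sketch below, with an illustrative event and seed not taken from the notes, estimates P(A) as n_A/n over many simulated die rolls:

```python
import random

def relative_frequency(event, trials, seed=0):
    """Estimate P(event) as n_A / n over `trials` simulated die rolls."""
    rng = random.Random(seed)
    n_A = sum(1 for _ in range(trials) if event(rng.randint(1, 6)))
    return n_A / trials

# Event A: an even number shows up; the exact probability is 3/6 = 1/2.
estimate = relative_frequency(lambda face: face % 2 == 0, trials=100_000)
```

As n grows, the estimate settles near the exact value 1/2, which is what the limit in (1.1) expresses.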
Example: An experiment (observing heads in coin tosses) is repeated a number of times as shown below. Find the relative frequency of the event in each case.

Number of trials (N)    Number of heads (M)
1                       1
10                      6
100                     50

Using P(A) = lim_{n→∞} (n_A/n), 0 ≤ P(A) ≤ 1:
P(A) = M/N = 1/1 = 1;  P(B) = M/N = 6/10;  P(C) = M/N = 50/100
1.3.3 Axiomatic approach
It is based on a set of axioms. Let S be the sample space consisting of all possible outcomes of an experiment, and let the events A, B, C, ... be subsets of S. The function P(·) associates with each event A a real number called the probability of A. The function P(·) must satisfy the following axioms:
Axiom 1: P(A) ≥ 0
Axiom 2: P(S) = 1
Axiom 3: If A and B are mutually exclusive (A ∩ B = φ), then P(A + B) = P(A) + P(B).
1.3.4 Probability Measure, Theorems
Theorem 1.3.1. P(φ) = 0.
Proof. Let A be any set such that A and φ are mutually exclusive, with A + φ = A. Using Axiom 3,
P(A + φ) = P(A)
P(A) + P(φ) = P(A)
∴ P(φ) = 0.
Theorem 1.3.2. In a sample space S with S = A + Ā, we have P(Ā) = 1 − P(A).
Proof. The sample space S can be divided into two mutually exclusive events A and Ā, as shown in the Venn diagram.
P(S) = 1
P(A) + P(Ā) = 1
∴ P(Ā) = 1 − P(A).
Theorem 1.3.3. If A ⊂ B, then P(A) ≤ P(B).
Proof. If A ⊂ B, then B can be divided into two mutually exclusive events A and B − A, as shown in the Venn diagram. Thus,
P(B) = P(A) + P(B − A)
P(B − A) ≥ 0    (∵ Axiom 1)
∴ P(A) ≤ P(B).
Theorem 1.3.4. P(A − B) = P(A) − P(AB).
Proof. The event A can be divided into two mutually exclusive events A − B and AB, as shown in the Venn diagram. Thus,
P(A) = P(A − B) + P(AB)
∴ P(A − B) = P(A) − P(AB).
Theorem 1.3.5. If A and B are two events, then P (A + B) = P (A) + P (B) − P (AB)
Proof. The event A + B can be divided into two mutually exclusive events A − B and B, as shown in the Venn diagram. Thus,
P(A + B) = P(A − B) + P(B)
         = P(A) − P(AB) + P(B)    (∵ Theorem 1.3.4)
∴ P(A + B) = P(A) + P(B) − P(AB).
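Theorem 1.3.5 can be checked by direct counting on a finite sample space. A minimal sketch on the die-roll space S = {1, ..., 6}, with two illustrative events, compares both sides of the identity using exact fractions:

```python
from fractions import Fraction

# Classical probability on the die-roll sample space S = {1, ..., 6}.
S = set(range(1, 7))

def P(event):
    # P(E) = |E| / |S| for equally likely outcomes
    return Fraction(len(event), len(S))

A = {1, 2, 3}        # faces 3 or less
B = {2, 4, 6}        # even faces
lhs = P(A | B)                       # P(A + B)
rhs = P(A) + P(B) - P(A & B)         # Theorem 1.3.5 right-hand side
```

Here A ∪ B = {1, 2, 3, 4, 6}, so both sides equal 5/6; the subtraction of P(A ∩ B) = 1/6 removes the double-counted face 2.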
Example 1.3.1. If two coins are tossed simultaneously, determine the probability of obtaining exactly two heads.
Solution: Number of sample points = 2 × 2 = 4
S = {(T,T), (T,H), (H,T), (H,H)};  P(getting two heads) = 1/4
Question 2: A box contains 3 white, 4 red, and 5 black balls. A ball is drawn at random. Find the probability that it is (i) red, (ii) not black, (iii) black or white.
Solution:
(i) P(Red) = 4/12 = 1/3
(ii) P(Black) = 5/12
P(Not black) = 1 − P(Black) = 1 − 5/12 = 7/12
(iii) P(White or Black) = P(B + W) = P(B) + P(W) = 5/12 + 3/12 = 8/12 = 2/3
Question: Four coins are tossed simultaneously. Find the probability of obtaining exactly two heads.
Solution: Number of sample points: 2^4 = 16
Sample space:
0000 0001 0010 0011 0100 0101 0110 0111
1000 1001 1010 1011 1100 1101 1110 1111
P(exactly two heads) = 6/16 = 3/8
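The 16-outcome enumeration above is easy to reproduce programmatically; a small sketch:

```python
from itertools import product

# Enumerate all 2**4 = 16 outcomes of tossing four coins and count
# those showing exactly two heads.
outcomes = list(product("HT", repeat=4))
favorable = [o for o in outcomes if o.count("H") == 2]
p_two_heads = len(favorable) / len(outcomes)
```

The count of favorable outcomes is C(4, 2) = 6, giving 6/16 = 3/8.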
Question 6: An experiment consists of rolling a single die. Two events are defined as A = {a 6 shows up} and B = {a 2 or a 5 shows up}.
(i) Find P(A) and P(B). (ii) Find P(C) = 1 − P(A) − P(B).
Solution: A = {6}; B = {2, 5}; P(A) = 1/6
P(B) = P(2 ∪ 5) = 1/6 + 1/6 = 1/3
P(C) = 1 − P(A) − P(B) = 1 − 1/6 − 2/6 = 3/6 = 1/2
Question 7: A pair of dice is thrown. Person A wins if the sum of the numbers showing is six or less and one of the dice shows a four. Person B wins if the sum is five or more and one of the dice shows a four. Find (a) the probability that A wins, (b) the probability that B wins, (c) the probability that both A and B win.
Solution:
(a) Person A → sum of six or less (≤ 6), with one die showing a 4:
P(A wins) = P(2,4) + P(1,4) + P(4,2) + P(4,1) = 4/36
(b) Person B → sum of five or more (≥ 5), with one die showing a 4:
P(B wins) = P(4,1) + P(4,2) + P(4,3) + P(4,4) + P(4,5) + P(4,6) + P(1,4) + P(2,4) + P(3,4) + P(5,4) + P(6,4) = 11/36
(c) Both A and B win when 5 ≤ sum ≤ 6 and one die shows a 4:
P(A and B win) = P(1,4) + P(4,1) + P(2,4) + P(4,2) = 4/36 = 1/9
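The three answers can be verified by enumerating the 36 equally likely outcomes; a minimal sketch:

```python
from fractions import Fraction
from itertools import product

pairs = list(product(range(1, 7), repeat=2))   # the 36 dice outcomes

def P(event):
    return Fraction(sum(1 for p in pairs if event(p)), len(pairs))

A = lambda p: sum(p) <= 6 and 4 in p   # person A wins
B = lambda p: sum(p) >= 5 and 4 in p   # person B wins

p_A  = P(A)
p_B  = P(B)
p_AB = P(lambda p: A(p) and B(p))      # both win
```

Counting confirms 4/36, 11/36 and 4/36 respectively.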
Question 8: Three dice are thrown. What is the probability that the sum on the three faces is less than 16?
Solution: Number of sample points: 6 × 6 × 6 = 216
P(sum < 16) = 1 − P(sum ≥ 16)
            = 1 − {P(sum = 16) + P(sum = 17) + P(sum = 18)}
            = 1 − (6/216 + 3/216 + 1/216) = 1 − 10/216 = 206/216
Question 9: Two dice are thrown. Find:
1. The probability that the sum on the dice is seven, i.e., P(A) = P(7).
2. The probability of getting a sum of ten or eleven, i.e., P(B).
3. The probability of getting a sum between 8 and 11, i.e., P(C) = P(8 < sum ≤ 11).
4. The probability of getting a sum greater than 10, i.e., P(D).
5. P{(8 < sum ≤ 11) ∪ (10 < sum)}.
6. P{(8 < sum ≤ 11) ∩ (10 < sum)}.
7. P{(sum ≥ 10)}.
8. The probability that one die shows a 2 and the other shows 3 or larger.
9. P(10 ≤ sum or sum ≤ 4).
10. Let X and Y denote the numbers on the first and second die respectively. Find
(i) P[X = Y] (ii) P[(X + Y) = 8] (iii) P[(X + Y) ≥ 8] (iv) P(7 or 11)
(v) Let X be the event that Y is larger than 3; find X and P(X).
Solution: Number of sample points in the sample space = 6 × 6 = 36, i.e., all pairs (i, j) with i, j ∈ {1, ..., 6}.
1. P(A) = P(7) = 6/36 = 1/6
2. P(B) = P(10 or 11) = 5/36
3. P(C) = P(8 < sum ≤ 11) = 9/36 = 1/4
4. P(D) = P(sum > 10) = 3/36 = 1/12
5. P{(8 < sum ≤ 11) ∪ (10 < sum)} = 10/36 = 5/18
6. P{(8 < sum ≤ 11) ∩ (10 < sum)} = 2/36 = 1/18
7. P(sum ≥ 10) = 6/36 = 1/6
8. P(a 2 and the other ≥ 3) = P{(2,3), (2,4), (2,5), (2,6), (3,2), (4,2), (5,2), (6,2)} = 8/36 = 2/9
9. P(10 ≤ sum or sum ≤ 4) = 12/36 = 1/3
10. (i) P[X = Y] = 6/36 = 1/6
(ii) P[(X + Y) = 8] = 5/36
(iii) P[(X + Y) ≥ 8] = 15/36
(iv) P(7 or 11) = P(7) + P(11) − P(7 ∩ 11) = 6/36 + 2/36 − 0 = 8/36 = 2/9
(v) X = {(x, y) : x, y ∈ N, 1 ≤ x ≤ 6, 4 ≤ y ≤ 6} and P(X) = 18/36 = 1/2
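Several of the answers above can be cross-checked by enumerating the 36 outcomes; a short sketch covering a few representative parts:

```python
from fractions import Fraction
from itertools import product

pairs = list(product(range(1, 7), repeat=2))   # all (x, y) dice outcomes

def P(event):
    return Fraction(sum(1 for x, y in pairs if event(x, y)), len(pairs))

p1 = P(lambda x, y: x + y == 7)                      # part 1
p3 = P(lambda x, y: 8 < x + y <= 11)                 # part 3
p5 = P(lambda x, y: 8 < x + y <= 11 or x + y > 10)   # part 5
p9 = P(lambda x, y: x + y >= 10 or x + y <= 4)       # part 9
```

The counts reproduce 6/36, 9/36, 10/36 and 12/36 respectively.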
1.3.5 Probability: Playing Cards
A deck has 52 cards: 26 red and 26 black.
The standard deck of 52 playing cards in use today includes thirteen ranks in each of the four French suits: diamonds (♦), spades (♠), hearts (♥) and clubs (♣), with reversible Rouennais “court” or face cards (some modern face-card designs, however, have done away with the traditional reversible figures).
Each suit includes an ace, depicting a single symbol of its suit; a king, queen, and jack, each depicted with a symbol of its suit; and ranks two through ten, with each card depicting that many symbols (pips) of its suit.
Commercial decks often also include two (sometimes one or four) Jokers, usually distinguishable from one another, which carry no suit symbol. In certain games they can stand for any card, but in most card games they are not used and are removed before play.
Question 10: A card is drawn at random from an ordinary deck of 52 playing cards. Find the probability of its being (a) an ace; (b) a six or a heart; (c) neither a nine nor a spade; (d) either red or a king; (e) 5 or smaller; (f) a red 10.
Solution:
(a) P(Ace) = 4/52 = 1/13
(b) P(6 + H) = P(6) + P(H) − P(6H) = 4/52 + 13/52 − 1/52 = 16/52 = 4/13
(c) P(neither a nine nor a spade) = 1 − P(9 + S)
    = 1 − [P(9) + P(S) − P(9S)]
    = 1 − [4/52 + 13/52 − 1/52] = 36/52 = 9/13
(d) P(R ∪ K) = P(R) + P(K) − P(RK) = 26/52 + 4/52 − 2/52 = 28/52 = 7/13
(e) P(card ≤ 5) = (4 fives + 4 fours + 4 threes + 4 twos)/52 = 16/52 = 4/13
(f) P(red 10) = 2/52 = 1/26
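These card probabilities follow directly from enumerating the deck; a minimal sketch (rank and suit names are illustrative labels):

```python
from fractions import Fraction
from itertools import product

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]   # hearts/diamonds are red
deck = list(product(ranks, suits))                  # 52 cards

def P(event):
    return Fraction(sum(1 for c in deck if event(c)), len(deck))

p_ace         = P(lambda c: c[0] == "A")
p_6_or_heart  = P(lambda c: c[0] == "6" or c[1] == "hearts")
p_red_or_king = P(lambda c: c[1] in ("hearts", "diamonds") or c[0] == "K")
p_red_10      = P(lambda c: c[0] == "10" and c[1] in ("hearts", "diamonds"))
```

The counts match parts (a), (b), (d) and (f): 1/13, 4/13, 7/13 and 1/26.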
1.4 Conditional, Joint Probabilities and Independent Events
1.4.1 Conditional Probability
Let A and B be two events of a random experiment. The conditional probability is defined as
P(B/A) = P(AB)/P(A) = P(A ∩ B)/P(A);  P(A) ≠ 0    (1.4)
Here P(A) is called the elementary probability, P(AB) is the joint probability, and P(B/A) is the conditional probability, i.e., the probability of B given that event A has already occurred.
1.4.2 Joint probability
The joint probability of two events may be expressed as the product of the conditional probability of one event given the other and the elementary probability of the other:
P(AB) = P(B/A)·P(A) = P(A/B)·P(B)    (chain rule or multiplication rule)
If the occurrence of event A does not affect the occurrence of event B, then P(B/A) = P(B).
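Definition (1.4) and the multiplication rule can be illustrated on the two-dice sample space; the events below are illustrative choices, not from the notes:

```python
from fractions import Fraction
from itertools import product

pairs = list(product(range(1, 7), repeat=2))

def P(event):
    return Fraction(sum(1 for p in pairs if event(p)), len(pairs))

A = lambda p: p[0] == 5               # a 5 on the first die
B = lambda p: sum(p) >= 10            # the sum is 10 or greater

p_AB = P(lambda p: A(p) and B(p))     # joint probability
p_B_given_A = p_AB / P(A)             # conditional probability, eq. (1.4)
recovered = P(A) * p_B_given_A        # chain rule gives back P(AB)
```

Here P(AB) = 2/36 and P(A) = 6/36, so P(B/A) = 1/3, and the chain rule recovers the joint probability exactly.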
Question 12: In a box there are 100 resistors having resistance and tolerance as given in the table below. Let a resistor be selected from the box, and assume each resistor has the same likelihood of being chosen. Define three events: A as drawing a 47 Ω resistor, B as drawing a 5% tolerance resistor, and C as drawing a 100 Ω resistor. Find:
1. P(A), P(B), P(C)
2. P(AB), P(BC), P(AC)
3. P(A/B), P(A/C), P(B/C)
4. Are A, B and C independent?

              Tolerance
Resistance   5%    10%   Total
22 Ω         10    14    24
47 Ω         28    16    44
100 Ω        24     8    32
Total        62    38    100

Solution:
1. Probability of getting a 47 Ω resistor: P(A) = 44/100
   Probability of getting 5% tolerance: P(B) = 62/100
   Probability of getting a 100 Ω resistor: P(C) = 32/100
2. Probability that the resistor is 47 Ω and 5% tolerance: P(AB) = 28/100
   Probability that the resistor is 100 Ω and 5% tolerance: P(BC) = 24/100
   Probability that the resistor is both 47 Ω and 100 Ω: P(AC) = P(φ) = 0
3. P(A/B) = P(A ∩ B)/P(B) = (28/100)/(62/100) = 28/62 = 14/31
   P(A/C) = P(AC)/P(C) = 0
   P(B/C) = P(BC)/P(C) = (24/100)/(32/100) = 24/32 = 3/4
4. P(A)P(B) = 0.44 × 0.62 = 0.2728 ≠ P(AB) = 0.28, so A and B are not independent. Since P(AC) = 0 ≠ P(A)P(C), A and C are not independent either.
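The table-based probabilities can be sketched directly from the joint counts; a minimal check of the numbers above:

```python
from fractions import Fraction

# Joint counts from the table: counts[resistance][tolerance]
counts = {22:  {"5%": 10, "10%": 14},
          47:  {"5%": 28, "10%": 16},
          100: {"5%": 24, "10%": 8}}
total = 100

P_A  = Fraction(sum(counts[47].values()), total)               # 47 ohm
P_B  = Fraction(sum(c["5%"] for c in counts.values()), total)  # 5% tolerance
P_AB = Fraction(counts[47]["5%"], total)                       # joint event

P_A_given_B = P_AB / P_B        # conditional probability (1.4)
independent = (P_AB == P_A * P_B)
```

The exact fractions confirm P(A/B) = 14/31 and show that A and B are not independent.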
1.4.3 Properties of conditional probability
1. If B ⊂ A, then P(A/B) = 1.
   Proof: P(A/B) = P(AB)/P(B) = P(B)/P(B) = 1    (∵ from Fig. 1.4 (i), if B ⊂ A then P(AB) = P(B))
2. If B ⊂ A, then P(B/A) = P(B)/P(A).
   Proof: P(B/A) = P(AB)/P(A) = P(B)/P(A)
3. P(A/B) ≥ 0, i.e., conditional probability is non-negative.
   Proof: P(A/B) = P(AB)/P(B), where P(AB) ≥ 0 and P(B) ≠ 0, so P(A/B) ≥ 0.
4. If two events A and B are in the sample space S, then:
   • P(S/A) = P(S/B) = 1
     Proof: From Fig. 1.4 (ii) and (iii),
     P(S/A) = P(SA)/P(A) = P(A)/P(A) = 1;  P(A) ≠ 0
     Similarly, P(S/B) = P(SB)/P(B) = P(B)/P(B) = 1;  P(B) ≠ 0
   • P(A/S) = P(A) and P(B/S) = P(B)
     Proof: P(A/S) = P(AS)/P(S) = P(A)/1 = P(A)
     Similarly, P(B/S) = P(BS)/P(S) = P(B)/1 = P(B)
1.4.4 Joint Properties and Independent events
By the multiplication rule, P(AB) = P(A)·P(B/A).
1.4.4.2 Independent events
Two events A and B are independent when P(B/A) = P(B). Then:
1. P(A ∩ B) = P(A)·P(B/A) = P(A)P(B)
2. Ā and B are also independent:
   P(Ā ∩ B) = P(B) − P(A ∩ B)
            = P(B) − P(A)P(B)
            = [1 − P(A)]P(B)
            = P(Ā)P(B)
3. Ā and B̄ are also independent:
   P(Ā ∩ B̄) = 1 − P(A ∪ B)    (∵ De Morgan: Ā ∩ B̄ is the complement of A ∪ B)
            = 1 − [P(A) + P(B) − P(A ∩ B)]
            = 1 − P(A) − P(B) + P(A)P(B)
            = [1 − P(A)][1 − P(B)]
            = P(Ā)P(B̄)
Question 13: One card is selected from an ordinary deck of 52 cards, and events are defined as: event A, select a king; event B, select a jack or a queen; event C, select a heart. Determine whether A, B and C are independent.
Solution:
• Event A, select a king: P(A) = P(King) = 4/52
• Event B, select a jack or a queen: P(B) = P(J ∪ Q) = P(J) + P(Q) = 8/52
• Event C, select a heart: P(C) = P(Heart) = 13/52
A and B cannot occur together, so P(AB) = 0 ≠ P(A)P(B): A and B are not independent.
P(AC) = 1/52 = P(A)P(C) and P(BC) = 2/52 = P(B)P(C), so the pairs (A, C) and (B, C) are independent.
Question 14: Find the probability that the first card drawn is a diamond and the second card is a heart (the first card is not replaced).
Solution: P(DH) = P(D)·P(H/D) = (13/52) × (13/51) = 13/204
Question 15: What is the probability that a six is obtained on one of the dice in a throw of two dice, given that the sum is 7?
Solution: Let A be the event that the sum is 7, and B the event that a 6 appears on one of the dice.
A = {(2,5), (3,4), (4,3), (5,2), (6,1), (1,6)}, so P(A) = 6/36 and P(A ∩ B) = 2/36
P(B/A) = P(AB)/P(A) = (2/36)/(6/36) = 2/6 = 1/3
Question 16: A pair of dice is thrown. Find the probability that the sum is 10 or greater if
(i) a 5 appears on the first die;
(ii) a 5 appears on at least one of the dice.
Solution:
(i) Let A be the event that a 5 appears on the first die; then
A = {(5,1), (5,2), (5,3), (5,4), (5,5), (5,6)}
The total sample space has 6 × 6 = 36 points, so P(A) = 6/36.
Let B be the event that the sum is 10 or greater; then A ∩ B = {(5,5), (5,6)} and P(A ∩ B) = 2/36.
∴ P(B/A) = P(AB)/P(A) = (2/36)/(6/36) = 1/3
(ii) Let C be the event that a 5 appears on at least one of the dice; then
C = {(5,1), (5,2), (5,3), (5,4), (5,5), (5,6), (1,5), (2,5), (3,5), (4,5), (6,5)}
P(C) = 11/36
B ∩ C = {(5,5), (5,6), (6,5)}, so P(BC) = 3/36
∴ P(B/C) = P(BC)/P(C) = (3/36)/(11/36) = 3/11
1.5 Total Probability
Consider a random experiment whose sample space S consists of N mutually exclusive events B_n. The probability of an arbitrary event A expressed in terms of all the mutually exclusive events is called the total probability, written as

P(A) = Σ_{n=1}^{N} P(B_n) P(A/B_n)    (1.6)

Proof. Consider a sample space S as shown in Fig. 1.5, where the B_n are mutually exclusive events, i.e., B_m ∩ B_n = φ for m ≠ n; m, n = 1, 2, ..., N, and

S = ∪_{n=1}^{N} B_n = Σ_{n=1}^{N} B_n

[Fig. 1.5: Venn diagram of the sample space S partitioned into B_1, B_2, ..., B_N, with event A overlapping several of the B_n.]

From the figure, the event A in terms of the sample space S can be written as

A = A ∩ S = S ∩ A = A ∩ Σ_{n=1}^{N} B_n = Σ_{n=1}^{N} (A ∩ B_n)

P(A) = Σ_{n=1}^{N} P(A ∩ B_n)    (∵ Axiom 3)

Since P(A/B_n) = P(A ∩ B_n)/P(B_n) with P(B_n) ≠ 0, we have P(A ∩ B_n) = P(B_n) P(A/B_n), which gives equation (1.6).
Question 17: There are three boxes: box one contains 3 red, 4 green, and 5 blue balls; box two contains 4 red, 3 green, and 2 blue balls; box three contains 2 red, 1 green, and 4 blue balls. A box is selected at random with equal probability, and then one ball is drawn from the selected box. Find the probability that the drawn ball is red.
Solution:
P(Box1) = P(Box2) = P(Box3) = 1/3
P(Red) = Σ_{n=1}^{3} P(Box_n) P(Red/Box_n)
       = P(Box1)P(Red/Box1) + P(Box2)P(Red/Box2) + P(Box3)P(Red/Box3)
       = (1/3)(3/12) + (1/3)(4/9) + (1/3)(2/7)
       = (1/3)(247/252) = 247/756 ≈ 0.327
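The total-probability sum can be sketched in a few lines. The box contents below follow the fractions 3/12, 4/9 and 2/7 used in the worked solution:

```python
from fractions import Fraction

# Boxes as (red, green, blue) counts; each box is chosen with probability 1/3.
boxes = [(3, 4, 5), (4, 3, 2), (2, 1, 4)]
p_box = Fraction(1, 3)

# Total probability: P(Red) = sum over boxes of P(Box_n) * P(Red | Box_n)
p_red = sum(p_box * Fraction(r, r + g + b) for r, g, b in boxes)
```

Working in exact fractions avoids the rounding that would creep in with floats.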
1.6 Bayes’ Theorem

P(B_i/A) = P(B_i) P(A/B_i) / Σ_{n=1}^{N} P(B_n) P(A/B_n)    (1.8)

• Bayes’ theorem is widely used in biometrics, epidemiology and communication theory.
• The term P(B_i/A) is known as the a posteriori probability of B_i given A, P(A/B_i) is the a priori probability of A given B_i, and P(B_i) is the causal or a priori probability of B_i.
• In general, a priori probabilities are estimated from past measurements or presupposed by experience, while a posteriori probabilities are measured or computed from observations.
• An example of Bayes’ formula is the Binary Symmetric Channel (BSC) shown in Fig. 1.6, which models the bit errors that occur in a digital communication system. For a binary system, the transmitted symbol has two outcomes {0, 1}.
Question 18: Determine the probabilities of system error and of correct transmission of symbols for the binary communication channel shown in Fig. 1.6.
The channel consists of a transmitter that sends one of two possible symbols, 1 or 0, over a channel to a receiver. The channel causes errors, so that a transmitted 1 may be received as 0 and vice versa. Assume the symbols 1 and 0 are selected for transmission with probabilities 0.6 and 0.4 respectively.

[Fig. 1.6: Binary symmetric channel. Transmitted symbols B1 (‘1’, P(B1) = 0.6) and B2 (‘0’, P(B2) = 0.4); received symbols A1 and A2; transition probabilities P(A1/B1) = P(A2/B2) = 0.9 and P(A2/B1) = P(A1/B2) = 0.1.]

Solution:
The effect of the channel on the transmitted symbol is described by the conditional (transition) probabilities. P(B1/A1) is the probability that B1 occurred given that A1 has already been received, i.e., the probability that the received A1 is due to B1.
From the theorem of total probability,
P(A) = Σ_{n=1}^{N} P(B_n) P(A/B_n)
where Σ_{n=1}^{N} B_n = S and N is the total number of events in S. Using this theorem, we obtain the received-symbol probabilities P(A1) and P(A2):
P(A1) = P(A1/B1)P(B1) + P(A1/B2)P(B2) = 0.9 × 0.6 + 0.1 × 0.4 = 0.54 + 0.04 = 0.58
P(A2) = P(A2/B1)P(B1) + P(A2/B2)P(B2) = 0.1 × 0.6 + 0.9 × 0.4 = 0.42
P(B1/A2) and P(B2/A1) are the probabilities of system error:
P(B1/A2) = P(A2/B1)P(B1)/P(A2) = (0.1 × 0.6)/0.42 = 0.143
P(B2/A1) = P(A1/B2)P(B2)/P(A1) = (0.1 × 0.4)/0.58 = 0.069
P(B1/A1) and P(B2/A2) represent the probabilities of correct transmission of the symbols, obtained using Bayes’ theorem:
P(B1/A1) = P(A1/B1)P(B1)/P(A1) = (0.9 × 0.6)/0.58 = 0.931
P(B2/A2) = P(A2/B2)P(B2)/P(A2) = (0.9 × 0.4)/0.42 = 0.857
Problem 19: A bag X contains 3 white and 2 black balls; another bag contains 2 white and 4 black balls. If one bag is selected at random and a ball is drawn from it, find the probability that the ball is white.
Solution: Given
P(Bag1) = P(Bag2) = 1/2
Total probability: P(A) = Σ_{n=1}^{N} P(A/B_n)·P(B_n)
∴ P(W) = Σ_{n=1}^{2} P(W/Bag_n)·P(Bag_n) = (3/5)(1/2) + (2/6)(1/2) = 3/10 + 1/6 = 7/15
CHAPTER 2
Random Variable
A random variable is a real-valued function defined over the sample space of a random experiment; it is also called a stochastic variable. Random variables are denoted by capital (upper-case) letters such as X, Y, etc., and the values they assume are denoted by lower-case letters with subscripts, such as x1, x2, y1, y2, etc.
Example: Consider the random experiment of tossing three coins; there are eight possible outcomes. The sample space can be written as

S     = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}
X     = {x1,  x2,  x3,  x4,  x5,  x6,  x7,  x8}
Cond. = {3,   2,   2,   1,   2,   1,   1,   0}

Here S denotes the sample space, X denotes the random variable, and the condition is the number of heads.
• A random variable should be a single valued function i.e., every sample point in
sample space ‘S’ must correspond to only one value of the random variable.
1. Discrete random variable: If the random variable takes a finite set of discrete values, it is called a discrete random variable.
Ex: In tossing three coins, the random variable X takes the values 0, 1, 2 and 3.
2. Continuous random variable: If the random variable takes an infinite set of values, it is called a continuous random variable.
Ex: Finding a real value between 0 and 12 in a random experiment; the sample space S will be {0 ≤ S ≤ 12}.
3. Mixed random variable: If the random variable takes both discrete and continuous values, it is called a mixed random variable.
Ex: Consider a random experiment in which temperature is measured with a randomly selected thermometer. The selection of the thermometer takes finite values (a discrete random variable), while the measured temperature takes continuous values (a continuous random variable); the combination of the two is a mixed random variable.
Question 1: Find the PDF and CDF for a random experiment in which three coins are tossed and the random variable is the number of heads.
Solution:
Sample space S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}
Random variable X = {x1, x2, x3, x4, x5, x6, x7, x8}
No. of heads (condition) = {3, 2, 2, 1, 2, 1, 1, 0}
Applying the condition (number of heads), X takes the values {0, 1, 2, 3}. The probability density function (PDF) gives the probability of each value of the random variable:
P(X = 0) = P(x8) = 1/8
P(X = 1) = P(x4) + P(x6) + P(x7) = 3/8
P(X = 2) = P(x2) + P(x3) + P(x5) = 3/8
P(X = 3) = P(x1) = 1/8
The probability density function is tabulated as

X        0    1    2    3
fX(x)   1/8  3/8  3/8  1/8

The expression for the probability density function is
fX(x) = (1/8)δ(x) + (3/8)δ(x − 1) + (3/8)δ(x − 2) + (1/8)δ(x − 3)
The expression for the CDF is
FX(x) = (1/8)u(x) + (3/8)u(x − 1) + (3/8)u(x − 2) + (1/8)u(x − 3)
[Figure: the PDF is a stem plot with weights 1/8, 3/8, 3/8, 1/8 at x = 0, 1, 2, 3; the CDF is a staircase rising through 1/8, 4/8, 7/8 to 1.]
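The PMF weights and the staircase CDF can be produced by enumeration; a minimal sketch:

```python
from fractions import Fraction
from itertools import product

# PMF of X = number of heads in three coin tosses, by enumeration.
outcomes = list(product("HT", repeat=3))        # 8 equally likely outcomes
pmf = {k: Fraction(sum(1 for o in outcomes if o.count("H") == k), 8)
       for k in range(4)}

def cdf(x):
    # F_X(x) = P(X <= x): the staircase sum of the PMF weights
    return sum(p for k, p in pmf.items() if k <= x)
```

Evaluating `cdf` at x = 0, 1, 2, 3 reproduces the staircase values 1/8, 4/8, 7/8, 1.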
Question. 2: Two dice are rolled. Find the PDF and CDF of the random variable X, the sum shown on the two dice.
Solution: The sample space S has 36 equally likely outcomes:
S = {(1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (1, 6), (2, 1), (2, 2), (2, 3), (2, 4), (2, 5), (2, 6),
(3, 1), (3, 2), (3, 3), (3, 4), (3, 5), (3, 6), (4, 1), (4, 2), (4, 3), (4, 4), (4, 5), (4, 6),
(5, 1), (5, 2), (5, 3), (5, 4), (5, 5), (5, 6), (6, 1), (6, 2), (6, 3), (6, 4), (6, 5), (6, 6)}
X takes the values 2 through 12 with probabilities 1/36, 2/36, 3/36, 4/36, 5/36, 6/36, 5/36, 4/36, 3/36, 2/36, 1/36. The expression for the probability density function is
fX(x) = (1/36)δ(x − 2) + (2/36)δ(x − 3) + (3/36)δ(x − 4) + (4/36)δ(x − 5) + (5/36)δ(x − 6)
      + (6/36)δ(x − 7) + (5/36)δ(x − 8) + (4/36)δ(x − 9) + (3/36)δ(x − 10)
      + (2/36)δ(x − 11) + (1/36)δ(x − 12)
The CDF is
FX(x) = (1/36)u(x − 2) + (2/36)u(x − 3) + (3/36)u(x − 4) + (4/36)u(x − 5) + (5/36)u(x − 6)
      + (6/36)u(x − 7) + (5/36)u(x − 8) + (4/36)u(x − 9) + (3/36)u(x − 10)
      + (2/36)u(x − 11) + (1/36)u(x − 12)
[Figure: stem plot of fX(x) with weights rising from 1/36 at x = 2 to 6/36 at x = 7 and falling back to 1/36 at x = 12, and the staircase CDF climbing through 1/36, 3/36, 6/36, 10/36, 15/36, 21/36, 26/36, 30/36, 33/36, 35/36 to 36/36 at x = 12.]
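The PMF and staircase CDF of the dice sum can be reproduced by the same enumeration idea; a short sketch (illustrative names) in Python:

```python
from fractions import Fraction
from itertools import product

# PMF of the sum of two fair dice: 36 equally likely ordered pairs.
pmf = {}
for d1, d2 in product(range(1, 7), repeat=2):
    s = d1 + d2
    pmf[s] = pmf.get(s, Fraction(0)) + Fraction(1, 36)

# The CDF at each support point is the running sum of the PMF weights,
# exactly the staircase values 1/36, 3/36, 6/36, ..., 36/36 plotted above.
cdf, running = {}, Fraction(0)
for s in sorted(pmf):
    running += pmf[s]
    cdf[s] = running

print(pmf[7], cdf[7])  # 1/6 21/36 (i.e. 7/12)
```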
Question. 3: A discrete random variable X has the probability mass function

X = xi:  −1   0   1   2   3   4   5   6   7
fX(x):    K  2K  3K   K  4K  3K  2K  4K   K

Find (i) K, (ii) P{X ≤ 2}, (iii) P{X > 4}, (iv) P{1 < X ≤ 4}, and (v) the PDF and CDF.
Solution: The total probability must equal 1.
(i)
K + 2K + 3K + K + 4K + 3K + 2K + 4K + K = 1
21K = 1  ⇒  K = 1/21
(ii)
P{X ≤ 2} = P(X = −1) + P(X = 0) + P(X = 1) + P(X = 2) = K + 2K + 3K + K = 7K = 7/21 = 1/3
(iii)
P{X > 4} = P(X = 5) + P(X = 6) + P(X = 7) = 2K + 4K + K = 7K = 7/21 = 1/3
(iv)
P{1 < X ≤ 4} = P(X = 2) + P(X = 3) + P(X = 4) = K + 4K + 3K = 8K = 8/21
(v)
fX(x) = (1/21)δ(x + 1) + (2/21)δ(x) + (3/21)δ(x − 1) + (1/21)δ(x − 2) + (4/21)δ(x − 3)
      + (3/21)δ(x − 4) + (2/21)δ(x − 5) + (4/21)δ(x − 6) + (1/21)δ(x − 7)
FX(x) = (1/21)u(x + 1) + (2/21)u(x) + (3/21)u(x − 1) + (1/21)u(x − 2) + (4/21)u(x − 3)
      + (3/21)u(x − 4) + (2/21)u(x − 5) + (4/21)u(x − 6) + (1/21)u(x − 7)
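The constant K and the three probabilities above can be checked exactly with a few lines of Python (a sketch; the list names are illustrative):

```python
from fractions import Fraction

# PMF weights in units of K, copied from the table above.
xs      = [-1, 0, 1, 2, 3, 4, 5, 6, 7]
weights = [1, 2, 3, 1, 4, 3, 2, 4, 1]

K = Fraction(1, sum(weights))             # total probability = 1  =>  K = 1/21
pmf = dict(zip(xs, (w * K for w in weights)))

p_le_2   = sum(p for x, p in pmf.items() if x <= 2)       # P{X <= 2}
p_gt_4   = sum(p for x, p in pmf.items() if x > 4)        # P{X > 4}
p_1_to_4 = sum(p for x, p in pmf.items() if 1 < x <= 4)   # P{1 < X <= 4}
print(K, p_le_2, p_gt_4, p_1_to_4)  # 1/21 1/3 1/3 8/21
```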
Let fX(x) and FX(x) be the PDF and CDF of a continuous random variable X. They satisfy the following properties:
• fX(x) is a non-negative function.
• The area under the PDF curve is unity, i.e., ∫_{−∞}^{∞} fX(x) dx = 1.
• FX(x) has minimum value zero at −∞ and maximum value one at +∞, i.e., FX(−∞) = 0 and FX(+∞) = 1.
• FX(x) is a non-decreasing function of x.
Proof. Let X be a random variable taking values from −∞ to +∞, and let x1 ≤ x2.
FX(x2) = P(X ≤ x2)
       = P[(−∞ < X < x1) ∪ (x1 ≤ X ≤ x2)]
       = P(−∞ < X < x1) + P(x1 ≤ X ≤ x2)   ∵ the two events are mutually exclusive
       = FX(x1) + P(x1 ≤ X ≤ x2)
Since P(x1 ≤ X ≤ x2) ≥ 0, it follows that FX(x2) ≥ FX(x1), i.e., FX(x) is non-decreasing.
∴ C = 2/9
(ii)
P{2 ≤ X ≤ 3} = ∫_{x=2}^{3} (2/9)(x − 1) dx
             = (2/9)[x²/2 − x]_{2}^{3}
             = (2/9)(3/2) = 1/3
∴ P{2 ≤ X ≤ 3} = 1/3
[Figure: the PDF fX(x) = (2/9)(x − 1) rising linearly from 0 at x = 1 to 6/9 at x = 4, and the CDF FX(x) = (x − 1)²/9 rising from 0 at x = 1 to 1 at x = 4.]
(iii) To find FX(x), we consider three intervals: (a) −∞ < x ≤ 1, (b) 1 ≤ x ≤ 4, (c) 4 ≤ x < +∞.
(a) For −∞ < x ≤ 1:
FX(x) = P{−∞ < X ≤ x} = ∫_{u=−∞}^{x} fX(u) du = 0
∴ FX(x) = 0;  −∞ < x ≤ 1
(b) For 1 ≤ x ≤ 4:
FX(x) = ∫_{−∞}^{1} fX(u) du + ∫_{u=1}^{x} fX(u) du
      = 0 + ∫_{u=1}^{x} (2/9)(u − 1) du
      = (2/9)[u²/2 − u]_{1}^{x}
      = (x − 1)²/9
∴ FX(x) = (x − 1)²/9;  1 ≤ x ≤ 4
(c) For 4 ≤ x < +∞:
FX(x) = ∫_{−∞}^{1} fX(u) du + ∫_{1}^{4} fX(u) du + ∫_{4}^{x} fX(u) du
      = 0 + ∫_{1}^{4} (2/9)(u − 1) du + 0
      = (2/9)[u²/2 − u]_{1}^{4} = (2/9)(9/2) = 1
∴ FX(x) = 1;  4 ≤ x < +∞
Collecting the three pieces:
FX(x) = 0 for x ≤ 1;  (x − 1)²/9 for 1 ≤ x ≤ 4;  1 for x > 4.
Solution:
(a)
P{−a/2 ≤ X ≤ a/2} = ∫_{x=−a/2}^{a/2} (1/2a) dx = (1/2a)[x]_{−a/2}^{a/2} = (1/2a)(a/2 + a/2) = 1/2
∴ P{−a/2 ≤ X ≤ a/2} = 1/2
(b) The PDF fX(x) is shown in the figure. To find FX(x), we consider three intervals: (i) −∞ < x ≤ −a, (ii) −a ≤ x ≤ a, (iii) a ≤ x < +∞.
(i) For −∞ < x ≤ −a:
FX(x) = P{−∞ < X ≤ x} = ∫_{u=−∞}^{x} fX(u) du = 0
∴ FX(x) = 0;  −∞ < x ≤ −a
[Figure: the rectangular PDF fX(x) = 1/2a on (−a, a), and the ramp CDF rising from 0 at x = −a through 1/2 at x = 0 to 1 at x = a.]
(ii) For −a ≤ x ≤ a:
FX(x) = ∫_{u=−a}^{x} (1/2a) du = (1/2a)(x + a)
∴ FX(x) = (x + a)/2a;  −a ≤ x ≤ a
(iii) For a ≤ x < +∞:
∴ FX(x) = 1;  a ≤ x < +∞
Collecting the pieces:
FX(x) = 0 for x < −a;  (x + a)/2a for −a ≤ x ≤ a;  1 for x > a.
Question: A random variable X has the triangular PDF
fX(x) = (b/a)x + b for −a ≤ x ≤ 0;  −(b/a)x + b for 0 ≤ x ≤ a;  0 elsewhere.
(a) Find the relation between a and b. (b) Find the CDF.
Solution: We know that ∫_{−∞}^{∞} fX(x) dx = 1.
(a)
∫_{x=−a}^{0} ((b/a)x + b) dx + ∫_{x=0}^{a} (−(b/a)x + b) dx = 1
[(b/a)(x²/2) + bx]_{−a}^{0} + [−(b/a)(x²/2) + bx]_{0}^{a} = 1
(0 − (ab/2 − ab)) + (−ab/2 + ab − 0) = 1
ab/2 + ab/2 = 1
ab = 1
∴ a = 1/b
(b) From the graph of fX(x): FX(x) = ∫_{u=−∞}^{x} fX(u) du
(i) FX(x) for the interval −∞ < x ≤ −a:
∴ FX(x) = 0;  −∞ < x ≤ −a
(ii) FX(x) for the interval −a ≤ x ≤ 0:
FX(x) = ∫_{−∞}^{−a} fX(u) du + ∫_{u=−a}^{x} fX(u) du
      = 0 + ∫_{u=−a}^{x} ((b/a)u + b) du
      = [(b/2a)u² + bu]_{−a}^{x}
      = (b/2a)x² + bx − (ab/2 − ab)
∴ FX(x) = (b/2a)x² + bx + ab/2;  −a ≤ x ≤ 0
(iii) FX(x) for the interval 0 ≤ x ≤ a:
FX(x) = ∫_{−∞}^{−a} fX(u) du + ∫_{−a}^{0} fX(u) du + ∫_{u=0}^{x} fX(u) du
      = 0 + ∫_{−a}^{0} ((b/a)u + b) du + ∫_{0}^{x} (−(b/a)u + b) du
      = [(b/2a)u² + bu]_{−a}^{0} + [−(b/2a)u² + bu]_{0}^{x}
      = (0 − (ab/2 − ab)) + (−(b/2a)x² + bx)
∴ FX(x) = −(b/2a)x² + bx + ab/2;  0 ≤ x ≤ a
(iv) FX(x) for the interval a ≤ x < +∞:
FX(x) = ∫_{−a}^{0} ((b/a)u + b) du + ∫_{0}^{a} (−(b/a)u + b) du
      = ab/2 + ab/2 = ab = 1   ∵ ab = 1
∴ FX(x) = 1;  a ≤ x < +∞
[Figure: for a = 2, b = 0.5, the triangular PDF with peak b = 0.5 at x = 0, alongside its S-shaped CDF rising from 0 at x = −2 through 0.125 at x = −1 and 0.5 at x = 0 to 1 at x = 2.]
Question. 7: A random variable has the PDF fX(x) = a e^{−b|x|}, where a and b are positive constants. (i) Find the relation between a and b. (ii) Plot the PDF and CDF.
Solution:
fX(x) = a e^{bx} for −∞ < x ≤ 0;  a e^{−bx} for 0 ≤ x < ∞.
(i) We know that ∫_{−∞}^{∞} fX(x) dx = 1:
∫_{x=−∞}^{0} a e^{bx} dx + ∫_{x=0}^{∞} a e^{−bx} dx = 1
a[e^{bx}/b]_{−∞}^{0} + a[e^{−bx}/(−b)]_{0}^{∞} = 1
(a/b − 0) + (0 + a/b) = 1
2a/b = 1
∴ a/b = 1/2, i.e., a = b/2
(ii) Find the expression for FX(x) over two intervals: (a) −∞ < x ≤ 0; (b) 0 ≤ x < +∞.
(a) FX(x) for the interval −∞ < x ≤ 0:
FX(x) = ∫_{u=−∞}^{x} fX(u) du = ∫_{−∞}^{x} a e^{bu} du = a[e^{bu}/b]_{−∞}^{x} = (a/b)(e^{bx} − 0) = (1/2)e^{bx}
∴ FX(x) = (1/2)e^{bx};  −∞ < x ≤ 0
(b) FX(x) for the interval 0 ≤ x < +∞:
FX(x) = ∫_{−∞}^{0} a e^{bu} du + ∫_{0}^{x} a e^{−bu} du
      = (a/b)(1 − 0) + a[e^{−bu}/(−b)]_{0}^{x}
      = a/b − (a/b)(e^{−bx} − 1)
      = 2a/b − (a/b)e^{−bx}
      = 1 − (1/2)e^{−bx}
∴ FX(x) = 1 − (1/2)e^{−bx};  0 ≤ x < +∞
Collecting:
FX(x) = (1/2)e^{bx} for −∞ < x ≤ 0;  1 − (1/2)e^{−bx} for 0 ≤ x < +∞.
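The relation a = b/2 can be sanity-checked numerically. A minimal sketch (the value of b is an arbitrary choice; Simpson's rule over a wide truncated interval stands in for the infinite integral):

```python
import math

# Numerical check of Question 7: with a = b/2 the PDF a*exp(-b|x|)
# integrates to 1, and the CDF evaluated at 0 is a/b = 1/2.
b = 2.0
a = b / 2.0

def pdf(x):
    return a * math.exp(-b * abs(x))

def integrate(f, lo, hi, n=10_000):
    """Composite Simpson's rule (n must be even)."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

area = integrate(pdf, -30.0, 30.0)     # total probability
cdf_at_0 = integrate(pdf, -30.0, 0.0)  # F_X(0)
print(round(area, 4), round(cdf_at_0, 4))  # 1.0 0.5
```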
[Figure: the double-sided exponential PDF a e^{−b|x|} peaking at a at x = 0, and its CDF (plotted for a = 1, b = 2) rising through 0.5 at x = 0 toward 1.]
2.4 Statistical parameters of a Random variable
Moments about the origin: the n-th moment of X about the origin is mn = E[Xⁿ] = ∫_{−∞}^{∞} xⁿ fX(x) dx.
• If n = 0 then m0 = ∫_{−∞}^{∞} fX(x) dx = 1; m0 is the total area under the PDF curve.
• If n = 1 then m1 = E[X] = ∫_{−∞}^{∞} x fX(x) dx; m1 is the mean value of the random variable X (also called the expected, average, or D.C. value of X).
• If n = 2 then m2 = E[X²] = ∫_{−∞}^{∞} x² fX(x) dx; m2 is the mean-square value of X, which is the total power of the random variable, i.e.,
m2 = E[X²] = Total Power = AC Power + DC Power
• If n = 3 then m3 = E[X³] = ∫_{−∞}^{∞} x³ fX(x) dx; m3 is the third moment about the origin.
Central moments: the n-th central moment is µn = E[(X − m1)ⁿ] = ∫_{−∞}^{∞} (x − m1)ⁿ fX(x) dx.
• If n = 0 then µ0 = ∫_{−∞}^{∞} fX(x) dx = 1; µ0 is the total area under the PDF curve.
• If n = 1 then
µ1 = E[X − m1] = ∫_{−∞}^{∞} (x − m1) fX(x) dx
   = ∫ x fX(x) dx − m1 ∫ fX(x) dx   (the second integral equals 1)
   = m1 − m1 = 0
∴ µ1 = 0
• If n = 2 then
µ2 = E[(X − m1)²] = ∫_{−∞}^{∞} (x − m1)² fX(x) dx
   = E[X² + m1² − 2m1X]
   = E[X²] + m1² − 2m1E[X]
   = m2 + m1² − 2m1²
   = m2 − m1²
∴ µ2 = σX² = m2 − m1² = mean-square value − square of the mean
The second central moment is called the variance and is denoted σX²; it equals the AC power of the random variable X.
In many practical problems the expected value E[X] alone does not completely describe or characterize the probability distribution, so it is necessary to measure the spread or dispersion of the density about the mean value. The quantity used to measure this width, spread, or dispersion is the variance.
• If n = 3 then
µ3 = E[(X − m1)³] = ∫_{−∞}^{∞} (x − m1)³ fX(x) dx
   = E[X³ − 3m1X² + 3m1²X − m1³]   ∵ (a − b)³ = a³ − 3a²b + 3ab² − b³
   = E[X³] − 3m1E[X²] + 3m1²E[X] − m1³
   = m3 − 3m1(σX² + m1²) + 3m1³ − m1³   ∵ m2 = σX² + m1²
   = m3 − 3m1σX² − m1³
∴ µ3 = m3 − 3m1σX² − m1³
The third central moment is called the skew of the PDF and it measures the asymmetry of fX(x) about the mean. If a density function is symmetric about x = m1, its skew is zero. The normalized third central moment, α3 = µ3/σX³, is known as the skewness of the PDF, or the coefficient of skewness.
Summary:
1. Mean m1 = E[X]
2. Mean-square value m2 = E[X²]
3. Third moment about the origin m3 = E[X³]
4. Variance µ2 = σX² = m2 − m1²
5. Standard deviation σX = √(m2 − m1²) = √µ2
   If m1 = 0, σX = √m2, the r.m.s. value of the random variable X;
   if m1 ≠ 0, σX = √µ2, the standard deviation.
6. Skew µ3 = m3 − 3m1σX² − m1³
Question. 8: Find all statistical parameters of the continuous random variable X whose PDF is given by
fX(x) = (2/9)(x − 1);  1 ≤ x ≤ 4
        0;  elsewhere
Solution:
1. Mean value m1 = E[X] = ∫_{−∞}^{∞} x fX(x) dx:
E[X] = ∫_{1}^{4} x·(2/9)(x − 1) dx = (2/9)∫_{x=1}^{4} (x² − x) dx
     = (2/9)[x³/3 − x²/2]_{1}^{4}
     = (2/9)[(4³/3 − 4²/2) − (1/3 − 1/2)]
     = (2/9)(81/6) = 3
2. Mean-square value m2 = E[X²] = ∫_{−∞}^{∞} x² fX(x) dx:
m2 = ∫_{1}^{4} x²·(2/9)(x − 1) dx = (2/9)∫_{x=1}^{4} (x³ − x²) dx
   = (2/9)[x⁴/4 − x³/3]_{1}^{4}
   = (2/9)[(4⁴/4 − 4³/3) − (1/4 − 1/3)]
   = (2/9)[255/4 − 63/3] = 9.5
3. Third moment about the origin m3 = E[X³] = ∫_{−∞}^{∞} x³ fX(x) dx:
m3 = ∫_{1}^{4} x³·(2/9)(x − 1) dx = (2/9)∫_{x=1}^{4} (x⁴ − x³) dx
   = (2/9)[x⁵/5 − x⁴/4]_{1}^{4}
   = (2/9)[(4⁵/5 − 4⁴/4) − (1/5 − 1/4)]
   = (2/9)[1023/5 − 255/4] = 31.3
4. Variance µ2 = σX² = m2 − m1² = 9.5 − 3² = 9.5 − 9 = 0.5
5. Standard deviation σX = √(m2 − m1²) = √µ2 = √0.5 = 0.7071
6. Skew µ3 = m3 − 3m1σX² − m1³ = 31.3 − 3(3)(0.5) − 3³ = −0.2
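All six parameters of Question 8 can be verified numerically with a generic moment routine. A sketch (Simpson's rule; the helper names are illustrative):

```python
# Numerical check of Question 8 for f(x) = (2/9)(x - 1) on [1, 4].
def integrate(f, lo, hi, n=10_000):
    """Composite Simpson's rule (n must be even)."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

pdf = lambda x: (2.0 / 9.0) * (x - 1.0)
moment = lambda k: integrate(lambda x: x**k * pdf(x), 1.0, 4.0)

m1, m2, m3 = moment(1), moment(2), moment(3)
var = m2 - m1**2
skew = m3 - 3 * m1 * var - m1**3
print(round(m1, 4), round(m2, 4), round(var, 4), round(skew, 4))
# 3.0 9.5 0.5 -0.2
```

The same `moment` helper works unchanged for the uniform, exponential, and Rayleigh examples that follow, by swapping in the corresponding `pdf` and limits.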
Question. 9: Find all statistical parameters of the continuous random variable X whose PDF is given by
fX(x) = 1/2a;  −a ≤ x ≤ a
        0;  elsewhere
Solution:
[Figure: the rectangular PDF of height 1/2a on (−a, a) and its ramp CDF passing through 1/2 at x = 0.]
1. Mean value m1 = E[X] = ∫_{−∞}^{∞} x fX(x) dx:
E[X] = (1/2a)∫_{x=−a}^{a} x dx = (1/2a)[x²/2]_{−a}^{a} = (1/2a)(a²/2 − a²/2) = 0
2. Mean-square value m2 = E[X²]:
m2 = (1/2a)∫_{x=−a}^{a} x² dx = (1/2a)[x³/3]_{−a}^{a} = (1/2a)(2a³/3) = a²/3
3. Third moment about the origin m3 = E[X³]:
m3 = (1/2a)∫_{x=−a}^{a} x³ dx = (1/2a)[x⁴/4]_{−a}^{a} = (1/2a)(a⁴/4 − a⁴/4) = 0
4. Variance µ2 = σX² = m2 − m1² = a²/3 − 0 = a²/3
5. Standard deviation σX = √(m2 − m1²) = √µ2 = a/√3
6. Skew µ3 = m3 − 3m1σX² − m1³ = 0 − 3(0)(a²/3) − 0 = 0
Conclusion: the skew is zero, so the PDF is symmetric.
Question. 10: Find all statistical parameters of the continuous random variable X whose PDF is given by fX(x) = (1/2)e^{−|x|};  −∞ < x < ∞.
Solution: Given fX(x) = (1/2)e^{−|x|}, i.e.,
fX(x) = (1/2)e^{x} for −∞ < x ≤ 0;  (1/2)e^{−x} for 0 ≤ x < ∞.
[Figure: the double-sided exponential PDF peaking at 1/2 at x = 0, and its CDF passing through 0.5 at x = 0.]
1. Mean value m1 = E[X] = ∫_{−∞}^{∞} x fX(x) dx:
E[X] = ∫_{x=−∞}^{0} x·(1/2)e^{x} dx + ∫_{x=0}^{∞} x·(1/2)e^{−x} dx
     = (1/2)[e^{x}(x − 1)]_{−∞}^{0} + (1/2)[e^{−x}(−x − 1)]_{0}^{∞}
     = (1/2)(−1 − 0) + (1/2)(0 − (−1))
     = −1/2 + 1/2 = 0
2. Mean-square value m2 = E[X²] = ∫_{−∞}^{∞} x² fX(x) dx:
m2 = ∫_{x=−∞}^{0} x²·(1/2)e^{x} dx + ∫_{x=0}^{∞} x²·(1/2)e^{−x} dx
   = (1/2)[(x² − 2x + 2)e^{x}]_{−∞}^{0} + (1/2)[(−x² − 2x − 2)e^{−x}]_{0}^{∞}
   = (1/2)(2 − 0) + (1/2)(0 − (−2))
   = 1 + 1 = 2
3. Third moment about the origin m3 = E[X³] = ∫_{−∞}^{∞} x³ fX(x) dx:
m3 = ∫_{x=−∞}^{0} x³·(1/2)e^{x} dx + ∫_{x=0}^{∞} x³·(1/2)e^{−x} dx
   = (1/2)[e^{x}(x³ − 3x² + 6x − 6)]_{−∞}^{0} + (1/2)[e^{−x}(−x³ − 3x² − 6x − 6)]_{0}^{∞}
   = (1/2)(−6 − 0) + (1/2)(0 − (−6))
   = −3 + 3 = 0
4. Variance µ2 = σX² = m2 − m1² = 2 − 0 = 2
5. Standard deviation σX = √(m2 − m1²) = √µ2 = √2 = 1.414
6. Skew µ3 = m3 − 3m1σX² − m1³ = 0 − 0 − 0 = 0
7. Skewness α3 = µ3/σX³ = 0, so the PDF is symmetric about its mean value.
2.4.0.1 Skewness
[Figure: two skewed density curves, one with α3 > 0 (longer right tail) and one with α3 < 0 (longer left tail).]
• If the longer tail occurs to the right, the distribution is said to be skewed to the right.
• If the longer tail occurs to the left, the distribution is said to be skewed to the left.
• If α3 is positive, the distribution is skewed to the right; if α3 is negative, it is skewed to the left; if α3 is zero, the PDF is symmetric.
1. Expectation of a constant K is K: E[K] = K
Proof. Let X = K, a constant. Then
E[K] = ∫_{−∞}^{∞} K·fX(x) dx = K∫_{−∞}^{∞} fX(x) dx = K·1
∴ E[K] = K
2. Expectation of KX is KE[X]
Proof. E[KX] = ∫_{−∞}^{∞} Kx·fX(x) dx = K∫_{−∞}^{∞} x fX(x) dx = KE[X]
3. E[aX + b] = aE[X] + b
Proof.
E[aX + b] = ∫_{−∞}^{∞} (ax + b)·fX(x) dx
          = a∫_{−∞}^{∞} x·fX(x) dx + b∫_{−∞}^{∞} fX(x) dx
          = aE[X] + b   ∵ ∫ fX(x) dx = 1
4. If X and Y are two random variables with joint probability density function fXY(x, y), then E[X + Y] = E[X] + E[Y]
Proof. Let X and Y be two random variables with joint PDF fXY(x, y).
E[X + Y] = ∫_{x=−∞}^{∞}∫_{y=−∞}^{∞} (x + y) fXY(x, y) dx dy
         = ∫∫ x fXY(x, y) dx dy + ∫∫ y fXY(x, y) dx dy
         = ∫ x [∫ fXY(x, y) dy] dx + ∫ y [∫ fXY(x, y) dx] dy
         = ∫ x fX(x) dx + ∫ y fY(y) dy
         = E[X] + E[Y]
5. If X and Y are two independent random variables with joint PDF fXY(x, y), then E[XY] = E[X]E[Y]
Proof. We know that if two random variables are independent, then fXY(x, y) = fX(x)fY(y) (just as P(AB) = P(A)P(B)).
E[XY] = ∫_{x=−∞}^{∞}∫_{y=−∞}^{∞} xy fXY(x, y) dx dy
      = [∫ x fX(x) dx][∫ y fY(y) dy]
      = E[X]E[Y]
6. If Y ≤ X then E[Y] ≤ E[X]
Proof.
Y ≤ X ⇒ Y − X ≤ 0
      ⇒ E[Y − X] ≤ 0
      ⇒ E[Y] − E[X] ≤ 0
      ⇒ E[Y] ≤ E[X]   Hence proved.
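The linearity and independence properties above can be checked on small discrete distributions. A sketch in Python (the two PMFs are arbitrary test choices; the first reuses the PMF of Question 12 below):

```python
from fractions import Fraction

# Two small discrete PMFs chosen for illustration.
pmf_x = {-2: Fraction(1, 3), 3: Fraction(1, 2), 1: Fraction(1, 6)}
pmf_y = {0: Fraction(1, 4), 2: Fraction(3, 4)}

def E(pmf, g=lambda v: v):
    """Expectation of g(X) for a discrete PMF."""
    return sum(g(v) * p for v, p in pmf.items())

ex, ey = E(pmf_x), E(pmf_y)
assert E(pmf_x, lambda v: 3 * v + 5) == 3 * ex + 5   # E[aX + b] = aE[X] + b

# For independent X, Y the joint PMF factors, so E[X + Y] = E[X] + E[Y]
# and E[XY] = E[X]E[Y].
joint = {(x, y): px * py for x, px in pmf_x.items() for y, py in pmf_y.items()}
assert sum((x + y) * p for (x, y), p in joint.items()) == ex + ey
assert sum(x * y * p for (x, y), p in joint.items()) == ex * ey
print(ex, ey)  # 1 3/2
```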
Question. 11: For the random variable X with PDF
fX(x) = (2/9)(x − 1);  1 ≤ x ≤ 4;  0 elsewhere,
find E[X], E[X²], E[3X] and E[3X + 5].
Solution:
1. E[X] = ∫_{1}^{4} x·(2/9)(x − 1) dx = (2/9)[x³/3 − x²/2]_{1}^{4}
        = (2/9)[(4³/3 − 4²/2) − (1/3 − 1/2)] = (2/9)(13.5) = 3
2. E[X²] = ∫_{1}^{4} x²·fX(x) dx = (2/9)∫_{1}^{4} (x³ − x²) dx = (2/9)[x⁴/4 − x³/3]_{1}^{4}
         = (2/9)[(4⁴/4 − 4³/3) − (1/4 − 1/3)] = (2/9)(42.75) = 9.5
3. E[3X] = 3E[X] = 3 × 3 = 9
4. E[3X + 5] = 3E[X] + 5 = 9 + 5 = 14
Question. 12: The discrete random variable X is defined by
X = −2 with probability 1/3;  3 with probability 1/2;  1 with probability 1/6.

xi:       −2    3    1
P(xi):   1/3  1/2  1/6

(a) E[X] = Σ x fX(x) = (−2)(1/3) + 3(1/2) + 1(1/6) = 1
(b) E[X²] = Σ x² fX(x) = (−2)²(1/3) + 3²(1/2) + 1²(1/6) = 6
Question. 13: A random variable X has the PDF fX(x) = 5e^{−5x};  x ≥ 0.
Find (i) whether the PDF is valid, (ii) E[X], (iii) E[3X − 1], (iv) E[(X − 1)²].
Solution: (i) The PDF is valid if it integrates to one (total probability = 1):
∫_{x=−∞}^{∞} fX(x) dx = ∫_{0}^{∞} 5e^{−5x} dx = 5[e^{−5x}/(−5)]_{0}^{∞} = −(e^{−∞} − e⁰) = −(0 − 1) = 1
So the PDF is valid.
Question. 14: Let X be the random variable defined by the density function
fX(x) = (π/16)cos(πx/8);  −4 ≤ x ≤ 4;  0 elsewhere.
Find (i) E[3X] and (ii) E[X²].
(i) E[X] = ∫_{−4}^{4} x·(π/16)cos(πx/8) dx. Integrating by parts (∫uv = u∫v − ∫(du∫v)),
     = (1/2)[x sin(πx/8)]_{−4}^{4} + (4/π)[cos(πx/8)]_{−4}^{4}
     = (1/2)[4 sin(π/2) − 4 sin(π/2)] + (4/π)[cos(π/2) − cos(π/2)]
     = 0
(The result also follows directly because x cos(πx/8) is an odd function.)
E[X] = 0, ∴ E[3X] = 3E[X] = 0.
(ii) E[X²] = ∫_{−4}^{4} x²·(π/16)cos(πx/8) dx. Integrating by parts twice,
     = (1/2)[x² sin(πx/8)]_{−4}^{4} + (8/π)[x cos(πx/8)]_{−4}^{4} − (64/π²)[sin(πx/8)]_{−4}^{4}
     = (1/2)(16 + 16) + (8/π)(0) − (64/π²)(2)
     = 16 − 128/π²
∴ E[X²] = 16 − 128/π² ≈ 3.03
2.4.2 Properties of Variance
Let X be a random variable with PDF fX(x). The variance (second central moment, or AC power) is defined as
Var(X) = µ2 = E[(X − m1)²] = ∫_{x=−∞}^{+∞} (x − m1)² fX(x) dx
1. Var(X) = σX² = m2 − m1² = E[X²] − (E[X])²
Proof. Expanding the square, E[(X − m1)²] = E[X²] − 2m1E[X] + m1² = m2 − m1².
2. The variance of a constant K is zero, since E[K] = K and E[(K − K)²] = 0.
3. σ²_KX = Var(KX) = K²Var(X), where K is a constant.
Proof.
Var(KX) = E[(KX − E[KX])²]
        = E[(KX − Km1)²]   ∵ E[KX] = KE[X] = Km1
        = E[K²(X − m1)²]
        = K²E[(X − m1)²] = K²Var(X)
4. σ²_{aX+b} = Var(aX + b) = a²Var(X)
Proof.
Var(aX + b) = E[((aX + b) − E[aX + b])²]
            = E[((aX + b) − (am1 + b))²]   ∵ E[aX + b] = aE[X] + b
            = E[a²(X − m1)²]
            = a²Var(X)
5. σ²_{X+Y} = Var(X + Y) = Var(X) + Var(Y), if the random variables X and Y are independent.
6. σ²_{X−Y} = Var(X − Y) = Var(X) + Var(Y), if the random variables X and Y are independent.
7. If X and Y are two independent random variables with joint PDF fXY(x, y), then
Var(XY) = E[X²]E[Y²] − (E[X])²(E[Y])²
Proof.
Var(XY) = E[(XY − E[XY])²]
        = E[(XY − E[X]E[Y])²]   ∵ E[XY] = E[X]E[Y] for independent X, Y
        = E[X²Y²] − 2E[X]E[Y]E[XY] + (E[X])²(E[Y])²
        = E[X²]E[Y²] − 2(E[X])²(E[Y])² + (E[X])²(E[Y])²
        = E[X²]E[Y²] − (E[X])²(E[Y])²
8. In general,
Var(X + Y) = E[((X − E[X]) + (Y − E[Y]))²]
           = E[(X − E[X])²] + E[(Y − E[Y])²] + 2E[(X − E[X])(Y − E[Y])]
           = Var(X) + Var(Y) + 2(E[XY] − E[X]E[Y])
Similarly,
Var(X − Y) = Var(X) + Var(Y) − 2(E[XY] − E[X]E[Y])
For independent random variables E[XY] = E[X]E[Y], so the cross term vanishes and Var(X ± Y) = Var(X) + Var(Y), which proves properties 5 and 6.
∴ E[X] = 7/2
(b) Var(X) = m2 − m1² = 16 − (7/2)² = 16 − 49/4 = (64 − 49)/4 = 15/4
2.5 Standard PDFs and CDFs for Continuous Random Variables (Different Types of PDF and CDF)
A continuous random variable X is said to follow a uniform distribution on [a, b] if its PDF is constant there, fX(x) = K for a ≤ x ≤ b. Normalization forces K = 1/(b − a):
fX(x) = 1/(b − a);  a ≤ x ≤ b
        0;  elsewhere
[Figure: the rectangular uniform PDF of height 1/(b − a) on [a, b].]
2.5.1.1 Statistical Parameters for Uniform PDF
Question. 16: Calculate all statistical parameters for a uniform random variable whose PDF is as shown in Fig.: (a) uniform on (−a, a); (b) uniform on (0, 2a); (c) uniform on (a, b).
Solution: The third moment and the standard deviation for each case are
(a) Uniform on (−a, a):
m3 = ∫_{x=−a}^{a} x³·(1/2a) dx = (1/2a)[x⁴/4]_{−a}^{a} = 0;  σX = a/√3
(b) Uniform on (0, 2a):
m3 = ∫_{x=0}^{2a} x³·(1/2a) dx = (1/2a)[x⁴/4]_{0}^{2a} = (1/2a)(16a⁴/4) = 2a³;  σX = a/√3
(c) Uniform on (a, b):
m3 = ∫_{x=a}^{b} x³·(1/(b − a)) dx = (1/(b − a))[x⁴/4]_{a}^{b} = (b⁴ − a⁴)/(4(b − a));  σX = (b − a)/√12
NOTE: for a continuous r.v., the mean value locates the center of gravity of the area under the PDF curve.
Question. 17: Let X be a uniform random variable representing Quantization Noise Power (QNP), defined by
fX(x) = 1/20;  0 ≤ x ≤ 20
        0;  elsewhere
1. What is the average QNP?
2. What is the probability that the QNP is greater than the average power?
3. What is the probability that the QNP lies within ±5 of the average power?
Solution:
1. Average QNP: E[X] = ∫_{x=−∞}^{∞} x fX(x) dx = ∫_{x=0}^{20} (1/20)x dx = (1/20)[x²/2]_{0}^{20} = (1/20)(400/2) = 10
2. Probability that the QNP exceeds the average power:
P{X ≥ 10} = ∫_{x=10}^{20} fX(x) dx = ∫_{10}^{20} (1/20) dx = (1/20)[x]_{10}^{20} = (1/20)(20 − 10) = 1/2
3. Probability that the QNP lies within ±5 of the average power:
P{5 ≤ X ≤ 15} = ∫_{x=5}^{15} (1/20) dx = (1/20)[x]_{5}^{15} = (1/20)(15 − 5) = 1/2
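For a uniform PDF, every interval probability is just the interval length divided by the support width, so the three answers above reduce to one-line computations. A sketch:

```python
# Question 17 probabilities for X ~ Uniform(0, 20): for a uniform PDF an
# interval's probability is its length divided by the support width.
lo, hi = 0.0, 20.0
width = hi - lo

mean = (lo + hi) / 2                               # average QNP
p_above_mean = (hi - mean) / width                 # P{X >= 10}
p_within_5 = ((mean + 5) - (mean - 5)) / width     # P{5 <= X <= 15}
print(mean, p_above_mean, p_within_5)  # 10.0 0.5 0.5
```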
Question. 18: X(θ) = A cos θ, where θ is a uniform (0, 2π) random variable. Find the mean value of the random variable.
Solution:
E[X(θ)] = ∫ X(θ) fΘ(θ) dθ = (1/2π)∫_{θ=0}^{2π} A cos θ dθ = (A/2π)[sin θ]_{0}^{2π} = 0
Question. 19: X is a continuous random variable uniformly distributed on (−5, 5).
[Figure: rectangular PDF of height 1/(b − a) = 1/10 on (−5, 5).]
1. What is the PDF of X, fX(x)?
fX(x) = 1/10;  −5 ≤ x ≤ 5
        0;  elsewhere
2. What is the CDF of X?
FX(x) = (x − (−5))/(5 − (−5)) = (x + 5)/10 for −5 ≤ x ≤ 5;
FX(x) = 0 for x < −5 and FX(x) = 1 for x > 5.
3. E[X] = ∫_{x=−5}^{5} x·(1/10) dx = 0 (odd function)
E[X⁵] = ∫_{x=−5}^{5} x⁵·(1/10) dx = 0 (odd function)
E[eˣ] = ∫_{x=−5}^{5} eˣ·(1/10) dx = (1/10)[eˣ]_{−5}^{5} = (e⁵ − e^{−5})/10 ≈ 14.84
Question. 19: X is a uniform random variable with expected value E[X] = 7 and variance Var[X] = 3. What is the PDF of X?
Solution:
E[X] = (a + b)/2 = 7
Var[X] = σX² = (b − a)²/12 = 3  ⇒  (b − a)² = 36  ⇒  b − a = 6
From the two equations, a = 4 and b = 10.
∴ fX(x) = 1/6;  4 ≤ x ≤ 10
          0;  elsewhere
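The inversion from (mean, variance) back to the support (a, b) is a small helper worth writing once. A sketch (the function name is illustrative):

```python
import math

# Recover the uniform support (a, b) from E[X] = (a + b)/2 and
# Var[X] = (b - a)^2 / 12, as in the question above.
def uniform_support(mean, var):
    half_width = math.sqrt(12 * var) / 2   # (b - a)/2
    return mean - half_width, mean + half_width

a, b = uniform_support(7, 3)
print(a, b)  # 4.0 10.0
```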
Question: Find b so that gX(x) = 4 cos(πx/2b)·rect(x/2b) is a valid PDF.
Solution: We know that ∫_{x=−∞}^{∞} gX(x) dx = 1:
∫_{x=−b}^{b} 4 cos(πx/2b)·rect(x/2b) dx = 1
⇒ ∫_{x=−b}^{b} 4 cos(πx/2b) dx = 1
⇒ 4 × (2b/π)[sin(πx/2b)]_{−b}^{b} = 1
⇒ (8b/π)(sin(π/2) − sin(−π/2)) = 1
⇒ 16b/π = 1
⇒ b = π/16
[Figure: the rectangular pulse rect(x/2b) and the cosine lobe 4 cos(πx/2b) on (−b, b).]
2.5.2 Exponential Random Variable
A continuous random variable X follows an exponential distribution if its PDF is
fX(x) = (1/b)e^{−(x−a)/b};  x ≥ a
        0;  x < a
The corresponding CDF is
FX(x) = 1 − e^{−(x−a)/b};  x ≥ a
        0;  x < a
[Figure: the exponential CDF rising from 0 at x = a toward 1.]
Applications:
• The exponential density function is useful in describing raindrop sizes when a large number of storm measurements are made.
Question. 20: Calculate all the statistical averages (parameters) for the exponential PDF shown in Fig.:
fX(x) = (1/b)e^{−x/b};  x ≥ 0
        0;  x < 0
[Figure: the exponential PDF of height 1/b at x = 0, decaying as e^{−x/b}.]
Solution:
1. Mean value E[X] = m1 = ∫_{x=−∞}^{∞} x fX(x) dx:
m1 = ∫_{x=0}^{∞} x·(1/b)e^{−x/b} dx
Integrating by parts,
   = (1/b)[−bx e^{−x/b} − b²e^{−x/b}]_{0}^{∞}
   = (1/b)[0 − (−b²)]
   = b
(In particular, ∫_{0}^{∞} x e^{−x/b} dx = b², a result reused below.)
2. Mean-square value E[X²] = m2 = ∫_{x=−∞}^{∞} x² fX(x) dx:
m2 = ∫_{x=0}^{∞} x²·(1/b)e^{−x/b} dx
   = [−x²e^{−x/b}]_{0}^{∞} + 2∫_{0}^{∞} x e^{−x/b} dx
   = 0 + 2b²
   = 2b²
3. Third moment E[X³] = m3 = ∫_{x=−∞}^{∞} x³ fX(x) dx:
m3 = ∫_{x=0}^{∞} x³·(1/b)e^{−x/b} dx
   = [−x³e^{−x/b}]_{0}^{∞} + 3∫_{0}^{∞} x²e^{−x/b} dx
   = 0 + 3(2b³)   ∵ ∫_{0}^{∞} x²e^{−x/b} dx = b·m2 = 2b³
   = 6b³
4. Variance µ2 = σX² = m2 − m1² = 2b² − b² = b²
5. Standard deviation σX = √µ2 = √(b²) = b
6. Skew µ3 = m3 − 3m1σX² − m1³ = 6b³ − 3b·b² − b³ = 2b³
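The closed-form results m1 = b, m2 = 2b² and σX² = b² can be confirmed numerically. A sketch (b = 3 is an arbitrary test value; the infinite integral is truncated where the tail is negligible):

```python
import math

# Numerical check of Question 20 for f(x) = (1/b) exp(-x/b), x >= 0:
# m1 = b, m2 = 2b^2, variance = b^2.
b = 3.0

def integrate(f, lo, hi, n=20_000):
    """Composite Simpson's rule (n must be even)."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

pdf = lambda x: math.exp(-x / b) / b
moment = lambda k: integrate(lambda x: x**k * pdf(x), 0.0, 40 * b)

m1, m2 = moment(1), moment(2)
print(round(m1, 4), round(m2, 4), round(m2 - m1**2, 4))  # 3.0 18.0 9.0
```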
Question. 21: Calculate all the statistical parameters for the exponential PDF
fX(x) = (1/b)e^{−(x−a)/b};  x ≥ a
        0;  x < a
Solution:
1. Mean value E[X] = m1 = ∫_{x=−∞}^{∞} x fX(x) dx:
m1 = ∫_{x=a}^{∞} x·(1/b)e^{−(x−a)/b} dx
Substituting y = x − a (so x = y + a):
   = ∫_{0}^{∞} (y + a)·(1/b)e^{−y/b} dy
   = ∫_{0}^{∞} y·(1/b)e^{−y/b} dy + a∫_{0}^{∞} (1/b)e^{−y/b} dy
   = b + a
∴ E[X] = a + b
2. Mean-square value E[X²] = m2:
m2 = ∫_{x=a}^{∞} x²·(1/b)e^{−(x−a)/b} dx
   = ∫_{0}^{∞} (y + a)²·(1/b)e^{−y/b} dy
   = ∫_{0}^{∞} (y² + 2ay + a²)·(1/b)e^{−y/b} dy
   = 2b² + 2ab + a²
3. Variance:
µ2 = σX² = m2 − m1²
   = a² + 2ab + 2b² − (a + b)²
   = a² + 2ab + 2b² − a² − 2ab − b²
   = b²
4. Standard deviation σX = √µ2 = √(b²) = b
5. Skew µ3 = 2b³, as for the unshifted exponential of Question 20, since central moments are unaffected by the shift a.
6. Skewness α3 = µ3/σX³ = 2b³/b³ = 2
Question. 22: The power reflected from an aircraft of complicated shape that is received by a radar can be described by an exponential random variable P. The density of P is given by
fP(p) = (1/p0)e^{−p/p0};  p ≥ 0
        0;  p < 0
where p0 is the average received power. What is the probability that the received power is larger than the power received on the average?
Solution: The CDF is
FP(p) = 1 − e^{−p/p0};  p ≥ 0
        0;  p < 0
P(P ≤ p0) = FP(p0) = 1 − e^{−p0/p0} = 1 − e^{−1} = 0.632
Hence the probability that the received power exceeds the average power is
P(P > p0) = 1 − FP(p0) = e^{−1} ≈ 0.368
Verification via the mean: for the generic exponential fX(x) = (1/b)e^{−x/b}, the average power is
E[X] = ∫_{x=0}^{∞} x·(1/b)e^{−x/b} dx = b
so P(X > b) = ∫_{b}^{∞} (1/b)e^{−x/b} dx = e^{−1} ≈ 0.368, independent of b.
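The scale-independence of this answer is easy to demonstrate: whatever the average power p0, the exceedance probability is always e^{−1}. A sketch:

```python
import math

# Question 22: for any exponential r.v. with mean p0, the probability of
# exceeding the mean is exp(-1), independent of p0.
def tail_prob(threshold, p0):
    """P(P > threshold) = exp(-threshold/p0) from the exponential CDF."""
    return math.exp(-threshold / p0)

for p0 in (0.5, 1.0, 42.0):   # arbitrary average powers
    assert abs(tail_prob(p0, p0) - math.exp(-1)) < 1e-12

print(round(math.exp(-1), 3))  # 0.368
```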
Question: An exponential random variable X has variance 25. Find its PDF.
Solution: We know that for an exponential r.v. the mean, mean-square value, and variance are
E[X] = m1 = b;  E[X²] = m2 = 2b²;  Var(X) = µ2 = σX² = b²
∴ b² = 25 ⇒ b = 5
The PDF is therefore
fX(x) = (1/5)e^{−x/5};  x ≥ 0
        0;  x < 0
2.5.3 Rayleigh PDF
A Rayleigh density function can be defined for a continuous random variable X as
fX(x) = (2/b)(x − a)e^{−(x−a)²/b};  x ≥ a
        0;  x < a
The CDF is
FX(x) = ∫_{u=−∞}^{x} fX(u) du = ∫_{u=a}^{x} (2/b)(u − a)e^{−(u−a)²/b} du
Let (u − a)²/b = t, so that u − a = √(bt) and du = (√b/2)t^{−1/2} dt.
When u = a, t = 0; when u = x, t = (x − a)²/b.
FX(x) = ∫_{t=0}^{(x−a)²/b} (2/b)·√(bt)·e^{−t}·(√b/2)t^{−1/2} dt
      = ∫_{0}^{(x−a)²/b} e^{−t} dt
      = [−e^{−t}]_{0}^{(x−a)²/b}
      = 1 − e^{−(x−a)²/b}
∴ FX(x) = 1 − e^{−(x−a)²/b};  x ≥ a
          0;  x < a
[Figure: the Rayleigh PDF peaking at 0.607√(2/b) at x = a + √(b/2), and the CDF passing through 1 − e^{−1/2} ≈ 0.393 at the same point.]
Applications:
• The Rayleigh PDF describes the envelope of one type of noise when passed through a bandpass filter.
Question. 25: The lifetime of a computer, expressed in weeks, is a Rayleigh random variable with PDF
fX(x) = (x/200)e^{−x²/400};  x ≥ 0
        0;  x < 0
1. Find the probability that the computer fails during its first full week.
2. What is the probability that the lifetime of the computer will exceed one year?
Solution:
1.
P{0 ≤ X ≤ 1} = ∫_{x=0}^{1} (x/200)e^{−x²/400} dx
Let x²/400 = t, so x = 20√t and dx = 10t^{−1/2} dt. When x = 0, t = 0; when x = 1, t = 1/400.
             = ∫_{t=0}^{1/400} (20√t/200)e^{−t}·10t^{−1/2} dt
             = ∫_{0}^{1/400} e^{−t} dt = [−e^{−t}]_{0}^{1/400}
             = e⁰ − e^{−1/400} = 1 − 0.9975
             = 0.0025
2. One year is 52 weeks. Using the Rayleigh CDF FX(x) = 1 − e^{−x²/400}:
P{X > 52} = 1 − FX(52) = e^{−52²/400} = e^{−6.76} ≈ 0.00116
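Both lifetime probabilities follow directly from the Rayleigh CDF; a sketch (the one-year answer assumes 52 weeks per year, as in part 2 above):

```python
import math

# Question 25: Rayleigh lifetime with CDF F(x) = 1 - exp(-x^2/400).
def cdf(x):
    return 1.0 - math.exp(-x * x / 400.0)

p_first_week = cdf(1)             # fails within week 1
p_over_a_year = 1.0 - cdf(52)     # survives 52 weeks
print(round(p_first_week, 4), round(p_over_a_year, 5))  # 0.0025 0.00116
```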
2.5.3.1 Statistical Parameters for Rayleigh PDF
Question. 25: Calculate all statistical averages of a continuous random variable X with Rayleigh PDF.
Case I: fX(x) = (2/b)x e^{−x²/b};  x ≥ 0;  0 elsewhere.
Case II: fX(x) = (2/b)(x − a)e^{−(x−a)²/b};  x ≥ a;  0 elsewhere.
Case I:
[Figure: the Rayleigh PDF peaking at 0.607√(2/b) at x = √(b/2).]
1. Mean value:
m1 = ∫_{x=0}^{∞} x·(2/b)x e^{−x²/b} dx = (2/b)∫_{0}^{∞} x² e^{−x²/b} dx
Let x²/b = t, so x = √(bt) and dx = (√b/2)t^{−1/2} dt; x = 0 ⇒ t = 0, x = ∞ ⇒ t = ∞.
m1 = (2/b)∫_{t=0}^{∞} bt·e^{−t}·(√b/2)t^{−1/2} dt
   = √b ∫_{0}^{∞} t^{1/2} e^{−t} dt
   = √b ∫_{0}^{∞} e^{−t} t^{(3/2)−1} dt
We know that Γ(n) = ∫_{0}^{∞} e^{−x}x^{n−1} dx, Γ(n + 1) = nΓ(n), and Γ(1/2) = √π.
   = √b·Γ(3/2) = √b·(1/2)Γ(1/2) = (√b/2)√π
∴ m1 = √(bπ)/2
2. Mean-square value:
m2 = ∫_{x=0}^{∞} x²·(2/b)x e^{−x²/b} dx = (2/b)∫_{0}^{∞} x³ e^{−x²/b} dx
With the same substitution x²/b = t:
m2 = (2/b)∫_{t=0}^{∞} (bt)^{3/2} e^{−t}·(√b/2)t^{−1/2} dt
   = b∫_{0}^{∞} t e^{−t} dt
   = b[−t e^{−t} − e^{−t}]_{0}^{∞} = b[0 − (−1)]
∴ m2 = b
3. Third moment:
m3 = ∫_{x=0}^{∞} x³·(2/b)x e^{−x²/b} dx = (2/b)∫_{0}^{∞} x⁴ e^{−x²/b} dx
   = (2/b)∫_{t=0}^{∞} (bt)² e^{−t}·(√b/2)t^{−1/2} dt
   = b√b ∫_{0}^{∞} t^{3/2} e^{−t} dt
   = b√b·Γ(5/2) = b√b·(3/2)(1/2)Γ(1/2) = (3/4)b√(bπ)
∴ m3 = (3/4)b√(bπ)
4. Variance:
σX² = µ2 = m2 − m1² = b − (√(bπ)/2)² = b − bπ/4 = b(1 − π/4)
5. Standard deviation: σX = √(b(1 − π/4))
6. Skew:
µ3 = m3 − 3m1µ2 − m1³
   = (3/4)b√(bπ) − 3·(√(bπ)/2)·b(1 − π/4) − (√(bπ))³/8
   = b√(bπ)[3/4 − 3/2 + 3π/8 − π/8]
   = (b√(bπ)/4)(π − 3)
∴ µ3 = (b√(bπ)/4)(π − 3)
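The Case I results m1 = √(bπ)/2, m2 = b and σX² = b(1 − π/4) can be confirmed numerically. A sketch (b = 4 is an arbitrary test value; the upper limit is chosen so the truncated tail is negligible):

```python
import math

# Numerical check of Case I for f(x) = (2/b) x exp(-x^2/b), x >= 0.
b = 4.0

def integrate(f, lo, hi, n=20_000):
    """Composite Simpson's rule (n must be even)."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

pdf = lambda x: (2.0 / b) * x * math.exp(-x * x / b)
moment = lambda k: integrate(lambda x: x**k * pdf(x), 0.0, 20.0)

m1, m2 = moment(1), moment(2)
var = m2 - m1**2
print(round(m1, 4), round(m2, 4), round(var, 4))
# m1 ~ sqrt(4*pi)/2 = 1.7725, m2 ~ 4.0, var ~ 4*(1 - pi/4) = 0.8584
```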
Case II: Given the PDF
fX(x) = (2/b)(x − a)e^{−(x−a)²/b};  x ≥ a;  0 elsewhere.
[Figure: the shifted Rayleigh PDF peaking at 0.607√(2/b) at x = a + √(b/2).]
1. Mean value:
m1 = ∫_{x=a}^{∞} x·(2/b)(x − a)e^{−(x−a)²/b} dx
Let (x − a)²/b = t, so x = √(bt) + a and dx = (√b/2)t^{−1/2} dt; x = a ⇒ t = 0, x = ∞ ⇒ t = ∞.
m1 = (2/b)∫_{t=0}^{∞} (√(bt) + a)·√(bt)·e^{−t}·(√b/2)t^{−1/2} dt
   = ∫_{0}^{∞} (√b·t^{1/2} + a)e^{−t} dt
   = √b ∫_{0}^{∞} t^{1/2}e^{−t} dt + a∫_{0}^{∞} e^{−t} dt
   = √b·Γ(3/2) + a = (√b/2)√π + a
∴ m1 = a + √(bπ)/2
2. Mean-square value (same substitution):
m2 = ∫_{x=a}^{∞} x²·(2/b)(x − a)e^{−(x−a)²/b} dx = ∫_{0}^{∞} (√(bt) + a)² e^{−t} dt
   = ∫_{0}^{∞} (bt + 2a√b·t^{1/2} + a²)e^{−t} dt
   = b + 2a√b·Γ(3/2) + a²
∴ m2 = a² + a√(bπ) + b
3. Third moment (same substitution):
m3 = ∫_{x=a}^{∞} x³·(2/b)(x − a)e^{−(x−a)²/b} dx = ∫_{0}^{∞} (√(bt) + a)³ e^{−t} dt
   = ∫_{0}^{∞} (b^{3/2}t^{3/2} + 3abt + 3a²√b·t^{1/2} + a³) e^{−t} dt
   = b^{3/2}Γ(5/2) + 3ab·Γ(2) + 3a²√b·Γ(3/2) + a³
∴ m3 = a³ + (3/4)b√(bπ) + (3/2)a²√(bπ) + 3ab
4. Variance:
σX² = µ2 = m2 − m1²
    = a² + a√(bπ) + b − (a + √(bπ)/2)²
    = a² + a√(bπ) + b − a² − a√(bπ) − bπ/4
    = b − bπ/4 = b(1 − π/4)
5. Standard deviation: σX = √(b(1 − π/4))
6. Skew:
µ3 = m3 − 3m1µ2 − m1³
Substituting and simplifying, every term in a cancels (central moments are unaffected by the shift a) and, exactly as in Case I,
µ3 = (b√(bπ)/4)(π − 3)
Question. 26: Calculate all statistical averages of a continuous random variable X with the PDF
fX(x) = (x/α²)e^{−x²/2α²};  x ≥ 0;  0 elsewhere.
[Figure: the PDF peaking at 0.607/α at x = α.]
1. Mean value:
m1 = ∫_{x=0}^{∞} x·(x/α²)e^{−x²/2α²} dx = (1/α²)∫_{0}^{∞} x²e^{−x²/2α²} dx
Let x²/2α² = t, so x = √2·α√t and dx = (α/√2)t^{−1/2} dt; x = 0 ⇒ t = 0, x = ∞ ⇒ t = ∞.
m1 = (1/α²)∫_{t=0}^{∞} 2α²t·e^{−t}·(α/√2)t^{−1/2} dt
   = √2·α∫_{0}^{∞} t^{1/2}e^{−t} dt
We know that Γ(n) = ∫_{0}^{∞} e^{−x}x^{n−1} dx, Γ(n + 1) = nΓ(n), and Γ(1/2) = √π.
   = √2·α·Γ(3/2) = √2·α·(1/2)√π = (α/√2)√π
∴ m1 = α√(π/2)
2. Mean-square value (same substitution):
m2 = (1/α²)∫_{0}^{∞} x³e^{−x²/2α²} dx
   = (1/α²)∫_{t=0}^{∞} (√2·α√t)³ e^{−t}·(α/√2)t^{−1/2} dt
   = 2α²∫_{0}^{∞} t e^{−t} dt = 2α²·Γ(2)
∴ m2 = 2α²
3. Third moment (same substitution):
m3 = (1/α²)∫_{0}^{∞} x⁴e^{−x²/2α²} dx
   = (1/α²)∫_{t=0}^{∞} (2α²t)² e^{−t}·(α/√2)t^{−1/2} dt
   = 2√2·α³∫_{0}^{∞} t^{3/2}e^{−t} dt
   = 2√2·α³·Γ(5/2) = 2√2·α³·(3/2)(1/2)√π
∴ m3 = 3α³√(π/2)
4. Variance:
σX² = µ2 = m2 − m1² = 2α² − (α√(π/2))² = 2α² − α²π/2 = α²(2 − π/2)
5. Standard deviation: σX = α√(2 − π/2)
6. Skew:
µ3 = m3 − 3m1µ2 − m1³
   = 3α³√(π/2) − 3·α√(π/2)·α²(2 − π/2) − α³(π/2)√(π/2)
   = α³√(π/2)[3 − 6 + 3π/2 − π/2]
∴ µ3 = α³√(π/2)(π − 3)
Mode of the PDF fX(x) = (x/α²)e^{−x²/2α²}: the peak occurs where dfX(x)/dx = 0:
(d/dx)[(x/α²)e^{−x²/2α²}] = 0
⇒ (1/α²)[x·e^{−x²/2α²}·(−x/α²) + e^{−x²/2α²}·1] = 0
⇒ e^{−x²/2α²}(1 − x²/α²) = 0
⇒ x²/α² = 1 ⇒ x = α
Substituting x = α in fX(x):
fX(α) = (α/α²)e^{−α²/2α²} = (1/α)e^{−1/2} = (1/α)(0.6065)
∴ the peak value of fX(x) is 0.6065/α, occurring at x = α.
2.5.4 Gaussian PDF
The Gaussian PDF is the most important of all PDFs and it enters into nearly all areas of science and engineering. In communications, the Gaussian model is used to represent the noise voltage generated across a resistor, shot noise generated in semiconductor devices, thermal noise, and the noise added by the channel while transmitting information from transmitter to receiver.
Normal (standard) PDF: fX(x) = (1/√(2π))e^{−x²/2}
Normal CDF: FX(x) = (1/√(2π))∫_{−∞}^{x} e^{−u²/2} du
General Gaussian PDF: fX(x) = (1/(σ√(2π)))·Exp[−(x − m)²/(2σ²)], where m = E[X] = µ is the mean and σ the standard deviation.
[Figure: the bell-shaped Gaussian PDF centered at 0 over −3 ≤ x ≤ 3, and its S-shaped CDF rising from 0 to 1 through 0.5 at x = 0.]
Question. 25: Calculate all statistical averages of a continuous random variable X with Gaussian PDF.
Case 1: fX(x) = (1/√(2π))e^{−x²/2};  −∞ < x < ∞.
Case 2: fX(x) = (1/(σ√(2π)))e^{−(x−m)²/2σ²};  −∞ < x < ∞.
Case 1: fX(x) = (1/√(2π))e^{−x²/2}
1. Mean value:
m1 = ∫_{x=−∞}^{∞} x·(1/√(2π))e^{−x²/2} dx = 0
since the integrand is an odd function (the integral of an odd function over (−a, a) is zero).
2. Mean-square value (using symmetry and the gamma function, as in the Rayleigh case):
m2 = ∫_{−∞}^{∞} x²·(1/√(2π))e^{−x²/2} dx = (2/√π)·(1/2)Γ(1/2) = (1/√π)·√π = 1
∴ m2 = E[X²] = 1
3. Third moment: m3 = 0 (odd integrand).
4. Variance: σX² = m2 − m1² = 1 − 0² = 1
5. Standard deviation: σX = 1
6. Skew: µ3 = m3 − 3m1σX² − m1³ = 0 − 0 − 0 = 0
Case 2: fX(x) = (1/(σ√(2π)))·Exp[−(x − m)²/(2σ²)]
1. Mean value:
E[X] = ∫_{x=−∞}^{∞} x·(1/(σ√(2π)))e^{−(x−m)²/2σ²} dx
Let y = x − m; when x = ±∞, y = ±∞:
     = (1/(σ√(2π)))∫_{−∞}^{∞} (y + m)e^{−y²/2σ²} dy
     = (1/(σ√(2π)))∫_{−∞}^{∞} y e^{−y²/2σ²} dy + (m/(σ√(2π)))∫_{−∞}^{∞} e^{−y²/2σ²} dy
The first integral is zero (odd function). For the second, let y²/2σ² = t, so y = √2·σ√t and dy = (σ/√2)t^{−1/2} dt; by symmetry (even integrand),
     = 2·(m/(σ√(2π)))∫_{t=0}^{∞} e^{−t}·(σ/√2)t^{−1/2} dt
     = (m/√π)∫_{0}^{∞} e^{−t}t^{(1/2)−1} dt
     = (m/√π)·Γ(1/2) = (m/√π)·√π
∴ E[X] = m
2. Mean square value:

$E[X^2] = \displaystyle\int_{-\infty}^{\infty} x^2 f_X(x)\, dx = \dfrac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} x^2\, e^{-(x-m)^2/2\sigma^2}\, dx$

Let $\dfrac{x-m}{\sigma} = y \Rightarrow x = \sigma y + m \Rightarrow dx = \sigma\, dy$; if $x = \pm\infty \Rightarrow y = \pm\infty$:

$= \dfrac{1}{\sqrt{2\pi}} \displaystyle\int_{-\infty}^{\infty} (\sigma y + m)^2\, e^{-y^2/2}\, dy$

$= \dfrac{1}{\sqrt{2\pi}} \left[\sigma^2 \displaystyle\int_{-\infty}^{\infty} y^2\, e^{-y^2/2}\, dy + m^2 \int_{-\infty}^{\infty} e^{-y^2/2}\, dy + \underbrace{2\sigma m \int_{-\infty}^{\infty} y\, e^{-y^2/2}\, dy}_{=\,0\ (\text{odd function})}\right]$

$= \sigma^2\, \dfrac{2}{\sqrt{2\pi}} \displaystyle\int_{0}^{\infty} y^2\, e^{-y^2/2}\, dy + m^2\, \dfrac{2}{\sqrt{2\pi}} \int_{0}^{\infty} e^{-y^2/2}\, dy \quad (\because \text{even functions})$

Let $\dfrac{y^2}{2} = t \Rightarrow y = \sqrt{2t}$, $2y\, dy = 2\, dt \Rightarrow dy = \dfrac{1}{\sqrt{2t}}\, dt$; if $y = 0$ then $t = 0$; if $y = \infty$ then $t = \infty$:

$= \sigma^2\, \dfrac{2}{\sqrt{2\pi}} \displaystyle\int_{0}^{\infty} 2t\, e^{-t}\, \dfrac{dt}{\sqrt{2t}} + m^2\, \dfrac{2}{\sqrt{2\pi}} \int_{0}^{\infty} e^{-t}\, \dfrac{dt}{\sqrt{2t}}$

$= \dfrac{2\sigma^2}{\sqrt{\pi}} \displaystyle\int_{0}^{\infty} e^{-t}\, t^{(\frac{1}{2}+1)-1}\, dt + \dfrac{m^2}{\sqrt{\pi}} \int_{0}^{\infty} e^{-t}\, t^{\frac{1}{2}-1}\, dt$

$= \dfrac{2\sigma^2}{\sqrt{\pi}}\, \Gamma\!\left(\tfrac{1}{2}+1\right) + \dfrac{m^2}{\sqrt{\pi}}\, \Gamma\!\left(\tfrac{1}{2}\right) \quad \left[\because \Gamma(n) = \displaystyle\int_{0}^{\infty} e^{-x} x^{n-1}\, dx\right]$

$= \dfrac{2\sigma^2}{\sqrt{\pi}} \cdot \dfrac{1}{2}\, \Gamma\!\left(\tfrac{1}{2}\right) + \dfrac{m^2}{\sqrt{\pi}}\, \sqrt{\pi} \quad \left[\because \Gamma(n+1) = n\,\Gamma(n);\; \Gamma\!\left(\tfrac{1}{2}\right) = \sqrt{\pi}\right]$

$= \dfrac{\sigma^2}{\sqrt{\pi}}\, \sqrt{\pi} + m^2$

$\therefore E[X^2] = \sigma^2 + m^2$
3. Variance: $\sigma_X^2 = \mu_2 = m_2 - m_1^2 = \sigma^2 + m^2 - m^2 = \sigma^2$

4. Standard deviation: $\sigma_X = \sigma$

5. Skew: $\mu_3 = 0$ ($\because$ the PDF is symmetric about the mean $\bar{X} = m$)
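As a quick numerical sanity check of the Case-2 results $E[X] = m$ and $E[X^2] = \sigma^2 + m^2$, they can be verified by sampling with Python's standard library; the values $m = 3$, $\sigma = 2$ below are arbitrary illustrative choices, not from the notes.

```python
# Monte Carlo check of E[X] = m and E[X^2] = sigma^2 + m^2 for a Gaussian r.v.
import random

random.seed(1)
m, sigma = 3.0, 2.0
n = 200_000
samples = [random.gauss(m, sigma) for _ in range(n)]

mean = sum(samples) / n                     # should be close to m = 3
mean_sq = sum(x * x for x in samples) / n   # should be close to 4 + 9 = 13

print(round(mean, 2), round(mean_sq, 1))
```

With 200,000 samples the estimates typically agree with the closed forms to two decimal places.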
2.5.4.1 Q-function:

$f_X(x) = \dfrac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\bar{X})^2/2\sigma^2} \qquad (2.3)$

If $\bar{X} = 0$ and $\sigma = 1$ then $f_X(x) = \dfrac{1}{\sqrt{2\pi}}\, e^{-x^2/2}$. This is called the
“normal (standard) Gaussian PDF”.

The normal Gaussian CDF is

$F_X(x) = P(-\infty \le X \le x) = P(X \le x) = \displaystyle\int_{-\infty}^{x} f_X(u)\, du = \dfrac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-u^2/2}\, du \qquad (2.4)$

This integral cannot be evaluated in closed form and must be computed numerically.
It is convenient to use the function $Q(\cdot)$, defined as

$Q(x) = P(X > x) = \displaystyle\int_{x}^{\infty} f_X(u)\, du = \dfrac{1}{\sqrt{2\pi}} \int_{x}^{\infty} e^{-u^2/2}\, du$

$Q(x)$ is the area under $f_X(x)$ from $x$ to $\infty$. Since the total area under $f_X(x)$ is 1,

$\displaystyle\int_{-\infty}^{\infty} f_X(x)\, dx = 1 \;\Rightarrow\; \int_{-\infty}^{x} f_X(u)\, du + \int_{x}^{\infty} f_X(u)\, du = 1 \;\Rightarrow\; F_X(x) + Q(x) = 1$

$\therefore F_X(x) = 1 - Q(x)$
For a general Gaussian r.v with mean $\bar{X}$ and standard deviation $\sigma_X$:

$F_X(x) = P(-\infty \le X \le x) = P(X \le x) = \displaystyle\int_{-\infty}^{x} f_X(u)\, du = \dfrac{1}{\sigma_X\sqrt{2\pi}} \int_{-\infty}^{x} e^{-(u-\bar{X})^2/2\sigma_X^2}\, du$

Let $Z = \dfrac{u - \bar{X}}{\sigma_X} \Rightarrow du = \sigma_X\, dZ$; if $u = -\infty \Rightarrow Z = -\infty$; if $u = x \Rightarrow Z = \dfrac{x - \bar{X}}{\sigma_X}$:

$= \dfrac{1}{\sigma_X\sqrt{2\pi}} \displaystyle\int_{-\infty}^{\frac{x-\bar{X}}{\sigma_X}} e^{-Z^2/2}\, \sigma_X\, dZ = \dfrac{1}{\sqrt{2\pi}} \displaystyle\int_{-\infty}^{\frac{x-\bar{X}}{\sigma_X}} e^{-Z^2/2}\, dZ = P\{X < x\}$

$\therefore F_X(x) = 1 - Q\!\left(\dfrac{x - \bar{X}}{\sigma_X}\right)$

$P\{X < x\} = 1 - Q\!\left(\dfrac{x - \bar{X}}{\sigma_X}\right), \qquad P\{X > x\} = Q\!\left(\dfrac{x - \bar{X}}{\sigma_X}\right)$
Summary:

1. $Q(\cdot)$ definition:

$Q(x) = \dfrac{1}{\sqrt{2\pi}} \displaystyle\int_{x}^{\infty} e^{-u^2/2}\, du$

2. Properties:

$Q(-x) = 1 - Q(x)$

$Q(x) = \dfrac{1}{2}\, \mathrm{erfc}\!\left(\dfrac{x}{\sqrt{2}}\right), \qquad \mathrm{erfc}(x) = 2\,Q(x\sqrt{2})$

[Figure: Gaussian PDF $\frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-m)^2/2\sigma^2}$ with the tail area $Q(\cdot)$ shaded, and the corresponding CDF $F_X(x)$.]
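The summary relations can be evaluated directly with the Python standard library, since `math.erfc` is available; a minimal sketch:

```python
# Q-function via the complementary error function: Q(x) = 0.5*erfc(x/sqrt(2)).
import math

def Q(x: float) -> float:
    """Gaussian tail probability Q(x) = P(X > x) for X ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

print(round(Q(1.25), 4))            # 0.1056
print(round(1 - Q(1.25), 4))        # F_X(1.25) = 0.8944
print(round(Q(-1.0) + Q(1.0), 4))   # Q(-x) = 1 - Q(x), so this is 1.0
```

These values match the normal-table entries used in the worked questions below.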
Question. 26: Find the probability of the event $X \le 5.5$ for a Gaussian random
variable having $m = 3$ and $\sigma_X = 2$.

Solution:

$P(X \le 5.5) = P(-\infty \le X \le 5.5) = F_X(5.5) = 1 - Q\!\left(\dfrac{x - \bar{X}}{\sigma_X}\right)$

$\dfrac{x - \bar{X}}{\sigma_X} = \dfrac{5.5 - 3}{2} = 1.25$

$\therefore F_X(5.5) = 1 - Q(1.25) = 1 - 0.1056 = 0.8944$

$P(X \le 5.5) = 0.8944$ and also $P(X \ge 5.5) = 0.1056$
Question. 27: Find the probability of the event $X \le 7.3$ for a Gaussian random
variable having $m = 7$ and $\sigma_X = 0.5$.

Solution:

$P(X \le 7.3) = P(-\infty \le X \le 7.3) = F_X(7.3) = 1 - Q\!\left(\dfrac{x - \bar{X}}{\sigma_X}\right)$

$\dfrac{x - \bar{X}}{\sigma_X} = \dfrac{7.3 - 7}{0.5} = 0.6$

$\therefore F_X(7.3) = 1 - Q(0.6) = 1 - 0.2743 = 0.7257$

$P(X \le 7.3) = 0.7257$ and also $P(X \ge 7.3) = 0.2743$
Question. 27: Assume that the height of clouds above the ground at some location
is a Gaussian r.v with $m = 1830$ m and $\sigma_X = 460$ m. Find the probability that the
clouds will be higher than 2750 m.

Solution:

$P(X > 2750) = 1 - P(-\infty \le X \le 2750) = 1 - F_X(2750) \quad \left[\because P(X < x) = 1 - Q\!\left(\dfrac{x - \bar{X}}{\sigma_X}\right)\right]$

$= 1 - \left[1 - Q\!\left(\dfrac{x - \bar{X}}{\sigma_X}\right)\right] = Q\!\left(\dfrac{2750 - 1830}{460}\right) = Q(2.0) = 0.2275 \times 10^{-1}$

$\therefore P(X > 2750) = 0.02275$
Question. 28: An analog signal received at the detector (measured in $\mu V$) may be
modeled as a Gaussian r.v with mean value 200 and standard deviation 256. What is
the probability that the signal is larger than 250 $\mu V$?

Solution:

$P(X > 250) = 1 - P(-\infty \le X \le 250) = 1 - F_X(250) \quad \left[\because P(X > x) = Q\!\left(\dfrac{x - \bar{X}}{\sigma_X}\right)\right]$

$= 1 - \left[1 - Q\!\left(\dfrac{250 - 200}{256}\right)\right] = Q(0.195) \approx 0.4247$

$\therefore P(X > 250) \approx 0.4247$
Error function:

$f_X(x) = \dfrac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\bar{X})^2/2\sigma^2} \qquad (2.5)$

Let $\bar{X} = 0$; then $f_X(x) = \dfrac{1}{\sigma\sqrt{2\pi}}\, e^{-x^2/2\sigma^2}$ is a zero-mean Gaussian PDF.
Here $\bar{X} = m_1 = 0$ means that in communication the noise has zero mean.

The corresponding CDF is

$F_X(x) = P(-\infty \le X \le x) = P(X \le x) = \displaystyle\int_{-\infty}^{x} f_X(u)\, du = \dfrac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{x} e^{-u^2/2\sigma^2}\, du \qquad (2.6)$

This integral is not easily evaluated; it can be expressed using a standard function
called the “error function”, defined as

$\mathrm{erf}(x) = \dfrac{2}{\sqrt{\pi}} \displaystyle\int_{0}^{x} e^{-u^2}\, du$

From equation (2.6), the CDF can be evaluated as

$F_X(x) = \displaystyle\int_{-\infty}^{\infty} \dfrac{e^{-u^2/2\sigma^2}}{\sqrt{2\pi\sigma^2}}\, du - \int_{x}^{\infty} \dfrac{e^{-u^2/2\sigma^2}}{\sqrt{2\pi\sigma^2}}\, du = 1 - \int_{x}^{\infty} \dfrac{e^{-u^2/2\sigma^2}}{\sqrt{2\pi\sigma^2}}\, du$

Let $z = \dfrac{u}{\sqrt{2}\,\sigma} \Rightarrow du = \sqrt{2}\,\sigma\, dz$; if $u = x \Rightarrow z = \dfrac{x}{\sqrt{2}\,\sigma}$; if $u = \infty \Rightarrow z = \infty$:

$= 1 - \dfrac{1}{\sqrt{2\pi\sigma^2}} \displaystyle\int_{\frac{x}{\sqrt{2}\sigma}}^{\infty} e^{-z^2}\, \sqrt{2}\,\sigma\, dz = 1 - \dfrac{1}{\sqrt{\pi}} \displaystyle\int_{\frac{x}{\sqrt{2}\sigma}}^{\infty} e^{-z^2}\, dz = 1 - \dfrac{1}{2} \cdot \dfrac{2}{\sqrt{\pi}} \displaystyle\int_{\frac{x}{\sqrt{2}\sigma}}^{\infty} e^{-z^2}\, dz$

$\therefore F_X(x) = 1 - \dfrac{1}{2}\, \mathrm{erfc}\!\left(\dfrac{x}{\sqrt{2}\,\sigma}\right)$

$\mathrm{erfc}\!\left(\dfrac{x}{\sqrt{2}\,\sigma}\right) = 2 - 2F_X(x) = 2\left[1 - F_X(x)\right] = 2\,Q\!\left(\dfrac{x}{\sigma}\right)$

With $\sigma = 1$ (standard Gaussian):

$\therefore Q(x) = \dfrac{1}{2}\, \mathrm{erfc}\!\left(\dfrac{x}{\sqrt{2}}\right) \quad \text{(or)} \quad Q(x) = \dfrac{1}{2}\left[1 - \mathrm{erf}\!\left(\dfrac{x}{\sqrt{2}}\right)\right]$
Question. 29: ‘X’ is a Gaussian r.v with $E[X] = 0$ and $P[\,|X| \le 10\,] = 0.1$. What
is the standard deviation $\sigma_X$?

Solution:

$P[\,|X| \le 10\,] = P(-10 \le X \le 10) = F_X(10) - F_X(-10) = \left[1 - Q\!\left(\dfrac{10}{\sigma_X}\right)\right] - Q\!\left(\dfrac{10}{\sigma_X}\right)$

$\Rightarrow 1 - 2\,Q\!\left(\dfrac{10}{\sigma_X}\right) = 0.1 \;\Rightarrow\; Q\!\left(\dfrac{10}{\sigma_X}\right) = 0.45$

From the normal table, $Q(z) = 0.45$ at $z \approx 0.1257$:

$\dfrac{10}{\sigma_X} = 0.1257 \qquad \therefore \sigma_X \approx 79.6$
Question. 30: The average life of a certain type of electric bulb is 1200 hours. What
percentage of this type of bulb is expected to fail in the first 800 hours of working?
What percentage is expected to fail between 800 and 1000 hours? Assume a normal
distribution with $\sigma = 200$ hours.

Solution: (i.)

$P(X < 800) = 1 - Q\!\left(\dfrac{x - m}{\sigma}\right) = 1 - Q\!\left(\dfrac{800 - 1200}{200}\right) = 1 - Q(-2) = Q(2) = 0.0228$

$\therefore$ 2.28% of bulbs are expected to fail in the first 800 hours of working.

(ii.)

$P(800 < X < 1000) = F_X(1000) - F_X(800) = \left[1 - Q\!\left(\dfrac{1000 - 1200}{200}\right)\right] - \left[1 - Q\!\left(\dfrac{800 - 1200}{200}\right)\right]$

$= Q(1) - Q(2) = 0.1587 - 0.0228 = 0.1359$

$\therefore$ 13.59% of bulbs are expected to fail between 800 and 1000 hours.
Question. 31: In an exactly Gaussian distribution, 7% of items are under 35 and 11%
are over 63. Find the mean and standard deviation of the distribution. Also find how
many items lie between 40 and 60 out of 200 items.

Solution:

[Figure: Gaussian PDF with 7% of the area below 35 and 11% of the area above 63.]

$P(X > 63) = 0.11 \;\Rightarrow\; F_X\!\left(\dfrac{63 - \bar{X}}{\sigma_X}\right) = 1 - 0.11 = 0.89$

From the table, $F_X(1.23) = 0.89$; the argument is positive since 63 lies to the right of the mean:

$\dfrac{63 - \bar{X}}{\sigma_X} = 1.23 \;\Rightarrow\; \bar{X} + 1.23\,\sigma_X = 63$

$P(X < 35) = 0.07 \;\Rightarrow\; F_X\!\left(\dfrac{35 - \bar{X}}{\sigma_X}\right) = 0.07$

From the table, $F_X(1.476) \approx 0.93$; the argument is negative since 35 lies to the left of the mean:

$\dfrac{35 - \bar{X}}{\sigma_X} = -1.476 \;\Rightarrow\; \bar{X} - 1.476\,\sigma_X = 35$

Subtracting the two equations: $2.706\,\sigma_X = 28 \Rightarrow \sigma_X \approx 10.35$, and $\bar{X} \approx 50.27$.

$P(40 < X < 60) = F_X\!\left(\dfrac{60 - 50.27}{10.35}\right) - F_X\!\left(\dfrac{40 - 50.27}{10.35}\right) = F_X(0.94) - F_X(-0.99)$

$= 0.8264 - (1 - 0.8389) = 0.8264 - 0.1611 = 0.6653$

$\therefore$ out of 200 items, about $200 \times 0.6653 \approx 133$ items lie between 40 and 60.
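The two-equation solution above can be sketched in code; the z-values $-1.476$ and $1.23$ are read from the normal table as in the worked solution, and the rest is simple algebra:

```python
# Solve m + z1*sigma = 35 and m + z2*sigma = 63, then count items in (40, 60).
import math

z1, z2 = -1.476, 1.23          # table z-values for the 7% and 89% points
sigma = (63 - 35) / (z2 - z1)  # subtract the two linear equations
m = 35 - z1 * sigma

def phi(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

p_40_60 = phi((60 - m) / sigma) - phi((40 - m) / sigma)
print(round(sigma, 2), round(m, 2))   # ~10.35, ~50.27
print(round(200 * p_40_60))           # ~133 items
```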
Properties of the Gaussian PDF:

$f_X(x) = \dfrac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-m)^2/2\sigma^2}; \quad -\infty < x < \infty$

[Figure: bell-shaped Gaussian PDF with peak value $\frac{1}{\sigma\sqrt{2\pi}}$ at $x = m$.]

1. The Gaussian PDF is used to describe the noise generated by a resistor (thermal
noise), the noise generated by semiconductor devices (shot noise) and the noise
added by the channel.

2. The Gaussian PDF is symmetrical about its mean and is a bell-shaped curve.

3. When the standard deviation $\sigma_X = 1$ and the mean $m_1 = \bar{X} = 0$, the general
Gaussian PDF is called the normal Gaussian PDF, as shown in the figure.

5. Peak values of the normal Gaussian PDF:

$x = 0: \quad f_X(x) = \dfrac{1}{\sqrt{2\pi}}$

$x = \pm 1: \quad f_X(x) = \dfrac{1}{\sqrt{2\pi}}\, e^{-1/2} = \dfrac{0.606}{\sqrt{2\pi}}$

$x = \pm 2: \quad f_X(x) = \dfrac{1}{\sqrt{2\pi}}\, e^{-2} = \dfrac{0.135}{\sqrt{2\pi}}$

$x = \pm 3: \quad f_X(x) = \dfrac{1}{\sqrt{2\pi}}\, e^{-4.5} = \dfrac{0.0111}{\sqrt{2\pi}}$

$\therefore$ at $x = \pm 1, \pm 2, \pm 3$ the maximum value $\dfrac{1}{\sqrt{2\pi}}$ falls to $\dfrac{0.606}{\sqrt{2\pi}}, \dfrac{0.135}{\sqrt{2\pi}}, \dfrac{0.0111}{\sqrt{2\pi}}$
respectively, as shown in the figure.
[Figure: normal Gaussian PDF with shaded areas over $(-1, 1)$, $(-2, 2)$ and $(-3, 3)$.]

6. For $x = \pm 1, \pm 2, \pm 3$, the area under the curve is 68.26%, 95.45%, and 99.73% of the
total area respectively, as shown in the figure.

7. For a continuous r.v $X_1$ and another continuous r.v $X_2$, with means $\bar{X}_1, \bar{X}_2$ and
standard deviations $\sigma_{X_1}, \sigma_{X_2}$:
$f_{X_1}(x) = \dfrac{1}{\sigma_{X_1}\sqrt{2\pi}}\, e^{-(x-m)^2/2\sigma_{X_1}^2}, \qquad f_{X_2}(x) = \dfrac{1}{\sigma_{X_2}\sqrt{2\pi}}\, e^{-(x-m)^2/2\sigma_{X_2}^2}$

If $\sigma_{X_1} < \sigma_{X_2}$, the PDF of $X_1$ is taller and narrower than that of $X_2$.
• If the noise has equal positive amplitude and negative amplitude then its
mean is zero as shown in figure.
• If the noise has more positive amplitude then its mean is positive as shown
in figure.
• If the noise has more negative amplitude then its mean is negative as shown
in figure.
[Figure: zero-mean Gaussian noise PDF $f_V(v)$ over $-4 \le v \le 4$.]
• For a low value of standard deviation, the noise voltages lie closer to the mean,
as shown in the figure.

• For a high standard deviation, the noise will have larger amplitude variations about
the mean, as shown in the figure.
[Figure: Gaussian PDFs $f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, \exp\{-\frac{(x-\mu)^2}{2\sigma^2}\}$ for
$(\mu = -1, \sigma^2 = 1)$, $(\mu = 0, \sigma^2 = 1)$ and $(\mu = 0, \sigma^2 = 2)$; and two noise PDFs
$f_V(v)$ with peak $\frac{1}{\sigma\sqrt{2\pi}}$ at $v = m$, narrow for small $\sigma$ and broad for large $\sigma$.]
Standard normal CDF table: the entry at row $x$, column $\Delta x$ gives $F_{0,1}(x + \Delta x)$.

x\∆x  0.00   0.01   0.02   0.03   0.04   0.05   0.06   0.07   0.08   0.09
0.0 0.5000 0.5040 0.5080 0.5120 0.5160 0.5199 0.5239 0.5279 0.5319 0.5359
0.1 0.5398 0.5438 0.5478 0.5517 0.5557 0.5596 0.5636 0.5675 0.5714 0.5753
0.2 0.5793 0.5832 0.5871 0.5910 0.5948 0.5987 0.6026 0.6064 0.6103 0.6141
0.3 0.6179 0.6217 0.6255 0.6293 0.6331 0.6368 0.6406 0.6443 0.6480 0.6517
0.4 0.6554 0.6591 0.6628 0.6664 0.6700 0.6736 0.6772 0.6808 0.6844 0.6879
0.5 0.6915 0.6950 0.6985 0.7019 0.7054 0.7088 0.7123 0.7157 0.7190 0.7224
0.6 0.7257 0.7291 0.7324 0.7357 0.7389 0.7422 0.7454 0.7486 0.7517 0.7549
0.7 0.7580 0.7611 0.7642 0.7673 0.7704 0.7734 0.7764 0.7794 0.7823 0.7852
0.8 0.7881 0.7910 0.7939 0.7967 0.7995 0.8023 0.8051 0.8078 0.8106 0.8133
0.9 0.8159 0.8186 0.8212 0.8238 0.8264 0.8289 0.8315 0.8340 0.8365 0.8389
1.0 0.8413 0.8438 0.8461 0.8485 0.8508 0.8531 0.8554 0.8577 0.8599 0.8621
1.1 0.8643 0.8665 0.8686 0.8708 0.8729 0.8749 0.8770 0.8790 0.8810 0.8830
1.2 0.8849 0.8869 0.8888 0.8907 0.8925 0.8944 0.8962 0.8980 0.8997 0.9015
1.3 0.9032 0.9049 0.9066 0.9082 0.9099 0.9115 0.9131 0.9147 0.9162 0.9177
1.4 0.9192 0.9207 0.9222 0.9236 0.9251 0.9265 0.9279 0.9292 0.9306 0.9319
1.5 0.9332 0.9345 0.9357 0.9370 0.9382 0.9394 0.9406 0.9418 0.9429 0.9441
1.6 0.9452 0.9463 0.9474 0.9484 0.9495 0.9505 0.9515 0.9525 0.9535 0.9545
1.7 0.9554 0.9564 0.9573 0.9582 0.9591 0.9599 0.9608 0.9616 0.9625 0.9633
1.8 0.9641 0.9649 0.9656 0.9664 0.9671 0.9678 0.9686 0.9693 0.9699 0.9706
1.9 0.9713 0.9719 0.9726 0.9732 0.9738 0.9744 0.9750 0.9756 0.9761 0.9767
2.0 0.9772 0.9778 0.9783 0.9788 0.9793 0.9798 0.9803 0.9808 0.9812 0.9817
2.1 0.9821 0.9826 0.9830 0.9834 0.9838 0.9842 0.9846 0.9850 0.9854 0.9857
2.2 0.9861 0.9864 0.9868 0.9871 0.9875 0.9878 0.9881 0.9884 0.9887 0.9890
2.3 0.9893 0.9896 0.9898 0.9901 0.9904 0.9906 0.9909 0.9911 0.9913 0.9916
2.4 0.9918 0.9920 0.9922 0.9925 0.9927 0.9929 0.9931 0.9932 0.9934 0.9936
2.5 0.9938 0.9940 0.9941 0.9943 0.9945 0.9946 0.9948 0.9949 0.9951 0.9952
2.6 0.9953 0.9955 0.9956 0.9957 0.9959 0.9960 0.9961 0.9962 0.9963 0.9964
2.7 0.9965 0.9966 0.9967 0.9968 0.9969 0.9970 0.9971 0.9972 0.9973 0.9974
2.8 0.9974 0.9975 0.9976 0.9977 0.9977 0.9978 0.9979 0.9979 0.9980 0.9981
2.9 0.9981 0.9982 0.9982 0.9983 0.9984 0.9984 0.9985 0.9985 0.9986 0.9986
3.0 0.9987 0.9987 0.9987 0.9988 0.9988 0.9989 0.9989 0.9989 0.9990 0.9990
3.1 0.9990 0.9991 0.9991 0.9991 0.9992 0.9992 0.9992 0.9992 0.9993 0.9993
3.2 0.9993 0.9993 0.9994 0.9994 0.9994 0.9994 0.9994 0.9995 0.9995 0.9995
3.3 0.9995 0.9995 0.9995 0.9996 0.9996 0.9996 0.9996 0.9996 0.9996 0.9997
3.4 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9998
3.5 0.9998 0.9998 0.9998 0.9998 0.9998 0.9998 0.9998 0.9998 0.9998 0.9998
3.6 0.9998 0.9998 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999
3.7 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999
3.8 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999
3.9 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
$F_{0,1}(x) = \dfrac{1}{\sqrt{2\pi}} \displaystyle\int_{-\infty}^{x} e^{-t^2/2}\, dt; \qquad F_{0,1}(1.65) \approx 0.9505$

$F_{\mu,\sigma^2}(x) = F_{0,1}\!\left(\dfrac{x - \mu}{\sigma}\right); \qquad F_{0,1}(-x) = 1 - F_{0,1}(x)$
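The tabulated values of $F_{0,1}(x)$ can be reproduced from `math.erf`, which is how the table is avoided in code:

```python
# Standard normal CDF from the error function; reproduces the table entries.
import math

def F01(x: float) -> float:
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

print(round(F01(1.65), 4))   # 0.9505, matching the table
print(round(F01(1.25), 4))   # 0.8944
print(round(F01(-2.0), 4))   # 1 - F01(2.0), about 0.0228
```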
2.6 Discrete random variable - Statistical parameters

Let ‘X’ be a discrete r.v which takes the values $x_i$, $i = -\infty$ to $+\infty$, with
probability mass values $f_X(x_i)$.

1. Expectation: $E[X] = \bar{X} = m_1 = \sum\limits_{i=-\infty}^{\infty} x_i f_X(x_i)$

2. Mean square value: $E[X^2] = \overline{X^2} = m_2 = \sum\limits_{i=-\infty}^{\infty} x_i^2 f_X(x_i)$

3. Third moment: $E[X^3] = m_3 = \sum\limits_{i=-\infty}^{\infty} x_i^3 f_X(x_i)$

4. Variance: $\sigma_X^2 = \mu_2 = \mathrm{Var}[X] = m_2 - m_1^2$

5. Standard deviation: $\sigma_X = \sqrt{\mu_2}$

6. Skew: $\mu_3 = m_3 - 3 m_1 \mu_2 - m_1^3$
Question. 32: Find the mean, mean square, variance, and standard deviation of the
statistical data 2, 4, 4, 4, 5, 5, 7, 9.

Solution:

x_i        2    4    5    7    9
f_X(x_i)  1/8  3/8  2/8  1/8  1/8

[Figure: PMF $f_X(x)$ with the mean 5 and spread $\sigma = 2$ marked.]

$E[X] = \sum\limits_{x} x_i f_X(x_i) = 2 \times \dfrac{1}{8} + 4 \times \dfrac{3}{8} + 5 \times \dfrac{2}{8} + 7 \times \dfrac{1}{8} + 9 \times \dfrac{1}{8} = \dfrac{2 + 12 + 10 + 7 + 9}{8} = 5$

$E[X^2] = \sum\limits_{x} x_i^2 f_X(x_i) = 4 \times \dfrac{1}{8} + 16 \times \dfrac{3}{8} + 25 \times \dfrac{2}{8} + 49 \times \dfrac{1}{8} + 81 \times \dfrac{1}{8} = \dfrac{4 + 48 + 50 + 49 + 81}{8} = \dfrac{232}{8} = 29$

$E[X^3] = \sum\limits_{x} x_i^3 f_X(x_i) = 8 \times \dfrac{1}{8} + 64 \times \dfrac{3}{8} + 125 \times \dfrac{2}{8} + 343 \times \dfrac{1}{8} + 729 \times \dfrac{1}{8} = \dfrac{8 + 192 + 250 + 343 + 729}{8} = \dfrac{1522}{8} = 190.25$

Variance: $\sigma_X^2 = m_2 - m_1^2 = 29 - 25 = 4$; (or) by the other method,

$\sigma_X^2 = E[(x_i - \bar{X})^2] = \sum\limits_{x} (x_i - \bar{X})^2 f_X(x_i) = \dfrac{(2-5)^2 + 3(4-5)^2 + 2(5-5)^2 + (7-5)^2 + (9-5)^2}{8} = \dfrac{9 + 3 + 0 + 4 + 16}{8} = 4$

$\therefore \sigma_X^2 = 4$

Standard deviation: $\sigma_X = \sqrt{4} = 2$

Skew: $\mu_3 = m_3 - 3 m_1 \sigma_X^2 - m_1^3 = 190.25 - 3 \times 5 \times 4 - 5^3 = 190.25 - 60 - 125 = 5.25$

Skewness: $\dfrac{\mu_3}{\sigma_X^3} = \dfrac{5.25}{2^3} = 0.65625$
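The moment formulas of Section 2.6 applied to Question 32's data; computing directly from the raw samples is equivalent to using the frequency table, since $f_X(x_i) = \mathrm{count}(x_i)/N$:

```python
# Discrete statistical parameters computed directly from the raw data.
data = [2, 4, 4, 4, 5, 5, 7, 9]
n = len(data)

m1 = sum(x for x in data) / n        # mean
m2 = sum(x**2 for x in data) / n     # mean square
m3 = sum(x**3 for x in data) / n     # third moment
var = m2 - m1**2
sd = var ** 0.5
mu3 = m3 - 3 * m1 * var - m1**3      # skew (third central moment)

print(m1, m2, m3)        # 5.0 29.0 190.25
print(var, sd)           # 4.0 2.0
print(mu3, mu3 / sd**3)  # 5.25 0.65625
```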
Question. 33: Find all statistical parameters for the three sets of statistical data:
(i) 0, 0, 14, 14;  (ii) 0, 6, 8, 14;  (iii) 6, 6, 8, 8.

Solution:

(i)   $x_i$: 0, 14 with $f_X(x) = \frac{2}{4}, \frac{2}{4}$;  (ii) $x_i$: 0, 6, 8, 14 with $f_X(x) = \frac{1}{4}$ each;  (iii) $x_i$: 6, 8 with $f_X(x) = \frac{2}{4}, \frac{2}{4}$

(i) $E[X] = 0 \cdot \frac{2}{4} + 14 \cdot \frac{2}{4} = 7$;  $E[X^2] = 0 \cdot \frac{2}{4} + 196 \cdot \frac{2}{4} = 98$;  $E[X^3] = 0 \cdot \frac{2}{4} + 2744 \cdot \frac{2}{4} = 1372$

(ii) $E[X] = \frac{0 + 6 + 8 + 14}{4} = 7$;  $E[X^2] = \frac{0 + 36 + 64 + 196}{4} = 74$;  $E[X^3] = \frac{0 + 216 + 512 + 2744}{4} = 868$

(iii) $E[X] = 6 \cdot \frac{2}{4} + 8 \cdot \frac{2}{4} = 7$;  $E[X^2] = 36 \cdot \frac{2}{4} + 64 \cdot \frac{2}{4} = 50$;  $E[X^3] = 216 \cdot \frac{2}{4} + 512 \cdot \frac{2}{4} = 364$

(i) $\sigma_X^2 = m_2 - m_1^2 = 98 - 49 = 49$;  S.D $\sigma_X = 7$
(ii) $\sigma_X^2 = 74 - 49 = 25$;  S.D $\sigma_X = 5$
(iii) $\sigma_X^2 = 50 - 49 = 1$;  S.D $\sigma_X = 1$

Skew $\mu_3 = m_3 - 3 m_1 \sigma_X^2 - m_1^3$: for (i) $1372 - 3 \cdot 7 \cdot 49 - 343 = 0$; for (ii) $868 - 3 \cdot 7 \cdot 25 - 343 = 0$;
for (iii) $364 - 3 \cdot 7 \cdot 1 - 343 = 0$. All three distributions are symmetric about the mean, so the skew is zero.

[Figure: the three PMFs, each with mean 7, and spreads $\sigma = 7$, $\sigma = 5$, $\sigma = 1$ marked.]
All three have the same expected value, $E[X] = 7$, but the “spread” in the distribu-
tions is quite different. Variance is a formal quantification of “spread”. There is more
than one way to quantify spread; variance uses the average squared distance from the
mean.

Standard deviation is the square root of variance: $SD(X) = \sqrt{Var(X)}$. Intuitively,
standard deviation is a kind of average distance of a sample to the mean (specifically,
a root-mean-square [RMS] average). Variance is the square of this average distance.

NOTE:

1. The variance and standard deviation are closely related; both measure the spreading
of the data about the mean value.

2. For a low standard deviation, the data and events lie closer to the mean,
and vice versa.
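The spread comparison above is a one-liner per data set with the standard library's `statistics` module (`pvariance`/`pstdev` are the population forms, matching the formulas used here):

```python
# Same mean, very different spread: the three data sets of Question 33.
import statistics

sets = {
    "(i)":   [0, 0, 14, 14],
    "(ii)":  [0, 6, 8, 14],
    "(iii)": [6, 6, 8, 8],
}
for name, xs in sets.items():
    # mean is 7 for all three; variances are 49, 25, 1
    print(name, statistics.mean(xs),
          statistics.pvariance(xs), statistics.pstdev(xs))
```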
Question. 34: A fair coin is tossed three times. Let ‘X’ be the number of tails
appearing. Find the PDF, CDF and also its statistical parameters.

Solution:

Sample space S = { HHH HHT HTH THH HTT THT TTH TTT }

D.r.v ‘X’ = { $x_1$ $x_2$ $x_3$ $x_4$ $x_5$ $x_6$ $x_7$ $x_8$ }

Number of tails = { 0 1 1 1 2 2 2 3 }

$P(X = 0) = P(x_1) = \dfrac{1}{8}$

$P(X = 1) = P(x_2) + P(x_3) + P(x_4) = \dfrac{1}{8} + \dfrac{1}{8} + \dfrac{1}{8} = \dfrac{3}{8}$

$P(X = 2) = P(x_5) + P(x_6) + P(x_7) = \dfrac{1}{8} + \dfrac{1}{8} + \dfrac{1}{8} = \dfrac{3}{8}$

$P(X = 3) = P(x_8) = \dfrac{1}{8}$

x_i        0    1    2    3
f_X(x_i)  1/8  3/8  3/8  1/8

$E[X] = \sum\limits_{x} x_i f_X(x_i) = 0 \times \dfrac{1}{8} + 1 \times \dfrac{3}{8} + 2 \times \dfrac{3}{8} + 3 \times \dfrac{1}{8} = \dfrac{0 + 3 + 6 + 3}{8} = \dfrac{12}{8} = 1.5$

$E[X^2] = \sum\limits_{x} x_i^2 f_X(x_i) = 0^2 \times \dfrac{1}{8} + 1^2 \times \dfrac{3}{8} + 2^2 \times \dfrac{3}{8} + 3^2 \times \dfrac{1}{8} = \dfrac{0 + 3 + 12 + 9}{8} = \dfrac{24}{8} = 3$

$E[X^3] = \sum\limits_{x} x_i^3 f_X(x_i) = 0^3 \times \dfrac{1}{8} + 1^3 \times \dfrac{3}{8} + 2^3 \times \dfrac{3}{8} + 3^3 \times \dfrac{1}{8} = \dfrac{0 + 3 + 24 + 27}{8} = \dfrac{54}{8} = 6.75$

Variance: $\sigma_X^2 = m_2 - m_1^2 = 3 - 1.5^2 = 0.75$; (or) by the other method,

$\sigma_X^2 = E[(x - \bar{X})^2] = \sum\limits_{x} (x_i - \bar{X})^2 f_X(x_i) = \dfrac{1(0-1.5)^2 + 3(1-1.5)^2 + 3(2-1.5)^2 + 1(3-1.5)^2}{8}$

$= \dfrac{2.25 + 0.75 + 0.75 + 2.25}{8} = \dfrac{6}{8} = 0.75$

$\therefore \sigma_X^2 = 0.75$

Standard deviation: $\sigma_X = \sqrt{0.75} = 0.866$

Skew: $\mu_3 = m_3 - 3 m_1 \sigma_X^2 - m_1^3 = 6.75 - 3 \times 1.5 \times 0.75 - 1.5^3 = 6.75 - 3.375 - 3.375 = 0$

Skewness: $\dfrac{\mu_3}{\sigma_X^3} = 0$; the PMF is symmetric about the mean, so the skew is zero.
[Figure: PMF $f_X(x)$ with values 1/8, 3/8, 3/8, 1/8 at $x = 0, 1, 2, 3$, and the
staircase CDF $F_X(x)$ rising through 1/8, 4/8, 7/8 to 1.]
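Question 34 can also be checked by brute-force enumeration of the eight equally likely outcomes, counting tails in each:

```python
# Enumerate all 3-toss outcomes, build the PMF of the tail count, check moments.
from itertools import product

outcomes = list(product("HT", repeat=3))
tails = [seq.count("T") for seq in outcomes]
n = len(outcomes)  # 8

pmf = {k: tails.count(k) / n for k in range(4)}
m1 = sum(k * p for k, p in pmf.items())
m2 = sum(k**2 * p for k, p in pmf.items())

print(pmf)                 # {0: 0.125, 1: 0.375, 2: 0.375, 3: 0.125}
print(m1, m2, m2 - m1**2)  # 1.5 3.0 0.75
```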
CHAPTER 3

Let ‘X’ be the discrete r.v; its probability density function (PDF) can be written as

$f_X(x) = P(X = x) = \sum\limits_{k=0}^{N} \binom{N}{k} p^k (1-p)^{N-k}\, \delta(x - k); \quad k = 0, 1, 2, 3, \ldots N$

where $\binom{N}{k} = \dfrac{N!}{(N-k)!\,k!}$

The special case of the binomial distribution with $N = 1$ is also called
the Bernoulli distribution.

Applications:

• the number of disk drives that crashed in a cluster of 1000 computers, and

• the number of advertisements that are clicked when 40,000 are served.
Problem: Let $N = 6$, $p = 0.25$. Find the PDF and CDF of the binomial r.v.

Solution: The PDF is

$f_X(x) = \sum\limits_{k=0}^{N} \binom{N}{k} p^k (1-p)^{N-k}\, \delta(x-k); \qquad \binom{n}{k} = \dfrac{n!}{(n-k)!\,k!}$

$f_X(x) = \sum\limits_{k=0}^{6} \binom{6}{k} (0.25)^k (0.75)^{6-k}\, \delta(x-k)$

$= \binom{6}{0}(0.25)^0(0.75)^6\, \delta(x) + \binom{6}{1}(0.25)^1(0.75)^5\, \delta(x-1) + \binom{6}{2}(0.25)^2(0.75)^4\, \delta(x-2)$
$\quad + \binom{6}{3}(0.25)^3(0.75)^3\, \delta(x-3) + \binom{6}{4}(0.25)^4(0.75)^2\, \delta(x-4) + \binom{6}{5}(0.25)^5(0.75)^1\, \delta(x-5)$
$\quad + \binom{6}{6}(0.25)^6(0.75)^0\, \delta(x-6)$

$f_X(0) = \binom{6}{0}(0.25)^0(0.75)^6 = 0.17798$
$f_X(1) = \binom{6}{1}(0.25)^1(0.75)^5 = 0.35596$
$f_X(2) = \binom{6}{2}(0.25)^2(0.75)^4 = 0.29663$
$f_X(3) = \binom{6}{3}(0.25)^3(0.75)^3 = 0.13184$
$f_X(4) = \binom{6}{4}(0.25)^4(0.75)^2 = 0.03296$
$f_X(5) = \binom{6}{5}(0.25)^5(0.75)^1 = 0.00439$
$f_X(6) = \binom{6}{6}(0.25)^6(0.75)^0 = 0.000244$

$F_X(0) = P(X \le 0) = 0.17798$
$F_X(1) = P(X \le 1) = 0.17798 + 0.35596 = 0.53394$
$F_X(2) = P(X \le 2) = 0.53394 + 0.29663 = 0.83057$
$F_X(3) = P(X \le 3) = 0.83057 + 0.13184 = 0.96241$
$F_X(4) = P(X \le 4) = 0.96241 + 0.03296 = 0.99537$
$F_X(5) = P(X \le 5) = 0.99537 + 0.00439 = 0.99976$
$F_X(6) = P(X \le 6) = 0.99976 + 0.00024 = 1.00000$
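The table above is a direct transcription of the binomial PMF and can be generated with `math.comb`:

```python
# Binomial PMF and CDF for N = 6, p = 0.25.
import math

def binom_pmf(k: int, N: int, p: float) -> float:
    return math.comb(N, k) * p**k * (1 - p)**(N - k)

N, p = 6, 0.25
pmf = [binom_pmf(k, N, p) for k in range(N + 1)]
cdf = [sum(pmf[: k + 1]) for k in range(N + 1)]

for k in range(N + 1):
    print(k, round(pmf[k], 5), round(cdf[k], 5))
```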
[Figure: binomial PMF $f_X(x)$ (0.178, 0.356, 0.297, 0.132, 0.033, 0.0044, 0.0002) and
staircase CDF $F_X(x)$ (0.178, 0.534, 0.830, 0.962, 0.995, 0.999, 1) for $N = 6$, $p = 0.25$.]
Statistical parameters of the binomial random variable:

1. Mean value:

$m_1 = E[X] = \sum\limits_{x} x f_X(x) = \sum\limits_{x=0}^{N} x \binom{N}{x} p^x q^{N-x} = \sum\limits_{x=0}^{N} x \cdot \dfrac{N!}{(N-x)!\,x!}\, p^x q^{N-x}$

$= \sum\limits_{x=1}^{N} \dfrac{N (N-1)!}{(N-x)!\,(x-1)!}\, p \cdot p^{x-1} q^{(N-1)-(x-1)}$

$= N p \sum\limits_{x=1}^{N} \dfrac{(N-1)!}{\big((N-1)-(x-1)\big)!\,(x-1)!}\, p^{x-1} q^{(N-1)-(x-1)} = N p \sum\limits_{x=1}^{N} \binom{N-1}{x-1} p^{x-1} q^{(N-1)-(x-1)}$

$= N p\, (p + q)^{N-1} \quad \left[\because \sum\limits_{x} \binom{N}{x} p^x q^{N-x} = (p+q)^N\right]$

$\therefore E[X] = N p \quad (\because p + q = 1)$

2. Mean square value:

$m_2 = E[X^2] = \sum\limits_{x=0}^{N} x^2 \binom{N}{x} p^x q^{N-x} = \sum\limits_{x=0}^{N} \big[x(x-1) + x\big] \dfrac{N!}{(N-x)!\,x!}\, p^x q^{N-x}$

$= \sum\limits_{x=0}^{N} x(x-1)\, \dfrac{N!}{(N-x)!\,x!}\, p^x q^{N-x} + \underbrace{\sum\limits_{x=0}^{N} x\, \dfrac{N!}{(N-x)!\,x!}\, p^x q^{N-x}}_{E[X]}$

$= \sum\limits_{x=2}^{N} \dfrac{N(N-1)(N-2)!}{\big((N-2)-(x-2)\big)!\,(x-2)!}\, p^2\, p^{x-2} q^{(N-2)-(x-2)} + E[X]$

$= N(N-1)\, p^2 \underbrace{\sum\limits_{x=2}^{N} \dfrac{(N-2)!}{\big((N-2)-(x-2)\big)!\,(x-2)!}\, p^{x-2} q^{(N-2)-(x-2)}}_{(p+q)^{N-2}} + \underbrace{E[X]}_{Np}$

$= N(N-1)\, p^2\, (p+q)^{N-2} + N p = N^2 p^2 - N p^2 + N p \quad (\because p + q = 1)$

$\therefore E[X^2] = N^2 p^2 - N p^2 + N p$

3. Third moment: by the same technique, writing $x^3 = x(x-1)(x-2) + 3x(x-1) + x$,

$E[X^3] = N(N-1)(N-2)\, p^3 + 3 N(N-1)\, p^2 + N p$

$\therefore E[X^3] = N p - 3 N p^2 + 3 N^2 p^2 + 2 N p^3 - 3 N^2 p^3 + N^3 p^3$
4. Variance:

$\mu_2 = \sigma_X^2 = E[X^2] - \big(E[X]\big)^2 = N^2 p^2 - N p^2 + N p - (N p)^2 = N p (1 - p) = N p q$

$\therefore \sigma_X^2 = N p q$

5. Standard deviation: $\sigma_X = \sqrt{\mu_2} = \sqrt{N p q}$

6. Skew:

$\mu_3 = E[X^3] - 3 E[X]\,\sigma_X^2 - \big(E[X]\big)^3$

$= N p - 3 N p^2 + 3 N^2 p^2 + 2 N p^3 - 3 N^2 p^3 + N^3 p^3 - 3 N p\, (N p q) - (N p)^3$

$= N p - 3 N p^2 + 3 N^2 p^2 + 2 N p^3 - 3 N^2 p^3 + N^3 p^3 - 3 N^2 p^2 (1 - p) - N^3 p^3$

$= N p - 3 N p^2 + 2 N p^3 = N p\, (1 - 3p + 2p^2) = N p\, (1 - p)(1 - 2p)$

$\therefore \mu_3 = N p q\, (1 - 2p)$

7. Skewness:

$\alpha = \dfrac{\mu_3}{\sigma_X^3} = \dfrac{N p q\, (1 - 2p)}{(N p q)^{3/2}} = \dfrac{1 - 2p}{\sqrt{N p q}} \quad \left[\because \sigma_X = (N p q)^{\frac{1}{2}}\right]$
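The closed forms $Np$, $Npq$ and $Npq(1-2p)$ can be verified by direct summation over the PMF for one example (the values $N = 8$, $p = 0.75$ below anticipate the worked problem that follows):

```python
# Direct-summation check of the binomial mean, variance and skew formulas.
import math

N, p = 8, 0.75
q = 1 - p
pmf = [math.comb(N, k) * p**k * q**(N - k) for k in range(N + 1)]

m1 = sum(k * f for k, f in enumerate(pmf))
m2 = sum(k**2 * f for k, f in enumerate(pmf))
m3 = sum(k**3 * f for k, f in enumerate(pmf))
var = m2 - m1**2
mu3 = m3 - 3 * m1 * var - m1**3

assert abs(m1 - N * p) < 1e-12            # E[X]   = Np   = 6
assert abs(var - N * p * q) < 1e-12       # var    = Npq  = 1.5
assert abs(mu3 - N * p * q * (1 - 2 * p)) < 1e-12   # skew = -0.75
print(m1, var, mu3)
```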
Problem: A fair coin is tossed five times. Using

$P[X = x] = f_X(x) = \binom{N}{x} p^x q^{N-x}$

with $N = 5$ and $p = q = 0.5$, find the probability of each of the following events.

1.

$P[\text{Getting three heads}] = P[X = 3] = \binom{5}{3} 0.5^3\, 0.5^{5-3} = \dfrac{5!}{3!\,2!}\, 0.5^3\, 0.5^2 = \dfrac{5 \times 4 \times 3 \times 2 \times 1}{3 \times 2 \times 1 \times 2 \times 1} \times \dfrac{1}{32} = \dfrac{10}{32} = 0.3125$

2.

$P[\text{3 tails and 2 heads}] = \binom{5}{2} 0.5^2\, 0.5^3 = 0.3125$

3.

$P[\text{At least one head}] = P[X \ge 1] = 1 - P[X = 0] = 1 - \binom{5}{0} (0.5)^0 (0.5)^5 = 1 - \dfrac{1}{32} = 0.96875$

4.

$P[\text{Not more than one tail}] = P[X = 0] + P[X = 1] = \binom{5}{0} 0.5^0\, 0.5^5 + \binom{5}{1} 0.5^1\, 0.5^4 = 0.03125 + 0.15625 = 0.1875$
Problem: If the mean and variance of a binomial r.v are 6 and 1.5 respectively, find
$E[X] - P(X \ge 3)$, and also find its PDF and CDF.

Solution: Given mean value $E[X] = N p = 6 \;\longrightarrow\; (1)$

Variance $\sigma_X^2 = N p q = 1.5 \;\longrightarrow\; (2)$

$(2) \Rightarrow 6 q = 1.5 \;(\because (1)) \;\Rightarrow\; q = 0.25$, so $p = 1 - q = 0.75$

$(1) \Rightarrow N (0.75) = 6 \Rightarrow N = 8$

$\therefore p = 0.75; \quad q = 0.25; \quad N = 8$

$E[X] - P(X \ge 3) \;\longrightarrow\; (3)$

$P(X \ge 3) = 1 - P[X < 3] = 1 - \big(P[X = 0] + P[X = 1] + P[X = 2]\big)$

$P[X = 0] = f_X(0) = \binom{8}{0} 0.75^0\, 0.25^8 = 1 \times 1 \times 0.25^8 = 1.525 \times 10^{-5}$

$P[X = 1] = f_X(1) = \binom{8}{1} 0.75^1\, 0.25^7 = 8 \times 0.75 \times 0.25^7 = 3.662 \times 10^{-4}$

$P[X = 2] = f_X(2) = \binom{8}{2} 0.75^2\, 0.25^6 = 28 \times 0.75^2 \times 0.25^6 = 3.8452 \times 10^{-3}$

$\therefore P(X \ge 3) = 1 - 4.227 \times 10^{-3} = 0.9958$

$(3) \Rightarrow E[X] - P(X \ge 3) = 6 - 0.9958 = 5.0042$
[Figure: binomial PMF $f_X(x)$ and staircase CDF $F_X(x)$ for $N = 8$, $p = 0.75$.]

$P(X = 3) = f_X(3) = \binom{8}{3} 0.75^3\, 0.25^5 = 0.02307$

$P(X = 4) = f_X(4) = \binom{8}{4} 0.75^4\, 0.25^4 = 0.08652$

$P(X = 5) = f_X(5) = \binom{8}{5} 0.75^5\, 0.25^3 = 0.20764$

$P(X = 6) = f_X(6) = \binom{8}{6} 0.75^6\, 0.25^2 = 0.31146$

$P(X = 7) = f_X(7) = \binom{8}{7} 0.75^7\, 0.25^1 = 0.26697$

$P(X = 8) = f_X(8) = \binom{8}{8} 0.75^8\, 0.25^0 = 0.10011$
Poisson random variable: let ‘X’ be the Poisson r.v with

$f_X(x) = \dfrac{e^{-b}\, b^x}{x!}; \quad x = 0, 1, 2, \ldots \infty$

where $b = \lambda T$ and $b > 0$ is a real constant,

$T$ = time duration in which the events are counted, and

$\lambda$ = the average number of events per unit time.

[Figure: example Poisson PMF $f_X(x)$ and staircase CDF $F_X(x)$.]
[Figure: Poisson PMF $f_X(x) = \dfrac{e^{-b}\, b^x}{x!}$ plotted for $b = 5, 10, 20, 30$.]
Applications: the Poisson distribution describes the number of events occurring in a
fixed interval of time, e.g., the number of telephone calls arriving at an exchange or
the number of vehicles arriving at a service station.

Statistical parameters: let ‘X’ be the random variable with probability density function

$f_X(x) = \dfrac{e^{-b}\, b^x}{x!}; \quad x = 0, 1, 2, \ldots$
1. Mean value:

$E[X] = \bar{X} = m_1 = \sum\limits_{x=0}^{\infty} x f_X(x) = \sum\limits_{x=0}^{\infty} x\, \dfrac{e^{-b}\, b^x}{x!} = e^{-b} \sum\limits_{x=1}^{\infty} x\, \dfrac{b \cdot b^{x-1}}{x\,(x-1)!} = b\, e^{-b} \sum\limits_{x=1}^{\infty} \dfrac{b^{x-1}}{(x-1)!}$

$= b\, e^{-b} \left[1 + b + \dfrac{b^2}{2!} + \dfrac{b^3}{3!} + \ldots\right] = b\, e^{-b} \cdot e^{b} = b \quad \left[\because e^x = 1 + x + \dfrac{x^2}{2!} + \dfrac{x^3}{3!} + \ldots\right]$

$\therefore E[X] = m_1 = b$

2. Mean square value: writing $x^2 = x(x-1) + x$,

$E[X^2] = \sum\limits_{x=0}^{\infty} \big[x(x-1) + x\big] \dfrac{e^{-b}\, b^x}{x!} = e^{-b}\, b^2 \sum\limits_{x=2}^{\infty} \dfrac{b^{x-2}}{(x-2)!} + E[X] = e^{-b}\, b^2\, e^{b} + b$

$\therefore E[X^2] = m_2 = b + b^2$

3. Third moment: writing $x^3 = x(x-1)(x-2) + 3x^2 - 2x$,

$E[X^3] = e^{-b}\, b^3 \sum\limits_{x=3}^{\infty} \dfrac{b^{x-3}}{(x-3)!} + 3 E[X^2] - 2 E[X] \quad \big[\because E[X^2] = b + b^2;\; E[X] = b\big]$

$= e^{-b}\, b^3 \left[1 + b + \dfrac{b^2}{2!} + \dfrac{b^3}{3!} + \ldots\right] + 3(b + b^2) - 2b = e^{-b}\, b^3\, e^{b} + 3b^2 + b$

$\therefore E[X^3] = m_3 = b^3 + 3 b^2 + b$

4. Variance: $\sigma_X^2 = m_2 - m_1^2 = b + b^2 - b^2 = b$

5. Standard deviation: $\sigma_X = \sqrt{b}$

6. Skew:

$\mu_3 = E[X^3] - 3 E[X]\,\sigma_X^2 - \big(E[X]\big)^3 = b^3 + 3 b^2 + b - 3 b\,(b) - b^3 = b$

7. Skewness: $\alpha = \dfrac{\mu_3}{\sigma_X^3} = \dfrac{b}{b^{3/2}} = \dfrac{1}{\sqrt{b}}$
Problem: Assume vehicles arriving at a petrol bunk follow a Poisson random variable,
occurring at an average rate of 50 per hour. The petrol bunk has only one station, and
one minute is required to obtain fuel. What is the probability that a waiting line will
occur at the petrol bunk?

Solution: Given

Average rate of arrival of cars $\lambda = 50/\text{hour} = \dfrac{50}{60} = 0.833$ per minute

Time required for filling fuel $T = 1$ min

$b = \lambda T = 0.833 \times 1 = 0.833$

$P[X = x] = f_X(x) = \dfrac{e^{-b}\, b^x}{x!}$

$P[X = 0] = f_X(0) = \dfrac{e^{-0.833}\, 0.833^0}{0!} = 0.4347$

$P[X = 1] = f_X(1) = \dfrac{e^{-0.833}\, 0.833^1}{1!} = 0.3621$

A waiting line occurs when two or more vehicles arrive during the one-minute service time:

$P[\text{waiting line}] = P[X \ge 2] = 1 - P[X = 0] - P[X = 1] = 1 - 0.4347 - 0.3621 = 0.2032$
Problem: For a Poisson random variable with $b = 4$, find $P\{0 \le X \le 5\}$.

Solution: Given $b = 4$,

$P\{0 \le X \le 5\} = P[X = 0] + P[X = 1] + P[X = 2] + P[X = 3] + P[X = 4] + P[X = 5]$

$= 0.0183 + 0.0733 + 0.1465 + 0.1954 + 0.1954 + 0.1563 = 0.7851$
CHAPTER 4

Two functions can be defined that allow moments to be calculated for a random variable
‘X’. They are

1. Characteristic function  2. Moment generating function

Characteristic function: the characteristic function of ‘X’ is defined as

$\Phi_X(\omega) = E\big[e^{j\omega X}\big] = \displaystyle\int_{-\infty}^{\infty} f_X(x)\, e^{j\omega x}\, dx$

It is similar to the Fourier transform with the sign reversed in the exponential.
Therefore, the PDF can be recovered as

$f_X(x) = \dfrac{1}{2\pi} \displaystyle\int_{-\infty}^{\infty} \Phi_X(\omega)\, e^{-j\omega x}\, d\omega$

The $n$th moment can be obtained by differentiating $\Phi_X(\omega)$ $n$ times with respect to
$\omega$ and setting $\omega = 0$:

$m_n = (-j)^n \left.\dfrac{d^n \Phi_X(\omega)}{d\omega^n}\right|_{\omega=0}; \quad n = 1, 2, 3, \ldots$

$m_1 = (-j)^1 \left.\dfrac{d\Phi_X(\omega)}{d\omega}\right|_{\omega=0}, \qquad m_2 = (-j)^2 \left.\dfrac{d^2\Phi_X(\omega)}{d\omega^2}\right|_{\omega=0}, \qquad m_3 = (-j)^3 \left.\dfrac{d^3\Phi_X(\omega)}{d\omega^3}\right|_{\omega=0}$
Problem 1: Find the first moment of the exponential PDF of a continuous r.v ‘X’ given by

$f_X(x) = \begin{cases} \dfrac{1}{b}\, e^{-(x-a)/b}; & x \ge a \\ 0; & x < a \end{cases}$

Solution:

$\Phi_X(\omega) = \displaystyle\int_{a}^{\infty} \dfrac{1}{b}\, e^{-(x-a)/b}\, e^{j\omega x}\, dx \qquad \therefore \Phi_X(\omega) = \dfrac{e^{j\omega a}}{1 - j\omega b}$

First moment: $m_1 = (-j)^1 \left.\dfrac{d\Phi_X(\omega)}{d\omega}\right|_{\omega=0} \;\longrightarrow\; (1)$

$\left.\dfrac{d\Phi_X(\omega)}{d\omega}\right|_{\omega=0} = \left.\dfrac{d}{d\omega}\left[\dfrac{e^{j\omega a}}{1 - j\omega b}\right]\right|_{\omega=0} = \left.\dfrac{(1 - j\omega b)\,\dfrac{d}{d\omega} e^{j\omega a} - e^{j\omega a}\,\dfrac{d}{d\omega}(1 - j\omega b)}{(1 - j\omega b)^2}\right|_{\omega=0}$

$= \left.\dfrac{(1 - j\omega b)\, e^{j\omega a}\,(ja) - e^{j\omega a}\,(-jb)}{(1 - j\omega b)^2}\right|_{\omega=0} = ja + jb$

$\therefore (1) \Rightarrow m_1 = (-j)(ja + jb) = -j^2 (a + b) = a + b$

$\therefore m_1 = a + b$
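The moment formula $m_1 = (-j)\,\Phi_X'(0)$ can be checked numerically with a central finite difference on the closed-form $\Phi_X(\omega)$ above; the values $a = 2$, $b = 3$ are arbitrary illustrative choices:

```python
# Finite-difference check that (-j) * dPhi/dw at w = 0 equals a + b.
import cmath

a, b = 2.0, 3.0

def Phi(w: float) -> complex:
    return cmath.exp(1j * w * a) / (1 - 1j * w * b)

h = 1e-6
dPhi = (Phi(h) - Phi(-h)) / (2 * h)   # central difference at omega = 0
m1 = (-1j) * dPhi
print(round(m1.real, 4))              # close to a + b = 5.0
```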
Problem 2: Find the PDF of the random variable whose characteristic function is

$\Phi_X(\omega) = e^{-|\omega|} = \begin{cases} e^{\omega}; & \omega < 0 \\ e^{-\omega}; & \omega \ge 0 \end{cases}$

Solution:

$f_X(x) = \dfrac{1}{2\pi} \displaystyle\int_{-\infty}^{\infty} \Phi_X(\omega)\, e^{-j\omega x}\, d\omega = \dfrac{1}{2\pi} \left[\displaystyle\int_{-\infty}^{0} e^{\omega}\, e^{-j\omega x}\, d\omega + \int_{0}^{\infty} e^{-\omega}\, e^{-j\omega x}\, d\omega\right]$

$= \dfrac{1}{2\pi} \left[\displaystyle\int_{-\infty}^{0} e^{(1-jx)\omega}\, d\omega + \int_{0}^{\infty} e^{-(1+jx)\omega}\, d\omega\right]$

$= \dfrac{1}{2\pi} \left[\left.\dfrac{e^{(1-jx)\omega}}{1 - jx}\right|_{-\infty}^{0} + \left.\dfrac{e^{-(1+jx)\omega}}{-(1 + jx)}\right|_{0}^{\infty}\right]$

$= \dfrac{1}{2\pi} \left[\dfrac{1}{1 - jx} - 0 + 0 + \dfrac{1}{1 + jx}\right] = \dfrac{1}{2\pi} \left[\dfrac{1 + jx + 1 - jx}{1^2 - (jx)^2}\right] = \dfrac{1}{2\pi}\, \dfrac{2}{1 + x^2}$

$f_X(x) = \dfrac{1}{\pi}\, \dfrac{1}{1 + x^2}$
Problem 3: A random variable ‘X’ has a characteristic function given by

$\Phi_X(\omega) = \begin{cases} 1 - |\omega|; & |\omega| \le 1 \\ 0; & \text{otherwise} \end{cases}$

Find its PDF.

Solution:

$f_X(x) = \dfrac{1}{2\pi} \displaystyle\int_{-\infty}^{\infty} \Phi_X(\omega)\, e^{-j\omega x}\, d\omega = \dfrac{1}{2\pi} \int_{-1}^{1} (1 - |\omega|)\, e^{-j\omega x}\, d\omega$

$= \dfrac{1}{2\pi} \displaystyle\int_{-1}^{0} (1 + \omega)\, e^{-j\omega x}\, d\omega + \dfrac{1}{2\pi} \int_{0}^{1} (1 - \omega)\, e^{-j\omega x}\, d\omega$

Carrying out the two integrals (integration by parts) gives

$= \dfrac{1}{2\pi} \cdot \dfrac{2 - e^{jx} - e^{-jx}}{x^2} = \dfrac{1}{\pi x^2} \left[1 - \dfrac{e^{jx} + e^{-jx}}{2}\right]$

$\therefore f_X(x) = \dfrac{1 - \cos x}{\pi x^2}$
Problem 4: The characteristic function of a r.v is given by $\Phi_X(\omega) = \dfrac{1}{(1 - j2\omega)^{N/2}}$.
Find the first two moments and the variance, using

$m_n = (-j)^n \left.\dfrac{d^n}{d\omega^n} \Phi_X(\omega)\right|_{\omega=0}$

1. First moment:

$m_1 = (-j) \left.\dfrac{d}{d\omega} (1 - j2\omega)^{-\frac{N}{2}}\right|_{\omega=0} = (-j) \left.\left[-\dfrac{N}{2}\, (1 - j2\omega)^{-\frac{N}{2}-1} \cdot (-j2)\right]\right|_{\omega=0}$

$= \left.(-j)\,(jN)\,(1 - j2\omega)^{-\frac{N}{2}-1}\right|_{\omega=0} = N\,(1 - 0)^{-\frac{N}{2}-1} = N$

$\therefore m_1 = N$

2. Second moment:

$m_2 = (-j)^2 \left.\dfrac{d^2}{d\omega^2} (1 - j2\omega)^{-\frac{N}{2}}\right|_{\omega=0} = (-1) \left.\dfrac{d}{d\omega}\left[jN\, (1 - j2\omega)^{-\frac{N}{2}-1}\right]\right|_{\omega=0}$

$= (-1) \left.\left[jN \left(-\dfrac{N}{2} - 1\right) (1 - j2\omega)^{-\frac{N}{2}-2} \cdot (-j2)\right]\right|_{\omega=0}$

$= (-1) \left.\left[j^2\, N (N + 2)\, (1 - j2\omega)^{-\frac{N}{2}-2}\right]\right|_{\omega=0} = (-1)\big(-N(N+2)\big) = N (N + 2)$

$\therefore m_2 = N (N + 2)$

3. Variance: $\sigma_X^2 = m_2 - m_1^2 = N^2 + 2N - N^2 = 2N$
Example: for a uniform r.v on $(0, 1)$ (i.e. $f_X(x) = 1$, $0 \le x \le 1$),

$\Phi_X(\omega) = \displaystyle\int_{-\infty}^{\infty} f_X(x)\, e^{j\omega x}\, dx = \int_{0}^{1} (1)\, e^{j\omega x}\, dx = \left[\dfrac{e^{j\omega x}}{j\omega}\right]_{0}^{1} = \dfrac{e^{j\omega}}{j\omega} - \dfrac{1}{j\omega} = \dfrac{e^{j\omega} - 1}{j\omega}$
4.1.2 Properties of characteristic function

1. $\Phi_X(0) = 1$

Proof.

$\Phi_X(0) = \left.\displaystyle\int_{-\infty}^{\infty} f_X(x)\, e^{j\omega x}\, dx\right|_{\omega=0} = \int_{-\infty}^{\infty} f_X(x)\, dx = 1$

2. If the PDF $f_X(x)$ is an even function, then $\Phi_X(\omega) = \Phi_X(-\omega)$.

Proof.

$\Phi_X(\omega) = \displaystyle\int_{-\infty}^{\infty} f_X(x)\, e^{j\omega x}\, dx$

Let $x = -y \Rightarrow dx = -dy$; if $x = -\infty \Rightarrow y = +\infty$ and if $x = \infty \Rightarrow y = -\infty$:

$\Phi_X(\omega) = \displaystyle\int_{+\infty}^{-\infty} f_X(-y)\, e^{-j\omega y}\, (-dy) = \int_{-\infty}^{\infty} f_X(-y)\, e^{-j\omega y}\, dy = \int_{-\infty}^{\infty} f_X(y)\, e^{-j\omega y}\, dy \quad (\because f_X \text{ even})$

$\therefore \Phi_X(\omega) = \Phi_X(-\omega)$
3. If X and Y are independent random variables, then the characteristic function of
their sum is $\Phi_{X+Y}(\omega) = \Phi_X(\omega) \cdot \Phi_Y(\omega)$.

Proof.

$\Phi_{X+Y}(\omega) = E\big[e^{j\omega(X+Y)}\big] = \displaystyle\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{XY}(x, y)\, e^{j\omega(x+y)}\, dy\, dx \quad \big(\because f_{XY}(x, y) = f_X(x) f_Y(y)\big)$

$= \displaystyle\int_{-\infty}^{\infty} f_X(x)\, e^{j\omega x}\, dx \int_{-\infty}^{\infty} f_Y(y)\, e^{j\omega y}\, dy = \Phi_X(\omega)\, \Phi_Y(\omega)$
4. $\Phi_{aX+b}(\omega) = e^{j\omega b}\, \Phi_X(a\omega)$

Proof.

$\Phi_{aX+b}(\omega) = E\big[e^{j\omega(aX+b)}\big] = \displaystyle\int_{-\infty}^{\infty} f_X(x)\, e^{j\omega(ax+b)}\, dx = e^{j\omega b} \int_{-\infty}^{\infty} f_X(x)\, e^{j(a\omega)x}\, dx = e^{j\omega b}\, \Phi_X(a\omega)$
Characteristic functions of standard random variables:

(a) Uniform r.v on $(a, b)$, $f_X(x) = \dfrac{1}{b-a}$ for $a \le x \le b$:

$\Phi_X(\omega) = E\big[e^{j\omega X}\big] = \displaystyle\int_{a}^{b} e^{j\omega x}\, f_X(x)\, dx = \dfrac{1}{b-a} \int_{a}^{b} e^{j\omega x}\, dx = \dfrac{1}{b-a} \left[\dfrac{e^{j\omega x}}{j\omega}\right]_{a}^{b}$

$\Phi_X(\omega) = \dfrac{e^{jb\omega} - e^{ja\omega}}{j\omega (b - a)}$
(b) Exponential r.v, $f_X(x) = \alpha e^{-\alpha x}$ for $x \ge 0$:

$\Phi_X(\omega) = E\big[e^{j\omega X}\big] = \displaystyle\int_{0}^{\infty} e^{j\omega x}\, \alpha e^{-\alpha x}\, dx = \alpha \int_{0}^{\infty} e^{x(j\omega - \alpha)}\, dx = \alpha \left[\dfrac{e^{x(j\omega - \alpha)}}{j\omega - \alpha}\right]_{0}^{\infty}$

$= \alpha \left[0 - \dfrac{1}{j\omega - \alpha}\right] = -\dfrac{\alpha}{j\omega - \alpha} = \dfrac{\alpha}{\alpha - j\omega}$
(c) Gaussian r.v, $f_X(x) = \dfrac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-m)^2/2\sigma^2}$:

$\Phi_X(\omega) = E\big[e^{j\omega X}\big] = \displaystyle\int_{-\infty}^{\infty} e^{j\omega x}\, \dfrac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-m)^2/2\sigma^2}\, dx$

Let $t = \dfrac{x - m}{\sigma} \Rightarrow dx = \sigma\, dt$; if $x = \pm\infty \Rightarrow t = \pm\infty$:

$= \dfrac{1}{\sigma\sqrt{2\pi}} \displaystyle\int_{-\infty}^{\infty} e^{j\omega(m + \sigma t)}\, e^{-t^2/2}\, \sigma\, dt = \dfrac{e^{j\omega m}}{\sqrt{2\pi}} \displaystyle\int_{-\infty}^{\infty} e^{j\omega\sigma t}\, e^{-t^2/2}\, dt$

Completing the square, $j\omega\sigma t - \dfrac{t^2}{2} = -\dfrac{(t - j\sigma\omega)^2}{2} - \dfrac{\sigma^2\omega^2}{2}$:

$= \dfrac{e^{j\omega m - \frac{\sigma^2\omega^2}{2}}}{\sqrt{2\pi}} \displaystyle\int_{-\infty}^{\infty} e^{-(t - j\sigma\omega)^2/2}\, dt$

Let $y = t - j\sigma\omega \Rightarrow dt = dy$:

$= \dfrac{e^{j\omega m - \frac{\sigma^2\omega^2}{2}}}{\sqrt{2\pi}} \displaystyle\int_{-\infty}^{\infty} e^{-y^2/2}\, dy = \dfrac{e^{j\omega m - \frac{\sigma^2\omega^2}{2}}}{\sqrt{2\pi}} \times 2 \times \sqrt{\dfrac{\pi}{2}} \quad (\because \text{even function})$

$\therefore \Phi_X(\omega) = e^{j\omega m - \frac{\sigma^2\omega^2}{2}}$

(d) Laplacian r.v, $f_X(x) = \dfrac{1}{2\alpha}\, e^{-|x|/\alpha}$:

$\therefore \Phi_X(\omega) = \dfrac{1}{1 + \omega^2\alpha^2}$
(e) Binomial distribution:

$\Phi_X(\omega) = E\big[e^{j\omega X}\big] = \sum\limits_{x=0}^{N} \binom{N}{x} p^x q^{N-x}\, e^{j\omega x} = \sum\limits_{x=0}^{N} \binom{N}{x} (p\, e^{j\omega})^x q^{N-x} \;\longrightarrow\; (1)$

$= (p\, e^{j\omega} + q)^N$

To verify the last step, take $N = 2$:

$(1) \Rightarrow \binom{2}{0}(p\, e^{j\omega})^0 q^2 + \binom{2}{1}(p\, e^{j\omega})^1 q^1 + \binom{2}{2}(p\, e^{j\omega})^2 q^0$

$= q^2 + 2(p\, e^{j\omega})\, q + (p\, e^{j\omega})^2 = (p\, e^{j\omega} + q)^2$ — hence proved.

$\therefore \Phi_X(\omega) = \sum\limits_{x=0}^{N} \binom{N}{x} (p\, e^{j\omega})^x q^{N-x} = (p\, e^{j\omega} + q)^N$

(f) Poisson distribution:

$\Phi_X(\omega) = \sum\limits_{x=0}^{\infty} \dfrac{e^{-b}\, b^x}{x!}\, e^{j\omega x} = e^{-b} \sum\limits_{x=0}^{\infty} \dfrac{(b\, e^{j\omega})^x}{x!} = e^{-b}\, e^{b e^{j\omega}}$

$\therefore \Phi_X(\omega) = e^{-b(1 - e^{j\omega})}$
4.1.3 Moment Generating Function (MGF):

Let ‘X’ be a random variable with PDF $f_X(x)$. The moment generating function
is defined as

$M_X(v) = E\big[e^{vX}\big] = \displaystyle\int_{-\infty}^{\infty} f_X(x)\, e^{vx}\, dx$

Expanding the exponential in series,

$M_X(v) = E\big[e^{vX}\big] = 1 + v E[X] + \dfrac{v^2}{2!}\, E[X^2] + \dfrac{v^3}{3!}\, E[X^3] + \ldots + \dfrac{v^n}{n!}\, E[X^n] + \ldots$

Differentiating the above equation with respect to ‘v’ and then putting $v = 0$:

$\left.\dfrac{d}{dv} M_X(v)\right|_{v=0} = m_1, \qquad \left.\dfrac{d^2}{dv^2} M_X(v)\right|_{v=0} = m_2$

$\therefore m_n = \left.\dfrac{d^n}{dv^n} M_X(v)\right|_{v=0}; \quad n = 1, 2, 3, \ldots$
Problem: Prove that the MGF of the random variable ‘X’ having PDF

$f_X(x) = \begin{cases} \dfrac{1}{3}; & -1 < x < 2 \\ 0; & \text{otherwise} \end{cases}$

is given by

$M_X(v) = \begin{cases} \dfrac{e^{2v} - e^{-v}}{3v}; & v \ne 0 \\ 1; & v = 0 \end{cases}$

Solution:

$M_X(v) = E\big[e^{vX}\big] = \displaystyle\int_{-\infty}^{\infty} f_X(x)\, e^{vx}\, dx = \int_{-1}^{2} \dfrac{1}{3}\, e^{vx}\, dx = \dfrac{1}{3} \left[\dfrac{e^{vx}}{v}\right]_{-1}^{2} = \dfrac{1}{3} \left[\dfrac{e^{2v}}{v} - \dfrac{e^{-v}}{v}\right] = \dfrac{e^{2v} - e^{-v}}{3v}$

At $v = 0$ this is the indeterminate form $\dfrac{0}{0}$, so apply L'Hôpital's rule:

$\left.M_X(v)\right|_{v=0} = \left.\dfrac{e^{2v}(2) - e^{-v}(-1)}{3(1)}\right|_{v=0} = \dfrac{2 + 1}{3} = 1$ — hence proved.
Problem: Find the MGF, mean, mean square and variance for the given PDFs of a
uniform/exponential random variable ‘X’:

(a) $f_X(x) = \dfrac{1}{b-a}$, $a \le x \le b$: $\quad M_X(v) = \dfrac{1}{b-a} \displaystyle\int_{a}^{b} e^{vx}\, dx = \dfrac{e^{bv} - e^{av}}{v(b-a)}$

(b) $f_X(x) = \dfrac{1}{2a}$, $-a \le x \le a$: $\quad M_X(v) = \dfrac{1}{2a} \displaystyle\int_{-a}^{a} e^{vx}\, dx = \dfrac{e^{av} - e^{-av}}{2av}$

(c) $f_X(x) = \dfrac{1}{b}\, e^{-x/b}$, $x \ge 0$: $\quad M_X(v) = \dfrac{1}{b} \displaystyle\int_{0}^{\infty} e^{-x/b}\, e^{vx}\, dx = \dfrac{1}{1 - bv}$, for $v < \dfrac{1}{b}$

Problem: Find the MGF of the triangular PDF $f_X(x) = x$ for $0 \le x \le 1$ and
$f_X(x) = 2 - x$ for $1 \le x \le 2$.

Solution:

$M_X(v) = \displaystyle\int_{0}^{1} x\, e^{vx}\, dx + \int_{1}^{2} (2 - x)\, e^{vx}\, dx$

$= \left[x \cdot \dfrac{e^{vx}}{v} - \dfrac{e^{vx}}{v^2}\right]_{0}^{1} + \left[(2 - x)\, \dfrac{e^{vx}}{v} - (-1)\, \dfrac{e^{vx}}{v^2}\right]_{1}^{2}$

$= \left[\dfrac{e^{v}}{v} - \dfrac{e^{v}}{v^2} + \dfrac{1}{v^2}\right] + \left[\dfrac{e^{2v}}{v^2} - \dfrac{e^{v}}{v} - \dfrac{e^{v}}{v^2}\right]$

$= \dfrac{1}{v^2} \left[1 - 2e^{v} + e^{2v}\right]$

$\therefore M_X(v) = \left[\dfrac{e^{v} - 1}{v}\right]^2$
Problem: The MGF of a r.v ‘X’ is given by $M_X(v) = \dfrac{2}{2 - v}$.
Find the mean, mean square and variance.

Solution: We know that $m_n = \left.\dfrac{d^n}{dv^n} M_X(v)\right|_{v=0}$

1. $m_1 = \left.\dfrac{d}{dv}\left[\dfrac{2}{2 - v}\right]\right|_{v=0} = 2 \left.\left[\dfrac{(2 - v)(0) - (1)(-1)}{(2 - v)^2}\right]\right|_{v=0} = 2 \left.\left[\dfrac{1}{(2 - v)^2}\right]\right|_{v=0} = 2 \left[\dfrac{1}{4}\right] = \dfrac{1}{2}$

2. $m_2 = \left.\dfrac{d^2}{dv^2}\left[\dfrac{2}{2 - v}\right]\right|_{v=0} = 2 \left.\dfrac{d}{dv}\left[\dfrac{1}{(2 - v)^2}\right]\right|_{v=0} = 2 \left.\left[\dfrac{2}{(2 - v)^3}\right]\right|_{v=0} = \dfrac{4}{8} = \dfrac{1}{2}$

3. Variance: $\sigma_X^2 = m_2 - m_1^2 = \dfrac{1}{2} - \dfrac{1}{4} = \dfrac{1}{4}$
Another method (series expansion):

$M_X(v) = E\big[e^{vX}\big] = \dfrac{2}{2 - v} = \dfrac{1}{1 - \frac{v}{2}} = \left(1 - \dfrac{v}{2}\right)^{-1}$

We know that $(1 - x)^{-1} = \dfrac{1}{1 - x} = 1 + x + x^2 + x^3 + \ldots$, so

$E\big[e^{vX}\big] = 1 + \dfrac{v}{2} + \dfrac{v^2}{4} + \dfrac{v^3}{8} + \ldots$

$E[1] + v E[X] + \dfrac{v^2}{2!}\, E[X^2] + \dfrac{v^3}{3!}\, E[X^3] + \ldots = 1 + \dfrac{v}{2} + \dfrac{v^2}{4} + \dfrac{v^3}{8} + \ldots$

Comparing coefficients:

$v E[X] = \dfrac{v}{2} \;\Rightarrow\; E[X] = \dfrac{1}{2} \;\longrightarrow\; m_1$

$\dfrac{v^2}{2}\, E[X^2] = \dfrac{v^2}{4} \;\Rightarrow\; E[X^2] = \dfrac{1}{2} \;\longrightarrow\; m_2$

$\dfrac{v^3}{6}\, E[X^3] = \dfrac{v^3}{8} \;\Rightarrow\; E[X^3] = \dfrac{3}{4} \;\longrightarrow\; m_3$

Variance $= m_2 - m_1^2 = \dfrac{1}{4}$
Problem: A random variable ‘X’ has PMF $f_X(x) = \dfrac{1}{2^x}$; $x = 1, 2, 3, \ldots$
Find the MGF, mean, mean square, and variance.

Solution:

$M_X(v) = \sum\limits_{x=1}^{\infty} f_X(x)\, e^{vx} = \sum\limits_{x=1}^{\infty} \left(\dfrac{e^{v}}{2}\right)^{x} = \sum\limits_{x=0}^{\infty} \left(\dfrac{e^{v}}{2}\right)^{x} - 1 = \dfrac{1}{1 - \frac{e^{v}}{2}} - 1 = \dfrac{2 - 2 + e^{v}}{2 - e^{v}}$

$M_X(v) = \dfrac{e^{v}}{2 - e^{v}} \quad (\text{for } e^{v} < 2)$

$m_1 = \left.\dfrac{d}{dv}\left[\dfrac{e^{v}}{2 - e^{v}}\right]\right|_{v=0} = \left.\dfrac{(2 - e^{v})\, e^{v} - e^{v}(-e^{v})}{(2 - e^{v})^2}\right|_{v=0} = \left.\dfrac{2e^{v}}{(2 - e^{v})^2}\right|_{v=0} = \dfrac{2}{1} = 2$

$m_2 = \left.\dfrac{d^2}{dv^2}\left[\dfrac{e^{v}}{2 - e^{v}}\right]\right|_{v=0} = \left.\dfrac{d}{dv}\left[\dfrac{2e^{v}}{(2 - e^{v})^2}\right]\right|_{v=0}$

$= \left.\dfrac{(2 - e^{v})^2\, (2e^{v}) - (2e^{v}) \cdot 2(2 - e^{v})(-e^{v})}{(2 - e^{v})^4}\right|_{v=0} = \dfrac{(1)(2) - (2)(2)(1)(-1)}{1} = 2 + 4 = 6$

3. Variance: $\mu_2 = m_2 - m_1^2 = 6 - 2^2 = 2$

$\therefore m_1 = 2; \quad m_2 = 6; \quad \mu_2 = 2$
4.1.4 Properties of MGF

1. $M_{cX}(v) = M_X(cv)$

Proof.

$M_{cX}(v) = E\big[e^{v(cX)}\big] = \displaystyle\int_{-\infty}^{\infty} f_X(x)\, e^{cvx}\, dx = M_X(cv)$

2. If the MGF of r.v ‘X’ is $M_X(v)$, then for the r.v ‘$aX + b$’, $M_{aX+b}(v) = e^{bv}\, M_X(av)$.

Proof.

$M_{aX+b}(v) = E\big[e^{v(aX+b)}\big] = \displaystyle\int_{-\infty}^{\infty} f_X(x)\, e^{v(ax+b)}\, dx = e^{vb} \int_{-\infty}^{\infty} f_X(x)\, e^{avx}\, dx = e^{vb}\, M_X(av)$

3. If X and Y are independent r.v's, then the MGF of their sum is the product of the
two individual MGFs, i.e., $M_{X+Y}(v) = M_X(v)\, M_Y(v)$.

Proof.

$M_{X+Y}(v) = E\big[e^{v(X+Y)}\big] = \displaystyle\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{XY}(x, y)\, e^{v(x+y)}\, dy\, dx \quad \big(\because f_{XY}(x, y) = f_X(x) f_Y(y)\big)$

$= \displaystyle\int_{-\infty}^{\infty} f_X(x)\, e^{vx}\, dx \int_{-\infty}^{\infty} f_Y(y)\, e^{vy}\, dy = M_X(v)\, M_Y(v) \quad (\because \text{independent})$

4. If the MGF of r.v ‘X’ is $M_X(v)$, then $M_{\frac{X+a}{b}}(v) = e^{\frac{va}{b}}\, M_X\!\left(\dfrac{v}{b}\right)$.

Proof.

$M_{\frac{X+a}{b}}(v) = \displaystyle\int_{-\infty}^{\infty} f_X(x)\, e^{v\left(\frac{x+a}{b}\right)}\, dx = e^{\frac{va}{b}} \int_{-\infty}^{\infty} f_X(x)\, e^{\frac{vx}{b}}\, dx = e^{\frac{va}{b}}\, M_X\!\left(\dfrac{v}{b}\right)$
Recall the conditional probability of an event A given an event B:
P(A|B) = P(A ∩ B)/P(B) = P(AB)/P(B);  P(B) ≠ 0
Here, let event A be defined in terms of the continuous r.v X as A = {X ≤ x}, so the quantity of interest is the conditional probability P{X ≤ x | B}:
F_X(x|B) = P(X ≤ x | B) = P{(X ≤ x) ∩ B}/P(B);  P(B) ≠ 0
P{(X ≤ x) ∩ B} is the probability of the joint event. The conditional distribution function may be continuous or discrete.
• The conditional PDF is obtained by differentiating the conditional CDF:
f_X(x|B) = d/dx F_X(x|B)
F_X(x|B) = P{−∞ < X ≤ x | B} = ∫_{−∞}^{x} f_X(u|B) du
0 ≤ F_X(x|B) ≤ 1
4.1.5.2 Properties of the conditional PDF
1. f_X(x|B) ≥ 0
2. ∫_{−∞}^{∞} f_X(x|B) dx = 1
3. F_X(x|B) = ∫_{−∞}^{x} f_X(u|B) du
4. P{x_1 ≤ X ≤ x_2 | B} = ∫_{x_1}^{x_2} f_X(x|B) dx
Problem: Two boxes contain red, green, and blue balls as shown in the table. The random variable X represents the color of one ball selected from the selected box. The probabilities of selecting the boxes are P(B1) = 2/10 and P(B2) = 8/10.

X = x_i | Ball color | Box1 | Box2 | Total
i = 1   | Red        |   5  |  80  |  85
i = 2   | Green      |  35  |  60  |  95
i = 3   | Blue       |  60  |  10  |  70
Total   |            | 100  | 150  | 250

Find:
1. f_X(x|B1) and F_X(x|B1)
2. f_X(x|B2) and F_X(x|B2)

Solution:
f_X(x|B1):                          f_X(x|B2):
P(X = 1|B1) = 5/100                 P(X = 1|B2) = 80/150
P(X = 2|B1) = 35/100                P(X = 2|B2) = 60/150
P(X = 3|B1) = 60/100                P(X = 3|B2) = 10/150

f_X(x|B1) = (5/100) δ(x − 1) + (35/100) δ(x − 2) + (60/100) δ(x − 3)
f_X(x|B2) = (80/150) δ(x − 1) + (60/150) δ(x − 2) + (10/150) δ(x − 3)
F_X(x|B1) = (5/100) U(x − 1) + (35/100) U(x − 2) + (60/100) U(x − 3)
F_X(x|B2) = (80/150) U(x − 1) + (60/150) U(x − 2) + (10/150) U(x − 3)
[Figure: stem plots of f_X(x|B1) (0.05, 0.35, 0.6) and f_X(x|B2) (0.533, 0.4, 0.067) at x = 1, 2, 3, with the corresponding staircase CDFs F_X(x|B1) and F_X(x|B2) rising to 1.]
The total (unconditional) PDF follows from the total probability theorem:
f_X(x) = f_X(x|B1) P(B1) + f_X(x|B2) P(B2)
Given P(B1) = 2/10 and P(B2) = 8/10:
P(X = 1) = (5/100)(2/10) + (80/150)(8/10) = 0.437
P(X = 2) = (35/100)(2/10) + (60/150)(8/10) = 0.390
P(X = 3) = (60/100)(2/10) + (10/150)(8/10) = 0.173
[Figure: total PDF f_X(x) with masses 0.437, 0.390, 0.173 at x = 1, 2, 3, and the corresponding CDF F_X(x) rising through 0.437, 0.827, 1.]
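The mixture values in the figure can be reproduced directly from the total probability theorem. A small sketch with the table's counts hard-coded:

```python
# Total-probability check for the two-box example (counts from the table above)
p_b1, p_b2 = 0.2, 0.8
cond1 = {1: 5/100, 2: 35/100, 3: 60/100}    # f_X(x | B1)
cond2 = {1: 80/150, 2: 60/150, 3: 10/150}   # f_X(x | B2)

# f_X(x) = f_X(x|B1) P(B1) + f_X(x|B2) P(B2)
fx = {x: cond1[x] * p_b1 + cond2[x] * p_b2 for x in (1, 2, 3)}
print(fx)  # ≈ {1: 0.437, 2: 0.390, 3: 0.173}
```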
[Block diagram: X with PDF f_X(x) → system T[·] → Y = T[X] with PDF f_Y(y).]
Here, X is the input r.v whose PDF is f_X(x), and Y is the output r.v whose PDF is f_Y(y). T[·] is the operation performed by the system to transform X into Y, e.g., addition, subtraction, multiplication, squaring, integration, etc.
Types of transformation:
Let X be a continuous random variable. The transformation is said to be monotonic if there is a one-to-one correspondence between the input and output random variables.
The transformation is monotonically increasing if it satisfies T[x_2] > T[x_1] whenever x_2 > x_1, as shown in Fig. (a). The transformation is monotonically decreasing if it satisfies T[x_2] < T[x_1] whenever x_2 > x_1, as shown in Fig. (b).
[Fig. (a): monotonically increasing Y = T[X] with y_0 = T[x_0]; Fig. (b): monotonically decreasing Y = T[X].]
Let Y take a particular value y_0 corresponding to the particular value x_0 of X, as shown in Fig. (a):
y_0 = T[x_0] ⇒ x_0 = T^{−1}[y_0]. Here T^{−1} is the inverse transform of T.
Now the probability of the event {Y ≤ y_0} must equal the probability of the event {X ≤ x_0}, because of the one-to-one correspondence between X and Y:
F_Y(y_0) = P{Y ≤ y_0} = P{X ≤ x_0} = F_X(x_0)
F_Y(y) = F_X(x)
∫_{−∞}^{y_0} f_Y(y) dy = ∫_{−∞}^{x_0} f_X(x) dx = ∫_{−∞}^{T^{−1}[y_0]} f_X(x) dx
Differentiating both sides with respect to y_0 gives
f_Y(y) = f_X(x) · dx/dy
where dy/dx is the slope of the transformation; it is positive for a monotonically increasing and negative for a monotonically decreasing transformation. But a PDF must always be non-negative, so the above equation is written as
∴ f_Y(y) = f_X(x) · |dx/dy|
Here, |dx/dy| is the magnitude of the inverse slope of the transformation.
Problem: Consider two r.v's X and Y such that Y = 2X + 3. The density function of X is the straight line shown in Fig., rising from (−1, 0) to (5, K). Find K and f_Y(y).

(i) The line through (−1, 0) and (5, K):
(x − x_1)(y_2 − y_1) = (y − y_1)(x_2 − x_1)
(x + 1)(K − 0) = (y − 0)(5 + 1)
K(x + 1) = 6y ⇒ y = K(x + 1)/6

∴ f_X(x) = (K/6)(x + 1), −1 ≤ x ≤ 5;  0, otherwise.

Total probability is unity: ∫_{−∞}^{∞} f_X(x) dx = 1
∫_{−1}^{5} (K/6)(x + 1) dx = 1
(K/6) [x²/2 + x]_{−1}^{5} = 1
(K/6)(17.5 + 0.5) = 3K = 1
⇒ K = 1/3

∴ f_X(x) = (x + 1)/18, −1 ≤ x ≤ 5;  0, otherwise.

(ii) f_Y(y) = f_X(x) |dx/dy|
Given y = 2x + 3 ⇒ x = (y − 3)/2
dy/dx = 2 ⇒ |dx/dy| = 1/2
Limits: x = −1 ⇒ y = 1; x = 5 ⇒ y = 13.
⇒ f_Y(y) = [((y − 3)/2 + 1)/18] × (1/2) = (y − 1)/72

∴ f_Y(y) = (y − 1)/72, 1 ≤ y ≤ 13;  0, otherwise.

[Figure: f_Y(y) is a straight line from (1, 0) to (13, 1/6).]
Check: for y = 1, f_Y(y) = 0; for y = 13, f_Y(y) = 12/72 = 1/6.
If X is a discrete random variable with PDF f_X(x) and CDF F_X(x), and Y = T[X] has PDF f_Y(y) and CDF F_Y(y), then
f_X(x) = Σ_n P(X = x_n) δ(x − x_n);   f_Y(y) = Σ_n P(Y = y_n) δ(y − y_n)
F_X(x) = Σ_n P(X = x_n) U(x − x_n);   F_Y(y) = Σ_n P(Y = y_n) U(y − y_n)
where y_n = T[x_n] and, for a monotonic T, P(Y = y_n) = P(X = x_n).
If X is a discrete r.v and T is not monotonic, the above procedure remains valid except that more than one value x_n may now correspond to a single value y_n. In that case P(Y = y_n) equals the sum of the probabilities of the various x_n for which y_n = T[x_n].
Problem: A discrete r.v X takes the values x = −1, 0, 1, and 2 with probabilities 0.1, 0.3, 0.4, and 0.2. The r.v X is transformed as (a) Y = 2X and (b) Y = 2 − x² + x³/3. Find f_Y(y) and F_Y(y).
Solution:
X = x_i:     −1    0     1     2
P(X = x_i):  0.1   0.3   0.4   0.2
[Figure: f_X(x) with masses 0.1, 0.3, 0.4, 0.2 at x = −1, 0, 1, 2; F_X(x) is a staircase rising through 0.1, 0.4, 0.8, 1.0.]
(a) Transformation Y = 2X; P(Y = y_n) = P(X = x_n)
Y = y_i:     −2    0     2     4
P(Y = y_i):  0.1   0.3   0.4   0.2
[Figure: f_Y(y) with masses 0.1, 0.3, 0.4, 0.2 at y = −2, 0, 2, 4; F_Y(y) is a staircase rising through 0.1, 0.4, 0.8, 1.0.]
(b) Given Y = 2 − x² + x³/3:
x = −1 ⇒ y = 2 − 1 − 1/3 = (6 − 3 − 1)/3 = 2/3
x = 0 ⇒ y = 2
x = 1 ⇒ y = 2 − 1 + 1/3 = (6 − 3 + 1)/3 = 4/3
x = 2 ⇒ y = 2 − 4 + 8/3 = (6 − 12 + 8)/3 = 2/3
Here two x-values map to y = 2/3, so the probabilities add:
f_Y(2/3) = P(X = −1) + P(X = 2) = 0.1 + 0.2 = 0.3
f_Y(2) = P(Y = 2) = P(X = 0) = 0.3
f_Y(4/3) = P(Y = 4/3) = P(X = 1) = 0.4
f_Y(y) = 0.3 δ(y − 2/3) + 0.4 δ(y − 4/3) + 0.3 δ(y − 2)
F_Y(y) = 0.3 U(y − 2/3) + 0.4 U(y − 4/3) + 0.3 U(y − 2)
[Figure: f_Y(y) with masses 0.3, 0.4, 0.3 at y = 2/3, 4/3, 2; F_Y(y) is a staircase rising through 0.3, 0.7, 1.0.]
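Pushing a discrete PMF through a non-monotonic map, merging the masses of x-values that land on the same y, can be sketched as below (exact rationals are used so that the two images of y = 2/3 compare equal):

```python
# Part (b): push the PMF through y = 2 - x**2 + x**3/3 and merge masses
from collections import defaultdict
from fractions import Fraction as F

pmf_x = {-1: 0.1, 0: 0.3, 1: 0.4, 2: 0.2}
pmf_y = defaultdict(float)
for x, p in pmf_x.items():
    y = F(2) - F(x) ** 2 + F(x) ** 3 / 3   # exact arithmetic: 2/3 == 2/3
    pmf_y[y] += p                          # masses add for coincident images

print(dict(pmf_y))  # {2/3: 0.3, 2: 0.3, 4/3: 0.4}
```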
Problem: A r.v X takes the values −4, 1, 2, 3, 4, each with probability 1/5.
(i) Find and plot f_X(x) and F_X(x); find the mean and variance.
(ii) If Y = X³, find and plot f_Y(y) and F_Y(y); find the mean and variance.
[Figure: f_X(x) with mass 0.2 at each of x = −4, 1, 2, 3, 4; F_X(x) is a staircase rising in steps of 0.2 to 1.0.]
(i) Mean:
E(X) = m_1 = Σ x_i f_X(x_i) = (−4 + 1 + 2 + 3 + 4)(0.2) = 1.2
Mean square:
m_2 = Σ x_i² f_X(x_i) = [(−4)² + 1² + 2² + 3² + 4²](0.2) = 0.2(16 + 1 + 4 + 9 + 16) = 0.2 × 46 = 9.2
Variance: σ_X² = m_2 − m_1² = 9.2 − (1.2)² = 7.76
(ii) Given Y = X³; P(Y = y_i) = P(X = x_i) = 0.2
x = −4 ⇒ y = −64; x = 1 ⇒ y = 1; x = 2 ⇒ y = 8; x = 3 ⇒ y = 27; x = 4 ⇒ y = 64
[Figure: f_Y(y) with mass 0.2 at each of y = −64, 1, 8, 27, 64; F_Y(y) is a staircase rising in steps of 0.2 to 1.0.]
E(Y) = m_1 = Σ y_i f_Y(y_i) = (−64 + 1 + 8 + 27 + 64)(0.2) = 7.2
m_2 = Σ y_i² f_Y(y_i) = [(−64)² + 1² + 8² + 27² + 64²](0.2) = 8986 × 0.2 = 1797.2
Variance: σ_Y² = m_2 − m_1² = 1797.2 − (7.2)² = 1745.36
4.3 Methods of defining Conditioning Events
Let the conditioning event be B = {X ≤ b}.
F_X(x|B) = P{X ≤ x|B} → (1)
         = P{(X ≤ x)|(X ≤ b)}
         = P{(X ≤ x) ∩ (X ≤ b)}/P(X ≤ b);  P(X ≤ b) ≠ 0 → (2)
Case 1: x ≥ b. Then {X ≤ b} ⊂ {X ≤ x}, so
{(X ≤ x) ∩ (X ≤ b)} = {X ≤ b}
Substituting in eqn. (2):
∴ F_X(x|B) = P(X ≤ b)/P(X ≤ b) = 1 for x ≥ b → (3)
Case 2: x < b. Then {X ≤ x} ⊂ {X ≤ b}, so
{(X ≤ x) ∩ (X ≤ b)} = {X ≤ x}
Substituting in eqn. (2), we get
F_X(x|B) = P{X ≤ x}/P{X ≤ b} = F_X(x)/F_X(b)   ∵ F_X(x) = P(X ≤ x) → (4)
From equations (3) and (4), the conditional distribution function is
F_X(x|B) = F_X(x)/F_X(b), x < b;  1, x ≥ b → (5)
Differentiating gives the conditional density
f_X(x|B) = f_X(x)/∫_{−∞}^{b} f_X(x) dx, x < b;  0, x ≥ b → (6)
where F_X(b) = ∫_{−∞}^{b} f_X(x) dx.
Definition: For a random variable X with PDF f_X(x) and an event B ⊂ S_X with P[B] > 0, the conditional PDF of X given B is
f_{X|B}(x) = f_X(x)/P[B], x ∈ B;  0, otherwise.
Problem: Let T be the duration of a telephone call, with PDF f_T(t) = (1/3) e^{−t/3}, t ≥ 0. For calls that last at least 2 minutes, what is the conditional PDF of the call duration?
Solution: The conditioning event is {T > 2}. The probability of the event is
P(T > 2) = ∫_{2}^{∞} f_T(t) dt = e^{−2/3}
∴ f_{T|T>2}(t) = f_T(t)/P(T > 2) = (1/3) e^{−(t−2)/3}, t > 2;  0, otherwise.
[Figure: f_T(t) = (1/3)e^{−t/3} starting at height 1/3 at t = 0, and the conditional PDF f_{T|T>2}(t), the same exponential shape restarted at t = 2.]
• Conditional expectation:
E[X|B] = ∫_{−∞}^{∞} x f_{X|B}(x) dx
E[g(X)|B] = ∫_{−∞}^{∞} g(x) f_{X|B}(x) dx
• The conditional standard deviation: σ_{X|B} = √Var(X|B)
Note: Conditional variance and standard deviation are useful because they measure the spread of the random variable after we learn the conditioning information B. If the conditional standard deviation σ_{X|B} is much smaller than σ_X, then learning that B occurred reduces our uncertainty about X, because it shrinks the range of typical values of X.
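For the call-duration example above, the conditional expectation can be computed numerically; the exponential's memoryless shape predicts E[T | T > 2] = 2 + 3 = 5. A small sketch (midpoint Riemann sum, with the tail truncated at an arbitrary large t):

```python
# E[T | T > 2] for T ~ exponential with mean 3, conditioned on T > 2
import math

def f_cond(t):
    # conditional PDF derived above: (1/3) exp(-(t-2)/3) for t > 2
    return math.exp(-(t - 2) / 3) / 3 if t > 2 else 0.0

n, tmax = 50000, 80.0            # truncation point is an approximation
h = (tmax - 2) / n
mean = sum((2 + (k + 0.5) * h) * f_cond(2 + (k + 0.5) * h) * h for k in range(n))
print(round(mean, 3))  # ≈ 5.0
```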
CHAPTER 5
Let us consider two events: A, a function of x, and B, a function of y, such that
A = {X ≤ x} and B = {Y ≤ y}
The joint event A ∩ B = {X ≤ x, Y ≤ y} is shown in Fig.
The probability of each single event is the CDF (distribution function) of the corresponding random variable:
F_X(x) = P{X ≤ x};  F_Y(y) = P{Y ≤ y}
The joint distribution function (joint CDF) is
F_XY(x, y) = P{X ≤ x, Y ≤ y} = ∫_{u=−∞}^{x} ∫_{v=−∞}^{y} f_XY(u, v) dv du
where f_XY(x, y) is the joint PDF.
If we know F_XY(x, y), the joint PDF is obtained by evaluating the second mixed derivative of the joint distribution function, i.e.,
f_XY(x, y) = ∂²F_XY(x, y)/∂x∂y
For two discrete r.v's X and Y, the joint PDF can be written as f_XY(x, y) = P(X = x, Y = y).
For N random variables, the joint PDF is
f_{X_1 X_2 ... X_N}(x_1, x_2, ..., x_N) = ∂^N F_{X_1 X_2 ... X_N}(x_1, x_2, ..., x_N)/(∂x_1 ∂x_2 ... ∂x_N)
2. f_XY(x, y) ≥ 0
3. Total probability: ∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} f_XY(x, y) dy dx = 1
4. P{(x_1 ≤ X ≤ x_2) ∩ (y_1 ≤ Y ≤ y_2)} = ∫_{x=x_1}^{x_2} ∫_{y=y_1}^{y_2} f_XY(x, y) dy dx
5. Marginal densities:
f_X(x) = ∫_{y=−∞}^{∞} f_XY(x, y) dy  (or)  f_X(x) = d/dx F_X(x)
f_Y(y) = ∫_{x=−∞}^{∞} f_XY(x, y) dx  (or)  f_Y(y) = d/dy F_Y(y)
6. The joint PDF: f_XY(x, y) = ∂²F_XY(x, y)/∂x∂y
5.2.3 Properties of Joint CDF: F_XY(x, y)
1. F_XY(x, y) = P{(−∞ < X ≤ x) ∩ (−∞ < Y ≤ y)} = P{X ≤ x ∩ Y ≤ y} = ∫_{u=−∞}^{x} ∫_{v=−∞}^{y} f_XY(u, v) dv du
Problem 1: Let X and Y be continuous r.v's with joint PDF
f_XY(x, y) = b e^{−x} cos y, 0 ≤ x ≤ 2 and 0 ≤ y ≤ π/2;  0, elsewhere.
Find b, the marginal PDFs and CDFs, the joint CDF, and P{(0 ≤ X ≤ 1) ∩ (0 ≤ Y ≤ π/4)}.

Solution:
1. Total probability = 1:
∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} f_XY(x, y) dy dx = 1
⇒ ∫_{x=0}^{2} ∫_{y=0}^{π/2} b e^{−x} cos y dy dx = 1
⇒ ∫_{0}^{2} b e^{−x} [sin y]_{0}^{π/2} dx = 1
⇒ ∫_{0}^{2} b e^{−x} (1 − 0) dx = 1
⇒ b [e^{−x}/(−1)]_{0}^{2} = 1
⇒ b (1 − e^{−2}) = 1
∴ b = 1/(1 − e^{−2}) = 1.1565

2. Marginal PDF f_X(x):
f_X(x) = ∫_{y=−∞}^{∞} f_XY(x, y) dy = ∫_{y=0}^{π/2} b e^{−x} cos y dy = b e^{−x} [sin y]_{0}^{π/2} = b e^{−x}
∴ f_X(x) = b e^{−x}, 0 ≤ x ≤ 2;  0, elsewhere.
[Figure: f_X(x) decays from b at x = 0; F_X(x) rises from 0 to 1 over 0 ≤ x ≤ 2.]
3. Marginal CDF F_X(x): the given intervals are −∞ < x ≤ 0; 0 ≤ x ≤ 2; x ≥ 2.

Case I: −∞ < x ≤ 0
F_X(x) = P{−∞ < X ≤ x} = ∫_{−∞}^{x} f_X(u) du = ∫_{−∞}^{0} 0 du = 0
∴ F_X(x) = 0, −∞ < x ≤ 0

Case II: 0 ≤ x ≤ 2
F_X(x) = ∫_{u=−∞}^{0} f_X(u) du + ∫_{u=0}^{x} f_X(u) du = 0 + ∫_{0}^{x} b e^{−u} du = b [e^{−u}/(−1)]_{0}^{x} = b(1 − e^{−x})
∴ F_X(x) = b(1 − e^{−x}), 0 ≤ x ≤ 2

Case III: x ≥ 2
F_X(x) = ∫_{u=0}^{2} b e^{−u} du = b [e^{−u}/(−1)]_{0}^{2} = b(1 − e^{−2}) = [1/(1 − e^{−2})](1 − e^{−2}) = 1
∴ F_X(x) = 1, x ≥ 2

∴ F_X(x) = 0, −∞ < x ≤ 0;  b(1 − e^{−x}), 0 ≤ x ≤ 2;  1, x ≥ 2.
4. Marginal PDF f_Y(y):
f_Y(y) = ∫_{x=−∞}^{∞} f_XY(x, y) dx = ∫_{x=0}^{2} b e^{−x} cos y dx = b cos y [e^{−x}/(−1)]_{0}^{2} = b cos y (1 − e^{−2}) = [1/(1 − e^{−2})] cos y (1 − e^{−2}) = cos y
∴ f_Y(y) = cos y, 0 ≤ y ≤ π/2;  0, elsewhere.

Marginal CDF F_Y(y): the given intervals are −∞ < y ≤ 0; 0 ≤ y ≤ π/2; y ≥ π/2.

Case I: −∞ < y ≤ 0
F_Y(y) = ∫_{−∞}^{y} f_Y(v) dv = ∫_{−∞}^{0} 0 dv = 0
∴ F_Y(y) = 0, −∞ < y ≤ 0

Case II: 0 ≤ y ≤ π/2
F_Y(y) = ∫_{v=−∞}^{0} f_Y(v) dv + ∫_{v=0}^{y} f_Y(v) dv = 0 + ∫_{0}^{y} cos v dv = [sin v]_{0}^{y} = sin y
∴ F_Y(y) = sin y, 0 ≤ y ≤ π/2

Case III: y ≥ π/2
F_Y(y) = ∫_{v=0}^{π/2} cos v dv = [sin v]_{0}^{π/2} = 1
∴ F_Y(y) = 1, y ≥ π/2

∴ F_Y(y) = 0, −∞ < y ≤ 0;  sin y, 0 ≤ y ≤ π/2;  1, y ≥ π/2.
[Figure: f_Y(y) = cos y falling from 1 to 0 on [0, π/2]; F_Y(y) = sin y rising from 0 to 1.]
Joint CDF, Case (b): 0 ≤ x ≤ 2 and 0 ≤ y ≤ π/2
∴ F_XY(x, y) = b(1 − e^{−x}) sin y, 0 ≤ x ≤ 2 and 0 ≤ y ≤ π/2
Case (c): 2 ≤ x < ∞ and π/2 ≤ y < ∞
F_XY(x, y) = ∫_{x=0}^{2} ∫_{y=0}^{π/2} b e^{−x} cos y dy dx
= b [e^{−x}/(−1)]_{0}^{2} [sin y]_{0}^{π/2}
= b (1 − e^{−2})(1 − 0)
= [1/(1 − e^{−2})](1 − e^{−2}) = 1
∴ F_XY(x, y) = 1, 2 ≤ x < ∞ and π/2 ≤ y < ∞

∴ F_XY(x, y) = 0, x ≤ 0 or y ≤ 0;  b(1 − e^{−x}) sin y, 0 ≤ x ≤ 2 and 0 ≤ y ≤ π/2;  1, x ≥ 2 and y ≥ π/2.

5. P{(0 ≤ X ≤ 1) ∩ (0 ≤ Y ≤ π/4)}
= ∫_{x=0}^{1} ∫_{y=0}^{π/4} b e^{−x} cos y dy dx
= ∫_{0}^{1} b e^{−x} [sin y]_{0}^{π/4} dx
= ∫_{0}^{1} b e^{−x} (1/√2) dx
= (b/√2) [e^{−x}/(−1)]_{0}^{1}
= (b/√2)(1 − e^{−1})
= [1/(1 − e^{−2})] × (1/√2) × (1 − e^{−1})
= 0.5169
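Because the integrand in Problem 1 separates into a function of x times a function of y, any rectangle probability factors; the sketch below exploits this to check the total probability and the value 0.5169:

```python
# Numerical check for f_XY(x, y) = b*exp(-x)*cos(y), 0<=x<=2, 0<=y<=pi/2
import math

b = 1 / (1 - math.exp(-2))          # ≈ 1.1565, from total probability = 1

def prob(x0, x1, y0, y1):
    # separable integrand, so the double integral factors
    return b * (math.exp(-x0) - math.exp(-x1)) * (math.sin(y1) - math.sin(y0))

assert abs(prob(0, 2, 0, math.pi / 2) - 1.0) < 1e-12   # total probability
print(round(prob(0, 1, 0, math.pi / 4), 4))  # ≈ 0.5169
```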
Problem 2: Find F_XY(x, y), F_X(x) and f_X(x), F_Y(y) and f_Y(y) for the given joint PDF
f_XY(x, y) = 2 − x − y, 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1;  0, elsewhere.
Solution:
F_XY(x, y) = ∫_{u=0}^{x} ∫_{v=0}^{y} (2 − u − v) dv du
= ∫_{0}^{x} [2v − uv − v²/2]_{0}^{y} du
= ∫_{0}^{x} (2y − uy − y²/2) du
= [2yu − y u²/2 − (y²/2) u]_{0}^{x}
= 2xy − x²y/2 − x y²/2
∴ F_XY(x, y) = 0, x ≤ 0 or y ≤ 0;  2xy − x²y/2 − xy²/2, 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1;  1, x ≥ 1 and y ≥ 1.
2. Marginal PDFs:
f_X(x) = ∫_{0}^{1} (2 − x − y) dy = 3/2 − x, 0 ≤ x ≤ 1
f_Y(y) = ∫_{0}^{1} (2 − x − y) dx = 3/2 − y, 0 ≤ y ≤ 1
3. Marginal CDFs F_X(x) and F_Y(y):
i. F_X(x) = 0, x ≤ 0;  F_Y(y) = 0, y ≤ 0.
ii. For 0 ≤ x ≤ 1:
F_X(x) = ∫_{0}^{x} (3/2 − u) du = [3u/2 − u²/2]_{0}^{x} = (3/2)x − x²/2
For 0 ≤ y ≤ 1:
F_Y(y) = ∫_{0}^{y} (3/2 − v) dv = (3/2)y − y²/2
iii. F_X(x) = 1, x ≥ 1;  F_Y(y) = 1, y ≥ 1.
∴ F_X(x) = 0, x ≤ 0;  (3/2)x − x²/2, 0 ≤ x ≤ 1;  1, x ≥ 1.
∴ F_Y(y) = 0, y ≤ 0;  (3/2)y − y²/2, 0 ≤ y ≤ 1;  1, y ≥ 1.
Problem 3: Find the joint PDF of two r.v's X and Y whose CDF is given by
F_XY(x, y) = (1 − e^{−x²})(1 − e^{−y²}), x ≥ 0, y ≥ 0;  0, otherwise.
Also find P{1 ≤ X ≤ 2, 1 ≤ Y ≤ 2}.
Solution:
f_XY(x, y) = ∂²F_XY(x, y)/∂x∂y = (2x e^{−x²})(2y e^{−y²})
∴ f_XY(x, y) = 4xy e^{−x²} e^{−y²}, x ≥ 0, y ≥ 0;  0, otherwise.
P{1 ≤ X ≤ 2, 1 ≤ Y ≤ 2} = ∫_{x=1}^{2} ∫_{y=1}^{2} 4xy e^{−x²} e^{−y²} dy dx
= 4 ∫_{1}^{2} x e^{−x²} dx ∫_{1}^{2} y e^{−y²} dy   ∵ d(e^{−x²}) = −2x e^{−x²} dx
= 4 [e^{−x²}/(−2)]_{1}^{2} [e^{−y²}/(−2)]_{1}^{2}
= (e^{−1} − e^{−4})²
= 0.12219
Problem: The joint PDF of X and Y is f_XY(x, y) = c(2x + y), 0 ≤ x ≤ 1 and 0 ≤ y ≤ 2;  0, elsewhere.
(i) Find the value of c. (ii) Find the marginal PDFs and CDFs of X and Y.
Solution:
1. Total probability = 1:
∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} f_XY(x, y) dy dx = 1
⇒ ∫_{x=0}^{1} ∫_{y=0}^{2} c(2x + y) dy dx = 1
⇒ ∫_{0}^{1} c [2xy + y²/2]_{0}^{2} dx = 1
⇒ ∫_{0}^{1} c (4x + 2) dx = 1
⇒ c [4x²/2 + 2x]_{0}^{1} = 1
⇒ c (2 + 2) = 1
⇒ c = 1/4

2. Marginal PDFs:
f_X(x) = ∫_{y=−∞}^{∞} f_XY(x, y) dy            f_Y(y) = ∫_{x=−∞}^{∞} f_XY(x, y) dx
= ∫_{0}^{2} (1/4)(2x + y) dy                   = ∫_{0}^{1} (1/4)(2x + y) dx
= (1/4)[2xy + y²/2]_{0}^{2}                    = (1/4)[x² + xy]_{0}^{1}
= (1/4)(4x + 2) = x + 1/2                      = (1/4)(1 + y) = (y + 1)/4
∴ f_X(x) = x + 1/2, 0 ≤ x ≤ 1                  ∴ f_Y(y) = (y + 1)/4, 0 ≤ y ≤ 2

3. Marginal CDFs:
(a) F_X(x) = 0 for x ≤ 0;  F_Y(y) = 0 for y ≤ 0.
(b) For 0 ≤ x ≤ 1:                             For 0 ≤ y ≤ 2:
F_X(x) = ∫_{0}^{x} (u + 1/2) du                F_Y(y) = ∫_{0}^{y} (v + 1)/4 dv
       = x²/2 + x/2                                   = (1/4)(y²/2 + y)
(c) F_X(x) = 1 for x ≥ 1;  F_Y(y) = 1 for y ≥ 2.
Problem: The joint CDF of X and Y is
F_XY(x, y) = U(x) U(y) [1 − e^{−x/2} − e^{−y/2} + e^{−(x+y)/2}]
(i) Find f_XY(x, y)  (ii) P{0.5 ≤ X ≤ 1.5}  (iii) P{X ≤ 1 ∩ Y ≤ 2}  (iv) P{0.5 ≤ X ≤ 2, 1 ≤ Y ≤ 3}
Solution: Given
F_XY(x, y) = 1 − e^{−x/2} − e^{−y/2} + e^{−(x+y)/2}, x ≥ 0, y ≥ 0;  0, otherwise
= 1 − e^{−x/2} − e^{−y/2} + e^{−x/2} e^{−y/2}
= 1 − e^{−x/2} − e^{−y/2}(1 − e^{−x/2})
∴ F_XY(x, y) = (1 − e^{−x/2})(1 − e^{−y/2}), x ≥ 0, y ≥ 0

(a) f_XY(x, y) = ∂²F_XY(x, y)/∂x∂y
= ∂/∂x (1 − e^{−x/2}) · ∂/∂y (1 − e^{−y/2})
= (1/2) e^{−x/2} · (1/2) e^{−y/2}
∴ f_XY(x, y) = (1/4) e^{−(x+y)/2}, x ≥ 0, y ≥ 0

(b) P(0.5 ≤ X ≤ 1.5) = ∫_{x=0.5}^{1.5} ∫_{y=0}^{∞} (1/4) e^{−(x+y)/2} dy dx
= ∫_{0.5}^{1.5} (1/4) e^{−x/2} [e^{−y/2}/(−1/2)]_{0}^{∞} dx
= ∫_{0.5}^{1.5} (1/2) e^{−x/2} dx
= (1/2) [e^{−x/2}/(−1/2)]_{0.5}^{1.5}
= −(e^{−0.75} − e^{−0.25})
= −(0.472 − 0.778)
= 0.306

(c) P(X ≤ 1, Y ≤ 2) = ∫_{x=0}^{1} ∫_{y=0}^{2} (1/4) e^{−(x+y)/2} dy dx
= ∫_{0}^{1} (1/4) e^{−x/2} [e^{−y/2}/(−1/2)]_{0}^{2} dx
= ∫_{0}^{1} (1/2) e^{−x/2} (1 − e^{−1}) dx
= (1 − e^{−1})(1 − e^{−1/2})
= 0.632 × 0.393
∴ P(X ≤ 1, Y ≤ 2) = 0.2487

(d) P(0.5 ≤ X ≤ 2, 1 ≤ Y ≤ 3) = ∫_{x=0.5}^{2} ∫_{y=1}^{3} (1/4) e^{−(x+y)/2} dy dx
= ∫_{0.5}^{2} (1/4) e^{−x/2} [e^{−y/2}/(−1/2)]_{1}^{3} dx
= ∫_{0.5}^{2} (1/2) e^{−x/2} (e^{−1/2} − e^{−3/2}) dx
= (e^{−1/2} − e^{−3/2})(e^{−1/4} − e^{−1})
= 0.3834 × 0.4109
∴ P(0.5 ≤ X ≤ 2, 1 ≤ Y ≤ 3) = 0.1575
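Since the joint CDF is available in closed form, the rectangle probabilities can also be checked with the standard rectangle identity for joint CDFs, P{x_0 < X ≤ x_1, y_0 < Y ≤ y_1} = F(x_1, y_1) − F(x_0, y_1) − F(x_1, y_0) + F(x_0, y_0). A minimal sketch:

```python
# Checks for F_XY(x, y) = (1 - exp(-x/2))(1 - exp(-y/2)), x, y >= 0
import math

def F(x, y):
    if x < 0 or y < 0:
        return 0.0
    return (1 - math.exp(-x / 2)) * (1 - math.exp(-y / 2))

def rect(x0, x1, y0, y1):
    # rectangle probability from the joint CDF
    return F(x1, y1) - F(x0, y1) - F(x1, y0) + F(x0, y0)

print(round(rect(0.5, 1.5, 0, 1e9), 3))  # (b) ≈ 0.306  (large y1 stands in for infinity)
print(round(F(1, 2), 3))                 # (c) ≈ 0.249
print(round(rect(0.5, 2, 1, 3), 4))      # (d) ≈ 0.1575
```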
Problem 8: Let f_XY(x, y) = x e^{−x(1+y)} U(x) U(y). Check whether X and Y are independent.
Solution: X and Y are independent iff f_XY(x, y) = f_X(x) · f_Y(y).
f_X(x) = ∫_{y=0}^{∞} x e^{−x(1+y)} dy = x e^{−x} ∫_{0}^{∞} e^{−xy} dy = x e^{−x} (1/x) = e^{−x} U(x)
f_Y(y) = ∫_{x=0}^{∞} x e^{−x(1+y)} dx = U(y)/(1 + y)²
Since x e^{−x(1+y)} U(x) U(y) ≠ e^{−x} U(x) · U(y)/(1 + y)²,
X and Y are not independent.
Problem 9: f_XY(x, y) = (1/12) e^{−x/4} e^{−y/3} U(x) U(y). Check whether X and Y are independent.
Solution:
f_X(x) = ∫_{y=0}^{∞} (1/12) e^{−x/4} e^{−y/3} dy        f_Y(y) = ∫_{x=0}^{∞} (1/12) e^{−x/4} e^{−y/3} dx
= (1/12) e^{−x/4} [e^{−y/3}/(−1/3)]_{0}^{∞}             = (1/12) e^{−y/3} [e^{−x/4}/(−1/4)]_{0}^{∞}
= (1/12) e^{−x/4} (3)                                   = (1/12) e^{−y/3} (4)
= (1/4) e^{−x/4}                                        = (1/3) e^{−y/3}
∴ f_XY(x, y) = f_X(x) · f_Y(y), so X and Y are independent.
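The factorization in Problem 9 can be verified pointwise; a small sketch with a few arbitrary sample points:

```python
# Factorization check: f_XY = (1/12) e^{-x/4} e^{-y/3}, x, y >= 0
import math

def f_xy(x, y):
    return math.exp(-x / 4) * math.exp(-y / 3) / 12

def f_x(x):
    return math.exp(-x / 4) / 4

def f_y(y):
    return math.exp(-y / 3) / 3

for x, y in [(0.1, 0.2), (1.0, 2.0), (3.5, 0.7)]:
    assert abs(f_xy(x, y) - f_x(x) * f_y(y)) < 1e-12   # independent
print("f_XY factors into f_X * f_Y")
```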
Problem: For f_XY(x, y) = x + y, find the marginal PDFs f_X(x) and f_Y(y).
Solution:
f_X(x) = ∫_{y=0}^{1} (x + y) dy = [xy + y²/2]_{0}^{1} = x + 1/2
f_Y(y) = ∫_{x=0}^{2} (x + y) dx = [x²/2 + xy]_{0}^{2} = 2(1 + y)
Problem: For f_XY(x, y) = xy e^{−x²/2} e^{−y²/2}, x ≥ 0, y ≥ 0, find the marginal PDFs and P{X ≤ 1, Y ≤ 1}.
Solution:
f_X(x) = ∫_{y=0}^{∞} xy e^{−x²/2} e^{−y²/2} dy            f_Y(y) = ∫_{x=0}^{∞} xy e^{−x²/2} e^{−y²/2} dx
= x e^{−x²/2} ∫_{0}^{∞} y e^{−y²/2} dy                    = y e^{−y²/2} ∫_{0}^{∞} x e^{−x²/2} dx
Let y²/2 = t ⇒ y dy = dt:                                 Let x²/2 = t ⇒ x dx = dt:
= x e^{−x²/2} ∫_{0}^{∞} e^{−t} dt                         = y e^{−y²/2} ∫_{0}^{∞} e^{−t} dt
= x e^{−x²/2}                                             = y e^{−y²/2}

P{X ≤ 1, Y ≤ 1} = ∫_{0}^{1} ∫_{0}^{1} xy e^{−(x²+y²)/2} dy dx
= ∫_{0}^{1} x e^{−x²/2} dx ∫_{0}^{1} y e^{−y²/2} dy
= (1 − e^{−1/2})²
= 0.1548
5.4 Conditional Distribution and Density Functions
The conditional distribution function of a random variable X, given that some event B has occurred, is defined as
F_X(x|B) = P{X ≤ x|B} = P{X ≤ x ∩ B}/P{B};  P(B) ≠ 0
where, for the event B = {X ≤ b}, P(B) = ∫_{−∞}^{b} f_X(x) dx.
The above conditioning for a single random variable X can be extended to multiple random variables, i.e., to two random variables X and Y.
Problem 12: If f_XY(x, y) = e^{−(x+y)}, x ≥ 0, y ≥ 0;  0, otherwise. Find:
1. f_X(x), f_Y(y)
2. F_X(x), F_Y(y)
3. P(X < 1)
4. P(X < 1 ∩ Y < 3)
5. P{(X < 1)|(Y < 3)}
6. P{(X > 1 ∩ Y < 2)|(X < 3)}
7. P{(X > 1 ∩ Y < 2)|(Y > 3)}

Solution:
1. f_X(x) = ∫_{y=−∞}^{∞} f_XY(x, y) dy = ∫_{0}^{∞} e^{−(x+y)} dy = e^{−x} [e^{−y}/(−1)]_{0}^{∞} = e^{−x}[0 + 1] = e^{−x}
∴ f_X(x) = e^{−x}; similarly f_Y(y) = e^{−y}.

2. (i) For x ≤ 0: F_X(x) = 0.
(ii) For x ≥ 0:
F_X(x) = ∫_{u=0}^{x} e^{−u} du = [e^{−u}/(−1)]_{0}^{x} = 1 − e^{−x}
∴ F_X(x) = 1 − e^{−x}, x ≥ 0;  0, x ≤ 0.
∴ F_Y(y) = 1 − e^{−y}, y ≥ 0;  0, y ≤ 0.

3. P(X < 1) = ∫_{x=0}^{1} ∫_{y=0}^{∞} e^{−(x+y)} dy dx = ∫_{0}^{1} e^{−x} dx = 1 − e^{−1}
Other method: P(X < 1) = ∫_{0}^{1} f_X(x) dx = [e^{−x}/(−1)]_{0}^{1} = 1 − e^{−1}

4. P(X < 1 ∩ Y < 3) = ∫_{x=0}^{1} ∫_{y=0}^{3} e^{−(x+y)} dy dx = (1 − e^{−1})(1 − e^{−3})

5. P(Y < 3) = ∫_{0}^{3} e^{−y} dy = [e^{−y}/(−1)]_{0}^{3} = 1 − e^{−3}
P{(X < 1)|(Y < 3)} = P{(X < 1) ∩ (Y < 3)}/P{Y < 3}
= (1 − e^{−3})(1 − e^{−1})/(1 − e^{−3})
= 1 − e^{−1}
5.5 Discrete random variables
1. Joint CDF:
F_XY(x, y) = P{X ≤ x_n ∩ Y ≤ y_m} = Σ_{n} Σ_{m} P(X = x_n, Y = y_m) U(x − x_n) U(y − y_m)
2. Joint PDF:
f_XY(x, y) = ∂²F_XY(x, y)/∂x∂y = Σ_{n} Σ_{m} P(x_n, y_m) δ(x − x_n) δ(y − y_m)
3. Marginal CDF of X:
F_X(x) = F_XY(x, ∞) = P{X ≤ x_n ∩ Y ≤ ∞} = Σ_{n} [Σ_{m} P(x_n, y_m)] U(x − x_n)   ∵ U(y − y_m) → 1 as y → ∞
4. Marginal CDF of Y:
F_Y(y) = F_XY(∞, y) = Σ_{m} [Σ_{n} P(x_n, y_m)] U(y − y_m)
5. f_X(x) = d/dx F_X(x) = Σ_{n} [Σ_{m} P(x_n, y_m)] δ(x − x_n)
6. f_Y(y) = d/dy F_Y(y) = Σ_{m} [Σ_{n} P(x_n, y_m)] δ(y − y_m)
Problem 13: The joint sample space for two random variables X and Y and the corresponding probabilities are shown in the table.
(x, y):   (1,1)  (2,1)  (3,3)
P(x, y):  0.2    0.3    0.5
Find and plot:
1. F_XY(x, y) and f_XY(x, y)
2. F_X(x) and f_X(x)
3. F_Y(y) and f_Y(y)
4. Find P{0 ≤ X ≤ 1 ∩ 0 ≤ Y ≤ 3}
5. Find P{0 ≤ X ≤ 2 ∩ 0 ≤ Y ≤ 2}
6. Find P{0 ≤ X < 2 ∩ 0 ≤ Y < 2}
Solution:
1. F_XY(x, y) = Σ_{n} Σ_{m} P(x_n, y_m) U(x − x_n) U(y − y_m)
= P(1,1) U(x − 1) U(y − 1) + P(2,1) U(x − 2) U(y − 1) + P(3,3) U(x − 3) U(y − 3)
F_XY(x, y) = 0.2 U(x − 1) U(y − 1) + 0.3 U(x − 2) U(y − 1) + 0.5 U(x − 3) U(y − 3)
f_XY(x, y) = 0.2 δ(x − 1) δ(y − 1) + 0.3 δ(x − 2) δ(y − 1) + 0.5 δ(x − 3) δ(y − 3)
[Figure: the three joint point masses 0.2, 0.3, 0.5 in the (x, y) plane and the resulting staircase joint CDF.]
2. Marginal PDF and CDF:
f_X(x) = 0.2 δ(x − 1) + 0.3 δ(x − 2) + 0.5 δ(x − 3);  F_X(x) = 0.2 U(x − 1) + 0.3 U(x − 2) + 0.5 U(x − 3)
3. f_Y(y) = 0.5 δ(y − 1) + 0.5 δ(y − 3);  F_Y(y) = 0.5 U(y − 1) + 0.5 U(y − 3)
4. P{0 ≤ X ≤ 1 ∩ 0 ≤ Y ≤ 3} = P(1, 1) = 0.2
5. P{0 ≤ X ≤ 2 ∩ 0 ≤ Y ≤ 2} = P(1, 1) + P(2, 1) = 0.5
6. P{0 ≤ X < 2 ∩ 0 ≤ Y < 2} = P(1, 1) = 0.2
[Figure: f_X(x) with masses 0.2, 0.3, 0.5 at x = 1, 2, 3 and f_Y(y) with masses 0.5, 0.5 at y = 1, 3; F_X(x) rises through 0.2, 0.5, 1.0 and F_Y(y) through 0.5, 1.0.]
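Summing a discrete joint PMF over one variable yields the marginal of the other, exactly as in parts 2 and 3 above. A minimal sketch:

```python
# Marginals from the joint PMF of Problem 13
joint = {(1, 1): 0.2, (2, 1): 0.3, (3, 3): 0.5}

fx, fy = {}, {}
for (x, y), p in joint.items():
    fx[x] = fx.get(x, 0.0) + p   # sum over y -> marginal of X
    fy[y] = fy.get(y, 0.0) + p   # sum over x -> marginal of Y

print(fx)  # {1: 0.2, 2: 0.3, 3: 0.5}
print(fy)  # {1: 0.5, 3: 0.5}
```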
Problem 14: The joint sample space for two random variables X and Y and the corresponding probabilities are shown in the table.
(x, y):       (1,1)  (2,2)  (3,3)  (4,4)
P(x_n, y_n):  0.05   0.35   0.45   0.15
Find and plot:
1. F_XY(x, y) and f_XY(x, y)
2. F_X(x) and f_X(x)
3. F_Y(y) and f_Y(y)
4. Find P{0.5 ≤ X ≤ 1.5}
5. Find P{X ≤ 2 ∩ Y ≤ 2}
6. Find P{1 < X ≤ 2, Y ≤ 2}
Solution:
F_XY(x, y) = Σ_{n} Σ_{m} P(x_n, y_m) U(x − x_n) U(y − y_m)
= P(1,1) U(x − 1) U(y − 1) + P(2,2) U(x − 2) U(y − 2) + P(3,3) U(x − 3) U(y − 3) + P(4,4) U(x − 4) U(y − 4)
F_XY(x, y) = 0.05 U(x − 1) U(y − 1) + 0.35 U(x − 2) U(y − 2) + 0.45 U(x − 3) U(y − 3) + 0.15 U(x − 4) U(y − 4)
∴ f_XY(x, y) = ∂²F_XY(x, y)/∂x∂y = 0.05 δ(x − 1) δ(y − 1) + 0.35 δ(x − 2) δ(y − 2) + 0.45 δ(x − 3) δ(y − 3) + 0.15 δ(x − 4) δ(y − 4)
[Figure: joint point masses 0.05, 0.35, 0.45, 0.15 along the diagonal (1,1)–(4,4), and the joint CDF staircase rising through 0.05, 0.4, 0.85, 1.]
4. P{0.5 ≤ X ≤ 1.5} = P(1, 1) = 0.05
5. P{X ≤ 2 ∩ Y ≤ 2} = P(1, 1) + P(2, 2) = 0.40
6. P{1 < X ≤ 2, Y ≤ 2} = P(2, 2) = 0.35
[Figure: f_X(x) = f_Y(y) with masses 0.05, 0.35, 0.45, 0.15 at 1, 2, 3, 4; F_X = F_Y rises through 0.05, 0.4, 0.85, 1.0.]
Problem 15: The joint sample space for two random variables X and Y and the corresponding probabilities are shown in the table.
       X:  1     2     3
Y = 1:     0.2   0.1   0.2
Y = 2:     0.15  0.2   0.15
Find and plot:
1. Joint and marginal distribution functions
2. Joint and marginal density functions
Solution:
F_XY(x, y) = Σ_{n} Σ_{m} P(x_n, y_m) U(x − x_n) U(y − y_m)
= 0.2 U(x − 1) U(y − 1) + 0.15 U(x − 1) U(y − 2)
+ 0.1 U(x − 2) U(y − 1) + 0.2 U(x − 2) U(y − 2)
+ 0.2 U(x − 3) U(y − 1) + 0.15 U(x − 3) U(y − 2)
f_XY(x, y) = ∂²F_XY(x, y)/∂x∂y
= 0.2 δ(x − 1) δ(y − 1) + 0.15 δ(x − 1) δ(y − 2)
+ 0.1 δ(x − 2) δ(y − 1) + 0.2 δ(x − 2) δ(y − 2)
+ 0.2 δ(x − 3) δ(y − 1) + 0.15 δ(x − 3) δ(y − 2)
[Figure: the six joint point masses plotted over the (x, y) grid.]
Marginals:
f_X(x) = 0.35 δ(x − 1) + 0.3 δ(x − 2) + 0.35 δ(x − 3);  F_X(x) = 0.35 U(x − 1) + 0.3 U(x − 2) + 0.35 U(x − 3)
Summing each row for Y: (0.2 + 0.1 + 0.2) U(y − 1) + (0.15 + 0.2 + 0.15) U(y − 2), so
F_Y(y) = 0.5 U(y − 1) + 0.5 U(y − 2);  f_Y(y) = 0.5 δ(y − 1) + 0.5 δ(y − 2)
[Figure: f_X(x) with masses 0.35, 0.3, 0.35 at x = 1, 2, 3 and f_Y(y) with masses 0.5, 0.5 at y = 1, 2; F_X rises through 0.35, 0.65, 1.0 and F_Y through 0.5, 1.0.]
4. P{X < 1, Y ≤ 2} = 0, since there is no probability mass at x < 1.
5.6 Conditional Distribution and Density for Discrete r.v's
Let X and Y be discrete random variables with values x_i, i = 1, 2, 3, ..., N and y_j, j = 1, 2, 3, ..., M, and probabilities P(x_i) and P(y_j) respectively. The probability of the joint occurrence of x_i and y_j is denoted by P(x_i, y_j).
f_X(x) = Σ_{i=1}^{N} P(x_i) δ(x − x_i);   f_Y(y) = Σ_{j=1}^{M} P(y_j) δ(y − y_j)
f_XY(x, y) = Σ_{i=1}^{N} Σ_{j=1}^{M} P(x_i, y_j) δ(x − x_i) δ(y − y_j)
Conditional distribution and density functions:
F_X(x|y = y_k) = [Σ_{i=1}^{N} P(x_i, y_k) U(x − x_i)]/P(y_k)
f_X(x|y = y_k) = [Σ_{i=1}^{N} P(x_i, y_k) δ(x − x_i)]/P(y_k)
F_Y(y|x = x_k) = [Σ_{j=1}^{M} P(x_k, y_j) U(y − y_j)]/P(x_k)
f_Y(y|x = x_k) = [Σ_{j=1}^{M} P(x_k, y_j) δ(y − y_j)]/P(x_k)
Problem 17: Let P(x_1, y_1) = 2/15, P(x_2, y_1) = 3/15, P(x_2, y_2) = 1/15, P(x_1, y_3) = 4/15, P(x_2, y_3) = 5/15. Find f_X(x|Y = y_3).
Solution:
f_X(x|y = y_3) = [Σ_{i} P(x_i, y_3) δ(x − x_i)]/P(y_3)
P(y_3) = P(x_1, y_3) + P(x_2, y_3) = 4/15 + 5/15 = 9/15
f_X(x|y = y_3) = [(4/15) δ(x − x_1) + (5/15) δ(x − x_2)]/(9/15) = (4/9) δ(x − x_1) + (5/9) δ(x − x_2)
[Figure: the joint PMF f_XY(x, y) and the conditional PMF f_X(x|Y = y_3) with masses 4/9 and 5/9.]
Problem 18: The following table represents the joint distribution of the discrete r.v's X and Y.
       X:  1      2     3
Y = 1:     1/12   1/6   0
Y = 2:     0      1/9   1/5
Y = 3:     1/18   1/4   2/15
Find:
1. F_X(x|y = 2)
2. F_Y(y|x = 3)
3. P{X ≤ 2, Y = 3}
4. P{Y ≤ 2}
5. P{X + Y < 4}
6. P{X ≤ 2, Y < 3}/P{X < 3}
Solution:
1. F_X(x|y = 2) = P{X ≤ x ∩ Y = 2}/P{Y = 2}
= [Σ_{i=1}^{3} P(x_i, y = 2) U(x − x_i)]/P(y = 2)
= [P(1,2) U(x − 1) + P(2,2) U(x − 2) + P(3,2) U(x − 3)]/[P(1,2) + P(2,2) + P(3,2)]
= [0 + (1/9) U(x − 2) + (1/5) U(x − 3)]/(1/9 + 1/5)
= (5/14) U(x − 2) + (9/14) U(x − 3)

2. F_Y(y|x = 3) = [Σ_{j=1}^{3} P(x = 3, y_j) U(y − y_j)]/P(x = 3)
= [P(3,1) U(y − 1) + P(3,2) U(y − 2) + P(3,3) U(y − 3)]/[P(3,1) + P(3,2) + P(3,3)]
= [0 + (1/5) U(y − 2) + (2/15) U(y − 3)]/(1/5 + 2/15)
= (3/5) U(y − 2) + (2/5) U(y − 3)

3. P{X ≤ 2, Y = 3} = P(1,3) + P(2,3) = 1/18 + 1/4 = 11/36

4. P{Y ≤ 2} = 1/12 + 1/6 + 0 + 0 + 1/9 + 1/5 = 101/180

5. P{X + Y < 4} = P(1,1) + P(1,2) + P(2,1) = 1/12 + 0 + 1/6 = 1/4

6. P{X ≤ 2, Y < 3} = 1/12 + 1/6 + 0 + 1/9 = 13/36;  P{X < 3} = 5/36 + 19/36 = 2/3
P{X ≤ 2, Y < 3}/P{X < 3} = (13/36)/(2/3) = 13/24
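Conditional PMFs like the one in part 1 amount to slicing the joint table and renormalizing. A sketch using exact rationals so the fractions 5/14 and 9/14 come out exactly:

```python
# Conditional PMF f_X(x | Y = 2) for the table of Problem 18
from fractions import Fraction as F

joint = {(1, 1): F(1, 12), (2, 1): F(1, 6), (3, 1): F(0),
         (1, 2): F(0),     (2, 2): F(1, 9), (3, 2): F(1, 5),
         (1, 3): F(1, 18), (2, 3): F(1, 4), (3, 3): F(2, 15)}

assert sum(joint.values()) == 1         # the table is a valid joint PMF

p_y2 = sum(p for (x, y), p in joint.items() if y == 2)
cond = {x: joint[(x, 2)] / p_y2 for x in (1, 2, 3)}
print(cond)  # {1: 0, 2: 5/14, 3: 9/14}
```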
5.7 Sum of two independent random variables
Let W be a random variable equal to the sum of two independent random variables X and Y:
W = X + Y
[Figure: the (x, y) plane with the line x + y = w; the event {X + Y ≤ w} is the half-plane below the line.]
F_W(w) = P{X + Y ≤ w} = ∫_{x=−∞}^{∞} ∫_{y=−∞}^{w−x} f_X(x) f_Y(y) dy dx   ∵ independent
Differentiating with respect to w, the inner integral contributes f_Y(w − x):
f_W(w) = d/dw F_W(w) = ∫_{x=−∞}^{∞} f_X(x) f_Y(w − x) dx
Interchanging the roles of X and Y:
f_W(w) = ∫_{y=−∞}^{∞} f_Y(y) f_X(w − y) dy
∴ f_W(w) = f_X(x) ∗ f_Y(y)
The density of the sum of two independent random variables is the convolution of their individual densities.
Problem 19: Find the density of the sum W = X + Y of two independent r.v's whose PDFs are
f_X(x) = (1/a)[U(x) − U(x − a)], x ≥ 0
f_Y(y) = (1/b)[U(y) − U(y − b)], y ≥ 0, where 0 < a < b.
[Figure: f_X(x) is a rectangular pulse of height 1/a on [0, a]; f_Y(y) is a rectangular pulse of height 1/b on [0, b].]
We know that the density function of the sum of two independent r.v's is the convolution of their individual density functions, i.e.,
f_W(w) = f_X(x) ∗ f_Y(y) = ∫_{x=−∞}^{∞} f_X(x) f_Y(w − x) dx

Case (i): at w = 0, the reflected pulse f_Y(−x) does not overlap f_X(x):
f_W(0) = ∫_{−∞}^{∞} f_X(x) f_Y(−x) dx = 0
Case (ii): at w = a, the overlap covers the whole support of f_X:
f_W(a) = ∫_{0}^{a} (1/a)(1/b) dx = (1/ab)[x]_{0}^{a} = a/(ab) = 1/b
Case (iii): at w = b, the pulse f_Y(w − x) still covers all of [0, a]:
f_W(b) = ∫_{0}^{a} (1/a)(1/b) dx = 1/b
Case (iv): at w = a + b, the pulses no longer overlap:
f_W(a + b) = ∫_{−∞}^{∞} f_X(x) f_Y((a + b) − x) dx = 0
[Figure: the reflected pulse f_Y(w − x) sliding across f_X(x) for cases (i)–(iv), and the resulting trapezoidal f_W(w).]
Between these points the overlap grows and shrinks linearly, giving the trapezoid
f_W(w) = w/(ab), 0 ≤ w ≤ a;  1/b, a ≤ w ≤ b;  (a + b − w)/(ab), b ≤ w ≤ a + b;  0, otherwise.
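The trapezoidal shape can be reproduced by discretizing both densities and convolving numerically; a sketch with a = 1, b = 2 (arbitrary choices satisfying 0 < a < b):

```python
# Density of W = X + Y for X ~ U[0, a], Y ~ U[0, b] by discrete convolution
import numpy as np

a, b, h = 1.0, 2.0, 1e-3
fx = np.full(int(a / h), 1 / a)       # rectangular pulse of height 1/a on [0, a)
fy = np.full(int(b / h), 1 / b)       # rectangular pulse of height 1/b on [0, b)

fw = np.convolve(fx, fy) * h          # grid approximation of the convolution integral
w = np.arange(fw.size) * h

# the flat middle section (a <= w <= b) should sit at height 1/b
mid = fw[(w > a + 0.1) & (w < b - 0.1)]
print(round(float(mid.mean()), 3))    # ≈ 0.5
```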
Problem 20: Find the density of Z = X + Y, where X and Y are independent with f_X(x) = x e^{−x} U(x) and f_Y(y) = U(y) − U(y − 1).
Solution:
f_Z(z) = ∫_{x=−∞}^{∞} f_X(x) f_Y(z − x) dx
= ∫_{−∞}^{∞} x e^{−x} U(x) [U(z − x) − U(z − x − 1)] dx
= ∫ x e^{−x} U(x) U(z − x) dx − ∫ x e^{−x} U(x) U(z − x − 1) dx  ≡ (1) − (2)
Consider integral (1), for z > 0:
∫_{0}^{z} x e^{−x} dx = [−x e^{−x} − e^{−x}]_{0}^{z}   (integration by parts: ∫uv = u∫v − ∫u′∫v)
= 1 − e^{−z}(1 + z)
Consider integral (2), for z > 1:
∫_{0}^{z−1} x e^{−x} dx = [−x e^{−x} − e^{−x}]_{0}^{z−1}
= 1 − (z − 1) e^{−(z−1)} − e^{−(z−1)} = 1 − z e^{−(z−1)}
For 0 ≤ z ≤ 1, integral (2) is zero, so f_Z(z) = 1 − e^{−z}(1 + z) there. For z ≥ 1:
f_Z(z) = (1) − (2)
= [1 − e^{−z}(1 + z)] − [1 − z e^{−(z−1)}]
= −e^{−z}(1 + z) + z e^{−z} · e
∴ f_Z(z) = e^{−z}[ze − 1 − z], z ≥ 1;  1 − e^{−z}(1 + z), 0 ≤ z ≤ 1;  0, z < 0.
Problem 21: X and Y are two independent Gaussian r.v's, and a r.v W is defined as W = X + Y. Find f_W(w).
We know the Gaussian density function
f_X(x) = (1/(σ√(2π))) e^{−(x−m)²/(2σ²)};  for m = 0, σ = 1 this is the normalized Gaussian (1/√(2π)) e^{−x²/2}.
Let X and Y be normalized Gaussian r.v's: σ_X² = σ_Y² = 1, m_X = m_Y = 0, so
f_X(x) = (1/√(2π)) e^{−x²/2};  f_Y(y) = (1/√(2π)) e^{−y²/2}
f_W(w) = f_X(x) ∗ f_Y(y) = ∫_{x=−∞}^{∞} f_X(x) f_Y(w − x) dx
= (1/2π) ∫_{−∞}^{∞} e^{−x²/2} e^{−(w−x)²/2} dx
= (1/2π) ∫_{−∞}^{∞} e^{−(2x² − 2wx + w²)/2} dx
Completing the square: (2x² − 2wx + w²)/2 = (1/2)(x√2 − w/√2)² + w²/4, so
= (1/2π) e^{−w²/4} ∫_{−∞}^{∞} e^{−(1/2)(x√2 − w/√2)²} dx
Let p = x√2 − w/√2 ⇒ dp = √2 dx ⇒ dx = dp/√2; if x = ±∞ then p = ±∞:
f_W(w) = (e^{−w²/4}/(2π)) (1/√2) ∫_{p=−∞}^{∞} e^{−p²/2} dp
The Gaussian integral evaluates to ∫_{−∞}^{∞} e^{−p²/2} dp = 2 ∫_{0}^{∞} e^{−p²/2} dp (even function); with p²/2 = z this becomes √2 Γ(1/2) = √2 · √π = √(2π).
Substituting:
f_W(w) = (e^{−w²/4}/(2π)) (1/√2) √(2π) = (1/(√2 √(2π))) e^{−w²/4}
∴ f_W(w) = (1/(2√π)) e^{−w²/4}
So W is Gaussian with zero mean and variance σ_W² = 2.
[Figure: f_X(x) and f_Y(y), each peaking at 1/√(2π); f_W(w), broader, peaking at 1/(2√π).]
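The result W ~ N(0, 2) can be checked by simulation; a quick sketch using the standard library (sample size and seed are arbitrary):

```python
# Sampling check: X, Y ~ N(0, 1) independent  =>  W = X + Y ~ N(0, 2)
import random, statistics

random.seed(0)
n = 200000
w = [random.gauss(0, 1) + random.gauss(0, 1) for _ in range(n)]
mean_w = statistics.fmean(w)
var_w = statistics.pvariance(w)
print(mean_w, var_w)  # mean near 0, variance near 2
```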
5.8 Central limit theorem
The central limit theorem says that the probability density function of the sum of a large number of random variables approaches a Gaussian density.
Proof: (1) Equal distributions (2) Unequal distributions (not discussed)
Let X_1, X_2, ..., X_N have the same distribution as X:
X̄_1 = X̄_2 = X̄_3 = ... = X̄_N = X̄ → same mean
σ²_{X_1} = σ²_{X_2} = σ²_{X_3} = ... = σ²_{X_N} = σ²_X → same variance
Let Z = (Y − Ȳ)/σ_Y; this normalized sum is used to find the PDF of the sum of the r.v's, where
Y = X_1 + X_2 + X_3 + ... + X_N
Ȳ = X̄_1 + X̄_2 + ... + X̄_N = N X̄
σ²_Y = σ²_{X_1} + σ²_{X_2} + ... + σ²_{X_N} = N σ²_X
Z = (Y − N X̄)/(σ_X √N)
The characteristic function of Z is
Φ_Z(w) = E[e^{jwZ}] = ∫_{z=−∞}^{∞} f_Z(z) e^{jwz} dz
= E[exp(jw Σ_n (X_n − X̄)/(√N σ_X))]
Taking logarithms on both sides, and using independence and identical distribution of the X_n:
ln Φ_Z(w) = ln {E[e^{jw(X−X̄)/(√N σ_X)}]}^N = N ln E[e^{jw(X−X̄)/(√N σ_X)}] → (1)
We know that e^x = 1 + x + x²/2! + x³/3! + ...
Consider
E[e^{jw(X−X̄)/(√N σ_X)}] = E[1 + jw(X − X̄)/(√N σ_X) + j²w²(X − X̄)²/(2N σ²_X) + ...]
= 1 + (jw/(√N σ_X)) E[X − X̄] − (w²/(2N σ²_X)) E[(X − X̄)²] + ...
= 1 + 0 − (w²/(2N σ²_X)) σ²_X + ...
= 1 − w²/(2N) + ...   (higher-order terms eliminated)
Using ln(1 − z) = −(z + z²/2 + z³/3 + ...):
ln E[e^{jw(X−X̄)/(√N σ_X)}] ≈ ln(1 − w²/(2N)) ≈ −(w²/(2N) + w⁴/(8N²) + ...)
From equation (1):
ln Φ_Z(w) = N [−w²/(2N) − w⁴/(8N²) − ...] = −w²/2 − w⁴/(8N) − ... → −w²/2   ∵ N is large, Lim N → ∞
ln Φ_Z(w) = −w²/2
∴ Φ_Z(w) = e^{−w²/2}
which is the characteristic function of a zero-mean, unit-variance Gaussian r.v.
Application of the central limit theorem: The bell-shaped Gaussian r.v helps us in many situations; the central limit theorem makes it possible to perform quick, accurate calculations that would otherwise be extremely complex and time consuming. In these calculations, the r.v of interest is a sum of other r.v's, and we calculate the probabilities of events by referring to the Gaussian r.v.
Example: A packet of 1024 bits is transmitted, each bit independently in error with probability 10^{−2}. Let X_i = 1 if bit i is in error and 0 otherwise. The number of errors in the packet is V = Σ_{i=1}^{1024} X_i.
Find P(V > 30), i.e., the probability of more than 30 errors.
Exactly, P(V > 30) = Σ_{m=31}^{1024} 1024C_m (10^{−2})^m (1 − 10^{−2})^{1024−m}   ∵ binomial: NC_k p^k q^{N−k}
This calculation is time consuming, so we apply the central limit theorem and solve the problem approximately.
Based on the central limit theorem, V = Σ X_i is approximately Gaussian with
mean Np = V̄ = 1024 × 10^{−2} = 10.24
variance Npq = σ²_V = 1024 × 10^{−2} × 0.99 = 10.1376
P(X > x) = Q((x − X̄)/σ_X)
∴ P(V > 30) ≈ Q((30 − 10.24)/√10.1376) = Q(6.206) ≈ 2.7 × 10^{−10}
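The CLT approximation and the exact binomial tail can both be computed directly; the sketch below uses `math.erfc` for the Q-function. Note that in such a far tail (30 is about six standard deviations above the mean) the Gaussian approximation underestimates the true binomial probability by a couple of orders of magnitude, even though both numbers are tiny; the CLT is most accurate near the center of the distribution:

```python
# CLT approximation of P(V > 30) vs the exact binomial tail, N = 1024, p = 0.01
import math

N, p = 1024, 0.01
mean, var = N * p, N * p * (1 - p)          # 10.24 and 10.1376

def Q(z):
    # Gaussian tail probability via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2))

approx = Q((30 - mean) / math.sqrt(var))
exact = sum(math.comb(N, k) * p ** k * (1 - p) ** (N - k) for k in range(31, N + 1))
print(approx, exact)   # both tiny; the exact tail is larger than the CLT estimate
```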
Sum of several random variables:
Let there be N random variables X_n, n = 1, 2, 3, ..., N, with PDFs f_{X_n}(x_n). The sum Y_N = X_1 + X_2 + X_3 + ... + X_N of independent r.v's has density
f_{Y_N}(y) = f_{X_1}(x_1) ∗ f_{X_2}(x_2) ∗ f_{X_3}(x_3) ∗ ... ∗ f_{X_N}(x_N)
Problem: A random sample of size 100 is taken from a population whose mean is 60 and variance is 400. Using the central limit theorem, find the probability that the mean of the sample does not differ from 60 by more than 4.
Problem: The lifetime of a certain brand of electric bulb may be considered a random variable with mean 1200 h and standard deviation 250 h. Using the central limit theorem, find the probability that the average lifetime of 60 bulbs exceeds 1250 h.
Problem: If X_1, X_2, ..., X_n are uniform variates with mean 2.5 and variance 3/4, use the central limit theorem to estimate P(108 < S_n < 126), where S_n = X_1 + X_2 + ... + X_n and n = 48.
Problem: If X_1, X_2, ..., X_n are Poisson variates with parameter λ = 2, use the central limit theorem to estimate P(120 < S_n < 160), where S_n = X_1 + X_2 + ... + X_n and n = 75.
Problem: Describe the Binomial B(n, p) distribution and obtain its moment generating function. Hence compute (1) the first four moments and (2) the recursion relation for the central moments.
CHAPTER 6

Operations on Multiple Random Variables

Let X and Y be two random variables. The joint moment is defined as
m_{nk} = E[XⁿYᵏ] = ∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} xⁿ yᵏ f_XY(x, y) dy dx

1. If k = 0 we get the nth moment of X alone:

m_{n0} = E[Xⁿ] = ∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} xⁿ f_XY(x, y) dy dx = ∫_{x=−∞}^{∞} xⁿ f_X(x) dx

2. If n = 0 we get the kth moment of Y alone:

m_{0k} = E[Yᵏ] = ∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} yᵏ f_XY(x, y) dy dx = ∫_{y=−∞}^{∞} yᵏ f_Y(y) dy
3. If n = 0 and k = 0 then

m_{00} = E[X⁰Y⁰] = ∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} f_XY(x, y) dy dx = 1

4. If n = 1 and k = 0 then m_{10} is the expectation of r.v. X:

m_{10} = E[X¹Y⁰] = ∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} x f_XY(x, y) dy dx = E[X]
• The moment m_{01} is the expectation of r.v. Y:

m_{01} = E[X⁰Y¹] = ∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} y f_XY(x, y) dy dx = E[Y]

• The second-order moment m_{11} = E[XY] is called the correlation of X and Y, denoted by R_XY:

m_{11} = E[XY] = ∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} xy f_XY(x, y) dy dx = R_XY

NOTE:
• If X and Y are independent, f_XY(x, y) = f_X(x) f_Y(y), and the correlation factors:

m_{11} = R_XY = E[XY] = ∫_{x=−∞}^{∞} x f_X(x) dx · ∫_{y=−∞}^{∞} y f_Y(y) dy = E[X] · E[Y]
The joint central moment μ_{nk} of random variables X and Y can be written as

μ_{nk} = E[(X − X̄)ⁿ (Y − Ȳ)ᵏ] = ∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} (x − X̄)ⁿ (y − Ȳ)ᵏ f_XY(x, y) dy dx
1. If n = 0 and k = 0 then

μ_{00} = E[(X − X̄)⁰(Y − Ȳ)⁰] = E[1] = ∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} f_XY(x, y) dy dx = 1
2. If n = 0 and k ≠ 0 then

μ_{0k} = E[(Y − Ȳ)ᵏ] = ∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} (x − X̄)⁰ (y − Ȳ)ᵏ f_XY(x, y) dy dx = ∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} (y − Ȳ)ᵏ f_XY(x, y) dy dx

Similarly, if n ≠ 0 and k = 0 then

μ_{n0} = E[(X − X̄)ⁿ] = ∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} (x − X̄)ⁿ (y − Ȳ)⁰ f_XY(x, y) dy dx = ∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} (x − X̄)ⁿ f_XY(x, y) dy dx
3. If n = 0 and k = 1 (or n = 1 and k = 0), the first central moments vanish:

μ_{01} = E[(Y − Ȳ)] = ∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} (y − Ȳ) f_XY(x, y) dy dx = 0

• If n = 1 and k = 1 then we get the co-variance of X and Y:

μ_{11} = E[(X − X̄)(Y − Ȳ)] = C_XY = ∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} (x − X̄)(y − Ȳ) f_XY(x, y) dy dx = R_XY − X̄ Ȳ

• If n = 0 and k = 2 then we get the variance of Y:

μ_{02} = E[(Y − Ȳ)²] = ∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} (y − Ȳ)² f_XY(x, y) dy dx = ∫_{y=−∞}^{∞} (y − Ȳ)² f_Y(y) dy = σ_Y²
• If n = 2 and k = 0 then we get the variance of X:

μ_{20} = E[(X − X̄)²] = ∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} (x − X̄)² f_XY(x, y) dy dx = ∫_{x=−∞}^{∞} (x − X̄)² f_X(x) dx = σ_X²

NOTE: The terminology, while widely used, is somewhat confusing, since orthogonal means zero correlation (R_XY = 0) while uncorrelated means zero co-variance (C_XY = 0).
Problem 1: Find the statistical parameters of the r.v.s X and Y whose joint PDF is

f_XY(x, y) = xy/9 ;  0 ≤ x ≤ 2, 0 ≤ y ≤ 3  (and 0 otherwise)

The joint moment is m_{nk} = E[XⁿYᵏ] = ∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} xⁿ yᵏ f_XY(x, y) dy dx.
1. If n = 1 and k = 0 then the mean value of r.v. X is

m_{10} = E[X] = ∫_{x=0}^{2} ∫_{y=0}^{3} x · (xy/9) dy dx
       = ∫_{x=0}^{2} (x²/9) [y²/2]₀³ dx
       = ∫_{x=0}^{2} (x²/9)(9/2) dx
       = (1/2) [x³/3]₀²
       = (1/2) × (8/3) = 4/3

∴ E[X] = m_{10} = 4/3
2. If n = 0 and k = 1 then the mean value of r.v. Y is

m_{01} = E[X⁰Y¹] = E[Y] = ∫_{x=0}^{2} ∫_{y=0}^{3} y · (xy/9) dy dx
       = ∫_{x=0}^{2} (x/9) [y³/3]₀³ dx
       = ∫_{x=0}^{2} (x/9)(27/3) dx
       = [x²/2]₀² = 4/2 = 2

∴ E[Y] = m_{01} = 2
3. E[X] E[Y] = (4/3) × 2 = 8/3

4. If n = 1 and k = 1 then the correlation is

m_{11} = E[X¹Y¹] = R_XY = ∫_{x=0}^{2} ∫_{y=0}^{3} xy · (xy/9) dy dx
       = ∫_{x=0}^{2} (x²/9) [y³/3]₀³ dx
       = ∫_{x=0}^{2} (x²/9)(27/3) dx
       = [x³/3]₀² = 8/3

∴ m_{11} = R_XY = 8/3
? If m_{11} = R_XY = E[XY] = E[X] E[Y], then X and Y are uncorrelated (whether they are also independent is checked by factoring the PDF in step 9). Here R_XY = 8/3 = E[X] E[Y] is satisfied.

5. If n = 2 and k = 0 then the mean-square value of X is

m_{20} = E[X²] = ∫_{x=0}^{2} ∫_{y=0}^{3} x² · (xy/9) dy dx = ∫_{x=0}^{2} (x³/9)(9/2) dx = [x⁴/8]₀² = 2

∴ m_{20} = E[X²] = 2

6. If n = 0 and k = 2 then the mean-square value of Y is

m_{02} = E[Y²] = ∫_{x=0}^{2} ∫_{y=0}^{3} y² · (xy/9) dy dx = ∫_{x=0}^{2} (x/9) [y⁴/4]₀³ dx = (9/4) [x²/2]₀² = (9/4) × 2 = 9/2

∴ m_{02} = E[Y²] = 9/2
7. σ_X² = m_{20} − m_{10}² = E[X²] − (E[X])² = 2 − (4/3)² = (18 − 16)/9 = 2/9

8. σ_Y² = m_{02} − m_{01}² = E[Y²] − (E[Y])² = 9/2 − (2)² = (9 − 8)/2 = 1/2
9. Check for independence via the marginal PDFs:

f_X(x) = ∫_{y=−∞}^{∞} f_XY(x, y) dy = ∫_{y=0}^{3} (xy/9) dy = (x/9) [y²/2]₀³ = x/2

f_X(x) = x/2 for 0 ≤ x ≤ 2 ; 0 otherwise

f_Y(y) = ∫_{x=−∞}^{∞} f_XY(x, y) dx = ∫_{x=0}^{2} (xy/9) dx = (y/9) [x²/2]₀² = (y/9) × 2 = 2y/9

f_Y(y) = 2y/9 for 0 ≤ y ≤ 3 ; 0 otherwise

f_X(x) · f_Y(y) = (x/2) · (2y/9) = xy/9 = f_XY(x, y)

Hence f_XY(x, y) = f_X(x) f_Y(y), so X and Y are independent.
10. C_XY = co-variance = μ_{11} = R_XY − X̄ Ȳ = 8/3 − (4/3)(2) = 0

∴ C_XY = 0, so X and Y are uncorrelated (as expected, since they are independent).

11. Normalized co-variance (correlation coefficient): ρ = μ_{11}/√(μ_{20} μ_{02}) = C_XY/(σ_X σ_Y) = 0
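The moments of this worked example can be re-derived numerically. The sketch below uses a plain midpoint Riemann sum over the support of f_XY(x, y) = xy/9 (grid resolution chosen arbitrarily):

```python
# Sketch: numeric check of the joint-moment calculations for
# f_XY(x, y) = x*y/9 on 0 <= x <= 2, 0 <= y <= 3.
def f(x, y):
    return x * y / 9.0

def joint_moment(n, k, steps=400):
    """m_nk = E[X^n Y^k] via a midpoint double sum over the support."""
    hx, hy = 2.0 / steps, 3.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * hx
        for j in range(steps):
            y = (j + 0.5) * hy
            total += x**n * y**k * f(x, y) * hx * hy
    return total

m10 = joint_moment(1, 0)   # E[X], should be close to 4/3
m01 = joint_moment(0, 1)   # E[Y], should be close to 2
m11 = joint_moment(1, 1)   # R_XY, should be close to 8/3
cov = m11 - m10 * m01      # C_XY, should be close to 0
print(m10, m01, m11, cov)
```

The vanishing covariance confirms that the correlation factors as E[X]E[Y], consistent with the independence shown by the marginal PDFs.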
Problem: A r.v. X has mean E[X] = X̄ = m₁ = m_{10} = 3 and variance σ_X² = E[(X − X̄)²] = μ_{20} = 2. Find E[X²].

Solution: We know that

σ_X² = m₂ − m₁²
⇒ 2 = m₂ − (3)²
⇒ m₂ = 2 + 9 = 11

∴ m₂ = E[X²] = m_{20} = 11
Problem: Three independent random variables X₁, X₂, X₃ have mean values X̄₁ = 3, X̄₂ = 6, X̄₃ = −2. Find the expected value of:

1. g(X₁, X₂, X₃) = X₁ + 3X₂ + 4X₃
2. g(X₁, X₂, X₃) = X₁X₂X₃
3. g(X₁, X₂, X₃) = −2X₁X₂ − 3X₁X₃ + 4X₂X₃
4. g(X₁, X₂, X₃) = X₁ + X₂ + X₃

Solution: Using linearity of expectation and, for the products, the independence of the Xᵢ:

1. E[g(X₁, X₂, X₃)] = E[X₁] + 3E[X₂] + 4E[X₃] = 3 + 3 × 6 + 4 × (−2) = 13

2. E[g(X₁, X₂, X₃)] = E[X₁] E[X₂] E[X₃] = 3 × 6 × (−2) = −36

3. E[g(X₁, X₂, X₃)] = −2E[X₁]E[X₂] − 3E[X₁]E[X₃] + 4E[X₂]E[X₃]
   = −2 × 3 × 6 − 3 × 3 × (−2) + 4 × 6 × (−2) = −36 + 18 − 48 = −66

4. E[g(X₁, X₂, X₃)] = E[X₁] + E[X₂] + E[X₃] = 3 + 6 − 2 = 7
Property: C_XY = E[XY] − X̄ Ȳ

Proof.
C_XY = E[(X − X̄)(Y − Ȳ)]
     = E[XY − X̄Y − XȲ + X̄ Ȳ]
     = E[XY] − X̄ E[Y] − Ȳ E[X] + X̄ Ȳ
     = E[XY] − X̄ Ȳ − X̄ Ȳ + X̄ Ȳ
     = E[XY] − X̄ Ȳ

Property: If X and Y are independent then C_XY = 0.

Proof.
C_XY = E[XY] − X̄ Ȳ
     = E[XY] − E[X] E[Y]
     = E[X] E[Y] − E[X] E[Y]    ∵ X and Y are independent, so E[XY] = E[X]E[Y]
     = 0
Property: Var(X ± Y) = Var(X) + Var(Y) ± 2C_XY.

(i) With Var(X) = σ_X² = E[X²] − (E[X])²,

Var(X + Y) = E[(X + Y)²] − (E[X + Y])²                ∵ (X + Y)‾ = E[X + Y]
           = E[X² + Y² + 2XY] − (X̄ + Ȳ)²              ∵ E[X + Y] = E[X] + E[Y]
           = E[X² + Y² + 2XY] − (E[X] + E[Y])²
           = E[X²] + E[Y²] + 2E[XY] − [(E[X])² + (E[Y])² + 2E[X]E[Y]]
           = [E[X²] − (E[X])²] + [E[Y²] − (E[Y])²] + 2[E[XY] − E[X]E[Y]]
           = Var(X) + Var(Y) + 2C_XY

(ii) Similarly, with Var(Y) = σ_Y² = E[Y²] − (E[Y])²,

Var(X − Y) = E[(X − Y)²] − (E[X − Y])²                ∵ E[X − Y] = E[X] − E[Y]
           = E[X² + Y² − 2XY] − [(E[X])² + (E[Y])² − 2E[X]E[Y]]
           = [E[X²] − (E[X])²] + [E[Y²] − (E[Y])²] − 2[E[XY] − E[X]E[Y]]
           = Var(X) + Var(Y) − 2C_XY

Note also that for constants a and b, Cov(aX, bY) = ab Cov(X, Y).
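The Var(X + Y) identity derived above holds exactly for sample (population-style) moments as well, which gives a quick numerical check; the data below is arbitrary correlated test data, not from the notes:

```python
# Sketch: check Var(X + Y) = Var(X) + Var(Y) + 2*Cov(X, Y) on sample data.
import random

random.seed(1)
n = 10_000
xs = [random.gauss(0, 2) for _ in range(n)]
ys = [x * 0.3 + random.gauss(1, 1) for x in xs]   # correlated with xs

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((a - m) ** 2 for a in v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

lhs = var([x + y for x, y in zip(xs, ys)])
rhs = var(xs) + var(ys) + 2 * cov(xs, ys)
print(lhs, rhs)
```

Because the relation is an algebraic identity of the moments, the two sides agree to floating-point precision, not merely approximately.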
6.3.1 Theorems

Theorem 1 (Mean of a weighted sum): Let

Y = α₁X₁ + α₂X₂ + α₃X₃ + . . . + α_N X_N = Σᵢ₌₁ᴺ αᵢXᵢ, where the αᵢ are constants. Now

E[Y] = E[α₁X₁ + α₂X₂ + α₃X₃ + . . . + α_N X_N]
     = E[α₁X₁] + E[α₂X₂] + E[α₃X₃] + . . . + E[α_N X_N]
     = α₁E[X₁] + α₂E[X₂] + α₃E[X₃] + . . . + α_N E[X_N]    ∵ E[kX] = kE[X]

∴ E[Σᵢ₌₁ᴺ αᵢXᵢ] = Σᵢ₌₁ᴺ αᵢ X̄ᵢ

Theorem 2 (Variance of a weighted sum of uncorrelated r.v.s): With Var(X) = σ_X² = E[(X − X̄)²],

Var(Σᵢ₌₁ᴺ αᵢXᵢ) = E[(Σᵢ₌₁ᴺ αᵢXᵢ − Σᵢ₌₁ᴺ αᵢX̄ᵢ)²]
               = E[(Σᵢ₌₁ᴺ αᵢ(Xᵢ − X̄ᵢ))²]
               = Σᵢ₌₁ᴺ αᵢ² E[(Xᵢ − X̄ᵢ)²]    ∵ the cross terms E[(Xᵢ − X̄ᵢ)(Xⱼ − X̄ⱼ)], i ≠ j, vanish when the Xᵢ are uncorrelated
               = Σᵢ₌₁ᴺ αᵢ² Var(Xᵢ)

∴ Var(Σᵢ₌₁ᴺ αᵢXᵢ) = Σᵢ₌₁ᴺ αᵢ² Var(Xᵢ)
Ans:
f_X(x) = 3/2 − x        m_{20} = 1/4            C_XY = −1/144
f_Y(y) = 3/2 − y        m_{02} = 1/4            μ_{20} = 11/144
E[X] = 5/12             m_{11} = R_XY = 1/6     μ_{02} = 11/144
E[Y] = 5/12
Ans:
C = 21 ; fX (x) = 2x
7
− 3
14
E[X 2 ] = m20 = 48
21
CXY = 32
29
24+4 6 20
fY (y) = 14
E[Y 2 ] = m02 = 7
µ20 = 7
8 51
E[X] = 7
m11 = RXY = 2 µ02 = 14
4
E[Y ] = 7
Problem 4: Find the joint PDF and all statistical parameters of the r.v.s X and Y whose joint CDF is

F_XY(x, y) = 0.1U(x + 1)U(y) + 0.2U(x)U(y) + 0.1U(x)U(y − 2)
           + 0.3U(x − 1)U(y + 2) + 0.2U(x − 1)U(y − 1) + 0.1U(x − 1)U(y − 3)

[Figure: bar plots of the marginal PMFs — f_X(x) has masses 0.1 at x = −1, 0.3 at x = 0 and 0.6 at x = 1; f_Y(y) has masses 0.3 at y = −2, 0.3 at y = 0, 0.2 at y = 1, 0.1 at y = 2 and 0.1 at y = 3.]
m_{10} = E[X] = Σᵢ xᵢ f_X(xᵢ) = (−1 × 0.1) + (0 × 0.3) + (1 × 0.6) = 0.5

m_{20} = E[X²] = Σᵢ xᵢ² f_X(xᵢ) = ((−1)² × 0.1) + (0² × 0.3) + (1² × 0.6) = 0.7

m_{01} = E[Y] = Σⱼ yⱼ f_Y(yⱼ) = (−2 × 0.3) + (0 × 0.3) + (1 × 0.2) + (2 × 0.1) + (3 × 0.1) = 0.1

m_{02} = E[Y²] = Σⱼ yⱼ² f_Y(yⱼ)
       = ((−2)² × 0.3) + (0² × 0.3) + (1² × 0.2) + (2² × 0.1) + (3² × 0.1)
       = 1.2 + 0.2 + 0.4 + 0.9 = 2.7
σ_X² = m₂ − m₁² = 0.7 − 0.5² = 0.45 ⇒ σ_X = 0.6708

σ_Y² = m₂ − m₁² = 2.7 − 0.1² = 2.69 ⇒ σ_Y = 1.64
m_{11} = E[XY] = Σᵢ Σⱼ xᵢ yⱼ f_XY(xᵢ, yⱼ) = R_XY

R_XY = Σᵢ₌₋₁¹ Σⱼ₌₋₂³ xᵢ yⱼ f_XY(xᵢ, yⱼ)
     = (1)(−2)(0.3) + (1)(1)(0.2) + (1)(3)(0.1) = −0.6 + 0.2 + 0.3 = −0.1

C_XY = R_XY − X̄ Ȳ = −0.1 − (0.5 × 0.1) = −0.15

ρ = C_XY/(σ_X σ_Y) = −0.15/(0.6708 × 1.64) = −0.1363
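Writing the joint distribution as point masses (x, y) → P, read off from the step increments of the joint CDF in Problem 4, every statistic above can be computed in a few lines:

```python
# Sketch: discrete joint statistics for Problem 4, from the point masses
# implied by the joint CDF.
from math import sqrt

pmf = {(-1, 0): 0.1, (0, 0): 0.2, (0, 2): 0.1,
       (1, -2): 0.3, (1, 1): 0.2, (1, 3): 0.1}

EX  = sum(x * p for (x, y), p in pmf.items())
EY  = sum(y * p for (x, y), p in pmf.items())
EX2 = sum(x * x * p for (x, y), p in pmf.items())
EY2 = sum(y * y * p for (x, y), p in pmf.items())
RXY = sum(x * y * p for (x, y), p in pmf.items())   # correlation m_11

var_x, var_y = EX2 - EX**2, EY2 - EY**2
cov = RXY - EX * EY
rho = cov / sqrt(var_x * var_y)
print(EX, EY, RXY, cov, round(rho, 4))
```

The printed values reproduce the hand calculation: E[X] = 0.5, E[Y] = 0.1, R_XY = −0.1, C_XY = −0.15 and ρ ≈ −0.136.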
Let X and Y be two random variables with joint PDF f_XY(x, y). The joint characteristic function can be written as

Φ_XY(ω₁, ω₂) = E[e^{jω₁X + jω₂Y}] = ∫_{x=−∞}^{∞} ∫_{y=−∞}^{∞} f_XY(x, y) e^{jω₁x + jω₂y} dy dx

The joint moments are obtained from it by differentiation:

m_{nk} = (−j)^{n+k} ∂^{n+k}Φ_XY(ω₁, ω₂)/(∂ω₁ⁿ ∂ω₂ᵏ) |_{ω₁=ω₂=0}
3. Φ_XY(ω₁, ω₂) = Φ_X(ω₁) Φ_Y(ω₂) if X and Y are independent.

Problem 5: Find all statistical parameters of the r.v.s X and Y whose joint characteristic function is Φ_XY(ω₁, ω₂) = e^{−2ω₁² − 8ω₂²}.

Solution: The joint moment is

m_{nk} = (−j)^{n+k} ∂^{n+k}Φ_XY(ω₁, ω₂)/(∂ω₁ⁿ ∂ω₂ᵏ) |_{ω₁=ω₂=0}

1) When n = 1, k = 0:

m_{10} = (−j) ∂/∂ω₁ [e^{−2ω₁² − 8ω₂²}] |_{ω₁=ω₂=0}
       = (−j) e^{−8ω₂²} e^{−2ω₁²} (−4ω₁) |_{ω₁=ω₂=0}
       = (−j) e⁰ · 0 = 0

∴ m_{10} = E[X] = X̄ = 0
2) When n = 0, k = 1:

m_{01} = (−j) ∂/∂ω₂ [e^{−2ω₁² − 8ω₂²}] |_{ω₁=ω₂=0}
       = (−j) e^{−2ω₁²} e^{−8ω₂²} (−16ω₂) |_{ω₁=ω₂=0}
       = (−j) e⁰ · 0 = 0

∴ m_{01} = E[Y] = Ȳ = 0
3) When n = 0, k = 2:

m_{02} = (−j)² ∂²/∂ω₂² [e^{−2ω₁² − 8ω₂²}] |_{ω₁=ω₂=0}
       = (−1) e^{−2ω₁²} ∂/∂ω₂ [e^{−8ω₂²} (−16ω₂)] |_{ω₁=ω₂=0}    ∵ d(uv) = u dv + v du
       = 16 e^{−2ω₁²} [e^{−8ω₂²}(1) + ω₂ e^{−8ω₂²}(−16ω₂)] |_{ω₁=ω₂=0}
       = 16 e⁰ [e⁰ + 0] = 16

∴ m_{02} = E[Y²] = 16
4) When n = 2, k = 0:

m_{20} = (−j)² ∂²/∂ω₁² [e^{−2ω₁² − 8ω₂²}] |_{ω₁=ω₂=0}
       = (−1) e^{−8ω₂²} ∂/∂ω₁ [e^{−2ω₁²} (−4ω₁)] |_{ω₁=ω₂=0}    ∵ d(uv) = u dv + v du
       = 4 e^{−8ω₂²} [e^{−2ω₁²}(1) + ω₁ e^{−2ω₁²}(−4ω₁)] |_{ω₁=ω₂=0}
       = 4 e⁰ [e⁰ + 0] = 4

∴ m_{20} = E[X²] = 4
5) When n = 1, k = 1:

m_{11} = R_XY = (−j)² ∂²/(∂ω₁∂ω₂) [e^{−2ω₁² − 8ω₂²}] |_{ω₁=ω₂=0}
       = (−1) [e^{−2ω₁²}(−4ω₁)] [e^{−8ω₂²}(−16ω₂)] |_{ω₁=ω₂=0}
       = (−1)(0)(0) = 0

∴ m_{11} = R_XY = 0; so X and Y are orthogonal.

6) Var(X) = σ_X² = E[X²] − (X̄)² = 4 − 0 = 4, so σ_X = 2

7) Var(Y) = σ_Y² = E[Y²] − (Ȳ)² = 16 − 0 = 16, so σ_Y = 4

8) C_XY = R_XY − X̄ Ȳ = 0 − 0 = 0

9) ρ = C_XY/(σ_X σ_Y) = 0/(2 × 4) = 0
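The moments read off the characteristic function can be cross-checked numerically: a second central finite difference of Φ at the origin approximates the second partial derivative, and m_{20} = (−j)² ∂²Φ/∂ω₁² at 0. This is only a sketch of the idea, for the Φ assumed in Problem 5:

```python
# Sketch: recover m20 and m02 from Phi(w1, w2) = exp(-2*w1^2 - 8*w2^2)
# by central finite differences at the origin.
import math

def phi(w1, w2):
    return math.exp(-2 * w1**2 - 8 * w2**2)

h = 1e-4
# Second partial derivatives at the origin (central difference).
d2_w1 = (phi(h, 0) - 2 * phi(0, 0) + phi(-h, 0)) / h**2
d2_w2 = (phi(0, h) - 2 * phi(0, 0) + phi(0, -h)) / h**2

m20 = -d2_w1    # (-j)^2 = -1
m02 = -d2_w2
print(m20, m02)
```

The printed values approximate 4 and 16, matching items 4) and 3) above.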
Moment generating functions (MGFs) are particularly useful for analyzing sums of independent r.v.s, because if X and Y are independent, the MGF of W = X + Y is Φ_W(s) = Φ_X(s) Φ_Y(s).

Problem 6: X and Y are independent r.v.s with PMFs

P_X(x) = 0.2 for x = 1; 0.6 for x = 2; 0.2 for x = 3; 0 otherwise
P_Y(y) = 0.5 for y = −1; 0.5 for y = 1; 0 otherwise

Find the MGF of W = X + Y. What are E[W³] and P_W(w)?

Solution: If W = X + Y then

P_W(w) = 0.1 for w = 0; 0.3 for w = 1; 0.2 for w = 2; 0.3 for w = 3; 0.1 for w = 4

E[W³] = d³Φ_W(s)/ds³ |_{s=0} = [0.3(1³)e^s + 0.2(2³)e^{2s} + 0.3(3³)e^{3s} + 0.1(4³)e^{4s}]_{s=0} = 16.4
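As a sketch of Problem 6, the PMF of W = X + Y for independent discrete r.v.s is the convolution of the two PMFs, from which any moment such as E[W³] follows directly:

```python
# Sketch: PMF of W = X + Y by convolving the PMFs of Problem 6, then E[W^3].
from collections import defaultdict

px = {1: 0.2, 2: 0.6, 3: 0.2}
py = {-1: 0.5, 1: 0.5}

pw = defaultdict(float)
for x, p in px.items():          # P_W(w) = sum_x P_X(x) P_Y(w - x)
    for y, q in py.items():
        pw[x + y] += p * q

EW3 = sum(w**3 * p for w, p in pw.items())
print(dict(pw), EW3)
```

The computed P_W matches the table above, and E[W³] = 16.4 agrees with the third derivative of the MGF at s = 0.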
Theorem: The characteristic function of a sum of independent r.v.s is the product of their individual characteristic functions.

Proof. For a r.v. X with PDF f_X(x), the characteristic function (CF) is

Φ_X(ω) = E[e^{jωX}] = ∫_{x=−∞}^{∞} f_X(x) e^{jωx} dx

If X₁, X₂, X₃, . . . , X_N are independent, then

f_{X₁,X₂,X₃,...,X_N}(x₁, x₂, x₃, . . . , x_N) = f_{X₁}(x₁) · f_{X₂}(x₂) · f_{X₃}(x₃) · · · f_{X_N}(x_N)

∴ Φ_{X₁+X₂+X₃+...+X_N}(ω) = Πᵢ₌₁ᴺ Φ_{Xᵢ}(ω)
6.7 Joint PDF of N-Gaussian random variables

Let there be N Gaussian random variables with PDFs, means and variances:

r.v → PDF, Mean, Variance
X₁ → f_{X₁}(x₁), X̄₁, σ²_{X₁}
X₂ → f_{X₂}(x₂), X̄₂, σ²_{X₂}
X₃ → f_{X₃}(x₃), X̄₃, σ²_{X₃}
⋮
X_N → f_{X_N}(x_N), X̄_N, σ²_{X_N}

The covariances are

C_{XᵢXⱼ} = E[(Xᵢ − X̄ᵢ)(Xⱼ − X̄ⱼ)] = σ²_{Xᵢ} = σ²_{Xⱼ} if i = j ;  ρσ_{Xᵢ}σ_{Xⱼ} if i ≠ j
    ∵ C_XY = E[(X − X̄)(Y − Ȳ)] and ρ = C_XY/(σ_X σ_Y)

The N × N covariance matrix is

[C_X] = [ C_{X₁X₁}   C_{X₁X₂}   C_{X₁X₃}   . . .  C_{X₁X_N}
          C_{X₂X₁}   C_{X₂X₂}   C_{X₂X₃}   . . .  C_{X₂X_N}
             ⋮           ⋮           ⋮                ⋮
          C_{X_NX₁}  C_{X_NX₂}  C_{X_NX₃}  . . .  C_{X_NX_N} ]_{N×N} = [C_{XᵢXⱼ}]

and the N × 1 mean-offset vector is

[x − X̄] = [ x₁ − X̄₁, x₂ − X̄₂, . . . , x_N − X̄_N ]ᵀ_{N×1}
Note: [·]ᵀ → matrix transpose; [·]⁻¹ → matrix inverse; |·| → matrix determinant.

The general joint Gaussian PDF is

f_{X₁X₂...X_N}(x₁, x₂, . . . , x_N) = (|C_X|^{−1/2}/(2π)^{N/2}) Exp{ −(1/2)[x − X̄]ᵀ [C_X]⁻¹ [x − X̄] }    (6.1)

For N = 2, from equation (6.1),
f_{X₁X₂}(x₁, x₂) = (|C_X|^{−1/2}/2π) Exp{ −(1/2)[x − X̄]ᵀ [C_X]⁻¹ [x − X̄] }    (6.2)

where

[C_X] = [ C_{X₁X₁}  C_{X₁X₂} ; C_{X₂X₁}  C_{X₂X₂} ]_{2×2} = [ σ²_{X₁}  ρσ_{X₁}σ_{X₂} ; ρσ_{X₁}σ_{X₂}  σ²_{X₂} ]_{2×2}

We know that for a matrix A = [ a b ; c d ], A⁻¹ = (1/(ad − bc)) [ d −b ; −c a ].
[C_X]⁻¹ = (1/(σ²_{X₁}σ²_{X₂} − ρ²σ²_{X₁}σ²_{X₂})) [ σ²_{X₂}  −ρσ_{X₁}σ_{X₂} ; −ρσ_{X₁}σ_{X₂}  σ²_{X₁} ]
        = (1/((1 − ρ²)σ²_{X₁}σ²_{X₂})) [ σ²_{X₂}  −ρσ_{X₁}σ_{X₂} ; −ρσ_{X₁}σ_{X₂}  σ²_{X₁} ]
        = [ 1/((1−ρ²)σ²_{X₁})   −ρ/((1−ρ²)σ_{X₁}σ_{X₂}) ; −ρ/((1−ρ²)σ_{X₁}σ_{X₂})   1/((1−ρ²)σ²_{X₂}) ]

The determinant of the inverse is

|[C_X]⁻¹| = 1/((1−ρ²)²σ²_{X₁}σ²_{X₂}) − (−ρ)²/((1−ρ²)²σ²_{X₁}σ²_{X₂})
          = (1−ρ²)/((1−ρ²)²σ²_{X₁}σ²_{X₂})
          = 1/((1−ρ²)σ²_{X₁}σ²_{X₂})

|[C_X]⁻¹|^{1/2} = 1/(σ_{X₁}σ_{X₂}√(1−ρ²))    (6.3)

[x − X̄]_{2×1} = [ x₁ − X̄₁ ; x₂ − X̄₂ ] ;  [x − X̄]ᵀ_{1×2} = [ (x₁ − X̄₁)  (x₂ − X̄₂) ]    (6.4)
Substituting (6.3) and (6.4) into (6.2),

f_{X₁X₂}(x₁, x₂) = (1/(2πσ_{X₁}σ_{X₂}√(1−ρ²)))
  × Exp{ −(1/2) [ (x₁−X̄₁)²/((1−ρ²)σ²_{X₁}) − 2ρ(x₁−X̄₁)(x₂−X̄₂)/((1−ρ²)σ_{X₁}σ_{X₂}) + (x₂−X̄₂)²/((1−ρ²)σ²_{X₂}) ] }
6.7.1 Properties

1. The maximum value of the density occurs at (x₁, x₂) = (X̄₁, X̄₂):

f_{X₁X₂}(X̄₁, X̄₂) = 1/(2πσ_{X₁}σ_{X₂}√(1−ρ²))

2. If ρ = 0 then X₁ and X₂ are independent:

f_{X₁X₂}(x₁, x₂) = (1/(2πσ_{X₁}σ_{X₂})) e^{−(1/2)[(x₁−X̄₁)²/σ²_{X₁} + (x₂−X̄₂)²/σ²_{X₂}]}
                 = (1/(√(2π)σ_{X₁})) e^{−(1/2)(x₁−X̄₁)²/σ²_{X₁}} · (1/(√(2π)σ_{X₂})) e^{−(1/2)(x₂−X̄₂)²/σ²_{X₂}}
                 = f_{X₁}(x₁) · f_{X₂}(x₂)
[Figures: discrete marginal PMFs P(x₁), P(x₂) and joint PMF P(x₁, x₂); 3-D surface and contour plots of the bivariate Gaussian PDF. For ρ ≠ 0 the constant-density contours are ellipses in the (X₁, X₂) plane whose principal axes are rotated by an angle θ relative to the coordinate axes.]
4. The expression for the rotation angle θ can be written as

θ = (1/2) tan⁻¹( 2ρσ_{X₁}σ_{X₂} / (σ²_{X₁} − σ²_{X₂}) )

Note:
• If ρ = 0 the variables are independent and no rotation is needed.
With the rotated variables Y₁ = X₁cosθ + X₂sinθ and Y₂ = −X₁sinθ + X₂cosθ (taking zero means for brevity),

C_{Y₁Y₂} = E[(X₁ − X̄₁)(X₂ − X̄₂)] cos²θ − E[(X₁ − X̄₁)²] cosθ sinθ
         − E[(X₁ − X̄₁)(X₂ − X̄₂)] sin²θ + E[(X₂ − X̄₂)²] cosθ sinθ
         = C_{X₁X₂} cos²θ − σ²_{X₁} cosθ sinθ − C_{X₁X₂} sin²θ + σ²_{X₂} cosθ sinθ
         = C_{X₁X₂} (cos²θ − sin²θ) + (σ²_{X₂} − σ²_{X₁}) sinθ cosθ
         = ρσ_{X₁}σ_{X₂} cos2θ + (σ²_{X₂} − σ²_{X₁}) (sin2θ)/2

∴ C_{Y₁Y₂} = ρσ_{X₁}σ_{X₂} cos2θ + (σ²_{X₂} − σ²_{X₁}) (sin2θ)/2

Setting C_{Y₁Y₂} = 0 so that Y₁ and Y₂ are uncorrelated (and, being Gaussian, independent):

sin2θ/cos2θ = tan2θ = 2ρσ_{X₁}σ_{X₂}/(σ²_{X₁} − σ²_{X₂})

2θ = tan⁻¹( 2ρσ_{X₁}σ_{X₂}/(σ²_{X₁} − σ²_{X₂}) )

θ = (1/2) tan⁻¹( 2ρσ_{X₁}σ_{X₂}/(σ²_{X₁} − σ²_{X₂}) )
Problem: Two Gaussian r.v.s X₁ and X₂ have variances σ²_{X₁} = 9 and σ²_{X₂} = 4 respectively. It is known that a coordinate rotation by an angle θ = π/8 results in new r.v.s Y₁ and Y₂ such that they are independent. What is the value of ρ?

Solution:

θ = (1/2) tan⁻¹( 2ρσ_{X₁}σ_{X₂}/(σ²_{X₁} − σ²_{X₂}) )

π/8 = (1/2) tan⁻¹( 2ρ(3)(2)/(9 − 4) )

π/4 = tan⁻¹( 12ρ/5 )

12ρ/5 = tan(π/4) = 1

ρ = 5/12    ∴ ρ = 0.416
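The rotation result can be verified numerically: with σ₁² = 9, σ₂² = 4 and ρ = 5/12, rotating the coordinates by θ = π/8 should drive the cross-covariance of (Y₁, Y₂) to zero. A minimal sketch (no external libraries):

```python
# Sketch: check that a pi/8 rotation decorrelates (X1, X2) when rho = 5/12.
import math

s1, s2, rho = 3.0, 2.0, 5.0 / 12.0
theta = math.pi / 8

# Covariance matrix of (X1, X2).
C = [[s1 * s1, rho * s1 * s2],
     [rho * s1 * s2, s2 * s2]]

# Rotation matrix R and propagated covariance C_Y = R C R^T.
c, s = math.cos(theta), math.sin(theta)
R = [[c, s], [-s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

RT = [[R[j][i] for j in range(2)] for i in range(2)]
CY = matmul(matmul(R, C), RT)
print(CY[0][1])
```

The off-diagonal entry of C_Y comes out as zero (to rounding), confirming ρ = 5/12.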
6.8 Linear Transformation of Gaussian random variables

Let Y = T X be a linear transformation of N jointly Gaussian r.v.s, with mean-offset vector

[x − X̄] = [ x₁ − X̄₁, x₂ − X̄₂, . . . , x_N − X̄_N ]ᵀ_{N×1}

The transformed variables are again jointly Gaussian:

f_{Y₁Y₂...Y_N}(y₁, y₂, . . . , y_N) = (|C_Y|^{−1/2}/(2π)^{N/2}) Exp{ −(1/2)[y − Ȳ]ᵀ [C_Y]⁻¹ [y − Ȳ] }

where C_Y = T C_X Tᵀ, C_X = covariance matrix of X and T = transformation matrix.

Matrix representation:

[ Y₁ ; Y₂ ; . . . ; Y_N ] = [ a₁₁ a₁₂ . . . a₁N ; a₂₁ a₂₂ . . . a₂N ; . . . ; a_{N1} a_{N2} . . . a_{NN} ]_{N×N} [ X₁ ; X₂ ; . . . ; X_N ]_{N×1}
                              └──────────── T = transformation matrix ────────────┘
Problem: Gaussian random variables X₁ and X₂, for which X̄₁ = 2, σ²_{X₁} = 9, X̄₂ = −1, σ²_{X₂} = 4 and C_{X₁X₂} = −3, are transformed to new r.v.s Y₁ and Y₂ according to Y₁ = −X₁ + X₂, Y₂ = −2X₁ − 3X₂.

Find (a) X̄₁², X̄₂², ρ_{X₁X₂} (b) σ²_{Y₁}, σ²_{Y₂}, ρ_{Y₁Y₂}, E[Y₁], E[Y₂], Ȳ₁², Ȳ₂² (c) f_{X₁X₂}(x₁, x₂), f_{Y₁Y₂}(y₁, y₂).

Solution: Given

X̄₁ = E[X₁] = 2        X̄₂ = E[X₂] = −1
σ²_{X₁} = E[(X₁ − X̄₁)²] = 9, σ_{X₁} = 3        σ²_{X₂} = E[(X₂ − X̄₂)²] = 4, σ_{X₂} = 2
Y₁ = −X₁ + X₂        Y₂ = −2X₁ − 3X₂
C_{X₁X₂} = C_{X₂X₁} = −3        ρ_{X₁X₂} = C_{X₁X₂}/(σ_{X₁}σ_{X₂}) = −3/(3 × 2) = −1/2 = −0.5

(a) We know that σ_X² = m₂ − m₁² = X̄² − (X̄)², so

E[X₁²] = σ²_{X₁} + (X̄₁)² = 9 + (2)² = 13
E[X₂²] = σ²_{X₂} + (X̄₂)² = 4 + (−1)² = 5    ∴ X̄₁² = 13, X̄₂² = 5

We know that the covariance matrix is

C_X = [ C_{X₁X₁} C_{X₁X₂} ; C_{X₂X₁} C_{X₂X₂} ] = [ σ²_{X₁} C_{X₁X₂} ; C_{X₂X₁} σ²_{X₂} ] = [ 9 −3 ; −3 4 ]

since if i = j, C_{XᵢXⱼ} = σ²_{Xᵢ}, and if i ≠ j, C_{XᵢXⱼ} = ρσ_{X₁}σ_{X₂}.
(b) C_Y = T C_X Tᵀ with T = [ −1 1 ; −2 −3 ]:

T C_X = [ (−1)(9) + 1(−3)     (−1)(−3) + 1(4) ;
          (−2)(9) + (−3)(−3)  (−2)(−3) + (−3)(4) ] = [ −12 7 ; −9 −6 ]

C_Y = [ −12 7 ; −9 −6 ] [ −1 −2 ; 1 −3 ]
    = [ (−12)(−1) + 7(1)    (−12)(−2) + 7(−3) ;
        (−9)(−1) + (−6)(1)  (−9)(−2) + (−6)(−3) ]
    = [ 12 + 7   24 − 21 ; 9 − 6   18 + 18 ]
    = [ 19 3 ; 3 36 ]

∴ σ²_{Y₁} = 19, σ²_{Y₂} = 36, C_{Y₁Y₂} = 3

ρ_{Y₁Y₂} = C_{Y₁Y₂}/(σ_{Y₁}σ_{Y₂}) = 3/(√19 × √36) = 0.1147    ∴ ρ_{Y₁Y₂} = 0.1147
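The covariance propagation C_Y = T C_X Tᵀ from this example can be sketched with small helper functions:

```python
# Sketch: verify C_Y = T C_X T^T for the worked transformation example.
def matmul(A, B):
    n, m, p = len(A), len(B[0]), len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(m)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

CX = [[9, -3], [-3, 4]]        # covariance matrix of (X1, X2)
T = [[-1, 1], [-2, -3]]        # Y1 = -X1 + X2, Y2 = -2X1 - 3X2

CY = matmul(matmul(T, CX), transpose(T))
print(CY)
```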
(c) The joint PDF of X₁ and X₂:

f_{X₁X₂}(x₁, x₂) = (|C_X|^{−1/2}/2π) Exp{ −(1/2)[x − X̄]ᵀ [C_X]⁻¹ [x − X̄] }

C_X = [ σ²_{X₁} C₁₂ ; C₂₁ σ²_{X₂} ] = [ 9 −3 ; −3 4 ]

[C_X]⁻¹ = (1/(9(4) − (−3)(−3))) [ 4 3 ; 3 9 ] = (1/27) [ 4 3 ; 3 9 ] = [ 4/27 1/9 ; 1/9 1/3 ]

|[C_X]⁻¹| = (4/27)(1/3) − (1/9)(1/9) = 4/81 − 1/81 = 1/27 = 0.03703
The quadratic form is

[x − X̄]ᵀ [C_X]⁻¹ [x − X̄] = [ (x₁ − 2)  (x₂ + 1) ] [ 4/27 1/9 ; 1/9 1/3 ] [ (x₁ − 2) ; (x₂ + 1) ]
  = (4/27)(x₁ − 2)² + (1/9)(x₁ − 2)(x₂ + 1) + (1/9)(x₁ − 2)(x₂ + 1) + (1/3)(x₂ + 1)²
  = (4/27)(x₁ − 2)² + (2/9)(x₁ − 2)(x₂ + 1) + (1/3)(x₂ + 1)²

∴ f_{X₁X₂}(x₁, x₂) = (√0.03703/2π) Exp{ −(1/2)[ (4/27)(x₁ − 2)² + (2/9)(x₁ − 2)(x₂ + 1) + (1/3)(x₂ + 1)² ] }
                   = (0.19245/2π) e^{ −(1/2)[ (4/27)(x₁−2)² + (2/9)(x₁−2)(x₂+1) + (1/3)(x₂+1)² ] }
Another method: using the bivariate Gaussian formula directly,

f_{X₁X₂}(x₁, x₂) = (1/(2πσ_{X₁}σ_{X₂}√(1−ρ²)))
  × Exp{ −(1/2) [ (x₁−X̄₁)²/((1−ρ²)σ²_{X₁}) − 2ρ(x₁−X̄₁)(x₂−X̄₂)/((1−ρ²)σ_{X₁}σ_{X₂}) + (x₂−X̄₂)²/((1−ρ²)σ²_{X₂}) ] }

With σ_{X₁} = 3, σ_{X₂} = 2 and ρ = −0.5:

f_{X₁X₂}(x₁, x₂) = (1/(2π(3)(2)√(1 − 0.5²)))
  × Exp{ −(1/(2(1 − 0.5²))) [ (x₁−2)²/9 + (x₁−2)(x₂+1)/6 + (x₂+1)²/4 ] }
  = (1/(10.392π)) Exp{ −(1/1.5) [ (x₁−2)²/9 + (x₁−2)(x₂+1)/6 + (x₂+1)²/4 ] }
Similarly, for Y₁ and Y₂ (with Ȳ₁ = −3, Ȳ₂ = −1 and |C_Y| = 19 × 36 − 9 = 675), the quadratic form works out to

[y − Ȳ]ᵀ [C_Y]⁻¹ [y − Ȳ] = (36/675)(y₁ + 3)² − (6/675)(y₁ + 3)(y₂ + 1) + (19/675)(y₂ + 1)²

f_{Y₁Y₂}(y₁, y₂) = (1/(51.96π)) Exp{ −(1/2)[ (36/675)(y₁ + 3)² − (6/675)(y₁ + 3)(y₂ + 1) + (19/675)(y₂ + 1)² ] }
Problem: X and Y have means X̄ = Ȳ = 1, variances σ_X² = 4 and σ_Y² = 2, and correlation coefficient ρ_XY = 0.2. Find the statistics of V = −X − Y and W = 2X + Y.

Solution:

σ_V² = E[(V − V̄)²]
     = E[((−X − Y) − (−X̄ − Ȳ))²]
     = E[(−(X − X̄) − (Y − Ȳ))²]
     = E[(X − X̄)² + (Y − Ȳ)² + 2(X − X̄)(Y − Ȳ)]
     = σ_X² + σ_Y² + 2C_XY        ∵ C_XY = ρ_XY σ_X σ_Y = 0.2 × √4 × √2 = 0.56568
     = 4 + 2 + 2(0.56568) = 7.1313

∴ σ_V = √7.1313 = 2.6704
σ_W² = E[(W − W̄)²]
     = E[((2X + Y) − (2X̄ + Ȳ))²]
     = E[(2(X − X̄) + (Y − Ȳ))²]
     = E[4(X − X̄)² + (Y − Ȳ)² + 4(X − X̄)(Y − Ȳ)]
     = 4σ_X² + σ_Y² + 4C_XY        ∵ C_XY = ρ_XY σ_X σ_Y = 0.2 × √4 × √2 = 0.56568
     = 4(4) + 2 + 4(0.56568) = 20.2624

∴ σ_W = √20.2624 = 4.5013

For the cross-statistics, R_VW = C_VW + V̄ W̄ , with

V̄ = −X̄ − Ȳ = −1 − 1 = −2
W̄ = 2X̄ + Ȳ = 2(1) + 1 = 3

C_VW = R_VW − V̄ W̄ = −16.1312 − (−2)(3) = −10.1312

∴ C_VW = −10.1312

ρ_VW = C_VW/(σ_V σ_W) = −10.1312/(2.6704 × 4.5013) = −0.84284
Problem: Gaussian r.v.s X₁ and X₂ with X̄₁ = 0, X̄₂ = −1, E[X₁²] = 2, E[X₂²] = 4 and R_{X₁X₂} = −2 are transformed as Y₁ = 2X₁ + X₂, Y₂ = −X₁ − 3X₂. Find Ȳ₁, Ȳ₂, Ȳ₁², Ȳ₂², R_{Y₁Y₂}, σ²_{X₁}, σ²_{X₂} and f_{Y₁Y₂}(y₁, y₂).

Solution: Given data

X̄₁ = E[X₁] = 0        X̄₂ = E[X₂] = −1
E[X₁²] = X̄₁² = 2        E[X₂²] = X̄₂² = 4
R_{X₁X₂} = E[X₁X₂] = −2
Y₁ = 2X₁ + X₂        Y₂ = −X₁ − 3X₂
The inverse transformation: solving Y₁ = 2X₁ + X₂ and Y₂ = −X₁ − 3X₂ for X₁ and X₂,

X₁ = (3/5)Y₁ + (1/5)Y₂ ;  X₂ = −(1/5)Y₁ − (2/5)Y₂

Check: E[X₁] = (3/5)E[Y₁] + (1/5)E[Y₂] = (3/5)(−1) + (1/5)(3) = −3/5 + 3/5 = 0 ✓
       E[X₂] = −(1/5)E[Y₁] − (2/5)E[Y₂] = −(1/5)(−1) − (2/5)(3) = 1/5 − 6/5 = −1 ✓

(vi) σ²_{X₁} = E[X₁²] − (E[X₁])² = 2 − 0² = 2

(vii) σ²_{X₂} = E[X₂²] − (E[X₂])² = 4 − (−1)² = 3

The Jacobian of the inverse transformation is

J = | ∂X₁/∂Y₁ ∂X₁/∂Y₂ ; ∂X₂/∂Y₁ ∂X₂/∂Y₂ | = | 3/5 1/5 ; −1/5 −2/5 | = (3/5)(−2/5) − (1/5)(−1/5) = −6/25 + 1/25 = −1/5

∴ f_{Y₁Y₂}(y₁, y₂) = |J| f_{X₁X₂}( (3/5)y₁ + (1/5)y₂ , −(1/5)y₁ − (2/5)y₂ ) = (1/5) f_{X₁X₂}( (3y₁ + y₂)/5 , −(y₁ + 2y₂)/5 )
CHAPTER 7

Random Process

The concept of a random process is based on enlarging the random-variable concept to include time. A random variable is a function of the sample space; when the mapping is a function of both the sample space and time, it is called a random process or stochastic process, and it is written X(t, s).

The random process X(t, s) has a family of specific values x(t, s). In short form a random process is represented as X(t), with a family of sample functions x(t).

A random process can be represented in three ways.
Example: Consider an experiment of measuring the temperature of a room with a collection of thermometers. Each thermometer reading is a random variable which can take on any value from the sample space S; at different times the readings of the thermometers may also differ. Thus the room temperature is a function of both the sample space and time. In this example the concept of a random variable is extended by taking the time dimension into consideration: we assign a time function x(t, s) to every outcome s. The family of all such functions, X(t, s), is known as a "random process" or "stochastic process". In place of x(t, s) and X(t, s), the short-form notations x(t) and X(t) are often used.

Fig. 7.1 shows a random process: S is the sample space with samples S₁, S₂, S₃. Sample S₁ corresponds to the readings of thermometer 1, i.e., x₁(t); S₂ and S₃ correspond to the readings of thermometers 2 and 3 respectively.
To determine the statistics of the room temperature, say the mean value, two methods are used.

7.1.1 Statistical averages or ensemble averages

A random variable corresponding to the random process can be obtained by fixing the time T = t₁, t₂, t₃, . . . , t_N: the random variable X₁ is obtained at t = t₁, X₂ at t = t₂, and so on. The PDFs of the random variables X₁ and X₂ are then obtained by calculating their probabilities.

Let f_{X₁}(x₁) and f_{X₂}(x₂) represent the PDFs of the random variables X₁ and X₂. The CDFs F_{X₁}(x₁) and F_{X₂}(x₂) are obtained by integrating (or, for discrete r.v.s, summing) the PDFs.

∴ The statistical parameter of the random process is the mean value (expectation, statistical average or ensemble average) E[X_k]:

E[X_k] = ∫_{x_k=−∞}^{∞} x_k f_{X_k}(x_k) dx_k
7.1.2 Time averages or entire time scale

The mean value may also be calculated over the entire time scale:

A[x₁(t)] = ⟨x₁(t)⟩ = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x₁(t) dt

This is called the "time average". Similarly the mean values of x₂(t) and x₃(t) can be calculated, and the time correlation ⟨x(t)x(t + τ)⟩ plays the role of R_{X₁X₂}(τ).
7.2 Classification of random processes

1. Non-deterministic process
   i. Continuous random process
   ii. Discrete random process
   iii. Continuous sequence random process
   iv. Discrete sequence random process
2. Deterministic process
3. Stationary random process
   i. First order stationary random process
   ii. Second order stationary random process
   iii. Nth order stationary random process
   iv. Strict sense stationary random process (SSS)
   v. Wide sense stationary random process (WSS)
i. Continuous random process: If the future values cannot be predicted in advance and the values vary continuously with respect to time, it is called a "continuous random process". Examples:
   – Temperature measured using a thermometer.
   – Thermal noise generated by a resistor.

[Figure 7.2: a continuous sample function X(t).]

ii. Discrete random process: If X(t) is discrete in amplitude with respect to time t, the random process is called a "discrete random process"; it takes only a discrete set of values. Ex: logic '1' and '0' levels (+5 V / −5 V) generated by a personal computer.

[Figure 7.3: a two-level (±5 V) sample function X(t).]
iii. Continuous sequence random process: A random process for which X(t) is continuous in amplitude but time takes only discrete values is called a "continuous sequence random process". This can be obtained by sampling a continuous random process.

[Figure 7.4: samples of a continuous-amplitude process at discrete times.]

iv. Discrete sequence random process: A random process for which both X(t) and t are discrete is called a "discrete sequence random process". This can be obtained by sampling a discrete random process.

[Figure 7.5: a two-level (±5 V) sequence at discrete times.]

2. Deterministic random process: If the future values of any sample function can be predicted exactly from observed past values, the process is called a "deterministic process".

[Figure 7.6]
i. First order stationary random process: If the first order PDF and the mean do not change with respect to a shift of the time origin, the random process is called "first order stationary". Ex:
   – f_{X₁}(x₁) does not change with respect to time.
   – E[X₁] is constant.

ii. Second order stationary random process: If the second order PDF and the second order moment are invariant to a time shift, the random process is called "2nd order stationary". Ex:
   – f_{X₁X₂}(x₁, x₂) does not change with respect to a shift of the time origin.
   – E[X(t₁)X(t₁ + τ)] depends only on τ.

iii. Nth order stationary random process: If the Nth order PDF and the corresponding moments are invariant to a time shift, it is called "Nth order stationary". Ex:
   – f_{X₁X₂...X_N}(x₁, x₂, . . . , x_N) does not change with respect to a shift of the time origin.
   – E[X(t₁)X(t₂) . . . X(t_N)] is invariant to a time shift.

iv. Strict sense stationary random process (SSS): If all statistical parameters and PDFs of every order do not change with respect to a shift of the time origin, it is called a strict sense stationary random process.
v. Wide sense stationary random process (WSS): If the expectation (mean) is constant and the correlation is a function only of τ = t₂ − t₁, it is called a wide sense stationary random process:
   – E[X(t)] is constant.
   – E[X(t)X(t + τ)] = R_XX(τ), i.e., E[X(t₁)X(t₂)] = E[X(t₁)X(t₁ + τ)] depends only on τ.

5. Ergodic random process: If the statistical averages are equal to the time averages, it is called an "ergodic random process".

Problem 1: A random process is given by X(t) = A cos(ω₀t + θ), where A and ω₀ are constants and θ is a r.v. uniformly distributed on (0, 2π), i.e., f_θ(θ) = 1/2π. Show that X(t) is WSS.

[Figure 7.7: the uniform PDF f_θ(θ) = 1/2π on (0, 2π).]

Solution: If the mean value or the auto-correlation function of a r.p. is a function of time t, then it is not stationary.
1. Expectation or mean value:

E[X(t)] = X̄(t) = ∫_{θ=0}^{2π} x(t) f_θ(θ) dθ
        = ∫_{θ=0}^{2π} A cos(ω₀t + θ) (1/2π) dθ
        = (A/2π) [sin(ω₀t + θ)]₀^{2π}
        = (A/2π) [sin(ω₀t + 2π) − sin ω₀t]
        = (A/2π) [sin ω₀t − sin ω₀t] = 0
2. Correlation:

R_XX(τ) = E[X(t)X(t + τ)]
        = ∫_{θ=0}^{2π} x(t) x(t + τ) f_θ(θ) dθ
        = ∫_{θ=0}^{2π} A cos(ω₀t + θ) · A cos(ω₀(t + τ) + θ) (1/2π) dθ
        = (A²/4π) ∫_{θ=0}^{2π} [cos(2ω₀t + 2θ + ω₀τ) + cos(ω₀τ)] dθ    ∵ 2 cosA cosB = cos(A + B) + cos(A − B)
        = (A²/4π) [sin(2ω₀t + 2θ + ω₀τ)/2]₀^{2π} + (A²/4π) cos(ω₀τ) [θ]₀^{2π}
        = (A²/4π) [sin(2ω₀t + 4π + ω₀τ) − sin(2ω₀t + ω₀τ)]/2 + (A²/4π) cos(ω₀τ)(2π − 0)
        = 0 + (A²/2) cos ω₀τ

∴ R_XX(τ) = (A²/2) cos ω₀τ. This solution does not contain the variable t.

So E[X(t)] is constant and E[X(t)X(t + τ)] depends only on τ; hence the process is WSS.
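Problem 1 can be sketched numerically: averaging over θ ~ Uniform(0, 2π) on a grid (assuming, say, A = 2 and ω₀ = 3) should give a mean of 0 and a correlation of (A²/2)cos(ω₀τ) regardless of the absolute time t:

```python
# Sketch: ensemble averages of X(t) = A*cos(w0*t + theta), theta ~ U(0, 2*pi),
# evaluated at several absolute times t to confirm the WSS property.
import math

A, w0, tau = 2.0, 3.0, 0.5
M = 20_000                       # integration grid over theta

def ensemble(fn):
    """Midpoint average of fn(theta) over theta uniform on (0, 2*pi)."""
    return sum(fn(2 * math.pi * (k + 0.5) / M) for k in range(M)) / M

results = []
for t in (0.0, 0.7, 1.9):
    mean = ensemble(lambda th: A * math.cos(w0 * t + th))
    corr = ensemble(lambda th: A * math.cos(w0 * t + th)
                    * A * math.cos(w0 * (t + tau) + th))
    results.append((mean, corr))

print(results, (A**2 / 2) * math.cos(w0 * tau))
```

For every t the mean is (numerically) zero and the correlation equals (A²/2)cos(ω₀τ), exactly as derived.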
Problem 2: A random process X(t) = A cos(ω₀t + θ), where A and ω₀ are constants and θ is a uniformly distributed r.v. on the interval (0, π). Show that it is not a WSS r.p.

Solution: Given X(t) = A cos(ω₀t + θ), where A and ω₀ are constants and θ → (0, π) with f_θ(θ) = 1/π; the distribution is shown in Fig. 7.8.

[Figure 7.8: the uniform PDF f_θ(θ) = 1/π on (0, π).]

If the mean value or the auto-correlation function of a r.p. is a function of time t, then it is not stationary.

1. Expectation or mean value:

E[X(t)] = ∫_{θ=0}^{π} A cos(ω₀t + θ) (1/π) dθ = (A/π)[sin(ω₀t + π) − sin ω₀t] = −(2A/π) sin ω₀t

∴ E[X(t)] = −(2A/π) sin ω₀t. It is not constant, i.e., it varies with t. So it is not a stationary r.p.
2. Correlation:

R_XX(τ) = E[X(t)X(t + τ)]
        = ∫_{θ=0}^{π} x(t) x(t + τ) f_θ(θ) dθ
        = ∫_{θ=0}^{π} A cos(ω₀t + θ) · A cos(ω₀(t + τ) + θ) (1/π) dθ
        = (A²/2π) ∫_{θ=0}^{π} [cos(2ω₀t + 2θ + ω₀τ) + cos(ω₀τ)] dθ
        = (A²/2π) [sin(2ω₀t + 2θ + ω₀τ)/2]₀^{π} + (A²/2π) cos(ω₀τ) [θ]₀^{π}
        = (A²/2π) [sin(2ω₀t + 2π + ω₀τ) − sin(2ω₀t + ω₀τ)]/2 + (A²/2π) cos(ω₀τ)(π − 0)
        = 0 + (A²/2) cos(ω₀τ)

∴ R_XX(τ) = (A²/2) cos(ω₀τ).

This does not contain the variable t; however, since the mean varies with t, the process is still not WSS.
Problem 3: A random process X(t) = A cos(ω₀t + θ), where ω₀ and θ are constants and the amplitude A is a r.v. uniformly distributed on (−a, a), i.e., f_A(A) = 1/2a. Find the mean and auto-correlation of X(t).

[Figure 7.9: the uniform PDF f_A(A) = 1/2a on (−a, a).]

1. Expectation or mean value E[X(t)]:

E[X(t)] = X̄(t) = ∫_{A=−a}^{a} A cos(ω₀t + θ) (1/2a) dA
        = (cos(ω₀t + θ)/2a) ∫_{−a}^{a} A dA
        = (cos(ω₀t + θ)/2a) [A²/2]_{−a}^{a}
        = (cos(ω₀t + θ)/2a) [a²/2 − a²/2] = 0
2. Correlation:

R_XX(τ) = E[X(t)X(t + τ)]
        = E[A cos(ω₀t + θ) · A cos(ω₀t + ω₀τ + θ)]
        = E[(A²/2)(cos ω₀τ + cos(2ω₀t + 2θ + ω₀τ))]
        = ((cos ω₀τ + cos(2ω₀t + 2θ + ω₀τ))/2) × E[A²]
        = ((cos ω₀τ + cos(2ω₀t + 2θ + ω₀τ))/2) × ∫_{−a}^{a} A² (1/2a) dA
        = ((cos ω₀τ + cos(2ω₀t + 2θ + ω₀τ))/2) × (1/2a)[A³/3]_{−a}^{a}
        = ((cos ω₀τ + cos(2ω₀t + 2θ + ω₀τ))/2) × (1/2a)(2a³/3)
        = (a²/6) [cos ω₀τ + cos(2ω₀t + 2θ + ω₀τ)]

Since R_XX depends on t, the process is not WSS.
Problem 4: A random process X(t) = A cos(ω₀t + θ), where θ and A are constants and the frequency ω₀ is a uniform random variable from 0 to 100 rad/sec. Find the expectation and auto-correlation.

Solution: Given X(t) = A cos(ω₀t + θ), where A and θ are constants and ω₀ → (0, 100) with f_{ω₀}(ω₀) = 1/100. The frequency ω₀ is uniformly distributed between 0 and 100 rad/sec; the distribution is shown in Fig. 7.10.

[Figure 7.10: the uniform PDF f_{ω₀}(ω₀) = 1/100 on (0, 100) rad/sec.]
E[X(t)] = ∫_{ω₀=0}^{100} A cos(ω₀t + θ) (1/100) dω₀ = (A/100t) [sin(100t + θ) − sin θ]

∴ E[X(t)] contains the parameter t, so the process is not stationary.

R_XX(τ) = E[X(t)X(t + τ)]
        = E[A cos(ω₀t + θ) · A cos(ω₀t + ω₀τ + θ)]
        = E[(A²/2)(cos(ω₀τ) + cos(2ω₀t + 2θ + ω₀τ))]
        = (A²/2) { E[cos(ω₀τ)] + E[cos(2ω₀t + 2θ + ω₀τ)] }
        = (A²/2) [ (1/100) ∫_{ω₀=0}^{100} cos(ω₀τ) dω₀ + (1/100) ∫_{ω₀=0}^{100} cos(2ω₀t + ω₀τ + 2θ) dω₀ ]
        = (A²/2) [ (1/100)(sin ω₀τ/τ)₀^{100} + (1/100)(sin(2ω₀t + ω₀τ + 2θ)/(2t + τ))_{ω₀=0}^{100} ]
        = (A²/2) [ (sin 100τ)/(100τ) + (sin(200t + 100τ + 2θ) − sin 2θ)/(200t + 100τ) ]
Problem 5: A random process X(t) = K, where K is a r.v. uniformly distributed on (−1, 1), i.e., f_X(K) = 1/2. Find the auto-correlation of X(t).

[Figure 7.11: the uniform PDF f_X(K) = 1/2 on (−1, 1).]

R_XX(τ) = E[X(t)X(t + τ)]
        = ∫_{K=−1}^{1} x(t) x(t + τ) f_X(K) dK
        = ∫_{K=−1}^{1} (K)(K)(1/2) dK = (1/2) [K³/3]₋₁¹
        = (1/2)(1/3 + 1/3) = 1/3 = constant.
7.3 Correlation function

Correlation measures the similarity between two random variables of a random process. Let X(t) be the random process containing the random variables X(t₁) and X(t₂). The auto-correlation function is defined as

R_XX(t₁, t₂) = E[X(t₁)X(t₂)]; for a WSS process, with t₂ = t₁ + τ, R_XX(τ) = E[X(t)X(t + τ)].

7.3.1 Properties

1. The auto-correlation function is even: R_XX(−τ) = R_XX(τ).

Proof. R_XX(−τ) = E[X(t)X(t − τ)]. Let u = t − τ ⇒ t = u + τ; then R_XX(−τ) = E[X(u + τ)X(u)] = R_XX(τ).
2. The maximum magnitude occurs at the origin: |R_XX(τ)| ≤ R_XX(0).

Proof. Consider the positive quantity

E[(X(t₁) ± X(t₂))²] ≥ 0
E[X²(t₁) + X²(t₂) ± 2X(t₁)X(t₂)] ≥ 0
R_XX(0) + R_XX(0) ± 2R_XX(τ) ≥ 0
⇒ |R_XX(τ)| ≤ R_XX(0)

5. If X(t) has zero mean and no periodic component, then

lim_{|τ|→∞} R_XX(τ) = 0
6. If a random process X(t) with zero mean has a DC component A added, Y(t) = A + X(t), then R_YY(τ) = A² + R_XX(τ).

Proof.
Y(t) = A + X(t)
R_YY(τ) = E[Y(t)Y(t + τ)]
        = E[(A + X(t))(A + X(t + τ))]
        = E[A² + AX(t + τ) + AX(t) + X(t)X(t + τ)]
        = A² + A·E[X(t + τ)] + A·E[X(t)] + R_XX(τ)
        = A² + R_XX(τ)        ∵ E[X(t)] = 0
7. If the random process Z(t) is a sum of two random processes X(t) and Y(t), that is Z(t) = X(t) + Y(t), then R_ZZ(τ) = R_XX(τ) + R_XY(τ) + R_YX(τ) + R_YY(τ).

Proof.
Z(t) = X(t) + Y(t)
R_ZZ(τ) = E[Z(t)Z(t + τ)]
        = E[(X(t) + Y(t))(X(t + τ) + Y(t + τ))]
        = E[X(t)X(t + τ) + X(t)Y(t + τ) + Y(t)X(t + τ) + Y(t)Y(t + τ)]
        = R_XX(τ) + R_XY(τ) + R_YX(τ) + R_YY(τ)
8. If X(t) is ergodic with no periodic components, then

lim_{|τ|→∞} R_XX(τ) = X̄²

Notes:
• (E[X(t)])² = DC power
• Variance (σ_X²) = AC power
• R_XX(0) = E[X²(t)] = total power

Problem 1: A WSS random process with no periodic components has auto-correlation R_XX(τ) = 25 + 4/(1 + 6τ²). Find its mean-square value, mean and variance.

1. Mean-square value: E[X²(t)] = R_XX(0) = 25 + 4/(1 + 0) = 29
2. Mean value: (X̄)² = lim_{|τ|→∞} R_XX(τ) = 25 ∴ X̄ = ±5

3. Variance: σ_X² = m₂ − m₁² = E[X²] − (E[X])² = 29 − 5² = 4
Problem 2: The auto-correlation function of a WSS r.p. is given by R_XX(τ) = (4τ² + 100)/(τ² + 4). Find the mean and variance.

Solution:

1. Mean-square value: X̄² = E[X²(t)] = R_XX(0) = (0 + 100)/(0 + 4) = 25

2. Mean value: (X̄)² = lim_{|τ|→∞} R_XX(τ) = lim_{|τ|→∞} (4 + 100/τ²)/(1 + 4/τ²) = 4 ∴ X̄ = ±2

3. Variance: σ_X² = E[X²(t)] − (E[X])² = 25 − 4 = 21
Problem 3: Assume that an ergodic random process X(t) has an auto-correlation function

R_XX(τ) = 18 + (2/(6 + τ²))[1 + 4 cos(12τ)]

a) Find |X̄|. b) Does X(t) have a periodic component? c) What is the average power in X(t)?

Solution:

a) Since the factor 2/(6 + τ²) → 0 as |τ| → ∞,

(E[X(t)])² = (X̄)² = lim_{|τ|→∞} [ 18 + (2/(6 + τ²))(1 + 4 cos(12τ)) ] = 18 + 0 = 18

∴ X̄ = ±√18

b) No: the cos(12τ) term is damped by the decaying factor 2/(6 + τ²), so there is no periodic component.
c)

P_XX = E[X²(t)] = R_XX(0) = 18 + (2/6)[1 + 4 cos 0] = 18 + 10/6 = 118/6 = 59/3 Watts.
Problem 4: Assume that X(t) is a WSS random process with an auto-correlation function R_XX(τ) = e^{−α|τ|}. Determine the second moment of the random variable X(8) − X(5).

Solution: We know that E[X(t)X(t + τ)] = R_XX(τ) and R_XX(0) = E[X²(t)]. The second moment of X(8) − X(5) is

E[(X(8) − X(5))²] = E[X²(8)] + E[X²(5)] − 2E[X(8)X(5)]
                  = R_XX(0) + R_XX(0) − 2R_XX(3)
                  = 2(1 − e^{−3α})
Problem 5: A r.p. Y(t) = X(t) cos(ωt + θ), where X(t) is WSS with R_XX(τ) = e^{−a|τ|} and θ is uniform on (0, 2π), independent of X(t). Find R_YY(τ).

Solution:

R_YY(τ) = E[Y(t)Y(t + τ)] = E[X(t)X(t + τ)] E[cos(ωt + θ) cos(ω(t + τ) + θ)]
        = (R_XX(τ)/2) { E[cos(2ωt + 2θ + ωτ)] + E[cos ωτ] }

E[cos(2ωt + 2θ + ωτ)] = (1/2π) ∫_{θ=0}^{2π} cos(2ωt + 2θ + ωτ) dθ
                      = (1/2π) [sin(2ωt + 2θ + ωτ)/2]₀^{2π}
                      = (1/2π) [sin(2ωt + ωτ)/2 − sin(2ωt + ωτ)/2] = 0

E[cos ωτ] = (1/2π) ∫_{θ=0}^{2π} cos ωτ dθ = (1/2π) cos ωτ [θ]₀^{2π} = (1/2π) cos ωτ (2π − 0) = cos ωτ

∴ R_YY(τ) = (R_XX(τ)/2)[0 + cos ωτ] = (R_XX(τ)/2) cos ωτ = (e^{−a|τ|}/2) cos ωτ
Problem: Given

f_θ(θ) = 1/2π ; −π ≤ θ ≤ π
         0 ;  elsewhere

show that X(t) is a stationary random process and find the total power.
7.4 Cross Correlation

The correlation between two random variables obtained from two different random processes is called cross-correlation. Let X(t) and Y(t) be two random processes with random variables X(t₁) and Y(t₂). The cross-correlation is defined as R_XY(t₁, t₂) = E[X(t₁)Y(t₂)]; for jointly WSS processes, R_XY(τ) = E[X(t)Y(t + τ)].

7.4.1 Properties:

2. If X(t) and Y(t) are independent and WSS random processes, then R_XY(τ) = E[X]E[Y] = X̄ Ȳ.

3. If the two random processes X(t) and Y(t) have zero mean and are independent, then lim_{τ→∞} R_XY(τ) = 0.

4. The cross-correlation function is not even in general; it satisfies the symmetry R_XY(−τ) = R_YX(τ).

Proof.
R_XY(τ) = E[X(t)Y(t + τ)]
Replacing τ by −τ: R_XY(−τ) = E[X(t)Y(t − τ)]
Let t − τ = u ⇒ t = u + τ; then R_XY(−τ) = E[X(u + τ)Y(u)] = R_YX(τ)
5. The maximum value is obtained at origin |RXY | = RXX (0)RY Y (0)
" #2
X(t1 ) Y (t2 )
Proof. Let p ±p ≥0
RXX (0) RY Y (0)
241
" #2
X(t) Y (t + τ )
E p ±p ≥0
RXX (0) RY Y (0)
" #
X 2 (t) Y 2 (t + τ ) X(t)Y (t + τ )
E + + 2p ≥0
RXX (0) RY Y (0) RXX (0)RY Y (0)
" # " # " #
X 2 (t) Y 2 (t + τ ) X(t)Y (t + τ )
E +E + 2E p ≥0
RXX (0) RY Y (0) RXX (0)RY Y (0)
" # " # " #
RXX (0) RY Y (0) RXY (τ )
+ +2 p ≥0
RXX (0) RY Y (0) RXX (0)RY Y (0)
" #
RXY (τ )
p ≤1
RXX (0)RY Y (0)
p
∴ RXY (τ ) ≤ RXX (0)RY Y (0)
Let X(t) be a random process with random variables obtained at t1 and t2. The auto-covariance is defined as
CXX(t1, t2) = E[ (X(t1) − X̄(t1)) (X(t2) − X̄(t2)) ]
With t2 = t1 + τ,
= E[ (X(t) − X̄(t)) (X(t + τ) − X̄(t + τ)) ]
= E[X(t)X(t + τ)] − X̄(t) X̄(t + τ)
If X(t) is WSS, then CXX(τ) = RXX(τ) − X̄²
In general, CXX(τ) = RXX(τ) − X̄²(t)
7.5.2 Properties:
1. If X(t1) and X(t2) are orthogonal, then E[X(t1)X(t2)] = 0 ⇒ RXX(τ) = 0
∴ CXX(τ) = −X̄²
2. The covariance at the origin is the variance:
CXX(0) = RXX(0) − X̄²
= E[X²(t)] − X̄²
∴ CXX(0) = σ²X
Let X(t) and Y(t) be two random processes, with random variables X(t1) and Y(t2) obtained from them. The cross covariance can be written as
CXY(τ) = E[X(t)Y(t + τ)] − X̄ Ȳ
= RXY(τ) − X̄ Ȳ
If X(t) and Y(t) are uncorrelated, CXY(τ) = 0.
• Time mean:
A[X(t)] = lim_{T→∞} (1/2T) ∫_{−T}^{T} X(t) dt  (or)
        = lim_{T→∞} (1/T) ∫_{0}^{T} X(t) dt
• Time correlation:
𝒜XX(τ) = A[X(t)X(t + τ)] = lim_{T→∞} (1/2T) ∫_{−T}^{T} X(t) X(t + τ) dt
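On sampled data these time averages become finite sums over one record. A minimal estimator sketch (the sinusoidal sample function and its parameters are illustrative choices):

```python
import numpy as np

# Time-average mean and autocorrelation estimated from ONE sample function
# x[n] = A*cos(w0*n*dt + theta), observed over a long but finite record.
A, w0, theta, dt = 2.0, 5.0, 0.9, 0.001
t = np.arange(0.0, 200.0, dt)          # record length 2T = 200 s
x = A * np.cos(w0 * t + theta)

time_mean = x.mean()                   # ≈ A[X(t)] = 0 for a sinusoid

def time_acf(x, lag):
    """Estimate A[x(t)x(t+tau)] at tau = lag*dt."""
    return np.mean(x[:-lag] * x[lag:]) if lag else np.mean(x * x)

tau = 0.3
r_hat = time_acf(x, int(tau / dt))     # ≈ (A**2/2) * cos(w0*tau)
```

For this ergodic example the time averages match the ensemble values (0 and (A²/2)cos ω0τ) up to an O(1/T) residual from the finite record.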
3. Ergodic random process: If the statistical averages are equal to the corresponding time averages, the process is called an ergodic r.p.
4. Mean-ergodic random process: If only the statistical mean equals the time mean, the process is called a mean-ergodic r.p.
[Fig. 7.12: a uniform density f(·) of height 1/2 over (0, 2)]
If the mean value or the auto-correlation function of a r.p. is a function of time ‘t’, then it is not stationary.
2. Correlation:
RXX(τ) = E[X(t)X(t + τ)]
= ∫_{0}^{2π} x(t) x(t + τ) fθ(θ) dθ
= ∫_{0}^{2π} A cos(ω0t + θ) · A cos(ω0(t + τ) + θ) · (1/2π) dθ
= (A²/4π) ∫_{0}^{2π} 2 cos(ω0t + θ) cos(ω0t + ω0τ + θ) dθ
= (A²/4π) ∫_{0}^{2π} [ cos(2ω0t + 2θ + ω0τ) + cos(ω0τ) ] dθ    ∵ 2 cos A cos B = cos(A + B) + cos(A − B)
= (A²/4π) [ sin(2ω0t + 2θ + ω0τ)/2 ]_{0}^{2π} + (A²/4π) cos(ω0τ) [θ]_{0}^{2π}
= (A²/8π) [ sin(2ω0t + 4π + ω0τ) − sin(2ω0t + ω0τ) ] + (A²/4π) cos(ω0τ)(2π − 0)
= 0 + (A²/2) cos ω0τ
∴ RXX(τ) = (A²/2) cos ω0τ. This result does not contain the variable ‘t’.
So E[X(t)] is constant and E[X(t)X(t + τ)] depends only on τ; hence X(t) is WSS.
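A Monte-Carlo check of the ensemble averages just computed (a sketch; A, ω0 and the evaluation points t, τ are arbitrary):

```python
import numpy as np

# Ensemble averages for X(t) = A*cos(w0*t + theta), theta ~ Uniform(0, 2*pi):
# E[X(t)] should be 0 and E[X(t)X(t+tau)] should be (A**2/2)*cos(w0*tau),
# independent of the chosen t.
rng = np.random.default_rng(1)
A, w0 = 2.0, 5.0
t, tau = 1.3, 0.4                         # arbitrary evaluation points
theta = rng.uniform(0.0, 2 * np.pi, 1_000_000)

x_t = A * np.cos(w0 * t + theta)
x_t_tau = A * np.cos(w0 * (t + tau) + theta)

mean_est = x_t.mean()                     # ≈ 0
acf_est = np.mean(x_t * x_t_tau)          # ≈ (A**2/2)*cos(w0*tau)
```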
CHAPTER 8
Spectral Characteristics
In the previous sections we studied the characteristics of a random process in the time domain. These characteristics can also be represented in the frequency domain; the function obtained in the frequency domain is called the spectrum of the random signal.
Let X(t) be a random process as shown in Fig. Define XT(t) as that portion of X(t) between −T and +T, i.e.,
XT(t) = X(t); −T < t < T
        0;    otherwise
Fourier transforms are very useful in the spectral representation of random signals. For example, for a signal x(t) the Fourier transform X(ω) is given by
X(ω) = F{x(t)} = ∫_{−∞}^{∞} x(t) e^{−jωt} dt
This function X(ω) is considered to be the voltage density spectrum of x(t), measured in volts/Hertz. But X(ω) may not exist for many sample functions of a random process. Therefore, the spectral representation of a random process using a voltage density spectrum is not always feasible.
In such situations we use the power density spectrum of a random process, which is defined as the function that results when the power in the random process is described as a function of frequency.
The truncated process XT(t) can be written as
XT(t) = X(t); −T ≤ t ≤ T
        0;    elsewhere
1. Time domain
• Energy  E = ∫_{−T}^{T} X²(t) dt = ∫_{−T}^{T} X_T²(t) dt
• Power   P = lim_{T→∞} (1/2T) ∫_{−T}^{T} X²(t) dt = lim_{T→∞} (1/2T) ∫_{−T}^{T} X_T²(t) dt
2. Frequency domain (by Parseval's theorem)
• Energy  E = (1/2π) ∫_{−∞}^{∞} |XT(ω)|² dω
• Power   P = lim_{T→∞} (1/2T) · (1/2π) ∫_{−∞}^{∞} |XT(ω)|² dω
Taking the expectation of the power,
PXX = lim_{T→∞} (1/2T) ∫_{−T}^{T} E[X²(t)] dt
= lim_{T→∞} (1/2T) · (1/2π) ∫_{−∞}^{∞} E[|XT(ω)|²] dω
= (1/2π) ∫_{−∞}^{∞} lim_{T→∞} E[|XT(ω)|²]/(2T) dω
∵ lim_{T→∞} (1/2T) ∫_{−T}^{T} E[X²(t)] dt = A{E[X²(t)]}
∴ PXX = (1/2π) ∫_{−∞}^{∞} SXX(ω) dω
where SXX(ω) is called the Power Spectral Density (PSD) or Power Density Spectrum (PDS) and is given by
SXX(ω) = lim_{T→∞} E[|XT(ω)|²] / (2T)
The Wiener–Khinchin relation states that the Power Spectral Density (PSD) and the auto-correlation function form a Fourier transform pair:
SXX(ω) = ∫_{−∞}^{∞} RXX(τ) e^{−jωτ} dτ
RXX(τ) = (1/2π) ∫_{−∞}^{∞} SXX(ω) e^{+jωτ} dω
∴ RXX(τ) ⟷ SXX(ω)
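The pair can be checked numerically for the common example RXX(τ) = e^{−a|τ|}, whose PSD is known in closed form to be SXX(ω) = 2a/(a² + ω²) (a sketch; a and the integration grid are arbitrary choices):

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal rule (kept explicit to avoid version-specific helpers)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

a = 2.0
tau = np.linspace(-40.0, 40.0, 400001)   # wide grid: exp(-a|tau|) ~ 0 at the ends
r = np.exp(-a * np.abs(tau))

def psd(w):
    """S_XX(w) = integral of R_XX(tau)*exp(-j*w*tau) dtau (real here, R even)."""
    return trapezoid(r * np.cos(w * tau), tau)

s0 = psd(0.0)   # closed form 2a/a^2 = 1.0
s3 = psd(3.0)   # closed form 2a/(a^2 + 9) = 4/13
```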
Derivation of the PSD from the truncated transform:
SXX(ω) = lim_{T→∞} E[|XT(ω)|²]/(2T)
= lim_{T→∞} (1/2T) E[ XT*(ω) XT(ω) ]
= lim_{T→∞} (1/2T) E{ ∫_{t1=−T}^{T} X(t1) e^{jωt1} dt1 · ∫_{t2=−T}^{T} X(t2) e^{−jωt2} dt2 }
where X(t1) and X(t2) are two random variables obtained from the random process X(t) at t = t1 and t = t2.
SXX(ω) = lim_{T→∞} (1/2T) ∫_{t1=−T}^{T} ∫_{t2=−T}^{T} E[X(t1)X(t2)] e^{−jω(t2−t1)} dt1 dt2
Let t1 = t and t2 = t1 + τ ⇒ τ = t2 − t1:
SXX(ω) = lim_{T→∞} (1/2T) ∫_{t=−T}^{T} ∫_{τ=−T−t}^{T−t} E[X(t)X(t + τ)] e^{−jωτ} dτ dt
= ∫ [ lim_{T→∞} (1/2T) ∫_{t=−T}^{T} E[X(t)X(t + τ)] dt ] e^{−jωτ} dτ
= ∫ A{RXX(t, t + τ)} e^{−jωτ} dτ
When the random process X(t) is at least Wide-Sense Stationary (WSS r.p.), A{RXX(τ)} = RXX(τ), and we obtain
SXX(ω) = ∫_{−∞}^{∞} RXX(τ) e^{−jωτ} dτ = F{RXX(τ)}
RXX(τ) = (1/2π) ∫_{−∞}^{∞} SXX(ω) e^{jωτ} dω = F⁻¹{SXX(ω)}
1. The PSD is non-negative.
Proof.
SXX(ω) = lim_{T→∞} (1/2T) E[|XT(ω)|²]
Since |XT(ω)|² ≥ 0, the PSD is a non-negative function.
2. The PSD is real-valued.
Proof.
SXX(ω) = lim_{T→∞} (1/2T) E[|XT(ω)|²]
Since |XT(ω)|² is real, the PSD is a real-valued function.
3. The power spectral density is an even function: SXX(−ω) = SXX(ω)
Proof.
SXX(ω) = ∫_{−∞}^{∞} RXX(τ) e^{−jωτ} dτ    (8.1)
Replacing ω by −ω,
SXX(−ω) = ∫_{−∞}^{∞} RXX(τ) e^{+jωτ} dτ
Substituting τ → −τ and using the fact that RXX(τ) is even, RXX(−τ) = RXX(τ),
SXX(−ω) = ∫_{−∞}^{∞} RXX(−τ) e^{−jωτ} dτ = ∫_{−∞}^{∞} RXX(τ) e^{−jωτ} dτ = SXX(ω)    (8.2)
5. The total power (mean-square value) of the random process equals the total area under the PSD.
Proof. From the Wiener–Khinchin relation,
RXX(τ) = (1/2π) ∫_{−∞}^{∞} SXX(ω) e^{jωτ} dω
Setting τ = 0,
∴ RXX(0) = E[X²(t)] = (1/2π) ∫_{−∞}^{∞} SXX(ω) dω
NOTE:
• Total power: RXX(0) = (1/2π) ∫_{−∞}^{∞} SXX(ω) dω = ∫_{−∞}^{∞} SXX(f) df = E[X²(t)]
• SXX(0) = ∫_{−∞}^{∞} RXX(τ) dτ
6. The PSD of the derivative (d/dt)X(t) of the random process is ω² times the PSD of the process.
Proof.
XT(ω) = ∫_{−T}^{T} XT(t) e^{−jωt} dt
The Fourier transform of the derivative ẊT(t) is
F{ẊT(t)} = jω · XT(ω)
S_ẊẊ(ω) = lim_{T→∞} E[|jω XT(ω)|²]/(2T)
= lim_{T→∞} E[ (jω XT(ω)) (jω XT(ω))* ]/(2T)
= ω² lim_{T→∞} E[|XT(ω)|²]/(2T)
∴ S_ẊẊ(ω) = ω² SXX(ω)
Problem 1: Find the PSD corresponding to the auto-correlation function RXX(τ) = (A²/2) cos ω0τ and plot both the auto-correlation (ACF) and the PSD.
Solution:
RXX(τ) = (A²/2) cos ω0τ
= (A²/2) · (e^{jω0τ} + e^{−jω0τ})/2
= (A²/4) e^{jω0τ} + (A²/4) e^{−jω0τ}
Using F{e^{±jω0τ}} = 2πδ(ω ∓ ω0),
∴ SXX(ω) = (A²π/2) [ δ(ω − ω0) + δ(ω + ω0) ]
Aside (Fourier identities used above):
δ(ω) = 1; ω = 0
       0; ω ≠ 0
F⁻¹[δ(ω)] = (1/2π) ∫_{−∞}^{∞} δ(ω) e^{jωt} dω = 1/2π  ⇒  F[1] = 2πδ(ω)
For a shifted impulse X(ω) = δ(ω − ω0), by the sampling property of the impulse function,
F⁻¹[δ(ω − ω0)] = (1/2π) ∫_{−∞}^{∞} δ(ω − ω0) e^{jωt} dω = (1/2π) e^{jω0t}
⇒ F[e^{jω0t}] = 2πδ(ω − ω0), and with ω0 → −ω0, F[e^{−jω0t}] = 2πδ(ω + ω0)
Problem 2: Find the PSD of the triangular auto-correlation function given below.
Solution: Given
RXX(τ) = A(1 + τ/T); −T ≤ τ ≤ 0
         A(1 − τ/T); 0 ≤ τ ≤ T
         0;          otherwise
Method-1:
Using the ramp function r(τ),
RXX(τ) = (A/T) r(τ + T) − (2A/T) r(τ) + (A/T) r(τ − T)
(d/dτ) RXX(τ) = (A/T) U(τ + T) − (2A/T) U(τ) + (A/T) U(τ − T)
(d²/dτ²) RXX(τ) = (A/T) δ(τ + T) − (2A/T) δ(τ) + (A/T) δ(τ − T)    (8.3)
We know that F{(d²/dτ²) RXX(τ)} = (jω)² SXX(ω).
Now, using the differentiation and shifting properties:
if x(t) ↔ X(ω) then (d/dt)x(t) ↔ jω X(ω),
(d²/dt²)x(t) ↔ (jω)² X(ω), and
x(t − t0) ↔ X(ω) e^{−jωt0}
From equation (8.3), apply the Fourier transform on both sides:
(jω)² SXX(ω) = (A/T) F{δ(τ + T)} − (2A/T) F{δ(τ)} + (A/T) F{δ(τ − T)}
−ω² SXX(ω) = (A/T) e^{jωT} − 2A/T + (A/T) e^{−jωT}
= (2A/T) [ (e^{jωT} + e^{−jωT})/2 − 1 ]
= (2A/T) [ cos ωT − 1 ]
SXX(ω) = (2A/(Tω²)) [1 − cos ωT]
= (4A/(ω²T)) sin²(ωT/2)    ∵ sin²θ = (1 − cos 2θ)/2
= AT · sin²(ωT/2) / (ω²T²/4)
= AT [ sin(ωT/2)/(ωT/2) ]²
∴ SXX(ω) = AT sinc²(ωT/2)    (sinc x = sin x / x)
Method-2:
SXX(ω) = F{RXX(τ)} = ∫_{−T}^{T} RXX(τ) e^{−jωτ} dτ
= ∫_{−T}^{0} A(1 + τ/T) e^{−jωτ} dτ + ∫_{0}^{T} A(1 − τ/T) e^{−jωτ} dτ
(substituting τ → −τ in the first integral)
= ∫_{0}^{T} A(1 − τ/T) e^{+jωτ} dτ + ∫_{0}^{T} A(1 − τ/T) e^{−jωτ} dτ
= ∫_{0}^{T} A(1 − τ/T) [ e^{jωτ} + e^{−jωτ} ] dτ
= 2A ∫_{0}^{T} (1 − τ/T) cos ωτ dτ
= 2A ∫_{0}^{T} cos ωτ dτ − (2A/T) ∫_{0}^{T} τ cos ωτ dτ
= 2A [ sin ωτ/ω ]_{0}^{T} − (2A/T) [ τ sin ωτ/ω + cos ωτ/ω² ]_{0}^{T}    (integration by parts)
= (2A/ω) sin ωT − (2A/T) [ T sin ωT/ω + cos ωT/ω² − 1/ω² ]
= (2A/ω) sin ωT − (2A/ω) sin ωT − (2A/(Tω²)) cos ωT + 2A/(Tω²)
= (2A/(Tω²)) [1 − cos ωT]
= (4A/(ω²T)) sin²(ωT/2)    ∵ sin²θ = (1 − cos 2θ)/2
= AT [ sin(ωT/2)/(ωT/2) ]²
∴ SXX(ω) = AT sinc²(ωT/2)
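Both methods can be cross-checked numerically (a sketch; A and T are arbitrary, and sinc here is the unnormalized sin x / x of the text):

```python
import numpy as np

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

A, T = 3.0, 2.0
tau = np.linspace(-T, T, 200001)
r = A * (1.0 - np.abs(tau) / T)          # triangular ACF on [-T, T]

def psd_numeric(w):
    # R is even and real, so the transform reduces to a cosine integral.
    return trapezoid(r * np.cos(w * tau), tau)

def psd_closed(w):
    # AT * sinc^2(wT/2) with the unnormalized convention sinc x = sin(x)/x.
    half = w * T / 2.0
    s = np.sin(half) / half if half != 0 else 1.0
    return A * T * s * s
```

At ω = 0 both give the area under the triangle, AT.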
Problem 3: Find the PSD of the auto-correlation function RXX(τ) = e^{−2α|τ|}.
Solution:
SXX(f) = ∫_{−∞}^{∞} RXX(τ) e^{−j2πfτ} dτ
= ∫_{−∞}^{0} e^{2ατ} e^{−j2πfτ} dτ + ∫_{0}^{∞} e^{−2ατ} e^{−j2πfτ} dτ
= ∫_{−∞}^{0} e^{(2α−j2πf)τ} dτ + ∫_{0}^{∞} e^{(−2α−j2πf)τ} dτ
= [ e^{(2α−j2πf)τ}/(2α − j2πf) ]_{−∞}^{0} + [ e^{(−2α−j2πf)τ}/(−2α − j2πf) ]_{0}^{∞}
= (1/(2α − j2πf))[1 − 0] + (1/(−2α − j2πf))[0 − 1]
= 1/(2α − j2πf) + 1/(2α + j2πf)
= [ (2α + j2πf) + (2α − j2πf) ] / [ (2α)² + (2πf)² ]
= 4α / (4α² + 4π²f²)
∴ SXX(f) = 4α / (4α² + 4π²f²)
[Fig.: SXX(f) plotted versus f for α = 1 and α = 2; the curve is a peak at f = 0 that broadens as α increases]
Problem 4: The power spectral density of a WSS white noise process whose frequency components are limited to −W ≤ f ≤ W is shown in the Fig. (SXX(f) = η/2 for |f| ≤ W and 0 elsewhere).
(a) Find the average power of X(t).
(b) Find the auto-correlation of X(t).
Solution:
(a) Average power:
E[X²(t)] = ∫_{−∞}^{∞} SXX(f) df = ∫_{−W}^{W} (η/2) df = (η/2)[f]_{−W}^{W} = (η/2)(2W) = ηW
∴ E[X²(t)] = ηW
(b) The auto-correlation for this process is
RXX(τ) = ∫_{−∞}^{∞} SXX(f) e^{j2πfτ} df = ∫_{−W}^{W} (η/2) e^{j2πfτ} df
= (η/2) [ e^{j2πfτ}/(j2πτ) ]_{−W}^{W}
= (η/2) (e^{j2πWτ} − e^{−j2πWτ})/(j2πτ)
= (η/(2πτ)) sin(2πWτ)
= ηW · sin(2πWτ)/(2πWτ)
∴ RXX(τ) = ηW sinc(2Wτ)    ∵ sinc x = sin(πx)/(πx)
Problem 5: For the random process X(t) = A sin(ω0t + θ), where A and ω0 are real constants and θ is a random variable distributed uniformly in the interval 0 < θ < π/2 (so fθ(θ) = 2/π on (0, π/2)), find the average power PXX in X(t).
Solution:
First Approach:
E[X²(t)] = E[A² sin²(ω0t + θ)]    ∵ sin²φ = (1 − cos 2φ)/2
= E[ A²/2 − (A²/2) cos(2ω0t + 2θ) ]
= A²/2 − (A²/2) ∫_{0}^{π/2} cos(2ω0t + 2θ) · (2/π) dθ
= A²/2 − (A²/π) [ sin(2ω0t + 2θ)/2 ]_{0}^{π/2}
= A²/2 − (A²/2π) [ sin(2ω0t + π) − sin 2ω0t ]
= A²/2 − (A²/2π) [ −sin 2ω0t − sin 2ω0t ]    ∵ sin(φ + π) = −sin φ
= A²/2 + (A²/π) sin 2ω0t
Since E[X²(t)] is time dependent, X(t) is not a WSS random process, and we must also take the time average:
PXX = lim_{T→∞} (1/2T) ∫_{−T}^{T} E[X²(t)] dt
= lim_{T→∞} (1/2T) ∫_{−T}^{T} [ A²/2 + (A²/π) sin 2ω0t ] dt
= A²/2    (the sinusoidal term averages to zero)
∴ PXX = A²/2
Second Approach:
XT(ω) = ∫_{−∞}^{∞} XT(t) e^{−jωt} dt = ∫_{−T}^{T} A sin(ω0t + θ) e^{−jωt} dt
= A ∫_{−T}^{T} [ (e^{j(ω0t+θ)} − e^{−j(ω0t+θ)})/2j ] e^{−jωt} dt
= (A/2j) e^{jθ} ∫_{−T}^{T} e^{j(ω0−ω)t} dt − (A/2j) e^{−jθ} ∫_{−T}^{T} e^{−j(ω0+ω)t} dt
= (A/2j) e^{jθ} [ e^{j(ω0−ω)t}/(j(ω0−ω)) ]_{−T}^{T} − (A/2j) e^{−jθ} [ e^{−j(ω0+ω)t}/(−j(ω0+ω)) ]_{−T}^{T}
= (AT/j) [ e^{jθ} · sin((ω0−ω)T)/((ω0−ω)T) − e^{−jθ} · sin((ω0+ω)T)/((ω0+ω)T) ]
Forming |XT(ω)|² = XT(ω)XT*(ω), taking the expectation over θ, dividing by 2T and letting T → ∞ (the sinc² terms concentrate into impulses at ω = ±ω0), one obtains
PXX = (1/2π) ∫_{−∞}^{∞} SXX(ω) dω = A²/2
∴ Average power PXX = A²/2
Comparing the two methods, the direct (second) method is tedious; the first method is much easier to compute.
Problem 6: For the stationary ergodic random process having the auto-correlation function shown in the Fig. (from the figure, RXX(0) = 50 and RXX(τ) → 20 as τ → ∞), find (a) E[X(t)], (b) E[X²(t)], (c) σ²X of X(t).
Solution:
(a) X̄² = lim_{τ→∞} RXX(τ) = 20 ⇒ E[X(t)] = ±√20
(b) E[X²(t)] = RXX(0) = 50
(c) σ²X = E[X²(t)] − (E[X(t)])² = 50 − 20 = 30
Problem 7: Let X(t) = A cos(ω0t + θ), with θ uniform on (−π, π), and Y(t) = B cos ω0t, where B is a zero-mean random variable with E[B²] = 1, independent of θ. Find:
E[X(t)], E[Y(t)], E[X²(t)], E[Y²(t)], σ²X, σ²Y, RXX(τ), RYY(τ), CXX(τ), CYY(τ), RXY(τ), CXY(τ)
Solution:
1.
E[X(t)] = E[A cos(ω0t + θ)]
= A E[cos ω0t cos θ − sin ω0t sin θ]
= A cos ω0t E[cos θ] − A sin ω0t E[sin θ]
= A cos ω0t ∫_{−π}^{π} cos θ (1/2π) dθ − A sin ω0t ∫_{−π}^{π} sin θ (1/2π) dθ
= (A/2π) cos ω0t [sin θ]_{−π}^{π} − (A/2π) sin ω0t [−cos θ]_{−π}^{π}
= (A/2π) cos ω0t (0 − 0) − (A/2π) sin ω0t (−cos π + cos(−π))
= 0
2.
E[X²(t)] = E[A² cos²(ω0t + θ)]
= A² E[ (1 + cos 2(ω0t + θ))/2 ]
= A² · 1/2 + (A²/2) E[cos 2(ω0t + θ)]
= A²/2 + 0
∴ E[X²(t)] = A²/2
3. σ²X = m2 − m1² = A²/2 − 0² = A²/2
4.
RXX(τ) = E[X(t)X(t + τ)]
Let t = t1 and t + τ = t2:
= E[A cos(ω0t1 + θ) · A cos(ω0t2 + θ)]
= (A²/2) E[ cos(ω0t1 − ω0t2) + cos(ω0t1 + ω0t2 + 2θ) ]    ∵ 2 cos A cos B = cos(A−B) + cos(A+B)
= (A²/2) E[cos ω0(t1 − t2)] + (A²/2) E[cos(ω0t1 + ω0t2 + 2θ)]
The second expectation is zero (averaging over the uniform random phase θ), so
= (A²/2) cos ω0τ    (τ = t2 − t1, cosine even)
∴ RXX(τ) = (A²/2) cos ω0τ
5.
CXX(τ) = E[ (X(t) − X̄(t)) (X(t + τ) − X̄(t + τ)) ]
= RXX(τ) − X̄²
= (A²/2) cos ω0τ − 0²
∴ CXX(τ) = (A²/2) cos ω0τ
If τ = 0 (t1 = t2 = t), then CXX(0) = σ²X = A²/2
6.
E[Y(t)] = E[B cos ω0t] = cos ω0t · E[B] = 0
7.
E[Y²(t)] = E[B² cos² ω0t] = cos² ω0t · E[B²] = cos² ω0t
8.
σ²Y = E[Y²(t)] − (E[Y(t)])² = cos² ω0t − 0 = cos² ω0t
9.
RYY(τ) = E[Y(t)Y(t + τ)] = E[B cos ω0t · B cos ω0(t + τ)]
= cos ω0t cos ω0(t + τ) · E[B²] = cos ω0t cos ω0(t + τ)
10.
CYY(τ) = RYY(τ) − (Ȳ)² = cos ω0t cos ω0(t + τ)
11.
RXY(τ) = E[A cos(ω0t1 + θ) · B cos ω0t2]
= E[A cos(ω0t1 + θ)] · E[B cos ω0t2]    ∵ θ and B are independent
= 0
12.
CXY(τ) = RXY(τ) − X̄ Ȳ = 0
(Note that Y(t) is not WSS: its moments depend on t.)
Two types:
• Baseband random process
If the power spectral density SXX(ω) of a random process X(t) has its components concentrated around zero frequency, it is called a baseband random process. The frequency plot of a baseband random process is shown in Fig. (SXX(ω) occupies −W ≤ ω ≤ W; the bandwidth BW is marked). Its rms bandwidth is
Wrms = rms bandwidth = sqrt[ ∫_{−∞}^{∞} ω² SXX(ω) dω / ∫_{−∞}^{∞} SXX(ω) dω ]
8.3.2 Bandpass random process
If the power spectral density SXX(ω) of a random process X(t) does not have components around zero frequency (it is concentrated around ±ω0), it is called a bandpass random process. The frequency plot of a bandpass random process is shown in Fig. (two bands of width BW centered at ±ω0).
Wrms = rms bandwidth = sqrt[ 4 ∫_{−∞}^{∞} (ω − ω0)² SXX(ω) dω / ∫_{−∞}^{∞} SXX(ω) dω ]
where the mean frequency ω0 is
ω0 = ∫_{0}^{∞} ω SXX(ω) dω / ∫_{0}^{∞} SXX(ω) dω
Problem 9: The PSD of a baseband random process X(t) is SXX(ω) = 2 / [1 + (ω/2)²]². Find the rms bandwidth.
Solution: Since [1 + (ω/2)²]² = (4 + ω²)²/16, we have SXX(ω) = 32/(4 + ω²)².
Numerator part:
∫_{−∞}^{∞} ω² SXX(ω) dω = 32 ∫_{−∞}^{∞} ω²/(4 + ω²)² dω = 32 · π/(2·2) = 8π
    ∵ ∫_{−∞}^{∞} ω²/(a² + ω²)² dω = π/(2a), here with a = 2
Denominator part:
∫_{−∞}^{∞} SXX(ω) dω = 32 ∫_{−∞}^{∞} dω/(4 + ω²)² = 32 · π/(2·2³) = 2π
    ∵ ∫_{−∞}^{∞} dω/(a² + ω²)² = π/(2a³), here with a = 2
W²rms = [ ∫ ω² SXX(ω) dω ] / [ ∫ SXX(ω) dω ] = 8π/2π = 4
⇒ ∴ Wrms = 2 rad/s
Problem 10: Assume a random process with PSD
SXX(ω) = 1; for |ω| < B
         0; for |ω| ≥ B
Find the rms bandwidth.
Solution: Given SXX(ω) = 1 for −B ≤ ω ≤ B and 0 otherwise,
W²rms = ∫_{−∞}^{∞} ω² SXX(ω) dω / ∫_{−∞}^{∞} SXX(ω) dω
= ∫_{−B}^{B} ω² dω / ∫_{−B}^{B} 1 dω
= [ω³/3]_{−B}^{B} / [ω]_{−B}^{B}
= (2B³/3) / (2B)
= B²/3
⇒ ∴ rms bandwidth Wrms = B/√3
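The same calculation done numerically (a sketch; the value of B is arbitrary):

```python
import numpy as np

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

B = 5.0
w = np.linspace(-B, B, 200001)
s = np.ones_like(w)                       # rectangular PSD of height 1 on [-B, B]

w_rms = np.sqrt(trapezoid(w**2 * s, w) / trapezoid(s, w))
# should match B / sqrt(3)
```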
Problem 11: Assume a bandpass random process with PSD
SXX(ω) = 1; |ω ± ω0| < B/2
         0; elsewhere
Find the rms bandwidth. (The PSD consists of two unit-height bands of width B centered at ±ω0, as shown in the Fig.)
Solution:
W²rms = 4 ∫_{−∞}^{∞} (ω − ω0)² SXX(ω) dω / ∫_{−∞}^{∞} SXX(ω) dω;  where ω0 = ∫_{0}^{∞} ω SXX(ω) dω / ∫_{0}^{∞} SXX(ω) dω
(i)
∫_{0}^{∞} SXX(ω) dω = ∫_{ω0−B/2}^{ω0+B/2} 1 dω = (ω0 + B/2) − (ω0 − B/2) = B
(ii)
∫_{0}^{∞} ω SXX(ω) dω = ∫_{ω0−B/2}^{ω0+B/2} ω dω = [ω²/2]_{ω0−B/2}^{ω0+B/2}
= ½ [ (ω0 + B/2)² − (ω0 − B/2)² ]
= ½ (2ω0B) = Bω0
(iii)
∴ ∫_{0}^{∞} ω SXX(ω) dω / ∫_{0}^{∞} SXX(ω) dω = Bω0/B = ω0,    ∵ (i) and (ii)
consistent with ω0 being the band centre.
(iv)
W²rms = 4 ∫_{−∞}^{∞} (ω − ω0)² SXX(ω) dω / ∫_{−∞}^{∞} SXX(ω) dω    (8.4)
For the positive-frequency band,
∫_{ω0−B/2}^{ω0+B/2} (ω − ω0)² dω = [ (ω − ω0)³/3 ]_{ω0−B/2}^{ω0+B/2} = (1/3)[ (B/2)³ − (−B/2)³ ] = B³/12
By symmetry the negative-frequency band, with (ω + ω0)² as the centred deviation, contributes the same amount, so
W²rms = 4 · (2 · B³/12) / (2B) = B²/3
⇒ ∴ Wrms = rms bandwidth = B/√3
• For both the ideal low-pass and the ideal band-pass process, the rms bandwidth is the same, B/√3. For the band-pass case this holds only because of the factor 4 present in its rms-bandwidth definition.
Let X(t) and Y(t) be two random processes. Their truncated sample functions can be written as
XT(t) = X(t); −T ≤ t ≤ T
        0;    elsewhere
YT(t) = Y(t); −T ≤ t ≤ T
        0;    elsewhere
The cross power between X(t) and Y(t) in the interval (−T, T) can be written as
PXY(T) = (1/2T) ∫_{−∞}^{∞} XT(t) YT(t) dt = (1/2π) ∫_{−∞}^{∞} XT*(ω) YT(ω)/(2T) dω    (by Parseval's theorem)
Taking the limit and the expectation,
PXY = lim_{T→∞} (1/2T) ∫_{−∞}^{∞} E[XT(t)YT(t)] dt = (1/2π) ∫_{−∞}^{∞} lim_{T→∞} E[XT*(ω)YT(ω)]/(2T) dω
∴ PXY = A{E[X(t)Y(t)]} = (1/2π) ∫_{−∞}^{∞} SXY(ω) dω
where SXY(ω) = lim_{T→∞} E[XT*(ω)YT(ω)]/(2T) is the cross power spectral density.
The Wiener–Khinchin relation states that the Cross Power Spectral Density SXY(ω) and the cross-correlation function RXY(τ) form a Fourier transform pair:
SXY(ω) = ∫_{−∞}^{∞} RXY(τ) e^{−jωτ} dτ
RXY(τ) = (1/2π) ∫_{−∞}^{∞} SXY(ω) e^{+jωτ} dω
∴ RXY(τ) ⟷ SXY(ω)
Derivation. With XT(ω) built from X(t) at t = t1 and YT(ω) from Y(t) at t = t2 = t1 + τ,
SXY(ω) = lim_{T→∞} (1/2T) E{ ∫_{t1=−T}^{T} XT(t1) e^{jωt1} dt1 · ∫_{t2=−T}^{T} YT(t2) e^{−jωt2} dt2 }
= lim_{T→∞} (1/2T) ∫_{t1=−T}^{T} ∫_{t2=−T}^{T} E[XT(t1)YT(t2)] e^{−jω(t2−t1)} dt2 dt1
Let t1 = t and t2 = t1 + τ ⇒ τ = t2 − t1:
= lim_{T→∞} (1/2T) ∫_{t=−T}^{T} ∫ E[XT(t)YT(t + τ)] e^{−jωτ} dτ dt
= ∫ A{RXY(t, t + τ)} e^{−jωτ} dτ
If X(t) and Y(t) are jointly WSS random processes, then A{RXY(τ)} = RXY(τ), and
SXY(ω) = ∫_{−∞}^{∞} RXY(τ) e^{−jωτ} dτ = F{RXY(τ)}
RXY(τ) = (1/2π) ∫_{−∞}^{∞} SXY(ω) e^{jωτ} dω = F⁻¹{SXY(ω)}
1. SXY(−ω) = SYX(ω)
Proof.
SXY(ω) = ∫_{−∞}^{∞} RXY(τ) e^{−jωτ} dτ
Replacing ω by −ω,
SXY(−ω) = ∫_{−∞}^{∞} RXY(τ) e^{+jωτ} dτ
Substituting τ → −τ and using the symmetry RXY(−τ) = RYX(τ),
SXY(−ω) = ∫_{−∞}^{∞} RXY(−τ) e^{−jωτ} dτ = ∫_{−∞}^{∞} RYX(τ) e^{−jωτ} dτ = SYX(ω)
2. The real part of the cross PSD is an even function of ω and the imaginary part is an odd function of ω.
Proof.
SXY(ω) = ∫_{−∞}^{∞} RXY(τ) e^{−jωτ} dτ
= ∫_{−∞}^{∞} RXY(τ) [cos ωτ − j sin ωτ] dτ
Re[SXY(ω)] = ∫_{−∞}^{∞} RXY(τ) cos ωτ dτ ⇒ even function of ω
Im[SXY(ω)] = −∫_{−∞}^{∞} RXY(τ) sin ωτ dτ ⇒ odd function of ω
3. If X(t) and Y(t) are orthogonal random processes, then the cross PSD is zero.
Proof.
SXY(ω) = ∫_{−∞}^{∞} RXY(τ) e^{−jωτ} dτ
If X(t) and Y(t) are orthogonal, RXY(τ) = E[X(t)Y(t + τ)] = 0.
∴ SXY(ω) = 0
4. If X(t) and Y(t) are uncorrelated (independent) WSS r.p., then SXY(ω) = 2π X̄ Ȳ δ(ω)
Proof. From the Wiener–Khinchin relation,
SXY(ω) = ∫_{−∞}^{∞} RXY(τ) e^{−jωτ} dτ
= ∫_{−∞}^{∞} E[X(t)Y(t + τ)] e^{−jωτ} dτ
= ∫_{−∞}^{∞} E[X(t)] E[Y(t + τ)] e^{−jωτ} dτ    ∵ X(t), Y(t) are independent
= X̄ Ȳ ∫_{−∞}^{∞} e^{−jωτ} dτ    ∵ WSS: E[X(t)] = X̄; E[Y(t + τ)] = Ȳ
= X̄ Ȳ · 2πδ(ω)
∴ SXY(ω) = 2π X̄ Ȳ δ(ω)
∴ The white noise process is a zero-mean WSS process whose PSD is constant (flat) for all frequencies, SW(f) = N0/2. Strictly speaking, the inverse Fourier transform of a function that is flat for all frequencies f does not exist as an ordinary function:
RWW(τ) = F⁻¹{SW(f)} = F⁻¹{N0/2} = (N0/2) δ(τ)
and the mean-square value is
RWW(0) = E[W²(t)] = ∫_{−∞}^{∞} SW(f) df = ∫_{−∞}^{∞} (N0/2) df = ∞
So the mean-square value (power) of the white noise process is infinite. Since it is not possible to have a random process with infinite power, white noise does not exist in the physical world; it is a mathematical model that can be used as a close approximation to real-world processes.
For Gaussian white noise, often called white Gaussian noise, any two (or several) distinct random variables taken from the process are zero mean and uncorrelated and, being jointly Gaussian, they are also independent.
(SNR)dB = 10 log10 (Signal power / Noise power)
Here,
Si − input signal power;  So − output signal power
Ni − input noise power;  No − output noise power
G − system gain
Any practical circuit contains noise-producing active/passive elements, so the SNR at the output will always be less than the SNR at the input; i.e., there is a deterioration of SNR. Thus an amplifier does not improve SNR, it only degrades it.
Noise figure:
F = Noise figure = (S/N)i / (S/N)o
CHAPTER 9
9.1 Introduction
In applications of random processes, the input–output relation through a linear system can be described as follows.
Here X(t) is a random process and h(t) (a deterministic function) is the impulse response of the linear system (a filter or another linear system).
1. Time domain: The output in the time domain is the convolution of the input random process X(t) and the impulse response h(t), i.e.,
Y(t) = X(t) * h(t) = ∫_{−∞}^{∞} X(τ) h(t − τ) dτ = ∫_{−∞}^{∞} h(τ) X(t − τ) dτ
2. Frequency domain: The output in the frequency domain is the product of the Fourier transform of the input and the Fourier transform of the impulse response:
X(f) = ∫_{−∞}^{∞} X(t) e^{−j2πft} dt ⇒ FT of the input r.p. X(t) (itself a random quantity)
H(f) = ∫_{−∞}^{∞} h(t) e^{−j2πft} dt ⇒ FT of the deterministic impulse response
Y(f) = X(f) H(f)
Q: Can you evaluate the Fourier transform X(f) of an input random process X(t)?
A: In general, NO. Since X(t) is random, in general it has no fixed mathematical expression.
Q: How, then, can we describe the behavior of the input random process and the output random process through a linear system?
A: Case 1: Using the auto-correlation function RXX(τ) of the random process X(t). Assume a WSS process (constant mean, with RXX(τ) a deterministic function of ‘τ’ only):
RXX(τ) = E[X(t)X(t + τ)]
The auto-correlation tells us how the random process varies, i.e., whether it is a slowly varying or a fast varying process.
NOTE: The input and output of the linear system are shown below in the time and frequency domain, assuming the random process X(t) is WSS.
The mean value of the output can be written as follows. Starting from
Y(t) = X(t) * h(t) = ∫_{−∞}^{∞} h(τ) X(t − τ) dτ
E[Y(t)] = ∫_{−∞}^{∞} h(τ) E[X(t − τ)] dτ = X̄ ∫_{−∞}^{∞} h(τ) dτ    ∵ X(t) WSS ⇒ E[X(t − τ)] = X̄
∴ The mean value of Y(t) is the product of the mean value of X(t) and the area under the impulse response.
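In discrete time the same statement reads E[Y] = X̄ · Σₙ h[n]; a quick simulation sketch (the FIR taps and the input statistics are arbitrary illustrative choices):

```python
import numpy as np

# Filter a noisy WSS input with mean mu through an arbitrary FIR filter h;
# the output mean should approach mu * sum(h), the discrete "area" under
# the impulse response (the filter's DC gain).
rng = np.random.default_rng(2)
mu = 3.0
h = np.array([0.5, 1.0, -0.25, 0.75])        # illustrative impulse response
x = mu + rng.standard_normal(500_000)        # mean mu, unit-variance noise
y = np.convolve(x, h, mode="valid")

out_mean = y.mean()                          # ≈ mu * h.sum() = 3.0 * 2.0
```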
For the mean-square value, start again from
Y(t) = X(t) * h(t) = ∫_{−∞}^{∞} h(τ1) X(t − τ1) dτ1
E[Y²(t)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} E[X(t − τ1)X(t − τ2)] h(τ1) h(τ2) dτ1 dτ2
= ∫_{−∞}^{∞} ∫_{−∞}^{∞} RXX(τ1 − τ2) h(τ1) h(τ2) dτ1 dτ2
RYY(τ) = E[Y(t)Y(t + τ)]
= E{ ∫_{−∞}^{∞} h(τ1) X(t − τ1) dτ1 · ∫_{−∞}^{∞} h(τ2) X(t + τ − τ2) dτ2 }
= ∫_{−∞}^{∞} ∫_{−∞}^{∞} E[X(t − τ1)X(t + τ − τ2)] h(τ1) h(τ2) dτ1 dτ2
= ∫_{−∞}^{∞} ∫_{−∞}^{∞} RXX(τ + τ1 − τ2) h(τ1) h(τ2) dτ1 dτ2
This double integral can equivalently be written compactly as a double convolution:
∴ RYY(τ) = RXX(τ) * h(τ) * h(−τ)
The input–output cross-correlation:
RXY(τ) = E[X(t)Y(t + τ)]
= E{ X(t) ∫_{−∞}^{∞} h(τ1) X(t + τ − τ1) dτ1 }
= ∫_{−∞}^{∞} RXX(τ − τ1) h(τ1) dτ1
∴ RXY(τ) = RXX(τ) * h(τ)
The output PSD: SYY(ω) = F{RYY(τ)} = ∫_{−∞}^{∞} RYY(τ) e^{−jωτ} dτ    (9.1)
with the ACF RYY(τ) = E[Y(t)Y(t + τ)]    (9.2)
Y(t) = X(t) * h(t) = ∫_{−∞}^{∞} h(α1) X(t − α1) dα1
Y(t + τ) = ∫_{−∞}^{∞} h(α2) X(t + τ − α2) dα2
so that
RYY(τ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(α1) h(α2) RXX(τ + α1 − α2) dα1 dα2
Substituting into (9.1) and letting α = τ + α1 − α2 (so dτ = dα and τ = α − α1 + α2),
SYY(ω) = ∫ h(α1) e^{jωα1} dα1 · ∫ h(α2) e^{−jωα2} dα2 · ∫ RXX(α) e^{−jωα} dα
= H*(ω) · H(ω) · SXX(ω)
Alternate Method: from the response of the LTI system for the ACF,
RYY(τ) = RXX(τ) * h(−τ) * h(τ)
SYY(ω) = SXX(ω) · H*(ω) · H(ω)
∴ SYY(ω) = SXX(ω) |H(ω)|²
The input and output powers:
PXX = E[X²(t)] = RXX(0) = ∫_{−∞}^{∞} SXX(f) df
PYY = E[Y²(t)] = RYY(0) = ∫_{−∞}^{∞} SYY(f) df = ∫_{−∞}^{∞} SXX(f) |H(f)|² df
Problem 1: Let X(t) be the random process with PSD SXX(ω) = η/2 as shown in Fig. Find the output power of an LTI system whose frequency response is
H(ω) = 1; |ω| ≤ ωc
       0; otherwise
Solution:
PYY = (1/2π) ∫_{−∞}^{∞} SYY(ω) dω = (1/2π) ∫_{−ωc}^{ωc} (η/2) dω
= (1/2π)(η/2)[ω]_{−ωc}^{ωc} = (1/2π)(η/2)(2ωc)
= ηωc/2π  Watts (or V²)
Problem 2: Let X(t) be the random process with PSD SXX(ω) = η/2 as shown in Fig. Find the output power of an LTI system whose frequency response is
H(ω) = 1; (ωc − B/2) ≤ |ω| ≤ (ωc + B/2)
       0; otherwise
Solution: In radian frequency,
PYY = (1/2π) · 2 · ∫_{ωc−B/2}^{ωc+B/2} (η/2) dω
= (η/2π) [ω]_{ωc−B/2}^{ωc+B/2}
= (η/2π) [ (ωc + B/2) − (ωc − B/2) ]
= ηB/2π  Watts
Equivalently, working in Hz with the two bands of width B,
PYY = 2 · (η/2) [f]_{fc−B/2}^{fc+B/2} = η [ (fc + B/2) − (fc − B/2) ] = ηB Watts (with B in Hz)
Problem 3: A random noise X(t) having power spectrum SXX(ω) = 3/(49 + ω²) is applied to a network for which h(t) = t² e^{−7t}, t ≥ 0. The network response is denoted by Y(t). Find (i) the input power PXX, (ii) the output PSD SYY(ω), and (iii) the output power PYY.
Solution:
(i)
PXX = (1/2π) ∫_{−∞}^{∞} SXX(ω) dω
= (1/2π) ∫_{−∞}^{∞} 3/(49 + ω²) dω
= (3/2π) · (1/7) [tan⁻¹(ω/7)]_{−∞}^{∞}
= (3/2π) · (1/7) [π/2 − (−π/2)]
= (3/2π) · (π/7)
= 3/14 Watts
∴ PXX = 3/14 Watts
(ii)
H(ω) = ∫_{0}^{∞} t² e^{−7t} e^{−jωt} dt = ∫_{0}^{∞} t² e^{−(7+jω)t} dt
Let x = (7 + jω)t ⇒ dx = (7 + jω) dt, i.e., dt = dx/(7 + jω); when t = 0, x = 0 and when t = ∞, x = ∞.
∴ H(ω) = ∫_{0}^{∞} [x²/(7 + jω)²] e^{−x} · dx/(7 + jω)
= (1/(7 + jω)³) ∫_{0}^{∞} e^{−x} x^{3−1} dx    ∵ ∫_{0}^{∞} e^{−x} x^{n−1} dx = Γ(n)
= Γ(3)/(7 + jω)³    ∵ Γ(n + 1) = nΓ(n) = n!
= 2!/(7 + jω)³
∴ H(ω) = 2/(7 + jω)³
|H(ω)|² = | 2/(7 + jω)³ |² = 4/(49 + ω²)³
∴ SYY(ω) = SXX(ω) · |H(ω)|²
= [3/(49 + ω²)] · [4/(49 + ω²)³]
= 12/(49 + ω²)⁴
∴ SYY(ω) = 12/(49 + ω²)⁴
(iii)
PYY = (1/2π) ∫_{−∞}^{∞} SYY(ω) dω = (1/2π) ∫_{−∞}^{∞} 12/(49 + ω²)⁴ dω
Let ω = 7 tan θ ⇒ dω = 7 sec²θ dθ; ω = −∞ ⇒ θ = −π/2 and ω = ∞ ⇒ θ = π/2. Then (49 + ω²)⁴ = 49⁴(1 + tan²θ)⁴ = 49⁴ sec⁸θ, so
PYY = (1/2π) ∫_{−π/2}^{π/2} [12/(49⁴ sec⁸θ)] · 7 sec²θ dθ
= (12 × 7/(2π × 49⁴)) ∫_{−π/2}^{π/2} cos⁶θ dθ
= (12 × 7 × 2/(2π × 49⁴)) ∫_{0}^{π/2} cos⁶θ dθ
We know that
∫_{0}^{π/2} cosⁿθ dθ = ((n−1)/n)((n−3)/(n−2)) ··· (1/2)(π/2); ‘n’ even
                      ((n−1)/n)((n−3)/(n−2)) ··· (2/3);    ‘n’ odd
∫_{0}^{π/2} cos⁶θ dθ = (5/6)(3/4)(1/2)(π/2)
⇒ PYY = (12 × 7/(π × 49⁴)) × (5/6)(3/4)(1/2)(π/2)
= (7 × 5 × 3)/(2 × 4 × 49⁴)
= 13.125/49⁴ ≈ 2.28 × 10⁻⁶ Watts
∴ PYY ≈ 2.28 μW
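A numerical-integration cross-check of this output power (a sketch):

```python
import numpy as np

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# P_YY = (1/2pi) * integral of 12/(49 + w^2)^4 dw.  The integrand decays
# like w^-8, so a finite grid captures essentially all of the area.
w = np.linspace(-200.0, 200.0, 400001)
syy = 12.0 / (49.0 + w**2) ** 4
p_yy = trapezoid(syy, w) / (2.0 * np.pi)
# should match the closed form 13.125 / 49**4 ≈ 2.28e-6 W
```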
Problem 4: A random voltage modeled by a white noise process X(t) whose power spectral density is N0/2 is an input to the RC network shown in Fig. Find
1. The output PSD SYY(ω)
2. The auto-correlation function RYY(τ)
3. The output power E[Y²(t)]
Solution: For the RC low-pass network,
H(ω) = (1/jωC) / (R + 1/jωC) = 1/(1 + jωRC)
(a)
SYY(ω) = |H(ω)|² SXX(ω) = [1/(1 + ω²R²C²)] × (N0/2)
(b) Taking the inverse Fourier transform,
RYY(τ) = (N0/4RC) e^{−|τ|/RC}
(c)
E[Y²(t)] = RYY(0) = N0/4RC
Problem 5: A WSS r.p. X(t) with PSD
SXX(f) = 10⁻⁴; |f| < 100
         0;    otherwise
is the input to an RC filter with the frequency response H(f) = 1/(100π + j2πf). The filter output is the stochastic process Y(t). What are
(a) E[X²(t)]  (b) SXY(f)  (c) SYX(f)  (d) SYY(f)  (e) E[Y²(t)]
Solution:
(a) We know that
RXX(τ) = (1/2π) ∫_{−∞}^{∞} SXX(ω) e^{jωτ} dω = ∫_{−∞}^{∞} SXX(f) e^{j2πfτ} df
If τ = 0, RXX(0) = E[X²(t)] = ∫_{−∞}^{∞} SXX(f) df
= ∫_{−100}^{100} 10⁻⁴ df = 10⁻⁴ [f]_{−100}^{100} = 10⁻⁴(200) = 0.02
(b) SXY(f) = H(f) SXX(f) = 10⁻⁴ H(f) for |f| ≤ 100, 0 otherwise
∴ SXY(f) = 10⁻⁴/(100π + j2πf); |f| ≤ 100
           0;                   otherwise
(c) SYX(f) = S*XY(f), since RYX(τ) = RXY(−τ):
∴ SYX(f) = 10⁻⁴/(100π − j2πf); |f| ≤ 100
           0;                   otherwise
(d) SYY(f) = |H(f)|² SXX(f):
∴ SYY(f) = 10⁻⁴/(10⁴π² + 4π²f²); |f| ≤ 100
           0;                     otherwise
(e)
E[Y²(t)] = ∫_{−∞}^{∞} SYY(f) df = ∫_{−100}^{100} 10⁻⁴/(10⁴π² + 4π²f²) df
= (10⁻⁴/4π²) ∫_{−100}^{100} df/(2500 + f²)
= (10⁻⁴/4π²) · (1/50) [tan⁻¹(f/50)]_{−100}^{100}    ∵ ∫ dx/(x² + a²) = (1/a) tan⁻¹(x/a)
= (10⁻⁴/(200π²)) · 2 tan⁻¹(2)
= (10⁻⁴ × 1.1071)/(100π²)
≈ 1.122 × 10⁻⁷
∴ E[Y²(t)] ≈ 1.122 × 10⁻⁷
Problem 6: Let X(t) be a WSS Gaussian r.p. with mean X̄(t) = 0 and auto-correlation function RXX(τ) = 10δ(τ), where δ(·) is a Dirac delta function (so SXX(f) = 10 for all f). The random process X(t) is run through a filter combination as shown below, where the first filter has frequency response
H1(f) = 1; |f| < 3
        0; otherwise
and the second filter has frequency response H2(f) = e^{−2f²}.
(a) Find the power E[Y²(t)] in Y(t)
(b) Find the power E[Z²(t)] in Z(t)
Solution:
(a) SYY(f) = |H1(f)|² SXX(f) = 10 for |f| < 3 and 0 otherwise.
∴ PYY = E[Y²(t)] = ∫_{−∞}^{∞} SYY(f) df = ∫_{−3}^{3} 10 df = 60 Watts
(b) |H2(f)|² = (e^{−2f²})² = e^{−4f²}, so
SZZ(f) = |H2(f)|² SYY(f) = 10 e^{−4f²}; |f| < 3
                            0;           otherwise
∴ PZZ = E[Z²(t)] = ∫_{−3}^{3} 10 e^{−4f²} df
Writing e^{−4f²} = e^{−f²/2σ²} with σ² = 1/8 (a Gaussian shape with σ = 1/(2√2)),
= 10 √(2π) σ [1 − 2Q(3/σ)]
= 10 · (√(2π)/(2√2)) [1 − 2Q(6√2)]
≈ 10 · (√π/2)    (Q(6√2) is negligibly small)
≈ 8.86 Watts
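The Gaussian-tail evaluation can be confirmed numerically (a sketch using only the standard library):

```python
import math

# P_ZZ = integral over (-3, 3) of 10*exp(-4 f^2) df.  With u = 2f this is
# (10/2) * integral over (-6, 6) of exp(-u^2) du = 5 * sqrt(pi) * erf(6).
p_zz_closed = 5.0 * math.sqrt(math.pi) * math.erf(6.0)

# Brute-force midpoint rule as an independent check.
n = 600_000
df = 6.0 / n
p_zz_num = sum(
    10.0 * math.exp(-4.0 * (-3.0 + (k + 0.5) * df) ** 2) for k in range(n)
) * df
```

Both evaluations agree at about 8.86 W, matching 10·√π/2.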
Equation (9.4) is often used in practical applications, but we need a simplified method to compute the noise power at the output of a filter.
Let H(ω) be a lowpass system transfer function and let the spectrum of the input process equal N0/2 for all ω, with N0 a positive, real constant (such a spectrum is called a white noise spectrum).
By using equation (9.4),
PYY = (1/2π) ∫_{−∞}^{∞} |H(ω)|² (N0/2) dω
Define the ideal LPF
HI(ω) = H(0); |ω| ≤ WN
        0;    |ω| > WN
where WN is a positive constant chosen such that the noise power at the output of the ideal filter equals the noise power at the output of the original (practical) filter:
(1/2π) ∫_{−∞}^{∞} |H(ω)|² (N0/2) dω = (1/2π) ∫_{−WN}^{WN} |H(0)|² (N0/2) dω    (9.5)
Solving for WN,
WN = ∫_{0}^{∞} |H(ω)|² dω / |H(0)|²    (9.6)
where WN is called the equivalent noise bandwidth of the filter with the transfer function H(ω).
In the Fig., the solid curve represents the practical characteristic and the dashed line the ideal rectangular one. The equivalent noise bandwidth is such that in this picture the dashed area equals the shaded area.
From equations (9.4) and (9.5), the output power of the filter can be written as
PYY = (N0/2π) |H(0)|² WN    (9.7)
Thus, for the special case of white noise input, the integral of equation (9.4) reduces to a product, and the filter can be characterized by means of a single number WN as far as its noise filtering behavior is concerned.
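For the RC low-pass filter H(ω) = 1/(1 + jωRC), equation (9.6) gives WN = π/(2RC) analytically; a numerical sketch (RC = 1 is an arbitrary choice):

```python
import numpy as np

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

RC = 1.0
w = np.linspace(0.0, 2e4, 2_000_001)     # |H|^2 ~ w^-2, so a wide grid for the tail
h2 = 1.0 / (1.0 + (w * RC) ** 2)         # |H(w)|^2 for the RC low-pass

w_n = trapezoid(h2, w) / 1.0             # divide by |H(0)|^2 = 1
# analytic value: integral of dw/(1+(w*RC)^2) from 0 to inf = pi/(2*RC)
```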
• Thermal noise arises from the random motion of charge carriers; the intensity of this motion increases with increasing temperature and is zero only at a temperature of absolute zero. Its power spectral density has the form
G(f) = A|f| / (e^{B|f|} − 1)
where A and B are constants that depend on temperature and other physical constants.
• For frequencies below the knee of this curve, G(f) is almost constant. If we operate in this frequency range, we can consider thermal noise to be white noise. The mean-square noise voltage across a resistor is then
v̄² = R(0) = 4kTRB
where k − Boltzmann's constant = 1.38 × 10⁻²³ J/K; T − temperature in K (kelvin); R − resistance value; B − observation bandwidth.
∴ The height of the PSD over this constant region is 2kTR.
• If we have a system with a number of noise-generating devices within it, we often refer to the system noise temperature Te, in kelvins. This is the temperature of a single noise source that would produce the same total noise power at the output.
• If the input to the system contains noise, the system adds its own noise to produce a larger output noise.
• The system noise figure is the ratio of the noise power at the output to that at the input. It is usually expressed in decibels.
EX: A noise figure of 3 dB indicates that the system adds an amount of noise equal to that which appears at the input, so the output noise power is twice that of the input.
9.4 Narrow Band Noise
• Most communication systems contain band-pass filters; therefore noise appearing at the input to the system will be shaped into band-limited noise by the filtering operation. If the bandwidth of the noise is relatively small compared to the center frequency, we refer to this as narrow-band noise.
• We have no problem deriving the PSD and ACF of this noise, and these quantities are sufficient to analyze the effect of linear systems. However, when dealing with multipliers, the frequency-analysis approach is not sufficient, since non-linear operations are present. In such cases, it proves useful to have trigonometric expressions for the noise signals. The form of these expressions is
n(t) = Re{r(t) e^{j2πf0t}}
where r(t) is a complex function with a low-frequency, band-limited Fourier transform, Re denotes the real part of the expression in the brackets that follows it, and the exponential function has the effect of shifting the frequencies of r(t) by f0. By Euler's identity, equations (9.8) and (9.10) are equivalent; equation (9.8) is most conveniently handled by using “Hilbert transforms”.
• The Hilbert transform operation can be represented by a linear system, with H(f) as shown in Fig.
• The phase function of a real system must be odd. The system function is given by
H(f) = −j sgn(f)
• The impulse response of this system is the inverse transform of H(f). This is given by
h(t) = 1/(πt)
• The Hilbert transform of S(t) is given by the convolution of S(t) with h(t). Denoting the transform by Ŝ,
Ŝ(t) = (1/π) ∫_{−∞}^{∞} S(τ)/(t − τ) dτ
and the inverse relation is
S(t) = −(1/π) ∫_{−∞}^{∞} Ŝ(τ)/(t − τ) dτ
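The −j sgn(f) frequency response is easy to apply with an FFT; a minimal sketch (signal length and frequency are arbitrary, and this discrete version only approximates the continuous transform):

```python
import numpy as np

def hilbert_transform(x):
    """Apply H(f) = -j*sgn(f) to a real signal via the FFT."""
    X = np.fft.fft(x)
    f = np.fft.fftfreq(len(x))
    Xh = -1j * np.sign(f) * X          # sgn(0) = 0 keeps the DC term at zero
    return np.fft.ifft(Xh).real

# Classic check: the Hilbert transform of cos(w*t) is sin(w*t).
n = 256
t = np.arange(n)
x = np.cos(2 * np.pi * 8 * t / n)      # integer number of cycles -> exact FFT bins
xh = hilbert_transform(x)              # ≈ sin(2*pi*8*t/n)
```

Because the cosine occupies exact FFT bins here, the result matches sin(ωt) to machine precision.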