RVSP
UNIT-I
THE RANDOM VARIABLE
PROBABILITY
Probability theory is used in all those situations where there is
randomness about the occurrence of an event.
(Or)
A measure of the likelihood that an event will occur.
Randomness means the existence of a certain amount of uncertainty about
an event.
This section deals with the chance of occurrence of particular
phenomena, e.g. electron emission, telephone calls, radar detection,
quality control, system failure, games of chance, birth and death rates,
random walks, probability of detection, probability of false alarm, BER
calculation, optimal coding (Huffman) and many more.
Experiment:
A random experiment is an action or process that leads to one of several
possible outcomes.
Sample Space:
The sample space S is the collection of all possible outcomes of a random
experiment. The elements of S are called sample points.
A sample space may be finite, countably infinite or uncountable.
A finite or countably infinite sample space is called a discrete
sample space.
An uncountable sample space is called a continuous sample space.
Types of Sample Space:
Finite/Discrete Sample Space:
Consider the experiment of tossing a coin twice. The sample space can be
S = {HH, HT, TH, TT}. The above sample space has a finite number of
sample points, so it is called a finite sample space.
Countably Infinite Sample Space:
Consider that a light bulb is manufactured. It is then tested for its life
length by inserting it into a socket and the time elapsed (in hours) until it
burns out is recorded.
Suppose the measuring instrument is capable of recording time to two
decimal places, for example 8.32 hours.
Now the sample space becomes countably infinite, i.e.
S = {0.00, 0.01, 0.02, …}
The above sample space is called a countably infinite sample space.
Uncountable / Infinite Sample Space/Continuous Sample Space
If the sample space consists of an uncountably infinite number of
elements, it is called an uncountable/infinite sample space.
Event
An event is simply a set of possible outcomes. To be more specific, an
event is a subset A of the sample space S.
Example 1: Tossing a fair coin; the event "a head appears" is the subset A = {H} of S = {H, T}.
Example 2: Throwing a fair die; the event "an even number appears" is the subset A = {2, 4, 6} of S = {1, 2, 3, 4, 5, 6}.
Types of Events:
Exhaustive Events:
A set of events is said to be exhaustive, if it includes all the possible
events.
Ex. In tossing a coin, the outcome can be either head or tail and there is
no other possible outcome. So, the set of events {H, T} is exhaustive.
Mutually Exclusive Events:
Two events are said to be mutually exclusive if they cannot occur at the
same time.
Ex. In tossing a coin, both head and tail cannot happen at the same time.
Independent Events:
Two events are said to be independent, if happening or failure of one
does not affect the happening or failure of the other. Otherwise, the
events are said to be dependent.
If two events A and B are independent, then the joint probability is
P(A∩B) = P(A) P(B)
Axioms of Probability
For any event A, we assign a number P(A), called the probability of the event
A. This number satisfies the following conditions, which act as the axioms of
probability:
(i) P(A) ≥ 0 (probability is a non-negative number)
(ii) P(S) = 1 (probability of the whole sample space is unity)
(iii) If A and B are mutually exclusive events, then P(A∪B) = P(A) + P(B).
Note that (iii) states that if A and B are mutually exclusive (M.E.) events, the
probability of their union is the sum of their probabilities.
Relative Frequency
Consider a random experiment with sample space S. We shall assign a
non-negative number called probability to each event in the sample space.
Let A be a particular event in S. Then "the probability of event A" is
denoted by P(A).
Suppose that the random experiment is repeated n times. If the event A
occurs n_A times, then the probability of event A is defined as the "relative
frequency".
• Relative Frequency Definition:
The probability of an event A is defined as
P(A) = lim_{n→∞} (n_A / n)
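As a quick illustration, the following Python sketch (a minimal simulation; the event A = "a six is rolled" on a fair die is an assumed example) estimates P(A) by its relative frequency:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000                    # number of repetitions of the experiment
    rolls = rng.integers(1, 7, n)  # n throws of a fair die (outcomes 1..6)
    nA = np.sum(rolls == 6)        # n_A: number of times event A occurred
    print(nA / n)                  # relative frequency -> P(A) = 1/6 ~ 0.1667

As n grows, the relative frequency n_A/n settles near the true probability 1/6.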
Joint probability
Joint probability is defined as the probability of joint (or) simultaneous
occurrence of two (or) more events.
The joint probability of two events A and B is denoted by P(AB) or P(A∩B), and
P(AB) = P(A∩B) = P(A) + P(B) − P(A∪B)
Random Variable
A random variable is a real-valued function that maps all the elements of
the sample space onto points on the real axis.
(OR)
A random variable is a function that maps outcomes of a random
experiment to real numbers.
A (real-valued) random variable, often denoted by X (or some other
capital letter), is a function mapping a probability space (S, P) into the
real line R.
As shown in the figure, with each point s in the domain S the function X
associates one and only one value X(s) in the range R.
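To make the mapping concrete, here is a small Python sketch (an illustrative assumption, reusing the two-coin-toss sample space from earlier) where X(s) counts the number of heads in each outcome:

    import itertools

    # Sample space of tossing a coin twice: S = {HH, HT, TH, TT}
    S = [''.join(p) for p in itertools.product('HT', repeat=2)]
    # Random variable X: each outcome s is mapped to the real number X(s) = number of heads
    X = {s: s.count('H') for s in S}
    print(X)   # {'HH': 2, 'HT': 1, 'TH': 1, 'TT': 0}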
Properties of the Probability Distribution Function
1. F_X(−∞) = 0
Proof:
It is known that F_X(x) = P{X ≤ x}
F_X(−∞) = P{X ≤ −∞} = P(∅) = 0
2. F_X(∞) = 1
Proof:
It is known that F_X(x) = P{X ≤ x}
F_X(∞) = P{X ≤ ∞} = P(S) = 1
3. F_X(x) is a non-decreasing function of x: if x₁ < x₂, then F_X(x₁) ≤ F_X(x₂).
Proof:
{X ≤ x₂} = {X ≤ x₁} ∪ {x₁ < X ≤ x₂}, so
F_X(x₂) = F_X(x₁) + P{x₁ < X ≤ x₂} ≥ F_X(x₁)
Properties of the Probability Density Function
1. f_X(x) ≥ 0 (the density is non-negative).
2. The total area under the density function is unity:
∫_{−∞}^{∞} f_X(x) dx = [F_X(x)]_{−∞}^{∞} = F_X(∞) − F_X(−∞) = 1 − 0 = 1
3. The probability distribution function can be obtained from the
knowledge of the density function; that is, the distribution function is the
area under the density function:
F_X(x) = ∫_{−∞}^{x} f_X(x) dx
Proof:
f_X(x) = d F_X(x) / dx
Integrating on both sides,
∫_{−∞}^{x} f_X(x) dx = ∫_{−∞}^{x} (d F_X(x)/dx) dx
= [F_X(x)]_{−∞}^{x}
= F_X(x) − F_X(−∞) = F_X(x)
∴ F_X(x) = ∫_{−∞}^{x} f_X(x) dx
4. P{x₁ < X ≤ x₂} = ∫_{x₁}^{x₂} f_X(x) dx
Proof:
From the probability distribution function,
P{x₁ < X ≤ x₂} = F_X(x₂) − F_X(x₁)
= ∫_{−∞}^{x₂} f_X(x) dx − ∫_{−∞}^{x₁} f_X(x) dx
= ∫_{−∞}^{x₁} f_X(x) dx + ∫_{x₁}^{x₂} f_X(x) dx − ∫_{−∞}^{x₁} f_X(x) dx
= ∫_{x₁}^{x₂} f_X(x) dx
∴ P{x₁ < X ≤ x₂} = ∫_{x₁}^{x₂} f_X(x) dx
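The interval property can be checked numerically. The sketch below (assuming, as an example, the exponential density with a = 0 and b = 1 from a later section) compares a Riemann-sum estimate of ∫_{x₁}^{x₂} f_X(x) dx with F_X(x₂) − F_X(x₁):

    import math

    b = 1.0
    f = lambda x: math.exp(-x / b) / b     # assumed density: exponential with a = 0
    F = lambda x: 1.0 - math.exp(-x / b)   # its closed-form distribution function
    dx = 1e-5
    # Riemann sum for P{1 < X <= 2} = integral of f from x1 = 1 to x2 = 2
    area = sum(f(1.0 + k * dx) * dx for k in range(int(1.0 / dx)))
    print(area, F(2.0) - F(1.0))           # both ~ 0.2325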
Applications (Poisson random variable):
It is mostly applied to counting-type problems, e.g.
The number of telephone calls made during a period of time
The number of defective elements in a given sample
The number of items waiting in a queue
Uniform Random Variable
The uniform density function is
f_X(x) = { 1/(b − a),  a ≤ x ≤ b
           0,          elsewhere
The distribution function is F_X(x) = ∫_{−∞}^{x} f_X(x) dx = (x − a)/(b − a) for a ≤ x < b, with
F_X(a) = 0
F_X(b) = (b − a)/(b − a) = 1
Therefore
F_X(x) = { 0,              x < a
           (x − a)/(b − a), a ≤ x < b
           1,              x ≥ b
Applications
The errors introduced in the round-off process are uniformly
distributed.
Quantization during the sampling process in digital communications.
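A quick numerical check of the uniform distribution function (a sketch with assumed endpoints a = 2, b = 5) compares the empirical fraction of samples below a point with the formula (x − a)/(b − a):

    import numpy as np

    a_, b_ = 2.0, 5.0
    rng = np.random.default_rng(1)
    samples = rng.uniform(a_, b_, 100_000)   # draws from the uniform density on (a, b)
    x0 = 3.5
    print(np.mean(samples <= x0))            # empirical F_X(3.5) ~ 0.5
    print((x0 - a_) / (b_ - a_))             # formula (x - a)/(b - a) = 0.5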
Exponential Random Variable
The exponential density function is
f_X(x) = { (1/b) e^{−(x−a)/b},  x > a
           0,                   x < a
where a and b are real constants, −∞ < a < ∞ and b > 0.
Probability distribution function:
F_X(x) = ∫_{−∞}^{x} f_X(x) dx
F_X(x) = ∫_{−∞}^{a} 0 dx + ∫_{a}^{x} (1/b) e^{−(x−a)/b} dx
= [−e^{−(x−a)/b}]_{a}^{x}
F_X(x) = 1 − e^{−(x−a)/b},  x ≥ a
Therefore
F_X(x) = { 0,                  x < a
           1 − e^{−(x−a)/b},   x ≥ a
Applications
The distribution of fluctuations in signal strength received by radar
receivers from certain types of targets.
The distribution of raindrop sizes when a large number of rainstorm
measurements are made.
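Because the exponential CDF above has a closed-form inverse, exponential samples can be generated from uniform ones (inverse-transform sampling; the values a = 0, b = 2 below are assumed for illustration):

    import numpy as np

    a_, b_ = 0.0, 2.0
    rng = np.random.default_rng(2)
    u = rng.uniform(0.0, 1.0, 100_000)
    x = a_ - b_ * np.log(1.0 - u)   # invert F(x) = 1 - exp(-(x - a)/b)
    print(x.mean())                 # ~ a + b = 2, the mean of this exponential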
Rayleigh Random Variable
The Rayleigh density function is
f_X(x) = { (2/b)(x − a) e^{−(x−a)²/b},  x ≥ a
           0,                           x < a
where a and b are real constants, −∞ < a < ∞ and b > 0.
Probability distribution function:
F_X(x) = ∫_{−∞}^{x} f_X(x) dx = ∫_{a}^{x} (2/b)(x − a) e^{−(x−a)²/b} dx
Let (x − a)²/b = y ⟹ (2/b)(x − a) dx = dy; the limits x = a and x become y = 0 and y = (x − a)²/b:
F_X(x) = ∫_{0}^{(x−a)²/b} e^{−y} dy = [−e^{−y}]_{0}^{(x−a)²/b} = 1 − e^{−(x−a)²/b}
Therefore
F_X(x) = { 0,                    x < a
           1 − e^{−(x−a)²/b},    x ≥ a
Applications
It describes the envelope of white noise when the noise is passed
through a bandpass filter.
Some types of signal fluctuations received by receivers are modeled by
the Rayleigh distribution.
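The same inverse-transform idea applies to the Rayleigh CDF, and the envelope interpretation can be cross-checked by combining two Gaussian noise components (a sketch with assumed a = 0, b = 2):

    import numpy as np

    a_, b_ = 0.0, 2.0
    rng = np.random.default_rng(3)
    u = rng.uniform(0.0, 1.0, 100_000)
    x = a_ + np.sqrt(-b_ * np.log(1.0 - u))   # invert F(x) = 1 - exp(-(x - a)^2 / b)
    # Envelope of two independent zero-mean Gaussian components with variance b/2
    n1 = rng.normal(0.0, np.sqrt(b_ / 2), 100_000)
    n2 = rng.normal(0.0, np.sqrt(b_ / 2), 100_000)
    env = np.sqrt(n1**2 + n2**2)
    print(x.mean(), env.mean())               # both ~ sqrt(pi*b)/2 ~ 1.2533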
Expected Value
If X is a discrete RV,
E[X] = X̄ = Σ_{i=1}^{N} x_i P(x_i)
If X is a continuous RV,
E[X] = X̄ = ∫_{−∞}^{∞} x f_X(x) dx
Properties of Expectations
1. The expected value of a constant is the constant itself: E[k] = k.
Proof:
E[X] = ∫_{−∞}^{∞} x f_X(x) dx
E[k] = ∫_{−∞}^{∞} k f_X(x) dx
= k ∫_{−∞}^{∞} f_X(x) dx
= k · 1
∴ E[k] = k
2. Let E[X] be the expected value of a RV X; then
E[aX] = a E[X]
Proof:
E[X] = ∫_{−∞}^{∞} x f_X(x) dx
E[aX] = ∫_{−∞}^{∞} ax f_X(x) dx
= a ∫_{−∞}^{∞} x f_X(x) dx
∴ E[aX] = a E[X]
3. Let E[X] be the expected value of a RV X; then
E[aX + b] = a E[X] + b
Proof:
E[X] = ∫_{−∞}^{∞} x f_X(x) dx
E[aX + b] = ∫_{−∞}^{∞} (ax + b) f_X(x) dx
= a ∫_{−∞}^{∞} x f_X(x) dx + b ∫_{−∞}^{∞} f_X(x) dx
∴ E[aX + b] = a E[X] + b
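A Monte Carlo check of the linearity property (any distribution works; the exponential with mean 2 and the constants a = 3, b = 5 are assumed here):

    import numpy as np

    rng = np.random.default_rng(4)
    x = rng.exponential(2.0, 100_000)   # sample RV with E[X] = 2
    a_, b_ = 3.0, 5.0
    print(np.mean(a_ * x + b_))         # ~ 11
    print(a_ * np.mean(x) + b_)         # ~ 11, matching E[aX + b] = a E[X] + b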
Moments:
A moment of a random variable describes its deviation from a reference value. There are two types:
Moments about the origin
Moments about the mean (central moments)
Moments about the origin:
The expected value of the function g(X) = X^n is called the nth moment about the origin:
m_n = E(X^n) = ∫_{−∞}^{∞} x^n f_X(x) dx
First moment:
m_1 = E(X) = ∫_{−∞}^{∞} x f_X(x) dx
The first moment about the origin is nothing but the mean value (or expected value)
of the random variable.
Second moment:
m_2 = E(X²) = ∫_{−∞}^{∞} x² f_X(x) dx
Moments about the mean (central moments):
The nth central moment is defined as μ_n = E[(X − X̄)^n].
First central moment:
μ_1 = E[X − X̄] = X̄ − X̄ = 0
Variance:
The second central moment of a random variable X is called the variance.
It is defined as the expected value of the function g(X) = (X − X̄)²,
where X̄ is the mean value:
μ₂ = σ_X² = var(X) = E[(X − X̄)²]
The variance is used to calculate the average power of a random signal in
communication-related applications.
Relation between variance and moments about the origin:
σ_X² = E[(X − X̄)²] = E[X² − 2X̄X + X̄²]
= m_2 + X̄² − 2X̄·X̄
= m_2 + m_1² − 2m_1²
∴ σ_X² = m_2 − m_1²
Properties of Variance:
1. The variance of a constant is zero: var[k] = 0.
Proof:
It is known that var[X] = E[(X − X̄)²]
var[k] = E[(k − k̄)²]
As k is a constant, k̄ = k, so
var[k] = E[(k − k)²]
∴ var[k] = 0
Skewness:
It is the degree of distortion from the symmetrical bell curve of the normal
distribution; it measures the lack of symmetry in a data distribution.
It differentiates extreme values in one tail versus the other. A symmetrical
distribution has a skewness of 0.
Positive skewness means the tail on the right side of the distribution is
longer or fatter; the mean and median are greater than the mode.
Negative skewness means the tail on the left side of the distribution is longer
or fatter than the tail on the right side; the mean and median are less than
the mode.
Note:
If the skewness is between −0.5 and 0.5, the data are fairly symmetrical.
If the skewness is between −1 and −0.5 (negatively skewed) or between 0.5
and 1 (positively skewed), the data are moderately skewed.
If the skewness is less than −1 (negatively skewed) or greater than
1 (positively skewed), the data are highly skewed.
Coefficient of skewness:
It is defined as the ratio of the 3rd central moment to the cube of the standard deviation:
coefficient of skewness = μ₃ / σ_X³
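The coefficient of skewness can be estimated from data as the sample third central moment over the cubed standard deviation (a sketch; the exponential distribution is an assumed example, chosen because its exact skewness is 2):

    import numpy as np

    rng = np.random.default_rng(5)
    x = rng.exponential(1.0, 1_000_000)   # positively skewed sample
    mu = x.mean()
    mu3 = np.mean((x - mu)**3)            # third central moment
    print(mu3 / x.std()**3)               # ~ 2, the exact skewness of the exponential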
4. The magnitude of the characteristic function is bounded by unity: |φ_X(ω)| ≤ 1.
Proof:
|φ_X(ω)| = |∫_{−∞}^{∞} e^{jωx} f_X(x) dx| ≤ ∫_{−∞}^{∞} |e^{jωx}| f_X(x) dx
Since |e^{jωX}| = 1,
|φ_X(ω)| ≤ ∫_{−∞}^{∞} f_X(x) dx
∴ |φ_X(ω)| ≤ 1
5. The nth moment of a random variable can be obtained from the knowledge
of the characteristic function as
m_n = (−j)^n [d^n φ_X(ω) / dω^n]_{ω=0}
Proof:
Consider φ_X(ω) = E[e^{jωX}]
φ_X(ω) = ∫_{−∞}^{∞} e^{jωx} f_X(x) dx
Differentiating n times with respect to ω,
d^n φ_X(ω)/dω^n = ∫_{−∞}^{∞} (jx)^n e^{jωx} f_X(x) dx
= (j)^n ∫_{−∞}^{∞} x^n e^{jωx} f_X(x) dx
Putting ω = 0,
[d^n φ_X(ω)/dω^n]_{ω=0} = (j)^n ∫_{−∞}^{∞} x^n f_X(x) dx = (j)^n m_n
∴ m_n = (−j)^n [d^n φ_X(ω)/dω^n]_{ω=0}
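Property 5 can be verified symbolically. The sketch below assumes the exponential density with a = 0, whose characteristic function 1/(1 − jωb) is standard, and recovers m₁ = b and m₂ = 2b²:

    import sympy as sp

    w = sp.symbols('omega', real=True)
    b = sp.symbols('b', positive=True)
    # Characteristic function of f(x) = (1/b) e^{-x/b}, x > 0
    phi = 1 / (1 - sp.I * w * b)
    m1 = sp.simplify((-sp.I)**1 * sp.diff(phi, w, 1).subs(w, 0))
    m2 = sp.simplify((-sp.I)**2 * sp.diff(phi, w, 2).subs(w, 0))
    print(m1, m2)   # b and 2*b**2, the first two moments of this exponential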
For the moment generating function M_X(v) = E[e^{vX}],
M_X(0) = 1
and the nth moment is obtained as
m_n = [d^n M_X(v) / dv^n]_{v=0}
Transformations of a Random Variable
Monotonically increasing transformation Y = T(X):
Let there exist a value x₀ such that y₀ = T(x₀), i.e. x₀ = T⁻¹(y₀). Then
P{Y ≤ y₀} = P{X ≤ x₀}
F_Y(y₀) = F_X(x₀)
We know that F_X(x) = ∫_{−∞}^{x} f_X(x) dx, so
∫_{−∞}^{y₀} f_Y(y) dy = ∫_{−∞}^{x₀} f_X(x) dx
Monotonically decreasing transformation:
{Y ≤ y₀} = {X ≥ x₀}
P{Y ≤ y₀} = P{X ≥ x₀} = 1 − P{X < x₀}
F_Y(y₀) = 1 − F_X(x₀)
We know that F_X(x) = ∫_{−∞}^{x} f_X(x) dx, so
∫_{−∞}^{y₀} f_Y(y) dy = 1 − ∫_{−∞}^{x₀} f_X(x) dx
Non-monotonic transformation:
Here several values of x may map to the same y, so
P{Y ≤ y₀} = P{x : T(x) ≤ y₀}
F_Y(y₀) = ∫_{x:T(x)≤y₀} f_X(x) dx
Differentiating both sides with respect to y,
f_Y(y) = Σ_n f_X(x_n) / |dT(x)/dx|_{x=x_n}
where x_n are the real roots of y = T(x). Equivalently,
f_Y(y) = f_X(x₁)|dx₁/dy| + f_X(x₂)|dx₂/dy| + f_X(x₃)|dx₃/dy| + …
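As a check of the multi-root formula, take Y = X² with X standard Gaussian (an assumed example); T(x) = x² has the two roots x = ±√y, and a histogram estimate of f_Y(y₀) should match the formula:

    import numpy as np

    rng = np.random.default_rng(6)
    x = rng.normal(0.0, 1.0, 1_000_000)
    y = x**2                                  # T(x) = x^2, roots x_n = +/- sqrt(y)
    y0, h = 1.0, 0.01
    est = np.mean((y > y0 - h / 2) & (y < y0 + h / 2)) / h   # density estimate at y0
    fx = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)    # Gaussian density
    formula = (fx(np.sqrt(y0)) + fx(-np.sqrt(y0))) / (2 * np.sqrt(y0))  # |dT/dx| = 2*sqrt(y)
    print(est, formula)                       # both ~ 0.2420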
Poisson Random Variable
The Poisson probability mass function is
P(x) = e^{−λ} λ^x / x!,   x = 0, 1, 2, 3, …
Expected Value:
If X is a discrete RV,
E[X] = X̄ = Σ_{i=1}^{N} x_i P(x_i)
E[X] = Σ_{x=0}^{∞} x e^{−λ} λ^x / x!
= λ e^{−λ} ( λ⁰/0! + λ¹/1! + λ²/2! + ⋯ )
= λ e^{−λ} e^{λ}
∴ E[X] = m_1 = λ
VARIANCE:
σ_X² = m_2 − m_1²
E(X²) = m_2 = Σ_{i=1}^{N} x_i² P(x_i)
m_2 = Σ_{x=0}^{∞} x² e^{−λ} λ^x / x!
Using x² = x(x − 1) + x,
m_2 = Σ_{x=0}^{∞} [x(x − 1) + x] e^{−λ} λ^x / x!
= Σ_{x=0}^{∞} x(x − 1) e^{−λ} λ^x / x! + Σ_{x=0}^{∞} x e^{−λ} λ^x / x!
= λ² e^{−λ} ( λ⁰/0! + λ¹/1! + λ²/2! + ⋯ ) + λ e^{−λ} ( λ⁰/0! + λ¹/1! + λ²/2! + ⋯ )
= λ² e^{−λ} e^{λ} + λ e^{−λ} e^{λ}
= λ² + λ
Variance:
σ² = m_2 − m_1²
σ² = λ² + λ − λ²
σ² = λ
For the Poisson random variable,
E[X] = σ² = λ
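The equality of the Poisson mean and variance is easy to confirm by simulation (λ = 4 is an assumed example value):

    import numpy as np

    lam = 4.0
    rng = np.random.default_rng(7)
    x = rng.poisson(lam, 1_000_000)
    print(x.mean(), x.var())   # both ~ 4, matching E[X] = sigma^2 = lambda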
Uniform Random Variable: Mean and Variance
Mean:
m_1 = E[X] = ∫_{−∞}^{∞} x f_X(x) dx = ∫_{a}^{b} x · 1/(b − a) dx = (1/(b − a)) [x²/2]_{a}^{b}
= (1/2) (b² − a²)/(b − a)
= (1/2) (b − a)(b + a)/(b − a)
∴ m_1 = E[X] = (b + a)/2
VARIANCE:
σ_X² = m_2 − m_1²
m_2 = E(X²) = ∫_{−∞}^{∞} x² f_X(x) dx
= ∫_{a}^{b} x² · 1/(b − a) dx
= (1/(b − a)) [x³/3]_{a}^{b}
= (1/3) (b³ − a³)/(b − a)
= (1/3) (b − a)(b² + a² + ab)/(b − a)
∴ m_2 = (b² + a² + ab)/3
Variance:
σ² = m_2 − m_1²
σ² = (b² + a² + ab)/3 − ((b + a)/2)²
σ² = (b² + a² + ab)/3 − (b² + a² + 2ab)/4
σ² = (b² + a² − 2ab)/12
∴ σ² = (b − a)²/12
For the uniform random variable,
E[X] = (b + a)/2
σ² = (b − a)²/12
Descriptive Questions
1. Summarize the properties of Probability distribution
function with relevant proofs.
2. Discuss the properties of characteristic function with the
help of necessary expressions.
3. Contrast the properties of Moment Generating Function.
4. Describe the properties of probability density Function.
5. A random variable X has a probability density
f_X(x) = { (π/16) cos(πx/8),  −4 < x < 4
           0,                 elsewhere
Find (a) the mean value, (b) the second moment, (c) the variance.
6. A random variable X has a probability density
f_X(x) = { (5/4)(1 − x⁴),  0 < x < 1
           0,              elsewhere
Find (a) E[X], (b) E[4X + 2], (c) E[X²].
7. A random variable X has a probability density
f_X(x) = { (3/32)(−x² + 8x − 12),  2 < x < 6
           0,                      elsewhere
Find (a) the mean value, (b) the second moment, (c) the variance.
PROBLEMS
1. Find the constant b so that the given density function is a valid density function:
f_X(x) = { e^{3x/4},  0 < x < b
           0,         elsewhere
Sol:
We know that ∫_{−∞}^{∞} f_X(x) dx = 1
∫_{0}^{b} e^{3x/4} dx = 1
[ e^{3x/4} / (3/4) ]_{0}^{b} = 1
e^{3b/4} − 1 = 3/4
e^{3b/4} = 7/4
3b/4 = ln(7/4)
∴ b = 0.7461
2. Assume automobile arrivals at a gasoline station are Poisson and occur
at an average rate of 50 per hour. The station has only one gasoline pump.
If all cars are assumed to require one minute to obtain fuel, what is the
probability that a waiting line will occur at the pump?
Sol:
A waiting line occurs at the pump if two or more cars arrive during any
one-minute service interval. The average number of arrivals in one minute is
b = (50/60)(1) = 5/6
P(X ≥ 2) = 1 − P(X ≤ 1)
= 1 − F_X(1)
For the Poisson distribution, F_X(x) = e^{−b} Σ_{k=0}^{x} b^k / k!
P(X ≥ 2) = 1 − e^{−5/6} (1 + 5/6) ≈ 0.2032
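The numeric answer follows directly from a one-line check of the computation above:

    import math

    b = 50.0 / 60.0                          # average arrivals during one-minute service
    p_wait = 1.0 - math.exp(-b) * (1.0 + b)  # P(X >= 2) = 1 - P(X <= 1) for Poisson(b)
    print(p_wait)                            # ~ 0.2032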
3. A discrete random variable X has the probability mass function
P(X = n) = n²/650,  n = 1, 2, …, 12  (note that Σ_{n=1}^{12} n² = 650).
Find P(X ≤ 6), P(X > 4) and P(6 < X ≤ 9).
Sol:
P(X ≤ 6) = Σ_{n=1}^{6} n²/650 = (1/650) [n(n + 1)(2n + 1)/6]_{n=6} = 91/650
= 0.14
P(X > 4) = 1 − Σ_{n=1}^{4} n²/650 = 1 − 30/650 = 0.9538
P(6 < X ≤ 9)
= F_X(9) − F_X(6)
= Σ_{n=1}^{9} n²/650 − Σ_{n=1}^{6} n²/650 = 194/650 = 0.2984
4. A random variable X has the density function f_X(x) = k/(1 + x²), −∞ < x < ∞.
Find (i) the value of k, (ii) the distribution function F_X(x), (iii) P(X ≥ 0).
Sol:
i) ∫_{−∞}^{∞} k/(1 + x²) dx = 1
k [tan⁻¹ x]_{−∞}^{∞} = 1
k (π/2 + π/2) = 1 ⟹ k = 1/π
ii) The distribution function F_X(x):
F_X(x) = k ∫_{−∞}^{x} 1/(1 + x²) dx
= k [tan⁻¹ x]_{−∞}^{x}
= (1/π) (tan⁻¹(x) + π/2)
iii) P(X ≥ 0) = k ∫_{0}^{∞} 1/(1 + x²) dx
= k [tan⁻¹ x]_{0}^{∞}
= (1/π)(π/2) = 0.5
5. A discrete random variable X has the probability distribution
X    : 1    2    3    4    5
P(X) : 0.1  0.2  0.4  0.2  0.1
Find the mean and variance.
Sol:
Mean m_1 = Σ_{i=1}^{N} x_i P(x_i)
= (1)(0.1) + (2)(0.2) + (3)(0.4) + (4)(0.2) + (5)(0.1) = 3
m_2 = Σ_{i=1}^{N} x_i² P(x_i)
= (1)(0.1) + (4)(0.2) + (9)(0.4) + (16)(0.2) + (25)(0.1) = 10.2
Variance σ_X² = m_2 − m_1² = 10.2 − 9 = 1.2
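The same numbers fall out of a short Python check:

    x = [1, 2, 3, 4, 5]
    p = [0.1, 0.2, 0.4, 0.2, 0.1]
    m1 = sum(xi * pi for xi, pi in zip(x, p))       # mean = 3.0
    m2 = sum(xi**2 * pi for xi, pi in zip(x, p))    # second moment = 10.2
    print(m1, m2 - m1**2)                           # variance = 1.2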
6. A random variable X has mean X̄ = E[X] = −3 and second moment E[X²] = 11,
so var(X) = 11 − (−3)² = 2. For Y = 2X − 3, find
i) Ȳ   ii) E[Y²]   iii) σ_Y²
Sol:
Ȳ = E[2X − 3]
= 2 E[X] − 3
= 2(−3) − 3 = −9
E[Y²] = E[(2X − 3)²] = E[4X² − 12X + 9]
= 4E[X²] − 12E[X] + 9
= (4 × 11) − (12 × (−3)) + 9 = 89
σ_Y² = m_2 − m_1²
σ_Y² = 89 − 81 = 8
(OR)
σ_Y² = var(Y) = var(2X − 3)
Since the variance of a constant is zero and var[aX] = a² var[X],
σ_Y² = 4 var(X)
= 4 × 2 = 8
7. A random variable X has the probability density
f_X(x) = { (π/16) cos(πx/8),  −4 < x < 4
           0,                 elsewhere
Find (a) the mean value, (b) the second moment, (c) the variance.
Sol:
a) Mean value:
m_1 = E[X] = ∫_{−∞}^{∞} x f_X(x) dx = (π/16) ∫_{−4}^{4} x cos(πx/8) dx
Integrating by parts,
= (π/16) [ (8x/π) sin(πx/8) + (64/π²) cos(πx/8) ]_{−4}^{4}
= (π/16) [ (32/π − 32/π) + (0 − 0) ]
= (π/16)(0)
∴ Mean value = 0
b) Second moment:
m_2 = E[X²] = ∫_{−∞}^{∞} x² f_X(x) dx
= (π/16) ∫_{−4}^{4} x² cos(πx/8) dx
Integrating by parts,
= (π/16) [ (8x²/π) sin(πx/8) ]_{−4}^{4} − (π/16)(16/π) ∫_{−4}^{4} x sin(πx/8) dx
The first term equals (π/16)(8/π)(16 + 16) = 16, so
m_2 = 16 − ∫_{−4}^{4} x sin(πx/8) dx
Integrating by parts again,
∫_{−4}^{4} x sin(πx/8) dx = [ −(8x/π) cos(πx/8) + (64/π²) sin(πx/8) ]_{−4}^{4} = 128/π²
∴ m_2 = 16 − 128/π² = 3.0305
c) Variance:
σ_x² = m_2 − m_1²
= 3.0305 − 0
∴ Variance = 3.0305
8. A random variable X has the probability density f_X(x) = (5/4)(1 − x⁴) for
0 < x < 1 and zero elsewhere. Find a) E[X], b) E[4X + 2], c) E[X²].
Sol:
a) E[X]:
E[X] = ∫_{−∞}^{∞} x f_X(x) dx
= ∫_{0}^{1} x (5/4)(1 − x⁴) dx
= (5/4) [ x²/2 − x⁶/6 ]_{0}^{1}
= (5/4) [ 1/2 − 1/6 ]
= 5/12
∴ E[X] = 0.4166
b) E[4X + 2]:
E[4X + 2] = 4E[X] + 2
= 4 × 0.4166 + 2
= 1.6664 + 2
∴ E[4X + 2] = 3.6664
c) E[X²]:
We know that E[X²] = ∫_{−∞}^{∞} x² f_X(x) dx
= ∫_{0}^{1} x² (5/4)(1 − x⁴) dx
= (5/4) [ ∫_{0}^{1} x² dx − ∫_{0}^{1} x⁶ dx ]
= (5/4) [ x³/3 − x⁷/7 ]_{0}^{1}
= (5/4) [ 1/3 − 1/7 ]
= (5/4)(4/21)
= 5/21
∴ E[X²] = 0.2380
9. Find the density function of a random variable X whose characteristic function is
φ_X(ω) = { 1 − |ω|,  |ω| ≤ 1
           0,        otherwise
Sol: Given
φ_X(ω) = 1 − |ω| for −1 ≤ ω ≤ 1, and 0 otherwise.
We know that
f_X(x) = (1/2π) ∫_{−∞}^{∞} e^{−jωx} φ_X(ω) dω
⟹ f_X(x) = (1/2π) ∫_{−1}^{1} e^{−jωx} (1 − |ω|) dω
= (1/2π) [ ∫_{−1}^{0} e^{−jωx}(1 + ω) dω + ∫_{0}^{1} e^{−jωx}(1 − ω) dω ]
= (1/2π) [ ∫_{−1}^{0} e^{−jωx} dω + ∫_{−1}^{0} ω e^{−jωx} dω + ∫_{0}^{1} e^{−jωx} dω − ∫_{0}^{1} ω e^{−jωx} dω ]
Using ∫ e^{−jωx} dω = e^{−jωx}/(−jx) and, by parts, ∫ ω e^{−jωx} dω = ω e^{−jωx}/(−jx) − e^{−jωx}/(jx)²:
The two plain exponential integrals combine to
(1/2π) [ e^{−jωx}/(−jx) ]_{−1}^{1} = (1/2π)(2 sin x / x)
while the ω e^{−jωx}/(−jx) terms contribute −(1/2π)(2 sin x / x), cancelling it.
The remaining terms give
f_X(x) = (1/2π) (e^{jx} + e^{−jx} − 2)/(jx)²
= (1/2π) (2 cos x − 2)/(−x²)      [∵ cos x = (e^{jx} + e^{−jx})/2]
= (1/π) (1 − cos x)/x²
∴ Density function: f_X(x) = (1 − cos x)/(πx²)
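The inversion can be verified numerically by discretizing the inverse transform (x₀ = 1.5 is an assumed test point):

    import numpy as np

    x0 = 1.5
    w = np.linspace(-1.0, 1.0, 20_001)
    phi = 1.0 - np.abs(w)                 # the given characteristic function
    dw = w[1] - w[0]
    fx = (np.sum(np.exp(-1j * w * x0) * phi) * dw / (2 * np.pi)).real
    print(fx, (1 - np.cos(x0)) / (np.pi * x0**2))   # both ~ 0.1315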