
2. RANDOM VARIABLE AND MATHEMATICAL EXPECTATION
2.0 Introduction:
It is a general notion that if an experiment is repeated under identical conditions, the values obtained should be similar. Observations are always taken on a factor or character under study, which can take different values; this factor or character is termed a variable.
In practice, the observations vary even though the experiment is conducted under identical conditions. Thus a random experiment yields a set of outcomes (sample points), and a rule that assigns a real number to each outcome (sample point) is called a random variable.
From the above discussion it is clear that each outcome corresponds to a value, which the random variable takes with a certain probability. Hence a list of the values of a random variable together with their corresponding probabilities of occurrence is termed a probability distribution.
Traditionally, the term probability distribution is used to denote the probability mass or the probability density of a discrete or a continuous variable respectively.
The formal definition of a random variable and certain operations on random variables are given in this chapter prior to the details of probability distributions.
2.1 Random variable:
A variable whose value is a number determined by the
outcome of a random experiment is called a random variable.
We can also say that a random variable is a function defined
over the sample space of an experiment and generally assumes
different values with a definite probability associated with each
value. Generally, a random variable is denoted by capital letters like X, Y, Z, ..., whereas the values of the random variable are denoted by the corresponding small letters x, y, z, ...
Suppose that two coins are tossed so that the sample space
is S = {HH, HT, TH, TT}
Suppose X represent the number of heads which can come up, with
each sample point we can associate a number for X as shown in the
table below:

Sample point   HH   HT   TH   TT
X               2    1    1    0

Thus the random variable X takes the values 0, 1, 2 for this random experiment.
In the above example the random variable takes only a finite number of values, and with each value we can associate a probability, as shown in the table below.
Usually, for each value xi of the random variable, the corresponding probability is denoted by p(xi) or simply pi.

X        x1 = 0   x2 = 1   x3 = 2
p(xi)    1/4      2/4      1/4

Observe that the sum of the probabilities of all the values of the random variable is equal to one, i.e. p(x1) + p(x2) + p(x3) = 1/4 + 2/4 + 1/4 = 1.
Thus the probability distribution of a random variable provides a probability for each possible value, and these probabilities must sum to 1.
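As an illustration (not part of the original text), the two-coin distribution above can be generated and checked in Python; the variable names here are our own.

```python
from itertools import product
from fractions import Fraction

# Sample space of tossing two fair coins: HH, HT, TH, TT
sample_space = list(product("HT", repeat=2))

# X = number of heads; each sample point carries probability 1/4
pmf = {}
for outcome in sample_space:
    x = outcome.count("H")
    pmf[x] = pmf.get(x, Fraction(0)) + Fraction(1, len(sample_space))

print(pmf)                 # {2: 1/4, 1: 1/2, 0: 1/4}
print(sum(pmf.values()))   # 1, as required of every probability distribution
```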
Similarly, if 3 coins are tossed, the random variable X denoting the number of heads takes the values 0, 1, 2, 3, and the sum of their respective probabilities is Σp(xi) = 1.
If two dice are rolled, the sample space S consists of 36 sample points. Let X denote the sum of the numbers on the two dice. Then X is a function defined on S by the rule X(i, j) = i + j. Thus X is a random variable which can take the values 2, 3, 4, ..., 12; that is, the range of X is {2, 3, 4, ..., 12}.
2.1.1 Discrete random variable:
If a random variable takes only a finite or a countable
number of values, it is called a discrete random variable.
For example, when 3 coins are tossed, the number of heads obtained is a random variable X which assumes the values 0, 1, 2, 3; these form a countable set. Such a variable is a discrete random variable.
2.1.2 Continuous random variable:
A random variable X which can take any value in a certain interval is called a continuous random variable.
Note that the probability of X taking any single value x is zero, i.e. P(X = x) = 0. A continuous random variable thus takes values only within given limits.
For example, the height of students in a particular class lies between 4 feet and 6 feet. We write this as X = {x | 4 ≤ x ≤ 6}.
If the maximum life of electric bulbs is 2000 hours, the corresponding continuous random variable is X = {x | 0 ≤ x ≤ 2000}.
2.2 Probability mass function:
Let X be a discrete random variable which assumes the values x1, x2, ..., xn. With each of these values we associate a number pi = P(X = xi), i = 1, 2, 3, ..., n, called the probability of xi, satisfying the following conditions:
(i) pi ≥ 0 for all i, i.e. the pi are all non-negative;
(ii) Σpi = p1 + p2 + ... + pn = 1, i.e. the total probability is one.
This function pi, or p(xi), is called the probability mass function of the discrete random variable X.
The set of all possible ordered pairs (x, p(x)) is called the
probability distribution of the random variable X.

Note:
The concept of probability distribution is similar to that of
frequency distribution. Just as frequency distribution tells us how
the total frequency is distributed among different values (or classes)
of the variable, a probability distribution tells us how total

probability 1 is distributed among the various values which the
random variable can take. It is usually represented in a tabular form
given below:
X x1 x2 x3 …. xn
P(X = x) P(x1) P(x2) P(x3) …. P(xn)

2.2.1 Discrete probability distribution:


If a random variable is discrete, its distribution function will also be discrete. For a discrete random variable X, the distribution function (or cumulative distribution function) is given by
F(x) = P(X ≤ x), −∞ < x < ∞
Thus for a discrete distribution function there are a countable number of points x1, x2, ..., xn with probabilities pi such that
F(x) = Σ_{xi ≤ x} pi,  i = 1, 2, ..., n

Note:
For a discrete distribution function, F(xj) – F(xj-1) = p(xj)
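As a small sketch of our own (reusing the two-coin distribution from Section 2.1), the cumulative distribution F(x) and the note above can be checked in Python:

```python
from fractions import Fraction

# pmf of the number of heads in two coin tosses
pmf = {0: Fraction(1, 4), 1: Fraction(2, 4), 2: Fraction(1, 4)}

def F(x, pmf):
    """F(x) = P(X <= x): sum the probabilities of all values xi <= x."""
    return sum(p for xi, p in pmf.items() if xi <= x)

for x in [0, 1, 2]:
    print(x, F(x, pmf))        # 1/4, 3/4, 1

# F(x_j) - F(x_{j-1}) recovers p(x_j)
print(F(2, pmf) - F(1, pmf))   # 1/4 = p(2)
```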
2.2.2 Probability density function (pdf):
A function f is said to be the probability density function of
a continuous random variable X if it satisfies the following
properties.
(i) f(x) ≥ 0, −∞ < x < ∞
(ii) ∫_{−∞}^{∞} f(x) dx = 1
Remark:
In the case of a discrete random variable, the probability at a point, i.e. P(X = a), is not necessarily zero for a fixed 'a'. However, in the case of a continuous random variable the probability at a point is always zero, i.e.
P(X = a) = ∫_a^a f(x) dx = 0

Hence P(a ≤ X ≤ b) = P(a < X < b) = P(a ≤ X < b) = P(a < X ≤ b).
The probability that X lies in the interval (a, b) is given by
P(a < X < b) = ∫_a^b f(x) dx
Distribution function for a continuous random variable:
If X is a continuous random variable with p.d.f f(x), then the distribution function is given by
(i) F(x) = ∫_{−∞}^{x} f(x) dx = P(X ≤ x), −∞ < x < ∞
(ii) F(b) − F(a) = ∫_a^b f(x) dx = P(a ≤ X ≤ b)
2.3 Properties of distribution function:
Suppose that X is a discrete or continuous random variable. Then
(i) F(x) is a non-decreasing function of x
(ii) 0 ≤ F(x) ≤ 1, −∞ < x < ∞
(iii) F(−∞) = lim_{x→−∞} F(x) = 0
(iv) F(∞) = lim_{x→∞} F(x) = 1
(v) If F(x) is the cumulative distribution function of a continuous random variable X with p.d.f f(x), then F′(x) = f(x)
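A rough numerical sketch of these properties (our own example, anticipating the density f(x) = 3x^2 of Example 18) approximates F(x) with a Riemann sum:

```python
def f(x):
    # an assumed density on (0, 1); zero elsewhere
    return 3 * x**2 if 0 < x < 1 else 0.0

def F(x, steps=100_000):
    """Approximate F(x) = integral of f from -infinity (effectively 0) up to x."""
    lo, hi = 0.0, min(max(x, 0.0), 1.0)
    h = (hi - lo) / steps
    return sum(f(lo + (i + 0.5) * h) for i in range(steps)) * h

print(F(1.0))   # ~1.0   : total probability is 1
print(F(0.5))   # ~0.125 : F is non-decreasing and lies between 0 and 1
```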
Example 1:
A random variable has the following probability distribution
Values of X   0   1    2    3    4    5     6     7     8
P(x)          a   3a   5a   7a   9a   11a   13a   15a   17a
(1) Determine the value of a.
(2) Find (i) P(x < 3) (ii) P(x ≤ 3) (iii) P(x > 7) (iv) P(2 ≤ x ≤ 5) (v) P(2 < x < 5).
(3) Find the cumulative distribution function of x.
Solution:
(1) Since pi is the probability mass function of the discrete random variable X, we have Σpi = 1.
∴ a + 3a + 5a + 7a + 9a + 11a + 13a + 15a + 17a = 1
81a = 1
a = 1/81
(2)
(i) P(x < 3) = P(x = 0) + P(x = 1) + P(x = 2)
= a + 3a + 5a = 9a = 9(1/81) = 1/9
(ii) P(x ≤ 3) = P(x = 0) + P(x = 1) + P(x = 2) + P(x = 3)
= a + 3a + 5a + 7a = 16a = 16/81
(iii) P(x > 7) = P(x = 8) = 17a = 17/81
(iv) P(2 ≤ x ≤ 5) = P(x = 2) + P(x = 3) + P(x = 4) + P(x = 5)
= 5a + 7a + 9a + 11a = 32a = 32/81
(v) P(2 < x < 5) = P(x = 3) + P(x = 4)
= 7a + 9a = 16a = 16/81
(3) The distribution function is as follows:

X = x              0      1      2      3       4       5       6       7       8
F(x) = P(X ≤ x)    a      4a     9a     16a     25a     36a     49a     64a     81a
(or) F(x)          1/81   4/81   9/81   16/81   25/81   36/81   49/81   64/81   81/81 = 1
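The whole of Example 1 can be reproduced with exact fractions in Python; this is our own sketch, not part of the original solution.

```python
from fractions import Fraction

coeffs = [1, 3, 5, 7, 9, 11, 13, 15, 17]          # P(X = x) = coeff * a, x = 0..8
a = Fraction(1, sum(coeffs))                       # 81a = 1  =>  a = 1/81
pmf = {x: c * a for x, c in enumerate(coeffs)}

print(a)                                                 # 1/81
print(sum(p for x, p in pmf.items() if x < 3))           # P(x < 3)       = 1/9
print(sum(p for x, p in pmf.items() if x <= 3))          # P(x <= 3)      = 16/81
print(pmf[8])                                            # P(x > 7)       = 17/81
print(sum(p for x, p in pmf.items() if 2 <= x <= 5))     # P(2 <= x <= 5) = 32/81
print(sum(p for x, p in pmf.items() if 2 < x < 5))       # P(2 < x < 5)   = 16/81
```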

Example 2:
Find the probability distribution of the number of sixes in
throwing two dice once.
Solution:
When two dice are thrown, the total number of sample points is 36.
Let X denote the number of sixes obtained in throwing two dice once. Then X is a random variable which can take the values 0, 1, 2.
Let A denote the event of getting a six in throwing a die and Ā denote the event of not getting a six.
Probability of getting a six: P(A) = 1/6
Probability of not getting a six: P(Ā) = 5/6
No sixes:
∴ P(x = 0) = P(Ā and Ā) = P(Ā) · P(Ā) = (5/6)(5/6) = 25/36
P(x = 1) = P(A and Ā) or P(Ā and A)
= P(A) · P(Ā) + P(Ā) · P(A)
= (1/6)(5/6) + (5/6)(1/6)
= 5/36 + 5/36
= 10/36 = 5/18
P(x = 2) = P(A and A) = P(A) · P(A) = (1/6)(1/6) = 1/36
Hence the probability distribution of X is given by

X = x       0       1       2
P(X = x)    25/36   10/36   1/36
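The same distribution can be obtained by brute-force enumeration of the 36 sample points; this is our own sketch using the standard library.

```python
from itertools import product
from fractions import Fraction
from collections import Counter

# All 36 equally likely outcomes of throwing two dice once
outcomes = list(product(range(1, 7), repeat=2))

# X = number of sixes in each outcome
counts = Counter(sum(1 for d in pair if d == 6) for pair in outcomes)
pmf = {x: Fraction(n, 36) for x, n in sorted(counts.items())}

print(pmf)   # {0: 25/36, 1: 5/18, 2: 1/36}   (5/18 is 10/36 in lowest terms)
```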
Example 3:
An urn contains 6 red and 4 white balls. Three balls are
drawn at random. Obtain the probability distribution of the number
of white balls drawn.
Solution:
The total number of balls in the urn is 10
Let X denote the number of white balls drawn
If three balls are drawn, the random variable takes the values X = 0, 1, 2, 3.
The probabilities of drawing white balls from the urn containing 10 balls (6 red and 4 white) are obtained from the following combinations:
P(no white, 3 red) = (4C0 × 6C3) / 10C3 = (1 × 20) / 120 = 5/30
P(1 white, 2 red) = (4C1 × 6C2) / 10C3 = (4 × 15) / 120 = 15/30
P(2 white, 1 red) = (4C2 × 6C1) / 10C3 = (6 × 6) / 120 = 9/30
P(3 white, no red) = (4C3 × 6C0) / 10C3 = (4 × 1) / 120 = 1/30
Hence the probability distribution of X is given by

X = x       0      1       2      3
P(X = x)    5/30   15/30   9/30   1/30
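These probabilities can be checked with binomial coefficients; the sketch below is ours and assumes Python 3.8+ for math.comb.

```python
from math import comb
from fractions import Fraction

white, red, drawn = 4, 6, 3
total = white + red

# P(X = k) = C(4, k) * C(6, 3 - k) / C(10, 3), k = number of white balls drawn
pmf = {k: Fraction(comb(white, k) * comb(red, drawn - k), comb(total, drawn))
       for k in range(drawn + 1)}

print(pmf)                 # {0: 1/6, 1: 1/2, 2: 3/10, 3: 1/30}, i.e. 5/30, 15/30, 9/30, 1/30
print(sum(pmf.values()))   # 1
```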
2.4 An introduction to elementary calculus:
Before taking up problems on continuous random variables, we need some fundamental ideas about differentiation and integration, which belong to calculus in higher-level mathematics.
Hence we introduce some simple techniques and formulae for handling the problems in statistics that involve calculus methods.
2.4.1 Differentiation:
1. A functional value is an exact value: for a function f(x), when x = a, we obtain the functional value f(a) = k.
2. A limiting value is an approximate value that approaches the exact value k as closely as we please. If the exact value is 4, the limiting value may be 4.000000001 or 3.999999994; the functional value and the limiting value are then practically the same. Hence on many occasions we use limiting values for critical problems.
The limiting value of f(x) as x approaches the number 2 is written as
lim_{x→2} f(x) = f(2) = l
3. The special limit lim_{h→0} [f(x + h) − f(x)] / h, when it exists, is called the derivative of the function f with respect to x and is denoted by f′(x). If y is a function of x, we call it the differential coefficient of y with respect to x and denote it by dy/dx.
4. Some rules on differentiation:
(i) The derivative of a constant function is zero: (c)′ = 0, where c is a constant.
(ii) If u is a function of x, k is a constant and the dash denotes differentiation, then [ku]′ = k[u]′
(iii) (u ± v)′ = u′ ± v′
(iv) (uv)′ = u′v + uv′
(v) (u/v)′ = (u′v − uv′) / v^2
5. Important formulae:
(i) (x^n)′ = n x^(n−1)
(ii) (e^x)′ = e^x
(iii) (log x)′ = 1/x
Example 4:
Evaluate the following limits:
(i) lim_{x→2} (x^2 + 5x) / (x + 2)   (ii) lim_{x→1} (x^2 − 1) / (x − 1)

Solution:
(i) lim_{x→2} (x^2 + 5x) / (x + 2) = [(2)^2 + 5(2)] / (2 + 2) = (4 + 10) / 4 = 14/4 = 7/2

(ii) lim_{x→1} (x^2 − 1) / (x − 1) = (1^2 − 1) / (1 − 1) = 0/0. This is an indeterminate form. Therefore first factorise and simplify, and then apply the limit to get the limiting value:
∴ (x^2 − 1) / (x − 1) = (x − 1)(x + 1) / (x − 1) = x + 1
∴ lim_{x→1} (x^2 − 1) / (x − 1) = lim_{x→1} (x + 1) = 1 + 1 = 2

Example 5:
Find the derivative of the following with respect to x:
(i) x^12 + 7   (ii) x^4 + 4x^2 − 5   (iii) (x^3)(e^x)   (iv) (x^2 + 1) / (x − 5)
Solution:
(i) Let y = x^12 + 7
∴ dy/dx = 12x^(12−1) + 0 = 12x^11
(ii) Let y = x^4 + 4x^2 − 5
y′ = 4x^3 + 4(2x) − 0 = 4x^3 + 8x
(iii) Let y = x^3 e^x. Using (uv)′ = u′v + uv′,
y′ = [x^3]′ (e^x) + (x^3) [e^x]′
= 3x^2 e^x + x^3 e^x
= e^x (3x^2 + x^3)
(iv) y = (x^2 + 1) / (x − 5). This is of the type (u/v)′ = (u′v − uv′) / v^2
dy/dx = {[x^2 + 1]′ (x − 5) − (x^2 + 1)[x − 5]′} / (x − 5)^2
= {[2x](x − 5) − (x^2 + 1)[1]} / (x − 5)^2
= (2x^2 − 10x − x^2 − 1) / (x − 5)^2
= (x^2 − 10x − 1) / (x − 5)^2
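Assuming the sympy library is available, the derivatives worked out in Example 5 can be verified symbolically; this check is our own addition (the printed forms may differ slightly from the hand-worked ones).

```python
import sympy as sp

x = sp.symbols('x')

exprs = [x**12 + 7,
         x**4 + 4*x**2 - 5,
         x**3 * sp.exp(x),
         (x**2 + 1) / (x - 5)]

for e in exprs:
    # diff applies the same power, product and quotient rules listed above
    print(e, '->', sp.simplify(sp.diff(e, x)))
# 12*x**11,  4*x**3 + 8*x,  x**2*(x + 3)*exp(x),  (x**2 - 10*x - 1)/(x - 5)**2
```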

2.4.2 Integration:
Integration is known as the reverse process of differentiation. Suppose the derivative of x^3 is 3x^2. Then the integral of 3x^2 with respect to x is x^3. In symbols:
d/dx (x^4) = 4x^3  ⇒  ∫ 4x^3 dx = x^4
Similarly,
d/dx (x^8) = 8x^7  ⇒  ∫ 8x^7 dx = x^8
d/dx (e^x) = e^x   ⇒  ∫ e^x dx = e^x

Note:
While differentiating a constant term we get zero. In the reverse process, that is on integration, the constant cannot be recovered unless its value is known. That is why we add an arbitrary constant C to each integral. Therefore the above examples are usually written as
∫ e^x dx = e^x + c  and  ∫ 8x^7 dx = x^8 + c
These integrals are called indefinite integrals.

Rules and formulae on integration:
(i) ∫ k dx = kx
(ii) ∫ x^n dx = x^(n+1) / (n + 1)
(iii) ∫ e^x dx = e^x
(iv) ∫ (1/x) dx = log x
(v) ∫ (u ± v) dx = ∫ u dx ± ∫ v dx

Example 6:
Integrate the following with respect to x:
(i) ∫ x^6 dx = x^(6+1) / (6 + 1) = x^7 / 7 + c
(ii) ∫ x^(−5) dx = x^(−5+1) / (−5 + 1) = x^(−4) / (−4) = −1 / (4x^4) + c
(iii) ∫ (1/x) dx = log x + c
(iv) ∫ √x dx = ∫ x^(1/2) dx = x^(1/2 + 1) / (1/2 + 1) = x^(3/2) / (3/2) = (2/3) x^(3/2) + c
(v) ∫ (x^4 + 2x^2 + 4x + 8) dx = x^5/5 + 2x^3/3 + 4x^2/2 + 8x + c
(vi) ∫ (e^x + x^4 + 1/x^3 + 10) dx = e^x + x^5/5 − 1/(2x^2) + 10x + c

The integrals discussed above are indefinite integrals. For a definite integral we have limits on both sides, i.e. a lower limit and an upper limit.

The integral ∫ f(x) dx is an indefinite integral. Integrating the same function between the given limits a and b gives the definite integral,
∫_a^b f(x) dx = k (a constant value),
where a is known as the lower limit and b as the upper limit of the definite integral.
To find the value of a definite integral we use the formula:
if ∫ f(x) dx = F(x), then ∫_a^b f(x) dx = F(b) − F(a)

An important note to teachers and students:
As far as the statistics problems in this book are concerned, the differentiation and integration methods are restricted to simple algebraic functions only.

Example 7:
Evaluate the following definite integrals:
(i) ∫_0^4 3x^2 dx   (ii) ∫_1^3 x^3 dx   (iii) ∫_2^5 x dx

Solution:
(i) ∫_0^4 3x^2 dx = [3x^3 / 3]_0^4 = [x^3]_0^4
= 4^3 − 0^3
= 64
(ii) ∫_1^3 x^3 dx = [x^4 / 4]_1^3
= (1/4)[x^4]_1^3
= (1/4)[3^4 − 1^4]
= (1/4)[81 − 1]
= (1/4)[80]
= 20
(iii) ∫_2^5 x dx = [x^2 / 2]_2^5
= (1/2)[5^2 − 2^2]
= (1/2)[25 − 4] = 21/2
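Assuming sympy is installed, the three definite integrals of Example 7 can be confirmed in a few lines (our own check):

```python
import sympy as sp

x = sp.symbols('x')

# integrate(f, (x, a, b)) returns F(b) - F(a), where F is an antiderivative of f
print(sp.integrate(3*x**2, (x, 0, 4)))   # 64
print(sp.integrate(x**3, (x, 1, 3)))     # 20
print(sp.integrate(x, (x, 2, 5)))        # 21/2
```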
Example 8:
Examine whether f(x) = 5x^4, 0 < x < 1, can be the p.d.f of a continuous random variable x.
Solution:
For a probability density function we must show that ∫_{−∞}^{∞} f(x) dx = 1,
that is, ∫_0^1 5x^4 dx = 1.
∫_0^1 5x^4 dx = 5[x^5 / 5]_0^1
= [x^5]_0^1
= 1^5 − 0
= 1
∴ f(x) is a p.d.f
Example 9:
A continuous random variable x follows the rule f(x) = Ax^2, 0 < x < 1. Determine A.

Solution:
Since f(x) is a p.d.f, ∫_{−∞}^{∞} f(x) dx = 1
Therefore ∫_0^1 Ax^2 dx = 1
A[x^3 / 3]_0^1 = 1
(A/3)[x^3]_0^1 = 1
(A/3)[1] = 1
A = 3

Example 10:
Let f(x) = c(1 − x)x^2, 0 < x < 1, be the probability density function of a random variable x. Find the constant c.
Solution:
f(x) = c(1 − x)x^2, 0 < x < 1
Since f(x) is a p.d.f, ∫_{−∞}^{∞} f(x) dx = 1
∴ ∫_0^1 c(x^2 − x^3) dx = 1
c[x^3/3 − x^4/4]_0^1 = 1
c[(1^3/3 − 1^4/4) − (0 − 0)] = 1
c[1/3 − 1/4] = 1
c[(4 − 3)/12] = 1
c(1/12) = 1
c = 12
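Examples 9 and 10 both amount to solving ∫ f(x) dx = 1 for an unknown constant; a sympy sketch of ours:

```python
import sympy as sp

x, A, c = sp.symbols('x A c', positive=True)

# Example 9: f(x) = A*x^2 on (0, 1)
print(sp.solve(sp.Eq(sp.integrate(A * x**2, (x, 0, 1)), 1), A))            # [3]

# Example 10: f(x) = c*(1 - x)*x^2 on (0, 1)
print(sp.solve(sp.Eq(sp.integrate(c * (1 - x) * x**2, (x, 0, 1)), 1), c))  # [12]
```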
Example 11:
A random variable x has the density function
f(x) = 1/4, −2 < x < 2
     = 0,   elsewhere
Obtain (i) P(−1 < x < 2) (ii) P(x > 1)
Solution:
(i) P(−1 < x < 2) = ∫_{−1}^{2} f(x) dx

= ∫_{−1}^{2} (1/4) dx = (1/4)[x]_{−1}^{2}
= (1/4)[2 − (−1)]
= (1/4)[3]
= 3/4
(ii) Here the upper limit of the p.d.f is 2, so the required probability is
P(x > 1) = ∫_1^2 (1/4) dx
= (1/4)[x]_1^2
= (1/4)[2 − 1]
= (1/4)[1]
= 1/4
2.5 Mathematical Expectation:
Expectation is a very basic concept and is employed widely
in decision theory, management science, system analysis, theory of
games and many other fields. Some of these applications will be
discussed in the chapter on Decision Theory.
The expected value, or mathematical expectation, of a random variable X is the weighted average of the values that X can assume, with the probabilities of its various values as weights.
Thus the expected value of a random variable is obtained by taking the various values that the variable can take, multiplying each by its corresponding probability, and summing these products. The expectation of X is denoted by E(X).

2.5.1 Expectation of a discrete random variable:
Let X be a discrete random variable which can assume any
of the values of x1, x2, x3……..xn with respective probabilities p1,
p2, p3……pn. Then the mathematical expectation of X is given by
E(X) = x1 p1 + x2 p2 + x3 p3 + ... + xn pn
= Σ_{i=1}^{n} xi pi,  where Σ_{i=1}^{n} pi = 1

Note:
Mathematical expectation of a random variable is also
known as its arithmetic mean. We shall give some useful theorems
on expectation without proof.
2.5.2 Theorems on Expectation:
1. For two random variables X and Y, if E(X) and E(Y) exist, then E(X + Y) = E(X) + E(Y). This is known as the addition theorem on expectation.
2. For two independent random variables X and Y, E(XY) = E(X)·E(Y), provided all the expectations exist. This is known as the multiplication theorem on expectation.
3. The expectation of a constant is the constant itself, i.e. E(c) = c.
4. E(cX) = cE(X)
5. E(aX + b) = aE(X) + b
6. The variance of a constant is zero, i.e. Var(c) = 0.
7. Var(X + c) = Var(X)
Note: This shows that the variance is independent of a change of origin.
8. Var(aX) = a^2 Var(X)
Note: This shows that a change of scale affects the variance.
9. Var(aX + b) = a^2 Var(X)
10. Var(b − aX) = a^2 Var(X)
Some of these results are checked numerically in the sketch below.
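A minimal sketch of such a check (our own, using the fair-die distribution of Example 12 below):

```python
from fractions import Fraction

# Distribution of a fair die: each face 1..6 has probability 1/6
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

def E(g, pmf):
    """E[g(X)] = sum of g(x) * p(x) over all values x."""
    return sum(g(x) * p for x, p in pmf.items())

mean = E(lambda x: x, pmf)
var  = E(lambda x: (x - mean)**2, pmf)
print(mean)                                                        # 7/2

a, b = 5, 2
print(E(lambda x: a*x + b, pmf) == a*mean + b)                     # True: E(aX+b) = aE(X)+b
print(E(lambda x: (a*x + b - (a*mean + b))**2, pmf) == a**2*var)   # True: Var(aX+b) = a^2 Var(X)
```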
Definition:
Let f(x) be a function of random variable X. Then
expectation of f(x) is given by E(f(x)) = Σ f(x) P(X=x) , where
P(X=x) is the probability function of x.

Particular cases:
1. If we take f(x) = X^r, then E(X^r) = Σ x^r p(x) is defined as the rth moment about the origin, or rth raw moment, of the probability distribution. It is denoted by µ′_r.
Thus µ′_r = E(X^r), so µ′_1 = E(X) and µ′_2 = E(X^2).
Hence mean = X̄ = µ′_1 = E(X), and
Variance = ΣX^2/N − (ΣX/N)^2
= E(X^2) − [E(X)]^2
= µ′_2 − (µ′_1)^2
The variance is denoted by µ_2.
2. If we take f(x) = (X − X̄)^r, then E(X − X̄)^r = Σ (X − X̄)^r p(x), which is µ_r, the rth moment about the mean, or rth central moment. In particular, for r = 2 we get
µ_2 = E(X − X̄)^2
= Σ (X − X̄)^2 p(x)
= E[X − E(X)]^2
These formulae give the variance of a probability distribution in terms of expectations.
Example 12:
Find the expected value of x, where x represents the outcome when a die is thrown.
Solution:
Here each of the outcomes 1, 2, 3, 4, 5 and 6 occurs with probability 1/6. Thus the probability distribution of X is

x       1     2     3     4     5     6
P(x)    1/6   1/6   1/6   1/6   1/6   1/6

Thus the expected value of X is
E(X) = Σ xi pi = x1 p1 + x2 p2 + x3 p3 + x4 p4 + x5 p5 + x6 p6
= 1 × (1/6) + 2 × (1/6) + 3 × (1/6) + 4 × (1/6) + 5 × (1/6) + 6 × (1/6)
= 21/6 = 7/2
E(X) = 3.5

Remark:
In the games of chance, the expected value of the game is
defined as the value of the game to the player.
The game is said to be favourable to the player if the
expected value of the game is positive, and unfavourable, if value
of the game is negative. The game is called a fair game if the
expected value of the game is zero.
Example 13:
A player throws a fair die. If a prime number occurs he wins that number of rupees, but if a non-prime number occurs he loses that number of rupees. Find the expected gain of the player and conclude.
Solution:
Each of the six outcomes in throwing a die has been assigned a certain amount of gain or loss. To find the expected gain of the player, these assigned gains (a loss being treated as a negative gain) are denoted by X.
These can be written as follows:

Outcome on a die                       1     2     3     4     5     6
Associated gain to the outcome (xi)   −1     2     3    −4     5    −6
P(xi)                                 1/6   1/6   1/6   1/6   1/6   1/6

Note that 2, 3 and 5 are prime numbers. Now the expected gain is
E(X) = Σ_{i=1}^{6} xi pi
= (−1)(1/6) + (2)(1/6) + (3)(1/6) + (−4)(1/6) + (5)(1/6) + (−6)(1/6)
= −1/6
Since the expected value of the game is negative, the game is unfavourable to the player.
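A quick computational check of this expected gain (our own sketch; the set of prime faces is written out by hand):

```python
from fractions import Fraction

primes = {2, 3, 5}                       # prime faces win, the other faces lose
gain = {face: (face if face in primes else -face) for face in range(1, 7)}

expected_gain = sum(Fraction(1, 6) * g for g in gain.values())
print(expected_gain)                     # -1/6, so the game is unfavourable
```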
Example 14:
An urn contains 7 white and 3 red balls. Two balls are drawn together at random from the urn. Find the expected number of white balls drawn.
Solution:
From the urn containing 7 white and 3 red balls, two balls can be drawn in 10C2 ways. Let X denote the number of white balls drawn; X can take the values 0, 1 and 2.
The probability distribution of X is obtained as follows:
P(0) = probability that neither of the two balls is white
= probability that both balls drawn are red
= 3C2 / 10C2 = (3 × 2) / (10 × 9) = 1/15
P(1) = probability of getting 1 white and 1 red ball
= (7C1 × 3C1) / 10C2 = (7 × 3 × 2) / (10 × 9) = 7/15
P(2) = probability of getting two white balls
= 7C2 / 10C2 = (7 × 6) / (10 × 9) = 7/15
Hence the expected number of white balls drawn is
E(X) = Σ xi p(xi) = 0 × (1/15) + 1 × (7/15) + 2 × (7/15)
= 21/15 = 7/5 = 1.4
Example 15:
A dealer in television sets estimates from his past experience the probabilities of selling a given number of television sets in a day, as shown below. Find the expected number of sales in a day.

Number of TV sold in a day   0      1      2      3      4      5      6
Probability                  0.02   0.10   0.21   0.32   0.20   0.09   0.06

Solution:
We observe that the number of television sets sold in a day is a random variable which can assume the values 0, 1, 2, 3, 4, 5, 6 with the respective probabilities given in the table.
Now the expectation of X is E(X) = Σ xi pi
= (0)(0.02) + (1)(0.10) + (2)(0.21) + (3)(0.32) + (4)(0.20) + (5)(0.09) + (6)(0.06)
E(X) = 3.09
The expected number of sales per day is about 3.
Example 16:
Let x be a discrete random variable with the following probability distribution:

X          −3    6     9
P(X = x)   1/6   1/2   1/3

Find the mean and variance.

Solution:
E(x) = Σ xi pi
= (−3)(1/6) + (6)(1/2) + (9)(1/3)
= 11/2

E(x^2) = Σ xi^2 pi
= (−3)^2 (1/6) + (6)^2 (1/2) + (9)^2 (1/3) = 93/2
Var(X) = E(X^2) − [E(X)]^2
= 93/2 − (11/2)^2
= 93/2 − 121/4
= (186 − 121)/4
= 65/4
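The same mean and variance can be reproduced with exact fractions (our own sketch):

```python
from fractions import Fraction

pmf = {-3: Fraction(1, 6), 6: Fraction(1, 2), 9: Fraction(1, 3)}

mean = sum(x * p for x, p in pmf.items())        # E(x)
ex2  = sum(x**2 * p for x, p in pmf.items())     # E(x^2)
var  = ex2 - mean**2                             # Var(x) = E(x^2) - [E(x)]^2

print(mean, ex2, var)                            # 11/2  93/2  65/4
```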
2.5.3 Expectation of a continuous random variable:
Let X be a continuous random variable with probability density function f(x). Then the mathematical expectation of X is defined as
E(X) = ∫_{−∞}^{∞} x f(x) dx, provided the integral exists.
Remark:
If g(x) is a function of a random variable and E[g(x)] exists, then
E[g(x)] = ∫_{−∞}^{∞} g(x) f(x) dx

Example 17:
Let X be a continuous random variable with p.d.f given by f(x) = 4x^3, 0 < x < 1. Find the expected value of X.

Solution:
We know that E(X) = ∫_{−∞}^{∞} x f(x) dx.
In this problem E(X) = ∫_0^1 x(4x^3) dx

= 4 ∫_0^1 x^4 dx
= 4 [x^5 / 5]_0^1
= (4/5)[x^5]_0^1
= (4/5)[1^5 − 0^5]
= (4/5)[1]
= 4/5
Example 18:
Let x be a continuous random variable with p.d.f given by f(x) = 3x^2, 0 < x < 1. Find the mean and variance.
Solution:
E(x) = ∫_{−∞}^{∞} x f(x) dx
= ∫_0^1 x[3x^2] dx
= 3 ∫_0^1 x^3 dx
= 3 [x^4 / 4]_0^1
= (3/4)[x^4]_0^1
= (3/4)[1^4 − 0]
= 3/4

E(x^2) = ∫_{−∞}^{∞} x^2 f(x) dx
= ∫_0^1 x^2 [3x^2] dx
= 3 ∫_0^1 x^4 dx
= 3 [x^5 / 5]_0^1
= (3/5)[x^5]_0^1
= (3/5)[1^5 − 0]
= 3/5
Variance = E(x^2) − [E(x)]^2
Var(x) = 3/5 − (3/4)^2
= 3/5 − 9/16
= (48 − 45)/80 = 3/80
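Assuming sympy is available, Example 18 reduces to two symbolic integrals (our own check):

```python
import sympy as sp

x = sp.symbols('x')
f = 3 * x**2                                   # p.d.f of Example 18 on (0, 1)

mean = sp.integrate(x * f, (x, 0, 1))          # E(x)   = 3/4
ex2  = sp.integrate(x**2 * f, (x, 0, 1))       # E(x^2) = 3/5
var  = ex2 - mean**2                           # Var(x) = 3/80

print(mean, ex2, var)
```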
2.6 Moment generating function (M.G.F.) (concepts only):
The moment generating function is a convenient device for finding moments. It is a special form of mathematical expectation and is very useful in deriving the moments of a probability distribution.
Definition:
If X is a random variable, then the expected value of e^(tx) is known as the moment generating function, provided the expected value exists for every value of t in an interval −h < t < h, where h is some positive real value.
The moment generating function is denoted by Mx(t).
For a discrete random variable,
Mx(t) = E(e^(tx))
= Σ e^(tx) p(x)
= Σ [1 + tx + (tx)^2/2! + (tx)^3/3! + ...] p(x)
Mx(t) = 1 + t µ1′ + (t^2/2!) µ2′ + (t^3/3!) µ3′ + ... = Σ_{r=0}^{∞} (t^r/r!) µr′
In this expansion, the rth raw moment µr′ is the coefficient of t^r/r!. To find the moments, differentiate the moment generating function with respect to t once, twice, thrice, ... and put t = 0 in the first, second, third, ... derivatives to obtain the first, second, third, ... moments.
The resulting expressions give the raw moments about the origin. The central moments are then obtained by using the relationship between raw moments and central moments.
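As a concrete illustration of our own (not from the text), take the fair-die distribution: Mx(t) = Σ e^(tx) (1/6), and differentiating at t = 0 recovers the raw moments. The sketch assumes sympy is installed.

```python
import sympy as sp

t = sp.symbols('t')

# M_X(t) = E(e^(tX)) for a fair die: each value 1..6 has probability 1/6
M = sum(sp.Rational(1, 6) * sp.exp(t * k) for k in range(1, 7))

mu1 = sp.diff(M, t, 1).subs(t, 0)    # first raw moment  E(X)   = 7/2
mu2 = sp.diff(M, t, 2).subs(t, 0)    # second raw moment E(X^2) = 91/6

print(mu1, mu2, mu2 - mu1**2)        # variance = 91/6 - 49/4 = 35/12
```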

2.7 Characteristic function:
The moment generating function does not exist for every distribution. Hence another function, which always exists for all distributions, is used; it is known as the characteristic function.
It is the expected value of e^(itx), where i = √(−1) and t is real. The characteristic function of a random variable X is denoted by φx(t).
For a discrete variable X having the probability function p(x), the characteristic function is
φx(t) = Σ e^(itx) p(x)
For a continuous variable X having density function f(x), such that a < x < b, the characteristic function is
φx(t) = ∫_a^b e^(itx) f(x) dx

Exercise - 2
I. Choose the best answer:
1. Σ_{i=1}^{n} p(xi) is equal to
(a) 0   (b) 1   (c) −1   (d) ∞
2. If F(x) is a distribution function, then F(−∞) is
(a) −1   (b) 0   (c) 1   (d) −∞
3. From the given random variable table, the value of a is
X = x   0    1    2
pi      a    2a   a
(a) 1   (b) 1/2   (c) 4   (d) 1/4
4. E(2x + 3) is
(a) E(2x)   (b) 2E(x) + 3   (c) E(3)   (d) 2x + 3
5. Var(x + 8) is
(a) var(8)   (b) var(x)   (c) 8 var(x)   (d) 0
6. Var(5x + 2) is
(a) 25 var(x)   (b) 5 var(x)   (c) 2 var(x)   (d) 25
7. Variance of the random variable X is
(a) E(x^2) − [E(x)]^2   (b) [E(x)]^2 − E(x^2)   (c) E(x^2)   (d) [E(x)]^2
8. Variance of the random variable x is 1/16; its standard deviation is
(a) 1/256   (b) 1/32   (c) 1/64   (d) 1/4
9. A random variable X has E(x) = 2 and E(x^2) = 8; its variance is
(a) 4   (b) 6   (c) 8   (d) 2

10. If f(x) is the p.d.f of the continuous random variable x, then E(x^2) is
(a) ∫_{−∞}^{∞} f(x) dx   (b) ∫_{−∞}^{∞} x f(x) dx   (c) ∫_{−∞}^{∞} x^2 f(x) dx   (d) ∫_{−∞}^{∞} f(x^2) dx

II. Fill in the blanks:
11. If F(x) is a distribution function, then F(+∞) is equal to ________
12. If F(x) is a cumulative distribution function of a continuous random variable x with p.d.f f(x), then F′(x) = __________
13. If f(x) is the probability density function of a continuous random variable X, then ∫_{−∞}^{∞} f(x) dx is equal to ________
14. Mathematical expectation of a random variable X is also known as _____________
15. Variance of a constant is _____________
16. Var(12x) is _____________
17. Var(4x + 7) is _________
18. If x is a discrete random variable with the probabilities pi, then the expected value of x^2 is ________
19. If f(x) is the p.d.f of the continuous random variable X, then the expectation of X is given by __________
20. The moment generating function for a discrete random variable is given by ____________

III. Answer the following:


21. Define random variable.
22. Define discrete random variable
23. Define continuous random variable
24. What is probability mass function?
25. What is discrete probability distribution?
26. Define probability density function.
27. Write the properties of distribution function.
28. Define mathematical expectation for discrete random
variable.
29. Define the expectation of a continuous random variable.
30. State the moment generating function.
31. State the characteristic function for a discrete random
variable.
32. State the characteristic function for the continuous random
variable.
33. Write short note on moment generating function.
34. Write a short note on characteristic function.
35. Find the probability distribution of X when 3 coins are tossed, where X denotes the number of heads obtained.
36. Two dice are thrown simultaneously and getting three is
termed as success. Obtain the probability distribution of the
number of threes.
37. Three cards are drawn at random successively, with
replacement, from a well shuffled pack of 52 cards. Getting
a card of diamond is termed as success. Obtain the
probability distribution of the number of success.
38. A random variable X has the following probability distribution:
Value of x   0    1    2    3    4
P(X = x)     3a   4a   6a   7a   8a
(a) Determine the value of a   (b) Find P(1 < x < 4)
(c) Find P(1 ≤ x ≤ 4)   (d) Find P(x > 2)
(e) Find the distribution function of x
39. A random variable X has the following probability function:
Values of X, x   0   1   2    3    4    5     6      7
P(x)             0   k   2k   2k   3k   k^2   2k^2   7k^2 + k
(i) Find k   (ii) Find P(0 < x < 5)   (iii) Find P(x ≤ 6)

40. Verify whether the following are probability density functions:
(i) f(x) = 6x^5, 0 < x < 1
(ii) f(x) = 2x/9, 0 < x < 3
41. A continuous random variable x follows the probability law f(x) = Ax^3, 0 < x < 1. Determine A.
42. A random variable X has the density function f(x) = 3x^2, 0 < x < 1. Find the probability between 0.2 and 0.5.
43. A random variable X has the following probability distribution:
X = x   5     2     1
P(x)    1/4   1/2   1/4
Find the expected value of X.
44. A random variable X has the following distribution:
x      −1    0     1     2
P(x)   1/3   1/6   1/6   1/3
Find E(x), E(x^2) and Var(x).
45. A random variable X has E(x) = 1/2 and E(x^2) = 1/2. Find its variance and standard deviation.
46. In a continuous distribution whose probability density function is given by f(x) = (3/4) x(2 − x), 0 < x < 2, find the expected value of x.
47. The probability density function of a continuous random variable X is given by f(x) = x/2 for 0 < x < 2. Find its mean and variance.

Answers
I.
1. (b)   2. (b)   3. (d)   4. (b)   5. (b)
6. (a)   7. (a)   8. (d)   9. (a)   10. (c)
II.
11. 1   12. f(x)   13. 1   14. Mean
15. zero   16. 144 var(x)   17. 16 var(x)
18. Σ xi^2 pi   19. ∫_{−∞}^{∞} x f(x) dx   20. Σ_{r=0}^{∞} (t^r/r!) µr′
III.
35.
X = x    0     1     2     3
P(xi)    1/8   3/8   3/8   1/8
36.
X = x       0       1       2
P(X = x)    25/36   10/36   1/36
37.
X = x    0      1      2     3
P(xi)    27/64  27/64  9/64  1/64
38. (i) a = 1/28   (ii) 13/28   (iii) 25/28   (iv) 15/28
(v)
x      0     1     2      3      4
F(x)   3/28  7/28  13/28  20/28  28/28 = 1
39. (i) k = 1/10   (ii) 4/5   (iii) 83/100
40. (i) p.d.f   (ii) p.d.f
41. A = 4
42. P(0.2 < x < 0.5) = 0.117
43. 2.5
44. E(x) = 1/2, E(x^2) = 11/6, var(x) = 19/12
45. 1/4, 1/2
46. E(x) = 1
47. E(x) = 4/3, var(x) = 2/9
