

6-1. INTRODUCTION

Many frequently used r.v.'s can be both characterized and dealt with effectively for practical purposes by consideration of quantities called their expectation. For example, a gambler might be interested in his average winnings at a game, a businessman in his average profits on a product, a physicist in the average charge of a particle, and so on. The average value of a random phenomenon is also termed as its mathematical expectation or expected value. In this chapter we will define and study this concept in detail, which will be used extensively in subsequent chapters.
6-2. MATHEMATICAL EXPECTATION OR EXPECTED VALUE OF A RANDOM VARIABLE

Once we have constructed the probability distribution for a random variable, we often want to compute the mean or expected value of the random variable. The expected value of a discrete random variable is a weighted average of all possible values of the random variable, where the weights are the probabilities associated with the corresponding values. The mathematical expression for computing the expected value of a discrete random variable X with probability mass function (p.m.f.) f(x) is given below:

E(X) = \sum_x x f(x)   (for discrete r.v.)   ...(6-1)

The mathematical expression for computing the expected value of a continuous random variable X with probability density function (p.d.f.) f(x) is, however, as follows:

E(X) = \int_{-\infty}^{\infty} x f(x) \, dx   (for continuous r.v.)   ...(6-1a)

provided the right-hand integral in (6-1a) or series in (6-1) is absolutely convergent, i.e., provided

\sum_x |x| f(x) < \infty   ...(6-2)

or   \int_{-\infty}^{\infty} |x| f(x) \, dx < \infty   ...(6-2a)
Remarks:
1. Since absolute convergence implies ordinary convergence, if (6-2) or (6-2a) holds then the series in (6-1) or the integral in (6-1a) also exists, i.e., has a finite value, and in that case we define E(X) by (6-1) or (6-1a). It should be clearly understood that although X has an expectation only if the L.H.S. in (6-2) or (6-2a) exists, i.e., converges to a finite limit, its value is given by (6-1) or (6-1a).
2. E(X) exists iff E|X| exists.
3. Expected Value and Variance of an Indicator Variable. Consider the indicator variable of the event A:

X = I_A = \begin{cases} 1, & \text{if } A \text{ happens} \\ 0, & \text{if } \bar{A} \text{ happens} \end{cases}

Then   E(X) = 1 \cdot P(X = 1) + 0 \cdot P(X = 0),   i.e.,   E(I_A) = 1 \cdot P(I_A = 1) + 0 \cdot P(I_A = 0) = P(A)

This gives us a very useful tool: to find P(A), we may instead evaluate E(I_A). Thus

P(A) = E(I_A)   ...(6-2b)

E(X^2) = 1^2 \cdot P(X = 1) + 0^2 \cdot P(X = 0) = P(I_A = 1) = P(A)

Var X = E(X^2) - [E(X)]^2 = P(A) - [P(A)]^2 = P(A)[1 - P(A)] = P(A) \cdot P(\bar{A})   ...(6-2c)

i.e.,   Var(I_A) = P(A) \cdot P(\bar{A})   ...(6-2d)
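The following short Python sketch (assuming NumPy; the event A = "a fair die shows more than 4" is a hypothetical choice for illustration) verifies (6-2b) and (6-2d) by simulation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Event A: a fair die shows a number greater than 4, so P(A) = 2/6 = 1/3.
rolls = rng.integers(1, 7, size=100_000)
I_A = (rolls > 4).astype(float)   # indicator variable of A

# (6-2b): E(I_A) = P(A); the sample mean of I_A estimates P(A).
print("estimate of P(A):", I_A.mean())   # ~0.3333
# (6-2d): Var(I_A) = P(A)[1 - P(A)] = (1/3)(2/3) ~ 0.2222
print("sample variance :", I_A.var())
```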
4. If the r.v. X takes the values 0!, 1!, 2!, ... with the probability law

P(X = x!) = \frac{e^{-1}}{x!} ; x = 0, 1, 2, ...,   then   \sum_{x=0}^{\infty} x! \, P(X = x!) = e^{-1} \sum_{x=0}^{\infty} 1,

which is a divergent series. In this case E(X) does not exist.
More rigorously, let us consider a random variable X which takes the values

x_i = (-1)^{i+1} (i + 1) ; i = 1, 2, 3, ...   with the probabilities   p_i = P(X = x_i) = \frac{1}{i(i+1)} ; i = 1, 2, 3, ...

Here,   \sum_{i=1}^{\infty} p_i x_i = \sum_{i=1}^{\infty} \frac{(-1)^{i+1}}{i} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + ...

Using the Leibnitz test for alternating series, the series on the right-hand side is conditionally convergent, since the terms alternate in sign, are monotonically decreasing and converge to zero. By conditional convergence we mean that although \sum_i p_i x_i converges, \sum_i p_i |x_i| does not converge. So, rigorously speaking, in the above example E(X) does not exist, although \sum_i p_i x_i is finite, viz., \log_e 2.
As another example, let us consider the r.v. X which takes the values x_k = (-1)^k \frac{2^k}{k} (k = 1, 2, 3, ...), with probabilities p_k = 2^{-k}. Here also we get

\sum_{k=1}^{\infty} p_k x_k = \sum_{k=1}^{\infty} \frac{(-1)^k}{k} = -\log_e 2   and   \sum_{k=1}^{\infty} p_k |x_k| = \sum_{k=1}^{\infty} \frac{1}{k},

which is a divergent series. Hence in this case also the expectation does not exist.
As an illustration of a continuous r.v., let us consider the r.v. X with p.d.f.

f(x) = \frac{1}{\pi (1 + x^2)} ; -\infty < x < \infty,

which is the p.d.f. of the standard Cauchy distribution (cf. Chapter 9). Here

E|X| = \int_{-\infty}^{\infty} \frac{|x|}{\pi (1 + x^2)} \, dx = \frac{2}{\pi} \int_{0}^{\infty} \frac{x}{1 + x^2} \, dx   (∵ the integrand is an even function of x.)

Since this integral does not converge to a finite limit, E(X) does not exist.
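The non-existence of E(X) for the Cauchy distribution can also be seen empirically. The following Python sketch (assuming NumPy) contrasts the running sample mean of Cauchy observations, which never settles down, with that of normal observations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6

cauchy = rng.standard_cauchy(n)
normal = rng.standard_normal(n)

# Running means at a few checkpoints: the normal mean stabilises near 0,
# while the Cauchy "mean" keeps jumping because E(X) does not exist.
for k in (10**3, 10**4, 10**5, 10**6):
    print(f"n={k:>8}: normal mean {normal[:k].mean():+.4f}, "
          f"cauchy mean {cauchy[:k].mean():+.4f}")
```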
6-3. EXPECTED VALUE OF FUNCTION OF A RANDOM VARIABLE

Consider a r.v. X with p.d.f. (p.m.f.) f(x) and distribution function F(x). If g(·) is a function such that g(X) is a r.v. and E[g(X)] exists (i.e., is defined), then

E[g(X)] = \int_{-\infty}^{\infty} g(x) f(x) \, dx   (for continuous r.v.)   ...(6-3)

E[g(X)] = \sum_x g(x) f(x)   (for discrete r.v.)   ...(6-3a)

By definition, the expectation of Y = g(X) is

E[g(X)] = E(Y) = \int y \, dH(y) = \int y h(y) \, dy   ...(6-4)   or   E(Y) = \sum_y y h(y),   ...(6-4a)

where H(y) is the distribution function of Y and h(y) is the p.d.f. of Y.
[The proof of the equivalence of (6-3) and (6-4) is beyond the scope of the book.]
This result extends to higher dimensions. If X and Y have a joint p.d.f. f(x, y) and Z = h(X, Y) is a random variable for some function h, and if E(Z) exists, then

E(Z) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h(x, y) f(x, y) \, dx \, dy   ...(6-5)

E(Z) = \sum_x \sum_y h(x, y) f(x, y)   ...(6-5a)

Particular Cases
1. If we take g(X) = X^r, r being a positive integer, in (6-3), we get

E(X^r) = \int_{-\infty}^{\infty} x^r f(x) \, dx,   ...(6-5b)

which is defined as \mu_r', the rth moment (about origin) of the probability distribution. Thus \mu_r' (about origin) = E(X^r). In particular,

\mu_1' (about origin) = E(X)   and   \mu_2' (about origin) = E(X^2)

Hence,   Mean = \bar{x} = \mu_1' (about origin) = E(X)   ...(6-6)

and   \mu_2 = \mu_2' - \mu_1'^2 = E(X^2) - [E(X)]^2   ...(6-6a)

2. If g(X) = [X - E(X)]^r = (X - \bar{x})^r, then from (6-3), we obtain

E[X - E(X)]^r = \int_{-\infty}^{\infty} [x - E(X)]^r f(x) \, dx = \int_{-\infty}^{\infty} (x - \bar{x})^r f(x) \, dx,   ...(6-7)

which is \mu_r, the rth moment about mean. In particular, if r = 2, we get

\mu_2 = E[X - E(X)]^2 = \int_{-\infty}^{\infty} (x - \bar{x})^2 f(x) \, dx   ...(6-8)

Formulae (6-6a) and (6-8) give the variance of the probability distribution of a continuous r.v. X in terms of expectation.

3. Taking g(X) = constant = c, say, in (6-3), we get   E(c) = c   ...(6-9)

Remark. The corresponding results for a discrete r.v. X can be obtained on replacing integration by summation (\Sigma) over the given range of the variable X in the formulae (6-5) to (6-9).
In the following sections, we shall establish some more results on 'Expectation' in the form of theorems, for continuous r.v.'s. The corresponding results for discrete r.v.'s can be obtained similarly on replacing integration by summation (\Sigma) over the given range of the variable X, and are left as an exercise to the reader.
6-4. PROPERTIES OF EXPECTATION
Property 1. Addition Theorem of Expectation. If X and Y are random variables, then

E(X + Y) = E(X) + E(Y),   ...(6-10)

provided all the expectations exist.

Proof. Let X and Y be continuous r.v.'s with joint p.d.f. f_{XY}(x, y) and marginal p.d.f.'s f_X(x) and f_Y(y) respectively. Then by definition,

E(X) = \int_{-\infty}^{\infty} x f_X(x) \, dx   ...(6-11)

and   E(Y) = \int_{-\infty}^{\infty} y f_Y(y) \, dy   ...(6-12)

E(X + Y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (x + y) f_{XY}(x, y) \, dx \, dy = \int_{-\infty}^{\infty} x f_X(x) \, dx + \int_{-\infty}^{\infty} y f_Y(y) \, dy

= E(X) + E(Y)   [on using (6-11) and (6-12)]
The result in (6-10) can be extended to n variables as given below.

Generalisation. The mathematical expectation of the sum of n random variables is equal to the sum of their expectations, provided all the expectations exist. Symbolically, if X_1, X_2, ..., X_n are random variables, then

E(X_1 + X_2 + ... + X_n) = E(X_1) + E(X_2) + ... + E(X_n),   i.e.,   E\left(\sum_{i=1}^{n} X_i\right) = \sum_{i=1}^{n} E(X_i),   ...(6-13)

if all the expectations exist.

Proof. Using (6-10), for two r.v.'s X_1 and X_2, we get

E(X_1 + X_2) = E(X_1) + E(X_2) \Rightarrow (6-13) is true for n = 2.   ...(*)

Let us now suppose that (6-13) is true for n = r, (say), so that

E\left(\sum_{i=1}^{r} X_i\right) = \sum_{i=1}^{r} E(X_i)   ...(6-14)

Then   E\left(\sum_{i=1}^{r+1} X_i\right) = E\left(\sum_{i=1}^{r} X_i + X_{r+1}\right) = E\left(\sum_{i=1}^{r} X_i\right) + E(X_{r+1})   [using (6-10)]

= \sum_{i=1}^{r} E(X_i) + E(X_{r+1})   [using (6-14)]

= \sum_{i=1}^{r+1} E(X_i)

Hence if (6-13) is true for n = r, it is also true for n = r + 1. But we have proved in (*) above that (6-13) is true for n = 2. Hence it is true for n = 2 + 1 = 3; n = 3 + 1 = 4; ... and so on. Hence, by the principle of mathematical induction, (6-13) is true for all positive integral values of n.
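A Python sketch (assuming NumPy; the particular pair of dependent variables is a hypothetical choice) confirming numerically that the addition theorem needs no independence assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# X ~ Exponential(mean 2); Y = X^2 + noise is strongly DEPENDENT on X.
# Theory: E(X) = 2, E(Y) = E(X^2) = Var(X) + [E(X)]^2 = 4 + 4 = 8,
# so by the addition theorem E(X + Y) = 10 -- no independence needed.
X = rng.exponential(scale=2.0, size=1_000_000)
Y = X**2 + rng.normal(size=X.size)

print("simulated E(X + Y):", (X + Y).mean())   # ~10
print("E(X) + E(Y) theory:", 2 + 8)
```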
Property 2. Multiplication Theorem of Expectation. If X and Y are independent random variables, then

E(XY) = E(X) \cdot E(Y),   ...(6-15)

provided all the expectations exist.

Proof. Proceeding as in Property 1, we have

E(XY) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} xy \, f_{XY}(x, y) \, dx \, dy = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} xy \, f_X(x) f_Y(y) \, dx \, dy   [since X and Y are independent]

= \int_{-\infty}^{\infty} x f_X(x) \, dx \int_{-\infty}^{\infty} y f_Y(y) \, dy = E(X) E(Y),   provided X and Y are independent.   [Using (6-11) and (6-12)]

Generalisation. The mathematical expectation of the product of a number of independent random variables is equal to the product of their expectations. Symbolically, if X_1, X_2, ..., X_n are n independent r.v.'s, then

E(X_1 X_2 ... X_n) = E(X_1) E(X_2) ... E(X_n),   i.e.,   E\left(\prod_{i=1}^{n} X_i\right) = \prod_{i=1}^{n} E(X_i),   ...(6-16)

provided all the expectations exist.


Proof. Using (6-15), for two independent r.v.'s X_1 and X_2, we get

E(X_1 X_2) = E(X_1) E(X_2) \Rightarrow (6-16) is true for n = 2.   ...(*)

Let us now suppose that (6-16) is true for n = r, (say), so that

E\left(\prod_{i=1}^{r} X_i\right) = \prod_{i=1}^{r} E(X_i)   ...(6-17)

Then   E\left(\prod_{i=1}^{r+1} X_i\right) = E\left[\left(\prod_{i=1}^{r} X_i\right) X_{r+1}\right] = E\left(\prod_{i=1}^{r} X_i\right) E(X_{r+1})   [using (6-15)]

= \left[\prod_{i=1}^{r} E(X_i)\right] E(X_{r+1})   [using (6-17)]

= \prod_{i=1}^{r+1} E(X_i)

Hence, if (6-16) is true for n = r, it is also true for n = r + 1. Hence, using (*), by the principle of mathematical induction we conclude that (6-16) is true for all positive integral values of n.
Property 3. If X is a random variable and 'a' is a constant, then

(i) E[a \psi(X)] = a E[\psi(X)]   ...(6-18)

(ii) E[\psi(X) + a] = E[\psi(X)] + a,   ...(6-19)

where \psi(X), a function of X, is a r.v. and all the expectations exist.

Proof.
(i) E[a \psi(X)] = \int_{-\infty}^{\infty} a \psi(x) f(x) \, dx = a \int_{-\infty}^{\infty} \psi(x) f(x) \, dx = a E[\psi(X)]

(ii) E[\psi(X) + a] = \int_{-\infty}^{\infty} \{\psi(x) + a\} f(x) \, dx = \int_{-\infty}^{\infty} \psi(x) f(x) \, dx + a \int_{-\infty}^{\infty} f(x) \, dx = E[\psi(X)] + a

Corollary
(i) If \psi(X) = X in (6-18), then   E(aX) = a E(X)   and   E(X + a) = E(X) + a   ...(6-20)
(ii) If \psi(X) = 1 in (6-18), then   E(a) = a.   ...(6-21)
Property 4. If X is a random variable and a and b are constants, then

E(aX + b) = a E(X) + b,   ...(6-22)

provided all the expectations exist.

Proof. By def., we have

E(aX + b) = \int_{-\infty}^{\infty} (ax + b) f(x) \, dx = a \int_{-\infty}^{\infty} x f(x) \, dx + b \int_{-\infty}^{\infty} f(x) \, dx = a E(X) + b

Cor. 1. If b = 0, then we get E(aX) = a E(X).   ...(6-22a)
Cor. 2. Taking a = 1, b = -\bar{x} = -E(X), we get E(X - \bar{x}) = 0.
Remark. If we write   g(X) = aX + b,   ...(6-23)

then   g[E(X)] = a E(X) + b   ...(6-23a)

Hence, from (6-22) and (6-23a),   E[g(X)] = g[E(X)]   ...(6-24)

Now (6-23) and (6-24) imply that the expectation of a linear function is the same linear function of the expectation. The result, however, is not true if g(·) is not linear. For instance,

E(1/X) \neq 1/E(X) ;   E[\log(X)] \neq \log[E(X)] ;   E(X^2) \neq [E(X)]^2,

since all the functions stated above are non-linear. As an illustration, let us consider a random variable X which assumes only two values +1 and -1, each with equal probability 1/2. Then

E(X) = 1 \times \frac{1}{2} + (-1) \times \frac{1}{2} = 0   and   E(X^2) = 1^2 \times \frac{1}{2} + (-1)^2 \times \frac{1}{2} = 1.

Thus E(X^2) \neq [E(X)]^2.

For a non-linear function g(X), it is difficult to obtain expressions for E[g(X)] in terms of g[E(X)], e.g., for E[\log(X)] or E(X^2) in terms of \log[E(X)] or [E(X)]^2. However, some results in the form of inequalities between E[g(X)] and g[E(X)] are available, as discussed in a later part of the chapter.
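A Python sketch (assuming NumPy) of both illustrations: the two-valued r.v. above, and a uniform r.v. for which E(1/X) \neq 1/E(X) (for X ~ U(1, 3), 1/E(X) = 0.5 while E(1/X) = (\log 3)/2 \approx 0.549):

```python
import numpy as np

rng = np.random.default_rng(2)

# Discrete illustration: X = +1 or -1 with probability 1/2 each.
x = rng.choice([-1.0, 1.0], size=100_000)
print("[E(X)]^2 :", x.mean() ** 2)    # ~0
print("E(X^2)   :", (x**2).mean())    # exactly 1

# Continuous illustration: for X ~ Uniform(1, 3), E(1/X) != 1/E(X).
u = rng.uniform(1.0, 3.0, size=100_000)
print("1/E(X)   :", 1 / u.mean())     # 0.5
print("E(1/X)   :", (1 / u).mean())   # (log 3)/2 ~ 0.5493
```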
Property 5. Expectation of a Linear Combination of Random Variables. Let X_1, X_2, ..., X_n be any n random variables and let a_1, a_2, ..., a_n be any n constants; then

E\left(\sum_{i=1}^{n} a_i X_i\right) = \sum_{i=1}^{n} a_i E(X_i),   ...(6-25)

provided all the expectations exist.

The result is obvious from (6-13) and (6-20).
Property 6. If X \geq 0, then E(X) \geq 0.

Proof. If X is a continuous random variable such that X \geq 0, then

E(X) = \int_{-\infty}^{\infty} x f(x) \, dx = \int_{0}^{\infty} x f(x) \, dx \geq 0,   [∵ if X \geq 0, f(x) = 0 for x < 0]

provided the expectation exists.
Property 7. If X and Y are two random variables such that Y \leq X, then E(Y) \leq E(X), provided all the expectations exist.

Proof. Since Y \leq X, we have the r.v. Y - X \leq 0, i.e., X - Y \geq 0.

Hence   E(X - Y) \geq 0 \Rightarrow E(X) - E(Y) \geq 0 \Rightarrow E(Y) \leq E(X),   as desired.
Property 8. |E(X)| \leq E|X|, provided the expectations exist.   ...(6-26)

Proof. Since X \leq |X|, we have by Property 7, E(X) \leq E|X|.   ...(*)

Again, since -X \leq |X|, we have by Property 7, E(-X) \leq E|X|, i.e., -E(X) \leq E|X|.   ...(**)

From (*) and (**), we get the desired result |E(X)| \leq E|X|.
Property 9. If \mu_r' exists, then \mu_s' exists for all 1 \leq s \leq r. Mathematically, if E(X^r) exists, then E(X^s) exists for all 1 \leq s \leq r, i.e.,

E|X^r| < \infty \Rightarrow E|X^s| < \infty,   for all 1 \leq s \leq r   ...(6-27)

Proof.   E|X^s| = \int_{|x| \leq 1} |x|^s \, dF(x) + \int_{|x| > 1} |x|^s \, dF(x)

If s \leq r, then |x|^s \leq |x|^r for |x| > 1. Therefore

E|X^s| \leq \int_{|x| \leq 1} dF(x) + \int_{|x| > 1} |x|^r \, dF(x)   [since for |x| \leq 1, |x|^s \leq 1]

\leq 1 + E|X^r| < \infty \Rightarrow E(X^s) exists for 1 \leq s \leq r.   [∵ E(X^r) exists]
Remark. The above result states that if the moments of a specified order exist, then all the lower order moments automatically exist. However, the converse is not true, i.e., we may have distributions for which all the moments of a specified order exist but no higher order moments exist, e.g., for the r.v. with p.d.f.

f(x) = \begin{cases} 2/x^3 ; & x \geq 1 \\ 0 ; & x < 1 \end{cases}

E(X) = \int_{1}^{\infty} x f(x) \, dx = 2 \int_{1}^{\infty} \frac{dx}{x^2} = 2,   but   E(X^2) = \int_{1}^{\infty} x^2 f(x) \, dx = 2 \int_{1}^{\infty} \frac{dx}{x} \to \infty.

Thus for the above distribution the 1st order moment (mean) exists but the 2nd order moment (variance) does not exist.

As another illustration, consider a r.v. X with p.d.f.

f(x) = \frac{(r+1) a^{r+1}}{(x+a)^{r+2}} ; x \geq 0, a > 0

\mu_r' = E(X^r) = (r+1) a^{r+1} \int_{0}^{\infty} \frac{x^r}{(x+a)^{r+2}} \, dx

Putting x = ay and using the Beta integral

\int_{0}^{\infty} \frac{y^{m-1}}{(1+y)^{m+n}} \, dy = B(m, n),   we shall get, on simplification:   \mu_r' = (r+1) a^r B(r+1, 1) = a^r

However,   \mu_{r+1}' = E(X^{r+1}) = (r+1) a^{r+1} \int_{0}^{\infty} \frac{x^{r+1}}{(x+a)^{r+2}} \, dx \to \infty,

as the integral is not convergent. Hence in this case only the moments up to the rth order exist and higher order moments do not exist.

Property 10. If X and Y are independent random variables, then

E[h(X) \cdot k(Y)] = E[h(X)] \cdot E[k(Y)],   ...(6-28)

where h(·) is a function of X alone and k(·) is a function of Y alone, provided the expectations on both sides exist.

Proof. Let f_X(x) and g_Y(y) be the marginal p.d.f.'s of X and Y respectively. Since X and Y are independent, their joint p.d.f. f_{XY}(x, y) is given by

f_{XY}(x, y) = f_X(x) g_Y(y)   ...(*)

By def., for continuous r.v.'s,

E[h(X) \cdot k(Y)] = \int \int h(x) k(y) f_{XY}(x, y) \, dx \, dy = \int \int h(x) k(y) f_X(x) g_Y(y) \, dx \, dy   [From (*)]

Since E[h(X) \cdot k(Y)] exists, the integral on the right-hand side is absolutely convergent and hence, by Fubini's theorem for integrable functions, we can change the order of integration to get

E[h(X) \cdot k(Y)] = \int h(x) f_X(x) \, dx \int k(y) g_Y(y) \, dy = E[h(X)] \cdot E[k(Y)],

as desired.
Remark: The result can be proved for discrete random variables X and Y on replacing integration by
summation over the given range of X and Y.
6-5. PROPERTIES OF VARIANCE

If X is a random variable, then V(aX + b) = a^2 V(X), where a and b are constants.   ...(6-29)

Proof. Let Y = aX + b. Then E(Y) = a E(X) + b.

∴ Y - E(Y) = a[X - E(X)]

Squaring and taking expectation of both sides, we get

E[Y - E(Y)]^2 = a^2 E[X - E(X)]^2,   i.e.,   V(Y) = a^2 V(X)   or   V(aX + b) = a^2 V(X),

where V(X) is written for the variance of X.

Corollary
(i) If b = 0, then V(aX) = a^2 V(X) ⇒ variance is not independent of change of scale.   ...(6-29a)
(ii) If a = 0, then V(b) = 0 ⇒ variance of a constant is zero.   ...(6-29b)
(iii) If a = 1, then V(X + b) = V(X) ⇒ variance is independent of change of origin.   ...(6-29c)

Hence, variance is independent of change of origin but not of scale.
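A quick numerical check of (6-29) (a Python sketch, assuming NumPy; the gamma distribution and the values of a, b are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.gamma(shape=2.0, scale=1.5, size=500_000)

a, b = 3.0, 7.0
Y = a * X + b

# (6-29): shifting by b leaves the variance unchanged;
# scaling by a multiplies it by a^2.
print("Var(aX + b)  :", Y.var())
print("a^2 * Var(X) :", a**2 * X.var())
```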
6-5-1. Variance of a Degenerate Random Variable.
(a) If X is a discrete r.v., then Var(X) = 0 if and only if X is degenerate.
(b) If X is a continuous r.v., then Var(X) \neq 0.

Proof.
(a) Let X be a discrete r.v. with p.m.f. P(X = x_i) = p_i and E(X) = \mu = \sum_i x_i P(X = x_i) = \sum_i p_i x_i.

Then,   Var(X) = E(X - \mu)^2 = \sum_i p_i (x_i - \mu)^2

∴ Var(X) = 0 iff (x_i - \mu)^2 = 0 for every i with p_i > 0, i.e., iff X takes only the value \mu, with P(X = \mu) = 1.

Hence, Var(X) = 0 iff X is a degenerate r.v. at X = \mu.

Note. If X = c (constant), then E(X) = c and E(X^2) = c^2.

∴ Var(X) = E(X^2) - [E(X)]^2 = c^2 - c^2 = 0

(b) If X is a continuous r.v. with p.d.f. f(x), then

Var(X) = E(X - \mu)^2 = \int_{-\infty}^{\infty} (x - \mu)^2 f(x) \, dx = 0,

iff f(x) = 0 whenever (x - \mu)^2 \neq 0, i.e., x \neq \mu, which is not possible for any p.d.f. defined on the real line R = ]-\infty, \infty[.

Hence, if X is a continuous r.v., then Var(X) \neq 0.
6-5-2. Approximate Expressions for Expectation and Variance
Theorem. Let X be a r.v. with E(X) = \mu and Var(X) = \sigma^2. Then, if Y = G(X),

E(Y) = G(\mu) + \frac{1}{2} G''(\mu) \sigma^2   ...(6-30)

and   Var(Y) = [G'(\mu)]^2 \sigma^2,   ...(6-30a)

provided G is twice differentiable at X = \mu.

Proof. We have Y = G(X) = G[\mu + (X - \mu)]. Expanding G(X) as a Taylor's series (up to two terms) about X = \mu, we get

Y = G(X) = G(\mu) + (X - \mu) G'(\mu) + \frac{(X - \mu)^2}{2!} G''(\mu) + R_1,   ...(*)

where R_1 is the remainder. If we discard the remainder term, then on taking expectation of both sides in (*), we get

E(Y) = G(\mu) + G'(\mu) E(X - \mu) + \frac{1}{2} G''(\mu) E(X - \mu)^2

⇒ E(Y) = G(\mu) + \frac{1}{2} G''(\mu) \sigma^2   [∵ E(X - \mu) = 0]

To obtain Var(Y), we expand G(X) as a Taylor's series (up to one term) about X = \mu to get

Y = G(X) = G[\mu + (X - \mu)] = G(\mu) + (X - \mu) G'(\mu) + R_1,

where R_1 is the remainder. Discarding R_1, we get

Var(Y) = Var[G(\mu) + (X - \mu) G'(\mu)]

= Var[G(\mu)] + [G'(\mu)]^2 Var(X - \mu)   [∵ Var(aX) = a^2 Var(X)]

= 0 + [G'(\mu)]^2 Var(X)   [∵ Var(c) = 0 and Var(X \pm a) = Var(X)]

∴ Var(Y) = [G'(\mu)]^2 \sigma^2.
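A Python sketch (assuming NumPy) checking (6-30) and (6-30a) for the illustrative choice G(x) = e^x with X normal, where G'(\mu) = G''(\mu) = e^\mu:

```python
import numpy as np

rng = np.random.default_rng(4)

mu, sigma = 0.5, 0.1            # small sigma: the approximation is good
X = rng.normal(mu, sigma, size=1_000_000)
Y = np.exp(X)                    # G(x) = e^x, so G'(mu) = G''(mu) = e^mu

# Approximations (6-30) and (6-30a):
E_approx = np.exp(mu) + 0.5 * np.exp(mu) * sigma**2
Var_approx = (np.exp(mu) ** 2) * sigma**2

print("E(Y)   simulated vs approx :", Y.mean(), E_approx)
print("Var(Y) simulated vs approx :", Y.var(), Var_approx)
```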
The above result can be extended to a two-dimensional r.v., as given in the following theorem.

Theorem. Let (X, Y) be a two-dimensional r.v. with E(X) = \mu_X, Var(X) = \sigma_X^2, E(Y) = \mu_Y and Var(Y) = \sigma_Y^2, and let Z = G(X, Y). If X and Y are independent, then

E(Z) = G(\mu_X, \mu_Y) + \frac{1}{2}\left[\sigma_X^2 \frac{\partial^2 G}{\partial x^2} + \sigma_Y^2 \frac{\partial^2 G}{\partial y^2}\right]   ...(6-31)

and   Var(Z) = \left(\frac{\partial G}{\partial x}\right)^2 \sigma_X^2 + \left(\frac{\partial G}{\partial y}\right)^2 \sigma_Y^2,   ...(6-31a)

where all the partial derivatives are evaluated at the point (\mu_X, \mu_Y).

Proof. The proof of this theorem is based on the Taylor series expansion of a function f(x, y) of two variables about the point (a, b), stated below:

f(x, y) = f[a + (x - a), b + (y - b)] = f(a + h, b + k) ;   (h = x - a, k = y - b)

= f(a, b) + h \frac{\partial f}{\partial x} + k \frac{\partial f}{\partial y} + \frac{1}{2!}\left[h^2 \frac{\partial^2 f}{\partial x^2} + 2hk \frac{\partial^2 f}{\partial x \partial y} + k^2 \frac{\partial^2 f}{\partial y^2}\right] + ...,

where h = x - a, k = y - b and the partial derivatives are evaluated at (a, b).

Using the Taylor's series expansion of G(X, Y) about (\mu_X, \mu_Y), we get

Z = G(X, Y) = G[\mu_X + (X - \mu_X), \mu_Y + (Y - \mu_Y)]

= G(\mu_X, \mu_Y) + (X - \mu_X) \frac{\partial G}{\partial x} + (Y - \mu_Y) \frac{\partial G}{\partial y} + \frac{1}{2!}\left[(X - \mu_X)^2 \frac{\partial^2 G}{\partial x^2} + 2(X - \mu_X)(Y - \mu_Y) \frac{\partial^2 G}{\partial x \partial y} + (Y - \mu_Y)^2 \frac{\partial^2 G}{\partial y^2}\right] + R_2,   ...(*)

where all the partial derivatives are evaluated at (\mu_X, \mu_Y), i.e., \frac{\partial^r G}{\partial x^r} = \left[\frac{\partial^r}{\partial x^r} G(x, y)\right]_{(\mu_X, \mu_Y)} ; r = 1, 2, and so on, and R_2 is the remainder term.

Discarding R_2 and taking expectation of both sides in (*), we get

E(Z) = G(\mu_X, \mu_Y) + \frac{1}{2}\left[\sigma_X^2 \frac{\partial^2 G}{\partial x^2} + \sigma_Y^2 \frac{\partial^2 G}{\partial y^2}\right],

[∵ E(X - \mu_X) = E(Y - \mu_Y) = 0, and E[(X - \mu_X)(Y - \mu_Y)] = Cov(X, Y) = 0 because X and Y are independent]

where the partial derivatives are evaluated at the point (\mu_X, \mu_Y).

To obtain the expression for Var(Z), writing the Taylor's series expansion (*) up to one term only, we get

Z = G(\mu_X, \mu_Y) + (X - \mu_X) \frac{\partial G}{\partial x} + (Y - \mu_Y) \frac{\partial G}{\partial y} + R_1,   ...(**)

where R_1 is the remainder term. Discarding R_1 and taking variance of both sides in (**), we get

Var(Z) = Var[G(\mu_X, \mu_Y)] + \left(\frac{\partial G}{\partial x}\right)^2 Var(X - \mu_X) + \left(\frac{\partial G}{\partial y}\right)^2 Var(Y - \mu_Y) ;

the covariance term vanishes because X and Y are independent.

∴ Var(Z) = \left(\frac{\partial G}{\partial x}\right)^2 Var(X) + \left(\frac{\partial G}{\partial y}\right)^2 Var(Y) = \left(\frac{\partial G}{\partial x}\right)^2 \sigma_X^2 + \left(\frac{\partial G}{\partial y}\right)^2 \sigma_Y^2

[∵ Var(c) = 0, Var(X \pm c) = Var(X) ; Var(Y \pm k) = Var(Y)]

where the partial derivatives are evaluated at the point (\mu_X, \mu_Y).
Note. The above result may be extended to a function of n independent r.v.'s. If Z = G(X_1, X_2, ..., X_n) with E(X_i) = \mu_i, Var(X_i) = \sigma_i^2 ; i = 1, 2, ..., n, then, assuming that all the partial derivatives exist, we have the following approximations:

E(Z) = G(\mu_1, \mu_2, ..., \mu_n) + \frac{1}{2} \sum_{i=1}^{n} \sigma_i^2 \frac{\partial^2 G}{\partial x_i^2}   ...(6-32)

and   Var(Z) = \sum_{i=1}^{n} \left(\frac{\partial G}{\partial x_i}\right)^2 \sigma_i^2,   ...(6-32a)

where all the partial derivatives are evaluated at the point (\mu_1, \mu_2, ..., \mu_n).
6-6. COVARIANCE

If X and Y are two random variables, then the covariance between them is defined as

Cov(X, Y) = E\{[X - E(X)][Y - E(Y)]\}   ...(6-33)

= E[XY - X E(Y) - Y E(X) + E(X) E(Y)]

= E(XY) - E(Y) E(X) - E(X) E(Y) + E(X) E(Y)

= E(XY) - E(X) E(Y)   ...(6-33a)

If X and Y are independent, then E(XY) = E(X) E(Y), and hence in this case

Cov(X, Y) = E(X) E(Y) - E(X) E(Y) = 0   ...(6-33b)
Remarks:
1. Cov(aX, bY) = E\{[aX - E(aX)][bY - E(bY)]\} = E\{a[X - E(X)] \cdot b[Y - E(Y)]\} = ab E\{[X - E(X)][Y - E(Y)]\} = ab Cov(X, Y)   ...(6-34)

2. Cov(X + a, Y + b) = Cov(X, Y)   ...(6-34a)

3. Cov\left(\frac{X - \mu_X}{\sigma_X}, \frac{Y - \mu_Y}{\sigma_Y}\right) = \frac{1}{\sigma_X \sigma_Y} Cov(X, Y)   ...(6-34b)

4. Similarly, we shall get:

Cov(aX + b, cY + d) = ac Cov(X, Y)   ...(6-34c)

Cov(X + Y, Z) = Cov(X, Z) + Cov(Y, Z)   ...(6-34d)

Cov(aX + bY, cX + dY) = ac \sigma_X^2 + bd \sigma_Y^2 + (ad + bc) Cov(X, Y)   ...(6-34e)

5. If X and Y are independent, Cov(X, Y) = 0 [c.f. (6-33b)]. However, the converse is not true. For illustrations, see Chapter 10 on Correlation.
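One standard counterexample for the converse (a Python sketch, assuming NumPy): take X symmetric about 0 and Y = X^2, so Y is completely determined by X, yet Cov(X, Y) = E(X^3) - E(X) E(X^2) = 0:

```python
import numpy as np

rng = np.random.default_rng(5)

# X symmetric about 0, Y = X^2: total dependence, zero covariance.
X = rng.normal(size=500_000)
Y = X**2

print("Cov(X, Y):", np.cov(X, Y)[0, 1])   # ~0 despite Y being f(X)
```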
6-6-1. Variance of a Linear Combination of Random Variables

Let X_1, X_2, ..., X_n be n random variables; then

V\left(\sum_{i=1}^{n} a_i X_i\right) = \sum_{i=1}^{n} a_i^2 V(X_i) + 2 \sum_{i<j} a_i a_j Cov(X_i, X_j)   ...(6-35)

Proof. Let   U = a_1 X_1 + a_2 X_2 + ... + a_n X_n,

so that   E(U) = a_1 E(X_1) + a_2 E(X_2) + ... + a_n E(X_n)

∴ U - E(U) = a_1 [X_1 - E(X_1)] + a_2 [X_2 - E(X_2)] + ... + a_n [X_n - E(X_n)]

Squaring and taking expectation of both sides, we get

E[U - E(U)]^2 = a_1^2 E[X_1 - E(X_1)]^2 + a_2^2 E[X_2 - E(X_2)]^2 + ... + a_n^2 E[X_n - E(X_n)]^2 + 2 \sum_{i<j} a_i a_j E\{[X_i - E(X_i)][X_j - E(X_j)]\}

⇒ V(U) = a_1^2 V(X_1) + a_2^2 V(X_2) + ... + a_n^2 V(X_n) + 2 \sum_{i<j} a_i a_j Cov(X_i, X_j) = \sum_{i=1}^{n} a_i^2 V(X_i) + 2 \sum_{i<j} a_i a_j Cov(X_i, X_j)

Remarks:
1. If a_i = 1 ; i = 1, 2, ..., n, then

V(X_1 + X_2 + ... + X_n) = V(X_1) + V(X_2) + ... + V(X_n) + 2 \sum_{i<j} Cov(X_i, X_j)   ...(6-35a)

2. If X_1, X_2, ..., X_n are independent (pairwise), then Cov(X_i, X_j) = 0, (i \neq j). Thus, from (6-35) and (6-35a), we get

V(a_1 X_1 + a_2 X_2 + ... + a_n X_n) = a_1^2 V(X_1) + a_2^2 V(X_2) + ... + a_n^2 V(X_n)   ...(6-35b)

and   V(X_1 + X_2 + ... + X_n) = V(X_1) + V(X_2) + ... + V(X_n),

provided X_1, X_2, ..., X_n are independent.

3. If a_1 = 1 = a_2 and a_3 = a_4 = ... = a_n = 0, then from (6-35), we get

V(X_1 + X_2) = V(X_1) + V(X_2) + 2 Cov(X_1, X_2)

Again, if a_1 = 1, a_2 = -1 and a_3 = a_4 = ... = a_n = 0, then

V(X_1 - X_2) = V(X_1) + V(X_2) - 2 Cov(X_1, X_2)

Thus we have:

V(X_1 \pm X_2) = V(X_1) + V(X_2) \pm 2 Cov(X_1, X_2)   ...(6-35c)

If X_1 and X_2 are independent, then Cov(X_1, X_2) = 0 and we get

V(X_1 \pm X_2) = V(X_1) + V(X_2)   ...(6-35d)
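A simulation check of (6-35c) for a correlated pair (a Python sketch, assuming NumPy; the construction of the pair is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(6)

# Build a correlated pair: X2 shares a component with X1.
Z1, Z2 = rng.normal(size=(2, 500_000))
X1 = Z1
X2 = 0.8 * Z1 + 0.6 * Z2

lhs = (X1 + X2).var()
rhs = X1.var() + X2.var() + 2 * np.cov(X1, X2)[0, 1]   # (6-35c)
print(lhs, rhs)   # agree up to sampling error
```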

Example 6-1. Let X be a random variable with the following probability distribution:

x :        -3    6    9
P(X = x) : 1/6  1/2  1/3

Find E(X) and E(X^2) and, using the laws of expectation, evaluate E(2X + 1)^2.

Solution.   E(X) = \sum_x x p(x) = (-3) \times \frac{1}{6} + 6 \times \frac{1}{2} + 9 \times \frac{1}{3} = \frac{11}{2}

E(X^2) = \sum_x x^2 p(x) = 9 \times \frac{1}{6} + 36 \times \frac{1}{2} + 81 \times \frac{1}{3} = \frac{93}{2}

E(2X + 1)^2 = E(4X^2 + 4X + 1) = 4 E(X^2) + 4 E(X) + 1 = 4 \times \frac{93}{2} + 4 \times \frac{11}{2} + 1 = 209.
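The computation of Example 6-1 can be reproduced exactly with rational arithmetic (a Python sketch using the standard fractions module):

```python
from fractions import Fraction as F

xs = [-3, 6, 9]
ps = [F(1, 6), F(1, 2), F(1, 3)]

EX = sum(p * x for x, p in zip(xs, ps))        # 11/2
EX2 = sum(p * x**2 for x, p in zip(xs, ps))    # 93/2
E_2X_plus_1_sq = 4 * EX2 + 4 * EX + 1          # 209

print(EX, EX2, E_2X_plus_1_sq)
```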
Example 6-2.
(a) Find the expectation of the number on a die when thrown.
(b) Two unbiased dice are thrown. Find the expected value of the sum of the number of points on them.

Solution. (a) Let X be the random variable representing the number on a die when thrown. Then X can take any one of the values 1, 2, 3, ..., 6, each with equal probability 1/6. Hence

E(X) = \frac{1}{6} \times 1 + \frac{1}{6} \times 2 + \frac{1}{6} \times 3 + ... + \frac{1}{6} \times 6 = \frac{1}{6}(1 + 2 + 3 + ... + 6) = \frac{1}{6} \times \frac{6 \times 7}{2} = \frac{7}{2}   ...(*)

Remark. This does not mean that in a random throw of a die, the player will get the number 7/2 = 3.5. In fact, one can never get this (fractional) number in a throw of a die. Rather, this implies that if the player tosses the die for a "long" period, then on the average toss he will get 7/2 = 3.5.

(b) The probability function of X (the sum of numbers obtained on two dice) is:

Value of X : x     2     3     4     5     6     7     8     9     10    11    12
Probability p(x)  1/36  2/36  3/36  4/36  5/36  6/36  5/36  4/36  3/36  2/36  1/36

E(X) = \sum_x x p(x)

= 2 \times \frac{1}{36} + 3 \times \frac{2}{36} + 4 \times \frac{3}{36} + 5 \times \frac{4}{36} + 6 \times \frac{5}{36} + 7 \times \frac{6}{36} + 8 \times \frac{5}{36} + 9 \times \frac{4}{36} + 10 \times \frac{3}{36} + 11 \times \frac{2}{36} + 12 \times \frac{1}{36}

= \frac{1}{36}(2 + 6 + 12 + 20 + 30 + 42 + 40 + 36 + 30 + 22 + 12) = \frac{1}{36} \times 252 = 7.

Aliter. Let X_i be the number obtained on the ith die (i = 1, 2) when thrown. Then the sum of the number of points on the two dice is given by

S = X_1 + X_2 \Rightarrow E(S) = E(X_1) + E(X_2) = \frac{7}{2} + \frac{7}{2} = 7   [on using (*)]

Remark. This result can be generalised to the sum of points obtained in a random throw of n dice. We have

E(S) = \sum_{i=1}^{n} E(X_i) = \frac{7n}{2}.
Example 6-3. In four tosses of a coin, let X be the number of heads. Tabulate the 16 possible outcomes with the corresponding values of X. By simple counting, derive the probability distribution of X and hence calculate the expected value and variance of X.

Solution. Let H represent a head, T a tail and X the r.v. denoting the number of heads.

S.No.  Outcome  No. of Heads (X)    S.No.  Outcome  No. of Heads (X)
1      HHHH     4                   9      HTHT     2
2      HHHT     3                   10     THTH     2
3      HHTH     3                   11     THHT     2
4      HTHH     3                   12     HTTT     1
5      THHH     3                   13     THTT     1
6      HHTT     2                   14     TTHT     1
7      HTTH     2                   15     TTTH     1
8      TTHH     2                   16     TTTT     0

The random variable X takes the values 0, 1, 2, 3 and 4. Since, from the above table, we find that the numbers of cases favourable to the occurrence of 0, 1, 2, 3 and 4 heads are 1, 4, 6, 4 and 1 respectively, we have

P(X = 0) = \frac{1}{16},   P(X = 1) = \frac{4}{16} = \frac{1}{4},   P(X = 2) = \frac{6}{16} = \frac{3}{8},   P(X = 3) = \frac{4}{16} = \frac{1}{4},   P(X = 4) = \frac{1}{16}.

The probability distribution of X can be summarised as follows:

x :    0     1    2    3    4
p(x): 1/16  1/4  3/8  1/4  1/16

E(X) = \sum_x x p(x) = 1 \cdot \frac{1}{4} + 2 \cdot \frac{3}{8} + 3 \cdot \frac{1}{4} + 4 \cdot \frac{1}{16} = 2,

E(X^2) = \sum_x x^2 p(x) = 1^2 \cdot \frac{1}{4} + 2^2 \cdot \frac{3}{8} + 3^2 \cdot \frac{1}{4} + 4^2 \cdot \frac{1}{16} = \frac{1 + 6 + 9 + 4}{4} = 5

Var(X) = E(X^2) - [E(X)]^2 = 5 - 2^2 = 1.

Aliter. We can use the Binomial distribution (Chapter 8). If X ~ B(n = 4, p = 1/2), then E(X) = np = 4 \times \frac{1}{2} = 2 and Var(X) = np(1 - p) = 4 \times \frac{1}{2} \times \frac{1}{2} = 1.
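The counting argument of Example 6-3 can be automated (a Python sketch using the standard itertools and fractions modules):

```python
from itertools import product
from fractions import Fraction as F

outcomes = list(product("HT", repeat=4))   # all 16 equally likely outcomes
counts = {}
for o in outcomes:
    counts[o.count("H")] = counts.get(o.count("H"), 0) + 1

pmf = {x: F(c, 16) for x, c in counts.items()}   # {0: 1/16, 1: 1/4, ...}
EX = sum(p * x for x, p in pmf.items())          # 2
EX2 = sum(p * x**2 for x, p in pmf.items())      # 5
print(pmf, EX, EX2 - EX**2)                      # variance = 1
```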
Example 6-4. A gamester has a disc with a freely revolving needle. The disc is divided into 20 equal sectors by thin lines and the sectors are marked 0, 1, 2, ..., 19. The gamester treats 5 or any multiple of 5 as lucky numbers and zero as a special lucky number. He allows a player to whirl the needle on a charge of 10 monetary units. When the needle stops at a lucky number the gamester pays back to the player twice the sum charged, and at the special lucky number the gamester pays to the player 5 times the sum charged. Is the game fair? What is the expectation of the player?

Solution.

Event               Favourable numbers                                      p(x)   Player's gain (x)
Lucky number        5, 10, 15                                               3/20   20 - 10 = 10
Special lucky No.   0                                                       1/20   50 - 10 = 40
Other numbers       1, 2, 3, 4, 6, 7, 8, 9, 11, 12, 13, 14, 16, 17, 18, 19  16/20  -10

E(X) = \frac{3}{20} \times 10 + \frac{1}{20} \times 40 - \frac{16}{20} \times 10 = \frac{30 + 40 - 160}{20} = -\frac{9}{2} \neq 0,

i.e., the game is not fair; the player's expectation is a loss of 4.5 monetary units per whirl.

Example 6-5. A box contains 2^n tickets among which {}^{n}C_i tickets bear the number i ; i = 0, 1, 2, ..., n. A group of m tickets is drawn. What is the expectation of the sum of their numbers?

Solution. Let X_i ; i = 1, 2, ..., m be the variable representing the number on the ith ticket drawn. Then the sum S of the numbers on the tickets drawn is given by

S = X_1 + X_2 + ... + X_m = \sum_{i=1}^{m} X_i,   so that   E(S) = \sum_{i=1}^{m} E(X_i)

X_i is a random variable which can take any one of the possible values 0, 1, 2, ..., n with respective probabilities {}^{n}C_0 / 2^n, {}^{n}C_1 / 2^n, {}^{n}C_2 / 2^n, ..., {}^{n}C_n / 2^n.

∴ E(X_i) = \frac{1}{2^n}\left[1 \cdot {}^{n}C_1 + 2 \cdot {}^{n}C_2 + 3 \cdot {}^{n}C_3 + ... + n \cdot {}^{n}C_n\right]

Now   1 \cdot {}^{n}C_1 + 2 \cdot {}^{n}C_2 + ... + n \cdot {}^{n}C_n = n + 2 \cdot \frac{n(n-1)}{2!} + 3 \cdot \frac{n(n-1)(n-2)}{3!} + ... + n \cdot 1

= n\left[1 + (n-1) + \frac{(n-1)(n-2)}{2!} + ... + 1\right] = n \cdot 2^{n-1}

∴ E(X_i) = \frac{n \cdot 2^{n-1}}{2^n} = \frac{n}{2}

Hence,   E(S) = \sum_{i=1}^{m} E(X_i) = \sum_{i=1}^{m} \frac{n}{2} = \frac{mn}{2}.

Example 6-6. Prove that the events E_1, E_2, ..., E_n are independent iff their corresponding indicator variates I_1, I_2, ..., I_n are independent.

Solution. By definition:   I_i(w) = \begin{cases} 1, & \text{if } w \in E_i \\ 0, & \text{if } w \notin E_i \end{cases}   ...(*)

'If' part. Let the events E_i, i = 1, 2, ..., n be independent. By the definition in (*), I_i(w) is a function of E_i. Hence, if the E_i's are independent, their corresponding indicator functions I_i(w) are also independent.

'Only if' part. Let us assume that the indicator functions I_1, I_2, ..., I_n are independent. Then we have

E[I_1 \cdot I_2 ... I_n] = E(I_1) \cdot E(I_2) ... E(I_n)   ...(**)

But   E[I_1 \cdot I_2 ... I_n] = E\left[I_{E_1 \cap E_2 \cap ... \cap E_n}\right] = P(E_1 \cap E_2 \cap ... \cap E_n)

and   E(I_i) = P(E_i) ; i = 1, 2, ..., n.

Substituting in (**), we get

P(E_1 \cap E_2 \cap ... \cap E_n) = P(E_1) \cdot P(E_2) ... P(E_n) \Rightarrow E_1, E_2, ..., E_n are independent.

Example 6-7. A coin is tossed until a head appears. What is the expectation of the number of tosses required?

Solution. Let X denote the number of tosses required to get the first head. Then X can materialise in the following ways:

Event   x   Probability p(x)
H       1   1/2
TH      2   1/4
TTH     3   1/8
...     ...      ...

E(X) = \sum_x x p(x) = 1 \times \frac{1}{2} + 2 \times \frac{1}{4} + 3 \times \frac{1}{8} + 4 \times \frac{1}{16} + ...   ...(*)

This is an arithmetic-geometric series with the ratio of the G.P. being r = 1/2.

Let   S = 1 \cdot \frac{1}{2} + 2 \cdot \frac{1}{4} + 3 \cdot \frac{1}{8} + 4 \cdot \frac{1}{16} + ...

Then   \frac{1}{2} S = 1 \cdot \frac{1}{4} + 2 \cdot \frac{1}{8} + 3 \cdot \frac{1}{16} + ...

∴ \left(1 - \frac{1}{2}\right) S = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + ... = \frac{1/2}{1 - 1/2} = 1 \Rightarrow S = 2,

since the sum of an infinite G.P. with first term a and common ratio r (< 1) is a/(1 - r). Hence, substituting in (*), we have E(X) = 2.
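A simulation of Example 6-7 (a Python sketch, assuming NumPy, whose geometric sampler returns the trial index of the first success):

```python
import numpy as np

rng = np.random.default_rng(7)

# Number of tosses to get the first head: geometric with p = 1/2,
# supported on 1, 2, 3, ...
tosses = rng.geometric(p=0.5, size=1_000_000)
print("E(X) estimate:", tosses.mean())   # ~2, matching the series sum
```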

Example 6-8. What is the expectation of the number of failures preceding the first success in an infinite series of independent trials with constant probability p of success in each trial?

Solution. Let the random variable X denote the number of failures preceding the first success. Then X can take the values 0, 1, 2, ..., \infty. We have

P(X = x) = p(x) = P(x failures precede the first success) = q^x p ; x = 0, 1, 2, ...,

where q = 1 - p is the probability of failure in a trial. Then by def.,

E(X) = \sum_{x=0}^{\infty} x p(x) = p \sum_{x=0}^{\infty} x q^x = pq(1 + 2q + 3q^2 + 4q^3 + ...)   ...(*)

Now 1 + 2q + 3q^2 + 4q^3 + ... is an infinite arithmetic-geometric series. Let

S = 1 + 2q + 3q^2 + 4q^3 + ...

qS = q + 2q^2 + 3q^3 + ...

∴ (1 - q) S = 1 + q + q^2 + q^3 + ... = \frac{1}{1 - q} \Rightarrow S = \frac{1}{(1 - q)^2}

i.e.,   1 + 2q + 3q^2 + 4q^3 + ... = \frac{1}{(1 - q)^2}.   Hence,   E(X) = \frac{pq}{(1 - q)^2} = \frac{pq}{p^2} = \frac{q}{p}.   [From (*)]
Example 6-9. A box contains 'a' white and 'b' black balls. 'c' balls are drawn at random. Find the expected value of the number of white balls drawn.

Solution. Let the variable X_i, associated with the ith draw, be defined as follows:

X_i = \begin{cases} 1, & \text{if the ith ball drawn is white} \\ 0, & \text{if the ith ball drawn is black} \end{cases} ;   i = 1, 2, ..., c.

Then the number S of the white balls among the c balls drawn is given by

S = X_1 + X_2 + ... + X_c = \sum_{i=1}^{c} X_i \Rightarrow E(S) = \sum_{i=1}^{c} E(X_i)

P(X_1 = 1) = P(first ball drawn is white) = \frac{a}{a+b}

P(X_2 = 1) = P(X_1 = 1) \cdot P(X_2 = 1 \mid X_1 = 1) + P(X_1 = 0) \cdot P(X_2 = 1 \mid X_1 = 0)

= \frac{a}{a+b} \cdot \frac{a-1}{a+b-1} + \frac{b}{a+b} \cdot \frac{a}{a+b-1} = \frac{a[(a-1)+b]}{(a+b)(a+b-1)} = \frac{a}{a+b},   and so on.

In general,   P(X_i = 1) = P(drawing a white ball at the ith draw) = \frac{a}{a+b}   ...(*)

P(X_i = 0) = P(drawing a black ball at the ith draw) = \frac{b}{a+b}

∴ E(X_i) = 1 \cdot P(X_i = 1) + 0 \cdot P(X_i = 0) = \frac{a}{a+b}

Hence,   E(S) = \sum_{i=1}^{c} \frac{a}{a+b} = \frac{ca}{a+b}.   [From (*)]

SUPPLEMENTARY EXAMPLES ON EXPECTATION


Example 6-10. Let the r.v. X have the distribution:

P(X = 0) = P(X = 2) = p ; P(X = 1) = 1 - 2p,   for 0 \leq p \leq \frac{1}{2}.

For what value of p is Var(X) a maximum?

Solution. Here the r.v. X takes the values 0, 1 and 2 with respective probabilities p, 1 - 2p and p, 0 \leq p \leq \frac{1}{2}. Thus

E(X) = 0 \times p + 1 \times (1 - 2p) + 2 \times p = 1,

E(X^2) = 0^2 \times p + 1^2 \times (1 - 2p) + 2^2 \times p = 1 + 2p

Var(X) = E(X^2) - [E(X)]^2 = 2p ;   0 \leq p \leq \frac{1}{2}

Obviously, for 0 \leq p \leq \frac{1}{2}, Var(X) is maximum when p = \frac{1}{2}, and [Var(X)]_{max} = 2 \times \frac{1}{2} = 1.

Example 6-11. (Random Walk Problem). Starting from the origin, unit steps are taken to the right with probability p and to the left with probability q (= 1 - p). Assuming independent movements, find the mean and variance of the distance moved from the origin after n steps.

Solution. Let us associate a variable X_i with the ith step, defined as follows:

X_i = \begin{cases} +1, & \text{if the ith step is towards the right} \\ -1, & \text{if the ith step is towards the left} \end{cases}

Then   S_n = X_1 + X_2 + ... + X_n = \sum_{i=1}^{n} X_i

represents the random distance moved from the origin after n steps.

E(X_i) = 1 \times p + (-1) \times q = p - q   and   E(X_i^2) = 1^2 \times p + (-1)^2 \times q = p + q = 1

∴ Var(X_i) = E(X_i^2) - [E(X_i)]^2 = (q + p)^2 - (p - q)^2 = 4pq   [∵ p + q = 1]

Hence,   E(S_n) = \sum_{i=1}^{n} E(X_i) = n(p - q)   and   V(S_n) = \sum_{i=1}^{n} V(X_i) = 4npq

[∵ movements of the steps are independent.]
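A simulation check of Example 6-11 (a Python sketch, assuming NumPy; n = 100 and p = 0.6 are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(8)
n, p, trials = 100, 0.6, 200_000
q = 1 - p

# Each step is +1 with probability p, -1 with probability q.
steps = np.where(rng.random((trials, n)) < p, 1, -1)
S_n = steps.sum(axis=1)

print("mean    :", S_n.mean(), "theory:", n * (p - q))      # 20
print("variance:", S_n.var(), "theory:", 4 * n * p * q)     # 96
```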
Example 6-12. Let X be a r.v. with mean \mu and variance \sigma^2. Show that E(X - b)^2, as a function of b, is minimised when b = \mu.

Solution.   E(X - b)^2 = E[(X - \mu) + (\mu - b)]^2

= E(X - \mu)^2 + (\mu - b)^2 + 2(\mu - b) E(X - \mu) = Var(X) + (\mu - b)^2   [∵ E(X - \mu) = 0]

⇒ E(X - b)^2 \geq Var(X),   ...(*)

since (\mu - b)^2, being the square of a real quantity, is always non-negative. The sign of equality holds in (*) iff (\mu - b)^2 = 0, i.e., \mu = b.

Hence, E(X - b)^2 is minimised when b = \mu, and its minimum value is E(X - \mu)^2 = \sigma^2.

Remark. This result states that the sum of squares of deviations is minimum when taken about the mean.
Example 6-13. In a sequence of Bernoulli trials, let X be the length of the run (of either successes or failures) starting with the first trial. Find E(X) and V(X).

Solution. Let p denote the probability of success. Then q = 1 - p is the probability of failure. X = 1 means that we can have any of the possibilities SF and FS with respective probabilities pq and qp.

∴ P(X = 1) = P(SF) + P(FS) = pq + qp = 2pq

Similarly,   P(X = 2) = P(SSF) + P(FFS) = p^2 q + q^2 p

In general,   P(X = r) = P(SS...SF) + P(FF...FS) = p^r q + q^r p

E(X) = \sum_{r=1}^{\infty} r P(X = r) = \sum_{r=1}^{\infty} r (p^r q + q^r p) = pq \sum_{r=1}^{\infty} r p^{r-1} + qp \sum_{r=1}^{\infty} r q^{r-1}

= \frac{pq}{(1-p)^2} + \frac{qp}{(1-q)^2} = \frac{p}{q} + \frac{q}{p}   ...(*)   (See Remark 1 to Example 6-15)

V(X) = E(X^2) - [E(X)]^2 = E[X(X-1)] + E(X) - [E(X)]^2

E[X(X-1)] = \sum_{r=2}^{\infty} r(r-1) P(X = r) = \sum_{r=2}^{\infty} r(r-1)(p^r q + q^r p)

= qp^2 \sum_{r=2}^{\infty} r(r-1) p^{r-2} + pq^2 \sum_{r=2}^{\infty} r(r-1) q^{r-2}

= 2qp^2 (1-p)^{-3} + 2pq^2 (1-q)^{-3} = \frac{2p^2}{q^2} + \frac{2q^2}{p^2}

∴ V(X) = \frac{2p^2}{q^2} + \frac{2q^2}{p^2} + \frac{p}{q} + \frac{q}{p} - \left(\frac{p}{q} + \frac{q}{p}\right)^2 = \frac{p^2}{q^2} + \frac{q^2}{p^2} + \frac{p}{q} + \frac{q}{p} - 2   [From (*)]
Example 6-14. (MATCHING PROBLEM). A deck of n numbered cards is thoroughly shuffled and the cards are inserted into n numbered cells one by one. If the card number 'r' falls in the cell 'r', we count it as a match, otherwise not. Find the mean and variance of the total number of such matches.

Solution. Let us associate a r.v. X_i with the ith draw, defined as follows:

X_i = \begin{cases} 1, & \text{if the ith card falls in the ith cell} \\ 0, & \text{otherwise} \end{cases}

The total number of matches S is given by

S = X_1 + X_2 + ... + X_n = \sum_{i=1}^{n} X_i \Rightarrow E(S) = \sum_{i=1}^{n} E(X_i)

E(X_i) = 1 \cdot P(X_i = 1) + 0 \cdot P(X_i = 0) = P(X_i = 1) = \frac{1}{n}

Hence,   E(S) = \sum_{i=1}^{n} \frac{1}{n} = 1.

V(S) = V(X_1 + X_2 + ... + X_n) = \sum_{i=1}^{n} V(X_i) + 2 \sum_{i<j} Cov(X_i, X_j)   ...(1)

V(X_i) = E(X_i^2) - [E(X_i)]^2 = 1^2 \cdot P(X_i = 1) + 0^2 \cdot P(X_i = 0) - \left(\frac{1}{n}\right)^2 = \frac{1}{n} - \frac{1}{n^2} = \frac{n-1}{n^2}   ...(2)

Cov(X_i, X_j) = E(X_i X_j) - E(X_i) E(X_j)   ...(3)

E(X_i X_j) = 1 \cdot P(X_i X_j = 1) + 0 \cdot P(X_i X_j = 0) = \frac{(n-2)!}{n!} = \frac{1}{n(n-1)},

since X_i X_j = 1 if and only if both card numbers i and j are in their respective matching places, and there are (n-2)! arrangements of the remaining cards that correspond to this event. Substituting in (3), we get

Cov(X_i, X_j) = \frac{1}{n(n-1)} - \frac{1}{n} \cdot \frac{1}{n} = \frac{1}{n^2 (n-1)}   ...(4)

Substituting from (2) and (4) in (1), we have

V(S) = \sum_{i=1}^{n} \frac{n-1}{n^2} + 2 \sum_{i<j} \frac{1}{n^2(n-1)} = n \cdot \frac{n-1}{n^2} + 2 \cdot \frac{n(n-1)}{2} \cdot \frac{1}{n^2(n-1)} = \frac{n-1}{n} + \frac{1}{n} = 1.
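A simulation of the matching problem (a Python sketch, assuming NumPy; a shuffled deck is modelled as a random permutation, and a match is a fixed point of the permutation):

```python
import numpy as np

rng = np.random.default_rng(9)
n, trials = 10, 200_000

# Count fixed points of random permutations of size n.
matches = np.array([
    np.sum(rng.permutation(n) == np.arange(n)) for _ in range(trials)
])
print("mean:", matches.mean(), " variance:", matches.var())   # both ~1
```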
Example 6-15. If t is any positive real number, show that the function defined by

p(x) = e^{-t} (1 - e^{-t})^{x-1}   ...(1)

can represent a probability function of a random variable X assuming the values 1, 2, 3, .... Find E(X) and Var(X) of the distribution.

Solution. We have e^t > 1, \forall t > 0 \Rightarrow e^{-t} < 1, i.e., 1 - e^{-t} > 0, \forall t > 0. Also e^{-t} > 0, \forall t > 0.

Hence,   p(x) = e^{-t} (1 - e^{-t})^{x-1} \geq 0 \forall x = 1, 2, 3, ...

Also   \sum_{x=1}^{\infty} p(x) = e^{-t} \sum_{x=1}^{\infty} (1 - e^{-t})^{x-1} = e^{-t} \sum_{x=1}^{\infty} a^{x-1},   (a = 1 - e^{-t})

= e^{-t} (1 + a + a^2 + a^3 + ...) = e^{-t} \cdot \frac{1}{1 - a} = e^{-t} [1 - (1 - e^{-t})]^{-1} = e^{-t} \cdot e^{t} = 1

Hence, p(x) defined in (1) represents the probability function of a r.v. X.

E(X) = \sum_{x=1}^{\infty} x p(x) = e^{-t} \sum_{x=1}^{\infty} x (1 - e^{-t})^{x-1} = e^{-t} \sum_{x=1}^{\infty} x a^{x-1},   (a = 1 - e^{-t})

= e^{-t} (1 + 2a + 3a^2 + 4a^3 + ...) = e^{-t} (1 - a)^{-2}   (See Remark 1)

= e^{-t} (e^{-t})^{-2} = e^{t}

E(X^2) = \sum_{x=1}^{\infty} x^2 p(x) = e^{-t} \sum_{x=1}^{\infty} x^2 a^{x-1} = e^{-t} (1 + 4a + 9a^2 + 16a^3 + ...)

= e^{-t} (1 + a)(1 - a)^{-3} = e^{-t} (2 - e^{-t})(e^{-t})^{-3} = e^{2t} (2 - e^{-t})   (See Remark 2)

Hence,   Var(X) = E(X^2) - [E(X)]^2 = e^{2t} (2 - e^{-t}) - e^{2t} = e^{2t} [(2 - e^{-t}) - 1] = e^{2t} (1 - e^{-t}) = e^{t} (e^{t} - 1).
Remarks:
1. Consider   S = 1 + 2a + 3a^2 + 4a^3 + ...   (arithmetic-geometric series)

aS = a + 2a^2 + 3a^3 + ...

∴ (1 - a) S = 1 + a + a^2 + a^3 + ... = \frac{1}{1 - a}   or   S = (1 - a)^{-2}

i.e.,   1 + 2a + 3a^2 + 4a^3 + ... = (1 - a)^{-2}   ...(*)

2. Consider   S = 1 + 2^2 a + 3^2 a^2 + 4^2 a^3 + 5^2 a^4 + ...

S = 1 + 4a + 9a^2 + 16a^3 + 25a^4 + ...
-3aS = -3a - 12a^2 - 27a^3 - 48a^4 - ...
+3a^2 S = +3a^2 + 12a^3 + 27a^4 + ...
-a^3 S = -a^3 - 4a^4 - ...

Adding the above equations, we get

(1 - a)^3 S = 1 + a   ⇒   S = (1 + a)(1 - a)^{-3}

i.e.,   1 + 4a + 9a^2 + 16a^3 + ... = (1 + a)(1 - a)^{-3}   ...(**)

The results (*) and (**) are quite useful for numerical problems and should be committed to memory.
Example 6-16. A man with n keys wants to open his door and tries the keys independently and at random. Find the mean and variance of the number of trials required to open the door,
(i) if unsuccessful keys are not eliminated from further selection, and
(ii) if they are.

Solution. (i) Suppose the man gets the first success at the xth trial, i.e., he is unable to open the door in the first (x - 1) trials. If unsuccessful keys are not eliminated, then X is a random variable which can take the values 1, 2, 3, ..., \infty.
