EECE 522 Notes 22 - Results for Two RVs

Pre-Chapter 10

Results for Two Random Variables


See Reading Notes
posted on BB

Let X and Y be two RVs, each with its own PDF: pX(x) and pY(y).

Their complete probabilistic description is captured in…

Joint PDF of X and Y: pXY(x,y)

Describes probabilities of joint events concerning X and Y.


$$\Pr\{(a < X < b) \text{ and } (c < Y < d)\} = \int_c^d \int_a^b p_{XY}(x,y)\,dx\,dy$$

Marginal PDFs of X and Y: The individual PDFs pX(x) and pY(y)

Imagine “adding up” the joint PDF along one direction of a piece of
paper to give values “along one of the margins”.

$$p_X(x) = \int p_{XY}(x,y)\,dy \qquad\qquad p_Y(y) = \int p_{XY}(x,y)\,dx$$
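As a quick numeric sanity check (a sketch, not part of the original notes): for discrete RVs the integrals become sums, so the marginal formulas can be verified on a small joint PMF. The particular table of joint probabilities below is made up purely for illustration.

```python
import numpy as np

# Hypothetical joint PMF: p_XY[i, j] = Pr{X = x_i, Y = y_j}; rows index x, columns index y.
p_XY = np.array([[0.10, 0.20, 0.10],
                 [0.05, 0.25, 0.30]])
assert np.isclose(p_XY.sum(), 1.0)  # a valid PMF sums to 1

# Marginals: "add up" the joint along one direction (sums play the role of the integrals).
p_X = p_XY.sum(axis=1)  # p_X(x) = sum_y p_XY(x, y)
p_Y = p_XY.sum(axis=0)  # p_Y(y) = sum_x p_XY(x, y)
print(p_X)  # [0.4 0.6]
print(p_Y)  # [0.15 0.45 0.4 ]
```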
Expected Value of Functions of X and Y: You sometimes create a
new RV that is a function of the two of them: Z = g(X,Y).

$$E\{Z\} = E_{XY}\{g(X,Y)\} = \int\!\!\int g(x,y)\,p_{XY}(x,y)\,dx\,dy$$

Example: Z = X + Y
$$
\begin{aligned}
E\{Z\} = E_{XY}\{X+Y\} &= \int\!\!\int (x+y)\,p_{XY}(x,y)\,dx\,dy \\
&= \int\!\!\int x\,p_{XY}(x,y)\,dx\,dy + \int\!\!\int y\,p_{XY}(x,y)\,dx\,dy \\
&= \int x \left[\int p_{XY}(x,y)\,dy\right] dx + \int y \left[\int p_{XY}(x,y)\,dx\right] dy \\
&= \int x\,p_X(x)\,dx + \int y\,p_Y(y)\,dy \\
&= E_X\{X\} + E_Y\{Y\}
\end{aligned}
$$
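Here is a minimal numeric check of that linearity result (my own sketch; the joint table and the supports x ∈ {0,1}, y ∈ {0,1,2} are made up for illustration):

```python
import numpy as np

p_XY = np.array([[0.10, 0.20, 0.10],
                 [0.05, 0.25, 0.30]])
x_vals = np.array([0.0, 1.0])        # assumed support of X (rows)
y_vals = np.array([0.0, 1.0, 2.0])   # assumed support of Y (columns)

# E{X+Y} computed directly against the joint PMF...
X, Y = np.meshgrid(x_vals, y_vals, indexing="ij")
E_sum = np.sum((X + Y) * p_XY)

# ...equals E{X} + E{Y} computed from the marginals.
E_X = np.sum(x_vals * p_XY.sum(axis=1))
E_Y = np.sum(y_vals * p_XY.sum(axis=0))
assert np.isclose(E_sum, E_X + E_Y)
print(E_sum)  # 1.85
```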
Conditional PDFs : If you know the value of one RV how is the
remaining RV now distributed?
$$
p_{Y|X}(y|x) = \begin{cases} \dfrac{p_{XY}(x,y)}{p_X(x)}, & p_X(x) \neq 0 \\[4pt] 0, & \text{otherwise} \end{cases}
\qquad\qquad
p_{X|Y}(x|y) = \begin{cases} \dfrac{p_{XY}(x,y)}{p_Y(y)}, & p_Y(y) \neq 0 \\[4pt] 0, & \text{otherwise} \end{cases}
$$

Sometimes we think of a specific numerical value upon which we are conditioning… pY|X(y|X = 5)

Other times it is an arbitrary value… pY|X(y|X = x) or pY|X(y|x) or pY|X(y|X) (various notations for the same idea).

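In the discrete analogue, the conditional PMF is the joint divided row-by-row (or column-by-column) by the appropriate marginal. A minimal sketch with the same made-up joint table as above (every row of p_X happens to be nonzero here, so the zero-marginal branch of the definition never fires):

```python
import numpy as np

p_XY = np.array([[0.10, 0.20, 0.10],
                 [0.05, 0.25, 0.30]])
p_X = p_XY.sum(axis=1)

# p_Y|X(y|x) = p_XY(x, y) / p_X(x) wherever p_X(x) != 0.
p_Y_given_X = p_XY / p_X[:, None]
print(p_Y_given_X)              # each row is a valid PMF in y...
print(p_Y_given_X.sum(axis=1))  # ...so each row sums to 1
```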
Independence: RVs X and Y are said to be independent if
knowledge of the value of one does not change the PDF model for
the other.
$$p_{Y|X}(y|x) = p_Y(y) \qquad\qquad p_{X|Y}(x|y) = p_X(x)$$

This implies (and is implied by)… $p_{XY}(x,y) = p_X(x)\,p_Y(y)$, since then

$$p_{Y|X}(y|x) = \frac{p_X(x)\,p_Y(y)}{p_X(x)} = p_Y(y) \qquad\qquad p_{X|Y}(x|y) = \frac{p_X(x)\,p_Y(y)}{p_Y(y)} = p_X(x)$$
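The factorization test is easy to run numerically: independence holds exactly when the joint PMF equals the outer product of its marginals. A sketch with the same illustrative table (which turns out to be dependent), plus an independent joint built from its marginals:

```python
import numpy as np

p_XY = np.array([[0.10, 0.20, 0.10],
                 [0.05, 0.25, 0.30]])
p_X = p_XY.sum(axis=1)
p_Y = p_XY.sum(axis=0)

# X and Y are independent iff the joint factors as the outer product of the marginals.
print(np.allclose(p_XY, np.outer(p_X, p_Y)))  # False -- this joint is NOT independent

# An independent joint built from those same marginals passes the test:
p_indep = np.outer(p_X, p_Y)
print(np.allclose(p_indep, np.outer(p_indep.sum(axis=1), p_indep.sum(axis=0))))  # True
```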
Decomposing the Joint PDF: Sometimes it is useful to be able to
write the joint PDF in terms of conditional and marginal PDFs.

From our results for conditioning above we get…

$$p_{XY}(x,y) = p_{Y|X}(y|x)\,p_X(x)$$

$$p_{XY}(x,y) = p_{X|Y}(x|y)\,p_Y(y)$$

From this we can get results for the marginals:

$$p_X(x) = \int p_{X|Y}(x|y)\,p_Y(y)\,dy \qquad\qquad p_Y(y) = \int p_{Y|X}(y|x)\,p_X(x)\,dx$$
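A numeric sketch of recovering a marginal from a conditional (again using the made-up joint from earlier; the sum over y plays the role of the integral):

```python
import numpy as np

p_XY = np.array([[0.10, 0.20, 0.10],
                 [0.05, 0.25, 0.30]])
p_Y = p_XY.sum(axis=0)

# Conditional p_X|Y(x|y): divide each column of the joint by its column sum.
p_X_given_Y = p_XY / p_Y[None, :]

# Recover the marginal of X by averaging the conditional over p_Y:
#   p_X(x) = sum_y p_X|Y(x|y) p_Y(y)
p_X_rebuilt = (p_X_given_Y * p_Y[None, :]).sum(axis=1)
print(np.allclose(p_X_rebuilt, p_XY.sum(axis=1)))  # True
```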
Bayes’ Rule: Sometimes it is useful to be able to write one
conditional PDF in terms of the other conditional PDF.
$$p_{Y|X}(y|x) = \frac{p_{X|Y}(x|y)\,p_Y(y)}{p_X(x)} \qquad\qquad p_{X|Y}(x|y) = \frac{p_{Y|X}(y|x)\,p_X(x)}{p_Y(y)}$$

Some alternative versions of Bayes' rule can be obtained by writing the marginal PDFs using some of the above results:

$$p_{Y|X}(y|x) = \frac{p_{X|Y}(x|y)\,p_Y(y)}{\int p_{XY}(x,y)\,dy} = \frac{p_{X|Y}(x|y)\,p_Y(y)}{\int p_{X|Y}(x|y)\,p_Y(y)\,dy}$$

$$p_{X|Y}(x|y) = \frac{p_{Y|X}(y|x)\,p_X(x)}{\int p_{XY}(x,y)\,dx} = \frac{p_{Y|X}(y|x)\,p_X(x)}{\int p_{Y|X}(y|x)\,p_X(x)\,dx}$$
Conditional Expectations: Once you have a conditional PDF it
works EXACTLY like a PDF… that is because it IS a PDF!

Remember that any expectation involves a function of a random variable (or variables) times a PDF, and then integrating that product.
So the trick to working with expected values is to make sure you
know three things:
1. What function of which RVs
2. What PDF
3. What variable to integrate over

For conditional expectations… one idea but several notations!

$$E_{X|Y}\{g(X,Y)\} = \int g(x,y)\,p_{X|Y}(x|y)\,dx$$
Uses a subscript on E to indicate that you use the conditional PDF. Does not explicitly state the value at which Y should be fixed, so use an arbitrary y.

$$E_{X|Y=y_o}\{g(X,Y)\} = \int g(x,y_o)\,p_{X|Y}(x|y_o)\,dx$$
Uses a subscript on E to indicate that you use the conditional PDF. Explicitly states that the value at which Y should be fixed is $y_o$.

$$E\{g(X,Y) \mid Y\} = \int g(x,y)\,p_{X|Y}(x|y)\,dx$$
Uses a "conditional bar" inside the brackets of E to indicate use of the conditional PDF. Does not explicitly state the value at which Y should be fixed, so use an arbitrary y.

$$E\{g(X,Y) \mid Y = y_o\} = \int g(x,y_o)\,p_{X|Y}(x|y_o)\,dx$$
Uses a "conditional bar" inside the brackets of E to indicate use of the conditional PDF. Explicitly states that the value at which Y should be fixed is $y_o$.
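Whatever the notation, the computation is the same. A sketch of $E\{g(X,Y) \mid Y = y_o\}$ on the running made-up joint PMF, with the choice $g(x,y) = xy$ picked purely for illustration:

```python
import numpy as np

p_XY = np.array([[0.10, 0.20, 0.10],
                 [0.05, 0.25, 0.30]])
x_vals = np.array([0.0, 1.0])
y_vals = np.array([0.0, 1.0, 2.0])
p_Y = p_XY.sum(axis=0)
p_X_given_Y = p_XY / p_Y[None, :]

def g(x, y):          # the function of both RVs; x*y is an arbitrary illustrative choice
    return x * y

# E{g(X,Y) | Y = y0}: fix Y at y0, average g(x, y0) over p_X|Y(x|y0), summing over x.
j0 = 2                # index of y0 = y_vals[2] = 2.0
E_g_given_y0 = np.sum(g(x_vals, y_vals[j0]) * p_X_given_Y[:, j0])
print(E_g_given_y0)   # 1.5
```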
Decomposing Joint Expectations: When averaging over the joint
PDF it is sometimes useful to be able to decompose it into nested
averaging in terms of conditional and marginal PDFs.
This uses the results for decomposing joint PDFs.

$$E\{g(X,Y)\} = E_{XY}\{g(X,Y)\} = \int\!\!\int g(x,y)\,\underbrace{p_{XY}(x,y)}_{p_{Y|X}(y|x)\,p_X(x)}\,dx\,dy$$

$$= E_X\big\{E_{Y|X}\{g(X,Y)\}\big\} = \int_x \Big[\underbrace{\int_y g(x,y)\,p_{Y|X}(y|x)\,dy}_{E_{Y|X}\{g(X,Y)\}}\Big]\,p_X(x)\,dx$$

The inner expectation $E_{Y|X}\{g(X,Y)\}$ is a function of X, so it is an RV that "inherits" the PDF of X!!!
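A numeric check of the iterated-expectation decomposition, again on the made-up joint PMF with the illustrative choice $g(x,y) = xy$:

```python
import numpy as np

p_XY = np.array([[0.10, 0.20, 0.10],
                 [0.05, 0.25, 0.30]])
x_vals = np.array([0.0, 1.0])
y_vals = np.array([0.0, 1.0, 2.0])
p_X = p_XY.sum(axis=1)
p_Y_given_X = p_XY / p_X[:, None]

X, Y = np.meshgrid(x_vals, y_vals, indexing="ij")
g = X * Y  # g(x, y) = x*y, an arbitrary illustrative choice

# Inner average: E_{Y|X}{g(X,Y)} is a function of x (an RV inheriting the PMF of X).
inner = (g * p_Y_given_X).sum(axis=1)

# Outer average over p_X recovers the joint expectation.
print(np.isclose((inner * p_X).sum(), (g * p_XY).sum()))  # True
```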
Ex. Decomposing Joint Expectations: $E\{g(X,Y)\} = E_X\big\{E_{Y|X}\{g(X,Y)\}\big\}$

Let X = # on Red Die, Y = # on Blue Die, and g(X,Y) = X + Y.

X\Y    1      2      3      4      5      6     E{X+Y | X = x}
 1   (1+1)  (1+2)  (1+3)  (1+4)  (1+5)  (1+6)        4.5
 2   (2+1)  (2+2)  (2+3)  (2+4)  (2+5)  (2+6)        5.5
 3   (3+1)  (3+2)  (3+3)  (3+4)  (3+5)  (3+6)        6.5
 4   (4+1)  (4+2)  (4+3)  (4+4)  (4+5)  (4+6)        7.5
 5   (5+1)  (5+2)  (5+3)  (5+4)  (5+5)  (5+6)        8.5
 6   (6+1)  (6+2)  (6+3)  (6+4)  (6+5)  (6+6)        9.5

Each row gives the inner conditional expectation $E\{X+Y \mid X = x\} = \sum_{y=1}^{6} (x+y)\,\tfrac{1}{6} = x + 3.5$. These conditional expectations constitute an RV that inherits the uniform PMF of X, taking each of the six values with probability 1/6.

Averaging that RV over X recovers the joint expectation:

$$E\{X+Y\} = E\big\{E\{X+Y \mid X\}\big\} = \sum_{x=1}^{6} E\{X+Y \mid X = x\}\,\frac{1}{6} = 7$$
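The dice example is small enough to reproduce directly in code; this sketch computes the conditional-expectation column and then averages it over X:

```python
import numpy as np

# Two fair dice: X = red die, Y = blue die, g(X, Y) = X + Y.
vals = np.arange(1, 7)
p_X = np.full(6, 1/6)

# Inner average: E{X+Y | X = x} = sum_y (x + y) * (1/6) = x + 3.5
E_given_x = np.array([np.sum((x + vals) * (1/6)) for x in vals])
print(E_given_x)  # [4.5 5.5 6.5 7.5 8.5 9.5]

# Outer average over p_X gives E{X+Y} = 7.
print(np.sum(E_given_x * p_X))  # 7.0
```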
