2.5 Averages: Chapter 2 Part 2

The document discusses key concepts related to discrete random variables including: 1) Different types of averages (mean, median, mode) and how to calculate them are introduced. 2) Functions of random variables are discussed, where a new random variable Y can be derived from an original random variable X through a function g(X). 3) Methods to determine the probability model, expected value, and other properties of the derived random variable Y are presented. 4) Variance and standard deviation are introduced as measures of dispersion around the expected value, with formulas given to calculate them.

Chapter 2: Discrete Random Variables

Chapter 2 part 2

2.5 Averages

 The average of a collection of numerical observations is a statistic of the


collection.
 It is a single number that describes the entire collection.
 There are several kinds of averages.
 The most common ones are the mean, the median and the mode.

The mean is the most familiar. It is obtained by adding up all the numbers in
the collection and dividing by the number of terms in the collection.

The mode of a set of numbers is the most common number in the collection
of observations.
If there are two or more numbers with this property, the collection of
observations is called multimodal.

The median is a number in the middle of the set of numbers.


An equal number of members of the set are below the median and above
the median.

Example 2.25 Solution:


The mean value is (∑_{i=1}^{10} x_i) / 10 = 68/10 = 6.8

The median is 7 since there are four scores below 7 and four scores above 7:
{4, 5, 5, 5, 7, 7, 8, 8, 9, 10}

The mode is 5 since that score occurs more than any other. It occurs three
times.
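The three averages for these scores can be checked with a short script (Python is used here for illustration; it is not part of the original notes):

```python
import statistics

# Scores from Example 2.25
scores = [4, 5, 5, 5, 7, 7, 8, 8, 9, 10]

mean = sum(scores) / len(scores)    # sum of the terms / number of terms
median = statistics.median(scores)  # middle value of the sorted collection
mode = statistics.mode(scores)      # most common value

print(mean, median, mode)  # 6.8 7.0 5
```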

E[R] = ∑_{r=0,2} r PR(r)



{Math Fact B.7: ∑_{i=1}^{∞} i q^i = q / (1 − q)² }

With reference to testing integrated circuits: if the probability of rejecting an
integrated circuit is p = 1/5, then on average we have to perform E[Y] = 1/p = 5
tests to observe the first reject.
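The claim E[Y] = 1/p can be checked numerically by truncating the infinite sum over the geometric PMF (a Python sketch; the value p = 1/5 comes from the circuit-testing example):

```python
# E[Y] for a geometric(p) random variable: sum_{y>=1} y * p * (1-p)^(y-1).
# With p = 1/5 this should come out to 1/p = 5.
p = 1 / 5
q = 1 - p

# Truncate the infinite sum at a large N; the tail is negligible since q < 1.
expected_tests = sum(y * p * q ** (y - 1) for y in range(1, 1000))

print(round(expected_tests, 6))  # 5.0
```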

Recall:
The PMF of a Poisson(α) random variable X is:
PX(x) = { α^x e^(−α) / x!   x = 0, 1, 2, …
        { 0                 otherwise
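A direct transcription of this PMF (a Python sketch, not from the notes) confirms that the probabilities sum to 1:

```python
import math

def poisson_pmf(x, alpha):
    """PMF of a Poisson(alpha) random variable X."""
    if x < 0:
        return 0.0
    return alpha ** x * math.exp(-alpha) / math.factorial(x)

alpha = 2.0  # an arbitrary illustrative value of the parameter
# Summing over x = 0, 1, 2, ... should give (approximately) 1.
total = sum(poisson_pmf(x, alpha) for x in range(100))
print(round(total, 6))  # 1.0
```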

2.6 Functions of a Random Variable


In many practical situations, we observe sample values of a random variable
and use these sample values to compute other quantities.

An example is the measurement of the power level of the received signal of a


cellular phone.

An observation is x, the power level in milliwatts.

Usually, engineers convert the measurement to decibels using the relation


y = 10 log10 x in dBm (decibels with respect to one milliwatt)

If x is a sample value of the random variable X, then y is a sample value of the


random variable Y.

Y is a derived random variable
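The mW-to-dBm conversion can be written directly from the relation y = 10 log10 x (a Python sketch for illustration):

```python
import math

def mw_to_dbm(x_mw):
    """Map a sample value x (power in mW) to y = 10*log10(x) in dBm."""
    return 10 * math.log10(x_mw)

# Sample values of X (milliwatts) map to sample values of the derived r.v. Y (dBm).
print(mw_to_dbm(1))    # 0.0   (1 mW is 0 dBm by definition)
print(mw_to_dbm(100))  # 20.0
```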



Not important to derive the formula

We would like to determine the probability model of a derived random variable


from the probability model of the original random variable.

We start with PX(x) (probability model of the original r.v.) and the function Y =
g(X). Then we find PY(y).

Note:
Although both PX(x) and g(x) are functions of x, they are totally different functions:
PX(x) is the probability model of the r.v. X with its properties, while g(x) is just a
function relating the two random variables X and Y.

Meaning: If X = x is the outcome of an experiment, then P[Y = y] is the sum of the
probabilities of all outcomes X = x for which g(x) = y.
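The rule "sum the probabilities of all x for which g(x) = y" can be sketched generically (the PMF and function below are hypothetical illustrations, not from the text):

```python
def derived_pmf(px, g):
    """Given PX as a dict {x: P[X = x]} and a function g, return PY.

    P[Y = y] is the sum of PX(x) over all outcomes x for which g(x) = y.
    """
    py = {}
    for x, prob in px.items():
        y = g(x)
        py[y] = py.get(y, 0.0) + prob
    return py

# Hypothetical example: X uniform on {-1, 0, 1} and Y = X^2.
px = {-1: 1 / 3, 0: 1 / 3, 1: 1 / 3}
py = derived_pmf(px, lambda x: x * x)
print(py)  # P[Y = 1] = 2/3 (from x = -1 and x = 1), P[Y = 0] = 1/3
```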

Example 2.28
In Example 2.27, assume that all faxes contain 1, 2, 3 or 4 pages with equal
probability. Find 1) the PMF of Y and 2) the expected value of Y, i.e. the
expected charge for a fax.

Example 2.28 Solution


The number of pages X has a PMF as shown below:

The charge for the fax, Y has range SY = {10, 19, 27, 34} which correspond to SX =
{1, 2, 3, 4}.
The experiment can be described by the tree diagram.
Each value of Y results in a unique value of X. Equation (2.66) can be used to
find PY(y)
PY(y) = P[Y = g(x)] = P[X = x] = PX(x) (2.66)

Page → Cost (tree diagram):
X = 1 (prob. 1/4) → Y = 10
X = 2 (prob. 1/4) → Y = 19
X = 3 (prob. 1/4) → Y = 27
X = 4 (prob. 1/4) → Y = 34

PY(y) = { 1/4   y = 10, 19, 27, 34
        { 0     otherwise

E[Y] = (1/4)(10 + 19 + 27 + 34) = 22.5 cents
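The PMF and expected charge above can be verified with a few lines (Python, for illustration):

```python
# Example 2.28: X is uniform on {1, 2, 3, 4}; the charge Y = g(X)
# takes the values 10, 19, 27, 34 cents respectively.
px = {1: 1 / 4, 2: 1 / 4, 3: 1 / 4, 4: 1 / 4}
charge = {1: 10, 2: 19, 3: 27, 4: 34}

# Each x maps to a unique y, so PY(y) = PX(x) by Equation (2.66).
py = {charge[x]: p for x, p in px.items()}
expected_charge = sum(y * p for y, p in py.items())

print(expected_charge)  # 22.5 (cents)
```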

Read Example 2.30



2.7 Expected Value of a Derived Random Variable

Sometimes, only the expected value of a derived random variable is


needed instead of the entire probability model.

To obtain this average, it is not necessary to compute the PMF or the


CDF of the new r.v.

Example 2.52 Solution


Using Theorem 2.12: “ E[aX + b] = aE[X] + b ” yields:
E[V] = E[g(R)] = 4E[R] + 7 = 4(3/2) + 7 = 13

We may also use Theorem 2.10: “E[Y] = μY = ∑_{x ∈ SX} g(x) PX(x)”

E[V] = g(0)PR(0) + g(2)PR(2) = 7(1/4) + 15(3/4) = 13
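Both routes to E[V] can be reproduced numerically (a Python sketch; g(r) = 4r + 7 and the PMF of R are taken from the solution above):

```python
# V = g(R) = 4R + 7, with PR(0) = 1/4 and PR(2) = 3/4.
pr = {0: 1 / 4, 2: 3 / 4}

def g(r):
    return 4 * r + 7

# Theorem 2.10: E[V] = sum over r of g(r) * PR(r).
e_v_direct = sum(g(r) * p for r, p in pr.items())

# Theorem 2.12: E[aR + b] = a*E[R] + b, with E[R] = 3/2.
e_r = sum(r * p for r, p in pr.items())
e_v_linear = 4 * e_r + 7

print(e_v_direct, e_v_linear)  # 13.0 13.0
```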



2.8 Variance and Standard Deviation

 We have already seen the average value of a r.v. which is a typical value.
 It is a number which summarizes the entire probability model.
 Looking further into the probability model, one may ask:

How typical is this average? Or,


What are the chances of observing an event far from the average?

As an example, assume that your score is 7 points above the class average.
You may ask; how good is that? Is it near the top or near the middle??

A measure of the dispersion is an answer to these questions.

If this measure is small, observations are likely to be near the average.

On the other hand, a high measure of the dispersion suggests that it is


not unusual to observe events that are far from the average.

The most important measures of the dispersion are the standard


deviation and the variance.
The variance of a r.v. X describes the difference between X and its
expected value.

Note:
The units of Var[X] are squares of the units of the r.v. X. But the units for 𝝈X are
the same as X.
Hence we can compare 𝝈X with the expected value of X.

The outcomes which are within ± 𝝈X of μX are considered to be in the center of


the distribution.
In other words; samples values which are within 𝝈X of the expected value,
i.e. x ∈ [ μX - 𝝈X , μX + 𝝈X], are considered as “typical” values of x and other
values as “unusual”.

Var[X] = E[(X - 𝝁𝑿 )2]



Var[X] = E[(X − μX)²]
       = E[X²] − (E[X])²
Note:
E[X] and E[X2] are examples of moments of r. v. X.

Var[X] is a central moment of X.

Remark;
Like the PMF and CDF of a r. v. X, the set of moments of X is a complete
probability model.
The model based on moments can be expressed as a moment generating
function.

 Using Theorem 2.10, we can obtain Var[R] without using PW(w), which is
not available: E[Y] = μY = ∑_{x ∈ SX} g(x) PX(x)

Var[R] = E[(R − μR)²] = (0 − 3/2)² PR(0) + (2 − 3/2)² PR(2) = 3/4



 Applying Theorem 2.13: Var[X] = E[X²] − μX² = E[X²] − (E[X])²

We just need to evaluate E[R²], as E[R] = 3/2 is already known.
E[R²] = 0² PR(0) + 2² PR(2) = 0(1/4) + 4(3/4) = 3

Var[R] = 3 − (3/2)² = 3 − 9/4 = 3/4
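Both computations of Var[R] can be checked directly (a Python sketch using the PMF from the solution):

```python
# PR(0) = 1/4, PR(2) = 3/4.
pr = {0: 1 / 4, 2: 3 / 4}

e_r = sum(r * p for r, p in pr.items())  # E[R] = 3/2

# Definition: Var[R] = E[(R - mu_R)^2].
var_definition = sum((r - e_r) ** 2 * p for r, p in pr.items())

# Theorem 2.13: Var[R] = E[R^2] - (E[R])^2.
var_moments = sum(r ** 2 * p for r, p in pr.items()) - e_r ** 2

print(var_definition, var_moments)  # 0.75 0.75
```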

Note:
For any r.v. X, Var[X] ≥ 0.

In fact, (X − µX)² ≥ 0; hence its expected value is also nonnegative.



Example 2.35 Solution


 Using Theorem 2.12: “E[aX + b] = aE[X] + b”, we can find the expected
value of Y = X + 1:
E[Y] = E[X] + 1

 Var[Y] = ? Using Theorem 2.14: “Var[aX + b] = a²Var[X]” yields:

Var[Y] = Var[X]

Note:
As a consequence of Theorem 2.14, we can write:
Var[aX] = a²Var[X] and σ_aX = |a| σ_X
This means that multiplying a random variable by a constant is equivalent
to a scale change in the unit of measurement of the r.v.
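The scale-change property can be illustrated with an arbitrary PMF (the PMF and the constant a below are hypothetical, not from the text):

```python
import math

px = {1: 0.2, 2: 0.5, 3: 0.3}  # hypothetical PMF
a = 10                          # hypothetical scale constant

def mean(pmf):
    return sum(x * p for x, p in pmf.items())

def var(pmf):
    m = mean(pmf)
    return sum((x - m) ** 2 * p for x, p in pmf.items())

# Y = a*X has PMF PY(a*x) = PX(x).
py = {a * x: p for x, p in px.items()}

assert math.isclose(var(py), a ** 2 * var(px))                        # Var[aX] = a^2 Var[X]
assert math.isclose(math.sqrt(var(py)), abs(a) * math.sqrt(var(px)))  # sigma_aX = |a| sigma_X
```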

What are the missing values?



2.9 Conditional Probability Mass Function

Recall:
P[A|B] expresses new knowledge regarding the occurrence of the event A,
after learning that event B has occurred.

Here, event A is just an observation of a particular value of the r.v. X;
i.e. A = {X = x}

The conditioning event B contains information about X but not the precise
value of X, such as X ≤ 33 or |X| > 100.

The occurrence of the conditioning event B changes the probability of
the event {X = x}.

Recall:

Theorem 1.10 (Law of Total Probability):
For an event space {B1, B2, …, Bm} with P[Bi] > 0 for all i,
P[A] = ∑_{i=1}^{m} P[A|Bi] P[Bi]

Recall :

X is a geometric(p) random variable if its PMF has the form
PX(x) = { p(1 − p)^(x−1)   x = 1, 2, 3, …
        { 0                otherwise

Once we learn that an outcome x ∈ B has occurred, the probabilities of all
x ∉ B become 0 in the conditional model, and the probabilities of all x ∈ B
are proportionally higher than they were before we learned that x ∈ B.
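The renormalization described above can be sketched as a conditional PMF (a standard construction, written here as an illustration with a hypothetical PMF):

```python
def conditional_pmf(px, B):
    """PX|B(x) = PX(x) / P[B] for x in B, and 0 for x outside B."""
    p_b = sum(p for x, p in px.items() if x in B)
    return {x: (p / p_b if x in B else 0.0) for x, p in px.items()}

# Illustration: a finite PMF, conditioned on B = {X <= 2}.
px = {1: 0.5, 2: 0.25, 3: 0.125, 4: 0.125}
px_given_b = conditional_pmf(px, {1, 2})

# Probabilities inside B scale up by 1/P[B]; those outside B drop to 0.
print(px_given_b)
```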

Remark: This is Theorem 2.1 rewritten with S replaced by B.


