2.5 Averages: Chapter 2 Part 2
The mean is the most familiar average. It is obtained by adding up all the numbers in
the collection and dividing by the number of terms in the collection.
The mode of a set of numbers is the most common number in the collection
of observations.
If there are two or more numbers with this property, the collection of
observations is called multimodal.
Example: consider the collection of scores {4, 5, 5, 5, 7, 7, 8, 8, 9, 10}.
The median is 7, since there are four scores below 7 and four scores above 7.
The mode is 5, since that score occurs more often than any other: it occurs three times.
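As a quick check, here is a minimal Python sketch (not part of the original notes) that computes the three averages of this collection with the standard library:

from statistics import mean, median, mode

scores = [4, 5, 5, 5, 7, 7, 8, 8, 9, 10]

print(mean(scores))    # 6.8  -> sum of the scores (68) divided by the count (10)
print(median(scores))  # 7.0  -> middle of the sorted list (average of the 5th and 6th values)
print(mode(scores))    # 5    -> most frequent value; it occurs three times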
Chapter 2: Discrete Random variables
Recall:
The PMF of a Poisson(α) random variable X is
  P_X(x) = α^x e^(−α) / x!   for x = 0, 1, 2, …
  P_X(x) = 0                 otherwise
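As an illustration, a minimal Python sketch of this PMF (the rate α = 2 is chosen here purely for illustration, not taken from the notes):

import math

def poisson_pmf(x, alpha):
    # PMF of a Poisson(alpha) r.v.: alpha**x * e**(-alpha) / x! for x = 0, 1, 2, ...
    if x < 0:
        return 0.0          # the PMF is 0 outside x = 0, 1, 2, ...
    return alpha**x * math.exp(-alpha) / math.factorial(x)

alpha = 2.0                  # illustrative rate
print([round(poisson_pmf(x, alpha), 4) for x in range(5)])
# [0.1353, 0.2707, 0.2707, 0.1804, 0.0902]
print(sum(poisson_pmf(x, alpha) for x in range(50)))   # ≈ 1.0, as a PMF must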
We start with PX(x) (probability model of the original r.v.) and the function Y =
g(X). Then we find PY(y).
Note:
Although both P_X(x) and g(x) are functions of the random variable X, they are
totally different functions: P_X(x) is the probability model of the r.v. X, with all its
properties, while g(x) is just the function relating the two r.v.s X and Y.
Example 2.28
In Example 2.27, assume that all faxes contain 1, 2, 3, or 4 pages with equal
probability. Find the PMF and the expected value of Y, the charge for a fax.
The charge for the fax, Y, has range S_Y = {10, 19, 27, 34} (in cents), which
corresponds to S_X = {1, 2, 3, 4}.
The experiment can be described by the tree diagram below. Since each value of Y
results from a unique value of X, Equation (2.66) can be used to find P_Y(y):
  P_Y(y) = P[Y = g(x)] = P[X = x] = P_X(x),  where y = g(x)   (2.66)
[Tree diagram: each page count X = 1, 2, 3, 4 occurs with probability 1/4 and leads to the cost Y = 10, 19, 27, 34 cents, respectively.]
The resulting PMF of Y is
  P_Y(y) = 1/4   for y = 10, 19, 27, 34
  P_Y(y) = 0     otherwise
The expected value is
  E[Y] = (1/4)(10 + 19 + 27 + 34) = 22.5 cents
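A minimal Python sketch of this calculation (the page-to-cost map 1 → 10, 2 → 19, 3 → 27, 4 → 34 cents is the one given in the example):

g = {1: 10, 2: 19, 3: 27, 4: 34}          # cost in cents for a fax of x pages
P_X = {x: 1/4 for x in g}                 # X is equally likely on {1, 2, 3, 4}

# g is one-to-one on S_X, so P_Y(g(x)) = P_X(x) as in Equation (2.66).
P_Y = {g[x]: p for x, p in P_X.items()}
print(P_Y)                                 # {10: 0.25, 19: 0.25, 27: 0.25, 34: 0.25}

E_Y = sum(y * p for y, p in P_Y.items())
print(E_Y)                                 # 22.5 cents, matching the hand calculation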
We have already seen the average value of a r.v., which is a typical value: a single
number that summarizes the entire probability model.
Looking further into the probability model, one may ask how far the observations
typically spread around this average.
As an example, assume that your score is 7 points above the class average. How good
is that? Is it near the top or only near the middle? The answer depends on how spread
out the scores are, and this spread is measured by the variance Var[X] and the
standard deviation σ_X.
Note:
The units of Var[X] are the squares of the units of the r.v. X, but the units of σ_X are
the same as those of X.
Hence we can compare σ_X directly with the expected value of X.
  Var[X] = E[X²] − (E[X])²
Note:
E[X] and E[X²] are examples of moments of the r.v. X.
Remark:
Like the PMF and CDF of a r.v. X, the set of moments of X is a complete
probability model.
The model based on moments can be expressed as a moment generating
function.
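As an illustration, a short Python sketch computes the first two moments and the variance for the page count X of Example 2.28 (uniform on {1, 2, 3, 4}); this particular PMF is used only as an example:

P_X = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}

E_X  = sum(x * p for x, p in P_X.items())       # first moment:  2.5
E_X2 = sum(x**2 * p for x, p in P_X.items())    # second moment: 7.5
Var_X = E_X2 - E_X**2                           # 7.5 - 6.25 = 1.25

print(E_X, E_X2, Var_X)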
Using Theorem 2.10, we can obtain the variance of a derived random variable directly
from P_X(x), without first finding the PMF of the derived variable (which may not be
available). Theorem 2.10 states:
  E[Y] = E[g(X)] = Σ_{x ∈ S_X} g(x) P_X(x)
Note:
For any r. v. X, Var[X] ≥ 0.
Note:
As a consequence of Theorem 2.14 we can write
  Var[aX] = a² Var[X]   and   σ_aX = |a| σ_X,
which means that multiplying a random variable by a constant is equivalent
to a change of scale in the unit of measurement of the r.v.
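A short numerical check of this scaling property (the constant a = -3 and the uniform PMF below are chosen only for illustration):

import math

P_X = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}

def var(pmf):
    # Var = E[X^2] - (E[X])^2, computed directly from a PMF given as a dict
    m1 = sum(x * p for x, p in pmf.items())
    m2 = sum(x**2 * p for x, p in pmf.items())
    return m2 - m1**2

a = -3                                     # any constant, including a negative one
P_aX = {a * x: p for x, p in P_X.items()}  # PMF of aX: same probabilities, scaled values

print(var(P_X), var(P_aX))                 # 1.25 and 11.25 = (-3)**2 * 1.25
print(math.sqrt(var(P_aX)), abs(a) * math.sqrt(var(P_X)))   # both ≈ 3.3541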
Recall:
P[A|B] expresses our new knowledge regarding the occurrence of the event A after
learning that the event B has occurred.
The conditioning event B contains information about X, but not the precise value of X;
for example, B = {X ≤ 33} or B = {|X| > 100}.
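As an illustration, a minimal Python sketch conditions the Poisson(2) model used earlier on the event B = {X ≤ 3}. It relies on the standard definition P_X|B(x) = P_X(x) / P[B] for x in B (0 otherwise), which is not stated in the excerpt above but follows directly from P[A|B] = P[A ∩ B] / P[B]:

import math

def poisson_pmf(x, alpha):
    # redefined here so the sketch is self-contained
    return alpha**x * math.exp(-alpha) / math.factorial(x) if x >= 0 else 0.0

alpha = 2.0
B = range(0, 4)                                 # conditioning event B = {X <= 3}
P_B = sum(poisson_pmf(x, alpha) for x in B)     # P[B]

# Conditional PMF: P_X|B(x) = P_X(x) / P[B] for x in B, 0 otherwise.
P_X_given_B = {x: poisson_pmf(x, alpha) / P_B for x in B}
print(round(P_B, 4))                            # 0.8571
print({x: round(p, 4) for x, p in P_X_given_B.items()})
print(round(sum(P_X_given_B.values()), 4))      # 1.0 -- it is again a valid PMF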