
Introductory Econometrics

Probability and Statistics Refresher

Monash Econometrics and Business Statistics

2019

Outline

I Reference: Appendices B-1, B-2, B-3, B-4 of the textbook


I Random variables (discrete, continuous) and their probability
distribution
I Mean, variance, standard deviation
I Properties of expectation
I Covariance and correlation
I Joint and conditional distributions
I Conditional expectation function as the fundamental target of
modelling

Random variables
I Economic and financial variables are by nature random. We do not
know what their values will be until we observe them.
I A random variable is a rule that assigns a numerical outcome to each
possible state of the world.
I For example, the first wage offer that a BCom graduate receives in
the job market is a random variable. The value of ASX200 index
tomorrow is a random variable. Other examples are . . .
I A discrete random variable has a finite number of distinct outcomes.
For example, rolling a die is a random variable with 6 distinct
outcomes.
I A continuous random variable can take a continuum of values
within some interval. For example, rainfall in Melbourne in May can
be any number in the range from 0.00 to 200.00 mm.
I While the outcomes are uncertain, they are not haphazard: each outcome
occurs according to a probability.

A random variable and its probability distribution
I A discrete random variable is fully described by
  its possible values x_1, x_2, . . . , x_m
  the probability corresponding to each value p_1, p_2, . . . , p_m
with the interpretation that P(X = x_1) = p_1, P(X = x_2) = p_2, . . . ,
P(X = x_m) = p_m.
I The probability density function (pdf) for a discrete random variable
X is a function f with f (xi ) = pi , i = 1, 2, . . . , m and f (x) = 0 for
all other x.
I Probabilities of all possible outcomes of a random variable must
sum to 1.
∑_{i=1}^{m} p_i = p_1 + p_2 + · · · + p_m = 1

I Examples:
1. Rolling a die
2. Sex of a baby who is not yet born. Is it a random variable?
3. The starting wage offer to a BCom graduate
I The probability density function (pdf) for a continuous random
variable X is a function f such that P(a ≤ X ≤ b) is the area under
the pdf between a and b.
I The total area under the pdf is equal to 1.

I Example: Distribution of men’s height
I The area under the pdf is equal to 1

I The probability that the height of a randomly selected man lies in
a certain interval is the area under the pdf over that interval

Features of probability distributions: 1. Measures of
Central Tendency
Textbook reference B-3

I Expected value or mean of a discrete random variable is given by


E(X) = p_1 x_1 + p_2 x_2 + · · · + p_m x_m = ∑_{i=1}^{m} p_i x_i

and for a continuous random variable is given by


E(X) = ∫_{−∞}^{∞} x f(x) dx

I Intuitively, expected value is the long-run average if we observe X
many, many, many times.
I It is conventional to use the Greek letter µ to denote the expected value:

µ_X = E(X)
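A small simulation makes the long-run-average interpretation concrete. This is a sketch of ours (not from the textbook), using a fair die so that E(X) = 3.5 and assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(42)

# A fair die: outcomes 1..6, each with probability 1/6
outcomes = np.arange(1, 7)
probs = np.full(6, 1 / 6)

# E(X) = sum of p_i * x_i = 3.5
mu = np.sum(probs * outcomes)

# The average of many simulated rolls approaches E(X)
rolls = rng.choice(outcomes, size=1_000_000, p=probs)
print(mu, rolls.mean())  # 3.5 and a value very close to 3.5
```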

I Another measure of central tendency is the median of X , which is
the “middle-most” outcome of X , i.e. xmed such that
P(X ≤ xmed ) = 0.5. Median is preferred to the mean when the
distribution is heavily skewed, e.g. income or house prices.
I Finally, there is the mode which is the most likely value, i.e. the
outcome with the highest probability. It is not a widely used
measure of central tendency.

2. Measures of dispersion
Textbook reference B-3

I Variance of a random variable:

σ_X² = Var(X) = E[(X − µ_X)²]

I Variance is a measure of spread of the distribution of X around its
mean.
I If X is an action with different possible outcomes, then Var (X )
gives an indication of riskiness of that action.
I Standard deviation is the square root of the variance. In finance,
standard deviation is called the volatility of X.

σ_X = sd(X) = √(E[(X − µ_X)²])

I The advantage of standard deviation over variance is that it has the
same units as X.
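Continuing the die example, a short sketch (ours, assuming numpy) computes the variance and standard deviation directly from the definitions:

```python
import numpy as np

outcomes = np.arange(1, 7)        # fair die
probs = np.full(6, 1 / 6)

mu = np.sum(probs * outcomes)               # E(X) = 3.5
var = np.sum(probs * (outcomes - mu) ** 2)  # E[(X - mu)^2] = 35/12 ≈ 2.92
sd = np.sqrt(var)                           # ≈ 1.71, in the same units as X
print(mu, var, sd)
```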

Properties of the Expected Value
Textbook reference B-3

1. For any constant c, E (c) = c.


2. For any constants a and b,

E (aX + b) = aE (X ) + b

3. Expected value is a linear operator, meaning that the expected value of
a sum of several random variables is the sum of their expected values:

E (X + Y + Z ) = E (X ) + E (Y ) + E (Z )

I The above three properties imply that for any constants a, b, c and
d and random variables X , Y and Z ,

E (a + bX + cY + dZ ) = a + bE (X ) + cE (Y ) + dE (Z )
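A numerical sanity check of this property, as a sketch with made-up constants and distributions (numpy assumed). The sample mean is linear in exactly the same way, so the two sides agree up to floating-point rounding:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
X = rng.normal(1, 2, n)
Y = rng.normal(3, 1, n)
Z = rng.normal(-2, 5, n)
a, b, c, d = 1.0, 2.0, -0.5, 3.0

lhs = np.mean(a + b * X + c * Y + d * Z)             # sample E(a + bX + cY + dZ)
rhs = a + b * X.mean() + c * Y.mean() + d * Z.mean()
print(lhs, rhs)  # identical up to floating-point rounding
```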

Question - https://flux.qa - Code: UIGIRZ

We have put one quarter of our savings in a term deposit (a risk free
investment) with annual return of 2% and invested the other three
quarters in an investment fund with expected annual return of 3% and
variance of 4.
The expected value of the annual return of our portfolio is

A. (2 + 3)/2 = 2.5%
B. (1/4) × 2 + (3/4) × 3 = 2.75%
C. (3/4) × 3 = 2.25%
D. (1/4) × 2 + (3/4) × 3 × 4 = 9.5%
E. (1/4) × 0 + (3/4) × 4 = 3%

I It is important to have in mind that E is a linear operator, so it
“goes through” sums of random variables, but it does not go
through non-linear transformations of random variables. For
example:
E(X²) ≠ (E(X))²
E(log X) ≠ log(E(X))
E(XY) ≠ E(X)E(Y) unless X and Y are statistically independent
I Using properties of expectations, we can now show that
Var(X) = E(X²) − µ²:

Var(X) = E[(X − µ)²] = E(X² − 2µX + µ²)
       = E(X²) − 2µE(X) + µ² = E(X²) − 2µ² + µ²
       = E(X²) − µ²
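The identity also holds exactly for sample moments, which makes it easy to verify numerically. A sketch (the exponential distribution is an arbitrary choice of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.exponential(scale=2.0, size=1_000_000)

mu = X.mean()
lhs = np.mean((X - mu) ** 2)       # E[(X - mu)^2]
rhs = np.mean(X ** 2) - mu ** 2    # E(X^2) - mu^2
print(lhs, rhs)                    # agree up to floating-point rounding
```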

Properties of the Variance
Textbook reference B-3

1. For any constant c, Var (c) = 0.


2. For any constants a and b,

Var (aX + b) = a2 Var (X )

3. There is a third property, concerning the variance of linear
combinations of random variables, that is very important; we will
see it after we introduce the covariance.

Question - https://flux.qa - Code: UIGIRZ

We have put one quarter of our savings in a term deposit (a risk free
investment) with annual return of 2% and invested the other three
quarters in an investment fund with expected annual return of 3% and
variance of 4.
The variance of the annual return of our portfolio is

A. (3/4)² × 4 = 2.25
B. (1/4) × 0 + (3/4) × 4 = 3
C. (1/4) × 2 + (3/4) × 4 = 3.5
D. (1/4)² × 2 + (3/4)² × 3 = 1.8125
E. (1/4)² × 2 + (3/4)² × 4 = 2.375

Properties of the Conditional Expectation
I Conditional expectation of Y given X is generally a function of X .
I Property 1: Conditional on X , any function of X is no longer a
random variable and can be treated as a known constant, and then
the usual properties of expectations apply. For example, if X , Y and
Z are random variables and a, b and c are constants, then

E (XY | X ) = XE (Y | X )

or

E(a + bX + cXY + X²Z | X) = a + bX + cX E(Y | X) + X² E(Z | X)

I Property 2: If E(Y | X) = c where c is a constant that does not
depend on X, then E(Y) = c. This is intuitive: if no matter what
X happens to be, we always expect Y to be c, then the expected
value of Y must be c regardless of X, i.e. the unconditional
expectation of Y must be c.

Important features of joint probability distribution of two
random variables: Measures of Association
Textbook reference B-4

I Statistical dependence tells us that knowing the outcome of one
variable is informative about the probability distribution of another.
I To analyse the nature of dependence, we can look at the joint
probability distribution of random variables
I This is too complicated when random variables have many possible
outcomes (e.g. per capita income and life span, or returns on
Telstra and BHP stocks)
I We simplify the question to: when X is above its mean, is Y more
likely to be below or above its mean?
I This corresponds to the popular notion of X and Y being
“positively or negatively correlated”

Covariance

I Question: “when X is above its mean, is Y more likely to be below
or above its mean?”
I We can answer this by looking at the sign of the covariance between
X and Y defined as:

Cov (X , Y ) = E ((X − µX )(Y − µY )) = E (XY ) − µX µY

I If X and Y are independent Cov (X , Y ) = 0.


I For any constants a1 , b1 , a2 and b2

Cov (a1 X + b1 , a2 Y + b2 ) = a1 a2 Cov (X , Y )
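A simulation sketch (our construction, assuming numpy) in which Y is built to co-move positively with X, illustrating both the sign of the covariance and the scaling rule:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
X = rng.normal(size=n)
Y = 0.5 * X + rng.normal(size=n)   # built to co-move positively with X

cov = np.cov(X, Y)[0, 1]           # sample covariance, ≈ 0.5 here
print(cov)

# Cov(a1*X + b1, a2*Y + b2) = a1*a2*Cov(X, Y): shifts drop out, scales multiply
a1, b1, a2, b2 = 2.0, 7.0, -3.0, 1.0
print(np.cov(a1 * X + b1, a2 * Y + b2)[0, 1], a1 * a2 * cov)  # both ≈ -3.0
```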

Correlation
I Only the sign of covariance is informative. Its magnitude changes
when we scale variables.

Cov(aX, bY) = ab Cov(X, Y)

I A better, unit-free measure of association is correlation, which is
defined as:

Corr(X, Y) = Cov(X, Y) / (sd(X) sd(Y))
I Correlation is always between -1 and +1, and its magnitude, as well
as its sign, is meaningful.
I Correlation does not change if we change the units of measurement:
for a_1 > 0 and a_2 > 0,

Corr(a_1 X + b_1, a_2 Y + b_2) = Corr(X, Y)
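A sketch of the unit-free property (the heights-in-inches example is ours): converting x from inches to centimetres rescales the covariance by 2.54 but leaves the correlation untouched.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
x_inches = rng.normal(70, 3, n)                  # e.g. heights in inches
y = 100 + 2 * x_inches + rng.normal(0, 10, n)    # a related variable

x_cm = 2.54 * x_inches                           # change of units
print(np.cov(x_inches, y)[0, 1], np.cov(x_cm, y)[0, 1])            # covariances differ
print(np.corrcoef(x_inches, y)[0, 1], np.corrcoef(x_cm, y)[0, 1])  # correlations equal
```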

Variance of sums of random variables: Diversification
Textbook reference B4

I One of the important principles of risk management is “Don’t put
all your eggs in one basket.”
I The scientific basis of this is that:

Var(aX + bY) = a²Var(X) + b²Var(Y) + 2ab Cov(X, Y)

I Example: You have the choice of buying shares of company A with
mean return of 10 percent and standard deviation of 5 percent, or
shares of company B with mean return of 10 percent and standard
deviation of 10 percent. Which would you prefer?
I Obviously A is less risky, and you prefer A to B.

Variance of sums of random variables: Diversification

I Now consider a portfolio investing 0.8 of your capital in company
A and the rest in B, where, as before, A has mean return of 10
percent and standard deviation of 5 percent, and B has mean return
of 10 percent and standard deviation of 10 percent. What are the
return and the risk of this position, with the added assumption that
the returns to A and B are independent?

I Denoting the portfolio return by Z , we have

Z = 0.8A + 0.2B
E (Z ) = E (0.8A + 0.2B) =
Var (Z ) = Var (0.8A + 0.2B) =

I We can see that this portfolio has the same expected return as A,
and is safer than A.
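As a sketch, the two blanks above can be filled in with the variance rule from the previous slide (numbers from the slide; independence means Cov(A, B) = 0, so the covariance term vanishes):

```python
import numpy as np

wA, wB = 0.8, 0.2
muA, muB = 10.0, 10.0    # mean returns, in percent
sdA, sdB = 5.0, 10.0     # standard deviations, in percent

EZ = wA * muA + wB * muB                  # 10.0: same expected return as A
# Var(aX + bY) = a^2 Var(X) + b^2 Var(Y) + 2ab Cov(X, Y); Cov = 0 here
VarZ = wA**2 * sdA**2 + wB**2 * sdB**2    # 0.64*25 + 0.04*100 = 20
print(EZ, VarZ, np.sqrt(VarZ))            # sd ≈ 4.47 < 5: safer than A alone
```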

Diversification in econometrics - Averaging
I Suppose we are interested in starting salaries of BCom graduates.
This is a random variable with many possibilities and a probability
distribution.
I Let’s denote this random variable by Y . We also denote its
population mean and variance by µ and σ 2 . We are interested in
estimating µ, which is the expected wage of a BCom graduate.
I Suppose we choose one BCom graduate at random and denote
his/her starting salary by Y1 . Certainly Y1 is also a random variable
with the same possible outcomes and probabilities as Y . Therefore
E (Y1 ) = µ. So it is OK to take the value of Y1 as an estimate of µ,
and the variance of this estimator is σ 2 .
I But if we take 2 independent observations and use their average as
our estimator of µ, we have:
E((1/2)(Y_1 + Y_2)) = (1/2)(µ + µ) = µ

Var((1/2)(Y_1 + Y_2)) = (1/4)Var(Y_1) + (1/4)Var(Y_2) = (1/4)(σ² + σ²) = σ²/2
Diversification in econometrics - Averaging
I Now consider a sample of n independent observations of starting
salaries of BCom graduates: {Y_1, Y_2, . . . , Y_n}
I Y1 to Yn are i.i.d. (independent and identically distributed) with
mean µ and variance σ 2 .
I Their average is a portfolio that gives each of these n random
variables the same weight of 1/n. So
E(Ȳ) = E((1/n) ∑_{i=1}^{n} Y_i) = (1/n) ∑_{i=1}^{n} E(Y_i) = (1/n) ∑_{i=1}^{n} µ = µ

Var(Ȳ) = Var((1/n) ∑_{i=1}^{n} Y_i) = (1/n²) Var(∑_{i=1}^{n} Y_i)
       = (1/n²) ∑_{i=1}^{n} Var(Y_i) = (1/n²) ∑_{i=1}^{n} σ² = σ²/n

I The sample average has the same expected value as Y but a lot less
risk. In this way, we use the scientific concept of diversification in
econometrics to find better estimators!
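A simulation sketch of this result (the salary mean, standard deviation, sample size and normality are all our illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, n = 60_000.0, 10_000.0, 25    # hypothetical salary distribution

# Many independent samples of size n; one Y-bar per sample
ybar = rng.normal(mu, sigma, size=(100_000, n)).mean(axis=1)

print(ybar.mean())       # ≈ mu: same expected value as a single Y_i
print(ybar.var())        # ≈ sigma^2 / n = 4,000,000: much less risk
print(sigma**2 / n)
```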
Key concepts and their importance
I Business and economic variables can be thought of as random
variables whose outcomes are determined by their probability
distribution.
I A measure of central tendency of the distribution of a random
variable is its expected value.
I Important measures of dispersion of a random variable are
variance and standard deviation. These are used as measures of risk.
I Covariance and correlation are measures of linear statistical
dependence between two random variables.
I Correlation is unit free and measures the strength and
direction of association.
I Statistical dependence or association does not imply causality.
I Two random variables that have non-zero covariance or
correlation are statistically dependent, meaning that knowing
the outcome of one of the two random variables gives us useful
information about the other.
I Averaging is a form of diversification and it reduces risk.

Let’s Play
http://www.onlinestatbook.com/stat_sim/sampling_dist/

Modelling mean

E(y | x) = β_0 + β_1 x                    ŷ = Ê(y | x) = β̂_0 + β̂_1 x
(Conditional expectation function)        (Sample regression function)

Laws of Probability
I To understand what a conditional expectation is, we start with laws
of probability

1. Probability of any event is a number between 0 and 1. The
probabilities of all possible outcomes of a random variable add up to 1.
2. If A and B are mutually exclusive events then

P(A or B) = P(A) + P(B)

Example: You roll a fair die. What is the probability of the die
showing a number less than or equal to 2?
3. If A and B are two events, then

P(A | B) = P(A and B) / P(B)
Example: You roll a fair die. What is the probability that it shows 1
given that it is showing a number less than or equal to 2?
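Both die questions can be worked through with exact fractions; here is a small sketch using Python's standard library:

```python
from fractions import Fraction

p = Fraction(1, 6)            # each face of a fair die

# Law 2 (mutually exclusive events): P(X <= 2) = P(X = 1) + P(X = 2)
p_le2 = p + p                 # 1/3

# Law 3: P(X = 1 | X <= 2) = P(X = 1 and X <= 2) / P(X <= 2)
# {X = 1} is contained in {X <= 2}, so the joint probability is just P(X = 1)
p_cond = p / p_le2            # 1/2
print(p_le2, p_cond)
```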
Joint probability density function

I The ultimate goal of this unit is to study the relationship between
one variable y and many variables x_1 to x_k. But let's start with
only one x, and with the discrete case.
I Suppose y is the number of bathrooms and x is the number of
bedrooms in an apartment in Melbourne. Assume y has two
possible values, 1 and 2, and x has three possible values 1, 2 and 3.
The joint pdf gives us the probabilities of every possible outcome
of (x, y).

y ↓, x →          1      2      3      marginal f_y
1               0.40   0.24   0.04
2               0.00   0.16   0.16
marginal f_x

I The entries show the probabilities of different possible combinations
of bedrooms and bathrooms (x, y). For example, the top left cell
shows P(x = 1 & y = 1) = 0.40.
I Check the first law of probability: all probabilities are between 0 and
1 and the probabilities of all possible outcomes sum to 1
I Using the second law of probability, we can use the joint pdf to
deduce the pdf of x by itself (called the marginal density of x), and
also the marginal density of y
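As a sketch, these deductions amount to summing the joint table along its rows and columns (numpy assumed):

```python
import numpy as np

# Joint pdf from the slide: rows are y = 1, 2; columns are x = 1, 2, 3
joint = np.array([[0.40, 0.24, 0.04],
                  [0.00, 0.16, 0.16]])

print(joint.sum())         # 1.0: the probabilities of all outcomes sum to 1
f_x = joint.sum(axis=0)    # marginal density of x: [0.40, 0.40, 0.20]
f_y = joint.sum(axis=1)    # marginal density of y: [0.68, 0.32]
print(f_x, f_y)
```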

Conditional density
I Using the third law of probability, we can also deduce the
conditional distribution of number of bathrooms in an apartment
given that it has 1 bedroom.
P(y = 1 | x = 1) = P(y = 1 & x = 1) / P(x = 1) = 0.40 / 0.40 = 1.00
P(y = 2 | x = 1) =

I Similarly, we can deduce the conditional distribution of y given
x = 2

P(y = 1 | x = 2) = P(y = 1 & x = 2) / P(x = 2) =
P(y = 2 | x = 2) =

I And y given x = 3
P(y = 1 | x = 3) =
P(y = 2 | x = 3) =

Conditional expectation function
I Each of these conditional densities has an expected value

y    f_{y|x=1}
1    1.00        ⇒ E(y | x = 1) = 1 × 1.00 + 2 × 0.00 = 1.00
2    0.00

y    f_{y|x=2}
1    0.60        ⇒ E(y | x = 2) = 1 × 0.60 + 2 × 0.40 = 1.40
2    0.40

y    f_{y|x=3}
1    0.20        ⇒ E(y | x = 3) = 1 × 0.20 + 2 × 0.80 = 1.80
2    0.80

I Plot the expected values for different values of x. Do they fit on a
straight line? What is the equation of that line?
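One way to check, as a sketch: collect the three conditional means and see whether successive slopes are constant.

```python
import numpy as np

x_vals = np.array([1.0, 2.0, 3.0])
Ey_x = np.array([1.00, 1.40, 1.80])       # conditional means from the tables above

slopes = np.diff(Ey_x) / np.diff(x_vals)  # [0.4, 0.4]: constant, so a straight line
beta1 = slopes[0]
beta0 = Ey_x[0] - beta1 * x_vals[0]       # 1.0 - 0.4 = 0.6
print(beta0, beta1)                       # E(y | x) = 0.6 + 0.4 x
```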

I When y and x have many possible outcomes or when they are
continuous random variables (like birth weight and number of
cigarettes during pregnancy, or price of a house and its land size) we
cannot enumerate the joint density and perform the same exercise
I Therefore we go after the conditional expectation function directly

E (y | x) = β0 + β1 x

I For example, if y is the price of a house and x is its area, the mean
of the price of houses with area x is given by β0 + β1 x
I The price of each house can be written as random variations around
this central value
y = β0 + β1 x + u
where u is a random variable with E (u | x) = 0, which implies that
E (u) = 0 also
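A simulation sketch of this decomposition (β_0 = 0.6 and β_1 = 0.4 are borrowed from the bathrooms example; drawing u independently of x makes E(u | x) = 0 hold by construction):

```python
import numpy as np

rng = np.random.default_rng(5)
beta0, beta1 = 0.6, 0.4
n = 1_000_000

x = rng.uniform(1, 3, n)
u = rng.normal(0, 0.5, n)        # independent of x, so E(u | x) = 0
y = beta0 + beta1 * x + u

# The average of y among observations with x near 2 approximates E(y | x = 2)
near2 = np.abs(x - 2) < 0.01
print(y[near2].mean(), beta0 + beta1 * 2)   # both ≈ 1.4
```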

The simple linear regression model in a picture

[Figure: conditional densities f(y | x) centred on the line β_0 + β_1 x, shown at x_1, x_2 and x_3]

The simple linear regression model in equation form

I The following equation specifies the conditional mean of y given x

y = β0 + β1 x + u with E (u | x) = 0

I It is an incomplete model because it does not specify the probability
distribution of y conditional on x.
I If we add the assumptions that Var (u | x) = σ 2 and that the
conditional distribution of u given x is normal, then we have a
complete model, which is called the “classical linear model” and was
shown in the picture on the previous slide.
I In summary: we make the assumption that in the big scheme of
things, data are generated by this model, and we want to use
observed data to learn the unknowns β0 , β1 and σ 2 in order to
predict y using x.

Summary
I We reviewed some fundamentals of probability:

I Random variables (discrete, continuous) and their probability
distribution
I Mean, variance, standard deviation
I Expectation, covariance and correlation
I Joint and conditional probability distributions

I We established that the problem of finding good estimators and the
problem of forming optimal portfolios are based on the same
principles.

I We concluded by establishing the conditional expectation function
of y given x as the vehicle for predicting y using x.

I Next week we learn how to estimate the parameters of the
conditional expectation function.