
Bocconi University. 20236 – Time Series Analysis.

Assignment 3 – (theory)
Markov chains
March 2, 2020

NOTE: This assignment is intended to support your “distance learning” of the topics presented
in Lectures 5-6-7 (lecture notes are posted on BBoard).

Assignments of this kind (“on the theory”) are not evaluated, and they are not
mandatory. Again, the intention is to provide you with additional material to support your distance
learning, and to encourage you to ask me if you find anything unclear (don’t hesitate to email me;
we can arrange “virtual office hours”).

Moreover, the questions below are of the kind you may find in the written part of the final
exam, so I hope you find this useful, too.

* Questions on Lecture 5: Basic notions on Markov chains.

1. Consider a time series (Yt )t≥0 with Yt ∈ {1, 2, . . . , K}.


- When is (Yt )t≥0 a Markov chain?
- Provide an example.

2. Consider a homogeneous Markov chain (Yt )t≥0 , with a finite state-space {1, 2, . . . , K}.
- What is the initial distribution? And what is the transition matrix? What would change
for non-homogeneous Markov chains?
- Suppose that K = 3 and write the expression of P (Y2 = 1 | Y1 = 2, Y0 = 2).
- Then write the expression of

P (Y0 = 2, Y1 = 2, Y3 = 1, Y4 = 2, Y5 = 2, Y6 = 2, Y7 = 1, Y8 = 3).

3. Consider a game where a player has initial capital y0 > 0 and, at each step, gets a gain of
+1 with probability p, or −1 (a loss) with probability (1 − p). Let Yt be the random capital
owned by the player at step t. The process (Yt )t≥0 is a simple random walk.

• Is the process (Yt )t≥0 stationary?


• Is it a Markov chain? Motivate your answer.

4. Consider a random walk

Yt = Yt−1 + Zt ,   Zt i.i.d. ∼ N (µ, σ 2 ).

• Write the mean function and the variance function for the process (Yt )t≥0 . Is this process
stationary? Motivate your answer.
• Does the process (Yt )t≥0 satisfy the Markov property?
• In financial applications, the Yt may represent the logarithm of the price of a financial
asset at time t. A classical model for perfect markets assumes that the log-prices are a
random walk. Let us consider the returns Rt = Yt −Yt−1 . Is the process (Rt ) stationary?
Motivate your answer.
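To build intuition for this question, here is a small simulation sketch (not part of the assignment; it assumes numpy and purely illustrative parameter values) contrasting the nonstationary walk with its stationary returns:

```python
import numpy as np

# Illustrative parameters, chosen only for this sketch.
rng = np.random.default_rng(0)
mu, sigma, T, n_paths = 0.1, 1.0, 200, 5000

# Simulate n_paths random walks Y_t = Y_{t-1} + Z_t, with Y_0 = 0.
Z = rng.normal(mu, sigma, size=(n_paths, T))
Y = np.cumsum(Z, axis=1)

# The mean function is mu * t and the variance function is sigma^2 * t:
# the variance depends on t, so the walk is not stationary.
print(np.var(Y[:, 9]), np.var(Y[:, 199]))   # roughly 10 vs roughly 200

# The returns R_t = Y_t - Y_{t-1} = Z_t are i.i.d. N(mu, sigma^2),
# hence the return process is (strictly) stationary.
R = np.diff(Y, axis=1)
print(R.mean(), R.var())                    # roughly mu and sigma^2
```

The empirical variance grows linearly in t, matching Var(Yt) = σ²t, while the returns behave like an i.i.d. Gaussian sample.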

5. (from a past written final exam)


Consider a Markov chain (Yt )t≥0 with state space Y = {1, 2, 3}, initial value Y0 = 1 and
transition matrix
t − 1 \t 1 2 3
1 .6 .4 0
2 0 .7 .3
3 .1 .1 .8

• What is the probability that Y2 = 2?


• What is the probability that Y1 = 1, given that Y2 = 2 (and Y0 = 1)?
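As a numerical check for question 5 (not a substitute for the analytical derivation; assumes numpy): the law of Y2 comes from the matrix square, and the reverse conditional from Bayes’ rule.

```python
import numpy as np

# Transition matrix from question 5; rows index Y_{t-1}, columns Y_t.
P = np.array([[0.6, 0.4, 0.0],
              [0.0, 0.7, 0.3],
              [0.1, 0.1, 0.8]])

# Starting from Y0 = 1, the law of Y2 is the first row of P @ P.
two_step = np.linalg.matrix_power(P, 2)
p_Y2_eq_2 = two_step[0, 1]          # P(Y2 = 2 | Y0 = 1) = 0.52

# Reverse conditional by Bayes' rule:
# P(Y1 = 1 | Y2 = 2, Y0 = 1) = p_{1,1} p_{1,2} / P(Y2 = 2 | Y0 = 1).
p_rev = P[0, 0] * P[0, 1] / p_Y2_eq_2
print(p_Y2_eq_2, round(p_rev, 4))   # 0.52, 0.4615
```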

6. (from a past written exam). Suppose that (Yt )t≥0 is a Markov chain with transition matrix

t-1 \t 1 2 3
1 0.6 0.4 0
2 0.3 0.6 0.1
3 0 0.2 0.8

What is the state-space of this Markov chain?


What is the marginal probability distribution of Y2 , given the starting value Y0 = 1?

* Questions on Lecture 6: Properties of Markov chains: asymptotic behavior.


Remark: This topic may be skipped for the moment, as it is not strictly needed for the next
developments. We will return to this topic later in the course, when presenting Markov
chain Monte Carlo (MCMC) algorithms for Bayesian inference.

7. (from a past written exam). Consider a Markov chain (Yt )t≥0 with state space Y = {1, 2, 3}, initial
value Y0 = 1 and transition matrix

t − 1 \t 1 2 3
1 .6 .4 0
2 0 .7 .3
3 .1 .2 .7

• What is the probability that (Y1 = 2, Y2 = 1), conditionally on Y0 = 1?


• Is the above Markov chain irreducible? Motivate your answer.
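A quick numerical check for question 7 (assumes numpy; derive the answers by hand first): the joint probability factorizes through the transition matrix, and reachability between all states can be read off the first K powers of P.

```python
import numpy as np

# Transition matrix from question 7.
P = np.array([[0.6, 0.4, 0.0],
              [0.0, 0.7, 0.3],
              [0.1, 0.2, 0.7]])

# P(Y1 = 2, Y2 = 1 | Y0 = 1) = p_{1,2} * p_{2,1}.
joint = P[0, 1] * P[1, 0]   # 0.4 * 0.0 = 0.0

K = P.shape[0]
# The chain is irreducible iff every state is reachable from every other,
# i.e. the sum of the first K powers of P has no zero entry.
reach = sum(np.linalg.matrix_power(P, k) for k in range(1, K + 1))
print(joint, bool(np.all(reach > 0)))   # 0.0, True (1 -> 2 -> 3 -> 1 closes a cycle)
```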

8. (from a past written exam; optional question).


Let (Yt )t≥0 be a Markov chain with finite state-space Y. Discuss the asymptotic behavior of
the probability distribution P (Yn = · | Y0 = j) of Yn given the initial state Y0 = j.
What can you say about the asymptotic behavior of the marginal distribution of Yn ?
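As a companion to question 8 (a sketch, assuming numpy, and using the irreducible, aperiodic matrix of question 7 as a concrete case), the stationary distribution and the convergence of the rows of P^n can be checked numerically:

```python
import numpy as np

# The transition matrix of question 7, taken here as a concrete example.
P = np.array([[0.6, 0.4, 0.0],
              [0.0, 0.7, 0.3],
              [0.1, 0.2, 0.7]])

# Stationary distribution: left eigenvector of P for eigenvalue 1,
# normalized to sum to one.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

# For large n, every row of P^n approaches pi, so P(Yn = . | Y0 = j)
# forgets the initial state j; the marginal law of Yn converges to pi too.
Pn = np.linalg.matrix_power(P, 200)
print(pi, bool(np.allclose(Pn, pi, atol=1e-8)))
```

Solving πP = π by hand for this matrix gives π = (1/9, 4/9, 4/9), which the eigenvector computation reproduces.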

* Questions on Lecture 7: Inference for Markov chains.

9. (from a past written exam)


In a survey, a random sample of n = 100 new graduates is monitored quarterly for one
year after graduation, recording, for each individual, their job status, coded as Y = 1: full-time
job; Y = 2: temporary job; Y = 3: unemployed. The graduation day, t = 0, is the same for
all individuals in the survey.
For individual i, the data are regarded as a sample from the categorical time series (Yi,t )t≥0 ,
where Yi,t denotes their job status at time t. Across individuals, the (Yi,t ), i = 1, . . . , n, are
modeled as independent and identically distributed homogeneous Markov chains.
Suppose the observed matrices of transition counts are

t = 0 \ t = 1     1    2    3   total
1                10    0    5      15
2                 5   20    5      30
3                10   15   30      55

t = 1 \ t = 2     1    2    3   total
1                20    5    0      25
2                10   20    5      35
3                10   10   20      40

t = 2 \ t = 3     1    2    3   total
1                35    0    5      40
2                15   10   10      35
3                 0   10   15      25

t = 3 \ t = 4     1    2    3   total
1                40    5    5      50
2                10    5    5      20
3                 5   20    5      30

(a) Compute the maximum likelihood estimate (MLE) of the transition probability p3,1 =
P (Yt = 1 | Yt−1 = 3) (i.e., of a transition from “unemployed” to “full time job”).
(b) Provide the asymptotic confidence interval of level 0.95 for p3,1 (The 0.9 quantile of a
standard Gaussian distribution is 1.28, the 0.95 quantile is 1.65, the 0.975 quantile is 1.96).

(c) Let us now relax the assumption that the Markov chain is homogeneous. Consider the
transition matrix Pt ≡ [pi,j (t)], where pi,j (t) is the probability of a transition from state i at
time t − 1, to state j at time t. Compute the MLE of p3,1 (t), for t = 1, 2.
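If you want to verify your hand computations for question 9, here is a numerical sketch (assumes numpy; check each step against your own derivation):

```python
import numpy as np

# The four observed count matrices (rows: state at t-1, cols: state at t).
N = [np.array([[10, 0, 5], [5, 20, 5], [10, 15, 30]]),   # t=0 -> t=1
     np.array([[20, 5, 0], [10, 20, 5], [10, 10, 20]]),  # t=1 -> t=2
     np.array([[35, 0, 5], [15, 10, 10], [0, 10, 15]]),  # t=2 -> t=3
     np.array([[40, 5, 5], [10, 5, 5], [5, 20, 5]])]     # t=3 -> t=4

# (a) Under homogeneity, pool the counts over time:
#     p_hat_{3,1} = n_{3,1} / n_{3,.}
pooled = sum(N)
n31, n3 = pooled[2, 0], pooled[2].sum()
p_hat = n31 / n3                       # 25 / 150 = 1/6

# (b) Wald interval: p_hat +/- z_{0.975} * sqrt(p_hat (1 - p_hat) / n_{3,.})
half = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n3)
ci = (p_hat - half, p_hat + half)

# (c) Dropping homogeneity, estimate each time step separately.
p31_t1 = N[0][2, 0] / N[0][2].sum()    # 10 / 55
p31_t2 = N[1][2, 0] / N[1][2].sum()    # 10 / 40
print(round(p_hat, 4), ci, round(p31_t1, 4), p31_t2)
```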

10. (from a past written exam)


A social survey includes periodic interviews on a panel of n = 100 individuals; each individual
is asked to express his/her opinion on a political group (1 =“in favor”; 2= “negative”; 3=
“uncertain”). The interviews are taken monthly, from March (t = 0) until July (t = 4). The
observed transition counts are reported in the following table.

t − 1 \ t     1     2     3   total
1            60    20    40     120
2            20    80    20     120
3            20   120    20     160

Let us model the individual series (Yi,t ), i = 1, . . . , 100 as independent and identically dis-
tributed homogeneous Markov chains, with common transition matrix P = [pi,j ]. The statis-
tical task is to estimate P based on the data.

(a) For individual 1, the data are (1, 1, 3, 3, 2). Write the expression of the joint probability
P (Y1,1 = 1, Y1,2 = 3, Y1,3 = 3, Y1,4 = 2 | Y1,0 = 1; P).
(b) Write the expression of the likelihood of the pi,j ’s (consider the initial values yi,0 as
fixed).
Then obtain the expression of the maximum likelihood estimate of pi,j .
(c) What is the estimated probability that an individual who is in favour in May will turn
negative in June? Provide the MLE, together with the asymptotic confidence interval of
level 90% (The 0.9 quantile of a standard Gaussian distribution is 1.28, the 0.95 quantile
is 1.65, the 0.975 quantile is 1.96).
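A similar numerical check for question 10 (assumes numpy; note that part (a) asks for the expression of the joint probability, which is evaluated at the MLE here only as a sanity check):

```python
import numpy as np

# Observed transition counts (rows: opinion at t-1, cols: opinion at t).
counts = np.array([[60, 20, 40],
                   [20, 80, 20],
                   [20, 120, 20]])

# (b) MLE of the transition matrix: p_hat_{i,j} = n_{i,j} / n_{i,.}
P_hat = counts / counts.sum(axis=1, keepdims=True)

# (a) For individual 1, with data (1, 1, 3, 3, 2):
# P(... | Y1,0 = 1; P) = p_{1,1} p_{1,3} p_{3,3} p_{3,2},
# evaluated here at the MLE.
joint = P_hat[0, 0] * P_hat[0, 2] * P_hat[2, 2] * P_hat[2, 1]

# (c) "In favour in May -> negative in June" is p_{1,2} (the chain is
# homogeneous, so the month does not matter): MLE with a 90% Wald interval.
p12, n1 = P_hat[0, 1], counts[0].sum()
half = 1.65 * np.sqrt(p12 * (1 - p12) / n1)
print(round(p12, 4), (p12 - half, p12 + half))
```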
