U3 Markov Chain
$E[X^2(t)] = \sum_{n=1}^{\infty} n^2\, P[X(t) = n] = \sum_{n=1}^{\infty} n^2\, \frac{(at)^{n-1}}{(1+at)^{n+1}} = \frac{1}{(1+at)^2} \sum_{n=1}^{\infty} n^2 \left(\frac{at}{1+at}\right)^{n-1}$

Using $\sum_{n=1}^{\infty} n^2 x^{n-1} = \frac{2}{(1-x)^3} - \frac{1}{(1-x)^2}$ with $x = \frac{at}{1+at}$, so that $1 - x = \frac{1}{1+at}$,

$E[X^2(t)] = \frac{1}{(1+at)^2} \left[ 2(1+at)^3 - (1+at)^2 \right] = 2(1+at) - 1 = 2at + 1$

$\mathrm{Var}[X(t)] = E[X^2(t)] - \left(E[X(t)]\right)^2 = 2at + 1 - (1)^2 = 2at$

Since the variance depends on $t$, $\{X(t)\}$ is not stationary.
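This can be cross-checked numerically by truncating the series (a minimal Python sketch; the values a = 2 and t = 3 are arbitrary illustrative choices, not from the problem):

    # Check E[X(t)] = 1 and Var[X(t)] = 2at for the distribution
    # P{X(t) = n} = (at)**(n-1) / (1+at)**(n+1), n = 1, 2, ...
    a, t = 2.0, 3.0
    at = a * t
    x = at / (1 + at)                 # common ratio of the series
    c = 1 / (1 + at) ** 2             # so P{X(t) = n} = c * x**(n-1)
    mean   = sum(n * c * x ** (n - 1) for n in range(1, 3000))
    second = sum(n * n * c * x ** (n - 1) for n in range(1, 3000))
    print(mean, second - mean ** 2, 2 * at)   # ~1.0, ~12.0, 12.0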
Let the two states be D (highly distorted) and R (recognizable).
The state space is {D, R}.
The tpm of the Markov chain, with rows and columns in the order D, R, is

$P = \begin{pmatrix} \frac{1}{15} & \frac{14}{15} \\ \frac{3}{23} & \frac{20}{23} \end{pmatrix}$

Here $a = \frac{14}{15}$ and $b = \frac{3}{23}$, so $a + b = \frac{14}{15} + \frac{3}{23} = \frac{367}{345}$.

Steady-state distribution: $(\pi_D,\ \pi_R) = \left(\frac{b}{a+b},\ \frac{a}{a+b}\right) = \left(\frac{3/23}{367/345},\ \frac{14/15}{367/345}\right) = \left(\frac{45}{367},\ \frac{322}{367}\right)$

The fraction of signals that are highly distorted is $\frac{45}{367}$.
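As an illustrative cross-check (a minimal numpy sketch, assuming the state order (D, R) above), the closed form (b/(a+b), a/(a+b)) agrees with the left eigenvector of the tpm for eigenvalue 1:

    import numpy as np

    # Two-state chain for the signal problem, states ordered (D, R).
    a, b = 14/15, 3/23
    P = np.array([[1 - a, a],
                  [b, 1 - b]])

    # Closed form for a two-state chain.
    pi_closed = np.array([b, a]) / (a + b)

    # Cross-check: left eigenvector of P for eigenvalue 1, normalised.
    w, v = np.linalg.eig(P.T)
    pi_eig = np.real(v[:, np.argmax(np.real(w))])
    pi_eig = pi_eig / pi_eig.sum()

    print(pi_closed, pi_eig, 45/367)   # all give pi_D = 45/367 ~ 0.1226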
Problem 3.7:
An observer at a lake notices that when fish are caught, only 1
out of 9 trout is caught after another trout, with no other fish
between, whereas 10 out of 11 non-trout are caught following
non-trout, with no trout between. Assuming that all fish are
equally likely to be caught, what fraction of fish in the lake is
trout? (N/D 2012)
Solution:
Let T denote trout and N denote non-trout.
The state space is {T, N}.
The tpm of the Markov chain, with rows and columns in the order T, N, is

$P = \begin{pmatrix} \frac{1}{9} & \frac{8}{9} \\ \frac{1}{11} & \frac{10}{11} \end{pmatrix}$

Here $a = \frac{8}{9}$ and $b = \frac{1}{11}$, so $a + b = \frac{8}{9} + \frac{1}{11} = \frac{97}{99}$.

Steady-state distribution: $(\pi_T,\ \pi_N) = \left(\frac{b}{a+b},\ \frac{a}{a+b}\right) = \left(\frac{1/11}{97/99},\ \frac{8/9}{97/99}\right) = \left(\frac{9}{97},\ \frac{88}{97}\right)$

The fraction of trout in the lake is $\frac{9}{97}$.
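The same eigenvector check applies to this chain (again an illustrative numpy sketch, not part of the exam answer):

    import numpy as np

    # Trout chain, states ordered (T, N); rows from the problem statement.
    P = np.array([[1/9, 8/9],
                  [1/11, 10/11]])
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi = pi / pi.sum()
    print(pi, 9/97)   # pi_T = 9/97 ~ 0.0928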
Problem 3.8:
A man either drives a car or catches a train to go to office each
day. He never goes two days in a row by train but if he drives
one day, then the next day he is just as likely to drive again as he
is to travel by train. Now suppose that on the first day of the
week, the man tossed a fair die and drove to work if and only if a
6 appeared. Find (1) the probability that he takes a train on the
third day and (2) the probability that he drives to work in the
long run. (N/D 2011),(N/D 2015)
Solution:
The tpm of the Markov chain, with rows and columns in the order (C = car, T = train), is

$P = \begin{pmatrix} \frac{1}{2} & \frac{1}{2} \\ 1 & 0 \end{pmatrix}$

(1)
On the first day the man drives if and only if the die shows 6, so

$p^{(1)} = \left(\tfrac{1}{6},\ \tfrac{5}{6}\right)$

$p^{(2)} = p^{(1)}P = \left(\tfrac{1}{6},\ \tfrac{5}{6}\right)\begin{pmatrix} \frac{1}{2} & \frac{1}{2} \\ 1 & 0 \end{pmatrix} = \left(\tfrac{11}{12},\ \tfrac{1}{12}\right)$

$p^{(3)} = p^{(2)}P = \left(\tfrac{11}{12},\ \tfrac{1}{12}\right)\begin{pmatrix} \frac{1}{2} & \frac{1}{2} \\ 1 & 0 \end{pmatrix} = \left(\tfrac{13}{24},\ \tfrac{11}{24}\right)$

P(the man takes a train on the third day) = $\frac{11}{24}$.
(2)
For the long run we need the steady-state distribution of the same tpm. Here $a = \frac{1}{2}$ and $b = 1$, so $a + b = \frac{3}{2}$.

Steady-state distribution: $(\pi_C,\ \pi_T) = \left(\frac{b}{a+b},\ \frac{a}{a+b}\right) = \left(\frac{1}{3/2},\ \frac{1/2}{3/2}\right) = \left(\frac{2}{3},\ \frac{1}{3}\right)$

P(the man drives the car to work in the long run) = $\frac{2}{3}$.
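Both parts can be verified mechanically by powering the tpm (an illustrative numpy sketch, with states ordered (car, train) as above):

    import numpy as np

    # Car/train chain, states ordered (C, T).
    P = np.array([[0.5, 0.5],
                  [1.0, 0.0]])
    p1 = np.array([1/6, 5/6])                 # day 1: drives iff the die shows 6

    p3 = p1 @ np.linalg.matrix_power(P, 2)    # distribution on day 3
    print(p3[1], 11/24)                       # P(train on day 3) = 11/24

    pin = p1 @ np.linalg.matrix_power(P, 50)  # long-run distribution
    print(pin[0], 2/3)                        # P(drives) -> 2/3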
Problem 3.9:
A salesman's territory consists of three cities A, B and C. He never
sells in the same city on successive days. If he sells in city-A,
then the next day he sells in city-B. However if he sells in either
city-B or city-C, the next day he is twice as likely to sell in city-
A as in the other city. In the long run how often does he sell in
each of the cities? (M/J 2012),(N/D 2013)
Solution:
The tpm of the Markov chain, with rows and columns in the order A, B, C, is

$P = \begin{pmatrix} 0 & 1 & 0 \\ \frac{2}{3} & 0 & \frac{1}{3} \\ \frac{2}{3} & \frac{1}{3} & 0 \end{pmatrix}$

The steady-state distribution $(\pi_1, \pi_2, \pi_3)$ satisfies

$(\pi_1\ \pi_2\ \pi_3)\,P = (\pi_1\ \pi_2\ \pi_3)$ with $\pi_1 + \pi_2 + \pi_3 = 1$ ... (1)

Written out column by column:

$\frac{2}{3}\pi_2 + \frac{2}{3}\pi_3 = \pi_1$ ... (2)
$\pi_1 + \frac{1}{3}\pi_3 = \pi_2$ ... (3)
$\frac{1}{3}\pi_2 = \pi_3$ ... (4)

Substituting (4) into (2): $\pi_1 = \frac{2}{3}\pi_2 + \frac{2}{9}\pi_2 = \frac{8}{9}\pi_2$ ... (5)

Using (5) and (4) in (1): $\frac{8}{9}\pi_2 + \pi_2 + \frac{1}{3}\pi_2 = 1$, i.e. $\frac{20}{9}\pi_2 = 1$, so $\pi_2 = \frac{9}{20}$.

Then from (5), $\pi_1 = \frac{8}{9}\cdot\frac{9}{20} = \frac{8}{20}$, and from (4), $\pi_3 = \frac{1}{3}\cdot\frac{9}{20} = \frac{3}{20}$.

The percentage values are $\frac{8}{20} \times 100 = 40\%$, $\frac{9}{20} \times 100 = 45\%$ and $\frac{3}{20} \times 100 = 15\%$.

Hence in the long run he sells 40% of the time in city A, 45% of the time in city B and 15% of the time in city C.
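As a cross-check, the same steady-state vector falls out of a direct linear solve (an illustrative numpy sketch, states ordered A, B, C):

    import numpy as np

    # Salesman chain, states ordered (A, B, C).
    P = np.array([[0,   1,   0  ],
                  [2/3, 0,   1/3],
                  [2/3, 1/3, 0  ]])

    # Solve pi (P - I) = 0 together with sum(pi) = 1 by stacking the
    # normalisation row onto the transposed system.
    A = np.vstack([P.T - np.eye(3), np.ones(3)])
    b = np.array([0, 0, 0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(pi)   # [0.40 0.45 0.15]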
Problem 3.10:
Find the limiting-state probabilities associated with the following transition probability matrix:

$P = \begin{pmatrix} 0.4 & 0.5 & 0.1 \\ 0.3 & 0.3 & 0.4 \\ 0.3 & 0.2 & 0.5 \end{pmatrix}$ (A/M 2011)
Solution:
Let the limiting distribution be $(\pi_1, \pi_2, \pi_3)$, so that

$(\pi_1\ \pi_2\ \pi_3) \begin{pmatrix} 0.4 & 0.5 & 0.1 \\ 0.3 & 0.3 & 0.4 \\ 0.3 & 0.2 & 0.5 \end{pmatrix} = (\pi_1\ \pi_2\ \pi_3)$ ... (1)

with $\pi_1 + \pi_2 + \pi_3 = 1$ ... (2). Writing (1) out column by column:

$0.4\pi_1 + 0.3\pi_2 + 0.3\pi_3 = \pi_1$ ... (3)
$0.5\pi_1 + 0.3\pi_2 + 0.2\pi_3 = \pi_2$ ... (4)
$0.1\pi_1 + 0.4\pi_2 + 0.5\pi_3 = \pi_3$ ... (5)

The above three equations (3), (4) and (5) can be rewritten as

$0.6\pi_1 - 0.3\pi_2 - 0.3\pi_3 = 0$ ... (6)
$0.5\pi_1 - 0.7\pi_2 + 0.2\pi_3 = 0$ ... (7)
$0.1\pi_1 + 0.4\pi_2 - 0.5\pi_3 = 0$ ... (8)

Now solve (6), (7) and (8) with the help of (2). From (6) and (2), $0.6\pi_1 = 0.3(\pi_2 + \pi_3) = 0.3(1 - \pi_1)$, so $0.9\pi_1 = 0.3$ and $\pi_1 = \frac{1}{3}$. Substituting $\pi_3 = \frac{2}{3} - \pi_2$ into (7) gives $\frac{1}{6} + \frac{2}{15} - 0.9\pi_2 = 0$, so $\pi_2 = \frac{1}{3}$ and hence $\pi_3 = \frac{1}{3}$.

The limiting-state probabilities are $(\pi_1, \pi_2, \pi_3) = \left(\frac{1}{3}, \frac{1}{3}, \frac{1}{3}\right)$; this is expected, since the columns of $P$ also sum to 1 (the matrix is doubly stochastic).
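A quick way to see this limit numerically is to power the matrix until its rows agree (an illustrative numpy sketch; swapping in the matrices of Problems 3.11 and 3.12 below answers those the same way):

    import numpy as np

    P = np.array([[0.4, 0.5, 0.1],
                  [0.3, 0.3, 0.4],
                  [0.3, 0.2, 0.5]])

    # For a regular chain, every row of P**n tends to the limiting distribution.
    Pn = np.linalg.matrix_power(P, 100)
    print(Pn[0])   # [1/3 1/3 1/3] -- doubly stochastic, so the limit is uniform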
Problem 3.11:
Consider a Markov chain with transition probability matrix

$P = \begin{pmatrix} 0.5 & 0.4 & 0.1 \\ 0.3 & 0.4 & 0.3 \\ 0.2 & 0.3 & 0.5 \end{pmatrix}$

Find the limiting probabilities of the system. (M/J 2014)
Solution:
(Similar to previous problem)
Problem 3.12:
Consider a Markov chain with 3 states and transition probability matrix

$P = \begin{pmatrix} 0.6 & 0.2 & 0.2 \\ 0.1 & 0.8 & 0.1 \\ 0.6 & 0 & 0.4 \end{pmatrix}$

Find the stationary probabilities of the chain. (N/D 2017)
Solution:
(Similar to Problem 3.10)
Problem 3.13:
The transition probability matrix of a Markov chain $\{X_n,\ n = 0, 1, 2, \ldots\}$, having three states 1, 2 and 3, is

$P = \begin{pmatrix} 0.1 & 0.5 & 0.4 \\ 0.6 & 0.2 & 0.2 \\ 0.3 & 0.4 & 0.3 \end{pmatrix}$

and the initial distribution is $p^{(0)} = (0.7,\ 0.2,\ 0.1)$. Find (1) $P[X_2 = 3]$ and (2) $P[X_3 = 2, X_2 = 3, X_1 = 3, X_0 = 2]$. (M/J 2012),(N/D 2013),(M/J 2014)
Solution:
Given the transition probability matrix with three states 1, 2 and 3,

$P = \begin{pmatrix} 0.1 & 0.5 & 0.4 \\ 0.6 & 0.2 & 0.2 \\ 0.3 & 0.4 & 0.3 \end{pmatrix}$ and $p^{(0)} = (0.7,\ 0.2,\ 0.1)$.

(1)
$P[X_2 = 3] = \sum_{i=1}^{3} P[X_2 = 3 \mid X_0 = i]\, P[X_0 = i] = \sum_{i=1}^{3} (P^2)_{i3}\, P[X_0 = i]$

Since $(P^2)_{13} = 0.26$, $(P^2)_{23} = 0.34$ and $(P^2)_{33} = 0.29$,

$P[X_2 = 3] = (0.26)(0.7) + (0.34)(0.2) + (0.29)(0.1) = 0.279$

(2)
$P[X_3 = 2, X_2 = 3, X_1 = 3, X_0 = 2]$
$= P[X_3 = 2 \mid X_2 = 3, X_1 = 3, X_0 = 2]\, P[X_2 = 3, X_1 = 3, X_0 = 2]$
$= p_{32}\, P[X_2 = 3, X_1 = 3, X_0 = 2]$ (by the Markov property)
$= p_{32}\, P[X_2 = 3 \mid X_1 = 3, X_0 = 2]\, P[X_1 = 3, X_0 = 2]$
$= p_{32}\, p_{33}\, P[X_1 = 3, X_0 = 2]$
$= p_{32}\, p_{33}\, P[X_1 = 3 \mid X_0 = 2]\, P[X_0 = 2]$
$= p_{32}\, p_{33}\, p_{23}\, P[X_0 = 2] = (0.4)(0.3)(0.2)(0.2) = 0.0048$

where the one-step probabilities $p_{32}, p_{33}, p_{23}$ are read off $P$.
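Both answers are easy to confirm numerically (an illustrative numpy sketch; note the code uses 0-based indices for the states labelled 1, 2, 3):

    import numpy as np

    P = np.array([[0.1, 0.5, 0.4],
                  [0.6, 0.2, 0.2],
                  [0.3, 0.4, 0.3]])
    p0 = np.array([0.7, 0.2, 0.1])

    # (1) P[X2 = 3]: third component of p0 @ P**2.
    print((p0 @ np.linalg.matrix_power(P, 2))[2])    # 0.279

    # (2) P[X3=2, X2=3, X1=3, X0=2] = P[X0=2] * p23 * p33 * p32 (1-based labels).
    print(p0[1] * P[1, 2] * P[2, 2] * P[2, 1])       # 0.2*0.2*0.3*0.4 = 0.0048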
Problem 3.14:
The following is the transition probability matrix of a Markov chain with state space 0, 1, 2, 3, 4. Specify the classes, and determine which classes are transient and which are recurrent. Give reasons.

$P = \begin{pmatrix} \frac{2}{5} & \frac{3}{5} & 0 & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 \\ \frac{1}{4} & \frac{3}{4} & 0 & 0 & 0 \\ \frac{1}{3} & \frac{2}{3} & 0 & 0 & 0 \end{pmatrix}$ (N/D 2010)
Solution:
Given the tpm for the states {0, 1, 2, 3, 4} above: states 0 and 1 communicate directly, state 1 leads to state 2 ($p_{12} = \frac{1}{3}$), and state 2 leads back to states 0 and 1, so {0, 1, 2} is a communicating class. It is closed (no transition leaves it), hence recurrent. States 3 and 4 are never entered from any other state (columns 3 and 4 of $P$ are zero), so once left they are never revisited: {3} and {4} are transient classes.
Problem 3.16:
The following is the transition probability matrix of a Markov chain with state space 1, 2, 3, 4, 5. Specify the classes, and determine which classes are transient and which are recurrent.

$P = \begin{pmatrix} 0 & 0 & 0 & 0 & 1 \\ 0 & \frac{1}{3} & 0 & \frac{2}{3} & 0 \\ 0 & 0 & \frac{1}{2} & 0 & \frac{1}{2} \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & \frac{2}{5} & 0 & \frac{3}{5} \end{pmatrix}$ (N/D 2012)
Solution:
(Similar to previous problem)
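Since the worked classification is deferred, the following illustrative sketch (plain Python with numpy; the helper function classes is an assumption of this note, not from the original solution) finds the communicating classes by reachability and tags each one as recurrent or transient. The same helper classifies the chains of Problems 3.14 and 3.17:

    import numpy as np

    def classes(P):
        # Communicating classes of a finite chain, tagged recurrent/transient.
        n = len(P)
        reach = np.eye(n, dtype=bool) | (np.array(P) > 0)   # stay or one step
        for k in range(n):                                  # transitive closure
            for i in range(n):
                if reach[i][k]:
                    reach[i] |= reach[k]
        seen, out = set(), []
        for i in range(n):
            if i in seen:
                continue
            cls = {j for j in range(n) if reach[i][j] and reach[j][i]}
            seen |= cls
            # A finite class is recurrent iff it is closed (nothing escapes it).
            closed = all((not reach[i][j]) or reach[j][i] for j in range(n))
            out.append((sorted(cls), "recurrent" if closed else "transient"))
        return out

    P = [[0, 0,   0,   0,   1  ],
         [0, 1/3, 0,   2/3, 0  ],
         [0, 0,   1/2, 0,   1/2],
         [0, 0,   0,   1,   0  ],
         [0, 0,   2/5, 0,   3/5]]
    # Code states 0..4 correspond to the problem's labels 1..5.
    print(classes(P))
    # [([0], 'transient'), ([1], 'transient'), ([2, 4], 'recurrent'), ([3], 'recurrent')]
    # i.e. classes {1} and {2} are transient; {3, 5} and {4} are recurrent.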
Problem 3.17:
Let the Markov chain consisting of the states 0, 1, 2, 3 have the transition probability matrix

$P = \begin{pmatrix} 0 & 0 & \frac{1}{2} & \frac{1}{2} \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}$

Determine which states are transient and which are recurrent by defining transient and recurrent states. (A/M 2010)
Solution:
(Similar to previous problem)
Problem 3.18:
A gambler has Rs.2. He bets Rs.1 at a time and wins Rs.1 with
probability 1 / 2 . He stops playing if he loses Rs.2 or wins Rs.4.
(1) What is the tpm of the related Markov chain? (2) What is the
probability that he has lost his money at the end of 5 plays?
(M/J 2013)
Solution:
Let X n represent the amount with the player at the end of the
n th round of the play.
The state space is {0, 1, 2, 3, 4, 5, 6}.
The player stops playing if he loses Rs.2 or wins Rs.4.
We observe that if the player wins Rs.4, then he will have Rs. 6.
(1) The tpm of the Markov chain, with rows and columns indexed by the states 0, 1, ..., 6 (states 0 and 6 are absorbing), is

$P = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ \frac{1}{2} & 0 & \frac{1}{2} & 0 & 0 & 0 & 0 \\ 0 & \frac{1}{2} & 0 & \frac{1}{2} & 0 & 0 & 0 \\ 0 & 0 & \frac{1}{2} & 0 & \frac{1}{2} & 0 & 0 \\ 0 & 0 & 0 & \frac{1}{2} & 0 & \frac{1}{2} & 0 \\ 0 & 0 & 0 & 0 & \frac{1}{2} & 0 & \frac{1}{2} \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}$

(2) Since the player starts with Rs.2,

$p^{(0)} = (0,\ 0,\ 1,\ 0,\ 0,\ 0,\ 0)$

$p^{(1)} = p^{(0)}P = \left(0,\ \tfrac{1}{2},\ 0,\ \tfrac{1}{2},\ 0,\ 0,\ 0\right)$

$p^{(2)} = p^{(1)}P = \left(\tfrac{1}{4},\ 0,\ \tfrac{1}{2},\ 0,\ \tfrac{1}{4},\ 0,\ 0\right)$
Similarly,

$p^{(3)} = p^{(2)}P = \left(\tfrac{1}{4},\ \tfrac{1}{4},\ 0,\ \tfrac{3}{8},\ 0,\ \tfrac{1}{8},\ 0\right)$

$p^{(4)} = p^{(3)}P = \left(\tfrac{3}{8},\ 0,\ \tfrac{5}{16},\ 0,\ \tfrac{1}{4},\ 0,\ \tfrac{1}{16}\right)$

$p^{(5)} = p^{(4)}P = \left(\tfrac{3}{8},\ \tfrac{5}{32},\ 0,\ \tfrac{9}{32},\ 0,\ \tfrac{1}{8},\ \tfrac{1}{16}\right)$

Hence $P[X_5 = 0] = \frac{3}{8}$, i.e. the probability that he has lost his money at the end of 5 plays is $\frac{3}{8}$.
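The five-step distribution can be reproduced with one matrix power (an illustrative numpy sketch, states 0 to 6 as above):

    import numpy as np

    # Gambler's chain: states 0..6, with 0 and 6 absorbing.
    P = np.zeros((7, 7))
    P[0, 0] = P[6, 6] = 1.0
    for i in range(1, 6):
        P[i, i - 1] = P[i, i + 1] = 0.5

    p0 = np.zeros(7)
    p0[2] = 1.0                              # starts with Rs.2
    p5 = p0 @ np.linalg.matrix_power(P, 5)
    print(p5[0], 3/8)                        # P(ruined by play 5) = 3/8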