U3 Markov Chain

$E[X^{2}(t)] = \dfrac{1}{(1+at)^{2}}\left[\dfrac{2}{\left(1-\frac{at}{1+at}\right)^{3}} - \dfrac{1}{\left(1-\frac{at}{1+at}\right)^{2}}\right] = \dfrac{2(1+at)^{3} - (1+at)^{2}}{(1+at)^{2}} = 2(1+at) - 1 = 2at + 1$

$\text{Var}[X(t)] = E[X^{2}(t)] - \left(E[X(t)]\right)^{2} = 2at + 1 - (1)^{2} = 2at$,

which is not a constant.

Hence $\{X(t)\}$ is not stationary.

Problems on Markov Chain


Problem 3.6:
An engineer analyzing a series of digital signals generated by a
testing system observes that only 1 out of 15 highly distorted
signals follows a highly distorted signal, with no recognizable
signal between, whereas 20 out of 23 recognizable signals follow
recognizable signals, with no highly distorted signal between.
Given that only highly distorted signals are not recognizable, find
the fraction of signals that are highly distorted.
(N/D 2010),(N/D 2014)
Solution:
Let D denote a highly distorted signal and R a recognizable signal.
The state space is {D, R}.
The tpm of the Markov chain is

$P = \begin{pmatrix} 1/15 & 14/15 \\ 3/23 & 20/23 \end{pmatrix}$  (rows and columns ordered D, R).

Writing $a = P(D \to R) = \dfrac{14}{15}$ and $b = P(R \to D) = \dfrac{3}{23}$,

$a + b = \dfrac{14}{15} + \dfrac{3}{23} = \dfrac{367}{345}$

Steady state distribution: $(\pi_D, \pi_R) = \left(\dfrac{b}{a+b},\ \dfrac{a}{a+b}\right) = \left(\dfrac{3/23}{367/345},\ \dfrac{14/15}{367/345}\right) = \left(\dfrac{45}{367},\ \dfrac{322}{367}\right)$

The fraction of signals that are highly distorted is $\dfrac{45}{367}$.
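The two-state formula used here, $\pi = \left(\frac{b}{a+b}, \frac{a}{a+b}\right)$ for a tpm $\begin{pmatrix} 1-a & a \\ b & 1-b \end{pmatrix}$, is easy to verify numerically. Below is a minimal Python sketch (the helper name `two_state_steady_state` is ours, not from the text); the same function also covers Problems 3.7 and 3.8(2).

```python
from fractions import Fraction

def two_state_steady_state(a, b):
    """Steady state of a two-state chain with tpm [[1-a, a], [b, 1-b]].

    Solving pi P = pi together with pi1 + pi2 = 1 gives
    pi = (b/(a+b), a/(a+b)).
    """
    s = a + b
    return (b / s, a / s)

# Problem 3.6: a = P(D -> R) = 14/15, b = P(R -> D) = 3/23.
a, b = Fraction(14, 15), Fraction(3, 23)
pi = two_state_steady_state(a, b)
print(pi)  # (Fraction(45, 367), Fraction(322, 367))

# Check invariance pi P = pi exactly.
P = [[1 - a, a], [b, 1 - b]]
assert all(sum(pi[i] * P[i][j] for i in range(2)) == pi[j] for j in range(2))
```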

Problem 3.7:
An observer at a lake notices that when fish are caught, only 1
out of 9 trout is caught after another trout, with no other fish
between, whereas 10 out of 11 non-trout are caught following
non-trout, with no trout between. Assuming that all fish are
equally likely to be caught, what fraction of fish in the lake is
trout? (N/D 2012)
Solution:
Let T denote a trout and N a non-trout.
The state space is {T, N}.
The tpm of the Markov chain is

$P = \begin{pmatrix} 1/9 & 8/9 \\ 1/11 & 10/11 \end{pmatrix}$  (rows and columns ordered T, N).

Writing $a = P(T \to N) = \dfrac{8}{9}$ and $b = P(N \to T) = \dfrac{1}{11}$,

$a + b = \dfrac{8}{9} + \dfrac{1}{11} = \dfrac{97}{99}$

Steady state distribution: $(\pi_T, \pi_N) = \left(\dfrac{b}{a+b},\ \dfrac{a}{a+b}\right) = \left(\dfrac{1/11}{97/99},\ \dfrac{8/9}{97/99}\right) = \left(\dfrac{9}{97},\ \dfrac{88}{97}\right)$

The fraction of trout is $\dfrac{9}{97}$.
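Continuing the sketch from Problem 3.6, the same hypothetical helper confirms this answer:

```python
print(two_state_steady_state(Fraction(8, 9), Fraction(1, 11)))
# (Fraction(9, 97), Fraction(88, 97))
```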

Problem 3.8:
A man either drives a car or catches a train to go to office each
day. He never goes two days in a row by train but if he drives
one day, then the next day he is just as likely to drive again as he
is to travel by train. Now suppose that on the first day of the
week, the man tossed a fair die and drove to work if and only if a
6 appeared. Find (1) the probability that he takes a train on the
third day and (2) the probability that he drives to work in the
long run. (N/D 2011),(N/D 2015)
Solution:
Take the states in the order (car, train). Since a train day is never followed by another train day, and a car day is followed by a car day or a train day with equal probability, the tpm of the Markov chain is

$P = \begin{pmatrix} 1/2 & 1/2 \\ 1 & 0 \end{pmatrix}$

(1) The man drives on the first day only if the die shows 6, so the initial state distribution is

$p^{(1)} = \left(\dfrac{1}{6},\ \dfrac{5}{6}\right)$

$p^{(2)} = p^{(1)} P = \left(\dfrac{1}{6}\cdot\dfrac{1}{2} + \dfrac{5}{6}\cdot 1,\ \dfrac{1}{6}\cdot\dfrac{1}{2}\right) = \left(\dfrac{11}{12},\ \dfrac{1}{12}\right)$

$p^{(3)} = p^{(2)} P = \left(\dfrac{11}{12}\cdot\dfrac{1}{2} + \dfrac{1}{12}\cdot 1,\ \dfrac{11}{12}\cdot\dfrac{1}{2}\right) = \left(\dfrac{13}{24},\ \dfrac{11}{24}\right)$

P(the man takes a train on the third day) = $\dfrac{11}{24}$.

(2) Writing $a = P(\text{car} \to \text{train}) = \dfrac{1}{2}$ and $b = P(\text{train} \to \text{car}) = 1$,

$a + b = \dfrac{1}{2} + 1 = \dfrac{3}{2}$

Steady state distribution: $\left(\dfrac{b}{a+b},\ \dfrac{a}{a+b}\right) = \left(\dfrac{1}{3/2},\ \dfrac{1/2}{3/2}\right) = \left(\dfrac{2}{3},\ \dfrac{1}{3}\right)$

P(the man drives (car) to work in the long run) = $\dfrac{2}{3}$.
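Both parts can be checked by iterating the distribution; a minimal sketch assuming numpy is available, using the (car, train) state order from the solution above:

```python
import numpy as np

P = np.array([[0.5, 0.5],    # car   -> (car, train)
              [1.0, 0.0]])   # train -> (car, train)

p = np.array([1/6, 5/6])     # day 1: drives iff the die shows 6
for _ in range(2):           # advance to day 3
    p = p @ P
print(p[1])                  # 11/24 ~ 0.4583, P(train on day 3)

pi = p.copy()
for _ in range(100):         # long-run behaviour by repeated multiplication
    pi = pi @ P
print(pi[0])                 # ~ 2/3, long-run probability of driving
```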

Problem 3.9:
A salesman's territory consists of three cities A, B and C. He never
sells in the same city on successive days. If he sells in city A,
then the next day he sells in city B. However, if he sells in either
city B or city C, the next day he is twice as likely to sell in
city A as in the other city. In the long run, how often does he sell
in each of the cities? (M/J 2012),(N/D 2013)
Solution:
The tpm of the Markov chain, with states ordered (A, B, C), is

$P = \begin{pmatrix} 0 & 1 & 0 \\ 2/3 & 0 & 1/3 \\ 2/3 & 1/3 & 0 \end{pmatrix}$

Let $\pi = (\pi_1\ \pi_2\ \pi_3)$ be the steady state distribution of the chain.
By the property, we have

$\pi P = \pi$ ... (1)
$\pi_1 + \pi_2 + \pi_3 = 1$ ... (2)

Writing (1) out column by column:

$\dfrac{2}{3}\pi_2 + \dfrac{2}{3}\pi_3 = \pi_1$ ... (3)
$\pi_1 + \dfrac{1}{3}\pi_3 = \pi_2$ ... (4)
$\dfrac{1}{3}\pi_2 = \pi_3$ ... (5)

Substituting (5) in (3), we have

$\dfrac{2}{3}\pi_2 + \dfrac{2}{9}\pi_2 = \pi_1 \Rightarrow \pi_1 = \dfrac{8}{9}\pi_2$ ... (6)

Substituting (6) and (5) in (2), we have

$\dfrac{8}{9}\pi_2 + \pi_2 + \dfrac{1}{3}\pi_2 = 1 \Rightarrow \dfrac{20}{9}\pi_2 = 1 \Rightarrow \pi_2 = \dfrac{9}{20}$

(6) $\Rightarrow \pi_1 = \dfrac{8}{9}\cdot\dfrac{9}{20} = \dfrac{8}{20}$
(5) $\Rightarrow \pi_3 = \dfrac{1}{3}\cdot\dfrac{9}{20} = \dfrac{3}{20}$

The steady-state distribution of the chain is

$\pi = \left(\dfrac{8}{20}\ \ \dfrac{9}{20}\ \ \dfrac{3}{20}\right)$.

As percentages: $\dfrac{8}{20}\times 100 = 40\%$, $\dfrac{9}{20}\times 100 = 45\%$, $\dfrac{3}{20}\times 100 = 15\%$.

Hence in the long run he sells 40% of the time in city A, 45% of the time in city B and 15% of the time in city C.
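For three or more states the steady state is just the solution of the linear system $\pi P = \pi$, $\sum_i \pi_i = 1$. A minimal sketch (the function name `steady_state` is ours, not from the text); it reproduces the salesman's distribution and is reused for Problems 3.10 to 3.12:

```python
import numpy as np

def steady_state(P):
    """Solve pi P = pi with sum(pi) = 1 for a row-stochastic matrix P."""
    n = P.shape[0]
    # Stack the stationarity equations (P^T - I) pi = 0 with the
    # normalisation row of ones, then solve in the least-squares sense.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Problem 3.9: salesman chain, states ordered (A, B, C).
P = np.array([[0,   1,   0  ],
              [2/3, 0,   1/3],
              [2/3, 1/3, 0  ]])
print(steady_state(P))  # [0.40 0.45 0.15]
```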

Problem 3.10:
Find the limiting-state probabilities associated with the following transition probability matrix:

$P = \begin{pmatrix} 0.4 & 0.5 & 0.1 \\ 0.3 & 0.3 & 0.4 \\ 0.3 & 0.2 & 0.5 \end{pmatrix}$. (A/M 2011)

Solution:

Let $P = \begin{pmatrix} 0.4 & 0.5 & 0.1 \\ 0.3 & 0.3 & 0.4 \\ 0.3 & 0.2 & 0.5 \end{pmatrix}$.

Let $\pi = (\pi_1\ \pi_2\ \pi_3)$ be the steady state distribution of the chain.
By the property, we have

$\pi P = \pi$ ... (1)
$\pi_1 + \pi_2 + \pi_3 = 1$ ... (2)

Writing (1) out column by column:

$0.4\pi_1 + 0.3\pi_2 + 0.3\pi_3 = \pi_1$ ... (3)
$0.5\pi_1 + 0.3\pi_2 + 0.2\pi_3 = \pi_2$ ... (4)
$0.1\pi_1 + 0.4\pi_2 + 0.5\pi_3 = \pi_3$ ... (5)

Equations (3), (4) and (5) can be rewritten as

$0.6\pi_1 - 0.3\pi_2 - 0.3\pi_3 = 0$ ... (6)
$0.5\pi_1 - 0.7\pi_2 + 0.2\pi_3 = 0$ ... (7)
$0.1\pi_1 + 0.4\pi_2 - 0.5\pi_3 = 0$ ... (8)

Now solve (6) and (7) with the help of (2); equation (8) is then automatic.

(6) $- \ 0.6 \times$ (2): $\ -0.9\pi_2 - 0.9\pi_3 = -0.6$, i.e. $0.3\pi_2 + 0.3\pi_3 = 0.2$ ... (9)

(7) $- \ 0.5 \times$ (2): $\ -1.2\pi_2 - 0.3\pi_3 = -0.5$, i.e. $1.2\pi_2 + 0.3\pi_3 = 0.5$ ... (10)

Subtracting (9) from (10): $0.9\pi_2 = 0.3$, so $\pi_2 = \dfrac{1}{3}$; then (9) gives $\pi_3 = \dfrac{1}{3}$, and (2) gives $\pi_1 = \dfrac{1}{3}$.

Hence $\pi = \left(\dfrac{1}{3}\ \ \dfrac{1}{3}\ \ \dfrac{1}{3}\right)$. This is as expected: every column of $P$ also sums to 1, i.e. $P$ is doubly stochastic, and a finite doubly stochastic chain always has the uniform distribution as a stationary distribution.
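The uniform answer is confirmed by the `steady_state` sketch from Problem 3.9:

```python
P = np.array([[0.4, 0.5, 0.1],
              [0.3, 0.3, 0.4],
              [0.3, 0.2, 0.5]])
print(steady_state(P))  # [0.3333 0.3333 0.3333]
```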

Problem 3.11:
Consider a Markov chain with transition probability matrix

$P = \begin{pmatrix} 0.5 & 0.4 & 0.1 \\ 0.3 & 0.4 & 0.3 \\ 0.2 & 0.3 & 0.5 \end{pmatrix}$.

Find the limiting probabilities of the system. (M/J 2014)
Solution:
(Similar to previous problem)
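Numerically, with the same `steady_state` sketch:

```python
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
print(steady_state(P))  # [21/62 23/62 18/62] ~ [0.3387 0.3710 0.2903]
```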

Problem 3.12:
Consider a Markov chain with 3 states and transition probability matrix

$P = \begin{pmatrix} 0.6 & 0.2 & 0.2 \\ 0.1 & 0.8 & 0.1 \\ 0.6 & 0 & 0.4 \end{pmatrix}$.

Find the stationary probabilities of the chain. (N/D 2017)
Solution:
(Similar to Problem 3.10)
Problem 3.13:
The transition probability matrix of a Markov chain $\{X_n,\ n = 0, 1, 2, \ldots\}$, having three states 1, 2 and 3, is

$P = \begin{pmatrix} 0.1 & 0.5 & 0.4 \\ 0.6 & 0.2 & 0.2 \\ 0.3 & 0.4 & 0.3 \end{pmatrix}$

and the initial distribution is $p^{(0)} = (0.7\ \ 0.2\ \ 0.1)$. Find (1) $P[X_2 = 3]$ and
(2) $P[X_3 = 2, X_2 = 3, X_1 = 3, X_0 = 2]$.
(M/J 2012),(N/D 2013),(M/J 2014)

Solution:
Given the transition probability matrix with three states 1, 2 and 3,

$P = \begin{pmatrix} 0.1 & 0.5 & 0.4 \\ 0.6 & 0.2 & 0.2 \\ 0.3 & 0.4 & 0.3 \end{pmatrix}$ and $p^{(0)} = (0.7\ \ 0.2\ \ 0.1)$.

$P^2 = \begin{pmatrix} 0.43 & 0.31 & 0.26 \\ 0.24 & 0.42 & 0.34 \\ 0.36 & 0.35 & 0.29 \end{pmatrix}$

(1)
$P[X_2 = 3] = \sum_{i=1}^{3} P[X_2 = 3 \mid X_0 = i]\, P[X_0 = i]$
$= p_{13}^{(2)}\, P[X_0 = 1] + p_{23}^{(2)}\, P[X_0 = 2] + p_{33}^{(2)}\, P[X_0 = 3]$
$= 0.26 \times 0.7 + 0.34 \times 0.2 + 0.29 \times 0.1$
$= 0.182 + 0.068 + 0.029 = 0.279$

(2)
$P[X_3 = 2, X_2 = 3, X_1 = 3, X_0 = 2]$
$= P[X_3 = 2 \mid X_2 = 3, X_1 = 3, X_0 = 2]\, P[X_2 = 3, X_1 = 3, X_0 = 2]$
$= p_{32}^{(1)}\, P[X_2 = 3, X_1 = 3, X_0 = 2]$   (Markov property)
$= p_{32}^{(1)}\, P[X_2 = 3 \mid X_1 = 3, X_0 = 2]\, P[X_1 = 3, X_0 = 2]$
$= p_{32}^{(1)}\, p_{33}^{(1)}\, P[X_1 = 3, X_0 = 2]$
$= p_{32}^{(1)}\, p_{33}^{(1)}\, P[X_1 = 3 \mid X_0 = 2]\, P[X_0 = 2]$
$= p_{32}^{(1)}\, p_{33}^{(1)}\, p_{23}^{(1)}\, P[X_0 = 2]$
$= 0.4 \times 0.3 \times 0.2 \times 0.2 = 0.0048$
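Both answers can be verified directly; a minimal numpy sketch:

```python
import numpy as np

P = np.array([[0.1, 0.5, 0.4],
              [0.6, 0.2, 0.2],
              [0.3, 0.4, 0.3]])
p0 = np.array([0.7, 0.2, 0.1])

# (1) P[X2 = 3] is the third component of p0 P^2.
print((p0 @ np.linalg.matrix_power(P, 2))[2])  # 0.279

# (2) Chain rule plus the Markov property; states are 1-based,
#     array indices 0-based: P[X0=2] * p23 * p33 * p32.
print(p0[1] * P[1, 2] * P[2, 2] * P[2, 1])     # 0.0048
```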

Problem 3.14:
The following is the transition probability matrix of a Markov chain with state space $\{0, 1, 2, 3, 4\}$. Specify the classes, and determine which classes are transient and which are recurrent. Give reasons.

$P = \begin{pmatrix} 2/5 & 0 & 0 & 3/5 & 0 \\ 1/3 & 1/3 & 0 & 1/3 & 0 \\ 0 & 0 & 1/2 & 0 & 1/2 \\ 1/4 & 0 & 0 & 3/4 & 0 \\ 0 & 0 & 1/3 & 0 & 2/3 \end{pmatrix}$ (N/D 2010)

Solution:
The tpm for the state space {0, 1, 2, 3, 4} is as given above.

Transition state diagram of the Markov chain: (diagram not reproduced)

From the transition state diagram, states 2 and 4 communicate with each other, because 4 can be reached from 2 and 2 can be reached from 4. Similarly, 0 and 3 communicate with each other.
{2, 4} and {0, 3} therefore form classes; both classes are closed (no transition leaves them), so the states in {2, 4} and {0, 3} are recurrent.
From 1 we can reach 0 and 3, but we cannot return to 1 from them, and the system stays at 1 with probability only 1/3.
The state 1 is transient (non-recurrent).
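The classification can also be done mechanically: compute which states reach which (a transitive closure), group mutually reachable states into classes, and mark a class recurrent exactly when it is closed, which suffices for finite chains. A minimal sketch, assuming the matrix reconstruction above:

```python
# Positive entries of the reconstructed tpm of Problem 3.14, states 0..4.
adj = [
    [1, 0, 0, 1, 0],  # 0 -> 0, 3
    [1, 1, 0, 1, 0],  # 1 -> 0, 1, 3
    [0, 0, 1, 0, 1],  # 2 -> 2, 4
    [1, 0, 0, 1, 0],  # 3 -> 0, 3
    [0, 0, 1, 0, 1],  # 4 -> 2, 4
]
n = len(adj)

# Transitive closure (Floyd-Warshall): reach[i][j] = 1 iff j is reachable from i.
reach = [row[:] for row in adj]
for k in range(n):
    for i in range(n):
        for j in range(n):
            reach[i][j] |= reach[i][k] & reach[k][j]

# i and j are in the same class iff each is reachable from the other.
classes = {frozenset(j for j in range(n) if reach[i][j] and reach[j][i])
           for i in range(n)}
for c in sorted(classes, key=min):
    closed = all(not adj[i][j] for i in c for j in range(n) if j not in c)
    print(sorted(c), "recurrent" if closed else "transient")
# [0, 3] recurrent
# [1] transient
# [2, 4] recurrent
```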
Problem 3.15:
Find the nature of the states of the Markov chain with the tpm

$P = \begin{pmatrix} 0 & 1 & 0 \\ 1/2 & 0 & 1/2 \\ 0 & 1 & 0 \end{pmatrix}$. (M/J 2013),(A/M 2018)

Solution:
(Similar to previous problem)
Problem 3.16:
The following is the transition probability matrix of a Markov chain with state space $\{1, 2, 3, 4, 5\}$. Specify the classes, and determine which classes are transient and which are recurrent.

$P = \begin{pmatrix} 0 & 0 & 0 & 0 & 1 \\ 0 & 1/3 & 0 & 2/3 & 0 \\ 0 & 0 & 1/2 & 0 & 1/2 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 2/5 & 0 & 3/5 \end{pmatrix}$. (N/D 2012)

Solution:
(Similar to previous problem)

Problem 3.17:
Let the Markov chain consisting of the states 0, 1, 2, 3 have the transition probability matrix

$P = \begin{pmatrix} 0 & 0 & 1/2 & 1/2 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}$.

Determine which states are transient and which are recurrent by defining transient and recurrent states. (A/M 2010)
Solution:
(Similar to previous problem)
Problem 3.18:
A gambler has Rs.2. He bets Rs.1 at a time and wins Rs.1 with
probability 1/2. He stops playing if he loses Rs.2 or wins Rs.4.
(1) What is the tpm of the related Markov chain? (2) What is the
probability that he has lost his money at the end of 5 plays?
(M/J 2013)

Solution:
Let $X_n$ represent the amount with the player at the end of the $n$th round of play.
The state space is {0, 1, 2, 3, 4, 5, 6}.
The player stops playing if he loses Rs.2 (i.e. reaches Rs.0) or wins Rs.4; since he starts with Rs.2, winning Rs.4 means reaching Rs.6. States 0 and 6 are therefore absorbing.

(1) TPM of the Markov chain:

$P = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1/2 & 0 & 1/2 & 0 & 0 & 0 & 0 \\ 0 & 1/2 & 0 & 1/2 & 0 & 0 & 0 \\ 0 & 0 & 1/2 & 0 & 1/2 & 0 & 0 \\ 0 & 0 & 0 & 1/2 & 0 & 1/2 & 0 \\ 0 & 0 & 0 & 0 & 1/2 & 0 & 1/2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}$

(states ordered 0, 1, ..., 6; from each interior state the amount goes down or up by Rs.1 with probability 1/2 each).

(2) Money at the end of 5 plays:
Since the player has Rs.2 initially, the initial probability distribution of $X_n$ is

$p^{(0)} = (0\ \ 0\ \ 1\ \ 0\ \ 0\ \ 0\ \ 0)$

Multiplying repeatedly by $P$:

$p^{(1)} = p^{(0)} P = \left(0\ \ \tfrac{1}{2}\ \ 0\ \ \tfrac{1}{2}\ \ 0\ \ 0\ \ 0\right)$

$p^{(2)} = p^{(1)} P = \left(\tfrac{1}{4}\ \ 0\ \ \tfrac{1}{2}\ \ 0\ \ \tfrac{1}{4}\ \ 0\ \ 0\right)$

$p^{(3)} = p^{(2)} P = \left(\tfrac{1}{4}\ \ \tfrac{1}{4}\ \ 0\ \ \tfrac{3}{8}\ \ 0\ \ \tfrac{1}{8}\ \ 0\right)$

$p^{(4)} = p^{(3)} P = \left(\tfrac{3}{8}\ \ 0\ \ \tfrac{5}{16}\ \ 0\ \ \tfrac{1}{4}\ \ 0\ \ \tfrac{1}{16}\right)$

$p^{(5)} = p^{(4)} P = \left(\tfrac{3}{8}\ \ \tfrac{5}{32}\ \ 0\ \ \tfrac{9}{32}\ \ 0\ \ \tfrac{1}{8}\ \ \tfrac{1}{16}\right)$

P(player has lost his money at the end of 5 plays) $= P[X_5 = 0] = \dfrac{3}{8}$.
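A direct check of the five-play distribution; a minimal numpy sketch:

```python
import numpy as np

# Gambler's chain: states 0..6 are the rupees held; 0 and 6 absorb.
P = np.zeros((7, 7))
P[0, 0] = P[6, 6] = 1.0
for i in range(1, 6):
    P[i, i - 1] = P[i, i + 1] = 0.5

p = np.zeros(7)
p[2] = 1.0           # he starts with Rs.2
for _ in range(5):   # five plays
    p = p @ P
print(p[0])          # 0.375 = 3/8, probability he has lost his money
```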
