Markov Chains - Anton, Rorres - 10.4 (Intro) (Solution of a System)

This document discusses Markov chains, which model systems that change states randomly over time, with the probabilities of transitioning between states given by a transition matrix. Two examples are provided: a car rental system with three locations, and an alumni donation model whose two states are donating and not donating. The transition matrix must be a stochastic matrix, one whose columns each sum to 1, since the system must be in one of the possible states at each time step.



(c) Use the methods in Section 5.2 and a computer to show that

$$\begin{bmatrix} 4 & -1 \\ 1 & 0 \end{bmatrix}^{n-2} = \frac{1}{2\sqrt{3}} \begin{bmatrix} (2+\sqrt{3}\,)^{n-1} - (2-\sqrt{3}\,)^{n-1} & (2-\sqrt{3}\,)^{n-2} - (2+\sqrt{3}\,)^{n-2} \\ (2+\sqrt{3}\,)^{n-2} - (2-\sqrt{3}\,)^{n-2} & (2-\sqrt{3}\,)^{n-3} - (2+\sqrt{3}\,)^{n-3} \end{bmatrix}$$

and hence

$$D_n = \frac{(2+\sqrt{3}\,)^{n+1} - (2-\sqrt{3}\,)^{n+1}}{2\sqrt{3}}$$

for n = 1, 2, 3, . . . .

(d) Using a computer, check this result for 1 ≤ n ≤ 10.

T2. In this exercise, we determine a formula for calculating A_n^{-1} from D_k for k = 0, 1, 2, 3, . . . , n, assuming that D_0 is defined to be 1.

(a) Use a computer to compute A_k^{-1} for k = 1, 2, 3, 4, and 5.

(b) From your results in part (a), discover the conjecture that A_n^{-1} = [α_ij], where α_ij = α_ji and

$$\alpha_{ij} = (-1)^{i+j}\,\frac{D_{n-j}\,D_{i-1}}{D_n} \quad \text{for } i \le j$$

(c) Use the result in part (b) to compute A_7^{-1} and compare it to the result obtained using the computer.

10.4 Markov Chains


In this section we describe a general model of a system that changes from state to state. We
then apply the model to several concrete problems.

PREREQUISITES: Linear Systems, Matrices, Intuitive Understanding of Limits

A Markov Process

Suppose a physical or mathematical system undergoes a process of change such that at
any moment it can occupy one of a finite number of states. For example, the weather
in a certain city could be in one of three possible states: sunny, cloudy, or rainy. Or
an individual could be in one of four possible emotional states: happy, sad, angry, or
apprehensive. Suppose that such a system changes with time from one state to another
and at scheduled times the state of the system is observed. If the state of the system
at any observation cannot be predicted with certainty, but the probability that a given
state occurs can be predicted by just knowing the state of the system at the preceding
observation, then the process of change is called a Markov chain or Markov process.

DEFINITION 1 If a Markov chain has k possible states, which we label as 1, 2, . . . , k, then the probability that the system is in state i at any observation after it was in state j
at the preceding observation is denoted by pij and is called the transition probability
from state j to state i . The matrix P = [pij ] is called the transition matrix of the
Markov chain.

For example, in a three-state Markov chain, the transition matrix has the form

$$P = \begin{bmatrix} p_{11} & p_{12} & p_{13} \\ p_{21} & p_{22} & p_{23} \\ p_{31} & p_{32} & p_{33} \end{bmatrix}$$

where column j corresponds to the preceding state and row i corresponds to the new state.
In this matrix, p32 is the probability that the system will change from state 2 to state 3,
p11 is the probability that the system will still be in state 1 if it was previously in state 1,
and so forth.

E X A M P L E 1 Transition Matrix of the Markov Chain


A car rental agency has three rental locations, denoted by 1, 2, and 3. A customer may
rent a car from any of the three locations and return the car to any of the three locations.
The manager finds that customers return the cars to the various locations according to
the following probabilities:
$$P = \begin{bmatrix} .8 & .3 & .2 \\ .1 & .2 & .6 \\ .1 & .5 & .2 \end{bmatrix}$$

where column j gives the probabilities for a car rented from location j, and row i gives the probabilities that the car is returned to location i.

This matrix is the transition matrix of the system considered as a Markov chain. From
this matrix, the probability is .6 that a car rented from location 3 will be returned to
location 2, the probability is .8 that a car rented from location 1 will be returned to
location 1, and so forth.

E X A M P L E 2 Transition Matrix of the Markov Chain


By reviewing its donation records, the alumni office of a college finds that 80% of its
alumni who contribute to the annual fund one year will also contribute the next year,
and 30% of those who do not contribute one year will contribute the next. This can be
viewed as a Markov chain with two states: state 1 corresponds to an alumnus giving a
donation in any one year, and state 2 corresponds to the alumnus not giving a donation
in that year. The transition matrix is
! "
.8 .3
P =
.2 .7

In the examples above, the transition matrices of the Markov chains have the property
that the entries in any column sum to 1. This is not accidental. If P = [pij ] is the
transition matrix of any Markov chain with k states, then for each j we must have

p1j + p2j + · · · + pkj = 1 (1)

because if the system is in state j at one observation, it is certain to be in one of the k possible states at the next observation.
A matrix with property (1) is called a stochastic matrix, a probability matrix, or a
Markov matrix. From the preceding discussion, it follows that the transition matrix for
a Markov chain must be a stochastic matrix.
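The column-sum condition (1) is easy to test numerically. The following is a minimal sketch of such a check, applied to the transition matrices of Examples 1 and 2; it assumes Python with NumPy, which the text itself does not prescribe.

```python
import numpy as np

def is_stochastic(P, tol=1e-12):
    """True if every entry is nonnegative and every column sums to 1."""
    P = np.asarray(P, dtype=float)
    return bool(np.all(P >= 0) and np.allclose(P.sum(axis=0), 1.0, atol=tol))

P1 = [[.8, .3, .2], [.1, .2, .6], [.1, .5, .2]]   # Example 1
P2 = [[.8, .3], [.2, .7]]                          # Example 2
print(is_stochastic(P1), is_stochastic(P2))        # True True
```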
In a Markov chain, the state of the system at any observation time cannot generally
be determined with certainty. The best one can usually do is specify probabilities for
each of the possible states. For example, in a Markov chain with three states, we might
describe the possible state of the system at some observation time by a column vector
$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$
in which x1 is the probability that the system is in state 1, x2 the probability that it is
in state 2, and x3 the probability that it is in state 3. In general we make the following
definition.

DEFINITION 2 The state vector for an observation of a Markov chain with k states is
a column vector x whose i th component xi is the probability that the system is in the
i th state at that time.

Observe that the entries in any state vector for a Markov chain are nonnegative and have
a sum of 1. (Why?) A column vector that has this property is called a probability vector.
Let us suppose now that we know the state vector x(0) for a Markov chain at some
initial observation. The following theorem will enable us to determine the state vectors

x(1) , x(2) , . . . , x(n) , . . .

at the subsequent observation times.

THEOREM 10.4.1 If P is the transition matrix of a Markov chain and x(n) is the state
vector at the nth observation, then x(n+1) = P x(n) .

The proof of this theorem involves ideas from probability theory and will not be given
here. From this theorem, it follows that
$$\mathbf{x}^{(1)} = P\mathbf{x}^{(0)}, \quad \mathbf{x}^{(2)} = P\mathbf{x}^{(1)} = P^2\mathbf{x}^{(0)}, \quad \mathbf{x}^{(3)} = P\mathbf{x}^{(2)} = P^3\mathbf{x}^{(0)}, \quad \ldots, \quad \mathbf{x}^{(n)} = P\mathbf{x}^{(n-1)} = P^n\mathbf{x}^{(0)}$$

In this way, the initial state vector x(0) and the transition matrix P determine x(n) for
n = 1, 2, . . . .

E X A M P L E 3 Example 2 Revisited
The transition matrix in Example 2 was
! "
.8 .3
P =
.2 .7
We now construct the probable future donation record of a new graduate who did not
give a donation in the initial year after graduation. For such a graduate the system is
initially in state 2 with certainty, so the initial state vector is
! "
0
x(0) =
1
From Theorem 10.4.1 we then have
! "! " ! "
(1) (0) .8 .3 0 .3
x = Px = =
.2 .7 1 .7
! "! " ! "
.8 .3 .3 .45
x(2) = P x(1) = =
.2 .7 .7 .55
! "! " ! "
.8 .3 .45 .525
x(3) = P x(2) = =
.2 .7 .55 .475

Thus, after three years the alumnus can be expected to make a donation with probability
.525. Beyond three years, we find the following state vectors (to three decimal places):
! " ! " ! " ! "
.563 .581 .591 .595
x(4) = , x(5) = , x(6) = , x(7) =
.438 .419 .409 .405
! " ! " ! " ! "
.598 .599 .599 .600
x(8) = , x(9) = , x(10) = , x(11) =
.402 .401 .401 .400
For all n beyond 11, we have ! "
.600
x(n) =
.400
to three decimal places. In other words, the state vectors converge to a fixed vector as
the number of observations increases. (We will discuss this further below.)
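The iteration in this example is mechanical enough to automate. Here is a short sketch, again assuming Python with NumPy, that reproduces the computation by applying Theorem 10.4.1 repeatedly:

```python
import numpy as np

P = np.array([[.8, .3], [.2, .7]])   # transition matrix of Example 2
x = np.array([0.0, 1.0])             # x(0): state 2 (no donation) with certainty

for n in range(1, 12):
    x = P @ x                        # Theorem 10.4.1: x(n) = P x(n-1)
    print(n, np.round(x, 3))         # stabilizes at [0.6 0.4] by n = 11
```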

E X A M P L E 4 Example 1 Revisited
The transition matrix in Example 1 was
$$P = \begin{bmatrix} .8 & .3 & .2 \\ .1 & .2 & .6 \\ .1 & .5 & .2 \end{bmatrix}$$

If a car is rented initially from location 2, then the initial state vector is

$$\mathbf{x}^{(0)} = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$$
Using this vector and Theorem 10.4.1, one obtains the later state vectors listed in Table 1.

Table 1

n        0     1     2     3     4     5     6     7     8     9    10    11
x1(n)    0  .300  .400  .477  .511  .533  .544  .550  .553  .555  .556  .557
x2(n)    1  .200  .370  .252  .261  .240  .238  .233  .232  .231  .230  .230
x3(n)    0  .500  .230  .271  .228  .227  .219  .217  .215  .214  .214  .213

For all values of n greater than 11, all state vectors are equal to x(11) to three decimal
places.
Two things should be observed in this example. First, it was not necessary to know
how long a customer kept the car. That is, in a Markov process the time period between
observations need not be regular. Second, the state vectors approach a fixed vector as n
increases, just as in the first example.
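Since x(n) = P^n x(0), the entries of Table 1 can also be obtained in one step from a matrix power rather than by repeated multiplication. A sketch, under the same NumPy assumption as above:

```python
import numpy as np

P = np.array([[.8, .3, .2],
              [.1, .2, .6],
              [.1, .5, .2]])
x0 = np.array([0.0, 1.0, 0.0])   # car rented initially from location 2

# Jump straight to the 11th observation using a matrix power.
x11 = np.linalg.matrix_power(P, 11) @ x0
print(np.round(x11, 3))          # [0.557 0.23  0.213], the last column of Table 1
```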

E X A M P L E 5 Using Theorem 10.4.1


A traffic officer is assigned to control the traffic at the eight intersections indicated in Figure 10.4.1. She is instructed to remain at each intersection for an hour and then to either remain at the same intersection or move to a neighboring intersection. To avoid establishing a pattern, she is told to choose her new intersection on a random basis, with each possible choice equally likely. For example, if she is at intersection 5, her next intersection can be 2, 4, 5, or 8, each with probability 1/4. Every day she starts at the location where she stopped the day before.

[Figure 10.4.1: a street grid with eight intersections, numbered 1, 2 across the top row; 3, 4, 5 across the middle row; and 6, 7, 8 across the bottom row.]

The transition matrix for this Markov chain is

$$P = \begin{bmatrix}
\frac{1}{3} & \frac{1}{3} & 0 & \frac{1}{5} & 0 & 0 & 0 & 0 \\
\frac{1}{3} & \frac{1}{3} & 0 & 0 & \frac{1}{4} & 0 & 0 & 0 \\
0 & 0 & \frac{1}{3} & \frac{1}{5} & 0 & \frac{1}{3} & 0 & 0 \\
\frac{1}{3} & 0 & \frac{1}{3} & \frac{1}{5} & \frac{1}{4} & 0 & \frac{1}{4} & 0 \\
0 & \frac{1}{3} & 0 & \frac{1}{5} & \frac{1}{4} & 0 & 0 & \frac{1}{3} \\
0 & 0 & \frac{1}{3} & 0 & 0 & \frac{1}{3} & \frac{1}{4} & 0 \\
0 & 0 & 0 & \frac{1}{5} & 0 & \frac{1}{3} & \frac{1}{4} & \frac{1}{3} \\
0 & 0 & 0 & 0 & \frac{1}{4} & 0 & \frac{1}{4} & \frac{1}{3}
\end{bmatrix}$$

where column j corresponds to the old intersection and row i to the new intersection.

If the traffic officer begins at intersection 5, her probable locations, hour by hour, are
given by the state vectors given in Table 2. For all values of n greater than 22, all state
vectors are equal to x(22) to three decimal places. Thus, as with the first two examples,
the state vectors approach a fixed vector as n increases.

Table 2

n        0     1     2     3     4     5    10    15    20    22
x1(n)    0  .000  .133  .116  .130  .123  .113  .109  .108  .107
x2(n)    0  .250  .146  .163  .140  .138  .115  .109  .108  .107
x3(n)    0  .000  .050  .039  .067  .073  .100  .106  .107  .107
x4(n)    0  .250  .113  .187  .162  .178  .178  .179  .179  .179
x5(n)    1  .250  .279  .190  .190  .168  .149  .144  .143  .143
x6(n)    0  .000  .000  .050  .056  .074  .099  .105  .107  .107
x7(n)    0  .000  .133  .104  .131  .125  .138  .142  .143  .143
x8(n)    0  .250  .146  .152  .124  .121  .108  .107  .107  .107
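For a chain this size it is less error-prone to build the transition matrix from the neighbor lists in Figure 10.4.1 than to type it entry by entry. The sketch below (NumPy assumed, as before) constructs P that way and reproduces the final column of Table 2:

```python
import numpy as np

# Neighbor lists read off Figure 10.4.1; the officer may also stay put, so
# column j of P assigns probability 1/(len(neighbors[j]) + 1) to each option.
neighbors = {1: [2, 4], 2: [1, 5], 3: [4, 6], 4: [1, 3, 5, 7],
             5: [2, 4, 8], 6: [3, 7], 7: [4, 6, 8], 8: [5, 7]}

P = np.zeros((8, 8))
for j, nbrs in neighbors.items():
    options = nbrs + [j]
    for i in options:
        P[i - 1, j - 1] = 1.0 / len(options)

x = np.zeros(8)
x[4] = 1.0                           # start at intersection 5
for _ in range(22):
    x = P @ x
print(np.round(x, 3))                # matches the n = 22 column of Table 2
```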

Limiting Behavior of the State Vectors

In our examples we saw that the state vectors approached some fixed vector as the number of observations increased. We now ask whether the state vectors always approach a fixed vector in a Markov chain. A simple example shows that this is not the case.

E X A M P L E 6 System Oscillates Between Two State Vectors


Let

$$P = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \quad\text{and}\quad \mathbf{x}^{(0)} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

Then, because P^2 = I and P^3 = P, we have that

$$\mathbf{x}^{(0)} = \mathbf{x}^{(2)} = \mathbf{x}^{(4)} = \cdots = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

and

$$\mathbf{x}^{(1)} = \mathbf{x}^{(3)} = \mathbf{x}^{(5)} = \cdots = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

This system oscillates indefinitely between the two state vectors [1, 0]^T and [0, 1]^T, so it does not approach any fixed vector.

However, if we impose a mild condition on the transition matrix, we can show that
a fixed limiting state vector is approached. This condition is described by the following
definition.

DEFINITION 3 A transition matrix is regular if some integer power of it has all positive
entries.

Thus, for a regular transition matrix P , there is some positive integer m such that all
entries of P m are positive. This is the case with the transition matrices of Examples 1 and
2 for m = 1. In Example 5 it turns out that P 4 has all positive entries. Consequently, in
all three examples the transition matrices are regular.
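Regularity can be tested by computing successive powers of P. A standard fact about primitive nonnegative matrices (Wielandt's bound) says that if any power of a k × k matrix is strictly positive, then the power (k − 1)² + 1 already is, so the search below can safely stop there. A minimal NumPy sketch:

```python
import numpy as np

def is_regular(P):
    """True if some power of P has all positive entries."""
    P = np.asarray(P, dtype=float)
    k = P.shape[0]
    Q = np.eye(k)
    for _ in range((k - 1) ** 2 + 1):   # Wielandt's bound on the needed power
        Q = Q @ P
        if np.all(Q > 0):
            return True
    return False

print(is_regular(np.array([[.8, .3], [.2, .7]])))  # True (Example 2, m = 1)
print(is_regular(np.array([[0., 1.], [1., 0.]])))  # False (Example 6)
```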
A Markov chain that is governed by a regular transition matrix is called a regular
Markov chain. We will see that every regular Markov chain has a fixed state vector q such
that P n x(0) approaches q as n increases for any choice of x(0) . This result is of major
importance in the theory of Markov chains. It is based on the following theorem.

THEOREM 10.4.2 Behavior of P^n as n → ∞

If P is a regular transition matrix, then as n → ∞,

$$P^n \to \begin{bmatrix} q_1 & q_1 & \cdots & q_1 \\ q_2 & q_2 & \cdots & q_2 \\ \vdots & \vdots & & \vdots \\ q_k & q_k & \cdots & q_k \end{bmatrix}$$

where the q_i are positive numbers such that q_1 + q_2 + · · · + q_k = 1.

We will not prove this theorem here. We refer you to a more specialized text, such as
J. Kemeny and J. Snell, Finite Markov Chains (New York: Springer-Verlag, 1976).
Let us set

$$Q = \begin{bmatrix} q_1 & q_1 & \cdots & q_1 \\ q_2 & q_2 & \cdots & q_2 \\ \vdots & \vdots & & \vdots \\ q_k & q_k & \cdots & q_k \end{bmatrix} \quad\text{and}\quad \mathbf{q} = \begin{bmatrix} q_1 \\ q_2 \\ \vdots \\ q_k \end{bmatrix}$$

Thus, Q is a transition matrix, all of whose columns are equal to the probability vector q. Q has the property that if x is any probability vector, then

$$Q\mathbf{x} = \begin{bmatrix} q_1 & q_1 & \cdots & q_1 \\ q_2 & q_2 & \cdots & q_2 \\ \vdots & \vdots & & \vdots \\ q_k & q_k & \cdots & q_k \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \end{bmatrix} = \begin{bmatrix} q_1 x_1 + q_1 x_2 + \cdots + q_1 x_k \\ q_2 x_1 + q_2 x_2 + \cdots + q_2 x_k \\ \vdots \\ q_k x_1 + q_k x_2 + \cdots + q_k x_k \end{bmatrix} = (x_1 + x_2 + \cdots + x_k)\,\mathbf{q} = (1)\mathbf{q} = \mathbf{q}$$

That is, Q transforms any probability vector x into the fixed probability vector q . This
result leads to the following theorem.

THEOREM 10.4.3 Behavior of P^n x as n → ∞

If P is a regular transition matrix and x is any probability vector, then as n → ∞,

$$P^n \mathbf{x} \to \begin{bmatrix} q_1 \\ q_2 \\ \vdots \\ q_k \end{bmatrix} = \mathbf{q}$$
where q is a fixed probability vector, independent of n, all of whose entries are positive.

This result holds since Theorem 10.4.2 implies that P^n → Q as n → ∞. This in turn
implies that P^n x → Qx = q as n → ∞. Thus, for a regular Markov chain, the system
eventually approaches a fixed state vector q . The vector q is called the steady-state vector
of the regular Markov chain.
For systems with many states, usually the most efficient technique of computing the
steady-state vector q is simply to calculate P n x for some large n. Our examples illustrate
this procedure. Each is a regular Markov process, so that convergence to a steady-state
vector is ensured. Another way of computing the steady-state vector is to make use of
the following theorem.

THEOREM 10.4.4 Steady-State Vector


The steady-state vector q of a regular transition matrix P is the unique probability
vector that satisfies the equation P q = q .

To see this, consider the matrix identity PP^n = P^{n+1}. By Theorem 10.4.2, both P^n and
P^{n+1} approach Q as n → ∞. Thus, we have PQ = Q. Any one column of this matrix
equation gives Pq = q. To show that q is the only probability vector that satisfies this
equation, suppose r is another probability vector such that Pr = r. Then also P^n r = r
for n = 1, 2, . . . . When we let n → ∞, Theorem 10.4.3 leads to q = r.
Theorem 10.4.4 can also be expressed by the statement that the homogeneous linear
system
(I − P )q = 0
has a unique solution vector q with nonnegative entries that satisfy the condition q1 +
q2 + · · · + qk = 1. We can apply this technique to the computation of the steady-state
vectors for our examples.
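This technique also translates directly into a computation: adjoin the normalization equation q_1 + · · · + q_k = 1 to (I − P)q = 0 and solve. A minimal NumPy sketch; the least-squares call simply picks out the unique solution of the overdetermined but consistent system:

```python
import numpy as np

def steady_state(P):
    """Solve (I - P) q = 0 subject to q1 + ... + qk = 1."""
    P = np.asarray(P, dtype=float)
    k = P.shape[0]
    A = np.vstack([np.eye(k) - P, np.ones((1, k))])  # adjoin the normalization row
    b = np.zeros(k + 1)
    b[-1] = 1.0
    q, *_ = np.linalg.lstsq(A, b, rcond=None)
    return q

print(steady_state([[.8, .3], [.2, .7]]))  # [0.6 0.4], as found in Example 7 below
```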

E X A M P L E 7 Example 2 Revisited
In Example 2 the transition matrix was
! "
.8 .3
P =
.2 .7
so the linear system (I − P )q = 0 is
! "! " ! "
.2 −.3 q1 0
= (2)
−.2 .3 q2 0

This leads to the single independent equation

$$.2q_1 - .3q_2 = 0$$

or

$$q_1 = 1.5\,q_2$$

Thus, when we set q_2 = s, any solution of (2) is of the form

$$\mathbf{q} = s \begin{bmatrix} 1.5 \\ 1 \end{bmatrix}$$

where s is an arbitrary constant. To make the vector q a probability vector, we set s = 1/(1.5 + 1) = .4. Consequently,

$$\mathbf{q} = \begin{bmatrix} .6 \\ .4 \end{bmatrix}$$
is the steady-state vector of this regular Markov chain. This means that over the long
run, 60% of the alumni will give a donation in any one year, and 40% will not. Observe
that this agrees with the result obtained numerically in Example 3.
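Equivalently, Pq = q says that q is an eigenvector of P for the eigenvalue 1, so a numerical eigensolver gives the steady state after rescaling. A sketch under the same NumPy assumption:

```python
import numpy as np

P = np.array([[.8, .3], [.2, .7]])
w, V = np.linalg.eig(P)                     # eigenvalues 1 and .5
v = V[:, np.argmin(np.abs(w - 1.0))].real   # eigenvector for eigenvalue 1
q = v / v.sum()                             # rescale into a probability vector
print(q)                                    # [0.6 0.4]
```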

E X A M P L E 8 Example 1 Revisited
In Example 1 the transition matrix was
$$P = \begin{bmatrix} .8 & .3 & .2 \\ .1 & .2 & .6 \\ .1 & .5 & .2 \end{bmatrix}$$

so the linear system (I − P)q = 0 is

$$\begin{bmatrix} .2 & -.3 & -.2 \\ -.1 & .8 & -.6 \\ -.1 & -.5 & .8 \end{bmatrix} \begin{bmatrix} q_1 \\ q_2 \\ q_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$

The reduced row echelon form of the coefficient matrix is (verify)

$$\begin{bmatrix} 1 & 0 & -\frac{34}{13} \\ 0 & 1 & -\frac{14}{13} \\ 0 & 0 & 0 \end{bmatrix}$$

so the original linear system is equivalent to the system

$$q_1 = \tfrac{34}{13}\,q_3, \qquad q_2 = \tfrac{14}{13}\,q_3$$

When we set q_3 = s, any solution of the linear system is of the form

$$\mathbf{q} = s \begin{bmatrix} \frac{34}{13} \\ \frac{14}{13} \\ 1 \end{bmatrix}$$

To make this a probability vector, we set

$$s = \frac{1}{\frac{34}{13} + \frac{14}{13} + 1} = \frac{13}{61}$$

Thus, the steady-state vector of the system is

$$\mathbf{q} = \begin{bmatrix} \frac{34}{61} \\ \frac{14}{61} \\ \frac{13}{61} \end{bmatrix} = \begin{bmatrix} .5573\ldots \\ .2295\ldots \\ .2131\ldots \end{bmatrix}$$

This agrees with the result obtained numerically in Table 1. The entries of q give the
long-run probabilities that any one car will be returned to location 1, 2, or 3, respectively.
If the car rental agency has a fleet of 1000 cars, it should design its facilities so that there
are at least 558 spaces at location 1, at least 230 spaces at location 2, and at least 214
spaces at location 3.

E X A M P L E 9 Example 5 Revisited
We will not give the details of the calculations but simply state that the unique probability
vector solution of the linear system (I − P )q = 0 is
$$\mathbf{q} = \begin{bmatrix} \frac{3}{28} \\ \frac{3}{28} \\ \frac{3}{28} \\ \frac{5}{28} \\ \frac{4}{28} \\ \frac{3}{28} \\ \frac{4}{28} \\ \frac{3}{28} \end{bmatrix} = \begin{bmatrix} .1071\ldots \\ .1071\ldots \\ .1071\ldots \\ .1785\ldots \\ .1428\ldots \\ .1071\ldots \\ .1428\ldots \\ .1071\ldots \end{bmatrix}$$

The entries in this vector indicate the proportion of time the traffic officer spends at
each intersection over the long term. Thus, if the objective is for her to spend the same
proportion of time at each intersection, then the strategy of random movement with equal
probabilities from one intersection to another is not a good one. (See Exercise 5.)
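Because the entries of q here are exact rationals, the claim Pq = q can be checked without floating-point arithmetic. A sketch using Python's fractions module, with the neighbor lists again read off Figure 10.4.1:

```python
from fractions import Fraction

# Neighbor lists from Figure 10.4.1 (the officer may also stay put).
neighbors = {1: [2, 4], 2: [1, 5], 3: [4, 6], 4: [1, 3, 5, 7],
             5: [2, 4, 8], 6: [3, 7], 7: [4, 6, 8], 8: [5, 7]}
opts = {j: nbrs + [j] for j, nbrs in neighbors.items()}

# Candidate steady state: q_j proportional to the number of options at j.
q = [Fraction(len(opts[j]), 28) for j in range(1, 9)]

# Entry i of P q, using p_ij = 1/len(opts[j]) when i is an option from j.
Pq = [sum(q[j - 1] / len(opts[j]) for j in range(1, 9) if i in opts[j])
      for i in range(1, 9)]
print(Pq == q)  # True: P q = q exactly
```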

Exercise Set 10.4


1. Consider the transition matrix

$$P = \begin{bmatrix} .4 & .5 \\ .6 & .5 \end{bmatrix}$$

(a) Calculate x^(n) for n = 1, 2, 3, 4, 5 if x^(0) = [1, 0]^T.

(b) State why P is regular and find its steady-state vector.

2. Consider the transition matrix

$$P = \begin{bmatrix} .2 & .1 & .7 \\ .6 & .4 & .2 \\ .2 & .5 & .1 \end{bmatrix}$$

(a) Calculate x^(1), x^(2), and x^(3) to three decimal places if x^(0) = [0, 0, 1]^T.

(b) State why P is regular and find its steady-state vector.

3. Find the steady-state vectors of the following regular transition matrices:

$$\text{(a)}\ \begin{bmatrix} \frac{1}{3} & \frac{3}{4} \\ \frac{2}{3} & \frac{1}{4} \end{bmatrix} \qquad \text{(b)}\ \begin{bmatrix} .81 & .26 \\ .19 & .74 \end{bmatrix} \qquad \text{(c)}\ \begin{bmatrix} \frac{1}{3} & \frac{1}{2} & 0 \\ \frac{1}{3} & 0 & \frac{1}{4} \\ \frac{1}{3} & \frac{1}{2} & \frac{3}{4} \end{bmatrix}$$

4. Let P be the transition matrix

$$\begin{bmatrix} \frac{1}{2} & 0 \\ \frac{1}{2} & 1 \end{bmatrix}$$

(a) Show that P is not regular.

(b) Show that as n increases, P^n x^(0) approaches [0, 1]^T for any initial state vector x^(0).

(c) What conclusion of Theorem 10.4.3 is not valid for the steady state of this transition matrix?

5. Verify that if P is a k × k regular transition matrix all of whose row sums are equal to 1, then the entries of its steady-state vector are all equal to 1/k.

6. Show that the transition matrix

$$P = \begin{bmatrix} 0 & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & 0 \\ \frac{1}{2} & 0 & \frac{1}{2} \end{bmatrix}$$

is regular, and use Exercise 5 to find its steady-state vector.

7. John is either happy or sad. If he is happy one day, then he is happy the next day four times out of five. If he is sad one day, then he is sad the next day one time out of three. Over the long term, what are the chances that John is happy on any given day?

8. A country is divided into three demographic regions. It is found that each year 5% of the residents of region 1 move to region 2, and 5% move to region 3. Of the residents of region 2, 15% move to region 1 and 10% move to region 3. And of the residents of region 3, 10% move to region 1 and 5% move to region 2. What percentage of the population resides in each of the three regions after a long period of time?

Working with Technology

The following exercises are designed to be solved using a technology utility. Typically, this will be MATLAB, Mathematica, Maple, Derive, or Mathcad, but it may also be some other type of linear algebra software or a scientific calculator with some linear algebra capabilities. For each exercise you will need to read the relevant documentation for the particular utility you are using. The goal of these exercises is to provide you with a basic proficiency with your technology utility. Once you have mastered the techniques in these exercises, you will be able to use your technology utility to solve many of the problems in the regular exercise sets.

T1. Consider the sequence of transition matrices {P_2, P_3, P_4, . . .} with

$$P_2 = \begin{bmatrix} 0 & \frac{1}{2} \\ 1 & \frac{1}{2} \end{bmatrix}, \qquad P_3 = \begin{bmatrix} 0 & 0 & \frac{1}{3} \\ 0 & \frac{1}{2} & \frac{1}{3} \\ 1 & \frac{1}{2} & \frac{1}{3} \end{bmatrix},$$

$$P_4 = \begin{bmatrix} 0 & 0 & 0 & \frac{1}{4} \\ 0 & 0 & \frac{1}{3} & \frac{1}{4} \\ 0 & \frac{1}{2} & \frac{1}{3} & \frac{1}{4} \\ 1 & \frac{1}{2} & \frac{1}{3} & \frac{1}{4} \end{bmatrix}, \qquad P_5 = \begin{bmatrix} 0 & 0 & 0 & 0 & \frac{1}{5} \\ 0 & 0 & 0 & \frac{1}{4} & \frac{1}{5} \\ 0 & 0 & \frac{1}{3} & \frac{1}{4} & \frac{1}{5} \\ 0 & \frac{1}{2} & \frac{1}{3} & \frac{1}{4} & \frac{1}{5} \\ 1 & \frac{1}{2} & \frac{1}{3} & \frac{1}{4} & \frac{1}{5} \end{bmatrix},$$

and so on.

(a) Use a computer to show that each of these four matrices is regular by computing their squares.

(b) Verify Theorem 10.4.2 by computing the 100th power of P_k for k = 2, 3, 4, 5. Then make a conjecture as to the limiting value of P_k^n as n → ∞ for all k = 2, 3, 4, . . . .

(c) Verify that the common column q_k of the limiting matrix you found in part (b) satisfies the equation P_k q_k = q_k, as required by Theorem 10.4.4.

T2. A mouse is placed in a box with nine rooms as shown in the accompanying figure. Assume that it is equally likely that the mouse goes through any door in the room or stays in the room.

(a) Construct the 9 × 9 transition matrix for this problem and show that it is regular.

(b) Determine the steady-state vector for the matrix.

(c) Use a symmetry argument to show that this problem may be solved using only a 3 × 3 matrix.

[Figure Ex-T2: a square box divided into nine rooms in a 3 × 3 arrangement, numbered 1, 2, 3 across the top row; 4, 5, 6 across the middle row; and 7, 8, 9 across the bottom row, with doors between adjacent rooms.]
