
CEM615 Infrastructure Asset Management
TOPIC 4: ASSET DETERIORATION
Lesson Learning Outcomes (LO)
• At the conclusion of this lesson, the student should be able to:
– Define and explain the Markov Chain (MC)
– Describe transition probabilities, the transition diagram, the transition matrix, and the absorbing state
– Differentiate between a Regular Markov Chain and a Non-regular Markov Chain
– Elaborate on the state vector, the steady state condition, and the steady state vector
5.1 - Markov Chains

• In 1907, A. A. Markov began the study of an important new type of chance process. In this process, the outcome of a given experiment can affect the outcome of the next experiment. This type of process is called a Markov chain.
• We have a set of states, S = {S1, S2, ..., Sr}.
• If the chain is currently in state Si, then it moves to state Sj at the next step with a probability denoted Pij, called the transition probability.
• P44 = 1 marks an absorbing state: once the chain enters it, it cannot leave (unless you repair the asset).
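This bookkeeping can be sketched in a few lines of Python (a minimal illustration, not part of the original notes; the matrix values are hypothetical and numpy is assumed to be available):

```python
import numpy as np

# Hypothetical 4-state asset-condition chain (illustrative numbers only).
# Row i holds the transition probabilities Pij from state Si to each Sj,
# so every row must sum to 1.
P = np.array([
    [0.90, 0.08, 0.02, 0.00],
    [0.00, 0.85, 0.12, 0.03],
    [0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 1.00],   # P44 = 1: absorbing state, never left
])

assert np.allclose(P.sum(axis=1), 1.0)   # each row is a distribution

# One-step transition probability from S1 (index 0) to S2 (index 1):
print(P[0, 1])   # 0.08
```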
• Transition Diagram (figure not reproduced)

END 1
Example 11.1
• The Land of Oz never has two nice days in a row. If they have a nice day, they are just as likely to have snow as rain the next day. If they have snow or rain, they have an even chance of the same weather the next day. If there is a change from snow or rain, only half of the time is this a change to a nice day. With this information we form a Markov chain as follows. We take as states the kinds of weather R (rain), N (nice), and S (snow). From the above information we determine the transition probabilities and the steady state / long-term condition of the weather.
• The transition probabilities are most conveniently represented in a square array, as shown below.
• Such a square array is called the matrix of transition probabilities, or the transition matrix.

          R     N     S
    R   1/2   1/4   1/4
P = N   1/2    0    1/2
    S   1/4   1/4   1/2
• Theorem 11.1. Let P be the transition matrix of a Markov chain. The ijth entry p(n)ij of the matrix P^n gives the probability that the Markov chain, starting in state Si, will be in state Sj after n steps. (Quiz 5)
• In the long term, the weather probabilities are:
– Rainy days: 40%
– Nice days: 20%
– Snowy days: 40%
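These long-run figures can be checked numerically by raising the Land of Oz matrix to higher and higher powers, as Theorem 11.1 suggests (a sketch assuming numpy):

```python
import numpy as np

# Land of Oz transition matrix, states ordered R (rain), N (nice), S (snow).
P = np.array([[0.50, 0.25, 0.25],
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])

# Theorem 11.1: entry (i, j) of P^n is the probability of being in state Sj
# after n steps when starting from state Si.
for n in (1, 2, 6, 50):
    print(f"P^{n}:\n{np.linalg.matrix_power(P, n).round(4)}")

# For large n every row approaches (0.4, 0.2, 0.4) -- in the long run the
# starting state no longer matters.
```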
END 2

Regular Markov Chain
• A Regular Markov Chain is a Markov Chain where some power of the transition matrix has all entries non-zero; equivalently, where for some positive integer n, the matrix T^n has no 0 entries.

• Yes (regular):
        0.10  0.10  0.80
    T = 0.25  0     0.75
        0.40  0.50  0.10
  Check whether some T^n has only non-zero entries (here T^2 already does).

• No (not regular):
    T = 0  1
        1  0
  The powers of T alternate between T and the identity matrix, so every power contains zero entries.
• Regular Markov Chains (related notion: irreducible)

• Irreducible Markov Chain: one in which all states communicate with each other; it is easier to think of it as "connectable". (It is strongly recommended to draw the transition diagram.)

• Note: any time a state communicates only with itself, as state 3 does in the example below, the matrix is not irreducible (not connectable).
• Example 1: Determine if the following are irreducible (connectable):

    T = 0.25  0.75
        0.65  0.35

        0.2  0.4  0.4
    T = 0.3  0    0.7
        0    0    1

        0.6  0    0.4
    T = 0.2  0    0.8
        0    0.8  0.2
• Regular Markov Chain:

a) If the transition matrix is not irreducible (not connectable), then it is not regular.

b) If the transition matrix is irreducible (connectable) and at least one entry of the main diagonal is non-zero, then it is regular.

c) If all entries on the main diagonal are zero, but T^n (T multiplied by itself n times) contains all positive entries for some n, then it is regular.
END 3

Non-regular Markov chain
• One important special type of non-regular Markov chain is the absorbing Markov chain.

• A state Sk of a Markov chain is called an absorbing state if, once the Markov chain enters the state, it remains there forever. In other words, the probability of leaving the state is zero. This means Pkk = 1, and Pkj = 0 for j ≠ k.

• A Markov chain is called an absorbing chain if
(i) it has at least one absorbing state; and
(ii) for every state in the chain, the probability of reaching an absorbing state in a finite number of steps is non-zero.

• For a Markov chain to avoid being an absorbing chain, it must be possible to move from every state to some other state.
Summary:
1) "Regular Markov chains always ACHIEVE A STEADY STATE. A Markov chain is regular if the transition matrix P or some power of P has no zero entries." (definition from a textbook)

• So in principle we have to test every power of P and see whether P^2, P^3, etc. have zero entries. How do we know when to stop testing? Strictly speaking, we could never conclude that a Markov chain is non-regular without testing powers of P forever. In practice, you will often find that either the zeros vanish quickly or they are obviously never going to go away. So to test it, just go up a few powers and see whether the zeros disappear; a sketch of this test follows below.
• If you are starting from a state of certainty (for example, you know that you DEFINITELY lost the game you just played), then your initial state vector has a 1 in that state and 0 everywhere else.

2) S(n) = S(0) P^n, where S is your state vector (written as a row vector).
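A sketch of the practical test described in point 1, assuming numpy; the cap on the number of powers tried (max_power) is an arbitrary choice, not from the notes:

```python
import numpy as np

def is_regular(T, max_power=20):
    """Heuristic test: raise T to successive powers and report the first
    power with no zero entries. In practice the zeros either vanish
    quickly or clearly never will; max_power caps the search."""
    T = np.asarray(T, dtype=float)
    Tn = T.copy()
    for n in range(1, max_power + 1):
        if (Tn > 0).all():
            return True, n            # T^n has all positive entries
        Tn = Tn @ T
    return False, max_power           # inconclusive / not regular

# The "No" example from earlier: powers alternate between T and I, so a
# zero entry never disappears.
print(is_regular([[0, 1], [1, 0]]))                     # (False, 20)
# The "Yes" example: T^2 already has all positive entries.
print(is_regular([[0.10, 0.10, 0.80],
                  [0.25, 0.00, 0.75],
                  [0.40, 0.50, 0.10]]))                 # (True, 2)
```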
END 4

Steady state condition
(The steady state condition applies only to a Regular Markov chain.)

• A state vector P is called the Steady State Vector when P·T = P: multiplying the Steady State Vector by the Transition Matrix returns the Steady State Vector.
• In the following example, the steady state vector is (0.5294, 0.4706).
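The condition P·T = P, together with the requirement that the entries of P sum to 1, is a linear system that can be solved directly. A sketch assuming numpy, checked here against the Land of Oz chain from earlier:

```python
import numpy as np

def steady_state(T):
    """Solve P.T = P together with sum(P) = 1 (a sketch; assumes T is the
    transition matrix of a regular Markov chain)."""
    n = T.shape[0]
    # P (T - I) = 0, transposed into column form, plus a normalisation row.
    A = np.vstack([(T - np.eye(n)).T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Check against the Land of Oz chain: the long-run 40% / 20% / 40% split.
T = np.array([[0.50, 0.25, 0.25],
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])
print(steady_state(T).round(4))   # [0.4 0.2 0.4]
```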
Example (worked examples, including the continuation of Example 4.4, are not reproduced here)
Example
2) Gina noticed that the performance of her baseball team seemed to depend on the outcome of their previous game. When her team won, there was a 70% chance that they would win the next game. If they lost, however, there was only a 40% chance that they would win their next game.

2a) Following a loss, what is the probability that Gina's team will win two games later? Use Markov chains to find out. [The answer provided is 0.52, but no matter what I try, I can't get the answer.]

http://www.sciforums.com/threads/math-markov-chains-regular-markov-chains.55328/
• The probability that Gina's team wins again after a win is 0.7. Using P(W->W) = 0.7, P(W->L) = 0.3, P(L->W) = 0.4, P(L->L) = 0.6, you can form the

Transition matrix
    P = 0.7  0.3
        0.4  0.6

If you started the season off with a win, the initial state would be the vector (1, 0).
▪ Here, however, the initial state should be S(0) = [0 1], because you definitely lost the game just played.

▪ S(0) = [0, 1]
▪ S(1) = S(0) P = [.4 .6]
▪ S(2) = S(1) P = S(0) P^2 = [.52 .48]

▪ Therefore, following a loss, the probability that Gina's team will win two games later is 0.52, or 52%.
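The same computation in a few lines of Python (a sketch assuming numpy):

```python
import numpy as np

# States ordered (Win, Lose): P(W->W) = 0.7, P(L->W) = 0.4.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

S0 = np.array([0.0, 1.0])    # certainty: the previous game was a loss
S1 = S0 @ P                  # [0.4, 0.6]
S2 = S1 @ P                  # same as S0 @ np.linalg.matrix_power(P, 2)
print(S2)                    # [0.52 0.48] -> 52% chance of a win
```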
END 5

Exercise 1
• Draw the transition diagram and find T^2 for the matrix T given below (matrix not reproduced).
Exercise 2
• A market analyst is interested in whether consumers prefer APPLE tablets or SAMSUNG tablets. Two market surveys taken one year apart reveal the following:
– 10% of APPLE owners had switched to SAMSUNG and the rest continued with APPLE.
– 35% of SAMSUNG owners had switched to APPLE and the rest continued with SAMSUNG.
• In the long run, what is the fraction of Apple and Samsung in the market?

[Clue: stationary probability for a regular Markov Chain]

(Pg 46, markovchain-IKRana)
• Solution to Exercise 2. Label the states 0 – Apple, 1 – Samsung. From the survey percentages:

                0 - Apple   1 - Samsung
0 - Apple          0.90        0.10
1 - Samsung        0.35        0.65

    T = 0.90  0.10
        0.35  0.65

Compute T^n until it reaches the steady state; a sketch of this computation follows below.
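A sketch of the T^n computation assuming numpy; the convergence tolerance is an arbitrary choice:

```python
import numpy as np

T = np.array([[0.90, 0.10],    # Apple owners: 90% stay, 10% switch
              [0.35, 0.65]])   # Samsung owners: 35% switch, 65% stay

# Raise T to higher and higher powers until successive powers stop changing.
Tn = T.copy()
while True:
    Tnext = Tn @ T
    if np.allclose(Tnext, Tn, atol=1e-12):
        break
    Tn = Tnext

print(Tn.round(4))
# Both rows converge to about [0.7778 0.2222]: in the long run roughly 7/9
# of the market owns Apple and 2/9 owns Samsung.
```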
END 6

Articles to read
• 5.2 -Regular_Markov_Chains
• 5.2a -Regular_Markov_Chains + Eg 4.3
• 5.3 -Regular Markov chains
• 5.8 Chap4part1
• 5.10 sec92

• 5.1 -Markov Chains

Exercise 3
• A sewerage treatment plant is in one of four possible states: 0 = working without a problem; 1 = working but in need of minor repair; 2 = working but in need of major repair; 3 = out of order. The corresponding transition probability matrix is given below.

a) Verify whether the corresponding transition probability matrix describes a regular Markov chain.
b) Find the steady-state distribution.
c) Draw a transition diagram.
0.80 0.14 0.04 0.02
 0 0.60 0.30 0.10 
T= 
 0 0 0.65 0.35
 
 0.90 0 0 0.10 
• Regular Markov Chain is a Markov Chain where some power of transition matrix has all the entries non-
zero. A regular Markov Chain is one where for some positive integer n, the matrix T n has no 0 entries.
• Check Tn until no 0 entries.
• Generate Tnew until no significant change in values

39
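A sketch of both checks assuming numpy; the number of powers tried and the power used for the steady state are arbitrary choices:

```python
import numpy as np

T = np.array([[0.80, 0.14, 0.04, 0.02],
              [0.00, 0.60, 0.30, 0.10],
              [0.00, 0.00, 0.65, 0.35],
              [0.90, 0.00, 0.00, 0.10]])

# (a) Regularity: find the first power of T with no zero entries.
Tn = T.copy()
for n in range(1, 10):
    if (Tn > 0).all():
        print(f"T^{n} has no zero entries -> regular")   # expect n = 3
        break
    Tn = Tn @ T

# (b) Steady state: a high power of T repeats the steady-state
# distribution in every row.
print(np.linalg.matrix_power(T, 200)[0].round(3))
# expected: roughly [0.503 0.176 0.209 0.112]
```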
Exercise 4
• The corresponding transition probability matrix P is given below.

a) Verify whether the corresponding transition probability matrix describes a regular Markov chain.
b) Find the steady-state distribution.
c) Draw a transition diagram.
    0.33  0.33  0.33
T = 0.25  0.50  0.25
    0.17  0.33  0.50

      0.33  0.33  0.33   0.33  0.33  0.33     0.25  0.39  0.36
T^2 = 0.25  0.50  0.25 × 0.25  0.50  0.25  =  0.25  0.42  0.33
      0.17  0.33  0.50   0.17  0.33  0.50     0.22  0.39  0.39

      0.24  0.40  0.36
T^3 = 0.24  0.40  0.35
      0.24  0.40  0.37

      0.24  0.40  0.36
T^4 = 0.24  0.40  0.36
      0.24  0.40  0.36

      0.24  0.40  0.36
T^5 = 0.24  0.40  0.36
      0.24  0.40  0.36

T itself has no zero entries, so the chain is regular; the powers stabilise from T^4 onward, giving a steady-state distribution of approximately (0.24, 0.40, 0.36).
Exercise 5 (good question)
• A bridge has a 150-meter-long element, assumed to be a steel girder, that at age zero (new bridge) has 100% of the element in Condition 1. The estimated Markovian transition matrix P is shown in Table 1.0. Predict the future condition of the bridge over a 10-year life span.

• Table 1.0: Matrix of bridge condition

              Condition 1   Condition 2   Condition 3   Condition 4
Condition 1   9.940E-01     6.000E-03     0.000E+00     0.000E+00
Condition 2   0.000E+00     9.930E-01     7.000E-03     0.000E+00
Condition 3   0.000E+00     0.000E+00     9.760E-01     2.400E-02
Condition 4   0.000E+00     0.000E+00     0.000E+00     1.000E+00
S0 = ( 1  0  0  0 )

S1 = S0 P = ( 0.994  0.006    0       0       )
S2 = S1 P = ( 0.988  0.01192  4E-05   0       )
S3 =        ( 0.982  0.01777  0.0001  1E-06   )
S4 =        ( 0.976  0.02354  0.0002  4E-06   )
S5 =        ( 0.970  0.02923  0.0004  9.9E-06 )
S6 =        ( 0.965  0.03485  0.0006  2E-05   )
S7 =        ( 0.959  0.04039  0.0008  3.4E-05 )
S8 =        ( 0.953  0.04586  0.0011  5.4E-05 )
S9 =        ( 0.947  0.05126  0.0014  8E-05   )
S10 =       ( 0.942  0.05658  0.0017  0.00011 )

After 10 years, about 94.2% of the element remains in Condition 1, about 5.7% is in Condition 2, and only about 0.01% has deteriorated to the absorbing Condition 4.
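The year-by-year computation in Python (a sketch assuming numpy):

```python
import numpy as np

P = np.array([[0.994, 0.006, 0.000, 0.000],
              [0.000, 0.993, 0.007, 0.000],
              [0.000, 0.000, 0.976, 0.024],
              [0.000, 0.000, 0.000, 1.000]])

S = np.array([1.0, 0.0, 0.0, 0.0])    # new bridge: 100% in Condition 1
for year in range(1, 11):
    S = S @ P                         # S(n) = S(n-1) P
    print(year, S.round(5))
# year 10: roughly [0.9416 0.0566 0.0017 0.0001]
```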
Exercise 6
Suppose that General Motors (GM), Ford (F), and Chrysler (C) each introduce a new SUV.
• General Motors keeps 85% of its customers but loses 10% to Ford and 5% to Chrysler.
• Ford keeps 80% of its customers but loses 10% to General Motors and 10% to Chrysler.
• Chrysler keeps 60% of its customers but loses 25% to General Motors and 15% to Ford.

Find the distribution of the market in the long run, i.e. the Steady State Vector.

Find T^n as n grows until it reaches the steady state.
• Solution to Exercise 6. Label the states 1 – General Motors, 2 – Ford, 3 – Chrysler. Each row of T gives where a brand's customers go:

        0.85  0.10  0.05
    T = 0.10  0.80  0.10
        0.25  0.15  0.60

Compute the Steady State Vector from T; a sketch follows below.
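One way to get the steady state directly is through the eigenvector for eigenvalue 1 (a sketch assuming numpy; power iteration, as in Exercise 2, would work equally well):

```python
import numpy as np

T = np.array([[0.85, 0.10, 0.05],
              [0.10, 0.80, 0.10],
              [0.25, 0.15, 0.60]])

# The steady-state vector is the left eigenvector of T for eigenvalue 1,
# i.e. an ordinary eigenvector of T transposed, rescaled to sum to 1.
vals, vecs = np.linalg.eig(T.T)
v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = v / v.sum()
print(pi.round(4))
# roughly [0.4906 0.3585 0.1509]: GM ~49%, Ford ~36%, Chrysler ~15%
```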
END 7

5.4 - Markov Model for Storm Water Pipe Deterioration
• P44 = 1 is an absorbing state, i.e. once entered it cannot be left.
End of student notes
5.7 markovchain-IKRana

END 8

Exercise 1.3
• Consider a person moving on a 4 × 4 grid. He can move only to the intersection points to the right or down, each with probability 1/2. He starts his walk from the top left corner, and Xn, n ≥ 1, denotes his position after n steps. Show that {Xn, n ≥ 0} is a Markov chain. Sketch its transition graph and compute the transition probability matrix. Also find the initial distribution vector.
Exercise 1.4 (content not reproduced)
• End of 5.7 markovchain-IKRana
END 9

