
Markov Chains

What is a “Markov Chain”?

We have a set of states, S = {s1, s2,...,sr}.


A process starts in one of these states and moves successively from
one state to another. Each move is called a step.
If the chain is currently in state si, then it moves to state sj at the
next step with a probability denoted by pij , and this probability
does not depend upon which states the chain was in before the
current state.
The probabilities pij are called transition probabilities. The process
can remain in the state it is in, and this occurs with probability pii.
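
To make this concrete, here is a minimal sketch in Python (the states and probabilities are illustrative, not taken from any example in these slides) that stores the transition probabilities as a matrix and samples successive steps:

import numpy as np

# A 3-state chain with illustrative transition probabilities.
# Row i holds the probabilities pij of moving from state si to each state sj;
# every row must sum to 1.
states = ["s1", "s2", "s3"]
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
])
assert np.allclose(P.sum(axis=1), 1.0)

def simulate(P, start, n_steps, rng):
    # Sample a path: the next state depends only on the current state
    # (the Markov property), via the current state's row of P.
    path = [start]
    for _ in range(n_steps):
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return path

rng = np.random.default_rng(0)
print([states[i] for i in simulate(P, start=0, n_steps=5, rng=rng)])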
Andrey Markov

• (14 June 1856 – 20 July 1922) was a Russian


mathematician. He is best known for his work on
stochastic processes. The primary subject of his research
later became known as Markov chains and
Markov processes.
• Markov and his younger brother Vladimir Andreevich
Markov (1871–1897) proved the
Markov brothers' inequality.
• His son, another Andrei Andreevich Markov (1903–
1979), was also a notable mathematician, making
contributions to constructive mathematics and 
recursive function theory.
Example: Land of Oz

• They never have two nice days in a row.

• If they have a nice day, they are just as likely to have snow as rain
the next day.

• If they have snow or rain, they have an even chance of having the
same the next day.

• If there is a change from snow or rain, only half of the time is this a
change to a nice day.
Transition Matrix for the Land of Oz

        R      N      S
R      1/2    1/4    1/4
N      1/2     0     1/2
S      1/4    1/4    1/2

(Each row gives the probabilities for tomorrow's weather given today's: R = rain, N = nice, S = snow.)
2-Day Forecast for the Land of Oz

We consider the question of determining the probability that, given


the chain is in state i today, it will be in state j two days from now.
We denote this probability by pij(2) .
We see that if it is rainy today then the event that it is snowy two
days from now is the disjoint union of the following three events:
1) it is rainy tomorrow and snowy two days from now,
2) it is nice tomorrow and snowy two days from now, and
3) it is snowy tomorrow and snowy two days from now.
2-Day Forecast Probabilities

We have:
p13(2) = p11 p13 + p12 p23 + p13 p33
and so forth, so that the matrix of two-step probabilities is the matrix product P · P = P2.

Theorem: Let P be the transition matrix of a Markov chain. The ij-th entry
pij(n) of the matrix Pn gives the probability that the Markov
chain, starting in state si, will be in state sj after n steps.
6-Day Forecast for Land of Oz
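
A minimal numerical sketch, assuming the Land of Oz matrix above and numpy: by the theorem, the 2-day and 6-day forecasts are simply the matrix powers P2 and P6.

import numpy as np

# Land of Oz transition matrix; rows and columns ordered R (rain), N (nice), S (snow).
P = np.array([
    [0.50, 0.25, 0.25],
    [0.50, 0.00, 0.50],
    [0.25, 0.25, 0.50],
])

# By the theorem, the (i, j) entry of P^n is pij(n).
P2 = np.linalg.matrix_power(P, 2)
P6 = np.linalg.matrix_power(P, 6)

print("p13(2) =", P2[0, 2])   # 0.375, matching p11 p13 + p12 p23 + p13 p33
print("6-day forecast:")
print(P6.round(3))            # every row is already close to (0.4, 0.2, 0.4)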
Example: Drunkard’s Walk

A man walks along a four-block stretch of Park Avenue:


• If he is at corner 1, 2, or 3, then he walks to the left or right with
equal probability. He continues until he reaches corner 4, which is
a bar, or corner 0, which is his home.
• If he reaches either home or the bar, he stays there.
Transition Matrix for the Drunkard's Walk

        0      1      2      3      4
0       1      0      0      0      0
1      1/2     0     1/2     0      0
2       0     1/2     0     1/2     0
3       0      0     1/2     0     1/2
4       0      0      0      0      1

(State 0 is home and state 4 is the bar; once reached, he stays there.)
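
A minimal simulation sketch, assuming the matrix above: follow the walk until it reaches home or the bar. The estimates it prints anticipate the absorption questions raised below.

import numpy as np

# Drunkard's walk matrix; states 0 (home), 1, 2, 3, 4 (bar).
P = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0],   # home: he stays there
    [0.5, 0.0, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.0, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.0, 0.0, 1.0],   # bar: he stays there
])

def walk_until_absorbed(P, start, rng):
    # Follow the chain from `start` until it hits a state with pii = 1.
    state, steps = start, 0
    while P[state, state] != 1.0:
        state = rng.choice(len(P), p=P[state])
        steps += 1
    return state, steps

rng = np.random.default_rng(1)
runs = [walk_until_absorbed(P, start=2, rng=rng) for _ in range(10_000)]
ends, lengths = zip(*runs)
print("fraction ending at the bar:", np.mean(np.array(ends) == 4))   # about 0.5
print("average steps to absorption:", np.mean(lengths))              # about 4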
Transient vs Absorbing

• Definition: A state si of a Markov chain is called absorbing if it is


impossible to leave it (i.e., pii = 1).

• Definition: A Markov chain is absorbing if it has at least one


absorbing state, and if from every state it is possible to go to an
absorbing state (not necessarily in one step).

• Definition: In an absorbing Markov chain, a state which is not


absorbing is called transient.
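
These definitions can be checked mechanically. Here is an illustrative sketch (using the Drunkard's Walk matrix from above) that tests whether a chain is absorbing by looking for states with pii = 1 and verifying that one is reachable from every state:

import numpy as np

def is_absorbing_chain(P):
    # At least one absorbing state (pii = 1), and from every state some
    # absorbing state must be reachable (not necessarily in one step).
    n = len(P)
    if not any(P[i, i] == 1.0 for i in range(n)):
        return False
    for start in range(n):
        seen, stack, reached = {start}, [start], False
        while stack:
            i = stack.pop()
            if P[i, i] == 1.0:
                reached = True
                break
            for j in range(n):          # follow every edge with pij > 0
                if P[i, j] > 0 and j not in seen:
                    seen.add(j)
                    stack.append(j)
        if not reached:
            return False
    return True

# The Drunkard's Walk chain is absorbing: states 0 and 4 absorb,
# and both are reachable from every corner.
walk = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.0, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])
print(is_absorbing_chain(walk))   # True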
Long-term Drunkard's Walk
Questions to Ask about a Markov Chain

• What is the probability that the process will eventually reach an


absorbing state?
• What is the probability that the process will end up in a given
absorbing state?
• On the average, how long will it take for the process to be
absorbed?
• On the average, how many times will the process be in each
transient state?
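
A standard way to answer all four questions (not developed in these slides) uses the fundamental matrix N = (I - Q)^(-1), where Q is the block of P restricted to transient states. The sketch below applies it to the Drunkard's Walk:

import numpy as np

# Drunkard's walk split into transient states 1, 2, 3 and absorbing states 0 (home), 4 (bar).
# Q holds transitions among transient states, R transitions from transient to absorbing states.
Q = np.array([
    [0.0, 0.5, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 0.5, 0.0],
])
R = np.array([
    [0.5, 0.0],   # from corner 1: (home, bar)
    [0.0, 0.0],   # from corner 2
    [0.0, 0.5],   # from corner 3
])

# Fundamental matrix: N[i, j] is the expected number of visits to transient
# state j when the chain starts in transient state i.
N = np.linalg.inv(np.eye(3) - Q)

t = N @ np.ones(3)   # expected number of steps before absorption
B = N @ R            # B[i, k] = probability of ending up in absorbing state k

print("expected steps to absorption from corners 1, 2, 3:", t)   # [3, 4, 3]
print("absorption probabilities (home, bar):")
print(B)                                                          # corner 2 row: [0.5, 0.5]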
Example: Running for Election

The President of the United States tells person A his or her intention
to run or not to run in the next election. Then A relays the news to
B, who in turn relays the message to C, and so forth, always to some
new person. We assume that there is a probability a that a person
will change the answer from yes to no when transmitting it to the
next person and a probability b that he or she will change it from no
to yes. We choose as states the message, either yes or no.
Transition Matrix

With the states yes and no, the probabilities above give:

         yes      no
yes     1 - a     a
no        b     1 - b
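
Since the matrix is fully determined by a and b, a short sketch can build it and, using the n-step theorem above, follow how the message degrades over several relayings (the numeric values of a and b below are illustrative; the example leaves them unspecified):

import numpy as np

def message_matrix(a, b):
    # States ordered (yes, no); a = P(yes flips to no), b = P(no flips to yes).
    return np.array([
        [1 - a, a],
        [b, 1 - b],
    ])

P = message_matrix(a=0.1, b=0.2)   # illustrative values only

# Probability that the 5th person in the relay hears "yes", given the
# President said "yes": the (yes, yes) entry of P^5.
P5 = np.linalg.matrix_power(P, 5)
print("P(5th person hears yes | President said yes) =", P5[0, 0])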
Example: Elite Colleges

In the Dark Ages, Harvard, Dartmouth, and Yale admitted only male
students. Assume that, at that time:
• 80 percent of the sons of Harvard men went to Harvard and the
rest went to Yale
• 40 percent of the sons of Yale men went to Yale, and the rest split
evenly between Harvard and Dartmouth
• of the sons of Dartmouth men, 70 percent went to Dartmouth, 20
percent to Harvard, and 10 percent to Yale
Transition Matrix

With states H (Harvard), Y (Yale), and D (Dartmouth), the percentages above give:

        H      Y      D
H      0.8    0.2     0
Y      0.3    0.4    0.3
D      0.2    0.1    0.7
