UNIVERSITÄT ULM · SCIENDO · DOCENDO · CURANDO ·
Markov Chains and Monte–Carlo
Simulation
Ulm University
Institute of Stochastics
Lecture Notes
Prof. Dr. Volker Schmidt
Summer 2010
Ulm, July 2010

Contents
1 Introduction 4
2 Markov Chains 5
2.1 Specification of the Model and Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.1 State Space, Initial Distribution and Transition Probabilities . . . . . . . . . . . . . . . . . 5
2.1.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.1.3 Recursive Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.4 The Matrix of the n–Step Transition Probabilities . . . . . . . . . . . . . . . . . . . . . . . 12
2.2 Ergodicity and Stationarity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2.1 Basic Definitions and Quasi-positive Transition Matrices . . . . . . . . . . . . . . . . . . . . 16
2.2.2 Estimates for the Rate of Convergence; Perron–Frobenius–Theorem . . . . . . . . . . . . . 20
2.2.3 Irreducible and Aperiodic Markov Chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2.4 Stationary Initial Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.2.5 Direct and Iterative Computation Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.3 Reversibility; Estimates for the Rate of Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.3.1 Definition and Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.3.2 Recursive Construction of the „Past” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.3.3 Determining the Rate of Convergence under Reversibility . . . . . . . . . . . . . . . . . . . 43
2.3.4 Multiplicative Reversible Version of the Transition Matrix; Spectral Representation . . . . . 45
2.3.5 Alternative Estimate for the Rate of Convergence; χ²-Contrast . . . . . . . . . . . . . . . . 46
2.3.6 Dirichlet–Forms and Rayleigh–Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.3.7 Bounds for the Eigenvalues λ_2 and λ_ℓ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3 Monte–Carlo Simulation 58
3.1 Generation of Pseudo-Random Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.1.1 Simple Applications; Monte–Carlo Estimators . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.1.2 Linear Congruential Generators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.1.3 Statistical Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.2 Transformation of Uniformly Distributed Random Numbers . . . . . . . . . . . . . . . . . . . . . . 68
3.2.1 Inversion Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.2.2 Transformation Algorithms for Discrete Distributions . . . . . . . . . . . . . . . . . . . . . 71
3.2.3 Acceptance-Rejection Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.2.4 Quotients of Uniformly Distributed Random Variables . . . . . . . . . . . . . . . . . . . . . 79
3.3 Simulation Methods Based on Markov Chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.3.1 Example: Hard–Core Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.3.2 Gibbs Sampler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

3.3.3 Metropolis–Hastings Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3.4 Error Analysis for MCMC Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.4.1 Estimate for the Rate of Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.4.2 MCMC Estimators; Bias and Fundamental Matrix . . . . . . . . . . . . . . . . . . . . . . . 96
3.4.3 Asymptotic Variance of Estimation; Mean Squared Error . . . . . . . . . . . . . . . . . . . 99
3.5 Coupling Algorithms; Perfect MCMC Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3.5.1 Coupling to the Future; Counterexample . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
3.5.2 Propp–Wilson Algorithm; Coupling from the Past . . . . . . . . . . . . . . . . . . . . . . . 106
3.5.3 Monotone Coupling Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
3.5.4 Examples: Birth–and–Death Processes; Ising Model . . . . . . . . . . . . . . . . . . . . . . 110
3.5.5 Read–Once Modification of the CFTP Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 114

1 Introduction
• Markov chains
– are a fundamental class of stochastic models for sequences of non–independent random variables, i.e.
of random variables possessing a specific dependency structure.
– have numerous applications e.g. in insurance and finance.
– also play an important role in mathematical modelling and analysis in a variety of other fields such as
physics, chemistry, the life sciences, and materials science.
• Questions of scientific interest often exhibit a degree of complexity that makes it very difficult to find an
adequate mathematical model based solely on analytical formulae.
• In these cases Markov chains can serve as an alternative tool, as they are crucial for the construction of
computer algorithms for Markov Chain Monte Carlo (MCMC) simulation of the mathematical models
under consideration.
This course on Markov chains and Monte Carlo simulation will be based on the methods and models introduced
in the course “Elementare Wahrscheinlichkeitsrechnung und Statistik”. Further knowledge of probability theory
and statistics can be useful but is not required.
• The main focus of this course will be on the following topics:
– discrete–time Markov chains with finite state space
– stationarity and ergodicity
– Markov Chain Monte Carlo (MCMC)
– reversibility and coupling algorithms
• Notions and results introduced in “Elementare Wahrscheinlichkeitsrechnung und Statistik” will be used
frequently. References to these lecture notes will be labelled by the prefix “WR” in front of the number
specifying the corresponding section, theorem, lemma, etc.
• The following list contains only a small collection of introductory texts that can be recommended for
in-depth studies of the subject, complementing the lecture notes.
– E. Behrends (2000) Introduction to Markov Chains. Vieweg, Braunschweig
– P. Bremaud (2008) Markov Chains, Gibbs Fields, Monte Carlo Simulation, and Queues. Springer,
New York
– B. Chalmond (2003) Modeling and Inverse Problems in Image Analysis. Springer, New York
– D. Gamerman, H. Lopes (2006) Markov Chain Monte Carlo: Stochastic Simulation for Bayesian
Inference. Chapman & Hall, London
– O. Häggström (2002) Finite Markov Chains and Algorithmic Applications. Cambridge University
Press, Cambridge
– D. Levin, Y. Peres, E. Wilmer (2009) Markov Chains and Mixing Times. American Mathematical Society,
Providence
– S. Resnick (1992) Adventures in Stochastic Processes. Birkhäuser, Boston
– C. Robert, G. Casella (2009) Introducing Monte Carlo Methods with R. Springer, Berlin
– T. Rolski, H. Schmidli, V. Schmidt, J. Teugels (1999) Stochastic Processes for Insurance and Finance.
Wiley, Chichester
– Y. Suhov, M. Kelbert (2008) Probability and Statistics by Example. Volume 2. Markov Chains: A
Primer in Random Processes and their Applications. Cambridge University Press, Cambridge
– H. Thorisson (2002) Coupling, Stationarity, and Regeneration. Springer, New York
– G. Winkler (2003) Image Analysis, Random Fields and Dynamic Monte Carlo Methods. Springer,
Berlin

2 Markov Chains
• Markov chains can describe the (temporal) dynamics of objects, systems, etc.
– that can possess one of finitely or countably many possible configurations at a given time,
– where these configurations will be called the states of the considered object or system, respectively.
• Examples for this class of objects and systems are
– the current prices of products like insurance policies, stocks or bonds, if they are observed on a discrete
(e.g. integer) time scale,
– the monthly profit of a business,
– the current length of the checkout lines (so–called “queues”) in a grocery store,
– the vector of temperature, air pressure, precipitation and wind velocity recorded on an hourly basis at
the meteorological office Ulm–Kuhberg,
– digital maps, for example describing the momentary spatial dispersion of a disease.
– microscopical 2D or 3D images describing the current state (i.e. structural geometrical properties) of
biological tissues or technical materials such as polymers, metals or ceramics.
Remarks
• In this course we will focus on discrete–time Markov chains, i.e., the temporal dynamics of the considered
objects, systems etc. will be observed stepwise, e.g. at integer points in time.
• The algorithms for Markov Chain Monte Carlo simulation we will discuss in part II of the course are
based on exactly these discrete–time Markov chains.
• The number of potential states can be very high.
• For mathematical reasons it is therefore convenient to consider the case of infinitely many states as
well. As long as the infinite case is restricted to countably many states, only slight methodological
changes will be necessary.
2.1 Specification of the Model and Examples
2.1.1 State Space, Initial Distribution and Transition Probabilities
• The stochastic model of a discrete–time Markov chain with finitely many states consists of three components:
state space, initial distribution and transition matrix.
– The model is based on the (finite) set of all possible states called the state space of the Markov chain.
W.l.o.g. the state space can be identified with the set E = {1, 2, . . . , ℓ} where ℓ ∈ ℕ = {1, 2, . . .} is an
arbitrary but fixed natural number.
– For each i ∈ E, let α_i be the probability of the system or object to be in state i at time n = 0, where
it is assumed that

    α_i ∈ [0, 1] ,       ∑_{i=1}^{ℓ} α_i = 1 .       (1)

The vector α = (α_1, . . . , α_ℓ)⊤ of the probabilities α_1, . . . , α_ℓ defines the initial distribution of the
Markov chain.
– Furthermore, for each pair i, j ∈ E we consider the (conditional) probability p_ij ∈ [0, 1] for the
transition of the object or system from state i to j within one time step.
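The following short Python sketch is not part of the original lecture notes; the 3-state example, the concrete numbers and the use of NumPy are illustrative assumptions. It shows how the three components (state space E, initial distribution α and transition probabilities p_ij) can be represented and how a path X_0, X_1, . . . of the chain can be simulated from them.

    import numpy as np

    # Illustrative 3-state example; all numbers are hypothetical and chosen only for this sketch.
    rng = np.random.default_rng(2010)

    l = 3                                      # state space E = {1, ..., l}
    alpha = np.array([0.5, 0.3, 0.2])          # initial distribution: entries in [0, 1], sum = 1
    P = np.array([[0.7, 0.2, 0.1],             # transition matrix: row i holds p_i1, ..., p_il,
                  [0.3, 0.4, 0.3],             # each row sums to 1
                  [0.2, 0.3, 0.5]])

    assert np.isclose(alpha.sum(), 1.0) and np.allclose(P.sum(axis=1), 1.0)

    def sample_path(alpha, P, n_steps):
        """Draw X_0 from alpha, then X_{k+1} from row X_k of P; states are reported as 1, ..., l."""
        x = rng.choice(len(alpha), p=alpha)    # X_0 (as 0-based index)
        path = [x + 1]
        for _ in range(n_steps):
            x = rng.choice(len(alpha), p=P[x]) # X_{k+1} given X_k = x
            path.append(x + 1)
        return path

    print(sample_path(alpha, P, n_steps=10))   # a random path X_0, X_1, ..., X_10

Storing P row-wise makes P[x] exactly the conditional distribution of the next state given the current state x, which mirrors the definition of the transition probabilities p_ij above.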