As n was arbitrary, this reasoning holds for any n, and therefore for reversible Markov chains π is always a steady-state distribution of Pr(Xn+1 = j | Xn = i) for every n.
This condition is known as the detailed balance condition (or local balance equation).
When it is in state E, there is a 70% chance of it moving to A and a 30% chance of it staying in E. The sequence of states of the machine is a Markov chain.
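To make the example concrete, here is a minimal Python sketch that simulates this machine as a Markov chain. It assumes the state-A behaviour described elsewhere in this section (a 60% chance of staying in A and a 40% chance of moving to E); the function name step is purely illustrative.

```python
import random

# Transition rows for the two-state machine: from A, stay with 0.6 or
# move to E with 0.4; from E, move to A with 0.7 or stay with 0.3.
P = {"A": {"A": 0.6, "E": 0.4}, "E": {"A": 0.7, "E": 0.3}}

def step(state):
    """Sample the next state from the current state's transition row."""
    r = random.random()
    cumulative = 0.0
    for nxt, p in P[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point round-off

state = "A"
path = [state]
for _ in range(10):
    state = step(state)
    path.append(state)
print(path)  # e.g. ['A', 'A', 'E', 'A', ...]
```

Because each call to step looks only at the current state, the sampled sequence has the Markov property by construction.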
Other stochastic processes can satisfy the Markov property, the property that past behavior does not affect the process, only the present state.
These descriptions highlight the structure of the Markov chain that is independent of the initial distribution Pr(X1 = x1).
[1] A state i is said to be transient if, given that we start in state i, there is a non-zero probability that we will never return to i.
An example of a stochastic process which is not a Markov chain is the model of a machine which has states A and E and moves to A from either state with 50% chance if it has ever visited A before, and 20% chance if it has never visited A before.
A state j is said to be accessible from a state i (written i → j) if a system started in state i has a non-zero probability of transitioning into state j at some point.
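Accessibility is a reachability question on the directed graph whose edges are the positive entries of the transition matrix, so it can be checked with a simple graph search. A sketch in Python, assuming numpy and a small hypothetical three-state matrix:

```python
import numpy as np

def accessible(P, i, j):
    """Return True if state j is accessible from state i, i.e. some
    path of positive-probability transitions leads from i to j."""
    seen = {i}
    frontier = [i]
    while frontier:
        s = frontier.pop()
        for t in range(P.shape[0]):
            if P[s, t] > 0 and t not in seen:
                seen.add(t)
                frontier.append(t)
    return j in seen

# Hypothetical example: state 2 is absorbing, so no other state is
# accessible from it.
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])
print(accessible(P, 0, 2))  # True
print(accessible(P, 2, 0))  # False
```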
[2] A state is final if and only if its communicating class is closed.
Even with time-inhomogeneous Markov chains, where multiple transition matrices are used, if each such matrix exhibits detailed balance with the desired π distribution, this necessarily implies that π is a steady-state distribution of the Markov chain.
The detailed balance condition states that upon each payment, the other person pays exactly the same amount of money back.
A Markov chain can be described by a stochastic matrix, which lists the probabilities of moving to each state from any individual state.
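As a sketch, the two-state machine used in this section can be written as a stochastic matrix in Python (numpy assumed); the defining property is that every row is a probability distribution:

```python
import numpy as np

# Row i lists the probabilities of moving from state i to each state.
P = np.array([[0.6, 0.4],   # from A: stay in A, move to E
              [0.7, 0.3]])  # from E: move to A, stay in E

print(P.sum(axis=1))  # [1. 1.]: each row sums to 1
```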
[4][5] A Markov chain is said to be reversible if there is a probability distribution π over its states such that πi Pr(Xn+1 = j | Xn = i) = πj Pr(Xn+1 = i | Xn = j) for all times n and all states i and j.
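The definition can be checked numerically by testing πi pij = πj pji for every pair of states. A minimal Python sketch (numpy assumed; the symmetric two-state walk is a toy input that is reversible with respect to the uniform distribution):

```python
import numpy as np

def is_reversible(P, pi, tol=1e-12):
    """Detailed balance: pi_i * P[i, j] must equal pi_j * P[j, i]
    for all i, j, which is exactly symmetry of the matrix below."""
    flows = pi[:, None] * P  # entry (i, j) is pi_i * P[i, j]
    return np.allclose(flows, flows.T, atol=tol)

P = np.array([[0.5, 0.5],
              [0.5, 0.5]])
pi = np.array([0.5, 0.5])
print(is_reversible(P, pi))  # True
```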
If there is a probability distribution over states π such that πj = Σi∈S πi Pr(Xn+1 = j | Xn = i) for every state j and every time n, then π is a stationary distribution of the Markov chain.
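In matrix form this condition reads πP = π, so for a finite chain a stationary distribution can be computed as a normalized left eigenvector of P with eigenvalue 1. A sketch in Python, assuming numpy and reusing the two-state machine from this section:

```python
import numpy as np

def stationary_distribution(P):
    """Return pi with pi @ P == pi and pi summing to 1, taken from
    the left eigenvector of P whose eigenvalue is (numerically) 1."""
    w, v = np.linalg.eig(P.T)
    k = np.argmin(np.abs(w - 1.0))
    pi = np.real(v[:, k])
    return pi / pi.sum()

P = np.array([[0.6, 0.4],
              [0.7, 0.3]])
pi = stationary_distribution(P)
print(pi)                       # approx. [0.636 0.364], i.e. (7/11, 4/11)
print(np.allclose(pi @ P, pi))  # True
```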
[1] Markov chains are often described by a sequence of directed graphs, where the edges of graph n are labeled by the probabilities of going from one state at time n to the other states at time n + 1, Pr(Xn+1 = x | Xn = xn).
For instance, a machine may have two states, A and E. When it is in state A, there is a 40% chance of it moving to state E and a 60% chance of it staying in state A.
[7] For example, consider the Markov chain shown in the accompanying diagram (not reproduced here); this Markov chain is not reversible.
Markov chains can have properties including periodicity, reversibility and stationarity.
Reversible Markov chains are common in Markov chain Monte Carlo (MCMC) approaches because the detailed balance equation for a desired distribution π necessarily implies that the Markov chain has been constructed so that π is a steady-state distribution.
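One standard way such chains are constructed is the Metropolis rule: propose a move symmetrically and accept it with probability min(1, πj/πi), which enforces detailed balance for the target π. A minimal sketch in Python; the three-state target values here are hypothetical, and numpy is assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
pi = np.array([0.2, 0.3, 0.5])  # hypothetical target distribution

def metropolis_step(state):
    """Uniform (symmetric) proposal plus the Metropolis acceptance
    rule; the resulting chain satisfies detailed balance w.r.t. pi."""
    proposal = rng.integers(len(pi))
    if rng.random() < min(1.0, pi[proposal] / pi[state]):
        return proposal
    return state

state = 0
counts = np.zeros(len(pi))
for _ in range(100_000):
    state = metropolis_step(state)
    counts[state] += 1
print(counts / counts.sum())  # empirical frequencies approach pi
```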
A Markov chain is said to be irreducible if its state space is a single communicating class; in other words, if it is possible to get to any state from any state.
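Irreducibility can be tested as a reachability property of the graph of positive transition probabilities: the chain is irreducible exactly when every state can reach every other. A Python sketch using a Floyd-Warshall-style transitive closure (numpy assumed):

```python
import numpy as np

def is_irreducible(P):
    """True iff the whole state space is one communicating class."""
    n = P.shape[0]
    reach = P > 0
    for k in range(n):          # transitive closure over intermediate
        for i in range(n):      # states, Floyd-Warshall style
            if reach[i, k]:
                reach[i] |= reach[k]
    np.fill_diagonal(reach, True)  # every state reaches itself in 0 steps
    return bool(reach.all())

P = np.array([[0.6, 0.4],
              [0.7, 0.3]])
print(is_irreducible(P))  # True: the two states communicate
```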
[1] The marginal distribution Pr(Xn = x) is the distribution over states at time n. The initial distribution is Pr(X0 = x). The evolution of the process through one time step is described by Pr(Xn+1 = j) = Σr∈S prj Pr(Xn = r).
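Numerically, one step of this evolution is a vector-matrix product: the row vector of marginal probabilities at time n is multiplied by the transition matrix. A short Python sketch, assuming numpy and starting the two-state machine of this section in state A:

```python
import numpy as np

P = np.array([[0.6, 0.4],
              [0.7, 0.3]])

mu = np.array([1.0, 0.0])  # initial distribution: start in state A
for n in range(4):
    print(n, mu)   # marginal distribution Pr(Xn = x)
    mu = mu @ P    # one time step of evolution
```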
A communicating class is closed if and only if it has no outgoing arrows in this graph.
In probability, a discrete-time Markov chain (DTMC) is a sequence of random variables, known as a stochastic process, in which the value of the next variable depends only on the value of the current variable, and not on any variables in the past.
[citation needed] The probability of going from state i to state j in n time steps is pij(n) = Pr(Xn = j | X0 = i), and the single-step transition probability is pij = Pr(X1 = j | X0 = i). (The superscript (n) is an index, and not an exponent.)
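For a time-homogeneous chain, the n-step probabilities are therefore the entries of the n-th matrix power of the transition matrix, which is easy to check numerically. A sketch in Python (numpy assumed):

```python
import numpy as np

P = np.array([[0.6, 0.4],
              [0.7, 0.3]])

P5 = np.linalg.matrix_power(P, 5)  # entries are p_ij^(5)
print(P5[0, 1])  # probability of being in E after 5 steps from A
```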
Hence, it does not have the Markov property.
Therefore, state i is transient if Pr(Ti < ∞ | X0 = i) < 1. State i is recurrent (or persistent) if it is not transient.
The simplest such distribution is that of a single exponentially distributed transition.
[1] The set of communicating classes forms a directed, acyclic graph by inheriting the arrows from the original state space.
Otherwise (k > 1), the state is said to be periodic with period k.
Formally, let the random variable Ti be the first return time to state i (the "hitting time"): Ti = inf{n ≥ 1 : Xn = i}. The number fii(n) = Pr(Ti = n | X0 = i) is the probability that the chain returns to state i for the first time after n steps.
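The distribution of Ti can be explored by simulation: run the chain from state i and record the first step at which it comes back. A Monte Carlo sketch in Python (numpy assumed; max_steps is an artificial cutoff, since a transient chain may never return):

```python
import numpy as np

rng = np.random.default_rng(0)

def first_return_time(P, i, max_steps=10_000):
    """Simulate from state i; return the first n >= 1 with X_n = i,
    or None if no return is observed within max_steps."""
    state = i
    for n in range(1, max_steps + 1):
        state = rng.choice(len(P), p=P[state])
        if state == i:
            return n
    return None

P = np.array([[0.6, 0.4],
              [0.7, 0.3]])
samples = [first_return_time(P, 0) for _ in range(1_000)]
print(np.mean([s for s in samples if s is not None]))  # approx. 11/7
```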
[1] If a chain has more than one closed communicating class, its stationary distributions will not be unique: each closed communicating class Ci has its own unique stationary distribution πi, and extending these distributions to the overall chain, setting all values to zero outside the class, shows that the set of invariant measures of the original chain is the set of all convex combinations of the πi.
The process continues forever, indexed by the natural numbers.
The hitting time is the time, starting from a given set of states, until the chain arrives in a given state or set of states.
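For a finite chain, expected hitting times of a target set solve a linear system: ki = 0 for states in the target, and ki = 1 + Σj pij kj otherwise. A Python sketch (numpy assumed; the helper name expected_hitting_times is illustrative):

```python
import numpy as np

def expected_hitting_times(P, target):
    """Solve k_i = 1 + sum_j P[i, j] * k_j for i outside `target`,
    with k_i = 0 on `target` itself."""
    n = P.shape[0]
    others = [i for i in range(n) if i not in target]
    Q = P[np.ix_(others, others)]  # transitions among non-target states
    k = np.zeros(n)
    k[others] = np.linalg.solve(np.eye(len(others)) - Q,
                                np.ones(len(others)))
    return k

P = np.array([[0.6, 0.4],
              [0.7, 0.3]])
print(expected_hitting_times(P, {1}))  # from A, 2.5 expected steps to E
```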
Periodicity is a class property; that is, if a state has period k, then every state in its communicating class has period k.
Each state can be described as transient or recurrent, depending on the probability of the chain ever returning to that state.
For any time-homogeneous Markov chain given by a transition matrix P ∈ Rn×n, any norm ||·|| on Rn×n that is induced by a scalar product, and any probability vector π, there exists a unique transition matrix P* that is reversible according to π and closest to P according to the norm ||·||. The matrix P* can be computed by solving a quadratic-convex optimization problem.
Recurrence and transience are class properties; that is, they either hold or do not hold equally for all members of a communicating class.
For example, suppose it is possible to return to the state in {6, 8, 10, 12, ...} time steps; then k would be 2, even though 2 does not appear in this list.
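Since the period is the greatest common divisor of the possible return times, this example can be verified directly with the Python standard library:

```python
from math import gcd
from functools import reduce

return_times = [6, 8, 10, 12]  # possible return times from the example
k = reduce(gcd, return_times)
print(k)  # 2, even though a return in exactly 2 steps is impossible
```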
A continuous-time Markov chain is like a discrete-time Markov chain, but it moves between states continuously through time rather than in discrete time steps.