Unit 04 Finite Markov Decision Processes
More specifically, the agent and environment interact at each of a sequence of discrete time steps, t = 0, 1, 2, 3, . . .. At each time step t, the agent receives some representation of the environment's state, St ∊ S, and on that basis selects an action, At ∊ A(s). One time step later, in part as a consequence of its action, the agent receives a numerical reward, Rt+1 ∊ R ⊂ ℝ, and finds itself in a new state, St+1. The MDP and agent together thereby give rise to a sequence or trajectory that begins like this:

S0, A0, R1, S1, A1, R2, S2, A2, R3, . . .
In a finite MDP, the sets of states, actions, and rewards (S, A, and R) all have a finite number of elements.
In this case, the random variables Rt and St have well-defined discrete probability distributions dependent only on the preceding state and action. That is, for particular values of these random variables, s′ ∊ S and r ∊ R, there is a probability of those values occurring at time t, given particular values of the preceding state and action:

p(s′, r | s, a) ≐ Pr{St = s′, Rt = r | St−1 = s, At−1 = a},

for all s′, s ∊ S, r ∊ R, and a ∊ A(s).
The function p defines the dynamics of the MDP. The dot over the equals sign in the equation reminds us that
it is a definition (in this case of the function p). The dynamics function p : S × R × S × A → [0, 1] is an ordinary deterministic function of four arguments. The ‘|’ in the middle of it comes from the notation for conditional probability, but here it just reminds us that p specifies a probability distribution for each choice of s and a, that is, that

∑_{s′∊S} ∑_{r∊R} p(s′, r | s, a) = 1, for all s ∊ S and a ∊ A(s).
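As a concrete illustration (not from the text), a finite MDP's dynamics can be stored as a lookup table mapping each state–action pair to a distribution over (next state, reward) pairs. The two-state MDP, its state and action names, and all probabilities below are invented purely for this sketch; the only substantive point is the normalization check at the end.

```python
# A minimal sketch of a finite MDP's dynamics p(s', r | s, a) as a lookup table.
# The two-state MDP below is a made-up example, not one from the text.
# p[(s, a)] is a dict mapping (s_next, r) -> probability.
p = {
    ("s0", "stay"): {("s0", 0.0): 0.9, ("s1", 1.0): 0.1},
    ("s0", "go"):   {("s1", 1.0): 0.8, ("s0", 0.0): 0.2},
    ("s1", "stay"): {("s1", 2.0): 1.0},
    ("s1", "go"):   {("s0", 0.0): 0.5, ("s1", 2.0): 0.5},
}

# p specifies a probability distribution for each choice of s and a,
# so the probabilities for each (s, a) must sum to 1.
for (s, a), dist in p.items():
    assert abs(sum(dist.values()) - 1.0) < 1e-9, f"p(.|{s},{a}) does not sum to 1"
```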
In a Markov decision process, the probabilities given by p completely characterize the environment’s
dynamics. That is, the probability of each possible value for St and Rt depends on the immediately preceding
state and action, St−1 and At−1, and, given them, not at all on earlier states and actions.
The state must include information about all aspects of the past agent–environment interaction that make a
difference for the future. If it does, then the state is said to have the Markov property.
From the four-argument dynamics function, p, one can compute anything else one might want to know about the environment, such as the state-transition probabilities (which we denote, with a slight abuse of notation, as a three-argument function p : S × S × A → [0, 1]),

p(s′ | s, a) ≐ Pr{St = s′ | St−1 = s, At−1 = a} = ∑_{r∊R} p(s′, r | s, a).
We can also compute the expected rewards for state–action pairs as a two-argument function r : S × A → ℝ,

r(s, a) ≐ E[Rt | St−1 = s, At−1 = a] = ∑_{r∊R} r ∑_{s′∊S} p(s′, r | s, a),

and the expected rewards for state–action–next-state triples as a three-argument function r : S × A × S → ℝ,

r(s, a, s′) ≐ E[Rt | St−1 = s, At−1 = a, St = s′] = ∑_{r∊R} r p(s′, r | s, a) / p(s′ | s, a).
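Continuing the hypothetical lookup-table representation sketched above, each of these derived quantities is just a marginalization or expectation over the entries of p. The helper names below are my own, not the text's.

```python
# Derived quantities computed from the four-argument dynamics table p
# (the hypothetical representation sketched earlier).

def state_transition_prob(p, s, a, s_next):
    """p(s' | s, a): sum the reward out of p(s', r | s, a)."""
    return sum(prob for (sp, r), prob in p[(s, a)].items() if sp == s_next)

def expected_reward(p, s, a):
    """r(s, a): expected reward for a state-action pair."""
    return sum(r * prob for (sp, r), prob in p[(s, a)].items())

def expected_reward_triple(p, s, a, s_next):
    """r(s, a, s'): expected reward given that the next state is s'."""
    denom = state_transition_prob(p, s, a, s_next)
    if denom == 0.0:
        raise ValueError("transition (s, a) -> s' has probability zero")
    num = sum(r * prob for (sp, r), prob in p[(s, a)].items() if sp == s_next)
    return num / denom
```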
The MDP framework is abstract and flexible and can be applied to many different problems in many different
ways. For example, the time steps need not refer to fixed intervals of real time; they can refer to arbitrary
successive stages of decision making and acting.
Actions can be any decisions we want to learn how to make, and states can be anything we can know that might
be useful in making them.
In particular, the boundary between agent and environment is typically not the same as the physical boundary
of a robot’s or an animal’s body. Usually, the boundary is drawn closer to the agent than that.
The general rule we follow is that anything that cannot be changed arbitrarily by the agent is considered to be
outside of it and thus part of its environment. We do not assume that everything in the environment is
unknown to the agent.
In some cases the agent may know everything about how its environment works and still face a difficult
reinforcement learning task, just as we may know exactly how a puzzle like Rubik’s cube works, but still be
unable to solve it. The agent–environment boundary represents the limit of the agent’s absolute control, not
of its knowledge.
The agent–environment boundary can be located at different places for different purposes. In a complicated
robot, many different agents may be operating at once, each with its own boundary.
The MDP framework is a considerable abstraction of the problem of goal-directed learning from interaction. It
proposes that whatever the details of the sensory, memory, and control apparatus, and whatever objective one
is trying to achieve, any problem of learning goal-directed behavior can be reduced to three signals passing
back and forth between an agent and its environment: one signal to represent the choices made by the agent
(the actions), one signal to represent the basis on which the choices are made (the states), and one signal to
define the agent’s goal (the rewards). This framework may not be sufficient to represent all decision-learning
problems usefully, but it has proved to be widely useful and applicable.
In the simplest case, the return, denoted Gt, is the sum of the rewards received after time step t:

Gt ≐ Rt+1 + Rt+2 + Rt+3 + · · · + RT,     (3.7)

where T is a final time step. This approach makes sense in applications in which there is a natural notion of
final time step, that is, when the agent–environment interaction breaks naturally into subsequences, which we
call episodes. Each episode ends in a special state called the terminal state, followed by a reset to a standard
starting state or to a sample from a standard distribution of starting states. Even if you think of episodes as
ending in different ways, such as winning and losing a game, the next episode begins independently of how the
previous one ended. Thus, the episodes can all be considered to end in the same terminal state, with different
rewards for the different outcomes. Tasks with episodes of this kind are called episodic tasks.
On the other hand, in many cases the agent–environment interaction does not break naturally into identifiable
episodes, but goes on continually without limit.
We call these continuing tasks. The return formulation (3.7) is problematic for continuing tasks because the
final time step would be T = ∞, and the return, which is what we are trying to maximize, could easily be infinite.
The additional concept that we need is that of discounting. According to this approach, the agent tries to select actions so that the sum of the discounted rewards it receives over the future is maximized. In particular, it chooses At to maximize the expected discounted return:

Gt ≐ Rt+1 + γRt+2 + γ²Rt+3 + · · · = ∑_{k=0}^{∞} γ^k Rt+k+1,     (3.8)

where γ is a parameter, 0 ≤ γ ≤ 1, called the discount rate.
Note that although the return (3.8) is a sum of an infinite number of terms, it is still finite as long as γ < 1, even if the reward is nonzero and constant. For example, if the reward is a constant +1, then the return is

Gt = ∑_{k=0}^{∞} γ^k = 1 / (1 − γ).
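A quick numerical check of this geometric-series claim, with an illustrative γ that is not from the text: truncating the discounted sum of a constant +1 reward approaches 1/(1 − γ).

```python
# Discounted return of a constant +1 reward stream, truncated at k_max terms,
# compared with the closed form 1 / (1 - gamma). The numbers are illustrative only.
gamma = 0.9
k_max = 1000
truncated_return = sum(gamma ** k * 1.0 for k in range(k_max))
closed_form = 1.0 / (1.0 - gamma)
print(truncated_return, closed_form)   # both are very close to 10.0
```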
We are almost always considering a particular episode, or stating something that is true for all episodes. Accordingly, in practice we almost always abuse notation slightly by dropping the explicit reference to the episode number. That is, we write St to refer to St,i, the state at time t of episode i, and so on.
We need one other convention to obtain a single notation that covers both episodic and continuing tasks. We
have defined the return as a sum over a finite number of terms in one case (3.7) and as a sum over an infinite
number of terms in the other (3.8). These two can be unified by considering episode termination to be the
entering of a special absorbing state that transitions only to itself and that generates only rewards of zero. For
example, consider a state-transition diagram in which a solid square represents the special absorbing state corresponding to the end of an episode. Starting
from S0, we get the reward sequence +1, +1, +1, 0, 0, 0, . . .. Summing these, we get the same return whether
we sum over the first T rewards (here T = 3) or over the full infinite sequence. This remains true even if we
introduce discounting.
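The same point can be verified numerically. In this small check (illustrative only), zeros stand in for the rewards generated by the absorbing state; the return over the first T = 3 rewards equals the return over the padded sequence, with or without discounting.

```python
# Reward sequence from the absorbing-state example: +1, +1, +1, then zeros forever.
rewards = [1.0, 1.0, 1.0] + [0.0] * 100   # zeros emitted by the absorbing state

def discounted_return(rewards, gamma=1.0):
    return sum(gamma ** k * r for k, r in enumerate(rewards))

# Summing the first T = 3 rewards or the whole padded sequence gives the same value.
for gamma in (1.0, 0.5):
    print(discounted_return(rewards[:3], gamma), discounted_return(rewards, gamma))
```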
Thus, we can define the return, in general, according to (3.8), using the convention of omitting episode numbers when they are not needed, and including the possibility that γ = 1 if the sum remains defined (e.g., because all episodes terminate). Alternatively, we can write

Gt ≐ ∑_{k=t+1}^{T} γ^(k−t−1) Rk,

including the possibility that T = ∞ or γ = 1 (but not both). We use these conventions throughout the rest of the book to simplify notation and to express the close parallels between episodic and continuing tasks.
The value of a state s under a policy π, denoted vπ(s), is the expected return when starting in s and following π thereafter:

vπ(s) ≐ Eπ[Gt | St = s], for all s ∊ S,

where Eπ[·] denotes the expected value of a random variable given that the agent follows policy π, and t is any time step. Note that the value of the terminal state, if any, is always zero. We call the function vπ the state-value function for policy π.
Similarly, we define the value of taking action a in state s under a policy π, denoted qπ(s, a), as the expected return starting from s, taking the action a, and thereafter following policy π:

qπ(s, a) ≐ Eπ[Gt | St = s, At = a].

We call qπ the action-value function for policy π.
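One way to make these definitions concrete in the tabular case is iterative policy evaluation, which repeatedly applies the self-consistency (Bellman) condition for vπ mentioned below. This is only a sketch under stated assumptions: it reuses the hypothetical dynamics table p from earlier and assumes the policy is stored as pi[s] = {a: probability}; none of the code comes from the text.

```python
# Sketch of iterative policy evaluation: estimate v_pi for a given policy pi,
# assuming the dynamics table p from the earlier sketch and
# pi[s] = {a: prob of a in s}, e.g. pi = {"s0": {"go": 1.0}, "s1": {"stay": 1.0}}.

def evaluate_policy(p, pi, gamma=0.9, tol=1e-8):
    states = {s for (s, _a) in p}
    v = {s: 0.0 for s in states}          # value of any terminal state stays zero
    while True:
        delta = 0.0
        for s in states:
            new_v = 0.0
            for a, pi_prob in pi[s].items():
                for (s_next, r), prob in p[(s, a)].items():
                    new_v += pi_prob * prob * (r + gamma * v.get(s_next, 0.0))
            delta = max(delta, abs(new_v - v[s]))
            v[s] = new_v
        if delta < tol:
            return v
```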
A policy π is defined to be better than or equal to a policy π′ if its expected return is greater than or equal to that of π′ for all states; in other words, π ≥ π′ if and only if vπ(s) ≥ vπ′(s) for all s ∊ S.
There is always at least one policy that is better than or equal to all other policies. This is an optimal policy.
Although there may be more than one, we denote all the optimal policies by π*. They share the same state-value function, called the optimal state-value function, denoted v*, and defined as

v*(s) ≐ max_π vπ(s), for all s ∊ S.
Optimal policies also share the same optimal action-value function, denoted q*, and defined as

q*(s, a) ≐ max_π qπ(s, a),
for all s ∊ S and a ∊ A(s). For the state–action pair (s, a), this function gives the expected return for taking action
a in state s and thereafter following an optimal policy. Thus, we can write q* in terms of v* as follows:

q*(s, a) = E[Rt+1 + γ v*(St+1) | St = s, At = a].
Because v* is the value function for a policy, it must satisfy the self-consistency condition given by the Bellman
equation for state values. Because it is the optimal value function, however, v*’s consistency condition can be
written in a special form without reference to any specific policy. This is the Bellman equation for v*, or the
Bellman optimality equation. Intuitively, the Bellman optimality equation expresses the fact that the value of a
state under an optimal policy must equal the expected return for the best action from that state:

v*(s) = max_a qπ*(s, a)
     = max_a E[Rt+1 + γ v*(St+1) | St = s, At = a]
     = max_a ∑_{s′, r} p(s′, r | s, a) [r + γ v*(s′)],

where the max is taken over a ∊ A(s).
The last two equations are two forms of the Bellman optimality equation for v*. The Bellman optimality equation for q* is

q*(s, a) = E[Rt+1 + γ max_{a′} q*(St+1, a′) | St = s, At = a]
        = ∑_{s′, r} p(s′, r | s, a) [r + γ max_{a′} q*(s′, a′)].
Backup diagrams show graphically the spans of future states and actions considered in the Bellman optimality equations for v* and q*: one diagram represents the Bellman optimality equation for v*, and the other represents the Bellman optimality equation for q*.
The Bellman optimality equation is actually a system of equations, one for each state, so if there are n states,
then there are n equations in n unknowns. If the dynamics p of the environment are known, then in principle
one can solve this system of equations for v* using any one of a variety of methods for solving systems of
nonlinear equations.
Once one has v*, it is relatively easy to determine an optimal policy. For each state s, there will be one or more
actions at which the maximum is obtained in the Bellman optimality equation. Any policy that assigns nonzero
probability only to these actions is an optimal policy.
Another way of saying this is that any policy that is greedy with respect to the optimal evaluation function v*
is an optimal policy.
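In the tabular case, one standard way to carry out this computation is value iteration: repeatedly apply the Bellman optimality backup until the value estimates stop changing, then read off a greedy policy. The sketch below reuses the hypothetical dynamics table p from earlier and is not code from the text.

```python
# Sketch of value iteration: iterate the Bellman optimality backup for v*,
# then extract a policy that is greedy with respect to the result.

def value_iteration(p, gamma=0.9, tol=1e-8):
    states = {s for (s, _a) in p}
    actions = {s: [a for (s2, a) in p if s2 == s] for s in states}
    v = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(prob * (r + gamma * v.get(s_next, 0.0))
                    for (s_next, r), prob in p[(s, a)].items())
                for a in actions[s]
            )
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < tol:
            break
    # Any policy that is greedy with respect to v* is an optimal policy.
    greedy = {
        s: max(actions[s],
               key=lambda a: sum(prob * (r + gamma * v.get(s_next, 0.0))
                                 for (s_next, r), prob in p[(s, a)].items()))
        for s in states
    }
    return v, greedy
```

Each sweep applies the max over actions from the Bellman optimality equation; for γ < 1 the backup is a contraction, so the sweeps converge.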
Having q* makes choosing optimal actions even easier. With q*, the agent does not even have to do a one-step-
ahead search: for any state s, it can simply find any action that maximizes q*(s, a). The action-value function
effectively caches the results of all one-step-ahead searches. It provides the optimal expected long-term return
as a value that is locally and immediately available for each state–action pair.
Hence, at the cost of representing a function of state–action pairs, instead of just of states, the optimal action-value function allows optimal actions to be selected without having to know anything about possible successor states and their values, that is, without having to know anything about the environment's dynamics.
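With a q* table in hand, action selection really is just a lookup and an argmax; no dynamics model is consulted. The tiny q_star table below is hypothetical, used only to show the pattern.

```python
# Greedy action selection from a (hypothetical) optimal action-value table q_star.
# q_star[(s, a)] holds the optimal expected return for taking a in s.
q_star = {("s0", "stay"): 4.1, ("s0", "go"): 5.3,
          ("s1", "stay"): 6.0, ("s1", "go"): 5.7}

def greedy_action(q_star, s):
    # No one-step-ahead search and no model of p needed: just an argmax over a.
    candidates = [(a, q) for (s2, a), q in q_star.items() if s2 == s]
    return max(candidates, key=lambda item: item[1])[0]

print(greedy_action(q_star, "s0"))   # -> "go"
```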
3.7 Optimality and Approximation
A well-defined notion of optimality organizes the approach to learning and provides a way to understand the
theoretical properties of various learning algorithms, but it is an ideal that agents can only approximate. Even
if we have a complete and accurate model of the environment’s dynamics, it is usually not possible to simply
compute an optimal policy by solving the Bellman optimality equation.
For example, board games such as chess are a tiny fraction of human experience, yet large, custom-designed
computers still cannot compute the optimal moves.
A critical aspect of the problem facing the agent is always the computational power available to it, in particular,
the amount of computation it can perform in a single time step.
The memory available is also an important constraint. A large amount of memory is often required to build up
approximations of value functions, policies, and models. In tasks with small, finite state sets, it is possible to
form these approximations using arrays or tables with one entry for each state (or state–action pair).
This we call the tabular case, and the corresponding methods we call tabular methods. In many cases of
practical interest, however, there are far more states than could possibly be entries in a table. In these cases
the functions must be approximated, using some sort of more compact parameterized function representation.
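To make the contrast concrete, here is a rough sketch (the sizes and the linear form are made up for illustration) of a tabular value function versus a compact parameterized approximation.

```python
import numpy as np

# Tabular case: one entry per state is feasible for a small, finite state set.
n_states = 1000
v_table = np.zeros(n_states)        # v[s] stored exactly, one array cell per state

# When there are far more states than could fit in a table, the value function
# must be approximated by a compact parameterized function, e.g. a linear
# function of a feature vector describing the state.
n_features = 32
w = np.zeros(n_features)            # far fewer parameters than states

def v_approx(state_features, w):
    return float(np.dot(w, state_features))
```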
Our framing of the reinforcement learning problem forces us to settle for approximations. However, it also
presents us with some unique opportunities for achieving useful approximations.
For example, in approximating optimal behavior, there may be many states that the agent faces with such a
low probability that selecting suboptimal actions for them has little impact on the amount of reward the agent
receives.
The online nature of reinforcement learning makes it possible to approximate optimal policies in ways that
put more effort into learning to make good decisions for frequently encountered states, at the expense of less
effort for infrequently encountered states. This is one key property that distinguishes reinforcement learning
from other approaches to approximately solving MDPs.