Probability Theory - When Can We Interchange The Derivative With An Expectation - Mathematics Stack Exchange
Let $(X_t)$ be a stochastic process, and define a new stochastic process by $Y_t = \int_0^t f(X_s)\,ds$. Is it true in general that $\frac{d}{dt} E(Y_t) = E(f(X_t))$? If not, under what conditions would we be allowed to interchange the derivative with the expectation?
probability-theory stochastic-processes
asked Oct 20 '12 at 23:34
Jonas
1,765 2 20 28
@Jonas: no, it is not always true, but if you can interchange the expectation and the integral then it is true, so you only have to derive the conditions under which such an operation is OK. Regards. – TheBridge Oct 22 '12 at 20:58
Where could I find information about when such an operation is OK? – jmbejara Dec 4 '13 at 20:04
Interchanging a derivative with an expectation or an integral can be done using the dominated convergence
theorem. Here is a version of such a result.
Lemma. Let $X \in \mathcal{X}$ be a random variable and $g \colon \mathbb{R} \times \mathcal{X} \to \mathbb{R}$ a function such that $g(t, X)$ is integrable for all $t$ and $g$ is continuously differentiable with respect to $t$. Assume that there is a random variable $Z$ such that $\big| \frac{\partial}{\partial t} g(t, X) \big| \le Z$ a.s. for all $t$ and $E(Z) < \infty$. Then

$$\frac{\partial}{\partial t} E\big(g(t, X)\big) = E\Big(\frac{\partial}{\partial t} g(t, X)\Big).$$
Proof. We have

$$\begin{aligned}
\frac{\partial}{\partial t} E\big(g(t, X)\big)
&= \lim_{h \to 0} \frac{1}{h} \Big( E\big(g(t+h, X)\big) - E\big(g(t, X)\big) \Big) \\
&= \lim_{h \to 0} E\Big( \frac{g(t+h, X) - g(t, X)}{h} \Big) \\
&= \lim_{h \to 0} E\Big( \frac{\partial}{\partial t} g(\tau(h), X) \Big),
\end{aligned}$$

where $\tau(h) \in (t, t+h)$ exists by the mean value theorem. By assumption we have

$$\Big| \frac{\partial}{\partial t} g(\tau(h), X) \Big| \le Z$$

for all $h$, so the dominated convergence theorem lets us pull the limit inside the expectation:

$$\frac{\partial}{\partial t} E\big(g(t, X)\big) = E\Big( \lim_{h \to 0} \frac{\partial}{\partial t} g(\tau(h), X) \Big) = E\Big( \frac{\partial}{\partial t} g(t, X) \Big). \qquad \square$$
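As a quick numerical sanity check of the lemma (not part of the original answer), here is a Monte Carlo sketch with $g(t, X) = \sin(tX)$ and $X$ standard normal. Since $\big|\frac{\partial}{\partial t} g(t, X)\big| = |X \cos(tX)| \le |X|$ and $E|X| < \infty$, the domination hypothesis holds with $Z = |X|$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=200_000)  # samples of X ~ N(0, 1)

# g(t, X) = sin(t X); its t-derivative is X cos(t X), dominated by Z = |X|
def g(t, x):
    return np.sin(t * x)

def dg_dt(t, x):
    return x * np.cos(t * x)

t, h = 0.7, 1e-4

# Left-hand side: central finite difference of t -> E(g(t, X))
lhs = (g(t + h, X).mean() - g(t - h, X).mean()) / (2 * h)

# Right-hand side: E(d/dt g(t, X))
rhs = dg_dt(t, X).mean()

# The two estimates agree up to finite-difference and Monte Carlo error
print(lhs, rhs)
```

Using the same sample array on both sides makes the Monte Carlo noise cancel almost entirely, so the remaining gap is essentially the $O(h^2)$ finite-difference error.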
In your example, a sufficient condition for $\frac{d}{dt} E(Y_t) = E(f(X_t))$ to hold would be for $f$ to be bounded.
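A sketch of why boundedness of $f$ suffices (assuming $f$ is continuous and the paths of $X$ are continuous, so that $g$ below is continuously differentiable in $t$): apply the lemma with $X$ taken to be the whole random path and $g(t, X) = \int_0^t f(X_s)\,ds$, so that

$$\frac{\partial}{\partial t} g(t, X) = f(X_t), \qquad \Big| \frac{\partial}{\partial t} g(t, X) \Big| \le \sup_x |f(x)| =: Z,$$

and the constant $Z$ is trivially integrable, so the lemma yields $\frac{d}{dt} E(Y_t) = E(f(X_t))$.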
If you want to take the derivative only at a single point $t = t^*$, boundedness of the derivative is only required in a neighbourhood of $t^*$. Variants of the lemma can be derived by using different convergence theorems in place of the dominated convergence theorem, e.g. by using the Vitali convergence theorem.
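For the original question, here is a simulation sketch (my own illustration, assuming $f = \tanh$, which is bounded by $1$, and $X$ a standard Brownian motion) comparing a finite-difference derivative of $E(Y_t)$ against $E(f(X_t))$:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, T = 10_000, 500, 1.0
dt = T / n_steps

# Brownian motion paths X_s; f = tanh is bounded, |f| <= 1
dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
X = np.cumsum(dW, axis=1)
fX = np.tanh(X)

# E(Y_t) with Y_t = int_0^t f(X_s) ds, via a Riemann sum over the time grid
EY = np.cumsum(fX.mean(axis=0)) * dt

k = n_steps // 2                            # inspect t = 0.5
lhs = (EY[k + 1] - EY[k - 1]) / (2 * dt)    # d/dt E(Y_t), finite difference
rhs = fX[:, k].mean()                       # E(f(X_t))

# The two estimates agree up to time-discretization and Monte Carlo error
print(lhs, rhs)
```

Since both estimates are built from the same simulated paths, their noise is strongly correlated and the observed gap stays well below the raw Monte Carlo standard error.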
Do you know better results than this? – jochen Oct 26 '16 at 20:25
@Did ah, yes, your Fubini solution is more elegant. – jochen Oct 27 '16 at 8:03
@batman why not? You can take $X \in C([0, \infty), \mathbb{R})$ to be the whole random path of the process $X$, and $g$ the function which integrates the path until time $t$. – jochen Aug 14 '17 at 19:01