When can we interchange the derivative with an expectation?


Asked 7 years, 7 months ago Active 2 months ago Viewed 20k times

Let $(X_t)$ be a stochastic process, and define a new stochastic process by $Y_t = \int_0^t f(X_s)\,ds$. Is it true in general that $\frac{d}{dt} E(Y_t) = E(f(X_t))$? If not, under what conditions would we be allowed to interchange the derivative operator with the expectation operator?

probability-theory stochastic-processes

asked Oct 20 '12 at 23:34
Jonas
1,765 2 20 28

@Jonas: no, it is not always true, but if you can interchange the expectation and the integral then it is true, so you only have to derive the conditions under which such an operation is OK. Regards. – TheBridge Oct 22 '12 at 20:58

3 Where could I find information about when such an operation is ok? – jmbejara Dec 4 '13 at 20:04

8 A sufficient condition is that
$$E\left(\int_0^t f(X_s)\,ds\right) = \int_0^t E(f(X_s))\,ds,$$
and for that, some regularity of $(X_t)$ and $f$ and the finiteness of $\int_0^t E(|f(X_s)|)\,ds$ suffice. Keyword: Fubini. – Did Oct 26 '16 at 19:42
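Spelling out the Fubini route hinted at in this comment, here is a sketch under assumed regularity (joint measurability of $(s, \omega) \mapsto f(X_s(\omega))$ and continuity of $s \mapsto E(f(X_s))$ at $t$, which are one way to make "some regularity" precise):

$$E(Y_t) = E\left(\int_0^t f(X_s)\,ds\right) = \int_0^t E(f(X_s))\,ds \qquad \text{(Fubini–Tonelli, since } \textstyle\int_0^t E(|f(X_s)|)\,ds < \infty\text{)},$$

$$\frac{d}{dt} E(Y_t) = \frac{d}{dt} \int_0^t E(f(X_s))\,ds = E(f(X_t)) \qquad \text{(fundamental theorem of calculus at a continuity point of } s \mapsto E(f(X_s))\text{)}.$$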

1 Answer

Interchanging a derivative with an expectation or an integral can be done using the dominated convergence theorem. Here is a version of such a result.

Lemma. Let $X \in \mathcal{X}$ be a random variable and $g\colon \mathbb{R} \times \mathcal{X} \to \mathbb{R}$ a function such that $g(t, X)$ is integrable for all $t$ and $g$ is continuously differentiable w.r.t. $t$. Assume that there is a random variable $Z$ such that $\left|\frac{\partial}{\partial t} g(t, X)\right| \leq Z$ a.s. for all $t$ and $E(Z) < \infty$. Then
$$\frac{\partial}{\partial t} E(g(t, X)) = E\left(\frac{\partial}{\partial t} g(t, X)\right).$$

Proof. We have
$$\begin{aligned}
\frac{\partial}{\partial t} E(g(t, X)) &= \lim_{h \to 0} \frac{1}{h}\bigl(E(g(t + h, X)) - E(g(t, X))\bigr) \\
&= \lim_{h \to 0} E\left(\frac{g(t + h, X) - g(t, X)}{h}\right) \\
&= \lim_{h \to 0} E\left(\frac{\partial}{\partial t} g(\tau(h), X)\right),
\end{aligned}$$
where $\tau(h) \in (t, t + h)$ exists by the mean value theorem. By assumption we have
$$\left|\frac{\partial}{\partial t} g(\tau(h), X)\right| \leq Z$$
and thus we can use the dominated convergence theorem to conclude
$$\frac{\partial}{\partial t} E(g(t, X)) = E\left(\lim_{h \to 0} \frac{\partial}{\partial t} g(\tau(h), X)\right) = E\left(\frac{\partial}{\partial t} g(t, X)\right).$$

This completes the proof.
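As a quick numerical illustration of the lemma (my own sketch, not part of the original answer), take $X \sim N(0, 1)$ and $g(t, X) = \cos(tX)$, so that $E(g(t, X)) = e^{-t^2/2}$, $\frac{\partial}{\partial t} g(t, X) = -X \sin(tX)$, and the dominating variable can be taken as $Z = |X|$:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal(1_000_000)  # samples of X ~ N(0, 1)
    t, h = 0.7, 1e-3

    # Left-hand side: finite-difference approximation of d/dt E(cos(tX))
    lhs = (np.cos((t + h) * X).mean() - np.cos((t - h) * X).mean()) / (2 * h)

    # Right-hand side: E(d/dt cos(tX)) = E(-X sin(tX)), dominated by Z = |X|
    rhs = (-X * np.sin(t * X)).mean()

    # Closed form: d/dt exp(-t^2/2) = -t exp(-t^2/2)
    exact = -t * np.exp(-t ** 2 / 2)

    print(lhs, rhs, exact)

All three numbers agree up to Monte Carlo and finite-difference error, which is what the lemma guarantees in this toy case.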


In your case you would have $g(t, X) = \int_0^t f(X_s)\,ds$ and a sufficient condition to obtain $\frac{d}{dt} E(Y_t) = E(f(X_t))$ would be for $f$ to be bounded.

If you want to take the derivative only for a single point $t = t_0$, boundedness of the derivative is only required in a neighbourhood of $t_0$. Variants of the lemma can be derived by using different convergence theorems in place of the dominated convergence theorem, e.g. by using the Vitali convergence theorem.
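To connect this back to the question, here is a small simulation sketch (my own illustration, not from the answer) with $X$ a standard Brownian motion and the bounded choice $f = \cos$, for which $E(\cos(X_s)) = e^{-s/2}$, so $E(Y_t) = \int_0^t e^{-s/2}\,ds = 2(1 - e^{-t/2})$ and $\frac{d}{dt} E(Y_t) = e^{-t/2} = E(\cos(X_t))$:

    import numpy as np

    rng = np.random.default_rng(1)
    n_paths, n_steps, T = 20_000, 400, 1.0
    dt = T / n_steps
    t_grid = np.linspace(0.0, T, n_steps + 1)

    # Standard Brownian motion paths X, shape (n_paths, n_steps + 1)
    dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
    X = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])

    # Y_t = \int_0^t cos(X_s) ds via the trapezoidal rule; f = cos is bounded
    fX = np.cos(X)
    Y = np.hstack([np.zeros((n_paths, 1)),
                   np.cumsum(0.5 * (fX[:, :-1] + fX[:, 1:]) * dt, axis=1)])

    k = n_steps // 2
    t = t_grid[k]

    # E(Y_t) should match 2(1 - e^{-t/2}), whose t-derivative e^{-t/2} is E(cos(X_t))
    print(Y[:, k].mean(), 2 * (1 - np.exp(-t / 2)))   # E(Y_t) vs closed form
    print(fX[:, k].mean(), np.exp(-t / 2))            # E(cos(X_t)) vs e^{-t/2}

Both Monte Carlo estimates match their closed forms, consistent with $\frac{d}{dt} E(Y_t) = E(f(X_t))$ holding here because $f$ is bounded.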

edited Mar 18 at 17:50 answered Oct 26 '16 at 19:02


jochen
688 8 12
The uniform boundedness of $f$ seems to be a much too restrictive condition. – Did Oct 26 '16 at 19:44
@Did yes, it's only a sufficient condition. In the lemma I showed, $Z$ is allowed to depend on $X$, so you can do much better, and if you use the Vitali convergence theorem you get the condition that the $f(X_t)$ are uniformly integrable. Do you know better results than this? – jochen Oct 26 '16 at 20:25

@Did ah, yes, your Fubini solution is more elegant. – jochen Oct 27 '16 at 8:03

@batman why not? You can have $X \in C([0, \infty), \mathbb{R})$ be the whole random path of the process $X$, and $g$ the function which integrates the path until time $t$. – jochen Aug 14 '17 at 19:01
