Existence and Uniqueness Notes

2.3 The Existence and Uniqueness Theorem.

Suppose that f(x, y) is continuous on the domain D and satisfies the y-Lipschitz condition
\[
|f(x, y_1) - f(x, y_2)| \le K |y_1 - y_2| \qquad \forall (x, y_1), (x, y_2) \in D.
\]

We already know in this case that a solution passing through any given (x0 , y0 ) 2 D exists by Peano’s
Theorem, and is unique by Osgood’s Theorem.

Theorem 4 (Existence and Uniqueness Theorem). Consider the initial value problem
\[
\begin{cases} y' = f(x, y) \\ y(x_0) = y_0. \end{cases}
\]
Let D be an open set in R^2 that contains (x_0, y_0) and assume that f : D \to R is continuous and Lipschitz
in y with Lipschitz constant K. Then there exists a > 0 so that the initial value problem has a solution on
(x_0 - a, x_0 + a) and this solution is unique.

We'll prove existence in two different ways and uniqueness in two different ways. The first
existence proof is constructive: we use a method of successive approximations, the Picard iterates,
and prove that they converge to a solution. The second existence proof uses a fixed-point argument. Then
we'll finish up by presenting the two proofs of uniqueness.

We start by relating the initial value problem
\[
\begin{cases} y' = f(x, y) \\ y(x_0) = y_0 \end{cases}
\]
to an integral equation instead of a differential equation. Namely, if we can find a continuous function y on
[a, b] where x_0 \in (a, b) such that y satisfies
\[
y(x) = y_0 + \int_{x_0}^{x} f(t, y(t)) \, dt \tag{2.3.1}
\]
then y on (a, b) is a solution of the initial value problem. Why? We know f(x, y) is continuous, hence
f(t, y(t)) is continuous on [a, b]. The Fundamental Theorem of Calculus then implies y'(x) = f(x, y(x)) for
all x \in (a, b). Evaluating (2.3.1) at x = x_0 yields the desired y(x_0) = y_0.
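Before moving on, the equivalence can be sanity-checked symbolically for a concrete case. A minimal sketch, assuming SymPy is available; the IVP y' = y^2, y(0) = 1 with solution 1/(1 - x) is borrowed from the example near the end of these notes:

```python
# Sketch: check that a known solution of an IVP also satisfies the integral
# equation (2.3.1). Illustrative data: y' = y**2, y(0) = 1, solution 1/(1 - x).
import sympy as sp

x, t = sp.symbols("x t")
y = 1 / (1 - x)            # candidate solution, with y(0) = 1
f = lambda t_, y_: y_**2   # right-hand side f(x, y) = y^2

# Right-hand side of (2.3.1): y0 + integral of f(t, y(t)) dt from x0 = 0 to x
rhs = 1 + sp.integrate(f(t, y.subs(x, t)), (t, 0, x))
assert sp.simplify(rhs - y) == 0   # the two sides agree identically
```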

Proof of Existence via Picard Iterates. This is the same proof as found on pages 734-739 of “Ordinary
Differential Equations” by M. Tenenbaum and H. Pollard.
First, choose a rectangle R' that is centered at (x_0, y_0) such that R' \subset D:
\[
R' = [x_0 - A, x_0 + A] \times [y_0 - L, y_0 + L].
\]
Since f is continuous,
\[
|f(x, y)| \le M \quad \text{for all } (x, y) \in R'
\]
for some M > 0. Using M we define the radius of the interval of existence:
\[
a = \min\left\{ \frac{L}{M}, A \right\}.
\]
Let R be the (possibly narrower) rectangle
\[
R = [x_0 - a, x_0 + a] \times [y_0 - L, y_0 + L].
\]

We will now show that there is a continuous function y on [x_0 - a, x_0 + a] that satisfies the integral
formulation (2.3.1). We construct a sequence of functions on [x_0 - a, x_0 + a]. The first function will be
the constant function
\[
y_1(x) \equiv y_0.
\]
Starting with y_1(x), we recursively define a sequence of continuous functions^1 on [x_0 - a, x_0 + a]:
\[
y_2(x) = y_0 + \int_{x_0}^{x} f(t, y_1(t)) \, dt,
\]
\[
y_3(x) = y_0 + \int_{x_0}^{x} f(t, y_2(t)) \, dt,
\]
\[
\vdots
\]
\[
y_{n+1}(x) = y_0 + \int_{x_0}^{x} f(t, y_n(t)) \, dt.
\]

Our goal will be to show that this sequence converges uniformly to some continuous function y(x) and that
the function y satisfies the integral equation (2.3.1).
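The recursion is also easy to experiment with numerically. A sketch under assumed data (f(x, y) = y, x_0 = 0, y_0 = 1, a = 1/2, so the iterates are the Taylor partial sums of e^x; the grid-based integration is our own device, not part of the proof):

```python
# Numerical sketch of the Picard iteration for an assumed example:
# f(x, y) = y, x0 = 0, y0 = 1, on [-1/2, 1/2], where the limit is e^x.
import numpy as np

def picard_iterates(f, x0, y0, a, n_iter, n_grid=2001):
    """Return grid xs and the result of n_iter Picard updates on [x0-a, x0+a]."""
    xs = np.linspace(x0 - a, x0 + a, n_grid)
    y = np.full_like(xs, float(y0))   # y_1(x) = y0
    i0 = n_grid // 2                  # grid index of x0 (grid is symmetric)
    for _ in range(n_iter):
        integrand = f(xs, y)
        # cumulative trapezoid integral, then re-based so it starts at x0
        cum = np.concatenate(([0.0], np.cumsum(
            0.5 * (integrand[1:] + integrand[:-1]) * np.diff(xs))))
        y = y0 + (cum - cum[i0])
    return xs, y

xs, y8 = picard_iterates(lambda x, y: y, 0.0, 1.0, 0.5, 8)
print(np.max(np.abs(y8 - np.exp(xs))))   # small: iterates approach e^x
```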
First of all, we show that for each n, the graph of y_n is inside the rectangle R. Obviously, this is true
for y_1(x) \equiv y_0. Assume the graph of y_n is inside the rectangle R. Using the definition of y_{n+1}, we have
\[
|y_{n+1}(x) - y_0| \le \left| \int_{x_0}^{x} f(t, y_n(t)) \, dt \right| \le \int_{\min\{x, x_0\}}^{\max\{x, x_0\}} |f(t, y_n(t))| \, dt \le M |x - x_0| \le M a \le L
\]
since |f(x, y)| is bounded by M inside R. This shows that the graph of y_{n+1} is inside R. By induction, the
graph of y_n is inside the rectangle R for all n.
We now compare successive Picard iterates. For n \ge 2, we have
\[
|y_{n+1}(x) - y_n(x)| = \left| \int_{x_0}^{x} f(t, y_n(t)) \, dt - \int_{x_0}^{x} f(t, y_{n-1}(t)) \, dt \right|
\le \int_{\min\{x, x_0\}}^{\max\{x, x_0\}} |f(t, y_n(t)) - f(t, y_{n-1}(t))| \, dt
\le \int_{\min\{x, x_0\}}^{\max\{x, x_0\}} K |y_n(t) - y_{n-1}(t)| \, dt,
\]

^1 See pages 144-146 of the third edition of Differential Equations, Dynamical Systems, and an Introduction to Chaos by Hirsch,
Smale, and Devaney for concrete examples of the Picard iterates for y' = y and for \vec{X}' = [0, 1; -1, 0]\vec{X}.

where in the last inequality we used the y-Lipschitz assumption on f. In short, we showed that
\[
|y_{n+1}(x) - y_n(x)| \le \int_{\min\{x, x_0\}}^{\max\{x, x_0\}} K |y_n(t) - y_{n-1}(t)| \, dt \tag{2.3.2}
\]
for n \ge 2. We will now use this inequality iteratively, and our starting point will be the inequality
\[
|y_2(x) - y_1(x)| = \left| \int_{x_0}^{x} f(t, y_0) \, dt \right| \le \int_{\min\{x, x_0\}}^{\max\{x, x_0\}} |f(t, y_0)| \, dt \le \int_{\min\{x, x_0\}}^{\max\{x, x_0\}} M \, dt = M |x - x_0|.
\]

Using this in (2.3.2) for n = 2 gives
\[
|y_3(x) - y_2(x)| \le \int_{\min\{x, x_0\}}^{\max\{x, x_0\}} K M |t - x_0| \, dt = K M \frac{|x - x_0|^2}{2}.
\]
Now, using this in (2.3.2) for n = 3 gives
\[
|y_4(x) - y_3(x)| \le \int_{\min\{x, x_0\}}^{\max\{x, x_0\}} K^2 M \frac{|t - x_0|^2}{2} \, dt = K^2 M \frac{|x - x_0|^3}{3!}.
\]
The pattern is becoming clear, so we now use induction on n. Assume that
\[
|y_{n+1}(x) - y_n(x)| \le K^{n-1} M \frac{|x - x_0|^n}{n!}. \tag{2.3.3}
\]
Plugging this into (2.3.2), we get
\[
|y_{n+2}(x) - y_{n+1}(x)| \le \int_{\min\{x, x_0\}}^{\max\{x, x_0\}} K^n M \frac{|t - x_0|^n}{n!} \, dt = K^n M \frac{|x - x_0|^{n+1}}{(n+1)!}.
\]

This proves that (2.3.3) holds for all n \ge 1. It follows that
\[
|y_{n+1}(x) - y_n(x)| \le K^{n-1} M \frac{a^n}{n!} = \frac{M}{K} \frac{(Ka)^n}{n!} \qquad \forall n \ge 1, \ \forall x \in [x_0 - a, x_0 + a].
\]

We use this to show that we have a Cauchy sequence in the uniform norm. For m > n,
\[
|y_m(x) - y_n(x)| \le |y_m(x) - y_{m-1}(x)| + |y_{m-1}(x) - y_{m-2}(x)| + \cdots + |y_{n+1}(x) - y_n(x)|
\le \frac{M}{K} \sum_{k=n}^{m-1} \frac{(Ka)^k}{k!} \qquad \forall x \in [x_0 - a, x_0 + a].
\]
Because
\[
e^{Ka} = \sum_{k=0}^{\infty} \frac{(Ka)^k}{k!} < \infty
\]
we know that we can make the tail as small as we want by choosing N sufficiently large. Given \epsilon > 0 there
exists N_0 so that
\[
N \ge N_0 \implies \frac{M}{K} \sum_{k=N}^{\infty} \frac{(Ka)^k}{k!} < \epsilon.
\]
This means that for any m, n \ge N_0 we have \|y_m - y_n\| < \epsilon; that is, \{y_n\} is a Cauchy sequence in the uniform
norm. It follows that the sequence converges uniformly to some continuous function y.
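The factorial bound driving this argument can be checked numerically for a concrete case. A sketch with assumed data f(x, y) = y, y_0 = 1, L = 1 (so M = 2, K = 1, a = 1/2), where the Picard iterates are the Taylor partial sums of e^x and hence y_{n+1}(x) - y_n(x) = x^n/n! exactly:

```python
# Numerical check of |y_{n+1} - y_n| <= K^(n-1) M a^n / n! for y' = y, y(0) = 1.
# Assumed data: L = 1, so M = 1 + L = 2 on R, K = 1, a = min(L/M, A) = 1/2.
from math import factorial

K, M, a = 1.0, 2.0, 0.5
for n in range(1, 12):
    actual = a**n / factorial(n)                  # sup over [-a, a] of |x|^n / n!
    bound = K**(n - 1) * M * a**n / factorial(n)  # the bound from (2.3.3)
    assert actual <= bound
    print(n, actual, bound)
```

The bound is seen to shrink factorially, which is exactly why the series comparison with e^{Ka} works with no restriction on a.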

It remains to show that y satisfies the integral equation
\[
y(x) = y_0 + \int_{x_0}^{x} f(t, y(t)) \, dt.
\]
To show this, we use that
\[
y_n(x) = y_0 + \int_{x_0}^{x} f(t, y_{n-1}(t)) \, dt
\]
and let n \to \infty. The left hand side converges to y(x), so we only need to check that the integral on the right
hand side converges to \int_{x_0}^{x} f(t, y(t)) \, dt. This again follows from the Lipschitz condition (the graph of y is in
R \subset D because the graph of y_n is in R for all n):
\[
\left| \int_{x_0}^{x} f(t, y_{n-1}(t)) \, dt - \int_{x_0}^{x} f(t, y(t)) \, dt \right| \le \int_{\min\{x, x_0\}}^{\max\{x, x_0\}} K |y_{n-1}(t) - y(t)| \, dt
\le Ka \sup_{x_0 - a \le x \le x_0 + a} |y_{n-1}(x) - y(x)| = Ka \|y_{n-1} - y\| \to 0
\]
as n \to \infty, because we proved that y_{n-1}(x) converges to y(x) uniformly on [x_0 - a, x_0 + a]. This proves that y
is a solution of (2.3.1), as desired.

The above proof is similar to, but different from, the proof on pages 390-394 of the third edition of
Differential Equations, Dynamical Systems, and an Introduction to Chaos by Hirsch, Smale, and Devaney.
They use a less-delicate method for bounding |y_{n+1}(x) - y_n(x)| and, as a result, they end up studying a
geometric series \sum (aK)^n. This leads to the additional requirement that a < 1/K for the series to converge.
The above proof is more careful in the bounding, leading to the series for the exponential function; it has
an infinite radius of convergence, so no additional assumption is needed on a.
Also, in the above there was a second way of knowing that we could take the limit inside the integral:
one only needs to prove that if y_n \to y uniformly and f(x, y) is continuous then f(x, y_n) \to f(x, y) uniformly.

Proof of Existence using Banach Fixed Point Theorem. First, choose a rectangle R' that is centered at
(x_0, y_0) such that R' \subset D:
\[
R' = [x_0 - A, x_0 + A] \times [y_0 - L, y_0 + L].
\]
Since f is continuous,
\[
|f(x, y)| \le M \quad \text{for all } (x, y) \in R'
\]
for some M > 0. Using M we define the radius of the interval of existence:
\[
a < \min\left\{ \frac{L}{M}, A, \frac{1}{K} \right\}.
\]
Let R be the (possibly narrower) rectangle
\[
R = [x_0 - a, x_0 + a] \times [y_0 - L, y_0 + L].
\]
(Note that a may be smaller in this proof than in the Picard Iterates proof because we also require that
aK < 1.)

We define the following subset of C([x_0 - a, x_0 + a]):
\[
X := \{ y \in C([x_0 - a, x_0 + a]) : \|y - y_0\| \le L \}.
\]
By construction, y \in X implies that the graph of y is contained in R. This, in turn, implies that |f(x, y(x))| \le
M for all x \in [x_0 - a, x_0 + a].
We introduce the Picard mapping G : C([x_0 - a, x_0 + a]) \to C([x_0 - a, x_0 + a]),
\[
G(y)(x) = y_0 + \int_{x_0}^{x} f(s, y(s)) \, ds.
\]
If we can find y \in C([x_0 - a, x_0 + a]) such that G(y) \equiv y then we have found a function that satisfies (2.3.1)
and so have found a solution. Our strategy is:
Step 1: Show that G maps X into X. Let y \in X. Then
\[
|G(y)(x) - y_0| = \left| \int_{x_0}^{x} f(s, y(s)) \, ds \right| \le \int_{\min\{x, x_0\}}^{\max\{x, x_0\}} |f(s, y(s))| \, ds \le \int_{\min\{x, x_0\}}^{\max\{x, x_0\}} M \, ds = M |x - x_0| \le M a \le L.
\]
This is true for all x \in [x_0 - a, x_0 + a], hence \|G(y) - y_0\| \le L, as desired.


Step 2: Show that for all y and z in X we have
\[
\|G(y) - G(z)\| \le aK \|y - z\|.
\]
Because we chose a so that aK < 1, the Banach Fixed Point Theorem then implies that there exists a unique
y \in X so that G(y) = y.
Let y, z \in X. Then
\[
|G(y)(x) - G(z)(x)| = \left| \int_{x_0}^{x} f(s, y(s)) - f(s, z(s)) \, ds \right| \le \int_{\min\{x, x_0\}}^{\max\{x, x_0\}} |f(s, y(s)) - f(s, z(s))| \, ds
\le \int_{\min\{x, x_0\}}^{\max\{x, x_0\}} K |y(s) - z(s)| \, ds \le \|y - z\| \int_{\min\{x, x_0\}}^{\max\{x, x_0\}} K \, ds = K |x - x_0| \, \|y - z\| \le Ka \|y - z\|.
\]
Taking the supremum over x \in [x_0 - a, x_0 + a] gives \|G(y) - G(z)\| \le aK \|y - z\|, as desired.

The proof via fixed point argument is quick and clean and gives us uniqueness for free. On the other
hand, it may give a smaller interval of existence, and it uses higher-powered machinery such as the Banach
Fixed Point Theorem and the fact that C([x_0 - a, x_0 + a]) is a complete metric space. One consequence
of the Fixed Point Theorem is that one can choose any function in X as the first function y_1(x), construct a
sequence via y_{n+1} = G(y_n), and this sequence will converge to a fixed point of G (and hence a
solution of the ODE). That construction is precisely the Picard Iterates that we used in the first proof, except
that there we took y_1(x) \equiv y_0.

Theorem 5 (Banach Fixed Point Theorem). Let (X, d) be a non-empty complete metric space with a mapping
G : X \to X that satisfies
\[
d(G(x), G(y)) \le q \, d(x, y)
\]
for all x, y \in X, for some q \in [0, 1). Then G has a unique fixed point x^* in X. This fixed point can be found as
follows: choose an arbitrary x_0 \in X and define a sequence via x_{n+1} = G(x_n). This sequence converges to x^*.
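Theorem 5 can be illustrated in a few lines. A sketch on the complete metric space (R, |.|); the contraction G(x) = cos(x) on [0, 1], with q = sin(1) < 1, is our illustrative choice, not an example from the notes:

```python
# Minimal sketch of the iteration in the Banach Fixed Point Theorem:
# x_{n+1} = G(x_n) for a contraction G. Here G = cos on [0, 1], q = sin(1) < 1.
import math

def fixed_point(G, x0, tol=1e-12, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        x_new = G(x)
        if abs(x_new - x) < tol:   # successive iterates have stabilized
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

x_star = fixed_point(math.cos, 0.5)
assert abs(math.cos(x_star) - x_star) < 1e-10   # G(x*) = x*
print(x_star)   # ~0.739085, the unique fixed point of cos
```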

First Uniqueness Proof. This is the same proof as found on pages 739-740 of "Ordinary Differential
Equations" by M. Tenenbaum and H. Pollard.
Suppose we have another solution z on [x_0 - a, x_0 + a] such that z(x_0) = y_0. Then the graph of z is inside
R. (Make sure you understand why this must be true!) Then
\[
|y(x) - z(x)| = \left| \int_{x_0}^{x} f(t, y(t)) - f(t, z(t)) \, dt \right|
\le \int_{\min\{x, x_0\}}^{\max\{x, x_0\}} |f(t, y(t)) - f(t, z(t))| \, dt \le K \int_{\min\{x, x_0\}}^{\max\{x, x_0\}} |y(t) - z(t)| \, dt \tag{2.3.4}
\le K \int_{\min\{x, x_0\}}^{\max\{x, x_0\}} 2L \, dt = 2LK |x - x_0|.
\]
We proceed by the same induction argument we did earlier for existence, by plugging the bound recursively
into the integral in (2.3.4). Namely,
\[
|y(x) - z(x)| \le K \int_{\min\{x, x_0\}}^{\max\{x, x_0\}} 2L \, dt \implies |y(x) - z(x)| \le 2LK |x - x_0|,
\]
\[
|y(x) - z(x)| \le K \int_{\min\{x, x_0\}}^{\max\{x, x_0\}} 2LK |t - x_0| \, dt \implies |y(x) - z(x)| \le \frac{2L (K |x - x_0|)^2}{2},
\]
\[
\vdots
\]
\[
|y(x) - z(x)| \le K \int_{\min\{x, x_0\}}^{\max\{x, x_0\}} \frac{2L K^{n-1} |t - x_0|^{n-1}}{(n-1)!} \, dt \implies |y(x) - z(x)| \le \frac{2L (K |x - x_0|)^n}{n!}.
\]
We know that (K |x - x_0|)^n / n! \to 0 as n \to \infty. If there were a point \tilde{x} \in [x_0 - a, x_0 + a] where |y(\tilde{x}) - z(\tilde{x})| > 0
then this would cause a contradiction, because for n sufficiently large the inequality would be violated. For
this reason there can be no such \tilde{x}, and y \equiv z on [x_0 - a, x_0 + a], as desired.

Second Uniqueness Proof. Suppose we have another solution z on [x_0 - a, x_0 + a] such that z(x_0) = y_0. By
the same arguments as above, the inequality (2.3.4) holds.
We introduce u(x) = |y(x) - z(x)| and assume x > x_0. In this case (2.3.4) can be written as
\[
u(x) \le K \int_{x_0}^{x} u(t) \, dt.
\]
If we denote the integral on the right hand side as U(x) = \int_{x_0}^{x} u(t) \, dt, then U'(x) = u(x) and the inequality
can be written as
\[
U'(x) \le K U(x). \tag{2.3.5}
\]
If this were equality instead of inequality, we could solve this separable differential equation to get U(x) =
U(x_0) e^{K(x - x_0)} for all x \in [x_0, x_0 + a]. Recalling that U \ge 0 and U(x_0) = 0, it would follow that U \equiv 0 on
[x_0, x_0 + a]. This would then imply u \equiv 0 and thus y \equiv z on [x_0, x_0 + a], as desired.
In fact, we can use the differential inequality (2.3.5) to prove that U(x) \le U(x_0) e^{K(x - x_0)} for all x \in
[x_0, x_0 + a]. This implies that U \equiv 0 on [x_0, x_0 + a], which then implies y \equiv z on [x_0, x_0 + a], as desired.
To see that (2.3.5) implies U(x) \le U(x_0) e^{K(x - x_0)} = 0 for all x \in [x_0, x_0 + a], divide U(x) by e^{K(x - x_0)} and
compute the derivative of this ratio:
\[
\left( U(x) e^{-K(x - x_0)} \right)' = U'(x) e^{-K(x - x_0)} - K U(x) e^{-K(x - x_0)} = \left( U'(x) - K U(x) \right) e^{-K(x - x_0)} \le 0.
\]
Therefore, this ratio is nonincreasing and
\[
U(x) e^{-K(x - x_0)} \le U(x_0) e^{-K(x_0 - x_0)} = U(x_0).
\]
This implies that U(x) \le U(x_0) e^{K(x - x_0)}, as desired.


It remains to show that y \equiv z on [x_0 - a, x_0]. For x < x_0, (2.3.4) is
\[
u(x) \le K \int_{x}^{x_0} u(t) \, dt.
\]
Small modifications of the above argument lead to the desired result.

In the above proof, we proved that U'(x) \le K U(x) implies U(x) \le U(x_0) e^{K(x - x_0)}. This can be a useful thing
to know, simply at the level of solutions of differential inequalities. In fact, it was more than we needed to
finish off the uniqueness proof: the moment we knew that 0 \le U(x) e^{-K(x - x_0)} \le U(x_0) = 0, we were done.
The second uniqueness proof is a classic method of proving uniqueness. The differential inequality is
a Grönwall inequality. Here is a slightly more general form of Grönwall's inequality, for when K is a function
of x rather than a constant.

Theorem 6 (Grönwall's Inequality). Suppose that U'(x) \le K(x) U(x), where K(x) is continuous and U(x)
is differentiable for x \ge x_0. Then
\[
U(x) \le U(x_0) e^{\int_{x_0}^{x} K(t) \, dt}.
\]
In other words, U(x) is bounded by the solution of the differential equation U'(x) = K(x) U(x).

Can you see how to generalize the previous proof to prove this theorem?
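One can also check Grönwall's inequality numerically. A sketch under assumed data: K(x) = 1 + x, and a function built to satisfy U'(x) = 0.9 K(x) U(x), which in particular satisfies U'(x) \le K(x) U(x); the forward Euler scheme is purely illustrative:

```python
# Numeric sketch of Gronwall's inequality with K(x) = 1 + x (assumed data).
# U solves U' = 0.9 * K(x) * U <= K(x) * U, so the Gronwall bound should hold.
import math

x0, U0, h, steps = 0.0, 1.0, 1e-4, 20_000   # integrate on [0, 2]
x, U = x0, U0
for _ in range(steps):
    U += h * 0.9 * (1 + x) * U               # Euler step for U' = 0.9 K(x) U
    x += h
# Gronwall bound: U0 * exp(integral of (1 + t) dt from x0 to x)
bound = U0 * math.exp((x - x0) + (x**2 - x0**2) / 2)
assert U <= bound
print(U, bound)
```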

From the first proof of the existence theorem, we saw that the interval of existence (x_0 - a, x_0 + a) of
the solution was determined by a = \min\{L/M, A\}, where the rectangle [x_0 - A, x_0 + A] \times [y_0 - L, y_0 + L] is
contained in the region where f(x, y) is continuous and Lipschitz. What does this mean in practice? What
happens when the ODE has a maximal interval of existence which isn't all of R?
Let's consider the initial value problem
\[
\begin{cases} y' = y^2 \\ y(x_0) = y_0 > 0. \end{cases}
\]
In this case, f(x, y) = y^2; this is continuous on R^2 and is Lipschitz on infinite horizontal strips R \times [C, D]
where C, D \in R. For this reason, we can take the rectangle [x_0 - A, x_0 + A] \times [y_0 - L, y_0 + L] in the existence
proof to be whatever size we want. Once the rectangle is fixed, the upper bound M is determined: M =
(y_0 + L)^2. Then a = \min\{L/(y_0 + L)^2, A\}. Let's assume A has been taken large, so a = L/(y_0 + L)^2. We
see that a \to 0 as L \to 0, and also that a \to 0 as L \to \infty. The largest interval of existence guaranteed by the
existence proof occurs when we take L to be "just right": specifically, L = y_0, which results in a_{opt} = 1/(4 y_0).
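This optimization over L is easy to confirm numerically. A quick sketch (y_0 = 2 is an arbitrary test value of our choosing):

```python
# Numeric check that a(L) = L / (y0 + L)^2 is maximized at L = y0,
# giving a_opt = 1 / (4 * y0). Assumed test value: y0 = 2.
import numpy as np

y0 = 2.0
Ls = np.linspace(0.01, 20.0, 200_000)   # fine grid of candidate L values
a_vals = Ls / (y0 + Ls)**2
L_best = Ls[np.argmax(a_vals)]
assert abs(L_best - y0) < 1e-3          # maximizer is L = y0
assert abs(a_vals.max() - 1 / (4 * y0)) < 1e-8   # maximum value is 1/(4 y0)
```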

We now consider
\[
\begin{cases} y' = y^2 \\ y(0) = 1 \end{cases} \implies y = \frac{1}{1 - x}, \quad x \in (-\infty, 1).
\]
Applying the existence and uniqueness theorem once, we get a solution on (x_0 - a_0, x_0 + a_0) = (-1/4, 1/4).
We'd like to continue the solution to the right and so we consider
\[
\begin{cases} y' = y^2 \\ y(1/4) = 1/(1 - 1/4) = 4/3. \end{cases}
\]
That is, we're applying the existence theorem about the point (x_1, y_1) = (1/4, 4/3). The interval of existence
is (x_1 - a_1, x_1 + a_1) where the optimal choice of a is a_1 = 3/16. Pause to think here! We started with a
solution on (-1/4, 1/4). We then found a solution on (1/4 - 3/16, 1/4 + 3/16). Because these two intervals
overlap and because we have uniqueness of solutions, we know that the first solution equals the second
solution in the overlap region. This allows us to use the two solutions to create a solution on (-1/4, 1/4 +
3/16) via "use the first solution on (-1/4, 1/4) and use the second solution on [1/4, 1/4 + 3/16)".
We'd like to continue the solution further to the right and so we consider
\[
\begin{cases} y' = y^2 \\ y(7/16) = 1/(1 - 7/16) = 16/9. \end{cases}
\]
That is, we're applying the existence theorem about the point (x_2, y_2) = (7/16, 16/9). The interval of existence
is (x_2 - a_2, x_2 + a_2) where the optimal choice of a is a_2 = 9/64.
We started with a solution on (-1/4, 1/4). At this point, we've continued the solution to the right twice
and have a solution on (-1/4, 37/64). We can keep doing this. We'll find
\[
x_k = 1 - (3/4)^k, \qquad y_k = (4/3)^k, \qquad a_k = \frac{1}{4} (3/4)^k.
\]
As k \to \infty we have the interval of existence shrinking (a_k \to 0) and x_k \uparrow 1 as y_k \uparrow \infty. However many times
we use the existence theorem to continue the solution to the right, we can't get past x = 1.
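The continuation bookkeeping above can be reproduced in a short loop. A sketch following the text's recipe, which (as the text goes on to point out) uses the known exact solution y = 1/(1 - x) to evaluate each new initial condition:

```python
# Sketch of the continuation bookkeeping: starting from (x0, y0) = (0, 1) for
# y' = y^2, repeatedly re-center at x_{k+1} = x_k + a_k, where a_k = 1/(4*y_k)
# is the optimal radius and y_k = 1/(1 - x_k) comes from the exact solution.
x = 0.0
for k in range(60):
    y = 1.0 / (1.0 - x)   # exact solution evaluated at the current center
    a = 1.0 / (4.0 * y)   # optimal radius from the existence proof
    x = x + a             # right endpoint of the new interval
print(x)                  # approaches but never exceeds 1
assert x < 1.0
```

After 60 continuations, x = 1 - (3/4)^60 is within about 3e-8 of 1, illustrating that the endpoints pile up below x = 1.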
Caveat: The above is a circular argument. I knew the exact solution y(x) = 1/(1 - x) and I used this exact
solution in evaluating the y_k that went into the initial value problems. These then went into the a_k, which then
led to the x_k, which then failed to get past x = 1 as k \to \infty. In some sense it's a bit, "Put rabbit into hat, reach
into hat. Look! A rabbit!"
But the real point is: if we find ourselves studying
\[
\begin{cases} y' = f(x, y) \\ y(x_0) = y_0 \end{cases}
\]
and we're able to find d > 0 so that d < a for all (x_0, y_0), then this implies that the maximal interval of existence
is R. Similarly, if we're able to find an upper bound for a where the upper bound goes to zero as x_0 increases
then, if the upper bound goes to zero sufficiently fast, this may imply that the solution cannot be continued
past some x = x^*. The art is in finding such estimates when one doesn't have an explicit solution to work
with.
