Ordinary Differential Equations
GABRIEL NAGY
Mathematics Department,
Michigan State University,
East Lansing, MI, 48824.
Contents
2.3.4. Exercises 92
2.4. Nonhomogeneous Equations 93
2.4.1. The General Solution Formula 93
2.4.2. The Undetermined Coefficients Method 94
2.4.3. The Variation of Parameters Method 98
2.4.4. Exercises 103
2.5. Applications 104
2.5.1. Review of Constant Coefficient Equations 104
2.5.2. Undamped Mechanical Oscillations 104
2.5.3. Damped Mechanical Oscillations 107
2.5.4. Electrical Oscillations 109
2.5.5. Exercises 111
We start our study of differential equations in the same way the pioneers in this field did. We
show particular techniques to solve particular types of first order differential equations. The
techniques were developed in the eighteenth and nineteenth centuries and the equations include
linear equations, separable equations, Euler homogeneous equations, and exact equations.
Soon this way of studying differential equations reached a dead end. Most of the differential
equations cannot be solved by any of the techniques presented in the first sections of this
chapter. People then tried something different. Instead of solving the equations they tried to
show whether an equation has solutions or not, and what properties such solutions may have.
This is less information than obtaining the solution, but it is still valuable information. The
results of these efforts are shown in the last sections of this chapter. We present Theorems
describing the existence and uniqueness of solutions to a wide class of differential equations.
Figure: Direction field of the equation y' = 2 cos(t) cos(y), plotted in the ty-plane for y between −π/2 and π/2.
where v is a positive constant describing the wave speed, and we have used the notation
∂ to mean partial derivative.
(d) The Heat Equation: The heat equation describes the change of temperature in a
solid material as function of time and space. The unknown is a scalar-valued function
u : R × R3 → R, where u(t, x) is the temperature at time t and the point x = (x, y, z)
in the solid. The equation is
∂t u(t, x) = k (∂xx u(t, x) + ∂yy u(t, x) + ∂zz u(t, x)),
where k is a positive constant representing thermal properties of the material.
The equations in examples (a) and (b) are called ordinary differential equations (ODE),
since the unknown function depends on a single independent variable, t in these examples.
The equations in examples (c) and (d) are called partial differential equations (PDE), since
the unknown function depends on two or more independent variables, t, x, y, and z in these
examples, and their partial derivatives appear in the equations.
The order of a differential equation is the highest derivative order that appears in the
equation. Newton’s equation in Example (a) is second order, the time decay equation in
Example (b) is first order, the wave equation in Example (c) is second order in time and
space variables, and the heat equation in Example (d) is first order in time and second order
in space variables.
1.1.2. Linear Equations. We start with a precise definition of the differential equations
we study in this Chapter. We use primes to denote derivatives,
dy/dt (t) = y'(t),
because it is a compact notation. We use it when there is no risk of confusion.
Definition 1.1.1. A first order ordinary differential equation in the unknown y is
y 0 (t) = f (t, y(t)), (1.1.1)
where y : R → R is the unknown function and f : R2 → R is a given function. The equation
in (1.1.1) is called linear iff the function with values f (t, y) is linear in its second argument;
that is, there exist functions a, b : R → R such that
y 0 (t) = a(t) y(t) + b(t), f (t, y) = a(t) y + b(t). (1.1.2)
A linear first order equation has constant coefficients iff both functions a and b in
Eq. (1.1.2) are constants. Otherwise, the equation has variable coefficients.
A different sign convention for Eq. (1.1.2) may be found in the literature. For example,
Boyce-DiPrima [3] writes it as y 0 = −a y + b. The sign choice in front of function a is just
a convention. Some people like the negative sign, because later on, when they write the
equation as y 0 + a y = b, they get a plus sign on the left-hand side. In any case, we stick
here to the convention y 0 = ay + b.
Example 1.1.1:
(a) An example of a first order linear ODE is the equation
y 0 (t) = 2 y(t) + 3.
In this case, the right-hand side is given by the function f (t, y) = 2y + 3, where we can
see that a(t) = 2 and b(t) = 3. Since these coefficients do not depend on t, this is a
constant coefficient equation.
(b) Another example of a first order linear ODE is the equation
y'(t) = −(2/t) y(t) + 4t.
In this case, the right hand side is given by the function f (t, y) = −2y/t + 4t, where
a(t) = −2/t and b(t) = 4t. Since the coefficients are nonconstant functions of t, this is
a variable coefficients equation. C
Example 1.1.3: Find the differential equation y 0 = f (t, y) satisfied by y(t) = 4 e2t + 3.
Solution: We compute the derivative of y,
y 0 = 8 e2t
We now write the right hand side above, in terms of the original function y, that is,
y = 4 e2t + 3 ⇒ y − 3 = 4 e2t ⇒ 2(y − 3) = 8 e2t .
So we got a differential equation satisfied by y, namely
y 0 = 2y − 6.
C
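Remark: The computation in Example 1.1.3 can be verified with a computer algebra system. The following sketch is only an illustration and assumes Python with the sympy library, neither of which is used elsewhere in these notes.

import sympy as sp

t = sp.symbols('t')
y = 4 * sp.exp(2 * t) + 3               # the given function y(t) = 4 e^{2t} + 3
residual = sp.diff(y, t) - (2 * y - 6)  # y'(t) minus the proposed right-hand side
print(sp.simplify(residual))            # prints 0, so y satisfies y' = 2y - 6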
The Fundamental Theorem of Calculus implies y(t) = ∫ y'(t) dt.
Remarks:
(a) Eq. (1.1.4) is called the general solution of the differential equation in (1.1.3). Theo-
rem 1.1.2 says that Eq. (1.1.3) has infinitely many solutions, one solution for each value
of the constant c, which is not determined by the equation. This is reasonable. Since
the differential equation contains one derivative of the unknown function y, finding a
solution of the differential equation requires computing an integral. Every indefinite
integration introduces an integration constant. This is the origin of the constant c above.
(b) In the next section we generalize this idea to find solutions of linear equations with vari-
able coefficients. In Section 1.4 we generalize this idea to certain nonlinear differential
equations.
Proof of Theorem 1.1.2: Write the equation with y on one side only,
y 0 − a y = b,
and then multiply the differential equation by a function µ, called an integrating factor,
µ y 0 − a µ y = µ b. (1.1.5)
Now comes the critical step. We choose a function µ such that
−a µ = µ'. (1.1.6)
For any function µ solution of Eq. (1.1.6), the differential equation in (1.1.5) has the form
µ y' + µ' y = µ b.
But the left-hand side is a total derivative of a product of two functions,
(µ y)' = µ b. (1.1.7)
This is the property we want in an integrating factor, µ. We want to find a function µ such
that the left-hand side of the differential equation for y can be written as a total derivative,
just as in Eq. (1.1.7). We only need to find one of such functions µ. So we go back to
Eq. (1.1.6), the differential equation for µ, which is simple to solve,
µ' = −a µ ⇒ µ'/µ = −a ⇒ (ln(µ))' = −a ⇒ ln(µ) = −a t + c0 .
Computing the exponential of both sides in the equation above we get
µ = e−at+c0 = e−at ec0 ⇒ µ = c1 e−at , c1 = ec0 .
Since c1 is a constant which will cancel out from Eq. (1.1.5) anyway, we choose the integration
constant c0 = 0, hence c1 = 1. The integrating function is then
µ(t) = e−at .
This function is an integrating factor, because if we start again at Eq. (1.1.5), we get
e^{−at} y' − a e^{−at} y = b e^{−at} ⇒ e^{−at} y' + (e^{−at})' y = b e^{−at},
where we used the main property of the integrating factor, −a e^{−at} = (e^{−at})'. Now the
product rule for derivatives implies that the left-hand side above is a total derivative,
(e^{−at} y)' = b e^{−at}.
The right-hand side above can also be rewritten as a derivative, b e^{−at} = (−(b/a) e^{−at})', hence
(e^{−at} y + (b/a) e^{−at})' = 0 ⇔ [(y + b/a) e^{−at}]' = 0.
We have succeeded in writing the whole differential equation as a total derivative. The
differential equation is the total derivative of a potential function,
ψ(t, y(t)) = (y(t) + b/a) e^{−at}.
The differential equation for y is a total derivative,
dψ/dt (t, y(t)) = 0,
so it is simple to integrate,
ψ(t, y(t)) = c ⇒ (y(t) + b/a) e^{−at} = c ⇒ y(t) = c e^{at} − b/a.
This establishes the Theorem.
We solve the example below following the proof of Theorem 1.1.2. In this way we see
how the ideas in the proof of the Theorem work in a particular example.
Example 1.1.4: Find all solutions to the constant coefficient equation
y 0 = 2y + 3 (1.1.8)
Solution: The equation above is the case of a = 2 and b = 3 in Eq. (1.1.3). Therefore,
using these values in the expression for the solution given in Eq. (1.1.4) we obtain
y(t) = c e^{2t} − 3/2.
C
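Remark: As a check (an illustration only, assuming Python and the sympy library), one can ask a computer algebra system for the general solution of Eq. (1.1.8) and compare it with the formula above.

import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t), 2 * y(t) + 3)
print(sp.dsolve(ode, y(t)))   # y(t) = C1*exp(2*t) - 3/2, that is, y(t) = c e^{2t} - 3/2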
1.1.4. The Initial Value Problem. Sometimes in physics one is not interested in all
solutions to a differential equation, but only in those solutions satisfying an extra condition.
For example, in the case of Newton’s second law of motion for a point particle, one could
be interested only in those solutions satisfying an extra condition: At an initial time the
particle must be at a specified initial position. Such a condition is called an initial condition,
and it selects a subset of solutions of the differential equation. An initial value problem
means to find a solution to both a differential equation and an initial condition.
Definition 1.1.3. The initial value problem (IVP) for a constant coefficients first order
linear ODE is the following: Given constants a, b, t0 , y0 , find a function y solution of
y 0 = a y + b, y(t0 ) = y0 . (1.1.10)
The second equation in (1.1.10) is called the initial condition of the problem. Although the
differential equation in (1.1.10) has infinitely many solutions, the associated initial value
problem has a unique solution.
Theorem 1.1.4 (Constant Coefficients IVP). The initial value problem in (1.1.10), for
given constants a, b, t0 , y0 ∈ R, and a ≠ 0, has the unique solution
y(t) = (y0 + b/a) e^{a(t−t0)} − b/a. (1.1.11)
Example: Find the solution of the initial value problem y' = −3y + 1, y(0) = 1.
Solution: Write the differential equation as y' + 3 y = 1, and multiply the equation by the
integrating factor µ = e3t , which will convert the left-hand side above into a total derivative,
e^{3t} y' + 3 e^{3t} y = e^{3t} ⇔ e^{3t} y' + (e^{3t})' y = e^{3t}.
This is the key idea, because the derivative of a product implies
(e^{3t} y)' = e^{3t}.
The exponential e3t is an integrating factor. Integrate on both sides of the equation,
e^{3t} y = (1/3) e^{3t} + c.
So every solution of the differential equation above is given by
y(t) = c e^{−3t} + 1/3, c ∈ R.
The initial condition y(0) = 1 selects only one solution,
1 = y(0) = c + 1/3 ⇒ c = 2/3.
We get the solution y(t) = (2/3) e^{−3t} + 1/3. C
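Remark: The same initial value problem can be checked with a short computation in Python's sympy library (an assumption of this remark, not part of the notes).

import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t), -3 * y(t) + 1)
sol = sp.dsolve(ode, y(t), ics={y(0): 1})
print(sol)   # y(t) = 2*exp(-3*t)/3 + 1/3, matching the solution found above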
Notes. This section corresponds to Boyce-DiPrima [3] Section 2.1, where both constant
and variable coefficient equations are studied. Zill and Wright give a more concise exposition
in [17] Section 2.3, and a one page description is given by Simmons in [10] in Section 2.10.
1.1.5. Exercises.
1.1.1.- Find constants a, b, so that y(t) = (t + 3) e^{2t} is solution of the IVP
y' = a y + e^{2t}, y(0) = b.

1.1.2.- Follow the steps below to find all solutions of
y' = −4y + 2.
(a) Find the integrating factor µ.
(b) Write the equation as a total derivative of a function ψ, that is, y' = −4y + 2 ⇔ ψ' = 0.
(c) Integrate the equation for ψ.
(d) Compute y using part (c).

1.1.3.- Find all solutions of
y' = 2y + 5.

1.1.4.- Find the solution of the IVP
y' = −4y + 2, y(0) = 5.

1.1.5.- Find the solution of the IVP
dy/dt (t) = 3 y(t) − 2, y(1) = 1.

1.1.6.- Express the differential equation
y' = 6 y + 1 (1.1.13)
as a total derivative of a potential function ψ(t, y), that is, find ψ satisfying
y' = 6 y + 1 ⇔ ψ' = 0.
Integrate the equation for the potential function ψ to find all solutions y of Eq. (1.1.13).

1.1.7.- Find the solution of the IVP
y' = 6 y + 1, y(0) = 1.
1.2.1. Linear Equations with Variable Coefficients. We start this section by generalizing
Theorem 1.1.2 from constant coefficient equations to variable coefficients equations.
Theorem 1.2.1 (Variable Coefficients). If the functions a, b are continuous, then
y 0 = a(t) y + b(t), (1.2.1)
has infinitely many solutions and every solution, y, can be labeled by c ∈ R as follows
y(t) = c e^{A(t)} + e^{A(t)} ∫ e^{−A(t)} b(t) dt, (1.2.2)
where we introduced the function A(t) = ∫ a(t) dt, any primitive of the function a.
Remark: In the particular case of constant coefficient equations, a primitive (also called
antiderivative) for the constant function a is A(t) = at, so the second term in Eq. (1.2.2) is
e^{A(t)} ∫ e^{−A(t)} b(t) dt = e^{at} ∫ b e^{−at} dt = e^{at} (−(b/a) e^{−at}) = −b/a.
Hence Eq. (1.2.2) contains the expression y(t) = c e^{at} − b/a given in Eq. (1.1.4).
Proof of Theorem 1.2.1: Write down the differential equation with y on one side only,
y 0 − a y = b,
and then multiply the differential equation by a function µ, called an integrating factor,
µ y 0 − a µ y = µ b. (1.2.3)
The critical step is to choose a function µ such that
− a(t) µ(t) = µ0 (t). (1.2.4)
For any function µ solution of Eq. (1.2.4), the differential equation in (1.2.3) has the form
µ y 0 + µ0 y = µ b.
But the left-hand side is a total derivative of a product of two functions,
(µ y)' = µ b. (1.2.5)
This is the property we want in an integrating factor, µ. We want to find a function µ such
that the left-hand side of the differential equation for y can be written as a total derivative,
just as in Eq. (1.2.5). We only need to find one of such functions µ. So we go back to
Eq. (1.2.4), the differential equation for µ, which is simple to solve,
µ' = −a µ ⇒ µ'/µ = −a ⇒ (ln(µ))' = −a ⇒ ln(µ) = −A + c0 ,
where A is a primitive, or antiderivative, of the function a, that is A' = a, and c0 is an arbitrary
constant. Computing the exponential of both sides we get
µ = e^{−A} e^{c0} ⇒ µ = c1 e^{−A}, c1 = e^{c0}.
Since c1 is a constant which will cancel out from Eq. (1.2.3) anyway, we choose the integration
constant c0 = 0, hence c1 = 1. The integrating function is then
µ(t) = e−A .
This function is an integrating factor, because if we start again at Eq. (1.2.3), we get
e^{−A} y' − a e^{−A} y = e^{−A} b ⇒ e^{−A} y' + (e^{−A})' y = e^{−A} b,
where we used the main property of the integrating factor, −a e^{−A} = (e^{−A})'. Now the
product rule for derivatives implies that the left-hand side above is a total derivative,
(e^{−A} y)' = e^{−A} b.
The right-hand side above can also be rewritten as the derivative of the function
K(t) = ∫ e^{−A(t)} b(t) dt,
since K 0 = e−A b. We cannot write the function K explicitly, because we do not know the
functions A and b. But given the functions A and b we can always compute K. Therefore,
the differential equation for y has the form
(e^{−A} y − K)' = 0.
We have succeeded in writing the whole differential equation as a total derivative. The
differential equation is the total derivative of a potential function,
ψ(t, y(t)) = e−A(t) y(t) − K(t) .
1.2.2. The Initial Value Problem. We now generalize Theorem 1.1.4 from constant
coefficients to variable coefficients equations. But first, let us introduce the initial value
problem for a variable coefficients equation, a simple generalization of Def. 1.1.3.
Definition 1.2.2. The initial value problem (IVP) for a first order linear ODE is the
following: Given functions a, b and constants t0 , y0 , find a function y solution of
y 0 = a(t) y + b(t), y(t0 ) = y0 . (1.2.6)
The second equation in (1.2.6) is called the initial condition of the problem. We saw
in Theorem 1.2.1 that the differential equation in (1.2.6) has infinitely many solutions,
parametrized by a real constant c. The associated initial value problem has a unique solution
though, because the initial condition fixes the constant c.
Theorem 1.2.3 (Variable coefficients IVP). Given continuous functions a, b, with do-
main (t1 , t2 ), and constants t0 ∈ (t1 , t2 ) and y0 ∈ R, the initial value problem
y 0 = a(t) y + b(t), y(t0 ) = y0 , (1.2.7)
has the unique solution y on the domain (t1 , t2 ), given by
y(t) = y0 e^{A(t)} + e^{A(t)} ∫_{t0}^{t} e^{−A(s)} b(s) ds, (1.2.8)
where the function A(t) = ∫_{t0}^{t} a(s) ds is a particular primitive of the function a.
Remark: In the particular case of a constant coefficient equation, that is, a, b ∈ R, the
solution given in Eq. (1.2.8) reduces to the one given in Eq. (1.1.11). Indeed,
A(t) = ∫_{t0}^{t} a ds = a (t − t0), ∫_{t0}^{t} e^{−a(s−t0)} b ds = −(b/a) e^{−a(t−t0)} + b/a.
Therefore, the solution y can be written as
y(t) = y0 e^{a(t−t0)} + e^{a(t−t0)} (−(b/a) e^{−a(t−t0)} + b/a) = (y0 + b/a) e^{a(t−t0)} − b/a.
Proof of Theorem 1.2.3: We follow closely the proof of Theorem 1.1.2. From Theorem 1.2.1
we know that all solutions to the differential equation in (1.2.7) are given by
y(t) = c e^{A(t)} + e^{A(t)} ∫ e^{−A(t)} b(t) dt,
for every c ∈ R. Let us use again the notation K(t) = ∫ e^{−A(t)} b(t) dt, and then introduce
the initial condition in (1.2.7), which fixes the constant c,
y0 = y(t0 ) = c eA(t0 ) + eA(t0 ) K(t0 ).
So we get the constant c,
c = y0 e−A(t0 ) − K(t0 ).
Using this expression in the general solution above,
y(t) = (y0 e^{−A(t0)} − K(t0)) e^{A(t)} + e^{A(t)} K(t) = y0 e^{A(t)−A(t0)} + e^{A(t)} (K(t) − K(t0)).
Let us introduce the particular primitives Â(t) = A(t) − A(t0 ) and K̂(t) = K(t) − K(t0 ),
which vanish at t0 , that is,
Â(t) = ∫_{t0}^{t} a(s) ds, K̂(t) = ∫_{t0}^{t} e^{−A(s)} b(s) ds.
which is equivalent to
y(t) = y0 e^{Â(t)} + e^{A(t)−A(t0)} ∫_{t0}^{t} e^{−[A(s)−A(t0)]} b(s) ds,
so we conclude that
y(t) = y0 e^{Â(t)} + e^{Â(t)} ∫_{t0}^{t} e^{−Â(s)} b(s) ds.
Renaming the particular primitive Â as A, we obtain Eq. (1.2.8). This establishes the Theorem.
Example 1.2.2: Find the solution y of the initial value problem
y'(t) = −(2/t) y(t) + 4t, y(1) = 2.
Solution: We first write the equation above in a way in which it is simple to see the functions a
and b in Theorem 1.2.3. In this case we obtain
y' = −(2/t) y + 4t ⇔ a(t) = −2/t, b(t) = 4t. (1.2.9)
So the potential function is ψ(t, y(t)) = t² y(t) − t⁴. Integrating on both sides we obtain
t² y − t⁴ = c ⇒ t² y = c + t⁴ ⇒ y(t) = c/t² + t².
The initial condition implies that
2 = y(1) = c + 1 ⇒ c = 1 ⇒ y(t) = 1/t² + t².
C
Example 1.2.3: Find the solution of the problem given in Example 1.2.2 using the results
of Theorem 1.2.3.
Solution: We find the solution simply by using Eq. (1.2.8). First, find the integrating
factor function µ as follows:
A(t) = −∫_1^t (2/s) ds = −2 (ln(t) − ln(1)) = −2 ln(t) ⇒ A(t) = ln(t^{−2}).
Then Eq. (1.2.8) with y0 = 2 gives
y(t) = 2/t² + (1/t²) ∫_1^t 4 s³ ds
     = 2/t² + (1/t²) (t⁴ − 1)
     = 2/t² + t² − 1/t² ⇒ y(t) = 1/t² + t².
C
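Remark: A quick check of this variable coefficient initial value problem, assuming Python with the sympy library (an illustration only):

import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t), -2 * y(t) / t + 4 * t)
sol = sp.dsolve(ode, y(t), ics={y(1): 2})
print(sp.simplify(sol.rhs))   # t**2 + t**(-2), that is, y(t) = 1/t^2 + t^2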
1.2.3. The Bernoulli Equation. In 1696 Jacob Bernoulli struggled for months trying to
solve a particular differential equation, now known as Bernoulli’s differential equation. He
could not solve it, so he organized a contest among his peers to solve the equation. In
a short time his brother Johann Bernoulli solved it. This was bad news for Jacob because
the relation between the brothers was not the best at that time. Later on the equation was
solved by Leibniz using a different method from Johann's. Leibniz transformed the original
nonlinear equation into a linear equation. We now explain Leibniz’s idea in more detail.
Definition 1.2.4. A Bernoulli equation in the unknown function y, determined by the
functions p, q, and a number n ∈ R, is the differential equation
y 0 = p(t) y + q(t) y n . (1.2.10)
In the case that n = 0 or n = 1 the Bernoulli equation reduces to a linear equation. The
interesting cases are when the Bernoulli equation is nonlinear. We now show in an Example
the main idea to solve a Bernoulli equation: To transform the original nonlinear equation
into a linear equation.
Example 1.2.4: Find every solution of the differential equation
y 0 = y + 2y 5 .
Solution: This is a Bernoulli equation for n = 5. Divide the equation by the nonlinear
factor y 5 ,
y'/y⁵ = 1/y⁴ + 2.
Introduce the function v = 1/y 4 and its derivative v 0 = −4(y 0 /y 5 ), into the differential
equation above,
−v'/4 = v + 2 ⇒ v' = −4 v − 8 ⇒ v' + 4 v = −8.
The last equation is a linear differential equation for the function v. This equation can be
solved using the integrating factor method. Multiply the equation by µ(t) = e4t , then
(e^{4t} v)' = −8 e^{4t} ⇒ e^{4t} v = −2 e^{4t} + c.
We obtain that v = c e−4t − 2. Since v = 1/y 4 ,
1/y⁴ = c e^{−4t} − 2 ⇒ y(t) = ±1/(c e^{−4t} − 2)^{1/4}.
C
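Remark: Since the solution involves a fractional power, a simple numerical check is convenient. The sketch below is plain Python, for illustration only; the constant c = 10 is an arbitrary sample value.

import math

c = 10.0   # sample constant; c e^{-4t} - 2 must remain positive near the test point
def y(t):
    return (c * math.exp(-4 * t) - 2) ** (-0.25)

t, h = 0.1, 1e-6
dydt = (y(t + h) - y(t - h)) / (2 * h)   # central difference approximation of y'(t)
rhs = y(t) + 2 * y(t) ** 5
print(abs(dydt - rhs))                   # a small number, of the size of the finite difference error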
We now summarize the first part of the calculation in the Example above.
Theorem 1.2.5 (Bernoulli). The function y is a solution of the Bernoulli equation
y' = p(t) y + q(t) yⁿ, n ≠ 1,
iff the function v = 1/y^{n−1} is solution of the linear differential equation
v' = −(n − 1) p(t) v − (n − 1) q(t).
Remark: This result summarizes Leibniz's idea to solve the Bernoulli equation: transform
the Bernoulli equation for y, which is nonlinear, into a linear equation for v = 1/y^{n−1}.
One then solves the linear equation for v using the integrating factor method. The last step
is to transform back to y = (1/v)^{1/(n−1)}.
Proof of Theorem 1.2.5: Divide the Bernoulli equation by y n ,
y'/yⁿ = p(t)/y^{n−1} + q(t).
Introduce the new unknown v = y −(n−1) and compute its derivative,
v' = (y^{−(n−1)})' = −(n − 1) y^{−n} y' ⇒ −v'(t)/(n − 1) = y'(t)/yⁿ(t).
If we substitute v and this last equation into the Bernoulli equation we get
−v'/(n − 1) = p(t) v + q(t) ⇒ v' = −(n − 1) p(t) v − (n − 1) q(t).
This establishes the Theorem.
Example 1.2.5: Given any constants a0 , b0 , find every solution of the differential equation
y 0 = a0 y + b0 y 3 .
Example: Find every solution of the equation t y' = 3 y + t⁵ y^{1/3}, that is, y' = (3/t) y + t⁴ y^{1/3}.
Solution: This is a Bernoulli equation with n = 1/3. Define the new unknown function v = 1/y^{n−1},
that is, v = y^{2/3}, compute its derivative, v' = 2 y'/(3 y^{1/3}), and introduce them in the differential equation,
(3/2) v' = (3/t) v + t⁴ ⇒ v' − (2/t) v = (2/3) t⁴.
This is a linear equation for v. Integrate this equation using the integrating factor method.
To compute the integrating factor we need to find
A(t) = ∫ (2/t) dt = 2 ln(t) = ln(t²).
Then, the integrating factor is µ(t) = e−A(t) . In this case we get
µ(t) = e^{−ln(t²)} = e^{ln(t^{−2})} ⇒ µ(t) = 1/t².
Therefore, the equation for v can be written as a total derivative,
(1/t²) v' − (2/t³) v = (2/3) t² ⇒ [v/t² − (2/9) t³]' = 0.
The potential function is ψ(t, v) = v/t2 −(2/9)t3 and the solution of the differential equation
is ψ(t, v(t)) = c, that is,
v/t² − (2/9) t³ = c ⇒ v(t) = t² (c + (2/9) t³) ⇒ v(t) = c t² + (2/9) t⁵.
Once v is known we compute the original unknown y = ±v 3/2 , where the double sign is
related to taking the square root. We finally obtain
y(t) = ±(c t² + (2/9) t⁵)^{3/2}.
C
Notes. This section corresponds to Boyce-DiPrima [3] Section 2.1, and Simmons [10]
Section 2.10. The Bernoulli equation is solved in the exercises of Section 2.4 in Boyce-DiPrima,
and in the exercises of Section 2.10 in Simmons.
1.2.4. Exercises.
1.2.1.- Find the general solution of
y' = −y + e^{−2t}.

1.2.2.- Find the solution y to the IVP
y' = y + 2t e^{2t}, y(0) = 0.

1.2.3.- Find the solution y to the IVP
t y' + 2 y = sin(t)/t, y(π/2) = 2/π,
for t > 0.

1.2.4.- Find all solutions y to the ODE
y' / ((t² + 1) y) = 4t.

1.2.5.- Find all solutions y to the ODE
t y' + n y = t²,
with n a positive integer.

1.2.6.- Find the solutions to the IVP
2ty − y' = 0, y(0) = 3.

1.2.7.- Find all solutions of the equation
y' = y − 2 sin(t).

1.2.8.- Find the solution to the initial value problem
t y' = 2 y + 4t³ cos(4t), y(π/8) = 0.

1.2.9.- Find all solutions of the equation
y' + t y = t y².

1.2.10.- Find all solutions of the equation
y' = −x y + 6x √y.

1.2.11.- Find all solutions of the IVP
y' = y + 3/y², y(0) = 1.
(f) The linear equation y' = a(t) y + b(t), with a ≠ 0 and b nonconstant, is not separable.
C
From the last two examples above we see that linear differential equations, with a ≠ 0,
are separable for b constant, and not separable otherwise. Separable differential equations
are simple to solve. We simply integrate on both sides of the equation.
Theorem 1.3.2 (Separable Equations). If h, g are continuous, with h ≠ 0, then
h(y) y 0 = g(t) (1.3.1)
has infinitely many solutions y satisfying the algebraic equation
H(y(t)) = G(t) + c, (1.3.2)
where c ∈ R is arbitrary, H is a primitive (antiderivative) of h, and G is a primitive of g.
Remark: That the function H is a primitive of the function h means H' = h, where H' = dH/dy.
Analogously, that G is a primitive of g means G' = g, with G' = dG/dt.
Before we prove this Theorem we solve a particular example. The example will help us
identify the functions h, g, H and G, and it will also show how to prove the theorem.
Example 1.3.2: Find all solutions y to the differential equation
y'(t) = t² / (1 − y²(t)). (1.3.3)
Solution: We write the differential equation in (1.3.3) in the form h(y) y 0 = g(t),
(1 − y²(t)) y'(t) = t².
In this example the functions h and g defined in Theorem 1.3.2 are given by
h(y) = (1 − y 2 ), g(t) = t2 .
We now integrate with respect to t on both sides of the differential equation,
∫ (1 − y²(t)) y'(t) dt = ∫ t² dt + c,
where c is any constant. The integral on the right-hand side can be computed explicitly.
The integral on the left-hand side can be done by substitution. The substitution is
u = y(t), du = y 0 (t) dt.
This substitution on the left-hand side integral above gives,
∫ (1 − u²) du = ∫ t² dt + c ⇔ u − u³/3 = t³/3 + c.
Substitute back the original unknown y into the last expression above and we obtain
y(t) − y³(t)/3 = t³/3 + c.
We have solved the differential equation, since there are no derivatives in this last equation.
When the solution is given in terms of an algebraic equation, we say that the solution y is
given in implicit form. C
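Remark: One can recover the differential equation from the implicit solution by implicit differentiation. The sketch below assumes Python with the sympy library and is only an illustration.

import sympy as sp

t, y, c = sp.symbols('t y c')
F = (y - y**3 / 3) - (t**3 / 3 + c)     # implicit relation F(t, y) = 0
dydt = -sp.diff(F, t) / sp.diff(F, y)   # implicit differentiation: y' = -F_t / F_y
print(sp.simplify(dydt))                # equals t**2/(1 - y**2), the original equation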
In the case that H is not invertible or H^{−1} is difficult to compute, we leave the solution
y in implicit form. We now show a proof of Theorem 1.3.2 that is based on an integration
by substitution, just as we did in Example 1.3.2.
Proof of Theorem 1.3.2: Integrate with respect to t on both sides in Eq. (1.3.1),
h(y(t)) y'(t) = g(t) ⇒ ∫ h(y(t)) y'(t) dt = ∫ g(t) dt + c,
where c is an arbitrary constant. Introduce on the left-hand side of the second equation
above the substitution
u = y(t), du = y 0 (t) dt.
The result of the substitution is
∫ h(y(t)) y'(t) dt = ∫ h(u) du ⇒ ∫ h(u) du = ∫ g(t) dt + c.
To integrate on each side of this equation means to find a function H, primitive of h, and
a function G, primitive of g. Using this notation we write
H(u) = ∫ h(u) du, G(t) = ∫ g(t) dt.
Solution: Theorem 1.3.2 tells us how to obtain the solution y. Writing Eq. (1.3.3) as
(1 − y²) y'(t) = t²,
Then, Theorem 1.3.2 implies that the solution y satisfies the algebraic equation
y(t) − y³(t)/3 = t³/3 + c, (1.3.5)
where c ∈ R is arbitrary. C
Remark: Sometimes it is simpler to remember ideas than formulas. So one can solve a
separable equation as we did in Example 1.3.2, instead of using the solution formulas, as in
Example 1.3.3. (Although in the case of separable equations both methods are very close.)
In the next Example we show that an initial value problem can be solved even when the
solutions of the differential equation are given in implicit form.
Example 1.3.4: Find the solution of the initial value problem
y'(t) = t² / (1 − y²(t)), y(0) = 1. (1.3.6)
Solution: From Example 1.3.2 we know that all solutions to the differential equation
in (1.3.6) are given by
y(t) − y³(t)/3 = t³/3 + c,
where c ∈ R is arbitrary. This constant c is now fixed with the initial condition in Eq. (1.3.6)
y(0) − y³(0)/3 = 0/3 + c ⇒ 1 − 1/3 = c ⇔ c = 2/3 ⇒ y(t) − y³(t)/3 = t³/3 + 2/3.
So we can rewrite the algebraic equation defining the solution functions y as the roots of a
cubic polynomial,
y 3 (t) − 3y(t) + t3 + 2 = 0.
C
The solution is given in implicit form. However, in this case it is simple to solve this algebraic
equation for y, and we obtain the following explicit form for the solutions,
y(t) = 2 / (sin(2t) − 2c).
The initial condition implies that
1 = y(0) = 2/(0 − 2c) ⇔ c = −1.
So, the solution to the IVP is given in explicit form by
y(t) = 2 / (sin(2t) + 2).
C
Example 1.3.6: Follow the proof in Theorem 1.3.2 to find all solutions y of the ODE
y'(t) = (4t − t³) / (4 + y³(t)).
Example 1.3.7: Find the explicit form of the solution to the initial value problem
y'(t) = (2 − t) / (1 + y(t)), y(0) = 1. (1.3.8)
Solution: Integrating (1 + y) y' = 2 − t on both sides gives y(t) + y²(t)/2 = 2t − t²/2 + c,
with c ∈ R. The initial condition in Eq. (1.3.8) fixes the value of the constant c, as follows,
y(0) + y²(0)/2 = 0 + c ⇒ 1 + 1/2 = c ⇒ c = 3/2.
We conclude that the implicit form of the solution y is given by
y(t) + y²(t)/2 = 2t − t²/2 + 3/2 ⇔ y²(t) + 2y(t) + (t² − 4t − 3) = 0.
The explicit form of the solution can be obtained by realizing that y(t) is a root of the quadratic
polynomial above. The two roots of that polynomial are given by
y±(t) = (1/2) [−2 ± √(4 − 4(t² − 4t − 3))] ⇔ y±(t) = −1 ± √(−t² + 4t + 4).
We have obtained two functions y+ and y− . However, we know that there is only one
solution to the IVP. We can decide which one is the solution by evaluating them at the
value t = 0 given in the initial condition. We obtain
y+(0) = −1 + √4 = 1,
y−(0) = −1 − √4 = −3.
Therefore, the solution is y+ , that is, the explicit form of the solution is
y(t) = −1 + √(−t² + 4t + 4).
C
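Remark: The explicit solution can be verified symbolically; the sketch below assumes Python with the sympy library and is only an illustration.

import sympy as sp

t = sp.symbols('t')
y = -1 + sp.sqrt(-t**2 + 4 * t + 4)          # candidate solution y_+
residual = sp.diff(y, t) - (2 - t) / (1 + y)
print(sp.simplify(residual))                 # 0, so y' = (2 - t)/(1 + y)
print(y.subs(t, 0))                          # 1, so the initial condition holds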
In the case the functions M, N are homogeneous of the same degree, the differential
equation N (t, y) y 0 (t) + M (t, y) = 0 can be written as
y'(t) = −M(t, y)/N(t, y),
where the function −M/N is homogeneous of degree n = 0. Such functions are also called
scale invariant, and the differential equation is called Euler homogeneous.
Definition 1.3.5. An equation y'(t) = f(t, y(t)) is Euler homogeneous iff the function
f is homogeneous of degree zero, that is, for all real numbers t, u and every c ≠ 0 we have
f(ct, cu) = f(t, u).
Example 1.3.9: Show that the functions below are scale invariant functions,
f1(t, y) = y/t, f2(t, y) = (t³ + t² y + t y² + y³) / (t³ + t y²).
Solution: Function f1 is scale invariant since
f1(ct, cy) = cy/(ct) = y/t = f1(t, y).
The function f2 is scale invariant as well, since
f2(ct, cy) = (c³t³ + c²t² · cy + ct · c²y² + c³y³) / (c³t³ + ct · c²y²) = c³(t³ + t²y + ty² + y³) / (c³(t³ + ty²)) = f2(t, y).
C
Example 1.3.10: Determine whether the equation y' = t²/(1 − y³) is Euler homogeneous.
Solution: The differential equation is written in the standard form y' = f(t, y), where
f(t, y) = t²/(1 − y³), but f(ct, cy) = c²t²/(1 − c³y³) ≠ f(t, y).
So, the equation is not Euler homogeneous. C
Remark: If a function f , on the variables t, y, is scale invariant, one has the identity,
f(t, y) = f(ct, cy), and choosing c = 1/t ⇒ f(t, y) = f(1, y/t).
This means that f is actually a function of only one variable, y/t. If we introduce the
notation F (y/t) = f (1, y/t), then the functions f and F in the example above are,
f(t, y) = (2y − 3t − y²/t) / (t − y), F(y/t) = (2 (y/t) − 3 − (y/t)²) / (1 − y/t).
Using the notation from this remark, a first order differential equation is Euler homoge-
neous iff it has the form
y'(t) = F(y(t)/t). (1.3.9)
Equation (1.3.9) is often taken in the literature as the definition of an Euler homogeneous equation.
Now that we know what Euler homogeneous equations are, let us see how we can solve them.
Theorem 1.3.6 (Euler Homogeneous). If the differential equation for a function y
y'(t) = f(t, y(t))
is Euler homogeneous, then the function v(t) = y(t)/t satisfies the separable equation
v'/(F(v) − v) = 1/t,
where we have denoted F (v) = f (1, v).
Remark: In § 1.2 we transformed a Bernoulli equation into an equation we knew how
to solve, a linear equation. Theorem 1.3.6 transforms a homogeneous equation into an
equation we know how to solve, a separable equation. The original homogeneous equation
for the function y is transformed into a separable equation for the unknown function v = y/t.
One solves for v, in implicit or explicit form, and then transforms back to y = t v.
Proof of Theorem 1.3.6: If y' = f (t, y) is homogeneous, then we know that it can be
written as y 0 = F (y/t), where F (y/t) = f (1, y/t). Introduce the function v = y/t into the
differential equation,
y 0 = F (v).
We still need to replace y 0 in terms of v. This is done as follows,
y(t) = t v(t) ⇒ y 0 (t) = v(t) + t v 0 (t).
Introducing these expressions into the differential equation for y we get
v + t v' = F(v) ⇒ v' = (F(v) − v)/t ⇒ v'/(F(v) − v) = 1/t.
The equation on the far right is separable. This establishes the Theorem.
Example 1.3.12: Find all solutions y of the differential equation y' = (t² + 3y²)/(2ty).
Solution: The equation is Euler homogeneous, since
f(ct, cy) = (c²t² + 3c²y²)/(2ct · cy) = c²(t² + 3y²)/(2c²ty) = (t² + 3y²)/(2ty) = f(t, y).
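Remark: Carrying out the substitution v = y/t of Theorem 1.3.6 for this equation leads to the candidate family y(t) = ±√(c t³ − t²); this last step is not shown above, so the sketch below (Python with the sympy library, an illustration only) checks the candidate directly.

import sympy as sp

t, c = sp.symbols('t c', positive=True)
y = sp.sqrt(c * t**3 - t**2)                       # candidate solution family
residual = sp.diff(y, t) - (t**2 + 3 * y**2) / (2 * t * y)
print(sp.simplify(residual))                       # 0, so the candidate solves the equation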
Notes. This section corresponds to Boyce-DiPrima [3] Section 2.2. Zill and Wright study
separable equations in [17] Section 2.2, and Euler homogeneous equations in Section 2.5.
Zill and Wright organize the material in a nice way: they present separable equations first,
then linear equations, and then they group Euler homogeneous and Bernoulli equations in
a section called Solutions by Substitution. Once again, a one page description is given by
Simmons in [10] in Chapter 2, Section 7.
1.3.3. Exercises.
1.3.1.- Find all solutions y to the ODE
y' = t²/y.
Express the solutions in explicit form.

1.3.2.- Find every solution y of the ODE
3t² + 4y³ y' − 1 + y' = 0.
Leave the solution in implicit form.

1.3.3.- Find the solution y to the IVP
y' = t² y², y(0) = 1.

1.3.4.- Find every solution y of the ODE
ty + √(1 + t²) y' = 0.

1.3.5.- Find every solution y of the Euler homogeneous equation
y' = (y + t)/t.

1.3.6.- Find all solutions y to the ODE
y' = (t² + y²)/(ty).

1.3.7.- Find the explicit solution to the IVP
(t² + 2ty) y' = y², y(1) = 1.

1.3.8.- Prove that if y' = f(t, y) is an Euler homogeneous equation and y1(t) is a solution, then y(t) = (1/k) y1(kt) is also a solution for every non-zero k ∈ R.
The next example shows that linear equations, written as in § 1.2, are not exact.
Example 1.4.2: Show whether the linear differential equation below is exact or not,
y'(t) = a(t) y(t) + b(t), a(t) ≠ 0.
Solution: We first find the functions N and M rewriting the equation as follows,
y 0 + a(t)y − b(t) = 0 ⇒ N (t, y) = 1, M (t, y) = −a(t) y − b(t).
Let us check whether the equation is exact or not,
N(t, y) = 1 ⇒ ∂t N(t, y) = 0,
M(t, y) = −a(t) y − b(t) ⇒ ∂y M(t, y) = −a(t),
hence ∂t N(t, y) ≠ ∂y M(t, y).
So, the differential equation is not exact. C
The following examples show that there are exact equations which are not separable.
Example 1.4.3: Show whether the differential equation below is exact or not,
2ty(t) y 0 (t) + 2t + y 2 (t) = 0.
Solution: We first identify the functions N and M . This is simple in this case, since
2ty(t) y 0 (t) + 2t + y 2 (t) = 0 ⇒ N (t, y) = 2ty, M (t, y) = 2t + y 2 .
Example 1.4.4: Show whether the differential equation below is exact or not,
sin(t)y 0 (t) + t2 ey(t) y 0 (t) − y 0 (t) = −y(t) cos(t) − 2tey(t) .
Solution: We first identify the functions N and M by rewriting the equation as follows,
(sin(t) + t² e^{y(t)} − 1) y'(t) + y(t) cos(t) + 2t e^{y(t)} = 0.
1.4.2. Finding a Potential Function. Exact equations can be rewritten as a total deriv-
ative of a function, called a potential function. The condition ∂t N = ∂y M is equivalent to
the existence of a potential function, a result proven by Henri Poincaré around 1880 and now
called the Poincaré Lemma.
Lemma 1.4.2 (Poincaré). Continuously differentiable functions M, N : R → R, on a
rectangle R = (t1 , t2 ) × (y1 , y2 ), satisfy the equation
∂t N (t, y) = ∂y M (t, y) (1.4.1)
iff there exists a twice continuously differentiable function ψ : R → R, called potential
function, such that for all (t, y) ∈ R holds
∂y ψ(t, y) = N (t, y), ∂t ψ(t, y) = M (t, y). (1.4.2)
Remark:
(a) A differential equation defines the functions N and M . The exact condition in (1.4.1)
is equivalent to the existence of ψ, related to N and M through Eq. (1.4.2).
(b) If we recall the definition of the gradient of a function, that is, ∇ψ = ⟨∂t ψ, ∂y ψ⟩, then
the equations in (1.4.2) say that ∇ψ = ⟨M, N⟩.
In our next example we verify that a given function ψ is a potential function for an exact
differential equation. We also show that the differential equation can be rewritten as a
total derivative of this potential function. (In Theorem 1.4.3 we show how to compute such
potential function from the differential equation, integrating the equations in (1.4.2).)
Example 1.4.5: Show that the function ψ(t, y) = t2 + ty 2 is a potential function for the
exact differential equation
2ty(t) y 0 (t) + 2t + y 2 (t) = 0.
Furthermore, show that the differential equation is the total derivative of this potential
function.
Solution: In Example 1.4.3 we showed that the differential equation above is exact, since
N (t, y) = 2ty, M (t, y) = 2t + y 2 ⇒ ∂t N = 2y = ∂y M.
Let us check that the function ψ(t, y) = t² + t y² is a potential function of the differential
equation. First compute the partial derivatives,
∂t ψ = 2t + y 2 = M, ∂y ψ = 2ty = N.
Now, for the furthermore part, we use the chain rule to compute the t derivative of the
potential function ψ evaluated at the unknown function y,
(d/dt) ψ(t, y(t)) = ∂y ψ (dy/dt) + ∂t ψ.
But we have just computed these partial derivatives,
(d/dt) ψ(t, y(t)) = 2t y(t) y' + 2t + y²(t) = 0.
So we have shown that the differential equation can be written as (dψ/dt)(t, y(t)) = 0. C
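Remark: The exactness condition and the two equations in (1.4.2) can be verified mechanically; the sketch below assumes Python with the sympy library and is only an illustration.

import sympy as sp

t, y = sp.symbols('t y')
N = 2 * t * y
M = 2 * t + y**2
psi = t**2 + t * y**2

print(sp.diff(N, t) - sp.diff(M, y))      # 0: the equation is exact
print(sp.simplify(sp.diff(psi, y) - N))   # 0: psi_y = N
print(sp.simplify(sp.diff(psi, t) - M))   # 0: psi_t = M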
Exact equations always have a potential function ψ, and this function is not difficult to
compute: we only need to integrate Eq. (1.4.2). Having a potential function of an exact
equation is essentially the same as solving the differential equation, since the integral curves
of ψ define implicit solutions of the differential equation.
Theorem 1.4.3 (Exact Equation). If the differential equation
N (t, y(t)) y 0 (t) + M (t, y(t)) = 0 (1.4.3)
is exact on R = (t1 , t2 ) × (y1 , y2 ), then every solution y must satisfy the algebraic equation
ψ(t, y(t)) = c, (1.4.4)
where c ∈ R and ψ : R → R is a potential function for Eq. (1.4.3).
Proof of Theorem 1.4.3: Since the differential equation in (1.4.3) is exact, Lemma 1.4.2
implies that there exists a potential function ψ satisfying Eqs. (1.4.2), that is,
N (t, y) = ∂y ψ(t, y), M (t, y) = ∂t ψ(t, y).
Therefore, the differential equation is given by
0 = N(t, y) y'(t) + M(t, y)
  = ∂y ψ(t, y) y' + ∂t ψ(t, y)
  = (d/dt) ψ(t, y(t)),
where in the last step we used the chain rule, which is the way to compute derivative
of a composition of functions. So, the differential equation has been rewritten as a total
t-derivative of the potential function, which is simple to integrate,
(d/dt) ψ(t, y(t)) = 0 ⇒ ψ(t, y(t)) = c,
where c is an arbitrary constant. This establishes the Theorem.
Example 1.4.6: Find all solutions y to the differential equation
2ty(t) y 0 (t) + 2t + y 2 (t) = 0.
Solution: The first step is to verify whether the differential equation is exact. We know
the answer, the equation is exact, we did this calculation before in Example 1.4.3, but we
reproduce it here anyway.
N(t, y) = 2ty ⇒ ∂t N(t, y) = 2y,
M(t, y) = 2t + y² ⇒ ∂y M(t, y) = 2y,
hence ∂t N(t, y) = ∂y M(t, y).
Since the equation is exact, Lemma 1.4.2 implies that there exists a potential function ψ
satisfying the equations
∂y ψ(t, y) = N (t, y), (1.4.5)
∂t ψ(t, y) = M (t, y). (1.4.6)
Let us compute ψ. Integrate Eq. (1.4.5) in the variable y keeping the variable t constant,
∂y ψ(t, y) = 2ty ⇒ ψ(t, y) = ∫ 2ty dy + g(t),
Remark: An exact equation and its solutions can be pictured on the graph of a potential
function. This is called a geometrical interpretation of the exact equation. We saw that an
exact equation N y 0 + M = 0 can be rewritten as dψ/dt = 0. Solutions of the differential
equation are level curves of the potential function, ψ(t, y(t)) = c. Given a level curve, the
vector r(t) = ⟨t, y(t)⟩, which belongs to the ty-plane, points to the level curve, while its
derivative r'(t) = ⟨1, y'(t)⟩ is a vector tangent to the level curve. Since the gradient vector
∇ψ = ⟨M, N⟩ is a vector perpendicular to the level curve,
r' ⊥ ∇ψ ⇔ r' · ∇ψ = 0 ⇔ M + N y' = 0.
We wanted to remark that the differential equation can be thought of as the condition r' ⊥ ∇ψ.
Solution: The first step is to verify whether the differential equation is exact. We know
the answer, the equation is exact, we did this calculation before in Example 1.4.4, but we
reproduce it here anyway.
N(t, y) = sin(t) + t² e^y − 1 ⇒ ∂t N(t, y) = cos(t) + 2t e^y,
M(t, y) = y cos(t) + 2t e^y ⇒ ∂y M(t, y) = cos(t) + 2t e^y.
Therefore, the differential equation is exact. Then, Lemma 1.4.2 implies that there exists a
potential function ψ satisfying the equations
∂y ψ(t, y) = N (t, y), (1.4.8)
∂t ψ(t, y) = M (t, y). (1.4.9)
We now proceed to compute the function ψ. We first integrate in the variable y the
equation ∂y ψ = N, keeping the variable t constant,
∂y ψ(t, y) = sin(t) + t² e^y − 1 ⇒ ψ(t, y) = ∫ (sin(t) + t² e^y − 1) dy + g(t).
The solution is g(t) = c0 , with c0 a constant, but we can always choose that constant to be
zero. We conclude that
g(t) = 0.
We found g, so we have the complete potential function,
ψ(t, y) = y sin(t) + t2 ey − y.
Theorem 1.4.3 implies that any solution y satisfies the implicit equation
y(t) sin(t) + t2 ey(t) − y(t) = c.
The solution y above cannot be written in explicit form. We notice that the choice g(t) = c0
only modifies the constant c above. C
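Remark: The potential function found above can be checked against Eqs. (1.4.8)-(1.4.9); the sketch below assumes Python with the sympy library and is only an illustration.

import sympy as sp

t, y = sp.symbols('t y')
N = sp.sin(t) + t**2 * sp.exp(y) - 1
M = y * sp.cos(t) + 2 * t * sp.exp(y)
psi = y * sp.sin(t) + t**2 * sp.exp(y) - y

print(sp.simplify(sp.diff(psi, y) - N))   # 0: psi_y = N
print(sp.simplify(sp.diff(psi, t) - M))   # 0: psi_t = M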
Proof of Theorem 1.4.4: We know that the original differential equation in (1.4.10) is not
exact because ∂t N ≠ ∂y M. Now multiply the differential equation by a nonzero function µ
that depends only on t,
(µN ) y 0 + (µM ) = 0. (1.4.13)
We look for a function µ such that this new equation is exact. This means that µ must
satisfy the equation
∂t (µN ) = ∂y (µM ).
Recalling that µ depends only on t and denoting ∂t µ = µ', we get
µ' N + µ ∂t N = µ ∂y M ⇒ µ' N = µ (∂y M − ∂t N).
So the differential equation in (1.4.13) is exact iff the following holds,
µ'/µ = (∂y M − ∂t N)/N,
and a necessary condition for such an equation to have solutions is that the right-hand side
be independent of the variable y. This establishes the Theorem.
Example 1.4.8: Find all solutions y to the differential equation
(t² + t y(t)) y'(t) + 3t y(t) + y²(t) = 0. (1.4.14)
so we get the exactness condition ∂t Ñ = ∂y M̃ . The solution y can be found as we did in the
previous examples in this Section. That is, we find the potential function ψ by integrating
the equations
∂y ψ(t, y) = Ñ (t, y), (1.4.16)
∂t ψ(t, y) = M̃ (t, y). (1.4.17)
From the first equation above we obtain
∂y ψ = t³ + t² y ⇒ ψ(t, y) = ∫ (t³ + t² y) dy + g(t).
We have seen in Example 1.4.2 that linear differential equations with a ≠ 0 are not exact.
In Section 1.2 we found solutions to linear equations using the integrating factor method.
We multiplied the linear equation by a function that transformed the equation into a total
derivative. Those calculations are now a particular case of Theorem 1.4.4, as we can see
in the following Example.
Example 1.4.9: Use Theorem 1.4.4 to find all solutions to the linear differential equation
y'(t) = a(t) y(t) + b(t), a(t) ≠ 0. (1.4.18)
Solution: We first write the linear equation in a way we can identify functions N and M ,
y' + (−a(t) y − b(t)) = 0.
We now verify whether the linear equation is exact or not. Actually, we have seen in
Example 1.4.2 that this equation is not exact, since
N (t, y) = 1 ⇒ ∂t N (t, y) = 0,
M (t, y) = −a(t)y − b(t) ⇒ ∂y M (t, y) = −a(t).
But now we can go further, we can check whether the condition in Theorem 1.4.4 holds or
not. We compute the function
(1/N(t, y)) (∂y M(t, y) − ∂t N(t, y)) = −a(t)
and we see that it is independent of the variable y. Theorem 1.4.4 says that we can transform
the linear equation into an exact equation. We only need to multiply the linear equation by
a function µ, solution of the equation
µ'(t)/µ(t) = −a(t) ⇒ µ(t) = e^{−A(t)}, A(t) = ∫ a(t) dt.
This is the same integrating factor we discovered in Section 1.2. Therefore, the equation
below is exact,
e^{−A(t)} y' − a(t) e^{−A(t)} y − b(t) e^{−A(t)} = 0. (1.4.19)
This new version of the linear equation is exact, since
Ñ (t, y) = e−A(t) ⇒ ∂t Ñ (t, y) = −a(t) e−A(t) ,
M̃ (t, y) = −a(t) e−A(t) y − b(t) e−A(t) ⇒ ∂y M̃ (t, y) = −a(t) e−A(t) .
Since the linear equation is now exact, the solutions y can be found as we did in the previous
examples in this Section. We find the potential function ψ integrating the equations
∂y ψ(t, y) = Ñ (t, y), (1.4.20)
∂t ψ(t, y) = M̃ (t, y). (1.4.21)
From the first equation above we obtain
∂y ψ = e^{−A(t)} ⇒ ψ(t, y) = ∫ e^{−A(t)} dy + g(t).
All solutions y to the linear differential equation in (1.4.18) satisfy the equation
e^{−A(t)} y(t) − ∫ b(t) e^{−A(t)} dt = c0,
where c0 ∈ R is arbitrary. This is the implicit form of the solution, but in this case it is
simple to find the explicit form too,
y(t) = e^{A(t)} [c0 + ∫ b(t) e^{−A(t)} dt].
This expression agrees with the one in Theorem 1.2.3, when we studied linear equations. C
1.4.4. The Integrating Factor for the Inverse Function. If a differential equation
for a function y is exact, then the equation for the inverse function y −1 is also exact.
This result is proven in Theorem 1.4.5. We then focus on nonexact equations. We review
Theorem 1.4.4, where we established sufficient conditions for the existence of an integrating
factor for nonexact equations. Sometimes the integrating factor for the differential equation
for y does not exist, but the integrating factor for the differential equation for the inverse
function, y −1 , does exist. We study this situation in a bit more detail now. We use the
notation y(x) for the function values, and x(y) for the inverse function values. So in this
last subsection we replace the variable t by x.
Theorem 1.4.5. If a differential equation is exact, as defined in this section, and a solution
is invertible, then the differential equation for the inverse function is also exact.
Proof of Theorem 1.4.5: Write the differential equation of a function y with values y(x),
N (x, y) y 0 + M (x, y) = 0.
We have assumed that the equation is exact, so in this notation ∂x N = ∂y M . If a solution
y is invertible and we use the notation y −1 (y) = x(y), we have the well-known relation
x'(y) = 1/y'(x(y)).
Divide the differential equation above by y 0 and use the relation above, then we get
N (x, y) + M (x, y) x0 = 0,
where now y is the independent variable and the unknown function is x, with values x(y),
and the prime means x0 = dx/dy. The condition for this last equation to be exact is
∂y M = ∂x N,
which we know holds because the equation for y is exact. This establishes the Theorem.
Suppose now that a differential equation N (x, y) y 0 + M (x, y) = 0 is not exact. Theo-
rem 1.4.4 says that if the function (∂y M −∂x N )/N does not depend on y, then the differential
equation
(µ N ) y 0 + (µM ) = 0
is exact in the case that the integrating factor µ is a solution of
µ'(x)/µ(x) = (∂y M − ∂x N)/N.
If the function (∂y M −∂x N )/N does depend on y, the integrating factor µ for the equation
for y may not exist. But the integrating factor for the equation for the inverse function x
may exist. The following result says when this is the case.
Theorem 1.4.6 (Integrating factor II). Assume that the differential equation
M (x, y) x0 + N (x, y) = 0 (1.4.22)
is not exact because ∂y M(x, y) ≠ ∂x N(x, y) holds for the continuously differentiable func-
tions M, N on their domain R = (x1, x2) × (y1, y2). If M ≠ 0 on R and the function
−(1/M(x, y)) (∂y M(x, y) − ∂x N(x, y)) (1.4.23)
does not depend on the variable y, then the equation below is exact,
(µ M ) x0 + (µ N ) = 0 (1.4.24)
where the function µ, which depends only on y ∈ (y1 , y2 ), is a solution of the equation
µ'(y)/µ(y) = −(1/M(x, y)) (∂y M(x, y) − ∂x N(x, y)).
Remark: The differential equations for both the function y and its inverse are
N y' + M = 0, M x' + N = 0, where y'(x) = 1/x'(y).
We want to remark that the functions M and N are the same in both equations.
Proof of Theorem 1.4.6: We know that the original differential equation in (1.4.22) is not
exact because ∂y M ≠ ∂x N. Now multiply the differential equation by a nonzero function µ
that depends only on y,
(µM ) x0 + (µN ) = 0. (1.4.25)
We look for a function µ such that this new equation is exact. This means that µ must
satisfy the equation
∂y (µM ) = ∂x (µN ).
Recalling that µ depends only on y and denoting ∂y µ = µ', we get
µ' M + µ ∂y M = µ ∂x N ⇒ µ' M = µ (∂x N − ∂y M).
So the differential equation in (1.4.25) is exact iff the following holds,
µ'/µ = −(∂y M − ∂x N)/M,
and a necessary condition for such an equation to have solutions is that the right-hand side
be independent of the variable x. This establishes the Theorem.
Example 1.4.10: Find all solutions to the differential equation
(5x e^{−y} + 2 cos(3x)) y' + 5 e^{−y} − 3 sin(3x) = 0.
Solution: We first check if the equation is exact for the unknown function y, which depends
on the variable x. If we write the equation as N y 0 + M = 0, with y 0 = dy/dx, then
N(x, y) = 5x e^{−y} + 2 cos(3x) ⇒ ∂x N(x, y) = 5 e^{−y} − 6 sin(3x),
M(x, y) = 5 e^{−y} − 3 sin(3x) ⇒ ∂y M(x, y) = −5 e^{−y}.
Since ∂x N 6= ∂y M , the equation is not exact. Let us check if there exists an integrating
factor µ that depends only on x. Following Theorem 1.4.4 we study the function
(1/N) (∂y M − ∂x N) = (−10 e^{−y} + 6 sin(3x)) / (5x e^{−y} + 2 cos(3x)),
which is a function of both x and y and cannot be simplified into a function of x alone.
Hence an integrating factor cannot be a function of only x.
Let us now consider the equation for the inverse function x, which depends on the variable
y. The equation is M x0 + N = 0, with x0 = dx/dy, where M and N are the same as before,
M(x, y) = 5 e^{−y} − 3 sin(3x), N(x, y) = 5x e^{−y} + 2 cos(3x).
We know from Theorem 1.4.5 that this equation is not exact. Both the equation for y and
equation for its inverse x must satisfy the same condition to be exact. The condition is
∂x N = ∂y M , but we have seen that this is not true for the equation in this example. The
last thing we can do is to check if the equation for the inverse function x has an integrating
factor µ that depends only on y. Following Theorem 1.4.6 we study the function
−(1/M) (∂y M − ∂x N) = −(−10 e^{−y} + 6 sin(3x)) / (5 e^{−y} − 3 sin(3x)) = 2.
Since the function above does not depend on y, we can solve the differential equation for µ,
function of y, as follows
µ'(y)/µ(y) = −(1/M) (∂y M − ∂x N) ⇒ µ'(y)/µ(y) = 2 ⇒ µ(y) = µ0 e^{2y}.
Since µ is an integrating factor, we can choose µ0 = 1, hence µ = e2y . If we multiply the
equation for x by this integrating factor we get
e^{2y} (5 e^{−y} − 3 sin(3x)) x' + e^{2y} (5x e^{−y} + 2 cos(3x)) = 0,
that is,
(5 e^{y} − 3 sin(3x) e^{2y}) x' + 5x e^{y} + 2 cos(3x) e^{2y} = 0.
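Remark: That the multiplied equation is exact can be confirmed directly; the sketch below assumes Python with the sympy library and is only an illustration.

import sympy as sp

x, y = sp.symbols('x y')
M = 5 * sp.exp(-y) - 3 * sp.sin(3 * x)
N = 5 * x * sp.exp(-y) + 2 * sp.cos(3 * x)
mu = sp.exp(2 * y)

# exactness of (mu M) x' + (mu N) = 0 requires d/dy (mu M) = d/dx (mu N)
print(sp.simplify(sp.diff(mu * M, y) - sp.diff(mu * N, x)))   # 0: the equation is exact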
Notes. Exact differential equations are studied in Boyce-DiPrima [3], Section 2.6, and in
most differential equation textbooks. Often in these textbooks one can find exact equations
written in the notation of differential forms. Both the equation for y and its inverse x,
M (x, y) + N (x, y) y 0 = 0, M (x, y) x0 + N (x, y) = 0, (1.4.26)
are written together as
M (x, y) dx + N (x, y) dy = 0. (1.4.27)
Eq. (1.4.27) makes sense in the framework of differential forms, but this subject is beyond
the scope of these notes. In some textbooks one can find arguments like this: if one thinks
a derivative y 0 = dy/dx as an actual fraction, where dy is divided by dx, then one should
multiply by dx the first equation in (1.4.26) to get Eq. (1.4.27). Unfortunately, dy/dx is not
a fraction, so multiplying by dx doesn’t make any sense.
1.4.5. Exercises.
1.4.3.- Consider the equation
y' = (−2 − y e^{ty}) / (−2y + t e^{ty}).
(a) Determine whether the differential equation is exact.
(b) Find every solution of the equation above.

1.4.4.- Consider the equation
(6x⁵ − xy) + (−x² + xy²) y' = 0,
with initial condition y(0) = 1.
(a) Find an integrating factor µ that converts the equation above into an exact equation.
(b) Find an implicit expression for the solution y of the IVP.

+ 3 e^{−2y} + 5 cos(5x) = 0.
(a) Is this equation for y exact? If not, does this equation have an integrating factor depending on x?
(b) Is the equation for x = y^{−1} exact? If not, does this equation have an integrating factor depending on y?
(c) Find an implicit expression for all solutions y of the differential equation above.
1.5. Applications
Different physical systems may be described by the same mathematical structure. The
radioactive decay of a substance, the cooling of a solid material, or the salt concentration
on a water tank can be described with linear differential equations. A radioactive substance
decays at a rate proportional to the amount of substance present at the time. Something similar
happens to the temperature of a cooling body. Linear, constant coefficients, differential
equations describe these two situations. The salt concentration inside a water tank changes
in the case that salty water is allowed in and out of the tank. This situation is described
with a linear variable coefficients differential equation.
1.5.1. Exponential Decay. An example of exponential decay is the radioactive decay of
certain substances, such as Uranium-235, Radium-226, Radon-222, Polonium-218, Lead-214,
Cobalt-60, Carbon-14, etc. These nuclei break into several smaller nuclei and radiation. The
radioactive decay of a single nucleus cannot be predicted, but the decay of a large number
can. The rate of change in the amount of a radioactive substance in a sample is proportional
to the negative of that amount.
Definition 1.5.1. The exponential decay equation for N with decay constant k > 0 is
N 0 = −k N.
Remark: The equation N 0 = k N , with k > 0 is called the exponential growth equation.
We have seen in § 1.1 how to solve this equation. But we review it here one more time.
Theorem 1.5.2 (Exponential Decay). The general solution for the exponential decay
equation for N with constant k and initial condition N(0) = N0 is
N (t) = N0 e−kt .
Proof of Theorem 1.5.2: The differential equation above is both linear and separable.
We choose to solve it using the integrating factor method. The integrating factor is e^{kt},
(N' + k N) e^{kt} = 0 ⇒ (e^{kt} N)' = 0 ⇒ e^{kt} N = N0.
Proof of Theorem 1.5.4: We know that the amount of a radioactive material as function
of time is given by
N (t) = N0 e−kt .
Then, the definition of half-life implies,
N0/2 = N0 e^{−kτ} ⇒ −kτ = ln(1/2) ⇒ kτ = ln(2).
This establishes the Theorem.
Remark: The amount of a radioactive material, N, can be expressed in terms of the half-life,
N(t) = N0 e^{(−t/τ) ln(2)} ⇒ N(t) = N0 e^{ln[2^{−t/τ}]} ⇒ N(t) = N0 2^{−t/τ}.
From this last expression it is clear that for t = τ we get N(τ) = N0/2.
Our first example is about dating remains with Carbon-14. Carbon-14 is a radioactive
isotope of carbon with a half-life of τ = 5730 years. Carbon-14 is being constantly
created in the atmosphere and is accumulated by living organisms. While the organism
lives, the amount of Carbon-14 in the organism is held constant. The decay of Carbon-14
is compensated with new amounts when the organism breathes or eats. When the organism
dies, the amount of Carbon-14 in its remains decays. So the balance between normal and
radioactive carbon in the remains changes in time.
Example 1.5.1: If certain remains are found containing 14% of the original amount of Carbon-14, find the date of the remains.
Solution: Suppose that t = 0 is set at the time when the organism dies. If at the present
time t the remains contain 14% of the original amount, that means
N(t) = (14/100) N0.
Since Carbon-14 is a radioactive substance with half-life τ, the amount of Carbon-14 decays
in time as follows,
N (t) = N0 2−t/τ ,
where τ = 5730 years is the Carbon-14 half-life. Therefore,
2^{−t/τ} = 14/100 ⇒ −t/τ = log2(14/100) ⇒ t = τ log2(100/14).
We obtain that t = 16,253 years. The organism died more than 16,000 years ago. C
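Remark: The numerical value of t is easy to reproduce; the lines below are plain Python and only an illustration.

import math

tau = 5730                         # Carbon-14 half-life in years
t = tau * math.log(100 / 14, 2)    # t = tau * log_2(100/14)
print(round(t))                    # about 16253 years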
Remark: The problem in Ex. 1.5.1 can be solved using the solution N written in terms of
the decay constant k instead of the half-life τ . Just write the condition for t, to be 14 % of
the original Carbon-14, as follows,
N0 e^{−kt} = (14/100) N0 ⇒ e^{−kt} = 14/100 ⇒ −kt = ln(14/100) ⇒ t = (1/k) ln(100/14).
Recalling the expression for k in terms of τ , that is kτ = ln(2), we get
t = τ ln(100/14) / ln(2).
The value of t in the expression above and the one in Ex. 1.5.1 are the same, since
log2(100/14) = ln(100/14) / ln(2).
1.5.2. Newton's Cooling Law. In 1701 Newton published, anonymously, the result of his
homemade experiments done fifteen years earlier. He focused on the time evolution of the
temperature of objects that rest in a medium with constant temperature. He found that the
difference between the temperatures of an object and the constant temperature of a medium
varies geometrically towards zero as time varies arithmetically. This was his way of saying
that the difference of temperatures, ∆T , depends on time as
(∆T )(t) = (∆T )0 e−t/τ ,
for some initial temperature difference (∆T )0 and some time scale τ . Although this is called
a “Cooling Law”, it also describes objects that warm up. When (∆T )0 > 0, the object is
cooling down, but when (∆T )0 < 0, the object is warming up.
Newton knew pretty well that the function ∆T above is a solution of a very particular
differential equation. But he chose to put more emphasis on the solution rather than on the
equation. Nowadays people think that differential equations are more fundamental than
their solutions, so we define Newton's cooling law as follows.
Definition 1.5.5. The Newton cooling law says that the temperature T at a time t of a
material placed in a surrounding medium held at a constant temperature Ts satisfies
(∆T )0 = −k (∆T ),
with ∆T (t) = T (t)−Ts , and k > 0, constant, characterizing the material thermal properties.
Remark: Newton’s cooling law equation for ∆T is the same as the radioactive decay
equation. But now the initial temperature difference, (∆T )0 = T0 − Ts , with T0 = T (0), can
be either positive or negative. Newton’s cooling law equation is a first order linear equation,
which we solved in § 1.1. The general solution is (∆T )(t) = (∆T )0 e−kt , so
(T − Ts)(t) = (T0 − Ts) e^{-kt}  ⇒  T(t) = (T0 − Ts) e^{-kt} + Ts,
where the constant k > 0, which has units of one over time, depends on the interaction
between the object material and the surrounding medium.
Example 1.5.2: A cup with water at 45 C is placed in a cooler held at 5 C. If after 2
minutes the water temperature is 25 C, when will the water temperature be 15 C?
Solution: We know that the solution of the Newton cooling law equation is
T (t) = (T0 − Ts ) e−kt + Ts ,
and we also know that in this case we have
T0 = 45, Ts = 5, T (2) = 25.
In this example we need to find t1 such that T (t1 ) = 15. In order to find that t1 we first
need to find the constant k,
T (t) = (45 − 5) e−kt + 5 ⇒ T (t) = 40 e−kt + 5.
Now use the fact that T (2) = 25 C, that is,
20 = T(2) = 40 e^{-2k}  ⇒  ln(1/2) = -2k  ⇒  k = (1/2) ln(2).
Having the constant k we can now go on and find the time t1 such that T (t1 ) = 15 C.
T(t) = 40 e^{-t ln(√2)} + 5  ⇒  10 = 40 e^{-t1 ln(√2)}  ⇒  t1 = 4.
C
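Remark: The two steps in Example 1.5.2, finding k from T(2) = 25 and then the time t1 with T(t1) = 15, can be checked with a few lines of Python. The sketch below is only an illustration of the computation done above.

import math

T0, Ts = 45.0, 5.0        # initial and surrounding temperatures, in Celsius

# T(t) = (T0 - Ts) e^{-kt} + Ts and T(2) = 25 give e^{-2k} = 1/2.
k = math.log(2) / 2

# Solve (T0 - Ts) e^{-k t1} + Ts = 15, that is, e^{-k t1} = 1/4.
t1 = math.log(4) / k

print(k, t1)              # k = ln(sqrt(2)) ~ 0.3466 and t1 = 4.0 minutes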
1.5.3. Salt in a Water Tank. We study the system pictured in Fig. 3. A tank has a salt
mass Q(t) dissolved in a volume V (t) of water at a time t. Water is pouring into the tank
at a rate ri (t) with a salt concentration qi (t). Water is also leaving the tank at a rate ro (t)
with a salt concentration qo (t). Recall that a water rate r means water volume per unit
time, and a salt concentration q means salt mass per unit volume.
We assume that the salt entering the tank gets instantaneously mixed. As a consequence
the salt concentration in the tank is homogeneous at every time. This property simplifies
the mathematical model describing the salt in the tank.
Before stating the problem we want to solve, we review the physical units of the main
fields involved in it. Denote by [ri] the units of the quantity ri. Then we have
[ri] = [ro] = Volume/Time,   [qi] = [qo] = Mass/Volume,   [V] = Volume,   [Q] = Mass.
Figure 3. Description of the water tank problem.
Definition 1.5.6. The Water Tank Problem refers to water coming into a tank at a rate
ri with salt concentration qi , and going out the tank at a rate ro and salt concentration qo ,
so that the water volume V and the total amount of salt Q, which is instantaneously mixed,
in the tank satisfy the following equations,
V'(t) = ri(t) − ro(t),                 (1.5.1)
Q'(t) = ri(t) qi(t) − ro(t) qo(t),     (1.5.2)
qo(t) = Q(t)/V(t),                     (1.5.3)
ri'(t) = ro'(t) = 0.                   (1.5.4)
The first and second equations above are just the mass conservation of water and salt,
respectively. Water volume and mass are proportional, so both are conserved, and we
chose the volume to write down this conservation in Eq. (1.5.1). This equation is indeed
a conservation because it says that the water volume variation in time is equal to the
difference of volume time rates coming in and going out of the tank. Eq. (1.5.2) is the salt
mass conservation, since the salt mass variation in time is equal to the difference of the
salt mass time rates coming in and going out of the tank. The product of a water rate
r times a salt concentration q has units of mass per time and represents the amount of
salt entering or leaving the tank per unit time. Eq. (1.5.3) is implied by the instantaneous
mixing mechanism in the tank. Since the salt is mixed instantaneously in the tank, the
salt concentration in the tank is homogeneous with value Q(t)/V (t). Finally the equations
in (1.5.4) say that both rates in and out are time independent, that is, constants.
Theorem 1.5.7. The amount of salt Q in a water tank problem defined in Def. 1.5.6
satisfies the differential equation
Q0 (t) = a(t) Q(t) + b(t), (1.5.5)
where the coefficients in the equation are given by
a(t) = − ro / ((ri − ro) t + V0),     b(t) = ri qi(t).     (1.5.6)
Proof of Theorem 1.5.7: The equation for the salt in the tank given in (1.5.5) comes
from Eqs. (1.5.1)-(1.5.4). We start noting that Eq. (1.5.4) says that the water rates are
constant. We denote them as ri and ro . This information in Eq. (1.5.1) implies that V 0 is
constant. Then we can easily integrate this equation to obtain
V (t) = (ri − ro ) t + V0 , (1.5.7)
where V0 = V (0) is the water volume in the tank at the initial time t = 0. On the other
hand, Eqs.(1.5.2) and (1.5.3) imply that
Q'(t) = ri qi(t) − (ro/V(t)) Q(t).
Since V (t) is known from Eq. (1.5.7), we get that the function Q must be solution of the
differential equation
Q'(t) = ri qi(t) − (ro/((ri − ro) t + V0)) Q(t).
This is a linear ODE for the function Q. Indeed, introducing the functions
a(t) = − ro / ((ri − ro) t + V0),     b(t) = ri qi(t),
the differential equation for Q has the form
Q0 (t) = a(t) Q(t) + b(t).
This establishes the Theorem.
We could use the formula for the general solution of a linear equation given in Section 1.2
to write the solution of Eq. (1.5.5) for Q. Such formula covers all cases we are going to
study in this section. Since we already know that formula, we choose to find solutions in
particular cases. These cases are given by specific choices of the rate constants ri , ro , the
concentration function qi , and the initial data constants V0 and Q0 = Q(0). The study of
solutions to Eq. (1.5.5) in several particular cases might provide a deeper understanding of
the physical situation under study than the expression of the solution Q in the general case.
Example 1.5.3 (General Case for V (t) = V0 ): Consider a water tank problem with equal
constant water rates ri = ro = r, with constant incoming concentration qi , and with a given
initial water volume in the tank V0 . Then, find the solution to the initial value problem
Q0 (t) = a(t) Q(t) + b(t), Q(0) = Q0 ,
where function a and b are given in Eq. (1.5.6). Graph the solution function Q for different
values of the initial condition Q0 .
Solution: The assumption ri = ro = r implies that the function a is constant, while the
assumption that qi is constant implies that the function b is also constant,
a(t) = − ro / ((ri − ro) t + V0)  ⇒  a(t) = − r/V0 = a0,
b(t) = ri qi(t)  ⇒  b(t) = r qi = b0.
Then, we must solve the initial value problem for a constant coefficients linear equation,
Q'(t) = a0 Q(t) + b0,     Q(0) = Q0.
The integrating factor method can be used to find the solution of the initial value problem
above. The formula for the solution is given in Theorem 1.1.4,
Q(t) = (Q0 + b0/a0) e^{a0 t} − b0/a0.
In our case we can evaluate the constant b0/a0, and the result is
b0/a0 = (r qi)(−V0/r)  ⇒  − b0/a0 = qi V0.
Then, the solution Q has the form,
Q(t) = (Q0 − qi V0) e^{-rt/V0} + qi V0.          (1.5.8)
The initial amount of salt Q0 in the tank can be any non-negative real number. The solution
behaves differently for different values of Q0. We classify these values in three classes:
Q0 < qi V0, where Q increases in time towards the limit value qi V0; Q0 = qi V0, where Q
remains constant; and Q0 > qi V0, where Q decreases in time towards the limit value qi V0.
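Remark: The three classes just described are easy to see by evaluating the solution in Eq. (1.5.8). The Python sketch below is only an illustration; the numerical values of qi, V0 and r are ours and are not part of the example.

import math

def salt(t, Q0, qi, V0, r):
    # Q(t) = (Q0 - qi V0) e^{-rt/V0} + qi V0, Eq. (1.5.8).
    return (Q0 - qi * V0) * math.exp(-r * t / V0) + qi * V0

qi, V0, r = 0.5, 100.0, 5.0           # illustrative values: g/l, liters, liters/min
for Q0 in (0.0, 50.0, 120.0):         # below, equal to, and above qi*V0 = 50 grams
    print([round(salt(t, Q0, qi, V0, r), 2) for t in (0, 20, 100)])
# Each row approaches the limit value qi*V0 = 50 from below, stays there, or decreases to it.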
Example 1.5.4 (Find a particular time, for V (t) = V0 ): Consider a water tank problem
with equal constant water rates ri = ro = r and fresh water is coming into the tank, hence
qi = 0. Then, find the time t1 such that the salt concentration in the tank Q(t)/V(t) is 1% of
the initial value. Write that time t1 in terms of the rate r and the initial water volume V0.
Solution: The first step to solve this problem is to find the solution Q of the initial value
problem
Q0 (t) = a(t) Q(t) + b(t), Q(0) = Q0 ,
where function a and b are given in Eq. (1.5.6). In this case they are
a(t) = − ro / ((ri − ro) t + V0)  ⇒  a(t) = − r/V0,
b(t) = ri qi(t)  ⇒  b(t) = 0.
The initial value problem we need to solve is
Q'(t) = − (r/V0) Q(t),     Q(0) = Q0.
From Section 1.1 we know that the solution is given by
Q(t) = Q0 e−rt/V0 .
We can now proceed to find the time t1. We first need to find the concentration Q(t)/V(t).
We already have Q(t) and we know that V(t) = V0, since ri = ro. Therefore,
Q(t)/V(t) = Q(t)/V0 = (Q0/V0) e^{-rt/V0}.
The condition that this concentration at t1 is 1% of the initial concentration Q0/V0 gives
e^{-r t1/V0} = 1/100, that is, t1 = (V0/r) ln(100).
Example 1.5.5 (Nonzero qi , for V (t) = V0 ): Consider a water tank problem with equal
constant water rates ri = ro = r, with only fresh water in the tank at the initial time, hence
Q0 = 0, and with a given initial volume of water in the tank V0. Then find the amount of
salt in the tank, Q(t), if the incoming salt concentration is given by the function
qi (t) = 2 + sin(2t).
where we used that the initial condition is Q0 = 0. Recalling the definition of the function
b we obtain
Q(t) = e^{-a0 t} ∫_0^t e^{a0 s} (2 + sin(2s)) ds.
This is the formula for the solution of the problem; we only need to compute the integral
given in the equation above. This is not straightforward though. We start with the following
integral found in an integration table,
∫ e^{ks} sin(ls) ds = (e^{ks}/(k² + l²)) (k sin(ls) − l cos(ls)),
1.5.4. Exercises.
1.5.2.- A vessel with liquid at 18 C is placed in a cooler held at 3 C, and after 3 minutes
the temperature drops to 13 C.
(a) Find the differential equation satisfied by the temperature T of a liquid in the cooler
at time t = 0.
(b) Find the function temperature of the liquid once it is put in the cooler.
(c) Find the liquid cooling constant.

1.5.3.- A tank initially contains V0 = 100 liters of water with Q0 = 25 grams of salt. The
tank is rinsed with fresh water flowing in at a rate of ri = 5 liters per minute and leaving
the tank at the same rate. The water in the tank is well-stirred. Find the time such that
the amount of salt in the tank is Q1 = 5 grams.

1.5.5.- A tank with a capacity of Vm = 500 liters originally contains V0 = 200 liters of
water with Q0 = 100 grams of salt in solution. Water containing salt with a concentration
of qi = 1 gram per liter is poured in at a rate of ri = 3 liters per minute. The well-stirred
water is allowed to pour out of the tank at a rate of ro = 2 liters per minute. Find the
salt concentration in the tank at the time when the tank is about to overflow. Compare
this concentration with the limiting concentration as time goes to infinity if the tank had
infinite capacity.
1.6.1. The Picard-Lindelöf Theorem. We will show that a large class of nonlinear dif-
ferential equations have solutions. First, let us recall the definition of a nonlinear equation.
Definition 1.6.1. An ordinary differential equation y 0 (t) = f (t, y(t)) is called nonlinear
iff the function f is nonlinear in the second argument.
Example 1.6.1:
(a) The differential equation
y'(t) = t² / y³(t)
is nonlinear, since the function f (t, y) = t2 /y 3 is nonlinear in the second argument.
(b) The differential equation
y 0 (t) = 2ty(t) + ln y(t)
is nonlinear, since the function f (t, y) = 2ty + ln(y) is nonlinear in the second argument,
due to the term ln(y).
(c) The differential equation
y'(t) / y(t) = 2t²
is linear, since the function f (t, y) = 2t2 y is linear in the second argument.
C
The Picard-Lindelöf Theorem shows that certain nonlinear equations have solutions,
uniquely determined by appropriate initial data. Notice that there is no explicit formula
for this solution. Results like this one are called existence and uniqueness statements about
solutions of differential equations.
Theorem 1.6.2 (Picard-Lindelöf). Consider the initial value problem
y 0 (t) = f (t, y(t)), y(t0 ) = y0 . (1.6.1)
If f : S → R is continuous on the square S = [t0 − a, t0 + a] × [y0 − a, y0 + a] ⊂ R², for some
a > 0, and satisfies the Lipschitz condition that there exists k > 0 such that
|f (t, y2 ) − f (t, y1 )| < k |y2 − y1 |,
for all (t, y2 ), (t, y1 ) ∈ S, then there exists a positive b < a such that there exists a unique
solution y : [t0 − b, t0 + b] → R to the initial value problem in (1.6.1).
Remark: We prove this theorem rewriting the differential equation as an integral equation
for the unknown function y. Then we use this integral equation to construct a sequence of
approximate solutions {yn } to the original initial value problem. Next we show that this
sequence of approximate solutions has a unique limit as n → ∞. We end the proof showing
that this limit is the only solution of the original initial value problem. This proof follows
[15] § 1.6 and Zeidler’s [16] § 1.8. It is important to read the review on complete normed
vector spaces, called Banach spaces, given in these references.
Proof of Theorem 1.6.2: We start writing the differential equation in 1.6.1 as an integral
equation, hence we integrate on both sides of that equation with respect to t,
∫_{t0}^{t} y'(s) ds = ∫_{t0}^{t} f(s, y(s)) ds  ⇒  y(t) = y0 + ∫_{t0}^{t} f(s, y(s)) ds.   (1.6.2)
We have used the Fundamental Theorem of Calculus on the left-hand side of the first
equation to get the second equation. And we have introduced the initial condition y(t0 ) = y0 .
We use this integral form of the original differential equation to construct a sequence of
functions {yn}_{n=0}^{∞}. The domain of every function in this sequence is Da = [t0 − a, t0 + a].
The sequence is defined as follows,
y_{n+1}(t) = y0 + ∫_{t0}^{t} f(s, yn(s)) ds,     n ⩾ 0,     y0(t) = y0.     (1.6.3)
We see that the first element in the sequence is the constant function determined by the
initial conditions in (1.6.1). The iteration in (1.6.3) is called the Picard iteration. The
central idea of the proof is to show that the sequence {yn } is a Cauchy sequence in the
space C(Db ) of uniformly continuous functions in the domain Db = [t0 − b, t0 + b] for a small
enough b > 0. This function space is a Banach space under the norm ‖f‖ = max_{t∈Db} |f(t)|.
See [15] and references therein for the definition of Cauchy sequences, Banach spaces, and
the proof that C(Db ) with that norm is a Banach space. We now show that the sequence
{yn } is a Cauchy sequence in that space. Any two consecutive elements in the sequence
satisfy
‖y_{n+1} − y_n‖ = max_{t∈Db} | ∫_{t0}^{t} f(s, yn(s)) ds − ∫_{t0}^{t} f(s, y_{n−1}(s)) ds |
              ⩽ max_{t∈Db} ∫_{t0}^{t} | f(s, yn(s)) − f(s, y_{n−1}(s)) | ds
              ⩽ k max_{t∈Db} ∫_{t0}^{t} | yn(s) − y_{n−1}(s) | ds
              ⩽ kb ‖yn − y_{n−1}‖.
Using the triangle inequality for norms and the sum of a geometric series, one computes
the following, denoting r = kb,
‖yn − y_{n+m}‖ = ‖yn − y_{n+1} + y_{n+1} − y_{n+2} + ··· + y_{n+(m−1)} − y_{n+m}‖
              ⩽ ‖yn − y_{n+1}‖ + ‖y_{n+1} − y_{n+2}‖ + ··· + ‖y_{n+(m−1)} − y_{n+m}‖
              ⩽ (r^n + r^{n+1} + ··· + r^{n+m}) ‖y1 − y0‖
              ⩽ r^n (1 + r + r² + ··· + r^m) ‖y1 − y0‖
              ⩽ r^n ((1 − r^{m+1})/(1 − r)) ‖y1 − y0‖.
Now choose the positive constant b such that b < min{a, 1/k}, hence 0 < r < 1. In this case
the sequence {yn } is a Cauchy sequence in the Banach space C(Db ), with norm k k, hence
converges. Denote the limit by y = limn→∞ yn . This function satisfies the equation
Z t
y(t) = y0 + f (s, y(s)) ds,
t0
which says that y is not only continuous but also differentiable in the interior of Db , hence
y is solution of the initial value problem in (1.6.1). The proof of uniqueness of the solution
follows the same argument used to show that the sequence above is a Cauchy sequence.
Consider two solutions y and ỹ of the initial value problem above. That means,
y(t) = y0 + ∫_{t0}^{t} f(s, y(s)) ds,     ỹ(t) = y0 + ∫_{t0}^{t} f(s, ỹ(s)) ds.
Example 1.6.3: Use the proof of Picard-Lindelöf’s Theorem to find the solution to
y' = 2y + 3,     y(0) = 1.
We now compute the first elements in the sequence. We said y0 = 1, now y1 is given by
n = 0,     y1(t) = 1 + ∫_0^t (2 y0(s) + 3) ds = 1 + ∫_0^t 5 ds = 1 + 5t.
So y1 = 1 + 5t. Now we compute y2 ,
y2 = 1 + ∫_0^t (2 y1(s) + 3) ds = 1 + ∫_0^t (2(1 + 5s) + 3) ds  ⇒  y2 = 1 + ∫_0^t (5 + 10s) ds = 1 + 5t + 5t².
So we’ve got y2 (t) = 1 + 5t + 5t2 . Now y3 ,
Z t Z t
2(1 + 5s + 5s2 ) + 3 ds
y3 = 1 + (2 y2 (s) + 3) ds = 1 +
0 0
so we have, Z t
10 3
5 + 10s + 10s2 ds = 1 + 5t + 5t2 +
y3 = 1 + t .
0 3
So we obtained y3(t) = 1 + 5t + 5t² + (10/3) t³. We now try to reorder terms in this last
expression so we can get a power series expansion we can write in terms of simple functions.
The first step is to identify common factors, like the factor five in y3,
y3(t) = 1 + 5 (t + t² + (2/3) t³).
We now try to rewrite the expression above to get an n! in the denominator of each term
with a power tⁿ, that is,
y3(t) = 1 + 5 (t + 2t²/2! + 4t³/3!).
We then realize that we can rewrite the expression above in terms of powers of (2t), that is,
y3(t) = 1 + (5/2) (2) (t + 2t²/2! + 4t³/3!) = 1 + (5/2) ((2t) + (2t)²/2! + (2t)³/3!).
From this last expression it is simple to guess the n-th approximation
yn(t) = 1 + (5/2) ((2t) + (2t)²/2! + (2t)³/3! + ··· + (2t)ⁿ/n!)  ⇒  yn(t) = 1 + (5/2) Σ_{k=1}^{n} (2t)^k / k!.
Recall now the power series expansion for the exponential,
e^{at} = Σ_{k=0}^{∞} (at)^k / k!.
Notice that the sum in the exponential starts at k = 0, while the sum in yn starts at k = 1.
Then, the limit n → ∞ is given by
y(t) = lim_{n→∞} yn(t) = 1 + (5/2) Σ_{k=1}^{∞} (2t)^k / k! = 1 + (5/2) (e^{2t} − 1).
We have been able to add the power series and we have the solution written in terms of
simple functions. We have used the expansion for the exponential function
e^{at} − 1 = (at) + (at)²/2! + (at)³/3! + ··· = Σ_{k=1}^{∞} (at)^k / k!,
with a = 2. One last rewriting of the solution and we obtain
y(t) = (5/2) e^{2t} − 3/2.
C
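Remark: The Picard iterates computed by hand in Example 1.6.3 can also be generated symbolically. The sketch below is an illustration using the SymPy library; it reproduces y1, y2, y3 for the problem y' = 2y + 3, y(0) = 1.

import sympy as sp

t, s = sp.symbols('t s')
f = lambda y: 2 * y + 3                 # right-hand side of y' = 2y + 3

y = sp.Integer(1)                       # y0(t) = 1, the constant initial function
for n in range(3):
    # Picard iteration: y_{n+1}(t) = y0 + integral from 0 to t of f(s, y_n(s)) ds.
    y = 1 + sp.integrate(f(y.subs(t, s)), (s, 0, t))
    print(sp.expand(y))

# Prints 1 + 5t, then 1 + 5t + 5t**2, then 1 + 5t + 5t**2 + 10*t**3/3,
# the partial sums of y(t) = (5/2) e^{2t} - 3/2.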
Example 1.6.4: Use the proof of Picard-Lindelöf’s Theorem to find the solution to
y' = a y + b,     y(0) = ŷ0,     a, b ∈ R.
We then realize that we can rewrite the expression above in terms of powers of (at), that is,
y3(t) = ŷ0 + (aŷ0 + b) (t + a t²/2 + a² t³/3!)
      = ŷ0 + (ŷ0 + b/a) ((at) + (at)²/2 + (at)³/3!).
From this last expression it is simple to guess the n-th approximation
yn(t) = ŷ0 + (ŷ0 + b/a) ((at) + (at)²/2! + (at)³/3! + ··· + (at)ⁿ/n!)
      = ŷ0 + (ŷ0 + b/a) Σ_{k=1}^{n} (at)^k / k!.
Recall now the power series expansion for the exponential,
e^{at} = Σ_{k=0}^{∞} (at)^k / k!.
Notice that the sum in the exponential starts at k = 0, while the sum in yn starts at k = 1.
Then, the limit n → ∞ is given by
y(t) = lim_{n→∞} yn(t) = ŷ0 + (ŷ0 + b/a) Σ_{k=1}^{∞} (at)^k / k! = ŷ0 + (ŷ0 + b/a) (e^{at} − 1).
We have been able to add the power series and we have the solution written in terms of
simple functions. We have used the expansion for the exponential function
e^{at} − 1 = (at) + (at)²/2! + (at)³/3! + ··· = Σ_{k=1}^{∞} (at)^k / k!.
One last rewriting of the solution and we obtain
y(t) = (ŷ0 + b/a) e^{at} − b/a.
C
1.6.2. Comparison Linear Nonlinear Equations. Let us recall the initial value problem
for a linear differential equation. Given functions a, b and constants t0 , y0 , find a function y
solution of the equations
y 0 = a(t) y + b(t), y(t0 ) = y0 . (1.6.4)
The main result regarding solutions to this problem is summarized in Theorem 1.2.3, which
we reproduce below.
Theorem 1.2.3 (Variable coefficients). Given continuous functions a, b : (t1 , t2 ) → R
and constants t0 ∈ (t1 , t2 ) and y0 ∈ R, the initial value problem
y 0 = a(t) y + b(t), y(t0 ) = y0 , (1.2.7)
From the Theorem above we can see that solutions to linear differential equations satisfy
the following properties:
(a) There is an explicit expression for the solutions of the differential equation.
(b) For every initial condition y0 ∈ R there exists a unique solution.
(c) For every initial condition y0 ∈ R the solution y(t) is defined for all t ∈ (t1, t2).
Remark: None of these properties hold for solutions of nonlinear differential equations.
From the Picard-Lindelöf Theorem one can see that solutions to nonlinear differential
equations satisfy the following properties:
(i) There is no explicit formula for the solution to every nonlinear differential equation.
(ii) Solutions to initial value problems for nonlinear equations may be non-unique when
the function f does not satisfy the Lipschitz condition.
(iii) The domain of a solution y to a nonlinear initial value problem may change when we
change the initial data y0 .
The next three examples (1.6.5)-(1.6.7) are particular cases of the statements in (i)-(iii).
We start with an equation whose solutions cannot be written in explicit form. The reason
is not lack of ingenuity, it has been proven that such explicit expression does not exist.
Example 1.6.5: For every constant a1 , a2 , a3 , a4 , find all solutions y to the equation
y'(t) = t² / (y⁴(t) + a4 y³(t) + a3 y²(t) + a2 y(t) + a1).     (1.6.5)
Solution: The nonlinear differential equation above is separable, so we follow § 1.3 to find
its solutions. First we rewrite the equation as
(y⁴(t) + a4 y³(t) + a3 y²(t) + a2 y(t) + a1) y'(t) = t².
Integrate on both sides, the left-hand side with the substitution u = y(t) and the right-hand
side with respect to t. Substituting u back by the function y, we obtain
(1/5) y⁵(t) + (a4/4) y⁴(t) + (a3/3) y³(t) + (a2/2) y²(t) + a1 y(t) = t³/3 + c.
This is an implicit form for the solution y of the problem. The solution is the root of a
polynomial of degree five for all possible values of the polynomial coefficients. But it has been
proven that there is no formula for the roots of a general polynomial of degree five or higher.
We conclude that there is no explicit expression for solutions y of Eq. (1.6.5). C
Finally, an example of the statement in (iii). In this example we have an equation with
solutions defined in a domain that depends on the initial data.
Example 1.6.7: Find the solution y to the initial value problem
y 0 (t) = y 2 (t), y(0) = y0 .
Solution: This is a nonlinear separable equation, so we can again apply the ideas in
Sect. 1.3. We first find all solutions of the differential equation,
∫ y'(t) dt / y²(t) = ∫ dt + c0  ⇒  − 1/y(t) = t + c0  ⇒  y(t) = − 1/(c0 + t).
We now use the initial condition in the last expression above,
y0 = y(0) = − 1/c0  ⇒  c0 = − 1/y0.
This solution diverges at t = 1/y0, so the domain of the solution y is not the whole real line
R. Instead, the domain is R − {1/y0}, so it depends on the value of the initial data y0. C
In the next example we consider an equation of the form y'(t) = f(t, y(t)) for a particular
function f. We study the function values f(t, u) and show the regions on the tu-plane where
the hypotheses in Theorem 1.6.2 are not satisfied.
Summary: Both Theorems 1.2.3 and 1.6.2 state that there exist solutions to linear and
nonlinear differential equations, respectively. However, Theorem 1.2.3 provides more infor-
mation about the solutions to a more restricted class of equations, linear problems; while
Theorem 1.6.2 provides less information about solutions to a wider class of equations, linear
and nonlinear.
• Initial Value Problem for Linear Differential Equations
(a) There is an explicit expression for all the solutions.
(b) For every initial condition y0 ∈ R there exists a unique solution.
(c) The domain of every solution is independent of the initial condition y0 ∈ R.
• Initial Value Problem for Nonlinear Differential Equations
(i) There is no general explicit expression for all solutions y(t).
(ii) Solutions may be nonunique at points (t, u) ∈ R2 where ∂u f is not continuous.
(iii) The domain of the solution may depend on the initial data y0 .
1.6.3. Direction Fields. Nonlinear differential equations are more difficult to solve than
linear ones. That is why one tries to find information about solutions of differential
equations without having to actually solve the equations. One way to do this is with
direction fields. Consider a differential equation
direction fields. Consider a differential equation
y 0 (t) = f (t, y(t)).
Recall that y 0 (t) represents the slope of the tangent line to the graph of function y at the
point (t, y(t)) in the ty-plane. Then, the differential equation above provides all these slopes,
f (t, y(t)), for every point (t, y(t)) in the ty-plane. So here comes the key idea to construct
a direction field. Graph the function values f (t, y) on the ty-plane, not as points, but as
slopes of small segments.
Definition 1.6.3. A direction field for the differential equation y 0 (t) = f (t, y(t)) is the
graph on the ty-plane of the values f (t, y) as slopes of small segments.
Example 1.6.9: We know that the solutions of y' = y are the exponentials y(t) = y0 e^t, for
any constant y0 ∈ R. The graphs of these solutions are simple to draw. So is the direction
field, shown in Fig. 7. C
Figure 7. Direction field for the equation y' = y.
Example 1.6.10: The equation y' = sin(y) is separable, so the solutions can be computed
using the ideas from § 1.3. The implicit solutions are
ln( (csc(y0) + cot(y0)) / (csc(y) + cot(y)) ) = t,     for any y0 ∈ R.
The graphs of these solutions are not simple to draw. But the direction field is simpler
to plot and can be seen in Fig. 8. C
Example 1.6.11: We do not need to compute the explicit solution of y 0 = 2 cos(t) cos(y) to
have a qualitative idea of its solutions. The direction field can be seen in Fig. 9. C
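Remark: A direction field is simple to draw with standard plotting tools. The sketch below is an illustration in Python with NumPy and Matplotlib (not part of the notes); it plots the slopes f(t, y) = 2 cos(t) cos(y) as short segments on a grid of the ty-plane, in the spirit of Fig. 9.

import numpy as np
import matplotlib.pyplot as plt

f = lambda t, y: 2 * np.cos(t) * np.cos(y)      # right-hand side of y' = f(t, y)

t, y = np.meshgrid(np.linspace(-5, 5, 25), np.linspace(-2, 2, 25))
slope = f(t, y)

# Draw each slope as a short segment with direction (1, f), normalized to unit length.
norm = np.sqrt(1 + slope**2)
plt.quiver(t, y, 1/norm, slope/norm, angles='xy',
           headwidth=1, headlength=0, headaxislength=0)
plt.xlabel('t'); plt.ylabel('y')
plt.title("Direction field of y' = 2 cos(t) cos(y)")
plt.show()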
Figure 8. Direction field for the equation y' = sin(y).
Figure 9. Direction field for the equation y' = 2 cos(t) cos(y).
1.6.4. Exercises.
1.6.1.- Use the Picard iteration to find the first four elements, y0, y1, y2, and y3, of the
sequence {yn}_{n=0}^{∞} of approximate solutions to the initial value problem
y' = 6y + 1,     y(0) = 0.

1.6.2.- Use the Picard iteration to find the information required below about the sequence
{yn}_{n=0}^{∞} of approximate solutions to the initial value problem
y' = 3y + 5,     y(0) = 1.
(a) The first 4 elements in the sequence, y0, y1, y2, and y3.
(b) The general term ck(t) of the approximation yn(t) = 1 + Σ_{k=1}^{n} ck(t)/k!.
(c) Find the limit y(t) = lim_{n→∞} yn(t).

1.6.3.- Find the domain where the solution of the initial value problems below is well-defined.
(a) y' = −4t/y,     y(0) = y0 > 0.
(b) y' = 2t y²,     y(0) = y0 > 0.

1.6.4.- By looking at the equation coefficients, find a domain where the solution of the
initial value problem below exists,
(a) (t² − 4) y' + 2 ln(t) y = 3t, and initial condition y(1) = −2.
(b) y' = y / (t(t − 3)), and initial condition y(−1) = 2.

1.6.5.- State where in the plane with points (t, y) the hypotheses of Theorem 1.6.2 are not
satisfied.
(a) y' = y² / (2t − 3y).
(b) y' = √(1 − t² − y²).
Newton’s second law of motion, ma = f , is maybe one of the first differential equations
written. This is a second order equation, since the acceleration is the second time derivative
of the particle position function. Second order differential equations are more difficult to
solve than first order equations. In § 2.1 we compare results on linear first and second order
equations. While there is an explicit formula for all solutions to first order linear equations,
no such formula exists for all solutions to second order linear equations. The most one
can get is the result in Theorem 2.1.7. In § 2.2 we introduce the Reduction Order Method
to find a new solution of a second order equation if we already know one solution of the
equation. In § 2.3 we find explicit formulas for all solutions to linear second order equations
that are both homogeneous and with constant coefficients. These formulas are generalized
to nonhomogeneous equations in § 2.4. In § 2.5 we discuss a few physical systems modeled
by second order linear differential equations.
Example 2.1.2: Find the differential equation satisfied by the family of functions
y(t) = c1 e^{4t} + c2 e^{−4t},
where c1, c2 are arbitrary constants.
Solution: From the definition of y compute c1 ,
c1 = y e−4t − c2 e−8t .
Now compute the derivative of function y
y 0 = 4c1 e4t − 4c2 e−4t ,
Replace c1 from the first equation above into the expression for y 0 ,
y 0 = 4(y e−4t − c2 e−8t )e4t − 4c2 e−4t ⇒ y 0 = 4y + (−4 − 4)c2 e−4t ,
so we get an expression for c2 in terms of y and y',
c2 = (1/8) (4y − y') e^{4t}.
We can now compute c1 in terms of y and y 0 ,
c1 = y e^{−4t} − (1/8)(4y − y') e^{4t} e^{−8t}  ⇒  c1 = (1/8) (4y + y') e^{−4t}.
We can now take the expression of either c1 or c2 and compute one more derivative. We
choose c2,
0 = c2' = (1/2)(4y − y') e^{4t} + (1/8)(4y' − y'') e^{4t}  ⇒  4(4y − y') + (4y' − y'') = 0,
which gives us the following second order linear differential equation for y,
y 00 − 16 y = 0.
C
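Remark: The elimination of constants done above can be double checked symbolically. The short SymPy sketch below (an illustration, not part of the notes) verifies that every member of the family satisfies y'' − 16 y = 0.

import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')
y = c1 * sp.exp(4*t) + c2 * sp.exp(-4*t)

# The residual y'' - 16 y must vanish identically in t, c1, c2.
print(sp.simplify(sp.diff(y, t, 2) - 16 * y))    # prints 0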
Example 2.1.3: Find the differential equation satisfied by the family of functions
y(x) = c1 x + c2 x²,
where c1, c2 are arbitrary constants.
Solution: Compute the derivative of function y
y 0 (x) = c1 + 2c2 x,
From here it is simple to get c1 ,
c1 = y 0 − 2c2 x.
Use this expression for c1 in the expression for y,
y = (y' − 2c2 x) x + c2 x² = x y' − c2 x²  ⇒  c2 = y'/x − y/x².
Therefore we get for c1 the expression
c1 = y' − 2 (y'/x − y/x²) x = y' − 2y' + 2y/x  ⇒  c1 = −y' + 2y/x.
To obtain an equation for y we compute its second derivative, and replace in that derivative
the formulas for the constants c1 and c2 . In this particular example we only need c2 ,
y'' = 2c2 = 2 (y'/x − y/x²)  ⇒  y'' − (2/x) y' + (2/x²) y = 0.
C
Here is the first of the two main results in this section. Second order linear differential
equations have solutions in the case that the equation coefficients are continuous functions.
Since the solution is unique when we specify two extra conditions, called initial conditions,
we infer that a general solution must have two arbitrary integration constants.
Theorem 2.1.2 (Variable Coefficients). If the functions a1 , a0 , b : I → R are continuous
on a closed interval I ⊂ R, t0 ∈ I, and y0 , y1 ∈ R are any constants, then there exists a
unique solution y : I → R to the initial value problem
y 00 + a1 (t) y 0 + a0 (t) y = b(t), y(t0 ) = y0 , y 0 (t0 ) = y1 . (2.1.2)
Remark: The fixed point argument used in the proof of Picard-Lindelöf’s Theorem 1.6.2
can be extended to prove Theorem 2.1.2. This proof will be presented later on.
Example 2.1.4: Find the longest interval I ⊂ R such that there exists a unique solution to
the initial value problem
(t − 1)y 00 − 3ty 0 + 4y = t(t − 1), y(−2) = 2, y 0 (−2) = 1.
Solution: We first write the equation above in the form given in Theorem 2.1.2,
y'' − (3t/(t−1)) y' + (4/(t−1)) y = t.
The intervals where the hypotheses in the Theorem above are satisfied, that is, where the
equation coefficients are continuous, are I1 = (−∞, 1) and I2 = (1, ∞). Since the initial
condition belongs to I1 , the solution domain is
I1 = (−∞, 1).
C
2.1.2. Homogeneous Equations. We need to simplify the problem to get further in its
solution. From now on in this section we study homogeneous equations only. Once we learn
properties of solutions to homogeneous equations we can get back to the nonhomogeneous
case. But before getting into homogeneous equations, we introduce a new notation to write
differential equations. This is a shorter, more economical, notation. Given two functions
a1 , a0 , introduce the function L acting on a function y, as follows,
L(y) = y 00 + a1 (t) y 0 + a0 (t) y. (2.1.3)
The function L acts on the function y and the result is another function, given by Eq. (2.1.3).
Example 2.1.5: Compute the operator L(y) = t y'' + 2y' − (8/t) y acting on y(t) = t³.
Solution: Since y(t) = t3 , then y 0 (t) = 3t2 and y 00 (t) = 6t, hence
L(t³) = t (6t) + 2(3t²) − (8/t) t³  ⇒  L(t³) = 4t².
The function L acts on the function y(t) = t3 and the result is the function L(t3 ) = 4t2 . C
Introduce the definition of L back on the right-hand side. We then conclude that
L(c1 y1 + c2 y2 ) = c1 L(y1 ) + c2 L(y2 ).
This establishes the Theorem.
The linearity of an operator L translates into the superposition property of the solutions
to the homogeneous equation L(y) = 0.
Theorem 2.1.5 (Superposition). If L is a linear operator and y1 , y2 are solutions of the
homogeneous equations L(y1) = 0, L(y2) = 0, then for all constants c1, c2 it holds that
L(c1 y1 + c2 y2 ) = 0.
Remark: This result is not true for nonhomogeneous equations.
Proof of Theorem 2.1.5: Verify that the function y = c1 y1 + c2 y2 satisfies L(y) = 0 for
all constants c1, c2, that is,
L(y) = L(c1 y1 + c2 y2 ) = c1 L(y1 ) + c2 L(y2 ) = c1 0 + c2 0 = 0.
This establishes the Theorem.
We now introduce the notion of linearly dependent and linearly independent functions.
Definition 2.1.6. Two continuous functions y1 , y2 : I → R are called linearly dependent
on the interval I iff there exists a constant c such that for all t ∈ I holds
y1 (t) = c y2 (t).
Two functions are called linearly independent on I iff they are not linearly dependent.
In words only, two functions are linearly dependent on an interval iff the functions are
proportional to each other on that interval, otherwise they are linearly independent.
Remark: The function y1 = 0 is proportional to every other function y2, since
y1 = 0 = 0 · y2 holds. Hence, any set containing the zero function is linearly dependent.
The definitions of linearly dependent and linearly independent functions found in the literature
are equivalent to the definition given here, but they are worded in a slightly different way.
One usually finds in the literature that two functions are called linearly dependent on the
interval I iff there exist constants c1 , c2 , not both zero, such that for all t ∈ I holds
c1 y1 (t) + c2 y2 (t) = 0.
Two functions are called linearly independent on the interval I iff they are not linearly
dependent, that is, the only constants c1 and c2 that for all t ∈ I satisfy the equation
c1 y1 (t) + c2 y2 (t) = 0
are the constants c1 = c2 = 0. This latter wording makes it simple to generalize these
definitions to an arbitrary number of functions.
Example 2.1.6:
(a) Show that y1 (t) = sin(t), y2 (t) = 2 sin(t) are linearly dependent.
(b) Show that y1 (t) = sin(t), y2 (t) = t sin(t) are linearly independent.
Solution:
Part (a): This is trivial, since 2y1 (t) − y2 (t) = 0.
Part (b): Find constants c1 , c2 such that for all t ∈ R holds
c1 sin(t) + c2 t sin(t) = 0.
Evaluating at t = π/2 and t = 3π/2 we obtain
c1 + (π/2) c2 = 0,     c1 + (3π/2) c2 = 0     ⇒     c1 = 0,   c2 = 0.
We conclude: The functions y1 and y2 are linearly independent. C
We now introduce the second main result in this section. If you know two linearly in-
dependent solutions to a second order linear homogeneous differential equation, then you
actually know all possible solutions to that equation. Any other solution is just a linear
combination of the previous two solutions. We repeat that the equation must be homoge-
neous. This is the closest we can get to a general formula for solutions to second order linear
homogeneous differential equations.
Theorem 2.1.7 (General Solution). If y1 and y2 are linearly independent solutions of
the equation L(y) = 0 on an interval I ⊂ R, where L(y) = y'' + a1 y' + a0 y, and a1, a0 are
continuous functions on I, then there exist unique constants c1 , c2 such that every solution
y of the differential equation L(y) = 0 on I can be written as a linear combination
y(t) = c1 y1 (t) + c2 y2 (t).
Before we prove Theorem 2.1.7, it is convenient to state the following definitions,
which come out naturally from this Theorem.
Definition 2.1.8.
(a) The functions y1 and y2 are fundamental solutions of the equation L(y) = 0 iff holds
that L(y1 ) = 0, L(y2 ) = 0 and y1 , y2 are linearly independent.
(b) The general solution of the homogeneous equation L(y) = 0 is a two-parameter family
of functions ygen given by
ygen (t) = c1 y1 (t) + c2 y2 (t),
where the arbitrary constants c1 , c2 are the parameters of the family, and y1 , y2 are
fundamental solutions of L(y) = 0.
Example 2.1.7: Show that y1 = et and y2 = e−2t are fundamental solutions to the equation
y 00 + y 0 − 2y = 0.
Solution: We first show that y1 and y2 are solutions to the differential equation, since
L(y1 ) = y100 + y10 − 2y1 = et + et − 2et = (1 + 1 − 2)et = 0,
Remark: The fundamental solutions to the equation above are not unique. For example,
show that another set of fundamental solutions to the equation above is given by,
y1(t) = (2/3) e^t + (1/3) e^{−2t},     y2(t) = (1/3) (e^t − e^{−2t}).
To prove Theorem 2.1.7 we need to introduce the Wronskian function and to verify some
of its properties. The Wronskian function is studied in the following Subsection and Abel’s
Theorem is proved. Once that is done we can say that the proof of Theorem 2.1.7 is
complete.
Proof of Theorem 2.1.7: We need to show that, given any fundamental solution pair,
y1 , y2 , any other solution y to the homogeneous equation L(y) = 0 must be a unique linear
combination of the fundamental solutions,
y(t) = c1 y1 (t) + c2 y2 (t), (2.1.5)
for appropriately chosen constants c1 , c2 .
First, the superposition property implies that the function y above is solution of the
homogeneous equation L(y) = 0 for every pair of constants c1 , c2 .
Second, given a function y, if there exist constants c1 , c2 such that Eq. (2.1.5) holds, then
these constants are unique. The reason is that functions y1 , y2 are linearly independent.
This can be seen from the following argument. If there were other constants c̃1, c̃2 so that
y(t) = c̃1 y1 (t) + c̃2 y2 (t),
then subtract the expression above from Eq. (2.1.5),
0 = (c1 − c̃1 ) y1 + (c2 − c̃2 ) y2 ⇒ c1 − c̃1 = 0, c2 − c̃2 = 0,
where we used that y1 , y2 are linearly independent. This second part of the proof can be
obtained from the part three below, but I think it is better to highlight it here.
So we only need to show that the expression in Eq. (2.1.5) contains all solutions. We
need to show that we are not missing any other solution. In this third part of the argument
enters Theorem 2.1.2. This Theorem says that, in the case of homogeneous equations, the
initial value problem
L(y) = 0, y(t0 ) = y0 , y 0 (t0 ) = ŷ1 ,
always has a unique solution. That means, a good parametrization of all solutions to the
differential equation L(y) = 0 is given by the two constants, y0 , ŷ1 in the initial condition.
To finish the proof of Theorem 2.1.7 we need to show that the constants c1 and c2 are also
good to parametrize all solutions to the equation L(y) = 0. One way to show this, is to
find an invertible map from the constants y0 , ŷ1 , which we know parametrize all solutions,
to the constants c1 , c2 . The map itself is simple to find,
y0 = c1 y1 (t0 ) + c2 y2 (t0 )
ŷ1 = c1 y10 (t0 ) + c2 y20 (t0 ).
We now need to show that this map is invertible. From linear algebra we know that this
map acting on c1 , c2 is invertible iff the determinant of the coefficient matrix is nonzero,
det [ y1(t0)  y2(t0) ; y1'(t0)  y2'(t0) ] = y1(t0) y2'(t0) − y1'(t0) y2(t0) ≠ 0.
Solution:
Part (a): By the definition of the Wronskian:
W_{y1 y2}(t) = det [ y1(t)  y2(t) ; y1'(t)  y2'(t) ] = det [ sin(t)  2 sin(t) ; cos(t)  2 cos(t) ] = sin(t) 2 cos(t) − cos(t) 2 sin(t).
We conclude that Wy1 y2 (t) = 0. Notice that y1 and y2 are linearly dependent.
Part (b): Again, by the definition of the Wronskian:
W_{y1 y2}(t) = det [ sin(t)  t sin(t) ; cos(t)  sin(t) + t cos(t) ] = sin(t) (sin(t) + t cos(t)) − cos(t) t sin(t).
We conclude that Wy1 y2 (t) = sin2 (t). Notice that y1 and y2 are linearly independent. C
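Remark: The two Wronskians in this example can be verified with a computer algebra system. The sketch below is an illustration using SymPy, computing the 2 × 2 determinant directly.

import sympy as sp

t = sp.symbols('t')

def wronskian2(y1, y2):
    # W_{y1 y2} = y1 y2' - y1' y2 for two functions of t.
    return sp.simplify(y1 * sp.diff(y2, t) - sp.diff(y1, t) * y2)

print(wronskian2(sp.sin(t), 2*sp.sin(t)))    # 0: the pair is linearly dependent
print(wronskian2(sp.sin(t), t*sp.sin(t)))    # sin(t)**2: linearly independent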
It is simple to prove the following relation between the Wronskian of two functions and
the linear dependency of these two functions.
Solution: Notice that we do not know the explicit expressions for the solutions. Neverthe-
less, Theorem 2.1.14 says that we can compute their Wronskian. First, we have to rewrite
the differential equation in the form given in that Theorem, namely,
y'' − (2/t + 1) y' + (2/t² + 1/t) y = 0.
Then, Theorem 2.1.14 says that the Wronskian satisfies the differential equation
W'_{y1 y2}(t) − (2/t + 1) W_{y1 y2}(t) = 0.
This is a first order, linear equation for Wy1 y2 , so its solution can be computed using the
method of integrating factors. That is, first compute the integral
∫_{t0}^{t} −(2/s + 1) ds = −2 ln(t/t0) − (t − t0) = ln(t0²/t²) − (t − t0).
Then, the integrating factor µ is given by
µ(t) = (t0²/t²) e^{−(t−t0)},
which satisfies the condition µ(t0 ) = 1. So the solution, Wy1 y2 is given by
[µ(t) W_{y1 y2}(t)]' = 0  ⇒  µ(t) W_{y1 y2}(t) − µ(t0) W_{y1 y2}(t0) = 0,
so, the solution is
W_{y1 y2}(t) = W_{y1 y2}(t0) (t²/t0²) e^{(t−t0)}.
If we call the constant c = Wy1 y2 (t0 )/[t20 et0 ], then the Wronskian has the simpler form
Wy1 y2 (t) = c t2 et .
C
Proof of Theorem 2.1.14: We start computing the derivative of the Wronskian function,
W'_{y1 y2} = (y1 y2' − y1' y2)' = y1 y2'' − y1'' y2.
Recall that both y1 and y2 are solutions to Eq. (2.1.6), meaning,
y100 = −a1 y10 − a0 y1 , y200 = −a1 y20 − a0 y2 .
Replace these expressions in the formula for W'_{y1 y2} above,
W'_{y1 y2} = y1 (−a1 y2' − a0 y2) − (−a1 y1' − a0 y1) y2  ⇒  W'_{y1 y2} = −a1 (y1 y2' − y1' y2),
that is, W'_{y1 y2} = −a1 W_{y1 y2}.
2.1.5. Exercises.
2.1.1.- Compute the Wronskian of the following functions:
(a) f(t) = sin(t), g(t) = cos(t).
(b) f(x) = x, g(x) = x e^x.
(c) f(θ) = cos²(θ), g(θ) = 1 + cos(2θ).

2.1.2.- Find the longest interval where the solution y of the initial value problems below is
defined. (Do not try to solve the differential equations.)
(a) t² y'' + 6y = 2t, y(1) = 2, y'(1) = 3.
(b) (t − 6) y'' + 3t y' − y = 1, y(3) = −1, y'(3) = 2.

2.1.3.- (a) Verify that y1(t) = t² and y2(t) = 1/t are solutions to the differential equation
t² y'' − 2y = 0, t > 0.
(b) Show that y(t) = a t² + b/t is a solution of the same equation for all constants a, b ∈ R.

2.1.4.- If the graph of y, solution to a second order linear differential equation L(y(t)) = 0
on the interval [a, b], is tangent to the t-axis at any point t0 ∈ [a, b], then find the solution
y explicitly.

2.1.5.- Can the function y(t) = sin(t²) be a solution on an open interval containing t = 0
of a differential equation
y'' + a(t) y' + b(t) y = 0,
with continuous coefficients a and b? Explain your answer.

2.1.6.- Verify whether the functions y1, y2 below are a fundamental set for the differential
equations given below:
(a) y1(t) = cos(2t), y2(t) = sin(2t), y'' + 4y = 0.
(b) y1(t) = e^t, y2(t) = t e^t, y'' − 2y' + y = 0.
(c) y1(x) = x, y2(x) = x e^x, x² y'' − 2x(x + 2) y' + (x + 2) y = 0.

2.1.7.- If the Wronskian of any two solutions of the differential equation
y'' + p(t) y' + q(t) y = 0
is constant, what does this imply about the coefficients p and q?

2.1.8.- Let y(t) = c1 t + c2 t² be the general solution of a second order linear differential
equation L(y) = 0. By eliminating the constants c1 and c2, find the differential equation
satisfied by y.
2.2.1. Special Second Order Equations. A second order differential equation is called
special when either the unknown function or the independent variable does not appear
explicitly in the equation. In either case, such a second order equation can be transformed
into a first order equation for a new unknown function. The transformation to get the new
unknown function is different in each case. One then solves the first order equation and
transforms back, solving another first order equation, to get the original unknown function.
We now start with a few definitions.
Definition 2.2.1. A second order equation in the unknown function y is an equation
y 00 = f (t, y, y 0 ).
where the function f : R3 → R is given. The equation is linear iff function f is linear in
both arguments y and y 0 . The second order differential equation above is special iff one of
the following conditions hold:
(a) y 00 = f (t, y 0 ), so the function y does not appear explicitly in the equation;
(b) y 00 = f (y, y 0 ), so the independent variable t does not appear explicitly in the equation.
It is simpler to solve special second order equations when the function y is missing, case (a),
than when the variable t is missing, case (b). This can be seen comparing Theorems 2.2.2
and 2.2.3.
Theorem 2.2.2 (Function y Missing). If a second order differential equation has the
form y 00 = f (t, y 0 ), then v = y 0 satisfies the first order equation v 0 = f (t, v).
The proof is trivial, so we go directly to an example.
Example 2.2.1: Find the y solution of the second order nonlinear equation y 00 = −2t (y 0 )2
with initial conditions y(0) = 2, y 0 (0) = −1.
Solution: Introduce v = y 0 . Then v 0 = y 00 , and
v' = −2t v²  ⇒  v'/v² = −2t  ⇒  − 1/v = −t² + c.
So, 1/v = t² − c, that is, y' = 1/(t² − c). The initial condition implies
−1 = y'(0) = − 1/c  ⇒  c = 1  ⇒  y' = 1/(t² − 1).
Then, y = ∫ dt/(t² − 1) + c. We integrate using the method of partial fractions,
1/(t² − 1) = 1/((t − 1)(t + 1)) = a/(t − 1) + b/(t + 1).
Hence, 1 = a(t + 1) + b(t − 1). Evaluating at t = 1 and t = −1 we get a = 1/2, b = −1/2. So
1/(t² − 1) = (1/2) [ 1/(t − 1) − 1/(t + 1) ].
Therefore, the integral is simple to do,
y = (1/2) ln|t − 1| − (1/2) ln|t + 1| + c,     2 = y(0) = (1/2)(0 − 0) + c.
We conclude y = (1/2) ln|t − 1| − (1/2) ln|t + 1| + 2. C
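Remark: The solution of Example 2.2.1 can be verified symbolically. The SymPy sketch below is only an illustration; it works on the interval (−1, 1), where |t − 1| = 1 − t and |t + 1| = 1 + t, and checks both the equation y'' = −2t (y')² and the initial conditions.

import sympy as sp

t = sp.symbols('t')

# On (-1, 1) the solution found above reads:
y = sp.Rational(1, 2) * sp.log(1 - t) - sp.Rational(1, 2) * sp.log(1 + t) + 2

residual = sp.diff(y, t, 2) + 2 * t * sp.diff(y, t)**2   # y'' + 2t (y')^2
print(sp.simplify(residual))                             # prints 0
print(y.subs(t, 0), sp.diff(y, t).subs(t, 0))            # prints 2 and -1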
Special second order equations where the variable t is missing, case (b), are more com-
plicated to solve.
Theorem 2.2.3 (Variable t Missing). If a second order differential equation has the form
y 00 = f (y, y 0 ),
and a solution y is invertible, with values y(t) and inverse function values t(y), then the
function w(y) = v(t(y)), where v(t) = y 0 (t), satisfies the first order equation
ẇ = f(y, w) / w,
where we denoted ẇ = dw/dy.
Remark: The chain rule for the derivative of a composition of functions allows us to transform
the original differential equation in the independent variable t into a differential equation in
the independent variable y.
Proof of Theorem 2.2.3: Introduce the notation
ẇ(y) = dw/dy,     v'(t) = dv/dt.
The differential equation in terms of v has the form v 0 (t) = f (y(t), v(t)). It is not clear
how to solve it, since the function y still appears in that equation. For that reason we now
introduce the function w(y) = v(t(y)), and we use the chain rule to find out the equation
satisfied by that function w. The chain rule says,
ẇ(y) = dw/dy = (dv/dt) (dt/dy) = v'(t)/y'(t) = v'(t)/v(t) = f(y(t), v(t))/v(t) = f(y, w(y))/w(y),
where all the functions of t are evaluated at t = t(y).
Solution: The variable t does not appear in the equation. So we start introducing the
function v(t) = y'(t). The equation is now given by v'(t) = 2y(t) v(t). We look for invertible
solutions y, then introduce the function w(y) = v(t(y)). This function satisfies
ẇ(y) = dw/dy = (dv/dt) (dt/dy)|_{t(y)} = (v'/y')|_{t(y)} = (v'/v)|_{t(y)}.
2.2.2. Reduction of Order Method. This method provides a way to obtain a second
solution to a differential equation if we happen to know one solution.
Theorem 2.2.4 (Reduction of Order). If a nonzero function y1 is solution to
y 00 + p(t) y 0 + q(t) y = 0. (2.2.1)
where p, q are given functions, then a second solution to this same equation is given by
y2(t) = y1(t) ∫ ( e^{−P(t)} / y1²(t) ) dt,     (2.2.2)
where P(t) = ∫ p(t) dt. Furthermore, y1 and y2 are fundamental solutions to Eq. (2.2.1).
Remark: In the first part of the proof we write y2(t) = v(t) y1(t) and show that y2 is a solution
of Eq. (2.2.1) iff the function v is a solution of
v'' + ( 2 y1'(t)/y1(t) + p(t) ) v' = 0.     (2.2.3)
In the second part we solve the equation for v. This is a first order equation for w = v',
since v itself does not appear in the equation, hence the name reduction of order method.
The equation for w is linear and first order, so we can solve it using the integrating factor
method. Then one more integration gives the function v, which is the factor multiplying y1
in Eq. (2.2.2).
Remark: The functions v and w in this subsection have no relation with the functions v
and w from the previous subsection.
Proof of Theorem 2.2.4: We write y2 = v y1 and we put this function into the differential
equation in (2.2.1), which gives us an equation for v. To start, compute y2' and y2'',
y20 = v 0 y1 + v y10 , y200 = v 00 y1 + 2v 0 y10 + v y100 .
Introduce these equations into the differential equation,
0 = (v 00 y1 + 2v 0 y10 + v y100 ) + p (v 0 y1 + v y10 ) + qv y1
= y1 v 00 + (2y10 + p y1 ) v 0 + (y100 + p y10 + q y1 ) v.
The function y1 is a solution to the original differential equation,
y100 + p y10 + q y1 = 0,
then, the equation for v is given by
y1 v'' + (2y1' + p y1) v' = 0   ⇒   v'' + ( 2 y1'/y1 + p ) v' = 0.
This is Eq. (2.2.3). The function v does not appear explicitly in this equation, so denoting
w = v' we obtain
w' + ( 2 y1'/y1 + p ) w = 0.
This is a first order linear equation for w, so we solve it using the integrating factor
method, with integrating factor
µ(t) = y1²(t) e^{P(t)},     where P(t) = ∫ p(t) dt.
Solution: We look for a solution of the form y2(t) = t v(t). This implies that
y2' = t v' + v,     y2'' = t v'' + 2v'.
So, the equation for v is given by
0 = t² (t v'' + 2v') + 2t (t v' + v) − 2t v.
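Remark: As a concrete check of the reduction of order formula (2.2.2), the SymPy sketch below is an illustration under the assumption that the equation in the example above is t² y'' + 2t y' − 2y = 0 with the known solution y1(t) = t; in normal form p(t) = 2/t. The formula then produces a second solution proportional to 1/t².

import sympy as sp

t = sp.symbols('t', positive=True)

p = 2 / t        # assumed equation in normal form: y'' + (2/t) y' - (2/t^2) y = 0
y1 = t           # known solution

P = sp.integrate(p, t)                           # P(t) = 2 log(t)
y2 = y1 * sp.integrate(sp.exp(-P) / y1**2, t)    # formula (2.2.2)

print(sp.simplify(y2))                                                   # -1/(3 t**2)
print(sp.simplify(t**2*sp.diff(y2, t, 2) + 2*t*sp.diff(y2, t) - 2*y2))   # 0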
2.2.3. Exercises.
2.3.1. The Roots of the Characteristic Polynomial. Thanks to the work done in § 2.1
we only need to find two linearly independent solutions to the second order linear homoge-
neous equation. Then Theorem 2.1.7 says that every other solution is a linear combination
of the former two. How do we find any pair of linearly independent solutions? Since the
equation is so simple, having constant coefficients, we find such solutions by trial and error.
Here is an example of this idea.
Example 2.3.1: Find solutions to the equation
y 00 + 5y 0 + 6y = 0. (2.3.1)
Solution: We try to find solutions to this equation using simple test functions. For exam-
ple, it is clear that power functions y = tn won’t work, since the equation
n(n − 1) t^{n−2} + 5n t^{n−1} + 6 t^n = 0
cannot be satisfied for all t ∈ R. We obtained, instead, a condition on t. This rules out
power functions. A key insight is to try with a test function having a derivative proportional
to the original function, y 0 (t) = r y(t). Such function would be simplified from the equation.
For example, we try now with the test function y(t) = ert . If we introduce this function in
the differential equation we get
(r2 + 5r + 6) ert = 0 ⇔ r2 + 5r + 6 = 0. (2.3.2)
We have eliminated the exponential from the differential equation, and now the equation is
a condition on the constant r. We now look for the appropriate values of r, which are the
roots of a polynomial of degree two,
r± = (1/2) ( −5 ± √(25 − 24) ) = (1/2) (−5 ± 1)   ⇒   r+ = −2,   r− = −3.
We have obtained two different roots, which implies we have two different solutions,
y1 (t) = e−2t , y2 (t) = e−3t .
These solutions are not proportional to each other, so they are fundamental solutions to the
differential equation in (2.3.1). Therefore, Theorem 2.1.7 in § 2.1 implies that we have found
all possible solutions to the differential equation, and they are given by
y(t) = c1 e−2t + c2 e−3t , c1 , c2 ∈ R. (2.3.3)
C
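Remark: Finding the roots of the characteristic polynomial is exactly what a numerical root finder does. The short Python sketch below (an illustration using NumPy, not part of the notes) recovers the roots −2 and −3 for the equation y'' + 5y' + 6y = 0 and prints the corresponding general solution.

import numpy as np

a1, a0 = 5.0, 6.0                       # y'' + a1 y' + a0 y = 0
r1, r2 = np.roots([1.0, a1, a0])        # roots of r^2 + a1 r + a0 (order not guaranteed)

print(r1, r2)                           # -3.0 and -2.0, in some order
print(f"y(t) = c1 exp({r1} t) + c2 exp({r2} t)")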
From the example above we see that this idea will produce fundamental solutions to
all constant coefficients homogeneous equations having associated polynomials with two
different roots. Such polynomials play an important role in finding solutions to differential
equations like the one above, so we give them a name.
As we saw in Example 2.3.1, the roots of the characteristic polynomial are crucial to
express the solutions of the differential equation above. The characteristic polynomial is a
second degree polynomial with real coefficients, and the general expression for its roots is
r± = (1/2) ( −a1 ± √(a1² − 4a0) ).
If the discriminant (a21 − 4a0 ) is positive, zero, or negative, then the roots of p are different
real numbers, only one real number, or a complex-conjugate pair of complex numbers. For
each case the solution of the differential equation can be expressed in different forms.
Theorem 2.3.2 (Constant Coefficients). If r± are the roots of the characteristic poly-
nomial to the second order linear homogeneous equation with constant coefficients
y 00 + a1 y 0 + a0 y = 0, (2.3.4)
and if c+ , c- are arbitrary constants, then the following statements hold true.
(a) If r+ 6= r- , real or complex, then the general solution of Eq. (2.3.4) is given by
ygen (t) = c+ er+ t + c- er- t .
(b) If r+ = r- = r0 ∈ R, then the general solution of Eq. (2.3.4) is given by
ygen (t) = c+ er0 t + c- ter0 t .
Furthermore, given real constants t0 , y0 and y1 , there is a unique solution to the initial value
problem given by Eq. (2.3.4) and the initial conditions y(t0 ) = y0 and y 0 (t0 ) = y1 .
Remarks:
(a) The proof is to guess that functions y(t) = ert must be solutions for appropriate values of
the exponent constant r, the latter being roots of the characteristic polynomial. When
the characteristic polynomial has two different roots, Theorem 2.1.7 says we have all
solutions. When the root is repeated we use the reduction of order method to find a
second solution not proportional to the first one.
(b) At the end of the section we show a proof where we construct the fundamental solutions
y1 , y2 without guessing them. We do not need to use Theorem 2.1.7 in this second proof,
which is based completely in a generalization of the reduction of order method.
Proof of Theorem 2.3.2: We guess that particular solutions to Eq. 2.3.4 must be expo-
nential functions of the form y(t) = ert , because the exponential will cancel out from the
equation and only a condition for r will remain. This is what happens,
r² e^{rt} + a1 r e^{rt} + a0 e^{rt} = 0   ⇒   r² + a1 r + a0 = 0.
The second equation says that the appropriate values of the exponent are the roots of the
characteristic polynomial. We now have two cases. If r+ 6= r- then the solutions
y+ (t) = er+ t , y- (t) = er- t ,
are linearly independent, so the general solution to the differential equation is
ygen (t) = c+ er+ t + c- er- t .
If r+ = r- = r0 , then we have found only one solution y+ (t) = er0 t , and we need to find a
second solution not proportional to y+ . This is what the reduction of order method is perfect
for. We write the second solution as
y- (t) = v(t) y+ (t) ⇒ y- (t) = v(t) er0 t ,
and we put this expression in the differential equation (2.3.4),
( v'' + 2r0 v' + v r0² ) e^{r0 t} + ( v' + r0 v ) a1 e^{r0 t} + a0 v e^{r0 t} = 0.
Solution: We know that the general solution of the differential equation above is
ygen (t) = c+ e−2t + c- e−3t .
We now find the constants c+ and c- that satisfy the initial conditions above,
1 = y(0) = c+ + c-,     −1 = y'(0) = −2c+ − 3c-     ⇒     c+ = 2,   c- = −1.
Therefore, the unique solution to the initial value problem is
y(t) = 2e−2t − e−3t .
C
Example 2.3.3: Find the general solution ygen of the differential equation
2y 00 − 3y 0 + y = 0.
Solution: We look for solutions of the form y(t) = e^{rt}, where r is a solution of the
characteristic equation
2r² − 3r + 1 = 0   ⇒   r± = (1/4) ( 3 ± √(9 − 8) )   ⇒   r+ = 1,   r- = 1/2.
Therefore, the general solution of the equation above is
ygen (t) = c+ et + c- et/2 .
C
2.3.2. Real Solutions for Complex Roots. We study in more detail the solutions to
the differential equation (2.3.4) in the case that the characteristic polynomial has complex
roots. Since these roots have the form
r± = − a1/2 ± (1/2) √(a1² − 4a0),
the roots are complex-valued in the case a21 − 4a0 < 0. We use the notation
r± = α ± iβ,     with     α = − a1/2,     β = √(a0 − a1²/4).
The fundamental solutions in Theorem 2.3.2 are the complex-valued functions
ỹ+ = e(α+iβ)t , ỹ- = e(α−iβ)t .
The general solution constructed from these solutions is
ygen (t) = c̃+ e(α+iβ)t + c̃- e(α−iβ)t , c̃+ , c̃- ∈ C.
This formula for the general solution includes real valued and complex valued solutions.
But it is not so simple to single out the real valued solutions. Knowing the real valued
solutions could be important in physical applications. If a physical system is described by a
differential equation with real coefficients, more often than not one is interested in finding
real valued solutions. For that reason we now provide a new set of fundamental solutions
that are real valued. Using real valued fundamental solutions it is simple to separate all real
valued solutions from the complex valued ones.
Theorem 2.3.3 (Real Valued Fundamental Solutions). If the differential equation
y 00 + a1 y 0 + a0 y = 0, (2.3.5)
where a1 , a0 are real constants, has characteristic polynomial with complex roots r± = α ±iβ
and complex valued fundamental solutions
ỹ+ (t) = e(α+iβ)t , ỹ- (t) = e(α−iβ)t ,
then the equation also has real valued fundamental solutions given by
y+ (t) = eαt cos(βt), y- (t) = eαt sin(βt).
Proof of Theorem 2.3.3: We start with the complex valued fundamental solutions
ỹ+ (t) = e(α+iβ)t , ỹ- (t) = e(α−iβ)t .
We take the function ỹ+ and we use a property of complex exponentials,
ỹ+(t) = e^{(α+iβ)t} = e^{αt} e^{iβt} = e^{αt} ( cos(βt) + i sin(βt) ) ,
where in the last step we used Euler's formula e^{iθ} = cos(θ) + i sin(θ). Repeating this calculation
for ỹ- we get,
ỹ+(t) = e^{αt} ( cos(βt) + i sin(βt) ) ,   ỹ-(t) = e^{αt} ( cos(βt) − i sin(βt) ) .
Solution: We already found the roots of the characteristic polynomial, but we do it again,
r² − 2r + 6 = 0   ⇒   r± = (1/2)( 2 ± √(4 − 24) )   ⇒   r± = 1 ± i√5 .
So the complex valued fundamental solutions are
ỹ+(t) = e^{(1+i√5)t} ,   ỹ-(t) = e^{(1−i√5)t} .
Theorem 2.3.3 says that real valued fundamental solutions are given by
y+(t) = e^t cos(√5 t) ,   y-(t) = e^t sin(√5 t) .
So the real valued general solution is given by
ygen(t) = ( c+ cos(√5 t) + c- sin(√5 t) ) e^t ,   c+ , c- ∈ R.
C
Remark: Sometimes it is difficult to remember the formula for real valued solutions. One
way to obtain those solutions without remembering the formula is to repeat the proof
of Theorem 2.3.3. Start with the complex valued solution ỹ+ and use the properties of the
complex exponential,
ỹ+(t) = e^{(1+i√5)t} = e^t e^{i√5 t} = e^t ( cos(√5 t) + i sin(√5 t) ) .
The real valued fundamental solutions are the real and imaginary parts in that expression.
Remark: Physical processes that oscillate in time without dissipation could be described
by differential equations like the one in this example.
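Remark: The construction in Theorem 2.3.3 is straightforward to code. The sketch below (Python, not part of the original text; the helper name real_fundamental_solutions is hypothetical) computes α and β from the coefficients a1, a0 when the characteristic roots are complex, and returns the two real valued fundamental solutions.

import math

def real_fundamental_solutions(a1, a0):
    disc = a1**2 - 4*a0
    if disc >= 0:
        raise ValueError("real roots: use exponential fundamental solutions instead")
    alpha = -a1/2
    beta = math.sqrt(-disc)/2           # beta = sqrt(a0 - a1^2/4)
    y_plus  = lambda t: math.exp(alpha*t)*math.cos(beta*t)
    y_minus = lambda t: math.exp(alpha*t)*math.sin(beta*t)
    return alpha, beta, y_plus, y_minus

# The example above: y'' - 2y' + 6y = 0, that is a1 = -2, a0 = 6.
alpha, beta, yp, ym = real_fundamental_solutions(-2.0, 6.0)
print(alpha, beta)                      # 1.0 and sqrt(5) = 2.2360...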
2.3.3. Constructive proof of Theorem 2.3.2. We now present an alternative proof for
Theorem 2.3.2 that does not involve guessing the fundamental solutions of the equation.
Instead, we construct these solutions using a generalization of the reduction of order method.
Proof of Theorem 2.3.2: The proof has two main parts: First, we transform the original
equation into an equation simpler to solve for a new unknown; second, we solve this simpler
problem.
In order to transform the problem into a simpler one, we express the solution y as a
product of two functions, that is, y(t) = u(t)v(t). Choosing v in an appropriate way the
equation for u will be simpler to solve than the equation for y. Hence,
y = uv ⇒ y 0 = u0 v + v 0 u ⇒ y 00 = u00 v + 2u0 v 0 + v 00 u.
Therefore, Eq. (2.3.4) implies that
(u00 v + 2u0 v 0 + v 00 u) + a1 (u0 v + v 0 u) + a0 uv = 0,
that is,
[ u'' + ( a1 + 2 v'/v ) u' + a0 u ] v + ( v'' + a1 v' ) u = 0. (2.3.6)
We now choose the function v such that
a1 + 2 v'/v = 0   ⇔   v'/v = −a1/2 . (2.3.7)
We choose a simple solution of this equation, given by
v(t) = e−a1 t/2 .
Having this expression for v one can compute v 0 and v 00 , and it is simple to check that
v'' + a1 v' = −( a1²/4 ) v. (2.3.8)
Introducing the first equation in (2.3.7) and Eq. (2.3.8) into Eq. (2.3.6), and recalling that
v is non-zero, we obtain the simplified equation for the function u, given by
u'' − k u = 0 ,   k = a1²/4 − a0 . (2.3.9)
Eq. (2.3.9) for u is simpler than the original equation (2.3.4) for y since in the former there
is no term with the first derivative of the unknown function.
In order to solve Eq. (2.3.9) we repeat the idea used to obtain this equation, that
is, we express the function u as a product of two functions, and solve a simpler problem for one of
the functions. We first consider the harder case, which is when k ≠ 0. In this case, let us
express u(t) = e^{√k t} w(t). Hence,
u' = √k e^{√k t} w + e^{√k t} w'   ⇒   u'' = k e^{√k t} w + 2√k e^{√k t} w' + e^{√k t} w''.
Therefore, Eq. (2.3.9) for function u implies the following equation for function w
0 = u'' − k u = e^{√k t} ( w'' + 2√k w' )   ⇒   w'' + 2√k w' = 0.
Only derivatives of w appear in the latter equation, so denoting x(t) = w0 (t) we have to
solve a simple equation
x' = −2√k x   ⇒   x(t) = x0 e^{−2√k t} ,   x0 ∈ R.
Integrating we obtain w as follows,
w' = x0 e^{−2√k t}   ⇒   w(t) = −( x0/(2√k) ) e^{−2√k t} + c0 .
Renaming c1 = −x0/(2√k), we obtain
w(t) = c1 e^{−2√k t} + c0   ⇒   u(t) = c0 e^{√k t} + c1 e^{−√k t} .
We then obtain the expression for the solution y = uv, given by
y(t) = c0 e^{(−a1/2 + √k)t} + c1 e^{(−a1/2 − √k)t} .
Since k = a1²/4 − a0, the numbers
r± = −a1/2 ± √k   ⇔   r± = (1/2)( −a1 ± √(a1² − 4a0) )
are the roots of the characteristic polynomial
r² + a1 r + a0 = 0,
so we can express all solutions of Eq. (2.3.4) as follows,
y(t) = c0 e^{r+ t} + c1 e^{r- t} ,   k ≠ 0.
Finally, consider the case k = 0. Then, Eq. (2.3.9) is simply given by
u'' = 0   ⇒   u(t) = c0 + c1 t ,   c0 , c1 ∈ R.
Then, the solution y to Eq. (2.3.4) in this case is given by
y(t) = (c0 + c1 t) e−a1 t/2 .
Since k = 0, the characteristic equation r2 +a1 r+a0 = 0 has only one root r+ = r− = −a1 /2,
so the solution y above can be expressed as
y(t) = (c0 + c1 t) er+ t , k = 0.
The Furthermore part is the same as in Theorem 2.3.2. This establishes the Theorem.
Notes.
In the case that the characteristic polynomial of a differential equation has repeated roots
there is an interesting argument to guess the solution y- . The idea is to take a particular
type of limit in solutions of differential equations with complex valued roots.
Consider the equation in (2.3.4) with a characteristic polynomial having complex valued
roots given by r± = α ± iβ, with
α = −a1/2 ,   β = √(a0 − a1²/4) .
Real valued fundamental solutions in this case are given by
ŷ+ = eα t cos(βt), ŷ- = eα t sin(βt).
We now study what happens to these solutions ŷ+ and ŷ- in the following limit: The variable
t is held constant, α is held constant, and β → 0. The last two conditions are conditions on
the equation coefficients, a1 , a0 . For example, we fix a1 and we vary a0 → a21 /4 from above.
Since cos(βt) → 1 as β → 0 with t fixed, then keeping α fixed too, we obtain
ŷ+ (t) = eα t cos(βt) −→ eα t = y+ (t).
Since sin(βt)/(βt) → 1 as β → 0 with t constant, that is, sin(βt) → βt, we conclude that
ŷ-(t)/β = ( sin(βt)/β ) e^{αt} = t ( sin(βt)/(βt) ) e^{αt} −→ t e^{αt} = y-(t).
The calculation above says that the function ŷ- /β is close to the function y- (t) = t eα t in
the limit β → 0, t held constant. This calculation provides a candidate, y- (t) = t y+ (t),
of a solution to Eq. (2.3.4). It is simple to verify that this candidate is in fact solution
of Eq. (2.3.4). Since y- is not proportional to y+ , one then concludes the functions y+ , y-
are a fundamental set for the differential equation in (2.3.4) in the case the characteristic
polynomial has repeated roots.
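Remark: The limit argument above can be illustrated numerically. The Python sketch below (not part of the original text) fixes t and α and shows that e^{αt} sin(βt)/β approaches t e^{αt} as β → 0.

import math

alpha, t = -0.5, 2.0
target = t*math.exp(alpha*t)                       # y-(t) = t e^{alpha t}
for beta in (1.0, 0.1, 0.01, 0.001):
    approx = math.exp(alpha*t)*math.sin(beta*t)/beta
    print(beta, approx, abs(approx - target))      # the error decreases with beta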
2.3.4. Exercises.
2.3.1.- . 2.3.2.- .
Remark: The difference of any two solutions of the nonhomogeneous equation is actually
a solution of the homogeneous equation. This is the key idea to prove Theorem 2.4.1.
Proof of Theorem 2.4.1: Let y be any solution of the nonhomogeneous equation L(y) = f .
Recall that we already have one solution, yp , of the nonhomogeneous equation, L(yp ) = f .
We can now subtract the second equation from the first,
L(y) − L(yp ) = f − f = 0 ⇒ L(y − yp ) = 0.
The equation on the right is obtained from the linearity of the operator L. This last equation
says that the difference of any two solutions of the nonhomogeneous equation is a solution of
the homogeneous equation. The general solution formula for homogeneous equations says
that all solutions of the homogeneous equation can be written as linear combinations of a
pair of fundamental solutions, y1, y2. So there exist constants c1, c2 such that
y − yp = c1 y1 + c2 y2 .
Since for every y solution of L(y) = f we can find constants c1 , c2 such that the equation
above holds true, we have found a formula for all solutions of the nonhomogeneous equation.
This establishes the Theorem.
2.4.2. The Undetermined Coefficients Method. The general solution formula in (2.4.2)
is the most useful if there is a way to find a particular solution yp of the nonhomogeneous
equation L(yp ) = f . We now present a method to find such particular solution, the Un-
determined Coefficients Method. This method works for linear operators L with constant
coefficients and for simple source functions f . Here is a summary of the Undetermined
Coefficients Method:
(1) Find fundamental solutions y1 , y2 of the homogeneous equation L(y) = 0.
(2) Given the source functions f , guess the solutions yp following the Table 1 below.
(3) If the function yp given by the table satisfies L(yp) = 0, then change the guess to t yp.
If t yp satisfies L(t yp) = 0 as well, then change the guess to t² yp.
(4) Find the undetermined constants k in the function yp using the equation L(yp ) = f .
f(t) (source)                      yp(t) (guess)
K e^{at}                           k e^{at}
K_m t^m + · · · + K_0              k_m t^m + · · · + k_0
K_1 cos(bt) + K_2 sin(bt)          k_1 cos(bt) + k_2 sin(bt)
Table 1. A list of source functions and the corresponding guesses for the particular solution yp.
This is the undetermined coefficients method. It is a set of simple rules to find a particular
solution yp of a nonhomogeneous equation L(yp) = f in the case that the source function
f is one of the entries in the Table 1. There are a few formulas in particular cases and a
few generalizations of the whole method. We discuss them after a few examples.
Example 2.4.1: Find all the solutions to the nonhomogeneous equation
y'' − 3y' − 4y = 3 e^{2t}.
Solution: We follow the steps of the undetermined coefficients method.
(1): The characteristic polynomial of the homogeneous equation is r² − 3r − 4, with roots r+ = 4 and r- = −1, so the fundamental solutions of L(y) = 0 are y+(t) = e^{4t} and y-(t) = e^{−t}.
(2): The table says: For f(t) = 3e^{2t} guess yp(t) = k e^{2t}. The constant k is the undetermined
coefficient we must find.
(3): Since yp(t) = k e^{2t} is not a solution of the homogeneous equation, we do not need to
modify our guess. (Recall: L(y) = 0 iff there exist constants c+, c- such that y(t) = c+ e^{4t} + c- e^{−t}.)
(4): Introduce yp into L(yp ) = f and find k. So we do that,
(2² − 6 − 4) k e^{2t} = 3 e^{2t}   ⇒   −6k = 3   ⇒   k = −1/2 .
We guessed that yp must be proportional to the exponential e2t in order to cancel out the
exponentials in the equation above. We have obtained that
yp(t) = −(1/2) e^{2t} .
The undetermined coefficients method gives us a way to compute a particular solution yp of
the nonhomogeneous equation. We now use the general solution theorem, Theorem 2.4.1,
to write the general solution of the nonhomogeneous equation,
ygen(t) = c+ e^{4t} + c- e^{−t} − (1/2) e^{2t} .
C
Remark: The step (4) in Example 2.4.1 is a particular case of the following statement.
Lemma 2.4.3. Consider a nonhomogeneous equation L(y) = f with a constant coefficient
operator L and characteristic polynomial p. If the source function is f (t) = K eat , with
p(a) 6= 0, then a particular solution of the nonhomogeneous equation is
yp(t) = ( K/p(a) ) e^{at} .
Proof of Lemma 2.4.3: Since the linear operator L has constant coefficients, let us write
L and its associated characteristic polynomial p as follows,
L(y) = y 00 + a1 y 0 + a0 y, p(r) = r2 + a1 r + a0 .
Since the source function is f(t) = K e^{at}, the Table 1 says that a good guess for a particular
solution of the nonhomogeneous equation is yp(t) = k e^{at}. Our hypothesis is that this guess
is not a solution of the homogeneous equation, since
L(yp ) = (a2 + a1 a + a0 ) k eat = p(a) k eat , and p(a) 6= 0.
We then compute the constant k using the equation L(yp ) = f ,
(a² + a1 a + a0) k e^{at} = K e^{at}   ⇒   p(a) k e^{at} = K e^{at}   ⇒   k = K/p(a) .
We get the particular solution yp(t) = ( K/p(a) ) e^{at}. This establishes the Lemma.
Remark: As we said, the step (4) in Example 2.4.1 is a particular case of Lemma 2.4.3,
yp(t) = ( 3/p(2) ) e^{2t} = ( 3/(2² − 6 − 4) ) e^{2t} = ( 3/(−6) ) e^{2t}   ⇒   yp(t) = −(1/2) e^{2t} .
In the following example our first guess for a particular solution yp happens to be a
solution of the homogeneous equation.
Example 2.4.2: Find all solutions to the nonhomogeneous equation
y 00 − 3y 0 − 4y = 3 e4t .
Solution: If we write the equation as L(y) = f , with f (t) = 3 e4t , then the operator L is
the same as in Example 2.4.1. So the solutions of the homogeneous equation L(y) = 0, are
the same as in that example,
y+ (t) = e4t , y- (t) = e−t .
The source function is f (t) = 3 e4t , so the Table 1 says that we need to guess yp (t) = k e4t .
However, this function yp is solution of the homogeneous equation, because
yp = k y + .
We have to change our guess, as indicated in the undetermined coefficients method, step (3)
yp (t) = kt e4t .
This new guess is not solution of the homogeneous equation. So we proceed to compute the
constant k. We introduce the guess into L(yp ) = f ,
yp' = (1 + 4t) k e^{4t} ,   yp'' = (8 + 16t) k e^{4t}   ⇒   [ (8 − 3) + (16 − 12 − 4) t ] k e^{4t} = 3 e^{4t} ,
that is, 5k = 3, so k = 3/5. Therefore, a particular solution is yp(t) = (3/5) t e^{4t}, and the general
solution of the nonhomogeneous equation is ygen(t) = c+ e^{4t} + c- e^{−t} + (3/5) t e^{4t}. C
Example 2.4.3: Find all the solutions to the nonhomogeneous equation
y'' − 3y' − 4y = 2 sin(t).
Solution: If we write the equation as L(y) = f , with f (t) = 2 sin(t), then the operator L
is the same as in Example 2.4.1. So the solutions of the homogeneous equation L(y) = 0,
are the same as in that example,
y+ (t) = e4t , y- (t) = e−t .
Since the source function is f (t) = 2 sin(t), the Table 1 says that we need to choose the
function yp (t) = k1 cos(t) + k2 sin(t). This function yp is not solution to the homogeneous
equation. So we look for the constants k1 , k2 using the differential equation,
yp0 = −k1 sin(t) + k2 cos(t), yp00 = −k1 cos(t) − k2 sin(t),
and then we obtain
[−k1 cos(t) − k2 sin(t)] − 3[−k1 sin(t) + k2 cos(t)] − 4[k1 cos(t) + k2 sin(t)] = 2 sin(t).
Collecting the cosine and sine terms in the equation above we obtain −5k1 − 3k2 = 0 and
3k1 − 5k2 = 2, with solution k1 = 3/17 and k2 = −5/17. Therefore, a particular solution is
yp(t) = (1/17)( 3 cos(t) − 5 sin(t) ). C
The next example collects a few nonhomogeneous equations and the guessed function yp .
Example 2.4.4: We provide a few more examples of nonhomogeneous equations and the
appropriate guesses for the particular solutions.
(a) For y'' − 3y' − 4y = 3e^{2t} sin(t), guess yp(t) = ( k1 cos(t) + k2 sin(t) ) e^{2t}.
(d) For y'' − 3y' − 4y = 3t sin(t), guess yp(t) = (k1 t + k0)( k̃1 cos(t) + k̃2 sin(t) ).
C
Remark: Suppose that the source function f does not appear in Table 1, but f can be
written as f = f1 + f2 , with f1 and f2 in the table. In such case look for a particular solution
yp = yp1 + yp2 , where L(yp1 ) = f1 and L(yp2 ) = f2 . Since the operator L is linear,
L(yp ) = L(yp1 + yp2 ) = L(yp1 ) + L(yp2 ) = f1 + f2 = f ⇒ L(yp ) = f.
Example 2.4.5: Find all the solutions to the nonhomogeneous equation
y'' − 3y' − 4y = 3 e^{2t} + 2 sin(t).
Solution: If we write the equation as L(y) = f , with f(t) = 3 e^{2t} + 2 sin(t), then the operator L
is the same as in Examples 2.4.1 and 2.4.3. So the solutions of the homogeneous equation
L(y) = 0 are the same as in those examples,
y+ (t) = e4t , y- (t) = e−t .
The source function f (t) = 3 e2t + 2 sin(t) does not appear in Table 1, but each term does,
f1 (t) = 3 e2t and f2 (t) = 2 sin(t). So we look for a particular solution of the form
y p = y p1 + y p2 , where L(yp1 ) = 3 e2t , L(yp2 ) = 2 sin(t).
We have chosen this example because we have solved each one of these equations before, in
Example 2.4.1 and 2.4.3. We found the solutions
yp1(t) = −(1/2) e^{2t} ,   yp2(t) = (1/17)( 3 cos(t) − 5 sin(t) ) .
Therefore, a particular solution of the equation above is yp(t) = −(1/2) e^{2t} + (1/17)( 3 cos(t) − 5 sin(t) ).
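Remark: For a trigonometric source the undetermined coefficients k1, k2 solve a 2 × 2 algebraic linear system. The Python sketch below (not part of the original text; it assumes numpy, and that the guess yp = k1 cos(bt) + k2 sin(bt) is not a solution of the homogeneous equation) sets up and solves that system for y'' + a1 y' + a0 y = F1 cos(bt) + F2 sin(bt).

import numpy as np

def trig_coefficients(a1, a0, b, F1, F2):
    # collecting the cos(bt) and sin(bt) terms of L(yp) = f gives this linear system
    A = np.array([[a0 - b**2,  a1*b],
                  [-a1*b,      a0 - b**2]], dtype=float)
    return np.linalg.solve(A, np.array([F1, F2], dtype=float))

# Example 2.4.3: y'' - 3y' - 4y = 2 sin(t)  ->  k1 = 3/17, k2 = -5/17
print(trig_coefficients(-3, -4, 1, 0, 2))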
2.4.3. The Variation of Parameters Method. This method provides a second way to
find a particular solution yp to a nonhomogeneous equation L(y) = f . We summarize this
method in a formula that computes yp in terms of any pair of fundamental solutions to the
homogeneous equation L(y) = 0. The variation of parameters method works with second
order linear equations having variable coefficients and continuous but otherwise arbitrary
sources. When the source function of a nonhomogeneous equation is simple enough to
appear in Table 1 the undetermined coefficients method is a quick way to find a particular
solution to the equation. When the source is more complicated, one usually turns to the
variation of parameters method, with its more involved formula for a particular solution.
Theorem 2.4.4 (Variation of Parameters). A particular solution to the equation
L(y) = f,
with L(y) = y'' + p(t) y' + q(t) y and p, q, f continuous functions, is given by
yp = u1 y1 + u2 y2 ,
where y1, y2 are fundamental solutions of the homogeneous equation L(y) = 0 and the func-
tions u1 , u2 are defined by
u1(t) = − ∫ ( y2(t) f(t) / Wy1y2(t) ) dt ,   u2(t) = ∫ ( y1(t) f(t) / Wy1y2(t) ) dt , (2.4.3)
where Wy1 y2 is the Wronskian of y1 and y2 .
The proof rests on a generalization of the reduction of order method. Recall that the
reduction of order method is a way to find a second solution y2 of a homogeneous equation if
we already know one solution y1. One writes y2 = u y1 and the original equation L(y2) = 0
provides an equation for u. This equation for u is simpler than the original equation for y2
because the function y1 satisfies L(y1 ) = 0.
The formula for yp is obtained generalizing the reduction order method. We write yp in
terms of both fundamental solutions y1 , y2 of the homogeneous equation,
yp (t) = u1 (t) y1 (t) + u2 (t) y2 (t).
We put this yp in the equation L(yp ) = f and we find an equation relating u1 and u2 . It
is important to realize that we have added one new function to the original problem. The
original problem is to find yp . Now we need to find u1 and u2 , but we still have only one
equation to solve, L(yp ) = f . The problem for u1 , u2 cannot have a unique solution. So we
are completely free to add a second equation to the original equation L(yp ) = f . We choose
the second equation so that we can solve for u1 and u2 . We unveil this second equation
when we are in the middle of the proof of Theorem 2.4.4.
Proof of Theorem 2.4.4: We must find a function yp solution of L(yp ) = f . We know a
pair of fundamental solutions, y1 , y2 , of the homogeneous equation L(y) = 0. Here is where
we generalize the reduction order method by looking for a function yp of the form
yp = u1 y1 + u2 y2 ,
where the functions u1 , u2 must be determined from the equation L(yp ) = f . We started
looking for one function, yp , and now we are looking for two functions u1 , u2 . The original
equation L(yp ) = f will give us a relation between u1 and u2 . Because we have added a
new function to the problem, we need to add one more equation to the problem so we get
a unique solution u1 , u2 . We are completely free to choose this extra equation. However, at
this point we have no idea what to choose.
Before adding a new equation to the system, we work with the original equation. We
introduce yp into the equation L(yp ) = f . We must first compute the derivatives
yp' = u1' y1 + u1 y1' + u2' y2 + u2 y2' ,   yp'' = u1'' y1 + 2u1' y1' + u1 y1'' + u2'' y2 + 2u2' y2' + u2 y2'' .
We reorder a few terms and we see that L(yp) = f has the form
u1'' y1 + u2'' y2 + 2( u1' y1' + u2' y2' ) + p ( u1' y1 + u2' y2 )
+ u1 ( y1'' + p y1' + q y1 ) + u2 ( y2'' + p y2' + q y2 ) = f.
The functions y1 and y2 are solutions to the homogeneous equation,
y1'' + p y1' + q y1 = 0 ,   y2'' + p y2' + q y2 = 0 ,
so u1 and u2 must be solutions of a simpler equation than the one above, given by
u1'' y1 + u2'' y2 + 2( u1' y1' + u2' y2' ) + p ( u1' y1 + u2' y2 ) = f. (2.4.4)
As we said above, this equation does not have a unique solution u1 , u2 . This is just a relation
between these two functions. Here is where we need to add a new equation so that we can
get a unique solution for u1 , u2 . What is an appropriate equation to add? Any equation
that simplifies the Eq. (2.4.4) is a good candidate. For two reasons, we take the equation
u1' y1 + u2' y2 = 0. (2.4.5)
The first reason is that this equation makes the last term on the left-hand side of Eq. (2.4.4)
vanish. So the system we need to solve is
u1'' y1 + u2'' y2 + 2( u1' y1' + u2' y2' ) = f , (2.4.6)
u1' y1 + u2' y2 = 0. (2.4.7)
The second reason is that this second equation simplifies the first equation even further.
Just take the derivative of the second equation,
( u1' y1 + u2' y2 )' = 0   ⇒   u1'' y1 + u2'' y2 + ( u1' y1' + u2' y2' ) = 0.
This last equation lets us replace u1'' y1 + u2'' y2 in Eq. (2.4.6) by −( u1' y1' + u2' y2' ), because
of Eq. (2.4.7). So we end with the equations
u1' y1' + u2' y2' = f ,
u1' y1 + u2' y2 = 0.
And this is a 2 × 2 algebraic linear system for the unknowns u01 , u02 . It is hard to overstate
the importance of the word “algebraic” in the previous sentence. From the second equation
above we compute u02 and we introduce it in the first equation,
u2' = −( y1/y2 ) u1'   ⇒   u1' y1' − ( y1 y2'/y2 ) u1' = f   ⇒   u1' ( y1' y2 − y1 y2' )/y2 = f.
Recalling that the Wronskian of two functions is Wy1y2 = y1 y2' − y1' y2, we get
u1' = − y2 f / Wy1y2   ⇒   u2' = y1 f / Wy1y2 .
These equations are the derivative of Eq. (2.4.3). Integrate them in the variable t and choose
the integration constants to be zero. We get Eq. (2.4.3). This establishes the Theorem.
Remark: The integration constants in the expressions for u1 , u2 can always be chosen to
be zero. To understand the effect of the integration constants in the function yp , let us do
the following. Denote by u1 and u2 the functions in Eq. (2.4.3), and given any real numbers
c1 and c2 define
ũ1 = u1 + c1 , ũ2 = u2 + c2 .
Then the corresponding solution ỹp is given by
ỹp = ũ1 y1 + ũ2 y2 = u1 y1 + u2 y2 + c1 y1 + c2 y2 ⇒ ỹp = yp + c1 y1 + c2 y2 .
The two solutions ỹp and yp differ by a solution to the homogeneous differential equation.
So both functions are also solutions to the nonhomogeneous equation. One is then free to
choose the constants c1 and c2 in any way. We chose them in the proof above to be zero.
Example 2.4.6: Find a particular solution to the nonhomogeneous equation
y'' − 5y' + 6y = 2 e^t
using the variation of parameters method.
Solution: The formula for yp in Theorem 2.4.4 requires that we know fundamental solutions to
the homogeneous problem. So we start finding these solutions first. Since the equation has
constant coefficients, we compute the characteristic equation,
r² − 5r + 6 = 0   ⇒   r± = (1/2)( 5 ± √(25 − 24) )   ⇒   r+ = 3 ,   r- = 2.
So, the functions y1 and y2 in Theorem 2.4.4 are in our case given by
y1 (t) = e3t , y2 (t) = e2t .
The Wronskian of these two functions is given by
Wy1 y2 (t) = (e3t )(2e2t ) − (3e3t )(e2t ) ⇒ Wy1 y2 (t) = −e5t .
We are now ready to compute the functions u1 and u2. Notice that Eq. (2.4.3) gives the
following differential equations,
u1' = − y2 f / Wy1y2 ,   u2' = y1 f / Wy1y2 .
So, the equation for u1 is the following,
u1' = −e^{2t} (2e^t)(−e^{−5t})   ⇒   u1' = 2e^{−2t}   ⇒   u1 = −e^{−2t} ,
and the equation for u2 is
u2' = e^{3t} (2e^t)(−e^{−5t}) = −2e^{−t}   ⇒   u2 = 2e^{−t} .
Therefore, a particular solution to the nonhomogeneous equation is
yp = u1 y1 + u2 y2 = −e^{−2t} e^{3t} + 2e^{−t} e^{2t} = e^t . C
Example 2.4.7: Find a particular solution to the nonhomogeneous equation
t² y'' − 2y = 3t² − 1 ,
knowing that y1(t) = t² and y2(t) = 1/t are fundamental solutions of the homogeneous equation.
Solution: We first rewrite the nonhomogeneous equation above in the form given in The-
orem 2.4.4. In this case we must divide the whole equation by t²,
y'' − (2/t²) y = 3 − 1/t²   ⇒   f(t) = 3 − 1/t² .
We now proceed to compute the Wronskian of the fundamental solutions y1 , y2 ,
Wy1y2(t) = (t²)( −1/t² ) − (2t)( 1/t )   ⇒   Wy1y2(t) = −3.
We now use the equation in (2.4.3) to obtain the functions u1 and u2 ,
u1' = −( 1/t )( 3 − 1/t² )( 1/(−3) ) = 1/t − (1/3) t^{−3}   ⇒   u1 = ln(t) + (1/6) t^{−2} ,
u2' = ( t² )( 3 − 1/t² )( 1/(−3) ) = −t² + 1/3   ⇒   u2 = −(1/3) t³ + (1/3) t .
A particular solution to the nonhomogeneous equation above is ỹp = u1 y1 + u2 y2 , that is,
ỹp = ( ln(t) + (1/6) t^{−2} ) (t²) + (1/3)( −t³ + t )( t^{−1} )
   = t² ln(t) + 1/6 − (1/3) t² + 1/3
   = t² ln(t) + 1/2 − (1/3) t²
   = t² ln(t) + 1/2 − (1/3) y1(t).
However, a simpler expression for a solution of the nonhomogeneous equation above is
yp = t² ln(t) + 1/2 .
C
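Remark: The whole computation in this example can be reproduced with formula (2.4.3) and a computer algebra system. The Python sketch below (not part of the original text; it assumes the sympy library and drops the integration constants, as in the proof of Theorem 2.4.4) recovers u1, u2 and a particular solution.

import sympy as sp

t = sp.symbols('t', positive=True)
y1, y2 = t**2, 1/t                      # fundamental solutions of the homogeneous equation
f = 3 - 1/t**2                          # source of y'' - (2/t^2) y = 3 - 1/t^2

W  = sp.simplify(y1*sp.diff(y2, t) - sp.diff(y1, t)*y2)   # Wronskian, here -3
u1 = sp.integrate(-y2*f/W, t)
u2 = sp.integrate( y1*f/W, t)
yp = sp.simplify(u1*y1 + u2*y2)
print(W, u1, u2, yp, sep='\n')          # yp differs from t^2 ln(t) + 1/2 by a homogeneous solution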
Remark: Sometimes it could be difficult to remember the formulas for functions u1 and u2
in (2.4.3). In such case one can always go back to the place in the proof of Theorem 2.4.4
where these formulas come from, the system
u01 y10 + u02 y20 = f
u01 y1 + u02 y2 = 0.
The system above could be simpler to remember than the equations in (2.4.3). We end this
Section using the equations above to solve the problem in Example 2.4.7. Recall that the
solutions to the homogeneous equation in Example 2.4.7 are y1 (t) = t2 , and y2 (t) = 1/t,
while the source function is f (t) = 3 − 1/t2 . Then, we need to solve the system
t² u1' + (1/t) u2' = 0 ,
2t u1' + ( −1/t² ) u2' = 3 − 1/t² .
This is an algebraic linear system for u01 and u02 . Those are simple to solve. From the equation
on top we get u02 in terms of u01 , and we use that expression on the bottom equation,
u2' = −t³ u1'   ⇒   2t u1' + t u1' = 3 − 1/t²   ⇒   u1' = 1/t − 1/(3t³) .
t2 t 3t
Substitute back the expression for u1' in the first equation above and we get u2'. We get
u1' = 1/t − 1/(3t³) ,   u2' = −t² + 1/3 .
We should now integrate these functions to get u1 and u2 and then get the particular solution
ỹp = u1 y1 + u2 y2. We do not repeat these calculations, since they are done in Example 2.4.7.
2.4.4. Exercises.
2.4.1.- . 2.4.2.- .
2.5. Applications
Different physical systems can be mathematically identical. In this Section we show that a
weight attached to a spring, oscillating through air or under water, is mathematically identical
to the behavior of an electric current in a circuit containing a resistor, a capacitor, and an
inductor. Mathematically identical means in this case that both systems are described by the
same differential equation.
2.5.1. Review of Constant Coefficient Equations. In § 2.3 we have found solutions to
second order, linear, homogeneous, differential equations with constant coefficients,
y 00 + a1 y 0 + a0 y = 0, a1 , a0 ∈ R. (2.5.1)
Theorem 2.3.2 contains formulas for the general solution of this equation. We review these
formulas here and at the same time we introduce new names that are common in the physics
literature to describe these solutions. The first step to obtain solutions to Eq. (2.5.1) is to
find the roots of the characteristic polynomial p(r) = r² + a1 r + a0, which are given by
r± = −a1/2 ± (1/2)√(a1² − 4a0) .
We then have three different cases to consider.
(a) A system is called overdamped in the case that a21 −4a0 > 0. In this case the characteristic
polynomial has real and distinct roots, r+ , r- , and the corresponding solutions to the
differential equation are
y+ (t) = er+ t , y- (t) = er- t .
So the solutions are exponentials, increasing or decreasing, according to whether the roots
are positive or negative, respectively. The decreasing exponential solutions originate the
name overdamped solutions.
(b) A system is called critically damped in the case that a21 −4a0 = 0. In this case the charac-
teristic polynomial has only one real, repeated, root, r0 = −a1 /2, and the corresponding
solutions to the differential equation are then,
y+ (t) = e−a1 t/2 , y- (t) = t e−a1 t/2 .
(c) A system is called underdamped in the case that a21 − 4a0 < 0. In this case the character-
istic polynomial has two complex roots, r± = α ± βi, one being the complex conjugate
of the other, since the polynomial has real coefficients. The corresponding solutions to
the differential equation are
y+ (t) = eαt cos(βt), y- (t) = eαt sin(βt).
where α = −a1/2 and β = (1/2)√(4a0 − a1²). In the particular case that the real part of
the solutions vanishes, a1 = 0, the system is called undamped, since it has oscillatory
solutions without any exponential decay or increase.
2.5.2. Undamped Mechanical Oscillations. Springs are curious objects: when you slightly
deform them they create a force proportional to, and in the opposite direction of, the deformation.
When you release the spring, it goes back to its original shape. This is true for small enough
deformations. If you stretch the spring long enough, the deformations are permanent.
Consider a spring-plus-body system as shown in Fig. 2.5.2. A spring is fixed to a ceiling
and hangs vertically with a natural length l. It stretches by ∆l when a body with mass m
is attached to the spring lower end, just like the middle spring in Fig. 2.5.2. We assume
that the weight m is small enough so that the spring is not damaged. This means that the
spring acts like a normal spring, whenever it is deformed by an amount ∆l it makes a force
G. NAGY – ODE August 16, 2015 105
proportional and opposite to the deformation, Fs0 = −k ∆l. Here k > 0 is a constant that
depends on the type of spring. Newton’s law of motion implies the following result.
Theorem 2.5.1. A spring-plus-body system with spring constant k, body mass m, at rest
with a spring deformation ∆l, within the range where the spring acts like a spring, satisfies
mg = k ∆l.
Proof of Theorem 2.5.1: Since the spring-plus-body system is at rest, Newton's law of
motion implies that all forces acting on the body must add up to zero. The only two forces
acting on the body are its weight, Fg = mg, and the force done by the spring, Fs0 = −k ∆l.
So Fg + Fs0 = 0, that is, mg − k ∆l = 0, which gives mg = k ∆l. This establishes the Theorem.
We now find out how the body will move when we take it away from the rest position.
To describe that movement we introduce a vertical coordinate for the displacements, y, as
shown in Fig. 2.5.2, with y positive downwards, and y = 0 at the rest position of the spring
and the body. The physical system we want to describe is simple; we further stretch the
spring with the body by y0 and then we release it with an initial velocity ŷ0 . Newton’s law
of motion determine the subsequent motion.
Theorem 2.5.2. The vertical movement of a spring-plus-body system in air with spring
constant k > 0 and body mass m > 0 is described by the solutions of the differential equation
m y 00 (t) + k y(t) = 0, (2.5.2)
where y is the vertical displacement function as shown in Fig. 2.5.2. Furthermore, there is
a unique solution to Eq. (2.5.2) satisfying the initial conditions y(0) = y0 and y 0 (0) = y1 ,
y(t) = A cos(ω0 t − φ),
with natural frequency ω0 = √(k/m), where the amplitude A > 0 and phase-shift φ ∈ (−π, π] are
A = √( y0² + y1²/ω0² ) ,   φ = arctan( y1/(ω0 y0) ) .
Remark: The natural frequency of the system ω0 = √(k/m) is an angular, or circular,
frequency. So when ω0 6= 0 the motion of the system is periodic with period T = 2π/ω0 and
frequency ν0 = ω0 /(2π).
Proof of Theorem 2.5.2: Newton’s second law of motion says that mass times acceleration
of the body m y 00 (t) must be equal to the sum of all forces acting on the body, hence
m y 00 (t) = Fg + Fs0 + Fs (t),
where Fs (t) = −k y(t) is the force done by the spring due to the extra displacement y.
Since the first two terms on the right hand side above cancel out, Fg + Fs0 = 0, the body
displacement from the equilibrium position, y(t), must be solution of the differential equation
m y 00 (t) + k y(t) = 0.
which is Eq. (2.5.2). In § 2.3 we have seen how to solve this type of differential equations.
The characteristic polynomial is p(r) = m r² + k, which has complex roots r± = ±ω0 i,
where we introduced the natural frequency of the system,
ω0 = √(k/m) .
The reason for this name is the calculations done in § 2.3, where we found that a real-valued
expression for the general solution to Eq. (2.5.2) is given by
ygen (t) = c+ cos(ω0 t) + c- sin(ω0 t).
This means that the body attached to the spring oscillates around the equilibrium position
y = 0 with period T = 2π/ω0 , hence frequency ν0 = ω0 /(2π). There is an equivalent way to
express the general solution above given by
ygen (t) = A cos(ω0 t − φ).
These two expressions for ygen are equivalent because of the trigonometric identity
A cos(ω0 t − φ) = A cos(ω0 t) cos(φ) + A sin(ω0 t) sin(φ),
which holds for all A, φ, and ω0 t. Then, it is not difficult to see that
c+ = A cos(φ) ,   c- = A sin(φ)   ⇔   A = √( c+² + c-² ) ,   φ = arctan( c-/c+ ) .
Since both expressions for the general solution are equivalent, we use the second one, in
terms of the amplitude and phase-shift. The initial conditions y(0) = y0 and y 0 (0) = y1
determine the constants A and φ. Indeed,
y0 = y(0) = A cos(φ) ,   y1 = y'(0) = A ω0 sin(φ)   ⇒   A = √( y0² + y1²/ω0² ) ,   φ = arctan( y1/(ω0 y0) ) .
This establishes the Theorem.
Example 2.5.1: Find the movement of a 50 gr mass attached to a spring moving in air
with initial conditions y(0) = 4 cm and y 0 (0) = 40 cm/s. The spring is such that a 30 gr
mass stretches it 6 cm. Approximate the acceleration of gravity by 1000 cm/s2 .
Solution: Theorem 2.5.2 says that the equation satisfied by the displacement y is given by
my 00 + ky = 0.
In order to solve this equation we need to find the spring constant, k, which by Theorem 2.5.1
is given by k = mg/∆l. In our case, when a mass of m = 30 gr is attached to the spring, it
stretches ∆l = 6 cm, so we get,
k = (30)(1000)/6   ⇒   k = 5000 gr/s² .
Knowing the spring constant k we can now describe the movement of the body with mass
m = 50 gr. The solution of the differential equation above is obtained as usual, first find the
roots of the characteristic polynomial
m r² + k = 0   ⇒   r± = ±ω0 i ,   ω0 = √(k/m) = √(5000/50)   ⇒   ω0 = 10 s^{−1} .
We write down the general solution in terms of the amplitude A and phase-shift φ, that is,
y(t) = A cos(ω0 t − φ) = A cos(10 t − φ).
To accommodate the initial conditions we need the function y'(t) = −A ω0 sin(ω0 t − φ). The
initial conditions determine the amplitude and phase-shift, as follows,
4 = y(0) = A cos(φ) ,   40 = y'(0) = −10 A sin(−φ)   ⇒   A = √(16 + 16) ,   φ = arctan( 40/((10)(4)) ) .
We obtain that A = 4√2 and tan(φ) = 1. The latter equation implies that either φ = π/4 or
φ = −3π/4, for φ ∈ (−π, π]. If we pick the second value, φ = −3π/4, this would imply that
y(0) < 0 and y'(0) < 0, which is not true in our case. So we must pick the value φ = π/4.
We then conclude:
y(t) = 4√2 cos( 10 t − π/4 ).
C
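Remark: The amplitude-phase computation in Example 2.5.1 can be sketched in a few lines of Python (not part of the original text). Using atan2 instead of arctan picks the quadrant of φ automatically, which replaces the discussion above about choosing between π/4 and −3π/4.

import math

m, k = 50.0, 5000.0                     # data of Example 2.5.1 (gr and gr/s^2)
y0, y1 = 4.0, 40.0                      # initial position (cm) and velocity (cm/s)

w0  = math.sqrt(k/m)                    # natural frequency, 10 s^-1
A   = math.sqrt(y0**2 + (y1/w0)**2)     # amplitude, 4*sqrt(2)
phi = math.atan2(y1/w0, y0)             # phase-shift, pi/4

y = lambda t: A*math.cos(w0*t - phi)
print(w0, A, phi, y(0.0))               # y(0) recovers 4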
2.5.3. Damped Mechanical Oscillations. Suppose now that the body in the spring-plus-
body system is a thin square sheet of metal. If the main surface of the sheet is perpendicular
to the direction of motion, then the air dragged by the sheet during the spring oscillations will
be significant enough to slow down the spring oscillations in an appreciable time. One can
find out that the friction force done by the air opposes the movement and it is proportional
to the velocity of the body, that is, Fd = −d y 0 (t). We call such force a damping force, where
d > 0 is the damping constant, and systems having such force damped systems. We now
describe the spring-plus-body system in the case that there is a non-zero damping force.
Theorem 2.5.3.
(a) The vertical displacement y, as shown in Fig. 2.5.2, of a spring-plus-body sys-
tem with spring constant k > 0, body mass m > 0, and damping constant d > 0, is
described by the solutions of
m y''(t) + d y'(t) + k y(t) = 0. (2.5.3)
(b) The roots of the characteristic polynomial of Eq. (2.5.3) are r± = −ωd ± √(ωd² − ω0²),
with damping frequency ωd = d/(2m) and natural frequency ω0 = √(k/m).
(c) The solutions to Eq. (2.5.3) fall into one of the following cases:
(i) A system with ωd > ω0 is called overdamped, with general solution to Eq. (2.5.3)
ygen (t) = c+ er+ t + c- er- t .
(ii) A system with ωd = ω0 is called critically damped, with general solution to Eq. (2.5.3)
ygen (t) = c+ e−ωd t + c- t e−ωd t .
(iii) A system with ωd < ω0 is called underdamped, with general solution to Eq. (2.5.3)
ygen (t) = A e−ωd t cos(βt − φ),
where β = √(ω0² − ωd²) is the system frequency. The case ωd = 0 is called undamped.
(d) There is a unique solution to Eq. (2.5.3) with initial conditions y(0) = y0 and y'(0) = y1.
Remark: In the case the damping constant vanishes we recover Theorem 2.5.2.
Proof of Theorem 2.5.3: Newton’s second law of motion says that mass times acceler-
ation of the body m y 00 (t) must be equal to the sum of all forces acting on the body. In the
case that we take into account the air dragging force we have
m y 00 (t) = Fg + Fs0 + Fs (t) + Fd (t),
where Fs (t) = −k y(t) as in Theorem 2.5.2, and Fd (t) = −d y 0 (t) is the air-body dragging
force. Since the first two terms on the right hand side above cancel out, Fg + Fs0 = 0,
as mentioned in Theorem 2.5.1, the body displacement from the equilibrium position, y(t),
must be solution of the differential equation
m y''(t) + d y'(t) + k y(t) = 0,
which is Eq. (2.5.3). In § 2.3 we have seen how to solve this type of differential equations.
The characteristic polynomial is p(r) = m r² + d r + k, which has roots
r± = ( 1/(2m) )( −d ± √(d² − 4mk) ) = −d/(2m) ± √( (d/(2m))² − k/m )   ⇒   r± = −ωd ± √(ωd² − ω0²) ,
where ωd = d/(2m) and ω0 = √(k/m). In § 2.3 we found that the general solution of a differential
equation with a characteristic polynomial having roots as above can be divided into three
groups. For the case r+ ≠ r- real valued, we obtain case (ci), for the case r+ = r- we
obtain case (cii). Finally, we said that the general solution for the case of two complex roots
r± = α + βi was given by
ygen (t) = eαt c+ cos(βt) + c- sin(βt) .
In our case α = −ωd and β = √(ω0² − ωd²). We now rewrite the second factor on the right-hand
side above in terms of an amplitude and a phase shift,
ygen (t) = A e−ωd t cos(βt − φ).
The main result from § 2.3 says that the initial value problem in Theorem 2.5.3 has a unique
solution for each of the three cases above. This establishes the Theorem.
Example 2.5.2: Find the movement of a 5 Kg mass attached to a spring with constant
k = 5 Kg/s², moving in a medium with damping constant d = 5 Kg/s, with initial
conditions y(0) = √3 and y'(0) = 0.
Solution: By Theorem 2.5.3 the differential equation for this system is my 00 + dy 0 + ky = 0,
with m = 5, k = 5, d = 5. The roots of the characteristic polynomial are
r± = −ωd ± √(ωd² − ω0²) ,   ωd = d/(2m) = 1/2 ,   ω0 = √(k/m) = 1 ,
that is,
r± = −1/2 ± √(1/4 − 1) = −1/2 ± i √3/2 .
This means our system has underdamped oscillations. Following Theorem 2.5.3 part (ciii),
our solution must be given by
y(t) = A e^{−t/2} cos( (√3/2) t − φ ) .
We only need to introduce the initial conditions into the expression for y to find out the
amplitude A and phase-shift φ. In order to do that we first compute the derivative,
y'(t) = −(1/2) A e^{−t/2} cos( (√3/2) t − φ ) − (√3/2) A e^{−t/2} sin( (√3/2) t − φ ) .
The initial conditions in the example imply,
√3 = y(0) = A cos(φ) ,   0 = y'(0) = −(1/2) A cos(φ) + (√3/2) A sin(φ) .
The second equation above allows us to compute the phase-shift, since
tan(φ) = 1/√3   ⇒   φ = π/6 ,  or  φ = π/6 − π = −5π/6 .
If φ = −5π/6, then y(0) < 0, which is not our case. Hence we must choose φ = π/6. With
that phase-shift, the amplitude is given by
√3 = A cos(π/6) = A (√3/2)   ⇒   A = 2.
We conclude: y(t) = 2 e^{−t/2} cos( (√3/2) t − π/6 ). C
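Remark: The classification in Theorem 2.5.3 depends only on the comparison of ωd = d/(2m) with ω0 = √(k/m). A small Python sketch (not part of the original text; the function name is hypothetical) that classifies the system and returns the characteristic roots:

import math

def classify_damping(m, d, k):
    wd, w0 = d/(2*m), math.sqrt(k/m)
    if wd > w0:
        s = math.sqrt(wd**2 - w0**2)
        return "overdamped", (-wd + s, -wd - s)
    if wd == w0:
        return "critically damped", (-wd, -wd)
    beta = math.sqrt(w0**2 - wd**2)      # system frequency
    return "underdamped", (complex(-wd, beta), complex(-wd, -beta))

# Example 2.5.2: m = 5, d = 5, k = 5  ->  underdamped, roots -1/2 +- i sqrt(3)/2
print(classify_damping(5.0, 5.0, 5.0))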
2.5.4. Electrical Oscillations. We describe the electric current flowing through an electric
circuit consisting of a resistor, a capacitor, and an inductor connected in series as shown in
Fig. 12. A current can start when a magnet is moved near the inductor. If the circuit has low
resistance, the current will keep flowing through the inductor between the capacitor plates,
endlessly. There is no need of a power source to keep the current flowing. The presence of
a resistance transforms the current energy into heat, damping the current oscillation.
Furthermore, the results in Theorem 2.5.3 parts (c), (d), hold with the roots of the charac-
teristic polynomial r± = −ωd ± √(ωd² − ω0²), and with the damping frequency ωd = R/(2L) and
natural frequency ω0 = √( 1/(LC) ).
Proof of Theorem 2.5.4: Compute the derivative on both sides in Eq. (2.5.4),
L I'' + R I' + (1/C) I = 0,
and divide by L,
I''(t) + 2 ( R/(2L) ) I'(t) + ( 1/(LC) ) I(t) = 0.
Introduce ωd = R/(2L) and ω0 = 1/√(LC), then Kirchhoff’s law can be expressed as the second
order, homogeneous, constant coefficients, differential equation
I 00 + 2ωd I 0 + ω02 I = 0.
The rest of the proof follows the one of Theorem 2.5.3. This establishes the Theorem.
Example 2.5.3: Find real-valued fundamental solutions to I 00 + 2ωd I 0 + ω02 I = 0, where
ωd = R/(2L), ω02 = 1/(LC), in the cases (a), (b) below.
Solution: The roots of the characteristic polynomial, p(r) = r2 + 2ωd r + ω02 , are given by
r± = (1/2)( −2ωd ± √(4ωd² − 4ω0²) )   ⇒   r± = −ωd ± √(ωd² − ω0²) .
2
Remark: When the circuit has no resistance, the current oscillates without dissipation.
Case (b): R < √(4L/C). This implies
R² < 4L/C   ⇔   R²/(4L²) < 1/(LC)   ⇔   ωd² < ω0² .
Therefore, the characteristic polynomial has complex roots r± = −ωd ± i√(ω0² − ωd²), hence
the fundamental solutions are
I1(t) = e^{−ωd t} cos(β t) ,   I2(t) = e^{−ωd t} sin(β t) .
[Figure: Underdamped oscillations of the currents I1 and I2, decaying inside the envelope ±e^{−ωd t}.]
2.5.5. Exercises.
2.5.1.- . 2.5.2.- .
The first differential equations were solved around the end of the seventeenth century and the
beginning of the eighteenth century. We studied a few of these equations in § 1.1-1.4 and the
constant coefficients equations in Chapter 2. By the middle of the eighteenth century people
realized that the methods we learned in these first sections had reached a dead end. One reason
was the lack of functions to write the solutions of differential equations. The elementary
functions we use in calculus, such as polynomials, quotient of polynomials, trigonometric
functions, exponentials, and logarithms, were simply not enough. People even started to
think of differential equations as sources to find new functions. It was a matter of little time
before mathematicians started to use power series expansions to find solutions of differential
equations. Convergent power series define functions far more general than the elementary
functions from calculus.
In § 3.1 we study the simplest case, when the power series is centered at a regular point
of the equation. The coefficients of the equation are analytic functions at regular points, in
particular continuous. In § 3.2 we study the Euler equidimensional equation. The coefficients
of an Euler equation diverge at a particular point in a very specific way. No power series
are needed to find solutions in this case. In § 3.3 we solve equations with regular singular
points. The equation coefficients diverge at regular singular points in a way similar to
the coefficients in an Euler equation. We will find solutions to these equations using the
solutions to an Euler equation and power series centered precisely at the regular singular
points of the equation.
[Figure: The first four Legendre polynomials P0, P1, P2, P3 plotted on the interval −1 ≤ x ≤ 1.]
3.1.1. Regular Points. We now look for solutions to second order linear homogeneous
differential equations having variable coefficients. Recall we solved the constant coefficient
case in Chapter 2. We have seen that the solutions to constant coefficient equations can
be written in terms of elementary functions such as quotient of polynomials, trigonometric
functions, exponentials, and logarithms. For example, the equation
y 00 + y = 0
has the fundamental solutions y1 (x) = cos(x) and y2 (x) = sin(x). But the equation
x y 00 + y 0 + x y = 0
cannot be solved in terms of elementary functions, that is in terms of quotients of poly-
nomials, trigonometric functions, exponentials and logarithms. Except for equations with
constant coefficient and equations with variable coefficient that can be transformed into
constant coefficient by a change of variable, no other second order linear equation can be
solved in terms of elementary functions. Still, we are interested in finding solutions to vari-
able coefficient equations. Mainly because these equations appear in the description of so
many physical systems.
We have said that power series define more general functions than the elementary func-
tions mentioned above. So we look for solutions using power series. In this section we center
the power series at a regular point of the equation.
Definition 3.1.1. A point x0 ∈ R is called a regular point of the equation
y 00 + p(x) y 0 + q(x) y = 0, (3.1.1)
iff p, q are analytic functions at x0 . Otherwise x0 is called a singular point of the equation.
Remark: Near a regular point x0 the coefficients p and q in the differential equation above
can be written in terms of power series centered at x0 ,
p(x) = p0 + p1 (x − x0) + p2 (x − x0)² + · · · = Σ_{n=0}^∞ pn (x − x0)^n ,
q(x) = q0 + q1 (x − x0) + q2 (x − x0)² + · · · = Σ_{n=0}^∞ qn (x − x0)^n ,
3.1.2. The Power Series Method. The differential equation in (3.1.1) is a particular
case of the equations studied in § 2.1, and the existence result in Theorem 2.1.2 applies to
Eq. (3.1.1). This Theorem was known to Lazarus Fuchs, who in 1866 added the following: If
the coefficient functions p and q are analytic on a domain, so is the solution on that domain.
Fuchs went ahead and studied the case where the coefficients p and q have singular points,
which we study in § 3.3. The result for analytic coefficients is summarized below.
Theorem 3.1.2. If the functions p, q are analytic on an open interval (x0 − ρ, x0 + ρ) ⊂ R,
then the differential equation
y 00 + p(x) y 0 + q(x) y = 0,
has two independent solutions, y1 , y2 , which are analytic on the same interval.
Remark: A complete proof of this theorem can be found in [2], Page 169. See also [10],
§ 29. We present the first steps of the proof and we leave the convergence issues to the latter
references. The proof we present is based on power series expansions for the coefficients p,
q, and the solution y. This is not the proof given by Fuchs in 1866.
Proof of Theorem 3.1.2: Since the coefficient functions p and q are analytic functions on
(x0 − ρ, x0 + ρ), where ρ > 0, they can be written as power series centered at x0 ,
p(x) = Σ_{n=0}^∞ pn (x − x0)^n ,   q(x) = Σ_{n=0}^∞ qn (x − x0)^n .
We look for solutions that can also be written as power series expansions centered at x0 ,
y(x) = Σ_{n=0}^∞ an (x − x0)^n .
n=0
We start computing the first derivatives of the function y,
y'(x) = Σ_{n=0}^∞ n an (x − x0)^{n−1}   ⇒   y'(x) = Σ_{n=1}^∞ n an (x − x0)^{n−1} ,
where in the second expression we started the sum at n = 1, since the term with n = 0
vanishes. Relabel the sum with m = n − 1, so when n = 1 we have that m = 0, and
n = m + 1. Therefore, we get
y'(x) = Σ_{m=0}^∞ (m + 1) a_{m+1} (x − x0)^m .
We finally rename the summation index back to n,
y'(x) = Σ_{n=0}^∞ (n + 1) a_{n+1} (x − x0)^n . (3.1.2)
From now on we do these steps at once, and the notation n − 1 = m → n means
y'(x) = Σ_{n=1}^∞ n an (x − x0)^{n−1} = Σ_{n=0}^∞ (n + 1) a_{n+1} (x − x0)^n .
Therefore, the differential equation y 00 + p(x) y 0 + q(x) y = 0 has now the form
Σ_{n=0}^∞ [ (n + 2)(n + 1) a_{n+2} + Σ_{k=0}^n ( (k + 1) a_{k+1} p_{n−k} + ak q_{n−k} ) ] (x − x0)^n = 0.
So we obtain a recurrence relation for the coefficients an ,
(n + 2)(n + 1) a_{n+2} + Σ_{k=0}^n ( (k + 1) a_{k+1} p_{n−k} + ak q_{n−k} ) = 0,
for n = 0, 1, 2, · · · . Equivalently,
a_{n+2} = − ( 1/((n + 2)(n + 1)) ) Σ_{k=0}^n ( (k + 1) a_{k+1} p_{n−k} + ak q_{n−k} ) . (3.1.3)
We have obtained an expression for a(n+2) in terms of the previous coefficients a(n+1) , · · · , a0
and the coefficients of the function p and q. If we choose arbitrary values for the first two
coefficients a0 and a1 , the the recurrence relation in (3.1.3) define the remaining coefficients
a2 , a3 , · · · in terms of a0 and a1 . The coefficients an chosen in such a way guarantee that
the function y defined in (3.1.2) satisfies the differential equation.
In order to finish the proof of Theorem 3.1.2 we need to show that the power series
for y defined by the recurrence relation actually converges on a nonempty domain, and
furthermore that this domain is the same where p and q are analytic. This part of the
proof is too complicated for us. The interested reader can find the rest of the proof in [2],
Page 169. See also [10], § 29.
It is important to understand the main ideas in the proof above, because we will follow
these ideas to find power series solutions to differential equations. So we now summarize
the main steps in the proof above:
(a) Write a power series expansion of the solution centered at a regular point x0,
    y(x) = Σ_{n=0}^∞ an (x − x0)^n .
(b) Introduce the power series expansion above into the differential equation and find a
recurrence relation among the coefficients an .
(c) Solve the recurrence relation in terms of free coefficients.
(d) If possible, add up the resulting power series for the solutions y1 , y2 .
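The recurrence relation (3.1.3) behind steps (a)-(c) is also easy to evaluate numerically. The Python sketch below (not part of the original text; the function name is hypothetical) takes the Taylor coefficients p_n, q_n of p and q at x0 and the free data a0, a1, and returns the coefficients a_n of the series solution; it is checked against y'' + y = 0, whose even solution is cos x.

def series_solution_coefficients(p, q, a0, a1, N):
    # p[n] = p_n and q[n] = q_n, padded with zeros up to index N
    a = [a0, a1]
    for n in range(N - 1):
        s = sum((k + 1)*a[k + 1]*p[n - k] + a[k]*q[n - k] for k in range(n + 1))
        a.append(-s/((n + 2)*(n + 1)))
    return a

# y'' + y = 0: p = 0, q = 1; a0 = 1, a1 = 0 gives 1, 0, -1/2, 0, 1/24, ... (cos x)
N = 8
print(series_solution_coefficients([0.0]*(N + 1), [1.0] + [0.0]*N, 1.0, 0.0, N))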
We follow these steps in the examples below to find solutions to several differential equa-
tions. We start with a first order constant coefficient equation, and then we continue with
a second order constant coefficient equation. The last two examples consider variable coef-
ficient equations.
Example 3.1.2: Find a power series solution y around the point x0 = 0 of the equation
y 0 + c y = 0, c ∈ R.
Solution: We already know every solution to this equation. This is a first order, linear,
differential equation, so using the method of integrating factor we find that the solution is
y(x) = a0 e−c x , a0 ∈ R.
We are now interested in obtaining such solution with the power series method. Although
this is not a second order equation, the power series method still works in this example.
Propose a solution of the form
y = Σ_{n=0}^∞ an x^n   ⇒   y' = Σ_{n=1}^∞ n an x^{n−1} .
We can start the sum in y' at n = 0 or n = 1. We choose n = 1, since it is more convenient
later on. Introduce the expressions above into the differential equation,
Σ_{n=1}^∞ n an x^{n−1} + c Σ_{n=0}^∞ an x^n = 0.
Relabel the first sum above so that the functions xn−1 and xn in the first and second sum
have the same label. One way is the following,
Σ_{n=0}^∞ (n + 1) a_{n+1} x^n + c Σ_{n=0}^∞ an x^n = 0.
We can now write down both sums into one single sum,
Σ_{n=0}^∞ [ (n + 1) a_{n+1} + c an ] x^n = 0.
Since the function on the left-hand side must be zero for every x ∈ R, we conclude that
every coefficient that multiplies xn must vanish, that is,
(n + 1) a_{n+1} + c an = 0 ,   n ≥ 0.
The last equation is called a recurrence relation among the coefficients an . The solution of
this relation can be found by writing down the first few cases and then guessing the general
Example 3.1.3: Find a power series solution y(x) around the point x0 = 0 of the equation
y 00 + y = 0.
Solution: We know that the solution can be found computing the roots of the characteristic
polynomial r2 + 1 = 0, which gives us the solutions
y(x) = a0 cos(x) + a1 sin(x).
We now recover this solution using the power series,
y = Σ_{n=0}^∞ an x^n   ⇒   y' = Σ_{n=1}^∞ n an x^{n−1}   ⇒   y'' = Σ_{n=2}^∞ n(n − 1) an x^{n−2} .
Introduce the expressions above into the differential equation, which involves only the func-
tion and its second derivative,
Σ_{n=2}^∞ n(n − 1) an x^{n−2} + Σ_{n=0}^∞ an x^n = 0.
Relabel the first sum above, so that both sums have the same factor xn . One way is,
Σ_{n=0}^∞ (n + 2)(n + 1) a_{n+2} x^n + Σ_{n=0}^∞ an x^n = 0.
Now we can write both sums using one single sum as follows,
Σ_{n=0}^∞ [ (n + 2)(n + 1) a_{n+2} + an ] x^n = 0   ⇒   (n + 2)(n + 1) a_{n+2} + an = 0 ,   n ≥ 0.
The last equation is the recurrence relation. The solution of this relation can again be found
by writing down the first few cases, and we start with even values of n, that is,
n = 0 :   (2)(1) a2 = −a0   ⇒   a2 = −(1/2!) a0 ,
n = 2 :   (4)(3) a4 = −a2   ⇒   a4 = (1/4!) a0 ,
n = 4 :   (6)(5) a6 = −a4   ⇒   a6 = −(1/6!) a0 .
One can check that the even coefficients a2k can be written as
a_{2k} = ( (−1)^k/(2k)! ) a0 .
The coefficients an for the odd values of n can be found in the same way, that is,
n = 1 :   (3)(2) a3 = −a1   ⇒   a3 = −(1/3!) a1 ,
n = 3 :   (5)(4) a5 = −a3   ⇒   a5 = (1/5!) a1 ,
n = 5 :   (7)(6) a7 = −a5   ⇒   a7 = −(1/7!) a1 .
One can check that the odd coefficients a2k+1 can be written as
a_{2k+1} = ( (−1)^k/(2k + 1)! ) a1 .
Split the sum in the expression for y into even and odd sums. We have the expression for
the even and odd coefficients. Therefore, the solution of the differential equation is given by
y(x) = a0 Σ_{k=0}^∞ ( (−1)^k/(2k)! ) x^{2k} + a1 Σ_{k=0}^∞ ( (−1)^k/(2k + 1)! ) x^{2k+1} .
One can check that these are precisely the power series representations of the cosine and
sine functions, respectively,
y(x) = a0 cos(x) + a1 sin(x).
C
Example 3.1.4: Find the first four terms of the power series expansion around the point
x0 = 1 of each fundamental solution to the differential equation
y 00 − x y 0 − y = 0.
Solution: This is a differential equation we cannot solve with the methods of previous
sections. This is a second order, variable coefficients equation. We use the power series
method, so we look for solutions of the form
y = Σ_{n=0}^∞ an (x − 1)^n   ⇒   y' = Σ_{n=1}^∞ n an (x − 1)^{n−1}   ⇒   y'' = Σ_{n=2}^∞ n(n − 1) an (x − 1)^{n−2} .
We start working in the middle term in the differential equation. Since the power series is
centered at x0 = 1, it is convenient to re-write this term as x y 0 = [(x − 1) + 1] y 0 , that is,
x y' = Σ_{n=1}^∞ n an x (x − 1)^{n−1}
     = Σ_{n=1}^∞ n an [ (x − 1) + 1 ] (x − 1)^{n−1}
     = Σ_{n=1}^∞ n an (x − 1)^n + Σ_{n=1}^∞ n an (x − 1)^{n−1} . (3.1.4)
n=1 n=1
As usual by now, the first sum on the right-hand side of Eq. (3.1.4) can start at n = 0, since
we are only adding a zero term to the sum, that is,
Σ_{n=1}^∞ n an (x − 1)^n = Σ_{n=0}^∞ n an (x − 1)^n ;
so both sums in Eq. (3.1.4) have the same factors (x − 1)n . We obtain the expression
x y' = Σ_{n=0}^∞ n an (x − 1)^n + Σ_{n=0}^∞ (n + 1) a_{n+1} (x − 1)^n
     = Σ_{n=0}^∞ [ n an + (n + 1) a_{n+1} ] (x − 1)^n . (3.1.5)
n=0
If we use Eqs. (3.1.5)-(3.1.6) in the differential equation, together with the expression for y,
the differential equation can be written as follows
Σ_{n=0}^∞ (n + 2)(n + 1) a_{n+2} (x − 1)^n − Σ_{n=0}^∞ [ n an + (n + 1) a_{n+1} ] (x − 1)^n − Σ_{n=0}^∞ an (x − 1)^n = 0.
We can now put all the terms above into a single sum,
Σ_{n=0}^∞ [ (n + 2)(n + 1) a_{n+2} − (n + 1) a_{n+1} − n an − an ] (x − 1)^n = 0.
This expression provides the recurrence relation for the coefficients an with n ≥ 0, that is,
(n + 2)(n + 1) a_{n+2} − (n + 1) a_{n+1} − (n + 1) an = 0 ,
that is,
(n + 1) [ (n + 2) a_{n+2} − a_{n+1} − an ] = 0 ,
Example 3.1.5: Find the first three terms of the power series expansion around the point
x0 = 2 of each fundamental solution to the differential equation
y 00 − x y = 0.
We now relabel the first sum on the right-hand side of Eq. (3.1.8) in the following way,
Σ_{n=0}^∞ an (x − 2)^{n+1} = Σ_{n=1}^∞ a_{n−1} (x − 2)^n . (3.1.9)
We can solve this recurrence relation for the first four coefficients,
n = 0 :   a2 − a0 = 0   ⇒   a2 = a0 ,
n = 1 :   (3)(2) a3 − 2a1 − a0 = 0   ⇒   a3 = a0/6 + a1/3 ,
n = 2 :   (4)(3) a4 − 2a2 − a1 = 0   ⇒   a4 = a0/6 + a1/12 .
Therefore, the first terms in the power series expression for the solution y of the differential
equation are given by
y = a0 + a1 (x − 2) + a0 (x − 2)² + ( a0/6 + a1/3 )(x − 2)³ + ( a0/6 + a1/12 )(x − 2)⁴ + · · ·
which can be rewritten as
y = a0 [ 1 + (x − 2)² + (1/6)(x − 2)³ + (1/6)(x − 2)⁴ + · · · ]
  + a1 [ (x − 2) + (1/3)(x − 2)³ + (1/12)(x − 2)⁴ + · · · ] .
So the first three terms on each fundamental solution are given by
y1 = 1 + (x − 2)² + (1/6)(x − 2)³ ,
y2 = (x − 2) + (1/3)(x − 2)³ + (1/12)(x − 2)⁴ .
C
3.1.3. The Legendre Equation. The Legendre equation appears when one solves the
Laplace equation in spherical coordinates. The Laplace equation describes several phenom-
ena, such as the static electric potential near a charged body, or the gravitational potential
of a planet or star. When the Laplace equation describes a situation having spherical sym-
metry it makes sense to use spherical coordinates to solve the equation. It is in that case
that the Legendre equation appears for a variable related to the polar angle in the spherical
coordinate system. See Jackson’s classic book on electrodynamics [8], § 3.1, for a derivation
of the Legendre equation from the Laplace equation.
Example 3.1.6: Find all solutions of the Legendre equation
(1 − x2 ) y 00 − 2x y 0 + l(l + 1) y = 0,
where l is any real constant, using power series centered at x0 = 0.
Solution: We start writing the equation in the form of Theorem 3.1.2,
y'' − ( 2x/(1 − x²) ) y' + ( l(l + 1)/(1 − x²) ) y = 0.
It is clear that the coefficient functions
p(x) = −2x/(1 − x²) ,   q(x) = l(l + 1)/(1 − x²) ,
are analytic on the interval |x| < 1, which is centered at x0 = 0. Theorem 3.1.2 says that
there are two solutions linearly independent and analytic on that interval. So we write the
solution as a power series centered at x0 = 0,
y(x) = Σ_{n=0}^∞ an x^n ,
Then we get,
y'' = Σ_{n=0}^∞ (n + 2)(n + 1) a_{n+2} x^n ,
−x² y'' = Σ_{n=0}^∞ −(n − 1) n an x^n ,
−2x y' = Σ_{n=0}^∞ −2 n an x^n ,
l(l + 1) y = Σ_{n=0}^∞ l(l + 1) an x^n .
The Legendre equation says that the addition of the four equations above must be zero,
Σ_{n=0}^∞ [ (n + 2)(n + 1) a_{n+2} − (n − 1) n an − 2 n an + l(l + 1) an ] x^n = 0.
Therefore, every term in that sum must vanish,
(n + 2)(n + 1) a_{n+2} − (n − 1) n an − 2 n an + l(l + 1) an = 0 ,   n ≥ 0.
This is the recurrence relation for the coefficients an . After a few manipulations the recur-
rence relation becomes
a_{n+2} = − ( (l − n)(l + n + 1)/((n + 2)(n + 1)) ) an ,   n ≥ 0.
By giving values to n we obtain,
a2 = − ( l(l + 1)/2! ) a0 ,   a3 = − ( (l − 1)(l + 2)/3! ) a1 .
Since a4 is related to a2 and a5 is related to a3 , we get,
a4 = − ( (l − 2)(l + 3)/((3)(4)) ) a2   ⇒   a4 = ( (l − 2)l(l + 1)(l + 3)/4! ) a0 ,
a5 = − ( (l − 3)(l + 4)/((4)(5)) ) a3   ⇒   a5 = ( (l − 3)(l − 1)(l + 2)(l + 4)/5! ) a1 .
If one keeps solving the coefficients an in terms of either a0 or a1 , one gets the expression,
y(x) = a0 [ 1 − ( l(l + 1)/2! ) x² + ( (l − 2)l(l + 1)(l + 3)/4! ) x⁴ + · · · ]
     + a1 [ x − ( (l − 1)(l + 2)/3! ) x³ + ( (l − 3)(l − 1)(l + 2)(l + 4)/5! ) x⁵ + · · · ] .
Hence, the fundamental solutions are
y1(x) = 1 − ( l(l + 1)/2! ) x² + ( (l − 2)l(l + 1)(l + 3)/4! ) x⁴ + · · ·
y2(x) = x − ( (l − 1)(l + 2)/3! ) x³ + ( (l − 3)(l − 1)(l + 2)(l + 4)/5! ) x⁵ + · · · .
The ratio test provides the interval where the series above converge. For the function y1 we
get, replacing n by 2n,
| a_{2n+2} x^{2n+2} / ( a_{2n} x^{2n} ) | = ( (l − 2n)(l + 2n + 1)/((2n + 1)(2n + 2)) ) |x²| → |x|²   as n → ∞.
A similar result holds for y2 . So both series converge on the interval defined by |x| < 1. C
Remark: The functions y1, y2 are called Legendre functions. For a noninteger value of
the constant l these functions cannot be written in terms of elementary functions. But
when l is an integer, one of these series terminates and becomes a polynomial. The case of
l being a nonnegative integer is especially relevant in physics. For l even the function y1
becomes a polynomial while y2 remains an infinite series. For l odd the function y2 becomes
a polynomial while y1 remains an infinite series. For example, for l = 0, 1, 2, 3 we get,
l = 0, y1 (x) = 1,
l = 1, y2 (x) = x,
l = 2, y1 (x) = 1 − 3x2 ,
5
l = 3, y2 (x) = x − x3 .
3
The Legendre polynomials are proportional to these polynomials. The proportionality fac-
tor for each polynomial is chosen so that the Legendre polynomials have unit lengh in a
particular chosen inner product. We just say here that the first four polynomials are
l = 0, y1 (x) = 1, P0 = y1 , P0 (x) = 1,
l = 1, y2 (x) = x, P1 = y2 , P1 (x) = x,
1 1
y1 (x) = 1 − 3x2 , P2 (x) = 3x2 − 1 ,
l = 2, P2 = − y1 ,
2 2
5 3 1
y2 (x) = x − x3 , P3 (x) = 5x3 − 3x .
l = 3, P3 = − y1 ,
3 2 2
These polynomials, Pn , are called Legendre polynomials. The graph of the first four Le-
gendre polynomials is given in Fig. 14.
Figure 14. The graphs of the first four Legendre polynomials P_0, P_1, P_2, P_3 on the interval [−1, 1].
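The recurrence relation for the coefficients is easy to evaluate numerically. The following short Python sketch (not part of the original notes) computes the a_n from a_{n+2} = −(l − n)(l + n + 1) a_n /((n + 2)(n + 1)), taking a_0 = a_1 = 1 for convenience, and shows that for l = 3 the odd series terminates, recovering the polynomial x − (5/3) x^3 found above.

```python
# Sketch: evaluate the Legendre recurrence relation numerically.
# The choices a_0 = 1 and a_1 = 1 are arbitrary, just to generate both families.

def legendre_coefficients(l, n_max, a0=1.0, a1=1.0):
    """Return [a_0, ..., a_{n_max}] from a_{n+2} = -(l-n)(l+n+1)/((n+2)(n+1)) a_n."""
    a = [0.0] * (n_max + 1)
    a[0], a[1] = a0, a1
    for n in range(n_max - 1):
        a[n + 2] = -(l - n) * (l + n + 1) / ((n + 2) * (n + 1)) * a[n]
    return a

coeffs = legendre_coefficients(l=3, n_max=8)
print(coeffs[1], coeffs[3], coeffs[5])   # 1.0, -1.666..., 0.0: the odd series stops,
                                         # recovering y_2(x) = x - (5/3) x^3 for l = 3.
```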
3.1.4. Exercises.
3.1.1.- . 3.1.2.- .
3.2.1. The Roots of the Indicial Polynomial. We study the differential equation
y'' + p(x) y' + q(x) y = 0,
where the coefficients p and q are given by
p(x) = \frac{p_0}{(x − x_0)},  q(x) = \frac{q_0}{(x − x_0)^2},
with p_0 and q_0 constants. The point x_0 is a singular point of the equation; the functions p
and q are not analytic on an open set including x_0. But the singularity is of a good type,
the type for which we know how to find solutions. We start with a small rewriting of the differential
equation we are going to study.
Definition 3.2.1. The Euler equidimensional equation for the unknown y with singular
point at x_0 ∈ R is given by the equation below, where p_0 and q_0 are constants,
(x − x_0)^2 y'' + p_0 (x − x_0) y' + q_0 y = 0.
Remarks:
(a) This equation is also called Cauchy equidimensional equation, Cauchy equation, Cauchy-
Euler equation, or simply Euler equation. As George Simmons says in [10], “Euler
studies were so extensive that many mathematicians tried to avoid confusion by naming
subjects after the person who first studied them after Euler.”
(b) The equation is called equidimensional because if the variable x has any physical dimensions,
then the terms with (x − x_0)^n \frac{d^n}{dx^n}, for any nonnegative integer n, are actually
dimensionless.
(c) The exponential functions y(x) = e^{rx} are not solutions of the Euler equation. Just
introduce such a function into the equation, and it is simple to show that there is no
constant r such that the exponential is a solution.
(d) As we mentioned above, the point x_0 ∈ R is a singular point of the equation.
(e) The particular case x_0 = 0 is
x^2 y'' + p_0 x y' + q_0 y = 0.
We now summarize what is known about solutions of the Euler equation.
Remark: We have restricted to a domain with x > x_0. Similar results hold for x < x_0. In
fact one can prove the following: If a solution y has the value y(x − x_0) at x − x_0 > 0, then
the function ỹ defined as ỹ(x − x_0) = y(−(x − x_0)), for x − x_0 < 0, is a solution of Eq. (3.2.1)
for x − x_0 < 0. For this reason the solution for x ≠ x_0 is sometimes written in the literature,
see [3] § 5.4, as follows,
y_gen(x) = c_+ |x − x_0|^{r_+} + c_- |x − x_0|^{r_-},  r_+ ≠ r_-,
y_gen(x) = c_+ |x − x_0|^{r_0} + c_- |x − x_0|^{r_0} \ln|x − x_0|,  r_+ = r_- = r_0.
However, when solving an initial value problem, we need to pick the domain that contains
the initial data point x1 . This domain will be a subinterval in either (−∞, x0 ) or (x0 , ∞).
The proof of this theorem closely follows the ideas to find all solutions of second order linear
equations with constant coefficients, Theorem 2.3.2, in § 2.3. We first found fundamental
solutions to the differential equation
y'' + a_1 y' + a_0 y = 0,
and then we recalled that Theorem 2.1.7 says that any other solution is a linear combination
of any fundamental solutions pair. To get fundamental solutions we looked for exponential
functions y(x) = e^{rx}, where the constant r was a root of the characteristic polynomial
r^2 + a_1 r + a_0 = 0.
When this polynomial had two different roots, r_+ ≠ r_-, we got the fundamental solutions
y_+(x) = e^{r_+ x},  y_-(x) = e^{r_- x}.
When the root was repeated, r_+ = r_- = r_0, we used the reduction of order method to get the
fundamental solutions
y_+(x) = e^{r_0 x},  y_-(x) = x e^{r_0 x}.
Well, the proof of Theorem 3.2.2 closely follows this proof, replacing the exponential functions
by power functions.
Proof of Theorem 3.2.2: For simplicity we consider the case x_0 = 0. The general case
x_0 ≠ 0 follows from the case x_0 = 0 replacing x by (x − x_0). So, consider the equation
x^2 y'' + p_0 x y' + q_0 y = 0,  x > 0.
We look for solutions of the form y(x) = x^r, because power functions have the property that
y' = r x^{r−1}  ⇒  x y' = r x^r.
Example 3.2.1: Find the general solution of the Euler equation below for x > 0,
x^2 y'' + 4x y' + 2 y = 0.
Solution: We look for solutions of the form y(x) = x^r, which implies that
x y'(x) = r x^r,  x^2 y''(x) = r(r − 1) x^r,
therefore, introducing this function y into the differential equation we obtain
[r(r − 1) + 4r + 2] x^r = 0  ⇔  r(r − 1) + 4r + 2 = 0.
Remark: Neither of the fundamental solutions in the example above is analytic on any interval
including x = 0. Both solutions diverge at x = 0.
Example 3.2.2: Find the general solution of the Euler equation below for x > 0,
x^2 y'' − 3x y' + 4 y = 0.
Solution: We look for solutions of the form y(x) = x^r, then the constant r must be a solution
of the Euler characteristic polynomial
r(r − 1) − 3r + 4 = 0  ⇔  r^2 − 4r + 4 = 0  ⇒  r_+ = r_- = 2.
Therefore, the general solution of the Euler equation in this case is given by
y_gen(x) = c_+ x^2 + c_- x^2 ln(x).
C
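As a quick independent check of this repeated-root solution (not part of the original notes), a computer algebra system recovers the same general solution; the sketch below assumes SymPy is available.

```python
# Sketch: verify the general solution of x^2 y'' - 3x y' + 4 y = 0 with SymPy's dsolve.
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')
euler_eq = sp.Eq(x**2 * y(x).diff(x, 2) - 3 * x * y(x).diff(x) + 4 * y(x), 0)
print(sp.dsolve(euler_eq, y(x)))
# Expected output (up to the names of the arbitrary constants):
#     Eq(y(x), x**2*(C1 + C2*log(x)))
```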
Solution: We look for solutions of the form y(x) = xr , which implies that
x y'(x) = r x^r,  x^2 y''(x) = r(r − 1) x^r,
therefore, introducing this function y into the differential equation we obtain
[r(r − 1) − 3r + 13] x^r = 0  ⇔  r(r − 1) − 3r + 13 = 0.
3.2.2. Real Solutions for Complex Roots. We study in more detail the solutions to the
Euler equation in the case that the indicial polynomial has complex roots. Since these roots
have the form
r_± = −\frac{(p_0 − 1)}{2} ± \frac{1}{2}\sqrt{(p_0 − 1)^2 − 4 q_0},
the roots are complex-valued in the case (p_0 − 1)^2 − 4 q_0 < 0. We use the notation
r_± = α ± iβ,  with  α = −\frac{(p_0 − 1)}{2},  β = \sqrt{q_0 − \frac{(p_0 − 1)^2}{4}}.
The fundamental solutions in Theorem 3.2.2 are the complex-valued functions
ỹ_+(x) = x^{(α+iβ)},  ỹ_-(x) = x^{(α−iβ)}.
The general solution constructed from these solutions is
y_gen(x) = c̃_+ x^{(α+iβ)} + c̃_- x^{(α−iβ)},  c̃_+, c̃_- ∈ C.
This formula for the general solution includes real valued and complex valued solutions.
But it is not so simple to single out the real valued solutions. Knowing the real valued
solutions could be important in physical applications. If a physical system is described by a
differential equation with real coefficients, more often than not one is interested in finding
real valued solutions. For that reason we now provide a new set of fundamental solutions
that are real valued. Using real-valued fundamental solutions it is simple to separate all real
valued solutions from the complex valued ones.
Theorem 3.2.3 (Real Valued Fundamental Solutions). If the differential equation
(x − x_0)^2 y'' + p_0 (x − x_0) y' + q_0 y = 0,  x > x_0,  (3.2.3)
where p_0, q_0, x_0 are real constants, has indicial polynomial with complex roots r_± = α ± iβ
and complex valued fundamental solutions for x > x_0,
ỹ_+(x) = (x − x_0)^{(α+iβ)},  ỹ_-(x) = (x − x_0)^{(α−iβ)},
then the equation also has real valued fundamental solutions for x > x_0 given by
y_+(x) = (x − x_0)^α cos(β ln(x − x_0)),  y_-(x) = (x − x_0)^α sin(β ln(x − x_0)).
Proof of Theorem 3.2.3: For simplicity consider the case x0 = 0. Take the solutions
ỹ_+(x) = x^{(α+iβ)},  ỹ_-(x) = x^{(α−iβ)}.
Rewrite the power function as follows,
ỹ_+(x) = x^{(α+iβ)} = x^α x^{iβ} = x^α e^{ln(x^{iβ})} = x^α e^{iβ ln(x)}  ⇒  ỹ_+(x) = x^α e^{iβ ln(x)}.
A similar calculation yields
ỹ_-(x) = x^α e^{−iβ ln(x)}.
Recall now Euler's formula for complex exponentials, e^{iθ} = cos(θ) + i sin(θ); then we get
ỹ_+(x) = x^α [cos(β ln(x)) + i sin(β ln(x))],  ỹ_-(x) = x^α [cos(β ln(x)) − i sin(β ln(x))].
Since ỹ_+ and ỹ_- are solutions to Eq. (3.2.3), so are the functions
y_+(x) = \frac{1}{2} [ỹ_+(x) + ỹ_-(x)],  y_-(x) = \frac{1}{2i} [ỹ_+(x) − ỹ_-(x)].
It is not difficult to see that these functions are
y_+(x) = x^α cos(β ln(x)),  y_-(x) = x^α sin(β ln(x)).
To prove the case x_0 ≠ 0, just replace x by (x − x_0) in all steps above. This establishes
the Theorem.
Example 3.2.4: Find a real-valued general solution of the Euler equation below for x > 0,
x^2 y'' − 3x y' + 13 y = 0.
3.2.3. Transformation to Constant Coefficients. Theorem 3.2.2 shows that the power functions
y(x) = x^{r_±}, where r_± are the roots of the indicial polynomial, are solutions to the Euler
equidimensional equation
x^2 y'' + p_0 x y' + q_0 y = 0,  x > 0.
The proof of this theorem is to verify that the power functions y(x) = x^{r_±} solve the differ-
ential equation. How did we know we had to try with power functions? One answer could
be, this is a guess, a lucky one. Another answer could be that the Euler equation can be
transformed into a constant coefficient equation by a change of variable.
Theorem 3.2.4. The function y is a solution of the Euler equidimensional equation
x^2 y'' + p_0 x y' + q_0 y = 0,  x > 0,
iff the function u(z) = y(e^z) satisfies the constant coefficients equation
ü + (p_0 − 1) u̇ + q_0 u = 0,
where y' = dy/dx and u̇ = du/dz. Furthermore, the functions y(x) = x^{r_±} are solutions of
the Euler equidimensional equation iff the constants r_± are solutions of the indicial equation
r_±^2 + (p_0 − 1) r_± + q_0 = 0.
Proof of Theorem 3.2.4: Given x > 0, introduce z(x) = ln(x), therefore x(z) = e^z.
Given a function y, introduce the function
u(z) = y(x(z))  ⇒  u(z) = y(e^z).
Then, the derivatives of u and y are related by the chain rule,
u̇(z) = \frac{du}{dz}(z) = \frac{dy}{dx}(x(z)) \frac{dx}{dz}(z) = y'(x(z)) \frac{d(e^z)}{dz} = y'(x(z))\, e^z,
so we obtain
u̇(z) = x y'(x),
where we have denoted u̇ = du/dz. The relation for the second derivatives is
ü(z) = \frac{d}{dx}\big[ x y'(x) \big] \frac{dx}{dz}(z) = \big[ x y''(x) + y'(x) \big] \frac{d(e^z)}{dz} = \big[ x y''(x) + y'(x) \big] x,
so we obtain
ü(z) = x^2 y''(x) + x y'(x).
Combining the equations for u̇ and ü we get
x^2 y'' = ü − u̇,  x y' = u̇.
3.2.4. Exercises.
3.2.1.- . 3.2.2.- .
Example 3.3.1: Show that the singular point of Euler equation below is regular singular,
(x − 3)^2 y'' + 2(x − 3) y' + 4 y = 0.
Solution: Divide the equation by (x − 3)^2, so we get the equation in the standard form
y'' + \frac{2}{(x − 3)} y' + \frac{4}{(x − 3)^2} y = 0.
The functions p and q are given by
p(x) = \frac{2}{(x − 3)},  q(x) = \frac{4}{(x − 3)^2}.
The functions p̃_3 and q̃_3 for the point x_0 = 3 are constants,
p̃_3(x) = (x − 3) \frac{2}{(x − 3)} = 2,  q̃_3(x) = (x − 3)^2 \frac{4}{(x − 3)^2} = 4.
Therefore they are analytic. This shows that x0 = 3 is regular singular. C
Example 3.3.3: Find the regular singular points of the differential equation
(x + 2)^2 (x − 1) y'' + 3(x − 1) y' + 2 y = 0.
Remark: It is fairly simple to find the regular singular points of an equation. Take the
equation in our last example, written in standard form,
y'' + \frac{3}{(x + 2)^2} y' + \frac{2}{(x + 2)^2 (x − 1)} y = 0.
The functions p and q are given by
p(x) = \frac{3}{(x + 2)^2},  q(x) = \frac{2}{(x + 2)^2 (x − 1)}.
The singular points are given by the zeros of the denominators, that is, x_0 = −2 and x_1 = 1.
The point x_0 is not regular singular because the function p diverges at x_0 = −2 faster than
\frac{1}{(x + 2)}. The point x_1 = 1 is regular singular because the function p is regular at x_1 = 1 and
the function q diverges at x_1 = 1 slower than \frac{1}{(x − 1)^2}.
3.3.2. The Frobenius Method. We now assume that the differential equation
y 00 + p(x) y 0 + q(x) y = 0, (3.3.1)
has a regular singular point. We want to find solutions to this equation that are defined
arbitrary close to that regular singular point. Recall that a point x0 is a regular singular
point of the equation above iff the functions (x − x0 ) p and (x − x0 )2 q are analytic at x0 . A
function is analytic at a point iff it has a convergent power series expansion in a neighborhood
of that point. In our case this means that near a regular singular point holds
(x − x_0) p(x) = \sum_{n=0}^{∞} p_n (x − x_0)^n = p_0 + p_1 (x − x_0) + p_2 (x − x_0)^2 + ···
(x − x_0)^2 q(x) = \sum_{n=0}^{∞} q_n (x − x_0)^n = q_0 + q_1 (x − x_0) + q_2 (x − x_0)^2 + ···
This means that near x_0 the function p diverges at most like (x − x_0)^{−1} and the function q
diverges at most like (x − x_0)^{−2}, as can be seen from the equations
p(x) = \frac{p_0}{(x − x_0)} + p_1 + p_2 (x − x_0) + ···
q(x) = \frac{q_0}{(x − x_0)^2} + \frac{q_1}{(x − x_0)} + q_2 + ···
Therefore, for p_0 and q_0 nonzero and x close to x_0 we have the relations
p(x) ≃ \frac{p_0}{(x − x_0)},  q(x) ≃ \frac{q_0}{(x − x_0)^2},  x ≃ x_0,
where the symbol a ≃ b, with a, b ∈ R, means that |a − b| is close to zero. In other words,
for x close to a regular singular point x_0 the coefficients of Eq. (3.3.1) are close to the
coefficients of the Euler equidimensional equation
(x − x_0)^2 y_e'' + p_0 (x − x_0) y_e' + q_0 y_e = 0,
where p_0 and q_0 are the zero order terms in the power series expansions of (x − x_0) p and
(x − x_0)^2 q given above. One could expect that solutions y to Eq. (3.3.1) are close to solutions
y_e to this Euler equation. One way to put this relation in a more precise way is
y(x) = y_e(x) \sum_{n=0}^{∞} a_n (x − x_0)^n  ⇒  y(x) = y_e(x) \big[ a_0 + a_1 (x − x_0) + ··· \big].
Recalling that at least one solution to the Euler equation has the form y_e(x) = (x − x_0)^r,
where r is a root of the indicial polynomial
r(r − 1) + p_0 r + q_0 = 0,
we then expect that for x close to x_0 the solution to Eq. (3.3.1) be close to
y(x) = (x − x_0)^r \sum_{n=0}^{∞} a_n (x − x_0)^n.
This expression for the solution is usually written in a more compact way as follows,
y(x) = \sum_{n=0}^{∞} a_n (x − x_0)^{(r+n)}.
This is the main idea of the Frobenius method to find solutions to equations with regular
singular points: to look for solutions that are close to solutions of an appropriate Euler
equation. We now state two theorems that summarize a few formulas for solutions to differential
equations with regular singular points.
(b) If (r+ − r- ) = N , a nonnegative integer, then the differential equation in (3.3.2) has two
independent solutions y+ , y- of the form
y_+(x) = |x − x_0|^{r_+} \sum_{n=0}^{∞} a_n (x − x_0)^n,  with a_0 = 1,
y_-(x) = |x − x_0|^{r_-} \sum_{n=0}^{∞} b_n (x − x_0)^n + c\, y_+(x) \ln|x − x_0|,  with b_0 = 1.
The constant c is nonzero if N = 0. If N > 0, the constant c may or may not be zero.
In both cases above the series converge in the interval defined by |x − x0 | < ρ and the
differential equation is satisfied for 0 < |x − x0 | < ρ.
Remarks:
(a) The statements above are taken from Apostol’s second volume [2], Theorems 6.14, 6.15.
For a sketch of the proof see Simmons [10]. A proof can be found in [5, 7].
(b) The existence of solutions and their behavior in a neighborhood of a singular point was
first shown by Lazarus Fuchs in 1866. The construction of the solution using singular
power series expansions was first shown by Ferdinand Frobenius in 1874.
We now give a summary of the Frobenius method to find the solutions mentioned in
Theorem 3.3.2 to a differential equation having a regular singular point. For simplicity we
only show how to obtain the solution y+ .
(1) Look for a solution y of the form y(x) = \sum_{n=0}^{∞} a_n (x − x_0)^{(n+r)}.
(2) Introduce this power series expansion into the differential equation and find the indicial
equation for the exponent r. Find the larger solution of the indicial equation.
(3) Find a recurrence relation for the coefficients an .
(4) Introduce the larger root r into the recurrence relation for the coefficients an . Only
then, solve this latter recurrence relation for the coefficients an .
(5) Using this procedure we will find the solution y+ in Theorem 3.3.2.
We now show how to use these steps to find one solution of a differential equation near a
regular singular point. We show the case where the roots of the indicial polynomial differ by
an integer. We show that in this case we obtain only the solution y_+. The solution y_- does not
have the form y(x) = \sum_{n=0}^{∞} a_n (x − x_0)^{(n+r)}. Theorem 3.3.2 says that there is a logarithmic
term in the solution. We do not compute that solution here.
Example 3.3.4: Find the solution y near the regular singular point x0 = 0 of the equation
x^2 y'' − x(x + 3) y' + (x + 3) y = 0.
As one can see from Eqs. (3.3.3)-(3.3.5), the guiding principle to rewrite each term is to
have the power function x^{(n+r)} labeled in the same way on every term. For example, in
Eqs. (3.3.3)-(3.3.5) we do not have a sum involving terms with factors x^{(n+r−1)} or factors
x^{(n+r+1)}. Then, the differential equation can be written as follows,
\sum_{n=0}^{∞} (n + r)(n + r − 1) a_n x^{(n+r)} − \sum_{n=1}^{∞} (n + r − 1) a_{n−1} x^{(n+r)}
− 3 \sum_{n=0}^{∞} (n + r) a_n x^{(n+r)} + \sum_{n=1}^{∞} a_{n−1} x^{(n+r)} + 3 \sum_{n=0}^{∞} a_n x^{(n+r)} = 0.
In the equation above we need to split the sums containing terms with n ≥ 0 into the term
n = 0 and a sum containing the terms with n ≥ 1, that is,
[r(r − 1) − 3r + 3] a_0 x^r + \sum_{n=1}^{∞} \big[ (n + r)(n + r − 1) a_n − (n + r − 1) a_{n−1} − 3(n + r) a_n + a_{n−1} + 3 a_n \big] x^{(n+r)} = 0,
and this expression can be rewritten as follows,
[r(r − 1) − 3r + 3] a_0 x^r + \sum_{n=1}^{∞} \big[ [(n + r)(n + r − 1) − 3(n + r) + 3] a_n − (n + r − 1 − 1) a_{n−1} \big] x^{(n+r)} = 0,
and then,
[r(r − 1) − 3r + 3] a_0 x^r + \sum_{n=1}^{∞} \big[ [(n + r)(n + r − 1) − 3(n + r − 1)] a_n − (n + r − 2) a_{n−1} \big] x^{(n+r)} = 0,
hence,
[r(r − 1) − 3r + 3] a_0 x^r + \sum_{n=1}^{∞} \big[ (n + r − 1)(n + r − 3) a_n − (n + r − 2) a_{n−1} \big] x^{(n+r)} = 0.
The indicial equation and the recurrence relation are given by the equations
r(r − 1) − 3r + 3 = 0,  (3.3.6)
(n + r − 1)(n + r − 3) a_n − (n + r − 2) a_{n−1} = 0.  (3.3.7)
The way to solve these equations in (3.3.6)-(3.3.7) is the following: First, solve Eq. (3.3.6) for
the exponent r, which in this case has two solutions r_±; second, introduce the first solution
r_+ into the recurrence relation in Eq. (3.3.7) and solve for the coefficients a_n; the result is
a solution y_+ of the original differential equation; then introduce the second solution r_- into
Eq. (3.3.7) and solve again for the coefficients a_n; the new result is a second solution y_-. Let
us follow this procedure in the case of the equations above:
r^2 − 4r + 3 = 0  ⇒  r_± = \frac{1}{2}\big(4 ± \sqrt{16 − 12}\big)  ⇒  r_+ = 3,  r_- = 1.
Introducing the value r_+ = 3 into Eq. (3.3.7) we obtain
(n + 2) n a_n − (n + 1) a_{n−1} = 0.
One can check that the solution y_+ obtained from this recurrence relation is given by
y_+(x) = a_0 x^3 \Big[ 1 + \frac{2}{3} x + \frac{1}{4} x^2 + \frac{1}{15} x^3 + ··· \Big].
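These coefficients can be checked directly from the recurrence relation; here is a minimal Python sketch (not part of the original notes), taking a_0 = 1 and r = 3.

```python
# Sketch: coefficients for y_+ in Example 3.3.4 from (n+2) n a_n = (n+1) a_{n-1}, with r = 3.
from fractions import Fraction

a = [Fraction(1)]                        # a_0 = 1
for n in range(1, 6):
    a.append(Fraction(n + 1, n * (n + 2)) * a[n - 1])

print(a[1], a[2], a[3])                  # 2/3 1/4 1/15, as in the series above
```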
3.3.3. The Bessel Equation. We saw in § 3.1 that the Legendre equation appears when
one solves the Laplace equation in spherical coordinates. If one uses cylindrical coordinates
instead, one needs to solve the Bessel equation. Recall we mentioned that the Laplace
equation describes several phenomena, such as the static electric potential near a charged
body, or the gravitational potential of a planet or star. When the Laplace equation describes
a situation having cylindrical symmetry it makes sense to use cylindrical coordinates to solve
it. Then the Bessel equation appears for the radial variable in the cylindrical coordinate
system. See Jackson’s classic book on electrodynamics [8], § 3.7, for a derivation of the
Bessel equation from the Laplace equation.
The equation is named after Friedrich Bessel, a German astronomer from the first half
of the nineteenth century, who was the first person to calculate the distance to a star other
than our Sun. Bessel's parallax of 1838 yielded a distance of 11 light years for the star
61 Cygni. In 1844 he discovered that Sirius, the brightest star in the sky, has a traveling
companion. Nowadays such a system is called a binary star. This companion has the size
of a planet and the mass of a star, so it has a very high density, many thousand times
the density of water. This was the first dead star discovered. Bessel first obtained the
equation that now bears his name when he was studying star motions. But the equation
first appeared in Daniel Bernoulli’s studies of oscillations of a hanging chain. (Taken from
Simmons’ book [10], § 34.)
Example 3.3.5: Find all solutions y(x) = \sum_{n=0}^{∞} a_n x^{n+r}, with a_0 ≠ 0, of the Bessel equation
x^2 y'' + x y' + (x^2 − α^2) y = 0,  x > 0,
where α is any real nonnegative constant, using the Frobenius method centered at x_0 = 0.
Solution: Let us double check that x0 = 0 is a regular singular point of the equation. We
start writing the equation in the standard form,
y'' + \frac{1}{x} y' + \frac{(x^2 − α^2)}{x^2} y = 0,
so we get the functions p(x) = 1/x and q(x) = (x^2 − α^2)/x^2. It is clear that x_0 = 0 is a
singular point of the equation. Since the functions
p̃(x) = x p(x) = 1,  q̃(x) = x^2 q(x) = (x^2 − α^2)
are analytic, we conclude that x0 = 0 is a regular singular point. So it makes sense to look
for solutions of the form
y(x) = \sum_{n=0}^{∞} a_n x^{(n+r)},  x > 0.
We now compute the different terms needed to write the differential equation. We need,
x^2 y(x) = \sum_{n=0}^{∞} a_n x^{(n+r+2)}  ⇒  x^2 y(x) = \sum_{n=2}^{∞} a_{n−2} x^{(n+r)},
where we did the relabeling n + 2 = m → n. The term with the first derivative is given by
x y'(x) = \sum_{n=0}^{∞} (n + r) a_n x^{(n+r)}.
The term with the second derivative has the form
x^2 y''(x) = \sum_{n=0}^{∞} (n + r)(n + r − 1) a_n x^{(n+r)}.
Therefore, the differential equation takes the form
\sum_{n=0}^{∞} (n + r)(n + r − 1) a_n x^{(n+r)} + \sum_{n=0}^{∞} (n + r) a_n x^{(n+r)}
+ \sum_{n=2}^{∞} a_{n−2} x^{(n+r)} − α^2 \sum_{n=0}^{∞} a_n x^{(n+r)} = 0.
Group together the sums that start at n = 0,
\sum_{n=0}^{∞} \big[ (n + r)(n + r − 1) + (n + r) − α^2 \big] a_n x^{(n+r)} + \sum_{n=2}^{∞} a_{n−2} x^{(n+r)} = 0,
and cancel a few terms in the first sum,
\sum_{n=0}^{∞} \big[ (n + r)^2 − α^2 \big] a_n x^{(n+r)} + \sum_{n=2}^{∞} a_{n−2} x^{(n+r)} = 0.
Split the sum that starts at n = 0 into its first two terms plus the rest,
(r^2 − α^2) a_0 x^r + \big[ (r + 1)^2 − α^2 \big] a_1 x^{(r+1)}
+ \sum_{n=2}^{∞} \big[ (n + r)^2 − α^2 \big] a_n x^{(n+r)} + \sum_{n=2}^{∞} a_{n−2} x^{(n+r)} = 0.
The reason for this splitting is that now we can write the two sums as one,
(r^2 − α^2) a_0 x^r + \big[ (r + 1)^2 − α^2 \big] a_1 x^{(r+1)} + \sum_{n=2}^{∞} \big[ [(n + r)^2 − α^2] a_n + a_{n−2} \big] x^{(n+r)} = 0.
We then conclude that each term must vanish,
(r^2 − α^2) a_0 = 0,  \big[ (r + 1)^2 − α^2 \big] a_1 = 0,  \big[ (n + r)^2 − α^2 \big] a_n + a_{n−2} = 0,  n ≥ 2.  (3.3.8)
This is the recurrence relation for the Bessel equation. It is here where we use that we look
for solutions with a_0 ≠ 0. In this example we do not look for solutions with a_1 ≠ 0. Maybe
it is a good exercise for the reader to find such solutions. But in this example we look for
solutions with a_0 ≠ 0. This condition and the first equation above imply that
r^2 − α^2 = 0  ⇒  r_± = ±α,
and recall that α is a nonnegative but otherwise arbitrary real number. The choice r = r+
will lead to a solution yα , and the choice r = r- will lead to a solution y−α . These solutions
may or may not be linearly independent. This depends on the value of α, since r+ − r- = 2α.
One must be careful to study all possible cases.
Remark: Let us start with a very particular case. Suppose that both equations below hold,
(r^2 − α^2) = 0,  (r + 1)^2 − α^2 = 0.
These equations are the result of both a_0 ≠ 0 and a_1 ≠ 0. These equations imply
r^2 = (r + 1)^2  ⇒  2r + 1 = 0  ⇒  r = −\frac{1}{2}.
But recall that r = ±α, and α ≥ 0, hence the case a_0 ≠ 0 and a_1 ≠ 0 happens only when
α = 1/2 and we choose r_- = −α = −1/2. We leave the computation of the solution y_{−1/2} as an
exercise for the reader. But the answer is
y_{−1/2}(x) = a_0 \frac{\cos(x)}{\sqrt{x}} + a_1 \frac{\sin(x)}{\sqrt{x}}.
From now on we assume that α ≠ 1/2. This condition on α, the equation r^2 − α^2 = 0, and
the remark above imply that
(r + 1)^2 − α^2 ≠ 0.
So the second equation in the recurrence relation in (3.3.8) implies that a_1 = 0. Summarizing,
the first two equations in the recurrence relation in (3.3.8) are satisfied because
r_± = ±α,  a_1 = 0.
We only need to find the coefficients a_n, for n ≥ 2, such that the third equation in the
recurrence relation in (3.3.8) is satisfied. But we need to consider two cases, r = r_+ = α and
r = r_- = −α.
We start with the case r = r+ = α, and we get
(n^2 + 2nα) a_n + a_{n−2} = 0  ⇒  n(n + 2α) a_n = −a_{n−2}.
Since n ≥ 2 and α ≥ 0, the factor (n + 2α) never vanishes and we get
a_n = −\frac{a_{n−2}}{n(n + 2α)}.
This equation and a_1 = 0 imply that all coefficients a_{2k+1} = 0 for k ≥ 0, that is, the odd coefficients
vanish. On the other hand, the even coefficients are nonzero. The coefficient a_2 is
a_2 = −\frac{a_0}{2(2 + 2α)}  ⇒  a_2 = −\frac{a_0}{2^2 (1 + α)},
the coefficient a_4 is
a_4 = −\frac{a_2}{4(4 + 2α)} = −\frac{a_2}{2^2 (2)(2 + α)}  ⇒  a_4 = \frac{a_0}{2^4 (2)(1 + α)(2 + α)},
the coefficient a_6 is
a_6 = −\frac{a_4}{6(6 + 2α)} = −\frac{a_4}{2^2 (3)(3 + α)}  ⇒  a_6 = −\frac{a_0}{2^6 (3!)(1 + α)(2 + α)(3 + α)}.
Now it is not so hard to show that the general term a_{2k}, for k = 0, 1, 2, ···, has the form
a_{2k} = \frac{(−1)^k a_0}{2^{2k} (k!)(1 + α)(2 + α) ··· (k + α)}.
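This closed form can be compared against the recurrence relation itself; the sketch below (not part of the original notes) does so for a sample, arbitrarily chosen value of α.

```python
# Sketch: check that a_n = -a_{n-2} / (n (n + 2*alpha)) reproduces the closed form
# a_{2k} = (-1)^k a_0 / (2^{2k} k! (1+alpha)(2+alpha)...(k+alpha)).
import math

alpha = 1.7            # sample nonnegative, noninteger value (an arbitrary choice)
a0 = 1.0

# Even coefficients from the recurrence relation.
a = {0: a0}
for n in range(2, 13, 2):
    a[n] = -a[n - 2] / (n * (n + 2 * alpha))

# The same coefficients from the closed form.
for k in range(0, 7):
    prod = math.prod(j + alpha for j in range(1, k + 1))   # (1+alpha)...(k+alpha)
    closed = (-1)**k * a0 / (2**(2 * k) * math.factorial(k) * prod)
    assert abs(a[2 * k] - closed) < 1e-12
print("recurrence and closed form agree")
```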
k + 1/2, for k integer. Introducing this y−(k+1/2) into the Bessel equation one can check
that y−(k+1/2) is a solution to the Bessel equation.
Summarizing, the solution y_α of the Bessel equation is defined for every nonnegative
real number α, and y_{−α} is defined for every nonnegative real number α except for
nonnegative integers. For a given α such that both yα and y−α are defined, these func-
tions are linearly independent. That these functions cannot be proportional to each other
is simple to see, since for α > 0 the function yα is regular at the origin x = 0, while y−α
diverges.
The last case we need to study is how to find the solution y−α when α is a nonnegative
integer. We see that the expression in (3.3.10) is not defined when α is a nonnegative
integer. And we just saw that this condition on α is a particular case of the condition in
Theorem 3.3.2 that (r_+ − r_-) is a nonnegative integer. Theorem 3.3.2 gives us what is
the expression for a second solution, y−α linearly independent of yα , in the case that α is a
nonnegative integer. This expression is
y_{−α}(x) = y_α(x) \ln(x) + x^{−α} \sum_{n=0}^{∞} c_n x^n.
If we put this expression into the Bessel equation, one can find a recurrence relation for the
coefficients cn . This is a long calculation, and the final result is
y_{−α}(x) = y_α(x) \ln(x) − \frac{1}{2} \Big(\frac{x}{2}\Big)^{−α} \sum_{n=0}^{α−1} \frac{(α − n − 1)!}{n!} \Big(\frac{x}{2}\Big)^{2n}
− \frac{1}{2} \Big(\frac{x}{2}\Big)^{α} \sum_{n=0}^{∞} (−1)^n \frac{(h_n + h_{n+α})}{n!\, (n + α)!} \Big(\frac{x}{2}\Big)^{2n},
with h_0 = 0, h_n = 1 + \frac{1}{2} + ··· + \frac{1}{n} for n ≥ 1, and α a nonnegative integer.  C
3.3.4. Exercises.
3.3.1.- . 3.3.2.- .
Notes on Chapter 3
Sometimes solutions to a differential equation cannot be written in terms of previously
known functions. When that happens we say that the solutions to the differential
equation define a new type of function. How can we work with, let alone write down, a
new function, a function that cannot be written in terms of the functions we already know?
It is the differential equation that defines the function. So the function's properties must be
obtained from the differential equation itself. A way to compute the function's values must
come from the differential equation as well. The few paragraphs that follow try to convey
that this procedure is not as artificial as it may sound.
Differential Equations to Define Functions. We have seen in § 3.3 that the solutions
of the Bessel equation for α 6= 1/2 cannot be written in terms of simple functions, such as
quotients of polynomials, trigonometric functions, logarithms and exponentials. We used
power series including negative powers to write solutions to this equation. To study prop-
erties of these solutions one needs to use either the power series expansions or the equation
itself. This type of study on the solutions of the Bessel equation is too complicated for these
notes, but the interested reader can see [14].
We want to give an idea how this type of study can be carried out. We choose a differential
equation that is simpler to study than the Bessel equation. We study two solutions, C and S,
of this particular differential equation and we will show, using only the differential equation,
that these solutions have all the properties that the cosine and sine functions have. So
we will conclude that these solutions are in fact C(x) = cos(x) and S(x) = sin(x). This
example is taken from Hassani’s textbook [6], example 13.6.1, page 368.
Example 3.3.6: Let the function C be the unique solution of the initial value problem
C'' + C = 0,  C(0) = 1,  C'(0) = 0,
and let the function S be the unique solution of the initial value problem
S'' + S = 0,  S(0) = 0,  S'(0) = 1.
Use the differential equation to study these functions.
Solution:
(a) We start showing that these solutions C and S are linearly independent. We only need
to compute their Wronskian at x = 0.
W(0) = C(0) S'(0) − C'(0) S(0) = 1 ≠ 0.
Therefore the functions C and S are linearly independent.
(b) We now show that the function S is odd and the function C is even. The function
Ĉ(x) = C(−x) satisfies the initial value problem
Ĉ'' + Ĉ = C'' + C = 0,  Ĉ(0) = C(0) = 1,  Ĉ'(0) = −C'(0) = 0.
This is the same initial value problem satisfied by the function C. The uniqueness of
solutions to this initial value problem implies that C(−x) = C(x) for all x ∈ R, hence the
function C is even. The function Ŝ(x) = S(−x) satisfies the initial value problem
Ŝ'' + Ŝ = S'' + S = 0,  Ŝ(0) = S(0) = 0,  Ŝ'(0) = −S'(0) = −1.
This is the same initial value problem satisfied by the function −S. The uniqueness of
solutions to this initial value problem implies that S(−x) = −S(x) for all x ∈ R, hence
the function S is odd.
(c) Next we find a differential relation between the functions C and S. Notice that the
function −C' is the unique solution of the initial value problem
(−C')'' + (−C') = 0,  −C'(0) = 0,  (−C')'(0) = C(0) = 1.
This is precisely the same initial value problem satisfied by the function S. The uniqueness
of solutions to these initial value problems implies that −C' = S, that is, for all x ∈ R holds
C'(x) = −S(x).
Take one more derivative in this relation and use the differential equation for C,
S'(x) = −C''(x) = C(x)  ⇒  S'(x) = C(x).
(d) Let us now recall that Abel's Theorem says that the Wronskian of two solutions to a
second order differential equation y'' + p(x) y' + q(x) y = 0 satisfies the differential equation
W' + p(x) W = 0. In our case the function p = 0, so the Wronskian is a constant function.
If we compute the Wronskian of the functions C and S and we use the differential relations
found in (c) we get
W(x) = C(x) S'(x) − C'(x) S(x) = C^2(x) + S^2(x).
This Wronskian must be a constant function, but at x = 0 it takes the value W(0) = C^2(0) +
S^2(0) = 1. We therefore conclude that for all x ∈ R holds
C^2(x) + S^2(x) = 1.
(e) We end by computing power series expansions of these functions C and S, so we have a
way to compute their values. We start with the function C. The initial conditions say
C(0) = 1,  C'(0) = 0.
The differential equation at x = 0 and the first initial condition say that C''(0) = −C(0) =
−1. The derivative of the differential equation at x = 0 and the second initial condition say
that C'''(0) = −C'(0) = 0. If we keep taking derivatives of the differential equation we get
C''(0) = −1,  C'''(0) = 0,  C^{(4)}(0) = 1,
and in general,
C^{(n)}(0) = 0 if n is odd,  C^{(n)}(0) = (−1)^k if n = 2k,  where k = 0, 1, 2, ···.
So we obtain the Taylor series expansion
C(x) = \sum_{k=0}^{∞} (−1)^k \frac{x^{2k}}{(2k)!},
which is the power series expansion of the cosine function. A similar calculation yields
S(x) = \sum_{k=0}^{∞} (−1)^k \frac{x^{2k+1}}{(2k + 1)!},
which is the power series expansion of the sine function. Notice that we have obtained these
expansions using only the differential equation and its derivatives at x = 0 together with
the initial conditions. The ratio test shows that these power series converge for all x ∈ R.
These power series expansions also say that the function S is odd and C is even. C
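These properties can also be observed numerically. The sketch below (not part of the original notes) integrates both initial value problems with SciPy and checks the identity C^2 + S^2 = 1 and the agreement with cosine and sine.

```python
# Sketch: solve C'' + C = 0 and S'' + S = 0 numerically and compare with cos, sin.
# Assumes NumPy and SciPy are available.
import numpy as np
from scipy.integrate import solve_ivp

def oscillator(t, y):
    # y = (value, derivative); the equation is y'' = -y.
    return [y[1], -y[0]]

t = np.linspace(0.0, 10.0, 201)
C = solve_ivp(oscillator, (0.0, 10.0), [1.0, 0.0], t_eval=t, rtol=1e-10, atol=1e-12)
S = solve_ivp(oscillator, (0.0, 10.0), [0.0, 1.0], t_eval=t, rtol=1e-10, atol=1e-12)

print(np.max(np.abs(C.y[0] - np.cos(t))))          # close to zero
print(np.max(np.abs(S.y[0] - np.sin(t))))          # close to zero
print(np.max(np.abs(C.y[0]**2 + S.y[0]**2 - 1)))   # the identity C^2 + S^2 = 1
```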
The Euler number e is defined as the solution of the equation ln(e) = 1. The inverse of the
natural logarithm, ln−1 , is defined in the usual way,
ln−1 (y) = x ⇔ ln(x) = y, x ∈ (0, ∞), y ∈ (−∞, ∞).
Since the natural logarithm satisfies that ln(x1 x2 ) = ln(x1 ) + ln(x2 ), the inverse function
satisfies the related identity ln^{−1}(y_1 + y_2) = ln^{−1}(y_1) ln^{−1}(y_2). To see this identity compute
ln^{−1}(y_1 + y_2) = ln^{−1}(ln(x_1) + ln(x_2)) = ln^{−1}(ln(x_1 x_2)) = x_1 x_2 = ln^{−1}(y_1) ln^{−1}(y_2).
This identity and the fact that ln^{−1}(1) = e imply that for any positive integer n holds
ln^{−1}(n) = ln^{−1}(1 + ··· + 1) = ln^{−1}(1) ··· ln^{−1}(1) = e ··· e = e^n,
where each sum and each product above contains n terms.
This relation says that ln^{−1} is the exponential function when restricted to positive integers.
This suggests a way to generalize the exponential function from positive integers to real
numbers, e^y = ln^{−1}(y), for y real. Hence the name exponential for the inverse of the natural
logarithm. And this is how calculus brought us the logarithm and the exponential functions.
Finally, notice that by the definition of the natural logarithm, its derivative is ln'(x) = 1/x.
But there is a formula relating the derivative of a function f and its inverse f^{−1},
(f^{−1})'(y) = \frac{1}{f'(f^{−1}(y))}.
The Laplace Transform is a transformation, meaning that it changes a function into a new
function. Actually, it is a linear transformation, because it converts a linear combination of
functions into a linear combination of the transformed functions. Even more interesting, the
Laplace Transform converts derivatives into multiplications. These two properties make the
Laplace Transform very useful to solve linear differential equations with constant coefficients.
The Laplace Transform converts such differential equation for an unknown function into an
algebraic equation for the transformed function. Usually it is easy to solve the algebraic
equation for the transformed function. Then one converts the transformed function back
into the original function. This function is the solution of the differential equation.
Solving a differential equation using a Laplace Transform is radically different from all
the methods we have used so far. This method, as we will use it here, is relatively new. The
Laplace Transform we define here was first used in 1910, but its use grew rapidly after 1920,
especially to solve differential equations. Transformations like the Laplace Transform were
known much earlier. Pierre Simon de Laplace used a similar transformation in his studies of
probability theory, published in 1812, but analogous transformations were used even earlier
by Euler around 1737.
Figure. The graphs of the functions δ_1(t), δ_2(t), and δ_3(t) in the sequence δ_n.
Solution: Following the definition above we need to first compute a definite integral and
then take a limit. So, from the definition,
I = \int_0^{∞} e^{−at}\, dt = \lim_{N→∞} \int_0^N e^{−at}\, dt.
In the case a = 0 the integrand equals one, so the integral over [0, N] equals N, which diverges as N → ∞;
therefore for a = 0 the improper integral I does not exist. When a ≠ 0 we have
I = \lim_{N→∞} −\frac{1}{a} \big( e^{−aN} − 1 \big).
In the case a < 0, that is a = −|a|, we have that
\lim_{N→∞} e^{|a|N} = ∞  ⇒  I = ∞,
therefore for a < 0 the improper integral I does not exist. In the case a > 0 we know that
\lim_{N→∞} e^{−aN} = 0  ⇒  \int_0^{∞} e^{−at}\, dt = \frac{1}{a},  a > 0.  (4.1.1)
C
4.1.2. Definition and Table. The Laplace Transform is a transformation, meaning that
it converts a function into a new function. We have seen transformations earlier in these
notes. In Chapter 2 we used the transformation
L[y(t)] = y''(t) + a_1 y'(t) + a_0 y(t),
so that a second order linear differential equation with source f could be written as L[y] = f .
There are simpler transformations, for example the differentiation operation itself,
D[f (t)] = f 0 (t).
Not all transformations involve differentiation. There are integral transformations, for example
integration itself,
I[f(t)] = \int_0^x f(t)\, dt.
where s ∈ R is any real number such that the integral above converges.
Remark: An alternative notation for the Laplace Transform of a function f is
F (s) = L[f (t)], s ∈ DF ⊂ R,
where the emphasis is in the result of the Laplace Transform, which is a function F on the
variable s. We have denoted the domain of the transformed function as DF ⊂ R, defined
as the set of all real numbers such that the integral in (4.1.2) converges. In these notes
we use both notations L[f (t)] and F , depending on what we want to emphasize, either the
transformation itself or the result of the transformation. We will also use the notation L[f ],
whenever the independent variable t is not important in that context.
In this Section we study properties of the transformation L. We will show in Theo-
rem 4.1.4 that this transformation is linear, and in Theorem 4.2.1 that this transformation
is one-to-one and so invertible on the appropriate domain. But before that, we show how
to compute a Laplace Transform, how to compute the improper integral and interpret the
result. We will see in a few examples below that this improper integral in Eq. (4.1.2) does
not converge for every s ∈ R. The interval where the Laplace Transform of a function f is
defined depends on the particular function f . We will see that L[eat ] with a ∈ R is defined
for s > a, but L[sin(at)] is defined for s > 0. Let us compute a few Laplace Transforms.
Example 4.1.2: Compute L[1].
Solution: The function f (t) = 1 is a simple enough function to find its Laplace transform.
Following the definition,
L[1] = \int_0^{∞} e^{−st}\, dt.
But we have computed this improper integral in Example 4.1.1. Just replace a = s in that
example. The result is that L[1] is not defined for s ≤ 0, while for s > 0 we have
L[1] = \frac{1}{s},  s > 0.  C
Here again we can use the result in Example 4.1.1, just replace a in that Example by (s − a).
The result is that L[e^{at}] is not defined for s ≤ a, while for s > a we have
L[e^{at}] = \frac{1}{(s − a)},  s > a.  C
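These entries can be reproduced with a computer algebra system. The following sketch (not part of the original notes) assumes SymPy is available; the expected outputs in the comments match the formulas above.

```python
# Sketch: reproduce a few Laplace Transforms with SymPy.
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

print(sp.laplace_transform(sp.S(1), t, s, noconds=True))        # 1/s
print(sp.laplace_transform(sp.exp(a*t), t, s, noconds=True))    # 1/(s - a), valid for s > a
print(sp.laplace_transform(sp.sin(a*t), t, s, noconds=True))    # a/(a**2 + s**2), valid for s > 0
```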
One can check that the limit N → ∞ on the right-hand side above does not exist for s ≤ 0,
so L[sin(at)] does not exist for s ≤ 0. In the case s > 0 it is not difficult to see that
\Big( \frac{s^2 + a^2}{s^2} \Big) \int_0^{∞} e^{−st} \sin(at)\, dt = \frac{a}{s^2},
which is equivalent to
L[sin(at)] = \frac{a}{s^2 + a^2},  s > 0.  C
In Table 2 we present a short list of Laplace Transforms. They can be computed in the
same way we computed the Laplace Transforms in the examples above.
4.1.3. Main Properties. Since we are more or less confident on how to compute a Laplace
Transform, we can start asking deeper questions. For example, what type of functions have
a Laplace Transform? It turns out to be a large class of functions: those that are piecewise
continuous on [0, ∞) and bounded by an exponential. This last property is particularly
important and we give it a name.
Definition 4.1.2. A function f on [0, ∞) is of exponential order s_0, where s_0 is any real
number, iff there exist positive constants k, T such that
|f(t)| ≤ k e^{s_0 t}  for all t ≥ T.  (4.1.4)
Remarks:
(a) When the precise value of the constant s0 is not important we will say that f is of
exponential order.
(b) An example of a function that is not of exponential order is f(t) = e^{t^2}.
This definition helps to describe a set of functions having Laplace Transform. Piecewise
continuous functions on [0, ∞) of exponential order have Laplace Transforms.
Theorem 4.1.3 (Sufficient Conditions). If the function f on [0, ∞) is piecewise continuous
and of exponential order s_0, then L[f] exists for all s > s_0 and there exists a
positive constant k such that the following bound holds
|L[f]| ≤ \frac{k}{s − s_0},  s > s_0.
Proof of Theorem 4.1.3: From the definition of the Laplace Transform we know that
L[f] = \lim_{N→∞} \int_0^N e^{−st} f(t)\, dt.
The definite integral on the interval [0, N] exists for every N > 0 since f is piecewise
continuous on that interval, no matter how large N is. We only need to check whether the
integral converges as N → ∞. This is the case for functions of exponential order, because
\Big| \int_0^N e^{−st} f(t)\, dt \Big| ≤ \int_0^N e^{−st} |f(t)|\, dt ≤ \int_0^N e^{−st} k e^{s_0 t}\, dt = k \int_0^N e^{−(s − s_0)t}\, dt.
The Laplace Transform can be used to solve differential equations. The Laplace Trans-
form converts a differential equation into an algebraic equation. This is so because the
Laplace Transform converts derivatives into multiplications. Here is the precise result.
Theorem 4.1.5 (Derivative). If a function f is continuously differentiable on [0, ∞) and
of exponential order s_0, then L[f'] exists for s > s_0 and
L[f'] = s L[f] − f(0),  s > s_0.  (4.1.5)
We start computing the definite integral above. Since f' is continuous on [0, ∞), that definite
integral exists for all positive N, and we can integrate by parts,
\int_0^N e^{−st} f'(t)\, dt = \Big[ e^{−st} f(t) \Big]_0^N − \int_0^N (−s) e^{−st} f(t)\, dt
= e^{−sN} f(N) − f(0) + s \int_0^N e^{−st} f(t)\, dt.
Let us use one more time that f is of exponential order s_0. This means that there exist
positive constants k and T such that |f(t)| ≤ k e^{s_0 t} for t ≥ T. Therefore,
\lim_{N→∞} |e^{−sN} f(N)| ≤ \lim_{N→∞} k e^{−sN} e^{s_0 N} = \lim_{N→∞} k e^{−(s − s_0)N} = 0,  s > s_0.
These two results together imply that L[f'] exists and satisfies
L[f'] = s L[f] − f(0),  s > s_0.
This establishes the Theorem.
Example 4.1.7: Verify the result in Theorem 4.1.5 for the function f (t) = cos(bt).
Solution: We need to compute the left hand side and the right hand side of Eq. (4.1.5)
and verify that we get the same result. We start with the left hand side,
L[f'] = L[−b sin(bt)] = −b L[sin(bt)] = −b \frac{b}{s^2 + b^2}  ⇒  L[f'] = −\frac{b^2}{s^2 + b^2}.
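The identity L[f'] = s L[f] − f(0) for f(t) = cos(bt) can also be checked symbolically; a minimal sketch, assuming SymPy is available:

```python
# Sketch: check L[f'] = s L[f] - f(0) for f(t) = cos(b t) with SymPy.
import sympy as sp

t, s, b = sp.symbols('t s b', positive=True)
f = sp.cos(b*t)

lhs = sp.laplace_transform(f.diff(t), t, s, noconds=True)            # L[f']
rhs = s * sp.laplace_transform(f, t, s, noconds=True) - f.subs(t, 0)
print(sp.simplify(lhs - rhs))                                         # 0
```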
4.1.4. Exercises.
4.1.1.- . 4.1.2.- .
Solution: In § 1.1 we saw one way to solve this problem, using the integrating factor
method. One can check that the solution is y(t) = 3e−2t . We now use the Laplace Transform.
First, compute the Laplace Transform of the differential equation,
L[y 0 + 2y] = L[0] = 0.
Theorem 4.1.4 says the Laplace Transform is a linear operation, that is,
L[y 0 ] + 2 L[y] = 0.
Theorem 4.1.5 relates derivatives and multiplications, as follows,
\big[ s L[y] − y(0) \big] + 2 L[y] = 0  ⇒  (s + 2) L[y] = y(0).
In the last equation we have been able to transform the original differential equation for y
into an algebraic equation for L[y]. We can solve for the unknown L[y] as follows,
L[y] = \frac{y(0)}{s + 2}  ⇒  L[y] = \frac{3}{s + 2},
where in the last step we introduced the initial condition y(0) = 3. From the list of Laplace
Transforms given in Sect. 4.1 we know that
L[e^{at}] = \frac{1}{s − a}  ⇒  \frac{3}{s + 2} = 3 L[e^{−2t}]  ⇒  \frac{3}{s + 2} = L[3 e^{−2t}].
So we arrive at L[y(t)] = L[3 e−2t ]. Here is where we need one more property of the Laplace
Transform. We show right after this example that
L[y(t)] = L[3 e−2t ] ⇒ y(t) = 3 e−2t .
This property is called One-to-One. Hence the only solution is y(t) = 3 e−2t . C
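The same inversion can be carried out symbolically; the sketch below (not part of the original notes) assumes SymPy. SymPy typically includes a Heaviside factor in its inverse transforms, which equals 1 on the domain t > 0 where the initial value problem is posed.

```python
# Sketch: invert L[y] = 3/(s + 2) with SymPy.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
Y = 3 / (s + 2)
print(sp.inverse_laplace_transform(Y, s, t))
# Typically prints 3*exp(-2*t)*Heaviside(t), i.e. y(t) = 3 e^{-2t} for t > 0.
```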
4.2.2. One-to-One Property. Let us repeat the method we used to solve the differential
equation in Example 4.2.1. We first computed the Laplace Transform of the whole differ-
ential equation. Then we use the linearity of the Laplace Transform, Theorem 4.1.4, and
the property that derivatives are converted into multiplications, Theorem 4.1.5, to trans-
form the differential equation into an algebraic equation for L[y]. We solved the algebraic
equation and we got an expression of the form
L[y(t)] = H(s),
where we have collected all the terms that come from the Laplace transformed differential
equation into the function H. We then used a Laplace Transform table to find a function h
such that
L[h(t)] = H(s).
We arrived at an equation of the form
L[y(t)] = L[h(t)].
Clearly, y = h is one solution of the equation above, hence a solution to the differential
equation. We now show that there are no solutions to the equation L[y] = L[h] other than
y = h. The reason is that the Laplace Transform on continuous functions of exponential
order is a one-to-one transformation, also called injective.
Theorem 4.2.1 (One-to-One). If f , g are continuous on [0, ∞) of exponential order, then
L[f ] = L[g] ⇒ f = g.
Remarks:
(a) The result above holds for continuous functions f and g. But it can be extended to
piecewise continuous functions. In the case of piecewise continuous functions f and g
satisfying L[f] = L[g] one can prove that f = g + h, where h is a null function, meaning
that \int_0^T h(t)\, dt = 0 for all T > 0. See Churchill's textbook [4], page 14.
(b) Once we know that the Laplace Transform is a one-to-one transformation, we can define
the inverse transformation in the usual way.
Definition 4.2.2. The Inverse Laplace Transform, denoted L−1 , of a function F is
L−1 [F (s)] = f (t) ⇔ F (s) = L[f (t)].
Remarks: There is an explicit formula for the inverse Laplace Transform, which involves
an integral on the complex plane,
L^{−1}[F(s)](t) = \frac{1}{2πi} \lim_{c→∞} \int_{a−ic}^{a+ic} e^{st} F(s)\, ds.
See for example Churchill’s textbook [4], page 176. However, we do not use this formula in
these notes, since it involves integration on the complex plane.
Proof of Theorem 4.2.1: The proof is based on a clever change of variables and on
Weierstrass Approximation Theorem of continuous functions by polynomials. Before we get
to the change of variable we need to do some rewriting. Introduce the function u = f − g,
then the linearity of the Laplace Transform implies
L[u] = L[f − g] = L[f ] − L[g] = 0.
What we need to show is that the function u vanishes identically. Let us start with the
definition of the Laplace Transform,
L[u] = \int_0^{∞} e^{−st} u(t)\, dt.
We know that f and g are of exponential order, say s_0, therefore u is of exponential order
s_0, meaning that there exist positive constants k and T such that
|u(t)| < k e^{s_0 t},  t ≥ T.
Evaluate L[u] at s̃ = s_1 + n + 1, where s_1 is any real number such that s_1 > s_0, and n is any
positive integer. We get
L[u](s̃) = \int_0^{∞} e^{−(s_1 + n + 1)t} u(t)\, dt = \int_0^{∞} e^{−s_1 t} e^{−(n+1)t} u(t)\, dt.
We now do the substitution y = e^{−t}, so dy = −e^{−t} dt,
L[u](s̃) = \int_1^0 y^{s_1} y^n u(−\ln(y)) (−dy) = \int_0^1 y^{s_1} y^n u(−\ln(y))\, dy.
Introduce the function v(y) = y^{s_1} u(−\ln(y)), so the integral is
L[u](s̃) = \int_0^1 y^n v(y)\, dy.  (4.2.1)
We know that L[u] exists because u is continuous and of exponential order, so the function
v does not diverge at y = 0. To double check this, recall that t = − ln(y) → ∞ as y → 0+ ,
and u is of exponential order s0 , hence
\lim_{y→0^+} |v(y)| = \lim_{t→∞} e^{−s_1 t} |u(t)| < \lim_{t→∞} k e^{−(s_1 − s_0)t} = 0.
Our main hypothesis is that L[u] = 0 for all values of s such that L[u] is defined, in particular
s̃. By looking at Eq. (4.2.1) this means that
\int_0^1 y^n v(y)\, dy = 0,  n = 1, 2, 3, ···.
The equation above and the linearity of the integral imply that this function v is perpendicular
to every polynomial p, that is
\int_0^1 p(y) v(y)\, dy = 0,  (4.2.2)
The last term in the second equation above vanishes because of Eq. (4.2.2), therefore
\int_0^1 v^2(y)\, dy = \int_0^1 \big[ v(y) − p(y) \big] v(y)\, dy
≤ \int_0^1 |v(y) − p(y)|\, |v(y)|\, dy
≤ \max_{y∈[0,1]} |v(y)| \int_0^1 |v(y) − p(y)|\, dy.  (4.2.3)
We remark that the inequality above is true for every polynomial p. Here is where we use the
Weierstrass Approximation Theorem, which essentially says that every continuous function
on a closed interval can be approximated by a polynomial.
The proof of this theorem can be found in a real analysis textbook. Weierstrass' result
implies that, given v and ε > 0, there exists a polynomial p_ε such that the inequality
in (4.2.3) has the form
\int_0^1 v^2(y)\, dy ≤ \max_{y∈[0,1]} |v(y)| \int_0^1 |v(y) − p_ε(y)|\, dy ≤ \max_{y∈[0,1]} |v(y)|\, ε.
that is, the polynomial has two real roots. In this case we factorize the denominator,
L[y] = \frac{(s − 1)}{(s − 2)(s + 1)}.
The partial fraction decomposition of the right-hand side in the equation above is the following:
Find constants a and b such that
\frac{(s − 1)}{(s − 2)(s + 1)} = \frac{a}{s − 2} + \frac{b}{s + 1}.
A simple calculation shows
\frac{(s − 1)}{(s − 2)(s + 1)} = \frac{a}{s − 2} + \frac{b}{s + 1} = \frac{a(s + 1) + b(s − 2)}{(s − 2)(s + 1)} = \frac{s(a + b) + (a − 2b)}{(s − 2)(s + 1)}.
Hence constants a and b must be solutions of the equations
(s − 1) = s(a + b) + (a − 2b)  ⇒  a + b = 1,  a − 2b = −1.
The solution is a = \frac{1}{3} and b = \frac{2}{3}. Hence,
L[y] = \frac{1}{3} \frac{1}{(s − 2)} + \frac{2}{3} \frac{1}{(s + 1)}.
From the list of Laplace Transforms given in § 4.1, Table 2, we know that
L[e^{at}] = \frac{1}{s − a}  ⇒  \frac{1}{s − 2} = L[e^{2t}],  \frac{1}{s + 1} = L[e^{−t}].
So we arrive at the equation
L[y] = \frac{1}{3} L[e^{2t}] + \frac{2}{3} L[e^{−t}] = L\Big[ \frac{1}{3}\big( e^{2t} + 2 e^{−t} \big) \Big].
We conclude that
y(t) = \frac{1}{3}\big( e^{2t} + 2 e^{−t} \big).
C
The Partial Fraction Method is usually introduced in a second course of Calculus to inte-
grate rational functions. We need it here to use Table 2 to find Inverse Laplace Transforms.
The method applies to rational functions
R(s) = \frac{Q(s)}{P(s)},
where P, Q are polynomials and the degree of the numerator is less than the degree of the
denominator. In the example above
R(s) = \frac{(s − 1)}{(s^2 − s − 2)}.
One starts rewriting the polynomial in the denominator as a product of polynomials degree
two or one. In the example above,
R(s) = \frac{(s − 1)}{(s − 2)(s + 1)}.
One then rewrites the rational function as an addition of simpler rational functions. In the
example above,
R(s) = \frac{a}{(s − 2)} + \frac{b}{(s + 1)}.
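Such decompositions are also what computer algebra systems compute; a minimal SymPy sketch (not part of the original notes) reproduces the constants found in the example above.

```python
# Sketch: partial fraction decomposition of R(s) = (s - 1)/((s - 2)(s + 1)) with SymPy.
import sympy as sp

s = sp.symbols('s')
R = (s - 1) / ((s - 2) * (s + 1))
print(sp.apart(R, s))
# Expected: 2/(3*(s + 1)) + 1/(3*(s - 2)), matching a = 1/3 and b = 2/3 above.
```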
We now solve a few examples to recall the different partial fraction cases that can appear
when solving differential equations.
Example 4.2.3: Use the Laplace Transform to find the solution y to the initial value problem
y'' − 4y' + 4y = 0,  y(0) = 1,  y'(0) = 1.
Example 4.2.4: Use the Laplace Transform to find the solution y to the initial value problem
y'' − 4y' + 4y = 3 sin(2t),  y(0) = 1,  y'(0) = 1.
Therefore,
\frac{6}{(s − 2)^2 (s^2 + 4)} = \frac{3}{8} \frac{s}{s^2 + 4} − \frac{3}{8} \frac{1}{(s − 2)} + \frac{3}{4} \frac{1}{(s − 2)^2}.
We can rewrite this expression above in terms of the Laplace Transforms given in Table 2,
in Sect. 4.1, as follows,
\frac{6}{(s − 2)^2 (s^2 + 4)} = \frac{3}{8} L[cos(2t)] − \frac{3}{8} L[e^{2t}] + \frac{3}{4} L[t e^{2t}],
and using the linearity of the Laplace Transform,
\frac{6}{(s − 2)^2 (s^2 + 4)} = L\Big[ \frac{3}{8} cos(2t) − \frac{3}{8} e^{2t} + \frac{3}{4} t e^{2t} \Big].  (4.2.6)
Finally, introducing Eqs. (4.2.5) and (4.2.6) into Eq. (4.2.4) we obtain
L[y(t)] = L\Big[ (1 − t) e^{2t} + \frac{3}{8}(−1 + 2t) e^{2t} + \frac{3}{8} cos(2t) \Big].
Since the Laplace Transform is an invertible transformation, we conclude that
y(t) = (1 − t) e^{2t} + \frac{3}{8}(2t − 1) e^{2t} + \frac{3}{8} cos(2t).
C
4.2.4. Exercises.
4.2.1.- . 4.2.2.- .
The step function u at t = 0 and its right and left translations are plotted in Fig. 15.
Figure 15. The graph of the step function given in Eq. (4.3.1), and a right
and a left translation by a constant c > 0, respectively, of this step function.
Recall that given a function with values f (t) and a positive constant c, then f (t − c) and
f (t + c) are the function values of the right translation and the left translation, respectively,
of the original function f . In Fig. 16 we plot the graph of functions f (t) = eat , g(t) = u(t) eat
and their respective right translations by c > 0.
Figure 16. The function f(t) = e^{at}, its right translation by c > 0, the
function g(t) = u(t) e^{at}, and its right translation by c.
Right and left translations of step functions are useful to construct bump functions. A
bump function is a function with nonzero values only on a finite interval [a, b].
b(t) = u(t − a) − u(t − b)  ⇔  b(t) = 0 for t < a,  b(t) = 1 for a ≤ t < b,  b(t) = 0 for t ≥ b.  (4.3.2)
The graph of a bump function is given in Fig. 17, constructed from two step functions.
Step and bump functions are useful to construct more general piecewise continuous functions.
Figure 17. The graphs of the step functions u(t − a) and u(t − b), and of the bump function b(t) = u(t − a) − u(t − b).
Proof of Theorem 4.3.2: Consider the case c > 0. The Laplace Transform is
L[u(t − c)] = \int_0^{∞} e^{−st} u(t − c)\, dt = \int_c^{∞} e^{−st}\, dt,
where we used that the step function vanishes for t < c. Now compute the improper integral,
L[u(t − c)] = \lim_{N→∞} −\frac{1}{s}\big( e^{−Ns} − e^{−cs} \big) = \frac{e^{−cs}}{s}  ⇒  L[u(t − c)] = \frac{e^{−cs}}{s}.
Consider now the case c < 0. The step function is identically equal to one on the domain
of integration of the Laplace Transform, which is [0, ∞), hence
L[u(t − c)] = \int_0^{∞} e^{−st} u(t − c)\, dt = \int_0^{∞} e^{−st}\, dt = L[1] = \frac{1}{s}.
This establishes the Theorem.
Example 4.3.2: Compute L[3u(t − 2)].
Solution: The Laplace Transform is a linear operation, so
L[3u(t − 2)] = 3 L[u(t − 2)],
and Theorem 4.3.2 above implies that L[3 u(t − 2)] = \frac{3 e^{−2s}}{s}.  C
Example 4.3.3: Compute L^{−1}\Big[ \frac{e^{−3s}}{s} \Big].
Solution: Theorem 4.3.2 says that \frac{e^{−3s}}{s} = L[u(t − 3)], so L^{−1}\Big[ \frac{e^{−3s}}{s} \Big] = u(t − 3).  C
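The formula L[u(t − c)] = e^{−cs}/s can also be checked by direct numerical integration; the sketch below (not part of the original notes) uses SciPy's quad for sample values of c and s.

```python
# Sketch: numerical check of L[u(t - c)] = exp(-c s)/s for the sample values c = 3, s = 2.
# The integrand e^{-s t} u(t - c) vanishes for t < c, so we integrate from c onward.
import numpy as np
from scipy.integrate import quad

c, s = 3.0, 2.0
numeric, _ = quad(lambda t: np.exp(-s * t), c, np.inf)
exact = np.exp(-c * s) / s
print(numeric, exact)        # both approximately 0.0012394
```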
4.3.2. Translation Identities. We now introduce two important properties of the Laplace
Transform.
Theorem 4.3.3 (Translation Identities). If L[f (t)](s) exists for s > a, then
L[u(t − c) f(t − c)] = e^{−cs} L[f(t)],  s > a,  c > 0,  (4.3.3)
L[e^{ct} f(t)] = L[f(t)](s − c),  s > a + c,  c ∈ R.  (4.3.4)
Remarks:
(a) Eq. (4.3.4) holds for all c ∈ R, while Eq. (4.3.3) holds only for c > 0.
(b) One can show that in the case c < 0 the following equation holds,
L[u(t + |c|) f(t + |c|)] = e^{|c|s} \Big( L[f(t)] − \int_0^{|c|} e^{−st} f(t)\, dt \Big).
(c) We can highlight the main idea in the theorem above as follows:
L[ right-translation (u f) ] = (exp) · L[f],
L[ (exp) · f ] = translation of L[f].
(d) Denoting F(s) = L[f(t)], then an equivalent expression for Eqs. (4.3.3)-(4.3.4) is
L[u(t − c) f(t − c)] = e^{−cs} F(s),
L[e^{ct} f(t)] = F(s − c).
(e) The inverse form of Eqs. (4.3.3)-(4.3.4) is given by,
L^{−1}[e^{−cs} F(s)] = u(t − c) f(t − c),  (4.3.5)
L^{−1}[F(s − c)] = e^{ct} f(t).  (4.3.6)
Proof of Theorem 4.3.3: The proof is again based on a change of the integration variable.
We start with Eq. (4.3.3), as follows,
L[u(t − c) f(t − c)] = \int_0^{∞} e^{−st} u(t − c) f(t − c)\, dt
= \int_c^{∞} e^{−st} f(t − c)\, dt,  τ = t − c,  dτ = dt,  c > 0,
= \int_0^{∞} e^{−s(τ + c)} f(τ)\, dτ
= e^{−cs} \int_0^{∞} e^{−sτ} f(τ)\, dτ
= e^{−cs} L[f(t)],  s > a.
The proof of Eq. (4.3.4) is a bit simpler, since
L[e^{ct} f(t)] = \int_0^{∞} e^{−st} e^{ct} f(t)\, dt = \int_0^{∞} e^{−(s − c)t} f(t)\, dt = L[f(t)](s − c),
Example 4.3.4: Compute L[u(t − 2) sin(a(t − 2))].
Solution: Both L[sin(at)] = \frac{a}{s^2 + a^2} and L[u(t − c) f(t − c)] = e^{−cs} L[f(t)] imply
L[u(t − 2) sin(a(t − 2))] = e^{−2s} L[sin(at)] = e^{−2s} \frac{a}{s^2 + a^2}.
We conclude: L[u(t − 2) sin(a(t − 2))] = \frac{a e^{−2s}}{s^2 + a^2}.  C
Example 4.3.5: Compute L[e^{3t} sin(at)].
Solution: Since L[e^{ct} f(t)] = L[f](s − c), then we get
L[e^{3t} sin(at)] = \frac{a}{(s − 3)^2 + a^2},  s > 3.
C
Example 4.3.6: Compute both L[u(t − 2) cos(a(t − 2))] and L[e^{3t} cos(at)].
Solution: Since L[cos(at)] = \frac{s}{s^2 + a^2}, then
L[u(t − 2) cos(a(t − 2))] = e^{−2s} \frac{s}{(s^2 + a^2)},  L[e^{3t} cos(at)] = \frac{(s − 3)}{(s − 3)^2 + a^2}.
C
Solution: The idea is to rewrite function f so we can use the Laplace Transform Table 2,
in § 4.1 to compute its Laplace Transform. Since the function f vanishes for all t < 1, we
use step functions to write f as
f(t) = u(t − 1)(t^2 − 2t + 2).
Now, notice that completing the square we obtain,
t^2 − 2t + 2 = (t^2 − 2t + 1) − 1 + 2 = (t − 1)^2 + 1.
The polynomial is a parabola t^2 translated to the right and up by one. This is a discontinuous
function, as it can be seen in Fig. 19.
Example 4.3.8: Find the function f such that L[f(t)] = \frac{e^{−4s}}{s^2 + 5}.
Solution: Notice that
L[f(t)] = e^{−4s} \frac{1}{s^2 + 5}  ⇒  L[f(t)] = \frac{1}{\sqrt{5}} e^{−4s} \frac{\sqrt{5}}{s^2 + (\sqrt{5})^2}.
Recall that L[sin(at)] = \frac{a}{(s^2 + a^2)}; then Eq. (4.3.3), or its inverse form Eq. (4.3.5), imply
L[f(t)] = \frac{1}{\sqrt{5}} L\big[ u(t − 4) \sin(\sqrt{5}\,(t − 4)) \big]  ⇒  f(t) = \frac{1}{\sqrt{5}} u(t − 4) \sin(\sqrt{5}\,(t − 4)).
C
Example 4.3.9: Find the function f(t) such that L[f(t)] = \frac{(s − 1)}{(s − 2)^2 + 3}.
Solution: We first rewrite the right-hand side above as follows,
L[f(t)] = \frac{(s − 1 − 1 + 1)}{(s − 2)^2 + 3}
= \frac{(s − 2)}{(s − 2)^2 + 3} + \frac{1}{(s − 2)^2 + 3}
= \frac{(s − 2)}{(s − 2)^2 + (\sqrt{3})^2} + \frac{1}{\sqrt{3}} \frac{\sqrt{3}}{(s − 2)^2 + (\sqrt{3})^2}.
We now recall Eq. (4.3.4) or its inverse form Eq. (4.3.6), which imply
L[f(t)] = L\big[ e^{2t} \cos(\sqrt{3}\, t) \big] + \frac{1}{\sqrt{3}} L\big[ e^{2t} \sin(\sqrt{3}\, t) \big].
So, we conclude that
f(t) = \frac{e^{2t}}{\sqrt{3}} \big[ \sqrt{3} \cos(\sqrt{3}\, t) + \sin(\sqrt{3}\, t) \big].
C
Example 4.3.10: Find L^{−1}\Big[ \frac{2 e^{−3s}}{s^2 − 4} \Big].
Solution: Since L^{−1}\Big[ \frac{a}{s^2 − a^2} \Big] = sinh(at) and L^{−1}[e^{−cs} F(s)] = u(t − c) f(t − c), then
L^{−1}\Big[ \frac{2 e^{−3s}}{s^2 − 4} \Big] = L^{−1}\Big[ e^{−3s} \frac{2}{s^2 − 4} \Big]  ⇒  L^{−1}\Big[ \frac{2 e^{−3s}}{s^2 − 4} \Big] = u(t − 3) sinh(2(t − 3)).
C
Example 4.3.11: Find a function f such that L[f(t)] = \frac{e^{−2s}}{s^2 + s − 2}.
Solution: Since the right-hand side above does not appear in the Laplace Transform Table
in § 4.1, we need to simplify it in an appropriate way. The plan is to rewrite the denominator
of the rational function 1/(s^2 + s − 2), so we can use partial fractions to simplify this rational
function. We first find out whether this denominator has real or complex roots:
s_± = \frac{1}{2}\big( −1 ± \sqrt{1 + 8} \big)  ⇒  s_+ = 1,  s_- = −2.
4.3.3. Solving Differential Equations. The last three examples in this section show how
to use the methods presented above to solve differential equations with discontinuous source
functions.
Example 4.3.12: Use the Laplace Transform to find the solution of the initial value problem
y' + 2y = u(t − 4),  y(0) = 3.
We then obtain
1 h −4s 1 1 i
L[y] = 3 L e−2t + − e−4s
e
2 s (s + 2)
−2t 1
L[u(t − 4)] − L u(t − 4) e−2(t−4) .
= 3L e +
2
Hence, we conclude that
1 h i
y(t) = 3e−2t + u(t − 4) 1 − e−2(t−4) .
2
C
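This closed form can be checked against a direct numerical integration. The following is a minimal sketch assuming SciPy's solve_ivp; the helper names rhs and y_exact are ours, just for this illustration:

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, y):
        u = 1.0 if t >= 4 else 0.0          # step source u(t-4)
        return -2.0*y + u

    sol = solve_ivp(rhs, (0.0, 8.0), [3.0], max_step=1e-3, dense_output=True)

    def y_exact(t):
        t = np.asarray(t)
        step = (t >= 4).astype(float)
        return 3.0*np.exp(-2.0*t) + 0.5*step*(1.0 - np.exp(-2.0*(t - 4)))

    ts = np.linspace(0.0, 8.0, 200)
    print(np.max(np.abs(sol.sol(ts)[0] - y_exact(ts))))   # small: the two curves agree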
Example 4.3.13: Use the Laplace Transform to find the solution to the initial value problem
y'' + y' + (5/4) y = b(t),    y(0) = 0,  y'(0) = 0,    b(t) = 1 for 0 ≤ t < π,  b(t) = 0 for t ≥ π.   (4.3.8)
Solution: From its definition, the source function b can be written in terms of step functions as b(t) = u(t) − u(t − π).
[Figure 20: The graphs of u(t), its translation u(t − π), and b(t) = u(t) − u(t − π), as given in Eq. (4.3.8).]
The last expression for b is particularly useful to find its Laplace Transform,
L[b(t)] = L[u(t)] − L[u(t − π)] = 1/s − e^{-πs}/s   ⇒   L[b(t)] = (1 − e^{-πs}) (1/s).
Now Laplace Transform the whole equation,
L[y''] + L[y'] + (5/4) L[y] = L[b].
Since the initial conditions are y(0) = 0 and y'(0) = 0, we obtain
(s^2 + s + 5/4) L[y] = (1 − e^{-πs}) (1/s)   ⇒   L[y] = (1 − e^{-πs}) H(s),      H(s) = 1/[s (s^2 + s + 5/4)].
That is, we only need to find the inverse Laplace Transform of H. We use partial fractions to simplify the expression of H. We first find out whether the denominator has real or complex roots:
s^2 + s + 5/4 = 0   ⇒   s± = (1/2)[−1 ± √(1 − 5)],
so the roots are complex valued. An appropriate partial fraction decomposition is
H(s) = 1/[s (s^2 + s + 5/4)] = a/s + (bs + c)/(s^2 + s + 5/4).
Therefore, we get
1 = a (s^2 + s + 5/4) + s (bs + c) = (a + b) s^2 + (a + c) s + (5/4) a.
This equation implies that a, b, and c satisfy the equations
a + b = 0,      a + c = 0,      (5/4) a = 1.
The solution is a = 4/5, b = −4/5, c = −4/5. Hence, we have found that
H(s) = 1/[s (s^2 + s + 5/4)] = (4/5) [1/s − (s + 1)/(s^2 + s + 5/4)].
Example 4.3.14: Use the Laplace Transform to find the solution to the initial value problem
y'' + y' + (5/4) y = g(t),    y(0) = 0,  y'(0) = 0,    g(t) = sin(t) for 0 ≤ t < π,  g(t) = 0 for t ≥ π.   (4.3.9)
Solution: From Fig. 21, the source function g can be written as the following product,
g(t) = [u(t) − u(t − π)] sin(t),
since u(t) − u(t − π) is a box function, taking value one in the interval [0, π] and zero on the complement. Finally, notice that the equation sin(t) = − sin(t − π) implies that the function g can be expressed as follows,
g(t) = u(t) sin(t) − u(t − π) sin(t)   ⇒   g(t) = u(t) sin(t) + u(t − π) sin(t − π).
[Figure 21: The graph of the sine function, the box function u(t) − u(t − π), and the source function g given in Eq. (4.3.9).]
The last expression for g is particularly useful to find its Laplace Transform,
L[g(t)] = 1/(s^2 + 1) + e^{-πs} 1/(s^2 + 1).
With this last transform it is not difficult to solve the differential equation. As usual, Laplace Transform the whole equation,
L[y''] + L[y'] + (5/4) L[y] = L[g].
Since the initial conditions are y(0) = 0 and y'(0) = 0, we obtain
(s^2 + s + 5/4) L[y] = (1 + e^{-πs}) 1/(s^2 + 1)   ⇒   L[y] = (1 + e^{-πs}) 1/[(s^2 + 1)(s^2 + s + 5/4)].
4.3.4. Exercises.
4.3.1.- . 4.3.2.- .
Exercise: Find a sequence {un } so that its limit is the step function u defined in § 4.3.
Although every function in the sequence {un} is continuous, the limit u is a discontinuous function. It is not difficult to see that one can construct sequences of continuous functions
having no limit at all. A similar situation happens when one considers sequences of piecewise
discontinuous functions. In this case the limit could be a continuous function, a piecewise
discontinuous function, or not a function at all.
We now introduce a particular sequence of piecewise discontinuous functions with domain
R such that the limit as n → ∞ does not exist for all values of the independent variable t.
The limit of the sequence is not a function with domain R. In this case, the limit is a new
type of object that we will call Dirac’s Delta generalized function. Dirac’s Delta is the limit
of a sequence of particular bump functions.
Definition 4.4.1. The Dirac Delta generalized function is the limit
δ(t) = lim_{n→∞} δn(t),
for every fixed t ∈ R of the sequence of functions {δn}_{n=1}^∞ given by
δn(t) = n [u(t) − u(t − 1/n)].   (4.4.2)
We see that the Dirac delta generalized function is a function on the domain R − {0}.
Actually it is the zero function on that domain. Dirac’s Delta is not defined at t = 0, since
the limit diverges at that point. One thing we can do now is to shift each element in the
sequence by a real number c, and define
δ(t − c) = lim δn (t − c), c ∈ R.
n→∞
This shifted Dirac’s Delta is identically zero on R − {c} and diverges at t = c. If we shift
the graphs given in Fig. 23 by any real number c, one can see that
∫_c^{c+1} δn(t − c) dt = 1
for every n ≥ 1. Therefore, the sequence of integrals is the constant sequence, {1, 1, · · · },
which has a trivial limit, 1, as n → ∞. This says that the divergence at t = c of the sequence
{δn } is of a very particular type. The area below the graph of the sequence elements is always
the same. We can say that this property of the sequence provides the main defining property
of the Dirac Delta generalized function.
Using a limit procedure one can generalize several operations from a sequence to its
limit. For example, translations, linear combinations, and multiplications of a function by
a generalized function, integration and Laplace Transforms.
Definition 4.4.2. We introduce the following operations on the Dirac Delta:
f(t) δ(t − c) + g(t) δ(t − c) = lim_{n→∞} [f(t) δn(t − c) + g(t) δn(t − c)],
∫_a^b δ(t − c) dt = lim_{n→∞} ∫_a^b δn(t − c) dt,
L[δ(t − c)] = lim_{n→∞} L[δn(t − c)].
Remark: The notation in the definitions above could be misleading. In the left hand
sides above we use the same notation as we use on functions, although Dirac’s Delta is
not a function on R. Take the integral, for example. When we integrate a function f , the
integration symbol means “take a limit of Riemann sums”, that is,
∫_a^b f(t) dt = lim_{n→∞} Σ_{i=0}^{n} f(x_i) Δx,      x_i = a + i Δx,   Δx = (b − a)/n.
However, when f is a generalized function in the sense of a limit of a sequence of functions {fn}, then by the integration symbol we mean to compute a different limit,
∫_a^b f(t) dt = lim_{n→∞} ∫_a^b fn(t) dt.
We use the same symbol, the integral, to mean two different things, depending on whether we integrate a function or a generalized function. This remark also holds for all the operations we introduce on generalized functions, especially the Laplace Transform, which will be used often in the rest of this section.
4.4.2. Computations with the Dirac Delta. Once we have the definitions of operations
involving the Dirac delta, we can actually compute these limits. The following statement
summarizes a few interesting results. The first formula below says that the infinity we found in the definition of Dirac's delta is of a very particular type; that infinity is such that Dirac's delta is integrable, in the sense defined above, with integral equal to one.
Theorem 4.4.3. For every c ∈ R and every ε > 0 holds   ∫_{c−ε}^{c+ε} δ(t − c) dt = 1.
Proof of Theorem 4.4.3: The integral of a Dirac's delta generalized function is computed as a limit of integrals,
∫_{c−ε}^{c+ε} δ(t − c) dt = lim_{n→∞} ∫_{c−ε}^{c+ε} δn(t − c) dt
                          = lim_{n→∞} ∫_c^{c+1/n} n dt,      for 1/n < ε,
                          = lim_{n→∞} n (c + 1/n − c)
                          = lim_{n→∞} 1
                          = 1.
This establishes the Theorem.
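The same property can be observed numerically on the sequence (4.4.2). The sketch below is not from the text; it simply sums a Riemann approximation of the integral for a few values of n:

    import numpy as np

    def delta_n(t, n, c=0.0):
        return n*((t >= c) & (t < c + 1.0/n)).astype(float)

    c, eps = 2.0, 0.5
    t = np.linspace(c - eps, c + eps, 2_000_001)
    dt = t[1] - t[0]
    for n in (5, 50, 500):
        print(n, delta_n(t, n, c).sum()*dt)   # each value is close to 1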
Theorem 4.4.4. If f is continuous on (a, b) and c ∈ (a, b), then   ∫_a^b f(t) δ(t − c) dt = f(c).
Proof of Theorem 4.4.4: We again compute the integral of a Dirac's delta as a limit of a sequence of integrals,
∫_a^b δ(t − c) f(t) dt = lim_{n→∞} ∫_a^b δn(t − c) f(t) dt
                       = lim_{n→∞} ∫_a^b n [u(t − c) − u(t − c − 1/n)] f(t) dt
                       = lim_{n→∞} ∫_c^{c+1/n} n f(t) dt,      for 1/n < (b − c),
where in the last line we used that c ∈ (a, b). If we denote by F any primitive of f, that is, F' = f, then we can write,
∫_a^b δ(t − c) f(t) dt = lim_{n→∞} n [F(c + 1/n) − F(c)]
                       = lim_{n→∞} [F(c + 1/n) − F(c)] / (1/n)
                       = F'(c)
                       = f(c).
This establishes the Theorem.
Theorem 4.4.5. For all s ∈ R holds
L[δ(t − c)] = e^{-cs}  for c > 0,      L[δ(t − c)] = 0  for c < 0.
Proof of Theorem 4.4.5: The Laplace Transform of a Dirac's delta is computed as a limit of Laplace Transforms,
L[δ(t − c)] = lim_{n→∞} L[δn(t − c)]
            = lim_{n→∞} L[n (u(t − c) − u(t − c − 1/n))]
            = lim_{n→∞} ∫_0^∞ n [u(t − c) − u(t − c − 1/n)] e^{-st} dt.
The case c < 0 is simple. For 1/n < |c| holds
L[δ(t − c)] = lim_{n→∞} ∫_0^∞ 0 dt   ⇒   L[δ(t − c)] = 0,   for s ∈ R, c < 0.
Consider now the case c > 0. We then have,
L[δ(t − c)] = lim_{n→∞} ∫_c^{c+1/n} n e^{-st} dt.
For s = 0 we get
L[δ(t − c)] = lim_{n→∞} ∫_c^{c+1/n} n dt = 1   ⇒   L[δ(t − c)] = 1   for s = 0, c > 0.
In the case that s ≠ 0 we get,
L[δ(t − c)] = lim_{n→∞} ∫_c^{c+1/n} n e^{-st} dt = lim_{n→∞} −(n/s) [e^{-(c+1/n)s} − e^{-cs}] = e^{-cs} lim_{n→∞} (1 − e^{-s/n}) / (s/n).
The limit on the last line above is a singular limit of the form 0/0, so we can use the l'Hôpital rule to compute it, that is,
lim_{n→∞} (1 − e^{-s/n}) / (s/n) = lim_{n→∞} [−(s/n^2) e^{-s/n}] / [−(s/n^2)] = lim_{n→∞} e^{-s/n} = 1.
We then obtain,
L[δ(t − c)] = e^{-cs}   for s ≠ 0, c > 0.
This establishes the Theorem.
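The limit computed in the proof can also be checked symbolically. A minimal sketch, assuming SymPy can evaluate the elementary integral and the limit (the symbol names are ours):

    import sympy as sp

    s, c, n, t = sp.symbols('s c n t', positive=True)

    # L[delta_n(t - c)] = n * int_c^{c+1/n} e^{-st} dt  ->  e^{-cs} as n -> infinity.
    Ln = sp.integrate(n*sp.exp(-s*t), (t, c, c + 1/n))
    print(sp.simplify(Ln))               # n*(exp(-c*s) - exp(-s*(c + 1/n)))/s
    print(sp.limit(Ln, n, sp.oo))        # exp(-c*s)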
4.4.3. Applications of the Dirac Delta. Dirac's Delta generalized functions describe impulsive forces in mechanical systems, such as the force exerted by a stick hitting a marble. An impulsive force acts during a very short time and transmits a finite momentum to the system.
Suppose we have a point particle with constant mass m. And to simplify the problem as
much as we can, let us assume the particle can move along only one space direction, say x.
If a force F acts on the particle, Newton’s second law of motion says that
ma = F ⇔ mx00 (t) = F (t, x(t)),
where the function values x(t) are the particle position as function of time, a(t) = x00 (t) are
the particle acceleration values, and we will denote v(t) = x0 (t) the particle velocity values.
We saw in § 1.1 that Newton’s second law of motion is a second order differential equation for
the position function x. Now it is more convenient to use the particle momentum, p = mv,
to write the Newton’s equation,
mx00 = mv 0 = (mv)0 = F ⇒ p0 = F.
Writing Newton’s equation in this form it is simpler to see that forces change the particle
momentum. Integrating in time on an interval [t1, t2] we get
Δp = p(t2) − p(t1) = ∫_{t1}^{t2} F(t, x(t)) dt.
Suppose that an impulsive force acting on a particle at t0 transmits a finite momentum, say p0. This is what the Dirac Delta is useful for: we can write the force as
F(t) = p0 δ(t − t0),
then F = 0 on R − {t0} and the momentum transferred to the particle by the force is
Δp = ∫_{t0−Δt}^{t0+Δt} p0 δ(t − t0) dt = p0.
The momentum transferred is Δp = p0, but the force is identically zero on R − {t0}. We have transferred a finite momentum to the particle by an interaction at a single time t0.
4.4.4. The Impulse Response Function. We now want to solve differential equations
with the Dirac Delta as a source. But a particular type of solutions will be important later
on, those solutions to initial value problems with the Dirac Delta generalized function as a
source and zero initial conditions. We give these solutions a particular name.
Definition 4.4.6. The impulse response function at the point c > 0 of the linear operator L(y) = y'' + a1 y' + a0 y, with a1, a0 constants, is the solution yδ, in the sense of Laplace Transforms, of the initial value problem
L(yδ) = δ(t − c),      yδ(0) = 0,   yδ'(0) = 0,   c > 0.
Example 4.4.1: Find the impulse response function at t = 0 of the linear operator
L(y) = y'' + 2y' + 2y.
Solution: The source is a generalized function, so we need to solve this problem using the Laplace Transform. We compute the Laplace Transform of the differential equation,
L[y''] + 2 L[y'] + 2 L[y] = L[δ(t)]   ⇒   (s^2 + 2s + 2) L[yδ] = 1,
where in the second equation we have already introduced the initial conditions yδ(0) = 0, yδ'(0) = 0. We arrive at the equation
L[yδ] = 1/[(s + 1)^2 + 1] = L[e^{-t} sin(t)],
which leads to the solution yδ(t) = e^{-t} sin(t).    C
The same method works when the source contains both step functions and Dirac Deltas. For instance, for the initial value problem y'' − y = −20 δ(t − 3), y(0) = 1, y'(0) = 0, one arrives at the equation
L[y] = s/(s^2 − 1) − 20 e^{-3s} 1/(s^2 − 1) = L[cosh(t)] − 20 L[u(t − 3) sinh(t − 3)],
which, since for c > 0 holds e^{-cs} L[f](s) = L[u(t − c) f(t − c)], leads to the solution y(t) = cosh(t) − 20 u(t − 3) sinh(t − 3).
Consider now the initial value problem
y'' + 4 y = δ(t − π) − δ(t − 2π),    y(0) = 0,  y'(0) = 0.
Laplace transforming the whole equation,
L[y''] + 4 L[y] = L[δ(t − π)] − L[δ(t − 2π)]   ⇒   (s^2 + 4) L[y] = e^{-πs} − e^{-2πs},
where in the second equation above we have introduced the initial conditions. Then,
L[y] = e^{-πs}/(s^2 + 4) − e^{-2πs}/(s^2 + 4)
     = (e^{-πs}/2) 2/(s^2 + 4) − (e^{-2πs}/2) 2/(s^2 + 4)
     = (1/2) L[u(t − π) sin(2(t − π))] − (1/2) L[u(t − 2π) sin(2(t − 2π))].
The last equation can be rewritten as follows,
y(t) = (1/2) u(t − π) sin(2(t − π)) − (1/2) u(t − 2π) sin(2(t − 2π)),
which leads to the conclusion that
y(t) = (1/2) [u(t − π) − u(t − 2π)] sin(2t).
C
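A way to see this solution numerically is to replace each Dirac Delta by the bump δn of Eq. (4.4.2) with a large n and integrate the resulting ordinary problem. The following is a sketch, not from the text, assuming SciPy's solve_ivp; the agreement with the closed form is of the order 1/n:

    import numpy as np
    from scipy.integrate import solve_ivp

    n = 2000
    def delta_n(t, c):
        return n if c <= t < c + 1.0/n else 0.0

    def rhs(t, y):
        return [y[1], -4.0*y[0] + delta_n(t, np.pi) - delta_n(t, 2*np.pi)]

    sol = solve_ivp(rhs, (0.0, 7.0), [0.0, 0.0], max_step=1e-4, dense_output=True)

    ts = np.linspace(0.0, 7.0, 400)
    y_exact = 0.5*((ts >= np.pi) & (ts < 2*np.pi))*np.sin(2*ts)
    print(np.max(np.abs(sol.sol(ts)[0] - y_exact)))   # small, of order 1/n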
Looking at the differential equation y'(t) = δ(t − c) and at the solution y(t) = u(t − c), one would like to write them together as
u'(t − c) = δ(t − c).   (4.4.3)
But this is not correct, because the step function is a discontinuous function at t = c, hence not differentiable. What we have done is something different. We have found a sequence of functions un with the properties
lim_{n→∞} un(t − c) = u(t − c),      lim_{n→∞} un'(t − c) = δ(t − c),
and we have called y(t) = u(t − c). This is what we actually do when we solve a differential
equation with a source defined as a limit of a sequence of functions, such as the Dirac Delta.
The Laplace Transform Method used on differential equations with generalized sources allows us to solve these equations without the need to write down any sequences, which remain hidden in the definitions of the Laplace Transform of generalized functions. Let us solve the problem in Example 4.4.5 one more time, but this time let us show where all the sequences
actually are.
Example 4.4.6: Find the solution to the initial value problem
y'(t) = δ(t − c),    y(0) = 0,   c > 0.   (4.4.4)
Solution: Recall that the Dirac Delta is defined as a limit of a sequence of bump functions,
δ(t − c) = lim_{n→∞} δn(t − c),      δn(t − c) = n [u(t − c) − u(t − c − 1/n)],   n = 1, 2, · · · .
The problem we are actually solving involves a sequence and a limit,
y 0 (t) = lim δn (t − c), y(0) = 0.
n→∞
We start computing the Laplace Transform of the differential equation,
L[y'(t)] = L[lim_{n→∞} δn(t − c)].
We have defined the Laplace Transform of the limit as the limit of the Laplace Transforms,
L[y'(t)] = lim_{n→∞} L[δn(t − c)].
If the solution is at least piecewise differentiable, we can use the property
L[y'(t)] = s L[y(t)] − y(0).
Assuming that property, and the initial condition y(0) = 0, we get
L[y(t)] = (1/s) lim_{n→∞} L[δn(t − c)]   ⇒   L[y(t)] = lim_{n→∞} L[δn(t − c)]/s.
Introduce now the function yn(t) = un(t − c), given in Eq. (4.4.1), which for each n is the only continuous, piecewise differentiable, solution of the initial value problem
yn'(t) = δn(t − c),      yn(0) = 0.
It is not hard to see that this function un satisfies
L[un(t)] = L[δn(t − c)]/s.
Therefore, using this formula back in the equation for y we get
L[y(t)] = lim_{n→∞} L[un(t)].
For continuous functions we can interchange the Laplace Transform and the limit,
L[y(t)] = L[lim_{n→∞} un(t)].
So we get the result,
y(t) = lim_{n→∞} un(t)   ⇒   y(t) = u(t − c).
We see above that we have found something more than just y(t) = u(t − c). We have found
y(t) = lim_{n→∞} un(t − c),
where the sequence elements un are continuous functions with un(0) = 0 and
lim_{n→∞} un(t − c) = u(t − c),      lim_{n→∞} un'(t − c) = δ(t − c).
Finally, derivatives and limits cannot be interchanged for un,
lim_{n→∞} un'(t − c) ≠ [lim_{n→∞} un(t − c)]',
so it makes no sense to talk about y'.    C
When the Dirac Delta is defined by a sequence of functions, as we did in this section, the calculation needed to find impulse response functions must involve sequences of functions and limits. The Laplace Transform Method used on generalized functions allows us to hide all the sequences and limits. This is true not only for the derivative operator L(y) = y' but for any second order differential operator with constant coefficients.
Definition 4.4.7. A solution of the initial value problem with a Dirac's Delta source
y'' + a1 y' + a0 y = δ(t − c),      y(0) = y0,   y'(0) = y1,   (4.4.5)
where a1, a0, y0, y1, and c ∈ R are given constants, is a function
y(t) = lim_{n→∞} yn(t),
where the functions yn, with n ≥ 1, are the unique solutions to the initial value problems
yn'' + a1 yn' + a0 yn = δn(t − c),      yn(0) = y0,   yn'(0) = y1,   (4.4.6)
and the sources δn satisfy lim_{n→∞} δn(t − c) = δ(t − c).
The definition above makes clear what we mean by a solution to an initial value problem having a generalized function as source, when the generalized function is defined as the limit of a sequence of functions. The following result says that the Laplace Transform Method used with generalized functions hides all the sequence computations.
Theorem 4.4.8. The function y is solution of the initial value problem
y'' + a1 y' + a0 y = δ(t − c),      y(0) = y0,   y'(0) = y1,   c > 0,
iff its Laplace Transform satisfies the equation
[s^2 L[y] − s y0 − y1] + a1 [s L[y] − y0] + a0 L[y] = e^{-cs}.
This Theorem tells us that to find the solution y to an initial value problem when the source is a Dirac's Delta we have to apply the Laplace Transform to the equation and perform the same calculations as if the Dirac Delta were a function. This is the calculation we did when we computed the impulse response functions.
Proof of Theorem 4.4.8: Compute the Laplace Transform on Eq. (4.4.6),
L[yn''] + a1 L[yn'] + a0 L[yn] = L[δn(t − c)].
Recall the relations between the Laplace Transform and derivatives and use the initial conditions,
L[yn''] = s^2 L[yn] − s y0 − y1,      L[yn'] = s L[yn] − y0,
and use these relations in the differential equation,
(s^2 + a1 s + a0) L[yn] − s y0 − y1 − a1 y0 = L[δn(t − c)].
Since δn satisfies that lim_{n→∞} δn(t − c) = δ(t − c), an argument like the one in the proof of Theorem 4.4.5 says that for c > 0 holds
lim_{n→∞} L[δn(t − c)] = L[δ(t − c)] = e^{-cs}.
Then
(s^2 + a1 s + a0) lim_{n→∞} L[yn] − s y0 − y1 − a1 y0 = e^{-cs}.
Interchanging limits and Laplace Transforms we get
(s^2 + a1 s + a0) L[y] − s y0 − y1 − a1 y0 = e^{-cs},
which is equivalent to
[s^2 L[y] − s y0 − y1] + a1 [s L[y] − y0] + a0 L[y] = e^{-cs}.
This establishes the Theorem.
4.4.6. Exercises.
4.4.1.- . 4.4.2.- .
4.5.1. Definition and Properties. One can say that the convolution is a generalization
of the pointwise product of two functions. In a convolution one multiplies the two functions
evaluated at different points and then integrates the result. Here is a precise definition.
Definition 4.5.1. The convolution of functions f and g is a function f ∗ g given by
(f ∗ g)(t) = ∫_0^t f(τ) g(t − τ) dτ.   (4.5.1)
Remark: The convolution is defined for functions f and g such that the integral in (4.5.1) is
defined. For example for f and g piecewise continuous functions, or one of them continuous
and the other a Dirac’s Delta generalized function.
Example 4.5.1: Find f ∗ g, the convolution of the functions f(t) = e^{-t} and g(t) = sin(t).
Solution: The definition of convolution says that
(f ∗ g)(t) = ∫_0^t e^{-τ} sin(t − τ) dτ.
This integral can be computed integrating by parts twice, which gives
2 ∫_0^t e^{-τ} sin(t − τ) dτ = [e^{-τ} cos(t − τ)]_0^t − [e^{-τ} sin(t − τ)]_0^t = e^{-t} − cos(t) − 0 + sin(t),
that is,
(f ∗ g)(t) = (1/2) [e^{-t} − cos(t) + sin(t)].    C
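This closed form can be compared with a direct numerical evaluation of the convolution integral. A minimal sketch, assuming SciPy's quad (the function name conv is ours):

    import numpy as np
    from scipy.integrate import quad

    def conv(t):
        return quad(lambda tau: np.exp(-tau)*np.sin(t - tau), 0.0, t)[0]

    for t in (0.5, 1.0, 2.0, 5.0):
        closed = 0.5*(np.exp(-t) - np.cos(t) + np.sin(t))
        print(t, conv(t), closed)   # the two columns agree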
A few properties of the convolution operation are summarized in the Theorem below.
But we save the most important property for the next subsection.
Theorem 4.5.2 (Properties). For all piecewise continuous functions f, g, and h, the following properties hold:
(i) Commutativity:      f ∗ g = g ∗ f;
(ii) Associativity:     f ∗ (g ∗ h) = (f ∗ g) ∗ h;
(iii) Distributivity:   f ∗ (g + h) = f ∗ g + f ∗ h;
(iv) Neutral element:   f ∗ 0 = 0;
(v) Identity element:   f ∗ δ = f.
Proof of Theorem 4.5.2: We only prove properties (i) and (v); the rest are left as an exercise, and they are not hard to obtain from the definition of convolution. The first property can be obtained by a change of the integration variable as follows,
(f ∗ g)(t) = ∫_0^t f(τ) g(t − τ) dτ,      τ̂ = t − τ,  dτ̂ = −dτ,
           = ∫_t^0 f(t − τ̂) g(τ̂) (−1) dτ̂
           = ∫_0^t g(τ̂) f(t − τ̂) dτ̂   ⇒   (f ∗ g)(t) = (g ∗ f)(t).
We now move to property (v), which is essentially a property of the Dirac Delta,
(f ∗ δ)(t) = ∫_0^t f(τ) δ(t − τ) dτ = f(t).
4.5.2. The Laplace Transform. The Laplace Transform of a convolution of two functions
is the pointwise product of their corresponding Laplace Transforms. This result will be a
key part in the solution decomposition result we show at the end of the section.
Theorem 4.5.3 (Laplace Transform). If the functions f and g have Laplace Transforms
L[f ] and L[g], including the case where one of them is a Dirac’s Delta, then
L[f ∗ g] = L[f ] L[g]. (4.5.3)
Remark: It is not an accident that the convolution of two functions satisfies Eq. (4.5.3).
The definition of convolution is chosen so that it has this property. One can see that this is
the case by looking at the proof of Theorem 4.5.3. One starts with the expression L[f ] L[g],
then changes the order of integration, and one ends up with the Laplace Transform of some
quantity. Because this quantity appears in that expression, it deserves a name. This is how the convolution operation was created.
Example 4.5.2: Compute the Laplace Transform of the function u(t) = ∫_0^t e^{-τ} sin(t − τ) dτ.
Solution: The function u is the convolution u = f ∗ g of the functions f(t) = e^{-t} and g(t) = sin(t). Therefore, Theorem 4.5.3 implies
L[u] = L[f ∗ g] = L[e^{-t}] L[sin(t)] = 1/[(s + 1)(s^2 + 1)].    C
Proof of Theorem 4.5.3: We start writing the right-hand side of Eq. (4.5.3), the product L[f] L[g]. We write the two integrals coming from the individual Laplace Transforms and we rewrite them in an appropriate way,
L[f] L[g] = [∫_0^∞ e^{-st} f(t) dt] [∫_0^∞ e^{-st̃} g(t̃) dt̃]
          = ∫_0^∞ e^{-st̃} g(t̃) [∫_0^∞ e^{-st} f(t) dt] dt̃
          = ∫_0^∞ g(t̃) [∫_0^∞ e^{-s(t+t̃)} f(t) dt] dt̃,
where we only introduced the integral in t as a constant inside the integral in t̃. Introduce the change of variables in the inside integral τ = t + t̃, hence dτ = dt. Then we get
L[f] L[g] = ∫_0^∞ g(t̃) [∫_{t̃}^∞ e^{-sτ} f(τ − t̃) dτ] dt̃   (4.5.4)
          = ∫_0^∞ ∫_{t̃}^∞ e^{-sτ} g(t̃) f(τ − t̃) dτ dt̃.   (4.5.5)
Interchanging the order of integration over the region 0 ≤ t̃ ≤ τ < ∞ we obtain
L[f] L[g] = ∫_0^∞ e^{-sτ} [∫_0^τ g(t̃) f(τ − t̃) dt̃] dτ = ∫_0^∞ e^{-sτ} (g ∗ f)(τ) dτ = L[g ∗ f] = L[f ∗ g].
This establishes the Theorem.
Example 4.5.3: Use the Laplace Transform to compute u(t) = ∫_0^t e^{-τ} sin(t − τ) dτ.
Solution: We know that u = f ∗ g, with f(t) = e^{-t} and g(t) = sin(t), and we have seen in Example 4.5.2 that
L[u] = L[f ∗ g] = 1/[(s + 1)(s^2 + 1)].
A partial fraction decomposition of the right-hand side above implies that
L[u] = (1/2) [1/(s + 1) + (1 − s)/(s^2 + 1)]
     = (1/2) [1/(s + 1) + 1/(s^2 + 1) − s/(s^2 + 1)]
     = (1/2) [L[e^{-t}] + L[sin(t)] − L[cos(t)]].
Therefore, we conclude that u(t) = (1/2) [e^{-t} + sin(t) − cos(t)], in agreement with Example 4.5.1.    C
4.5.3. Solution Decomposition. The Solution Decomposition Theorem is the main result
of this section. Theorem 4.5.4 shows one way to write the solution to a general initial
value problem for a linear second order differential equation with constant coefficients. The
solution to such a problem can always be divided into two terms. The first term contains
information only about the initial data. The second term contains information only about
the source function. This second term is a convolution of the source function itself and the
impulse response function of the differential operator.
Theorem 4.5.4 (Solution Decomposition). Given constants a0, a1, y0, y1 and a piecewise continuous function g, the solution y to the initial value problem
y'' + a1 y' + a0 y = g(t),      y(0) = y0,   y'(0) = y1,   (4.5.6)
can be decomposed as
y(t) = yh(t) + (yδ ∗ g)(t),   (4.5.7)
where yh is the solution of the homogeneous initial value problem
yh'' + a1 yh' + a0 yh = 0,      yh(0) = y0,   yh'(0) = y1,   (4.5.8)
and yδ is the impulse response solution, that is,
yδ'' + a1 yδ' + a0 yδ = δ(t),      yδ(0) = 0,   yδ'(0) = 0.
Remark: The solution decomposition in Eq. (4.5.7) can be written in the equivalent way
y(t) = yh(t) + ∫_0^t yδ(τ) g(t − τ) dτ.
Using the result in Theorem 4.5.3 in the last term above we conclude that
y(t) = yh(t) + (yδ ∗ g)(t).
Example 4.5.4: Use the Solution Decomposition Theorem to express the solution of
y'' + 2 y' + 2 y = sin(at),      y(0) = 1,   y'(0) = −1.
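A numerical sketch of the decomposition (4.5.7) for this problem is given below (not part of the text). It assumes a = 3 and uses yh(t) = e^{-t} cos t and yδ(t) = e^{-t} sin t, the homogeneous and impulse response solutions of this operator computed separately:

    import numpy as np
    from scipy.integrate import solve_ivp, quad

    a = 3.0
    def rhs(t, y):
        return [y[1], -2.0*y[1] - 2.0*y[0] + np.sin(a*t)]

    sol = solve_ivp(rhs, (0.0, 6.0), [1.0, -1.0], rtol=1e-9, atol=1e-12, dense_output=True)

    def y_decomposed(t):
        y_h = np.exp(-t)*np.cos(t)
        conv = quad(lambda tau: np.exp(-tau)*np.sin(tau)*np.sin(a*(t - tau)), 0.0, t)[0]
        return y_h + conv

    ts = np.linspace(0.0, 6.0, 50)
    err = max(abs(sol.sol(t)[0] - y_decomposed(t)) for t in ts)
    print(err)   # small, limited by the integration tolerances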
4.5.4. Exercises.
4.5.1.- . 4.5.2.- .
Newton's second law of motion for point particles is one of the first differential equations ever written. Even this early example of a differential equation consists not of a single equation but of a system of three equations in three unknowns. The unknown functions are the particle's three space coordinates as functions of time. One important difficulty in solving a differential system is that the equations in a system are usually coupled. One cannot solve for one unknown function without knowing the other unknowns. In this chapter we study how to solve the system in the particular case that the equations can be uncoupled. We call such systems diagonalizable. Explicit formulas for the solutions can be written in this case. Later we generalize this idea to systems that cannot be uncoupled.
5.1.1. First Order Linear Systems. A single differential equation on one unknown func-
tion is often not enough to describe certain physical problems. The description of a point
particle moving in space under Newton’s law of motion requires three functions of time,
the space coordinates of the particle, to describe the motion together with three differential
equations. To describe several proteins activating and deactivating each other inside a cell
also requires as many unknown functions and equations as proteins in the system. In this
Section we present a first step aimed at describing such physical systems. We start by introducing a first order linear differential system.
Definition 5.1.1. An n × n first order linear differential system is the equation
x'(t) = A(t) x(t) + g(t),   (5.1.1)
where the n × n coefficient matrix A, the source n-vector g, and the unknown n-vector x are given in components by
A(t) = [a11(t) · · · a1n(t); · · · ; an1(t) · · · ann(t)],      g(t) = [g1(t); · · · ; gn(t)],      x(t) = [x1(t); · · · ; xn(t)],
where [v1; · · · ; vn] denotes a column vector and the semicolons in A separate its rows. The system is called homogeneous iff the source vector g = 0. The system is called of constant coefficients iff the coefficient matrix A is constant.
Remarks:
(a) The derivative of the unknown vector is computed component by component, x'(t) = [x1'(t); · · · ; xn'(t)].
(b) By the definition of the matrix-vector product, Eq. (5.1.1) can be written as
x1'(t) = a11(t) x1(t) + · · · + a1n(t) xn(t) + g1(t),
   ...
xn'(t) = an1(t) x1(t) + · · · + ann(t) xn(t) + gn(t).
A solution of an n × n linear differential system is an n-vector-valued function x, that
is, a set of n functions {x1 , · · · , xn }, that satisfy every differential equation in the system.
When we write down the equations we will usually write x instead of x(t).
Example 5.1.1: The case n = 1 is a single differential equation: find a solution x1 of
x1' = a11(t) x1 + g1(t).
This is a linear first order equation, and solutions can be found with the integrating factor method described in Section 1.2.    C
Example 5.1.3: Use matrix notation to write down the 2 × 2 system given by
x1' = x1 − x2,
x2' = −x1 + x2.
Solution: In this case, the matrix of coefficients and the unknown vector have the form
A = [1  −1; −1  1],      x(t) = [x1(t); x2(t)].
This is a homogeneous system, so the source vector g = 0. The differential equation can be written as follows,
x1' = x1 − x2,  x2' = −x1 + x2   ⇔   [x1'; x2'] = [1  −1; −1  1] [x1; x2]   ⇔   x' = A x.
C
Example 5.1.4: Find the explicit expression for the linear system x' = A x + g, where
A = [1  3; 3  1],      g(t) = [e^{t}; 2e^{3t}],      x = [x1; x2].
Example 5.1.6: Find the explicit expression of the most general 3 × 3 homogeneous linear differential system.
Solution: This is a system of the form x' = A(t) x, with A being a 3 × 3 matrix. Therefore, we need to find functions x1, x2, and x3 solutions of
x1' = a11(t) x1 + a12(t) x2 + a13(t) x3,
x2' = a21(t) x1 + a22(t) x2 + a23(t) x3,
x3' = a31(t) x1 + a32(t) x2 + a33(t) x3.
C
5.1.2. Order Transformations. We present two results that make use of 2 × 2 linear
systems. The first result transforms any second order linear equation into a 2 × 2 first order
linear system. The second result is a kind of converse. It transforms any 2 × 2 first order,
linear, constant coefficients system into a second order linear differential equation. We start
with our first result.
Theorem 5.1.2 (First Order Reduction). A function y solves the second order equation
y'' + p(t) y' + q(t) y = g(t),   (5.1.2)
iff the functions x1 = y and x2 = y' are solutions to the 2 × 2 first order differential system
x1' = x2,   (5.1.3)
x2' = −q(t) x1 − p(t) x2 + g(t).   (5.1.4)
Proof of Theorem 5.1.2:
(⇒) Given a solution y of Eq. (5.1.2), introduce the functions x1 = y and x2 = y'. Then Eq. (5.1.3) holds, due to the relation
x1' = y' = x2.
Also Eq. (5.1.4) holds, because of the equation
x2' = y'' = −q(t) y − p(t) y' + g(t)   ⇒   x2' = −q(t) x1 − p(t) x2 + g(t).
(⇐) Differentiate Eq. (5.1.3) and introduce the result into Eq. (5.1.4), that is,
x1'' = x2'   ⇒   x1'' = −q(t) x1 − p(t) x1' + g(t).
Denoting y = x1, we obtain
y'' + p(t) y' + q(t) y = g(t).
This establishes the Theorem.
Example 5.1.7: Express as a first order system the second order equation
y'' + 2y' + 2y = sin(at).
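By Theorem 5.1.2 with p(t) = 2 and q(t) = 2, the reduction is x1' = x2, x2' = −2x1 − 2x2 + sin(at), with x1 = y and x2 = y'. The following sketch (not part of the text; it assumes SciPy's solve_ivp and picks a = 1 and zero initial data just for illustration) integrates the reduced system:

    import numpy as np
    from scipy.integrate import solve_ivp

    a = 1.0
    def rhs(t, x):
        x1, x2 = x
        return [x2, -2.0*x1 - 2.0*x2 + np.sin(a*t)]

    sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], dense_output=True)
    print(sol.sol(10.0))   # [y(10), y'(10)]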
The transformation of a 2 × 2 first order system into a second order equation given in Theorem 5.1.2 can be generalized to any 2 × 2 constant coefficient linear differential system.
Theorem 5.1.3 (Second Order Reduction). Any 2 × 2 constant coefficients linear system x' = A x, with x = [x1; x2], can be written as the second order equation for x1 given by
x1'' − tr(A) x1' + det(A) x1 = 0.   (5.1.5)
Proof of Theorem 5.1.3: Writing the system in components, x1' = a11 x1 + a12 x2 and x2' = a21 x1 + a22 x2, we differentiate the first equation and replace x2' using the second equation, x1'' = a11 x1' + a12 (a21 x1 + a22 x2). Finally, replace the term with x2 above using x2 = (x1' − a11 x1)/a12, obtained from the first equation, that is,
x1'' = a11 x1' + a12 [a21 x1 + a22 (x1' − a11 x1)/a12].
A simple cancellation and reorganization of terms gives the equation
x1'' = (a11 + a22) x1' + (a12 a21 − a11 a22) x1.
Recalling that tr(A) = a11 + a22 and det(A) = a11 a22 − a12 a21, we get
x1'' − tr(A) x1' + det(A) x1 = 0.
This establishes the Theorem.
Remark: The component x2 satisfies exactly the same equation as x1,
x2'' − tr(A) x2' + det(A) x2 = 0.   (5.1.8)
The proof is analogous to the one for the equation for x1. There is a nice proof that gives both equations, for x1 and x2, at the same time. It is based on the identity that holds for any 2 × 2 matrix,
A^2 − tr(A) A + det(A) I = 0.
This identity is the particular case n = 2 of the Cayley-Hamilton Theorem, which holds for n × n matrices. If we use this identity on the equation for x'' we get the equation in Theorem 5.1.3 but for both components x1 and x2, because
x'' = (A x)' = A x' = A^2 x = tr(A) A x − det(A) I x.
Recalling that A x = x' and I x = x, we get the vector equation
x'' − tr(A) x' + det(A) x = 0.
The first component of this equation is Eq. (5.1.5), the second component is Eq. (5.1.8).
Example 5.1.8: Express as a single second order equation the 2 × 2 system and solve it,
x1' = −x1 + 3x2,
x2' = x1 − x2.
Solution: Instead of using the result from Theorem 5.1.3, we solve this problem following the proof of that theorem. But instead of working with x1, we work with x2. We start computing x1 from the second equation: x1 = x2' + x2. We then introduce this expression into the first equation,
(x2' + x2)' = −(x2' + x2) + 3x2   ⇒   x2'' + x2' = −x2' − x2 + 3x2,
so we obtain the second order equation
x2'' + 2x2' − 2x2 = 0.
We solve this equation with the methods studied in Chapter 2, that is, we look for solutions of the form x2(t) = e^{rt}, with r solution of the characteristic equation
r^2 + 2r − 2 = 0   ⇒   r± = (1/2)[−2 ± √(4 + 8)]   ⇒   r± = −1 ± √3.
Therefore, the general solution to the second order equation above is
x2 = c+ e^{(−1+√3)t} + c- e^{(−1−√3)t},      c+, c- ∈ R.
Since x1 satisfies the same equation as x2, we obtain the same general solution
x1 = c̃+ e^{(−1+√3)t} + c̃- e^{(−1−√3)t},      c̃+, c̃- ∈ R.
C
5.1.3. The Initial Value Problem. This notion for linear systems is similar to initial
value problems for single differential equations. In the case of an n × n first order system we
need n initial conditions, one for each unknown function, which are collected in an n-vector.
Definition 5.1.4. An Initial Value Problem for an n × n linear differential system is
the following: Given an n × n matrix-valued function A, and an n-vector-valued function b,
a real constant t0 , and an n-vector x0 , find an n-vector-valued function x solution of
x' = A(t) x + b(t),      x(t0) = x0.
Remark: The initial condition vector x0 represents n conditions, one for each component
of the unknown vector x.
Example 5.1.9: Write down explicitly the initial value problem for x = [x1; x2] given by
x' = A x,      x(0) = [2; 3],      A = [1  3; 3  1].
Solution: Written explicitly in components, the initial value problem is
x1' = x1 + 3x2,   x1(0) = 2,
x2' = 3x1 + x2,   x2(0) = 3.
C
The main result about existence and uniqueness of solutions to an initial value problem
for a linear system is also analogous to Theorem 2.1.2
Remark: The fixed point argument used in the proof of Picard-Lindelöf’s Theorem 1.6.2
can be extended to prove Theorem 5.1.5. This proof will be presented later on.
5.1.4. Homogeneous Systems. Solutions to a linear homogeneous differential system sat-
isfy the superposition property: Given two solutions of the homogeneous system, their linear
combination is also a solution to that system.
Theorem 5.1.6 (Superposition). If the n-vector-valued functions x(1), x(2) are solutions of x(1)' = A(t) x(1) and x(2)' = A(t) x(2), then any linear combination x = a x(1) + b x(2), with a, b ∈ R, is also a solution of x' = A x.
Remark: This Theorem contains two particular cases:
(a) a = b = 1: If x(1) and x(2) are solutions of an homogeneous linear system, so is x(1) + x(2).
(b) b = 0 and a arbitrary: If x(1) is a solution of an homogeneous linear system, so is a x(1).
Proof of Theorem 5.1.6: We check that the function x = a x(1) + b x(2) is a solution of the differential equation in the Theorem. Indeed, since the derivative of a vector-valued function is a linear operation, we get
x' = (a x(1) + b x(2))' = a x(1)' + b x(2)'.
Replacing the differential equation on the right-hand side above,
x' = a A x(1) + b A x(2).
The matrix-vector product is a linear operation, A (a x(1) + b x(2)) = a A x(1) + b A x(2), hence
x' = A (a x(1) + b x(2))   ⇒   x' = A x.
We now introduce the notion of a linearly dependent and independent set of functions.
Definition 5.1.7. A set of n vector-valued functions {x(1), · · · , x(n)} is called linearly dependent on an interval I ⊂ R iff there exist constants c1, · · · , cn, not all of them zero, such that for all t ∈ I it holds
c1 x(1)(t) + · · · + cn x(n)(t) = 0.
A set of n vector-valued functions is called linearly independent on I iff the set is not linearly dependent.
Remark: This notion is a generalization of Def. 2.1.6 from two functions to n vector-
valued functions. For every value of t ∈ R this definition agrees with the definition of a set
of linearly dependent vectors given in Linear Algebra and reviewed in the Appendices.
We now generalize Theorem 2.1.7 to linear systems. If you know a linearly independent
set of n solutions to an n×n first order, linear, homogeneous system, then you actually know
all possible solutions to that system, since any other solution is just a linear combination of
the previous n solutions.
Theorem 5.1.8 (General Solution). If {x(1), · · · , x(n)} is a linearly independent set of solutions of the n × n system x' = A x, where A is a continuous matrix-valued function, then there exist unique constants c1, · · · , cn such that every solution x of the differential equation x' = A x can be written as the linear combination
x(t) = c1 x(1)(t) + · · · + cn x(n)(t).   (5.1.10)
Before we present a sketch of the proof for Theorem 5.1.8, it is convenient to state the following definitions, which come out naturally from Theorem 5.1.8.
Definition 5.1.9.
(a) The set of functions {x(1), · · · , x(n)} is a fundamental set of solutions of the equation x' = A x iff it holds that x(i)' = A x(i), for i = 1, · · · , n, and the set {x(1), · · · , x(n)} is linearly independent.
(b) The general solution of the homogeneous equation x' = A x denotes any vector-valued function xgen that can be written as a linear combination
xgen(t) = c1 x(1)(t) + · · · + cn x(n)(t),
where x(1), · · · , x(n) are the functions in any fundamental set of solutions of x' = A x, while c1, · · · , cn are arbitrary constants.
Remark: The names above are appropriate, since Theorem 5.1.8 says that knowing the
n functions of a fundamental set of solutions is equivalent to knowing all solutions to the
homogeneous linear differential system.
Example 5.1.11: Show that the set of functions {x(1) = [1; 1] e^{-2t},  x(2) = [−1; 1] e^{4t}} is a fundamental set of solutions to the system x' = A x, where A = [1  −3; −3  1].
Solution: In Example 5.1.10 we have shown that x(1) and x(2) are solutions to the differential equation above. We only need to show that these two functions form a linearly independent set. That is, we need to show that the only constants c1, c2 solutions of the equation below, for all t ∈ R, are c1 = c2 = 0, where
0 = c1 x(1) + c2 x(2) = c1 [1; 1] e^{-2t} + c2 [−1; 1] e^{4t} = [e^{-2t}  −e^{4t}; e^{-2t}  e^{4t}] [c1; c2] = X(t) c,
where X(t) = [x(1)(t), x(2)(t)] and c = [c1; c2]. Using this matrix notation, the linear system for c1, c2 has the form
X(t) c = 0.
We now show that the matrix X(t) is invertible for all t ∈ R. This is the case, since its determinant is
det X(t) = e^{-2t} e^{4t} − (−e^{4t})(e^{-2t}) = e^{2t} + e^{2t} = 2 e^{2t} ≠ 0   for all t ∈ R.
Since X(t) is invertible for t ∈ R, the only solution of the linear system above is c = 0, that is, c1 = c2 = 0. We conclude that the set {x(1), x(2)} is linearly independent, so it is a fundamental set of solutions to the differential equation above.    C
Proof of Theorem 5.1.8: The superposition property in Theorem 5.1.6 says that given any
set of solutions {x(1), · · · , x(n)} of the differential equation x' = A x, the linear combination
x(t) = c1 x(1) (t) + · · · + cn x(n) (t) is also a solution. We now must prove that, in the case
that {x(1) , · · · , x(n) } is linearly independent, every solution of the differential equation is
included in this linear combination.
Let now x be any solution of the differential equation x' = A x. The uniqueness statement
in Theorem 5.1.5 implies that this is the only solution that at t0 takes the value x(t0 ). This
means that the initial data x(t0 ) parametrizes all solutions to the differential equation. We
now try to find the constants {c1 , · · · , cn } solutions of the algebraic linear system
Example 5.1.12: Find the general solution to the differential equation in Example 5.1.5 and then use this general solution to find the solution of the initial value problem
x' = A x,      x(0) = [1; 5],      A = [3  −2; 2  −2].
Solution: From Example 5.1.5 we know that the general solution of the differential equation above can be written as
x(t) = c1 [2; 1] e^{2t} + c2 [1; 2] e^{-t}.
Before imposing the initial condition on this general solution, it is convenient to write this general solution using a matrix-valued function, X, as follows
x(t) = [2e^{2t}  e^{-t}; e^{2t}  2e^{-t}] [c1; c2]   ⇔   x(t) = X(t) c,
where we introduced the solution matrix and the constant vector, respectively,
X(t) = [2e^{2t}  e^{-t}; e^{2t}  2e^{-t}],      c = [c1; c2].
The initial condition fixes the vector c, that is, its components c1, c2, as follows,
x(0) = X(0) c   ⇒   c = [X(0)]^{-1} x(0).
Since the solution matrix X at t = 0 has the form
X(0) = [2  1; 1  2]   ⇒   [X(0)]^{-1} = (1/3) [2  −1; −1  2],
introducing [X(0)]^{-1} in the equation for c above we get
c = (1/3) [2  −1; −1  2] [1; 5] = [−1; 3]   ⇒   c1 = −1,  c2 = 3.
We conclude that the solution to the initial value problem above is given by
x(t) = − [2; 1] e^{2t} + 3 [1; 2] e^{-t}.
C
5.1.5. The Wronskian and Abel's Theorem. From the proof of Theorem 5.1.8 above we see that it is convenient to introduce the notion of the solution matrix and the Wronskian of a set of n solutions to an n × n linear differential system.
Definition 5.1.10.
(a) The solution matrix of any set {x(1), · · · , x(n)} of solutions to a differential equation x' = A x is the n × n matrix-valued function
X(t) = [x(1)(t), · · · , x(n)(t)].   (5.1.11)
X is called a fundamental matrix iff the set {x(1), · · · , x(n)} is a fundamental set.
(b) The Wronskian of the set {x(1), · · · , x(n)} is the function W(t) = det X(t).
Remark: A fundamental matrix provides a more compact way to write down the general solution of a differential equation. The general solution in Eq. (5.1.10) can be rewritten as
xgen(t) = c1 x(1)(t) + · · · + cn x(n)(t) = [x(1)(t), · · · , x(n)(t)] [c1; · · · ; cn] = X(t) c,      c = [c1; · · · ; cn].
Remark: Consider the case that a linear system is a first order reduction of a second order linear homogeneous equation, y'' + a1 y' + a0 y = 0, that is,
x1' = x2,      x2' = −a0 x1 − a1 x2.
In this case, the Wronskian defined here coincides with the definition given in Sect. 2.1. The proof is simple. Suppose that y1, y2 are fundamental solutions of the second order equation; then the vector-valued functions
x(1) = [y1; y1'],      x(2) = [y2; y2'],
are solutions of the first order reduction system. Then holds
W = det [x(1), x(2)] = det [y1  y2; y1'  y2'] = W_{y1 y2}.
Example 5.1.13: Find two fundamental matrices for the linear homogeneous system in Example 5.1.10.
Solution: One fundamental matrix is simple to find; it is constructed with the solutions given in Example 5.1.10, that is,
X = [x(1), x(2)]   ⇒   X(t) = [e^{-2t}  −e^{4t}; e^{-2t}  e^{4t}].
A second fundamental matrix can be obtained by multiplying each solution above by any non-zero constant. For example, another fundamental matrix is
X̃ = [2x(1), 3x(2)]   ⇒   X̃(t) = [2e^{-2t}  −3e^{4t}; 2e^{-2t}  3e^{4t}].
C
Example 5.1.14: Compute the Wronskian of the vector-valued functions given in Example 5.1.10, that is, x(1) = [1; 1] e^{-2t} and x(2) = [−1; 1] e^{4t}.
Solution: The Wronskian is the determinant of the solution matrix, with the vectors placed in any order. For example, we can choose the order x(1), x(2). If we choose the order x(2), x(1), this second Wronskian is the negative of the first one. Choosing the first order we get
W(t) = det [x(1), x(2)] = det [e^{-2t}  −e^{4t}; e^{-2t}  e^{4t}] = e^{-2t} e^{4t} + e^{-2t} e^{4t}.
We conclude that W(t) = 2e^{2t}.    C
Example 5.1.15: Show that the set of functions {x(1) = [e^{3t}; 2e^{3t}],  x(2) = [e^{-t}; −2e^{-t}]} is linearly independent for all t ∈ R.
Solution: We compute the determinant of the matrix X(t) = [e^{3t}  e^{-t}; 2e^{3t}  −2e^{-t}], that is,
w(t) = det [e^{3t}  e^{-t}; 2e^{3t}  −2e^{-t}] = −2e^{2t} − 2e^{2t}   ⇒   w(t) = −4e^{2t} ≠ 0   for all t ∈ R.
C
We now generalize Abel's Theorem 2.1.14 from a single equation to an n × n linear system.
Theorem 5.1.11 (Abel). The Wronskian function W = det X(t) of a solution matrix X(t) = [x(1), · · · , x(n)] of the linear system x' = A(t) x, where A is an n × n continuous matrix-valued function, satisfies the differential equation
W'(t) = tr(A(t)) W(t).   (5.1.13)
Sketch of the proof: Differentiating the determinant along its columns one can show that the Wronskian satisfies W'(t) = W(t) tr(X^{-1}(t) X'(t)). Since each column of the solution matrix solves the system, we also have
X' = [x(1)', · · · , x(n)'] = [A x(1), · · · , A x(n)] = A X,
where the equation on the far right comes from the definition of matrix multiplication. Replacing this equation in the Wronskian equation we get
W'(t) = W(t) tr(X^{-1} A X) = W(t) tr(X X^{-1} A) = W(t) tr(A),
where in the second equation above we used a property of the trace of three matrices: tr(ABC) = tr(CAB) = tr(BCA). Therefore, we have seen that the Wronskian satisfies the equation
W'(t) = tr(A(t)) W(t).
This is a linear differential equation for a single function W : R → R. We integrate it using the integrating factor method from Section 1.2. The result is
W(t) = W(t0) e^{α(t)},      α(t) = ∫_{t0}^{t} tr(A(τ)) dτ.
Example: Verify Eq. (5.1.13) for the Wronskian of the fundamental matrix found in Example 5.1.12.
Solution: In Example 5.1.5 we have shown that the vector-valued functions x(1) = [2; 1] e^{2t} and x(2) = [1; 2] e^{-t} are solutions to the system x' = A x, where A = [3  −2; 2  −2]. The matrix
X(t) = [2e^{2t}  e^{-t}; e^{2t}  2e^{-t}]
is a fundamental matrix of the system, since its Wronskian is non-zero,
W(t) = det [2e^{2t}  e^{-t}; e^{2t}  2e^{-t}] = 4e^{t} − e^{t}   ⇒   W(t) = 3e^{t}.
We need to compute the right-hand side and the left-hand side of Eq. (5.1.13) and verify that they coincide. We start with the left-hand side,
W'(t) = 3e^{t} = W(t).
The right-hand side is
tr(A) W(t) = (3 − 2) W(t) = W(t).
Therefore, we have shown that W'(t) = tr(A) W(t).    C
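Abel's formula can also be checked numerically for this system. The following is a sketch, not from the text, assuming SciPy's solve_ivp; it builds a solution matrix from two numerically integrated solutions and compares W(t) with W(0) e^{t tr A}:

    import numpy as np
    from scipy.integrate import solve_ivp

    A = np.array([[3.0, -2.0], [2.0, -2.0]])
    rhs = lambda t, x: A @ x

    # Two solutions with linearly independent initial data form a solution matrix X(t).
    s1 = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0], rtol=1e-10, atol=1e-12, dense_output=True)
    s2 = solve_ivp(rhs, (0.0, 2.0), [0.0, 1.0], rtol=1e-10, atol=1e-12, dense_output=True)

    def W(t):
        X = np.column_stack([s1.sol(t), s2.sol(t)])
        return np.linalg.det(X)

    for t in (0.5, 1.0, 2.0):
        print(W(t), W(0.0)*np.exp(np.trace(A)*t))   # the two columns agree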
5.1.6. Exercises.
5.1.1.- . 5.1.2.- .
Example 5.2.1: Find all functions x1, x2 solutions of the 2 × 2 system
x1' = x1 − x2,      x2' = −x1 + x2.
Solution: The main idea to solve this system comes from the following observation. If we add up the two equations, and if we subtract the second equation from the first, we obtain, respectively,
(x1 + x2)' = 0,      (x1 − x2)' = 2(x1 − x2).
To understand the consequences of what we have done, let us introduce the new unknowns v = x1 + x2 and w = x1 − x2, and rewrite the equations above with these new unknowns,
v' = 0,      w' = 2w.
We have decoupled the original system. The equations for x1 and x2 are coupled, but we have found a linear combination of the equations such that the equations for v and w are not coupled. We now solve each equation independently of the other,
v' = 0   ⇒   v = c1,
w' = 2w   ⇒   w = c2 e^{2t},
with c1, c2 ∈ R. Having obtained the solutions of the decoupled system, we now transform back the solutions to the original unknown functions. From the definitions of v and w we see that
x1 = (1/2)(v + w),      x2 = (1/2)(v − w).
We conclude that for all c1, c2 ∈ R the functions x1, x2 below are solutions of the 2 × 2 differential system in the example, namely,
x1(t) = (1/2)(c1 + c2 e^{2t}),      x2(t) = (1/2)(c1 − c2 e^{2t}).
C
Let us review what we did in the example above. The equations for x1 and x2 are coupled, so we found an appropriate linear combination of the equations and the unknowns such that the equations for the new unknown functions, v and w, are decoupled. We integrated each equation independently of the other, and we finally transformed the solutions back to the original unknowns x1 and x2. The key step is to find the transformation from x1, x2 into v, w.
For general systems this transformation may not exist. It exists, however, for a particular type of systems, called diagonalizable. We start by reviewing the concept of a diagonalizable
matrix.
Definition 5.2.1. An n × n matrix A is called diagonalizable iff there exists an invertible
matrix P and a diagonal matrix D such that
A = P DP −1 .
Remark: This and other concepts of linear algebra can be reviewed in Chapter 8.
Example 5.2.2: Show that the matrix A = [1  3; 3  1] is diagonalizable, where
P = [1  −1; 1  1]      and      D = [4  0; 0  −2].
Solution: The eigenvalues of A are λ+ = 4 and λ- = −2, so
D = diag(λ+, λ-)   ⇒   D = [4  0; 0  −2].
We have already shown in Example 5.1.7 that P is invertible and that A = P D P^{-1}.    C
5.2.2. Eigenvector solution formulas. Explicit formulas for the solution of a linear differential system can be obtained in the case that the system coefficient matrix is diagonalizable.
Definition 5.2.3. A diagonalizable differential system is a differential equation of the
form x0 = A x + g, where the coefficient matrix A is diagonalizable.
For diagonalizable differential systems there is an explicit formula for the solution of the
differential equation. The formula includes the eigenvalues and eigenvectors of the coefficient
matrix.
Theorem 5.2.4 (Eigenpairs expression). If the n × n constant matrix A is diagonalizable, with a set of linearly independent eigenvectors {v1, · · · , vn} and corresponding eigenvalues {λ1, · · · , λn}, then the system x' = A x has a set of fundamental solutions given by
x(1)(t) = v1 e^{λ1 t},  · · · ,  x(n)(t) = vn e^{λn t}.   (5.2.1)
Furthermore, every initial value problem x'(t) = A x(t), with x(t0) = x0, has a unique solution x for every initial condition x0 ∈ R^n, given by
x(t) = c1 v1 e^{λ1 t} + · · · + cn vn e^{λn t},   (5.2.2)
where the constants c1, · · · , cn are solution of the algebraic linear system
x0 = c1 v1 e^{λ1 t0} + · · · + cn vn e^{λn t0}.   (5.2.3)
Remarks:
(a) We show two proofs of this Theorem. The first one is short but uses Theorem 5.1.8.
The second proof is constructive, longer than the first proof, and it makes no use of
Theorem 5.1.8.
(b) The second proof follows the same idea presented to solve Example 5.2.1. We decouple
the system, we solve the uncoupled system, and we transform back to the original
unknowns. The differential system is decoupled when written in the basis of eigenvectors
of the coefficient matrix.
First proof of Theorem 5.2.4: Each function x(i) = vi e^{λi t}, for i = 1, · · · , n, is a solution of the system x' = A x, because
x(i)' = λi vi e^{λi t},      A x(i) = A vi e^{λi t} = λi vi e^{λi t},
hence x(i)' = A x(i). Since A is diagonalizable, the set {x(1)(t) = v1 e^{λ1 t}, · · · , x(n)(t) = vn e^{λn t}} is a fundamental set of the system. Therefore, Theorem 5.1.8 says that the general solution to the system is
x(t) = c1 v1 e^{λ1 t} + · · · + cn vn e^{λn t}.
The constants c1, · · · , cn are computed by evaluating the equation above at t0 and recalling the initial condition x(t0) = x0. The result is Eq. (5.2.3). This establishes the Theorem.
Remark: The proof above does not say how one can find out that a function of the form
xi = vi eλi t is a solution in the first place. The second proof below constructs the solutions
and shows that the solutions are indeed the ones in the first proof.
Second proof of Theorem 5.2.4: Since the coefficient matrix A is diagonalizable, there exist an n × n invertible matrix P and an n × n diagonal matrix D such that A = P D P^{-1}. Introduce that into the differential equation and multiply the whole equation by P^{-1},
P^{-1} x'(t) = P^{-1} (P D P^{-1}) x(t).
Since matrix A is constant, so are P and D. In particular P^{-1} x' = (P^{-1} x)', hence
(P^{-1} x)' = D (P^{-1} x).
Introduce the new unknown y = P^{-1} x, so the equation above is
y'(t) = D y(t).
Since matrix D is diagonal, the system above is decoupled for the unknown y. Transform the initial condition too, that is, P^{-1} x(t0) = P^{-1} x0. Introduce the notation y0 = P^{-1} x0, so the initial condition is
y(t0) = y0.
Solve the decoupled initial value problem y'(t) = D y(t),
y1'(t) = λ1 y1(t),  · · · ,  yn'(t) = λn yn(t)   ⇒   y(t) = [c1 e^{λ1 t}; · · · ; cn e^{λn t}].
Once y is known, we transform back to the original unknown, x = P y = [v1, · · · , vn] y, that is,
x(t) = c1 v1 e^{λ1 t} + · · · + cn vn e^{λn t}.
This is Eq. (5.2.2). Evaluating it at t0 we obtain Eq. (5.2.3). This establishes the Theorem.
Example 5.2.4: Find the vector-valued function x solution to the differential system
x' = A x,      x(0) = [3; 2],      A = [1  2; 2  1].
Solution: First we need to find out whether the coefficient matrix A is diagonalizable or not. Theorem 5.2.2 says that a 2 × 2 matrix is diagonalizable iff there exists a linearly independent set of two eigenvectors. So we start computing the matrix eigenvalues, which are the roots of the characteristic polynomial
p(λ) = det(A − λ I2) = det [(1 − λ)  2; 2  (1 − λ)] = (1 − λ)^2 − 4.
The roots of the characteristic polynomial are
(λ − 1)^2 = 4   ⇔   λ± = 1 ± 2   ⇔   λ+ = 3,  λ- = −1.
The eigenvectors corresponding to the eigenvalue λ+ = 3 are the solutions v+ of the linear system (A − 3 I2) v+ = 0. To find them, we perform Gauss operations on the matrix
A − 3 I2 = [−2  2; 2  −2] → [1  −1; 0  0]   ⇒   v1+ = v2+   ⇒   v+ = [1; 1].
The eigenvectors corresponding to the eigenvalue λ- = −1 are the solutions v- of the linear system (A + I2) v- = 0. To find them, we perform Gauss operations on the matrix
A + I2 = [2  2; 2  2] → [1  1; 0  0]   ⇒   v1- = −v2-   ⇒   v- = [−1; 1].
Summarizing, the eigenvalues and eigenvectors of matrix A are the following,
λ+ = 3,  v+ = [1; 1],      and      λ- = −1,  v- = [−1; 1].
Once we have the eigenvalues and eigenvectors of the coefficient matrix, Eq. (5.2.2) gives us the general solution
x(t) = c+ [1; 1] e^{3t} + c- [−1; 1] e^{-t},
where the coefficients c+ and c- are solutions of the initial condition equation
c+ [1; 1] + c- [−1; 1] = [3; 2]   ⇒   [1  −1; 1  1] [c+; c-] = [3; 2]   ⇒   [c+; c-] = (1/2) [1  1; −1  1] [3; 2].
We conclude that c+ = 5/2 and c- = −1/2, hence
x(t) = (5/2) [1; 1] e^{3t} − (1/2) [−1; 1] e^{-t}   ⇔   x(t) = (1/2) [5e^{3t} + e^{-t}; 5e^{3t} − e^{-t}].
C
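The eigenpair formula (5.2.2)-(5.2.3) can be reproduced numerically. The following sketch is not from the text; it assumes NumPy, and the helper x(t) is ours:

    import numpy as np

    A  = np.array([[1.0, 2.0], [2.0, 1.0]])
    x0 = np.array([3.0, 2.0])

    lam, V = np.linalg.eig(A)          # columns of V are eigenvectors of A
    c = np.linalg.solve(V, x0)         # Eq. (5.2.3) with t0 = 0

    def x(t):
        return V @ (c * np.exp(lam*t)) # Eq. (5.2.2)

    t = 1.0
    print(x(t))
    print(0.5*np.array([5*np.exp(3*t) + np.exp(-t), 5*np.exp(3*t) - np.exp(-t)]))  # same values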
Example 5.2.5: Find the general solution to the linear system x' = A x, with A = [1  3; 3  1].
Solution: We start finding the eigenvalues and eigenvectors of the coefficient matrix A. This part of the work was already done in Example 5.2.3. We have found that A has two linearly independent eigenvectors, more precisely,
λ+ = 4,  v+ = [1; 1]   ⇒   x+(t) = [1; 1] e^{4t},
λ- = −2,  v- = [−1; 1]   ⇒   x-(t) = [−1; 1] e^{-2t}.
Therefore, the general solution of the differential equation is
x(t) = c+ [1; 1] e^{4t} + c- [−1; 1] e^{-2t},      c+, c- ∈ R.
C
5.2.3. Alternative solution formulas. There are several ways to write down the solution found in Theorem 5.2.4. The formula in Eq. (5.2.2) is useful to write down the general solution to the equation x' = A x when A is diagonalizable. It is a formula easy to remember: you just add all terms of the form vi e^{λi t}, where (λi, vi) is any eigenpair of A. But this formula is not the best one to write down solutions to initial value problems. As you can see in Theorem 5.2.4, I did not provide a formula for that. I only said that the constants c1, · · · , cn are the solutions of the algebraic linear system in (5.2.3). But I did not write down the solution for the c's. It is too complicated in this notation, though it is not difficult to do in a particular example, as near the end of Example 5.2.2.
A fundamental matrix, introduced in Eq. (5.1.11), provides a more compact form for the solution of an initial value problem. We have seen this compact notation in Eq. (5.1.12),
x(t) = X(t) c,
where we used the fundamental matrix constructed with the fundamental solutions in (5.2.1), and we collected all the c's in a vector,
X(t) = [v1 e^{λ1 t}, · · · , vn e^{λn t}],      c = [c1; · · · ; cn].
Remark: Eq. (5.2.4) also holds in the case that the coefficient matrix A is not diagonaliz-
able. In such case the fundamental matrix X is not given by the expression provided in the
Theorem. But with an appropriate fundamental matrix, Eq. (5.2.4) still holds.
Example 5.2.6: Find a fundamental matrix for the system below and use it to write down the general solution to the system, where
x' = A x,      A = [1  2; 2  1].
Solution: One way to find a fundamental matrix of a system is to start computing the eigenvalues and eigenvectors of the coefficient matrix. The differential equation in this Example is the same as the one given in Example 5.2.4. In that Example we found out that the eigenvalues and eigenvectors of the coefficient matrix were
λ+ = 3,  v+ = [1; 1],      and      λ- = −1,  v- = [−1; 1].
We see that the coefficient matrix is diagonalizable, so with the information above we can construct a fundamental set of solutions,
{ x+(t) = [1; 1] e^{3t},  x-(t) = [−1; 1] e^{-t} }.
From here we construct a fundamental matrix
X(t) = [e^{3t}  −e^{-t}; e^{3t}  e^{-t}].
Then we have the general solution
xgen(t) = X(t) c   ⇒   xgen(t) = [e^{3t}  −e^{-t}; e^{3t}  e^{-t}] [c+; c-],      c = [c+; c-].
C
Example 5.2.7: Use the fundamental matrix found in Example 5.2.6 to write down the solution to the initial value problem
x' = A x,      x(0) = [x1(0); x2(0)],      A = [1  2; 2  1].
Solution: In Example 5.2.6 we found the general solution to the differential equation,
xgen(t) = [e^{3t}  −e^{-t}; e^{3t}  e^{-t}] [c+; c-].
The initial condition has the form
[x1(0); x2(0)] = x(0) = X(0) c = [1  −1; 1  1] [c+; c-].
We need to compute the inverse of the matrix X(0),
[X(0)]^{-1} = (1/2) [1  1; −1  1],
so we compute the constant vector c,
[c+; c-] = (1/2) [1  1; −1  1] [x1(0); x2(0)].
So the solution to the initial value problem is
x(t) = X(t) [X(0)]^{-1} x(0)   ⇔   x(t) = [e^{3t}  −e^{-t}; e^{3t}  e^{-t}] (1/2) [1  1; −1  1] [x1(0); x2(0)].
If we compute the matrix product in the last equation explicitly, we get
x(t) = (1/2) [(e^{3t} + e^{-t})  (e^{3t} − e^{-t}); (e^{3t} − e^{-t})  (e^{3t} + e^{-t})] [x1(0); x2(0)].
C
Example 5.2.8: Use a fundamental matrix to write the solution to the initial value problem
x' = A x,      x(0) = [2; 4],      A = [1  3; 3  1].
Solution: We know from Example ?? that the general solution to the differential equation above is given by
x(t) = c+ [1; 1] e^{4t} + c- [−1; 1] e^{-2t},      c+, c- ∈ R.
Equivalently, introducing the fundamental matrix X and the vector c as in Example ??,
X(t) = [e^{4t}  −e^{-2t}; e^{4t}  e^{-2t}],      c = [c1; c2],
the general solution can be written as
x(t) = X(t) c   ⇒   x(t) = [e^{4t}  −e^{-2t}; e^{4t}  e^{-2t}] [c1; c2].
The initial condition is an equation for the constant vector c,
X(0) c = x(0)   ⇔   [1  −1; 1  1] [c1; c2] = [2; 4].
The solution of the linear system above can be expressed in terms of the inverse matrix
[X(0)]^{-1} = (1/2) [1  1; −1  1],
as follows,
c = [X(0)]^{-1} x(0)   ⇔   [c1; c2] = (1/2) [1  1; −1  1] [2; 4]   ⇒   [c1; c2] = [3; 1].
So, the solution to the initial value problem in vector form is given by
x(t) = 3 [1; 1] e^{4t} + [−1; 1] e^{-2t},
and using the fundamental matrix above we get
x(t) = [e^{4t}  −e^{-2t}; e^{4t}  e^{-2t}] [3; 1]   ⇒   x(t) = [3e^{4t} − e^{-2t}; 3e^{4t} + e^{-2t}].
C
We saw that the solution to the initial value problem x' = A x with x(t0) = x0 can be written using a fundamental matrix X,
x(t) = X(t) [X(t0)]^{-1} x0.
There is an alternative expression to write the solution of the initial value problem above. It makes use of the exponential of the coefficient matrix.
Theorem 5.2.6 (Exponential expression). The initial value problem for an n × n homogeneous, constant coefficients, linear differential system
x' = A x,      x(t0) = x0,
has a unique solution x for every t0 ∈ R and every n-vector x0, given by
x(t) = e^{A(t−t0)} x0.   (5.2.5)
Remarks:
(a) The Theorem holds as it is written, for any constant n × n matrix A, whether it is
diagonalizable or not. But in this Section we provide a proof of the Theorem only in
the case that A is diagonalizable.
(b) Eq. (5.2.5) is a nice generalization of the solution formula for a single linear homogeneous equation we found in Section 1.1.
Proof of Theorem 5.2.6 for a diagonalizable matrix A: We start with the formula for the fundamental matrix given in Theorem 5.2.5,
X(t) = [v1 e^{λ1 t}, · · · , vn e^{λn t}] = P diag(e^{λ1 t}, · · · , e^{λn t}),      P = [v1, · · · , vn].
The diagonal matrix on the last equation above is the exponential of the diagonal matrix
Dt = diag(λ1 t, · · · , λn t).
This is the case, since by the definitions in Chapter 8 we have
e^{Dt} = Σ_{n=0}^∞ (Dt)^n / n! = diag( Σ_{n=0}^∞ (λ1 t)^n / n!, · · · , Σ_{n=0}^∞ (λn t)^n / n! ),
which gives us the expression for the exponential of a diagonal matrix,
e^{Dt} = diag(e^{λ1 t}, · · · , e^{λn t}).
Therefore the solution of the initial value problem can be written as
x(t) = X(t) [X(t0)]^{-1} x0 = P e^{Dt} (P e^{Dt0})^{-1} x0 = P e^{Dt} e^{-Dt0} P^{-1} x0 = P e^{D(t−t0)} P^{-1} x0,
where we used that (e^{Dt0})^{-1} = e^{-Dt0}. These manipulations lead us to the formula above. The last step of the argument is to relate the equation above with e^{A(t−t0)}. Since A is diagonalizable, A = P D P^{-1} for the matrices P and D defined above. Then,
e^{A(t−t0)} = Σ_{n=0}^∞ A^n (t − t0)^n / n! = Σ_{n=0}^∞ (P D P^{-1})^n (t − t0)^n / n! = P [ Σ_{n=0}^∞ D^n (t − t0)^n / n! ] P^{-1},
and by the calculation we did in the first part of this proof we get
e^{A(t−t0)} = P e^{D(t−t0)} P^{-1}   ⇒   x(t) = e^{A(t−t0)} x0.
This establishes the Theorem.
Example 5.2.9: Compute the exponential function eAt and use it to express the vector-
valued function x solution to the initial value problem
\[
x' = A\, x, \qquad x(0) = \begin{bmatrix} x_1(0) \\ x_2(0) \end{bmatrix}, \qquad A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}.
\]
Solution: In Example 5.2.6 we found that a fundamental matrix for this system was
\[
X(t) = \begin{bmatrix} e^{3t} & -e^{-t} \\ e^{3t} & e^{-t} \end{bmatrix}.
\]
This fundamental matrix can be written as X(t) = P e^{Dt}, with P the matrix of eigenvectors and D the diagonal matrix of eigenvalues, so
\[
e^{At} = P e^{Dt} P^{-1} = \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} e^{3t} & 0 \\ 0 & e^{-t} \end{bmatrix} \frac{1}{2} \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix},
\]
so we conclude that
\[
e^{At} = \frac{1}{2} \begin{bmatrix} (e^{3t} + e^{-t}) & (e^{3t} - e^{-t}) \\ (e^{3t} - e^{-t}) & (e^{3t} + e^{-t}) \end{bmatrix}.
\]
The solution to the initial value problem above is
\[
x(t) = e^{At} x_0 = \frac{1}{2} \begin{bmatrix} (e^{3t} + e^{-t}) & (e^{3t} - e^{-t}) \\ (e^{3t} - e^{-t}) & (e^{3t} + e^{-t}) \end{bmatrix} \begin{bmatrix} x_1(0) \\ x_2(0) \end{bmatrix}.
\]
C
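The identity e^{At} = P e^{Dt} P^{-1} used above can be checked numerically; a minimal sketch, assuming NumPy and SciPy, is:

import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
t = 1.3

# diagonalization A = P D P^{-1}, then e^{At} = P e^{Dt} P^{-1}
evals, P = np.linalg.eig(A)
eAt_diag = P @ np.diag(np.exp(evals * t)) @ np.linalg.inv(P)

# direct matrix exponential for comparison
eAt = expm(A * t)
print(np.allclose(eAt_diag, eAt))  # expected: True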
The expression above is the generalization to systems of Eq. (1.1.11) for scalar equations.
Proof of Theorem 5.2.7: Since the coefficient matrix A is diagonalizable, there exist an n×n invertible matrix P and an n×n diagonal matrix D such that A = P D P^{-1}. Introducing this information into the differential equation in Eq. (1.1.11), and then multiplying the whole equation by P^{-1}, we obtain
\[
P^{-1} x'(t) = P^{-1}\big( P D P^{-1} \big) x(t) + P^{-1} g(t).
\]
Since matrix A is constant, so are P and D. In particular P^{-1} x' = (P^{-1} x)'. Therefore,
\[
(P^{-1} x)' = D\, (P^{-1} x) + (P^{-1} g).
\]
Introduce the new unknown function y = P^{-1} x and the new source function h = P^{-1} g. The system above then decouples into n scalar equations, one for each component of y, and each scalar equation can be solved with the formula for a single linear equation. The result, written in vector form, is
\[
y(t) = e^{D(t-t_0)} \Big[ y_0 + \int_{t_0}^{t} e^{-D(\tau - t_0)} h(\tau)\, d\tau \Big],
\]
where we recall that e^{D(t-t_0)} = \text{diag}\big( e^{\lambda_1 (t-t_0)}, \cdots, e^{\lambda_n (t-t_0)} \big). We now multiply the whole equation by the constant matrix P and we recall that P^{-1} P = I_n, then
\[
P y(t) = P e^{D(t-t_0)} \Big[ (P^{-1}P) y_0 + (P^{-1}P) \int_{t_0}^{t} e^{-D(\tau - t_0)} (P^{-1}P)\, h(\tau)\, d\tau \Big].
\]
Recalling that P e^{D(t-t_0)} P^{-1} = e^{A(t-t_0)}, and that P y = x, P y_0 = x_0, P h = g, we conclude that
\[
x(t) = e^{A(t-t_0)} \Big[ x_0 + \int_{t_0}^{t} e^{-A(\tau - t_0)} g(\tau)\, d\tau \Big].
\]
t0
Solution: In Example 5.2.4 we have found the eigenvalues and eigenvectors of the coeffi-
cient matrix, and the result is
\[
\lambda_1 = 3, \quad v^{(1)} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad \text{and} \qquad \lambda_2 = -1, \quad v^{(2)} = \begin{bmatrix} -1 \\ 1 \end{bmatrix}.
\]
With this information and Theorem 5.2.2 we obtain that
\[
A = P D P^{-1}, \qquad P = \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}, \qquad D = \begin{bmatrix} 3 & 0 \\ 0 & -1 \end{bmatrix},
\]
and also that
\[
e^{At} = P e^{Dt} P^{-1} = \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} e^{3t} & 0 \\ 0 & e^{-t} \end{bmatrix} \frac{1}{2} \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix},
\]
so we conclude that
\[
e^{At} = \frac{1}{2} \begin{bmatrix} (e^{3t} + e^{-t}) & (e^{3t} - e^{-t}) \\ (e^{3t} - e^{-t}) & (e^{3t} + e^{-t}) \end{bmatrix}
\quad \Rightarrow \quad
e^{-At} = \frac{1}{2} \begin{bmatrix} (e^{-3t} + e^{t}) & (e^{-3t} - e^{t}) \\ (e^{-3t} - e^{t}) & (e^{-3t} + e^{t}) \end{bmatrix}.
\]
The solution to the initial value problem above is
\[
x(t) = e^{At} x_0 + e^{At} \int_0^t e^{-A\tau} g\, d\tau.
\]
Since
\[
e^{At} x_0 = \frac{1}{2} \begin{bmatrix} (e^{3t} + e^{-t}) & (e^{3t} - e^{-t}) \\ (e^{3t} - e^{-t}) & (e^{3t} + e^{-t}) \end{bmatrix} \begin{bmatrix} 3 \\ 2 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 5e^{3t} + e^{-t} \\ 5e^{3t} - e^{-t} \end{bmatrix},
\]
in a similar way
\[
e^{-A\tau} g = \frac{1}{2} \begin{bmatrix} (e^{-3\tau} + e^{\tau}) & (e^{-3\tau} - e^{\tau}) \\ (e^{-3\tau} - e^{\tau}) & (e^{-3\tau} + e^{\tau}) \end{bmatrix} \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 3e^{-3\tau} - e^{\tau} \\ 3e^{-3\tau} + e^{\tau} \end{bmatrix}.
\]
Integrating the last expression above, we get
\[
\int_0^t e^{-A\tau} g\, d\tau = \frac{1}{2} \begin{bmatrix} -e^{-3t} - e^{t} \\ -e^{-3t} + e^{t} \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix}.
\]
Therefore, we get
\[
x(t) = \frac{1}{2} \begin{bmatrix} 5e^{3t} + e^{-t} \\ 5e^{3t} - e^{-t} \end{bmatrix} + \frac{1}{2} \begin{bmatrix} (e^{3t} + e^{-t}) & (e^{3t} - e^{-t}) \\ (e^{3t} - e^{-t}) & (e^{3t} + e^{-t}) \end{bmatrix} \Big[ \frac{1}{2} \begin{bmatrix} -e^{-3t} - e^{t} \\ -e^{-3t} + e^{t} \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix} \Big].
\]
Multiplying the matrix-vector product in the second term of the right-hand side above,
\[
x(t) = \frac{1}{2} \begin{bmatrix} 5e^{3t} + e^{-t} \\ 5e^{3t} - e^{-t} \end{bmatrix} - \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \frac{1}{2} \begin{bmatrix} (e^{3t} + e^{-t}) \\ (e^{3t} - e^{-t}) \end{bmatrix}.
\]
We conclude that the solution to the initial value problem above is
\[
x(t) = \begin{bmatrix} 3e^{3t} + e^{-t} - 1 \\ 3e^{3t} - e^{-t} \end{bmatrix}.
\]
C
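A minimal numerical cross-check of this closed-form solution, assuming SciPy and the data A, g, x(0) used in the computation above, is sketched below:

import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
g = np.array([1.0, 2.0])
x0 = np.array([3.0, 2.0])

def closed_form(t):
    # x(t) = (3 e^{3t} + e^{-t} - 1, 3 e^{3t} - e^{-t})
    return np.array([3*np.exp(3*t) + np.exp(-t) - 1.0,
                     3*np.exp(3*t) - np.exp(-t)])

sol = solve_ivp(lambda t, x: A @ x + g, (0.0, 1.0), x0, rtol=1e-10, atol=1e-12)
print(np.allclose(sol.y[:, -1], closed_form(sol.t[-1]), rtol=1e-6))  # expected: True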
5.2.5. Exercises.
5.2.1.- . 5.2.2.- .
Solution: We have computed in Example 5.2.3 the eigenvalues and eigenvectors of the
coefficient matrix,
\[
\lambda_{+} = 4, \quad v^{+} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad \text{and} \qquad \lambda_{-} = -2, \quad v^{-} = \begin{bmatrix} -1 \\ 1 \end{bmatrix}.
\]
This coefficient matrix has distinct real eigenvalues, so the general solution to the differential equation is
\[
x_{\text{gen}}(t) = c_{+} \begin{bmatrix} 1 \\ 1 \end{bmatrix} e^{4t} + c_{-} \begin{bmatrix} -1 \\ 1 \end{bmatrix} e^{-2t}.
\]
C
We now focus on case (B). The coefficient matrix is real-valued with complex-valued eigenvalues. In this case each eigenvalue is the complex conjugate of the other. A similar result is true for n×n real-valued matrices. When such an n×n matrix has a complex eigenvalue λ, then its complex conjugate \overline{\lambda} is also an eigenvalue. A similar result holds for the respective eigenvectors.
Proof of Theorem 5.3.3: Theorem 5.2.4 implies that the set in (5.3.3) is a linearly independent set. The new information in Theorem 5.3.3 above is the real-valued solutions in Eq. (5.3.4). They are obtained from Eq. (5.3.3) as follows:
\[
x^{\pm} = (a \pm i b)\, e^{(\alpha \pm i\beta)t}
       = e^{\alpha t} (a \pm i b)\, e^{\pm i\beta t}
       = e^{\alpha t} (a \pm i b)\, \big( \cos(\beta t) \pm i \sin(\beta t) \big).
\]
Since the differential equation x' = Ax is linear, the functions below are also solutions,
\[
x^{1} = \frac{1}{2}\big( x^{+} + x^{-} \big) = \big( a \cos(\beta t) - b \sin(\beta t) \big)\, e^{\alpha t},
\]
\[
x^{2} = \frac{1}{2i}\big( x^{+} - x^{-} \big) = \big( a \sin(\beta t) + b \cos(\beta t) \big)\, e^{\alpha t}.
\]
This establishes the Theorem.
Example 5.3.2: Find a real-valued set of fundamental solutions to the differential equation
\[
x' = A x, \qquad A = \begin{bmatrix} 2 & 3 \\ -3 & 2 \end{bmatrix}. \tag{5.3.5}
\]
Solution: The characteristic polynomial of A is p(λ) = (2 − λ)^2 + 9, so the eigenvalues are the complex numbers λ± = 2 ± 3i. Then find the respective eigenvectors. The one corresponding to λ+ is the solution of the homogeneous linear system with coefficients given by
\[
\begin{bmatrix} 2-(2+3i) & 3 \\ -3 & 2-(2+3i) \end{bmatrix} = \begin{bmatrix} -3i & 3 \\ -3 & -3i \end{bmatrix} \rightarrow \begin{bmatrix} -i & 1 \\ -1 & -i \end{bmatrix} \rightarrow \begin{bmatrix} 1 & i \\ -1 & -i \end{bmatrix} \rightarrow \begin{bmatrix} 1 & i \\ 0 & 0 \end{bmatrix}.
\]
Therefore the eigenvector v^{+} = \begin{bmatrix} v_1^{+} \\ v_2^{+} \end{bmatrix} is given by
\[
v_1^{+} = -i\, v_2^{+} \quad \Rightarrow \quad v_2^{+} = 1, \quad v_1^{+} = -i \quad \Rightarrow \quad v^{+} = \begin{bmatrix} -i \\ 1 \end{bmatrix}, \qquad \lambda_{+} = 2 + 3i.
\]
The second eigenvector is the complex conjugate of the eigenvector found above, that is,
\[
v^{-} = \begin{bmatrix} i \\ 1 \end{bmatrix}, \qquad \lambda_{-} = 2 - 3i.
\]
Notice that
\[
v^{(\pm)} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \pm \begin{bmatrix} -1 \\ 0 \end{bmatrix} i.
\]
Then, the real and imaginary parts of the eigenvalues and of the eigenvectors are given by
\[
\alpha = 2, \qquad \beta = 3, \qquad a = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \qquad b = \begin{bmatrix} -1 \\ 0 \end{bmatrix}.
\]
So a real-valued expression for a fundamental set of solutions is given by
\[
x^{1} = \Big( \begin{bmatrix} 0 \\ 1 \end{bmatrix} \cos(3t) - \begin{bmatrix} -1 \\ 0 \end{bmatrix} \sin(3t) \Big) e^{2t} \quad \Rightarrow \quad x^{1} = \begin{bmatrix} \sin(3t) \\ \cos(3t) \end{bmatrix} e^{2t},
\]
\[
x^{2} = \Big( \begin{bmatrix} 0 \\ 1 \end{bmatrix} \sin(3t) + \begin{bmatrix} -1 \\ 0 \end{bmatrix} \cos(3t) \Big) e^{2t} \quad \Rightarrow \quad x^{2} = \begin{bmatrix} -\cos(3t) \\ \sin(3t) \end{bmatrix} e^{2t}.
\]
C
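One can verify directly that these real-valued functions solve x' = Ax; a small sketch, assuming NumPy, checks x^1 at a sample time (the derivative below is computed by hand):

import numpy as np

A = np.array([[2.0, 3.0],
              [-3.0, 2.0]])

def x1(t):
    return np.exp(2*t) * np.array([np.sin(3*t), np.cos(3*t)])

def x1_prime(t):
    # derivative of e^{2t} [sin 3t, cos 3t], computed by the product rule
    return np.exp(2*t) * np.array([2*np.sin(3*t) + 3*np.cos(3*t),
                                   2*np.cos(3*t) - 3*np.sin(3*t)])

t = 0.4
print(np.allclose(x1_prime(t), A @ x1(t)))  # expected: True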
We end with case (C). There are not many possibilities left for a 2 × 2 real matrix that both is diagonalizable and has a repeated eigenvalue. Such a matrix must be proportional to the identity matrix.
Theorem 5.3.4. Every 2×2 diagonalizable matrix with repeated eigenvalue λ0 has the form
A = λ0 I.
Proof of Theorem 5.3.4: Since matrix A is diagonalizable, there exists an invertible matrix P such that A = P D P^{-1}. Since A is 2 × 2 with a repeated eigenvalue λ_0, then
\[
D = \begin{bmatrix} \lambda_0 & 0 \\ 0 & \lambda_0 \end{bmatrix} = \lambda_0\, I_2.
\]
Putting these two facts together,
\[
A = P (\lambda_0 I) P^{-1} = \lambda_0\, P P^{-1} = \lambda_0\, I.
\]
Remark: The general solution x_{\text{gen}} for x' = λ_0 I\, x is simple to write. Since any non-zero 2-vector is an eigenvector of λ_0 I_2, we choose the linearly independent set
\[
\Big\{\, v^{1} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad v^{2} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \,\Big\}.
\]
Using these eigenvectors we can write the general solution,
\[
x_{\text{gen}}(t) = c_1 v^{1} e^{\lambda_0 t} + c_2 v^{2} e^{\lambda_0 t} = c_1 \begin{bmatrix} 1 \\ 0 \end{bmatrix} e^{\lambda_0 t} + c_2 \begin{bmatrix} 0 \\ 1 \end{bmatrix} e^{\lambda_0 t} \quad \Rightarrow \quad x_{\text{gen}}(t) = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} e^{\lambda_0 t}.
\]
where the vector w is one of infinitely many solutions of the algebraic linear system
(A − λI)w = v. (5.3.6)
Remark: The eigenvalue λ is the precise number that makes matrix (A−λI) not invertible,
that is, det(A − λI) = 0. This implies that an algebraic linear system with coefficient
matrix (A − λI) is not consistent for every source. Nevertheless, the Theorem above says
that Eq. (5.3.6) has solutions. The fact that the source vector in that equation is v, an
eigenvector of A, is crucial to show that this system is consistent.
Proof of Theorem 5.3.5: One solution to the differential system is x^{1}(t) = v e^{λt}. Inspired by the reduction of order method we look for a second solution of the form
\[
x^{2}(t) = u(t)\, e^{\lambda t}.
\]
Inserting this function into the differential equation x' = A x we get
\[
u' + \lambda\, u = A\, u \quad \Rightarrow \quad (A - \lambda I)\, u = u'.
\]
We now introduce a power series expansion of the vector-valued function u,
\[
u(t) = u_0 + u_1 t + u_2 t^2 + \cdots,
\]
into the differential equation above,
\[
(A - \lambda I)(u_0 + u_1 t + u_2 t^2 + \cdots) = (u_1 + 2u_2 t + \cdots).
\]
If we evaluate the equation above at t = 0, and then its derivative at t = 0, and so on, we get the following infinite set of linear algebraic equations
\[
(A - \lambda I)u_0 = u_1, \qquad (A - \lambda I)u_1 = 2u_2, \qquad (A - \lambda I)u_2 = 3u_3, \qquad \cdots
\]
Here is where we use the Cayley-Hamilton Theorem. Recall that the characteristic polynomial p(\tilde\lambda) = \det(A - \tilde\lambda I) has the form
\[
p(\tilde\lambda) = \tilde\lambda^2 - \text{tr}(A)\, \tilde\lambda + \det(A).
\]
The Cayley-Hamilton Theorem says that the matrix-valued polynomial p(A) = 0, that is,
\[
A^2 - \text{tr}(A)\, A + \det(A)\, I = 0.
\]
Since in the case we are interested in the matrix A has a repeated root λ, then
\[
p(\tilde\lambda) = (\tilde\lambda - \lambda)^2 = \tilde\lambda^2 - 2\lambda\, \tilde\lambda + \lambda^2.
\]
Therefore, the Cayley-Hamilton Theorem for the matrix in this Theorem has the form
\[
0 = A^2 - 2\lambda\, A + \lambda^2 I \quad \Rightarrow \quad (A - \lambda I)^2 = 0.
\]
This last equation is the one we need to solve the system for the vector-valued function u. Multiply the first equation in the system by (A − λI) and use that (A − λI)^2 = 0; then we get
\[
0 = (A - \lambda I)^2 u_0 = (A - \lambda I)\, u_1 \quad \Rightarrow \quad (A - \lambda I) u_1 = 0.
\]
This implies that u_1 is an eigenvector of A with eigenvalue λ. We can denote it as u_1 = v. Using this information in the rest of the system we get
\[
(A - \lambda I) u_0 = v, \qquad (A - \lambda I) v = 2u_2 \ \Rightarrow\ u_2 = 0, \qquad (A - \lambda I) u_2 = 3u_3 \ \Rightarrow\ u_3 = 0, \qquad \cdots
\]
We conclude that all terms u_2 = u_3 = \cdots = 0. Denoting u_0 = w we obtain the following system of algebraic equations,
\[
(A - \lambda I) w = v, \qquad (A - \lambda I) v = 0.
\]
For vectors v and w solution of the system above we get u(t) = w + t v. This means that the second solution to the differential equation is
\[
x^{2}(t) = (t v + w)\, e^{\lambda t}.
\]
This establishes the Theorem.
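A small numerical sketch of this construction, assuming NumPy and using the matrix of Example 5.3.3 below, solves the singular system (A − λI)w = v by least squares and checks that x^2(t) = (tv + w)e^{λt} solves the differential system:

import numpy as np

# matrix from Example 5.3.3, with the repeated eigenvalue lambda = -1
A = 0.25 * np.array([[-6.0, 4.0],
                     [-1.0, -2.0]])
lam = -1.0
v = np.array([2.0, 1.0])                                    # eigenvector: (A - lam I) v = 0
w, *_ = np.linalg.lstsq(A - lam*np.eye(2), v, rcond=None)   # one solution of (A - lam I) w = v

def x2(t):
    return (t*v + w) * np.exp(lam*t)

def x2_prime(t):
    return (v + lam*(t*v + w)) * np.exp(lam*t)

t = 0.8
print(np.allclose((A - lam*np.eye(2)) @ w, v))   # w solves the algebraic system
print(np.allclose(x2_prime(t), A @ x2(t)))       # x2 solves x' = A x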
Example 5.3.3: Find the fundamental solutions of the differential equation
\[
x' = A x, \qquad A = \frac{1}{4} \begin{bmatrix} -6 & 4 \\ -1 & -2 \end{bmatrix}.
\]
Solution: As usual, we start finding the eigenvalues and eigenvectors of matrix A. The former are the solutions of the characteristic equation
\[
0 = \begin{vmatrix} -\tfrac{3}{2} - \lambda & 1 \\ -\tfrac{1}{4} & -\tfrac{1}{2} - \lambda \end{vmatrix} = \lambda^2 + \tfrac{3}{2}\lambda + \tfrac{1}{2}\lambda + \tfrac{3}{4} + \tfrac{1}{4} = \lambda^2 + 2\lambda + 1 = (\lambda + 1)^2.
\]
Therefore, the solution is the repeated eigenvalue λ = −1. The associated eigenvectors are the vectors v solution to the linear system (A + I)v = 0,
\[
\begin{bmatrix} -\tfrac{3}{2}+1 & 1 \\ -\tfrac{1}{4} & -\tfrac{1}{2}+1 \end{bmatrix} = \begin{bmatrix} -\tfrac{1}{2} & 1 \\ -\tfrac{1}{4} & \tfrac{1}{2} \end{bmatrix} \rightarrow \begin{bmatrix} 1 & -2 \\ 1 & -2 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & -2 \\ 0 & 0 \end{bmatrix} \quad \Rightarrow \quad v_1 = 2 v_2.
\]
Choosing v_2 = 1, then v_1 = 2, and we obtain
\[
\lambda = -1, \qquad v = \begin{bmatrix} 2 \\ 1 \end{bmatrix}.
\]
Any other eigenvector associated to λ = −1 is proportional to the eigenvector above. The matrix A above is not diagonalizable. So we follow Theorem 5.3.5 and we solve for a vector w the linear system (A + I)w = v. The augmented matrix for this system is given by
\[
\left[ \begin{array}{cc|c} -\tfrac{1}{2} & 1 & 2 \\ -\tfrac{1}{4} & \tfrac{1}{2} & 1 \end{array} \right] \rightarrow \left[ \begin{array}{cc|c} 1 & -2 & -4 \\ 1 & -2 & -4 \end{array} \right] \rightarrow \left[ \begin{array}{cc|c} 1 & -2 & -4 \\ 0 & 0 & 0 \end{array} \right] \quad \Rightarrow \quad w_1 = 2 w_2 - 4.
\]
5.3.3. Exercises.
5.3.1.- . 5.3.2.- .
5.4.1. Real distinct eigenvalues. We study the system in (5.4.1) in the case that matrix A has two real eigenvalues λ+ ≠ λ-. The case where one eigenvalue vanishes is left to one of the exercises at the end of the Section. We study the case where both eigenvalues are non-zero. Two non-zero eigenvalues belong to one of the following cases:
(i) λ+ > λ- > 0, both eigenvalues positive;
(ii) λ+ > 0 > λ-, one eigenvalue negative and the other positive;
(iii) 0 > λ+ > λ-, both eigenvalues negative.
In a phase portrait the solution vector x(t) at t is displayed on the plane x1, x2. The whole vector is not shown; only the end point of the vector is shown for t ∈ (−∞, ∞). The result is a curve in the x1, x2 plane. One usually adds arrows to determine the direction of increasing t. A phase portrait contains several curves, each one corresponding to a solution given in Eq. (5.4.2) for a particular choice of constants c+ and c-. A phase diagram can be sketched by following these few steps:
(a) Plot the eigenvectors v+ and v- corresponding to the eigenvalues λ+ and λ-.
(b) Draw the whole lines parallel to these vectors and passing through the origin. These straight lines correspond to solutions with either c+ or c- zero.
(c) Draw arrows on these lines to indicate how the solution changes as the variable t increases. If t is interpreted as time, the arrows indicate how the solution changes into the future. The arrows point towards the origin if the corresponding eigenvalue λ is negative, and they point away from the origin if the eigenvalue is positive.
(d) Find the non-straight curves corresponding to solutions with both coefficients c+ and c- non-zero. Again, arrows on these curves indicate how the solution moves into the future.
Case λ+ > λ- > 0.
Example 5.4.1: Sketch the phase diagram of the solutions to the differential equation
\[
x' = A x, \qquad A = \frac{1}{4} \begin{bmatrix} 11 & 3 \\ 1 & 9 \end{bmatrix}. \tag{5.4.3}
\]
Figure 26. Eight solutions to Eq. (5.4.3), where λ+ > λ- > 0. The trivial
solution x = 0 is called an unstable point.
Solution: In Example 5.2.3 we computed the eigenvalues and eigenvectors of the coefficient
matrix, and the result was
\[
\lambda_{+} = 4, \quad v^{+} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad \text{and} \qquad \lambda_{-} = -2, \quad v^{-} = \begin{bmatrix} -1 \\ 1 \end{bmatrix}.
\]
In that Example we also computed the general solution to the differential equation above,
\[
x(t) = c_{+} v^{+} e^{\lambda_{+} t} + c_{-} v^{-} e^{\lambda_{-} t} \quad \Leftrightarrow \quad x(t) = c_{+} \begin{bmatrix} 1 \\ 1 \end{bmatrix} e^{4t} + c_{-} \begin{bmatrix} -1 \\ 1 \end{bmatrix} e^{-2t}.
\]
In Fig. 27 we have sketched four curves, each representing a solution x corresponding to a
particular choice of the constants c+ and c- . These curves actually represent eight different
solutions, for eight different choices of the constants c+ and c- , as is described below. The
arrows on these curves represent the change in the solution as the variable t grows. The
part of the solution with positive eigenvalue increases exponentially when t grows, while the
part of the solution with negative eigenvalue decreases exponentially when t grows. The
straight lines correspond to the following four solutions:
c+ = 1, c- = 0; c+ = 0, c- = 1; c+ = −1, c- = 0; c+ = 0, c- = −1.
The curved lines on each quadrant correspond to the following four solutions:
c+ = 1, c- = 1; c+ = 1, c- = −1; c+ = −1, c- = 1; c+ = −1, c- = −1.
C
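A rough way to produce such a phase portrait numerically, assuming NumPy and Matplotlib and using the eigenpairs of this example, is to plot x(t) = c+ v+ e^{4t} + c- v- e^{-2t} for a few illustrative choices of c+ and c-:

import numpy as np
import matplotlib.pyplot as plt

# eigenvectors for the eigenvalues 4 and -2 of this example
vp, vm = np.array([1.0, 1.0]), np.array([-1.0, 1.0])
t = np.linspace(-2.0, 2.0, 400)

plt.figure()
for cp in (-1, 0, 1):
    for cm in (-1, 0, 1):
        if cp == 0 and cm == 0:
            continue
        x = cp*np.outer(vp, np.exp(4*t)) + cm*np.outer(vm, np.exp(-2*t))
        plt.plot(x[0], x[1])
plt.xlim(-4, 4); plt.ylim(-4, 4)
plt.xlabel('x1'); plt.ylabel('x2')
plt.title('Saddle: lambda+ = 4 > 0 > lambda- = -2')
plt.show()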
Figure 27. Several solutions to Eq. (5.4.4), λ+ > 0 > λ- . The trivial
solution x = 0 is called a saddle point.
Figure 28. Several solutions to Eq. (5.4.5), where 0 > λ+ > λ- . The trivial
solution x = 0 is called a stable point.
Solution: We have found in Example 5.3.2 that the eigenvalues and eigenvectors of the
coefficient matrix are
\[
\lambda_{\pm} = 2 \pm 3i, \qquad v^{\pm} = \begin{bmatrix} \mp i \\ 1 \end{bmatrix}.
\]
Writing them in real and imaginary parts, λ± = α ± iβ and v± = a ± ib, we get
\[
\alpha = 2, \qquad \beta = 3, \qquad a = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \qquad b = \begin{bmatrix} -1 \\ 0 \end{bmatrix}.
\]
These eigenvalues and eigenvectors imply the following real-valued fundamental solutions,
\[
\Big\{\, x^{1}(t) = \begin{bmatrix} \sin(3t) \\ \cos(3t) \end{bmatrix} e^{2t}, \quad x^{2}(t) = \begin{bmatrix} -\cos(3t) \\ \sin(3t) \end{bmatrix} e^{2t} \,\Big\}. \tag{5.4.7}
\]
The phase diagram of these two fundamental solutions is given in Fig. 29 below. There is also a circle given in that diagram, corresponding to the trajectory of the vectors
\[
\tilde{x}^{1}(t) = \begin{bmatrix} \sin(3t) \\ \cos(3t) \end{bmatrix}, \qquad \tilde{x}^{2}(t) = \begin{bmatrix} -\cos(3t) \\ \sin(3t) \end{bmatrix}.
\]
The phase portrait of these functions is a circle, since they are unit vector-valued functions; they have length one. C
Figure 29. The graph of the fundamental solutions x(1) and x(2) in Eq. (5.4.7).
λ± = α ± iβ, v± = a ± ib.
We now sketch phase portraits of these solutions for a few choices of α, a and b. We start
fixing the vectors a, b and plotting phase diagrams for solutions having α > 0, α = 0,
and α < 0. The result can be seen in Fig. 30. For α > 0 the solutions spiral outward as t
increases, and for α < 0 the solutions spiral inwards to the origin as t increases. The rotation
direction is from vector b towards vector a. The trivial solution x = 0 is called unstable for α > 0 and stable for α < 0.
We now change the direction of vector b, and we repeat the three phase portraits given
above; for α > 0, α = 0, and α < 0. The result is given in Fig. 31. Comparing Figs. 30
and 31 shows that the relative directions of the vectors a and b determines the rotation
direction of the solutions as t increases.
Figure 30. Phase portraits of the solutions for α > 0, α = 0, and α < 0, with the vectors a and b fixed.
Figure 31. Phase portraits of the solutions for α > 0, α = 0, and α < 0, with the direction of the vector b reversed.
5.4.3. Repeated eigenvalues. A matrix with repeated eigenvalues may or may not be
diagonalizable. If a 2 × 2 matrix A is diagonalizable with repeated eigenvalues, then by
Theorem 5.3.4 this matrix is proportional to the identity matrix, A = λ0 I, with λ0 the
repeated eigenvalue. We saw in Section 5.3 that the general solution of a differential system
with such coefficient matrix is
\[
x_{\text{gen}}(t) = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} e^{\lambda_0 t}.
\]
Phase portraits of these solutions are just straight lines, starting from the origin for λ0 > 0,
or ending at the origin for λ0 < 0.
Non-diagonalizable 2 × 2 differential systems are more interesting. If x' = A x is such a system, it has fundamental solutions
\[
x^{1}(t) = v\, e^{\lambda_0 t}, \qquad x^{2}(t) = (t v + w)\, e^{\lambda_0 t}, \tag{5.4.8}
\]
where λ_0 is the repeated eigenvalue of A with eigenvector v, and vector w is any solution of the linear algebraic system
\[
(A - \lambda_0 I)\, w = v.
\]
The phase portrait of these fundamental solutions is given in Fig 32. To construct this
figure start drawing the vectors v and w. The solution x1 is simpler to draw than x2 , since
the former is a straight semi-line starting at the origin and parallel to v.
Figure 32. Functions x1 , x2 in Eq. (5.4.8) for the cases λ0 > 0 and λ0 < 0.
The solution x2 is more difficult to draw. One way is to first draw the trajectory of the
time-dependent vector
x̃2 = v t + w.
This is a straight line parallel to v passing through w, one of the black dashed lines in
Fig. 32, the one passing through w. The solution x2 differs from x̃2 by the multiplicative
factor eλ0 t . Consider the case λ0 > 0. For t > 0 we have x2 (t) > x̃2 (t), and the opposite
happens for t < 0. In the limit t → −∞ the solution values x2 (t) approach the origin,
since the exponential factor eλ0 t decreases faster than the linear factor t increases. The
result is the purple line in the first picture of Fig. 32. The other picture, for λ0 < 0 can be
constructed following similar ideas.
5.4.4. Exercises.
5.4.1.- . 5.4.2.- .
By the end of the seventeenth century Newton had invented differential equations, discovered his laws of motion and the law of universal gravitation. He combined all of them to explain Kepler's laws of planetary motion. Newton solved what now is called the two-body problem. Kepler's laws correspond to the case of one planet orbiting the Sun. People then started to study the three-body problem, for example the movement of the Earth, the Moon, and the Sun. This problem turned out to be far more difficult than the two-body problem and no solution was ever found. Around the end of the nineteenth century Henri Poincaré proved a breakthrough result: the solutions of the three-body problem could not be found explicitly in terms of elementary functions, such as combinations of polynomials, trigonometric functions, exponentials, and logarithms. This led him to invent the so-called Qualitative Theory of Differential Equations. In this theory one studies the geometric properties of solutions: whether they show periodic behavior, tend to fixed points, tend to infinity, etc. This approach evolved into the modern field of dynamics. In this chapter we introduce a few basic concepts and we use them to find qualitative information about a particular type of differential equation, called autonomous equations.
Solution: This is a linear, constant coefficients equation, so it could be solved using the integrating factor method. But this is also a separable equation, so we solve it as follows,
\[
\int \frac{dy}{a y + b} = \int dt \quad \Rightarrow \quad \frac{1}{a} \ln(a y + b) = t + c_0,
\]
so we get
\[
a y + b = e^{a t}\, e^{a c_0},
\]
and denoting c = e^{a c_0} / a, we get the expression
\[
y(t) = c\, e^{a t} - \frac{b}{a}. \tag{6.1.2}
\]
This is the expression for the solution we got in Theorem 1.1.2. C
Figure 33. A few solutions to Eq. (6.1.2) for different c.
However, the solutions of an autonomous equation are sometimes not so simple to understand, even in the case that we can solve the differential equation.
Example 6.1.3: Sketch a qualitative graph of solutions to y' = sin(y), for different initial data conditions y(0) = y0.
Solution: We first find the exact solutions and then we see if we can graph them. The
equation is separable, then
\[
\frac{y'(t)}{\sin\big(y(t)\big)} = 1 \quad \Rightarrow \quad \int_0^t \frac{y'(t)}{\sin\big(y(t)\big)}\, dt = t.
\]
Sometimes the exact expression for the solution of a differential equation is difficult to interpret. For example, take the solution in (6.1.3), in Example 6.1.3. It is not so easy to see, for an arbitrary initial condition y0, what the behavior of the solution values y(t) is as t → ∞. To be able to answer questions like this one, we introduce a new approach, a geometric approach.
6.1.2. A Geometric Analysis. The idea is to obtain qualitative information about solu-
tions to an autonomous equation using the equation itself, without solving it. We now use
the equation of Example 6.1.3 to show how this can be done.
Example 6.1.4: Sketch a qualitative graph of solutions to y' = sin(y), for different initial data conditions y(0).
Solution: The differential equation has the form y' = f(y), where f(y) = sin(y). The first
step in the graphical approach is to graph the function f .
Figure 34. The graph of f(y) = sin(y).
The second step is to identify all the zeros of the function f . In this case,
f (y) = sin(y) = 0 ⇒ yn = nπ, where n = · · · , −2, −1, 0, 1, 2, · · · .
It is important to realize that these constants yn are solutions of the differential equation. On the one hand, they are constants, t-independent, so yn' = 0. On the other hand, these constants yn are zeros of f, hence f(yn) = 0. So the yn are solutions of the differential equation, since
\[
0 = y_n' = f(y_n) = 0.
\]
These t-independent solutions, yn , are called stationary solutions. They are also called
equilibrium solutions, or fixed points, or critical points.
The third step is to identify the regions on the line where f is positive, and where f is
negative. These regions are bounded by the critical points. Now, in an interval where f > 0
write a right arrow, and in the intervals where f < 0 write a left arrow, as shown below.
Figure 35. Critical points and increase/decrease information added to Fig. 34.
It is important to notice that in the regions where f > 0 a solution y is increasing, and in the regions where f < 0 a solution y is decreasing. The reason for this claim is, of course, the differential equation, y' = f(y).
The fourth step is to find the regions where the curvature of a solution is concave up or concave down. That information is given by y''. But the differential equation relates y'' to f and f': the chain rule implies y'' = f'(y) f(y), so a solution is concave up (CU) where f(y) f'(y) > 0 and concave down (CD) where f(y) f'(y) < 0.
This is all the information we need to sketch a qualitative graph of solutions to the differential equation. So, the last step is to put all this information on a yt-plane. The horizontal axis above is now the vertical axis, and we now plot solutions y of the differential equation. The result is given below.
Figure. Qualitative graphs of solutions y(t) of y' = sin(y) in the yt-plane: the stationary solutions y = −2π, 0, 2π are unstable and y = −π, π are stable; between them the solutions change concavity (CU/CD).
The picture above contains the graph of several solutions y for different choices of initial data y(0). Stationary solutions are in blue, t-dependent solutions in green. The stationary solutions are separated in two types. The stable solutions y_{-1} = −π, y_{1} = π are pictured with solid blue lines. The unstable solutions y_{-2} = −2π, y_{0} = 0, y_{2} = 2π are pictured with dashed blue lines. C
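The qualitative picture can be confirmed with a direct numerical integration; a brief sketch, assuming SciPy and a few illustrative initial conditions, is:

import numpy as np
from scipy.integrate import solve_ivp

# integrate y' = sin(y) for several initial conditions and inspect the long-time values
for y0 in (-5.0, -2.0, -0.5, 0.5, 2.0, 5.0):
    sol = solve_ivp(lambda t, y: np.sin(y), (0.0, 30.0), [y0], rtol=1e-8)
    print(f"y(0) = {y0:5.1f}  ->  y(30) ~ {sol.y[0, -1]: .4f}")
# solutions starting in (0, 2*pi) approach pi, those starting in (-2*pi, 0) approach -pi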
Remark: A qualitative graph of the solutions does not provide all the possible information
about the solution. For example, we know from the graph above that for some initial
conditions the corresponding solutions have inflection points at some t > 0. But we cannot
know the exact value of t where the inflection point occurs. Such information could be
useful to have, since |y'| has its maximum value at those points.
The geometric approach used in Example 6.1.3 suggests the following definitions.
Definition 6.1.2.
(i) A constant yc is a critical point of the equation y' = f(y) iff it holds that f(yc) = 0.
(ii) A critical point yc is stable iff f(y) > 0 for every y < yc and f(y) < 0 for every y > yc, in a neighborhood of yc.
(iii) A critical point yc is unstable iff f(y) < 0 for every y < yc and f(y) > 0 for every y > yc, in a neighborhood of yc.
(iv) A critical point yc is semistable iff the point is stable on one side of the critical point and unstable on the other side.
Remarks:
(a) Critical points are also called fixed points, stationary solutions, equilibrium solutions, critical solutions. We may use all these names in these notes. Stable points are also called attractors or sinks. Unstable points are also called repellers or sources. Semistable points are also called neutral points.
(b) That a critical point is stable means that for initial data close enough to the critical
point all solutions approach the critical point as t → ∞.
In Example 6.1.3 the critical points are yn = nπ. In the second graph of that example we only marked −2π, −π, 0, π, and 2π. Filled dots represent stable critical points, and white dots represent unstable or semistable critical points. In this example all white points are unstable points.
In that second graph one can see that stable critical points have green arrows directed
to them on both sides, and unstable points have arrows directed away from them on both
sides. This is always the case for stable and unstable critical points. A semistable point
would have one arrow pointing to the point on one side, and the other arrow pointing away
from the point on the other side.
In terms of the differential equation critical points represent stationary solutions, also
called t-independent solutions, or equilibrium solutions, or steady solutions. We will usually
mention critical points as stationary solutions when we describe them in a yt-plane, and we
reserve the name critical point when we describe them in a y-line.
On the last graph in Example 6.1.3 we have pictured the stationary solutions that are stable with a solid line, and those that are unstable with a dashed line. Semistable stationary solutions are also pictured with dashed lines. An equilibrium solution is defined to be stable if all sufficiently small disturbances away from it damp out in time. An equilibrium solution is defined to be unstable if sufficiently small disturbances away from it grow in time.
Example 6.1.5: Find all the critical points of the first order linear system
\[
y' = a\, y.
\]
Study the stability of the critical points both for a > 0 and for a < 0. Sketch qualitative
graphs of solutions close to the critical points.
Solution: This is an equation of the form y' = f(y) for f(y) = ay. The critical points are the constants yc such that 0 = f(yc) = a yc, so yc = 0. We could now use the graphical method to study the stability of the critical point yc = 0, but we do not need to do it. This equation is the particular case b = 0 of the equation solved in Example 6.1.2. So the solution for arbitrary initial data y(0) = y0 is
\[
y(t) = y_0\, e^{a t}.
\]
We use this expression to graph the solutions near a critical point. The result is shown
below.
Figure 38. The graph of the functions y(t) = y(0) eat for a > 0 and a < 0.
We conclude that the critical point yc = 0 is stable for a < 0 and is unstable for a > 0. C
Remark: The stability of the critical point yc = 0 of the linear system y' = ay will be important when we study the linearization of a nonlinear autonomous system. For that reason we highlighted these stability results in Example 6.1.5.
6.1.3. Population Growth Models. The simplest model for the population growth of an organism is N' = rN, where N(t) is the population at time t and r > 0 is the growth rate. This model predicts exponential population growth N(t) = N0 e^{rt}, where N0 = N(0). We studied this model in § 1.5. Among other things, this model assumes that the organisms have unlimited food supply. This assumption implies that the per capita growth N'/N = r is constant.
A more realistic model assumes that the per capita growth decreases linearly with N, starting with a positive value, r, and going down to zero for a critical population N = K > 0. So when we consider the per capita growth N'/N as a function of N, it must be given by the formula N'/N = −(r/K)N + r. This equation, when thought of as a differential equation for N, is called the logistic equation model for population growth.
Definition 6.1.3. The logistic equation describes the organisms population function N in time as the solution of the autonomous differential equation
\[
N' = r N \Big( 1 - \frac{N}{K} \Big),
\]
where the initial growth rate constant r and the carrying capacity constant K are positive.
We now use the graphical method to carry out a stability analysis of the logistic popu-
lation growth model. Later on we find the explicit solution of the differential equation. We
can then compare the two approaches to study the solutions of the model.
Example 6.1.6: Sketch a qualitative graph of solutions for different initial data conditions
y(0) = y0 to the logistic equation below, where r and K are given positive constants,
\[
y' = r y \Big( 1 - \frac{y}{K} \Big).
\]
Solution: The logistic differential equation for population growth can be written as y' = f(y), where the function f is the polynomial
\[
f(y) = r y \Big( 1 - \frac{y}{K} \Big).
\]
The first step in the graphical approach is to graph the function f. The result is in Fig. 39, which also shows the maximum value rK/4 of f, attained at y = K/2.
Figure 39. The graph of f.
The second step is to identify all critical points of the equation. The critical points are the zeros of the function f. In this case, f(y) = 0 implies
\[
y_0 = 0, \qquad y_1 = K.
\]
The third step is to find out whether the critical points are stable or unstable. Where function f is positive, a solution will be increasing, and where function f is negative a solution will be decreasing. These regions are bounded by the critical points. Now, in an interval where f > 0 write a right arrow, and in the intervals where f < 0 write a left arrow, as shown in Fig. 40.
Figure 40. Critical points added.
The fourth step is to find the regions where the curvature of a solution is concave up or concave down. That information is given by y''. But the differential equation relates y'' to f(y) and f'(y). We have shown in Example 6.1.4 that the chain rule and the differential equation imply
\[
y'' = f'(y)\, f(y), \qquad f'(y) = r - \frac{2r}{K}\, y.
\]
So in the regions where f(y) f'(y) > 0 a solution is concave up (CU), and in the regions where f(y) f'(y) < 0 a solution is concave down (CD). The result is in Fig. 41.
Figure 41. Concavity information added.
This is all the information we need to sketch a qualitative graph of solutions to the
differential equation. So, the last step is to put all this information on a yt-plane. The
horizontal axis above is now the vertical axis, and we now plot solutions y of the differential
equation. The result is given in Fig. 42.
Figure 42. Qualitative graphs of solutions y(t) of the logistic equation in the yt-plane: the stationary solution y1 = K is stable, y0 = 0 is unstable, and solutions change concavity at y = K/2.
The picture above contains the graph of several solutions y for different choices of initial
data y(0). Stationary solutions are in blue, t-dependent solutions in green. The stationary
solution y0 = 0 is unstable and pictured with a dashed blue line. The stationary solution
y1 = K is stable and pictured with a solid blue line. C
In Examples 6.1.4 and 6.1.6 we have used that the second derivative of the solution
function is related to f and f 0 . This is a result that we remark here in its own statement.
Theorem 6.1.4. If y is a solution of the autonomous system y' = f(y), then
\[
y'' = f'(y)\, f(y).
\]
Remark: This result has been used to find out the curvature of the solution y of an autonomous system y' = f(y). The graph of y has positive curvature iff f'(y) f(y) > 0 and negative curvature iff f'(y) f(y) < 0.
Proof:
\[
y'' = \frac{d}{dt}\frac{dy}{dt} = \frac{d}{dt} f\big(y(t)\big) = \frac{df}{dy}\, \frac{dy}{dt} \quad \Rightarrow \quad y'' = f'(y)\, f(y).
\]
Remark: The logistic equation is, of course, a separable equation, so it can be solved using
the method from § 1.3. We solve it below, so you can compare the qualitative graphs from
Example 6.1.6 with the exact solution below.
Example 6.1.7: Find the exact expression for the solution to the logistic equation for population growth
\[
y' = r y \Big( 1 - \frac{y}{K} \Big), \qquad y(0) = y_0, \qquad 0 < y_0 < K.
\]
Solution: The equation is separable; a partial fraction decomposition of the left-hand side gives
\[
\frac{y'}{y\, \big( 1 - \tfrac{y}{K} \big)} = r \quad \Rightarrow \quad \ln\Big| \frac{y}{K - y} \Big| = r t + c_0 \quad \Rightarrow \quad \Big| \frac{y}{K - y} \Big| = c\, e^{r t}.
\]
The analysis done in Example 6.1.4 says that for initial data 0 < y0 < K we can discard the absolute values in the expression above for the solution. Now the initial condition fixes the value of the constant c,
\[
\frac{y_0}{K - y_0} = c.
\]
Then, reordering terms we get the expression
\[
y(t) = \frac{K y_0}{y_0 + (K - y_0)\, e^{-r t}}.
\]
C
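A short numerical sketch, assuming SciPy and illustrative values of r, K and y0, compares this exact expression with a direct integration of the logistic equation:

import numpy as np
from scipy.integrate import solve_ivp

r, K, y0 = 0.8, 10.0, 2.0   # illustrative parameter values

def exact(t):
    # closed-form logistic solution derived above
    return K*y0 / (y0 + (K - y0)*np.exp(-r*t))

sol = solve_ivp(lambda t, y: r*y*(1 - y/K), (0.0, 10.0), [y0], rtol=1e-9)
print(np.allclose(sol.y[0], exact(sol.t), rtol=1e-5))  # expected: True
print(exact(50.0))  # approaches the carrying capacity K = 10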
Remark: The expression above provides all solutions to the logistic equation with initial data on the interval (0, K). But a stability analysis of the equation's critical points is quite involved if we use that expression for the solutions. It is in this case that the geometrical analysis in Example 6.1.6 is quite useful.
6.1.4. Linear Stability Analysis. The geometrical analysis described above is useful to
get a quick qualitative picture of solutions to an autonomous differential system. But it is
always nice to complement geometric methods with analytic methods. For example, one
would like an analytic way to determine the stability of a critical point. One would also like
a quantitative measure of a solution decay rate to a stationary solution. A linear stability
analysis can provide this type of information.
One can get information about a solution of a nonlinear equation near a critical point
by studying an appropriate linear equation. More precisely, the solutions to a nonlinear
differential equation that are close to a stationary solution can be approximated by the
solutions of an appropriate linear differential equation. This linear equation is called the
linearization of the nonlinear equation computed at the stationary solution.
Definition 6.1.5. The linearization of the autonomous system y' = f(y) at the critical point yc is the linear differential system for the unknown function ξ given by
\[
\xi' = f'(y_c)\, \xi.
\]
In the example above we have used a result that we highlight in the following statement.
Theorem 6.1.6. The trivial solution ξ = 0 of the constant coefficients equation
\[
\xi' = a\, \xi
\]
is stable iff a < 0, and it is unstable iff a > 0.
Proof of Theorem 6.1.6: The stability analysis follows from the explicit solutions to the differential equation, ξ(t) = ξ(0) e^{at}. For a > 0 the solutions diverge to ±∞ as t → ∞, and for a < 0 the solutions approach zero as t → ∞.
Example 6.1.9: Find the linearization of the logistic equation y' = r y (1 − y/K) at the critical points y0 = 0 and y1 = K. Solve the linear equations for arbitrary initial data.
Solution: If we write the nonlinear system as y' = f(y), then f(y) = r y − (r/K) y^2. Then f'(y) = r − (2r/K) y. For the critical point y0 = 0 we get the linearized system
\[
\xi_0'(t) = r\, \xi_0 \quad \Rightarrow \quad \xi_0(t) = \xi_0(0)\, e^{r t}.
\]
For the critical point y1 = K we get the linearized system
\[
\xi_1'(t) = -r\, \xi_1 \quad \Rightarrow \quad \xi_1(t) = \xi_1(0)\, e^{-r t}.
\]
From this last expression we can see that for y0 = 0 the critical solution ξ0 = 0 is unstable,
while for y1 = K the critical solution ξ1 = 0 is stable. The stability of the trivial solution
ξ0 = ξ1 = 0 of the linearized system coincides with the stability of the critical points y0 = 0,
y1 = K for the nonlinear equation. C
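The same conclusion can be read off numerically from the sign of f'(yc); a minimal sketch, assuming illustrative values of r and K, is:

r, K = 0.8, 10.0                      # illustrative parameter values

def f(y):                             # logistic right-hand side
    return r*y*(1 - y/K)

def fp(y):                            # its derivative f'(y) = r - 2 r y / K
    return r - 2*r*y/K

for yc in (0.0, K):                   # the two critical points
    slope = fp(yc)
    print(f"y_c = {yc:4.1f}: f'(y_c) = {slope:+.2f} -> {'stable' if slope < 0 else 'unstable'}")
# expected: y_c = 0 is unstable (f' = +r), y_c = K is stable (f' = -r)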
Remark: In Examples 6.1.8 and 6.1.9 we have seen that the stability of a critical point yc of a nonlinear differential equation y' = f(y) is the same as the stability of the trivial solution ξ = 0 of the linearized equation ξ' = f'(yc) ξ. This is a general result, which we state below.
Theorem 6.1.7. Let yc be a critical point of the autonomous system y' = f(y).
(a) The critical point yc is stable iff f'(yc) < 0.
(b) The critical point yc is unstable iff f'(yc) > 0.
Furthermore, if the initial data y(0) ≃ yc is close enough to the critical point yc, then the solution with that initial data of the equation y' = f(y) remains close to yc in the sense that
\[
y(t) \simeq y_c + \xi(t),
\]
where ξ is the solution to the linearized equation at the critical point yc,
\[
\xi' = f'(y_c)\, \xi, \qquad \xi(0) = y(0) - y_c.
\]
Remark: The proof of this result can be found in § 43 in Simmons’ textbook [10].
Remark: The first part of Theorem 6.1.7 highlights the importance of the sign of the coefficient f'(yc), which determines the stability of the critical point yc. The furthermore part of the Theorem highlights how stable a critical point is. The value |f'(yc)| plays the role of an exponential growth or an exponential decay rate. Its reciprocal, 1/|f'(yc)|, is a characteristic scale. It determines the value of t required for the solution y to vary significantly in a neighborhood of the critical point yc.
Notes
This section follows a few parts of Chapter 2 in Steven Strogatz’s book on Nonlinear
Dynamics and Chaos, [12], and also § 2.5 in Boyce DiPrima classic textbook [3].
6.1.5. Exercises.
6.1.1.- . 6.1.2.- .
There are other problems associated to the differential equation above. The following
one is called a boundary value problem.
Definition 7.1.2 (BVP). Given the constants t0 6= t1 , y0 and y1 , find a solution y of
Eq. (7.1.1) satisfying the boundary conditions
y(t0 ) = y0 , y(t1 ) = y1 . (7.1.3)
One could say that the origins of the names "initial value problem" and "boundary value problem" are in physics. Newton's second law of motion for a point particle was
the differential equation to solve in an initial value problem; the unknown function y was
interpreted as the position of the point particle; the independent variable t was interpreted
as time; and the additional conditions in Eq. (7.1.2) were interpreted as specifying the
position and velocity of the particle at an initial time. In a boundary value problem, the
differential equation was any equation describing a physical property of the system under
study, for example the temperature of a solid bar; the unknown function y represented any
physical property of the system, for example the temperature; the independent variable t
represented position in space, and it is usually denoted by x; and the additional conditions
given in Eq. (7.1.3) represent conditions on the physical quantity y at two different positions
in space given by t0 and t1 , which are usually the boundaries of the system under study, for
example the temperature at the boundaries of the bar. This originates the name “boundary
value problem”.
We mentioned above that the initial value problem for Eq. (7.1.1) always has a unique solution for every choice of constants y0 and y1, a result presented in Theorem ??. The case of the
boundary value problem for Eq. (7.1.1) is more complicated. A boundary value problem
may have a unique solution, or may have infinitely many solutions, or may have no solution,
depending on the boundary conditions. This result is stated in a precise way below.
Theorem 7.1.3 (BVP). Fix real constants a1, a0, and let r± be the roots of the characteristic polynomial p(r) = r^2 + a1 r + a0.
(i) If the roots r± ∈ R, then the boundary value problem given by Eqs. (7.1.1) and (7.1.3)
has a unique solution for all y0 , y1 ∈ R.
(ii) If the roots r± form a complex conjugate pair, that is, r± = α±βi, with α, β ∈ R, then
the solution of the boundary value problem given by Eqs. (7.1.1) and (7.1.3) belongs
to only one of the following three possibilities:
(a) There exists a unique solution;
(b) There exist infinitely many solutions;
(c) There exists no solution.
Before presenting the proof of Theorem 7.1.3 concerning boundary value problem, let
us review part of the proof of Theorem ?? concerning initial value problems using matrix
notation to highlight the crucial part of the calculations. For the simplicity of this review,
we only consider the case r+ 6= r- . In this case the general solution of Eq. (7.1.1) can be
expressed as follows,
\[
y(t) = c_1 e^{r_- t} + c_2 e^{r_+ t}, \qquad c_1, c_2 \in \mathbb{R}.
\]
The initial conditions in Eq. (7.1.2) determine the values of the constants c1 and c2 as follows:
\[
\left.
\begin{aligned}
y_0 &= y(t_0) = c_1 e^{r_- t_0} + c_2 e^{r_+ t_0} \\
y_1 &= y'(t_0) = c_1 r_- e^{r_- t_0} + c_2 r_+ e^{r_+ t_0}
\end{aligned}
\right\}
\quad \Rightarrow \quad
\begin{bmatrix} e^{r_- t_0} & e^{r_+ t_0} \\ r_- e^{r_- t_0} & r_+ e^{r_+ t_0} \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} y_0 \\ y_1 \end{bmatrix}.
\]
The linear system above has a unique solution c1 and c2 for every pair of constants y0 and y1 iff the determinant of the coefficient matrix Z is non-zero, where
\[
Z = \begin{bmatrix} e^{r_- t_0} & e^{r_+ t_0} \\ r_- e^{r_- t_0} & r_+ e^{r_+ t_0} \end{bmatrix}.
\]
A simple calculation shows
\[
\det(Z) = \big( r_+ - r_- \big)\, e^{(r_+ + r_-)\, t_0} \neq 0 \quad \Leftrightarrow \quad r_+ \neq r_-.
\]
Since r+ ≠ r-, the matrix Z is invertible and so the initial value problem above has a unique solution for every choice of the constants y0 and y1. The proof of Theorem 7.1.3 for the boundary value problem follows the same steps we described above: first find the general solution to the differential equation, second find whether the matrix Z above is invertible or not.
Proof of Theorem 7.1.3:
Part (i): Assume that r± are real numbers. We have two cases, r+ 6= r- and r+ = r- . In
the former case the general solution to Eq. (7.1.1) is given by
\[
y(t) = c_1 e^{r_- t} + c_2 e^{r_+ t}, \qquad c_1, c_2 \in \mathbb{R}. \tag{7.1.4}
\]
The boundary conditions in Eq. (7.1.3) determine the values of the constants c1, c2, since
\[
\left.
\begin{aligned}
y_0 &= y(t_0) = c_1 e^{r_- t_0} + c_2 e^{r_+ t_0} \\
y_1 &= y(t_1) = c_1 e^{r_- t_1} + c_2 e^{r_+ t_1}
\end{aligned}
\right\}
\quad \Rightarrow \quad
\begin{bmatrix} e^{r_- t_0} & e^{r_+ t_0} \\ e^{r_- t_1} & e^{r_+ t_1} \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} y_0 \\ y_1 \end{bmatrix}. \tag{7.1.5}
\]
The linear system above has a unique solution c1 and c2 for every pair of constants y0 and y1 iff the determinant of the coefficient matrix Z is non-zero, where
\[
Z = \begin{bmatrix} e^{r_- t_0} & e^{r_+ t_0} \\ e^{r_- t_1} & e^{r_+ t_1} \end{bmatrix}. \tag{7.1.6}
\]
A straightforward calculation shows
\[
\det(Z) = e^{r_+ t_1} e^{r_- t_0} - e^{r_+ t_0} e^{r_- t_1} = e^{r_+ t_0} e^{r_- t_0} \big( e^{r_+ (t_1 - t_0)} - e^{r_- (t_1 - t_0)} \big). \tag{7.1.7}
\]
So it is simple to verify that
\[
\det(Z) \neq 0 \quad \Leftrightarrow \quad e^{r_+ (t_1 - t_0)} \neq e^{r_- (t_1 - t_0)} \quad \Leftrightarrow \quad r_+ \neq r_-. \tag{7.1.8}
\]
Therefore, in the case r+ 6= r- the matrix Z is invertible and so the boundary value problem
above has a unique solution for every choice of the constants y0 and y1 . In the case that
r+ = r- = r0 , then we have to start over, since the general solution of Eq. (7.1.1) is not given
by Eq. (7.1.4) but by the following expression
\[
y(t) = (c_1 + c_2 t)\, e^{r_0 t}, \qquad c_1, c_2 \in \mathbb{R}.
\]
Again, the boundary conditions in Eq. (7.1.3) determine the values of the constants c1 and c2 as follows:
\[
\left.
\begin{aligned}
y_0 &= y(t_0) = c_1 e^{r_0 t_0} + c_2 t_0 e^{r_0 t_0} \\
y_1 &= y(t_1) = c_1 e^{r_0 t_1} + c_2 t_1 e^{r_0 t_1}
\end{aligned}
\right\}
\quad \Rightarrow \quad
\begin{bmatrix} e^{r_0 t_0} & t_0 e^{r_0 t_0} \\ e^{r_0 t_1} & t_1 e^{r_0 t_1} \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} y_0 \\ y_1 \end{bmatrix}.
\]
The linear system above has a unique solution c1 and c2 for every pair of constants y0 and y1 iff the determinant of the coefficient matrix Z is non-zero, where
\[
Z = \begin{bmatrix} e^{r_0 t_0} & t_0 e^{r_0 t_0} \\ e^{r_0 t_1} & t_1 e^{r_0 t_1} \end{bmatrix}.
\]
A simple calculation shows
\[
\det(Z) = t_1 e^{r_0 (t_1 + t_0)} - t_0 e^{r_0 (t_1 + t_0)} = (t_1 - t_0)\, e^{r_0 (t_1 + t_0)} \neq 0 \quad \Leftrightarrow \quad t_1 \neq t_0.
\]
Therefore, in the case r+ = r- = r0 the matrix Z is again invertible and so the boundary
value problem above has a unique solution for every choice of the constants y0 and y1 . This
establishes part (i) of the Theorem.
Part (ii): Assume that the roots of the characteristic polynomial have the form r± = α±βi
with β 6= 0. In this case the general solution to Eq. (7.1.1) is still given by Eq. (7.1.4), and
the boundary condition of the problem are still given by Eq. (7.1.5). The determinant of
matrix Z introduced in Eq. (7.1.6) is still given by Eq. (7.1.7). However, the claim given
in Eq. (7.1.8) is true only in the case that r± are real numbers, and it does not hold in the
case that r± are complex numbers. The reason is the following calculation: Let us start
with Eq. (7.1.7) and then introduce that r± = α ± βi:
\[
\det(Z) = e^{(r_+ + r_-)\, t_0} \big( e^{r_+ (t_1 - t_0)} - e^{r_- (t_1 - t_0)} \big).
\]
case, when y0 and y1 are such that Eq. (7.1.5) has no solution is the case in part (iic). This
establishes the Theorem.
Our first example is a boundary value problem with a unique solution. This corresponds
to case (iia) in Theorem 7.1.3. The matrix Z defined in the proof of that Theorem is
invertible and the boundary value problem has a unique solution for every y0 and y1 .
Example 7.1.1: Find the solution y(x) to the boundary value problem
y 00 + 4y = 0, y(0) = 1, y(π/4) = −1.
Solution: We first find the general solution to the differential equation above. We know
that we have to look for solutions of the form y(x) = erx , with the constant r being solutions
of the characteristic equation
r2 + 4 = 0 ⇔ r± = ±2i.
We know that in this case we can express the general solution to the differential equation
above as follows,
y(x) = c1 cos(2x) + c2 sin(2x).
The boundary conditions imply the following system of linear equation for c1 and c2 ,
)
1 = y(0) = c1
1 0 c1 1
⇒ = .
−1 = y(π/4) = c2 0 1 c2 −1
The linear system above has the unique solution c1 = 1 and c2 = −1. Hence, the boundary
value problem above has the unique solution
y(x) = cos(2x) − sin(2x).
C
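A numerical cross-check of this boundary value problem, assuming SciPy's solve_bvp, is sketched below; it reproduces the solution y(x) = cos(2x) − sin(2x):

import numpy as np
from scipy.integrate import solve_bvp

# y'' + 4y = 0, y(0) = 1, y(pi/4) = -1, written as a first-order system
def rhs(x, Y):
    return np.vstack([Y[1], -4.0*Y[0]])

def bc(Ya, Yb):
    return np.array([Ya[0] - 1.0, Yb[0] + 1.0])

x = np.linspace(0.0, np.pi/4, 50)
sol = solve_bvp(rhs, bc, x, np.zeros((2, x.size)))
exact = np.cos(2*x) - np.sin(2*x)
print(np.allclose(sol.sol(x)[0], exact, atol=1e-4))  # expected: True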
The following example is a small variation of the previous one; we change the value of the constant t1, which is the place where we impose the second boundary condition, from π/4 to π/2. This is enough to have a boundary value problem with infinitely many
solutions, corresponding to case (iib) in Theorem 7.1.3. The matrix Z in the proof of this
Theorem 7.1.3 is not invertible in this case, and the values of the constants t0 , t1 , y0 , y1 , a1
and a0 are such that there are infinitely many solutions.
Example 7.1.2: Find the solution y(x) to the boundary value problem
y 00 + 4y = 0, y(0) = 1, y(π/2) = −1.
Solution: The general solution is the same as in Example 7.1.1 above, that is,
y(x) = c1 cos(2x) + c2 sin(2x).
The boundary conditions imply the following system of linear equations for c1 and c2,
\[
\left.
\begin{aligned}
1 &= y(0) = c_1 \\
-1 &= y(\pi/2) = -c_1
\end{aligned}
\right\}
\quad \Rightarrow \quad
\begin{bmatrix} 1 & 0 \\ -1 & 0 \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}.
\]
The linear system above has infinitely many solutions, as can be seen from the following Gauss elimination operations,
\[
\left[ \begin{array}{cc|c} 1 & 0 & 1 \\ -1 & 0 & -1 \end{array} \right] \rightarrow \left[ \begin{array}{cc|c} 1 & 0 & 1 \\ 0 & 0 & 0 \end{array} \right] \quad \Rightarrow \quad c_1 = 1, \quad c_2 \ \text{free}.
\]
Hence, the boundary value problem above has infinitely many solutions given by
y(x) = cos(2x) + c2 sin(2x), c2 ∈ R.
C
The following example again is a small variation of the previous one; this time we change the value of the constant y1 from −1 to 1. This is enough to have a boundary value problem with no solutions, corresponding to case (iic) in Theorem 7.1.3. The matrix Z in the proof of Theorem 7.1.3 is still not invertible, and the values of the constants t0, t1, y0, y1, a1 and a0 are such that there is no solution.
Example 7.1.3: Find the solution y to the boundary value problem
y 00 (x) + 4y(x) = 0, y(0) = 1, y(π/2) = 1.
Solution: The general solution is the same as in Examples 7.1.1 and 7.1.2 above, that is,
y(x) = c1 cos(2x) + c2 sin(2x).
The boundary conditions imply the following system of linear equations for c1 and c2,
\[
1 = y(0) = c_1, \qquad 1 = y(\pi/2) = -c_1.
\]
From the equations above we see that there is no solution for c1, hence there is no solution for the boundary value problem above.
Remark: We now use matrix notation, in order to follow the same steps we did in the proof of Theorem 7.1.3:
\[
\begin{bmatrix} 1 & 0 \\ -1 & 0 \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.
\]
The linear system above has no solutions, as can be seen from the following Gauss elimination operations,
\[
\left[ \begin{array}{cc|c} 1 & 0 & 1 \\ -1 & 0 & 1 \end{array} \right] \rightarrow \left[ \begin{array}{cc|c} 1 & 0 & 1 \\ 0 & 0 & 2 \end{array} \right] \rightarrow \left[ \begin{array}{cc|c} 1 & 0 & 1 \\ 0 & 0 & 1 \end{array} \right].
\]
Hence, there are no solutions to the linear system above. C
Proof of Theorem 7.1.4: We first look for solutions having eigenvalue λ = 0. In such a
case the general solution to the differential equation in (7.1.10) is given by
y(x) = c1 + c2 x, c1 , c2 ∈ R.
The boundary conditions imply the following conditions on c1 and c2,
\[
0 = y(0) = c_1, \qquad 0 = y(\ell) = c_1 + c_2 \ell \quad \Rightarrow \quad c_1 = c_2 = 0.
\]
Since the only solution in this case is y = 0, there are no non-zero solutions.
We now look for solutions having eigenvalue λ > 0. In this case we redefine the eigenvalue
as λ = µ2 , with µ > 0. The general solution to the differential equation in (7.1.10) is given
by
y(x) = c1 e−µx + c2 eµx ,
where we used that the roots of the characteristic polynomial r^2 − µ^2 = 0 are given by r± = ±µ. The boundary conditions imply the following conditions on c1 and c2,
\[
\left.
\begin{aligned}
0 &= y(0) = c_1 + c_2 \\
0 &= y(\ell) = c_1 e^{-\mu \ell} + c_2 e^{\mu \ell}
\end{aligned}
\right\}
\quad \Rightarrow \quad
\begin{bmatrix} 1 & 1 \\ e^{-\mu \ell} & e^{\mu \ell} \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]
Denoting by
\[
Z = \begin{bmatrix} 1 & 1 \\ e^{-\mu \ell} & e^{\mu \ell} \end{bmatrix},
\]
we see that
\[
\det(Z) = e^{\mu \ell} - e^{-\mu \ell} \neq 0 \quad \Leftrightarrow \quad \mu \neq 0.
\]
Hence the matrix Z is invertible, and then we conclude that the linear system above for c1 ,
c2 has a unique solution given by c1 = c2 = 0, and so y = 0. Therefore there are no non-zero
solutions y in the case that λ > 0.
We now study the last case, when the eigenvalue λ < 0. In this case we redefine the
eigenvalue as λ = −µ2 , with µ > 0, and the general solution to the differential equation
in (7.1.10) is given by
\[
y(x) = \tilde{c}_1 e^{-i\mu x} + \tilde{c}_2 e^{i\mu x},
\]
where we used that the roots of the characteristic polynomial r^2 + µ^2 = 0 are given by r± = ±iµ. In a case like this one, when the roots of the characteristic polynomial are complex, it is convenient to express the general solution above as a linear combination of real-valued functions,
\[
y(x) = c_1 \cos(\mu x) + c_2 \sin(\mu x).
\]
The boundary conditions imply the following conditions on c1 and c2,
\[
0 = y(0) = c_1, \qquad 0 = y(\ell) = c_1 \cos(\mu \ell) + c_2 \sin(\mu \ell) \quad \Rightarrow \quad c_2 \sin(\mu \ell) = 0.
\]
Since we are interested in non-zero solutions y, we look for solutions with c2 6= 0. This
implies that µ cannot be arbitrary but must satisfy the equation
\[
\sin(\mu \ell) = 0 \quad \Leftrightarrow \quad \mu_n \ell = n\pi, \quad n \in \mathbb{N}.
\]
We therefore conclude that the eigenvalues and eigenfunctions are given by
\[
\lambda_n = -\frac{n^2 \pi^2}{\ell^2}, \qquad y_n(x) = c_n \sin\Big( \frac{n\pi x}{\ell} \Big).
\]
Choosing the free constants cn = 1 we establish the Theorem.
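These eigenpairs can be verified numerically; a small sketch, assuming NumPy and an illustrative interval length ℓ, checks that y_n'' = λ_n y_n (up to discretization error) and that the boundary conditions hold:

import numpy as np

ell = 2.0                               # illustrative interval length
x = np.linspace(0.0, ell, 2001)
h = x[1] - x[0]

for n in (1, 2, 3):
    lam = -(n*np.pi/ell)**2             # eigenvalue  -n^2 pi^2 / ell^2
    y = np.sin(n*np.pi*x/ell)           # eigenfunction sin(n pi x / ell)
    ypp = (y[2:] - 2*y[1:-1] + y[:-2]) / h**2      # centered finite-difference y''
    print(n, np.allclose(ypp, lam*y[1:-1], atol=1e-3),   # y'' = lam y, to discretization error
             np.isclose(y[0], 0.0), abs(y[-1]) < 1e-12)  # boundary conditions y(0) = y(ell) = 0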
Example 7.1.4: Find the numbers λ and the non-zero functions with values y(x) solutions
of the following homogeneous boundary value problem
\[
y'' = \lambda y, \qquad y(0) = 0, \qquad y'(\pi) = 0.
\]
Solution: This is also an eigenvalue-eigenfunction problem, the only difference with the
case studied in Theorem 7.1.4 is that the second boundary condition here involves the
derivative of the unknown function y. The solution is obtained following exactly the same
steps performed in the proof of Theorem 7.1.4.
We first look for solutions having eigenvalue λ = 0. In such a case the general solution
to the differential equation is given by
y(x) = c1 + c2 x, c1 , c2 ∈ R.
The boundary conditions imply the following conditions on c1 and c2,
\[
0 = y(0) = c_1, \qquad 0 = y'(\pi) = c_2.
\]
Since the only solution in this case is y = 0, there are no non-zero solutions with λ = 0.
We now look for solutions having eigenvalue λ > 0. In this case we redefine the eigenvalue
as λ = µ2 , with µ > 0. The general solution to the differential equation is given by
y(x) = c1 e−µx + c2 eµx ,
where we used that the roots of the characteristic polynomial r^2 − µ^2 = 0 are given by r± = ±µ. The boundary conditions imply the following conditions on c1 and c2,
\[
\left.
\begin{aligned}
0 &= y(0) = c_1 + c_2 \\
0 &= y'(\pi) = -\mu c_1 e^{-\mu\pi} + \mu c_2 e^{\mu\pi}
\end{aligned}
\right\}
\quad \Rightarrow \quad
\begin{bmatrix} 1 & 1 \\ -\mu e^{-\mu\pi} & \mu e^{\mu\pi} \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]
Denoting by
\[
Z = \begin{bmatrix} 1 & 1 \\ -\mu e^{-\mu\pi} & \mu e^{\mu\pi} \end{bmatrix},
\]
we see that
\[
\det(Z) = \mu\, \big( e^{\mu\pi} + e^{-\mu\pi} \big) \neq 0.
\]
Hence the matrix Z is invertible, and then we conclude that the linear system above for c1 ,
c2 has a unique solution given by c1 = c2 = 0, and so y = 0. Therefore there are no non-zero
solutions y in the case that λ > 0.
We now study the last case, when the eigenvalue λ < 0. In this case we redefine the
eigenvalue as λ = −µ2 , with µ > 0, and the general solution to the differential equation
in (7.1.10) is given by
y(x) = c̃1 e−iµx + c̃2 eiµx ,
where we used that the roots of the characteristic polynomial r2 + µ2 = 0 are given by
r± = ±iµ. As we did in the proof of Theorem 7.1.4, it is convenient to express the general
solution above as a linear combination of real-valued functions,
y(x) = c1 cos(µx) + c2 sin(µx).
The boundary conditions imply the following conditions on c1 and c2,
\[
0 = y(0) = c_1, \qquad 0 = y'(\pi) = -\mu c_1 \sin(\mu\pi) + \mu c_2 \cos(\mu\pi) \quad \Rightarrow \quad c_2 \cos(\mu\pi) = 0.
\]
Since we are interested in non-zero solutions y, we look for solutions with c2 ≠ 0. This implies that µ cannot be arbitrary but must satisfy the equation
\[
\cos(\mu\pi) = 0 \quad \Leftrightarrow \quad \mu_n \pi = (2n+1)\frac{\pi}{2}, \quad n \in \mathbb{N}.
\]
Example 7.1.5: Find the numbers λ and the non-zero functions with values y(x) solutions
of the homogeneous boundary value problem
\[
x^2 y'' - x\, y' = \lambda\, y, \qquad y(1) = 0, \quad y(\ell) = 0, \quad \ell > 1.
\]
Solution: This is also an eigenvalue-eigenfunction problem, the only difference with the
case studied in Theorem 7.1.4 is that the differential equation is now the Euler equation,
studied in Sect. 3.2, instead of a constant coefficient equation. Nevertheless, the solution is
obtained following exactly the same steps performed in the proof of Theorem 7.1.4.
Writing the differential equation above in the standard form of an Euler equation,
\[
x^2 y'' - x\, y' - \lambda y = 0,
\]
we know that the general solution to the Euler equation is given by
\[
y(x) = \big( c_1 + c_2 \ln(x) \big)\, x^{r_0}
\]
in the case that the constants r+ = r- = r0, where r± are the solutions of the Euler characteristic equation
\[
r(r-1) - r - \lambda = 0 \quad \Rightarrow \quad r_{\pm} = 1 \pm \sqrt{1 + \lambda}.
\]
In the case that r+ ≠ r-, the general solution to the Euler equation has the form
\[
y(x) = c_1 x^{r_-} + c_2 x^{r_+}.
\]
Let us start with the first case, when the roots of the Euler characteristic polynomial are repeated, r+ = r- = r0. In our case this happens if 1 + λ = 0. In such a case r0 = 1, and the general solution to the Euler equation is
\[
y(x) = \big( c_1 + c_2 \ln(x) \big)\, x.
\]
The boundary conditions imply the following conditions on c1 and c2,
\[
0 = y(1) = c_1, \qquad 0 = y(\ell) = \big( c_1 + c_2 \ln(\ell) \big)\, \ell \quad \Rightarrow \quad c_2\, \ell \ln(\ell) = 0,
\]
hence c2 = 0. We conclude that the linear system above for c1 , c2 has a unique solution
given by c1 = c2 = 0, and so y = 0. Therefore there are no non-zero solutions y in the case
that 1 + λ = 0.
We now look for solutions having eigenvalue λ satisfying the condition 1 + λ > 0. In this case we redefine the eigenvalue as 1 + λ = µ^2, with µ > 0. Then r± = 1 ± µ, and so the general solution to the differential equation is given by
\[
y(x) = c_1 x^{(1-\mu)} + c_2 x^{(1+\mu)}.
\]
The boundary conditions imply the following conditions on c1 and c2,
\[
\left.
\begin{aligned}
0 &= y(1) = c_1 + c_2 \\
0 &= y(\ell) = c_1 \ell^{(1-\mu)} + c_2 \ell^{(1+\mu)}
\end{aligned}
\right\}
\quad \Rightarrow \quad
\begin{bmatrix} 1 & 1 \\ \ell^{(1-\mu)} & \ell^{(1+\mu)} \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]
Denoting by
\[
Z = \begin{bmatrix} 1 & 1 \\ \ell^{(1-\mu)} & \ell^{(1+\mu)} \end{bmatrix},
\]
we see that
\[
\det(Z) = \ell\, \big( \ell^{\mu} - \ell^{-\mu} \big) \neq 0 \quad \Leftrightarrow \quad \ell \neq \pm 1.
\]
Hence the matrix Z is invertible, and then we conclude that the linear system above for c1 ,
c2 has a unique solution given by c1 = c2 = 0, and so y = 0. Therefore there are no non-zero
solutions y in the case that 1 + λ > 0.
We now study the second case, when the eigenvalue satisfies 1 + λ < 0. In this case we redefine the eigenvalue as 1 + λ = −µ^2, with µ > 0. Then r± = 1 ± iµ, and the general solution to the differential equation is given by
\[
y(x) = \tilde{c}_1 x^{(1-i\mu)} + \tilde{c}_2 x^{(1+i\mu)}.
\]
As we did in the proof of Theorem 7.1.4, it is convenient to express the general solution above as a linear combination of real-valued functions,
\[
y(x) = x \Big[ c_1 \cos\big( \mu \ln(x) \big) + c_2 \sin\big( \mu \ln(x) \big) \Big].
\]
The boundary conditions imply the following conditions on c1 and c2,
\[
0 = y(1) = c_1, \qquad 0 = y(\ell) = c_1 \ell \cos\big( \mu \ln(\ell) \big) + c_2 \ell \sin\big( \mu \ln(\ell) \big) \quad \Rightarrow \quad c_2\, \ell \sin\big( \mu \ln(\ell) \big) = 0.
\]
Since we are interested in non-zero solutions y, we look for solutions with c2 6= 0. This
implies that µ cannot be arbitrary but must satisfy the equation
\[
\sin\big( \mu \ln(\ell) \big) = 0 \quad \Leftrightarrow \quad \mu_n \ln(\ell) = n\pi, \quad n \in \mathbb{N}.
\]
We therefore conclude that the eigenvalues and eigenfunctions are given by
\[
\lambda_n = -1 - \frac{n^2 \pi^2}{\ln^2(\ell)}, \qquad y_n(x) = c_n\, x \sin\Big( \frac{n\pi \ln(x)}{\ln(\ell)} \Big), \quad n \in \mathbb{N}.
\]
C
7.1.3. Exercises.
7.1.1.- . 7.1.2.- .
7.2.1. Origins of Fourier series. The study of solutions to the wave equation in one space dimension by Daniel Bernoulli in the 1750s is a possible starting point to describe the origins of the Fourier series. The physical system is a vibrating elastic string with fixed ends; the unknown function with values u(t, x) represents the vertical displacement of a point in the string at the time t and position x, as can be seen in the sketch given in Fig. 43. A constant c > 0 characterizes the material that forms the string.
Figure 43. Vibrating string moving in the vertical direction with fixed ends.
The mathematical problem to solve is the following initial-boundary value problem: Given
a function with values f (x) defined in the interval [0, `] ⊂ R satisfying f (0) = f (`) = 0, find
a function with values u(t, x) solution of the wave equation
∂t2 u(t, x) = c2 ∂x2 u(t, x),
u(t, 0) = 0, u(t, `) = 0,
u(0, x) = f (x), ∂t u(0, x) = 0.
The equations on the second line are called boundary conditions, since they are conditions
at the boundary of the vibrating string for all times. The equations on the third line are
called initial conditions, since they are equations that hold at the initial time only. The first
equation says that the initial position of the string is given by the function f , while the
second equation says that the initial velocity of the string vanishes. Bernoulli found that
the functions
un(t, x) = cos(cnπt/ℓ) sin(nπx/ℓ)
are particular solutions to the problem above in the case that the initial position function
is given by
fn(x) = sin(nπx/ℓ).
He also found that the function
u(t, x) = Σ_{n=1}^∞ cn cos(cnπt/ℓ) sin(nπx/ℓ)
is also a solution to the problem above with initial condition
f(x) = Σ_{n=1}^∞ cn sin(nπx/ℓ).    (7.2.1)
Is the set of initial functions f given in Eq. (7.2.1) big enough to include all continuous
functions satisfying the compatibility conditions f (0) = f (`) = 0? Bernoulli said the answer
was yes, and his argument was that with infinitely many coefficients cn one can compute
every function f satisfying the compatibility conditions. Unfortunately this argument does
not prove Bernoulli’s claim. A proof would be a formula to compute the coefficients cn in
terms of the function f . However, Bernoulli could not find such a formula.
A formula was obtained by Joseph Fourier in the 1800s while studying a different prob-
lem. He was looking for solutions to the following initial-boundary value problem: Given a
function with values f (x) defined in the interval [0, `] ⊂ R satisfying f (0) = f (`) = 0, and
given a positive constant k, find a function with values u(t, x) solution of the differential
equation
∂t u(t, x) = k ∂x2 u(t, x),
u(t, 0) = 0, u(t, `) = 0,
u(0, x) = f (x).
The values of the unknown function u(t, x) are interpreted as the temperature of a solid
body at the time t and position x. The temperature in this problem does not depend on the
y and z coordinates. The partial differential equation on the first line above is called the
heat equation, and describes the variation of the body temperature. The thermal properties
of the body material are specified by the positive constant k, called the thermal diffusivity.
The main difference with the wave equation above is that only first time derivatives appear
in the equation. The boundary conditions on the second line say that both borders of the
body are held at constant temperature. The initial condition on the third line provides the
initial temperature of the body. Fourier found that the functions
un(t, x) = e^{−(nπ/ℓ)² kt} sin(nπx/ℓ)
are particular solutions to the problem above in the case that the initial temperature function
is given by
fn(x) = sin(nπx/ℓ).
Fourier also found that the function
u(t, x) = Σ_{n=1}^∞ cn e^{−(nπ/ℓ)² kt} sin(nπx/ℓ)
is also a solution to the problem above with the initial condition
f(x) = Σ_{n=1}^∞ cn sin(nπx/ℓ).    (7.2.2)
Fourier was able to show that any continuous function f defined on the domain [0, `] ⊂ R
satisfying the conditions f (0) = f (`) = 0 can be written as the series given in Eq. (7.2.2),
where the coefficients cn can be computed with the formula
cn = (2/ℓ) ∫_0^ℓ f(x) sin(nπx/ℓ) dx.
This formula for the coefficients cn, together with a few other formulas that we will study
later on in this Section, was an important reason to name the series containing those given
in Eq. (7.2.2) after Fourier instead of Bernoulli.
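The coefficient formula above is easy to test numerically. The sketch below, an addition to these notes, approximates the cn for a sample initial function f on [0, ℓ] with the trapezoidal rule and checks that the truncated sine series reproduces f; the specific f and ℓ are arbitrary choices for the test.

```python
# Sketch: compute c_n = (2/l) * integral_0^l f(x) sin(n pi x / l) dx numerically
# and compare the truncated sine series with f (sample data, not from the notes).
import numpy as np

ell = 3.0
x = np.linspace(0.0, ell, 2001)
f = x * (ell - x)                       # a sample f with f(0) = f(l) = 0

def c(n):
    return (2.0 / ell) * np.trapz(f * np.sin(n * np.pi * x / ell), x)

N = 25
series = sum(c(n) * np.sin(n * np.pi * x / ell) for n in range(1, N + 1))
print(np.max(np.abs(series - f)))       # small: the partial sums approach f
```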
7.2.2. Fourier series. Every continuous τ -periodic function f can be expressed as an infi-
nite linear combination of sine and cosine functions. Before we present this result in a precise
form we need to introduce a few definitions and to recall a few concepts from linear algebra.
We start by defining a periodic function as one that is invariant under certain translations.
Definition 7.2.1. A function f : R → R is called τ -periodic iff for all x ∈ R holds
f (x − τ ) = f (x), τ > 0.
The number τ is called the period of f, and the definition says that a function is τ-periodic iff
it is invariant under translations by τ and so, under translations by any multiple of τ .
Example 7.2.1: The following functions are periodic, with period τ ,
f (x) = sin(x), τ = 2π,
f (x) = cos(x), τ = 2π,
f (x) = tan(x), τ = π,
f(x) = sin(ax),   τ = 2π/a.
The following function is also periodic and its graph is given in Fig. 44,
f (x) = ex , x ∈ [0, 2), f (x − 2) = f (x). (7.2.3)
Figure 44. The graph of the 2-periodic function defined in Eq. (7.2.3).
Example 7.2.2: Show that the following functions are τ -periodic for all n ∈ N,
fn(x) = cos(2πnx/τ),    gn(x) = sin(2πnx/τ).
Given ℓ > 0, the inner product of two functions f, g: [−ℓ, ℓ] → R is the number
(f, g) = ∫_{−ℓ}^{ℓ} f(x) g(x) dx.
This inner product is symmetric,
(f, g) = ∫_{−ℓ}^{ℓ} f(x) g(x) dx = ∫_{−ℓ}^{ℓ} g(x) f(x) dx = (g, f);
and it is linear in the second argument,
(f, [ag + bh]) = ∫_{−ℓ}^{ℓ} f(x) [a g(x) + b h(x)] dx
             = a ∫_{−ℓ}^{ℓ} f(x) g(x) dx + b ∫_{−ℓ}^{ℓ} f(x) h(x) dx
             = a (f, g) + b (f, h).
An inner product provides the notion of angle in a vector space, and so the notion of
orthogonality of vectors. The idea of perpendicular vectors in the three dimensional space
is indeed a notion that belongs to a vector space with an inner product, hence it can be
generalized to the space of functions. The following result states that certain sine and cosine
functions can be perpendicular to each other.
Theorem 7.2.2 (Orthogonality). For every ℓ > 0 and all n, m ∈ N the following relations hold,
∫_{−ℓ}^{ℓ} cos(nπx/ℓ) cos(mπx/ℓ) dx = { 0,  n ≠ m;   ℓ,  n = m },
∫_{−ℓ}^{ℓ} sin(nπx/ℓ) sin(mπx/ℓ) dx = { 0,  n ≠ m;   ℓ,  n = m },
∫_{−ℓ}^{ℓ} cos(nπx/ℓ) sin(mπx/ℓ) dx = 0.
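A small numerical illustration of these orthogonality relations, added here as a sketch; it assumes a trapezoidal approximation of the integrals on a fine grid is accurate enough.

```python
# Sketch: numerical check of the orthogonality relations on [-l, l].
import numpy as np

ell = 2.0
x = np.linspace(-ell, ell, 4001)
s = lambda n: np.sin(n * np.pi * x / ell)
c = lambda n: np.cos(n * np.pi * x / ell)

print(np.trapz(s(3) * s(5), x))   # approx 0   (n != m)
print(np.trapz(s(4) * s(4), x))   # approx l = 2   (n = m)
print(np.trapz(c(3) * s(5), x))   # approx 0
```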
Remark: The proof of this result is based in the following trigonometric identities:
cos(θ) cos(φ) = (1/2) [cos(θ + φ) + cos(θ − φ)],
sin(θ) sin(φ) = (1/2) [cos(θ − φ) − cos(θ + φ)],
sin(θ) cos(φ) = (1/2) [sin(θ + φ) + sin(θ − φ)].
Proof of Theorem 7.2.2: We show the proof of the first equation in Theorem 7.2.2; the
proof of the other two equations is similar. From the trigonometric identities above we obtain
∫_{−ℓ}^{ℓ} cos(nπx/ℓ) cos(mπx/ℓ) dx = (1/2) ∫_{−ℓ}^{ℓ} cos((n + m)πx/ℓ) dx + (1/2) ∫_{−ℓ}^{ℓ} cos((n − m)πx/ℓ) dx.
Now, consider the case that at least one of n or m is strictly greater than zero. In this case
the first term always vanishes, since
(1/2) ∫_{−ℓ}^{ℓ} cos((n + m)πx/ℓ) dx = [ℓ/(2(n + m)π)] sin((n + m)πx/ℓ) |_{x=−ℓ}^{x=ℓ} = 0,
while the remaining term is zero in the sub-case n ≠ m, due to the same argument as above,
(1/2) ∫_{−ℓ}^{ℓ} cos((n − m)πx/ℓ) dx = [ℓ/(2(n − m)π)] sin((n − m)πx/ℓ) |_{x=−ℓ}^{x=ℓ} = 0,
and in the sub-case n = m the remaining term gives (1/2) ∫_{−ℓ}^{ℓ} dx = ℓ.
This establishes the first equation in Theorem 7.2.2. The remaining equations are proven
in a similar way.
The main result of this Section is that any twice continuously differentiable function
defined on an interval [−`, `] ⊂ R, with ` > 0, admits a Fourier series expansion.
Theorem 7.2.3 (Fourier Series). Given ℓ > 0, assume that the functions f, f' and
f'': [−ℓ, ℓ] ⊂ R → R are continuous. Then, f can be expressed as an infinite series
f(x) = a0/2 + Σ_{n=1}^∞ [an cos(nπx/ℓ) + bn sin(nπx/ℓ)]    (7.2.4)
with the coefficients
an = (1/ℓ) ∫_{−ℓ}^{ℓ} f(x) cos(nπx/ℓ) dx,  n ≥ 0,    bn = (1/ℓ) ∫_{−ℓ}^{ℓ} f(x) sin(nπx/ℓ) dx,  n ≥ 1.
Example: Find the Fourier series of the function f: [−1, 1] → R given by f(x) = 1 + x for
x ∈ [−1, 0] and f(x) = 1 − x for x ∈ [0, 1]. Here ℓ = 1, and the coefficients an, bn are given
by the formulas in Theorem 7.2.3. We start computing a0, that is,
a0 = ∫_{−1}^{1} f(x) dx
   = ∫_{−1}^{0} (1 + x) dx + ∫_{0}^{1} (1 − x) dx
   = [x + x²/2]_{−1}^{0} + [x − x²/2]_{0}^{1}
   = (1 − 1/2) + (1 − 1/2)   ⇒   a0 = 1.
Similarly,
an = ∫_{−1}^{1} f(x) cos(nπx) dx
   = ∫_{−1}^{0} (1 + x) cos(nπx) dx + ∫_{0}^{1} (1 − x) cos(nπx) dx.
Recalling the integrals
∫ cos(nπx) dx = (1/(nπ)) sin(nπx),
∫ x cos(nπx) dx = (x/(nπ)) sin(nπx) + (1/(n²π²)) cos(nπx),
it is not difficult to see that
an = [(1/(nπ)) sin(nπx)]_{−1}^{0} + [(x/(nπ)) sin(nπx) + (1/(n²π²)) cos(nπx)]_{−1}^{0}
   + [(1/(nπ)) sin(nπx)]_{0}^{1} − [(x/(nπ)) sin(nπx) + (1/(n²π²)) cos(nπx)]_{0}^{1}
   = [1/(n²π²) − (1/(n²π²)) cos(−nπ)] − [(1/(n²π²)) cos(−nπ) − 1/(n²π²)],
we then conclude that
an = (2/(n²π²)) [1 − cos(−nπ)].
Finally, we must find the coefficients bn . The calculation is similar to the one done above
for an , and it is left as an exercise: Show that bn = 0. Then, the Fourier series of f is
f(x) = 1/2 + Σ_{n=1}^∞ (2/(n²π²)) [1 − cos(−nπ)] cos(nπx).
C
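The partial sums of this series can be compared numerically with the function the integrals above correspond to, f(x) = 1 − |x| on [−1, 1]. The check below is an added sketch, not part of the notes.

```python
# Sketch: evaluate partial sums of f(x) = 1/2 + sum_n 2/(n^2 pi^2) (1 - cos(n pi)) cos(n pi x)
# and compare with f(x) = 1 - |x| on [-1, 1].
import numpy as np

x = np.linspace(-1.0, 1.0, 1001)
f = 1.0 - np.abs(x)

S = 0.5 * np.ones_like(x)
for n in range(1, 40):
    S += 2.0 / (n**2 * np.pi**2) * (1.0 - np.cos(n * np.pi)) * np.cos(n * np.pi * x)

print(np.max(np.abs(S - f)))   # small: the series converges to f
```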
7.2.3. Even and odd functions. There exist particular classes of functions with simple
Fourier series expansions. Simple means that either the bn or the an coefficients vanish.
These functions are called even or odd, respectively.
Definition 7.2.4. Given any ` > 0, a function f : [−`, ` ] → R is called even iff holds
f (−x) = f (x), for all x ∈ [−`, ` ].
A function f : [−`, ` ] → R is called odd iff holds
f (−x) = −f (x), for all x ∈ [−`, ` ].
The Fourier series of a function which is either even or odd is simple to find.
Theorem 7.2.5 (Cosine and sine series). Consider the Fourier series of the function
f : [−`, `] → R, that is,
f(x) = a0/2 + Σ_{n=1}^∞ [an cos(nπx/ℓ) + bn sin(nπx/ℓ)].
(a) The function f is even iff the coefficients bn = 0 for all n ≥ 1. In this case the Fourier
series is called a cosine series.
(b) The function f is odd iff the coefficients an = 0 for all n ≥ 0. In this case the Fourier
series is called a sine series.
Proof of Theorem 7.2.5:
Part (a): Suppose that f is even, that is, f(−x) = f(x), and compute the coefficients bn,
bn = ∫_{−ℓ}^{ℓ} f(x) sin(nπx/ℓ) dx
   = ∫_{ℓ}^{−ℓ} f(−y) sin(nπ(−y)/ℓ) (−dy),    y = −x, dy = −dx,
   = ∫_{−ℓ}^{ℓ} f(−y) sin(nπ(−y)/ℓ) dy
   = −∫_{−ℓ}^{ℓ} f(y) sin(nπy/ℓ) dy
   = −bn   ⇒   bn = 0.
Part (b): The proof is similar. This establishes the Theorem.
7.2.4. Exercises.
7.2.1.- . 7.2.2.- .
Figure 45. A solid bar thermally insulated on all surfaces except the x = 0
and x = ℓ surfaces, which are held at temperatures T0 and Tℓ, respectively.
The temperature u of the bar is a function of the coordinate x only.
The one-space dimensional heat equation for the temperature function u is the partial
differential equation
∂t u(t, x) = k ∂x2 u(t, x),
where ∂ denotes partial derivative and k is a positive constant called the thermal diffusivity
of the material. This equation has infinitely many solutions. The temperature of the bar is
given by a uniquely defined function u because this function satisfies a few extra conditions.
We mentioned the boundary conditions at x = 0 and x = `. We also need to specify the
initial temperature of the bar. The partial differential equation and these extra conditions
define an initial-boundary value problem, which we now summarize.
Definition 7.3.1. The Initial-Boundary Value Problem for the one-space dimensional
heat equation with homogeneous boundary conditions is the following: Given positive con-
stants ` and k, and a function f : [0, `] → R satisfying f (0) = f (`) = 0, find a function
u : [0, ∞) × [0, `] → R, with values u(t, x), solution of
∂t u(t, x) = k ∂x2 u(t, x), (7.3.1)
u(0, x) = f (x), (7.3.2)
u(t, 0) = 0, u(t, `) = 0. (7.3.3)
The requirement in Eq. (7.3.2) is called an initial condition, while the equations in (7.3.3)
are called the boundary conditions of the problem. The latter conditions are actually homo-
geneous boundary conditions. A sketch on the tx plane is useful to understand this type of
problem, as can be seen in Fig. 46. This figure helps us realize that the boundary conditions
u(t, 0) = u(t, ℓ) = 0 hold for all times t > 0, and it also helps us understand why the
initial condition function f must satisfy the compatibility conditions f(0) = f(ℓ) = 0.
Figure 46. The initial-boundary value problem on the tx plane: the initial condition u(0, x) = f(x) is imposed on the segment 0 ≤ x ≤ ℓ of the x axis.
I think it was Richard Feynman who said that one should never start a calculation
without knowing the answer. Following that advice we now try to understand the qualitative
behavior of a solution to the heat equation. Suppose that the boundary conditions are
u(t, 0) = T0 = 0 and u(t, `) = T` > 0. Suppose that at a fixed time t > 0 the graph of the
temperature function u is as in Fig. 47. Then a qualitative idea of how a solution of the
heat equation behaves can be obtained from the arrows in that figure. The heat equation
relates the time variation of the temperature, ∂t u, to the curvature of the function u in the
x variable, ∂x2 u. In the regions where the function u is concave up, hence ∂x2 u > 0, the
heat equation says that the temperature must increase, ∂t u > 0. In the regions where the
function u is concave down, hence ∂x2 u < 0, the heat equation says that the temperature
must decrease, ∂t u < 0. So the heat equation tries to make the temperature along the
material vary as little as possible, consistent with the boundary conditions. In the case of
the figure, the temperature tries to approach the dashed line.
Figure 47. Sketch of the temperature u(t, ·) at a fixed time t, with boundary values T0 and Tℓ;
where u is concave down ∂t u < 0, and where u is concave up ∂t u > 0.
We now summarize the main result about the initial-boundary value problem.
Theorem 7.3.2. If the initial data function f is continuous, then the initial boundary value
problem given in Def. 7.3.1 has a unique solution u given by
u(t, x) = Σ_{n=1}^∞ cn e^{−k(nπ/ℓ)² t} sin(nπx/ℓ),    (7.3.4)
where the coefficients cn are given in terms of the initial data by
cn = (2/ℓ) ∫_0^ℓ f(x) sin(nπx/ℓ) dx.
Remarks:
(a) The theorem considers only homogeneous boundary conditions. The analysis given
in Fig. 47 predicts that the temperature will drop to zero, to match the boundary
values. This is what we see in the solution formula in Eq. (7.3.4), which says that the
temperature approaches zero exponentially in time.
(b) Each term in the infinite sum in Eq. (7.3.4) satisfies the boundary conditions, because
of the factor with the sine function.
(c) The solution formula evaluated at t = 0 is the Sine Fourier series expansion of the initial
data function f , as can be seen by the formula for the coefficient cn .
(d) The proof of this theorem is based on the separation of variables method and is presented
in the next subsection.
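The solution formula of Theorem 7.3.2 is straightforward to evaluate numerically. The sketch below is an addition to the notes: it computes the coefficients cn by the trapezoidal rule and returns the truncated series; the cutoff N = 50 and the sample data in the usage comment are arbitrary choices.

```python
# Sketch: evaluate the series solution (7.3.4), u(t,x) = sum_n c_n e^{-k (n pi/l)^2 t} sin(n pi x / l),
# with c_n computed from the initial data f as in Theorem 7.3.2.
import numpy as np

def heat_solution(f, ell, k, N=50):
    xq = np.linspace(0.0, ell, 2001)
    fq = f(xq)
    cn = [(2.0 / ell) * np.trapz(fq * np.sin(n * np.pi * xq / ell), xq)
          for n in range(1, N + 1)]
    def u(t, x):
        return sum(c * np.exp(-k * (n * np.pi / ell)**2 * t) * np.sin(n * np.pi * x / ell)
                   for n, c in enumerate(cn, start=1))
    return u

# usage example: u = heat_solution(lambda x: x*(1 - x), ell=1.0, k=0.1); u(0.05, 0.3)
```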
7.3.2. The Separation of Variables. We present two versions of the same proof of The-
orem 7.3.2. They differ only in the emphasis on different parts of the argument.
First Proof of Theorem 7.3.2: The separation of variables method is usually presented
in the literature as follows. First look for a particular type of solutions to the heat equation
of the form
u(t, x) = v(t) w(x).
Introducing this particular function in the heat equation we get
v̇(t) w(x) = k v(t) w''(x)   ⇒   (1/k) (v̇(t)/v(t)) = w''(x)/w(x),
where we used the notation v̇ = dv/dt and w' = dw/dx. These equations are the reason
the method is called separation of variables. The left hand side in the last equation above
depends only on t and the right hand side depends only on x. The only possible solution is
that both sides are equal to the same constant, call it −λ. So we end up with two equations
(1/k) (v̇(t)/v(t)) = −λ,    w''(x)/w(x) = −λ.
The first equation leads to an initial value problem for v once initial conditions are provided.
The second equation leads to an eigenvalue-eigenfunction problem for w once boundary
conditions are provided. The choice of these initial and boundary conditions is inspired from
the analogous conditions in Def. 7.3.1. Usually in the literature these conditions are
v(0) = 1, and w(0) = w(`) = 0.
The boundary conditions on w are clearly coming from the boundary conditions in Def 7.3.1,
but the initial condition on v is clearly not. We now solve the eigenvalue-eigenfunction
problem for w, and we know from § 7.1 that the solution is
λn = (nπ/ℓ)²,    wn(x) = sin(nπx/ℓ),    n = 1, 2, · · · .
where we used the notation v̇n = dvn/dt and wn' = dwn/dx. As we said in the first proof
above, these equations are the reason the method is called separation of variables. The left
hand side in the last equation above depends only on t and the right hand side depends
only on x. The only possible solution is that both sides are equal to the same constant, call it
−λn . So we end up with the equations
(1/k) (v̇n(t)/vn(t)) = −λn,    wn''(x)/wn(x) = −λn.
Recall that the basis vectors wn satisfy the boundary conditions wn (0) = wn (`) = 0. This
is an eigenvalue-eigenfunction problem, which we solved in § 7.1. The result is
λn = (nπ/ℓ)²,    wn(x) = sin(nπx/ℓ),    n = 1, 2, · · · .
Using the value of λn found above, the solution for the function vn is
vn(t) = vn(0) e^{−k(nπ/ℓ)² t}.
So we have a solution to the heat equation given by
u(t, x) = Σ_{n=1}^∞ vn(0) e^{−k(nπ/ℓ)² t} sin(nπx/ℓ).
This solution satisfies the boundary conditions u(t, 0) = u(t, ℓ) = 0, because each term
satisfies them. The constants vn(0) are determined from the initial data,
f(x) = Σ_{n=1}^∞ vn(0) sin(nπx/ℓ).
Recall now that the sine functions above are mutually orthogonal and that
∫_0^ℓ sin(nπx/ℓ) sin(mπx/ℓ) dx = { 0,  n ≠ m;   ℓ/2,  n = m }.
Then, multiplying the equation for f by sin(mπx/ℓ) and integrating on [0, ℓ], it is not so
difficult to get
vn(0) = (2/ℓ) ∫_0^ℓ f(x) sin(nπx/ℓ) dx.
This establishes the Theorem.
Example 7.3.1: Find the solution to the initial-boundary value problem
4 ∂t u = ∂x2 u, t > 0, x ∈ [0, 2],
with initial and boundary conditions given by
u(0, x) = 3 sin(πx/2), u(t, 0) = 0, u(t, 2) = 0.
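The worked solution of Example 7.3.1 is not reproduced in this excerpt, but the data match Theorem 7.3.2 with k = 1/4, ℓ = 2 and f(x) = 3 sin(πx/2), so only the n = 1 mode survives and the candidate solution is u(t, x) = 3 e^{−π²t/16} sin(πx/2). The finite-difference check below is an added sketch, not part of the notes.

```python
# Sketch: check that u(t,x) = 3 exp(-pi^2 t / 16) sin(pi x / 2) satisfies
# 4 u_t = u_xx, the initial condition and the boundary conditions of Example 7.3.1.
import numpy as np

u = lambda t, x: 3.0 * np.exp(-np.pi**2 * t / 16.0) * np.sin(np.pi * x / 2.0)

t, x, h = 0.7, 1.3, 1e-4
u_t  = (u(t + h, x) - u(t - h, x)) / (2 * h)
u_xx = (u(t, x + h) - 2 * u(t, x) + u(t, x - h)) / h**2
print(4 * u_t - u_xx)                          # approx 0: the PDE holds
print(u(0.0, x) - 3 * np.sin(np.pi * x / 2))   # 0: initial condition
print(u(t, 0.0), u(t, 2.0))                    # approx 0: boundary conditions
```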
7.3.3. Exercises.
7.3.1.- . 7.3.2.- .
8.1.1. Linear algebraic systems. One could say that the set of results we call Linear
Algebra originated with the study of linear systems of algebraic equations. Our review of
elementary concepts from Linear Algebra starts with a study of these types of equations.
The system is called homogeneous iff all sources vanish, that is, b1 = · · · = bn = 0.
Example 8.1.1:
(a) A 2 × 2 linear system on the unknowns x1 and x2 is the following:
2x1 − x2 = 0,
−x1 + 2x2 = 3.
(b) A 3 × 3 linear system on the unknowns x1, x2, and x3 is the following:
x1 + 2x2 + x3 = 1,
−3x1 + x2 + 3x3 = 24,
x2 − 4x3 = −1.
C
C
The particular case of an m × 1 matrix is called an m-vector.
Definition 8.1.3. An m-vector, v, is the array of numbers v = [v1, · · · , vm]^T, written as a
column, where the vector components vi ∈ C, with i = 1, · · · , m.
Example 8.1.3: The unknowns of the algebraic linear systems in Example 8.1.1 can be
grouped in vectors, as follows,
2x1 − x2 = 0,  −x1 + 2x2 = 3   ⇒   x = [x1; x2];
x1 + 2x2 + x3 = 1,  −3x1 + x2 + 3x3 = 24,  x2 − 4x3 = −1   ⇒   x = [x1; x2; x3].
C
Definition 8.1.4. The matrix-vector product of an n × n matrix A and an n-vector x
is an n-vector given by
Ax = [ a11 · · · a1n ; · · · ; an1 · · · ann ] [ x1 ; · · · ; xn ]
   = [ a11 x1 + · · · + a1n xn ; · · · ; an1 x1 + · · · + ann xn ].
Example 8.1.5: Use the matrix-vector product to express the algebraic linear system below,
2x1 − x2 = 0,
−x1 + 2x2 = 3.
Solution: Introduce the coefficient matrix A, the unknown vector x, and the source vector
b as follows,
A = [ 2  −1 ; −1  2 ],    x = [ x1 ; x2 ],    b = [ 0 ; 3 ].
Since the matrix-vector product Ax is given by
Ax = [ 2  −1 ; −1  2 ] [ x1 ; x2 ] = [ 2x1 − x2 ; −x1 + 2x2 ],
then we conclude that
2x1 − x2 = 0,  −x1 + 2x2 = 3   ⇔   [ 2x1 − x2 ; −x1 + 2x2 ] = [ 0 ; 3 ]   ⇔   Ax = b.
C
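As an added aside, the matrix form Ax = b of this example can be checked with numpy, where the @ operator is the matrix-vector product.

```python
# Sketch: the 2 x 2 system of Example 8.1.5 written as A x = b and solved with numpy.
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
b = np.array([0.0, 3.0])

x = np.linalg.solve(A, b)     # solves A x = b
print(x)                      # [1. 2.]
print(A @ x - b)              # [0. 0.]: the product A x reproduces the sources
```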
It is simple to see that the result found in the Example above can be generalized to every
n × n algebraic linear system.
Theorem 8.1.5. Given the algebraic linear system in Eqs. (8.1.1)-(8.1.2), introduce the
coefficient matrix A, the unknown vector x, and the source vector b, as follows,
A = [ a11 · · · a1n ; · · · ; an1 · · · ann ],    x = [ x1 ; · · · ; xn ],    b = [ b1 ; · · · ; bn ].
Then, the algebraic linear system can be written as
Ax = b.
Proof of Theorem 8.1.5: From the definition of the matrix-vector product we have that
Ax = [ a11 · · · a1n ; · · · ; an1 · · · ann ] [ x1 ; · · · ; xn ] = [ a11 x1 + · · · + a1n xn ; · · · ; an1 x1 + · · · + ann xn ].
Then, we conclude that
a11 x1 + · · · + a1n xn = b1,   · · · ,   an1 x1 + · · · + ann xn = bn
⇔   [ a11 x1 + · · · + a1n xn ; · · · ; an1 x1 + · · · + ann xn ] = [ b1 ; · · · ; bn ]   ⇔   Ax = b.
We introduce one last definition, which will be helpful in the next subsection.
Definition 8.1.6. The augmented matrix of Ax = b is the n × (n + 1) matrix [A|b].
The augmented matrix of an algebraic linear system contains the equation coefficients and
the sources. Therefore, the augmented matrix of a linear system contains the complete
information about the system.
Example 8.1.6: Find the augmented matrix of both the linear systems in Example 8.1.1.
Solution: The coefficient matrix and source vector of the first system imply that
A = [ 2  −1 ; −1  2 ],    b = [ 0 ; 3 ]   ⇒   [A|b] = [ 2  −1 | 0 ; −1  2 | 3 ].
The coefficient matrix and source vector of the second system imply that
A = [ 1  2  1 ; −3  1  3 ; 0  1  −4 ],    b = [ 1 ; 24 ; −1 ]   ⇒   [A|b] = [ 1  2  1 | 1 ; −3  1  3 | 24 ; 0  1  −4 | −1 ].
C
Recall that the linear combination of two vectors is defined component-wise, that is, given
any numbers a, b ∈ R and any vectors x, y, their linear combination is the vector given by
ax + by = [ ax1 + by1 ; · · · ; axn + byn ],    where   x = [ x1 ; · · · ; xn ],   y = [ y1 ; · · · ; yn ].
With this definition of linear combination of vectors it is simple to see that the matrix-vector
product is a linear operation.
Theorem 8.1.7. The matrix-vector product is a linear operation, that is, given an n × n
matrix A, then for all n-vectors x, y and all numbers a, b ∈ R holds
A(ax + by) = a Ax + b Ay. (8.1.3)
Proof of Theorem 8.1.7: Just write down the matrix-vector product in components,
A(ax + by) = [ a11 · · · a1n ; · · · ; an1 · · · ann ] [ ax1 + by1 ; · · · ; axn + byn ]
           = [ a11 (ax1 + by1) + · · · + a1n (axn + byn) ; · · · ; an1 (ax1 + by1) + · · · + ann (axn + byn) ].
Expand the linear combinations on each component on the far right-hand side above and
re-order terms as follows,
A(ax + by) = [ a (a11 x1 + · · · + a1n xn) + b (a11 y1 + · · · + a1n yn) ; · · · ; a (an1 x1 + · · · + ann xn) + b (an1 y1 + · · · + ann yn) ].
Separate the right-hand side above,
A(ax + by) = a [ a11 x1 + · · · + a1n xn ; · · · ; an1 x1 + · · · + ann xn ] + b [ a11 y1 + · · · + a1n yn ; · · · ; an1 y1 + · · · + ann yn ].
We then conclude that
A(ax + by) = a Ax + b Ay.
This establishes the Theorem.
8.1.2. Gauss elimination operations. We now review three operations that can be per-
formed on an augmented matrix of a linear system. These operations change the augmented
matrix of the system but they do not change the solutions of the system. The Gauss elim-
ination operations were already known in China around 200 BC. We call them after Carl
Friedrich Gauss, since he made them very popular around 1810, when he used them to study
the orbit of the asteroid Pallas, giving a systematic method to solve a 6 × 6 algebraic linear
system.
Definition 8.1.8. The Gauss elimination operations are three operations on a matrix:
(i) Adding to one row a multiple of another;
(ii) Interchanging two rows;
(iii) Multiplying a row by a non-zero number.
These operations are respectively represented by the symbols given in Fig. 48.
As we said above, the Gauss elimination operations change the coefficients of the augmented
matrix of a system but do not change its solution. Two systems of linear equations having
the same solutions are called equivalent. It can be shown that there is an algorithm using
these operations that transforms any n × n linear system into an equivalent system where
the solutions are explicitly given.
Example 8.1.7: Find the solution to the 2 × 2 linear system given in Example 8.1.1 using
the Gauss elimination operations.
Solution: Consider the augmented matrix of the 2 × 2 linear system in Example 8.1.1,
and perform the following Gauss elimination operations,
[ 2  −1 | 0 ; −1  2 | 3 ] → [ 2  −1 | 0 ; −2  4 | 6 ] → [ 2  −1 | 0 ; 0  3 | 6 ] → [ 2  −1 | 0 ; 0  1 | 2 ] →
[ 2  0 | 2 ; 0  1 | 2 ] → [ 1  0 | 1 ; 0  1 | 2 ]   ⇔   x1 + 0 = 1,  0 + x2 = 2   ⇔   x1 = 1,  x2 = 2.
C
Example 8.1.8: Find the solution to the 3 × 3 linear system given in Example 8.1.1 using
the Gauss elimination operations
Solution: Consider the augmented matrix of the 3 × 3 linear system in Example 8.1.1 and
perform the following Gauss elimination operations,
[ 1  2  1 | 1 ; −3  1  3 | 24 ; 0  1  −4 | −1 ] → [ 1  2  1 | 1 ; 0  7  6 | 27 ; 0  1  −4 | −1 ] → [ 1  2  1 | 1 ; 0  1  −4 | −1 ; 0  7  6 | 27 ] →
[ 1  0  9 | 3 ; 0  1  −4 | −1 ; 0  0  34 | 34 ] → [ 1  0  9 | 3 ; 0  1  −4 | −1 ; 0  0  1 | 1 ] → [ 1  0  0 | −6 ; 0  1  0 | 3 ; 0  0  1 | 1 ]
⇒   x1 = −6,   x2 = 3,   x3 = 1.
C
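An added numerical check of this elimination, using numpy:

```python
# Sketch: verify the Gauss-elimination result of Example 8.1.8.
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [-3.0, 1.0, 3.0],
              [0.0, 1.0, -4.0]])
b = np.array([1.0, 24.0, -1.0])
print(np.linalg.solve(A, b))   # [-6.  3.  1.], as found above
```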
In the last augmented matrix on both Examples 8.1.7 and 8.1.8 the solution is given ex-
plicitly. This is not always the case with every augmented matrix. A precise way to define
the final augmented matrix in the Gauss elimination method is captured in the notion of
echelon form and reduced echelon form of a matrix.
Definition 8.1.9. An m × n matrix is in echelon form iff the following conditions hold:
(i) The zero rows are located at the bottom rows of the matrix;
(ii) The first non-zero coefficient on a row is always to the right of the first non-zero
coefficient of the row above it.
The pivot coefficient is the first non-zero coefficient on every non-zero row in a matrix in
echelon form.
Example 8.1.9: The 6 × 8, 3 × 5 and 3 × 3 matrices given below are in echelon form, where
the ∗ means any non-zero number and pivots are highlighted.
[ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ;
  0 0 ∗ ∗ ∗ ∗ ∗ ∗ ;
  0 0 0 ∗ ∗ ∗ ∗ ∗ ;
  0 0 0 0 0 0 ∗ ∗ ;
  0 0 0 0 0 0 0 ∗ ;
  0 0 0 0 0 0 0 0 ],
[ ∗ ∗ ∗ ∗ ∗ ;  0 0 ∗ ∗ ∗ ;  0 0 0 0 0 ],
[ ∗ ∗ ∗ ;  0 ∗ ∗ ;  0 0 0 ].
C
Example 8.1.10: The following matrices are in echelon form, with pivots highlighted.
[ 1  3 ; 0  1 ],    [ 2  3  2 ; 0  4  −2 ],    [ 2  1  1 ; 0  3  4 ; 0  0  0 ].
C
Definition 8.1.10. An m × n matrix is in reduced echelon form iff the matrix is in
echelon form and the following two conditions hold:
(i) The pivot coefficient is equal to 1;
(ii) The pivot coefficient is the only non-zero coefficient in that column.
We denote by EA a reduced echelon form of a matrix A.
Example 8.1.11: The 6 × 8, 3 × 5 and 3 × 3 matrices given below are in reduced echelon form,
where the ∗ means any non-zero number and pivots are highlighted.
[ 1 ∗ 0 0 ∗ ∗ 0 ∗ ;
  0 0 1 0 ∗ ∗ 0 ∗ ;
  0 0 0 1 ∗ ∗ 0 ∗ ;
  0 0 0 0 0 0 1 ∗ ;
  0 0 0 0 0 0 0 0 ;
  0 0 0 0 0 0 0 0 ],
[ 1 ∗ 0 ∗ ∗ ;  0 0 1 ∗ ∗ ;  0 0 0 0 0 ],
[ 1 0 0 ;  0 1 0 ;  0 0 1 ].
C
Example 8.1.12: And the following matrices are not only in echelon form but also in
reduced echelon form; again, pivot coefficients are highlighted.
[ 1  0 ; 0  1 ],    [ 1  0  4 ; 0  1  5 ],    [ 1  0  0 ; 0  1  0 ; 0  0  0 ].
C
Summarizing, the Gauss elimination operations can transform any matrix into reduced
echelon form. Once the augmented matrix of a linear system is written in reduced echelon
form, it is not difficult to decide whether the system has solutions or not.
Example 8.1.13: Use Gauss operations to find the solution of the linear system
2x1 − x2 = 0,
−(1/2) x1 + (1/4) x2 = −1/4.
Solution: We find the system augmented matrix and perform appropriate Gauss elimina-
tion operations,
[ 2  −1 | 0 ; −1/2  1/4 | −1/4 ] → [ 2  −1 | 0 ; −2  1 | −1 ] → [ 2  −1 | 0 ; 0  0 | 1 ].
From the last augmented matrix above we see that the original linear system has the same
solutions as the linear system given by
2x1 − x2 = 0,
0 = 1.
Since the latter system has no solutions, the original system has no solutions. C
The situation shown in Example 8.1.13 is true in general. If the augmented matrix [A|b] of
an algebraic linear system is transformed by Gauss operations into the augmented matrix
[Ã|b̃] having a row of the form [0, · · · , 0|1], then the original algebraic linear system Ax = b
has no solution.
Example 8.1.14: Find all vectors b such that the system Ax = b has solutions, where
A = [ 1  −2  3 ; −1  1  −2 ; 2  −1  3 ],    b = [ b1 ; b2 ; b3 ].
Solution: We do not need to write down the algebraic linear system, we only need its
augmented matrix,
[A|b] = [ 1  −2  3 | b1 ; −1  1  −2 | b2 ; 2  −1  3 | b3 ] → [ 1  −2  3 | b1 ; 0  −1  1 | b1 + b2 ; 2  −1  3 | b3 ] →
[ 1  −2  3 | b1 ; 0  1  −1 | −b1 − b2 ; 0  3  −3 | b3 − 2b1 ] → [ 1  −2  3 | b1 ; 0  1  −1 | −b1 − b2 ; 0  0  0 | b3 + b1 + 3b2 ].
Therefore, the system Ax = b has solutions iff the components of the source vector satisfy
b1 + 3b2 + b3 = 0.
C
8.1.3. Linear dependence. We generalize the idea of two vectors lying on the same line,
and three vectors lying on the same plane, to an arbitrary number of vectors.
Definition 8.1.11. A set of vectors {v1 , · · · , vk }, with k > 1 is called linearly dependent
iff there exist constants c1, · · · , ck, with at least one of them non-zero, such that
c1 v1 + · · · + ck vk = 0. (8.1.4)
The set of vectors is called linearly independent iff it is not linearly dependent, that is,
the only constants c1 , · · · , ck that satisfy Eq. (8.1.4) are given by c1 = · · · = ck = 0.
In other words, a set of vectors is linearly dependent iff one of the vectors is a linear combi-
nation of the other vectors. When this is not possible, the set is called linearly independent.
Example 8.1.15: Show that the following set of vectors is linearly dependent,
{ [1; 2; 3],  [3; 2; 1],  [−1; 2; 5] },
and express one of the vectors as a linear combination of the other two.
Solution: We need to find constants c1, c2, and c3 solutions of the equation
c1 [1; 2; 3] + c2 [3; 2; 1] + c3 [−1; 2; 5] = [0; 0; 0]   ⇔   [ 1  3  −1 ; 2  2  2 ; 3  1  5 ] [ c1 ; c2 ; c3 ] = [ 0 ; 0 ; 0 ].
The solution to this linear system can be obtained with Gauss elimination operations,
[ 1  3  −1 ; 2  2  2 ; 3  1  5 ] → [ 1  3  −1 ; 0  −4  4 ; 0  −8  8 ] → [ 1  3  −1 ; 0  1  −1 ; 0  1  −1 ] → [ 1  0  2 ; 0  1  −1 ; 0  0  0 ]
⇒   c1 = −2c3,   c2 = c3,   c3 free.
Since there are non-zero constants c1, c2, c3 solutions to the linear system above, the vectors
are linearly dependent. Choosing c3 = −1 we obtain the third vector as a linear combination
of the other two vectors,
2 [1; 2; 3] − [3; 2; 1] − [−1; 2; 5] = [0; 0; 0]   ⇔   [−1; 2; 5] = 2 [1; 2; 3] − [3; 2; 1].
C
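An added numerical check of this example: the rank of the matrix whose columns are the three vectors detects the dependence, and the combination found above annihilates the columns.

```python
# Sketch: numerical test of the linear dependence found in Example 8.1.15.
import numpy as np

V = np.array([[1.0, 3.0, -1.0],
              [2.0, 2.0,  2.0],
              [3.0, 1.0,  5.0]])        # columns are the three vectors
print(np.linalg.matrix_rank(V))         # 2 < 3, so the columns are linearly dependent
print(V @ np.array([-2.0, 1.0, 1.0]))   # [0. 0. 0.]: the combination c = (-2, 1, 1)
```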
8.1.4. Exercises.
8.1.1.- . 8.1.2.- .
Figure 49. The first picture shows the action of the matrix in Example 8.2.1, a reflection
along the line x2 = x1. The second picture shows the action of the matrix in Example 8.2.2
on the vectors x, y, z, a ninety degree counterclockwise rotation of the plane.
Example 8.2.2: Describe the action on R2 of the function given by the 2 × 2 matrix
A = [ 0  −1 ; 1  0 ].    (8.2.2)
These cases are plotted in the second figure on Fig. 49, and the vectors are called x, y and
z, respectively. We therefore conclude that this matrix produces a ninety degree counter-
clockwise rotation of the plane. C
Definition 8.2.2. The transpose of a matrix A = [Aij] ∈ F^{m,n} is the matrix denoted as
A^T = [(A^T)_{kl}] ∈ F^{n,m}, with its components given by (A^T)_{kl} = A_{lk}.
Example 8.2.6: Find the transpose of the 2 × 3 matrix A = [ 1  3  5 ; 2  4  6 ].
2 4 6
Solution: Matrix A has components Aij with i = 1, 2 and j = 1, 2, 3. Therefore, its
transpose has components (AT )ji = Aij , that is, AT has three rows and two columns,
A^T = [ 1  2 ; 3  4 ; 5  6 ].
C
If a matrix has complex-valued coefficients, then the conjugate of a matrix can be defined
as the conjugate of each component.
Definition 8.2.3. The complex conjugate of a matrix A = [Aij] ∈ F^{m,n} is the matrix
Ā = [ Āij ] ∈ F^{m,n}.
Example 8.2.7: A matrix A and its conjugate is given below,
A = [ 1  2+i ; −i  3−4i ]   ⇔   Ā = [ 1  2−i ; i  3+4i ].
C
Example 8.2.8: A matrix A has real coefficients iff Ā = A; it has purely imaginary coeffi-
cients iff Ā = −A. Here are examples of these two situations:
A = [ 1  2 ; 3  4 ]   ⇒   Ā = [ 1  2 ; 3  4 ] = A;
A = [ i  2i ; 3i  4i ]   ⇒   Ā = [ −i  −2i ; −3i  −4i ] = −A.
C
Definition 8.2.4. The adjoint of a matrix A ∈ F^{m,n} is the matrix A* = (Ā)^T ∈ F^{n,m}.
Since the transpose of the conjugate equals the conjugate of the transpose, the order of the
operations does not change the result; that is why there is no parenthesis in the definition of A*.
Example 8.2.9: A matrix A and its adjoint is given below,
A = [ 1  2+i ; −i  3−4i ]   ⇔   A* = [ 1  i ; 2−i  3+4i ].
C
The transpose, conjugate and adjoint operations are useful to specify certain classes of
matrices with particular symmetries. Here we introduce a few of these classes.
Definition 8.2.5. An n × n matrix A is called:
(a) symmetric iff holds A = AT ;
(b) skew-symmetric iff holds A = −AT ;
(c) Hermitian iff holds A = A∗ ;
(d) skew-Hermitian iff holds A = −A∗ .
Example 8.2.10: We present examples of each of the classes introduced in Def. 8.2.5.
Part (a): Matrices A and B are symmetric. Notice that A is also Hermitian, while B is
not Hermitian,
A = [ 1  2  3 ; 2  7  4 ; 3  4  8 ] = A^T,    B = [ 1  2+3i  3 ; 2+3i  7  4i ; 3  4i  8 ] = B^T.
Part (b): Matrix C is skew-symmetric,
C = [ 0  −2  3 ; 2  0  −4 ; −3  4  0 ]   ⇒   C^T = [ 0  2  −3 ; −2  0  4 ; 3  −4  0 ] = −C.
Notice that the diagonal elements in a skew-symmetric matrix must vanish, since Cij = −Cji
in the case i = j means Cii = −Cii , that is, Cii = 0.
Part (c): Matrix D is Hermitian but is not symmetric:
D = [ 1  2+i  3 ; 2−i  7  4+i ; 3  4−i  8 ]   ⇒   D^T = [ 1  2−i  3 ; 2+i  7  4−i ; 3  4+i  8 ] ≠ D,
however,
D* = (D̄)^T = [ 1  2+i  3 ; 2−i  7  4+i ; 3  4−i  8 ] = D.
Notice that the diagonal elements in a Hermitian matrix must be real numbers, since the
condition Aij = Āji in the case i = j implies Aii = Āii, that is, 2i Im(Aii) = Aii − Āii = 0.
We can also verify what we said in part (a): matrix A is Hermitian since A* = (Ā)^T = A^T = A.
Part (d): The following matrix E is skew-Hermitian:
E = [ i  2+i  −3 ; −2+i  7i  4+i ; 3  −4+i  8i ]   ⇒   E^T = [ i  −2+i  3 ; 2+i  7i  −4+i ; −3  4+i  8i ],
therefore,
E* = (Ē)^T = [ −i  −2−i  3 ; 2−i  −7i  −4−i ; −3  4−i  −8i ] = −E.
A skew-Hermitian matrix has purely imaginary elements in its diagonal, and the off diagonal
elements have skew-symmetric real parts with symmetric imaginary parts. C
The trace of a square matrix is a number, the sum of all the diagonal elements of the matrix.
Definition 8.2.6. The trace of a square matrix A = Aij ∈ Fn,n , denoted as tr (A) ∈ F,
is the sum of its diagonal elements, that is, the scalar given by tr (A) = A11 + · · · + Ann .
Example 8.2.11: Find the trace of the matrix A = [ 1  2  3 ; 4  5  6 ; 7  8  9 ].
Solution: We only have to add up the diagonal elements:
tr (A) = 1 + 5 + 9 ⇒ tr (A) = 15.
C
The product is not defined for two arbitrary matrices, since the size of the matrices is
important: The numbers of columns in the first matrix must match the numbers of rows in
the second matrix.
A (m × n)   times   B (n × ℓ)   defines   AB (m × ℓ).
Example 8.2.12: Compute AB, where A = [ 2  −1 ; −1  2 ] and B = [ 3  0 ; 2  −1 ].
Solution: The component (AB)11 = 4 is obtained from the first row in matrix A and the
first column in matrix B as follows: (AB)11 = (2)(3) + (−1)(2) = 4.
The component (AB)12 = 1 is obtained as follows: (AB)12 = (2)(0) + (−1)(−1) = 1.
The component (AB)21 = 1 is obtained as follows: (AB)21 = (−1)(3) + (2)(2) = 1.
And finally the component (AB)22 = −2 is obtained as follows: (AB)22 = (−1)(0) + (2)(−1) = −2.
Therefore,
AB = [ 4  1 ; 1  −2 ].
C
Example 8.2.13: Compute BA, where A = [ 2  −1 ; −1  2 ] and B = [ 3  0 ; 2  −1 ].
Solution: We find that BA = [ 6  −3 ; 5  −4 ]. Notice that in this case AB ≠ BA. C
Example 8.2.14: Compute AB and BA, where A = [ 4  3 ; 2  1 ] and B = [ 1  2  3 ; 4  5  6 ].
Solution: The product AB is
AB = [ 4  3 ; 2  1 ] [ 1  2  3 ; 4  5  6 ] = [ 16  23  30 ; 6  9  12 ].
The product BA is not possible. C
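An added check of Examples 8.2.12 and 8.2.13 with numpy, which also illustrates that the matrix product is not commutative.

```python
# Sketch: the products of Examples 8.2.12 and 8.2.13 (A @ B is the matrix product AB).
import numpy as np

A = np.array([[2, -1], [-1, 2]])
B = np.array([[3, 0], [2, -1]])
print(A @ B)   # [[ 4  1] [ 1 -2]]
print(B @ A)   # [[ 6 -3] [ 5 -4]]  -- so AB != BA
```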
8.2.3. The inverse matrix. We now introduce the concept of the inverse of a square
matrix. Not every square matrix is invertible. The inverse of a matrix is useful to compute
solutions to linear systems of algebraic equations.
Definition 8.2.8. The matrix In ∈ Fn,n is the identity matrix iff In x = x for all x ∈ Fn .
It is simple to see that the components of the identity matrix are given by
In = [Iij]   with   Iii = 1,   Iij = 0 for i ≠ j.
The cases n = 2, 3 are given by
I2 = [ 1  0 ; 0  1 ],    I3 = [ 1  0  0 ; 0  1  0 ; 0  0  1 ].
Definition 8.2.9. A matrix A ∈ F^{n,n} is called invertible iff there exists a matrix, denoted
as A^{−1}, such that (A^{−1}) A = In and A (A^{−1}) = In.
Example 8.2.15: Verify that the matrix and its inverse are given by
A = [ 2  2 ; 1  3 ],    A^{−1} = (1/4) [ 3  −2 ; −1  2 ].
The number ∆ is called the determinant of A, since it is the number that determines whether
A is invertible or not.
Example 8.2.16: Compute the inverse of matrix A = [ 2  2 ; 1  3 ], given in Example 8.2.15.
Solution: Following Theorem 8.2.10 we first compute ∆ = 6 − 2 = 4. Since ∆ ≠ 0, then
A^{−1} exists and it is given by
A^{−1} = (1/4) [ 3  −2 ; −1  2 ].
C
Example 8.2.17: Compute the inverse of matrix A = [ 1  2 ; 3  6 ].
Solution: Following Theorem 8.2.10 we first compute ∆ = 6 − 6 = 0. Since ∆ = 0, then
matrix A is not invertible. C
Gauss operations can be used to compute the inverse of a matrix. The reason for this is
simple to understand in the case of 2 × 2 matrices, as can be seen in the following Example.
Example 8.2.18: Given any 2 × 2 matrix A, find its inverse matrix, A−1 , or show that the
inverse does not exist.
Solution: If the inverse matrix A^{−1} exists, then denote it as A^{−1} = [x1, x2]. The equation
A (A^{−1}) = I2 is then equivalent to A [x1, x2] = [ 1  0 ; 0  1 ]. This equation is equivalent to
solving two algebraic linear systems,
A x1 = [ 1 ; 0 ],    A x2 = [ 0 ; 1 ].
Here is where we can use Gauss elimination operations. We use them on both augmented
matrices
[ A | [1; 0] ],    [ A | [0; 1] ].
However, we can solve both systems at the same time if we do Gauss operations on the
bigger augmented matrix
[ A | I2 ].
Now, perform Gauss operations until we obtain the reduced echelon form for [A|I2 ]. Then
we can have two different types of results:
• If there is no line of the form [0, 0|∗, ∗] with any of the star coefficients non-zero,
then matrix A is invertible and the solution vectors x1 , x2 form the columns of the
inverse matrix, that is, A−1 = [x1 , x2 ].
• If there is a line of the form [0, 0|∗, ∗] with any of the star coefficients non-zero, then
matrix A is not invertible. C
Example 8.2.19: Use Gauss operations to find the inverse of A = [ 2  2 ; 1  3 ].
Solution: As we said in the Example above, perform Gauss operations on the augmented
matrix [A|I2] until the reduced echelon form is obtained, that is,
[ 2  2 | 1  0 ; 1  3 | 0  1 ] → [ 1  3 | 0  1 ; 2  2 | 1  0 ] → [ 1  3 | 0  1 ; 0  −4 | 1  −2 ] →
[ 1  3 | 0  1 ; 0  1 | −1/4  1/2 ] → [ 1  0 | 3/4  −1/2 ; 0  1 | −1/4  1/2 ].
That is, matrix A is invertible and the inverse is
A^{−1} = [ 3/4  −1/2 ; −1/4  1/2 ]   ⇔   A^{−1} = (1/4) [ 3  −2 ; −1  2 ].
C
Example 8.2.20: Use Gauss operations to find the inverse of A = [ 1  2  3 ; 2  5  7 ; 3  7  9 ].
Solution: We perform Gauss operations on the augmented matrix [A|I3 ] until we obtain
its reduced echelon form, that is,
[ 1  2  3 | 1  0  0 ; 2  5  7 | 0  1  0 ; 3  7  9 | 0  0  1 ] → [ 1  2  3 | 1  0  0 ; 0  1  1 | −2  1  0 ; 0  1  0 | −3  0  1 ] →
[ 1  0  1 | 5  −2  0 ; 0  1  1 | −2  1  0 ; 0  0  −1 | −1  −1  1 ] → [ 1  0  1 | 5  −2  0 ; 0  1  1 | −2  1  0 ; 0  0  1 | 1  1  −1 ] →
[ 1  0  0 | 4  −3  1 ; 0  1  0 | −3  0  1 ; 0  0  1 | 1  1  −1 ].
We conclude that matrix A is invertible and
A^{−1} = [ 4  −3  1 ; −3  0  1 ; 1  1  −1 ].
C
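An added check of this inverse with numpy:

```python
# Sketch: the inverse computed in Example 8.2.20 checked numerically.
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 5.0, 7.0],
              [3.0, 7.0, 9.0]])
Ainv = np.linalg.inv(A)
print(np.round(Ainv))       # [[ 4. -3.  1.] [-3.  0.  1.] [ 1.  1. -1.]]
print(np.round(A @ Ainv))   # the 3 x 3 identity
```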
8.2.4. Determinants. A determinant is a scalar computed from a square matrix that gives
important information about the matrix, for example whether the matrix is invertible or not. We
now review the definition and properties of the determinant of 2 × 2 and 3 × 3 matrices.
Definition 8.2.11. The determinant of a 2 × 2 matrix A = [ a11  a12 ; a21  a22 ] is given by
det(A) = | a11  a12 ; a21  a22 | = a11 a22 − a12 a21.
The determinant of a 3 × 3 matrix A = [ a11  a12  a13 ; a21  a22  a23 ; a31  a32  a33 ] is given by
det(A) = a11 | a22  a23 ; a32  a33 | − a12 | a21  a23 ; a31  a33 | + a13 | a21  a22 ; a31  a32 |.
Example 8.2.21: The following three examples show that the determinant can be a negative,
zero or positive number.
| 1  2 ; 3  4 | = 4 − 6 = −2,    | 2  1 ; 3  4 | = 8 − 3 = 5,    | 1  2 ; 2  4 | = 4 − 4 = 0.
The following example shows how to compute the determinant of a 3 × 3 matrix,
| 1  3  −1 ; 2  1  1 ; 3  2  1 | = (1) | 1  1 ; 2  1 | − 3 | 2  1 ; 3  1 | + (−1) | 2  1 ; 3  2 |
                                = (1 − 2) − 3 (2 − 3) − (4 − 3)
                                = −1 + 3 − 1 = 1.   C
Figure 50. The parallelogram in R² determined by the column vectors a1, a2 of a 2 × 2 matrix,
and the parallelepiped in R³ determined by the column vectors a1, a2, a3 of a 3 × 3 matrix.
matrix A is a linear combination of the others, then the figure determined by these column
vectors is not n-dimensional but (n−1)-dimensional, so its volume must vanish. We highlight
this property of the determinant of n × n matrices in the following result.
Theorem 8.2.12. The set of n-vectors {v1 , · · · , vn }, with n > 1, is linearly dependent iff
det[v1 , · · · , vn ] = 0.
Example 8.2.22: Show whether the set of vectors below is linearly independent,
{ [1; 2; 3],  [3; 2; 1],  [−3; 2; 7] }.
The determinant of the matrix whose column vectors are the vectors above is given by
| 1  3  −3 ; 2  2  2 ; 3  1  7 | = (1) (14 − 2) − 3 (14 − 6) + (−3) (2 − 6) = 12 − 24 + 12 = 0.
Therefore, the set of vectors above is linearly dependent. C
The determinant of a square matrix also determines whether the matrix is invertible or not.
Theorem 8.2.13. An n × n matrix A is invertible iff holds det(A) 6= 0.
Example 8.2.23: Is matrix A = [ 1  2  3 ; 2  5  7 ; 3  7  9 ] invertible?
Solution: We only need to compute the determinant of A,
det(A) = | 1  2  3 ; 2  5  7 ; 3  7  9 | = (1) | 5  7 ; 7  9 | − (2) | 2  7 ; 3  9 | + (3) | 2  5 ; 3  7 |
       = (45 − 49) − 2 (18 − 21) + 3 (14 − 15) = −4 + 6 − 3 = −1 ≠ 0.
Therefore, matrix A is invertible. C
8.2.5. Exercises.
8.2.1.- . 8.2.2.- .
Solution: We just must verify the definition of eigenvalue and eigenvector given above.
We start with the first pair,
Av1 = [ 1  3 ; 3  1 ] [ 1 ; 1 ] = [ 4 ; 4 ] = 4 [ 1 ; 1 ] = λ1 v1   ⇒   Av1 = λ1 v1.
A similar calculation for the second pair implies,
Av2 = [ 1  3 ; 3  1 ] [ −1 ; 1 ] = [ 2 ; −2 ] = −2 [ −1 ; 1 ] = λ2 v2   ⇒   Av2 = λ2 v2.
C
Example 8.3.2: Find the eigenvalues and eigenvectors of the matrix A = [ 0  1 ; 1  0 ].
Solution: This is the matrix given in Example 8.2.1. The action of this matrix on the
plane is a reflection along the line x1 = x2 , as it was shown in Fig. 49. Therefore, this line
x1 = x2 is left invariant under the action of this matrix. This property suggests that an
eigenvector is any vector on that line, for example
v1 = [ 1 ; 1 ]   ⇒   [ 0  1 ; 1  0 ] [ 1 ; 1 ] = [ 1 ; 1 ]   ⇒   λ1 = 1.
So, we have found one eigenvalue-eigenvector pair: λ1 = 1, with v1 = [ 1 ; 1 ]. We remark that
any non-zero vector proportional to v1 is also an eigenvector. Another choice of eigenvalue-
eigenvector pair is λ1 = 1, with v1 = [ 3 ; 3 ]. It is not so easy to find a second eigenvector
which does not belong to the line determined by v1 . One way to find such eigenvector is
noticing that the line perpendicular to the line x1 = x2 is also left invariant by matrix A.
Therefore, any non-zero vector on that line must be an eigenvector. For example the vector
v2 below, since
v2 = [ −1 ; 1 ]   ⇒   [ 0  1 ; 1  0 ] [ −1 ; 1 ] = [ 1 ; −1 ] = (−1) [ −1 ; 1 ]   ⇒   λ2 = −1.
So, we have found a second eigenvalue-eigenvector pair: λ2 = −1, with v2 = [ −1 ; 1 ]. These
two eigenvectors are displayed on Fig. 51. C
Figure 51. The first picture shows the eigenvalues and eigenvectors of
the matrix in Example 8.3.2. The second picture shows that the matrix
in Example 8.3.3 makes a counterclockwise rotation by an angle θ, which
proves that this matrix does not have eigenvalues or eigenvectors.
There exist matrices that do not have eigenvalues and eigenvectors, as is shown in the
example below.
Example 8.3.3: Fix any number θ ∈ (0, 2π) and define the matrix A = [ cos(θ)  −sin(θ) ; sin(θ)  cos(θ) ].
Show that A has no real eigenvalues.
Solution: One can compute the action of matrix A on several vectors and verify that the
action of this matrix on the plane is a rotation counterclockwise by an angle θ, as shown
in Fig. 51. A particular case of this matrix was shown in Example 8.2.2, where θ = π/2.
Since eigenvectors of a matrix determine directions which are left invariant by the action of
the matrix, and a rotation does not have such directions, we conclude that the matrix A
above does not have eigenvectors and so it does not have eigenvalues either. C
Remark: We will show that matrix A in Example 8.3.3 has complex-valued eigenvalues.
We now describe a method to find eigenvalue-eigenvector pairs of a matrix, if they exist.
In other words, we are going to solve the eigenvalue-eigenvector problem: Given an n × n
matrix A find, if possible, all its eigenvalues and eigenvectors, that is, all pairs λ and v 6= 0
solutions of the equation
Av = λv.
This problem is more complicated than finding the solution x to a linear system Ax = b,
where A and b are known. In the eigenvalue-eigenvector problem above neither λ nor v are
known. To solve the eigenvalue-eigenvector problem for a matrix A we proceed as follows:
(a) First, find the eigenvalues λ;
(b) Second, for each eigenvalue λ find the corresponding eigenvectors v.
The following result summarizes a way to solve the steps above.
Theorem 8.3.2 (Eigenvalues-eigenvectors).
(a) The number λ is an eigenvalue of an n × n matrix A iff holds,
det(A − λI) = 0. (8.3.1)
(b) Given an eigenvalue λ of an n × n matrix A, the corresponding eigenvectors v are the
non-zero solutions to the homogeneous linear system
(A − λI)v = 0. (8.3.2)
Proof of Theorem 8.3.2: The number λ and the non-zero vector v are an eigenvalue-
eigenvector pair of matrix A iff holds
Av = λv ⇔ (A − λI)v = 0,
where I is the n × n identity matrix. Since v 6= 0, the last equation above says that the
columns of the matrix (A − λI) are linearly dependent. This last property is equivalent, by
Theorem 8.2.12, to the equation
det(A − λI) = 0,
which is the equation that determines the eigenvalues λ. Once this equation is solved,
substitute each solution λ back into the original eigenvalue-eigenvector equation
(A − λI)v = 0.
Since λ is known, this is a linear homogeneous system for the eigenvector components. It
always has non-zero solutions, since λ is precisely the number that makes the coefficient
matrix (A − λI) not invertible. This establishes the Theorem.
Example 8.3.4: Find the eigenvalues λ and eigenvectors v of the matrix A = [ 1  3 ; 3  1 ].
Solution: We first find the eigenvalues as the solutions of the Eq. (8.3.1). Compute
A − λI = [ 1  3 ; 3  1 ] − λ [ 1  0 ; 0  1 ] = [ 1  3 ; 3  1 ] − [ λ  0 ; 0  λ ] = [ (1−λ)  3 ; 3  (1−λ) ].
Then we compute its determinant,
0 = det(A − λI) = | (1−λ)  3 ; 3  (1−λ) | = (λ − 1)² − 9   ⇒   λ+ = 4,   λ- = −2.
We have obtained two eigenvalues, so now we introduce λ+ = 4 into Eq. (8.3.2), that is,
A − 4I = [ 1−4  3 ; 3  1−4 ] = [ −3  3 ; 3  −3 ].
It is useful to introduce a few more concepts that are common in the literature.
Definition 8.3.3. The characteristic polynomial of an n × n matrix A is the function
p(λ) = det(A − λI).
Example 8.3.5: Find the characteristic polynomial of matrix A = [ 1  3 ; 3  1 ].
Solution: We need to compute the determinant
p(λ) = det(A − λI) = | (1−λ)  3 ; 3  (1−λ) | = (1 − λ)² − 9 = λ² − 2λ + 1 − 9.
We conclude that the characteristic polynomial is p(λ) = λ2 − 2λ − 8. C
Since the matrix A in this example is 2 × 2, its characteristic polynomial has degree two.
One can show that the characteristic polynomial of an n × n matrix has degree n. The
eigenvalues of the matrix are the roots of the characteristic polynomial. Different matrices
may have different types of roots, so we try to classify these roots in the following definition.
Solution: In order to find the algebraic multiplicity of the eigenvalues we need first to find
the eigenvalues. We know that the characteristic polynomial of this matrix is given by
p(λ) = | (1−λ)  3 ; 3  (1−λ) | = (λ − 1)² − 9.
The roots of this polynomial are λ1 = 4 and λ2 = −2, so we know that p(λ) can be rewritten
in the following way,
p(λ) = (λ − 4)(λ + 2).
We conclude that the algebraic multiplicity of the eigenvalues are both one, that is,
λ1 = 4, r1 = 1, and λ2 = −2, r2 = 1.
In order to find the geometric multiplicities of matrix eigenvalues we need first to find the
matrix eigenvectors. This part of the work was already done in the Example 8.3.4 above
and the result is
λ1 = 4, v^{(1)} = [ 1 ; 1 ],    λ2 = −2, v^{(2)} = [ −1 ; 1 ].
From this expression we conclude that the geometric multiplicities for each eigenvalue are
just one, that is,
λ1 = 4, s1 = 1, and λ2 = −2, s2 = 1.
C
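As an added check, numpy computes the same eigenvalues and (normalized) eigenvectors found in Examples 8.3.4 and above:

```python
# Sketch: eigenvalues and eigenvectors of A = [[1, 3], [3, 1]] with numpy.
import numpy as np

A = np.array([[1.0, 3.0],
              [3.0, 1.0]])
vals, vecs = np.linalg.eig(A)
print(vals)   # 4 and -2 (order may vary)
print(vecs)   # columns proportional to (1, 1) and (-1, 1)
```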
The following example shows that two matrices can have the same eigenvalues, and so the
same algebraic multiplicities, but different eigenvectors with different geometric multiplici-
ties.
Example 8.3.7: Find the eigenvalues and eigenvectors of the matrix A = [ 3  0  1 ; 0  3  2 ; 0  0  1 ].
Solution: We start finding the eigenvalues, the roots of the characteristic polynomial
p(λ) = | (3−λ)  0  1 ; 0  (3−λ)  2 ; 0  0  (1−λ) | = −(λ − 1)(λ − 3)²   ⇒   λ1 = 1, r1 = 1,   λ2 = 3, r2 = 2.
We now compute the eigenvector associated with the eigenvalue λ1 = 1, which is the solution
of the linear system
(A − I)v^{(1)} = 0   ⇔   [ 2  0  1 ; 0  2  2 ; 0  0  0 ] [ v1^{(1)} ; v2^{(1)} ; v3^{(1)} ] = [ 0 ; 0 ; 0 ].
Therefore, we obtain two linearly independent solutions, the first one v^{(2)} with the choice
v1^{(2)} = 1, v2^{(2)} = 0, and the second one w^{(2)} with the choice v1^{(2)} = 0, v2^{(2)} = 1, that is
v^{(2)} = [ 1 ; 0 ; 0 ],    w^{(2)} = [ 0 ; 1 ; 0 ],    λ2 = 3,  r2 = 2,  s2 = 2.
Summarizing, the matrix in this example has three linearly independent eigenvectors. C
Example 8.3.8: Find the eigenvalues and eigenvectors of the matrix A = [ 3  1  1 ; 0  3  2 ; 0  0  1 ].
Solution: Notice that this matrix has only the coefficient a12 different from the previous
example. Again, we start finding the eigenvalues, which are the roots of the characteristic
polynomial
p(λ) = | (3−λ)  1  1 ; 0  (3−λ)  2 ; 0  0  (1−λ) | = −(λ − 1)(λ − 3)²   ⇒   λ1 = 1, r1 = 1,   λ2 = 3, r2 = 2.
So this matrix has the same eigenvalues and algebraic multiplicities as the matrix in the
previous example. We now compute the eigenvector associated with the eigenvalue λ1 = 1,
which is the solution of the linear system
(A − I)v^{(1)} = 0   ⇔   [ 2  1  1 ; 0  2  2 ; 0  0  0 ] [ v1^{(1)} ; v2^{(1)} ; v3^{(1)} ] = [ 0 ; 0 ; 0 ].
8.3.2. Diagonalizable matrices. We first introduce the notion of a diagonal matrix. Later
on we define the idea of a diagonalizable matrix as a matrix that can be transformed into a
diagonal matrix by a simple transformation.
Definition 8.3.5. An n × n matrix A is called diagonal iff
A = [ a11 · · · 0 ; · · · ; 0 · · · ann ],
that is, iff every non-diagonal coefficient vanishes. From now on we use the following
notation for a diagonal matrix A:
A = diag[a11, · · · , ann] = [ a11 · · · 0 ; · · · ; 0 · · · ann ].
This notation says that the matrix is diagonal and shows only the diagonal coefficients,
since any other coefficient vanishes. Diagonal matrices are simple to manipulate since they
share many properties with numbers. For example the product of two diagonal matrices is
There is a deep relation between the eigenvalues and eigenvectors of a matrix and the
property of diagonalizability.
Theorem 8.3.7 (Diagonalizable matrices). An n × n matrix A is diagonalizable iff
matrix A has a linearly independent set of n eigenvectors. Furthermore,
A = P D P^{−1},    P = [v1, · · · , vn],    D = diag[λ1, · · · , λn],
where the last equation comes from multiplying the former equation by P on the left. This
last equation says that the vectors v(i) = P e(i) are eigenvectors of A with eigenvalue dii .
By definition, v(i) is the i-th column of matrix P , that is,
P = v(1) , · · · , v(n) .
Since matrix P is invertible, the eigenvectors set {v(1) , · · · , v(n) } is linearly independent.
This establishes this part of the Theorem.
(⇐) Let λi , v(i) be eigenvalue-eigenvector pairs of matrix
A, for i = 1, · · · , n. Now use the
eigenvectors to construct matrix P = v(1) , · · · , v(n) . This matrix is invertible, since the
eigenvector set {v(1) , · · · , v(n) } is linearly independent. We now show that matrix P −1 AP
is diagonal. We start computing the product
AP = A [v^{(1)}, · · · , v^{(n)}] = [Av^{(1)}, · · · , Av^{(n)}] = [λ1 v^{(1)}, · · · , λn v^{(n)}],
that is,
P^{−1} AP = P^{−1} [λ1 v^{(1)}, · · · , λn v^{(n)}] = [λ1 P^{−1} v^{(1)}, · · · , λn P^{−1} v^{(n)}].
We conclude that e^{(i)} = P^{−1} v^{(i)}, for i = 1, · · · , n. Using these equations in the equation for
P^{−1} AP we get
P^{−1} AP = [λ1 e^{(1)}, · · · , λn e^{(n)}] = diag[λ1, · · · , λn],
that is,
A = P D P^{−1},    P = [v^{(1)}, · · · , v^{(n)}],    D = diag[λ1, · · · , λn].
This means that A is diagonalizable. This establishes the Theorem.
Example 8.3.11: Show that matrix A = [ 1  3 ; 3  1 ] is diagonalizable.
Solution: We know that the eigenvalue eigenvector pairs are
λ1 = 4, v1 = [ 1 ; 1 ]   and   λ2 = −2, v2 = [ −1 ; 1 ].
Introduce P and D as follows,
P = [ 1  −1 ; 1  1 ]   ⇒   P^{−1} = (1/2) [ 1  1 ; −1  1 ],    D = [ 4  0 ; 0  −2 ].
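An added verification that these matrices satisfy A = P D P^{−1}:

```python
# Sketch: check the diagonalization of Example 8.3.11.
import numpy as np

A = np.array([[1.0, 3.0], [3.0, 1.0]])
P = np.array([[1.0, -1.0], [1.0, 1.0]])
D = np.diag([4.0, -2.0])
print(P @ D @ np.linalg.inv(P))   # recovers A = [[1, 3], [3, 1]]
```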
Example 8.3.12: Show that matrix B = (1/2) [ 3  1 ; −1  5 ] is not diagonalizable.
Solution: We first compute the matrix eigenvalues. The characteristic polynomial is
p(λ) = | (3/2 − λ)  1/2 ; −1/2  (5/2 − λ) | = (3/2 − λ)(5/2 − λ) + 1/4 = λ² − 4λ + 4.
The roots of the characteristic polynomial are computed in the usual way,
λ = (1/2) (4 ± √(16 − 16))   ⇒   λ = 2,  r = 2.
We have obtained a single eigenvalue with algebraic multiplicity r = 2. The associated
eigenvectors are computed as the solutions to the equation (B − 2I)v = 0. Then,
(B − 2I) = [ 3/2 − 2   1/2 ; −1/2   5/2 − 2 ] = [ −1/2  1/2 ; −1/2  1/2 ] → [ 1  −1 ; 0  0 ]   ⇒   v = [ 1 ; 1 ],  s = 1.
We conclude that the biggest linearly independent set of eigenvectors for the 2 × 2 matrix B
contains only one vector, instead of two. Therefore, matrix B is not diagonalizable. C
Repeat the whole procedure starting with Eq. (8.3.5), that is, multiply this later equation
by matrix A and also by λn−1 , then subtract the second from the first, the result is,
c1 (λ1 − λn )(λ1 − λn−1 )v(1) + · · · + cn−2 (λn−2 − λn )(λn−2 − λn−1 )v(n−2) = 0.
Repeat the whole procedure a total of n − 1 times, in the last step we obtain the equation
c1 (λ1 − λn )(λ1 − λn−1 ) · · · (λ1 − λ2 )v(1) = 0.
Since all the eigenvalues are different, we conclude that c1 = 0, however this contradicts our
assumption that c1 6= 0. Therefore, the set of n eigenvectors must be linearly independent.
Example 8.3.13: Is matrix A = [ 1  1 ; 1  1 ] diagonalizable?
Solution: We compute the matrix eigenvalues, starting with the characteristic polynomial,
p(λ) = | (1−λ)  1 ; 1  (1−λ) | = (1 − λ)² − 1 = λ² − 2λ   ⇒   p(λ) = λ(λ − 2).
The roots of the characteristic polynomial are the matrix eigenvalues,
λ1 = 0, λ2 = 2.
The eigenvalues are different, so by Theorem 8.3.8, matrix A is diagonalizable. C
Proof of Theorem 8.3.9: It is not difficult to generalize the calculation done in Exam-
ple 8.3.9 to obtain the n-th power of a diagonal matrix D = diag d1 , · · · , dn , and the result
is another diagonal matrix given by
D^n = diag[(d1)^n, · · · , (dn)^n].
We use this result and induction in n to prove Eq. (8.3.6). Since the case n = 1 is trivially
true, we start computing the case n = 2. We get
A² = (P D P^{−1})² = (P D P^{−1}) (P D P^{−1}) = P D D P^{−1}   ⇒   A² = P D² P^{−1},
that is, Eq. (8.3.6) holds for n = 2. Now, assuming that this Equation holds for n, we show
that it also holds for n + 1. Indeed,
A^{(n+1)} = A^n A = (P D^n P^{−1}) (P D P^{−1}) = P D^n D P^{−1} = P D^{(n+1)} P^{−1}.
It is not clear how to extend to matrices this way of defining the exponential function on
real numbers. However, the exponential function on real numbers satisfies several identities
that can be used as definition for the exponential on matrices. One of these identities is the
Taylor series expansion
e^x = Σ_{k=0}^∞ x^k / k! = 1 + x + x²/2! + x³/3! + · · · .
This identity is the key to generalize the exponential function to diagonalizable matrices.
Definition 8.3.10. The exponential of a square matrix A is defined as the infinite sum
e^A = Σ_{n=0}^∞ A^n / n!.    (8.3.7)
One must show that the definition makes sense, that is, that the infinite sum in Eq. (8.3.7)
converges. We show in these notes that this is the case when matrix A is diagonal and when
matrix A is diagonalizable. The case of non-diagonalizable matrix is more difficult to prove,
and we do not do it in these notes.
Theorem 8.3.11. If D = diag[d1, · · · , dn], then e^D = diag[e^{d1}, · · · , e^{dn}].
Proof of Theorem 8.3.11: It is simple to see that the infinite sum in Eq (8.3.7) converges
for diagonal matrices. Start computing
e^D = Σ_{k=0}^∞ (1/k!) (diag[d1, · · · , dn])^k = Σ_{k=0}^∞ (1/k!) diag[(d1)^k, · · · , (dn)^k].
Since each matrix in the sum on the far right above is diagonal, then
e^D = diag[ Σ_{k=0}^∞ (d1)^k/k!,  · · · ,  Σ_{k=0}^∞ (dn)^k/k! ].
Now, each sum in the diagonal of the matrix above satisfies Σ_{k=0}^∞ (di)^k/k! = e^{di}. Therefore, we
arrive at the equation e^D = diag[e^{d1}, · · · , e^{dn}]. This establishes the Theorem.
Example 8.3.14: Compute e^A, where A = [ 2  0 ; 0  7 ].
Solution: Theorem 8.3.11 implies that e^A = [ e²  0 ; 0  e⁷ ]. C
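For a diagonalizable matrix A = P D P^{−1} the exponential reduces to e^A = P e^D P^{−1}, which is the formula the next paragraph refers to. The sketch below, an addition to the notes, checks this identity numerically against scipy.linalg.expm for the matrix of Example 8.3.11.

```python
# Sketch: exponential of a diagonalizable matrix via e^A = P e^D P^{-1}.
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 3.0], [3.0, 1.0]])   # A = P D P^{-1}, Example 8.3.11
P = np.array([[1.0, -1.0], [1.0, 1.0]])
D = np.diag([4.0, -2.0])

eA = P @ np.diag(np.exp(np.diag(D))) @ np.linalg.inv(P)
print(np.allclose(eA, expm(A)))          # True
```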
0 e7
The formula above says that to find the exponential of a diagonalizable matrix there is no
need to compute the infinite sum in Definition 8.3.10. To compute the exponential of a
diagonalizable matrix it is only needed to compute the product of three matrices. It also
says that to compute the exponential of a diagonalizable matrix we need to compute the
eigenvalues and eigenvectors of the matrix.
8.3.4. Exercises.
8.3.1.- . 8.3.2.- .
Chapter 9. Appendices
(c) A function y having at x0 both infinitely many continuous derivatives and a convergent
power series is analytic where the series converges. The Taylor expansion centered at
x0 of such a function is
y(x) = Σ_{n=0}^∞ (y^{(n)}(x0)/n!) (x − x0)^n,
and this means
y(x) = y(x0) + y'(x0) (x − x0) + (y''(x0)/2!) (x − x0)² + (y'''(x0)/3!) (x − x0)³ + · · · .
C
The Taylor series can be very useful to find the power series expansions of functions having
infinitely many continuous derivatives.
Example B.2: Find the Taylor series of y(x) = sin(x) centered at x0 = 0.
Solution: We need to compute the derivatives of the function y and evaluate these deriva-
tives at the point we center the expansion, in this case x0 = 0.
y(x) = sin(x) ⇒ y(0) = 0, y 0 (x) = cos(x) ⇒ y 0 (0) = 1,
y 00 (x) = − sin(x) ⇒ y 00 (0) = 0, y 000 (x) = − cos(x) ⇒ y 000 (0) = −1.
One more derivative gives y^{(4)}(x) = sin(x), so y^{(4)} = y, and the cycle repeats itself. It is not
difficult to see that the Taylor formula implies,
sin(x) = x − x³/3! + x⁵/5! − · · ·   ⇒   sin(x) = Σ_{n=0}^∞ ((−1)^n/(2n + 1)!) x^{(2n+1)}.
Remark: The Taylor series at x0 = 0 for y(x) = cos(x) is computed in a similar way,
cos(x) = Σ_{n=0}^∞ ((−1)^n/(2n)!) x^{(2n)}.
Solution: Notice that this function is well defined for every x ∈ R − {1}. The function
can be expanded in the power series y(x) = Σ_{n=0}^∞ x^n, which converges for |x| < 1.
Remark: The power series y(x) = Σ_{n=0}^∞ x^n does not converge on (−∞, −1] ∪ [1, ∞). But there
are different power series that converge to y(x) = 1/(1 − x) on intervals inside that domain.
For example the Taylor series about x0 = 2 converges for |x − 2| < 1, that is 1 < x < 3.
y^{(n)}(x) = n!/(1 − x)^{n+1}   ⇒   y^{(n)}(2) = n!/(−1)^{n+1}   ⇒   y(x) = Σ_{n=0}^∞ (−1)^{n+1} (x − 2)^n.
Later on we might need the notion of convergence of an infinite series in absolute value.
Definition B.2. The power series y(x) = Σ_{n=0}^∞ an (x − x0)^n converges in absolute value
iff the series Σ_{n=0}^∞ |an| |x − x0|^n converges.
Remark: If a series converges in absolute value, it converges. The converse is not true.
Example B.4: One can show that the series s = \sum_{n=1}^{\infty} \frac{(−1)^n}{n} converges, but this series does
not converge absolutely, since \sum_{n=1}^{\infty} \frac{1}{n} diverges. See [11, 13].    C
Since power series expansions of functions might not converge on the same domain where
the function is defined, it is useful to introduce the region where the power series converges.
Definition B.3. The radius of convergence of a power series y(x) = \sum_{n=0}^{\infty} a_n (x − x_0)^n
is the number ρ > 0 such that the series converges absolutely for |x − x_0| < ρ and the
series diverges for |x − x_0| > ρ.
Remark: The radius of convergence defines the size of the biggest open interval where the
power series converges. This interval is symmetric around the series center point x0 .
Example B.5: We state the radius of convergence of a few power series. See [11, 13].

(1) The series \frac{1}{1 − x} = \sum_{n=0}^{\infty} x^n has radius of convergence ρ = 1.
(2) The series e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} has radius of convergence ρ = ∞.
(3) The series sin(x) = \sum_{n=0}^{\infty} \frac{(−1)^n}{(2n+1)!} x^{2n+1} has radius of convergence ρ = ∞.
(4) The series cos(x) = \sum_{n=0}^{\infty} \frac{(−1)^n}{(2n)!} x^{2n} has radius of convergence ρ = ∞.
(5) The series sinh(x) = \sum_{n=0}^{\infty} \frac{1}{(2n+1)!} x^{2n+1} has radius of convergence ρ = ∞.
(6) The series cosh(x) = \sum_{n=0}^{\infty} \frac{1}{(2n)!} x^{2n} has radius of convergence ρ = ∞.
One of the most used tests for the convergence of a power series is the ratio test.

Theorem B.4 (Ratio Test). Given the power series y(x) = \sum_{n=0}^{\infty} a_n (x − x_0)^n, introduce
the number L = \lim_{n→∞} \frac{|a_{n+1}|}{|a_n|}. Then, the following statements hold:

(a) The power series converges in the domain |x − x_0| L < 1.
(b) The power series diverges in the domain |x − x_0| L > 1.
(c) The power series may or may not converge where |x − x_0| L = 1.

Therefore, if L ≠ 0, the radius of convergence of the power series is ρ = 1/L; if L = 0, the
radius of convergence is ρ = ∞.
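For instance, for the geometric series in Example B.5 (1) the coefficients are a_n = 1, so
L = 1 and ρ = 1, while for the exponential series in (2) the ratios |a_{n+1}|/|a_n| = 1/(n + 1)
tend to L = 0, so ρ = ∞. The short Python sketch below just prints a few of these ratios.

    from math import factorial

    # Geometric series: a_n = 1, so |a_{n+1}|/|a_n| = 1 for all n (L = 1, rho = 1).
    geometric_ratios = [1 / 1 for n in range(1, 6)]

    # Exponential series: a_n = 1/n!, so |a_{n+1}|/|a_n| = 1/(n+1) -> 0 (L = 0, rho = infinity).
    exponential_ratios = [(1 / factorial(n + 1)) / (1 / factorial(n)) for n in range(1, 6)]

    print(geometric_ratios)     # [1.0, 1.0, 1.0, 1.0, 1.0]
    print(exponential_ratios)   # [0.5, 0.333..., 0.25, 0.2, 0.1666...]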
1.1.1.- a = 2 and b = 3.
1.1.2.- y(t) = c e^{−4t} + 1/2, with c ∈ R.
1.1.3.- y(t) = c e^{2t} − 5/2.
1.1.4.- y(t) = (9/2) e^{−4t} + 1/2.
1.1.5.- y(t) = (5/3) e^{3t} + 2/3.
1.1.6.- ψ(t, y) = y + (1/6) e^{−6t},
        y(t) = c e^{−6t} − 1/6.
1.1.7.- y(t) = (7/6) e^{−6t} − 1/6.
1.3.1.- Implicit form: y^2/2 = t^3/3 + c.
        Explicit form: y = ±\sqrt{2t^3/3 + 2c}.
1.3.2.- y^4 + y + t^3 − t = c, with c ∈ R.
1.3.3.- y(t) = 3/(3 − t^3).
1.3.4.- y(t) = c e^{−\sqrt{1 + t^2}}.
1.3.5.- y(t) = t (ln(|t|) + c).
1.3.6.- y^2(t) = 2t^2 (ln(|t|) + c).
1.3.7.- Implicit: y^2 + ty − 2t = 0.
        Explicit: y(t) = (1/2)(−t + \sqrt{t^2 + 8t}).
1.3.8.- Hint: Recall Definition 1.3.5 and use that
            y_1'(x) = f(x, y_1(x)),
        for any independent variable x, for example for x = kt.
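A symbolic check of answer 1.3.3: the separable equation it solves, y' = t^2 y^2 with y(0) = 1,
is inferred here from the answer itself (the exercise statement is not reproduced on this
page), so it should be read as an assumption. The SymPy sketch below verifies it.

    import sympy as sp

    t = sp.symbols('t')
    y = 3 / (3 - t**3)
    # Assumed equation: y' = t^2 y^2 with y(0) = 1 (inferred from the answer).
    print(sp.simplify(sp.diff(y, t) - t**2 * y**2))   # 0
    print(y.subs(t, 0))                               # 1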
1.4.1.-
  (a) The equation is exact. N = (1 + t^2), M = 2ty, so ∂_t N = 2t = ∂_y M.
  (b) Since a potential function is given by ψ(t, y) = t^2 y + y, the solution is
          y(t) = c/(t^2 + 1),   c ∈ R.
1.4.2.-
  (a) The equation is exact. We have
          N = t cos(y) − 2y,   M = t + sin(y),   ∂_t N = cos(y) = ∂_y M.
  (b) Since a potential function is given by ψ(t, y) = t^2/2 + t sin(y) − y^2, the solution is
          t^2/2 + t sin(y(t)) − y^2(t) = c,
      for c ∈ R.
1.4.3.-
  (a) The equation is exact. We have
          N = −2y + t e^{ty},   M = 2 + y e^{ty},   ∂_t N = (1 + ty) e^{ty} = ∂_y M.
1.4.4.-
  (a) µ(x) = 1/x.
  (b) y^3 − 3xy + (18/5) x^5 = 1.
1.4.5.-
  (a) µ(x) = x^2.
  (b) y^2 (x^4 + 1/2) = 2.
  (c) y(x) = −2/\sqrt{1 + 2x^4}. The negative square root is selected because the initial
      condition is y(0) < 0.
1.4.6.-
  (a) The equation for y is not exact. There is no integrating factor depending only on x.
  (b) The equation for x = y^{−1} (the inverse function of y) is not exact. But there is an
      integrating factor depending only on y, given by µ(y) = e^y.
  (c) An implicit expression for both y(x) and x(y) is given by
          −3x e^{−y} + sin(5x) e^y = c.
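The exactness check and the potential function in answer 1.4.1 can be reproduced
symbolically. The differential equation used below, 2ty + (1 + t^2) y' = 0, is inferred from
the printed M and N, so it is an assumption rather than a quotation of the exercise.

    import sympy as sp

    t, y, c = sp.symbols('t y c')
    M = 2 * t * y            # coefficient of dt
    N = 1 + t**2             # coefficient of dy (that is, of y')
    print(sp.diff(N, t) == sp.diff(M, y))     # True: the equation is exact

    psi = t**2 * y + y                        # potential function from answer 1.4.1
    print(sp.expand(sp.diff(psi, t) - M))     # 0
    print(sp.expand(sp.diff(psi, y) - N))     # 0
    print(sp.solve(sp.Eq(psi, c), y))         # [c/(t**2 + 1)], the printed solution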
    ... T'(t) = −k (T − 3).
  (b) The integrating factor method implies (T' + k T) e^{kt} = 3k e^{kt}, so
          (T e^{kt})' − (3 e^{kt})' = 0.
      Integrating we get (T − 3) e^{kt} = c, so the general solution is T = c e^{−kt} + 3. The
      initial condition implies 18 = T(0) = c + 3, so c = 15, and the temperature function is
          T(t) = 15 e^{−kt} + 3.
  (c) To find k we use that T(3) = 13 C. This implies 13 = 15 e^{−3k} + 3, so we arrive at
          e^{−3k} = (13 − 3)/15 = 2/3,
      which leads us to −3k = ln(2/3), so we get
          k = (1/3) ln(3/2).

    ... and lim_{t→∞} Q(t) = 300 grams.

1.5.5.- Denoting ∆r = r_i − r_o and V(t) = ∆r t + V_0, we obtain
          Q(t) = Q_0 [V_0/V(t)]^{r_o/∆r} + q_i [ V(t) − V_0 [V_0/V(t)]^{r_o/∆r} ].
    A reordering of terms gives
          Q(t) = q_i V(t) − (q_i V_0 − Q_0) [V_0/V(t)]^{r_o/∆r},
    and replacing the problem values yields
          Q(t) = t + 200 − 100 (200)^2/(t + 200)^2.
    The concentration q(t) = Q(t)/V(t) is
          q(t) = q_i − (q_i − Q_0/V_0) [V_0/V(t)]^{r_o/∆r + 1}.
    The concentration at V(t) = V_m is
          q_m = q_i − (q_i − Q_0/V_0) [V_0/V_m]^{r_o/∆r + 1},
    which gives the value
          q_m = 121/125 grams/liter.
    In the case of an unlimited capacity, lim_{t→∞} V(t) = ∞, thus the equation for q(t) above
    says
          lim_{t→∞} q(t) = q_i.
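The value q_m = 121/125 can be checked numerically once the problem data are fixed. The
values below (V_0 = 200, Q_0 = 100, q_i = 1, r_i = 3, r_o = 2, V_m = 500) are inferred from the
printed formulas for Q(t) and q_m; they are assumptions, since the exercise statement is not
reproduced on this page.

    from fractions import Fraction

    # Assumed data, inferred from the printed formulas (not quoted from the exercise).
    V0, Q0, qi = 200, 100, Fraction(1)
    ri, ro = 3, 2
    dr = ri - ro                 # Delta r = ri - ro = 1, so V(t) = t + 200
    Vm = 500

    qm = qi - (qi - Fraction(Q0, V0)) * Fraction(V0, Vm) ** (ro // dr + 1)
    print(qm)                    # 121/125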
1.6.1.-
        y_0 = 0,
        y_1 = t,
        y_2 = t + 3t^2,
        y_3 = t + 3t^2 + 6t^3.
1.6.2.-
  (a)
        y_0 = 1,
        y_1 = 1 + 8t,
        y_2 = 1 + 8t + 12 t^2,
        y_3 = 1 + 8t + 12 t^2 + 12 t^3.
  (b) c_k(t) = (8/3) (3^k t^k)/k!.
  (c) y(t) = (8/3) e^{3t} − 5/3.
1.6.3.-
  (a) Since y = \sqrt{y_0^2 − 4t^2}, and the initial condition is at t = 0, the solution domain is
        D = [−y_0/2, y_0/2].
  (b) Since y = y_0/(1 − t^2 y_0) and the initial condition is at t = 0, the solution domain is
        D = [−1/\sqrt{y_0}, 1/\sqrt{y_0}].
1.6.4.-
  (a) Write the equation as
        y' = − (2 ln(t)/(t^2 − 4)) y.
      The equation is not defined for t = 0 or t = ±2. This provides the intervals
        (−∞, −2),  (−2, 2),  (2, ∞).
      Since the initial condition is at t = 1, the interval where the solution is defined is
        D = (0, 2).
  (b) The equation is not defined for t = 0 and t = 3. This provides the intervals
        (−∞, 0),  (0, 3),  (3, ∞).
      Since the initial condition is at t = −1, the interval where the solution is defined is
        D = (−∞, 0).
1.6.5.-
  (a) y = (2/3) t.
  (b) Outside the disk t^2 + y^2 ≤ 1.
2.1.1.- . 2.1.2.- .
3.1.1.- . 3.1.2.- .
3.2.1.- . 3.2.2.- .
3.3.1.- . 3.3.2.- .
8.1.1.- . 8.1.2.- .
8.2.1.- . 8.2.2.- .
8.3.1.- . 8.3.2.- .
7.1.1.- . 7.1.2.- .
7.2.1.- . 7.2.2.- .
7.3.1.- . 7.3.2.- .
References
[1] T. Apostol. Calculus. John Wiley & Sons, New York, 1967. Volume I, Second edition.
[2] T. Apostol. Calculus. John Wiley & Sons, New York, 1969. Volume II, Second edition.
[3] W. Boyce and R. DiPrima. Elementary differential equations and boundary value problems. Wiley, New
Jersey, 2012. 10th edition.
[4] R. Churchill. Operational Mathematics. McGraw-Hill, New York, 1958. Second edition.
[5] E. Coddington. An Introduction to Ordinary Differential Equations. Prentice Hall, 1961.
[6] S. Hassani. Mathematical physics. Springer, New York, 2000. Corrected second printing.
[7] E. Hille. Analysis. Vol. II.
[8] J.D. Jackson. Classical Electrodynamics. Wiley, New Jersey, 1999. 3rd edition.
[9] W. Rudin. Principles of Mathematical Analysis. McGraw-Hill, New York, NY, 1953.
[10] G. Simmons. Differential equations with applications and historical notes. McGraw-Hill, New York,
1991. 2nd edition.
[11] J. Stewart. Multivariable Calculus. Cengage Learning. 7th edition.
[12] S. Strogatz. Nonlinear Dynamics and Chaos. Perseus Books Publishing, Cambridge, USA, 1994. Pa-
perback printing, 2000.
[13] G. Thomas, M. Weir, and J. Hass. Thomas’ Calculus. Pearson. 12th edition.
[14] G. Watson. A treatise on the theory of Bessel functions. Cambridge University Press, London, 1944.
2nd edition.
[15] E. Zeidler. Nonlinear Functional Analysis and its Applications I, Fixed-Point Theorems. Springer, New
York, 1986.
[16] E. Zeidler. Applied functional analysis: applications to mathematical physics. Springer, New York,
1995.
[17] D. Zill and W. Wright. Differential equations and boundary value problems. Brooks/Cole, Boston, 2013.
8th edition.