
ORDINARY DIFFERENTIAL EQUATIONS

BENJAMIN DODSON

Contents
1. The method of integrating factors
2. Separable differential equations
3. Linear and nonlinear differential equations
4. Exact differential equations and integrating factors
5. Second order equations - reducible cases
6. Homogeneous differential equations with constant coefficients
7. Repeated roots: reduction of order
8. Complex roots of the characteristic equation
9. The Wronskian
10. Nonhomogeneous equations: method of undetermined coefficients
11. Variation of parameters
12. Mechanical and electrical vibrations
13. Vector spaces and linear transformations
14. Basis and dimension
15. Eigenvalues and eigenvectors
16. The matrix exponential
17. Generalized eigenvectors and the minimal polynomial
18. Systems of first order linear equations
19. Nonhomogeneous linear systems
20. Variable coefficient systems
21. Laplace Transform
22. Initial value problems
23. Convolution
24. Heaviside function
25. Impulse functions
26. Existence and uniqueness of solutions
27. Nonlinear ODEs: The phase plane
28. Predator-prey equations
29. Competing species equation
References

These class notes are primarily taken from [BD65] and [Tay22].

1. The method of integrating factors


Let us consider an ordinary differential equation of the form
(1.1) \quad (4 + t^2)\frac{dy}{dt} + 2ty = 4t.
Notice that by the product rule, (1.1) is equal to
(1.2) \quad \frac{d}{dt}\big((4 + t^2)\,y(t)\big) = 4t.
Then by the fundamental theorem of calculus,
(1.3) \quad (4 + t^2)\,y(t) = 2t^2 + c.
Usually, such equations do not fit into this framework exactly, but it may be possible to use an integrating factor. Consider the differential equation
(1.4) \quad \frac{dy}{dt} + \frac{1}{2}y = \frac{1}{2}e^{t/3}.
Let us multiply the left and right hand sides by the function \mu(t) > 0:
(1.5) \quad \mu(t)\frac{dy}{dt} + \frac{1}{2}\mu(t)\,y(t) = \frac{1}{2}\mu(t)\,e^{t/3}.
If \mu(t) solves the equation
(1.6) \quad \frac{d}{dt}\mu(t) = \frac{1}{2}\mu(t),
then
(1.7) \quad \frac{d}{dt}\big(\mu(t)\,y(t)\big) = \frac{1}{2}\mu(t)\,e^{t/3},
and we can therefore proceed as before. Computing, for an equation of the form \frac{d}{dt}\mu(t) = a\,\mu(t),
(1.8) \quad \frac{1}{\mu(t)}\frac{d}{dt}\mu(t) = \frac{d}{dt}\ln\mu(t) = a,
and therefore,
(1.9) \quad \mu(t) = e^{at}.
Now let us try a more difficult problem:
(1.10) \quad t\frac{dy}{dt} + 2y(t) = 4t^2, \qquad y(1) = 2.
By direct computation, we need \mu(t) such that
(1.11) \quad \frac{d}{dt}\mu(t) = \frac{2}{t}\mu(t).
Then we compute
(1.12) \quad \frac{d}{dt}\ln|\mu(t)| = \frac{2}{t}.
Integrating the left and right hand sides,
(1.13) \quad \ln|\mu(t)| = 2\ln t + c.
Therefore, we may set
(1.14) \quad \mu(t) = t^2.
For a general equation
(1.15) \quad \frac{dy}{dt} + p(t)\,y(t) = q(t),
we take the integrating factor
(1.16) \quad \mu(t) = \exp\Big(\int p(t)\,dt\Big).
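As a quick numerical check of example (1.10), assuming SciPy is available, one can integrate the equation and compare against the closed form y(t) = t^2 + 1/t^2 obtained from the integrating factor \mu(t) = t^2 with y(1) = 2; this is only an illustrative sketch.

```python
# Check of (1.10): t y' + 2y = 4t^2, y(1) = 2, with exact solution y(t) = t^2 + 1/t^2.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    # rewrite t y' + 2y = 4t^2 as y' = (4t^2 - 2y)/t
    return (4 * t**2 - 2 * y) / t

t_eval = np.linspace(1.0, 3.0, 50)
sol = solve_ivp(f, (1.0, 3.0), [2.0], t_eval=t_eval, rtol=1e-8)
exact = t_eval**2 + 1.0 / t_eval**2
print(np.max(np.abs(sol.y[0] - exact)))  # should be very small
```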

2. Separable differential equations


Consider the general first-order differential equation
(2.1) \quad \frac{dy}{dx} = f(x, y).
Suppose such an equation is of the form
(2.2) \quad M(x) + N(y)\frac{dy}{dx} = 0.
Such an equation is separable, because it can be written in the differential form
(2.3) \quad M(x)\,dx + N(y)\,dy = 0.
For example, solve the equation
(2.4) \quad \frac{dy}{dx} = \frac{x^2}{1 - y^2}.
Then we can solve
(2.5) \quad \frac{1}{3}x^3 + c = y - \frac{y^3}{3}.
Next, solve
(2.6) \quad \frac{dy}{dx} = \frac{3x^2 + 4x + 2}{2(y - 1)}, \qquad y(0) = -1.
Solving this equation,
(2.7) \quad y^2 - 2y = x^3 + 2x^2 + 2x + c.
In this case we take c = 3.
Now consider the separable differential equation
(2.8) \quad \frac{dy}{dx} = \frac{4x - x^3}{4 + y^3}, \qquad y(0) = 1.
In this case,
(2.9) \quad 4y + \frac{y^4}{4} = 2x^2 - \frac{x^4}{4} + c,
and by direct computation, c = \frac{17}{4}.
Remark: If we have an equation of the form
(2.10) \quad \frac{dy}{dx} = f(y)\,g(x),
and f(y_0) = 0, then y(x) = y_0 is a solution to (2.10). In this case we would not want to divide by f(y).
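Assuming SciPy is available, a short numerical check of example (2.6) integrates the equation from the initial condition and verifies that the implicit relation y^2 - 2y = x^3 + 2x^2 + 2x + 3 holds along the trajectory; a sketch only.

```python
# Check of (2.6): dy/dx = (3x^2 + 4x + 2)/(2(y - 1)), y(0) = -1, with c = 3.
import numpy as np
from scipy.integrate import solve_ivp

rhs = lambda x, y: (3 * x**2 + 4 * x + 2) / (2 * (y - 1))
xs = np.linspace(0.0, 1.0, 25)
sol = solve_ivp(rhs, (0.0, 1.0), [-1.0], t_eval=xs, rtol=1e-9)
y = sol.y[0]
residual = (y**2 - 2 * y) - (xs**3 + 2 * xs**2 + 2 * xs + 3)
print(np.max(np.abs(residual)))  # stays close to zero along the trajectory
```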

3. Linear and nonlinear differential equations


Theorem 1. If the functions p and g are continuous on an open interval I : \alpha < t < \beta containing the point t = t_0, then there exists a unique function y = \phi(t) that satisfies the differential equation
(3.1) \quad \frac{dy}{dt} + p(t)\,y(t) = g(t)
for each t \in I, and that also satisfies the initial condition y(t_0) = y_0.
Proof. Since p(t) is continuous, p(t) is integrable on a subinterval of I. Therefore,
(3.2) \quad \mu(t) = \exp\Big(\int p(t)\,dt\Big)
is well-defined.
Now then, suppose (3.1) has two solutions, y_1(t) and y_2(t). Then, let y(t) = y_1(t) - y_2(t). Then,
(3.3) \quad \frac{dy}{dt} + p(t)\,y(t) = 0, \qquad y(t_0) = 0.
We can show that the only solution to (3.3) is y(t) = 0.
Theorem 2. Suppose f and \partial f/\partial y are continuous in some rectangle \alpha < t < \beta, \gamma < y < \delta, containing (t_0, y_0). Then on some interval t_0 - h < t < t_0 + h contained in \alpha < t < \beta, there is a unique solution y(t) = \phi(t) of the initial value problem
(3.4) \quad \frac{dy}{dt} = f(t, y), \qquad y(t_0) = y_0.
We can apply this theorem to the initial value problem
(3.5) \quad t y'(t) + 2y(t) = 4t^2, \qquad y(1) = 2.
Doing some algebra, p(t) = \frac{2}{t}, which is continuous for t \neq 0.
Now consider the initial value problem
(3.6) \quad \frac{dy}{dx} = \frac{3x^2 + 4x + 2}{2(y - 1)}, \qquad y(0) = -1.
In this case, f and \partial f/\partial y are continuous on any rectangle that does not contain y = 1. If instead x = 0 and y = 1, we obtain
(3.7) \quad y^2 - 2y = x^3 + 2x^2 + 2x + c, \qquad c = -1.
For the equation
(3.8) \quad \frac{dy}{dt} = y^{1/3}, \qquad y(0) = 0,
we do not have a unique solution.
The initial value problem
(3.9) \quad \frac{dy}{dt} = y^2, \qquad y(0) = 1,
has the solution y(t) = \frac{1}{1 - t}, which exists only on the interval (-\infty, 1).

4. Exact differential equations and integrating factors


Now consider the differential equation
(4.1) \quad 2x + y^2 + 2xy\frac{dy}{dx} = 0.
Observe that if we did not have the 2x term,
(4.2) \quad y^2 + 2xy\frac{dy}{dx} = 0
is a separable equation.
Here, notice that 2x + y^2 = \frac{\partial\psi}{\partial x} and 2xy = \frac{\partial\psi}{\partial y}, where \psi(x, y) = x^2 + xy^2. Then,
(4.3) \quad \frac{\partial\psi}{\partial x} + \frac{\partial\psi}{\partial y}\frac{dy}{dx} = 0.
Then, (4.3) has the form
(4.4) \quad \frac{d}{dx}\psi(x, y(x)) = \frac{d}{dx}(x^2 + xy^2) = 0.
Therefore,
(4.5) \quad \psi(x, y) = x^2 + xy^2 = c.
How do we know in general if this is possible? Observe that if
(4.6) \quad \frac{\partial\psi}{\partial x}(x, y) = M(x, y), \qquad \frac{\partial\psi}{\partial y}(x, y) = N(x, y),
then
(4.7) \quad \frac{\partial M}{\partial y}(x, y) = \frac{\partial N}{\partial x}(x, y).
Theorem 3. Suppose the functions M, N, M_y, and N_x are continuous in the rectangular region R: \alpha < x < \beta, \gamma < y < \delta. Then,
(4.8) \quad M(x, y) + N(x, y)\frac{dy}{dx} = 0
is an exact differential equation in R if and only if
(4.9) \quad M_y(x, y) = N_x(x, y).
Proof. We can try integrating in x or in y. Take
(4.10) \quad \psi(x, y) = Q(x, y) + h(y), \qquad Q(x, y) = \int_{x_0}^{x} M(s, y)\,ds.
Differentiating (4.10) with respect to y,
(4.11) \quad \psi_y(x, y) = \frac{\partial Q}{\partial y}(x, y) + h'(y) = \int_{x_0}^{x} N_x(s, y)\,ds + h'(y) = N(x, y) - N(x_0, y) + h'(y).
Now then, since we want \psi_y(x, y) = N(x, y), we need to solve h'(y) = N(x_0, y). So take h(y) = \int_{y_0}^{y} N(x_0, s)\,ds.
First consider the equation
(4.12) \quad (y\cos x + 2xe^{y}) + (\sin x + x^2 e^{y} - 1)\frac{dy}{dx} = 0.
In this case, \psi(x, y) = y\sin x + x^2 e^{y} - y.
It is sometimes possible to convert a differential equation that is not exact to an exact differential equation by multiplying by a suitable integrating factor. Indeed, suppose we have the equation
(4.13) \quad M(x, y) + N(x, y)\frac{dy}{dx} = 0.
Multiplying by \mu(x, y),
(4.14) \quad \mu(x, y)M(x, y) + \mu(x, y)N(x, y)\frac{dy}{dx} = 0.
Then (4.14) is exact if and only if
(4.15) \quad (\mu(x, y)M(x, y))_y = (\mu(x, y)N(x, y))_x.
Computing,
(4.16) \quad M\mu_y - N\mu_x + (M_y - N_x)\mu = 0.
For example, consider the equation
(4.17) \quad (3xy + y^2) + (x^2 + xy)\frac{dy}{dx} = 0.
Then we wish to solve
(4.18) \quad (3xy + y^2)\mu_y - (x^2 + xy)\mu_x + (3x + 2y - 2x - y)\mu = 0.
Simplifying by setting \mu_y = 0,
(4.19) \quad \frac{\mu_x}{\mu} = \frac{x + y}{x(x + y)} = \frac{1}{x}, \qquad \mu = x.

5. Second order equations - reducible cases


Second order differential equations have the form
(5.1) \quad \frac{d^2 y}{dt^2} = f\Big(t, y, \frac{dy}{dt}\Big), \qquad y(t_0) = y_0, \quad y'(t_0) = v_0.
There are some cases which reduce to first order equations for
(5.2) \quad v(t) = \frac{dy}{dt}.
For example, consider
(5.3) \quad y'' = f(t, y').
In this case, let v = y', so that
(5.4) \quad \frac{dv}{dt} = f(t, v), \qquad v(t_0) = v_0.
Solving for v(t),
(5.5) \quad y(t) = y_0 + \int_{t_0}^{t} v(s)\,ds.
For example, consider the equation
(5.6) \quad \frac{d^2 y}{dt^2} = t\frac{dy}{dt}.
Then
(5.7) \quad \frac{dv}{dt} = tv,
so v(t) = e^{t^2/2} and y(t) = y_0 + \int_0^{t} e^{s^2/2}\,ds.
Now, consider the equation
(5.8) \quad y'' = f(y, y').
Taking v(t) = \frac{dy}{dt},
(5.9) \quad \frac{dv}{dt} = f(y, v),
which contains too many variables. Rewriting the equation as one for v as a function of y,
(5.10) \quad \frac{dv}{dt} = \frac{dv}{dy}\frac{dy}{dt} = v\frac{dv}{dy}.
Substituting (5.10) into (5.8),
(5.11) \quad \frac{dv}{dy} = \frac{f(y, v)}{v}, \qquad v(y_0) = v_0.
For example, consider the equation
(5.12) \quad y'' = f(y).
In this case,
(5.13) \quad \frac{dv}{dy} = \frac{f(y)}{v}.
This equation is separable,
(5.14) \quad v\,dv = f(y)\,dy.
Therefore,
(5.15) \quad \frac{1}{2}v^2 = g(y) + C, \qquad \text{where } g(y) = \int f(y)\,dy.
Therefore,
(5.16) \quad \frac{dy}{dt} = v(t) = \pm\sqrt{2g(y) + 2C}.
This equation is also separable:
(5.17) \quad \pm\int \frac{dy}{\sqrt{2g(y) + 2C}} = t + C_2.
Take
(5.18) \quad \frac{d^2 y}{dt^2} = y^2.

Then,
(5.19) \quad \frac{dv}{dy} = \frac{y^2}{v}.
Therefore,
(5.20) \quad \frac{1}{2}v^2 = \frac{1}{3}y^3 + C.
Therefore,
(5.21) \quad \frac{dy}{dt} = v = \pm\sqrt{\frac{2}{3}y^3 + 2C},
and
(5.22) \quad \pm\int \frac{dy}{\sqrt{\frac{2}{3}y^3 + 2C}} = t + C_2.

6. Homogeneous differential equations with constant coefficients


Consider the constant coefficient, second order linear differential equation
(6.1) \quad ay'' + by' + cy = 0.
Take y(t) = e^{rt}, so that y'(t) = re^{rt} and y''(t) = r^2 e^{rt}. Substituting this into (6.1),
(6.2) \quad (ar^2 + br + c)e^{rt} = 0.
This condition is only satisfied when ar^2 + br + c = 0. This equation is called the characteristic equation.
For example, take
(6.3) \quad y'' - y = 0, \qquad y(0) = 2, \quad y'(0) = -1.
The characteristic equation is r^2 - 1 = 0, which has solutions r = \pm 1. A general solution of (6.3) is given by
(6.4) \quad y(t) = c_1 e^{t} + c_2 e^{-t}.
Now then, solving c_1 + c_2 = 2, c_1 - c_2 = -1, so c_1 = \frac{1}{2}, c_2 = \frac{3}{2}.
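Assuming SciPy is available, the solution just found can be verified numerically by integrating (6.3) as a first order system and comparing with y(t) = \frac{1}{2}e^{t} + \frac{3}{2}e^{-t}; this is a sketch, not part of the worked examples.

```python
# Check of (6.3): y'' - y = 0, y(0) = 2, y'(0) = -1.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, u):
    y, v = u          # u = (y, y')
    return [v, y]     # y'' = y

ts = np.linspace(0.0, 2.0, 40)
sol = solve_ivp(rhs, (0.0, 2.0), [2.0, -1.0], t_eval=ts, rtol=1e-9)
exact = 0.5 * np.exp(ts) + 1.5 * np.exp(-ts)
print(np.max(np.abs(sol.y[0] - exact)))  # should be very small
```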


Solve
(6.5) \quad y'' + 5y' + 6y = 0, \qquad y(0) = 2, \quad y'(0) = 3.
Solve
(6.6) \quad 4y'' - 8y' + 3y = 0, \qquad y(0) = 2, \quad y'(0) = \frac{1}{2}.

7. Repeated roots: reduction of order


Suppose now that the characteristic equation has a repeated root. This occurs when the discriminant is zero,
(7.1) \quad b^2 - 4ac = 0.
In this case,
(7.2) \quad r_1 = r_2 = -\frac{b}{2a}.
Let us first suppose that r_1 = 0. In that case, we have the equation
(7.3) \quad y''(t) = 0.
We know how to solve this equation,
(7.4) \quad y(t) = c_1 t + c_2.
Notice that in this case, e^{r_1 t} is a constant function.
For a general equation with r_1 = r_2, we have an equation of the form
(7.5) \quad y'' - 2r_1 y' + r_1^2 y = 0.
In this case, y_1(t) = e^{r_1 t} is a solution to our equation. Now let us try y_2(t) = v(t)y_1(t) = e^{r_1 t}v(t). In this case, by the product rule,
(7.6) \quad v''(t)y_1(t) + 2v'(t)y_1'(t) + v(t)y_1''(t) - 2r_1 v'(t)y_1(t) - 2r_1 v(t)y_1'(t) + r_1^2 v(t)y_1(t) = v''(t)y_1(t) + 2v'(t)y_1'(t) - 2r_1 v'(t)y_1(t) = v''(t)y_1(t) = 0.
Therefore, in this case, y_2(t) = c_2 t e^{r_1 t}.
We can use the reduction of order for a general equation of the form
(7.7) \quad y'' + p(t)y' + q(t)y = 0.
Suppose we know that there exists a solution y_1(t) to (7.7), not everywhere zero. Set y(t) = v(t)y_1(t). Plugging this into (7.7),
(7.8) \quad v''(t)y_1(t) + 2v'(t)y_1'(t) + v(t)y_1''(t) + p(t)v'(t)y_1(t) + p(t)v(t)y_1'(t) + q(t)v(t)y_1(t) = v''(t)y_1(t) + (2y_1'(t) + p(t)y_1(t))v'(t) = 0.
This equation is actually first order, if we substitute w(t) = v'(t).
Consider, for example, the equation
(7.9) \quad 2t^2 y'' + 3ty' - y = 0, \qquad t > 0.
Now then, we know that y_1(t) = t^{-1} is a solution of (7.9). Now, set y(t) = v(t)t^{-1}. Then,
(7.10) \quad v''(t)t^{-1} - 2t^{-2}v'(t) + p(t)t^{-1}v'(t) = v''(t)t^{-1} - 2t^{-2}v'(t) + \frac{3}{2}t^{-2}v'(t) = 0.
Therefore,
(7.11) \quad 2tv'' - v' = 0.
Setting w = v', we wish to solve
(7.12) \quad w' - \frac{1}{2t}w = 0.
Therefore,
(7.13) \quad w(t) = ct^{1/2}, \quad \text{and} \quad v(t) = \frac{2}{3}ct^{3/2} + k.

8. Complex roots of the characteristic equation


Now consider the second order differential equation
(8.1) \quad ay'' + by' + cy = 0.
Suppose this equation has the complex roots
(8.2) \quad r_1 = \lambda + i\mu, \qquad r_2 = \lambda - i\mu.
In this case, we can try
(8.3) \quad y_1(t) = c_1 e^{r_1 t}, \qquad y_2(t) = c_2 e^{r_2 t}.
Now let us make sense of e^{it}. Observe that
(8.4) \quad \frac{d}{dt}(e^{it}) = ie^{it}.
Therefore, e^{it} = c(t) + is(t) solves an ordinary differential equation
(8.5) \quad \frac{d}{dt}y = iy, \qquad y(0) = 1.
Since multiplication by i rotates by ninety degrees, y(t) travels at speed one counterclockwise along the unit circle. Thus,
(8.6) \quad e^{it} = \cos(t) + i\sin(t).
Therefore, the general solution has the form
(8.7) \quad y(t) = c_1 e^{\lambda t}(\cos(\mu t) + i\sin(\mu t)) + c_2 e^{\lambda t}(\cos(\mu t) - i\sin(\mu t)).
Doing some algebra,
(8.8) \quad y(t) = c_1 e^{\lambda t}\cos(\mu t) + c_2 e^{\lambda t}\sin(\mu t).
We can also use the power series expansion to obtain (8.6). In this case,
(8.9) \quad e^{it} = \sum_{k=0}^{\infty}\frac{i^k t^k}{k!} = \sum_{n=0}^{\infty}\frac{(-1)^n t^{2n}}{(2n)!} + i\sum_{n=0}^{\infty}\frac{(-1)^n t^{2n+1}}{(2n+1)!} = \cos(t) + i\sin(t).

9. The Wronskian
Let us define the concept of a differential operator. Suppose p(t) and q(t) are continuous func-
tions. Then let
(9.1) L[φ] = φ00 (t) + p(t)φ0 (t) + q(t)φ(t).
With this equation, we associate a set of initial conditions,
(9.2) y(t0 ) = y0 , y 0 (t0 ) = y00 .
We have the existence and uniqueness theorem.

Theorem 4. Consider the initial value problem


(9.3) y 00 + p(t)y 0 + q(t)y = g(t), y(t0 ) = y0 , y 0 (t0 ) = y00 ,
where p, q, and g are continuous on an open interval I that contains the point t0 . This problem
has exactly one solution y(t) = φ(t), and the solution exists throughout the interval I.
Now then, L[cy] = cL[y] and L[c1 y1 + c2 y2 ] = c1 L[y1 ] + c2 L[y2 ]. Therefore, if L[y1 ] = 0 and
L[y2 ] = 0, then L[c1 y1 + c2 y2 ] = 0.
Now we need to solve the system of equations,
c1 y1 (t0 ) + c2 y2 (t0 ) = y0 ,
(9.4)
c1 y10 (t0 ) + c2 y20 (t0 ) = y00 .
This system is solvable if and only if
(9.5) \quad \det\begin{pmatrix} y_1(t_0) & y_2(t_0) \\ y_1'(t_0) & y_2'(t_0) \end{pmatrix} \neq 0.
Then we solve
(9.6) \quad \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} y_1(t_0) & y_2(t_0) \\ y_1'(t_0) & y_2'(t_0) \end{pmatrix}^{-1}\begin{pmatrix} y_0 \\ y_0' \end{pmatrix} = \frac{1}{\det\begin{pmatrix} y_1(t_0) & y_2(t_0) \\ y_1'(t_0) & y_2'(t_0) \end{pmatrix}}\begin{pmatrix} y_2'(t_0) & -y_2(t_0) \\ -y_1'(t_0) & y_1(t_0) \end{pmatrix}\begin{pmatrix} y_0 \\ y_0' \end{pmatrix}.
Theorem 5. Suppose that y1 and y2 are two solutions to L[y] = 0, and that the initial conditions
y(t0 ) = y0 and y 0 (t0 ) = y00 are assigned. Then it is possible to choose constants c1 , c2 so that
y(t) = c1 y1 (t) + c2 y2 (t) satisfies the differential equation and the initial conditions if and only if the
Wronskian W [y1 , y2 ] is not zero at t0 .
Theorem 6 (Abel’s theorem). If y1 and y2 are solutions of the second order linear differential
equation,
(9.7) L[y] = y 00 + p(t)y 0 + q(t)y = 0,
where p and q are continuous on an open interval I, then the Wronskian W[y_1, y_2](t) is given by
(9.8) \quad W[y_1, y_2](t) = c\exp\Big(-\int p(t)\,dt\Big).
Furthermore, W[y_1, y_2](t) is either zero for all t \in I or else is never zero on I.
Proof. By direct computation,
(9.9) (y1 y200 − y100 y2 ) + p(t)(y1 y20 − y10 y2 ) = 0.
Now then, observe that W 0 = y1 y200 − y100 y2 , proving that
(9.10) W 0 + p(t)W = 0.
Thus,
(9.11) \quad W(t) = c\exp\Big(-\int p(t)\,dt\Big).

10. Nonhomogeneous equations: method of undetermined coefficients


Now turn attention to the nonhomogeneous second-order linear differential equations
(10.1) L[y] = y 00 + p(t)y 0 + q(t)y = g(t).
The equation,
(10.2) L[y] = y 00 + p(t)y 0 + q(t)y = 0,
is called the homogeneous equation.
Theorem 7. If Y1 and Y2 are two solutions of the nonhomogeneous linear differential equation
(10.1), then their difference Y1 (t) − Y2 (t) is a solution to the corresponding homogeneous differential
equation (10.2). Then,
(10.3) Y1 (t) − Y2 (t) = c1 y1 (t) + c2 y2 (t).
Proof. Indeed,
(10.4) L[Y1 ](t) − L[Y2 ](t) = g(t) − g(t) = 0.

Theorem 8. The general solution of the nonhomogeneous equation (10.1) can be written in the
form,
(10.5) y(t) = c1 y1 (t) + c2 y2 (t) + Y (t),
where Y (t) is any solution to the nonhomogeneous equation (10.1).
Definition 1. The solution Y (t) is called the particular solution.

Consider
(10.6) \quad y'' - 3y' - 4y = 3e^{2t}.
Take the particular solution Y(t) = Ae^{2t}. In this case, Y(t) = -\frac{1}{2}e^{2t}.
Next consider
(10.7) \quad y'' - 3y' - 4y = 2\sin t.
In this case, use the particular solution Y(t) = A\sin t + B\cos t. Indeed, we can decompose
(10.8) \quad 2\sin t = \frac{1}{i}e^{it} - \frac{1}{i}e^{-it}.
Find the particular solution of
(10.9) \quad y'' - 3y' - 4y = -8e^{t}\cos(2t).
In this case,
(10.10) \quad Y(t) = Ae^{t}\cos(2t) + Be^{t}\sin(2t).
Here is a table of forcing terms g(t) and the corresponding trial forms of the particular solution, where s is the smallest nonnegative integer such that no term of Y(t) solves the homogeneous equation:
(10.11)
g(t) = P_n(t) = a_0 t^n + ... + a_n: \qquad Y(t) = t^s(A_0 t^n + ... + A_n);
g(t) = P_n(t)e^{\alpha t}: \qquad Y(t) = t^s(A_0 t^n + ... + A_n)e^{\alpha t};
g(t) = P_n(t)e^{\alpha t}(A_1\sin(\beta t) + A_2\cos(\beta t)): \qquad Y(t) = t^s(A_0 t^n + ... + A_n)e^{\alpha t}\cos(\beta t) + t^s(B_0 t^n + ... + B_n)e^{\alpha t}\sin(\beta t).

11. Variation of parameters


Consider the nonhomogeneous second order linear differential equation,
(11.1) y 00 (t) + p(t)y 0 (t) + q(t)y(t) = g(t).
Now then, reducing to a first order equation,
(11.2) \quad \frac{d}{dt}\begin{pmatrix} y(t) \\ y'(t) \end{pmatrix} + \begin{pmatrix} 0 & -1 \\ q(t) & p(t) \end{pmatrix}\begin{pmatrix} y(t) \\ y'(t) \end{pmatrix} = \begin{pmatrix} 0 \\ g(t) \end{pmatrix}.
To simplify notation, let
(11.3) \quad A(t) = \begin{pmatrix} 0 & -1 \\ q(t) & p(t) \end{pmatrix}.
Now let \mu(t) = \exp\big(\int_0^t A(s)\,ds\big) be the integrating factor. Then,
(11.4) \quad \frac{d}{dt}\Big(\mu(t)\begin{pmatrix} y(t) \\ y'(t) \end{pmatrix}\Big) = \mu(t)\begin{pmatrix} 0 \\ g(t) \end{pmatrix}.
Therefore,
(11.5) \quad \begin{pmatrix} y(t) \\ y'(t) \end{pmatrix} = \mu(t)^{-1}\begin{pmatrix} y_0 \\ y_0' \end{pmatrix} + \mu(t)^{-1}\int_0^t \mu(s)\begin{pmatrix} 0 \\ g(s) \end{pmatrix}ds.
Now we need to compute \mu(t) and \mu(t)^{-1}. First observe that
(11.6) \quad \mu(t)^{-1}\begin{pmatrix} y_0 \\ y_0' \end{pmatrix}
gives the solution to
(11.7) \quad y''(t) + p(t)y'(t) + q(t)y(t) = 0, \qquad y(0) = y_0, \quad y'(0) = y_0'.
Thus, if y_1(t) and y_2(t) are solutions to (11.7) with nonzero Wronskian,
(11.8) \quad \mu(t)^{-1} = \mu(t)^{-1}\begin{pmatrix} y_1(0) & y_2(0) \\ y_1'(0) & y_2'(0) \end{pmatrix}\begin{pmatrix} y_1(0) & y_2(0) \\ y_1'(0) & y_2'(0) \end{pmatrix}^{-1} = \begin{pmatrix} y_1(t) & y_2(t) \\ y_1'(t) & y_2'(t) \end{pmatrix}\begin{pmatrix} y_1(0) & y_2(0) \\ y_1'(0) & y_2'(0) \end{pmatrix}^{-1}.
Then, doing some algebra,
(11.9) \quad \mu(s) = \begin{pmatrix} y_1(0) & y_2(0) \\ y_1'(0) & y_2'(0) \end{pmatrix}\begin{pmatrix} y_1(s) & y_2(s) \\ y_1'(s) & y_2'(s) \end{pmatrix}^{-1}.
Therefore,
(11.10) \quad \mu(t)^{-1}\int_0^t \mu(s)\begin{pmatrix} 0 \\ g(s) \end{pmatrix}ds = \int_0^t \begin{pmatrix} y_1(t) & y_2(t) \\ y_1'(t) & y_2'(t) \end{pmatrix}\begin{pmatrix} y_1(s) & y_2(s) \\ y_1'(s) & y_2'(s) \end{pmatrix}^{-1}\begin{pmatrix} 0 \\ g(s) \end{pmatrix}ds
(11.11) \quad = \int_0^t \begin{pmatrix} y_1(t) & y_2(t) \\ y_1'(t) & y_2'(t) \end{pmatrix}W(s)^{-1}\begin{pmatrix} y_2'(s) & -y_2(s) \\ -y_1'(s) & y_1(s) \end{pmatrix}\begin{pmatrix} 0 \\ g(s) \end{pmatrix}ds = \int_0^t \begin{pmatrix} y_1(t) & y_2(t) \\ y_1'(t) & y_2'(t) \end{pmatrix}W(s)^{-1}\begin{pmatrix} -y_2(s)g(s) \\ y_1(s)g(s) \end{pmatrix}ds.
Therefore, we have a particular solution,
(11.12) \quad Y(t) = -y_1(t)\int_0^t \frac{y_2(s)g(s)}{W(s)}\,ds + y_2(t)\int_0^t \frac{y_1(s)g(s)}{W(s)}\,ds.
Therefore, the general solution is given by
(11.13) \quad y(t) = c_1 y_1(t) + c_2 y_2(t) + Y(t).

12. Mechanical and electrical vibrations


We know from physics that F = ma. Under Hooke's law, the force is given by -kx, where x is the displacement. Therefore, our equation is given by
(12.1) \quad m\frac{d^2 x}{dt^2} + kx = 0.
The characteristic polynomial of (12.1) is given by
(12.2) \quad mr^2 + k = 0,
and the general solution is given by
(12.3) \quad c_1\cos\Big(\sqrt{\tfrac{k}{m}}\,t\Big) + c_2\sin\Big(\sqrt{\tfrac{k}{m}}\,t\Big).
Now let us add the force of damping to our equation. In this case, it is reasonable to think that there will be a damping force -\gamma v, \gamma > 0, opposing the direction of motion. So then,
(12.4) \quad ma = -kx - \gamma v, \qquad \gamma > 0.
In this case we have
(12.5) \quad m\frac{d^2 x}{dt^2} + \gamma\frac{dx}{dt} + kx = 0.
For this, the discriminant is given by
(12.6) \quad \gamma^2 - 4mk.
When \gamma^2 - 4mk < 0, our solutions are of the form
(12.7) \quad c_1 e^{-\frac{\gamma}{2m}t}\cos(\mu t) + c_2 e^{-\frac{\gamma}{2m}t}\sin(\mu t).
When \gamma^2 - 4mk = 0, our solution is of the form
(12.8) \quad c_1 e^{-\frac{\gamma}{2m}t} + c_2 t e^{-\frac{\gamma}{2m}t}.
When \gamma^2 - 4mk > 0, observe that \gamma - \sqrt{\gamma^2 - 4mk} > 0, so we have a solution of the form
(12.9) \quad c_1 e^{-r_1 t} + c_2 e^{-r_2 t}, \qquad r_1, r_2 > 0.
We can also add a forcing term.
We have an identical calculation for an RLC circuit. The voltage drop across a resistor is RI = R\frac{dQ}{dt}, the voltage drop across a capacitor is \frac{Q}{C}, and the voltage drop across an inductor is L\frac{dI}{dt} = L\frac{d^2 Q}{dt^2}. Then by Kirchhoff's law,
(12.10) \quad L\frac{d^2 Q}{dt^2} + R\frac{dQ}{dt} + \frac{1}{C}Q = E(t).
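A numerical illustration of the underdamped case \gamma^2 - 4mk < 0 in (12.5) can be made with SciPy; the sample values m = 1, \gamma = 0.5, k = 4 are illustrative assumptions, not values from the notes.

```python
# Underdamped oscillator m x'' + gamma x' + k x = 0 with assumed sample parameters.
import numpy as np
from scipy.integrate import solve_ivp

m, gamma, k = 1.0, 0.5, 4.0

def rhs(t, u):
    x, v = u
    return [v, -(gamma * v + k * x) / m]

ts = np.linspace(0.0, 20.0, 400)
sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], t_eval=ts, rtol=1e-9)
early = np.max(np.abs(sol.y[0][ts <= 5]))
late = np.max(np.abs(sol.y[0][ts >= 15]))
print(early, late)  # the oscillation amplitude decays roughly like e^{-gamma t / (2m)}
```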

13. Vector spaces and linear transformations


Recall the notion of vectors in Rn . If v is such a vector,
(13.1) v = (v1 , ..., vn ).
We can add two vectors in Rn .
(13.2) v + w = (v1 + w1 , ..., vn + wn ),
or multiply a vector by a scalar,
(13.3) av = (av1 , ..., avn ).
Remark 1. We are interested in vectors on Rn , but we could also take vectors on Cn .
We have laws for vector addition:
(1) Commutative law u + v = v + u,
(2) Associative law (u + v) + w = u + (v + w),
(3) Zero vector, there exists 0 ∈ V such that v + 0 = v for any v ∈ V .
(4) For any vector v ∈ V , there exists −v ∈ V such that v + (−v) = 0.
We also have laws for multiplication by scalars.
(1) Associative law, a(bv) = (ab)v,
(2) Unit law. 1v = v.
Finally, we have the distributive property.
(1) a(u + v) = au + av,
(2) (a + b)u = au + bu.
There are other vector spaces, other than Rn . For example, a subset W of a vector space is a
linear subspace provided wj ∈ W implies a1 w1 + a2 w2 ∈ W for any a1 , a2 ∈ R.
Remark 2. We can also generalize the notion of a vector space and consider, for example, the
vector space of polynomials.
If V and W are vector spaces, a map
(13.4) T : V → W,
is said to be a linear transformation provided
(13.5) T (a1 v1 + a2 v2 ) = a1 T v1 + a2 T v2 .
We say that T ∈ L(V, W ).
The linear transformations also are a vector space. Indeed, linear transformations may be added,
(13.6) T1 + T2 : V → W, (T1 + T2 )v = T1 v + T2 v,
or multiplied by a scalar,
(13.7) aT : V → W, (aT )v = a(T v).
One important example of a linear transformation is an n \times m matrix. Other examples include our differentiation and integration operators. Recall
(13.8) \quad L[\phi] = \phi''(t) + p(t)\phi'(t) + q(t)\phi(t).
We can also compose linear transformations using matrix multiplication. Suppose A and B are matrices, A = (a_{ij}), B = (b_{ij}), and let
(13.9) \quad AB = (d_{ij}), \qquad d_{ij} = \sum_{l=1}^{n} a_{il} b_{lj}.

14. Basis and dimension


For any linear transformation T there is the null space of T and the range of T ,
(14.1) N (T ) = {v ∈ V : T v = 0},

(14.2) R(T ) = {T v : v ∈ V }.
The null space is a subspace of V and the range is a subspace of W . If N (T ) = {0}, we say that T
is an injection, or one-to-one. If R(T ) = W , we say that T is surjective or onto. If both are true,
we say that T is an isomorphism. We also say that T is invertible.
Let S = {v1 , ..., vk } be a finite set in a vector space V . The span of S is the set of vectors in V
that are of the form
(14.3) c1 v1 + ... + ck vk , ck ∈ R.
This set, Span(S) is a linear subspace of V .
Definition 2. The set S is said to be linearly dependent if and only if there exist scalars c1 , ..., ck ,
not all zero, such that (14.3) = 0. Otherwise, S is said to be linearly independent.
Definition 3. If {v1 , ..., vk } is linearly independent, we say that S is a basis of span(S), and that
k is the dimension of span(S). In particular, if span(S) = V , k = dim(V ). Also, V has finite basis
and is finite dimensional.
It remains to show that any two bases of a finite dimensional vector space V must have the same
number of elements, and thus dim(V ) is well-defined. Suppose V has a basis S = {v1 , ..., vk }. Then
define the linear transformation
(14.4) A : Rk → V,
by
(14.5) A(c1 e1 + ... + ck ek ) = c1 v1 + ... + ck vk ,
where {e1 , ..., ek } is the standard basis of Rk .
Linear independence of S is equivalent to the injectivity of A. The statement that S spans V
is equivalent to the surjectivity of A. The statement that S is a basis of V is equivalent to the
statement that A is an isomorphism, with inverse specified by
(14.6) A−1 (c1 v1 + ... + ck vk ) = c1 e1 + ... + ck ek .
We can show that dim(V ) is well-defined.
Lemma 1. If v1 , ..., vk+1 are vectors in Rk , then they are linearly dependent.

Proof. This is clear for k = 1. Now we can suppose that the last component of some vj is nonzero,
since otherwise we are in Rk−1 . Reorder so that the last component of vk+1 is nonzero. We can
assume it is equal to 1. Then take
(14.7) wj = vj − vkj vk+1 .
Then by induction, there exist a1 , ..., ak , not all zero such that a1 w1 + ... + ak wk = 0. Therefore,
(14.8) a1 v1 + ... + ak vk = (a1 vk1 + ... + ak vkk )vk+1 ,
which gives linear dependence. 
Proposition 1. If V has a basis {v1 , ..., vk } with k elements and {w1 , ..., wl } ⊂ V is linearly
independent, then l ≤ k.
Proof. Take the isomorphism A : Rk → V . Then, {A−1 w1 , ..., A−1 wl } is linearly independent in
Rk , so l ≤ k. 
Corollary 1. If V is finite dimensional, then any two bases of V have the same number of elements.
If V is isomorphic to W , these two spaces have the same dimension.
Proposition 2. Suppose V and W are finite dimensional vector spaces, and
(14.9) A : V → W,
is a linear map. Then,
(14.10) dimN (A) + dimR(A) = dim(V ).
Proof. Let {w1 , ..., wl } be a basis of N (A) ⊂ V , and complete it to a basis of V ,
(14.11) {w1 , ..., wl , u1 , ..., um }.
Let L = span{u1 , ..., um } and let A0 = A|L . Then,
(14.12) R(A0 ) = R(A),
and
(14.13) N (A0 ) = N (A) ∩ L = 0.
Therefore, dimR(A) = dimR(A0 ) = dim(L) = m. 
Corollary 2. Let V be finite dimensional and let A : V → V be linear. Then A is injective if and
only if A is surjective if and only if A is an isomorphism.
Proposition 3. Let A be an n×n matrix defining A : Rn → Rn . Then the following are equivalent.
A is invertible, the columns of A are linearly independent, the columns of A span Rn .

15. Eigenvalues and eigenvectors


Let T : V \to V be linear. If there exists a nonzero v \in V such that
(15.1) \quad Tv = \lambda_j v
for some \lambda_j \in \mathbb{F}, then \lambda_j is an eigenvalue of T and v is an eigenvector.
Let E(T, λj ) denote the set of vectors v ∈ V such that (15.1) holds. Then E(T, λj ) is a vector
subspace of V and
(15.2) T : E(T, λj ) → E(T, λj ).

Definition 4. The set of λj ∈ F such that E(T, λj ) 6= 0 is denoted Spec(T ).


If V is finite dimensional, then λj ∈ Spec(T ) if and only if
(15.3) det(λj I − T ) = 0.
Then, KT (λ) = det(λI − T ) is called the characteristic polynomial of T .
Proposition 4. If V is a finite dimensional vector space and T ∈ L(V ), then T has at least one
eigenvector in V .
Proof. Fundamental theorem of algebra. 

A linear transformation might have only one eigenvector, up to scalar multiple. Consider
(15.4) \quad \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix}.
In this case the characteristic polynomial is given by (\lambda - 2)^3. Now then, if
(15.5) \quad \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} = 2\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix},
then v_2 = v_3 = 0.
Proposition 5. Suppose that the characteristic polynomial of T ∈ L(V ) has k distinct roots
λ1 , ..., λk with eigenvectors vj ∈ E(T, λj ), 1 ≤ j ≤ k. Then {v1 , ..., vk } is linearly independent.
In particular, if k = dim(V ), these vectors form a basis of V .
Proof. Suppose {v1 , ..., vk } is a linearly dependent set. Then
(15.6) c1 v1 + ... + ck vk = 0,
reordering so that c1 6= 0. Applying T − λk I to (15.6) gives
(15.7) c1 (λ1 − λk )v1 + ... + ck−1 (λk−1 − λk )vk−1 = 0.
Thus, {v1 , ..., vk−1 } is linearly dependent. Arguing by induction, we obtain a contradiction. 

Observe that in the case that we have k linearly independent eigenvectors, the eigenvectors
{v1 , ..., vk } form a natural basis of Rk . Indeed, for any vector v ∈ Rk ,
(15.8) T (c1 v1 + ... + ck vk ) = c1 λ1 v1 + ... + ck λk vk .

16. The matrix exponential


Define the matrix exponential
(16.1) \quad e^{A} = \sum_{k=0}^{\infty} \frac{1}{k!}A^{k}.
We can define the norm of a matrix,
(16.2) \quad \|T\| = \sup\{|Tv| : |v| \leq 1\}.
Then, we can compute \|A^k\| \leq \|A\|^{k}, so by the ratio test, the matrix exponential (16.1) converges. Similarly, we can define
(16.3) \quad e^{tA} = \sum_{k=0}^{\infty} \frac{t^k}{k!}A^{k},
which converges for any t \in \mathbb{C}.


Differentiating term by term,
(16.4) \quad \frac{d}{dt}e^{tA} = \sum_{k=1}^{\infty} \frac{k\,t^{k-1}}{k!}A^{k} = e^{tA}A = Ae^{tA}.
Therefore, v(t) = e^{tA}v_0 solves the first order system
(16.5) \quad \frac{dv}{dt} = Av, \qquad v(0) = v_0.
This solution is unique. Indeed, let u(t) = e^{-tA}v(t). Then u(0) = v(0) = v_0 and
(16.6) \quad \frac{d}{dt}u(t) = -e^{-tA}Av(t) + e^{-tA}v'(t) = 0,
so u(t) \equiv u(0) = v_0. The same argument implies
(16.7) \quad \frac{d}{dt}\big(e^{tA}e^{-tA}\big) = 0, \quad \text{hence} \quad e^{tA}e^{-tA} = I,
so v(t) = e^{tA}v_0.
Proposition 6. Given A \in M(n, \mathbb{C}) and s, t \in \mathbb{R},
(16.8) \quad e^{(s+t)A} = e^{sA}e^{tA}.
Proof. Using the product rule,
(16.9) \quad \frac{d}{dt}\big(e^{(s+t)A}e^{-tA}\big) = e^{(s+t)A}Ae^{-tA} - e^{(s+t)A}Ae^{-tA} = 0.
Therefore, e^{(s+t)A}e^{-tA} is independent of t, and evaluating at t = 0 shows that it equals e^{sA}. If we take s = 0, e^{tA}e^{-tA} = I, so multiplying the left and right hand sides by e^{tA} gives (16.8).
On the other hand, in general, it is not true that e^{A+B} = e^{A}e^{B}. However, it is true if AB = BA.
Proposition 7. Given A, B \in M(n, \mathbb{C}) with AB = BA,
(16.10) \quad e^{A+B} = e^{A}e^{B}.
Proof.
(16.11) \quad \frac{d}{dt}\big(e^{t(A+B)}e^{-tB}e^{-tA}\big) = e^{t(A+B)}(A+B)e^{-tB}e^{-tA} - e^{t(A+B)}Be^{-tB}e^{-tA} - e^{t(A+B)}e^{-tB}Ae^{-tA}.
Since AB^{k} = B^{k}A for any k, (16.11) = 0, which gives (16.10).

Let's do some computations. Take
(16.12) \quad A = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.
Then,
(16.13) \quad e^{tA} = \begin{pmatrix} e^{t} & 0 \\ 0 & e^{2t} \end{pmatrix}, \qquad e^{tB} = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}.
If we take
(16.14) \quad C = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \qquad e^{tC} = e^{tI}e^{tB} = \begin{pmatrix} e^{t} & te^{t} \\ 0 & e^{t} \end{pmatrix}.
Now suppose we have a basis of eigenvectors. Since Av_j = \lambda_j v_j,
(16.15) \quad e^{tA}v_j = \sum_{k=0}^{\infty} \frac{t^k}{k!}A^{k}v_j = e^{t\lambda_j}v_j.
For example, take
(16.16) \quad A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \lambda_1 = 1, \quad \lambda_2 = -1, \qquad v_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad v_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.
Then, e^{tA}v_1 = e^{t}v_1 and e^{tA}v_2 = e^{-t}v_2. Now then,
(16.17) \quad e^{tA}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \frac{1}{2}e^{t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} + \frac{1}{2}e^{-t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}, \qquad e^{tA}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \frac{1}{2}e^{t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} - \frac{1}{2}e^{-t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}.
Therefore,
(16.18) \quad e^{tA} = \begin{pmatrix} \cosh t & \sinh t \\ \sinh t & \cosh t \end{pmatrix}.
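Assuming SciPy is available, (16.18) can be checked directly against the built-in matrix exponential; a sketch only.

```python
# Check of (16.18): for A = [[0, 1], [1, 0]], e^{tA} = [[cosh t, sinh t], [sinh t, cosh t]].
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [1.0, 0.0]])
t = 0.7
closed_form = np.array([[np.cosh(t), np.sinh(t)], [np.sinh(t), np.cosh(t)]])
print(np.allclose(expm(t * A), closed_form))  # True
```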
Next, consider the matrix
(16.19) \quad A = \begin{pmatrix} 0 & -2 \\ 1 & 2 \end{pmatrix}.
The characteristic polynomial of A is
(16.20) \quad \det(A - \lambda I) = \lambda^2 - 2\lambda + 2 = 0.
The eigenvalues of (16.20) are \lambda_1 = 1 + i and \lambda_2 = 1 - i, with corresponding eigenvectors
(16.21) \quad v_1 = \begin{pmatrix} -2 \\ 1 + i \end{pmatrix}, \qquad v_2 = \begin{pmatrix} -2 \\ 1 - i \end{pmatrix}.
Then,
(16.22) \quad e^{tA}v_1 = e^{(1+i)t}v_1, \qquad e^{tA}v_2 = e^{(1-i)t}v_2.
Doing some algebra,
(16.23) \quad \begin{pmatrix} 1 \\ 0 \end{pmatrix} = -\frac{i+1}{4}\begin{pmatrix} -2 \\ 1 + i \end{pmatrix} + \frac{i-1}{4}\begin{pmatrix} -2 \\ 1 - i \end{pmatrix}, \qquad \begin{pmatrix} 0 \\ 1 \end{pmatrix} = -\frac{i}{2}\begin{pmatrix} -2 \\ 1 + i \end{pmatrix} + \frac{i}{2}\begin{pmatrix} -2 \\ 1 - i \end{pmatrix}.
Now then,
(16.24) \quad -\frac{i+1}{4}e^{(1+i)t}\begin{pmatrix} -2 \\ 1 + i \end{pmatrix} + \frac{i-1}{4}e^{(1-i)t}\begin{pmatrix} -2 \\ 1 - i \end{pmatrix} = \frac{e^{t}}{4}\begin{pmatrix} (2i+2)e^{it} + (2-2i)e^{-it} \\ -2ie^{it} + 2ie^{-it} \end{pmatrix} = e^{t}\begin{pmatrix} \cos t - \sin t \\ \sin t \end{pmatrix},
and
(16.25) \quad -\frac{i}{2}e^{(1+i)t}\begin{pmatrix} -2 \\ 1 + i \end{pmatrix} + \frac{i}{2}e^{(1-i)t}\begin{pmatrix} -2 \\ 1 - i \end{pmatrix} = \frac{e^{t}}{2}\begin{pmatrix} 2ie^{it} - 2ie^{-it} \\ (1-i)e^{it} + (1+i)e^{-it} \end{pmatrix} = e^{t}\begin{pmatrix} -2\sin t \\ \cos t + \sin t \end{pmatrix}.
Therefore,
(16.26) \quad e^{tA} = e^{t}\begin{pmatrix} \cos t - \sin t & -2\sin t \\ \sin t & \cos t + \sin t \end{pmatrix}.

17. Generalized eigenvectors and the minimal polynomial


Recall that the matrix
(17.1) \quad A = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix}
has just one eigenvalue 2 and one eigenvector e_1. However,
(17.2) \quad (A - 2I)^2 e_2 = 0, \qquad (A - 2I)^3 e_3 = 0.
Definition 5. For T \in L(V), we say a nonzero v \in V is a generalized \lambda_j eigenvector if there exists k \in \mathbb{N} such that (T - \lambda_j I)^k v = 0.
Consider for example the matrix
(17.3) \quad A = \begin{pmatrix} 0 & 1 \\ -4 & -4 \end{pmatrix}.
This matrix has one eigenvalue, -2. Now then,
(17.4) \quad A = -2I + T, \qquad T = \begin{pmatrix} 2 & 1 \\ -4 & -2 \end{pmatrix}.
In this case, T\begin{pmatrix} 1 \\ -2 \end{pmatrix} = 0 and T\begin{pmatrix} 2 \\ 1 \end{pmatrix} = 5\begin{pmatrix} 1 \\ -2 \end{pmatrix}. Therefore, T^2 = 0. Then,
(17.5) \quad e^{tA} = \exp(-2It + tT) = e^{-2t}\begin{pmatrix} 1 + 2t & t \\ -4t & 1 - 2t \end{pmatrix}.
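Assuming SciPy is available, the closed form (17.5) can be compared with the numerical matrix exponential; a sketch only.

```python
# Check of (17.5): A = [[0, 1], [-4, -4]] has double eigenvalue -2 and nilpotent part T.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-4.0, -4.0]])
t = 0.3
closed_form = np.exp(-2 * t) * np.array([[1 + 2 * t, t], [-4 * t, 1 - 2 * t]])
print(np.allclose(expm(t * A), closed_form))  # True
```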
Let GE(T, λj ) be the set of vectors v ∈ V such that (T − λj I)k v = 0 for some k ∈ N. Then,
GE(T, λj ) is a linear subspace of V and
(17.6) T : GE(T, λj ) → GE(T, λj ).
Lemma 2. For each λj ∈ C such that GE(T, λj ) 6= 0,
(17.7) T − µI : GE(T, λj ) → E(T, λj ),
is an isomorphism for all µ 6= λj .
Proof. If T −µI is not an isomorphism then T v = µv for some v ∈ GE(T, λj ). But then, (T −λj I)k =
(µ − λj )k v for any k ∈ N, which cannot be true unless µ = λj . 

Lemma 3. If V is finite dimensional and T ∈ L(V ), then there exists a nonzero polynomial p such
that p(T ) = 0.
Proof. If \dim(V) = n, then \dim L(V) = n^2, so \{I, T, T^2, ..., T^{n^2}\} is linearly dependent.
Now let
(17.8) IT = {p : p(T ) = 0}.
Certainly we can add such polynomials together and get new polynomials that satisfy (17.8) or
multiply them.
Lemma 4. Let p1 be the polynomial with minimal degree among the nonzero polynomials in an
ideal I. Then any polynomial in I is of the form p1 (λ)q(λ) for some polynomial q.
Proof. Indeed, we can divide polynomials, so
(17.9) p(λ) = p1 (λ)q(λ) + r(λ),
where r(λ) has degree less than the degree of p1 . Since the degree of p1 is minimal, r(λ) = 0. 
The minimal polynomial of T is of the form
(17.10) \quad m_T(\lambda) = \prod_{j=1}^{K}(\lambda - \lambda_j)^{k_j}.
Then let
(17.11) \quad p_l(\lambda) = \prod_{j \neq l}(\lambda - \lambda_j)^{k_j}.

Proposition 8. If V is an n-dimensional complex vector space and T ∈ L(V ), then for each
l ∈ {1, ..., K},
(17.12) GE(T, λl ) = R(pl (T )).
Proof. For any v ∈ V ,
(17.13) (T − λl )kl pl (T ) = 0,
so pl (T ) : V → GE(T, λl ). Also, each factor
(17.14) (T − λj )kj : GE(T, λl ) → GE(T, λl ),
for any j 6= l, is an isomorphism, so pl (T ) : GE(T, λl ) → GE(T, λl ) is an isomorphism. 
Proposition 9. If V is an n-dimensional complex vector space and T ∈ L(V ), then
(17.15) V = GE(T, λ1 ) + ... + GE(T, λK ).
Proof. We claim that the ideal generated by p1 , ..., pK is equal to all polynomials. Indeed, any ideal
is generated by a minimal element, which must have a zero. But p1 , ..., pK have no common zeros.
Therefore,
(17.16) p1 (T )q1 (T ) + ... + pK (T )qK (T ) = I.
Therefore,
(17.17) v = p1 (T )q1 (T )v + ... + pK (T )qK (T )v = v1 + ... + vK .


Proposition 10. Let GE(T, λl ) denote the generalized eigenspaces of T , and let Sl = {vl1 , ..., vl,dl },
with dl = dimGE(T, λl ) be a basis of GE(T, λl ). Then,
(17.18) S = S1 ∪ ... ∪ SK
is a basis of V .
Proof. We know that S spans V . We need to show that S is linearly independent. Suppose wl
are nonzero elements of GE(T, λl ). We can apply the same argument as in the case of distinct
eigenvalues, only we replace (T − λI) with (T − λI)k . 
Definition 6. We say that T ∈ L(V ) is nilpotent provided T k = 0 for some k ∈ N.
Proposition 11. If T is nilpotent then there is a basis of V for which T is strictly upper triangular.
Proof. Let Vk = T k (V ), so V = V0 ⊃ V1 ⊃ V2 ⊃ ... ⊃ Vk−1 ⊃ {0} with Vk−1 6= 0. Then, choose a
basis for Vk−1 , augment it to produce a basis for Vk−2 , and so on. Then we have an upper triangular
matrix. 
Now decompose V = V1 + ... + Vl , where Vl = GE(T, λl ). Then,
(17.19) Tl : Vl → Vl ,
where Tl = T |Vl . Then Spec(Tl ) = {λl }, and we can take a basis of Vl for which Tl is strictly upper
triangular. Now for any strictly upper triangular matrix T of dimension k, T^k = 0. Thus,
(17.20) \quad K_T(\lambda) = \det(\lambda I - T) = \prod_{l=1}^{K}(\lambda - \lambda_l)^{d_l}, \qquad d_l = \dim(V_l),
and K_T(\lambda) is a polynomial multiple of m_T(\lambda).

18. Systems of first order linear equations


A general system of n functions is given by
x01 (t) = p11 (t)x1 (t) + ... + p1n (t)xn (t) + g1 (t),
x02 (t) = p21 (t)x1 (t) + ... + p2n (t)xn (t) + g2 (t),
(18.1)
···
x0n (t) = pn1 (t)x1 (t) + ... + pnn (t)xn (t) + gn (t).
Theorem 9. If the functions p11 (t), ..., pnn (t) and g1 (t), ...., gn (t) are continuous on an interval
I, α < t < β, then there exists a unique solution x1 (t) = φ1 (t), ..., xn (t) = φn (t) of the equation
(18.1) that also satisfies the initial conditions x1 (0) = x01 , ..., xn (0) = x0n .
Proof. Let A(t) denote the matrix
(18.2) \quad A(t) = \begin{pmatrix} p_{11}(t) & p_{12}(t) & \dots & p_{1n}(t) \\ p_{21}(t) & p_{22}(t) & \dots & p_{2n}(t) \\ \dots & \dots & \dots & \dots \\ p_{n1}(t) & p_{n2}(t) & \dots & p_{nn}(t) \end{pmatrix}.
Then,
(18.3) \quad \frac{d}{dt}\vec{x}(t) = A(t)\vec{x}(t) + \vec{g}(t).
Let S(t, 0) be the solution operator to
(18.4) \quad \frac{d}{dt}\vec{x}(t) = A(t)\vec{x}(t).
That is, if \vec{x}(t) = S(t, 0)\vec{x}(0), then \vec{x}(t) solves (18.4) with initial data \vec{x}(0). Then,
(18.5) \quad \vec{x}(t) = S(t, 0)\vec{x}(0) + \int_0^t S(t, s)\vec{g}(s)\,ds.


For example, let us consider the equation
(18.6) \quad \frac{d}{dt}\vec{x}(t) = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}\vec{x}(t).
Then the solution has the form
(18.7) \quad \vec{x}(t) = c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{3t} + c_2\begin{pmatrix} 1 \\ -2 \end{pmatrix}e^{-t}.

Theorem 10. If the vector functions \vec{x}^{(1)}(t), ..., \vec{x}^{(n)}(t) are linearly independent solutions of the system (18.1) for each point in the interval \alpha < t < \beta, then each solution \vec{x}(t) can be expressed as a linear combination of \vec{x}^{(1)}(t), ..., \vec{x}^{(n)}(t) in exactly one way.
Theorem 11 (Abel's theorem). If x^{(1)}(t), ..., x^{(n)}(t) are solutions to (18.1) on the interval \alpha < t < \beta, then in this interval, W[x^{(1)}(t), ..., x^{(n)}(t)] is either identically zero or never vanishes.
Proof. Choose a basis in which the matrix of solutions (x^{(1)}(t), ..., x^{(n)}(t)) is upper triangular at the point in question, and note that in any basis, \operatorname{Tr}(A(t)) = p_{11}(t) + ... + p_{nn}(t). Then by direct computation,
(18.8) \quad \frac{dW}{dt} = (p_{11}(t) + ... + p_{nn}(t))\,W(t),
so W(t) = W(t_0)\exp\big(\int_{t_0}^{t}\operatorname{Tr}(A(s))\,ds\big), which is either identically zero or never vanishes.
Another way to see this is to remember that if one row is a multiple of another, then W(t) = 0, and the same is true of two columns. This means that the determinant is nonzero if and only if the rows and columns are linearly independent.

19. Nonhomogeneous linear systems


Now let us consider the nonhomogeneous linear system
(19.1) \quad \frac{d}{dt}\vec{x}(t) = P(t)\vec{x}(t) + \vec{g}(t).
Then recall (18.5),
(19.2) \quad \vec{x}(t) = S(t, 0)\vec{x}(0) + \int_0^t S(t, s)\vec{g}(s)\,ds = S(t, 0)\vec{x}(0) + \int_0^t S(t, 0)S(0, s)\vec{g}(s)\,ds.
Remark 3. Note that S(0, s) = S(s, 0)^{-1}.
Consider, for example, the system
(19.3) \quad \frac{d}{dt}\vec{x}(t) = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}\vec{x}(t) + \begin{pmatrix} 2e^{-t} \\ 3t \end{pmatrix}.
In this case, since A(t) is constant,
(19.4) \quad \vec{x}(t) = e^{tA}\vec{x}(0) + \int_0^t e^{(t-s)A}\begin{pmatrix} 2e^{-s} \\ 3s \end{pmatrix}ds.
In this case, the eigenvalues are given by \lambda = -1, -3 with eigenvectors \begin{pmatrix} 1 \\ 1 \end{pmatrix} and \begin{pmatrix} 1 \\ -1 \end{pmatrix}. Then,
(19.5) \quad e^{tA}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} \frac{1}{2}e^{-t} + \frac{1}{2}e^{-3t} \\ \frac{1}{2}e^{-t} - \frac{1}{2}e^{-3t} \end{pmatrix},
and
(19.6) \quad e^{tA}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} \frac{1}{2}e^{-t} - \frac{1}{2}e^{-3t} \\ \frac{1}{2}e^{-t} + \frac{1}{2}e^{-3t} \end{pmatrix}.
Therefore,
(19.7) \quad e^{tA} = \begin{pmatrix} \frac{1}{2}e^{-t} + \frac{1}{2}e^{-3t} & \frac{1}{2}e^{-t} - \frac{1}{2}e^{-3t} \\ \frac{1}{2}e^{-t} - \frac{1}{2}e^{-3t} & \frac{1}{2}e^{-t} + \frac{1}{2}e^{-3t} \end{pmatrix}.
Then,
(19.8) \quad \vec{x}(t) = e^{tA}\vec{x}(0) + \int_0^t \begin{pmatrix} \frac{1}{2}e^{-(t-s)} + \frac{1}{2}e^{-3(t-s)} & \frac{1}{2}e^{-(t-s)} - \frac{1}{2}e^{-3(t-s)} \\ \frac{1}{2}e^{-(t-s)} - \frac{1}{2}e^{-3(t-s)} & \frac{1}{2}e^{-(t-s)} + \frac{1}{2}e^{-3(t-s)} \end{pmatrix}\begin{pmatrix} 2e^{-s} \\ 3s \end{pmatrix}ds.
We can convert an n-th order differential equation into a system of first order equations. Indeed, consider the n-th order differential equation
(19.9) \quad \frac{d^n y}{dt^n} + a_{n-1}\frac{d^{n-1}y}{dt^{n-1}} + ... + a_1\frac{dy}{dt} + a_0 y = 0.
Then \vec{x}(t) = (x_0(t), ..., x_{n-1}(t)), with x_0(t) = y(t), will satisfy
(19.10) \quad \frac{d}{dt}x_0(t) = x_1(t), \quad \cdots, \quad \frac{d}{dt}x_{n-2}(t) = x_{n-1}(t), \quad \frac{d}{dt}x_{n-1}(t) = -a_{n-1}x_{n-1}(t) - ... - a_0 x_0(t).
Equivalently,
(19.11) \quad \frac{d}{dt}\vec{x}(t) = A\vec{x}(t),
with
(19.12) \quad A = \begin{pmatrix} 0 & 1 & \cdots & 0 & 0 \\ 0 & 0 & \cdots & 0 & 0 \\ \cdots & \cdots & \cdots & \cdots & \cdots \\ 0 & 0 & \cdots & 0 & 1 \\ -a_0 & -a_1 & \cdots & -a_{n-2} & -a_{n-1} \end{pmatrix}.

Definition 7. The matrix A given by (19.12) is called the companion matrix of the polynomial
(19.13) p(λ) = λn + an−1 λn−1 + ... + a1 λ + a0 .
Proposition 12. If p(λ) is a polynomial of the form (19.13), with companion matrix A given by
(19.12), then
(19.14) p(λ) = det(λI − A).
Proof. The determinant of a matrix is equal to the determinant of the transpose. Expanding along the first column,
(19.15) \quad \det(\lambda I - A) = \lambda\det\begin{pmatrix} \lambda & -1 & \cdots & 0 & 0 \\ 0 & \lambda & \cdots & 0 & 0 \\ \cdots & \cdots & \cdots & \cdots & \cdots \\ 0 & 0 & \cdots & \lambda & -1 \\ a_1 & a_2 & \cdots & a_{n-2} & \lambda + a_{n-1} \end{pmatrix} + (-1)^{n+1}a_0(-1)^{n-1}.
Therefore, by induction,
(19.16) \quad \det(\lambda I - A) = \lambda(\lambda^{n-1} + a_{n-1}\lambda^{n-2} + ... + a_1) + a_0 = p(\lambda).


20. Variable coefficient systems


Consider a variable coefficient n \times n first order system,
(20.1) \quad \frac{dx}{dt} = A(t)x, \qquad x(t_0) = x_0.
Then,
(20.2) \quad \vec{x}(t) = S(t, t_0)\vec{x}(t_0).
Now suppose that \vec{x}_1(t_0), ..., \vec{x}_n(t_0) are linearly independent, and let \vec{x}_j(t) be the solution to (20.1) with initial data \vec{x}_j(t_0); by Abel's theorem, these solutions remain linearly independent. Let M(t) denote the matrix
(20.3) \quad M(t) = (x_1(t), ..., x_n(t)).
Then,
(20.4) \quad M(t) = S(t, t_0)\,(x_1(t_0), ..., x_n(t_0)).
Therefore,
(20.5) \quad S(t, t_0) = M(t)\,(x_1(t_0), ..., x_n(t_0))^{-1},
and
(20.6) \quad S(t_0, t) = (x_1(t_0), ..., x_n(t_0))\,M(t)^{-1},
and
(20.7) \quad S(t, t_0)S(t_0, s) = M(t)M(s)^{-1}.
Therefore, for the equation
(20.8) \quad \frac{dx}{dt} = A(t)x + g(t), \qquad x(t_0) = 0,
the solution is given by
(20.9) \quad x(t) = \int_{t_0}^{t} M(t)M(s)^{-1}g(s)\,ds.

For the ordinary differential equation,


dn y dn−1 y dy
(20.10) + an−1 + ... + a1 + a0 y = g(t),
dtn dtn−1 dt
We can use variation of parameters and Cramer's rule to compute
(20.11) \quad Y(t) = \int_0^t M(t)M(s)^{-1}\begin{pmatrix} 0 \\ \cdots \\ 0 \\ g(s) \end{pmatrix}ds = \sum_{i=1}^{n} y_i(t)\int_0^t \Big(M(s)^{-1}\begin{pmatrix} 0 \\ \cdots \\ 0 \\ g(s) \end{pmatrix}\Big)_i\,ds = \sum_{i=1}^{n} y_i(t)\int_0^t \frac{(\det M(s))_{ni}}{W(s)}\,g(s)\,ds.
Lemma 5 (Cramer's rule). If A is a square matrix with \det(A) \neq 0, then the inverse of A is given by the matrix M, where
(20.12) \quad M_{ij} = \frac{1}{\det(A)}\det(A)_{ij},
where \det(A)_{ij} is the determinant of the matrix obtained from A by replacing the j-th row with the vector (0, ..., 0, 1, 0, ..., 0), with 1 in the i-th column and 0 everywhere else.
Case 1: In this case, we assume that (19.13) has n distinct real roots. Then y_1(t) = e^{r_1 t}, ..., y_n(t) = e^{r_n t} have a nonzero Wronskian.
Case 2: If r is a complex root of (19.13) and (19.13) has only real coefficients, then \bar{r} is also a root. Thus, if (19.13) has n distinct roots of which some are complex, the real-valued solutions are
(20.13) \quad e^{r_1 t}, \; e^{r_2 t}, \; ..., \; e^{r_m t}, \quad e^{\lambda_{m+1} t}\sin(a_{m+1}t), \; e^{\lambda_{m+1} t}\cos(a_{m+1}t), \; ..., \; e^{\lambda_{m+j} t}\sin(a_{m+j}t), \; e^{\lambda_{m+j} t}\cos(a_{m+j}t),
where r_1, ..., r_m are the real roots and \lambda_k \pm i a_k are the complex conjugate pairs.
Case 3: If (19.13) has a root r repeated m times, then we can choose a basis for A that is in Jordan canonical form. Then, e^{rt}, te^{rt}, t^2 e^{rt}, ..., t^{m-1}e^{rt} form m linearly independent solutions of the differential equation.
Indeed, if
 
0 1 0 0 ... 0
 0 0 1 0 ... 0
 
 0 0 0 1 ... 0
(20.14) N = · · · · · · · · · · · · · · · · · · ,

 
 0 0 0 0 ··· 1
0 0 0 0 ... 0
then
1 1
(20.15) etN = I + tN + t2 N 2 + ... + tm−1 N m−1 .
2 (m − 1)!

21. Laplace Transform


The computations in the previous section can be quite cumbersome, depending on g(t). In many
cases, the Laplace transform is often useful.
Definition 8 (Laplace transform). The Laplace transform of a function f(t) is given by
(21.1) \quad \mathcal{L}\{f(t)\} = F(s) = \int_0^{\infty} e^{-st}f(t)\,dt.

Theorem 12. Suppose that f is piecewise continuous on the interval 0 ≤ t ≤ A for any positive
A > 0. Also suppose that there exist constants K > 0, a, and M > 0, such that
(21.2) |f (t)| ≤ Keat , when t ≥ M.
Then the Laplace transform L{f (t)} = F (s) exists for s > a.
Proof. We can compute
(21.3) \quad \int_M^{\infty}|f(t)|e^{-st}\,dt \leq \int_M^{\infty}Ke^{(a-s)t}\,dt \leq \frac{K}{s-a}.

We can compute Laplace transforms of some important functions. For example,
(21.4) \quad \mathcal{L}\{e^{at}\} = \int_0^{\infty}e^{-st}e^{at}\,dt = \frac{1}{s-a}.
One of the important aspects of the Laplace transform is that we can take a Laplace transform of a function that is not continuous. Suppose f(t) = 1 for 0 \leq t < 1, f(t) = k for t = 1, and f(t) = 0 for t > 1. Then,
(21.5) \quad \int_0^{\infty}e^{-st}f(t)\,dt = \int_0^1 e^{-st}\,dt = -\frac{e^{-st}}{s}\Big|_0^1 = \frac{1 - e^{-s}}{s}, \qquad s > 0.
In general, \mathcal{L} is a linear functional. Indeed,
(21.6) \quad \mathcal{L}\{c_1 f_1(t) + c_2 f_2(t)\} = c_1\mathcal{L}\{f_1(t)\} + c_2\mathcal{L}\{f_2(t)\}.
We can use this to compute the Laplace transform of \sin(at):
(21.7) \quad \mathcal{L}\{\sin(at)\} = \frac{1}{2i}\mathcal{L}\{e^{iat}\} - \frac{1}{2i}\mathcal{L}\{e^{-iat}\} = \frac{1}{2i}\frac{1}{s - ia} - \frac{1}{2i}\frac{1}{s + ia} = \frac{2ia}{2i(s^2 + a^2)} = \frac{a}{s^2 + a^2}.

(21.8) \quad \mathcal{L}\{\cos(at)\} = \frac{1}{2}\mathcal{L}\{e^{iat}\} + \frac{1}{2}\mathcal{L}\{e^{-iat}\} = \frac{1}{2}\frac{1}{s - ia} + \frac{1}{2}\frac{1}{s + ia} = \frac{s}{s^2 + a^2}.
Next,
(21.9) \quad \int_0^{\infty}t^n e^{-st}\,dt = (-1)^n\frac{d^n}{ds^n}\int_0^{\infty}e^{-st}\,dt = (-1)^n\frac{d^n}{ds^n}\Big(\frac{1}{s}\Big) = \frac{n!}{s^{n+1}}.
More generally,
(21.10) \quad \int_0^{\infty}t^n e^{at}e^{-st}\,dt = \frac{n!}{(s-a)^{n+1}}.
Indeed,
Indeed,
Theorem 13. If F (s) = L{f (t)} exists for s > a ≥ 0 and if c is a constant,
(21.11) L{ect f (t)} = F (s − c), s > a + c.
−1
Conversely, if f (t) = L {F (s)}, then
(21.12) ect f (t) = L−1 {F (s − c)}.

Proof.
(21.13) \quad \mathcal{L}\{e^{ct}f(t)\} = \int_0^{\infty}e^{-st}e^{ct}f(t)\,dt = \int_0^{\infty}e^{-(s-c)t}f(t)\,dt = F(s-c).
Now we examine the Laplace transform of a derivative.
Theorem 14. Suppose f is continuous and f 0 is piecewise continuous on any interval 0 ≤ t ≤ A.
Also suppose that there exist constants K, a, M such that |f (t)| ≤ Keat for t ≥ M . Then L{f 0 (t)}
exists for s > a, and
(21.14) L{f 0 (t)} = sL{f (t)} − f (0).
Proof. Integrating by parts,
(21.15) \quad \int_0^A e^{-st}f'(t)\,dt = e^{-st}f(t)\Big|_0^A + s\int_0^A e^{-st}f(t)\,dt.
Taking the limit as A → ∞,
(21.16) L{f 0 (t)} = −f (0) + sL{f (t)}.

Corollary 3. Suppose that f , f 0 , ..., f (n−1) are continuous and that f (n) is piecewise continuous on
an interval 0 ≤ t ≤ A. Also suppose that there exists constants K, a, and M such that |f (t)| ≤ Keat ,
and all the derivatives of f are bounded by Keat for t ≥ M . Then,
(21.17) L{f (n) (t)} = sn L{f (t)} − sn−1 f (0) − ... − sf (n−2) (0) − f (n−1) (0).
Now, solve the differential equation
(21.18) \quad y'' - y' - 2y = 0, \qquad y(0) = 1, \quad y'(0) = 0.
Doing the Laplace transform,
(21.19) \quad (s^2 - s - 2)\mathcal{L}\{y(t)\} - (s - 1)y(0) = 0.
Therefore, doing partial fractions,
(21.20) \quad \mathcal{L}\{y(t)\} = \frac{s - 1}{(s - 2)(s + 1)} = \frac{1}{3(s - 2)} + \frac{2}{3(s + 1)}.
Therefore,
(21.21) \quad y(t) = \frac{1}{3}e^{2t} + \frac{2}{3}e^{-t}.
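Assuming SymPy is available, this initial value problem can be solved symbolically and compared against (21.21); a sketch only.

```python
# Symbolic check of (21.18)-(21.21): y'' - y' - 2y = 0, y(0) = 1, y'(0) = 0.
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t, 2) - y(t).diff(t) - 2 * y(t), 0)
sol = sp.dsolve(ode, y(t), ics={y(0): 1, y(t).diff(t).subs(t, 0): 0})
expected = sp.Rational(1, 3) * sp.exp(2 * t) + sp.Rational(2, 3) * sp.exp(-t)
print(sp.simplify(sol.rhs - expected))  # 0
```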
22. Initial value problems
Now consider the initial value problem
dn y dn−1 y
(22.1) + an−1 + ... + a0 y(t) = 0, y(0) = c0 , ..., y (n−1) (0) = cn−1 .
dtn dtn−1
Taking the Laplace transform of both sides,
(22.2)
L{y(t)}·{sn +an−1 sn−1 +...+a0 } = sn−1 y(0)+...+y (n−1) (0)+an−1 {sn−2 y(0)+...+y (n−2) (0)}+...+a1 y(0).
Therefore, doing some algebra,
(22.3) \quad \mathcal{L}\{y(t)\} = \frac{b_{n-1}s^{n-1} + ... + b_0}{s^n + a_{n-1}s^{n-1} + ... + a_0}.
By the fundamental theorem of algebra,
(22.4) \quad s^n + a_{n-1}s^{n-1} + ... + a_0 = (s - m_1)\cdots(s - m_n) = \prod_{j=1}^{l}(s - m_j)^{k_j}, \qquad k_1 + ... + k_l = n.
Then by partial fractions,
(22.5) \quad \mathcal{L}\{y(t)\} = \sum_{j=1}^{l}\frac{p_j(s)}{(s - m_j)^{k_j}} = \sum_{j=1}^{l}\sum_{1 \leq i \leq k_j}\frac{a_{ij}}{(s - m_j)^i}.
If (22.2) is real valued then a_{ij} = \bar{a}_{ij'} when m_j = \bar{m}_{j'}. Then, doing the inverse Laplace transform,
(22.6) \quad \mathcal{L}^{-1}\Big(\frac{a_{ij}}{(s - m_j)^i}\Big) = \frac{a_{ij}}{(i-1)!}\,t^{i-1}e^{tm_j}.

23. Convolution
We define the convolution,
Z t Z t
(23.1) h(t) = (f ∗ g)(t) = f (t − τ )g(τ )dτ = f (τ )g(t − τ )dτ = (g ∗ f )(t).
0 0

Theorem 15. If F (s) = L{f (t)} and G(s) = L{g(t)} both exist for s > a ≥ 0, then
(23.2) H(s) = F (s)G(s) = L{h(t)}, s > a,
where
(23.3) h(t) = (f ∗ g)(t) = (g ∗ f )(t).
Proof. By direct computation,
Z ∞ Z ∞ Z ∞ Z ∞
(23.4) F (s)G(s) = e−sτ f (τ ) · e−sξ g(ξ)dξ = f (τ ) e−s(τ +ξ) g(ξ)dξdτ.
0 0 0 0
Setting t = τ + ξ, ξ = t − τ , so by a change of variables, since τ ≤ t,
Z ∞ Z t
−ts
(23.5) F (s)G(s) = e f (τ )g(t − τ )dτ dt = H(s).
0 0

We can use this computation to compute the inverse Laplace transform. Indeed, let
(23.6) \quad H(s) = \frac{a}{s^2(s^2 + a^2)} = \frac{1}{s^2}\cdot\frac{a}{s^2 + a^2}.
We know that
(23.7) \quad \mathcal{L}^{-1}\Big\{\frac{1}{s^2}\Big\} = t, \qquad \mathcal{L}^{-1}\Big\{\frac{a}{s^2 + a^2}\Big\} = \sin(at).
Therefore,
(23.8) \quad h(t) = \int_0^t (t - \tau)\sin(a\tau)\,d\tau = -\frac{t}{a}\cos(at) + \frac{t}{a} + \frac{\tau}{a}\cos(a\tau)\Big|_0^t - \frac{1}{a}\int_0^t\cos(a\tau)\,d\tau = \frac{t}{a} - \frac{\sin(at)}{a^2}.
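Assuming SymPy is available, the convolution computation in (23.8) can be checked symbolically; a sketch only.

```python
# Check of (23.8): the convolution of t with sin(at) equals t/a - sin(at)/a^2.
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
a = sp.symbols('a', positive=True)
conv = sp.integrate((t - tau) * sp.sin(a * tau), (tau, 0, t))
print(sp.simplify(conv - (t / a - sp.sin(a * t) / a**2)))  # 0
```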

Now find the solution of the initial value problem
(23.9) \quad y'' + 4y = g(t), \qquad y(0) = 3, \quad y'(0) = -1.
Taking the Laplace transform of both sides,
(23.10) \quad (s^2 + 4)Y(s) - 3s + 1 = G(s).
Doing some algebra,
(23.11) \quad Y(s) = \frac{3s - 1}{s^2 + 4} + \frac{G(s)}{s^2 + 4}.
Decomposing Y(s),
(23.12) \quad Y(s) = 3\frac{s}{s^2 + 4} - \frac{1}{2}\frac{2}{s^2 + 4} + \frac{1}{2}\frac{2}{s^2 + 4}G(s).
Therefore,
(23.13) \quad y(t) = 3\cos(2t) - \frac{1}{2}\sin(2t) + \frac{1}{2}\int_0^t\sin(2(t - \tau))g(\tau)\,d\tau.
In general, suppose we have the initial value problem with the forcing function,
(23.14) y (n) (t) + an−1 y (n−1) (t) + ... + a0 y(t) = g(t), y(0) = c0 , y (n−1) (0) = cn−1 .
Then if we let
1
(23.15) H(s) = ,
sn + an−1 sn−1 + ... + a0

(23.16) Y (s) = (bn−1 sn−1 + ... + b0 )H(s) + G(s)H(s).


Then if we let h(t) = L−1 {H(s)}, then
Z t
(23.17) L−1 {G(s)H(s)} = h(t − τ )g(τ )dτ.
0

Definition 9. The function H is called the transfer function.


We can use this formula to obtain the variation of parameters formula. Suppose we have a second
order equation,
(23.18) y 00 + a1 y 0 + a0 y = g(t), y(0) = c0 , y 0 (0) = c1 .
Taking the Laplace transform of both sides,
(23.19) (s2 + a1 s + a0 )Y (s) = b1 s + b0 + G(s).
Therefore,
b1 s + b0 G(s)
(23.20) y(t) = L−1 { } + L−1 { 2 }.
s2 + a1 s + a0 s + a1 s + a0
Case 1, no real roots: In this case, s2 + a0 s + a1 = (s − a)2 + b2 for some real a and b. Then,
1 eat
(23.21) L−1 { } = sin(bt).
(s − a)2 + b2 b
Therefore, if c0 = c1 = 0,
Z t
(23.22) y(t) = ea(t−τ ) sin(b(t − τ ))g(τ )dτ.
0

Meanwhile, doing the variation of parameters calculation,


Z t aτ Z t aτ
e sin(bτ )g(τ ) e cos(bτ )g(τ )
(23.23) y(t) = −eat cos(bt) 2aτ
dτ + e at
sin(bt) dτ.
0 be 0 be2aτ
Case 2, one real root: In this case, s2 + a0 s + a1 = (s − a)2 . In this case,
1
(23.24) L−1 { } = teat ,
(s − a)2
so if c0 = c1 = 0,
Z t
(23.25) y(t) = (t − τ )ea(t−τ ) g(τ )dτ.
0
Doing the variation of parameters calculation,
Z t aτ Z t aτ
e τ g(τ ) e g(τ )
(23.26) y(t) = −eat 2aτ
dτ + te at
dτ.
0 e 0 e2aτ
Case 3, two real roots: In this case, s2 + a0 s + a1 = (s − r1 )(s − r2 ). Doing the variation of
parameters formula,
(23.27)
Z t Z t Z t
r1 t e r2 τ r2 t g(τ )er1 τ g(τ )
y(t) = −e (r +r )τ
g(τ )dτ +e (r +r )τ
dτ = (er2 (t−τ ) −er1 (t−τ ) )dτ.
0 (r2 − r1 )e
1 2
0 (r2 − r1 )e
1 2
0 r2 − r1
Meanwhile, doing partial fractions,
1 1 1
(23.28) L−1 { }= e r2 t − e r1 t .
(s − r1 )(s − r2 ) r2 − r1 r2 − r1
We can use the Laplace transform and convolution to solve a system of equations,
   −t 
d→ − −2 1 → − 2e
(23.29) x (t) = x (t) + .
dt 1 −2 3t
Taking the Laplace transform of both sides,
(23.30) sX(s) − →−
x (0) = AX(s) + G(s),
where
2
 
(23.31) G(s) = s+1 .
3
s2
If →

x (0) = 0, doing some algebra,
(23.32) (sI − A)X(s) = G(s).
Doing some algebra,
(23.33) X(s) = (sI − A)−1 G(s),
where
   
s+2 −1 −1 1 s+2 1
(23.34) sI − A = , (sI − A) = .
−1 s+2 (s + 1)(s + 3) 1 s+2
Therefore,
2(s+2)
!
3
(s+1)2 (s+3) + s2 (s+1)(s+3)
(23.35) X(s) = 2 3(s+2) .
(s+1)2 (s+3) + s2 (s+1)(s+3)

Therefore,
         

− 2 −t 2 1 1 1 1 4
(23.36) x (t) = e − e−3t + te−t + t− .
1 3 −1 1 2 3 5

24. Heaviside function


Now consider the problem where the right hand side of (22.1) is not equal to zero and need not
be continuous.
Definition 10 (Heaviside function). Let uc (t) = 0 for t < c and uc (t) = 1 for t ≥ c.

(24.1) \quad \mathcal{L}\{u_c(t)\} = \int_0^{\infty}e^{-st}u_c(t)\,dt = \int_c^{\infty}e^{-st}\,dt = \frac{e^{-cs}}{s}.
The Laplace transform intertwines multiplication by an exponential and translation.
Theorem 16. If the Laplace transform of f (t), F (s) = L{f (t)} exists for s > a ≥ 0, and if c is a
positive constant, then
(24.2) L{uc (t)f (t − c)} = e−cs L{f (t)} = e−cs F (s), s > a.
−1
Conversely, if f (t) is the inverse Laplace transform of F (s), f (t) = L {F (s)}, then
(24.3) uc (t)f (t − c) = L−1 {e−cs F (s)}.
Proof. By direct computation and a change of variables,
Z ∞
(24.4) L{uc (t)f (t − c)} = e−st f (t − c)dt = e−cs F (s).
c

We can apply Theorem 16 to obtain (24.1). Also, if f (t) = sin(t) + u π4 (t) cos(t − π4 ), then
(24.5) F (s) = L{sin t} + e−πs/4 L{cos t}.
On the other hand, if
1 − e−2s
(24.6) F (s) = , f (t) = t − u2 (t)(t − 2).
s2
Consider the ordinary differential equation
(24.7) 2y 00 + y + 2y = g(t), g(t) = u5 (t) − u20 (t), y(0) = y 0 (0) = 0.
Taking the Laplace transform of both sides, let Y (s) be the Laplace transform of y(t).
1
(24.8) (2s2 + s + 2)Y (s) = (e−5s − e−20s ).
s
Doing some algebra,
e−5s − e−20s
(24.9) Y (s) = .
s(2s2 + s + 2)
Doing some partial fractions,
(24.10) \quad \frac{1}{s(2s^2 + s + 2)} = \frac{a}{s} + \frac{bs + c}{2s^2 + s + 2}, \qquad a = \frac{1}{2}, \quad b = -1, \quad c = -\frac{1}{2}.
Then if
(24.11) \quad H(s) = \frac{1}{s(2s^2 + s + 2)}, \qquad h(t) = \frac{1}{2} - \mathcal{L}^{-1}\Big\{\frac{(s + \frac{1}{4}) + \frac{1}{4}}{2\big((s + \frac{1}{4})^2 + \frac{15}{16}\big)}\Big\}.
Next,
(24.12) \quad -\mathcal{L}^{-1}\Big\{\frac{s + \frac{1}{4}}{2\big((s + \frac{1}{4})^2 + \frac{15}{16}\big)}\Big\} = -\frac{1}{2}e^{-t/4}\cos\Big(\frac{\sqrt{15}}{4}t\Big),
and
(24.13) \quad -\mathcal{L}^{-1}\Big\{\frac{\frac{1}{4}}{2\big((s + \frac{1}{4})^2 + \frac{15}{16}\big)}\Big\} = -\frac{1}{\sqrt{15}}\mathcal{L}^{-1}\Big\{\frac{\frac{\sqrt{15}}{4}}{2\big((s + \frac{1}{4})^2 + \frac{15}{16}\big)}\Big\} = -\frac{e^{-t/4}}{2\sqrt{15}}\sin\Big(\frac{\sqrt{15}}{4}t\Big).
Next, using Theorem 16,
(24.14) y(t) = u5 (t)h(t) − u20 (t)h(t).
Next, consider the problem
1
y 00 (t) + 4y(t) = g(t), g(t) = 0, 0 ≤ t < 5, g(t) = (t − 5), 5 ≤ t < 10,
(24.15) 5
g(t) = 1, t ≥ 10, y(0) = y 0 (0) = 0.
Taking the Laplace transform of both sides,
(24.16) (s2 + 4)Y (s) = L{g(t)} = G(s).
Rewriting,
1
(24.17) g(t) = (u5 (t)(t − 5) − u10 (t)(t − 10)),
5
so
e−5s − e−10s
(24.18) G(s) = .
5s2
Doing some algebra,
e−5s − e−10s 1
(24.19) Y (s) = · 2 2 .
5 s (s + 4)
Now then,
1 1 1/4 1/4
(24.20) = 2 − 2 ,
s2 s2 + 4 s s +4
Now then,
1/4 1/4 1 1
(24.21) L−1 { − 2 } = t − sin(2t).
s2 s +4 4 8
Therefore,
1 1 1 1 1
(24.22) y(t) = ( u5 (t)(t − 5) − u5 (t) sin(2(t − 5)) − u10 (t)(t − 10) + u10 (t) sin(2(t − 10))).
5 4 8 4 8

25. Impulse functions


Consider the differential equation
(25.1) ay 00 + by 0 + cy = g(t),
where g(t) is large during a short interval t_0 - \tau < t < t_0 + \tau for some \tau > 0, and zero otherwise. Now then, define the integral
(25.2) \quad I(\tau) = \int_{t_0 - \tau}^{t_0 + \tau} g(t)\,dt,
and since g(t) = 0 outside the interval (t_0 - \tau, t_0 + \tau), then
(25.3) \quad I(\tau) = \int_{-\infty}^{\infty} g(t)\,dt.
For example, define
(25.4) \quad g(t) = d_\tau(t) = \frac{1}{2\tau}, \quad -\tau < t < \tau, \qquad g(t) = 0 \quad \text{otherwise}.
Then,
(25.5) \quad \lim_{\tau \searrow 0} d_\tau(t) = 0 \text{ for } t \neq 0, \qquad \lim_{\tau \searrow 0} I(\tau) = 1.
Define the unit impulse function \delta(t) by \delta(t) = 0 for t \neq 0 and \int_{-\infty}^{\infty}\delta(t)\,dt = 1. This is called the Dirac delta function. Now then,
(25.6) \quad \mathcal{L}\{\delta(t - t_0)\} = \lim_{\tau \searrow 0}\mathcal{L}\{d_\tau(t - t_0)\} = \lim_{\tau \searrow 0}\int_0^{\infty}e^{-st}d_\tau(t - t_0)\,dt = e^{-st_0}.
In general, for any continuous function f(t),
(25.7) \quad \int_{-\infty}^{\infty}f(t)\delta(t - t_0)\,dt = f(t_0).
For example, solve the differential equation
(25.8) \quad 2y'' + y' + 2y = \delta(t - 5), \qquad y(0) = y'(0) = 0.
Taking the Laplace transform of both sides,
(25.9) \quad (2s^2 + s + 2)Y(s) = e^{-5s}.
Taking the inverse Laplace transform,
(25.10) \quad \mathcal{L}^{-1}\Big\{\frac{1}{2s^2 + s + 2}\Big\} = \mathcal{L}^{-1}\Big\{\frac{2}{\sqrt{15}}\cdot\frac{\frac{\sqrt{15}}{4}}{(s + \frac{1}{4})^2 + \frac{15}{16}}\Big\} = \frac{2}{\sqrt{15}}e^{-t/4}\sin\Big(\frac{\sqrt{15}}{4}t\Big).
Therefore,
(25.11) \quad y(t) = \frac{2}{\sqrt{15}}u_5(t)e^{-(t-5)/4}\sin\Big(\frac{\sqrt{15}}{4}(t - 5)\Big).

26. Existence and uniqueness of solutions


Consider the equation,
dx
(26.1) = F (t, x), x(t0 ) = x0 .
dt
Also suppose that F (t, x) satisfies the Lipschitz condition,
(26.2) kF (t, x) − F (t, y)k ≤ Lkx − yk.
We can achieve this bound if
(26.3) kDx F (t, x)k ≤ L.
Proposition 13. Suppose F : I × Ω → Rn is bounded and continuous and satisfies the Lipschitz
condition and that x0 ∈ Ω. Then, there exists T0 > 0 and a unique C 1 solution to (26.1) for
|t − t0 | < T0 .
Proof. We prove this using Picard iteration. Indeed, a solution to (26.1) satisfies
(26.4) \quad x(t) = x_0 + \int_{t_0}^{t} F(s, x(s))\,ds.
Then by Picard iteration, define
(26.5) \quad x_{n+1}(t) = x_0 + \int_{t_0}^{t} F(s, x_n(s))\,ds.
We assume that there exists R > 0 such that B_R(x_0) \subset \Omega and
(26.6) \quad \|F(s, x)\| \leq M
for any x \in B_R(x_0). Clearly, x_0(t) = x_0 for all t. Also,
(26.7) \quad \|x_{n+1}(t) - x_0\| \leq M|t - t_0|,
so for |t - t_0| < T_0 with T_0 sufficiently small, x_{n+1}(t) also takes values in B_R(x_0).
Now then, by the Lipschitz continuity,
(26.8) \quad \|x_{n+1}(t) - x_n(t)\| \leq L T_0 \max_{|s - t_0| \leq T_0}\|x_n(s) - x_{n-1}(s)\|.
Thus, for T_0 \leq \frac{1}{2L},
(26.9) \quad \max_{|t - t_0| \leq T_0}\|x_{n+1}(t) - x_n(t)\| \leq 2^{-n}R,
so the series
(26.10) \quad x(t) = x_0 + \sum_{n=0}^{\infty}(x_{n+1}(t) - x_n(t))
converges uniformly on |t - t_0| \leq T_0, and the limit x(t) solves (26.4).

For each closed, bounded subset K of Ω, (26.2) and (26.6) hold. If a solution stays in K, then
we can extend a solution.
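The iteration (26.5) can also be carried out numerically. The following sketch, assuming NumPy is available, approximates each integral with the trapezoid rule; the grid size and the number of iterations are illustrative choices, not quantities from the proof.

```python
# Bare-bones Picard iteration for x' = F(t, x), x(t0) = x0, mirroring (26.5).
import numpy as np

def picard(F, t0, x0, T0, n_iter=8, n_grid=200):
    ts = np.linspace(t0, t0 + T0, n_grid)
    x = np.full_like(ts, x0, dtype=float)        # x_0(t) = x0
    for _ in range(n_iter):
        integrand = F(ts, x)
        # cumulative trapezoid rule for the integral from t0 to t
        integral = np.concatenate(([0.0], np.cumsum(
            0.5 * (integrand[1:] + integrand[:-1]) * np.diff(ts))))
        x = x0 + integral                         # next iterate x_{n+1}(t)
    return ts, x

# Example: x' = x, x(0) = 1, whose solution is e^t.
ts, x = picard(lambda t, x: x, 0.0, 1.0, 1.0)
print(np.max(np.abs(x - np.exp(ts))))  # small after a few iterations
```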

Proposition 14. Let F be as in Proposition 13 but with Lipschitz and boundedness conditions only
holding on a closed, bounded set K. Assume that [a, b] is contained in the open interval I and that
x(t) solves (26.4) for t ∈ (a, b). Assume that there exists a closed, bounded set K ⊂ Ω such that
x(t) ∈ K for all t ∈ (a, b). Then there exist a1 < a and b1 > b such that x(t) solves (26.1) for
t ∈ (a1 , b1 ).
We can use this result to prove global existence. For example, consider the 2 \times 2 system
(26.11) \quad \frac{dy}{dt} = v, \qquad \frac{dv}{dt} = -y^3.
In this case,
(26.12) \quad \frac{d}{dt}\Big(\frac{v^2}{2} + \frac{y^4}{4}\Big) = 0.
Therefore, the solution x(t) = (y(t), v(t)) lies on a level curve
(26.13) \quad \frac{y^4}{4} + \frac{v^2}{2} = C.

27. Nonlinear ODEs : The phase plane


We turn now to nonlinear ordinary differential equations of the form
(27.1) \quad \frac{dy}{dt} = f(y).
Such equations usually do not have a solution expressible in terms of elementary functions.
Of particular importance are the critical points of (27.1). These are points y_0 such that f(y_0) = 0. In this case, of course, y(t) = y_0 is a solution to (27.1). Then, by Taylor's formula,
(27.2) \quad \frac{d(y - y_0)}{dt} = f(y) - f(y_0) = Df(y_0)\cdot(y - y_0) + o(y - y_0).
Then we study the eigenvalues and eigenvectors of Df(y_0) = A.

Definition 11. We say that a critical point is stable if, given any \epsilon > 0, there exists \delta > 0 such that if \|x(0) - x_0\| < \delta, then the solution exists for all positive t and satisfies \|x(t) - x_0\| < \epsilon. A point that is not stable is unstable. A solution is said to be asymptotically stable if, in addition to being stable,
(27.3) \quad \lim_{t \to \infty} x(t) = x_0.

Case 1: Real, unequal eigenvalues of the same sign. In this case, the solution to the linearized equation is
(27.4) \quad \vec{x}(t) = c_1\xi^{(1)}e^{r_1 t} + c_2\xi^{(2)}e^{r_2 t}.
The critical point is asymptotically stable if both eigenvalues are negative, and unstable if both are positive.
Case 2: Real, unequal eigenvalues of opposite sign. In this case, we have a stable direction and an unstable direction, see (27.4). This is a saddle point.
Case 3: Equal eigenvalues. In this case, we could have Df(y_0) = \lambda I, so then in that case we can again use (27.4). This is called a proper node. Otherwise, if we have one eigenvector and a generalized eigenvector, which is called an improper node,
(27.5) \quad \vec{x}(t) = c_1\xi e^{rt} + c_2(\xi t e^{rt} + \eta e^{rt}).
This is unstable if r is positive and stable if r is negative.


Case 4: Complex eigenvalues with nonzero real part. In this case, we may have either a spiral sink
or a spiral source. In this case, the linearized equation is
 
dx λ µ
(27.6) = x,
dt −µ λ
and the matrix exponential is given by
 
cos(µt) sin(µt)
(27.7) etA = eλt .
− sin(µt) cos(µt)
This is an unstable spiral if λ > 0 and a stable spiral if λ < 0.
Case 5: In this case, λ = 0, so we have a center. In this case,
 
dx 0 µ
(27.8) = x,
dt −µ 0
and the matrix exponential is given by
 
cos(µt) sin(µt)
(27.9) etA = .
− sin(µt) cos(µt)
This is a center, which is stable.

28. Predator-prey equations


The simplest model for a species population growth is
dx
(28.1) = ax.
dt
Remark 4. Of course, a population should be an integer.
The solution to this equation is
(28.2) x(t) = eat x(0).
Of course, resources are not unlimited. Instead, we consider the logistic population growth equation,
(28.3) \quad \frac{dx}{dt} = ax(1 - bx), \qquad b = \frac{1}{K}.
This is a separable equation,
(28.4) \quad \frac{dx}{x(1 - bx)} = a\,dt.
By partial fractions,
(28.5) \quad \frac{1}{x(1 - bx)} = \frac{1}{x} + \frac{b}{1 - bx}.
Integrating both sides,
(28.6) \quad \ln(x) - \ln(1 - bx) = at + C.
This equation has two critical points, x = 0 and x = \frac{1}{b} = K.
Now we turn to a 2 × 2 system of equations, the predator-prey equations. Let x(t) be the
population of prey, y(t) the population of predator, and α the rate at which the predator eats the
prey. Then we have the system of equations
dx
= ax − αxy = x(a − αy),
(28.7) dt
dy
= −cy + γxy = y(−c + γx).
dt
In this case, if y = 0, then we have exponential growth of the prey. If x = 0, the population of the
predator goes to zero.
This equation has two critical points, (x, y) = (0, 0) and (x, y) = \big(\frac{c}{\gamma}, \frac{a}{\alpha}\big).
The origin: (x, y) = (0, 0).
In this case, the linearization of (28.7) is given by
(28.8) \quad \frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a & 0 \\ 0 & -c \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + O(x^2 + y^2).
In this case, the critical point is a saddle point.
The point (x, y) = \big(\frac{c}{\gamma}, \frac{a}{\alpha}\big):
To simplify the computations, consider the equation
dx
= x(1 − 0.5y),
(28.9) dt
dy
= y(−0.75 + 0.25x).
dt
In this case, the critical point is (x, y) = (3, 2). Expanding around (3, 2),
(28.10)
dx
= x(1 − 0.5y) = (3 + (x − 3))(1 − (0.5) ∗ 2 − 0.5(y − 2)) = −1.5(y − 2) − 0.5(x − 3)(y − 2),
dt
dy
= y(−0.75 + 0.25x) = (2 + (y − 2))(−0.75 + 0.25 ∗ 3 + 0.25(x − 3)) = 0.5(x − 3) + 0.25(x − 3)(y − 2).
dt
Linearizing the system, we have
(28.11) \quad \frac{d}{dt}\begin{pmatrix} x - 3 \\ y - 2 \end{pmatrix} = \begin{pmatrix} 0 & -1.5 \\ 0.5 & 0 \end{pmatrix}\begin{pmatrix} x - 3 \\ y - 2 \end{pmatrix} + O((x - 3)^2 + (y - 2)^2).
The matrix \begin{pmatrix} 0 & -1.5 \\ 0.5 & 0 \end{pmatrix} has two imaginary eigenvalues, so the solutions of the linearization are periodic.
For the nonlinear solution, observe that
(28.12) \quad \frac{dy}{dx} = \frac{dy/dt}{dx/dt} = \frac{y(-0.75 + 0.25x)}{x(1 - 0.5y)}.
This equation is separable, obtaining
(28.13) \quad \frac{1 - 0.5y}{y}\,dy = \frac{-0.75 + 0.25x}{x}\,dx.

Integrating both sides,


(28.14) ln(y) − 0.5y + 0.75 ln(x) − 0.25x = C.
Therefore, we have a trajectory that circles the critical point.
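Assuming SciPy is available, this conservation law can be checked numerically: along a solution of (28.9), the quantity H(x, y) = \ln(y) - 0.5y + 0.75\ln(x) - 0.25x from (28.14) should remain essentially constant. The initial point (1, 1) is an illustrative choice.

```python
# Check that H(x, y) is conserved along solutions of the predator-prey system (28.9).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, u):
    x, y = u
    return [x * (1 - 0.5 * y), y * (-0.75 + 0.25 * x)]

ts = np.linspace(0.0, 30.0, 600)
sol = solve_ivp(rhs, (0.0, 30.0), [1.0, 1.0], t_eval=ts, rtol=1e-10, atol=1e-10)
x, y = sol.y
H = np.log(y) - 0.5 * y + 0.75 * np.log(x) - 0.25 * x
print(H.max() - H.min())  # close to zero: the orbit circles the critical point (3, 2)
```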

29. Competing species equation


Let x(t) and y(t) be two competing species,
dx
= ax(1 − bx) − cxy,
(29.1) dt
dy
= αy(1 − βy) − γxy.
dt
In this case, each population is governed by the logistic equation in the absence of the other species.
Let us consider the specific equation,
dx
= x(1 − x − y),
(29.2) dt
dy y
= (3 − 4y − 2x).
dt 4
There are two critical points of the second equation when x = 0: y = 0 and y = \frac{3}{4}. There are two critical points of the first equation when y = 0: x = 1 and x = 0. Finally, we have the critical point satisfying
(29.3) \quad 1 - x - y = 0, \qquad 3 - 4y - 2x = 0,
which is the fourth critical point \big(\frac{1}{2}, \frac{1}{2}\big).
In this case, we have the Jacobian
 
1 − 2x − y −x
(29.4) .
−0.5y 0.75 − 2y − 0.5x
At (x, y) = (0, 0),
    
d x 1 0 x
(29.5) = + O(x2 + y 2 ).
dt y 0 0.75 y
This is an unstable equilibrium.
At (x, y) = \big(0, \frac{3}{4}\big),
(29.6) \quad \frac{d}{dt}\begin{pmatrix} x \\ y - \frac{3}{4} \end{pmatrix} = \begin{pmatrix} 0.25 & 0 \\ -0.375 & -0.75 \end{pmatrix}\begin{pmatrix} x \\ y - \frac{3}{4} \end{pmatrix} + O\big(x^2 + (y - \tfrac{3}{4})^2\big).
This is a saddle point. The eigenvalues and eigenvectors are
(29.7) \quad r_1 = \frac{1}{4}, \quad e_1 = \begin{pmatrix} 8 \\ -3 \end{pmatrix}, \qquad r_2 = -\frac{3}{4}, \quad e_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.
At (x, y) = (1, 0),
    
d x−1 −1 −1 x−1
(29.8) = + O((x − 1)2 + y 2 ).
dt y 0 0.25 y

The eigenvalues and eigenvectors are
(29.9) \quad r_1 = -1, \quad e_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad r_2 = \frac{1}{4}, \quad e_2 = \begin{pmatrix} 4 \\ -5 \end{pmatrix}.
This is also a saddle point.
At (x, y) = \big(\frac{1}{2}, \frac{1}{2}\big),
(29.10) \quad \frac{d}{dt}\begin{pmatrix} x - \frac{1}{2} \\ y - \frac{1}{2} \end{pmatrix} = \begin{pmatrix} -0.5 & -0.5 \\ -0.25 & -0.5 \end{pmatrix}\begin{pmatrix} x - \frac{1}{2} \\ y - \frac{1}{2} \end{pmatrix} + O\big((x - \tfrac{1}{2})^2 + (y - \tfrac{1}{2})^2\big).
The eigenvalues and eigenvectors are
(29.11) \quad r_1 = \frac{1}{4}(-2 + \sqrt{2}), \quad e_1 = \begin{pmatrix} \sqrt{2} \\ -1 \end{pmatrix}, \qquad r_2 = \frac{1}{4}(-2 - \sqrt{2}), \quad e_2 = \begin{pmatrix} \sqrt{2} \\ 1 \end{pmatrix}.
This is a stable critical point.
Now consider the system of equations
dx
= x(1 − x − y),
(29.12) dt
dy
= y(0.5 − 0.25y − 0.75x).
dt
This has the critical points (0, 0), (0, 2), (1, 0), and ( 21 , 12 ). We have the Jacobian
 
1 − 2x − y −x
(29.13) .
−0.75y 0.5 − 0.5y − 0.75x
At (x, y) = (0, 0), we have the Jacobian
 
1 0
(29.14) ,
0 0.5
which is an unstable equilibrium.
At (x, y) = (0, 2), we have the Jacobian
 
−1 0
(29.15) ,
−1.5 −0.5
   
1 0
which has the eigenvalues r1 = −1, r2 = −0.5 and the eigenvectors e1 = and e2 = . Thus,
3 1
(x, y) = (0, 2) is a stable equilibrium.
At (x, y) = (1, 0), we have the Jacobian
 
−1 −1
(29.16) ,
0 −0.25
   
1 1 4
which has the eigenvalues r1 = −1 and r2 = − 4 and eigenvectors e1 = and e2 = . This
0 −3
is also a stable equilibrium.
At (x, y) = \big(\frac{1}{2}, \frac{1}{2}\big), the Jacobian is
(29.17) \quad \begin{pmatrix} -0.5 & -0.5 \\ -0.375 & -0.125 \end{pmatrix}.
The eigenvalues and eigenvectors are
(29.18) \quad r_1 = \frac{1}{16}(-5 + \sqrt{57}), \quad e_1 = \begin{pmatrix} 1 \\ \frac{1}{8}(-3 - \sqrt{57}) \end{pmatrix}, \qquad r_2 = \frac{1}{16}(-5 - \sqrt{57}), \quad e_2 = \begin{pmatrix} 1 \\ \frac{1}{8}(-3 + \sqrt{57}) \end{pmatrix}.
This critical point is a saddle point. Its stable trajectories form a separatrix.

References
[BD65] William E. Boyce and Richard C. DiPrima. Elementary differential equations and boundary value problems. John Wiley & Sons, Inc., New York-London-Sydney, 1965.
[Tay22] Michael E. Taylor. Introduction to differential equations, volume 52 of Pure and Applied Undergraduate Texts. American Mathematical Society, Providence, RI, second edition, 2022.