MATH 302 LECTURE NOTES
BENJAMIN DODSON
Contents
1. The method of integrating factors
2. Separable differential equations
3. Linear and nonlinear differential equations
4. Exact differential equations and integrating factors
5. Second order equations - reducible cases
6. Homogeneous differential equations with constant coefficients
7. Repeated roots: reduction of order
8. Complex roots of the characteristic equation
9. The Wronskian
10. Nonhomogeneous equations: method of undetermined coefficients
11. Variation of parameters
12. Mechanical and electrical vibrations
13. Vector spaces and linear transformations
14. Basis and dimension
15. Eigenvalues and eigenvectors
16. The matrix exponential
17. Generalized eigenvectors and the minimal polynomial
18. Systems of first order linear equations
19. Nonhomogeneous linear systems
20. Variable coefficient systems
21. Laplace transform
22. Initial value problems
23. Convolution
24. Heaviside function
25. Impulse functions
26. Existence and uniqueness of solutions
27. Nonlinear ODEs: the phase plane
28. Predator-prey equations
29. Competing species equation
References
These class notes are primarily taken from [BD65] and [Tay22].
If f(y_0) = 0, then the solution to (2.10) is of the form y(x) = y_0. In this case we would not want to divide by f(y).
is well-defined.
Now then, suppose (3.1) has two solutions, y_1(t) and y_2(t), and let y(t) = y_1(t) - y_2(t). Then,
(3.3) \frac{dy}{dt} + p(t)y(t) = 0, \quad y(t_0) = 0.
We can show that the only solution to (3.3) is y(t) = 0: multiplying by the integrating factor \mu(t) = \exp(\int p(t)dt) gives (\mu(t)y(t))' = 0, so \mu(t)y(t) = \mu(t_0)y(t_0) = 0, and \mu(t) > 0.
Theorem 2. Suppose f and \frac{\partial f}{\partial y} are continuous in some rectangle \alpha < t < \beta, \gamma < y < \delta containing (t_0, y_0). Then, in some interval t_0 - h < t < t_0 + h contained in \alpha < t < \beta, there is a unique solution y(t) = \phi(t) of the initial value problem
(3.4) \frac{dy}{dt} = f(t, y), \quad y(t_0) = y_0.
We can apply this theorem to the initial value problem
(3.5) ty'(t) + 2y(t) = 4t^2, \quad y(1) = 2.
Doing some algebra, p(t) = \frac{2}{t}, which is continuous on t ≠ 0.
Now consider the initial value problem
(3.6) \frac{dy}{dx} = \frac{3x^2 + 4x + 2}{2(y - 1)}, \quad y(0) = -1.
In this case, f and \frac{\partial f}{\partial y} are continuous on any rectangle that does not contain y = 1. If instead we impose the initial condition y = 1 at x = 0, separating variables, we obtain
(3.7) y^2 - 2y = x^3 + 2x^2 + 2x + c, \quad c = -1.
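To see the loss of uniqueness explicitly (a short computation consistent with (3.7)): completing the square with c = -1 gives (y - 1)^2 = x^3 + 2x^2 + 2x, so
y(x) = 1 \pm \sqrt{x^3 + 2x^2 + 2x},
and both branches satisfy y = 1 at x = 0.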
For the equation,
(3.8) \frac{dy}{dt} = y^{1/3}, \quad y(0) = 0,
we do not have a unique solution: both y(t) = 0 and y(t) = (\frac{2t}{3})^{3/2} solve (3.8) for t ≥ 0. Here \frac{\partial f}{\partial y} = \frac{1}{3}y^{-2/3} is not continuous at y = 0, so Theorem 2 does not apply.
The initial value problem,
(3.9) \frac{dy}{dt} = y^2, \quad y(0) = 1,
has the solution y(t) = \frac{1}{1 - t}, which blows up as t → 1. Thus, even though f(t, y) = y^2 is smooth, the solution only exists for t < 1.
Then,
(5.19) \frac{dv}{dy} = \frac{y^2}{v}.
Therefore,
(5.20) \frac{1}{2}v^2 = \frac{1}{3}y^3 + C.
Therefore,
(5.21) \frac{dy}{dt} = v = \pm\sqrt{\frac{2}{3}y^3 + 2C}.
(5.22) \pm \int \frac{dy}{\sqrt{\frac{2}{3}y^3 + 2C}} = t + C_2.
(6.1) ay'' + by' + cy = 0.
Taking y(t) = e^{rt}, we have y'(t) = re^{rt} and y''(t) = r^2 e^{rt}. Substituting this into (6.1) gives (ar^2 + br + c)e^{rt} = 0. Since e^{rt} ≠ 0, this condition is only satisfied when ar^2 + br + c = 0. This equation is called the characteristic equation.
For (6.3), the characteristic equation is r^2 - 1 = 0, which has solutions r = \pm 1. A general solution of (6.3) is given by y(t) = c_1 e^t + c_2 e^{-t}.
Solve
(6.6) 4y'' - 8y' + 3y = 0, \quad y(0) = 2, \quad y'(0) = \frac{1}{2}.
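A worked sketch, following the method above: the characteristic equation of (6.6) is 4r^2 - 8r + 3 = 0, with roots r = \frac{3}{2} and r = \frac{1}{2}, so y(t) = c_1 e^{3t/2} + c_2 e^{t/2}. The initial conditions give c_1 + c_2 = 2 and \frac{3}{2}c_1 + \frac{1}{2}c_2 = \frac{1}{2}, so c_1 = -\frac{1}{2}, c_2 = \frac{5}{2}, and
y(t) = -\frac{1}{2}e^{3t/2} + \frac{5}{2}e^{t/2}.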
Therefore,
(7.13) w(t) = ct^{1/2}, \text{ and } v(t) = \frac{2}{3}ct^{3/2} + k.
9. The Wronskian
Let us define the concept of a differential operator. Suppose p(t) and q(t) are continuous func-
tions. Then let
(9.1) L[φ] = φ''(t) + p(t)φ'(t) + q(t)φ(t).
With this equation, we associate a set of initial conditions,
(9.2) y(t_0) = y_0, \quad y'(t_0) = y_0'.
We have the existence and uniqueness theorem.
Furthermore, W[y_1, y_2](t) is either zero for all t ∈ I or else is never zero on I.
Proof. By direct computation,
(9.9) (y_1 y_2'' - y_1'' y_2) + p(t)(y_1 y_2' - y_1' y_2) = 0.
Now then, observe that W' = y_1 y_2'' - y_1'' y_2, proving that
(9.10) W' + p(t)W = 0.
Thus,
(9.11) W(t) = c \exp\left(-\int p(t)\,dt\right).
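As a quick check of (9.11): for y'' - y = 0 we have p(t) = 0, so W should be constant, and indeed, for the solutions y_1 = e^t, y_2 = e^{-t},
W[y_1, y_2](t) = y_1 y_2' - y_1' y_2 = -e^t e^{-t} - e^t e^{-t} = -2.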
(10.6) y'' - 3y' - 4y = 3e^{2t}.
Take the particular solution Y(t) = Ae^{2t}. In this case, Y(t) = -\frac{1}{2}e^{2t}.
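To see this, substitute Y(t) = Ae^{2t} into (10.6): Y' = 2Ae^{2t} and Y'' = 4Ae^{2t}, so
(4A - 6A - 4A)e^{2t} = -6Ae^{2t} = 3e^{2t},
giving A = -\frac{1}{2}.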
(10.7) y'' - 3y' - 4y = 2\sin t.
In this case, use the particular solution Y(t) = A\sin t + B\cos t. Indeed, we can decompose
(10.8) 2\sin t = \frac{1}{i}e^{it} - \frac{1}{i}e^{-it}.
Find the particular solution,
(10.9) y'' - 3y' - 4y = -8e^t\cos(2t).
In this case,
(10.10) Y(t) = Ae^t\cos(2t) + Be^t\sin(2t).
Here is a table of forcing terms g(t) and the corresponding forms of the particular solution Y(t), where s is the smallest nonnegative integer such that no term of Y(t) solves the homogeneous equation:
(10.11)
g(t) = P_n(t) = a_0 t^n + ... + a_n: \quad Y(t) = t^s(A_0 t^n + ... + A_n),
g(t) = P_n(t)e^{αt}: \quad Y(t) = t^s(A_0 t^n + ... + A_n)e^{αt},
g(t) = P_n(t)e^{αt}(A_1 \sin(βt) + A_2 \cos(βt)): \quad Y(t) = t^s(A_0 t^n + ... + A_n)e^{αt}\cos(βt) + t^s(B_0 t^n + ... + B_n)e^{αt}\sin(βt).
(12.1) m\frac{d^2x}{dt^2} + kx = 0.
The characteristic polynomial of (12.1) is given by
(12.2) mr^2 + k = 0.
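The roots of (12.2) are r = \pm i\sqrt{k/m}, so by the discussion of complex roots above, the general solution of (12.1) is
x(t) = A\cos(ω_0 t) + B\sin(ω_0 t),
where ω_0 = \sqrt{k/m} (the standard notation for the natural frequency, introduced here for this sketch).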
For the damped equation mx'' + γx' + kx = 0, the nature of the solutions is determined by the sign of the discriminant
(12.6) γ^2 - 4mk.
We have an identical calculation for an RLC circuit. The voltage drop across a resistor is RI = R\frac{dQ}{dt}, the voltage drop across a capacitor is \frac{Q}{C}, and the voltage drop across an inductor is L\frac{dI}{dt} = L\frac{d^2Q}{dt^2}. Then by Kirchhoff's law,
(12.10) L\frac{d^2Q}{dt^2} + R\frac{dQ}{dt} + \frac{1}{C}Q = E(t).
We can also compose linear transformations using matrix multiplication. Suppose A and B are matrices, A = (a_{ij}), B = (b_{ij}), and let
(13.9) AB = (d_{ij}), \quad d_{ij} = \sum_{l=1}^{n} a_{il} b_{lj}.
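For a concrete instance of (13.9) with n = 2, each entry d_{ij} is the product of the i-th row of A with the j-th column of B:
\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1\cdot 0 + 2\cdot 1 & 1\cdot 1 + 2\cdot 0 \\ 3\cdot 0 + 4\cdot 1 & 3\cdot 1 + 4\cdot 0 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 4 & 3 \end{pmatrix}.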
(14.2) R(T) = {Tv : v ∈ V}.
The null space is a subspace of V and the range is a subspace of W. If N(T) = {0}, we say that T
is an injection, or one-to-one. If R(T ) = W , we say that T is surjective or onto. If both are true,
we say that T is an isomorphism. We also say that T is invertible.
Let S = {v1 , ..., vk } be a finite set in a vector space V . The span of S is the set of vectors in V
that are of the form
(14.3) c_1 v_1 + ... + c_k v_k, \quad c_j ∈ R.
This set, Span(S), is a linear subspace of V.
Definition 2. The set S is said to be linearly dependent if and only if there exist scalars c1 , ..., ck ,
not all zero, such that (14.3) = 0. Otherwise, S is said to be linearly independent.
Definition 3. If S = {v_1, ..., v_k} is linearly independent, we say that S is a basis of span(S), and that k is the dimension of span(S). In particular, if span(S) = V, then k = dim(V). In this case, V has a finite basis and is finite dimensional.
It remains to show that any two bases of a finite dimensional vector space V must have the same
number of elements, and thus dim(V ) is well-defined. Suppose V has a basis S = {v1 , ..., vk }. Then
define the linear transformation
(14.4) A : R^k → V,
by
(14.5) A(c_1 e_1 + ... + c_k e_k) = c_1 v_1 + ... + c_k v_k,
where {e_1, ..., e_k} is the standard basis of R^k.
Linear independence of S is equivalent to the injectivity of A. The statement that S spans V is equivalent to the surjectivity of A. The statement that S is a basis of V is equivalent to the statement that A is an isomorphism, with inverse specified by
(14.6) A^{-1}(c_1 v_1 + ... + c_k v_k) = c_1 e_1 + ... + c_k e_k.
We can show that dim(V ) is well-defined.
Lemma 1. If v_1, ..., v_{k+1} are vectors in R^k, then they are linearly dependent.
Proof. This is clear for k = 1. Now we can suppose that the last component of some v_j is nonzero, since otherwise we are in R^{k-1}. Reorder so that the last component of v_{k+1} is nonzero. We can assume it is equal to 1. Then take
(14.7) w_j = v_j - v_{kj} v_{k+1},
where v_{kj} denotes the last (k-th) component of v_j, so that each w_j has last component zero. Then by induction, there exist a_1, ..., a_k, not all zero, such that a_1 w_1 + ... + a_k w_k = 0. Therefore,
(14.8) a_1 v_1 + ... + a_k v_k = (a_1 v_{k1} + ... + a_k v_{kk})v_{k+1},
which gives linear dependence.
Proposition 1. If V has a basis {v1 , ..., vk } with k elements and {w1 , ..., wl } ⊂ V is linearly
independent, then l ≤ k.
Proof. Take the isomorphism A : R^k → V. Then, {A^{-1}w_1, ..., A^{-1}w_l} is linearly independent in R^k, so l ≤ k.
Corollary 1. If V is finite dimensional, then any two bases of V have the same number of elements.
If V is isomorphic to W , these two spaces have the same dimension.
Proposition 2. Suppose V and W are finite dimensional vector spaces, and
(14.9) A : V → W,
is a linear map. Then,
(14.10) \dim N(A) + \dim R(A) = \dim(V).
Proof. Let {w1 , ..., wl } be a basis of N (A) ⊂ V , and complete it to a basis of V ,
(14.11) {w1 , ..., wl , u1 , ..., um }.
Let L = span{u_1, ..., u_m} and let A_0 = A|_L. Then,
(14.12) R(A_0) = R(A),
and
(14.13) N(A_0) = N(A) ∩ L = 0.
Therefore, \dim R(A) = \dim R(A_0) = \dim(L) = m.
Corollary 2. Let V be finite dimensional and let A : V → V be linear. Then A is injective if and
only if A is surjective if and only if A is an isomorphism.
Proposition 3. Let A be an n × n matrix defining A : R^n → R^n. Then the following are equivalent: A is invertible; the columns of A are linearly independent; the columns of A span R^n.
A linear transformation might have only one eigenvector, up to scalar multiple. Consider
(15.4) \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix}.
In this case the characteristic polynomial is given by (λ - 2)^3. Now then, if
(15.5) \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} = 2\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix},
then v_2 = v_3 = 0.
Proposition 5. Suppose that the characteristic polynomial of T ∈ L(V ) has k distinct roots
λ1 , ..., λk with eigenvectors vj ∈ E(T, λj ), 1 ≤ j ≤ k. Then {v1 , ..., vk } is linearly independent.
In particular, if k = dim(V ), these vectors form a basis of V .
Proof. Suppose {v_1, ..., v_k} is a linearly dependent set. Then
(15.6) c_1 v_1 + ... + c_k v_k = 0,
reordering so that c_1 ≠ 0. Applying T - λ_k I to (15.6) gives
(15.7) c_1(λ_1 - λ_k)v_1 + ... + c_{k-1}(λ_{k-1} - λ_k)v_{k-1} = 0.
Thus, {v_1, ..., v_{k-1}} is linearly dependent. Arguing by induction, we obtain a contradiction.
Observe that in the case that we have k linearly independent eigenvectors, the eigenvectors {v_1, ..., v_k} form a natural basis of R^k. Indeed, for any vector v ∈ R^k,
(15.8) T(c_1 v_1 + ... + c_k v_k) = c_1 λ_1 v_1 + ... + c_k λ_k v_k.
Then, we can compute \|A^k\| ≤ \|A\|^k, so by comparison with the series for e^{\|A\|}, the matrix exponential (16.1) converges by the ratio test. Similarly, we can define,
(16.3) e^{tA} = \sum_{k=0}^{\infty} \frac{t^k}{k!} A^k.
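Two simple cases, as a check of the definition: if A = diag(a_1, ..., a_n), then A^k = diag(a_1^k, ..., a_n^k), so e^{tA} = diag(e^{ta_1}, ..., e^{ta_n}); and if N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, then N^2 = 0 and the series terminates:
e^{tN} = I + tN = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}.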
On the other hand, in general, it is not true that e^{A+B} = e^A e^B. However, it is true if AB = BA.
Proposition 7. Given A, B ∈ M(n, C) with AB = BA,
(16.10) e^{A+B} = e^A e^B.
Proof.
(16.11) \frac{d}{dt}\left(e^{t(A+B)} e^{-tB} e^{-tA}\right) = e^{t(A+B)}(A + B)e^{-tB}e^{-tA} - e^{t(A+B)}Be^{-tB}e^{-tA} - e^{t(A+B)}e^{-tB}Ae^{-tA}.
Since AB^k = B^k A for any k, A commutes with e^{-tB}, so the right side of (16.11) vanishes. Thus e^{t(A+B)}e^{-tB}e^{-tA} is constant, equal to its value I at t = 0, which gives (16.10).
Now then,
(16.24) -\frac{i+1}{4}\begin{pmatrix} -2 \\ 1+i \end{pmatrix}e^{(1+i)t} + \frac{i-1}{4}\begin{pmatrix} -2 \\ 1-i \end{pmatrix}e^{(1-i)t} = \frac{e^t}{4}\begin{pmatrix} (2i+2)e^{it} + (2-2i)e^{-it} \\ -2ie^{it} + 2ie^{-it} \end{pmatrix} = e^t\begin{pmatrix} \cos t - \sin t \\ \sin t \end{pmatrix},
and
(16.25) -\frac{i}{2}\begin{pmatrix} -2 \\ 1+i \end{pmatrix}e^{(1+i)t} + \frac{i}{2}\begin{pmatrix} -2 \\ 1-i \end{pmatrix}e^{(1-i)t} = \frac{e^t}{2}\begin{pmatrix} 2ie^{it} - 2ie^{-it} \\ (1-i)e^{it} + (1+i)e^{-it} \end{pmatrix} = e^t\begin{pmatrix} -2\sin t \\ \cos t + \sin t \end{pmatrix}.
Therefore,
(16.26) e^{tA} = e^t\begin{pmatrix} \cos t - \sin t & -2\sin t \\ \sin t & \cos t + \sin t \end{pmatrix}.
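As a consistency check (a short computation assuming only (16.26)): e^{tA}|_{t=0} = I, and differentiating (16.26) at t = 0 recovers the generator,
A = \frac{d}{dt}e^{tA}\Big|_{t=0} = \begin{pmatrix} 0 & -2 \\ 1 & 2 \end{pmatrix},
whose characteristic polynomial λ^2 - 2λ + 2 has roots 1 \pm i, matching the exponentials e^{(1\pm i)t} above.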
Lemma 3. If V is finite dimensional and T ∈ L(V ), then there exists a nonzero polynomial p such
that p(T ) = 0.
Proof. If \dim(V) = n, then \dim L(V) = n^2, so {I, T, ..., T^{n^2}} is linearly dependent.
Now let
(17.8) I_T = {p : p(T) = 0}.
Certainly we can add two polynomials in I_T and obtain another element of I_T, or multiply an element of I_T by any polynomial and remain in I_T; that is, I_T is an ideal.
Lemma 4. Let p1 be the polynomial with minimal degree among the nonzero polynomials in an
ideal I. Then any polynomial in I is of the form p1 (λ)q(λ) for some polynomial q.
Proof. Indeed, we can divide polynomials with remainder:
(17.9) p(λ) = p_1(λ)q(λ) + r(λ),
where r(λ) has degree less than the degree of p_1. Since r = p - p_1 q ∈ I and the degree of p_1 is minimal among nonzero elements of I, r(λ) = 0.
The minimal polynomial of T is of the form
(17.10) m_T(λ) = \prod_{j=1}^{K} (λ - λ_j)^{k_j}.
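For example, for the 3 × 3 matrix in (15.4): with N = T - 2I we have N ≠ 0, N^2 ≠ 0, and N^3 = 0, so
m_T(λ) = (λ - 2)^3,
which in that case coincides with the characteristic polynomial.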
Then let
(17.11) p_l(λ) = \prod_{j ≠ l} (λ - λ_j)^{k_j}.
Proposition 8. If V is an n-dimensional complex vector space and T ∈ L(V ), then for each
l ∈ {1, ..., K},
(17.12) GE(T, λl ) = R(pl (T )).
Proof. For any v ∈ V,
(17.13) (T - λ_l)^{k_l} p_l(T) = 0,
so p_l(T) : V → GE(T, λ_l). Also, each factor
(17.14) (T - λ_j)^{k_j} : GE(T, λ_l) → GE(T, λ_l),
for any j ≠ l, is an isomorphism, so p_l(T) : GE(T, λ_l) → GE(T, λ_l) is an isomorphism.
Proposition 9. If V is an n-dimensional complex vector space and T ∈ L(V ), then
(17.15) V = GE(T, λ1 ) + ... + GE(T, λK ).
Proof. We claim that the ideal generated by p_1, ..., p_K is equal to all polynomials. Indeed, any ideal is generated by a minimal element; if that generator were not constant, it would have a zero, which would be a common zero of p_1, ..., p_K. But p_1, ..., p_K have no common zeros.
Therefore,
(17.16) p1 (T )q1 (T ) + ... + pK (T )qK (T ) = I.
Therefore,
(17.17) v = p1 (T )q1 (T )v + ... + pK (T )qK (T )v = v1 + ... + vK .
Proposition 10. Let GE(T, λ_l) denote the generalized eigenspaces of T, and let S_l = {v_{l,1}, ..., v_{l,d_l}}, with d_l = \dim GE(T, λ_l), be a basis of GE(T, λ_l). Then,
(17.18) S = S1 ∪ ... ∪ SK
is a basis of V .
Proof. We know that S spans V . We need to show that S is linearly independent. Suppose wl
are nonzero elements of GE(T, λl ). We can apply the same argument as in the case of distinct
eigenvalues, only we replace (T - λI) with (T - λI)^k.
Definition 6. We say that T ∈ L(V) is nilpotent provided T^k = 0 for some k ∈ N.
Proposition 11. If T is nilpotent then there is a basis of V for which T is strictly upper triangular.
Proof. Let V_k = T^k(V), so V = V_0 ⊃ V_1 ⊃ V_2 ⊃ ... ⊃ V_{k-1} ⊃ {0} with V_{k-1} ≠ 0. Then, choose a basis for V_{k-1}, augment it to produce a basis for V_{k-2}, and so on. Since T maps V_j into V_{j+1}, in the resulting basis T is strictly upper triangular.
Now decompose V = V_1 + ... + V_K, where V_l = GE(T, λ_l). Then,
(17.19) T_l : V_l → V_l,
where T_l = T|_{V_l}. Then Spec(T_l) = {λ_l}, and we can take a basis of V_l for which T_l - λ_l I is strictly upper triangular. Now for any strictly upper triangular matrix T of dimension k, T^k = 0. Thus,
(17.20) K_T(λ) = \det(λI - T) = \prod_{l=1}^{K} (λ - λ_l)^{d_l}, \quad d_l = \dim(V_l).
Definition 7. The matrix A given by (19.12) is called the companion matrix of the polynomial
(19.13) p(λ) = λ^n + a_{n-1}λ^{n-1} + ... + a_1 λ + a_0.
Proposition 12. If p(λ) is a polynomial of the form (19.13), with companion matrix A given by (19.12), then
(19.14) p(λ) = \det(λI - A).
Proof. The determinant of a matrix is equal to the determinant of the transpose. Then,
(19.15) \det(λI - A) = λ\det\begin{pmatrix} λ & -1 & \cdots & 0 & 0 \\ 0 & λ & \cdots & 0 & 0 \\ \cdots & \cdots & \cdots & \cdots & \cdots \\ 0 & 0 & \cdots & λ & -1 \\ a_1 & a_2 & \cdots & a_{n-2} & λ + a_{n-1} \end{pmatrix} + (-1)^{n-1} a_0 (-1)^{n-1}.
Therefore, by induction on the dimension,
(19.16) \det(λI - A) = λ(λ^{n-1} + a_{n-1}λ^{n-2} + ... + a_1) + a_0 = p(λ).
Theorem 12. Suppose that f is piecewise continuous on the interval 0 ≤ t ≤ A for any A > 0. Also suppose that there exist constants K > 0, a, and M > 0 such that
(21.2) |f(t)| ≤ Ke^{at} \quad \text{when} \quad t ≥ M.
Then the Laplace transform L{f(t)} = F(s) exists for s > a.
Proof. We can compute
(21.3) \int_M^{\infty} |f(t)|e^{-st}\,dt ≤ \int_M^{\infty} Ke^{(a-s)t}\,dt ≤ \frac{K}{s-a}.
(21.8) L{\cos(at)} = \frac{1}{2}L{e^{iat}} + \frac{1}{2}L{e^{-iat}} = \frac{1}{2}\cdot\frac{1}{s - ia} + \frac{1}{2}\cdot\frac{1}{s + ia} = \frac{s}{s^2 + a^2}.
Next,
(21.9) \int_0^{\infty} t^n e^{-st}\,dt = (-1)^n \frac{d^n}{ds^n}\int_0^{\infty} e^{-st}\,dt = (-1)^n \frac{d^n}{ds^n}\left(\frac{1}{s}\right) = \frac{n!}{s^{n+1}}.
More generally,
(21.10) \int_0^{\infty} t^n e^{at} e^{-st}\,dt = \frac{n!}{(s - a)^{n+1}}.
Indeed,
Theorem 13. If F(s) = L{f(t)} exists for s > a ≥ 0 and if c is a constant,
(21.11) L{e^{ct}f(t)} = F(s - c), \quad s > a + c.
Conversely, if f(t) = L^{-1}{F(s)}, then
(21.12) e^{ct}f(t) = L^{-1}{F(s - c)}.
Proof.
(21.13) L{e^{ct}f(t)} = \int_0^{\infty} e^{-st}e^{ct}f(t)\,dt = \int_0^{\infty} e^{-(s-c)t}f(t)\,dt = F(s - c).
Now we examine the Laplace transform of a derivative.
Theorem 14. Suppose f is continuous and f' is piecewise continuous on any interval 0 ≤ t ≤ A. Also suppose that there exist constants K, a, M such that |f(t)| ≤ Ke^{at} for t ≥ M. Then L{f'(t)} exists for s > a, and
(21.14) L{f'(t)} = sL{f(t)} - f(0).
Proof. Integrating by parts,
(21.15) \int_0^A e^{-st}f'(t)\,dt = e^{-st}f(t)\Big|_0^A + s\int_0^A e^{-st}f(t)\,dt.
Taking the limit as A → ∞,
(21.16) L{f'(t)} = -f(0) + sL{f(t)}.
Corollary 3. Suppose that f, f', ..., f^{(n-1)} are continuous and that f^{(n)} is piecewise continuous on an interval 0 ≤ t ≤ A. Also suppose that there exist constants K, a, and M such that |f(t)| ≤ Ke^{at}, and all the derivatives of f are bounded by Ke^{at} for t ≥ M. Then,
(21.17) L{f^{(n)}(t)} = s^n L{f(t)} - s^{n-1}f(0) - ... - sf^{(n-2)}(0) - f^{(n-1)}(0).
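For instance, with n = 2, (21.17) reads
L{f''(t)} = s^2 L{f(t)} - sf(0) - f'(0),
which is the form used for the second order equations below.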
Now, solve the differential equation,
(21.18) y'' - y' - 2y = 0, \quad y(0) = 1, \quad y'(0) = 0.
Doing the Laplace transform,
(21.19) (s^2 - s - 2)L{y(t)} - (s - 1)y(0) = 0.
Therefore, doing partial fractions,
(21.20) L{y(t)} = \frac{s - 1}{(s - 2)(s + 1)} = \frac{1}{3(s - 2)} + \frac{2}{3(s + 1)}.
Therefore,
(21.21) y(t) = \frac{1}{3}e^{2t} + \frac{2}{3}e^{-t}.
22. Initial value problems
Now consider the initial value problem
(22.1) \frac{d^n y}{dt^n} + a_{n-1}\frac{d^{n-1} y}{dt^{n-1}} + ... + a_0 y(t) = 0, \quad y(0) = c_0, ..., y^{(n-1)}(0) = c_{n-1}.
Taking the Laplace transform of both sides,
(22.2) L{y(t)}\cdot{s^n + a_{n-1}s^{n-1} + ... + a_0} = s^{n-1}y(0) + ... + y^{(n-1)}(0) + a_{n-1}{s^{n-2}y(0) + ... + y^{(n-2)}(0)} + ... + a_1 y(0).
If (22.2) is real valued, then a_{ij} = \bar{a}_{ij'} when m_j = \bar{m}_{j'}. Then, doing the inverse Laplace transform,
(22.6) L^{-1}\left(\frac{a_{ij}}{(s - m_j)^i}\right) = \frac{a_{ij}}{(i - 1)!} t^{i-1} e^{tm_j}.
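For example, taking i = 2 and m_j = 2 in (22.6),
L^{-1}\left(\frac{1}{(s - 2)^2}\right) = te^{2t}.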
23. Convolution
We define the convolution,
(23.1) h(t) = (f ∗ g)(t) = \int_0^t f(t - τ)g(τ)\,dτ = \int_0^t f(τ)g(t - τ)\,dτ = (g ∗ f)(t).
Theorem 15. If F (s) = L{f (t)} and G(s) = L{g(t)} both exist for s > a ≥ 0, then
(23.2) H(s) = F (s)G(s) = L{h(t)}, s > a,
where
(23.3) h(t) = (f ∗ g)(t) = (g ∗ f )(t).
Proof. By direct computation,
(23.4) F(s)G(s) = \int_0^{\infty} e^{-sτ}f(τ)\,dτ \cdot \int_0^{\infty} e^{-sξ}g(ξ)\,dξ = \int_0^{\infty} f(τ)\int_0^{\infty} e^{-s(τ+ξ)}g(ξ)\,dξ\,dτ.
Setting t = τ + ξ, ξ = t − τ , so by a change of variables, since τ ≤ t,
(23.5) F(s)G(s) = \int_0^{\infty} e^{-ts}\int_0^t f(τ)g(t - τ)\,dτ\,dt = H(s).
We can use this computation to compute the inverse Laplace transform. Indeed, let
(23.6) H(s) = \frac{a}{s^2(s^2 + a^2)} = \frac{1}{s^2}\cdot\frac{a}{s^2 + a^2}.
We know that
(23.7) L^{-1}\left\{\frac{1}{s^2}\right\} = t, \quad L^{-1}\left\{\frac{a}{s^2 + a^2}\right\} = \sin(at).
Therefore,
(23.8) h(t) = \int_0^t (t - τ)\sin(aτ)\,dτ = -\frac{t}{a}\cos(at) + \frac{t}{a} + \frac{τ}{a}\cos(aτ)\Big|_0^t - \frac{1}{a}\int_0^t \cos(aτ)\,dτ = \frac{t}{a} - \frac{\sin(at)}{a^2}.
Therefore,
(23.13) y(t) = 3\cos(2t) - \frac{1}{2}\sin(2t) + \frac{1}{2}\int_0^t \sin(2(t - τ))g(τ)\,dτ.
In general, suppose we have the initial value problem with the forcing function,
(23.14) y^{(n)}(t) + a_{n-1}y^{(n-1)}(t) + ... + a_0 y(t) = g(t), \quad y(0) = c_0, ..., y^{(n-1)}(0) = c_{n-1}.
Then if we let
(23.15) H(s) = \frac{1}{s^n + a_{n-1}s^{n-1} + ... + a_0},
Therefore,
(23.36) \vec{x}(t) = \begin{pmatrix} 2 \\ 1 \end{pmatrix}e^{-t} - \frac{2}{3}\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{-3t} + \begin{pmatrix} 1 \\ 1 \end{pmatrix}te^{-t} + \begin{pmatrix} 1 \\ 2 \end{pmatrix}t - \frac{1}{3}\begin{pmatrix} 4 \\ 5 \end{pmatrix}.
(24.1) L{u_c(t)} = \int_0^{\infty} e^{-st}u_c(t)\,dt = \int_c^{\infty} e^{-st}\,dt = \frac{e^{-cs}}{s}.
The Laplace transform intertwines multiplication by an exponential and translation.
Theorem 16. If the Laplace transform of f(t), F(s) = L{f(t)}, exists for s > a ≥ 0, and if c is a positive constant, then
(24.2) L{u_c(t)f(t - c)} = e^{-cs}L{f(t)} = e^{-cs}F(s), \quad s > a.
Conversely, if f(t) is the inverse Laplace transform of F(s), f(t) = L^{-1}{F(s)}, then
(24.3) u_c(t)f(t - c) = L^{-1}{e^{-cs}F(s)}.
Proof. By direct computation and a change of variables,
(24.4) L{u_c(t)f(t - c)} = \int_c^{\infty} e^{-st}f(t - c)\,dt = e^{-cs}F(s).
We can apply Theorem 16 to obtain (24.1). Also, if f(t) = \sin(t) + u_{π/4}(t)\cos(t - \frac{π}{4}), then
(24.5) F(s) = L{\sin t} + e^{-πs/4}L{\cos t}.
On the other hand, if
(24.6) F(s) = \frac{1 - e^{-2s}}{s^2}, \quad \text{then} \quad f(t) = t - u_2(t)(t - 2).
Consider the ordinary differential equation
(24.7) 2y'' + y' + 2y = g(t), \quad g(t) = u_5(t) - u_{20}(t), \quad y(0) = y'(0) = 0.
Taking the Laplace transform of both sides, let Y (s) be the Laplace transform of y(t).
(24.8) (2s^2 + s + 2)Y(s) = \frac{1}{s}(e^{-5s} - e^{-20s}).
Doing some algebra,
(24.9) Y(s) = \frac{e^{-5s} - e^{-20s}}{s(2s^2 + s + 2)}.
Doing some partial fractions,
(24.10) \frac{1}{s(2s^2 + s + 2)} = \frac{a}{s} + \frac{bs + c}{2s^2 + s + 2}, \quad a = \frac{1}{2}, \quad b = -1, \quad c = -\frac{1}{2}.
Then if
(24.11) H(s) = \frac{1}{s(2s^2 + s + 2)}, \quad h(t) = \frac{1}{2} - L^{-1}\left\{\frac{(s + \frac{1}{4}) + \frac{1}{4}}{2\left((s + \frac{1}{4})^2 + \frac{15}{16}\right)}\right\}.
Next,
(24.12) -L^{-1}\left\{\frac{s + \frac{1}{4}}{2\left((s + \frac{1}{4})^2 + \frac{15}{16}\right)}\right\} = -\frac{1}{2}e^{-t/4}\cos\left(\frac{\sqrt{15}}{4}t\right),
and
(24.13) -L^{-1}\left\{\frac{\frac{1}{4}}{2\left((s + \frac{1}{4})^2 + \frac{15}{16}\right)}\right\} = -\frac{1}{\sqrt{15}}L^{-1}\left\{\frac{\frac{\sqrt{15}}{4}}{2\left((s + \frac{1}{4})^2 + \frac{15}{16}\right)}\right\} = -\frac{e^{-t/4}}{2\sqrt{15}}\sin\left(\frac{\sqrt{15}}{4}t\right).
Next, using Theorem 16,
(24.14) y(t) = u_5(t)h(t - 5) - u_{20}(t)h(t - 20).
Next, consider the problem
(24.15) y''(t) + 4y(t) = g(t), \quad g(t) = 0 \text{ for } 0 ≤ t < 5, \quad g(t) = \frac{1}{5}(t - 5) \text{ for } 5 ≤ t < 10, \quad g(t) = 1 \text{ for } t ≥ 10, \quad y(0) = y'(0) = 0.
Taking the Laplace transform of both sides,
(24.16) (s^2 + 4)Y(s) = L{g(t)} = G(s).
Rewriting,
(24.17) g(t) = \frac{1}{5}(u_5(t)(t - 5) - u_{10}(t)(t - 10)),
so
(24.18) G(s) = \frac{e^{-5s} - e^{-10s}}{5s^2}.
Doing some algebra,
(24.19) Y(s) = \frac{e^{-5s} - e^{-10s}}{5}\cdot\frac{1}{s^2(s^2 + 4)}.
Now then,
(24.20) \frac{1}{s^2}\cdot\frac{1}{s^2 + 4} = \frac{1/4}{s^2} - \frac{1/4}{s^2 + 4},
Now then,
(24.21) L^{-1}\left\{\frac{1/4}{s^2} - \frac{1/4}{s^2 + 4}\right\} = \frac{1}{4}t - \frac{1}{8}\sin(2t).
Therefore,
(24.22) y(t) = \frac{1}{5}\left(\frac{1}{4}u_5(t)(t - 5) - \frac{1}{8}u_5(t)\sin(2(t - 5)) - \frac{1}{4}u_{10}(t)(t - 10) + \frac{1}{8}u_{10}(t)\sin(2(t - 10))\right).
Thus, for T_0 ≤ \frac{1}{2L}, the solution map is a contraction.
For each closed, bounded subset K of Ω, (26.2) and (26.6) hold. If a solution stays in K, then we can extend it.
Proposition 14. Let F be as in Proposition 13 but with Lipschitz and boundedness conditions only
holding on a closed, bounded set K. Assume that [a, b] is contained in the open interval I and that
x(t) solves (26.4) for t ∈ (a, b). Assume that there exists a closed, bounded set K ⊂ Ω such that
x(t) ∈ K for all t ∈ (a, b). Then there exist a1 < a and b1 > b such that x(t) solves (26.1) for
t ∈ (a1 , b1 ).
We can use this result to prove global existence. For example, consider the 2 × 2 system,
(26.11) \frac{dy}{dt} = v, \quad \frac{dv}{dt} = -y^3.
In this case,
(26.12) \frac{d}{dt}\left(\frac{v^2}{2} + \frac{y^4}{4}\right) = v\frac{dv}{dt} + y^3\frac{dy}{dt} = -vy^3 + y^3 v = 0.
Therefore, the solution x(t) = (y(t), v(t)) lies on a level curve
(26.13) \frac{y^4}{4} + \frac{v^2}{2} = C.
Since these level curves are closed, bounded subsets of the plane, Proposition 14 implies that the solution exists for all t.
Definition 11. We say that a critical point x_0 is stable if, given any ε > 0, there exists δ > 0 such that if \|x(0) - x_0\| < δ, then the solution exists for all positive t and satisfies \|x(t) - x_0\| < ε. A point that is not stable is unstable. A critical point is said to be asymptotically stable if, in addition to being stable,
(27.3) \lim_{t→∞} x(t) = x_0.
Case 1: Real, unequal eigenvalues of the same sign. In this case, the solution to the linearized
equation is
(27.4) \vec{x}(t) = c_1 ξ^{(1)} e^{r_1 t} + c_2 ξ^{(2)} e^{r_2 t}.
Case 3: Equal eigenvalues. In this case, we could have Df(y_0) = λI, in which case we can again use (27.4). This is called a proper node. Otherwise, if we have one eigenvector ξ and a generalized eigenvector η, which is called an improper node,
(27.5) \vec{x}(t) = c_1 ξe^{rt} + c_2(ξte^{rt} + ηe^{rt}).
References
[BD65] William E. Boyce and Richard C. DiPrima. Elementary differential equations and boundary value problems.
John Wiley & Sons, Inc., New York-London-Sydney, 1965.
[Tay22] Michael E. Taylor. Introduction to differential equations, volume 52 of Pure and Applied Undergraduate Texts. American Mathematical Society, Providence, RI, second edition, 2022.