Notes on Differential Equations
© Wen Shen, Spring 2015. All rights reserved.
Contents
1 Introduction 4
1.1 Classification of Differential Equations . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Directional Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
7 Systems of Two Linear Differential Equations 114
7.1 Introduction to systems of differential equations . . . . . . . . . . . . . . . . . . 114
7.2 Review of matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
7.3 Eigenvalues and eigenvectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
7.4 Basic theory of systems of first order linear equations . . . . . . . . . . . . . 119
7.5 Homogeneous systems of two equations with constant coefficients . . . . . . . 119
7.6 Complex eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
7.7 Fundamental Matrices* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
7.8 Repeated eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
7.9 Summary of stability and types of critical points for linear systems . . . . . . 140
12 Homeworks 196
12.1 Homework 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
12.2 Homework 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
12.3 Homework 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
12.4 Homework 4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
12.5 Homework 5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
12.6 Homework 6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
12.7 Homework 7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
12.8 Homework 8. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
12.9 Homework 9. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
12.10 Homework 10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
12.11 Homework 11 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
12.12 Homework 12 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
12.13 Homework 13 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
12.14 Homework 14 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
13.6 Answer/keys to homework 6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
13.7 Answer/keys to homework 7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
13.8 Answer/keys to homework 8. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
13.9 Answer/keys to homework 9. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
13.10 Answer/keys to homework 10 . . . . . . . . . . . . . . . . . . . . . . . . . . 230
13.11 Answer/keys to homework 11 . . . . . . . . . . . . . . . . . . . . . . . . . . 231
13.12 Answer/keys to homework 12 . . . . . . . . . . . . . . . . . . . . . . . . . . 232
13.13 Answer/keys to homework 13 . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Chapter 1
Introduction
Example 1. Let y(t) be the unknown. Identify the order and linearity of the following
equations.
(a). (y + t)y' + y = 1,
(b). 3y' + (t + 4)y = t^2 + y'',
(c). y''' = cos(2ty),
(d). y^{(4)} + t y''' + cos(√t) = e^y.
Answer.
Problem                                   order   linear?
(a). (y + t)y' + y = 1                      1       No
(b). 3y' + (t + 4)y = t^2 + y''             2       Yes
(c). y''' = cos(2ty)                        3       No
(d). y^{(4)} + t y''' + cos(√t) = e^y       4       No
What is a solution? A solution is a function y(t) whose derivatives exist and which satisfies the equation, the boundary conditions (if any), and the initial conditions (if any).
Example 2. Verify that y(t) = eat is a solution of the IVP (initial value problem)
y ′ = ay, y(0) = 1.
Answer. Let's check whether y(t) satisfies the equation and the initial condition: y'(t) = a e^{at} = a y(t), and y(0) = e^0 = 1. Both hold, so y is a solution.

Example 3. Solve the equation
(t + 1)y' = t^2.
Answer. We first rewrite
y' = t^2/(t + 1) = ((t^2 − 1) + 1)/(t + 1) = ((t + 1)(t − 1) + 1)/(t + 1) = (t − 1) + 1/(t + 1).
To find y, we need to integrate y':
y = ∫ y'(t) dt = ∫ [ (t − 1) + 1/(t + 1) ] dt = t^2/2 − t + ln|t + 1| + c,
where c is an integration constant which is arbitrary. This means there are infinitely many
solutions.
Additional condition: the initial condition y(0) = 1 (meaning: y = 1 when t = 0). Then
y(0) = 0 + ln|1| + c = c = 1,  so  y(t) = t^2/2 − t + ln|t + 1| + 1.
So for an equation of the form y' = f(t), we can solve it by integration: y = ∫ f(t) dt.
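As a quick sanity check (a sketch, not part of the method itself), one can verify the solution of Example 3 numerically in Python, approximating y' by a central difference:

```python
import math

def y(t):
    # The solution found above (valid for t > -1, where |t + 1| = t + 1)
    return t**2 / 2 - t + math.log(t + 1) + 1

def dydt(t, h=1e-6):
    # Central-difference approximation of y'(t)
    return (y(t + h) - y(t - h)) / (2 * h)

assert abs(y(0) - 1) < 1e-12          # initial condition y(0) = 1
for t in [0.5, 1.0, 2.0, 5.0]:
    # the equation (t + 1) y' = t^2 holds at sample points
    assert abs((t + 1) * dydt(t) - t**2) < 1e-5
```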
Review on integration:
∫ x^n dx = x^{n+1}/(n + 1) + c,  (n ≠ −1)
∫ (1/x) dx = ln|x| + c
∫ sin x dx = −cos x + c
∫ cos x dx = sin x + c
∫ e^x dx = e^x + c
∫ a^x dx = a^x/ln a + c
Integration by parts:
∫ u dv = uv − ∫ v du,   ∫_a^b u(x)v'(x) dx = [u(x)v(x)]_a^b − ∫_a^b v(x)u'(x) dx
Chain rule:
d/dt f(g(t)) = f'(g(t)) · g'(t)
Example 5. Consider the equation y' = (1/2)(3 − y). We know the following:
• If y = 3, then y' = 0: flat slope;
• If y > 3, then y' < 0: the solution decreases;
• If y < 3, then y' > 0: the solution increases.
See the directional field below (with some solutions sketched in red):
Remarks:
(1). For the equation y'(t) = a(b − y) with a > 0, the behavior is similar to Example 5, which has b = 3 and a = 1/2. Solutions approach y = b as t → +∞.
(2). Now consider y'(t) = a(b − y), but with a < 0. This changes the sign of y', so solutions now move away from y = b as t grows.
For an equation with two critical points:
• If y = 1 or y = 5, then y' = 0.
(Directional field plot.)
Remark: If we have y'(t) = f(y), and f(y0) = 0 for some y0, then the constant function y(t) = y0 is a solution.
(Directional field plots for the matching exercise.)
We first check the constant solutions, y = 1 and y = 3. This rules out (a). Next, we check the sign of y' on the intervals y < 1, 1 < y < 3, and y > 3, and match it against the directional field. We find that (c) could be the equation.
One can sketch the directional field along the lines y = −t + c, for various values of c.
• If y = −t , then y ′ = 0;
• If y = −t − 1 , then y ′ = −1;
• If y = −t − 2 , then y ′ = −2;
• If y = −t + 1 , then y ′ = 1;
• If y = −t + 2 , then y ′ = 2;
Below is the graph of the directional field, with some solutions plotted in red.
We see that, for a general equation y' = f(t, y) where the function f depends on t, the directional fields can be rather tedious to generate by hand. There is computer software that will do this for you, as I did with these plots: they were generated in Matlab, a powerful package that can solve many math problems.
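For instance, the arrows of a directional field are easy to tabulate by hand or by machine. The Python sketch below (with an illustrative helper `slope_grid`) computes the unit arrows that a quiver-style plot would draw for Example 5:

```python
import math

def f(t, y):
    # Right-hand side from Example 5: y' = (3 - y)/2
    return 0.5 * (3 - y)

def slope_grid(ts, ys):
    # At each grid point, a unit vector in the direction (1, f(t, y));
    # these (u, v) pairs are exactly what a quiver plot would draw.
    field = {}
    for t in ts:
        for y in ys:
            s = f(t, y)
            n = math.hypot(1.0, s)
            field[(t, y)] = (1.0 / n, s / n)
    return field

field = slope_grid([0, 1, 2], [1, 3, 5])
assert field[(0, 3)] == (1.0, 0.0)   # flat slope on the line y = 3
assert field[(0, 5)][1] < 0          # above y = 3, solutions decrease
assert field[(0, 1)][1] > 0          # below y = 3, solutions increase
```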
Chapter 2
First Order Differential Equations
Among the themes of this chapter: modeling, and methods for solving first order equations.
We begin with the linear equation y' + p(t)y = g(t), which we multiply by an integrating factor µ(t). The function µ is chosen such that the equation becomes integrable, meaning the LHS (Left Hand Side) is the derivative of something. In particular, we require
d/dt (µ(t)y) = µ(t)y' + µ(t)p(t)y,
which requires
µ'(t) = dµ/dt = µ(t)p(t),  ⇒  dµ/µ = p(t) dt.
Integrating both sides,
ln µ(t) = ∫ p(t) dt.
Then
d/dt (µ(t)y) = µ(t)g(t),  ⇒  µ(t)y = ∫ µ(t)g(t) dt + c,
which gives the formula for the solution
y(t) = (1/µ(t)) [ ∫ µ(t)g(t) dt + c ],  where  µ(t) = exp(∫ p(t) dt).
For example, for y' + ay = b with constants a ≠ 0 and b, we have µ = e^{at}, so
y = e^{−at} ∫ e^{at} b dt = e^{−at} [ (b/a) e^{at} + c ] = b/a + c e^{−at},
where c is an arbitrary constant. Pay attention to where one adds this integration constant!
Similarly, for the equation y' + y = e^{2t} we have µ = e^t, and
y(t) = e^{−t} ∫ e^t e^{2t} dt = e^{−t} ∫ e^{3t} dt = e^{−t} [ (1/3) e^{3t} + c ] = (1/3) e^{2t} + c e^{−t}.
Can you discuss the behavior of this solution, as t → ±∞?
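A quick numerical check of this general solution, for a few values of the arbitrary constant c (a sketch; the derivative is a central difference):

```python
import math

def y(t, c):
    # General solution of y' + y = e^{2t} derived above
    return math.exp(2 * t) / 3 + c * math.exp(-t)

def residual(t, c, h=1e-6):
    dy = (y(t + h, c) - y(t - h, c)) / (2 * h)   # numerical y'
    return dy + y(t, c) - math.exp(2 * t)        # y' + y - e^{2t}, should be ~0

for c in [-1.0, 0.0, 2.0]:
    for t in [-1.0, 0.0, 1.0]:
        assert abs(residual(t, c)) < 1e-5
```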
Example 3. Solve the IVP
(1 + t^2)y' + 4ty = (1 + t^2)^{−2},  y(0) = 1.
Answer. First, let's rewrite the equation in the normal form
y' + (4t/(1 + t^2)) y = (1 + t^2)^{−3},
so
p(t) = 4t/(1 + t^2),  g(t) = (1 + t^2)^{−3}.
Then
µ(t) = exp(∫ p(t) dt) = exp(∫ 4t/(1 + t^2) dt) = exp(∫ 2/(1 + t^2) d(t^2)) = exp(2 ln(1 + t^2)) = exp(ln(1 + t^2)^2) = (1 + t^2)^2.
Then
y = [∫ (1 + t^2)^2 (1 + t^2)^{−3} dt] / (1 + t^2)^2 = [∫ (1 + t^2)^{−1} dt] / (1 + t^2)^2 = (arctan t + c)/(1 + t^2)^2.
By the IC y(0) = 1:
y(0) = (0 + c)/1 = c = 1,  ⇒  y(t) = (arctan t + 1)/(1 + t^2)^2.
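The same kind of numerical check confirms this answer (a sketch):

```python
import math

def y(t):
    # y(t) = (arctan t + 1) / (1 + t^2)^2
    return (math.atan(t) + 1) / (1 + t**2) ** 2

def dydt(t, h=1e-6):
    return (y(t + h) - y(t - h)) / (2 * h)

assert abs(y(0) - 1) < 1e-12    # initial condition
for t in [-2.0, -0.5, 0.5, 2.0]:
    lhs = (1 + t**2) * dydt(t) + 4 * t * y(t)
    assert abs(lhs - (1 + t**2) ** (-2)) < 1e-6
```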
so
y(t) = e^{t/3} ∫ e^{−t/3} e^{−t} dt = e^{t/3} ∫ e^{−4t/3} dt = e^{t/3} [ −(3/4) e^{−4t/3} + c ].
Plug in the IC y(0) = a to find c:
y(0) = e^0 (−3/4 + c) = a,  ⇒  c = a + 3/4,
so
y(t) = e^{t/3} [ −(3/4) e^{−4t/3} + a + 3/4 ] = −(3/4) e^{−t} + (a + 3/4) e^{t/3}.
To see the behavior of the solution, note that it contains two terms. The first term, −(3/4)e^{−t}, goes to 0 as t grows. The second term, e^{t/3}, goes to ∞ as t grows, but it is multiplied by the constant a + 3/4. So we have:
• If a + 3/4 = 0, i.e., a = −3/4, then y → 0 as t → ∞;
• If a + 3/4 > 0, i.e., a > −3/4, then y → ∞ as t → ∞;
• If a + 3/4 < 0, i.e., a < −3/4, then y → −∞ as t → ∞.
On the other hand, as t → −∞ the term −(3/4)e^{−t} blows up to −∞ and dominates. Therefore y → −∞ as t → −∞, for any value of a.
See the plot below:
Consider next the equation y' + (2/t)y = 4t, with y(1) = 2. So p(t) = 2/t and g(t) = 4t. We have
µ(t) = exp(∫ (2/t) dt) = exp(2 ln t) = t^2,
and
y(t) = t^{−2} ∫ 4t · t^2 dt = t^{−2}(t^4 + c) = t^2 + c t^{−2}.
We see that the solution has two terms, t^2 and c t^{−2}. With different initial values we get different values of c, and the solutions can be very different.
By our IC y(1) = 2, we get
y(1) = 1 + c = 2,  ⇒  c = 1,  so  y(t) = t^2 + t^{−2}.
Bernoulli Equations*. This is an example of solving a nonlinear first order ODE by a variable change that turns it into a linear equation. Consider the Bernoulli differential equations
y'(t) + p(t)y = q(t)y^n,
where n is an integer. If n = 0 or n = 1, the equation is linear and we can solve it by the method of integrating factors. Otherwise, the equation is nonlinear. Such equations arise in applications such as population models and models of one-dimensional motion influenced by drag forces.
We now make a variable change: let v(t) = y(t)^m, for some m to be determined. Then y = v^{1/m}, and by the chain rule we have
y' = (1/m) v^{1/m − 1} v'.
Putting these into the differential equation, we get
(1/m) v^{1/m − 1} v' + p(t) v^{1/m} = q(t) v^{n/m}.
Multiplying both sides by m v^{1 − 1/m}, we get
v' + m p(t) v = m q(t) v^{(n − 1)/m + 1}.
This may look more complicated than the original equation, but remember that we can choose the value of m in a smart way. Taking m = 1 − n makes the exponent on the right-hand side zero, and we get
v' + (1 − n) p(t) v = (1 − n) q(t),
which is linear, and we can solve it! Once we have v(t), we can easily recover y(t).
Example. Solve the IVP
y' + y = ty^3,  y(0) = 2.
Answer. We make the variable change v = y^{1−n} = y^{1−3} = y^{−2}. By the derivation we did, we have
v' − 2v = −2t,  v(0) = y(0)^{−2} = 1/4.
The general solution of this linear equation is
v(t) = C e^{2t} + (t + 1/2).
Imposing the initial condition, we get
C = −1/4,  v(t) = −(1/4) e^{2t} + t + 1/2.
Finally, we go back to y(t) = v^{−1/2} and get
y(t) = [ −(1/4) e^{2t} + t + 1/2 ]^{−1/2}.
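One can check the recovered y(t) against the original Bernoulli equation numerically (a sketch; the solution is real where v(t) > 0, e.g. near t = 0):

```python
import math

def v(t):
    # Solution of the linear equation v' - 2v = -2t, v(0) = 1/4
    return -0.25 * math.exp(2 * t) + t + 0.5

def y(t):
    # Recover y from the substitution v = y^{-2}
    return v(t) ** (-0.5)

def dydt(t, h=1e-6):
    return (y(t + h) - y(t - h)) / (2 * h)

assert abs(y(0) - 2) < 1e-9     # IC y(0) = 2
for t in [0.1, 0.3, 0.5]:
    # original equation y' + y = t y^3
    assert abs(dydt(t) + y(t) - t * y(t) ** 3) < 1e-4
```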
Remarks on the definition of linear and nonlinear equations:
(1) For any linear equation
y ′ + p(t)y = g(t)
if we make a non-linear variable change, say y = f (v) for some nonlinear function f (for
example, f (v) = v 2 ), we turn the linear equation for y into a nonlinear equation for v.
(2) However, the other way around is not always possible. Only for some special nonlinear
equations, a suitable (carefully chosen) variable change could turn it into a linear equation in
the new variable. Therefore, such special equations are worth mentioning.
Example 1. Consider
dy/dx = sin x / (1 − y^2).
We can separate the variables:
∫ (1 − y^2) dy = ∫ sin x dx,  ⇒  y − (1/3)y^3 = −cos x + c.
If one has the IC y(π) = 2, then
2 − (1/3)·2^3 = −cos π + c,  ⇒  c = −5/3,
so the solution y(x) is implicitly given as
y − (1/3)y^3 + cos x + 5/3 = 0.
Example 2. Solve y' = (3x^2 + 4x + 2)/(2(y − 1)), y(0) = −1. Separating the variables,
∫ 2(y − 1) dy = ∫ (3x^2 + 4x + 2) dx,  ⇒  (y − 1)^2 = x^3 + 2x^2 + 2x + c.
Setting in the IC y(0) = −1, i.e., y = −1 when x = 0, we get
(−1 − 1)^2 = 0 + c,  ⇒  c = 4,
so (y − 1)^2 = x^3 + 2x^2 + 2x + 4, i.e., y = 1 ± √(x^3 + 2x^2 + 2x + 4).
To determine which sign is the correct one, we check again by the initial condition:
y(0) = 1 ± √4 = 1 ± 2, and we must have y(0) = −1.
We see we must choose the '−' sign. The solution in explicit form is
y(x) = 1 − √(x^3 + 2x^2 + 2x + 4).
This is defined where
x^3 + 2x^2 + 2x + 4 ≥ 0,  ⇒  x^2(x + 2) + 2(x + 2) = (x + 2)(x^2 + 2) ≥ 0,  ⇒  x ≥ −2.
Example 3. Solve y' = 3x^2 + 3x^2 y^2, y(0) = 0, and find the interval where the solution is defined.
Answer. Write y' = 3x^2(1 + y^2) and separate the variables:
∫ dy/(1 + y^2) = ∫ 3x^2 dx,  ⇒  arctan y = x^3 + c.
Setting in the IC:
arctan 0 = 0 + c,  ⇒  c = 0,
we get the solution
arctan y = x^3,  ⇒  y = tan(x^3).
Since the initial data is given at x = 0, i.e., x^3 = 0, and tan is defined on the interval (−π/2, π/2), we have
−π/2 < x^3 < π/2,  ⇒  −(π/2)^{1/3} < x < (π/2)^{1/3}.
Example 4. Solve
y' = (1 + 3x^2)/(3y^2 − 6y),  y(0) = 1,
and identify the interval where the solution is valid.
Answer. Separating the variables,
∫ (3y^2 − 6y) dy = ∫ (1 + 3x^2) dx,  ⇒  y^3 − 3y^2 = x + x^3 + c.
Setting in the IC: 1 − 3 = c, ⇒ c = −2. Then
y^3 − 3y^2 = x^3 + x − 2.
Note that the solution is given in implicit form.
To find the valid interval of this solution, we note that y' is not defined if 3y^2 − 6y = 0, i.e., when y = 0 or y = 2. These are the two so-called "bad points" where one cannot define the solution. To find the corresponding values of x, we use the solution expression:
y = 0:  x^3 + x − 2 = 0,  with real root x = 1;
y = 2:  x^3 + x − 2 = −4, i.e., x^3 + x + 2 = 0,  with real root x = −1.
Marking these two points (×) on the x-axis, the solution through x = 0 is valid on the interval −1 < x < 1.
2.3 Differences between linear and nonlinear equations; exis-
tence and uniqueness of solutions
We discuss here some fundamental differences between linear and nonlinear equations regard-
ing existence and uniqueness of solutions.
For a linear equation
y' + p(t)y = g(t),  y(t̄) = ȳ,
if we require the solution y(t) to be a differentiable function, then we have the following existence and uniqueness theorem.
Theorem. If p(t) and g(t) are continuous and bounded on an open interval containing t̄, then the IVP has a unique solution on that interval.
A brief proof. The existence of a solution is clear, since we can write it out using the method of integrating factors. For uniqueness, let y1 and y2 be two solutions of the problem, i.e.,
y1' + p(t)y1 = g(t), y1(t̄) = ȳ,   y2' + p(t)y2 = g(t), y2(t̄) = ȳ.
Subtracting, the difference e(t) = y1(t) − y2(t) satisfies the homogeneous equation e' + p(t)e = 0 with e(t̄) = 0; solving it (by an integrating factor) gives e(t) ≡ 0, proving y1(t) = y2(t), which implies uniqueness.
One can use this Theorem to identify intervals where the solution of the IVP could be
defined.
Example 1. Find the largest interval where the solution can be defined for the following
problems.
(A). ty ′ + y = t3 , y(−1) = 3.
(B). ty ′ + y = t3 , y(1) = −3.
(C). (t − 3)y ′ + (ln t)y = 2t, y(1) = 2
(D). y ′ + (tan t)y = sin t, y(π) = 100.
Remark: The conditions in the Theorem guarantee uniqueness. If the conditions fail, uniqueness might still hold, but it is not guaranteed. Also, in many cases p(t) and g(t) will be discontinuous functions. One can relax the restriction on the solution and require y(t) to be only continuous; then the conditions reduce to: p, g are integrable functions.
Example (With discontinuous coefficient functions). Consider
y' − y = g(t),  y(0) = 0,  where  g(t) = 1 for 0 ≤ t < 1,  and  g(t) = −2 for 1 ≤ t ≤ 2.
Answer. The function g(t) has a jump at t = 1. We can solve the equation using 2 steps.
(1) For 0 ≤ t < 1, we have the IVP
y' − y = 1,  y(0) = 0,
whose solution is y(t) = e^t − 1, valid for 0 ≤ t < 1.
(2) For 1 ≤ t ≤ 2, we have the IVP
y' − y = −2,  y(1) = e − 1.
Note that the initial condition is given at t = 1, and we use the solution in step 1 and evaluate
it at t = 1.
This can be easily solved, and we get
y(t) = (1 − 3/e)et + 2, 1 ≤ t ≤ 2.
We see that y(t) is continuous at t = 1, but the graph has a kink, since y ′ (t) is discontinuous
at t = 1.
In general, ODEs with discontinuous coefficients have their own sets of Theorems.
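A small numerical check (with illustrative names `y1`, `y2` for the two pieces) makes the continuity and the kink at t = 1 concrete; the derivative jumps by exactly the jump in g:

```python
import math

def y1(t):
    # piece on [0, 1): solves y' - y = 1, y(0) = 0
    return math.exp(t) - 1

def y2(t):
    # piece on [1, 2]: solves y' - y = -2, y(1) = e - 1
    return (1 - 3 / math.e) * math.exp(t) + 2

# continuous at t = 1 ...
assert abs(y1(1) - y2(1)) < 1e-12
# ... but from the equations, y'(1-) = y + 1 and y'(1+) = y - 2,
# so the derivative jumps by 3, the size of the jump in g(t)
assert abs((y1(1) + 1) - (y2(1) - 2) - 3) < 1e-12
```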
For a non-linear equation
y' = f(t, y),  y(t̄) = ȳ,
we have a much weaker theorem, for differentiable functions y(t).
Theorem. If f(t, y) and ∂f/∂y(t, y) are continuous and bounded on a rectangle (α < t < β, a < y < b) containing (t̄, ȳ), then there exists an open interval around t̄, contained in (α, β), where the solution exists and is unique.
We note that the statement of this theorem is not as strong as the one for linear equations. The proof uses the Picard iteration and is much more complicated; it is outlined in the textbook, and you may read it if you are curious.
Some remarks*:
(1). If f(t, y) and ∂f/∂y(t, y) are continuous at (t0, y0), then in a small neighborhood around the point (t0, y0) one can linearize f(t, y), and f(t, y) ≈ a + by will be a good approximation. Then, by the Theorem for linear equations, the solution exists and is unique. This is a rather standard technique for nonlinear problems: studying the locally linearized problem.
(2). A rough (not rigorous) argument to see how the bound on ∂f/∂y helps uniqueness: let y1, y2 be two solutions. Integrating each equation from t̄ to t and subtracting,
y1(t) − y2(t) = ∫_{t̄}^{t} [ f(s, y1(s)) − f(s, y2(s)) ] ds.
The bound on ∂f/∂y implies that there exists a constant M such that
|f(t, y1) − f(t, y2)| ≤ M |y1 − y2|.
We now have
|y1(t) − y2(t)| ≤ ∫_{t̄}^{t} |f(s, y1(s)) − f(s, y2(s))| ds ≤ M ∫_{t̄}^{t} |y1(s) − y2(s)| ds.
Write E(t) = |y1(t) − y2(t)| and let w(t) be the solution of w' = Mw with w(t̄) = 0. By comparison we have E(t) ≤ w(t). But w(t) ≡ 0, therefore E(t) ≡ 0, leading to uniqueness.
Example. Consider the equation y' = −t/y with an initial condition on the line y = 0.
We first note that at y = 0, which is the initial value of y, we have y' = f(t, y) → ∞. So the conditions of the Theorem are not satisfied, and we expect something to go wrong.
Solving the equation as a separable equation, we get
∫ y dy = −∫ t dt,  ⇒  y^2 + t^2 = c,
so the solution curves are circles around the origin, which cannot be continued as graphs of functions past the points where they meet y = 0.

Example. Consider
y' = y^{1/3},  y(0) = 0.
Besides the trivial solution y(t) ≡ 0, consider the functions
yc(t) = [2(t − c)/3]^{3/2} for t ≥ c,  yc(t) = 0 for t < c,
where c ≥ 0 is any constant. We can easily check that yc(t) is a solution to the IVP, for any c ≥ 0. Indeed, we have, for t ≥ c,
yc'(t) = (3/2)[2(t − c)/3]^{1/2} · (2/3) = [2(t − c)/3]^{1/2} = yc^{1/3},
so the IVP has infinitely many solutions: uniqueness fails.
This is no surprise: since f(t, y) = y^{1/3}, the derivative fy = (1/3)y^{−2/3} is not bounded at y = 0, which is where the initial condition is given.
y′ = y2, y(0) = 1.
Note that f(t, y) = y^2 is defined for all t and y. But, due to the nonlinearity of f, the solution cannot be defined for all t.
This equation can be easily solved as a separable equation:
∫ y^{−2} dy = ∫ dt,  ⇒  −1/y = t + c,  ⇒  y(t) = −1/(t + c).
By the IC y(0) = 1, we get 1 = −1/(0 + c), so c = −1, and
y(t) = −1/(t − 1).
We see that the solution blows up as t → 1, and cannot be defined beyond that point.
This kind of blow-up phenomenon is well-known for nonlinear equations.
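A quick numerical illustration of the blow-up (a sketch, with a central-difference derivative):

```python
import math

def y(t):
    # solution of y' = y^2, y(0) = 1, found above
    return -1.0 / (t - 1)

def dydt(t, h=1e-7):
    return (y(t + h) - y(t - h)) / (2 * h)

assert y(0) == 1.0
for t in [0.0, 0.5, 0.9]:
    assert abs(dydt(t) - y(t) ** 2) < 1e-3   # y' = y^2 holds
assert y(0.999) > 999                        # blow-up as t -> 1^-
```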
Consider the model Q'(t) = rQ, whose solution is
Q(t) = Q0 e^{rt}.
Two concepts:
• For r > 0, we define the doubling time TD as the time such that Q(TD) = 2Q0:
Q(TD) = Q0 e^{rTD} = 2Q0,  ⇒  e^{rTD} = 2,  ⇒  rTD = ln 2,  ⇒  TD = ln 2 / r.
• For r < 0, we define the half life (or half time) TH as the time such that Q(TH) = (1/2)Q0:
Q(TH) = Q0 e^{rTH} = (1/2)Q0,  ⇒  e^{rTH} = 1/2,  ⇒  rTH = ln(1/2) = −ln 2,  ⇒  TH = ln 2 / (−r).
Note here that TH > 0 since r < 0.
NB! TD and TH do not depend on Q0; they depend only on r.
Example 2. A radioactive material is reduced to 1/3 of its original amount after 10 years. Find its half life.
Answer. Model: dQ/dt = rQ, where the rate r is unknown. We have the solution Q(t) = Q0 e^{rt}, so
Q(10) = (1/3)Q0:  Q0 e^{10r} = (1/3)Q0,  ⇒  r = −ln 3 / 10.
To find the half life, we only need the rate r:
TH = −ln 2 / r = 10 ln 2 / ln 3.
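The computation in Python (a sketch):

```python
import math

r = -math.log(3) / 10          # rate from Q(10) = Q0 / 3
TH = -math.log(2) / r          # half life: T_H = ln 2 / (-r)

assert abs(TH - 10 * math.log(2) / math.log(3)) < 1e-12
# sanity check: Q(TH) = Q0 / 2, for any Q0 (T_H does not depend on it)
Q0 = 7.0
assert abs(Q0 * math.exp(r * TH) - Q0 / 2) < 1e-12
```

This gives TH ≈ 6.31 years.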
Example 3. Start an IRA account at age 25. Suppose you deposit $2000 at the beginning, and $2000 each year after. The interest rate is 8% annually, but assume it is compounded continuously. Find the total amount after 40 years.
Answer. Set up the model: let S(t) be the amount of money after t years:
dS/dt = 0.08S + 2000,  S(0) = 2000.
This is a first order linear equation. Solve it by the integrating factor e^{−0.08t}:
S(t) = e^{0.08t} ∫ 2000 e^{−0.08t} dt = e^{0.08t} [ 2000 e^{−0.08t}/(−0.08) + c ] = −25000 + c e^{0.08t}.
By the IC,
S(0) = −25000 + c = 2000,  ⇒  c = 27000,
we get
S(t) = 27000 e^{0.08t} − 25000.
When t = 40, we have
S(40) = 27000 e^{3.2} − 25000 ≈ 637,378.
Compare this to the total amount invested: 2000 + 2000 × 40 = 82,000.
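The arithmetic in Python (a sketch):

```python
import math

def S(t):
    # S(t) = 27000 e^{0.08 t} - 25000
    return 27000 * math.exp(0.08 * t) - 25000

assert S(0) == 2000                  # initial deposit
assert abs(S(40) - 637378.3) < 1     # total after 40 years, ~ $637,378
assert 2000 + 2000 * 40 == 82000     # total amount invested
```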
Example 4. A home-buyer can pay $800 per month toward a mortgage. The interest rate is r annually (but compounded continuously), and the mortgage term is 20 years. Determine the maximum amount this buyer can afford to borrow. Calculate this amount for r = 5% and r = 9%, and observe the difference.
Answer. Set up the model: let Q(t) be the amount borrowed (principal) still owed after t years:
dQ/dt = rQ(t) − 800 × 12.
The terminal condition Q(20) = 0 is given; we must find Q(0).
Solve the differential equation:
Q' − rQ = −9600,  µ = e^{−rt},
Q(t) = e^{rt} ∫ (−9600) e^{−rt} dt = e^{rt} [ (−9600) e^{−rt}/(−r) + c ] = 9600/r + c e^{rt}.
By the terminal condition
Q(20) = 9600/r + c e^{20r} = 0,  ⇒  c = −9600/(r e^{20r}),
so we get
Q(t) = 9600/r − (9600/(r e^{20r})) e^{rt}.
Now we can get the initial amount
Q(0) = 9600/r − 9600/(r e^{20r}) = (9600/r)(1 − e^{−20r}).
If r = 5%, then
Q(0) = (9600/0.05)(1 − e^{−1}) ≈ $121,367.
If r = 9%, then
Q(0) = (9600/0.09)(1 − e^{−1.8}) ≈ $89,034.
We observe that with a higher interest rate, one can borrow less.
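The two amounts in Python (`max_loan` is an illustrative helper name):

```python
import math

def max_loan(r, annual_payment=9600, term=20):
    # Q(0) = (annual_payment / r) * (1 - e^{-r * term})
    return annual_payment / r * (1 - math.exp(-r * term))

assert int(max_loan(0.05)) == 121367
assert int(max_loan(0.09)) == 89034
assert max_loan(0.05) > max_loan(0.09)   # higher rate => can borrow less
```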
A few words on the compounding of interest. Let r be the annual interest rate, and Q0 the amount deposited initially. If the interest is compounded continuously, we get exponential growth: after t years we have
Q(t) = Q0 e^{rt}.
If instead the interest is compounded n times a year, after t years we have Qn(t) = Q0 (1 + r/n)^{nt}; for example, quarterly compounding gives
Q4(t) = Q0 (1 + r/4)^{4t}.
One can show that
Q(t) > Q365(t) > Q12(t) > Q4(t).
Indeed, consider the function (taking Q0 = 1)
Qn(t) = (1 + r/n)^{nt}.
Write it out by the binomial formula (Taylor expansion):
Qn(t) = 1 + nt(r/n) + (1/2)nt(nt − 1)(r/n)^2 + (1/6)nt(nt − 1)(nt − 2)(r/n)^3 + ···
      = 1 + rt + (1/2)·((nt − 1)/nt)·(rt)^2 + (1/6)·((nt − 1)/nt)·((nt − 2)/nt)·(rt)^3 + ···
The Taylor expansion for the exponential function is
Q(t) = 1 + rt + (1/2)(rt)^2 + (1/6)(rt)^3 + ···
Comparing term by term, we have
Q(t) > Qn(t),   Qn(t) > Qm(t) for n > m,   lim_{n→+∞} Qn(t) = Q(t).
(2). When t → ∞, meaning after a long time, what is the limit amount QL ?
By the IC,
Q(0) = 25 + c = Q0,  ⇒  c = Q0 − 25,
we get
Q(t) = 25 + (Q0 − 25) e^{−(r/100)t}.
(2). As t → ∞, the exponential term goes to 0, and we have QL = lim_{t→+∞} Q(t) = 25 (lb).
We can also observe this intuitively: as time goes on, the concentration of salt in the tank must approach the concentration of salt in the inflow mixture, which is 1/4 lb/gal. Then the amount of salt in the tank approaches 1/4 × 100 = 25 lb as t → +∞.
Example 6. A tank contains 50 lb of salt dissolved in 100 gal of water. The tank capacity is 400 gal. From t = 0, brine containing 1/4 lb of salt per gallon enters at a rate of 4 gal/min, and the well-mixed mixture drains at 2 gal/min. Find:
(1) the time t when the tank overflows;
(2) the amount of salt before overflow;
(3) the concentration of salt at the overflow time.
Answer. (1). Since the inflow rate 4 gal/min is larger than the outflow rate 2 gal/min, the tank will be filled up at
tf = (400 − 100)/(4 − 2) = 150 min.
(2). Let Q(t) be the amount of salt at t min; the volume of mixture at that time is 100 + 2t gal.
In-rate: 1/4 lb/gal × 4 gal/min = 1 lb/min.
Out-rate: 2 gal/min × Q(t)/(100 + 2t) lb/gal = Q(t)/(50 + t) lb/min.
So
Q'(t) = 1 − Q(t)/(50 + t),  i.e.,  Q' + Q/(50 + t) = 1,  Q(0) = 50.
Then
µ = exp(∫ dt/(50 + t)) = exp(ln(50 + t)) = 50 + t,
Q(t) = (1/(50 + t)) ∫ (50 + t) dt = (1/(50 + t)) (50t + t^2/2 + c).
By the IC:
Q(0) = c/50 = 50,  ⇒  c = 2500,
and we get
Q(t) = (50t + t^2/2 + 2500)/(50 + t).
(3). The concentration of salt at the overflow time t = 150 is
Q(150)/400 = (50·150 + 150^2/2 + 2500)/(400(50 + 150)) = 17/64 lb/gal.
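Checking (1) and (3) in Python (a sketch):

```python
def Q(t):
    # amount of salt (lb) after t minutes, from the formula above
    return (50 * t + t**2 / 2 + 2500) / (50 + t)

assert Q(0) == 50                            # initial amount of salt
tf = (400 - 100) / (4 - 2)
assert tf == 150                             # overflow time (min)
assert abs(Q(tf) / 400 - 17 / 64) < 1e-12    # concentration at overflow
```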
Model IV: Air resistance.
Example 7. A ball with mass 0.5 kg is thrown upward with initial velocity 10 m/sec from the roof of a building 30 meters high. Assume the air resistance is |v|/20. Find the maximum height above ground that the ball reaches.
Answer. Let S(t) be the position (m) of the ball at time t sec. Then the velocity is v(t) = dS/dt and the acceleration is a = dv/dt. Take upward as the positive direction. By Newton's law,
F = ma = −mg − v/20,  ⇒  dv/dt = a = −g − v/(20m).
Here g = 9.8 is the gravitational acceleration and m = 0.5 is the mass. We have an equation for v:
dv/dt = −(1/10)v − 9.8 = −0.1(v + 98),
so
∫ dv/(v + 98) = ∫ (−0.1) dt,  ⇒  ln|v + 98| = −0.1t + c,
which gives
v + 98 = c̄ e^{−0.1t},  ⇒  v = −98 + c̄ e^{−0.1t}.
By the IC:
v(0) = −98 + c̄ = 10,  ⇒  c̄ = 108,  ⇒  v = −98 + 108 e^{−0.1t}.
To find the position S, we use S' = v and integrate:
S(t) = ∫ v(t) dt = ∫ (−98 + 108 e^{−0.1t}) dt = −98t − 1080 e^{−0.1t} + c.
By the IC for S: S(0) = −1080 + c = 30, ⇒ c = 1110.
At the maximum height, we have v = 0. Let's find out the time T when the max height is reached.
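Setting v(T) = 0 gives e^{−0.1T} = 98/108, i.e. T = 10 ln(108/98); plugging T into S gives the maximum height. A numerical sketch (assuming the constant in S was fixed by S(0) = 30):

```python
import math

def v(t):
    return -98 + 108 * math.exp(-0.1 * t)

def S(t):
    # S(0) = 30 fixes the constant: -1080 + c = 30, so c = 1110
    return -98 * t - 1080 * math.exp(-0.1 * t) + 1110

T = 10 * math.log(108 / 98)        # time of maximum height
assert abs(v(T)) < 1e-9            # velocity vanishes at T
assert abs(S(0) - 30) < 1e-12      # starts on the roof
assert abs(S(T) - 34.78) < 0.01    # max height about 34.78 m above ground
```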
• Find the speed when the ball hits the ground.
Solution: compute |v(tH)|, where tH is the time when the ball hits the ground.
• Find the total distance traveled by the ball when it hits the ground.
Solution: add up twice the max height SM and the height of the building.
Example 8. (Nonlinear air resistance). A rocket sled with initial speed 100 m/s is slowed
by a channel of water. We assume that the acceleration satisfies a = −0.01v 2 where v is the
velocity.
(1) Find the velocity v(t) at any time t > 0.
(2) Find the distance traveled s(t) at any time t > 0.
(3) Find the distance traveled when the speed is reduced to 10 m/s.
Why? Because if f(y0) = 0, then y(t) = y0 is a constant solution. It is called an equilibrium.
Question: is an equilibrium stable or unstable?
(Plots: left, the graph of f(y) with its sign marked by + and −; right, the corresponding solutions y(t).)
Example 2. For the equation y ′ = f (y) where f (y) is given in the following plot:
(Plot of f(y).)
(C). Sketch the solutions in the t–y plane, and describe the behavior of y as t → ∞ (as it depends on the initial value y(0)).
Answer. (A). There are three critical points: y1 = 1, y2 = 3, y3 = 5.
(B). To see the stability, we add arrows on the y-axis:
(Plot of f(y) with + and − signs and arrows added on the y-axis.)
(Sketch of solutions y(t) for various initial values.)
• if y(t) > 5, then y(t) → 5.
Example 3. For y' = y^2, we have only one critical point, y1 = 0. For y < 0 we have y' > 0, and for y > 0 we also have y' > 0, so the solution is increasing on both intervals. On the interval y < 0, the solution approaches y = 0 as t grows, so that side is stable. But on the interval y > 0, the solution grows and leaves y = 0, so that side is unstable. This type of critical point is called semi-stable. It happens when f(y) = 0 has a double root.
(Plot of f(y) = y^2.)
(Sketch of solutions y(t) near the semi-stable critical point y = 0.)
Model 1: exponential growth, y' = ry with y(0) = y0, whose solution is
y(t) = y0 e^{rt}.
If initially there is no life, i.e., y0 = 0, then it remains that way and y(t) ≡ 0. Otherwise, if even a very small population exists, i.e., y0 > 0, then y(t) will grow exponentially in time.
Of course, this model is not realistic. In nature there are limited resources, which can support only a limited population.
Model 2. A more realistic model is the "logistic equation":
dy/dt = (r − ay)y,
or
dy/dt = r(1 − y/k)y,  with  k = r/a,
where
r = intrinsic growth rate,
k = environmental carrying capacity.
Critical points: y = 0 and y = k. Here y = 0 is unstable, and y = k is stable.
If 0 < y(0) < k, then y → k as t grows.
If y(0) > k, then y → k as t grows.
In summary, if y(0) > 0, then
lim_{t→+∞} y(t) = k.
t→+∞
The tricky part is the integration of the left-hand side. We apply a technique called partial fractions, and get
1/((1 − y/k)y) = k/((k − y)y) = 1/y + 1/(k − y).
Then
∫ [ 1/y + 1/(k − y) ] dy = ln|y| − ln|k − y| = ln|y/(k − y)|.
This yields
ln|y/(k − y)| = rt + c,  ⇒  y/(k − y) = C e^{rt},  where C = y0/(k − y0).
If 0 < y0 < k, then 0 < y(t) < k as well, and we can remove the absolute value signs; after some manipulation we get
y(t) = k y0 / (y0 + (k − y0) e^{−rt}).
We see that y → k as t → ∞.
On the other hand, if y0 > k, we have
y/(y − k) = C e^{rt},  C = y0/(y0 − k).
After some manipulation we get the same expression for y(t), and again y → k as t → ∞.
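Both cases can be checked at once: the explicit solution y = k y0/(y0 + (k − y0)e^{−rt}) satisfies the logistic equation and tends to k from either side. A numerical sketch with sample values r = 1/2, k = 10:

```python
import math

r, k = 0.5, 10.0

def y(t, y0):
    # explicit logistic solution
    return k * y0 / (y0 + (k - y0) * math.exp(-r * t))

def dydt(t, y0, h=1e-6):
    return (y(t + h, y0) - y(t - h, y0)) / (2 * h)

for y0 in [2.0, 15.0]:                # one case 0 < y0 < k, one y0 > k
    assert abs(y(0, y0) - y0) < 1e-9  # initial value
    for t in [0.5, 2.0]:
        # equation y' = r (1 - y/k) y
        target = r * (1 - y(t, y0) / k) * y(t, y0)
        assert abs(dydt(t, y0) - target) < 1e-5
    assert abs(y(40, y0) - k) < 1e-6  # y -> k as t grows
```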
Model 3. An even more detailed model is logistic growth with a threshold:
y'(t) = −r (1 − y/T)(1 − y/K) y,  r > 0,  0 < T < K.
We see that there are 3 critical points: y = 0, y = T, y = K, where y = T is unstable, and y = 0, y = K are stable.
Let y(0) = y0 be the initial population. We discuss the asymptotic behavior as t → +∞:
• If 0 < y0 < T, then y → 0 (extinction);
• If T < y0 < K, then y → K;
• If y0 > K, then y → K.
Some populations follow this model, for example some fish populations in the ocean. If we over-fish and drive the population below the threshold, the fish will go extinct. That's too sad.
Remark*. One can also solve this equation and obtain an explicit expression, but the computation is lengthy!
For a function f(x, y) of two variables, the partial derivatives are
fx = ∂f/∂x,  fy = ∂f/∂y,
and correspondingly the higher derivatives
fxx = ∂^2 f/∂x^2,  fyy = ∂^2 f/∂y^2,
and the cross derivatives
fxy = ∂/∂y (∂f/∂x),  fyx = ∂/∂x (∂f/∂y),
which are equal when f is smooth.
Chain Rule: for a composite function f(x(t), y(t)),
df/dt = (∂f/∂x)·(dx/dt) + (∂f/∂y)·(dy/dt) = fx x' + fy y'.
Example 1. Let f(x, y) = x^2 y^2 + e^x, with x(t) = t^2 and y(t) = e^t, and consider the composite function f(x(t), y(t)). Compute df/dt.
Special case: if y = y(x), then the composite function f(x, y(x)) follows this form of the Chain Rule:
df/dx = (∂f/∂x) + (∂f/∂y)·(dy/dx) = fx + fy y'(x).
Example. Consider the equation
6x + e^x y^2 + 2e^x y y' = 0.
We see that the equation is NOT linear. It is NOT separable either. None of the methods we know can solve it.
However, define the function
ψ(x, y) = 3x^2 + e^x y^2.
We notice that
ψx = 6x + e^x y^2,  ψy = 2e^x y,
and the equation can be written as
ψx + ψy y' = 0.
Since y = y(x), we apply the Chain Rule to the composite function ψ(x, y(x)) and get
dψ/dx = ψx + ψy y'(x),
which is the left-hand side of the equation. By the differential equation, we now have
dψ/dx = 0,  ⇒  ψ(x, y) = C,
where C is an arbitrary constant, to be determined by the initial condition. We have found the solution in an implicit form:
3x^2 + e^x y^2 = C.
In this example, we are even able to write out the solution in explicit form by algebraic manipulation:
y^2 = e^{−x}(C − 3x^2),  ⇒  y = ±√(e^{−x}(C − 3x^2)).
Here, the choice of the + or − sign should be determined by the initial condition.
How to solve an exact equation? We need to find the function ψ; then we get the implicit solution
ψ(x, y) = C.
Then
My = 3,  Nx = 1,  so  My ≠ Nx,
and the equation is not exact.
Remark 1. A separable equation
y' = f(x)/g(y)
can be rewritten as
f(x) − g(y)y' = 0.
Now we check whether this equation is exact. Clearly, we have fy = 0 and gx = 0, so it is exact. We may conclude that every separable equation can be rewritten as an exact equation.
Remark 2. However, whether an equation is exact depends on the form in which it is written. Consider the separable equation in Remark 1, and rewrite it now as
f(x)/g(y) − y' = 0.
So now we have
M(x, y) = f(x)/g(y),  N(x, y) = −1.
Since Nx = 0 and My ≠ 0 in general, the equation is not exact.
So, be careful: when you say an equation is exact, you must specify the form in which you present the equation.
Example 1. Consider
(2x + y) + (x + 2y)y' = 0.
(1) Is it exact? (2) If yes, find the solution with the initial condition y(1) = 1.
Answer. (1) Here M = 2x + y and N = x + 2y, so My = 1 = Nx: the equation is exact.
(2) We seek ψ with
ψx = 2x + y,  ψy = x + 2y.  (A)
Therefore
ψ = x^2 + xy + y^2,
and the implicit solution is
x^2 + xy + y^2 = C.
Finally, we determine the constant C by initial condition. Plug in x = 1, y = 1, we get
C = 3, so the implicit solution is
x2 + xy + y 2 = 3.
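A numerical check of the exactness condition and of ψ, using central differences (a sketch):

```python
def M(x, y): return 2 * x + y
def N(x, y): return x + 2 * y
def psi(x, y): return x**2 + x * y + y**2

h = 1e-6
for (x, y) in [(1.0, 1.0), (0.5, -2.0), (3.0, 0.2)]:
    # exactness: M_y = N_x (both equal 1 here)
    My = (M(x, y + h) - M(x, y - h)) / (2 * h)
    Nx = (N(x + h, y) - N(x - h, y)) / (2 * h)
    assert abs(My - Nx) < 1e-8
    # psi_x = M and psi_y = N
    assert abs((psi(x + h, y) - psi(x - h, y)) / (2 * h) - M(x, y)) < 1e-6
    assert abs((psi(x, y + h) - psi(x, y - h)) / (2 * h) - N(x, y)) < 1e-6

assert psi(1.0, 1.0) == 3    # the IC picks the level set psi = 3
```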
Remark. This procedure is rather lengthy and can easily cause mistakes. An alternative (shorter) method is the following. We integrate the two equations in (A) with respect to the corresponding variable: the first equation in x, the second in y. We do not keep integration constants; we only collect the terms in the anti-derivatives. This gives two expressions for ψ:
ψ = ∫ (2x + y) dx = x^2 + xy,   ψ = ∫ (x + 2y) dy = xy + y^2.
Now we comb through the two expressions and collect all the distinct terms. We get
ψ = x^2 + xy + y^2.
Example 2. Consider the equation
(xy^2 + bx^2 y) + (x^3 + x^2 y)y' = 0.
(1) Find the value of b such that the equation is exact. (2) Solve it with that value of b.
Answer. (1). Here M = xy^2 + bx^2 y and N = x^3 + x^2 y, so
My = 2xy + bx^2,  Nx = 3x^2 + 2xy.
We see that we must have b = 3 to ensure My = Nx, which makes the equation exact.
(2). We now set b = 3. To solve the equation, we need to find the function ψ. We have
ψx = xy^2 + 3x^2 y,  ψy = x^3 + x^2 y.
Integrating the first equation in x gives ψ = (1/2)x^2 y^2 + x^3 y + h(y) for some function h(y); then
ψy = x^2 y + x^3 + h'(y) = N = x^3 + x^2 y,
so h'(y) = 0 and we may take h = 0. The implicit solution is
(1/2)x^2 y^2 + x^3 y = C.
(1). The exactness of an equation is up to manipulation. Consider an exact equation
M (x, y) + N (x, y)y ′ = 0 where My = Nx . If we multiply the equation by some function f (x, y)
on both sides, we get
f (x, y)M (x, y) + f (x, y)N (x, y)y ′ = 0
which in general is not exact for an arbitrary choice of f .
(2). Conversely, the other direction is sometimes possible. Consider an equation M̂(x, y) + N̂(x, y)y' = 0, which is not exact as written. If the conditions are favorable, it might be possible to find some function µ(x, y) such that
µ(x, y)M̂(x, y) + µ(x, y)N̂(x, y)y' = 0
becomes exact. If this is true, then µ(x, y) is called an integrating factor.
(3). Consider now a separable equation y' = f(x)/g(y). Written as
f(x)/g(y) − y' = 0,
the equation is not exact. But if we multiply both sides by g(y), we get
f(x) − g(y)y' = 0,
which is exact. This shows that any separable equation can be written as an exact equation.
Chapter 3
Second Order Linear Differential Equations
In this chapter we study linear 2nd order ODEs. The general form of these equations is
a2(t)y'' + a1(t)y' + a0(t)y = b(t),  a2(t) ≠ 0,
with initial conditions y(t0) = y0, y'(t0) = ȳ0.
If b(t) ≡ 0, we call the equation homogeneous. Otherwise, it is called non-homogeneous.
We first consider the homogeneous equation with constant coefficients:
a2 y ′′ + a1 y ′ + a0 y = 0.
Then y = c1 y1 + c2 y2, for any constants c1, c2, is also a solution. (Indeed, let y = c1 y1 + c2 y2 and plug into the equation: the terms group into c1 times the equation for y1 plus c2 times the equation for y2, and each of these is zero.)
We now look for solutions of the form y = e^{rt}. Then y' = r e^{rt} = ry and y'' = r^2 e^{rt} = r^2 y, and plugging into the equation, we have
a2 r^2 y + a1 r y + a0 y = 0.
Since y ≠ 0, we get
a2 r^2 + a1 r + a0 = 0.
This is called the characteristic equation.
Conclusion: If r is a root of the characteristic equation, then y = ert is a solution.
If there are two real and distinct roots r1 6= r2 , then the general solution is y(t) =
c1 er1 t + c2 er2 t where c1 , c2 are two arbitrary constants to be determined by initial conditions
(ICs).
Example 2. Consider y′′ − 5y′ + 6y = 0.
(a). Find the general solution.
Answer. The characteristic equation is r² − 5r + 6 = (r − 2)(r − 3) = 0, with roots r1 = 2, r2 = 3, so the general solution is y(t) = c1 e^{2t} + c2 e^{3t}.
(b). If ICs are given as: y(0) = −1, y′(0) = 5, find the solution.
Solve these two equations for c1, c2: the ICs give y(0) = c1 + c2 = −1 and y′(0) = 2c1 + 3c2 = 5. Plugging c2 = −1 − c1 into the second equation, we get 2c1 + 3(−1 − c1) = 5, so c1 = −8. Then c2 = 7. The solution is
y(t) = −8e^{2t} + 7e^{3t}.
(c). We see that y(t) = e2t · (−8 + 7et ), and both terms in the product go to infinity as t
grows. So y → +∞ as t → +∞.
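A numerical sanity check of this solution (a sketch using central finite differences; the sample point and step sizes are arbitrary choices):

```python
import math

# Numerical check that y(t) = -8 e^{2t} + 7 e^{3t} solves
# y'' - 5y' + 6y = 0 with y(0) = -1, y'(0) = 5.
def y(t):
    return -8 * math.exp(2 * t) + 7 * math.exp(3 * t)

def residual(t, h=1e-4):
    yp = (y(t + h) - y(t - h)) / (2 * h)           # y'
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h**2  # y''
    return ypp - 5 * yp + 6 * y(t)

print(y(0))                         # -1.0
print((y(1e-6) - y(-1e-6)) / 2e-6)  # ~5  (y'(0) = 5)
print(abs(residual(0.5)))           # ~0
```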
Example 3. Find the solution for 2y ′′ +y ′ −y = 0, with initial conditions y(1) = 0, y ′ (1) =
3.
Remark. Note that the initial data are given at t = 1, not t = 0. This seems to complicate the computation of the constants c1, c2. Note, however, that the final answer can be written as a function of t − 1; this suggests a better form for the general solution, which leads to simpler computations when one plugs in the initial data. We take the general solution as
y(t) = c̄1 e^{(t−1)/2} + c̄2 e^{−(t−1)}.
Note that we replaced t with t − t0, where t0 = 1 is the time when the initial conditions are given. We have
y′(t) = (1/2) c̄1 e^{(t−1)/2} − c̄2 e^{−(t−1)},
and the initial conditions give us
y(1) = c̄1 + c̄2 = 0,  y′(1) = (1/2) c̄1 − c̄2 = 3.
This leads to an easy computation: c̄1 = 2, c̄2 = −2, and the solution is the same:
y(t) = 2e^{(t−1)/2} − 2e^{−(t−1)}.
Summary of recipe:
1. Write the characteristic equation: y′′ → r², y′ → r, y → 1.
2. Find the roots r1, r2 of the characteristic equation.
3. Write the general solution, y = c1 e^{r1 t} + c2 e^{r2 t} for real distinct roots.
4. If ICs are given, determine c1, c2.
Example. Consider y′′ − 4y = 0.
(a). Find the general solution.
(b). If the initial conditions are given as y(0) = 1 and y′(0) = a, then for what values of a would y remain bounded as t → +∞?
Answer. (a). The characteristic equation is
r 2 − 4 = 0, ⇒ r1 = −2, r2 = 2.
General solution is
y(t) = c1 e−2t + c2 e2t .
(b). If y(t) remains bounded as t → ∞, then the term e^{2t} must be absent, i.e., we must have c2 = 0. This means y(t) = c1 e^{−2t}. If y(0) = 1, then c1 = 1, so y(t) = e^{−2t}. This gives y′(t) = −2e^{−2t}, which means a = y′(0) = −2.
Example 6. Find a 2nd order equation such that c1 e3t + c2 e−t is its general solution.
Answer. From the form of the general solution, we see the two roots are r1 = 3, r2 = −1.
The characteristic equation could be (r − 3)(r + 1) = 0, or this equation multiplied by any
non-zero constant. So r 2 − 2r − 3 = 0, which gives us the equation
y ′′ − 2y ′ − 3y = 0.
3.2 Solutions of Linear Homogeneous Equations; the Wron-
skian
We consider some theoretical aspects of the solutions to a general 2nd order linear equation.
Theorem (Existence and Uniqueness Theorem). Consider the initial value problem
y′′ + p(t)y′ + q(t)y = g(t),  y(t0) = y0,  y′(t0) = ȳ0.
If p(t), q(t) and g(t) are continuous and bounded on an open interval I containing t0, then there exists exactly one solution y(t) of this equation, valid on I.
Example. Consider the equation, written in standard form,
y′′ + ( t / (t(t − 3)) ) y′ − ( (t + 3) / (t(t − 3)) ) y = e^t / (t(t − 3)),
with initial data given at t0 = 1; find the largest interval where the solution is valid. Here
p(t) = t / (t(t − 3)),  q(t) = −(t + 3) / (t(t − 3)),  g(t) = e^t / (t(t − 3)).
We see that we must have t ≠ 0 and t ≠ 3. Since t0 = 1, the largest interval is I = (0, 3), or 0 < t < 3. See the figure below.
(Figure: the real line with crosses at t = 0 and t = 3; the point t0 = 1 lies inside the interval (0, 3).)
Definition. The Wronskian of two functions f, g is defined as
W(f, g)(t) ≐ f g′ − f′ g.
Remark: One way to remember this definition is as the determinant of a 2 × 2 matrix,
              | f   g  |
W(f, g)(t) =  | f′  g′ | .
• If W(f, g)(t0) ≠ 0 for some t0, then f and g are linearly independent.
• If W(f, g) ≡ 0, then f and g are typically linearly dependent (but see (d∗) below).
Example. Check the linear dependency of the following pairs of functions.
(a). f(t) = e^t, g(t) = e^{−t}.
Answer. We have
W(f, g) = e^t (−e^{−t}) − e^t e^{−t} = −2 ≠ 0,
so they are linearly independent.
(c). f(t) = t + 1, g(t) = 4t + 4.
Answer. We have
W(f, g) = (t + 1) · 4 − 1 · (4t + 4) = 0,
so they are linearly dependent. (In fact, we have g(t) = 4 · f(t).)
(d∗ ). f (t) = 2t, g(t) = |t|.
Answer. Note that g′(t) = sign(t) for t ≠ 0, where sign is the sign function. So
W(f, g) = 2t · sign(t) − 2|t| = 0
(we used t · sign(t) = |t|). So W ≡ 0. On any interval not containing 0 the two functions are linearly dependent (g = f/2 for t > 0 and g = −f/2 for t < 0); however, on an interval containing 0 they are linearly independent, so the criterion must be used with care.
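The Wronskian computations above can be cross-checked numerically, approximating the derivatives by central differences (using the pairs f = e^t, g = e^{−t} and f = t + 1, g = 4t + 4 from the examples; the sample point is arbitrary):

```python
import math

# Numerical Wronskian W(f, g) = f*g' - f'*g via central differences.
def wronskian(f, g, t, h=1e-6):
    fp = (f(t + h) - f(t - h)) / (2 * h)
    gp = (g(t + h) - g(t - h)) / (2 * h)
    return f(t) * gp - fp * g(t)

# f = e^t, g = e^{-t}: W = -2 (independent)
print(wronskian(math.exp, lambda t: math.exp(-t), 0.7))      # ~ -2.0
# f = t + 1, g = 4t + 4: W = 0 (dependent, g = 4f)
print(wronskian(lambda t: t + 1, lambda t: 4 * t + 4, 0.7))  # ~ 0.0
```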
Theorem. Suppose y1, y2 are two solutions of
y′′ + p(t)y′ + q(t)y = 0.
Then:
(I) We have either W (y1 , y2 ) ≡ 0 or W (y1 , y2 ) never zero;
(II) If W(y1, y2) ≠ 0, then y = c1 y1 + c2 y2 is the general solution; y1, y2 are said to form
a fundamental set of solutions. As a consequence, for any ICs y(t0) = y0, y′(t0) = ȳ0,
there is a unique set of (c1 , c2 ) that gives a unique solution.
c1 y1 (t0 ) + c2 y2 (t0 ) = y0
c1 y1′ (t0 ) + c2 y2′ (t0 ) = ȳ0 .
We see that there exists a unique solution for c1, c2 exactly when the determinant of the coefficient matrix is not zero. This determinant is precisely the Wronskian W(y1, y2)(t0).
Example. Show that y1(t) = √t and y2(t) = 1/t form a fundamental set of solutions for
2t² y′′ + 3t y′ − y = 0,  (t > 0).
Answer. One needs to check two things: (1) plug y1, y2 into the equation to verify that they are solutions (we skip the details); (2) compute the Wronskian W(y1, y2) and check that it is not 0. In fact, one gets W(y1, y2) = −(3/2) t^{−3/2} < 0 for t > 0.
The next Theorem, Abel's Theorem, is probably the most important one in this chapter.
Theorem (Abel's Theorem). Let y1, y2 be two solutions of
y′′ + p(t)y′ + q(t)y = 0
on an open interval I. Then
W(y1, y2)(t) = C exp( −∫ p(t) dt )
for some constant C depending on y1, y2 but not on t.
Proof. Both y1 and y2 satisfy the equation:
y1′′ + p(t)y1′ + q(t)y1 = 0,  y2′′ + p(t)y2′ + q(t)y2 = 0.
Multiply the first equation by −y2 and the second one by y1, and add them up; we get
(y1 y2′′ − y1′′ y2) + p(t)(y1 y2′ − y1′ y2) = 0.
Noting that W(y1, y2) = y1 y2′ − y1′ y2 and W′ = y1 y2′′ − y1′′ y2, we get
W′ + p(t)W = 0.
Solving this first order linear equation gives W(t) = C exp(−∫ p(t) dt), completing the proof. Note that this also proves part (I) of the previous Theorem: the Wronskian W(y1, y2) is either identically 0 (when C = 0) or never 0 (when C ≠ 0).
Example 3. Given
t² y′′ − t(t + 2)y′ + (t + 2)y = 0,
find W(y1, y2) without solving the equation.
Answer. In standard form, p(t) = −t(t + 2)/t² = −(t + 2)/t. By Abel's Theorem,
W(y1, y2) = C exp( ∫ (1 + 2/t) dt ) = C exp{t + 2 ln t} = C t² e^t.
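As a cross-check of Abel's Theorem: y1 = t and y2 = te^t turn out to solve this equation (this is worked out in section 3.4), and their Wronskian should match C t² e^t with C = 1. A numerical sketch using finite differences:

```python
import math

# Cross-check on t^2 y'' - t(t+2) y' + (t+2) y = 0:
# y1 = t and y2 = t e^t are solutions, and Abel's Theorem
# predicts W(y1, y2) = C t^2 e^t.
def y1(t):
    return t

def y2(t):
    return t * math.exp(t)

def ode_residual(y, t, h=1e-4):
    yp = (y(t + h) - y(t - h)) / (2 * h)
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    return t**2 * ypp - t * (t + 2) * yp + (t + 2) * y(t)

def wronskian(t, h=1e-6):
    y1p = (y1(t + h) - y1(t - h)) / (2 * h)
    y2p = (y2(t + h) - y2(t - h)) / (2 * h)
    return y1(t) * y2p - y1p * y2(t)

t = 1.5
print(abs(ode_residual(y1, t)), abs(ode_residual(y2, t)))  # both ~0
print(wronskian(t), t**2 * math.exp(t))                    # agree: C = 1
```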
NB! The solutions are defined on either (0, ∞) or (−∞, 0), depending on t0 .
From now on, when we say two solutions y1 , y2 of the equation, we mean two linearly
independent solutions that can form a fundamental set of solutions.
Example. Given
t y′′ + 2y′ + te^t y = 0,
find W(y1, y2) without solving the equation.
Answer. In standard form p(t) = 2/t, so by Abel's Theorem W(y1, y2) = C exp(−∫ 2/t dt) = C t^{−2}.
Example. Suppose f(t) = e^{2t} and W(f, g)(t) = 3e^{4t}; find g(t).
Answer. By definition of the Wronskian, W(f, g) = f g′ − f′ g = e^{2t} g′ − 2e^{2t} g = 3e^{4t}, so
g′ − 2g = 3e^{2t}.
This is a first order linear equation; the integrating factor e^{−2t} gives (e^{−2t} g)′ = 3, so g(t) = (3t + c)e^{2t}.
The next example shows how Abel's Theorem can be used to solve 2nd order differential equations.
Example. Consider y′′ + 2y′ + y = 0. One solution y1 = e^{−t} is known; find a second solution y2.
Answer. By Abel's Theorem, W(y1, y2) = C exp(−∫ 2 dt) = C e^{−2t},
and we can choose C = 1 and get W(y1, y2) = e^{−2t}. By the definition of the Wronskian, we have
W(y1, y2) = y1 y2′ − y1′ y2 = e^{−t} y2′ − (−e^{−t} y2) = e^{−t} (y2′ + y2).
These two computations must give the same answer, so
e^{−t} (y2′ + y2) = e^{−2t},  i.e.,  y2′ + y2 = e^{−t}.
This is a first order linear equation for y2; solving it (and dropping the homogeneous part) gives y2 = te^{−t}.
This is called the method of reduction of order. We will study it more later in chapter 3.4.
3.3 Complex Roots
We start with an example: solve y′′ + y = 0.
Answer. By inspection, we need a function such that y′′ = −y. We see that y1 = cos t and y2 = sin t both work. Since the Wronskian W(y1, y2) = cos²t + sin²t = 1 ≠ 0, these two solutions are linearly independent, and the general solution is y = c1 cos t + c2 sin t.
The characteristic equation of this example is
r² + 1 = 0,  r² = −1,  r1 = i,  r2 = −i.
The roots are complex; in fact, they form a complex-conjugate pair. We see that the imaginary part of the roots seems to give the sin and cos functions.
In general, the roots of the characteristic equation can be complex numbers. Consider the
equation
ay ′′ + by ′ + cy = 0, → ar 2 + br + c = 0.
The two roots are
r1,2 = ( −b ± √(b² − 4ac) ) / (2a).
If b² − 4ac < 0, the roots are complex, i.e., a pair of complex conjugate numbers. We write r1,2 = λ ± iµ. There are two solutions:
y1 = e^{(λ+iµ)t} = e^{λt} e^{iµt},  y2 = e^{(λ−iµ)t} = e^{λt} e^{−iµt}.
To deal with exponential functions with a pure imaginary exponent, we need Euler's Formula:
e^{iθ} = cos θ + i sin θ.
Then
y1 = eλt (cos µt + i sin µt), y2 = eλt (cos µt − i sin µt).
But these solutions are complex-valued. We want real-valued solutions! To achieve this, we
use the Principle of Superposition. If y1 , y2 are two solutions, then c1 y1 +c2 y2 is also a solution
for any constants c1 , c2 . In particular, the functions 12 (y1 + y2 ), 2i
1
(y1 − y2 ) are also solutions.
Write
1 1
˙ (y1 + y2 ) = eλt cos µt,
ỹ1 = ỹ2 =
˙ (y1 − y2 ) = eλt sin µt.
2 2i
We need to make sure that they are linearly independent. Checking the Wronskian, we get W(ỹ1, ỹ2) = µ e^{2λt} ≠ 0 (since µ ≠ 0), so they are independent, and the general solution is
y(t) = c1 e^{λt} cos µt + c2 e^{λt} sin µt = e^{λt} (c1 cos µt + c2 sin µt).
Remark: We note here that if a complex valued function is a solution, then the real
part and the imaginary part are each a solution. This is a more general result. Now let
y(t) = u(t) + iv(t) be a solution of
y′′ + p(t)y′ + q(t)y = 0.
Plugging it in and collecting real and imaginary parts, we get
(u′′ + p(t)u′ + q(t)u) + i(v′′ + p(t)v′ + q(t)v) = 0,
which implies
u′′ + p(t)u′ + q(t)u = 0,  v′′ + p(t)v′ + q(t)v = 0,
proving that u and v are both solutions.
Example 1. (Perfect Oscillation: Simple harmonic motion.) Solve the initial value prob-
lem
y′′ + 4y = 0,  y(π/6) = 0,  y′(π/6) = 1.
Answer. The characteristic equation is
r² + 4 = 0, ⇒ r = ±2i, ⇒ λ = 0, µ = 2.
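With λ = 0, µ = 2, the general solution is y = c1 cos 2t + c2 sin 2t; imposing the ICs gives y(t) = (1/2) sin(2t − π/3). A quick numerical sanity check (sample point and step sizes are arbitrary):

```python
import math

# Check y(t) = (1/2) sin(2t - pi/3) against y'' + 4y = 0,
# y(pi/6) = 0, y'(pi/6) = 1.
def y(t):
    return 0.5 * math.sin(2 * t - math.pi / 3)

def residual(t, h=1e-4):
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    return ypp + 4 * y(t)

t0 = math.pi / 6
print(abs(y(t0)))                            # ~0
print((y(t0 + 1e-6) - y(t0 - 1e-6)) / 2e-6)  # ~1  (y'(pi/6) = 1)
print(abs(residual(1.0)))                    # ~0
```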
Example 2. (Decaying oscillation.) Find the solution to the IVP (Initial Value Problem)
Answer. The characteristic equation is
(Figure: plot of the solution for 0 ≤ t ≤ 4, oscillating between roughly −1 and 1 with decaying amplitude.)
We see it is a decaying oscillation. The sin and cos part gives the oscillation, and the e−t
part gives the decaying amplitude. As t → ∞, we have y → 0.
Answer.
(Figure: plot of the solution for 0 ≤ t ≤ 6, oscillating with growing amplitude, roughly between −20 and 20.)
We see that y oscillates with growing amplitude as t grows. In the limit t → ∞, y oscillates between −∞ and +∞.
Conclusion: The sign of λ, the real part of the complex roots, decides the type of oscillation:
• λ = 0: perfect oscillation;
• λ < 0: decaying oscillation;
• λ > 0: growing oscillation.
3.4 Repeated Roots; Reduction of Order
When the characteristic equation has a repeated root r1 = r2, we get only one solution y1 = e^{r1 t}, and we need to find a second one. Consider the example y′′ + 4y′ + 4y = 0, with r1 = r2 = −2 and y1 = e^{−2t}.
Method 1. Claim: y2 = te^{−2t}. We can check that y2 satisfies the equation. We have y2′ = (1 − 2t)e^{−2t}, y2′′ = (4t − 4)e^{−2t}, and
y2′′ + 4y2′ + 4y2 = (4t − 4)e^{−2t} + 4(1 − 2t)e^{−2t} + 4te^{−2t} = 0.
Finally, we must make sure that y1, y2 are linearly independent. Their Wronskian is
W(y1, y2) = y1 y2′ − y1′ y2 = e^{−2t}(1 − 2t)e^{−2t} + 2e^{−2t} · te^{−2t} = e^{−4t} ≠ 0,
so the general solution is y = c1 e^{−2t} + c2 te^{−2t}.
Method 2. This is the textbook’s version. We guess a solution of the form y2 = v(t)y1 =
v(t)e−2t , and try to find the function v(t). We have
Note that the term c2 e−2t is already contained in cy1 . Therefore we can choose c1 = 1, c2 = 0,
and get y2 = te−2t , which gives the same general solution as Method 1. We observe that this
method involves more computation than Method 1.
A typical solution graph is included below:
(Figure: a typical solution curve for 0 ≤ t ≤ 5, rising for small t and then decaying to 0.)
We see if c2 > 0, y increases for small t. But as t grows, the exponential (decay) function
dominates, and solution will go to 0 as t → ∞.
One can show in general that if the characteristic equation has repeated roots r1 = r2 = r, then y1 = e^{rt} and y2 = te^{rt}, and the general solution is
y(t) = (c1 + c2 t)e^{rt}.
Example 2. Solve the IVP
y′′ + 2y′ + y = 0,  y(0) = 2,  y′(0) = 1.
Answer. The characteristic equation r² + 2r + 1 = 0 has the double root r = −1, so the general solution is y = (c1 + c2 t)e^{−t}. The ICs give c1 = 2 and c2 − c1 = 1, so c2 = 3, and
y(t) = (2 + 3t)e^{−t}.
Example 2b. If the initial conditions in Example 2 are changed into
y(2) = 2, y ′ (2) = 1,
Note that we substitute t with t − 2 in the general solution form of Example 2:
y(t) = (c̄1 + c̄2 (t − 2)) e^{−(t−2)}.
Now the ICs give c̄1 = 2 and c̄2 − c̄1 = 1, so c̄2 = 3, and
y(t) = (2 + 3(t − 2)) e^{−(t−2)}.
Of course you can write the same general solution as in Example 2 and work out the detail.
Try that, and you will be convinced that it takes much more work!
Example 2c. Let's look at yet another way of solving Example 2b. Note that the equation is autonomous, i.e., no t appears explicitly in the equation. Let the initial data be given at t0 = 2. Introduce a new time variable τ = t − t0 = t − 2; this just shifts the time by 2 units. We have dτ = dt, and the IVP becomes
y′′ + 2y′ + y = 0,  y(0) = 2,  y′(0) = 1  (in the variable τ),
whose solution, by Example 2, is
y(τ) = (2 + 3τ)e^{−τ}.
Going back to t, the solution is y(t) = (2 + 3(t − 2))e^{−(t−2)}.
On reduction of order: This method can be used to find a second solution y2 if the first solution y1 is given for a second order linear equation.
Example 3. For the equation
2t² y′′ + 3t y′ − y = 0,  t > 0,
one solution y1 = 1/t is given. Find a second solution.
Answer. Method 1: Use Abel's Theorem and the Wronskian. By Abel's Theorem, choosing C = 1, we have
W(y1, y2) = exp( −∫ p(t) dt ) = exp( −∫ 3t/(2t²) dt ) = exp( −(3/2) ln t ) = t^{−3/2}.
By definition of the Wronskian,
W(y1, y2) = y1 y2′ − y1′ y2 = (1/t) y2′ − (−1/t²) y2 = t^{−3/2}.
Solve this for y2: multiplying by t, we get the first order linear equation
y2′ + (1/t) y2 = t^{−1/2}.
The integrating factor is
µ = exp( ∫ (1/t) dt ) = exp(ln t) = t,  ⇒  y2 = (1/t) ∫ t · t^{−1/2} dt = (1/t) · (2/3) t^{3/2} = (2/3) √t.
Dropping the constant 2/3, we get y2 = √t.
Method 2: This is the textbook’s version. Guess solution to be y2 (t) = y1 (t)v(t), and
find v(t) by solving a first order equation!
Comment 1: This method involves more computation than the first method. However, it is more general: in cases where one does not have something like Abel's Theorem, it will still work, and it will always reduce the order by 1.
Comment 2: A similar situation occurs in finding roots (or factorizing) a polynomial. Let
Pn (t) be a polynomial of degree n. If we found that r1 is a root, then it has a factor t − r1 , so
Pn (t) = (t − r1 )Q(t), where Q(t) is a polynomial of degree n − 1.
Let’s introduce another method that combines the ideas from Method 1 and Method 2.
Method 3. We will use Abel's Theorem, and at the same time we will seek a solution of the form y2 = v y1.
By Abel's Theorem (worked out in Method 1), W(y1, y2) = t^{−3/2}. Now, seek y2 = v y1. By the definition of the Wronskian, we have
W(y1, y2) = y1 y2′ − y1′ y2 = y1 (v y1)′ − y1′ (v y1) = y1 (v′ y1 + v y1′) − v y1 y1′ = v′ y1².
So v′ y1² = v′/t² = t^{−3/2}, which gives v′ = t^{1/2} and v = (2/3) t^{3/2}. Dropping the constant 2/3, we get
y2 = v y1 = t^{3/2} · (1/t) = t^{1/2} = √t.
We see that Method 3 is the most efficient one among all three methods. We will focus on
this method from now on.
Example 4. For the equation
t² y′′ − t(t + 2) y′ + (t + 2) y = 0,  t > 0,
one solution y1 = t is given. Find a second solution y2.
Answer. We have
p(t) = −t(t + 2)/t² = −(t + 2)/t = −1 − 2/t.
Let y2 be the second solution. By Abel's Theorem, choosing C = 1, we have
W(y1, y2) = exp( ∫ (1 + 2/t) dt ) = exp{t + 2 ln t} = t² e^t.
Let y2 = v y1; then W(y1, y2) = v′ y1² = t² v′. So we must have
t2 v ′ = t2 et , v ′ = et , v = et , y2 = tet .
(A cheap trick to double check your solution y2 would be: plug it back into the equation and
see if it satisfies it.) The general solution is
y(t) = c1 y1 + c2 y2 = c1 t + c2 te^t.
We observe here that Method 3 is very efficient.
Example 5. Given the equation t² y′′ − (t − 3/16) y = 0, (t > 0), and y1 = t^{1/4} e^{2√t}, find y2.
Answer. We will always use Method 3. We see that p = 0. By Abel's Theorem, setting C = 1, we have
W(y1, y2) = exp( −∫ 0 dt ) = 1.
Seek y2 = v y1. Then W(y1, y2) = y1² v′ = t^{1/2} e^{4√t} v′. So we must have
t^{1/2} e^{4√t} v′ = 1,  ⇒  v′ = t^{−1/2} e^{−4√t},  ⇒  v = ∫ t^{−1/2} e^{−4√t} dt.
Let u = −4√t, so du = −2 t^{−1/2} dt. We have
v = −(1/2) ∫ e^u du = −(1/2) e^u = −(1/2) e^{−4√t}.
So, dropping the constant −1/2, we get
y2 = v y1 = e^{−4√t} · t^{1/4} e^{2√t} = t^{1/4} e^{−2√t}.
The general solution is
y(t) = c1 y1 + c2 y2 = t^{1/4} ( c1 e^{2√t} + c2 e^{−2√t} ).
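As a sanity check, the new solution y2 can be verified numerically against the ODE (a sketch using a central second difference; the sample points are arbitrary):

```python
import math

# Check that y2 = t^(1/4) e^{-2 sqrt(t)} solves
# t^2 y'' - (t - 3/16) y = 0 for t > 0.
def y2(t):
    return t**0.25 * math.exp(-2 * math.sqrt(t))

def residual(t, h=1e-5):
    ypp = (y2(t + h) - 2 * y2(t) + y2(t - h)) / h**2
    return t**2 * ypp - (t - 3.0 / 16.0) * y2(t)

for t in (0.5, 1.0, 2.0):
    print(abs(residual(t)))  # all ~0
```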
3.5 Non-homogeneous equations; method of undetermined co-
efficients
We want to solve the non-homogeneous equation
y′′ + p(t)y′ + q(t)y = g(t). (N)
Steps:
1. First solve the homogeneous equation
y′′ + p(t)y′ + q(t)y = 0, (H)
i.e., find y1, y2, linearly independent of each other, and form the general solution
yH = c1 y1 + c2 y2.
2. Find a particular solution Y of (N). Then the general solution of (N) is
y = yH + Y = c1 y1 + c2 y2 + Y.
Indeed, yH satisfies the homogeneous equation (call this (A)) and Y satisfies (N) (call this (B)). Adding up (A) and (B), and writing y = yH + Y, we get y′′ + p(t)y′ + q(t)y = g(t).
Main focus: the constant coefficient case, i.e.,
a y′′ + b y′ + c y = g(t).
Example 1. Find the general solution for y′′ − 3y′ − 4y = 3e^{2t}.
Answer. Step 1: Find yH. The characteristic equation is
r² − 3r − 4 = (r + 1)(r − 4) = 0, ⇒ r1 = −1, r2 = 4,
so
yH = c1 e^{−t} + c2 e^{4t}.
Step 2: Find Y . We guess/seek solution of the same form as the source term Y = Ae2t , and
will determine the coefficient A.
Y ′ = 2Ae2t , Y ′′ = 4Ae2t .
Plug these into the equation:
4Ae^{2t} − 3 · 2Ae^{2t} − 4Ae^{2t} = 3e^{2t}, ⇒ −6A = 3, ⇒ A = −1/2.
So Y = −(1/2) e^{2t}.
Step 3. The general solution to the non-homogeneous equation is
y(t) = yH + Y = c1 e^{−t} + c2 e^{4t} − (1/2) e^{2t}.
Observation: Here the particular solution Y takes the same form as the source term g(t).
But this is not always true.
Example 2. Find a particular solution for y′′ − 3y′ − 4y = 2e^{−t}.
Answer. The homogeneous solution is the same as in Example 1: yH = c1 e^{−t} + c2 e^{4t}. For the particular solution Y, let's first try the same form as g, i.e., Y = Ae^{−t}. So Y′ = −Ae^{−t}, Y′′ = Ae^{−t}. Plugging them back into the equation, we get
LHS = Ae^{−t} − 3(−Ae^{−t}) − 4Ae^{−t} = 0 ≠ 2e^{−t} = RHS.
So it doesn’t work. Why?
We see r1 = −1 and y1 = e−t , which means our guess Y = Ae−t is a solution to the
homogeneous equation. It will never work.
Second try: Y = Ate−t . So
Y ′ = Ae−t − Ate−t , Y ′′ = −Ae−t − Ae−t + Ate−t = −2Ae−t + Ate−t .
Plug them in the equation
(−2Ae−t + Ate−t ) − 3(Ae−t − Ate−t ) − 4Ate−t = −5Ae−t = 2e−t ,
we get
−5A = 2,  ⇒  A = −2/5,
so we have Y = −(2/5) t e^{−t}.
Summary 1. If g(t) = aeαt , then the form of the particular solution Y depends on r1 , r2
(the roots of the characteristic equation).
r1 ≠ α and r2 ≠ α :  Y = A e^{αt}
r1 = α or r2 = α, but r1 ≠ r2 :  Y = A t e^{αt}
r1 = r2 = α :  Y = A t² e^{αt}
Example 3. Find the general solution for
y′′ − 3y′ − 4y = 3t² + 2.
Answer. The homogeneous solution is the same as before. Since g(t) is a polynomial of degree 2, we guess a particular solution of the same degree:
Y = At² + Bt + C,  Y′ = 2At + B,  Y′′ = 2A.
Plugging into the equation gives
2A − 3(2At + B) − 4(At² + Bt + C) = 3t² + 2.
Comparing coefficients, we get three equations for the three coefficients A, B, C:
−4A = 3  →  A = −3/4
−(6A + 4B) = 0  →  B = 9/8
2A − 3B − 4C = 2  →  C = (1/4)(2A − 3B − 2) = −55/32
So we get
Y(t) = −(3/4) t² + (9/8) t − 55/32.
Example 4. Find a particular solution for y′′ − 3y′ = 3t² + 2.
Answer. We see that the form we used in the previous example, Y = At² + Bt + C, won't work, because Y′′ − 3Y′ will not contain any t² term. (Indeed, the constant C solves the homogeneous equation.)
New try: multiply by a t. So we guess Y = t(At² + Bt + C) = At³ + Bt² + Ct. Then Y′ = 3At² + 2Bt + C, Y′′ = 6At + 2B, and
(6At + 2B) − 3(3At² + 2Bt + C) = −9At² + (6A − 6B)t + (2B − 3C) = 3t² + 2.
Comparing coefficients, we get three equations for the three coefficients A, B, C:
−9A = 3  →  A = −1/3
6A − 6B = 0  →  B = A = −1/3
2B − 3C = 2  →  C = (1/3)(2B − 2) = −8/9
So Y = t( −(1/3)t² − (1/3)t − 8/9 ).
Summary 2. If g(t) is a polynomial of degree n,
g(t) = αn t^n + · · · + α1 t + α0,
then the form of Y again depends on the roots of the characteristic equation: if 0 is not a root, take Y to be a polynomial of degree n; if 0 is a simple root, multiply that form by t; if 0 is a double root, multiply it by t².
Example 5. Find a particular solution for y′′ − 3y′ − 4y = sin t.
Answer. Since g(t) = sin t, we will try the same form. Note that (sin t)′ = cos t, so we
must have the cos t term as well. So the form of the particular solution is
Y = A sin t + B cos t.
Then
Y ′ = A cos t − B sin t, Y ′′ = −A sin t − B cos t.
Plugging back into the equation, we get
(−A sin t − B cos t) − 3(A cos t − B sin t) − 4(A sin t + B cos t) = (−5A + 3B) sin t − (3A + 5B) cos t = sin t.
So we must have
−5A + 3B = 1,  −3A − 5B = 0,  →  A = −5/34,  B = 3/34.
So we get
Y(t) = −(5/34) sin t + (3/34) cos t.
We observe: (1) If the right-hand side is g(t) = a cos t, then the same form works. (2) More generally, if g(t) = a sin t + b cos t for some a, b, then the same form still works. However, this form won't work if it is a solution to the homogeneous equation.
Answer. Let’s first find yH . We have r 2 + 1 = 0, so r1,2 = ±i, and yH = c1 cos t + c2 sin t.
For the particular solution Y : We see that the form Y = A sin t + B cos t won’t work
because it solves the homogeneous equation.
Our new guess: multiply it by t, so
Y = t(A sin t + B cos t).
Then
Y′ = (A sin t + B cos t) + t(A cos t − B sin t),
Y′′ = −(2B + At) sin t + (2A − Bt) cos t.
Plug into the equation:
Y′′ + Y = −2B sin t + 2A cos t = sin t,  ⇒  A = 0,  B = −1/2.
So
Y(t) = −(1/2) t cos t.
The general solution is
y(t) = yH + Y = c1 cos t + c2 sin t − (1/2) t cos t.
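A quick numerical check of the resonant particular solution (sample points arbitrary):

```python
import math

# Check the resonant particular solution Y = -(1/2) t cos t for
# y'' + y = sin t.
def Y(t):
    return -0.5 * t * math.cos(t)

def residual(t, h=1e-5):
    Ypp = (Y(t + h) - 2 * Y(t) + Y(t - h)) / h**2
    return Ypp + Y(t) - math.sin(t)

for t in (0.3, 1.0, 2.5):
    print(abs(residual(t)))  # all ~0
```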
Summary 3. If g(t) = a sin αt + b cos αt, the form of the particular solution depends on the roots r1, r2:
r1,2 ≠ ±iα :  Y = A sin αt + B cos αt
r1,2 = ±iα :  Y = t(A sin αt + B cos αt)
We have now discovered some general rules for obtaining the form of the particular solution for the non-homogeneous equation ay′′ + by′ + cy = g(t).
• Rule (1). Usually, Y takes the same form as the source term g(t).
• Rule (2). Except: if the form of g(t) is a solution to the homogeneous equation, then one must multiply it by t.
• Rule (3). If the resulting form in Rule (2) is still a solution to the homogeneous equation, then multiply it by another t.
Example 7. Find a particular solution for y′′ − 3y′ − 4y = te^t.
Answer. We see that g = P1(t)e^{at} with a = 1, where P1 is a polynomial of degree 1. Also r1 = −1, r2 = 4, so r1 ≠ a and r2 ≠ a. For a particular solution we try the same form as g, i.e., Y = (At + B)e^t. Plugging in:
[(2A + B)e^t + Ate^t] − 3[(A + B)e^t + Ate^t] − 4(At + B)e^t = (−6At − A − 6B)e^t = te^t.
So −6A = 1 and −A − 6B = 0, which gives A = −1/6, B = 1/36, and
Y = ( −(1/6)t + 1/36 ) e^t.
However, if the form of g is a solution to the homogeneous equation, it won’t work for a
particular solution. We must multiply it by t in that case.
Example 8. Find a particular solution for y′′ − 3y′ − 4y = te^{−t}.
Answer. Since a = −1 = r1, the form we used in Example 7 won't work here. (Can you intuitively explain why? Part of that guess solves the homogeneous equation.) Try the new form
Y = t(At + B)e^{−t} = (At² + Bt)e^{−t}.
Then
Y ′ = · · · = [−At2 + (2A − B)t + B]e−t ,
Y ′′ = · · · = [At2 + (B − 4A)t + 2A − 2B]e−t .
Plug into the equation:
(−10At + 2A − 5B)e^{−t} = te^{−t},  ⇒  −10A = 1,  2A − 5B = 0,
which gives A = −1/10, B = −1/25, and
Y = −( t²/10 + t/25 ) e^{−t}.
Summary 4. If g(t) = Pn(t)e^{αt} with Pn a polynomial of degree n:
case  |  form of the particular solution Y
r1 ≠ α and r2 ≠ α :  Y = (An t^n + · · · + A0) e^{αt}
r1 = α or r2 = α, but r1 ≠ r2 :  Y = t (An t^n + · · · + A0) e^{αt}
r1 = r2 = α :  Y = t² (An t^n + · · · + A0) e^{αt}
Other cases of g are treated in a similar way: Check if the form of g is a solution to the
homogeneous equation. If not, then use it as the form of a particular solution. If yes, then
multiply it by t or t2 .
We summarize a few cases below.
Summary 5. If g(t) = eαt (a cos βt + b sin βt), and r1 , r2 are the roots of the characteristic
equation. Then
case form of the particular solution Y
r1,2 ≠ α ± iβ :  Y = e^{αt} (A cos βt + B sin βt)
r1,2 = α ± iβ :  Y = t · e^{αt} (A cos βt + B sin βt)
Summary 6. If g(t) = Pn(t) cos βt + P̃n(t) sin βt, where Pn(t) and P̃n(t) are polynomials of degree n, and r1, r2 are the roots of the characteristic equation, then:
case  |  form of the particular solution Y
r1,2 ≠ ±iβ :  Y = (An t^n + · · · + A0) cos βt + (Bn t^n + · · · + B0) sin βt
r1,2 = ±iβ :  Y = t · [ (An t^n + · · · + A0) cos βt + (Bn t^n + · · · + B0) sin βt ]
Summary 7. If g(t) = Pn (t)eαt (a cos βt + b sin βt) where Pn (t) is a polynomial of degree
n, and r1 , r2 are the roots of the characteristic equation. Then
case  |  form of the particular solution Y
r1,2 ≠ α ± iβ :  Y = e^{αt} [ (An t^n + · · · + A0) cos βt + (Bn t^n + · · · + B0) sin βt ]
r1,2 = α ± iβ :  Y = t · e^{αt} [ (An t^n + · · · + A0) cos βt + (Bn t^n + · · · + B0) sin βt ]
More terms in the source. If the source g(t) has several terms, we treat each separately and add up later. Let g(t) = g1(t) + g2(t) + · · · + gn(t); find a particular solution Yi for each gi(t) term as if it were the only term in g, then Y = Y1 + Y2 + · · · + Yn. This claim follows from the principle of superposition. (Can you provide a brief proof?)
Answer. Since r1 = −1, r2 = 2, we treat each term in g separately and then add up:
Example 10. y ′′ + 16y = sin 4t + cos t − 4 cos 4t + 4.
We also note that the terms sin 4t and −4 cos 4t are of the same type, and we must multiply
it by t. So
Y = t(A sin 4t + B cos 4t) + (C cos t + D sin t) + E.
Answer. The char equation is r 2 − 2r + 2 = 0 with roots r1,2 = 1 ± i. Then, for the term
et cos t we must multiply by t.
Y = tet (A1 cos t + A2 sin t)+ et (B1 cos 2t + B2 sin 2t)+ (C1 t + C0 )e−t + De−t + (F2 t2 + F1 t + F0 ).
3.6 Variation of Parameters
Consider the non-homogeneous equation
y′′ + p(t)y′ + q(t)y = g(t). (N)
Let y1, y2 be two linearly independent solutions of the homogeneous equation (H), and seek a particular solution of the form
yp = u1(t) y1(t) + u2(t) y2(t).
Our goal is to find the functions u1, u2 that will make yp a particular solution.
Note that we have a lot of freedom here. If we plug in yp in (N), we will get one constraint.
But we have 2 functions. This means there is one constraint that is free-choice for us! We
should use it wisely, to get 1st order equations for u1 , u2 .
We compute the derivative of yp:
yp′ = (y1′ u1 + y2′ u2) + (y1 u1′ + y2 u2′).
If we differentiate one more time, the expression gets large. In particular, the term y1 u1′ + y2 u2′ would produce u1′′, u2′′, which we should avoid. This is a good place to use our free choice: we require
y1 u1′ + y2 u2′ = 0. (A)
Then we have
yp′ = y1′ u1 + y2′ u2,  yp′′ = y1′′ u1 + y2′′ u2 + y1′ u1′ + y2′ u2′.
Plug into (N):
[ y1′′ u1 + y2′′ u2 + y1′ u1′ + y2′ u2′ ] + p(t)[ y1′ u1 + y2′ u2 ] + q(t)[ y1 u1 + y2 u2 ] = g(t),
i.e.,
u1 [ y1′′ + p(t)y1′ + q(t)y1 ] + u2 [ y2′′ + p(t)y2′ + q(t)y2 ] + ( y1′ u1′ + y2′ u2′ ) = g(t).
Since y1, y2 are solutions of (H), the two brackets are 0, and we get
y1′ u1′ + y2′ u2′ = g(t). (B)
Now (A) and (B) are two linear equations for the unknowns u1′, u2′. In matrix-vector form,
[ y1   y2  ] [ u1′ ]   [  0   ]
[ y1′  y2′ ] [ u2′ ] = [ g(t) ].
The determinant of the coefficient matrix is W(y1, y2) ≠ 0, which implies a unique solution for u1′, u2′:
[ u1′ ]          [  y2′  −y2 ] [  0   ]
[ u2′ ] = (1/W)  [ −y1′   y1 ] [ g(t) ],
which gives
u1′ = −(1/W) y2(t) g(t),  u2′ = (1/W) y1(t) g(t). (C)
We recover u1, u2 by integrating:
u1(t) = −∫ ( y2(t) g(t) / W(t) ) dt,  u2(t) = ∫ ( y1(t) g(t) / W(t) ) dt.
The particular solution is
yp(t) = u1 y1 + u2 y2 = −y1(t) ∫ ( y2(t) g(t) / W(t) ) dt + y2(t) ∫ ( y1(t) g(t) / W(t) ) dt.
Remark 1: In general, finding the general solution for (H) is not trivial. We don’t have a
general algorithm yet.
Remark 2: However, if one can find one solution of (H), call it y1 , by reduction of order
we can find y2 . Then, by variation of parameter, we can find yp , and therefore the general
solution of (N).
Example. Find the general solution of
y′′ − 2y′ + y = e^t ln t,  t > 0.
Answer. We first find the two linearly independent solutions y1 , y2 for the homogeneous
equation. Note that it has constant coefficients. The char eqn r 2 − 2r + 1 = 0 gives two
repeated roots r1 = r2 = 1, so
y1 = et , y2 = tet .
Then
y1′ = et , y2′ = (1 + t)et , W (y1 , y2 ) = (1 + t)e2t − te2t = e2t .
Write the particular solution as yp = u1 y1 + u2 y2. By (C) we have
u1′ = −( te^t · e^t ln t ) / e^{2t} = −t ln t,  u2′ = ( e^t · e^t ln t ) / e^{2t} = ln t,
which gives
u1(t) = −∫ t ln t dt = −(t²/2) ln t + t²/4,  u2(t) = ∫ ln t dt = t ln t − t.
So
yp = u1 y1 + u2 y2 = ( −(t²/2) ln t + t²/4 ) e^t + ( t ln t − t ) te^t = e^t ( (t²/2) ln t − (3/4) t² ),
and the general solution is
y(t) = (c1 + c2 t) e^t + e^t ( (t²/2) ln t − (3/4) t² ).
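Collecting terms, the particular solution obtained from u1 and u2 is yp = e^t ((t²/2) ln t − (3/4)t²). A numerical check (sample points arbitrary):

```python
import math

# Check y_p = e^t ((t^2/2) ln t - (3/4) t^2), assembled from
# u1 = -(t^2/2) ln t + t^2/4 and u2 = t ln t - t, against
# y'' - 2y' + y = e^t ln t (t > 0).
def yp(t):
    return math.exp(t) * (0.5 * t**2 * math.log(t) - 0.75 * t**2)

def residual(t, h=1e-5):
    d1 = (yp(t + h) - yp(t - h)) / (2 * h)
    d2 = (yp(t + h) - 2 * yp(t) + yp(t - h)) / h**2
    return d2 - 2 * d1 + yp(t) - math.exp(t) * math.log(t)

for t in (0.5, 1.0, 2.0):
    print(abs(residual(t)))  # all ~0
```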
Example. Given that y1 = t and y2 = t ln t solve the homogeneous equation
t² y′′ − t y′ + y = 0,  t > 0,
solve the IVP
t² y′′ − t y′ + y = t,  y(1) = 1,  y′(1) = 4.
Answer. In standard form, g(t) = 1/t, and W(y1, y2) = t. By (C), u1′ = −(ln t)/t and u2′ = 1/t, so u1 = −(ln t)²/2, u2 = ln t, and yp = (t/2)(ln t)². The general solution is
y(t) = c1 t + c2 t ln t + (t/2)(ln t)².
The ICs give
c1 = 1,  c2 = 3,
so y(t) = t + 3t ln t + (t/2)(ln t)².
3.7 Mechanical Vibrations
In this section we study some applications of the IVP
m u′′ + γ u′ + k u = F(t),  u(0) = u0,  u′(0) = u0′,
which models a mass hanging on a spring.
(Figure: a spring of natural length l; hanging a mass m stretches it by L to length l + L; during motion there is an extra stretch u. The weight is mg.)
Hooke’s law: Spring force Fs = −kL, where L =elongation and k =spring constant.
So at equilibrium we have mg = kL, which gives
k = mg / L.
This gives a way to obtain k by experiment: hang a mass m and measure the elongation L.
Model the motion: Let u(t) be the displacement/position of the mass at time t, assuming
the origin u = 0 is at the equilibrium position, and downward is the positive direction.
Total elongation: L + u
Total spring force: Fs = −k(L + u)
Other forces:
* damping/resistant force: Fd(t) = −γv = −γu′(t), where γ is the damping constant and v is the velocity
* External force applied on the mass: F(t), a given function of t.
Total force on the mass: ∑f = mg + Fs + Fd + F.
Newton's law of motion ma = ∑f gives
ma = mu′′ = mg + Fs + Fd + F = mg − k(L + u) − γu′ + F.
Since mg = kL, the terms mg and −kL cancel, and we get
mu′′ + γu′ + ku = F
where m is the mass, γ is the damping constant, k is the spring constant, and F is the external force.
Free vibration without damping (γ = 0, F = 0):
mu′′ + ku = 0.
Solve it:
mr² + k = 0,  r² = −k/m,  r1,2 = ±√(k/m) i = ±ω0 i,  where ω0 = √(k/m).
General solution
u(t) = c1 cos ω0 t + c2 sin ω0 t.
Four terminologies of this motion: frequency, period, amplitude and phase, defined below.
Frequency:  ω0 = √(k/m).
Period:  T = 2π/ω0.
Amplitude and phase: We need to work on this a bit. We can write
u(t) = √(c1² + c2²) ( (c1/√(c1² + c2²)) cos ω0 t + (c2/√(c1² + c2²)) sin ω0 t ).
Introducing the angle δ with cos δ = c1/√(c1² + c2²) and sin δ = c2/√(c1² + c2²), we have
u(t) = √(c1² + c2²) ( cos δ · cos ω0 t + sin δ · sin ω0 t ) = √(c1² + c2²) cos(ω0 t − δ).
So the amplitude is R = √(c1² + c2²) and the phase is δ = arctan(c2/c1).
A trick to memorize the last formula: consider a right triangle with c1 and c2 as the sides that form the right angle. Then the amplitude R equals the length of the hypotenuse, and the phase δ is the angle between side c1 and the hypotenuse. Draw a graph and you will see it better.
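In code, the conversion from (c1, c2) to amplitude–phase form is conveniently done with atan2, which picks the correct quadrant automatically (a plain arctan(c2/c1) is only valid when c1 > 0). A small sketch with arbitrary sample values:

```python
import math

# Convert u = c1 cos(w0 t) + c2 sin(w0 t) into R cos(w0 t - delta).
def amplitude_phase(c1, c2):
    R = math.hypot(c1, c2)      # sqrt(c1^2 + c2^2)
    delta = math.atan2(c2, c1)  # correct quadrant, unlike atan(c2/c1)
    return R, delta

c1, c2, w0 = 3.0, -4.0, 2.0
R, delta = amplitude_phase(c1, c2)
# verify the identity at a sample time
t = 0.8
lhs = c1 * math.cos(w0 * t) + c2 * math.sin(w0 * t)
rhs = R * math.cos(w0 * t - delta)
print(R)               # 5.0
print(abs(lhs - rhs))  # ~0
```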
Problems in this part often come in the form of word problems. We need to learn the skill of extracting information from the text and putting it into mathematical terms.
mu′′ + ku = 0
The four terms of the motion are
ω0 = √192,  T = 2π/ω0 = π/√48,  R = √(c1² + c2²) = √(19/576) ≈ 0.18,
and
δ = arctan(c2/c1) = arctan( −6/√192 ) = −arctan( √3/4 ).
Free vibration with damping: for
mu′′ + γu′ + ku = 0,
the characteristic equation is
mr² + γr + k = 0,  r1,2 = ( −γ ± √(γ² − 4km) ) / (2m).
We see the type of root depends on the sign of the discriminant ∆ = γ 2 − 4km.
• If ∆ > 0 (i.e., γ > √(4km), large damping), we have two real roots, and they are both negative. The general solution is u = c1 e^{r1 t} + c2 e^{r2 t}, with r1 < 0, r2 < 0.
Due to the large damping force, there will be no vibration in the motion. The mass will
simply return to the equilibrium position exponentially. This kind of motion is called
overdamped.
• If ∆ = 0 (i.e., γ = √(4km)), we have a double root r1 = r2 = r < 0. So u = (c1 + c2 t)e^{rt}. Depending on the signs of c1, c2 (which are determined by the ICs), the mass may cross the equilibrium point at most once. This kind of motion is called critically damped, and this value of γ is called critical damping.
• If ∆ < 0 (i.e., γ < √(4km), small damping), we have complex roots
r1,2 = −λ ± µi,  λ = γ/(2m),  µ = √(4km − γ²) / (2m).
So the position function is
u(t) = e^{−λt} ( c1 cos µt + c2 sin µt ) = e^{−λt} R cos(µt − δ).
Here the factor e^{−λt} R is the (decaying) amplitude, µ is called the quasi frequency, and 2π/µ is the quasi period. The graph of the solution looks like the one for complex roots with negative real part.
Summary: In all three cases, since the real parts of the roots are always negative, u → 0 as t grows. This means: if there is damping, no matter how big or small, the motion will eventually come to rest.
Example. Consider the damped system
u′′ + (1/8) u′ + u = 0,  u(0) = 2,  u′(0) = 0.
Find the position u(t).
Answer. Solve the characteristic equation:
r² + (1/8) r + 1 = 0,  r1,2 = −1/16 ± (√255/16) i,  so set  ω0 = √255/16.
Then
u(t) = e^{−t/16} ( c1 cos ω0 t + c2 sin ω0 t ).
By the ICs, we have u(0) = c1 = 2, and
u′(t) = −(1/16) u(t) + e^{−t/16} ( −ω0 c1 sin ω0 t + ω0 c2 cos ω0 t ),
u′(0) = −(1/16) u(0) + ω0 c2 = 0,  ⇒  c2 = 2/√255.
So the position at any time t is
u(t) = e^{−t/16} ( 2 cos ω0 t + (2/√255) sin ω0 t ).
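A numerical sanity check of this solution (note that u′(0) = −u(0)/16 + ω0 c2 = 0 forces c2 = +2/√255, so the sine term enters with a plus sign):

```python
import math

# Check u(t) = e^{-t/16} (2 cos(w0 t) + (2/sqrt(255)) sin(w0 t)),
# w0 = sqrt(255)/16, against u'' + u'/8 + u = 0, u(0) = 2, u'(0) = 0.
w0 = math.sqrt(255) / 16

def u(t):
    return math.exp(-t / 16) * (2 * math.cos(w0 * t)
                                + (2 / math.sqrt(255)) * math.sin(w0 * t))

def residual(t, h=1e-5):
    up = (u(t + h) - u(t - h)) / (2 * h)
    upp = (u(t + h) - 2 * u(t) + u(t - h)) / h**2
    return upp + up / 8 + u(t)

print(u(0))                         # 2.0
print((u(1e-6) - u(-1e-6)) / 2e-6)  # ~0  (u'(0) = 0)
print(abs(residual(1.0)))           # ~0
```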
Forced vibration with damping: consider
m u′′ + γ u′ + k u = F0 cos ωt,  with γ > 0.
The general solution is u = uH + U, where uH is the general solution of the homogeneous equation (which decays to 0 because of the damping) and U is a particular solution.
The appearance of U is due to the force term F; therefore it is called the forced response.
The form of this particular solution is U (t) = A1 cos ωt + A2 sin ωt. As we have seen, we can
rewrite it as U (t) = R cos(ωt − δ) where R is the amplitude and δ is the phase. We see it is a
periodic oscillation for all time t.
As time t → ∞, we have u(t) → U (t). So U (t) is called the steady state.
Forced vibration without damping:
mu′′ + ku = F0 cos wt.
Let w0 = √(k/m) denote the system frequency (i.e., the frequency of the free oscillation). The homogeneous solution is
uH(t) = c1 cos w0 t + c2 sin w0 t.
The form of the particular solution depends on the value of w. We have two cases.
Case 2A: if w ≠ w0. The particular solution is of the form
U = A cos wt.
(Note that we did not take the sin wt term, because there is no u′ term in the equation.) And
U ′′ = −w2 A cos wt. Plug these in the equation
(k − mw²) A = F0,  A = F0 / (k − mw²) = F0 / ( m(k/m − w²) ) = F0 / ( m(w0² − w²) ).
Note that if w is close to w0 , then A takes a large value.
General solution
u(t) = c1 cos w0 t + c2 sin w0 t + A cos wt
where c1 , c2 will be determined by ICs.
Now, assume ICs:
u(0) = 0, u′ (0) = 0.
Let’s find c1 , c2 and the solution:
u(0) = 0 : c1 + A = 0, c1 = −A
u′ (0) = 0 : 0 + w0 c2 + 0 = 0, c2 = 0
Solution
u(t) = −A cos w0 t + A cos wt = A(cos wt − cos w0 t).
We see that the solution is the sum of two cosine functions with different frequencies. To get a better idea of what the solution looks like, we apply some manipulation.
Recall the trig identity:
cos a − cos b = 2 sin( (b − a)/2 ) sin( (a + b)/2 ).
We now have
u(t) = 2A sin( ((w0 − w)/2) t ) · sin( ((w0 + w)/2) t ).
Since both w0 and w are positive, w0 + w is larger than |w0 − w|. The first factor, 2A sin( ((w0 − w)/2) t ), can be viewed as a slowly varying amplitude, while the second factor, sin( ((w0 + w)/2) t ), gives the oscillation.
One particular situation of interest: if w0 ≠ w but they are very close, w0 ≈ w, then |w0 − w| ≪ |w0 + w|, meaning that |w0 − w| is much smaller than |w0 + w|. The plot of u(t) looks like the figure below (we choose w0 = 9, w = 10).
(Figure: a beat — a fast oscillation modulated by a slowly varying sinusoidal envelope, plotted for 0 ≤ t ≤ 25.)
This is called a beat. (One observes it by hitting a key on a piano that’s not tuned, for
example.)
Case 2B: if w = w0. Then cos w0 t solves the homogeneous equation, so the particular solution takes the form
U = At cos w0 t + Bt sin w0 t,
whose amplitude grows linearly in t.
(Figure: resonance — an oscillation whose amplitude grows linearly in t, plotted for 0 ≤ t ≤ 5.)
This is called resonance. If the frequency ω of the source term equals the frequency ω0 of the system, then a small source term can make the solution grow very large!
One can bring down a building or bridge by small periodic perturbations.
There are historical disasters of this kind, for example a troop of French soldiers marching in step over a bridge, causing the bridge to collapse: the system frequency of the bridge matched the frequency of their footsteps.
Summary:
• With damping:
Transient solution uH plus the forced response term U (t) (steady state),
• Without damping:
if w = w0: resonance;
if w ≠ w0 but w ≈ w0: beat.
Chapter 4
Consider the n-th order linear equation, written in standard form as
y^(n) + pn−1(t) y^(n−1) + · · · + p1(t) y′ + p0(t) y = g(t). (A)
We need to assign n initial/boundary conditions. Let t0 be the initial time. Normally, the lower derivatives are given at t0, i.e.,
y(t0) = y0,  y′(t0) = y0′,  · · · ,  y^(n−1)(t0) = y0^(n−1).
Theoretical aspects are very similar to those for 2nd order linear equations, with suitable
extensions.
Existence and Uniqueness. If the coefficient functions p0(t), p1(t), · · · , pn−1(t) and g(t) are continuous and bounded on an open interval I containing t0, then equation (A) has a unique solution on the interval I.
Typical problem types: Find the largest interval where solution is valid.
Linear dependency of n functions: The Wronskian determinant of n functions (y1, y2, · · · , yn) is defined as
                          | y1        y2        ...  yn        |
                          | y1′       y2′       ...  yn′       |
W(y1, y2, · · · , yn) =   | ...                                |
                          | y1^(n−1)  y2^(n−1)  ...  yn^(n−1)  |
The determinant is lengthy to compute for general n × n matrices. The simpler cases are n = 2 and n = 3, which we recall here:
| a  b |
| c  d | = ad − bc,

| a11  a12  a13 |
| a21  a22  a23 | = a11 a22 a33 + a21 a32 a13 + a31 a12 a23 − a31 a22 a13 − a21 a12 a33 − a11 a32 a23.
| a31  a32  a33 |
One can use the Wronskian determinant to check if a set of functions are linearly dependent
or not.
Remark: We observe again how the property W ≠ 0 leads to unique solutions for the constants c1, · · · , cn. Plugging in the n initial conditions, we have
c1 y1(t0) + c2 y2(t0) + · · · + cn yn(t0) = y0
c1 y1′(t0) + c2 y2′(t0) + · · · + cn yn′(t0) = y0′
...
c1 y1^(n−1)(t0) + c2 y2^(n−1)(t0) + · · · + cn yn^(n−1)(t0) = y0^(n−1).
The coefficient matrix of this linear system has determinant W(y1, · · · , yn)(t0), so a unique solution (c1, · · · , cn) exists precisely when W ≠ 0.
Abel's Theorem extends to the n-th order homogeneous equation: if y1, · · · , yn are solutions, then
W(y1, · · · , yn)(t) = C exp( −∫ pn−1(t) dt ).
The proof can be found in most advanced texts on differential equations. The basic argument is similar to that of the case n = 2. The computations are more involved, however, since one needs to compute the derivative of an (n × n) determinant of functions. Here is a brief proof.
By the product rule, the derivative of the determinant is:
W′ = D1 + D2 + · · · + Dn,
where Dk is the determinant obtained from W by differentiating its k-th row (and leaving the other rows unchanged). For example, in D1 the first row becomes (y1′, · · · , yn′) — identical to the second row — while in Dn the last row becomes (y1^(n), · · · , yn^(n)).
Here in the first determinant we differentiate the first row, in the second we differentiate the second row, and so on; there are n terms in total. All the terms except the last have two identical rows, so their determinants are 0. Therefore, we have
W' = \begin{vmatrix} y_1 & y_2 & \cdots & y_n \\ y_1' & y_2' & \cdots & y_n' \\ y_1'' & y_2'' & \cdots & y_n'' \\ \vdots & & & \vdots \\ y_1^{(n-2)} & y_2^{(n-2)} & \cdots & y_n^{(n-2)} \\ y_1^{(n)} & y_2^{(n)} & \cdots & y_n^{(n)} \end{vmatrix}.    (A)
Now, multiply the first row by p_0, the second row by p_1, and so on, and the (n−1)-th row by p_{n−2}, and add them all to the last row. Remember that this operation does not change the determinant. Since each y_i solves the homogeneous equation, the last row then becomes −p_{n−1} y_i^{(n−1)} in the i-th entry. We now have
W' = \begin{vmatrix} y_1 & y_2 & \cdots & y_n \\ y_1' & y_2' & \cdots & y_n' \\ \vdots & & & \vdots \\ y_1^{(n-2)} & y_2^{(n-2)} & \cdots & y_n^{(n-2)} \\ -p_{n-1} y_1^{(n-1)} & -p_{n-1} y_2^{(n-1)} & \cdots & -p_{n-1} y_n^{(n-1)} \end{vmatrix} = -p_{n-1}(t)\, W,

since the common factor −p_{n−1}(t) can be pulled out of the last row.
Solution for the non-homogeneous equation: y(t) = y_H(t) + Y(t), where y_H is the general solution of the homogeneous equation and Y is a particular solution.
Characteristic equation:
a_n r^n + \cdots + a_1 r + a_0 = 0.
In general, one finds n roots (counting multiplicity):
(r - r_1)(r - r_2) \cdots (r - r_n) = 0.
From these roots one can find n solutions. The rules are the same as for 2nd order equations, with some extensions (marked with * in the following table).
root type                                    solution(s)
r real, un-repeated                          e^{rt}
r real, double root                          e^{rt}, t e^{rt}
(*) r real, triple root                      e^{rt}, t e^{rt}, t^2 e^{rt}
(*) r real, repeated with multiplicity m     e^{rt}, t e^{rt}, ..., t^{m-1} e^{rt}
r = λ ± iµ complex                           e^{λt} cos µt, e^{λt} sin µt
(*) r = λ ± iµ complex, double roots         e^{λt} cos µt, e^{λt} sin µt and t e^{λt} cos µt, t e^{λt} sin µt
(*) r = λ ± iµ complex, repeated m times     similarly, multiplied by t, ..., t^{m-1}
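Roots of a characteristic polynomial can also be found numerically with numpy (an illustration, not part of the notes); e.g. for the polynomial r^4 − 4r^2 = 0 of the worked example below:

```python
import numpy as np

# coefficients of r^4 + 0 r^3 - 4 r^2 + 0 r + 0
roots = sorted(np.roots([1, 0, -4, 0, 0]).real)  # expect [-2, 0, 0, 2]
```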
r^4 - 4r^2 = 0, \quad r^2(r^2 - 4) = 0, \quad r^2(r-2)(r+2) = 0, \quad r_1 = r_2 = 0, \ r_3 = 2, \ r_4 = -2.
General solution:
y(t) = c_1 e^{-2t} + c_2 e^{2t} + c_3 + c_4 t.
(b). We now determine the constants from the initial data. It is useful to work out the derivatives first:

y(0) = 0:    c_1 + c_2 + c_3 = 0
y'(0) = 0:   -2c_1 + 2c_2 + c_4 = 0
y''(0) = 8:  4c_1 + 4c_2 = 8,  i.e.  c_1 + c_2 = 2
y'''(0) = 0: -8c_1 + 8c_2 = 0,  i.e.  c_1 = c_2
From the last two equations, we get c_1 = c_2 = 1. Putting these back into the first 2 equations, we get c_3 = −2 and c_4 = 0.
The solution is
y(t) = e−2t + e2t − 2.
(I):   y^{(4)} + 4y'' = 0
(II):  y''' - y = 0
(III): y^{(4)} + 8y'' + 16y = 0
(IV):  y''' + 3y'' + 3y' + y = 0
(V)*:  y^{(4)} + 8y = 0
(I): Characteristic equation and the roots:
r^4 + 4r^2 = 0, \quad r^2(r^2 + 4) = 0, \quad r_1 = r_2 = 0, \quad r_{3,4} = \pm 2i.
General solution:
y(t) = c_1 + c_2 t + c_3 \cos 2t + c_4 \sin 2t.
(II): Characteristic equation and the roots:
r^3 - 1 = 0, \quad (r-1)(r^2 + r + 1) = 0, \quad r_1 = 1, \quad r_{2,3} = -\frac{1}{2} \pm \frac{\sqrt{3}}{2} i.
General solution:
y(t) = c_1 e^{t} + e^{-t/2}\left( c_2 \cos \frac{\sqrt{3}}{2} t + c_3 \sin \frac{\sqrt{3}}{2} t \right).
(III): Characteristic equation and the roots:
r^4 + 8r^2 + 16 = 0, \quad (r^2 + 4)^2 = 0,
so we have 4 roots, r = \pm 2i, each a double root.
(IV): Characteristic equation and the roots:
r^3 + 3r^2 + 3r + 1 = 0, \quad (r+1)^3 = 0, \quad r_1 = r_2 = r_3 = -1.
The n-th derivative is
Y^{(n)} = \left[ y_1^{(n)} u_1 + y_2^{(n)} u_2 + \cdots + y_n^{(n)} u_n \right] + \left[ y_1^{(n-1)} u_1' + y_2^{(n-1)} u_2' + \cdots + y_n^{(n-1)} u_n' \right].
Plug all these into the equation, and collect like terms; we get
LHS = u_1 \left[ y_1^{(n)} + p_{n-1}(t) y_1^{(n-1)} + \cdots + p_1(t) y_1' + p_0(t) y_1 \right]
    + u_2 \left[ y_2^{(n)} + p_{n-1}(t) y_2^{(n-1)} + \cdots + p_1(t) y_2' + p_0(t) y_2 \right] + \cdots
    + u_n \left[ y_n^{(n)} + p_{n-1}(t) y_n^{(n-1)} + \cdots + p_1(t) y_n' + p_0(t) y_n \right]
    + \left[ y_1^{(n-1)} u_1' + y_2^{(n-1)} u_2' + \cdots + y_n^{(n-1)} u_n' \right].
Here all the bracketed terms multiplying the u_i are 0, since each y_i solves the homogeneous equation. We get
y_1^{(n-1)} u_1' + y_2^{(n-1)} u_2' + \cdots + y_n^{(n-1)} u_n' = g(t).
Combining the last equation with the (n − 1) constraints we imposed earlier, we get a system of n linear equations for the u_i'. In matrix-vector form it reads

\begin{pmatrix} y_1 & y_2 & \cdots & y_n \\ y_1' & y_2' & \cdots & y_n' \\ \vdots & & & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{pmatrix} \begin{pmatrix} u_1' \\ u_2' \\ \vdots \\ u_n' \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ g(t) \end{pmatrix}.    (*)

Note that the coefficient matrix is the Wronskian matrix, whose determinant is never 0 for a fundamental set of solutions. Therefore this system has a unique solution for the u_i'. One can then recover the u_i by integration.
Show that
y_1(t) = t, \quad y_2(t) = t^2, \quad y_3(t) = t^3
form a fundamental set of solutions for the corresponding homogeneous equation. Then find a particular solution of the non-homogeneous equation.
Answer. (1) One can easily plug these 3 functions in and verify that they are solutions of the homogeneous equation.
(2) To check that they are linearly independent, we compute the Wronskian
W(t, t^2, t^3) = \begin{vmatrix} t & t^2 & t^3 \\ 1 & 2t & 3t^2 \\ 0 & 2 & 6t \end{vmatrix} = 2t^3 \ne 0 \quad (t \ne 0).
By formula (*), we have
\begin{pmatrix} t & t^2 & t^3 \\ 1 & 2t & 3t^2 \\ 0 & 2 & 6t \end{pmatrix} \begin{pmatrix} u_1' \\ u_2' \\ u_3' \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ t^{-2} \end{pmatrix}.
We get
u_1' = \frac{1}{2t}, \quad u_2' = -\frac{1}{t^2}, \quad u_3' = \frac{1}{2t^3}; \qquad u_1 = \frac{1}{2}\ln t, \quad u_2 = \frac{1}{t}, \quad u_3 = -\frac{1}{4t^2}.
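A quick numerical sanity check of the 3 × 3 system above (an illustration, not part of the notes): at a sample point t = 2, the solved derivatives should match u_1' = 1/(2t), u_2' = −1/t², u_3' = 1/(2t³):

```python
import numpy as np

t = 2.0
W = np.array([[t, t**2, t**3],
              [1, 2*t, 3*t**2],
              [0, 2, 6*t]])
rhs = np.array([0.0, 0.0, t**-2])
u_prime = np.linalg.solve(W, rhs)  # expect [1/(2t), -1/t^2, 1/(2t^3)]
```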
A particular solution is
Y = t \cdot \frac{1}{2}\ln t + t^2 \cdot \frac{1}{t} - t^3 \cdot \frac{1}{4t^2} = \frac{t}{2}\ln t + \frac{3}{4} t, \qquad (t > 0).
We see that the last term is a multiple of y_1, so we can drop it and simply take
Y = \frac{t}{2}\ln t, \qquad (t > 0).
2
We can now form the general solution of the non-homogeneous equation:
y = c_1 t + c_2 t^2 + c_3 t^3 + \frac{t}{2}\ln t.
Remark. In the general case, finding solutions of the homogeneous equation with variable coefficients is not easy! We can only handle the constant coefficient case so far.
Chapter 6
We say the transform converges if the limit exists, and diverges if not.
Next we will give examples on computing the Laplace transform of given functions by
definition.
Answer.
F(s) = \mathcal{L}\{f(t)\} = \lim_{A\to\infty} \int_0^A e^{-st}\,dt = \lim_{A\to\infty} \left. -\frac{1}{s} e^{-st} \right|_{t=0}^{A} = \lim_{A\to\infty} -\frac{1}{s}\left( e^{-sA} - 1 \right) = \frac{1}{s}, \qquad (s > 0).
Note that the condition s > 0 is needed to ensure that the limit exists (the limit of e^{-sA} is then 0).
Answer.
F(s) = \mathcal{L}\{f(t)\} = \lim_{A\to\infty}\int_0^A e^{-st} e^{at}\,dt = \lim_{A\to\infty}\int_0^A e^{-(s-a)t}\,dt = \lim_{A\to\infty} \left. -\frac{1}{s-a} e^{-(s-a)t}\right|_0^A
= \lim_{A\to\infty} -\frac{1}{s-a}\left(e^{-(s-a)A} - 1\right) = \frac{1}{s-a}, \qquad (s > a).
Note that the condition s − a > 0 is needed to ensure that the limit exists.
By Example 2 we have
\mathcal{L}\{e^{iat}\} = \frac{1}{s - ia} = \frac{s + ia}{(s-ia)(s+ia)} = \frac{s + ia}{s^2 + a^2} = \frac{s}{s^2+a^2} + i\,\frac{a}{s^2+a^2}.
Comparing the real and imaginary parts, we get
\mathcal{L}\{\cos at\} = \frac{s}{s^2+a^2}, \qquad \mathcal{L}\{\sin at\} = \frac{a}{s^2+a^2}, \qquad (s > 0).
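These two formulas can be spot-checked numerically (an illustration, not part of the notes) by approximating the defining integral with a crude trapezoid rule:

```python
import math

def laplace(f, s, T=40.0, n=200000):
    """Crude trapezoid-rule approximation of the Laplace transform on [0, T]."""
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

s, a = 1.5, 2.0
approx_cos = laplace(lambda t: math.cos(a * t), s)  # should be near s/(s^2+a^2)
approx_sin = laplace(lambda t: math.sin(a * t), s)  # should be near a/(s^2+a^2)
```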
R∞ RA
Remark: Now we will use 0 instead of limA→∞ 0 , without causing confusion.
For piecewise continuous functions, the Laplace transform can be computed by integrating over each piece and adding the results at the end.
We do this by definition:
F(s) = \int_0^\infty e^{-st} f(t)\,dt = \int_0^2 e^{-st}\,dt + \int_2^\infty (t-2) e^{-st}\,dt
= \left. \frac{1}{-s} e^{-st} \right|_{t=0}^{2} + \left[ \left. (t-2)\frac{1}{-s}e^{-st} \right|_{t=2}^{\infty} - \int_2^\infty \frac{1}{-s} e^{-st}\,dt \right]
= \frac{1}{-s}\left(e^{-2s} - 1\right) + (0 - 0) + \frac{1}{s} \left. \frac{1}{-s} e^{-st}\right|_{t=2}^{\infty} = \frac{1}{s}\left(1 - e^{-2s}\right) + \frac{1}{s^2} e^{-2s}.
Remark. Later in Ch 6.3 we will use a different method to deal with discontinuous
(piecewise continuous) functions.
• Solution of initial value problems, with continuous source terms, with examples covering
various cases.
L{f (n) (t)} = sn L{f (t)} − sn−1 f (0) − sn−2 f ′ (0) − · · · − sf (n−2) (0) − f (n−1) (0).
3. L{−tf (t)} = F ′ (s) where F (s) = L{f (t)}. This also implies L{tf (t)} = −F ′ (s).
Remarks:
• Note that Property 2 is useful in differential equations. It shows that each derivative in t causes a multiplication by s in the Laplace transform.
• Property 3 is the counterpart of Property 2. It shows that each derivative in s causes a multiplication by −t in the inverse Laplace transform.
• Property 4 is the first Shift Theorem. A counterpart of it will come later, in Chapter 6.3.
Proof:
1. This follows by definition.
2. By definition,
\mathcal{L}\{f'(t)\} = \int_0^\infty e^{-st} f'(t)\,dt = \left. e^{-st} f(t)\right|_0^\infty - \int_0^\infty (-s) e^{-st} f(t)\,dt = -f(0) + s\mathcal{L}\{f(t)\}.
The second derivative formula follows from that of the first derivative: replacing f by f',
\mathcal{L}\{f''(t)\} = s\mathcal{L}\{f'(t)\} - f'(0) = s\left(s\mathcal{L}\{f(t)\} - f(0)\right) - f'(0) = s^2\mathcal{L}\{f(t)\} - s f(0) - f'(0).
Using these properties, we can more easily find the Laplace transforms of many other functions.
Example 1. From \mathcal{L}\{t^n\} = \frac{n!}{s^{n+1}}, we get \mathcal{L}\{e^{at} t^n\} = \frac{n!}{(s-a)^{n+1}}.
Example 2. From \mathcal{L}\{\sin bt\} = \frac{b}{s^2+b^2}, we get \mathcal{L}\{e^{at}\sin bt\} = \frac{b}{(s-a)^2+b^2}.
Example 3. From \mathcal{L}\{\cos bt\} = \frac{s}{s^2+b^2}, we get \mathcal{L}\{e^{at}\cos bt\} = \frac{s-a}{(s-a)^2+b^2}.
Example 4.
\mathcal{L}\{t^3 + 5t - 2\} = \mathcal{L}\{t^3\} + 5\mathcal{L}\{t\} - 2\mathcal{L}\{1\} = \frac{3!}{s^4} + \frac{5}{s^2} - \frac{2}{s}.
Example 5.
\mathcal{L}\{e^{2t}(t^3 + 5t - 2)\} = \frac{3!}{(s-2)^4} + \frac{5}{(s-2)^2} - \frac{2}{s-2}.
Example 6.
\mathcal{L}\{(t^2+4)e^{2t} - e^{-t}\cos t\} = \frac{2}{(s-2)^3} + \frac{4}{s-2} - \frac{s+1}{(s+1)^2+1},
because
\mathcal{L}\{t^2+4\} = \frac{2}{s^3} + \frac{4}{s}, \quad\Rightarrow\quad \mathcal{L}\{(t^2+4)e^{2t}\} = \frac{2}{(s-2)^3} + \frac{4}{s-2}.
Example 7. Given \mathcal{L}\{e^{at}\} = \frac{1}{s-a}, we get
\mathcal{L}\{t e^{at}\} = -\left(\frac{1}{s-a}\right)' = \frac{1}{(s-a)^2}.
Example 8.
\mathcal{L}\{t\sin bt\} = -\left(\frac{b}{s^2+b^2}\right)' = \frac{2bs}{(s^2+b^2)^2}.
Example 9.
\mathcal{L}\{t\cos bt\} = -\left(\frac{s}{s^2+b^2}\right)' = \cdots = \frac{s^2-b^2}{(s^2+b^2)^2}.
Example 10.
\mathcal{L}^{-1}\left\{\frac{3}{s^2+4}\right\} = \mathcal{L}^{-1}\left\{\frac{3}{2}\cdot\frac{2}{s^2+2^2}\right\} = \frac{3}{2}\,\mathcal{L}^{-1}\left\{\frac{2}{s^2+2^2}\right\} = \frac{3}{2}\sin 2t.
Example 11.
\mathcal{L}^{-1}\left\{\frac{2}{(s+5)^4}\right\} = \mathcal{L}^{-1}\left\{\frac{1}{3}\cdot\frac{6}{(s+5)^4}\right\} = \frac{1}{3}\,\mathcal{L}^{-1}\left\{\frac{3!}{(s+5)^4}\right\} = \frac{1}{3}\,e^{-5t}\,\mathcal{L}^{-1}\left\{\frac{3!}{s^4}\right\} = \frac{1}{3} e^{-5t} t^3.
Example 12.
\mathcal{L}^{-1}\left\{\frac{s+1}{s^2+4}\right\} = \mathcal{L}^{-1}\left\{\frac{s}{s^2+4}\right\} + \frac{1}{2}\,\mathcal{L}^{-1}\left\{\frac{2}{s^2+4}\right\} = \cos 2t + \frac{1}{2}\sin 2t.
Example 13.
\mathcal{L}^{-1}\left\{\frac{s+1}{s^2-4}\right\} = \mathcal{L}^{-1}\left\{\frac{s+1}{(s-2)(s+2)}\right\} = \mathcal{L}^{-1}\left\{\frac{3/4}{s-2} + \frac{1/4}{s+2}\right\} = \frac{3}{4} e^{2t} + \frac{1}{4} e^{-2t},
where the partial fraction decomposition is
\frac{s+1}{(s-2)(s+2)} = \frac{A}{s-2} + \frac{B}{s+2}, \qquad A = \frac{3}{4}, \quad B = \frac{1}{4}.
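The coefficients A and B can be read off by the cover-up method: A is (s+1)/(s+2) evaluated at s = 2, and B is (s+1)/(s−2) evaluated at s = −2. A small exact-arithmetic sketch (illustration only):

```python
from fractions import Fraction

def cover_up(num, other_root, root):
    """Residue of num(s)/((s - root)(s - other_root)) at s = root."""
    s = Fraction(root)
    return num(s) / (s - other_root)

A = cover_up(lambda s: s + 1, -2, 2)   # (2+1)/(2+2) = 3/4
B = cover_up(lambda s: s + 1, 2, -2)   # (-2+1)/(-2-2) = 1/4
```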
Example 14. (Two distinct real roots.) Solve the initial value problem by Laplace transform:
y'' - 3y' - 10y = 2, \qquad y(0) = 1, \quad y'(0) = 2.
Answer. Step 1. Take the Laplace transform of both sides. Let \mathcal{L}\{y(t)\} = Y(s); then
(s^2 - 3s - 10) Y(s) = \frac{2}{s} + s + 2 - 3 = \frac{s^2 - s + 2}{s}, \quad\Rightarrow\quad Y(s) = \frac{s^2 - s + 2}{s(s-5)(s+2)}.
Step 2. Take the inverse Laplace transform to get y(t) = \mathcal{L}^{-1}\{Y(s)\}. The main technique here is partial fractions.
Write
Y(s) = \frac{s^2 - s + 2}{s(s-5)(s+2)} = \frac{A}{s} + \frac{B}{s-5} + \frac{C}{s+2},
and compare the numerators:
s^2 - s + 2 = A(s-5)(s+2) + B s(s+2) + C s(s-5).
The previous equation holds for all values of s. We now choose selected values of s such that only one of the constants A, B, C is non-zero, so we can solve for it:
s = 0:   -10A = 2,  ⇒  A = -1/5;
s = 5:   35B = 22,  ⇒  B = 22/35;
s = -2:  14C = 8,   ⇒  C = 4/7.
Now Y(s) is written as a sum of terms whose inverse transforms we know:
y(t) = A\,\mathcal{L}^{-1}\left\{\frac{1}{s}\right\} + B\,\mathcal{L}^{-1}\left\{\frac{1}{s-5}\right\} + C\,\mathcal{L}^{-1}\left\{\frac{1}{s+2}\right\} = -\frac{1}{5} + \frac{22}{35} e^{5t} + \frac{4}{7} e^{-2t}.
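As a sanity check (not in the notes), one can verify that y(t) = −1/5 + (22/35)e^{5t} + (4/7)e^{−2t} satisfies both the equation y'' − 3y' − 10y = 2 and the initial data:

```python
import math

def y(t):
    return -1/5 + (22/35) * math.exp(5*t) + (4/7) * math.exp(-2*t)

def yp(t):
    return (22/7) * math.exp(5*t) - (8/7) * math.exp(-2*t)

def ypp(t):
    return (110/7) * math.exp(5*t) + (16/7) * math.exp(-2*t)

# residual of  y'' - 3y' - 10y - 2  at a few sample points
residual = max(abs(ypp(t) - 3*yp(t) - 10*y(t) - 2) for t in (0.0, 0.3, 0.7))
ic_ok = abs(y(0) - 1) < 1e-12 and abs(yp(0) - 2) < 1e-12
```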
NB! Pay attention to the roots of the denominator of F(s). Note that the factors (s − 5) and (s + 2) come from the characteristic equation, and the factor s comes from the source term.
• Take Laplace transform on both sides. You will get an algebraic equation for Y (s).
Example 15. (Distinct real roots, but one matches the source term.) Solve the initial
value problem by Laplace transform,
Set s = -1: we get A = -\frac{2}{9}. Set s = 2: we get C = \frac{1}{3}. Set s = 0 (any convenient value of s can be used in this step): we get
-1 = 4A - 2B + C, \quad -1 = -\frac{8}{9} - 2B + \frac{1}{3}, \quad 2B = \frac{4}{9}, \quad B = \frac{2}{9}.
So
Y(s) = -\frac{2}{9}\,\frac{1}{s+1} + \frac{2}{9}\,\frac{1}{s-2} + \frac{1}{3}\,\frac{1}{(s-2)^2},
and
y(t) = \mathcal{L}^{-1}\{Y\} = -\frac{2}{9} e^{-t} + \frac{2}{9} e^{2t} + \frac{1}{3}\, t e^{2t}.
Answer. Before we solve it, let's use the method of undetermined coefficients to find out which terms will appear in the solution.
Y(s) = \frac{s+2}{(s+1)(s^2-2s+10)} = \frac{s+2}{(s+1)\left((s-1)^2+3^2\right)} = \frac{A}{s+1} + \frac{B(s-1)+3C}{(s-1)^2+3^2}.
Compare the numerators:
so
y = y_H + Y = c_1 \cos t + c_2 \sin t + A\cos 2t,
and the Laplace transform would be
Y(s) = c_1 \frac{s}{s^2+1} + c_2 \frac{1}{s^2+1} + A \frac{s}{s^2+4}.
Now let's take the Laplace transform of both sides:
s^2 Y - 2s - 1 + Y = \mathcal{L}\{\cos 2t\} = \frac{s}{s^2+4},
(s^2+1) Y(s) = \frac{s}{s^2+4} + 2s + 1 = \frac{2s^3+s^2+9s+4}{s^2+4},
Y(s) = \frac{2s^3+s^2+9s+4}{(s^2+2^2)(s^2+1)} = \frac{As+B}{s^2+1} + \frac{Cs+2D}{s^2+2^2}.
Comparing numerators, we get
2s^3+s^2+9s+4 = (As+B)(s^2+4) + (Cs+2D)(s^2+1).
One may expand the right-hand side and compare the coefficients of s^3, s^2, s, 1 to find A, B, C, D, but that takes more work.
Let's try setting s to complex numbers.
Set s = i, and remember that i^2 = -1 and i^3 = -i; we have
2i^3 + i^2 + 9i + 4 = (Ai+B)(i^2+4), \quad\text{i.e.}\quad 3 + 7i = 3B + 3Ai, \quad\Rightarrow\quad B = 1, \quad A = \frac{7}{3}.
Set now s = 2i:
-16i - 4 + 18i + 4 = (2Ci + 2D)(-3),
then
0 + 2i = -6D - 6Ci, \quad\Rightarrow\quad D = 0, \quad C = -\frac{1}{3}.
So
Y(s) = \frac{7}{3}\,\frac{s}{s^2+1} + \frac{1}{s^2+1} - \frac{1}{3}\,\frac{s}{s^2+4},
and
y(t) = \frac{7}{3}\cos t + \sin t - \frac{1}{3}\cos 2t.
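A quick check, not in the notes: this y(t) should satisfy y'' + y = cos 2t together with y(0) = 2, y'(0) = 1 (the data implied by the term 2s + 1 above):

```python
import math

def y(t):
    return (7/3)*math.cos(t) + math.sin(t) - (1/3)*math.cos(2*t)

def yp(t):
    return -(7/3)*math.sin(t) + math.cos(t) + (2/3)*math.sin(2*t)

def ypp(t):
    return -(7/3)*math.cos(t) - math.sin(t) + (4/3)*math.cos(2*t)

# residual of  y'' + y - cos 2t  at a few sample points
residual = max(abs(ypp(t) + y(t) - math.cos(2*t)) for t in (0.0, 0.5, 1.3, 2.9))
```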
We see that the method can be applied to higher order equations with constant coefficients. The hard part of the computation is factorizing the characteristic polynomial.
Answer. We have
s^4 Y - 4s^2 - 16Y = 0, \qquad Y(s) = \frac{4s^2}{s^4-16} = \frac{4s^2}{(s^2-4)(s^2+4)}.
Taking the inverse transform, we get the solution.
A very brief review on partial fraction, targeted towards inverse Laplace trans-
form.
The type of terms appearing in the partial fraction is solely determined by the denominator P_m(s). First, we factorize P_m(s) and write it as a product of terms of the following types.
The following table gives the terms in the partial fraction and their corresponding inverse
Laplace transform.
\frac{P_n(s)}{(s-a)(s-b)^2(s-c)^3\left((s-\lambda)^2+\mu^2\right)} = \frac{A}{s-a} + \frac{B_1}{s-b} + \frac{B_2}{(s-b)^2} + \frac{C_1}{s-c} + \frac{C_2}{(s-c)^2} + \frac{C_3}{(s-c)^3} + \frac{D_1(s-\lambda)+D_2\mu}{(s-\lambda)^2+\mu^2}.
Remark. As we have mentioned before, the Laplace transform is mainly used to deal with discontinuous (piecewise continuous) functions. The examples we have seen so far all involve continuous functions; they serve as a way for us to get familiar with the method and the basic techniques involved. Next, we will study discontinuous functions.
• Laplace transform of step functions and functions involving step functions (piecewise
continuous functions),
(Figure: graph of the unit step function u_c(t), which is 0 for t < c and jumps to 1 at t = c.)
Note: It is common to write u(t) = u_0(t), where the step occurs at t = 0; then u_c(t) is u shifted by c units along the t axis, i.e., u_c(t) = u(t − c).
This says that u_c(t)f(t) equals f(t) on the interval [c, ∞), and is 0 everywhere else. We say u_c(t) picks up the interval [c, ∞).
Example 1. Consider
1 - u_c(t) = \begin{cases} 1, & 0 \le t < c, \\ 0, & c \le t. \end{cases}
A plot of this is given below.
(Figure: graph of 1 − u_c(t), equal to 1 on [0, c) and 0 afterwards.)
Similarly, the function (1 − u_c(t))f(t) equals f(t) on the interval [0, c), and 0 everywhere else. We say that the function (1 − u_c(t)) picks up the interval [0, c).
Example 2. Rectangular pulse. Let 0 < a < b < ∞ and consider the function ua (t)−ub (t).
The plot of the function looks like:
(Figure: graph of the rectangular pulse u_a(t) − u_b(t), equal to 1 on [a, b) and 0 elsewhere.)
The function (u_a(t) − u_b(t))f(t) equals f(t) on the interval [a, b), and 0 everywhere else. We say that the function u_a(t) − u_b(t) picks up the interval [a, b).
If we add another function into it:
k(t) = \begin{cases} t^2, & 0 \le t < a, \\ \sin t, & a \le t < b, \\ e^t, & b \le t. \end{cases}
It should be apparent to us now that the functions t^2, \sin t, e^t are "dummies". We could replace them with any functions, so
g(t) = \begin{cases} f_1(t), & 0 \le t < a, \\ f_2(t), & a \le t < b, \\ f_3(t), & b \le t \end{cases}
can be written as
g(t) = f_1(t)\left(1 - u_a(t)\right) + f_2(t)\left(u_a(t) - u_b(t)\right) + f_3(t)\,u_b(t).
Note that in the final form, each function fi (t) is multiplied by the step function that “picks
up” the corresponding interval. Then we add them all up.
Example 4. One also needs to understand how to go the other way: if a function is given in terms of step functions, we must understand its meaning. Now consider
is the shift of f by c units. See figure below.
(Figure: graphs of f(t) and of the shifted function g(t) = u_c(t)f(t − c), which is f moved c units to the right.)
Let F(s) = \mathcal{L}\{f(t)\} be the Laplace transform of f(t). Then the Laplace transform of g(t) is
\mathcal{L}\{g(t)\} = \mathcal{L}\{u_c(t) f(t-c)\} = \int_0^\infty e^{-st} u_c(t) f(t-c)\,dt = \int_c^\infty e^{-st} f(t-c)\,dt.
Make the variable change \tau = t - c, so t = \tau + c and dt = d\tau, and we continue:
\mathcal{L}\{g(t)\} = \int_0^\infty e^{-s(\tau+c)} f(\tau)\,d\tau = e^{-sc}\int_0^\infty e^{-s\tau} f(\tau)\,d\tau = e^{-cs} F(s).
So we conclude
\mathcal{L}\{u_c(t) f(t-c)\} = e^{-cs}\mathcal{L}\{f(t)\} = e^{-cs} F(s),
which is equivalent to
\mathcal{L}^{-1}\{e^{-cs} F(s)\} = u_c(t) f(t-c).
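A numerical sanity check of the shift rule (an illustration, not part of the notes): for f(t) = sin t and c = 1, the transform of u_1(t) sin(t − 1) should equal e^{−s}/(s² + 1):

```python
import math

def laplace(f, s, T=60.0, n=300000):
    """Crude trapezoid-rule approximation of the Laplace transform on [0, T]."""
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

s, c = 1.0, 1.0
shifted = lambda t: math.sin(t - c) if t >= c else 0.0  # u_c(t) f(t-c), f = sin
approx = laplace(shifted, s)
exact = math.exp(-c * s) / (s**2 + 1)                   # e^{-cs} F(s)
```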
Note that now we are only considering the domain t ≥ 0, so u_0(t) = 1 for all t ≥ 0. This is the famous second Shift Theorem.
In general, the transform of a product is NOT the product of the transforms:
\mathcal{L}\{f(t)g(t)\} \ne \mathcal{L}\{f(t)\}\cdot\mathcal{L}\{g(t)\}.
In the remaining part of this Chapter, we will deal with equations with discontinuous (or
impulsive) source terms. We shall be convinced by now that the key steps in the computation
are taking Laplace transform and inverse Laplace transforms for discontinuous (piecewise
continuous) functions, using the second shift Theorem. Once we are fluent with these two
steps, we can deal with any equations!
Laplace transforms of piecewise continuous functions. In the following examples we compute Laplace transforms of piecewise continuous functions with the help of the unit step function and the second Shift Theorem.
Example 5. Given
f(t) = \begin{cases} \sin t, & 0 \le t < \frac{\pi}{4}, \\ \sin t + \cos\left(t - \frac{\pi}{4}\right), & \frac{\pi}{4} \le t. \end{cases}
It can be rewritten in terms of the unit step function as
f(t) = \sin t + u_{\pi/4}(t)\cos\left(t - \frac{\pi}{4}\right).
(Or, writing out each interval,
f(t) = \sin t\left(1 - u_{\pi/4}(t)\right) + \left(\sin t + \cos\left(t-\frac{\pi}{4}\right)\right) u_{\pi/4}(t) = \sin t + u_{\pi/4}(t)\cos\left(t-\frac{\pi}{4}\right),
which gives the same answer.)
And the Laplace transform of f is
F(s) = \mathcal{L}\{\sin t\} + \mathcal{L}\left\{u_{\pi/4}(t)\cos\left(t-\frac{\pi}{4}\right)\right\} = \frac{1}{s^2+1} + e^{-\frac{\pi}{4}s}\,\frac{s}{s^2+1}.
Example 6. Given
f(t) = \begin{cases} t, & 0 \le t < 1, \\ 1, & 1 \le t. \end{cases}
It can be rewritten in terms of the unit step function as
Example 7. Given
f(t) = \begin{cases} 0, & 0 \le t < 2, \\ t + 3, & 2 \le t. \end{cases}
We can rewrite it in terms of the unit step function as
Example 8. Given
g(t) = \begin{cases} 1, & 0 \le t < 2, \\ t^2, & 2 \le t. \end{cases}
We can rewrite it in terms of the unit step function as
Observe that t^2 = (t-2)^2 + 4(t-2) + 4; we have
g(t) = 1 + \left[(t-2)^2 + 4(t-2) + 3\right] u_2(t).
Example 9. Given
f(t) = \begin{cases} 0, & 0 \le t < 3, \\ e^t, & 3 \le t < 4, \\ 0, & 4 \le t. \end{cases}
Example 10.
F(s) = \frac{1-e^{-2s}}{s^3} = \frac{1}{s^3} - e^{-2s}\,\frac{1}{s^3}.
We know that \mathcal{L}^{-1}\{\frac{1}{s^3}\} = \frac{1}{2}t^2, so we have
f(t) = \mathcal{L}^{-1}\{F(s)\} = \frac{1}{2}t^2 - u_2(t)\,\frac{1}{2}(t-2)^2 = \begin{cases} \frac{1}{2}t^2, & 0 \le t < 2, \\ \frac{1}{2}t^2 - \frac{1}{2}(t-2)^2, & 2 \le t. \end{cases}
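One can cross-check Example 10 numerically (illustration only): transforming the recovered piecewise f(t) at s = 1 should reproduce F(1) = 1 − e^{−2}:

```python
import math

def laplace(f, s, T=60.0, n=300000):
    """Crude trapezoid-rule approximation of the Laplace transform on [0, T]."""
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

# recovered inverse: t^2/2 on [0, 2), then t^2/2 - (t-2)^2/2
f = lambda t: 0.5 * t**2 - (0.5 * (t - 2)**2 if t >= 2 else 0.0)
approx = laplace(f, 1.0)
exact = 1.0 - math.exp(-2.0)   # F(1) = (1 - e^{-2})/1^3
```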
Example 11. Given
F(s) = \frac{e^{-3s}}{s^2+7s+12} = e^{-3s}\,\frac{1}{(s+4)(s+3)} = e^{-3s}\left[\frac{A}{s+4} + \frac{B}{s+3}\right].
By partial fractions, we find A = -1 and B = 1. So
f(t) = \mathcal{L}^{-1}\{F(s)\} = u_3(t)\left[A e^{-4(t-3)} + B e^{-3(t-3)}\right] = u_3(t)\left[-e^{-4(t-3)} + e^{-3(t-3)}\right].
Next we study initial value problems with discontinuous forcing. We will start with an example.
Example 1. (Damped system with force, complex roots.) Solve the following initial value problem:
y'' + 2y' + 2y = g(t), \qquad g(t) = \begin{cases} 0, & 0 \le t < 1, \\ 2, & 1 \le t, \end{cases} \qquad y(0) = 1, \quad y'(0) = 0.
which gives
Y(s) = \frac{2e^{-s}}{s(s^2+2s+2)} + \frac{s+2}{s^2+2s+2}.
Note that the first term is caused by the source (the forced response), and the second term comes from the solution of the homogeneous equation, without source.
Now we need to find the inverse Laplace transform of Y(s). We rewrite
Y(s) = e^{-s}\,\frac{2}{s\left((s+1)^2+1\right)} + \frac{(s+1)+1}{(s+1)^2+1}.
Writing \frac{2}{s((s+1)^2+1)} = \frac{A}{s} + \frac{B(s+1)+C}{(s+1)^2+1}, i.e. 2 = A\left((s+1)^2+1\right) + \left(B(s+1)+C\right)s:
Set s = 0: we get 2 = 2A, so A = 1.
Set s = -1: we get 2 = A - C, so C = A - 2 = -1.
Compare the s^2-terms: 0 = A + B, so B = -A = -1.
We now have
Y(s) = e^{-s}\left[\frac{1}{s} - \frac{(s+1)+1}{(s+1)^2+1}\right] + \frac{(s+1)+1}{(s+1)^2+1}.
We now take the inverse Laplace transform. The second term is easy:
\mathcal{L}^{-1}\left\{\frac{(s+1)+1}{(s+1)^2+1}\right\} = e^{-t}(\cos t + \sin t).
For the first term, we apply the 2nd Shift Theorem because of the factor e^{-s}. We get
y(t) = u_1(t)\left[1 - e^{-(t-1)}\left(\cos(t-1) + \sin(t-1)\right)\right] + e^{-t}(\cos t + \sin t).
Remark: There are other ways to work out the partial fractions.
Extra question: What happens as t → ∞?
Answer: All the terms with exponential factors go to zero, so y → 1 in the limit. We can view this system as a spring-mass system with damping. Since g(t) becomes the constant 2 for large t, and the particular solution (which is also the steady state) with 2 on the right-hand side is y = 1, this provides the limit for y.
Further observation:
• Actually the solution consists of two parts: the forced response and the homogeneous solution.
• Furthermore, g has a discontinuity at t = 1, and we see a response in the solution at t = 1 as well, in the term u_1(t).
Example 2. (Undamped system with force, pure imaginary roots.) Solve the following initial value problem:
y'' + 4y = g(t) = \begin{cases} 0, & 0 \le t < \pi, \\ 4, & \pi \le t < 2\pi, \\ 0, & 2\pi \le t, \end{cases} \qquad y(0) = 1, \quad y'(0) = 0.
Rewrite
g(t) = 4\left(u_\pi(t) - u_{2\pi}(t)\right), \qquad \mathcal{L}\{g\} = e^{-\pi s}\,\frac{4}{s} - e^{-2\pi s}\,\frac{4}{s}.
So
s^2 Y - s + 4Y = \frac{4}{s}\left(e^{-\pi s} - e^{-2\pi s}\right).
Solve it for Y:
Y(s) = \left(e^{-\pi s} - e^{-2\pi s}\right)\frac{4}{s(s^2+4)} + \frac{s}{s^2+4} = \frac{4e^{-\pi s}}{s(s^2+4)} - \frac{4e^{-2\pi s}}{s(s^2+4)} + \frac{s}{s^2+4}.
By partial fractions, \frac{4}{s(s^2+4)} = \frac{1}{s} - \frac{s}{s^2+4}, so
\mathcal{L}^{-1}\left\{\frac{4}{s(s^2+4)}\right\} = 1 - \cos 2t.
Now we take the inverse Laplace transform of Y:
y(t) = u_\pi(t)\left(1 - \cos 2(t-\pi)\right) - u_{2\pi}(t)\left(1 - \cos 2(t-2\pi)\right) + \cos 2t
= \left(u_\pi(t) - u_{2\pi}(t)\right)(1 - \cos 2t) + \cos 2t
= \cos 2t + \begin{cases} 1 - \cos 2t, & \pi \le t < 2\pi, \\ 0, & \text{otherwise} \end{cases}
= \text{homogeneous solution} + \text{forced response}.
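A numerical check (illustration only): away from the jump points t = π and t = 2π, the solution above should satisfy y'' + 4y = g(t); we verify it with a centered finite difference:

```python
import math

def y(t):
    # cos 2t, except equal to 1 on [pi, 2*pi):  cos 2t + (1 - cos 2t) = 1 there
    if math.pi <= t < 2 * math.pi:
        return 1.0
    return math.cos(2 * t)

def g(t):
    return 4.0 if math.pi <= t < 2 * math.pi else 0.0

h = 1e-4
def residual(t):
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h**2  # centered difference for y''
    return abs(ypp + 4 * y(t) - g(t))

# sample points strictly inside each smooth piece
worst = max(residual(t) for t in (1.0, 4.0, 7.0))
```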
Example 3. In Example 2, let
g(t) = \begin{cases} 0, & 0 \le t < 4, \\ e^t, & 4 \le t < 5, \\ 0, & 5 \le t. \end{cases}
Find Y(s).
Answer. Rewrite
g(t) = e^t\left(u_4(t) - u_5(t)\right) = e^4 e^{t-4}\, u_4(t) - e^5 e^{t-5}\, u_5(t),
so
G(s) = \mathcal{L}\{g(t)\} = e^4 e^{-4s}\,\frac{1}{s-1} - e^5 e^{-5s}\,\frac{1}{s-1}.
Taking the Laplace transform of the equation, we get
(s^2+4) Y(s) = G(s) + s, \qquad Y(s) = \left(e^4 e^{-4s} - e^5 e^{-5s}\right)\frac{1}{(s-1)(s^2+4)} + \frac{s}{s^2+4}.
Remark: We see that the first term will give the forced response, and the second term is from
the homogeneous equation.
The students may work out the inverse transform as a practice.
Example 4. (Undamped system with force; Example 2 from the book, p. 334.)
y'' + 4y = g(t), \qquad y(0) = 0, \quad y'(0) = 0, \qquad g(t) = \begin{cases} 0, & 0 \le t < 5, \\ (t-5)/5, & 5 \le t < 10, \\ 1, & 10 \le t. \end{cases}
(s^2+4) Y(s) = G(s), \qquad Y(s) = \frac{G(s)}{s^2+4} = \frac{1}{5}\, e^{-5s}\,\frac{1}{s^2(s^2+4)} - \frac{1}{5}\, e^{-10s}\,\frac{1}{s^2(s^2+4)}.
Work out the partial fractions:
H(s) \doteq \frac{1}{s^2(s^2+4)} = \frac{A}{s} + \frac{B}{s^2} + \frac{Cs+2D}{s^2+4};
one gets A = 0, B = \frac{1}{4}, C = 0, D = -\frac{1}{8}. So
h(t) \doteq \mathcal{L}^{-1}\left\{\frac{1}{s^2(s^2+4)}\right\} = \mathcal{L}^{-1}\left\{\frac{1}{4}\cdot\frac{1}{s^2} - \frac{1}{8}\cdot\frac{2}{s^2+2^2}\right\} = \frac{1}{4} t - \frac{1}{8}\sin 2t.
Going back to y(t):
y(t) = \mathcal{L}^{-1}\{Y\} = \frac{1}{5}\, u_5(t)\, h(t-5) - \frac{1}{5}\, u_{10}(t)\, h(t-10)
= \frac{1}{5}\, u_5(t)\left[\frac{1}{4}(t-5) - \frac{1}{8}\sin 2(t-5)\right] - \frac{1}{5}\, u_{10}(t)\left[\frac{1}{4}(t-10) - \frac{1}{8}\sin 2(t-10)\right]
= \begin{cases} 0, & 0 \le t < 5, \\ \frac{1}{20}(t-5) - \frac{1}{40}\sin 2(t-5), & 5 \le t < 10, \\ \frac{1}{4} - \frac{1}{40}\left(\sin 2(t-5) - \sin 2(t-10)\right), & 10 \le t. \end{cases}
Note that for t ≥ 10, we have y(t) = \frac{1}{4} + R\cos(2t+\delta) for some amplitude R and phase \delta.
The plots of g and y are given in the book. Physical meaning and qualitative nature of the solution: the source g(t) is known as ramp loading. During the interval 0 < t < 5, g = 0 and the initial conditions are all 0, so the solution remains 0. For large time t, g = 1; a particular solution is Y = \frac{1}{4}. Adding the homogeneous solution, we should have y = \frac{1}{4} + c_1\sin 2t + c_2\cos 2t for t large. We see this is actually the case: the solution is an oscillation around the constant \frac{1}{4} for large t.
One can think of this function as the limit of a rectangular wave with area equal to 1:
\delta(t) = \lim_{\tau\to 0^+} \frac{1}{2\tau}\left(u(t+\tau) - u(t-\tau)\right).
Recall u(t) is the unit step function. One can visualize this with graphs.
The impulse function can be shifted:
\delta(t-a) = 0 \ (t \ne a), \qquad \int_{a-\tau}^{a+\tau} \delta(t-a)\,dt = 1 \quad (\tau > 0).
This means: integrating f(t)δ(t − a) over any interval that contains t = a gives the value of f at t = a.
Example 1. Solve
Physical interpretation of the equation: Think of the spring-mass system, initially at rest.
Then at time t = π, it gets a hit. BANG!
which gives
Y(s) = \frac{e^{-\pi s}}{s^2+4s+5} = e^{-\pi s}\,\frac{1}{(s+2)^2+1}.
Taking the inverse transform, we get
y(t) = u_\pi(t)\, e^{-2(t-\pi)}\sin(t-\pi) = \begin{cases} 0, & 0 < t < \pi, \\ e^{-2(t-\pi)}\sin(t-\pi), & t \ge \pi. \end{cases}
6.6 Convolution*
We observe that we often need to find
\mathcal{L}^{-1}\{F(s)G(s)\}.
Let
H(s) = F(s)G(s), \quad f(t) = \mathcal{L}^{-1}\{F(s)\}, \quad g(t) = \mathcal{L}^{-1}\{G(s)\}, \quad h(t) = \mathcal{L}^{-1}\{H(s)\}.
Then
\mathcal{L}^{-1}\{F(s)G(s)\} = (f*g)(t) \doteq \int_0^t f(t-\tau)\, g(\tau)\,d\tau, \qquad 0 \le t < \infty,    (A)
provided that the integral exists. The integral in (A) is known as the convolution of f and g.
Properties of the convolution integral: these can easily be proved from the definition of the convolution integral; we skip the proofs.
Watch out! In general, (f*1)(t) \ne f(t). For example, let f(t) = \cos t; then
(f*1)(t) = \int_0^t \cos(t-\tau)\,d\tau = \left[-\sin(t-\tau)\right]_{\tau=0}^{\tau=t} = \sin t.
This is because
(f*1)(t) = \int_0^t f(t-\tau)\,d\tau \ne f(t).
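A numeric illustration (not in the notes) of (cos ∗ 1)(t) = sin t, using a simple midpoint-rule approximation of the convolution integral:

```python
import math

def conv_with_one(f, t, n=200000):
    """Midpoint-rule approximation of (f * 1)(t) = integral_0^t f(t - tau) dtau."""
    h = t / n
    return sum(f(t - (k + 0.5) * h) for k in range(n)) * h

t = 1.2
approx = conv_with_one(math.cos, t)  # should be close to sin(1.2)
```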
Example 2. Let g(t) = \delta(t) be the impulse function. Then G(s) = 1. Following the convolution Theorem, we have
\mathcal{L}^{-1}\{F(s)\} = \mathcal{L}^{-1}\{F(s)G(s)\} = (f*g)(t) = \int_0^t f(t-\tau)\,\delta(\tau)\,d\tau = f(t),
Answer. The piecewise definition of these functions, in particular, the property of local
support of these functions, provides a great opportunity to illustrate the graphical aspects of
the convolution integral. The value of the function (f ∗ g)(t) depends on the value of t in a
piecewise way.
If 0 ≤ t < 2, the graphs of f (t − τ ) and g(τ ) do not have common support, therefore the
product is 0, and so is f ∗ g.
If 2 ≤ t ≤ 3, the functions f(t − τ) and g(τ) are both non-zero on the interval 2 ≤ τ < t. Then f ∗ g is the area of the overlapping triangle, i.e., (f*g)(t) = \frac{1}{2}(t-2)^2.
If 3 ≤ t ≤ 4, the functions f(t − τ) and g(τ) are both non-zero on the interval t ≤ τ < 4. Then f ∗ g is the area of the overlapping trapezoid, i.e., (f*g)(t) = \frac{1}{2} - \frac{1}{2}(t-3)^2.
If 4 ≤ t < ∞, the graphs of f (t − τ ) and g(τ ) do not have common support, therefore the
product is 0, and so is f ∗ g.
The graph of (f ∗ g)(t) is given in plot (f) in Figure 6.1.
Proof. We have
F(s) = \int_0^\infty e^{-s\xi} f(\xi)\,d\xi, \qquad G(s) = \int_0^\infty e^{-s\tau} g(\tau)\,d\tau.
Then
F(s)G(s) = \int_0^\infty e^{-s\xi} f(\xi)\,d\xi \cdot \int_0^\infty e^{-s\tau} g(\tau)\,d\tau = \int_0^\infty e^{-s\tau} g(\tau)\left[\int_0^\infty e^{-s\xi} f(\xi)\,d\xi\right] d\tau
= \int_0^\infty g(\tau)\left[\int_0^\infty e^{-s(\tau+\xi)} f(\xi)\,d\xi\right] d\tau.
We now make a variable change: for fixed \tau, let t = \xi + \tau, so dt = d\xi, and t = \tau when \xi = 0. We have
F(s)G(s) = \int_0^\infty g(\tau)\left[\int_\tau^\infty e^{-st} f(t-\tau)\,dt\right] d\tau = \int_0^\infty \int_\tau^\infty e^{-st} f(t-\tau)\, g(\tau)\,dt\,d\tau.
(Figure 6.1: graphs of f(τ), g(τ) and the sliding factor f(t − τ) for the cases (a) the setup, (b) 0 ≤ t < 2 where the integral is 0, (c) 2 ≤ t < 3 where it equals 0.5(t − 2)², (d) 3 ≤ t < 4 where it equals 0.5 − 0.5(t − 3)², (e) 4 ≤ t < ∞ where it is 0 again, and (f) the resulting plot of (f ∗ g)(t).)
Example. Find the inverse Laplace transform of
H(s) = \frac{a}{s^2(s^2+a^2)}.
y' = ay + g(t), \qquad y(0) = y_0.
We can solve it by the method of the integrating factor, where \mu(t) = e^{-at}. We have
(\mu(t)\, y(t))' = \mu(t)\, g(t), \qquad \left(e^{-at} y(t)\right)' = e^{-at} g(t).
The Laplace transform of y(t), computed from this expression, is
Y(s) = y_0\,\mathcal{L}\{e^{at}\} + \mathcal{L}\{e^{at}\}\,\mathcal{L}\{g(t)\} = \frac{y_0}{s-a} + \frac{G(s)}{s-a}.
We can now compare. Taking the Laplace transform directly of the differential equation, we get
sY(s) - y(0) = aY(s) + G(s), \qquad Y(s) = \frac{y_0 + G(s)}{s-a},
which of course gives us the same result.
We have
(as^2 + bs + c)\, Y(s) - (as+b)\, y_0 - a\, y_0' = G(s),
so
Y(s) = \frac{(as+b)\, y_0 + a\, y_0'}{as^2+bs+c} + \frac{G(s)}{as^2+bs+c} \doteq \Phi(s) + \Psi(s).
Taking the inverse transform, we get y = \phi(t) + \psi(t). We see clearly now that \phi(t) is the part of the solution caused by the non-zero initial conditions, and \psi(t) is the system's response to the source term g(t). We observe that we can write
\Psi(s) = U(s)G(s),
where U(s) is the solution of the problem in Example 3. This function U is known as the transfer function, and we can express the solution using the convolution
\psi(t) = (u*g)(t) = \int_0^t u(t-\tau)\, g(\tau)\,d\tau.
Chapter 7
x_1 = y, \qquad x_2 = x_1' = y'.
Then
\begin{cases} x_1' = x_2, \\ x_2' = y'' = \frac{1}{a}\left(g(t) - b x_2 - c x_1\right), \end{cases} \qquad \begin{cases} x_1(0) = \alpha, \\ x_2(0) = \beta. \end{cases}
Observation: any 2nd order equation can be rewritten as a system of 2 first order equations.
Example 1. Given
y'' + 5y' - 10y = \sin t, \qquad y(0) = 2, \quad y'(0) = 4.
Rewrite it as a system of first order equations: let x_1 = y and x_2 = y' = x_1'; then
\begin{cases} x_1' = x_2, \\ x_2' = y'' = -5x_2 + 10x_1 + \sin t, \end{cases} \qquad \text{I.C.'s:} \quad \begin{cases} x_1(0) = 2, \\ x_2(0) = 4. \end{cases}
We can do the same thing for any higher order equation. For an n-th order differential equation, set
x_1 = y, \quad x_2 = y', \quad \cdots, \quad x_n = y^{(n-1)};
we get
x_1' = y' = x_2,
x_2' = y'' = x_3,
 \vdots
x_{n-1}' = y^{(n-1)} = x_n,
x_n' = y^{(n)} = F(t, x_1, x_2, \dots, x_n),
with the corresponding source terms.
(Optional) Conversely, we can convert a 1st order system into a higher order equation.
Example 2. Given the system, eliminating x_2 leads to
\frac{3}{2} x_1' - \frac{1}{2} x_1'' = -x_1 + x_1', \qquad\text{i.e.}\qquad x_1'' - x_1' - 2x_1 = 0,
with the given initial conditions.
Definition of a solution: a set of functions x_1(t), x_2(t), \dots, x_n(t) that satisfies the differential equations and the initial conditions.
A matrix of size m × n:
A = (a_{i,j}) = \begin{pmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & & \vdots \\ a_{m,1} & \cdots & a_{m,n} \end{pmatrix}.
• Addition: A + B = (aij ) + (bij ) = (aij + bij )
• Product: for A · B = C, the entry c_{i,j} is the inner product of the i-th row of A and the j-th column of B. Example:
\begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} x & y \\ u & v \end{pmatrix} = \begin{pmatrix} ax+bu & ay+bv \\ cx+du & cy+dv \end{pmatrix}.
Example 1. The system
\begin{cases} x_1 - x_2 + 3x_3 = 4, \\ 2x_1 + 5x_3 = 0, \\ x_2 - x_3 = 7 \end{cases}
can be expressed as
\begin{pmatrix} 1 & -1 & 3 \\ 2 & 0 & 5 \\ 0 & 1 & -1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 4 \\ 0 \\ 7 \end{pmatrix}.
Example 2.
\begin{cases} x_1' = a(t)x_1 + b(t)x_2 + g_1(t), \\ x_2' = c(t)x_1 + d(t)x_2 + g_2(t) \end{cases} \quad\Rightarrow\quad \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}' = \begin{pmatrix} a(t) & b(t) \\ c(t) & d(t) \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} g_1(t) \\ g_2(t) \end{pmatrix}.
Some properties:
• Determinant det(A):
\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc,
\det\begin{pmatrix} a & b & c \\ u & v & w \\ x & y & z \end{pmatrix} = avz + bwx + cuy - xvc - ywa - zub.
The following statements are equivalent:
– (1) A is invertible;
– (2) A is non-singular;
– (3) det(A) ≠ 0;
– (4) the row vectors of A are linearly independent;
– (5) the column vectors of A are linearly independent;
– (6) all eigenvalues of A are non-zero.
7.3 Eigenvalues and eigenvectors
Eigenvalues and eigenvectors of A (here only for A a 2 × 2 real matrix): λ is a scalar, ~v a column vector with ~v ≢ 0.
Definition: If A~v = λ~v, then (λ, ~v) is an (eigenvalue, eigenvector) pair of A. It is also called an eigen-pair of A.
Remark: If ~v is an eigenvector, then α~v for any α ≠ 0 is also an eigenvector, because
We see that det(A − λI) is a polynomial of degree 2 (if A is 2 × 2) in λ, and it is also called
the characteristic polynomial of A. We need to find its roots.
Now, let's find the eigenvector ~v_1 for λ_1 = −1: let ~v_1 = (a, b)^T. Then
(A - \lambda_1 I)\vec v_1 = \vec 0 \quad\Rightarrow\quad \begin{pmatrix} 1-(-1) & 1 \\ 4 & 1-(-1) \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \quad\Rightarrow\quad \begin{pmatrix} 2 & 1 \\ 4 & 2 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},
so
2a + b = 0; choose a = 1, then b = -2, \quad\Rightarrow\quad \vec v_1 = \begin{pmatrix} 1 \\ -2 \end{pmatrix}.
Finally, we compute the eigenvector ~v_2 = (c, d)^T for λ_2 = 3:
(A - \lambda_2 I)\vec v_2 = \vec 0 \quad\Rightarrow\quad \begin{pmatrix} 1-3 & 1 \\ 4 & 1-3 \end{pmatrix}\begin{pmatrix} c \\ d \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \quad\Rightarrow\quad \begin{pmatrix} -2 & 1 \\ 4 & -2 \end{pmatrix}\begin{pmatrix} c \\ d \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},
so
2c - d = 0; choose c = 1, then d = 2, \quad\Rightarrow\quad \vec v_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.
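These eigen-pairs of A = [[1, 1], [4, 1]] can be confirmed with numpy (an illustration, not part of the notes):

```python
import numpy as np

A = np.array([[1.0, 1.0], [4.0, 1.0]])
vals, vecs = np.linalg.eig(A)
vals = np.sort(vals)  # expect eigenvalues [-1, 3]
```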
Example 2. Eigenvalues can be complex numbers. Let
A = \begin{pmatrix} 2 & -9 \\ 4 & 2 \end{pmatrix}.
Let's first find the eigenvalues:
\det(A - \lambda I) = \det\begin{pmatrix} 2-\lambda & -9 \\ 4 & 2-\lambda \end{pmatrix} = (2-\lambda)^2 + 36 = 0, \quad\Rightarrow\quad \lambda_{1,2} = 2 \pm 6i.
We see that λ_2 = \bar\lambda_1, the complex conjugate. The same happens to the eigenvectors, i.e., \vec v_2 = \bar{\vec v}_1, so we only need to find one. Take λ_1 = 2 + 6i and compute \vec v = (v^1, v^2)^T:
(A - \lambda_1 I)\vec v = \vec 0, \qquad \begin{pmatrix} -6i & -9 \\ 4 & -6i \end{pmatrix}\begin{pmatrix} v^1 \\ v^2 \end{pmatrix} = \vec 0,
-6i\, v^1 - 9 v^2 = 0; choose v^1 = 1, so v^2 = -\frac{2}{3} i,
so
\vec v_1 = \begin{pmatrix} 1 \\ -\frac{2}{3} i \end{pmatrix}, \qquad \vec v_2 = \bar{\vec v}_1 = \begin{pmatrix} 1 \\ \frac{2}{3} i \end{pmatrix}.
Example 4. But sometimes a repeated eigenvalue can have more than one corresponding eigenvector. Consider the identity matrix A = I; then obviously λ_1 = λ_2 = 1. To find the eigenvectors, we compute
(A - I)\vec v = \vec 0, \qquad \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} v^1 \\ v^2 \end{pmatrix} = \vec 0.
This is automatically satisfied, without any constraint on v^1, v^2. We can then choose any two linearly independent vectors, for example
\vec v_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad \vec v_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.
We say that, in this case, the double eigenvalue has two linearly independent eigenvectors.
Note this behavior is essentially different from that in Example 3!
7.4 Basic theory of systems of first order linear equation
General form of a system of first order equations, written in matrix-vector form:
\vec x' = P(t)\vec x + \vec g.
The corresponding homogeneous system is
\vec x' = P(t)\vec x.
Superposition: If ~x1 (t) and ~x2 (t) are two solutions of the homogeneous system, then any
linear combination c1 ~x1 + c2 ~x2 is also a solution.
The Wronskian of vector-valued functions is defined as
W[\vec x_1(t), \vec x_2(t), \dots, \vec x_n(t)] = \det X(t),
where X is the matrix whose columns are the vectors \vec x_1(t), \vec x_2(t), \dots, \vec x_n(t).
If det X(t) ≠ 0, then \vec x_1(t), \vec x_2(t), \dots, \vec x_n(t) is a set of linearly independent functions.
A set of linearly independent solutions ~x1 (t), ~x2 (t), · · · , ~xn (t) is said to be a fundamental
set of solutions.
The general solution is the linear combination of these solutions, i.e.
• Step III: Form two solutions: ~z1 = eλ1 t~v1 , ~z2 = eλ2 t~v2 .
• Step IV: Check that ~z1 , ~z2 are linearly independent: the Wronskian
Example 1. Solve
\vec x' = A\vec x, \qquad A = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}.
First, find the eigenvalues of A. By an example in 7.3, we have
\lambda_1 = -1, \quad \vec v_1 = \begin{pmatrix} 1 \\ -2 \end{pmatrix}; \qquad \lambda_2 = 3, \quad \vec v_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix},
so
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} c_1 e^{-t} + c_2 e^{3t} \\ -2c_1 e^{-t} + 2c_2 e^{3t} \end{pmatrix}.
As t → ∞, we have
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \approx \begin{pmatrix} c_2 e^{3t} \\ 2c_2 e^{3t} \end{pmatrix} = c_2 e^{3t}\begin{pmatrix} 1 \\ 2 \end{pmatrix}.
This means x_1 → \frac{1}{2} x_2 asymptotically.
• What happens when t → −∞? Looking at x_1/x_2, we see that as t → −∞ we have
\frac{x_1}{x_2} \approx \frac{c_1 e^{-t}}{-2c_1 e^{-t}} = -\frac{1}{2}.
• If c_1 = 0, then
\frac{x_1}{x_2} = \frac{c_2 e^{3t}}{2c_2 e^{3t}} = \frac{1}{2},
so the trajectory is the straight line x_1 = \frac{1}{2} x_2. Note that this is exactly the direction of \vec v_2. Since λ_2 = 3 > 0, the trajectory moves away from 0.
• If c_2 = 0, then
\frac{x_1}{x_2} = \frac{c_1 e^{-t}}{-2c_1 e^{-t}} = -\frac{1}{2},
so the trajectory is another straight line, x_1 = -\frac{1}{2} x_2. Note that this is exactly the direction of \vec v_1. Since λ_1 = -1 < 0, the trajectory moves towards 0.
• In the general case where c_1, c_2 are both non-zero, the trajectories start (asymptotically, as t → −∞) along the line x_1 = -\frac{1}{2} x_2, and approach the line x_1 = \frac{1}{2} x_2 asymptotically as t grows.
(Figure: phase portrait of the saddle point, with trajectories coming in along the direction of \vec v_1 and leaving along the direction of \vec v_2.)
Definition: If A has two real eigenvalues of opposite signs, the origin (critical point) is
called a saddle point.
Notion of stability (in layman's terms). For solutions near a critical point, as time goes on:
(1) if the solutions move away, the critical point is unstable;
(2) if the solutions approach the critical point, it is asymptotically stable;
(3) if the solutions stay nearby but do not approach the critical point, it is stable, but not asymptotically stable.
• If c1 = 0, then the solution is ~x = c2 eλ2 t~v2 . We see that the solution vector is a scalar
multiple of ~v2 . This means a line parallel to ~v2 through the origin is a trajectory. Since
λ2 > 0, solutions |~x| → ∞ along this line, so the arrows are pointing away from the
origin.
• The other half is similar: if c2 = 0, then the solution is ~x = c1 eλ1 t~v1 . We see that the
solution vector is a scalar multiple of ~v1 . This means a line parallel to ~v1 through the
origin is a trajectory. Since λ1 < 0, solutions approach 0 along this line, so the arrows
are pointing toward the origin.
• Now these two lines cut the plane into 4 regions, and we need to draw at least one trajectory in each region. In each region we have the general case, i.e., c1 ≠ 0 and c2 ≠ 0, and we need the asymptotic behavior: as t → −∞ the term c1 e^{λ1 t} ~v1 dominates, while as t → +∞ the term c2 e^{λ2 t} ~v2 dominates. We see these are exactly the two straight lines we just made. This means all trajectories come from the direction of ~v1, and approach the direction of ~v2 as t grows. See the plot below.
(Phase portraits of the saddle point, drawn for two eigenvector configurations: all trajectories come in from the direction of ~v1 and leave along the direction of ~v2.)
If the two real, distinct eigenvalues have the same sign, the situation is quite different.
Example 3. Consider ~x' = A~x with
\[ A = \begin{pmatrix} -3 & 2 \\ 1 & -2 \end{pmatrix}. \]
Find the general solution and sketch the phase portrait.
Answer.
• Eigenvalues of A:
\[ \det(A-\lambda I) = \det\begin{pmatrix} -3-\lambda & 2 \\ 1 & -2-\lambda \end{pmatrix} = (-3-\lambda)(-2-\lambda) - 2 = \lambda^2 + 5\lambda + 4 = (\lambda+1)(\lambda+4) = 0, \]
so λ1 = −1 and λ2 = −4, with eigenvectors ~v1 = (1, 1)^T and ~v2 = (−2, 1)^T.
• The general solution is
\[ \vec{x}(t) = c_1 e^{\lambda_1 t}\vec{v}_1 + c_2 e^{\lambda_2 t}\vec{v}_2 = c_1 e^{-t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2 e^{-4t}\begin{pmatrix} -2 \\ 1 \end{pmatrix}. \]
Phase portrait:
• If c1 = 0, then ~x = c2 eλ2 t~v2 , so the straight line through the origin in the direction of ~v2
is a trajectory. Since λ2 < 0, the arrows point toward the origin.
• If c2 = 0, then ~x = c1 eλ1 t~v1 , so the straight line through the origin in the direction of ~v1
is a trajectory. Since λ1 < 0, the arrows point toward the origin.
So all trajectories come into the picture in the direction of ~v2 , and approach the origin
in the direction of ~v1 . See the plot below.
(Phase portrait of the sink node: all arrows point toward the origin; trajectories come in from the direction of ~v2 and approach the origin along the direction of ~v1.)
In the previous example, if λ1 > 0, λ2 > 0, say λ1 = 1 and λ2 = 4, and ~v1 , ~v2 are the same,
then the phase portrait will look the same, but with all arrows going away from 0.
Definition: If λ1 ≠ λ2 are real with the same sign, the critical point ~x = 0 is called a node.
If λ1 > 0, λ2 > 0, this node is called a source.
If λ1 < 0, λ2 < 0, this node is called a sink.
A sink is stable, and a source is unstable.
Example 4. (Source node) Suppose we know the eigenvalues and eigenvectors of A are
\[ \lambda_1 = 3, \quad \lambda_2 = 4, \qquad \vec{v}_1 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \quad \vec{v}_2 = \begin{pmatrix} 1 \\ -3 \end{pmatrix}. \]
(1) Find the general solution for ~x′ = A~x; (2) sketch the phase portrait.
Answer. (1) The general solution is simple; just use the formula
\[ \vec{x} = c_1 e^{\lambda_1 t}\vec{v}_1 + c_2 e^{\lambda_2 t}\vec{v}_2 = c_1 e^{3t}\begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2 e^{4t}\begin{pmatrix} 1 \\ -3 \end{pmatrix}. \]
(2) Phase portrait: since λ2 > λ1 > 0, the solutions approach the direction of ~v2 as time grows. As t → −∞, ~x ≈ c1 e^{λ1 t} ~v1, so trajectories leave the origin tangent to ~v1. See the plot below.
(Phase portrait of the source node: all arrows point away from the origin, leaving tangent to ~v1 and approaching the direction of ~v2.)
Summary:
(1). If λ1 and λ2 are real and with opposite sign: the origin is a saddle point, and it’s unstable;
(2). If λ1 and λ2 are real and with same sign: the origin is a node.
If λ1 , λ2 > 0, it’s a source node, and it’s unstable;
If λ1 , λ2 < 0, it’s a sink node, and it’s asymptotically stable;
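The summary above can be packaged as a small helper. This is a hypothetical sketch (the function name and return strings are our choices, not from the notes):

```python
def classify_real(lam1, lam2):
    """Classify the origin for x' = Ax when both eigenvalues are real,
    distinct and nonzero (a hypothetical helper)."""
    if lam1 * lam2 < 0:
        return "saddle point, unstable"
    if lam1 > 0 and lam2 > 0:
        return "source node, unstable"
    return "sink node, asymptotically stable"

print(classify_real(-1, 3))    # saddle point, unstable
print(classify_real(3, 4))     # source node, unstable
print(classify_real(-1, -4))   # sink node, asymptotically stable
```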
7.6 Complex eigenvalues
Suppose A has complex eigenvalues λ = α ± iβ with eigenvector ~v = ~vr + i~vi. One solution can be written as
\[ \vec{z} = e^{\lambda t}\vec{v} = e^{(\alpha+i\beta)t}(\vec{v}_r + i\vec{v}_i) = e^{\alpha t}(\cos\beta t + i\sin\beta t)(\vec{v}_r + i\vec{v}_i) = e^{\alpha t}(\cos\beta t\,\vec{v}_r - \sin\beta t\,\vec{v}_i) + i\,e^{\alpha t}(\sin\beta t\,\vec{v}_r + \cos\beta t\,\vec{v}_i). \]
Both the real part and the imaginary part are solutions, so the general solution is
\[ \vec{x} = c_1 e^{\alpha t}(\cos\beta t\,\vec{v}_r - \sin\beta t\,\vec{v}_i) + c_2 e^{\alpha t}(\sin\beta t\,\vec{v}_r + \cos\beta t\,\vec{v}_i). \]
Example 1. (pure imaginary eigenvalues.) Find the general solution and sketch the phase portrait of the system
\[ \vec{x}' = A\vec{x}, \qquad A = \begin{pmatrix} 0 & -4 \\ 1 & 0 \end{pmatrix}. \]
The eigenvalues are λ1,2 = ±2i, so the solutions are periodic in t.
Phase portrait:
• ~x is a periodic function, so all trajectories are closed curves around the origin.
• They do not intersect with each other. This follows from the uniqueness of the solution.
• The arrows are pointing either clockwise or counter clockwise, determined by A. In
this example, take ~x = (1, 0)T , a point on the x1 -axis. By the differential equations, we
get ~x ′ = A~x = (0, 1)T , which is a vector pointing upward. So the arrows are counter-
clockwise.
See plot below.
(Plot: closed curves around the origin in the (x1, x2)-plane, traversed counterclockwise.)
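A quick numerical check of this example (a NumPy sketch of ours): the eigenvalues are pure imaginary, and the field at (1, 0)^T points straight up, confirming the counterclockwise rotation:

```python
import numpy as np

# A from Example 1; the eigenvalues should be +-2i (a center).
A = np.array([[0.0, -4.0],
              [1.0,  0.0]])

w = np.linalg.eig(A)[0]
print(np.round(w, 6))             # pure imaginary eigenvalues +-2i

# The field at (1, 0) points straight up, so rotation is counterclockwise.
print(A @ np.array([1.0, 0.0]))   # [0. 1.]
```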
Definition. The origin in this case is called a center. A center is stable (b/c solutions
don’t blow up), but is not asymptotically stable (b/c solutions don’t approach the origin as
time goes).
If the complex eigenvalues have non-zero real part, the situation is still different.
Example 2. (Complex eigenvalues with non-zero real part) Consider
\[ \vec{x}' = A\vec{x}, \qquad A = \begin{pmatrix} 3 & -2 \\ 4 & -1 \end{pmatrix}, \]
which has eigenvalues λ1,2 = 1 ± 2i, so α = 1 and β = 2.
Phase portrait. The solution is a growing oscillation due to the factor e^t. If this factor were not present (i.e., if the eigenvalues were pure imaginary), the solutions would be perfect oscillations, whose trajectories are closed curves around the origin, as for a center. But with the e^t factor we get spiral curves. Since α = 1 > 0, all arrows point away from the origin.
To determine the direction of rotation, we need to go back to the original equation and
take a look at the directional field.
Consider the point (x1 = 1, x2 = 0), then ~x′ = A~x = (3, 4)T . The arrow should point up
with slope 4/3.
At the point ~x = (0, 1)T , we have ~x′ = (−2, −1)T .
Therefore, the spirals rotate counterclockwise. We do not stress the exact shape of the spirals. (Plot: a counterclockwise spiral in the (x1, x2)-plane, with trajectories moving away from the origin.)
In this case, the origin (the critical point) is called a spiral point. The origin in this example is an unstable critical point since α > 0.
Remark: If α < 0, then all arrows go towards the origin, and the origin is a stable critical point. An example is provided in the textbook; we will go through it here.
Example 3. Consider
\[ \vec{x}' = \begin{pmatrix} -\frac{1}{2} & 1 \\ -1 & -\frac{1}{2} \end{pmatrix}\vec{x}. \]
The eigenvalues and eigenvectors are
\[ \lambda_{1,2} = -\frac{1}{2} \pm i, \qquad \vec{v} = \begin{pmatrix} 1 \\ \pm i \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \pm i\begin{pmatrix} 0 \\ 1 \end{pmatrix}. \]
Since the formula for the general solution is not so “friendly” to memorize, we use a different
approach.
We know that one solution is
\[ \vec{z} = e^{\lambda_1 t}\vec{v}_1 = e^{(-\frac{1}{2}+i)t}\left[\begin{pmatrix} 1 \\ 0 \end{pmatrix} + i\begin{pmatrix} 0 \\ 1 \end{pmatrix}\right]. \]
This is a complex-valued function. We know the real part and the imaginary part are both solutions, so we work them out:
\[ \vec{z} = e^{-\frac{1}{2}t}\left[\cos t\begin{pmatrix} 1 \\ 0 \end{pmatrix} - \sin t\begin{pmatrix} 0 \\ 1 \end{pmatrix}\right] + i\,e^{-\frac{1}{2}t}\left[\sin t\begin{pmatrix} 1 \\ 0 \end{pmatrix} + \cos t\begin{pmatrix} 0 \\ 1 \end{pmatrix}\right]. \]
The general solution is therefore
\[ \vec{x} = c_1 e^{-\frac{1}{2}t}\begin{pmatrix} \cos t \\ -\sin t \end{pmatrix} + c_2 e^{-\frac{1}{2}t}\begin{pmatrix} \sin t \\ \cos t \end{pmatrix}. \]
If c2 = 0, we have
\[ x_1^2 + x_2^2 = \left(e^{-\frac{1}{2}t}\right)^2 c_1^2. \]
In general, if c1 ≠ 0 and c2 ≠ 0, we can show that
\[ x_1^2 + x_2^2 = \left(e^{-\frac{1}{2}t}\right)^2\left(c_1^2 + c_2^2\right). \]
The trajectories are therefore spirals, with arrows pointing toward the origin. To determine which direction they rotate, we check a point on the x1-axis:
\[ \vec{x} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad \vec{x}' = A\vec{x} = \begin{pmatrix} -\frac{1}{2} \\ -1 \end{pmatrix}. \]
So the spirals rotate clockwise, and the origin is a stable equilibrium point. See the picture below.
(Plot: a clockwise spiral converging to the origin in the (x1, x2)-plane.)
Connections. At this point, we could make some connections between the 2nd order
equations and the 2 × 2 system of 1st order equations. Consider a 2nd order homogeneous
equation with constant coefficients
ay ′′ + by ′ + cy = 0. (7.1)
The characteristic equation is
ar 2 + br + c = 0. (7.2)
We know that the solutions depend mainly on the roots of the characteristic equation. We
had detailed discussions in Chapter 3.
We can perform the standard variable change and rewrite this into a system. Indeed, let
\[ x_1 = y, \qquad x_2 = y'. \]
Then we get
\[ x_1' = x_2, \qquad x_2' = -\frac{b}{a}x_2 - \frac{c}{a}x_1. \tag{7.3} \]
In matrix-vector notation, this gives
\[ \vec{x}' = A\vec{x}, \qquad A = \begin{pmatrix} 0 & 1 \\ -\frac{c}{a} & -\frac{b}{a} \end{pmatrix}. \tag{7.4} \]
We now compute the eigenvalues of A. We have
\[ \det(A - \lambda I) = -\lambda\left(-\frac{b}{a} - \lambda\right) + \frac{c}{a} = 0 \quad\Rightarrow\quad a\lambda^2 + b\lambda + c = 0. \]
We see that the eigenvalues are the same as the roots for the characteristic equation in (7.2).
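This correspondence is easy to verify numerically. The sketch below (the choice a, b, c = 1, 3, 2 is ours) compares the roots of the characteristic equation with the eigenvalues of the matrix in (7.4):

```python
import numpy as np

# Sample equation y'' + 3y' + 2y = 0; characteristic roots are r = -1, -2.
a, b, c = 1.0, 3.0, 2.0
A = np.array([[0.0, 1.0],
              [-c / a, -b / a]])   # companion matrix from (7.4)

roots = np.sort(np.roots([a, b, c]))   # roots of a r^2 + b r + c = 0
eigs = np.sort(np.linalg.eig(A)[0])    # eigenvalues of A

print(np.allclose(roots, eigs))        # True
print(np.round(eigs, 6))               # [-2. -1.]
```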
7.7 Fundamental Matrices*
The concepts discussed here are a bit abstract; they require some fluency in linear algebra. We will make the discussion in connection with linear homogeneous systems with
constant coefficients, i.e., ~x′ = A~x, where A is an n × n matrix. (A more general result is
possible, for variable coefficients A(t), but we will not go into that.)
Let ~x(i) (t) for i = 1, · · · , n be a set of fundamental solutions, i.e.,
\[ \frac{d}{dt}\vec{x}^{(i)}(t) = A\,\vec{x}^{(i)}(t). \]
We define a matrix Ψ(t) whose ith-column is the vector ~x(i) (t), for i = 1, · · · , n. This matrix
is called the fundamental matrix for the system. Note that the fundamental matrix is not
unique.
One interesting property of the fundamental matrix Ψ(t) is that it satisfies the same
ODE, i.e.,
Ψ′ (t) = AΨ(t).
This is true because each column of Ψ(t) satisfies the same equation:
\[ \Psi'(t) = \left[\frac{d}{dt}\vec{x}^{(1)}(t);\ \cdots;\ \frac{d}{dt}\vec{x}^{(n)}(t)\right] = \left[A\vec{x}^{(1)}(t);\ \cdots;\ A\vec{x}^{(n)}(t)\right] = A\left[\vec{x}^{(1)}(t);\ \cdots;\ \vec{x}^{(n)}(t)\right] = A\Psi(t). \]
The general solution of the system can be written as
\[ \vec{x} = \Psi(t)\vec{c}, \qquad \vec{c} = (c_1, \cdots, c_n)^T. \]
For the initial value problem with ~x(0) = ~x0, it is convenient to use the particular fundamental matrix normalized at t = 0,
\[ \Phi(t) \,\dot{=}\, \Psi(t)\Psi^{-1}(0), \]
which satisfies Φ(0) = I.
The Matrix exponential exp(At): Recall the scalar equation x' = ax and its solution x(t) = e^{at} x_0. Since the solution of our system can be written as
\[ \vec{x} = \Phi(t)\vec{x}_0, \]
this suggests that the matrix-valued function Φ(t) might have an exponential character. We will explore this aspect.
Recall the power expansion (Taylor series) for the exponential function:
\[ \exp(at) = 1 + \sum_{k=1}^{\infty} \frac{a^k t^k}{k!}. \]
By analogy, we now define
\[ \exp(At) \,\dot{=}\, I + \sum_{k=1}^{\infty} \frac{A^k t^k}{k!}. \]
Each term in the series is an n × n matrix. Assuming that the series converges for all t (which can actually be proved!), the series defines a matrix-valued function.
(2) If A is a diagonal matrix, i.e., A = D = diag(d1, · · · , dn), then each term in the series is a diagonal matrix, and we have
\[ \exp(Dt) = I + \sum_{k=1}^{\infty} \frac{\mathrm{diag}(d_1^k, \cdots, d_n^k)\,t^k}{k!} = \mathrm{diag}\!\left(1 + \sum_{k=1}^{\infty}\frac{d_1^k t^k}{k!},\ \cdots,\ 1 + \sum_{k=1}^{\infty}\frac{d_n^k t^k}{k!}\right) = \mathrm{diag}\!\left(e^{d_1 t}, \cdots, e^{d_n t}\right). \]
Differentiating the series term by term (which can be justified) gives
\[ \frac{d}{dt}\exp(At) = A\exp(At). \]
Recall that the fundamental matrix Φ(t) satisfies
Φ′ = AΦ, Φ(0) = I.
Thus, Φ(t) and exp(At) solve the same IVP. By uniqueness, Φ(t) = exp(At), and the solution of the initial value problem ~x' = A~x, ~x(0) = ~x0 can be written as
\[ \vec{x} = \exp(At)\vec{x}_0. \]
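As a numerical sanity check (a sketch of ours, not production code), the truncated series can be compared with computing exp(At) through the eigen-decomposition A = TDT^{-1}:

```python
import numpy as np

def expm_series(A, t, terms=40):
    """Truncated series I + sum_{k>=1} A^k t^k / k! (a sketch of ours)."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A * (t / k)   # builds A^k t^k / k! incrementally
        result = result + term
    return result

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])
t = 0.5

# Diagonalization route: exp(At) = T exp(Dt) T^{-1}.
lam, T = np.linalg.eig(A)
via_diag = T @ np.diag(np.exp(lam * t)) @ np.linalg.inv(T)

print(np.allclose(expm_series(A, t), via_diag))   # True
```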
Diagonalizable matrices. If A is not a diagonal matrix, then the system ~x′ = A~x is
called coupled, meaning that the different xi's affect each other, so the equations must be solved simultaneously. This presents some difficulty. For some matrices A, one could decouple it through a
variable change, and make it a diagonal system (which is uncoupled, such that each equation
could be solved separately.)
Assume now A has n eigenvalues λi and n linearly independent eigenvectors ~v^(i). We form a matrix T whose columns are the eigenvectors,
\[ T \,\dot{=}\, \left[\vec{v}^{(1)};\ \cdots;\ \vec{v}^{(n)}\right]. \]
Since the eigenvectors are linearly independent, the matrix T is non-singular and invertible. We have
\[ AT = \left[A\vec{v}^{(1)};\ \cdots;\ A\vec{v}^{(n)}\right] = \left[\lambda_1\vec{v}^{(1)};\ \cdots;\ \lambda_n\vec{v}^{(n)}\right] = TD, \qquad D = \mathrm{diag}(\lambda_1, \cdots, \lambda_n). \tag{*} \]
Multiplying (*) by T^{-1} from the left, we get
\[ T^{-1}AT = D, \]
so we have transformed A into a diagonal matrix. This process is called diagonalization, and such a matrix A is called diagonalizable. Note also that
\[ T^{-1}AT = D \quad\Rightarrow\quad A = TDT^{-1}. \]
Now perform the variable change ~x = T~y in ~x' = A~x. Then T~y' = AT~y, so
\[ \vec{y}' = T^{-1}AT\,\vec{y} = D\vec{y}. \]
Then ~y solves the diagonal (decoupled) system, whose fundamental matrix is Q(t) = diag(e^{λ1 t}, · · · , e^{λn t}). The solution of ~x is then recovered as ~x = T~y = TQ(t)~c.
Remark. Note that, combining these computations, we accidentally proved that if A is diagonalizable then
\[ \exp(At) = T\exp(Dt)\,T^{-1}. \]
7.8 Repeated eigenvalues
Example 2. (Repeated eigenvalues, with one eigenvector) Let
\[ A = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix}. \]
Then
\[ \det(A - \lambda I) = \det\begin{pmatrix} 1-\lambda & -1 \\ 1 & 3-\lambda \end{pmatrix} = (1-\lambda)(3-\lambda) + 1 = \lambda^2 - 4\lambda + 4 = (\lambda - 2)^2 = 0, \]
so λ1 = λ2 = 2. We can find only one eigenvector ~v = (a, b)^T:
\[ (A - \lambda I)\vec{v} = \begin{pmatrix} -1 & -1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \vec{0}, \qquad a + b = 0. \]
Choosing a = 1, then b = −1, we find ~v = (1, −1)^T. Then one solution is
\[ \vec{z}_1 = e^{\lambda t}\vec{v} = e^{2t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}. \]
We need to find a second solution. Let's try ~z2 = t e^{λt} ~v. We have
\[ \vec{z}_2' = e^{\lambda t}\vec{v} + \lambda t e^{\lambda t}\vec{v} = (1 + \lambda t)e^{\lambda t}\vec{v}, \qquad A\vec{z}_2 = A t e^{\lambda t}\vec{v} = t e^{\lambda t}(A\vec{v}) = t e^{\lambda t}\lambda\vec{v} = \lambda t e^{\lambda t}\vec{v}. \]
If ~z2 were a solution, we would need ~z2' = A~z2, i.e., 1 + λt = λt for all t, which doesn't work.
Try something else: ~z2 = t e^{λt} ~v + ~η e^{λt}. (Here ~η is a constant vector to be determined later.) Then
\[ \vec{z}_2' = (1 + \lambda t)e^{\lambda t}\vec{v} + \lambda\vec{\eta}\,e^{\lambda t} = \lambda t e^{\lambda t}\vec{v} + e^{\lambda t}(\vec{v} + \lambda\vec{\eta}), \qquad A\vec{z}_2 = \lambda t e^{\lambda t}\vec{v} + A\vec{\eta}\,e^{\lambda t}. \]
Since ~z2 must satisfy ~z2' = A~z2, comparing terms we see we must have
\[ \vec{v} + \lambda\vec{\eta} = A\vec{\eta}, \qquad\text{i.e.,}\qquad (A - \lambda I)\vec{\eta} = \vec{v}. \]
This is the equation one uses to solve for ~η. Such an ~η is called a generalized eigenvector corresponding to the eigenvalue λ.
Back to the original problem: to compute this ~η, we plug in A and λ and get
\[ \begin{pmatrix} -1 & -1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} \eta_1 \\ \eta_2 \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \qquad \eta_1 + \eta_2 = -1. \]
We can choose η1 = 0, then η2 = −1, and so
\[ \vec{\eta} = \begin{pmatrix} 0 \\ -1 \end{pmatrix}. \]
So the general solution is
\[ \vec{x} = c_1\vec{z}_1 + c_2\vec{z}_2 = c_1 e^{\lambda t}\vec{v} + c_2\left(t e^{\lambda t}\vec{v} + e^{\lambda t}\vec{\eta}\right) = c_1 e^{2t}\begin{pmatrix} 1 \\ -1 \end{pmatrix} + c_2\left[t e^{2t}\begin{pmatrix} 1 \\ -1 \end{pmatrix} + e^{2t}\begin{pmatrix} 0 \\ -1 \end{pmatrix}\right]. \]
Phase portrait:
• As t → ∞, we have |~x| → ∞ unbounded.
• As t → −∞, we have ~x → 0.
• For the general case, with c1 ≠ 0 and c2 ≠ 0, a similar thing happens. As t → ∞, the dominant term in ~x is t e^{λt} ~v, so the solution approaches the direction of ~v. As t → −∞, the dominant term is still t e^{λt} ~v, so the solution again approaches the direction of ~v; but due to the change of sign of t, ~x points in the opposite direction compared with t → ∞. See the plot below.
(Plot: phase portrait of the improper node; trajectories align with the direction of ~v as t → ±∞.)
Remark: If λ < 0, the phase portrait looks the same except with reversed arrows.
Case (2). If there is only one eigenvector, then the origin is called an improper node. The critical point is stable if λ < 0, and unstable if λ > 0.
5. Discuss stability of the critical point: Asymptotically stable if λ < 0, unstable if λ > 0.
Example 2. Find the general solution to the system
\[ \vec{x}' = \begin{pmatrix} -2 & 2 \\ -0.5 & -4 \end{pmatrix}\vec{x}. \]
We start by finding the eigenvalues:
For a general 2 × 2 matrix
\[ A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \]
define
\[ p = a + d = \mathrm{trace}(A), \qquad q = ad - bc = \det(A). \]
Then the eigenvalues are the roots of the characteristic polynomial
\[ \det(A - \lambda I) = (\lambda - a)(\lambda - d) - bc = \lambda^2 - (a + d)\lambda + ad - bc = \lambda^2 - p\lambda + q. \]
Let λ1, λ2 be the two eigenvalues. Then we must have
\[ \lambda_1 + \lambda_2 = p, \qquad \lambda_1\lambda_2 = q. \tag{*} \]
Let ∆ = p² − 4q denote the discriminant of the characteristic polynomial. Then the two roots can be written as
\[ \lambda_{1,2} = \frac{p \pm \sqrt{\Delta}}{2}. \]
We now discuss several cases.
• If ∆ < 0 and p = 0, we have pure imaginary eigenvalues. The equilibrium is a center.
• If ∆ < 0 and p > 0, we have complex eigenvalues with positive real parts. The equilib-
rium is a spiral point, which is unstable.
• If ∆ < 0 and p < 0, we have complex eigenvalues with negative real parts. The equilib-
rium is a spiral point, which is asymptotically stable.
• If ∆ > 0, then the two eigenvalues are real and distinct. Using the relation (*), we see
that:
– If q < 0, then λ1, λ2 have opposite signs, and we have a saddle point, which is unstable.
– If q > 0, then λ1, λ2 have the same sign, and we have a node. If p > 0, they are both positive, so the node is unstable (a source). Otherwise, if p < 0, they are both negative and we have a stable node (a sink).
• Finally, if ∆ = 0, the eigenvalues are repeated, and we have either a proper node or an
improper node. If p > 0, it is unstable, and if p < 0, it is asymptotically stable.
See graph below for an illustration.
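The case analysis above can be wrapped in a small helper function (a hypothetical sketch of ours; the borderline cases q = 0 and the proper/improper distinction at ∆ = 0 are not handled):

```python
def classify(A):
    """Classify the origin of x' = Ax from p = trace, q = det, Delta = p^2 - 4q."""
    (a, b), (c, d) = A
    p, q = a + d, a * d - b * c
    delta = p * p - 4 * q
    if delta < 0:
        if p == 0:
            return "center (stable)"
        return "spiral, " + ("unstable" if p > 0 else "asymptotically stable")
    if q < 0:
        return "saddle (unstable)"
    return "node, " + ("unstable" if p > 0 else "asymptotically stable")

print(classify([[1, 1], [4, 1]]))          # saddle (unstable)
print(classify([[0, -4], [1, 0]]))         # center (stable)
print(classify([[-0.5, 1], [-1, -0.5]]))   # spiral, asymptotically stable
```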
7.9 Summary of Stabilities and types of critical points for lin-
ear systems
For the 2 × 2 system
~x′ = A~x
we see that ~x = (0, 0) is the only critical point if A is invertible.
In a more general setting, the system
\[ \vec{x}' = A\vec{x} - \vec{b} \]
has its critical point at ~x* = A^{−1}~b (when A is invertible), and the change of variable ~y = ~x − ~x* reduces it to ~y' = A~y.
As far as stability is concerned, the sole deciding factor is the sign of the real parts of the eigenvalues: if any eigenvalue has a positive real part, then the critical point is unstable.
9.2: Autonomous systems and their critical points
Let x(t), y(t) be the unknowns; we consider the system
\[ x'(t) = F(x, y), \quad x(t_0) = x_0; \qquad y'(t) = G(x, y), \quad y(t_0) = y_0, \]
for some functions F (x, y), G(x, y) that do not depend on t. Such a system is called au-
tonomous. Typical examples are in population dynamics, which we will see in our examples.
Using matrix-vector form, one could also write an autonomous system as ~x' = ~f(~x). The critical points of the system are the points where both right-hand sides vanish:
\[ F(x, y) = 0, \qquad G(x, y) = 0. \]
Example 1. Find all critical points of the system
\[ x'(t) = (y - x)(1 - x - y), \qquad y'(t) = x(y + 2). \]
Answer. We see that the right-hand sides are already in factorized form, which makes our task easier. We must now require
x = y or x + y = 1,
and
x = 0 or y = −2.
We see that we have 4 combinations:
(1) x = y and x = 0 ⇒ (x, y) = (0, 0);
(2) x = y and y = −2 ⇒ (x, y) = (−2, −2);
(3) x + y = 1 and x = 0 ⇒ (x, y) = (0, 1);
(4) x + y = 1 and y = −2 ⇒ (x, y) = (3, −2).
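The combination procedure can be automated for this example. The sketch below (ours) encodes each linear condition as a·x + b·y = c and solves every pair with NumPy:

```python
import itertools
import numpy as np

# Conditions from the example: y - x = 0 or x + y = 1; and x = 0 or y = -2.
conds_x = [(-1.0, 1.0, 0.0), (1.0, 1.0, 1.0)]
conds_y = [(1.0, 0.0, 0.0), (0.0, 1.0, -2.0)]

points = []
for (a1, b1, c1), (a2, b2, c2) in itertools.product(conds_x, conds_y):
    M = np.array([[a1, b1], [a2, b2]])
    sol = np.linalg.solve(M, np.array([c1, c2]))
    points.append(tuple(float(s) for s in sol))

print(sorted(points))   # [(-2.0, -2.0), (0.0, 0.0), (0.0, 1.0), (3.0, -2.0)]
```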
General strategy.
(1) Factorize the right-hand sides as much as you can.
(2) Find the conditions for each equation.
(3) Make all combinations and solve.
For instance, for a system whose factorized right-hand sides lead to the conditions
\[ x = 0 \ \text{and}\ xy - 3x + 2 = 0, \qquad\text{or}\qquad x = y \ \text{and}\ xy - 3x + 2 = 0, \]
the first combination gives no solution (it would require 2 = 0), while the second gives
\[ x^2 - 3x + 2 = 0, \quad (x - 1)(x - 2) = 0, \quad x = 1 \ \text{or}\ x = 2. \]
This gives us two critical points, (1, 1) and (2, 2).
Example 4. (Competing species) Let x(t), y(t) be the population densities of two species
living in a common habitat, using the same natural resource. They are not in a prey-predator
relation. They simply compete with each other for the resources. The model is
\[ x'(t) = x(M - ax - by), \qquad y'(t) = y(N - cx - dy). \]
The critical points satisfy
\[ x = 0 \ \text{or}\ M - ax - by = 0, \qquad\text{and}\qquad y = 0 \ \text{or}\ N - cx - dy = 0. \]
We now have 4 combinations:
(dies out). The existence of the prey has a positive effect on the growth rate of the predator
(represented by the term dx).
Find the critical points: We have the conditions
\[ x = 0 \ \text{or}\ a - by = 0, \qquad\text{and}\qquad y = 0 \ \text{or}\ -c + dx = 0. \]
Only 2 out of the 4 combinations give critical points. The 2 critical points are (0, 0) and (c/d, a/b).
Recall that for a scalar function f(x), the linearization f(x) ≈ f(x0) + f′(x0)(x − x0) is a good approximation in a small neighborhood of x = x0.
The idea can be extended to vector valued functions. Here, the derivative of the vector-
valued function, however, takes a more complicated form. Using our notation, consider the
system
\[ x'(t) = F(x, y), \qquad y'(t) = G(x, y). \tag{A} \]
Let (xo , yo ) be a critical point such that F (xo , yo ) = 0, G(xo , yo ) = 0.
We introduce the concept of the Jacobian matrix, defined as
\[ J(x, y) \,\dot{=}\, \begin{pmatrix} F_x(x, y) & F_y(x, y) \\ G_x(x, y) & G_y(x, y) \end{pmatrix}. \]
This matrix serves as the derivative of the vector-valued function on the RHS of the system.
We say that we linearize the system (A) at the point (xo , yo ) as
\[ \begin{pmatrix} x \\ y \end{pmatrix}' = J(x_o, y_o)\begin{pmatrix} x - x_o \\ y - y_o \end{pmatrix}. \]
The type and stability of the critical point (xo , yo ) is determined by the eigenvalues of the
Jacobian matrix J(xo , yo ), evaluated at the critical point (xo , yo ).
Example 1. Consider again the system
\[ x'(t) = (y - x)(1 - x - y), \qquad y'(t) = x(y + 2), \]
where the 4 critical points are
\[ (0, 0), \quad (-2, -2), \quad (0, 1), \quad (3, -2). \]
We now determine their type and stability. We first compute the Jacobian matrix. We have
Fx = −1 + 2x, Fy = 1 − 2y, Gx = 2 + y, Gy = x
so
\[ J(x, y) = \begin{pmatrix} -1 + 2x & 1 - 2y \\ 2 + y & x \end{pmatrix}. \]
At (0, 0), we have
\[ J(0, 0) = \begin{pmatrix} -1 & 1 \\ 2 & 0 \end{pmatrix}, \qquad \lambda_1 = 1, \ \lambda_2 = -2: \ \text{saddle point, unstable}. \]
At (−2, −2), we have
\[ J(-2, -2) = \begin{pmatrix} -5 & 5 \\ 0 & -2 \end{pmatrix}, \qquad \lambda_1 = -5, \ \lambda_2 = -2: \ \text{nodal sink, asymptotically stable}. \]
At (0, 1), we have
\[ J(0, 1) = \begin{pmatrix} -1 & -1 \\ 3 & 0 \end{pmatrix}, \qquad \lambda_{1,2} = \frac{-1 \pm i\sqrt{11}}{2}. \]
The eigenvalues are complex with negative real parts. This is a spiral point; it is asymptotically stable.
At (3, −2), we have
\[ J(3, -2) = \begin{pmatrix} 5 & 5 \\ 0 & 3 \end{pmatrix}, \qquad \lambda_1 = 5, \ \lambda_2 = 3: \ \text{nodal source, unstable}. \]
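These evaluations are easy to confirm numerically (a sketch of ours, using the Jacobian computed above):

```python
import numpy as np

def J(x, y):
    # Jacobian from the text: Fx = -1 + 2x, Fy = 1 - 2y, Gx = 2 + y, Gy = x.
    return np.array([[-1 + 2 * x, 1 - 2 * y],
                     [2 + y, x]], dtype=float)

# Eigenvalues at two of the critical points.
print(np.round(np.sort(np.linalg.eig(J(0, 0))[0]), 6))    # -2 and 1: saddle
print(np.round(np.sort(np.linalg.eig(J(3, -2))[0]), 6))   # 3 and 5: nodal source
```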
Example 2. Consider
\[ x'(t) = xy - 6x, \qquad y'(t) = xy - 2x + y - 2, \]
whose critical points are (0, 2) and (−1, 6).
To check their type and stability, we compute the Jacobian matrix
\[ J(x, y) = \begin{pmatrix} y - 6 & x \\ y - 2 & x + 1 \end{pmatrix}. \]
At (0, 2), we have
\[ J(0, 2) = \begin{pmatrix} -4 & 0 \\ 0 & 1 \end{pmatrix}, \qquad \lambda_1 = -4, \ \lambda_2 = 1: \ \text{saddle point, unstable}. \]
At (−1, 6), we have
\[ J(-1, 6) = \begin{pmatrix} 0 & -1 \\ 4 & 0 \end{pmatrix}, \qquad \lambda_{1,2} = \pm 2i: \ \text{center, stable but not asymptotically stable}. \]
4 0
Example 3. We now consider again the prey-predator model, and set in values for the constants. We consider
\[ x'(t) = x(10 - 5y), \qquad y'(t) = y(-6 + x), \]
which has 2 critical points, (0, 0) and (6, 2). The Jacobian matrix is
\[ J(x, y) = \begin{pmatrix} 10 - 5y & -5x \\ y & -6 + x \end{pmatrix}. \]
At (0, 0) we have
\[ J(0, 0) = \begin{pmatrix} 10 & 0 \\ 0 & -6 \end{pmatrix}, \qquad \lambda_1 = 10, \ \lambda_2 = -6: \ \text{saddle point, unstable}. \]
At (6, 2) we have
\[ J(6, 2) = \begin{pmatrix} 0 & -30 \\ 2 & 0 \end{pmatrix}, \qquad \lambda_{1,2} = \pm i\sqrt{60}: \ \text{center, stable but not asymptotically stable}. \]
To see more detailed behavior of the model, we compute the two eigenvectors of J(0, 0) and get ~v1 = (1, 0) and ~v2 = (0, 1). We sketch the trajectories of the solution in the (x, y)-plane in the next plot, where the trajectories rotate around the center counterclockwise.
the predators increase their numbers, the prey population shrinks, until there is very little prey left. Then the predators starve, and their population decays exponentially (dies out). The cycle continues in a periodic way, forever!
Example 4*. As a final example, we consider the model of two competing species.
Suppose that in some enclosed environment there are two species that are competing for
natural resource, and let x(t) and y(t) be their populations at time t. Assume that, if living
alone, each species grows following the logistic equation
\[ x'(t) = x(a_1 - b_1 x), \qquad y'(t) = y(a_2 - b_2 y), \]
where (a1 , a2 ) are the growth rates and (a1 /b1 , a2 /b2 ) are the habitat’s capacities, for x and
y, respectively. When they are competing, then each species will have a negative effect on the
other species. Thus, the system is modified into
\[ x'(t) = x(a_1 - b_1 x - c_1 y), \qquad y'(t) = y(a_2 - b_2 y - c_2 x). \]
The critical points satisfy
\[ x = 0 \ \text{or}\ a_1 - b_1 x - c_1 y = 0, \qquad\text{and}\qquad y = 0 \ \text{or}\ a_2 - b_2 y - c_2 x = 0. \]
We have 4 critical points. The first three are
\[ (x_1, y_1) = (0, 0), \qquad (x_2, y_2) = \left(0, \frac{a_2}{b_2}\right), \qquad (x_3, y_3) = \left(\frac{a_1}{b_1}, 0\right). \]
The 4th critical point solves the linear system
\[ a_1 = b_1 x + c_1 y, \qquad a_2 = b_2 y + c_2 x. \]
Depending on the constants, it might or might not have a solution in the first quadrant where x > 0, y > 0. Assuming that b1 b2 − c1 c2 ≠ 0, the 4th critical point is (x4, y4), where
\[ x_4 = \frac{a_1 b_2 - a_2 c_1}{b_1 b_2 - c_1 c_2}, \qquad y_4 = \frac{b_1 a_2 - a_1 c_2}{b_1 b_2 - c_1 c_2}. \]
If b1 b2 − c1 c2 = 0, then there might be no solutions or infinitely many, depending on the values
of a1 , a2 .
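The formula for (x4, y4) can be sanity-checked with the constants used in Case (1) below (a NumPy sketch of ours):

```python
import numpy as np

# Constants of Case (1): x' = x(1 - x - y), y' = y(0.75 - y - 0.5x).
a1, b1, c1 = 1.0, 1.0, 1.0
a2, b2, c2 = 0.75, 1.0, 0.5

det = b1 * b2 - c1 * c2
x4 = (a1 * b2 - a2 * c1) / det
y4 = (b1 * a2 - a1 * c2) / det
print(x4, y4)   # 0.5 0.5

# Cross-check by solving b1 x + c1 y = a1, c2 x + b2 y = a2 directly.
sol = np.linalg.solve(np.array([[b1, c1], [c2, b2]]), np.array([a1, a2]))
print(np.allclose(sol, [x4, y4]))   # True
```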
We see that the critical points depend on the values of the coefficients. We now consider
two different sets of constants, which would lead to very different dynamics of the populations.
Case (1). Consider
\[ x'(t) = x(1 - x - y), \qquad y'(t) = y(0.75 - y - 0.5x), \]
whose critical points are
\[ (x_1, y_1) = (0, 0), \quad (x_2, y_2) = (0, 0.75), \quad (x_3, y_3) = (1, 0), \quad (x_4, y_4) = (0.5, 0.5). \]
For CP2, the eigenvalues of the Jacobian J(0, 0.75) are λ1 = 0.25 and λ2 = −0.75, of opposite signs. It is a saddle point.
For CP3, the Jacobian matrix is
\[ J_3 = J(1, 0) = \begin{pmatrix} -1 & -1 \\ 0 & 0.25 \end{pmatrix}, \]
with eigenvalues λ1 = −1 and λ2 = 0.25, so CP3 is also a saddle point.
One can sketch a phase portrait based on this information. See Figure 9.4.2 on p. 524 in the textbook.
It is apparent now that CP4 is the attractor for the first quadrant. This means, if x >
0, y > 0 initially, then after a long time they will approach (x4 , y4 ). This is an example of
co-existence of two species.
Case (2). Consider now the same x-equation with a different y-equation,
\[ x'(t) = x(1 - x - y), \qquad y'(t) = y(0.5 - 0.25y - 0.75x), \]
whose critical points are
\[ (x_1, y_1) = (0, 0), \quad (x_2, y_2) = (0, 2), \quad (x_3, y_3) = (1, 0), \quad (x_4, y_4) = (0.5, 0.5). \]
We see both CP1 and CP4 are unstable, while both CP2 and CP3 are stable.
One can use this information to sketch the phase portrait. See Figure 9.4.4 on p. 528 in the textbook.
This implies that, for initial data in the first quadrant, some solutions will approach CP2, while others will approach CP3. There is a curve that separates these two sets of initial data; we call this curve a separatrix. For initial data on the separatrix, the solution will approach CP4; however, such a solution is unstable with respect to small perturbations. This predicts that one of the species will die out, while the other survives and grows to the habitat's capacity. No co-existence is possible. Note that this dynamics is completely different from Case (1).
We observe that the dynamics depends on the property of the two lines
(L1) : a1 − b1 x − c1 y = 0, (L2) : a2 − b2 y − c2 x = 0.
The intersection of these two line gives the 4th critical point, if it lies in the first quadrant.
Along L1, we have x′ = 0. Above L1, we have x′ < 0. Below L1, we have x′ > 0.
Along L2, we have y ′ = 0. Above L2, we have y ′ < 0. Below L2, we have y ′ > 0.
These lines are called the x− and y−nullclines.
There are 4 possible configurations; see Figure 9.4.5 on p. 529 in the textbook. We will take a close look at each, considering only the first quadrant.
Case (a): L2 lies above L1. There are 3 critical points of interest. We see that along L1 we have x′ = 0, y′ > 0, and along L2 we have x′ < 0, y′ = 0. The directional field then indicates that the critical point on the y-axis with coordinates (0, a2/b2) is asymptotically stable. Here is the phase portrait:
(Phase portrait for case (a): trajectories in the first quadrant approach the critical point (0, a2/b2) on the y-axis; L1 lies below L2.)
Case (b): L2 lies below L1. There are 3 critical points of interest. We see that along L1 we have x′ = 0, y′ < 0, and along L2 we have x′ > 0, y′ = 0. The directional field then indicates that the critical point on the x-axis with coordinates (a1/b1, 0) is asymptotically stable. Here is the phase portrait:
(Phase portrait for case (b): trajectories in the first quadrant approach the critical point (a1/b1, 0) on the x-axis; L2 lies below L1.)
Case (c): L1 crosses L2, with the L2 above L1 for small x values, and L2 below L1 for
large x values. There are 4 critical points. Below is the graph of the vector field (on the left)
and the phase portrait (on the right).
(Vector field and phase portrait for case (c).)
Case (d): L1 crosses L2, with the L2 below L1 for small x values, and L2 above L1 for
large x values. There are 4 critical points. Below is the graph of the vector field (on the left)
and the phase portrait (on the right). We see that this is the only case where co-existence is
possible.
(Vector field and phase portrait for case (d).)
In the textbook there are detailed discussions analyzing the values of the constants a1, b1, c1, etc. In conclusion, if the competition is weak, i.e., if b1 b2 > c1 c2, then coexistence is possible. Otherwise, if the competition is strong, i.e., if b1 b2 < c1 c2, only one species can survive.
Chapter 10
Fourier Series
Fourier series will be useful in series solutions for linear 2nd order partial differential equations.
A function f is periodic with period P if
\[ f(x + P) = f(x), \qquad \forall x. \]
The basic periodic building blocks are the trigonometric functions
\[ \sin x, \ \sin 2x, \ \sin 3x, \ \cdots, \qquad \cos x, \ \cos 2x, \ \cos 3x, \ \cdots. \]
Figure 10.1: Some sine functions.
How to compute the Fourier coefficients? Use the orthogonality of the trig set!
Definition: Given two functions u(x), v(x) on [a, b], define their inner product as
\[ (u, v) \,\dot{=}\, \int_a^b u(x)\,v(x)\,dx. \]
Both integrals are 0 because they integrate over several periods of the cosine function, when m ≠ n.
All other identities are proven in a similar way. We skip the details.
Useful identities. For any positive integer m,
\[ \left(\sin\frac{m\pi x}{L},\ \sin\frac{m\pi x}{L}\right) = \left(\cos\frac{m\pi x}{L},\ \cos\frac{m\pi x}{L}\right) = L. \]
Proof: Direct computation gives
\[ \left(\sin\frac{m\pi x}{L},\ \sin\frac{m\pi x}{L}\right) = \int_{-L}^{L}\frac{1}{2}\left(1 - \cos\frac{2m\pi x}{L}\right)dx = \frac{1}{2}(2L) = L. \]
Similarly,
\[ \left(\cos\frac{m\pi x}{L},\ \cos\frac{m\pi x}{L}\right) = L. \]
One may also observe that, by the periodicity properties,
\[ \left(\sin\frac{m\pi x}{L},\ \sin\frac{m\pi x}{L}\right) = \left(\cos\frac{m\pi x}{L},\ \cos\frac{m\pi x}{L}\right). \]
Then, using the trig identity sin²x + cos²x = 1, we have
\[ \left(\sin\frac{m\pi x}{L},\ \sin\frac{m\pi x}{L}\right) + \left(\cos\frac{m\pi x}{L},\ \cos\frac{m\pi x}{L}\right) = \int_{-L}^{L}\left[\sin^2\frac{m\pi x}{L} + \cos^2\frac{m\pi x}{L}\right]dx = 2L, \]
which gives the same answer.
Multiplying the Fourier series
\[ f(x) = \frac{a_0}{2} + \sum_{m=1}^{\infty}\left(a_m\cos\frac{m\pi x}{L} + b_m\sin\frac{m\pi x}{L}\right) \]
by cos(nπx/L) and integrating over a period, we get
\[ \int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx = \int_{-L}^{L}\frac{a_0}{2}\cos\frac{n\pi x}{L}\,dx + \sum_{m=1}^{\infty} a_m\int_{-L}^{L}\cos\frac{m\pi x}{L}\cos\frac{n\pi x}{L}\,dx + \sum_{m=1}^{\infty} b_m\int_{-L}^{L}\sin\frac{m\pi x}{L}\cos\frac{n\pi x}{L}\,dx. \]
By orthogonality, only the term with m = n survives, and we obtain
\[ a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx, \quad n = 1, 2, 3, \cdots \]
Similarly,
\[ b_n = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{n\pi x}{L}\,dx, \quad n = 1, 2, 3, \cdots \]
and
\[ a_0 = \frac{1}{L}\int_{-L}^{L} f(x)\,dx. \]
Note that a0/2 (the constant term in the Fourier series) is the average of f(x) over a period, and the formula for a0 fits the one for an with n = 0.
These formulas for computing the Fourier coefficients are called Euler-Fourier formula.
If the period is 2π, i.e., L = π in the formulas, we get the simpler-looking formulas
\[ a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx, \quad n = 0, 1, 2, \cdots \qquad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx, \quad n = 1, 2, 3, \cdots \]
Example 1. Find the Fourier series for the periodic function f(x) with period 2π given by
\[ f(x) = \begin{cases} -1, & -\pi < x < 0, \\ 1, & 0 < x < \pi, \end{cases} \qquad f(x + 2\pi) = f(x). \]
Answer. We note that f(x) is an odd function, i.e., f(−x) = −f(x). Therefore, integrating over a period, one gets a0 = 0.
For n ≥ 1, we have
\[ a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx = \frac{1}{\pi}\int_{-\pi}^{0}(-\cos nx)\,dx + \frac{1}{\pi}\int_{0}^{\pi}\cos nx\,dx = -\frac{1}{\pi n}\sin nx\Big|_{x=-\pi}^{0} + \frac{1}{\pi n}\sin nx\Big|_{x=0}^{\pi} = 0. \]
Actually, we could get this integral quickly by observing the following: f (x) is an odd function,
and cos nx is an even function. Then, the product f (x) cos nx is an odd function. Therefore,
the integral over an entire period is 0.
Finally, we compute bn:
\[ b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx = \frac{1}{\pi}\int_{-\pi}^{0}(-\sin nx)\,dx + \frac{1}{\pi}\int_{0}^{\pi}\sin nx\,dx \]
\[ = \frac{1}{n\pi}\big(\cos 0 - \cos(-n\pi)\big) - \frac{1}{n\pi}\big(\cos n\pi - \cos 0\big) = \frac{1}{n\pi}(1 - \cos n\pi) + \frac{1}{n\pi}(1 - \cos n\pi) = \frac{2}{n\pi}(1 - \cos n\pi). \]
The actual computation could be shortened by observing the following: sin nx is an odd
function, so f (x) sin nx is an even function. The integrals on [−π, 0] and [0, π] are the same.
So one needs to do only one integral, and multiply the result by 2.
We observe that cos nπ = (−1)^n. Then
\[ b_n = \frac{2}{n\pi}\big(1 - (-1)^n\big) = \begin{cases} \dfrac{4}{n\pi}, & n \text{ odd}, \\[4pt] 0, & n \text{ even}. \end{cases} \]
We can now write out the Fourier series. Since all an's are 0, we will only have sine functions. Also, bn is non-zero only for odd n, and we can write out the first few:
\[ b_1 = \frac{4}{1\pi}, \quad b_3 = \frac{4}{3\pi}, \quad b_5 = \frac{4}{5\pi}, \quad b_7 = \frac{4}{7\pi}, \cdots \]
Note that 4/π is a common factor, which we can take out. This gives
\[ f(x) = \sum_{n=1}^{\infty} b_n\sin nx = \frac{4}{\pi}\left(\sin x + \frac{1}{3}\sin 3x + \frac{1}{5}\sin 5x + \frac{1}{7}\sin 7x + \cdots\right). \]
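The coefficient formula can be confirmed by numerical integration; the following is a sketch of ours using a simple midpoint rule:

```python
import numpy as np

# Square wave from Example 1: f = -1 on (-pi, 0), f = +1 on (0, pi).
# Since f(x) sin(nx) is even, b_n = (2/pi) * integral_0^pi sin(nx) dx;
# we check the closed form b_n = 2(1 - cos(n pi))/(n pi).
def b_n(n, m=100000):
    x = (np.arange(m) + 0.5) * np.pi / m   # midpoints of a grid on [0, pi]
    return (2 / np.pi) * np.sum(np.sin(n * x)) * (np.pi / m)

for n in range(1, 6):
    exact = 2 * (1 - np.cos(n * np.pi)) / (n * np.pi)
    print(n, round(b_n(n), 6), round(exact, 6))
```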
Writing yn(x) for the nth partial sum, the limit lim_{n→+∞} yn(x) (if it converges) gives the whole Fourier series.
The partial sums of Fourier series and the original function f (x) for this example are
plotted together in Fig 10.2.
Example 2. Find the Fourier series of the periodic function f with period p = 2L = 4 given over one period by f(x) = K for −1 < x < 1 and f(x) = 0 for 1 < |x| < 2. We have
\[ a_0 = \frac{1}{2}\int_{-2}^{2} f(x)\,dx = \frac{1}{2}(2K) = K, \]
Figure 10.2: Fourier series, the first few terms, Example 1.
and
\[ a_n = \frac{1}{2}\int_{-2}^{2} f(x)\cos\frac{n\pi x}{2}\,dx = \frac{1}{2}\int_{-1}^{1} K\cos\frac{n\pi x}{2}\,dx = \frac{K}{2}\cdot\frac{2}{n\pi}\sin\frac{n\pi x}{2}\Big|_{x=-1}^{1} = \frac{2K}{n\pi}\sin\frac{n\pi}{2}. \]
The function sin(nπ/2) takes only the values 0, 1, 0, −1 periodically, depending on n. We have
\[ a_n = \begin{cases} 0, & n \text{ even}, \\[2pt] \dfrac{2K}{n\pi}, & n = 1, 5, 9, 13, 17, \cdots \\[6pt] -\dfrac{2K}{n\pi}, & n = 3, 7, 11, 15, 19, \cdots \end{cases} \]
For the bn, note that f(x) is an even function and sin(nπx/2) is an odd function, so the product is an odd function. Integrating over a whole period gives 0, i.e.,
\[ b_n = \frac{1}{2}\int_{-2}^{2} f(x)\sin\frac{n\pi x}{2}\,dx = 0. \]
Note that in this example, there will be no sine functions in the Fourier series! We can now write it out:
\[ f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\frac{n\pi x}{2} = \frac{K}{2} + \frac{2K}{\pi}\left(\cos\frac{\pi x}{2} - \frac{1}{3}\cos\frac{3\pi x}{2} + \frac{1}{5}\cos\frac{5\pi x}{2} - \frac{1}{7}\cos\frac{7\pi x}{2} + \cdots\right). \]
The partial sums of Fourier series and the original function f (x) are plotted together in
Fig 10.3, for K = 1.
Observation.
If f (x) is an odd function, then there are no cosine functions in the Fourier series.
If f (x) is an even function, then there are no sine functions in the Fourier series.
We will see later that this is a general rule!
On the tabular method for integration by parts. We see that if f(x) is some kind of polynomial, we will end up with integrals of the form
\[ \int f(x)\sin\frac{n\pi x}{L}\,dx, \qquad \int f(x)\cos\frac{n\pi x}{L}\,dx. \]
In this case, we need to perform integration by parts multiple times, which could be quite
time consuming. The tabular method makes this computation a bit easier.
In general, this method works best when one of the two functions in the product is a
polynomial, and the other is a function that you can easily find antiderivatives (such as
ex , sin ax, cos ax). Then after differentiating the polynomial several times one obtains zero.
The method may also be extended to work for functions that will repeat themselves, such as
eax sin bx and eax cos bx.
To fix the idea, let's take an example. Consider the integral
\[ \int x^3\cos nx\,dx. \]
Let u = x³ and v = cos nx. Beginning with these functions, we set up a table: in column A we put successive derivatives of u until they become zero; in column B we put successive integrals of v. We get the following result:

    Derivatives of u (column A)      Integrals of v (column B)
    x³                               cos nx
    3x²                              (1/n) sin nx
    6x                               −(1/n²) cos nx
    6                                −(1/n³) sin nx
    0                                (1/n⁴) cos nx
Now take the product of the 1st entry of column A with the 2nd entry of column B, the 2nd entry of column A with the 3rd entry of column B, and so on along the diagonal, until you hit the zero term. These products are given alternating signs, starting with a positive sign, and then added up. We get
\[ \int x^3\cos nx\,dx = x^3\cdot\frac{1}{n}\sin nx - 3x^2\left(-\frac{1}{n^2}\cos nx\right) + 6x\left(-\frac{1}{n^3}\sin nx\right) - 6\cdot\frac{1}{n^4}\cos nx \]
\[ = \frac{x^3}{n}\sin nx + \frac{3x^2}{n^2}\cos nx - \frac{6x}{n^3}\sin nx - \frac{6}{n^4}\cos nx. \]
Example 3. Find the Fourier series of the even periodic function with period 2π given by r(t) = π/2 − |t| for −π ≤ t ≤ π.
Answer. We first note that r(t) is an even function, so we can immediately conclude that bn = 0 for all n. Furthermore, r(t) cos nt is an even function, so to integrate over a period we only need to integrate over half the period and multiply the answer by 2. We now compute an. First,
\[ a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} r(t)\,dt = 0 \qquad \text{(look at the graph of } r \text{ over a period!)}, \]
and, using evenness,
\[ a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} r(t)\cos nt\,dt = \frac{2}{\pi}\int_{0}^{\pi}\left(-t + \frac{\pi}{2}\right)\cos nt\,dt. \]
Let's try the tabular method for integration by parts:

    −t + π/2        cos nt
    −1              (1/n) sin nt
    0               −(1/n²) cos nt
We now have
\[ a_n = \frac{2}{\pi}\left[\left(-t + \frac{\pi}{2}\right)\frac{1}{n}\sin nt - \frac{1}{n^2}\cos nt\right]_{t=0}^{\pi} = \frac{2}{\pi}\left(-\frac{1}{n^2}\cos n\pi + \frac{1}{n^2}\right) = \frac{2}{\pi n^2}\big(1 - (-1)^n\big) = \begin{cases} 0, & n \text{ even}, \\[4pt] \dfrac{4}{n^2\pi}, & n \text{ odd}. \end{cases} \]
We can now write out the Fourier series:
\[ r(t) = \sum_{n=1}^{\infty} a_n\cos nt = \sum_{n\ \mathrm{odd}} \frac{4}{n^2\pi}\cos nt = \frac{4}{\pi}\left(\cos t + \frac{1}{9}\cos 3t + \frac{1}{25}\cos 5t + \cdots\right). \]
The plots of several partial sums and their error are included in Figure 10.4.
We make some observations:
(1). In general, the error decreases as we take more terms.
(2). For a fixed partial sum, the error is larger at the point where the function r(t) has a kink,
and smaller in the region where r(t) is smooth.
(3). After taking 3 terms, the partial sum is already a very good approximation to r(t).
(4). It seems like yn (t) converges to r(t) at every point t.
Example 4. Find the Fourier coefficients for f(x), periodic with p = 2π, given as
\[ f(x) = 2 - 0.5\cos 4x + 4\sin x - 99\sin 100x. \]
Answer. Since the function f here is already given in terms of sine and cosine function,
there is no need to compute the Fourier coefficients. We just need to figure out where each
term would fit, by comparing it with a Fourier series. We have
\[ a_0 = 4, \quad a_4 = -0.5, \quad a_n = 0 \ \ \forall n \neq 0, 4, \]
and
\[ b_1 = 4, \quad b_{100} = -99, \quad b_n = 0 \ \ \forall n \neq 1, 100. \]
Figure 10.4: Fourier series, the first few terms and the errors, Example 3.
10.2 Even and Odd Functions; Fourier sine and Fourier cosine
series.
Through examples we have already observed that, for even and odd functions, the Fourier
series takes simpler forms. We will summarize it here.
A function f (x) is even if
f (−x) = f (x).
The graph of the function is symmetric about the y-axis. Examples include f (x) = 1 and
f (x) = cos nx for any integer n.
A function f (x) is odd if
f (−x) = −f (x).
The graph of the function is symmetric about the origin. Examples include f (x) = sin nx for
any integer n.
Properties:
• Integration of an even function over [−L, L] is twice the integration over [0, L].
We have already observed that, if f (x) is an even function, then its Fourier series will NOT
have sine functions. If f (x) is an odd function, then its Fourier series will NOT have cosine
functions.
This fits intuition: one cannot represent an odd function as a sum of even
functions, and vice versa.
The formulas for the Fourier coefficients could be simplified, as we have already observed.
• If f (x) is an even, periodic function with p = 2L, it has a Fourier cosine series
$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\frac{n\pi x}{L}$$
where
$$a_n = \frac{2}{L}\int_0^L f(x)\cos\frac{n\pi x}{L}\,dx, \qquad n = 0, 1, 2, \cdots$$
• Correspondingly, if f (x) is an odd, periodic function with p = 2L, it has a Fourier sine
series
$$f(x) = \sum_{n=1}^{\infty} b_n\sin\frac{n\pi x}{L}$$
where
$$b_n = \frac{2}{L}\int_0^L f(x)\sin\frac{n\pi x}{L}\,dx, \qquad n = 1, 2, 3, \cdots$$
Note that in each case we only need to integrate over half the period, i.e., over [0, L], because
the integrand is an even function.
Half-range expansion. If a function f (x) is only defined on an interval [0, L], we can
extend/expand the domain into the whole real line by periodic expansion. There are two ways
of doing this:
• Extend f (x) onto the interval [−L, L] such that f is an even function, i.e., f (−x) = f (x),
then extend it into a periodic function with p = 2L;
• Extend f (x) onto the interval [−L, L] such that f is an odd function, i.e., f (−x) =
−f (x), then extend it into a periodic function with p = 2L.
Figure 10.5: even extension (top) and odd extension (bottom), over three periods.
Example 1. Let f (x) = x be defined on the interval x ∈ [0, L]. Sketch 3 periods of the
even and odd extensions of f , and then compute the corresponding Fourier sine or cosine series.
Answer. The graphs of the even and odd extensions are given in Fig. 10.5.
The odd periodic extension turns out to be the "sawtooth wave". We have a Fourier sine
series, with the coefficients
$$b_n = \frac{2}{L}\int_0^L x\sin\frac{n\pi x}{L}\,dx
= \frac{2}{L}\left[\left(\frac{L}{n\pi}\right)^2\sin\frac{n\pi x}{L} - \frac{xL}{n\pi}\cos\frac{n\pi x}{L}\right]_{x=0}^{L}
= \frac{2L}{n\pi}(-1)^{n+1}, \qquad n = 1, 2, 3, \cdots$$
therefore
$$f_{\mathrm{odd}}(x) = \frac{2L}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin\frac{n\pi x}{L}
= \frac{2L}{\pi}\left(\sin\frac{\pi x}{L} - \frac{1}{2}\sin\frac{2\pi x}{L} + \frac{1}{3}\sin\frac{3\pi x}{L} - \frac{1}{4}\sin\frac{4\pi x}{L} + \cdots\right).$$
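As a sanity check, the coefficient formula can be verified against numerical quadrature; here L = 2 is an arbitrary choice made for this demonstration:

```python
import math

L = 2.0  # arbitrary half-period chosen for this check

def bn_numeric(n, steps=20000):
    # (2/L) * integral_0^L x sin(n pi x / L) dx, via the midpoint rule
    h = L / steps
    total = sum((k + 0.5) * h * math.sin(n * math.pi * (k + 0.5) * h / L)
                for k in range(steps))
    return (2 / L) * total * h

def bn_exact(n):
    # the closed form derived above
    return (2 * L / (n * math.pi)) * (-1) ** (n + 1)
```

The two agree to well below 1e-6 for the first several n, for any fixed L.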
The even extension gives a triangle wave (similar to Example 3 in the previous section). It
will have a Fourier cosine series, with coefficients
$$a_0 = \frac{2}{L}\int_0^L f(x)\,dx = \frac{2}{L}\int_0^L x\,dx = \frac{2}{L}\cdot\frac{L^2}{2} = L,$$
$$a_n = \frac{2}{L}\int_0^L x\cos\frac{n\pi x}{L}\,dx
= \frac{2}{L}\left[\left(\frac{L}{n\pi}\right)^2\cos\frac{n\pi x}{L} + \frac{xL}{n\pi}\sin\frac{n\pi x}{L}\right]_{x=0}^{L}
= \frac{2L}{n^2\pi^2}(\cos n\pi - 1) = \frac{2L}{n^2\pi^2}\bigl((-1)^n - 1\bigr), \qquad n = 1, 2, 3, \cdots$$
Therefore a_n = 0 for n even, and a_n = −4L/(n²π²) for n odd. We have the Fourier cosine series
$$f_{\mathrm{even}}(x) = \frac{L}{2} - \sum_{n\ \text{odd}}\frac{4L}{n^2\pi^2}\cos\frac{n\pi x}{L}
= \frac{L}{2} - \frac{4L}{\pi^2}\left(\cos\frac{\pi x}{L} + \frac{1}{9}\cos\frac{3\pi x}{L} + \frac{1}{25}\cos\frac{5\pi x}{L} + \frac{1}{49}\cos\frac{7\pi x}{L} + \cdots\right).$$
n odd
The plots of partial sums and the errors, for the even and odd expansions, are included
in Fig. 10.6 and Fig. 10.7.
This is confirmed by our examples; see Examples 1 and 2 in the previous section, and Figure
10.2 and Figure 10.3.
Indicate the function that the Fourier series of h(x) converges to.
Answer. Instead of working out the Fourier coefficients by direct computation, we will
use the linearity property. Let now
$$g(x) = K/2.$$
The Fourier coefficients for g(x) are simply $\bar a_0 = K$ (so that $\bar a_0/2 = K/2$), with $\bar a_n = \bar b_n = 0$ for all $n \ge 1$.
Figure 10.6: Even and odd extensions, first 4 partial sums, over 3 periods.
Then,
$$h(x) - g(x) = \begin{cases} -K/2, & -\pi < x \le 0,\\ K/2, & 0 < x \le \pi,\end{cases}
= \frac{K}{2}\begin{cases} -1, & -\pi < x \le 0,\\ 1, & 0 < x \le \pi,\end{cases}
= \frac{K}{2}\,f(x),$$
where f (x) is the same function as in Example 1 in the previous chapter, for which we have already
computed the Fourier coefficients, i.e.,
$$a_0 = 0, \qquad a_n = 0, \qquad b_n = \frac{2}{n\pi}\bigl(1-(-1)^n\bigr).$$
By linearity, h(x) = (K/2)f (x) + g(x) will have Fourier coefficients
$$\tilde a_n = \frac{K}{2}a_n + \bar a_n = 0 \ (n \ge 1), \qquad
\tilde b_n = \frac{K}{2}b_n + \bar b_n = \frac{K}{n\pi}\bigl(1-(-1)^n\bigr),$$
which gives
$$h(x) = \frac{K}{2} + \frac{2K}{\pi}\left(\sin x + \frac{1}{3}\sin 3x + \frac{1}{5}\sin 5x + \frac{1}{7}\sin 7x + \cdots\right).$$
Figure 10.7: Even and odd extensions, errors for the first 4 partial sums, over 3 periods.
The Fourier series of h(x) converges to h(x) wherever the function is continuous, and to the
mid-value at discontinuities, i.e., to
$$\bar h(x) = \begin{cases} K/2, & x = -\pi,\\ 0, & -\pi < x < 0,\\ K/2, & x = 0,\\ K, & 0 < x < \pi,\end{cases}
\qquad \bar h(x + 2\pi) = \bar h(x).$$
We give another example, on convergence of Fourier series, in connection with even and
odd periodic extensions.
Example 2. Let f (x) = 2x² − 1 be defined on the interval x ∈ [0, 1]. Sketch 3 periods of
its even and odd periodic extensions. Where do their Fourier cosine and sine series converge
at the points x = 0, 0.5, 1, 100, 151.5?
(Sketch: the even extension on top, the odd extension below, each over three periods.)
We see that the even extension is a continuous function, so Fourier cosine series converges
to the function value, and the convergence is faster.
However, the odd extension is discontinuous at x = 0, ±1, ±2, · · · , and the Fourier sine
series will converge to the mid value of the left and right limits.
We put these values in a table.

x    | 0  | 0.5  | 1 | 100 | 151.5
even | −1 | −0.5 | 1 | −1  | −0.5
odd  | 0  | −0.5 | 0 | 0   | 0.5
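The table entries can be reproduced programmatically: fold each point into one period, evaluate the appropriate extension, and take the midpoint of one-sided limits where the odd extension jumps. A minimal sketch:

```python
def f(x):
    return 2 * x * x - 1                 # given on [0, 1]

def even_ext(x):
    # period-2 even periodic extension (continuous everywhere)
    y = abs(((x + 1) % 2) - 1)           # fold x into [0, 1]
    return f(y)

def odd_ext(x):
    # odd periodic extension, evaluated away from its jumps
    y = ((x + 1) % 2) - 1                # fold x into [-1, 1)
    return f(y) if y >= 0 else -f(-y)

def sine_series_limit(x):
    # the sine series converges to the midpoint of one-sided limits
    eps = 1e-9
    return 0.5 * (odd_ext(x - eps) + odd_ext(x + eps))

pts = [0, 0.5, 1, 100, 151.5]
even_vals = [even_ext(x) for x in pts]
odd_vals = [round(sine_series_limit(x), 6) for x in pts]
```

Both rows of the table above come out exactly.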
$$y(x_1) = y_1, \qquad y(x_2) = y_2.$$
Since the conditions are now given at the two boundary points, this is called a two-point
boundary value problem.
If y_1 = y_2 = 0, we call these homogeneous boundary conditions.
NB! Boundary conditions could be of other forms, such as y′(x1) = 0, etc.
Solution strategy: Find the general solution (as in Chapter 3), then, use the boundary
conditions to determine the constants c1 , c2 .
Example 1. Solve
$$y'' + y = 0, \qquad y(0) = 1, \quad y(\pi/2) = 0.$$
The general solution is y(x) = c₁ cos x + c₂ sin x, and the boundary conditions give
c₁ = 1, c₂ = 0.
Example 2. Boundary conditions could change the solution. In the previous example, we
now assume the boundary conditions
y(0) = 0, y(π) = 2.
y(0) = 0, y(π) = 0.
Example 3. Solve
Answer. Since r 2 + 4 = 0, and r1,2 = ±2i, the general solution for the homogeneous equation
is
yH (x) = c1 cos 2x + c2 sin 2x.
We now find a particular solution for the non-homogeneous equation. We guess the form
Y = A cos x. Then Y″ = −A cos x, so
$$-A\cos x + 4A\cos x = \cos x \quad\Rightarrow\quad 3A = 1, \quad A = \frac{1}{3}.$$
This gives the general solution
$$y(x) = c_1\cos 2x + c_2\sin 2x + \frac{1}{3}\cos x.$$
To check the boundary conditions, we first differentiate y:
$$y'(x) = -2c_1\sin 2x + 2c_2\cos 2x - \frac{1}{3}\sin x.$$
Then,
$$y'(0) = 0 \ \Rightarrow\ 2c_2 = 0 \ \Rightarrow\ c_2 = 0,$$
and
$$y'(\pi) = 0 \ \Rightarrow\ 2c_2 = 0 \ \Rightarrow\ c_2 = 0.$$
Then c₁ remains arbitrary. We conclude that
$$y(x) = c\cos 2x + \frac{1}{3}\cos x$$
is a solution, for arbitrary c.
Important observation: For two-point boundary value problems, there is no automatic
existence and uniqueness theorem! This is very different from the initial value problems we
studied in Chapter 3. For linear two-point boundary value problems, there are 3 possibilities:
(i) there is a unique solution; (ii) there is no solution at all; or (iii) there are
infinitely many solutions.
Comparing this to the solution of two linear equations with two unknowns, do we see the
similarity?
Example 1. We now attempt to solve the problem in (*). The general solution depends
on the roots, i.e., on the sign of λ. We have 3 situations:
(1). If λ < 0, we write λ = −k², where k > 0. Then r² = k², so r₁ = −k, r₂ = k, and the
general solution is
$$y(x) = c_1 e^{kx} + c_2 e^{-kx}.$$
By the boundary conditions, we must have
$$c_1 + c_2 = 0, \qquad c_1 e^{kL} + c_2 e^{-kL} = 0,$$
which gives c₁ = c₂ = 0. Then y(x) = 0, which is a trivial solution. We discard
it.
(2). If λ = 0, then y″ = 0, and y(x) must be a linear function. With the zero boundary
conditions, we conclude y(x) = 0, therefore trivial.
(3). If λ > 0, we write λ = k², for k = √λ > 0. Then r² = −k², so r₁,₂ = ±ik, and the
general solution is
$$y(x) = c_1\cos kx + c_2\sin kx.$$
We now check the boundary conditions. By y(0) = 0, we have
y(0) = c1 = 0
Note that these eigenfunctions are precisely part of the trig set used for Fourier series.
We can observe these eigenfunctions by playing with a slinky.
We will find very different eigenvalues and eigenfunctions. We still consider the same 3 cases.
(1). If λ = −k² < 0, then, similarly to the previous example, the boundary conditions force
the trivial solution, and only the trivial solution.
(2). If λ = 0, then y″ = 0, so y(x) = Ax + B, and y′(x) = A. By the boundary conditions, we
must have A = 0, but B remains arbitrary. So we find an eigenpair:
$$\lambda_0 = 0, \qquad y_0(x) = 1.$$
(3). If λ = k² > 0 with k > 0, then
$$y(x) = c_1\cos kx + c_2\sin kx, \qquad y'(x) = -kc_1\sin kx + kc_2\cos kx.$$
$$y'(0) = 0 \ \Rightarrow\ kc_2 = 0 \ \Rightarrow\ c_2 = 0,$$
and
$$y'(L) = 0 \ \Rightarrow\ -kc_1\sin kL = 0.$$
If c₁ = 0, we get the trivial solution. So c₁ ≠ 0. Then, we must have
$$\sin kL = 0 \quad\Rightarrow\quad kL = n\pi \quad\Rightarrow\quad k = \frac{n\pi}{L}, \qquad n = 1, 2, 3, \cdots$$
For each k, we get a pair of eigenvalue and eigenfunction
$$\lambda_n = \left(\frac{n\pi}{L}\right)^2, \qquad y_n(x) = \cos\frac{n\pi x}{L}, \qquad n = 1, 2, 3, \cdots$$
One could combine the results in (2) and (3), and get
$$\lambda_n = \left(\frac{n\pi}{L}\right)^2, \qquad y_n(x) = \cos\frac{n\pi x}{L}, \qquad n = 0, 1, 2, \cdots$$
Note that these are also a part of the trig set used in Fourier series!
Observation.
• We notice that, different types of boundary conditions would give very different eigen-
values and eigenfunctions!
• In these two examples, the eigenfunctions are sine and cosine functions, in the same
form as the trig set we use in Fourier series. Recall that the trig set is a mutually
orthogonal set. So, for each of these two eigenvalue problems, the set of eigenfunctions
are mutually orthogonal. In fact, this is a more general property for eigenfunctions. One
can define proper inner product such that eigenfunctions for the same eigenvalue problem
would always form a mutually orthogonal set. (If you are familiar with eigenvectors of
a symmetric matrix, they also form an orthogonal basis!)
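This mutual orthogonality is easy to confirm numerically. A minimal sketch for the Neumann eigenfunctions cos(nπx/L) on [0, L], with L = 3 chosen arbitrarily and the integrals done by the midpoint rule:

```python
import math

L, steps = 3.0, 20000
h = L / steps
xs = [(k + 0.5) * h for k in range(steps)]    # midpoint-rule nodes

def inner(m, n):
    # inner product <cos(m pi x/L), cos(n pi x/L)> over [0, L]
    return h * sum(math.cos(m * math.pi * x / L) *
                   math.cos(n * math.pi * x / L) for x in xs)
```

Off-diagonal inner products vanish, while the diagonal ones equal L (for n = 0) or L/2 (for n ≥ 1), up to quadrature error.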
Example 3. Find all positive eigenvalues and their corresponding eigenfunctions of the
problem
y ′′ + λy = 0, y(0) = 0, y ′ (L) = 0.
Answer. Since λ > 0, we write λ = k2 where k > 0, and the general solution is
y(x) = c1 cos kx + c2 sin kx, y ′ (x) = −kc1 sin kx + kc2 cos kx.
$$y(0) = 0 \ \Rightarrow\ c_1 = 0.$$
Then y′(L) = kc₂ cos kL = 0; since c₂ ≠ 0 (otherwise the solution is trivial), we need
cos kL = 0, which implies
$$kL = \left(n - \frac{1}{2}\right)\pi \quad\Rightarrow\quad k_n = \frac{\pi}{L}\left(n - \frac{1}{2}\right), \qquad n = 1, 2, 3, \cdots$$
We get the eigenvalues λₙ and the corresponding eigenfunctions yₙ as
$$\lambda_n = k_n^2 = \left(\frac{\pi(n - 1/2)}{L}\right)^2, \qquad y_n = \sin\frac{\pi(n - 1/2)x}{L}, \qquad n = 1, 2, 3, \cdots.$$
Example 4. (optional) Find all positive eigenvalues λn and their corresponding eigen-
functions un (x) of the problem
u(x) = c1 cos kx + c2 sin kx, u′ (x) = −kc1 sin kx + kc2 cos kx.
u′ (0) = 0 ⇒ kc2 = 0, ⇒ c2 = 0
Since c₁ ≠ 0 (otherwise trivial), we must have
$$\cos k\pi = k\sin k\pi \quad\Rightarrow\quad \frac{1}{k} = \tan k\pi.$$
To find the values of k that satisfy this relation, we can plot the two functions
$$f_1(k) = \frac{1}{k}, \qquad f_2(k) = \tan k\pi$$
on the same graph, for k > 0, and look for intersection points. See the plot below:
In this case, one finds infinitely many intersection points. Marking the k-coordinates of
these intersection points as pₙ for n = 1, 2, · · · , we get the eigenvalues and the eigenfunctions
$$\lambda_n = p_n^2, \qquad u_n = \cos p_n x, \qquad n = 1, 2, \cdots.$$
We observe that, as n grows, the intersection points approach the zeros of tan kπ, so pₙ ≈ n − 1,
and hence λₙ ≈ (n − 1)², for large n.
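The intersection points can also be computed numerically. A sketch using bisection on g(k) = cos kπ − k sin kπ, which vanishes exactly when 1/k = tan kπ; the tangent's branch structure puts one root in each interval (n − 1, n − 1/2):

```python
import math

def g(k):
    # g(k) = 0  <=>  1/k = tan(k*pi), avoiding the tangent's poles
    return math.cos(k * math.pi) - k * math.sin(k * math.pi)

def bisect(a, b, tol=1e-12):
    # assumes g changes sign on [a, b]
    fa = g(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if (g(m) > 0) == (fa > 0):
            a, fa = m, g(m)
        else:
            b = m
    return 0.5 * (a + b)

# one root p_n in each interval (n-1, n-1/2), n = 1, 2, 3, 4
p = [bisect(n - 1 + 1e-9, n - 0.5 - 1e-9) for n in range(1, 5)]
```

The computed roots indeed drift toward the integers n − 1 as n grows.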
Summary: It would be useful to memorize the solutions to the following simple eigenvalue
problems:
• Homogeneous Dirichlet boundary condition:
$$y'' + \lambda y = 0, \qquad y(0) = 0, \quad y(L) = 0.$$
• Homogeneous Neumann boundary condition:
$$y'' + \lambda y = 0, \qquad y'(0) = 0, \quad y'(L) = 0.$$
Chapter 11
Concept of solution: u is a solution if it satisfies the equation, and any boundary or initial
conditions if given.
The students should be able to verify if a given function is a solution of a certain equation.
Fundamental principle of superposition for linear PDEs:
• Homogeneous:
L(u) = 0 (11.1)
• Non-homogeneous:
L(u) = f, (11.2)
ux = F ′ (x)G(t), uxx = F ′′ (x)G(t), ut = F (x)G′ (t), utt = F (x)G′′ (t), uxt = F ′ (x)G′ (t).
We will then put these into the equation, and try to separate x and t on different sides of the
equation.
Here u(x, t) measures the temperature of a rod with length L. We first assign the boundary
conditions
u(0, t) = 0, u(L, t) = 0, t>0
This means, we fix the temperature of the two end-points of the rod to be 0. This type of
boundary condition is called Dirichlet condition.
We also have the initial condition
Then
ux = F ′ (x)G(t), uxx = F ′′ (x)G(t), ut = F (x)G′ (t),
Plug these into the equation (11.3):
$$F(x)G'(t) = c^2 F''(x)G(t) \quad\rightarrow\quad \frac{F''(x)}{F(x)} = \frac{G'(t)}{c^2 G(t)} = \text{constant} = -k.$$
We end up with 2 ODEs,
which implies
F (0) = 0, F (L) = 0
We now have the following eigenvalue problem for F (x)
F ′′ + pF = 0, F (0) = 0, F (L) = 0.
where Cn is arbitrary.
Step 4. We now get the eigenvalues and their eigenfunctions
$$\lambda_n = \frac{n\pi c}{L}, \qquad u_n(x, t) = C_n e^{-\lambda_n^2 t}\sin w_n x, \qquad n = 1, 2, 3, \cdots$$
The sum of them is also a solution. This gives the formal solution
$$u(x, t) = \sum_{n=1}^{\infty} u_n(x, t) = \sum_{n=1}^{\infty} C_n e^{-\lambda_n^2 t}\sin w_n x, \qquad w_n = \frac{n\pi}{L}, \quad \lambda_n = \frac{n\pi c}{L}. \qquad (11.4)$$
we conclude that Cₙ must be the Fourier sine coefficients for the odd periodic half-range extension
of f (x), i.e.,
$$C_n = \frac{2}{L}\int_0^L f(x)\sin\frac{n\pi x}{L}\,dx, \qquad n = 1, 2, 3, \cdots. \qquad (11.5)$$
Summary: The formal solution for initial and boundary value problem for the heat
equation
is
$$u(x, t) = \sum_{n=1}^{\infty} u_n(x, t) = \sum_{n=1}^{\infty} C_n e^{-\left(\frac{n\pi c}{L}\right)^2 t}\sin\frac{n\pi x}{L},$$
where
$$C_n = \frac{2}{L}\int_0^L f(x)\sin\frac{n\pi x}{L}\,dx, \qquad n = 1, 2, 3, \cdots.$$
Discussions on solutions:
• Harmonic oscillation in x, exponential decay in t.
• Speed of decay depends on λₙ = nπc/L: faster decay for larger n, meaning the high-frequency
components are 'killed' quickly. After a while, what remains in the solution
are the terms with small n.
Example 1. Let c = 1 and L = 1. If f (x) = 10 sin πx, then we have C₁ = 10 and all other
Cₙ = 0, so the solution is
$$u(x, t) = 10 e^{-\pi^2 t}\sin\pi x.$$
At t = 1, the amplitude of the solution is
$$\max_x |u(x, 1)| = 10 e^{-\pi^2} \approx 5.17\times 10^{-4}.$$
If now we let f (x) = 10 sin 3πx, then C₃ = 10 and all other Cₙ = 0, and the solution is
$$u(x, t) = 10 e^{-9\pi^2 t}\sin 3\pi x.$$
At t = 1, the amplitude is
$$\max_x |u(x, 1)| = 10 e^{-9\pi^2} \approx 2.65\times 10^{-38}.$$
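These amplitudes are easy to reproduce. A quick check of the decay factors, with c = L = 1 as in the example:

```python
import math

def amplitude(n, t):
    # max over x in [0,1] of |10 * exp(-(n*pi)^2 t) * sin(n*pi*x)|
    return 10 * math.exp(-(n * math.pi) ** 2 * t)

a1 = amplitude(1, 1.0)   # initial data 10 sin(pi x)
a3 = amplitude(3, 1.0)   # initial data 10 sin(3 pi x)
```

The higher mode is astronomically smaller at t = 1, illustrating how quickly high-frequency components die out.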
ux (0, t) = 0, ux (L, t) = 0
This means that the 2 ends are insulated, and no heat flows through.
Following the same setting, we get the eigenvalue problem for F (x) as
From Example 2 in the last chapter, we have only non-negative p. Let p = w² with w ≥ 0.
We have the eigenvalues wₙ and the eigenfunctions Fₙ(x):
$$w_n = \frac{n\pi}{L}, \qquad F_n(x) = \cos w_n x, \qquad n = 0, 1, 2, \cdots$$
Note that n = 0 is permitted in this solution. The solution for G(t) remains the same:
$$G_n(t) = C_n e^{-\lambda_n^2 t}, \qquad \lambda_n = \frac{cn\pi}{L}, \qquad n = 0, 1, 2, \cdots$$
L
This gives the eigenfunctions
$$u_n(x, t) = C_n e^{-\lambda_n^2 t}\cos w_n x, \qquad n = 0, 1, 2, \cdots$$
So, Cn must be the Fourier cosine coefficient for the even half-range expansion of f (x), i.e.,
$$C_0 = \frac{1}{L}\int_0^L f(x)\,dx, \qquad C_n = \frac{2}{L}\int_0^L f(x)\cos\frac{n\pi x}{L}\,dx, \qquad n = 1, 2, 3, \cdots. \qquad (11.7)$$
• harmonic oscillation in x,
• As t → ∞, we get u → C₀, which is the average of f (x) (the initial temperature). This is
reasonable because the bar is insulated.
$$U(x) = a + \frac{b - a}{L}\,x.$$
U (x) = a.
U (x) = a + bx.
Then, we have
which are homogeneous. Then, one can find the solution for w by the standard separation of
variables and Fourier series. Once this is done, one can go back to u by
Example 3. Consider the heat equation ut = uxx with the following BCs
u(0, t) = 2, u(1, t) = 4,
and IC
u(x, 0) = 2 + 2x − sin πx − 3 sin 3πx.
Find the solution u(x, t).
Answer. Step 1. We first find the steady state; call it w(x). It satisfies the following
two-point boundary value problem
where we have
$$L = 1, \qquad w_n = \frac{n\pi}{L} = n\pi, \qquad \lambda_n = \frac{nc\pi}{L} = n\pi,$$
so
$$U(x, t) = \sum_{n=1}^{+\infty} C_n e^{-n^2\pi^2 t}\sin n\pi x.$$
Here Cₙ are the Fourier coefficients of the initial data U(x, 0). We find only two nonzero
coefficients, namely
$$C_1 = -1, \qquad C_3 = -3.$$
This gives us
$$U(x, t) = -e^{-\pi^2 t}\sin\pi x - 3e^{-9\pi^2 t}\sin 3\pi x.$$
Step 3. Putting them together, we get the solution
$$u(x, t) = w(x) + U(x, t) = 2 + 2x - e^{-\pi^2 t}\sin\pi x - 3e^{-9\pi^2 t}\sin 3\pi x.$$
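The closed-form answer can be verified directly: check the boundary values, the initial condition, and the PDE u_t = u_xx by finite differences. A numerical sketch:

```python
import math

def u(x, t):
    # steady state (2 + 2x) plus the decaying transient
    return (2 + 2 * x
            - math.exp(-math.pi ** 2 * t) * math.sin(math.pi * x)
            - 3 * math.exp(-9 * math.pi ** 2 * t) * math.sin(3 * math.pi * x))

h = 1e-4
x0, t0 = 0.3, 0.2
ut = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)                      # ~ u_t
uxx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h ** 2      # ~ u_xx
```

The residual u_t − u_xx vanishes to finite-difference accuracy, and u matches both BCs and the IC.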
Example 1. Consider
x2 utt − (x + 1)t2 uxx = 0
with boundary conditions
u(0, t) = 0, u(L, t) = 5,
and initial condition
u(x, 0) = cos(x).
Letting u = F (x)G(t), we have
$$x^2 F(x)G''(t) - (x+1)t^2 F''(x)G(t) = 0,$$
which is separable:
$$\frac{G''(t)}{t^2 G(t)} = \frac{(x+1)F''(x)}{x^2 F(x)} = -\lambda,$$
which gives two ODEs:
$$(x+1)F''(x) + \lambda x^2 F(x) = 0$$
and
$$G''(t) + \lambda t^2 G(t) = 0.$$
Example 2. Consider
uxx − 6utx + 9utt = 0
Letting u = F (x)G(t), we have
Example 3. Consider
utt = 4uxx − 5ux + u
Then,
$$FG'' = 4F''G - 5F'G + FG \quad\Rightarrow\quad FG'' = G(4F'' - 5F' + F),$$
which is separable:
$$\frac{G''}{G} = \frac{4F'' - 5F' + F}{F} = -\lambda.$$
We get two ODEs:
$$4F'' - 5F' + F + \lambda F = 0$$
and
$$G'' + \lambda G = 0.$$
How to solve these eigenvalue problems is a different question.
11.3 Solutions of Wave Equation by Fourier Series
Consider the 1D wave equation
Here u(x, t) is the unknown. If this is a model of a vibrating string, then the constant
c² has a physical meaning, namely c² = T /ρ, where T is the tension and ρ is the density, so
that ρ∆x is the mass of a string segment.
For the first example, we assign the following boundary and initial conditions
and we get
$$F(x)G''(t) = c^2 F''(x)G(t) \quad\rightarrow\quad \frac{F''(x)}{F(x)} = \frac{G''(t)}{c^2 G(t)} = -p \ \text{(constant)}.$$
This gives us 2 ODEs
F (0) = 0, F (L) = 0.
We are now familiar with this eigenvalue problem, and the solutions are
$$p = w_n^2, \qquad w_n = \frac{n\pi}{L}, \qquad F_n(x) = \sin w_n x, \qquad n = 1, 2, 3, \cdots.$$
Step 3. Now, for a given n, the ODE for G(t) takes the form,
We let
Here λₙ are the eigenvalues, and uₙ(x, t) are the eigenfunctions. The set of eigenvalues {λ₁, λ₂, · · · }
is called the spectrum.
Discussion on eigenfunctions:
• Harmonic oscillation in x.
• G(t) gives change of amplitude in t, harmonic oscillation. Draw a figure (for ex. with
n = 2) and explain.
• Different n gives different motion. These are called modes. Draw figures of modes with
n = 1, 2, 3, 4. With n = 1, we have the fundamental mode. n = 2 gives an octave, n = 3
gives an octave and a fifth, n = 4 gives 2 octaves.
Step 4. We now construct solution of the wave equation (11.8). The formal solution is
$$u(x, t) = \sum_{n=1}^{\infty} u_n(x, t) = \sum_{n=1}^{\infty}\bigl(C_n\cos\lambda_n t + D_n\sin\lambda_n t\bigr)\sin w_n x. \qquad (11.11)$$
The coefficients Cn , Dn are chosen such that the ICs (11.10) are satisfied.
We first check the IC u(x, 0) = f (x). This gives
$$\sum_{n=1}^{\infty} C_n\sin w_n x = f(x),$$
Then, we have
$$u_t(x, 0) = \sum_{n=1}^{\infty}\lambda_n D_n\sin w_n x = g(x),$$
In summary, the formal solution for the wave equation (11.8) with BC (11.9) and IC (11.10)
is given in (11.11) with (11.12)+(11.13).
NB! Note that here the functions f (x) and g(x) are extended to the whole real line by the
odd half-range expansion.
Remark: This solution takes the form of a Fourier series. It is very hard to see what is
going on in the solution; in particular, one cannot get any intuition about wave phenomena
from it.
On the other hand, we can manipulate the solution by a trig identity. Consider the
simpler case where g(x) = 0, so Dₙ = 0; the solution takes the form
$$u(x, t) = \sum_{n=1}^{\infty} C_n\cos\lambda_n t\,\sin w_n x, \qquad \lambda_n = cw_n, \quad w_n = \frac{n\pi}{L}.$$
• f ∗ (x − ct): f ∗ travels with speed c as time t goes. (wave travels to the right.)
• f ∗ (x + ct): f ∗ travels with speed −c as time t goes. (wave travels to the left.)
New meaning of solution of wave equation with g(x) = 0: The initial deflection f ∗ is split into
two equal parts, one travels to the left, one to the right, with the speed c, and superposition
of them gives the solution.
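The manipulation rests on the product-to-sum identity cos(λₙt) sin(wₙx) = ½[sin wₙ(x − ct) + sin wₙ(x + ct)], valid since λₙ = cwₙ. A quick numerical confirmation, with an arbitrary wave speed and wavenumber:

```python
import math

c, w = 2.0, 3.0                 # arbitrary wave speed and wavenumber
lam = c * w                     # lambda = c * w

def standing(x, t):
    # one term of the Fourier solution: a standing wave
    return math.cos(lam * t) * math.sin(w * x)

def traveling(x, t):
    # the same term as two half-amplitude traveling waves
    return 0.5 * (math.sin(w * (x - c * t)) + math.sin(w * (x + c * t)))
```

The two expressions agree to machine precision at every sampled point.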
Remark: Such a phenomenon can also be observed when g(x) is not 0. The computation
would be somewhat more involved. Try it on your own for practice.
• First sketch f (x) on 0 < x < L, then extend it to be odd and periodic.
• Sketch, for t = 0, L/(4c), 2L/(4c), 3L/(4c), 4L/(4c), the graph of f ∗(x + ct) (in white) and
f ∗(x − ct) (in color) on the same graph, then the sum of them on a separate graph.
11.4 Laplace Equation in 2D (probably skip)
Consider the Laplace equation in 2D
uxx + uyy = 0
Some applications of this equation: steady state of 2D heat equation, electrostatic potential,
minimum surface problem, etc.
To begin with, we consider a rectangular domain R: 0 < x < a, 0 < y < b.
There are different types of BCs one can assign:
• Robin BC: mixed BC, combining the Dirichlet and Neumann BCs.
Note that on 3 sides we have homogeneous conditions, and only on one side the condition is
non-homogeneous.
This will take several steps.
Step 1. Separating variables. Let
Then
uxx = F ′′ (x)G(y), uyy = F (x)G′′ (y),
so
$$\frac{F''(x)}{F(x)} = -\frac{G''(y)}{G(y)} = -p \ \text{(constant)}$$
This gives us 2 ODEs
F (0) = 0, F (a) = 0.
We solve this eigenvalue problem for F . This we have done several times. We have
$$p = w_n^2, \qquad w_n = \frac{n\pi}{a}, \qquad F_n(x) = \sin w_n x, \qquad n = 1, 2, 3, \cdots.$$
Step 3. Solve for G(y). For each n, we have
which gives the general solution
G(0) = 0
Then, we have
An + Bn = 0, Bn = −An
so
Gn (y) = An (ewn y − e−wn y ) = 2An sinh wn y.
Recall that
2 sinh x = ex − e−x , 2 cosh x = ex + e−x .
Since An is arbitrary so far, so is 2An , and we will simply call it An . Then
Gn (y) = An sinh wn y.
What does the solution look like? Consider the minimum surface problem. There is the
maximum principle: the max or min values occur only on the boundary.
Example : with a different boundary condition: (graph..)
We may write
An ewn b = Cn , Bn e−wn b = −Cn
so
An = Cn e−wn b , Bn = −Cn ewn b .
which gives
$$G_n(y) = C_n e^{-w_n b}e^{w_n y} - C_n e^{w_n b}e^{-w_n y} = C_n\bigl(e^{w_n(y-b)} - e^{-w_n(y-b)}\bigr) = 2C_n\sinh w_n(y - b).$$
We now fit in the last BC, i.e., u(x, 0) = g(x), and we get
$$\sum_{n=1}^{\infty} C_n\sinh(-w_n b)\sin w_n x = g(x).$$
$$C_n\sinh(-w_n b) = \frac{2}{a}\int_0^a g(x)\sin w_n x\,dx, \qquad n = 1, 2, 3, \cdots.$$
This gives the formula for the coefficient Cₙ:
$$C_n = -\frac{2}{a\sinh(w_n b)}\int_0^a g(x)\sin w_n x\,dx, \qquad n = 1, 2, 3, \cdots \qquad (11.20)$$
or
u(0, y) = k(y), u(a, y) = 0, u(x, 0) = 0, u(x, b) = 0. (11.22)
We can simply switch the roles of x and y, and carry out the whole procedure.
Let u1 , u2 , u3 , u4 be the solutions with BCs (11.15), (11.18), (11.21) and (11.22), respec-
tively. (Make a graph.) Then, set
u = u1 + u2 + u3 + u4
By superposition, u solves the Laplace equation, and satisfies the boundary condition in
(11.23).
Possible examples on Neumann BC, or an example of mixed BC. ?
Proof. We need to plug u₁ and u₂ into the wave equation (11.24) and check if the equation
holds. By the Chain Rule, we have
$$(u_1)_x = \phi'(x + ct), \qquad (u_1)_{xx} = \phi''(x + ct),$$
and
$$(u_1)_t = c\phi'(x + ct), \qquad (u_1)_{tt} = c^2\phi''(x + ct).$$
We clearly have (u₁)ₜₜ = c²(u₁)ₓₓ. The proof for u₂ is completely similar.
and derive the formula for the solution, i.e., determine the functions φ, ψ by these ICs in
(11.26).
By the condition u(x, 0) = f (x), we get
We differentiate u in t
we get
$$\phi(x) - \psi(x) = \phi(x_0) - \psi(x_0) + \frac{1}{c}\int_{x_0}^{x} g(s)\,ds.$$
The solution consists of two parts, where the first term is caused by the initial deflection f (x),
and the second term is from the initial velocity g(x).
Note that, if this is a vibrating string problem, then f, g here in (11.32) are the odd
half-range expansions onto the whole real line.
Remark: The solution (11.32) is more general than Fourier Series solution. In fact, there
is no requirement for f (x), g(x) to be periodic. For any f, g defined on the whole real line, the
formula (11.32) gives the solution of the wave equation.
Example . Solve the wave equation by D’Alembert’s formula.
$$u_{tt} = u_{xx}, \qquad u(x, 0) = \sin 5x, \qquad u_t(x, 0) = \frac{1}{5}\cos x.$$
Here we have f (x) = sin 5x and g(x) = (1/5) cos x. This is just a practice of using the formula
(11.32) with c = 1. We first work out the integral
$$\int_{x-t}^{x+t} g(s)\,ds = \frac{1}{5}\int_{x-t}^{x+t}\cos s\,ds = \frac{1}{5}\bigl(\sin(x+t) - \sin(x-t)\bigr).$$
Then, we get the solution
$$u(x, t) = \frac{1}{2}\sin 5(x+t) + \frac{1}{2}\sin 5(x-t) + \frac{1}{10}\bigl(\sin(x+t) - \sin(x-t)\bigr)$$
$$= \left(\frac{1}{2}\sin 5(x+t) + \frac{1}{10}\sin(x+t)\right) + \left(\frac{1}{2}\sin 5(x-t) - \frac{1}{10}\sin(x-t)\right).$$
Note the first term is a function of x + t, i.e., φ(x + t), where
$$\phi(u) = \frac{1}{2}\sin 5u + \frac{1}{10}\sin u,$$
and the second term is a function of x − t, i.e., ψ(x − t), where
$$\psi(u) = \frac{1}{2}\sin 5u - \frac{1}{10}\sin u.$$
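One can verify directly that this u satisfies both the wave equation and the initial conditions; a finite-difference sketch:

```python
import math

def u(x, t):
    # u = phi(x + t) + psi(x - t) from the example above
    return (0.5 * math.sin(5 * (x + t)) + 0.1 * math.sin(x + t)
            + 0.5 * math.sin(5 * (x - t)) - 0.1 * math.sin(x - t))

h = 1e-5
x0, t0 = 0.7, 1.3
utt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h ** 2   # ~ u_tt
uxx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h ** 2   # ~ u_xx
ut0 = (u(x0, h) - u(x0, -h)) / (2 * h)                           # ~ u_t(x0, 0)
```

The checks confirm u(x, 0) = sin 5x, u_t(x, 0) = (1/5) cos x, and u_tt = u_xx up to finite-difference error.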
Characteristics. As we see, solutions of the wave equation can be written as
u(x, t) = φ(x + ct) + ψ(x − ct).
This implies:
– φ(x + ct) is constant along the lines x + ct = K, where K is a constant;
– ψ(x − ct) is constant along the lines x − ct = K, where K is a constant.
In the t–x plane:
– x + ct = constant: straight lines with slope −1/c;
– x − ct = constant: straight lines with slope 1/c.
Draw a graph.
These lines are paths along which information is carried. They are called characteristics
of this problem.
Remark:
• This is a more general property for many PDEs, including non-linear PDEs.
• Characteristic lines might not be parallel to each other, or straight lines. The situation
could be very complicated...
11.6 Method of Characteristics; Classification of 2nd order lin-
ear PDEs.
General form of 2nd order quasi-linear PDEs: Let u(x, y) be the unknown (NB! It could be
u(x, t) also...)
$$Au_{xx} + Bu_{xy} + Cu_{yy} = F(x, y, u, u_x, u_y). \qquad (11.33)$$
Define the discriminant
$$\Delta = B^2 - 4AC. \qquad (11.34)$$
The type of the PDE is determined only by ∆: hyperbolic if ∆ > 0, parabolic if ∆ = 0, and
elliptic if ∆ < 0.
If A(x, y), B(x, y), C(x, y) are functions, then the equation can change type. This is called
mixed type. In general, mixed-type equations are difficult.
We look for solutions of the form u(x, y) = φ(y − rx), which solves (11.35) for an arbitrary
function φ, provided r takes suitable values. We now find those values of r.
By the Chain Rule, we have
$$u_x = -r\phi'(y - rx), \qquad u_{xx} = r^2\phi''(y - rx),$$
and
$$u_y = \phi'(y - rx), \qquad u_{yy} = \phi''(y - rx), \qquad u_{xy} = -r\phi''(y - rx).$$
Plugging these back into (11.35), we get
Since φ is arbitrary, φ″ is not zero in general. We can drop the common factor φ″
and get
$$Ar^2 - Br + C = 0. \qquad (11.36)$$
This is called the characteristic equation. Solve it for r. The types of the roots depend on the
discriminant ∆:
From now on, we will focus on the hyperbolic case, with constant coefficients. Let r₁, r₂
be the roots. Then the solution is
$$u(x, y) = \phi_1(y - r_1 x) + \phi_2(y - r_2 x),$$
where φ₁, φ₂ are arbitrary functions, to be determined by BCs (or ICs if it is a time-dependent
problem).
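This representation is easy to check numerically. A sketch with sample coefficients A = 1, B = 3, C = 2 (so the characteristic roots are r = 1, 2) and arbitrary smooth choices for φ₁, φ₂; the derivatives are approximated by finite differences:

```python
import math

phi1, phi2 = math.sin, math.cos      # arbitrary smooth functions

def u(x, y):
    # u = phi1(y - r1*x) + phi2(y - r2*x) with r1 = 1, r2 = 2
    return phi1(y - x) + phi2(y - 2 * x)

h = 1e-4
x0, y0 = 0.4, 1.1
uxx = (u(x0 + h, y0) - 2 * u(x0, y0) + u(x0 - h, y0)) / h ** 2
uyy = (u(x0, y0 + h) - 2 * u(x0, y0) + u(x0, y0 - h)) / h ** 2
uxy = (u(x0 + h, y0 + h) - u(x0 + h, y0 - h)
       - u(x0 - h, y0 + h) + u(x0 - h, y0 - h)) / (4 * h ** 2)
residual = uxx + 3 * uxy + 2 * uyy   # A*u_xx + B*u_xy + C*u_yy, should vanish
```

The residual is zero up to finite-difference error, for any smooth φ₁, φ₂.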
Example . Find the solution by method of characteristics for the following equation
with BCs
u(0, y) = f (y), ux (0, y) = 0.
To solve this, we set up the characteristic equation
$$r^2 - 3r + 2 = 0, \qquad r_1 = 1, \quad r_2 = 2,$$
so
$$u(x, y) = \phi_1(y - r_1 x) + \phi_2(y - r_2 x) = \phi_1(y - x) + \phi_2(y - 2x).$$
Then, we have
−φ′1 (y) − 2φ′2 (y) = 0, (φ1 (y) + 2φ2 (y))′ = 0.
Integrating this, we get
φ1 (y) + 2φ2 (y) = M
where M is any integration constant. We can now solve
$$u_{tt} + u_{tx} = 0,$$
To solve, we set up the characteristic equation
$$r^2 - r = 0, \qquad r_1 = 0, \quad r_2 = 1.$$
Then
u(t, x) = φ(x − r1 t) + ψ(x − r2 t) = φ(x) + ψ(x − t).
By the first IC, we have
φ(x) + ψ(x) = f (x). (11.37)
For the 2nd IC, we differentiate
ut (t, x) = 0 − ψ ′ (x − t)
then
ut (0, x) = −ψ ′ (x) = g(x).
Integrating this equation from x₀ (arbitrary) to x, we get
$$\psi(x) = \psi(x_0) - \int_{x_0}^{x} g(s)\,ds.$$
Then, we get
$$\phi(x) = f(x) - \psi(x) = f(x) - \psi(x_0) + \int_{x_0}^{x} g(s)\,ds.$$
Chapter 12
Homeworks
12.1 Homework 1
Problem 1: In each problem, determine the order of the equation and state whether the
equation is linear or nonlinear.
(a) t2 y ′′ + ty ′ + 2y = sin t;
Problem 2: In each problem, verify that each given function is a solution of the differential
equation.
(a) ty ′ − y = t2 ; y(t) = 3t + t2 ;
Problem 3: In each problem, draw a direction field for the given differential equation.
Based on the direction field, determine the behavior of y as t → ∞. Describe how this
behavior depends on the initial value of y at t = 0.
(a) y ′ = 1 + 2y;
(b) y ′ = y + 2;
(c) y ′ = y 2 ;
(e) y ′ = −2 + t − y;
(f) y ′ = t + 2y;
(g) y′ = −t/y. (You don't need to discuss asymptotic behavior for this problem.)
Problem 4: Exam I, Spring 2013, Problem 2.
Problem 5: In each problem, solve the differential equation.
(a) y ′ = x2 /y;
(c) xy ′ = (1 − y 2 )1/2 ;
Problem 6: In each problem, solve the differential equation, and determine approximately
the interval in which the solution is defined. You might use graphing tools if needed.
12.2 Homework 2
Problem 1. Find the general solution of the given differential equation. If an initial condition
is given, find the particular solution which satisfies this initial condition.
a. y ′ + 3y = et , y(0) = −2;
b. y ′ − 2y = e2t , y(0) = 4;
c. ty ′ + y = et , y(1) = 0;
d. y′ = −y/t + cos(t²);
e. y ′ − 3y = 25 cos(4t);
f. z ′ = 2t(z − t2 );
Problem 3. Without solving the problems, determine an interval in which the solution of
the given initial value problem is certain to exist.
Problem 5. A tank contains 10 gal of brine in which 2 lb of salt is dissolved. Brine
containing 1 lb of salt per gallon flows into the tank at the rate of 3 gal/min, and the stirred
mixture is drained off the tank at the rate of 4 gal/min. Find the amount y(t) of salt in the
tank at any time t.
Problem 6. A 30-L container initially contains 10L of pure water. A brine solution
containing 20 g of salt per liter flows into the container at a rate of 4 L/min. The well-stirred
mixture is pumped out of the container at a rate of 2 L/min.
(b) How much salt is in the tank at the moment the tank begins to overflow?
12.3 Homework 3
Problem 1. Spring 2013, Exam I, problem 10.
Problem 2. Find the critical points and equilibrium solutions of the given ODEs. Sketch
the graphs of the solutions with the given initial conditions. Determine the stabilities of
these critical points.
Problem 3. Consider the equation dy/dt = f (y) and suppose that y1 is a critical point,
i.e., f (y1 ) = 0, and f is a continuous function. Show that y(t) = y1 is asymptotically stable if
f ′ (y1 ) < 0 and unstable if f ′ (y1 ) > 0.
Problem 4. Previous Exam Problems.
Problem 5. Verify that the following problems are exact. Then solve the IVP (Initial
Value Problems).
Problem 6. Show that any separable equation written as M (x) + N (y)y ′ = 0 is exact.
Problem 7. Previous Exam Problems.
12.4 Homework 4
Problem 1. Find the general solution of the given differential equations.
a. y ′′ + 2y ′ − 3y = 0
b. 6y ′′ − y ′ − y = 0
c. y ′′ + 5y ′ = 0
d. y ′′ − 9y ′ + 9y = 0
Problem 2. Find the solution y(t) of the given IVP (Initial Value Problems).
a. y ′′ + y ′ − 6y = 0, y(0) = 2, y ′ (0) = 9
a. y(t) = c1 + c2 et
y ′′ − y ′ − 2y = 0, y(0) = a, y ′ (0) = 2.
a. e2t , e−3t/2
b. cos t, sin t
c. e−2t , te−2t
d. et sin t, et cos t
Problem 6. Without solving the equation, find the longest interval in which the given
IVP is certain to have a unique, twice-differentiable solution.
a. ty ′′ + 3y = t, y(1) = 1, y ′ (1) = 2
Problem 7.
a. If W (f, g) = 3e4t and if f (t) = e2t , find g(t).
12.5 Homework 5
Problem 1. Find the general solution of the given equations.
a. 9y ′′ + 6y ′ + y = 0
b. 4y ′′ − 4y ′ − 3y = 0
c. 4y ′′ + 12y ′ + 9y = 0
Problem 2. Solve the following IVP (Initial Value Problems). Sketch the graph of the
solution and describe the asymptotic behavior as t → ∞.
b. y ′′ − 6y ′ + 9y = 0, y(0) = 0, y ′ (0) = 2
c. y ′′ + 4y ′ + 4y = 0, y(−1) = 2, y ′ (−1) = 1.
Find the solution (which will contain the constant b). Then determine the critical value of b
that separates solutions that grow positively from those that eventually grow negatively.
Problem 4. Use the method of reduction of order to find a second solution of the given
differential equations.
a. y ′′ − 2y ′ + 6y = 0
b. y ′′ + 2y ′ + 2y = 0
c. y ′′ + 4y ′ + 6.25y = 0
Problem 6. Solve the following IVP (Initial Value Problems). Sketch the graph of the
solution and describe the asymptotic behavior as t → ∞.
a. y ′′ + 4y ′ + 5y = 0, y(0) = 1, y ′ (0) = 0
b. y ′′ − 2y ′ + 5y = 0, y(π/2) = 0, y ′ (π/2) = 2
Problem 7. Consider the IVP
y ′′ + 2y ′ + 6y = 0, y(0) = 2, y ′ (0) = a ≥ 0.
12.6 Homework 6
Problem 1. Find the general solution of the given equations.
a. y ′′ − 2y ′ − 3y = 3e2t
b. y ′′ + 2y ′ + 5y = 3 sin 2t
c. y ′′ + 9y = t2 e3t + 6
Problem 2. Solve the following IVP (Initial Value Problems). Be patient! These problems
take a lot of time!
a. y ′′ + y ′ − 2y = 2t, y(0) = 0, y ′ (0) = 1
Problem 5. Write the given expression in the form u = R cos(ωt − δ).
c. u = 4 cos 3t − 2 sin 3t
d. u = −2 cos πt − 3 sin πt
Problem 6. A mass weighing 3 lb stretches a spring 3 in. If the mass is pushed upward,
compressing the spring a distance of 1 in., and then set in motion with a downward velocity of
2 ft/s, and if there is no damping, find the position u of the mass at any time t > 0. Determine
the frequency, period, amplitude, and phase of the motion.
Problem 7. A mass weighing 8 lb stretches a spring 1.5 in. The mass is also attached
to a damper with coefficient γ. Determine the value of γ for which the system is critically
damped.
Problem 8. The position of a certain spring-mass system satisfies the initial value problem
(3/2) u′′ + ku = 0, u(0) = 2, u′(0) = ν
If the period and amplitude of the resulting motion are observed to be π and 3, respectively,
find the values of k and ν.
Problem 9. Previous Exam Problems.
12.7 Homework 7
Problem 1. Write the given expression as a product of two trigonometric functions of different
frequencies.
a. cos 9t − cos 7t
b. sin 7t − sin 6t
a. y (4) + 4y ′′′ + 3y = t,
Problem 3. Find the general solution of the given equations.
a. y′′′ − y′′ − y′ + y = 0
b. y ′′′ − 3y ′′ + 3y ′ − y = 0
c. y (6) − y ′′ = 0
d. y (4) + 2y ′′ + y = 0
a. Fall 2013, Exam II, Problem 1; (Note these are not Exam I).
12.8 Homework 8
Problem 1. Find the Laplace transform of the given functions by definition (without using Table 6.2.1). This means you should work out the integrals directly.
Problem 2. Use Euler’s formula eix = cos x + i sin x to find the Laplace transform of
f (t) = eat sin(bt) and g(t) = eat cos(bt).
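As a sanity check, the two transforms can be computed symbolically. The following Python/sympy sketch (not part of the original assignment) confirms the closed forms that the Euler-formula computation should produce:

```python
import sympy as sp

t, s, a, b = sp.symbols('t s a b', positive=True)

# Laplace transforms computed by direct integration (convergence needs s > a;
# sympy's laplace_transform carries out the integral for us).
F = sp.laplace_transform(sp.exp(a*t)*sp.sin(b*t), t, s, noconds=True)
G = sp.laplace_transform(sp.exp(a*t)*sp.cos(b*t), t, s, noconds=True)

# Expected closed forms from completing the Euler's-formula computation:
assert sp.simplify(F - b/((s - a)**2 + b**2)) == 0
assert sp.simplify(G - (s - a)/((s - a)**2 + b**2)) == 0
```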
Problem 3. Use Integration by parts to find the Laplace transform of the following
functions.
a. f (t) = teat
a. Show that F′′(s) = L{(−t)^2 f(t)}, and furthermore F^{(n)}(s) = L{(−t)^n f(t)} for all integers n ≥ 1.
g. F(s) = (2s − 3)/(s^2 + 2s + 10).
Problem 6. Use the Laplace transform to solve the following IVPs. Be patient! These problems take a lot of work!
b. y ′′ − 2y ′ + 2y = 0; y(0) = 0, y ′ (0) = 1;
c. y ′′ − 4y ′ + 4y = 0; y(0) = 1, y ′ (0) = 1;
Problem 7. Previous Exam Problems. (Note all the problems are now from Exam II. )
12.9 Homework 9
Problem 1. Sketch the graph of the given functions and express f (t) in terms of the unit
step function uc (t).
(a). f(t) = 0 for 0 ≤ t < 3; −2 for 3 ≤ t < 5; 2 for 5 ≤ t < 7; 1 for 7 ≤ t.
(b). f(t) = 1 for 0 ≤ t < 2; e^{−(t−2)} for t ≥ 2.
Problem 4. Suppose that F(s) = L{f(t)} exists for s > d ≥ 0.
a. Show that if c is a positive constant, then L{f(ct)} = (1/c) F(s/c), s > cd.
b. Show that if k is a positive constant, then L^{−1}{F(ks)} = (1/k) f(t/k).
c. Show that if a and b are constants with a > 0, then
L^{−1}{F(as + b)} = (1/a) e^{−bt/a} f(t/a).
d. Utilize these results to find the inverse Laplace transform of the following functions:
F1(s) = 2^{n+1} n!/s^{n+1},   F2(s) = (2s + 1)/(4s^2 + 4s + 5),   F3(s) = e^2 e^{−4s}/(2s − 1).
Problem 5. Find the solution of the given IVP. Be patient! These problems take a long time!
a. y′′ + y = f(t); y(0) = 0, y′(0) = 1; f(t) = 1 for 0 ≤ t < 3π, and f(t) = 0 for 3π ≤ t.
Problem 6. Find the solution of the given IVP. (Expect a lot of work!)
Problem 7. Previous Exam Problems. (Note all the problems are now from Exam II. )
12.10 Homework 10
Problem 1. In each problem, transform the given equation into a system of first-order equations. If ICs are given, specify these as well.
a. u′′ + 0.5u′ + 2u = 0;
c. u(4) − u = 0;
Problem 2. Find the eigenvalues and their corresponding eigenvectors of the following
matrices.
(a). [5 −1; 3 1],   (b). [−2 1; 1 −2],   (c). [−3 3/4; −5 1],   (d). [−3 4; 0 1]
Problem 3. For each problem, find the general solution and sketch the phase portrait. You may use some of the results in Problem 2.
(a). ~x′ = [5 −1; 3 1] ~x,   (b). ~x′ = [−2 1; 1 −2] ~x,   (c). ~x′ = [−3 4; 0 1] ~x.
Problem 4. For each problem, find the general solution and sketch the phase portrait. Note that some of the eigenvalues are 0.
(a). ~x′ = [4 −3; 8 −6] ~x,   (b). ~x′ = [3 6; −1 −2] ~x.
Problem 5. Consider the IVP ~x′ = A~x, x1 (0) = 2, x2 (0) = 3. Given the eigenvalues and
eigenvectors of A, (i) write out the general solution; (ii) solve the IVP; (iii) sketch the phase
portrait; and (iv) sketch the trajectory passing through the initial point (2, 3).
(a). λ1 = −1, ~v1 = (−1, 2)^T;  λ2 = −2, ~v2 = (1, 2)^T
(b). λ1 = 1, ~v1 = (−1, 2)^T;  λ2 = −2, ~v2 = (1, 2)^T
Problem 6. Previous Exam Problems.
12.11 Homework 11
Problem 1. For each problem, find the eigenvalues and eigenvectors of the coefficient matrix, find the general solution, state the type of the equilibrium and its stability, and sketch the phase portrait.
(a). ~x′ = [3 −2; 4 −1] ~x,   (b). ~x′ = [−1 −4; 1 −1] ~x,   (c). ~x′ = [2 −5; 1 −2] ~x.
Problem 3. For each problem, find the eigenvalues and eigenvectors of the coefficient matrix, find the general solution, state the type of the equilibrium and its stability, and sketch the phase portrait.
(a). ~x′ = [3 −4; 1 −1] ~x,   (b). ~x′ = [4 −2; 8 −4] ~x.
a. x′ = x − xy; y ′ = y + 2xy;
b. x′ = 1 + 2y, y ′ = 1 − 3x2 ;
c. x′ = 2x − x2 − xy, y ′ = 3y − 2y 2 − 3xy;
Problem 6. For each problem: (i) Find all the critical points; (ii) Find the corresponding
linear system near each critical point; (iii) Find the eigenvalues for each system; (iv) Classify
the type of each critical point and its stability. (NB: these problems take a long time!)
b. x′ = 1 − y, y ′ = x2 − y 2 ;
12.12 Homework 12
Problem 1. For each problem, (i) sketch the graph of the given function for three periods, (ii) find the Fourier series, (iii) write out the first 5 non-zero terms of the series, and (iv) describe how the Fourier series seems to be converging.
(a). f(x) = −x, −L ≤ x < L;  f(x + 2L) = f(x);
(b). f(x) = 1 for −L ≤ x < 0, 0 for 0 ≤ x < L;  f(x + 2L) = f(x);
(c). f(x) = x for −π ≤ x < 0, 0 for 0 ≤ x < π;  f(x + 2π) = f(x);
(d). f(x) = 0 for −2 ≤ x ≤ −1, x for −1 ≤ x < 1, 0 for 1 ≤ x < 2;  f(x + 4) = f(x);
(e). f(x) = −1 for −2 ≤ x < 0, 1 for 0 ≤ x < 2;  f(x + 4) = f(x);
Problem 2*. (optional) Suppose f is an integrable and differentiable periodic function
with period T .
(a). Show that f ′ is periodic with the same period T .
(b). Show that for any a, b, we have
∫_a^{a+T} f(t) dt = ∫_b^{b+T} f(t) dt.
(c). Determine whether F(t) = ∫_0^t f(s) ds is always periodic.
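Part (b) can also be checked numerically. The sketch below (Python, with a simple trapezoid rule of our own; the choice f(t) = sin² t is just one convenient periodic test function) integrates over one full period starting at two different points and gets the same value:

```python
import math

def integrate(f, lo, hi, n=20000):
    # composite trapezoid rule
    h = (hi - lo) / n
    return h * (sum(f(lo + i*h) for i in range(1, n)) + 0.5*(f(lo) + f(hi)))

T = 2 * math.pi                     # a period of the test function below
f = lambda x: math.sin(x) ** 2

I1 = integrate(f, 0.3, 0.3 + T)     # start the period at a = 0.3
I2 = integrate(f, 7.1, 7.1 + T)     # start the period at b = 7.1

assert abs(I1 - I2) < 1e-8          # same value regardless of starting point
assert abs(I1 - math.pi) < 1e-6     # integral of sin^2 over a full period is pi
```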
Problem 3. Assume that f has a Fourier series
f(x) = a0/2 + Σ_{n=1}^∞ [ a_n cos(nπx/L) + b_n sin(nπx/L) ],
(a). y ′′ + y = 0, y(0) = 0, y ′ (π) = 1;
(b). y ′′ + 2y = 0, y ′ (0) = 1, y ′ (π) = 0;
(c). y′′ + y = 0, y(0) = 0, y′(L) = 0;
(d). y ′′ + y = x, y(0) = 0, y(π) = 0;
(e). y ′′ + 4y = cos x, y ′ (0) = 0, y ′ (π) = 0.
Problem 6. Solve the eigenvalue problems. Assume that all eigenvalues are real.
(a). y′′ + λy = 0, y(0) = 0, y′(π) = 0;
(b). y′′ + λy = 0, y′(0) = 0, y(π) = 0;
(c). y′′ + λy = 0, y′(0) = 0, y′(π) = 0;
(d). y′′ + λy = 0, y′(0) = 0, y(L) = 0;
Problem 7. Previous Exam Problems. Note that these are Final Exams!
12.13 Homework 13
Problem 1. For each problem, determine whether the method of separation of variables can be used to replace the given PDE by a pair of ODEs. If so, find the ODEs.
(a). xuxx + ut = 0
(b). tuxx + xut = 0
(c). uxx + uxt + ut = 0
(d). [p(x)ux ]x − r(x)utt = 0
(e). uxx + (x + y)uyy = 0
(f). uxx + uyy + xu = 0
Problem 2. Find the solution of the heat conduction problem
Problem 4*. (Optional) The heat conduction equation in two space dimensions is
α2 (uxx + uyy ) = ut .
Assuming that u(x, y, t) = X(x) Y (y) T (t), find ODEs that are satisfied by the functions
X(x), Y (y), and T (t).
Problem 5*. (Optional) The heat conduction equation in two space dimensions may be expressed in terms of polar coordinates as
Assuming that u(r, θ, t) = R(r) Θ(θ) T (t), find ODEs that are satisfied by R(r), Θ(θ), and
T (t).
Problem 6. In each problem, find the steady-state solution of the heat conduction equation c^2 uxx = ut that satisfies the given set of boundary conditions.
(a). u(0, t) = 10, u(50, t) = 40
(b). ux (0, t) = 0, u(L, t) = T
(c). ux (0, t) − u(0, t) = 0, u(L, t) = T
(d). u(0, t) = T, ux (L, t) + u(L, t) = 0
Problem 7. Let an aluminum rod of length 20 cm be initially at the uniform temperature of 25°C. Suppose that at time t = 0, the end x = 0 is cooled to 0°C while the end x = 20 is heated to 60°C, and both are thereafter maintained at those temperatures. Find the temperature distribution in the rod at any time t.
Problem 8. Previous Exam Problems. Note that these are Final Exams!
b. Spring 2013, Final Exam, Problem 8, 9, 14, 16;
12.14 Homework 14
Problem 1. (This problem takes a lot of work!) Consider the wave equation for a vibrating string of length L = 10.
The ends of the string are fixed. The initial displacement of the string is given as u(x, 0) =
f (x), where f (x) = x on 0 < x < 5, and f (x) = 10 − x on 5 < x < 10. The string is initially
at rest.
(a). Plot the initial displacement;
(b). Find the formal solution u(x, t) in terms of Fourier series;
(c*) (optional). Write out the D’Alembert solution of this wave equation;
(d*) (optional). Plot u(x, t) as a function of x, for several t values, such as t = 0, t = 2.5, t =
5, t = 7.5, t = 10.
Problem 2. Consider the same problem as in Problem 1, but with a different function f(x). Let 0 < a < 10 be given. Then f(x) = x/a on 0 < x < a, and f(x) = (10 − x)/(10 − a) on a < x < 10. Answer all the questions in Problem 1.
Problem 3. Previous Exam Problems. Note that these are Final Exams!
Chapter 13
13.2 Answer/keys to homework 2
Problem 1:
a. y = (1/4)e^t − (9/4)e^{−3t}
b. y = te^{2t} + 4e^{2t}
c. y = e^t/t − e/t
d. y = sin(t^2)/(2t) + c/t
e. y = 4 sin 4t − 3 cos 4t + ce^{3t}
f. z = t^2 + 1 + ce^{t^2}
g. y = 1 − e^{− sin t}
h. y = −t − 1/2 − (3/2)t^{−2}
i. y = e^{bt}/(a + b) + ce^{−at}
j. y = 1/t + (4a − 2)t^{−2}.
Problem 3.
a. 0 < t < 3
b. 0 < t < 4
c. π/2 < t < 3π/2
d. −∞ < t < −2
e. −2 < t < 2
f. 1 < t < π
Problem 5.
y = (10 − t) − (8/10000)(10 − t)^4, for 0 ≤ t ≤ 10. After 10 min, there is no salt in the tank.
Problem 6.
(a) 10 min; (b) ≈ 533.33g.
13.3 Answer/keys to homework 3
Problem 2.
a. y = 0 (unstable), y = 150 (asymptotically stable),
b. y = 0 (unstable), y = 30 (asymptotically stable)
c. y = 1 (unstable), y = 3 (asymptotically stable)
d. y = 0 (asymptotically stable), y = 40 (unstable)
e. y = 0 (semi-stable), y = 1 (unstable)
Problem 3. If f′(y1) < 0 and f(y1) = 0, then in a small neighborhood of y = y1 we have y′ = f(y) < 0 for y > y1 and y′ = f(y) > 0 for y < y1. Nearby solutions therefore approach y = y1, so it is asymptotically stable. On the other hand, if f′(y1) > 0, then y′ = f(y) > 0 for y > y1 and y′ = f(y) < 0 for y < y1, so nearby solutions move away from y = y1, and it is unstable.
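This behavior is easy to see numerically. The sketch below (Python; the test function f(y) = y(1 − y) is our own choice, with f′(1) = −1 < 0 and f′(0) = 1 > 0) integrates dy/dt = f(y) with forward Euler from several starting points:

```python
# Integrate dy/dt = y*(1 - y) with forward Euler.  The critical point y = 1
# (f'(1) = -1 < 0) should attract nearby solutions; y = 0 (f'(0) = 1 > 0)
# should repel them.
def flow(y0, t_end=20.0, dt=1e-3):
    y = y0
    for _ in range(int(t_end / dt)):
        y += dt * y * (1 - y)
    return y

assert abs(flow(0.9) - 1.0) < 1e-6   # starts near 1, converges to 1
assert abs(flow(1.1) - 1.0) < 1e-6   # converges from the other side as well
assert flow(0.01) > 0.9              # starts near 0, driven away (toward 1)
```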
Problem 5.
a. 2x2 y − 3y 2 + 1 = 0
b. x−2 − 2y −3 + x3 y −2 − x−1 y 2 = 3
c. cos(2x) sin y + e2x − y −2 = 2 − 4/π 2
Problem 6. Since My (x) = 0 and Nx (y) = 0, the equation is exact.
13.4 Answer/keys to homework 4
Problem 1.
a. y = c1 et + c2 e−3t
b. y = c1 et/2 + c2 e−t/3
c. y = c1 + c2 e−5t
d. y = c1 e^{r1 t} + c2 e^{r2 t}, where r1,2 = (9 ± 3√5)/2
Problem 2.
c. y(t) = −3 + 4e−4t
Problem 3.
a. y ′′ − y ′ = 0
b. y ′′ − 5y ′ + 6y = 0
Problem 4. a = −2.
Problem 5. (a). −(7/2)e^{t/2}, (b). 1, (c). e^{−4t}, (d). −e^{2t}
Problem 6. (a) 0 < t < ∞, (b) −∞ < t < 1, (c) 0 < t < ∞, (d) 0 < x < 3
Problem 7. (a) 3te2t + ce2t , (b) tet + ct (One can choose c = 0 here.)
Problem 8. They are a fundamental set of solutions if and only if ad − bc ≠ 0.
13.5 Answer/keys to homework 5
Problem 1.
a. y = c1 e−t/3 + c2 te−t/3
b. y = c1 e−t/2 + c2 e3t/2
c. y = c1 e−3t/2 + c2 te−3t/2
Problem 2.
a. y = 2e^{2t/3} − (7/3)te^{2t/3}, y → −∞ as t → ∞
b. y = 2te3t , y → ∞ as t → ∞
c. y = 7e−2(t+1) + 5te−2(t+1) , y → 0 as t → ∞
Problem 3. y = 2et/2 + (b − 1)tet/2 ; b=1
Problem 4.
a. y2 (t) = t−2
b. y2 (t) = t−1 ln t
c. y2 (t) = tet
d. y2 (x) = x
Problem 5.
a. y = c1 e^t cos(√5 t) + c2 e^t sin(√5 t)
b. y = c1 e−t cos t + c2 e−t sin t
c. y = c1 e−2t cos(3t/2) + c2 e−2t sin(3t/2)
Problem 6.
a. y = e−2t cos t + 2e−2t sin t; decaying oscillation
b. y = −et−π/2 sin(2t); growing oscillation
c. y = 3e^{−t/2} cos t + (5/2)e^{−t/2} sin t; decaying oscillation
Problem 7.
a. y = 2e^{−t} cos(√5 t) + ((a + 2)/√5) e^{−t} sin(√5 t)
b. a ≈ 1.50878
c. t = (1/√5) [π − arctan(2√5/(2 + a))]
d. π/√5
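The formula in (a) can be verified symbolically against the IVP y′′ + 2y′ + 6y = 0, y(0) = 2, y′(0) = a stated in the homework. A quick sympy check (our own, not part of the notes):

```python
import sympy as sp

t, a = sp.symbols('t a')
r5 = sp.sqrt(5)

# Claimed solution of y'' + 2y' + 6y = 0, y(0) = 2, y'(0) = a:
y = 2*sp.exp(-t)*sp.cos(r5*t) + (a + 2)/r5 * sp.exp(-t)*sp.sin(r5*t)

# It satisfies the ODE and both initial conditions:
assert sp.simplify(sp.diff(y, t, 2) + 2*sp.diff(y, t) + 6*y) == 0
assert y.subs(t, 0) == 2
assert sp.simplify(sp.diff(y, t).subs(t, 0) - a) == 0
```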
13.6 Answer/keys to homework 6
Problem 1.
Problem 5.
b. u = 2 cos(t − 2π/3)
c. u = 2√5 cos(3t − δ), δ = − arctan(1/2) ≈ −0.4636
d. u = √13 cos(πt − δ), δ = π + arctan(3/2) ≈ 4.1244
Problem 6. u = (1/(4√2)) sin(8√2 t) − (1/12) cos(8√2 t) ft, ω = 8√2 rad/s, T = π/(4√2) s, R = √22/24 ≈ 0.1954 ft
Problem 7. γ = 8 lb·s/ft
Problem 8. k = 6, ν = ±2√5.
13.7 Answer/keys to homework 7
Problem 1. a. −2 sin(8t) sin t, b. 2 sin(t/2) cos(13t/2), c. 2 cos(3πt/2) cos(πt/2)
Problem 2. a. −∞ < t < ∞, b. t > 0 or t < 0, c. t > 1, or 0 < t < 1, or
t < 0.
Problem 3.
a. y = c1 et + c2 tet + c3 e−t
b. y = c1 et + c2 tet + c3 t2 et
13.8 Answer/keys to homework 8
Problem 1.
a. F(s) = s/(s^2 − b^2), s > |b|;
b. F(s) = b/(s^2 − b^2), s > |b|;
c. F(s) = (s − a)/((s − a)^2 − b^2), s > a + |b|;
d. F(s) = b/((s − a)^2 − b^2), s > a + |b|.
Problem 2.
F(s) = b/((s − a)^2 + b^2) (s > a) and G(s) = (s − a)/((s − a)^2 + b^2) (s > a).
Problem 3.
a. F(s) = 1/(s − a)^2, s > a;
b. F(s) = 2as/(s^2 + a^2)^2, s > 0;
c. F(s) = (s^2 + a^2)/[(s − a)^2 (s + a)^2], s > |a|.
Problem 4.
(b). F(s) = n!/(s − a)^{n+1}, G(s) = 2b(s − a)/[(s − a)^2 + b^2]^2, H(s) = [(s − a)^2 − b^2]/[(s − a)^2 + b^2]^2.
Problem 5.
a. f(t) = (3/2) sin 2t;
b. f(t) = 2t^2 e^t;
c. f(t) = (2/5)e^t − (2/5)e^{−4t};
d. f(t) = (9/5)e^{3t} + (6/5)e^{−2t};
e. f(t) = 2e^{−t} cos 2t;
f. f(t) = 3 − 2 sin 2t + 5 cos 2t;
g. f(t) = 2e^{−t} cos 3t − (5/3)e^{−t} sin 3t.
Problem 6.
a. y(t) = (1/5)(e^{3t} + 4e^{−2t});
b. y(t) = e^t sin t;
c. y(t) = e^{2t} − te^{2t};
d. y(t) = te^t − t^2 e^t + (2/3)t^3 e^t;
e. y(t) = cosh t = (1/2)(e^t + e^{−t});
f. y(t) = [(w^2 − 5) cos wt + cos 2t]/(w^2 − 4).
13.9 Answer/keys to homework 9
Problem 1.
a. f (t) = −2u3 (t) + 4u5 (t) − u7 (t)
b. f (t) = 1 + u2 (t)[e−(t−2) − 1]
Problem 2.
a. F (s) = 2e−s /s3
b. F (s) = e−s (s2 + 2)/s3
c. F(s) = e^{−πs}/s^2 − e^{−2πs}(1 + πs)/s^2
d. F(s) = (1/s)(e^{−s} + 2e^{−3s} − 6e^{−4s})
e. F (s) = s−2 [(1 − s)e−2s − (1 + s)e−3s ]
f. F (s) = (1 − e−s )/s2
Problem 3.
a. f(t) = (1/3) u2(t)[e^{t−2} − e^{−2(t−2)}]
b. f (t) = 2u2 (t)et−2 cos(t − 2)
c. f (t) = u2 (t) sinh 2(t − 2)
d. f (t) = u1 (t)e2(t−1) cosh(t − 1)
e. f (t) = u1 (t) + u2 (t) − u3 (t) − u4 (t)
Problem 4. (d). f1(t) = 2(2t)^n, f2(t) = (1/2)e^{−t/2} cos t, and f3(t) = (1/2)e^{t/2} u2(t/2).
2 2
Problem 5.
a. y(t) = 1 − cos t + sin t − u3π(t)(1 + cos t)
b. y(t) = (1/6)(2 sin t − sin 2t) − (1/6)uπ(t)(2 sin t + sin 2t)
c. y(t) = e^{−t} − e^{−2t} + u2(t)[1/2 − e^{−(t−2)} + (1/2)e^{−2(t−2)}]
d. y(t) = (1/2) sin t + (1/2)t − (1/2)u6(t)[t − 6 − sin(t − 6)]
Problem 6.
a. y(t) = e^{−t} cos t + e^{−t} sin t + uπ(t)e^{−(t−π)} sin(t − π)
b. y(t) = (1/2)uπ(t) sin 2(t − π) − (1/2)u2π(t) sin 2(t − 2π)
c. y(t) = −(1/2)e^{−2t} + (1/2)e^{−t} + u5(t)[−e^{−2(t−5)} + e^{−(t−5)}] + u10(t)[1/2 + (1/2)e^{−2(t−10)} − e^{−(t−10)}]
d. y(t) = sin t + u2π(t) sin(t − 2π)
13.10 Answer/keys to homework 10
Problem 1.
a. x1 = u, x2 = u′ → x′1 = x2, x′2 = −2x1 − 0.5x2
b. x1 = u, x2 = u′ → x′1 = x2, x′2 = −(1 − 0.25t^{−2})x1 − t^{−1}x2
c. x1 = u, x2 = u′, x3 = u′′, x4 = u′′′ → x′1 = x2, x′2 = x3, x′3 = x4, x′4 = x1
d. x1 = u, x2 = u′ → x′1 = x2, x′2 = −4x1 − 0.25x2 + 2 cos 3t, x1(0) = 1, x2(0) = −2
e. x1 = u, x2 = u′ → x′1 = x2, x′2 = −q(t)x1 − p(t)x2 + g(t), x1(0) = u0, x2(0) = u′0
Problem 2.
(a). λ1 = 2, ~v1 = (1, 3)^T;  λ2 = 4, ~v2 = (1, 1)^T
(b). λ1 = −3, ~v1 = (1, −1)^T;  λ2 = −1, ~v2 = (1, 1)^T
(c). λ1 = −1/2, ~v1 = (3, 10)^T;  λ2 = −3/2, ~v2 = (1, 2)^T
(d). λ1 = −3, ~v1 = (1, 0)^T;  λ2 = 1, ~v2 = (1, 1)^T
Problem 3. Using the eigenpairs found in Problem 2 and the formula ~x = c1 e^{λ1 t} ~v1 + c2 e^{λ2 t} ~v2, one can write out the general solutions directly.
Problem 4.
(a). ~x = c1 (3, 4)^T + c2 e^{−2t} (1, 2)^T
(b). ~x = c1 (−2, 1)^T + c2 e^t (−3, 1)^T
Problem 5.
(a). (i). ~x = c1 e^{−t} (−1, 2)^T + c2 e^{−2t} (1, 2)^T,  (ii). c1 = −1/4, c2 = 7/4
(b). (i). ~x = c1 e^t (−1, 2)^T + c2 e^{−2t} (1, 2)^T,  (ii). c1 = −1/4, c2 = 7/4
13.11 Answer/keys to homework 11
Problem 1.
a. ~x = c1 e^t (cos 2t, cos 2t + sin 2t)^T + c2 e^t (sin 2t, − cos 2t + sin 2t)^T
b. ~x = c1 e^{−t} (2 cos 2t, sin 2t)^T + c2 e^{−t} (−2 sin 2t, cos 2t)^T
c. ~x = c1 (5 cos t, 2 cos t + sin t)^T + c2 (5 sin t, − cos t + 2 sin t)^T
Problem 2.
(a). ~x = e^{−t} (cos t − 3 sin t, cos t − sin t)^T
(b). ~x = e^{−2t} (cos t − 5 sin t, −2 cos t − 3 sin t)^T
Problem 3.
(a). ~x = c1 e^t (2, 1)^T + c2 [ te^t (2, 1)^T + e^t (1, 0)^T ]
(b). ~x = c1 (1, 2)^T + c2 [ t (1, 2)^T − (0, 0.5)^T ]
Problem 4.
(a). ~x = e^{−3t} (3 + 4t, 2 + 4t)^T
(b). ~x = 2 (1, 2)^T + 14t (3, −1)^T
Problem 5.
(a). (−0.5, 1), saddle point, unstable;
(0, 0), proper node, unstable
(b). (0, 0), node, unstable;
(2, 0), node, asymptotically stable;
(0, 1.5), saddle point, unstable;
(−1, 3), node, asymptotically stable
(c). (0, 0), spiral point, asymptotically stable;
(1 − √2, 1 + √2), saddle point, unstable;
(1 + √2, 1 − √2), saddle point, unstable
13.12 Answer/keys to homework 12
Problem 1.
a. f(x) = (2L/π) Σ_{n=1}^∞ ((−1)^n/n) sin(nπx/L)
b. f(x) = 1/2 − (2/π) Σ_{n=1}^∞ (1/(2n − 1)) sin[(2n − 1)πx/L]
c. f(x) = −π/4 + Σ_{n=1}^∞ [ (2/(π(2n − 1)^2)) cos(2n − 1)x + ((−1)^{n+1}/n) sin nx ]
d. f(x) = Σ_{n=1}^∞ [ −(2/(nπ)) cos(nπ/2) + (4/(nπ)^2) sin(nπ/2) ] sin(nπx/2)
e. f(x) = (4/π) Σ_{n=1}^∞ sin[(2n − 1)πx/2]/(2n − 1)
Problem 2. (c). F(t) = ∫_0^t f(s) ds may not be periodic; for example, consider f(t) = 1 + sin t.
Problem 5.
a. y = − sin x
b. y = (cot(√2 π) cos(√2 x) + sin(√2 x))/√2;
c. y = c2 sin x if cos L = 0, and y = 0 for all other L;
d. No solution
e. y = c1 cos 2x + (1/3) cos x.
13.13 Answer/keys to homework 13
Problem 1.
a. Letting u(x, t) = F(x)G(t), we have xF′′(x) − λF(x) = 0, G′(t) + λG(t) = 0
b. Letting u(x, t) = F(x)G(t), we have F′′(x) − λxF(x) = 0, G′(t) + λtG(t) = 0
c. Letting u(x, t) = F(x)G(t), we have F′′(x) − λ(F′(x) + F(x)) = 0, G′(t) + λG(t) = 0
d. Letting u(x, t) = F(x)G(t), we have [p(x)F′(x)]′ + λr(x)F(x) = 0, G′′(t) + λG(t) = 0
e. Not separable.
f. Letting u(x, y) = F(x)G(y), we have F′′(x) + (x + λ)F(x) = 0, G′′(y) − λG(y) = 0
Problem 2.
u(x, t) = e^{−400π^2 t} sin 2πx − e^{−2500π^2 t} sin 5πx
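A quick symbolic sanity check (Python/sympy, our own; the diffusivity α² = 100 is inferred from the decay rates 400π² = 100·(2π)² and 2500π² = 100·(5π)², since the original problem statement is not reproduced here):

```python
import sympy as sp

x, t = sp.symbols('x t')
u = (sp.exp(-400*sp.pi**2*t)*sp.sin(2*sp.pi*x)
     - sp.exp(-2500*sp.pi**2*t)*sp.sin(5*sp.pi*x))

# Each Fourier mode decays at rate alpha^2*(n*pi)^2 with alpha^2 = 100 here,
# so u should satisfy u_t = 100 u_xx and vanish at x = 0 and x = 1:
assert sp.simplify(sp.diff(u, t) - 100*sp.diff(u, x, 2)) == 0
assert sp.simplify(u.subs(x, 0)) == 0
assert sp.simplify(u.subs(x, 1)) == 0
```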
Problem 3.
u(x, t) = (100/π) Σ_{n=1}^∞ ((1 − cos nπ)/n) e^{−n^2 π^2 t/1600} sin(nπx/40)
Problem 4*.
X ′′ + µ2 X = 0, Y ′′ + (λ2 − µ2 )Y = 0, T ′ + α2 λ2 T = 0
Problem 5*.
Problem 6.
(a). U(x) = 10 + (3/5)x
(b). U(x) = T
(c). U(x) = T(1 + x)/(1 + L)
(d). U(x) = T(1 + L − x)/(1 + L)
Problem 7. The IVBP for the heat equation is:
Note that this has non-homogeneous B.C.s! Carrying out the computation, we get
u(x, t) = 3x + Σ_{n=1}^∞ Cn e^{−(cnπ/20)^2 t} sin(nπx/20),   Cn = (1/10) ∫_0^{20} (25 − 3x) sin(nπx/20) dx = (1/(nπ))(70(−1)^n + 50)