Chapter 9 Notes
Aim
• All linear differential equations and systems, of any order, can be converted into a (possibly large) 1st order differential system.
>> A = [1,3;5 6]
A =
1 3
5 6
>> B = [2 4;6 8]
B =
2 4
6 8
>> A+B
ans =
3 7
11 14
>> A*B
ans =
20 28
46 68
ans =
19 31
53 74
• Define A, B, x by
$$A = \begin{pmatrix} 1 & 0 \\ 3 & 4 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 2 \\ 2 & 5 \end{pmatrix}, \qquad x = \begin{pmatrix} 1 \\ 4 \end{pmatrix}$$
– Calculate by hand AB, Ax, and show that (A + B)x = Ax + Bx by calculating both sides of the equality
• Define A, x by
$$A = \begin{pmatrix} 1 & 0 & 0 \\ 3 & 4 & 1 \\ 1 & 1 & 2 \end{pmatrix}, \qquad x = \begin{pmatrix} 1 \\ 4 \\ 5 \end{pmatrix}$$
– Calculate by hand Ax
• Check your answers above using Matlab
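For instance, one possible MATLAB session for these checks (a sketch; the variable names are chosen here for illustration):
>> A = [1 0; 3 4]; B = [1 2; 2 5]; x = [1; 4];
>> A*B                       % product of the two 2x2 matrices
>> A*x                       % matrix-vector product
>> (A+B)*x - (A*x + B*x)     % should be the zero vector
>> A3 = [1 0 0; 3 4 1; 1 1 2]; x3 = [1; 4; 5];
>> A3*x3                     % 3x3 matrix times vector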
For two unknown functions, x(t) and y(t), consider a 1st order differential system:
$$\begin{cases} x' = 4x + 2y, \\ y' = 4x - 4y, \end{cases} \qquad \underbrace{\begin{pmatrix} x \\ y \end{pmatrix}'}_{\mathbf{x}'} = \underbrace{\begin{pmatrix} 4 & 2 \\ 4 & -4 \end{pmatrix}}_{A} \underbrace{\begin{pmatrix} x \\ y \end{pmatrix}}_{\mathbf{x}} \quad \text{in matrix form.}$$
Example 3. Express the following differential system (originating from a coupled mass-spring system),
$$\begin{cases} m_1 x'' + (k_1 + k_2)x - k_2 y = 0, \\ m_2 y'' - k_2 x + k_2 y = 0, \end{cases}$$
with $x(0) = x_0$, $y(0) = y_0$, $x'(0) = v_0$, $y'(0) = w_0$, in matrix form.
✓ So our target problem is the 1st order linear differential system given by
$$\mathbf{x}' = A(t)\mathbf{x} + \mathbf{f}(t) \quad \text{with} \quad \mathbf{x}(0) = \mathbf{x}_0,$$
where A(t) is a matrix function and f(t) is a vector function. This form is called a normal form.
✓ Any linear differential equation of any order can be converted into this form by introducing secondary variables (velocity, acceleration, etc.), as sketched below for Example 3.
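A sketch of that conversion for Example 3 (the ordering of the variables in the vector is a choice made here for illustration): solving each equation for the highest derivative, $x'' = -\frac{k_1+k_2}{m_1}x + \frac{k_2}{m_1}y$ and $y'' = \frac{k_2}{m_2}x - \frac{k_2}{m_2}y$, and collecting $(x, y, x', y')$ into one vector gives
$$\frac{d}{dt}\begin{pmatrix} x \\ y \\ x' \\ y' \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -\frac{k_1+k_2}{m_1} & \frac{k_2}{m_1} & 0 & 0 \\ \frac{k_2}{m_2} & -\frac{k_2}{m_2} & 0 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \\ x' \\ y' \end{pmatrix}, \qquad \begin{pmatrix} x(0) \\ y(0) \\ x'(0) \\ y'(0) \end{pmatrix} = \begin{pmatrix} x_0 \\ y_0 \\ v_0 \\ w_0 \end{pmatrix},$$
which is the normal form $\mathbf{x}' = A\mathbf{x}$ with constant A and $\mathbf{f} = \mathbf{0}$.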
Aim
• Learn some linear algebra tools for 2 × 2 matrices: determinant, trace, inverse
Review of Matrices
Let A be a 2 × 2 matrix, i.e., $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$.
• The determinant of A:
$$\det(A) = |A| = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc$$
• The trace of A:
tr(A) = a + d
• The identity matrix $I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ satisfies AI = IA = A for any square matrix A (with I of the same size as A):
$$\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$$
and Ix = x:
$$\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$$
• If A consists of differentiable functions, $A(t) = \begin{pmatrix} a(t) & b(t) \\ c(t) & d(t) \end{pmatrix}$, we can differentiate or integrate A(t) entrywise:
$$A'(t) = \begin{pmatrix} a'(t) & b'(t) \\ c'(t) & d'(t) \end{pmatrix}, \qquad \int A(t)\,dt = \begin{pmatrix} \int a(t)\,dt & \int b(t)\,dt \\ \int c(t)\,dt & \int d(t)\,dt \end{pmatrix}.$$
$$|A| = 1 \cdot 1 - 2 \cdot \tfrac{1}{2} = 0 \;\Rightarrow\; A \text{ is singular} \;\Rightarrow\; \nexists\, A^{-1}.$$
2. Is $B = \begin{pmatrix} 1 & 1 \\ \tfrac{1}{2} & 1 \end{pmatrix}$ singular or nonsingular?
$$|B| = 1 \cdot 1 - 1 \cdot \tfrac{1}{2} = \tfrac{1}{2} \neq 0 \;\Rightarrow\; B \text{ is nonsingular (invertible)} \;\Rightarrow\; \exists\, B^{-1}.$$
3. The inverse of B:
$$B^{-1} = \begin{pmatrix} 1 & 1 \\ \tfrac{1}{2} & 1 \end{pmatrix}^{-1} = \frac{1}{|B|}\begin{pmatrix} 1 & -1 \\ -\tfrac{1}{2} & 1 \end{pmatrix} = \frac{1}{1/2}\begin{pmatrix} 1 & -1 \\ -\tfrac{1}{2} & 1 \end{pmatrix} = \begin{pmatrix} 2 & -2 \\ -1 & 2 \end{pmatrix}.$$
Note we can check if this is correct since $B^{-1}B = I$, so we can calculate
$$\begin{pmatrix} 2 & -2 \\ -1 & 2 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ \tfrac{1}{2} & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
and so we do get back I from $B^{-1}B$.
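The same check can be done in MATLAB (a quick sketch, not part of the original example):
>> B = [1 1; 1/2 1];
>> det(B)        % expect 0.5, so B is nonsingular
>> inv(B)        % expect [2 -2; -1 2]
>> inv(B)*B      % expect the 2x2 identity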
• $\begin{cases} 3x - y = 0, \\ x + 2y = 0 \end{cases} \;\Rightarrow\; \underbrace{\begin{pmatrix} 3 & -1 \\ 1 & 2 \end{pmatrix}}_{A \text{ with } |A| = 7 \neq 0} \underbrace{\begin{pmatrix} x \\ y \end{pmatrix}}_{\mathbf{x}} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \;\overset{A^{-1}\text{ exists}}{\Longrightarrow}\; \mathbf{x} = A^{-1}\begin{pmatrix} 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$
• $\begin{cases} 2x + 4y = 2, \\ x + 2y = 5 \end{cases} \;\Rightarrow\; \underbrace{\begin{pmatrix} 2 & 4 \\ 1 & 2 \end{pmatrix}}_{A \text{ with } |A| = 0}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 2 \\ 5 \end{pmatrix}$
• As above, $\mathbf{x} = A^{-1}\mathbf{b}$ will not work, since $A^{-1}$ does not exist.
• The left-hand sides of the equations are multiples of each other (row 1 of A is double row 2)
• But the right-hand sides are not in the same ratio
• If we multiply equation 1 by $\tfrac{1}{2}$, we get
$$\begin{cases} x + 2y = 1, \\ x + 2y = 5. \end{cases}$$
• Clearly no such x, y can exist, so we have no solutions.
1. Solve the following system by formulating it as Ax = b, and then multiplying both sides of the equation by $A^{-1}$ (a MATLAB check appears after this exercise list):
$$\begin{cases} x + 2y = 1, \\ 3x + 4y = 2. \end{cases}$$
2. Find $X^{-1}(t)$ for $X(t) = \begin{pmatrix} \sin(2t) & \cos(2t) \\ 2\cos(2t) & -2\sin(2t) \end{pmatrix}$.
3. For what values of r is $\det(A - rI) = 0$, for $A = \begin{pmatrix} 1 & 1 \\ -2 & 4 \end{pmatrix}$?
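For exercise 1, a quick MATLAB check (a sketch; not part of the original exercise set):
>> A = [1 2; 3 4]; b = [1; 2];
>> x = A\b       % backslash solves A*x = b; expect x = 0, y = 1/2
>> inv(A)*b      % same answer via the inverse, as the exercise asks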
The inverse formula is valid for matrices that are functions of t, so we just need to use it. For $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, the inverse formula is
$$A^{-1} = \frac{1}{|A|}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$
For our matrix X, we get $|X| = -2\sin^2(2t) - 2\cos^2(2t) = -2$. Now applying the formula,
$$X^{-1}(t) = -\frac{1}{2}\begin{pmatrix} -2\sin(2t) & -\cos(2t) \\ -2\cos(2t) & \sin(2t) \end{pmatrix} = \begin{pmatrix} \sin(2t) & \tfrac{1}{2}\cos(2t) \\ \cos(2t) & -\tfrac{1}{2}\sin(2t) \end{pmatrix}.$$
While this already finishes the problem, as a sanity check, let's calculate $X^{-1}X$ and make sure we do get I back:
$$X^{-1}X = \begin{pmatrix} \sin(2t) & \tfrac{1}{2}\cos(2t) \\ \cos(2t) & -\tfrac{1}{2}\sin(2t) \end{pmatrix}\begin{pmatrix} \sin(2t) & \cos(2t) \\ 2\cos(2t) & -2\sin(2t) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I$$
and we do get back the identity, as expected.
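If the Symbolic Math Toolbox is available, MATLAB can confirm this as well (a sketch, not part of the original solution):
>> syms t
>> X = [sin(2*t), cos(2*t); 2*cos(2*t), -2*sin(2*t)];
>> simplify(inv(X))      % should match X^{-1}(t) above
>> simplify(inv(X)*X)    % should be the 2x2 identity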
An eigenpair (r, u) of A satisfies Au = ru with u ≠ 0, i.e., (A − rI)u = 0. This last equation can only have a nonzero solution u if (A − rI) is singular, which is equivalent to |A − rI| = 0.
• $A = \begin{pmatrix} -\tfrac{1}{2} & 1 \\ -1 & -\tfrac{1}{2} \end{pmatrix}$: $|A - rI| = r^2 + r + \tfrac{5}{4} = 0 \;\Rightarrow\; r = -\tfrac{1}{2} \pm i$.
✓ For $r = -\tfrac{1}{2} + i$,
$$A - rI = \begin{pmatrix} -i & 1 \\ -1 & -i \end{pmatrix}$$
– Thus we need to consider only equation 1: $-iu_1 + u_2 = 0$.
– Picking $u_1 = 1$ determines $u_2 = i$, and gives the eigenvector $u = \begin{pmatrix} 1 \\ i \end{pmatrix}$
✓ For $r = -\tfrac{1}{2} - i$,
$$A - rI = \begin{pmatrix} i & 1 \\ -1 & i \end{pmatrix}$$
– Thus we need to consider only equation 1: $iu_1 + u_2 = 0$.
– Picking $u_1 = 1$ determines $u_2 = -i$, and gives the eigenvector $u = \begin{pmatrix} 1 \\ -i \end{pmatrix}$
– The complex conjugate of a complex number means negating the imaginary part: conj(a+bi) = a − bi
– The complex conjugate of a vector means taking the complex conjugate of each entry
– For a real matrix with complex eigenvalues, the eigenvalues/eigenvectors always come in complex conjugate pairs.
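A quick MATLAB check of the complex eigenpairs above (a sketch, not part of the original notes):
>> A = [-1/2 1; -1 -1/2];
>> eig(A)             % expect -0.5 + 1i and -0.5 - 1i
>> [U, D] = eig(A);   % columns of U are (complex scalar multiples of) [1; i] and [1; -i]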
• $A = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}$:
$$|A - rI| = r^2 - 4r + 4 = (r - 2)^2 = 0 \;\Rightarrow\; r = 2$$
– $(A - rI)u = 0$ with $r = 2$ gives the equations
$$\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} u = 0$$
– Any nonzero vector u will work. I would pick $u = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$
• For $r = 1$, $A - rI = \begin{pmatrix} 1 & -1 \\ -2 & 2 \end{pmatrix}$, so equation 1 of $(A - rI)u = 0$ is $u_1 - u_2 = 0$.
Picking $u_1 = 1$ determines $u_2 = 1$, so our eigenvector is $u = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$.
Any scalar multiple of $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$ would also be a correct eigenvector, e.g. $\begin{pmatrix} -1 \\ -1 \end{pmatrix}$.
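A MATLAB check of this eigenpair (a sketch; the full matrix is not shown on this page, and A = [2 -1; -2 3] is inferred here from A − rI above with r = 1):
>> A = [2 -1; -2 3];   % inferred matrix (an assumption, see lead-in)
>> [U, D] = eig(A)     % expect eigenvalues 1 and 4; the column of U paired with 1 is a multiple of [1; 1]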
Picking $u_1 = 1$ determines $u_2 = -\tfrac{1}{3}(1 - 2\sqrt{2}\,i)$, so our eigenvector is $u = \begin{pmatrix} 1 \\ -\tfrac{1}{3}(1 - 2\sqrt{2}\,i) \end{pmatrix}$.
• For the $r = -2\sqrt{2}\,i$ eigenvector, since we know eigenvectors of a real matrix must be complex conjugates of each other, we can simply take the complex conjugate of the first eigenvector to get
$$u = \begin{pmatrix} 1 \\ -\tfrac{1}{3}(1 + 2\sqrt{2}\,i) \end{pmatrix}$$
Aim
• Learn how to solve x′ = Ax using eigenvalues/eigenvectors of A (the case of two distinct eigenvalues)
We would like to solve $\mathbf{x}' = A\mathbf{x}$, where $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ and $\mathbf{x} = \mathbf{x}(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}$.
• We have already learned that if (r, u) is an eigenpair of A, then $\mathbf{x} = e^{rt}u$ is a solution to $\mathbf{x}' = A\mathbf{x}$.
• We now consider the case that A has 2 distinct eigenvalues (either real or complex)
• We consider the case that A has only one eigenvalue in the next lecture.
• Key result: If the two eigenvalues of A are distinct (either real or complex), $r_1 \neq r_2$, then
$$\mathbf{x}(t) = c_1 e^{r_1 t}u_1 + c_2 e^{r_2 t}u_2 \quad \text{is the general solution of } \mathbf{x}' = A\mathbf{x},$$
where $u_1$ and $u_2$ are eigenvectors corresponding to $r_1$ and $r_2$, respectively.
Proof. Observe that $\mathbf{x}_1(t) := e^{r_1 t}u_1$ and $\mathbf{x}_2(t) := e^{r_2 t}u_2$ are solutions, as was shown in Ch 9 Lecture 3:
$$A\mathbf{x}_1(t) = A[e^{r_1 t}u_1] = e^{r_1 t}Au_1 = e^{r_1 t}r_1 u_1 = r_1 e^{r_1 t}u_1 = \big(e^{r_1 t}\big)' u_1 = \big(e^{r_1 t}u_1\big)' = \mathbf{x}_1'(t),$$
$$A\mathbf{x}_2(t) = A[e^{r_2 t}u_2] = e^{r_2 t}Au_2 = e^{r_2 t}r_2 u_2 = r_2 e^{r_2 t}u_2 = \big(e^{r_2 t}\big)' u_2 = \big(e^{r_2 t}u_2\big)' = \mathbf{x}_2'(t).$$
Then we have the following superposition principle: $\mathbf{x} = c_1\mathbf{x}_1 + c_2\mathbf{x}_2$ is also a solution, since
$$\mathbf{x}' - A\mathbf{x} = (c_1\mathbf{x}_1 + c_2\mathbf{x}_2)' - A(c_1\mathbf{x}_1 + c_2\mathbf{x}_2) = c_1(\mathbf{x}_1' - A\mathbf{x}_1) + c_2(\mathbf{x}_2' - A\mathbf{x}_2) = 0.$$
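With the Symbolic Math Toolbox, this recipe can be checked in MATLAB on a concrete matrix, e.g. the matrix from exercise 3 above (a sketch, not part of the original notes):
>> syms t c1 c2
>> A = sym([1 1; -2 4]);
>> [U, D] = eig(A)                                        % exact eigenpairs: r = 2, 3
>> x = c1*exp(D(1,1)*t)*U(:,1) + c2*exp(D(2,2)*t)*U(:,2);
>> simplify(diff(x, t) - A*x)                             % expect the zero vector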
• We still find eigenvalues and eigenvectors first, $(r_1, u_1)$, $(r_2, u_2)$
• But since the two eigenvalues and eigenvectors are complex conjugates of each other, we only need to bother finding $(r_1, u_1)$, since $(r_2, u_2) = (\bar{r}_1, \bar{u}_1)$
• The complex solutions are
$$\mathbf{x}_1 = e^{r_1 t}u_1, \qquad \mathbf{x}_2 = e^{r_2 t}u_2$$
• Since $r_2 = \bar{r}_1$ and $u_2 = \bar{u}_1$,
$$\mathbf{x}_2 = \bar{\mathbf{x}}_1, \quad \text{since} \quad \mathbf{x}_2 = e^{r_2 t}u_2 = e^{\bar{r}_1 t}\bar{u}_1 = \overline{e^{r_1 t}}\,\bar{u}_1 = \overline{e^{r_1 t}u_1} = \bar{\mathbf{x}}_1.$$
• We want real solutions, so these complex solutions are not good enough
Aim
• Learn how to solve x′ = Ax using eigenvalues/eigenvectors of A (the case of a repeated eigenvalue)
• If there are 2 distinct eigenvalues $r_1 \neq r_2$ for a 2 × 2 matrix A, we know how to find the general solution
• But what if $r_1 = r_2$?
– If there is only 1 eigenvalue for a 2 × 2 matrix A, it is a repeated root of $|A - rI| = 0$
– A matrix A with a repeated eigenvalue may have 1 or 2 corresponding (linearly independent) eigenvectors
– If there are 2 linearly independent eigenvectors, then the formula we used for 2 distinct eigenvalues $u_1$, $u_2$ will work with $r_1 = r_2 = r$
– If there is only 1 linearly independent eigenvector, then we need a new tool (which we will develop)
(3) General solution: $\mathbf{x}(t) = c_1 e^{rt}u_1 + c_2 e^{rt}u_2 = c_1 e^{-t}\begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{-t}\begin{pmatrix} 0 \\ 1 \end{pmatrix}$, $\forall c_1, c_2 \in \mathbb{R}$.
✓ Even with a repeated eigenvalue, if we have two linearly independent eigenvectors, we are fine!
Example 2. Solve $\mathbf{x}' = A\mathbf{x}$, where $A = \begin{pmatrix} -\tfrac{1}{2} & 1 \\ 0 & -\tfrac{1}{2} \end{pmatrix}$.
(1) Find eigenvalues: $|A - rI| = \begin{vmatrix} -\tfrac{1}{2} - r & 1 \\ 0 & -\tfrac{1}{2} - r \end{vmatrix} = \big(-\tfrac{1}{2} - r\big)^2 = \big(r + \tfrac{1}{2}\big)^2 = 0 \;\Rightarrow\; r = -\tfrac{1}{2}$ (repeated).
(2) Find eigenvectors: $(A - rI)u = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}u = 0 \;\Rightarrow\; u = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$, only one corresponding eigenvector.
(3) General solution: $\mathbf{x}(t) = c_1 \underbrace{e^{-\frac{1}{2}t}\begin{pmatrix} 1 \\ 0 \end{pmatrix}}_{\mathbf{x}_1(t)} + c_2 \underbrace{?????}_{\mathbf{x}_2(t)}$.
The right guess turns out to be $\mathbf{x}_2 = te^{rt}u + e^{rt}v$, where v satisfies $(A - rI)v = u$ (so $Av = rv + u$). Indeed,
$$A\mathbf{x}_2 = A[te^{rt}u + e^{rt}v] = te^{rt}ru + e^{rt}(rv + u) = (rte^{rt} + e^{rt})u + re^{rt}v = \big(te^{rt}\big)'u + \big(e^{rt}\big)'v = \mathbf{x}_2'.$$
Thus
$$\mathbf{x}(t) = c_1\underbrace{e^{-\frac{1}{2}t}\begin{pmatrix} 1 \\ 0 \end{pmatrix}}_{\mathbf{x}_1(t)} + c_2\underbrace{\left[te^{-\frac{1}{2}t}\begin{pmatrix} 1 \\ 0 \end{pmatrix} + e^{-\frac{1}{2}t}\begin{pmatrix} 0 \\ 1 \end{pmatrix}\right]}_{\mathbf{x}_2(t) = te^{rt}u + e^{rt}v}.$$
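With the Symbolic Math Toolbox, MATLAB can confirm that this second solution really solves the system (a sketch, not part of the original notes):
>> syms t
>> A = [-1/2 1; 0 -1/2];
>> x2 = [t*exp(-t/2); exp(-t/2)];     % the solution t*e^{rt}u + e^{rt}v found above
>> simplify(diff(x2, t) - A*x2)       % expect the zero vector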
Aim
• Learn how to solve a nonhomogeneous differential system using the method of undetermined coefficients or variation of parameters
• Finally, $\mathbf{x}(t) = -e^{-t}\begin{pmatrix} 1 \\ -1 \end{pmatrix} + \begin{pmatrix} -4 \\ 1 \end{pmatrix} = \begin{pmatrix} -4 - e^{-t} \\ 1 + e^{-t} \end{pmatrix}$.
• Finally, $\mathbf{x}(t) = e^{t}\begin{pmatrix} 1 \\ 0 \end{pmatrix} - 5e^{-t}\begin{pmatrix} 1 \\ -1 \end{pmatrix} + \begin{pmatrix} -5t - 1 \\ 2t - 3 \end{pmatrix} = \begin{pmatrix} -5t - 1 + e^{t} - 5e^{-t} \\ 2t - 3 + 5e^{-t} \end{pmatrix}$.
Example 6. Solve $\mathbf{x}' = \begin{pmatrix} 2 & -3 \\ 1 & -2 \end{pmatrix}\mathbf{x} + \begin{pmatrix} e^{2t} \\ 1 \end{pmatrix}$.
(1) Homogeneous solution: Since $r_1 = 1$, $u_1 = \begin{pmatrix} 3 \\ 1 \end{pmatrix}$ and $r_2 = -1$, $u_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$,
$$\mathbf{x}_h = c_1\underbrace{e^{t}\begin{pmatrix} 3 \\ 1 \end{pmatrix}}_{\mathbf{x}_1} + c_2\underbrace{e^{-t}\begin{pmatrix} 1 \\ 1 \end{pmatrix}}_{\mathbf{x}_2} = \underbrace{\begin{pmatrix} 3e^{t} & e^{-t} \\ e^{t} & e^{-t} \end{pmatrix}}_{X(t)}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.$$
(2) Particular solution: solving $\mathbf{v}' = X^{-1}\mathbf{f}$ for v gives
$$\mathbf{x}_p = \underbrace{\begin{pmatrix} 3e^{t} & e^{-t} \\ e^{t} & e^{-t} \end{pmatrix}}_{X(t)}\underbrace{\begin{pmatrix} \tfrac{1}{2}e^{t} + \tfrac{1}{2}e^{-t} \\ -\tfrac{1}{6}e^{3t} + \tfrac{3}{2}e^{t} \end{pmatrix}}_{\mathbf{v}(t)} = \begin{pmatrix} 3 + \tfrac{4}{3}e^{2t} \\ 2 + \tfrac{1}{3}e^{2t} \end{pmatrix}.$$
Note there are no integration constants, since we want the simplest v that satisfies $\mathbf{v}' = X^{-1}\mathbf{f}$.
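With the Symbolic Math Toolbox, one can verify this particular solution directly (a sketch, not part of the original solution):
>> syms t
>> A = [2 -3; 1 -2]; f = [exp(2*t); 1];
>> xp = [3 + 4/3*exp(2*t); 2 + 1/3*exp(2*t)];
>> simplify(diff(xp, t) - A*xp - f)    % expect the zero vector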
1. Use the method of undetermined coefficients to find a general solution of $\mathbf{x}' = A\mathbf{x} + \mathbf{f}$, where $A = \begin{pmatrix} 6 & 1 \\ 4 & 3 \end{pmatrix}$ and $\mathbf{f} = \begin{pmatrix} t \\ 1 \end{pmatrix}$.
2. Use the variation of parameters method to find a general solution of $\mathbf{x}' = A\mathbf{x} + \mathbf{f}$, where $A = \begin{pmatrix} 8 & -4 \\ 4 & -2 \end{pmatrix}$ and $\mathbf{f} = \begin{pmatrix} t^2 \\ 1 \end{pmatrix}$.
First, calculate the eigenvalues to be $r_1 = 2$ and $r_2 = 7$, then find their corresponding eigenvectors to be $u_1 = \begin{pmatrix} 1 \\ -4 \end{pmatrix}$ and $u_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$. Thus we have found the homogeneous solution to be
$$\mathbf{x}_h = c_1 e^{2t}\begin{pmatrix} 1 \\ -4 \end{pmatrix} + c_2 e^{7t}\begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
Now for the particular solution. Since $\mathbf{f} = \begin{pmatrix} t \\ 1 \end{pmatrix} = t\begin{pmatrix} 1 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix}$ is a vector polynomial of degree 1, we guess a vector polynomial of the same order, $\mathbf{x}_p = \mathbf{a}t + \mathbf{b}$.
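A sketch of the bookkeeping this guess leads to (plugging $\mathbf{x}_p = \mathbf{a}t + \mathbf{b}$ into $\mathbf{x}' = A\mathbf{x} + \mathbf{f}$ and matching powers of t; the remaining arithmetic is left as part of the exercise):
$$\mathbf{x}_p' = \mathbf{a} = A(\mathbf{a}t + \mathbf{b}) + t\begin{pmatrix} 1 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} \;\Rightarrow\; \underbrace{A\mathbf{a} + \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \mathbf{0}}_{\text{coefficient of } t}, \qquad \underbrace{A\mathbf{b} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \mathbf{a}}_{\text{constant term}},$$
so $\mathbf{a} = -A^{-1}\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $\mathbf{b} = A^{-1}\left(\mathbf{a} - \begin{pmatrix} 0 \\ 1 \end{pmatrix}\right)$.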
For exercise 2, determine the eigenvalues and eigenvectors to be $r_1 = 0$, $r_2 = 6$, $u_1 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$, and $u_2 = \begin{pmatrix} 2 \\ 1 \end{pmatrix}$. Thus
$$\mathbf{x}_h = c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2 e^{6t}\begin{pmatrix} 2 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 2e^{6t} \\ 2 & e^{6t} \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}$$
and so we have defined $X(t) = \begin{pmatrix} 1 & 2e^{6t} \\ 2 & e^{6t} \end{pmatrix}$. We need $X^{-1}$, so calculate it:
$$X^{-1} = \frac{1}{-3e^{6t}}\begin{pmatrix} e^{6t} & -2e^{6t} \\ -2 & 1 \end{pmatrix}$$
Determine v such that $\mathbf{v}' = X^{-1}\mathbf{f}$:
$$\mathbf{v}' = X^{-1}\mathbf{f} = \frac{1}{-3e^{6t}}\begin{pmatrix} e^{6t} & -2e^{6t} \\ -2 & 1 \end{pmatrix}\begin{pmatrix} t^2 \\ 1 \end{pmatrix} = \frac{1}{-3e^{6t}}\begin{pmatrix} t^2e^{6t} - 2e^{6t} \\ -2t^2 + 1 \end{pmatrix} = \frac{-1}{3}\begin{pmatrix} t^2 - 2 \\ -2t^2e^{-6t} + e^{-6t} \end{pmatrix}$$
Since
$$\int t^2e^{-6t}\,dt = -\frac{1}{6}t^2e^{-6t} - \frac{2}{36}te^{-6t} - \frac{2}{216}e^{-6t} + C,$$
we get (not ignoring the integration constants)
$$\mathbf{v} = \frac{-1}{3}\begin{pmatrix} \frac{1}{3}t^3 - 2t + C_1 \\ -2\left(-\frac{1}{6}t^2e^{-6t} - \frac{2}{36}te^{-6t} - \frac{2}{216}e^{-6t}\right) - \frac{1}{6}e^{-6t} + C_2 \end{pmatrix}$$
Note that this is a cubic vector polynomial. At first glance, you might have expected a quadratic, since f is a quadratic vector polynomial and we might have wanted to guess a quadratic vector polynomial.
However, notice that the $\mathbf{x}_1$ solution is a constant vector polynomial (since $r_1 = 0$), and so the method of undetermined coefficients would not have worked with that guess; hence you would have had to multiply your guess by t, which produces a cubic polynomial.
The variation of parameters method is easy to apply, but we can run into ugly expressions due to the integrals arising in the calculation of v.
However, if one has access to e.g. Maple or Mathematica, then these integrals can be done quickly and easily, making the entire method easy to use... but by hand it can get ugly.
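For instance, MATLAB's Symbolic Math Toolbox (an alternative to the Maple/Mathematica route mentioned above; a sketch, not part of the original notes) evaluates the integral from this example directly:
>> syms t
>> int(t^2*exp(-6*t), t)    % expect -1/6 t^2 e^{-6t} - 1/18 t e^{-6t} - 1/108 e^{-6t}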