Ma2a Prac Lectures 10-12
Indeed, consider x′ = ax. Here dx/dt = 0 ⇔ x = 0, so x = 0 is the only
equilibrium point. When x ≠ 0,
    (1/x)(dx/dt) = a
    =⇒ ∫ (dx/dt)/x dt = ∫ a dt
    =⇒ log |x| = at + c
    =⇒ |x| = e^{at+c}
    =⇒ x = Be^{at}, with B = ±e^c ≠ 0.
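One can check this formula numerically; here is a minimal Python sketch
(assuming numpy and scipy are installed; the values of a and B are arbitrary):

    import numpy as np
    from scipy.integrate import solve_ivp

    a, B = -0.7, 2.0                           # arbitrary sample values
    sol = solve_ivp(lambda t, x: a * x, (0.0, 5.0), [B],
                    rtol=1e-9, atol=1e-12)     # integrate x' = a x numerically
    exact = B * np.exp(a * sol.t)              # the closed form derived above
    print(np.max(np.abs(sol.y[0] - exact)))    # tiny: numerics match B e^{at}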
(Brush up on the basics of vector spaces, linear maps, and properties of
matrices - including eigenvalues, eigenvectors, and diagonalization - from
Ma1b; it will also be good if you know about the Jordan decomposition,
which we will discuss later.)
Generalization
x′ = A(t)x (*)
Using the known fact(s) that differentiation and matrix multiplication are
linear operations, we get the following
Properties of the solution set V of (∗):
(i) (existence of origin) 0 ∈ V
(ii) (additivity) If x, y are both in V , then x + y ∈ V
(iii) (homogeneity) If x ∈ V , then so is αx for any scalar α
Reason for (ii): If x′ = Ax and y′ = Ay, then
    (x + y)′ = x′ + y′ = Ax + Ay = A(x + y).
Reason for (iii): If x′ = Ax, then
    (αx)′ = αx′ = αAx = A(αx), since αA = Aα.
Conclusion
The solution set V of a linear, homogeneous system x′ = Ax is a vector
space. It is natural to expect V has dimension n.
Basic Questions: Can we guess a non-zero solution of x′ = Ax, for any
n × n constant matrix A? If so, can we find all the solutions, i.e., write down
a general solution like in the n = 1 case?
Here’s a clever idea for any n: (in many cases, but not all, this furnishes
all the solutions)
    x′ = Ax,   x = (x1, . . . , xn)^T.
Try:
    x = v e^{λt},  where v ∈ Rⁿ, v ≠ 0, λ ∈ R.
Here v is independent of t. Then
    x′ = λv e^{λt}  and  Ax = Av e^{λt},
so x′ = Ax requires
    Av e^{λt} = λv e^{λt}.
Since e^{λt} ≠ 0 for all t,
    Av = λv.
Recall that a complex matrix A is hermitian iff A equals its conjugate
transpose, i.e., A = Āᵗ. The general fact (hopefully discussed in Ma1b) is
that the eigenvalues of a complex hermitian matrix are all real. Of course, a
real matrix is hermitian iff it is symmetric, since Ā = A for real A.
Examples:

(i) A = [ 0  1 ]
        [ 1  0 ]

    det(λI2 − A) = det [ λ  −1 ]
                       [ −1  λ ]  = λ² − 1.

Eigenvalues: λ = 1, −1, since ±1 are the roots of λ² − 1 = 0.
Eigenvectors: Write v = (x, y)^T, so that Av = (y, x)^T.

λ = 1: Av = v means (y, x)^T = (x, y)^T, i.e., x = y. Take v = (1, 1)^T.

λ = −1: Av = −v means (y, x)^T = (−x, −y)^T, i.e., x = −y. Take v = (1, −1)^T.
(ii) A = [ 0  1 ]
         [ −1 0 ]  → det(λI2 − A) = λ² + 1.

Eigenvalues: λ± = ±i.
Eigenvectors: v± = (1, ±i)^T (no real eigenvector).
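Both hand computations are easy to confirm with a short Python sketch (numpy
assumed; numpy scales the eigenvectors to unit length, which does not matter
since eigenvectors are only determined up to a nonzero scalar):

    import numpy as np

    for A in (np.array([[0., 1.], [1., 0.]]),     # example (i): eigenvalues ±1
              np.array([[0., 1.], [-1., 0.]])):   # example (ii): eigenvalues ±i
        lam, V = np.linalg.eig(A)                 # columns of V are eigenvectors
        print(lam)
        for l, v in zip(lam, V.T):                # check Av = λv for each pair
            assert np.allclose(A @ v, l * v)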
In general, if A has two distinct eigenvalues λ1 and λ2, we get two solutions
x(j) = v(j) e^{λj t}, j = 1, 2, with v(1) an eigenvector for λ1 and v(2) an
eigenvector for λ2. Indeed, for j = 1, 2, since Av(j) = λj v(j),
    Ax(j) = λj v(j) e^{λj t} = v(j) (d/dt) e^{λj t} = (d/dt) x(j).
Claim: x(1) and x(2) are linearly independent solutions.

Proof: Suppose c1 x(1) + c2 x(2) = 0, for scalars c1, c2, not both zero. Putting
t = 0, and noting that, by definition, x(j)(0) = v(j) for j ∈ {1, 2}, we get the
linear dependence relation
    c1 v(1) + c2 v(2) = 0,   (1)
with not both constants c1, c2 being zero. So it suffices to check that v(1), v(2)
are linearly independent. This should be clear from the material covered in
Ma1b, since these eigenvectors correspond to different eigenvalues, but in any
case, here is the argument: To begin, since the eigenvectors are non-zero, if
one constant, say c1, is zero, then so is the other. So we may assume that
both c1 and c2 are non-zero. Applying the matrix A to the relation (1), and
using the fact that Av(j) = λj v(j), we obtain
    c1 λ1 v(1) + c2 λ2 v(2) = 0;
subtracting λ2 times (1), we obtain
    c1 (λ1 − λ2) v(1) = 0,
which is impossible since c1, λ1 − λ2, and v(1) are all non-zero. This gives
the necessary contradiction, and the Claim follows.
Lecture 11
Linear, homogeneous system with constant coefficients:
    x′ = Ax,   x = (x1, . . . , xn)^T,   x′ = dx/dt,   A = (aij)1≤i,j≤n   (*)
During the first 3 weeks we studied this for n = 1. In general, try to under-
stand well the n = 2 and n = 3 cases. For n = 2, you should know how to
draw various pictures, often called portraits, in the (x1 , x2 )-plane.
Equilibrium points are the solutions x for which x′ = 0, i.e., where Ax = 0.
Important special case: when A is an invertible matrix, i.e., when the
determinant of A, denoted det(A) or just |A|, is nonzero. Then there exists
an inverse matrix A⁻¹. Applying A⁻¹ (in this case) to both sides of Ax = 0,
we see that x = 0 = (0, . . . , 0)^T is the only equilibrium point (when A
is invertible).
Check that in general, a matrix A is singular, i.e., not invertible, if and
only if 0 is an eigenvalue of A.
General principle/Theorem:

Consider x′ = Ax, x = (x1, . . . , xn)^T.

(a) If λ is an eigenvalue of A with eigenvector v (≠ 0), then the function
of t given by x = v e^{λt} is a non-zero solution of (∗).

(b) Suppose A has n distinct eigenvalues, say λ1, λ2, . . . , λn (with λi ≠ λj
if i ≠ j), with eigenvectors v(1), v(2), . . . , v(n), i.e.,
    Av(j) = λj v(j);
then every solution of x′ = Ax is a linear combination
    x = c1 x(1) + c2 x(2) + · · · + cn x(n),
where the cj are scalars and
    x(j) = v(j) e^{λj t},
for each j = 1, 2, . . . , n.
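In computational terms, part (b) is a recipe: expand the initial vector in
the eigenbasis, then attach e^{λj t} to each coefficient. A minimal Python
sketch (numpy assumed; the function name is ours, and A is assumed to have
distinct eigenvalues):

    import numpy as np

    def general_solution(A, x0, t):
        """x(t) solving x' = Ax with x(0) = x0, via the eigenvector expansion."""
        lam, V = np.linalg.eig(A)            # A V = V diag(lam)
        c = np.linalg.solve(V, x0)           # x0 = c1 v(1) + ... + cn v(n)
        return (V * np.exp(lam * t)) @ c     # sum_j c_j v(j) e^{lam_j t}

    A = np.array([[0., 1.], [1., 0.]])       # the 2x2 example above
    print(general_solution(A, np.array([1., 0.]), 1.0).real)   # (cosh 1, sinh 1)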
Look at the case n = 2: When the eigenvalues are (real and) distinct,
i.e., λ1 ≠ λ2,
    x(1) = v(1) e^{λ1 t},   x(2) = v(2) e^{λ2 t}.
Remarks:
(a) It is more subtle if the eigenvalues are not real or not all distinct.
Here, when λ1, . . . , λn are all real and distinct, the fundamental solutions
x(1), . . . , x(n) are all real, i.e., valued in Rⁿ.
(d) If y(1), . . . , y(n) are n arbitrary solutions of (∗), one defines their
Wronskian determinant to be
    W(y(1), . . . , y(n)) = det(y(1), . . . , y(n)),
the determinant of the matrix whose columns are the y(j).
Example:

(1) n = 2, x′ = Ax, A = [ 0  1 ],  x = (x1, x2)^T.
                        [ 1  0 ]

We saw last time that A has 2 eigenvalues λ1 = 1 and λ2 = −1, with
corresponding eigenvectors v(1) = (1, 1)^T and v(2) = (1, −1)^T.
The two basic solutions of the linear system are
    x(1) = v(1) e^t = (e^t, e^t)^T,   x(2) = v(2) e^{−t} = (e^{−t}, −e^{−t})^T,
and the Wronskian is
    W(x(1), x(2)) = det [ e^t   e^{−t} ]
                        [ e^t  −e^{−t} ]  = e^t e^{−t} det [ 1   1 ]
                                                           [ 1  −1 ]  = −2 ≠ 0.
Thus x(1) , x(2) are independent solutions. Of course we knew this already,
because they correspond to distinct eigenvalues.
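The same Wronskian can be recomputed symbolically; a small sketch assuming
sympy is installed:

    import sympy as sp

    t = sp.symbols('t')
    x1 = sp.Matrix([sp.exp(t), sp.exp(t)])        # x(1) = v(1) e^t
    x2 = sp.Matrix([sp.exp(-t), -sp.exp(-t)])     # x(2) = v(2) e^{-t}
    W = sp.Matrix.hstack(x1, x2).det()            # det of the 2x2 solution matrix
    print(sp.simplify(W))                         # -2, independent of t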
Slope field:

This is a plot in the (x1, x2)-plane, called the phase plane, where one
chooses a grid and draws, at each point of the grid, a short arrow in the
direction of the vector A(x1, x2)^T.
Note that since A is a constant matrix, x′ (t), given by Ax, is independent
of t, which is what allows us to draw the slope field on the phase plane (at
all times t).
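Here is a minimal matplotlib sketch of the slope field for the example
A = (0 1; 1 0); the grid size and arrow scaling are arbitrary choices:

    import numpy as np
    import matplotlib.pyplot as plt

    A = np.array([[0., 1.], [1., 0.]])
    x1, x2 = np.meshgrid(np.linspace(-2, 2, 15), np.linspace(-2, 2, 15))
    u = A[0, 0] * x1 + A[0, 1] * x2          # first component of A(x1, x2)^T
    v = A[1, 0] * x1 + A[1, 1] * x2          # second component
    n = np.hypot(u, v)
    n[n == 0] = 1.0                          # avoid division by zero at the origin
    plt.quiver(x1, x2, u / n, v / n)         # equal-length direction arrows
    plt.xlabel('$x_1$'); plt.ylabel('$x_2$')
    plt.show()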
Asymptotics:

Suppose A is an n × n-matrix with distinct (real) eigenvalues λ1, . . . , λn
and corresponding eigenvectors v(1), . . . , v(n). Then the general solution of
x′ = Ax is given by
    x = c1 v(1) e^{λ1 t} + · · · + cn v(n) e^{λn t}.
Note that when λj > 0, e^{λj t} goes to ∞ as t → ∞ and goes to 0 as t → −∞.
It follows that the term corresponding to the largest (positive) λj dominates
the other terms as t → ∞, while the term with the most negative λj dominates
as t → −∞. This is because the eigenvectors v(1), v(2), . . . , v(n) of A are
static, i.e., do not vary with t (in our “constant coefficients” context).
However, for each j with λj ≠ 0, the corresponding basic solution
x(j) := v(j) e^{λj t} evolves as t varies.
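For the 2 × 2 example above (λ = ±1), this dominance is easy to observe
numerically: the direction of any solution with c1 ≠ 0 lines up with v(1) as
t grows. A small sketch (numpy assumed; the coefficients are arbitrary):

    import numpy as np

    v1, v2 = np.array([1., 1.]), np.array([1., -1.])
    c1, c2 = 0.3, 5.0                        # arbitrary nonzero coefficients
    for t in (0.0, 2.0, 5.0, 10.0):
        x = c1 * v1 * np.exp(t) + c2 * v2 * np.exp(-t)
        print(t, x / np.linalg.norm(x))      # tends to v1/|v1| ≈ (0.707, 0.707)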
Note: If A has a non-zero eigenvalue, then the equilibrium solution
x = 0 is not asymptotically stable (or even stable), since some solution
starting near 0 goes to (a non-zero vector times) ±∞ either as t → ∞ or as
t → −∞. If the only eigenvalue is 0, x = 0 is stable, as solutions near it
stay nearby for large |t|, but it is not asymptotically stable.
Trajectory:

To fix ideas, look at the example above with eigenvalues ±1, and with
general solution
    x = c1 (1, 1)^T e^t + c2 (1, −1)^T e^{−t}.
Lecture 12
Last time we discussed the example, in the plane:
    x′ = Ax,   x = (x1, x2)^T,   A = [ 0  1 ]
                                     [ 1  0 ]
A three-dimensional example
    A = [ 7  −18  0 ]
        [ 3   −8  0 ],   x = (x1, x2, x3)^T.
        [ 0    0  3 ]

Solve x′ = Ax subject to the initial condition x(0) = (7, 3, −1)^T.
Eigenvalues of A: Solve det(λI3 − A) = 0.
    det [ λ−7   18    0  ]
        [ −3   λ+8    0  ]  = (λ − 3) det [ λ−7   18  ]
        [  0     0  λ−3 ]                 [ −3   λ+8 ]
                            = (λ − 3)[(λ − 7)(λ + 8) + 54]
                            = (λ − 3)(λ² + λ − 56 + 54)
                            = (λ − 3)(λ − 1)(λ + 2)

Thus λ1 = 1, λ2 = −2, λ3 = 3.
Eigenvectors: Look for v = (a, b, c)^T ≠ 0 such that Av = λv. Note
    Av = (7a − 18b, 3a − 8b, 3c)^T.

λ3 = 3: Av = 3(a, b, c)^T forces 4a = 18b and 3a = 11b, hence a = b = 0,
while c is free. We may take v(3) = (0, 0, 1)^T.
λ1 = 1: Want Av(1) = v(1). Put v(1) = (a, b, c)^T, so that
    7a − 18b = a
    3a − 8b = b
    3c = c ⇒ c = 0.
The first two equations both give a = 3b. We may take v(1) = (3, 1, 0)^T.
λ2 = −2: Check that v(2) = (2, 1, 0)^T works!
Note: Put
    M = (v(1) v(2) v(3)),  the matrix of eigenvectors
      = [ 3  2  0 ]
        [ 1  1  0 ]
        [ 0  0  1 ]

Check:
    M⁻¹ = [ 1  −2  0 ]
          [ −1  3  0 ]
          [ 0   0  1 ]

using [ a  b ]⁻¹ = (1/(ad − bc)) [ d  −b ]
      [ c  d ]                   [ −c  a ]
=⇒ M⁻¹AM = [ 1  −2  0 ] [ 7  −18  0 ] [ 3  2  0 ]
           [ −1  3  0 ] [ 3   −8  0 ] [ 1  1  0 ]   (conjugation of A by M)
           [ 0   0  1 ] [ 0    0  3 ] [ 0  0  1 ]

         = [ 1   0  0 ]
           [ 0  −2  0 ]   ← the diagonal matrix of eigenvalues
           [ 0   0  3 ]
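This diagonalization is easy to double-check in Python (numpy assumed):

    import numpy as np

    A = np.array([[7., -18., 0.], [3., -8., 0.], [0., 0., 3.]])
    M = np.array([[3., 2., 0.], [1., 1., 0.], [0., 0., 1.]])
    D = np.linalg.inv(M) @ A @ M
    assert np.allclose(D, np.diag([1., -2., 3.]))   # M^{-1} A M = diag(1, -2, 3)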
Since the eigenvalues of A are all (real and) distinct, we know that a
fundamental set (basis) of solutions of x′ = Ax is given by
    x(1), x(2), x(3),  with x(j) = v(j) e^{λj t} for each j ∈ {1, 2, 3}.
Explicitly,
    x(1) = (3, 1, 0)^T e^t,   x(2) = (2, 1, 0)^T e^{−2t},   x(3) = (0, 0, 1)^T e^{3t},
whose Wronskian is
    W(x(1), x(2), x(3)) = det [ 3e^t   2e^{−2t}    0    ]
                              [  e^t    e^{−2t}    0    ]
                              [  0        0      e^{3t} ]
                        = (3e^{−t} − 2e^{−t}) e^{3t} = e^{2t} ≠ 0.
Note: No non-zero linear combination of x(1)(t) and x(3)(t) goes to the
equilibrium solution 0 = (0, 0, 0)^T as t → ∞, while x(2), with
x(2)(0) = v(2) = (2, 1, 0)^T, does approach the equilibrium solution as
t → ∞.
General Solution:
x = c1 x(1) + c2 x(2) + c3 x(3) ,
where c1 , c2 , c3 are scalars.
The given initial condition requires that x(0) = (7, 3, −1)^T. This gives a
system of equations for c1, c2, c3:
    c1 (3, 1, 0)^T + c2 (2, 1, 0)^T + c3 (0, 0, 1)^T = (7, 3, −1)^T.   (∗)
Write C = (c1, c2, c3)^T as a vector of constants, and consider the matrix
of eigenvectors
    M = [ 3  2  0 ]
        [ 1  1  0 ],
        [ 0  0  1 ]
which diagonalizes the matrix A defining the linear system. It follows that
    MC = x(0).
We know that M is invertible, and so
    C = M⁻¹ x(0) = [ 1  −2  0 ] [ 7  ]   [ 1  ]
                   [ −1  3  0 ] [ 3  ] = [ 2  ]
                   [ 0   0  1 ] [ −1 ]   [ −1 ]
So the unique solution satisfying the initial condition is
    x = x(1) + 2x(2) − x(3) = (3, 1, 0)^T e^t + (4, 2, 0)^T e^{−2t} − (0, 0, 1)^T e^{3t}.
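As a check, one can integrate the initial-value problem numerically and
compare against this closed form; a sketch assuming numpy and scipy:

    import numpy as np
    from scipy.integrate import solve_ivp

    A = np.array([[7., -18., 0.], [3., -8., 0.], [0., 0., 3.]])

    def exact(t):                            # x(1) + 2 x(2) - x(3), as above
        return (np.array([3., 1., 0.]) * np.exp(t)
                + np.array([4., 2., 0.]) * np.exp(-2 * t)
                - np.array([0., 0., 1.]) * np.exp(3 * t))

    sol = solve_ivp(lambda t, x: A @ x, (0.0, 2.0), [7., 3., -1.],
                    rtol=1e-10, atol=1e-12)
    print(np.max(np.abs(sol.y[:, -1] - exact(sol.t[-1]))))   # very small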
Terminology
Let x′ = Ax be a homogeneous, linear system, with A an n × n-matrix
with constant coefficients.
If we have n solutions, say y(1) , y(2) , . . . y(n) of this system of ODE’s, then
the Wronskian of {y(1) , . . . , y(n) } is
    W(y(1), y(2), . . . , y(n)) = det(y(1), y(2), . . . , y(n)).
The y(j) give a basis of solutions in an interval (−a, a), for some a > 0, iff
W (y(1) , . . . y(n) ) ̸= 0 on (−a, a). This is the same as saying: The y(j) are
linearly independent on the interval.
If x(1), x(2), . . . , x(n) is a fundamental set of solutions, the associated
fundamental matrix is
    Ψ = (x(1), x(2), . . . , x(n)).
Example: Consider again
    x = (x1, x2)^T,   x′ = Ax,   A = [ 0  1 ]
                                     [ −1 0 ]
As we saw earlier, the eigenvalues of A are λ1 = i, λ2 = −i, with associated
eigenvectors
    v(1) = (1, i)^T,   v(2) = (1, −i)^T.
We get distinct solutions:
    z(1) = v(1) e^{λ1 t} = (1, i)^T e^{it} = (e^{it}, i e^{it})^T,
    z(2) = v(2) e^{λ2 t} = (1, −i)^T e^{−it} = (e^{−it}, −i e^{−it})^T.
The only catch is that the solutions are complex, not real!
If z is a complex solution, i.e., if z′ = Az, then x = Re(z) and y = Im(z)
are also solutions, since A is real, Re(z′) = x′, and Im(z′) = y′. The nice
thing is that x, y are real solutions. Since e^{±it} = cos t ± i sin t, we get
    x(1)(t) = (cos t, −sin t)^T,   y(1)(t) = (sin t, cos t)^T,
and
    x(2)(t) = (cos t, −sin t)^T,   y(2)(t) = (−sin t, −cos t)^T.
Note that
    x(1)(t) = x(2)(t),   y(1)(t) = −y(2)(t).
So it suffices to consider just the solutions x(1) and y(1). Moreover, their
Wronskian is
    W(x(1), y(1)) = det [ cos t   sin t ]
                        [ −sin t  cos t ]  = cos²t + sin²t = 1 ≠ 0.
So these two real solutions are linearly independent (over R), and the
corresponding fundamental matrix is
    Ψ = [ cos t   sin t ]
        [ −sin t  cos t ]
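Since Ψ(0) = I, this Ψ coincides with the matrix exponential e^{At} (a
standard fact not proved in these notes); a quick check with scipy:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0., 1.], [-1., 0.]])
    t = 0.8                                  # arbitrary time
    Psi = np.array([[np.cos(t), np.sin(t)],
                    [-np.sin(t), np.cos(t)]])
    assert np.allclose(expm(A * t), Psi)     # e^{At} is the rotation matrix Ψ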