Eigenvalues and Eigenvectors with Professor Strang (MIT)
Contents

1 Eigenvalues and Eigenvectors
  1.1 What are Eigenvalues and Eigenvectors?
  1.2 Properties of Eigenvalues and Eigenvectors
  1.3 Practical Applications
  1.4 Challenges and Complexities
2 Prof. Gilbert Strang's Lecture on Eigenvalues and Eigenvectors
3 Quiz on Eigenvalues and Eigenvectors
LinkedIn – Dr. Lonny Thompson, Clemson University, May 29, 2025.
1 Eigenvalues and Eigenvectors
In Lecture 21 of MIT's 18.06 Linear Algebra course, Professor Gilbert Strang explains the meaning of eigenvalues and eigenvectors, an important concept in linear algebra. These ideas have profound implications for areas ranging from physics and engineering to machine learning and computer science.
In modal structural dynamics and mechanical vibration, the eigenvalues are the squares of the natural frequencies, and the eigenvectors are the mode shapes of vibration.
This lecture explores the fundamental concepts and methods for solving problems involving eigenvalues, and the challenges associated with them.
1.1 What are Eigenvalues and Eigenvectors?
Eigenvalues and eigenvectors provide a way to understand how a matrix behaves when it transforms a vector.
An eigenvector of a matrix is a vector whose direction is preserved (or exactly reversed) when the matrix is applied to it; only its magnitude is scaled. The factor by which it is scaled is called the eigenvalue.
Mathematically, this relationship is expressed
as:
Ax = λx
where A is a square matrix, x is an eigenvector, and λ is the corresponding eigenvalue.
In other words, the transformation Ax maps the input vector x to an output vector parallel to x, scaled by λ.
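As a quick numerical sketch (using NumPy; the example matrix below is an assumption, not one from the lecture), we can check Ax = λx directly:

```python
import numpy as np

# Assumed example matrix (not from the lecture): a simple diagonal case.
A = np.array([[2.0, 0.0],
              [0.0, 5.0]])

# np.linalg.eig returns the eigenvalues and the eigenvectors (as columns).
eigenvalues, eigenvectors = np.linalg.eig(A)

# Each eigenvector x satisfies A x = lambda x: the output is parallel to x.
for lam, x in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ x, lam * x)

print(eigenvalues)  # for a diagonal matrix, the diagonal entries: 2 and 5
```

For a diagonal matrix the eigenvectors are simply the coordinate axes, which makes the scaling easy to see.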
Why Are Eigenvalues and Eigenvectors Important?
Eigenvalues and eigenvectors are important because they provide insight into the behavior of matrices, which are used to represent systems of equations, transformations, second-order tensors, and other phenomena.
Understanding the eigenstructure of a matrix
helps to:
1. Determine stability in dynamic systems.
2. Simplify computations, such as finding powers of matrices and diagonalizing matrices to solve systems of ordinary differential equations, for example, in vibration analysis.
3. Determine principal normal stresses and
stretches for stress and strain tensors in
continuum mechanics.
4. Determine matrix invariants under change of basis transformation or coordinate rotation; useful, for example, in defining the distortion energy yield criterion (equivalent to the von Mises stress criterion) in terms of deviatoric stress.
5. Analyze data, for example, in Principal Component Analysis (PCA), for dimensionality reduction.
1. Matrix Transformation and Eigenvectors
When a matrix A is applied to a vector x, it usually changes both its direction and magnitude. However, for certain vectors, called eigenvectors, the output vector remains in the same or exactly opposite direction as x. The scaling factor for this change is the eigenvalue λ.
A classic example involves the projection matrix, which projects vectors onto a plane. In this case, the vectors already on the plane remain unchanged, making them eigenvectors with eigenvalue 1.
2. Characteristic Equation
To find the eigenvalues of a matrix, we rely on the characteristic equation, obtained by rewriting Ax − λx = 0 as

(A − λI)x = 0

where I is the identity matrix.

For nonzero x, the matrix (A − λI) must be singular, and therefore its determinant must be zero.

The goal is to solve for the eigenvalues λ by setting det(A − λI) to zero, yielding a characteristic polynomial equation in λ. The solutions (roots) of this equation are the eigenvalues.
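This procedure can be sketched in NumPy (the 2×2 example matrix is an assumption): for a 2×2 matrix the characteristic polynomial is λ² − tr(A)λ + det(A), and its roots match the eigenvalues computed directly:

```python
import numpy as np

# Assumed example matrix.
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# Characteristic polynomial of a 2x2 matrix: lam^2 - tr(A)*lam + det(A)
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
roots = np.roots(coeffs)        # roots of det(A - lam*I) = 0

eigs = np.linalg.eigvals(A)     # eigenvalues computed directly

print(sorted(roots), sorted(eigs))  # both give 2 and 4
```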
3. Examples of Special Matrices
Professor Strang uses several examples to illustrate the behavior of eigenvalues and eigenvectors:
Projection Matrix: A matrix that projects
vectors onto a subspace. Its eigenvalues
are typically 1 (for vectors in the subspace)
and 0 (for vectors orthogonal to it).
Permutation Matrix: A matrix that swaps
elements. It showcases how eigenvectors
change with permutation while maintaining
certain fixed properties.
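Both special cases can be sketched numerically; the particular projection line and swap below are assumed examples, not the ones from the lecture:

```python
import numpy as np

# Projection onto the line spanned by a = [1, 1]^T (assumed example).
a = np.array([[1.0],
              [1.0]])
P = (a @ a.T) / (a.T @ a)        # rank-1 projection matrix
print(np.linalg.eigvals(P))      # 1 (vectors on the line) and 0 (orthogonal)

# Permutation matrix that swaps the two components.
S = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(np.linalg.eigvals(S))      # 1 (symmetric vectors) and -1 (anti-symmetric)
```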
4. Complex Eigenvalues
Not all matrices have real eigenvalues. For example, a rotation matrix that rotates vectors by 90 degrees has no real eigenvector that remains parallel to the original vector. In such cases, complex numbers are needed to describe the eigenvalues.
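A sketch with the standard 90-degree rotation matrix shows NumPy returning the complex pair:

```python
import numpy as np

# Rotation by 90 degrees counter-clockwise.
Q = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigs = np.linalg.eigvals(Q)
print(eigs)  # a complex conjugate pair: +i and -i (no real eigenvalues)
```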
1.2 Properties of Eigenvalues and Eigenvectors
1. Trace and Determinant
The trace of a matrix (the sum of its diagonal elements) equals the sum of its eigenvalues. The determinant is the product of the eigenvalues.
These properties provide valuable checks on calculations, give insight into the matrix's behavior, and are invariant under changes of basis, particularly rotations.
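Both checks are easy to run numerically (the example matrix is assumed; it need not be symmetric):

```python
import numpy as np

# Assumed example matrix.
A = np.array([[4.0, 2.0],
              [1.0, 3.0]])

eigs = np.linalg.eigvals(A)          # here: 2 and 5

print(np.trace(A), eigs.sum())       # both 7: trace equals the sum
print(np.linalg.det(A), eigs.prod()) # both 10: determinant equals the product
```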
2. Symmetric Matrices
Symmetric matrices have real eigenvalues, and their eigenvectors are orthogonal (perpendicular). This makes them particularly well-behaved in many applications, such as physics, mechanical vibration, and machine learning, because the eigenvectors form a linearly independent basis in which the matrix transform is diagonal.
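NumPy's eigh routine, specialized for symmetric matrices, makes these properties visible (the example matrix is assumed):

```python
import numpy as np

# Assumed symmetric example matrix.
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# eigh returns real eigenvalues (in ascending order) and
# orthonormal eigenvectors as the columns of V.
w, V = np.linalg.eigh(A)

print(w)                                     # real eigenvalues: 2 and 4
assert np.allclose(V.T @ V, np.eye(2))       # eigenvectors are orthonormal
assert np.allclose(V.T @ A @ V, np.diag(w))  # in this basis, A is diagonal
```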
3. Repeated Eigenvalues and Deficiency
Matrices can have repeated eigenvalues. In some cases, repeated eigenvalues may result in a shortage of independent eigenvectors, making it difficult to describe the system fully. This scenario is called deficiency and is a complex aspect of linear algebra.
1.3 Practical Applications
Eigenvalues and eigenvectors have numerous applications:
Mechanical Vibrations: Determine natural frequencies and mode shapes of systems.
Quantum Mechanics: Describe physical states
and energies.
Principal Component Analysis (PCA): Reduce dimensions in data, identifying principal directions of variation.
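The PCA application can be sketched via the eigendecomposition of a covariance matrix; the synthetic data set below is an assumption, just to have something to decompose:

```python
import numpy as np

# Synthetic 2D data with one dominant direction of variation (assumed example).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 1.0],
                                          [0.0, 0.5]])

Xc = X - X.mean(axis=0)              # center the data
C = np.cov(Xc, rowvar=False)         # covariance matrix (symmetric)

w, V = np.linalg.eigh(C)             # eigenvalues in ascending order
order = np.argsort(w)[::-1]          # reorder: largest variance first
components = V[:, order]             # principal directions (eigenvectors)
explained = w[order] / w.sum()       # fraction of variance per component

print(explained)                     # the first component carries most variance
```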
1.4 Challenges and Complexities
While the theory of eigenvalues and eigenvectors can
be elegant, several complexities arise:
Complex Numbers: Nonsymmetric real matrices can have complex eigenvalues. Such a matrix can be decomposed into the sum of a symmetric and an anti-symmetric part, and the anti-symmetric part leads to complex conjugate eigenvalue pairs, with both real and imaginary parts, and complex eigenvectors. These complex modes are more difficult to interpret physically than the real-valued modes of real symmetric matrices.
Deficient Systems: Some matrices might not have enough independent eigenvectors to describe their behavior fully.
Sensitivity: Eigenvalues can be sensitive to slight changes in the matrix, which might lead to challenges in practical applications.
2 Prof. Gilbert Strang's Lecture on Eigenvalues and Eigenvectors

Let's dive into Professor Strang's lecture in more detail.
A question we can ask is: for an input vector x operated on by a square matrix A, what vectors remain parallel to x, scaled by some factor λ?
This problem can be posed as the eigenproblem:
Ax = λx (1)
Ax − λx = 0
(A − λI) x = 0 (2)
In this form, the matrix (A − λI) is the matrix
A shifted by λI, and can be viewed as a linear
system with right-hand-side b = 0. Here both λ
and x are unknowns.
We know from linear algebra that with b = 0, a nonzero x in the null space requires the matrix (A − λI) to be singular.

A singular matrix must have determinant equal to zero:

det(A − λI) = 0
Since the x's are in the null space, we can only solve for them up to an arbitrary constant.
If the square matrix is n × n, there will be n eigenvalues and associated eigenvectors. Since they go together, we can refer to the eigenpairs (λi, xi), for i = 1, 2, . . . , n.
2.1 Example
Let's do an example to see the solving process and gain insight into the eigenpairs (λi, xi):

A = [ 3  1 ]
    [ 1  3 ]
The characteristic equation is

det(A − λI) = (3 − λ)(3 − λ) − 1 = λ² − 6λ + 8 = 0

Let's factor this quadratic characteristic equation:

(λ − 2)(λ − 4) = 0

The two solutions are

λ1 = 2, λ2 = 4

As a check, tr(A) = λ1 + λ2 = 2 + 4 = 6, and det(A) = (3)(3) − (1)(1) = 8 = λ1λ2.
Now, let's get the two eigenvectors.

For λ1 = 2:

(A − λ1I) x1 = 0

(A − 2I) x1 = [ 1  1 ] x1 = [ 0 ]  →  x1 = [ −1 ]
              [ 1  1 ]      [ 0 ]          [  1 ]

For λ2 = 4:

(A − λ2I) x2 = 0

(A − 4I) x2 = [ −1   1 ] x2 = [ 0 ]  →  x2 = [ 1 ]
              [  1  −1 ]      [ 0 ]          [ 1 ]
What properties of the eigensolutions can we observe here?

Since the matrix was real and symmetric, the eigenvalues are real-valued, and the eigenvectors are perpendicular (orthogonal) and linearly independent; each eigenvector's dot product with itself is nonzero, while the dot product between different eigenvectors is zero:

x1 · x1 = (−1)(−1) + (1)(1) = 2
x2 · x2 = (1)(1) + (1)(1) = 2
x1 · x2 = (−1)(1) + (1)(1) = 0
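The worked example above can be reproduced in NumPy as a check (not part of the lecture):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# Symmetric matrix: eigh returns real eigenvalues in ascending order
# and orthonormal eigenvectors as the columns of V.
w, V = np.linalg.eigh(A)

print(w)                   # [2. 4.], matching lambda_1 = 2, lambda_2 = 4
x1, x2 = V[:, 0], V[:, 1]  # unit-length versions (up to sign) of [-1, 1]^T and [1, 1]^T
print(x1 @ x2)             # 0: the eigenvectors are orthogonal
```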
Consider another matrix:

A = [ 0  1 ]
    [ 1  0 ]

det(A − λI) = (0 − λ)(0 − λ) − 1 = λ² − 1 = 0,  →  λ1 = −1, λ2 = 1

λ1 = −1 :  →  x1 = [ −1 ]
                   [  1 ]

λ2 = 1 :  →  x2 = [ 1 ]
                  [ 1 ]

Now shift this matrix by 3I. Notice that A + 3I is exactly the matrix of the previous example, [3 1; 1 3]. Here

(A + 3I)x = Ax + 3x = λx + 3x = (λ + 3)x

so the eigenvectors are unchanged and the eigenvalues shift by 3: λ = −1, 1 become λ = 2, 4, exactly as found before.
CAUTION

Be careful: adding 3I to this special symmetric matrix gave the same eigenvectors with shifted eigenvalues, but this is not true when adding an arbitrary matrix, say B, to A.

We could be tempted to write: if A has eigenvalues λ's and B has eigenvalues α's, then the eigenvalues of A + B are the sums, with the following (incorrect) argument: Ax = λx and Bx = αx, so adding gives (A + B)x = (λ + α)x. But this is false!

This argument assumes that the eigenvectors of A and B are the same, which in general they are not.

In general, the eigenvalues of the sum (A + B) and product (AB) of matrices are not the sums or products of the individual eigenvalues.
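A small numerical counterexample makes this concrete (the matrices A and B below are assumed examples):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])   # triangular: eigenvalues 1 and 2
B = np.array([[3.0, 0.0],
              [1.0, 4.0]])   # triangular: eigenvalues 3 and 4

sum_eigs = np.sort(np.linalg.eigvals(A + B))
naive = np.array([1.0 + 3.0, 2.0 + 4.0])   # what the false argument predicts

print(sum_eigs, naive)   # they differ: eig(A + B) is not lambda + alpha

# Shifting by a multiple of I IS safe, since I shares every eigenvector with A:
shifted = np.sort(np.linalg.eigvals(A + 3.0 * np.eye(2)))
print(shifted)           # [4. 5.]: the eigenvalues of A, each shifted by 3
```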
2.2 Example of a Rotation (Orthogonal) Matrix

Consider a rotation (orthogonal) matrix that takes any vector and rotates it by 90◦ counter-clockwise. The rotated basis vectors are T(e1) = [0, 1]^T and T(e2) = [−1, 0]^T, so the rotation matrix is:

Q = [ T(e1)  T(e2) ] = [ 0  −1 ]
                       [ 1   0 ]
Let's check:

det(Q − λI) = (−λ)(−λ) + 1 = λ² + 1 = 0,  →  λ1 = i, λ2 = −i

tr(Q) = λ1 + λ2 = i − i = 0,
det(Q) = λ1λ2 = (i)(−i) = −i² = −(−1) = 1
In general, a matrix can be decomposed into a symmetric and an anti-symmetric part:

A = A_sym + A_anti-sym

where A_sym = (A + A^T)/2 and A_anti-sym = (A − A^T)/2.
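A short sketch of this split (the example matrix is assumed):

```python
import numpy as np

# Assumed example matrix.
A = np.array([[1.0, 4.0],
              [2.0, 3.0]])

A_sym  = 0.5 * (A + A.T)    # symmetric part
A_anti = 0.5 * (A - A.T)    # anti-symmetric part

assert np.allclose(A_sym, A_sym.T)       # symmetric
assert np.allclose(A_anti, -A_anti.T)    # anti-symmetric
assert np.allclose(A, A_sym + A_anti)    # the parts recombine to A
print(A_sym, A_anti)
```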
2.3 Trouble with repeated eigenvalues
Consider this triangular matrix (zeros on one side of
the diagonal):
A = [ 3  1 ]
    [ 0  3 ]

det(A − λI) = (3 − λ)(3 − λ) = 0,  →  λ1 = λ2 = 3
Here, the eigenvalues (roots of the characteristic
polynomial) are the same, i.e., repeated.
Note that the eigenvalues are the same as the
diagonal elements of the triangular matrix (this
will always be true).
With repeated eigenvalues, we will have trouble
with the eigenvectors.
With λ1 = 3:

(A − 3I) x1 = [ 0  1 ] x1 = [ 0 ]  →  x1 = [ 1 ]
              [ 0  0 ]      [ 0 ]          [ 0 ]

The equations force the second component of x1 to zero but give no second independent solution, so there is only one independent eigenvector for the repeated eigenvalue: the matrix is deficient.
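NumPy shows the shortage directly: it reports the repeated eigenvalue, but the two eigenvector columns it returns are parallel, spanning only one direction:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 3.0]])

w, V = np.linalg.eig(A)

print(w)                      # repeated eigenvalue: 3 and 3
# The reported eigenvector columns are (numerically) parallel,
# so V is singular and A cannot be diagonalized.
print(abs(np.linalg.det(V)))  # ~0
```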
2.4 Conclusions
Eigenvalues and eigenvectors are foundational tools in linear algebra. They allow us to analyze and understand matrices more deeply. From simple projections to complex rotations, they reveal the underlying structure of transformations and provide insights across various disciplines. Mastering the methods for interpreting these values gives us a powerful perspective on theoretical and applied mathematics and physics, making them indispensable for anyone working with mathematical models, engineering, machine learning, or data analysis.
2.5 What’s Next?
What can we do with the eigenvalues and eigenvectors after they are found? For one, we can use them to diagonalize the matrix A and conveniently express the powers of the matrix A. This is the subject of Professor Strang's Video Lecture 22, found here: https://youtu.be/13r9QY6cmjc?si=bwC2xo6a8XjuTkIF
References

Lecture 21: Eigenvalues and Eigenvectors. Prof. Gilbert Strang. Lec 21 – MIT 18.06 Linear Algebra, Spring 2005. YouTube video: https://youtu.be/lXNXrLcoerU?si=YEsra72FrzJztKT_
🧠 Let's Discuss with Your Insights and Experience
3 Quiz on Eigenvalues and Eigenvectors
1. What is an eigenvector?
A) A vector that changes its direction after matrix
multiplication.
B) A vector that remains in the same direction
after matrix multiplication.
C) A vector that disappears after matrix multiplication.
D) A vector that rotates 90 degrees after matrix
multiplication.
2. How is the eigenvalue of a matrix defined?
A) The sum of all elements in a matrix.
B) The trace of a matrix.
C) The scalar by which an eigenvector is scaled
after matrix multiplication.
D) The number of columns in the matrix.
3. What happens to the eigenvalues when you add cI (a scalar multiple of the identity) to a matrix A?
A) Eigenvalues remain the same.
B) Eigenvalues are scaled by the scalar.
C) Eigenvalues increase by the value of the scalar.
D) Eigenvalues decrease by the value of the scalar.
4. What is a characteristic equation used for?
A) To find the inverse of a matrix.
B) To determine the null space of a matrix.
C) To calculate eigenvalues by setting the determinant of A − λI to zero.
D) To identify the trace of the matrix.
5. What is a major challenge when dealing with repeated
eigenvalues?
A) They always produce complex numbers.
B) They often result in a lack of enough independent eigenvectors.
C) They require the use of orthogonal matrices.
D) They always have real numbers.
Answers to Quiz
Here are the correct answers for the quiz:
1. B) A vector that remains in the same direction after
matrix multiplication.
2. C) The scalar by which an eigenvector is scaled after
matrix multiplication.
3. C) Eigenvalues increase by the value of the scalar.
4. C) To calculate eigenvalues by setting the determinant of A − λI to zero.
5. B) They often result in a lack of enough independent
eigenvectors.
We Love Matrix Linear Algebra!
– Powered by LaTeX
My LinkedIn Profile