1550lect15
Textbook Reading:
Beezer, Ver 3.5 Section B (print version p233-238), Section D (print version p245-253)
Exercises
Exercises with solutions can be downloaded at
http://linear.ups.edu/download/fcla-3.50-solution-manual.pdf
Section B and Section D
1 Basis
Definition 1 Let V be a vector space. Then a subset S of V is said to be a basis for
V if
1. S is linearly independent.
2. ⟨S⟩ = V , i.e. S spans V .
Remark: Most of the time V is a subspace of Rm . Occasionally V is assumed to be a
subspace of Mmn or Pn . It does not hurt to assume V is a subspace of Rm .
Example 1 Let V = Rm . Then B = {e1 , . . . , em } is a basis for V (recall that all the entries
of ei are zero, except the i-th entry, which is 1). It is called the standard basis.
Obviously B is linearly independent.
Also, for any v ∈ V , v = [v]1 e1 + · · · + [v]m em ∈ ⟨B⟩. So ⟨B⟩ = V .
Example 2 A vector space can have different bases. For example, for V = R2 ,
S = {e1 , e2 } is a basis and S′ = { (1, 0)t , (1, 1)t } is also a basis.
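As a quick sanity check (not part of the notes), the linear independence of the two vectors in S′ from Example 2 can be verified with sympy:

```python
from sympy import Matrix

# Columns are the vectors of S' = {(1,0)^t, (1,1)^t} from Example 2.
M = Matrix([[1, 1],
            [0, 1]])

# Two vectors in R^2 form a basis iff the matrix having them as columns
# is nonsingular, i.e. has rank 2 (equivalently, nonzero determinant).
print(M.rank())  # 2, so S' is linearly independent and spans R^2
print(M.det())   # 1, nonzero
```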
Method 2
Let A = [v1 | · · · |vn ]. Suppose At −RREF→ B. Let T be the set of nonzero columns of B t .
Then T is a basis for ⟨S⟩ = C(A).
Then we can take
T = { (1, 0, 0, −31/7)t , (0, 1, 0, 12/7)t , (0, 0, 1, 13/7)t }.
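Method 2 lends itself to a quick computation with sympy. The sketch below applies it to a small matrix chosen purely for illustration (it is not the matrix behind the example above): row-reduce At, then keep the nonzero rows of the result (equivalently, the nonzero columns of B t).

```python
from sympy import Matrix

# A small illustrative matrix; its columns are v1, ..., vn.
A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 0, 1]])

# Method 2: row-reduce A^t. The nonzero rows of the RREF are exactly
# the first len(pivots) rows; transposed, they form a basis of C(A).
B, pivots = A.T.rref()
basis = [B.row(i).T for i in range(len(pivots))]
print(basis)  # two vectors, since r(A) = 2
```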
4 Dimension
Definition 4 (Dimension) Let V be a vector space. Suppose {v1 , . . . , vt } is a basis
for V . Then t is called the dimension of V , denoted t = dim V , and V is called a
finite-dimensional vector space.
Warning: We need to show that the dimension is well-defined, i.e., suppose both
{v1 , . . . , vt } and {u1 , . . . , us } are bases for V , then s = t.
Theorem 5 Suppose that S = {v1 , v2 , v3 , . . . , vt } is a finite set of vectors which spans
the vector space V . Then any set of t + 1 or more vectors from V is linearly dependent.
Proof. (Skipped in lecture; included here for completeness.)
We want to prove that any set of t + 1 or more vectors from V is linearly dependent. So
we will begin with a totally arbitrary set of vectors from V , R = {u1 , u2 , u3 , . . . , um },
where m > t. We will now construct a nontrivial relation of linear dependence on R.
Each vector u1 , u2 , u3 , . . . , um can be written as a linear combination of the vectors
v1 , v2 , v3 , . . . , vt since S is a spanning set of V . This means there exist scalars aij ,
1 ≤ i ≤ t, 1 ≤ j ≤ m, so that
u1 = a11 v1 + a21 v2 + a31 v3 + · · · + at1 vt
u2 = a12 v1 + a22 v2 + a32 v3 + · · · + at2 vt
..
.
um = a1m v1 + a2m v2 + a3m v3 + · · · + atm vt
Now consider the homogeneous system of t equations in the m variables x1 , x2 , . . . , xm :
a11 x1 + a12 x2 + a13 x3 + · · · + a1m xm = 0
a21 x1 + a22 x2 + a23 x3 + · · · + a2m xm = 0
..
.
at1 x1 + at2 x2 + at3 x3 + · · · + atm xm = 0
This is a homogeneous system with more variables than equations (our hypothesis
is expressed as m > t), so there are infinitely many solutions. Choose a nontrivial
solution and denote it by x1 = c1 , x2 = c2 , x3 = c3 , . . . , xm = cm . As a solution to the
homogeneous system, we then have
a11 c1 + a12 c2 + a13 c3 + · · · + a1m cm =0
a21 c1 + a22 c2 + a23 c3 + · · · + a2m cm =0
a31 c1 + a32 c2 + a33 c3 + · · · + a3m cm =0
..
.
at1 c1 + at2 c2 + at3 c3 + · · · + atm cm =0
As a collection of nontrivial scalars, c1 , c2 , c3 , . . . , cm will provide the nontrivial
relation of linear dependence we desire,
c1 u1 + c2 u2 + c3 u3 + · · · + cm um
= c1 (a11 v1 + a21 v2 + a31 v3 + · · · + at1 vt )
+ c2 (a12 v1 + a22 v2 + a32 v3 + · · · + at2 vt )
+ c3 (a13 v1 + a23 v2 + a33 v3 + · · · + at3 vt )
..
.
+ cm (a1m v1 + a2m v2 + a3m v3 + · · · + atm vt )
= c1 a11 v1 + c1 a21 v2 + c1 a31 v3 + · · · + c1 at1 vt
+ c2 a12 v1 + c2 a22 v2 + c2 a32 v3 + · · · + c2 at2 vt
+ c3 a13 v1 + c3 a23 v2 + c3 a33 v3 + · · · + c3 at3 vt
..
.
+ cm a1m v1 + cm a2m v2 + cm a3m v3 + · · · + cm atm vt
= (c1 a11 + c2 a12 + c3 a13 + · · · + cm a1m ) v1
+ (c1 a21 + c2 a22 + c3 a23 + · · · + cm a2m ) v2
+ (c1 a31 + c2 a32 + c3 a33 + · · · + cm a3m ) v3
..
.
+ (c1 at1 + c2 at2 + c3 at3 + · · · + cm atm ) vt
= (a11 c1 + a12 c2 + a13 c3 + · · · + a1m cm ) v1
+ (a21 c1 + a22 c2 + a23 c3 + · · · + a2m cm ) v2
+ (a31 c1 + a32 c2 + a33 c3 + · · · + a3m cm ) v3
..
.
+ (at1 c1 + at2 c2 + at3 c3 + · · · + atm cm ) vt
= 0v1 + 0v2 + 0v3 + · · · + 0vt
= 0 + 0 + 0 + ··· + 0
=0
That does it. R has been undeniably shown to be a linearly dependent set.
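Theorem 5 can also be seen concretely: any three vectors drawn from the span of two vectors must admit a nontrivial relation of linear dependence. A sketch with arbitrarily chosen coefficients (the vectors here are illustrative, not from the notes):

```python
from sympy import Matrix

# v1, v2 span a 2-dimensional subspace of R^3; u1, u2, u3 lie in that span.
v1, v2 = Matrix([1, 0, 2]), Matrix([0, 1, 1])
u1 = 1*v1 + 2*v2
u2 = 3*v1 - 1*v2
u3 = 2*v1 + 5*v2

# Stacking u1, u2, u3 as columns: rank <= 2 < 3 forces a nontrivial
# relation of linear dependence, exactly as in Theorem 5 (t = 2, m = 3).
M = Matrix.hstack(u1, u2, u3)
print(M.rank())       # 2
print(M.nullspace())  # a nonzero vector of coefficients c1, c2, c3
```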
Theorem 6 Suppose that V is a vector space with a finite basis B and a second basis
C. Then B and C have the same size.
Proof. Denote the size of B by t. Since B spans V , if C has t + 1 or more vectors, then
by the previous theorem C is linearly dependent, contradicting the fact that C is a basis.
So, denoting the size of C by s, we have s ≤ t.
Likewise, C is a basis and hence spans V , so if t ≥ s + 1, then B is linearly dependent,
contradicting the fact that B is a basis. Hence t ≤ s. So s = t.
The above theorem shows that the dimension is well-defined. No matter which basis we
choose, the size is always the same.
Example 6 dim Rm = m. See example 1.
Lemma 7 Let V be a vector space and v1 , . . . , vk , u ∈ V . Suppose S = {v1 , . . . , vk } is
linearly independent and u ∉ ⟨S⟩. Then S′ = {v1 , . . . , vk , u} is linearly independent.
Proof. Let a relation of linear dependence on S′ be
α1 v1 + · · · + αk vk + αu = 0.
Suppose α ≠ 0. Then
u = −(α1 /α) v1 − · · · − (αk /α) vk ∈ ⟨S⟩ ,
a contradiction. So α = 0, and then
α1 v1 + · · · + αk vk = 0.
By the linear independence of S, αi = 0 for all i. Hence the above relation of dependence
on S′ is trivial.
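In matrix language, Lemma 7 says that appending a vector u outside ⟨S⟩ to the columns of [v1 | · · · | vk ] raises the rank by one, so the enlarged set stays independent. A small sympy sketch (the vectors are illustrative):

```python
from sympy import Matrix

# S = {v1, v2}, linearly independent in R^3, as columns of V.
V = Matrix([[1, 0],
            [0, 1],
            [0, 0]])
u = Matrix([0, 0, 1])  # u is not in <S>: its last entry is nonzero

# u not in <S>  <=>  appending u as a column increases the rank.
Vu = V.row_join(u)
print(V.rank())   # 2
print(Vu.rank())  # 3, so S' = {v1, v2, u} is linearly independent
```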
5 Rank and nullity of a matrix
Definition 10 (Nullity of a matrix) Suppose that A ∈ Mmn . Then the nullity of A
is the dimension of the null space of A, n (A) = dim(N (A)).
Example 7 (Rank and nullity of a matrix) Let us compute the rank and nullity of
A =
[  2 −4 −1  3  2  1 −4
   1 −2  0  0  4  0  1
  −2  4  1  0 −5 −4 −8
   1 −2  1  1  6  1 −3
   2 −4 −1  1  4 −2 −1
  −1  2  3 −1  6  3 −1 ]
To do this, we will first row-reduce the matrix since that will help us determine bases
for the null space and column space.
[ 1 −2 0 0  4 0  1
  0  0 1 0  3 0 −2
  0  0 0 1 −1 0 −3
  0  0 0 0  0 1  1
  0  0 0 0  0 0  0
  0  0 0 0  0 0  0 ]
By Lecture 9 Sect 3, each free variable corresponds to a single basis vector for the null
space. So n (A) is the number of free variables, which equals n − r. Here n = 7 and
r = 4 (four pivot columns), so r (A) = 4 and n (A) = 7 − 4 = 3.
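The rank and nullity computed in Example 7 can be checked directly with sympy; A below is the matrix from the example:

```python
from sympy import Matrix

# The 6x7 matrix A from Example 7.
A = Matrix([
    [ 2, -4, -1,  3,  2,  1, -4],
    [ 1, -2,  0,  0,  4,  0,  1],
    [-2,  4,  1,  0, -5, -4, -8],
    [ 1, -2,  1,  1,  6,  1, -3],
    [ 2, -4, -1,  1,  4, -2, -1],
    [-1,  2,  3, -1,  6,  3, -1]])

B, pivots = A.rref()
print(pivots)              # pivot columns (0-indexed): (0, 2, 3, 5)
print(A.rank())            # r(A) = 4
print(len(A.nullspace()))  # n(A) = 7 - 4 = 3
```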
Theorem (Rank of the transpose) r (A) = r (At ). Equivalently,
dim C(A) = dim R(A) .
Proof. Let A −RREF→ B. Let r denote the number of pivot columns (= the number of
nonzero rows). Then by the above discussion r = r (A). By Lecture 14 Theorem 7, the
first r columns of B t form a basis for R(A). Hence r = r (At ). This completes the proof.
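The equality r (A) = r (At ) is easy to spot-check computationally; the matrix below is an arbitrary illustration:

```python
from sympy import Matrix

# An illustrative matrix; the claim r(A) = r(A^t) holds for any A.
A = Matrix([[1, 2, 0, 3],
            [2, 4, 1, 1],
            [3, 6, 1, 4]])

# Row 3 = row 1 + row 2, and column 2 = 2 * column 1, so the rank is 2
# whether we count independent rows or independent columns.
print(A.rank(), A.T.rank())  # 2 2
```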
Theorem The following are equivalent for a square matrix A of size n: (1) A is
nonsingular; (2) r (A) = n; (3) n (A) = 0.
Proof. (1 ⇒ 2) If A is nonsingular then C(A) = Rn , so the column space has dimension
n, i.e. the rank of A is n.
(2 ⇒ 3) Suppose r (A) = n. Then the dimension formula gives
n (A) = n − r (A)
=n−n
=0
(3 ⇒ 1) Suppose n (A) = 0, so a basis for the null space of A is the empty set. This
implies that N (A) = {0} and hence A is nonsingular.
With a new equivalence for a nonsingular matrix, we can update our list of equiva-
lences which now becomes a list requiring double digits to number.
Theorem 16 Suppose that A is a square matrix of size n. The following are equivalent.
1. A is nonsingular.
2. A row-reduces to the identity matrix.
3. The null space of A contains only the zero vector, N (A) = {0}.
4. The linear system LS(A, b) has a unique solution for every possible choice of b.
5. The columns of A are a linearly independent set.
6. A is invertible.
7. The column space of A is Rn , C(A) = Rn .
8. The columns of A are a basis for Rn .
9. The rank of A is n, r (A) = n.
10. The nullity of A is zero, n (A) = 0.
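For a concrete matrix, several items of Theorem 16 can be checked against each other at once; a sketch with an illustrative 2 × 2 matrix:

```python
from sympy import Matrix, eye

A = Matrix([[2, 1],
            [1, 1]])  # an illustrative nonsingular matrix

n = A.rows
# Items 2, 9, and 10 of Theorem 16, checked computationally:
assert A.rref()[0] == eye(n)    # row-reduces to the identity
assert A.rank() == n            # r(A) = n
assert len(A.nullspace()) == 0  # n(A) = 0
assert A.det() != 0             # hence A is invertible (item 6)
print("all equivalences agree")
```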