THE DIMENSION OF A VECTOR SPACE
KEITH CONRAD
1. Introduction
This handout is a supplementary discussion leading up to the definition of dimension of
a vector space and some of its properties. We start by defining the span of a finite set of
vectors and linear independence of a finite set of vectors, which are combined to define the
all-important concept of a basis.
Definition 1.1. Let V be a vector space over a field F . For any finite subset {v1 , . . . , vn }
of V , its span is the set of all of its linear combinations:
Span(v1 , . . . , vn ) = {c1 v1 + · · · + cn vn : ci ∈ F }.
Example 1.2. In F 3 , Span((1, 0, 0), (0, 1, 0)) is the xy-plane in F 3 .
Example 1.3. If v is a single vector in V then
Span(v) = {cv : c ∈ F } = F v
is the set of scalar multiples of v, which for nonzero v should be thought of geometrically
as a line (through the origin, since it includes 0 · v = 0).
Since sums of linear combinations are linear combinations and the scalar multiple of a
linear combination is a linear combination, Span(v1 , . . . , vn ) is a subspace of V . It may not
be all of V , of course.
Definition 1.4. If {v1 , . . . , vn } satisfies Span({v1 , . . . , vn }) = V , that is, if every vector in
V is a linear combination from {v1 , . . . , vn }, then we say this set spans V or it is a spanning
set for V .
Example 1.5. In F 2 , the set {(1, 0), (0, 1), (1, 1)} is a spanning set of F 2 . It has some
redundancy in it, since removing any one of the vectors leaves behind a spanning set, so
remember that spanning sets may be larger than necessary.
Definition 1.6. A finite subset {w1 , . . . , wm } of V is called linearly independent when the
vanishing of a linear combination only happens in the obvious way:
c1 w1 + · · · + cm wm = 0 =⇒ all ci = 0.
The importance of this concept is that a linear combination of linearly independent
vectors has only one possible set of coefficients:
(1.1) c1 w1 + · · · + cm wm = c′1 w1 + · · · + c′m wm =⇒ all ci = c′i .
Indeed, subtracting gives ∑(ci − c′i )wi = 0, so ci − c′i = 0 for all i by linear independence.
Thus ci = c′i for all i.
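For instance, in F 2 the set {(1, 0), (0, 1)} is linearly independent: if c1 (1, 0) + c2 (0, 1) = (0, 0),
then (c1 , c2 ) = (0, 0), so c1 = c2 = 0. By contrast, the spanning set {(1, 0), (0, 1), (1, 1)} of
Example 1.5 is not linearly independent, since (1, 0) + (0, 1) − (1, 1) = (0, 0) is a vanishing
linear combination whose coefficients are not all 0.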
2. Comparing bases
The following theorem is a first result that links spanning sets in V with linearly inde-
pendent subsets.
Theorem 2.1. Suppose V ≠ {0} and it admits a finite spanning set {v1 , . . . , vn }. Some
subset of this spanning set is a linearly independent spanning set.
The theorem says that once there is a finite spanning set, which could have lots of linear
dependence relations, there is a basis (that is, a linearly independent spanning set) for the
space. Moreover, the theorem tells us a basis can be found within any spanning set at all.
Proof. While {v1 , . . . , vn } may not be linearly independent, it contains linearly independent
subsets, such as any one single nonzero vi . Of course, such small linearly independent subsets
can hardly be expected to span V . But consider linearly independent subsets of {v1 , . . . , vn }
that are as large as possible. Reindexing, without loss of generality, we can write such a
subset as {v1 , . . . , vk }.
For i = k + 1, . . . , n, the set {v1 , . . . , vk , vi } is not linearly independent (otherwise
{v1 , . . . , vk } is not a maximal linearly independent subset). Thus there is some linear
relation
c1 v1 + · · · + ck vk + ci vi = 0,
where the c’s are in F and not all of them are 0. The coefficient ci cannot be zero, since
otherwise we would be left with a linear dependence relation on v1 , . . . , vk , which is impossible
by their linear independence.
Since ci ≠ 0, we can solve for vi , so vi is in the span of v1 , . . . , vk . This holds for i = k + 1, . . . , n, so
any linear combination of v1 , . . . , vn is also a linear combination of just v1 , . . . , vk . As every
element of V is a linear combination of v1 , . . . , vn , we conclude that v1 , . . . , vk spans V . By
its construction, this is a linearly independent subset of V as well.
Notice the non-constructive character of the proof. If we somehow can check that a
(finite) subset of V spans the whole space, Theorem 2.1 says some subset of it is a linearly
independent spanning set, but the proof does not tell us constructively which subset of
{v1 , . . . , vn } to take.
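Here is one way such a subset could be found concretely in the special case F = R. The sketch below (the helper name extract_basis is just for illustration) uses NumPy's floating-point rank computation as a stand-in for an exact linear-independence test: it scans the spanning set and keeps each vector that is not already in the span of the vectors kept so far, which yields a maximal linearly independent subset as in the proof.

import numpy as np

def extract_basis(vectors, tol=1e-10):
    # Keep each vector that increases the rank, i.e. that is not already
    # in the span of the vectors kept so far.
    basis = []
    for v in vectors:
        candidate = np.array(basis + [list(v)], dtype=float)
        if np.linalg.matrix_rank(candidate, tol=tol) == len(candidate):
            basis.append(list(v))
    return basis

# The redundant spanning set of Example 1.5, viewed in R^2:
print(extract_basis([(1, 0), (0, 1), (1, 1)]))  # keeps [1, 0] and [0, 1]

Over a general field F the numerical rank test would be replaced by exact row reduction, but the overall procedure is the same.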
Theorem 2.1 is a “top-down” theorem. It says any (finite) spanning set has a linearly
independent spanning set inside of it. It is natural to ask if we can go “bottom-up,” and
show any linearly independent subset can be enlarged to a linearly independent spanning
set. Something along these lines will be proved in Theorem 2.10.
Lemma 2.2. Suppose {v1 , . . . , vn } spans V , where n ≥ 2. Pick any v ∈ V . If some vi is a
linear combination of the other vj ’s and v, then V is spanned by the other vj ’s and v.
For example, if V is spanned by v1 , v2 , and v3 , and v1 is a linear combination of v, v2 ,
and v3 , where v is another vector in V , then V is spanned by v, v2 , and v3 .
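As a concrete illustration: R2 is spanned by v1 = (1, 0) and v2 = (0, 1), and taking v = (1, 1)
we have v1 = v − v2 , a linear combination of v and v2 . Lemma 2.2 then says R2 is spanned
by v and v2 , and indeed (x, y) = x(1, 1) + (y − x)(0, 1).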
Lemma 2.2 should be geometrically reasonable. See if you can prove it before reading
the proof below.
Proof. Reindexing if necessary, we can suppose it is v1 that is a linear combination of
v, v2 , . . . , vn . We will show every vector in V is a linear combination of v, v2 , . . . , vn , so
these vectors span V .
Pick any w ∈ V . By hypothesis,
w = c1 v1 + c2 v2 + · · · + cn vn
for some ci ∈ F , and v1 = av + a2 v2 + · · · + an vn for some a, a2 , . . . , an ∈ F . Substituting
the second expression for v1 into the first shows w is a linear combination of v, v2 , . . . , vn ,
so these vectors span V .
set for V and the second set as a linearly independent subset of V , the exchange theorem
tells us that m ≤ n. Reversing these roles (which we can do since bases are both linearly
independent and span the whole space), we get n ≤ m. Thus m = n.
Definition 2.6. If V is a vector space over F and V has a finite basis then the (common)
size of any basis of V is called the dimension of V (over F ).
Example 2.7. There are obvious bases of the vector spaces Rn and Mn (R): in Rn one
basis is the vectors with 1 in one coordinate and 0 elsewhere, and in Mn (R) one basis is
the matrices with 1 in one entry and 0 elsewhere. Counting the number of terms in a basis,
Rn has dimension n and Mn (R) has dimension n2 .
Example 2.8. Treating C as a real vector space, one basis is {1, i}, so C has dimension 2
as a vector space over R.
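Indeed, every complex number can be written as a · 1 + b · i with real a and b in exactly one
way, which says precisely that {1, i} spans C over R and is linearly independent over R.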
Example 2.9. The vector space {0} has no basis, or you might want to say its basis is the
empty set. In any event, it is natural to declare the zero vector space to have dimension 0.
Theorem 2.10. Let V be a vector space with dimension n ≥ 1. Any spanning set has at
least n elements, and contains a basis inside of it. Any linearly independent subset has at
most n elements, and can be extended to a basis of V . Finally, an n-element subset of V is
a spanning set if and only if it is a linearly independent set.
Proof. Since V has a basis of n vectors, let’s pick such a basis, say v1 , . . . , vn . We will
compare this basis to the spanning sets and the linearly independent sets in V to draw our
conclusions, taking advantage of the dual nature of a basis as both a linearly independent
subset of V and as a spanning set for V .
If {u1 , . . . , uk } is a spanning set for V , then a comparison with {v1 , . . . , vn } (interpreted
as a linearly independent subset of V ) shows n ≤ k by the exchange theorem. Equivalently,
k ≥ n. Moreover, Theorem 2.1 says that {u1 , . . . , uk } contains a basis for V . This settles
the first part of the theorem.
For the next part, suppose {w1 , . . . , wm } is a linearly independent subset of V . A compar-
ison with {v1 , . . . , vn } (interpreted as a spanning set for V ) shows m ≤ n by the exchange
theorem. To see that the w’s can be extended to a basis of V , apply the exchange process
from the proof of the exchange theorem, but only m times since we have only m linearly
independent w’s. We find at the end that
V = Span(w1 , . . . , wm , vm+1 , . . . , vn ),
which shows the w’s can be extended to a spanning set for V . This spanning set contains a
basis for V , by Theorem 2.1. Since all bases of V have n elements, this n-element spanning
set must be a basis itself.
Taking m = n in the previous paragraph shows any n-element linearly independent
subset is a basis (and thus spans V ). Conversely, any n-element spanning set is linearly
independent, since any linear dependence relation would let us cut down to a spanning set
of fewer than n elements, but that violates the first result in this proof: a spanning set for
an n-dimensional vector space has at least n elements.
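As with Theorem 2.1, the extension step can be made concrete in the special case of Rn. The sketch below is an illustration only (the helper name extend_to_basis is not standard, and NumPy's floating-point rank computation replaces exact arithmetic): it extends a linearly independent list to a basis of Rn by scanning a spanning set, here the standard basis, and appending each vector not already in the span of what has been kept, mirroring the role of vm+1 , . . . , vn above.

import numpy as np

def extend_to_basis(independent, spanning, tol=1e-10):
    # Append each vector from the spanning set that enlarges the span,
    # i.e. that keeps the list linearly independent.
    basis = [list(w) for w in independent]
    for v in spanning:
        candidate = np.array(basis + [list(v)], dtype=float)
        if np.linalg.matrix_rank(candidate, tol=tol) == len(candidate):
            basis.append(list(v))
    return basis

# Extend the linearly independent set {(1, 1, 0)} to a basis of R^3,
# using the standard basis of R^3 as the spanning set.
std = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(extend_to_basis([(1, 1, 0)], std))  # [[1, 1, 0], [1, 0, 0], [0, 0, 1]]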
3. Dimension of subspaces
Theorem 3.1. If V is an n-dimensional vector space, any subspace is finite-dimensional,
with dimension at most n.
Proof. This theorem is trivial if V = {0}, so we may assume V ≠ {0}, i.e., n ≥ 1.
Let W be a subspace of V . Any linearly independent subset of W is also a linearly
independent subset of V , and thus has size at most n by Theorem 2.10. Choose a linearly
independent subset {w1 , . . . , wm } of W where m is maximal. Then m ≤ n. We will show
Span(w1 , . . . , wm ) = W .
For any w ∈ W , the set {w, w1 , . . . , wm } has more than m elements, so it can’t be linearly
independent. Therefore there is some vanishing linear combination
aw + a1 w1 + · · · + am wm = 0
where a, a1 , . . . , am are in F and are not all 0. If a = 0 then the ai ’s all vanish since
w1 , . . . , wm are linearly independent, and then all the coefficients are 0, a contradiction.
Therefore a ≠ 0, so we can solve for w:
w = −(a1 /a)w1 − · · · − (am /a)wm .
Thus w is a linear combination of w1 , . . . , wm . Since w was arbitrary in W , this shows
the wi ’s span W . So {w1 , . . . , wm } is a spanning set for W that is linearly independent by
construction. This proves W is finite-dimensional with dimension m ≤ n.
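For example, the subspace W = {(x, y, z) ∈ R3 : x + y + z = 0} of R3 is spanned by the
linearly independent vectors (1, −1, 0) and (1, 0, −1), so it has dimension 2, which is at most
3 as the theorem predicts.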
Theorem 3.2. If V has dimension n and W is a subspace with dimension n, then W = V .
Proof. When W has dimension n, any basis for W is a linearly independent subset of V
with n elements, so it spans V by Theorem 2.10. The span is also W (by definition of a
basis for W ), so W = V .
It is important that throughout our calculations (expressing one vector as a linear com-
bination of others when we have a nontrivial linear combination of vectors equal to 0) we
can scale a nonzero coefficient of a vector to make the coefficient equal to 1. For example,
suppose we tried to do linear algebra over the integers Z instead of over a field. Then we
can’t scale a coefficient in Z to be 1 without possibly needing rational coefficients for other
vectors in a linear combination. That suggests results like the ones we have established for
vector spaces over fields might not hold for “vector spaces over Z.” And it’s true: linear
algebra over Z is more subtle than over fields. For example, Theorem 3.2 is false if we
work with “vector spaces” over Z. Consider the integers Z and the even integers 2Z. By
any reasonable definitions, both Z and 2Z should be considered “one-dimensional” over
the integers, where Z has basis {1} and 2Z has basis {2} (since every even integer is a unique
integral multiple of 2). But 2Z ⊂ Z, so a “one-dimensional vector space over Z” can lie
inside another without them being equal. This is a pretty clear failure of Theorem 3.2 when
we use scalars from Z instead of from a field.