
ALGEBRA - First Year - Computer Science
Prof. dr. Septimiu Crivei

Course 5: 29.10.2020
2.5 Linear independence
Definition 2.5.1 Let V be a vector space over K. We say that the vectors v1 , . . . , vn ∈ V are (or the
set of vectors {v1 , . . . , vn } is):
(1) linearly independent in V if for every k1 , . . . , kn ∈ K,

k1 v1 + · · · + kn vn = 0 =⇒ k1 = · · · = kn = 0 .

(2) linearly dependent in V if they are not linearly independent, that is, ∃k1 , . . . , kn ∈ K not all zero
such that
k1 v1 + · · · + kn vn = 0 .

Remark 2.5.2 (1) A set consisting of a single vector v is linearly dependent ⇐⇒ v = 0.


(2) As an immediate consequence of the definition, we notice that if V is a vector space over K and
X, Y ⊆ V such that X ⊆ Y , then:
(i) If Y is linearly independent, then X is linearly independent.
(ii) If X is linearly dependent, then Y is linearly dependent. Thus, every set of vectors containing the
zero vector is linearly dependent.

Theorem 2.5.3 Let V be a vector space over K. Then the vectors v1 , . . . , vn ∈ V are linearly dependent
if and only if one of the vectors is a linear combination of the others, that is, ∃j ∈ {1, . . . , n} such that
vj = ∑_{i≠j} αi vi

for some αi ∈ K, where i ∈ {1, . . . , n} and i ≠ j.

Proof. =⇒. Assume that v1, . . . , vn ∈ V are linearly dependent. Then ∃k1, . . . , kn ∈ K not all zero, say
kj ≠ 0, such that k1 v1 + · · · + kn vn = 0. But this implies

−kj vj = ∑_{i≠j} ki vi

and further,

vj = ∑_{i≠j} (−kj⁻¹ ki) vi .

Now choose αi = −kj⁻¹ ki for each i ≠ j to get the conclusion.


⇐=. Assume that ∃j ∈ {1, . . . , n} such that

vj = ∑_{i≠j} αi vi

for some αi ∈ K. Then

(−1)vj + ∑_{i≠j} αi vi = 0 .

Since this linear combination equals zero and its scalars are not all zero (the coefficient of vj is −1 ≠ 0),
the vectors v1, . . . , vn are linearly dependent. □


Example 2.5.4 (a) Let V2 be the real vector space of all vectors (in the classical sense) in the plane
with a fixed origin O. Recall that the addition is the usual addition of two vectors by the parallelogram
rule and the external operation is the usual scalar multiplication of vectors by real scalars. Then:
(i) one vector v is linearly dependent in V2 ⇐⇒ v = 0;
(ii) two vectors are linearly dependent in V2 ⇐⇒ they are collinear;
(iii) three vectors are always linearly dependent in V2 .
Now let V3 be the real vector space of all vectors (in the classical sense) in space with a fixed
origin O. Then:
(i) one vector v is linearly dependent in V3 ⇐⇒ v = 0;
(ii) two vectors are linearly dependent in V3 ⇐⇒ they are collinear;
(iii) three vectors are linearly dependent in V3 ⇐⇒ they are coplanar;
(iv) four vectors are always linearly dependent in V3 .
(b) If K is a field and n ∈ N∗, then the vectors e1 = (1, 0, 0, . . . , 0), e2 = (0, 1, 0, . . . , 0), . . . , en =
(0, 0, 0, . . . , 1) ∈ K^n are linearly independent in the canonical vector space K^n over K. In order to show
that, let k1 , . . . , kn ∈ K be such that

k1 e1 + k2 e2 + · · · + kn en = 0 ∈ K^n .

Then we have

k1 (1, 0, 0, . . . , 0) + k2 (0, 1, 0, . . . , 0) + · · · + kn (0, 0, 0, . . . , 1) = (0, . . . , 0),

and furthermore
(k1 , . . . , kn ) = (0, . . . , 0).
This implies that k1 = · · · = kn = 0, and so the vectors e1, . . . , en are linearly independent in K^n.
(c) Let K be a field and n ∈ N. Then the vectors 1, X, X^2, . . . , X^n are linearly independent in the
vector space Kn[X] = {f ∈ K[X] | degree(f) ≤ n} over K. Indeed, a linear combination k0 · 1 + k1 X + · · · + kn X^n
is the zero polynomial if and only if k0 = k1 = · · · = kn = 0, because two polynomials are equal exactly
when their corresponding coefficients are equal.

Let us now give a very useful practical result on linear dependence.

Theorem 2.5.5 Let n ∈ N, n ≥ 2.


(i) Two vectors in the canonical vector space K^n are linearly dependent ⇐⇒ their corresponding
components are proportional.
(ii) n vectors in the canonical vector space K^n are linearly dependent ⇐⇒ the determinant of the
matrix having their components as columns is zero.

Proof. (i) Let v = (x1, . . . , xn), v′ = (x′1, . . . , x′n) ∈ K^n. By Theorem 2.5.3, the vectors v and v′ are
linearly dependent if and only if one of them is a linear combination of the other, say v′ = kv for some
k ∈ K. That is, x′i = kxi for each i ∈ {1, . . . , n}.
(ii) Let v1 = (x11, x21, . . . , xn1), . . . , vn = (x1n, x2n, . . . , xnn) ∈ K^n. The vectors v1, . . . , vn are
linearly dependent if and only if ∃k1 , . . . , kn ∈ K not all zero such that

k1 v1 + · · · + kn vn = 0 .

But this is equivalent to

k1 (x11 , x21 , . . . , xn1 ) + · · · + kn (x1n , x2n , . . . , xnn ) = (0, . . . , 0) ,

and further to

k1 x11 + k2 x12 + · · · + kn x1n = 0
k1 x21 + k2 x22 + · · · + kn x2n = 0
. . . . . . . . . . . . . . . .
k1 xn1 + k2 xn2 + · · · + kn xnn = 0 .

We are interested in the existence of a non-zero solution for this homogeneous linear system. We will see
later on that such a solution does exist if and only if the determinant of the system is zero. □
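As an illustration, the criterion in (ii) can be checked numerically. Here is a minimal sketch over K = R,
in Python with numpy (the choice of language and library is our assumption; the course prescribes no
software):

import numpy as np

def are_linearly_dependent(vectors, tol=1e-12):
    # Theorem 2.5.5 (ii): put the n vectors of R^n as the columns of an n x n
    # matrix and test whether its determinant is (numerically) zero.
    A = np.column_stack(vectors)
    return abs(np.linalg.det(A)) < tol

# (1, 1) and (2, 2) are proportional, hence dependent; (1, 1) and (0, 1) are not.
print(are_linearly_dependent([np.array([1.0, 1.0]), np.array([2.0, 2.0])]))  # True
print(are_linearly_dependent([np.array([1.0, 1.0]), np.array([0.0, 1.0])]))  # False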


2.6 Basis
We are going to define a key notion related to a vector space, namely that of a basis, which will completely
determine the vector space. For the sake of simplicity and because of our limited needs, until the end of the
chapter, by a vector space we will understand a finitely generated vector space.
Definition 2.6.1 Let V be a vector space over K. A list of vectors B = (v1, . . . , vn) ∈ V^n is called a
basis of V if:
(1) B is linearly independent in V ;
(2) B is a system of generators for V , that is, ⟨B⟩ = V .

Theorem 2.6.2 Every vector space has a basis.


Proof. Let V be a vector space over K. If V = {0}, then it has the basis ∅.
Now let V = ⟨B⟩ ≠ {0}, where B = (v1, . . . , vn). If B is linearly independent, then B is a basis and
we are done. Suppose that the list B is linearly dependent. Then by Theorem 2.5.3, ∃j1 ∈ {1, . . . , n}
such that

vj1 = ∑_{i≠j1} ki vi

for some ki ∈ K. It follows that V = ⟨B \ {vj1}⟩, because every vector of V can be written as a linear
combination of the vectors of B \ {vj1}. If B \ {vj1} is linearly independent, it is a basis and we are done.
Otherwise, ∃j2 ∈ {1, . . . , n} \ {j1} such that

vj2 = ∑_{i≠j1,j2} k′i vi

for some k′i ∈ K. It follows that V = ⟨B \ {vj1, vj2}⟩, because every vector of V can be written as a linear
combination of the vectors of B \ {vj1, vj2}. If B \ {vj1, vj2} is linearly independent, then it is a basis and
we are done. Otherwise, we continue the procedure. If all the previous intermediate subsets are linearly
dependent, we get to the step V = ⟨B \ {vj1, . . . , vjn−1}⟩ = ⟨vjn⟩. If vjn were linearly dependent, then
vjn = 0, hence V = ⟨vjn⟩ = {0}, a contradiction. Hence vjn is linearly independent and thus forms a
single-element basis of V . □
Remark 2.6.3 We are going to see that a vector space may have more than one basis.
Let us give now a characterization theorem for a basis of a vector space.
Theorem 2.6.4 Let V be a vector space over K. A list B = (v1 , . . . , vn ) of vectors in V is a basis of V
if and only if every vector v ∈ V can be uniquely written as a linear combination of the vectors v1 , . . . , vn ,
that is,
v = k1 v1 + · · · + kn vn
for some unique k1 , . . . , kn ∈ K.
Proof. =⇒. Assume that B is a basis of V . Hence B is linearly independent and hBi = V . The second
condition assures us that every vector v ∈ V can be written as a linear combination of the vectors of B.
Suppose now that v = k1 v1 + · · · + kn vn and v = k′1 v1 + · · · + k′n vn for some k1, . . . , kn, k′1, . . . , k′n ∈ K.
It follows that
(k1 − k′1)v1 + · · · + (kn − k′n)vn = 0 .
By the linear independence of B, we must have ki = k′i for each i ∈ {1, . . . , n}. Thus, we have proved
the uniqueness of the expression.
⇐=. Assume that every vector v ∈ V can be uniquely written as a linear combination of the vectors
of B. Then clearly, V = ⟨B⟩. For k1, . . . , kn ∈ K, by the uniqueness of the expression we have
k1 v1 + · · · + kn vn = 0 =⇒ k1 v1 + · · · + kn vn = 0 · v1 + · · · + 0 · vn =⇒ k1 = · · · = kn = 0 ,
hence B is linearly independent. Consequently, B is a basis of V . □


Definition 2.6.5 Let V be a vector space over K, B = (v1, . . . , vn) a basis of V and v ∈ V . Then the
scalars k1, . . . , kn ∈ K appearing in the unique expression of v as a linear combination
v = k1 v1 + · · · + kn vn
of the vectors of B are called the coordinates of v in the basis B.

Example 2.6.6 (a) If K is a field and n ∈ N∗, then the list E = (e1, . . . , en) of vectors of K^n, where

e1 = (1, 0, 0, . . . , 0)
e2 = (0, 1, 0, . . . , 0)
. . . . . . . . . . . .
en = (0, 0, 0, . . . , 1)

is a basis of the canonical vector space K^n over K, called the canonical basis. Indeed, each vector
v = (x1, . . . , xn) ∈ K^n can be uniquely written as v = x1 e1 + · · · + xn en, a linear combination of the
vectors of E, hence E is a basis of K^n by Theorem 2.6.4.
Notice that the coordinates of a vector in the canonical basis are just the components of that vector,
a fact that is not true in general.
(b) Consider the canonical real vector space R^2. We already know a basis of R^2, namely the canonical
basis ((1, 0), (0, 1)). But it is easy to show that the list ((1, 1), (0, 1)) is also a basis of R^2. Therefore, a
vector space may have more than one basis.
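By Theorem 2.6.4, the coordinates of a vector in such a basis are the unique solution of a linear system.
A minimal sketch for the basis ((1, 1), (0, 1)) of R^2, in Python with numpy (the tooling is our assumption):

import numpy as np

# The basis vectors form the columns of B_matrix; the coordinates k of v in this
# basis solve B_matrix @ k = v (Theorem 2.6.4 guarantees a unique solution).
B_matrix = np.column_stack([(1.0, 1.0), (0.0, 1.0)])  # the basis ((1, 1), (0, 1))
v = np.array([3.0, 5.0])

k = np.linalg.solve(B_matrix, v)
print(k)  # [3. 2.], i.e. v = 3*(1, 1) + 2*(0, 1)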
(c) Let V3 be the real vector space of all vectors (in the classical sense) in space with a fixed origin
O. Then a basis of V3 consists of the three pairwise orthogonal unit vectors i, j, k.
(d) Let K be a field and n ∈ N. Then the list B = (1, X, X^2, . . . , X^n) is a basis of the vector space
Kn[X] = {f ∈ K[X] | degree(f) ≤ n} over K, because every vector (polynomial) f ∈ Kn[X] can be
uniquely written as a linear combination a0 · 1 + a1 · X + · · · + an · X^n (a0, . . . , an ∈ K) of the vectors of
B (see Theorem 2.6.4).
In this case, the coordinates of a vector f ∈ Kn [X] in the basis B are just its coefficients as a
polynomial.
(e) Let K be a field. The list consisting of the matrices

[1 0]   [0 1]   [0 0]   [0 0]
[0 0] , [0 0] , [1 0] , [0 1]

is a basis of the vector space M2(K) over K.
More generally, let m, n ∈ N, m, n ≥ 2 and consider the matrices Eij = (akl), where akl = 1 if k = i and
l = j, and akl = 0 otherwise.
Then the list consisting of all matrices Eij is a basis of the vector space Mmn (K) over K.
In this case, the coordinates of a vector A ∈ Mmn (K) in the above basis are just the entries of that
matrix.
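As a quick sanity check of the last statement, here is a minimal Python/numpy sketch for m = n = 2 (the
tooling is our assumption):

import numpy as np

# Build the four matrices E11, E12, E21, E22, each with a single entry equal to 1.
E = [np.zeros((2, 2)) for _ in range(4)]
for idx, (i, j) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
    E[idx][i, j] = 1.0

A = np.array([[5.0, -2.0], [7.0, 3.0]])
coords = A.flatten()  # the coordinates of A in this basis are just its entries
reconstructed = sum(c * Eij for c, Eij in zip(coords, E))
print(np.array_equal(A, reconstructed))  # True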

Theorem 2.6.7 Let f : V → V′ be a K-linear map and let B = (v1, . . . , vn) be a basis of V . Then f is
determined by its values on the vectors of the basis B.
Proof. Let v ∈ V . Since B is a basis of V , ∃!k1 , . . . , kn ∈ K such that v = k1 v1 + · · · + kn vn . Then
f (v) = f (k1 v1 + · · · + kn vn ) = k1 f (v1 ) + · · · + kn f (vn ) ,
that is, f is determined by f (v1), . . . , f (vn). □

Corollary 2.6.8 Let f, g : V → V′ be K-linear maps and let B = (v1, . . . , vn) be a basis of V . If
f (vi) = g(vi), ∀i ∈ {1, . . . , n}, then f = g.
Proof. Let v ∈ V . Then v = k1 v1 + · · · + kn vn for some k1 , . . . , kn ∈ K, hence
f (v) = f (k1 v1 + · · · + kn vn ) = k1 f (v1 ) + · · · + kn f (vn ) = k1 g(v1 ) + · · · + kn g(vn ) = g(v) .
Therefore, f = g. □
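A minimal Python/numpy sketch of Theorem 2.6.7 (the tooling and the sample values f(v1), f(v2) are our
assumptions): knowing f only on a basis suffices to evaluate f on any vector.

import numpy as np

B_matrix = np.column_stack([(1.0, 1.0), (0.0, 1.0)])    # a basis of R^2, as columns
f_on_basis = np.column_stack([(2.0, 0.0), (1.0, 1.0)])  # hypothetical f(v1), f(v2), as columns

def f(v):
    # Write v = k1 v1 + k2 v2, then use linearity: f(v) = k1 f(v1) + k2 f(v2).
    k = np.linalg.solve(B_matrix, v)
    return f_on_basis @ k

print(f(np.array([3.0, 5.0])))  # k = (3, 2), so f(v) = 3*f(v1) + 2*f(v2) = (8, 2)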


Extra: Lossy compression


Definition 2.6.9 Let k, n ∈ N∗ be such that k < n, and let u be a vector of the canonical vector space K^n over
K. Then the closest k-sparse vector associated to u is defined as the vector obtained from u by replacing all but
its k largest-magnitude components by zero.

Example 2.6.10 Consider an image consisting of a single row of four pixels with intensities 200, 50, 200 and 75
respectively. We know that such an image can be viewed as a vector u = (200, 50, 200, 75) in the real canonical
vector space R^4. The closest 2-sparse vector associated to u is the vector ũ = (200, 0, 200, 0).
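A minimal Python/numpy sketch of Definition 2.6.9 (the tooling is our assumption), reproducing this
example:

import numpy as np

def closest_k_sparse(u, k):
    # Keep the k largest-magnitude components of u and replace the rest by zero.
    u = np.asarray(u, dtype=float)
    out = np.zeros_like(u)
    keep = np.argsort(np.abs(u))[-k:]  # indices of the k largest |u_i|
    out[keep] = u[keep]
    return out

print(closest_k_sparse([200, 50, 200, 75], 2))  # [200.   0. 200.   0.]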

Suppose that we need to store a grayscale image of (say) n = 2000 × 1000 pixels more compactly. We can
view it as a vector v in the real canonical vector space R^n. If we just store its associated closest k-sparse
vector, then the compressed image may be far from the original.
One may use the following lossy compression algorithm:
Step 1. Consider a suitable basis B = (v1, . . . , vn) of the real canonical vector space R^n.
Step 2. Determine the n-tuple u of the coordinates of v in the basis B; this n-tuple is desired to have as
many zeros as possible.
Step 3. Replace u by the closest k-sparse n-tuple ũ for a suitable k, and store ũ.
Step 4. In order to recover an image from ũ, compute the linear combination of the vectors of B whose
scalars are the components of ũ.
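The four steps can be sketched in Python/numpy as follows (the tooling and the random basis are our
assumptions; in practice one would pick a basis, such as a wavelet basis discussed in Klein's book, in
which typical images have many near-zero coordinates):

import numpy as np

def closest_k_sparse(u, k):
    # Keep the k largest-magnitude components and zero out the rest (Definition 2.6.9).
    out = np.zeros_like(u)
    keep = np.argsort(np.abs(u))[-k:]
    out[keep] = u[keep]
    return out

def compress(v, basis_matrix, k):
    u = np.linalg.solve(basis_matrix, v)  # Step 2: coordinates of v in the basis B
    return closest_k_sparse(u, k)         # Step 3: store the closest k-sparse tuple

def recover(u_tilde, basis_matrix):
    # Step 4: the linear combination of the basis vectors with scalars from u_tilde.
    return basis_matrix @ u_tilde

# Toy usage with n = 4; a random matrix is almost surely invertible, hence a basis (Step 1).
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
v = np.array([200.0, 50.0, 200.0, 75.0])  # the "image" as a vector of R^4
print(recover(compress(v, B, k=2), B))    # an approximation of v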
Consider a sample grayscale image (not reproduced in this text). First, use the closest sparse vector, which
suppresses all but 10% of the components of v; secondly, use the lossy compression algorithm, which
suppresses all but 10% of the components of u. This yields two compressed images respectively (likewise
not reproduced here).

Reference: P. N. Klein, Coding the Matrix: Linear Algebra through Applications to Computer Science,
Newtonian Press, 2013.

