The document outlines the syllabus for the M.Sc. Part-I Mathematics course at Nalanda Open University, focusing on Linear Algebra. It covers key concepts such as vector spaces, linear combinations, linear independence, and theorems related to bases and dimensions of vector spaces. Additionally, it includes definitions, examples, and solved problems to illustrate these concepts.

Nalanda Open University

M.Sc. Part-I
Course: Mathematics
Paper V

Prepared by: Dr. L. K. Sharan,
Retd. Professor & Head, Dept. of Mathematics,
V.K.S. University, Ara.

Mobile: 9835228272
Email: [email protected]
UNIT III

LINEAR ALGEBRA
Contents: Vector Space,
Linear Combination,
Finite Dimensional Vector Space,
Row and Column Space of a Matrix,
Isomorphism,
Linear Transformation,
Dual Space and Dual Basis,
Projection
1. Vector Space, Vector Sub-Space, Linear Combination, Linear
Dependence, Linear Independence

1.1. Introduction: Physics provides the inspiration for the concept of a vector
space as an algebraic system. It is a well-known fact that in a plane or in 3-dimensional
space vectors can be added and subtracted, and they can be multiplied by real or
complex scalars.
A vector space is an algebraic generalization of this space of vectors. We often see
its usefulness in solving systems of linear equations.
Linear algebra is the branch of algebra which combines the study of
matrices and vector spaces.

1.2. Definitions: The basic ideas of a group and a field are the point of origin in the
study of vector spaces.

Group: A system (G, ∘) consisting of a non-empty set G and an operation ∘ defined on it is
called a group if the following conditions are satisfied:

(i) a ∘ b ∈ G for all a, b ∈ G, i.e. the closure property is satisfied.

(ii) a ∘ (b ∘ c) = (a ∘ b) ∘ c for all a, b, c ∈ G, i.e. the associative law is satisfied.

(iii) There exists an element e in G such that e ∘ a = a ∘ e = a for all a ∈ G;
e is called the identity element of G. So the existence of an identity is satisfied.

(iv) For every a in G there is an element a⁻¹ in G such that
a ∘ a⁻¹ = a⁻¹ ∘ a = e; a⁻¹ is called the inverse of a.

If, in addition, the property a ∘ b = b ∘ a for all a, b ∈ G also holds,
then the group (G, ∘) is called a commutative group or an abelian group.

Field: A system (F, +, ·) consisting of a non-empty set F together with the operations ‘+’ (addition)
and ‘·’ (multiplication) is called a field if the following conditions are satisfied:

(1) (F, +) is an abelian (or commutative) group.

(2) (F − {0}, ·) is an abelian group, where 0 is the additive identity.

(3) Multiplication is distributive w.r.t. addition,

that is, a · (b + c) = a · b + a · c and (b + c) · a = b · a + c · a for all a, b, c ∈ F.

[1]
Vector Space: By K we shall understand either the set R of all real numbers or the set C of
all complex numbers.

On a non-empty set E we consider the following two maps:

(i) (x, y) ↦ x + y from E × E into E, called vector addition;

(ii) (α, x) ↦ αx from K × E into E, called scalar multiplication.

The above two maps are assumed to satisfy the following conditions:
(a) (E, +) is an abelian group;
(b) α(x + y) = αx + αy for all x, y ∈ E, α ∈ K;
(c) (α + β)x = αx + βx for all x ∈ E, α, β ∈ K;
(d) α(βx) = (αβ)x for all x ∈ E, α, β ∈ K;
(e) 1 · x = x for all x ∈ E, where 1 is the unity element of K.

Whenever the above conditions are satisfied we say that E is a vector space (or linear space)
over the field K, and in this case we write E(K). If there is no chance of confusion we write
simply E to mean that E is a vector space over some field K.

When K = R (the set of real numbers) we say E is a real vector space,

and if K = C (the set of complex numbers) we say E is a complex vector space.

Vector Sub-Space: Let W be a non-empty subset of a vector space E(K). Then W is called a
vector sub-space of E(K) if W is itself a vector space over K under the operations defined
on E.

Sum of two vector sub-spaces: Let W1, W2 be any two sub-spaces of a vector space E
over the field K. Then we define

W1 + W2 = { x1 + x2 : x1 ∈ W1, x2 ∈ W2 } = W (say).

It is easy to see that W is a sub-space of E.

Direct sum of two vector sub-spaces: A vector space E(K) is said to be the direct sum of
two vector sub-spaces W1 and W2 if

E = W1 + W2 and W1 ∩ W2 = {0}.

Whenever E is the direct sum of W1 and W2, we write E = W1 ⊕ W2.

Linear combination: If {x1, x2, ..., xn} is a finite set of vectors of a vector space E(K)
and {α1, α2, ..., αn} is a finite set of scalars of K, then

α1 x1 + α2 x2 + ... + αn xn is called a linear combination (L.C.) of the vectors x1, x2, ..., xn

with scalars α1, α2, ..., αn.

[2]
Finitely Generated Vector Space: Let {x1, x2, x3, ..., xn} be a non-empty finite subset
of a vector space E. If the sub-space [x1, x2, ..., xn] generated by these vectors equals E, then
we say E is finitely generated.

Quotient Space: Let U be a sub-space of a vector space E(K). For any x ∈ E,
x + U = { x + u : u ∈ U } is called a coset of U in E.

Let E/U = the set of all cosets of U in E = { x + U : x ∈ E }.

We define vector addition and scalar multiplication on the set E/U in the following ways:

(i) (x + U) + (y + U) = (x + y) + U;
(ii) α(x + U) = αx + U, for every x, y ∈ E and α ∈ K.

Then E/U is a vector space, called the quotient space of E by U.

Zero Vector: A vector is called a zero vector if each of its components is zero,
e.g. (0, 0, ..., 0) is the zero vector. The zero vector is denoted simply by 0.

Linear Independence (L.I.): A finite set {x1, x2, x3, ..., xn} of elements of a vector
space E(K) is called a linearly independent set if

whenever α1 x1 + α2 x2 + ... + αn xn = 0 with each αi ∈ K, then

α1 = α2 = ... = αn = 0.

Linear Dependence (L.D.): The set {x1, x2, ..., xn} of elements of E(K) is linearly dependent
if there exist scalars αi ∈ K, not all zero, such that α1 x1 + α2 x2 + ... + αn xn = 0.
Such a relation is called a non-trivial linear relation between x1, x2, x3, ..., xn.

Basis of a vector space: A non-empty finite subset {x1, x2, x3, ..., xn} of vectors of a
vector space E(K) is said to be a basis of E if:

(i) the set {x1, x2, x3, ..., xn} is linearly independent, and

(ii) the sub-space [x1, x2, ..., xn] generated by the vectors x1, x2, x3, ..., xn equals E.

Equivalently: a linearly independent set of vectors of a vector space E is called a basis of
E if it generates E.

Dimension of a vector space: The number of elements in a basis of a vector space E is
called the dimension of E. The dimension of a vector space is also called its rank.

1.3. Theorem:

Theorem (1.3) i (2018): Any two bases of a vector space E have the same number of
elements.

Proof: Let {e1, e2, e3, ..., en} and {g1, g2, g3, ..., gm} be any two bases of a vector space
E(K).

Since the set {e1, e2, e3, ..., en} of vectors is a basis,

the vectors e1, e2, e3, ..., en are linearly independent.

Also {g1, g2, g3, ..., gm} is a basis of E with m elements, so it spans E,
and a linearly independent set cannot contain more elements than a spanning set.

Thus n ≤ m -----------------------(1)

Again, the vectors g1, g2, g3, ..., gm are linearly independent,
and {e1, e2, e3, ..., en} is a basis of E with n elements, so it spans E.

Thus m ≤ n --------------------(2)

Thus from (1) and (2), m = n.

Thus any two bases of a vector space have the same number of elements.

Theorem (1.3) ii: Let E be a vector space of dimension n. Then any set of n linearly
independent elements of E is a basis of E.

Proof: Let {x1, x2, x3, ..., xn} be any set of n linearly independent elements of the vector space
E.

By a known theorem, for any x in E the set {x1, x2, x3, ..., xn, x} of n + 1 elements of E is
L.D., so we can find scalars α1, α2, ..., αn, β, not all zero, such that

α1 x1 + α2 x2 + ... + αn xn + βx = 0.

If β = 0, then some αi ≠ 0 while α1 x1 + ... + αn xn = 0, contradicting the assumption
that {x1, x2, x3, ..., xn} is a linearly independent set.

Thus β ≠ 0,

and so x = -β⁻¹(α1 x1 + α2 x2 + ... + αn xn).

Hence {x1, x2, x3, ..., xn} generates E and is linearly independent.

Thus the set {x1, x2, x3, ..., xn} forms a basis of E.

1.4. Solved examples :

Example 1:- The set of real numbers with usual definition of addition and multiplication is
a field.

[4]
Example 2:- The set of complex numbers with usual definition of addition and
multiplication is a field.

Example 3 (2018): Show that the set { x + 1, x - 1, -x + 5 } is linearly dependent.

Solution: Let e1 = x + 1, e2 = x - 1, e3 = -x + 5,
and let a e1 + b e2 + c e3 = 0.

Then a(x + 1) + b(x - 1) + c(-x + 5) = 0

⇒ x(a + b - c) + (a - b + 5c) = 0

⇒ a + b - c = 0 ------- (i) and a - b + 5c = 0 -------- (ii)

From (i) and (ii) we get

a = -b + c and a = b - 5c.

Solving these two we get b = 3c.

Putting this value b = 3c in (ii) we get a = -2c.

Thus these equations have a non-trivial solution,

with a : b : c = -2c : 3c : c, i.e. a : b : c = -2 : 3 : 1.

Thus the given set of vectors is linearly dependent.

Example 4 (2018): Show that the set {x² + 1, 3x - 1, -4x + 1} is linearly independent.

Solution: Let e1 = x² + 1, e2 = 3x - 1, e3 = -4x + 1,

and let a e1 + b e2 + c e3 = 0.

⇒ a(x² + 1) + b(3x - 1) + c(-4x + 1) = 0

⇒ x²(a) + x(3b - 4c) + (a - b + c) = 0

Thus a = 0 ------- (i)

3b - 4c = 0 ------- (ii) and

a - b + c = 0 ------- (iii)

From (i) and (iii) we get

-b + c = 0 ⇒ b = c [since a = 0] ----------- (iv)

Thus from (iv) and (ii),

3c - 4c = 0 ⇒ c = 0.

Thus b = c = 0, and so a = b = c = 0.

Thus {x² + 1, 3x - 1, -4x + 1} is linearly independent.
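The scalar systems of Examples 3 and 4 can be cross-checked with sympy by solving a·e1 + b·e2 + c·e3 = 0 symbolically (the helper name dependence_relations is ours, not part of the text):

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c')

def dependence_relations(p1, p2, p3):
    """Solve a*p1 + b*p2 + c*p3 = 0 identically in x for the scalars a, b, c."""
    expr = sp.expand(a*p1 + b*p2 + c*p3)
    coeffs = sp.Poly(expr, x).all_coeffs()  # one linear equation per power of x
    return sp.solve(coeffs, [a, b, c], dict=True)

# Example 3: a one-parameter family of solutions exists (a = -2c, b = 3c),
# so the set {x+1, x-1, -x+5} is linearly dependent.
print(dependence_relations(x + 1, x - 1, -x + 5))

# Example 4: only the trivial solution a = b = c = 0,
# so the set {x^2+1, 3x-1, -4x+1} is linearly independent.
print(dependence_relations(x**2 + 1, 3*x - 1, -4*x + 1))
```
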

Example 5 (2018): Prove that the vectors (1, 0, -1), (1, 2, 1), (0, -3, -2) form a basis of V3(R).

Proof: We prove this in two parts.

In the first part we show that the given vectors are linearly independent, and in the second part we
show that the given vectors generate V3(R).

For the first part: let a(1, 0, -1) + b(1, 2, 1) + c(0, -3, -2) = 0 = (0, 0, 0).

Then a + b = 0 -------- (i) ⇒ a = -b or b = -a

2b - 3c = 0 -------- (ii)

-a + b - 2c = 0 -------- (iii)

Substituting a = -b from (i) into (iii) gives 2b - 2c = 0,

so b = c -------- (iv)

From (ii) and (iv), 2c - 3c = 0 ⇒ c = 0.

Then b = c = 0 ⇒ b = 0, and b = -a ⇒ a = 0.

Thus we have a = b = c = 0,

and the given vectors are linearly independent.

So the first part is done.

We now come to the 2nd part

For this it is sufficient to show that any vector x = (x1, x2, x3) of V3(R) can be expressed as a
linear combination

x = l(1, 0, -1) + m(1, 2, 1) + n(0, -3, -2) ----------- (A)

We have to determine l, m and n.

Since x = (x1, x2, x3) = (l + m, 2m - 3n, -l + m - 2n),

x1 = l + m --------------- (i)

x2 = 2m - 3n --------------- (ii)

x3 = -l + m - 2n --------------- (iii)

From (i) and (iii),

x3 = -x1 + m + m - 2n

⇒ x3 + x1 = 2m - 2n = (2m - 3n) + n = x2 + n [from (ii)]

⇒ n = x1 - x2 + x3

Also from (ii), x2 = 2m - 3(x1 - x2 + x3) ⇒ 2m = 3x1 - 2x2 + 3x3 ⇒ m = ½(3x1 - 2x2 + 3x3)

Again l = x1 - m = x1 - ½(3x1 - 2x2 + 3x3) = ½(-x1 + 2x2 - 3x3)

Thus x = (x1, x2, x3) = l(1, 0, -1) + m(1, 2, 1) + n(0, -3, -2) = ½(-x1 + 2x2 - 3x3)(1, 0, -1)

+ ½(3x1 - 2x2 + 3x3)(1, 2, 1) + (x1 - x2 + x3)(0, -3, -2)

Hence we find that each vector of V3(R) is a linear combination of the given vectors.

It means the given vectors generate V3(R).

Therefore the given vectors (1, 0, -1), (1, 2, 1), (0, -3, -2) form a basis of V3(R).

Also the dimension of V3(R) is 3.
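As a numerical cross-check of the coordinate formulas for l, m, n just derived (the test vector x = (2, 1, 3) is an arbitrary choice of ours):

```python
import numpy as np

# Basis vectors of Example 5 placed as the columns of B, so that
# B @ (l, m, n) is exactly the linear combination (A).
B = np.array([[ 1.0, 1.0,  0.0],
              [ 0.0, 2.0, -3.0],
              [-1.0, 1.0, -2.0]])

x1, x2, x3 = 2.0, 1.0, 3.0
x = np.array([x1, x2, x3])

lmn = np.linalg.solve(B, x)  # numerical coordinates (l, m, n)

# Closed forms derived above.
expected = np.array([(-x1 + 2*x2 - 3*x3) / 2,
                     (3*x1 - 2*x2 + 3*x3) / 2,
                     x1 - x2 + x3])

print(np.allclose(lmn, expected))  # True
```
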

Example 6: Prove that the vectors e1 = (1, 0, 0), e2 = (0, 1, 0) and e3 = (0, 0, 1) of

V3(R) are linearly independent and form a basis of V3(R). Also find dim V3(R).

Proof: Let a1 e1 + a2 e2 + a3 e3 = 0, where 0 is the zero vector of V3(R) and a1, a2, a3 ∈ R.
⇒ a1(1, 0, 0) + a2(0, 1, 0) + a3(0, 0, 1) = (0, 0, 0); then by the definition of scalar
multiplication,

(a1, 0, 0) + (0, a2, 0) + (0, 0, a3) = (0, 0, 0).

Thus by the definition of vector addition we have

(a1 + 0 + 0, 0 + a2 + 0, 0 + 0 + a3) = (0, 0, 0)

⇒ (a1, a2, a3) = 0 ⇒ a1 = a2 = a3 = 0

[7]
Thus the set { e1, e2, e3} of vectors of V3 (R) is linearly independent subset of V3 (R).

Also any vector x = (x1, x2, x3) of V3 (R) can be expressed as x = x1 e1 + x2 e2 + x3 e3

Thus each vector of V3 (R) is a linear combination of e1, e2, e3.

Hence V3 (R) is generated by e1, e2, e3.

Thus the set { e1, e2, e3} is a basis of V3 (R).

Also the dimension of V3 (R) is 3 [ no. of elements in basis { e1, e2, e3}].

Note :- From above two examples it follows that a vector space may have more than one
basis.

Example 7: Let U be a sub-space of a vector space E(K). Then the set E/U of all cosets of
U in E is a vector space under vector addition and scalar multiplication defined as:

(1) (x + U) + (y + U) = (x + y) + U and
(2) α(x + U) = αx + U, for all x, y ∈ E, α ∈ K.

Proof: By definition E/U = { x + U : x ∈ E }.

For x, y ∈ E, x + y ∈ E ⇒ (x + y) + U ∈ E/U ⇒ (x + U) + (y + U) ∈ E/U.

Also a ∈ K, x ∈ E ⇒ ax ∈ E ⇒ ax + U ∈ E/U ⇒ a(x + U) ∈ E/U.

That is, x + U, y + U ∈ E/U ⇒ (x + U) + (y + U) ∈ E/U [i.e. the closure property is satisfied].

Also (x + U) + (y + U) = (x + y) + U = (y + x) + U = (y + U) + (x + U)

(that is, the commutative law is satisfied).

Clearly 0 ∈ E ⇒ 0 + U = U ∈ E/U, and U is the additive identity of E/U,

since (x + U) + (0 + U) = (x + 0) + U = x + U [thus the existence of identity also holds good].

Clearly addition is associative as well.

For x ∈ E we have -x ∈ E, since E is a vector space;

then (-x + U) + (x + U) = (-x + x) + U = 0 + U (thus every element of E/U has an additive inverse).

Further, as we have already seen above, for a ∈ K and x + U ∈ E/U we have

a(x + U) ∈ E/U.

[8]
From this we find that:

(1) 1(x + U) = 1·x + U = x + U
(2) a[b(x + U)] = (ab)(x + U)
(3) it can also be easily verified that (a + b)(x + U) = a(x + U) + b(x + U)
(4) a[(x + U) + (y + U)] = a(x + U) + a(y + U) can also be easily verified

Thus all the conditions for E/U to be a vector space are satisfied and hence E/U forms a
vector space.

EXERCISES

1. Show that a field K can be regarded as a vector space over any subfield F of K.


2. Show that vectors (3, 1, - 4), (2, 2, - 3), (0, - 4, 1) of V3 (R) are linearly independent.
3. Prove that any subset of linearly independent set is linearly independent.
4. For what value of m is the vector (m, 3, 1) a linear combination of the vectors
e1 = (3, 2, 1) and e2 = (2, 1, 0)?
5. Show that the vectors (1, 1, - 1), (2, - 3, 5), (0, 1, 4) of R3(R) are linearly independent.
6. Show that vectors (1, 0, 1), (1, 1, 1) and (0, 0, 1) of V3(R) are linearly independent
and they form a basis for V3 (R).
7. Determine whether or not the following vectors form a basis of R3 (1, 1, 2), (1, 2, 5),
(5, 3, 4).

2. Finite Dimensional Vector Space, Quotient Space

2.1 Introduction: In the previous section we learned what the dimension of a vector
space is. In this section we shall see when a vector space can be called finite
dimensional, and we shall study some of the properties of finite dimensional
vector spaces (F.D.V.S.).

2.2 Definition:

Linear span: Let S be a non-empty subset of a vector space E(K). Then the linear span of S,
denoted L(S), is defined to be the set of all linear combinations of finite subsets of the
elements of S:

L(S) = { α1 x1 + α2 x2 + ... + αn xn : α1, α2, ..., αn ∈ K, x1, x2, ..., xn finitely many elements of S }.

L(S) is also called the set generated by S.

If U is any sub-space of E such that S ⊆ U, then L(S) ⊆ U;

we can conclude that L(S) is the smallest sub-space of E containing S.

Also L(∅) = {0}.

Finite Dimensional Vector Space: A vector space E(K) is said to be a finite dimensional
vector space (F.D.V.S.) if there exists a finite subset S of E such that L(S) = E. In this case E is also
said to be finitely generated.

Infinite Dimensional Vector Space: A vector space E(K) is said to be infinite dimensional
if its dimension is not finite.

2.3 Theorem:-

Theorem (2.3) i: If U is a sub-space of an n-dimensional vector space E(K), then

dim U ≤ n.

Proof: Let E(K) be a F.D.V.S. of dimension n, and let U be a sub-space of E.

Let B = { x1, x2, x3, ..., xn } be a basis of E,

so that by definition L(B) = E and B is L.I.

Any linearly independent subset of U is also a linearly independent subset of E,
and since dim E = n, a linearly independent subset of E contains at most n vectors.

A basis of U is a linearly independent subset of U,
so in no case can it contain more than n vectors.

Hence dim U ≤ n (the number of elements in a basis of E).

Theorem (2.3) ii: If U is a sub-space of a finite dimensional vector space E(K), then

dim E = dim U if and only if E = U.

Proof: Since U is a sub-space of the F.D.V.S. E(K), U ⊆ E.

First let U = E; we prove that dim U = dim E.

For this, E = U ⇒ E is a sub-space of U, and U is a sub-space of E

⇒ dim E ≤ dim U and dim U ≤ dim E ⇒ dim U = dim E.

Conversely, let dim E = dim U; we prove E = U.

Since dim E = dim U = n (say),

let B = { x1, x2, x3, ..., xn } be a basis of U, so that B ⊆ U, L(B) = U and B is L.I.
Since U ⊆ E, xi ∈ U ⇒ xi ∈ E, so B is a L.I. subset of E.

Also dim E = n ⇒ every L.I. subset of E containing n vectors is a basis of E.

Thus B is a basis of E ⇒ L(B) = E.

Hence U = L(B) = E,

so U = E.

Theorem (2.3) iii (2018): Let W1 and W2 be two sub-spaces of a finite dimensional vector space
E(K). Then prove that:

(i) W1 + W2 is finite dimensional;

(ii) dim W1 + dim W2 = dim(W1 + W2) + dim(W1 ∩ W2).

Proof: Clearly W1 ∩ W2 is a sub-space of E, and dim(E) is finite.

Let dim W1 = m, dim W2 = n and dim(W1 ∩ W2) = r.

Let {e1, e2, e3, ..., er} be a basis of W1 ∩ W2.

Obviously this basis can be extended to a basis of W1 and also to a basis of W2.

Let S1 = {e1, e2, ..., er, g1, g2, ..., gm-r} be the basis of W1 and

S2 = {e1, e2, ..., er, p1, p2, ..., pn-r} be the basis of W2.

We now set S = {e1, ..., er, g1, ..., gm-r, p1, ..., pn-r} -------------(1)

To see that S is a basis of W1 + W2,

it suffices to show that S is L.I. and S spans W1 + W2.

For this, let a1e1 + a2e2 + ... + ar er + b1g1 + b2g2 + ... + bm-r gm-r + c1p1 + c2p2 + ... + cn-r pn-r = 0,

where all the ai, bj, ck ∈ K.

Then c1p1 + c2p2 + ... + cn-r pn-r = -(a1e1 + ... + ar er) - (b1g1 + ... + bm-r gm-r) ∈ W1.

But c1p1 + ... + cn-r pn-r ∈ W2 as well, so c1p1 + ... + cn-r pn-r ∈ W1 ∩ W2,

and hence c1p1 + ... + cn-r pn-r = d1e1 + ... + dr er for some di ∈ K.

Since S2 = {e1, ..., er, p1, ..., pn-r} is L.I., this forces c1 = c2 = ... = cn-r = 0.

The original relation then reduces to a1e1 + ... + ar er + b1g1 + ... + bm-r gm-r = 0,
and since S1 is L.I., all the ai and bj are zero as well.

Thus S in (1) is L.I.

We now show that S spans W1 + W2.

For this, let e be any element of W1 + W2; then e = g + p (by definition)

with g ∈ W1, p ∈ W2. Also, as S1 and S2 are bases of W1 and W2 respectively,

g and p can be expressed as

g = Σ ai ei + Σ bj gj, with ai, bj ∈ K,

and p = Σ di ei + Σ ck pk, with di, ck ∈ K.

Thus clearly e is a linear combination of the elements of S.

Hence S spans W1 + W2, i.e. S is a basis of W1 + W2 ⇒ W1 + W2 is finite dimensional.

So the first part is done.

Further, counting the elements of S,

dim(W1 + W2) = r + (m - r) + (n - r) = m + n - r.

But dim W1 + dim W2 = m + n = r + (m + n - r) = dim(W1 ∩ W2) + dim(W1 + W2),

or equivalently dim(W1 + W2) = dim W1 + dim W2 - dim(W1 ∩ W2). Proved.
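The dimension formula can be verified numerically for concrete subspaces. In the sketch below the two subspaces of R³ are our own illustrative choice, and the intersection computation assumes the given spanning rows of each subspace are linearly independent:

```python
import numpy as np

# Spanning vectors (as rows) for two subspaces of R^3.
W1 = np.array([[1., 0., 0.], [0., 1., 0.]])
W2 = np.array([[0., 1., 0.], [0., 0., 1.]])

dim_W1 = np.linalg.matrix_rank(W1)
dim_W2 = np.linalg.matrix_rank(W2)

# dim(W1 + W2): rank of all spanning vectors stacked together.
dim_sum = np.linalg.matrix_rank(np.vstack([W1, W2]))

# dim(W1 ∩ W2): a vector in the intersection satisfies W1.T @ s = W2.T @ t,
# so count independent solutions (s, t) of [W1.T | -W2.T] @ (s, t) = 0.
M = np.hstack([W1.T, -W2.T])
dim_int = M.shape[1] - np.linalg.matrix_rank(M)

print(dim_W1 + dim_W2 == dim_sum + dim_int)  # True: 2 + 2 == 3 + 1
```
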

Theorem (2.3) iv: Let

(i) V(K) be a finite dimensional vector space,
(ii) W1, W2 be two sub-spaces of V(K),
(iii) V be the direct sum of W1 and W2.

Then dim V = dim W1 + dim W2.

Proof: By hypothesis V is finite dimensional and W1, W2 are its sub-spaces;

thus W1 and W2 are also finite dimensional.

Let dim W1 = m and dim W2 = n.

Also, V = W1 ⊕ W2 ⇒ V = W1 + W2 and W1 ∩ W2 = {0}.

Let S1 = {e1, e2, e3, ..., em} be a basis of W1 and

S2 = {g1, g2, g3, ..., gn} be a basis of W2,

and let S3 = {e1, e2, ..., em, g1, g2, ..., gn}.

Suppose (a1e1 + a2e2 + ... + am em) + (b1g1 + b2g2 + ... + bn gn) = 0.

Then b1g1 + b2g2 + ... + bn gn = -(a1e1 + a2e2 + ... + am em).

Thus a1e1 + a2e2 + ... + am em ∈ W1 ∩ W2 and b1g1 + b2g2 + ... + bn gn ∈ W1 ∩ W2.

But W1 ∩ W2 = {0} ⇒ both of the above linear combinations equal zero,

and since S1 and S2, being bases, are L.I., all the scalars are zero.

Thus S3 is L.I.

Let t ∈ V be arbitrary; then t = e + g with e ∈ W1, g ∈ W2

⇒ e, g can be expressed as linear combinations of the elements of S1 and S2 respectively.

Thus t = e + g = a1e1 + a2e2 + ... + am em + b1g1 + b2g2 + ... + bn gn

⇒ S3 generates V, so S3 forms a basis of V.

Hence dim V = m + n = dim W1 + dim W2. Proved.

Theorem (2.3) v (2017): Let W be a sub-space of a F.D.V.S. V. Then

dim V/W = dim V - dim W.

Proof: Let dim W = m and let {w1, w2, w3, ..., wm} be a basis of W.

⇒ {w1, ..., wm} is L.I. in W, hence L.I. in V, and so, as we know,

it can be extended to a basis {w1, ..., wm, v1, v2, ..., vn} of V.

Thus dim V = m + n.

We consider the set S = {W + v1, W + v2, ..., W + vn} and show it forms a basis of V/W.

Let α1(W + v1) + α2(W + v2) + ... + αn(W + vn) = W, with αi ∈ K (the field)

⇒ W + (α1v1 + α2v2 + ... + αn vn) = W ⇒ α1v1 + α2v2 + ... + αn vn ∈ W

⇒ α1v1 + ... + αn vn is a L.C. of w1, w2, w3, ..., wm

⇒ α1v1 + ... + αn vn = β1w1 + β2w2 + ... + βm wm for some βj ∈ K

⇒ α1v1 + ... + αn vn - β1w1 - β2w2 - ... - βm wm = 0 ⇒ αi = βj = 0 for all i, j

(since {w1, ..., wm, v1, ..., vn} is L.I.)

⇒ {W + v1, W + v2, ..., W + vn} is L.I.

Again, for any W + v ∈ V/W, v ∈ V ⇒ v is a linear combination of w1, ..., wm, v1, ..., vn.

Let v = β1w1 + β2w2 + ... + βm wm + α1v1 + α2v2 + ... + αn vn, with αi, βj ∈ K.

Then W + v = W + (β1w1 + ... + βm wm) + (α1v1 + ... + αn vn)

= W + (α1v1 + α2v2 + ... + αn vn)    [since β1w1 + ... + βm wm ∈ W]

= (W + α1v1) + (W + α2v2) + ... + (W + αn vn)

= α1(W + v1) + α2(W + v2) + ... + αn(W + vn).

Thus S spans V/W and is therefore a basis of V/W

⇒ dim V/W = n = (m + n) - m.

Therefore dim V/W = dim V - dim W.

2.4. Exercise:

Problem 1. The linear span L(S) of any non-empty subset S of a vector space V(F) is a
subspace of V(F).

Problem 2. If S, T are two subsets of a vector space V, then


(1) S ⊆ T ⇒ L(S) ⊆ L(T)
(2) L(S ∪ T) = L(S) + L(T)

Problem 3. The linear sum of two subspaces W1 and W2 of a vector space V(F) is generated
by their union.

That is, W1 + W2 = L(W1 ∪ W2).

3. Row space and column space of a matrix, Dimension and Rank


3.1 Introduction: In this section we shall learn how the notions of linear
combination, linear independence, etc. apply to matrices. We are already well
acquainted with rows and columns, but now we shall also learn what the row and
column spaces of a matrix are.
3.2 Definition:

Echelon matrix: Let A = [ aij ] be an m × n matrix over some field F. If the number of
zeros preceding the first non-zero element of a row increases row by row (the elements of the
last row or rows may all be zero), then the matrix A is called an echelon matrix, or is said
to be in echelon form.

Distinguished elements of a matrix A: The first non-zero elements in the rows of an
echelon matrix A are called the distinguished elements of A.

Row canonical form of a matrix: An echelon matrix is in row canonical form if its distinguished
elements are each equal to 1 and are the only non-zero elements in their respective columns.
Such a matrix is also called a row reduced echelon matrix.

Example 1. The matrix A = [ -3 0 1 ]
                          [  0 1 2 ]
                          [  0 0 4 ] is an echelon matrix.

Its distinguished elements are -3, 1, 4.

Example 2. A = [ 1 3 0 1 ]
               [ 0 0 1 2 ]
               [ 0 0 0 0 ] is a row reduced echelon matrix.

Row equivalence of two matrices: Let A and B be any two matrices. Then A is called
row equivalent to B if and only if B can be obtained from A by a finite number of elementary
row operations.

Column equivalence of two matrices: A is called column equivalent to B iff B can be

obtained from A by a finite number of elementary column operations.

Row space of a matrix: Let A = [ aij ] be an m × n matrix over a field F. Then the m
row vectors of A are as below:

R1 = ( a11, a12,......., a1n ) is an n tuple over F

R2 = ( a21, a22,......., a2n ) is an n tuple over F

---------------------------------------------------

---------------------------------------------------

Rm = ( am1, am2,......., amn ) is an n tuple over F

Then L(R1, R2, ..., Rm), the linear span of the rows, is a sub-space of Fn called the row space of A.
The vectors R1, R2, ..., Rm are called row vectors. The row space of A is denoted by R(A).

Column space of a matrix: Let A = [ aij ] be an m × n matrix over a field F.


Then C1 = ( a11, a21,......., am1 )

C2 = ( a12, a22,......., am2 )

---------------------------------------------------

Cn = ( a1n, a2n,......., amn )

The linear span L(C1, C2, ..., Cn) is a sub-space of Fm called the column space of A;
generally we denote it by C(A). The vectors C1, C2, ..., Cn are called column vectors.

Null space of a matrix A: Let A = [ aij ] be an m × n matrix over a field F. Then the null
space of A, denoted N(A), is defined as

N(A) = { x ∈ Fn : Ax = 0 }.

Note 1: column space of A is the same as the row space of At.

Row rank (or column rank) of a matrix A: dim R(A) is called the row rank of A, and
dim C(A) is called the column rank of A.

Also dim R(A) = row rank of A = the number of non-zero rows in an echelon form of A.

Rank of a matrix: The row rank (equivalently, the column rank) of A is called the rank of A.

3.3. Theorem:

Theorem (3.3) i :- Row equivalent matrices have the same row space.

Proof :- Let A and B be any two row equivalent matrices.


Then by definition,

Each row of B is either a row of A or a linear combination of the rows of A.

Hence row space of B is contained in row space of A --------------(1)

On the other hand, starting from B and applying the inverse elementary operations in a similar
way, we find that

the row space of A is contained in the row space of B -----------------(2)

Thus from (1) and (2) it follows that

the row space of A and the row space of B are the same.

Theorem (3.3) ii: The row space and the column space of a matrix A have the same dimension.
Or:

let A be an m × n matrix; then the dimension of its row space equals the dimension of its column space.

Proof: Let {v1, v2, v3, ..., vk} be a basis for C(A).

Then each column of A can be expressed as a linear combination of these vectors;

suppose the ith column Ci is given by

Ci = γ1i v1 + γ2i v2 + ... + γki vk.

Let us form two matrices as follows:

B is the m × k matrix whose columns are the basis vectors vi, while Γ = (γji) is the k × n matrix
whose ith column contains the coefficients γ1i, γ2i, ..., γki. Then it follows that A = BΓ.

However, we can also view the product A = BΓ as expressing each row of A as a linear
combination of the rows of Γ, with the ith row of B

giving the coefficients of the linear combination that determines the ith row of A.

Therefore the rows of Γ are a spanning set for the row space of A, and so the dimension of the
row space of A is at most k.

Thus we conclude that dim{row space(A)} ≤ dim{column space(A)} ------------(1)

Applying the same argument to At we can see that

dim C(A) ≤ dim R(A) -----------------------(2)

Thus it follows directly from (1) and (2) that

dim C(A) = dim R(A).

Theorem (3.3) iii: The non-zero rows of an echelon matrix are linearly independent.

Proof: Let R1, R2, ..., Rn be the non-zero rows of an echelon matrix A.
To prove: R1, R2, ..., Rn are linearly independent vectors.

If not, let R1, R2, ..., Rn be linearly dependent.

Then one of the rows, say Rm, is a linear combination of the subsequent rows,

that is, Rm = αm+1 Rm+1 + αm+2 Rm+2 + ... + αn Rn --------------- (1)

Let the kth entry of Rm be its first non-zero entry.

Since the matrix A is in echelon form, the kth entry of each of Rm+1, Rm+2, ..., Rn is zero.

Thus from (1), the kth entry of Rm

= the kth entry of αm+1 Rm+1 + αm+2 Rm+2 + ... + αn Rn

= αm+1 · 0 + αm+2 · 0 + ... + αn · 0 = 0, a contradiction, since by assumption the kth entry of Rm is

non-zero.

Thus R1, R2, ..., Rn are linearly independent.

3.4 Solved examples:

Example 1. Reduce A = [ 1 -2 3 -1 ]
                      [ 2 -1 2  2 ]
                      [ 3  1 2  3 ]
to echelon form and then to row reduced echelon form.

Solution: Operating R2 → R2 + (-2)R1 and R3 → R3 + (-3)R1:

A ~ [ 1 -2  3 -1 ]
    [ 0  3 -4  4 ]
    [ 0  7 -7  6 ]

Operate R3 → 3R3 + (-7)R2; then we have

A ~ [ 1 -2  3  -1 ]
    [ 0  3 -4   4 ]
    [ 0  0  7 -10 ]   which is an echelon form of A.

Now we reduce it to row reduced echelon form.

We operate R2 → (1/3)R2 and R3 → (1/7)R3; then

A ~ [ 1 -2    3    -1 ]
    [ 0  1 -4/3   4/3 ]
    [ 0  0    1 -10/7 ]

Now operating R1 → R1 + 2R2:

A ~ [ 1  0  1/3   5/3 ]
    [ 0  1 -4/3   4/3 ]
    [ 0  0    1 -10/7 ]

We now operate R2 → R2 + (4/3)R3 and R1 → R1 + (-1/3)R3:

A ~ [ 1  0  0  15/7 ]
    [ 0  1  0  -4/7 ]
    [ 0  0  1 -10/7 ]

(Note the (2, 4) entry: 4/3 + (4/3)(-10/7) = -4/7.)

This is the required row reduced echelon form.
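The whole reduction can be checked in one step with sympy's rref method, which returns the row reduced echelon form together with the pivot columns:

```python
import sympy as sp

A = sp.Matrix([[1, -2, 3, -1],
               [2, -1, 2, 2],
               [3, 1, 2, 3]])

R, pivots = A.rref()   # row reduced echelon form and pivot column indices
print(R)
# Matrix([[1, 0, 0, 15/7], [0, 1, 0, -4/7], [0, 0, 1, -10/7]])
```
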

Example 2. Show that the matrices A and B have the same column space, where

A = [ 1 3 5 ]        B = [  1  2  3 ]
    [ 1 4 3 ]            [ -2 -3 -4 ]
    [ 1 1 9 ]            [  7 12 17 ]

Solution: A and B have the same column space if and only if At and Bt have the
same row space.

Thus we have to reduce At and Bt to row reduced echelon form:

At = [ 1 1 1 ]  ~  [ 1  1  1 ]  ~  [ 1 1  1 ]  ~  [ 1 0  3 ]
     [ 3 4 1 ]     [ 0  1 -2 ]     [ 0 1 -2 ]     [ 0 1 -2 ]
     [ 5 3 9 ]     [ 0 -2  4 ]     [ 0 0  0 ]     [ 0 0  0 ]

Bt = [ 1 -2  7 ]  ~  [ 1 -2  7 ]  ~  [ 1 -2  7 ]  ~  [ 1 0  3 ]
     [ 2 -3 12 ]     [ 0  1 -2 ]     [ 0  1 -2 ]     [ 0 1 -2 ]
     [ 3 -4 17 ]     [ 0  2 -4 ]     [ 0  0  0 ]     [ 0 0  0 ]

Thus from the above it is clear that

the non-zero rows of the row reduced echelon matrix of At

= the non-zero rows of the row reduced echelon matrix of Bt

⇒ R(At) = R(Bt) ⇒ C(A) = C(B).
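The same conclusion can be checked mechanically with sympy by comparing the row reduced echelon forms of the transposes:

```python
import sympy as sp

A = sp.Matrix([[1, 3, 5], [1, 4, 3], [1, 1, 9]])
B = sp.Matrix([[1, 2, 3], [-2, -3, -4], [7, 12, 17]])

# Equal column spaces <=> the transposes have the same row reduced echelon form.
RA, _ = A.T.rref()
RB, _ = B.T.rref()
print(RA == RB)  # True
print(RA)        # Matrix([[1, 0, 3], [0, 1, -2], [0, 0, 0]])
```
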

Example 3. Find a basis for the row space of the matrix A and determine its rank, where

A = [ 1 2 3 ]
    [ 2 5 4 ]
    [ 1 1 5 ]
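Example 3 is left as an exercise in the text; a sketch of the computation with sympy (the reduction is R2 → R2 − 2R1, R3 → R3 − R1, R3 → R3 + R2, R1 → R1 − 2R2):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 5, 4],
               [1, 1, 5]])

R, pivots = A.rref()   # row reduced echelon form and pivot columns
print(R)               # Matrix([[1, 0, 7], [0, 1, -2], [0, 0, 0]])
print(len(pivots))     # rank = 2; the non-zero rows of R form a basis
```
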

[19]
4. Isomorphism

4.1 Introduction: We have already studied isomorphisms in group theory. In this
section we shall study the role of isomorphism in the context of vector spaces over a field.

4.2 Definition:

Isomorphism of vector spaces: Let E and E′ be any two vector spaces over the same
field F, and let f : E → E′ be a mapping. Then f is called an isomorphism if

(1) f is one-one and onto,

(2) f(x + y) = f(x) + f(y) for all x, y ∈ E, and
(3) f(αx) = α f(x) for all α ∈ F, x ∈ E.

Whenever f is an isomorphism of E onto E′, we say that E is isomorphic to E′. We also say
that E′ is an isomorphic image of E.

4.3 Theorem:

Theorem (4.3) i: Every n-dimensional vector space V(F) is isomorphic to Fn(F).

Proof: By hypothesis V(F) is an n-dimensional vector space.

Let S = {e1, e2, ..., en} be a basis of V.

Then every vector of V can be expressed as a linear combination of the elements of S:

for any e ∈ V, e = a1e1 + a2e2 + ... + an en, with a1, a2, ..., an ∈ F.

Let T : V → Fn be the mapping given by

T(e) = T(a1e1 + a2e2 + ... + an en) = (a1, a2, ..., an) for every e ∈ V.

By the uniqueness of the representation of e in the form e = a1e1 + ... + an en, the mapping T
is well defined.

Now let e, e′ ∈ V and a ∈ F, with e = a1e1 + ... + an en and e′ = b1e1 + ... + bn en, where ai, bi ∈ F,
i = 1, 2, ..., n.

Then T(e + e′) = T{(a1 + b1)e1 + (a2 + b2)e2 + ... + (an + bn)en}

= (a1 + b1, a2 + b2, ..., an + bn) = (a1, a2, ..., an) + (b1, b2, ..., bn) = T(e) + T(e′).

Also T(ae) = T{a(a1e1 + a2e2 + ... + an en)} = T{(aa1)e1 + (aa2)e2 + ... + (aan)en}

= (aa1, aa2, ..., aan) = a(a1, a2, ..., an) = aT(e).

We also find that T(e) = T(e′) ⇒ T(a1e1 + ... + an en) = T(b1e1 + ... + bn en)

⇒ (a1, a2, ..., an) = (b1, b2, ..., bn) ⇒ ai = bi for each i

⇒ e = e′ ⇒ T is one-one.

Also, for any (a1, a2, ..., an) ∈ Fn, we have a1e1 + a2e2 + ... + an en ∈ V

and T(a1e1 + ... + an en) = (a1, a2, ..., an) ⇒ T is onto.

Thus all the conditions are satisfied for T to be an isomorphism,

and V(F) is isomorphic to Fn.
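The isomorphism T of the proof can be made concrete. With V = P2(R), the polynomials of degree at most 2, and the basis {1, x, x²} (our illustrative choice, not the text's), T sends a polynomial to its coefficient tuple in R³, and the linearity conditions are easy to test numerically:

```python
import numpy as np

# T maps a1*1 + a2*x + a3*x^2 to the coordinate tuple (a1, a2, a3) in R^3.
# Relative to a fixed basis, T is simply the identity on coefficient tuples.
def T(coeffs):
    return np.asarray(coeffs, dtype=float)

p = np.array([2.0, -1.0, 3.0])  # represents 2 - x + 3x^2
q = np.array([1.0,  4.0, 0.0])  # represents 1 + 4x

# T(e + e') = T(e) + T(e') and T(a e) = a T(e)
print(np.allclose(T(p + q), T(p) + T(q)))  # True
print(np.allclose(T(5 * p), 5 * T(p)))     # True
```
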

Theorem (4.3) ii :- Two finite-dimensional vector spaces over the same field F are
isomorphic if and only if they have the same dimension.

Proof :- Let E and E′ be any two isomorphic vector spaces over the field F,
and let T : E → E′ be the isomorphism.

Let dim E = n and let { e1, e2, ....., en } be a basis of E.

We claim that { T(e1), T(e2), ......., T(en) } is a basis of E′.

Let a1 T(e1) + a2 T(e2) + ....... + an T(en) = 0, with ai ∈ F.

⇒ T(a1e1 + ....... + anen) = 0 = T(0) ⇒ a1e1 + ....... + anen = 0 [ since T is one-one ]

⇒ ai = 0 ∀ i, as e1, e2, ....., en are linearly independent

⇒ T(e1), T(e2), ......., T(en) are linearly independent.

Again, if g ∈ E′ is any element, then as T is onto there exists some e in E such that

T(e) = g.

Now e ∈ E ⇒ e = a1e1 + a2e2 + ..... + anen, ai ∈ F for each i = 1, 2, ....., n

⇒ g = T(e) = T(a1e1 + a2e2 + ..... + anen) = a1 T(e1) + ....... + an T(en)

⇒ g is a linear combination of T(e1), T(e2), ......., T(en).

Thus T(e1), T(e2), ......., T(en) span E′,

and hence T(e1), T(e2), ......., T(en) form a basis for E′.

This makes it clear that dim E′ = n.

Thus dim E = dim E′ = n provided E and E′ are isomorphic.
Conversely :- Let dim E = dim E′ = n.

To prove that E and E′ are isomorphic.

For this, let { e1, e2, ....., en } be a basis of E and { g1, g2, ....., gn } be a basis of E′.

Let T : E → E′ be the mapping given by

T(e) = T(a1e1 + a2e2 + ..... + anen) = a1g1 + a2g2 + ..... + angn.

Clearly T is well defined.

Also, for any e = a1e1 + ..... + anen and e′ = b1e1 + ..... + bnen in E we see that:

T(e + e′) = T{ (a1e1 + a2e2 + ..... + anen) + (b1e1 + b2e2 + ..... + bnen) }

= T{ (a1 + b1)e1 + (a2 + b2)e2 + ......... + (an + bn)en }

= (a1 + b1)g1 + (a2 + b2)g2 + ......... + (an + bn)gn

= (a1g1 + a2g2 + ..... + angn) + (b1g1 + b2g2 + ..... + bngn) = T(e) + T(e′).

That is, T(e + e′) = T(e) + T(e′).

Also T(ae) = T{ a(a1e1 + a2e2 + ..... + anen) } = T( Σ (a ai)ei ) = Σ (a ai)gi

= a Σ ai gi = a T(e).

We also see that:

If e ∈ ker T then T(e) = 0 ⇒ T( Σ ai ei ) = 0 ⇒ Σ ai gi = 0

⇒ ai = 0 ∀ i, since g1, g2, ....., gn are linearly independent

⇒ e = 0

⇒ ker T = {0}

⇒ T is one-one.

Also T is clearly onto, for each gi = T(ei) lies in the range of T and the gi span E′.

Thus T is an isomorphism.
Theorem (4.3) iii :- The complex plane is isomorphic to the Euclidean plane.

Proof :- Let V = C, the vector space of all complex numbers over the field R of all real
numbers.

Also let V′ = R², the vector space of all ordered pairs of reals over the field R.

Let a mapping T : V → V′ be defined by T(a + ib) = (a, b).

We see that:

T{ (a + ib) + (c + id) } = T{ (a + c) + i(b + d) } = (a + c, b + d) = (a, b) + (c, d)

= T(a + ib) + T(c + id).

Also T{ α(a + ib) } = T(αa + iαb) = (αa, αb) = α(a, b) = α T(a + ib).

Next, let T(a + ib) = T(c + id).

Then T{ (a + ib) - (c + id) } = 0; but T{ (a - c) + i(b - d) } = (a - c, b - d).

Thus (a - c, b - d) = (0, 0) ⇒ a - c = 0 and b - d = 0

⇒ a = c, b = d ⇒ a + ib = c + id

⇒ T is one-one.

Also, obviously T is onto.

Thus T is an isomorphism of C onto R².
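The map T(a + ib) = (a, b) above can be checked numerically; the short sketch below is only an illustration of the two conditions, not part of the proof:

```python
# Numeric illustration of T(a + ib) = (a, b) respecting addition and
# real scaling, matching the argument above.
def T(z):
    return (z.real, z.imag)

z1, z2 = 3 + 4j, -1 + 2j

# additivity: T(z1 + z2) = T(z1) + T(z2), componentwise
s = T(z1 + z2)
assert s == (T(z1)[0] + T(z2)[0], T(z1)[1] + T(z2)[1])

# homogeneity over R: T(alpha * z1) = alpha * T(z1)
alpha = 2.5
assert T(alpha * z1) == (alpha * T(z1)[0], alpha * T(z1)[1])
```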
5. Linear Transformation, Linear Functional

5.1. Introduction: It is a kind of mapping (or transformation) which we shall study on
linear spaces.

5.2. Definitions:

Linear Transformation: Let V and V′ be any two vector spaces over the field F. Then a
mapping T : V → V′ is called a linear transformation (L.T.) if the following conditions are
satisfied:

(i) T(u + v) = T(u) + T(v) for all u, v ∈ V

(ii) T(αu) = α T(u) for all α ∈ F, u ∈ V

Conditions (i) and (ii) can be expressed together as

T(αu + βv) = α T(u) + β T(v), for all u, v ∈ V and α, β ∈ F.

A linear transformation is also called a linear mapping.
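For a quick numeric feel for this definition, the sketch below (numpy assumed, and the matrix A is an arbitrary made-up example) checks the combined condition on T(v) = A v:

```python
import numpy as np

# Every matrix A gives a linear transformation T(v) = A v; this sketch
# checks the combined condition T(au + bv) = a T(u) + b T(v) numerically.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])      # an arbitrary example matrix, R^3 -> R^2

def T(v):
    return A @ v

u = np.array([1.0, 0.0, 2.0])
v = np.array([-1.0, 4.0, 0.5])
a, b = 3.0, -2.0
assert np.allclose(T(a * u + b * v), a * T(u) + b * T(v))
```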
Kernel of a Linear Transformation (or Null Space): Let T be a linear transformation of
a vector space V into a vector space V′. Then the kernel of T is denoted by ker T and
defined as:

ker T = { x ∈ V : T(x) = 0 }.

ker T is also called the null space of T.

Kernel of the Identity Transformation: For I : V → V defined by I(x) = x, the kernel is
the set

{ x ∈ V : I(x) = x = 0 } = {0}.

Kernel of the Zero Transformation: For 0 : V → V it is { x ∈ V : 0(x) = 0 } = V.

Nullity of a Linear Transformation: Let T be a linear transformation of V into V′; then the
dimension of ker T is called the nullity of T.
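For T(v) = A v, the kernel and the nullity can be computed symbolically; the sketch below uses sympy (assumed available) on a made-up rank-1 matrix:

```python
import sympy as sp

# Sketch: ker T and the nullity of T(v) = A v, via sympy's nullspace.
# The matrix A is a made-up example with rank 1.
A = sp.Matrix([[1, 2, 3],
               [2, 4, 6]])           # second row = 2 * first row

null_basis = A.nullspace()           # a basis of ker T
nullity = len(null_basis)
assert nullity == 2                  # dim ker T = 3 - rank = 3 - 1

# every basis vector of the kernel is mapped to the zero vector
assert all(A * v == sp.zeros(2, 1) for v in null_basis)
```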
Product of two Linear Transformations: For any two linear transformations T1, T2 on V
we define (T1 T2)(x) = T1(T2(x)), x ∈ V.

We call T1 T2 the product of T1 and T2; similarly T2 T1 is called the product of T2 and T1.

The Range (or Rank Space) of a Linear Transformation: Let T be a linear transformation
of a vector space V(F) into a vector space V′(F).

Then the set T(V) = { T(x) : x ∈ V } is called the range (or rank space) of T.

Also, dim T(V) is called the rank of the linear transformation T.

Linear Operator: A linear transformation from a vector space V(F) into V itself is called a
linear operator.

Non-singular Linear Transformation: Let T be a linear transformation on a vector space
V. Then T is called invertible or non-singular if T is one-one and onto; otherwise T is called
singular.

If T is non-singular then T⁻¹ exists, and T(x) = y iff x = T⁻¹(y),

with T T⁻¹ = T⁻¹ T = I (the identity operator on V).

Linear Functional (2016): Let V be a vector space over a field F; then a map T : V → F is
called a linear functional (L.F.) iff

(i) T(u + v) = T(u) + T(v) and

(ii) T(αu) = α T(u), for all α ∈ F and u, v ∈ V.

A linear functional is also called a linear form.

We should remember that F can be regarded as a vector space over F itself.

5.3. Theorems:
Theorem (5.3) i :- Let T be a linear transformation from a vector space V into a vector
space V′. Then T preserves the origin and negatives.

Observation :- T(0) = T(0 · 0) = 0 · T(0) = 0,

and T(-x) = T{ (-1) · x } = (-1) T(x) = -T(x) for all x ∈ V.

Note :- An isomorphism preserves the origin and negatives,

for an isomorphism is a one-one, onto linear transformation.
Theorem (5.3) ii :- Let V and V be any two vector spaces over the same field F.
If T : V  V is linear transformation then,

Ker T is a linear sub-space of V.

Proof :- Let x, y  Ker T then T(x) = 0, T(y) = 0.


Also let ,   F then v is a vector space  x + y  V

Also T is a linear transformation from V to V then T(x + y) = T(x) + T(y)

=  .0 + .0 = 0

This implies that x + y  Ker T for x, y  Ker T, ,   F

Thus Ker T is a linear sub-space of the linear space V

Theorem (5.3) iii :- Let T be a linear transformation of a vector space V(F) into V′(F),
and let T(V) = { T(x) : x ∈ V }.

Prove that T(V) is a subspace of V′.

Proof :- Clearly 0 ∈ V and T(0) = 0 ∈ V′, so T(0) ∈ T(V) and T(V) ≠ ∅.

Thus T(V) is a non-empty subset of V′.

For x′, y′ ∈ T(V) there exist x, y ∈ V such that T(x) = x′, T(y) = y′.

Also, for any α, β ∈ F we have αx + βy ∈ V.

But T is a linear transformation, as a result of which

T(αx + βy) = α T(x) + β T(y) = αx′ + βy′ ∈ T(V).

Thus T(V) is a subspace of V′.
Theorem (5.3) iv :- Let T be a linear transformation of a vector space V(F) into V′(F).
Then the map -T, given by (-T)(x) = -T(x) for every x in V, is also a linear transformation
from V into V′.

Proof :- Since T : V → V′ is a linear transformation, for x in V we have T(x) ∈ V′, and V′ is a
vector space, so -T(x) ∈ V′ for every x ∈ V.

Again, for x, y ∈ V and α, β ∈ F, clearly αx + βy ∈ V.

Also (-T)(αx + βy) = -T(αx + βy) = -[α T(x) + β T(y)] = -α T(x) - β T(y)

= α {-T(x)} + β {-T(y)}.

Thus -T is also a linear transformation from V into V′.
Theorem (5.3) v :- Let T : V  V be a linear transformation of V(F) then prove that T is


one-one if and only if T is non singular

Proof :- First of all let T is non singular


To prove that T is one-one.

For, if K is the null space of T then K = {0}.

Also Let for x, y in V, T(x) = T(y). ---------------(1)

Also T (x – y) = T(x) - T(y) = T(x) - T(x) [ by (1)]

= 0 = T(0)

Hence x - y = 0 and (x – y)  K

x=y

Thus T(x) = T(y)  x = y = T is one-one

Conversely:- Let T is one-one


To prove that : T is non singular

Sufficient to show that the null space of T = K = {0}

[24]
For, let x  K be arbitrary, then T(x) = 0 but T(0) = 0

Thus T(x) = T(0) but T is one-one so x = 0  K = {0}

Thus T is non singular

Theorem (5.3) vi :- Let T1, T2 be any two linear transformations of V(F) into V′(F). Then
(i) T1 + T2 and (ii) λ T1

are both linear transformations.

Proof :- We define T1 + T2 and λ T1 by

(T1 + T2)(x) = T1(x) + T2(x) and (λ T1)(x) = λ T1(x), for x ∈ V, λ ∈ F.

(i) Then (T1 + T2)(αx + βy) = T1(αx + βy) + T2(αx + βy) = α T1(x) + β T1(y) + α T2(x) + β T2(y)

= α T1(x) + α T2(x) + β T1(y) + β T2(y)

= α (T1(x) + T2(x)) + β (T1(y) + T2(y))

= α (T1 + T2)(x) + β (T1 + T2)(y)

⇒ T1 + T2 is a linear transformation of V into V′.

(ii) (λ T1)(ax + by) = λ T1(ax + by) = λ { a T1(x) + b T1(y) }

= λ a T1(x) + λ b T1(y) = a { λ T1(x) } + b { λ T1(y) }.

Thus λ T1 is also a linear transformation.
Theorem (5.3) vii (2019) :- If f is a linear functional on a vector space V(F), then
(i) f(0) = 0
(ii) f(-x) = -f(x)

Proof :- Since f is a linear functional, for any x in V, f(x) ∈ F.

(i) Since f(x) + 0 = f(x) for 0 ∈ F
= f(x + 0)
= f(x) + f(0),

that is, f(x) + 0 = f(x) + f(0) ⇒ f(0) = 0.

(ii) Since f(x) + f(-x) = f(x + (-x)) = f(0) = 0 [ by (i) ]

⇒ f(-x) = -f(x).
Theorem (5.3) viii (2016) :- Prove that the function f on Rⁿ defined by
f(x) = f(x1, x2, ....., xn) = a1x1 + a2x2 + ..... + anxn is a linear functional on Rⁿ,

where a1, a2, ....., an are fixed scalars of R.

Proof :- Let f : Rⁿ → R be given by f(x) = f(x1, x2, ....., xn) = a1x1 + a2x2 + ..... + anxn.

Now, f{ (x1, x2, ....., xn) + (y1, y2, ....., yn) } = f(x1 + y1, x2 + y2, ....., xn + yn)

= a1(x1 + y1) + a2(x2 + y2) + ...... + an(xn + yn)

= (a1x1 + a2x2 + ..... + anxn) + (a1y1 + a2y2 + ..... + anyn) = f(x1, x2, ....., xn) + f(y1, y2, ....., yn).

Also, f{ α(x1, x2, ....., xn) } = f(αx1, αx2, ....., αxn) = α(a1x1 + a2x2 + ..... + anxn)

= α f(x1, x2, ....., xn) for all α ∈ R.

Thus f is a linear functional on Rⁿ.
Theorem (5.3) ix (2018):- let T : U  V be a linear transformation then prove that


dim. ker (T) + dim. rang (T) = dim. domain (T)

Proof :- let { 1, 2,....., k }be a basis of ker (T) i.e, the null space of T.
Let dim. U = n then k+1, k+2 ,....., n  U such that 1, 2,....., n forms a basis of U.

Thus dim. ker (T) = K

Consider {T (k+1) + T (k+2) + ....... + T (n) } ------------ (1)

Then i in F

We have ak+1 T (k+1) + ak+2 T (k+2) + ....... + an T (n) = 0

Then ak+1 k+1 + ak+2 k+2 + ....... + an n  ker (T)

Also { 1, 2,....., k } is a basis of ker (T) so we can get scalars b1, b2,....., bk

ak+1 k+1 + ak+2 k+2 + ....... + an n = b1 1 + b2 2 + ....... bk k

or, b1 1 + b2 2 + ....... bk k - [ak+1 k+1 + ak+2 k+2 + ....... + an n] = 0

but 1, 2,....., n are linearly independent

 b1= b2 = bk = ak+1 = ......... an = 0

Thus (1) is linearly independent ------------ (2)

[26]
Again, let T ()  rang (T) for  U

Clearly  = a1 1 + a2 2 + ....... + an n .

Also T () = T (a1 1) + T (a2 2) + ....... + T (an n) and T is a linear transformation

= a1 T (1) + a2 T (2) + ....... + ak T (k) + ak+1 T (k+1) + ak+2 T (k+2) + ....... + an T (n)

Since T (i) = 0, 1  i  k.

Thus T () = ak+1 T (k+1) + ak+2 T (k+2) + ....... + an T (n)

It means T (k+1), ....... , T (n) spans rang (T) ----------- (3)

Thus from (2) and (3)

[T (k+1), ....... , T (n)] forms a basis of spans rang (T)

Also dim. rang (T) = n – K = dim. U - dim. ker (T)

Or, dim. ker (T) + dim. rang (T) = dim. U = dim. domain (T).

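The rank-nullity theorem just proved is easy to verify numerically for a matrix map; the sketch below uses sympy (assumed available) on a made-up matrix whose third row is the sum of the first two:

```python
import sympy as sp

# Checking dim ker(T) + dim range(T) = dim domain(T) for T(v) = A v on R^4.
# A is a hypothetical example; its third row equals row1 + row2, so rank 2.
A = sp.Matrix([[1, 0, 2, -1],
               [0, 1, 1,  1],
               [1, 1, 3,  0]])

rank = A.rank()                      # dim range(T)
nullity = len(A.nullspace())         # dim ker(T)
assert rank == 2 and nullity == 2
assert rank + nullity == A.cols      # 2 + 2 = 4 = dim domain
```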
6. Exercises :

Example 1 :- A linear transformation T of R³ into itself is defined by

T(e1) = e1 + e2 + e3; T(e2) = e2 + e3 and T(e3) = e2 - e3, where e1, e2, e3 are the unit
vectors of R³. Then:

(i) Determine the transform of (2, -1, 3).

(ii) Describe explicitly the linear transformation T.
Solution :- Since e1, e2, e3 are the unit vectors of R³,

e1 = (1, 0, 0),

e2 = (0, 1, 0),

e3 = (0, 0, 1).
∴ T(e1) = e1 + e2 + e3 = (1, 0, 0) + (0, 1, 0) + (0, 0, 1)

⇒ T(e1) = (1 + 0 + 0, 0 + 1 + 0, 0 + 0 + 1) = (1, 1, 1).

Similarly T(e2) = (0, 1, 1)

and T(e3) = (0, 1, -1).

We know that { e1, e2, e3 } forms a basis of R³.
Thus every vector of R³ can be uniquely expressed as a linear combination of e1, e2, e3.

Clearly (2, -1, 3) ∈ R³.

Then (2, -1, 3) = 2(1, 0, 0) + (-1)(0, 1, 0) + 3(0, 0, 1)

= 2e1 + (-1)e2 + 3e3.

Thus T(2, -1, 3) = T{ 2e1 + (-1)e2 + 3e3 },

or T(2, -1, 3) = 2 T(e1) + (-1) T(e2) + 3 T(e3)

= 2(1, 1, 1) + (-1)(0, 1, 1) + 3(0, 1, -1),

i.e. T(2, -1, 3) = (2, 4, -2).

Hence the transform of (2, -1, 3) under T is (2, 4, -2).
(ii) Let (x, y, z) be any vector in R³; then, as we know, it can be expressed uniquely as a linear
combination of e1, e2, e3.

Thus (x, y, z) = x(1, 0, 0) + y(0, 1, 0) + z(0, 0, 1)

= xe1 + ye2 + ze3

∴ T(x, y, z) = T(xe1 + ye2 + ze3)

= x T(e1) + y T(e2) + z T(e3)

= x(1, 1, 1) + y(0, 1, 1) + z(0, 1, -1) = (x, x + y + z, x + y - z),

which is the required linear transformation, described explicitly (or completely).
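Both parts of Example 1 can be checked with numpy (assumed available) by packing T(e1), T(e2), T(e3) as the columns of a matrix:

```python
import numpy as np

# Example 1 as a matrix: the columns of M are T(e1), T(e2), T(e3).
M = np.array([[1, 0,  0],
              [1, 1,  1],
              [1, 1, -1]], dtype=float)

# part (i): the transform of (2, -1, 3)
assert np.allclose(M @ np.array([2.0, -1.0, 3.0]), [2.0, 4.0, -2.0])

# part (ii): the explicit formula T(x, y, z) = (x, x + y + z, x + y - z)
x, y, z = 5.0, -2.0, 1.0
assert np.allclose(M @ np.array([x, y, z]), [x, x + y + z, x + y - z])
```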
Example 2 :- If T is a non-singular linear transformation on a vector space V(F), then
T⁻¹ is also a linear transformation.

Example 3 :- If T is a linear transformation on a finite-dimensional vector space V(F), then

T is one-one if and only if T is onto.

Example 4 :- Prove that a non-singular transformation T on a vector space V(F) is onto.

Example 5 :- Verify whether, if a linear transformation T on a vector space V(F) is onto,
then T is non-singular.

Example 6 :- Let (i) U, V be any two vector spaces over the same field F,
and (ii) L(U, V) be the set of all linear transformations from U to V.

Verify whether L(U, V) forms a vector space over F under addition and scalar
multiplication of linear transformations suitably defined.
7. Dual Space And Dual Basis :-

7.1. In this section our aim is to make a detailed study of the vector space of linear
functionals.

7.2. Definitions:

Dual space of a vector space V(F) :- Let V(F) be a vector space over the field F.
Clearly F can be considered as a vector space over F itself.

Then the vector space L(V, F) of all linear transformations of V into F is called the dual space
of V (or the algebraic conjugate of V).

We usually use the symbol V* for the dual space of V.

Every element of V* is called a linear functional on V.

Clearly V* = L(V, F). V* is also called simply the conjugate space of V.

Or, V* = { T : T : V → F, T linear }.

Second dual space :- Like the vector space V, its dual space V* also has a dual space,
denoted by V**, which is a vector space and is called the second dual space of V.

Dual basis :- Let { v1, v2, ....., vn } be a basis of the vector space V(F).
Again let T1, T2, ....., Tn ∈ V* be the linear functionals defined by

Ti(vj) = δij = 1 if i = j, and 0 if i ≠ j.

Then the basis { T1, T2, ....., Tn } of V* is called the dual basis.

7.3. Theorems:
Theorem (7.3) i :- Let V(F) be a finite-dimensional vector space.

Then prove that dim V* = dim V.

Proof :- Let V(F) be a finite-dimensional vector space such that

dim V = n,

and let V* be the dual space of V.

To prove that dim V* = n.

V* is the set of all linear functionals on V(F),

i.e. V* = L(V, F).

We know by a theorem that if L(U, V) is the set of all linear transformations from U(F)
into V(F), then

dim L(U, V) = (dim U)(dim V).

Thus dim V* = dim L(V, F) = (dim V)(dim F) = n · 1 = n = dim V.

Thus dim V* = dim V.
Theorem (7.3) ii :- Let { v1, v2, ....., vn } be a basis of the vector space V(F) over the field F.
Also let T1, T2, ....., Tn ∈ V* be the linear functionals defined by

Ti(vj) = 1 if i = j, and 0 if i ≠ j. ---------------- (1)

Then { T1, T2, ....., Tn } is a basis of V*,

i.e. { T1, T2, ....., Tn } is the dual basis of the basis { v1, v2, ....., vn }.

Proof :- First of all we show that the set { T1, T2, ....., Tn } generates V*.
For this, let T ∈ V* with T(vi) = ti for 1 ≤ i ≤ n, and put t1 T1 + t2 T2 + ....... + tn Tn = P. ----------(2)

Then P(v1) = { t1 T1 + t2 T2 + ....... + tn Tn }(v1) = t1. [By (1)]

Likewise for i = 2, 3, ....., n, P(vi) = { t1 T1 + t2 T2 + ....... + tn Tn }(vi) = ti. [By (1)]

Thus P(vi) = ti, 1 ≤ i ≤ n.

But T(vi) = ti, 1 ≤ i ≤ n.

It means P = T, since P and T agree on every basis vector vi.

But then by (2), T is generated by the set { T1, T2, ....., Tn }.

Thus the set { T1, T2, ....., Tn } generates V*. ----------(3)

We now show that the set { T1, T2, ....., Tn } is linearly independent.

For, if a1 T1 + a2 T2 + ....... + an Tn = 0, then (a1 T1 + a2 T2 + ....... + an Tn)(vr) = 0(vr).

On simplification we get ar = 0, 1 ≤ r ≤ n.

Thus a1 T1 + a2 T2 + ....... + an Tn = 0 ⇒ a1 = 0, a2 = 0, ........, an = 0.

Thus the set { T1, T2, ....., Tn } is linearly independent. ------------ (4)

Thus from (3) and (4),

the set { T1, T2, ....., Tn } forms a basis of V*.
Theorem (7.3) iii (2016) :- Let (i) V(F) be a finite-dimensional vector space over the field
F,

(ii) B be a basis of V, and (iii) B′ be the dual basis of B in V*.

Then show that B″ = (B′)′ = B.

Observation :- Let B = { x1, x2, ....., xn } be a basis of V,

let B′ = { f1, f2, ....., fn } be the dual basis of V*,

and let B″ = { e1, e2, ....., en } be the dual basis of V**.

By hypothesis V(F) is a finite-dimensional vector space.

Let dim V = n.

But we know that dim V = dim V* = dim V** = n.

We also know that dim V = the number of elements in its basis B.

Thus clearly each of the bases B, B′ and B″ has the same number of elements.

Since fi(xj) = δij ---------(1)

and ei(fj) = δij ---------(2)

now for x ∈ V we get ex in V** such that ex(f) = f(x), f ∈ V*. ---------(3)

Then exi(fj) = fj(xi) = δij = ei(fj) [ by (1), (2) and (3) ]

for j = 1, 2, ....., n

⇒ exi = ei.

Thus, under the natural isomorphism x ↔ ex, ex is identified with x, so that

ei = exi = xi ⇒ B″ = B.
Solved examples :-

Example 1 :- Let V3(R) be a vector space and B = { (1, -1, 3), (0, 1, -1), (0, 3, -2) } be a
basis of V3(R). Find its dual basis.

Solution :- Let B = { x1, x2, x3 } be the given basis of V and let its dual basis be

B′ = { f1, f2, f3 }, where

f1(x, y, z) = a1x + a2y + a3z, with a1, a2, a3 ∈ R,

f2(x, y, z) = b1x + b2y + b3z,

f3(x, y, z) = c1x + c2y + c3z.

Now f1(x1) = f1(1, -1, 3) = a1 - a2 + 3a3 = 1.

Similarly f1(x2) = f1(0, 1, -1) = 0 + a2 - a3 = 0,

f1(x3) = f1(0, 3, -2) = 0 + 3a2 - 2a3 = 0.

By solving we get a1 = 1, a2 = 0, a3 = 0.

Thus f1(x, y, z) = 1 · x + 0 · y + 0 · z = x. -------- (1)

Similarly f2(x1) = 0 ⇒ b1 - b2 + 3b3 = 0,

f2(x2) = 1 ⇒ 0 + b2 - b3 = 1,

f2(x3) = 0 ⇒ 0 + 3b2 - 2b3 = 0.

On solving we get b1 = 7, b2 = -2, b3 = -3.

Thus f2(x, y, z) = 7x - 2y - 3z. ------------(2)

Finally f3(x1) = 0, f3(x2) = 0, f3(x3) = 1.

On solving we get c1 = -2, c2 = 1, c3 = 1.

Thus f3(x, y, z) = -2x + y + z. --------------(3)

Thus the dual basis is B′ = { f1, f2, f3 }

= { x, 7x - 2y - 3z, -2x + y + z }.
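The three linear systems solved above can be handled in one step with numpy (assumed available): if the columns of a matrix V are the basis vectors, then the rows of V⁻¹ are the coefficient vectors of the dual functionals, since (V⁻¹V)ij = fi(xj) = δij:

```python
import numpy as np

# Dual basis of Example 1 by matrix inversion.
V = np.array([[ 1,  0,  0],
              [-1,  1,  3],
              [ 3, -1, -2]], dtype=float)   # columns: (1,-1,3), (0,1,-1), (0,3,-2)

Fmat = np.linalg.inv(V)                     # row i holds the coefficients of fi
assert np.allclose(Fmat[0], [1, 0, 0])      # f1(x, y, z) = x
assert np.allclose(Fmat[1], [7, -2, -3])    # f2(x, y, z) = 7x - 2y - 3z
assert np.allclose(Fmat[2], [-2, 1, 1])     # f3(x, y, z) = -2x + y + z
```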
Example 2 :- If B = { e1, e2, e3 } is the standard basis of R³(R), then find the dual basis B′.

Solution :- Since e1, e2, e3 are the unit vectors in R³, we have

e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1).

For a moment let B = { e1, e2, e3 } = { x1, x2, x3 } and B′ = { f1, f2, f3 }.

Let f1(x, y, z) = a1x + a2y + a3z,

f2(x, y, z) = b1x + b2y + b3z,

f3(x, y, z) = c1x + c2y + c3z,

where ai, bi, ci ∈ R, i = 1, 2, 3.

Now f1(x1) = 1 = f1(e1) = f1(1, 0, 0) = a1,

f1(x2) = 0 = f1(e2) = f1(0, 1, 0) = a2,

f1(x3) = 0 = f1(e3) = f1(0, 0, 1) = a3.

Hence f1(x, y, z) = a1x + a2y + a3z = x.

Also f2(e1) = 0, f2(e2) = 1, f2(e3) = 0,

and f3(e1) = 0, f3(e2) = 0, f3(e3) = 1.

Thus f2(x, y, z) = y,

f3(x, y, z) = z.

Thus B′ = { x, y, z }.
Example 3:- Let f : R3R, g : R3R be two linear functions given by


f (x, y, z) = 2x – y + z and

g (x, y, z) = 3x – y + 2z, then

Find:- (1) 3f (2) 5g (3) 4f – 5g

Solution:- (1) Since 3f = 3(2x – y + z) = 6x – 3y + 3z


(2) Since 5g = 5(3x – y + 2z) = 15x – 5y + 10z

(3) 4f – 5g = -7x + y – 6z

Example 4:- If B = { (-1, 1, 1), (1, -1, 1), (1, 1, -1) } is a basis of V3(R), then find the dual
basis B of B.

Solution:- Do as above.

8. Projection

8.1 Introduction: Projection is in fact a particularly important type of linear transformation
on a linear space over some field.

8.2 Definitions:

Idempotent: A linear transformation T of a vector space V(F) into V is called
idempotent if T² = T. Also, T is called nilpotent (of index at most 2) if T² = 0.

Invariant: Let T be a linear operator on a vector space V(F) and let W be a subspace of V.
Then we say that W is invariant under T if T maps W into W itself; W is also called
T-invariant.

Thus x ∈ W ⇒ T(x) ∈ W, i.e. T(W) ⊆ W.

Projection: Let L be the direct sum of the subspaces M and N, so that L = M ⊕ N. Then
each vector z in L can be written uniquely in the form z = x + y with x in M and y in N. As x
is uniquely determined by z, we can define a mapping E of L into itself by E(z) = x. Then E
is called the projection on M along N.
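The definition above can be turned into a small computation; the sketch below (numpy assumed, with M and N hypothetical one-dimensional subspaces of R²) decomposes z = x + y and keeps the M-component:

```python
import numpy as np

# Sketch of the projection E on M along N in R^2, for the made-up choice
# M = span{(1, 0)} and N = span{(1, 1)}.
m = np.array([1.0, 0.0])
n = np.array([1.0, 1.0])
B = np.column_stack([m, n])          # basis of R^2 adapted to M ⊕ N

def E(z):
    a, b = np.linalg.solve(B, z)     # unique decomposition z = a m + b n
    return a * m                     # the M-component of z

z = np.array([3.0, 2.0])             # z = 1*(1, 0) + 2*(1, 1)
assert np.allclose(E(z), [1.0, 0.0])
assert np.allclose(E(E(z)), E(z))    # applying E twice changes nothing
```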
8.3 Theorems:

Theorem (8.3) i :- A projection E is a linear transformation.

Proof :- Let l1, l2 ∈ L be such that l1 = m1 + n1, l2 = m2 + n2, ---------------(1)

where m1, m2 ∈ M and n1, n2 ∈ N.

Then α l1 + β l2 = α(m1 + n1) + β(m2 + n2) = (α m1 + β m2) + (α n1 + β n2).

Since M, N are subspaces, clearly α m1 + β m2 ∈ M and α n1 + β n2 ∈ N, and E is the
projection on M along N.

Hence E(α l1 + β l2) = E[ (α m1 + β m2) + (α n1 + β n2) ]

= α m1 + β m2 -----------(2) [Since if z = x + y with x ∈ M, y ∈ N, then E(z) = x]

Also, from (1), E(l1) = m1, E(l2) = m2. (By the definition of projection)

Thus from (2) we have

E(α l1 + β l2) = α m1 + β m2 = α E(l1) + β E(l2).

Thus E is a linear transformation of L into L itself.

Corollary 1: E(m) = m ∀ m ∈ M and E(n) = 0 ∀ n ∈ N.

For m ∈ M ⇒ E(m) = E(m + 0) = m,

and n ∈ N ⇒ E(n) = E(0 + n) = 0.
Theorem (8.3-2018, 2019) ii :- A linear transformation E is a projection on some
subspace if and only if it is idempotent, that is, E² = E.

Proof :- Let a vector space L be the direct sum of its two subspaces M and N.
Then L = M ⊕ N, and every l in L can be uniquely expressed as

l = m + n for m ∈ M, n ∈ N.

Let E be the projection on M along N; then E(l) = E(m + n) = m.

Also E(m) = m.

To prove that E is idempotent, i.e. E² = E:

E²(l) = (E E)(l) = E[E(l)] = E(m) [Since E(l) = m ⇒ E[E(l)] = E(m)]

= m = E(l) [ by the corollary ]

⇒ E²(l) = E(l) ⇒ E² = E ⇒ E is idempotent.
Conversely, let E be a linear transformation on L with E² = E, i.e. E is idempotent.

To prove that E is a projection on some subspace.

For this, let U = { l ∈ L : E(l) = l } ---------------(1)

and W = { l ∈ L : E(l) = 0 }. -------------------(2)

Then we observe that:

Since 0 ∈ L and E(0) = 0, we have 0 ∈ U and 0 ∈ W.

Now let a, b ∈ F (the field), u1, u2 ∈ U, w1, w2 ∈ W.

Then E(u1) = u1, E(u2) = u2, E(w1) = 0 = E(w2).

Also, by assumption, E is linear, and hence

E(a u1 + b u2) = a E(u1) + b E(u2) = a u1 + b u2.

Thus a u1 + b u2 ∈ U for every u1, u2 ∈ U.

Also 0 ∈ U, as a result of which U is a subspace of L(F).

Also E(a w1 + b w2) = a E(w1) + b E(w2)

= a · 0 + b · 0 = 0 [since w1, w2 ∈ W]

Thus E(a w1 + b w2) = 0 ⇒ a w1 + b w2 ∈ W.

Also 0 ∈ W, so W is a subspace of L(F).

Thus U and W are subspaces of L(F).

Further:

Let l ∈ L; then l = E(l) + (I - E)(l) [Since I(l) = l],

so l = u + w with u = E(l) and w = (I - E)(l).

Also E(u) = E(E(l)) = E²(l) = E(l) = u (since E² = E by assumption),

and E(w) = E(I - E)(l) = (E - E²)(l) = (E - E)(l) = 0.

Thus E(u) = u and E(w) = 0, so u ∈ U and w ∈ W.

Thus from the above we find that

l = u + w, i.e. l is the sum of an element of U and an element of W, so L = U + W.

Also, let x ∈ U ∩ W ⇒ x ∈ U and x ∈ W ⇒ E(x) = x and E(x) = 0 ⇒ x = E(x) = 0 ⇒ x = 0.

Thus U ∩ W = {0}, and with L = U + W this gives L = U ⊕ W.

Finally E(l) = E(u + w) = E(u) + E(w) = u + 0 = u.

Thus E satisfies all the conditions to be the projection on U along W.
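A numeric instance of this theorem, with numpy assumed and a made-up idempotent matrix: every vector l splits as l = u + w, where E fixes u and annihilates w.

```python
import numpy as np

# E is idempotent (E @ E = E), so it acts as a projection on R^2.
E = np.array([[1.0, 1.0],
              [0.0, 0.0]])
assert np.allclose(E @ E, E)         # E² = E

l = np.array([4.0, 1.0])
u = E @ l                            # the U-part: E fixes it
w = l - u                            # the W-part: E kills it
assert np.allclose(E @ u, u)
assert np.allclose(E @ w, 0)
assert np.allclose(u + w, l)
```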
Theorem (8.3) iii :- If V = W1 ⊕ W2 ⊕ ........ ⊕ Wn, then there exist n linear operators E1,
E2, ....., En such that

(i) each Ei is a projection, i.e. Ei² = Ei for every i;

(ii) Ei Ej = 0 if i ≠ j;
(iii) E1 + E2 + ..... + En = I;
(iv) Range(Ei) = Wi.

Proof :-

Part (i) : To prove Ei is a projection, i.e. Ei² = Ei for every i.

Let V = W1 ⊕ W2 ⊕ ........ ⊕ Wn; then any α, β ∈ V can be uniquely expressed as

α = α1 + α2 + ..... + αn, β = β1 + β2 + ..... + βn, with αi, βi ∈ Wi for each i.

We now define a function Ej : V → V by Ej(α) = Ej(α1 + α2 + ..... + αn) = αj.

Let a, b ∈ F; then a αi, b βi ∈ Wi (as each Wi is a subspace of V).

Clearly aα, bβ ∈ V for α, β ∈ V (as V is a linear space).

Also aα + bβ = a(α1 + α2 + ..... + αn) + b(β1 + β2 + ..... + βn) = (a α1 + b β1) + ...... + (a αn + b βn).

So Ej(aα + bβ) = a αj + b βj = a Ej(α) + b Ej(β) ⇒ Ej is a linear map.

Also we know by definition that

Ej(α1 + ...... + αj + αj+1 + ....... + αn) = αj, and in particular Ej(0 + 0 + ...... + αj + 0 + ....... + 0) = αj.

Then Ej(αj) = αj, and hence likewise

Ei(α1 + ...... + αi + ....... + αn) = αi and Ei(0 + 0 + ...... + αi + 0 + ....... + 0) = αi.

That is, Ei(α) = αi = Ei(αi). -------------(1)

Now Ei²(α) = (Ei Ei)(α) = Ei[Ei(α)] = Ei(αi) (by (1))

= αi = Ei(α).

That is, Ei²(α) = Ei(α) ⇒ Ei² = Ei, and we have seen that Ei is a linear map.

Therefore Ei is a projection.
Part (ii) : Let i  j then (Ei Ej) = Ei[ Ej() ] = Ei(j) (from above)

= 0 = 0 ()

i.e. (Ei Ej) () = 0 ()

Thus Ei Ej = 0

Part (iii) : Since (E1+ E2+.....,+En) () = (E1 () + E2() +......+ En())
=  = I() [see (1)]

i.e. (E1+ E2+.....,+En) () = I()  E1+ E2+.....,+En= I.

Part (iv) : Since Rang (Ei) = { Ei () :   V}


i  Wi  Ei (i) = i (By definition of Ei )

 i  Rang (Ei), for Ei (i)  Rang (Ei)  Wi  (Ei)

Now if any x  Rang (Ei)  x  V them we can get y  V such that E : (y) = x

 Ei ( y1+ y2+.....+yn) = x where y1+ y2+.....+yn = y and yi  Wi for i = 1, 2, ..., n.

 yi = x, and yi  Wi so x  Wi [ since by (1) Ei (yi) = yi ]

Thus we get x  Rang (Ei) = x  Wi  Rang (Ei)  Wi -----------(2)

But Wi  Rang (Ei) --------------(3)

Thus from (2) and (3) and the definition of the equality of any two sets Rang (Ei) = Wi //

Theorem (8.3) iv :- Let V be the direct sum of its subspaces U and W, and let E be the
projection on U along W. Then I - E is the projection on W along U.

Proof :- Since E is the projection on U along W, we have V = U ⊕ W,

with U = the range space of E and W = the null space of E.

Let x ∈ U; then x = E(x), since E fixes U,

⇒ (I - E)(x) = x - E(x) = x - x = 0

⇒ x ∈ the null space of I - E.

Also x ∈ W ⇒ E(x) = 0 ⇒ (I - E)(x) = x for every x in W.

Thus v ∈ V ⇒ v = u + w, u ∈ U, w ∈ W

⇒ (I - E)(v) = (I - E)(u) + (I - E)(w) = 0 + w = w.

Hence the range space of I - E is W.

Also (I - E)² = I² + E² - 2IE = I + E - 2E = I - E.

That is, (I - E)² = I - E ⇒ I - E is idempotent.

Thus I - E is the projection on W along U.
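This theorem is easy to check on a small made-up idempotent matrix (numpy assumed): I - E is again idempotent, and E(I - E) = 0, reflecting that the range of I - E is the null space of E.

```python
import numpy as np

# Checking Theorem (8.3) iv on a small idempotent example.
E = np.array([[1.0, 1.0],
              [0.0, 0.0]])           # E @ E == E
I2 = np.eye(2)

G = I2 - E
assert np.allclose(G @ G, G)                 # (I - E)² = I - E
assert np.allclose(E @ G, np.zeros((2, 2)))  # E annihilates the range of I - E
```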
Theorem (8.3) v :- If E is a projection, then its adjoint E* is also a projection.

Proof :- Let E be a projection; then E² = E.

We have to show that E* is also a projection.

For this it is sufficient to show that (E*)² = E*.

Since E² = E ⇒ EE = E ⇒ (EE)* = E* ⇒ E*E* = E* ⇒ (E*)² = E*

⇒ E* is idempotent, and hence a projection. //
Solved problems :-

Problem 1 :- Let V be a real vector space and E an idempotent linear operator. Prove that
I + E is invertible.

Solution :- Since (I + E)(I - ½E) = I + E - ½E - ½E² = I + E - ½E - ½E

= I + E - E = I,

I + E is invertible and (I + E)⁻¹ = I - ½E.
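A numeric check of Problem 1 (numpy assumed, with the same kind of made-up idempotent matrix used earlier):

```python
import numpy as np

# For an idempotent E, (I + E)^{-1} = I - E/2.
E = np.array([[1.0, 1.0],
              [0.0, 0.0]])           # E @ E == E
I2 = np.eye(2)

inv_candidate = I2 - 0.5 * E
assert np.allclose((I2 + E) @ inv_candidate, I2)
assert np.allclose(inv_candidate @ (I2 + E), I2)
```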
Problem 2 :- If E and F are projections on a vector space V(K), then prove that E + F - EF
is a projection, provided EF = FE.

Solution :- By hypothesis E, F are projections on V(K).

Then E, F are idempotent, so E² = E, F² = F.

Let (as the proviso states) E and F commute, i.e. EF = FE.

Our problem is to establish that E + F - EF is a projection.

For this it is sufficient to show that E + F - EF is idempotent,

i.e. that (E + F - EF)² = E + F - EF.

Now L.H.S. = (E + F - EF)² = (E + F - EF)(E + F - EF); by actual multiplication,

= (E² + EF - E²F) + (FE + F² - FEF) + (-EFE - EF² + EFEF).

Using E² = E, F² = F and EF = FE, each term reduces:

E²F = EF, FE = EF, FEF = EF² = EF, EFE = E²F = EF, EF² = EF, EFEF = E²F² = EF.

So L.H.S. = (E + EF - EF) + (EF + F - EF) + (-EF - EF + EF)

= E + F - EF,

i.e. (E + F - EF)² = E + F - EF.

Hence our requirement is obtained.
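Problem 2 can be illustrated with two commuting coordinate projections (numpy assumed; the diagonal matrices are a simple made-up example in which EF happens to be 0):

```python
import numpy as np

# E keeps the x-coordinate, F keeps the y-coordinate of vectors in R^3.
E = np.diag([1.0, 0.0, 0.0])
F = np.diag([0.0, 1.0, 0.0])
assert np.allclose(E @ F, F @ E)     # the commuting hypothesis

G = E + F - E @ F                    # projection on the xy-plane
assert np.allclose(G @ G, G)         # E + F - EF is again a projection
```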
Problem 3 :- If T is a linear operator on a vector space V(K) such that

T²(I - T) = T(I - T)² = 0, then prove that T is a projection.

Solution :- To prove that T is a projection,

it is sufficient to prove that T is idempotent, i.e. T² = T.

We are given that T is a linear operator on a vector space V(K) such that

T²(I - T) = T(I - T)² = 0

⇒ T² - T³ = T(I² + T² - 2IT) = 0

⇒ T² - T³ = T(I + T² - 2T) = 0

⇒ T² - T³ = T + T³ - 2T² = 0

⇒ T² - T³ = 0 and T + T³ - 2T² = 0

⇒ T² = T³ -----------(1) and T³ = 2T² - T. -----------(2)

Thus from (1) and (2),

T² = 2T² - T ⇒ -T² = -T

⇒ T² = T ⇒ T is idempotent ⇒ T is a projection. //

Problem 4 :- Let E be the projection on a subspace U of a vector space V(K). Then U is
T-invariant if and only if E T E = T E, T being a linear operator on V.

Solution :- Do yourself.