Block 3
LINEAR ALGEBRA
Linear Transformations
UNIT 7: Linear Transformations-I
UNIT 8: Linear Transformations-II
UNIT 9: Linear Transformations and Matrices
Miscellaneous Examples and Exercises
Course Design Committee*
Prof. Rashmi Bhardwaj, G.G.S. Indraprastha University, Delhi
Prof. Meena Sahai, University of Lucknow
Dr. Sunita Gupta, University of Delhi
Dr. Sachi Srivastava, University of Delhi
Prof. Amber Habib, Shiv Nadar University, Gautam Buddha Nagar
Prof. Jugal Verma, I.I.T., Mumbai
Prof. S. A. Katre, University of Pune
Prof. V. Krishna Kumar, NISER, Bhubaneswar
Dr. Amit Kulshreshtha, IISER, Mohali
Prof. Aparna Mehra, I.I.T., Delhi
Prof. Rahul Roy, Indian Statistical Institute, Delhi

Faculty members, School of Sciences, IGNOU
Prof. M. S. Nathawat (Director)
Dr. Deepika
Mr. Pawan Kumar
Prof. Poornima Mital
Prof. Parvin Sinclair
Prof. Sujatha Varma
Dr. S. Venkataraman
* The Committee met in August, 2016. The course design is based on the recommendations of the
Programme Expert Committee and the UGC-CBCS template.
Acknowledgement: We have used some of the material from the course MTE-02, Linear Algebra, in this course material. We thank Prof. Amber Habib for generously sharing his lecture notes.
June 2021
© Indira Gandhi National Open University
ISBN-
All rights reserved. No part of this work may be reproduced in any form, by mimeograph or any other means, without permission in writing from the Indira Gandhi National Open University.
Further information on the Indira Gandhi National Open University courses may be obtained from the University's office at Maidan Garhi, New Delhi-110 068 and the IGNOU website www.ignou.ac.in.
Printed and published on behalf of the Indira Gandhi National Open University, New Delhi by
Prof. Sujatha Varma, School of Sciences.
Unit 7 Linear Transformation-I
UNIT 7
LINEAR TRANSFORMATIONS I
7.1 Introduction
Objectives
7.2 Linear Transformations
7.3 Spaces Associated with a Linear Transformation
The Range Space and the Kernel
Rank and Nullity
7.4 Some Types of Linear Transformations
7.5 Homomorphism Theorems
7.6 Summary
7.7 Solutions/Answers
7.1 INTRODUCTION
You have already learnt about a vector space and several concepts
related to it. In this Unit we initiate the study of certain mappings
between two vector spaces, called linear transformations. The
importance of these mappings can be realized from the fact that, in the
calculus of several variables, every continuously differentiable function
can be replaced, to a first approximation, by a linear one. This fact is a
reflection of a general principle that every problem on the change of
some quantity under the action of several factors can be regarded, to a
first approximation, as a linear problem. It often turns out that this gives
an adequate result. Also, in physics it is important to know how vectors
behave under a change of the coordinate system. This requires a study
of linear transformations.
Objectives
After reading this unit, you should be able to
• verify the linearity of certain mappings between vector spaces;
• construct linear transformations with certain specified properties;
• calculate the rank and nullity of a linear operator;
• prove and apply the Rank Nullity Theorem;
• define an isomorphism between two vector spaces;
• show that two vector spaces are isomorphic if and only if they
have the same dimension;
• prove and use the Fundamental Theorem of Homomorphism.
In Unit 2 you came across the vector spaces R² and R³. Now consider the mapping f : R² → R³ defined by f(x, y) = (x, y, 0) (see Fig. 1).
(i) says that the sum of two plane vectors is mapped under f to the sum of their images under f; (ii) says that a line in the plane R² is mapped under f to a line in R³.
Properties (i) and (ii) together say that f is linear, a term that we now define.
The conditions LT1 and LT2 can be combined to give the following
equivalent condition.
What we are saying is that [LT1 and LT2] ⇔ LT3. This can be easily
shown as follows:
You can try and prove the converse now. That is what the following
exercise is all about!
E1) Show that the conditions LT1 and LT2 together imply LT3.
LT4) T(0) = 0. Let's see why this is true. Since T(0) = T(0 + 0) = T(0) + T(0) (by LT1), we subtract T(0) from both sides to get T(0) = 0.
E2) Can you show how LT4 and LT5 will follow from LT2?
Now let us look at some common linear transformations.
Solution: We will use LT3 to show that pr1 is a linear operator. For α, β ∈ R and (x1, …, xn), (y1, …, yn) in Rⁿ, we have
pr1[α(x1, …, xn) + β(y1, …, yn)]
= pr1(αx1 + βy1, αx2 + βy2, …, αxn + βyn) = αx1 + βy1
= α pr1[(x1, …, xn)] + β pr1[(y1, …, yn)].
Thus pr1 (and similarly pri ) is a linear transformation.
Before going to the next example, we make a remark about projections.
Fig. 3
Solution: Since PQ is perpendicular to the line y = mx, the line through PQ has slope −1/m. (Recall that if m1, m2 are the slopes of perpendicular lines, m1 ≠ 0, m2 ≠ 0, then m1m2 = −1.) Using the point-slope form of the equation of a line, the equation of the line passing through PQ is
(Y − y) = (−1/m)(X − x).
Since (x′, y′) lies on this line, we have
(y′ − y) = (−1/m)(x′ − x).
Since (x′, y′) lies on the line y = mx, we have y′ = mx′. Using this to eliminate y′ from the above equation, we get
(mx′ − y) = (−1/m)(x′ − x), or mx′ + (1/m)x′ = (x/m) + y, or x′ = (x + ym)/(m² + 1).
∴ y′ = mx′ = m(x + ym)/(m² + 1).
∴ (x, y) ↦ (x′, y′) is given by
P(x, y) = ((x + ym)/(m² + 1), (mx + ym²)/(m² + 1)).
This is of the form (x, y) ↦ (ax + by, a′x + b′y), where
a = 1/(m² + 1), b = m/(m² + 1), a′ = m/(m² + 1) and b′ = m²/(m² + 1).
Check that T(x, y) = (ax + by, a′x + b′y) is a linear operator for a, b, a′, b′ ∈ R (see exercise 6 below).
Since Q is the midpoint of PR, we have
Q = ((x + x′′)/2, (y + y′′)/2) = ((x + ym)/(m² + 1), (mx + ym²)/(m² + 1)).
∴ (x + x′′)/2 = (x + ym)/(m² + 1) and (y + y′′)/2 = (mx + ym²)/(m² + 1).
Solving for x′′, we get
x′′ = 2(x + ym)/(m² + 1) − x = (x(1 − m²) + 2ym)/(m² + 1).
Solving for y′′, we get
y′′ = 2(mx + ym²)/(m² + 1) − y = (2mx + y(m² − 1))/(m² + 1).
Again, we see that R(x, y) = (ax + by, a′x + b′y), where
a = (1 − m²)/(m² + 1), b = 2m/(m² + 1), a′ = 2m/(m² + 1), b′ = (m² − 1)/(m² + 1).
E7) Let u1 = (1, −1), u2 = (2, −1), u3 = (4, −3), v1 = (1, 0), v2 = (0, 1) and v3 = (1, 1) be 6 vectors in R². Can you define a linear transformation T : R² → R² such that T(ui) = vi, i = 1, 2, 3?
Let us see how the idea of Theorem 1 helps us to prove the following
useful result.
Once you have read Sec.7.3 you will realize that this theorem says that
T(R) is a vector space of dimension one, whose basis is {T(1)}.
Now try the following exercise, for which you will need Theorem 1.
Solution: By Theorem 3 we know that there exists a linear transformation T : R³ → R² such that T(e1) = (1, 2), T(e2) = (2, 3) and T(e3) = (3, 4). We want to know what T(x) is, for any x = (x1, x2, x3) ∈ R³. Now, x = x1e1 + x2e2 + x3e3.
Hence, T(x) = x1T(e1) + x2T(e2) + x3T(e3)
= (x1 + 2x2 + 3x3, 2x1 + 3x2 + 4x3).
Therefore, T(x1, x2, x3) = (x1 + 2x2 + 3x3, 2x1 + 3x2 + 4x3) is the definition of the linear transformation T.
***
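If you have access to a computer, constructions like the one above are easy to check. The following sketch uses Python with the numpy library (an aid we add here, not part of the course material):

```python
import numpy as np

# Rows are the prescribed images of the standard basis vectors:
# T(e1) = (1, 2), T(e2) = (2, 3), T(e3) = (3, 4).
images = np.array([[1, 2], [2, 3], [3, 4]])

def T(x):
    # By linearity, T(x) = x1*T(e1) + x2*T(e2) + x3*T(e3).
    return x @ images

# T(1, 1, 1) = (1 + 2 + 3, 2 + 3 + 4) = (6, 9), agreeing with the formula above.
print(T(np.array([1, 1, 1])))
```

Changing the rows of `images` gives the linear transformation determined by any other choice of images of e1, e2, e3.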
Let us now look at some vector spaces that are related to a linear
operator.
Solution: R (I) = {I(v) | v ∈ V} = {v | v ∈ V} = V. Also,
Ker I = {v ∈ V | I( v) = 0} = {v ∈ V | v = 0} = {0}.
***
So, R(T) = [S], where S = {(1, 2, −1), (−1, 1, −2), (2, 0, 2)}.
We now reduce the matrix
[ 1  2 −1]
[−1  1 −2]
[ 2  0  2]
to get a basis for R(T).
R2 → R2 + R1, R3 → R3 − 2R1, R3 → R3 + (4/3)R2, R2 → (1/3)R2 gives
[1 2 −1]
[0 1 −1]
[0 0  0].
So, {(1, 2, −1), (0, 1, −1)} is a basis for R(T), and hence R(T) is a plane.
Suppose the equation of the plane is ax + by + cz = 0. Since the vectors (1, 2, −1) and (0, 1, −1) satisfy this equation, we have
a + 2b − c = 0 and b − c = 0.
Taking c = λ, we get b = λ and a = −λ. Taking λ = 1, we see that the plane is −x + y + z = 0, or x = y + z.
Therefore, R(T) is the plane x − y − z = 0.
If (x1, x2, x3) ∈ Ker T, we have T(x1, x2, x3) = 0, i.e.,
x1 − x2 + 2x3 = 0
2x1 + x2 = 0
−x1 − 2x2 + 2x3 = 0
Thus, Ker T is the null space of the matrix
[ 1 −1 2]
[ 2  1 0]
[−1 −2 2].
The RREF is
[1 0  2/3]
[0 1 −4/3]
[0 0  0  ].
The third column is the non-pivot column. Taking x3 = λ, we get x1 = −(2/3)λ, x2 = (4/3)λ, x3 = λ. So the solution set is
{λ(−2/3, 4/3, 1) | λ ∈ R}.
This is the straight line x1/(−2/3) = x2/(4/3) = x3/1.
***
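The row reductions in this example can be reproduced with a computer algebra system. A sketch using the sympy library (our addition, not part of the original text):

```python
from sympy import Matrix, Rational

# Columns are T(e1), T(e2), T(e3) for
# T(x1, x2, x3) = (x1 - x2 + 2*x3, 2*x1 + x2, -x1 - 2*x2 + 2*x3).
A = Matrix([[1, -1, 2], [2, 1, 0], [-1, -2, 2]])

rref, pivots = A.rref()
assert pivots == (0, 1)   # two pivot columns: rank 2, so R(T) is a plane
ns = A.nullspace()        # a basis for Ker T
assert ns[0] == Matrix([Rational(-2, 3), Rational(4, 3), 1])
```

The single null space vector confirms that Ker T is the line λ(−2/3, 4/3, 1) found above.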
In this example, we see that finding R(T) and Ker T amounts to solving a system of linear equations. In Unit 4, we discussed the solution of systems of linear equations using row reduction.
The following exercises will help you in getting used to R (T) and Ker T.
a) T : R³ → R² : T(x, y, z) = (x, y)
b) T : R³ → R : T(x, y, z) = z
c) T : R³ → R³ : T(x1, x2, x3) = (x1 − x2 + x3, x1 + x2 − x3, x1 − x2 − x3).
(Note that the operators in (a) and (b) are projections onto the xy-plane and the z-axis, respectively.)
Now that you are familiar with the sets R (T) and Ker T , we will prove
that they are vector spaces.
Now that we have proved that R (T) and Ker T are vector spaces, you
know, from Unit 4, that they must have a dimension. We will study these
dimensions now.
Thus, rank (T) = dim R(T) and nullity (T) = dim Ker T.
We have already seen that rank (T) ≤ dim U and nullity (T) ≤ dim U.
Solution: In E10 you saw that R (T) = {0} and Ker T = U. Therefore,
rank (T) = 0 and nullity (T) = dim U.
Note that rank (T) + nullity (T) = dim U, in this case.
E14) If T is the identity operator on V, find rank (T) and nullity (T).
E15) Let D be the differentiation operator in E6. Give a basis for the range space of D and for Ker D. What are rank(D) and nullity(D)?
In the above example and exercises you will find that for T : U → V,
rank (T) + nullity (T) = dim U. In fact, this is the most important result
about rank and nullity of a linear operator. We will now state and prove
this result.
Theorem 5: Let U and V be vector spaces over a field F and
dim U = n. Let T : U → V be a linear operator. Then
rank (T) + nullity (T) = n.
Proof: Let nullity(T) = m, that is, dim Ker T = m. Let {e1, …, em} be a basis of Ker T. We know that Ker T is a subspace of U. Thus, by Theorem 11 of Unit 4, we can extend this basis to obtain a basis {e1, …, em, e_{m+1}, …, en} of U. We shall show that {T(e_{m+1}), …, T(en)} is a basis of R(T). Then our result will follow, because dim R(T) will be n − m = n − nullity(T).
Let us first prove that {T(e_{m+1}), …, T(en)} spans, or generates, R(T). Let y ∈ R(T). Then, by the definition of R(T), there exists x ∈ U such that T(x) = y.
Let x = c1e1 + ⋯ + cm em + c_{m+1}e_{m+1} + ⋯ + cn en, ci ∈ F ∀ i.
Then,
y = T(x) = c1T(e1) + ⋯ + cmT(em) + c_{m+1}T(e_{m+1}) + ⋯ + cnT(en)
= c_{m+1}T(e_{m+1}) + ⋯ + cnT(en),
because T(e1) = ⋯ = T(em) = 0, since ei ∈ Ker T for i = 1, …, m. ∴ any y ∈ R(T) is a linear combination of T(e_{m+1}), …, T(en). Hence, R(T) is spanned by {T(e_{m+1}), …, T(en)}.
It remains to show that the set {T(e_{m+1}), …, T(en)} is linearly independent. For this, suppose there exist a_{m+1}, …, an ∈ F with
a_{m+1}T(e_{m+1}) + ⋯ + anT(en) = 0.
Then, T(a_{m+1}e_{m+1} + ⋯ + an en) = 0.
Hence, a_{m+1}e_{m+1} + ⋯ + an en ∈ Ker T, which is generated by {e1, …, em}. Therefore, there exist a1, …, am ∈ F such that
a_{m+1}e_{m+1} + ⋯ + an en = a1e1 + ⋯ + am em, that is,
(−a1)e1 + ⋯ + (−am)em + a_{m+1}e_{m+1} + ⋯ + an en = 0.
Since {e1, …, en} is a basis of U, it is linearly independent. Hence, −a1 = 0, …, −am = 0, a_{m+1} = 0, …, an = 0. In particular, a_{m+1} = ⋯ = an = 0, which is what we wanted to prove.
Therefore, dim R(T) = n − m = n − nullity(T), that is, rank(T) + nullity(T) = n.
Remark 4: This proof is essentially the same as the one we gave for a
matrix in Unit 6. The only fact that we used in Unit 6 is that A is a linear
transformation.
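A numerical illustration of the theorem (a sketch with the sympy library; the specific matrix is our own choice, not from the text):

```python
from sympy import Matrix

# A 3x5 matrix is a linear operator T : R^5 -> R^3.
# The third row is the sum of the first two, so the rank drops to 2.
A = Matrix([[1, 2, 0, -1, 3],
            [0, 1, 1,  2, 1],
            [1, 3, 1,  1, 4]])

rank = A.rank()               # dim R(T)
nullity = len(A.nullspace())  # dim Ker T
assert rank + nullity == A.cols   # rank(T) + nullity(T) = n = 5
```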
E16) Find the kernel and range of the projection and reflection operators we discussed in Example 6.
E17) Give the rank and nullity of each of the linear transformations in E11.
Before ending this section we will prove a result that links the rank (or
nullity) of the composite of two linear operators with the rank (or nullity)
of each of them.
Proof: We shall prove (a). Note that (ST ) ( v) = S(T( v)) for any v ∈ V
(you’ll study more about compositions in Unit 8).
The proof of this theorem will be complete, once you solve the following
exercise.
We would now like to discuss some linear operators that have special
properties.
In this case we say that U and V are isomorphic vector spaces. This
is denoted by U ~ V.
We now give an important result equating ‘isomorphism’ with '1 − 1' and
with ‘onto’ in the finite-dimensional case.
Now (a) implies that Ker T = {0} (from Theorem 7). Hence, nullity
(T) = 0. Therefore, by Theorem 5, rank (T) = n, that is,
dim R (T) = n = dim V. But R (T) is a subspace of V. Thus, by the remark
following Theorem 12 of Unit 4, we get R (T) = V, i.e., T is onto, i.e., (b)
is true. So (a) ⇒ (b).
Similarly, if (b) holds then rank (T) = n, and hence, nullity (T) = 0.
Consequently, Ker T = {0}, and T is one-one. Hence, T is one-one and
onto, i.e., T is an isomorphism. Therefore, (b) implies (c).
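For an operator given by a matrix, these equivalent conditions can all be tested mechanically. A sketch (sympy; the matrix is a choice of ours for illustration):

```python
from sympy import Matrix

# T : R^3 -> R^3 given by a matrix: since dim U = dim V = 3,
# one-one, onto and isomorphism hold (or fail) together.
A = Matrix([[1, 2, 0], [0, 1, 1], [1, 0, 1]])

assert len(A.nullspace()) == 0   # Ker T = {0}: T is one-one
assert A.rank() == 3             # R(T) = R^3:  T is onto
assert A.det() != 0              # T is invertible, i.e., an isomorphism
```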
Caution: Theorem 10 is true for finite-dimensional spaces U and V,
of the same dimension. It is not true, otherwise. Consider the following
counter-example.
Theorem 13 (the Fundamental Theorem of Homomorphism): Let V and W be vector spaces over a field F and T : V → W be a linear transformation. Then V/Ker T ~ R(T).
Proof: You know that Ker T is a subspace of V, so that V/Ker T is a well-defined vector space over F. Also, R(T) = {T(v) | v ∈ V}. To prove the theorem, let us define θ : V/Ker T → R(T) by θ(v + Ker T) = T(v).
Solution: We have V —T→ V —S→ V. ST is the composition of the operators S and T, which you have studied in Unit 2 of BMTC-131 and will also study in Unit 8. Now, we apply Theorem 13 to the homomorphism θ : T(V) → ST(V) : θ(T(v)) = (ST)(v).
Now, Ker θ = {x ∈ T(V) | S(x) = 0} = Ker S ∩ T(V) = Ker S ∩ R(T).
Also, R(θ) = ST(V), since any element of ST(V) is (ST)(v) = θ(T(v)).
Thus, T(V)/(Ker S ∩ T(V)) ~ ST(V).
Therefore,
dim [T(V)/(Ker S ∩ T(V))] = dim ST(V).
That is, dim T(V) − dim(Ker S ∩ T(V)) = dim ST(V), which is what we had to show.
***
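Here is a numerical instance of the identity proved in this example, with S and T chosen by us as matrix operators on R³ (a sympy sketch, not part of the original text):

```python
from sympy import Matrix

T = Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 0]])  # projection onto the xy-plane
S = Matrix([[0, 0, 0], [0, 1, 0], [0, 0, 1]])  # kills the x-axis

dim_TV = T.rank()               # dim T(V) = 2
dim_STV = (S * T).rank()        # dim ST(V) = 1

# dim(Ker S ∩ T(V)) via dim U + dim W - dim(U + W):
kerS = Matrix.hstack(*S.nullspace())
dim_sum = Matrix.hstack(kerS, *T.columnspace()).rank()
dim_inter = kerS.rank() + dim_TV - dim_sum

assert dim_TV - dim_inter == dim_STV   # dim T(V) - dim(Ker S ∩ T(V)) = dim ST(V)
```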
E27) Using Example 14 and the Rank Nullity Theorem, show that
nullity(ST) = nullity(T) + dim(R (T) ∩ Ker S).
Example 17: Show that R³/W ~ R², where W ~ R.
Note: In general, for any n ≥ m, Rⁿ/W ~ R^(n−m), where W ~ R^m. Similarly, for n ≥ m, Cⁿ/W′ ~ C^(n−m), where W′ ~ C^m.
The next result is a corollary to the Fundamental Theorem of
Homomorphism.
But, before studying it, read Unit 3 for the definition of the sum of spaces.
Corollary 1: Let A and B be subspaces of a vector space V. Then
(A + B)/B ~ A/(A ∩ B).
Proof: We define a linear function T : A → (A + B)/B by T(a) = a + B.
T is well defined because a + B is an element of (A + B)/B (since a = a + 0 ∈ A + B).
T is a linear transformation because, for α1 , α 2 in F and a 1 , a 2 in A,
we have
T (α1a 1 + α 2 a 2 ) = α1a 1 + α 2 a 2 + B = α1 (a 1 + B) + α 2 (a 2 + B)
= α1T (a 1 ) + α 2 T (a 2 ).
Now we will show that T is surjective. Any element of (A + B)/B is of the form a + b + B, where a ∈ A and b ∈ B. Now,
a + b + B = (a + B) + (b + B) = (a + B) + B, since b ∈ B
= a + B, since B is the zero element of (A + B)/B
= T(a),
proving that T is surjective.
∴ R(T) = (A + B)/B.
We will now prove that Ker T = A ∩ B.
If a ∈ Ker T, then a ∈ A and T(a) = 0. This means that a + B = B, the zero element of (A + B)/B. Hence, a ∈ B (by Unit 3, E23). Therefore, a ∈ A ∩ B.
On the other hand, a ∈ A ∩ B ⇒ a ∈ A and a ∈ B ⇒ a ∈ A and a + B = B ⇒ a ∈ A and T(a) = T(0) = 0 ⇒ a ∈ Ker T.
This proves that A ∩ B = Ker T.
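Since isomorphic spaces have equal dimensions, the corollary gives dim(A + B) − dim B = dim A − dim(A ∩ B). A quick numeric check (sympy; the two planes are our own choice):

```python
from sympy import Matrix

A = Matrix([[1, 0], [0, 1], [0, 0]])   # columns span the xy-plane in R^3
B = Matrix([[0, 0], [1, 0], [0, 1]])   # columns span the yz-plane in R^3

dim_A, dim_B = A.rank(), B.rank()
dim_sum = Matrix.hstack(A, B).rank()    # dim(A + B) = 3
dim_inter = dim_A + dim_B - dim_sum     # dim(A ∩ B) = 1 (the y-axis)

# (A+B)/B ~ A/(A ∩ B) forces the dimensions on both sides to match:
assert dim_sum - dim_B == dim_A - dim_inter
```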
i) T is one-one if T(u1) = T(u2) ⇒ u1 = u2 ∀ u1, u2 ∈ U.
ii) T is onto if, for any v ∈ V, ∃ u ∈ U such that T(u) = v.
iii) T is an isomorphism (or, is invertible) if it is one-one and
onto, and then U and V are called isomorphic spaces. This
is denoted by U ~ V.
5) T : U → V is
i) one-one if and only if Ker T = {0}
ii) onto if and only if R (T) = V.
7.7 SOLUTIONS/ANSWERS
E1) For any α1, α2 ∈ F and u1, u2 ∈ U, we know that α1u1 ∈ U and α2u2 ∈ U. Therefore, by LT1,
T(α1u1 + α2u2) = T(α1u1) + T(α2u2)
= α1T(u1) + α2T(u2), by LT2.
E2) By LT2, T(0.u) = 0.T(u) for any u ∈ U. Thus, T(0) = 0. Similarly, for
any u ∈ U, T(−u) = T((−1)u) = (−1)T(u) = −T(u).
E4) T((x1 , x 2 , x 3 ) + ( y1 , y 2 , y 3 )) = T( x1 + y1 , x 2 + y 2 , x 3 + y 3 )
= a 1 (x 1 + y1 ) + a 2 ( x 2 + y 2 ) + a 3 ( x 3 + y 3 )
= (a 1x 1 + a 2 x 2 + a 3 x 3 ) + (a 1 y1 + a 2 y 2 + a 3 y 3 )
= T(x 1 , x 2 , x 3 ) + T( y1 , y 2 , y 3 )
Also, for any α ∈ R,
T(α(x 1 , x 2 , x 3 )) = a 1αx 1 + a 2 αx 2 + a 3 αx 3
= α(a 1 x1 + a 2 x 2 + a 3 x 3 ) = αT( x 1 , x 2 , x 3 ).
Thus, LT1 and LT2 hold for T.
= α( x 1 + x 2 − x 3 , 2 x 1 − x 2 , x 2 + 2 x 3 ) = αT( x 1 , x 2 , x 3 ), showing that
LT2 holds.
E6) We want to show that D(αf + βg) = αD(f) + βD(g), for any α, β ∈ R and f, g ∈ Pn.
Now, let f(x) = a0 + a1x + a2x² + ⋯ + a_n xⁿ and g(x) = b0 + b1x + ⋯ + b_n xⁿ.
Then (αf + βg)(x) = (αa0 + βb0) + (αa1 + βb1)x + ⋯ + (αa_n + βb_n)xⁿ.
∴ [D(αf + βg)](x) = (αa1 + βb1) + 2(αa2 + βb2)x + ⋯ + n(αa_n + βb_n)x^(n−1)
= α(a1 + 2a2x + ⋯ + n a_n x^(n−1)) + β(b1 + 2b2x + ⋯ + n b_n x^(n−1))
= α(Df)(x) + β(Dg)(x) = (αDf + βDg)(x).
Thus, D(αf + βg) = αDf + βDg, showing that D is a linear map.
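The linearity of D can also be checked symbolically. A sketch with sympy (our addition), for two sample polynomials:

```python
from sympy import symbols, diff, expand

x, a, b = symbols('x a b')
f = 1 + 2*x + 3*x**2    # sample elements of P2
g = 5 - x + x**2

# D(a f + b g) and a Df + b Dg agree identically in x, a and b:
lhs = diff(a*f + b*g, x)
rhs = a*diff(f, x) + b*diff(g, x)
assert expand(lhs - rhs) == 0
```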
E11) T : U → V : T(u) = 0 ∀ u ∈ U.
∴ Ker T = {u ∈ U | T(u) = 0} = U, and
R(T) = {T(u) | u ∈ U} = {0}. ∴ 1 ∉ R(T).
E12) a) The image is (x, y) = x(1, 0) + y(0, 1). Therefore R(T) = [{(1, 0), (0, 1)}].
The vectors (1, 0) and (0, 1) are linearly independent. So, R(T) ⊆ R² has dimension 2. So, R(T) = R².
(x, y, z) ∈ Ker T if (x, y) = (0, 0), i.e., x = 0, y = 0. ∴ the kernel is Ker T = {z(0, 0, 1) | z ∈ R}. This is the line x = 0, y = 0, i.e., the z-axis.
c) We have
(x1 − x2 + x3, x1 + x2 − x3, x1 − x2 − x3) = x1(1, 1, 1) + x2(−1, 1, −1) + x3(1, −1, −1).
∴ R(T) = [{(1, 1, 1), (−1, 1, −1), (1, −1, −1)}].
Row reducing
[ 1  1  1]
[−1  1 −1]
[ 1 −1 −1]
by R2 → R2 + R1, R2 → (1/2)R2, R3 → R3 − R1, R3 → R3 + 2R2, R3 → (−1/2)R3, we get
B = [1 1 1]
    [0 1 0]
    [0 0 1].
So, the basis of R(T) has three elements and therefore R(T) = R³.
To find the null space, we continue and find the RREF. Reducing B by the row operations R1 → R1 − R2, R1 → R1 − R3, we get
[1 0 0]
[0 1 0]
[0 0 1].
There are no non-pivot columns, so the system has the unique solution (0, 0, 0). Therefore Ker T = {(0, 0, 0)}.
E15) R(D) = {a1 + 2a2x + ⋯ + n a_n x^(n−1) | a1, …, a_n ∈ R}.
Thus, R(D) ⊆ P_(n−1). But any element b0 + b1x + ⋯ + b_(n−1)x^(n−1) in P_(n−1) is
D(b0x + (b1/2)x² + ⋯ + (b_(n−1)/n)xⁿ) ∈ R(D).
Therefore, R(D) = P_(n−1).
∴ a basis for R(D) is {1, x, …, x^(n−1)}, and rank(D) = n.
Ker D = {a0 + a1x + ⋯ + a_n xⁿ | a1 + 2a2x + ⋯ + n a_n x^(n−1) = 0, a_i ∈ R ∀ i}
= {a0 + a1x + ⋯ + a_n xⁿ | a1 = 0, a2 = 0, …, a_n = 0, a_i ∈ R ∀ i}
= {a0 | a0 ∈ R} = R.
∴ a basis for Ker D is {1}, and nullity(D) = 1.
E16) We have
P(x, y) = ((x + ym)/(m² + 1), (mx + ym²)/(m² + 1))
= x(1/(m² + 1), m/(m² + 1)) + y(m/(m² + 1), m²/(m² + 1)).
So, the range of P is [S], where
S = {(1/(m² + 1), m/(m² + 1)), (m/(m² + 1), m²/(m² + 1))}.
Note that (1/(m² + 1), m/(m² + 1)) = (1/(m² + 1))(1, m) and (m/(m² + 1), m²/(m² + 1)) = (m/(m² + 1))(1, m).
So, the vectors in S are scalar multiples of (1, m). Therefore, the range is [(1, m)]. In other words, the range is the line y = mx, as we would expect from our definition of the projection operator.
The kernel is given by
((x + ym)/(m² + 1), (mx + ym²)/(m² + 1)) = (0, 0), i.e., (x + ym)/(m² + 1) = 0 and m(x + ym)/(m² + 1) = 0.
Since m ≠ 0, both equations reduce to x + ym = 0, i.e., y = (−1/m)x.
So, the kernel is the line y = (−1/m)x.
Let us now consider the reflection operator. We have
R(x, y) = ((x(1 − m²) + 2ym)/(m² + 1), (2mx + y(m² − 1))/(m² + 1))
= x((1 − m²)/(m² + 1), 2m/(m² + 1)) + y(2m/(m² + 1), (m² − 1)/(m² + 1)).
Therefore, the range of R is [S], where
S = {((1 − m²)/(m² + 1), 2m/(m² + 1)), (2m/(m² + 1), (m² − 1)/(m² + 1))}.
In general, if S = {v1, v2, …, vk} spans a subspace of a vector space V and α1, …, αk are nonzero scalars, then {α1v1, α2v2, …, αkvk} also spans the subspace. This is because, if v = a1v1 + a2v2 + ⋯ + akvk, then
v = (a1/α1)α1v1 + (a2/α2)α2v2 + ⋯ + (ak/αk)αkvk.
Therefore, the set S1 = {(1 − m², 2m), (2m, m² − 1)} also spans the range of R. Setting a = 1 − m², b = 2m, the set is of the form {(a, b), (b, −a)}, where b ≠ 0.
To show that these vectors are linearly independent, we need to show that there is no α ∈ R, α ≠ 0, such that α(a, b) = (b, −a). (Why?)
If such an α exists, we have αa = b and αb = −a. Then α²b = −αa = −b, or (α² + 1)b = 0. So α = ±i, and α ∉ R.
∴ (a, b) and (b, −a) are linearly independent. Therefore, the range of R is the whole of R². So, by the Rank-Nullity Theorem, the kernel is {0}.
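All the facts found in this solution can be confirmed numerically. A sketch with numpy, writing P and R as matrices (the slope m = 2 is an arbitrary choice of ours):

```python
import numpy as np

m = 2.0
d = m * m + 1
P = np.array([[1, m], [m, m * m]]) / d                       # projection onto y = m x
R = np.array([[1 - m * m, 2 * m], [2 * m, m * m - 1]]) / d   # reflection in y = m x

v = np.array([1.0, m])                          # a vector on the line y = m x
assert np.allclose(P @ v, v)                    # range of P is the line itself
assert np.allclose(P @ np.array([m, -1.0]), 0)  # kernel of P: the perpendicular line
assert np.linalg.matrix_rank(R) == 2            # R is onto, so Ker R = {0}
```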
If rank (T) = 1, then dim R (T) = 1. That is, R (T) is a vector space
over R generated by a single element, v, say. Then R (T) is the
line R v = {αv | α ∈ R}.
UNIT 8
LINEAR TRANSFORMATIONS II
8.1 Introduction
Objectives
8.2 The Vector Space L( U, V )
8.3 The Dual Space
8.4 Composition of Linear Transformations
8.5 Minimal Polynomial
8.6 Summary
8.7 Solutions/Answers
8.1 INTRODUCTION
In the last unit we introduced you to linear transformations and their
properties. We will now show that the set of all linear transformations
from a vector space U to a vector space V forms a vector space itself,
and its dimension is (dim U) (dim V). In particular, we define and
discuss the dual space of a vector space.
You must revise Unit 2 of BMTC-131 and Unit 7 of this course before going further.
Objectives
Let U, V be vector spaces over a field F. Consider the set of all linear
transformations from U to V. We denote this set by L(U, V).
To get used to these Eij try the following exercise before continuing the
proof.
Let us first show that this set is linearly independent over F. For this, suppose
Σ_{i=1}^n Σ_{j=1}^m c_ij E_ij = 0.    (1)
Applying both sides of (1) to each basis vector e_k gives
Σ_{i=1}^n Σ_{j=1}^m c_ij E_ij(e_k) = 0 ∀ k = 1, …, m,
that is, Σ_{i=1}^n c_ik f_i = 0 for each k.
Now,
Σ_{i=1}^n Σ_{j=1}^m c_ij E_ij(e_k) = Σ_{i=1}^n c_ik f_i = T(e_k), by (2). This implies (3).
***
Example 2: Let U, V be vector spaces of dimension m and n, respectively. Suppose W is a subspace of V of dimension p (≤ n). Let
X = {T ∈ L(U, V) : T(u) ∈ W for all u ∈ U}.
Is X a subspace of L(U, V)? If yes, find its dimension.
E3) What can be a basis for L(R 2 , R), and for L(R, R 2 ) ? Notice that
both these spaces have the same dimension over R.
After having looked at L(U, V), we now discuss this vector space for
the particular case when V = F.
E4) Prove that any linear functional on R 3 is of the form given in the
example above.
Now, suppose Σ_{j=1}^n c_j f_j = 0. Then
Σ_{j=1}^n c_j f_j(e_i) = 0, for each i,
i.e., Σ_{j=1}^n c_j δ_ji = 0 ∀ i, so that c_i = 0 ∀ i.
Further, if f = Σ_{j=1}^n c_j f_j, then
f(e_j) = Σ_{i=1}^n c_i f_i(e_j) = c_j.
This implies that c_i = f(e_i) ∀ i = 1, …, n. Therefore, f = Σ_{i=1}^n f(e_i) f_i.
i=1
E5) What is the dual basis for the basis {1, x, x 2 } of the space
P2 = {a 0 + a 1x + a 2 x 2 | a i ∈ R} ?
Now let us look at the dual of the dual space. If you like, you may skip
this portion and go straight to Sec. 8.4.
Next, we show that S is onto, that is, for any v ∈ V, ∃ u ∈ V such that
S(u ) = v. Now, for any v ∈ V,
v = I( v) = S o T(v) = S(T( v)) = S(u ), where u = T( v) ∈ V. Thus, S is onto.
Hence, S is 1 − 1 and onto, that is, S is an isomorphism.
By now you must have got used to handling the elements of A(V). The
next section deals with polynomials that are related to these elements.
Example 6: For any vector space V, find the minimal polynomials for I,
the identity transformation, and 0, the zero transformation.
We will now state and prove a criterion by which we can obtain the
minimal polynomial of a linear operator T, once we know any
polynomial f ∈ F[ x ] with f (T) = 0. It says that the minimal polynomial
must be a factor of any such f .
∴ P²((x, y)) = P((x + ym)/(m² + 1), (mx + ym²)/(m² + 1)) = P((x′, y′)), say,
where x′ = (x + ym)/(m² + 1) and y′ = (mx + ym²)/(m² + 1).
Now, P((x′, y′)) = ((x′ + y′m)/(m² + 1), (mx′ + y′m²)/(m² + 1)).
Substituting the values of x′ and y′, we get
x′ + y′m = (x + ym)/(m² + 1) + m(mx + ym²)/(m² + 1)
= (x + ym + m²x + ym³)/(m² + 1)
= (x(1 + m²) + ym(1 + m²))/(m² + 1) = x + ym.
Therefore,
(x′ + y′m)/(m² + 1) = (x + ym)/(m² + 1).
Similarly,
mx′ + y′m² = m(x + ym)/(m² + 1) + m²(mx + ym²)/(m² + 1)
= (mx + ym² + m³x + ym⁴)/(m² + 1)
= (mx(1 + m²) + ym²(1 + m²))/(m² + 1) = mx + m²y.
Therefore,
(mx′ + y′m²)/(m² + 1) = (mx + m²y)/(m² + 1).
∴ P²((x, y)) = ((x + ym)/(m² + 1), (mx + m²y)/(m² + 1)) = P((x, y)).
∴ (P² − P)((x, y)) = 0, i.e., P satisfies the polynomial x² − x.
Now, x² − x = x(x − 1). Since the minimal polynomial divides x² − x, it has to be x, x − 1 or x(x − 1).
If it is x, we have P((x, y)) = (0, 0) ∀ (x, y) ∈ R². But
P((1, 0)) = (1/(m² + 1), m/(m² + 1)) ≠ (0, 0), since we assumed that m ≠ 0.
If the minimal polynomial is x − 1, we have P − I = 0, i.e., P is the identity operator. But, again,
P((1, 0)) = (1/(m² + 1), m/(m² + 1)) ≠ (1, 0).
Therefore, the minimal polynomial of P is x² − x.
***
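The same conclusion can be reached by machine. A sympy sketch with P written as a matrix (the slope m = 3 is chosen by us):

```python
from sympy import Matrix, Rational, eye, zeros

m = Rational(3)
d = m**2 + 1
P = Matrix([[1, m], [m, m**2]]) / d   # matrix of the projection onto y = m x

assert P**2 - P == zeros(2, 2)   # P satisfies x^2 - x ...
assert P != zeros(2, 2)          # ... but not x alone,
assert P != eye(2)               # ... and not x - 1 alone,
# so the minimal polynomial of P is exactly x(x - 1).
```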
We will now end the unit by summarising what we have covered in it.
8.6 SUMMARY
In this unit we covered the following points.
1. L(U, V), the vector space of all linear transformations from U to
V is of dimension (dim U) (dim V).
8.7 SOLUTIONS/ANSWERS
E1) We have to check that VS1-VS10 are satisfied by L(U, V). We
have already shown that VS1 and VS6 are true:
VS2: For any L, M, N ∈ L(U, V) and ∀ u ∈ U,
[(L + M) + N](u) = (L + M)(u) + N(u) = [L(u) + M(u)] + N(u)
= L(u) + [M(u) + N(u)], since addition is associative in V
= [L + (M + N)](u).
∴ (L + M) + N = L + (M + N).
VS3: 0 : U → V : 0(u) = 0 ∀ u ∈ U is the zero element of L(U, V).
VS4: For any S ∈ L(U, V), (−1)S = −S is the additive inverse of S.
VS5: Since addition is commutative in V, S + T = T + S ∀ S, T in
L(U, V).
VS7: ∀ α ∈ F and S, T ∈ L( U, V),
α(S + T) (u ) = (αS + αT) (u ) ∀ u ∈ U,
∴ α(S + T) = αS + αT.
VS8: ∀, α, β ∈ F and S ∈ L(U, V), (α + β)S = αS + βS.
VS9: ∀ α, β ∈ F and S ∈ L( U, V ), (αβ)S = α(βS).
VS10: ∀ S ∈ L(U, V), 1.S = S.
E5) Let the dual basis be {f1, f2, f3}. Then, for any
v ∈ P2, v = f1(v).1 + f2(v).x + f3(v).x².
∴ if v = a0 + a1x + a2x², then f1(v) = a0, f2(v) = a1, f3(v) = a2. That is,
f1(a0 + a1x + a2x²) = a0, f2(a0 + a1x + a2x²) = a1, f3(a0 + a1x + a2x²) = a2,
for any a0 + a1x + a2x² ∈ P2.
E6) Let {f1, …, fn} be a basis of V*, and let its dual basis be {θ1, …, θn}, θi ∈ V**. Let ei ∈ V be such that φ(ei) = θi (ref. Theorem 3) for i = 1, …, n.
Then {e1, …, en} is a basis of V, since φ⁻¹ is an isomorphism and maps the basis {θ1, …, θn} to {e1, …, en}. Now fi(ej) = φ(ej)(fi) = θj(fi) = δji, by the definition of a dual basis.
∴ {f1, …, fn} is the dual basis of {e1, …, en}.
E9) S ∈ A (R 2 ), T ∈ A(R 2 ).
S o T( x1 , x 2 ) = S(−x 2 , x1 ) = ( x1 , x 2 )
T o S( x1 , x 2 ) = T(x 2 , − x1 ) = ( x1 , x 2 )
∀ ( x1 , x 2 ) ∈ R 2 .
∴ S o T = T o S = I, and hence, both S and T are invertible.
E19) Let p = a 0 + a 1 x + L + a n x n , q = b 0 + b1 x + L + b m x m .
a) Then ap + bq = aa 0 + aa 1 x + L + aa n x n + bb 0 + bb1 x + L + bb m x m .
∴ φ(ap + bq ) = aa 0 I + aa 1T + L + aa n T n + bb 0 I + bb1T + L + bb m T m
= ap(T) + bq (T) = aφ(p) + bφ(q )
b) pq = (a 0 + a 1 x + L + a n x n ) (b 0 + b1 x + L + b m x m )
= a 0 b 0 + (a 1b 0 + a 0 b1 ) x + L + a n b m x n + m
∴ φ(pq ) = a 0 b 0 I + (a 1b 0 + a 0 b1 )T + L + a n b m T n +m
= (a 0 I + a 1T + L + a n T n ) (b 0 I + b1T + L + b m T m )
= φ(p) φ(q ).
E22) We have
R(x, y) = ((x(1 − m²) + 2ym)/(m² + 1), (2mx + y(m² − 1))/(m² + 1)).
The calculation is again conceptually simple but a little tedious. Let us write R((x, y)) = (x′, y′), where
x′ = (x(1 − m²) + 2ym)/(m² + 1) and y′ = (2mx + y(m² − 1))/(m² + 1).
Therefore, R((x′, y′)) = ((x′(1 − m²) + 2y′m)/(m² + 1), (2mx′ + y′(m² − 1))/(m² + 1)).
Substituting the values of x′ and y′, we get
x′(1 − m²) + 2y′m = [(x(1 − m²) + 2ym)(1 − m²) + 2m(2mx + y(m² − 1))]/(m² + 1)
= (x(1 − m²)² + 2ym(1 − m²) + 4m²x + 2my(m² − 1))/(m² + 1)
= (x − 2m²x + xm⁴ + 2ym − 2ym³ + 4m²x + 2m³y − 2ym)/(m² + 1)
= (x + 2m²x + m⁴x)/(m² + 1) = x(m² + 1)²/(m² + 1) = x(m² + 1).
∴ (x′(1 − m²) + 2y′m)/(m² + 1) = x(m² + 1)/(m² + 1) = x.
Similarly, check that
(2mx′ + y′(m² − 1))/(m² + 1) = y.
Therefore, R²((x, y)) = R((x′, y′)) = (x, y), or (R² − I)(x, y) = 0 for all (x, y) ∈ R². So, R satisfies the polynomial x² − 1. We have x² − 1 = (x − 1)(x + 1), so the minimal polynomial is x − 1, x + 1 or x² − 1.
If it is x − 1, we have R − I = 0, or R = I. But, taking (x, y) = (0, 1), we get
R((0, 1)) = (2m/(m² + 1), (m² − 1)/(m² + 1)) ≠ (0, 1), since m ≠ 0.
Therefore, x − 1 is not the minimal polynomial.
If R + I = 0, then R = −I, so R((x, y)) = −(x, y) = (−x, −y). Again, for (x, y) = (0, 1),
R((0, 1)) = (2m/(m² + 1), (m² − 1)/(m² + 1)) ≠ (0, −1).
Therefore, x + 1 is not the minimal polynomial of R. So, the minimal polynomial is x² − 1.
E24) T : R 2 → R 2 : T( x, y) = ( x, − y).
Check that T 2 − I = 0
∴ the minimal polynomial p must divide x 2 − 1.
∴ p( x ) can be x − 1, x + 1 or x 2 − 1. Since T − I ≠ 0 and T + I ≠ 0, we
see that p( x ) = x 2 − 1.
By Theorem 10, T is invertible. Now T 2 − I = 0.
∴ T(−T) = I. ∴ T −1 = −T.
UNIT 9
LINEAR TRANSFORMATIONS AND MATRICES
9.1 INTRODUCTION
In Unit 1, we discussed matrices. In Unit 2, we saw that M_{n,m}(F) is a vector space over F. In this unit, we will look at matrices as representations of linear transformations. Such a representation will enable us to compute several things, like the rank and nullity of a linear transformation. We will also see that composition of linear transformations translates into multiplication of matrices. This gives us an insight into matrix multiplication, which looks very contrived otherwise.
Objectives
After studying this unit, you should be able to:
• define and give examples of various types of matrices;
• define a linear transformation, if you know its associated matrix;
• evaluate the sum, difference, product and scalar multiple of
matrices;
• obtain the transpose and conjugate of a matrix;
• determine if a given matrix is invertible;
• obtain the inverse of a matrix;
• discuss the effect that the change of basis has on the matrix of a
linear transformation.
We use the notation [T]_{B1}^{B2} for this matrix. Thus, to obtain [T]_{B1}^{B2}, we consider T(ei) for each ei ∈ B1, and write it as a linear combination of the elements of B2.
Thus,
[T]_{B1}^{B2} = [1 0 0]
               [0 1 0].
***
E1) Choose two other bases B1 and B2 of R³ and R², respectively. (In Unit 4 you came across a lot of bases of both these vector spaces.) For T in the example above, give the matrix [T]_{B1}^{B2}.
What E1 shows us is that the matrix of a transformation depends
on the bases that we use for obtaining it. The next two exercises
also bring out the same fact.
Example 2: Describe T : R³ → R³ such that
[T]_B = [1 2 4]
        [2 3 1]
        [3 1 2],
where B is the standard basis of R³.
Solution: T(x, y, z) = (x + 2y + 4z, 2x + 3y + z, 3x + y + 2z).
***
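In coordinates this is just matrix-vector multiplication; a numpy sketch (our addition, not part of the course text):

```python
import numpy as np

# [T]_B with B the standard basis; applying T is multiplying by this matrix.
A = np.array([[1, 2, 4],
              [2, 3, 1],
              [3, 1, 2]])

def T(v):
    return A @ v

# T(e1) is the first column of A: (1, 2, 3).
print(T(np.array([1, 0, 0])))
```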
E5) Describe T : R³ → R² such that
[T]_{B1}^{B2} = [1 1 0]
               [0 1 1],
where B1 and B2 are the standard bases of R³ and R², respectively.
In Unit 7 you studied the sum and scalar multiples of linear transformations. In the following theorem we will see what happens to the matrices associated with linear transformations that are sums or scalar multiples of given linear transformations.
In other words, αA is the m × n matrix whose (i, j)th element is α times the (i, j)th element of A.
Remark 1: The way we have defined the sum and scalar multiple of
matrices allows us to write Theorem 1 as follows:
$[S + T]_{B_1}^{B_2} = [S]_{B_1}^{B_2} + [T]_{B_1}^{B_2}$
$[\alpha S]_{B_1}^{B_2} = \alpha\,[S]_{B_1}^{B_2}.$
The following exercise will help you in checking if you have understood
the contents of Sections 2.2.2 and 2.2.3.
What is the dimension of $M_{m \times n}(F)$ over $F$? To answer this question we
prove the following theorem. But, before you go further, check whether
you remember the definition of a vector space isomorphism (Unit 7).
$$[T]_{B_1}^{B_2} = \begin{bmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{bmatrix}.$$
Such a matrix is called a diagonal matrix. Let us see what this means.
Note: The $d_i$'s may or may not be zero. What happens if all the $d_i$'s
are zero? Well, we get the $n \times n$ zero matrix, which corresponds to the
zero operator.
E10) Show that $I_n$ is the matrix associated with the identity operator from
$\mathbb{R}^n$ to $\mathbb{R}^n$.
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}$$
Note that $a_{ij} = 0\ \forall\ i > j$.
A square matrix $A$ such that $a_{ij} = 0\ \forall\ i > j$ is called an upper triangular
matrix. If $a_{ij} = 0\ \forall\ i \geq j$, then $A$ is called strictly upper triangular.
For example, $\begin{bmatrix} 1 & 3 \\ 0 & 2 \end{bmatrix}$, $\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$, $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ are all upper triangular, while
$\begin{bmatrix} 0 & 3 \\ 0 & 0 \end{bmatrix}$ is strictly upper triangular.
Remark 4: If $A$ is an upper triangular $3 \times 3$ matrix, say $A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{bmatrix}$,
then $A^t = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 4 & 0 \\ 3 & 5 & 6 \end{bmatrix}$, a lower triangular matrix.
In fact, for any $n \times n$ upper triangular matrix $A$, its transpose is lower
triangular, and vice versa.
coefficients of $g_i$.
Thus, $[S \circ T]_{B_1}^{B_3} = [c_{ik}]_{m \times p}$, where $c_{ik} = \sum_{j=1}^{n} a_{ij} b_{jk}$.
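The entry formula $c_{ik} = \sum_j a_{ij} b_{jk}$ is exactly matrix multiplication. A minimal sketch (NumPy, with illustrative matrices of our own choosing) computes it with explicit loops and compares against the built-in product:

```python
import numpy as np

A = np.array([[0, -1, 2],
              [0,  1, -1]])      # a 2x3 matrix, playing the role of [S]
B = np.array([[2, 1],
              [1, 2],
              [1, 1]])           # a 3x2 matrix, playing the role of [T]

# c_ik = sum_j a_ij * b_jk, computed entry by entry
C = np.zeros((A.shape[0], B.shape[1]))
for i in range(A.shape[0]):
    for k in range(B.shape[1]):
        C[i, k] = sum(A[i, j] * B[j, k] for j in range(A.shape[1]))

assert np.array_equal(C, A @ B)   # agrees with the built-in product
```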
Also,
$S(f_1) = S(1, 0, 0) = (0, 0) = 0.e_1 + 0.e_2$
$S(f_2) = S(0, 1, 0) = (-1, 1) = -e_1 + e_2$
$S(f_3) = S(0, 0, 1) = (2, -1) = 2e_1 - e_2$
Thus,
$$[S]_{B_2}^{B_1} = \begin{bmatrix} 0 & -1 & 2 \\ 0 & 1 & -1 \end{bmatrix}$$
So, $[S]_{B_2}^{B_1}[T]_{B_1}^{B_2} = \begin{bmatrix} 0 & -1 & 2 \\ 0 & 1 & -1 \end{bmatrix}\begin{bmatrix} 2 & 1 \\ 1 & 2 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I_2.$
Also, $S \circ T(x, y) = S(2x + y,\ x + 2y,\ x + y)$
$= (-x - 2y + 2x + 2y,\ x + 2y - x - y)$
$= (x, y).$
Thus, $S \circ T = I$, the identity map.
This means $[S \circ T]_{B_1} = I_2$.
Hence, $[S \circ T]_{B_1} = [S]_{B_2}^{B_1}[T]_{B_1}^{B_2}$.
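The identity $S \circ T = I$ can also be checked by composing the maps directly. A small sketch (NumPy; $S$ is read off from its values on the standard basis, as computed above):

```python
import numpy as np

def T(x, y):
    # T(x, y) = (2x + y, x + 2y, x + y), as in the example
    return (2*x + y, x + 2*y, x + y)

def S(x, y, z):
    # S(x, y, z) = (-y + 2z, y - z), read off from S(f1), S(f2), S(f3)
    return (-y + 2*z, y - z)

# S o T should be the identity on R^2
for (x, y) in [(1.0, 0.0), (0.0, 1.0), (3.0, -2.5)]:
    assert np.allclose(S(*T(x, y)), (x, y))
```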
Proof: Suppose $T$ is invertible. Then there exists $S \in L(V, V)$ such that
$TS = ST = I$. Then, by Theorem 2, $[TS]_B = [ST]_B = I$. That is,
$[T]_B [S]_B = [S]_B [T]_B = I$. Take $A = [S]_B$. Then $[T]_B A = I = A[T]_B$.
We will now make a few observations about the matrix inverse, in the
form of a theorem.
ii) If we take transposes in Equation (1) and use the property that
$(AB)^t = B^t A^t$, we get
$(A^{-1})^t A^t = A^t (A^{-1})^t = I^t = I.$
So $A^t$ is invertible and $(A^t)^{-1} = (A^{-1})^t$.
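The identity $(A^t)^{-1} = (A^{-1})^t$ is easy to test numerically. A minimal sketch (NumPy; the particular matrix is just an arbitrary invertible example of ours):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 5.0]])       # any invertible matrix will do

lhs = np.linalg.inv(A.T)         # (A^t)^{-1}
rhs = np.linalg.inv(A).T         # (A^{-1})^t
assert np.allclose(lhs, rhs)
```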
From the following example you can see how Theorem 7 can be useful.

Example 4: Let $A = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix} \in M_3(\mathbb{R})$.
Determine whether or not $A$ is invertible.

Solution: The row operations $R_3 \to R_3 - R_1$, $R_3 \to R_3 - R_2$ give $\begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & -1 \end{bmatrix}$.
Since there are no zero rows, the rows are linearly independent.
Thus, by Theorem 5, $A$ is invertible.
***
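The conclusion of Example 4 can be double-checked with a rank computation; full row rank is equivalent to invertibility. A short sketch (NumPy):

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 1.0]])   # the matrix of Example 4

# Full rank <=> rows linearly independent <=> A invertible
assert np.linalg.matrix_rank(A) == 3
assert abs(np.linalg.det(A)) > 1e-12   # det(A) = -1, matching the pivot -1 above
```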
E15) Check if $\begin{bmatrix} 2 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 3 & 0 \end{bmatrix} \in M_3(\mathbb{Q})$ is invertible.
What happens if we change the basis more than once? The following
theorem tells us something about the corresponding matrices.
Theorem 9: Let $B$, $B'$, $B''$ be three bases of $V$. Then $M_{B'}^{B''} M_{B}^{B'} = M_{B}^{B''}$.

A useful consequence is that $M_{B}^{B'}$ is invertible, with $(M_{B}^{B'})^{-1} = M_{B'}^{B}$.

Proof: By Theorem 9,
$M_{B'}^{B} M_{B}^{B'} = M_{B}^{B} = I.$
Similarly, $M_{B}^{B'} M_{B'}^{B} = M_{B'}^{B'} = I.$
But, how does the change of basis affect the matrix associated to a
given linear transformation? In Sec.7.2 we remarked that the matrix of a
linear transformation depends upon the pair of bases chosen. The
relation between the matrices of a transformation with respect to two
pairs of bases can be described as follows.
Now, a corollary to Theorem 10, which will come in handy in the next
block.
We can compute the matrix $M_{B}^{B'}$ using row reduction when $B$ and $B'$
are subsets of $\mathbb{R}^n$. Let us look at an example.
Solution: We have
$$\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} = a_{11}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} + a_{21}\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + a_{31}\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} a_{11} \\ a_{21} \\ a_{31} \end{bmatrix} = B'\begin{bmatrix} a_{11} \\ a_{21} \\ a_{31} \end{bmatrix}.$$
Here we write $B'$ for the matrix whose columns are the elements of $B'$.
Since $B'$ is a linearly independent set, the columns of $B'$ are linearly
independent. So, $B'$ is invertible. Therefore
$$\begin{bmatrix} a_{11} \\ a_{21} \\ a_{31} \end{bmatrix} = B'^{-1}\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}.$$
Similarly, we can write down the equations
$$\begin{bmatrix} a_{12} \\ a_{22} \\ a_{32} \end{bmatrix} = B'^{-1}\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \quad \begin{bmatrix} a_{13} \\ a_{23} \\ a_{33} \end{bmatrix} = B'^{-1}\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}.$$
Thus, $M_{B}^{B'} = [a_{ij}] = B'^{-1}B$, where $B$ is the matrix whose columns are the
elements of $B$. To find $B'^{-1}B$ we set up the $3 \times 6$ matrix $C = [\,B' \mid B\,]$. We
then use row operations to reduce the matrix formed by the first three
columns to the identity. We get $[\,I \mid D\,]$ where $D = B'^{-1}B$. So, we form
the matrix
$$C = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 & 0 & 0 \end{bmatrix}.$$
Carrying out the row operations $R_2 \to R_2 - R_1$, $R_3 \to R_3 - R_1$, $R_2 \leftrightarrow R_3$,
$R_2 \to -R_2$, $R_1 \to R_1 - R_2$, $R_3 \to -R_3$, $R_2 \to R_2 - R_3$, we get
$$C' = \begin{bmatrix} 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & -1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & -1 \end{bmatrix}. \text{ (Notice that } C' \text{ is the RREF of } C.)$$
Therefore $M_{B}^{B'} = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 1 & 1 \\ 1 & 0 & -1 \end{bmatrix}.$
***
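The same change of basis matrix can be found by solving the linear system $B'M = B$ directly, instead of row reducing by hand. A small sketch (NumPy; `Bp` holds the elements of $B'$ as columns and `Bm` those of $B$, as in the example above):

```python
import numpy as np

# Columns of Bp are the elements of B'; columns of Bm are the elements of B
Bp = np.array([[1, 1, 1],
               [1, 1, 0],
               [1, 0, 0]], dtype=float)
Bm = np.array([[1, 1, 0],
               [0, 1, 1],
               [1, 0, 0]], dtype=float)

# M = B'^{-1} B, computed without explicitly forming the inverse
M = np.linalg.solve(Bp, Bm)
assert np.allclose(M, [[1, 0, 0],
                       [-1, 1, 1],
                       [1, 0, -1]])
```

`np.linalg.solve` is generally preferred over multiplying by `np.linalg.inv(Bp)`, for numerical stability.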
E17) For the following pairs of bases $B$ and $B'$, find the change of basis
matrix $M_{B}^{B'}$.
i) $B = \{(1, -1), (2, 1)\}$, $B' = \{(1, 1), (1, 2)\}$.
ii) $B = \{(1, -1, 1), (-1, 0, 1), (1, 1, 1)\}$, $B' = \{(1, -1, 1), (1, 1, -1), (0, 0, 1)\}$.
Example 6: Let $B = \{(1, -1, -1), (1, 1, 0), (1, 0, 1)\}$ and
$B' = \{(1, 1, -1), (0, 1, 1), (0, 0, 1)\}$. Let $T : \mathbb{R}^3 \to \mathbb{R}^3$ be a linear operator
such that $[T]_B = \begin{bmatrix} 1 & 1 & 2 \\ -1 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix}$. Find $[T]_{B'}$.
***
E19) Let $T : \mathbb{R}^3 \to \mathbb{R}^3$ be such that its matrix with respect to the basis
$B = \{(1, 0, 1), (-1, 0, 1), (0, 1, 1)\}$ is $\begin{bmatrix} 1 & -1 & 2 \\ 2 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix}$. Find the matrix of the
linear transformation with respect to the basis
$B' = \{(1, -1, 1), (1, 1, 1), (0, 0, 1)\}$.
2.7 SUMMARY
We briefly sum up what has been done in this unit.
2.8 SOLUTIONS/ANSWERS
E1) Suppose $B_1 = \{(1, 0, 1), (0, 2, -1), (1, 0, 0)\}$ and $B_2 = \{(0, 1), (1, 0)\}$.
Then $T(1, 0, 1) = (1, 0) = 0.(0, 1) + 1.(1, 0)$
$T(0, 2, -1) = (0, 2) = 2.(0, 1) + 0.(1, 0)$
$T(1, 0, 0) = (1, 0) = 0.(0, 1) + 1.(1, 0).$
$$\therefore [T]_{B_1}^{B_2} = \begin{bmatrix} 0 & 2 & 0 \\ 1 & 0 & 1 \end{bmatrix}.$$
E7) $B_1 = \{(1, 0), (0, 1)\}$, $B_2 = \{(1, 0, 0), (0, 1, 0), (0, 0, 1)\}$.
Now $S(1, 0) = (1, 0, 0)$ and
$S(0, 1) = (0, 0, 1)$, so
$$[S]_{B_1}^{B_2} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix}, \text{ a } 3 \times 2 \text{ matrix.}$$
Again, $T(1, 0) = (0, 1, 0)$ and
$T(0, 1) = (0, 0, 1)$, so
$$[T]_{B_1}^{B_2} = \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}, \text{ a } 3 \times 2 \text{ matrix.}$$
$$[S + T]_{B_1}^{B_2} = [S]_{B_1}^{B_2} + [T]_{B_1}^{B_2} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 2 \end{bmatrix}, \text{ and}$$
$$[\alpha S]_{B_1}^{B_2} = \alpha\,[S]_{B_1}^{B_2} = \alpha\begin{bmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} \alpha & 0 \\ 0 & 0 \\ 0 & \alpha \end{bmatrix}, \text{ for any } \alpha \in \mathbb{R}.$$
E8) Since $\dim M_{2 \times 3}(\mathbb{R})$ is 6, any linearly independent subset can have
at most 6 elements.
E10) $I : \mathbb{R}^n \to \mathbb{R}^n : I(x_1, \ldots, x_n) = (x_1, \ldots, x_n)$.
Then, for any basis $B = \{e_1, \ldots, e_n\}$ of $\mathbb{R}^n$, $I(e_i) = e_i$.
$$[I]_B = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} = I_n.$$
E11) Since $A$ is upper triangular, all its elements below the diagonal
are zero. Again, since $A = A^t$, a lower triangular matrix, all the
entries of $A$ above the diagonal are zero. $\therefore$ all the off-diagonal
entries of $A$ are zero, i.e., $A$ is a diagonal matrix.
E12) $[S]_B = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$, $[T]_B = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$
$$[S]_B [T]_B = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$
Also, $[S \circ T]_B = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} = [S]_B [T]_B.$
$$AB = \begin{bmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{bmatrix}\begin{bmatrix} e_1 & 0 & \cdots & 0 \\ 0 & e_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e_n \end{bmatrix} = \begin{bmatrix} d_1 e_1 & 0 & \cdots & 0 \\ 0 & d_2 e_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n e_n \end{bmatrix}$$
$$= \mathrm{diag}(d_1 e_1, d_2 e_2, \ldots, d_n e_n).$$
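The diagonal-times-diagonal rule above can be confirmed numerically; a minimal sketch (NumPy, with diagonal entries of our own choosing):

```python
import numpy as np

d = np.array([2.0, -1.0, 3.0])
e = np.array([4.0, 0.5, -2.0])

# Product of diagonal matrices is the diagonal matrix of entrywise products
assert np.allclose(np.diag(d) @ np.diag(e), np.diag(d * e))
```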
E15) We will show that its rows are linearly independent over $\mathbb{Q}$.
Carrying out the row operations $R_1 \to \frac{R_1}{2}$, $R_2 \leftrightarrow R_3$, $R_2 \to \frac{R_2}{3}$, we get
$$\begin{bmatrix} 1 & 0 & 1/2 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
Since there are no zero rows, the rows of the given matrix are
linearly independent, and so the matrix is invertible.
Setting up $C = \begin{bmatrix} 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & -1 & 1 & 0 \\ 0 & 0 & 1 & -1 & -1 & 1 \end{bmatrix}$ and finding its RREF,
we get $C' = \begin{bmatrix} 1 & 0 & 0 & 1/2 & -1/2 & 1/4 \\ 0 & 1 & 0 & -1 & -1 & 1/2 \\ 0 & 0 & 1 & -1/2 & 1/2 & 1/4 \end{bmatrix}$. The row operations
are
$R_2 \to R_2 + R_1$, $R_3 \to R_3 - R_1$, $R_2 \to -R_2$, $R_1 \to R_1 + R_2$,
$R_3 \to R_3 - 2R_2$, $R_3 \to \frac{R_3}{4}$, $R_1 \to R_1 + 2R_3$.
$$\text{Therefore, } M_{B}^{B'} = \begin{bmatrix} 1/2 & -1/2 & 1/4 \\ -1 & -1 & 1/2 \\ -1/2 & 1/2 & 1/4 \end{bmatrix}.$$
E18) We have $B = \{(1, 0), (0, 1)\}$, $B' = \{(-2, 1), (3, 2)\}$.
$$P = B'^{-1}B = \begin{bmatrix} -2 & 3 \\ 1 & 2 \end{bmatrix}^{-1}\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \frac{1}{7}\begin{bmatrix} -2 & 3 \\ 1 & 2 \end{bmatrix}.$$
$$P^{-1} = \left(\frac{1}{7}\begin{bmatrix} -2 & 3 \\ 1 & 2 \end{bmatrix}\right)^{-1} = \begin{bmatrix} -2 & 3 \\ 1 & 2 \end{bmatrix}.$$
$$[T]_{B'} = P^{-1}[T]_B P = \frac{1}{7}\begin{bmatrix} -2 & 3 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} 1 & 6 \\ 2 & 2 \end{bmatrix}\begin{bmatrix} -2 & 3 \\ 1 & 2 \end{bmatrix} = \frac{1}{7}\begin{bmatrix} -2 & 3 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} 4 & 15 \\ -2 & 10 \end{bmatrix}$$
$$= \frac{1}{7}\begin{bmatrix} -14 & 0 \\ 0 & 35 \end{bmatrix} = \begin{bmatrix} -2 & 0 \\ 0 & 5 \end{bmatrix}.$$
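This similarity computation is easy to verify mechanically. A small sketch (NumPy; we use $P$ proportional to the change of basis matrix, since a scalar factor cancels in $P^{-1}[T]_B P$):

```python
import numpy as np

T_B = np.array([[1.0, 6.0],
                [2.0, 2.0]])     # [T]_B from E18
P = np.array([[-2.0, 3.0],
              [1.0, 2.0]])       # proportional to the change of basis matrix

# Similarity transform: the scalar 1/7 in P cancels between P^{-1} and P
T_Bp = np.linalg.inv(P) @ T_B @ P
assert np.allclose(T_Bp, [[-2.0, 0.0],
                          [0.0, 5.0]])
```

That the result is diagonal shows that $T$ acts by scaling along the directions $(-2, 1)$ and $(3, 2)$.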
E19) Setting up
$$[\,P \mid I\,] = \begin{bmatrix} 1/2 & -1/2 & -1/2 & 1 & 0 & 0 \\ 1/2 & -1/2 & 1/2 & 0 & 1 & 0 \\ 0 & 2 & 1 & 0 & 0 & 1 \end{bmatrix}$$
and row reducing using the row operations $R_1 \to 2R_1$, $R_2 \to R_2 - \frac{R_1}{2}$, $R_2 \leftrightarrow R_3$,
$R_2 \to \frac{R_2}{2}$, $R_1 \to R_1 + R_2$, $R_1 \to R_1 + \frac{R_3}{2}$, $R_2 \to R_2 - \frac{R_3}{2}$ gives
$$\begin{bmatrix} 1 & 0 & 0 & 3/2 & 1/2 & 1/2 \\ 0 & 1 & 0 & 1/2 & -1/2 & 1/2 \\ 0 & 0 & 1 & -1 & 1 & 0 \end{bmatrix}. \quad \therefore P^{-1} = \begin{bmatrix} 3/2 & 1/2 & 1/2 \\ 1/2 & -1/2 & 1/2 \\ -1 & 1 & 0 \end{bmatrix}.$$
$$[T]_{B'} = P^{-1}[T]_B P = \begin{bmatrix} 1 & 6 & 2 \\ -1/2 & 7/2 & 3/2 \\ 3/2 & -11/2 & -3/2 \end{bmatrix}.$$
We have $R(v) = w_1 - w_2$, $R(v') = w_1' - w_2'$.
$R(v + v') = R((w_1 + w_1') + (w_2 + w_2')) = (w_1 + w_1') - (w_2 + w_2')$
$= (w_1 - w_2) + (w_1' - w_2') = R(v) + R(v').$
If $\alpha \in F$, $\alpha v = \alpha w_1 + \alpha w_2$.
$\therefore R(\alpha v) = \alpha w_1 - \alpha w_2 = \alpha(w_1 - w_2) = \alpha R(v).$
Therefore $R$ is a linear operator.
$\mathbb{R}^2$.
iii) Prove that $\mathbb{R}^2 = W_1 \oplus W_2$ is an orthogonal decomposition of
$\mathbb{R}^2$.
iv) Compute the projection and reflection operators with respect to
this decomposition.
Let us now prove the converse. Let $v \neq 0$ be in $V$. Then
$W = [\{v\}]$ is a subspace of $V$. Then, $T(v) \in W$, therefore
$T(v) = \lambda_0 v$. We claim that $T(v) = \lambda_0 v$ for all $v \in V$. Let
$v' \in V$, $v' \neq v$. If
$v' = \alpha v$, $T(v') = T(\alpha v) = \alpha T(v) = \alpha \lambda_0 v = \lambda_0(\alpha v) = \lambda_0 v'$.
So, suppose that there is no $\alpha$ such that $v' = \alpha v$.
Taking $W = [\{v'\}]$, there is $\lambda_1 \in F$ such that $T(v') = \lambda_1 v'$.
Consider $u = v + v'$. Then $u \neq 0$; otherwise $v' = (-1)v$, i.e. $v' = \alpha v$
with $\alpha = -1$.
There is a $\lambda_3$ such that $T(u) = \lambda_3 u$.
$\therefore \lambda_3(v + v') = T(u) = T(v + v') = \lambda_0 v + \lambda_1 v'.$
Since there is no $\alpha$ such that $v' = \alpha v$, $v$ and $v'$ are linearly
independent. From $\lambda_3 v + \lambda_3 v' = \lambda_0 v + \lambda_1 v'$, we get $\lambda_1 = \lambda_0 = \lambda_3$,
and we are done.
SOLUTIONS/ANSWERS
E1) We have
2P( v) − I( v) = 2P(w 1 + w 2 ) − ( w 1 + w 2 ) = 2w 1 − w 1 − w 2 = w 1 − w 2 .
3) $\dfrac{(1, m)}{\sqrt{m^2+1}} \in W_1$ and $\dfrac{(-m, 1)}{\sqrt{m^2+1}} \in W_2$.
Further, suppose $\alpha\,\dfrac{(1, m)}{\sqrt{m^2+1}} + \beta\,\dfrac{(-m, 1)}{\sqrt{m^2+1}} = 0$. Taking dot products on both
sides with $\dfrac{(1, m)}{\sqrt{m^2+1}}$, we get
$$\alpha\,\frac{1 \cdot 1 + m \cdot m}{m^2+1} + \beta\,\frac{1 \cdot (-m) + m \cdot 1}{m^2+1} = 0, \text{ or } \alpha = 0.$$
Similarly, taking dot products on both sides with $\dfrac{(-m, 1)}{\sqrt{m^2+1}}$, we get
$\beta = 0$.
So, $\left\{\dfrac{(1, m)}{\sqrt{m^2+1}}, \dfrac{(-m, 1)}{\sqrt{m^2+1}}\right\}$ is a linearly independent set in $\mathbb{R}^2$.
Since $\dfrac{(1, m)}{\sqrt{m^2+1}} \in W_1$ and $\dfrac{(-m, 1)}{\sqrt{m^2+1}} \in W_2$, $\mathbb{R}^2 = W_1 + W_2$.
If $v \in W_1 \cap W_2$, $v = \alpha\,\dfrac{(1, m)}{\sqrt{m^2+1}} = \beta\,\dfrac{(-m, 1)}{\sqrt{m^2+1}}$.
Taking the dot product with $\dfrac{(1, m)}{\sqrt{m^2+1}}$ on both sides, we get
$$\alpha\,\frac{1 \cdot 1 + m \cdot m}{m^2+1} = \beta\,\frac{1 \cdot (-m) + m \cdot 1}{m^2+1} = 0, \text{ or } \alpha = 0. \ \therefore v = 0.$$
$\therefore \mathbb{R}^2 = W_1 \oplus W_2$.
If $w_1 \in W_1$ and $w_2 \in W_2$, we have $w_1 = \dfrac{\alpha(1, m)}{\sqrt{m^2+1}}$, $w_2 = \dfrac{\beta(-m, 1)}{\sqrt{m^2+1}}$ for
some $\alpha, \beta$.
$\therefore w_1 \cdot w_2 = 0$, and $\mathbb{R}^2 = W_1 \oplus W_2$ is an orthogonal
decomposition.
Writing $u = \dfrac{(1, m)}{\sqrt{m^2+1}}$ and $v = \dfrac{(-m, 1)}{\sqrt{m^2+1}}$, we have
$w_1 = \dfrac{x + my}{\sqrt{m^2+1}}\,u$ and $w_2 = \dfrac{y - mx}{\sqrt{m^2+1}}\,v$;
$w_1 \in W_1$ and $w_2 \in W_2$ since $u \in W_1$ and $v \in W_2$.
$$\therefore P((x, y)) = \frac{x + my}{\sqrt{m^2+1}}\,u = \frac{x + my}{m^2+1}(1, m) = \left(\frac{x + my}{m^2+1}, \frac{mx + m^2 y}{m^2+1}\right),$$
$$R((x, y)) = w_1 - w_2 = \frac{x + my}{\sqrt{m^2+1}}\cdot\frac{(1, m)}{\sqrt{m^2+1}} - \frac{y - mx}{\sqrt{m^2+1}}\cdot\frac{(-m, 1)}{\sqrt{m^2+1}}$$
$$= \left(\frac{x + my}{m^2+1}, \frac{mx + m^2 y}{m^2+1}\right) - \left(\frac{m^2 x - my}{m^2+1}, \frac{y - mx}{m^2+1}\right)$$
$$= \left(\frac{(1 - m^2)x + 2my}{m^2+1}, \frac{(m^2 - 1)y + 2mx}{m^2+1}\right).$$
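The closed-form reflection formula can be checked against the defining decomposition $R(p) = (p \cdot u)u - (p \cdot v)v$. A small sketch (NumPy; the slope $m$ and test point are arbitrary choices of ours):

```python
import numpy as np

m = 1.5
u = np.array([1.0, m]) / np.sqrt(m**2 + 1)      # unit vector along W1
v = np.array([-m, 1.0]) / np.sqrt(m**2 + 1)     # unit vector along W2

def R(x, y):
    # Reflection: keep the W1-component, negate the W2-component
    p = np.array([x, y])
    return (p @ u) * u - (p @ v) * v

x, y = 2.0, -3.0
expected = np.array([((1 - m**2)*x + 2*m*y) / (m**2 + 1),
                     ((m**2 - 1)*y + 2*m*x) / (m**2 + 1)])
assert np.allclose(R(x, y), expected)
```

Applying `R` twice returns the original point, as a reflection should.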