
Linear Maps from Rm to Rn

S Kumaresan
[email protected]

December 8, 2023

Abstract

We classify (find) all linear maps T : Rm → Rn . The plan is in the following list
of steps.
1. Find all linear maps from R to R. They are of the form f ( x ) = ax where
a = f (1).
2. Find all linear maps from Rm to R. They are of the form f ( x ) = ∑k ak xk where
ak = f (ek ).
3. The projection maps πi : Rm → R, 1 ≤ i ≤ m, defined by πi ( x1 , . . . , xm ) = xi
are linear. Note that πi ( x ) = x · ei .
4. Composition of linear maps is a linear map.
5. Let T : Rm → Rn be linear. Let Ti := πi ◦ T. We “know” what Ti looks like
from Item 2: Ti x = ai1 x1 + · · · + aim xm , 1 ≤ i ≤ n. Note that aij = Te j · ei .
6. Let Tx = (y1 , . . . , yn ). Then yi := πi (y) = πi ( Tx ) = (πi ◦ T )( x ).
7. Matrix representation of T : Rm → Rn . Let A := ( aij ) be the n × m matrix

       A :=  [ a11  a12  . . .  a1m ]
             [  ..   ..         ..  ]
             [ an1  an2  . . .  anm ]

   Note that Tx = Ax, where x = ( x1 , x2 , . . . , xm )t is treated as a column vector.

8. What is Tei ? Geometric meaning of the matrix A of T: Aei is the i-th column of
A.
9. What is the geometric meaning of the rows? The i-th row consists of the coefficients
of Ti x = ai1 x1 + · · · + aim xm .
10. Matrix representation of a linear map T : Rm → Rn .

We start with the question: What are all the linear maps from R to R? Let f : R → R
be linear. Given x ∈ R, we write it as x = x · 1 where we treat x on the right side as
a scalar and 1 as a (basic) vector. That is, we consider {1} as a basis for R and express
the vector x as a scalar multiple of the basic vector 1. Since f is linear, f ( x ) = f ( x · 1) =
x f (1) = ax where a := f (1). Thus, if f : R → R is linear, then f is given by f ( x ) = ax
where a = f (1). Conversely, if a ∈ R and if we define f : R → R by f ( x ) = ax, then f
is linear. We also note that f (1) = a. Thus we have arrived at the following lemma.

Lemma 1. A map f : R → R is linear iff f ( x ) = ax for all x ∈ R, where a = f (1).
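Lemma 1 can be checked numerically. The following Python sketch uses a hypothetical sample map f ( x ) = 3x; any linear f works the same way:

```python
# A sample linear map f : R -> R (the particular choice f(x) = 3x
# is hypothetical; any linear map behaves the same way).
def f(x):
    return 3.0 * x

# Lemma 1: f is completely determined by the single number a = f(1).
a = f(1)

# Check f(x) = a*x on a few sample points.
for x in [-2.0, 0.0, 0.5, 7.0]:
    assert f(x) == a * x
```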

Let us now look at a linear map f : Rm → R. We consider Rm as the space of column
vectors. Let ei := (0, . . . , 0, 1, 0, . . . , 0)t be the standard basic vectors, 1 ≤ i ≤ m. If
x = ( x1 , . . . , xm )t ∈ Rm , we note that x = x1 e1 + · · · + xm em . Since f is linear we see that

f ( x ) = f ( x 1 e1 + · · · + x m e m )
= f ( x 1 e1 ) + · · · + f ( x m e m )
= x 1 f ( e1 ) + · · · + x m f ( e m ) .

If we let ai := f (ei ), we see that f ( x ) = a1 x1 + · · · + am xm . In conclusion, if f : Rm → R
is linear then f is of the form f ( x ) = a1 x1 + · · · + am xm where ai = f (ei ), 1 ≤ i ≤ m.
Conversely, if f : Rm → R is defined by f ( x ) = a1 x1 + · · · + am xm where ai ∈ R are
arbitrary, then f is linear and we have ai = f (ei ), 1 ≤ i ≤ m. We have thus arrived at
the following result.

Lemma 2. Any linear map T : Rm → R is of the form T ( x ) = a1 x1 + · · · + am xm and we
have ai = T (ei ), 1 ≤ i ≤ m. In particular, T is a first degree polynomial in the variables xi ,
1 ≤ i ≤ m, with zero constant term.
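Lemma 2 can be sketched in Python (NumPy): treat a sample linear map f as a black box and recover its coefficients as ai = f (ei ). The particular coefficients below are randomly generated, purely for illustration:

```python
import numpy as np

m = 4
rng = np.random.default_rng(0)
c = rng.standard_normal(m)       # hidden coefficients of a sample linear map

def f(x):
    # A linear map R^m -> R, viewed as a black box.
    return float(c @ x)

# Lemma 2: the coefficients are a_i = f(e_i).
a = np.array([f(np.eye(m)[i]) for i in range(m)])

# Check f(x) = a1 x1 + ... + am xm on a sample vector.
x = rng.standard_normal(m)
assert np.isclose(f(x), float(a @ x))
```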

Recall the standard dot product on Rm : If x, y ∈ Rm , then x · y = x1 y1 + · · · + xm ym .
Using this dot product, we can reformulate the last lemma as follows.

Proposition 3. Let T : Rm → R be a linear map. Then there exists a unique vector a ∈ Rm
such that for all x ∈ Rm we have T ( x ) = x · a. The vector a = ( a1 , . . . , am ) is prescribed by
ai = T (ei ), 1 ≤ i ≤ m.

Let us look at the projection map πi : Rm → R defined by πi (( x1 , . . . , xm )t ) = xi ,
1 ≤ i ≤ m. It is easy to check that it is linear. By Lemma 2, we know that it is of the form
πi ( x ) = ∑k ak xk . What are the ak ’s? Note that ak = 0 if k ≠ i and ai = 1. Let us formulate
this as a lemma.

Lemma 4. Let the projection map πi : Rm → R be defined by πi (( x1 , . . . , xm )t ) = xi . Then πi
is linear and we have πi ( x ) = ∑k ak xk where ak = 1 if k = i and ak = 0 if k ≠ i. In terms of
the dot product, πi ( x ) = x · ei , 1 ≤ i ≤ m.
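Lemma 4 amounts to a one-line NumPy check: the dot product with ei picks out the i-th coordinate (the sample vector below is an arbitrary choice):

```python
import numpy as np

m = 5
x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # an arbitrary sample vector

# pi_i(x) = x . e_i extracts the i-th coordinate of x.
for i in range(m):
    e_i = np.eye(m)[i]
    assert x @ e_i == x[i]
```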

We claim that the composition of linear maps is linear.

Lemma 5. Let U, V and W be vector spaces. Let A : U → V and B : V → W be linear maps.
Then the composition B ◦ A : U → W is a linear map.

Proof. This is an easy verification and is left to the reader.

Now, let a linear map T : Rm → Rn be given. Do you know how to generate n linear
maps Ti : Rm → R? Lemmas 4 and 5 give us a way of generating such maps. How about
Ti := πi ◦ T : Rm → R? We know how to express these maps Ti : Rm → R. They are
given by
Ti x := Ti ( x ) = ai1 x1 + ai2 x2 + · · · + aim xm , where aij ∈ R. (1)
But we wanted to know Tx! Observe that if we write Tx = y, then to know y is the
same as knowing its coordinates yi , 1 ≤ i ≤ n. That is easily achieved. Note that
yi = πi (y) = πi ( Tx ) = (πi ◦ T )( x ). Hence

y = Tx = ((π1 ◦ T )( x ), (π2 ◦ T )( x ), . . . , (πn ◦ T )( x ))t
       = ( T1 ( x ), T2 ( x ), . . . , Tn ( x ))t
       = ( a11 x1 + a12 x2 + · · · + a1m xm , . . . , an1 x1 + an2 x2 + · · · + anm xm )t (2)
       = Ax, say, (3)

where

    A =  [ a11  a12  . . .  a1m ]
         [  ..   ..         ..  ]
         [ an1  an2  . . .  anm ]

and x = ( x1 , x2 , . . . , xm )t is the column vector of coordinates.

Observation 6. If T : Rm → Rn is a linear map, then we found the following:

(i) For x ∈ Rm , the vector Tx = ( p1 ( x ), . . . , pn ( x ))t where each pi is a first degree
polynomial with zero constant term.
(ii) Each pi is given by pi ( x ) = ai1 x1 + · · · + aim xm where aij = Ti (e j ) = πi ( Te j ) =
( Te j ) · ei .
(iii) There is an (n × m)-matrix A such that Tx = Ax where x is considered as a matrix
of size m × 1.

Conversely, if A is an (n × m)-matrix and if we define Tx := Ax for x ∈ Rm , then T
is a linear map from Rm to Rn . This follows from the properties of matrix multiplication.
Thus we have proved the following theorem.

Theorem 7. A map T : Rm → Rn is linear iff there exists an (n × m)-matrix A such that
Tx = Ax.
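Theorem 7, together with Observation 6, gives a concrete recipe: evaluate a (black-box) linear map on the standard basis to obtain the columns of its matrix. A NumPy sketch, where the particular map T is a hypothetical choice:

```python
import numpy as np

m = 3  # domain dimension

def T(x):
    # A sample linear map R^3 -> R^2 (this particular formula is a
    # hypothetical choice; any linear T works).
    return np.array([3*x[0] - 4*x[1] + 5*x[2],
                     -x[0] + 2*x[1] + x[2]])

# The j-th column of the matrix A is T(e_j); entries a_ij = (T e_j) . e_i.
A = np.column_stack([T(np.eye(m)[j]) for j in range(m)])

# Check Tx = Ax on a sample vector.
x = np.array([1.0, -2.0, 0.5])
assert np.allclose(T(x), A @ x)
```

The same loop works for any m and n, which is exactly the content of the theorem: a linear map and an n × m matrix carry the same information.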

There are a few more observations that bring out the ‘geometric perspective’ on a
matrix. We already saw the geometric meaning of the rows: the entries of the i-th row
are the coefficients of Ti x = πi ( Tx ) as a first degree polynomial in x1 , . . . , xm . The
columns also admit a geometric meaning. Look at Tei = Aei . This is nothing other than
the i-th column of A.
The matrix A is unique: If Tx = Ax = Bx for all x, then ( A − B) x = 0 for all x ∈ Rm . In
particular, if we take x = ei , then the i-th column of the matrix A − B is the zero vector
in Rn , 1 ≤ i ≤ m. Thus each column of A − B is the zero vector in Rn and hence A − B
is the zero matrix. Hence, A = B.
The matrix A in (3) is known as the matrix of the linear map T. Why is the last
paragraph relevant here?
An observation: Note that the proofs of Lemmas 1-2 indicate the following result.

Theorem 8. Let V be a finite dimensional vector space and let W be a vector space. Let T : V →
W be linear. Let {v1 , . . . , vn } be a basis of V. If we “know” Tvi , say, Tvi = wi , 1 ≤ i ≤ n, then
we “know” Tv for any v ∈ V. If v = ∑i ai vi , then Tv = ∑i ai wi .

There is a ‘converse’ to the last result.

Theorem 9. Let V be a finite dimensional vector space and let W be a vector space. Fix a basis
{vk : 1 ≤ k ≤ n} of V. Let w1 , w2 , . . . , wn be a finite sequence of vectors in W. (Note that w j ’s
need not be distinct.) Define T : V → W by setting
Tv = a1 w1 + a2 w2 + · · · + an wn , where v = ∑k ak vk . (4)

Then T is a linear map from V to W.

Proof. If u, v ∈ V and if u = ∑k ak vk and v = ∑k bk vk , note that u + v = ∑k ( ak + bk )vk
and for λ ∈ R, λv = ∑k (λak )vk . Hence we have

T (u + v) = ∑k ( ak + bk )wk = ∑k ak wk + ∑k bk wk = Tu + Tv
T (λv) = ∑k (λak )wk = λ ∑k ak wk = λTv.

Hence T is linear.
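Theorem 9 can be sketched in NumPy: prescribe images for a basis, compute the coordinates ak of v by solving a small linear system, and extend linearly as in (4). The basis {v1 , v2 } and the images w1 , w2 below are hypothetical choices:

```python
import numpy as np

# A (non-standard) basis {v1, v2} of V = R^2 and prescribed images
# w1, w2 in W = R^3 (all particular vectors here are sample choices).
v1, v2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
w1, w2 = np.array([1.0, 0.0, 2.0]), np.array([0.0, 3.0, 1.0])

def T(x):
    # Write x = a1 v1 + a2 v2 by solving the linear system whose
    # columns are v1, v2, then extend linearly: Tx = a1 w1 + a2 w2.
    a1, a2 = np.linalg.solve(np.column_stack([v1, v2]), x)
    return a1 * w1 + a2 * w2

# Check linearity and the prescribed value on v1.
x, y = np.array([2.0, 0.0]), np.array([-1.0, 3.0])
assert np.allclose(T(x + y), T(x) + T(y))   # additivity
assert np.allclose(T(2.5 * x), 2.5 * T(x))  # homogeneity
assert np.allclose(T(v1), w1)               # T v1 = w1
```

Note that the images w1 , w2 were chosen freely; linearity of T is automatic once the values on a basis are fixed, which is the point of the theorem.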

What is significant in the last theorem is that we can “prescribe” arbitrary vectors
(from W) as the values Tvk and “extend T linearly” by (4) to any vector v. Theorems 8
and 9 substantiate the following important takeaway: when dealing with a linear map
T : V → W, we should not be concerned with explicit expressions for T in terms of
“coordinates”; rather, we should focus on its action on a (conveniently chosen) basis.

To appreciate this, carry out the following exercise: Construct a linear map from R2
to R3 and another from R3 to R2 and express them in terms of coordinates! Use (u, v)
as coordinates for R2 and ( x, y, z) as coordinates for R3 .
Observation 6-(i) suggests we may define A : R3 → R2 and B : R2 → R3 as follows:

A( x, y, z) = ( p1 ( x, y, z), p2 ( x, y, z)) = (3x − 4y + 5z, − x + 2y + z), say. (5)


B(u, v) = (q1 (u, v), q2 (u, v), q3 (u, v)) = (3u + 2v, u − v, u + v), say. (6)

Theorems 8 and 9 suggest that, to define A : R3 → R2 , we may choose arbitrary vectors
w1 , w2 , w3 ∈ R2 and set Aei = wi , 1 ≤ i ≤ 3. For instance, take w1 = (3, −1), w2 = (−4, 2),
w3 = (5, 1). Then

A( x, y, z) = xAe1 + yAe2 + zAe3 = xw1 + yw2 + zw3 = (3x − 4y + 5z, − x + 2y + z).

Voilà! We got the expression for A in (5). Can you carry out a similar exercise for B?
That is, can you define Be1 , Be2 ∈ R3 suitably so that you get the expression for B in (6)?
Here e1 = (1, 0) and e2 = (0, 1) are the standard basis of R2 .
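The column recipe for A can be verified numerically. A short NumPy sketch, using the columns w1 , w2 , w3 from the text and an arbitrary test point:

```python
import numpy as np

# The columns prescribed in the text: A e1 = w1, A e2 = w2, A e3 = w3.
w1, w2, w3 = [3.0, -1.0], [-4.0, 2.0], [5.0, 1.0]
A = np.column_stack([w1, w2, w3])

# Check against the coordinate formula (5) at an arbitrary test point.
x, y, z = 1.0, 2.0, -1.0
lhs = A @ np.array([x, y, z])
rhs = np.array([3*x - 4*y + 5*z, -x + 2*y + z])
assert np.allclose(lhs, rhs)
```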

Remark 10. If I remember correctly, this article was written after recording the following
video. Though it is not an exact transcription, this article captures the spirit of the
lecture! Linear Maps RmtoRn: https://youtu.be/njIqG4-aQr4
