Linear Maps from Rm to Rn
S Kumaresan
[email protected]
December 8, 2023
Abstract
We classify (find) all linear maps T : Rm → Rn . The plan is in the following list
of steps.
1. Find all linear maps from R to R. They are of the form f ( x ) = ax where a = f (1).
2. Find all linear maps from Rn to R. They are of the form f ( x ) = ∑k ak xk where ak = f (ek ).
3. The projection maps πi : Rm → R, 1 ≤ i ≤ m, defined by πi ( x1 , . . . , xm ) = xi are linear. Note that πi ( x ) = x · ei .
4. Composition of linear maps is a linear map.
5. Let T : Rm → Rn be linear. Let Ti := πi ◦ T. We “know” what Ti looks like from Item 2: Ti x = ai1 x1 + · · · + aim xm , 1 ≤ i ≤ n. Note that aij = Tej · ei .
6. Let Tx = (y1 , . . . , yn ). Then yi := πi (y) = πi ( Tx ) = (πi ◦ T )( x ).
7. Matrix representation of T : Rm → Rn . Let
\[
A := \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{pmatrix}.
\qquad \text{Note that} \qquad
Tx = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{pmatrix}.
\]
8. What is Tei ? Geometric meaning of the matrix A of T: Aei is the i-th column of
A.
9. What is the geometric meaning of the rows? The i-th row consists of the coefficients in the expression for Ti x, the i-th coordinate of Tx.
10. Matrix representation of a linear map T : Rm → Rn .
We start with the question: What are all the linear maps from R to R? Let f : R → R
be linear. Given x ∈ R, we write it as x = x · 1 where we treat x on the right side as
a scalar and 1 as a (basic) vector. That is, we consider {1} as a basis for R and express
the vector x as a scalar multiple of the basic vector 1. Since f is linear, f ( x ) = f ( x · 1) =
x f (1) = ax where a := f (1). Thus, if f : R → R is linear, then f is given by f ( x ) = ax
where a = f (1). Conversely, if a ∈ R and if we define f : R → R by f ( x ) = ax, then f
is linear. We also note that f (1) = a. Thus we have arrived at the following lemma.
Lemma 1. A map f : R → R is linear if and only if there exists a ∈ R such that f ( x ) = ax for all x ∈ R; in that case, a = f (1).

Next, let f : Rm → R be linear. Given x = ( x1 , . . . , xm ), we write x = x1 e1 + · · · + xm em . Since f is linear,

f ( x ) = f ( x1 e1 + · · · + xm em )
       = f ( x1 e1 ) + · · · + f ( xm em )
       = x1 f ( e1 ) + · · · + xm f ( em ).

Thus f ( x ) = a1 x1 + · · · + am xm , where ak := f ( ek ).
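For instance, taking m = 3 and choosing, purely for illustration, f ( e1 ) = 2, f ( e2 ) = −1 and f ( e3 ) = 5, the formula above gives

\[
f(x_1, x_2, x_3) = 2x_1 - x_2 + 5x_3 ,
\]

so that, for example, f (1, 1, 1) = 2 − 1 + 5 = 6.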
We claim that the composition of linear maps is linear.
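One way to see this: if T : Rm → Rn and S : Rn → Rp are linear, then for x, y ∈ Rm and λ ∈ R,

\[
(S \circ T)(x + y) = S(Tx + Ty) = S(Tx) + S(Ty), \qquad (S \circ T)(\lambda x) = S(\lambda\, Tx) = \lambda\, S(Tx),
\]

so S ◦ T is linear.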
Now, let a linear map T : Rm → Rn be given. Do you know how to generate n linear maps Ti : Rm → R? Lemmas 4-5 give us a way of generating such maps. How about Ti := πi ◦ T : Rm → R? We know how to express these maps Ti : Rm → R. They are
given by
Ti x := Ti ( x ) = ai1 x1 + ai2 x2 + · · · + aim xm , where aij ∈ R. (1)
But we wanted to know Tx! Observe that if we write Tx = y, then to know y is the
same as knowing its coordinates yi , 1 ≤ i ≤ n. That is easily achieved. Note that
yi = πi (y) = πi ( Tx ) = (πi ◦ T )( x ) = Ti x. Hence, using (1), yi = ai1 x1 + · · · + aim xm for 1 ≤ i ≤ n, that is,

\[
Tx = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{pmatrix}
\begin{pmatrix} x_1 \\ \vdots \\ x_m \end{pmatrix}
= A x, \qquad \text{where } A := (a_{ij}). \tag{3}
\]
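As a concrete illustration (the map here is chosen arbitrarily): let T : R2 → R3 be given by T ( x1 , x2 ) = ( x1 + 2x2 , 3x1 , x2 − x1 ). Then T1 x = x1 + 2x2 , T2 x = 3x1 and T3 x = − x1 + x2 , so that

\[
A = \begin{pmatrix} 1 & 2 \\ 3 & 0 \\ -1 & 1 \end{pmatrix},
\qquad
Tx = \begin{pmatrix} 1 & 2 \\ 3 & 0 \\ -1 & 1 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.
\]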
There are a few more observations that bring out the ‘geometric perspective’ of a matrix. We already saw the geometric meaning of the rows. The entries of the i-th row are the coefficients in the expression (1) for Ti x, that is, of the i-th coordinate of Tx ∈ Rn when Tx is expressed as a linear combination of the standard basic vectors of Rn . The columns also admit a geometric meaning. Look at Tei = Aei . This is nothing other than the i-th column of A.
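Continuing the illustration above,

\[
Te_1 = T(1, 0) = (1, 3, -1), \qquad Te_2 = T(0, 1) = (2, 0, 1),
\]

which are precisely the two columns of A.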
The matrix A is unique: If Tx = Ax = Bx, then ( A − B)( x ) = 0 for all x ∈ Rm . In
particular, if we take x = ei , then the i-th column of the matrix A − B is the zero vector
in Rn , 1 ≤ i ≤ n. Thus each column of A − B is the zero vector in Rn and hence A − B
is the zero matrix. Hence, A = B.
The matrix A in (3) is known as the matrix of the linear map T. Why is the last
paragraph relevant here?
An observation: Note that the proofs of Lemmas 1-2 indicate the following result.
Theorem 8. Let V be a finite dimensional vector space and let W be a vector space. Let T : V →
W be linear. Let {v1 , . . . , vn } be a basis of V. If we “know” Tvi , say, Tvi = wi , 1 ≤ i ≤ n, then
we “know” Tv for any v ∈ V. If v = ∑i ai vi , then Tv = ∑i ai wi .
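For example (with data chosen arbitrarily for illustration): if T : R2 → R2 is linear and we know that Tv1 = (1, 1) and Tv2 = (0, 2) for the basis v1 = e1 , v2 = e2 , then for v = 3e1 − e2 we immediately get

\[
Tv = 3\,Te_1 - Te_2 = 3(1, 1) - (0, 2) = (3, 1).
\]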
Theorem 9. Let V be a finite dimensional vector space and let W be a vector space. Fix a basis
{vk : 1 ≤ k ≤ n} of V. Let w1 , w2 , . . . , wn be a finite sequence of vectors in W. (Note that w j ’s
need not be distinct.) Define T : V → W by setting
\[
Tv = a_1 w_1 + a_2 w_2 + \cdots + a_n w_n , \qquad \text{where } v = \sum_{k=1}^{n} a_k v_k . \tag{4}
\]
Note that T is well defined, since the coefficients ak of v with respect to the basis {vk } are uniquely determined. To verify linearity, let v = ∑k ak vk and u = ∑k bk vk be vectors in V and let λ ∈ R. Then

\[
T(u + v) = \sum_k (a_k + b_k)\, w_k = \sum_k a_k w_k + \sum_k b_k w_k = Tu + Tv,
\]
\[
T(\lambda v) = \sum_k (\lambda a_k)\, w_k = \lambda \sum_k a_k w_k = \lambda\, Tv.
\]
Hence T is linear.
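To see Theorem 9 in action (with vectors chosen arbitrarily for illustration): take V = R2 with the standard basis {e1 , e2 }, W = R3 , and prescribe w1 = (1, 1, 0), w2 = (0, 2, 5). Then (4) defines the linear map

\[
T(u, v) = u\,(1, 1, 0) + v\,(0, 2, 5) = (u,\; u + 2v,\; 5v).
\]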
What is significant in the last theorem is that we can “prescribe” arbitrary vectors (from W) as the values Tvk and “extend T linearly” by (4) to any vector v. Theorems 8 and 9 substantiate the following important take-away: when dealing with a linear map T : V → W, we should not be concerned with the explicit expressions for T in terms of “coordinates”; rather, we should focus on its action on a (conveniently chosen) basis.
To appreciate this, carry out the following exercise: Construct a linear map from R2
to R3 and another from R3 to R2 and express them in terms of coordinates! Use (u, v)
as coordinates for R2 and ( x, y, z) as coordinates for R3 .
Observation 6-(i) suggests we may define A : R3 → R2 and B : R2 → R3 as follows:
Voilà! We got the expression for A in (5). Can you carry out a similar exercise for B?
That is, can you define Be1 , Be2 ∈ R3 suitably so that you get the expression for B in (6)?
Here e1 = (1, 0) and e2 = (0, 1) are the standard basis vectors of R2 .
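Purely as an illustration of the mechanism (this choice need not be the map B in (6)): if we set Be1 := (1, 0, 2) and Be2 := (0, 1, 3), then extending linearly as in Theorem 9 gives

\[
B(u, v) = u\,(1, 0, 2) + v\,(0, 1, 3) = (u,\; v,\; 2u + 3v).
\]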
Remark 10. If I remember correctly, this article was written after recording the following
video. Though it is not an exact transcription, this article captures the spirit of the
lecture! Linear Maps RmtoRn: https://youtu.be/njIqG4-aQr4