Linear Transformations

$T : V \to W$
Def: Let $V$ and $W$ be vector spaces over the same field $F$. We call a function
$T : V \to W$ a linear transformation from $V$ to $W$ if, for all $x, y \in V$ and $c \in F$,
we have
a. $T(x + y) = T(x) + T(y)$, preserving vector addition, and
b. $T(cx) = cT(x)$, preserving scalar multiplication.
Ex: Rotation by an angle $\theta$ in $\mathbb{R}^2$ is given by $T(a_1, a_2) = (a_1\cos\theta - a_2\sin\theta,\ a_1\sin\theta + a_2\cos\theta)$; its matrix representation is
\[
\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}.
\]
To check linearity, take $(a_1, a_2), (b_1, b_2) \in \mathbb{R}^2$ and $c \in \mathbb{R}$, and show $T(c(a_1, a_2) + (b_1, b_2)) = cT(a_1, a_2) + T(b_1, b_2)$:
\[
c(a_1, a_2) + (b_1, b_2) = (ca_1, ca_2) + (b_1, b_2) = (ca_1 + b_1,\ ca_2 + b_2),
\]
so
\begin{align*}
T(c(a_1, a_2) + (b_1, b_2)) &= T(ca_1 + b_1,\ ca_2 + b_2) \\
&= ((ca_1 + b_1)\cos\theta - (ca_2 + b_2)\sin\theta,\ (ca_1 + b_1)\sin\theta + (ca_2 + b_2)\cos\theta) \\
&= c(a_1\cos\theta - a_2\sin\theta,\ a_1\sin\theta + a_2\cos\theta) + (b_1\cos\theta - b_2\sin\theta,\ b_1\sin\theta + b_2\cos\theta) \\
&= cT(a_1, a_2) + T(b_1, b_2).
\end{align*}
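As a quick numerical sanity check (not part of the original notes), the following NumPy sketch verifies the identity $T(c\mathbf{a} + \mathbf{b}) = cT(\mathbf{a}) + T(\mathbf{b})$ for the rotation matrix; the angle and test vectors are arbitrary choices.

```python
import numpy as np

theta = 0.7                     # an arbitrary angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # rotation matrix

def T(v):
    """Rotation of a vector in R^2 by the angle theta."""
    return R @ v

a = np.array([1.0, 2.0])
b = np.array([-3.0, 0.5])
c = 4.2

# Linearity: T(c*a + b) should equal c*T(a) + T(b) up to rounding error.
print(np.allclose(T(c * a + b), c * T(a) + T(b)))   # True
```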
Ex: Let $V$ denote the set of all real-valued functions defined on the real line
that have derivatives of every order.
Define $T : V \to V$ by $T(f) = f'$, the derivative of $f$. Let $g, h \in V$ and $a \in \mathbb{R}$.
Then $T(ag + h) = (ag + h)' = ag' + h' = aT(g) + T(h)$, so $T$ is linear.
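The same computation can be checked symbolically with SymPy (an illustration, not from the notes; the sample functions $g$ and $h$ are arbitrary smooth functions).

```python
import sympy as sp

x, a = sp.symbols('x a')
g = sp.sin(x) * sp.exp(x)      # two sample smooth functions
h = x**3 + 2*x

lhs = sp.diff(a * g + h, x)              # T(a*g + h)
rhs = a * sp.diff(g, x) + sp.diff(h, x)  # a*T(g) + T(h)

print(sp.simplify(lhs - rhs))  # 0, so the two sides agree
```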
Ex: Let $V = C(\mathbb{R})$, the vector space of continuous real-valued functions on
$\mathbb{R}$. Let $a, b \in \mathbb{R}$ with $a < b$. Define $T : V \to \mathbb{R}$ by $T(f) = \int_a^b f(t)\,dt$ for all $f \in V$.
$T$ is a linear transformation because the definite integral of a linear combination
of functions equals the same linear combination of the definite integrals of the functions:
\[
\int_a^b [cg(t) + h(t)]\,dt = c\int_a^b g(t)\,dt + \int_a^b h(t)\,dt.
\]
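A numerical illustration of the same property (assuming SciPy; the interval and the sample functions are arbitrary choices, not taken from the notes):

```python
from scipy.integrate import quad
import numpy as np

a, b, c = 0.0, 2.0, 3.5
g = np.sin
h = np.exp

# T(f) = definite integral of f over [a, b]
def T(f):
    value, _err = quad(f, a, b)
    return value

lhs = T(lambda t: c * g(t) + h(t))   # T(c*g + h)
rhs = c * T(g) + T(h)                # c*T(g) + T(h)
print(np.isclose(lhs, rhs))          # True
```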
Def: Let $V$ and $W$ be vector spaces, and let $T : V \to W$ be linear. Define the
kernel (or null space) $N(T)$, also written $\ker(T)$, of $T$ to be the set of all vectors $x$ in $V$
such that $T(x) = 0$; that is, $\ker(T) = \{x \in V : T(x) = 0\}$.
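For a matrix map the kernel can be computed numerically. The sketch below (an illustration assuming SciPy; the matrix is an arbitrary example) finds an orthonormal basis of $\ker(T)$ for $T(x) = Ax$.

```python
import numpy as np
from scipy.linalg import null_space

# T(x) = A x as a map R^3 -> R^2; its kernel is the solution set of A x = 0.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

K = null_space(A)              # columns form an orthonormal basis of ker(T)
print(K.shape)                 # (3, 1): here the kernel is a line in R^3
print(np.allclose(A @ K, 0))   # every kernel vector is mapped to 0
```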
Ex: $T : \mathbb{R}^2 \to \mathbb{R}^3$ defined by
\[
T\mathbf{x} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}\mathbf{x},
\qquad
T\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= \begin{bmatrix} x_1 + 2x_2 \\ 3x_1 + 4x_2 \\ 5x_1 + 6x_2 \end{bmatrix}.
\]
Check preservation of vector addition. Let
\[
\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \qquad
\mathbf{x}' = \begin{bmatrix} x_1' \\ x_2' \end{bmatrix}.
\]
Then
\begin{align*}
T(\mathbf{x} + \mathbf{x}')
= T\begin{bmatrix} x_1 + x_1' \\ x_2 + x_2' \end{bmatrix}
= \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}
\begin{bmatrix} x_1 + x_1' \\ x_2 + x_2' \end{bmatrix}
&= \begin{bmatrix} x_1' + 2x_2' + x_1 + 2x_2 \\ 3x_1' + 4x_2' + 3x_1 + 4x_2 \\ 5x_1' + 6x_2' + 5x_1 + 6x_2 \end{bmatrix} \\
&= \begin{bmatrix} (x_1' + x_1) + 2(x_2' + x_2) \\ 3(x_1' + x_1) + 4(x_2' + x_2) \\ 5(x_1' + x_1) + 6(x_2' + x_2) \end{bmatrix},
\end{align*}
while
\[
T\mathbf{x} + T\mathbf{x}'
= \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
+ \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}\begin{bmatrix} x_1' \\ x_2' \end{bmatrix}
= \begin{bmatrix} x_1' + 2x_2' + x_1 + 2x_2 \\ 3x_1' + 4x_2' + 3x_1 + 4x_2 \\ 5x_1' + 6x_2' + 5x_1 + 6x_2 \end{bmatrix},
\]
so $T(\mathbf{x} + \mathbf{x}') = T\mathbf{x} + T\mathbf{x}'$.
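The same additivity (and homogeneity) can be confirmed numerically for this particular matrix; a minimal NumPy sketch, with arbitrarily chosen test vectors:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])          # matrix of T : R^2 -> R^3

def T(v):
    return A @ v

x  = np.array([ 1.0, -2.0])
xp = np.array([ 0.5,  3.0])

# Additivity: T(x + x') should equal T(x) + T(x').
print(np.allclose(T(x + xp), T(x) + T(xp)))  # True
# Homogeneity can be checked the same way:
print(np.allclose(T(2.5 * x), 2.5 * T(x)))   # True
```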
Lemma: Let $T : V \to W$ be a linear transformation. Then $T$ is one-to-one
iff (if and only if) $\ker(T) = \{0\}$.
Proof:
Suppose that $T$ is one-to-one; we need to show that $\ker(T) = \{0\}$. First, $0_V \in \ker(T)$,
because $T0_V = 0_W$. Now we show there is no other element in $\ker(T)$.
Suppose, for contradiction, that there were a nonzero vector $v \in V$ with $v \in \ker(T)$.
Then $Tv = 0 = T0$. Since $T$ is one-to-one, this forces $v = 0$, which is a contradiction.
Conversely, suppose $\ker(T) = \{0\}$ and $Tu = Tv$. Then $T(u - v) = Tu - Tv = 0$, so
$u - v \in \ker(T) = \{0\}$; hence $u = v$ and $T$ is one-to-one.
This contradicts the assumption that $L$ is linearly independent. Hence $n > m$;
that is, $n \geq m + 1$. Say $b_1$ is nonzero; then
\[
u_1 = \left(-\tfrac{a_1}{b_1}\right)v_1 + \left(-\tfrac{a_2}{b_1}\right)v_2 + \dots + \left(-\tfrac{a_m}{b_1}\right)v_m + \tfrac{1}{b_1}v_{m+1}
+ \left(-\tfrac{b_2}{b_1}\right)u_2 + \dots + \left(-\tfrac{b_{n-m}}{b_1}\right)u_{n-m}.
\]
Let $H = \{u_2, \dots, u_{n-m}\}$. Then $u_1 \in \operatorname{span}(L \cup H)$, and because $v_1, \dots, v_m, u_2, \dots, u_{n-m}$
are in $\operatorname{span}(L \cup H)$, it follows that
\[
\{v_1, \dots, v_m, u_1, \dots, u_{n-m}\} \subseteq \operatorname{span}(L \cup H).
\]
Because $\{v_1, \dots, v_m, u_1, \dots, u_{n-m}\}$ generates $V$ and is contained in $\operatorname{span}(L \cup H)$, we get $\operatorname{span}(L \cup H) = V$, by the following theorem:
The span of any subset $S$ of a vector space $V$ is a subspace of $V$ that contains $S$.
$\ker(T)$ has a basis $\{v_1, \dots, v_k\}$ of $k$ elements. Since this basis lies in $\ker(T)$,
it also lies in $V$, and its vectors are linearly independent; therefore it
can be extended to a basis of $V$, which must have $n = \dim(V)$ elements. A basis for $V$ is
then $\{v_1, \dots, v_n\}$; that is, we add $n - k$ extra basis vectors $v_{k+1}, \dots, v_n$.
Since $v_{k+1}, \dots, v_n$ lie in $V$, the elements $Tv_{k+1}, \dots, Tv_n$ lie in $\operatorname{image}(T)$. We
claim that $\{Tv_{k+1}, \dots, Tv_n\}$ is a basis for $\operatorname{image}(T)$, so that $\operatorname{image}(T)$ has
dimension $n - k$.
To verify that $\{Tv_{k+1}, \dots, Tv_n\}$ forms a basis, we need to show that these vectors span
$\operatorname{image}(T)$ and are linearly independent. First, every vector in $\operatorname{image}(T)$ is a linear
combination of $\{Tv_{k+1}, \dots, Tv_n\}$: pick a vector $w$ from $\operatorname{image}(T)$; we show
$w$ is a linear combination of $\{Tv_{k+1}, \dots, Tv_n\}$. By definition of the image, $w = Tv$ for some $v \in V$.
Writing $v = a_1v_1 + \dots + a_nv_n$ and using $Tv_1 = \dots = Tv_k = 0$ gives $w = Tv = a_{k+1}Tv_{k+1} + \dots + a_nTv_n$.
Now we show that $\{Tv_{k+1}, \dots, Tv_n\}$ is linearly independent. Suppose, for
contradiction, that these vectors are dependent, so that
\[
a_{k+1}Tv_{k+1} + \dots + a_nTv_n = 0 \quad \text{for some scalars } a_{k+1}, \dots, a_n \text{ which are not all zero.}
\]
By linearity, $T(a_{k+1}v_{k+1} + \dots + a_nv_n) = 0$.
Because $a_{k+1}v_{k+1} + \dots + a_nv_n \in \ker(T)$ and $\ker(T)$ is spanned by $\{v_1, \dots, v_k\}$, we
have
\[
a_{k+1}v_{k+1} + \dots + a_nv_n = a_1v_1 + \dots + a_kv_k \quad \text{for some scalars } a_1, \dots, a_k,
\]
so
\[
-a_1v_1 - \dots - a_kv_k + a_{k+1}v_{k+1} + \dots + a_nv_n = 0.
\]
Since $\{v_1, \dots, v_n\}$ is linearly independent, all coefficients $a_i$ must
be zero. This contradicts the hypothesis that $a_{k+1}, \dots, a_n$ are not all zero. Thus $\{Tv_{k+1}, \dots, Tv_n\}$
must be linearly independent.
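The resulting dimension count, $\dim V = \operatorname{nullity}(T) + \operatorname{rank}(T)$, can be illustrated numerically for a matrix map; a small sketch assuming NumPy/SciPy, with an arbitrary sample matrix:

```python
import numpy as np
from scipy.linalg import null_space

# A sample matrix viewed as a linear map T : R^4 -> R^3.
A = np.array([[1, 2, 3, 4],
              [2, 4, 6, 8],    # a multiple of the first row, so the rank drops
              [0, 1, 0, 1]])

n = A.shape[1]                          # n = dim of the domain
rank = np.linalg.matrix_rank(A)         # dim image(T)
nullity = null_space(A).shape[1]        # dim ker(T): number of null-space basis vectors

print(rank, nullity, rank + nullity == n)   # 2 2 True
```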
\[
0 = a_1v_1 + \dots + a_nv_n.
\]
But since $\{v_1, \dots, v_n\}$ is linearly independent, this implies that $a_1, \dots, a_n$ are all zero.
Ex: The sequence $((1,0,0), (0,1,0), (0,0,1))$ is an ordered basis for $\mathbb{R}^3$; the
sequence $((0,1,0), (1,0,0), (0,0,1))$ is a different ordered basis for $\mathbb{R}^3$.
Ex: Let $v := (3, 4, 5)$. If $\beta := ((1,0,0), (0,1,0), (0,0,1))$, then
\[
[v]_\beta = \begin{bmatrix} 3 \\ 4 \\ 5 \end{bmatrix}
\]
and $(3,4,5) = 3(1,0,0) + 4(0,1,0) + 5(0,0,1)$.
Ex: In $P_2(\mathbb{R})$, let $f = 3x^2 + 4x + 6$. If $\beta := (1, x, x^2)$, then
\[
[f]_\beta = \begin{bmatrix} 6 \\ 4 \\ 3 \end{bmatrix}.
\]
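In general, finding $[v]_\beta$ amounts to solving a linear system whose coefficients are the basis vectors. A small NumPy sketch (the basis here is an arbitrary illustrative choice, not one from the notes):

```python
import numpy as np

# Basis beta = ((1,0,0), (1,1,0), (0,1,1)) of R^3, chosen for illustration.
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([1.0, 1.0, 0.0])
b3 = np.array([0.0, 1.0, 1.0])
B = np.column_stack([b1, b2, b3])   # basis vectors as columns

v = np.array([3.0, 4.0, 5.0])
c = np.linalg.solve(B, v)           # coordinate vector [v]_beta
print(c)                            # [ 4. -1.  5.]
print(np.allclose(c[0]*b1 + c[1]*b2 + c[2]*b3, v))  # the coordinates reproduce v
```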
With the operations $(S + T)(v) = Sv + Tv$ and $(cT)(v) = c(Tv)$, we have:
Lemma: The space $\mathcal{L}(V, W)$ of linear transformations from $V$ to $W$ is a subspace of $\mathcal{F}(V, W)$, the space of all
functions from $V$ to $W$. In particular, $\mathcal{L}(V, W)$ is a vector space.
Ex: Pick a $v \in P_3(\mathbb{R})$, say $v = 3x^2 + 7x + 5$. With $\beta = (1, x, x^2, x^3)$,
\[
[v]_\beta = \begin{bmatrix} 5 \\ 7 \\ 3 \\ 0 \end{bmatrix}.
\]
Then we have $Tv = \frac{d}{dx}v = 6x + 7$, so that
\[
[Tv] = \begin{bmatrix} 7 \\ 6 \\ 0 \end{bmatrix}
\]
relative to the ordered basis $(1, x, x^2)$.
General basis: write
\[
v = x_1v_1 + \dots + x_nv_n.
\]
Let us apply $T$ to both sides:
\[
Tv = x_1Tv_1 + \dots + x_nTv_n.
\]
From the formula for $[Tv]$ we have
$Tv = y_1w_1 + \dots + y_mw_m$.
The vectors $Tv_1, \dots, Tv_n$ lie in $W$, and so they are linear combinations of
$w_1, \dots, w_m$:
\begin{align*}
Tv_1 &= a_{11}w_1 + a_{21}w_2 + \dots + a_{m1}w_m \\
Tv_2 &= a_{12}w_1 + a_{22}w_2 + \dots + a_{m2}w_m \\
&\;\;\vdots \\
Tv_n &= a_{1n}w_1 + a_{2n}w_2 + \dots + a_{mn}w_m
\end{align*}
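For the derivative example above, this recipe produces a concrete matrix: column $j$ of $[T]$ holds the coordinates of $T$ applied to the $j$-th basis vector. A minimal NumPy sketch (not from the notes), using the monomial bases used earlier:

```python
import numpy as np

# Matrix of T = d/dx : P_3(R) -> P_2(R) with respect to beta = (1, x, x^2, x^3)
# and gamma = (1, x, x^2).  Since d/dx of 1, x, x^2, x^3 is 0, 1, 2x, 3x^2,
# those coordinate vectors form the columns below.
M = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3]])

v_coords = np.array([5, 7, 3, 0])   # [v]_beta for v = 3x^2 + 7x + 5
print(M @ v_coords)                 # [7 6 0] = [Tv]_gamma, i.e. Tv = 6x + 7
```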
Ex: Let
\[
T\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} x_1 - x_2 \\ x_1 + x_2 \end{bmatrix},
\qquad
S\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} 2y_1 \\ y_2 \end{bmatrix}.
\]
Then $[S \circ T] = [S][T]$:
\[
(S \circ T)\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= S\begin{bmatrix} x_1 - x_2 \\ x_1 + x_2 \end{bmatrix}
= \begin{bmatrix} 2(x_1 - x_2) \\ x_1 + x_2 \end{bmatrix}
= \begin{bmatrix} 2 & -2 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix},
\]
while
\[
[T] = \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}, \qquad
[S] = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}, \qquad
[S][T] = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}
= \begin{bmatrix} 2 & -2 \\ 1 & 1 \end{bmatrix}.
\]
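A quick NumPy check (illustration only; the test vector is arbitrary) that applying $S$ after $T$ agrees with multiplying by the product matrix $[S][T]$:

```python
import numpy as np

T = np.array([[1, -1],
              [1,  1]])     # matrix of T
S = np.array([[2,  0],
              [0,  1]])     # matrix of S

x = np.array([3.0, -2.0])

# Composing the maps vs. multiplying by the product of the matrices.
print(np.allclose(S @ (T @ x), (S @ T) @ x))   # True
print(S @ T)                                   # [[ 2 -2], [ 1  1]]
```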
Lemma: Let $T : V \to W$ be a linear transformation, and let $S : W \to V$
and $S' : W \to V$ both be inverses of $T$. Then $S = S'$.
Proof: $S = S I_W = S(TS') = (ST)S' = I_V S' = S'$.
Lemma: Two finite-dimensional spaces $V$ and $W$ are isomorphic if and only
if $\dim(V) = \dim(W)$.
Proof: If $V$ and $W$ are isomorphic, then there is an invertible linear transformation
$T : V \to W$ from $V$ to $W$, which, by the lemma on the one-to-one and
onto properties of invertible linear transformations, is one-to-one and onto. Since
$T$ is one-to-one, $\operatorname{nullity}(T) = 0$. Since $T$ is onto, $\operatorname{rank}(T) = \dim(W)$. By the
dimension theorem we have $\dim(V) = \dim(W)$.
Now suppose that $\dim(V)$ and $\dim(W)$ are equal; say $\dim(V) =
\dim(W) = n$. Then $V$ has a basis $(v_1, \dots, v_n)$, and $W$ has a basis $(w_1, \dots, w_n)$.
We can find a linear transformation $T : V \to W$ such that $Tv_1 = w_1, \dots, Tv_n = w_n$.
The set $\{w_1, \dots, w_n\}$ must span $\operatorname{image}(T)$, since the image is spanned by $Tv_1, \dots, Tv_n$. But since $w_1, \dots, w_n$ span $W$, we have
$\operatorname{image}(T) = W$, so $T$ is onto; by the dimension theorem $\operatorname{nullity}(T) = 0$, so $T$ is also one-to-one, and hence is an
isomorphism. Thus $V$ and $W$ are isomorphic.
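A concrete instance of this lemma is the coordinate map from $P_2(\mathbb{R})$ to $\mathbb{R}^3$, both of dimension 3. The sketch below (an illustration, not from the notes, with polynomials represented simply by coefficient lists) checks that the map respects the operations and is invertible:

```python
import numpy as np

# phi : P_2(R) -> R^3 sends a0 + a1 x + a2 x^2 to (a0, a1, a2).
def phi(p):
    return np.array(p, dtype=float)

def phi_inv(v):
    return list(v)

p = [6.0, 4.0, 3.0]     # 6 + 4x + 3x^2
q = [1.0, 0.0, -2.0]    # 1 - 2x^2
c = 2.5

# phi is linear: phi(c*p + q) == c*phi(p) + phi(q) ...
print(np.allclose(phi([c*a + b for a, b in zip(p, q)]), c*phi(p) + phi(q)))  # True
# ... and phi_inv undoes phi, so phi is invertible: an isomorphism.
print(phi_inv(phi(p)) == p)   # True
```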
Ex: If $A = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 4 \end{bmatrix}$, then
$A^{-1} = \begin{bmatrix} \tfrac{1}{2} & 0 & 0 \\ 0 & \tfrac{1}{3} & 0 \\ 0 & 0 & \tfrac{1}{4} \end{bmatrix}$.
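A quick NumPy confirmation of this example (illustration only):

```python
import numpy as np

A = np.diag([2.0, 3.0, 4.0])
A_inv = np.linalg.inv(A)

print(A_inv)                                   # diag(1/2, 1/3, 1/4)
print(np.allclose(A @ A_inv, np.eye(3)))       # True: A A^{-1} = I
```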
Theorem: Let $V$ be a vector space with finite ordered basis $\beta$, and let $W$
be a vector space with finite ordered basis $\gamma$. Then a linear transformation
$T : V \to W$ is invertible if and only if the matrix $[T]_\beta^\gamma$ is invertible. Furthermore,
$([T]_\beta^\gamma)^{-1} = [T^{-1}]_\gamma^\beta$.
Now suppose that $[T]_\beta^\gamma$ is invertible, with inverse $B$. We will prove that there
exists a linear transformation $S : W \to V$ with $[S]_\gamma^\beta = B$.
Given such an $S$, we have
\[
[ST]_\beta = [S]_\gamma^\beta [T]_\beta^\gamma = B[T]_\beta^\gamma = [I_V]_\beta,
\]
and hence $ST = I_V$. A similar argument gives $TS = I_W$, so $S$ is the
inverse of $T$ and $T$ is invertible.
It remains to show that we can find a transformation $S : W \to V$ with
$[S]_\gamma^\beta = B$. Write $\beta = (v_1, \dots, v_n)$ and $\gamma = (w_1, \dots, w_m)$. Then we want a linear
transformation $S : W \to V$ such that
\begin{align*}
Sw_1 &= B_{11}v_1 + \dots + B_{n1}v_n \\
Sw_2 &= B_{12}v_1 + \dots + B_{n2}v_n \\
&\;\;\vdots \\
Sw_m &= B_{1m}v_1 + \dots + B_{nm}v_n
\end{align*}
Corollary: An $m \times n$ matrix $A$ is invertible if and only if the linear transformation
$L_A : \mathbb{R}^n \to \mathbb{R}^m$ is invertible. Furthermore, the inverse of $L_A$ is $L_{A^{-1}}$.
Proof: If $\beta$ is the standard basis for $\mathbb{R}^n$ and $\gamma$ is the standard basis for $\mathbb{R}^m$,
then
\[
[L_A]_\beta^\gamma = A.
\]
By the theorem above, $A$ is invertible if and only if $L_A$ is. Also, from the
above theorem we have
\[
[(L_A)^{-1}]_\gamma^\beta = ([L_A]_\beta^\gamma)^{-1} = A^{-1} = [L_{A^{-1}}]_\gamma^\beta,
\]
and hence $(L_A)^{-1} = L_{A^{-1}}$, as desired.