
MATH 110: LINEAR ALGEBRA

HOMEWORK #7
FARMER SCHLUTZENBERG

Date: October 20.
2.6: Dual Spaces
Problem 1. Note that vector space really means finite-dim vector space in these questions.
(a) F. The codomain of a linear functional must be the scalar field.
(b) T. The dimension of a field $F$ considered as a vector space over itself is 1, so if $T \colon F \to F$ is linear, and $\beta, \gamma$ are bases for $F$, then $[T]_\beta^\gamma$ is $\dim(F) \times \dim(F)$, which is $1 \times 1$.
(c) T. This is a corollary to Theorem 2.24: the basis $\beta^*$ has the same number of elements as $\beta$. If $V$ is infinite-dimensional, this may be false, but the proof of this fact isn't within the scope of the course.
(d) T/F. This depends on how you interpret ``is''. Certainly $V$ is isomorphic to the dual of $V^*$ (Theorem 2.26), so in that sense the statement is true. However $V$ may not literally be a set of linear functionals, and in this sense, the statement is false. Again, if $V$ is infinite-dimensional this can be false, and the proof of this is also outside the scope of the course.
(e) F. $\beta$ is ordered, so if $\beta = \{e_1, \dots, e_n\}$ and $\beta^* = \{e^*_1, \dots, e^*_n\}$, where $e^*_i$ is the $i$th coordinate projection functional, then $T(\beta) = \beta^*$ means $T(e_i) = e^*_i$ for each $i$, i.e. $T$ must preserve the order of the basis elements. So by Theorem 2.6, there is only one linear $T$ such that $T(\beta) = \beta^*$ (and this is an isomorphism). But if $n > 1$, to get extra isomorphisms between $V$ and $V^*$, we could for example set $T(e_i) = e^*_{i+1}$ (and $T(e_n) = e^*_1$), and extend $T$ to all of $V$ linearly (by Theorem 2.6). We could also choose $T$ so that $T(e_1) \notin \beta^*$, for example. If $n = 1$ and $\operatorname{char}(F) \neq 2$, there are also multiple isomorphisms.
(f) T. $T^t \colon W^* \to V^*$, so $(T^t)^t \colon V^{**} \to W^{**}$.
(g) T. Let $\phi \colon V \to W$ be an isomorphism. Then $\phi^t$ is an isomorphism between $W^*$ and $V^*$. To see $\phi^t$ is one-one, suppose $\phi^t(g_1) = \phi^t(g_2)$, so $g_1 \circ \phi = g_2 \circ \phi$, so $g_1$ and $g_2$ must agree (give the same outputs) on all of $\operatorname{Rg}(\phi)$; but $\phi$ is onto, so $g_1 = g_2$. To see $\phi^t$ is onto, let $f \in V^*$. Setting $g = f \circ \phi^{-1}$, as $\phi^{-1} \colon W \to V$, $g \in W^*$. Then $\phi^t(g) = f \circ \phi^{-1} \circ \phi = f$.
(h) F. The derivative of a function is another function, but the codomain of a linear functional is the scalar field.
Problem 10. Here $V = P_n(F)$ and $c_0, \dots, c_n \in F$ are all distinct.
(a) Firstly note that $f_i$ is linear, as $f_i(ap + q) = (ap + q)(c_i) = a p(c_i) + q(c_i)$. Also, the codomain of $f_i$ is $F$, so $f_i \in V^*$. We want to see that $\{f_0, \dots, f_n\}$ is an ordered basis for $V^*$. We have $\dim(V^*) = \dim(V) = n + 1$ (by Theorem 2.24), so as long as all the $f_i$'s are distinct, we have the right number of basis elements. So suppose
$$a_0 f_0 + \dots + a_n f_n = 0.$$
Following the hint in the book, and generalizing, let
(1) $p(x) = (x - c_0)(x - c_1) \cdots (x - c_{i-1})(x - c_{i+1}) \cdots (x - c_n)$.
Then
$$(a_0 f_0 + \dots + a_n f_n)(p) = 0(p) = 0;$$
(2) $a_0 f_0(p) + \dots + a_n f_n(p) = 0$.
But plugging the $c_j$'s in for $x$ in (1), $p(c_j) = 0$ if $j \neq i$, and
$$p(c_i) = (c_i - c_0)(c_i - c_1) \cdots (c_i - c_{i-1})(c_i - c_{i+1}) \cdots (c_i - c_n).$$
The $c_j$'s are all distinct, so $p(c_i)$ is the product of non-zero elements of $F$, so $p(c_i) \neq 0$. But then (2) gives
$$0 + \dots + 0 + a_i p(c_i) + 0 + \dots + 0 = 0,$$
i.e. $a_i p(c_i) = 0$, so $a_i = 0$. Thus, for each $i$, $a_i = 0$.
Therefore the $f_i$'s are all distinct (or it's easy to get a non-trivial linear combination), and are linearly independent, and so form a basis for $V^*$, by the earlier remarks on dimensions.
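As a quick numerical sanity check (a minimal Python/numpy sketch, not part of the proof; the nodes below are made up), each $f_i$ is determined by its values on the standard basis $\{1, x, \dots, x^n\}$, namely the row $(1, c_i, \dots, c_i^n)$, so independence of the $f_i$'s amounts to invertibility of the resulting Vandermonde matrix:
\begin{verbatim}
import numpy as np

# Each evaluation functional f_i corresponds to the row (1, c_i, ..., c_i^n)
# of its values on the standard basis {1, x, ..., x^n} of P_n(F).
n = 3
c = np.array([0.0, 1.0, 2.5, 4.0])          # distinct nodes c_0, ..., c_n
V = np.vander(c, N=n + 1, increasing=True)  # V[i, j] = c_i ** j

# A non-zero determinant confirms the f_i's are linearly independent,
# matching part (a).
print(np.linalg.det(V))  # non-zero, since the c_i are distinct
\end{verbatim}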
(b) Using the corollary to Theorem 2.26, let $\beta = \{p_0, \dots, p_n\}$ be an ordered basis for $V$ such that $\beta^* = \{f_0, \dots, f_n\}$ is the dual basis to $\beta$. Then by definition of the dual basis, $f_j(p_i) = \delta_{ij}$, so $p_i(c_j) = \delta_{ij}$.
To see the $p_i$'s are unique, suppose $p_i^1$ and $p_i^2$ have the property that $p_i^k(c_j) = \delta_{ij}$ for $k = 1, 2$. Let $p = p_i^1 - p_i^2$. Then clearly $f_j(p) = p(c_j) = 0$ for each $j$. But by (a), the $f_j$'s span $V^*$, so $f(p) = 0$ for each $f \in V^*$, so by the lemma to Theorem 2.26, $p = 0$, and $p_i^1 = p_i^2$.
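Concretely, these $p_i$'s are the Lagrange interpolation polynomials $p_i(x) = \prod_{j \neq i} (x - c_j)/(c_i - c_j)$ (part (a)'s polynomial, scaled by $p(c_i)^{-1}$). A minimal numpy sketch constructing them and checking $p_i(c_j) = \delta_{ij}$ (the helper name lagrange_basis is mine, and is reused in the sketches below):
\begin{verbatim}
import numpy as np
from numpy.polynomial import Polynomial

def lagrange_basis(c):
    """Polynomials p_0, ..., p_n with p_i(c_j) = delta_ij."""
    ps = []
    for i, ci in enumerate(c):
        p = Polynomial([1.0])
        for j, cj in enumerate(c):
            if j != i:
                p *= Polynomial([-cj, 1.0]) / (ci - cj)  # (x - c_j)/(c_i - c_j)
        ps.append(p)
    return ps

c = [0.0, 1.0, 2.5, 4.0]
# The evaluation matrix [p_i(c_j)] should be the identity.
print(np.round([[p(cj) for cj in c] for p in lagrange_basis(c)], 12))
\end{verbatim}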
(c) As $\beta$ (from (b)) forms a basis for $V$, we have $V = \operatorname{span}(\beta)$. If $b_i \in F$, then
(3) $\Bigl( \sum_{i=0}^{n} b_i p_i \Bigr)(c_j) = \sum_{i=0}^{n} b_i p_i(c_j) = \sum_{i=0}^{n} b_i \delta_{ij} = b_j$.
So if we set $q = \sum_{i=0}^{n} a_i p_i$, then $q(c_j) = a_j$ follows. The uniqueness of $q$ with this property also follows, as if $p \neq q$ then $p$ is not expressed by the same linear combination of $\beta$ as $q$ is.
(d) Let $p \in V$. Let $b_i \in F$ be such that $p = \sum_{i=0}^{n} b_i p_i$. Then by (3), $p(c_j) = b_j$, so in fact
$$p = \sum_{i=0}^{n} p(c_i)\, p_i.$$
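In code, this is just the statement that interpolating $p$ at the $c_i$'s recovers $p$ exactly; a quick sketch, assuming the hypothetical lagrange_basis helper from the part (b) sketch is in scope:
\begin{verbatim}
from numpy.polynomial import Polynomial

# p = sum_i p(c_i) * p_i, reusing lagrange_basis from the part (b) sketch.
c = [0.0, 1.0, 2.5, 4.0]
p = Polynomial([2.0, -1.0, 0.0, 3.0])  # 2 - x + 3x^3, an arbitrary example
recon = sum(p(ci) * pi for ci, pi in zip(c, lagrange_basis(c)))
print(p)
print(recon)  # the same polynomial, up to floating-point rounding
\end{verbatim}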
(e) Using (d),
$$\int_a^b p(t)\,dt = \int_a^b \Bigl( \sum_{i=0}^{n} p(c_i) p_i \Bigr)(t)\,dt = \int_a^b \sum_{i=0}^{n} p(c_i) p_i(t)\,dt = \sum_{i=0}^{n} \Bigl( p(c_i) \int_a^b p_i(t)\,dt \Bigr) = \sum_{i=0}^{n} p(c_i)\, d_i,$$
where $d_i = \int_a^b p_i(t)\,dt$. Note the linearity of integration has been used, and note that the $p(c_i)$'s are just scalars, so they are pulled outside the integral.
Suppose $c_i = a + (b - a)\frac{i}{n}$ and $a < b$. If $n = 1$, then setting $p_0(x) = \frac{b - x}{b - a}$ and $p_1(x) = \frac{x - a}{b - a}$, then $p_0, p_1$ are the unique polynomials (of degree 1) of part (b). Then we get $d_0 = d_1 = \frac{b - a}{2}$, and substituting this in (e) gives the trapezoidal rule. The $n = 2$ case is similar.
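The weights $d_i$ are also easy to compute numerically; a sketch (again assuming the hypothetical lagrange_basis helper from the part (b) sketch) that reproduces the trapezoidal weights for $n = 1$ and, for $n = 2$, the weights of Simpson's rule:
\begin{verbatim}
# Reuses lagrange_basis from the part (b) sketch.
def quadrature_weights(a, b, n):
    """d_i = integral of p_i over [a, b], with nodes c_i = a + (b - a)*i/n."""
    c = [a + (b - a) * i / n for i in range(n + 1)]
    weights = []
    for p in lagrange_basis(c):
        P = p.integ()               # an antiderivative of p_i
        weights.append(P(b) - P(a))
    return weights

print(quadrature_weights(0.0, 1.0, 1))  # [0.5, 0.5]: the trapezoidal rule
print(quadrature_weights(0.0, 1.0, 2))  # [1/6, 4/6, 1/6]: Simpson's rule
\end{verbatim}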
Matrices
For several problems in this section there are various possible solutions, so in several cases
I give a couple of these.
Problem 2.
Lower triangular.
Let $A$ and $B$ be lower triangular, $m \times n$ and $n \times p$ respectively. Let $L = AB$, so $L$ is $m \times p$. We want to see that $L$ is lower triangular.
Solution 1:
We can write each of $A$ and $B$ as $2 \times 2$ block matrices, and calculate $L = AB$ using the lemma covered in class on multiplication of block matrices (also generalized in problem 10 of this homework). We need to see that if $k < j$ then $L_{kj} = 0$. But notice that if $k < j$ and we write $L$ as a block matrix
$$L = \begin{pmatrix} L_{11} & L_{12} \\ L_{21} & L_{22} \end{pmatrix},$$
where $L_{11}$ is $k \times k$, then the $(k, j)$th entry of $L$ is within $L_{12}$, and $L_{12}$ lies above the main diagonal. (Notice if $k = m$ then the partition would actually be $1 \times 2$. I'll assume $k < m$ as this is the more complicated case. It's easy to adapt the following argument to the $k = m$ case, by setting all partitions of $A$ to be $1 \times x$.) So if we can show that $L_{12} = 0$, we will have proven $L$ is lower triangular. We will do this.
We want to partition $A$ and $B$ into block matrices so that, block-multiplying them, we produce $L$, with the given partition. There are two cases:
Case 1: $k < n$.
Partition $A$ and $B$ into $2 \times 2$ block matrices with submatrices $A_{ij}$, $B_{ij}$, where $A_{11}$ and $B_{11}$ are both $k \times k$ (therefore $A_{21}$ is $(m - k) \times k$, etc). (Note $B$ is $n \times p$ and $k < p$, so this partition will be $2 \times 2$ for $B$.) It's easy to check that because $AB$ makes sense, block-multiplication makes sense with these partitions (if you've printed this out you should write the dimensions on the diagram below to check it all works; I can't work out how to do it with good alignment). Moreover, the partition induced on $L$ by block-multiplication agrees with the above partition on $L$. Because $A_{11}$ and $B_{11}$ are square and $A$ and $B$ are lower triangular, we have $A_{12} = B_{12} = 0$. So block-multiplying, we have
$$\begin{pmatrix} A_{11} & 0 \\ A_{21} & A_{22} \end{pmatrix} \begin{pmatrix} B_{11} & 0 \\ B_{21} & B_{22} \end{pmatrix} = \begin{pmatrix} L_{11} & L_{12} \\ L_{21} & L_{22} \end{pmatrix},$$
so
$$L_{12} = A_{11} 0 + 0 B_{22} = 0.$$
Case 2: $k \geq n$.
In this case we will partition $A$ as a $2 \times 1$ block matrix where $A_{11}$ is $k \times n$, and $B$ as a $1 \times 2$ block matrix with $B_{11}$ of size $n \times k$. Again this makes sense for block-multiplication and agrees with the established partition of $L$. As $n \leq k$ we have $B_{12} = 0$, so we get $L_{12} = A_{11} 0 = 0$, as required.
So we have shown what we needed, and therefore $L$ is lower triangular.
Solution 2:
Let $i < j$; we just need $(AB)_{ij} = 0$. But
$$(AB)_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj} = \sum_{k=1}^{i} A_{ik} B_{kj} + \sum_{k=i+1}^{n} A_{ik} B_{kj}.$$
In the last expression, the left summand is 0 because if $k \leq i$ then $k < j$, so $B_{kj} = 0$ as $B$ is lower triangular. Similarly, the right summand is 0 because if $i + 1 \leq k$, then $A_{ik} = 0$ because $i < k$ and $A$ is lower triangular.
Upper triangular. If $A$ and $B$ are upper triangular, then $(AB)^t = B^t A^t$, and $B^t$ and $A^t$ are lower triangular, so by the first part $(AB)^t$ is lower triangular, so $AB$ is upper triangular.
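A minimal numpy check of Problem 2 (sizes and seed are arbitrary; np.tril zeroes the entries above the main diagonal, also for rectangular matrices):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Random rectangular lower triangular matrices, m x n and n x p.
m, n, p = 4, 3, 5
A = np.tril(rng.standard_normal((m, n)))
B = np.tril(rng.standard_normal((n, p)))
L = A @ B

# L equals its own lower triangular part, i.e. L is lower triangular.
print(np.allclose(L, np.tril(L)))  # True
\end{verbatim}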
Problem 3.
Lower triangular.
Let $A$ be lower triangular and invertible, $n \times n$.
Solution 1:
Let $a < b$. We need to see $A^{-1}_{ab} = 0$. Consider the partition of $B = A^{-1}$ into a $2 \times 2$ block matrix with partition dimensions $n_j$, $p_k$, where $n_1 = p_1 = a$ (so $n_2 = p_2 = n - a$). Note that the $(a, b)$th entry of $B$ lies in $B_{12}$, so we will be done if we can show $B_{12} = 0$.
Let $A$ also be partitioned as $2 \times 2$, with the same partition dimensions. Note that as $n_1 = p_1$, $A_{11}$ and $B_{11}$ are square and $A_{12}$ (and $B_{12}$) lies above the diagonal, so $A_{12} = 0$. Moreover, the partitions are suitable for block-multiplying, giving
$$\begin{pmatrix} A_{11} & 0 \\ A_{21} & A_{22} \end{pmatrix} \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix} = \begin{pmatrix} I_a & 0 \\ 0 & I_{n-a} \end{pmatrix}.$$
The partitioning of $AB = I_n$ is again the same, and the upper-left and lower-right blocks are square, $a \times a$ and $(n - a) \times (n - a)$ respectively, so it clearly has the form shown. But this gives
$$A_{11} B_{11} + 0 B_{21} = I_a \implies A_{11} B_{11} = I_a.$$
But then by problem 2.4.10 from homework 6, this implies $A_{11}$ is invertible. (One could also use the fact that a square lower triangular matrix with non-zero diagonal entries is invertible.) We also get
$$A_{11} B_{12} + 0 B_{22} = 0 \implies A_{11} B_{12} = 0.$$
But then left-multiplying by $(A_{11})^{-1}$, we get $B_{12} = 0$.
Solution 2:
Suppose $k < j$ are such that $A^{-1}_{kj} \neq 0$. We may assume that $A^{-1}_{ij} = 0$ for $i < k$ (by reducing $k$ if necessary). But then
$$(A A^{-1})_{kj} = \sum_{i=1}^{n} A_{ki} A^{-1}_{ij} = \sum_{i<k} A_{ki} A^{-1}_{ij} + A_{kk} A^{-1}_{kj} + \sum_{i>k} A_{ki} A^{-1}_{ij}.$$
In the last term, the left summand is 0 because $A^{-1}_{ij} = 0$ for $i < k$. The right summand is 0 because $A$ is lower triangular. But as $A$ is also invertible, $A_{kk} \neq 0$. But then $(A A^{-1})_{kj} \neq 0$, contradicting $I_{kj} = 0$.
Solution 3:
Let $\beta = \{e_1, \dots, e_n\}$ be the standard ordered basis for $F^n$. Note first that because $L_A(e_j) = \sum_{i=1}^{n} A_{ij} e_i$ and $A_{ij} = 0$ for $i < j$, we actually have
(4) $L_A(e_j) = \sum_{i=j}^{n} A_{ij} e_i$; so $L_A(e_j) \in \operatorname{span}(e_j, \dots, e_n)$.
Then letting $W_j = \operatorname{span}(e_j, \dots, e_n)$, it's easy to see that $W_j$ is $L_A$-invariant for each $j$.
Assuming $A^{-1}$ is not lower triangular, choose $k, j$ with $k < j$ as in Solution 2, and let $a = A^{-1}_{kj} \neq 0$. Then
$$L_{A^{-1}}(e_j) = \sum_{i=1}^{n} A^{-1}_{ij} e_i = a e_k + \sum_{i=k+1}^{n} A^{-1}_{ij} e_i$$
by the choice of $k$. Let $w = \sum_{i=k+1}^{n} A^{-1}_{ij} e_i$, noting that $w \in W_{k+1}$. Then
$$e_j = L_A(L_{A^{-1}}(e_j)) = L_A(a e_k + w) = a L_A(e_k) + L_A(w),$$
where the first equality is because $A A^{-1} = I_n$. As $w \in W_{k+1}$, so is $L_A(w)$, by $L_A$-invariance. Using (4) we have $L_A(e_k) = A_{kk} e_k + v$ for some $v \in W_{k+1}$. Also $e_j \in W_{k+1}$ as $j \geq k + 1$. So
$$e_j - L_A(w) - a v = a A_{kk} e_k.$$
But $a \neq 0$ by assumption and $A_{kk} \neq 0$ as $A$ is lower triangular and invertible, so we can divide through and get $e_k \in W_{k+1} = \operatorname{span}(e_{k+1}, \dots, e_n)$, a contradiction, as $\beta$ is a basis.
Upper triangular. Finally, let $A$ be upper triangular and invertible. By problem 5 of section 2.4, $A^t$ is invertible and $(A^t)^{-1} = (A^{-1})^t$. But $A^t$ is also lower triangular, so by the previous part, $(A^t)^{-1}$ is lower triangular. So $A^{-1} = ((A^{-1})^t)^t = ((A^t)^{-1})^t$ is upper triangular.
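A corresponding numpy sketch for Problem 3 (the shift by $nI$ just keeps the diagonal of this random example away from zero, so $A$ is invertible):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Invertible lower triangular example: non-zero diagonal entries.
n = 5
A = np.tril(rng.standard_normal((n, n))) + n * np.eye(n)
B = np.linalg.inv(A)

# The inverse is again lower triangular.
print(np.allclose(B, np.tril(B)))  # True
\end{verbatim}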
Problem 4. Let $A$ and $B$ be unit lower triangular matrices, $m \times n$ and $n \times p$. By problem 2 we need only check that $C = AB$ is also unit.
Solution 1:
If $n = 1$, $A$ is a column vector and $B$ is a row of the form $[1\ 0\ \dots\ 0]$. Computing the product directly, $C$'s only non-zero column is its first column, and it has the same entries as $A$. Therefore it is a unit matrix. If $m = 1$ or $p = 1$ it is also easy to check that $C$ is unit.
So assume $m, n, p > 1$. We use induction on $\max(m, n, p)$.
Partition $A$ into a $2 \times 2$ block matrix $A'$ where $A'_{11}$ is $1 \times 1$ (so the partition dimensions are given by $m_1 = 1$, $m_2 = m - 1$, $n_1 = 1$ and $n_2 = n - 1$). Define $B'$ from $B$ in the same way, so that $B'_{11}$ is also $1 \times 1$. (All the dimensions are $> 0$ as $m, n, p > 1$.) Block-multiplying, we produce a $2 \times 2$ partition $C'$ of $C$:
$$C' = \begin{pmatrix} [1] & 0 \\ A'_{21} & A'_{22} \end{pmatrix} \begin{pmatrix} [1] & 0 \\ B'_{21} & B'_{22} \end{pmatrix}.$$
($A'$ and $B'$ have this form because $A$ and $B$ are unit lower triangular.) But then $C'_{11} = [1][1] + 0 B'_{21} = [1]$, so $C_{11} = 1$. And $C'_{22} = A'_{21} 0 + A'_{22} B'_{22} = A'_{22} B'_{22}$. But this is a product of unit lower triangular matrices of smaller dimensions, so by inductive hypothesis, $C'_{22}$ is also unit. As all of $C$'s diagonal entries are either $C_{11} = 1$ or lie within $C'_{22}$, $C$ must be unit.
Solution 2:
More directly, just calculate $(AB)_{ii}$:
(5) $(AB)_{ii} = \sum_{k=1}^{n} A_{ik} B_{ki} = \sum_{k=1}^{i-1} A_{ik} B_{ki} + A_{ii} B_{ii} + \sum_{k=i+1}^{n} A_{ik} B_{ki}$.
In the last term, the left summand is 0 because $B_{ki} = 0$ for $k < i$. The right summand is 0 because $A_{ik} = 0$ for $k > i$. But $A_{ii} = B_{ii} = 1$ as $A$ and $B$ are unit matrices, so $(AB)_{ii} = 1 \cdot 1 = 1$ also.
Upper triangular. If $A$ and $B$ are unit upper triangular then, like before, $(AB)^t = B^t A^t$ is unit lower triangular, so $AB$ is unit upper triangular.
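A numpy sketch of Problem 4, using square matrices for simplicity (a strictly lower triangular random part plus the identity gives a unit lower triangular matrix):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def unit_lower(n):
    """Random n x n lower triangular matrix with ones on the diagonal."""
    return np.tril(rng.standard_normal((n, n)), k=-1) + np.eye(n)

C = unit_lower(5) @ unit_lower(5)

# C is lower triangular (Problem 2) with an all-ones diagonal (Problem 4).
print(np.allclose(C, np.tril(C)), np.allclose(np.diag(C), 1.0))  # True True
\end{verbatim}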
Problem 5.
Lower triangular. Let $A$ be a unit lower triangular square matrix. As all diagonal entries are non-zero, $A$ is invertible. By problem 3 we already know $A^{-1}$ is lower triangular.
Solution 1:
Use induction on the dimensions of $A$.
If $A$ is $1 \times 1$, then $A = [1]$ as it is unit, so clearly $A^{-1} = [1]$ also.
Now suppose $A$ is $(n + 1) \times (n + 1)$ where $n > 0$. Partition $A$ into a $2 \times 2$ block matrix $A'$ with $A'_{11}$ of size $1 \times 1$, and define $B'$ in the same way from $B = A^{-1}$. Block-multiplying, we get the same partition of $I_{n+1}$:
$$\begin{pmatrix} I_1 & 0 \\ 0 & I_n \end{pmatrix} = \begin{pmatrix} [1] & 0 \\ A'_{21} & A'_{22} \end{pmatrix} \begin{pmatrix} B'_{11} & 0 \\ B'_{21} & B'_{22} \end{pmatrix}.$$
($B'_{12} = 0$ because we already know $B$ is lower triangular.) But then $I_1 = [1] B'_{11} + 0 B'_{21} = [1] B'_{11}$, and therefore $B'_{11} = [1]$, so $B_{11} = 1$.
Now $A'_{22}$ is unit lower triangular square (so invertible), because $A$ is. Moreover, its inverse is $B'_{22}$, because, from the matrix equation,
$$I_n = A'_{21} 0 + A'_{22} B'_{22} = A'_{22} B'_{22}.$$
As $A'_{22}$ is $n \times n$, we may apply the inductive hypothesis, so $B'_{22}$ is unit. Now we've dealt with all of $B$'s diagonal entries, so $B$ is unit also.
Solution 2:
Let $B = A^{-1}$, and consider (5). By problem 3, $A^{-1}$ is lower triangular, so again the left and right summands of the last term are 0. Therefore $1 = (A A^{-1})_{ii} = A_{ii} A^{-1}_{ii}$ (it's 1 because $A A^{-1} = I$), and $A_{ii} = 1$, so $A^{-1}_{ii} = 1$ also.
Upper triangular. If $A$ is a unit upper triangular square matrix, then applying the lower triangular case to $A^t$, we get that $(A^t)^{-1}$ is unit, and therefore $A^{-1} = ((A^t)^{-1})^t$ is unit also.
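And the matching check for Problem 5 (building a unit lower triangular matrix as in the previous sketch):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

# Unit lower triangular: strictly lower random part plus the identity.
n = 5
A = np.tril(rng.standard_normal((n, n)), k=-1) + np.eye(n)
B = np.linalg.inv(A)

# The inverse is lower triangular with an all-ones diagonal.
print(np.allclose(B, np.tril(B)), np.allclose(np.diag(B), 1.0))  # True True
\end{verbatim}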
Problem 10. Let $A$ be $m \times n$ and $B$ be $n \times p$ matrices. Suppose
$$m = m_1 + \dots + m_r, \qquad n = n_1 + \dots + n_s, \qquad p = p_1 + \dots + p_t.$$
I'll distinguish here between a matrix and a block-representation by giving them different names. We can form an $r \times s$ block matrix $A'$ representing $A$:
$$A' = \begin{pmatrix} A'_{11} & A'_{12} & \dots & A'_{1s} \\ A'_{21} & A'_{22} & \dots & A'_{2s} \\ \vdots & \vdots & \ddots & \vdots \\ A'_{r1} & A'_{r2} & \dots & A'_{rs} \end{pmatrix},$$
where $A'_{ij}$ is an $m_i \times n_j$ (regular) matrix with coefficients matching the section of $A$ it corresponds to. More precisely, define $(A'_{ij})_{ab} = A_{(\sum_{k<i} m_k + a)(\sum_{k<j} n_k + b)}$. For $i \leq r + 1$, set $m_{<i} = \sum_{k<i} m_k$, and similarly define $n_{<j}$ and $p_{<k}$. Then we have
$$(A'_{ij})_{ab} = A_{(m_{<i} + a)(n_{<j} + b)}.$$
We can represent $B$ similarly, forming an $s \times t$ block matrix $B'$, where block $B'_{jk}$ is an $n_j \times p_k$ matrix. Then the multiplication $A'_{ij} B'_{jk}$ makes sense, and yields an $m_i \times p_k$ matrix. This holds for any $j$, so we can define an $r \times t$ block matrix $D'$ by $D'_{ik} = \sum_{j=1}^{s} A'_{ij} B'_{jk}$. Let $D$ be the corresponding $m \times p$ (regular) matrix.
Note that the way I have defined things, $A'$ is actually different from $A$; it is not just a different way of representing $A$. The dimensions of $A'$ are $r \times s$, not $m \times n$, and its entries are matrices, not scalars, as with $A$. Likewise for $B'$, and the product $A'B'$. It is most convenient for this proof to view things this way.
On the other hand, letting $C = AB$ (with regular matrix multiplication), $C$ is $m \times p$, and we can form the $r \times t$ block matrix $C'$, where block $C'_{ik}$ is $m_i \times p_k$ (as we did for $A$ and $B$). So we have that the two block matrices $C'$ and $D'$ have the same dimensions. We need to verify that $C'_{ik} = D'_{ik}$. One can do this inductively, but here I'll just do it directly.
To make things more readable, I'll move some subscripts to superscripts. For $X$ and $X'$ any of the matrices and corresponding block matrices above, let $X^{ij} = X'_{ij}$. So $X^{ij}$ is a submatrix of $X$. However $X_{ij}$, as usual, is the $(i, j)$th entry of $X$.
Now, we need to see that $C'_{ik} = D'_{ik}$. Firstly, each is $m_i \times p_k$. So let $1 \leq a \leq m_i$ and $1 \leq c \leq p_k$. We just need to check that $C^{ik}_{ac} = D^{ik}_{ac}$. Computing,
$$D^{ik}_{ac} = \Bigl( \sum_{j=1}^{s} A^{ij} B^{jk} \Bigr)_{ac} = \sum_{j=1}^{s} (A^{ij} B^{jk})_{ac} = \sum_{j=1}^{s} \sum_{b=1}^{n_j} A^{ij}_{ab} B^{jk}_{bc}$$
$$= \sum_{j=1}^{s} \sum_{b=1}^{n_j} A_{(m_{<i} + a)(n_{<j} + b)} B_{(n_{<j} + b)(p_{<k} + c)} = \sum_{j=1}^{s} \sum_{b=1}^{n_j} A_{a'(n_{<j} + b)} B_{(n_{<j} + b)c'}$$
(where $a' = m_{<i} + a$ and $c' = p_{<k} + c$)
$$= \sum_{j=1}^{s} \sum_{b = n_{<j} + 1}^{n_{<(j+1)}} A_{a'b} B_{bc'},$$
as $n_{<j} + n_j = n_{<(j+1)}$. But this is just
$$\sum_{b=1}^{n} A_{a'b} B_{bc'} = C_{a'c'} = C_{(m_{<i} + a)(p_{<k} + c)} = C^{ik}_{ac},$$
as required.
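A short numpy sketch of this block-multiplication lemma (the partition sizes are arbitrary; the offset arrays mo, no, po play the roles of $m_{<i}$, $n_{<j}$, $p_{<k}$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)

# Partitions m = sum(ms), n = sum(ns), p = sum(ps), as in Problem 10.
ms, ns, ps = [1, 3], [2, 1, 2], [2, 2]
A = rng.standard_normal((sum(ms), sum(ns)))
B = rng.standard_normal((sum(ns), sum(ps)))

# Offsets m_{<i}, n_{<j}, p_{<k}.
mo = np.concatenate(([0], np.cumsum(ms)))
no = np.concatenate(([0], np.cumsum(ns)))
po = np.concatenate(([0], np.cumsum(ps)))

# Assemble D from the blocks D'_{ik} = sum_j A'_{ij} B'_{jk}.
D = np.zeros((sum(ms), sum(ps)))
for i in range(len(ms)):
    for k in range(len(ps)):
        for j in range(len(ns)):
            D[mo[i]:mo[i+1], po[k]:po[k+1]] += (
                A[mo[i]:mo[i+1], no[j]:no[j+1]] @ B[no[j]:no[j+1], po[k]:po[k+1]])

print(np.allclose(D, A @ B))  # True: block product = regular product
\end{verbatim}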
Note: Proving this inductively avoids the heavy computation done here, but still seems to need a fair bit of notation. I originally wrote an inductive proof also, but it became too involved to be worth including in the solutions, and appeared more complicated than it really is. But the idea is fairly straightforward.
Suppose we are given some block matrices $A'$ and $B'$ partitioning matrices $A$ and $B$. We want to prove that if we compute the product $A'B'$ at the partition level, then throw away the partition on the resulting matrix, we get the regular product $AB$. To use induction, we need to break the problem into a few smaller ones. This can be done by partitioning the block matrices $A'$ and $B'$ into $2 \times 2$ matrices $A''$ and $B''$. These matrices are another level up: their entries are block matrices (pieces of $A'$ and $B'$), whose entries are (regular) matrices. But the $2 \times 2$ base case still applies to these nested matrices. To multiply $A''$ with $B''$ we need to multiply their block-matrix components, but these are smaller than the ones we started with ($A'$ and $B'$), so we can use the inductive hypothesis to conclude that the result is the same as multiplying the corresponding pieces of $A$ and $B$. To write all this carefully seems to need a fair bit of notation, which could well cause the reader to lose the ideas amongst the symbols. So if you're interested in how this proof works, I suggest you think about the details on your own.