Ch 4-4

The document discusses linear maps and systems of linear equations, defining key concepts such as linear maps, matrices, and their properties, including rank and invertibility. It explains how to represent linear maps using matrices and introduces Gaussian elimination for solving systems of linear equations. Additionally, it includes problem sets to apply the concepts learned.


4.4. Linear Maps and Systems of Linear Equations

4.4.1. Linear Maps & Matrices

As previously defined, a linear map (mapping) ϕ between two F-VS (vector spaces over the same field F), or in this case between two R-VS's, is defined by the properties:

ϕ(x + y) = ϕ(x) + ϕ(y)
ϕ(λx) = λ·ϕ(x)

where x, y are vectors and λ is a scalar.
Sometimes these two properties are written as ϕ(λx + μy) = λ·ϕ(x) + μ·ϕ(y).

Remember that this means the whole source space is the preimage (from school times known as the «domain»), while not every vector of the target space has to lie in the image of ϕ, sometimes also known as the image of the source space under ϕ (in school times we called this the «range» of the function).
It is easy to prove that the image of ϕ is always a subspace of the target space. This means that we could also talk about things like the dimension of the image, and even compare it to the dimension of the preimage.
Ex: Projection onto one of the coordinate axes is a linear map, where the image is that axis, a one-dimensional subspace of the whole space.
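To make the two defining properties concrete, here is a small Python sketch (ours, not part of the notes; all names are illustrative) checking them for the projection example, ϕ(x_1, x_2) = (x_1, 0):

```python
# Projection of R^2 onto the first coordinate axis: a linear map whose
# image is the x-axis, a one-dimensional subspace of R^2.

def phi(v):
    """Project a 2D vector onto the x-axis."""
    return (v[0], 0.0)

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(lam, v):
    return (lam * v[0], lam * v[1])

u, v, lam = (1.0, 2.0), (3.0, -4.0), 2.5

# First property: phi(u + v) == phi(u) + phi(v)
assert phi(add(u, v)) == add(phi(u), phi(v))
# Second property: phi(lam * u) == lam * phi(u)
assert phi(scale(lam, u)) == scale(lam, phi(u))
```

Any pair of vectors and any scalar would do here; the two assertions are exactly the two properties above.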

Picture the general linear map ϕ: R^m → R^n as n single equations, each of which describes one coordinate of the image vector in R^n as a linear combination of the m coordinates of the vector in R^m.

We write y = ϕ(x), or

y_i = a_i1·x_1 + a_i2·x_2 + … + a_im·x_m,   i = 1, …, n,

where the coefficients a_ij (real scalars) «represent» the degrees of freedom. It is a representation, because the exact numbers depend on our choice of basis, which determines the coordinates of a vector.
With the «function equations» written out like that, it is easy to prove that the linear maps from the m-dimensional real space to the n-dimensional real space form a real vector space themselves. Looking at the coefficients, we can see (doing pure maths, we do not just «see» things, though, we prove our hunches) that this VS has the dimension nm (as there are nm different free parameters).
Because understanding the dimensions of the image space and the preimage is just as important, we never multiply these two numbers out, but always write the dimension as a product.
The space of mappings from R^4 onto the plane R^2 for instance (not onto a plane as a subspace of R^4, but onto the VS R^2 itself) is 8-dimensional, but we do not write its dimension as 8, but as 2 × 4.
The n equations above already suggest some sort of box form instead of the columns or rows that we use to represent vectors. A box like that is called a matrix. We could write it in parentheses or in square brackets; both are correct.

This may be abbreviated by writing only a single generic term, possibly along with indices, as in A = (a_ij), or A = (a_ij) with 1 ≤ i ≤ n, 1 ≤ j ≤ m, or A ∈ R^{n×m} in the case that the entries are real. ¹

The – here – real numbers a_ij are called the entries of the matrix A, and sometimes its elements or coordinates. a_ij denotes the entry in the i-th row and the j-th column of A.

For matrices to belong to an R-VS though, we need to define addition and scalar multiplication. It is pretty intuitive if we keep in mind that a matrix represents a linear map between vector spaces, so that (ϕ + ψ)(x) = ϕ(x) + ψ(x) and (λϕ)(x) = λ·ϕ(x).

So, we define the addition and scalar multiplication of matrices for each entry, meaning:

1. A + B := (a_ij + b_ij),
2. λ·A := (λ·a_ij).

¹ Wikipedia: https://en.wikipedia.org/wiki/Matrix_(mathematics):
«Matrices are commonly related to linear algebra. Notable exceptions include incidence matrices and adjacency matrices in graph theory. This article focuses on matrices related to linear algebra, and, unless otherwise specified, all matrices represent linear maps or may be viewed as such.»
«Matrix theory is the branch of mathematics that focuses on the study of matrices. It was initially a sub-branch of linear algebra, but soon grew to include subjects related to graph theory, algebra, combinatorics and statistics.»
«A matrix with an infinite number of rows or columns (or both) is called an infinite matrix. In some contexts, such as computer algebra programs, it is useful to consider a matrix with no rows or no columns, called an empty matrix.»

Following these definitions, we can easily see that the zero matrix, as the identity element of matrix addition, has to be the one with zero entries all over.
In other words, A + 0 = A, where 0 = (0) is the matrix whose entries are all zero.
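The entry-wise definitions translate directly into code. A minimal Python sketch (ours, not from the notes; matrices are represented as lists of rows):

```python
# Entry-wise matrix addition and scalar multiplication, as defined above.

def mat_add(A, B):
    """A + B := (a_ij + b_ij), entry by entry."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(lam, A):
    """lam * A := (lam * a_ij), entry by entry."""
    return [[lam * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
Z = [[0, 0], [0, 0]]          # the zero matrix

assert mat_add(A, B) == [[6, 8], [10, 12]]
assert mat_scale(2, A) == [[2, 4], [6, 8]]
# The zero matrix is the identity element of matrix addition:
assert mat_add(A, Z) == A
```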

We use the terms row vectors and column vectors of a matrix, which means the entries of one row or one column of the matrix seen (treated) as vectors. Next to the idea of a linear map being n linear equations, looking at an n×m matrix as n row vectors or m column vectors also poses the question of «linear independence». Could we ask about the linear independence of a matrix, although it is one single object? If yes, what does that mean? Is there some kind of dimension involved when talking about the row or column vectors, the same way we talked about the dimension of a vector space as the maximum number of linearly independent vectors that we could find?
Trust mathematics to have found meaningful answers to both questions.

Def The rank of a matrix A ∈ R^{n×m} is the dimension of its image as a subspace of R^n. It goes without saying that by the image of A we mean im(ϕ), the image of the actual linear map ϕ that the matrix A represents.
We could prove that the maximum number of linearly independent row vectors of a matrix has to be equal to that of the column vectors (we call them – respectively – the row rank and the column rank of the matrix). This number is the rank of the matrix A.
The first consequences of this are:
• rank(A) ≤ min(n, m).
• Only the zero matrix has rank zero.
• ϕ (and with that A) is injective (or "one-to-one") if and only if A has rank m (full column rank). Remember that this is the same as the left-hand sides of the linear equations for ϕ being all linearly independent, and reducing the degrees of freedom by m.
• ϕ (and with that A) is surjective (or "onto") if and only if A has rank n (full row rank). Again, going back to the linear equations, this means we could get any combination of values y_i that we want on the right-hand sides of the equations, and thus would have the full n degrees of freedom for our image. If a subspace has the same dimension as the VS itself, then it IS the VS itself.
• If n = m (we call this a square matrix), then ϕ (and with that A) is invertible if and only if A has rank n (that is, A has full rank).
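The rank can be computed by row-reducing the matrix and counting the nonzero rows. A small Python sketch (ours, not part of the notes; the function name is illustrative):

```python
def rank(M, eps=1e-12):
    """Rank via Gaussian elimination: reduce to row echelon form and
    count the pivot rows."""
    A = [row[:] for row in M]           # work on a copy
    n, m = len(A), len(A[0])
    r = 0                               # current pivot row
    for col in range(m):
        # find a pivot in this column at or below row r
        piv = next((i for i in range(r, n) if abs(A[i][col]) > eps), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]     # row switching
        for i in range(r + 1, n):       # eliminate below the pivot
            f = A[i][col] / A[r][col]
            A[i] = [x - f * y for x, y in zip(A[i], A[r])]
        r += 1
    return r

# rank <= min(n, m); here the second row is a multiple of the first:
assert rank([[1, 2, 3], [2, 4, 6]]) == 1
# full row rank 2 = min(2, 3): the represented map R^3 -> R^2 is surjective
assert rank([[1, 0, 0], [0, 1, 0]]) == 2
# only the zero matrix has rank zero:
assert rank([[0, 0], [0, 0]]) == 0
```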
Def As with real functions from school maths, «invertible» means that the map ϕ, where y = ϕ(x), has an inverse map ϕ⁻¹, such that ϕ⁻¹(ϕ(x)) = x and ϕ(ϕ⁻¹(y)) = y for every x and every y.
Note that every map (i.e. every function) has an inverse relation, but not every such inversion is a map itself.
Ex: If the average temperatures are seen as a function of each day of the year, we could easily connect the temperatures back to the days of the year, but the result would generally only be a relation, because different days could have the same average temperature. In addition, we would quite certainly not have every temperature on the list. For instance, there might easily have been no day of the year with an average temperature of 13.01°C.
So, when we talk about the inverse of a map, we explicitly mean an inversion that is a map itself, like the case with SRH students and their student ID numbers. (Although there are many viable number combinations «left», the valid numbers are only the ones associated with enrolled students.)
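For square matrices the invertibility criterion can be tried out numerically. A minimal Python sketch (ours, not from the notes; `inv2` uses the classical 2×2 inverse formula, which the notes have not introduced yet) checking the defining equations ϕ⁻¹(ϕ(x)) = x and ϕ(ϕ⁻¹(y)) = y:

```python
def apply(A, x):
    """Apply a 2x2 matrix to a 2D column vector."""
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

def inv2(A):
    """Inverse of a 2x2 matrix via the classical formula
    A^{-1} = (1/det) * [[d, -b], [-c, a]]."""
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det != 0, "rank < 2: the map is not invertible"
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2.0, 1.0], [1.0, 1.0]]      # full rank, hence invertible
x = [3.0, -5.0]
y = apply(A, x)

# phi^{-1}(phi(x)) == x and phi(phi^{-1}(y)) == y:
assert apply(inv2(A), y) == x
assert apply(A, apply(inv2(A), y)) == y
```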

So far, we did all of this without actually knowing how a matrix would «map» a vector onto
another. For that we need to define another necessary part of matrix calculation.

Def The matrix multiplication AB for the two matrices A ∈ R^{n×m} and B ∈ R^{m×k} is defined by

(AB)_ij := a_i1·b_1j + a_i2·b_2j + … + a_im·b_mj,

that is, multiplying rows of A with columns of B: each entry is the scalar product of a row vector of A with a column vector of B.

But for this to be possible, we could not just multiply any matrix with any other. What's more, we could not even multiply two n×m matrices with each other unless they are square matrices (n = m). To be exact, the number of entries in a row of A has to be the same as the number of entries in a column of B. Remembering the image of a box, it's easy to deduce that this is the same as the number of columns of A being equal to the number of rows of B.
This means that A ∈ R^{n×m} could only be multiplied with B ∈ R^{m×k}, with k being any nonzero natural number.
This alone should answer our first question:
No, matrix multiplication is not commutative. In the general case, if we have AB, the product BA might not even be defined (because k ≠ n in general; BA exists only if k = n).
The second question:
The dimension of the product of A ∈ R^{n×m} and B ∈ R^{m×k} is provided by the number of rows of A and the number of columns of B: AB ∈ R^{n×k}.
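The «rows times columns» rule can be sketched in a few lines of Python (ours, not from the notes; the shapes 2×3 and 3×2 are an arbitrary illustration):

```python
# (AB)_ij is the scalar product of row i of A with column j of B.

def mat_mul(A, B):
    n, m, k = len(A), len(B), len(B[0])
    assert len(A[0]) == m, "columns of A must equal rows of B"
    return [[sum(A[i][l] * B[l][j] for l in range(m)) for j in range(k)]
            for i in range(n)]

A = [[1, 2, 3],        # A is 2x3 ...
     [4, 5, 6]]
B = [[1, 0],           # ... and B is 3x2, so AB is 2x2 and BA is 3x3.
     [0, 1],
     [1, 1]]

AB = mat_mul(A, B)
BA = mat_mul(B, A)
assert AB == [[4, 5], [10, 11]]
assert (len(AB), len(AB[0])) == (2, 2)
assert (len(BA), len(BA[0])) == (3, 3)
# The different shapes alone already show AB != BA: not commutative.
```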
The mapping of a vector: representation
With a choice of basis for both vector spaces, we could represent the preimage and image vectors of ϕ, and with that also ϕ itself, through the matrix A.
The expression y = ϕ(x) then becomes y = Ax. As column vectors, x could be seen as an m×1 matrix and y as an n×1 matrix («naturally» resulting from the multiplication of the n×m matrix A with an m×1 matrix).

This perfectly fulfills our first intuition of a linear map, given by the n equations

y_i = a_i1·x_1 + … + a_im·x_m,   i = 1, …, n,

that is, scalar products of the rows of A with the column vector x.

Ex: One of the famous matrices is the rotation of a point (i.e. its position vector) about the origin by an angle θ. It is given by the matrix

R(θ) = ( cos θ   −sin θ )
       ( sin θ    cos θ )

Try the trigonometry!
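Trying the trigonometry numerically (a Python sketch of ours, not part of the notes): rotating (1, 0) by 90° should give (0, 1), and two 45° rotations should equal one 90° rotation.

```python
import math

def rotation(theta):
    """The 2x2 rotation matrix R(theta) from the example."""
    return [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]

def apply(A, x):
    """Apply a 2x2 matrix to a 2D column vector."""
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

# Rotating (1, 0) by 90 degrees about the origin gives (0, 1):
v = apply(rotation(math.pi / 2), [1.0, 0.0])
assert math.isclose(v[0], 0.0, abs_tol=1e-12) and math.isclose(v[1], 1.0)

# Rotating twice by 45 degrees equals rotating once by 90 degrees:
w = apply(rotation(math.pi / 4), apply(rotation(math.pi / 4), [1.0, 0.0]))
assert math.isclose(w[0], 0.0, abs_tol=1e-12) and math.isclose(w[1], 1.0)
```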

• transpose
• diagonal matrix
• upper triangular matrix
• lower triangular matrix
• identity or unit matrix
• determinant
• kernel

Problem Set 4.4.1.

1. Is the map given by ,


i. linear?
ii. injective?
iii. surjective?
iv. If bijective, what is its inverse map?

2. Is the map given by ,

i. injective?
ii. linear? (if yes, give the matrix representing the map)

3. Is the map given by ,

i. injective?

ii. Which fulfill ? Answer without solving the system.

(in other words give the kernel/ null space of f)

4. For and calculate AB, BA, BAA, ABA.

5. For and , calculate

i. and
ii. the angle between v and Av in the plane.

6. For the vectors and calculate both vw and wv as matrix

multiplications and compare the results.

7. Given the linear maps ,

and ,

i. give the respective matrices A and B, representing fA and fB.

ii. Give the linear map represented by the matrix

iii. What is the matrix representation of the linear map ?

8. Is the linear map represented by invertible? Why?


4.4.2. Systems of Linear Equations

One of the most important questions that linear algebra deals with (not as in most fundamental, but with the widest range of applications) is systems of linear equations. While we learned a trick or two in school for 2×2 and 3×3 systems, it is easy to feel lost when dealing with four or more equations.

a_11·x_1 + a_12·x_2 + … + a_1m·x_m = b_1
a_21·x_1 + a_22·x_2 + … + a_2m·x_m = b_2
⋮
a_n1·x_1 + a_n2·x_2 + … + a_nm·x_m = b_n

Here the x_j are our variables, and the a_ij and b_i are constants provided by the system. Should all the constants b_i be zero, namely b_1 = b_2 = … = b_n = 0, then we call our system homogeneous, otherwise it's inhomogeneous.

The idea of Gaussian elimination is changing the equations in a way that does not change the set of all possible solutions, but gives us more zero coefficients the further «down» we go. It should result in – where possible – the maximum number of zero rows.

With the idea of a linear map being y = ϕ(x) = Ax, represented by a matrix, it should not come as a surprise that we could do all the row transformations by means of matrix multiplications.

Def We call A the coefficient matrix of the equation Ax = b.
The augmented coefficient matrix is the one we get by «adding» the constant column vector b to the matrix. So, we'd get (A | b). The end goal is to have an upper triangular matrix after the «elimination», using the three elementary row operations:


• Row Switching

• Row Multiplying

• Row Addition
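A minimal elimination sketch in Python (ours, not from the notes; it assumes a square system with a unique solution), working on the augmented matrix (A | b) with only the row operations listed above, then back-substituting through the upper triangular result:

```python
def solve(A, b):
    """Solve Ax = b by Gaussian elimination; assumes a unique solution."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix (A | b)
    for col in range(n):
        # row switching: bring the largest pivot candidate up
        piv = max(range(col, n), key=lambda i: abs(M[i][col]))
        M[col], M[piv] = M[piv], M[col]
        for i in range(col + 1, n):                # row addition: eliminate
            f = M[i][col] / M[col][col]            # everything below the pivot
            M[i] = [x - f * y for x, y in zip(M[i], M[col])]
    x = [0.0] * n                                  # back-substitution through
    for i in reversed(range(n)):                   # the upper triangular system
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# 2x + y = 5 and x + 3y = 10 have the solution x = 1, y = 3:
x = solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])
assert abs(x[0] - 1.0) < 1e-12 and abs(x[1] - 3.0) < 1e-12
```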

Problem Set 4.4.2.


Solve the following systems:

1. What is the right vector b that fulfills Ax = b, where and

? Which vector space does b belong to?

2. Let and with real parameters α and β.

i. For which α-s does the matrix A have rank 1, and for which α-s the maximum rank?
ii. How about the rank of the augmented matrix depending on both α and β?
iii. Discuss the existence and uniqueness of solutions for the linear system given by
. (you don’t need to solve the system)
iv. Solve the system for α = 0 and β = 1.
3. Solve the system of linear equations given by:

i. I:

ii. II:
