Ch 4-4
As previously defined, a linear map (mapping) $\phi: V \to W$ between two F-VS (vector spaces over the same field F), or in this case between two R-VS's, is defined by the properties:
• $\phi(x + y) = \phi(x) + \phi(y)$
• $\phi(\lambda x) = \lambda\,\phi(x)$
where $x, y \in V$ and $\lambda \in \mathbb{R}$.
Sometimes these two properties are written as the single condition $\phi(\lambda x + \mu y) = \lambda\,\phi(x) + \mu\,\phi(y)$.
Remember that this means the whole of $V$ is the preimage (from school times known as the
«domain»), while not all of $W$ need lie in the image of ϕ, written $\phi(V)$ and sometimes also known as the image of $V$
under ϕ (in school times we called this the «range» of the function).
It is easy to prove that the image of ϕ is always a subspace of $W$. This means that we could also
talk about things like the dimension of the image, and even compare it to the dimension of the
preimage.
Ex: Projection onto one of the coordinate axes is a linear map, where the image is the axis, a
one-dimensional subspace of $\mathbb{R}^2$.
Picture the general linear map $\phi: \mathbb{R}^m \to \mathbb{R}^n$ as n single equations, each of which describes
one coordinate of the image vector in $\mathbb{R}^n$ as a linear combination of the m coordinates of the
vector in $\mathbb{R}^m$.
We write $y = \phi(x)$, or
$y_i = a_{i1} x_1 + a_{i2} x_2 + \dots + a_{im} x_m, \quad i = 1, \dots, n,$
where the coefficients $a_{ij}$ (real scalars) «represent» the degrees of freedom. It is a representation,
because the exact numbers depend on our choice of basis, which determines the coordinates
of a vector.
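The n «function equations» above can be sketched directly in code. The following is a minimal illustration in plain Python (the function name `apply_linear_map` and the particular coefficients are my own, hypothetical choices, not from the notes):

```python
# Sketch: applying a linear map phi: R^m -> R^n given by its coefficients
# a[i][j], so that y_i = a_i1*x_1 + ... + a_im*x_m for each i = 1..n.
def apply_linear_map(a, x):
    """Compute y = phi(x), where a is an n x m list of coefficient rows."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in a]

# Hypothetical example: phi: R^3 -> R^2.
a = [[1, 0, 2],
     [0, 3, -1]]
print(apply_linear_map(a, [1, 1, 1]))  # [3, 2]
# Linearity check: phi(2x) = 2 phi(x).
print(apply_linear_map(a, [2, 2, 2]))  # [6, 4]
```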
With the «function equations» written out like that, it is easy to prove that the linear maps from
the m-dimensional real space to the n-dimensional real space form a real vector space
themselves. Looking at the coefficients, we can see (doing pure maths, we do not just «see»
things, though, we prove our hunches) that this VS has the dimension nm (as there are nm
different free parameters).
Because understanding the dimensions of the image-space and the preimage is just as
important, we never multiply these two numbers out, but always write them as a product.
The space of mappings from $\mathbb{R}^4$ to the plane $\mathbb{R}^2$, for instance (not onto a plane as a subspace of $\mathbb{R}^4$,
but onto the VS $\mathbb{R}^2$ itself), is 8-dimensional, but we do not write its dimension as 8, but as
$2 \times 4$.
The n equations above already suggest some sort of box form instead of the columns or rows
that we use to represent vectors. A box like that is called a matrix. We could write it in
parentheses or in square brackets, both are correct.
This may be abbreviated by writing only a single generic term, possibly along with
indices, as in $A = (a_{ij})$, or $A = (a_{ij})_{1 \le i \le n,\, 1 \le j \le m}$, or $A \in \mathbb{R}^{n \times m}$ in the case that
$a_{ij} \in \mathbb{R}$. 1
The – here – real numbers $a_{ij}$ are called the entries of the matrix A, and sometimes its
elements or coordinates. $a_{ij}$ denotes the entry in the i-th row and the j-th column of A.
For matrices to belong to an R-VS, though, we need to define addition and scalar
multiplication. It is pretty intuitive, if we keep in mind that a matrix represents a linear map.
So, we define the addition and scalar multiplication of matrices for each entry, meaning:
1. $(A + B)_{ij} = a_{ij} + b_{ij}$,
1 Wikipedia: https://en.wikipedia.org/wiki/Matrix_(mathematics):
«Matrices are commonly related to linear algebra. Notable exceptions include incidence matrices and adjacency
matrices in graph theory. This article focuses on matrices related to linear algebra, and, unless otherwise specified, all
matrices represent linear maps or may be viewed as such.»
«Matrix theory is the branch of mathematics that focuses on the study of matrices. It was initially a sub-branch of
linear algebra, but soon grew to include subjects related to graph theory, algebra, combinatorics and statistics. »
«A matrix with an infinite number of rows or columns (or both) is called an infinite matrix. In some contexts, such as
computer algebra programs, it is useful to consider a matrix with no rows or no columns, called an empty matrix. »
2. $(\lambda A)_{ij} = \lambda\, a_{ij}$,
Following these definitions, we can easily see that the zero matrix, as the identity
element of matrix addition, has to be the one with zero entries all over.
In other words, $0 = (0)_{ij}$, so that $A + 0 = A$ for every A.
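The entrywise definitions above translate one-to-one into code. Here is a minimal sketch in plain Python (the helper names `mat_add` and `mat_scale` are my own):

```python
# Entrywise definitions: (A + B)_ij = a_ij + b_ij  and  (lam*A)_ij = lam*a_ij.
def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(lam, A):
    return [[lam * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_add(A, B))    # [[1, 3], [4, 4]]
print(mat_scale(2, A))  # [[2, 4], [6, 8]]
# The zero matrix is the identity element of matrix addition:
Z = [[0, 0], [0, 0]]
print(mat_add(A, Z) == A)  # True
```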
We use the terms row vectors and column vectors of a matrix, which means the entries of
one row or one column of the matrix seen (treated) as vectors. Next to the idea of a linear
map being n linear equations, looking at an n×m matrix as n row vectors or m column vectors also
poses the question of «linear independence». Could we ask about the linear independence of a
matrix, although it is one single object? If yes, what does that mean? Is there some kind of
dimension involved when talking about the row or column vectors, the same way we talked
about the dimension of a vector space as the maximum number of linearly independent
vectors that we could find?
Trust mathematics to have found meaningful answers to both questions.
So far, we did all of this without actually knowing how a matrix would «map» a vector onto
another. For that we need to define another necessary part of matrix calculation.
Def The matrix multiplication AB for the two matrices $A \in \mathbb{R}^{n \times m}$ and $B \in \mathbb{R}^{m \times p}$ is defined by:
$(AB)_{ik} = \sum_{j=1}^{m} a_{ij}\, b_{jk}$
But for this to be possible, we cannot just multiply a matrix with any other. What's more,
we cannot even multiply two n×m matrices with each other unless they are square matrices (n = m).
To be exact, the number of entries in a row of A has to be the same as the number of
entries in a column of B. Remembering the image of a box, it is easy to deduce that this is the
same as the number of columns of A and rows of B.
This means that $A \in \mathbb{R}^{n \times m}$ could only be multiplied with $B \in \mathbb{R}^{m \times p}$, with $p$ being any
nonzero natural number.
This alone should answer our first question:
No, matrix multiplication is not commutative. In the general case, if AB is defined,
BA need not even be defined (because $p \ne n$ in general, or rather unless $p = n$).
The second question:
The dimension of the product of $A \in \mathbb{R}^{n \times m}$ and $B \in \mathbb{R}^{m \times p}$ is provided by the number
of rows of A and columns of B: $AB \in \mathbb{R}^{n \times p}$.
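The definition of the product, including the compatibility condition on the shapes, can be sketched in plain Python (the function name `mat_mul` and the example matrices are my own, hypothetical choices):

```python
# Sketch of (AB)_ik = sum_j a_ij * b_jk, with the compatibility check:
# the number of columns of A must equal the number of rows of B.
def mat_mul(A, B):
    n, m = len(A), len(A[0])
    if len(B) != m:
        raise ValueError("columns of A must match rows of B")
    p = len(B[0])
    return [[sum(A[i][j] * B[j][k] for j in range(m)) for k in range(p)]
            for i in range(n)]

A = [[1, 2, 3],
     [4, 5, 6]]       # 2 x 3
B = [[1, 0],
     [0, 1],
     [1, 1]]          # 3 x 2
print(mat_mul(A, B))  # 2 x 2 result: [[4, 5], [10, 11]]
# Note: BA here is 3 x 3, so AB and BA differ already in shape,
# illustrating that matrix multiplication is not commutative.
```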
The mapping of a vector: representation
With a choice of basis for both vector spaces, we could represent the preimage and image
vectors of $\phi: \mathbb{R}^m \to \mathbb{R}^n$, and with that also ϕ itself, through the matrix A.
The expression $y = \phi(x)$ then becomes $y = Ax$. As column vectors, x could be seen as an $m \times 1$
matrix and y an $n \times 1$ matrix («naturally» resulting from the multiplication of A with an $m \times 1$
matrix).
Ex: One of the famous matrices is the rotation of a point (i.e. its position vector) about the
origin by an angle θ. It is given by the matrix $R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$.
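The rotation example is easy to try out in plain Python with the standard `math` module (the helper name `rotate` is my own):

```python
import math

# The rotation matrix R_theta = [[cos t, -sin t], [sin t, cos t]]
# applied to a position vector (x, y), written out entrywise.
def rotate(theta, v):
    c, s = math.cos(theta), math.sin(theta)
    x, y = v
    return (c * x - s * y, s * x + c * y)

# Rotating (1, 0) by 90 degrees about the origin gives (0, 1),
# up to floating-point rounding:
x, y = rotate(math.pi / 2, (1, 0))
print(round(x, 10), round(y, 10))  # 0.0 1.0
```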
• transpose
• diagonal matrix
• upper triangular matrix
• lower triangular matrix
• identity or unit matrix
• determinant
• kernel
i. injective?
ii. Linear? (if yes, give the matrix representing the map)
i. injective?
i. and
ii. the angle between v and Av in the plane.
and ,
One of the most important questions that linear algebra deals with (not as in most
fundamental, but as in having the widest range of applications) is systems of linear equations. While
we learned a trick or two in school for 2×2 and 3×3 systems, it is easy to feel lost when
dealing with four or more equations.
A system of n linear equations can be written as $Ax = b$, with the coefficients $a_{ij}$ and the
constants $b_i$ provided by the system. Should all the constants be zero, namely
$b_1 = \dots = b_n = 0$, then we call our system homogeneous, otherwise it is inhomogeneous.
The idea of Gaussian elimination is changing the equations in a way that would not change
the set of all possible solutions, but so that we would have more zero coefficients the
further «down» we go. It should result in – possibly – a maximum number of zero rows.
Since every linear map is represented
by a matrix, it should not come as a surprise that we could do all the row transformations by
means of matrix multiplications.
The augmented coefficient matrix is the one we get by «adding» the constant column vector
to the coefficient matrix. So, we'd get $(A \mid b)$. The end goal is to have an upper
triangular form, using the elementary row operations:
• Row multiplication (scaling a row by a nonzero scalar)
• Row addition (adding a multiple of one row to another)
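The forward-elimination step can be sketched in plain Python using exact `Fraction` arithmetic. This is a simplified illustration (the function name `eliminate` and the example system are my own; it skips zero pivots instead of swapping rows, so it is not a complete solver):

```python
from fractions import Fraction

# Bring the augmented matrix (A | b) to upper triangular form using
# only row addition (subtracting a multiple of the pivot row).
def eliminate(M):
    """Return an upper triangular copy of the augmented matrix M."""
    M = [[Fraction(x) for x in row] for row in M]
    n = len(M)
    for col in range(n):
        for row in range(col + 1, n):
            if M[col][col] != 0:  # simplification: no row swapping
                factor = M[row][col] / M[col][col]
                M[row] = [x - factor * p for x, p in zip(M[row], M[col])]
    return M

# Hypothetical system: x + y = 3, 2x + y = 4 (solution x = 1, y = 2).
M = eliminate([[1, 1, 3],
               [2, 1, 4]])
print(M[1])  # [0, -1, -2], i.e. the second equation becomes -y = -2
```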
i. For which α-s does the matrix A have rank 1, and for which α-s the maximum rank?
ii. What about the rank of the augmented matrix, depending on both α and β?
iii. Discuss the existence and uniqueness of solutions for the linear system given by
. (You do not need to solve the system.)
iv. Solve the system for α = 0 and β = 1.
3. Solve the system of linear equations given by:
i. I:
ii. II: