Matrix Operations
Definition A matrix is a rectangular array of numbers called the entries,
or elements, of the matrix.
The following are all examples of matrices:
$$\begin{bmatrix} 1 & 2 \\ 0 & 3 \end{bmatrix}, \quad
\begin{bmatrix} \sqrt{5} & -1 & 0 \\ 2 & \pi & \frac{1}{2} \end{bmatrix}, \quad
\begin{bmatrix} 2 \\ 4 \\ 17 \end{bmatrix}, \quad
\begin{bmatrix} 1 & 1 & 1 & 1 \end{bmatrix}, \quad
\begin{bmatrix} 5.1 & 1.2 & -1 \\ 6.9 & 0 & 4.4 \\ -7.3 & 9 & 8.5 \end{bmatrix}, \quad
[7]$$
If the columns of A are the vectors $\vec{a}_1, \vec{a}_2, \dots, \vec{a}_n$, then we may represent A as
$$A = [\vec{a}_1 \ \vec{a}_2 \ \cdots \ \vec{a}_n]$$
If the rows of A are $\vec{A}_1, \vec{A}_2, \dots, \vec{A}_m$, then we may represent A as
$$A = \begin{bmatrix} \vec{A}_1 \\ \vec{A}_2 \\ \vdots \\ \vec{A}_m \end{bmatrix}$$
The diagonal entries of A are $a_{11}, a_{22}, a_{33}, \dots$, and if m = n (that is, if A
has the same number of rows as columns), then A is called a square matrix.
MATH10212 • Linear Algebra • Brief lecture notes 15
A square matrix whose nondiagonal entries are all zero is called a diagonal
matrix. A diagonal matrix all of whose diagonal entries are the same is
called a scalar matrix. If the scalar on the diagonal is 1, the scalar matrix
is called an identity matrix.
For example, let
$$A = \begin{bmatrix} 2 & 5 & 0 \\ -1 & 4 & 1 \end{bmatrix}, \quad
B = \begin{bmatrix} 3 & 1 \\ 4 & 5 \end{bmatrix}, \quad
C = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 6 & 0 \\ 0 & 0 & 2 \end{bmatrix}, \quad
D = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
The diagonal entries of A are 2 and 4, but A is not square; B is a square
matrix of size 2 × 2 with diagonal entries 3 and 5; C is a diagonal matrix;
D is the 3 × 3 identity matrix. The n × n identity matrix is denoted by $I_n$ (or
simply I if its size is understood).
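As a quick illustration (a NumPy sketch, not part of the notes; the names C, D mirror the example above), diagonal, scalar, and identity matrices can be built directly:

```python
import numpy as np

C = np.diag([3, 6, 2])        # a diagonal matrix: all nondiagonal entries are zero
D = np.eye(3, dtype=int)      # the 3x3 identity matrix I_3
S = 5 * np.eye(3, dtype=int)  # a scalar matrix: the same scalar, 5, on the diagonal

# Every scalar matrix is a multiple of the identity; the identity is the
# scalar matrix whose scalar is 1.
assert np.array_equal(S, 5 * D)
```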
We can view matrices as generalizations of vectors. Indeed, matrices can
and should be thought of as being made up of both row and column vectors.
(Moreover, an m × n matrix can also be viewed as a single “wrapped vector”
of length mn.) Many of the conventions and operations for vectors carry
through (in an obvious way) to matrices.
Two matrices are equal if they have the same size and if their corre-
sponding entries are equal. Thus, if A = [aij ]m×n and B = [bij ]r×s , then
A = B if and only if m = r and n = s and aij = bij for all i and j.
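The equality test can be phrased computationally; a NumPy sketch with made-up matrices:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[1, 2], [3, 4]])
C = np.array([[1, 2, 0], [3, 4, 0]])  # a 2x3 matrix: different size

# A = B iff the sizes agree and every corresponding pair of entries agrees.
equal_AB = A.shape == B.shape and bool((A == B).all())
# A and C cannot be equal, whatever their entries: the sizes already differ.
equal_AC = A.shape == C.shape
```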
Neither A nor B can be equal to C (no matter what the values of x and y),
since A and B are 2 × 2 matrices and C is 2 × 3. However, A = B if and only
if a = 2, b = 0, c = 5 and d = 3.
$$C = \begin{bmatrix} 4 & 3 \\ 2 & 1 \end{bmatrix}$$
Then
$$A + B = \begin{bmatrix} -2 & 5 & -1 \\ 1 & 6 & 7 \end{bmatrix}$$
but neither A + C nor B + C is defined.
If A is an m × n matrix and c is a scalar, then the scalar multiple cA is the
m × n matrix obtained by multiplying each entry of A by c. More formally,
we have
$$cA = c[a_{ij}] = [ca_{ij}]$$
[In terms of vectors, we could equivalently have stipulated that each column
(or row) of cA is c times the corresponding column (or row) of A.]
A − B = A + (−B)
A+O =A=O+A
and
A − A = O = −A + A
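These identities are easy to check numerically; a small NumPy sketch with made-up entries:

```python
import numpy as np

A = np.array([[1, -2], [0, 3]])
B = np.array([[4, 5], [6, -1]])
O = np.zeros_like(A)  # the zero matrix of the same size as A

assert np.array_equal(2 * A, np.array([[2, -4], [0, 6]]))  # cA scales every entry
assert np.array_equal(A - B, A + (-B))                     # A - B = A + (-B)
assert np.array_equal(A + O, A)                            # A + O = A = O + A
assert np.array_equal(A - A, O)                            # A - A = O = -A + A
```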
Matrix Multiplication
Definition If A is an m × n matrix and B is an n × r matrix, then the product
C = AB is an m × r matrix. The (i, j) entry of the product is computed as
follows:
$$c_{ij} = a_{i1} b_{1j} + a_{i2} b_{2j} + \cdots + a_{in} b_{nj}$$
Remarks
• Notice that A and B need not be the same size. However, the number
of columns of A must be the same as the number of rows of B. If we
write the sizes of A, B and AB in order, we can see at a glance whether
this requirement is satisfied. Moreover, we can predict the size of the
product before doing any calculations, since the number of rows of AB
is the same as the number of rows of A, while the number of columns
of AB is the same as the number of columns of B:
$$\underset{m \times n}{A} \ \underset{n \times r}{B} = \underset{m \times r}{AB}$$
• The formula for the entries of the product looks like a dot product,
and indeed it is. It says that the (i, j) entry of the matrix AB is the dot
product of the ith row of A and the jth column of B:
$$\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
\vdots & \vdots & & \vdots \\
a_{i1} & a_{i2} & \cdots & a_{in} \\
\vdots & \vdots & & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}
\begin{bmatrix}
b_{11} & \cdots & b_{1j} & \cdots & b_{1r} \\
b_{21} & \cdots & b_{2j} & \cdots & b_{2r} \\
\vdots & & \vdots & & \vdots \\
b_{n1} & \cdots & b_{nj} & \cdots & b_{nr}
\end{bmatrix}$$
The “outer subscripts” on each ab term in the sum are always i and j,
whereas the “inner subscripts” always agree and increase from 1 to n.
We see this pattern clearly if we write $c_{ij}$ using summation notation:
$$c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}$$
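The summation formula translates directly into a loop. The following NumPy sketch (the helper name matmul and the sample matrices are my own) computes each entry as the dot product of row i of A with column j of B, then compares against the built-in product:

```python
import numpy as np

def matmul(A, B):
    """Compute AB entrywise via c_ij = sum over k of a_ik * b_kj."""
    m, n = A.shape
    n2, r = B.shape
    assert n == n2, "the number of columns of A must equal the number of rows of B"
    C = np.zeros((m, r))
    for i in range(m):
        for j in range(r):
            # dot product of the i-th row of A with the j-th column of B
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))
    return C

A = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])     # 2 x 3
B = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, -1.0]])  # 3 x 2
assert np.allclose(matmul(A, B), A @ B)              # agrees with NumPy; AB is 2 x 2
```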
Proof We prove (b) and leave proving (a) as an exercise. If $\vec{a}_1, \dots, \vec{a}_n$ are
the columns of A, then the product $A\vec{e}_j$ can be written as the linear combination
$$A\vec{e}_j = 0\,\vec{a}_1 + \cdots + 1\,\vec{a}_j + \cdots + 0\,\vec{a}_n = \vec{a}_j,$$
since $\vec{e}_j$ has 1 in its jth entry and 0 elsewhere.
Partitioned Matrices
It will often be convenient to regard a matrix as being composed of a num-
ber of smaller submatrices. By introducing vertical and horizontal lines
into a matrix, we can partition it into blocks. There is a natural way to
partition many matrices, particularly those arising in certain applications.
For example, consider the matrix
$$A = \begin{bmatrix}
1 & 0 & 0 & 2 & -1 \\
0 & 1 & 0 & 1 & 3 \\
0 & 0 & 1 & 4 & 0 \\
0 & 0 & 0 & 1 & 7 \\
0 & 0 & 0 & 7 & 2
\end{bmatrix}$$
Partitioning into blocks can simplify computations, especially when the
matrices are large and have many blocks of zeros. It turns out that the
multiplication of partitioned matrices is just like ordinary matrix multiplication.
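The claim that partitioned matrices multiply blockwise can be checked on the 5 × 5 matrix above. In the NumPy sketch below (the block names are my own), A is partitioned as $\begin{bmatrix} I & B \\ O & C \end{bmatrix}$, and the block formula $A^2 = \begin{bmatrix} I & B + BC \\ O & C^2 \end{bmatrix}$ agrees with ordinary multiplication:

```python
import numpy as np

# Blocks of the matrix A from the example in the notes.
I = np.eye(3, dtype=int)
Bblk = np.array([[2, -1], [1, 3], [4, 0]])
O = np.zeros((2, 3), dtype=int)
Cblk = np.array([[1, 7], [7, 2]])

A = np.block([[I, Bblk], [O, Cblk]])  # reassemble the 5x5 matrix

# Multiply block-by-block as if the blocks were scalars:
# [[I, B], [O, C]]^2 = [[I, B + B C], [O, C^2]]
A2_block = np.block([[I, Bblk + Bblk @ Cblk], [O, Cblk @ Cblk]])
assert np.array_equal(A @ A, A2_block)  # same as ordinary multiplication
```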
We begin by considering some special cases of partitioned matrices. Each
gives rise to a different way of viewing the product of two matrices.
Suppose A is an m × n matrix and B is n × r, so the product AB exists. If
we partition B in terms of its column vectors, as
$$B = [\,\vec{b}_1 \mid \vec{b}_2 \mid \cdots \mid \vec{b}_r\,]$$
(here the vertical bars are used simply as dividers, as in the textbook), then
$$AB = A[\,\vec{b}_1 \mid \vec{b}_2 \mid \cdots \mid \vec{b}_r\,]
    = [\,A\vec{b}_1 \mid A\vec{b}_2 \mid \cdots \mid A\vec{b}_r\,]$$
Similarly, if we partition A in terms of its row vectors $\vec{A}_1, \dots, \vec{A}_m$, then
$$AB = \begin{bmatrix} \vec{A}_1 \\ \hline \vec{A}_2 \\ \hline \vdots \\ \hline \vec{A}_m \end{bmatrix} B
    = \begin{bmatrix} \vec{A}_1 B \\ \hline \vec{A}_2 B \\ \hline \vdots \\ \hline \vec{A}_m B \end{bmatrix}$$
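Both partitioned views of the product can be verified numerically; a NumPy sketch with random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(2, 3))  # 2 x 3
B = rng.integers(-3, 4, size=(3, 4))  # 3 x 4
AB = A @ B                            # 2 x 4

# Column view: the j-th column of AB is A times the j-th column of B.
for j in range(B.shape[1]):
    assert np.array_equal(AB[:, j], A @ B[:, j])

# Row view: the i-th row of AB is the i-th row of A times B.
for i in range(A.shape[0]):
    assert np.array_equal(AB[i, :], A[i, :] @ B)
```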
For nonnegative integer powers of a square matrix A:
1. $A^r A^s = A^{r+s}$
2. $(A^r)^s = A^{rs}$
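A quick numerical check of the power laws (NumPy sketch; the sample matrix is my own):

```python
import numpy as np
from numpy.linalg import matrix_power

A = np.array([[1, 1], [0, 2]])
r, s = 3, 4

# A^r A^s = A^(r+s)
assert np.array_equal(matrix_power(A, r) @ matrix_power(A, s),
                      matrix_power(A, r + s))
# (A^r)^s = A^(rs)
assert np.array_equal(matrix_power(matrix_power(A, r), s),
                      matrix_power(A, r * s))
```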
Matrix Algebra
Theorem 3.2. Algebraic Properties of Matrix Addition and Scalar
Multiplication
Let A, B and C be matrices of the same size and let c and d be scalars. Then
a. A + B = B + A Commutativity
b. (A + B) + C = A + (B + C) Associativity
c. A + O = A
d. A + (−A) = O
e. c(A + B) = cA + cB Distributivity
f. (c + d)A = cA + dA Distributivity
g. c(dA) = (cd)A
h. 1A = A
c1 A1 + c2 A2 + · · · + ck Ak = O
We now check that the ij-th entries of both sides are the same. Starting
from the left, by the definition of the matrix product,
$$((AB)C)_{ij} = \sum_{k=1}^{s} (AB)_{ik} C_{kj}
             = \sum_{k=1}^{s} \Big( \sum_{l=1}^{n} A_{il} B_{lk} \Big) C_{kj};$$
we change the order of summation, which is OK since either way it is simply
the sum over all pairs k, l:
$$= \sum_{l=1}^{n} \sum_{k=1}^{s} A_{il} B_{lk} C_{kj};$$
collecting terms,
$$= \sum_{l=1}^{n} A_{il} \Big( \sum_{k=1}^{s} B_{lk} C_{kj} \Big);$$
now the inner sum is $(BC)_{lj}$ by the definition of the product, so
$$= \sum_{l=1}^{n} A_{il} (BC)_{lj} = (A(BC))_{ij}.$$
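The associativity just proved can be sanity-checked numerically; a NumPy sketch with random conformable matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-5, 6, size=(2, 3))  # m x n
B = rng.integers(-5, 6, size=(3, 4))  # n x s
C = rng.integers(-5, 6, size=(4, 2))  # s x r

# (AB)C = A(BC): exact equality here, since the entries are integers.
assert np.array_equal((A @ B) @ C, A @ (B @ C))
```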
Properties of the transpose (for matrices of appropriate sizes and any scalar k):
a. $(A^T)^T = A$
b. $(A + B)^T = A^T + B^T$
c. $(kA)^T = k(A^T)$
d. $(AB)^T = B^T A^T$
e. $(A^r)^T = (A^T)^r$ for all nonnegative integers r
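These transpose properties can be checked numerically; a NumPy sketch with made-up matrices (note the reversal of order in (d)):

```python
import numpy as np

A = np.array([[1, 2, 0], [3, -1, 4]])   # 2 x 3
B = np.array([[2, 1], [0, 5], [1, 1]])  # 3 x 2
C = np.array([[0, 1, 1], [2, 2, -3]])   # 2 x 3, same size as A
k = 3

assert np.array_equal(A.T.T, A)              # (A^T)^T = A
assert np.array_equal((A + C).T, A.T + C.T)  # (A + B)^T = A^T + B^T
assert np.array_equal((k * A).T, k * A.T)    # (kA)^T = k A^T
assert np.array_equal((A @ B).T, B.T @ A.T)  # (AB)^T = B^T A^T: order reverses
```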
Theorem 3.5.
Examples. 1)
$$\begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & -2 \\ 0 & 1 \end{bmatrix}
= \begin{bmatrix} 1 & -2 \\ 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$$
$$\begin{bmatrix} a_{11} x_1 \\ a_{21} x_1 \\ \vdots \\ a_{m1} x_1 \end{bmatrix}
+ \begin{bmatrix} a_{12} x_2 \\ a_{22} x_2 \\ \vdots \\ a_{m2} x_2 \end{bmatrix}
+ \cdots
+ \begin{bmatrix} a_{1n} x_n \\ a_{2n} x_n \\ \vdots \\ a_{mn} x_n \end{bmatrix}
= \begin{bmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{bmatrix} x_1
+ \begin{bmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{bmatrix} x_2
+ \cdots
+ \begin{bmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{bmatrix} x_n.$$
$$\begin{bmatrix}
a_{11} & a_{12} & \ldots & a_{1n} \\
a_{21} & a_{22} & \ldots & a_{2n} \\
\ldots & \ldots & \ldots & \ldots \\
a_{m1} & a_{m2} & \ldots & a_{mn}
\end{bmatrix}
\cdot
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
= \begin{bmatrix}
a_{11} x_1 + a_{12} x_2 + \ldots + a_{1n} x_n \\
a_{21} x_1 + a_{22} x_2 + \ldots + a_{2n} x_n \\
\vdots \\
a_{m1} x_1 + a_{m2} x_2 + \ldots + a_{mn} x_n
\end{bmatrix}$$
We can now write the linear system in matrix form: $A \cdot \vec{x} = \vec{b}$, where
$$A = \begin{bmatrix}
a_{11} & a_{12} & \ldots & a_{1n} \\
a_{21} & a_{22} & \ldots & a_{2n} \\
\ldots & \ldots & \ldots & \ldots \\
a_{m1} & a_{m2} & \ldots & a_{mn}
\end{bmatrix}$$
is the coefficient matrix of the system,
$$\vec{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$
is the column matrix (vector) of variables, and
$$\vec{b} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}$$
is the column matrix of constant terms (right-hand sides).
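A concrete instance (NumPy sketch; the 2 × 2 system is made up): writing a system as $A\vec{x} = \vec{b}$ lets a library solver produce $\vec{x}$, and $A\vec{x}$ is exactly the linear combination of the columns of A described above:

```python
import numpy as np

# The system  x1 + 2*x2 = 5,  3*x1 + 4*x2 = 11  in matrix form A x = b.
A = np.array([[1.0, 2.0], [3.0, 4.0]])  # coefficient matrix
b = np.array([5.0, 11.0])               # right-hand sides
x = np.linalg.solve(A, b)               # solve A x = b

assert np.allclose(A @ x, b)
# A x is the linear combination x1 * (column 1 of A) + x2 * (column 2 of A):
assert np.allclose(A @ x, x[0] * A[:, 0] + x[1] * A[:, 1])
```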
Theorem 3.8. If $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$, then A is invertible if and only if
$ad - bc \neq 0$, in which case
$$A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$
(So, if $ad - bc = 0$, then A is not invertible.)
The expression $ad - bc$ is called the determinant of A (in this special 2 × 2
case), denoted det A.
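The 2 × 2 inverse formula of Theorem 3.8 is easy to implement; a NumPy sketch (the helper name inverse_2x2 is my own):

```python
import numpy as np

def inverse_2x2(A):
    """Theorem 3.8: A^{-1} = (1/(ad - bc)) [[d, -b], [-c, a]], when ad - bc != 0."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0: A is not invertible")
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[1.0, 2.0], [3.0, 4.0]])  # det = 1*4 - 2*3 = -2, so A is invertible
assert np.allclose(inverse_2x2(A) @ A, np.eye(2))  # A^{-1} A = I
```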