
Elementary Linear Algebra

Chapter I

1. Introduction to Systems of Linear Equations


1.1. Linear Equations

 Any straight line in the xy-plane can be represented algebraically by an equation of the form
a1x + a2y = b
 General form: a linear equation in the n variables x1, x2, …, xn is defined to be one that can be expressed in the form
a1x1 + a2x2 + ··· + anxn = b
where a1, a2, …, an and b are real constants.
 The variables in a linear equation are sometimes called unknowns.
 Example 1

The equations x + 3y = 7, y = (1/2)x + 3z + 1, and x1 − 2x2 − 3x3 + x4 = 7 are linear.

 A linear equation does not involve any products or roots of variables.
 All variables occur only to the first power and do not appear as arguments for trigonometric, logarithmic, or exponential functions.
 The equations x + 3√y = 5, 3x + 2y − z + xz = 4, and y = sin x are not linear.

 A solution of a linear equation a1x1 + a2x2 + ··· + anxn = b is a sequence of n numbers s1, s2, …, sn such that the equation is satisfied when we substitute x1 = s1, x2 = s2, …, xn = sn.
 The set of all solutions of the equation is called its solution set or
general solution of the equation.
 Example 2
 Find the solution set of x1 – 4x2 + 7x3 = 5
 Solution:
 We can assign arbitrary values to any two variables and
solve for the third variable
 For example x1 = 5 + 4s – 7t, x2 = s, x3 = t
where s, t are arbitrary values
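As a quick sketch of Example 2 (assuming SymPy is available; the snippet is illustrative, not part of the original notes), the same parametric solution can be obtained symbolically:

```python
# Solve x1 - 4*x2 + 7*x3 = 5 for x1, treating x2 = s and x3 = t as free parameters.
import sympy as sp

x1, s, t = sp.symbols("x1 s t")
solution = sp.solve(sp.Eq(x1 - 4*s + 7*t, 5), x1)[0]
print(solution)   # 4*s - 7*t + 5, i.e. x1 = 5 + 4s - 7t
```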

1.2. Linear Systems


a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
   ⋮
am1x1 + am2x2 + ... + amnxn = bm
 A finite set of linear equations in the variables x1, x2, …, xn is called a
system of linear equations or a linear system.
 A sequence of numbers s1, s2, …, sn is called a solution of the system if x1 = s1, x2 = s2, …, xn = sn is a solution of every equation in the system.
 A system that has no solutions is said to be inconsistent.
 If there is at least one solution, the system is called consistent.
 Every system of linear equations has either no solutions, exactly one
solution, or infinitely many solutions
 A general system of two linear equations:
a1x + b1y = c1 (a1, b1 not both zero)
a2x + b2y = c2 (a2, b2 not both zero)
 The two lines may be parallel – no solution
 The two lines may intersect at only one point – one solution
 The two lines may coincide – infinitely many solutions

1.3. Augmented Matrices

 Keeping in mind the locations of the +'s, the x's, and the ='s, a system of linear equations can be abbreviated by writing only the rectangular array of numbers.
 This array is called the augmented matrix for the system.
 When forming the augmented matrix, the unknowns must be written in the same order in each equation, and the constants must be on the right.

a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
   ⋮
am1x1 + am2x2 + ... + amnxn = bm

augmented matrix:

[ a11  a12  ...  a1n  b1 ]
[ a21  a22  ...  a2n  b2 ]
[  ⋮    ⋮         ⋮    ⋮ ]
[ am1  am2  ...  amn  bm ]
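As an illustrative sketch (variable names and values are mine, not from the notes), an augmented matrix can be formed in NumPy by appending the constant column to the coefficient matrix:

```python
import numpy as np

# Coefficient matrix and right-hand side of a sample 3x3 system (illustrative values).
A = np.array([[1., 1., 2.],
              [2., 4., -3.],
              [3., 6., -5.]])
b = np.array([9., 1., 0.])

# The augmented matrix [A | b] is A with b appended as an extra column.
augmented = np.column_stack((A, b))
print(augmented)
```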

1.4. Elementary Row Operations


 The basic method for solving a system of linear equations is to replace
the given system by a new system that has the same solution set but
which is easier to solve.
 Since the rows of an augmented matrix correspond to the equations in the associated system, the new system is generally obtained in a series of steps by applying the following three types of operations to eliminate unknowns systematically.

a11x1 + a12x2 + ... + a1nxn = b1        [ a11  a12  ...  a1n  b1 ]
a21x1 + a22x2 + ... + a2nxn = b2        [ a21  a22  ...  a2n  b2 ]
   ⋮                                    [  ⋮    ⋮         ⋮    ⋮ ]
am1x1 + am2x2 + ... + amnxn = bm        [ am1  am2  ...  amn  bm ]
 Elementary row operations
 Multiply an equation through by a nonzero constant
 Interchange two equations
 Add a multiple of one equation to another
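The three operations above translate directly into code. A minimal NumPy sketch (function names are mine) operating on the rows of an augmented matrix:

```python
import numpy as np

def scale_row(M, i, c):
    """Multiply row i by a nonzero constant c."""
    M = M.astype(float)
    M[i] *= c
    return M

def swap_rows(M, i, j):
    """Interchange rows i and j."""
    M = M.copy()
    M[[i, j]] = M[[j, i]]
    return M

def add_multiple(M, i, j, c):
    """Add c times row j to row i."""
    M = M.astype(float)
    M[i] += c * M[j]
    return M
```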
1 0 0 1
2. Gaussian Elimination 0 1 0 2
 
2.1. Echelon Forms 0 0 1 3
 A matrix is in reduced row-echelon form if it has the following properties:
 If a row does not consist entirely of zeros, then the first nonzero number in the row is a 1. We call this a leading 1.
 If there are any rows that consist entirely of zeros, then they are grouped together at the bottom of the matrix.
 In any two successive rows that do not consist entirely of zeros, the leading 1 in the lower row occurs farther to the right than the leading 1 in the higher row.
 Each column that contains a leading 1 has zeros everywhere else.
 A matrix that has the first three properties is said to be in row-echelon form.
Note: A matrix in reduced row-echelon form is of necessity in row-echelon form,
but not conversely.
2.2. Elimination Methods
 Below is a step-by-step elimination procedure that can be used to reduce any matrix to reduced row-echelon form. As a running example, consider the matrix

0 0  2 0 7 12 
2 4  10 6 12 28
 
2 4  5 6  5  1

 Step 1. Locate the leftmost column that does not consist entirely of zeros.
 Step 2. Interchange the top row with another row, if necessary, to bring a nonzero entry to the top of the column found in Step 1.
 Step 3. If the entry that is now at the top of the column found in Step 1 is a, multiply the first row by 1/a in order to introduce a leading 1.
 Step 4. Add suitable multiples of the top row to the rows below so that all entries below the leading 1 become zeros.
 Step 5. Now cover the top row in the matrix and begin again with Step 1 applied to the submatrix that remains. Continue in this way until the entire matrix is in row-echelon form.

 Step 6. Beginning with the last nonzero row and working upward, add suitable multiples of each row to the rows above to introduce zeros above the leading 1s.
 The resulting matrix is then in reduced row-echelon form.


 Steps 1-5: this procedure produces a row-echelon form and is called Gaussian elimination.
 Steps 1-6: this procedure produces a reduced row-echelon form and is called Gauss-Jordan elimination.
 Every matrix has a unique reduced row-echelon form but a row-echelon
form of a given matrix is not unique
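As a sketch (assuming SymPy is available), the reduced row-echelon form of the example matrix from Section 2.2 can be computed directly, which is a convenient check on hand elimination:

```python
import sympy as sp

# The example matrix from Section 2.2, treated as an augmented matrix.
M = sp.Matrix([[0, 0,  -2, 0,  7, 12],
               [2, 4, -10, 6, 12, 28],
               [2, 4,  -5, 6, -5, -1]])

R, pivot_columns = M.rref()   # unique reduced row-echelon form and its pivot columns
sp.pprint(R)
```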
 Back-Substitution
 It is often convenient to solve a system of linear equations by using Gaussian elimination to bring the augmented matrix into row-echelon form, without continuing all the way to the reduced row-echelon form.
 When this is done, the corresponding system of equations can be solved by a technique called back-substitution.
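A minimal back-substitution sketch, assuming a square row-echelon (upper-triangular) system with nonzero diagonal entries; the matrix and vector values are illustrative:

```python
import numpy as np

def back_substitute(U, c):
    """Solve Ux = c for upper-triangular U with nonzero diagonal, from the bottom up."""
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # Subtract the contribution of the unknowns already solved, then divide.
        x[i] = (c[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

U = np.array([[1., 1., 2.],
              [0., 1., -3.5],
              [0., 0., 1.]])
c = np.array([9., -8.5, 3.])
print(back_substitute(U, c))   # [1. 2. 3.]
```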
2.3. Homogeneous Linear Systems
 A system of linear equations is said to be homogeneous if the constant
terms are all zero.

a11x1 + a12x2 + ... + a1nxn = 0
a21x1 + a22x2 + ... + a2nxn = 0
   ⋮
am1x1 + am2x2 + ... + amnxn = 0
 Every homogeneous system of linear equations is consistent, since all such systems have x1 = 0, x2 = 0, …, xn = 0 as a solution.
 This solution is called the trivial solution.
 If there are other solutions, they are called nontrivial solutions.
 There are only two possibilities for its solutions:
 There is only the trivial solution
 There are infinitely many solutions in addition to the trivial
solution
 Example
 Solve the homogeneous system of linear equations by Gauss-
Jordan elimination

2x1 + 2x2 − x3 + x5 = 0
−x1 − x2 + 2x3 − 3x4 + x5 = 0
x1 + x2 − 2x3 − x5 = 0
x3 + x4 + x5 = 0

 The augmented matrix


 2 2 1 0 1 0
 1  1 2  3 1 0

 1 1  2 0 1 0
 
0 0 0 1 0 0
 Reducing this matrix to reduced row-echelon form
1 1 0 0 1 0
0 0 1 0 1 0

0 0 0 1 0 0
 
0 0 0 0 0 0
 The general solution is
x1 = −s − t,   x2 = s,   x3 = −t,   x4 = 0,   x5 = t
 Note: the trivial solution is obtained when s = t = 0
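The example above can be checked with SymPy (assumed available); the null space basis vectors correspond to the parameters s and t in the general solution:

```python
import sympy as sp

A = sp.Matrix([[ 2,  2, -1,  0,  1],
               [-1, -1,  2, -3,  1],
               [ 1,  1, -2,  0, -1],
               [ 0,  0,  1,  1,  1]])

R, pivots = A.rref()
sp.pprint(R)              # reduced row-echelon form (coefficient part)
sp.pprint(A.nullspace())  # basis of the solution space of Ax = 0
```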

 Two important points:


 None of the three row operations alters the final column of
zeros, so the system of equations corresponding to the
reduced row-echelon form of the augmented matrix must
also be a homogeneous system.
 If the given homogeneous system has m equations in n unknowns with m < n, and there are r nonzero rows in the reduced row-echelon form of the augmented matrix, we will have r < n. The corresponding system will have the form:
xk1 + Σ( ) = 0            xk1 = −Σ( )
xk2 + Σ( ) = 0            xk2 = −Σ( )
      ⋮        that is,         ⋮
xkr + Σ( ) = 0            xkr = −Σ( )
where xk1, xk2, …, xkr are the leading variables and each Σ( ) denotes a sum involving the n − r free variables.

 Theorem 1.2.1
 A homogeneous system of linear equations with more unknowns
than equations has infinitely many solutions.

 Remark
 This theorem applies only to homogeneous systems!
 A nonhomogeneous system with more unknowns than
equations need not be consistent; however, if the system is
consistent, it will have infinitely many solutions.
 e.g., two parallel planes in 3-space

3. Matrices and Matrix Operations


3.1. Definition and Notation
 A matrix is a rectangular array of numbers. The numbers in the array are
called the entries in the matrix
 A general m×n matrix A is denoted as
A = [ a11  a12  ...  a1n ]
    [ a21  a22  ...  a2n ]
    [  ⋮    ⋮         ⋮  ]
    [ am1  am2  ...  amn ]
 The entry that occurs in row i and column j of matrix A will be denoted by aij or (A)ij. When the entries are real numbers, they are commonly referred to as scalars.
 The preceding matrix can be written as [aij]m×n or [aij]

 Two matrices are defined to be equal if they have the same size and their
corresponding entries are equal
 If A = [aij] and B = [bij] have the same size, then A = B if and only if
aij = bij for all i and j
 If A and B are matrices of the same size, then the sum A + B is the matrix
obtained by adding the entries of B to the corresponding entries of A.
 The difference A – B is the matrix obtained by subtracting the entries of B
from the corresponding entries of A
 If A is any matrix and c is any scalar, then the product cA is the matrix
obtained by multiplying each entry of the matrix A by c. The matrix cA is
said to be the scalar multiple of A
 If A = [aij], then (cA)ij = c(A)ij = caij
 If A is an mr matrix and B is an rn matrix, then the product AB is the
mn matrix whose entries are determined as follows.
 (AB)mn = Amr Brn

To find the entry in row i and column j of AB, single out row i of A and column j of B, multiply the corresponding entries together, and add up the resulting products:
(AB)ij = ai1b1j + ai2b2j + ai3b3j + … + airbrj

3.2. Partitioned Matrices


 A matrix can be partitioned into smaller matrices by inserting horizontal
and vertical rules between selected rows and columns
 For example, three possible partitions of a 3×4 matrix A:
 The partition of A into four submatrices A11, A12, A21, and A22:
A = [ A11  A12 ]
    [ A21  A22 ]
 The partition of A into its row matrices r1, r2, and r3:
A = [ r1 ]
    [ r2 ]
    [ r3 ]
 The partition of A into its column matrices c1, c2, c3, and c4:
A = [ c1  c2  c3  c4 ]
3.3. Multiplication by Columns and by Rows
 It is possible to compute a particular row or column of a matrix product
AB without computing the entire product:
jth column matrix of AB = A[jth column matrix of B]
ith row matrix of AB = [ith row matrix of A]B
 If a1, a2, ..., am denote the row matrices of A and b1, b2, ..., bn denote the column matrices of B, then
AB = A[ b1  b2  ...  bn ] = [ Ab1  Ab2  ...  Abn ]   (AB computed column by column)

AB = [ a1 ] B = [ a1B ]
     [ a2 ]     [ a2B ]
     [  ⋮ ]     [  ⋮  ]
     [ am ]     [ amB ]   (AB computed row by row)
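A short NumPy sketch of these two facts (illustrative values): a single column or row of AB can be obtained without forming the whole product:

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6., 7.], [8., 9., 0.]])

j = 1
print(A @ B[:, j])      # column j of AB, computed as A times column j of B
print((A @ B)[:, j])    # same result

i = 0
print(A[i, :] @ B)      # row i of AB, computed as row i of A times B
print((A @ B)[i, :])    # same result
```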

3.4. Matrix Products as Linear Combinations


 Let
A = [ a11  a12  ...  a1n ]        x = [ x1 ]
    [ a21  a22  ...  a2n ]            [ x2 ]
    [  ⋮    ⋮         ⋮  ]            [  ⋮ ]
    [ am1  am2  ...  amn ]            [ xn ]
 Then
Ax = [ a11x1 + a12x2 + ... + a1nxn ]
     [ a21x1 + a22x2 + ... + a2nxn ]
     [               ⋮             ]
     [ am1x1 + am2x2 + ... + amnxn ]

   = x1 [ a11 ] + x2 [ a12 ] + ... + xn [ a1n ]
        [ a21 ]      [ a22 ]            [ a2n ]
        [  ⋮  ]      [  ⋮  ]            [  ⋮  ]
        [ am1 ]      [ am2 ]            [ amn ]
 The product Ax of a matrix A with a column matrix x is a linear
combination of the column matrices of A with the coefficients coming from
the matrix x
 Example 1
 Example 2
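A small numerical sketch of the column-combination view (the matrix and vector are illustrative, not from the notes):

```python
import numpy as np

# Ax equals x1*(column 1 of A) + x2*(column 2 of A) + x3*(column 3 of A).
A = np.array([[1., 2., 3.],
              [4., 5., 6.]])
x = np.array([2., -1., 3.])

direct = A @ x
combination = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]
print(direct, combination)   # both give the same vector
```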

3.5. Matrix Form of a Linear System


 Consider any system of m linear equations in n unknowns. It can be written in matrix form as Ax = b, where x is the column matrix of unknowns and b is the column matrix of constants.

 The matrix A is called the coefficient matrix of the system


 The augmented matrix of the system is given by

[A | b] = [ a11  a12  ...  a1n  b1 ]
          [ a21  a22  ...  a2n  b2 ]
          [  ⋮    ⋮         ⋮    ⋮ ]
          [ am1  am2  ...  amn  bm ]
 If A is any mn matrix, then the transpose of A, denoted by AT, is defined
to be the nm matrix that results from interchanging the rows and
columns of A
 That is, the first column of AT is the first row of A, the second
column of AT is the second row of A, and so forth

 If A is a square matrix, then the trace of A , denoted by tr(A), is defined to


be the sum of the entries on the main diagonal of A. The trace of A is
undefined if A is not a square matrix.
 For an n×n matrix A = [aij],  tr(A) = Σ aii (i = 1, …, n) = a11 + a22 + ... + ann
4. Inverses; Rules of Matrix Arithmetic
4.1. Properties of Matrix Operations
 For real numbers a and b ,we always have ab = ba, which is called the
commutative law for multiplication. For matrices, however, AB and
BA need not be equal.

 Equality can fail to hold for three reasons:


 The product AB is defined but BA is undefined.
 AB and BA are both defined but have different sizes.
 It is possible to have AB ≠ BA even if both AB and BA are defined and have the same size.
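A two-line NumPy check that matrix multiplication is not commutative (the matrices are an illustrative choice):

```python
import numpy as np

A = np.array([[1., 0.], [1., 1.]])
B = np.array([[0., 1.], [1., 0.]])

print(A @ B)   # [[0. 1.], [1. 1.]]
print(B @ A)   # [[1. 1.], [1. 0.]]  -- a different matrix
```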
 Theorem 1.4.1
(Properties of Matrix Arithmetic)
o Assuming that the sizes of the matrices are such that the
indicated operations can be performed, the following rules of
matrix arithmetic are valid:
 A+B=B+A (commutative law for addition)
 A + (B + C) = (A + B) + C (associative law for addition)
 A(BC) = (AB)C (associative law for multiplication)
 A(B + C) = AB + AC (left distributive law)
 (B + C)A = BA + CA (right distributive law)
 A(B – C) = AB – AC, (B – C)A = BA – CA
 a(B + C) = aB + aC, a(B – C) = aB – aC
 (a+b)C = aC + bC, (a-b)C = aC – bC
 a(bC) = (ab)C, a(BC) = (aB)C = B(aC)
o Note: the cancellation law is not valid for matrix multiplication!

4.2. Zero Matrices


 A matrix, all of whose entries are zero, is called a zero matrix
 A zero matrix will be denoted by 0
 If it is important to emphasize the size, we shall write 0m×n for the m×n zero matrix.
 In keeping with our convention of using boldface symbols for matrices
with one column, we will denote a zero matrix with one column by 0
 Example
o The cancellation law does not hold: there are matrices A, B, C, and D, with B ≠ C and D ≠ 0, for which
AB = AC = [ 3  4 ]
          [ 6  8 ]
AD = [ 0  0 ]
     [ 0  0 ]
o Thus AB = AC does not imply B = C, and AD = 0 does not imply that A = 0 or D = 0.
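A concrete check in NumPy, using one choice of matrices consistent with the products shown above (this particular A, B, C, D is an assumption, not given in the notes):

```python
import numpy as np

# AB = AC even though B != C, and AD = 0 even though A != 0 and D != 0.
A = np.array([[0., 1.], [0., 2.]])
B = np.array([[1., 1.], [3., 4.]])
C = np.array([[2., 5.], [3., 4.]])
D = np.array([[3., 7.], [0., 0.]])

print(A @ B)   # [[3. 4.], [6. 8.]]
print(A @ C)   # same product as A @ B
print(A @ D)   # the 2x2 zero matrix
```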
 Theorem 1.4.2 (Properties of Zero Matrices)
o Assuming that the sizes of the matrices are such that the
indicated operations can be performed ,the following rules of
matrix arithmetic are valid
 A+0=0+A=A
 A–A=0
 0 – A = -A
 A0 = 0; 0A = 0

4.3. Identity Matrices


 A square matrix with 1s on the main diagonal and 0s off the main
diagonal is called an identity matrix and is denoted by I, or In for the n×n identity matrix
 If A is an m×n matrix, then AIn = A and ImA = A
 An identity matrix plays the same role in matrix arithmetic as the number
1 plays in the numerical relationships a·1 = 1·a = a
 Theorem 1.4.3
o If R is the reduced row-echelon form of an n×n matrix A, then either R has a row of zeros or R is the identity matrix In

4.4. Invertible Matrices
 If A is a square matrix, and if a matrix B of the same size can be found
such that AB = BA = I, then A is said to be invertible and B is called an
inverse of A. If no such matrix B can be found, then A is said to be
singular.

 Remark:
 The inverse of A is denoted as A-1
 Not every (square) matrix has an inverse
 An invertible matrix has exactly one inverse
 Theorem 1.4.4
 If B and C are both inverses of the matrix A, then B = C
 Theorem 1.4.5
 The matrix
A = [ a  b ]
    [ c  d ]
is invertible if ad − bc ≠ 0, in which case the inverse is given by the formula
A-1 = 1/(ad − bc) · [  d  -b ]
                    [ -c   a ]
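A sketch of Theorem 1.4.5 in code (the function name is mine; the sample matrix is illustrative):

```python
import numpy as np

def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the ad - bc formula."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("not invertible: ad - bc = 0")
    return (1.0 / det) * np.array([[d, -b], [-c, a]])

print(inverse_2x2(1., 2., 3., 4.))
print(np.linalg.inv(np.array([[1., 2.], [3., 4.]])))   # agrees
```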
 Theorem 1.4.6
 If A and B are invertible matrices of the same size, then AB is invertible and (AB)-1 = B-1A-1

4.5. Powers of a Matrix


 If A is a square matrix, then we define the nonnegative integer powers of A
to be
A0 = I,    An = AA···A  (n factors, n > 0)

 If A is invertible, then we define the negative integer powers to be

A n  ( A1 ) n  
A
1 1
A 
 A1 (n  0)


n factors
 Theorem 1.4.7 (Laws of Exponents)
 If A is a square matrix and r and s are integers, then ArAs = Ar+s,
(Ar)s = Ars
 If A is an invertible matrix, then:
o A-1 is invertible and (A-1)-1 = A
o An is invertible and (An)-1 = (A-1)n for n = 0, 1, 2, …
o For any nonzero scalar k, the matrix kA is invertible and
(kA)-1 = (1/k)A-1
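A brief NumPy sketch of integer powers (illustrative matrix; negative powers require invertibility):

```python
import numpy as np

A = np.array([[1., 1.], [0., 2.]])

print(np.linalg.matrix_power(A, 0))    # A^0 = I
print(np.linalg.matrix_power(A, 3))    # A @ A @ A
print(np.linalg.matrix_power(A, -2))   # (A^-1)^2, defined since A is invertible
```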

4.6. Polynomial Expressions Involving Matrices


 If A is a square matrix, say m×m, and if
p(x) = a0 + a1x + … + anxn
is any polynomial, then we define
p(A) = a0I + a1A + … + anAn
where I is the m×m identity matrix.
 That is, p(A) is the m×m matrix that results when A is substituted for x in the above equation and a0 is replaced by a0I
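A short sketch of evaluating p(A) for one illustrative polynomial, p(x) = 2 − x + 3x²:

```python
import numpy as np

A = np.array([[1., 2.], [0., 3.]])
I = np.eye(2)

# p(A) = 2*I - A + 3*A^2, with the constant term multiplied by the identity.
p_A = 2 * I - A + 3 * np.linalg.matrix_power(A, 2)
print(p_A)
```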
 Example

 Theorems 1.4.9 (Properties of the Transpose)


o If the sizes of the matrices are such that the stated operations can
be performed, then
 (AT)T = A
 (A + B)T = AT + BT and (A – B)T = AT – BT
 (kA)T = kAT, where k is any scalar
 (AB)T = BTAT
 Theorem 1.4.10 (Invertibility of a Transpose)
o If A is an invertible matrix, then AT is also invertible and (AT)-1 =
(A-1)T
 Example

5. Elementary Matrices and a Method for Finding A-1


5.1. Elementary Row Operation
 An elementary row operation (sometimes called just a row operation) on a
matrix A is any one of the following three types of operations:
 Interchange of two rows of A
 Replacement of a row r of A by cr for some number c ≠ 0
 Replacement of a row r1 of A by the sum r1 + cr2 of that row and a
multiple of another row r2 of A
5.2. Elementary Matrix
 An nn elementary matrix is a matrix produced by applying exactly one
elementary row operation to In
 Eij is the elementary matrix obtained by interchanging the i-th and j-
th rows of In
 Ei(c) is the elementary matrix obtained by multiplying the i-th row of
In by c  0
 Eij(c) is the elementary matrix obtained by adding c times the j-th row
to the i-th row of In, where i  j
 Example

5.3. Elementary Matrices and Row Operations


 Theorem 1.5.1
 Suppose that E is an m×m elementary matrix produced by applying a particular elementary row operation to Im, and that A is an m×n matrix. Then EA is the matrix that results from applying
that same elementary row operation to A
 Example
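A minimal sketch of Theorem 1.5.1 (illustrative matrices): multiplying A on the left by an elementary matrix performs the corresponding row operation on A:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])

E = np.eye(3)
E[2, 0] = -2.0    # E is E31(-2): obtained from I3 by adding -2 times row 1 to row 3

print(E @ A)      # same as adding -2 times row 1 of A to row 3 of A
```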
5.4. Inverse Operations
 If an elementary row operation is applied to an identity matrix I to produce
an elementary matrix E, then there is a second row operation that, when
applied to E, produces I back again

 Example
 Theorem 1.5.2
o Elementary Matrices and Nonsingularity
 Each elementary matrix is nonsingular, and its inverse is
itself an elementary matrix. More precisely,
 Eij-1 = Eji (= Eij)
 Ei(c)-1 = Ei(1/c) with c ≠ 0
 Eij(c)-1 = Eij(-c) with i ≠ j

 Theorem 1.5.3(Equivalent Statements)


o If A is an nn matrix, then the following statements are equivalent,
that is, all true or all false
 A is invertible
 Ax = 0 has only the trivial solution
 The reduced row-echelon form of A is In
 A is expressible as a product of elementary matrices

5.5. A Method for Inverting Matrices


 To find the inverse of an invertible matrix A, we must find a sequence of
elementary row operations that reduces A to the identity and then
perform this same sequence of operations on In to obtain A-1

 Remark
 Suppose we can find elementary matrices E1, E2, …, Ek such that
Ek … E2 E1 A = In
then
A-1 = Ek … E2 E1 In
 Example 4
(Using Row Operations to Find A-1)
o Find the inverse of

1 2 3
A  2 5 3
1 0 8
o Solution:
 To accomplish this we shall adjoin the identity matrix to the
right side of A, thereby producing a matrix of the form [A |
I]
 We shall apply row operations to this matrix until the left side is reduced to I; these operations will convert the right side to A-1, so that the final matrix will have the form [I | A-1]
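A sketch of the [A | I] → [I | A-1] procedure for the matrix above, using SymPy's rref (assuming SymPy is available):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 5, 3],
               [1, 0, 8]])

# Adjoin the 3x3 identity, reduce [A | I] to reduced row-echelon form,
# and read off A^-1 from the right-hand block.
augmented = A.row_join(sp.eye(3))
R, _ = augmented.rref()
A_inv = R[:, 3:]
sp.pprint(A_inv)
print(A * A_inv == sp.eye(3))   # True
```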
6. Further Results on Systems of Equations and Invertibility
6.1. Theorems
 Theorems 1.6.1
o Every system of linear equations has either no solutions, exactly one solution, or infinitely many solutions.
 Theorem 1.6.2
o If A is an invertible n×n matrix, then for each n×1 matrix b, the
system of equations Ax = b has exactly one solution, namely, x =
A-1b.
 Example

6.2. Linear Systems with a Common Coefficient Matrix


 To solve a sequence of linear systems, Ax = b1, Ax = b2, …, Ax = bk, with common coefficient matrix A
 If A is invertible, then the solutions are x1 = A-1b1, x2 = A-1b2, …, xk = A-1bk
 A more efficient method is to form the matrix [A|b1|b2|…|bk]
 By reducing it to reduced row-echelon form we can solve all k systems at
once by Gauss-Jordan elimination.
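A small NumPy sketch of this idea (illustrative values): stacking the right-hand sides as columns solves all the systems in one call:

```python
import numpy as np

A = np.array([[1., 2.], [3., 5.]])
B = np.column_stack(([1., 2.], [4., 6.]))   # columns are b1 and b2

X = np.linalg.solve(A, B)   # column j of X solves A x = bj
print(X)
```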
 Theorems 1.6.3
o Let A be a square matrix
 If B is a square matrix satisfying BA = I, then B = A-1
 If B is a square matrix satisfying AB = I, then B = A-1
 Theorem 1.6.4 (Equivalent Statements)
o If A is an nn matrix, then the following statements are equivalent
 A is invertible
 Ax = 0 has only the trivial solution
 The reduced row-echelon form of A is In
 A is expressible as a product of elementary matrices
 Ax = b is consistent for every n×1 matrix b
 Ax = b has exactly one solution for every n×1 matrix b
 Theorem 1.6.5
o Let A and B be square matrices of the same size. If AB is invertible,
then A and B must also be invertible.
o A fundamental problem: let A be a fixed m × n matrix. Find all m × 1 matrices b such that the system of equations Ax = b is consistent.
7. Diagonal, Triangular, and Symmetric Matrices
7.1. Diagonal Matrix
 A square matrix A is mn with m = n; the (i,j)-entries for 1  i  m form the
main diagonal of A
 A diagonal matrix is a square matrix all of whose entries not on the main
diagonal equal zero. By diag(d1, …, dm) is meant the mm diagonal matrix
whose (i,i)-entry equals di for 1  i  m
7.1.1. Properties of Diagonal Matrices
 A general nn diagonal matrix D can be written as
 A diagonal matrix is invertible if and only if all of its diagonal entries
are nonzero
 Powers of diagonal matrices are easy to compute
d1 0  0  1 / d1 0  0  d1k 0  0 
0 d  0  
 D 1   0 1 / d 2  0  k  0 d 2k  0 
D 2
D 
          
     k
 0 0  dn   0 0  1/ dn   0 0  d n 
7.1.2. Properties of Diagonal Matrices
 Matrix products that involve diagonal factors are especially easy to
compute
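A quick NumPy sketch (illustrative values): multiplying by a diagonal matrix scales rows or columns, and powers act entrywise on the diagonal:

```python
import numpy as np

D = np.diag([2., 3., 4.])
A = np.ones((3, 3))

print(D @ A)                          # row i of A scaled by d_i
print(A @ D)                          # column j of A scaled by d_j
print(np.linalg.matrix_power(D, 3))   # diag(8, 27, 64)
```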

7.2. Triangular Matrices


 A mn lower-triangular matrix L satisfies (L)ij = 0 if i < j, for 1  i  m and 1
jn
 A mn upper-triangular matrix U satisfies (U)ij = 0 if i > j, for 1  i  m and
1jn
 A unit-lower (or –upper)-triangular matrix T is a lower (or upper)-triangular
matrix satisfying (T)ii = 1 for 1  i  min(m,n)
 Example

 A diagonal matrix is both upper triangular and lower triangular
 A square matrix in row-echelon form is upper triangular
 Theorem 1.7.1
o The transpose of a lower triangular matrix is upper triangular, and
the transpose of an upper triangular matrix is lower triangular
o The product of lower triangular matrices is lower triangular, and
the product of upper triangular matrices is upper triangular
o A triangular matrix is invertible if and only if its diagonal entries
are all nonzero
o The inverse of an invertible lower triangular matrix is lower
triangular, and the inverse of an invertible upper triangular matrix
is upper triangular
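Two parts of Theorem 1.7.1 checked numerically on illustrative upper-triangular matrices:

```python
import numpy as np

U1 = np.array([[1., 2., 3.],
               [0., 4., 5.],
               [0., 0., 6.]])
U2 = np.array([[2., 0., 1.],
               [0., 3., 4.],
               [0., 0., 5.]])

print(U1 @ U2)             # the product is again upper triangular
print(np.linalg.inv(U1))   # the inverse is upper triangular (nonzero diagonal)
```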

7.3. Symmetric Matrices


 A (square) matrix A for which AT = A, so that Aij = Aji for all i and j, is
said to be symmetric.
 Example

 Theorem 1.7.2
o If A and B are symmetric matrices with the same size, and if k is
any scalar, then
 AT is symmetric
 A + B and A – B are symmetric
 kA is symmetric
o Remark
 The product of two symmetric matrices is symmetric if and
only if the matrices commute, i.e., AB = BA
o Example

 Theorem 1.7.3
o If A is an invertible symmetric matrix, then A-1 is symmetric.
o Remark:
 In general, a symmetric matrix need not be invertible.
 The products AAT and ATA are always symmetric
o Example
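A short NumPy check of the remark that AAT and ATA are always symmetric (illustrative matrix):

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [3., 4., 5.]])

P = A @ A.T    # 2 x 2
Q = A.T @ A    # 3 x 3
print(np.array_equal(P, P.T), np.array_equal(Q, Q.T))   # True True
```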
 Theorem 1.7.4
o If A is an invertible matrix, then AAT and ATA are also invertible
