
Linear algebra cheat sheet

The space $\mathbb{R}^d$

The vector space $\mathbb{R}^d$ contains $d$-tuples of real numbers, which we call vectors. To distinguish them from ordinary tuples, we decorate vector variables with an arrow and write the entries of a vector as a column:

$$\vec{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_d \end{pmatrix}$$

It is often said that a vector represents “a direction, not a location”, in contrast to points in $d$-dimensional space. However, the above vector $\vec{x}$ can also be used to represent the point $(x_1, x_2, \ldots, x_d)$ if we consider where it ends when positioned at the origin.

The zero vector $\vec{0}$ of $\mathbb{R}^d$ is the vector in which every entry is equal to zero.

Vector addition, subtraction

Addition of vectors of the same length (dimension) is done component-wise:

$$\vec{x} + \vec{y} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_d \end{pmatrix} + \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_d \end{pmatrix} = \begin{pmatrix} x_1 + y_1 \\ x_2 + y_2 \\ \vdots \\ x_d + y_d \end{pmatrix}$$

Vector addition commutes (is commutative), meaning that $\vec{x} + \vec{y} = \vec{y} + \vec{x}$. Vector subtraction also works component-wise; like the subtraction of numbers, it does not commute.

The additive inverse of a vector $\vec{x}$ is the vector $-\vec{x}$ with $\vec{x} + (-\vec{x}) = \vec{0}$. It turns out that $-\vec{x}$ is simply the vector $\vec{x}$ with every component negated.

The neutral element of vector addition is the zero vector, which simply means that $\vec{x} + \vec{0} = \vec{x}$.

Vector scaling

The scaling of a vector refers to the multiplication of a vector $\vec{x}$ by a number $s$ (a scalar):

$$s\vec{x} = s \begin{pmatrix} x_1 \\ \vdots \\ x_d \end{pmatrix} = \begin{pmatrix} sx_1 \\ \vdots \\ sx_d \end{pmatrix}$$
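As a quick numerical illustration of the component-wise rules above (not part of the original sheet, and assuming the NumPy library is available), the following sketch checks addition, subtraction and scaling for vectors in $\mathbb{R}^3$:

```python
import numpy as np

# Two vectors in R^3, written as 1-D arrays.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

print(x + y)             # component-wise addition:    [5. 7. 9.]
print(x - y)             # component-wise subtraction: [-3. -3. -3.]
print(x + y - (y + x))   # addition commutes, so this is the zero vector
print(2.5 * x)           # scaling by s = 2.5:         [2.5 5.  7.5]
print(x + (-x))          # additive inverse gives the zero vector
```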
Linear combination & (in)dependence

For a set of vectors $\{\vec{v}_1, \ldots, \vec{v}_n\} \subseteq \mathbb{R}^d$ a linear combination is any summation of the form

$$c_1\vec{v}_1 + c_2\vec{v}_2 + \ldots + c_n\vec{v}_n$$

where the coefficients $c_i$ are taken from $\mathbb{R}$.

A set of vectors $\{\vec{v}_1, \ldots, \vec{v}_n\} \subseteq \mathbb{R}^d$ is linearly dependent if there exist coefficients $c_1, \ldots, c_n$, not all of them zero, such that

$$c_1\vec{v}_1 + c_2\vec{v}_2 + \ldots + c_n\vec{v}_n = \vec{0}.$$

If no such linear combination exists, the set $\{\vec{v}_1, \ldots, \vec{v}_n\}$ is linearly independent.
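The definition suggests a practical test: a set of vectors is linearly dependent exactly when the matrix having them as columns has rank smaller than the number of vectors. This rank-based check is a standard approach, not something the sheet itself prescribes; a minimal sketch with NumPy:

```python
import numpy as np

def linearly_dependent(vectors):
    """Return True if the given list of equal-length vectors is linearly dependent."""
    # Stack the vectors as columns and compare the rank to the number of vectors.
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) < len(vectors)

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 2.0])   # v3 = v1 + v2, so the set is dependent

print(linearly_dependent([v1, v2]))      # False: independent
print(linearly_dependent([v1, v2, v3]))  # True: a nontrivial combination sums to 0
```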
Basis

A basis for $\mathbb{R}^d$ is a minimal set of vectors $B = \{\vec{b}_1, \ldots, \vec{b}_k\}$ such that for every vector $\vec{y} \in \mathbb{R}^d$ there exist coefficients $c_1, \ldots, c_k$ with

$$\vec{y} = c_1\vec{b}_1 + \ldots + c_k\vec{b}_k.$$

In other words: every vector in $\mathbb{R}^d$ can be represented as a linear combination of $B$.

! Every basis of $\mathbb{R}^d$ has size exactly $d$.

! Every maximal set of linearly independent vectors from $\mathbb{R}^d$ is a basis for $\mathbb{R}^d$.

The standard basis for $\mathbb{R}^d$ is

$$\begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \ldots, \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}$$
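To make the definition concrete: if the basis vectors are collected as the columns of a matrix $B$, the coefficients $c_1, \ldots, c_d$ of a vector $\vec{y}$ are the solution of the linear system $Bc = \vec{y}$. The sketch below (my own NumPy illustration, not from the original sheet) expresses a vector in a basis of $\mathbb{R}^2$:

```python
import numpy as np

# A basis of R^2: two linearly independent vectors as the columns of B.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
y = np.array([3.0, 2.0])

# Solve B @ c = y for the coefficients c_1, c_2.
c = np.linalg.solve(B, y)
print(c)       # [1. 2.]: y = 1*b1 + 2*b2
print(B @ c)   # reconstructs y = [3. 2.]
```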
Euclidean norm

The Euclidean norm of a vector refers to its magnitude: the length of the corresponding arrow in an arrow diagram.

For $\vec{x} \in \mathbb{R}^d$ it is defined as

$$\|\vec{x}\| := \sqrt{\sum_{i=1}^{d} x_i^2}.$$

The above expression can be rewritten using the dot product:

$$\|\vec{x}\| = \sqrt{\vec{x} \cdot \vec{x}} = \sqrt{\vec{x}^2}.$$

Unit vector, normalization

A unit vector is a vector whose norm (in our context: the Euclidean norm) is one. For any non-zero vector $\vec{x} \in \mathbb{R}^d$ the vector $\frac{1}{\|\vec{x}\|}\vec{x}$ is a unit vector that points in the same direction as $\vec{x}$.
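A short numerical check of the norm formula and of normalization (again assuming NumPy; np.linalg.norm computes exactly the Euclidean norm defined above):

```python
import numpy as np

x = np.array([3.0, 4.0])

# Euclidean norm: sqrt(3^2 + 4^2) = 5.
print(np.sqrt(np.sum(x**2)))   # 5.0, straight from the definition
print(np.linalg.norm(x))       # 5.0, the library equivalent

# Normalization: x / ||x|| is a unit vector pointing the same way as x.
u = x / np.linalg.norm(x)
print(u)                       # [0.6 0.8]
print(np.linalg.norm(u))       # 1.0
```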
0 1 . . . 0
If the dot product is zero, the vectors ~x, ~y
I = . . ..  ,
 
are orthogonal i.e. the angle between them is  .. .. .
precisely 90◦ . If it is positive non-zero, the an- 0 0 ... 1
gle between the vectors is acute (< 90◦ ), if it is
negative non-zero the angle is obtuse (> 90◦ ). that is square matrix in which all diagonal en-
The dot product can alternatively be com- tries are 1 and all off-diagonal entries are 0.
puted using the angle θ between ~x and ~y (think
of drawing ~x, ~y as arrows rooted at the origin
and measuring the angle between them): Matrix scaling
~x · ~y = k~xk k~y k cos θ, The scaling of a matrix refers to the multipli-
cation of a matrix A with a number s (a scalar).
By rearranging the above equation we get an It, again, works component-wise:
expression to compute the cosine of the angle
a11 . . . a1m sa11 . . . sa1m
   
θ or θ itself:
sA = s  ... ..  =  ..
. .
.. 
.
~x · ~y ~x · ~y
cos θ = ⇐⇒ θ = arccos . an1 . . . anm san1 . . . sanm
k~xk k~y k k~xk k~y k

We call two vectors orthogonal if their dot-


product is zero. Note that this happens pre- Matrix addition
cisely when θ is a right angle! The addition of two matrices with the same
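The dot product, the angle formula and the orthogonality test can all be checked numerically; a small NumPy sketch (the variable names are mine, not from the sheet):

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([1.0, 1.0])

dot = np.dot(x, y)                                   # x1*y1 + ... + xd*yd = 1.0
cos_theta = dot / (np.linalg.norm(x) * np.linalg.norm(y))
theta = np.arccos(cos_theta)
print(np.degrees(theta))                             # ~45 degrees between x and y

# Orthogonality: dot product zero <=> right angle.
z = np.array([0.0, 1.0])
print(np.dot(x, z) == 0.0)                           # True: x and z are orthogonal
```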
Orthogonal basis, normal basis

Vector bases with special properties make our lives much easier.

A basis $B$ of $\mathbb{R}^d$ is called normal if every $\vec{x} \in B$ satisfies $\|\vec{x}\| = 1$. In words: all vectors in $B$ have unit length. Every basis $B$ can be turned into a normal basis $B'$ by normalizing all vectors in $B$.

A basis $B$ of $\mathbb{R}^d$ is called orthogonal if every pair of distinct vectors $\vec{x}, \vec{x}' \in B$ satisfies $\vec{x} \cdot \vec{x}' = 0$. In words: all vectors in the basis are at right angles to each other.

A basis that is normal and orthogonal is called orthonormal. The standard basis for $\mathbb{R}^d$ is orthonormal.
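A quick orthonormality check for a small basis, using only norms and pairwise dot products (a NumPy sketch of my own, not part of the sheet):

```python
import numpy as np

# A rotated basis of R^2 (two vectors).
b1 = np.array([np.cos(0.3), np.sin(0.3)])
b2 = np.array([-np.sin(0.3), np.cos(0.3)])

print(np.isclose(np.linalg.norm(b1), 1.0))   # unit length -> "normal"
print(np.isclose(np.linalg.norm(b2), 1.0))
print(np.isclose(np.dot(b1, b2), 0.0))       # pairwise dot product zero -> orthogonal
```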
Matrices

An $n \times m$ matrix over $\mathbb{R}$ is a rectangular array of real numbers with $n$ rows and $m$ columns. The entries of a matrix are indexed by row/column:

$$A = \begin{pmatrix} a_{11} & a_{12} & \ldots & a_{1m} \\ a_{21} & a_{22} & \ldots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \ldots & a_{nm} \end{pmatrix}$$

Matrices with $n = m$ are called square. The entries $a_{ii}$ are called the diagonal; all other entries are called off-diagonal.

The $n \times m$ zero matrix $0$ has all entries equal to zero. The $n \times n$ identity matrix $I$ is defined as

$$I = \begin{pmatrix} 1 & 0 & \ldots & 0 \\ 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & 1 \end{pmatrix},$$

that is, a square matrix in which all diagonal entries are 1 and all off-diagonal entries are 0.

Matrix scaling

The scaling of a matrix refers to the multiplication of a matrix $A$ by a number $s$ (a scalar). It, again, works component-wise:

$$sA = s \begin{pmatrix} a_{11} & \ldots & a_{1m} \\ \vdots & & \vdots \\ a_{n1} & \ldots & a_{nm} \end{pmatrix} = \begin{pmatrix} sa_{11} & \ldots & sa_{1m} \\ \vdots & & \vdots \\ sa_{n1} & \ldots & sa_{nm} \end{pmatrix}$$

Matrix addition

The addition of two matrices with the same number of rows and columns is defined component-wise:

$$A + B = \begin{pmatrix} a_{11} & \ldots & a_{1m} \\ \vdots & & \vdots \\ a_{n1} & \ldots & a_{nm} \end{pmatrix} + \begin{pmatrix} b_{11} & \ldots & b_{1m} \\ \vdots & & \vdots \\ b_{n1} & \ldots & b_{nm} \end{pmatrix} := \begin{pmatrix} a_{11} + b_{11} & \ldots & a_{1m} + b_{1m} \\ \vdots & & \vdots \\ a_{n1} + b_{n1} & \ldots & a_{nm} + b_{nm} \end{pmatrix}$$

Note that matrix addition commutes:

$$A + B = B + A.$$

Matrix subtraction is defined component-wise in the same manner.
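Matrix scaling and addition behave exactly like their vector counterparts, entry by entry; a brief NumPy check (2-D arrays play the role of matrices here):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

print(2.0 * A)                         # every entry of A multiplied by the scalar 2
print(A + B)                           # entry-wise sum: [[ 6.  8.] [10. 12.]]
print(np.array_equal(A + B, B + A))    # True: matrix addition commutes
print(A - B)                           # subtraction is entry-wise as well
```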
Row- and column vectors

It is often useful to imagine a matrix $A$ as a collection of columns:

$$A = \begin{pmatrix} \begin{pmatrix} a_{11} \\ \vdots \\ a_{n1} \end{pmatrix} \ldots \begin{pmatrix} a_{1m} \\ \vdots \\ a_{nm} \end{pmatrix} \end{pmatrix} = \begin{pmatrix} \vec{c}_1 & \ldots & \vec{c}_m \end{pmatrix}$$

where $\vec{c}_1, \ldots, \vec{c}_m$ are the column vectors of $A$.

Similarly, we can imagine $A$ as a collection of row vectors $\vec{r}_1, \ldots, \vec{r}_n$:

$$A = \begin{pmatrix} a_{11} & \ldots & a_{1m} \\ \vdots & & \vdots \\ a_{n1} & \ldots & a_{nm} \end{pmatrix} = \begin{pmatrix} \vec{r}_1^{\,T} \\ \vdots \\ \vec{r}_n^{\,T} \end{pmatrix}$$

The superscript $T$ (for transpose) indicates that the vectors are ‘flipped on their sides’.

Matrix-vector product

The multiplication of an $n \times m$ matrix $A$ with a vector $\vec{x} \in \mathbb{R}^m$ is a vector with $n$ entries, defined as follows:

$$A\vec{x} = \begin{pmatrix} a_{11} & a_{12} & \ldots & a_{1m} \\ a_{21} & a_{22} & \ldots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \ldots & a_{nm} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{pmatrix} := \begin{pmatrix} \sum_{i=1}^{m} x_i a_{1i} \\ \sum_{i=1}^{m} x_i a_{2i} \\ \vdots \\ \sum_{i=1}^{m} x_i a_{ni} \end{pmatrix}$$

This operation is easier to understand if we express $A$ as a collection of row vectors:

$$A\vec{x} = \begin{pmatrix} \vec{r}_1^{\,T} \\ \vec{r}_2^{\,T} \\ \vdots \\ \vec{r}_n^{\,T} \end{pmatrix} \vec{x} = \begin{pmatrix} \vec{r}_1 \cdot \vec{x} \\ \vec{r}_2 \cdot \vec{x} \\ \vdots \\ \vec{r}_n \cdot \vec{x} \end{pmatrix}$$

Note that this operation does not commute: we only defined $A\vec{x}$ here; the operation “$\vec{x}A$” is undefined.
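The matrix-vector product and its row-by-row interpretation, illustrated with NumPy (the @ operator performs exactly the multiplication defined above):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])      # a 2 x 3 matrix
x = np.array([1.0, 0.0, 2.0])        # a vector in R^3

print(A @ x)                          # [ 7. 16.]: a vector with n = 2 entries

# Row-vector view: entry i is the dot product of row i of A with x.
print(np.array([np.dot(row, x) for row in A]))   # same result: [ 7. 16.]
```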
Matrix-matrix product

We can also multiply an $n \times m$ matrix with an $m \times p$ matrix (the first matrix must be as wide as the second one is high). The result is an $n \times p$ matrix computed as follows:

$$AB = \begin{pmatrix} a_{11} & \ldots & a_{1m} \\ \vdots & & \vdots \\ a_{n1} & \ldots & a_{nm} \end{pmatrix} \begin{pmatrix} b_{11} & \ldots & b_{1p} \\ \vdots & & \vdots \\ b_{m1} & \ldots & b_{mp} \end{pmatrix} = \begin{pmatrix} \sum_{i=1}^{m} a_{1i} b_{i1} & \ldots & \sum_{i=1}^{m} a_{1i} b_{ip} \\ \vdots & & \vdots \\ \sum_{i=1}^{m} a_{ni} b_{i1} & \ldots & \sum_{i=1}^{m} a_{ni} b_{ip} \end{pmatrix}$$

This is much easier to understand if we view $A$ as a collection of row vectors $\vec{a}_1, \ldots, \vec{a}_n$ and $B$ as a collection of column vectors $\vec{b}_1, \ldots, \vec{b}_p$:

$$AB = \begin{pmatrix} \vec{a}_1^{\,T} \\ \vec{a}_2^{\,T} \\ \vdots \\ \vec{a}_n^{\,T} \end{pmatrix} \begin{pmatrix} \vec{b}_1 & \vec{b}_2 & \ldots & \vec{b}_p \end{pmatrix} = \begin{pmatrix} \vec{a}_1 \cdot \vec{b}_1 & \vec{a}_1 \cdot \vec{b}_2 & \ldots & \vec{a}_1 \cdot \vec{b}_p \\ \vec{a}_2 \cdot \vec{b}_1 & \vec{a}_2 \cdot \vec{b}_2 & \ldots & \vec{a}_2 \cdot \vec{b}_p \\ \vdots & \vdots & & \vdots \\ \vec{a}_n \cdot \vec{b}_1 & \vec{a}_n \cdot \vec{b}_2 & \ldots & \vec{a}_n \cdot \vec{b}_p \end{pmatrix}$$

The matrix product does not commute: $AB$ is different from $BA$, and the latter is often not even properly defined because the sizes of the matrices do not match.

If $A$ is square and $I$ is the identity matrix of the same size, then

$$AI = IA = A.$$
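Finally, the matrix-matrix product, its dot-product interpretation and its non-commutativity, checked with NumPy:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])           # 2 x 2
B = np.array([[0.0, 1.0, 2.0],
              [3.0, 0.0, 1.0]])      # 2 x 3, so AB is 2 x 3 but BA is undefined

AB = A @ B
print(AB)                             # [[ 6.  1.  4.] [12.  3. 10.]]

# Entry (i, j) is the dot product of row i of A with column j of B.
print(np.dot(A[0, :], B[:, 1]))       # 1.0, the top-middle entry of AB

# Multiplying by the identity of matching size leaves A unchanged.
print(np.array_equal(A @ np.eye(2), A) and np.array_equal(np.eye(2) @ A, A))  # True
```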
