
Linear Algebra and Matrices

Methods for Dummies


20th October, 2010

Melaine Boly
Christian Lambert
Overview
• Definitions-Scalars, vectors and matrices
• Vector and matrix calculations
• Identity, inverse matrices & determinants
• Eigenvectors & dot products
• Relevance to SPM and terminology

Linear Algebra & Matrices, MfD 2010


Part I
Matrix Basics



Scalar
• A quantity (variable) described by a single real number

e.g. the intensity of each voxel in an MRI scan


Vector
• Not a physics vector (magnitude, direction)
• i.e. a column of numbers

EXAMPLE:

         [ x1 ]
VECTOR = [ x2 ]
         [ ⋮  ]
         [ xn ]
Matrices
• Rectangular display of vectors in rows and columns
• Can describe the same vector intensity at different times, or different voxels at the same time
• A vector is just an n x 1 matrix

[ x11 x12 x13 ]
[ x21 x22 x23 ]
[ x31 x32 x33 ]


Matrices
Matrix locations/size are defined as rows x columns (R x C)

    [ d11 d12 d13 ]
D = [ d21 d22 d23 ]
    [ d31 d32 d33 ]

dij : ith row, jth column

    [ 1 4 7 ]              [ 1 4 ]
A = [ 2 5 8 ]          A = [ 2 5 ]
    [ 3 6 9 ]              [ 3 6 ]
Square (3 x 3)         Rectangular (3 x 2)

3-dimensional (3 x 3 x 5): a stack of five 3 x 3 matrices.
Matrices in MATLAB

Matrix (X):   X = [1 4 7; 2 5 8; 3 6 9]
                                                [ 1 4 7 ]
              ; marks the end of a row      X = [ 2 5 8 ]
                                                [ 3 6 9 ]

Referencing matrix values, X(row, column):
note that : refers to all of a row or column, and , divides rows from columns
3rd row:                       X(3, :)       → 3 6 9
2nd element of 3rd column:     X(2, 3)       → 8
Elements 1 & 2 of column 2:    X([1 2], 2)   → [4; 5]

Special types of matrix:
All zeros, size 3x1:   zeros(3,1)   → [0; 0; 0]
All ones, size 2x2:    ones(2,2)    → [1 1; 1 1]
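The indexing above is MATLAB; as a cross-check, the same operations can be sketched in pure Python with 0-based nested lists standing in for MATLAB's 1-based matrices (all variable names here are illustrative, not from the slides):

```python
# Pure-Python analogue of the MATLAB indexing above (0-based indices).
X = [[1, 4, 7],
     [2, 5, 8],
     [3, 6, 9]]

third_row = X[2]                        # MATLAB X(3,:)     -> [3, 6, 9]
elem_2_3 = X[1][2]                      # MATLAB X(2,3)     -> 8
col2_1and2 = [X[i][1] for i in (0, 1)]  # MATLAB X([1 2],2) -> [4, 5]

zeros_3x1 = [[0] for _ in range(3)]     # MATLAB zeros(3,1)
ones_2x2 = [[1, 1], [1, 1]]             # MATLAB ones(2,2)
```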


Transposition

b = [ 1 ]    bT = [ 1 1 2 ]        d = [ 3 ]    dT = [ 3 4 9 ]
    [ 1 ]                              [ 4 ]
    [ 2 ]                              [ 9 ]
(columns become rows, and rows become columns)

    [ 1 2 3 ]        [ 1 5 6 ]
A = [ 5 4 1 ]   AT = [ 2 4 7 ]
    [ 6 7 4 ]        [ 3 1 4 ]
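The row/column swap above can be sketched in a few lines of pure Python (a minimal illustration, not the deck's MATLAB `A'` operator):

```python
# Transposition: rows become columns and columns become rows.
def transpose(M):
    return [list(row) for row in zip(*M)]

A = [[1, 2, 3],
     [5, 4, 1],
     [6, 7, 4]]
A_T = transpose(A)   # [[1, 5, 6], [2, 4, 7], [3, 1, 4]]
```

Transposing twice returns the original matrix, which is a quick sanity check on any implementation.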


Matrix Calculations
Addition
– Commutative: A + B = B + A
– Associative: (A + B) + C = A + (B + C)

A + B = [ 2 4 ] + [ 1 0 ] = [ 2+1 4+0 ] = [ 3 4 ]
        [ 2 5 ]   [ 3 1 ]   [ 2+3 5+1 ]   [ 5 6 ]

Subtraction
– By adding a negative matrix: A − B = A + (−B)

A − B = [ 2 4 ] − [ 1 2 ] = [ 2−1 4−2 ] = [ 1  2 ]
        [ 5 3 ]   [ 3 4 ]   [ 5−3 3−4 ]   [ 2 −1 ]
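Element-wise addition and subtraction can be sketched directly, using the slides' worked addition example as a check (pure Python; function names are illustrative):

```python
# Element-wise matrix addition; subtraction is addition of a negated matrix.
def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    return mat_add(A, [[-b for b in row] for row in B])  # A + (-B)

A = [[2, 4], [2, 5]]
B = [[1, 0], [3, 1]]
mat_add(A, B)   # [[3, 4], [5, 6]] -- the slide's worked example
```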


Scalar multiplication
Scalar * matrix = scalar multiplication: every element of the matrix is multiplied by the scalar, e.g.

3 * [ 1 2 ] = [ 3  6 ]
    [ 3 4 ]   [ 9 12 ]


Matrix Multiplication
“When A is an m x n matrix & B is a k x l matrix, AB is only possible
if n = k. The result will be an m x l matrix.”
Simply put, you can ONLY perform A*B IF:
number of columns in A = number of rows in B

       n                       l
  [ A1  A2  A3 ]          [ B13 B14 ]
m [ A4  A5  A6 ]   x    k [ B15 B16 ]   =   m x l matrix
  [ A7  A8  A9 ]          [ B17 B18 ]
  [ A10 A11 A12 ]

Hint:
1) If you see this message in MATLAB:
??? Error using ==> mtimes
Inner matrix dimensions must agree
then the number of columns in A is not equal to the number of rows in B.
Matrix multiplication
• Multiplication method:
sum over the products of the respective rows and columns

MATLAB does all this for you!

Simply type: C = A * B

Hints:
1) You can work out the size of the output in advance (e.g. 2x2). In MATLAB, if you pre-allocate a
matrix of this size (e.g. C = zeros(2,2)) the calculation is quicker.
Matrix multiplication
• Matrix multiplication is NOT commutative
i.e. the order matters!
– AB ≠ BA

• Matrix multiplication IS associative
– A(BC) = (AB)C

• Matrix multiplication IS distributive
– A(B+C) = AB + AC
– (A+B)C = AC + BC
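The row-by-column rule and the non-commutativity above can be sketched in pure Python (a minimal illustration of what MATLAB's `*` does for matrices; names are illustrative):

```python
# Row-by-column multiplication: C[i][j] = sum_k A[i][k] * B[k][j].
def mat_mul(A, B):
    # columns in A must equal rows in B
    assert len(A[0]) == len(B), "Inner matrix dimensions must agree"
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
AB = mat_mul(A, B)   # [[2, 1], [4, 3]]
BA = mat_mul(B, A)   # [[3, 4], [1, 2]]
# AB != BA: multiplication is not commutative
```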
Identity matrix
A special matrix which plays a similar role to the number 1 in ordinary multiplication.

For any n x n matrix A, we have A In = In A = A

For any n x m matrix A, we have In A = A and A Im = A (so 2 possible identity matrices)

If the answer is always A, why use an identity matrix?

You can't divide matrices, so to solve many problems you have to use the inverse instead. The
identity matrix is important in these types of calculations.
Identity matrix

Worked example        [ 1 2 3 ]     [ 1 0 0 ]   [ 1+0+0 0+2+0 0+0+3 ]
A I3 = A              [ 4 5 6 ]  X  [ 0 1 0 ] = [ 4+0+0 0+5+0 0+0+6 ]
for a 3x3 matrix:     [ 7 8 9 ]     [ 0 0 1 ]   [ 7+0+0 0+8+0 0+0+9 ]

• In MATLAB: eye(r, c) produces an r x c identity matrix
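The worked example A·I3 = A can be reproduced in a short pure-Python sketch (an analogue of MATLAB's `eye`; names are illustrative):

```python
# Identity matrix: ones on the diagonal, zeros elsewhere.
def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
I3 = identity(3)
# A * I3 == I3 * A == A, just as 1 * x == x * 1 == x for numbers
```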


Part II
More Advanced Matrix Techniques



Vector components
& orthonormal base
• A given vector v = (a, b) can be summarized by its components, but only in a particular base
(set of axes); the vector itself is independent of the choice of this particular base.

Example: a and b are the components of v in the given base (the axes chosen for expressing
the coordinates in vector space), with a along the x axis and b along the y axis.

Orthonormal base: a set of vectors chosen to express the components of the others,
perpendicular to each other and all with norm (length) = 1.


Linear combination &
dimensionality
Vector space: a space defined by different vectors (for example, in n dimensions).
The vector space defined by a set of vectors is the space that contains them and all the
vectors that can be obtained by multiplying these vectors by real numbers and then adding
them (linear combinations).

A matrix A (m x n) can itself be decomposed into as many vectors as its number of
columns (or rows). When decomposed, each column of the matrix can be represented by a
vector. The ensemble of the n column-vectors defines a vector space proper to matrix A.
Similarly, A can be viewed as a matrix representation of this ensemble of vectors,
expressing their components in a given base.


Linear dependency and rank
If one can find a linear relationship between the rows or columns of a matrix, then the rank
of the matrix (the number of dimensions of its vector space) will not be equal to its number
of columns/rows: the matrix is said to be rank-deficient.

Example
X = [ 1 2 ]   columns: x1 = (1, 2), x2 = (2, 4)
    [ 2 4 ]

When plotting the vectors, we see that x1 and x2 are superimposed. Looking closer, we see
that one can be expressed as a linear combination of the other: x2 = 2 x1.
The rank of the matrix is therefore 1, and in parallel the vector space it defines has only
one dimension.
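For the 2x2 case on this slide, rank-deficiency can be detected with the determinant (rank < 2 exactly when the determinant is zero). A minimal pure-Python sketch, with illustrative names, using the slide's X:

```python
# 2x2 rank via the determinant: det == 0 means the columns are dependent.
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def rank2(M):
    if det2(M) != 0:
        return 2
    return 1 if any(v != 0 for row in M for v in row) else 0

X = [[1, 2],
     [2, 4]]   # columns: x1 = (1, 2), x2 = (2, 4) = 2 * x1
rank2(X)       # 1
```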
Linear dependency and rank
• The rank of a matrix corresponds to the dimensionality of the vector space defined by this
matrix. It corresponds to the number of vectors defined by the matrix that are linearly
independent from each other.

• Linearly independent vectors are vectors that each define one more dimension in space,
compared to the space defined by the other vectors. They cannot be expressed as a linear
combination of the others.

Note: linearly independent vectors are not necessarily orthogonal (perpendicular).
Example: take 3 linearly independent vectors x1, x2 and x3. Vectors x1 and x2 define a
plane (x, y), and vector x3 has an additional non-zero component along the z axis. But x3
is not perpendicular to x1 or x2.


Eigenvalues and eigenvectors
An eigenvector u of a matrix A is a vector whose direction is unchanged by A: A u = k u,
where the scalar k is the corresponding eigenvalue.

One can represent the vectors of matrix X (the eigenvectors of A) as a set of orthogonal
(perpendicular) vectors, representing the different dimensions of the original matrix A.
The amplitude of matrix A in these different dimensions is given by the eigenvalues
corresponding to the different eigenvectors of A (the vectors composing X).
Note: if a matrix is rank-deficient, at least one of its eigenvalues is zero.

For A’:
u1, u2 = eigenvectors
k1, k2 = eigenvalues

In Principal Component Analysis (PCA), the matrix is decomposed into eigenvectors and
eigenvalues AND the matrix is rotated to a new coordinate system such that the greatest
variance by any projection of the data comes to lie on the first coordinate (the first
principal component), the second greatest variance on the second coordinate, and so on.
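For a 2x2 matrix the eigenvalues can be computed by hand from the characteristic polynomial det(A − kI) = k² − trace(A)·k + det(A) = 0. A sketch of that formula (real eigenvalues only; function names are illustrative):

```python
import math

# Eigenvalues of a 2x2 matrix from its characteristic polynomial:
# k^2 - trace(A)*k + det(A) = 0, solved with the quadratic formula.
def eig2(A):
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)   # assumes real eigenvalues
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

eig2([[2, 0], [0, 3]])   # [2.0, 3.0]
eig2([[1, 2], [2, 4]])   # [0.0, 5.0] -- rank-deficient: one eigenvalue is 0
```

Note how the rank-deficient matrix has a zero eigenvalue, matching the slide's remark.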


Vector Products

Two vectors:  x = [ x1 ]     y = [ y1 ]
                  [ x2 ]         [ y2 ]
                  [ x3 ]         [ y3 ]

Inner product = scalar: xT y is a (1 x n)(n x 1) product

                        [ y1 ]
xT y = [ x1 x2 x3 ]     [ y2 ]  = x1y1 + x2y2 + x3y3 = Σ (i=1..3) xi yi
                        [ y3 ]

Outer product = matrix: x yT is an (n x 1)(1 x n) product

       [ x1 ]                  [ x1y1 x1y2 x1y3 ]
x yT = [ x2 ] [ y1 y2 y3 ]  =  [ x2y1 x2y2 x2y3 ]
       [ x3 ]                  [ x3y1 x3y2 x3y3 ]
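Both products above are one-liners in pure Python (a sketch with illustrative names and values):

```python
# Inner product: a (1 x n)(n x 1) product, yielding a scalar.
def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

# Outer product: an (n x 1)(1 x n) product, yielding an n x n matrix.
def outer(x, y):
    return [[a * b for b in y] for a in x]

x, y = [1, 2, 3], [4, 5, 6]
inner(x, y)   # 1*4 + 2*5 + 3*6 = 32
outer(x, y)   # [[4, 5, 6], [8, 10, 12], [12, 15, 18]]
```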


Scalar product of vectors
Calculating the scalar product of two vectors is equivalent to projecting one vector onto
the other. One can indeed show that x1 · x2 = |x1| |x2| cos θ, where θ is the angle
separating the two vectors when they have the same origin.

In parallel, if two vectors are orthogonal, their scalar product is zero: the projection of
one onto the other is zero.
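The relation x1 · x2 = |x1| |x2| cos θ can be turned around to recover the angle between two vectors (a pure-Python sketch; names are illustrative):

```python
import math

# cos(theta) = (x1 . x2) / (|x1| |x2|); orthogonal vectors give dot == 0.
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def angle(x, y):
    return math.acos(dot(x, y) / (math.hypot(*x) * math.hypot(*y)))

angle([1, 0], [0, 1])   # pi/2: orthogonal, scalar product is 0
angle([1, 1], [2, 2])   # ~0: parallel vectors
```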


Determinants

The determinant of a matrix is a scalar number representing certain intrinsic properties of
that matrix. It is written det A or |A|.

Its definition is an essential detour before tackling the operation corresponding to matrix
division: computing the inverse.


Determinants
For a 1x1 matrix:
det(a11) = a11
For a 2x2 matrix:
| a11 a12 |
| a21 a22 | = a11 a22 − a12 a21
For a 3x3 matrix:
| a11 a12 a13 |
| a21 a22 a23 | = a11a22a33 + a12a23a31 + a13a21a32 − a11a23a32 − a12a21a33 − a13a22a31
| a31 a32 a33 |
= a11(a22a33 − a23a32) − a12(a21a33 − a23a31) + a13(a21a32 − a22a31)

The determinant of a matrix can be calculated by multiplying each element of one of its rows
by the determinant of the sub-matrix formed by the elements that remain when the row and
column containing this element are removed. Each such product is given the sign (−1)^(i+j).


Determinants
• Determinants can only be found for square matrices.
• For a 2x2 matrix A, det(A) = ad − bc. Let's have a closer look at that:

det(A) = | a b | = ad − bc
         | c d |

• In MATLAB: det(A)

The determinant gives an idea of the 'volume' occupied by the matrix in vector space.
A matrix A has an inverse matrix A−1 if and only if det(A) ≠ 0.
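The cofactor expansion described two slides up can be written as a short recursion (a pure-Python sketch of what MATLAB's `det` computes; not how production libraries actually do it, which is via factorization):

```python
# Recursive cofactor expansion along the first row.
def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # sub-matrix with row 0 and column j removed
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

det([[1, 2], [3, 4]])   # 1*4 - 2*3 = -2
det([[1, 2], [2, 4]])   # 0: rank-deficient, columns are dependent
```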


Determinants
The determinant of a matrix is zero if and only if there exists a linear relationship between
the rows or the columns of the matrix, i.e. if the matrix is rank-deficient.
In parallel, one can define the rank of a matrix A as the size of the largest square
sub-matrix of A that has a non-zero determinant.

Example
X = [ 1 2 ]   columns: x1 = (1, 2), x2 = (2, 4)
    [ 2 4 ]

Here x1 and x2 are superimposed in space, because one can be expressed as a linear
combination of the other: x2 = 2 x1. The determinant of the matrix X is thus zero.
The largest square sub-matrix with a non-zero determinant is a 1x1 matrix, so the rank of
the matrix is 1.
Determinants
• In a vector space of n dimensions, there can be no more than n linearly independent vectors.
• If 3 vectors (2x1) x'1, x'2, x'3 are represented by a matrix X':

X' = [ 1 2 3 ]   columns: x'1 = (1, 2), x'2 = (2, 2), x'3 = (3, 1)
     [ 2 2 1 ]

Graphically, x'3 can be expressed as a linear combination of x'1 and x'2: the three vectors
are linearly dependent.
The largest square sub-matrix with a non-zero determinant is a 2x2 matrix, so the rank of
the matrix X' is 2.
Determinants
The notions of determinant, rank of a matrix, and linear dependency are closely linked.

Take a set of vectors x1, x2, …, xn, all with the same number of elements: these vectors are
linearly dependent if one can find a set of scalars c1, c2, …, cn, not all zero, such that:
c1 x1 + c2 x2 + … + cn xn = 0
A set of vectors is linearly dependent if one of them can be expressed as a linear
combination of the others. They define in space a smaller number of dimensions than the
total number of vectors in the set. The resulting matrix will be rank-deficient and its
determinant will be zero.

Similarly, if all the elements of a row or column are zero, the determinant of the matrix
will be zero.

If a matrix has two rows or columns that are equal, its determinant will also be zero.


Matrix inverse
• Definition. A matrix A is called nonsingular or invertible if there exists a matrix B such
that AB = BA = I. E.g.:

[  1 1 ]     [ 2/3 -1/3 ]   [ (2+1)/3  (-1+1)/3 ]   [ 1 0 ]
[ -1 2 ]  X  [ 1/3  1/3 ] = [ (-2+2)/3  (1+2)/3 ] = [ 0 1 ]

• Notation. A common notation for the inverse of a matrix A is A−1, so B = A−1.

• The inverse matrix is unique when it exists. So if A is invertible, then A−1 is also
invertible, and (AT)−1 = (A−1)T.

• In MATLAB: A−1 = inv(A)
• Matrix division: A/B = A*B−1


Matrix inverse
• E.g. for a 2x2 square matrix, the inverse is:

A = [ a b ]        A−1 = 1/(ad − bc) [  d -b ]
    [ c d ]                          [ -c  a ]

For a matrix to be invertible, its determinant must be non-zero (the matrix has to be square
and of full rank).
A matrix that is not invertible is said to be singular.
Reciprocally, a matrix that is invertible is said to be non-singular.
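The 2x2 formula above, applied to the worked example from the previous slide (pure Python; the function name is illustrative):

```python
# 2x2 inverse: A^-1 = (1 / (ad - bc)) * [[d, -b], [-c, a]].
def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix: det(A) = 0, no inverse exists")
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 1], [-1, 2]]   # the slide's worked example, det = 3
inv2(A)                 # [[2/3, -1/3], [1/3, 1/3]]
```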


Pseudoinverse
In SPM, design matrices are not square (more rows than columns, especially for fMRI).

The system is said to be overdetermined: there is generally no exact solution.

SPM therefore uses a mathematical tool called the pseudoinverse, an approximation used in
overdetermined systems, where the solution chosen is the set of β values that minimizes the
error.
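For a tall, full-column-rank design, the pseudoinverse solution coincides with the least-squares β from the normal equations, β = (XᵀX)⁻¹ Xᵀ y. A minimal pure-Python sketch for the two-column case (function and variable names are illustrative, not SPM's):

```python
# Least-squares beta = (X^T X)^-1 X^T y for a two-column design matrix X.
def lstsq2(X, y):
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(2)] for i in range(2)]
    xty = [sum(r[i] * v for r, v in zip(X, y)) for i in range(2)]
    det = xtx[0][0] * xtx[1][1] - xtx[0][1] * xtx[1][0]
    b0 = (xtx[1][1] * xty[0] - xtx[0][1] * xty[1]) / det
    b1 = (xtx[0][0] * xty[1] - xtx[1][0] * xty[0]) / det
    return [b0, b1]

# y = 2*x + 1 exactly; columns of X are [x, intercept]
X = [[0, 1], [1, 1], [2, 1], [3, 1]]
y = [1, 3, 5, 7]
lstsq2(X, y)   # [2.0, 1.0]
```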


Part III
How are matrices relevant
to fMRI data?



[Figure: the SPM analysis pipeline. Image time-series undergo realignment, smoothing
(spatial filter) and normalisation to an anatomical reference; the General Linear Model
(design matrix) yields parameter estimates, and statistical inference (RFT, p < 0.05)
produces a Statistical Parametric Map.]


Voxel-wise time series analysis

[Figure: the BOLD signal time series of a single voxel enters model specification and
parameter estimation; a hypothesis test then yields a statistic, producing an SPM.]
How are matrices relevant to fMRI data?
GLM equation

Y = X × β + ε

[Figure: the data vector Y (one value per scan, N scans), the design matrix X, the
parameter vector β (b3, b4, …, b9, one per regressor), and the error vector ε.]


How are matrices relevant to fMRI data?

Response variable Y: e.g. the BOLD signal at a particular voxel, after preprocessing.

A single voxel is sampled at successive time points. Each voxel is considered as an
independent observation.

Y = X.β + ε
How are matrices relevant to fMRI data?

Explanatory variables (the design matrix X):
– These are assumed to be measured without error.
– May be continuous;
– May be dummy, indicating levels of an experimental factor.

Solving the equation for β tells us how much of the BOLD signal is explained by X.

Y = X.β + ε
In Practice
• Estimate the MAGNITUDE of signal changes
• MR INTENSITY levels for each voxel at various time points
• The relationship between the experiment and voxel changes is established
• Calculation and notation require linear algebra and matrix manipulations


Summary
• SPM builds up data as a matrix.
• Manipulation of matrices enables unknown
values to be calculated.

Y = X . β + ε
Observed = Predictors * Parameters + Error
BOLD = Design Matrix * Betas + Error

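To make Y = X.β + ε concrete, here is a toy GLM fit in pure Python: one on/off boxcar regressor plus a constant column, with β estimated via the normal equations (the design, signal values and function names are invented for illustration; SPM itself uses the pseudoinverse of the full design matrix):

```python
# Toy GLM: Y = X*beta + error, with beta from the normal equations.
def glm_fit(X, Y):
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(2)] for i in range(2)]
    xty = [sum(r[i] * v for r, v in zip(X, Y)) for i in range(2)]
    det = xtx[0][0] * xtx[1][1] - xtx[0][1] * xtx[1][0]
    return [(xtx[1][1] * xty[0] - xtx[0][1] * xty[1]) / det,
            (xtx[0][0] * xty[1] - xtx[1][0] * xty[0]) / det]

task = [0, 0, 1, 1, 0, 0, 1, 1]       # on/off boxcar regressor
X = [[t, 1] for t in task]            # design matrix: [task, constant]
Y = [100 + 5 * t for t in task]       # noiseless "BOLD": beta = [5, 100]
glm_fit(X, Y)   # [5.0, 100.0] -- task effect and baseline recovered
```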


References
• SPM course: http://www.fil.ion.ucl.ac.uk/spm/course/
• Web guides:
http://mathworld.wolfram.com/LinearAlgebra.html
http://www.maths.surrey.ac.uk/explore/emmaspages/option1.html
http://www.inf.ed.ac.uk/teaching/courses/fmcs1/ (Formal Modelling in Cognitive Science course)
• http://www.wikipedia.org
• Previous MfD slides


ANY QUESTIONS ?
