Linear Algebra and Matrices: Methods For Dummies
20 October 2010
Melanie Boly
Christian Lambert
Overview
• Definitions: scalars, vectors and matrices
• Vector and matrix calculations
• Identity, inverse matrices & determinants
• Eigenvectors & dot products
• Relevance to SPM and terminology
EXAMPLE:
VECTOR = [x1; x2; …; xn]
i.e. a column of numbers
Matrices
• Rectangular display of vectors in rows and columns
• Can represent the same vector's intensity at different times, or different voxels at the same time
• A vector is just an n x 1 matrix
All ones, size 2x2: ones(2,2) = [1 1; 1 1]
Transpose:
A = [1 2 3; 5 4 1; 6 7 4]
A^T = [1 5 6; 2 4 7; 3 1 4]
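In MATLAB the transpose is written with a prime, e.g.:

A = [1 2 3; 5 4 1; 6 7 4];
A'    % rows become columns: [1 5 6; 2 4 7; 3 1 4]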
Addition:
A + B = [2 4; 2 5] + [1 0; 3 1] = [2+1 4+0; 2+3 5+1] = [3 4; 5 6]
Subtraction
- By adding a negative matrix:
A - B = A + (-B)
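These operations in MATLAB, reusing the matrices from the addition example above:

A = [2 4; 2 5];
B = [1 0; 3 1];
A + B    % [3 4; 5 6]
A - B    % [1 4; -1 4], the same as A + (-B)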
An m x n matrix times an n x l matrix gives an m x l matrix:
A (m x n = 4 x 3) = [A1 A2 A3; A4 A5 A6; A7 A8 A9; A10 A11 A12]
x B (n x l = 3 x 2) = [B13 B14; B15 B16; B17 B18]
= an m x l (4 x 2) matrix
Hint:
1) If you see this message in MATLAB:
??? Error using ==> mtimes
Inner matrix dimensions must agree
then the number of columns in A is not equal to the number of rows in B.
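A minimal MATLAB sketch of the dimension rule (the matrices here are arbitrary examples):

A = rand(4,3);   % m x n = 4 x 3
B = rand(3,2);   % n x l = 3 x 2
C = A * B;       % inner dimensions agree (3 == 3), so C is 4 x 2
size(C)          % returns [4 2]
% B * A would fail: inner dimensions (2 and 4) do not agree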
Matrix multiplication
• Multiplication method:
Sum over products of respective rows and columns
Hints:
1) You can work out the size of the output (e.g. 2x2). In MATLAB, if you pre-allocate a
matrix of this size (e.g. C = zeros(2,2)), the calculation is quicker (see the sketch below).
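A MATLAB sketch of the row-by-column rule (the example matrices are my own):

A = [1 2; 3 4];
B = [5 6; 7 8];
C = zeros(2,2);                   % pre-allocate the 2x2 output
for i = 1:2
    for j = 1:2
        C(i,j) = A(i,:) * B(:,j); % row i of A times column j of B
    end
end
% C matches A*B: [19 22; 43 50]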
Matrix multiplication
• Matrix multiplication is NOT commutative
i.e. the order matters!
– AB≠BA
Example
[Figure: a vector v in the plane, with component a along the x axis and component b along the y axis.]
a and b are the components of v in the given base (the axes chosen for expression of the coordinates in vector space).
Orthonormal base: a set of vectors chosen to express the components of the others, perpendicular to each other and all with norm (length) = 1.
Example
X = [1 2; 2 4], i.e. x1 = [1; 2] and x2 = [2; 4].
When representing the vectors, we see that x1 and x2 are superimposed. If we look closer, we see that we can express one as a linear combination of the other: x2 = 2·x1.
The rank of the matrix will be 1. In parallel, the vector space defined will have only one dimension.
[Figure: x1 and x2 plotted in the plane, lying on the same line.]
Linear dependency and rank
• The rank of a matrix corresponds to the dimensionality of the vector space
defined by this matrix. It corresponds to the number of vectors defined by the
matrix that are linearly independent of each other.
• Linearly independent vectors are vectors that each define one more
dimension in space, compared to the space defined by the other vectors. They
cannot be expressed as a linear combination of the others (see the MATLAB check below).
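A quick MATLAB check of rank, reusing the example matrix from the previous slide:

X = [1 2; 2 4];   % columns satisfy x2 = 2*x1, so they are linearly dependent
rank(X)           % returns 1: only one independent direction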
For a matrix A':
u1, u2 = eigenvectors
k1, k2 = eigenvalues
i.e. A'·u1 = k1·u1 and A'·u2 = k2·u2: multiplying by A' only rescales its eigenvectors.
In Principal Component Analysis (PCA), the matrix is decomposed into eigenvectors and
eigenvalues, and the data are rotated to a new coordinate system such that the greatest variance
by any projection of the data comes to lie on the first coordinate (called the first principal
component), the second greatest variance on the second coordinate, and so on.
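A minimal MATLAB illustration of an eigendecomposition (the matrix is an arbitrary example):

A = [2 1; 1 2];     % a symmetric example matrix
[U, K] = eig(A);    % columns of U are eigenvectors, diag(K) the eigenvalues
A * U(:,1)          % equals K(1,1) * U(:,1): the direction is unchanged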
Scalar (dot) product of two vectors:
x1 · x2 = |x1| |x2| cos(θ), where θ is the angle between x1 and x2.
Similarly, if two vectors are orthogonal, their scalar product is
zero: the projection of one onto the other will be zero.
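A small MATLAB check of this rule (vectors chosen for illustration):

x1 = [1; 0];
x2 = [0; 2];
dot(x1, x2)    % 0: the vectors are orthogonal
x3 = [1; 1];
dot(x1, x3)    % 1 = norm(x1)*norm(x3)*cos(pi/4)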
The determinant of a matrix can be calculated by multiplying each element of one of its
rows by the determinant of the sub-matrix formed by the elements that remain when one
removes the row and column containing this element. The product obtained is given
the sign (-1)^(i+j).
det(A) = det([a b; c d]) = ad - bc
The determinant gives an idea of the ’volume’ occupied by the matrix in vector space
A matrix A has an inverse matrix A^-1 if and only if det(A) ≠ 0.
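A short MATLAB sketch (the example values are my own):

A = [2 1; 1 2];
det(A)    % 2*2 - 1*1 = 3, non-zero, so A is invertible
inv(A)    % the inverse; A * inv(A) gives the 2x2 identity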
X = [1 2; 2 4]
Here x1 and x2 are superimposed in space, because one can be expressed as a linear combination of the other: x2 = 2·x1.
The determinant of the matrix X will thus be zero.
The largest square sub-matrix with a non-zero determinant will be a matrix of 1x1 => the rank of the matrix is 1.
[Figure: x1 and x2 plotted in the plane, lying on the same line.]
Determinants
• In a vector space of n dimensions, there will be no more
than n linearly independent vectors.
• If 3 two-element vectors x'1 = (1,2), x'2 = (2,2), x'3 = (3,1) are represented by a matrix X',
one of them can be expressed as a linear combination of the others: x2 = a·x1 + b·x3.
The determinant of the matrix X' will thus be zero.
The largest square sub-matrix with a non-zero determinant will be a matrix of 2x2 => the
rank of the matrix is 2.
[Figure: x'1, x'2 and x'3 plotted in the plane, with x2 = a·x1 + b·x3.]
Determinants
The notions of determinant, rank of a matrix, and linear dependency are
closely linked.
Take a set of vectors x1, x2, …, xn, all with the same number of elements: these
vectors are linearly dependent if one can find a set of scalars c1, c2, …, cn, not all
equal to zero, such that:
c1·x1 + c2·x2 + … + cn·xn = 0
A set of vectors is linearly dependent if one of them can be expressed as a
linear combination of the others. They define in space a smaller number of
dimensions than the total number of vectors in the set. The resulting matrix will
be rank-deficient and the determinant will be zero.
Similarly, if all the elements of a row or column are zero, the determinant of the
matrix will be zero.
If a matrix has two rows or columns that are equal, its determinant will also
be zero.
For a matrix to be invertible, its determinant has to be non-zero (it has to be square
and of full rank).
A matrix that is not invertible is said to be singular.
Conversely, a matrix that is invertible is said to be non-singular.
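A MATLAB illustration of a singular matrix, reusing the rank-deficient example from earlier:

X = [1 2; 2 4];   % x2 = 2*x1, so X is rank-deficient
det(X)            % returns 0: X is singular and has no inverse
% inv(X) would trigger a singular-matrix warning in MATLAB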
[Figure: overview of the SPM analysis pipeline. A single-voxel BOLD time series, normalised to an anatomical reference, passes through model specification, parameter estimation, hypothesis testing and statistic computation; statistical inference uses random field theory (RFT) at p < 0.05, yielding parameter estimates and an SPM.]
How are matrices relevant to fMRI data?
GLM equation
Y = X · β + ε
[Figure: the GLM in matrix form. Y is the data vector (one value per scan, over the N scans), X is the design matrix, β is the parameter vector, and ε is the error vector.]
Response variable Y
e.g. the BOLD signal at a particular voxel.
A single voxel sampled at successive time points. Each voxel is considered as an independent observation.
[Figure: the time series of a single voxel, intensity (Y) plotted against time.]
Y = X·β + ε
How are matrices relevant to fMRI data?
Explanatory variables (the design matrix X)
– These are assumed to be measured without error.
– May be continuous;
– May be dummy, indicating levels of an experimental factor.
Solve the equation for β: this tells us how much of the BOLD signal is explained by X (a MATLAB sketch follows below).
Y = X·β + ε
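A minimal MATLAB sketch of estimating β by least squares (toy data and a hypothetical one-regressor design, not SPM's actual code):

nScans = 100;
X = [randn(nScans,1), ones(nScans,1)];  % design matrix: one regressor plus a constant
Y = randn(nScans,1);                    % toy BOLD time series for a single voxel
beta = pinv(X) * Y;                     % least-squares parameter estimates
e = Y - X * beta;                       % residual (error) vector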
In Practice
• Estimate MAGNITUDE of signal changes
• MR INTENSITY levels for each voxel at
various time points
• The relationship between the experiment and
voxel changes is established
• Calculation and notation require linear
algebra and matrix manipulations
Y = X·β + ε
Observed = Predictors * Parameters + Error
BOLD = Design Matrix * Betas + Error