LinearAlgebra Bareminimum MLFA-1
Uploaded by Himanshi Gupta

Linear Algebra Basics

Mahesh Mohan M R
Centre of Excellence in AI
Indian Institute of Technology Kharagpur
Scalar times vector
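A minimal sketch of scalar-vector multiplication (the slides reference MATLAB; NumPy is assumed here for illustration):

```python
import numpy as np

# Scalar times vector: every component is scaled by the scalar,
# so the vector's length is scaled by the same factor.
v = np.array([3.0, 1.0])
c = 2.0
cv = c * v                     # (2*3, 2*1) = (6, 2)
```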
Multiplication:
Dot product (inner product)

(1×N)(N×1) → (1×1)

• MATLAB: ‘inner matrix dimensions must agree’; the outer dimensions give the size of the resulting matrix.

Dot product geometric intuition: the “overlap” of two vectors.
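A quick NumPy sketch of both points above, the dimension rule and the geometric “overlap” (the vectors are illustrative):

```python
import numpy as np

# (1xN)(Nx1) -> (1x1): inner dimensions must agree,
# outer dimensions give the size of the result.
row = np.ones((1, 3))          # 1x3
col = np.ones((3, 1))          # 3x1
result = row @ col             # 1x1 (row @ row would raise a ValueError)

# With plain vectors, the dot product is a scalar.
a = np.array([3.0, 1.0])
b = np.array([0.0, 2.0])
dot = a @ b                    # 3*0 + 1*2 = 2

# Geometric intuition: dot = |a||b| cos(theta), the "overlap" of a and b.
cos_theta = dot / (np.linalg.norm(a) * np.linalg.norm(b))
```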
Example: linear feed-forward network

Input neurons’ firing rates r1, r2, …, ri, …, rn feed through synaptic weights onto an output neuron; the output neuron’s firing rate is the weighted sum (dot product) of the input rates with the weights.

• Insight: for a given input (L2) magnitude, the response is maximized when the input is parallel to the weight vector.
• Receptive fields can also be thought of this way.
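The insight can be checked numerically. A sketch assuming a linear output neuron with NumPy (the weight values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(5)            # synaptic weights onto the output neuron

def response(r, w):
    """Firing rate of a linear output neuron: dot product of inputs and weights."""
    return w @ r

# Among inputs of fixed L2 magnitude (unit vectors here), the one parallel
# to w maximizes the response, and that maximum equals |w|.
r_parallel = w / np.linalg.norm(w)
r_other = rng.standard_normal(5)
r_other /= np.linalg.norm(r_other)
```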
Matrix times a vector

(M×N)(N×1) → (M×1)

Matrix times a vector:
inner product interpretation

• Rule: the ith element of y is the dot product of the ith row of W with x.
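The rule in NumPy; the matrix and vector here match the worked example used later in the slides (columns (3,1) and (0,2), input (1,2)):

```python
import numpy as np

W = np.array([[3.0, 0.0],
              [1.0, 2.0]])
x = np.array([1.0, 2.0])

y = W @ x                      # (2x2)(2x1) -> 2x1

# Rule: the i-th element of y is the dot product of the i-th row of W with x.
y_rows = np.array([W[i, :] @ x for i in range(W.shape[0])])
```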
Example: Linear network

• Wij is the connection strength (weight) onto neuron yi from neuron xj.
Example: Linear network:
inner product point of view

• What is the response of cell yi of the second layer?
• The response is the dot product of the ith row of W with the vector x.
Matrix times a vector:
outer product interpretation

• The product is a weighted sum of the columns of W, weighted by the entries of x.
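The same product computed column-wise, as a sketch in NumPy (same illustrative W and x as before):

```python
import numpy as np

W = np.array([[3.0, 0.0],
              [1.0, 2.0]])
x = np.array([1.0, 2.0])

# W @ x as a weighted sum of the columns of W, weighted by the entries of x:
# W @ x = x[0]*W[:,0] + x[1]*W[:,1] = 1*(3,1) + 2*(0,2) = (3,5)
y_cols = x[0] * W[:, 0] + x[1] * W[:, 1]
```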
Example of the outer product method

[Figure: worked example with the column vectors (3,1) and (0,2); the second column is scaled to (0,4) before the columns are summed.]
What do matrices do to vectors?

[Figure: the vector (2,1) is mapped to (3,5).]

• The new vector is:
1) rotated
2) scaled

In a multi-layer network, these kinds of transformations are performed by the initial layers.
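A small check that a matrix both rotates and scales, sketched with the illustrative W used above (the specific vectors are assumptions, not the slide’s figure):

```python
import numpy as np

W = np.array([[3.0, 0.0],
              [1.0, 2.0]])
v = np.array([1.0, 0.0])
u = W @ v                      # (3, 1)

def angle(a):
    """Direction of a 2-D vector, in radians."""
    return np.arctan2(a[1], a[0])

rotated = not np.isclose(angle(u), angle(v))                      # direction changed
scaled = not np.isclose(np.linalg.norm(u), np.linalg.norm(v))     # length changed
```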
Example: Linear network:
outer product point of view

• How does cell xj contribute to the pattern of firing of layer 2?
• The contribution of xj to the network output is xj times the jth column of W (illustrated with the 1st column of W).
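This decomposition can be sketched directly; each input cell’s contribution is its rate times the corresponding column, and the contributions sum to the full output (illustrative W and x):

```python
import numpy as np

W = np.array([[3.0, 0.0],
              [1.0, 2.0]])
x = np.array([1.0, 2.0])

# Contribution of input cell x_j to the layer-2 firing pattern: x_j * W[:, j].
contributions = [x[j] * W[:, j] for j in range(len(x))]
total = sum(contributions)     # equals W @ x
```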
How does a matrix transform a square?

[Figure: the unit square with corners (1,0) and (0,1) is mapped to the parallelogram with corners (3,1) and (0,2).]

Geometric definition of the determinant: How does a matrix transform a square?

[Figure: the unit square spanned by (1,0) and (0,1).]