
Linear Algebra

Abhishek Rishabh
ISB
System of Linear Equations
• Systems of linear equations play a central role in linear algebra.

• Many problems can be formulated as systems of linear equations.

• Linear algebra gives us the tools to solve them.


System of Linear Equations
• Good to know, but how does it relate to data science?
Matrices
• Matrices play a central role in linear algebra. They can be used
to compactly represent systems of linear equations.
Matrices
• Addition
Matrices
• Multiplication
Matrices
• Multiplication-Example
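The addition and multiplication rules above can be sketched in NumPy. This is a minimal illustration with hypothetical matrices, not the slide's own example:

```python
import numpy as np

# Hypothetical matrices to illustrate the operations.
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# Addition: element-wise, requires identical shapes.
print(A + B)   # [[ 6  8] [10 12]]

# Multiplication: entry (i, j) is the dot product of row i of A
# with column j of B.
print(A @ B)   # [[19 22] [43 50]]
```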
Solving System of Linear Equations
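Numerically, a system written in matrix form Ax = b can be solved directly. A small sketch with a hypothetical system (not from the slides):

```python
import numpy as np

# Hypothetical system:
#   2x +  y = 5
#    x + 3y = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

x = np.linalg.solve(A, b)  # solves Ax = b
print(x)  # [1. 3.]
```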
Vector Spaces
Linear Independence, Basis and Rank
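Linear independence can be checked numerically via the rank of a matrix. A sketch with a hypothetical matrix whose third column is the sum of the first two:

```python
import numpy as np

# Columns: the third equals the sum of the first two,
# so the columns are linearly dependent.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

print(np.linalg.matrix_rank(A))  # 2 -> only two independent columns
```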
Topics in Analytic Geometry

• Until now we looked at vectors, vector spaces, etc. at a general but abstract level. Now we will add some geometric interpretation and intuition.
Norms

• When we think of geometric vectors, i.e., directed line segments that start at the origin, then intuitively the length of a vector is the distance of the “end” of this directed line segment from the origin.
Norms
• A norm on a vector space V is a function ‖·‖ : V → ℝ that assigns each vector x a real number ‖x‖, its length.
Norms

• In geometric terms, the triangle inequality states that for any triangle, the sum of the lengths of any two sides must be greater than or equal to the length of the remaining side.
Norms

• Manhattan Norm

• The Manhattan norm is also called the L1 norm: ‖x‖₁ = Σᵢ |xᵢ|.


Norms

• Euclidean Norm

• The Euclidean norm is also called the L2 norm: ‖x‖₂ = √(Σᵢ xᵢ²).
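Both norms can be computed with NumPy; a minimal sketch with a hypothetical vector:

```python
import numpy as np

x = np.array([3.0, -4.0])  # hypothetical vector

l1 = np.linalg.norm(x, 1)  # Manhattan norm: |3| + |-4| = 7
l2 = np.linalg.norm(x)     # Euclidean norm: sqrt(9 + 16) = 5
print(l1, l2)  # 7.0 5.0
```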


Inner Products

• Inner products allow for the introduction of intuitive geometrical concepts, such as the length of a vector and the angle or distance between two vectors. A major purpose of inner products is to determine whether vectors are orthogonal to each other.
Inner Products
• Scalar dot product.

• Use Boyd’s examples to show.


• Remember the dot product is a special case of an inner product. Inner products are defined more generally; for this course, we don’t want to delve into that.
Inner Products – examples
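A minimal sketch of the scalar dot product in NumPy (hypothetical vectors, not the slides' examples):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# Dot product: sum of element-wise products, 1*4 + 2*5 + 3*6 = 32.
print(np.dot(x, y))  # 32.0
print(x @ y)         # same result
```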
Symmetric, Positive Definite Matrices
• Symmetric, positive definite matrices play an important role in
machine learning, and they are defined via the inner product.
• Math definition:
• To check if A is positive definite: xᵀAx > 0 for all x ≠ 0.
• To check if A is positive semi-definite: xᵀAx ≥ 0 for all x.
Symmetric, Positive Definite Matrices

Example
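One practical way to check the definition (a sketch with a hypothetical matrix): for a symmetric matrix, xᵀAx > 0 for all nonzero x is equivalent to all eigenvalues being positive.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # symmetric, hypothetical

# A symmetric matrix is positive definite iff all eigenvalues are > 0.
eigvals = np.linalg.eigvalsh(A)
print(eigvals)              # [1. 3.]
print(np.all(eigvals > 0))  # True -> positive definite
```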
Angle between vectors.

• Intuitively, the angle between two vectors tells us how similar their orientations are.
• Math: the angle ω between two vectors x and y is given by cos ω = ⟨x, y⟩ / (‖x‖ ‖y‖),
• where the numerator is the inner product and the denominator is the product of the norms.
Angle between vectors.

• Compute the angle between

• x = [1, 1]ᵀ and y = [1, 2]ᵀ


Angle between vectors.

• x = [1, 1]ᵀ and y = [1, 2]ᵀ

• Approximately 18.4 degrees
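The slide's computation can be reproduced directly in NumPy:

```python
import numpy as np

x = np.array([1.0, 1.0])
y = np.array([1.0, 2.0])

# cos(omega) = <x, y> / (||x|| * ||y||)
cos_omega = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
omega = np.degrees(np.arccos(cos_omega))
print(omega)  # ~18.43 degrees
```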
Orthogonality

• Two vectors x and y are orthogonal if and only if ⟨x, y⟩ = 0, and we write x ⟂ y.

• Meaning the angle between them is 90 degrees.


Orthogonal Matrix

• Matrix A is orthogonal iff its columns are orthonormal (orthogonal unit vectors).

• AAᵀ = AᵀA = I

• This is useful for rotations; we might discuss this later.
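A sketch verifying AAᵀ = I for a rotation matrix (the 45-degree rotation is a hypothetical example, not from the slides):

```python
import numpy as np

theta = np.pi / 4  # 45-degree rotation, a classic orthogonal matrix
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Columns are orthonormal, so A A^T = I.
print(np.allclose(A @ A.T, np.eye(2)))  # True

# Rotating a vector preserves its length.
v = np.array([1.0, 0.0])
print(bool(np.isclose(np.linalg.norm(A @ v), 1.0)))  # True
```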


Topics in Matrix Decomposition
• In these topics we will cover how matrices can be summarized, how matrices can be decomposed, and how these decompositions can be used for matrix approximations.
Determinant and Trace
• A determinant is a mathematical object used in the analysis and solution of systems of linear equations. Determinants are only defined for square matrices.

• It is used to prove the existence of an inverse: A is invertible iff det(A) ≠ 0.


Determinant and Trace
• Determinant det(A) is the signed volume of an n-dimensional
parallelepiped formed by columns of the matrix A.
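A sketch of the invertibility check via the determinant (the matrix is hypothetical):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # hypothetical matrix

d = np.linalg.det(A)       # 1*4 - 2*3 = -2
print(round(d, 6))         # -2.0 -> nonzero, so A is invertible
```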
Cholesky Decomposition
• The Cholesky decomposition (Cholesky factorization) provides a square-root-equivalent operation on symmetric, positive definite matrices that is useful in practice.
Cholesky Decomposition
• A symmetric, positive definite matrix A can be factorized into a product A = LLᵀ, where L is a lower triangular matrix with positive diagonal elements:
Cholesky Decomposition
• A symmetric, positive definite matrix A can be factorized into a product A = LLᵀ, where L is a lower triangular matrix with positive diagonal elements.

• Cholesky decomposition helps in efficient numerical calculations.

• Typical places where it helps: MCMC, non-linear optimization, etc.

• How does it help?
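One way it helps (a sketch with a hypothetical matrix): once A = LLᵀ is computed, solving Ax = b reduces to two cheap triangular solves.

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])  # symmetric, positive definite (hypothetical)
b = np.array([1.0, 2.0])

L = np.linalg.cholesky(A)       # lower triangular, positive diagonal
print(np.allclose(L @ L.T, A))  # True

# Solve Ax = b via the factor: L y = b, then L^T x = y.
y = np.linalg.solve(L, b)       # forward substitution
x = np.linalg.solve(L.T, y)     # back substitution
print(np.allclose(A @ x, b))    # True
```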


