Strang Data

This document provides a table of contents for a book on deep learning and neural networks. The book is divided into 7 chapters covering topics such as linear algebra, computations with large matrices, low rank approximation, special matrices, probability, optimization, and learning from data. Chapter 1 provides an overview of key concepts in linear algebra. Chapter 2 discusses numerical linear algebra and least squares approximations. Chapter 3 covers low rank matrix approximation and compressed sensing.


TOC

Chapter 0
- Deep Learning And Neural Nets
- Preface And Acknowledgments

Chapter 1: Highlights Of Linear Algebra

1.1 Multiplication Ax using columns of A
1.2 Matrix-Matrix Multiplication AB
1.3 The Four Fundamental Subspaces
1.4 Elimination And A = LU
1.5 Orthogonal Matrices And Subspaces
1.6 Eigenvalues And Eigenvectors
1.7 Symmetric Positive Definite Matrices
1.8 Singular Values And Singular Vectors In The SVD
1.9 Principal Components And The Best Low Rank Matrix
1.10 Rayleigh Quotients And Generalized Eigenvalues
1.11 Norms Of Vectors And Functions And Matrices
1.12 Factoring Matrices And Tensors: Positive And Sparse
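Section 1.1's central idea, that Ax is a combination of the columns of A, is easy to check numerically. A minimal NumPy sketch (the 3x2 matrix and the vector x are made-up examples, not from the book):

```python
import numpy as np

# A hypothetical 3x2 matrix and a coefficient vector x.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
x = np.array([10.0, -1.0])

# Row picture: the usual dot-products of the rows of A with x.
b_rows = A @ x

# Column picture (Section 1.1): x1 * (column 1) + x2 * (column 2).
b_cols = x[0] * A[:, 0] + x[1] * A[:, 1]

# Both pictures produce the same vector b.
assert np.allclose(b_rows, b_cols)
```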

Chapter 2: Computations With Large Matrices

2.1 Numerical Linear Algebra
2.2 Least Squares: Four Ways
2.3 Three Bases For The Column Space
2.4 Randomized Linear Algebra
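Two of Section 2.2's "four ways" to solve least squares can be compared side by side: the normal equations and NumPy's SVD-based solver. A hedged sketch with random data (the sizes and seed are arbitrary choices, not from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))   # tall, full-rank: 20 equations, 3 unknowns
b = rng.standard_normal(20)

# Way 1: normal equations  A^T A x = A^T b.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Way 2: NumPy's SVD-based least-squares solver.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

# For a well-conditioned full-rank A, both ways agree.
assert np.allclose(x_normal, x_lstsq)
```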

Chapter 3: Low Rank And Compressed Sensing

3.1 Changes In A^-1 From Changes In A
3.2 Interlacing Eigenvalues And Low Rank Signals
3.3 Rapidly Decaying Singular Values
3.4 Split Algorithms For l^2 And l^1
3.5 Compressed Sensing And Matrix Completion

Chapter 4: Special Matrices

4.1 Fourier Transforms: Discrete And Continuous
4.2 Shift Matrices And Circulant Matrices
4.3 The Kronecker Product A ⊗ B
4.4 Sine And Cosine Transforms from Kronecker Sums
4.5 Toeplitz Matrices And Shift Invariant Filters
4.6 Graphs, Laplacians, And Kirchhoff's Laws
4.7 Clustering By Spectral Methods And k-means
4.8 Completing Rank One Matrices
4.9 The Orthogonal Procrustes Problem
4.10 Distance Matrices

Chapter 5: Probability

5.1 Mean, Variance, And Probability
5.2 Probability Distributions
5.3 Moments, Cumulants, And Inequalities Of Statistics
5.4 Covariance Matrices And Joint Probabilities
5.5 Multivariate Gaussian And Weighted Least Squares
5.6 Markov Chains
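Section 5.6's key fact, that powers of a Markov matrix drive any starting distribution toward the steady state, fits in a few lines. A toy 2-state chain (the transition probabilities are invented for illustration, not from the book):

```python
import numpy as np

# Column-stochastic transition matrix: each column sums to 1.
P = np.array([[0.9, 0.2],
              [0.1, 0.8]])

p = np.array([1.0, 0.0])      # start entirely in state 1
for _ in range(100):
    p = P @ p                 # evolve the probability vector

# Steady state: the eigenvector of P for eigenvalue 1, scaled to sum to 1.
w, V = np.linalg.eig(P)
steady = V[:, np.argmax(w.real)].real
steady = steady / steady.sum()

# After many steps, p has converged to the steady state.
assert np.allclose(p, steady, atol=1e-8)
```

Here the steady state works out to (2/3, 1/3); the second eigenvalue 0.7 controls how fast the start vector is forgotten.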

Chapter 6: Optimization

6.1 Convexity And Newton's Method
6.2 Lagrange Multipliers = Derivatives Of The Cost
6.3 Linear Programming, Game Theory, And Duality
6.4 Gradient Descent: Towards The Minimum
6.5 Stochastic Gradient Descent And ADAM
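Section 6.4's iteration can be sketched on a toy quadratic. The matrix S and the step size below are illustrative choices, not from the book:

```python
import numpy as np

# Minimize f(x) = 1/2 x^T S x, whose gradient is S x,
# by gradient descent  x_{k+1} = x_k - step * S x_k.
S = np.array([[2.0, 0.0],
              [0.0, 1.0]])    # symmetric positive definite
x = np.array([1.0, 1.0])
step = 0.1

for _ in range(200):
    x = x - step * (S @ x)

# The iterates converge toward the minimizer x* = 0.
assert np.linalg.norm(x) < 1e-6
```

Each component shrinks by the factor 1 - step * lambda_i per iteration (0.8 and 0.9 here), so the slowest eigenvalue sets the convergence rate.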

Chapter 7: Learning From Data

7.1 Construction Of Deep Neural Networks
7.2 Convolutional Neural Networks
7.3 BackPropagation And Chain Rule
7.4 Hyperparameters: The Fateful Decision
7.5 The World Of Machine Learning
