Machine Learning
Linear Regression
Agenda
• Single Dimension Linear Regression

• Multi Dimension Linear Regression

• Gradient Descent

• Generalisation, Over-fitting & Regularisation

• Categorical Inputs
What is Linear Regression?
• Learning

• A supervised algorithm that learns from a set of training samples.

• Each training sample has one or more input values and a single output value.

• The algorithm learns the line, plane or hyper-plane that best fits the training
samples.

• Prediction

• Use the learned line, plane or hyper-plane to predict the output value for any
input sample.
Single Dimension Linear
Regression
Single Dimension Linear Regression
• Single dimension linear regression
has pairs of x and y values as input
training samples. 

• It uses these training samples to derive a line that predicts values of y.

• The training samples are used to derive the values of a and b in the line y = ax + b that minimise the error between actual and predicted values of y.

Single Dimension Linear Regression
• We want a line that minimises the error between the y values in the training samples and the y values that the line passes through.

• Or put another way, we want the line that "best fits" the training samples.

• So we define the error function for
our algorithm so we can minimise
that error.
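A common way to write this error, assuming the line has the form y = ax + b and there are n training samples, is the sum of squared residuals:

```latex
E(a, b) = \sum_{i=1}^{n} \bigl( y_i - (a x_i + b) \bigr)^2
```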
Single Dimension Linear Regression
• To determine the value of a that
minimises the error E, we look for
where the partial differential of E
with respect to a is zero.
Single Dimension Linear Regression
• To determine the value of b that
minimises the error E, we look for
where the partial differential of E
with respect to b is zero.
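Setting both partial derivatives of the squared-error E above to zero gives the usual least-squares conditions; a sketch of the two equations:

```latex
\frac{\partial E}{\partial a} = -2 \sum_{i=1}^{n} x_i \bigl( y_i - (a x_i + b) \bigr) = 0
\qquad
\frac{\partial E}{\partial b} = -2 \sum_{i=1}^{n} \bigl( y_i - (a x_i + b) \bigr) = 0
```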
Single Dimension Linear Regression
• By substituting the final equations from the previous two slides, we derive equations for a and b that minimise the error.
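A minimal numpy sketch of the resulting closed-form estimates, using the standard least-squares solution a = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(xᵢ − x̄)² and b = ȳ − a·x̄ (the data values are illustrative):

```python
import numpy as np

def fit_line(x, y):
    """Least-squares estimates of a (slope) and b (intercept) for y ≈ a*x + b."""
    x_mean, y_mean = x.mean(), y.mean()
    a = np.sum((x - x_mean) * (y - y_mean)) / np.sum((x - x_mean) ** 2)
    b = y_mean - a * x_mean
    return a, b

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.1])
a, b = fit_line(x, y)
y_pred = a * x + b   # predictions from the fitted line
```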
Single Dimension Linear Regression
• We also define a function which we can use to score how well the derived line fits.

• A value of 1 indicates a perfect fit. 

• A value of 0 indicates a fit that is no
better than simply predicting the mean
of the input y values. 

• A negative value indicates a fit that is
even worse than just predicting the
mean of the input y values.
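The score described here matches the coefficient of determination, R². A small self-contained sketch:

```python
import numpy as np

def r_squared(y, y_pred):
    """1 = perfect fit, 0 = no better than predicting the mean, negative = worse."""
    ss_res = np.sum((y - y_pred) ** 2)        # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)    # total sum of squares
    return 1.0 - ss_res / ss_tot

y = np.array([2.1, 3.9, 6.2, 8.1])
print(r_squared(y, y_pred=np.array([2.0, 4.0, 6.0, 8.0])))   # close to 1
```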
Multi Dimension Linear
Regression
Multi Dimension Linear Regression
• Each training sample has an x made
up of multiple input values and a
corresponding y with a single value. 

• The inputs can be represented as an X matrix in which each row is a sample and each column is a dimension.

• The outputs can be represented as a y matrix in which each row is a sample.
Multi Dimension Linear Regression
• Our predicted y values are calculated by multiplying the X matrix by a vector of weights, w.

• If there are 2 dimensions, then this equation defines a plane. If there are more dimensions then it defines a hyper-plane.
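A sketch of this prediction step, assuming a column of ones has been prepended to X so that one weight plays the role of the intercept:

```python
import numpy as np

X = np.array([[1.0, 2.0, 3.0],
              [1.0, 4.0, 5.0],
              [1.0, 6.0, 7.0]])   # each row is a sample; the ones column carries the intercept
w = np.array([0.5, 1.2, -0.3])    # one weight per column (dimension) of X
y_hat = X @ w                     # predicted y values, one per sample
```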
Multi Dimension Linear Regression
• We want a plane or hyper-plane
that minimises the error between
the y values in training samples
and the y values that the plane or
hyper-plane passes through.

• Or put another way, we want the plane/hyper-plane that "best fits" the training samples.

• So we define the error function for
our algorithm so we can minimise
that error.
Multi Dimension Linear Regression
• To determine the value of w that
minimises the error E, we look for
where the differential of E with
respect to w is zero.

• We use the Matrix Cookbook to
help with the differentiation!
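Setting that differential to zero leads to the normal equation (XᵀX)w = Xᵀy; a minimal numpy sketch of solving it directly (illustrative data):

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [1.0, 3.0],
              [1.0, 5.0]])        # ones column plus one feature
y = np.array([4.1, 6.2, 9.8])

# Normal equation: (X.T X) w = X.T y
w = np.linalg.solve(np.dot(X.T, X), np.dot(X.T, y))
y_hat = X @ w
```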
Multi Dimension Linear Regression
• We also define a function which we can use to score how well the derived plane or hyper-plane fits.

• A value of 1 indicates a perfect fit. 

• A value of 0 indicates a fit that is no
better than simply predicting the mean
of the input y values. 

• A negative value indicates a fit that is
even worse than just predicting the
mean of the input y values.
Multi Dimension Linear Regression
• In addition to using the X matrix to represent the basic features of our training data, we can also introduce additional dimensions (i.e. columns in our X matrix) that are derived from those basic feature values.

• If we introduce derived features whose values are powers of basic
features, our multi-dimensional linear regression can then derive
polynomial curves, planes and hyper-planes.
Multi Dimension Linear Regression
• For example, if we have just one
basic feature in each sample of X, we
can include a range of powers of that
value into our X matrix like this:

• In non-matrix form our multi-
dimensional linear equation is: 

• Inserting the powers of the basic feature that we have introduced, this becomes a polynomial (sketched below):
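A sketch of building such an X matrix from a single basic feature x, assuming powers up to degree 3 and a leading column of ones:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])                      # one basic feature per sample

# Each row becomes [1, x, x**2, x**3]; the model is still linear in the weights.
X = np.column_stack([np.ones_like(x), x, x ** 2, x ** 3])

# The fitted prediction is then y_hat = w0 + w1*x + w2*x**2 + w3*x**3 — a polynomial in x.
```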
Gradient Descent
Singular Matrices
• As we have seen, we can use
numpy’s linalg.solve() function to
determine the value of the weights
that result in the lowest possible error.

• But this doesn’t work if np.dot(X.T, X)
is a singular matrix.

• It results in the matrix equivalent of a
divide by zero.

• Gradient descent is an alternative approach to determining the optimal weights that works in all cases, including this singular matrix case.
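A small illustration of the failure mode, assuming two identical feature columns so that np.dot(X.T, X) is singular and np.linalg.solve raises an error:

```python
import numpy as np

X = np.array([[1.0, 2.0, 2.0],
              [1.0, 3.0, 3.0],
              [1.0, 5.0, 5.0]])   # last two columns are identical, so X.T X is singular
y = np.array([4.0, 6.0, 9.0])

try:
    w = np.linalg.solve(np.dot(X.T, X), np.dot(X.T, y))
except np.linalg.LinAlgError as err:
    print("Cannot solve directly:", err)   # the matrix equivalent of divide-by-zero
```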
Gradient Descent
• Gradient descent is a technique we can use to find the minimum of
arbitrarily complex error functions.

• In gradient descent we pick a random set of weights for our algorithm and iteratively adjust those weights in the opposite direction to the gradient of the error with respect to each weight.

• As we iterate, the gradient approaches zero and we approach the
minimum error.

• In machine learning we often use gradient descent with our error function
to find the weights that give the lowest errors.
Gradient Descent
• Here is an example with a very simple function, f(x) = x².

• The gradient of this function is given by df/dx = 2x.

• We choose a random initial value for x and a learning rate of 0.1 and then start the descent.

• On each iteration our x value is
decreasing and the gradient (2x)
is converging towards 0.
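A tiny sketch of this descent for f(x) = x², whose gradient is 2x, using a learning rate of 0.1:

```python
def gradient_descent_1d(x0, learning_rate=0.1, iterations=25):
    x = x0
    for _ in range(iterations):
        grad = 2 * x                  # gradient of f(x) = x**2
        x = x - learning_rate * grad  # step against the gradient
    return x

print(gradient_descent_1d(x0=5.0))    # x shrinks towards 0, where the gradient is 0
```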
Gradient Descent
• The learning rate is what is known as a hyper-parameter.

• If the learning rate is too small then convergence may take a very long
time.

• If the learning rate is too large then convergence may never happen
because our iterations bounce from one side of the minima to the other.

• Choosing suitable values for hyper-parameters is an art, so try different values and plot the results until you find ones that work well.
Multi Dimension Linear Regression
with Gradient Descent
• For multi dimension linear
regression our error function
is:

• Differentiating this with
respect to the weights vector
gives:

• We can iteratively reduce the error by adjusting the weights in the opposite direction to these gradients.
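A sketch of gradient descent for the multi-dimension case, assuming the squared error E = Σ(yᵢ − Xᵢw)², whose gradient with respect to w is −2Xᵀ(y − Xw); the learning rate and iteration count are illustrative and may need tuning:

```python
import numpy as np

def fit_gradient_descent(X, y, learning_rate=0.001, iterations=5000):
    w = np.zeros(X.shape[1])                 # start from an arbitrary set of weights
    for _ in range(iterations):
        grad = -2.0 * X.T @ (y - X @ w)      # gradient of sum((y - X w)**2) w.r.t. w
        w = w - learning_rate * grad         # step against the gradient
    return w
```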
Generalisation, Over-fitting &
Regularisation
Generalisation & Over-fitting
• As we train our model more and more, it may start to fit the training data more and more accurately, but become worse at handling test data that we feed to it later.

• This is known as "over-fitting" and results in an increased generalisation error.

• To minimise the generalisation error we should 

• Collect as much sample data as possible. 

• Use a random subset of our sample data for training.

• Use the remaining sample data to test how well our model copes with data it was not trained
with.

• Also, be careful when adding higher degrees of polynomials (x², x³, etc.), as the extra model complexity can increase over-fitting.
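A sketch of the random train/test split described above, with an illustrative 80/20 split:

```python
import numpy as np

def train_test_split(X, y, train_fraction=0.8, seed=0):
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(y))              # shuffle the sample indices
    n_train = int(train_fraction * len(y))
    train, test = order[:n_train], order[n_train:]
    return X[train], y[train], X[test], y[test]
```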
L1 Regularisation (Lasso)
• Having a large number of samples (n) with respect to the number of dimensions (d) increases the quality of our model.

• One way to reduce the effective number of dimensions is to use those that
most contribute to the signal and ignore those that mostly act as noise.

• L1 regularisation achieves this by adding a penalty that results in the
weight for the dimensions that act as noise becoming 0. 

• L1 regularisation encourages a sparse vector of weights in which few are
non-zero and many are zero.
L1 Regularisation (Lasso)
• In L1 regularisation we add a penalty to
the error function: 

• Expanding this we get: 

• Take the derivative with respect to w to
find our gradient:

• Where sign(w) is -1 if w < 0, 0 if w = 0
and +1 if w > 0

• Note that because sign(w) has no
inverse function we cannot solve for w
and so must use gradient descent.
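A sketch of lasso fitted by gradient descent, assuming the penalised error E = Σ(yᵢ − Xᵢw)² + λΣ|wⱼ|, so the gradient gains an extra λ·sign(w) term; λ and the learning rate are illustrative:

```python
import numpy as np

def fit_lasso(X, y, l1=0.1, learning_rate=0.001, iterations=5000):
    w = np.zeros(X.shape[1])
    for _ in range(iterations):
        grad = -2.0 * X.T @ (y - X @ w) + l1 * np.sign(w)   # squared-error gradient + L1 penalty
        w = w - learning_rate * grad
    return w   # weights on "noise" dimensions are pushed towards zero
```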
L2 Regularisation (Ridge)
• Another way to reduce the complexity of our model and prevent overfitting to outliers is L2 regularisation, which is also known as ridge regression.

• In L2 Regularisation we introduce an additional term to the cost function
that has the effect of penalising large weights and thereby minimising this
skew.
L2 Regularisation (Ridge)
• In L2 regularisation we add the sum of the squares of the weights to the error function.

• Expanding this we get: 

• Take the derivative with respect to
w to find our gradient:
L2 Regularisation (Ridge)
• Solving for the values of w that give
minimal error:
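Because the L2 penalty is differentiable everywhere, w can be found in closed form; a sketch of the usual ridge solution w = (XᵀX + λI)⁻¹Xᵀy:

```python
import numpy as np

def fit_ridge(X, y, l2=0.1):
    d = X.shape[1]
    # (X.T X + λ I) w = X.T y — the λ I term also removes the singular-matrix problem.
    return np.linalg.solve(X.T @ X + l2 * np.eye(d), X.T @ y)
```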
L1 & L2 Regularisation (Elastic Net)
• L1 Regularisation minimises the impact of dimensions that have low
weights and are thus largely “noise”.

• L2 Regularisation minimises the impact of outliers in our training data.

• L1 & L2 Regularisation can be used together and the combination is
referred to as Elastic Net regularisation.

• Because the differential of the error function contains the sign function (from the L1 term), which has no inverse, we cannot solve for w and must use gradient descent.
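A sketch of the combined elastic net update by gradient descent, simply summing the L1 (λ₁·sign(w)) and L2 (2λ₂·w) penalty gradients; all parameter values are illustrative:

```python
import numpy as np

def fit_elastic_net(X, y, l1=0.1, l2=0.1, learning_rate=0.001, iterations=5000):
    w = np.zeros(X.shape[1])
    for _ in range(iterations):
        grad = -2.0 * X.T @ (y - X @ w) + l1 * np.sign(w) + 2.0 * l2 * w
        w = w - learning_rate * grad
    return w
```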
Categorical Inputs
One-hot Encoding
• When some inputs are categories (e.g. gender) rather than numbers (e.g.
age) we need to represent the category values as numbers so they can be
used in our linear regression equations.

• In one-hot encoding we allocate each category value its own dimension in the inputs. So, for example, we allocate X1 to Audi, X2 to BMW & X3 to Mercedes.

• For Audi X = [1,0,0]

• For BMW X = [0,1,0]

• For Mercedes X = [0,0,1]
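A minimal sketch of one-hot encoding the car-make example (the category list and samples are illustrative):

```python
import numpy as np

categories = ["Audi", "BMW", "Mercedes"]
samples = ["BMW", "Audi", "Mercedes", "Audi"]

# One column per category value; a 1 marks the category of each sample, 0 elsewhere.
X_onehot = np.array([[1.0 if s == c else 0.0 for c in categories] for s in samples])
# X_onehot[0] == [0, 1, 0] (BMW), X_onehot[1] == [1, 0, 0] (Audi), ...
```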
Summary
• Single Dimension Linear Regression

• Multi Dimension Linear Regression

• Gradient Descent

• Generalisation, Over-fitting & Regularisation

• Categorical Inputs
