Module 3
Topics: Basics of Learning theory, Similarity Based Learning, Regression Analysis
Textbook 2: Chapter 3 - 3.1 to 3.4, Chapter 4, Chapter 5 - 5.1 to 5.4
Chapter 4
Similarity Based Learning
4.1 Similarity or Instance-based Learning
a) k-Nearest Neighbors (KNN)
b) Variants of KNN
c) Locally weighted regression
d) Learning vector quantization
e) Self-organizing maps
f) Radial Basis Function (RBF) networks
In Figure 4.1, there are two classes of objects, C1 and C2. Given a test instance T, its category is determined by looking at the classes of its k = 3 nearest neighbors. Since the majority of those neighbors belong to C2, the class of the test instance T is predicted as C2.
Consider the student performance training dataset of 8 data instances shown in Table 4.2, which describes the performance of individual students in a course along with the CGPA they obtained in previous semesters. The independent attributes are CGPA, Assessment and Project. The target attribute is 'Result', a discrete-valued variable that takes the two values 'Pass' or 'Fail'. Based on a student's performance, classify whether the student will pass or fail the course.
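Table 4.2 itself is not reproduced here, so the sketch below uses a small hypothetical dataset with the same schema (CGPA, Assessment, Project → Result) to show how a k = 3 nearest-neighbor classifier predicts the result for a test instance. All values are illustrative, not the textbook's.

```python
import numpy as np
from collections import Counter

# Hypothetical training data mirroring Table 4.2's schema:
# columns are CGPA, Assessment (marks), Project (marks).
X_train = np.array([
    [9.2, 85, 8], [8.0, 80, 7], [8.5, 81, 8], [6.0, 45, 5],
    [6.5, 50, 4], [8.2, 72, 7], [5.8, 38, 5], [8.9, 91, 9],
])
y_train = ["Pass", "Pass", "Pass", "Fail",
           "Fail", "Pass", "Fail", "Pass"]

def knn_predict(x_test, X, y, k=3):
    """Classify x_test by majority vote among its k nearest neighbors."""
    distances = np.linalg.norm(X - x_test, axis=1)   # Euclidean distance
    nearest = np.argsort(distances)[:k]              # indices of the k closest
    votes = Counter(y[i] for i in nearest)           # count class labels
    return votes.most_common(1)[0][0]                # majority class

# Features are used unscaled here for simplicity; in practice they
# should be normalized so no single attribute dominates the distance.
print(knn_predict(np.array([6.1, 40, 5]), X_train, y_train))  # likely "Fail"
```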
The weight is typically computed with a Gaussian kernel, wi = exp(-(x - xi)^2 / (2τ^2)), where τ is called the bandwidth parameter and controls the rate at which wi reduces to zero with the distance of x from xi.
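A minimal sketch of locally weighted regression using this Gaussian weighting, on synthetic data (the kernel form above is the standard assumption; NumPy is assumed available):

```python
import numpy as np

def locally_weighted_regression(x_query, X, y, tau=0.5):
    """Fit a weighted least-squares line around x_query and predict there.

    Each training point xi gets weight wi = exp(-(x_query - xi)^2 / (2 tau^2)),
    so nearby points dominate the local fit.
    """
    w = np.exp(-((X - x_query) ** 2) / (2 * tau ** 2))   # Gaussian weights
    A = np.column_stack([np.ones_like(X), X])            # design matrix [1, x]
    W = np.diag(w)
    # Solve the weighted normal equations: (A^T W A) theta = A^T W y
    theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return theta[0] + theta[1] * x_query

rng = np.random.default_rng(0)
X = np.linspace(0, 10, 50)
y = np.sin(X) + rng.normal(0, 0.1, 50)                   # noisy non-linear data
print(locally_weighted_regression(5.0, X, y, tau=0.5))   # local estimate near x=5
```

Because the weights are recomputed for every query point, the model fits a separate local line each time it predicts; there is no single global set of parameters.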
CHAPTER 5
REGRESSION ANALYSIS
5.1 Introduction to Regression
Regression analysis is a fundamental concept that consists of a set of machine learning methods that predict a continuous outcome variable (y) based on the value of one or more predictor variables (x). OR
Regression analysis is a statistical method to model the relationship between a dependent (target) variable and one or more independent (predictor) variables.
Regression is a supervised learning technique which helps in finding the correlation
between variables.
It is mainly used for prediction, forecasting, time series modelling, and determining the cause-and-effect relationship between variables.
Regression fits a line or curve to the datapoints on the target-predictor graph in such a way that the vertical distance between the datapoints and the regression line is minimized. The distance between the datapoints and the line tells whether the model has captured a strong relationship or not.
• The function of regression analysis is given by: y = f(x)
Here, y is called the dependent variable and x is called the independent variable.
Applications of Regression Analysis
1) Sales of goods or services
2) Value of bonds in portfolio management
3) Premiums on insurance policies
4) Yield of crop in agriculture
5) Prices of real estate
5.2 INTRODUCTION TO LINEARITY, CORRELATION AND CAUSATION
A correlation is a statistical summary of the relationship between two variables. It is a core part of exploratory data analysis and a critical aspect of numerous advanced machine learning techniques. The correlation between two variables can be visualized using a scatter plot.
There are different types of correlation:
• Positive Correlation: Two variables are said to be positively correlated when their values move in the same direction; as the value of X increases, so does the value of Y, at a constant rate.
• Negative Correlation: Variables X and Y are negatively correlated when their values change in opposite directions; as the value of X increases, the value of Y decreases at a constant rate.
• Neutral Correlation: There is no relationship between the changes in X and Y; the values are completely random and show no sign of correlation.
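These three cases can be checked numerically with the Pearson correlation coefficient; a minimal sketch on synthetic data (values assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 100)

y_pos = 2 * x + rng.normal(0, 1, 100)      # moves with x    -> r near +1
y_neg = -3 * x + rng.normal(0, 1, 100)     # moves against x -> r near -1
y_neu = rng.normal(0, 1, 100)              # unrelated to x  -> r near 0

for label, y in [("positive", y_pos), ("negative", y_neg), ("neutral", y_neu)]:
    r = np.corrcoef(x, y)[0, 1]            # Pearson correlation coefficient
    print(f"{label}: r = {r:+.2f}")
```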
Causation
Causation is about a relationship between two variables in which x causes y; this is called "x implies y". Regression is different from causation: causation indicates that one event is the result of the occurrence of the other event, i.e. there is a causal relationship between the two events.
Linear and Non-Linear Relationships
• The relationship between input features (variables) and the output (target) variable is
fundamental. These concepts have significant implications for the choice of algorithms,
model complexity, and predictive performance.
• A linear relationship creates a straight line when plotted on a graph; a non-linear relationship does not create a straight line but instead creates a curve.
• Example:
Linear: the relationship between the hours spent studying and the grades obtained in a class.
Non-Linear: GPS signal.
• Linearity:
Linear Relationship: A linear relationship between variables means that a change in
one variable is associated with a proportional change in another variable.
Mathematically, it can be represented as y = a * x + b, where y is the output, x is the
input, and a and b are constants.
Linear Models: The goal is to find the best-fitting line (a plane in higher dimensions) through the data points. Linear models are interpretable and work well when the relationship between variables is close to being linear.
Limitations: Linear models may perform poorly when the relationship between
variables is non-linear. In such cases, they may underfit the data, meaning they are too
simple to capture the underlying patterns.
• Non-Linearity:
Non-Linear Relationship: A non-linear relationship implies that the change in one variable is not proportional to the change in another variable. Non-linear relationships can take various forms, such as quadratic, exponential, logarithmic, or arbitrary shapes.
Non-Linear Models: Machine learning models like decision trees, random forests,
support vector machines with non-linear kernels, and neural networks can capture non-
linear relationships. These models are more flexible and can fit complex data patterns.
Benefits: Non-linear models can perform well when the underlying relationships in the
data are complex or when interactions between variables are non-linear. They have the
capacity to capture intricate patterns.
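To make the contrast concrete, the sketch below fits both a straight line and a non-linear model (a decision tree) to data generated from a curve; the linear model underfits, as described above. Synthetic data and scikit-learn are assumed:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = X.ravel() ** 2 + rng.normal(0, 0.3, 200)   # quadratic (non-linear) target

linear = LinearRegression().fit(X, y)
tree = DecisionTreeRegressor(max_depth=4).fit(X, y)

# The linear model underfits the curve; the tree tracks it far more closely.
print("linear R^2:", round(linear.score(X, y), 2))   # near 0
print("tree   R^2:", round(tree.score(X, y), 2))     # near 1
```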
Types of Regression
Linear Regression:
Single Independent Variable: Linear regression, also known as simple linear
regression, is used when there is a single independent variable (predictor) and one
dependent variable (target).
Purpose: Linear regression is used to establish a linear relationship between two variables and make predictions based on this relationship. It's suitable for simple scenarios where there is only one predictor.
Multiple Regression:
Multiple Independent Variables: Multiple regression, as the name suggests, is used
when there are two or more independent variables (predictors) and one dependent
variable (target).
Purpose: Multiple regression allows you to model the relationship between the dependent variable and multiple predictors simultaneously. It is used when multiple factors may influence the target variable and you want to understand their combined effect and make predictions based on all these factors.
Polynomial Regression:
Polynomial regression is an extension of multiple regression used when the relationship
between the independent and dependent variables is non-linear.
Logistic Regression:
Logistic regression is used when the dependent variable is binary (0 or 1). It models the
probability of the dependent variable belonging to a particular class.
Lasso Regression (L1 Regularization):
Lasso regression is used for feature selection and regularization. It penalizes the absolute values of the coefficients, which encourages sparsity in the model.
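A compact sketch of these regression variants using scikit-learn (all datasets and parameter values below are assumed purely for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression, Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(3)
X = rng.uniform(0, 5, size=(100, 2))                     # two predictors

# Multiple linear regression: y depends linearly on both columns.
y_lin = 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 0.2, 100)
print(LinearRegression().fit(X, y_lin).coef_)            # ~ [1.5, -2.0]

# Polynomial regression: expand features, then fit a linear model on them.
X_poly = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X[:, :1])
y_quad = X[:, 0] ** 2 + rng.normal(0, 0.2, 100)
print(LinearRegression().fit(X_poly, y_quad).coef_)      # weight on x^2 ~ 1

# Logistic regression: binary target, model outputs class probabilities.
y_bin = (X[:, 0] + X[:, 1] > 5).astype(int)
print(LogisticRegression().fit(X, y_bin).predict_proba(X[:2]))

# Lasso (L1): shrinks small coefficients toward exactly zero (sparsity).
print(Lasso(alpha=0.5).fit(X, y_lin).coef_)
```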
A linear regression model used for determining the value of the response variable, ŷ, can be represented by the following equation:
y = b0 + b1x1 + b2x2 + … + bnxn + e
where y is the dependent variable, b0 is the intercept, e is the error term, and b1, b2, …, bn are the coefficients of the independent variables x1, x2, …, xn. These b1, b2, …, bn are called the regression coefficients. The OLS (ordinary least squares) method can be used to estimate the unknown parameters (b1, b2, …, bn) by minimizing the sum of squared residuals (RSS). The sum of squared residuals is also termed the sum of squared errors (SSE).
This method is also known as the least-squares method for regression or linear regression. Mathematically, the line equations for the points are:
y1 = (a0 + a1x1) + e1
y2 = (a0 + a1x2) + e2
…
yn = (a0 + a1xn) + en
In general, ei = yi - (a0 + a1xi).
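A minimal sketch of estimating a0 and a1 by minimizing the SSE, using the closed-form least-squares solution on synthetic data (values assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 30)
y = 3.0 + 2.0 * x + rng.normal(0, 1, 30)       # true intercept 3, slope 2

# Closed-form OLS estimates for simple linear regression:
#   a1 = sum((xi - x_mean)(yi - y_mean)) / sum((xi - x_mean)^2)
#   a0 = y_mean - a1 * x_mean
x_bar, y_bar = x.mean(), y.mean()
a1 = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
a0 = y_bar - a1 * x_bar

residuals = y - (a0 + a1 * x)                  # ei = yi - (a0 + a1*xi)
sse = np.sum(residuals ** 2)                   # sum of squared errors
print(f"a0 = {a0:.2f}, a1 = {a1:.2f}, SSE = {sse:.2f}")
```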
Coefficient of Determination