MACHINE LEARNING ALGORITHM - Unit-1-1

Learning problems require defining the task, measuring performance, and experience source. For example, a checkers learning problem involves playing checkers as the task, percentage of games won as the performance measure, and practice games as the experience source. Defining these three elements - task, performance metric, and training data - establishes the parameters of a well-defined learning problem.


ABOUT COURSE

Course Objectives:

1. To introduce the basic machine learning algorithms.
2. To understand the nature of a problem and apply a suitable machine learning algorithm.
Course Outcomes:

1. Understand the complexity of machine learning algorithms and their limitations.
2. Understand modern notions in machine learning and computing.
3. Be capable of confidently applying common machine learning algorithms in practice and
   implementing their own.
4. Be capable of performing experiments in machine learning using real-world data.
5. Implement different learning models.
ABOUT COURSE
● Machine learning is concerned with the question of how to make computers learn from
experience.

● The ability to learn is not only central to most aspects of intelligent behavior, but machine
learning techniques have also become key components of many software systems.

● For example, machine learning techniques are used to create spam filters, to analyze
customer purchase data, or to detect fraud in credit card transactions.

● The field of Machine Learning, which addresses the challenge of producing machines that
can learn, has become an extremely active and exciting area, with an ever-expanding
inventory of practical (and profitable) results, many enabled by recent advances in the
underlying theory.

● This course will introduce the fundamental set of techniques and algorithms that constitute
machine learning.
Marks Distribution

Course Title: Machine Learning Algorithms


Course Code: BAIL203 / BAIP203
Semester: IV    Term: EVEN

Teaching Scheme: Th 2, Tu --, Pr 2 (4 hours total); Credits: 3

Evaluation Scheme:
  Theory:    TAE 10, CAE 15, ESE 25  (50 marks)
  Practical: INT 25, EXT 25          (50 marks)
  Total: 100 marks
TAE and CAE planning
1. Assignments should be taken on a weekly basis as below:
   a. MCQ on basics and introduction of MLA (10 M)
   b. Implementation of regression algorithms, overfitting, collaborative recommendation (10 M)
   c. Implementation of logistic regression, SVM, Bayes learning (10 M)
   d. Implementation of PCA, clustering (10 M)

2. All the above assignments should be mapped with TAE1 and TAE2.

3. The CAE examination should be broken up as 10 marks on implementation/numerical problems
   and 5 marks on descriptive questions, out of 15 marks.

4. The syllabus for CAE-1 will be Units 1 and 2 (11 to 13 Feb), and CAE-2 will cover Units 3, 4
   and 5 (22 to 24 Apr).
COURSE CONTENT
UNIT I:   Introduction: Basic definitions, Probability and Bayes learning, types of learning,
          hypothesis space and inductive bias, evaluation, cross-validation
UNIT II:  Linear regression, Decision trees, overfitting, Instance-based learning,
          Feature reduction, Collaborative filtering based recommendation
UNIT III: Logistic Regression, Support Vector Machine, Kernel function and Kernel SVM
UNIT IV:  Clustering: k-means, adaptive hierarchical clustering, Gaussian mixture model
UNIT V:   Computational learning theory, PAC learning model, Sample complexity, VC Dimension,
          Ensemble learning
Text Books

1. Introduction to Machine Learning, by Dr. Nilesh Shelke, Dr. Narendra Chaudhari,
   Dr. Gopal Sakarkar; Das Ganu Prakashan.
2. Python with Machine Learning, by Dr. A Krishna Mohan, Dr. T Murali Mohan, Karunakar;
   S. Chand Prakashan.
3. Introduction to Machine Learning, 2nd ed., by Ethem Alpaydin; The MIT Press, Cambridge,
   Massachusetts, London, England.
Course Content
Introduction
I.   Machine Learning has been a buzzword for the past few years.
II.  One reason for this is the huge amount of data produced by applications.
III. Another is the increase in computation power in the past few years and the development of
     better algorithms.
IV.  For example, a wearable fitness tracker like Fitbit, or an intelligent home assistant like
     Google Home.
Machine Learning Vs Traditional Programming
Problem Solving Approach
Machine Learning Approach
APPLICATION OF
MACHINE LEARNING – PART B
Machine Learning Applications
1. Prediction — Machine learning can be used in prediction systems. Considering the loan
   example, to compute the probability of a default, the system will need to classify the
   available data into groups.
2. Image recognition — Machine learning can be used for face detection in an image as well.
   There is a separate category for each person in a database of several people.
3. Speech recognition — The translation of spoken words into text. It is used in voice searches
   and more. Voice user interfaces include voice dialing, call routing, and appliance control.
   It can also be used for simple data entry and the preparation of structured documents.
4. Medical diagnosis — ML can be trained to recognize cancerous tissues.
5. Financial industry and trading — Companies use ML in fraud investigations and credit checks.
Some examples of MACHINE LEARNING Applications
History of Machine Learning
Definitions of Machine Learning
● According to Arthur Samuel, Machine Learning algorithms enable computers to learn from
  data, and even improve themselves, without being explicitly programmed.

● Machine learning (ML) is a category of algorithm that allows software applications to become
  more accurate in predicting outcomes without being explicitly programmed.

● The basic premise of machine learning is to build algorithms that can receive input data and
  use statistical analysis to predict an output while updating outputs as new data becomes
  available.
Understanding Machine Learning Algorithms
Types of Machine Learning

Supervised Learning Unsupervised Learning


Supervised Learning Algorithm
● In supervised learning, an AI system is presented with data which is labeled, which means
  that each data point is tagged with the correct label.
● The goal is to approximate the mapping function so well that when you have new input data
  (X), you can predict the output variable (Y) for that data.
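
A minimal sketch of this idea in Python, assuming scikit-learn is available (the dataset,
classifier and parameters here are illustrative choices, not from the slides):

```python
# Supervised learning sketch: learn a mapping from labeled examples (X, y),
# then predict y for new, unseen inputs.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)          # features X, correct labels y
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)                # approximate the mapping X -> Y

print(model.predict(X_test[:5]))           # predicted labels for new input data
print(model.score(X_test, y_test))         # fraction of test labels predicted correctly
```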
Unsupervised Learning Algorithm
In Unsupervised Learning, an AI system is presented with unlabeled, uncategorized data
and the system’s algorithms act on the data without prior training. The output is
dependent upon the coded algorithms. Subjecting a system to unsupervised learning is
one way of testing AI.

(Figure: unlabeled images grouped by the system into "Ducks" and "Not Ducks".)
Unsupervised Learning Algorithm

Example: in a class, group the students on the basis of height.
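
A minimal sketch of this example in Python using k-means clustering from scikit-learn
(assumed available); the heights below are made-up values:

```python
# Unsupervised learning sketch: group students by height with no labels given.
import numpy as np
from sklearn.cluster import KMeans

heights_cm = np.array([[150], [152], [155], [168], [170], [173], [185], [188]])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
groups = kmeans.fit_predict(heights_cm)    # the algorithm discovers the groups itself

print(groups)                   # cluster index assigned to each student
print(kmeans.cluster_centers_)  # average height of each discovered group
```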


Semi-supervised Learning Algorithm
Reinforcement Learning

01. A reinforcement learning algorithm learns by interacting with its environment.
02. It receives rewards for performing correctly and penalties for performing incorrectly.
03. It learns without intervention from a human by maximizing its reward and minimizing its
    penalty.
04. It is a type of dynamic programming that trains algorithms using a system of reward and
    punishment.
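
A toy sketch of the reward-and-penalty idea (plain Python, my own example, not from the
slides): an agent repeatedly picks one of two actions and learns to prefer the one that earns
the higher average reward.

```python
# Toy reinforcement learning sketch: a 2-armed bandit learned with an epsilon-greedy rule.
import random

rewards = {"A": 1.0, "B": 0.3}   # hidden average reward of each action (assumed)
value = {"A": 0.0, "B": 0.0}     # the agent's current estimate of each action
counts = {"A": 0, "B": 0}
epsilon = 0.1                    # small chance of exploring a random action

for step in range(1000):
    if random.random() < epsilon:
        action = random.choice(["A", "B"])           # explore
    else:
        action = max(value, key=value.get)           # exploit the best-known action

    reward = rewards[action] + random.gauss(0, 0.1)  # environment returns a noisy reward
    counts[action] += 1
    # incremental average: move the estimate toward the observed reward
    value[action] += (reward - value[action]) / counts[action]

print(value)   # the estimate for "A" should end up clearly higher than for "B"
```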
Reinforcement Learning
LEARNING PROBLEMS

1. In general, to have a well-defined learning problem, we must identify three features:
   the class of tasks (T), the measure of performance to be improved (P), and the source of
   experience (E).

2. A checkers learning problem:
   Task T: playing checkers
   Performance measure P: percent of games won against opponents
   Training experience E: playing practice games against itself

3. A handwriting recognition learning problem:
   Task T: recognizing and classifying handwritten words within images
   Performance measure P: percent of words correctly classified
   Training experience E: a database of handwritten words with given classifications

4. A robot driving learning problem:
   Task T: driving on public four-lane highways using vision sensors
   Performance measure P: average distance traveled before an error (as judged by a human
   overseer)
   Training experience E: a sequence of images and steering commands recorded while observing
   a human driver
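
A tiny sketch (my own illustration, not from the slides) of how these three elements can be
written down explicitly before any modelling starts; the class and field names are hypothetical:

```python
# Writing down T, P and E makes the learning problem well defined before any code is written.
from dataclasses import dataclass

@dataclass
class LearningProblem:
    task: str                 # T: the class of tasks
    performance_measure: str  # P: how improvement is measured
    training_experience: str  # E: the source of experience

checkers = LearningProblem(
    task="playing checkers",
    performance_measure="percent of games won against opponents",
    training_experience="playing practice games against itself",
)
print(checkers)
```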
Key Elements of Machine Learning

Every machine learning algorithm has three components:

1. Representation: how to represent knowledge. Examples include decision trees, sets of rules,
   instances, graphical models, neural networks, support vector machines, model ensembles and
   others.
2. Evaluation: the way to evaluate candidate programs (hypotheses). Examples include accuracy,
   precision and recall, squared error, likelihood, posterior probability, cost, margin,
   entropy, K-L divergence and others.
3. Optimization: the way candidate programs are generated, known as the search process.
   Examples include combinatorial optimization, convex optimization and constrained
   optimization.
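
As a rough illustration (my own mapping, not from the slides), the three components are visible
in a typical scikit-learn workflow, assuming scikit-learn is installed:

```python
# Representation, evaluation and optimization in one small scikit-learn example.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Representation: a linear model of the data.
model = LogisticRegression(max_iter=1000)

# Optimization: fit() searches for the parameters that best explain the training data.
model.fit(X_train, y_train)

# Evaluation: score the candidate hypothesis with a chosen metric (here, accuracy).
print(accuracy_score(y_test, model.predict(X_test)))
```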
Working with Machine Learning Algorithms
1. Start loop.
2. Understand the domain, prior knowledge and goals. Talk to domain experts. Often the goals
   are very unclear. You often have more things to try than you can possibly implement.
3. Data integration, selection, cleaning and pre-processing. This is often the most
   time-consuming part. It is important to have high-quality data; the more data you have, the
   harder this step becomes, because real data is dirty. Garbage in, garbage out.
4. Learning models. The fun part. This part is very mature, and the tools are general.
5. Interpreting results. Sometimes it does not matter how the model works as long as it
   delivers results. Other domains require that the model is understandable. You will be
   challenged by human experts.
6. Consolidating and deploying discovered knowledge. The majority of projects that are
   successful in the lab are not used in practice. It is very hard to get something used.
7. End loop.
Supervised Learning
        X1   X2   X3   X4   ...   Xn  |  Y
  I1    A1   A2   A3   A4   ...   An  |  Y1
  I2    B1   B2   B3   B4   ...   Bn  |  Y2
  I3    C1   C2   C3   C4   ...   Cn  |  Y3
  I4    D1   D2   D3   D4   ...   Dn  |  Y4
  ...
  In

If Y is a discrete-valued feature, we call it a classification problem.
  Example: will it rain today or not?
If Y is a continuous-valued feature, we call it a regression problem.
  Example: a house price prediction system.
Supervised Learning

        Math  Phy  Chm  Eng  |  Y
  I1     28    35   40   38  |  P
  I2     37    41   35   33  |  P
  I3     44    39   20   24  |  F
  I4     22    33   29   23  |  F

Let us consider a student X having marks 33, 43, 38 and 28. What will be his result?
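
A minimal sketch (assuming scikit-learn) that answers the question above by fitting a
classifier to the four labeled rows and predicting the new student's result; the choice of a
decision tree is illustrative:

```python
# Classification sketch for the marks table: Y is discrete (P/F), so this is classification.
from sklearn.tree import DecisionTreeClassifier

X = [[28, 35, 40, 38],   # Math, Phy, Chm, Eng for I1..I4
     [37, 41, 35, 33],
     [44, 39, 20, 24],
     [22, 33, 29, 23]]
y = ["P", "P", "F", "F"]

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X, y)

print(clf.predict([[33, 43, 38, 28]]))   # predicted result for the new student
```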
Supervised Learning
Feature Selection for Machine Learning Algorithms
Feature Space

(Figures: feature-space plots of the training data; the testing points are marked with "?".)
Hypothesis Space
The hypothesis space used by a machine learning system is the set of all legal hypotheses that
might possibly be returned by it. It is typically defined by a hypothesis language, possibly in
conjunction with a bias.

h ∈ H
where H is the set of hypotheses and h is the hypothesis output by the learning algorithm.
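
A tiny illustration (my own, not from the slides): if H is the set of threshold rules
"predict 1 when x > θ" for a handful of θ values, the learner simply returns the h in H that
makes the fewest mistakes on the training data.

```python
# Picking a hypothesis h from a small hypothesis space H of threshold rules.
train_x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
train_y = [0,   0,   0,   1,   1,   1  ]          # true labels

H = [1.5, 2.5, 3.5, 4.5, 5.5]                     # candidate thresholds (the hypothesis space)

def errors(theta):
    """Number of training mistakes made by the rule 'predict 1 when x > theta'."""
    return sum((x > theta) != bool(y) for x, y in zip(train_x, train_y))

best_theta = min(H, key=errors)                   # h = the hypothesis with the fewest errors
print(best_theta, errors(best_theta))             # expected: 3.5 with 0 errors
```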
Feature Space
Hypothesis Space
Inductive Bias

Inductive bias refers to the set of (explicit or implicit) assumptions made by a learning
algorithm in order to perform induction, that is, to generalize a finite set of observations
(the training data) into a general model of the domain.
Inductive Bias
Evaluation and Cross Validation

● To select a proper h from a hypothesis space H for the training data set S, we have to
  evaluate the performance of the algorithm.

● We perform an experimental evaluation using an error matrix (confusion matrix), accuracy,
  and precision / recall.

● Cross-validation is used to split the dataset into training and testing datasets.
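
A short sketch of the idea (assuming scikit-learn): hold out part of the data for testing so
the chosen h is evaluated on examples it has not seen.

```python
# Evaluate a hypothesis on held-out data instead of the data it was trained on.
from sklearn.datasets import load_wine
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

h = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(accuracy_score(y_test, h.predict(X_test)))   # performance on unseen data
```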
Evaluation and Cross Validation

If the predicted label h(x) equals the true label y, there is no error; otherwise there is an
error.


Evaluation and Cross Validation
Type of Errors
1. Absolute Error:        (1/n) Σ_{i=1}^{n} | h(x_i) − y_i |
2. Sum of Squares Error:  (1/n) Σ_{i=1}^{n} ( h(x_i) − y_i )²

Note: both of the above methods are used to measure error in regression problems.
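
A small sketch (plain Python) computing both regression error measures for a made-up set of
predictions:

```python
# Mean absolute error and mean squared error for a regression hypothesis h.
y_true = [3.0, 5.0, 7.5, 10.0]        # actual values y_i
y_pred = [2.5, 5.5, 7.0, 11.0]        # predictions h(x_i)
n = len(y_true)

mae = sum(abs(h - y) for h, y in zip(y_pred, y_true)) / n
mse = sum((h - y) ** 2 for h, y in zip(y_pred, y_true)) / n

print(mae)   # 0.625
print(mse)   # 0.4375
```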
Evaluation and Cross Validation
Type of Errors
3. Number of Misclassifications:  (1/n) Σ_{i=1}^{n} δ( h(x_i), y_i )

   where δ returns 1 if h(x_i) and y_i are different and 0 if both are the same.

4. Confusion Matrix

Note: the above methods are used to measure error in classification problems.
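
A small sketch (plain Python) of the misclassification rate and a 2x2 confusion matrix for
made-up binary predictions:

```python
# Misclassification rate and a 2x2 confusion matrix for binary labels (1 = positive).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
n = len(y_true)

misclassification_rate = sum(p != t for p, t in zip(y_pred, y_true)) / n

tp = sum(p == 1 and t == 1 for p, t in zip(y_pred, y_true))
tn = sum(p == 0 and t == 0 for p, t in zip(y_pred, y_true))
fp = sum(p == 1 and t == 0 for p, t in zip(y_pred, y_true))
fn = sum(p == 0 and t == 1 for p, t in zip(y_pred, y_true))

print(misclassification_rate)            # 0.25
print([[tp, fn], [fp, tn]])              # [[3, 1], [1, 3]]
```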


Evaluation and Cross Validation
How to Evaluate
True Positive:
Your prediction is positive, and it turns out to be true. For example, you had predicted that
France would win the world cup, and it won.
True Negative:
Your prediction is negative, and it is true. You had predicted that England would not win, and
it lost.
False Positive:
Your prediction is positive, and it is false. You had predicted that England would win, but it
lost.
False Negative:
Your prediction is negative, and it is false. You had predicted that France would not win, but
it won.
Evaluation and Cross Validation
How to Evaluate

Precision tells us how many of the cases predicted as positive actually turned out to be
positive: Precision = TP / (TP + FP).

Recall tells us how many of the actual positive cases we were able to predict correctly with
our model: Recall = TP / (TP + FN).
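
Continuing the small confusion matrix sketch above (my own numbers), precision and recall can
be computed directly from the counts:

```python
# Precision and recall from the confusion matrix counts of the earlier sketch.
tp, fp, fn = 3, 1, 1

precision = tp / (tp + fp)   # of all positive predictions, how many were truly positive
recall = tp / (tp + fn)      # of all actual positives, how many we found

print(precision)   # 0.75
print(recall)      # 0.75
```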
Evaluation and Cross Validation
Cross Validation
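
A short sketch of k-fold cross-validation (assuming scikit-learn): the data is split into k
folds, and each fold takes a turn as the test set while the rest is used for training.

```python
# 5-fold cross-validation: average test accuracy over five train/test splits.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5)

print(scores)          # accuracy on each held-out fold
print(scores.mean())   # averaged estimate of generalization performance
```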
Thank You
