Intro MLT 08Jan25
Machine Learning Techniques: An Introduction
Machine Learning
“Machine Learning” – “Field of study that gives computers the capability to learn
without being explicitly programmed”.
❑ Prediction: Once our model is trained, it can be fed a set of inputs for which it will
provide a predicted output (label).
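The train-then-predict workflow above can be sketched in a few lines. This is a minimal illustration only: the one-feature threshold classifier and the toy training data are hypothetical, not an algorithm named in these slides.

```python
# Minimal sketch of the train-then-predict workflow.
# The threshold rule and the toy data below are purely illustrative.

def train(examples):
    """Learn a threshold separating two labels from (feature, label) pairs."""
    zeros = [x for x, y in examples if y == 0]
    ones = [x for x, y in examples if y == 1]
    return (max(zeros) + min(ones)) / 2  # midpoint between the two classes

def predict(threshold, x):
    """Feed an input to the trained model to get a predicted label."""
    return 1 if x > threshold else 0

training_data = [(1.0, 0), (2.0, 0), (6.0, 1), (7.0, 1)]
model = train(training_data)          # learned threshold: 4.0
print(predict(model, 1.5))            # 0
print(predict(model, 6.5))            # 1
```

Once `train` has produced the model, `predict` never looks at the training data again; it only applies what was learned.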
Types of Learning
• Supervised Learning
• Unsupervised Learning
• Semi-Supervised Learning
Types of Supervised Learning:
• Classification
• Regression
The goal here is to predict a value as close to the actual output value as our
model can, and evaluation is then done by calculating an error value. The
smaller the error, the greater the accuracy of our regression model.
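The error-based evaluation described above can be sketched as follows. Mean squared error is one common choice of error value; the two sets of predictions below are made-up examples, not data from the slides.

```python
# Sketch: evaluating a regression model by its error value.
# Assumed metric: mean squared error (smaller error -> better model).

def mean_squared_error(actual, predicted):
    """Average of the squared differences between actual and predicted values."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

actual = [3.0, 5.0, 7.0]
good = [2.9, 5.1, 7.0]   # predictions close to the actual outputs
bad = [1.0, 9.0, 4.0]    # predictions far from the actual outputs

print(mean_squared_error(actual, good))  # small error -> accurate model
print(mean_squared_error(actual, bad))   # large error -> inaccurate model
```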
Examples of Supervised Learning Algorithms:
❑ Linear Regression
❑ Nearest Neighbor
❑ Decision Trees
❑ Random Forest
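One of the listed algorithms, Nearest Neighbor, is simple enough to sketch directly: predict the label of the single closest training point. The 2-D toy points below are hypothetical.

```python
# Sketch of 1-nearest-neighbor classification (toy data is hypothetical):
# the query takes the label of the closest labeled training point.

def nearest_neighbor(train_points, query):
    """Return the label of the training point closest to the query."""
    def dist2(p, q):
        # Squared Euclidean distance (no sqrt needed for comparisons).
        return sum((a - b) ** 2 for a, b in zip(p, q))
    _, label = min(train_points, key=lambda pl: dist2(pl[0], query))
    return label

train_points = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
                ((5.0, 5.0), "B"), ((5.3, 4.9), "B")]
print(nearest_neighbor(train_points, (0.9, 1.1)))  # A
print(nearest_neighbor(train_points, (5.1, 5.0)))  # B
```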
Unsupervised Learning:
Unsupervised learning is the training of a machine using information that is
neither classified nor labeled, allowing the algorithm to act on that
information without guidance. Here the task of the machine is to group
unsorted information according to similarities, patterns, and differences
without any prior training on the data. Unsupervised machine learning is more
challenging than supervised learning due to the absence of labels.
❑ Clustering
❑ Association
Clustering: A clustering problem is where you want to discover the inherent
groupings in the data, such as grouping customers by purchasing behavior.
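The customer-grouping idea above can be sketched with a simple k-means loop, one of the clustering algorithms covered later in the syllabus. The 1-D "customer spend" values and the choice of two clusters are hypothetical.

```python
# Sketch of k-means clustering on 1-D "customer spend" values (data and
# starting centroids are hypothetical). Points are assigned to their
# nearest centroid, and centroids are recomputed until nothing changes.

def kmeans_1d(points, centroids):
    while True:
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)), key=lambda j: abs(p - centroids[j]))
            clusters[i].append(p)
        # Recompute each centroid as the mean of its cluster.
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:          # assignments stabilized
            return centroids, clusters
        centroids = new

spend = [10, 12, 11, 80, 85, 82]      # two natural purchasing groups
centroids, clusters = kmeans_1d(spend, [10.0, 80.0])
print(clusters)                       # [[10, 12, 11], [80, 85, 82]]
```

Note that the algorithm discovers the two groups without ever seeing a label, which is exactly what distinguishes it from the supervised setting.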
What is Data?
Data are:
• Facts,
• Measurements,
• Observations or
• Descriptions of things.
Attributes
Quantitative
• Discrete
• Continuous
Data → Patterns → Knowledge
Structured and Unstructured Data
Data characteristics:
• Velocity: Transaction, Data Stream, Static
• Variety: Temporal, Spatial, …
Data Sources
Data Come from Everywhere
Data Preprocessing
Missing Values
Summarization
Structured Data
Attributes
L T P Cr.
3 0 2 4.0
Course Objective: To understand the need for and latest trends in machine learning, and to design
appropriate machine learning algorithms for problem solving.
Introduction Definition of learning systems, machine learning, training data, concept
representation, function approximation for learning system; Objective functions for classification,
regression, and ranking.
Concept of Optimization: Convex function, gradients and sub-gradients, Unconstrained smooth
convex minimization, gradient descent, Constrained optimization, Stochastic gradient descent
Regression and Supervised learning Linear regression and LMS algorithm, Perceptron and
logistic regression, Nonlinear function estimation, Multilayer perceptron and backpropagation,
recurrent networks, Generalization, Underfitting, overfitting, Cross-validation, Regularization,
mixture of Gaussians
Support Vector Machines: Maximum margin linear separators, solution approach to finding
maximum margin separators, Radial basis function network, Kernels for learning non-linear
functions, support vector regression
Decision Tree Learning: Representing concepts as decision trees, Recursive induction, splitting
attributes, simple trees and computational complexity, Overfitting, noisy data, and pruning.
Bayesian Learning: Probability and Bayes rule, Naive Bayes learning algorithm, Parameter
smoothing, Generative vs. discriminative training, Logistic regression, Bayes nets and Markov nets
for representing dependencies.
Clustering and Unsupervised Learning: Learning from unclassified data. Clustering. k-means
partitional clustering, Fuzzy C-means, Expectation maximization (EM) for soft clustering, Gaussian
Mixture Model
Dimension Reduction Techniques: Feature selection, Principal Component Analysis (PCA),
Linear Discriminant Analysis (LDA)
Applications to Power System: Some of the Power System applications but not restricted to energy
pricing estimation, energy meter analytics, renewable generation forecasting, load profile and
consumer classification, Controller design for ALFC, Filter design.
Laboratory work: The laboratory work includes supervised learning algorithms, linear regression,
logistic regression, decision trees, k-nearest neighbor, Bayesian learning and the naïve Bayes
algorithm, support vector machines and kernels and neural networks with an introduction to Deep
Learning and basic clustering algorithms.
Course Learning Outcomes (CLO):
After the completion of the course the students will be able to:
1. Demonstrate the concept of optimization for various learning functions
2. Analyze the complexity of machine learning algorithms and their limitations
3. Realize learning algorithms as neural computing machine
Text Books:
1. Mitchell T.M., Machine Learning, McGraw Hill (1997).
2. Alpaydin E., Introduction to Machine Learning, MIT Press (2010).
Reference Books:
1. Bishop C., Pattern Recognition and Machine Learning, Springer-Verlag (2006).
2. Michie D., Spiegelhalter D. J., Taylor C. C., Machine Learning, Neural and Statistical
Classification, Overseas Press (2009).
Evaluation Scheme:
S. No.  Evaluation Elements                                                   Weightage (%)
1.      MST                                                                   25
2.      EST                                                                   45
3.      Sessional (Assignments/Projects/Tutorials/Quizzes/Lab Evaluations)    30