Machine Learning Techniques - Types of Machine Learning - Applications - Mathematical Foundations of Machine Learning
Presented By
S.Vijayalakshmi B.E,
Assistant Professor,
Department of Computer Science,
Sri Sarada Niketan College for Women, Karur.
What is Machine Learning?
• Machine learning is the study of algorithms that improve at a task
by learning from data, rather than being explicitly programmed for
every case.
Supervised Learning:
• The model is trained on labeled data (input-output
pairs).
• Examples: Linear regression, logistic regression,
support vector machines (SVMs).
• Applications: Spam email detection, image
classification.
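The supervised setting above can be sketched with the simplest case, linear regression: fit a line to labeled (input, output) pairs using the closed-form least-squares solution. This is an illustrative sketch, not a production implementation.

```python
# Minimal supervised-learning sketch: simple linear regression on
# labeled (input, output) pairs, using the closed-form least-squares fit.

def fit_linear(xs, ys):
    """Return slope w and intercept b minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope w = cov(x, y) / var(x); intercept b makes the line pass
    # through the mean point.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var
    b = mean_y - w * mean_x
    return w, b

# Labeled training data generated from y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
w, b = fit_linear(xs, ys)
print(w, b)  # recovers w = 2.0, b = 1.0
```

Logistic regression and SVMs follow the same pattern: a model family, labeled data, and a loss to minimize.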
Unsupervised Learning:
• The model finds structure in unlabeled data (no output labels).
• Examples: k-means clustering, principal component analysis
(PCA).
• Applications: Customer segmentation, anomaly detection.
Applications of Machine Learning
Healthcare:
• Disease diagnosis, medical image analysis, drug
discovery.
Finance:
• Fraud detection, stock market prediction, risk
management.
Marketing:
• Customer segmentation, recommendation systems,
personalized ads.
Manufacturing:
• Predictive maintenance, quality control, supply chain
optimization.
Autonomous Systems:
• Self-driving cars, robotics, drones.
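The unsupervised setting from earlier can be sketched with k-means (one common clustering algorithm, chosen here for illustration): alternate assigning points to their nearest center and moving each center to the mean of its cluster.

```python
# Minimal unsupervised-learning sketch: 1-D k-means clustering on
# unlabeled points. Illustrative only.

def kmeans_1d(points, centers, iters=10):
    """Alternate assignment and mean-update steps."""
    for _ in range(iters):
        # Assignment step: each point joins its nearest center
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # Update step: each center moves to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]  # two obvious groups
centers = kmeans_1d(points, centers=[0.0, 5.0])
print(sorted(centers))  # converges to the group means [1.0, 9.0]
```

No labels were used: the grouping emerges from the data itself, which is the defining trait of unsupervised learning.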
Mathematical Foundations of Machine Learning
• Random Variables: A variable whose value is subject to
randomness.
• Probability Theory: A framework for quantifying
uncertainty.
• Decision Theory: Provides a formal framework for
decision-making under uncertainty.
• Bayes Decision Theory: A probabilistic approach to
making decisions based on Bayes' Theorem.
• Information Theory: The study of the quantification of
information.
Random Variables and Probabilities
Random Variables:
• Discrete Random Variable: Takes on a countable set of
values (e.g., coin flip outcomes).
• Continuous Random Variable: Takes on any value in a
range (e.g., temperature).
Probability:
• The likelihood of an event occurring, expressed as a
number between 0 and 1.
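A discrete random variable and its probability can be sketched by simulation: flip a fair coin many times and estimate the probability of heads as a relative frequency between 0 and 1.

```python
import random

# Sketch: a discrete random variable (a fair coin flip) and the
# probability of one outcome, estimated from repeated trials.

random.seed(0)  # fixed seed so the run is reproducible
flips = [random.choice(["H", "T"]) for _ in range(10_000)]
p_heads = flips.count("H") / len(flips)
print(p_heads)  # close to the true probability 0.5
```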
Probability Distributions
• Definition: A probability distribution describes the likelihood of different
outcomes in a random experiment.
• Common Distributions:
• Normal Distribution: Bell-shaped curve, commonly used in statistics and
machine learning.
• Bernoulli Distribution: Models binary outcomes (e.g., success/failure).
• Poisson Distribution: Models the number of events in fixed intervals of time or
space.
• Binomial Distribution: Models the number of successes in a fixed number of
independent trials.
• Example:
• Normal Distribution: Used for modeling errors in regression models.
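Two of the distributions above can be made concrete by writing out their probability mass functions directly; these are textbook formulas, sketched here with the standard library.

```python
from math import comb, exp, factorial

# Sketch: probability mass functions for the binomial and Poisson
# distributions described above.

def binomial_pmf(k, n, p):
    """P(k successes in n independent trials, each with success prob p)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    """P(k events in an interval whose mean event count is lam)."""
    return lam**k * exp(-lam) / factorial(k)

# A pmf sums to 1 over its support
total = sum(binomial_pmf(k, 10, 0.3) for k in range(11))
print(total)                     # 1.0
print(binomial_pmf(5, 10, 0.5))  # 0.24609375: 5 heads in 10 fair flips
```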
Decision Theory
• Definition: A framework for making decisions under
uncertainty, considering both the outcomes and the
probabilities of various decisions.
• Key Concepts:
• Expected Utility: The weighted average of possible
outcomes, with weights as their respective probabilities.
• Risk Aversion: A preference for certain outcomes over
uncertain ones with equal expected utility.
• Loss Functions: Functions that measure the cost
associated with making a wrong decision.
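Expected utility and risk aversion can be sketched with a toy choice (the payoffs and probabilities below are made up for illustration): a certain $50 versus a fair coin flip for $100.

```python
# Sketch: expected utility as a probability-weighted average of
# outcomes. The bets below are invented for illustration.

def expected_utility(outcomes):
    """Weighted average over (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

safe_bet = [(1.0, 50.0)]                # certain $50
risky_bet = [(0.5, 100.0), (0.5, 0.0)]  # fair coin flip for $100

print(expected_utility(safe_bet))   # 50.0
print(expected_utility(risky_bet))  # 50.0
# The expected utilities are equal, yet a risk-averse agent prefers
# the certain outcome: that preference is exactly risk aversion.
```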
Information Theory
• Definition: Information theory deals with quantifying,
storing, and communicating information.
• Key Concepts:
• Entropy: Measures the uncertainty in a random variable.
• Mutual Information: Measures the amount of information
one random variable contains about another.
• Kullback-Leibler Divergence: Measures the difference
between two probability distributions.
• Applications: Data compression (e.g., Huffman coding),
feature selection in machine learning.
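Entropy and KL divergence can be computed directly from their standard definitions; this sketch measures both in bits for small discrete distributions.

```python
from math import log2

# Sketch: entropy of a discrete distribution and the KL divergence
# between two distributions, both measured in bits.

def entropy(p):
    """H(p) = -sum p_i * log2(p_i): uncertainty in a random variable."""
    return -sum(pi * log2(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """D_KL(p || q) = sum p_i * log2(p_i / q_i): difference between
    two distributions (zero iff they are equal)."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

fair = [0.5, 0.5]
biased = [0.9, 0.1]
print(entropy(fair))                # 1.0 bit: maximal for two outcomes
print(kl_divergence(biased, fair))  # positive: the distributions differ
```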
THANK YOU