Lesson-01-Introduction-To-Machine-Learning

The document provides an introduction to machine learning, defining it as the ability of computers to learn from experience without explicit programming. It discusses key definitions, types of learning algorithms (supervised, unsupervised, and reinforcement learning), and provides various real-world scenarios illustrating their applications. Examples include spam filters, recommendation systems, self-driving cars, and customer segmentation.


CSA105 Machine Learning

Introduction to
Machine Learning
3

What is Machine Learning?


▸ Arthur Samuel (1959) defined machine learning as “the field of study that gives computers the ability to learn without being explicitly programmed.”
▹ Samuel worked on developing a checkers-playing
program that could learn and improve over time.
▹ His ideas and approaches helped shape the early
understanding of machine learning.
4

What is Machine Learning?


▸ Tom Mitchell (1998) provided a more formal definition of machine learning, which states that “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.”
5

3 key aspects from Mitchell’s definition

▸ Learning from experience: The program must be able to process and learn from data (experience E). This data can be anything from labeled examples to raw observations.
▸ Task-oriented: The learning is directed towards
a specific goal or task (T). This could be
predicting future values, classifying data points,
or making decisions.
6

3 key aspects from Mitchell’s definition

▸ Performance improvement: The program's performance on the task should demonstrably improve (measured by P) as it gains more experience (E).
7

Examples Illustrating Mitchell’s Definition

▸ Spam Filter
▹ Task (T): Classifying emails as spam or not spam.
▹ Experience (E): A large collection of labeled emails (spam
and not spam).
▹ Performance measure (P): Accuracy of the classification.
▹ Learning process: The machine learning algorithm analyzes
the emails, identifying patterns that differentiate spam from
legitimate emails. As it processes more emails, it refines its
understanding of these patterns, improving its accuracy in
classifying future emails.
8
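
The spam-filter example above maps onto a few lines of code. The sketch below is a minimal illustration of the T/E/P framing, assuming scikit-learn is available; the four "emails" and their labels are made-up toy data, not part of the original lesson.

```python
# Minimal spam-filter sketch in the spirit of Mitchell's T/E/P framing.
# The tiny email list below is invented toy data, not a real dataset.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Experience (E): labeled examples -- 1 = spam, 0 = not spam
emails = [
    "win a free prize now", "cheap meds limited offer",
    "meeting agenda for tomorrow", "project report attached",
]
labels = [1, 1, 0, 0]

# Turn raw text into word-count features
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)

# Task (T): classify emails as spam / not spam
model = MultinomialNB()
model.fit(X, labels)

# Performance (P) would normally be accuracy on a held-out test set;
# here we just classify one new email.
new_email = vectorizer.transform(["free prize offer"])
print(model.predict(new_email))  # expected: [1] (spam)
```

In practice the performance measure P would be computed on a held-out set of labeled emails rather than a single prediction, and accuracy would improve as more labeled emails (experience E) are added.
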

Examples Illustrating Mitchell’s Definition

▸ Recommendation Systems
▹ Task (T): Recommending products or services to users.
▹ Experience (E): User purchase history, browsing behavior,
and ratings.
▹ Performance measure (P): Click-through rate, conversion
rate, or user satisfaction.
▹ Learning process: The algorithm analyzes user data to identify
preferences and relationships between products. Based on this
understanding, it recommends items that are likely to be of interest to
specific users. As the system receives more data and feedback, it
personalizes recommendations more effectively.
9
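
One common way to act on the recommendation idea above is item-to-item similarity: items that tend to be rated together by the same users are considered related. The sketch below is a minimal, hypothetical illustration using only NumPy; the user-item rating matrix is invented toy data, and real recommenders use far richer signals (browsing behavior, clicks, feedback).

```python
# Minimal item-similarity recommender sketch on made-up ratings
# (rows = users, columns = items; 0 means "not yet rated").
import numpy as np

ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 0, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

# Cosine similarity between item columns
norms = np.linalg.norm(ratings, axis=0)
similarity = (ratings.T @ ratings) / np.outer(norms, norms)

# Recommend for user 0: score items by similarity to what the user already rated
user = ratings[0]
scores = similarity @ user
scores[user > 0] = -np.inf          # do not re-recommend items already rated
print("recommend item:", int(np.argmax(scores)))
```
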

Examples Illustrating Mitchell’s Definition


▸ Self-Driving Car
▹ Task (T): Navigating roads safely and efficiently.
▹ Experience (E): Sensor data from cameras, LiDAR,
and radar, along with real-world driving experience.
▹ Performance measure (P): Accident rate, adherence
to traffic rules, and smooth driving.
▹ Learning process: The car's AI constantly processes sensor
data, building a model of its surroundings and learning to make
decisions like steering, braking, and lane changing. As it
accumulates more driving experience, it improves its ability to
handle various road situations and navigate safely.
10

Types of Learning Algorithms

▸ Supervised Learning
▸ Unsupervised Learning
▸ Reinforcement Learning
11

Supervised Learning
▸ Supervised learning algorithms work with
labeled data.
▸ The algorithm learns to map the input to the
output based on the data provided.
▸ The idea is to teach the machine to learn
how to do something.
12

Supervised Learning
▸ Common Tasks:
▸ Classification
▸ Regression

▸ Some algorithms:
▹ Linear Regression
▹ Logistic Regression
▹ Decision Trees
▹ K-Nearest Neighbors
13
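
To make the classification/regression distinction concrete, here is a minimal sketch, assuming scikit-learn, that fits two of the algorithms listed above (Linear Regression and K-Nearest Neighbors) on made-up toy data.

```python
# Regression vs. classification with two of the listed algorithms.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsClassifier

# Regression: predict a continuous value y from an input x
x = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.1, 3.9, 6.2, 8.1])
reg = LinearRegression().fit(x, y)
print("regression prediction at x=5:", reg.predict([[5.0]])[0])

# Classification: predict a discrete label from input features
X = np.array([[1, 1], [1, 2], [8, 8], [9, 8]])
labels = np.array([0, 0, 1, 1])
clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print("classification prediction for [7, 9]:", clf.predict([[7, 9]])[0])
```
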

Scenario 1
▸ Scenario: Predicting Loan Eligibility at a Bank
▸ Context: A bank wants to improve its loan approval
process by automating the initial assessment of
loan applications.
▸ Data: The bank has historical data on loan
applications, including:
▹ Applicant information (age, income, employment history, credit
score)
▹ Loan details (loan amount, loan purpose, repayment term)
▹ Loan status (approved, rejected, defaulted)
14

Scenario 1
▸ Supervised Learning Task: The bank can use
supervised learning, specifically classification, to
build a model that predicts whether a new loan
application is likely to be approved or rejected.
15
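
A minimal sketch of how such a classifier could be built, assuming scikit-learn; the applicant records and approval labels below are invented for illustration and are not real bank data.

```python
# Loan-approval classification sketch with logistic regression on toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per applicant: [age, annual income (thousands), credit score]
X = np.array([
    [25, 30, 580], [40, 90, 720], [35, 60, 690],
    [52, 120, 760], [29, 35, 600], [45, 80, 710],
])
# Labels from historical outcomes: 1 = approved, 0 = rejected
y = np.array([0, 1, 1, 1, 0, 1])

model = LogisticRegression(max_iter=1000)
model.fit(X, y)

# Predict approval probability for a new application
new_applicant = np.array([[33, 55, 650]])
print("approval probability:", model.predict_proba(new_applicant)[0, 1])
```
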

Scenario 2
▸ Scenario: Predicting Housing Prices
▸ Context: A real estate company wants to estimate the selling
price of new houses based on various features.
▸ Data: The company has historical data on sold houses,
including:
▸ Location (city, neighborhood)
▸ Property characteristics (square footage, no. of bedrooms, year
built)
▸ Amenities (pool, garage, etc.)
▸ Selling price
16

Scenario 2
▸ Supervised Learning Task: The company can use supervised learning, specifically regression models (e.g., decision tree regressors), to predict the selling price of new houses based on their features (see the sketch below).
17
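
A minimal sketch of the regression task above, assuming scikit-learn; the house records and prices are invented toy data.

```python
# Housing-price regression sketch with a decision tree regressor.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Features: [square footage, bedrooms, year built]
X = np.array([
    [1200, 2, 1995], [2500, 4, 2010], [1800, 3, 2001],
    [900, 1, 1980], [3000, 5, 2018], [1500, 3, 1999],
])
# Target: selling price in thousands
y = np.array([180, 420, 290, 120, 550, 230])

model = DecisionTreeRegressor(max_depth=3, random_state=0)
model.fit(X, y)

new_house = np.array([[2000, 3, 2005]])
print("predicted price (k):", model.predict(new_house)[0])
```
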

Scenario 3
▸ Scenario: Identifying Fake News on Social Media
▸ Context: A social media platform wants to combat the
spread of misinformation by identifying and flagging
potentially fake news articles.
▸ Data: The platform has data on news articles and user
interactions, including:
▸ Article content (text, images, source)
▸ User engagement data (likes, shares, comments)
▸ Fact-checking data (articles labeled as true, false, or misleading by fact-checking organizations)
18

Scenario 3
▸ Supervised Learning Task: The platform can use
supervised learning, specifically classification, to build a
model that identifies the likelihood of an article being fake
news.
19

Scenario 4
▸ Scenario: Predicting Energy Consumption in Buildings
▸ Context: A utility company wants to predict the energy
consumption of buildings to optimize energy production and
distribution.
▸ Data: The company has data on buildings and their energy
consumption, including:
▸ Building characteristics (size, type, insulation level)
▸ Weather data (temperature, humidity)
▸ Occupancy data (number of occupants, daily schedule)
▸ Historical energy consumption data
20

Scenario 4
▸ Supervised Learning Task: The company can use supervised learning, specifically regression models (e.g., neural networks), to predict the energy consumption of buildings based on their features and external factors (see the sketch below).
21
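
A minimal sketch using a small neural-network regressor; scikit-learn's MLPRegressor is used here as a stand-in, since the slides do not prescribe a specific library, and the building records are made-up toy values.

```python
# Energy-consumption regression sketch with a small neural network (MLP).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Features: [building size (m^2), outside temperature (C), occupants]
X = np.array([
    [500, 30, 20], [1200, 35, 60], [800, 10, 30],
    [300, 5, 10], [1500, 25, 80], [700, 15, 25],
])
# Target: daily energy consumption (kWh)
y = np.array([350, 900, 500, 180, 1100, 420])

# Scaling the inputs helps the neural network train stably
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,),
                                   max_iter=5000, random_state=0))
model.fit(X, y)

print("predicted kWh:", model.predict(np.array([[1000, 28, 50]]))[0])
```
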

Unsupervised Learning
▸ Unsupervised learning deals with unlabeled
data.
▸ The algorithm's job is to find hidden
patterns, structures, or relationships within
the data itself.
▸ The idea is to let the machine learn by itself.
22

Unsupervised Learning
▸ Common Tasks:
▸ Dimensionality Reduction
▸ Clustering

▸ Some algorithms:
▹ Principal Component Analysis (PCA)
▹ K-Means Clustering
▹ Hierarchical Clustering
▹ DBSCAN
23
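
To make dimensionality reduction concrete, here is a minimal PCA sketch, assuming scikit-learn, on randomly generated unlabeled data (one feature is deliberately made redundant so PCA has structure to find).

```python
# Dimensionality reduction sketch: project 4-D points down to 2-D with PCA.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                        # 100 unlabeled samples, 4 features
X[:, 3] = X[:, 0] * 2 + 0.1 * rng.normal(size=100)  # make one feature redundant

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                   # (100, 2)
print(pca.explained_variance_ratio_)     # how much variance each component keeps
```
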

Scenario 1
▸ Scenario: Customer Segmentation for a Streaming Service
▸ Context: A streaming service has a large user base with
diverse viewing preferences. They want to understand their
customer base better to personalize content
recommendations and marketing strategies.
▸ Data: The service has data on user behavior, including:
▸ Viewing history (movies and shows watched, time spent
watching)
▸ Search queries
▸ Account information (subscription plan, demographics)
24

Scenario 1
▸ Unsupervised Learning Task: The service can use
unsupervised learning, specifically clustering, to segment its
user base into groups with similar viewing preferences.
▸ This will help identify distinct customer segments, such as
fans of specific genres, avid movie watchers, or occasional
viewers.
25
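
A minimal sketch of the segmentation idea above with k-means clustering, assuming scikit-learn; the per-genre viewing hours below are invented toy data standing in for real viewing histories.

```python
# Customer-segmentation sketch: cluster users by viewing behavior with k-means.
import numpy as np
from sklearn.cluster import KMeans

# Features per user: [hours of drama, hours of comedy, hours of documentaries]
X = np.array([
    [20, 2, 1], [18, 3, 0], [1, 15, 2], [2, 20, 1],
    [0, 1, 12], [1, 2, 15], [19, 1, 2], [2, 18, 0],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
segments = kmeans.fit_predict(X)

print("segment per user:", segments)
print("segment centers:", kmeans.cluster_centers_)
```

Each cluster center summarizes a segment (e.g., heavy drama watchers), which is exactly the kind of grouping the marketing team can act on.
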

Scenario 2
▸ Scenario: Anomaly Detection in Network Traffic
▸ Context: A network security company monitors network
traffic for potential security threats. They want to identify
unusual patterns that might indicate malicious activity.
▸ Data: The company collects network traffic data, including:
▸ IP addresses
▸ Packet size and frequency
▸ Port usage
▸ Time of access
26

Scenario 2
▸ Unsupervised Learning Task: The company can use
unsupervised learning, specifically anomaly detection, to
identify deviations from normal network traffic patterns.
▸ Deviations from these patterns, such as sudden spikes in
traffic from unusual locations or access attempts at odd
hours, can be flagged as potential anomalies for further
investigation.
27
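
One simple way to realize this with an algorithm already listed in this lesson is DBSCAN: points that do not belong to any dense cluster are labeled -1 and can be treated as potential anomalies. The sketch below assumes scikit-learn and uses synthetic traffic-like data (packets per second and average packet size) invented for illustration.

```python
# Anomaly-detection sketch: flag traffic points that DBSCAN leaves outside
# every dense cluster (label -1) as candidates for investigation.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
normal = rng.normal(loc=[100, 500], scale=[5, 20], size=(200, 2))  # typical traffic
attacks = np.array([[400, 1400], [380, 60]])                       # unusual spikes
X = np.vstack([normal, attacks])

# Scale features so both dimensions contribute comparably to distances
X_scaled = StandardScaler().fit_transform(X)

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X_scaled)
anomalies = np.where(labels == -1)[0]
print("flagged points:", anomalies)
```
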

Reinforcement Learning
▸ Reinforcement learning involves an agent
interacting with an environment.
▸ The agent learns an optimal strategy by
getting feedback in the form of rewards for
good actions and penalties for bad ones.
28

Reinforcement Learning
▸ Common Tasks:
▸ Game playing
▸ Robotics
▸ Resource Optimization

▸ Some algorithms:
▹ Q-Learning
▹ SARSA (State-Action-Reward-State-Action)
▹ Deep Q-Networks (DQNs)
29
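
A minimal sketch of tabular Q-learning, the first algorithm listed above, on a toy one-dimensional corridor environment invented for illustration (it is not one of the scenarios that follow): the agent starts at cell 0 and earns +1 for reaching cell 5, with a small penalty per step.

```python
# Tabular Q-learning sketch on a 6-cell corridor (toy environment).
import numpy as np

n_states, n_actions = 6, 2          # actions: 0 = left, 1 = right
alpha, gamma, epsilon = 0.5, 0.95, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(state, action):
    """Environment dynamics: move left/right, reward only at the goal cell."""
    next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
    if next_state == n_states - 1:
        return next_state, 1.0, True        # reached the goal
    return next_state, -0.01, False         # small step penalty otherwise

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current Q-values, sometimes explore
        action = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # Q-learning update: move Q(s,a) toward reward + gamma * max_a' Q(s',a')
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) * (not done) - Q[state, action])
        state = next_state

print("learned greedy action per state:", np.argmax(Q, axis=1))   # expect mostly 1 (move right)
```

The same update rule scales up to the scenarios on the following slides, where the state comes from sensors or grid measurements and the Q-function is usually approximated by a neural network (as in Deep Q-Networks).
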

Scenario 1
▸ Scenario: Self-Driving Car Navigation
▸ Context: Develop an autonomous car that can navigate
various environments safely and efficiently.
▸ Agent: The self-driving car acts as the agent in this scenario.
▸ Environment: The traffic environment, including roads,
other vehicles, pedestrians, and weather conditions, forms the
dynamic environment.
30

Scenario 1
▸ Action Space: The agent has various actions it can take,
such as accelerating, braking, turning, and changing lanes.
▸ Reward Signal: The agent receives rewards for reaching the
destination safely and efficiently, avoiding collisions, and
obeying traffic rules. Penalties can be given for unsafe actions
or failing to reach the destination.
▸ Goal: The agent's goal is to learn an optimal policy for
navigating in different traffic scenarios, maximizing its
cumulative reward over time.
31

Scenario 2
▸ Scenario: Resource Management in a Smart Grid
▸ Context: Optimize energy distribution and consumption in a
smart grid with connected devices and dynamic electricity
demands.
▸ Agent: A central controller acts as the agent in this scenario.
▸ Environment: The smart grid with its connected devices,
energy sources, and varying demands forms the environment.
32

Scenario 2
▸ Action Space: The agent can adjust energy production from
different sources, manage battery storage levels, and control
energy distribution to different areas.
▸ Reward Signal: The agent receives rewards for meeting
electricity demands efficiently, minimizing energy waste, and
maintaining grid stability. Penalties can be given for power
outages, exceeding demand capacity, or inefficient resource
allocation.
▸ Goal: The agent's goal is to learn an optimal policy for
managing energy resources in the smart grid, balancing
demand, supply, and cost while maintaining stability.
33

Scenario 3
▸ Scenario: Playing a Complex Video Game
▸ Context: Train an AI agent to learn and play a complex video
game, requiring strategic decision-making and adaptation to
different situations.
▸ Agent: The AI player acts as the agent in this scenario.
▸ Environment: The game world with its rules, objectives, and
challenges forms the environment.
▸ Action Space: The agent has various actions it can take,
depending on the game, such as moving, attacking, using
items, or interacting with the environment.
34

Scenario 3
▸ Reward Signal: The agent receives rewards for achieving
objectives within the game, such as defeating enemies,
completing levels, or maximizing score. Penalties can be given
for failing tasks, taking damage, or losing the game.
▸ Goal: The agent's goal is to learn an optimal policy for
playing the game, maximizing its cumulative reward and
achieving the game's objectives.
35

Some Applications of ML
▸ (Application examples were shown as images on the original slide)
▸ And many more…
36

THANKS!
Any questions?
