
CS6301 – RUSA

Module – 1 : Learning

Dr.P.Uma Maheswari
Professor
Department of CSE
College of Engineering, Guindy
Anna University, Chennai-25
Learning?

To do better in the future, based on what was experienced in the past.

Learning

• Learning from data
• Learning from experience → humans and animals
• remembering, adapting, and generalising
• reasoning and logical deduction → intelligence
What is Machine Learning?

• ML is an application of artificial intelligence (AI) that gives systems the ability to automatically learn and improve from experience without being explicitly programmed.

• The process of learning begins with observations or data: the system looks for patterns in the data in order to make better decisions in the future, based on the examples that we provide.

Two aspects of the computational complexity of machine learning methods:
• the complexity of training, and
• the complexity of applying the trained algorithm.
Tom Mitchell defines ML as
• A computer program is said to learn from
experience E with respect to some class of
tasks T and performance measure P, if its
performance at tasks in T, as measured by P,
improves with experience E.
Key Elements of Machine Learning

• Representation: how to represent knowledge
• Evaluation: accuracy, precision and recall, squared error, likelihood, posterior probability, cost, margin, entropy
• Optimization: combinatorial optimization, convex optimization, constrained optimization
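
As a small illustration of the Evaluation element, the sketch below scores a set of predictions against true labels using some of the metrics named above (accuracy, precision, recall). It assumes scikit-learn is installed, and the label vectors are made up for illustration only.

from sklearn.metrics import accuracy_score, precision_score, recall_score

# Hypothetical labels, for illustration only
y_true = [1, 0, 1, 1, 0, 1]   # ground-truth labels
y_pred = [1, 0, 0, 1, 0, 1]   # a classifier's predictions

print(accuracy_score(y_true, y_pred))    # fraction of correct predictions
print(precision_score(y_true, y_pred))   # of predicted 1s, how many are truly 1
print(recall_score(y_true, y_pred))      # of true 1s, how many were found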
Supervised vs Unsupervised Learning

• Supervised: I have labeled classes.
• Unsupervised: I don't have labeled classes.
Predictive or Supervised Learning Approach

Goal: to learn a mapping from inputs x to outputs y, given a labeled set of input-output pairs D = {(xi, yi)}, i = 1,...,N.

Here
• D is called the training set, and
• N is the number of training examples.
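
As a minimal sketch, such a labeled training set D can be represented in Python as a list of input-output pairs; the data values below are made up for illustration.

# A labeled training set D of (x, y) pairs; values are illustrative only
D = [([150, 50], 'female'),   # x = [height, weight], y = class label
     ([180, 85], 'male'),
     ([165, 60], 'female')]
N = len(D)                    # N = number of training examples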

The training input xi can be anything:
• a D-dimensional vector of numbers, such as the height and weight of a person;
• a complex structured object, such as an image, a sentence, an email message, or a time series.

The output or response variable yi can be:
• a categorical or nominal variable from some finite set, yi ∈ {1,...,C} (such as male or female): classification or pattern recognition;
• a real-valued scalar (such as income level): regression;
• a value with some natural ordering, such as grades A–F: ordinal regression.

If C = 2, this is called binary classification (in which case we often assume y ∈ {0, 1});
if C > 2, this is called multiclass classification.
If the class labels are not mutually exclusive (e.g., someone may be classified as both tall and strong), it is called multi-label classification.
Supervised Learning - Classification

Here the goal is to learn a mapping from inputs x to outputs y, where y ∈ {1,...,C}, with C being the number of classes.

• If C = 2, binary classification (in which case we often assume y ∈ {0, 1});
• if C > 2, multiclass classification.

One way to formalize the problem is as function approximation.

We assume y = f(x) for some unknown function f; the goal of learning is to estimate the function f given a labeled training set, and then to make predictions using ŷ = f̂(x).
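
A minimal sketch of this view, assuming scikit-learn is available: fit an estimate f̂ of the unknown f from a small labeled training set, then predict ŷ = f̂(x) on a new input. The data and the choice of logistic regression are illustrative, not from the slides.

from sklearn.linear_model import LogisticRegression

# Tiny illustrative training set: x = [height, weight], y = class label
X_train = [[150, 50], [160, 55], [175, 80], [180, 85]]
y_train = [0, 0, 1, 1]

f_hat = LogisticRegression().fit(X_train, y_train)  # estimate f from data
print(f_hat.predict([[170, 70]]))                   # y_hat = f_hat(x)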
Given a probabilistic output, we can always compute our "best guess" as to the "true label" using

ŷ = f̂(x) = argmax over c ∈ {1,...,C} of p(y = c | x, D)

This is the most probable class label, and is called the mode of the distribution p(y | x, D); it is also known as a MAP estimate (MAP stands for maximum a posteriori).
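
A minimal sketch of this MAP estimate in Python, using an illustrative probability vector (not from the slides):

import numpy as np

# p(y = c | x, D) for classes c = 1..C (illustrative values)
p = np.array([0.1, 0.7, 0.2])

y_hat = np.argmax(p) + 1   # +1 because classes are indexed 1..C here
print(y_hat)               # mode of p(y | x, D): class 2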
Descriptive or Unsupervised Learning Approach

Goal: here we are given only inputs, D = {xi}, i = 1,...,N, and the goal is to find "interesting patterns" in the data.

Here
• D is called the training set, and
• N is the number of training examples.
Types of Unsupervised Learning

• Dimension reduction
• Clustering
• Association analysis (e.g., e-commerce recommendation engines)
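
A minimal clustering sketch, assuming scikit-learn is available. Note that there are no labels y, only inputs x; the data points are illustrative.

import numpy as np
from sklearn.cluster import KMeans

# Unlabeled inputs only: two visually separate groups of points
X = np.array([[1.0, 2.0], [1.1, 1.9], [8.0, 8.2], [7.9, 8.1]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)   # the discovered "interesting pattern": cluster ids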
Reinforcement learning
• In some applications, the output of the system is a sequence of
actions.
• In such a case, a single action is not important; what is important is the policy, that is, the sequence of correct actions to reach the goal.
• There is no such thing as the best action in any intermediate
state; an action is good if it is part of a good policy.
– In such a case, the machine learning program should be able to assess
the goodness of policies and learn from past good action sequences to
be able to generate a policy.
Example of Reinforcement Learning
• Game playing, where a single move by itself is not that important; it is the sequence of right moves that is good. A move is good if it is part of a good game-playing policy.
• A robot navigating in an environment in search of a goal location.
– At any time, the robot can move in one of a number of directions. After a number of trial runs, it should learn the correct sequence of actions to reach the goal state from an initial state, doing this as quickly as possible and without hitting any of the obstacles.
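
A minimal tabular Q-learning sketch for the robot-navigation example, on a hypothetical one-dimensional corridor of six cells with the goal in the last cell. All names and parameter values here are illustrative, not from the slides.

import random

N_STATES, GOAL = 6, 5            # corridor cells 0..5, goal at cell 5
ACTIONS = [-1, +1]               # move left or move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(500):       # repeated trial runs
    s = 0
    while s != GOAL:
        # epsilon-greedy: usually exploit the best known action
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)   # stay inside the corridor
        r = 100 if s2 == GOAL else -1           # reward only at the goal
        # update toward reward plus discounted best future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned policy: the action judged best in each state
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)])

After enough trial runs, the greedy policy moves right everywhere before the goal, which is exactly the "sequence of correct actions" the slide describes.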
The Brain and the Neuron

Four parts of a biological neuron:

• Dendrites − tree-like branches, responsible for receiving information from the other neurons the cell is connected to.
• Soma − the cell body of the neuron, responsible for processing the information received from the dendrites.
• Axon − works like a cable through which the neuron sends information.
• Synapses − the connections between the axon and other neurons' dendrites.
Biological Neural Network (BNN) vs Artificial Neural Network (ANN)

• Soma → Node
• Dendrites → Inputs
• Synapse → Weights or interconnections
• Axon → Output
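
A minimal sketch of one artificial neuron mirroring this mapping, with illustrative numbers: the inputs play the role of dendrites, the weights the synapses, the weighted sum and activation the soma, and the returned value the axon's output.

import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum plus a sigmoid activation."""
    z = np.dot(weights, inputs) + bias   # soma: aggregate incoming signals
    return 1.0 / (1.0 + np.exp(-z))      # axon: transmit the activation

x = np.array([0.5, -1.2, 3.0])   # dendrites: incoming signals
w = np.array([0.4, 0.7, -0.2])   # synapses: connection strengths
print(neuron(x, w, bias=0.1))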
DESIGNING A LEARNING SYSTEM

1. Choosing the Training Experience
2. Choosing the Target Function
3. Choosing a Representation for the Target Function
4. Estimating training values
5. Adjusting the weights
Choosing the Training Experience

• Choose the type of training experience from which our system will learn.
• For example, in learning to play checkers:
• direct training examples, consisting of individual checkers board states and the correct move for each;
• indirect information, consisting of the move sequences and final outcomes of various games played.
– In this latter case, information about the correctness of specific moves early in the game must be inferred indirectly from the fact that the game was eventually won or lost.

Learning from direct training feedback is typically easier than learning from indirect feedback.
Attributes of the Training Experience

• The degree to which the learner controls the sequence of training examples.
• How well it represents the distribution of examples over which the final system performance P must be measured.
A checkers learning problem
• Task T: playing checkers
• Performance measure P: percent of games won in the world
tournament
• Training experience E: games played against itself
• In order to complete the design of the learning system, we must now choose:
1. the exact type of knowledge to be learned,
2. a representation for this target knowledge, and
3. a learning mechanism.
Choosing the Target Function
• To determine exactly what type of knowledge will be learned and how this will be used by the performance program.
• If b is a final board state that is won, then V(b) = 100.
• If b is a final board state that is lost, then V(b) = -100.
• If b is a final board state that is drawn, then V(b) = 0.
• If b is not a final state in the game, then V(b) = V(b'), where b' is the best final board state that can be achieved starting from b and playing optimally until the end of the game (assuming the opponent plays optimally as well).
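
A minimal sketch of this recursive definition of V, where is_final, outcome, and successors are hypothetical helpers a real checkers program would have to supply; the alternation between max and min is one way to encode "playing optimally" for both sides.

def V(b, our_move=True):
    """Ideal target value of board state b, per the definition above.

    is_final(b), outcome(b), and successors(b) are assumed helpers,
    not defined here.
    """
    if is_final(b):
        # Terminal values from the slide: won 100, lost -100, drawn 0
        return {'won': 100, 'lost': -100, 'drawn': 0}[outcome(b)]
    # Non-final state: value of the best reachable final state under
    # optimal play by both sides (a plain minimax recursion)
    children = [V(b2, not our_move) for b2 in successors(b)]
    return max(children) if our_move else min(children)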
