module-4

The document discusses various concepts in machine learning, focusing on regression trees, Bayesian learning, and artificial neural networks (ANNs). It explains the principles of Bayes' theorem, types of learning models, and the structure and functionality of biological and artificial neurons. Additionally, it outlines different types of ANNs, their applications, and key innovations in neural network design.


MACHINE LEARNING

Module-IV

Prof. Ravindra Patil
Dept. of CSE, KLS V.D.I.T., Haliyal
• 6.2.4 Regression Trees
• Regression trees are a variant of decision trees where the target feature is a continuous-valued variable.
• These trees can be constructed using standard deviation to choose the best splitting attribute.
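As a minimal sketch of the splitting criterion described above, the attribute chosen is the one whose split most reduces the standard deviation of the target values. The dataset and split groups below are made-up toy numbers, purely for illustration:

```python
import statistics

def sd_reduction(target, split_groups):
    """Standard deviation reduction achieved by splitting `target`
    into the groups produced by a candidate attribute."""
    n = len(target)
    before = statistics.pstdev(target)
    # Weighted average of each group's standard deviation after the split
    after = sum(len(g) / n * statistics.pstdev(g) for g in split_groups)
    return before - after

# Toy example: exam scores split by a hypothetical binary attribute
scores = [40, 45, 60, 65, 80, 85]
low = [40, 45, 60]    # rows where the attribute value is "low"
high = [65, 80, 85]   # rows where the attribute value is "high"
print(round(sd_reduction(scores, [low, high]), 2))
```

The candidate attribute with the largest reduction would be selected as the splitting attribute at that node.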
• Bayesian Learning deals with reasoning
in uncertain domains.
• It uses probability to represent and reason
about knowledge.
• Bayes' theorem is applied to infer unknown
parameters of models.
• Useful in applications like game theory,
medicine, and diagnosis.
• Introduction to Probability-Based Learning:
• Probability-based learning combines prior knowledge or
probabilities with observed data.
• It uses probability theory to model randomness,
uncertainty, and noise in predicting future events.
• This method is ideal for large datasets and employs Bayes'
rule to infer unknown values and learn from data.
• There are two types of models:
• Probabilistic model: Involves randomness and provides a
probability distribution of outcomes.
• Deterministic model: Has no randomness and always gives
the same result under the same conditions.
• Bayesian learning is a subset of probabilistic learning, relying on subjective probabilities (based on belief or prior experience rather than on observed frequencies alone).
• Two key Bayesian algorithms:
• Naïve Bayes Learning
• Bayesian Belief Network (BBN)
• These algorithms use prior probabilities and Bayes'
rule to infer useful insights.
• FUNDAMENTALS OF BAYES' THEOREM:
• The Naïve Bayes model relies on Bayes' theorem, which works with three kinds of probabilities:
• Prior Probability:
• The initial belief about an event before seeing any
evidence.
• Represents the general probability of an event occurring.
• Likelihood Probability:
• Likelihood Probability helps us understand how likely it
is to see a certain observation if we assume a particular
hypothesis is true.
• It is written as:
P(Evidence | Hypothesis)
This means: "The probability of seeing the evidence,
given that a certain hypothesis is true."
• Posterior Probability
• It is the updated or revised probability of an event
taking into account the observations from the
training data. P(Hypothesis | Evidence) is the
posterior distribution representing the belief about
the hypothesis, given the evidence from the
training data. Therefore,
• Posterior probability ∝ likelihood × prior probability, normalized by the probability of the evidence:
P(Hypothesis | Evidence) = P(Evidence | Hypothesis) × P(Hypothesis) / P(Evidence)
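The update from prior to posterior can be worked through with a small numeric example. The medical-test numbers below are hypothetical, chosen only to make the arithmetic concrete:

```python
# Hypothetical medical-test probabilities (illustrative only)
prior = 0.01        # P(Disease): prior probability of having the disease
likelihood = 0.90   # P(Positive | Disease): test sensitivity
false_pos = 0.05    # P(Positive | No Disease): false positive rate

# Total probability of the evidence: P(Positive)
evidence = likelihood * prior + false_pos * (1 - prior)

# Bayes' theorem: P(Disease | Positive)
posterior = likelihood * prior / evidence
print(round(posterior, 3))  # ≈ 0.154
```

Even with a fairly accurate test, the posterior stays low because the prior is low, which is exactly the kind of reasoning Bayes' theorem captures.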
• Classification Using Bayes Model:
• Naïve Bayes classification relies on Bayes' theorem to classify data. It calculates the posterior probability of a hypothesis (i.e., a class) given some evidence (i.e., the test data), using:
P(Class | Data) = P(Data | Class) × P(Class) / P(Data)
• Naïve Bayes Algorithm
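A minimal sketch of the Naïve Bayes algorithm follows, using a tiny hand-made dataset (the feature values and labels are hypothetical). It estimates priors and likelihoods by counting, applies add-one smoothing to avoid zero probabilities, and picks the class with the highest posterior score:

```python
from collections import Counter, defaultdict

# Tiny made-up dataset: each row is ((feature values), class label)
data = [
    (("sunny", "hot"), "no"),
    (("sunny", "mild"), "no"),
    (("rainy", "mild"), "yes"),
    (("rainy", "cool"), "yes"),
    (("sunny", "cool"), "yes"),
]

class_counts = Counter(label for _, label in data)
# feature_counts[label][position][value] = occurrences of value in that class
feature_counts = defaultdict(lambda: defaultdict(Counter))
for features, label in data:
    for i, value in enumerate(features):
        feature_counts[label][i][value] += 1

def posterior_score(features, label):
    """Unnormalised posterior: prior x product of per-feature likelihoods,
    with add-one smoothing (assumes ~2 values per feature, a sketch)."""
    score = class_counts[label] / len(data)
    for i, value in enumerate(features):
        score *= (feature_counts[label][i][value] + 1) / (class_counts[label] + 2)
    return score

test_row = ("rainy", "mild")
prediction = max(class_counts, key=lambda c: posterior_score(test_row, c))
print(prediction)  # prints "yes"
```

Because the denominator P(Data) is the same for every class, comparing the unnormalised scores is enough to pick the most probable class.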
• Artificial Neural Networks
• Artificial Neural Networks (ANNs) mimic how the human brain learns
and behaves.
• The brain has neurons connected in a network (similar to a directed
graph), where each neuron processes and transmits information.
• ANNs simulate this mechanism using nodes (computing units) capable of
complex calculations.
• These networks solve non-linear and complex problems by learning from
observations.
• They are foundational in Machine Learning and have inspired both
research and industry.
• Applications include:
• Natural Language Processing (NLP), pattern recognition, face/speech/character recognition, text processing, stock prediction, computer vision
• ANNs are also used in:
• Chemical industry, medicine, robotics, banking, communications, marketing
• Neurons are functional nerve cells that make up this learning system.
• Neurons help us understand, remember, recognize, and correlate information.
The nervous system is divided into:
• CNS (Central Nervous System): Includes the brain and spinal
cord.
• PNS (Peripheral Nervous System): Includes neurons outside the
CNS.
• Types of neurons:
• Sensory neurons: Carry information from body parts to the CNS.
• Motor neurons: Transmit commands from CNS to the body.
• Interneurons: Connect one neuron to another within the CNS.
Functionality of a neuron:
• Receives information,
• Processes it,
• Transmits it to another neuron or to a body part.
BIOLOGICAL NEURONS :
• A biological neuron consists of:
• Dendrites: Receive input information.
• Soma (Cell body): Processes input from dendrites.
• Axon: Transmits processed signals to other neurons.
• Synapse: Junction where signal is transmitted to another
neuron.
Key Concepts:
• A neuron fires if the input exceeds a threshold value.
• Firing involves electrical impulses (spikes) that travel
across the synapse.
• A neuron can be connected to ~10,000 neurons via axons.
• Neurons form a network that receives input, processes it,
and produces a response.
• Figure 10.1 illustrates:
• Input entering via dendrites,
• Passing through the cell body and axon,
• Exiting via synapses to other neurons.
• ARTIFICIAL NEURONS
• Figure 10.2 explains how artificial neurons (also called nodes) are modeled after biological neurons.
• Simple Model of an Artificial Neuron: the McCulloch & Pitts model, the first mathematical model of a biological neuron, introduced in 1943. In this model, the neuron sums its binary inputs and fires (outputs 1) only if the sum reaches a fixed threshold.
• Limitation of the model:
• The model can only represent a limited set of
Boolean functions.
• Boolean functions use binary inputs (0 or 1) and
produce a binary output (0 or 1).
• Example implementations:
– AND gate: Neuron fires (outputs 1) only if all inputs
are 1.
– OR gate: Neuron fires (outputs 1) if at least one input
is 1.
• A major constraint: The weights and threshold
values are fixed, so the model cannot learn or
adapt.
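The AND and OR gate examples above can be sketched directly as a McCulloch-Pitts neuron with fixed unit weights and a fixed threshold:

```python
def mp_neuron(inputs, threshold):
    """McCulloch-Pitts neuron: fires (1) when the sum of binary inputs
    reaches the fixed threshold. Weights are fixed at 1 and cannot adapt."""
    return 1 if sum(inputs) >= threshold else 0

# AND gate: fires only when every input is 1 (threshold = number of inputs)
print([mp_neuron((a, b), 2) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]

# OR gate: fires when at least one input is 1 (threshold = 1)
print([mp_neuron((a, b), 1) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 1]
```

Because the weights and threshold are hard-coded rather than learned, each Boolean function needs its threshold chosen by hand, which is exactly the constraint noted above.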
• Artificial Neural Network Structure:
• What Are Activation Functions?

• Activation functions:
• Decide whether a neuron should "fire" (be
activated).
• Map input signals to output signals.
• Normalize outputs, typically between:
– 0 and 1, or
– -1 and +1.
• Can be linear or non-linear.
• Types of Activation Functions
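Common activation functions of the kinds described above (binary, bounded in (0, 1), bounded in (-1, +1), and piecewise linear) can be sketched directly from their standard definitions:

```python
import math

def step(x):
    """Binary threshold: fires (1) for x >= 0, else 0."""
    return 1.0 if x >= 0 else 0.0

def sigmoid(x):
    """Squashes any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """Squashes any input into (-1, +1)."""
    return math.tanh(x)

def relu(x):
    """Non-linear overall, but linear for x > 0; outputs 0 otherwise."""
    return max(0.0, x)

for f in (step, sigmoid, tanh, relu):
    print(f.__name__, round(f(-1.0), 3), round(f(0.0), 3), round(f(2.0), 3))
```

Sigmoid and tanh normalize outputs into the 0-to-1 and -1-to-+1 ranges mentioned above, while the step function gives the hard fire/no-fire decision.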
• Perceptron and Learning Theory :
• Invented by: Frank Rosenblatt in 1958.
• Type: Linear binary classifier used in supervised
learning.
• Based on:
– McCulloch-Pitts Neuron Model (basic neuron model)
– Hebbian Learning Rule (learning via weight adjustment)
• Key Innovations by Rosenblatt:
• Introduced variable weights
• Added an extra bias input
• Proposed that neurons could learn weights and
thresholds from data using a supervised learning
algorithm
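Rosenblatt's innovations (variable weights, a bias input, and supervised weight updates) can be sketched as the classic perceptron learning rule. The learning rate and epoch count below are arbitrary choices for illustration:

```python
def train_perceptron(samples, lr=0.1, epochs=20):
    """Train a two-input perceptron on (inputs, target) pairs, targets in {0, 1}."""
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + bias >= 0 else 0
            err = target - out
            # Rosenblatt update: nudge weights and bias toward the correct answer
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            bias += lr * err
    return w, bias

# OR gate data is linearly separable, so the perceptron can learn it
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data)
print([1 if w[0] * x1 + w[1] * x2 + b >= 0 else 0
       for (x1, x2), _ in or_data])  # [0, 1, 1, 1]
```

Unlike the McCulloch-Pitts neuron, nothing here is hand-set: the weights and the effective threshold (the bias) are learned from the labelled data.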
• TYPES OF ARTIFICIAL NEURAL
NETWORKS
• ANNs consist of multiple neurons arranged in
layers.
• There are different types of ANNs, differing in network structure, the activation functions involved, and the learning rules used.
• In an ANN, there are three layers called input
layer, hidden layer and output layer.
• Any general ANN would consist of one input layer,
one output layer and zero or more hidden layers.
• Feed Forward Neural Network (FFNN)
• Flow: One-way (forward).
• Complexity: Simple; no backpropagation.
• Use Cases: Simple classification, image processing.
• Variants:
– Single-layered
– Multi-layered (still no feedback)
• Multi-Layer Perceptron (MLP)
• Structure: Input → Hidden layer(s) → Output.
• Connections: Fully connected layers.
• Learning: Supports backpropagation.
• Use Cases: Deep learning tasks like:
– Speech recognition
– Medical diagnosis
– Forecasting
• Complexity: Higher than FFNN; slower but more capable.
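The Input → Hidden → Output structure above can be sketched as a single forward pass through fully connected layers. The weights and biases below are made-up numbers, not learned values:

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer with sigmoid activation:
    each unit takes a weighted sum of all inputs plus a bias."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

x = [0.5, -1.0]                                            # input layer (2 features)
hidden = layer(x, [[0.4, -0.6], [0.3, 0.8]], [0.1, -0.2])  # hidden layer (2 units)
output = layer(hidden, [[1.0, -1.0]], [0.0])               # output layer (1 unit)
print(round(output[0], 3))
```

In a trained MLP, the same structure is used, but the weights and biases would be set by backpropagation rather than chosen by hand.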
• Feedback Neural Network
• Structure: Has feedback connections.
• Flow: Bi-directional (includes loops).
• Dynamic: More adaptive during training.
• Use Cases: Tasks requiring memory or dynamic context.