
Neurons,

Neural Networks

V.TAGORE (EC21B071)
B.VINAY NAIK(EC21B017)
CH.SRIRAM(EC21B021)
Introduction
WHAT IS A NEURON?

Neurons are nerve cells that send messages all over your body, allowing you to do everything from breathing to talking, eating, walking, and thinking. Until recently, most neuroscientists (scientists who study the brain) thought we were born with all the neurons we would ever have.
How do our brains work?
• The brain is a massively parallel information processing system.

• Our brains are a huge network of processing elements. A typical brain contains a network of roughly 10 billion neurons.

• Neurons: The basic units of the brain, neurons are specialized cells that transmit information.

• Synapses: The junctions between neurons where communication occurs. When a signal (action potential) reaches the end of an axon, it triggers the release of neurotransmitters into the synaptic cleft, which bind to receptors on the receiving neuron's dendrites.
Signal Transmission

• Action Potentials: Neurons communicate through action potentials, which are rapid changes in electrical charge across the neuron's membrane. When a neuron is sufficiently stimulated, it depolarizes, creating an electrical impulse that travels along the axon.

• Neurotransmitter Release: When the action potential reaches the axon terminals, it causes vesicles filled with neurotransmitters to fuse with the membrane and release their contents into the synapse.

• Receptor Binding: Neurotransmitters bind to specific receptors on the postsynaptic neuron, which can either excite or inhibit that neuron, influencing whether it will fire its own action potential.
Learning and Memory

• Hebbian Learning: A principle often summarized as "cells that fire together wire together." When two neurons are repeatedly activated together, the synaptic connection between them strengthens, facilitating learning and memory formation.

• Long-Term Potentiation (LTP): A process that enhances the efficiency of synaptic transmission, believed to be crucial for learning and memory.
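The Hebbian principle can be sketched numerically: a connection weight grows in proportion to the product of the two neurons' activities. This is a minimal illustrative toy, not a biological model; the learning rate and activity values below are arbitrary choices.

```python
# Minimal sketch of Hebbian learning: the weight between two units grows
# when their activities coincide ("cells that fire together wire together").
# The learning rate (eta) and activity values below are illustrative only.

def hebbian_update(w, pre, post, eta=0.1):
    """One Hebbian step: strengthen w by eta * pre * post."""
    return w + eta * pre * post

w = 0.5
for _ in range(3):  # both units active together three times
    w = hebbian_update(w, pre=1.0, post=1.0)

print(round(w, 2))  # the connection has strengthened from 0.5 to 0.8
```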

In summary, the brain operates through a dynamic and intricate network of neurons that communicate via electrical and chemical signals. This complex interplay enables the brain to process information, learn, and adapt.
• A neuron is connected to other neurons through about 10,000 synapses.

• A neuron receives input from other neurons. Inputs are combined.

• The axon endings almost touch the dendrites or cell body of the next neuron.

• Transmission of an electrical signal from one neuron to the next is effected by neurotransmitters.
Dendrites: Input
Cell Nucleus: Processor
Synapse: Link
Axon: Output
FUNCTION OF DENDRITES
Receive Signals: Dendrites receive incoming signals
(neurotransmitters) from the axons of other neurons at
synapses (the junctions between neurons).
Integration of Inputs: These signals can be either excitatory
(promoting the neuron to fire) or inhibitory (preventing the
neuron from firing). The dendrites integrate all these inputs,
and if the cumulative signal is strong enough, it will influence
the neuron's overall response.
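This integration of excitatory and inhibitory inputs can be sketched as a simple sum compared against a threshold. This is an illustrative toy with made-up values, not a biophysical model:

```python
# Toy sketch of dendritic integration: excitatory inputs count positively,
# inhibitory inputs negatively; the neuron "fires" only if the summed
# input reaches a threshold. All values here are made up for illustration.

def fires(inputs, threshold=1.0):
    return sum(inputs) >= threshold

excitatory = [0.6, 0.8]   # promote firing
inhibitory = [-0.3]       # oppose firing

print(fires(excitatory + inhibitory))  # 0.6 + 0.8 - 0.3 = 1.1 >= 1.0 -> True
print(fires(inhibitory + [0.5]))       # 0.2 < 1.0 -> False
```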
FUNCTION OF CELL NUCLEUS
Stores Genetic Information: Holds DNA, which
contains instructions for the neuron's function.
Supports Protein Synthesis: Transcribes DNA into
mRNA, which is used to create proteins vital for the
neuron's structure and signaling.
Influences Neuronal Communication: Regulates
production of neurotransmitters and receptors,
affecting how neurons communicate and adapt.
FUNCTION OF SYNAPSE

Signal Transmission: Transfers electrical signals from one neuron to another or to target cells.
Neurotransmitter Release: Action potentials trigger the release of neurotransmitters into the synaptic cleft.
Integration of Signals: Postsynaptic neurons integrate multiple inputs to decide whether to generate an action potential.
FUNCTION OF AXON

Signal Transmission: Carries electrical impulses (action potentials) away from the cell body to other neurons, muscles, or glands.
Communication: Connects to axon terminals, which release neurotransmitters to transmit signals to target cells.
Maintenance of Neuron Health: Supports the transport of essential molecules and organelles along its length to maintain neuron function.
History of the Artificial Neural Networks
• The history of ANNs stems from the 1940s, the decade of the first electronic computer.

• However, the first important step took place in 1957, when Rosenblatt introduced the first concrete neural model, the perceptron. Rosenblatt also took part in constructing the first successful neurocomputer, the Mark I Perceptron. After this, the development of ANNs proceeded as described in the figure.
History of the Artificial Neural Networks

• The application area of MLP networks remained rather limited until the breakthrough in 1986, when a general backpropagation algorithm for the multi-layered perceptron was introduced by Rumelhart and McClelland.

• In 1982, Hopfield brought out his idea of a neural network. Unlike the neurons in an MLP, the Hopfield network consists of only one layer whose neurons are fully connected with each other.
• In 1988, Radial Basis Function (RBF) networks were first introduced by Broomhead & Lowe. Although the basic idea of RBF had been developed 30 years earlier under the name "method of potential functions," the work by Broomhead & Lowe opened a new frontier in the neural network community.

• In 1982, a totally unique kind of network model, the Self-Organizing Map (SOM), was introduced by Kohonen. The SOM is a kind of topological map which organizes itself based on the input patterns it is trained with. The SOM originated from the LVQ (Learning Vector Quantization) network, the underlying idea of which was also Kohonen's, in 1972.

Since then, research on artificial neural networks has remained active, leading to many new network types, as well as hybrid algorithms and hardware for neural information processing.
Why Artificial Neural Networks?
There are two basic reasons why we are interested in building artificial neural networks (ANNs):

• Technical viewpoint: Some problems, such as character recognition or the prediction of future states of a system, require massively parallel and adaptive processing.

• Biological viewpoint: ANNs can be used to replicate and simulate components of the human (or animal) brain, thereby giving us insight into natural information processing.
How do ANNs work?

An artificial neuron is an imitation of a human neuron.

When inputs are not equal, weights adjust the contribution of each input, allowing the network to prioritize certain features over others. This enables the network to learn complex patterns by optimizing the weights during training.

The signal is not passed down to the next neuron verbatim.
In a neural network, a *transfer function* (also known as an *activation function*) is a mathematical function applied to the weighted sum of inputs to determine the neuron's output. It introduces non-linearity, allowing the network to model complex relationships in the data.
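As a sketch, assuming a sigmoid as the transfer function (any non-linear choice would do), a single neuron's output can be computed like this; the weights and inputs are arbitrary example values:

```python
import math

# Sketch of a single artificial neuron: a weighted sum of inputs passed
# through a transfer (activation) function. The sigmoid used here is one
# common choice; the weights and inputs are arbitrary example values.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron_output(inputs, weights, bias=0.0):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

out = neuron_output([1.0, 2.0], [0.5, -0.25])  # weighted sum z = 0.0
print(out)  # sigmoid(0) = 0.5
```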
Introduction of the Perceptron

• The perceptron, first proposed by Rosenblatt (1958), is a simple neuron that is used to classify its input into one of two categories.

• A perceptron is a single processing unit of a neural network. A perceptron uses a step function that returns +1 if the weighted sum of its inputs is >= 0, and -1 otherwise.
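The step-function rule just described can be written directly. The weights and bias below are illustrative values, not learned ones:

```python
# The perceptron decision rule described above: output +1 if the weighted
# sum of the inputs is >= 0, and -1 otherwise. Weights and bias here are
# illustrative example values.

def perceptron(inputs, weights, bias):
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if s >= 0 else -1

print(perceptron([1, 0], [0.4, 0.4], -0.5))  # 0.4 - 0.5 < 0  -> -1
print(perceptron([1, 1], [0.4, 0.4], -0.5))  # 0.8 - 0.5 >= 0 -> +1
```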
The perceptron provides a simple model of how a neuron works, making it a great starting point for understanding neural networks. It introduces key ideas like weights, biases, and activation functions, which are critical in more advanced models.
PERCEPTRON

We use perceptrons because they are a fundamental concept in machine learning and serve as the foundation for more complex neural networks. Here are the main reasons why perceptrons are useful:

Binary Classification
Distinguishing between two classes, such as spam vs. non-spam emails. The output is typically 0 or 1, representing the class labels.

Linearly Separable Problems
The data can be divided into two classes using a straight line (or hyperplane in higher dimensions). Examples include simple logical functions like AND and OR.

Introduction to Learning Algorithms
The perceptron introduces basic learning principles, such as adjusting weights based on errors through a *learning rule*. This process of weight adjustment forms the basis for more complex learning algorithms, like backpropagation in deep learning.

Low Computational Complexity
Perceptrons are computationally simple and efficient. They are often used in situations where the computational cost is a concern, or where a simple, fast solution is needed.

Historical Significance
The perceptron has historical importance in the development of neural networks. It was one of the earliest algorithms for supervised learning and paved the way for modern machine learning methods.
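The learning rule mentioned above can be sketched on the AND function, a classic linearly separable problem. This is an illustrative toy using the 0/1 output convention; the learning rate and epoch count are arbitrary choices:

```python
# Sketch of the perceptron learning rule on the AND function (linearly
# separable): w <- w + eta * (target - prediction) * x, and likewise for
# the bias. The learning rate and epoch count are illustrative choices.

def predict(x, w, b):
    return 1 if sum(xi * wi for xi, wi in zip(x, w)) + b >= 0 else 0

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND truth table
w, b, eta = [0.0, 0.0], 0.0, 1.0

for _ in range(10):                      # a few passes over the data
    for x, target in data:
        error = target - predict(x, w, b)
        w = [wi + eta * error * xi for xi, wi in zip(x, w)]
        b += eta * error

print([predict(x, w, b) for x, _ in data])  # learned AND: [0, 0, 0, 1]
```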
The output is a function of the input that is affected by the weights and the transfer functions.
Advantages of Neural Network

Ability to Learn and Adapt
Neural networks can learn from data and improve their performance over time, making them ideal for complex, real-world problems.

Handling Non-Linear Relationships
Neural networks are effective at capturing complex, non-linear relationships between inputs and outputs, which traditional models may struggle with.

High Accuracy for Complex Tasks
They have demonstrated high accuracy in tasks like image recognition, language translation, and pattern recognition.
Advantages of Neural Network

Automation of Feature Extraction
Unlike traditional models that often require manual feature extraction, neural networks can automatically learn important features from raw data.

Parallel Processing Capability
Neural networks can process multiple inputs simultaneously, making them highly efficient when implemented on specialized hardware like GPUs.

Fault Tolerance
Neural networks are generally robust to noise in the input data and can still produce reasonable outputs, making them suitable for uncertain environments.
Limitations of Neural Network

Data Requirements
Neural networks typically require large amounts of labeled data for effective training, which can be difficult and costly to obtain.

Overfitting
They can easily overfit to training data, performing well on that data but poorly on unseen data, especially with complex models.

Computational Resources
Training deep neural networks can be resource-intensive, requiring significant computational power and time.
Limitations of Neural Network

Lack of Common Sense
They typically lack innate understanding or common sense reasoning, which can lead to unrealistic or nonsensical outputs.

Generalization
They may struggle to generalize well to new, unseen data if the training data lacks diversity or is biased.

Hyperparameter Sensitivity
Performance can heavily depend on the choice of hyperparameters (like learning rate, architecture, etc.), which often requires extensive tuning.
Applications of Neural Network

Image Recognition
• Identifying and classifying objects within images.
• Example: Google Photos uses neural networks for facial recognition and organizing photos.

Natural Language Processing (NLP)
• Understanding and generating human language.
• Example: ChatGPT and other conversational agents use neural networks for generating human-like text responses.

Speech Recognition
• Converting spoken language into text.
• Example: Virtual assistants like Siri and Google Assistant utilize neural networks for voice recognition.

Medical Diagnosis
• Analyzing medical images and patient data for disease detection.
• Example: IBM Watson Health uses neural networks to analyze medical images for cancer detection.
Applications of Neural Network

Autonomous Vehicles
• Enabling self-driving cars to perceive their environment.
• Example: Tesla's Autopilot system employs neural networks for object detection and decision-making.

Recommendation Systems
• Suggesting products or content based on user preferences.
• Example: Netflix and Amazon use neural networks to recommend movies and products, respectively.

Fraud Detection
• Identifying fraudulent transactions in financial systems.
• Example: Banks use neural networks to analyze transaction patterns and detect anomalies.

Time Series Forecasting
• Predicting future values based on historical data.
• Example: Businesses use neural networks to forecast sales, weather, and resource consumption.
Any Queries?

THANK YOU
For watching this presentation
