Swapna.C
Asst. Prof.
IT Department
SriDevi Women's Engineering College
ADVANCED TOPICS IN ARTIFICIAL
NEURAL NETWORKS
Alternative Error Functions:
 Adding a penalty term for weight magnitude. We can add a
term to E that increases with the magnitude of the weight
vector.
 This causes the gradient descent search to seek weight
vectors with small magnitudes, thereby reducing the risk of
overfitting. One way to do this is to redefine E as
E(w) = 1/2 Σ_d Σ_k (t_kd − o_kd)² + γ Σ_j,i (w_ji)²
 Gradient descent on this E is equivalent to weight decay:
each weight is multiplied by the constant (1 − 2γη) upon each
iteration, before the usual error-gradient update.
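As a toy sketch of the penalty term (plain Python, a single linear unit; the data, learning rate eta, and penalty constant gamma below are all hypothetical, not from the slides):

```python
# Sketch: batch gradient descent with a weight-decay penalty term.
# E(w) = 1/2 * sum_d (t_d - o_d)^2 + gamma * sum_i w_i^2
# The penalty's gradient is 2*gamma*w_i, so each iteration effectively
# multiplies the weight by (1 - 2*gamma*eta) before the error update.

def train_with_weight_decay(xs, ts, eta=0.05, gamma=0.0, epochs=200):
    """Train a single linear unit o = w0 + w1*x with weight decay."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        grad = [2 * gamma * w[0], 2 * gamma * w[1]]  # penalty gradient
        for x, t in zip(xs, ts):
            o = w[0] + w[1] * x
            err = t - o
            grad[0] -= err        # d/dw0 of 1/2*(t-o)^2
            grad[1] -= err * x    # d/dw1
        w = [wi - eta * g for wi, g in zip(w, grad)]
    return w

xs = [0.0, 1.0, 2.0, 3.0]
ts = [1.0, 3.0, 5.0, 7.0]                              # target: t = 1 + 2x
w_plain = train_with_weight_decay(xs, ts, gamma=0.0)   # fits roughly [1, 2]
w_decay = train_with_weight_decay(xs, ts, gamma=0.5)   # smaller-magnitude fit
print(w_plain, w_decay)
```

With the penalty active, the learned weight vector has a smaller magnitude than the unpenalized fit, at the cost of slightly larger training error.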
 Adding a term for errors in the slope, or derivative, of
the target function, when training information about the
desired derivatives is available.
 Minimizing the cross entropy of the network with respect
to the target values, which is appropriate when learning
probabilistic (0/1) target functions.
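A minimal sketch of the cross-entropy error for sigmoid output units (the targets and net inputs below are hypothetical):

```python
import math

# Sketch: cross-entropy error for sigmoid output units.
# E = -sum_d [ t_d * ln(o_d) + (1 - t_d) * ln(1 - o_d) ]
# For a sigmoid unit this pairs nicely with gradient descent: the
# derivative of E with respect to the unit's net input is just (o - t).

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def cross_entropy(targets, outputs):
    return -sum(t * math.log(o) + (1 - t) * math.log(1 - o)
                for t, o in zip(targets, outputs))

targets = [1.0, 0.0, 1.0]                            # hypothetical 0/1 targets
outputs = [sigmoid(2.0), sigmoid(-1.5), sigmoid(0.3)]
print(cross_entropy(targets, outputs))

# The error shrinks as the outputs approach their targets:
assert cross_entropy([1.0], [0.9]) < cross_entropy([1.0], [0.5])
```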
 Altering the effective error function can also be
accomplished by weight sharing, or "tying together"
weights associated with different units or inputs.
 The idea here is that different network weights are
forced to take on identical values, usually to enforce
some constraint known in advance to the human
designer.
 For example, in a network processing a time window of
a signal, the various units that receive input from different
portions of the time window can be forced to share
weights. The net effect is to constrain the space of
potential hypotheses, thereby reducing the risk of
overfitting and improving the chances of accurately
generalizing to unseen situations.
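One common way to realize weight sharing is to average the gradients of the tied weights, so weights that start equal stay identical after every update. A minimal sketch (the helper name and all values here are hypothetical):

```python
# Sketch: weight sharing via gradient averaging (shared_update is a
# hypothetical helper, not from the slides). Weights in a tied group
# receive the mean of their individual gradients, so tied weights that
# start equal remain identical throughout training.

def shared_update(weights, grads, tied_groups, eta=0.1):
    """One gradient step; gradients are averaged within each tied group."""
    avg = list(grads)
    for group in tied_groups:
        g = sum(grads[i] for i in group) / len(group)
        for i in group:
            avg[i] = g
    return [w - eta * g for w, g in zip(weights, avg)]

# Weights 0 and 2 are tied, e.g. the same connection applied to two
# positions of a time window; weight 1 remains free.
w = [0.5, -0.3, 0.5]
g = [0.2, 0.1, 0.4]                    # raw per-position gradients
w = shared_update(w, g, tied_groups=[(0, 2)])
print(w)                               # tied weights both get the mean 0.3
```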
Alternative Error Minimization
Procedures
 One optimization method, known as line search, takes a
different approach to choosing the distance of the weight
update: once a direction is chosen, the update distance is
found by searching along that line for the minimum of the
error function.
 A second method, which builds on the idea of line search,
is the conjugate gradient method. Here, a sequence of line
searches is performed to search for a minimum in the error
surface. On the first step in this sequence, the direction
chosen is the negative of the gradient. On each subsequent
step, a new direction is chosen so that the component of the
error gradient that has just been made zero remains zero.
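A bare-bones sketch of the line-search idea on a toy quadratic error surface (the surface, the grid-scan minimizer, and the step counts are hypothetical simplifications; a real implementation would use a more careful one-dimensional minimizer, and conjugate gradient would additionally adjust each new direction):

```python
# Sketch: steepest descent with a line search on a 2-D quadratic error
# surface. Instead of a fixed learning rate, each update moves along
# the chosen direction to the (approximate) minimum of E on that line.

def error(w):
    return 2 * w[0] ** 2 + w[1] ** 2          # toy error surface, min at (0, 0)

def gradient(w):
    return [4 * w[0], 2 * w[1]]

def line_search(w, d, steps=200, max_eta=2.0):
    """Scan step sizes along direction d and keep the best candidate."""
    best_eta, best_err = 0.0, error(w)
    for k in range(1, steps + 1):
        eta = max_eta * k / steps
        cand = [wi + eta * di for wi, di in zip(w, d)]
        e = error(cand)
        if e < best_err:
            best_eta, best_err = eta, e
    return [wi + best_eta * di for wi, di in zip(w, d)]

w = [1.0, 1.0]
for _ in range(5):
    d = [-g for g in gradient(w)]              # direction: negative gradient
    w = line_search(w, d)                      # distance: minimum along the line
print(w)                                       # close to the minimum at (0, 0)
```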
Recurrent Networks
 Recurrent networks are artificial neural networks that
apply to time-series data and that use the outputs of
network units at time t as inputs to other units at
time t + 1.
 A limitation of a purely feedforward network on such
data is that its prediction of y(t + 1) depends only on
x(t) and cannot capture possible dependencies of y(t + 1)
on earlier values of x; recurrent connections are added
precisely to capture this history.
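A minimal sketch of a single recurrent unit (all weights and inputs here are hypothetical), showing how the state at time t carries information about earlier inputs:

```python
import math

# Sketch: one recurrent hidden unit. The hidden activation at time t
# is fed back as an extra input at time t + 1, so the output can
# depend on the whole history of x, not just the current x(t).

def run_recurrent(xs, w_in=1.0, w_rec=0.5, w_out=1.0):
    """Feed the sequence xs through a single tanh recurrent unit."""
    h = 0.0                                   # hidden state, initially empty
    outputs = []
    for x in xs:
        h = math.tanh(w_in * x + w_rec * h)   # state carries the past
        outputs.append(w_out * h)
    return outputs

ys = run_recurrent([1.0, 0.0, 0.0])
print(ys)
# The input at t = 0 still influences the outputs at t = 1 and t = 2
# through the recurrent connection; a feedforward unit fed only x(t)
# would produce identical outputs for the two zero inputs.
```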
Dynamically Modifying Network
Structure
 One idea is to begin with a network containing no
hidden units, then grow the network as needed by
adding hidden units until the training error is
reduced to some acceptable level.
The CASCADE-CORRELATION algorithm (Fahlman
and Lebiere 1990) is one such algorithm.
CASCADE-CORRELATION begins by constructing a
network with no hidden units, then adds hidden
units one at a time.
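The growth loop (but not Fahlman and Lebiere's full candidate-training scheme) can be sketched as follows; the "hidden units" here are hypothetical fixed polynomial features, chosen only to make the loop itself concrete:

```python
# Sketch: grow a model until the training error is acceptable. Real
# CASCADE-CORRELATION trains candidate hidden units to correlate with
# the residual error; here each new "unit" is simply the next power
# of x, which keeps the growth loop easy to see.

def features(x, n_units):
    return [1.0] + [x ** (k + 1) for k in range(n_units)]

def train(xs, ts, n_units, eta=0.01, epochs=5000):
    """Batch gradient descent on the output weights for the current units."""
    w = [0.0] * (1 + n_units)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for x, t in zip(xs, ts):
            f = features(x, n_units)
            err = sum(wi * fi for wi, fi in zip(w, f)) - t
            for i, fi in enumerate(f):
                grad[i] += err * fi
        w = [wi - eta * g for wi, g in zip(w, grad)]
    return w

def sse(w, xs, ts, n_units):
    return sum((sum(wi * fi for wi, fi in zip(w, features(x, n_units))) - t) ** 2
               for x, t in zip(xs, ts))

xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
ts = [x * x for x in xs]               # target function: x^2

n_units = 0                            # start with no hidden units
w = train(xs, ts, n_units)
while sse(w, xs, ts, n_units) > 1e-3 and n_units < 4:
    n_units += 1                       # grow: add one more hidden unit
    w = train(xs, ts, n_units)
print(n_units)                         # 2 units suffice for this target
```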
 A second idea for dynamically altering network structure
is to take the opposite approach. Instead of beginning
with the simplest possible network and adding
complexity, we begin with a complex network and
prune it as we find that certain connections are
inessential.
 One way to decide whether a particular weight is
inessential is to see whether its value is close to zero. A
second way, which appears to be more successful in
practice, is to consider the effect that a small variation
in the weight has on the error E.
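Both pruning heuristics can be sketched on a toy error function (the function, thresholds, and weight values below are all hypothetical):

```python
# Sketch: two pruning heuristics. Magnitude pruning drops weights whose
# value is close to zero; sensitivity pruning drops weights whose small
# variation barely changes E (the more successful heuristic, per the
# text). All thresholds here are arbitrary.

def error(w):
    return (w[0] - 1.0) ** 2 + 0.0001 * w[1] ** 2 + w[2] ** 2 + (w[3] + 2.0) ** 2

def prune_by_magnitude(w, threshold=0.1):
    """Indices of weights whose value is close to zero."""
    return [i for i, wi in enumerate(w) if abs(wi) < threshold]

def prune_by_sensitivity(w, eps=0.01, tol=1e-5):
    """Indices of weights where a small variation barely changes E."""
    base = error(w)
    pruned = []
    for i in range(len(w)):
        probe = list(w)
        probe[i] += eps
        if abs(error(probe) - base) < tol:
            pruned.append(i)
    return pruned

w = [1.0, 0.9, 0.02, -2.0]        # hypothetical trained weights
print(prune_by_magnitude(w))       # [2]: w[2] is near zero
print(prune_by_sensitivity(w))     # [1]: E barely depends on w[1]
```

The two heuristics disagree here: w[2] is small in magnitude yet E is sensitive to it, while w[1] is large yet nearly irrelevant to E, which illustrates why the sensitivity criterion tends to work better in practice.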