Neural Networks to Solve the Traveling Salesman Problem
Roadmap
- Hopfield Neural Network
- Solving TSP using a Hopfield Network
- Modification of the Hopfield Neural Network
- Solving TSP using a Concurrent Neural Network
- Comparison between Neural Networks and SOM for solving TSP
Background: Neural Networks
- A computing device composed of processing elements called neurons
- Processing power comes from the interconnections between neurons
- Well-known models include the Hopfield network, backpropagation networks, the perceptron, and the Kohonen net
Associative Memory
- Produces, for any input pattern, a similar stored pattern
- Retrieval by part of the data
- Noisy input can also be recognized
(Figure: an original pattern, its degraded version, and the reconstruction)
Hopfield Network
- Recurrent network: feedback from output to input
- Fully connected: every neuron connected to every other neuron
Hopfield Network (contd.)
- Symmetric connections: the weight from unit i to unit j and the weight from unit j to unit i are identical, for all i and j
- No self-connections, so the weight matrix is symmetric with a zero diagonal
- Logic levels are +1 and -1
Computation
- For any neuron i at instant t, the input is Σ_{j=1..n, j≠i} w_ij σ_j(t), where σ_j(t) is the activation of the j-th neuron
- The threshold is θ = 0
- Activation update: σ_i(t+1) = sgn(Σ_{j=1..n, j≠i} w_ij σ_j(t)), where sgn(x) = +1 for x > 0 and sgn(x) = -1 for x < 0
Modes of Operation
- Synchronous: all neurons are updated simultaneously
- Asynchronous, simple: only one randomly selected unit is updated at each step
- Asynchronous, general: neurons update themselves independently and randomly, based on a probability distribution over time
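To make the update rule concrete, here is a minimal NumPy sketch of the simple asynchronous mode (the function name and the convention that sgn(0) maps to -1 are assumptions, not from the slides):

```python
import numpy as np

def hopfield_step(W, sigma, rng):
    """One simple-asynchronous step: update a single randomly chosen neuron.

    W     : (N, N) symmetric weight matrix with zero diagonal
    sigma : (N,) state vector with entries in {+1, -1}
    """
    i = rng.integers(len(sigma))           # pick one unit at random
    net = W[i] @ sigma                     # net input; W[i, i] = 0, so j != i is implicit
    sigma[i] = 1.0 if net > 0 else -1.0    # sgn with threshold theta = 0
    return sigma
```

Repeated calls with a generator such as rng = np.random.default_rng() drive the state toward a stable pattern.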
Stability
- The issue of stability arises because there is feedback in the Hopfield network
- The dynamics may lead to a fixed point, a limit cycle, or chaos
- Fixed point: a unique point attractor
- Limit cycle: the state space repeats itself in periodic cycles
- Chaotic: an aperiodic strange attractor
Procedure
- Store and stabilize the vector that has to be part of the memory
- Find weights w_ij, for all i, j, such that <σ_1, σ_2, σ_3, ..., σ_N> is stable in a Hopfield network of N neurons
Weight Learning
- Weights are given by w_ij = (1/(N-1)) σ_i σ_j
- 1/(N-1) is a normalizing factor
- The product σ_i σ_j derives from Hebb's rule: if two connected neurons are both ON, the connection weight sustains their mutual excitation; similarly, if two neurons inhibit each other, the connection sustains the mutual inhibition
Multiple Vectors
If multiple vectors need to be stored in memory, say
<σ_1^1, σ_2^1, σ_3^1, ..., σ_N^1>
<σ_1^2, σ_2^2, σ_3^2, ..., σ_N^2>
...
<σ_1^p, σ_2^p, σ_3^p, ..., σ_N^p>
then the weights are given by:
w_ij = (1/(N-1)) Σ_{m=1..p} σ_i^m σ_j^m
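A minimal sketch of this storage rule, assuming the p patterns are stored as ±1 rows of a NumPy array (the function name is illustrative):

```python
import numpy as np

def hebbian_weights(patterns):
    """w_ij = (1/(N-1)) * sum over m of sigma_i^m * sigma_j^m, with zero diagonal.

    patterns : (p, N) array of stored vectors with entries in {+1, -1}
    """
    _, N = patterns.shape
    W = patterns.T @ patterns / (N - 1)   # Hebbian sum over the p patterns
    np.fill_diagonal(W, 0.0)              # no self-connections
    return W
```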
Energy
- Energy is associated with the state of the system
- Some patterns need to be made stable; this corresponds to a minimum-energy state of the system
Energy Function
Energy at state σ' = <σ_1, σ_2, σ_3, ..., σ_N>:
E(σ') = -½ Σ_i Σ_{j≠i} w_ij σ_i σ_j
Let the p-th neuron change its state from σ_p^initial to σ_p^final, so
E_initial = -½ Σ_{j≠p} w_pj σ_p^initial σ_j + T
E_final = -½ Σ_{j≠p} w_pj σ_p^final σ_j + T
ΔE = E_final - E_initial, where T collects the terms independent of σ_p
Continued…
ΔE = -½ (σ_p^final - σ_p^initial) Σ_{j≠p} w_pj σ_j
i.e. ΔE = -½ Δσ_p Σ_{j≠p} w_pj σ_j
Thus: ΔE = -½ Δσ_p × netinput_p
If p changes from +1 to -1, then Δσ_p is negative and netinput_p is negative, and vice versa, so Δσ_p and netinput_p always share the same sign. Hence ΔE is always negative: the energy decreases whenever a neuron changes state.
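The monotone energy decrease can be checked numerically. The sketch below stores one pattern with the Hebbian rule, then asserts that asynchronous sign updates never raise the energy; the network size and random seed are arbitrary choices:

```python
import numpy as np

def energy(W, sigma):
    """E(sigma) = -1/2 * sum over i, j of w_ij sigma_i sigma_j (W has zero diagonal)."""
    return -0.5 * sigma @ W @ sigma

rng = np.random.default_rng(0)
N = 8
pattern = rng.choice([-1.0, 1.0], size=N)        # one stored pattern
W = np.outer(pattern, pattern) / (N - 1)         # Hebbian weights
np.fill_diagonal(W, 0.0)

sigma = rng.choice([-1.0, 1.0], size=N)          # random initial state
for _ in range(50):
    e_before = energy(W, sigma)
    i = rng.integers(N)                          # asynchronous: one unit at a time
    sigma[i] = 1.0 if W[i] @ sigma > 0 else -1.0
    assert energy(W, sigma) <= e_before + 1e-12  # Delta E <= 0 at every step
```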
Applications of Hopfield Nets
- Hopfield nets are applied to optimization problems
- Optimization problems maximize or minimize a function; in a Hopfield network, the energy gets minimized
Traveling Salesman Problem
Given a set of cities and the distances between them, determine the shortest closed path passing through all the cities exactly once.
Traveling Salesman Problem (contd.)
- One of the classic and most heavily researched problems in computer science
- The decision problem "Is there a tour with length less than k?" is NP-complete
- The optimization problem "What is the shortest tour?" is NP-hard
Hopfield Net for TSP
- N cities are represented by an N × N matrix of neurons
- Each row has exactly one 1, and each column has exactly one 1, so the matrix has exactly N 1's
- σ_kj = 1 if city k is in position j, and σ_kj = 0 otherwise
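As a small illustrative sketch (not part of the original slides), the permutation-matrix encoding can be built, validated, and decoded like this:

```python
import numpy as np

def is_valid_tour(S):
    """S is a valid tour matrix iff every row and every column has exactly one 1."""
    return bool((S.sum(axis=0) == 1).all() and (S.sum(axis=1) == 1).all())

def decode_tour(S):
    """sigma_kj = 1 means city k is at position j, so read cities column by column."""
    return [int(S[:, j].argmax()) for j in range(S.shape[1])]

tour = [2, 0, 3, 1]                  # city visited at positions 0..3
S = np.zeros((4, 4))
for j, k in enumerate(tour):
    S[k, j] = 1.0
assert is_valid_tour(S) and decode_tour(S) == tour
```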
Hopfield Net for TSP (contd.)
- For each element of the matrix, take a neuron and fully connect the assembly with symmetric weights
- The task is then to find a suitable energy function E
Determination of the Energy Function
- The energy function for TSP has four components, each enforcing one constraint
- Each city can have no more than one position, i.e. each row can have no more than one activated neuron:
E_1 = (A/2) Σ_k Σ_i Σ_{j≠i} σ_ki σ_kj, where A is a constant
Energy Function (Contd..)
- Each position contains no more than one city, i.e. each column contains no more than one activated neuron:
E_2 = (B/2) Σ_j Σ_k Σ_{r≠k} σ_kj σ_rj, where B is a constant
Energy Function (Contd..)
- There are exactly N entries in the output matrix, i.e. there are N 1's in the output matrix:
E_3 = (C/2) (N - Σ_k Σ_i σ_ki)², where C is a constant
Energy Function (cont..)
- The fourth term incorporates the requirement of the shortest path:
E_4 = (D/2) Σ_k Σ_{r≠k} Σ_j d_kr σ_kj (σ_{r(j+1)} + σ_{r(j-1)}), where d_kr is the distance between city k and city r
- E_total = E_1 + E_2 + E_3 + E_4
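A sketch that evaluates the four terms for a given 0/1 state matrix S and distance matrix d (with zero diagonal). The constants A, B, C, D are placeholder guesses, since the slides note below that suitable values must be tuned, and positions are treated cyclically, so j+1 and j-1 are taken mod N:

```python
import numpy as np

def tsp_energy(S, d, A=500.0, B=500.0, C=200.0, D=500.0):
    """E_total = E1 + E2 + E3 + E4 for state matrix S (rows: cities, columns: positions)."""
    N = S.shape[0]
    row = S.sum(axis=1)                                  # activations per city
    col = S.sum(axis=0)                                  # activations per position
    E1 = A / 2 * float((row**2).sum() - (S**2).sum())    # at most one 1 per row
    E2 = B / 2 * float((col**2).sum() - (S**2).sum())    # at most one 1 per column
    E3 = C / 2 * float((N - S.sum()) ** 2)               # exactly N ones in total
    nxt = np.roll(S, -1, axis=1)                         # sigma_{r, j+1} (cyclic)
    prv = np.roll(S, 1, axis=1)                          # sigma_{r, j-1} (cyclic)
    E4 = D / 2 * float((d * (S @ (nxt + prv).T)).sum())  # tour-length term
    return E1 + E2 + E3 + E4
```

For a valid tour matrix, E1 = E2 = E3 = 0 and E4 reduces to D times the tour length, so minimizing E_total favors short, legitimate tours.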
Energy Function (cont..)
- The energy can equivalently be written as E = -½ Σ_ki Σ_rj w_(ki)(rj) σ_ki σ_rj
- σ_ki: city k at position i; σ_rj: city r at position j
- Output function: σ_ki = ½ (1 + tanh(u_ki/u_0)), where u_0 is a constant and u_ki is the net input
Weight Values
Comparing the above equations with the energy function obtained previously gives
W_(ki)(rj) = -A δ_kr (1 - δ_ij) - B δ_ij (1 - δ_kr) - C - D d_kr (δ_{j(i+1)} + δ_{j(i-1)})
Kronecker symbol: δ_kr = 1 when k = r, and δ_kr = 0 when k ≠ r.
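The weight tensor can be written out directly from this formula. The sketch below uses cyclic position indices and illustrative constants; the four-index layout W[k, i, r, j] is our choice of representation:

```python
import numpy as np

def tsp_weights(d, A=500.0, B=500.0, C=200.0, D=500.0):
    """Weights between neuron (city k, position i) and neuron (city r, position j)."""
    N = d.shape[0]
    W = np.zeros((N, N, N, N))
    for k in range(N):
        for i in range(N):
            for r in range(N):
                for j in range(N):
                    d_ij = 1.0 if i == j else 0.0                 # Kronecker delta_ij
                    d_kr = 1.0 if k == r else 0.0                 # Kronecker delta_kr
                    adj = 1.0 if j in ((i + 1) % N, (i - 1) % N) else 0.0
                    W[k, i, r, j] = (-A * d_kr * (1 - d_ij)   # one position per city
                                     - B * d_ij * (1 - d_kr)  # one city per position
                                     - C                      # global bias term
                                     - D * d[k, r] * adj)     # distance penalty
    return W
```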
Observation
The choice of the constants A, B, C, and D that provide a good solution trades off between:
- always obtaining legitimate tours (D small relative to A, B, and C)
- giving heavier weight to the distances (D large relative to A, B, and C)
Observation (cont..)
- Local minima: the energy function is full of dips, valleys, and local minima
- Speed: fast, owing to the rapid computational capacity of the network
Concurrent Neural Network
- Proposed by N. Toomarian in 1988
- Requires N log(N) neurons to compute a TSP of N cities (compared with N² for the Hopfield encoding)
- Has a much higher probability of reaching a valid tour
Objective Function
The aim is to minimize the distance between city k at position i and city r at position i+1:
E_i = Σ_{k≠r} Σ_r Σ_i δ_ki δ_r(i+1) d_kr
where δ is the Kronecker symbol
Cont…
E_i = (1/N²) Σ_{k≠r} Σ_r Σ_i d_kr Π_{i=1..ln(N)} [1 + (2ν_i - 1) σ_ki] [1 + (2μ_i - 1) σ_ri]
where (2μ_i - 1) = (2ν_i - 1) [1 - Π_{j=1..i-1} ν_j]
Also, to ensure that two cities do not occupy the same position:
E_error = Σ_{k≠r} Σ_r δ_kr
Solution
- E_error has the value 0 for any valid tour, so we have a constrained optimization problem to solve
- E = E_i + λ E_error, where λ is the Lagrange multiplier, to be calculated from the solution
Minimization of the Energy Function
- Minimize the energy function, which is expressed in terms of σ_ki
- The algorithm is an iterative procedure of the kind usually used to minimize quadratic functions
- The iteration steps are carried out in the direction of steepest descent with respect to the energy function E
Minimization of the Energy Function (contd.)
Differentiating the energy:
dU_ki/dt = -∂E/∂σ_ki = -∂E_i/∂σ_ki - λ ∂E_error/∂σ_ki
dλ/dt = ±∂E/∂λ = ±E_error
σ_ki = tanh(α U_ki), where α is a constant
Implementation
- The initial input matrix and the value of λ are randomly selected
- At each iteration, new values of σ_ki and λ are calculated in the direction of steepest descent of the energy function
- Iterations stop either when convergence is achieved or when the number of iterations exceeds a user-specified limit
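Schematically, one discretized iteration then looks as follows. The gradients grad_E_i and grad_E_err and the violation E_err are assumed to be supplied for the problem at hand, and the learning rate lr is an illustrative discretization of the continuous-time dynamics:

```python
import numpy as np

def concurrent_step(U, lam, grad_E_i, grad_E_err, E_err, alpha=1.0, lr=0.01):
    """One steepest-descent step for E = E_i + lam * E_error (schematic sketch)."""
    sigma = np.tanh(alpha * U)                                 # sigma_ki = tanh(alpha U_ki)
    U = U - lr * (grad_E_i(sigma) + lam * grad_E_err(sigma))   # dU/dt = -dE/dsigma
    lam = lam + lr * E_err(sigma)                              # dlam/dt = +/- E_error
    return U, lam

# Toy usage on a simple constrained quadratic (purely illustrative):
# minimize sum(sigma^2) subject to sum(sigma) - 1 = 0
U, lam = np.zeros(4), 0.0
for _ in range(2000):
    U, lam = concurrent_step(U, lam,
                             grad_E_i=lambda s: 2 * s,
                             grad_E_err=lambda s: np.ones_like(s),
                             E_err=lambda s: s.sum() - 1.0)
```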
Comparison: Hopfield vs. Concurrent NN
- The concurrent network converges faster than the Hopfield network
- Its probability of achieving a valid tour is higher than the Hopfield network's
- The Hopfield network has no systematic way to determine the constants A, B, C, and D
Comparison: SOM vs. Concurrent NN
- The data set consists of 52 cities in Germany and a 15-city subset
- Both algorithms were run 80 times on the 15-city data set
- The 52-city data set could be analyzed only with SOM; the concurrent neural network failed on it
Results
- The concurrent neural network always converged and never missed a city, whereas SOM can miss cities
- The concurrent neural network is very erratic in behavior, whereas SOM detects every link of the shortest path more reliably
- Overall, the concurrent neural network performed poorly compared to SOM
Shortest Path Generated
(Figure: shortest tours found by the concurrent neural network, 2127 km, and by self-organizing maps, 1311 km)
Behavior in Terms of Probability
(Figure: probability behavior of the concurrent neural network and of self-organizing maps)
Conclusion
- The Hopfield network can also be used for optimization problems
- The concurrent neural network performs better than the Hopfield network and uses fewer neurons
- Both the concurrent and the Hopfield neural network are less efficient than SOM for solving TSP
References
- N. K. Bose and P. Liang, "Neural Network Fundamentals with Graphs, Algorithms and Applications", Tata McGraw-Hill, 1996.
- P. D. Wasserman, "Neural Computing: Theory and Practice", Van Nostrand Reinhold Co., 1989.
- N. Toomarian, "A Concurrent Neural Network Algorithm for the Traveling Salesman Problem", Proceedings of the Third Conference on Hypercube Concurrent Computers and Applications, ACM, pp. 1483-1490, 1988.
- R. Reilly, "Neural Network Approach to Solving the Traveling Salesman Problem", Journal of Computing Sciences in Colleges, pp. 41-61, October 2003.
- Wolfram Research Inc., "Tutorial on Neural Networks", http://documents.wolfram.com/applications/neuralnetworks/NeuralNetworkTheory/2.7.0.html, 2004.
- P. Bhattacharyya, "Introduction to Computing with Neural Nets", http://www.cse.iitb.ac.in/~pb/Teaching.html.
 
NP-complete vs. NP-hard
When the decision version of a combinatorial optimization problem is proved to belong to the class of NP-complete problems, which includes well-known problems such as satisfiability, the traveling salesman problem, and the bin packing problem, then the optimization version is NP-hard.
NP-complete vs. NP-hard (contd.)
- "Is there a tour with length less than k?" is NP-complete: it is easy to check whether a proposed certificate has length less than k
- The optimization problem "What is the shortest tour?" is NP-hard, since there is no easy way to verify that a certificate is the shortest
Path Lengths
(Figure: path lengths found by the concurrent neural network and by self-organizing maps)
