This PPT presents the entire content in brief. It can be read together with my book on ANN, titled "SOFT COMPUTING", published by Watson Publication.
The document provides an overview of perceptrons and neural networks. It discusses how neural networks are modeled after the human brain and consist of interconnected artificial neurons. The key aspects covered include the McCulloch-Pitts neuron model, Rosenblatt's perceptron, different types of learning (supervised, unsupervised, reinforcement), the backpropagation algorithm, and applications of neural networks such as pattern recognition and machine translation.
- An artificial neural network (ANN) is a computational model inspired by biological neural networks in the brain. ANNs contain interconnected nodes that can learn relationships and patterns from data through a process of training.
- The basic ANN architecture includes an input layer, hidden layers, and an output layer. Information flows from the input to the output layers through the hidden layers as the network learns.
- There are different types of ANNs that vary in their structure and learning methods, including multilayer perceptrons, convolutional neural networks, and recurrent neural networks. ANNs can perform tasks like face recognition, prediction, and classification through supervised, unsupervised, or reinforcement learning.
- While ANNs have advantages like fault tolerance…
Artificial neural networks mimic the human brain by using interconnected layers of neurons that fire electrical signals between each other. Activation functions are important for neural networks to learn complex patterns by introducing non-linearity. Without activation functions, neural networks would be limited to linear regression. Common activation functions include sigmoid, tanh, ReLU, and LeakyReLU, with ReLU and LeakyReLU helping to address issues like vanishing gradients that can occur with sigmoid and tanh functions.
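For concreteness, here is a minimal sketch of these activation functions in Python (standard textbook definitions; the 0.01 slope for LeakyReLU is a common but assumed choice, not taken from the document):

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))       # squashes to (0, 1); can saturate

    def tanh(z):
        return math.tanh(z)                      # squashes to (-1, 1); zero-centered

    def relu(z):
        return max(0.0, z)                       # cheap; does not saturate for z > 0

    def leaky_relu(z, slope=0.01):
        return z if z > 0 else slope * z         # keeps a small gradient for z < 0

    for z in (-2.0, 0.0, 2.0):
        print(z, sigmoid(z), tanh(z), relu(z), leaky_relu(z))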
Artificial neural network for machine learning – grinu
An Artificial Neural Network (ANN) is a computational model. It is based on the structure and functions of biological neural networks and works the way the human brain processes information. An ANN includes a large number of connected processing units that work together to process information and generate meaningful results from it.
This document presents information on Hopfield networks through a slideshow presentation. It begins with an introduction to Hopfield networks, describing them as fully connected, single layer neural networks that can perform pattern recognition. It then discusses the properties of Hopfield networks, including their symmetric weights and binary neuron outputs. The document proceeds to provide derivations of the Hopfield network model based on an additive neuron model. It concludes by discussing applications of Hopfield networks.
Pattern recognition and Machine Learning – Rohit Kumar
Machine learning involves using examples to generate a program or model that can classify new examples. It is useful for tasks like recognizing patterns, generating patterns, and predicting outcomes. Some common applications of machine learning include optical character recognition, biometrics, medical diagnosis, and information retrieval. The goal of machine learning is to build models that can recognize patterns in data and make predictions.
The document discusses various neural network learning rules:
1. Error correction learning rule (delta rule) adapts weights based on the error between the actual and desired output.
2. Memory-based learning stores all training examples and classifies new inputs based on similarity to nearby examples (e.g. k-nearest neighbors).
3. Hebbian learning increases weights of simultaneously active neuron connections and decreases others, allowing patterns to emerge from correlations in inputs over time.
4. Competitive learning (winner-take-all) adapts the weights of the neuron most active for a given input, allowing unsupervised clustering of similar inputs across neurons.
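As a rough illustration of the first and third rules above, the following sketch shows single-step weight updates; the learning rate and example vectors are assumed values, not from the document:

    # Single-step weight updates for two of the learning rules described above.

    def delta_rule_update(w, x, desired, actual, lr=0.1):
        # Error-correction (delta) rule: w <- w + lr * (desired - actual) * x
        return [wi + lr * (desired - actual) * xi for wi, xi in zip(w, x)]

    def hebbian_update(w, x, y, lr=0.1):
        # Hebbian rule: strengthen weights whose input and output are active together.
        return [wi + lr * y * xi for wi, xi in zip(w, x)]

    w = [0.0, 0.0]
    w = delta_rule_update(w, x=[1.0, 0.5], desired=1.0, actual=0.2)
    print(w)   # [0.08, 0.04]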
This document provides an overview of activation functions in deep learning. It discusses the purpose of activation functions, common types of activation functions like sigmoid, tanh, and ReLU, and issues like vanishing gradients that can occur with some activation functions. It explains that activation functions introduce non-linearity, allowing neural networks to learn complex patterns from data. The document also covers concepts like monotonicity, continuity, and differentiation properties that activation functions should have, as well as popular methods for updating weights during training like SGD, Adam, etc.
This document discusses neural networks and fuzzy logic. It explains that neural networks can learn from data and feedback but are viewed as "black boxes", while fuzzy logic models are easier to comprehend but do not come with a learning algorithm. It then describes how neuro-fuzzy systems combine these two approaches by using neural networks to construct fuzzy rule-based models or fuzzy partitions of the input space. Specifically, it outlines the Adaptive Network-based Fuzzy Inference System (ANFIS) architecture, which is functionally equivalent to fuzzy inference systems and can represent both Sugeno and Tsukamoto fuzzy models using a five-layer feedforward neural network structure.
Basic definitions, terminology, and the working of ANNs are explained. This PPT also shows how an ANN can be implemented in MATLAB, and it explains the feedforward backpropagation algorithm in detail.
Introduction to Adaptive Resonance Theory (ART) neural networks including:
Introduction (Stability-Plasticity Dilemma)
ART Network
ART Types
Basic ART network Architecture
ART Algorithm and Learning
ART Computational Example
ART Application
Conclusion
Main References
Introduction of Artificial Neural Network – Nagarajan
The document summarizes different types of artificial neural networks including their structure, learning paradigms, and learning rules. It discusses artificial neural networks (ANN), their advantages, and major learning paradigms - supervised, unsupervised, and reinforcement learning. It also explains different mathematical synaptic modification rules like backpropagation of error, correlative Hebbian, and temporally-asymmetric Hebbian learning rules. Specific learning rules discussed include the delta rule, the pattern associator, and the Hebb rule.
The document provides an introduction to artificial neural networks and their components. It discusses the basic neuron model, including the summation function, activation function, and bias. It also covers various neuron models based on different activation functions. The document introduces different network architectures, including single-layer feedforward networks, multilayer feedforward networks, and recurrent networks. It discusses perceptrons, ADALINE networks, and the backpropagation algorithm for training multilayer networks. The limitations of perceptrons for non-linearly separable problems are also covered.
This document discusses classifying handwritten digits using the MNIST dataset with a simple linear machine learning model. It begins by introducing the MNIST dataset of images and corresponding labels. It then discusses using a linear model with weights and biases to make predictions for each image. The weights represent a filter to distinguish digits. The model is trained using gradient descent to minimize the cross-entropy cost function by adjusting the weights and biases based on batches of training data. The goal is to improve the model's ability to correctly classify handwritten digit images.
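The description above can be made concrete with a small sketch; the random arrays stand in for an MNIST batch, and the batch size, learning rate, and iteration count are illustrative assumptions:

    import numpy as np

    # Toy stand-in for an MNIST batch: 4 "images" of 784 pixels and their labels.
    rng = np.random.default_rng(0)
    X = rng.random((4, 784))
    labels = np.array([3, 1, 4, 1])
    T = np.eye(10)[labels]                      # one-hot targets

    W = np.zeros((784, 10))                     # weights: one "filter" per digit
    b = np.zeros(10)                            # biases
    lr = 0.5

    for _ in range(100):                        # gradient descent on cross-entropy
        logits = X @ W + b
        logits -= logits.max(axis=1, keepdims=True)
        P = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax
        grad = P - T                            # gradient of cross-entropy w.r.t. logits
        W -= lr * X.T @ grad / len(X)
        b -= lr * grad.mean(axis=0)

    print((P.argmax(axis=1) == labels).mean())  # accuracy of the last forward pass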
The document discusses the perceptron, which is a single processing unit of a neural network that was first proposed by Rosenblatt in 1958. A perceptron uses a step function to classify its input into one of two categories, returning +1 if the weighted sum of inputs is greater than or equal to 0 and -1 otherwise. It operates as a linear threshold unit and can be used for binary classification of linearly separable data, though it cannot model nonlinear functions like XOR. The document also outlines the single layer perceptron learning algorithm.
Part 1 of the Deep Learning Fundamentals Series, this session discusses the use cases and scenarios surrounding Deep Learning and AI; reviews the fundamentals of artificial neural networks (ANNs) and perceptrons; discuss the basics around optimization beginning with the cost function, gradient descent, and backpropagation; and activation functions (including Sigmoid, TanH, and ReLU). The demos included in these slides are running on Keras with TensorFlow backend on Databricks.
The document discusses artificial neural networks. It describes their basic structure and components, including dendrites that receive input signals, a soma that processes the inputs, and an axon that transmits output signals. It also explains how neurons are connected at synapses to transfer signals between neurons. Finally, it mentions different types of activation functions that can be used in neural networks.
The document discusses neural networks, including human neural networks and artificial neural networks (ANNs). It provides details on the key components of ANNs, such as the perceptron and backpropagation algorithm. ANNs are inspired by biological neural systems and are used for applications like pattern recognition, time series prediction, and control systems. The document also outlines some current uses of neural networks in areas like signal processing, anomaly detection, and soft sensors.
A comprehensive tutorial on Convolutional Neural Networks (CNN) which talks about the motivation behind CNNs and Deep Learning in general, followed by a description of the various components involved in a typical CNN layer. It explains the theory involved with the different variants used in practice and also, gives a big picture of the whole network by putting everything together.
Next, there's a discussion of the various state-of-the-art frameworks being used to implement CNNs to tackle real-world classification and regression problems.
Finally, the implementation of CNNs is demonstrated by implementing the paper 'Age and Gender Classification Using Convolutional Neural Networks' by Hassner (2015).
This document provides an outline for a course on neural networks and fuzzy systems. The course is divided into two parts, with the first 11 weeks covering neural networks topics like multi-layer feedforward networks, backpropagation, and gradient descent. The document explains that multi-layer networks are needed to solve nonlinear problems by dividing the problem space into smaller linear regions. It also provides notation for multi-layer networks and shows how backpropagation works to calculate weight updates for each layer.
This document discusses various regularization techniques for deep learning models. It defines regularization as any modification to a learning algorithm intended to reduce generalization error without affecting training error. It then describes several specific regularization methods, including weight decay, norm penalties, dataset augmentation, early stopping, dropout, adversarial training, and tangent propagation. The goal of regularization is to reduce overfitting and improve generalizability of deep learning models.
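As one concrete example of these methods, here is a minimal sketch of weight decay (an L2 penalty) folded into a gradient step; the learning rate and decay strength are assumed values:

    def l2_penalized_update(w, grad_loss, lr=0.1, weight_decay=0.01):
        # Weight decay adds weight_decay * w to the loss gradient,
        # shrinking the weights toward zero at every step.
        return [wi - lr * (gi + weight_decay * wi) for wi, gi in zip(w, grad_loss)]

    w = [1.0, -2.0, 0.5]
    print(l2_penalized_update(w, grad_loss=[0.2, -0.1, 0.0]))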
AlexNet (ImageNet Classification with Deep Convolutional Neural Networks) – UMBC
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
The document provides an overview of artificial neural networks (ANN). It discusses how ANN are constructed to model the human brain and can perform tasks like pattern matching and classification. The key points are:
- ANN consist of interconnected nodes that operate in parallel, and connections between nodes are associated with weights. Each node receives weighted inputs and its activation level is calculated.
- Early models include the McCulloch-Pitts neuron model and Hebb network. Learning can be supervised, unsupervised, or reinforcement. Common activation functions and learning rules like backpropagation and Hebbian learning are described.
- Terminology includes weights, bias, thresholds, learning rates, and more. Different network architectures like feedforward and recurrent networks are also described.
- Artificial Neural Networks (ANN) are constructed to model the human brain and perform tasks like pattern matching, classification, and data analysis that are difficult for traditional computers.
- ANN consist of large numbers of connected processing elements called neurons that operate in parallel. Neurons are connected by links with weights containing input signal information. Each neuron has an activation level determined by the weighted inputs it receives.
- There are different types of ANN based on their connections - feedforward, recurrent, single/multi-layer. Learning can be supervised, unsupervised, or reinforcement based. Common activation functions include sigmoid, step, and identity functions.
This document provides an overview of artificial neural networks. It begins with definitions of artificial neural networks and how they are analogous to biological neural networks. It then discusses the basic structure of artificial neural networks, including different types of networks like feedforward, recurrent, and convolutional networks. Key concepts in artificial neural networks like neurons, weights, forward/backward propagation, and overfitting/underfitting are also explained. The document concludes with limitations of neural networks and references.
Neural networks of artificial intelligence – alldesign
An artificial neural network (ANN) is a machine learning approach that models the human brain. It consists of artificial neurons that are connected in a network. Each neuron receives inputs, performs calculations, and outputs a value. ANNs can be trained to learn patterns from data through examples to perform tasks like classification, prediction, clustering, and association. Common ANN architectures include multilayer perceptrons, convolutional neural networks, and recurrent neural networks.
This document discusses artificial neural networks. It defines neural networks as computational models inspired by the human brain that are used for tasks like classification, clustering, and pattern recognition. The key points are:
- Neural networks contain interconnected artificial neurons that can perform complex computations. They are inspired by biological neurons in the brain.
- Common neural network types are feedforward networks, where data flows from input to output, and recurrent networks, which contain feedback loops.
- Neural networks are trained using algorithms like backpropagation that minimize error by adjusting synaptic weights between neurons.
- Neural networks have many applications including voice recognition, image recognition, robotics and more due to their ability to learn from large amounts of data.
Artificial neural networks (ANNs) are computational models inspired by biological neural networks in the human brain. ANNs contain artificial neurons that are interconnected in layers and transmit signals to one another. The connections between neurons are associated with weights that are adjusted during training to produce the desired output. ANNs can learn complex patterns and relationships through a process of trial and error. They are widely used for tasks like pattern recognition, classification, prediction, and data clustering.
Artificial neural networks (ANNs) are computing systems inspired by biological neural networks. ANNs consist of interconnected nodes that operate in parallel to solve problems. The document discusses ANN components like neurons and weights, compares ANNs to biological neural networks, and outlines ANN architectures, learning methods, applications, and more. It provides an overview of ANNs and their relationship to the human brain.
This document provides an overview of neural networks and fuzzy systems. It outlines a course on the topic, which is divided into two parts: neural networks and fuzzy systems. For neural networks, it covers fundamental concepts of artificial neural networks including single and multi-layer feedforward networks, feedback networks, and unsupervised learning. It also discusses the biological neuron, typical neural network architectures, learning techniques such as backpropagation, and applications of neural networks. Popular activation functions like sigmoid, tanh, and ReLU are also explained.
- An artificial neural network (ANN) is a computational model inspired by biological neural networks in the brain. ANNs contain interconnected nodes that can learn relationships and patterns from data using a process similar to biological learning.
- The basic ANN architecture consists of an input layer, hidden layers, and an output layer. Information flows from the input to output layers through the hidden layers as the network learns.
- There are different types of ANNs that vary in their structure and learning methods, including multilayer perceptrons, convolutional neural networks, and recurrent neural networks. ANNs can perform tasks using supervised, unsupervised, or reinforcement learning.
- ANNs have many applications including face recognition, ridesharing, handwriting recognition, and more.
The document discusses different types of machine learning paradigms including supervised learning, unsupervised learning, and reinforcement learning. It then provides details on artificial neural networks, describing them as consisting of simple processing units that communicate through weighted connections, similar to neurons in the human brain. The document outlines key aspects of artificial neural networks like processing units, connections between units, propagation rules, and learning methods.
We need to layer the technology onto existing workflows
Follow the teachers who inspire you, because that instills passion, curiosity, and lifelong learning.
You can benefit from generative AI even when its intelligence is worse, because of the potential for cost and time savings in low-cost-of-error environments.
Bot tutors are already yielding effective results on learning and mastery.
GenAI may increase the digital divide: its gains may accrue disproportionately to those who already have domain expertise.
GenAI can be used for Coding
Complex structures
Make the content
Manage the content
Solutions to complex numerical problems
Lesson plan
Assignment
Quiz
Question bank
Report & summary of content
Creating videos
Titles of abstracts & summaries, and much more, like...
Improving Grant Writing
Learning by Teaching Chatbots
GenAI as peer Learner
Data Analysis for Non-Coders
Student Course Preparation
To reduce Plagiarism
Legal Problems for classes
Understanding Student Learning in Real Time
Simulate a poor student
Faculty co-pilot chatbot
Generate fresh Assessments
Data Analysis Partner
Summarize student questions in real-time
Assess depth of students' understanding
The skills to foster are Listening
Communicating
Approaching the problem & solving
Making Real Time Decisions
Logic
Refining Memories
Learning Cultures & Syntax (Foreign Language)
Chatbots & Agentic AI can never do what a professor can do.
The need of the hour is to teach Creativity
Emotions
Judgement
Psychology
Communication
Human Emotions
…through various content!
National Education Policy: A Complete Guide. The National Education Policy 2020 (NEP 2020) was approved by the Union Cabinet on July 29, 2020. The policy aims to bring equity and inclusiveness to the education system.
The NEP 2020 replaces the 10+2 school system with a new 5+3+3+4 system. The new system corresponds to the following age groups:
Foundational stage: 3–8 years
Preparatory stage: 8–11 years
Middle stage: 11–14 years
Secondary stage: 14–18 years
The NEP 2020 curriculum includes:
Proficiency in languages
Scientific temper and evidence-based thinking
Creativity and innovativeness
Sense of aesthetics and art
Oral and written communication
Health and nutrition
Physical education, fitness, wellness, and sports
Collaboration and teamwork
Problem solving
Slides are only 30% of a presentation; the other 70% is knowledge, experience, and verbal and nonverbal communication. Knowledge, with enough practice, can make a long-lasting mark on your audience.
The National Education Policy is a game-changer, with equal stress on our Indian Knowledge System and the international education system. Freedom to choose subjects across 'streams' with 'no silos', multiple entry and multiple exit points, and flexibility in completing higher studies (since different learners have different capabilities) are some salient points. The policy is very original, Indic and strong, but everything now depends on implementation. I understand we will get our first batch of freshers in engineering colleges in 2024. I welcome any more observations.
India is the best case study of digitization down to the nano level. The demographic divide is a wonderful subject matter for anthropologists. Using Data Science and Artificial Intelligence across such a huge population and area is noteworthy.
Mapping the course flow of engineering subjects, finding the important and redundant courses, and optimising the time-effort trade-off for your own students will give any engineering faculty member strength as a mentor.
Finding the right fit from your mentees for the industry is the most satisfying job for any Engineering Faculty.
This presentation is part of an FDP being held at Rajkiya Engineering College, Kannauj.
Capstone projects are an intense final task undertaken to complete professional studies. This slide show focuses on capstone projects for engineering studies. The target audience is engineering faculty.
Design Thinking is an iterative exercise on Inspiration, Insight, Ideation & Implementation.
Fail early, Test Often and be creative about your mistakes... never a repeated one!
The effort in this webinar is to help civil, mechanical, and sanitation engineers understand that DSAI is there to make the best use of the domain knowledge they already have.
Deep learning's practical applicability was constrained by two key factors. One was the availability of big data; the explosion of data that came with Internet growth solved that problem. The second was obtaining the compute power required to harvest valuable knowledge from that big data.
Here is my perspective
1) The document discusses strategies for making engineering students placement ready. It outlines four approaches - conducting faculty development programs on projects, promoting industry collaboration through problem-solving initiatives, establishing research-based master's programs, and developing specific skills in students.
2) A key strategy is to involve students from the first year in long-term projects with industry mentors and publishing papers to improve skills. Industry is encouraged to provide problems and funding while students gain experience.
3) A proposed "Innovate" portal would facilitate collaboration between industry and academics on practical problems. Industry involvement could reduce training costs through co-developed resources.
Today, during a Management Development Program at Radisson Hotel, Noida.
Participants from PSUs like NTPC and GAIL, and HR personnel from corporates with more than 20 years of experience.
A grand Teaching Learning Experience
The document discusses how big data analytics can be used for defense and national security purposes. It describes some of the challenges faced by India such as its long land borders and insurgencies. Big data from sources like images, audio, video, locations and social media could help identify threats and patterns. Analytics can provide insights into separatist groups and habitual stone pelters in Kashmir. Challenges include data quality, scalability and privacy concerns. Coordination between different agencies is important to disseminate information and take action.
Gradient descent is an optimization algorithm used to minimize a cost function by iteratively adjusting parameter values in the direction of the steepest descent. It works by calculating the derivative of the cost function to determine which direction leads to lower cost, then taking a step in that direction. This process repeats until reaching a minimum. Gradient descent is simple but requires knowing the gradient of the cost function. Backpropagation extends gradient descent to neural networks by propagating error backwards from the output to calculate gradients to update weights.
This document discusses the steepest descent method, also called gradient descent, for finding the nearest local minimum of a function. It works by iteratively moving from each point in the direction of the negative gradient to minimize the function. While effective, it can be slow for functions with long, narrow valleys. The step size used in gradient descent is important: too large a step will cause divergence, while too small a step will take a long time to converge. The Lipschitz constant of a function's gradient provides an upper bound for the step size that guarantees convergence.
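A minimal sketch of the idea described above, minimizing a simple one-dimensional cost function; the function, starting point, step size, and iteration count are illustrative assumptions:

    # Gradient descent on f(w) = (w - 3)^2, whose gradient is f'(w) = 2*(w - 3).

    def gradient(w):
        return 2.0 * (w - 3.0)

    w = 0.0            # initial guess
    alpha = 0.1        # learning rate (step size)
    for step in range(100):
        w = w - alpha * gradient(w)   # move against the gradient

    print(w)           # converges toward the minimizer w = 3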
In tube drawing process, a tube is pulled out through a die and a plug to reduce its diameter and thickness as per the requirement. Dimensional accuracy of cold drawn tubes plays a vital role in the further quality of end products and controlling rejection in manufacturing processes of these end products. Springback phenomenon is the elastic strain recovery after removal of forming loads, causes geometrical inaccuracies in drawn tubes. Further, this leads to difficulty in achieving close dimensional tolerances. In the present work springback of EN 8 D tube material is studied for various cold drawing parameters. The process parameters in this work include die semi-angle, land width and drawing speed. The experimentation is done using Taguchi’s L36 orthogonal array, and then optimization is done in data analysis software Minitab 17. The results of ANOVA shows that 15 degrees die semi-angle,5 mm land width and 6 m/min drawing speed yields least springback. Furthermore, optimization algorithms named Particle Swarm Optimization (PSO), Simulated Annealing (SA) and Genetic Algorithm (GA) are applied which shows that 15 degrees die semi-angle, 10 mm land width and 8 m/min drawing speed results in minimal springback with almost 10.5 % improvement. Finally, the results of experimentation are validated with Finite Element Analysis technique using ANSYS.
"Boiler Feed Pump (BFP): Working, Applications, Advantages, and Limitations E...Infopitaara
A Boiler Feed Pump (BFP) is a critical component in thermal power plants. It supplies high-pressure water (feedwater) to the boiler, ensuring continuous steam generation.
⚙️ How a Boiler Feed Pump Works
Water Collection:
Feedwater is collected from the deaerator or feedwater tank.
Pressurization:
The pump increases water pressure using multiple impellers/stages in centrifugal types.
Discharge to Boiler:
Pressurized water is then supplied to the boiler drum or economizer section, depending on design.
🌀 Types of Boiler Feed Pumps
Centrifugal Pumps (most common):
Multistage for higher pressure.
Used in large thermal power stations.
Positive Displacement Pumps (less common):
For smaller or specific applications.
Precise flow control but less efficient for large volumes.
🛠️ Key Operations and Controls
Recirculation Line: Protects the pump from overheating at low flow.
Throttle Valve: Regulates flow based on boiler demand.
Control System: Often automated via DCS/PLC for variable load conditions.
Sealing & Cooling Systems: Prevent leakage and maintain pump health.
⚠️ Common BFP Issues
Cavitation due to low NPSH (Net Positive Suction Head).
Seal or bearing failure.
Overheating from improper flow or recirculation.
Sorting Order and Stability in Sorting.
Concept of Internal and External Sorting.
Bubble Sort,
Insertion Sort,
Selection Sort,
Quick Sort and
Merge Sort,
Radix Sort, and
Shell Sort,
External Sorting, Time complexity analysis of Sorting Algorithms.
π0.5: a Vision-Language-Action Model with Open-World Generalization – NABLAS株式会社
This presentation introduces robot foundation models that integrate vision, language, and action.
Built on a Transformer combining diffusion and autoregression, π0.5 enables reasoning and planning in open-world settings.
"Feed Water Heaters in Thermal Power Plants: Types, Working, and Efficiency G...Infopitaara
A feed water heater is a device used in power plants to preheat water before it enters the boiler. It plays a critical role in improving the overall efficiency of the power generation process, especially in thermal power plants.
🔧 Function of a Feed Water Heater:
It uses steam extracted from the turbine to preheat the feed water.
This reduces the fuel required to convert water into steam in the boiler.
It supports Regenerative Rankine Cycle, increasing plant efficiency.
🔍 Types of Feed Water Heaters:
Open Feed Water Heater (Direct Contact)
Steam and water come into direct contact.
Mixing occurs, and heat is transferred directly.
Common in low-pressure stages.
Closed Feed Water Heater (Surface Type)
Steam and water are separated by tubes.
Heat is transferred through tube walls.
Common in high-pressure systems.
⚙️ Advantages:
Improves thermal efficiency.
Reduces fuel consumption.
Lowers thermal stress on boiler components.
Minimizes corrosion by removing dissolved gases.
International Journal of Distributed and Parallel Systems (IJDPS) – samueljackson3773
The growth of the Internet and other web technologies requires the development of new algorithms and architectures for parallel and distributed computing. The International Journal of Distributed and Parallel Systems is a bimonthly open-access peer-reviewed journal that aims to publish high-quality scientific papers arising from original research and development by the international community in the areas of parallel and distributed systems. IJDPS serves as a platform for engineers and researchers to present new ideas and system technology, in an interactive and friendly, but strongly professional, atmosphere.
Concept of Problem Solving, Introduction to Algorithms, Characteristics of Algorithms, Introduction to Data Structure, Data Structure Classification (Linear and Non-linear, Static and Dynamic, Persistent and Ephemeral data structures), Time complexity and Space complexity, Asymptotic Notation - The Big-O, Omega and Theta notation, Algorithmic upper bounds, lower bounds, Best, Worst and Average case analysis of an Algorithm, Abstract Data Types (ADT)
Analysis of reinforced concrete deep beam is based on simplified approximate method due to the complexity of the exact analysis. The complexity is due to a number of parameters affecting its response. To evaluate some of this parameters, finite element study of the structural behavior of the reinforced self-compacting concrete deep beam was carried out using Abaqus finite element modeling tool. The model was validated against experimental data from the literature. The parametric effects of varied concrete compressive strength, vertical web reinforcement ratio and horizontal web reinforcement ratio on the beam were tested on eight (8) different specimens under four points loads. The results of the validation work showed good agreement with the experimental studies. The parametric study revealed that the concrete compressive strength most significantly influenced the specimens’ response with the average of 41.1% and 49 % increment in the diagonal cracking and ultimate load respectively due to doubling of concrete compressive strength. Although the increase in horizontal web reinforcement ratio from 0.31 % to 0.63 % lead to average of 6.24 % increment on the diagonal cracking load, it does not influence the ultimate strength and the load-deflection response of the beams. Similar variation in vertical web reinforcement ratio leads to an average of 2.4 % and 15 % increment in cracking and ultimate load respectively with no appreciable effect on the load-deflection response.
2. Course Objective
To understand, successfully apply and evaluate Neural Network structures and paradigms for problems in Science, Engineering and Business.
3. PreRequisites
It is expected that the audience has a flair for understanding algorithms and basic knowledge of Mathematics, Logic gates and Programming.
4. Outline
Introduction
How the human brain learns
Neuron Models
Different types of Neural Networks
Network Layers and Structure
Training a Neural Network
Application of ANN
5. Introduction:
Soft Computing techniques such as neural networks, genetic algorithms and fuzzy logic are among the most powerful tools available for detecting and describing subtle relationships in massive amounts of seemingly unrelated data.
Neural networks can learn and are actually taught instead of being programmed.
The teaching mode can be supervised or unsupervised.
Neural networks learn in the presence of noise.
8. How does the brain work?
• Each neuron receives inputs from other neurons.
– Neurons use spikes to communicate.
• The effect of each input line on the neuron is controlled by a synaptic weight.
– Positive or negative.
• Synaptic weights adapt so that the whole network learns to perform useful computations.
– Recognizing objects, understanding languages, making plans, controlling the body.
• There are 10^11 neurons, each with about 10^4 weights.
9. How the Human Brain learns
In the human brain, a typical neuron collects signals from others through a host of fine structures called dendrites.
The neuron sends out spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches.
At the end of each branch, a structure called a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurons.
10. Modularity and the brain
• Different bits of the cortex do different things.
• Local damage to the brain has specific effects.
• Early brain damage makes function relocate.
• Cortex gives rapid parallel computation plus flexibility.
• Conventional computers require very fast central processors for long sequential computations.
12. Fundamental concept
• NNs are constructed and implemented to model the human brain.
• They perform various tasks such as pattern matching, classification, optimization, function approximation, vector quantization and data clustering.
• These tasks are difficult for traditional computers.
13. ANN
• ANNs possess a large number of processing elements called nodes/neurons which operate in parallel.
• Neurons are connected to one another by connection links.
• Each link is associated with weights which contain information about the input signal.
• Each neuron has an internal state of its own, which is a function of the inputs that the neuron receives; this is the activation level.
14. Comparison between brain and computer (ANN)
Speed: the brain takes a few milliseconds per operation; an ANN takes a few nanoseconds and uses massive parallel processing.
Size and complexity: the brain has about 10^11 neurons and 10^15 interconnections; the size of an ANN depends on the designer.
Storage capacity: the brain stores information in its interconnections (synapses) with no loss of memory; an ANN stores information in contiguous memory locations, and loss of memory may sometimes happen.
Tolerance: the brain has fault tolerance; an ANN has no fault tolerance, and information gets disrupted when interconnections are disconnected.
Control mechanism: control is complicated in the biological neuron and involves chemicals; it is simpler in an ANN.
15. Types of Problems ANN can handle
Mathematical Modeling (Function Approximation)
Classification
Clustering
Forecasting
Vector Quantization
Pattern Association
Control
Optimization
16. A Neuron Model
When a neuron receives excitatory input that is sufficiently large compared with its inhibitory input, it sends a spike of electrical activity down its axon. Learning occurs by changing the effectiveness of the synapses so that the influence of one neuron on another changes.
We construct these neural networks by first trying to deduce the essential features of neurons and their interconnections.
We then typically program a computer to simulate these features.
17. A Simple Neuron
An artificial neuron is a device with many inputs and one output.
The neuron has two modes of operation: the training mode and the using mode.
18. Important terminologies of ANNs
• Weights
• Bias
• Threshold
• Learning rate
• Momentum factor
• Vigilance parameter
• Notations used in ANN
19. Weights
• Each neuron is connected to every other neuron by means of directed links.
• Links are associated with weights.
• Weights contain information about the input signal and are represented as a matrix.
• The weight matrix is also called the connection matrix.
21. Weights contd…
• wij is the weight from processing element "i" (source node) to processing element "j" (destination node).
[Figure: inputs X1, …, Xi, …, Xn connected to neuron Yj through weights w1j, wij, wnj, together with a bias bj.]
22. Activation Functions
• Used to calculate the output response of a neuron.
• The sum of the weighted input signals is applied with an activation function to obtain the response.
• Activation functions can be linear or non-linear.
• Already dealt with:
– Identity function
– Single/binary step function
– Discrete/continuous sigmoidal function
23. Bias
• Bias is like another weight. It is included by adding a component x0 = 1 to the input vector X.
• X = (1, X1, X2, …, Xi, …, Xn)
• Bias is of two types:
– Positive bias: increases the net input
– Negative bias: decreases the net input
24. Why is Bias required?
• The relationship between input and output is given by the equation of a straight line, y = mx + c.
[Figure: the input X passes through the line y = mx + C, where C is the bias, to produce the output Y.]
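A tiny sketch of the analogy: the weight plays the role of the slope m and the bias the role of the intercept c, so without a bias the neuron's net input is forced through the origin (the numbers below are arbitrary):

    # Single-input neuron output compared with the straight line y = m*x + c.
    # The weight w corresponds to the slope m, and the bias b to the intercept c.
    def neuron_output(x, w=2.0, b=0.5):
        return w * x + b        # net input before any activation function

    print(neuron_output(0.0))   # 0.5: without the bias this would be forced to 0
    print(neuron_output(1.0))   # 2.5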
25. Threshold
• A set value based upon which the final output of the network may be calculated.
• Used in the activation function.
• The activation function using a threshold θ can be defined as, for example, f(net) = 1 if net >= θ, and 0 otherwise.
26. Learning rate
• Denoted by α.
• Used to control the amount of weight adjustment at each step of training.
• A learning rate ranging from 0 to 1 determines the rate of learning at each time step.
27. Other terminologies
• Momentum factor:
– used to aid convergence; the momentum factor is added to the weight-update process.
• Vigilance parameter:
– Denoted by ρ.
– Used to control the degree of similarity required for patterns to be assigned to the same cluster.
28. The McCulloch-Pitts model
Neurons work by processing information. They receive and provide information in the form of spikes.
[Figure: inputs x1, x2, x3, …, xn-1, xn feed the neuron through weights w1, w2, w3, …, wn, producing the output y.]
32. Features of the McCulloch-Pitts model
• Allows binary 0/1 states only
• Operates under a discrete-time assumption
• Weights and the neurons' thresholds are fixed in the model, and there is no interaction among network neurons
• Just a primitive model
33. Properties of the McCulloch and Pitts Model
Input is 0 or 1
Weights are -1, 0 or +1
Threshold is an integer
Output is 0 or 1
Output is 1 if the weighted sum of the inputs reaches (is greater than or equal to) the threshold, else the output is 0
Represent the NOT gate with the help of this model, using a signal flow graph and a flowchart.
NOT gate: a single input x with weight w = -1 and threshold L = 0.
Truth table:
x  y
0  1
1  0
34. McCulloch and Pitts Model: OR Gate and AND Gate
OR gate: inputs x and y (with weights wx and wy), output z; the neuron fires (z = 1) when the weighted sum reaches the threshold L >= 1.
Truth table (OR gate):
x  y  z
0  0  0
0  1  1
1  0  1
1  1  1
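A minimal sketch of a McCulloch-Pitts neuron under the conventions above (binary inputs, fixed integer weights and threshold); the NOT and OR parameters follow the slides, while the AND threshold of 2 is an assumed standard choice:

    # McCulloch-Pitts neuron: fires (returns 1) when the weighted sum of binary
    # inputs reaches the threshold, otherwise returns 0.
    def mp_neuron(inputs, weights, threshold):
        net = sum(w * x for w, x in zip(weights, inputs))
        return 1 if net >= threshold else 0

    def not_gate(x):         # single input, weight -1, threshold 0 (as on the slide)
        return mp_neuron([x], [-1], 0)

    def or_gate(x, y):       # weights 1, 1 and threshold 1
        return mp_neuron([x, y], [1, 1], 1)

    def and_gate(x, y):      # weights 1, 1 and threshold 2 (assumed standard choice)
        return mp_neuron([x, y], [1, 1], 2)

    for x in (0, 1):
        for y in (0, 1):
            print(x, y, or_gate(x, y), and_gate(x, y), not_gate(x))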
37. Advantages and Disadvantages of the McCulloch-Pitts model
• Advantages
– Simplistic
– Substantial computing power
• Disadvantages
– Weights and thresholds are fixed
– Not very flexible
38. Quiz
• Which of the following tasks are neural networks good at?
– Recognizing fragments of words in a pre-processed sound wave.
– Recognizing badly written characters.
– Storing lists of names and birth dates.
– Logical reasoning.
Neural networks are good at finding statistical regularities that allow them to recognize patterns. They are not good at flawlessly applying symbolic rules or storing exact numbers.
40. Perceptron Learning rule
• The learning signal is the difference between the desired and the actual neuron response.
• Learning is supervised.
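A minimal sketch of this rule in use (the binary threshold output, learning rate, and toy AND dataset are illustrative assumptions):

    # Perceptron learning rule: w <- w + alpha * (t - y) * x,  b <- b + alpha * (t - y)
    # with a binary threshold output y = 1 if w.x + b >= 0 else 0.
    def predict(w, b, x):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

    def train(samples, alpha=1.0, epochs=10):
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for x, t in samples:
                y = predict(w, b, x)
                w = [wi + alpha * (t - y) * xi for wi, xi in zip(w, x)]
                b = b + alpha * (t - y)
        return w, b

    # Linearly separable toy data: the logical AND function.
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    w, b = train(data)
    print(w, b, [predict(w, b, x) for x, _ in data])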
41. General symbol of a neuron, consisting of a processing node and synaptic connections.
42. Neuron Modeling for ANN
The activation function f maps the activation value net; its domain is the set of activation values net.
net is the scalar product of the weight and input vectors, net = w·x.
The neuron, as a processing node, performs the operation of summation of its weighted inputs.
43. Sigmoid neurons
• These give a real-valued output that is a smooth and bounded function of their total input.
– Typically they use the logistic function.
– They have nice derivatives which make learning easy.
[Figure: the logistic curve, rising smoothly from 0 to 1 and crossing 0.5 at zero input.]
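For reference, a small sketch of the logistic function and the "nice derivative" mentioned above (standard definitions):

    import math

    def sigmoid(z):
        # Logistic function: smooth, bounded in (0, 1), equals 0.5 at z = 0.
        return 1.0 / (1.0 + math.exp(-z))

    def sigmoid_derivative(z):
        # The nice derivative: sigma'(z) = sigma(z) * (1 - sigma(z)).
        s = sigmoid(z)
        return s * (1.0 - s)

    print(sigmoid(0.0), sigmoid_derivative(0.0))   # 0.5 0.25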
44. Activation function
• Bipolar binary and unipolar binary functions are called hard-limiting activation functions and are used in the discrete neuron model.
• Unipolar continuous and bipolar continuous functions are called soft-limiting activation functions, also known as sigmoidal characteristics.
50. Quiz
• Suppose we have a 2D input x = (0.5, -0.5) connected to a neuron with weights w = (2, -1) and bias b = 0.5. Furthermore, the target for x is t = 0. In this case we use a binary threshold neuron for the output, so that
y = 1 if xᵀw + b >= 0, and 0 otherwise.
What will be the weights and bias after 1 iteration of the perceptron learning algorithm?
w = (1.5, -0.5), b = -1.5
w = (1.5, -0.5), b = -0.5
w = (2.5, -1.5), b = 0.5
w = (-1.5, 0.5), b = 1.5
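A short worked check (assuming a learning rate of 1, which the slide does not state explicitly): the net input is xᵀw + b = (0.5)(2) + (-0.5)(-1) + 0.5 = 2.0 >= 0, so y = 1 while t = 0. The perceptron update w <- w + (t - y)x, b <- b + (t - y) gives w = (2 - 0.5, -1 + 0.5) = (1.5, -0.5) and b = 0.5 - 1 = -0.5, which is the second option.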
51. Basic models of ANN
The basic models of ANN are characterized by their interconnections, learning rules, and activation functions.
54. Summary of the simple networks
• Single-layer nets have limited representation power (the linear separability problem).
• Error-driven learning seems a good way to train a net.
• Multi-layer nets (or nets with non-linear hidden units) may overcome the linear inseparability problem; learning methods for such nets are needed.
• Threshold/step output functions hinder the effort to develop learning methods for multi-layered nets.
55. Training / Learning
Learning can be of one of the following forms:
Supervised Learning
Unsupervised Learning
Reinforced Learning
The patterns given to the classifier may be based on:
Parametric Estimation
Non-Parametric Estimation
56. Machine Learning in ANNs
Supervised Learning − It involves a teacher that is more knowledgeable than the ANN itself. For example, the teacher feeds some example data about which the teacher already knows the answers.
57. Machine Learning in ANNs
Unsupervised Learning − It is required when there is no example data set with known answers. For example, searching for a hidden pattern. In this case, clustering, i.e. dividing a set of elements into groups according to some unknown pattern, is carried out based on the existing data sets present.
58. Machine Learning in ANNs
Reinforcement Learning − This strategy is built on observation. The ANN makes a decision by observing its environment. If the observation is negative, the network adjusts its weights to be able to make a different required decision the next time.
59. Unsupervised Learning: why?
• Collecting and labeling a large set of sample patterns can be costly.
• Train with large amounts of unlabeled data, and only then use supervision to label the groupings found.
• In dynamic systems, the samples can change slowly.
• To find features that will then be useful for categorization.
• To provide a form of data-dependent smart processing or smart feature extraction.
• To perform exploratory data analysis, to find the structure of data, and to form proper classes for supervised analysis.
60. Measure of Dissimilarity:
Define a metric or distance function d on the vector space λ as a real-valued function on the Cartesian product λ × λ such that:
Positive definiteness: 0 ≤ d(x,y) < ∞ for x, y ∈ λ, and d(x,y) = 0 if and only if x = y
Symmetry: d(x,y) = d(y,x) for x, y ∈ λ
Triangle inequality: d(x,y) ≤ d(x,z) + d(z,y) for x, y, z ∈ λ
Invariance of the distance function: d(x+z, y+z) = d(x,y)
61. Error Computation
Minkowski metric, or Lk norm
Manhattan distance, or L1 norm
Euclidean distance, or L2 norm
Ln norm
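A small sketch of these distance measures (standard definitions; the sample vectors are arbitrary):

    def minkowski(x, y, k):
        # Lk norm of the difference: (sum |xi - yi|^k)^(1/k)
        return sum(abs(a - b) ** k for a, b in zip(x, y)) ** (1.0 / k)

    x, y = [1.0, 2.0, 3.0], [4.0, 6.0, 3.0]
    print(minkowski(x, y, 1))                        # Manhattan (L1) distance: 7.0
    print(minkowski(x, y, 2))                        # Euclidean (L2) distance: 5.0
    print(max(abs(a - b) for a, b in zip(x, y)))     # L-infinity norm: 4.0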
62. Neural Network Applications
Neural networks have performed successfully where other methods have not: predicting system behavior, and recognizing and matching complicated, vague, or incomplete data patterns.
ANNs are applied to pattern recognition, interpretation, prediction, diagnosis, planning, monitoring, debugging, repair, instruction, and control.
Application areas include:
Biomedical Signal Processing
Biometric Identification
Pattern Recognition
System Reliability
Business
Target Tracking
63. Pattern Recognition System
[Figure: pipeline of a pattern recognition system. Input → Sensing → Segmentation → Feature Extraction → Classification (handling missing features and context) → Post-processing (costs / errors) → Output (decision).]
65. Feed-forward neural networks
• These are the commonest type of neural network in practical applications.
– The first layer is the input and the last layer is the output.
– If there is more than one hidden layer, we call them "deep" neural networks.
• They compute a series of transformations that change the similarities between cases.
– The activities of the neurons in each layer are a non-linear function of the activities in the layer below.
[Figure: input units feeding hidden units, which feed output units.]
66. Feedforward Network
• Its output and input vectors are y = (y1, …, ym) and x = (x1, …, xn) respectively.
• Weight wij connects the i'th neuron with the j'th input. The activation rule of the i'th neuron is yi = f(Σj wij xj), where f is the activation function.
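A minimal sketch of this activation rule for one layer (the weight matrix, input vector, and the choice of a sigmoid activation are illustrative assumptions):

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def feedforward(W, x, f=sigmoid):
        # One layer: output_i = f( sum_j W[i][j] * x[j] )
        return [f(sum(w_ij * x_j for w_ij, x_j in zip(row, x))) for row in W]

    W = [[0.2, -0.5, 1.0],      # weights of neuron 1
         [0.7,  0.1, -0.3]]     # weights of neuron 2
    x = [1.0, 0.5, -1.0]
    print(feedforward(W, x))    # activations of the two output neurons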
68. Feedback network
When outputs are directed back as inputs to nodes of the same or a preceding layer, it results in the formation of feedback networks.
69. Lateral feedback
If the feedback of the output of the processing elements is directed back as input to the processing elements in the same layer, then it is called lateral feedback.
70. Recurrent networks
• These have directed cycles in their connection graph.
– That means you can sometimes get back to where you started by following the arrows.
• They can have complicated dynamics, and this can make them very difficult to train.
– There is a lot of interest at present in finding efficient ways of training recurrent nets.
• They are more biologically realistic.
Recurrent nets with multiple hidden layers are just a special case that has some of the hidden-to-hidden connections missing.
71. Recurrent neural networks for modeling sequences
• Recurrent neural networks are a very natural way to model sequential data:
– They are equivalent to very deep nets with one hidden layer per time slice.
– Except that they use the same weights at every time slice and they get input at every time slice.
• They have the ability to remember information in their hidden state for a long time.
– But it's very hard to train them to use this potential.
[Figure: the recurrent net unrolled over time, with an input, hidden, and output unit at every time slice.]
72. An example of what recurrent neural nets can now do (to whet your interest!)
• Ilya Sutskever (2011) trained a special type of recurrent neural net to predict the next character in a sequence.
• After training for a long time on a string of half a billion characters from English Wikipedia, he got it to generate new text.
– It generates by predicting the probability distribution for the next character and then sampling a character from this distribution.
73. Symmetrically connected networks
• These are like recurrent networks, but the connections between units
are symmetrical (they have the same weight in both directions).
– John Hopfield (and others) realized that symmetric networks are
much easier to analyze than recurrent networks.
– They are also more restricted in what they can do, because they
obey an energy function.
• For example, they cannot model cycles.
• Symmetrically connected nets without hidden units are called
“Hopfield nets”.
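As a rough sketch (not from the slides) of how such a net can store and recall bipolar patterns, assuming Hebbian outer-product storage and synchronous sign updates:

```python
import numpy as np

# Minimal Hopfield-net sketch for bipolar (+/-1) patterns.
def train_hopfield(patterns):
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)          # Hebbian outer-product storage
    np.fill_diagonal(W, 0)           # symmetric weights, no self-connections
    return W / patterns.shape[0]

def recall(W, x, steps=5):
    s = x.copy()
    for _ in range(steps):           # synchronous updates, for simplicity
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s
```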
74. Symmetrically connected networks
with hidden units
• These are called “Boltzmann machines”.
– They are much more powerful models than Hopfield nets.
– They are less powerful than recurrent neural networks.
– They have a beautifully simple learning algorithm.
75. Basic Models of ANN
[Diagram: Basic Models of ANN — Interconnections | Learning rules | Activation functions]
76. Learning
• It’s a process by which a NN adapts itself
to a stimulus by making proper parameter
adjustments, resulting in the production of
desired response
• Two kinds of learning
– Parameter learning:- connection weights are
updated
– Structure Learning:- change in network
structure
77. Training
• The process of modifying the weights in
the connections between network layers
with the objective of achieving the
expected output is called training a
network.
• This is achieved through
– Supervised learning
– Unsupervised learning
– Reinforcement learning
78. Classification of learning
• Supervised learning:-
– Learn to predict an output when given an input
vector.
• Unsupervised learning
– Discover a good internal representation of the
input.
• Reinforcement learning
– Learn to select an action to maximize payoff.
79. Supervised Learning
• Child learns from a teacher
• Each input vector requires a corresponding
target vector.
• Training pair=[input vector, target vector]
[Block diagram: input X → Neural Network (weights W) → actual output Y;
an error-signal generator compares Y with the desired output D and feeds
the error signal (D − Y) back to adjust W]
80. Two types of supervised learning
• Each training case consists of an input vector x and a
target output t.
• Regression: The target output is a real number or a whole
vector of real numbers.
– The price of a stock in 6 months' time.
– The temperature at noon tomorrow.
• Classification: The target output is a class label.
– The simplest case is a choice between 1 and 0.
– We can also have multiple alternative labels.
81. Unsupervised Learning
• How a fish or tadpole learns
• All similar input patterns are grouped together as clusters.
• If a matching input pattern is not found, a new cluster is formed.
• One major aim is to create an internal representation of the input
that is useful for subsequent supervised or reinforcement learning.
• It provides a compact, low-dimensional representation of the input.
82. Self-organizing
• In unsupervised learning there is no
feedback
• The network must discover patterns,
regularities, and features in the input data and
reflect them in its output
• While doing so, the network changes its
parameters
• This process is called self-organizing
84. When Reinforcement learning is used?
• Used when only limited information about the
target output values (critic information) is available
• Learning based on this critic information is
called reinforcement learning, and the
feedback sent is called the reinforcement
signal
• Feedback in this case is only evaluative
and not instructive
85. Basic Models of ANN
[Diagram: Basic Models of ANN — Interconnections | Learning rules | Activation functions]
86. Activation Functions
1. Identity function: f(x) = x for all x
2. Binary step function
3. Bipolar step function
4. Sigmoidal functions (continuous functions)
5. Ramp function
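As a rough illustration (not from the slides), these activation functions might be written as follows; the threshold theta, the steepness lam, and the [0, 1] ramp limits are assumed values:

```python
import numpy as np

# Illustrative sketches of the activation functions listed above.
def identity(x):
    return x                                   # f(x) = x

def binary_step(x, theta=0.0):
    return np.where(x >= theta, 1, 0)          # outputs 0 or 1

def bipolar_step(x, theta=0.0):
    return np.where(x >= theta, 1, -1)         # outputs -1 or +1

def binary_sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))      # continuous, in (0, 1)

def ramp(x):
    return np.clip(x, 0.0, 1.0)                # linear between 0 and 1, saturating
```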
87. Some learning algorithms we will learn
are
• Supervised:
• Adaline, Madaline
• Perceptron
• Back Propagation
• multilayer perceptrons
• Radial Basis Function Networks
• Unsupervised
• Competitive Learning
• Kohonen self-organizing map
• Learning vector quantization
• Hebbian learning
88. Neural processing
• Recall: the processing phase of a NN; its
objective is to retrieve stored information, i.e.,
to compute the output o for a given input x
• Basic forms of neural information
processing
– Auto association
– Hetero association
– Classification
89. Neural processing-Autoassociation
• A set of patterns can be
stored in the network
• If a pattern similar to a
member of the stored
set is presented, an
association with the
closest stored pattern
is made
90. Neural Processing- Heteroassociation
• Associations between
pairs of patterns are
stored
• Distorted input pattern
may cause correct
heteroassociation at
the output
91. Neural processing-Classification
• Set of input patterns is
divided into a number
of classes or
categories
• In response to an
input pattern from the
set, the classifier is
supposed to recall the
information regarding
class membership of
the input pattern.
93. Hebbian Learning Rule
• The learning signal is equal to the neuron’s
output
FEED FORWARD UNSUPERVISED LEARNING
94. Features of Hebbian Learning
• Feedforward unsupervised learning
• “When an axon of a cell A is near enough
to excite a cell B and repeatedly or
persistently takes part in firing it, some
growth process or change takes place in
one or both cells, increasing the efficiency”
• If oi·xj is positive, the result is an increase in the
weight; otherwise the weight decreases (see the sketch below)
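A minimal sketch of this rule; the bipolar sign activation and the learning rate c are illustrative assumptions:

```python
import numpy as np

# Hebbian rule: delta_w = c * o * x, with the learning signal equal to the
# neuron's output o = f(w^T x).
def hebbian_update(w, x, c=0.1, f=np.sign):
    o = f(w @ x)             # neuron output (learning signal)
    return w + c * o * x     # weight grows when o*x_j > 0, shrinks otherwise
```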
96. Delta Learning Rule
• Valid only for continuous activation functions
• Used in the supervised training mode
• The learning signal for this rule is called delta
• The aim of the delta rule is to minimize the error over all training
patterns
97. Delta Learning Rule Contd.
• The learning rule is derived from the condition of least squared error.
• Calculating the gradient vector of the error with respect to wi shows that
minimizing the error requires the weight changes to be in the negative
gradient direction (see the sketch below).
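A minimal sketch of a single-neuron delta-rule update, assuming a unipolar sigmoid activation and an illustrative learning rate c:

```python
import numpy as np

# Delta rule for one neuron: delta = (d - o) * f'(net), w <- w + c * delta * x.
def delta_rule_update(w, x, d, c=0.1):
    net = w @ x
    o = 1.0 / (1.0 + np.exp(-net))       # sigmoid activation
    delta = (d - o) * o * (1.0 - o)      # error times derivative f'(net)
    return w + c * delta * x             # step in the negative gradient direction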
98. Widrow-Hoff Learning Rule
• Also called the least mean square (LMS) learning rule
• Introduced by Widrow (1962); used in supervised learning
• Independent of the activation function
• A special case of the delta learning rule in which the activation function
is the identity function, i.e., f(net) = net
• Minimizes the squared error between the desired output value di
and neti
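Under the identity activation this rule assumes, the update becomes the LMS step sketched below; the learning rate c is an illustrative value:

```python
import numpy as np

# LMS (Widrow-Hoff) sketch: with f(net) = net, the delta rule reduces to
# w <- w + c * (d - net) * x, independent of any activation derivative.
def lms_update(w, x, d, c=0.1):
    net = w @ x                      # neti for this neuron
    return w + c * (d - net) * x     # minimizes the squared error (d - net)^2
```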
100. Winner-Take-All Learning Rule Contd…
• Can be explained for a layer of neurons
• An example of competitive learning; used for
unsupervised network training
• Learning is based on the premise that one of the
neurons in the layer has the maximum response
to the input x
• This neuron is declared the winner, and only its
weight vector is updated (see the sketch below)
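A minimal sketch, assuming the winner's weights are moved toward the input by an illustrative learning rate alpha:

```python
import numpy as np

# Winner-take-all: the neuron with the maximum response to x wins,
# and only its weight vector is updated.
def wta_update(W, x, alpha=0.1):
    winner = np.argmax(W @ x)                 # neuron with maximum response
    W[winner] += alpha * (x - W[winner])      # move winner's weights toward x
    return W
```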
103. Linear Separability
• Separation of the input space into regions
is based on whether the network response
is positive or negative
• The line of separation is called the linearly
separable (decision) line.
• Examples:
– The AND and OR functions are linearly separable
– The XOR (EXOR) function is linearly inseparable (see the sketch below)
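A small illustration (the weights and biases are assumed values): a single threshold unit can realize AND and OR, but no single threshold unit can realize XOR:

```python
import numpy as np

# A single threshold neuron with assumed weights realizes AND and OR,
# but XOR targets cannot be produced by any single (w, b) pair.
def threshold_unit(x, w, b):
    return 1 if np.dot(w, x) + b >= 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
print([threshold_unit(x, (1, 1), -1.5) for x in inputs])  # AND: [0, 0, 0, 1]
print([threshold_unit(x, (1, 1), -0.5) for x in inputs])  # OR:  [0, 1, 1, 1]
# XOR targets [0, 1, 1, 0] are not linearly separable.
```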
104. Hebb Network
• The Hebb learning rule is the simplest one
• Learning in the brain is performed by the
change in the synaptic gap
• When an axon of cell A is near enough to excite
cell B and repeatedly takes part in firing it, some growth
process takes place in one or both cells
• According to the Hebb rule, the weight vector
increases proportionally to the product of the
input and the learning signal.
105. Flow chart of Hebb training algorithm
[Flowchart summary]
• Start: initialize the weights and bias.
• For each training pair s:t:
– Activate the input units: xi = si
– Activate the output unit: y = t
– Weight update: wi(new) = wi(old) + xi·y
– Bias update: b(new) = b(old) + y
• When no training pairs remain, stop.
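A minimal sketch of this procedure, assuming bipolar inputs and targets and zero initial weights and bias; the AND-function training set shown is only an illustration:

```python
import numpy as np

# Hebb training algorithm from the flow chart above.
def hebb_train(samples, targets):
    samples = np.asarray(samples, dtype=float)
    w = np.zeros(samples.shape[1])       # initialize weights
    b = 0.0                              # initialize bias
    for s, t in zip(samples, targets):   # for each training pair s:t
        x, y = s, t                      # activate input and output units
        w = w + x * y                    # weight update: w(new) = w(old) + x*y
        b = b + y                        # bias update: b(new) = b(old) + y
    return w, b

# Example: AND function with bipolar values.
w, b = hebb_train([[1, 1], [1, -1], [-1, 1], [-1, -1]], [1, -1, -1, -1])
print(w, b)   # expected: [2. 2.] -2.0
```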