
EN19CS301415 (Yash Shukla)

CS-G, 5th Sem


Machine Learning Assignment-4
(Elective-2)

Q1. Explain the Biological Neural Network and the Artificial Neural Network. What do you mean by an activation function in a Neural Network?

Ans. A biological neural network is composed of groups of chemically connected or functionally associated neurons. A single neuron may be connected to many other neurons, and the total number of neurons and connections in a network may be extensive. Connections, called synapses, are usually formed from axons to dendrites, though dendrodendritic synapses and other connections are possible. Apart from electrical signaling, other forms of signaling arise from neurotransmitter diffusion.
An artificial neural network is composed of artificial neurons or nodes and is built for solving artificial intelligence (AI) problems. Artificial neural networks are information-processing paradigms inspired by the way biological neural systems process data, and they try to simulate some properties of biological neural networks. In the AI field, artificial neural networks have been applied successfully to speech recognition, image analysis, and adaptive control, and to construct software agents (in computer and video games) or autonomous robots.
Activation functions are used to determine the firing of neurons in a neural network.
Given a linear combination of inputs and weights from the previous layer, the
activation function controls how we'll pass that information on to the next layer. An
ideal activation function is both nonlinear and differentiable.
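As a brief illustration, the sketch below implements three common activation functions in plain NumPy; the particular functions, sample inputs, and weights are my own illustrative choices, not part of the assignment.

```python
import numpy as np

def sigmoid(z):
    # Squashes any real input into (0, 1); nonlinear and differentiable.
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Squashes input into (-1, 1); a zero-centered relative of sigmoid.
    return np.tanh(z)

def relu(z):
    # Passes positive values through unchanged, zeroes out negatives.
    return np.maximum(0.0, z)

# A linear combination of inputs and weights, as described above.
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.4, 0.7, -0.2])
b = 0.1
z = np.dot(w, x) + b

print(sigmoid(z), tanh(z), relu(z))
```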

Q2. Explain the feed-forward neural network.

Ans. A Feed Forward Neural Network is an artificial neural network in which the
connections between nodes do not form a cycle. The opposite of a feed-forward
neural network is a recurrent neural network, in which certain pathways are cycled.
The feed-forward model is the simplest form of a neural network as information is
only processed in one direction. While the data may pass through multiple hidden
nodes, it always moves in one direction and never backward.
Applications of Feed Forward Neural Networks:

While feed-forward neural networks are fairly straightforward, their simplified architecture can be an advantage in particular machine learning applications. For example, one may set up a series of feed-forward neural networks to run independently of each other, but with a mild intermediary for moderation. Like the human brain, this process relies on many individual neurons to handle and process larger tasks. As the individual networks perform their tasks independently, the results can be combined at the end to produce a synthesized and cohesive output.
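To make the one-directional flow concrete, here is a minimal sketch of a forward pass through a single hidden layer; the layer sizes, random weights, and tanh activation are illustrative assumptions, not values from the assignment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 3 inputs, 4 hidden units, 1 output.
W1 = rng.normal(size=(4, 3))
b1 = np.zeros(4)
W2 = rng.normal(size=(1, 4))
b2 = np.zeros(1)

def forward(x):
    # Information moves strictly forward: input -> hidden -> output.
    h = np.tanh(W1 @ x + b1)   # hidden layer with tanh activation
    y = W2 @ h + b2            # linear output layer
    return y

print(forward(np.array([0.2, -0.5, 1.0])))
```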

Q3. What is the basic concept of a Recurrent Neural Network? Why do we use
dimensionality reduction?

Ans. A Recurrent Neural Network (RNN) is a type of neural network where the output from the previous step is fed as input to the current step. In traditional neural networks, all the inputs and outputs are independent of each other, but in cases such as predicting the next word of a sentence, the previous words are required, and hence there is a need to remember them. Thus the RNN came into existence, solving this issue with the help of a hidden layer. The main and most important feature of an RNN is the hidden state, which remembers some information about a sequence.

An RNN has a "memory" that remembers information about what has been calculated so far. It uses the same parameters for each input, since it performs the same task on all the inputs or hidden states to produce the output. This reduces the number of parameters, unlike other neural networks.
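A minimal sketch of this recurrence is shown below, with the weight matrices shared across every time step; the shapes, random sequence, and tanh choice are my own assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared parameters, reused at every time step.
W_xh = rng.normal(size=(5, 3))  # input -> hidden
W_hh = rng.normal(size=(5, 5))  # hidden -> hidden (the "memory" path)
b_h = np.zeros(5)

def rnn_step(h_prev, x_t):
    # The hidden state carries information from previous steps forward.
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

h = np.zeros(5)
for x_t in rng.normal(size=(4, 3)):  # a sequence of 4 input vectors
    h = rnn_step(h, x_t)
print(h)
```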

Advantages of Recurrent Neural Networks:

1. An RNN remembers information through time, which makes it useful in time-series prediction because of this ability to recall previous inputs. The Long Short-Term Memory (LSTM) network is a variant designed to retain such information over longer sequences.
2. Recurrent neural networks are even used with convolutional layers to extend the effective pixel neighborhood.

Disadvantages of Recurrent Neural Networks:

1. Gradient vanishing and exploding problems.
2. Training an RNN is a very difficult task.
3. It cannot process very long sequences when using tanh or ReLU as the activation function.
An intuitive example of dimensionality reduction can be discussed through a simple e-mail classification problem, where we need to classify whether an e-mail is spam or not. This can involve a large number of features, such as whether or not the e-mail has a generic title, the content of the e-mail, whether the e-mail uses a template, and so on. However, some of these features may overlap. Similarly, a classification problem that relies on both humidity and rainfall can be collapsed into just one underlying feature, since the two are correlated to a high degree. Hence, we can reduce the number of features in such problems. A 3-D classification problem can be hard to visualize, whereas a 2-D one can be mapped to a simple 2-dimensional space, and a 1-D problem to a simple line.
There are two components of dimensionality reduction:
A. Feature selection: In this, we try to find a subset of the original set of variables, or features, to get a smaller subset that can be used to model the problem. It usually involves three approaches:
a. Filter
b. Wrapper
c. Embedded

B. Feature extraction: This reduces the data in a high-dimensional space to a lower-dimensional space, i.e. a space with a smaller number of dimensions.

Benefits of applying Dimensionality Reduction:

Some benefits of applying the dimensionality reduction technique to a given dataset are given below:

1. By reducing the dimensions of the features, the space required to store the dataset is also reduced.
2. Less computation and training time is required for reduced dimensions of features.
3. Reduced dimensions of the dataset's features help in visualizing the data quickly.
4. It removes redundant features (if present) by taking care of multicollinearity.
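As a brief illustration of feature extraction, the sketch below uses principal component analysis (PCA) from scikit-learn to collapse two correlated features (echoing the humidity/rainfall example above) into one underlying dimension; the synthetic data is my own assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Two highly correlated features, e.g. humidity and rainfall.
humidity = rng.normal(size=100)
rainfall = 0.9 * humidity + 0.1 * rng.normal(size=100)
X = np.column_stack([humidity, rainfall])

# Collapse the two correlated features into one underlying component.
pca = PCA(n_components=1)
X_reduced = pca.fit_transform(X)

print(X.shape, "->", X_reduced.shape)   # (100, 2) -> (100, 1)
print(pca.explained_variance_ratio_)    # most variance kept in one component
```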
Q4. What do you mean by activation function? What is the purpose of the activation
function?

Ans. An activation function is a function that is added to an artificial neural network to help the network learn complex patterns in the data. When compared with a neuron-based model like the one in our brains, the activation function is what ultimately decides what is to be fired to the next neuron. That is exactly what an activation function does in an ANN as well: it takes in the output signal from the previous cell and converts it into some form that can be taken as input to the next cell.

The purpose of having an activation function in a network:

Apart from the biological similarity discussed earlier, activation functions also help in keeping the value of the output from the neuron restricted to a certain limit, as per our requirement. This is important because the input into the activation function is W*x + b, where W is the weights of the cell, x is the inputs, and b is the bias added to that. If this value is not restricted to a certain limit, it can grow very high in magnitude, especially in very deep neural networks that have millions of parameters, and this leads to computational issues. For example, some activation functions (like softmax) output values in a specific range (such as between 0 and 1) for different values of input.
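A minimal sketch of that bounding behavior, using a numerically stable softmax (the max-subtraction shift is a standard trick, added here as my own choice):

```python
import numpy as np

def softmax(z):
    # Subtracting the max keeps exp() from overflowing; it does not
    # change the result because softmax is shift-invariant.
    e = np.exp(z - np.max(z))
    return e / e.sum()

z = np.array([2.0, 1000.0, -3.0])  # unbounded pre-activation values
p = softmax(z)
print(p)          # every output lies in (0, 1)
print(p.sum())    # and the outputs sum to 1
```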

Q5. What is back-propagation and how is it used in a neural network?

Ans. In machine learning, backpropagation (backprop, BP) is a widely used algorithm for training feed-forward neural networks. Generalizations of backpropagation exist for other artificial neural networks (ANNs), and for functions generally. These classes of algorithms are all referred to generically as "backpropagation". In fitting a neural network, backpropagation computes the gradient of the loss function with respect to the weights of the network for a single input-output example, and does so efficiently, unlike a naive direct computation of the gradient with respect to each weight individually. This efficiency makes it feasible to use gradient methods for training multilayer networks, updating weights to minimize loss; gradient descent, or variants such as stochastic gradient descent, are commonly used. The backpropagation algorithm works by computing the gradient of the loss function with respect to each weight by the chain rule, computing the gradient one layer at a time and iterating backward from the last layer to avoid redundant calculations of intermediate terms in the chain rule; this is an example of dynamic programming.
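Below is a minimal sketch of one backpropagation step for a single-hidden-layer network with squared-error loss; the layer sizes, learning rate, sigmoid activation, and sample data are my own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny network: 2 inputs -> 3 hidden units -> 1 output.
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)
lr = 0.1

x, y = np.array([0.5, -0.2]), np.array([1.0])

# Forward pass, keeping intermediates for the backward pass.
z1 = W1 @ x + b1
h = sigmoid(z1)
y_hat = W2 @ h + b2
loss = 0.5 * np.sum((y_hat - y) ** 2)

# Backward pass: chain rule, one layer at a time, output to input.
d_yhat = y_hat - y               # dL/dy_hat for squared-error loss
dW2 = np.outer(d_yhat, h)        # dL/dW2
db2 = d_yhat
d_h = W2.T @ d_yhat              # gradient flowing into the hidden layer
d_z1 = d_h * h * (1 - h)         # through the sigmoid derivative
dW1 = np.outer(d_z1, x)
db1 = d_z1

# Gradient-descent update on every weight and bias.
W2 -= lr * dW2; b2 -= lr * db2
W1 -= lr * dW1; b1 -= lr * db1
print(loss)
```

Iterating this update over many examples drives the loss down; libraries such as PyTorch automate exactly this chain-rule computation.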
