ML Assignment -4
Q1. Explain Biological Neural Networks and Artificial Neural Networks. What do you
mean by an activation function in a Neural Network?
Ans. A Feed Forward Neural Network is an artificial neural network in which the
connections between nodes do not form a cycle. Its opposite is the recurrent
neural network, in which certain paths form cycles. The feed-forward model is the
simplest form of a neural network, as information is processed in only one
direction. While the data may pass through multiple hidden nodes, it always moves
forward and never backward.
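The one-direction flow described above can be sketched in a few lines of NumPy. The layer sizes, random weights, and ReLU activation here are illustrative assumptions, not part of the original text:

```python
import numpy as np

# Minimal sketch of a feed-forward pass with one hidden layer.
# All sizes and weights below are illustrative assumptions.
rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0, z)

def feed_forward(x, W1, b1, W2, b2):
    # Information flows strictly forward: input -> hidden -> output.
    hidden = relu(W1 @ x + b1)
    return W2 @ hidden + b2

W1 = rng.normal(size=(4, 3))   # input dim 3 -> hidden dim 4
b1 = np.zeros(4)
W2 = rng.normal(size=(2, 4))   # hidden dim 4 -> output dim 2
b2 = np.zeros(2)

x = np.array([0.5, -1.0, 2.0])
y = feed_forward(x, W1, b1, W2, b2)
print(y.shape)  # (2,)
```

Note that nothing in `feed_forward` ever writes back to an earlier layer, which is exactly the "never backward" property of the feed-forward model.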
Applications of Feed Forward Neural Networks:
While Feed Forward Neural Networks are fairly straightforward, their simple
architecture can be an advantage in particular machine learning applications. For
example, one may set up a series of feed-forward neural networks that run
independently of each other, with a mild intermediary for moderation. Like the
human brain, this process relies on many individual neurons to handle and process
larger tasks. As the individual networks perform their tasks independently, their
results can be combined at the end to produce a synthesized, cohesive output.
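A hedged sketch of this "independent networks, combined output" idea: several tiny feed-forward nets run separately, and a simple intermediary (here, averaging, an assumption for illustration) merges their outputs into one result:

```python
import numpy as np

# Sketch: independent feed-forward networks whose outputs are combined.
# All sizes, weights, and the averaging rule are illustrative assumptions.
rng = np.random.default_rng(3)

def make_net(in_dim=3, hid=4, out=2):
    W1, b1 = rng.normal(size=(hid, in_dim)), np.zeros(hid)
    W2, b2 = rng.normal(size=(out, hid)), np.zeros(out)
    def forward(x):
        return W2 @ np.maximum(0, W1 @ x + b1) + b2
    return forward

nets = [make_net() for _ in range(3)]   # independent networks
x = np.array([0.5, -1.0, 2.0])
outputs = [net(x) for net in nets]      # each runs on its own
combined = np.mean(outputs, axis=0)     # simple moderation: averaging
print(combined.shape)  # (2,)
```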
Q3. What is the basic concept of a Recurrent Neural Network? Why do we use
dimensionality reduction?
Ans. An RNN has a “memory” which stores information about everything computed
so far. It uses the same parameters at every step, performing the same task on all
the inputs or hidden states to produce the output. This weight sharing reduces the
number of parameters, unlike other neural networks.
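The two key points above, a hidden state that acts as memory and the same parameters reused at every step, can be sketched as a single recurrence. The dimensions, tanh nonlinearity, and random weights are illustrative assumptions:

```python
import numpy as np

# Sketch of the RNN idea: the SAME parameters (W_h, W_x, b) are reused
# at every time step, and the hidden state h carries the "memory".
rng = np.random.default_rng(1)

hidden_dim, input_dim = 4, 3
W_h = rng.normal(size=(hidden_dim, hidden_dim))
W_x = rng.normal(size=(hidden_dim, input_dim))
b = np.zeros(hidden_dim)

def rnn_forward(inputs):
    h = np.zeros(hidden_dim)                  # initial (empty) memory
    for x_t in inputs:                        # one step per input
        h = np.tanh(W_h @ h + W_x @ x_t + b)  # same weights each step
    return h

sequence = [rng.normal(size=input_dim) for _ in range(5)]
h_final = rnn_forward(sequence)
print(h_final.shape)  # (4,)
```

Because `W_h`, `W_x`, and `b` are defined once and reused for every element of the sequence, the parameter count does not grow with the sequence length.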
1. By reducing the dimensionality of the features, the space required to store the
dataset is also reduced.
2. Less computation and training time are required with fewer feature
dimensions.
3. Fewer feature dimensions make it easier to visualize the data quickly.
4. It removes redundant features (if present) by taking care of
multicollinearity.
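The benefits listed above can be demonstrated with a small dimensionality-reduction sketch. This uses PCA via a plain NumPy eigendecomposition, one common technique, chosen here for illustration; the data shapes are assumptions:

```python
import numpy as np

# Sketch of dimensionality reduction via PCA in plain NumPy:
# project onto the top principal components to shrink storage
# and computation while keeping most of the variance.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 10))        # 100 samples, 10 features

Xc = X - X.mean(axis=0)               # center the data
cov = Xc.T @ Xc / (len(Xc) - 1)       # covariance matrix of features
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]     # largest variance first
components = eigvecs[:, order[:2]]    # keep the top 2 directions

X_reduced = Xc @ components           # 10 features -> 2 features
print(X_reduced.shape)  # (100, 2)
```

The reduced array needs a fifth of the storage, and the two surviving columns can be plotted directly, which is the visualization benefit from point 3.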
Q4. What do you mean by activation function? What is the purpose of the activation
function?
Ans. Apart from the biological similarity discussed earlier, activation functions also
keep the value of the neuron's output restricted to a certain limit as per our
requirement. This is important because the input to the activation function is W*x +
b, where W is the weights of the cell, x is the input, and b is the bias added to
that. If this value is not restricted to a certain limit, it can grow very large in
magnitude, especially in very deep neural networks that have millions of
parameters, which leads to computational issues. For example, some activation
functions squash their input into a fixed range: sigmoid maps any input into (0, 1),
and softmax produces outputs between 0 and 1 that sum to 1.
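The bounding behavior described above can be sketched directly. The specific input values are illustrative assumptions; the point is that however large W*x + b gets, the activation output stays in a fixed range:

```python
import numpy as np

# Sketch of why activations bound a neuron's output: the pre-activation
# W @ x + b can be arbitrarily large, but sigmoid squashes it into (0, 1)
# and softmax turns a vector into probabilities that sum to 1.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

z = np.array([30.0, -20.0, 3.0])   # large-magnitude pre-activations
print(sigmoid(z))   # every value lies strictly between 0 and 1
print(softmax(z))   # non-negative values that sum to 1
```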