OT_Unit1

The document covers fundamental concepts in neural networks, including the McCulloch-Pitts neuron model, perceptrons, activation functions, and various types of neural networks such as MLPs, CNNs, and RNNs. It discusses training techniques, optimization strategies, and challenges in deep learning, as well as specific architectures like AlexNet. Additionally, it addresses regularization methods, pooling layers, and the significance of pre-trained models in deep learning applications.


UNIT-1

1. Explain the McCulloch-Pitts neuron model and its significance in early AI.
2. What is a perceptron, and how does it learn from data?
3. Why are activation functions important in neural networks?
4. How does a multilayer perceptron (MLP) differ from a single-layer perceptron?
5. What is the sigmoid activation function, and why is it commonly used?
6. What is Gradient Descent (GD), and how is it used in machine learning?
7. What is the purpose of hyperparameters in training deep learning models?
8. Explain the difference between L1 and L2 regularization.
9. What is a Convolutional Neural Network (CNN), and how does it differ from a fully
connected neural network?
10. Compare the depth and width of neural networks. How do they affect performance?
11. Explain the ReLU, Leaky ReLU (LReLU), and Exponential ReLU (EReLU) activation functions.
12. What is a Recurrent Neural Network (RNN)? Explain in detail its architecture and applications.
13. What is a Recurrent Neural Network (RNN), and how does it differ from a feedforward neural
network?
14. What are the key differences between a feedforward neural network and a recurrent neural
network?
15. What are some common optimization strategies used to train deep learning models effectively? (At least 3)
16. Describe how poor parameter initialization can impact the training of deep neural networks.
17. What are some common challenges in neural network optimization?
18. What is a pre-trained model, and why is it useful in deep learning?
19. Describe the role of ImageNet in the development of deep learning models for image
classification.
20. What is Audio WaveNet, and how is it used for audio processing tasks?
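
Several of the questions above (3, 5, and 11) concern activation functions. A minimal NumPy sketch of the four functions most often asked about (the input values are illustrative, not from any question):

```python
import numpy as np

def sigmoid(x):
    # Squashes any real input into (0, 1); common for binary outputs.
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Zero-centred squashing into (-1, 1).
    return np.tanh(x)

def relu(x):
    # Passes positives through unchanged, zeroes out negatives.
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Like ReLU, but keeps a small negative slope to avoid "dead" units.
    return np.where(x > 0, x, alpha * x)

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x))     # values in (0, 1); sigmoid(0) = 0.5
print(tanh(x))        # values in (-1, 1); tanh(0) = 0.0
print(relu(x))        # [0. 0. 2.]
print(leaky_relu(x))  # ~[-0.02, 0.0, 2.0]
```

Note the asymmetry: ReLU discards all information about negative inputs, while Leaky ReLU retains a scaled-down copy of it.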

1. Define Gradient Descent. List and explain the types of Gradient Descent.


2. Explain the backpropagation algorithm for neural network training.
3. Compute the output of the following neuron if the activation function is:
(i) Sigmoid function
(ii) Tanh function
(iii) ReLU function (assume the same bias 0.5 for each node).
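
The neuron figure for question 3 is not reproduced here, so the sketch below assumes illustrative inputs x = [1.0, 0.5] and weights w = [0.4, 0.6] together with the bias 0.5 stated in the question, and evaluates the output under each of the three activations:

```python
import numpy as np

x = np.array([1.0, 0.5])   # assumed inputs (figure not available)
w = np.array([0.4, 0.6])   # assumed weights (figure not available)
b = 0.5                    # bias given in the question

# Weighted sum: 0.4*1.0 + 0.6*0.5 + 0.5 = 1.2
z = np.dot(w, x) + b

sigmoid_out = 1.0 / (1.0 + np.exp(-z))  # ~0.7685
tanh_out = np.tanh(z)                   # ~0.8337
relu_out = max(0.0, z)                  # 1.2 (z is positive, passed through)

print(f"z = {z:.2f}, sigmoid = {sigmoid_out:.4f}, "
      f"tanh = {tanh_out:.4f}, relu = {relu_out:.2f}")
```

The same three-step pattern (weighted sum, add bias, apply activation) answers the question for any concrete inputs and weights read off the figure.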
1. Explain the term overfitting and dropout with respect to Neural Networks.
2. Describe zero padding strategies used in Convolution
3. Write a short note on deep CNN architecture: AlexNet.
4. With the help of a diagram, explain the basic building blocks of Convolutional Neural Network architecture.
5. Draw and explain the architecture of Thresholding Logic. Also, explain how weights are adjusted in Thresholding Logic.
6. Explain the terms weight initialization and hyperparameter tuning with respect to training of CNN.
7. Explain various activation functions used in Convolutional Neural Networks.
8. Define Perceptron. Describe the process of the Perceptron Learning Algorithm.
9. Differentiate between L1, L2, and dropout regularization.
10. How does Deep Learning overcome the challenges in conventional machine learning techniques? Draw and explain the architecture of Convolutional Neural Networks (CNN).
11. Two images shown in the figure below need to be convolved with stride = 2. Compute the resultant image pixels.
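
The figure for the strided-convolution question is not reproduced, so the sketch below assumes a 4x4 input and a 2x2 kernel purely for illustration; the sliding-window procedure is the part that carries over to the actual exam figure:

```python
import numpy as np

def conv2d(image, kernel, stride=2):
    # Valid (no-padding) convolution as used in CNNs: slide the kernel,
    # multiply element-wise with each patch, and sum (cross-correlation).
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)
    return out

# Assumed example data (the question's figure is not available here).
image = np.array([[1, 2, 3, 0],
                  [4, 5, 6, 1],
                  [7, 8, 9, 2],
                  [3, 2, 1, 0]], dtype=float)
kernel = np.array([[1, 0],
                   [0, 1]], dtype=float)

# Stride 2 with a 2x2 kernel visits non-overlapping patches -> 2x2 output.
print(conv2d(image, kernel, stride=2))  # [[6. 4.] [9. 9.]]
```

The output size follows the usual formula (N - K) / S + 1 = (4 - 2) / 2 + 1 = 2 per dimension.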

1. Why is a pooling layer used in CNN architecture? Explain with a suitable example:
(i) Max, (ii) Min, and (iii) Average pooling techniques.
2. Write a note on Sigmoid, Tanh and ReLU Neurons.
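
The three pooling techniques can be sketched with one helper that slides a window over a feature map and reduces each window; the 4x4 feature map below is an illustrative example, not taken from any question:

```python
import numpy as np

def pool2d(x, size=2, stride=2, mode="max"):
    # Slides a size x size window over x and reduces each window with
    # max, min, or mean, shrinking the spatial dimensions.
    oh = (x.shape[0] - size) // stride + 1
    ow = (x.shape[1] - size) // stride + 1
    reduce_fn = {"max": np.max, "min": np.min, "avg": np.mean}[mode]
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            window = x[i*stride:i*stride+size, j*stride:j*stride+size]
            out[i, j] = reduce_fn(window)
    return out

fmap = np.array([[1, 3, 2, 4],
                 [5, 6, 1, 2],
                 [7, 2, 9, 1],
                 [0, 4, 3, 8]], dtype=float)

print(pool2d(fmap, mode="max"))  # [[6. 4.] [7. 9.]]
print(pool2d(fmap, mode="min"))  # [[1. 1.] [0. 1.]]
print(pool2d(fmap, mode="avg"))  # [[3.75 2.25] [3.25 5.25]]
```

Max pooling keeps the strongest response in each window (the common choice in CNNs), min keeps the weakest, and average smooths the window into its mean; all three halve each spatial dimension here.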

Feature             | Single-Layer Perceptron (SLP)          | Multilayer Perceptron (MLP)
Structure           | One layer, no hidden layers            | Multiple layers, including hidden layers
Problem Solving     | Linearly separable problems only       | Handles non-linear and complex problems
Activation Function | Step function                          | Non-linear functions (e.g., ReLU, sigmoid)
Learning Algorithm  | Perceptron learning algorithm          | Backpropagation with gradient descent
Decision Boundary   | Linear decision boundary               | Non-linear decision boundary
Complexity          | Simple and computationally inexpensive | More complex and computationally intensive
Outputs             | Binary outputs                         | Continuous, binary, or multi-class outputs
Use Cases           | Basic tasks like simple classification | Advanced tasks like image recognition, NLP
Feature Learning    | No hierarchical feature learning       | Learns hierarchical and complex features
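
The perceptron learning algorithm mentioned in the comparison can be sketched on a linearly separable problem; logical AND is an illustrative choice, and the learning rate and epoch count are assumptions:

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    # Perceptron learning rule: w <- w + lr * (target - prediction) * x
    # Updates only happen on misclassified samples (error is 0 otherwise).
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0  # step activation
            error = target - pred
            w += lr * error * xi
            b += lr * error
    return w, b

# Logical AND: linearly separable, so a single-layer perceptron suffices.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])

w, b = train_perceptron(X, y)
preds = [1 if np.dot(w, xi) + b > 0 else 0 for xi in X]
print(preds)  # [0, 0, 0, 1]
```

The same code would never converge on XOR, which is not linearly separable; that limitation is exactly what motivates the hidden layers and backpropagation in the MLP column of the table.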
