DL Questions
Uploaded by Somasekhar Lalam

CM412: Deep Learning

Important Questions
UNIT – I
1. What are the main differences between AI, Machine Learning, and Deep Learning?
2. Why deep learning? Why now? Justify your answer.
3. List the historical trends in Deep Learning.
4. Illustrate the working of deep learning.
5. Discuss briefly about representation learning with suitable illustrations.
6. Why are feedforward neural networks called networks? Justify.
7. Explain XOR operation.
8. Explain the cost function in gradient-based learning.
9. Explain sigmoid units for Bernoulli Output Distributions.
10. Write briefly about output units.
11. Explain about Architecture Design.
12. Describe Back Propagation algorithm.
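As a quick reference for question 7, the classic hand-set feedforward solution to XOR can be sketched in NumPy. This is an illustrative sketch only (the specific weights follow the well-known ReLU construction from the standard deep learning textbook); it is not a required answer format:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# All four XOR input pairs, one per row.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

# Hand-chosen parameters of a 2-unit hidden layer that make the
# network compute XOR exactly, with no training required.
W = np.array([[1.0, 1.0], [1.0, 1.0]])  # input-to-hidden weights
c = np.array([0.0, -1.0])               # hidden biases
w = np.array([1.0, -2.0])               # hidden-to-output weights

h = relu(X @ W + c)   # hidden representation (linearly separable)
y = h @ w             # network output: 0, 1, 1, 0
print(y)
```

The point of the example is that no single linear layer can separate XOR, but the ReLU hidden layer maps the inputs into a space where a linear output unit suffices.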
UNIT – II
1. Write briefly about Parameter Norm Penalties.
2. Analyze and write short notes on Dataset Augmentation.
3. Explain Multi-Task Learning.
4. List and explain briefly about various Optimization Strategies and Meta-Algorithms.
5. Discuss the application of second-order methods to the training of deep networks.
6. Compare AdaGrad, RMSProp, and Adam in terms of their approach to adaptive learning
rates. What are the pros and cons of each method?
7. Why is parameter initialization important in deep neural networks? Compare different
parameter initialization strategies.
8. How does the momentum technique improve upon standard gradient descent? Illustrate
with a mathematical explanation.
9. Describe the Stochastic gradient descent (SGD) with momentum.
10. Describe the Stochastic gradient descent (SGD) with Nesterov momentum.
11. Summarize several of the most prominent challenges involved in optimization for training
deep models.
12. How does learning differ from pure optimization? Explain.
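As a quick reference for questions 8-10, the momentum update v = alpha*v - eps*grad, theta = theta + v can be sketched on a toy quadratic objective. This is an illustrative sketch; the objective f(theta) = 0.5*theta**2 and the hyperparameter values are arbitrary choices:

```python
import numpy as np

def grad(theta):
    # Gradient of the toy objective f(theta) = 0.5 * theta**2.
    return theta

theta = 5.0    # initial parameter
v = 0.0        # velocity: an exponentially decaying sum of gradients
alpha = 0.9    # momentum coefficient
eps = 0.1      # learning rate

for _ in range(200):
    v = alpha * v - eps * grad(theta)  # accumulate decayed negative gradients
    theta = theta + v                  # move along the velocity
```

Nesterov momentum differs only in evaluating the gradient at the look-ahead point theta + alpha*v instead of at theta, which applies a correction before the step is taken.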
UNIT - III
1. Write an example function for Convolution operation and explain in detail.
2. Explain the following with suitable diagram.
i. Sparse interactions.
ii. Parameter sharing.
3. Describe Pooling with suitable illustrations.
4. Discuss briefly about Convolution and Pooling as an Infinitely Strong Prior.
5. Discuss briefly about Structured Outputs.
6. Discuss in detail the variants of the Basic Convolution Function.
7. Discuss local connections, convolution and full connections with diagram.
8. Illustrate about random or Unsupervised Features.
9. Discuss in detail about Pooling with downsampling.
10. Discuss the different formats of data that can be used with convolutional networks.
11. Explain the role of Neuroscientific Basis for Convolutional Networks.
12. Give three properties of V1 that a convolutional network layer is designed to capture.
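As a quick reference for questions 1 and 6, the convolution operation used in deep learning (implemented, as in most frameworks, as cross-correlation) can be sketched directly in NumPy. This is an illustrative "valid"-mode 2-D sketch with a hand-picked input and kernel:

```python
import numpy as np

def conv2d_valid(x, k):
    """Valid-mode 2-D cross-correlation of input x with kernel k."""
    kh, kw = k.shape
    oh = x.shape[0] - kh + 1   # output height: no padding, stride 1
    ow = x.shape[1] - kw + 1   # output width
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Elementwise product of the kernel with one input patch.
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

x = np.arange(1, 10, dtype=float).reshape(3, 3)  # [[1,2,3],[4,5,6],[7,8,9]]
k = np.array([[1.0, 0.0], [0.0, 1.0]])           # sums each patch's diagonal
print(conv2d_valid(x, k))
```

The same small kernel slides over every input patch, which is exactly the parameter sharing and sparse interaction structure asked about in question 2.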
UNIT – IV
1. Describe Unfolding Computational Graphs.
2. Explain briefly about Recurrent Neural Networks with illustrations.
3. How is the gradient computed in a Recurrent Neural Network? Explain.
4. Write briefly about Recurrent Networks as Directed Graphical Models.
5. Explain Bidirectional RNNs.
6. Illustrate Encoder-Decoder sequence-to-sequence Architecture.
7. Write about Deep Recurrent Networks.
8. Describe the following.
i. Long Short-Term Memory.
ii. Other Gated RNNs.
9. Discuss Echo State Networks.
10. Illustrate Regularizing to Encourage Information Flow.
11. Explain Leaky Units and Other Strategies for Multiple Time Scales.
12. Illustrate about Explicit Memory.
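As a quick reference for questions 1-3, unfolding the recurrence h_t = tanh(W h_{t-1} + U x_t + b) into a computational graph can be sketched as a plain loop, one copy of the same parameters per time step. This is an illustrative sketch with arbitrary small dimensions and fixed random parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

T, n_in, n_hid = 4, 3, 5                          # steps, input size, state size
U = rng.normal(scale=0.1, size=(n_hid, n_in))     # input-to-hidden weights
W = rng.normal(scale=0.1, size=(n_hid, n_hid))    # hidden-to-hidden weights
b = np.zeros(n_hid)                               # hidden bias

xs = rng.normal(size=(T, n_in))   # one input sequence
h = np.zeros(n_hid)               # initial state h_0
states = []
for x in xs:                      # the "unfolded" graph: same U, W, b each step
    h = np.tanh(W @ h + U @ x + b)
    states.append(h)

states = np.array(states)         # shape (T, n_hid): one state per time step
print(states.shape)
```

Because the same W is reused at every step, back-propagation through this unfolded graph (BPTT) sums the gradient contributions from all time steps into a single gradient for W.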
