Deep Learning_Average Learner problems

The document outlines a series of questions aimed at teaching fundamental concepts in deep learning, including decision boundaries, support vector machines, optimization strategies, neural networks, autoencoders, CNNs, and LSTMs. Each question includes specific aims, problem objectives, and hints for implementation, focusing on practical applications and model training techniques. The exercises cover a range of topics from basic classification to advanced architectures and generative models.

Average Learner Problems in Deep Learning

Subject Name: Deep Learning Lab


Subject Code: 22CSP-368 / 22ITP-368

Question 1

● Aims: Introduce fundamental concepts and terminologies; understand decision boundaries in classification tasks.
Problem Aim: Visualize decision surfaces for two classes using different linear classifiers.
Hint: Use a simple dataset like XOR (note that no single linear boundary can separate it) and visualize how decision boundaries adapt with different classifiers.
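
The contrast can be sketched in plain numpy with a hand-rolled perceptron (the dataset, update rule, and names here are our own illustration, not a prescribed solution): a linear boundary w·x + b = 0 separates AND perfectly but can never separate XOR.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_and = np.array([0, 0, 0, 1])   # linearly separable
y_xor = np.array([0, 1, 1, 0])   # not linearly separable

def train_perceptron(X, y, epochs=50, lr=0.1):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (yi - pred) * xi       # perceptron update rule
            b += lr * (yi - pred)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0).astype(int) == y)

w, b = train_perceptron(X, y_and)
print("AND accuracy:", accuracy(w, b, X, y_and))   # linear boundary suffices
w2, b2 = train_perceptron(X, y_xor)
print("XOR accuracy:", accuracy(w2, b2, X, y_xor)) # stuck below 100%
```

Plotting the sign of `X @ w + b` over a grid then gives the decision surface for each classifier.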

Question 2

● Aims: Learn how linear models classify data and the role of hinge loss in SVM optimization.
Problem Aim: Implement a support vector machine (SVM) with hinge loss from scratch.
Hint: Focus on deriving the hinge loss gradient and comparing it with other loss functions like cross-entropy.
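
A minimal sketch of the hinge loss and its subgradient for a linear SVM, assuming labels y in {-1, +1}, score s = w·x + b, and an L2 regularizer (all names and the toy data are our assumptions):

```python
import numpy as np

def hinge_loss(w, b, X, y, C=1.0):
    margins = 1 - y * (X @ w + b)
    return 0.5 * w @ w + C * np.maximum(0, margins).mean()

def hinge_grad(w, b, X, y, C=1.0):
    margins = 1 - y * (X @ w + b)
    active = (margins > 0).astype(float)          # only violated margins contribute
    gw = w - C * ((active * y) @ X) / len(y)
    gb = -C * (active * y).mean()
    return gw, gb

# A few subgradient steps on a separable toy set should shrink the loss.
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w, b = np.zeros(2), 0.0
for _ in range(200):
    gw, gb = hinge_grad(w, b, X, y)
    w -= 0.1 * gw
    b -= 0.1 * gb
print("final loss:", hinge_loss(w, b, X, y))
```

Unlike cross-entropy, the hinge subgradient is exactly zero for points already beyond the margin, which is what makes only the support vectors matter.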

Question 3

● Aims: Understand optimization strategies to minimize loss functions effectively.
Problem Aim: Implement gradient descent with different batch sizes on a toy dataset.
Hint: Demonstrate how batch size affects convergence by plotting loss curves.
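
A sketch of minibatch gradient descent on a toy least-squares problem (the dataset, learning rate, and step counts are arbitrary assumptions); the returned loss lists are exactly what the hint suggests plotting:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=64)

def mse(w):
    return np.mean((X @ w - y) ** 2)

def train(batch_size, steps=300, lr=0.05):
    w = np.zeros(3)
    losses = [mse(w)]
    for _ in range(steps):
        idx = rng.choice(len(X), size=batch_size, replace=False)
        Xb, yb = X[idx], y[idx]
        grad = 2 * Xb.T @ (Xb @ w - yb) / batch_size   # minibatch MSE gradient
        w -= lr * grad
        losses.append(mse(w))
    return losses

for bs in (4, 16, 64):
    print(f"batch={bs:2d} final loss={train(bs)[-1]:.4f}")
```

Small batches trace noisier loss curves; the full batch (64) converges smoothly to near the noise floor.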

Question 4

● Aims: Build and train a simple neural network using backpropagation.
Problem Aim: Implement backpropagation for a 3-layer neural network from scratch.
Hint: Manually derive and validate weight gradients for one forward-backward pass.
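
The derive-and-validate step can be sketched as follows, assuming a small tanh MLP with MSE loss (layer sizes and data are arbitrary); the analytic gradient is checked against a finite difference, which is the standard way to validate a hand-derived backward pass:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 4))
Y = rng.normal(size=(8, 2))
W1, b1 = rng.normal(size=(4, 5)) * 0.1, np.zeros(5)
W2, b2 = rng.normal(size=(5, 2)) * 0.1, np.zeros(2)

def forward(W1, b1, W2, b2):
    H = np.tanh(X @ W1 + b1)           # hidden layer
    out = H @ W2 + b2                  # linear output layer
    loss = np.mean((out - Y) ** 2)
    return H, out, loss

def backward(W1, b1, W2, b2):
    H, out, loss = forward(W1, b1, W2, b2)
    d_out = 2 * (out - Y) / out.size   # dLoss/d(out) for mean squared error
    dW2 = H.T @ d_out
    dH = d_out @ W2.T
    dZ1 = dH * (1 - H ** 2)            # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dZ1
    return dW1, dW2, loss

dW1, dW2, _ = backward(W1, b1, W2, b2)

# Numerical gradient check on a single entry of W1.
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
num = (forward(W1p, b1, W2, b2)[2] - forward(W1, b1, W2, b2)[2]) / eps
print("analytic:", dW1[0, 0], "numeric:", num)
```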

Question 5
● Aims: Learn how to represent data compactly using autoencoders.
Problem Aim: Train an autoencoder on the MNIST dataset to compress and reconstruct
images.
Hint: Focus on reducing reconstruction loss (e.g., Mean Squared Error).
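
A minimal linear autoencoder sketch, using synthetic low-rank data as a stand-in for MNIST (the data, sizes, and learning rate are all assumptions); the point is watching the reconstruction MSE fall:

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.normal(size=(100, 2))                 # hidden latent factors
X = Z @ rng.normal(size=(2, 16))              # 16-dim data with 2-dim structure

W_enc = rng.normal(size=(16, 2)) * 0.1        # encoder: 16 -> 2 bottleneck
W_dec = rng.normal(size=(2, 16)) * 0.1        # decoder: 2 -> 16

def recon_loss():
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

initial = recon_loss()
lr = 0.01
for _ in range(500):
    code = X @ W_enc
    err = code @ W_dec - X                    # reconstruction error
    g = 2 * err / X.size
    gWd = code.T @ g                          # gradient w.r.t. decoder
    gWe = X.T @ (g @ W_dec.T)                 # gradient w.r.t. encoder
    W_dec -= lr * gWd
    W_enc -= lr * gWe
print(f"MSE: {initial:.3f} -> {recon_loss():.3f}")
```

On real MNIST the same loop would simply swap in 784-dim pixel vectors and nonlinear layers.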

Question 6

● Aims: Understand CNN basics and leverage pretrained models.
Problem Aim: Fine-tune a pretrained CNN for a new classification task (e.g., CIFAR-10).
Hint: Freeze the base layers and train only the final dense layers initially.
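
A conceptual numpy sketch of the freeze-then-train-head idea (a fixed random projection stands in for the pretrained convolutional base; everything here is an assumption, not a framework API): the base weights are never updated, only the final classifier head is.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # toy binary labels

W_base = rng.normal(size=(8, 16))             # "pretrained" base: frozen
feats = np.tanh(X @ W_base)                   # features from the frozen base

w_head, b_head = np.zeros(16), 0.0            # trainable classifier head
lr = 0.5
for _ in range(300):
    p = 1 / (1 + np.exp(-(feats @ w_head + b_head)))   # sigmoid output
    g = (p - y) / len(y)                      # logistic-loss gradient
    w_head -= lr * feats.T @ g                # only the head is updated
    b_head -= lr * g.sum()

p = 1 / (1 + np.exp(-(feats @ w_head + b_head)))
acc = np.mean((p > 0.5) == y)
print("head-only training accuracy:", acc)
```

In a real framework the same pattern is "set requires-grad/trainable to false on base layers, attach a fresh dense head, train".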

Question 7

● Aims: Explore advanced optimization methods and their effects.
Problem Aim: Compare SGD, Momentum, RMSProp, and Adam optimizers on the same dataset.
Hint: Plot training and validation losses to show differences in convergence rates.
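
The four optimizers can be compared on a simple ill-conditioned quadratic bowl f(w) = ½ wᵀHw (learning rates below are illustrative, not tuned; the quadratic stands in for a dataset so convergence is easy to see):

```python
import numpy as np

H = np.diag([1.0, 10.0])                      # ill-conditioned quadratic
def grad(w): return H @ w

def run(update, state, steps=300):
    w = np.array([5.0, 5.0])
    for t in range(1, steps + 1):
        w = update(w, grad(w), state, t)
    return 0.5 * w @ H @ w                    # final loss

def sgd(w, g, s, t):
    return w - 0.05 * g
def momentum(w, g, s, t):                     # heavy-ball momentum
    s["v"] = 0.9 * s.get("v", 0) + g
    return w - 0.05 * s["v"]
def rmsprop(w, g, s, t):                      # per-coordinate adaptive step
    s["m"] = 0.9 * s.get("m", 0) + 0.1 * g * g
    return w - 0.1 * g / (np.sqrt(s["m"]) + 1e-8)
def adam(w, g, s, t):                         # momentum + adaptivity + bias correction
    s["m"] = 0.9 * s.get("m", 0) + 0.1 * g
    s["v"] = 0.999 * s.get("v", 0) + 0.001 * g * g
    mhat = s["m"] / (1 - 0.9 ** t)
    vhat = s["v"] / (1 - 0.999 ** t)
    return w - 0.1 * mhat / (np.sqrt(vhat) + 1e-8)

for name, f in [("SGD", sgd), ("Momentum", momentum),
                ("RMSProp", rmsprop), ("Adam", adam)]:
    print(f"{name:8s} final loss = {run(f, {}):.6f}")
```

Recording the loss at every step instead of only the final value gives the convergence curves the hint asks to plot.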

Question 8

● Aims: Learn regularization techniques to improve model generalization.
Problem Aim: Train a deep network with/without early stopping and dropout.
Hint: Evaluate performance differences using metrics like accuracy or F1-score.
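
The early-stopping rule itself is easy to isolate: stop when validation loss has not improved for `patience` epochs. The validation curve below is a hand-written stand-in (an assumption) for a real training run that starts overfitting:

```python
def early_stopping(val_losses, patience=3):
    """Return the epoch at which training should stop."""
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch    # new best: reset the counter
        elif epoch - best_epoch >= patience:
            return epoch                      # no improvement for `patience` epochs
    return len(val_losses) - 1

# Validation loss that improves, then overfits and rises again.
val = [1.0, 0.7, 0.5, 0.45, 0.44, 0.46, 0.49, 0.55, 0.6, 0.7]
stop = early_stopping(val)
print("stop at epoch", stop, "best loss", min(val[:stop + 1]))
```

Dropout is complementary: it randomly zeroes activations during training, so the with/without comparison the problem asks for varies both knobs independently.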

Question 9

● Aims: Study advanced architectures for training deeper networks.
Problem Aim: Implement a simple residual block and train a ResNet on CIFAR-10.
Hint: Focus on identity mapping and gradient flow through skip connections.
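
A numpy sketch of a residual block's forward pass, out = relu(F(x) + x), with fully connected layers standing in for convolutions (shapes and initializations are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
W1 = rng.normal(size=(8, 8)) * 0.1
W2 = rng.normal(size=(8, 8)) * 0.1

def relu(z):
    return np.maximum(0, z)

def residual_block(x, W1, W2):
    f = relu(x @ W1) @ W2          # the residual branch F(x)
    return relu(f + x)             # skip connection adds x back

x = relu(rng.normal(size=(3, 8)))  # nonnegative input for a clean identity check
out = residual_block(x, W1, W2)
print(out.shape)
```

With zero weights the branch vanishes and the block is exactly the identity, which is why gradients flow unimpeded through deep stacks of such blocks.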

Question 10

● Aims: Explore real-world tasks using deep learning models.
Problem Aim: Use a U-Net architecture for image segmentation on a small dataset.
Hint: Augment data with masks and evaluate with Intersection over Union (IoU).
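
The IoU metric itself is a few lines of numpy (the toy masks below are our own example):

```python
import numpy as np

def iou(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / (union + eps)              # eps guards the empty-mask case

a = np.zeros((4, 4), dtype=int); a[:2, :2] = 1   # 4-pixel square
b = np.zeros((4, 4), dtype=int); b[:2, 1:3] = 1  # same square shifted one column
print("IoU:", iou(a, b))   # overlap 2 px, union 6 px -> ~0.333
```

The same function applies unchanged to predicted U-Net masks after thresholding the network's output probabilities.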

Question 11
● Aims: Understand LSTM's role in modeling sequential data.
Problem Aim: Build an LSTM to predict a time-series dataset (e.g., stock prices).
Hint: Experiment with sequence lengths and observe overfitting without regularization.
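
One step of an LSTM cell can be sketched in numpy to see the gating machinery (all weight shapes and initializations below are assumptions): the sigmoid gates regulate how much of the cell state is kept, written, and exposed.

```python
import numpy as np

rng = np.random.default_rng(5)
n_in, n_hid = 3, 4
Wx = rng.normal(size=(n_in, 4 * n_hid)) * 0.1   # input weights for i, f, g, o
Wh = rng.normal(size=(n_hid, 4 * n_hid)) * 0.1  # recurrent weights
b = np.zeros(4 * n_hid)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def lstm_step(x, h, c):
    z = x @ Wx + h @ Wh + b
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input/forget/output gates
    g = np.tanh(g)                                # candidate cell update
    c_new = f * c + i * g                         # gated state update
    h_new = o * np.tanh(c_new)
    return h_new, c_new

h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(6):                                # unroll over a short sequence
    h, c = lstm_step(rng.normal(size=n_in), h, c)
print("h:", h)
```

For the time-series task, the unrolled hidden state feeds a final linear layer predicting the next value; varying the unroll length is exactly the sequence-length experiment the hint suggests.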

Question 12

● Aims: Generate new data using generative models.
Problem Aim: Train a GAN to generate images of handwritten digits.
Hint: Monitor the loss for both generator and discriminator to ensure stable training.
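
The two losses being monitored can be sketched with a trivial 1-D "discriminator" d(x) = sigmoid(a·x + b) on scalar samples (every name and distribution here is an assumption; real digit GANs replace these with networks):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def d_loss(a, b, real, fake):
    # Discriminator objective: push d(real) -> 1 and d(fake) -> 0.
    return (-np.mean(np.log(sigmoid(a * real + b) + 1e-8))
            - np.mean(np.log(1 - sigmoid(a * fake + b) + 1e-8)))

def g_loss(a, b, fake):
    # Non-saturating generator objective: push d(fake) -> 1.
    return -np.mean(np.log(sigmoid(a * fake + b) + 1e-8))

rng = np.random.default_rng(6)
real = rng.normal(loc=2.0, size=100)     # "data" distribution
fake = rng.normal(loc=-2.0, size=100)    # an untrained "generator" output
a, b = 1.0, 0.0                          # a discriminator that separates them well
print("D loss:", d_loss(a, b, real, fake), "G loss:", g_loss(a, b, fake))
```

A low discriminator loss paired with a high generator loss, as here, signals an over-strong discriminator; stable training keeps the two roughly balanced, which is what plotting both curves reveals.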
