Deep Learning
Assignment- Week 8
TYPE OF QUESTION: MCQ/MSQ
Number of questions: 10 Total mark: 10 X 1 = 10
______________________________________________________________________________
QUESTION 1:
Which of the following is false about CNNs?
Detailed Solution:
QUESTION 2:
The input image has been converted into a matrix of size 64 x 64, and a kernel/filter of size 5 x 5 is applied
with a stride of 1 and no padding. What will be the size of the convolved output matrix?
a. 5x5
b. 59x59
c. 64x64
d. 60x60
Correct Answer: d
Detailed Solution:
The size of the convolved matrix is C x C, where C = ((I - F + 2P)/S) + 1; here I is the size of
the input matrix, F the size of the filter, P the padding applied to the input, and S the stride.
With I = 64, F = 5, P = 0 and S = 1, C = ((64 - 5 + 0)/1) + 1 = 60. Therefore, the answer is 60x60.
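As a quick sanity check, the same formula can be evaluated in a few lines of Python (a minimal sketch; the helper name conv_output_size is just for illustration):

def conv_output_size(I, F, P, S):
    """Output size C = ((I - F + 2P) / S) + 1 for a square input and filter."""
    return (I - F + 2 * P) // S + 1

# Question 2: 64x64 input, 5x5 filter, stride 1, no padding
print(conv_output_size(I=64, F=5, P=0, S=1))  # 60 -> the output is 60x60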
______________________________________________________________________________
QUESTION 3:
A filter of size 3x3 is convolved with a matrix of size 4x4 (stride = 1). What will be the size of the output
matrix if valid padding is applied?
a. 4x4
b. 3x3
c. 2x2
d. 1x1
Correct Answer: c
Detailed Solution:
Valid padding means no padding is applied (P = 0). The output matrix after
convolution has dimension ((n - f + 2P)/S + 1) x ((n - f + 2P)/S + 1). With n = 4, f = 3, P = 0 and
S = 1, this gives (4 - 3 + 0)/1 + 1 = 2, i.e., a 2x2 output.
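The same result can also be verified empirically; a minimal sketch, assuming PyTorch is available:

import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 4, 4)              # one 4x4 single-channel input
w = torch.randn(1, 1, 3, 3)              # one 3x3 filter
y = F.conv2d(x, w, stride=1, padding=0)  # "valid" padding = no padding
print(y.shape)                           # torch.Size([1, 1, 2, 2]) -> a 2x2 output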
______________________________________________________________________________
QUESTION 4:
Let us consider a Convolutional Neural Network having three different convolutional layers in
its architecture as:
Layer 3 of the above network is followed by a fully connected layer. If we give a 3-D
image input of dimension 39 X 39 to the network, which of the following is the input
dimension of the fully connected layer?
a. 1960
b. 2200
c. 4563
d. 13690
Correct Answer: a
Detailed Solution:
The input image of dimension 39 X 39 X 3 is convolved with 10 filters of size 3 X 3 with
stride 1 and no padding, which gives an output of 37 X 37 X 10. Passing this through the
remaining two convolutional layers in the same way and flattening the final feature map
yields a vector of length 1960, which is the input dimension of the fully connected layer.
______________________________________________________________________________
QUESTION 5:
Suppose you have 64 convolutional kernels of size 3 x 3 with no padding and stride 1 in the first
layer of a convolutional neural network. You pass an input of dimension 1024x1024x3 through
this layer. What are the dimensions of the data which the next layer will receive?
a. 1020x1020x64
b. 1022x1022x64
c. 1021x1021x64
d. 1022x1022x3
Correct Answer: b
Detailed Solution:
The convolution layer is specified by four hyperparameters: the number of filters K=64, their
spatial extent F=3, the stride S=1, and the amount of padding P=0. The output spatial size is
(1024 - 3 + 0)/1 + 1 = 1022 and the output depth equals K, so the next layer receives a volume
of dimension 1022x1022x64.
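A minimal sketch (assuming PyTorch is available) that confirms the output volume of such a layer:

import torch
import torch.nn as nn

layer = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=0)
x = torch.randn(1, 3, 1024, 1024)   # one 1024x1024 RGB image
print(layer(x).shape)               # torch.Size([1, 64, 1022, 1022])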
____________________________________________________________________________
QUESTION 6:
Consider a CNN model which aims at classifying an image as either a rose, a marigold, a lily,
or an orchid (assume a test image contains exactly one of these classes at a time). The last (fully-
connected) layer of the CNN outputs a vector of logits, L, that is passed through a ____
activation that transforms the logits into probabilities, P. These probabilities are the model
predictions for each of the 4 classes.
a. Leaky ReLU
b. Tanh
c. ReLU
d. Softmax
Correct Answer: d
Detailed Solution:
Softmax works best if there is one true class per example, because it outputs a probability
vector whose entries sum to 1.
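For illustration, a minimal sketch (assuming PyTorch is available; the logit values are made up) of how softmax turns the 4-class logits into probabilities:

import torch

logits = torch.tensor([2.0, 0.5, -1.0, 0.1])  # hypothetical logits L for rose, marigold, lily, orchid
probs = torch.softmax(logits, dim=0)          # probabilities P
print(probs, probs.sum())                     # entries lie in (0, 1) and sum to 1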
____________________________________________________________________________
QUESTION 7:
Suppose your input is a 300 by 300 color (RGB) image, and you use a convolutional layer with
100 filters that are each 5x5. How many parameters does this hidden layer have (without bias)
a. 2501
b. 2600
c. 7500
d. 7600
Correct Answer: c
Detailed Solution:
Each filter has 5 * 5 * 3 = 75 weights, and we have 100 such filters. As there is no bias, the
total number of parameters = 5 * 5 * 3 * 100 = 7500.
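The count can be cross-checked with a short sketch (assuming PyTorch is available):

import torch.nn as nn

layer = nn.Conv2d(in_channels=3, out_channels=100, kernel_size=5, bias=False)
print(sum(p.numel() for p in layer.parameters()))  # 5 * 5 * 3 * 100 = 7500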
______________________________________________________________________________
QUESTION 8:
Which of the following activation functions can lead to vanishing gradients?
a. ReLU
b. Sigmoid
c. Leaky ReLU
d. None of the above
Correct Answer: b
Detailed Solution:
For the sigmoid activation, in its saturation regions a large change in the input causes only a
small change in the output, so the derivative becomes very small. When more and more
layers use such an activation, the gradient of the loss function shrinks as it is backpropagated,
becoming very small and making the network difficult to train.
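A small numerical sketch in plain Python (the input value and layer count are chosen arbitrarily) of how repeated sigmoid derivatives shrink the gradient:

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# The sigmoid derivative is sigmoid(x) * (1 - sigmoid(x)), which is at most 0.25.
x = 2.0
d = sigmoid(x) * (1 - sigmoid(x))
print(d)        # ~0.105
print(d ** 10)  # ~1.6e-10: across 10 such layers the gradient factor almost vanishes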
___________________________________________________________________________
QUESTION 9:
Statement 1: Residual networks can be a solution for vanishing gradient problem
Statement 3: Residual networks can never be a solution for vanishing gradient problem
a. Statement 2 is correct
b. Statement 3 is correct
c. Both Statement 1 and Statement 2 are correct
d. Both Statement 2 and Statement 3 are correct
Correct Answer: c
Detailed Solution:
In a residual block, the skip (identity) connection adds the input of the block directly to the
output of the block. During backpropagation the gradient can flow unattenuated through this
identity path, so residual networks can mitigate the vanishing gradient problem.
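A minimal residual-block sketch (assuming PyTorch is available; the layer sizes are arbitrary) showing the identity path y = F(x) + x:

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.conv2(self.relu(self.conv1(x)))
        return self.relu(out + x)  # skip connection: gradient can flow through the identity path

x = torch.randn(1, 16, 32, 32)
print(ResidualBlock(16)(x).shape)  # torch.Size([1, 16, 32, 32])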
____________________________________________________________________________
QUESTION 10:
Input to SoftMax activation function is [0.5,0.5,1]. What will be the output?
a. [0.28,0.28,0.44]
b. [0.022,0.956, 0.022]
c. [0.045,0.910,0.045]
d. [0.42, 0.42,0.16]
Correct Answer: a
Detailed Solution:
SoftMax: P_i = exp(L_i) / sum_j exp(L_j). With input [0.5, 0.5, 1], exp(0.5) ≈ 1.649 and
exp(1) ≈ 2.718, so the denominator is ≈ 6.016 and the output is approximately
[0.27, 0.27, 0.45], which is closest to option (a).
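The arithmetic can be checked with a short sketch (assuming NumPy is available):

import numpy as np

logits = np.array([0.5, 0.5, 1.0])
probs = np.exp(logits) / np.exp(logits).sum()
print(probs.round(3))  # [0.274 0.274 0.452] -> closest to option (a)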
______________________________________________________________________________
************END*******