
NPTEL Online Certification Courses

Indian Institute of Technology Kharagpur

Deep Learning
Assignment- Week 8
TYPE OF QUESTION: MCQ/MSQ
Number of questions: 10 Total mark: 10 X 1 = 10
______________________________________________________________________________

QUESTION 1:
Which of the following is false about CNNs?

a. Output should be flattened before feeding it to a fully connected layer
b. There can be only one fully connected layer in a CNN
c. We can use as many convolutional layers as needed in a CNN
d. None of the above
Correct Answer: b

Detailed Solution:

Statement (b) is false: a CNN can have more than one fully connected layer. Direct from classroom lecture.


______________________________________________________________________________

QUESTION 2:
The input image has been converted into a matrix of size 64 X 64, and a kernel/filter of size 5x5 with a stride of 1 and no padding is applied. What will be the size of the convolved matrix?

a. 5x5
b. 59x59
c. 64x64
d. 60x60

Correct Answer: d

Detailed Solution:

The size of the convolved matrix is C x C, where C = ((I - F + 2P)/S) + 1, I is the size of the input matrix, F the size of the filter, P the padding applied to the input, and S the stride. Here P = 0, I = 64, F = 5 and S = 1. Therefore, C = ((64 - 5 + 0)/1) + 1 = 60, and the answer is 60x60.
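
For reference, the same formula can be checked with a short Python snippet (a hypothetical helper, not part of the assignment):

def conv_output_size(i, f, p=0, s=1):
    # Output side length of a square convolution: ((I - F + 2P) / S) + 1
    return (i - f + 2 * p) // s + 1

print(conv_output_size(64, 5, p=0, s=1))  # 60, i.e. a 60x60 feature map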
______________________________________________________________________________

QUESTION 3:

A filter of size 3x3 is convolved with a matrix of size 4x4 (stride = 1). What will be the size of the output matrix if valid padding is applied?

a. 4x4
b. 3x3
c. 2x2
d. 1x1

Correct Answer: c

Detailed Solution:

"Valid" padding means no padding is applied (P = 0). The output matrix after convolution has dimension ((n - f + 2P)/S + 1) x ((n - f + 2P)/S + 1) = ((4 - 3 + 0)/1 + 1) x ((4 - 3 + 0)/1 + 1) = 2 x 2.

______________________________________________________________________________

QUESTION 4:
Let us consider a Convolutional Neural Network having three different convolutional layers in
its architecture as:

Layer-1: Filter Size 3 X 3, Number of Filters 10, Stride 1, Padding 0

Layer-2: Filter Size 5 X 5, Number of Filters 20, Stride 2, Padding 0

Layer-3: Filter Size 5 X5 , Number of Filters 40, Stride 2, Padding 0

Layer 3 of the above network is followed by a fully connected layer. If we give a 3-D image input of dimension 39 X 39 X 3 to the network, then which of the following is the input dimension of the fully connected layer?

a. 1960
b. 2200
c. 4563
d. 13690

Correct Answer: a

Detailed Solution:

The input image of dimension 39 X 39 X 3 is convolved with 10 filters of size 3 X 3 with stride 1 and no padding. After this operation, we get an output of 37 X 37 X 10.

The output of layer 2 is ((37 - 5)/2) + 1 = 17, i.e. 17 x 17 x 20.

The output of layer 3 is ((17 - 5)/2) + 1 = 7, i.e. 7 x 7 x 40. Flattening this gives 7 x 7 x 40 = 1960.
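
The dimension trace can be verified with a small Python sketch (hypothetical helper names, assuming the layer settings above):

def conv_out(i, f, s=1, p=0):
    # ((I - F + 2P) / S) + 1 for a square input and filter
    return (i - f + 2 * p) // s + 1

size = 39
layers = [(3, 10, 1), (5, 20, 2), (5, 40, 2)]  # (filter size, number of filters, stride)
for f, k, s in layers:
    size = conv_out(size, f, s)
    print(size, size, k)                        # 37 37 10, then 17 17 20, then 7 7 40
print(size * size * layers[-1][1])              # 1960, the fully connected input dimension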

______________________________________________________________________________

QUESTION 5:
Suppose you have 64 convolutional kernels of size 3 x 3 with no padding and stride 1 in the first layer of a convolutional neural network. You pass an input of dimension 1024x1024x3 through this layer. What are the dimensions of the data which the next layer will receive?

a. 1020x1020x64
b. 1022x1022x64
c. 1021x1021x64
d. 1022x1022x3

Correct Answer: b

Detailed Solution:

The layer accepts a volume of size W1 x H1 x D1; in our case, 1024 x 1024 x 3.

It requires four hyperparameters: number of filters K = 64, their spatial extent F = 3, the stride S = 1, and the amount of padding P = 0.

It produces a volume of size W2 x H2 x D2, where:

W2 = (1024 - 3)/1 + 1 = 1022 and H2 = (1024 - 3)/1 + 1 = 1022 (width and height are computed equally by symmetry), and D2 = number of filters K = 64.

So the next layer receives data of dimension 1022 x 1022 x 64.
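
As a sanity check, the same shape can be reproduced with PyTorch (assuming torch is available; an illustration only, not part of the solution):

import torch

conv = torch.nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=0)
x = torch.randn(1, 3, 1024, 1024)  # a batch of one 1024x1024 RGB image
print(conv(x).shape)               # torch.Size([1, 64, 1022, 1022])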

____________________________________________________________________________

QUESTION 6:
Consider a CNN model which aims at classifying an image as a rose, a marigold, a lily, or an orchid (the test image can contain only one of these classes at a time). The last (fully-connected) layer of the CNN outputs a vector of logits, L, that is passed through a ____ activation that transforms the logits into probabilities, P. These probabilities are the model's predictions for each of the 4 classes.

Fill in the blanks with the appropriate option.

a. Leaky ReLU
b. Tanh
c. ReLU
d. Softmax

Correct Answer: d

Detailed Solution:

Softmax works best if there is one true class per example, because it outputs a probability
vector whose entries sum to 1.

____________________________________________________________________________

QUESTION 7:
Suppose your input is a 300 by 300 color (RGB) image, and you use a convolutional layer with
100 filters that are each 5x5. How many parameters does this hidden layer have (without bias)?

a. 2501
b. 2600
c. 7500
d. 7600

Correct Answer: c

Detailed Solution:

Since the input is an RGB image, each filter is three-dimensional with dimensions 5 * 5 * 3 = 75.

We have 100 such filters and no bias terms, so the total number of parameters = 5 * 5 * 3 * 100 = 7500.
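
The count can be confirmed in PyTorch (illustrative only, assuming torch is installed):

import torch

conv = torch.nn.Conv2d(in_channels=3, out_channels=100, kernel_size=5, bias=False)
print(sum(p.numel() for p in conv.parameters()))  # 7500 = 5 * 5 * 3 * 100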

______________________________________________________________________________

QUESTION 8:
Which of the following activation functions can lead to vanishing gradients?

a. ReLU
b. Sigmoid
c. Leaky ReLU
d. None of the above

Correct Answer: b

Detailed Solution:

For the sigmoid activation, a large change in the input causes only a small change in the output once the function saturates, so its derivative becomes small. When more and more layers use such an activation, the gradient of the loss function becomes very small, making the network difficult to train.
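
A tiny numerical sketch (illustrative, not from the lecture) shows why: the sigmoid derivative is at most 0.25, so multiplying it across many layers drives the gradient toward zero.

import math

def sigmoid_grad(x):
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)

grad = 1.0
for _ in range(10):
    grad *= sigmoid_grad(0.0)  # 0.25, the maximum possible value of the derivative
print(grad)                    # 0.25 ** 10, roughly 9.5e-07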

___________________________________________________________________________

QUESTION 9:
Statement 1: Residual networks can be a solution for vanishing gradient problem

Statement 2: Residual networks provide residual connections straight to earlier layers

Statement 3: Residual networks can never be a solution for vanishing gradient problem

Which of the following option is correct?

a. Statement 2 is correct
b. Statement 3 is correct
c. Both Statement 1 and Statement 2 are correct
d. Both Statement 2 and Statement 3 are correct

Correct Answer: c

Detailed Solution:

Residual networks can be a solution to vanishing gradient problems, as they provide skip (residual) connections that carry the input of a block directly to its output, letting gradients flow straight back to earlier layers and bypass the intermediate layers of the block.
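
A minimal sketch of a residual (skip) connection, assuming PyTorch (hypothetical class name, not from the lecture):

import torch

class ResidualBlock(torch.nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = torch.nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        # The identity path x is added back, so gradients can flow straight to earlier layers.
        return torch.relu(self.conv(x) + x)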

____________________________________________________________________________

QUESTION 10:
The input to the SoftMax activation function is [0.5, 0.5, 1]. What will be the output?

a. [0.28,0.28,0.44]
b. [0.022,0.956, 0.022]
c. [0.045,0.910,0.045]
d. [0.42, 0.42,0.16]

Correct Answer: a

Detailed Solution:

SoftMax: P_i = e^(z_i) / (e^(z_1) + e^(z_2) + e^(z_3))

Therefore, P_1 = e^(0.5) / (e^(0.5) + e^(0.5) + e^(1)) ≈ 0.28, and similarly for the other values, giving approximately [0.28, 0.28, 0.44].
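
The values can be verified directly (an illustrative check, not part of the original solution):

import math

logits = [0.5, 0.5, 1.0]
exps = [math.exp(z) for z in logits]
probs = [e / sum(exps) for e in exps]
print(probs)  # roughly [0.27, 0.27, 0.45], closest to option (a)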

______________________________________________________________________

______________________________________________________________________________

************END*******
