DL - Assignment 5 Solution

This document contains a 10-question multiple-choice quiz on deep learning concepts. The questions cover topics such as activation functions, sigmoid function properties, benefits of ReLU activation, calculating network parameters, classification loss functions, and calculating gradients in neural networks. For each question, the correct answer and a detailed solution are provided.

NPTEL Online Certification Courses

Indian Institute of Technology Kharagpur

Deep Learning
Assignment- Week 5
TYPE OF QUESTION: MCQ/MSQ
Number of questions: 10; Total marks: 10 × 1 = 10
______________________________________________________________________________

QUESTION 1:

Look at the following figures. Which of the options correctly identifies the activation functions?

[Figure: four activation-function plots, labeled (1)–(4)]

a. Figure 1: Sigmoid, Figure 2: Leaky ReLU, Figure 3: Tanh, Figure 4: ReLU


b. Figure 4: Sigmoid, Figure 3: Leaky ReLU, Figure 2: Tanh, Figure 1: ReLU
c. Figure 2: Sigmoid, Figure 3: Leaky ReLU, Figure 4: Tanh, Figure 1: ReLU
d. Figure 3: Sigmoid, Figure 2: Leaky ReLU, Figure 1: Tanh, Figure 4: ReLU

Correct Answer: a

Detailed Solution:

The figures together with option (a) are self-explanatory.

QUESTION 2:
What is the output range of the sigmoid function for an input with dynamic range [0, ∞]?

a. [0, 1]
b. [−1, 1]
c. [0.5, 1]
d. [0.25, 1]

Correct Answer: c

Detailed Solution:

Sigmoid(x) = 1 / (1 + e^(−x))

If x = 0, Sigmoid(0) = 1 / (1 + e^0) = 1 / (1 + 1) = 0.5

If x → ∞, Sigmoid(∞) = 1 / (1 + e^(−∞)) = 1 / (1 + 0) = 1

Hence the output lies in [0.5, 1].
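The two limiting values can be checked numerically; a minimal Python sketch:

```python
import math

def sigmoid(x):
    """Logistic sigmoid: 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

# At x = 0 the sigmoid is exactly 0.5.
print(sigmoid(0))   # 0.5
# For large positive x it approaches 1.
print(sigmoid(50))  # ~1.0
```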

QUESTION 3:
Find the gradient component ∂J/∂w₁ for the network shown below, if J(·) = 0.5(p̂ − p)² is the loss function and p is the target.

a. 2𝑝̂ × 𝑥1
b. 2(𝑝̂ − 𝑝) × 𝑥1
c. (𝑝̂ − 𝑝) × 𝑥1
d. 2(1 − 𝑝) × 𝑥1

Correct Answer: c

Detailed Solution:

J(·) = 0.5(p̂ − p)²

p̂ = x₁w₁ + x₂w₂ + 1

Using the chain rule,

∂J/∂w₁ = (∂J/∂p̂) · (∂p̂/∂w₁)

∂J/∂p̂ = (p̂ − p),  ∂p̂/∂w₁ = x₁

∂J/∂w₁ = (p̂ − p) × x₁
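The analytic gradient can be verified with a finite-difference check; the numeric values of x₁, x₂, w₁, w₂ and the target p below are arbitrary assumptions chosen only for illustration:

```python
def loss(p_hat, p):
    """J = 0.5 * (p_hat - p)^2"""
    return 0.5 * (p_hat - p) ** 2

def forward(w1, x1=2.0, x2=3.0, w2=0.5):
    """p_hat = x1*w1 + x2*w2 + 1, as in the network above."""
    return x1 * w1 + x2 * w2 + 1.0

w1, p, x1 = 1.5, 4.0, 2.0

# Analytic gradient from the chain rule: dJ/dw1 = (p_hat - p) * x1
analytic = (forward(w1) - p) * x1

# Central finite difference on w1
eps = 1e-6
numeric = (loss(forward(w1 + eps), p) - loss(forward(w1 - eps), p)) / (2 * eps)
assert abs(analytic - numeric) < 1e-4
```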

QUESTION 4:
Which of the following are potential benefits of using ReLU activation over sigmoid activation?

a. ReLu helps in creating dense (most of the neurons are active) representations
b. ReLu helps in creating sparse (most of the neurons are non-active)
representations
c. ReLu helps in mitigating vanishing gradient effect
d. Both (b) and (c)

Correct Answer: d

Detailed Solution:

ReLU(x) = max(0, x). Since neuron values are clipped to zero for negative inputs, ReLU yields sparse representations: an appreciable fraction of pre-activations is typically negative, and those neurons output exactly zero. Sigmoid, on the other hand, always outputs a non-zero real value, so the representations are dense. Sparse representations are generally preferred over dense ones.

Moreover, the magnitude of the sigmoid gradient tends to zero as the input to the node grows. Since the gradient drives the weight updates during back-propagation, this leads to the vanishing-gradient problem and slower learning.

ReLU, on the other hand, offers a constant gradient for all x > 0 and is thus free from the vanishing-gradient problem.
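The sparsity argument can be illustrated with random pre-activations; the symmetric uniform distribution here is an assumption made only for illustration:

```python
import random

def relu(x):
    return max(0.0, x)

# Pre-activations drawn symmetrically around zero: roughly half of
# the ReLU outputs are exactly 0 (sparse representation), whereas a
# sigmoid output is never exactly 0 (dense representation).
random.seed(0)
pre = [random.uniform(-1, 1) for _ in range(1000)]
sparsity = sum(1 for v in pre if relu(v) == 0.0) / len(pre)
print(round(sparsity, 2))  # roughly 0.5
```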

QUESTION 5:
Suppose a fully-connected neural network has a single hidden layer with 50 nodes. The input is
represented by a 5D feature vector and we have a binary classification problem. Calculate the total
number of parameters of the network. Consider there are NO bias nodes in the network.

a. 250
b. 120
c. 350
d. 300

Correct Answer: d

Detailed Solution:

Number of parameters = (5 × 50) + (50 × 1) = 250 + 50 = 300. A single output node suffices for binary classification.
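The count generalizes to any bias-free fully-connected stack; a minimal sketch:

```python
def num_params(layer_sizes):
    """Weight count of a fully-connected net with no bias nodes:
    sum of (fan_in * fan_out) over consecutive layer pairs."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# 5 inputs -> 50 hidden -> 1 output (binary classification)
print(num_params([5, 50, 1]))  # 300
```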

QUESTION 6:
A 3-input neuron has weights 1.5, 0.5, 0.5. The transfer function is linear, with the constant of
proportionality being equal to 2. The inputs are 6, 20, 4 respectively. The output will be:
a. 40
b. 42
c. 32
d. 12
Correct Answer: b

Detailed Solution:

To find the output, we multiply each weight by its respective input, add the results, and then multiply the sum by the proportionality constant of the transfer function.
Thus, output = 2 × (1.5×6 + 0.5×20 + 0.5×4) = 2 × 21 = 42
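The same computation as a small helper:

```python
def linear_neuron(inputs, weights, k=2.0):
    """Weighted sum of inputs, scaled by the proportionality
    constant k of the linear transfer function."""
    return k * sum(w * x for w, x in zip(weights, inputs))

print(linear_neuron([6, 20, 4], [1.5, 0.5, 0.5]))  # 42.0
```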
QUESTION 7:
You want to build a 5-class neural network classifier, given a leaf image, you want to classify
which of the 5 leaf breeds it belongs to. Which among the 4 options would be an appropriate
loss function to use for this task?

a. Cross Entropy Loss


b. MSE Loss
c. SSIM Loss
d. None of the above

Correct Answer: a

Detailed Solution:

Out of the given options, Cross Entropy Loss is well suited for classification problems, which is the end task given in the question.

QUESTION 8:
Consider the neural network below. p̂ is the output after applying the non-linearity f_NL(·) on y. The non-linearity f_NL(·) is given as a step function, i.e.,

f(v) = { 0, if v < 0; 1, if v ≥ 0 }

The weights are given as 𝑤1 = 2, 𝑤2 = −1.5, 𝑤3 = 1

Choose the correct outputs generated by the network when the inputs are
{𝑥1 = 1, 𝑥2 = 0, 𝑥3 = 0} and {𝑥1 = 0, 𝑥2 = 1, 𝑥3 = 1}. Outputs are in the same order as inputs.

a. 1, 1
b. 0, 0
c. 1, 0
d. 0, 1

Correct Answer: c

Detailed Solution:
y = x₁w₁ + x₂w₂ + x₃w₃ = 2×1 − 1.5×0 + 1×0 = 2, so f(y) = 1 as y ≥ 0

Similarly, for the other point:

y = x₁w₁ + x₂w₂ + x₃w₃ = 2×0 − 1.5×1 + 1×1 = −0.5, so f(y) = 0 as y < 0
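Both forward passes can be sketched with a small helper; the bias argument (defaulting to 0) is an addition so the same function also covers the biased network of Question 9:

```python
def step_neuron(inputs, weights, bias=0.0):
    """Weighted sum followed by a unit step thresholded at 0."""
    y = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if y >= 0 else 0

w = [2.0, -1.5, 1.0]
print(step_neuron([1, 0, 0], w))  # 1  (y = 2)
print(step_neuron([0, 1, 1], w))  # 0  (y = -0.5)
```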

QUESTION 9:
Consider the neural network below, where a bias b has been added to the previous network. p̂ is the output after applying the non-linearity f_NL(·) on y. The non-linearity f_NL(·) is given as a step function, i.e.,

f(v) = { 0, if v < 0; 1, if v ≥ 0 }

The weights are given as 𝑤1 = 2, 𝑤2 = −1.5, 𝑤3 = 1, 𝑤0 = 1

Choose the correct outputs generated by the network when the inputs are
{𝑥1 = 1, 𝑥2 = 0, 𝑥3 = 0} and {𝑥1 = 0, 𝑥2 = 1, 𝑥3 = 1}. Outputs are in the same order as inputs.

a. 1, 1
b. 0, 0
c. 1, 0
d. 0, 1

Correct Answer: a

Detailed Solution:
y = x₁w₁ + x₂w₂ + x₃w₃ + 1 = 2×1 − 1.5×0 + 1×0 + 1 = 3, so f(y) = 1 as y ≥ 0

Similarly, for the other point:

y = x₁w₁ + x₂w₂ + x₃w₃ + 1 = 2×0 − 1.5×1 + 1×1 + 1 = 0.5, so f(y) = 1 as y ≥ 0

QUESTION 10:
Suppose a neural network has 3 input nodes x, y, z. There are 2 neurons, Q and F, with Q = x + y
and F = Q × z. What is the gradient of F with respect to x, y and z? Assume (x, y, z) = (−2, 5, −4).
a. (-4, 3, -3)
b. (-4, -4, 3)
c. (4, 4, -3)
d. (3, 3, 4)

Correct Answer: b

Detailed Solution:

F = Q·z, so ∂F/∂z = Q = x + y = 3

F = Q·z = (x + y)·z, so ∂F/∂x = z = −4 and ∂F/∂y = z = −4
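The chain-rule result can be confirmed with a finite-difference check on one of the inputs:

```python
x, y, z = -2.0, 5.0, -4.0
Q = x + y          # Q = 3
F = Q * z          # F = -12

# Analytic gradients from the chain rule
dF_dx = z          # -4
dF_dy = z          # -4
dF_dz = Q          #  3

# Central finite difference on x to verify dF/dx
eps = 1e-6
numeric = (((x + eps) + y) * z - ((x - eps) + y) * z) / (2 * eps)
assert abs(numeric - dF_dx) < 1e-4
print(dF_dx, dF_dy, dF_dz)  # -4.0 -4.0 3.0
```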

______________________________________________________________________________

************END*******
