Spring 2015 Mid-Sem Q_A
Q. 1
(a) µA (x) and µB (x) are the membership functions of the fuzzy sets A and B, respectively.
µA(x) = 1 / e^{1+x}

µB(x) = 1 / (1 + ((x − 50)/10)^4)
(b) Given two fuzzy sets A and B defined over the universes of discourse X and Y , respectively.
A = {(20, 0.2), (25, 0.4), (30, 0.6), (35, 0.6), (40, 0.7), (45, 0.8),
(50, 0.8)}
B = {(1, 0.8), (2, 0.8), (3, 0.6), (4, 0.4)}
X = {10, 15, 20, 25, 30, 35, 40, 45, 50, 55}
Y = {0, 1, 2, 3, 4, 5}
Draw the graphs for the following.
i. A × B
ii. A ⇒ B
(i) A × B

µ_{A×B}(x, y) = min{µA(x), µB(y)}

           y = 1   y = 2   y = 3   y = 4
  x = 20    0.2     0.2     0.2     0.2
  x = 25    0.4     0.4     0.4     0.4
  x = 30    0.6     0.6     0.6     0.4
  x = 35    0.6     0.6     0.6     0.4
  x = 40    0.7     0.7     0.6     0.4
  x = 45    0.8     0.8     0.6     0.4
  x = 50    0.8     0.8     0.6     0.4
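The min table above can be reproduced with a short sketch (illustrative Python, not part of the original answer; the dictionaries simply encode A and B as given in the question):

```python
# Fuzzy Cartesian product: mu_AxB(x, y) = min(mu_A(x), mu_B(y))
A = {20: 0.2, 25: 0.4, 30: 0.6, 35: 0.6, 40: 0.7, 45: 0.8, 50: 0.8}
B = {1: 0.8, 2: 0.8, 3: 0.6, 4: 0.4}

AxB = {(x, y): min(mu_a, mu_b) for x, mu_a in A.items() for y, mu_b in B.items()}

print(AxB[(40, 3)])  # min(0.7, 0.6) = 0.6
```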
(ii) A ⇒ B
For this, many interpretations are possible:
A ⇒ B ≡ Ā ∪ B, or A × B, or (A × B) ∪ (Ā × Y )
Accordingly, the answer will differ with the interpretation chosen.
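As an illustration of the interpretation (A × B) ∪ (Ā × Y ), here is a hedged sketch (assuming membership 0 for elements of X and Y not listed in A and B, and µY(y) = 1 over the whole universe Y):

```python
# Zadeh's max-min rule: mu(x, y) = max( min(mu_A(x), mu_B(y)), 1 - mu_A(x) )
# (min(1 - mu_A(x), mu_Y(y)) reduces to 1 - mu_A(x) since mu_Y(y) = 1).
A = {20: 0.2, 25: 0.4, 30: 0.6, 35: 0.6, 40: 0.7, 45: 0.8, 50: 0.8}
B = {1: 0.8, 2: 0.8, 3: 0.6, 4: 0.4}
X = [10, 15, 20, 25, 30, 35, 40, 45, 50, 55]
Y = [0, 1, 2, 3, 4, 5]

R = {(x, y): max(min(A.get(x, 0.0), B.get(y, 0.0)), 1 - A.get(x, 0.0))
     for x in X for y in Y}

print(R[(45, 1)])  # max(min(0.8, 0.8), 1 - 0.8) = 0.8
```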
[4+4]
Q. 2
(a) Suppose a fuzzy rule is ‘If x is A then y is B’. How would you find the following?
i. x is C, given that y is D
ii. y is D, given that x is C
Answer :
GMP (Generalized Modus Ponens) is
if x is A then y is B
x is A′
− − − − − − − − − − − − − −−
y is B′
GMT (Generalized Modus Tollens) is
if x is A then y is B
y is B′
− − − − − − − − − − − − − −−
x is A′
Since C and D are not stated to be A′ or B′ (or vice versa), neither GMP nor GMT is applicable in this case, and hence we cannot deduce anything.
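For completeness, had the premise been given as A′, GMP would infer B′ through the compositional rule of inference, B′ = A′ ∘ R with R = A × B, using max–min composition. A minimal sketch with purely hypothetical membership values:

```python
# GMP by the compositional rule of inference:
# mu_B'(y) = max over x of min(mu_A'(x), mu_R(x, y)), with R = A x B (Mamdani).
# All membership values below are hypothetical illustrations.
A = {1: 1.0, 2: 0.6, 3: 0.2}
B = {10: 0.8, 20: 0.4}
R = {(x, y): min(a, b) for x, a in A.items() for y, b in B.items()}

A_prime = {1: 0.5, 2: 1.0, 3: 0.4}   # observed premise "x is A'"
B_prime = {y: max(min(A_prime[x], R[(x, y)]) for x in A) for y in B}
print(B_prime)  # {10: 0.6, 20: 0.4}
```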
[3+3]
∴ (P ∩ Q)_{0.4} = {x | µ_{P∩Q}(x) ≥ 0.4} = {x3, x4}
(ii)

P × Q =
        x1    x2    x3    x4    x5
  x1   0.1   0.1   0.1   0.1   0.1
  x2   0.2   0.2   0.2   0.2   0.2
  x3   0.7   0.6   0.3   0.2   0.7
  x4   0.5   0.5   0.3   0.2   0.4
  x5   0.4   0.4   0.3   0.2   0.4

(P × Q)_{0.4} =
        x1   x2   x3   x4   x5
  x1     0    0    0    0    0
  x2     0    0    0    0    0
  x3     1    1    0    0    1
  x4     1    1    0    0    1
  x5     1    1    0    0    1
[3+3]
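The 0.4-cut can be checked mechanically (a small sketch; the matrix encodes the P × Q membership values given above):

```python
# Alpha-cut of a fuzzy relation: an entry is 1 iff its membership >= alpha.
P_x_Q = [
    [0.1, 0.1, 0.1, 0.1, 0.1],  # x1
    [0.2, 0.2, 0.2, 0.2, 0.2],  # x2
    [0.7, 0.6, 0.3, 0.2, 0.7],  # x3
    [0.5, 0.5, 0.3, 0.2, 0.4],  # x4
    [0.4, 0.4, 0.3, 0.2, 0.4],  # x5
]
cut = [[1 if mu >= 0.4 else 0 for mu in row] for row in P_x_Q]
print(cut[2])  # x3 row: [1, 1, 0, 0, 1]
```

Note the (x3, x4) entry: its membership 0.2 is below 0.4, so the cut value is 0.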
Q. 3
(a) The membership functions of two fuzzy sets A and B are shown in the following graph.
A: climate is Hot.
B: climate is Cold.
[Graph: membership functions of B (Cold, on the left) and A (Hot, on the right), each peaking at 1.0, over the temperature axis −15 to 40.]
i. Draw the graph of the membership function, which represents the fuzzy set C: climate is
Extreme.
Answer : Climate is Extreme ≈ Climate is Hot OR Climate is Cold.
[Graph: µC = max{µA, µB}, the union of A and B, peaking at 1.0 at both the cold and hot ends of the axis −15 to 40.]
ii. What would be the graph of the membership function µD of the fuzzy set D = (A ∩ C)?
State D in linguistic terms.
Answer :
[Graph: membership function of D overlaid on those of B and A, over the axis −15 to 40.]
Linguistic interpretation: A ∩ B = Climate is Pleasant, and D = (A ∩ B)′ = Climate is not Pleasant.
[3+3]
(b) Two fuzzy relations ‘likes’ and ‘earns’ are defined below.

likes =
            Football   Hockey   Cricket
  Dhoni        0.1       0.3      0.8
  Virat        0.2       0.7      0.5
  Rohit        0.5       0.4      0.2
  Sekhar       0.4       0.5      0.6

For example, x likes Game.

earns =
            10L    50L    100L
  Dhoni     0.6    0.3    0.2
  Virat     0.4    0.7    0.8
  Rohit     0.3    0.1    0.2
  Sekhar    0.5    0.2    0.6
For example, x earns Money.
Obtain the relation from a game to money. [6]
Answer :
This relation can be obtained as likes^T ◦ earns (max–min composition). That is,

likes^T ◦ earns =
             10L    50L    100L
  Football   0.4    0.2    0.4
  Hockey     0.5    0.7    0.7
  Cricket    0.6    0.5    0.6
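The composition can be checked with a short sketch (using the membership values as read from the two tables; the max–min definition of ∘ is assumed):

```python
# likes^T o earns by max-min composition:
# mu(game, money) = max over person of min(likes[person][game], earns[person][money])
likes = {"Dhoni": [0.1, 0.3, 0.8], "Virat": [0.2, 0.7, 0.5],
         "Rohit": [0.5, 0.4, 0.2], "Sekhar": [0.4, 0.5, 0.6]}
earns = {"Dhoni": [0.6, 0.3, 0.2], "Virat": [0.4, 0.7, 0.8],
         "Rohit": [0.3, 0.1, 0.2], "Sekhar": [0.5, 0.2, 0.6]}
games = ["Football", "Hockey", "Cricket"]
money = ["10L", "50L", "100L"]

R = [[max(min(likes[p][g], earns[p][m]) for p in likes)
     for m in range(3)] for g in range(3)]
for game, row in zip(games, R):
    print(game, row)
```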
Q. 4
(a) What are the components you should consider in order to mathematically model an artificial neuron?
[4]
Answer : Two components, namely a summation unit and a threshold unit, are required to mathematically model an artificial neuron. The two components can be defined as follows.
[Diagram: inputs x1, x2, x3, …, xn with weights w1, w2, w3, …, wn feeding the summation unit I, followed by the transfer unit φ(I), which produces the output y.]
Summation unit: I = Σ_{i=1}^{n} x_i · w_i. Threshold unit: y = φ(I), where φ is some transfer function.
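The two units can be sketched as follows (illustrative code; the weights, inputs, and choice of φ are arbitrary examples):

```python
import math

# A single artificial neuron: a summation unit feeding a transfer unit phi.
def neuron(x, w, phi):
    I = sum(xi * wi for xi, wi in zip(x, w))  # summation unit
    return phi(I)                             # threshold / transfer unit

step = lambda I: 1 if I >= 0 else 0           # hard threshold
sigmoid = lambda I: 1 / (1 + math.exp(-I))    # smooth transfer function

print(neuron([1.0, 0.5], [0.4, -0.6], step))  # I = 0.4 - 0.3 = 0.1 -> 1
```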
(b) If φ(I) = 1/(1 + e^{−αI}) is a transfer function in a perceptron, then show that ∂φ(I)/∂I = α · φ(I) · (1 − φ(I)).
Answer :
φ(I) = 1/(1 + e^{−αI}). Let z = 1 + e^{−αI}, so that φ(I) = 1/z.
∴ ∂z/∂I = −α · e^{−αI} = −α · (1 − φ(I))/φ(I)
∂φ(I)/∂I = ∂φ(I)/∂z · ∂z/∂I
= (−1/z²) · (−α · (1 − φ(I))/φ(I))
= φ(I)² · α · (1 − φ(I))/φ(I), ∵ 1/z = φ(I)
= α · (1 − φ(I)) · φ(I) [3]
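The identity can also be verified numerically (a sketch; α = 1.5 and I = 0.7 are arbitrary test values, not from the question):

```python
import math

# Check d(phi)/dI = alpha * phi(I) * (1 - phi(I)) by central difference.
alpha = 1.5
phi = lambda I: 1 / (1 + math.exp(-alpha * I))

I, h = 0.7, 1e-6
numeric = (phi(I + h) - phi(I - h)) / (2 * h)
analytic = alpha * phi(I) * (1 - phi(I))
print(abs(numeric - analytic) < 1e-8)  # True
```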
(c) Draw a schematic diagram of a multi-layer feed-forward artificial neural network architecture and
clearly label the different elements in it.
Give one application where you should apply such an ANN architecture.
Answer :
Application : Such an ANN would be applied to problems whose outputs are not linearly separable with respect to the inputs. [4+1]
[Diagram: multi-layer feed-forward ANN. The input layer N1 (|N1| = l) receives the input I and produces outputs O_I; the weight matrix [V] = [v_ij] connects it to the hidden layer N2 (|N2| = m), which has inputs I_H and outputs O_H; the weight matrix [W] = [w_jk] connects the hidden layer to the output layer N3 (|N3| = n), which has inputs I_O and produces the final output O. The hidden-layer neurons use a log-sigmoid transfer function and the output-layer neurons use a tan-sigmoid transfer function.]
Q. 5
(a) Show how the computations in input, hidden and output layers of an ANN can be accomplished
in terms of matrix algebra.
[2+2+2]
Answer :
The whole learning method consists of the following three computations:
In our computation, we assume that ⟨T_O, T_I⟩ is the training set of size |T|.
• Let the input training data at any instant be I^I = [I_1^I, I_2^I, · · · , I_i^I, · · · , I_l^I]^T, where I^I ∈ T_I.
• The outputs of the neurons in the input layer are the same as the corresponding inputs; they become the inputs to the neurons in the hidden layer. That is,
O^I = I^I
[l × 1] = [l × 1] [Output of the input layer]
• The input of the j-th neuron in the hidden layer can be calculated as follows:
I_j^H = v_{1j} · O_1^I + v_{2j} · O_2^I + · · · + v_{lj} · O_l^I = Σ_{i=1}^{l} v_{ij} · O_i^I
where j = 1, 2, · · · , m.
[Calculation of the input of each node in the hidden layer]
• In the matrix representation form, we can write
I^H = V^T · O^I
[m × 1] = [m × l] [l × 1]
Note that the outputs of all the nodes in the hidden layer can be expressed as a one-dimensional column matrix:

O^H = [ · · ·  1/(1 + e^{−α_H · I_j^H})  · · · ]^T, of size m × 1
Let us calculate the input to any k-th node in the output layer. Since the outputs of all the nodes in the hidden layer go to the k-th node with weights w_{1k}, w_{2k}, · · · , w_{mk}, we have
I_k^O = w_{1k} · O_1^H + w_{2k} · O_2^H + · · · + w_{mk} · O_m^H
where k = 1, 2, · · · , n
In the matrix representation, we have
I^O = W^T · O^H
[n × 1] = [n × m] [m × 1]
Now, we estimate the output of the k-th neuron in the output layer. We consider the tan-sigmoid
transfer function.
O_k = (e^{α_O · I_k^O} − e^{−α_O · I_k^O}) / (e^{α_O · I_k^O} + e^{−α_O · I_k^O})
for k = 1, 2, · · · , n
Hence, the output of the output layer’s neurons can be represented as

O = [ · · ·  (e^{α_O · I_k^O} − e^{−α_O · I_k^O}) / (e^{α_O · I_k^O} + e^{−α_O · I_k^O})  · · · ]^T, of size n × 1
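The whole forward pass can be sketched with NumPy (the sizes l, m, n, the α values, and the random weights are all illustrative assumptions, not values from the question):

```python
import numpy as np

l, m, n = 4, 3, 2                       # layer sizes (illustrative)
alpha_H, alpha_O = 1.0, 1.0
rng = np.random.default_rng(0)
V = rng.uniform(-1, 1, (l, m))          # input -> hidden weights
W = rng.uniform(-1, 1, (m, n))          # hidden -> output weights

I_I = rng.uniform(0, 1, (l, 1))         # one input pattern, l x 1
O_I = I_I                               # output of the input layer
I_H = V.T @ O_I                         # [m x 1] = [m x l][l x 1]
O_H = 1 / (1 + np.exp(-alpha_H * I_H))  # log-sigmoid at the hidden layer
I_O = W.T @ O_H                         # [n x 1] = [n x m][m x 1]
O = np.tanh(alpha_O * I_O)              # tan-sigmoid at the output layer
print(O.shape)                          # (2, 1)
```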
(b) Explain the basic principle of calculating error in supervised learning.
[2]
Answer :
• Let us consider any k-th neuron at the output layer. For an input pattern I_i ∈ T_I (an input in the training set), let the target output of the k-th neuron be T_{O_k}.
• Then the error e_k of the k-th neuron corresponding to the input I_i is defined as
e_k = ½ · (T_{O_k} − O_{O_k})²
(c) Derive the ‘delta rule’ according to the method of steepest descent.
• For simplicity, let us consider the connecting weights to be the only design parameters.
• Suppose V and W are the weight matrices of the hidden and output layers, respectively.
• Thus, given a training set of size N , the error surface E can be represented as
E = Σ_{i=1}^{N} e_i(V, W, I_i)
where I_i is the i-th input pattern in the training set and e_i(· · ·) denotes the error computation for the i-th input.
• Now, we will discuss the steepest descent method of computing the error, given changes in the V and W matrices.
• Suppose A and B are two points on the error surface (see figure in Slide 30). The vector A⃗B can be written as
A⃗B = (V_{i+1} − V_i) · x̄ + (W_{i+1} − W_i) · ȳ = ∆V · x̄ + ∆W · ȳ
The gradient along A⃗B can be obtained as
e_{A⃗B} = (∂E/∂V) · x̄ + (∂E/∂W) · ȳ
and the corresponding unit vector is
ē_{A⃗B} = (1/|e_{A⃗B}|) · ((∂E/∂V) · x̄ + (∂E/∂W) · ȳ)
Hence, moving along the direction of steepest descent,
A⃗B = η · ((∂E/∂V) · x̄ + (∂E/∂W) · ȳ)
where η = k/|e_{A⃗B}| and k is a constant.
[2+2]
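The resulting update, ∆w ∝ ∂E/∂w taken against the gradient direction, can be illustrated on a one-weight toy error surface (all numbers below are hypothetical):

```python
# Delta-rule update w <- w - eta * dE/dw for E(w) = 1/2 * (t - w*x)^2.
x, t = 2.0, 1.0        # single training pair (illustrative)
w, eta = 0.0, 0.1      # initial weight and learning rate
for _ in range(50):
    grad = -(t - w * x) * x   # dE/dw
    w -= eta * grad           # step against the gradient
print(round(w * x, 4))        # the output converges to the target t = 1.0
```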