Spring 2015 Mid-Sem Q_A

School of Information Technology

Indian Institute of Technology Kharagpur


IT60108: Soft Computing and Application
Class Test - I
F.M. 20 Session 2014 − 2015 Time: 20 mins

Q. 1

(a) $\mu_A(x)$ and $\mu_B(x)$ are the membership functions of the fuzzy sets A and B, respectively.

$\mu_A(x) = e^{\frac{1}{1+x}}$

$\mu_B(x) = \dfrac{1}{1 + \left(\frac{x-50}{10}\right)^4}$

Decide whether A and B are closed or open.


Answer : A membership function $\mu(x)$ is said to be closed iff $\lim_{x\to-\infty} \mu(x) = 0 = \lim_{x\to+\infty} \mu(x)$.
Case 1 : $\mu_A(x) = e^{\frac{1}{1+x}}$. Here $\lim_{x\to-\infty} \mu_A(x) = \lim_{x\to+\infty} \mu_A(x) = 1$.
Hence, it is neither closed nor open.

Case 2 : $\mu_B(x) = \dfrac{1}{1 + \left(\frac{x-50}{10}\right)^4}$. Here $\lim_{x\to-\infty} \mu_B(x) = \lim_{x\to+\infty} \mu_B(x) = 0$.
Hence, it is closed.
[2+2]

(b) Given two fuzzy sets A and B defined over the universes of discourse X and Y , respectively.
A = {(20, 0.2), (25, 0.4), (30, 0.6), (35, 0.6), (40, 0.7), (45, 0.8), (50, 0.8)}
B = {(1, 0.8), (2, 0.8), (3, 0.6), (4, 0.4)}
X = {10, 15, 20, 25, 30, 35, 40, 45, 50, 55}
Y = {0, 1, 2, 3, 4, 5}
Draw the graphs for the following.

i. A × B
ii. A =⇒ B

(i) A × B

$\mu_{A\times B}(x, y) = \min\{\mu_A(x), \mu_B(y)\}$:

           1     2     3     4
   20     0.2   0.2   0.2   0.2
   25     0.4   0.4   0.4   0.4
   30     0.6   0.6   0.6   0.4
   35     0.6   0.6   0.6   0.4
   40     0.7   0.7   0.6   0.4
   45     0.8   0.8   0.6   0.4
   50     0.8   0.8   0.6   0.4
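A minimal numpy sketch of this computation (the membership grades are hard-coded from the question; numpy is an assumed dependency):

```python
import numpy as np

# Membership grades of A (over 20, 25, ..., 50) and B (over 1, ..., 4).
mu_A = np.array([0.2, 0.4, 0.6, 0.6, 0.7, 0.8, 0.8])
mu_B = np.array([0.8, 0.8, 0.6, 0.4])

# mu_{A x B}(x, y) = min(mu_A(x), mu_B(y)) for every pair (x, y).
cartesian = np.minimum.outer(mu_A, mu_B)
print(cartesian)  # 7 x 4 matrix matching the table above
```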

(ii) A ⇒ B
Many interpretations are possible here:
A ⇒ B ≡ Ā ∪ B, or A × B, or (A × B) ∪ (Ā × Y)
Accordingly, the answer will differ.
[4+4]
Q. 2

(a) Suppose a fuzzy relation is 'If x is A then y is B'. How do you find the following?

i. x is C, given that y is D
ii. y is D, given that x is C

Answer :
GMP is
    if x is A then y is B
    x is A′
    ─────────────────────
    y is B′
GMT is
    if x is A then y is B
    y is B′
    ─────────────────────
    x is A′
Since C and D are not specified as A′ or B′ (and vice versa), neither GMP nor GMT is applicable in this case, and hence we cannot deduce anything.
[3+3]

(b) Two fuzzy sets P and Q are defined on x ∈ X as follows.


x1 x2 x3 x4 x5
P 0.1 0.2 0.7 0.5 0.4
Q 0.9 0.6 0.3 0.2 0.8

Find (i) $(P \cap Q)_{0.4}$ (ii) $(P \times Q)_{0.4}$


Answer :
(i)
Taking the standard fuzzy intersection $\mu_{P\cap Q}(x) = \min\{\mu_P(x), \mu_Q(x)\}$:
P ∩ Q = {(x1, 0.1), (x2, 0.2), (x3, 0.3), (x4, 0.2), (x5, 0.4)}
∴ $(P \cap Q)_{0.4} = \{x \mid \mu_{P\cap Q}(x) \geq 0.4\} = \{x_5\}$
(ii)
$\mu_{P\times Q}(x_i, x_j) = \min\{\mu_P(x_i), \mu_Q(x_j)\}$:

P × Q:
           x1    x2    x3    x4    x5
    x1    0.1   0.1   0.1   0.1   0.1
    x2    0.2   0.2   0.2   0.2   0.2
    x3    0.7   0.6   0.3   0.2   0.7
    x4    0.5   0.5   0.3   0.2   0.5
    x5    0.4   0.4   0.3   0.2   0.4

$(P \times Q)_{0.4}$:
           x1    x2    x3    x4    x5
    x1     0     0     0     0     0
    x2     0     0     0     0     0
    x3     1     1     0     0     1
    x4     1     1     0     0     1
    x5     1     1     0     0     1
[3+3]
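A minimal numpy sketch of both computations (values hard-coded from the question; numpy is an assumed dependency):

```python
import numpy as np

P = np.array([0.1, 0.2, 0.7, 0.5, 0.4])
Q = np.array([0.9, 0.6, 0.3, 0.2, 0.8])

intersection = np.minimum(P, Q)              # standard fuzzy intersection (min)
alpha_cut = intersection >= 0.4              # 0.4-cut as a boolean mask over {x1..x5}

product = np.minimum.outer(P, Q)             # mu_{PxQ}(xi, xj) = min(mu_P(xi), mu_Q(xj))
relation_cut = (product >= 0.4).astype(int)  # crisp 0/1 relation, as in the answer
print(alpha_cut, relation_cut, sep="\n")
```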

Q. 3

(a) The membership functions of two fuzzy sets A and B are shown in the following graph.
A: climate is Hot.
B: climate is Cold.

[Figure: membership functions of B (Cold, left) and A (Hot, right) over a temperature axis from −15 to 40, with peak membership 1.0]

i. Draw the graph of the membership function, which represents the fuzzy set C: climate is
Extreme.
Answer : Climate is Extreme ≈ Climate is Hot OR Climate is Cold.

[Graph: $\mu_C(x) = \max\{\mu_A(x), \mu_B(x)\}$, high at both temperature extremes and low in between]

ii. What would be the graph of the membership function µD of the fuzzy set D = (A ∩ C)?
State D in terms of fuzzy linguistic.
Answer :
[Graph: $\mu_D$ superimposed on $\mu_A$ and $\mu_B$ over the same temperature axis from −15 to 40]
Linguistic interpretation : A ∩ B = Climate is Pleasant, D = $\overline{A \cap B}$ = Climate is not Pleasant
[3+3]
(b) Two fuzzy relations ‘likes’ and ‘earns’ are defined below.
likes:
             Football   Hockey   Cricket
   Dhoni       0.1        0.3      0.8
   Virat       0.2        0.7      0.5
   Rohit       0.5        0.4      0.2
   Sekhar      0.4        0.5      0.6
For example, x likes Game.
earns:
             10L   50L   100L
   Dhoni     0.6   0.3   0.2
   Virat     0.4   0.7   0.8
   Rohit     0.1   0.3   0.2
   Sekhar    0.5   0.2   0.6

For example, x earns Money.
Obtain the relation between a game and money. [6]
Answer :
This relation can be obtained as likes$^T$ ∘ earns (max–min composition). That is,

likes$^T$:
              Dhoni   Virat   Rohit   Sekhar
   Football    0.1     0.2     0.5     0.4
   Hockey      0.3     0.7     0.4     0.5
   Cricket     0.8     0.5     0.2     0.6

composed with earns gives

              10L   50L   100L
   Football   0.4   0.3   0.4
   Hockey     0.5   0.7   0.7
   Cricket    0.6   0.5   0.6
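A minimal numpy sketch of the max–min composition (matrices hard-coded from the question; numpy is an assumed dependency):

```python
import numpy as np

# Rows: Dhoni, Virat, Rohit, Sekhar.
likes = np.array([[0.1, 0.3, 0.8],   # columns: Football, Hockey, Cricket
                  [0.2, 0.7, 0.5],
                  [0.5, 0.4, 0.2],
                  [0.4, 0.5, 0.6]])
earns = np.array([[0.6, 0.3, 0.2],   # columns: 10L, 50L, 100L
                  [0.4, 0.7, 0.8],
                  [0.1, 0.3, 0.2],
                  [0.5, 0.2, 0.6]])

# Max-min composition:
# R(game, money) = max over players of min(likes(player, game), earns(player, money)).
R = np.max(np.minimum(likes.T[:, :, None], earns[None, :, :]), axis=1)
print(R.round(1))  # rows: Football, Hockey, Cricket; columns: 10L, 50L, 100L
```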

Q. 4

(a) What are the components you should consider in order to mathematically model an artificial neuron?
[4]
Answer : Two components, namely a summation unit and a threshold unit, are required to mathematically model an artificial neuron. These two components can be defined as follows.

[Figure: an artificial neuron. Inputs x1, x2, ..., xn with weights w1, w2, ..., wn feed a summation unit producing I; a threshold unit φ(I) then produces the output y]

Summation unit: $I = \sum_{i=1}^{n} x_i \cdot w_i$. Threshold unit: $y = \phi(I)$, where $\phi$ is some transfer function.
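A minimal sketch of this two-component model (the log-sigmoid transfer function and the sample inputs/weights are illustrative assumptions):

```python
import numpy as np

def neuron(x, w, phi=lambda I: 1.0 / (1.0 + np.exp(-I))):
    """Artificial neuron: summation unit I = sum_i x_i * w_i, then y = phi(I)."""
    I = np.dot(x, w)   # summation unit
    return phi(I)      # threshold unit

y = neuron(np.array([0.5, 0.2, 0.8]), np.array([0.4, 0.9, -0.3]))
```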
(b) If $\phi(I) = \frac{1}{1+e^{-\alpha I}}$ is a transfer function in a perceptron, then show that $\frac{\partial \phi(I)}{\partial I} = \alpha(1 - \phi(I)) \cdot \phi(I)$.
Answer :
$\phi(I) = \frac{1}{1+e^{-\alpha I}}$; let $z = 1 + e^{-\alpha I}$, so that $\phi(I) = \frac{1}{z}$ and $e^{-\alpha I} = z - 1 = \frac{1-\phi(I)}{\phi(I)}$.
$\therefore \frac{\partial z}{\partial I} = -\alpha \cdot e^{-\alpha I} = -\alpha \cdot \frac{1-\phi(I)}{\phi(I)}$
$\frac{\partial \phi(I)}{\partial I} = \frac{\partial \phi(I)}{\partial z} \cdot \frac{\partial z}{\partial I} = \left(-\frac{1}{z^2}\right) \cdot \left(-\alpha \cdot \frac{1-\phi(I)}{\phi(I)}\right) = \phi(I)^2 \cdot \alpha \cdot \frac{1-\phi(I)}{\phi(I)} \quad \left(\because \frac{1}{z} = \phi(I)\right)$
$= \alpha(1 - \phi(I)) \cdot \phi(I)$ [3]
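A quick numerical sanity check of this identity (the values of α and I are arbitrary):

```python
import math

alpha, I, h = 1.5, 0.7, 1e-6
phi = lambda t: 1.0 / (1.0 + math.exp(-alpha * t))

numeric = (phi(I + h) - phi(I - h)) / (2 * h)   # central-difference derivative
analytic = alpha * (1 - phi(I)) * phi(I)        # the derived closed form
assert abs(numeric - analytic) < 1e-6
```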

(c) Draw a schematic diagram of a multi-layer feed-forward artificial neural network architecture and
clearly label the different elements in it.
Give one application, where you should apply such an ANN architecture.
Answer :

Application : Such an ANN would be applied to problems whose outputs are non-separable with respect to the inputs. [4+1]

[Figure: a multi-layer feed-forward ANN. The input layer N1 (|N1| = l) uses a linear transfer function; the hidden layer N2 (|N2| = m) uses a log-sigmoid transfer function; the output layer N3 (|N3| = n) uses a tan-sigmoid transfer function. The weight matrix [V] = [v_ij] connects the input layer to the hidden layer, and [W] = [w_jk] connects the hidden layer to the output layer; I and O denote the network's input and output vectors]

Q. 5

(a) Show how the computations in input, hidden and output layers of an ANN can be accomplished
in terms of matrix algebra.
[2+2+2]
Answer :
The whole learning method consists of the following three computations:

(a) Input layer computation


(b) Hidden layer computation
(c) Output layer computation

In our computation, we assume that $\langle T_I, T_O \rangle$ is the training set of size $|T|$.

• Let us consider an input training data at any instant to be $I^I = [I_1^I, I_2^I, \cdots, I_i^I, \cdots, I_l^I]$ where $I^I \in T_I$.
• Consider that the outputs of the neurons in the input layer are the same as the corresponding inputs, and these serve as the inputs to the neurons in the hidden layer. That is,
$O^I = I^I$
$[l \times 1] = [l \times 1]$ [Output of the input layer]
• The input of the j-th neuron in the hidden layer can be calculated as follows.

$I_j^H = v_{1j} \cdot o_1^I + v_{2j} \cdot o_2^I + \cdots + v_{ij} \cdot o_i^I + \cdots + v_{lj} \cdot o_l^I$

where $j = 1, 2, \cdots, m$.
[Calculation of input of each node in the hidden layer]
• In the matrix representation form, we can write
$I^H = V^T \cdot O^I$
$[m \times 1] = [m \times l] \, [l \times 1]$

• Let us consider any j-th neuron in the hidden layer.


• Since the outputs of the input layer's neurons are the inputs to the j-th neuron, and the j-th neuron follows the log-sigmoid transfer function, we have

$O_j^H = \frac{1}{1 + e^{-\alpha_H \cdot I_j^H}}$

where $j = 1, 2, \cdots, m$ and $\alpha_H$ is the constant coefficient of the transfer function.

Note that the outputs of all nodes in the hidden layer can be expressed as a one-dimensional column matrix:

$O^H = \left[\, \frac{1}{1 + e^{-\alpha_H \cdot I_j^H}} \,\right]_{m \times 1}$

Let us calculate the input to any k-th node in the output layer. Since the outputs of all nodes in the hidden layer go to the k-th node with weights $w_{1k}, w_{2k}, \cdots, w_{mk}$, we have

$I_k^O = w_{1k} \cdot o_1^H + w_{2k} \cdot o_2^H + \cdots + w_{mk} \cdot o_m^H$

where k = 1, 2, · · · , n
In the matrix representation, we have

$I^O = W^T \cdot O^H$
$[n \times 1] = [n \times m] \, [m \times 1]$

Now, we estimate the output of the k-th neuron in the output layer. We consider the tan-sigmoid
transfer function.
$O_k = \frac{e^{\alpha_O \cdot I_k^O} - e^{-\alpha_O \cdot I_k^O}}{e^{\alpha_O \cdot I_k^O} + e^{-\alpha_O \cdot I_k^O}}$

for k = 1, 2, · · · , n
Hence, the output of output layer’s neurons can be represented as
 
$O = \left[\, \frac{e^{\alpha_O \cdot I_k^O} - e^{-\alpha_O \cdot I_k^O}}{e^{\alpha_O \cdot I_k^O} + e^{-\alpha_O \cdot I_k^O}} \,\right]_{n \times 1}$
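A minimal numpy sketch of the full forward computation described above (the weight shapes and the coefficients α_H and α_O follow the notation in the answer; all numeric values are illustrative):

```python
import numpy as np

def forward(I_in, V, W, alpha_H=1.0, alpha_O=1.0):
    """Forward pass of the l-m-n feed-forward network.

    I_in: input vector of shape (l,); V: input-to-hidden weights (l, m);
    W: hidden-to-output weights (m, n).
    """
    O_I = I_in                                  # input layer: identity output
    I_H = V.T @ O_I                             # hidden-layer inputs, shape (m,)
    O_H = 1.0 / (1.0 + np.exp(-alpha_H * I_H))  # log-sigmoid, shape (m,)
    I_O = W.T @ O_H                             # output-layer inputs, shape (n,)
    return np.tanh(alpha_O * I_O)               # tan-sigmoid output, shape (n,)

# Example with l = 3, m = 4, n = 2 and random weights.
rng = np.random.default_rng(0)
out = forward(rng.random(3), rng.random((3, 4)), rng.random((4, 2)))
```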

(b) Explain the basic principle of calculating error in supervised learning.
[2]

Answer :

• Let us consider any k-th neuron at the output layer. For an input pattern $I_i \in T_I$ (an input in the training set), let the target output of the k-th neuron be $T_{O_k}$.
• Then, the error $e_k$ of the k-th neuron corresponding to the input $I_i$ is defined as

$e_k = \frac{1}{2}(T_{O_k} - O_{O_k})^2$

where $O_{O_k}$ denotes the observed output of the k-th neuron.


• For a training session with $I_i \in T_I$, the error in prediction considering all output neurons can be given as

$e = \sum_{k=1}^{n} e_k = \frac{1}{2}\sum_{k=1}^{n}(T_{O_k} - O_{O_k})^2$

where n denotes the number of neurons at the output layer.


• The total error in prediction for all output neurons can be determined considering all training sessions over $\langle T_I, T_O \rangle$ as

$E = \sum_{\forall I_i \in T_I} e = \frac{1}{2}\sum_{\forall t \in \langle T_I, T_O \rangle} \sum_{k=1}^{n} (T_{O_k} - O_{O_k})^2$
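A minimal sketch of this error computation (the array names and shapes are assumptions; targets and outputs hold one row per training pattern):

```python
import numpy as np

def total_error(targets, outputs):
    """E = 1/2 * sum over all patterns and all n output neurons of (T - O)^2.

    targets, outputs: arrays of shape (num_patterns, n).
    """
    return 0.5 * np.sum((targets - outputs) ** 2)
```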

(c) Derive the ‘delta rule’ according to the method of Steepest descent.

• For simplicity, let us consider that the connecting weights are the only design parameters.
• Suppose V and W are the weight matrices of the hidden and output layers, respectively.
• Thus, given a training set of size N , the error surface E can be represented as

$E = \sum_{i=1}^{N} e^i(V, W, I_i)$

where $I_i$ is the i-th input pattern in the training set and $e^i(\cdots)$ denotes the error computation for the i-th input.

• Now, we will discuss the steepest descent method of computing the error, given changes in the V and W matrices.
• Suppose A and B are two points on the error surface. The vector $\vec{AB}$ can be written as

$\vec{AB} = (V_{i+1} - V_i) \cdot \bar{x} + (W_{i+1} - W_i) \cdot \bar{y} = \Delta V \cdot \bar{x} + \Delta W \cdot \bar{y}$
The gradient of $\vec{AB}$ can be obtained as

$e_{\vec{AB}} = \frac{\partial E}{\partial V} \cdot \bar{x} + \frac{\partial E}{\partial W} \cdot \bar{y}$

Hence, the unit vector in the direction of the gradient is

$\bar{e}_{\vec{AB}} = \frac{1}{|e_{\vec{AB}}|}\left(\frac{\partial E}{\partial V} \cdot \bar{x} + \frac{\partial E}{\partial W} \cdot \bar{y}\right)$

• With this, we can alternatively represent the distance vector $\vec{AB}$ as

$\vec{AB} = \eta \left(\frac{\partial E}{\partial V} \cdot \bar{x} + \frac{\partial E}{\partial W} \cdot \bar{y}\right)$

where $\eta = \frac{k}{|e_{\vec{AB}}|}$ and k is a constant.

• So, comparing both, we have

$\Delta V = \eta \frac{\partial E}{\partial V}$
$\Delta W = \eta \frac{\partial E}{\partial W}$

This is also called the delta rule, and $\eta$ is called the learning rate.
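A minimal sketch of applying the delta rule as a weight update (the gradients ∂E/∂V and ∂E/∂W are assumed to be computed elsewhere, e.g. by backpropagation; the step is taken against the gradient so that E decreases):

```python
import numpy as np

def delta_update(V, W, grad_V, grad_W, eta=0.1):
    """One steepest-descent step on the error surface E(V, W)."""
    V_new = V - eta * grad_V   # Delta V proportional to dE/dV
    W_new = W - eta * grad_W   # Delta W proportional to dE/dW
    return V_new, W_new
```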

[2+2]
