
Computational Intelligence @ KIIT

CS 30011

Version: 2024/08/13

Author: Ajay Anand

Department: School of Computer Engineering

Institute: Kalinga Institute of Industrial Technology (DU), Bhubaneswar

2024 Ajay Anand | [email protected] | Room 13, Campus 15 A, KIIT Bhubaneswar


Contents

0 Notices  1
  0.1 Top Announcement  1
  0.2 Links  1
  0.3 Time Table  1
  0.4 Lesson plan  1
  0.5 Grading Policy  3

1 Introduction  4
  1.1 Branches of Soft Computing  4
  1.2 Hybrid Systems  5
  1.3 Characteristics of Soft Computing  5

2 Artificial Neural Network  6
  2.1 Overview of Biological Neural Network  6
  2.2 Artificial Neuron  7
  2.3 Neural Networks  10
  2.4 Single layer and multiple layer networks  10
  2.5 Madaline  10
  2.6 Back Propagation Network  11

3 Fuzzy Set Theory  16
  3.1 Introduction  16
  3.2 Popular membership function templates  17
  3.3 Terminology  18
  3.4 Fuzzy Set Operations  19



Section 0 Notices

0.1 Top Announcement

Bookmark the link to the course folder in your web browser and join the
WhatsApp group.

0.2 Links

Note: this file may be old. Refer to the latest version of this file in the course folder.
Course folder
https://drive.google.com/drive/folders/1v1xYs8OubR6aoZS36rpY7U7Tq0 axsxR
Whatsapp group
https://chat.whatsapp.com/KatvgJ8BLEM88hbAt4izgL

0.3 Time Table

Batch: CI-CS6
Tue 4 p.m. A-LH-009
Wed 11 a.m. A-LH-009
Fri 1 p.m. A-LH-009

0.4 Lesson plan

Module 1: Introduction to Soft Computing and Neuro-Fuzzy System (3 hours, lectures 1-3)
  1. Introduction to the concept of computing
  2. "Soft" computing versus "hard" computing
  3. Conventional AI
  4. Constituents of soft computing
  5. Neuro-fuzzy systems

Module 2: Artificial Neural Networks (ANN) (15 hours, lectures 4-18)
  1. Introduction to ANN
  2. Adaline and Madaline
  3. Learning algorithms
  4. Perceptron
  5. Multilayer Perceptron (MLP) and the Backpropagation (BP) algorithm
  6. Radial Basis Function Networks (RBF)

Module 3: Fuzzy Set Theory (6 hours, lectures 19-24)
  1. Fuzzy sets: basic definitions and terminology
  2. Membership function formulation and parameterization
  3. Set-theoretic operations and fuzzy set operations (union, intersection and complement)

Module 4: Fuzzy Rules, Fuzzy Reasoning and Fuzzy Inference Systems (7 hours, lectures 25-31)
  1. Extension principle and fuzzy relations
  2. Fuzzy if-then rules and fuzzy reasoning
  3. Fuzzy inference systems: Mamdani fuzzy models, Sugeno fuzzy models, Tsukamoto fuzzy models
  4. Adaptive Neuro-Fuzzy Inference Systems (ANFIS)

Module 5: Optimization (3 hours, lectures 32-34)
  1. Derivative-based optimization and derivative-free optimization
  2. Genetic Algorithms (GA)
  3. Differential Evolution (DE)

Module 6: Swarm Intelligence (5 hours, lectures 35-39)
  1. Particle Swarm Optimization
  2. Ant Colony Optimization
  3. Artificial Bee Colony Optimization

0.4.1 Resources

Text Book
Neuro-Fuzzy and Soft Computing, Jang, Sun, Mizutani, PHI/Pearson Education
Reference Books
1. Neural Network Design, M. T. Hagan, H. B. Demuth, Mark Beale, Thomson Learning,
Vikash Publishing House
2. Genetic Algorithms in Search, Optimization and Machine Learning, David E. Goldberg, Addison-Wesley, N.Y., 1989
3. Swarm Intelligence Algorithms: A Tutorial, Adam Slowik, Ed: CRC Press, 2020
4. Introduction to Soft Computing, Roy and Chakraborty, Pearson Education
5. Fuzzy Logic with Engineering Applications, Timothy J. Ross, McGraw-Hill, 1997
6. Neural Networks: A Comprehensive Foundation, Simon Haykin, Prentice Hall
7. Neural Networks, Fuzzy Logic and Genetic Algorithms, S. Rajasekaran and G.A.V. Pai,
PHI, 2003

0.5 Grading Policy

Before Mid-Semester Exam


1. Assignment 1 [5 marks]
2. Quiz 1 [10 marks]
3. Classroom participation [5 marks]

After Mid-Semester Exam


1. Assignment 2 [5 marks]
2. Quiz 2 [10 marks]
3. Classroom participation [5 marks]

The better of the two quiz marks will be used.



Section 1 Introduction

Prelude

- Soft computing
- Hybrid systems
- Characteristics

Soft computing is a term coined in contrast to hard computing. Hard computing means solving
problems using precise methods to obtain exact solutions, for example logical reasoning and
numerical search techniques: if x = 10, then 1/x = 0.1.

Soft Computing
Soft computing means solving problems using imprecise methods to obtain approximate
solutions. Here approximate reasoning and randomized search techniques are used.

For example, if x is high, then 1/x is low. Soft computing deals with problems characterized
by uncertainty and imprecision, much like the human mind does. Such problems cannot be
solved by hard computing.

Hard Computing                           Soft Computing
Precise and analytical                   Imprecise, based on partial truth
Uses binary logic and crisp sets         Uses fuzzy logic and fuzzy sets
Involves extensive computing             Involves approximate computing
Input needs to be exact                  Deals with ambiguous and noisy input
Sequential in nature                     Allows parallelism
Results are precise and deterministic    Results are approximate and stochastic

Table 1.1: Hard vs Soft Computing

1.1 Branches of Soft Computing

Soft computing uses three complementary branches of computational science to achieve its
end: genetic algorithms, neural networks and fuzzy logic.
Genetic Algorithms: unorthodox search and optimization algorithms that mimic some of the
processes of natural evolution.
Neural Networks: a highly interconnected network of a large number of processing elements
in an architecture inspired by the human brain. It is suitable for parallel distributed computing.
Fuzzy Logic: logic based on fuzzy set theory, in which each individual has a grade of
membership in various sets. This helps in modeling real-world uncertainty.




1.2 Hybrid Systems

These are systems in which more than one technology is used to solve a problem. The aim
is to combine the strengths of the individual technologies and eliminate their weaknesses.
1. Neuro-fuzzy: a fuzzy-logic-based neural network in which fuzzy inputs are used.
2. Neuro-genetic: optimizing neural networks using genetic algorithms.
3. Fuzzy-genetic: running genetic algorithms with fuzzy constraints.

1.3 Characteristics of Soft Computing

1. It does not require an explicit mathematical model of the problem.
2. It may not yield a precise solution.
3. Its algorithms are adaptive (i.e. they can adjust to changes in a dynamic environment).
4. It uses biologically inspired methodologies such as genetics and evolution, ant behavior,
particle swarming, the human nervous system, etc.



Section 2 Artificial Neural Network

Prelude

- Artificial neuron
- Perceptron
- Feed-forward multilayer network
- Backpropagation
- Radial-basis functions

2.1 Overview of Biological Neural Network

1. The human brain consists of about 100 billion neurons.
2. Each neuron is connected to about 10,000 other neurons.
3. Electric impulses are passed from one neuron to the next connected neuron.
4. Parts of a neuron:
(a). Dendrites: the heavily branched part of the neuron that receives electric impulses from
neighboring neurons.
(b). Soma: the main body of the cell, containing the nucleus.
(c). Axon: carries the electric output away from the neuron.
(d). Synapses: junctions that transfer the electrical signal to the dendrites of other neurons
via an electrochemical fluid between them.

Figure 2.1: Biological neuron

5. If the total received impulse is greater than a threshold, the neuron fires and stimulates the
neighboring neurons. A connection is inhibitory if it prevents the next neuron from firing and
excitatory if it helps the next neuron fire.




2.2 Artificial Neuron

1. Inspired by the biological neuron, a model of an artificial neuron can be used to build
massively parallel computing structures.
2. An artificial neuron behaves like a biological neuron.
(a). It collects the weighted sum of the inputs from neighboring neurons:
z_j = X W_j^T
(b). If the total input is above the firing potential θ_j, it sends an output signal to the other
neurons:
y_j = f(z_j − θ_j)
Here the w are the weights and f is the activation function (transfer function) of neuron j.

Figure 2.2: Artificial Neuron

(c). To provide a nonzero threshold, one of the inputs, say x_0, can be fixed at 1. Its weight
then acts as the threshold value:
z_j = X W_j^T
y_j = f(z_j)
Here X contains x_0 = 1 and W contains the extra term w_0j.
3. Without the activation function f, the neuron is not very useful; the activation function
makes it a highly nonlinear element. One example of an activation function is the unit step
function (also called the Heaviside function), f = u(z):

u(z) = { 0 if z < 0;  1 if z ≥ 0 }    (2.1)
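As a concrete illustration of the weighted sum followed by the step activation, here is a minimal Python sketch (not part of the original notes; the input and weight values are arbitrary assumptions):

def unit_step(z):
    return 1 if z >= 0 else 0          # Heaviside activation, Eq. 2.1

def neuron_output(x, w, f=unit_step):
    # x[0] is fixed at 1 so that w[0] acts as the threshold term
    z = sum(xi * wi for xi, wi in zip(x, w))
    return f(z)

x = [1, 0.7, 0.3]       # x0 = 1 plus two real-valued inputs (assumed values)
w = [-0.5, 0.6, 0.4]    # w0 plays the role of -theta (assumed values)
print(neuron_output(x, w))   # prints 1, since z = -0.5 + 0.42 + 0.12 = 0.04 >= 0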

2.2.1 McCulloch-Pitts Neuron

It is the simplest neuron model, consisting of only two layers. The input layer holds the
inputs x_i, connected to the neuron through links with weights w_i. The output layer consists
of the neuron and its output y. The output is y_j = f(X W_j^T), where the weights w_i are fixed.

2024 Ajay Anand | [email protected] | Room 13, Campus 15 A, KIIT Bhubaneswar


2.2 Artificial Neuron

Example 2.1 Design a McCulloch-Pitts neuron that takes two binary inputs and performs
the AND operation to produce the output. [Derived on the board]
Exercise 2.1 Design a neuron that takes two binary inputs and performs the OR operation to
produce the output.
Exercise 2.2 Design a neuron that takes two binary inputs and performs the XOR operation to
produce the output.
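Example 2.1 was derived on the board, so the sketch below is only one common realization, not necessarily the one used in class: with fixed weights of 1 on both inputs and a firing threshold of 2, the neuron outputs 1 only when both binary inputs are 1, i.e. it realizes AND.

# McCulloch-Pitts style neuron for AND (the weights and threshold are one possible choice).
def mp_neuron(inputs, weights, threshold):
    z = sum(x * w for x, w in zip(inputs, weights))
    return 1 if z >= threshold else 0

def AND(x1, x2):
    return mp_neuron([x1, x2], weights=[1, 1], threshold=2)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, AND(x1, x2))   # output is 1 only for (1, 1)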

2.2.2 Gradient descent and Adaline

1. The weights in the McCulloch-Pitts model were fixed.
2. We can use gradient descent to update the weights (train the neuron).
(a). Let Error(w_ij) be the error in the system, where w_ij is a variable.
(b). We need to minimize the absolute value of Error(w_ij). Let us define Error to be
half of the squared difference between the actual and target output:
Error(w_ij) = (1/2) (y_j − y_tj)²

Figure 2.3: Gradient descent

(c). The gradient descent method suggests that Error can be minimized by modifying
w_ij in the following manner:
w_ij ← w_ij − η ∂Error/∂w_ij
This update equation is called the learning rule and η is called the learning rate.
3. Let us begin with a model that has a linear activation function f. The derivative of Error
when the activation is linear is:
∂Error/∂w_ij = (y_j − y_tj) x_i
4. Such neurons with adjustable weights, whose output is a linear function of the inputs, are
called adaline (adaptive linear neuron).
5. An adaline can be trained with the weight-adjustment formula obtained by substituting the
expression for the Error gradient into the learning rule:
w_ij ← w_ij + η (y_tj − y_j) x_i
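A minimal sketch of the adaline learning rule above, assuming a linear activation y = z; the training data and learning rate are illustrative assumptions, not from the notes.

# Adaline training with the rule w_ij <- w_ij + eta * (y_t - y) * x_i.
def adaline_train(patterns, targets, eta=0.1, epochs=100):
    w = [0.0] * len(patterns[0])                 # one weight per input; x[0] = 1 is the bias
    for _ in range(epochs):
        for x, yt in zip(patterns, targets):
            y = sum(xi * wi for xi, wi in zip(x, w))          # linear activation
            w = [wi + eta * (yt - y) * xi for wi, xi in zip(w, x)]
    return w

# Illustrative data: learn y = 2*x1 - 1 (bias input x0 = 1 included in each pattern).
patterns = [[1, 0.0], [1, 0.5], [1, 1.0]]
targets = [-1.0, 0.0, 1.0]
print(adaline_train(patterns, targets))          # approaches [-1, 2]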




2.2.3 Perceptron

Perceptron
The perceptron is an extension of the McCulloch-Pitts model, designed for pattern
recognition. The first layer of the perceptron acts as a feature detector. Learning is achieved
by making adjustments to the connection strengths and to the threshold θ.

1. Since the derivative of the unit step function is zero everywhere (except at the origin), the
weights of the links connecting inputs and outputs can be updated using the following
simplified formula:
w_ij ← w_ij + η y_tj x_i

Note: weights are updated only when y_j ≠ y_tj.
2. The inputs to the perceptron may have been obtained by processing the original sensory
inputs through an association layer. For example, the original inputs can be polar in nature
(rather than binary) and later converted to binary. The output may be polar or binary. For
binary output, the unit step function previously defined in Eq. 2.1 may be used. For polar
output, the following sign function may be used:




sign(z) = { −1 if z < 0;  0 if z = 0;  1 if z > 0 }    (2.2)

3. Differentiable activation functions: as we have seen, the gradient descent method requires
the derivative of the activation function f, but the derivative of the unit step function or the
sign function is not very useful. So several other differentiable functions can be used, such
as:

Function              f(z)                              f'(z)
Linear                y = z                             dy/dz = 1
Sigmoid               y = 1/(1 + e^(−z))                dy/dz = y(1 − y)
Hyperbolic tangent    y = (1 − e^(−z))/(1 + e^(−z))     dy/dz = (1 − y²)/2

Table 2.1: Commonly used activation functions

4. Perceptron can also have multiple outputs.

Example 2.2 Perform perceptron learning for the OR operation using polar inputs and outputs.
Use learning rate η = 1. [Solved on the board]
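Example 2.2 was solved on the board; the sketch below is only one possible rendering of it, where the polar (+1/−1) encoding, the sign activation from Eq. 2.2, zero initial weights and η = 1 are stated assumptions.

# Perceptron learning for OR with polar (+1/-1) inputs and targets, eta = 1.
def sign(z):
    return 1 if z > 0 else (-1 if z < 0 else 0)          # Eq. 2.2

def train_perceptron_or(eta=1, epochs=10):
    w = [0.0, 0.0, 0.0]                                  # [w0 (bias), w1, w2], assumed start
    data = [([1, -1, -1], -1), ([1, -1, 1], 1), ([1, 1, -1], 1), ([1, 1, 1], 1)]
    for _ in range(epochs):
        for x, t in data:
            y = sign(sum(xi * wi for xi, wi in zip(x, w)))
            if y != t:                                   # update only on misclassification
                w = [wi + eta * t * xi for wi, xi in zip(w, x)]
    return w

print(train_perceptron_or())   # with these choices the weights settle at [1.0, 1.0, 1.0]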
Exercise 2.3 Plot the above activation functions and determine whether they are binary or polar.
Exercise 2.4 Derive and verify the derivatives f′(z) of the above activation functions.




2.3 Neural Networks

As was evident from Exercise 2.2, the McCulloch-Pitts model cannot be designed to realize
the XOR operation; it can also be inferred that the model cannot be trained to learn XOR by
either the adaline or the perceptron learning rule. This is because the adaline and the
perceptron have one input layer and one output layer. The input layer is not counted, so they
are single-layer networks.

2.4 Single layer and multiple layer networks

A multiple-layer network is a network with an input layer, an output layer and one or more
additional layers called hidden layers. A multilayered network is also called a multilayer
perceptron (MLP).

Figure 2.4: Multilayer Network

Notation for multiple layer network:

Layers are numbered from 0 to L
Total number of nodes in layer l: N_l
Output node index j varies from 1 to N_l
Input node index i varies from 1 to N_{l−1}

Table 2.2: Notation used in the neural network

Then the output of a neuron j is y_j = f( Σ_{i=1}^{N_{l−1}} w_ij y_i ).
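A layer-by-layer forward pass corresponding to the formula above might look like the following sketch; the sigmoid activation, the 2-2-1 topology and the weight values are assumptions for illustration (biases are omitted, as in the formula).

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, layers, f=sigmoid):
    # layers[l][j][i] is the weight from node i of layer l to node j of layer l+1
    y = x
    for W in layers:
        y = [f(sum(w_ji * y_i for w_ji, y_i in zip(row, y))) for row in W]
    return y

# Illustrative 2-2-1 network with arbitrary weights.
hidden = [[0.5, -0.4], [0.3, 0.8]]
output = [[1.0, -1.0]]
print(forward([1.0, 0.0], [hidden, output]))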

 Exercise 2.5 Realize XOR operation using an MLP.

2.5 Madaline

Madaline (multiple adaptive linear neurons) is a structure consisting of adalines placed in
parallel, each contributing to the output with some weight factor. The output of the madaline
is 1 if the majority of the adalines produce an output of 1.
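A minimal sketch of this majority-vote idea (the adaline weight sets and the input are illustrative assumptions):

# Madaline as a majority vote over several adalines.
def adaline_out(x, w):
    z = sum(xi * wi for xi, wi in zip(x, w))
    return 1 if z >= 0 else 0

def madaline(x, weight_sets):
    votes = [adaline_out(x, w) for w in weight_sets]
    return 1 if sum(votes) > len(votes) / 2 else 0

weight_sets = [[-0.5, 1.0, 0.0], [-0.5, 0.0, 1.0], [-1.5, 1.0, 1.0]]   # assumed weights
x = [1, 1, 1]                                   # bias x0 = 1 plus two binary inputs
print(madaline(x, weight_sets))                 # all three adalines fire, so the output is 1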





Figure 2.5: Madaline

2.6 Back Propagation Network

The backpropagation algorithm is based on the gradient descent technique of optimization.

Forward propagation

The network is presented the input patterns one by one, and the outputs of all neurons are
computed, starting from the first hidden layer through to the final output layer.

Backpropagation

Using the cumulative error, the weights are updated in the direction of gradient descent.
The weight update mechanism consists of the following steps:
1. A pattern is presented to the system and an error signal is generated at the output nodes.
[Eqs. 2.3 and 2.4]
2. The error signal is propagated backwards from the output nodes to the input nodes.
[Eq. 2.5]
3. Using the error values, the weights on the links are updated. [Eq. 2.6]
4. A small learning rate guarantees stable but slow convergence, while a high learning rate
increases the chance of failing to converge; a small momentum factor can help increase the
learning rate without divergence (oscillations). In Eq. 2.6, α is the momentum factor.
5. This process is repeated until the error signal is weaker than a specified value. [Recursion]
y_{l,j} = f( Σ_{i=0}^{N_{l−1}} w_ij y_{l−1,i} )    (2.3)

δ_{L,i} = (y_{t,i} − y_{L,i}) f′_{L,i}    (2.4)

δ_{l,i} = ( Σ_{j=1}^{N_{l+1}} δ_{l+1,j} w_ij ) f′_{l,i}    (2.5)

Δw_ij = η δ_{l+1,j} y_{l,i} + α Δw_{ij,old}    (2.6)
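A compact sketch of one weight update following Eqs. 2.3-2.6, for a network with a single hidden layer; the sigmoid activation (so f'(z) = y(1 − y)) and the example weights in the usage lines are assumptions for illustration.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def backprop_step(x, t, W_h, W_o, eta=0.5, alpha=0.0, dW_h_old=None, dW_o_old=None):
    # W_h[j][i]: weight from input i (x[0] = 1 is the bias) to hidden unit j
    # W_o[k][j]: weight from hidden node j (h[0] = 1 is the bias) to output unit k
    h = [1.0] + [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W_h]   # eq. 2.3
    y = [sigmoid(sum(w * hj for w, hj in zip(row, h))) for row in W_o]           # eq. 2.3
    d_out = [(tk - yk) * yk * (1 - yk) for tk, yk in zip(t, y)]                  # eq. 2.4
    d_hid = [sum(d_out[k] * W_o[k][j] for k in range(len(d_out))) * h[j] * (1 - h[j])
             for j in range(1, len(h))]                                          # eq. 2.5
    dW_o = [[eta * d_out[k] * h[j] + alpha * (dW_o_old[k][j] if dW_o_old else 0.0)
             for j in range(len(h))] for k in range(len(d_out))]                 # eq. 2.6
    dW_h = [[eta * d_hid[j] * x[i] + alpha * (dW_h_old[j][i] if dW_h_old else 0.0)
             for i in range(len(x))] for j in range(len(d_hid))]                 # eq. 2.6
    W_o = [[w + dw for w, dw in zip(rw, rd)] for rw, rd in zip(W_o, dW_o)]
    W_h = [[w + dw for w, dw in zip(rw, rd)] for rw, rd in zip(W_h, dW_h)]
    return W_h, W_o, dW_h, dW_o

# One incremental update on the pattern (x1, x2) = (1, 0) with targets (1, 0), as in Example 2.3 below.
W_h = [[0.1, 0.1, 0.1], [0.1, 0.1, 0.1]]
W_o = [[0.1, 0.1, 0.1], [0.1, 0.1, 0.1]]
W_h, W_o, dW_h, dW_o = backprop_step([1, 1, 0], [1, 0], W_h, W_o, eta=0.5, alpha=0.1)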





Example 2.3 Consider a two-layer feedforward ANN with two inputs x1 and x2, hidden units
h1 and h2, and output units y1 and y2. Initialize each weight with the value 0.1 and compute
the new weight values for the first two training iterations (one iteration on each input pattern)
using backpropagation. Assume a learning rate of 0.5 and a momentum factor of 0.1 with
incremental weight updates.

Network diagram: bias x0 = 1 and inputs x1, x2 feed hidden units h1, h2 through weights
w1-w6 (h1: w1, w2, w3; h2: w4, w5, w6); bias x0 = 1 and hidden units h1, h2 feed output
units y1, y2 through weights w7-w12 (y1: w7, w8, w9; y2: w10, w11, w12).

x1 x2 t1 t2
1 0 1 0
0 1 0 1

Table 2.3: Training data

Initial weights:
w1 = w2 = w3 = w4 = w5 = w6 = 0.1        w7 = w8 = w9 = w10 = w11 = w12 = 0.1

Iteration 1 (first pattern: x1 = 1, x2 = 0, t1 = 1, t2 = 0):

Forward pass:
h1 = f(w1 + w2 x1 + w3 x2) = 0.5498
h2 = f(w4 + w5 x1 + w6 x2) = 0.5498
y1 = f(w7 + w8 h1 + w9 h2) = 0.5523
y2 = f(w10 + w11 h1 + w12 h2) = 0.5523

Error signals:
gy1 = (t1 − y1) y1′ = 0.1107
gy2 = (t2 − y2) y2′ = −0.1366
gh1 = (gy1 w8 + gy2 w11) h1′ = −0.0006
gh2 = (gy1 w9 + gy2 w12) h2′ = −0.0006

Weight changes:
Δw1 = η x0 gh1 = −0.0003        Δw7 = η x0 gy1 = 0.0554
Δw2 = η x1 gh1 = −0.0003        Δw8 = η h1 gy1 = 0.0304
Δw3 = η x2 gh1 = 0.0000         Δw9 = η h2 gy1 = 0.0304
Δw4 = η x0 gh2 = −0.0003        Δw10 = η x0 gy2 = −0.0683
Δw5 = η x1 gh2 = −0.0003        Δw11 = η h1 gy2 = −0.0375
Δw6 = η x2 gh2 = 0.0000         Δw12 = η h2 gy2 = −0.0375

Updated weights:
w1 = 0.0997        w7 = 0.1554
w2 = 0.0997        w8 = 0.1304
w3 = 0.1           w9 = 0.1304
w4 = 0.0997        w10 = 0.0317
w5 = 0.0997        w11 = 0.0625
w6 = 0.1           w12 = 0.0625
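The tabulated values can be reproduced with a few lines of Python. The logistic sigmoid is assumed as the activation because it matches the numbers above (e.g. f(0.2) ≈ 0.5498); only the first pattern is processed here, and the last digit of some entries can differ slightly because the table rounds intermediate results.

import math

f = lambda z: 1.0 / (1.0 + math.exp(-z))     # logistic sigmoid, consistent with the table
eta = 0.5
w = {k: 0.1 for k in range(1, 13)}           # w1 ... w12, all initialized to 0.1
x0, x1, x2, t1, t2 = 1, 1, 0, 1, 0           # first training pattern

h1 = f(w[1] + w[2] * x1 + w[3] * x2)
h2 = f(w[4] + w[5] * x1 + w[6] * x2)
y1 = f(w[7] + w[8] * h1 + w[9] * h2)
y2 = f(w[10] + w[11] * h1 + w[12] * h2)

gy1 = (t1 - y1) * y1 * (1 - y1)              # f'(z) = y(1 - y) for the sigmoid
gy2 = (t2 - y2) * y2 * (1 - y2)
gh1 = (gy1 * w[8] + gy2 * w[11]) * h1 * (1 - h1)

print(round(h1, 4), round(y1, 4))            # 0.5498 0.5523
print(round(gy1, 4), round(gy2, 4))          # 0.1107 -0.1366
print(round(gh1, 4))                         # -0.0006
print(round(w[7] + eta * x0 * gy1, 3))       # 0.155  (the table rounds gy1 first, giving 0.1554)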

2.6.1 Effects of Training Parameters

The following factors were discussed in the class.

1. Live (on-line) learning vs batch learning


2. Learning rate
3. Momentum factor
4. Initial weights
5. Number of nodes and layers
6. Training patterns
For example, if there are too many nodes, the network will overlearn (memorize) the input;
if there are too few, it will underlearn (fail to learn) the task.

2.6.2 Extending Back Propagation on Recurrent Networks

Recurrent Network
A network with nodes connected to form a directional loop is called a recurrent network.

Figure 2.6: Recurrent Network

The state of such a system at time t+1 depends on its state at time t. We can apply
backpropagation to such networks by unfolding the network in time.





Figure 2.7: Unfolding in time

Each time frame adds a new layer consisting of copies of all the original nodes, so only a
limited number of frames can be supported. Applying backpropagation to such an unfolded
network is fairly straightforward. This is called BPTT (backpropagation through time).
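A minimal sketch of BPTT for a single recurrent unit, s_t = f(w_rec s_{t−1} + w_in x_t) with a sigmoid f, trained so that the final state matches a target under a squared-error loss; the scalar architecture, the loss and the numeric values below are all assumptions for illustration.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bptt_step(xs, target, w_in=0.5, w_rec=0.5, eta=0.1):
    # forward pass: unfold the recurrent loop into one copy of the unit per time step
    states = [0.0]                                        # s_0
    for x in xs:
        states.append(sigmoid(w_in * x + w_rec * states[-1]))
    # backward pass: propagate the error back through the unfolded copies
    grad_in, grad_rec = 0.0, 0.0
    delta = (states[-1] - target) * states[-1] * (1 - states[-1])   # dE/dz at the last frame
    for t in range(len(xs), 0, -1):
        grad_in += delta * xs[t - 1]
        grad_rec += delta * states[t - 1]
        # push the error one frame further back through the recurrent weight
        delta *= w_rec * states[t - 1] * (1 - states[t - 1])
    return w_in - eta * grad_in, w_rec - eta * grad_rec   # one gradient descent update

print(bptt_step([1.0, 0.0, 1.0], target=1.0))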

2.6.3 Radial Basis Function Networks (RBFN)

RBFN
RBFNs are "locally tuned" receptive networks. There are no connection weights; rather, the
links have proximity weights that are used radially according to the following equation:

y = exp( −|x − w|² / σ² )    (2.7)

The Euclidean distance between the input (x) and the weights (w) is calculated, and the
output signal is a Gaussian function of this Euclidean distance.





Architecture

Figure 2.8: RBFN Models

The inputs are connected to a hidden layer containing receptive fields. The final output of the
RBFN is a weighted sum of the outputs of the receptive fields (fig. 2.8 a); here the outputs of
the receptive fields are multiplied by the weights C. The output may be normalized by dividing
the original output by the sum of the individual receptive-field outputs (fig. 2.8 b). Finally,
fig. 2.8 c and d show the "weighted sum" and "normalized sum" cases for a multiple-output
RBFN, respectively.
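A sketch of the RBFN forward pass described above: Gaussian receptive fields (Eq. 2.7) followed by a weighted sum or, optionally, a normalized sum; the centers, widths and output weights C are illustrative assumptions.

import math

def rbf(x, center, sigma):
    # Gaussian receptive field, Eq. 2.7: exp(-|x - w|^2 / sigma^2)
    dist_sq = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-dist_sq / sigma ** 2)

def rbfn_output(x, centers, sigmas, c_weights, normalized=False):
    acts = [rbf(x, ctr, s) for ctr, s in zip(centers, sigmas)]
    out = sum(c * a for c, a in zip(c_weights, acts))       # weighted sum (fig. 2.8 a)
    return out / sum(acts) if normalized else out           # normalized sum (fig. 2.8 b)

centers = [[0.0, 0.0], [1.0, 1.0]]    # assumed receptive-field centers
sigmas = [0.5, 0.5]
c_weights = [1.0, 2.0]                # assumed output weights C
print(rbfn_output([0.9, 1.1], centers, sigmas, c_weights))
print(rbfn_output([0.9, 1.1], centers, sigmas, c_weights, normalized=True))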




Section 3 Fuzzy Set Theory

3.1 Introduction

3.1.1 Crisp Set

A crisp set, or classical set, is a set with a well-defined boundary: there is a clear distinction
between belonging to the set and not belonging to it. E.g., is the tank more than half full?

Properties of Crisp Set

Law of contradiction       A ∩ A′ = ∅
Law of excluded middle     A ∪ A′ = X
Idempotency                A ∩ A = A;  A ∪ A = A
Involution                 (A′)′ = A
Commutativity              A ∩ B = B ∩ A;  A ∪ B = B ∪ A
Associativity              (A ∪ B) ∪ C = A ∪ (B ∪ C);  (A ∩ B) ∩ C = A ∩ (B ∩ C)
Distributivity             A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C);  A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
Absorption                 A ∪ (A ∩ B) = A ∩ (A ∪ B) = A
De Morgan's laws           (A ∪ B)′ = A′ ∩ B′;  (A ∩ B)′ = A′ ∪ B′

Table 3.1: Operations on crisp sets
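These identities can be checked mechanically with ordinary (crisp) Python sets; the small universe X and the sets A, B below are arbitrary assumptions.

# Checking a few crisp-set laws on a small universe (illustrative example).
X = set(range(10))
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}
comp = lambda S: X - S                             # complement with respect to X

assert A & comp(A) == set()                        # law of contradiction
assert A | comp(A) == X                            # law of excluded middle
assert comp(A | B) == comp(A) & comp(B)            # De Morgan
assert comp(A & B) == comp(A) | comp(B)            # De Morgan
assert A | (A & B) == A & (A | B) == A             # absorption
print("all crisp-set identities hold on this example")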

3.1.2 Fuzzy Set

It is a set without a crisp boundary: the transition from "belongs to the set" to "does not
belong to the set" is gradual. E.g., is the tank almost full? Fuzziness does not arise from
randomness but from the imprecise nature of abstract concepts.

Fuzzy set
A fuzzy set A in X is defined as a set of ordered pairs:

A = { (x, µ(x)) | x ∈ X }

where µ is the membership function (MF) that maps each element of X to a membership
grade between 0 and 1.




If the output of the MF is limited to 0 and 1, this reduces to the classical case. X is called the
universe of discourse, and it may consist of discrete values (like a good number of public
holidays in a year) or a continuous space (a good amount of rain in a year).

Examples

1. Ideal number of holidays

A = {(11, 0.2), (12, 0.5), (13, 0.7), (14, 1), (15, 1), (16, 0.8), (17, 0.6)}

2. Ideal rainfall

B = { (x, 1 / (1 + (log(x/4.5))^4)) | x ∈ (0, ∞) }

3. Classical example of young, middle-aged and old-aged membership functions.

Figure 3.1

3.2 Popular membership function templates

Triangular          0 for x ≤ a;  (x − a)/(b − a) for a ≤ x ≤ b;  (c − x)/(c − b) for b ≤ x ≤ c;  0 for x ≥ c
Trapezoidal         0 for x ≤ a;  (x − a)/(b − a) for a ≤ x ≤ b;  1 for b ≤ x ≤ c;  (d − x)/(d − c) for c ≤ x ≤ d;  0 for x ≥ d
Gaussian            exp( −½ ((x − c)/σ)² )
Generalized bell    1 / (1 + |(x − c)/a|^(2b))
Sigmoidal           1 / (1 + e^(−a(x − c)))

Table 3.2: Types of membership functions
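The templates in Table 3.2 translate directly into code. The sketch below uses the equivalent max-min closed forms for the triangular and trapezoidal MFs; the parameter values in the sample calls are arbitrary.

import math

def triangular(x, a, b, c):
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def trapezoidal(x, a, b, c, d):
    return max(min((x - a) / (b - a), 1.0, (d - x) / (d - c)), 0.0)

def gaussian(x, c, sigma):
    return math.exp(-0.5 * ((x - c) / sigma) ** 2)

def generalized_bell(x, a, b, c):
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def sigmoidal(x, a, c):
    return 1.0 / (1.0 + math.exp(-a * (x - c)))

# Evaluate each template at a sample point (parameters are arbitrary examples).
print(triangular(2.5, 1, 3, 5), trapezoidal(4, 1, 2, 5, 7))
print(gaussian(4, 5, 2), generalized_bell(4, 2, 3, 5), sigmoidal(4, 1.5, 5))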





Figure 3.2: Commonly used membership functions

3.3 Terminology

Term                 Definition
support              All points where membership is nonzero: support(A) = {x | µA(x) > 0}
core                 All points where membership is full: core(A) = {x | µA(x) = 1}
normality            A normal fuzzy set has a non-empty core: core(A) ≠ ∅
crossover point      Points where membership is 0.5: crossover(A) = {x | µA(x) = 0.5}
fuzzy singleton      A fuzzy set whose support is a single point with membership value 1
α-cut (α-level set)  A crisp set with cutoff α: A_α = {x | µA(x) ≥ α}
strong α-cut         A crisp set with strict cutoff α: A′_α = {x | µA(x) > α}
convexity            Every α-cut is convex; equivalently µA(m) ≥ min(µA(a), µA(b)) for all a ≤ m ≤ b
bandwidth            Distance between the crossover points of a normal, convex fuzzy set
symmetry             The set is symmetric around x = c if µA(c − x) = µA(c + x) for all x ∈ X
open left            lim_{x→−∞} µA(x) = 1 and lim_{x→+∞} µA(x) = 0
open right           lim_{x→−∞} µA(x) = 0 and lim_{x→+∞} µA(x) = 1
closed               lim_{x→−∞} µA(x) = lim_{x→+∞} µA(x) = 0





Table 3.3: Fuzzy set terminology
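For a discrete fuzzy set such as Example 1 above (ideal number of holidays), these terms can be computed directly; the sketch below reuses that set.

# Terminology from Table 3.3 applied to the discrete fuzzy set of Example 1.
A = {11: 0.2, 12: 0.5, 13: 0.7, 14: 1.0, 15: 1.0, 16: 0.8, 17: 0.6}

support = {x for x, mu in A.items() if mu > 0}
core = {x for x, mu in A.items() if mu == 1.0}
crossover = {x for x, mu in A.items() if mu == 0.5}
alpha_cut = lambda alpha: {x for x, mu in A.items() if mu >= alpha}
is_normal = bool(core)

print(support)          # all of 11 ... 17
print(core)             # {14, 15}
print(crossover)        # {12}
print(alpha_cut(0.7))   # {13, 14, 15, 16}
print(is_normal)        # True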

Figure 3.3

3.4 Fuzzy Set Operations

We define containment (subset): A ⊆ B if µA(x) ≤ µB(x) for all x ∈ X.

1. Union: µC(x) = max(µA(x), µB(x)), or the algebraic sum: µA(x) + µB(x) − µA(x) µB(x)
2. Intersection: µC(x) = min(µA(x), µB(x)), or the algebraic product: µA(x) µB(x)
3. Complement: µĀ(x) = 1 − µA(x)
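A small sketch of these operations on discrete fuzzy sets over a shared universe; the two membership assignments below are illustrative assumptions.

# Fuzzy union, intersection and complement on discrete fuzzy sets (illustrative values).
A = {1: 0.2, 2: 0.7, 3: 1.0, 4: 0.4}
B = {1: 0.5, 2: 0.3, 3: 0.6, 4: 0.9}

union = {x: max(A[x], B[x]) for x in A}                        # max operator
union_alg = {x: A[x] + B[x] - A[x] * B[x] for x in A}          # algebraic sum
intersection = {x: min(A[x], B[x]) for x in A}                 # min operator
intersection_alg = {x: A[x] * B[x] for x in A}                 # algebraic product
complement_A = {x: 1 - A[x] for x in A}

is_subset = all(A[x] <= B[x] for x in A)                       # containment test A ⊆ B
print(union, intersection, complement_A, is_subset, sep="\n")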

Figure 3.4


