Lab Manual Soft Computing IT - 701
Semester-VII
Index

Sr. No.  Contents
1.   List of Experiments
2.   Experiment No. 1
3.   Experiment No. 2
4.   Experiment No. 3
5.   Experiment No. 4
6.   Experiment No. 5
7.   Experiment No. 6
8.   Experiment No. 7
9.   Experiment No. 8
10.  Experiment No. 9
11.  Experiment No. 10
Department of Computer Engineering
List of Experiments

Sr. No.  Experiment Name
1.   Implementation of Fuzzy Operations.
2.   Implementation of Fuzzy Relations (Max-Min Composition)
3.   Implementation of Fuzzy Controller (Washing Machine)
4.   Implementation of Simple Neural Network (McCulloch-Pitts Model)
5.   Implementation of Perceptron Learning Algorithm
6.   Implementation of Unsupervised Learning Algorithm
7.   Implementation of Simple Genetic Application
8.   Study of ANFIS Architecture
9.   Study of Derivative-free Optimization
10.  Study of research paper on Soft Computing.
Experiment No. 1
1. Aim: Implementation of Fuzzy Operations.
2. Objectives:
5. Theory:
Fuzzy Logic:
Fuzzy logic is an organized method for dealing with imprecise data. It is a multivalued
logic that allows intermediate values to be defined between conventional evaluations such as true/false or yes/no.
In classical set theory, the membership of elements in a set is assessed in binary terms according
to a bivalent condition — an element either belongs or does not belong to the set. By contrast,
fuzzy set theory permits the gradual assessment of the membership of elements in a set; this is
described with the aid of a membership function valued in the real unit interval [0, 1].
Bivalent Set Theory can be somewhat limiting if we wish to describe a 'humanistic' problem
mathematically. For example, Fig 1 below illustrates bivalent sets to characterize the temperature
of a room. The most obvious limiting feature of bivalent sets that can be seen clearly from the
diagram is that they are mutually exclusive - it is not possible to have membership of more than
one set. Clearly, it is not accurate to define a transition from a quantity such as 'warm' to 'hot' by
the application of one degree Fahrenheit of heat. In the real world a smooth (unnoticeable) drift
from warm to hot would occur.
Fuzzy Sets:
Let A be a fuzzy set defined on a universe X, with membership function µA: X → [0, 1]. Then, an element x ∈ X
is called not included in the fuzzy set A if µA(x) = 0,
is called fully included if µA(x) = 1, and
is called a fuzzy member if 0 < µA(x) < 1.
Example:
1. Union:
Union of two fuzzy sets A and B is denoted by A ∪ B and is defined as,
µA∪B(x) = max [µA(x), µB(x)]
Example:
2. Intersection
Intersection of two fuzzy sets A and B is denoted by A ∩ B and is defined as,
µA∩B(x) = min [µA(x), µB(x)]
Example:
3. Complement
Complement of a fuzzy set A is denoted by A′ and is defined as,
µA′(x) = 1 − µA(x)
Example:
4. Algebraic Sum
Algebraic sum of two fuzzy sets A and B is denoted by A + B and is defined as,
µA+B(x) = µA(x) + µB(x) − µA(x) · µB(x)
Example:
5. Algebraic Product
Algebraic product of two fuzzy sets A and B is denoted by A · B and is defined as,
µA·B(x) = µA(x) · µB(x)
Example:
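The five operations above can be sketched in Python, assuming fuzzy sets are represented as dictionaries mapping elements to membership grades in [0, 1]. The example sets A and B below are illustrative, not taken from the manual:

```python
def union(a, b):
    """mu_AuB(x) = max(mu_A(x), mu_B(x))"""
    return {x: max(a.get(x, 0), b.get(x, 0)) for x in set(a) | set(b)}

def intersection(a, b):
    """mu_AnB(x) = min(mu_A(x), mu_B(x))"""
    return {x: min(a.get(x, 0), b.get(x, 0)) for x in set(a) | set(b)}

def complement(a):
    """mu_A'(x) = 1 - mu_A(x)"""
    return {x: 1 - m for x, m in a.items()}

def algebraic_sum(a, b):
    """mu(x) = mu_A(x) + mu_B(x) - mu_A(x) * mu_B(x)"""
    return {x: a.get(x, 0) + b.get(x, 0) - a.get(x, 0) * b.get(x, 0)
            for x in set(a) | set(b)}

def algebraic_product(a, b):
    """mu(x) = mu_A(x) * mu_B(x)"""
    return {x: a.get(x, 0) * b.get(x, 0) for x in set(a) | set(b)}

# Illustrative fuzzy sets over the universe {x1, x2, x3}.
A = {'x1': 0.2, 'x2': 0.7, 'x3': 1.0}
B = {'x1': 0.5, 'x2': 0.3, 'x3': 0.8}
print(union(A, B))         # max of the grades at each element
print(intersection(A, B))  # min of the grades at each element
print(complement(A))       # 1 minus each grade
```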
6. Conclusion
The union, intersection and complement operations were implemented on fuzzy sets, which
helped in understanding the differences and similarities between fuzzy set theory and
classical set theory. These operations provide the basic mathematical foundation for
working with fuzzy sets.
7. Viva Questions:
What is the difference between the crisp set and fuzzy set?
8. References:
Experiment No. 2
1. Aim: Implementation of fuzzy relations (Max-Min Composition)
2. Objectives:
5. Theory:
Fuzzy relations in different product spaces can be combined with each other by an
operation called "composition". There are many composition methods in use, e.g. the
max-product method, the max-average method and the max-min method, but the max-min
composition method is the best known in fuzzy logic applications.
Definition:
Composition of Fuzzy Relations
Consider two fuzzy relations R (X × Y) and S (Y × Z). Then a relation T (X × Z) can be
expressed as the max-min composition:
T = R o S
µT(x, z) = max_y min [µR(x, y), µS(y, z)]
Similarly, the max-product composition is given by:
T = R o S
µT(x, z) = max_y [µR(x, y) · µS(y, z)]
The max-min composition can be interpreted as indicating the strength of the existence of
relation between the elements of X and Z.
Crisp relation:
Fuzzy relation:
If , then
Max-Min Composition:
Let X,Y and Z be universal sets and let R and Q be relations that relate them as,
Example:
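A minimal Python sketch of max-min composition, assuming the relations R (X × Y) and S (Y × Z) are given as matrices of membership grades. The example values are illustrative:

```python
def max_min_composition(R, S):
    """mu_T(x, z) = max over y of min(mu_R(x, y), mu_S(y, z))."""
    rows, inner, cols = len(R), len(S), len(S[0])
    return [[max(min(R[i][k], S[k][j]) for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

# Illustrative relations: R relates 2 elements of X to 2 of Y,
# S relates those 2 elements of Y to 3 of Z.
R = [[0.6, 0.3],
     [0.2, 0.9]]
S = [[1.0, 0.5, 0.3],
     [0.8, 0.4, 0.7]]
T = max_min_composition(R, S)
print(T)  # [[0.6, 0.5, 0.3], [0.8, 0.4, 0.7]]
```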
6. Conclusion
Using fuzzy logic principles, the max-min composition of fuzzy relations is
calculated, which describes the relationship between two or more fuzzy sets.
7. Viva Questions:
What is the main difference between the probability and fuzzy logic?
What are the types of fuzzy logic sets?
Who is the founder of fuzzy logic?
8. References:
Experiment No. 3
1. Aim: Implementation of fuzzy controller (Washing Machine)
2. Objectives:
Cover fuzzy logic inference with emphasis on its use in the design of
intelligent or humanistic systems.
Prepare the students for developing intelligent systems.
5. Theory:
Control System:
Any system whose outputs are controlled by some inputs to the system is called a
control system.
Fuzzy Controller:
Fuzzy controllers are the most important applications of fuzzy theory. They work
differently from conventional controllers:
Expert knowledge is used instead of differential equations to describe a system.
This expert knowledge can be expressed in very natural way using linguistic variables,
which are described by fuzzy sets.
The fuzzy controllers are constructed in following three stages:
1. Create the membership values (fuzzify).
2. Specify the rule table.
3. Determine your procedure for defuzzifying the result.
To design a system using fuzzy logic, the inputs & outputs are a necessary part of the
system. The main function of the washing machine is to clean clothes without damaging
them. In order to achieve this, the output parameters of the fuzzy logic, which are the
washing parameters, must be given more importance. The identified input & output
parameters are:
Input: 1. Degree of dirt
2. Type of dirt
Output: Wash time
Fuzzy sets:
The fuzzy sets which characterize the inputs & output are given as follows:
1. Dirtness of clothes
2. Type of dirt
3. Wash time
Procedure:
For the fuzzification of inputs, that is, to compute the membership for the antecedents,
the formula used is,
µ(x) = max( min( Slope 1 · (x − Point 1), MAX, Slope 2 · (Point 2 − x) ), 0 )

[Figure: triangular membership function with feet at Point 1 and Point 2, rising with
Slope 1 and falling with Slope 2, capped at the MAX degree of membership.]
Rule table (rows: Type of dirt, columns: Dirtness of clothes):

Type of dirt \ Dirtness    S    M    L
NG (Not Greasy)            VS   S    M
M  (Medium)                M    M    L
G  (Greasy)                L    L    VL
1. If Dirtness of clothes is Large and Type of dirt is Greasy then Wash Time is Very Long;
2. If Dirtness of clothes is Medium and Type of dirt is Greasy then Wash Time is Long;
3. If Dirtness of clothes is Small and Type of dirt is Greasy then Wash Time is Long;
4. If Dirtness of clothes is Large and Type of dirt is Medium then Wash Time is Long;
5. If Dirtness of clothes is Medium and Type of dirt is Medium then Wash Time is Medium;
6. If Dirtness of clothes is Small and Type of dirt is Medium then Wash Time is Medium;
7. If Dirtness of clothes is Large and Type of dirt is Not Greasy then Wash Time is Medium;
8. If Dirtness of clothes is Medium and Type of dirt is Not Greasy then Wash Time is Short;
9. If Dirtness of clothes is Small and Type of dirt is Not Greasy then Wash Time is Very Short;
Fuzzy Output
The fuzzy output of the system is the 'fuzzy OR' of all the fuzzy outputs of the rules
with non-zero rule strengths.
Step 3: Defuzzification
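The three stages above (fuzzify, apply the rule table, defuzzify) can be sketched in Python. The triangular break points, the crisp wash-time values and the weighted-average defuzzification used here are illustrative assumptions, not the manual's exact design:

```python
def tri(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Stage 1: fuzzify the inputs on an assumed 0..100 scale.
def fuzzify_dirtness(x):
    return {'S': tri(x, -50, 0, 50), 'M': tri(x, 0, 50, 100), 'L': tri(x, 50, 100, 150)}

def fuzzify_dirt_type(x):
    return {'NG': tri(x, -50, 0, 50), 'M': tri(x, 0, 50, 100), 'G': tri(x, 50, 100, 150)}

# Stage 2: the rule table from the manual (type of dirt, dirtness) -> wash time.
RULES = {('NG', 'S'): 'VS', ('NG', 'M'): 'S', ('NG', 'L'): 'M',
         ('M', 'S'): 'M', ('M', 'M'): 'M', ('M', 'L'): 'L',
         ('G', 'S'): 'L', ('G', 'M'): 'L', ('G', 'L'): 'VL'}

# Assumed representative wash times (minutes) for each output label.
WASH_TIME = {'VS': 10, 'S': 20, 'M': 35, 'L': 50, 'VL': 60}

def wash_time(dirtness, dirt_type):
    d = fuzzify_dirtness(dirtness)
    t = fuzzify_dirt_type(dirt_type)
    num = den = 0.0
    for (ty, di), out in RULES.items():
        strength = min(t[ty], d[di])       # fuzzy AND of the two antecedents
        num += strength * WASH_TIME[out]   # stage 3: weighted-average defuzzification
        den += strength
    return num / den if den else 0.0

print(round(wash_time(60, 70), 1))  # fairly dirty, fairly greasy -> a longer wash
```

The weighted average of the rule outputs is a common simple stand-in for the centroid defuzzification step.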
6. Conclusion
A fuzzy controller for the washing machine application is implemented using fuzzy logic,
which also defines the rules for fuzzification and defuzzification. Through this
experiment we have analyzed how the outputs of the system are controlled by its inputs.
7. Viva Questions:
What is the reason that fuzzy logic has rapidly become one of the most
successful technologies for developing sophisticated control systems?
What is the sequence of steps taken in designing a fuzzy logic machine?
8. References:
Experiment No. 4
1. Aim: To implement the McCulloch-Pitts model for the XOR function.
2. Objectives:
The student will be able to understand the fundamentals and different architectures
of neural networks.
The student will have a broad knowledge in developing the different
algorithms for neural networks.
Describe the relation between real brains and simple artificial neural network
models.
Understand the role of neural networks in engineering.
Apply the knowledge of computing and engineering concept to this discipline.
5. Theory:
Neural networks were inspired by the design and functioning of the human brain and its
components.
Definition:
"An information processing model that is inspired by the way the biological nervous
system (i.e. the brain) processes information is called a Neural Network."
A neural network has the ability to learn by examples. It is not designed to perform a
fixed/specific task, but rather tasks which need thinking (e.g. predictions).
An ANN is composed of a large number of highly interconnected processing
elements (neurons) working in unison to solve problems. It mimics the human brain. It is
configured for special applications, such as pattern recognition and data classification,
through a learning process. ANNs are typically 85-90% accurate.
Y - output neuron
Yin = x1·w1 + x2·w2
Output:
y = f(Yin)
where f is the activation function.
The early model of an artificial neuron was introduced by Warren McCulloch and Walter
Pitts in 1943. The McCulloch-Pitts neural model is also known as the linear threshold gate.
It is a neuron with a set of inputs I1, I2, I3, …, Im and one output y. The linear threshold
gate simply classifies the set of inputs into two different classes, thus the output y is
binary. Such a function can be described mathematically using these equations:
For inhibition to be absolute, the threshold of the activation function should satisfy
the following condition:
θ > n·w − p
The output will fire if it receives "k" or more excitatory inputs but no inhibitory
inputs, where
k·w ≥ θ > (k − 1)·w
Truth table for XOR:
X1 X2 Y
0 0 0
0 1 1
1 0 1
1 1 0
Yin=x1w1+x2w2
As we know,
X1 X2 Z1
0 0 0
0 1 0
1 0 1
1 1 0
For Z1,
θ = 1
X1 X2 Z2
0 0 0
0 1 1
1 0 0
1 1 0
For Z2,
θ = 1
Y=Z1+Z2
Z1 Z2 Y
0 0 0
0 1 1
1 0 1
1 1 0
For Y,
θ = 1
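The network described by the tables above (Z1 = X1 AND NOT X2, Z2 = X2 AND NOT X1, Y = Z1 OR Z2, each with threshold θ = 1) can be sketched in Python, assuming excitatory weight +1 and inhibitory weight −1:

```python
def mp_neuron(inputs, weights, theta=1):
    """McCulloch-Pitts neuron: fires (1) when the net input reaches the threshold."""
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= theta else 0

def xor(x1, x2):
    z1 = mp_neuron([x1, x2], [1, -1])   # Z1 = X1 AND NOT X2
    z2 = mp_neuron([x1, x2], [-1, 1])   # Z2 = X2 AND NOT X1
    return mp_neuron([z1, z2], [1, 1])  # Y  = Z1 OR Z2

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, xor(x1, x2))  # reproduces the XOR truth table
```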
6. Conclusion:
The McCulloch-Pitts model is implemented for the XOR function using the thresholding
activation function. The activation of M-P neurons is binary, i.e. at any time step a
neuron may fire or may not fire. The threshold plays a major role here.
7. Viva Questions:
What are Neural Networks? What are the types of neural networks?
8. References:
Experiment No. 5
1. Aim: Implementation of Single layer Perceptron Learning Algorithm.
2. Objectives:
To become familiar with neural networks learning algorithms from available
examples.
Provide knowledge of learning algorithm in neural networks.
5. Theory:
In the late 1950s, Frank Rosenblatt introduced a network composed of units that were
enhanced versions of the McCulloch-Pitts model, known as the perceptron.
6. Algorithm:
The perceptron learning rule was originally developed by Frank Rosenblatt in the late 1950s.
Training patterns are presented to the network's inputs and the output is computed. Then
the connection weights wj are modified by an amount that is proportional to the product of
the difference between the actual output, y, and the desired output, d, and
the input pattern, x:

wj(t+1) = wj(t) + η (d − y) xj

where
d is the desired output,
t is the iteration number, and
η (eta) is the gain or step size, where 0.0 < η < 1.0.
Learning only occurs when an error is made; otherwise the weights are left unchanged.
Multilayer Perceptron
[Figure: multilayer perceptron showing output values, an output layer and adjustable weights.]
Truth table for AND:
X1 X2 Y
0 0 0
0 1 0
1 0 0
1 1 1
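The perceptron rule above can be sketched in Python for the AND truth table, assuming a bias input, a step activation, and a learning rate of η = 0.5 (an illustrative choice within the stated 0.0 < η < 1.0 range):

```python
def step(net):
    return 1 if net >= 0 else 0

def train_perceptron(samples, eta=0.5, epochs=10):
    w = [0.0, 0.0, 0.0]  # bias weight plus one weight per input
    for _ in range(epochs):
        for x, d in samples:
            xs = [1] + list(x)  # prepend the constant bias input
            y = step(sum(wi * xi for wi, xi in zip(w, xs)))
            # w(t+1) = w(t) + eta * (d - y) * x  -- weights change only on error
            w = [wi + eta * (d - y) * xi for wi, xi in zip(w, xs)]
    return w

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(AND)
for x, d in AND:
    print(x, step(w[0] + w[1] * x[0] + w[2] * x[1]))  # learned outputs match AND
```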
7. Conclusion:
The single layer perceptron learning algorithm is implemented for the AND function. It is
used to train the neural network over several iterations. The neural network mimics the
human brain, and the perceptron learning algorithm trains the network according to the
input given.
8. Viva Questions:
9. References:
Experiment No. 6
1. Aim: Implementation of unsupervised learning algorithm – Hebbian Learning
2. Objectives:
5. Theory:
These types of models are not provided with the correct results during training.
They can be used to cluster the input data into classes on the basis of their statistical
properties only.
The labelling can be carried out even if labels are only available for a small
number of objects representative of the desired classes. All similar input patterns are
grouped together as clusters. If a matching pattern is not found, a new cluster is formed.
The network is presented with a number of different patterns & learns how to classify
input data into appropriate categories. Unsupervised learning tends to follow the
neuro-biological organization of the brain. It aims to learn rapidly & can be used in
real-time.
Hebbian Learning:
In 1949, Donald Hebb proposed one of the key ideas in biological learning,
commonly known as Hebb's Law. Hebb's Law states that if neuron i is near enough to
excite neuron j & repeatedly participates in its activation, the synaptic
connection between these two neurons is strengthened & neuron j becomes more
sensitive to stimuli from neuron i.
1. If two neurons on either side of a connection are activated synchronously, then the
weight of that connection is increased.
2. If two neurons on either side of a connection are activated asynchronously, then the
weight of that connection is decreased.
Hebb's Law provides the basis for learning without a teacher. Learning here is a
local phenomenon occurring without feedback from the environment.
Hebbian learning implies that weights can only increase. To resolve this
problem, we might impose a limit on the growth of synaptic weights. This can be
done by introducing a non-linear forgetting factor into Hebb's Law:
Step 1: Initialization
Set initial synaptic weights and thresholds to small random values, say in an interval [0,1].
Step 2: Activation
Step 3: Learning
Step 4: Iteration
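The four steps above can be sketched in Python for a single linear output neuron. The learning rate, forgetting factor and input patterns below are illustrative assumptions, and the linear activation is a simplification:

```python
import random

def hebbian_train(patterns, n_inputs, eta=0.1, phi=0.05, epochs=50, seed=0):
    rng = random.Random(seed)
    # Step 1: initialise weights to small random values in [0, 1].
    w = [rng.random() for _ in range(n_inputs)]
    for _ in range(epochs):  # Step 4: iterate over the training set
        for x in patterns:
            # Step 2: activation (linear output neuron for simplicity).
            y = sum(wi * xi for wi, xi in zip(w, x))
            # Step 3: learning -- Hebb's rule with a forgetting factor,
            # delta_w = eta*y*x - phi*y*w, which bounds the weight growth.
            w = [wi + eta * y * xi - phi * y * wi for wi, xi in zip(w, x)]
    return w

# Inputs 1 and 2 are active together, input 3 only on its own,
# so the first two weights grow while the third decays.
patterns = [[1, 1, 0], [1, 1, 0], [0, 0, 1]]
print(hebbian_train(patterns, 3))
```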
7. Conclusion:
8. Viva Questions:
9. References:
Experiment No. 7
1. Aim: Implementation of a Genetic Application – Match Word Finding.
2. Objectives:
Creating an understanding about the way the GA is used and the domain of
application.
To appreciate the use of various GA operators in solving different types of GA
problems.
Match the industry requirements in the domains of Programming and Networking
with the required management skills.
5. Theory:
Genetic algorithm:
Flowchart
[Figure: flowchart of the genetic algorithm.]
6. Algorithm:
the population.
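A hedged Python sketch of the match word finding application: the GA evolves a population of random strings toward a target word using selection, crossover and mutation. The target word and the parameter values (population size, mutation rate) are illustrative assumptions:

```python
import random
import string

TARGET = "SOFT"
GENES = string.ascii_uppercase
rng = random.Random(42)

def fitness(word):
    """Number of character positions that already match the target."""
    return sum(a == b for a, b in zip(word, TARGET))

def mutate(word, rate=0.2):
    return "".join(rng.choice(GENES) if rng.random() < rate else c for c in word)

def crossover(p1, p2):
    cut = rng.randrange(1, len(TARGET))  # single-point crossover
    return p1[:cut] + p2[cut:]

def evolve(pop_size=50, generations=200):
    # Initial population of random strings.
    pop = ["".join(rng.choice(GENES) for _ in TARGET) for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == TARGET:
            return pop[0], gen
        parents = pop[: pop_size // 2]  # truncation selection
        # Elitism keeps the best string; the rest are bred and mutated.
        pop = [pop[0]] + [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                          for _ in range(pop_size - 1)]
    return pop[0], generations

best, generation = evolve()
print(best, generation)
```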
7. Conclusion:
The match word finding algorithm is implemented using a genetic algorithm, which
includes the selection, crossover and mutation operators along with a fitness function.
Viva Questions:
8. References:
Experiment No. 8
1. Aim: Study of ANFIS Architecture.
2. Objectives:
Aware of the use of neuro fuzzy inference systems in the design of intelligent or
humanistic systems.
To become knowledgeable about neuro fuzzy inference systems.
Identify, analyze the problem and validate the solution using software and
hardware.
4. Theory:
The adaptive network-based fuzzy inference systems (ANFIS) is used to solve problems
related to parameter identification. This parameter identification is done through a hybrid
learning rule combining the back-propagation gradient descent and a least-squares
method.
Let the membership functions of fuzzy sets Ai, Bi, i = 1, 2, be µAi, µBi.
All computations can be presented in a diagram form. ANFIS normally has 5 layers of
neurons of which neurons in the same layer are of the same function family.
Layer 1 (L1): Each node generates the membership grades of a linguistic label.
where {a, b, c} is the parameter set. As the values of the parameters change, the shape of
the bell-shaped function varies. Parameters in that layer are called premise parameters.
Layer 2 (L2): Each node calculates the firing strength of each rule using the min or prod
operator. In general, any other fuzzy AND operation can be used.
Layer 3 (L3): The nodes calculate the ratio of each rule's firing strength to the sum of
all the rules' firing strengths. The result is a normalised firing strength.
Layer 4 (L4): The nodes compute a parameter function on the layer 3 output. Parameters
in this layer are called consequent parameters.
Layer 5 (L5): Normally a single node that aggregates the overall output as the summation
of all incoming signals.
5. Algorithm:
When the premise parameters are fixed, the overall output is a linear combination of the
consequent parameters. In symbols, the output f can be written as

f = w̄1 (c10 + c11 x + c12 y) + w̄2 (c20 + c21 x + c22 y)

where w̄i are the normalised firing strengths from layer 3. This is linear in the
consequent parameters cij (i = 1, 2; j = 0, 1, 2). A hybrid algorithm
adjusts the consequent parameters cij in a forward pass and the premise parameters {ai,
bi, ci} in a backward pass (Jang et al., 1997). In the forward pass the network inputs
propagate forward until layer 4, where the consequent parameters are identified by the
least-squares method. In the backward pass, the error signals propagate backwards and
the premise parameters are updated by gradient descent.
Because the update rules for the premise and consequent parameters are decoupled in the
hybrid learning rule, a computational speedup may be possible by using variants of the
gradient method or other optimisation techniques on the premise parameters.
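A single forward pass through the five ANFIS layers can be sketched for a two-input, two-rule Sugeno system. The bell-function parameters {a, b, c} and the consequent parameters cij below are illustrative assumptions, not values from the text:

```python
def bell(x, a, b, c):
    """Generalised bell membership function used in layer 1."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def anfis_forward(x, y):
    # Layer 1: membership grades of the linguistic labels (premise parameters).
    mu_A = [bell(x, 2.0, 2.0, 1.0), bell(x, 2.0, 2.0, 5.0)]
    mu_B = [bell(y, 2.0, 2.0, 1.0), bell(y, 2.0, 2.0, 5.0)]
    # Layer 2: firing strength of each rule (prod as the fuzzy AND).
    w = [mu_A[0] * mu_B[0], mu_A[1] * mu_B[1]]
    # Layer 3: normalised firing strengths.
    total = sum(w)
    wn = [wi / total for wi in w]
    # Layer 4: consequent functions f_i = c_i0 + c_i1*x + c_i2*y
    # (consequent parameters c_ij), weighted by the normalised strengths.
    c = [[1.0, 0.5, 0.2], [2.0, -0.3, 0.8]]
    f = [ci[0] + ci[1] * x + ci[2] * y for ci in c]
    # Layer 5: aggregate output as the sum of the incoming signals.
    return sum(wni * fi for wni, fi in zip(wn, f))

print(round(anfis_forward(2.0, 3.0), 3))
```

In a full ANFIS, the hybrid learning rule described above would fit the cij by least squares in the forward pass and update the bell parameters by gradient descent in the backward pass.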
6. Conclusion:
7. Viva Questions:
8. References:
Experiment No. 9
1. Aim: Study of Derivative-free Optimization.
2. Objectives:
Aware of the use of neuro fuzzy inference systems in the design of intelligent or
humanistic systems.
To become knowledgeable about neuro fuzzy inference systems.
An ability to apply knowledge of computing and use of current computing
techniques appropriate to the discipline.
4. Theory:
In the field of artificial intelligence, a genetic algorithm (GA) is a search heuristic that
mimics the process of natural selection. This heuristic (also sometimes called a
metaheuristic) is routinely used to generate useful solutions to optimization and search
problems.[1] Genetic algorithms belong to the larger class of evolutionary algorithms
(EA), which generate solutions to optimization problems using techniques inspired by
natural evolution, such as inheritance, mutation, selection and crossover.
Optimization problems
The evolution usually starts from a population of randomly generated individuals, and is
an iterative process, with the population in each iteration called a generation. In each
generation, the fitness of every individual in the population is evaluated; the fitness is
usually the value of the objective function in the optimization problem being solved. The
more fit individuals are stochastically selected from the current population, and each
individual's genome is modified (recombined and possibly randomly mutated) to form a
new generation. The new generation of candidate solutions is then used in the next
iteration of the algorithm. Commonly, the algorithm terminates when either a maximum
number of generations has been produced, or a satisfactory fitness level has been reached
for the population.
Candidate solutions are traditionally represented as fixed-length bit strings; the parts
of such representations are easily aligned due to their fixed size, which facilitates
simple crossover operations. Variable length representations may also be used, but
crossover implementation is more complex in this case. Tree-like representations are
explored in genetic programming and graph-form representations are explored in
evolutionary programming; a mix of both linear chromosomes and trees is explored in gene
expression programming.
Once the genetic representation and the fitness function are defined, a GA proceeds to
initialize a population of solutions and then to improve it through repetitive application of
the mutation, crossover, inversion and selection operators.
Initialization
The population size depends on the nature of the problem, but typically contains several
hundreds or thousands of possible solutions. Often, the initial population is generated
randomly, allowing the entire range of possible solutions (the search space).
Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to
be found.
Selection
The fitness function is defined over the genetic representation and measures the quality of
the represented solution. The fitness function is always problem dependent. For instance,
in the knapsack problem one wants to maximize the total value of objects that can be put
in a knapsack of some fixed capacity. A representation of a solution might be an array of
bits, where each bit represents a different object, and the value of the bit (0 or 1)
represents whether or not the object is in the knapsack. Not every such representation is
valid, as the size of objects may exceed the capacity of the knapsack. The fitness of the
solution is the sum of values of all objects in the knapsack if the representation is valid,
or 0 otherwise.
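The knapsack fitness described above can be sketched directly, with a candidate solution as a bit array and invalid (overweight) selections scoring 0. The item values, sizes and capacity are illustrative assumptions:

```python
VALUES = [60, 100, 120, 30]   # value of each object
SIZES = [10, 20, 30, 15]      # size of each object
CAPACITY = 50                 # fixed knapsack capacity

def fitness(bits):
    """Total value of the selected objects, or 0 if they exceed the capacity."""
    size = sum(s for s, b in zip(SIZES, bits) if b)
    if size > CAPACITY:
        return 0  # invalid representation: objects do not fit
    return sum(v for v, b in zip(VALUES, bits) if b)

print(fitness([1, 1, 0, 1]))  # sizes 10+20+15 = 45 <= 50, value 60+100+30 = 190
print(fitness([1, 1, 1, 0]))  # sizes 10+20+30 = 60 > 50, invalid -> 0
```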
In some problems, it is hard or even impossible to define the fitness expression; in these
cases, a simulation may be used to determine the fitness function value of a phenotype
(e.g. computational fluid dynamics is used to determine the air resistance of a vehicle
whose shape is encoded as the phenotype), or even interactive genetic algorithms are
used.
Terminology:
Flowchart
[Figure: flowchart of the genetic algorithm.]
6. Conclusion:
This study experiment describes the various techniques used for derivative-free
optimization. It also describes how optimization techniques are used in the soft
computing domain.
7. Viva Questions:
8. References:
Experiment No. 10
1. Aim: Study of research paper on Soft Computing.
4. Theory:
Students can find research papers based on artificial neural networks,
hybrid systems, genetic algorithms, fuzzy systems, fuzzy logic, fuzzy inference
systems etc.
Students need to search for recent papers on any of the above mentioned topics, study
them and prepare a presentation on the same.
5. Conclusion:
Through this experiment, we have understood the recent advancements and
applications of various subdomains of soft computing.
6. References:
1. ieeexplore.ieee.org
2. www.sciencedirect.com
3. Any open access journal