Unit 1
Soft computing is an umbrella term for algorithms that produce approximate solutions to
problems that are intractable for exact analysis. Traditional hard-computing algorithms, by
contrast, rely heavily on concrete data and exact mathematical models to produce solutions.
The term soft computing was coined in the late 20th century.[1] During this period,
revolutionary research in three fields greatly shaped it. Fuzzy logic is a computational
paradigm that accommodates uncertainty in data by using degrees of truth rather than the rigid
0s and 1s of binary logic. Neural networks are computational models inspired by the functioning
of the human brain. Finally, evolutionary computation describes families of algorithms that
mimic natural processes such as evolution and natural selection.
In the context of artificial intelligence and machine learning, soft computing provides tools to
handle real-world uncertainty. Its methods supplement preexisting techniques to produce better
solutions. Today, its combination with artificial intelligence has led to hybrid intelligent
systems that merge various computational algorithms. By expanding the applications of artificial
intelligence, soft computing leads to more robust solutions. Its key strengths include tackling
ambiguity, flexible learning, grasping intricate data, real-world applicability, and support for
ethical artificial intelligence.
Zadeh coined the term soft computing in 1992. The objective of soft computing is to provide
precise approximations and quick solutions to complex real-life problems.
In simple terms, soft computing is an emerging approach that aims to give machines the remarkable
problem-solving ability of the human mind. The human mind is its role model: soft computing
attempts to map how the mind reasons.
Note: Soft computing differs from traditional/conventional computing in that it deals with
approximate models.
o Soft computing provides approximate yet usable solutions to real-life problems.
o Soft-computing algorithms are adaptive, so the current process is not derailed by changes
in the environment.
o Soft computing is based on learning from experimental data. This means it does not require
a mathematical model of the problem in order to solve it.
o Soft computing helps users solve real-world problems by providing approximate results
where conventional analytical models fail.
o It is based on fuzzy logic, genetic algorithms, machine learning, artificial neural
networks (ANN), and expert systems.
Example
Soft computing deals with approximate models. The following examples show how.
Consider a problem that has no solution via traditional computing, but for which soft computing
gives an approximate solution.
Problem 1: Are string1 and string2 the same?
Solution: No. The answer is simply "No"; no algorithm is needed to analyze this.
Problem 2: How similar are string1 and string2?
Solution: Through conventional programming, the answer is either "Yes" or "No". According to
soft computing, however, the strings might be 80% similar.
Notice that soft computing gave us an approximate solution.
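The 80% figure above is one way to grade similarity. As a minimal sketch, Python's standard-library difflib yields such a graded ratio instead of a crisp yes/no (the example strings here are illustrative, not from the text):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a graded similarity between 0.0 (different) and 1.0 (identical)."""
    return SequenceMatcher(None, a, b).ratio()

# Hard computing gives only a crisp answer:
crisp = ("kitten" == "sitting")           # False

# Soft computing style: a degree of similarity.
graded = similarity("kitten", "sitting")  # roughly 0.62
```

The ratio counts matched characters relative to the combined string length, so it behaves as a simple membership degree rather than a yes/no test.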
Soft computing's fuzzy logic and neural networks help with pattern recognition, image
processing, and computer vision. Its versatility is vital in natural language processing, where
it helps decipher human language and emotion. Soft-computing techniques also aid data mining and
predictive analytics by extracting valuable insights from enormous datasets. They help optimize
solutions in energy systems, financial forecasting, environmental and biological data modeling,
and other domains that rely on models.
Within the medical field, soft computing is transforming disease detection, treatment planning,
and healthcare modeling.
Soft computing is used in several application areas. As noted above, it provides solutions to
real-time problems, and beyond the applications already mentioned there are many others.
However, soft-computing algorithms often lack rigorous theoretical backing, which can make them
less reliable than conventional hard-computing models. Finally, there is considerable potential
for bias arising from the input data, which raises ethical dilemmas when these methods are
applied in fields such as medicine, finance, and healthcare.
o Hard computing is used for solving mathematical problems that need a precise answer, but it
fails for some real-life problems. For real-life problems that have no precise solution,
soft computing helps.
o When conventional mathematical and analytical models fail, soft computing helps; for
example, it can even model aspects of the human mind.
o Analytical models solve mathematical problems and are valid for ideal cases. But real-world
problems do not occur in ideal cases; they exist in non-ideal environments.
o Soft computing is not limited to theory; it also gives insight into real-life problems.
o For all the above reasons, soft computing can model the human mind, which is not possible
with conventional mathematical and analytical models.
o Fuzzy Logic
o Artificial Neural Network (ANN)
o Genetic Algorithms
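The evolutionary branch above can be illustrated with a minimal genetic-algorithm sketch. The toy objective, population size, and operator choices here are all illustrative assumptions, not a standard recipe:

```python
import random

def fitness(x: float) -> float:
    # Toy objective: maximum at x = 3.
    return -(x - 3.0) ** 2

def evolve(generations: int = 60, pop_size: int = 30, seed: int = 0) -> float:
    """Evolve a population of real numbers toward the fitness maximum."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half (a simple truncation scheme).
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Crossover: average two random parents; mutation: small gaussian noise.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            children.append((a + b) / 2.0 + rng.gauss(0.0, 0.1))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()  # converges near 3.0
```

Truncation selection and averaging crossover are among the simplest possible operators; practical genetic algorithms typically use encodings, tournament selection, and problem-specific crossover and mutation.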
Unlike classical sets, whose members are either entirely in or entirely out, fuzzy sets allow
partial membership by introducing gradation between sets. Fuzzy logic operations
include negation, conjunction, and disjunction, which operate on degrees of membership.
Fuzzy rules are logical statements that map the correlation between input and output parameters.
They express variable relationships linguistically, which would not be possible without
linguistic variables. Linguistic variables represent values that are typically not quantifiable,
allowing uncertainty to be expressed.
Fuzzy logic is a form of mathematical logic that works over an open, imprecise spectrum of data,
making it possible to draw useful conclusions from inexact inputs.
Fuzzy logic is designed to reach the best available solution to a complex problem from all the
available information and input data, which is why it is regarded as an effective solution
finder.
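These ideas can be sketched in a few lines of Python. The linguistic variable "tall" and its 160-190 cm transition band are hypothetical choices for illustration; the negation, min, and max operators are the classic fuzzy operations named above:

```python
def tall(height_cm: float) -> float:
    """Membership degree of the hypothetical linguistic variable 'tall':
    0 below 160 cm, 1 above 190 cm, linear in between."""
    return min(1.0, max(0.0, (height_cm - 160.0) / 30.0))

# Standard fuzzy operations on membership degrees:
def fuzzy_not(a: float) -> float:            # negation
    return 1.0 - a

def fuzzy_and(a: float, b: float) -> float:  # conjunction (min t-norm)
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:   # disjunction (max t-conorm)
    return max(a, b)

m = tall(175.0)  # 0.5: partially a member of 'tall'
```

A fuzzy rule such as "IF tall AND heavy THEN large" would then combine membership degrees with `fuzzy_and` instead of a crisp boolean test.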
Neural networks are computational models that attempt to mimic the structure and functioning of
the human brain. While computers typically use binary logic to solve problems, neural networks
attempt to solve complicated problems by enabling systems to reason in a human-like way, which
is essential to soft computing.
Neural networks revolve around perceptrons, which are artificial neurons structured in layers.
Like the human brain, these interconnected nodes process information using complicated
mathematical operations.
Through training, the network handles input and output data streams and adjusts parameters
according to the provided information. Neural networks help make soft computing
extraordinarily flexible and capable of handling high-level problems.
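The training loop described above can be sketched with a single perceptron. The AND task, learning rate, and epoch count are illustrative assumptions, not something the text prescribes:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train one perceptron (two weights plus a bias) with the classic update rule:
    nudge each parameter in proportion to the prediction error."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the AND function, a linearly separable toy problem.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After training, `predict` reproduces the AND truth table; the parameters were adjusted from the provided input-output pairs rather than programmed explicitly, which is the "learning from data" the text describes.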
In soft computing, neural networks aid in pattern recognition, predictive modeling, and data
analysis. They are also used in image recognition, natural language processing, speech
recognition, and related systems.
An artificial neural network (ANN) emulates the network of neurons that makes up a human brain,
so that a machine can learn and make decisions in a human-like way. ANNs consist of
interconnected units modeled on brain cells and are built with ordinary computer programming,
analogous to the human nervous system.
Soft computing vs hard computing
Hard computing uses existing mathematical algorithms to solve well-defined problems. It provides
a precise and exact solution to the problem; any numerical problem is an example of hard
computing.
Soft computing, on the other hand, is a different approach. In soft computing, we compute
solutions to existing complex problems, and the results are not precise: they are imprecise and
fuzzy in nature.
Soft computing is a modern computing paradigm that evolved to solve non-linear problems
involving approximation, uncertainty, and imprecision. Thus, soft computing is tolerant of
inexactness, uncertainty, partial truth, and approximation. It mainly depends on fuzzy logic
and probabilistic reasoning.
The term "soft computing" was first coined by Dr Lotfi Zadeh. According to Dr Zadeh, soft
computing is a methodology that imitates the human brain to reason and learn in an uncertain
environment. Soft computing uses multi-valued logic, giving it a flexible, tolerant nature, and
it is mainly used to perform parallel computations.
Hard computing is the conventional approach to computing and requires an accurately stated
analytical model. The term "hard computing" was also coined by Dr Lotfi Zadeh, in fact before
"soft computing". Hard computing depends on binary logic and crisp systems.
Hard computing uses two-valued logic and therefore has a deterministic nature. It produces
precise and accurate results. In hard computing, definite control actions are defined using a
mathematical model or algorithm.
The major drawback of hard computing is that it is incapable of solving real-world problems
whose behavior is imprecise and whose information changes continuously. Hard computing is mainly
used to perform sequential computations.
Computation time: soft computing takes less computation time, while hard computing takes more.
The biological processes of the human nervous system (the brain) serve as the foundation for
neural computers. Neural computing entails massively parallel processing and self-learning,
similar to the brain, made feasible by the brain's neural network. A neural network is simply a
collection of processing elements, interconnected in a web-like fashion, that produce outputs
after receiving inputs.
Like the human brain, artificial neural networks learn from experience, generalize from
examples, and can extract important information from noisy input. They operate in parallel at
high speed and are fault resilient.
1. Artificial neural networks can be trained and generalized using large volumes of data.
Training on vast datasets allows them to make pattern-based predictions and judgments.
2. ANNs can be optimized and run efficiently on hardware accelerators or dedicated AI
processors, such as GPUs, for fast, parallel processing.
3. Another advantage of ANNs is that they continue to function even in the presence of noise
or errors in the data, which makes them suitable for scenarios involving noisy, partial,
or distorted inputs.
4. They are also non-linear in nature, which enables them to represent complex relationships
and patterns in data. They can be customized to handle various kinds of data and perform
various tasks.
5. They are capable of extracting features from data, removing the need for manual feature
engineering. They can also be trained to handle many jobs at once, making them useful in
advanced AI applications.
Disadvantages
1. Artificial neural networks may grow overly complex due to their architecture and the
massive amounts of data used to train them. They can memorize the training data, which
results in poor generalization to new data.
2. Artificial neural networks require suitable hardware, such as capable processors or
dedicated AI accelerators, vast storage space, and large amounts of random access memory.
3. Because of their complexity, the working principles and even the outcomes of ANNs can be
difficult to grasp; their decision-making processes can be hard to interpret.
4. No explicit rule determines the structure of an ANN; a proper network structure is found
by trial and error.
5. They are also susceptible to adversarial examples: slight changes in input data may cause
the network to make wrong decisions and produce irrelevant results.
Every synapse has an associated weight, determined during network training. The performance and
potency of the network depend entirely on the number of neurons in the network, how they are
connected to each other (the topology), and the weights assigned to each synapse.
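As a sketch of how topology and synapse weights shape the output, the following forward pass pushes inputs through a small fixed 2-2-1 topology; every weight and bias here is a made-up illustrative value:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def forward(inputs, layers):
    """Propagate inputs through a layered topology.
    `layers` is a list of layers; each neuron is a (weights, bias) pair."""
    acts = list(inputs)
    for layer in layers:
        acts = [sigmoid(sum(w * a for w, a in zip(weights, acts)) + bias)
                for weights, bias in layer]
    return acts

# A 2-2-1 topology with illustrative synapse weights.
net = [
    [([0.5, -0.4], 0.1), ([0.3, 0.8], -0.2)],  # hidden layer: 2 neurons
    [([1.2, -0.7], 0.05)],                     # output layer: 1 neuron
]
output = forward([1.0, 0.0], net)[0]
```

Changing any weight, or adding neurons and layers, changes the mapping from inputs to outputs, which is exactly why the network's potency depends on its topology and synapse weights.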
Differences between artificial and biological neural networks
1. An ANN is a mathematical model inspired mainly by the biological neural system of the
human brain. A biological neural network (BNN), by contrast, is composed of actual
processing units, neurons, linked together via synapses.
2. An artificial neural network's processing is sequential and centralized, whereas a
biological neural network processes information in a parallel and distributed manner.
3. An artificial neural network is much smaller than a biological neural network, which is
far larger in size.
4. The biological neural network is highly fault tolerant; the artificial neural network is
much less so.
5. The processing speed of an artificial neural network is in the nanosecond range, faster
than that of the biological neural network, where the cycle time of a neural event
triggered by an external input is in the millisecond range.
6. A BNN can tackle more difficult problems than an artificial neural network.
7. The operating environment of an artificial neural network is well defined and well
constrained. In contrast, the operating environment of a biological neural network is
poorly defined and unconstrained.
8. The reliability of an artificial neural network is vulnerable, whereas the reliability of
a biological neural network is robust.
Processing: the ANN is sequential and centralized; the BNN is parallel and distributed.
Control mechanism: the ANN has a control unit that keeps track of all operations; the BNN has
no central control, and processing is distributed across the network.
Complexity: the ANN cannot match the brain's complex pattern recognition; in the BNN, the large
quantity and complexity of the connections allow the brain to perform complicated tasks.
Memory: ANN memory is separate from the processor, localized, and not content-addressable; BNN
memory is integrated into the processor, distributed, and content-addressable.
Reliability: the ANN is very vulnerable; the BNN is robust.
Learning: the ANN needs accurately structured and formatted data; the BNN is tolerant of
ambiguity.
Response time: the ANN's response time is measured in nanoseconds; the BNN's is measured in
milliseconds.