Lecture - 16
Introduction to Metaheuristic Optimization
Welcome back to the course on Optimization Methods for Civil Engineering. Today, I will introduce metaheuristic optimization methods. So far, we have discussed the classical methods, so by now you know what classical optimization techniques are.
You have learned the necessary and sufficient conditions for optimality, and then we discussed line search techniques, that is, how to find the optimal solution of a single-variable optimization problem. Then we discussed how to solve multivariable problems.
What are we doing there? Basically, we are performing a sequence of line searches: we take a direction and then, along that direction, we try to find the optimal solution. We have discussed a few algorithms such as Newton's method, the steepest descent method, the univariate method and the conjugate direction method.
Now, look at these algorithms. The basic assumption when we derived them was that the function is convex, which means there is only one optimal solution. So what happens if you apply these algorithms to a nonconvex problem, one having more than one optimal solution?
You will get only a local optimal solution; you may not get the global optimal solution of the problem if you apply these algorithms. So, let us first look at the disadvantages of classical optimization methods, and then I will discuss metaheuristic optimization techniques, which are capable of finding the global optimal solution of a nonconvex or otherwise difficult problem.
Let us see the difficulties, or the disadvantages, of classical optimization methods. As I said, the classical optimization techniques are useful for finding the unconstrained maxima or minima of a continuous and differentiable function. Why does the function have to be continuous and differentiable? Because we are using gradient information. Now look at the first function.
This is a simple function with only one optimal solution, somewhere here; it is a convex function, and I can easily solve this problem using a classical method. You just do a line search: you can apply the golden section search method, the interval halving method, the Newton-Raphson method or any other single-variable line search technique, and you can find the minimum of this particular function (a golden section search sketch is given below).
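To make the line search idea concrete, here is a minimal sketch of the golden section search in Python; the bracket and the quadratic test function at the end are hypothetical examples, not taken from the lecture slide.

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Golden section search: shrink the bracket [a, b] around the minimum
    of a unimodal (convex) single-variable function f."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0      # ~0.618, the golden ratio factor
    x1 = b - inv_phi * (b - a)
    x2 = a + inv_phi * (b - a)
    f1, f2 = f(x1), f(x2)
    while (b - a) > tol:
        if f1 < f2:                 # minimum lies in [a, x2]; reuse x1 as the new x2
            b, x2, f2 = x2, x1, f1
            x1 = b - inv_phi * (b - a)
            f1 = f(x1)
        else:                       # minimum lies in [x1, b]; reuse x2 as the new x1
            a, x1, f1 = x1, x2, f2
            x2 = a + inv_phi * (b - a)
            f2 = f(x2)
    return 0.5 * (a + b)

# Hypothetical convex test function with its minimum at x = 3
print(golden_section_search(lambda x: (x - 3.0) ** 2, 0.0, 6.0))
```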
There is only one minimum, so you can easily find the optimal solution of this problem. Similarly, if I take this two-variable function, it is also a convex function, so you can apply any of the methods such as Newton's method, the univariate method, the conjugate direction method or the steepest descent method; you can start the search from any point and you will finally reach the optimal solution of this particular problem (a steepest descent sketch follows after this paragraph).
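As an illustration of a multivariable classical method, here is a minimal steepest descent sketch for a two-variable convex function; the quadratic test function and the fixed step size are my own hypothetical choices, not taken from the slides.

```python
import numpy as np

def steepest_descent(grad, x0, step=0.1, tol=1e-8, max_iter=1000):
    """Steepest descent: repeatedly move against the gradient until it
    (nearly) vanishes; for a convex function any starting point works."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - step * g
    return x

# Hypothetical convex quadratic f(x, y) = (x - 1)^2 + 2*(y + 2)^2
grad_f = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)])
print(steepest_descent(grad_f, [5.0, 5.0]))   # converges to about [1, -2]
```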
These are simple problems; they are convex problems. Therefore, if we apply the classical optimization methods, you will be able to find the optimal solution of these problems. There is no need to apply any other algorithm, because the classical algorithms are capable of solving them.
And you know that when we discussed the classical optimization techniques, mainly the line search methods and then the different methods for choosing the search direction, I said that the function has to be convex; that is, these algorithms were derived under the assumption that the function is a convex function.
(Refer Slide Time: 05:39)
Now, let me show you an example problem. Suppose this is a function with only one optimal solution, somewhere here. Now I apply a region elimination technique; in this case, I have applied the interval halving method. I have shown you the solution at different iterations. At the beginning, the initial length of the search space is 4: the lower bound is 2 and the upper bound is 6.
(Refer Slide Time: 06:29)
Now, if I apply the interval halving method, after one iteration, that is, in the second iteration, I get a lower bound of 3, an upper bound of 5 and a length of 2.
(Refer Slide Time: 06:44)
Then, in the next iteration, I get a length of 1, with a lower bound of 3 and an upper bound of 4.
(Refer Slide Time: 06:51)
If I continue the iterations, I get a length of 0.5, then 0.25, and so on.
(Refer Slide Time: 06:59)
(Refer Slide Time: 07:02)
And finally, I get a very small length, L = 0.00024414, and I obtain this particular solution. This is the optimal solution of the problem, your x*. So what I am saying is this: if your function is convex and has only one optimal solution, you apply this algorithm and you will get the optimal solution of that particular problem.
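Here is a minimal sketch of the interval halving method just described. The interval length halves in every iteration, so starting from L = 4 it takes 14 iterations to reach L = 4/2^14, which is about 0.00024414. The test function is a hypothetical convex example with its minimum inside [2, 6].

```python
def interval_halving(f, a, b, tol=2.5e-4):
    """Interval halving (a region elimination method): compare f at the
    midpoint and at the two quarter points and discard half of [a, b]
    in every iteration."""
    xm = 0.5 * (a + b)
    fm = f(xm)
    while (b - a) > tol:
        L = b - a
        x1, x2 = a + 0.25 * L, b - 0.25 * L
        f1, f2 = f(x1), f(x2)
        if f1 < fm:            # minimum lies in the left half [a, xm]
            b, xm, fm = xm, x1, f1
        elif f2 < fm:          # minimum lies in the right half [xm, b]
            a, xm, fm = xm, x2, f2
        else:                  # minimum lies in the middle half [x1, x2]
            a, b = x1, x2
    return xm

# Hypothetical convex test function with its minimum at x = 3.7 inside [2, 6]
print(interval_halving(lambda x: (x - 3.7) ** 2, 2.0, 6.0))
```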
(Refer Slide Time: 07:43)
Now, I would like to apply the same interval halving method to this particular function. Notice that this function has two optimal solutions, somewhere here and here, and there may be others. To apply the interval halving method here, I have taken the lower bound equal to minus 2 and the upper bound equal to plus 6. Initially, then, the lower bound is minus 2, the upper bound is 6 and the length is 8.
(Refer Slide Time: 08:21)
If I continue, then finally I will get the solution somewhere here. This is the optimal solution returned for this run, your x*. So what happens is that if there is more than one optimal solution, you will get only one of them. It is not that you cannot apply this algorithm to a nonconvex problem; you can apply it, but you will get one of the local optimal solutions.
This is a nonconvex function, and I have applied the region elimination technique even though it was designed for a convex function; I can still apply this algorithm to a nonconvex function, and I will get this particular solution.
(Refer Slide Time: 09:24)
Now, let us apply the Newton-Raphson method. In the Newton-Raphson method, we apply the update equation x_(i+1) = x_i - f'(x_i) / f''(x_i), that is, the current point minus the first derivative divided by the second derivative, to get the optimal solution of the function.
Now, just see: this is a nonconvex function, with a maximum here and a minimum here. Suppose I take minus 2 as the initial point and apply Newton's method. In the second iteration I get this point, in the third iteration this point, in the fourth iteration this point, and finally I arrive at this solution. This is one x*, a stationary point, and it is a maximum point. So this is one solution of this particular function.
Now take x_0 equal to plus 6. In the earlier case I took x_0 equal to minus 2 and obtained this particular solution; if instead I take x_0 equal to 6, then in the second iteration I get this solution, in the third iteration this solution, in the fourth iteration this solution, and this is the solution obtained. So this is another stationary point.
So what happens? If you start from minus 2 you get this solution, and if you start from plus 6 you get this other solution. Therefore, if you apply these algorithms, you will get one of the stationary points, namely the local solution near where you start. These algorithms are therefore sensitive to the initial solution (a small Newton-Raphson sketch illustrating this is given below).
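The following minimal sketch shows this starting-point sensitivity. The actual function on the slide is not reproduced here, so I use a hypothetical cubic f(x) = x^3/3 - 4x, whose stationary points are a maximum at x = -2 and a minimum at x = +2.

```python
def newton_raphson(df, d2f, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration x_{i+1} = x_i - f'(x_i)/f''(x_i); it converges
    to whichever stationary point (maximum or minimum) lies near x0."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x = x - step
        if abs(step) < tol:
            break
    return x

# Hypothetical nonconvex cubic: f(x) = x**3 / 3 - 4*x
df = lambda x: x * x - 4.0        # f'(x)
d2f = lambda x: 2.0 * x           # f''(x)
print(newton_raphson(df, d2f, -3.0))   # -> -2.0, a local maximum
print(newton_raphson(df, d2f, 3.0))    # -> +2.0, a local minimum
```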
Let us take an example problem. This axis is f(x) and this is x, and suppose the function looks like this. We have one solution here, which is a maximum point; another solution, which is also a maximum point; and two further points which are minimum points. So in this particular function we have two maxima and two minima.
Now, let us use a classical optimization method, that is, any single-variable optimization technique, to find the maxima and minima of this function. If I take this as my initial point x_0, I will get this particular solution. If I take this point as my x_0, I will get this solution; with this point as x_0, I may get this solution; and with this point as x_0, I will get this solution.
So what happens? The initial solution, this x_0 value, has to be defined by the user, and depending upon what initial value you provide to the algorithm, you will get one of these optimal solutions.
Therefore, for a nonconvex problem, or whenever a function has more than one optimal solution, these algorithms are highly sensitive to the initial point supplied to them. Depending upon the initial point, if you supply this particular point you may get the global maximum of the function, and if you give this other point as the initial point you may get the global minimum.
In this case you have seen the function and can decide on an initial point, but consider a problem with 100 variables: there it is not possible to know what initial point you should take in order to reach the global optimal solution of the problem.
Therefore, these algorithms are not capable of solving a nonconvex problem, that is, a problem having multiple optimal solutions. If you apply them to such problems, you will get only a local optimal solution.
Now, let us look at the limitations of gradient-based classical optimization methods. I have already discussed them, but let me list them here. What do you need? You need the derivative values, the derivative information. Second, the objective function and the constraints have to be continuous and differentiable, which I have also already said.
Because I want to apply a gradient-based classical optimization technique, the function has to be continuous and differentiable; this is the requirement, or you can say the assumption. Then what are the limitations? The first is that these algorithms are not suitable for solving optimization problems with discontinuous functions.
If your function is discontinuous, you will not be able to apply these algorithms; this is one limitation. The second limitation is that the algorithms are not suitable for solving optimization problems having non-differentiable functions: if your function is continuous but non-differentiable, you will not be able to apply them.
I would like to make one more point here. A function may be differentiable, but the question is how you calculate the derivative. Many times, you have to calculate the derivative using numerical methods (a finite-difference sketch is given below).
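As a concrete illustration, a central-difference approximation is one common way to estimate the derivatives numerically; the test function and the step size h below are hypothetical, and choosing h poorly is exactly the kind of difficulty mentioned here.

```python
import math

def central_difference(f, x, h=1e-5):
    """Central-difference estimates of f'(x) and f''(x); each call costs
    extra function evaluations and is sensitive to the step size h."""
    d1 = (f(x + h) - f(x - h)) / (2.0 * h)
    d2 = (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)
    return d1, d2

# Hypothetical smooth test function
print(central_difference(math.sin, 1.0))   # approximately (cos(1), -sin(1))
```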
I have already discussed how you can calculate the derivative using numerical methods. In that case, you will often face problems in calculating the derivative of the function. So these algorithms are not suitable for problems with non-differentiable functions, and even when the function is differentiable, it may be very difficult to calculate the derivative.
Calculating the derivative may also take a lot of time, and in that case too these algorithms are not suitable. The third limitation is that the algorithms are not suitable for solving discrete optimization problems: if your problem is discrete in nature, these algorithms are not suitable because the function is not continuous.
Similarly, consider an integer problem or a mixed-integer problem. Let me give an example of an integer problem: suppose you would like to design a particular beam or column, and you want to design the reinforcement so as to minimize the reinforcement requirement.
Suppose the reinforcement is specified in terms of the diameter of the bar, and suppose the optimal solution says you need an 8.5 mm bar in that particular beam or column; an 8.5 mm bar will not be available.
In the market, 8.5 mm is not available; you will get 8 mm, 10 mm, 12 mm, 16 mm, 20 mm and so on. Similarly, suppose you get 20.06 mm; that is also not available, so you would have to use 20 or 25, or perhaps 18 or 16. These problems are therefore integer-type problems: you can use only 20 or 16 or 12, something like that. In such cases, these algorithms are not suitable.
If you apply these algorithms, you will get a value such as 8.5 or 20.06 or 21.27, which is not available in the market, so you will not be able to use that value. In that case, these algorithms are not suitable for solving integer or mixed-integer problems, where there may be integer variables along with continuous variables (a small sketch of mapping a continuous optimum onto available bar sizes is given below).
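A minimal sketch of this issue: if a gradient-based method returns a continuous diameter, it still has to be mapped onto the bar sizes actually available in the market. The list of sizes below simply follows the values mentioned above, and rounding up is only one simple, hypothetical way of doing the mapping; it does not guarantee optimality of the discrete design.

```python
# Bar diameters (mm) assumed to be available in the market
AVAILABLE_DIAMETERS = [8, 10, 12, 16, 20, 25]

def next_available_diameter(required_mm):
    """Round a continuous optimum such as 8.5 mm or 20.06 mm up to the
    nearest purchasable bar size; the rounded design is buildable but
    may no longer be the true optimum of the discrete problem."""
    for d in AVAILABLE_DIAMETERS:
        if d >= required_mm:
            return d
    raise ValueError("required diameter exceeds the largest available bar")

print(next_available_diameter(8.5))     # -> 10
print(next_available_diameter(20.06))   # -> 25
```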
The next limitation is that these algorithms work under the assumption that the problem is a convex problem. I have already explained that when we derived these algorithms, we assumed the function is convex, so that there is only one optimal solution; and we have already discussed what convex and nonconvex functions are.
So these algorithms assume that the entire problem is convex. For nonconvex problems, they can only locate a local optimum. As I said, you can still apply them to a nonconvex problem, provided the function is continuous and differentiable, but in that case you will get a local optimal solution.
The question, then, is how to get the global optimal solution. You can try to use these algorithms for that, but what do you have to do? You have to vary your initial point: suppose you start with an initial x_0 of, say, 1, and after that try x_0 equal to 5.
For different values of x_0, you can run the algorithm and see whether you get a better solution. So you can apply these algorithms to a nonconvex problem, but in each run you will get a local optimal solution (a simple multistart sketch of this idea is given below).
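Here is a minimal sketch of that multistart idea, using SciPy's BFGS local search as the classical method; the multimodal test function and the list of starting points are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def multistart(f, starting_points):
    """Run a gradient-based local search from several user-chosen starting
    points and keep the best local minimum found; each run still only
    finds the local optimum near its own x0."""
    best = None
    for x0 in starting_points:
        res = minimize(f, x0, method="BFGS")
        if best is None or res.fun < best.fun:
            best = res
    return best

# Hypothetical multimodal function with several local minima
f = lambda x: np.sin(3.0 * x[0]) + 0.1 * x[0] ** 2
result = multistart(f, [np.array([v]) for v in (-4.0, -1.0, 1.0, 5.0)])
print(result.x, result.fun)
```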
Next, the user has to define a starting point, and this is another limitation. For a nonconvex problem with more than one optimal solution, if I would like to find the global optimal solution, I really do not know what starting point I should take; nevertheless, the user has to define one. Suppose I happen to use a suitable starting point.
If, luckily, the starting point you have given is near the global optimal solution, that is, in the region where the global optimum lies, then you will get the global optimal solution of the problem. So the user has to define a starting point, and this is another limitation of classical optimization techniques.
Why am I talking about these limitations? Because metaheuristic optimization methods were developed to overcome them. When we discuss metaheuristic optimization methods, you will see that these algorithms are designed specifically to overcome the limitations of the classical optimization methods. I hope this is clear to you.
(Refer Slide Time: 24:39)
Now, I have shown you some functions here. The first function is a convex function, so I can apply a classical optimization technique and obtain its optimal solution; here the local optimal solution is also the global optimal solution, because the function is convex.
The second is a nonconvex function. In this case too, I can apply a classical optimization technique, because the function is differentiable and continuous.
But I will get only one of the solutions. If there are multiple optimal solutions, a gradient-based classical technique will return one of them, and which one you get depends on the initial solution you provide.
The third function is discontinuous, so I will not be able to apply the classical optimization technique, since the function is not continuous. Similarly, this next function is non-differentiable at this particular point, so again I cannot apply the classical technique; and this one here is also a discontinuous function.
Therefore, I will not be able to apply the classical optimization technique. Finally, this one has a discrete search space: the solution can only lie at these isolated points, so this is not a continuous function.
You will not be able to apply a gradient-based classical optimization technique to find the optimal solution of this problem, because it is discrete in nature. The solution must be this point, or this one, or this one; in between there is no solution. Therefore, gradient-based classical optimization cannot be applied here.
(Refer Slide Time: 27:42)
Similarly, look at this figure; let me discuss the second function. You can see there are four optimal solutions of this particular problem. It is a continuous function and it is differentiable; the only problem is that it is not convex, so we have more than one optimal solution.
I can apply the classical optimization technique because the function is differentiable and continuous. If you take x_0 somewhere here, you move in this direction and you may get this particular point; if you take x_0 somewhere here, you may get this point; and somewhere here, you may get this other point.
Depending upon which x_0 you choose, you will get one of the optimal solutions; you may not get the global optimal solution, but you will get one optimum. Now look at this other function: there are a great many local optimal solutions, and only one of them, somewhere here, is the global optimal solution; the others are local optima.
If you apply a classical optimization method here, you will get one of the local optimal solutions. What you have to do is change your initial point; if, luckily, you give an initial point somewhere here, you may get the global optimal solution of the problem. So, for this particular function, it is not at all easy to reach the global optimal solution using a classical optimization method.
Now that you know the disadvantages of classical optimization methods, let us look at metaheuristic optimization. Metaheuristic optimization methods are derived from different phenomena occurring in nature, from the intelligence seen in biological systems. Let us start with competition.
What is happening? In nature there is competition, a struggle for survival, both within a species and across species. The competition is for resources, for survival. Therefore, not everyone is able to survive in nature; the fittest will survive.
Those who are fit, strong and intelligent will survive; the others will not, because of competition. This is one phenomenon happening in nature. Another is learning. Not only human beings but also animals, even ants, learn with experience.
This is a trial-and-error kind of learning. They are also communicating: there is a very good mechanism for communicating with the other individuals of that particular species.
Then there is swarming: they work in groups, not individually. When they search for food, they do not go alone.
They go in a group, and they have a very good methodology for searching for food; it is well organized, they are not moving about arbitrarily in search of food. So they move in a group, or swarm. And then there is reproduction: they produce offspring.
When they produce offspring, some of the offspring may be better than their parents, that is, they may have a better capability for survival. That is another factor at work in nature.
So what are we doing, basically? If you look at these natural processes, the phenomena happening in nature and the intelligence in biological systems, people have observed that everything is optimized.
For example, when ants search for food, as I said in my first class, they have a very good communication technique and a methodology for moving along essentially the shortest route. They are not moving arbitrarily; they have optimized the process so that they can reach and carry the food over the shortest distance.
Similarly, if you look at other phenomena in nature, their processes are also optimized. By studying these natural processes, many researchers have proposed optimization methods, which we call metaheuristic optimization methods; I will explain later why they are called metaheuristic. They have given us different metaheuristic optimization methods for solving optimization problems.
(Refer Slide Time: 34:34)
Some of the metaheuristic optimization algorithms are the following. The genetic algorithm is inspired by the evolution process in nature. People studied whether evolution is a random process or a guided process; it is not a random process, and based on that this algorithm was proposed, and it works well in finding the optimal solution of a problem.
Similarly, there are other algorithms. Particle swarm optimization is based on the social behavior of bird flocking or fish schooling, that is, on how they search for food.
Simulated annealing is inspired by the process of annealing in metallurgy. Differential evolution is based on biology and the evolution of living beings. The firefly algorithm is inspired by the flashing behavior of fireflies; I will also discuss this algorithm.
Then there is another algorithm inspired by the spreading strategy of weeds. If you look at a paddy field or any cultivated field, it gets captured by weeds, which are unwanted plants. This algorithm is based on how weeds spread and occupy an entire field, and we call it invasive weed optimization.
Ant colony optimization, which I also mentioned in the first introductory class, is inspired by the food-searching behavior of real ant colonies. The antlion optimizer is inspired by the hunting mechanism of antlions.
The artificial bee colony algorithm is inspired by the foraging behavior of honeybees, that is, how they collect nectar. The bat algorithm is inspired by the echolocation behavior of microbats. The crow search algorithm is inspired by the intelligent behavior of crows, and the cuckoo search algorithm is inspired by the reproduction strategy of cuckoo birds.
These are some of the metaheuristic algorithms. They are inspired by different phenomena and activities happening in nature, and if you look at them, they do not use gradient information at all; as I will explain, they use only the function value.
(Refer Slide Time: 39:24)
Now let me discuss the advantages of metaheuristic optimization methods. The first advantage is that they do not require derivative values: we use neither the first derivative nor the second derivative, so I need not calculate them at all. This is the main advantage of metaheuristic optimization methods; I do not need the derivative information.
The objective function and constraints need not be continuous and differentiable. One of the limitations of classical optimization methods is that the function should be continuous and differentiable, but for metaheuristic optimization methods this restriction does not exist.
Therefore, these algorithms are suitable for solving optimization problems with discontinuous functions, problems having non-differentiable functions, discrete optimization problems, and integer and mixed-integer problems.
Another advantage is that they do not work under the assumption that the problem is convex, so I can apply them to nonconvex functions. I also discussed earlier how to select an algorithm: now that you have both classical algorithms and metaheuristic optimization methods, which one should you select for your problem?
What I will say is this: if you know that your problem is a convex problem, go for a classical optimization technique, because those are very good algorithms for solving convex problems.
If your problem is a nonconvex problem, then I suggest you apply a metaheuristic optimization method. For nonconvex problems, it is expected that these algorithms can locate the global optimum; with a metaheuristic method, the probability is very high that you will get the global optimal solution of the problem.
It will basically avoid getting stuck at a local optimal solution. Next, the user need not define a starting point. In a classical method, I have to define x_0, and the search process starts from there; in this case, I need not define any starting point.
These methods are also inherently parallel: if I want to apply parallel computation, I can. The classical algorithms are sequential, moving from one point to the next and then to another, but here the computation is inherently parallel (a small sketch of parallel function evaluation is given below).
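As a rough sketch of what "inherently parallel" means in practice: a population-based method evaluates many independent candidate solutions per iteration, so those (often expensive) objective evaluations can be distributed across workers. The objective below is a hypothetical stand-in for an expensive analysis, not something from the lecture.

```python
from multiprocessing import Pool

def expensive_objective(x):
    """Stand-in for an expensive objective function (e.g. a structural analysis)."""
    return (x - 3.0) ** 2

def evaluate_population(population, workers=4):
    """Evaluate a whole population of candidate solutions in parallel;
    a classical line search cannot do this, because each new point
    depends on the result of the previous one."""
    with Pool(workers) as pool:
        return pool.map(expensive_objective, population)

if __name__ == "__main__":
    print(evaluate_population([1.0, 2.0, 3.0, 4.0, 5.0]))
```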
They are also non-knowledge-based optimization methods: you can apply them without knowing the nature of the function. With classical methods, you should know what kind of problem you are trying to solve, otherwise you may not get the optimal solution.
But in this case, I do not need detailed knowledge of the problem. Without knowing much about the objective function and constraints, I can apply these methods, because I am not calculating derivatives, and they can be applied to non-continuous and non-differentiable functions as well.
They make it comparatively easy to discover the global optimum. As I said, it is expected that these methods will give you the global optimal solution and can avoid getting trapped in local optima. Suppose your function looks like this and I would like to find the global minimum; I may well get this particular point.
Similarly, if I need to find the global maximum, I may get that too, while easily avoiding the other local optimal solutions. These are the advantages of metaheuristic optimization methods. I would also like to say that metaheuristic optimization methods are problem-independent: they do not depend on what type of problem you are trying to solve.
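To summarize these advantages in code, here is a deliberately simple, population-based, derivative-free search skeleton. It is my own illustrative sketch, not any of the named algorithms: it needs no gradients and no user-supplied starting point, and it only ever calls the objective function. The Rastrigin-like test function is a standard multimodal example, not taken from the slides.

```python
import math
import random

def simple_population_search(f, lower, upper, pop_size=30, generations=300):
    """A bare-bones metaheuristic skeleton: random initial population,
    random perturbations, and a move is kept only if it improves f.
    Real metaheuristics (GA, PSO, ...) replace the perturbation step
    with much smarter, nature-inspired update rules."""
    # Random initial population inside the bounds: no starting point needed.
    pop = [random.uniform(lower, upper) for _ in range(pop_size)]
    best = min(pop, key=f)
    for _ in range(generations):
        for i, x in enumerate(pop):
            trial = x + random.gauss(0.0, 0.05 * (upper - lower))
            trial = min(max(trial, lower), upper)      # keep it inside the bounds
            if f(trial) < f(x):                        # greedy acceptance
                pop[i] = trial
                if f(trial) < f(best):
                    best = trial
    return best

# Multimodal test function (1-D Rastrigin): many local minima, global minimum at x = 0
f = lambda x: x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))
print(simple_population_search(f, -5.12, 5.12))
```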
(Refer Slide Time: 45:49)
Now, as I said, I can apply a metaheuristic optimization method for finding the global maximum or the global minimum. I can apply it here, and here as well. Look at this one: it is a very difficult problem if you apply a classical optimization method.
Because if you take this particular point as your x_0, you will get this solution; if you take this one, you will get this solution; and if you start somewhere here, you may get this solution, or this one, or this one. So, depending upon which x_0 you use, you will get one of the local optimal solutions.
But if you apply a metaheuristic optimization method, you may get the global optimal solution in a single run. Similarly, I can also apply the metaheuristic optimization methods here. So that is all for the introduction to metaheuristic optimization methods.
Today I have introduced what metaheuristic optimization methods are, and I have also discussed the disadvantages of classical optimization methods. In many problems, if the functions are discontinuous or non-differentiable, you will not be able to apply classical methods; similarly, if your problem is an integer or discrete problem, classical methods cannot be applied.
To overcome these difficulties, metaheuristic optimization methods can be used; they can give you the global optimal solution of the problem, and the search process will not get trapped at a local optimal point. So I can avoid the local optimal solutions and obtain the global optimal solution of the problem.
In the next class, I will discuss the genetic algorithm, one of the metaheuristic optimization algorithms, which is very powerful in finding the global optimal solution of a problem.
Thank you.