Genetic Algorithm

The genetic algorithm is an optimization method that uses concepts from natural selection and evolution. It works by initializing a population of solutions randomly and then using selection, crossover, and mutation operators to create new populations, evolving toward an optimal solution over many generations. The genetic algorithm differs from classical optimization in that it uses a population of potential solutions rather than a single point and selection based on fitness rather than deterministic computations.


GENETIC ALGORITHM

The genetic algorithm is a method for solving both constrained and unconstrained
optimization problems that is based on natural selection, the process that drives biological
evolution. The genetic algorithm repeatedly modifies a population of individual solutions. At
each step, the genetic algorithm selects individuals at random from the current population to
be parents and uses them to produce the children for the next generation. Over successive
generations, the population "evolves" toward an optimal solution. The genetic algorithm can be
applied to solve a variety of optimization problems that are not well suited for standard
optimization algorithms, including problems in which the objective function is discontinuous,
non-differentiable, stochastic, or highly nonlinear.
The genetic algorithm uses three main types of rules at each step to create the next generation
from the current population:
 Selection rules select the individuals, called parents, that contribute to the population at the
next generation.
 Crossover rules combine two parents to form children for the next generation.
 Mutation rules apply random changes to individual parents to form children.
The genetic algorithm differs from a classical, derivative-based, optimization algorithm in
two main ways, as summarized in the following table.

Classical Algorithm:
 Generates a single point at each iteration. The sequence of points approaches an optimal solution.
 Selects the next point in the sequence by a deterministic computation.

Genetic Algorithm:
 Generates a population of points at each iteration. The best point in the population approaches an optimal solution.
 Selects the next population by computation which uses random number generators.

Genetic Algorithm Terminology

Fitness Functions
The fitness function is the function you want to optimize. For standard optimization
algorithms, this is known as the objective function. The optimtool in MATLAB tries
to find the minimum of the fitness function.
Individuals
An individual is any point to which you can apply the fitness function. The value of the
fitness function for an individual is its score. An individual is sometimes referred to as
a genome and the vector entries of an individual as genes.
Populations and Generations
A population is an array of individuals. For example, if the size of the population is 100 and
the number of variables in the fitness function is 3, you represent the population by a 100-by-
3 matrix. The same individual can appear more than once in the population. For example, the
individual (2, -3, 1) can appear in more than one row of the array.
At each iteration, the genetic algorithm performs a series of computations on the current
population to produce a new population. Each successive population is called a
new generation.
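The matrix representation described above can be sketched as follows (this sketch uses NumPy, which is an assumption on my part — the original documentation is about MATLAB, where the population is simply a numeric matrix):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
pop_size, n_vars = 100, 3

# A population of 100 individuals with 3 genes each is a 100-by-3 matrix.
population = rng.uniform(-10, 10, size=(pop_size, n_vars))

# The same individual can appear in more than one row.
population[5] = population[0]
```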
Diversity
Diversity refers to the average distance between individuals in a population. A population has
high diversity if the average distance is large; otherwise it has low diversity. In the following
figure, the population on the left has high diversity, while the population on the right has low
diversity.

Diversity is essential to the genetic algorithm because it enables the algorithm to search a
larger region of the space.
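Diversity as average pairwise distance can be computed directly; this is an illustrative sketch, not the toolbox's code:

```python
import itertools
import math

def diversity(population):
    """Average Euclidean distance over all pairs of individuals."""
    pairs = list(itertools.combinations(population, 2))
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)
```

A spread-out population scores higher than a tightly clustered one.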
Fitness Values and Best Fitness Values
The fitness value of an individual is the value of the fitness function for that individual.
Because the toolbox software finds the minimum of the fitness function, the best fitness value
for a population is the smallest fitness value for any individual in the population.
Parents and Children
To create the next generation, the genetic algorithm selects certain individuals in the current
population, called parents, and uses them to create individuals in the next generation,
called children. Typically, the algorithm is more likely to select parents that have better
fitness values.

How the Genetic Algorithm Works

Outline of the Algorithm


The following outline summarizes how the genetic algorithm works:
1. The algorithm begins by creating a random initial population.
2. The algorithm then creates a sequence of new populations. At each step, the algorithm
uses the individuals in the current generation to create the next population. To create the
new population, the algorithm performs the following steps:
a. Scores each member of the current population by computing its fitness value.
b. Scales the raw fitness scores to convert them into a more usable range of values.
c. Selects members, called parents, based on their fitness.
d. Some of the individuals in the current population that have lower fitness are chosen
as elite. These elite individuals are passed to the next population.
e. Produces children from the parents. Children are produced either by making random
changes to a single parent—mutation—or by combining the vector entries of a pair of
parents—crossover.
f. Replaces the current population with the children to form the next generation.
3. The algorithm stops when one of the stopping criteria is met. See Stopping Conditions for
the Algorithm.
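The steps above can be sketched in Python as an illustrative toy, not the toolbox's implementation — the scaling step is omitted, and the parent pool and operator details here are simplified assumptions:

```python
import random

def next_generation(population, fitness, elite_count=2, crossover_fraction=0.8):
    """Produce one new generation: score, keep elites, then fill the
    rest of the population with crossover and mutation children."""
    ranked = sorted(population, key=fitness)        # lower fitness is better
    parents = ranked[:len(ranked) // 2]             # fitter half as parent pool
    elites = ranked[:elite_count]
    n_rest = len(population) - elite_count
    n_crossover = round(crossover_fraction * n_rest)

    children = []
    for _ in range(n_crossover):                    # crossover children
        p1, p2 = random.sample(parents, 2)
        children.append([random.choice(pair) for pair in zip(p1, p2)])
    for _ in range(n_rest - n_crossover):           # mutation children
        parent = random.choice(parents)
        children.append([g + random.gauss(0.0, 0.1) for g in parent])
    return elites + children
```

Calling this repeatedly on a random initial population drives the best fitness value down over the generations.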
Initial Population
The algorithm begins by creating a random initial population, as shown in the following
figure.

In this example, the initial population contains 20 individuals, which is the default value
of Population size in the Population options. Note that all the individuals in the initial
population lie in the upper-right quadrant of the picture, that is, their coordinates lie between
0 and 1, because the default value of Initial range in the Population options is [0;1].
If you know approximately where the minimal point for a function lies, you should set Initial
range so that the point lies near the middle of that range. For example, if you believe that the
minimal point for Rastrigin's function is near the point [0 0], you could set Initial range to
be [-1;1]. However, as this example shows, the genetic algorithm can find the minimum even
with a less than optimal choice for Initial range.
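For reference, Rastrigin's function mentioned above is highly multimodal with its global minimum of 0 at the origin:

```python
import math

def rastrigin(x):
    """Rastrigin's function: global minimum 0 at x = [0, 0, ..., 0]."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)
```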
Creating the Next Generation
At each step, the genetic algorithm uses the current population to create the children that
make up the next generation. The algorithm selects a group of individuals in the current
population, called parents, who contribute their genes—the entries of their vectors—to their
children. The algorithm usually selects individuals that have better fitness values as parents.
You can specify the function that the algorithm uses to select the parents in the Selection
function field in the Selection options.
The genetic algorithm creates three types of children for the next generation:
 Elite children are the individuals in the current generation with the best fitness values. These
individuals automatically survive to the next generation.
 Crossover children are created by combining the vectors of a pair of parents.
 Mutation children are created by introducing random changes, or mutations, to a single
parent.
The following schematic diagram illustrates the three types of children.

Mutation and Crossover explains how to specify the number of children of each type that the
algorithm generates and the functions it uses to perform crossover and mutation.
The following sections explain how the algorithm creates crossover and mutation children.
Crossover Children
The algorithm creates crossover children by combining pairs of parents in the current
population. At each coordinate of the child vector, the default crossover function randomly
selects an entry, or gene, at the same coordinate from one of the two parents and assigns it to
the child. For problems with linear constraints, the default crossover function creates the
child as a random weighted average of the parents.
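The scattered-crossover behavior described above amounts to the following sketch (the linearly constrained weighted-average case is omitted):

```python
import random

def crossover_scattered(parent1, parent2):
    """At each coordinate, copy the gene from a randomly chosen parent."""
    return [random.choice(pair) for pair in zip(parent1, parent2)]
```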
Mutation Children
The algorithm creates mutation children by randomly changing the genes of individual
parents. By default, for unconstrained problems the algorithm adds a random vector from a
Gaussian distribution to the parent. For bounded or linearly constrained problems, the child
remains feasible.
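A minimal sketch of the unconstrained case (feasibility handling for bounded or linearly constrained problems is omitted):

```python
import random

def mutate_gaussian(parent, scale=1.0):
    """Add zero-mean Gaussian noise to each gene of the parent."""
    return [g + random.gauss(0.0, scale) for g in parent]
```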
The following figure shows the children of the initial population, that is, the population at the
second generation, and indicates whether they are elite, crossover, or mutation children.
Plots of Later Generations
The following figure shows the populations at iterations 60, 80, 95, and 100.

As the number of generations increases, the individuals in the population get closer together
and approach the minimum point [0 0].
Stopping Conditions for the Algorithm
The genetic algorithm uses the following conditions to determine when to stop:
 Generations — The algorithm stops when the number of generations reaches the value
of Generations.
 Time limit — The algorithm stops after running for an amount of time in seconds equal
to Time limit.
 Fitness limit — The algorithm stops when the value of the fitness function for the best point
in the current population is less than or equal to Fitness limit.
 Stall generations — The algorithm stops when the average relative change in the fitness
function value over Stall generations is less than Function tolerance.
 Stall time limit — The algorithm stops if there is no improvement in the objective function
during an interval of time in seconds equal to Stall time limit.
 Stall test — The stall condition is either average change or geometric weighted.
For geometric weighted, the weighting function is 1/2^n, where n is the number of generations
prior to the current. Both stall conditions apply to the relative change in the fitness function
over Stall generations.
 Function Tolerance — The algorithm runs until the average relative change in the fitness
function value over Stall generations is less than Function tolerance.
 Nonlinear constraint tolerance — The Nonlinear constraint tolerance is not used as
stopping criterion. It is used to determine the feasibility with respect to nonlinear constraints.
Also, a point is feasible with respect to linear constraints when the constraint violation is
below the square root of Nonlinear constraint tolerance.
The algorithm stops as soon as any one of these conditions is met. You can specify the values
of these criteria in the Stopping criteria pane in the Optimization app. The default values are
shown in the pane.
When you run the genetic algorithm, the Run solver and view results panel displays the
criterion that caused the algorithm to stop.
The options Stall time limit and Time limit prevent the algorithm from running too long. If
the algorithm stops due to one of these conditions, you might improve your results by
increasing the values of Stall time limit and Time limit.
Selection
The selection function chooses parents for the next generation based on their scaled values
from the fitness scaling function. An individual can be selected more than once as a parent, in
which case it contributes its genes to more than one child. The default selection
option, Stochastic uniform, lays out a line in which each parent corresponds to a section of
the line of length proportional to its scaled value. The algorithm moves along the line in steps
of equal size. At each step, the algorithm allocates a parent from the section it lands on.
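Stochastic uniform selection (also known as stochastic universal sampling) can be sketched as follows; this is an illustrative version, not the toolbox's code:

```python
import random

def stochastic_uniform(scaled_values, n_parents):
    """Lay the parents out on a line in sections proportional to their
    scaled values, then step along the line in equal-size steps."""
    step = sum(scaled_values) / n_parents
    pointer = random.uniform(0, step)        # one random offset for all picks
    picks, i, cum = [], 0, scaled_values[0]
    for _ in range(n_parents):
        while pointer > cum:
            i += 1
            cum += scaled_values[i]
        picks.append(i)
        pointer += step
    return picks
```

Because the steps are equal, a parent whose section covers a fixed fraction of the line receives a nearly fixed number of picks, with little variance.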
A more deterministic selection option is Remainder, which performs two steps:
 In the first step, the function selects parents deterministically according to the integer part of
the scaled value for each individual. For example, if an individual's scaled value is 2.3, the
function selects that individual twice as a parent.
 In the second step, the selection function selects additional parents using the fractional parts
of the scaled values, as in stochastic uniform selection. The function lays out a line in
sections, whose lengths are proportional to the fractional part of the scaled value of the
individuals, and moves along the line in equal steps to select the parents.
Note that if the fractional parts of the scaled values all equal 0, as can occur
using Top scaling, the selection is entirely deterministic.
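A sketch of Remainder selection; note that for brevity the fractional step here uses simple weighted sampling rather than the toolbox's equal-step line:

```python
import random

def remainder_selection(scaled_values, n_parents):
    """Integer parts of the scaled values pick parents deterministically;
    fractional parts fill the remaining slots stochastically."""
    picks = [i for i, s in enumerate(scaled_values) for _ in range(int(s))]
    fractions = [s - int(s) for s in scaled_values]
    while len(picks) < n_parents:
        picks.append(random.choices(range(len(scaled_values)), weights=fractions)[0])
    return picks[:n_parents]
```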
For details and more selection options, see Selection Options.
Reproduction Options
Reproduction options control how the genetic algorithm creates the next generation. The
options are
 Elite count — The number of individuals with the best fitness values in the current
generation that are guaranteed to survive to the next generation. These individuals are
called elite children. The default value of Elite count is 2.
When Elite count is at least 1, the best fitness value can only decrease from one generation to
the next. This is what you want to happen, since the genetic algorithm minimizes the fitness
function. Setting Elite count to a high value causes the fittest individuals to dominate the
population, which can make the search less effective.
 Crossover fraction — The fraction of individuals in the next generation, other than elite
children, that are created by crossover. Setting the Crossover Fraction describes how the
value of Crossover fraction affects the performance of the genetic algorithm.
Mutation and Crossover
The genetic algorithm uses the individuals in the current generation to create the children that
make up the next generation. Besides elite children, which correspond to the individuals in
the current generation with the best fitness values, the algorithm creates
 Crossover children by selecting vector entries, or genes, from a pair of individuals in the
current generation and combines them to form a child
 Mutation children by applying random changes to a single individual in the current
generation to create a child
Both processes are essential to the genetic algorithm. Crossover enables the algorithm to
extract the best genes from different individuals and recombine them into potentially superior
children. Mutation adds to the diversity of a population and thereby increases the likelihood
that the algorithm will generate individuals with better fitness values.
See Creating the Next Generation for an example of how the genetic algorithm applies
mutation and crossover.
You can specify how many of each type of children the algorithm creates as follows:
 Elite count, in Reproduction options, specifies the number of elite children.
 Crossover fraction, in Reproduction options, specifies the fraction of the population, other
than elite children, that are crossover children.
For example, if the Population size is 20, the Elite count is 2, and the Crossover
fraction is 0.8, the numbers of each type of children in the next generation are as follows:
 There are two elite children.
 There are 18 individuals other than elite children, so the algorithm rounds 0.8*18 = 14.4 to 14
to get the number of crossover children.
 The remaining four individuals, other than elite children, are mutation children.
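The arithmetic above, spelled out:

```python
pop_size, elite_count, crossover_fraction = 20, 2, 0.8

n_rest = pop_size - elite_count                   # 18 non-elite slots
n_crossover = round(crossover_fraction * n_rest)  # round(14.4) -> 14
n_mutation = n_rest - n_crossover                 # the remaining 4
```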

Genetic Algorithm Options


Optimization App vs. Command Line
There are two ways to specify options for the genetic algorithm, depending on whether you
are using the Optimization app or calling the functions ga or gamultiobj at the command line:
 If you are using the Optimization app (optimtool), select an option from a drop-down list or
enter the value of the option in a text field.
 If you are calling ga or gamultiobj at the command line, create options using the
function optimoptions, as follows:
 options = optimoptions('ga','Param1', value1, 'Param2', value2, ...);

See Setting Options at the Command Line for examples.


In this section, each option is listed in two ways:
 By its label, as it appears in the Optimization app
 By its field name in the options structure
For example:
 Population type is the label of the option in the Optimization app.
 PopulationType is the corresponding field of the options structure.
Plot Options
Plot options let you plot data from the genetic algorithm while it is running. You can stop the
algorithm at any time by clicking the Stop button on the plot window.
Plot interval (PlotInterval) specifies the number of generations between consecutive calls to
the plot function.
You can select any of the following plot functions in the Plot functions pane:
 Best fitness (@gaplotbestf) plots the best function value versus generation.
 Expectation (@gaplotexpectation) plots the expected number of children versus the raw
scores at each generation.
 Score diversity (@gaplotscorediversity) plots a histogram of the scores at each generation.
 Stopping (@gaplotstopping) plots stopping criteria levels.
 Best individual (@gaplotbestindiv) plots the vector entries of the individual with the best
fitness function value in each generation.
 Genealogy (@gaplotgenealogy) plots the genealogy of individuals. Lines from one
generation to the next are color-coded as follows:
o Red lines indicate mutation children.
o Blue lines indicate crossover children.
o Black lines indicate elite individuals.
 Scores (@gaplotscores) plots the scores of the individuals at each generation.
 Max constraint (@gaplotmaxconstr) plots the maximum nonlinear constraint violation at
each generation.
 Distance (@gaplotdistance) plots the average distance between individuals at each
generation.
 Range (@gaplotrange) plots the minimum, maximum, and mean fitness function values in
each generation.
 Selection (@gaplotselection) plots a histogram of the parents.
 Custom function lets you use plot functions of your own. To specify the plot function if you
are using the Optimization app,
o Select Custom function.
o Enter @myfun in the text box, where myfun is the name of your function.
See Structure of the Plot Functions.
gamultiobj allows Distance, Genealogy, Score diversity, Selection, Stopping, and Custom
function, as well as the following additional choices:
 Pareto front (@gaplotpareto) plots the Pareto front for the first two objective functions.
 Average Pareto distance (@gaplotparetodistance) plots a bar chart of the distance of each
individual from its neighbors.
 Rank histogram (@gaplotrankhist) plots a histogram of the ranks of the individuals.
Individuals of rank 1 are on the Pareto frontier. Individuals of rank 2 are lower than at least
one rank 1 individual, but are not lower than any individuals from other ranks, etc.
 Average Pareto spread (@gaplotspread) plots the average spread as a function of iteration
number.
To display a plot when calling ga from the command line, set the PlotFcn option to be a
function handle to the plot function. For example, to display the best fitness plot,
set options as follows:
options = optimoptions('ga','PlotFcn', @gaplotbestf);
To display multiple plots, use the syntax
options = optimoptions('ga','PlotFcn', {@plotfun1, @plotfun2, ...});
where @plotfun1, @plotfun2, and so on are function handles to the plot functions.
If you specify multiple plot functions, all plots appear as subplots in the same window. Right-
click any subplot to obtain a larger version in a separate figure window.
Structure of the Plot Functions
The first line of a plot function has this form:
function state = plotfun(options,state,flag)
The input arguments to the function are
 options — Structure containing all the current options settings.
 state — Structure containing information about the current generation. The State
Structure describes the fields of state.
 flag — Description of the stage the algorithm is currently in.
Passing Extra Parameters in the Optimization Toolbox™ documentation explains how to
provide additional parameters to the function.
The output argument state is a state structure as well. Pass the input argument, modified if
you like; see Changing the State Structure. To stop the iterations, set state.StopFlag to a
nonempty character vector, such as 'y'.
The State Structure
ga. The state structure for ga, which is an input argument to plot, mutation, and output
functions, contains the following fields:
 Population — Population in the current generation
 Score — Scores of the current population
 Generation — Current generation number
 StartTime — Time when genetic algorithm started, returned by tic
 StopFlag — Reason for stopping, a character vector
 Selection — Indices of individuals selected for elite, crossover, and mutation
 Expectation — Expectation for selection of individuals
 Best — Vector containing the best score in each generation
 how — The 'augLag' nonlinear constraint algorithm reports one of the following
actions: 'Infeasible point', 'Update multipliers', or 'Increase penalty'; see Augmented
Lagrangian Genetic Algorithm
 FunEval — Cumulative number of function evaluations
 LastImprovement — Generation at which the last improvement in fitness value occurred
 LastImprovementTime — Time at which last improvement occurred
 NonlinIneq — Nonlinear inequality constraints at current point, present only when a
nonlinear constraint function is specified
 NonlinEq — Nonlinear equality constraints at current point, present only when a nonlinear
constraint function is specified
gamultiobj. The state structure for gamultiobj, which is an input argument to plot, mutation,
and output functions, contains the following fields:
 Population — Population in the current generation
 Score — Scores of the current population, a Population-by-nObjectives matrix,
where nObjectives is the number of objectives
 Generation — Current generation number
 StartTime — Time when genetic algorithm started, returned by tic
 StopFlag — Reason for stopping, a character vector
 FunEval — Cumulative number of function evaluations
 Selection — Indices of individuals selected for elite, crossover, and mutation
 Rank — Vector of the ranks of members in the population
 Distance — Vector of distances of each member of the population to the nearest neighboring
member
 AverageDistance — The average of Distance
 Spread — Vector where the entries are the spread in each generation
 mIneq — Number of nonlinear inequality constraints
 mEq — Number of nonlinear equality constraints
 mAll — Total number of nonlinear constraints, mAll = mIneq + mEq
 C — Nonlinear inequality constraints at current point, a PopulationSize-by-mIneq matrix
 Ceq — Nonlinear equality constraints at current point, a PopulationSize-by-mEq matrix
 isFeas — Feasibility of population, a logical vector with PopulationSize elements
 maxLinInfeas — Maximum infeasibility with respect to linear constraints for the population
Population Options
Population options let you specify the parameters of the population that the genetic algorithm
uses.
Population type (PopulationType) specifies the type of input to the fitness function. Types
and their restrictions are:
 Double vector ('doubleVector') — Use this option if the individuals in the population have
type double. Use this option for mixed integer programming. This is the default.
 Bit string ('bitstring') — Use this option if the individuals in the population have components
that are 0 or 1.
Caution The individuals in a Bit string population are vectors of type double, not strings or characters.
 For Creation function (CreationFcn) and Mutation function (MutationFcn),
use Uniform (@gacreationuniform and @mutationuniform) or Custom. For Crossover
function(CrossoverFcn), use Scattered (@crossoverscattered), Single
point (@crossoversinglepoint), Two point (@crossovertwopoint), or Custom. You cannot use
a Hybrid function, and ga ignores all constraints, including bounds, linear constraints, and
nonlinear constraints.
 Custom — For Crossover function and Mutation function, use Custom. For Creation
function, either use Custom, or provide an Initial population. You cannot use a Hybrid
function, and ga ignores all constraints, including bounds, linear constraints, and nonlinear
constraints.
Population size (PopulationSize) specifies how many individuals there are in each
generation. With a large population size, the genetic algorithm searches the solution space
more thoroughly, thereby reducing the chance that the algorithm returns a local minimum that
is not a global minimum. However, a large population size also causes the algorithm to run
more slowly.
If you set Population size to a vector, the genetic algorithm creates multiple subpopulations,
the number of which is the length of the vector. The size of each subpopulation is the
corresponding entry of the vector. See Migration Options.
Creation function (CreationFcn) specifies the function that creates the initial population
for ga. Do not specify a creation function with integer problems because ga overrides any
choice you make. Choose from:
 [] uses the default creation function for your problem.
 Uniform (@gacreationuniform) creates a random initial population with a uniform
distribution. This is the default when there are no linear constraints, or when there are integer
constraints. The uniform distribution is in the initial population range
(InitialPopulationRange). The default values for InitialPopulationRange are [-10;10] for
every component, or [-9999;10001] when there are integer constraints. These bounds are
shifted and scaled to match any existing bounds lb and ub.
Caution Do not use @gacreationuniform when you have linear constraints. Otherwise, your population might not satisfy the linear constraints.
 Feasible population (@gacreationlinearfeasible), the default when there are linear constraints
and no integer constraints, creates a random initial population that satisfies all bounds and
linear constraints. If there are linear constraints, Feasible population creates many individuals
on the boundaries of the constraint region, and creates a well-dispersed population. Feasible
population ignores Initial range (InitialPopulationRange).
gacreationlinearfeasible calls linprog to create a feasible population with respect to bounds
and linear constraints.
For an example showing its behavior, see Linearly Constrained Population and Custom Plot
Function.
 Nonlinear Feasible population (@gacreationnonlinearfeasible) is the default creation function
for the 'penalty' nonlinear constraint algorithm. For details, see Constraint Parameters.
 Custom lets you write your own creation function, which must generate data of the type that
you specify in Population type. To specify the creation function if you are using the
Optimization app,
o Set Creation function to Custom.
o Set Function name to @myfun, where myfun is the name of your function.
If you are using ga, set
options = optimoptions('ga','CreationFcn', @myfun);
Your creation function must have the following calling syntax.
function Population = myfun(GenomeLength, FitnessFcn, options)
The input arguments to the function are:
o Genomelength — Number of independent variables for the fitness function
o FitnessFcn — Fitness function
o options — Options structure
The function returns Population, the initial population for the genetic algorithm.
Passing Extra Parameters in the Optimization Toolbox documentation explains how to
provide additional parameters to the function.
Caution When you have bounds or linear constraints, ensure that your creation function creates
individuals that satisfy these constraints. Otherwise, your population might not satisfy the constraints.
Initial population (InitialPopulationMatrix) specifies an initial population for the genetic
algorithm. The default value is [], in which case ga uses the default Creation function to
create an initial population. If you enter a nonempty array in the Initial population field, the
array must have no more than Population size rows, and exactly Number of
variables columns. In this case, the genetic algorithm calls a Creation function to generate
the remaining individuals, if required.
Initial scores (InitialScoreMatrix) specifies initial scores for the initial population. The initial
scores can also be partial. Do not specify initial scores with integer problems
because ga overrides any choice you make.
Initial range (InitialPopulationRange) specifies the range of the vectors in the initial
population that is generated by the gacreationuniform creation function. You can set Initial
range to be a matrix with two rows and Number of variables columns, each column of
which has the form [lb;ub], where lb is the lower bound and ub is the upper bound for the
entries in that coordinate. If you specify Initial range to be a 2-by-1 vector, each entry is
expanded to a constant row of length Number of variables. If you do not specify an Initial
range, the default is [-10;10] ([-1e4+1;1e4+1] for integer-constrained problems), modified to
match any existing bounds.
See Setting the Initial Range for an example.
Fitness Scaling Options
Fitness scaling converts the raw fitness scores that are returned by the fitness function to
values in a range that is suitable for the selection function. You can specify options for fitness
scaling in the Fitness scaling pane.
Scaling function (FitnessScalingFcn) specifies the function that performs the scaling. The
options are
 Rank (@fitscalingrank) — The default fitness scaling function, Rank, scales the raw scores
based on the rank of each individual instead of its score. The rank of an individual is its
position in the sorted scores. An individual with rank r has scaled score proportional to 1/√r.
So the scaled score of the most fit individual is proportional to 1, the scaled score of the next
most fit is proportional to 1/√2, and so on. Rank fitness scaling removes the effect of the
spread of the raw scores. The square root makes poorly ranked individuals more nearly equal
in score, compared to rank scoring. For more information, see Fitness Scaling.
 Proportional (@fitscalingprop) — Proportional scaling makes the scaled value of an
individual proportional to its raw fitness score.
 Top (@fitscalingtop) — Top scaling scales the top individuals equally.
Selecting Top displays an additional field, Quantity, which specifies the number of
individuals that are assigned positive scaled values. Quantity can be an integer from 1
through the population size or a fraction from 0 through 1 specifying a fraction of the
population size. The default value is 0.4. Each of the individuals that produce offspring is
assigned an equal scaled value, while the rest are assigned the value 0. The scaled values
have the form [0 1/n 1/n 0 0 1/n 0 0 1/n ...].
To change the default value for Quantity at the command line, use the following syntax:
options = optimoptions('ga','FitnessScalingFcn', {@fitscalingtop,quantity})
where quantity is the value of Quantity.
 Shift linear (@fitscalingshiftlinear) — Shift linear scaling scales the raw scores so that the
expectation of the fittest individual is equal to a constant multiplied by the average score.
You specify the constant in the Max survival rate field, which is displayed when you
select Shift linear. The default value is 2.
To change the default value of Max survival rate at the command line, use the following
syntax
options = optimoptions('ga','FitnessScalingFcn',
{@fitscalingshiftlinear, rate})
where rate is the value of Max survival rate.
 Custom lets you write your own scaling function. To specify the scaling function using the
Optimization app,
o Set Scaling function to Custom.
o Set Function name to @myfun, where myfun is the name of your function.
If you are using ga at the command line, set
options = optimoptions('ga','FitnessScalingFcn', @myfun);
Your scaling function must have the following calling syntax:
function expectation = myfun(scores, nParents)
The input arguments to the function are:
o scores — A vector of scalars, one for each member of the population
o nParents — The number of parents needed from this population
The function returns expectation, a column vector of scalars of the same length as scores,
giving the scaled values of each member of the population. The sum of the entries
of expectation must equal nParents.
Passing Extra Parameters in the Optimization Toolbox documentation explains how to
provide additional parameters to the function.
See Fitness Scaling for more information.
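The rank-based rule above can be made concrete with a short sketch. The following Python function is purely illustrative (it is not toolbox code, and the name rank_scaling is hypothetical); it assumes minimization, assigns each individual a value proportional to 1/√r, and normalizes so the expectations sum to the number of parents, which is the contract a scaling function must satisfy:

```python
import math

def rank_scaling(scores, n_parents):
    # Rank 1 is the best (lowest) raw score, assuming minimization.
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    raw = [0.0] * len(scores)
    for rank, i in enumerate(order, start=1):
        raw[i] = 1.0 / math.sqrt(rank)   # scaled score proportional to 1/sqrt(r)
    total = sum(raw)
    # Normalize so the expectations sum to the number of parents needed.
    return [n_parents * v / total for v in raw]

expect = rank_scaling([3.2, 0.5, 1.7], n_parents=2)
```

Here the fittest individual (raw score 0.5) receives the largest expectation, and the spread of the raw scores has no effect on the result.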
Selection Options
Selection options specify how the genetic algorithm chooses parents for the next generation.
You can specify the function the algorithm uses in the Selection function (SelectionFcn)
field in the Selection options pane. Do not use with integer problems.
gamultiobj uses only the Tournament (@selectiontournament) selection function.
For ga the options are:
 Stochastic uniform (@selectionstochunif) — The ga default selection function, Stochastic
uniform, lays out a line in which each parent corresponds to a section of the line of length
proportional to its scaled value. The algorithm moves along the line in steps of equal size. At
each step, the algorithm allocates a parent from the section it lands on. The first step is a
uniform random number less than the step size.
 Remainder (@selectionremainder) — Remainder selection assigns parents deterministically
from the integer part of each individual's scaled value and then uses roulette selection on the
remaining fractional part. For example, if the scaled value of an individual is 2.3, that
individual is listed twice as a parent because the integer part is 2. After parents have been
assigned according to the integer parts of the scaled values, the rest of the parents are chosen
stochastically. The probability that a parent is chosen in this step is proportional to the
fractional part of its scaled value.
 Uniform (@selectionuniform) — Uniform selection chooses parents using the expectations
and number of parents. Uniform selection is useful for debugging and testing, but is not a
very effective search strategy.
 Roulette (@selectionroulette) — Roulette selection chooses parents by simulating a roulette
wheel, in which the area of the section of the wheel corresponding to an individual is
proportional to the individual's expectation. The algorithm uses a random number to select
one of the sections with a probability equal to its area.
 Tournament (@selectiontournament) — Tournament selection chooses each parent by
choosing Tournament size players at random and then choosing the best individual out of
that set to be a parent. Tournament size must be at least 2. The default value
of Tournament size is 4.
To change the default value of Tournament size at the command line, use the syntax
options = optimoptions('ga','SelectionFcn',...
{@selectiontournament,size})
where size is the value of Tournament size.
When Constraint parameters > Nonlinear constraint
algorithm is Penalty, ga uses Tournament with size 2.
 Custom enables you to write your own selection function. To specify the selection function
using the Optimization app,
o Set Selection function to Custom.
o Set Function name to @myfun, where myfun is the name of your function.
If you are using ga at the command line, set
options = optimoptions('ga','SelectionFcn', @myfun);
Your selection function must have the following calling syntax:
function parents = myfun(expectation, nParents, options)
ga provides the input arguments expectation, nParents, and options. Your function returns the
indices of the parents.
The input arguments to the function are:
o expectation
 For ga, expectation is a column vector of the scaled fitness of each member of the population.
The scaling comes from the Fitness Scaling Options.
Tip You can ensure that you have a column vector by using expectation(:,1). For example, edit selectionstochunif or any of the built-in selection functions.
 For gamultiobj, expectation is a matrix whose first column is the rank of the individuals, and
whose second column is the distance measure of the individuals. See Multiobjective Options.
o nParents— Number of parents to select.
o options — Genetic algorithm options structure.
The function returns parents, a row vector of length nParents containing the indices of the
parents that you select.
Passing Extra Parameters in the Optimization Toolbox documentation explains how to
provide additional parameters to the function.
See Selection for more information.
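Two of the selection rules above can be sketched in Python. These functions are illustrative assumptions for exposition only (not toolbox code), and they treat lower scores as better:

```python
import random

def stochastic_uniform(expectation, n_parents, rng=random.random):
    # Lay each parent's expectation out as a segment on a line, then move
    # along the line in equal-size steps; the first step is a uniform
    # random number less than the step size.
    step = sum(expectation) / n_parents
    position = rng() * step
    parents, i, cumulative = [], 0, expectation[0]
    for _ in range(n_parents):
        while cumulative < position:
            i += 1
            cumulative += expectation[i]
        parents.append(i)
        position += step
    return parents

def tournament(scores, n_parents, tournament_size=4, rng=None):
    # Pick tournament_size players at random; the best one becomes a parent.
    rng = rng or random.Random(0)
    return [min((rng.randrange(len(scores)) for _ in range(tournament_size)),
                key=lambda i: scores[i])
            for _ in range(n_parents)]

parents = stochastic_uniform([0.875, 0.620, 0.505], n_parents=2, rng=lambda: 0.5)
```

With a fixed first step of half the step size, the line walk here lands in the segments of individuals 0 and 2.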
Reproduction Options
Reproduction options specify how the genetic algorithm creates children for the next
generation.
Elite count (EliteCount) specifies the number of individuals that are guaranteed to survive to
the next generation. Set Elite count to be a positive integer less than or equal to the
population size. The default value is ceil(0.05*PopulationSize) for continuous problems,
and 0.05*(default PopulationSize) for mixed-integer problems.
Crossover fraction (CrossoverFraction) specifies the fraction of the next generation, other
than elite children, that are produced by crossover. Set Crossover fraction to be a fraction
between 0 and 1, either by entering the fraction in the text box or moving the slider. The
default value is 0.8.
See Setting the Crossover Fraction for an example.
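A small sketch shows how these two options partition a generation. The helper below is hypothetical (the toolbox does not expose such a function) but follows the rules stated above: elite children pass through unchanged, a fraction of the remainder comes from crossover, and the rest come from mutation:

```python
import math

def next_generation_counts(population_size, elite_count, crossover_fraction):
    remainder = population_size - elite_count      # children still to create
    crossover_kids = round(crossover_fraction * remainder)
    mutation_kids = remainder - crossover_kids
    return elite_count, crossover_kids, mutation_kids

# Defaults for a continuous problem with a population of 20:
# EliteCount = ceil(0.05*20) = 1, CrossoverFraction = 0.8.
counts = next_generation_counts(20, math.ceil(0.05 * 20), 0.8)
```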
Mutation Options
Mutation options specify how the genetic algorithm makes small random changes in the
individuals in the population to create mutation children. Mutation provides genetic diversity
and enables the genetic algorithm to search a broader space. You can specify the mutation
function in the Mutation function (MutationFcn) field in the Mutation options pane. Do not
use with integer problems. You can choose from the following functions:
 Gaussian (mutationgaussian) — The default mutation function for unconstrained
problems, Gaussian, adds a random number taken from a Gaussian distribution with mean 0
to each entry of the parent vector. The standard deviation of this distribution is determined by
the parameters Scale and Shrink, which are displayed when you select Gaussian, and by
the Initial range setting in the Population options.
o The Scale parameter determines the standard deviation at the first generation. If you
set Initial range to be a 2-by-1 vector v, the initial standard deviation is the same at all
coordinates of the parent vector, and is given by Scale*(v(2)-v(1)).
If you set Initial range to be a vector v with two rows and Number of variables columns,
the initial standard deviation at coordinate i of the parent vector is given by Scale*(v(i,2) -
v(i,1)).
o The Shrink parameter controls how the standard deviation shrinks as generations go by. If
you set Initial range to be a 2-by-1 vector, the standard deviation at the kth generation, σk, is
the same at all coordinates of the parent vector, and is given by the recursive formula

σk = σk−1 (1 − Shrink · k/Generations).

If you set Initial range to be a vector with two rows and Number of variables columns, the
standard deviation at coordinate i of the parent vector at the kth generation, σi,k, is given by
the recursive formula

σi,k = σi,k−1 (1 − Shrink · k/Generations).

If you set Shrink to 1, the algorithm shrinks the standard deviation in each coordinate
linearly until it reaches 0 when the last generation is reached. A negative value of Shrink
causes the standard deviation to grow.
 The default value of both Scale and Shrink is 1. To change the default values at the
command line, use the syntax
options = optimoptions('ga','MutationFcn', ...
    {@mutationgaussian, scale, shrink})
where scale and shrink are the values of Scale and Shrink, respectively.
Caution Do not use mutationgaussian when you have bounds or linear constraints. Otherwise, your population will not necessarily satisfy the constraints.
 Uniform (mutationuniform) — Uniform mutation is a two-step process. First, the algorithm
selects a fraction of the vector entries of an individual for mutation, where each entry has a
probability Rate of being mutated. The default value of Rate is 0.01. In the second step, the
algorithm replaces each selected entry by a random number selected uniformly from the
range for that entry.
To change the default value of Rate at the command line, use the syntax
options = optimoptions('ga','MutationFcn', {@mutationuniform, rate})
where rate is the value of Rate.
Caution Do not use mutationuniform when you have bounds or linear constraints. Otherwise, your population will not necessarily satisfy the constraints.
 Adaptive Feasible (mutationadaptfeasible), the default mutation function when there are
constraints, randomly generates directions that are adaptive with respect to the last successful
or unsuccessful generation. The mutation chooses a direction and step length that satisfies
bounds and linear constraints.
 Custom enables you to write your own mutation function. To specify the mutation function
using the Optimization app,
o Set Mutation function to Custom.
o Set Function name to @myfun, where myfun is the name of your function.
If you are using ga, set
options = optimoptions('ga','MutationFcn', @myfun);
Your mutation function must have this calling syntax:
function mutationChildren = myfun(parents, options, nvars,
FitnessFcn, state, thisScore, thisPopulation)
The arguments to the function are
o parents — Row vector of parents chosen by the selection function
o options — Options structure
o nvars — Number of variables
o FitnessFcn — Fitness function
o state — Structure containing information about the current generation. The State
Structure describes the fields of state.
o thisScore — Vector of scores of the current population
o thisPopulation — Matrix of individuals in the current population
The function returns mutationChildren—the mutated offspring—as a matrix where rows
correspond to the children. The number of columns of the matrix is Number of variables.
Passing Extra Parameters in the Optimization Toolbox documentation explains how to
provide additional parameters to the function.
Caution When you have bounds or linear constraints, ensure that your mutation function creates individuals that satisfy these constraints. Otherwise, your population will not necessarily satisfy the constraints.
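The Gaussian mutation mechanism can be sketched as follows. This Python function is illustrative only; in particular, it uses a simple closed-form linear shrink schedule, whereas the toolbox applies the recursive update shown above (both reach a standard deviation of 0 at the last generation when Shrink is 1):

```python
import random

def mutate_gaussian(parent, scale, shrink, k, generations,
                    initial_range, rng=None):
    rng = rng or random.Random(1)
    lo, hi = initial_range
    # Initial standard deviation: Scale times the width of the initial range.
    # For shrink > 0 it decreases with the generation number k.
    sigma = scale * (hi - lo) * (1.0 - shrink * k / generations)
    return [x + rng.gauss(0.0, sigma) for x in parent]

# With Shrink = 1, sigma reaches 0 at the last generation: no mutation left.
child = mutate_gaussian([0.25, -0.5], scale=1.0, shrink=1.0,
                        k=100, generations=100, initial_range=(-1.0, 1.0))
```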
Crossover Options
Crossover options specify how the genetic algorithm combines two individuals, or parents, to
form a crossover child for the next generation.
Crossover function (CrossoverFcn) specifies the function that performs the crossover. Do
not use with integer problems. You can choose from the following functions:
 Scattered (@crossoverscattered), the default crossover function for problems without linear
constraints, creates a random binary vector and selects the genes where the vector is a 1 from
the first parent, and the genes where the vector is a 0 from the second parent, and combines
the genes to form the child. For example, if p1 and p2 are the parents
 p1 = [a b c d e f g h]
p2 = [1 2 3 4 5 6 7 8]
and the binary vector is [1 1 0 0 1 0 0 0], the function returns the following child:
child1 = [a b 3 4 e 6 7 8]
Caution Do not use @crossoverscattered when you have linear constraints. Otherwise, your population will not necessarily satisfy the constraints.
 Single point (@crossoversinglepoint) chooses a random integer n between 1 and Number of
variables and then
o Selects vector entries numbered less than or equal to n from the first parent.
o Selects vector entries numbered greater than n from the second parent.
o Concatenates these entries to form a child vector.
For example, if p1 and p2 are the parents
p1 = [a b c d e f g h]
p2 = [1 2 3 4 5 6 7 8]
 and the crossover point is 3, the function returns the following child.
child = [a b c 4 5 6 7 8]
Caution Do not use @crossoversinglepoint when you have linear constraints. Otherwise, your population will not necessarily satisfy the constraints.
Two point (@crossovertwopoint) selects two random
integers m and n between 1 and Number of variables. The function selects
o Vector entries numbered less than or equal to m from the first parent
o Vector entries numbered from m+1 to n, inclusive, from the second parent
o Vector entries numbered greater than n from the first parent.
The algorithm then concatenates these entries to form the child. For example,
if p1 and p2 are the parents
p1 = [a b c d e f g h]
p2 = [1 2 3 4 5 6 7 8]
and the crossover points are 3 and 6, the function returns the following child.
child = [a b c 4 5 6 g h]
Caution Do not use @crossovertwopoint when you have linear constraints. Otherwise, your population will not necessarily satisfy the constraints.
Intermediate (@crossoverintermediate), the default crossover function when there are
linear constraints, creates children by taking a weighted average of the parents. You
can specify the weights by a single parameter, Ratio, which can be a scalar or a row
vector of length Number of variables. The default is a vector of all 1's. The function
creates the child from parent1 and parent2 using the following formula.
child = parent1 + rand * Ratio * ( parent2 - parent1)
If all the entries of Ratio lie in the range [0, 1], the children produced are within the
hypercube defined by placing the parents at opposite vertices. If Ratio is not in that range, the
children might lie outside the hypercube. If Ratio is a scalar, then all the children lie on the
line between the parents.
To change the default value of Ratio at the command line, use the syntax
options = optimoptions('ga','CrossoverFcn', ...
{@crossoverintermediate, ratio});
where ratio is the value of Ratio.
Heuristic (@crossoverheuristic) returns a child that lies on the line containing the two
parents, a small distance away from the parent with the better fitness value in the
direction away from the parent with the worse fitness value. You can specify how far
the child is from the better parent by the parameter Ratio, which appears when you
select Heuristic. The default value of Ratio is 1.2. If parent1 and parent2 are the
parents, and parent1 has the better fitness value, the function returns the child
child = parent2 + Ratio * (parent1 - parent2);
To change the default value of Ratio at the command line, use the syntax
options = optimoptions('ga','CrossoverFcn',...
{@crossoverheuristic,ratio});
where ratio is the value of Ratio.
Arithmetic (@crossoverarithmetic) creates children that are the weighted arithmetic
mean of two parents. Children are always feasible with respect to linear constraints
and bounds.
Custom enables you to write your own crossover function. To specify the crossover
function using the Optimization app,
o Set Crossover function to Custom.
o Set Function name to @myfun, where myfun is the name of your function.
If you are using ga, set
options = optimoptions('ga','CrossoverFcn',@myfun);
Your crossover function must have the following calling syntax.
xoverKids = myfun(parents, options, nvars, FitnessFcn, ...
unused,thisPopulation)
The arguments to the function are
o parents — Row vector of parents chosen by the selection function
o options — options structure
o nvars — Number of variables
o FitnessFcn — Fitness function
o unused — Placeholder not used
o thisPopulation — Matrix representing the current population. The number of rows of the
matrix is Population size and the number of columns is Number of variables.
The function returns xoverKids—the crossover offspring—as a matrix where rows
correspond to the children. The number of columns of the matrix is Number of variables.
Passing Extra Parameters in the Optimization Toolbox documentation explains how to
provide additional parameters to the function.
Caution When you have bounds or linear constraints, ensure that your crossover function creates individuals that satisfy these constraints. Otherwise, your population will not necessarily satisfy the constraints.
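Two of the crossover rules can be sketched in Python (illustrative functions, not toolbox code):

```python
import random

def crossover_scattered(p1, p2, rng=None):
    # A random binary mask picks each gene from the first parent (bit 1)
    # or the second parent (bit 0).
    rng = rng or random.Random(0)
    mask = [rng.randint(0, 1) for _ in p1]
    return [a if bit else b for bit, a, b in zip(mask, p1, p2)], mask

def crossover_intermediate(p1, p2, ratio):
    # child = p1 + rand * ratio * (p2 - p1); one random draw per child,
    # so a scalar ratio keeps the child on the line through the parents.
    t = random.Random(0).random()
    return [a + t * ratio * (b - a) for a, b in zip(p1, p2)]

child, mask = crossover_scattered(list('abcdefgh'), [1, 2, 3, 4, 5, 6, 7, 8])
mid = crossover_intermediate([0.0, 0.0], [1.0, 2.0], ratio=1.0)
```

With a ratio of 1, the intermediate child always lies between the two parents, as the hypercube description above states.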
Migration Options
Note: Subpopulations refer to a form of parallel processing for the genetic algorithm. ga currently does not support this form. In subpopulations, each worker hosts a number of individuals. These individuals are a subpopulation. The worker evolves the subpopulation separately from other workers, except when migration causes some individuals to travel between workers.
Because ga does not currently support this form of parallel processing, there is no benefit to setting Population size to a vector, or to setting the MigrationDirection, MigrationInterval, or MigrationFraction options.
Migration options specify how individuals move between subpopulations. Migration occurs if
you set Population size to be a vector of length greater than 1. When migration occurs, the
best individuals from one subpopulation replace the worst individuals in another
subpopulation. Individuals that migrate from one subpopulation to another are copied. They
are not removed from the source subpopulation.
You can control how migration occurs by the following three fields in the Migration options
pane:
 Direction (MigrationDirection) — Migration can take place in one or both directions.
o If you set Direction to Forward ('forward'), migration takes place toward the last
subpopulation. That is, the nth subpopulation migrates into the (n+1)th subpopulation.
o If you set Direction to Both ('both'), the nth subpopulation migrates into both the (n–1)th and
the (n+1)th subpopulation.
Migration wraps at the ends of the subpopulations. That is, the last subpopulation migrates
into the first, and the first may migrate into the last.
 Interval (MigrationInterval) — Specifies how many generation pass between migrations. For
example, if you set Interval to 20, migration takes place every 20 generations.
 Fraction (MigrationFraction) — Specifies how many individuals move between
subpopulations. Fraction specifies the fraction of the smaller of the two subpopulations that
moves. For example, if individuals migrate from a subpopulation of 50 individuals into a
subpopulation of 100 individuals and you set Fraction to 0.1, the number of individuals that
migrate is 0.1*50=5.
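The migration rule above can be sketched as follows (an illustrative Python helper, not toolbox code). For each source subpopulation, the indices of its best individuals replace the worst individuals of the next subpopulation, and the count moved is Fraction times the smaller of the two sizes:

```python
def migration_moves(subpop_scores, fraction):
    # subpop_scores[s] lists the fitness scores (lower is better) of
    # subpopulation s. Returns tuples (source, destination, best_src, worst_dst).
    n = len(subpop_scores)
    moves = []
    for src in range(n):
        dst = (src + 1) % n          # forward direction; wraps last -> first
        k = int(fraction * min(len(subpop_scores[src]), len(subpop_scores[dst])))
        if k == 0:
            continue
        best = sorted(range(len(subpop_scores[src])),
                      key=lambda i: subpop_scores[src][i])[:k]
        worst = sorted(range(len(subpop_scores[dst])),
                       key=lambda i: subpop_scores[dst][i])[-k:]
        moves.append((src, dst, best, worst))
    return moves

moves = migration_moves([[1.0, 9.0, 5.0, 7.0], [2.0, 8.0]], fraction=0.5)
```

Note that migrants are copied, not removed: the source subpopulation keeps its best individuals.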
Constraint Parameters
Constraint parameters refer to the nonlinear constraint solver. For details on the algorithm,
see Nonlinear Constraint Solver Algorithms.
Choose between the nonlinear constraint algorithms by setting
the NonlinearConstraintAlgorithm option to 'auglag' (Augmented Lagrangian)
or 'penalty' (Penalty algorithm).
 Augmented Lagrangian Genetic Algorithm
 Penalty Algorithm
Augmented Lagrangian Genetic Algorithm
 Initial penalty (InitialPenalty) — Specifies an initial value of the penalty parameter that is
used by the nonlinear constraint algorithm. Initial penalty must be greater than or equal to 1,
and has a default of 10.
 Penalty factor (PenaltyFactor) — Increases the penalty parameter when the problem is not
solved to required accuracy and constraints are not satisfied. Penalty factor must be greater
than 1, and has a default of 100.
Penalty Algorithm
The penalty algorithm uses the gacreationnonlinearfeasible creation function by default. This
creation function uses fmincon to find feasible
individuals. gacreationnonlinearfeasible starts fmincon from a variety of initial points within
the bounds from the InitialPopulationRange option.
Optionally, gacreationnonlinearfeasible can run fmincon in parallel on the initial points.
You can specify tuning parameters for gacreationnonlinearfeasible using the following name-
value pairs.
SolverOpts — fmincon options, created using optimoptions or optimset.
UseParallel — When true, run fmincon in parallel on initial points; default is false.
NumStartPts — Number of start points, a positive integer up to sum(PopulationSize) in value.

Include the name-value pairs in a cell array along with @gacreationnonlinearfeasible:
options = optimoptions('ga','CreationFcn',{@gacreationnonlinearfeasible,...
    'UseParallel',true,'NumStartPts',20});
Multiobjective Options
Multiobjective options define parameters characteristic of the multiobjective genetic
algorithm. You can specify the following parameters:
 DistanceMeasureFcn — Defines a handle to the function that computes distance measure of
individuals, computed in decision variable or design space (genotype) or in function space
(phenotype). For example, the default distance measure function is distancecrowding in
function space, or {@distancecrowding,'phenotype'}.
 ParetoFraction — Sets the fraction of individuals to keep on the first Pareto front while the
solver selects individuals from higher fronts. This option is a scalar between 0 and 1.
Hybrid Function Options
 ga Hybrid Function
 gamultiobj Hybrid Function
ga Hybrid Function
A hybrid function is another minimization function that runs after the genetic algorithm
terminates. You can specify a hybrid function in Hybrid function (HybridFcn) options. Do
not use with integer problems. The choices are
 [] — No hybrid function.
 fminsearch (@fminsearch) — Uses the MATLAB® function fminsearch to perform
unconstrained minimization.
 patternsearch (@patternsearch) — Uses a pattern search to perform constrained or
unconstrained minimization.
 fminunc (@fminunc) — Uses the Optimization Toolbox function fminunc to perform
unconstrained minimization.
 fmincon (@fmincon) — Uses the Optimization Toolbox function fmincon to perform
constrained minimization.
Note: Ensure that your hybrid function accepts your problem constraints. Otherwise, ga throws an error.
You can set separate options for the hybrid function. Use optimset for fminsearch,
or optimoptions for fmincon, patternsearch, or fminunc. For example:
hybridopts = optimoptions('fminunc','Display','iter','Algorithm','quasi-newton');
Include the hybrid options in the Genetic Algorithm options structure as follows:
options = optimoptions('ga',options,'HybridFcn',{@fminunc,hybridopts});
hybridopts must exist before you set options.
See Include a Hybrid Function for an example.
gamultiobj Hybrid Function
A hybrid function is another minimization function that runs after the multiobjective genetic
algorithm terminates. You can specify the hybrid function fgoalattain in Hybrid
function (HybridFcn) options.
In use as a multiobjective hybrid function, the solver does the following:
1. Compute the maximum and minimum of each objective function at the solutions. For
objective j at solution k, let

Fmax(j) = maxk Fk(j)
Fmin(j) = mink Fk(j).

2. Compute the total weight at each solution k,

w(k) = Σj (Fmax(j) − Fk(j)) / (1 + Fmax(j) − Fmin(j)).

3. Compute the weight for each objective function j at each solution k,

p(j,k) = w(k) (Fmax(j) − Fk(j)) / (1 + Fmax(j) − Fmin(j)).
4. For each solution k, perform the goal attainment problem with goal vector Fmin and weight
vector p(j,k).
For more information, see section 9.6 of Deb [3].
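The weight computation can be sketched numerically. This illustrative Python function (hypothetical, not toolbox code) takes F[k][j], objective j at solution k, and returns the goal vector Fmin together with the weights p:

```python
def goal_attainment_weights(F):
    n_obj = len(F[0])
    Fmax = [max(row[j] for row in F) for j in range(n_obj)]
    Fmin = [min(row[j] for row in F) for j in range(n_obj)]
    # w(k) = sum_j (Fmax(j) - F_k(j)) / (1 + Fmax(j) - Fmin(j))
    w = [sum((Fmax[j] - row[j]) / (1.0 + Fmax[j] - Fmin[j])
             for j in range(n_obj))
         for row in F]
    # p(j,k) = w(k) * (Fmax(j) - F_k(j)) / (1 + Fmax(j) - Fmin(j))
    p = [[w[k] * (Fmax[j] - F[k][j]) / (1.0 + Fmax[j] - Fmin[j])
          for j in range(n_obj)]
         for k in range(len(F))]
    return Fmin, p

Fmin, p = goal_attainment_weights([[1.0, 4.0], [3.0, 2.0]])
```

A solution that already attains the maximum of an objective gets weight 0 for that objective.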
Stopping Criteria Options
Stopping criteria determine what causes the algorithm to terminate. You can specify the
following options:
 Generations (MaxGenerations) — Specifies the maximum number of iterations for the
genetic algorithm to perform. The default is 100*numberOfVariables.
 Time limit (MaxTime) — Specifies the maximum time in seconds the genetic algorithm runs
before stopping, as measured by tic and toc. This limit is enforced after each iteration,
so ga can exceed the limit when an iteration takes substantial time.
 Fitness limit (FitnessLimit) — The algorithm stops if the best fitness value is less than or
equal to the value of Fitness limit.
 Stall generations (MaxStallGenerations) — The algorithm stops if the average relative
change in the best fitness function value over Stall generations is less than or equal
to Function tolerance. (If the Stall Test (StallTest) option is 'geometricWeighted', then the
test is for a geometric weighted average relative change.) For a problem with nonlinear
constraints, Stall generations applies to the subproblem (see Nonlinear Constraint Solver
Algorithms).
For gamultiobj, if the weighted average relative change in the spread of the Pareto solutions
over Stall generations is less than Function tolerance, and the spread is smaller than the
average spread over the last Stall generations, then the algorithm stops. The spread is a
measure of the movement of the Pareto front.
 Stall time limit (MaxStallTime) — The algorithm stops if there is no improvement in the
best fitness value for an interval of time in seconds specified by Stall time limit, as measured
by tic and toc.
 Function tolerance (FunctionTolerance) — The algorithm stops if the average relative
change in the best fitness function value over Stall generations is less than or equal
to Function tolerance. (If the StallTest option is 'geometricWeighted', then the test is for
a geometric weighted average relative change.)
For gamultiobj, if the weighted average relative change in the spread of the Pareto solutions
over Stall generations is less than Function tolerance, and the spread is smaller than the
average spread over the last Stall generations, then the algorithm stops. The spread is a
measure of the movement of the Pareto front.
 Constraint tolerance (ConstraintTolerance) — The Constraint tolerance is not used as
stopping criterion. It is used to determine the feasibility with respect to nonlinear constraints.
Also, max(sqrt(eps),ConstraintTolerance) determines feasibility with respect to linear
constraints.
See Set Maximum Number of Generations for an example.
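The stall-generations test can be sketched as follows. This Python function is a simplified illustration, not solver code: it uses a plain (unweighted) average relative change, whereas the solver can also use a geometric weighted average when StallTest is 'geometricWeighted':

```python
def stalled(best_history, stall_generations, function_tolerance):
    # best_history[g] is the best fitness value found at generation g.
    if len(best_history) <= stall_generations:
        return False
    window = best_history[-(stall_generations + 1):]
    changes = [abs(window[i + 1] - window[i]) / max(abs(window[i]), 1e-300)
               for i in range(stall_generations)]
    return sum(changes) / stall_generations <= function_tolerance

flat = [1.0] * 60                                # no improvement: should stop
improving = [1.0 / (g + 1) for g in range(60)]   # still improving: keep going
```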
Output Function Options
Output functions are functions that the genetic algorithm calls at each generation. Unlike all
other solvers, a ga output function can not only read the values of the state of the algorithm,
but can modify those values.
To specify the output function using the Optimization app,
 Select Custom function.
 Enter @myfun in the text box, where myfun is the name of your function.
Write myfun with appropriate syntax.
 To pass extra parameters in the output function, use Anonymous Functions.
 For multiple output functions, enter a cell array of output function
handles: {@myfun1,@myfun2,...}.
At the command line, set
options = optimoptions('ga','OutputFcn',@myfun);
For multiple output functions, enter a cell array:
options = optimoptions('ga','OutputFcn',{@myfun1,@myfun2,...});
To see a template that you can use to write your own output functions, enter
edit gaoutputfcntemplate
at the MATLAB command line.
Structure of the Output Function
Your output function must have the following calling syntax:
[state,options,optchanged] = myfun(options,state,flag)
MATLAB passes the options, state, and flag data to your output function, and the output
function returns state, options, and optchanged data.
Note: To stop the iterations, set state.StopFlag to a nonempty character vector, such as 'y'.
The output function has the following input arguments:
 options — Options structure
 state — Structure containing information about the current generation. The State
Structure describes the fields of state.
 flag — Current status of the algorithm:
o 'init' — Initialization state
o 'iter' — Iteration state
o 'interrupt' — Iteration of a subproblem of a nonlinearly constrained problem
 When flag is 'interrupt', the values of the state structure fields apply to the subproblem iterations.
 When flag is 'interrupt', ga does not accept changes in options, and ignores optchanged.
o 'done' — Final state
Passing Extra Parameters in the Optimization Toolbox documentation explains how to
provide additional parameters to the function.
The output function returns the following arguments to ga:
 state — Structure containing information about the current generation. The State
Structure describes the fields of state. To stop the iterations, set state.StopFlag to a nonempty
character vector, such as 'y'.
 options — Options structure modified by the output function. This argument is optional.
 optchanged — Boolean flag indicating changes to options. To change options for subsequent
iterations, set optchanged to true.
Changing the State Structure
Caution Changing the state structure carelessly can lead to inconsistent or erroneous results. Usually, you can achieve the same or better state modifications by using mutation or crossover functions, instead of changing the state structure in a plot function or output function.
ga output functions can change the state structure (see The State Structure). Be careful when
changing values in this structure, as you can pass inconsistent data back to ga.
Tip If your output function changes the Population field, then be sure to update the Score field, and possibly the Best, NonlinIneq, and NonlinEq fields, so that they contain consistent information.
To update the Score field after changing the Population field, first calculate the fitness
function values of the population, then calculate the fitness scaling for the population.
See Fitness Scaling Options.
Display to Command Window Options
Level of display ('Display') specifies how much information is displayed at the command
line while the genetic algorithm is running. The available options are
 Off ('off') — No output is displayed.
 Iterative ('iter') — Information is displayed at each iteration.
 Diagnose ('diagnose') — Information is displayed at each iteration. In addition, the diagnostic
lists some problem information and the options that have been changed from the defaults.
 Final ('final') — The reason for stopping is displayed.
Both Iterative and Diagnose display the following information:
 Generation — Generation number
 f-count — Cumulative number of fitness function evaluations
 Best f(x) — Best fitness function value
 Mean f(x) — Mean fitness function value
 Stall generations — Number of generations since the last improvement of the fitness function
When a nonlinear constraint function has been specified, Iterative and Diagnose do not
display the Mean f(x), but will additionally display:
 Max Constraint — Maximum nonlinear constraint violation
The default value of Level of display is
 Off in the Optimization app
 'final' in options created using optimoptions
Vectorize and Parallel Options (User Function Evaluation)
You can choose to have your fitness and constraint functions evaluated in serial, parallel, or
in a vectorized fashion. These options are available in the User function evaluation section
of theOptions pane of the Optimization app, or by setting
the 'UseVectorized' and 'UseParallel' options with optimoptions.
 When Evaluate fitness and constraint functions ('UseVectorized') is in
serial (false), ga calls the fitness function on one individual at a time as it loops through the
population. (At the command line, this assumes 'UseParallel' is at its default value of false.)
 When Evaluate fitness and constraint functions ('UseVectorized')
is vectorized (true), ga calls the fitness function on the entire population at once, i.e., in a
single call to the fitness function.
If there are nonlinear constraints, the fitness function and the nonlinear constraints all need to
be vectorized in order for the algorithm to compute in a vectorized manner.
See Vectorize the Fitness Function for an example.
 When Evaluate fitness and constraint functions (UseParallel) is in parallel (true), ga calls
the fitness function in parallel, using the parallel environment you established (see How to
Use Parallel Processing). At the command line, set UseParallel to false to compute serially.
Note: You cannot simultaneously use vectorized and parallel computations. If you set 'UseParallel' to true and 'UseVectorized' to true, ga evaluates your fitness and constraint functions in a vectorized manner, not in parallel.
How Fitness and Constraint Functions Are Evaluated

                     UseVectorized = false   UseVectorized = true
UseParallel = false  Serial                  Vectorized
UseParallel = true   Parallel                Vectorized
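The difference between serial and vectorized evaluation is in the calling contract, sketched below in illustrative Python (the function names are hypothetical): the serial path calls the fitness function once per individual, while the vectorized path makes a single call on the whole population and expects one score per row:

```python
calls = {"serial": 0, "vectorized": 0}

def fitness_one(x):
    # Called once per individual when UseVectorized is false.
    calls["serial"] += 1
    return sum(v * v for v in x)

def fitness_population(pop):
    # Called once per generation on the whole population (one row per
    # individual) when UseVectorized is true.
    calls["vectorized"] += 1
    return [sum(v * v for v in row) for row in pop]

pop = [[1.0, 2.0], [3.0, 4.0], [0.0, 0.0]]
serial_scores = [fitness_one(x) for x in pop]   # three calls
vector_scores = fitness_population(pop)         # one call
```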