
American Journal of Engineering and Applied Sciences 2 (4): 789-795, 2009
ISSN 1941-7020
© 2009 Science Publications

Evolutionary Algorithm Definition

Nada M.A. AL-Salami


Department of Management Information Systems, Faculty of Economic and Business,
Al Zaytoonah University of Jordan, Amman, Jordan

Abstract: Problem statement: Most recent evolutionary algorithms work on a weak theoretical basis and are thus computationally expensive. Approach: This study discussed the use of a new evolutionary algorithm for automatic programming, based on theoretical definitions of program behavior. The evolutionary process adapts fixed and self-organized input-output specifications of the problem to evolve finite state machines that efficiently satisfy these specifications. Results: The proposed algorithm enhanced the evolutionary process by simultaneously solving multiple parts of the same problem. Conclusion: The probability that the algorithm converges to the optimal solution was greatly increased when the main problem was decomposed into multiple parts.

Key words: Evolutionary computation, genetic programming, automatic programming, system design, self-organization system

INTRODUCTION

Life on earth has evolved for some 3.5 billion years. Initially only the strongest creatures survived, but over time some creatures developed the ability to recall past series of events and apply that knowledge towards making intelligent decisions. The very existence of humans is testimony to the fact that our ancestors were able to outwit, rather than outpower, those with whom they were in competition; in other words, their response to the threat of their environment was intellectual adaptation. This could be regarded as the beginning of intelligent behavior. "Intelligent behavior is a composite ability to predict one's environment coupled with a translation of each prediction into a suitable response in light of some objective". Evolutionary Computing is a research area within Computer Science which draws inspiration from the process of natural evolution. Evolutionary computation offers practical advantages to the researcher facing difficult optimization problems. These advantages are manifold, including the simplicity of the approach, its robust response to changing circumstances, its flexibility and many other facets. The evolutionary approach can be applied to problems where heuristic solutions are not available or generally lead to unsatisfactory results. Thus evolutionary computing is needed for developing automated problem solvers, where the most powerful natural problem solvers are the human brain and the evolutionary process that created the human brain. Designing problem solvers based on the human brain leads to the field of "neurocomputing", while the second one leads to evolutionary computing. The algorithms involved in evolutionary computing are termed Evolutionary Algorithms (EA). Applications of EC include bioinformatics, numerical and combinatorial optimization, system modeling and identification, planning and control, engineering design, data mining, machine learning and artificial life. In evolutionary computation, the idea of self-modification has its origins in the ontogenetic programming system of Spector and Stoffel[1], the graph re-writing system of Gruau[2] and the developmental method of evolving graphs and circuits of Miller[3].

In this study, we propose a new evolutionary algorithm, based on a theoretical definition of a system and its input-output boundaries, in contrast with traditional evolutionary methods. We then compare it to the most recently used evolutionary algorithms.

Background: Evolutionary algorithms are ubiquitous nowadays, having been successfully applied to numerous problems from different domains, including optimization, automatic programming, machine learning, operations research, bioinformatics and social systems. In many cases the mathematical function which describes the problem is not known and the values at certain parameters are obtained from simulations. In contrast to many other optimization techniques, an important advantage of evolutionary algorithms is that they can cope with multi-modal functions[4]. Additional advantages are listed as follows:

• It is conceptually simple. The procedure may be written as a difference equation (a minimal sketch of this loop is given after the list):

  x[t + 1] = s(v(x[t]))                                  (1)

  Where:
  x[t] = The population at time t under a representation x
  v    = A random variation operator
  s    = The selection operator

• It is representation independent, in contrast with other numerical techniques, which might be applicable only to continuous values or other constrained sets
• It offers a framework in which it is comparably easy to incorporate prior knowledge about the problem. Incorporating such information focuses the evolutionary search, yielding a more efficient exploration of the state space of possible solutions
• It can be combined with more traditional optimization techniques. This may be as simple as the use of a gradient minimization after primary search with an evolutionary algorithm, or it may involve simultaneous application of other algorithms
• The evaluation of each solution can be handled in parallel and only selection (which requires at least pairwise competition) requires some serial processing
• Traditional methods of optimization are not robust to dynamic changes in the problem environment and often require a complete restart in order to provide a solution (e.g., dynamic programming). In contrast, evolutionary algorithms can be used to adapt solutions to changing circumstances
• It has the ability to address problems for which there are no human experts. Although human expertise should be used when it is available, it often proves less than adequate for automating problem-solving routines
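For illustration only, the following is a minimal Python sketch of the generic evolutionary loop of Eq. 1; it is not the implementation used in this study, and the population size, mutation rate, truncation selection and toy objective are assumptions.

import random

def evolve(fitness, init_individual, mutate, pop_size=20, generations=100):
    # Generic evolutionary loop of Eq. 1: x[t+1] = s(v(x[t])).
    population = [init_individual() for _ in range(pop_size)]      # x[0]
    for _ in range(generations):
        varied = [mutate(ind) for ind in population]               # v(x[t])
        # s: truncation selection over parents and offspring.
        population = sorted(population + varied, key=fitness, reverse=True)[:pop_size]
    return max(population, key=fitness)

# Toy usage: maximize the number of ones in a 16-bit string.
best = evolve(
    fitness=lambda bits: sum(bits),
    init_individual=lambda: [random.randint(0, 1) for _ in range(16)],
    mutate=lambda bits: [b ^ (random.random() < 0.1) for b in bits],
)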
However, there are some disadvantages of EC, such as:

• There is no guarantee of an optimum solution within finite time
• It works on a weak theoretical basis
• It may need parameter tuning
• It is computationally expensive

Recent areas in EC: Sub-areas of the term evolutionary computation or evolutionary algorithms include:

• Evolutionary Programming (EP)
• Evolution Strategies (ES)
• Genetic Algorithm (GA)
• Genetic Programming (GP)

They all share a common conceptual base of simulating the evolution of individual structures via processes of selection, mutation and reproduction. The processes depend on the perceived performance of the individual structures as defined by the problem (Table 1). Evolutionary programming, developed by Fogel et al.[4], has traditionally used representations that are tailored to the problem domain. EP is often used as an optimizer, although it arose from the desire to generate machine intelligence. Rechenberg and Schwefel developed Evolution Strategies. The algorithm is similar to EP in many ways. In the last few years ES have had something of a renaissance and have become more popular, particularly in research work. However, in practical and industrial systems, they have been eclipsed somewhat by the success of the GA. One reason behind the GA's success is that its advocates are very good at describing the algorithm in an easy to understand and non-mathematical way.

A genotype-phenotype mapping implies an algorithm that transforms an input string of numbers encoding a genotype into another string of numbers that comprises the phenotype of an individual. Both evolutionary programming and evolution strategies are known as phenotypic algorithms (operating on physical characteristics expressed by the genotype, such as smart, beautiful, healthy), whereas the genetic algorithm is a genotypic algorithm (operating on the particular set of genes in a genome). Phenotypic algorithms operate directly on the parameters of the system itself, whereas genotypic algorithms operate on strings representing the system. In other words, the analogy in biology to phenotypic algorithms is a direct change in an animal's behavior or body, and the analogy to genotypic algorithms is a change in the animal's genes, which lie behind the behavior or body. GA is implemented by having arrays of bits or characters represent the chromosomes. In EP there are no such restrictions for the representation; in most cases the representation follows from the problem. EP typically uses an adaptive mutation operator in which the severity of mutations is often reduced as the global optimum is approached, while GAs use a pre-fixed mutation operator. Among the schemes to adapt the mutation step size, the most widely studied is the "meta-evolutionary" technique, in which the variance of the mutation distribution is itself subject to mutation by a fixed-variance mutation operator and evolves along with the solution.
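The contrast between a pre-fixed GA mutation rate and a self-adaptive mutation step can be sketched as follows; this is an illustrative Gaussian/log-normal scheme of the kind described in the ES/EP literature, not code from this paper, and the constants are assumptions.

import math
import random

def fixed_mutation(genes, rate=0.01):
    # GA-style mutation: every gene is perturbed with a constant, pre-fixed rate.
    return [g + random.gauss(0, 1) if random.random() < rate else g for g in genes]

def self_adaptive_mutation(genes, sigma):
    # Meta-evolutionary mutation: the step size sigma is itself mutated
    # (log-normally) and then used to perturb the object variables.
    tau = 1.0 / math.sqrt(len(genes))          # assumed learning rate
    new_sigma = sigma * math.exp(tau * random.gauss(0, 1))
    new_genes = [g + new_sigma * random.gauss(0, 1) for g in genes]
    return new_genes, new_sigma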
790
Am. J. Engg. & Applied Sci., 2 (4): 789-795, 2009

Table 1: Comparison between different evolutionary algorithms
Algorithm                 | Type       | Developed by            | Individual representation | Operators        | Selection method
Evolutionary programming  | Phenotypic | Fogel et al., 1966 [4]  | FSMs                      | Mutation only    | Tournament
Evolution strategies      | Phenotypic | Rechenberg, 1973 [4-8]  | Real values               | Mainly mutation  | Ranking
Genetic algorithm         | Genotypic  | Holland, 1975 [5]       | Bit strings               | Mainly crossover | Proportionate
Genetic programming       | Phenotypic | Koza, 1992 [4,8]        | Expression trees          | Mainly crossover | Proportionate

On the other hand, when comparing evolutionary programming to evolution strategies, one can identify the following differences: when applied to real-valued function optimization problems, both typically operate on the real values themselves and use adaptive reproduction operators; EP typically uses stochastic tournament selection while ES typically uses deterministic selection; and EP does not use crossover operators while ES does. Some specific advantages of genetic programming are that no analytical knowledge is needed and it can still obtain accurate results, and the GP approach scales with the problem size. GP does, however, impose restrictions on how the structure of solutions should be formulated. There are several variants of GP, among them: Linear Genetic Programming (LGP), Gene Expression Programming (GEP), Multi Expression Programming (MEP), Cartesian Genetic Programming (CGP), Traceless Genetic Programming (TGP) and Genetic Algorithm for Deriving Software (GADS). In the following we concentrate on CGP, since it is the closest to our proposed method[5-8].
state according to trajectory data sets, which either
Cartesian genetic programming: Cartesian genetic fixed, or Self-Organized during evolutionary process.
programming was originally developed by Miller and Trajectory data are stored as a string of numbers (the
Thomson[9] for the purpose of evolving digital circuits genotype) and evolved to achieve the optimum
and represents a program as a directed graph. One of mapping. The theory is based on McCarthy's formalism
the benefits of this type of representation is the implicit of the theory of computer science[12-13]: There is a set of
re-use of nodes in the directed graph. Originally CGP base function F and a set of strategies C for building
used a program topology defined by a rectangular grid new function out of old, the closure C (F) comprises all
of nodes with a user defined number of rows and computable functions. For any language L it may be
columns. In CGP, the genotype is a fixed-length possible to isolate a set (FL.) of base functions to
representation and consists of a list of integers which express the meaning of identifiers and statements and a
encode the function and connections of each node in the set (CL) of strategies to express the meaning of the
directed graph. The genotype is then mapped to an linguistic structure and data structures of L. Then the
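To make the genotype-to-graph mapping concrete, here is a minimal sketch of decoding and evaluating a feed-forward CGP-style genotype in Python. The gene layout (one function gene plus two connection genes per node, output genes at the end) and the function set are simplifying assumptions for illustration, not the exact encoding of Miller and Thomson.

import operator

FUNCS = [operator.add, operator.sub, operator.mul]   # assumed function set

def eval_cgp(genotype, inputs, n_nodes, n_outputs):
    # Decode a genotype (3 genes per node: function, in1, in2) and evaluate it.
    # Addresses 0..len(inputs)-1 are the program inputs; node i is computed
    # from genes 3*i..3*i+2 and stored at address len(inputs)+i.
    values = list(inputs)
    for i in range(n_nodes):
        f, a, b = genotype[3 * i: 3 * i + 3]
        values.append(FUNCS[f % len(FUNCS)](values[a % len(values)],
                                            values[b % len(values)]))
    # The last n_outputs genes are addresses of the nodes used as outputs.
    return [values[g % len(values)]
            for g in genotype[3 * n_nodes:3 * n_nodes + n_outputs]]

# Example: two nodes, one output, inputs x0 = 2.0 and x1 = 3.0 -> (x0 + x1) * x1.
print(eval_cgp([0, 0, 1, 2, 2, 1, 3], inputs=[2.0, 3.0], n_nodes=2, n_outputs=1))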
MATERIALS AND METHODS

The proposed method is based on the theoretical system definitions discussed in[11]; thus it overcomes the difficulties of the traditional method and, in addition, it has all the attractive characteristics of CGP. Our evolutionary algorithm evolves FSA that achieve the input-output specification of the problem. FSA transit from state to state according to trajectory data sets, which are either fixed or self-organized during the evolutionary process. Trajectory data are stored as a string of numbers (the genotype) and evolved to achieve the optimum mapping. The theory is based on McCarthy's formalism of the theory of computer science[12-13]: there is a set of base functions F and a set of strategies C for building new functions out of old, and the closure C(F) comprises all computable functions. For any language L it may be possible to isolate a set (FL) of base functions to express the meaning of identifiers and statements and a set (CL) of strategies to express the meaning of the linguistic structures and data structures of L. Then the meaning of a program P in L would be a computable function in CL(FL):

Meaning(P): L → CL(FL)

So, P effects a transformation:

(P) Xinitial → Xfinal

on a state vector X, which consists of an association of the variables manipulated by the program and their values. A program P can be defined as a 9-tuple, called a Semantic Finite State Automaton (SFSA)[11]:

P = (x, X, T, F, Z, I, O, γ, Xinitial)

Where:
x        = The set of system variables
X        = The set of system states, X = {Xinitial, ….., Xfinal}
T        = The time scale, T = [0, ∞)
F        = The set of primitive functions
Z        = The state transition function, Z = {(f, X, t): (f, X, t) Є F × X × T, z(f, X, t) = (•X, •t)}
I        = The set of inputs
O        = The set of outputs
γ        = The readout function
Xinitial = The initial state of the system, Xinitial Є X

All sets involved in the definition of the system are arbitrary, except T and F. The time scale T must be some subset of the set [0, ∞) of nonnegative integer numbers, while the set of primitive functions F must be a subset of the set CL(FL) of all computable functions in the language L and sufficient to generate the remaining functions. Two features characterize the state transition function:

z(-, -, t) = (Xinitial, 1)               if t = 0      (2)

z(f, X, t) = z(f, z(f(t-1), X, t-1))     if t ≠ 0      (3)
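As a reading aid, the 9-tuple above can be written down directly as a data structure. The following is a minimal Python sketch, with the transition and readout functions reduced to plain callables; it is an illustration of the definition, not the implementation used in this study, and the concrete types are assumptions.

from dataclasses import dataclass
from typing import Callable, Dict, List, Set

@dataclass
class SFSA:
    variables: Set[str]              # x : system variables
    states: List[Dict[str, int]]     # X : system states (variable bindings)
    time_scale: range                # T : subset of the nonnegative integers
    functions: Dict[str, Callable]   # F : primitive functions
    transition: Callable             # Z : z(f, X, t) -> (new_state, new_time)
    inputs: List[int]                # I : set of inputs
    outputs: List[int]               # O : set of outputs
    readout: Callable                # gamma : maps a state to an output
    initial_state: Dict[str, int]    # Xinitial

    def run(self, f_name: str, steps: int):
        # Iterate the transition function from the initial state (Eq. 2 and 3)
        # and record the readout of each visited state.
        state, t = self.initial_state, 0
        trace = []
        for _ in range(steps):
            state, t = self.transition(self.functions[f_name], state, t)
            trace.append(self.readout(state))
        return trace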
The concept of reusable parameterized subsystems can be implemented by restricting the transition function of the main system, so that it has the ability to call and pass parameters to one or more such sub-systems. Suppose we have a sub-system •P and a main system P; then they can be defined by the following 9-tuples:

P  = (x, X, T, F, Z, I, O, Xinitial, γ)

•P = (•x, •X, •T, •F, •Z, •I, •O, •Xinitial, •γ)

where •x ⊆ x and •Xinitial Є X. Then there exist *f Є F, z Є Z, •f Є •F and •z Є •Z, and h is a function defined over •Z with values in •X, defined as follows:

h = •z(•f, •Xinitial, 1) = Xh,ti         (4)

z(*f, X, t) = z(h, X, t) = Xh,t          (5)

*f is a special function we call a sub-SFSA function, to distinguish it from the other primitive functions in the set F. Also, we call the sub-system •P a sub-SFSA, to distinguish it from the main SFSA. Formally, a system •P is a sub-system of a system P iff: •x ⊆ x, •T ⊆ T, •I ⊆ I, •O ⊆ O, •γ is the restriction of γ to •O and •F ⊆ N, where N is the set of restrictions of F to •T. If (•f, •X, •t) is an element of •F × •X × •T, then there exists f Є F such that the restriction of f to •T is •f and •z(•f, •X, •t) is z(f, X, t).

The idea of a recursive function can be simply applied with the proposed method using mathematical induction. The principle of mathematical induction can be used to construct systems as well as proofs. Consider the following definition of the recursive function fr, which is highly reminiscent of proofs by mathematical induction:

fr(X) = X, t = tmax + 1           if X = 0  (base of induction)

fr(X) = Xinitial = X, t = 0       otherwise (induction step)

where T = [0, tmax].

Input-Output Specification (IOS): An IOS is a modification of the input-output specification used with the ant colony optimization algorithm given in[14]. The IOS establishes the input-output boundaries of the system: it describes the inputs that the system is designed to handle and the outputs that the system is designed to produce. An IOS is not a system, but it determines the set of all systems that satisfy the IOS. It is a 6-tuple:

IOS = (T, I, O, Ti, To, η)

Where:
T  = The time scale of the IOS
I  = The set of inputs
O  = The set of outputs
Ti = A set of input trajectories defined over T, with values in I
To = A set of output trajectories defined over T, with values in O
η  = A function defined over Ti whose values are subsets of To; that is, η matches with each given input trajectory in Ti the set of all output trajectories that might, could be, or are eligible to be produced as output by some system experiencing the given input trajectory

A system P satisfies the IOS if there is a state X of P and some non-empty subset U of the time scale T of P such that, for every input trajectory g in Ti, there is an output trajectory h in To matched with g by η such that the output trajectory generated by P, started in the state X, is:

γ(Z(f(g), X, t)) = η(h(t))   for every t Є U      (6)

RESULTS

The search space in the genetic program generation algorithm is the set of all possible computer programs described as 9-tuple SFSA. A multi-objective fitness measure is adopted to incorporate a combination of correctness (satisfying the IOS), parsimony (smallness of T) and efficiency (smallness of β), where β is the time required by the machine to complete system execution and is hence highly sensitive to the machine type. The fitness value of individual i is computed by the following equation:

fitness(i) = δ[α1 − Σ(j=0..Tx) (η(Ti(j)) − η(Ri(j)))] + (Tmax − Ti) + (β − βi)      (7)

Where:
δ  = The weight parameter, δ >= 2
βi = The run time of individual i
Tx = The time scale of individual i
Ri = The actual calculated input trajectory of individual i
Three types of points are defined in each individual: transitions z Є Z, functions f Є F and function arguments. When structure-preserving crossover is performed, any point of any type anywhere in the first selected individual may be chosen as the crossover point of the first parent; the crossover point of the second parent must then be chosen only from among points of the same type. This restriction on the choice of the second crossover point ensures the syntactic validity of the offspring. When sub-SFSA functions are being used, the initial random generation of the population must be created so that each individual has the intended constrained structure, that is, one main-SFSA and zero or more sub-SFSA defined under the condition of the transition function restriction. The population at generation 0 is architecturally diverse; the architectures of the participating individuals change during a run of GPG and hence the architecture of a multi-part system is determined dynamically during the run.

Sub-systems can be reused to solve multiple problems. They provide a rational way to reduce software cost and increase software quality. Programs with fewer sub-programs tend to disappear because they accrue fitness from generation to generation more slowly than programs with sub-programs. The proposed APS gains leverage by simultaneously solving the problems of system induction and evolving the architecture of a single- or multi-part system. From Fig. 1, it is clear that using a high number of sub-programs may speed up algorithm convergence.

Fig. 1: Convergence time with respect to the number of sub-programs

The operation of "sub-SFSA creation" creates a new sub-system within an overall system, as follows (a rough sketch of this operator is given after the list):

Creating sub-SFSA algorithm:

• An individual is selected from the population, based on its fitness value
• Randomly create a sub-SFSA defined by a 9-tuple •P = (•x, •X, •T, •F, •Z, •I, •O, •Xinitial, •γ), where •x is a subset of the corresponding term x in the main-SFSA and •Xinitial gets its value from the state of the calling transition function
• A uniquely-named sub-SFSA function *f is added to the set F of the main-SFSA such that each occurrence of *f in the transition function set Z will be replaced by the transition function •z(•f, •Xinitial, 1) of the newly created sub-SFSA
• Randomly choose a point in the main-SFSA transition function and mutate it with *f

The last step is optional, since it just ensures the existence of at least one reference to the newly created sub-SFSA function *f.
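The following is a rough, hedged Python sketch of the sub-SFSA creation operator above; the dictionary-based representation, the helper functions build_random_sub and fresh_name, and the details of the random choices are assumptions about points the paper leaves open, not the authors' implementation.

import random

def create_sub_sfsa(main, build_random_sub, fresh_name):
    # `main` is assumed to be a dict with keys 'variables', 'initial_state',
    # 'functions' and 'transition_genes' (the function names referenced by the
    # main transition function); `build_random_sub` returns a dict with keys
    # 'f', 'initial_state' and 'transition' describing the new sub-SFSA.
    # Steps 1-2: random sub-SFSA over a subset of the main variables,
    # with its initial state taken from the main system's state.
    sub_vars = random.sample(sorted(main['variables']),
                             k=random.randint(1, len(main['variables'])))
    sub = build_random_sub(sub_vars, main['initial_state'])
    # Step 3: register the uniquely named *f; calling it runs the sub-SFSA's
    # transition from its initial state at time 1.
    star_f = fresh_name()
    main['functions'][star_f] = lambda *_: sub['transition'](sub['f'], sub['initial_state'], 1)
    # Step 4 (optional): mutate one random transition gene so it references *f.
    i = random.randrange(len(main['transition_genes']))
    main['transition_genes'][i] = star_f
    return star_f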
DISCUSSION

Fixed versus self-organized trajectory data sets: During the evolutionary process, trajectory information plays the primary role. State transformations are done according to the trajectory values, which are either fixed or self-organized during the evolutionary process. Trajectory data are stored as a string of numbers (the genotype) and evolved to achieve the optimum mapping.

Example: Assume we try to solve a search problem: find an occurrence of element e in a list L of i integer numbers. At least i+2 inputs are needed (two inputs to read the values of e and i, and i inputs to read the i elements of L).

Table 2: Fixed input-output trajectory sets
I  = {e, i, L[1], L[2], ….., L[i]}
O  = {0, 1}
Tx =  1     2     3     4     5     6
Ti =  I[1]  I[2]  I[j]  -1    -1    -1
To =  -1    -1    -1    -1    O1    O2
Then η: η(Ti(t)) = O1 if t = 5, O2 if t = 6, and -1 otherwise

Table 3: Self-organized input and output trajectory sets
Ti (at iteration 0)  = -1, -1, I[1]                       To (at iteration 0)  = O1, -1, -1
Ti (at iteration 10) = I[1], -1, -1, I[2]                 To (at iteration 10) = -1, -1, O1, -1
Ti (at iteration n)  = I[1], I[2], -1, -1, -1, -1, I[3]   To (at iteration n)  = -1, -1, -1, -1, O1, -1, -1
Accordingly, at least two different outputs may be produced by the program to indicate the search result (found and not found, or 1 and 0). Obviously no such outputs are produced unless at least four operations are executed, namely: input e, input i, input L[1] and check its equality with e. The time scale of the IOS must be defined for the worst case, i.e., L[i] = e, or no occurrence of e is found at all. Now we can pre-specify Tx, Ti and To as fixed sets, as given in Table 2. In this case, the evolutionary process computes the fitness value for each individual by applying the fitness function only. In the self-organized case, trajectory sets are randomly built according to the currently available information about the system's input-output boundaries, as seen in Table 3. At the end of i generations, these sets are modified according to the input-output specification of the best individuals; obviously, such modifications vary continually until the required results are produced. Although the trajectory data change over time, by experiment the process is still sensitive to the initial configuration of the SFSA. This is one of the most important characteristics of a chaotic system (the butterfly effect: sensitivity to the initial conditions)[15].
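A small Python sketch of the fixed specification of Table 2 for this search example follows; the trajectory encoding (lists with -1 for "no value yet") and, in particular, how O1 and O2 map onto the found/not-found outcomes are assumptions made for illustration.

def fixed_search_ios(e, L):
    # Build fixed input/output trajectory sets in the spirit of Table 2 for
    # searching element e in list L (illustrated for a 3-element list, Tx = 1..6).
    i = len(L)
    Ti = [e, i] + L[:1] + [-1, -1, -1]          # inputs supplied at t = 1..3
    found = 1 if e in L else 0
    To = [-1, -1, -1, -1, found, 1 - found]     # assumed outputs O1, O2 at t = 5, 6

    def eta(t):
        # eta(Ti(t)): the admissible output at time t, -1 meaning "no output yet".
        return To[t - 1] if 1 <= t <= len(To) else -1

    return Ti, To, eta

Ti, To, eta = fixed_search_ios(e=7, L=[4, 7, 9])
print(Ti, To, [eta(t) for t in range(1, 7)])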
CONCLUSION

• The proposed method is based on theoretical system definitions; thus it overcomes the difficulties of the traditional method and, in addition, it has all the attractive characteristics of CGP
• Sub-systems can be reused to solve multiple problems. They provide a rational way to reduce software cost and increase software quality. Programs with fewer sub-programs tend to disappear because they accrue fitness from generation to generation more slowly than programs with sub-programs. The proposed APS gains leverage by simultaneously solving the problems of system induction and evolving the architecture of a single- or multi-part system
• Trajectory information plays an important role in the evolutionary process. Fixed specification of the trajectory sets speeds up the convergence time of the algorithm. Although self-organized trajectory sets are useful tools for chaotic behavior, they take more time to converge to a fine solution

REFERENCES

1. Spector, L. and K. Stoffel, 1996. Ontogenetic programming. Proceedings of the 1st Annual Conference on Genetic Programming, MIT Press, Stanford University, CA, USA, pp: 394-399.
2. Gruau, F., 1994. Neural network synthesis using cellular encoding and the genetic algorithm. PhD Thesis, Laboratoire de l'Informatique du Parallelisme, Ecole Normale Superieure de Lyon, France. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.5939
3. Miller, J.F. and P. Thomson, 2003. A developmental method for growing graphs and circuits. Lecture Notes Comput. Sci., 2606: 93-104. http://cat.inist.fr/?aModele=afficheN&cpsidt=15672446
4. Abraham, A., N. Nedjah and L.D.M. Mourelle, 2006. Evolutionary computation: From genetic algorithms to genetic programming. Stud. Comput. Intell., 13: 1-20. http://www.springerlink.com/content/l646l37m787g121t/
5. Grosan, C. and A. Abraham, 2007. Hybrid evolutionary algorithms: Methodologies, architectures and reviews. Stud. Comput. Intell., 75: 1-17. http://www.softcomputing.net/hea1.pdf
6. Montes, H.A. and J.L. Wyatt, 2003. Cartesian genetic programming for image processing tasks. Proceedings of the IASTED International Conference on Neural Networks and Computational Intelligence, May 19-21, Cancun, Mexico, pp: 185-190. http://md1.csa.com/partners/viewrecord.php?requester=gs&collection=TRD&recid=200311420157CI&q=&uid=788263103&setcookie=yes
7. Koza, J.R., 1995. Survey of genetic algorithms and genetic programming. Proceedings of the Conference on Microelectronics Communications Technology Producing Quality Products Mobile and Portable Power Emerging Technologies, Nov. 7-9, IEEE Xplore Press, San Francisco, CA, USA, pp: 589. http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=485447

8. Poli, R., W.B. Langdon, N.F. McPhee and J.R. Koza, 2007. Genetic programming: An introductory tutorial and a survey of techniques and applications. Technical report CES-475. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.126.3889
9. Miller, J.F. and P. Thomson, 2000. Cartesian genetic programming. Proceedings of the European Conference on Genetic Programming, Apr. 15-16, Springer-Verlag, London, UK, pp: 121-132. http://portal.acm.org/citation.cfm?id=704075
10. Harding, S.L., J.F. Miller and W. Banzhaf, 2007. Self-modifying Cartesian genetic programming. Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, July 7-11, ACM Press, New York, USA, pp: 1021-1028. http://portal.acm.org/citation.cfm?id=1277161&dl=GUIDE&coll=GUIDE&CFID=51767811&CFTOKEN=87184646
11. Al-Salami, N.M.A., 2009. System evolving using ant colony optimization algorithm. J. Comput. Sci., 5: 380-387.
12. Hopcroft, J.E. and J.D. Ullman, 1979. Introduction to Automata Theory, Languages and Computation. Addison-Wesley Publishing Company, USA, ISBN-10: 020102988X, pp: 418.
13. Wymore, 1986. Theory of system. Handbook of Software Engineering, CBS Publishers, pp: 119-133.
14. Al-Salami, N.M.A., 2009. System evolving using ant colony optimization algorithm. J. Comput. Sci., 5: 380-387. http://www.scipub.org/fulltext/jcs/jcs55380-387.pdf
15. Smith, L., 2007. Chaos: A Very Short Introduction. Illustrated Edn., Oxford University Press, USA, ISBN-10: 0192853783, pp: 176.
