
European Journal of Operational Research 181 (2007) 527–548
www.elsevier.com/locate/ejor

Continuous Optimization

A hybrid simplex search and particle swarm optimization for unconstrained optimization
Shu-Kai S. Fan *, Erwie Zahara
Department of Industrial Engineering and Management, Yuan Ze University, 135 Yuan-Tung Road, Chung-Li, Taoyuan County 320, Taiwan, ROC
Received 22 December 2003; accepted 5 June 2006; available online 7 September 2006

Abstract

This paper proposes the hybrid NM-PSO algorithm based on the Nelder–Mead (NM) simplex search method and particle swarm optimization (PSO) for unconstrained optimization. NM-PSO is very easy to implement in practice since it does not require gradient computation. The modification of both the Nelder–Mead simplex search method and particle swarm optimization is intended to produce faster and more accurate convergence. The main purpose of the paper is to demonstrate how the standard particle swarm optimizers can be improved by incorporating a hybridization strategy. In a suite of 20 test function problems taken from the literature, computational results from a comprehensive experimental study, preceded by an investigation of parameter selection, show that the hybrid NM-PSO approach outperforms three other relevant search techniques (i.e., the original NM simplex search method, the original PSO and the guaranteed convergence particle swarm optimization (GCPSO)) in terms of solution quality and convergence rate. In a later part of the comparative experiment, the NM-PSO algorithm is compared to several of the most up-to-date cooperative PSO (CPSO) procedures appearing in the literature. The comparison still largely favors the NM-PSO algorithm in terms of accuracy, robustness and number of function evaluations. As evidenced by the overall assessment based on these two kinds of computational experience, the new algorithm has been demonstrated to be extremely effective and efficient at locating best-practice optimal solutions for unconstrained optimization.
© 2006 Elsevier B.V. All rights reserved.
Keywords: Simplex search method; Particle swarm optimization; Unconstrained optimization; Metaheuristics

1. Introduction

Function minimization is employed extensively in the physical sciences, with applications in root finding of polynomials, systems of equations and in estimating
* Corresponding author. Tel.: +886 3 4638800x2510; fax: +886 3 4638907. E-mail address: [email protected] (S.-K. S. Fan).

the parameters of nonlinear functions [1]. The focus of this paper is thus on solving unconstrained minimization problems, and we present a hybrid method that tries to find a potential global minimum of a real-valued function f(x) of N real-valued variables. Nelder and Mead (NM) proposed a simplex search method [2–4], a simple direct search technique that has been widely used in unconstrained optimization scenarios. One of the reasons



for its popularity is that this method is easy to use and does not need the derivatives of the function under exploration. This is a very important feature in many applications where gradient information is not always available. However, one has to be very careful when using this method, since it is very sensitive to the choice of initial points and is not guaranteed to attain the global optimum. Based upon the interaction of individual entities called particles, particle swarm optimization (PSO), presented by Kennedy and Eberhart [5,6], is an evolutionary computation technique in which each particle flies through the multi-dimensional search space with a velocity that is constantly updated by the particle's previous best performance and by the previous best performance of the particle's neighbors. To date, PSO has been used successfully to optimize various continuous nonlinear functions. Although PSO does eventually locate the desired solution, practical use of an evolutionary computation technique in solving complex optimization problems is severely limited by the high computational cost of its slow convergence rate [7]. The convergence rate of PSO is also typically slower than those of local direct search techniques (e.g., the Hooke and Jeeves method [8] and the Nelder–Mead simplex search method), as it does not utilize much local information to determine the most promising search direction. To deal with the slow convergence of PSO, this paper combines PSO with a local simplex search technique; the rationale is that such a hybrid approach is expected to enjoy the merits of PSO together with those of a local simplex search technique. In other words, PSO contributes to the hybrid approach by making the search less likely to be trapped in the local optima that often arise when employing a pure local search technique, while the local simplex search part of the hybrid makes the search converge faster than pure PSO. Roughly speaking, the hybrid approach can usually exploit a better trade-off between computational effort and global optimality of the solution found. Various global optimization methods for unconstrained optimization have already been proposed in the literature, such as the genetic algorithm, simulated annealing, tabu search and variable neighborhood search, among others. Nonetheless, in light of the state-of-the-art concept and high practical implementability of particle swarm optimization, this paper places primary emphasis on how to design more effective and efficient PSO-based

procedures than the conventional particle swarm optimizers for solving unconstrained optimization problems. To verify the claim that the convergence rate and effectiveness of PSO can be greatly improved, a hybrid approach combining PSO with the Nelder–Mead simplex search method for unconstrained optimization is compared extensively with the following three optimization methods: (1) the simplex search method developed by Nelder and Mead [2]; (2) the PSO method developed by Kennedy and Eberhart [6]; (3) the modified PSO method introduced by van den Bergh [9]. The hybrid approach will be demonstrated via computational studies to be superior to the three optimization methods mentioned above, as discussed in Section 4. In addition, the cooperative PSO-based procedures described in [9] will also be used as a yardstick against which to re-evaluate the proposed algorithm. The rest of the paper is outlined as follows. Section 2 reviews the fundamentals of the simplex search method, the PSO method and the modified PSO method. Section 3 presents the NM-PSO hybrid structure and algorithm. Section 4 illustrates the efficiency and accuracy of the hybrid approach in solving unconstrained optimization problems through computational comparisons. Finally, the major results of the paper are summarized in Section 5, along with some remarks on areas for future research.

2. Simplex search method and particle swarm optimization

Optimization techniques can be classified into two broad categories: traditional direct search techniques (e.g., the simplex search method, Rosenbrock's method [10], etc.) and heuristic techniques (e.g., PSO, genetic algorithms, neural networks, etc.). Note that gradient-based methods are beyond the scope of this research. The first major difference between NM and PSO is that the choice of initial points in the simplex search method is predetermined, whereas PSO initializes a swarm with a set of random points. The second difference is that PSO proceeds by attracting new points towards those points that have better function values, whereas the simplex search method evolves by moving


away from the point that has the worst performance. The advantage of the simplex search method is that it is straightforward in an algorithmic sense and computationally efficient. However, as a result of using only local information, when it converges to a stationary point there is no guarantee that the global optimum has been found. In contrast, a PSO method explores the global search space without using local information about promising search directions. Consequently, it is less likely to be trapped in local optima, but its computational cost is comparably higher. In short, a PSO method focuses on exploration, while the simplex method focuses on exploitation. Making the most of the characteristics of each method, it is reasonable to anticipate that the
hybrid method would nicely exhibit a mixture of exploitation and exploration properties. As expected, the sensitivity of the Nelder–Mead simplex search method to initial conditions can be resolved by this hybridization. van den Bergh [9] also presented a trade-off analysis between exploration and exploitation, which can be obtained or tuned by appropriate selection of the inertia weight and acceleration constant values.

2.1. The Nelder–Mead simplex search method (NM)

The Nelder–Mead simplex search method is based upon the work of Spendley et al. [11]; it is a local search method designed for unconstrained
Fig. 1. Nelder–Mead pivot operations: (a) reflection, (b) expansion, (c) contraction when Prefl is better than Phigh, (d) contraction when Phigh is better than Prefl, (e) shrink after failed contraction for the case where Prefl is better than Phigh, (f) shrink after failed contraction for the case where Phigh is better than Prefl.


optimization without using gradient information. The operations of this method rescale the simplex based on the local behavior of the function by using four basic procedures: reflection, expansion, contraction and shrinkage. Through these procedures, the simplex can successfully improve itself and get closer to the optimum. The original NM simplex procedure is outlined below, and the pivot steps of the NM algorithm are illustrated in Fig. 1 through a two-dimensional case (N = 2).

1. Initialization. For the minimization of a function of N variables, create N + 1 vertex points to form an initial N-dimensional simplex. Evaluate the function value at each vertex point of the simplex. See the two-dimensional simplex exhibited in (a) of Fig. 1. For the maximization case, it is convenient to transform the problem into the minimization case by pre-multiplying the objective function by −1.

2. Reflection. In each iteration, determine the Phigh, Psec hi and Plow vertices, indicating the vertex points that have the highest, the second highest and the lowest function values, respectively. Let fhigh, fsec hi and flow represent the corresponding observed function values. Find Pcent, the centroid of the simplex excluding Phigh in the minimization case. Generate a new vertex Prefl by reflecting the worst point according to the following equation (see (a) of Fig. 1):

    Prefl = (1 + α) Pcent − α Phigh,    (1)

where α is the reflection coefficient (α > 0); Nelder and Mead suggested α = 1. If flow ≤ frefl ≤ fsec hi, accept the reflection by replacing Phigh with Prefl, and step 2 is entered again for a new iteration.

3. Expansion. Should reflection produce a function value smaller than flow (i.e., frefl < flow), the reflection is expanded in order to extend the search space in the same direction, and the expansion point is calculated by the following equation (see (b) of Fig. 1):

    Pexp = γ Prefl + (1 − γ) Pcent,    (2)

where γ is the expansion coefficient (γ > 1); Nelder and Mead suggested γ = 2. If fexp < flow, the expansion is accepted by replacing Phigh with Pexp; otherwise, Prefl replaces Phigh. The algorithm continues with a new iteration at step 2.

4. Contraction. When frefl > fsec hi and frefl ≤ fhigh, Prefl replaces Phigh and contraction is tried (see (c) of Fig. 1). If frefl > fhigh, direct contraction without the replacement of Phigh by Prefl is performed (see (d) of Fig. 1). The contraction vertex is calculated by the following equation:

    Pcont = β Phigh + (1 − β) Pcent,    (3)

where β is the contraction coefficient (0 < β < 1); Nelder and Mead suggested β = 0.5. If fcont ≤ fhigh, the contraction is accepted by replacing Phigh with Pcont, and a new iteration begins with step 2.

5. Shrink. If fcont > fhigh in step 4, contraction has failed and shrinkage is the next attempt. This is done by shrinking the entire simplex (except Plow) by (see (e) and (f) of Fig. 1):

    Pi = δ Pi + (1 − δ) Plow,    (4)

where δ is the shrinkage coefficient (0 < δ < 1); Nelder and Mead suggested δ = 0.5. The algorithm then evaluates the function value at each vertex (except Plow) and returns to step 2 to start a new iteration.

The Nelder–Mead simplex method has been applied in physics [12], crystallography [13], biology [14], chemistry [15] and health care [16]. Fletcher [17] considers the Nelder–Mead technique one of the most successful methods that merely compare function values. Abundant studies have been devoted to various modifications of the Nelder–Mead simplex method in the literature, such as Barton and Ivey [18] and Nazareth and Tzeng [19]. However, the unmodified Nelder–Mead version of the simplex algorithm is best suited to our needs for comparison purposes.
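For illustration, one iteration of the four basic NM operations above might be sketched as follows. This is a minimal Python reading of steps 1–5 using the coefficient values suggested by Nelder and Mead; the function and variable names are ours, not the authors' code.

```python
import numpy as np

def nm_iteration(simplex, f, alpha=1.0, gamma=2.0, beta=0.5, delta=0.5):
    """One Nelder-Mead iteration on a list of N+1 vertices (minimization)."""
    simplex = sorted(simplex, key=f)                     # ascending: best first
    p_low, p_high = simplex[0], simplex[-1]
    f_low, f_sec_hi, f_high = f(simplex[0]), f(simplex[-2]), f(simplex[-1])
    p_cent = np.mean(simplex[:-1], axis=0)               # centroid excluding the worst

    p_refl = (1 + alpha) * p_cent - alpha * p_high       # reflection, Eq. (1)
    f_refl = f(p_refl)
    if f_low <= f_refl <= f_sec_hi:                      # accept reflection
        simplex[-1] = p_refl
    elif f_refl < f_low:                                 # try expansion, Eq. (2)
        p_exp = gamma * p_refl + (1 - gamma) * p_cent
        simplex[-1] = p_exp if f(p_exp) < f_low else p_refl
    else:                                                # contraction, Eq. (3)
        if f_refl <= f_high:                             # Prefl replaces Phigh first
            simplex[-1], p_high, f_high = p_refl, p_refl, f_refl
        p_cont = beta * p_high + (1 - beta) * p_cent
        if f(p_cont) <= f_high:
            simplex[-1] = p_cont
        else:                                            # shrink toward best, Eq. (4)
            simplex = [simplex[0]] + [delta * p + (1 - delta) * simplex[0]
                                      for p in simplex[1:]]
    return simplex
```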

2.2. Particle swarm optimization (PSO)

Before the introduction of the PSO procedure, earlier optimization approaches had been developed based on mimicking the evolutionary process frequently seen in nature. Just as survival of the fittest promotes the betterment of the entire population in the long run, the filtering operations (crossover and/or mutation) found in these approaches eventually lead to satisfactory solutions. PSO is also population-based and evolutionary in nature, with one major difference: PSO has memory, in terms of the inertia weight and the social exchange of information.


Instead, a commonly observed social behavior, where members of a group tend to follow the lead of the best of the group, is simulated by PSO. Simple in concept and economical in terms of computational cost, PSO has a definite edge over other evolutionary optimization techniques. The procedure of PSO is reviewed below.

1. Initialization. Randomly generate a swarm of potential solutions, called particles, and assign a random velocity to each. The population size is problem-dependent, and the one most commonly used in PSO is between 20 and 50 [20]. For the computational experiments conducted in Section 4, a population of 5N particles is sampled from Uniform(−50, 50) for solving N-dimensional problems with PSO.

2. Velocity update. The particles are then flown through hyperspace by updating their own velocities. The velocity update of a particle is dynamically adjusted, subject to its own past flight and those of its companions. The particle's velocity and position are updated by the following equations:

    V_id^new(t + 1) = w · V_id^old(t) + c1 · rand() · (p_id(t) − x_id^old(t)) + c2 · rand() · (p_gd(t) − x_id^old(t)),    (5)
    x_id^new(t + 1) = x_id^old(t) + V_id^new(t + 1),    (6)

where c1 and c2 are two positive constants, w is an inertia weight and rand() is a random value inside (0, 1). Eberhart and Shi [21] and Hu and Eberhart [22] suggested c1 = c2 = 2 and w = 0.5 + (rand()/2.0). Eq. (5) illustrates the calculation of a new velocity for each individual. The velocity of each particle is updated according to its previous velocity (V_id), the particle's previous best location (p_id) and the global best location (p_gd). Particle velocities on each dimension are clamped to a maximum velocity Vmax, where Vmax is a fraction of the domain of the search space in each dimension. Eq. (6) shows how each particle's position is updated in the search space.

So far, PSO is one of the most recent evolutionary optimization methods. One of the reasons that PSO is attractive is that there are only very few parameters that need to be adjusted. Although PSO is still in its infancy, it has been used across a wide range of applications, including end milling [23], reactive power and voltage control [24], mass-spring systems [25] and neural network training [26,27]. Generally speaking, PSO, like other evolutionary optimization algorithms, is applicable to most optimization problems and circumstances that can be cast as optimization problems.
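A single velocity/position update of Eqs. (5) and (6) could look like the following sketch, vectorized over the swarm. Whether rand() is drawn per dimension or per particle is our assumption; the names are ours.

```python
import numpy as np

def pso_update(x, v, p_best, p_gbest, c1=2.0, c2=2.0, v_max=None):
    """One PSO step, Eqs. (5)-(6); x, v, p_best have shape (num_particles, N)."""
    w = 0.5 + np.random.rand() / 2.0              # inertia weight suggested in [21,22]
    r1 = np.random.rand(*x.shape)                 # rand() drawn per particle/dimension
    r2 = np.random.rand(*x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (p_gbest - x)   # Eq. (5)
    if v_max is not None:
        v_new = np.clip(v_new, -v_max, v_max)     # clamp to the maximum velocity
    return x + v_new, v_new                       # Eq. (6)
```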

2.3. Guaranteed convergence particle swarm optimization (GCPSO)

Recently, van den Bergh [9] and Clerc and Kennedy [28] have done extensive analyses of particle trajectories within PSO to determine how to guarantee convergent swarm behavior. van den Bergh [9] found a dangerous property in the original PSO: if a particle's current position coincides with the global best position, the particle will only move away from this point if its previous velocity and w are non-zero. If the previous velocities are very close to zero, then all particles will stop moving once they catch up with the global best particle, which may lead to premature convergence of the algorithm. To proactively counter this behavior in a particle swarm and to ensure convergence, van den Bergh and Engelbrecht [29] modified the original PSO method into the Guaranteed Convergence Particle Swarm Optimizer (GCPSO). The GCPSO algorithm works as follows. Let s be the index of the global best particle. The idea of GCPSO is to update the position of particle s as

    x_sd^new(t + 1) = p_gd(t) + w · V_sd^old(t) + ρ(t) · (1 − 2 · rand()).    (7)

To achieve this, the velocity update of s is defined as

    V_sd^new(t + 1) = −x_sd^old(t) + p_gd(t) + w · V_sd^old(t) + ρ(t) · (1 − 2 · rand()).    (8)

In brief, the −x_sd term resets the particle's position to the position p_gd, the w · V_sd^old term signifies a search direction, and the ρ(t) · (1 − 2 · rand()) term generates a random search term with side length 2ρ(t); ρ(0) is initialized to 1.0, with ρ(t) defined as

    ρ(t + 1) = 2ρ(t)      if #successes > s_c,
               0.5ρ(t)    if #failures > f_c,
               ρ(t)       otherwise,    (9)

where the terms #failures and #successes denote the number of consecutive failures or successes, respectively.


A failure is defined as f(x_gd(t)) ≥ f(x_gd(t − 1)), indicating that no considerable function improvement occurs between two consecutive global best particles. The values s_c and f_c are threshold parameters, and it is recommended to set f_c = 5 and s_c = 15. The following additional rules must also be implemented to ensure that Eq. (9) is well defined:

    #successes(t + 1) > #successes(t)  ⇒  #failures(t + 1) = 0,
    #failures(t + 1) > #failures(t)    ⇒  #successes(t + 1) = 0.

Thus, on a success the failure count is set to zero, and likewise the success count is reset when a failure occurs. Note that only the best particle in the swarm uses the modified updates in Eqs. (7) and (8); the rest of the swarm uses the normal velocity and position updates defined in Eqs. (5) and (6). For the computational experiments in Section 4, GCPSO uses a swarm size of 20, randomly generated from Uniform(−50, 50), for solving N-dimensional problems. The stopping criterion adopted by GCPSO for problems of dimension higher than 10 is 10,000 iterations; for the other test problems the stopping criterion will be discussed in Section 4.
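The best-particle update of Eqs. (7)–(9) can be sketched as follows. This is a simplified illustration: the bookkeeping of the consecutive success/failure counters is left to the caller, and all function and variable names are ours, not from [29].

```python
import numpy as np

def gcpso_best_update(x_s, v_s, p_g, w, rho, successes, failures, s_c=15, f_c=5):
    """Update for the global-best particle only, Eqs. (7)-(9); all other
    particles keep the standard updates of Eqs. (5)-(6)."""
    r = np.random.rand(*np.shape(x_s))
    v_new = -x_s + p_g + w * v_s + rho * (1.0 - 2.0 * r)   # Eq. (8)
    x_new = x_s + v_new                                     # equals Eq. (7)
    # Eq. (9): adapt the search radius rho from consecutive successes/failures
    if successes > s_c:
        rho = 2.0 * rho
    elif failures > f_c:
        rho = 0.5 * rho
    return x_new, v_new, rho
```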
3. Hybrid NM-PSO method

The idea behind hybridizing the methods introduced in Section 2 is to combine their advantages and avoid their disadvantages. Similar ideas have been discussed for hybrid methods using a genetic algorithm and a direct search technique; those hybrid techniques emphasize the trade-off between accuracy, reliability and computation time in global optimization [30,31]. This section introduces the hybrid method and, in doing so, also demonstrates that the convergence of the simplex method and the accuracy of the PSO method can be improved simultaneously.

3.1. The structure of our NM-PSO hybrid

Fig. 2 depicts a schematic representation of the proposed hybrid NM-PSO. When solving an N-dimensional problem, the hybrid approach uses 3N + 1 particles. The initial swarm of N + 1 particles is constructed from a randomly generated starting point drawn from Uniform(−50, 50), with a step size of 1.0 in each coordinate direction, to form an initial simplex; an additional 2 particles with opposite directions are randomly generated for each dimension (akin to the experimental exploration set out in EVOP [32]). For example, if the starting point is (0, 1), then the initial swarms for the NM and NM-PSO methods are as illustrated in Fig. 3. The additional swarm of 2N particles mentioned above may be a worthy investment, as these particles can bring about a great leap to the vicinity of the global optimum in the early iterations.

Fig. 2. Schematic representation of the NM-PSO hybrid: the initialized population (N + 1 particles from the simplex design plus 2N from random generation) is ranked by fitness; the N elite particles are retained, the modified simplex operates on the top N + 1 particles, and the modified PSO (selection, mutation for the global best, velocity update) operates on the worst 2N particles to produce the updated population.

Fig. 3. Initial populations for NM, PSO and NM-PSO for the starting point (0, 1) in a two-dimensional case.

1. Initialization. Generate a population of size 3N + 1.
Repeat
  2. Evaluation and ranking. Evaluate the fitness of each particle and rank the particles based on the fitness results.
  3. N elites. Save the top N elite particles.
  4. Modified simplex. Apply the modified simplex operator to the top N + 1 particles and replace the (N + 1)th particle with the update.
  5. Modified PSO. Apply the modified PSO operator to update the 2N particles with the worst fitness.
Until a termination criterion is reached.

Fig. 4. The NM-PSO hybrid algorithm.
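Read procedurally, the loop of Fig. 4 might be organized as in the sketch below. The modified simplex and modified PSO operators of Sections 3.2 and 3.3 are passed in as callables (simplex_step, pso_step) rather than reproduced in full, and the initialization is simplified to a plain uniform sample instead of the simplex-plus-2N construction described above; all names are ours, not the authors' code.

```python
import numpy as np

def nm_pso(f, n_dim, simplex_step, pso_step, max_iter=1000, tol=1e-4):
    """Skeleton of the NM-PSO loop of Fig. 4 (hypothetical helper callables)."""
    pop = np.random.uniform(-50.0, 50.0, size=(3 * n_dim + 1, n_dim))  # 3N+1 particles
    vel = np.zeros_like(pop)
    for _ in range(max_iter):
        order = np.argsort([f(x) for x in pop])          # rank by fitness, best first
        pop, vel = pop[order], vel[order]
        # modified simplex on the N+1 best particles (improves the (N+1)th one)
        pop[:n_dim + 1] = simplex_step(pop[:n_dim + 1], f)
        # modified PSO on the worst 2N particles, guided by the global best
        pop[n_dim + 1:], vel[n_dim + 1:] = pso_step(
            pop[n_dim + 1:], vel[n_dim + 1:], pop[0], f)
        # simplex-size stopping test over the N+1 best points (Eq. (13)-style)
        best = pop[:n_dim + 1]
        delta = max(1.0, np.linalg.norm(best[0]))
        if max(np.linalg.norm(p - best[0]) for p in best) / delta <= tol:
            break
    return min(pop, key=f)
```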

A total of 3N + 1 particles are sorted by fitness, and the best N particles are saved for subsequent use. The top N + 1 particles in Fig. 2 are fed into the modified simplex search method to improve the (N + 1)th particle. Joined by the N best particles and the (N + 1)th particle, the last 2N particles are adjusted by the modified PSO method (i.e., selection, mutation for the global best and velocity update). The result is sorted in preparation for repeating the entire run. The algorithm of this simplex–PSO approach is summarized in

Fig. 4, and the algorithm terminates when it satisfies a convergence criterion. The stopping criterion will be presented in Section 4.

3.2. The modified simplex search method

If the optimal solution of an N-dimensional problem is very far away from the starting point, a second expansion operator may help improve the convergence rate.


1. From the ranked population, select N + 1 particles.
2. Attempt reflection. If the reflection is accepted, then attempt expansion; else attempt contraction or replace the worst-fitness point with the reflection point.
3. Attempt expansion. If the expansion is accepted, then attempt the second expansion; else replace the worst-fitness point with the expansion point.
4. Attempt second expansion. If the second expansion is accepted, then replace the worst-fitness point with the second expansion point; else replace the worst-fitness point with the first expansion point.
5. Attempt contraction. If the contraction is accepted, then replace the worst-fitness point with the contraction point; else attempt shrinking of the entire simplex (except the best-fitness point).

Fig. 5. The modified simplex search algorithm.
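The second-expansion step listed in Fig. 5, and detailed below as Eq. (10), amounts to one extra stretch along a direction that has just paid off. A minimal sketch (our names, not the authors' code):

```python
def second_expansion(p_exp, p_cent, f, f_low, theta=2.0):
    """Second expansion of the modified simplex (Eq. (10)): after a successful
    expansion, try to stretch once more in the same promising direction.
    Returns the point that should replace the worst vertex."""
    p_second = theta * p_exp + (1.0 - theta) * p_cent
    return p_second if f(p_second) < f_low else p_exp
```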

This operator is applied only after the success of an expansion attempt; in this situation, the current simplex is very likely still remote from the optimal solution. A detailed description of this second expansion is as follows.

Second expansion. If fexp < flow (after an expansion has been performed), the expansion is performed again in order to extend the search space in the same promising direction, and the second expansion point is calculated by the following equation:

    Psecond exp = θ Pexp + (1 − θ) Pcent,    (10)

where θ is the second expansion coefficient (θ > 1). The choice of θ = 2 has been tested with much success in early computational experience. If fsecond exp < flow, the second expansion is accepted by replacing Phigh with Psecond exp; otherwise, Pexp replaces Phigh. A new iteration is then started. Fig. 5 summarizes the modified simplex search algorithm.

3.3. The modified PSO method

First, the modified PSO method begins with the selection of the global best particle (p_gd) and the neighborhood best particles (p_ld). The global best particle is selected from the entire population, which has been sorted by fitness, and the N neighborhood best particles are selected from the worst 2N particles, which are divided into N groups of two, the better particle of each group being selected; see the illustration in Fig. 6. The particle's previous best location (p_id) used for the velocity update in Eq. (5) is replaced by the neighborhood best particle's position (p_ld), and Eq. (5) becomes

Fig. 6. The selection operator for the modified PSO: the global best particle is taken from the top of the ranked population, and the worst 2N particles are split into N groups of two, the better particle of each group serving as a neighborhood best particle.

    V_id^new(t + 1) = w · V_id^old(t) + c1 · rand() · (p_ld(t) − x_id^old(t)) + c2 · rand() · (p_gd(t) − x_id^old(t)).    (11)

Looking at the original global-best PSO method, it can be seen that the particles' velocity updates depend strongly on the global best particle. If the global best particle is trapped in a local optimum, then all the other particles will also fly toward that local optimum. In this situation, the velocity update for the global best particle through Eqs. (5) and (6) generates merely a tiny jump for further improvement, so the particles are very unlikely to pull themselves out of the local optimum.


To resolve this problem, a mutation heuristic is added to the global best particle, as described below.

3.3.1. Mutation heuristic for the global best particle

Let x_old_gbest denote the position of the global best particle in an N-dimensional problem. First, the heuristic uses the normal distribution to randomly generate 5 particles based on x_old_gbest according to the following equation:

    x_i^new_gbest = x_old_gbest + e,    i = 1, 2, 3, 4, 5,    (12)

where e^T = [e1, e2, . . . , eN], e_j ~ N(0, σ), j = 1, 2, . . . , N, and σ is initially set to 1. If the ratio of the number of successful mutations to all 5 mutations is higher than 2/5, then σ is increased to (1/λ)σ; if it is lower than 2/5, σ is decreased to λσ for the next mutation, where λ is the mutation coefficient (λ < 1). The value λ = 0.85 is adopted herein and has been tested with success. If the ratio of successful mutations to all mutations equals 2/5, then σ remains unchanged. The best fitness among the five mutations is then used to replace x_old_gbest. The mutation heuristic is an adaptation of PSO particularly suited to unconstrained optimization, in which case the maximum velocity constraint Vmax need not be taken into account, thereby accelerating the search. Higashi and Iba [33] also presented a particle swarm optimization with Gaussian mutation that combines the idea of the particle swarm with concepts from evolutionary algorithms. Their mutation scheme is similar to the one used in this study.

Part I: Selection. From the population, select the global best particle and the neighborhood best particles.

Part II: Mutation heuristic for the global best particle. The algorithm is as follows.
1. Define x_old_gbest as the global best particle; set the target mutation success rate to 2/5 and the mutation coefficient (λ) to 0.85.
2. Generate 5 particles based on the position of the global best particle, with variance σ, according to Eq. (12).
3. For each round of mutations:
   if the mutation success rate = 2/5, then σ_new = σ_old;
   if the mutation success rate > 2/5, then σ_new = (1/λ) σ_old;
   if the mutation success rate < 2/5, then σ_new = λ σ_old.
4. Replace the old global best particle (x_old_gbest) with the new global best particle found among the x_i^new_gbest.

Part III: Velocity update. Apply the velocity update to the 2N particles with the worst fitness according to:

    V_id^new(t + 1) = w · V_id^old(t) + c1 · rand() · (p_ld(t) − x_id^old(t)) + c2 · rand() · (p_gd(t) − x_id^old(t)),
    x_id^new(t + 1) = x_id^old(t) + V_id^new(t + 1).

Fig. 7. Algorithmic representation of the modified PSO.
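Parts II and III of Fig. 7 might be sketched as follows. This is our reading: a "successful" mutation is one that improves on the current global best, the old best is retained if all five mutations fail, and c1 and c2 default to the original PSO suggestion of 2; none of these names come from the paper.

```python
import numpy as np

def mutate_gbest(x_gbest, f, sigma, lam=0.85):
    """Mutation heuristic of Fig. 7, Part II: perturb the global best with
    Gaussian noise and adapt sigma by the 2/5 success rule (Eq. (12))."""
    trials = x_gbest + np.random.normal(0.0, sigma, size=(5, x_gbest.size))
    success_rate = np.mean([f(t) < f(x_gbest) for t in trials])
    if success_rate > 2 / 5:
        sigma = sigma / lam                      # widen the search
    elif success_rate < 2 / 5:
        sigma = sigma * lam                      # narrow the search
    best = min(list(trials) + [x_gbest], key=f)  # keep the fittest candidate
    return best, sigma

def modified_pso_step(x_worst, v_worst, p_l, p_g, c1=2.0, c2=2.0):
    """Fig. 7, Part III: update the 2N worst particles, using the neighborhood
    best p_l in place of the personal best (Eq. (11))."""
    w = 0.5 + np.random.rand() / 2.0
    r1 = np.random.rand(*x_worst.shape)
    r2 = np.random.rand(*x_worst.shape)
    v_new = w * v_worst + c1 * r1 * (p_l - x_worst) + c2 * r2 * (p_g - x_worst)
    return x_worst + v_new, v_new
```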


Their method mutates selected particles with a predetermined probability, and the mutated positions are determined under the Gaussian distribution. Fig. 7 summarizes the modified PSO algorithm for unconstrained optimization.

4. Computational results

The ingredients of the hybrid NM-PSO method were described in Section 3. In this section, the design of the experiments is explained, a sensitivity analysis of the NM-PSO parameters is explored, and the empirical results are reported, comparing the hybrid approach with NM, PSO and GCPSO.

4.1. Design of the experiments

Comparing the effectiveness of these algorithms requires large-scale testing on a variety of response functions. NM-PSO is tested against NM, PSO and GCPSO on a set of 20 deterministic test functions collected from Moré et al. [34] and van den Bergh [9]. They are a set of curvilinear functions posing difficult unconstrained minimization problems. The forms and optimal values of the functions are described completely in the Appendix. The variety of dimensions and functional forms makes it possible to fairly assess the robustness (i.e., effectiveness, efficiency and accuracy) of the proposed approach within tolerable computational time.
Table 1
Test functions used for performance analysis

Test function                           Dim    Optimal OBJ value
1. Powell badly scaled function           2    0
2. B2 function                            2    0
3. Beale function                         2    0
4. Booth function                         2    0
5. Helical valley function                3    0
6. De Joung function                      3    0
7. Box three-dimensional function         3    0
8. Wood function                          4    0
9. Trigonometric function                 4    0
10. Extended Rosenbrock function          4    0
11. Variably dimensioned function         4    0
12. Penalty function I                    8    5.42152...e−5
13. Penalty function II                   8    1.23335...e−4
14. Trigonometric function                8    0
15. Extended Powell function              8    0
16. Griewank function                    10    0
17. Rastrigin function                   10    0
18. Extended Rosenbrock function         10    0
19. Sphere function                      30    0
20. Griewank function                    50    0
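For concreteness, two of the benchmark functions in Table 1 can be written as follows (standard textbook forms; the exact variants and scalings used in [34,9] may differ slightly):

```python
import numpy as np

def rosenbrock(x):
    """Extended Rosenbrock function; global minimum 0 at x = (1, ..., 1)."""
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def griewank(x):
    """Griewank function; global minimum 0 at x = (0, ..., 0)."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0
```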

Many of these functions allow a choice of dimension, and the input dimension used for each test function, ranging from 2 to 50, is given in Table 1. Typically, three widely accepted stopping criteria are employed in optimization methods. The stopping criterion used for this study is based on the simplex size over the N + 1 best points of the population. The stopping criterion proposed by Dennis and Woods [35] is defined as

    (1/Δ) · max_i ||P_i − P_low|| ≤ ε,    where Δ = max(1, ||P_low||),    (13)

where the maximum is taken over the N + 1 best points P_i (for i = 1, 2, . . . , N + 1) in the current simplex, and ||·|| denotes the Euclidean norm. The Dennis and Woods criterion is used in the computational tests described below, with ε = 1 × 10⁻⁴. In order to achieve quicker convergence, the algorithms stop when either (13) is satisfied or the number of iterations reaches 1000 × N. Notice that the stopping criterion adopted by GCPSO for test problems 19 and 20 (of problem sizes 30 and 50) is 10,000 iterations (i.e., 200,000 function evaluations); because GCPSO applies a fixed swarm size of 20 particles universally, it cannot compute the simplex size over the N + 1 best points of the swarm, which according to our stopping criterion in Eq. (13) would require 31 and 51 particles, respectively. The initial starting point for NM and NM-PSO is sampled from Uniform(−50, 50), and the initial swarm population for PSO and GCPSO is also randomly generated from Uniform(−50, 50). The optimization task of NM, PSO, GCPSO and NM-PSO on each test function was run 100 times. To evaluate the algorithms' efficiency and effectiveness, we adopted the following criteria, observed over the 100 minimizations per test function: the rate of successful minimizations, the average number of objective function evaluations and the average error. These criteria are defined precisely below. As soon as either one of the termination criteria is reached, an algorithm stops and returns the coordinates of the located point and the objective function value FOBJ_ALG (algorithm) at that point. We compare this result with the known analytical minimum FOBJ_ANAL and consider the result successful if the following inequality holds:

    |FOBJ_ALG − FOBJ_ANAL| < 0.001.    (14)
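The two acceptance tests of Eqs. (13) and (14) are straightforward to implement; a sketch (function names are ours):

```python
import numpy as np

def dennis_woods_converged(points, f, tol=1e-4):
    """Eq. (13): simplex-size test over the N+1 best points of the population."""
    p_low = min(points, key=f)                          # current best point
    delta = max(1.0, np.linalg.norm(p_low))
    return max(np.linalg.norm(p - p_low) for p in points) / delta <= tol

def run_successful(f_best, f_optimum):
    """Eq. (14): a run counts as successful if the final objective value lies
    within 0.001 of the known analytical minimum."""
    return abs(f_best - f_optimum) < 0.001
```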


The average number of objective function evaluations is computed over only the successful minimizations. The average error is defined as the average FOBJ gap between the best successful point found and the known global optimum, again over only the successful minimizations achieved by the algorithm. The simulations were run on a Pentium IV 2.4 GHz machine with 256 MB of memory, using Matlab 6.5.

4.1.1. Sensitivity analysis of NM-PSO parameters

This section investigates the performance of the various NM-PSO parameters using several benchmark functions (i.e., the Powell badly scaled function, Beale function, Helical valley function, Box three-dimensional function and Wood function) selected from Table 1. The rate of successful minimization (%) is used as the criterion for setting parameter values. The experiment starts from the original coefficients shown in Table 11; each time, one of the NM-PSO parameters is varied over a certain interval to see which value within this interval results in the best performance in terms of the rate of successful minimization. Table 2 shows the sensitivity analysis of the reflection coefficient. The choice of the interval [0.5, 2.0] used in this analysis was motivated by the original Nelder–Mead simplex search procedure, where a reflection coefficient greater than 0 was suggested for general usage.
Table 2
Sensitivity analysis of the reflection coefficient

Reflection        Rate of successful minimization (%)
coefficient (α)   Powell   Beale   Helical   Box   Wood
0.50              100      84      95        94    99
0.75              100      92      95        96    99
1.00              100      92      100       95    98
1.25              100      94      73        99    98
1.50*             100      100     100       100   100
1.75              98       90      98        100   98
2.00              97       82      95        98    99

Table 4
Sensitivity analysis of the second expansion coefficient

Second expansion   Rate of successful minimization (%)
coefficient (θ)    Powell   Beale   Helical   Box   Wood
1.50               99       93      90        99    100
1.75               99       92      95        99    100
2.00*              100      100     100       100   100
2.25               100      80      98        100   98
2.50               100      100     43        100   98
2.75               100      93      89        99    99
3.00               100      97      87        100   100

Table 5
Sensitivity analysis of the contraction coefficient

Contraction       Rate of successful minimization (%)
coefficient (β)   Powell   Beale   Helical   Box   Wood
0.25              100      86      91        97    89
0.50              100      95      69        97    100
0.75*             100      100     99        100   99

Table 6
Sensitivity analysis of the shrinking coefficient

Shrinking         Rate of successful minimization (%)
coefficient (δ)   Powell   Beale   Helical   Box   Wood
0.25              100      71      100       100   97
0.50*             100      100     99        100   100
0.75              99       97      99        98    99

Table 3
Sensitivity analysis of the expansion coefficient

Expansion         Rate of successful minimization (%)
coefficient (γ)   Powell   Beale   Helical   Box   Wood
1.50              100      82      96        100   99
1.75              100      93      94        100   100
2.00              100      94      97        100   99
2.25              100      96      84        94    99
2.50              100      88      99        100   100
2.75*             100      100     100       100   99
3.00              100      96      98        99    99

From this table, it is found that a reflection coefficient setting of 1.5 returns the best rate of successful minimization. Tables 3–6 show the sensitivity analyses of the expansion coefficient, second expansion coefficient, contraction coefficient and shrinking coefficient; the choice of their respective intervals was likewise motivated by the Nelder–Mead simplex search procedure. From Tables 3–6, it is found that setting the expansion, second expansion, contraction and shrinking coefficients to 2.75, 2.0, 0.75 and 0.5, respectively, returns the best rates of successful minimization. Table 7 illustrates the sensitivity analysis of the inertia weight, and from this table it is found that
Table 7
Sensitivity analysis of the inertia weight

Inertia weight (w)      Rate of successful minimization (%)
                        Powell   Beale   Helical   Box   Wood
0.5 + (rand()/2.0)*     100      98      100       100   99
rand()                  100      100     91        100   100

Table 10
Sensitivity analysis of the mutation coefficient

Mutation          Rate of successful minimization (%)
coefficient (λ)   Powell   Beale   Helical   Box   Wood
0.25              100      93      93        100   99
0.40              99       100     88        100   99
0.55              100      100     98        99    99
0.70              99       96      99        100   100
0.85*             100      98      100       100   98

setting the inertia weight to 0.5 + (rand()/2.0) is better than rand() for achieving a high rate of successful minimization. Tables 8 and 9 describe the sensitivity analyses of the c1 and c2 coefficients; the choice of the interval [0.2, 2.0] used in these analyses was inspired by the original PSO algorithm, where values greater than 0 and smaller than or equal to 2 have been suggested for general usage. Table 8 indicates that setting the c1 coefficient to 0.6 returns the best rate of successful minimization, and Table 9 shows that setting the c2 coefficient to 1.6 returns the best rate of successful minimization. Table 10 reports the sensitivity analysis of the mutation coefficient; the choice of the interval [0.2, 0.8] used for the mutation coefficient was initially motivated by the contraction operation in the Nelder–Mead simplex search procedure, where a contraction coefficient greater than 0 and smaller than 1 was suggested. A mutation coefficient setting of 0.85 has been found to return the best rate of successful minimization. We summarize the above findings in Table 11 and apply these parameter values in our
Table 8
Sensitivity analysis of the c1 coefficient

c1 coefficient   Rate of successful minimization (%)
                 Powell   Beale   Helical   Box   Wood
0.20             99       96      96        100   99
0.40             99       96      95        100   100
0.60*            100      100     100       100   100
0.80             100      89      90        99    100
1.00             100      98      90        99    100
1.20             100      89      93        99    98
1.40             100      97      94        99    100
1.60             100      97      97        97    100
1.80             100      98      100       100   100
2.00             100      82      93        95    99

Table 11
Best suggested NM-PSO parameters after sensitivity analysis

NM-PSO parameter                   Original              Best suggested
Reflection coefficient (α)         1.00                  1.50
Expansion coefficient (γ)          2.00                  2.75
Contraction coefficient (β)        0.50                  0.75
Shrinking coefficient (δ)          0.50                  0.50
Second expansion coefficient (θ)   2.00                  2.00
Inertia weight coefficient (w)     0.5 + (rand()/2.0)    0.5 + (rand()/2.0)
c1 coefficient                     2.00                  0.60
c2 coefficient                     2.00                  1.60
Mutation coefficient (λ)           0.85                  0.85

hybrid approach when conducting the experimental comparisons with the other algorithms. Note again that the parameter selection procedure is performed in a one-factor-at-a-time manner: in each sensitivity analysis of this section, only one parameter is varied at a time, and the remaining parameters are kept at the values suggested by the original NM and PSO algorithms. Interaction between parameters is assumed to be unimportant.

4.1.2. Empirical results

The empirical evaluations shown in this section demonstrate that the proposed NM-PSO approach is highly effective at finding the global optima of the unconstrained optimization problems. Fig. 8 illustrates the performance of all four approaches on the first test function by plotting the best fitness versus the number of iterations for a single run. It can clearly be seen from the figure that the hybrid NM-PSO method converges more quickly than the other three methods (see the number of iterations). The NM-PSO method's fitness drops quickly (close to zero) at the second iteration, attributed to the extra 2N particles added in PSO, and the global solution is then swiftly reached thanks to the second expansion strategy in NM. For this case, the

Table 9
Sensitivity analysis of the c2 coefficient

c2 coefficient   Rate of successful minimization (%)
                 Powell   Beale   Helical   Box   Wood
0.20             100      97      96        100   100
0.40             100      94      89        100   99
0.60             100      90      85        100   100
0.80             100      97      71        100   100
1.00             100      91      92        99    100
1.20             100      99      100       100   99
1.40             100      90      100       100   99
1.60*            100      100     100       100   100
1.80             100      97      84        99    100
2.00             100      92      94        98    99


Fig. 8. Best fitness of function 1 (the Powell badly scaled function) versus iteration for the different approaches (PSO, GCPSO, NM and NM-PSO).

NM-PSO method performs better than the other three methods from the standpoint of computational efficiency. Fig. 9 portrays the search paths of NM,
PSO, GCPSO and NM-PSO on the Powell badly scaled function. From these tracks, it can be seen that the NM, PSO and GCPSO methods begin to
Fig. 9. Solution paths of NM, PSO, GCPSO and NM-PSO on the Powell badly scaled function. Note that the starting point is (0, 1) and the optimum point is (1.098...×10⁻⁵, 9.106...).


Table 12 A comparison of NM, PSO, GCPSO and NM-PSO results on 20 test functions No. Rate of successful minimization (%) NM 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 74 78 52 100 46 100 78 99 99 100 100 100 100 85 66 0 0 11 0 0 PSO 94 100 100 100 99 100 100 13 10 49 100 100 98 0 89 0 30 0 0 0 GCPSO 100 100 100 100 96 100 100 79 97 100 100 100 100 85 100 0 0 100 100 34 NM-PSO 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 82 60 100 100 82 Average of objective function evaluation numbers NM 776 (631) 159 (159) 325 (2742) 145 380 (481) 283 363 (329) 1200 (1198) 252 (251) 682 639 5796 6034 901 (901) 6877 (5528) (3270) (2887) 10,019 (6287) (63,552) (107,857) PSO 20,242 (20,144) 4188 5440 3848 28,736 (28,620) 9308 44,599 79,930 (83,488) 84,041 (84,041) 84,020 (84,020) 41,184 328,040 328,040 (328,040) (328,040) 328,040 (328,040) (504,657) 510,050 (509,193) (510,050) (4,530,150) (12,550,250) GCPSO 12,375 2340 2792 2128 6446 (6323) 3031 6432 34,193 (33,126) 22,615 (22,457) 17,170 5591 64,040 168,020 71,152 (75,469) 128,735 (9253) (11,146) 176,540 200,000 200,000 (200,000) NM-PSO 2971 1124 1458 1065 2552 1957 3406 4769 71,763 3806 4255 44907 114,734 255,866 28,239 14,076 (13,793) 12,353 (12,376) 28,836 87,004 378,354 (370,682) Average of FOBJ gaps between the best successful point found and the known global optimum NM 8.591e6 (0.475) 2.727e8 (0.099) 4.575e9 (0.845) 2.622e8 3.389e9 (6.257) 1.002e9 4.158e5 1.595e+29 2.882e8 (0.079) 1.807e4 (1.999e4) 6.718e7 5.173e9 1.157e5 5.705e6 7.470e5 (3.780e4) 6.830e8 (4116.783) (1.040) (1164.238) 2.429e8 (375.058) (726.704) (1.230) PSO 9.896e6 (692348675.92) 1.460e8 5.986e8 3.060e8 6.183e8 0.025 8.806e11 1.155e15 4.445e4 (0.050) 6.919e4 (0.333) 2.487e4 (3.259) 8.446e10 2.941e5 7.648e6 (9.861e4) (10.909) 1.766e4 (0.828) (0.123) 1.080e4 (1.021) (1013.251) (4824.621) (6.575) GCPSO 2.668e6 3.777e8 3.689e8 9.722e8 2.266e7 (0.460) 6.986e10 6.448e12 4.386e4 (0.001) 1.897e4 (2.633e4) 8.686e6 1.018e8 1.242e11 2.952e8 5.967e5 (1.637e3) 1.367e7 (0.088) (7.771) 2.932e4 2.17e16 1.469e16 (0.028) NM-PSO 3.785e6 3.235e10 1.607e9 1.266e9 2.573e9 1.630e13 2.382e11 2.714e9 7.571e5 1.709e9 1.344e9 1.306e11 8.955e11 3.031e5 1.134e8 1.040e11 (0.017) 1.911e11 (4.836) 3.378e9 2.763e11 9.969e12 (0.021) S.-K. S. Fan, E. Zahara / European Journal of Operational Research 181 (2007) 527548


approach a local optimum and are then rerouted to the global optimum, explaining why these methods need more iterations for convergence. NM-PSO is already near the optimum after one single jump, emphasizing why the NM-PSO method performs best among these four methods for test function 1. Table 12 shows a comparison of the NM, PSO, GCPSO and NM-PSO results on the 20 test functions. Note again that the average number of objective function evaluations and the average error in this table are evaluated over only the successful minimizations; the numbers in parentheses are the average number of objective function evaluations and the average error over all 100 runs. Comparing the simulation results of PSO and GCPSO in Table 12, they clearly exhibit the phenomenon described by van den Bergh [9]: the GCPSO algorithm has a significantly higher rate of successful minimization and a lower average number of objective function evaluations on unimodal functions than PSO. This improved performance is not visible on multi-modal functions (e.g., the Rastrigin and Griewank functions), because GCPSO can still be trapped in local minima, just like the original PSO. Overall, the NM and PSO algorithms achieve about a 60% success rate, versus 85% for the GCPSO algorithm and 96% for the hybrid NM-PSO. The average numbers of objective function evaluations of the NM-PSO algorithm are, for every test function, significantly lower than those of the PSO and GCPSO algorithms, indicating that the efficiency of the original PSO can be improved by the hybrid method. With respect to the average error statistics, the NM-PSO algorithm prevails in almost every test function with appreciable accuracy.

4.1.3. Additional computational experience

Most recently, van den Bergh and Engelbrecht [36] presented several variants of the traditional PSO algorithm, termed cooperative particle swarm optimizers (CPSO), which employ the cooperative behavior of sub-swarms in order to improve the performance of the original algorithm. This is achieved by using multiple swarms to optimize different components of the solution vector cooperatively. To conduct a further comparative study, we compare our hybrid NM-PSO algorithm with the most up-to-date cooperative PSO approaches proposed by van den Bergh and Engelbrecht [36] on several benchmark optimization problems.

To make the problems much harder for the algorithms to solve, all the functions are also tested after a coordinate rotation procedure using Salomon's approach [37]. Note that coordinate rotation ensures sufficient correlation between the input variables, thus increasing the difficulty of reaching optimality in high-dimensional problems when using particle swarm optimization. To conduct fair comparisons between our hybrid NM-PSO and their cooperative particle swarm optimizers, we follow completely the experimental configuration discussed in van den Bergh and Engelbrecht [36], where five well-known benchmark functions of dimensionality 30 were selected for assessment purposes and the number of function evaluations was used as the time measure. These five test functions are Rosenbrock (f0), Quadric (f1), Ackley (f2), Rastrigin (f3) and Griewank (f4). All the functions tested here have a value of 0 at their global minimum points. For the first part of the additional computational experience, fixed-iteration results, a fixed budget of 2 × 10⁵ function evaluations was used as the termination criterion for all of the algorithms, and 50 independent runs were collected for each test function and algorithm. After an algorithm was halted at 2 × 10⁵ function evaluations, the mean of the final best objective function value in the entire population (i.e., swarm) and the corresponding standard error over the 50 runs were expressed in 95% confidence interval form, for the original (un-rotated) and rotated versions of the test functions, respectively. Notice that a new rotation was performed prior to every individual run, so each rotated function has a different functional form but the same structure; thus, the minimum objective value is still zero, and no bias is introduced by any specific rotation. For a more detailed description of the experimental setup, please refer to van den Bergh and Engelbrecht [36]. The computational results obtained are tabulated in Tables 13–17; in these tables, s stands for the swarm size used in each type of cooperative GA and PSO. Table 13 indicates that the un-rotated Rosenbrock function (f0) is easily solved by the standard PSO, while for the rotated version the cooperative procedure CPSO-H6 performs best among the algorithms; for the rotated case, the algorithms PSO, CPSO-H and CPSO-H6 produce almost equally good performance. For the Quadric function (f1), Table 14 presents the intriguing result that there is a significant difference in performance between the


S.-K. S. Fan, E. Zahara / European Journal of Operational Research 181 (2007) 527548 Table 15 Ackley (f2) after 2 105 function evaluations Algorithm PSO s 10 15 20 10 15 20 10 15 20 10 15 20 10 15 20 100 100 91 Mean (Un-rotated) 7.33e+00 6.23e01 4.92e+00 5.81e01 3.57e+00 4.58e01 2.90e14 1.60e15 3.01e14 1.42e15 3.05e14 1.84e15 2.78e14 1.71e15 2.92e14 1.67e15 2.98e14 1.56e15 1.12e06 4.01e07 1.11e05 4.35e06 5.42e05 1.66e05 9.42e11 7.58e11 9.57e12 7.96e12 2.73e12 2.03e12 1.38e+01 4.04e01 9.51e02 3.39e02 3.15e06 8.10e07 Mean (Rotated) 7.54e+00 5.82e01 5.09e+00 5.11e01 3.42e+00 3.74e01 1.73e+01 1.45e00 1.81e+01 1.09e00 1.85e+01 7.76e00 1.43e+01 1.57e00 1.43e+01 1.48e00 1.60e+01 1.42e00 7.98e01 1.06e+00 1.14e+00 1.26e+00 1.54e+00 1.46e+00 8.23e01 1.04e+00 8.12e01 1.05e+00 1.51e12 6.83e13 1.27e+01 1.55e+00 1.57e+01 1.87e+00 5.55e01 2.17e01

Table 13 Rosenbrock (f0) after 2 105 function evaluations Algorithm PSO s 10 15 20 10 15 20 10 15 20 10 15 20 10 15 20 100 100 91 Mean (Un-rotated) 1.30e01 1.45e01 5.53e03 6.19e03 9.65e03 7.28e03 7.58e01 1.16e01 7.36e01 3.04e02 9.06e01 3.56e02 2.92e01 2.19e02 3.14e01 1.74e02 4.35e01 2.48e02 1.41e+00 4.73e01 2.47e+00 7.00e01 1.59e+00 5.03e01 1.94e01 2.63e01 2.59e01 2.47e01 4.21e01 3.21e01 6.32e+01 1.19e+01 3.80e+00 1.93e01 2.00e02 6.00e04 Mean (Rotated) 3.32e01 9.50e02 2.84e01 5.17e02 3.16e01 3.41e02 3.23e+00 7.78e01 2.58e+00 5.36e01 4.37e+00 8.51e01 4.26e01 3.83e02 4.96e01 4.53e02 1.06e+00 2.96e01 2.65e+00 6.69e01 3.84e+00 9.81e01 4.27e+00 7.73e01 1.77e01 3.62e02 3.73e01 2.07e01 4.73e01 1.35e01 6.15e+01 1.42e+01 1.32e+01 2.19e+00 4.82e+00 2.52e01

CPSO-S

CPSO-S

CPSO-H

CPSO-H

CPSO-S6

CPSO-S6

CPSO-H6

CPSO-H6

GA CCGA NM-PSO

GA CCGA NM-PSO

Table 14 Quadric (f1) after 2 105 function evaluations Algorithm PSO s 10 15 20 10 15 20 10 15 20 10 15 20 10 15 20 100 100 91 Mean (Un-rotated) 1.08e+00 1.41e+00 2.85e72 5.41e72 2.17e98 4.20e98 Mean (Rotated) 6.02e+03 2.17e+03 3.35e+02 1.35e+02 1.12e+02 4.91e+01

Table 16 Rastrigin (f3) after 2 105 function evaluations Algorithm PSO s 10 15 20 10 15 20 10 15 20 10 15 20 10 15 20 100 100 91 Mean (Un-rotated) 8.27e+01 5.64e+ 00 7.44e+01 5.66e+00 6.79e+01 4.84e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 1.39e01 1.12e01 6.00e02 6.62e02 1.46e01 1.03e01 1.47e+00 3.16e01 8.77e01 2.20e01 7.78e01 1.87e01 1.29e+02 7.00e+00 1.22e+00 2.35e01 7.08e11 4.11e11 Mean (Rotated) 9.76e+01 5.90e+00 8.48e+01 5.42e+00 7.87e+01 6.79e+00 7.55e+01 7.53e+00 8.15e+01 6.26e+00 7.89e+01 6.41e+00 7.91e+01 6.97e+00 8.21e+01 6.49e+00 8.12e+01 5.92e+00 5.41e+01 5.18e+00 4.66e+01 3.84e+00 5.04e+01 5.50e+00 6.16e+01 5.08e+00 5.94e+01 5.04e+00 5.41e+01 4.63e+00 1.37e+02 1.78e+01 6.93e+01 1.02e+01 2.29e+00 1.69e+00

CPSO-S

2.55e128 4.98e128 1.47e+03 4.77e+02 7.26e89 1.14e88 1.28e+03 3.88e+02 3.17e67 2.21e67 1.72e+03 5.91e+02 5.41e95 1.05e94 6.74e81 8.92e81 1.45e63 1.98e63 4.63e07 6.14e07 1.36e05 1.76e05 1.20e04 8.99e05 2.63e66 5.08e66 9.00e46 1.09e45 1.40e29 1.15e29 1.68e+06 2.56e+05 1.38e+02 9.20e+01 2.30e03 7.00e04 2.15e+02 8.75e+01 3.45e+02 9.92e+01 4.10e+02 1.32e+02 2.89e+03 1.07e+03 2.99e+03 1.07e+03 4.64e+03 1.55e+03 2.40e+02 1.04e+02 7.06e+02 3.24e+02 1.03e+03 5.24e+02 1.07e+06 2.09e+05 6.53e+03 2.38e+03 1.19e01 3.94e02

CPSO-S

CPSO-H

CPSO-H

CPSO-S6

CPSO-S6

CPSO-H6

CPSO-H6

GA CCGA NM-PSO

GA CCGA NM-PSO

un-rotated and rotated cases. For the un-rotated case, all the procedures except GA and CCGA yield quite accurate and stable solutions, with the three cooperative procedures CPSO-S, CPSO-H and CPSO-H6 belonging to the group of performance leaders. Notice that, for the rotated case, the GA-based procedures (GA and CCGA) seem

unfazed by the rotation of the search space. It is of primary importance to note that, for the rotated problem, none of the PSO-based procedures comes close to our hybrid NM-PSO algorithm in either solution accuracy (mean) or stability (standard error).


The Ackley function (f2) is a multi-modal function with a unique global solution at the origin and many local solutions located on a regular grid. In the un-rotated case, the cooperative procedures CPSO-S, CPSO-H and CPSO-H6 perform better than the other procedures, and the hybrid NM-PSO procedure produces performance comparable to CPSO-S6. Nonetheless, in the rotated case, the hybrid NM-PSO procedure outperforms every procedure except for an unexpected instance of CPSO-H6 with s = 20. The computational results for the Rastrigin function (f3) resemble those for the Ackley function. In the un-rotated case, CPSO-S, CPSO-H and NM-PSO perform best among the algorithms. On the other hand, the performance of all the cooperative procedures deteriorates drastically when the search space is rotated; the best solver among the cooperative procedures is CPSO-S6, which nevertheless still falls far short of NM-PSO in the rotated case. Table 17 shows that the NM-PSO algorithm performed better than any cooperative algorithm in all the experiments on the Griewank function (f4). It is not unreasonable to conclude from Tables 13–17 that the NM-PSO algorithm generates quite competitive solutions in accuracy and stability, especially for the rotated instances, where dominating performance is reported. The second part of the additional computational experience, robustness, compares the various algorithms to decide their relative rankings using both success rate and convergence speed as evaluation criteria. Here, success is claimed if an algorithm reduces the objective function below a pre-specified threshold value within the maximum number of function evaluations; robustness means how consistently the algorithm achieves the threshold during all runs performed in the experiments. Tables 18–22 display the computational results of the robustness assessment. In the tables, s stands for the swarm size; the column "Succeeded" denotes the number of successes accomplished by the algorithm over the 50 independent runs, i.e., attaining a function value below the threshold in fewer than 2 × 10⁵ function evaluations; the column "Fn Evals" gives the average number of function evaluations required to reach the threshold value, computed over the succeeded runs only. The threshold values for f0–f4 are 100, 0.01, 5.00, 100 and 0.1, respectively. Please see van den Bergh and Engelbrecht [36] for details.

Table 17
Griewank (f4) after 2 × 10^5 function evaluations

Algorithm   s    Mean (Un-rotated)        Mean (Rotated)
PSO         10   9.65e-01 ± 7.58e-01      3.45e-01 ± 1.64e-01
PSO         15   2.62e-01 ± 1.61e-01      1.17e-01 ± 4.62e-02
PSO         20   6.51e-02 ± 2.17e-02      9.64e-02 ± 4.95e-02
CPSO-S      10   2.79e-02 ± 8.36e-03      5.10e-02 ± 9.77e-03
CPSO-S      15   2.21e-02 ± 6.28e-03      5.77e-02 ± 1.26e-02
CPSO-S      20   2.25e-02 ± 6.10e-03      6.11e-02 ± 1.17e-02
CPSO-H      10   2.45e-02 ± 5.38e-03      5.19e-02 ± 1.34e-02
CPSO-H      15   2.38e-02 ± 9.81e-03      5.40e-02 ± 1.51e-02
CPSO-H      20   1.86e-02 ± 5.46e-03      4.42e-02 ± 1.08e-02
CPSO-S6     10   7.29e-02 ± 1.49e-02      6.41e-02 ± 1.18e-02
CPSO-S6     15   6.90e-02 ± 1.56e-02      7.40e-02 ± 1.39e-02
CPSO-S6     20   8.59e-02 ± 1.68e-02      5.51e-02 ± 1.38e-02
CPSO-H6     10   6.75e-02 ± 1.40e-02      4.67e-02 ± 1.32e-02
CPSO-H6     15   5.54e-02 ± 1.27e-02      3.86e-02 ± 1.05e-02
CPSO-H6     20   5.24e-02 ± 1.19e-02      4.06e-02 ± 1.03e-02
GA          100  5.94e+01 ± 6.92e+00      4.98e+01 ± 8.06e+00
CCGA        100  2.20e-01 ± 6.57e-02      1.93e-01 ± 4.82e-02
NM-PSO      91   1.52e-02 ± 6.40e-03      9.90e-03 ± 3.40e-03

Table 18
Rosenbrock (f0) robustness analysis

                     Un-rotated              Rotated
Algorithm   s    Succeeded   Fn Evals    Succeeded   Fn Evals
PSO         10   50          609         50          661
PSO         15   50          820         50          790
PSO         20   50          861         50          855
CPSO-S      10   50          320         50          420
CPSO-S      15   50          424         50          532
CPSO-S      20   50          562         50          672
CPSO-H      10   50          332         50          411
CPSO-H      15   50          426         50          525
CPSO-H      20   50          556         50          653
CPSO-S6     10   50          436         50          516
CPSO-S6     15   50          453         50          581
CPSO-S6     20   50          521         50          660
CPSO-H6     10   50          582         50          617
CPSO-H6     15   50          655         50          721
CPSO-H6     20   50          716         50          845
GA          100  49          16,643      48          21,234
CCGA        100  50          2652        50          2679
NM-PSO      91   50          94          50          378

As can clearly be seen from Table 18, all the PSO-based procedures, including NM-PSO, attain the threshold value in every test instance of the un-rotated and rotated Rosenbrock function (f0).


Table 19
Quadric (f1) robustness analysis

                     Un-rotated              Rotated
Algorithm   s    Succeeded   Fn Evals    Succeeded   Fn Evals
PSO         10   38          34,838      0           N/A
PSO         15   50          16,735      1           26,161
PSO         20   50          14,574      2           175,788
CPSO-S      10   50          70,215      0           N/A
CPSO-S      15   50          77,265      0           N/A
CPSO-S      20   50          83,168      0           N/A
CPSO-H      10   50          40,056      0           N/A
CPSO-H      15   50          53,341      0           N/A
CPSO-H      20   50          61,430      0           N/A
CPSO-S6     10   50          77,818      0           N/A
CPSO-S6     15   50          101,565     0           N/A
CPSO-S6     20   50          115,687     0           N/A
CPSO-H6     10   50          22,200      1           126,271
CPSO-H6     15   50          31,503      0           N/A
CPSO-H6     20   50          43,918      0           N/A
GA          100  0           N/A         0           N/A
CCGA        100  0           N/A         0           N/A
NM-PSO      91   50          77,449      10          124,560

Table 20
Ackley (f2) robustness analysis

                     Un-rotated              Rotated
Algorithm   s    Succeeded   Fn Evals    Succeeded   Fn Evals
PSO         10   11          2099        6           1988
PSO         15   32          3019        32          3385
PSO         20   37          2986        41          3200
CPSO-S      10   50          935         5           6240
CPSO-S      15   50          1053        2           11,644
CPSO-S      20   50          1227        2           43,314
CPSO-H      10   50          1068        4           24,420
CPSO-H      15   50          1154        2           5836
CPSO-H      20   50          1245        2           2401
CPSO-S6     10   50          3264        50          6670
CPSO-S6     15   50          4136        47          4533
CPSO-S6     20   50          4994        46          5686
CPSO-H6     10   50          3105        49          3494
CPSO-H6     15   50          3924        46          5355
CPSO-H6     20   50          4947        49          5657
GA          100  50          100         50          100
CCGA        100  50          100         50          100
NM-PSO      91   50          2044        50          2809

Table 21
Rastrigin (f3) robustness analysis

                     Un-rotated              Rotated
Algorithm   s    Succeeded   Fn Evals    Succeeded   Fn Evals
PSO         10   45          2112        41          2403
PSO         15   43          2525        39          2912
PSO         20   49          3341        41          3142
CPSO-S      10   50          375         40          3516
CPSO-S      15   50          436         37          5187
CPSO-S      20   50          546         41          4817
CPSO-H      10   50          388         40          4484
CPSO-H      15   50          430         39          5366
CPSO-H      20   50          545         37          5658
CPSO-S6     10   50          2226        48          7562
CPSO-S6     15   50          2750        50          7517
CPSO-S6     20   50          3029        50          9874
CPSO-H6     10   50          2386        48          16,212
CPSO-H6     15   50          2748        50          12,133
CPSO-H6     20   50          3499        50          11,964
GA          100  27          75,341      1           59,100
CCGA        100  50          2339        50          2659
NM-PSO      91   50          136         50          1378

Table 22
Griewank (f4) robustness analysis

                     Un-rotated              Rotated
Algorithm   s    Succeeded   Fn Evals    Succeeded   Fn Evals
PSO         10   19          17,521      18          24,081
PSO         15   30          8066        35          9095
PSO         20   34          8405        34          8620
CPSO-S      10   50          46,963      45          55,532
CPSO-S      15   50          47,174      40          59,911
CPSO-S      20   50          46,679      42          59,389
CPSO-H      10   47          20,170      40          24,374
CPSO-H      15   49          24,183      46          30,257
CPSO-H      20   50          27,121      43          35,715
CPSO-S6     10   40          85,580      44          64,311
CPSO-S6     15   33          98,075      40          72,844
CPSO-S6     20   34          105,770     40          77,259
CPSO-H6     10   40          24,445      44          19,478
CPSO-H6     15   44          21,063      39          21,282
CPSO-H6     20   40          28,577      43          28,099
GA          100  0           N/A         0           N/A
CCGA        100  26          134,056     5           128,545
NM-PSO      91   50          15,309      48          27,483

The standard PSO and the four cooperative PSO procedures solve both versions of the problem in fewer than 900 function evaluations on average, whereas the NM-PSO procedure needs only 94 function evaluations for the un-rotated case and 378 for the rotated case.

Experimental results shown in Table 19 demonstrate how difficult it can be to optimize the rotated version of the Quadric function (f1).


For the un-rotated case, all the cooperative PSO and NM-PSO procedures achieve the threshold value in all runs, but the standard PSO has a little difficulty in this situation. Based on both the success rate and the average number of function evaluations, CPSO-H6 performs best for the un-rotated problem. For the rotated problem, all the procedures fail except for the NM-PSO algorithm, which attains a 20% (10 out of 50) success rate. The standard PSO has some difficulty in solving the un-rotated Ackley function (f2); the four cooperative PSO, GA and NM-PSO procedures solve the un-rotated case with a 100% (50 out of 50) success rate, as can be seen from Table 20. Among the algorithms, CPSO-S and CPSO-H exhibit faster convergence for the un-rotated function, but fail almost completely on the rotated function. On the contrary, the two cooperative procedures CPSO-S6 and CPSO-H6 manage to solve the rotated problem stably. The hybrid NM-PSO algorithm solves the Ackley function (f2), regardless of rotation, with a perfect success rate (50 out of 50) and achieves the threshold value using economical function evaluations. Table 21 also presents a noticeable comparison between NM-PSO and the other procedures. In addition to a 100% success rate on both versions of the Rastrigin function (f3), NM-PSO's advantage in function evaluations over the other procedures prevails overwhelmingly in every experimental instance performed here. It can easily be seen from Table 22 that the Griewank function (f4) is very hard to solve for all the algorithms. Only CPSO-S, CPSO-H and NM-PSO have the capability to consistently achieve the threshold value on the un-rotated problem. No algorithm can attain a perfect success rate on the rotated problem, and NM-PSO performs best with 48 successful runs out of 50. From the aspect of function evaluations, NM-PSO takes the lead compared with those cooperative procedures whose success rate is higher than 80% (40 out of 50). In terms of robustness and function evaluations, the NM-PSO algorithm is, on the whole, the winner in that it achieves a perfect score in eight out of the ten test instances and requires far fewer function evaluations than any competitive, up-to-date cooperative PSO procedure. Computational experience gained on the preceding test functions confirms again the rich potential of the proposed hybrid NM-PSO algorithm to be an effective, efficient, robust and reliable general-purpose solver for unconstrained optimization problems.

To sum up, evidence obtained from the computational results in Section 4 suggests that the hybrid approach provides an optimization solver with higher efficiency, reliability and accuracy for solving general unconstrained optimization problems. It is particularly important to note that the NM-PSO algorithm does not require gradient or Hessian matrix calculations, and therefore it does not suffer from the weaknesses of classical optimization methods, such as ill-conditioning.

5. Conclusions

Function minimization techniques have been applied extensively in the physical sciences, with applications in root finding of polynomials, solving systems of equations, estimating the parameters of nonlinear functions, and searching for the optimum parameter settings of scientific systems or engineering processes. The Nelder–Mead (NM) simplex search method is a very popular, efficient direct search method for function minimization without using derivatives. Particle swarm optimization (PSO) is one of the most recent evolutionary optimization methods developed for solving continuous optimization problems. The current study investigates the hybridization of the above two methods, and the performance of the hybrid algorithm is compared with other pertinent alternatives via simulations. The substantial improvements in effectiveness, efficiency and accuracy reported here justify the claim that the hybrid approach presents an excellent trade-off between exploitation in NM and exploration in PSO.

In this paper, a hybrid NM-PSO algorithm is presented for locating the global optima of continuous unconstrained optimization problems. The motivation of such a hybrid is to explore a better trade-off between computational cost and global optimality of the solution attained. The initial population design of the NM-PSO method is an original idea in optimization methods and enables the hybrid mechanism to arrive swiftly at the neighborhood of the global optimum after only a few iterations. The second expansion strategy of the modified simplex search method makes a faster convergence rate possible, while the mutation heuristic applied to the global best particle of the modified PSO method allows more latitude in the search space to anchor the global optimum. Computational experience gained on a wide variety of test instances confirms the rich potential of the NM-PSO hybrid in solving deterministic unconstrained optimization problems.
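As one illustration of the kind of mutation heuristic mentioned above, the snippet below applies a Gaussian perturbation to the global best particle in the spirit of Higashi and Iba [33]. It is only a generic sketch under our own choice of step size sigma, and it is not claimed to reproduce the exact operator used in NM-PSO.

import numpy as np

def mutate_gbest(gbest, sigma=0.1, rng=None):
    # Perturb the global best position with zero-mean Gaussian noise so the swarm
    # can probe a wider region around the current best solution.
    rng = np.random.default_rng() if rng is None else rng
    return gbest + rng.normal(0.0, sigma, size=gbest.shape)

# A caller would typically keep the mutated point only if it improves the objective:
# candidate = mutate_gbest(gbest)
# if f(candidate) < f(gbest):
#     gbest = candidate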


These observations also lead us to allege that the hybrid approach is indeed more accurate, reliable and efficient at locating best-practice optima than the other alternatives. This hybrid NM-PSO is demonstrated to be a promising and viable tool for solving unconstrained nonlinear optimization problems. A number of improvements and extensions are currently being investigated by the authors. These include ways to accelerate the convergence for problems of higher dimension, as well as methods for extending the methodology to stochastic multi-objective systems. Practical applications of this hybrid approach in areas of classification, engineering process control, response surface optimization, and machine vision would also be worth studying further.

Appendix

The 20 test functions we employed are given below. To define the test functions, we have adopted the following general format:

(Dimension): Name of function
(a) Function definition
(b) Global optimum

Function 1 (2-D): Powell badly scaled function
(a) $f(x) = (10^4 x_1 x_2 - 1)^2 + (\exp(-x_1) + \exp(-x_2) - 1.0001)^2$.
(b) Global optimum with f = 0 at (1.098... × 10^-5, 9.106...).

Function 2 (2-D): B2 function
(a) $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$.
(b) Global optimum with f = 0 at (0, 0).

Function 3 (2-D): Beale function
(a) $f(x) = \sum_{i=1}^{3} [y_i - x_1(1 - x_2^i)]^2$, where $y_1 = 1.5$, $y_2 = 2.25$, $y_3 = 2.625$.
(b) Global optimum with f = 0 at (3, 0.5).

Function 4 (2-D): Booth function
(a) $f(x) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2$.
(b) Global optimum with f = 0 at (1, 3).

Function 5 (3-D): Helical valley function
(a) $f(x) = [10(x_3 - 10\,\theta(x_1, x_2))]^2 + [10((x_1^2 + x_2^2)^{1/2} - 1)]^2 + x_3^2$, where $\theta(x_1, x_2) = \frac{1}{2\pi}\arctan(x_2/x_1)$ if $x_1 > 0$, and $\theta(x_1, x_2) = \frac{1}{2\pi}\arctan(x_2/x_1) + \frac{1}{2}$ if $x_1 < 0$.
(b) Global optimum with f = 0 at (1, 0, 0).

Function 6 (3-D): De Jong function
(a) $f(x) = x_1^2 + x_2^2 + x_3^2$.
(b) Global optimum with f = 0 at (0, 0, 0).

Function 7 (3-D): Box three-dimensional function
(a) $f(x) = \sum_{i=1}^{3} [\exp(-t_i x_1) - \exp(-t_i x_2) - x_3(\exp(-t_i) - \exp(-10 t_i))]^2$, where $t_i = 0.1\,i$.
(b) Global optimum with f = 0 at (1, 10, 1), (10, 1, 1) and wherever ($x_1 = x_2$ and $x_3 = 0$).

Function 8 (4-D): Wood function
(a) $f(x) = [10(x_2 - x_1^2)]^2 + (1 - x_1)^2 + [90^{1/2}(x_4 - x_3^2)]^2 + (1 - x_3)^2 + [10^{1/2}(x_2 + x_4 - 2)]^2 + [10^{-1/2}(x_2 - x_4)]^2$.
(b) Global optimum with f = 0 at (1, 1, 1, 1).

Function 9 (4-D): Trigonometric function
(a) $f_i(x) = 4 - \sum_{j=1}^{4}\cos x_j + i(1 - \cos x_i) - \sin x_i$, $i = 1, 2, \ldots, 4$; $f(x) = \sum_{j=1}^{4} f_j^2(x)$.
(b) Global optimum with f = 0.

Function 10 (4-D): Rosenbrock function
(a) $f(x) = \sum_{i=1}^{2} [100(x_{2i} - x_{2i-1}^2)^2 + (1 - x_{2i-1})^2]$.
(b) Global optimum with f = 0 at (1, 1, 1, 1).

Function 11 (4-D): Variably dimensioned function
(a) $f(x) = \sum_{i=1}^{4}(x_i - 1)^2 + \left[\sum_{i=1}^{4} i(x_i - 1)\right]^2 + \left[\sum_{i=1}^{4} i(x_i - 1)\right]^4$.
(b) Global optimum with f = 0 at (1, 1, 1, 1).

Function 12 (8-D): Penalty function I
(a) $f(x) = \sum_{i=1}^{8} 10^{-5}(x_i - 1)^2 + \left(\sum_{j=1}^{8} x_j^2 - \frac{1}{4}\right)^2$.
(b) Global optimum with f = 5.42152... × 10^-5.

Function 13 (8-D): Penalty function II
(a) $f_1(x) = x_1 - 0.2$;
$f_i(x) = a^{1/2}[\exp(x_i/10) + \exp(x_{i-1}/10) - y_i]$ for $2 \le i \le 8$;
$f_i(x) = a^{1/2}[\exp(x_{i-7}/10) - \exp(-1/10)]$ for $8 < i < 16$;
$f_{16}(x) = \sum_{j=1}^{8}(8 - j + 1)x_j^2 - 1$;
$f(x) = \sum_{j=1}^{16} f_j^2(x)$, where $a = 10^{-5}$ and $y_i = \exp(i/10) + \exp((i-1)/10)$.
(b) Global optimum with f = 1.23335... × 10^-4.
Function 14 (8-D): Trigonometric function
(a) $f_i(x) = 8 - \sum_{j=1}^{8}\cos x_j + i(1 - \cos x_i) - \sin x_i$, $i = 1, 2, \ldots, 8$; $f(x) = \sum_{j=1}^{8} f_j^2(x)$.
(b) Global optimum with f = 0.

Function 15 (8-D): Extended Powell function
(a) $f_{4i-3}(x) = x_{4i-3} + 10x_{4i-2}$, $f_{4i-2}(x) = 5^{1/2}(x_{4i-1} - x_{4i})$, $f_{4i-1}(x) = (x_{4i-2} - 2x_{4i-1})^2$, $f_{4i}(x) = 10^{1/2}(x_{4i-3} - x_{4i})^2$, $i = 1, 2$; $f(x) = \sum_{j=1}^{8} f_j^2(x)$.
(b) Global optimum with f = 0 at the origin.

Function 16 (10-D): Griewank function
(a) $f(x) = \sum_{i=1}^{10} \frac{x_i^2}{4000} - \prod_{i=1}^{10} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$.
(b) Global optimum with f = 0 at the origin.

Function 17 (10-D): Rastrigin function
(a) $f(x) = \sum_{i=1}^{10} [x_i^2 - 10\cos(2\pi x_i) + 10]$.
(b) Global optimum with f = 0 at the origin.

Function 18 (10-D): Rosenbrock function
(a) $f(x) = \sum_{i=1}^{5} [100(x_{2i} - x_{2i-1}^2)^2 + (1 - x_{2i-1})^2]$.
(b) Global optimum with f = 0 at (1, 1, ..., 1).

Function 19 (30-D): Sphere function
(a) $f(x) = \sum_{i=1}^{30} x_i^2$.
(b) Global optimum with f = 0 at the origin.

Function 20 (50-D): Griewank function
(a) $f(x) = \sum_{i=1}^{50} \frac{x_i^2}{4000} - \prod_{i=1}^{50} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$.
(b) Global optimum with f = 0 at the origin.
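For convenience, a few of the benchmark functions listed above admit very short implementations. The sketch below (NumPy, 0-based indexing) covers the Sphere, Rastrigin, Griewank and pairwise Rosenbrock definitions exactly as given in this appendix; the dimensions are determined by the length of the input vector.

import numpy as np

def sphere(x):
    # Function 19: sum of squares, minimum 0 at the origin.
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def rastrigin(x):
    # Function 17: sum of x_i^2 - 10 cos(2 pi x_i) + 10, minimum 0 at the origin.
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def griewank(x):
    # Functions 16 and 20: sum x_i^2/4000 - prod cos(x_i/sqrt(i)) + 1, minimum 0 at the origin.
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return float(np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)

def rosenbrock_pairs(x):
    # Functions 10 and 18: sum over non-overlapping pairs of
    # 100 (x_{2i} - x_{2i-1}^2)^2 + (1 - x_{2i-1})^2, minimum 0 at (1, ..., 1).
    x = np.asarray(x, dtype=float)
    odd, even = x[0::2], x[1::2]
    return float(np.sum(100.0 * (even - odd ** 2) ** 2 + (1.0 - odd) ** 2))

# Quick check at the known optima:
# griewank(np.zeros(10)) == 0.0 and rosenbrock_pairs(np.ones(10)) == 0.0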

References

[1] S.G. Nash, A. Sofer, Linear and Nonlinear Programming, McGraw-Hill, New York, 1996.
[2] J.A. Nelder, R. Mead, A simplex method for function minimization, Computer Journal 7 (1965) 308–313.
[3] D.M. Olsson, L.S. Nelson, The Nelder–Mead simplex procedure for function minimization, Technometrics 17 (1975) 45–51.
[4] D.H. Chen, Z. Saleem, D.W. Grace, A new simplex procedure for function minimization, International Journal of Modeling and Simulation 6 (1986) 81–85.
[5] R.C. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 1995, pp. 39–43.
[6] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ, USA, 1995, pp. 1942–1948.
[7] S. Smith, The simplex method and evolutionary algorithms, in: Proceedings of the IEEE International Conference on Evolutionary Computation, 1998, pp. 799–804.
[8] R. Hooke, T.A. Jeeves, Direct search solution of numerical and statistical problems, Journal of the Association for Computing Machinery 8 (1961) 212–221.
[9] F. van den Bergh, An analysis of particle swarm optimizers, Ph.D. dissertation, University of Pretoria, Pretoria, 2001.
[10] H.H. Rosenbrock, An automatic method for finding the greatest or least value of a function, Computer Journal 3 (1960) 175–184.
[11] W. Spendley, G.R. Hext, F.R. Himsworth, Sequential application of simplex designs in optimization and evolutionary operation, Technometrics 4 (1962) 441–461.
[12] J. Balakrishnan, M.K. Gunasekaran, E.S.R. Gopal, Critical dielectric constant measurements in the binary liquid system methanol + normal heptane, International Journal of PA Physics 22 (1984) 286–298.
[13] W.R. Busing, M. Matsui, The application of external forces to computational models of crystals, Acta Crystallographica Section A 40 (1984) 532–540.
[14] W. Schulze, U. Rehder, Organization and morphogenesis of the human seminiferous epithelium, Cellular Tissue Review 237 (1984) 395–417.
[15] G.L. Silver, Space modification: An alternative approach to chemistry problems involving geometry, Journal of Computational Chemistry 2 (1981) 478–490.
[16] P.R. Sthapit, J.M. Ottoway, G.S. Fell, Determination of lead in natural and tap waters by flame atomic-fluorescence spectrometry, Analyst 109 (1984) 1061–1075.
[17] R. Fletcher, Practical Methods of Optimization, John Wiley & Sons, Chichester, 1987.
[18] R.R. Barton, J.S. Ivey, Nelder–Mead simplex modifications for simulation optimization, Management Science 42 (1996) 954–973.
[19] L. Nazareth, P. Tzeng, Gilding the lily: A variant of the Nelder–Mead algorithm based on golden-section search, Computational Optimization and Applications 22 (2002) 133–134.
[20] X. Hu, R.C. Eberhart, Adaptive particle swarm optimization: Detection and response to dynamic systems, in: Proceedings of the IEEE International Conference on Evolutionary Computation, Honolulu, Hawaii, USA, 2002, pp. 1666–1670.
[21] R.C. Eberhart, Y. Shi, Tracking and optimizing dynamic systems with particle swarms, in: Proceedings of the Congress on Evolutionary Computation, Seoul, Korea, 2001, pp. 94–97.
[22] X. Hu, R.C. Eberhart, Tracking dynamic systems with PSO: Where's the cheese? in: Proceedings of the Workshop on Particle Swarm Optimization, Indianapolis, IN, USA, 2001.
[23] V. Tandon, Closing the gap between CAD/CAM and optimized CNC end milling, Master's thesis, Purdue School of Engineering and Technology, Indianapolis, IN, USA, 2000.
[24] H. Yoshida, K. Kawata, Y. Fukuyama, Y. Nakanishi, A particle swarm optimization for reactive power and voltage control considering voltage stability, in: Proceedings of the International Conference on Intelligent System Application to Power Systems, Rio de Janeiro, Brazil, 1999, pp. 117–121.
[25] B. Brandstatter, U. Baumgartner, Particle swarm optimization – mass-spring system analogon, IEEE Transactions on Magnetics 38 (2002) 997–1000.
[26] R.C. Eberhart, X. Hu, Human tremor analysis using particle swarm optimization, in: Proceedings of the Congress on Evolutionary Computation, Washington, DC, USA, 1999, pp. 1927–1930.
[27] F. van den Bergh, A.P. Engelbrecht, Cooperative learning in neural networks using particle swarm optimizers, South African Computer Journal 26 (2000) 84–90.
[28] M. Clerc, J. Kennedy, The particle swarm – explosion, stability and convergence in a multidimensional complex space, IEEE Transactions on Evolutionary Computation 6 (2002) 58–73.
[29] F. van den Bergh, A.P. Engelbrecht, A new locally convergent particle swarm optimizer, in: IEEE International Conference on Systems, Man and Cybernetics, 2002, pp. 96–101.
[30] J.M. Renders, S.P. Flasse, Hybrid methods using genetic algorithms for global optimization, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 26 (1996) 243–258.
[31] J. Yen, J.C. Liao, B. Lee, D. Randolph, A hybrid approach to modeling metabolic systems using a genetic algorithm and simplex method, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 28 (1998) 173–191.
[32] G.E.P. Box, Evolutionary operation: A method for increasing industrial productivity, Applied Statistics 6 (1957) 81–101.
[33] N. Higashi, H. Iba, Particle swarm optimization with Gaussian mutation, in: Proceedings of the IEEE Swarm Intelligence Symposium, Indianapolis, USA, 2003, pp. 72–79.
[34] J.J. Moré, B.S. Garbow, K.E. Hillstrom, Testing unconstrained optimization software, ACM Transactions on Mathematical Software 7 (1981) 17–41.
[35] J.E. Dennis Jr., D.J. Woods, Optimization on microcomputers: The Nelder–Mead simplex algorithm, in: A. Wouk (Ed.), New Computing Environments: Microcomputers in Large Scale Computing, SIAM, Philadelphia, PA, 1987, pp. 116–122.
[36] F. van den Bergh, A.P. Engelbrecht, A cooperative approach to particle swarm optimization, IEEE Transactions on Evolutionary Computation 8 (2004) 225–239.
[37] R. Salomon, Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions, BioSystems 39 (1996) 263–278.