Multi-Objective Optimization in Theory and Practice II: Metaheuristic Algorithms
About this ebook
Andre A. Keller
André A. Keller is an Associate Researcher at the Computer Science Laboratory of Lille University, Science and Technology, France. He received a PhD in Economics (Operations Research) from University of Paris I Panthéon-Sorbonne. He is a reviewer for international journals including AMM, Ecol. Model., and JMAA, and a member of the JAST editorial board. He has presented several plenary lectures at international conferences and has been an invited professor in different countries. As a full professor, he taught applied mathematics and optimization techniques, econometrics, microeconomics, game theory, and more in various universities in France. His experience centers on building, analyzing, and forecasting with large-scale macro-economic systems within research groups of the French CNRS. Other domains of experience notably include discrete mathematics, circuit analysis, time-series analysis, spectral analysis, and fuzzy logic. He has published numerous articles, book chapters, and books, on topics such as time-delay systems and multi-objective optimization.
Pareto-Optimal Front Determination
André A. Keller
Abstract
Multi-objective optimization (MOO) problems are programming problems in which the decision-maker faces a multiplicity of conflicting objectives. This situation occurs in real-world problems involving engineering design, chemical processes, financial management, etc. In such problems, we obtain a set of equally good solutions rather than a single best solution. Classical methods for solving MOO problems have shown their limits in finding Pareto-optimal fronts. The difficulties were the necessity of repetitive applications, the requirement of some knowledge about the problem, the insufficient spread of non-dominated solutions, and the sensitivity of results to the shape of the Pareto-optimal front. Therefore, metaheuristics with application to practical MOO problems developed rapidly in the 1980s and 1990s. A variety of nature-inspired algorithms have been proposed in recent years, with extensive applications. The concept of dominance is the basis of Pareto optimality. The dominance binary relation is a strict partial order. This approach allows a comparison between feasible solutions in the fitness space and the decision space. The non-dominated solution sets yield Pareto fronts. Different techniques have been proposed to find good approximations of the Pareto sets when a Pareto front cannot be determined analytically. Our method uses graph-theoretic analysis. This approach provides a non-dominated set by using the Hasse diagram of an acyclic graph. Numerous examples from the literature show connected and disconnected Pareto-optimal fronts. In particular, Pareto-optimal fronts change if we decide to maximize instead of minimize the objectives. Algorithms use different selection procedures. The selection criteria can be an elitist Pareto ordering, non-Pareto criteria such as indicators, a bi-criteria evolution, or extended concepts of dominance such as ε-dominance and grid dominance.
Keywords: Attraction-based algorithm, Conflicting/nonconflicting objectives, Darwinian evolution, Engineering design problem, Globally/locally Pareto-optimality, Hasse diagram, Master strategy, Metaheuristics, Multi-objective optimization, Nature-inspired algorithm, Near/exact Pareto-optimal front, Non-dominated solution, Pareto-optimal solution, Pareto ranking, Population-based algorithm, Strongly/weakly Pareto-optimality, Subordinate heuristics, Swarm intelligence, Trajectory-based algorithm.
1.1. Introduction
The first use of heuristic algorithms goes back to Turing (1948) [1] and his work breaking the German Enigma code during World War II (see also Yang, 2014 [2], and Angelov, 2016 [3]). After that, heuristic and metaheuristic algorithms for solving programming problems resulted from the difficulties encountered with classical optimization methods. Abido (2010) [4] specified four inconveniences of conventional algorithms. Firstly, a repetitive application of an algorithm is needed to find the Pareto-optimal solutions. Secondly, some knowledge about the problem is required. Thirdly, an algorithm is sensitive to the shape of the Pareto-optimal front. Finally, the spread of the Pareto-optimal solutions depends on the chosen algorithm. Heuristic algorithms are suitable solvers for hard, high-dimensional real-life problems (see, for instance, Tong et al., 2014 [5]).
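The notion of Pareto dominance underlying these difficulties can be made concrete in a few lines of code. The sketch below is illustrative only (the function names and sample data are our own, not from any cited reference): it tests dominance between two objective vectors of a minimization problem and filters a finite set of candidates down to its non-dominated subset.

```python
# Minimal sketch of Pareto dominance for a minimization problem.
# Objective vectors are tuples of floats; names and data are illustrative.

def dominates(a, b):
    """True if a Pareto-dominates b: a is no worse in every objective
    and strictly better in at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the non-dominated subset of a finite set of objective vectors,
    i.e., a discrete approximation of the Pareto front."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = non_dominated([(1, 5), (2, 3), (3, 4), (4, 1), (2, 2)])
# (2, 3) and (3, 4) are dominated by (2, 2); the rest are incomparable.
```

Because dominance is only a strict partial order, some pairs of solutions are incomparable, which is precisely why a set of equally good solutions results instead of a single optimum.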
1.1.1. Heuristic and Metaheuristic Algorithms
Heuristics and metaheuristics are approximate resolution methods. Heuristics denotes techniques that seek near-optimal solutions at a low cost and are often problem-specific. Metaheuristics consists of a master strategy that guides and corrects the operations of subordinate heuristics (see, for instance, Reeves, 1995 [6]). Heuristics denotes a set of strategies to guide the search process in the feasible space. Metaheuristics, such as evolutionary algorithms (EAs), refers to higher-level procedures that combine different subordinate heuristics for exploring the search space. Metaheuristics notably includes genetic algorithms (GAs), evolution strategies (ES), and genetic programming (GP). EAs also include, but are not limited to, nature-inspired algorithms such as neural methods, simulated annealing, tabu search, ant colony systems, and other particle swarm intelligence techniques. The capacity of such methods to solve NP-hard ¹ combinatorial problems is well known (e.g., the traveling salesperson, scheduling, graph, and transportation problems). The book by Michalewicz (1999) [7] introduced metaheuristics for solving numerical optimization problems. Many authors proposed overviews of evolutionary techniques with applications, including Srinivas and K. Deb's sorting GA (1994) [8], Fonseca and Fleming's multi-objective GA (1993, 1995) [9, 10], Hajela and J. Lee's weighted-based GA (1996) [11], and Zitzler (1999) [12], including Schaffer's vector evaluated GA [13].
Genetic algorithms use principles of Darwinian evolution, characterized as follows. Individuals within populations (or species) differ. Traits are passed on to offspring. Only a few offspring survive in each generation. The members selected for survival have the most favorable performances. This natural process on individuals has consequences on the corresponding population. The evolution process is backward-looking and only partly deterministic (i.e., partially random). It is not perfect and can produce new traits besides existing traits. Such algorithms are population-based stochastic algorithms whose elements include a population of individuals, fitness evaluation, genetic operators guiding evolution, and selection.
Fig. (1.1) shows the basic cycle of the process. The initial step consists of a population of individuals chosen at random. In the evaluation phase of the primary cycle, we evaluate all the individuals by using the objective functions of the programming problem. Next, fitness values are assigned to individuals on this basis. Then, the fittest individuals are selected for reproduction. After that, new individuals emanate from the use of genetic operators, such as crossover and mutation. Closing the basic cycle, the new population, including the selected individuals and offspring, is transferred to the first step for evaluation, and a new cycle goes on.
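The basic cycle just described can be sketched in a few lines of Python. The toy one-max fitness function, the operator rates, and the population parameters below are our own illustrative choices, not taken from the book; the point is only the structure of the loop: evaluate, select, recombine, mutate, replace.

```python
import random

def evolve(pop_size=20, genome_len=10, generations=30, seed=0):
    """Hedged sketch of the basic EA cycle: evaluation, selection,
    crossover, mutation, and replacement of the population."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)              # toy objective: count of 1-bits
    # initial step: a population of random bit-string individuals
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():                             # selection: binary tournament
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, genome_len)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:              # mutation: flip one random bit
                i = rng.randrange(genome_len)
                child[i] = 1 - child[i]
            offspring.append(child)
        pop = offspring                         # replacement closes the cycle
    return max(pop, key=fitness)

best = evolve()
```

Tournament selection stands in here for the generic "select the fittest" step; any fitness-proportionate scheme would fit the same slot in the cycle.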
Fig. (1.1)
Basic cycle of an evolutionary algorithm.
1.1.2. History of Metaheuristics
One should note the fast development of the MOO approach in the mid-1980s with the help of evolutionary algorithms ². An early attempt to use genetic algorithms to solve MOO problems was realized by Ito et al. (1983) [15]. Goldberg (1989) [16] proposed a Pareto-set fitness assignment to solve Schaffer's multi-objective problems. In the same period, two books dealt with the theory and techniques of MOO: Chankong and Haimes (1983) [17], and Sawaragi et al. (1985) [18]. The fast expansion of this approach was stimulated by numerous real-world applications from science, technology, management, and finance. Rangaiah's book (2009) [19] was the first publication on MOO problems with a focus on chemical engineering. The applications in this area were notably in the chemical, mineral processing, oil and gas, petroleum, and pharmaceutical industries. Lai and C-L. Huang (1994) [20] extended the MOO approach to fuzzy decision-making problems ³.
The first use of genetic-based search algorithms on multi-objective optimization problems goes back to the pioneering work of Rosenberg (1967) [24] (see Coello, 1999 [25]). In his brief history of metaheuristics, X-S. Yang (2014) [2] specified the relevant decades of the development of evolutionary algorithms. The 1960s and 1970s saw the development of genetic algorithms at the University of Michigan. Holland (1975) [26] proposed a search method based on Darwinian evolution concepts and the natural principles of biological systems. Crossover, mutation, and selection operators were used to solve difficult combinatorial problems. In the same period, researchers from the Technical University of Berlin proposed evolution strategies: Rechenberg (1971) [27] and Schwefel (1975) [28] proposed a search method for solving optimization problems. D.B. Fogel (1994) [29] introduced evolutionary programming, using simulated evolution as a learning process ⁴.
Following X-S. Yang, the 1980s and 1990s were fruitful decades for metaheuristic algorithms. Kirkpatrick, Gelatt Jr., and Vecchi pioneered the simulated annealing (SA) algorithm in 1983 [30]. This method has its origin in the annealing process of metals. In 1986, Glover's tabu search [31] proposed the use of memory. In 1992, the search technique by Dorigo [32] was inspired by the swarm intelligence of ant colonies, which use pheromones to communicate. Later, in 1995, Kennedy and Eberhart [33] developed particle swarm optimization, inspired by the swarm intelligence of fish schools and bird flocks. In 1997, Storn and Price [34] proposed the differential evolution (DE) algorithm; this vector-based evolutionary algorithm proved to be more efficient than a genetic algorithm. Also in 1997, the study of No Free Lunch (NFL) theorems by Wolpert and Macready [35] was a significant step in the development of better algorithms. Indeed, the theorems proved that no single algorithm is universally best for all applications. Thus, the most efficient algorithm is valid only for a given class of problems.
In recent years, other nature-inspired algorithms have been proposed. We can mention the harmony search (HS) algorithm for distribution, transport, and scheduling (2001); honeybee algorithms (2004); the firefly algorithm (FA, 2007); the cuckoo search algorithm (CSA, 2009); the bat algorithm (BA, 2010), based on echolocation behavior; and the flower pollination algorithm (FPA, 2012).
1.1.3. Metaheuristic Algorithms and Applications
Some survey articles on metaheuristic algorithms suggested classifications. Besides conventional aggregating approaches, Fonseca and Fleming (1995) [10] considered population-based non-Pareto methods (VEGA), Pareto-based approaches, and niche induction techniques. I. Fister et al. (2013) [36] divided algorithms into four groups by inspiration: 1) swarm intelligence (SI)-based algorithms, 2) non-SI-based bio-inspired algorithms, 3) physics/chemistry-based algorithms, and 4) other algorithms. Fig. (1.2) and Table 1.1 show the classification selected for this study.
Fig. (1.2)
Metaheuristic algorithms for optimization problems [classification inspired by I. Fister et al. (2013) [36]].
However, I. Fister et al. (2013) [36] stated that there is no single classification; taxonomies depend on focus and perspective. Thus, algorithms can be population-based (e.g., the GA, FA, and PSO algorithms) or trajectory-based (e.g., the SA algorithm). We can also divide algorithms into attraction-based procedures (e.g., the FA algorithm, using the attraction of light) and non-attraction-based procedures (e.g., GA algorithms). Algorithms can further be classified as rule-based (crossover and mutation principles in GAs) or updating-equation-based (e.g., PSO, CS).
Fig. (1.2) shows most metaheuristic algorithms. They consist of bio-inspired algorithms, and physics- and chemistry-based algorithms. The algorithms belong to one or more of the following groups: swarm intelligence, genetic algorithms, genetic programming (GP), and evolutionary algorithms (see Tong et al., 2014 [5]).
Table 1.1 defines the algorithms of Fig. (1.2) with more information about the acronym, the authors, and the year of publication. Additional information from the literature is the number of references provided by the IEEE Xplore Digital Library ⁵ for each algorithm, which indicates how widespread the use of each algorithm is. The first ten algorithms, with more than 1,000 publications each, are: 1) HEMH, 2) GA, 3) ECBO, 4) PSO, 5) GP, 6) SA, 7) EP, 8) ACO, 9) DE, 10) MOEA, where the first three algorithms account for 174,091 items. Twenty-five algorithms have between 100 and 1,000 publications, 15 others have between 10 and 100 items, and eight algorithms have fewer than ten items.
Table 1.1 Description of 65 metaheuristic algorithms for optimization ⁶.
The review of the literature by Lalwani et al. (2013) [93] showed the applicability of particle swarm optimization (PSO) to multi-objective problems ⁷ (MOPSO). In this algorithm ⁸, the swarm is the set of potential solutions, whose individuals are the particles. This heuristic is inspired by the choreography of bird flocks. It was validated using standard test functions and compared against EMO algorithms ⁹. Each particle moves using the information it holds on its position, velocity, and performance. The movement of a particle is governed by updating its position and velocity. Fig. (1.3) shows applications in a broad range of activities in engineering, industry, biology, management, and environment. The application areas of the studies ¹⁰ presented by Lalwani et al. (2013) [93] included aerospace engineering (abbreviated Aero Eng in Fig. (1.3)), biological sciences (Biological), chemical engineering (Chem Eng), data mining (Data Min), electrical engineering (Elec Eng), electromagnetic engineering (Electromag), electronics engineering (Electronics), environmental sciences (Environ), flow-shop and job-shop scheduling problems (Flowshop), image processing (Image Proc), industrial engineering (Ind Eng), mechanical engineering (Mecha Eng), neural networks (Neural Netw), robotics, and software engineering (Software E.).
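The position/velocity update that governs each particle's movement can be sketched as follows. This is a single-objective illustration on the sphere function, not the MOPSO variant surveyed by Lalwani et al.; the inertia and acceleration coefficients (w, c1, c2) are common textbook values chosen for illustration.

```python
import random

def pso(dim=2, n_particles=15, iters=100, seed=1):
    """Hedged sketch of basic PSO: each particle is attracted toward its
    personal best and the swarm's global best, minimizing the sphere function."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)          # sphere function, optimum at 0
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    gbest = min(pbest, key=f)                    # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                    # illustrative coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity update: inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]           # position update
            if f(pos[i]) < f(pbest[i]):          # memory updates
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso()
```

In a MOPSO variant, the scalar comparisons against pbest and gbest are replaced by Pareto-dominance checks, and the global guide is drawn from an external archive of non-dominated solutions.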
Fig. (1.3)
Application areas of the MOPSO algorithm in papers published in the period 2006-2012 [data and figure adapted from Lalwani et al., 2013 [93], Figure 3, p. 45]. Note: The sum of the rounded percentages differs from one hundred percent.
Problems studied in these applications are listed in Table 1.2 (see Lalwani et al., 2013 [93], Table 2 for more details).
Table 1.2 Type of problems solved by using MOPSO by areas [adapted from Lalwani et al. (2013) [93], Table 2, pp. 60-89].