
CHAPMAN & HALL/CRC COMPUTER and INFORMATION SCIENCE SERIES

Handbook of
Bioinspired Algorithms
and Applications

© 2006 by Taylor & Francis Group, LLC


CHAPMAN & HALL/CRC
COMPUTER and INFORMATION SCIENCE SERIES
Series Editor: Sartaj Sahni

PUBLISHED TITLES
HANDBOOK OF SCHEDULING: ALGORITHMS, MODELS, AND PERFORMANCE ANALYSIS
Joseph Y.-T. Leung
THE PRACTICAL HANDBOOK OF INTERNET COMPUTING
Munindar P. Singh
HANDBOOK OF DATA STRUCTURES AND APPLICATIONS
Dinesh P. Mehta and Sartaj Sahni
DISTRIBUTED SENSOR NETWORKS
S. Sitharama Iyengar and Richard R. Brooks
SPECULATIVE EXECUTION IN HIGH PERFORMANCE COMPUTER ARCHITECTURES
David Kaeli and Pen-Chung Yew
SCALABLE AND SECURE INTERNET SERVICES AND ARCHITECTURE
Cheng-Zhong Xu

HANDBOOK OF BIOINSPIRED ALGORITHMS AND APPLICATIONS


Stephan Olariu and Albert Y. Zomaya



CHAPMAN & HALL/CRC COMPUTER and INFORMATION SCIENCE SERIES

Handbook of
Bioinspired Algorithms
and Applications

Edited by
Stephan Olariu
Old Dominion University
Norfolk, Virginia, U.S.A.

Albert Y. Zomaya
University of Sydney
NSW, Australia

Boca Raton London New York




Published in 2006 by
Chapman & Hall/CRC
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2006 by Taylor & Francis Group, LLC


Chapman & Hall/CRC is an imprint of Taylor & Francis Group

No claim to original U.S. Government works


Printed in the United States of America on acid-free paper
10 9 8 7 6 5 4 3 2 1

International Standard Book Number-10: 1-58488-475-4 (Hardcover)


International Standard Book Number-13: 978-1-58488-475-0 (Hardcover)

This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with
permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish
reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials
or for the consequences of their use.

No part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or
other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information
storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com
(http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC) 222 Rosewood Drive, Danvers, MA
01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For
organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for
identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data

Catalog record is available from the Library of Congress

Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com

Taylor & Francis Group is the Academic Division of T&F Informa plc.



Preface

The Handbook of Bioinspired Algorithms and Applications seeks to provide an opportunity for researchers
to explore the connection between biologically inspired (or bioinspired) techniques and the development
of solutions to problems that arise in a variety of problem domains. The power of bioinspired paradigms
lies in their ability to deal with complex problems with little or no knowledge of the search space; they
are thus particularly well suited to a wide range of computationally intractable optimization and
decision-making applications.
A vast literature exists on bioinspired approaches for solving an impressive array of problems, and there
is a great need to develop repositories of “how to apply” bioinspired paradigms to difficult problems. The
material of the handbook is by no means exhaustive; it focuses on paradigms that are “bioinspired,”
and therefore chapters on fuzzy logic or simulated annealing were not included. The number of chapters
was deliberately limited so that the handbook remains manageable within a single volume.
The handbook endeavors to strike a balance between theoretical and practical coverage of a range of
bioinspired paradigms and applications. It is organized into two main sections, Models and
Paradigms and Application Domains, and the titles of the various chapters are self-explanatory and a
good indication of what is covered. The theoretical chapters are intended to present the fundamentals of
each paradigm in a way that allows readers to utilize these techniques in their own fields.
The application chapters give detailed examples and case studies of how to actually develop a solution
to a problem based on a bioinspired technique. The handbook should also serve as a repository of significant
reference material, as the list of references each chapter provides will be a useful source of further
study.
Stephan Olariu
Albert Y. Zomaya


CHAPMAN: “C4754_C000” — 2005/8/17 — 18:11 — page v — #5


Acknowledgments

First and foremost, we would like to thank the contributors to this book for their support
and patience, and the reviewers for their useful comments and suggestions, which helped improve
the earlier outline of the handbook and the presentation of the material. Professor Zomaya would like to
acknowledge the support of CISCO Systems and the members of the Advanced Networks Research Group
at Sydney University. We also extend our deepest thanks to Jessica Vakili and Bob Stern from CRC Press
for their collaboration, guidance, and, most importantly, patience in finalizing this handbook. Finally,
we thank Mr. Mohan Kumar for leading the production process of this handbook in a very professional
manner.

Stephan Olariu
Albert Y. Zomaya



Editors

Stephan Olariu received his M.Sc. and Ph.D. degrees in computer science from McGill University,
Montreal, in 1983 and 1986, respectively. In 1986 he joined the Old Dominion University where he is a
professor of computer science. Dr. Olariu has published extensively in various journals, book chapters,
and conference proceedings. His research interests include image processing and machine vision, parallel
architectures, design and analysis of parallel algorithms, computational graph theory, computational
geometry, and mobile computing. Dr. Olariu serves on the editorial boards of IEEE Transactions on Parallel
and Distributed Systems, Journal of Parallel and Distributed Computing, VLSI Design, Parallel Algorithms
and Applications, International Journal of Computer Mathematics, and International Journal of Foundations
of Computer Science.
Albert Y. Zomaya is currently the CISCO Systems chair professor of internetworking in the School of
Information Technologies, The University of Sydney. Prior to that he was a full professor in the Electrical
and Electronic Engineering Department at the University of Western Australia, where he also led the
Parallel Computing Research Laboratory from 1990 to 2002. He served as associate, deputy, and acting
head in the same department, and held visiting positions at Waterloo University and the University of
Missouri–Rolla. He is the author/co-author of 6 books and 200 publications in technical journals and
conferences, and the editor of 6 books and 7 conference volumes. He is currently an associate editor
for 14 journals, the founding editor of the Wiley Book Series on Parallel and Distributed Computing, and
the editor-in-chief of the Parallel and Distributed Computing Handbook (McGraw-Hill 1996). Professor
Zomaya was the chair of the IEEE Technical Committee on Parallel Processing (1999–2003) and currently
serves on its executive committee. He has been actively involved in the organization of national and
international conferences. He received the 1997 Edgeworth David Medal from the Royal Society of New
South Wales for outstanding contributions to Australian science. In September 2000 he was awarded the
IEEE Computer Society’s Meritorious Service Award. Professor Zomaya is a chartered engineer (CEng), a
fellow of the IEEE, a fellow of the Institution of Electrical Engineers (U.K.), and member of the ACM. He also
serves on the boards of two startup companies. His research interests are in the areas of high performance
computing, parallel algorithms, networking, mobile computing, and bioinformatics.



Contributors

Enrique Alba: Department of Languages and Computer Science, University of Málaga, Campus de Teatinos, Málaga, Spain
Abdullah Almojel: Ministry of Higher Education, Riyadh, Saudi Arabia
Sanghamitra Bandyopadhyay: Machine Intelligence Unit, Indian Statistical Institute, Kolkata, India
Nilanjan Banerjee: Center for Research in Wireless Mobility and Networking, Department of Computer Science & Engineering, The University of Texas at Arlington, Arlington, Texas
Mohamed Belal: Helwan University, Cairo, Egypt
Utpal Biswas: Department of Computer Science and Engineering, University of Kalyani, Kalyani, India
Azzedine Boukerche: SITE, University of Ottawa, Ottawa, Canada
Anthony Brabazon: Faculty of Commerce, University College Dublin, Dublin, Ireland
Jürgen Branke: Institute AIFB, University of Karlsruhe, Karlsruhe, Germany
Forbes Burkowski: School of Computer Science, University of Waterloo, Waterloo, Ontario, Canada
S. Cahon: Laboratoire d’Informatique Fondamentale de Lille, Lille, France
J. Francisco Chicano: Department of Languages and Computer Science, University of Málaga, Málaga, Spain
Ernesto Costa: Evolutionary and Complex Systems Group, Centro de Informática e Sistemas da Universidade de Coimbra, Pinhal de Marrocos, Coimbra, Portugal
Carlos Cotta: Department of Languages and Computer Science, University of Málaga, Campus de Teatinos, Málaga, Spain
Kris Crnomarkovic: Advanced Networks Research Group, School of Information Technologies, The University of Sydney, Sydney, Australia
Sajal K. Das: Center for Research in Wireless Mobility and Networking, Department of Computer Science & Engineering, The University of Texas at Arlington, Arlington, Texas
Tiago Ferra de Sousa: Escola Superior de Tecnologia, Instituto Politecnico de Castelo Branco, Castelo Branco, Portugal
Francisco Fernández de Vega: Grupo de Evolución Artificial, Centro Universitario de Mérida, Universidad de Extremadura, Mérida, Spain
C. Dhaenens: Laboratoire d’Informatique Fondamentale de Lille, Lille, France
Bernabe Dorronsoro: Central Computing Services, University of Málaga, Campus de Teatinos, Málaga, Spain
Hoda El-Sayed: Bowie State University, Bowie, Maryland
Mohamed Eltoweissy: Virginia Tech, Falls Church, Virginia
Muddassar Farooq: Informatik III, University of Dortmund, Dortmund, Germany


Marcos Fernández: Instituto de Robótica, Universidad de Valencia, Polígono de la Coma, Paterna (Valencia), Spain
Gianluigi Folino: Institute of High Performance Computing and Networks, Rende (CS), Italy
Agostino Forestiero: Institute of High Performance Computing and Networks, Rende (CS), Italy
Jafaar Gaber: UTBM, France
Mario Giacobini: Information Systems Department, University of Lausanne, Lausanne, Switzerland
Michael Guntsch: Institute AIFB, University of Karlsruhe, Karlsruhe, Germany
Salim Hariri: High Performance Distributed Computing Laboratory, The University of Arizona, Tucson, Arizona
Piotr Jedrzejowicz: Department of Information Systems, Faculty of Business Administration, Gdynia Maritime University, Gdynia, Poland
Kennie H. Jones: NASA Langley Research Center, Hampton, Virginia
L. Jourdan: Laboratoire d’Informatique Fondamentale de Lille, Lille, France
Kathia Regina Lemos Jucá: Federal University of Santa Catarina, Florianopolis, Brazil
M. Khabzaoui: Laboratoire d’Informatique Fondamentale de Lille, Lille, France
Bithika Khargaria: High Performance Distributed Computing Laboratory, The University of Arizona, Tucson, Arizona
Peter Korošec: Computer Systems Department, Jožef Stefan Institute, Ljubljana, Slovenia
Barbara Koroušić-Seljak: Computer Systems Department, Jožef Stefan Institute, Ljubljana, Slovenia
Zhen Li: The Applied Software Systems Laboratory, Rutgers, The State University of New Jersey, Camden, New Jersey
Kenneth N. Lodding: NASA Langley Research Center, Hampton, Virginia
Mi Lu: Department of Electrical Engineering, Texas A&M University, College Station, Texas
Francisco Luna: Department of Languages and Computer Science, ETS Ingeniería Informática, University of Málaga, Málaga, Spain
Gabriel Luque: Department of Languages and Computer Science, ETS Ingeniería Informática, University of Málaga, Málaga, Spain
Ujjwal Maulik: Department of Computer Science and Engineering, Jadavpur University, Kolkata, India
N. Melab: Laboratoire d’Informatique Fondamentale de Lille, Lille, France
M. Mezmaz: Laboratoire d’Informatique Fondamentale de Lille, Lille, France
Michelle Moore: Department of Computing and Mathematical Sciences, Texas A&M University-Corpus Christi, Corpus Christi, Texas
Pedro Morillo: Instituto de Robótica, Universidad de Valencia, Polígono de la Coma, Paterna (Valencia), Spain
Anirban Mukhopadhyay: Department of Computer Science and Engineering, University of Kalyani, Kalyani, India
Mrinal Kanti Naskar: Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata, India


Antonio J. Nebro: Department of Languages and Computer Science, ETS Ingeniería Informática, University of Málaga, Málaga, Spain
Ana Neves: Escola Superior de Tecnologia, Instituto Politécnico de Castelo Branco, Castelo Branco, Portugal; and Evolutionary and Complex Systems Group, Centro de Informática e Sistemas da Universidade de Coimbra, Pinhal de Marrocos, Portugal
Alioune Ngom: Computer Science Department, University of Windsor, Windsor, Ontario, Canada
Mirela Sechi Moretti Annoni Notare: Barddal University, Florianopolis, Brazil
Stephan Olariu: Old Dominion University, Norfolk, Virginia
Michael O’Neill: Department of Computer Science & Information Systems, University of Limerick, Limerick, Ireland
Juan Manuel Orduña: Departamento de Informática, Universidad de Valencia, Burjassot (Valencia), Spain
Gregor Papa: Computer Systems Department, Jožef Stefan Institute, Ljubljana, Slovenia
Manish Parashar: The Applied Software Systems Laboratory, Rutgers, The State University of New Jersey, Camden, New Jersey
Zhiquan Frank Qiu: Intel Corporation, Chandler, Arizona
Borut Robič: Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia
Abhishek Roy: Center for Research in Wireless Mobility and Networking, Department of Computer Science & Engineering, The University of Texas at Arlington, Arlington, Texas
Hartmut Schmeck: Institute AIFB, University of Karlsruhe, Karlsruhe, Germany
Franciszek Seredynski: Polish-Japanese Institute of Information Technologies, Koszykowa, Warsaw, Poland; and Institute of Computer Science, Polish Academy of Sciences, Ordona, Warsaw, Poland
Jurij Šilc: Computer Systems Department, Jožef Stefan Institute, Ljubljana, Slovenia
Arlindo Silva: Escola Superior de Tecnologia, Instituto Politécnico de Castelo Branco, Castelo Branco, Portugal; and Centro de Informática e Sistemas da Universidade de Coimbra, Pinhal de Marrocos, Portugal
João Bosco Mangueira Sobral: Federal University of Santa Catarina, Florianopolis, Brazil
Tiago Sousa: Escola Superior de Tecnologia, Instituto Politécnico de Castelo Branco, Castelo Branco, Portugal; and Evolutionary and Complex Systems Group, Centro de Informática e Sistemas da Universidade de Coimbra, Pinhal de Marrocos, Portugal
Giandomenico Spezzano: Institute of High Performance Computing and Networks, Rende (CS), Italy
Michael Stein: Institute AIFB, University of Karlsruhe, Karlsruhe, Germany
Ivan Stojmenović: Department of Computer Science, School of Information Technology and Engineering, University of Ottawa, Ottawa, Ontario, Canada


Anna Świȩcicka: Department of Computer Science, Bialystok University of Technology, Bialystok, Poland
Javid Taheri: Advanced Networks Research Group, School of Information Technologies, The University of Sydney, Sydney, Australia
El-Ghazali Talbi: Université des Sciences et Technologies de Lille, Cité Scientifique, France
Domenico Talia: DEIS, University of Calabria, Rende, Italy
Marco Tomassini: Information Systems Department, University of Lausanne, Lausanne, Switzerland
Ashraf Wadaa: Old Dominion University, Norfolk, Virginia
Horst F. Wedde: Informatik III, University of Dortmund, Dortmund, Germany
B. Wei: Laboratoire d’Informatique Fondamentale de Lille, Cité Scientifique, France
Benjamin Weinberg: Université des Sciences et Technologies de Lille, Cité Scientifique, France
Larry Wilson: Old Dominion University, Norfolk, Virginia
Xin-She Yang: Department of Engineering, University of Cambridge, Cambridge, United Kingdom
Y. Young: Civil and Computational Engineering Centre, School of Engineering, University of Wales Swansea, Swansea, United Kingdom
Albert Y. Zomaya: Advanced Networks Research Group, School of Information Technologies, The University of Sydney, Sydney, Australia
Joviša Žunić: Computer Science Department, Cardiff University, Cardiff, Wales, United Kingdom


Contents

SECTION I Models and Paradigms

1 Evolutionary Algorithms (Enrique Alba and Carlos Cotta) 1-3
2 An Overview of Neural Networks Models (Javid Taheri and Albert Y. Zomaya) 2-21
3 Ant Colony Optimization (Michael Guntsch and Jürgen Branke) 3-41
4 Swarm Intelligence (Mohamed Belal, Jafaar Gaber, Hoda El-Sayed, and Abdullah Almojel) 4-55
5 Parallel Genetic Programming: Methodology, History, and Application to Real-Life Problems (Francisco Fernández de Vega) 5-65
6 Parallel Cellular Algorithms and Programs (Domenico Talia) 6-85
7 Decentralized Cellular Evolutionary Algorithms (Enrique Alba, Bernabe Dorronsoro, Mario Giacobini, and Marco Tomassini) 7-103
8 Optimization via Gene Expression Algorithms (Forbes Burkowski) 8-121
9 Dynamic Updating DNA Computing Algorithms (Zhiquan Frank Qiu and Mi Lu) 9-135
10 A Unified View on Metaheuristics and Their Hybridization (Jürgen Branke, Michael Stein, and Hartmut Schmeck) 10-147
11 The Foundations of Autonomic Computing (Salim Hariri, Bithika Khargaria, Manish Parashar, and Zhen Li) 11-157

SECTION II Application Domains

12 Setting Parameter Values for Parallel Genetic Algorithms: Scheduling Tasks on a Cluster (Michelle Moore) 12-179
13 Genetic Algorithms for Scheduling in Grid Computing Environments: A Case Study (Kris Crnomarkovic and Albert Y. Zomaya) 13-193
14 Minimization of SADMs in Unidirectional SONET/WDM Rings Using Genetic Algorithms (Anirban Mukhopadhyay, Utpal Biswas, Mrinal Kanti Naskar, Ujjwal Maulik, and Sanghamitra Bandyopadhyay) 14-209
15 Solving Optimization Problems in Wireless Networks Using Genetic Algorithms (Sajal K. Das, Nilanjan Banerjee, and Abhishek Roy) 15-219
16 Medical Imaging and Diagnosis Using Genetic Algorithms (Ujjwal Maulik, Sanghamitra Bandyopadhyay, and Sajal K. Das) 16-235
17 Scheduling and Rescheduling with Use of Cellular Automata (Franciszek Seredynski, Anna Świȩcicka, and Albert Y. Zomaya) 17-253
18 Cellular Automata, PDEs, and Pattern Formation (Xin-She Yang and Y. Young) 18-273
19 Ant Colonies and the Mesh-Partitioning Problem (Borut Robič, Peter Korošec, and Jurij Šilc) 19-285
20 Simulating the Strategic Adaptation of Organizations Using OrgSwarm (Anthony Brabazon, Arlindo Silva, Ernesto Costa, Tiago Ferra de Sousa, and Michael O’Neill) 20-305
21 BeeHive: New Ideas for Developing Routing Algorithms Inspired by Honey Bee Behavior (Horst F. Wedde and Muddassar Farooq) 21-321
22 Swarming Agents for Decentralized Clustering in Spatial Data (Gianluigi Folino, Agostino Forestiero, and Giandomenico Spezzano) 22-341
23 Biological Inspired Based Intrusion Detection Models for Mobile Telecommunication Systems (Azzedine Boukerche, Kathia Regina Lemos Jucá, João Bosco Mangueira Sobral, and Mirela Sechi Moretti Annoni Notare) 23-359
24 Synthesis of Multiple-Valued Circuits by Neural Networks (Alioune Ngom and Ivan Stojmenović) 24-373
25 On the Computing Capacity of Multiple-Valued Multiple-Threshold Perceptrons (Alioune Ngom, Ivan Stojmenović, and Joviša Žunić) 25-427
26 Advanced Evolutionary Algorithms for Training Neural Networks (Enrique Alba, J. Francisco Chicano, Francisco Luna, Gabriel Luque, and Antonio J. Nebro) 26-453
27 Bio-Inspired Data Mining (Tiago Sousa, Arlindo Silva, Ana Neves, and Ernesto Costa) 27-469
28 A Hybrid Evolutionary Algorithm for Knowledge Discovery in Microarray Experiments (L. Jourdan, M. Khabzaoui, C. Dhaenens, and El-Ghazali Talbi) 28-491
29 An Evolutionary Approach to Problems in Electrical Engineering Design (Gregor Papa, Jurij Šilc, and Barbara Koroušić-Seljak) 29-509
30 Solving the Partitioning Problem in Distributed Virtual Environment Systems Using Evolutive Algorithms (Pedro Morillo, Marcos Fernández, and Juan Manuel Orduña) 30-531
31 Population Learning Algorithm and Its Applications (Piotr Jedrzejowicz) 31-555
32 Biology-Derived Algorithms in Engineering Optimization (Xin-She Yang) 32-589
33 Biomimetic Models for Wireless Sensor Networks (Kennie H. Jones, Kenneth N. Lodding, Stephan Olariu, Ashraf Wadaa, Larry Wilson, and Mohamed Eltoweissy) 33-601
34 A Cooperative Parallel Metaheuristic Applied to the Graph Coloring Problem (Benjamin Weinberg and El-Ghazali Talbi) 34-625
35 Frameworks for the Design of Reusable Parallel and Distributed Metaheuristics (N. Melab, El-Ghazali Talbi, and S. Cahon) 35-639
36 Parallel Hybrid Multiobjective Metaheuristics on P2P Systems (N. Melab, El-Ghazali Talbi, M. Mezmaz, and B. Wei) 36-649



I
Models and Paradigms



1
Evolutionary Algorithms

Enrique Alba
Carlos Cotta

1.1 Introduction 1-3
1.2 Learning from Biology 1-4
1.3 Nature’s Way for Optimizing 1-5
    Algorithm Meets Evolution • The Flavors of Evolutionary Algorithms
1.4 Dissecting an Evolutionary Algorithm 1-8
    The Fitness Function • Initialization • Selection • Recombination • Mutation • Replacement
1.5 Fields of Application of EAs 1-13
1.6 Conclusions 1-14
Acknowledgments 1-14
References 1-14

1.1 Introduction
One of the most striking features of Nature is the existence of living organisms adapted for surviving in
almost any ecosystem, even the most inhospitable: from abyssal depths to mountain heights, from volcanic
vents to polar regions. The magnificence of this fact becomes even more evident when we consider that
the living environment is continuously changing: certain life forms become extinct, whereas
other beings evolve and preponderate thanks to their adaptation to the new scenario. It is very remarkable
that living beings do not exert a conscious effort for evolving (actually, it would be rather awkward to talk
about consciousness in amoebas or earthworms); much on the contrary, the driving force for change is
controlled by supraorganic mechanisms such as natural evolution.
Can we learn — and use for our own profit — the lessons that Nature is teaching us? The answer is a big
YES, as the optimization community has repeatedly shown in the last decades. “Evolutionary algorithm”
is the key word here. The term evolutionary algorithm (EA henceforth) is used to designate a collection
of optimization techniques whose functioning is loosely based on metaphors of biological processes.
This rough definition is rather broad and tries to encompass the numerous approaches currently
existing in the field of evolutionary computation [1]. Quite appropriately, this field itself is continuously
evolving; a quick inspection of the proceedings of the relevant conferences and symposia suffices to
demonstrate the impetus of the field, and the great diversity of the techniques that can be considered
“evolutionary.”





This variety notwithstanding, it is possible to identify a number of features common to all (or at least
most) EAs. The following quote from Reference 2 illustrates these common points:

The algorithm maintains a collection of potential solutions to a problem. Some of these possible
solutions are used to create new potential solutions through the use of operators. Operators act on
and produce collections of potential solutions. The potential solutions that an operator acts on are
selected on the basis of their quality as solutions to the problem at hand. The algorithm uses this
process repeatedly to generate new collections of potential solutions until some stopping criterion
is met.

This definition can usually be found in the literature expressed in a technical language that uses terms
such as genes, chromosomes, population, etc. This jargon is reminiscent of the biological inspiration
mentioned before, and has deeply permeated the field. We will return to the connection with biology
later on.
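As a rough sketch of the generic loop described in the quote above (the function and parameter names here are ours, for illustration only, and not taken from the handbook), an EA can be written as a population of candidate solutions repeatedly transformed by selection and variation operators until a stopping criterion is met:

```python
import random

def evolutionary_algorithm(fitness, random_solution, mutate, recombine,
                           pop_size=20, generations=50):
    """Generic EA skeleton: maintain a collection of potential solutions,
    select the better ones, create new ones with operators, and repeat."""
    population = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Selection based on quality: binary tournament.
            a, b = random.sample(population, 2)
            return a if fitness(a) >= fitness(b) else b
        # Variation: recombine two parents, then mutate the child.
        population = [mutate(recombine(pick(), pick()))
                      for _ in range(pop_size)]
    return max(population, key=fitness)

# Toy instance: maximize the number of ones in a bit string ("OneMax").
n = 30
best = evolutionary_algorithm(
    fitness=sum,
    random_solution=lambda: [random.randint(0, 1) for _ in range(n)],
    mutate=lambda s: [b ^ (random.random() < 1.0 / n) for b in s],
    recombine=lambda p, q: [random.choice(pair) for pair in zip(p, q)],
)
print(sum(best))
```

Any concrete EA fills in the four plugged-in pieces (fitness function, initialization, mutation, recombination) for its own representation; the surrounding loop stays essentially the same.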
The objective of this work is to present a gentle overview of these techniques comprising both the
classical “canonical” models of EAs as well as some modern directions for the development of the field,
namely, the use of parallel computing, and the introduction of problem-dependent knowledge.

1.2 Learning from Biology


Evolution is a complex, fascinating process. Throughout history, scientists have attempted to explain its
functioning using different theories. After the development of disciplines such as comparative anatomy
in the middle of the 19th century, the basic principles that condition our current vision of evolution
were postulated. Such principles rest upon Darwin’s Natural Selection Theory [3], and Mendel’s work on
genetic inheritance [4]. They can be summarized in the following points (see Reference 5):

• Evolution is a process that does not operate on organisms directly, but on chromosomes. These
are the organic tools by means of which the structure of a certain living being is encoded, that is,
the features of a living being are defined by the decoding of a collection of chromosomes. These
chromosomes (more precisely, the information they contain) pass from one generation to another
through reproduction.
• The evolutionary process takes place precisely during reproduction. Nature exhibits a plethora
of reproductive strategies. The most essential ones are mutation (that introduces variability in
the gene pool) and recombination (that introduces the exchange of genetic information among
individuals).
• Natural selection is the mechanism that relates chromosomes with the adequacy of the entities they
represent, favoring the proliferation of effective, environment-adapted organisms, and conversely
causing the extinction of less effective, nonadapted organisms.

These principles are embodied in the most orthodox theory of evolution, the Synthetic
Theory [6]. Although alternate scenarios that introduce some variety in this description have been
proposed — for example, the Neutral Theory [7], and very remarkably the Theory of Punctuated
Equilibria [8] — it is worth considering the former basic model. It is amazing to see that despite the
apparent simplicity of the principles upon which it rests, Nature exhibits unparalleled power in developing
and expanding new life forms.
Not surprisingly, this power has attracted the interest of many researchers, who have tried to translate the
principles of evolution to the realm of algorithmics, pursuing the construction of computer systems with
analogous features. An important point must be stressed here: evolution is an undirected process, that is,
there exists no scientific evidence that evolution is headed to a certain final goal. On the contrary, it can
be regarded as a reactive process that makes organisms change in response to environmental variations.
However, it is a fact that human-designed systems do pursue a definite final goal. Furthermore, whatever
this goal might be, it is, in principle, desirable to reach it quickly and efficiently. This leads to the distinction
between two approaches to the construction of nature-inspired systems:

1. Trying to reproduce Nature's principles with the highest possible accuracy, that is, to simulate Nature.
2. Using these principles as inspiration, adapting them in whatever way required so as to obtain
efficient systems for performing the desired task.

Both approaches nowadays concentrate the efforts of researchers. The first one has given rise to
the field of Artificial Life (e.g., see Reference 9), and it is interesting because it allows re-creating and
studying numerous natural phenomena such as parasitism, predator/prey relationships, etc. The second
approach can be considered more practical, and constitutes the source of EAs. Notice anyway that these
two approaches are not hermetic compartments, and have frequently interacted, often with successful
results.

1.3 Nature’s Way for Optimizing


As mentioned above, the standpoint of EAs is essentially practical — using ideas from natural evolution
in order to solve a certain problem. Let us focus on optimization and see how this goal can be achieved.

1.3.1 Algorithm Meets Evolution


An EA is a stochastic iterative procedure for generating tentative solutions for a certain problem P . The
algorithm manipulates a collection P of individuals (the population), each of which comprises one or more
chromosomes. These chromosomes allow each individual to represent a potential solution for the problem
under consideration. An encoding/decoding process is responsible for performing this mapping between
chromosomes and solutions. Chromosomes are divided into smaller units termed genes. The different
values a certain gene can take are called the alleles for that gene.
Initially, the population is generated at random or by means of some heuristic seeding procedure. Each
individual in P receives a fitness value: a measure of how good the solution it represents is for the problem
being considered. Subsequently, this value is used within the algorithm for guiding the search. The whole
process is sketched in Figure 1.1.
As can be seen, the existence of a set F (also known as phenotype space) comprising the solutions for
the problem at hand is assumed. Associated with F, there also exists a set G (known as genotype space).
These sets G and F respectively constitute the domain and codomain of a function g known as the growth
(or expression) function. It could be the case that F and G were actually equivalent, with g being a trivial
identity function. However, this is not the general situation. As a matter of fact, the only requirement
posed on g is surjectivity. Furthermore, g could be undefined for some elements in G.
After having defined these two sets G and F , notice the existence of a function ι selecting some
elements from G . This function is called the initialization function, and these selected solutions (also
known as individuals) constitute the so-called initial population. This initial population is in fact a pool


FIGURE 1.1 Illustration of the evolutionary approach to optimization.


Evolutionary-Algorithm:

1. P ← apply ι on G to obtain m individuals (the initial population);
2. while Termination Criterion is not met do
   (a) P′ ← apply σ on P;                 /* selection */
   (b) P″ ← apply ω1, . . . , ωk on P′;   /* reproduction */
   (c) P ← apply ψ on P and P″;           /* replacement */
   endwhile

FIGURE 1.2 Pseudocode of an evolutionary algorithm.

of solutions onto which the EA will subsequently work, iteratively applying some evolutionary operators
to modify its contents. More precisely, the process comprises three major stages: selection (promising
solutions are picked from the population by using a selection function σ ), reproduction (new solutions
are created by modifying selected solutions using some reproductive operators ωi ), and replacement (the
population is updated by replacing some existing solutions by the newly created ones, using a replacement
function ψ). This process is repeated until a certain termination criterion (usually reaching a maximum
number of iterations) is satisfied. Each iteration of this process is commonly termed a generation.
According to this description, it is possible to express the pseudocode of an EA as shown in Figure 1.2.
Every possible instantiation of this algorithmic template1 will give rise to a different EA. More precisely,
it is possible to distinguish different EA families, by considering some guidelines on how to perform this
instantiation.
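As an illustration, the template of Figure 1.2 can be instantiated in a short Python sketch. The function names (`init`, `select`, `reproduce`, `replace`) are placeholders standing for ι, σ, the ωi, and ψ, and the ONEMAX toy problem is an illustrative choice, not taken from the chapter.

```python
import random

def evolutionary_algorithm(init, select, reproduce, replace, fitness,
                           generations=100):
    """Generic EA loop following Figure 1.2 (a sketch, not the book's code):
    init plays the role of iota, select of sigma, reproduce of the omega_i,
    and replace of psi."""
    population = init()
    for _ in range(generations):          # termination criterion: generation count
        parents = select(population, fitness)                 # selection
        offspring = reproduce(parents)                        # reproduction
        population = replace(population, offspring, fitness)  # replacement
    return max(population, key=fitness)

def onemax_demo(n=20, pop_size=30):
    """Toy instantiation: maximize the number of ones in a bitstring (ONEMAX)."""
    fitness = lambda x: sum(x)
    init = lambda: [[random.randint(0, 1) for _ in range(n)]
                    for _ in range(pop_size)]
    def select(pop, fit):                 # binary tournament selection
        return [max(random.sample(pop, 2), key=fit) for _ in range(len(pop))]
    def reproduce(parents):               # bit-flip mutation at rate 1/n
        return [[1 - b if random.random() < 1.0 / n else b for b in p]
                for p in parents]
    def replace(pop, off, fit):           # elitist generational replacement
        return sorted(pop + off, key=fit, reverse=True)[:len(pop)]
    return evolutionary_algorithm(init, select, reproduce, replace, fitness)
```

Every choice made inside `onemax_demo` (tournament selection, mutation-only reproduction, elitist replacement) corresponds to one of the instantiation decisions discussed in the following sections.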

1.3.2 The Flavors of Evolutionary Algorithms


EAs, as we know them now, began their existence during the late 1960s and early 1970s (some earlier
references to the topic exist, though; see Reference 10). In these years — and almost simultaneously —
scientists from different places in the world began the task of putting Nature at work in algorithmics, and
more precisely in search of problem-solving duties. The existence of these different primordial sources
originated the rise of three different EA models. These classical families are:

• Evolutionary Programming (EP): This EA family originated in the work of Fogel et al. [11].
EP focuses on the adaptation of individuals rather than on the evolution of their genetic
information. This implies a much more abstract view of the evolutionary process, in which the behavior of
individuals is directly modified (as opposed to manipulating its genes). This behavior is typically
modeled by using complex data structures such as finite automata or graphs (see Figure 1.3[a]).
Traditionally, EP uses asexual reproduction — also known as mutation, that is, introducing slight
changes in an existing solution — and selection techniques based on direct competition among
individuals.
• Evolution Strategies (ESs): These techniques were initially developed in Germany by Rechenberg
[12] and Schwefel [13]. Their original goal was serving as a tool for solving engineering problems.
With this goal in mind, these techniques are characterized by manipulating arrays of floating-point
numbers (there exist versions of ES for discrete problems, but they are much more popular for
continuous optimization). As EP, mutation is sometimes the unique reproductive operator used
in ES; it is not rare to also consider recombination (i.e., the construction of new solutions by
combining portions of some individuals) though. A very important feature of ES is the utilization
of self-adaptive mechanisms for controlling the application of mutation. These mechanisms are
aimed at optimizing the progress of the search by evolving not only the solutions for the problem
being considered, but also some parameters for mutating these solutions (in a typical situation,
an ES individual is a pair (x, σ), where σ is a vector of standard deviations used to control the
Gaussian mutation exerted on the actual solution x).

1 The mere fact that this high-level heuristic template can host a low-level heuristic justifies using the term
metaheuristic, as will be seen later.

FIGURE 1.3 Two examples of complex representations. (a) A graph representing a neural network. (b) A tree
representing a fuzzy rule.
• Genetic Algorithms (GAs): GAs are possibly the most widespread variant of EAs. They were
conceived by Holland [14]. His work has had a great influence on the development of the field, to the
point that some portions — arguably extrapolated — of it were taken almost as dogmas (e.g., the
ubiquitous use of binary strings as chromosomes). The main feature of GAs is the use of a
recombination (or crossover) operator as the primary search tool. The rationale is the assumption that
different parts of the optimal solution can be independently discovered, and be later combined to
create better solutions. Additionally, mutation is also used, but it was usually considered a second-
ary background operator whose purpose is merely “keeping the pot boiling” by introducing new
information in the population (this classical interpretation is no longer considered valid though).

These families have not grown in complete isolation from each other. On the contrary, numerous
researchers built bridges among them. As a result of this interaction, the borders of these classical families
tend to be fuzzy (the reader may check [15] for a unified presentation of EA families), and new variants
have emerged. We can cite the following:

• Evolution Programs (EPs): This term is due to Michalewicz [5], and comprises those techniques
that, while using the principles of functioning of GAs, evolve complex data structures, as in EP.
Nowadays, it is customary to use the acronym GA — or more generally EA — to refer to such an
algorithm, leaving the term “traditional GA” to denote classical bit-string based GAs.
• Genetic Programming (GP): The roots of GP can be traced back to the work of Cramer [16], but
it is indisputable that Koza [17] is the researcher who promoted GP to its current status.
Essentially, GP could be viewed as an evolution program in which the structures evolved represent
computer programs. Such programs are typically encoded by trees (see Figure 1.3[b]). The final
goal of GP is the automatic design of a program for solving a certain task, formulated as a collection
of (input, output) examples.
• Memetic Algorithms (MAs): These techniques owe their name to Moscato [18]. Some widespread
misconception equates MAs to EAs augmented with local search; although such an augmented EA
could be indeed considered a MA, other possibilities exist for defining MAs. In general, a MA is a
problem-aware EA [19]. This problem awareness is typically acquired by combining the EA with
existing algorithms such as hill climbing, branch and bound, etc.

In addition to the different EA variants mentioned above, there exist several other techniques that could
also fall within the scope of EAs, such as Ant Colony Optimization [20], Distribution Estimation Algorithms
[21], or Scatter Search [22] among others. All of them rely on achieving some kind of balance between
the exploration of new regions of the search space, and the exploitation of regions known to be promising
[23], so as to minimize the computational effort for finding the desired solution. Nevertheless, these
techniques exhibit very distinctive features that make them depart from the general pseudocode depicted
in Figure 1.2. The broader term metaheuristic (e.g., see Reference 24) is used to encompass this larger set
of modern optimization techniques, including EAs.

1.4 Dissecting an Evolutionary Algorithm


Once the general structure of an EA has been presented, we will get into more detail on the different
components of the algorithm.

1.4.1 The Fitness Function


This is an essential component of the EA, to the point that some early (and nowadays discredited)
views of EAs considered it the sole point of interaction with the problem that is intended to be
solved. In this view, the fitness function measures how good a certain tentative solution is for the problem
of interest. This interpretation has given rise to several misconceptions, the most important being the
equation “fitness = quality of a solution.” There are many examples in which this is simply not true [19],
for example, tackling the satisfiability problem with EAs (i.e., finding the truth assignment that makes
a logic formula in conjunctive normal form be satisfied). If quality is used as fitness function, then the
search space is divided into solutions with fitness 1 (those satisfying the target formula), and solutions
with fitness 0 (those that do not satisfy it). Hence, the EA would be essentially looking for a needle in
a haystack (actually, there may be more than one needle in that haystack, but that does not change the
situation). A much more reasonable choice is making fitness equal to the number of clauses in the formula
satisfied by a certain solution. This introduces a gradation that allows the EA “climbing” in search of
near-optimal solutions.
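The graded fitness function just described can be sketched as follows. The DIMACS-style clause encoding (signed integers, one list per clause) is an illustrative assumption, not a representation prescribed by the chapter.

```python
def maxsat_fitness(assignment, clauses):
    """Count the clauses satisfied by a truth assignment.

    `assignment` is a list of booleans (variable i -> assignment[i - 1]);
    each clause is a list of DIMACS-style signed integers, e.g. [1, -3]
    means (x1 OR NOT x3).
    """
    satisfied = 0
    for clause in clauses:
        # A clause is satisfied if at least one of its literals is true.
        if any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause):
            satisfied += 1
    return satisfied
```

For the formula (x1 ∨ x2) ∧ (¬x1 ∨ x3) ∧ (¬x2 ∨ ¬x3), partial assignments receive intermediate fitness values instead of a flat 0, which is precisely the gradation the EA needs to climb.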
The existence of this gradation is thus a central feature of the fitness function, and its actual
implementation is not that important as long as this goal is achieved. Of course, implementation issues are important
from a computational point of view, since the cost of the EA is typically assumed to be that of evaluating
solutions. In this sense, it must be taken into account that fitness can be measured by means of a
simple mathematical expression, or may involve performing a complex simulation of a physical system.
Furthermore, this fitness function may incorporate some level of noise, or even vary dynamically. The
remaining components of the EA must be defined accordingly so as to deal with these features of the fitness
function, for example, using a nonhaploid representation [25] (i.e., having more than one chromosome)
so as to have a genetic reservoir of worthwhile information from the past, and thus be capable of tackling
dynamic changes in the fitness function.
Notice that there may even exist more than one criterion for guiding the search (e.g., we would like to
evolve the shape of a set of pillars, so that their strength is maximal while their cost is minimal).
These criteria will typically be partially conflicting. In this case, a multiobjective problem is being faced.
This can be tackled in different ways, such as performing an aggregation of these multiple criteria into a
single value, or using the notion of Pareto dominance (i.e., solution x dominates solution y if, and only
if, fi(x) yields a value at least as good as fi(y) for all i, and strictly better for at least one i, where the fi's
represent the multiple criteria being optimized). See References 26 and 27 for details.
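The dominance test can be written down directly; this sketch assumes every criterion is to be maximized (a minimization objective would flip the corresponding comparisons).

```python
def dominates(x_objs, y_objs):
    """True iff solution x Pareto-dominates solution y.

    x dominates y iff x is at least as good as y in every criterion
    and strictly better in at least one (all criteria maximized here).
    """
    at_least_as_good = all(fx >= fy for fx, fy in zip(x_objs, y_objs))
    strictly_better = any(fx > fy for fx, fy in zip(x_objs, y_objs))
    return at_least_as_good and strictly_better
```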

1.4.2 Initialization
In order to have the EA started, it is necessary to create the initial population of solutions. This is
typically addressed by randomly generating the desired number of solutions. When the alphabet used
for representing solutions has low cardinality, this random initialization provides a more or less uniform
sample of the solution space. The EA can subsequently start exploring the wide area covered by the initial
population, in search of the most promising regions.
In some cases, there exists the risk of not having the initial population adequately scattered all over the
search space (e.g., when using small populations and/or large alphabets for representing solutions). It is
then necessary to resort to systematic initialization procedures [28], so as to ensure that all symbols are
uniformly present in the initial population.
This random initialization can be complemented with the inclusion of heuristic solutions in the initial
population. The EA can thus benefit from the existence of other algorithms, using the solutions they
provide. This is termed seeding, and it is known to be very beneficial in terms of convergence speed, and
quality of the solutions achieved [29,30]. The potential drawback of this technique is having the injected
solutions taking over the whole population in a few iterations, provoking the stagnation of the algorithm.
This problem can be remedied by tuning the selection intensity by some means (e.g., by making an
adequate choice of the selection operator, as will be shown below).

1.4.3 Selection
In combination with replacement, selection is responsible for the competition aspects of individuals in
the population. In fact, replacement can be intuitively regarded as the complementary application of the
selection operation.
Using the information provided by the fitness function, a sample of individuals from the population is
selected for breeding. This sample is obviously biased towards better individuals; that is, good — according
to the fitness function — solutions should be more likely in the sample than bad solutions.2
The most popular techniques are fitness-proportionate methods. In these methods, the probability of
selecting an individual for breeding is proportional to its fitness, that is,

        pi = fi / Σj∈P fj ,        (1.1)

where fi is the fitness3 of individual i, and pi is the probability of i getting into the reproduction stage. This
proportional selection can be implemented in a number of ways. For example, roulette-wheel selection rolls
a die with |P| sides, such that the ith side has probability pi. This is repeated as many times as individuals
are required in the sample. A drawback of this procedure is that the actual number of instances of
individual i in the sample can largely deviate from the expected |P| · pi. Stochastic Universal Sampling [31]
(SUS) does not have this problem, and produces a sample with minimal deviation from expected values.

2 At least, this is customary in genetic algorithms. In other EC families, selection is less important for biasing
evolution, and it is done at random (a typical option in evolution strategies), or exhaustively, that is, all individuals
undergo reproduction (as is typical in evolutionary programming).
3 Maximization is assumed here. In case we were dealing with a minimization problem, fitness should be transformed
so as to obtain an appropriate value for this purpose, for example, subtracting it from the highest possible value of the
guiding function, or taking the inverse of it.
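SUS admits a compact implementation: a single random offset places n equally spaced pointers on the fitness "wheel", so each individual appears in the sample either ⌊n·pi⌋ or ⌈n·pi⌉ times. The sketch below assumes nonnegative fitness values and maximization.

```python
import random

def sus(population, fitnesses, n):
    """Stochastic Universal Sampling: draw n individuals with one spin of
    n equally spaced pointers (assumes nonnegative fitness, maximization)."""
    total = float(sum(fitnesses))
    step = total / n
    start = random.uniform(0, step)          # single random offset
    pointers = [start + i * step for i in range(n)]
    sample, cumulative, idx = [], 0.0, 0
    for p in pointers:
        # Advance until the cumulative fitness through idx covers pointer p.
        while cumulative + fitnesses[idx] < p:
            cumulative += fitnesses[idx]
            idx += 1
        sample.append(population[idx])
    return sample
```

With fitnesses [1, 1, 2] and n = 4, the expected counts n·pi are all integers, so the sample always contains each individual exactly its expected number of times, which roulette-wheel selection cannot guarantee.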
Fitness-proportionate selection faces problems when the fitness values of individuals are very similar
to one another. In this case, pi would be approximately |P|^−1 for all i ∈ P, and hence selection would be
essentially random. This can be remedied by using fitness scaling. Typical options are (see Reference 5):

• Linear scaling: fi′ = a · fi + b, for some real numbers a, b.
• Exponential scaling: fi′ = (fi)^k, for some real number k.
• Sigma truncation: fi′ = max(0, fi − (f̄ − c · σ)), where f̄ is the mean fitness of individuals, σ is the
  fitness standard deviation, and c is a real number.
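Sigma truncation, for instance, is a direct transcription of the formula above; the choice of the constant c (and the use of the population standard deviation) is a tunable assumption.

```python
import statistics

def sigma_truncation(fitnesses, c=2.0):
    """Sigma truncation scaling: f'_i = max(0, f_i - (mean - c * stddev))."""
    mean = statistics.mean(fitnesses)
    std = statistics.pstdev(fitnesses)       # population standard deviation
    return [max(0.0, f - (mean - c * std)) for f in fitnesses]
```

Note that the transformation preserves fitness differences among the surviving (nonzero) values while discarding individuals far below the population mean.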

Another problem is the appearance of an individual whose fitness is much better than the remaining
individuals. Such super-individuals can quickly take over the population. To avoid this, the best option is
using a nonfitness-proportionate mechanism. A first possibility is ranking selection [32]: individuals are
ranked according to fitness (best first, worst last), and later selected — for example, by means of SUS —
using the following probabilities:

 
        pi = (1/|P|) [η− + (η+ − η−)(i − 1)/(|P| − 1)],        (1.2)

where pi is the probability of selecting the ith best individual, and η− + η+ = 2.


Another possibility is using tournament selection [33]. In this case, a direct competition is performed
whenever an individual needs to be selected. To be precise, α individuals are sampled at random, and
the best of them is selected for reproduction. This is repeated as many times as needed. The
parameter α is termed the tournament size; the higher this value, the stronger the selective pressure. These
nonproportionate selection methods have the advantage of being insensitive to fitness scaling problems
and to the sense of optimization (maximization or minimization). The reader is referred to, for example,
References 34 and 35 for a theoretical analysis of the properties of different selection operators.
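Tournament selection is especially simple to implement; the sketch below returns one selected individual, with the tournament size α controlling the selective pressure.

```python
import random

def tournament_select(population, fitness, alpha=2):
    """Sample alpha individuals at random and return the best of them.
    Larger alpha means stronger selective pressure."""
    contenders = random.sample(population, alpha)
    return max(contenders, key=fitness)
```

With α equal to the population size, the best individual is always selected; with α = 1, selection degenerates into a uniformly random pick.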
Regardless of the selection operator used, it was implicitly assumed in the previous discussion that any
two individuals in the population can mate, that is, all individuals belong to an unstructured, centralized
population. However, this is not necessarily the case. There exists a long tradition of using structured
populations in EC, especially associated with parallel implementations. Among the most widely known
types of structured EAs, distributed (dEA) and cellular (cEA) algorithms are very popular optimization
procedures [36].
Decentralizing a single population can be achieved by partitioning it into several subpopulations,
where component EAs are run performing sparse exchanges of individuals (dEAs), or in the form of
neighborhoods (cEAs). The main difference is that a dEA has a large subpopulation, usually much
larger than the single individual that a cEA has typically in every component algorithm. In a dEA, the
subpopulations are loosely coupled, while for a cEA they are tightly coupled. Additionally, in a dEA, there
exist only a few subpopulations, while in a cEA there is a large number of them.
The use of decentralized populations has a great influence on the selection intensity, since not all
individuals have to compete with one another. As a consequence, diversity is often better preserved.

1.4.4 Recombination
Recombination is a process that models information exchange among several individuals (typically two
of them, but a higher number is possible [37]). This is done by constructing new solutions using the
information contained in a number of selected parents. If it is the case that the resulting individuals (the
offspring) are entirely composed of information taken from the parents, then the recombination is said to
be transmitting [38,39]. This is the case of classical recombination operators for bitstrings such as single-
point crossover, or uniform crossover [40], among others. Figure 1.4 shows an example of the application
of these operators.

FIGURE 1.4 Two examples of recombination on bitstrings: single-point crossover (left) and uniform crossover
(right).

FIGURE 1.5 PMX at work. The numbers in brackets indicate the order in which elements are copied to the
descendant.
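The two bitstring operators of Figure 1.4 can be sketched as follows; note that both are transmitting in the sense just defined, since every gene of the offspring is taken from one of the parents.

```python
import random

def single_point_crossover(p1, p2):
    """Cut both parents at a random point and swap the tails."""
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def uniform_crossover(p1, p2):
    """A random binary mask decides, per gene, which parent donates to
    which child (each position inherited with probability 1/2)."""
    mask = [random.randint(0, 1) for _ in p1]
    child1 = [a if m else b for m, a, b in zip(mask, p1, p2)]
    child2 = [b if m else a for m, a, b in zip(mask, p1, p2)]
    return child1, child2
```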
This property captures the a priori role of recombination: combining good parts of solutions that have
been independently discovered. It can be difficult to achieve for certain problem domains though (the
Traveling Salesman Problem (TSP) is a typical example). In those situations, it is possible to consider other
properties of interest such as respect or assortment. The former refers to the fact that the recombination
operator generates descendants carrying all features common to all parents; thus, this property can be seen
as a part of the exploitative side of the search. On the other hand, assortment represents the exploratory side
of recombination. A recombination operator is said to be properly assorting if, and only if, it can generate
descendants carrying any combination of compatible features taken from the parents. The assortment is
said to be weak if it is necessary to perform several recombinations within the offspring to achieve this
effect.
The recombination operator must match the particulars of the representation of solutions chosen.
In the GA context, the representation was typically binary, and hence operators such as those depicted
in Figure 1.4 were used. The situation is different in other EA families (and indeed in modern GAs too).
Without leaving GAs, another very typical representation is that of permutations. Many ad hoc operators
have been defined for this purpose, for example, order crossover (OX) [41], partially mapped crossover
(PMX; see Figure 1.5) [42], and uniform cycle crossover (UCX) [43] among others. The reader may check
[43] for a survey of these different operators.
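As an example of such an ad hoc operator, PMX (the operator of Figure 1.5) can be sketched as follows: a random slice of the first parent is copied verbatim, and the remaining positions are filled from the second parent, chasing the mapping induced by the slice whenever a value would be duplicated. The cut points are drawn at random here; a production implementation would typically expose them for testing.

```python
import random

def pmx(p1, p2):
    """Partially mapped crossover (PMX) for permutations; returns one child."""
    n = len(p1)
    a, b = sorted(random.sample(range(n + 1), 2))
    child = [None] * n
    child[a:b] = p1[a:b]                      # copy the slice from parent 1
    segment = set(p1[a:b])
    mapping = {p1[i]: p2[i] for i in range(a, b)}
    for i in list(range(0, a)) + list(range(b, n)):
        v = p2[i]
        while v in segment:                   # chase the mapping until v is free
            v = mapping[v]
        child[i] = v
    return child
```

With the parents of Figure 1.5 and cut points 3 and 6, this procedure reproduces the child shown there; in all cases the result is a valid permutation.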
FIGURE 1.6 An example of branch-swapping recombination, as it is typically used in GP.

When used in continuous parameter optimization, recombination can exploit the richness of the
representation, and utilize a variety of alternate strategies to create the offspring. Let (x1, . . . , xn) and
(y1, . . . , yn) be two arrays of real-valued elements to be recombined, and let (z1, . . . , zn) be the resulting
array. Some possibilities for performing recombination are the following:

• Arithmetic recombination: zi = (xi + yi)/2, 1 ≤ i ≤ n.
• Geometric recombination: zi = √(xi · yi), 1 ≤ i ≤ n.
• Flat recombination: zi = αxi + (1 − α)yi, 1 ≤ i ≤ n, where α is a random value in [0, 1].
• BLX-α recombination [44]: zi = ri + β(si − ri), 1 ≤ i ≤ n, where ri = min(xi, yi) − α|xi − yi|,
  si = max(xi, yi) + α|xi − yi|, and β is a random value in [0, 1].
• Fuzzy recombination: zi = Q(xi, yi), 1 ≤ i ≤ n, where Q is a fuzzy connective [45].
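For instance, BLX-α is a direct transcription of the formula above; drawing a fresh β for every gene is an illustrative choice (a single β per chromosome is also used in practice).

```python
import random

def blx_alpha(x, y, alpha=0.5):
    """BLX-alpha recombination for real-coded chromosomes: each child gene is
    drawn uniformly from the parents' interval extended by alpha on each side."""
    child = []
    for xi, yi in zip(x, y):
        r = min(xi, yi) - alpha * abs(xi - yi)
        s = max(xi, yi) + alpha * abs(xi - yi)
        child.append(r + random.random() * (s - r))
    return child
```

Unlike flat recombination, BLX-α can generate gene values outside the interval spanned by the parents, which adds an exploratory component to the operator.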

In the case of self-adaptive schemes such as those typically used in ES, the parameters undergoing
self-adaptation would be recombined as well, using some of these operators. More details on self-adaptation
will follow in the next subsection.
Solutions can be also represented by means of some complex data structure, and the recombination
operator must be adequately defined to deal with these (e.g., References 46 to 48). In particular, the field
of GP normally uses trees to represent LISP programs [17], rule-bases [49], mathematical expressions,
etc. Recombination is usually performed here by swapping branches of the trees involved, as exemplified
in Figure 1.6.

1.4.5 Mutation
From a classical point of view (at least in the GA arena [50]), this was a secondary operator whose mission is
to keep the pot boiling, continuously injecting new material in the population, but at a low rate (otherwise,
the search would degrade to a random walk in the solution space). EP practitioners [11] would disagree
with this characterization, claiming a central role for mutation. Actually, it is considered the crucial part
of the search engine in this context. This latter vision has nowadays propagated to most EC researchers
(at least in the sense of considering mutation as important as recombination).
As it was the case for recombination, the choice of a mutation operator depends on the representation
used. In bitstrings (and, in general, in linear strings spanning Σ^n, where Σ is an arbitrary alphabet), mutation
is done by randomly substituting the symbol contained at a certain position by a different symbol. If a
permutation representation is used, such a procedure cannot be used for it would not produce a valid
permutation. Typical strategies in this case are swapping two randomly chosen positions, or inverting a
segment of the permutation. The interested reader may check [51] or [5] for an overview of different
options.
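Both typical strategies preserve the validity of the permutation, as the following sketch shows.

```python
import random

def swap_mutation(perm):
    """Exchange two randomly chosen positions of the permutation."""
    p = list(perm)
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

def inversion_mutation(perm):
    """Reverse a randomly chosen segment of the permutation."""
    p = list(perm)
    i, j = sorted(random.sample(range(len(p) + 1), 2))
    p[i:j] = reversed(p[i:j])
    return p
```

Inversion is a natural choice for problems such as the TSP, where reversing a segment corresponds to a small, meaningful change of the tour.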
If solutions are represented by complex data structures, mutation has to be implemented accordingly.
In particular, this is the case of EP, in which, for example, finite automata [52], layered graphs [53],
directed acyclic graphs [54], etc., are often evolved. In this domain, it is customary to use more than one
mutation operator, making for each individual a choice of which operators will be deployed on it.
In the case of ES applied to continuous optimization, mutation is typically done using Gaussian
perturbations, that is,

        zi = xi + Ni(0, σi),        (1.3)


where σi is a parameter controlling the amplitude of the mutation, and N(a, b) is a random number
drawn from a normal distribution with mean a and standard deviation b. The parameters σi usually
undergo self-adaptation. In this case, they are mutated prior to mutating the xi's as follows:

        σi′ = σi · e^(Ni(0,τ) + N(0,τ′)),        (1.4)

where τ and τ′ are two parameters termed the local and global learning rates, respectively. Advanced schemes
have also been defined in which a covariance matrix is used rather than independent σi's. However, these
schemes tend to be impractical if solutions are highly dimensional. For a better understanding of ES
mutation, see Reference 55.
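Equations (1.3) and (1.4) combine into a single self-adaptive mutation step, sketched below. The default learning rates (τ = 1/√(2√n) local, τ′ = 1/√(2n) global) follow common textbook settings and are an assumption, not part of this chapter.

```python
import math
import random

def es_mutate(x, sigma, tau=None, tau_prime=None):
    """Self-adaptive ES mutation: first mutate the step sizes log-normally
    (Eq. 1.4), then perturb the solution with the new sigmas (Eq. 1.3)."""
    n = len(x)
    tau = tau if tau is not None else 1.0 / math.sqrt(2.0 * math.sqrt(n))
    tau_prime = tau_prime if tau_prime is not None else 1.0 / math.sqrt(2.0 * n)
    global_step = random.gauss(0.0, tau_prime)        # N(0, tau'): one draw shared by all i
    new_sigma = [s * math.exp(random.gauss(0.0, tau) + global_step)  # Ni(0, tau) per component
                 for s in sigma]
    new_x = [xi + random.gauss(0.0, si) for xi, si in zip(x, new_sigma)]
    return new_x, new_sigma
```

Because the step sizes are multiplied by a log-normal factor, they remain strictly positive, and components that benefit from larger perturbations can grow their own mutation amplitude over the generations.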

1.4.6 Replacement
The role of replacement is keeping the population size constant.4 To do so, some individuals from the
population have to be substituted by some of the individuals created during reproduction. This can be
done in several ways:
• Replacement-of-the-worst: the population is sorted according to fitness, and the new individuals
replace the worst ones in the population.
• Random replacement: the individuals to be replaced are selected at random.
• Tournament replacement: a subset of α individuals is selected at random, and the worst one is
selected for replacement. Notice that if α = 1 we have random replacement.
• Direct replacement: the offspring replace their parents.
Some variants of these strategies are possible. For example, it is possible to consider the elitist versions
of these, and only perform replacement if the new individual is better than the individual it has to replace.
Two replacement strategies (comma and plus) are also typically considered in the context of ES and
EP. Comma replacement is analogous to replacement-of-the-worst, with the addition that the number of
new individuals |P′| (also denoted by λ) can be larger than the population size |P| (also denoted by µ).
In this case, the population is constructed using the best µ out of the λ new individuals. As to the plus
strategy, it is the elitist counterpart of the former, that is, pick the best µ individuals out of the µ
old individuals plus the λ new ones. The notation (µ, λ)-EA and (µ + λ)-EA is used to denote these
two strategies.
It must be noted that the term "elitism" is often also used to denote replacement-of-the-worst
strategies in which |P′| < |P|. This strategy is very commonly used, and ensures that the best individual
found so far is never lost. An extreme situation takes place when |P′| = 1, that is, just a single individual is
generated in each iteration of the algorithm. This is known as steady-state reproduction, and it is usually
associated with faster convergence of the algorithm. The term generational is used to designate the classical
situation in which |P′| = |P|.
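Assuming fitness is to be maximized, the comma and plus strategies can be sketched as follows (illustrative code; the function names are ours):

```python
def comma_replacement(parents, offspring, fitness):
    """(mu, lambda): the new population is the best mu of the lambda offspring."""
    mu = len(parents)
    assert len(offspring) >= mu, "the comma strategy requires lambda >= mu"
    return sorted(offspring, key=fitness, reverse=True)[:mu]

def plus_replacement(parents, offspring, fitness):
    """(mu + lambda): the best mu of the old and new individuals together."""
    mu = len(parents)
    return sorted(parents + offspring, key=fitness, reverse=True)[:mu]
```

The plus strategy is elitist: since parents compete with their offspring, the best individual found so far can never be lost.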

1.5 Fields of Application of EAs


Evolutionary algorithms have been thoroughly used in many domains. One of the most conspicuous
fields in which these techniques have been utilized is combinatorial optimization (CO). Thus, EAs
have been used to solve classical NP-hard problems such as the Travelling Salesman Problem [57–59],
the Multiple Knapsack Problem [60,61], Number Partitioning [62,63], Max Independent Set [64,65], and
Graph Coloring [66,67], among others.
Other nonclassical — yet important — CO problems to which EAs have been applied are scheduling
(in many variants [43, 68–71]), timetabling [72,73], lot-sizing [74], vehicle routing [75,76], quadratic
assignment [77,78], placement problems [79,80], and transportation problems [81].

4 Although it is not mandatory to do so [56], it is common practice to use populations of fixed size.

1-14 Handbook of Bioinspired Algorithms and Applications

Telecommunications is another field that has witnessed the successful application of EAs. For example,
EAs have been applied to the placement of antennas and converters [82,83], frequency assignment
[84–86], digital data network design [87], predicting bandwidth demands in ATM networks [88], error
code design [89,90], etc. See also Reference 91.
Evolutionary algorithms have been actively used in electronics and engineering as well. For example,
work has been done in structure optimization [92], aeronautic design [93], power planning [94], circuit
design [95], computer-aided design [96], analogue-network synthesis [97], and service restoration [98],
among other areas.
Besides the precise application areas mentioned before, EAs have also been utilized in many other
fields such as, for example, medicine [99,100], economics [101,102], mathematics [103,104], biology
[105–107], etc. The reader may try querying any bibliographical database or web search engine for
“evolutionary algorithm application” to get an idea of the vast number of problems that have been tackled
with EAs.

1.6 Conclusions
EC is a fascinating field. Its optimization philosophy is appealing, and its practical power is striking.
Whenever the user is faced with a hard search/optimization task that she cannot solve by classical means,
trying EAs is a must. The extremely brief overview of EA applications presented before can convince the
reader that a “killer approach” is in her hands.
EC is also a very active research field. One of the main weaknesses of the field is the absence of
a conclusive general theoretical basis, although great advances are being made in this direction, and
in-depth knowledge is available about certain idealized EA models.
Regarding the more practical aspects of the paradigm, two main streams can be identified:
parallelization and hybridization. The use of decentralized EAs in the context of multiprocessor or
networked systems can result in enormous performance improvements [108], and constitutes an ideal option
for exploiting the availability of distributed computing resources. As to hybridization, it has become
evident in recent years that it constitutes a crucial factor for the successful use of EAs in real-world
endeavors. This can be achieved by hard-wiring problem knowledge within the EA, or by combining it
with other techniques. In this sense, the reader is encouraged to read other essays in this volume to get
valuable ideas on suitable candidates for this hybridization.

Acknowledgments
This work has been partially funded by the Ministry of Science and Technology (MCYT) and the
European Regional Development Fund (FEDER) under contract TIC2002-04498-C05-02 (the TRACER
project): http://tracer.lcc.uma.es.

References
[1] T. Bäck, D.B. Fogel, and Z. Michalewicz. Handbook of Evolutionary Computation. Oxford
University Press, New York, 1997.
[2] T.C. Jones. Evolutionary Algorithms, Fitness Landscapes and Search. Ph.D. thesis, University of
New Mexico, 1995.
[3] C. Darwin. On the Origin of Species by Means of Natural Selection. John Murray, London, 1859.
[4] G. Mendel. Versuche über Pflanzen-Hybriden. Verhandlungen des Naturforschenden Vereines in
Brünn, 4: 3–47, 1865.
[5] Z. Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs. Springer-Verlag,
Berlin, 1992.
[6] J. Huxley. Evolution, the Modern Synthesis. Harper, New York, 1942.


[7] M. Kimura. Evolutionary rate at the molecular level. Nature, 217: 624–626, 1968.
[8] S.J. Gould and N. Eldredge. Punctuated equilibria: The tempo and mode of evolution reconsidered.
Paleobiology, 3: 115–151, 1977.
[9] C.G. Langton. Artificial life. In C.G. Langton, Ed., Artificial Life 1. Addison-Wesley, Santa Fe, NM,
1989, pp. 1–47.
[10] D.B. Fogel. Evolutionary Computation: The Fossil Record. Wiley-IEEE Press, Piscataway, NJ, 1998.
[11] L.J. Fogel, A.J. Owens, and M.J. Walsh. Artificial Intelligence Through Simulated Evolution. John
Wiley & Sons, New York, 1966.
[12] I. Rechenberg. Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologis-
chen Evolution. Frommann-Holzboog Verlag, Stuttgart, 1973.
[13] H.P. Schwefel. Numerische Optimierung von Computer–Modellen mittels der Evolutionsstrategie,
Vol. 26 of Interdisciplinary Systems Research. Birkhäuser, Basel, 1977.
[14] J.H. Holland. Adaptation in Natural and Artificial Systems. University of Michigan Press,
Ann Arbor, MI, 1975.
[15] T. Bäck. Evolutionary Algorithms in Theory and Practice. Oxford University Press, New York, 1996.
[16] M.L. Cramer. A representation for the adaptive generation of simple sequential programs.
In J.J. Grefenstette, Ed., Proceedings of the First International Conference on Genetic Algorithms.
Lawrence Erlbaum Associates, Hillsdale, NJ, 1985.
[17] J.R. Koza. Genetic Programming. MIT Press, Cambridge, MA, 1992.
[18] P. Moscato. On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards
Memetic Algorithms. Technical report Caltech Concurrent Computation Program, Report 826,
California Institute of Technology, Pasadena, CA, USA, 1989.
[19] P. Moscato and C. Cotta. A gentle introduction to memetic algorithms. In F. Glover and
G. Kochenberger, Eds., Handbook of Metaheuristics. Kluwer Academic Publishers, Boston, MA,
2003, pp. 105–144.
[20] M. Dorigo and G. Di Caro. The ant colony optimization meta-heuristic. In D. Corne, M. Dorigo,
and F. Glover, Eds., New Ideas in Optimization. McGraw-Hill, Maidenhead, UK, 1999, pp. 11–32.
[21] P. Larrañaga and J.A. Lozano. Estimation of Distribution Algorithms. A New Tool for Evolutionary
Computation. Kluwer Academic Publishers, Boston, MA, 2001.
[22] M. Laguna and R. Martí. Scatter Search. Methodology and Implementations in C. Kluwer Academic
Publishers, Boston, MA, 2003.
[23] C. Blum and A. Roli. Metaheuristics in combinatorial optimization: Overview and conceptual
comparison. ACM Computing Surveys, 35: 268–308, 2003.
[24] F. Glover and G. Kochenberger. Handbook of Metaheuristics. Kluwer Academic Publishers, Boston,
MA, 2003.
[25] R.E. Smith. Diploid genetic algorithms for search in time varying environments. In Annual
Southeast Regional Conference of the ACM. ACM Press, New York, 1987, pp. 175–179.
[26] C.A. Coello. A comprehensive survey of evolutionary-based multiobjective optimization
techniques. Knowledge and Information Systems, 1: 269–308, 1999.
[27] C.A. Coello and A.D. Christiansen. An approach to multiobjective optimization using genetic
algorithms. In C.H. Dagli, M. Akay, C.L.P. Chen, B.R. Fernández, and J. Ghosh, Eds., Intelligent
Engineering Systems Through Artificial Neural Networks, Vol. 5. ASME Press, St. Louis, MO, 1995,
pp. 411–416.
[28] C.R. Reeves. Using genetic algorithms with small populations. In S. Forrest, Ed., Proceedings of the
Fifth International Conference on Genetic Algorithms. Morgan Kaufmann, San Mateo, CA, 1993,
pp. 92–99.
[29] C. Cotta. On the evolutionary inference of temporal Boolean networks. In J. Mira and
J.R. Álvarez, Eds., Computational Methods in Neural Modeling, Vol. 2686 of Lecture Notes in
Computer Science. Springer-Verlag, Berlin, Heidelberg, 2003, pp. 494–501.
[30] C. Ramsey and J.J. Grefenstette. Case-based initialization of genetic algorithms. In S. Forrest,
Ed., Proceedings of the Fifth International Conference on Genetic Algorithms. Morgan Kaufmann,
San Mateo, CA, 1993, pp. 84–91.


[31] J.E. Baker. Reducing bias and inefficiency in the selection algorithm. In J.J. Grefenstette, Ed.,
Proceedings of the Second International Conference on Genetic Algorithms. Lawrence Erlbaum
Associates, Hillsdale, NJ, 1987, pp. 14–21.
[32] D.L. Whitley. Using reproductive evaluation to improve genetic search and heuristic discovery.
In J.J. Grefenstette, Ed., Proceedings of the Second International Conference on Genetic Algorithms.
Lawrence Erlbaum Associates, Hillsdale, NJ, 1987, pp. 116–121.
[33] T. Blickle and L. Thiele. A mathematical analysis of tournament selection. In L.J. Eshelman,
Ed., Proceedings of the Sixth International Conference on Genetic Algorithms. Morgan Kaufmann,
San Francisco, CA, 1995, pp. 9–16.
[34] E. Cantú-Paz. Order statistics and selection methods of evolutionary algorithms. Information
Processing Letters, 82: 15–22, 2002.
[35] K. Deb and D. Goldberg. A comparative analysis of selection schemes used in genetic algorithms.
In G.J. Rawlins, Ed., Foundations of Genetic Algorithms. Morgan Kaufmann, San Mateo, CA, 1991, pp. 69–93.
[36] E. Alba and J.M. Troya. A survey of parallel distributed genetic algorithms. Complexity, 4: 31–52,
1999.
[37] A.E. Eiben, P.-E. Raue, and Zs. Ruttkay. Genetic algorithms with multi-parent recombination.
In Y. Davidor, H.-P. Schwefel, and R. Männer, Eds., Parallel Problem Solving from Nature
III, Vol. 866 of Lecture Notes in Computer Science. Springer-Verlag, Berlin, Heidelberg, 1994,
pp. 78–87.
[38] C. Cotta and J.M. Troya. Information processing in transmitting recombination. Applied
Mathematics Letters, 16: 945–948, 2003.
[39] N.J. Radcliffe. The algebra of genetic algorithms. Annals of Mathematics and Artificial Intelligence,
10: 339–384, 1994.
[40] G. Syswerda. Uniform crossover in genetic algorithms. In J.D. Schaffer, Ed., Proceedings of the
Third International Conference on Genetic Algorithms. Morgan Kaufmann, San Mateo, CA, 1989,
pp. 2–9.
[41] L. Davis. Handbook of Genetic Algorithms. Van Nostrand Reinhold Computer Library, New York,
1991.
[42] D.E. Goldberg and R. Lingle, Jr. Alleles, loci and the traveling salesman problem.
In J.J. Grefenstette, Ed., Proceedings of an International Conference on Genetic Algorithms.
Lawrence Erlbaum Associates, Hillsdale, NJ, 1985.
[43] C. Cotta and J.M. Troya. Genetic forma recombination in permutation flowshop problems.
Evolutionary Computation, 6: 25–44, 1998.
[44] L.J. Eshelman and J.D. Schaffer. Real-coded genetic algorithms and interval-schemata. In
D. Whitley, Ed., Foundations of Genetic Algorithms 2. Morgan Kaufmann Publishers, San Mateo,
CA, 1993, pp. 187–202.
[45] F. Herrera, M. Lozano, and J.L. Verdegay. Dynamic and heuristic fuzzy connectives-based
crossover operators for controlling the diversity and convergence of real-coded genetic algorithms.
International Journal of Intelligent Systems, 11: 1013–1041, 1996.
[46] E. Alba, J.F. Aldana, and J.M. Troya. Full automatic ANN design: A genetic approach. In J. Cabestany,
J. Mira, and A. Prieto, Eds., New Trends in Neural Computation, Vol. 686 of Lecture Notes in
Computer Science. Springer-Verlag, Heidelberg, 1993, pp. 399–404.
[47] E. Alba and J.M. Troya. Genetic algorithms for protocol validation. In H.M. Voigt, W. Ebeling,
I. Rechenberg, and H.-P. Schwefel, Eds., Parallel Problem Solving from Nature IV. Springer-Verlag,
Berlin, Heidelberg, 1996, pp. 870–879.
[48] C. Cotta and J.M. Troya. Analyzing directed acyclic graph recombination. In B. Reusch, Ed.,
Computational Intelligence: Theory and Applications, Vol. 2206 of Lecture Notes in Computer
Science. Springer-Verlag, Berlin, Heidelberg, 2001, pp. 739–748.
[49] E. Alba, C. Cotta, and J.M. Troya. Evolutionary design of fuzzy logic controllers using strongly-
typed GP. Mathware & Soft Computing, 6: 109–124, 1999.


[50] D.E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-
Wesley, Reading, MA, 1989.
[51] A.E. Eiben and J.E. Smith. Introduction to Evolutionary Computing. Springer-Verlag, Berlin,
Heidelberg, 2003.
[52] C.H. Clelland and D.A. Newlands. PFSA modelling of behavioural sequences by evolutionary
programming. In R.J. Stonier and X.H. Yu, Eds., Complex Systems: Mechanism for Adaptation.
IOS Press, Rockhampton, Queensland, Australia, 1994, pp. 165–172.
[53] X. Yao and Y. Liu. A new evolutionary system for evolving artificial neural networks. IEEE
Transactions on Neural Networks, 8: 694–713, 1997.
[54] M.L. Wong, W. Lam, and K.S. Leung. Using evolutionary programming and minimum description
length principle for data mining of Bayesian networks. IEEE Transactions on Pattern Analysis
and Machine Intelligence, 21: 174–178, 1999.
[55] H.-G. Beyer. The Theory of Evolution Strategies. Springer-Verlag, Berlin, Heidelberg, 2001.
[56] F. Fernandez, L. Vanneschi, and M. Tomassini. The effect of plagues in genetic programming:
A study of variable-size populations. In C. Ryan et al., Eds., Genetic Programming, Proceedings of
EuroGP’2003, Vol. 2610 of Lecture Notes in Computer Science. Springer-Verlag, Berlin, Heidelberg,
2003, pp. 320–329.
[57] S. Chatterjee, C. Carrera, and L. Lynch. Genetic algorithms and traveling salesman problems.
European Journal of Operational Research, 93: 490–510, 1996.
[58] D.B. Fogel. An evolutionary approach to the traveling salesman problem. Biological Cybernetics,
60: 139–144, 1988.
[59] P. Merz and B. Freisleben. Genetic local search for the TSP: New Results. In Proceedings of the
1997 IEEE International Conference on Evolutionary Computation. IEEE Press, Indianapolis, USA,
1997, pp. 159–164.
[60] C. Cotta and J.M. Troya. A hybrid genetic algorithm for the 0–1 multiple knapsack problem.
In G.D. Smith, N.C. Steele, and R.F. Albrecht, Eds., Artificial Neural Nets and Genetic Algorithms
3. Springer-Verlag, Wien New York, 1998, pp. 251–255.
[61] S. Khuri, T. Bäck, and J. Heitkötter. The zero/one multiple knapsack problem and genetic
algorithms. In E. Deaton, D. Oppenheim, J. Urban, and H. Berghel, Eds., Proceedings of the 1994
ACM Symposium on Applied Computing. ACM Press, New York, 1994, pp. 188–193.
[62] R. Berretta, C. Cotta, and P. Moscato. Enhancing the performance of memetic algorithms by
using a matching-based recombination algorithm: Results on the number partitioning problem.
In M. Resende and J. Pinho de Sousa, Eds., Metaheuristics: Computer-Decision Making. Kluwer
Academic Publishers, Boston, MA, 2003, pp. 65–90.
[63] D.R. Jones and M.A. Beltramo. Solving partitioning problems with genetic algorithms. In
R.K. Belew and L.B. Booker, Eds., Proceedings of the Fourth International Conference on Genetic
Algorithms. Morgan Kaufmann, San Mateo, CA, 1991, pp. 442–449.
[64] C.C. Aggarwal, J.B. Orlin, and R.P. Tai. Optimized crossover for the independent set problem.
Operations Research, 45: 226–234, 1997.
[65] M. Hifi. A genetic algorithm-based heuristic for solving the weighted maximum independent set
and some equivalent problems. Journal of the Operational Research Society, 48: 612–622, 1997.
[66] D. Costa, N. Dubuis, and A. Hertz. Embedding of a sequential procedure within an evolutionary
algorithm for coloring problems in graphs. Journal of Heuristics, 1: 105–128, 1995.
[67] C. Fleurent and J.A. Ferland. Genetic and hybrid algorithms for graph coloring. Annals of
Operations Research, 63: 437–461, 1997.
[68] S. Cavalieri and P. Gaiardelli. Hybrid genetic algorithms for a multiple-objective scheduling
problem. Journal of Intelligent Manufacturing, 9: 361–367, 1998.
[69] D. Costa. An evolutionary tabu search algorithm and the NHL scheduling problem. INFOR, 33:
161–178, 1995.
[70] C.F. Liaw. A hybrid genetic algorithm for the open shop scheduling problem. European Journal
of Operational Research, 124: 28–42, 2000.


[71] L. Ozdamar. A genetic algorithm approach to a general category project scheduling problem.
IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews), 29: 44–59,
1999.
[72] E.K. Burke, J.P. Newall, and R.F. Weare. Initialisation strategies and diversity in evolutionary
timetabling. Evolutionary Computation, 6: 81–103, 1998.
[73] B. Paechter, R.C. Rankin, and A. Cumming. Improving a lecture timetabling system for university
wide use. In E.K. Burke and M. Carter, Eds., The Practice and Theory of Automated Timetabling
II, Vol. 1408 of Lecture Notes in Computer Science. Springer-Verlag, Berlin, 1998, pp. 156–165.
[74] K. Haase and U. Kohlmorgen. Parallel genetic algorithm for the capacitated lot-sizing prob-
lem. In Kleinschmidt et al., Eds., Operations Research Proceedings. Springer-Verlag, Berlin, 1996,
pp. 370–375.
[75] J. Berger and M. Barkaoui. A hybrid genetic algorithm for the capacitated vehicle routing prob-
lem. In E. Cantú-Paz, Ed., Proceedings of the Genetic and Evolutionary Computation Conference
2003, Vol. 2723 of Lecture Notes in Computer Science. Springer-Verlag, Berlin, Heidelberg, 2003,
pp. 646–656.
[76] J. Berger, M. Salois, and R. Begin. A hybrid genetic algorithm for the vehicle routing problem with
time windows. In R.E. Mercer and E. Neufeld, Eds., Advances in Artificial Intelligence. 12th Biennial
Conference of the Canadian Society for Computational Studies of Intelligence. Springer-Verlag,
Berlin, 1998, pp. 114-127.
[77] P. Merz and B. Freisleben. Genetic algorithms for binary quadratic programming. In W. Banzhaf
et al., Eds., Proceedings of the 1999 Genetic and Evolutionary Computation Conference,
Morgan Kaufmann, San Francisco, CA, 1999, pp. 417–424.
[78] P. Merz and B. Freisleben. Fitness landscape analysis and memetic algorithms for the quadratic
assignment problem. IEEE Transactions on Evolutionary Computation, 4: 337–352, 2000.
[79] E. Hopper and B. Turton. A genetic algorithm for a 2D industrial packing problem. Computers &
Industrial Engineering, 37: 375–378, 1999.
[80] R.M. Krzanowski and J. Raper. Hybrid genetic algorithm for transmitter location in wireless
networks. Computers, Environment and Urban Systems, 23: 359–382, 1999.
[81] M. Gen, K. Ida, and L. Yinzhen. Bicriteria transportation problem by hybrid genetic algorithm.
Computers & Industrial Engineering, 35: 363–366, 1998.
[82] P. Calégari, F. Guidec, P. Kuonen, and D. Wagner. Parallel island-based genetic algorithm for radio
network design. Journal of Parallel and Distributed Computing, 47: 86–90, 1997.
[83] C. Vijayanand, M.S. Kumar, K.R. Venugopal, and P.S. Kumar. Converter placement in all-optical
networks using genetic algorithms. Computer Communications, 23: 1223–1234, 2000.
[84] C. Cotta and J.M. Troya. A comparison of several evolutionary heuristics for the frequency
assignment problem. In J. Mira and A. Prieto, Eds., Connectionist Models of Neurons, Learning
Processes, and Artificial Intelligence, Vol. 2084 of Lecture Notes in Computer Science. Springer-
Verlag, Berlin, Heidelberg, 2001, pp. 709–716.
[85] R. Dorne and J.K. Hao. An evolutionary approach for frequency assignment in cellular radio
networks. In 1995 IEEE International Conference on Evolutionary Computation. IEEE Press, Perth,
Australia, 1995, pp. 539–544.
[86] A. Kapsalis, V.J. Rayward-Smith, and G.D. Smith. Using genetic algorithms to solve the radio link
frequency assignment problem. In D.W. Pearson, N.C. Steele, and R.F. Albretch, Eds., Artificial
Neural Nets and Genetic Algorithms. Springer-Verlag, Wien New York, 1995, pp. 37–40.
[87] C.H. Chu, G. Premkumar, and H. Chou. Digital data networks design using genetic algorithms.
European Journal of Operational Research, 127: 140–158, 2000.
[88] N. Swaminathan, J. Srinivasan, and S.V. Raghavan. Bandwidth-demand prediction in virtual paths
in ATM networks using genetic algorithms. Computer Communications, 22: 1127–1135, 1999.
[89] H. Chen, N.S. Flann, and D.W. Watson. Parallel genetic simulated annealing: A massively parallel
SIMD algorithm. IEEE Transactions on Parallel and Distributed Systems, 9: 126–136, 1998.


[90] K. Dontas and K. De Jong. Discovery of maximal distance codes using genetic algorithms.
In Proceedings of the Second International IEEE Conference on Tools for Artificial Intelligence. IEEE
Press, Herndon, VA, 1990, pp. 805–811.
[91] D.W. Corne, M.J. Oates, and G.D. Smith. Telecommunications Optimization: Heuristic and
Adaptive Techniques. John Wiley, New York, 2000.
[92] I.C. Yeh. Hybrid genetic algorithms for optimization of truss structures. Computer Aided Civil
and Infrastructure Engineering, 14: 199–206, 1999.
[93] D. Quagliarella and A. Vicini. Hybrid genetic algorithms as tools for complex optimisation prob-
lems. In P. Blonda, M. Castellano, and A. Petrosino, Eds., New Trends in Fuzzy Logic II. Proceedings
of the Second Italian Workshop on Fuzzy Logic. World Scientific, Singapore, 1998, pp. 300–307.
[94] A.J. Urdaneta, J.F. Gómez, E. Sorrentino, L. Flores, and R. Díaz. A hybrid genetic algorithm for
optimal reactive power planning based upon successive linear programming. IEEE Transactions
on Power Systems, 14: 1292–1298, 1999.
[95] M. Guotian and L. Changhong. Optimal design of the broadband stepped impedance transformer
based on the hybrid genetic algorithm. Journal of Xidian University, 26: 8–12, 1999.
[96] B. Becker and R. Drechsler. OFDD-based minimization of fixed polarity Reed-Muller expressions
using hybrid genetic algorithms. In Proceedings of the IEEE International Conference on Computer
Design: VLSI in Computers and Processor. IEEE, Los Alamitos, CA, 1994, pp. 106–110.
[97] J.B. Grimbleby. Hybrid genetic algorithms for analogue network synthesis. In Proceedings of the
1999 Congress on Evolutionary Computation. IEEE, Washington D.C., 1999, pp. 1781–1787.
[98] A. Augugliaro, L. Dusonchet, and E. Riva-Sanseverino. Service restoration in compensated dis-
tribution networks using a hybrid genetic algorithm. Electric Power Systems Research, 46: 59–66,
1998.
[99] M. Sipper and C.A. Peña Reyes. Evolutionary computation in medicine: An overview. Artificial
Intelligence in Medicine, 19: 1–23, 2000.
[100] R. Wehrens, C. Lucasius, L. Buydens, and G. Kateman. HIPS, A hybrid self-adapting expert system
for nuclear magnetic resonance spectrum interpretation using genetic algorithms. Analytica
Chimica ACTA, 277: 313–324, 1993.
[101] J. Alander. Indexed Bibliography of Genetic Algorithms in Economics. Technical report
94-1-ECO, University of Vaasa, Department of Information Technology and Production
Economics, 1995.
[102] F. Li, R. Morgan, and D. Williams. Economic environmental dispatch made easy with hybrid
genetic algorithms. In Proceedings of the International Conference on Electrical Engineering, Vol.
2. International Academic Publishers, Beijing, China, 1996, pp. 965–969.
[103] C. Reich. Simulation of imprecise ordinary differential equations using evolutionary algorithms.
In J. Carroll, E. Damiani, H. Haddad, and D. Oppenheim, Eds., ACM Symposium on Applied
Computing 2000. ACM Press, New York, 2000, pp. 428–432.
[104] X. Wei and F. Kangling. A hybrid genetic algorithm for global solution of nondifferentiable
nonlinear function. Control Theory & Applications, 17: 180–183, 2000.
[105] C. Cotta and P. Moscato. Inferring phylogenetic trees using evolutionary algorithms.
In J.J. Merelo, P. Adamidis, H.-G. Beyer, J.-L. Fernández-Villacañas, and H.-P. Schwefel, Eds.,
Parallel Problem Solving from Nature VII, Vol. 2439 of Lecture Notes in Computer Science.
Springer-Verlag, Berlin, 2002, pp. 720–729.
[106] G.B. Fogel and D.W. Corne. Evolutionary Computation in Bioinformatics. Morgan Kaufmann,
San Francisco, CA, 2003.
[107] R. Thomsen, G.B. Fogel, and T. Krink. A Clustal alignment improver using evolutionary
algorithms. In D.B. Fogel, X. Yao, G. Greenwood, H. Iba, P. Marrow, and M. Shackleton, Eds.,
Proceedings of the Fourth Congress on Evolutionary Computation (CEC-2002), Vol. 1, 2002,
pp. 121–126.
[108] E. Alba. Parallel evolutionary algorithms can achieve super-linear performance. Information
Processing Letters, 82: 7–13, 2002.



2
An Overview of Neural Networks Models

Javid Taheri
Albert Y. Zomaya

2.1 Introduction ............................................. 2-21
2.2 General Structure of a Neural Network ................... 2-22
      Single- and Multi-Layer Perceptrons • Function Representation
2.3 Learning in Single-Layer Models ......................... 2-26
      Supervised Learning
2.4 Unsupervised Learning ................................... 2-30
      K-Means Clustering • Kohonen Clustering • ART1 • ART2
2.5 Learning in Multiple Layer Models ....................... 2-32
      The Back Propagation Algorithm • Radial Basis Functions
2.6 A Sample of Neural Network Applications ................. 2-35
      Expert Systems • Neural Controllers • Decision Makers •
      Robot Path Planning • Adaptive Noise Cancellation
2.7 Conclusion .............................................. 2-36
References .................................................. 2-36

2.1 Introduction
Artificial Neural Networks have been one of the most active areas of research in computer science over
the last 50 years, with periods of intense activity interrupted by episodes of hiatus [1]. The premise for
the theory of artificial Neural Networks stems from the basic neurological structure of
living organisms. A cell is the most important constituent of these life forms. These cells are connected
by "synapses," which are the links that carry messages between cells. In fact, by using synapses to carry
pulses, cells can activate each other with different threshold values to form a decision or memorize an
event. Inspired by this simplistic view of how messages are transferred between cells, scientists invented
a new computational approach, which became popularly known as Artificial Neural Networks (or Neural
Networks for short), and used it extensively to target a wide range of problems in many application
areas.
Although the configurations of different Neural Networks may look different at first
glance, most are similar in structure. Every neural network consists of "cells" and "links." Cells are
the computational part of the network; they perform reasoning and generate activation signals for other

cells, while links connect the different cells and enable messages to flow between them. Each link is usually
a one-directional connection with a weight that affects the carried message in a certain way. This means
that a link receives a value (message) from an input cell, multiplies it by a given weight, and then passes it
to the output cell. In its simplest form, a cell can have three states (of activation): +1 (TRUE), 0, and −1
(FALSE) [1].

2.2 General Structure of a Neural Network


Cells (or neurons) can have more sophisticated structures to handle complex problems. These
neurons are basically linear or nonlinear functions, with or without biases. Figure 2.1 shows two simple
neurons, unbiased and biased.

2.2.1 Single- and Multi-Layer Perceptrons


The single-layer perceptron is one of the simplest classes of Neural Networks [1]. The general structure of
this network is shown in Figure 2.2; the network has n inputs and generates only one output. The
input of the function f(·) is a linear combination of the network's inputs. In this case, W is the
vector of neuron weights, X is the input vector, and y is the only output of the network, defined as follows:

    y = f(W · X + b),
    W = (w1 w2 ... wn),
    X = (x1 x2 ... xn)^T.
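As a minimal illustration, the computation y = f(W · X + b) can be coded directly; the threshold activation below is one possible choice of f(·), matching the three activation states mentioned in the introduction:

```python
def neuron(weights, inputs, bias, activation):
    """One neuron: apply the activation to the weighted input sum plus bias."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(s)

def sign(s):
    """Threshold activation with the three states +1, 0 and -1."""
    return 1 if s > 0 else (-1 if s < 0 else 0)

# Example: W = (1, 2), X = (0.5, 0.25), b = -0.5 gives s = 0.5, so y = +1.
y = neuron([1.0, 2.0], [0.5, 0.25], -0.5, sign)
```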

The above-mentioned basic structure can be extended to produce networks with more than one output.
In this case, each output has its own weights and is completely uncorrelated with the other outputs. Figure 2.3

FIGURE 2.1 (a) Unbiased neuron, y = f(wx), and (b) biased neuron, y = f(wx + b).


FIGURE 2.2 A single-output single-layer perceptron.

FIGURE 2.3 A multi-output single-layer perceptron.

shows an instance of such a network, described by the following formulas:

    Y = F(W · X + B),

    W = [ w1,1  w1,2  ...  w1,n
          w2,1
           ...
          wm,1   ...       wm,n ],

    X = (x1 x2 ... xn)^T,
    Y = (y1 y2 ... ym)^T,
    B = (b1 b2 ... bm)^T,
    F(·) = (f1(·) f2(·) ... fm(·))^T,


FIGURE 2.4 The basic structure of a multi-layer neural network.

where n is the number of inputs, m the number of outputs, W the weighting matrix, X the input vector, Y the output vector, and F (·) the array of output functions.
A multi-layer perceptron can be constructed simply by concatenating several single-layer perceptron networks. Figure 2.4 shows the basic structure of such a network with the following parameters [1]: X is the input vector, Y the output vector, n the number of inputs, m the number of outputs, p the total number of layers in the network, m_i the number of outputs of the ith layer, and n_i the number of inputs of the ith layer.
Note that in this network, every internal layer can have its own number of inputs and outputs, subject only to the concatenation rule, that is, n_i = m_{i-1}. The output of the first layer is calculated as follows:

Z^1 = F^1(W^1 · X + B^1),

      | w^1_1,1   w^1_1,2   ...   w^1_1,n  |
      | w^1_2,1                            |
W^1 = |   ..               ..              | ,
      | w^1_m1,1  ...             w^1_m1,n |

X = (x1  x2  ...  xn)^T,
B^1 = (b^1_1  b^1_2  ...  b^1_m1)^T,
Z^1 = (z^1_1  z^1_2  ...  z^1_m1)^T,
F^1(·) = (f^1_1(·)  f^1_2(·)  ...  f^1_m1(·))^T.




Consequently the output of the second layer would be:

Z^2 = F^2(W^2 · Z^1 + B^2),

      | w^2_1,1   w^2_1,2   ...   w^2_1,m1  |
      | w^2_2,1                             |
W^2 = |   ..               ..               | ,
      | w^2_m2,1  ...             w^2_m2,m1 |

B^2 = (b^2_1  b^2_2  ...  b^2_m2)^T,
Z^2 = (z^2_1  z^2_2  ...  z^2_m2)^T,
F^2(·) = (f^2_1(·)  f^2_2(·)  ...  f^2_m2(·))^T,

and finally the last layer formulation can be presented as follows:

Y = Z^p = F^p(W^p · Z^{p-1} + B^p),

      | w^p_1,1   w^p_1,2   ...   w^p_1,m(p-1)  |
      | w^p_2,1                                 |
W^p = |   ..               ..                   | ,
      | w^p_mp,1  ...             w^p_mp,m(p-1) |

B^p = (b^p_1  b^p_2  ...  b^p_mp)^T,
Z^p = (z^p_1  z^p_2  ...  z^p_mp)^T,
F^p(·) = (f^p_1(·)  f^p_2(·)  ...  f^p_mp(·))^T.

Notice that the complexity of such networks grows rapidly with the number of layers. In practice, a multi-layer perceptron can be emulated by a single-layer perceptron with a comparatively large number of nodes.
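The layer-by-layer formulas above amount to repeatedly applying Z = F (W · Z_prev + B). A minimal Python sketch of this chaining is given below; the identity activations and toy weights are assumptions for illustration only:

```python
def layer_forward(W, B, F, X):
    """One layer: Z = F(W . X + B); W is a list of m rows of length n."""
    return [f(sum(w * x for w, x in zip(row, X)) + b)
            for row, b, f in zip(W, B, F)]

def mlp_forward(layers, X):
    """Concatenate p layers; each layer is a (W, B, F) triple with n_i = m_{i-1}."""
    Z = X
    for W, B, F in layers:
        Z = layer_forward(W, B, F, Z)
    return Z

def identity(s):
    return s

# Two toy layers: 2 inputs -> 2 hidden units -> 1 output (weights illustrative).
layers = [
    ([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0], [identity, identity]),
    ([[1.0, 1.0]], [0.5], [identity]),
]
y = mlp_forward(layers, [2.0, 3.0])
```

Because the concatenation rule n_i = m_{i-1} holds, the output list of one layer is passed directly as the input list of the next.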

2.2.2 Function Representation


Two of the most popular uses of Neural Networks are to represent (or approximate) functions and to model systems. Basically, a neural network is used to imitate the behavior of a function by generating outputs similar to those of the real system (or function) over the same range of inputs.
2.2.2.1 Boolean Functions
Neural networks were first used to model simple Boolean functions. For example, Figure 2.5 shows how
a neural network can be used to model an AND operator, while Figure 2.6 gives the truth table. Note
that, “1” stands for “TRUE” while “−1” represents a “FALSE” value. The network in Figure 2.5 actually
simulates a linear (function) separator, which simply divides the decision space into two parts.
2.2.2.2 Real Valued Functions
In this case, the network weights must be set so that it can generate continuous outputs of a real system.
The generated network is also intended to act as an extrapolator that can generate output data for inputs
that are different from the training set.




FIGURE 2.5 A neural network to implement the logical AND: y = sgn(1.4 x1 + 1.4 x2 − 0.7).

x1   x2   y
−1   −1   −1
−1   +1   −1
+1   −1   −1
+1   +1   +1

FIGURE 2.6 Implementation of the logical AND of Figure 2.5.
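The AND network of Figure 2.5 can be checked directly in code. The weights 1.4, 1.4 and bias −0.7 are taken from the figure; breaking the sgn tie at zero toward +1 is an assumption:

```python
def sgn(s):
    """Sign function; the tie s == 0 maps to +1 (an assumption)."""
    return 1 if s >= 0 else -1

def and_neuron(x1, x2):
    """Weights and bias from Figure 2.5: y = sgn(1.4*x1 + 1.4*x2 - 0.7)."""
    return sgn(1.4 * x1 + 1.4 * x2 - 0.7)

# Enumerate the truth table of Figure 2.6 over the {+1, -1} encoding.
truth = {(x1, x2): and_neuron(x1, x2) for x1 in (-1, 1) for x2 in (-1, 1)}
```

The resulting dictionary reproduces the truth table of Figure 2.6: only the input (+1, +1) yields +1.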

2.3 Learning in Single-Layer Models


The main, and most important, application of all Neural Networks is their ability to model a process or learn the behavior of a system. Toward this end, several algorithms have been proposed to train the adjustable parameters of a network (i.e., W ). Basically, training a neural network to adjust the W 's falls into two different classes: supervised and unsupervised [2–6].

2.3.1 Supervised Learning


The main purpose of this kind of training is to “teach” a network to copy the behavior of a system or a
function. In this case, there is always a need to have a “training” data set. The network topology and the
algorithm that the network is trained with are highly inter-related. In general, a topology of the network
is chosen first and then an appropriate training algorithm is used to tune the weights (W ) [7,8].

2.3.1.1 Perceptron Learning


As mentioned earlier, the perceptron is the most basic form of Neural Networks. Essentially, this network tries to classify input data by separating it with a hyperplane. In this approach, to simplify the algorithm, suppose that the network's inputs are restricted to {+1, 0, −1}, while the output can be {+1, −1}. The aim of the algorithm is to find an appropriate set of weights, W , by sampling a training set, T , that will capture




the mapping that associates each input to an output, that is,

W = (w0 w1 ... wn ),

T = {(R 1 , S 1 ), (R 2 , S 2 ), . . . , (R L , S L )},

where n is the number of inputs, R^i is the ith input datum, S^i represents the desired output for the ith pattern, and L is the size of the training set. Note that, for the above vector W , w_n is used to adjust the bias in the values of the weights. Perceptron learning can be summarized as follows:

Step 1: Set all elements of the weight vector to zero, that is, W = (0 0 · · · 0).
Step 2: Select a training pattern at random, say the kth datum.
Step 3: IF the current W does not classify this pattern correctly, that is, sgn(W · R^k) ≠ S^k, THEN modify the weight vector as follows: W ← W + R^k S^k.
Step 4: Repeat steps 2 and 3 until all data are classified correctly.
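The steps above can be sketched as follows. For simplicity, the sketch sweeps the training set sequentially rather than sampling at random, and appends a constant 1 to each input so that the last weight plays the role of the bias; both choices are assumptions:

```python
def sgn(s):
    return 1 if s >= 0 else -1

def train_perceptron(T, n, max_epochs=100):
    """Perceptron learning; patterns are swept in order (an assumption),
    and each R has a constant 1 appended as a bias input."""
    W = [0.0] * (n + 1)                                  # Step 1
    for _ in range(max_epochs):
        errors = 0
        for R, S in T:                                   # Step 2 (sequential)
            if sgn(sum(w * r for w, r in zip(W, R))) != S:
                W = [w + r * S for w, r in zip(W, R)]    # Step 3
                errors += 1
        if errors == 0:                                  # Step 4
            return W
    return W

# Learn the logical AND on {+1, -1} inputs; the last component is the bias input.
data = [((-1, -1, 1), -1), ((-1, 1, 1), -1), ((1, -1, 1), -1), ((1, 1, 1), 1)]
W = train_perceptron(data, n=2)
```

On this separable toy set the loop terminates after a few sweeps with a weight vector that classifies all four patterns correctly.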

2.3.1.2 Linear Auto-Associators Learning


An auto-associative network is another type of network, one which has a form of memory. In this network, the input and output nodes are basically the same. Hence, when a datum enters the network, it passes through the nodes and converges to the closest memorized datum, which was previously stored in the network during the training process [1].
Figure 2.7 shows an instance of such a network with five nodes. It is worth mentioning that the weighting matrix of such a network is not necessarily symmetrical. That is, w_{i,j}, which relates node "i" to node "j", may have a different value from w_{j,i}. The key to designing such a network is the training data. In this case, the assumption is that the training data are orthogonal, or at least approximately orthogonal, that is,


⟨T_i, T_j⟩ ≈  1 if i = j,
              0 if i ≠ j,

where T_i is the ith training datum and ⟨·,·⟩ is the inner product of two vectors. Based on the above assumption, the weight matrix for this network is calculated as follows, where ⊗ stands for the outer product of two vectors:

W = Σ_{i=1}^{N} T_i ⊗ T_i.

FIGURE 2.7 A sample linear auto-associative network with five nodes.

© 2006 by Taylor & Francis Group, LLC

CHAPMAN: “C4754_C002” — 2005/8/17 — 18:12 — page 27 — #7


2-28 Handbook of Bioinspired Algorithms and Applications

As can be seen, the main advantage of this network is its one-shot learning process, given orthogonal data. Note that, even if the input data are not orthogonal in the first place, they can be transferred to a new space by a simple transfer function.
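The one-shot storage rule W = Σ T_i ⊗ T_i and the recall step W · x can be sketched as follows; the two orthonormal patterns are chosen specifically to satisfy the orthogonality assumption:

```python
def outer(u, v):
    """Outer product of two vectors, as a list of rows."""
    return [[ui * vj for vj in v] for ui in u]

def store(patterns):
    """One-shot Hebbian storage: W = sum_i T_i (outer) T_i."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for T in patterns:
        P = outer(T, T)
        W = [[W[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return W

def recall(W, x):
    """W . x; with orthonormal stored patterns, W . T_k recovers T_k."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

# Two orthonormal patterns in R^2 (chosen to satisfy the orthogonality assumption).
T1, T2 = [1.0, 0.0], [0.0, 1.0]
W = store([T1, T2])
out = recall(W, T1)
```

Because the stored patterns are orthonormal, each recall reproduces the stored pattern exactly; with only approximately orthogonal data the recall is approximate.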
2.3.1.3 Iterative Learning
Iterative learning is another approach that can be used to train a network. In this case, the network's weights are modified smoothly, in contrast to one-shot learning algorithms. In general, the network weights are first set to some arbitrary values; then the training data are fed to the network, and in each training cycle the weights are modified smoothly. The training process proceeds until the network reaches an acceptable level of performance. The training data may be selected either sequentially or randomly in each training cycle [9–11].
2.3.1.4 Hopfield’s Model
A Hopfield neural network is another example of an auto-associative network [1,12–14]. There are two main differences between this network and the previously described auto-associative network. First, self-connection is not allowed, that is, w_{i,i} = 0 for all nodes. Second, inputs and outputs are either 0 or 1. The node activations are recomputed during each convergence cycle as follows:


S_i = Σ_{j=1}^{N} w_{i,j} · u_j(t),      (2.1)

u_i = 1 if S_i ≥ 0,
      0 if S_i < 0.                      (2.2)

After a datum is fed into the network, in each convergence cycle the nodes are selected by a uniform random function; the inputs are used to calculate Equation (2.1), followed by Equation (2.2) to generate the output. This procedure continues until the network converges.
The proof of convergence for this network uses the notion of “energy.” This means that an energy value
is assigned to each state of the network and through the different iterations of the algorithm, the overall
energy is decreased until it reaches a steady state.
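The asynchronous update of Equations (2.1) and (2.2) can be sketched as below; the particular weight matrix and the random seed are illustrative assumptions:

```python
import random

def hopfield_converge(W, u, seed=0):
    """Asynchronous updates of Equations (2.1)-(2.2): nodes are picked
    in random order until a full sweep produces no change."""
    rng = random.Random(seed)
    n = len(u)
    u = list(u)
    while True:
        changed = False
        for i in rng.sample(range(n), n):
            s = sum(W[i][j] * u[j] for j in range(n))   # Eq. (2.1)
            new = 1 if s >= 0 else 0                    # Eq. (2.2)
            if new != u[i]:
                u[i], changed = new, True
        if not changed:
            return u

# A toy symmetric weight matrix with zero diagonal (self-connections forbidden).
W = [[0, 1, -1],
     [1, 0, -1],
     [-1, -1, 0]]
state = hopfield_converge(W, [1, 0, 1])
```

The returned state is a fixed point: applying Equations (2.1) and (2.2) once more leaves every node unchanged, which is exactly the steady state reached when the energy stops decreasing.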
2.3.1.5 Mean Square Error Algorithms
These techniques emerged as an answer to the deficiencies experienced when using Perceptrons and other simple networks [1,15]. One of the most important such deficiencies is the inseparability of training data: if the data used to train the network are inherently inseparable, the training algorithm never terminates (Figure 2.8).
The other reason for using this technique is to converge to a better solution. In Perceptron learning,
the training process terminates right after finding the first answer regardless of its quality (i.e., sensitivity
of the answer). Figure 2.9 shows an example of such a case. Note that, although the answer found by
the Perceptron algorithm is correct (Figure 2.9[a]), the answer in Figure 2.9(b) is more robust. Finally,
another reason for using Mean Square Error (MSE) algorithms, which is crucial for most neural network
algorithms, is that of speed of convergence.
The MSE algorithm attempts to modify the network weights based on the overall error of all data. In this
case, assume that network input and output data are represented by Ti , Ri for i = 1, . . . , N , respectively.
Now the MSE error is defined as follows:

E = (1/N) Σ_{i=1}^{N} (W · T_i − R_i)^2.

Note that the stated error is the summation of the individual errors over all the training data. In spite of the advantages gained by this training technique, there are several disadvantages; for example, the network might not be able to classify the data correctly if the data are widely spread apart (Figure 2.10). The other




FIGURE 2.8 Inseparable training data set.

FIGURE 2.9 Two classifications for sample data.

disadvantage is the speed of convergence, which may vary considerably from one data set to another.
2.3.1.6 The Widrow–Hoff Rule or LMS Algorithm
In this technique, the network weights are modified after each iteration [1,16]. A training datum is selected at random, and the network weights are modified based on the corresponding error. This procedure continues until the weights converge. For a randomly selected kth entry in the training data, the error is calculated as follows:

ε = (W · Tk − Rk )2 .

The gradient vector of this error would be:

∇ε = ( ∂ε/∂W_0   ∂ε/∂W_1   · · ·   ∂ε/∂W_N ).




FIGURE 2.10 A data set with far apart solutions.

Hence,

∂ε/∂W_j = 2(W · T_k − R_k) · T_{k,j},

where T_{k,j} is the jth component of T_k; in vector form, ∇ε = 2(W · T_k − R_k) · T_k.

Based on the Widrow–Hoff algorithm, the weights should be modified in the direction opposite to the gradient. As a result, the final update formula for the weight vector W is:

W′ = W − ρ · (W · T_k − R_k) · T_k.

Note that ρ is known as the learning rate; it absorbs the constant factor of 2.
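The update W′ = W − ρ(W · T_k − R_k) · T_k can be sketched as a short training loop; the learning rate, epoch count, and toy data below are assumptions for illustration:

```python
import random

def lms_train(T, R, rho=0.1, epochs=200, seed=1):
    """Widrow-Hoff (LMS) rule: pick a random (T_k, R_k) and apply
    W' = W - rho * (W . T_k - R_k) * T_k."""
    rng = random.Random(seed)
    W = [0.0] * len(T[0])
    for _ in range(epochs):
        k = rng.randrange(len(T))
        err = sum(w * t for w, t in zip(W, T[k])) - R[k]
        W = [w - rho * err * t for w, t in zip(W, T[k])]
    return W

# Fit y = 2*x + 1; the trailing constant-1 input acts as the bias term.
T = [[0.0, 1.0], [1.0, 1.0], [2.0, 1.0], [-1.0, 1.0]]
R = [1.0, 3.0, 5.0, -1.0]
W = lms_train(T, R)
```

Since the data are exactly realizable by a linear model, the weights converge toward the true coefficients (2, 1) as the iterations proceed.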

2.4 Unsupervised Learning


This class of networks attempts to cluster input data without the traditional "learn by example" technique that is commonly used for Neural Networks. Clustering tends to be the most popular type of application for which these networks are used. The most popular networks in this class are: K-means, Kohonen, ART1, and ART2 [17–21].

2.4.1 K-Means Clustering


This is the simplest technique used for classifying data. A network with a predefined number of clusters is considered; then each datum is assigned to one of these clusters. This process continues until all data are checked and classified properly. The algorithm is implemented as follows:
Step 1: Consider a network with K clusters.
Step 2: Assign all data to one of the above clusters, with respect to the distance between the center of
the cluster and the datum.
Step 3: Modify the center of the assigned cluster.
Step 4: Check all data in the network to ensure proper classification.
Step 5: If a datum has to be moved from one cluster to another, then, update the center of both clusters.
Step 6: Repeat steps 4 and 5 until no datum is wrongly classified.
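The steps above can be sketched as follows; the toy two-cluster data set and the initial centers are illustrative assumptions:

```python
def kmeans(data, centers, max_iter=100):
    """K-means: assign each datum to the nearest center (squared Euclidean
    distance), then recompute each center as the mean of its members."""
    centers = [list(c) for c in centers]
    for _ in range(max_iter):
        clusters = [[] for _ in centers]
        for x in data:                               # Steps 2 and 4
            d = [sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in centers]
            clusters[d.index(min(d))].append(x)
        new = [[sum(col) / len(cl) for col in zip(*cl)] if cl else c
               for cl, c in zip(clusters, centers)]  # Steps 3 and 5
        if new == centers:                           # Step 6: no reassignment
            return centers
        centers = new
    return centers

# Two well-separated toy groups; the initial centers are illustrative guesses.
data = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
centers = kmeans(data, [(0.0, 0.0), (10.0, 10.0)])
```

With well-separated groups and a correct cluster count, the centers settle on the group means after a couple of sweeps, mirroring Figure 2.11(a).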




FIGURE 2.11 Results of K-means clustering with (a) the correct and (b) an incorrect number of clusters.

FIGURE 2.12 Output topology of a Kohonen network.

Figure 2.11 shows an instance of applying such a network to data classification with the correct and incorrect numbers of clusters.

2.4.2 Kohonen Clustering


This classification method clusters input data based on the topological representation of the data. The outputs of the network are arranged so that each output has some neighbors. Thus, during the learning process, not just one output but a group of nearby outputs are modified to classify the data. To clarify, assume that a network is supposed to learn how a set of data is distributed in a two-dimensional representation (Figure 2.12).
In this case, each point is a potential output with a predefined neighborhood margin. For example, the cell marked "X" and eight of its neighbors are shown. Therefore, whenever this cell is selected for update, all its neighbors are included in the process too. The main idea behind this approach to classifying the input data is analogous to some biological facts. In a mammalian brain, all vision, auditory, and tactile sensors are mapped into a number of "cell sheets." Therefore, if one of the cells is activated, all cells close to it will be affected, but at different levels of intensity.
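A single Kohonen-style update, in which the winning cell and its grid neighbors move toward the datum, can be sketched as follows; the square (Chebyshev) neighborhood, learning rate, and grid size are assumptions for illustration:

```python
def som_update(grid, winner, x, eta=0.5, radius=1):
    """Move the winning cell and its grid neighbors toward datum x;
    cells outside the neighborhood margin are left untouched."""
    wi, wj = winner
    for i, row in enumerate(grid):
        for j, w in enumerate(row):
            if max(abs(i - wi), abs(j - wj)) <= radius:
                grid[i][j] = [wk + eta * (xk - wk) for wk, xk in zip(w, x)]

# A 3x3 sheet of 2-D weight vectors, all starting at the origin (illustrative).
grid = [[[0.0, 0.0] for _ in range(3)] for _ in range(3)]
som_update(grid, winner=(0, 0), x=[1.0, 1.0])
```

After the update, the winner at (0, 0) and its in-range neighbors have moved halfway toward the datum, while the far corner of the sheet is unaffected.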

2.4.3 ART1
This neural classifier, known as "Adaptive Resonance Theory" or ART, deals with digital inputs (components of T_i are in {0, 1}). In this network, each "1" in the input vector represents information, while a "0" entry is considered noise or unwanted information. In ART, there is no predefined number of classes before classification starts; the classes are generated during the classification process.
Moreover, each class prototype may include the characteristics of more than one training datum. The basic idea of such a network relies on a similarity factor for data classification. In summary, every time




a datum is assigned to a cluster, first the class nearest to this datum is found; then, if the similarity between this datum and the class prototype exceeds a predefined value, known as the vigilance factor, the datum is assigned to this class and the class prototype is modified to be more similar to the new data entry [1,22,23].
The following procedure shows how this algorithm is implemented. Note the following notation before outlining the algorithm:

1. ‖X‖ is the number of 1s in the vector X.
2. X · Y is the number of common 1s between the vectors X and Y.
3. X ∩ Y is the bitwise AND operator applied to the vectors X and Y.

Step 1: Let β be a small number, n the dimension of the input data, and ρ the vigilance factor (0 ≤ ρ < 1).
Step 2: Start with no class prototypes.
Step 3: Select a training datum at random, T_k.
Step 4: Find the nearest unchecked class prototype, C_i, to this datum by maximizing (C_i · T_k)/(β + ‖C_i‖).
Step 5: Test whether C_i is sufficiently close to T_k by verifying that (C_i · T_k)/(β + ‖C_i‖) > ‖T_k‖/(β + n).
Step 6: If it is not similar enough, make a new class prototype and go to step 3.
Step 7: If it is sufficiently similar, check the vigilance factor: (C_i · T_k)/‖T_k‖ ≥ ρ.
Step 8: If the vigilance factor is exceeded, modify the class prototype by C_i = C_i ∩ T_k and go to step 3.
Step 9: If the vigilance factor is not exceeded, try to find another unchecked class prototype in step 4.
Step 10: Repeat steps 3 to 9 until none of the training data causes any change in class prototypes.
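The procedure can be sketched as follows. The choices of β, ρ, and the toy binary data are assumptions, and the "unchecked prototype" bookkeeping of steps 4 and 9 is simplified into a single ranked pass over all prototypes:

```python
def norm(x):          # ||X||: number of 1s
    return sum(x)

def dot(x, y):        # X . Y: number of common 1s
    return sum(a & b for a, b in zip(x, y))

def art1(data, rho=0.5, beta=1.0):
    """A sketch of steps 1-10; class prototypes are created on demand."""
    n = len(data[0])
    protos = []
    changed = True
    while changed:                                    # Step 10
        changed = False
        for T in data:
            order = sorted(range(len(protos)),        # Step 4: rank candidates
                           key=lambda i: -dot(protos[i], T) / (beta + norm(protos[i])))
            for i in order:
                C = protos[i]
                if dot(C, T) / (beta + norm(C)) <= norm(T) / (beta + n):
                    continue                          # Step 5 fails: not close
                if dot(C, T) / norm(T) >= rho:        # Step 7: vigilance test
                    new = [a & b for a, b in zip(C, T)]
                    if new != C:                      # Step 8: shrink prototype
                        protos[i] = new
                        changed = True
                    break
            else:
                protos.append(list(T))                # Step 6: new class
                changed = True
    return protos

protos = art1([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]])
```

On this toy set the first two patterns resonate with one prototype and the third spawns a second, so two stable classes remain.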

2.4.4 ART2
This is a variation of ART1, with the following differences:

1. Data are continuous rather than binary.
2. The input data are preprocessed before being passed to the network: the input vector is normalized, then all elements of the result that are below a predefined value are set to zero, and the vector is normalized again. This process is used for noise cancellation.
3. When a class prototype is found for a datum, the prototype vector is moved fractionally toward the selected datum. As a result, contrary to the operation of ART1, the weights move smoothly toward a new datum. The main reason for this modification is to 'memorize' previously learnt rules.

2.5 Learning in Multiple Layer Models


As mentioned earlier, multi-layer Neural Networks consist of several concatenated single-layer networks [1,24–26]. The inner layers, known as hidden layers, may have different numbers of inputs and outputs. Because of the added complexity, the training process becomes more involved. This section presents two of the most popular multi-layer neural network models.

2.5.1 The Back Propagation Algorithm


The back-propagation algorithm is one of the most powerful and reliable techniques for adjusting network weights. The main idea of this approach is to use the gradient information of a cost function to modify the network's weights.
Using this approach to train multi-layer networks differs somewhat from the single-layer case. In general, multi-layer networks are much harder to train than single-layer ones; convergence is much slower and more sensitive to errors.




FIGURE 2.13 A single hidden layer network.

In this approach, an input is presented to the network and allowed to propagate "forward" through the network, and the output is calculated. The output is then compared to a "desired" output (from the training set) and an error is calculated. This error is then propagated "backward" into the network, and the weights are updated accordingly. To simplify the description of this algorithm, consider a network with a single hidden layer (and two layers of weights), given in Figure 2.13.
In relation to the above network, the following definitions apply. Of course, the same definitions can
be easily extended to larger networks.

T_i, R_i for i = 1, . . . , L: the training set of inputs and outputs, respectively.
N, S, M: the sizes of the input, hidden, and output layers, respectively.
W^1: network weights from the input layer to the hidden layer.
W^2: network weights from the hidden layer to the output layer.
X, Z, Y: the network input, the output of the hidden layer, and the network output, respectively.
F^1(·): array of network functions for the hidden layer.
F^2(·): array of network functions for the output layer.

It is important to note that, in such a network, different combinations of weights might produce the same input/output relationship. However, this is not crucial as long as the network is able to "learn" the association. As a result, the network weights may converge to different sets of values depending on the order of the training data and the training algorithm used, although the stability of these solutions may differ.
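A minimal sketch of back propagation for the single-hidden-layer network of Figure 2.13 is given below, assuming sigmoid units, a squared-error cost, and biases folded in as constant inputs (all illustrative choices, not the only possibilities):

```python
import math, random

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def forward(W1, W2, x):
    """Two weight layers as in Figure 2.13; biases are folded in by
    appending a constant 1 to each layer's input."""
    xb = x + [1.0]
    z = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in W1]
    zb = z + [1.0]
    y = [sigmoid(sum(w * v for w, v in zip(row, zb))) for row in W2]
    return z, y

def backprop_step(W1, W2, x, r, eta=0.5):
    """One forward pass, then propagate the output error backward."""
    z, y = forward(W1, W2, x)
    xb, zb = x + [1.0], z + [1.0]
    # Output-layer deltas for E = sum (y - r)^2 with sigmoid units
    # (the constant factor 2 is absorbed into eta).
    d2 = [(yk - rk) * yk * (1 - yk) for yk, rk in zip(y, r)]
    # Hidden-layer deltas, obtained through the layer-2 weights.
    d1 = [zj * (1 - zj) * sum(d2[k] * W2[k][j] for k in range(len(W2)))
          for j, zj in enumerate(z)]
    for k, row in enumerate(W2):
        for j in range(len(row)):
            row[j] -= eta * d2[k] * zb[j]
    for j, row in enumerate(W1):
        for i in range(len(row)):
            row[i] -= eta * d1[j] * xb[i]

rng = random.Random(0)
W1 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # 2 inputs + bias
W2 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(1)]  # 2 hidden + bias
x, r = [1.0, 0.0], [1.0]
err_before = (forward(W1, W2, x)[1][0] - r[0]) ** 2
for _ in range(50):
    backprop_step(W1, W2, x, r)
err_after = (forward(W1, W2, x)[1][0] - r[0]) ** 2
```

Each step moves the weights down the error gradient, so the squared error on the training pattern shrinks as the iterations accumulate.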

2.5.2 Radial Basis Functions


The Radial Basis Function (RBF) Neural Network is another popular multi-layer neural network [27–31]. The RBF network consists of two layers: one hidden layer and one output layer. In this network, the hidden layer is implemented by radial activation functions, while the output layer is simply a weighted sum of the hidden layer outputs.
The RBF neural network is able to model complex mappings that Perceptron Neural Networks can only accomplish by means of multiple hidden layers. The outstanding characteristics of this network make it applicable to a variety of applications, such as function interpolation [32,33], chaotic time series modeling [34,35], system identification [36–38], control systems [39,40], channel equalization [41–43], speech recognition [44,45], image restoration [46,47], motion estimation [48], pattern classification [49], and data fusion [50].
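A forward pass through such a network can be sketched as follows, assuming Gaussian radial units (a common but not the only choice) and illustrative centers, widths, and weights:

```python
import math

def rbf_forward(centers, widths, weights, x):
    """Hidden layer of Gaussian radial units, then a weighted sum:
    y = sum_j w_j * exp(-||x - c_j||^2 / (2 * sigma_j^2))."""
    hidden = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                       / (2.0 * s * s))
              for c, s in zip(centers, widths)]
    return sum(w * h for w, h in zip(weights, hidden))

# Two Gaussian units on a 1-D input (centers, widths, and weights illustrative).
y = rbf_forward(centers=[[0.0], [2.0]], widths=[1.0, 1.0],
                weights=[1.0, -1.0], x=[0.0])
```

Each hidden unit responds most strongly to inputs near its own center, and the output layer only mixes these localized responses linearly, which is what makes the hidden layer "radial."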

© 2006 by Taylor & Francis Group, LLC

CHAPMAN: “C4754_C002” — 2005/8/17 — 18:12 — page 33 — #13


Another Random Scribd Document
with Unrelated Content
Gutenberg” appears, or with which the phrase “Project
Gutenberg” is associated) is accessed, displayed, performed,
viewed, copied or distributed:

This eBook is for the use of anyone anywhere in the United


States and most other parts of the world at no cost and
with almost no restrictions whatsoever. You may copy it,
give it away or re-use it under the terms of the Project
Gutenberg License included with this eBook or online at
www.gutenberg.org. If you are not located in the United
States, you will have to check the laws of the country
where you are located before using this eBook.

1.E.2. If an individual Project Gutenberg™ electronic work is


derived from texts not protected by U.S. copyright law (does not
contain a notice indicating that it is posted with permission of
the copyright holder), the work can be copied and distributed to
anyone in the United States without paying any fees or charges.
If you are redistributing or providing access to a work with the
phrase “Project Gutenberg” associated with or appearing on the
work, you must comply either with the requirements of
paragraphs 1.E.1 through 1.E.7 or obtain permission for the use
of the work and the Project Gutenberg™ trademark as set forth
in paragraphs 1.E.8 or 1.E.9.

1.E.3. If an individual Project Gutenberg™ electronic work is


posted with the permission of the copyright holder, your use and
distribution must comply with both paragraphs 1.E.1 through
1.E.7 and any additional terms imposed by the copyright holder.
Additional terms will be linked to the Project Gutenberg™
License for all works posted with the permission of the copyright
holder found at the beginning of this work.

1.E.4. Do not unlink or detach or remove the full Project


Gutenberg™ License terms from this work, or any files
containing a part of this work or any other work associated with
Project Gutenberg™.

1.E.5. Do not copy, display, perform, distribute or redistribute


this electronic work, or any part of this electronic work, without
prominently displaying the sentence set forth in paragraph 1.E.1
with active links or immediate access to the full terms of the
Project Gutenberg™ License.

1.E.6. You may convert to and distribute this work in any binary,
compressed, marked up, nonproprietary or proprietary form,
including any word processing or hypertext form. However, if
you provide access to or distribute copies of a Project
Gutenberg™ work in a format other than “Plain Vanilla ASCII” or
other format used in the official version posted on the official
Project Gutenberg™ website (www.gutenberg.org), you must,
at no additional cost, fee or expense to the user, provide a copy,
a means of exporting a copy, or a means of obtaining a copy
upon request, of the work in its original “Plain Vanilla ASCII” or
other form. Any alternate format must include the full Project
Gutenberg™ License as specified in paragraph 1.E.1.

1.E.7. Do not charge a fee for access to, viewing, displaying,


performing, copying or distributing any Project Gutenberg™
works unless you comply with paragraph 1.E.8 or 1.E.9.

1.E.8. You may charge a reasonable fee for copies of or


providing access to or distributing Project Gutenberg™
electronic works provided that:

• You pay a royalty fee of 20% of the gross profits you derive
from the use of Project Gutenberg™ works calculated using the
method you already use to calculate your applicable taxes. The
fee is owed to the owner of the Project Gutenberg™ trademark,
but he has agreed to donate royalties under this paragraph to
the Project Gutenberg Literary Archive Foundation. Royalty
payments must be paid within 60 days following each date on
which you prepare (or are legally required to prepare) your
periodic tax returns. Royalty payments should be clearly marked
as such and sent to the Project Gutenberg Literary Archive
Foundation at the address specified in Section 4, “Information
about donations to the Project Gutenberg Literary Archive
Foundation.”

• You provide a full refund of any money paid by a user who


notifies you in writing (or by e-mail) within 30 days of receipt
that s/he does not agree to the terms of the full Project
Gutenberg™ License. You must require such a user to return or
destroy all copies of the works possessed in a physical medium
and discontinue all use of and all access to other copies of
Project Gutenberg™ works.

• You provide, in accordance with paragraph 1.F.3, a full refund of


any money paid for a work or a replacement copy, if a defect in
the electronic work is discovered and reported to you within 90
days of receipt of the work.

• You comply with all other terms of this agreement for free
distribution of Project Gutenberg™ works.

1.E.9. If you wish to charge a fee or distribute a Project


Gutenberg™ electronic work or group of works on different
terms than are set forth in this agreement, you must obtain
permission in writing from the Project Gutenberg Literary
Archive Foundation, the manager of the Project Gutenberg™
trademark. Contact the Foundation as set forth in Section 3
below.

1.F.

1.F.1. Project Gutenberg volunteers and employees expend


considerable effort to identify, do copyright research on,
transcribe and proofread works not protected by U.S. copyright
law in creating the Project Gutenberg™ collection. Despite these
efforts, Project Gutenberg™ electronic works, and the medium
on which they may be stored, may contain “Defects,” such as,
but not limited to, incomplete, inaccurate or corrupt data,
transcription errors, a copyright or other intellectual property
infringement, a defective or damaged disk or other medium, a
computer virus, or computer codes that damage or cannot be
read by your equipment.

1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except


for the “Right of Replacement or Refund” described in
paragraph 1.F.3, the Project Gutenberg Literary Archive
Foundation, the owner of the Project Gutenberg™ trademark,
and any other party distributing a Project Gutenberg™ electronic
work under this agreement, disclaim all liability to you for
damages, costs and expenses, including legal fees. YOU AGREE
THAT YOU HAVE NO REMEDIES FOR NEGLIGENCE, STRICT
LIABILITY, BREACH OF WARRANTY OR BREACH OF CONTRACT
EXCEPT THOSE PROVIDED IN PARAGRAPH 1.F.3. YOU AGREE
THAT THE FOUNDATION, THE TRADEMARK OWNER, AND ANY
DISTRIBUTOR UNDER THIS AGREEMENT WILL NOT BE LIABLE
TO YOU FOR ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL,
PUNITIVE OR INCIDENTAL DAMAGES EVEN IF YOU GIVE
NOTICE OF THE POSSIBILITY OF SUCH DAMAGE.

1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you


discover a defect in this electronic work within 90 days of
receiving it, you can receive a refund of the money (if any) you
paid for it by sending a written explanation to the person you
received the work from. If you received the work on a physical
medium, you must return the medium with your written
explanation. The person or entity that provided you with the
defective work may elect to provide a replacement copy in lieu
of a refund. If you received the work electronically, the person
or entity providing it to you may choose to give you a second
opportunity to receive the work electronically in lieu of a refund.
If the second copy is also defective, you may demand a refund
in writing without further opportunities to fix the problem.

1.F.4. Except for the limited right of replacement or refund set


forth in paragraph 1.F.3, this work is provided to you ‘AS-IS’,
WITH NO OTHER WARRANTIES OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR ANY PURPOSE.

1.F.5. Some states do not allow disclaimers of certain implied


warranties or the exclusion or limitation of certain types of
damages. If any disclaimer or limitation set forth in this
agreement violates the law of the state applicable to this
agreement, the agreement shall be interpreted to make the
maximum disclaimer or limitation permitted by the applicable
state law. The invalidity or unenforceability of any provision of
this agreement shall not void the remaining provisions.

1.F.6. INDEMNITY - You agree to indemnify and hold the


Foundation, the trademark owner, any agent or employee of the
Foundation, anyone providing copies of Project Gutenberg™
electronic works in accordance with this agreement, and any
volunteers associated with the production, promotion and
distribution of Project Gutenberg™ electronic works, harmless
from all liability, costs and expenses, including legal fees, that
arise directly or indirectly from any of the following which you
do or cause to occur: (a) distribution of this or any Project
Gutenberg™ work, (b) alteration, modification, or additions or
deletions to any Project Gutenberg™ work, and (c) any Defect
you cause.

Section 2. Information about the Mission


of Project Gutenberg™
Project Gutenberg™ is synonymous with the free distribution of
electronic works in formats readable by the widest variety of
computers including obsolete, old, middle-aged and new
computers. It exists because of the efforts of hundreds of
volunteers and donations from people in all walks of life.

Volunteers and financial support to provide volunteers with the


assistance they need are critical to reaching Project
Gutenberg™’s goals and ensuring that the Project Gutenberg™
collection will remain freely available for generations to come. In
2001, the Project Gutenberg Literary Archive Foundation was
created to provide a secure and permanent future for Project
Gutenberg™ and future generations. To learn more about the
Project Gutenberg Literary Archive Foundation and how your
efforts and donations can help, see Sections 3 and 4 and the
Foundation information page at www.gutenberg.org.

Section 3. Information about the Project


Gutenberg Literary Archive Foundation
The Project Gutenberg Literary Archive Foundation is a non-
profit 501(c)(3) educational corporation organized under the
laws of the state of Mississippi and granted tax exempt status
by the Internal Revenue Service. The Foundation’s EIN or
federal tax identification number is 64-6221541. Contributions
to the Project Gutenberg Literary Archive Foundation are tax
deductible to the full extent permitted by U.S. federal laws and
your state’s laws.

The Foundation’s business office is located at 809 North 1500
West, Salt Lake City, UT 84116, (801) 596-1887. Email contact
links and up to date contact information can be found at the
Foundation’s website and official page at
www.gutenberg.org/contact

Section 4. Information about Donations to the Project Gutenberg Literary Archive Foundation

Project Gutenberg™ depends upon and cannot survive without
widespread public support and donations to carry out its mission
of increasing the number of public domain and licensed works
that can be freely distributed in machine-readable form
accessible by the widest array of equipment including outdated
equipment. Many small donations ($1 to $5,000) are particularly
important to maintaining tax exempt status with the IRS.

The Foundation is committed to complying with the laws
regulating charities and charitable donations in all 50 states of
the United States. Compliance requirements are not uniform
and it takes a considerable effort, much paperwork and many
fees to meet and keep up with these requirements. We do not
solicit donations in locations where we have not received written
confirmation of compliance. To SEND DONATIONS or determine
the status of compliance for any particular state visit
www.gutenberg.org/donate.

While we cannot and do not solicit contributions from states
where we have not met the solicitation requirements, we know
of no prohibition against accepting unsolicited donations from
donors in such states who approach us with offers to donate.

International donations are gratefully accepted, but we cannot
make any statements concerning tax treatment of donations
received from outside the United States. U.S. laws alone swamp
our small staff.

Please check the Project Gutenberg web pages for current
donation methods and addresses. Donations are accepted in a
number of other ways including checks, online payments and
credit card donations. To donate, please visit:
www.gutenberg.org/donate.

Section 5. General Information About Project Gutenberg™ electronic works

Professor Michael S. Hart was the originator of the Project
Gutenberg™ concept of a library of electronic works that could
be freely shared with anyone. For forty years, he produced and
distributed Project Gutenberg™ eBooks with only a loose
network of volunteer support.

Project Gutenberg™ eBooks are often created from several
printed editions, all of which are confirmed as not protected by
copyright in the U.S. unless a copyright notice is included. Thus,
we do not necessarily keep eBooks in compliance with any
particular paper edition.

Most people start at our website which has the main PG search
facility: www.gutenberg.org.

This website includes information about Project Gutenberg™,
including how to make donations to the Project Gutenberg
Literary Archive Foundation, how to help produce our new
eBooks, and how to subscribe to our email newsletter to hear
about new eBooks.