Metaheuristics For Machine Learning Algorithms And Applications — Kanak Kalita

The document presents the book 'Metaheuristics for Machine Learning: Algorithms and Applications', edited by Kanak Kalita, which covers various metaheuristic algorithms and their applications in machine learning. It includes a comprehensive review of hyperparameter optimization techniques, computer-aided diagnosis systems, and various methodologies for enhancing machine learning applications, along with insights into future directions and challenges in the field.


Table of Contents
1. Cover
2. Table of Contents
3. Series Page
4. Title Page
5. Copyright Page
6. Foreword
7. Preface
8. 1 Metaheuristic Algorithms and Their Applications in
Different Fields: A Comprehensive Review
1. 1.1 Introduction
2. 1.2 Types of Metaheuristic Algorithms
3. 1.3 Application of Metaheuristic Algorithms
4. 1.4 Future Direction
5. 1.5 Conclusion
6. References
9. 2 A Comprehensive Review of Metaheuristics for
Hyperparameter Optimization in Machine Learning
1. 2.1 Introduction
2. 2.2 Fundamentals of Hyperparameter Optimization
3. 2.3 Overview of Metaheuristic Optimization Techniques
4. 2.4 Population-Based Metaheuristic Techniques
5. 2.5 Single Solution-Based Metaheuristic Techniques
6. 2.6 Hybrid Metaheuristic Techniques
7. 2.7 Metaheuristics in Bayesian Optimization
8. 2.8 Metaheuristics in Neural Architecture Search
9. 2.9 Comparison of Metaheuristic Techniques for
Hyperparameter Optimization
10. 2.10 Applications of Metaheuristics in Machine Learning
11. 2.11 Future Directions and Open Challenges
12. 2.12 Conclusion
13. References
10. 3 A Survey of Computer-Aided Diagnosis Systems for Breast
Cancer Detection
1. 3.1 Introduction
2. 3.2 Procedure for Research Survey
3. 3.3 Imaging Modalities and Their Datasets
4. 3.4 Research Survey
5. 3.5 Conclusion
6. 3.6 Acknowledgment
7. References
11. 4 Enhancing Feature Selection Through Metaheuristic Hybrid
Cuckoo Search and Harris Hawks Optimization for Cancer
Classification
1. 4.1 Introduction
2. 4.2 Related Work
3. 4.3 Proposed Methodology
4. 4.4 Experimental Setup
5. 4.5 Results and Discussion
6. 4.6 Conclusion
7. References
12. 5 Anomaly Identification in Surveillance Video Using
Regressive Bidirectional LSTM with Hyperparameter
Optimization
1. 5.1 Introduction
2. 5.2 Literature Survey
3. 5.3 Proposed Methodology
4. 5.4 Result and Discussion
5. 5.5 Conclusion
6. References
13. 6 Ensemble Machine Learning-Based Botnet Attack Detection
for IoT Applications
1. 6.1 Introduction
2. 6.2 Literature Survey
3. 6.3 Proposed System
4. 6.4 Results and Discussion
5. 6.5 Conclusion
6. References
14. 7 Machine Learning-Based Intrusion Detection System with
Tuned Spider Monkey Optimization for Wireless Sensor
Networks
1. 7.1 Introduction
2. 7.2 Literature Review
3. 7.3 Proposed Methodology
4. 7.4 Result and Discussion
5. 7.5 Conclusion
6. References
15. 8 Security Enhancement in IoMT‑Assisted Smart Healthcare
System Using the Machine Learning Approach
1. 8.1 Introduction
2. 8.2 Literature Review
3. 8.3 Proposed Methodology
4. 8.4 Conclusion
5. References
16. 9 Building Sustainable Communication: A Game-Theoretic
Approach in 5G and 6G Cellular Networks
1. 9.1 Introduction
2. 9.2 Related Works
3. 9.3 Methodology
4. 9.4 Result
5. 9.5 Conclusion
6. References
17. 10 Autonomous Vehicle Optimization: Striking a Balance
Between Cost-Effectiveness and Sustainability
1. 10.1 Introduction
2. 10.2 Methods
3. 10.3 Results
4. 10.4 Conclusions
5. References
18. 11 Adapting Underground Parking for the Future:
Sustainability and Shared Autonomous Vehicles
1. 11.1 Introduction
2. 11.2 Related Works
3. 11.3 Methodology
4. 11.4 Analysis
5. 11.5 Conclusion
6. References
19. 12 Big Data Analytics for a Sustainable Competitive Edge: An
Impact Assessment
1. 12.1 Introduction
2. 12.2 Related Works
3. 12.3 Hypothesis and Research Model
4. 12.4 Results
5. 12.5 Conclusion
6. References
20. 13 Sustainability and Technological Innovation in
Organizations: The Mediating Role of Green Practices
1. 13.1 Introduction
2. 13.2 Related Work
3. 13.3 Methodology
4. 13.4 Discussion
5. 13.5 Conclusions
6. References
21. 14 Optimal Cell Planning in Two Tier Heterogeneous
Network through Meta-Heuristic Algorithms
1. 14.1 Introduction
2. 14.2 System Model and Formulation of the Problem
3. 14.3 Result and Discussion
4. 14.4 Conclusion
5. References
22. 15 Soil Aggregate Stability Prediction Using a Hybrid Machine
Learning Algorithm
1. 15.1 Introduction
2. 15.2 Related Works
3. 15.3 Proposed Methodology
4. 15.4 Result and Discussion
5. 15.5 Conclusion
6. References
23. Index
24. Also of Interest
25. End User License Agreement

List of Tables
1. Chapter 1
1. Table 1.1 Strengths and weaknesses of metaheuristic
algorithms.
2. Chapter 2
1. Table 2.1 Breakdown of popular metaheuristics and their
I&D components [75]....
2. Table 2.2 Performance comparison of four different
metaheuristics based on a...
3. Table 2.3 Performance comparison of eight population-
based metaheuristics fo...
3. Chapter 3
1. Table 3.1 Summary of the medical jargon used.
2. Table 3.2 Advantages and disadvantages.
4. Chapter 4
1. Table 4.1 Information regarding the six cancer microarray
data.
2. Table 4.2 Parameter settings of the proposed algorithm.
3. Table 4.3 Accuracies of the proposed algorithm with the
mRMR, mRMR+CSA, and ...
4. Table 4.4 Accuracies of the proposed algorithm with the
mRMR, mRMR+CSA, and ...
5. Table 4.5 Accuracies of the proposed algorithm with the
mRMR, mRMR+CSA, and ...
6. Table 4.6 Comparison of the different published methods
with the proposed me...
5. Chapter 8
1. Table 8.1 Comparison of the accuracy.
2. Table 8.2 Comparison of the precision.
3. Table 8.3 Comparison of the sensitivity.
4. Table 8.4 Comparison of the specificity.
5. Table 8.5 Comparison of the security.
6. Chapter 10
1. Table 10.1 Example of demand and supply data from
expert interviews with the...
2. Table 10.2 Provides an analysis of the logistic network
situation.
7. Chapter 11
1. Table 11.1 Index of the driver, status, and response model
system.
2. Table 11.2 The DSR indexes’ weights and value
attributions.
3. Table 11.3 Rankings of function replacement for each UPS
type.
8. Chapter 12
1. Table 12.1 Results of validity and reliability tests.
2. Table 12.2 HTMT values.
3. Table 12.3 Examine the legitimacy of differences.
4. Table 12.4 The model’s fit outcomes.
9. Chapter 13
1. Table 13.1 Description of the companies.
2. Table 13.2 Presentation of illustrative information.
3. Table 13.3 Impact on the various aspects.
4. Table 13.4 Credibility, dependability, and relevance.
5. Table 13.5 Inferential statistics.
6. Table 13.6 Evaluation of interactions.
10. Chapter 14
1. Table 14.1 Parameter values.

List of Illustrations

1. Chapter 1
1. Figure 1.1 Flowchart of the genetic algorithm.
2. Figure 1.2 Flowchart of simulated annealing.
3. Figure 1.3 Flowchart of the particle swarm optimization.
4. Figure 1.4 Flowchart of the ant colony optimization.
2. Chapter 2
1. Figure 2.1 Tabu search for optimizing the tour cost for a
city plotted vs. ite...
2. Figure 2.2 A Gaussian process approximation of an
objective function being ite...
3. Figure 2.3 Convergence comparison of four metaheuristics
based on the first 10...
4. Figure 2.4 Best score convergence profiles vs. iterations for
eight renowned a...
5. Figure 2.5 Accuracy of metaheuristics for different ML
models [83].
3. Chapter 3
1. Figure 3.1 Pictorial representation of the imaging
modalities.
2. Figure 3.2 CNN architecture as illustrated by Mohamed et
al. in [29].
4. Chapter 4
1. Figure 4.1 The proposed research methodology.
2. Figure 4.2 Hybrid flowchart of the HHO and CSA.
3. Figure 4.3 Error comparison with the SVM classifier.
4. Figure 4.4 The variance observed in the proposed
algorithm (mRMR+CSAHHO) compa...
5. Figure 4.5 Error comparison with the KNN classifier.
6. Figure 4.6 The variance observed in the proposed
algorithm (mRMR+CSAHHO) compa...
7. Figure 4.7 Error comparison with the NB classifier.
8. Figure 4.8 The variance observed in the proposed
algorithm (mRMR+CSAHHO) compa...
5. Chapter 5
1. Figure 5.1 Schematic architecture of our proposed system.
2. Figure 5.2 Normal and abnormal clips from the
ShanghaiTech dataset.
3. Figure 5.3 Accuracy comparison between the suggested
and current techniques.
4. Figure 5.4 Precision comparison between the suggested
and current techniques....
5. Figure 5.5 Recall comparison between the suggested and
current techniques.
6. Figure 5.6 Error rate comparison between the suggested
and current techniques....
6. Chapter 6
1. Figure 6.1 The proposed methodology.
2. Figure 6.2 The dataset’s distribution.
3. Figure 6.3 Architecture of the ANN.
4. Figure 6.4 Results of accuracy.
5. Figure 6.5 Results of precision.
6. Figure 6.6 Results of recall.
7. Figure 6.7 Results of the F-measure.
7. Chapter 7
1. Figure 7.1 Flowchart of the proposed SVM-TSMO model.
2. Figure 7.2 The support vector machine.
3. Figure 7.3 Accuracy of the existing and proposed methods.
4. Figure 7.4 Precision of the existing and proposed methods.
5. Figure 7.5 Recall % of the existing and proposed methods.
6. Figure 7.6 F1-measure of the existing and proposed
methods.
8. Chapter 8
1. Figure 8.1 The IoMT-smart healthcare system.
2. Figure 8.2 A systematic diagram of security enhancement
in the IoMT using mach...
3. Figure 8.3 Diagrammatic representation of the proposed
method.
4. Figure 8.4 The linear SVM model.
5. Figure 8.5 The MLPSO algorithm flowchart.
9. Chapter 9
1. Figure 9.1 IDO based on a game model.
2. Figure 9.2 Spectrum use ratio.
3. Figure 9.3 Offload ratio.
4. Figure 9.4 Throughput analysis.
5. Figure 9.5 Response delay analysis.
6. Figure 9.6 Energy consumption analysis.
10. Chapter 10
1. Figure 10.1 Framework for logistic clusters that limits
supply chain managemen...
2. Figure 10.2 Illustrates the assumptions of the logistic
network model.
3. Figure 10.3 Distribution model simulations with simulated
annealing.
11. Chapter 11
1. Figure 11.1 Weight matrix of attributes.
2. Figure 11.2 Ranking of the factors.
3. Figure 11.3 UPS-type characteristics.
4. Figure 11.4 Renewal time outcomes.
5. Figure 11.5 Analyzing renewal timing and UPS properties.
6. Figure 11.6 Distribution of renewal times.
12. Chapter 12
1. Figure 12.1 Suggested research design.
2. Figure 12.2 Reliability and validity of the CA.
3. Figure 12.3 Reliability and validity of the CR.
4. Figure 12.4 Reliability and validity of the AVE.
13. Chapter 13
1. Figure 13.1 Method of measuring model.
2. Figure 13.2 Model of structure.
14. Chapter 14
1. Figure 14.1 System model.
2. Figure 14.2 Flowchart of the proposed model.
3. Figure 14.3 Optimal user association to BSs with data suit-
1.
4. Figure 14.4 Optimal user association to BSs with data suit-
2.
5. Figure 14.5 Network utility maximization graph.
15. Chapter 15
1. Figure 15.1 Block diagram of soil aggregation.
2. Figure 15.2 C5.0’s algorithm flow.
3. Figure 15.3 Comparative analysis of the RMSE.
4. Figure 15.4 Comparative analysis of the R2.
5. Figure 15.5 Comparative analysis of the nRMSE.
6. Figure 15.6 Comparative analysis of the MAE.
Scrivener Publishing
100 Cummings Center, Suite 541J
Beverly, MA 01915-6106

Artificial Intelligence and Soft Computing for Industrial Transformation

Series Editor: Dr S. Balamurugan ([email protected])

This book series aims to provide comprehensive handbooks and reference books for the benefit of scientists, research scholars, students, and industry professionals working toward next-generation industrial transformation.

Publishers at Scrivener
Martin Scrivener ([email protected])
Phillip Carmical ([email protected])
Metaheuristics for Machine Learning: Algorithms and Applications

Edited by

Kanak Kalita
Vel Tech University, Avadi, India

Narayanan Ganesh
Vellore Institute of Technology, Chennai, India

and

S. Balamurugan
Intelligent Research Consultancy Services, Coimbatore,
Tamilnadu, India
This edition first published 2024 by John Wiley & Sons, Inc., 111 River Street,
Hoboken, NJ 07030, USA and Scrivener Publishing LLC, 100 Cummings Center, Suite
541J, Beverly, MA 01915, USA
© 2024 Scrivener Publishing LLC
For more information about Scrivener publications please visit
www.scrivenerpublishing.com.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

Wiley Global Headquarters


111 River Street, Hoboken, NJ 07030, USA

For details of our global editorial offices, customer services, and more information
about Wiley products visit us at www.wiley.com.

Limit of Liability/Disclaimer of Warranty


While the publisher and authors have used their best efforts in preparing this work,
they make no representations or warranties with respect to the accuracy or
completeness of the contents of this work and specifically disclaim all warranties,
including without limitation any implied warranties of merchantability or fitness for
a particular purpose. No warranty may be created or extended by sales
representatives, written sales materials, or promotional statements for this work.
The fact that an organization, website, or product is referred to in this work as a
citation and/or potential source of further information does not mean that the
publisher and authors endorse the information or services the organization, website,
or product may provide or recommendations it may make. This work is sold with the
understanding that the publisher is not engaged in rendering professional services.
The advice and strategies contained herein may not be suitable for your situation.
You should consult with a specialist where appropriate. Neither the publisher nor
authors shall be liable for any loss of profit or any other commercial damages,
including but not limited to special, incidental, consequential, or other damages.
Further, readers should be aware that websites listed in this work may have changed
or disappeared between when this work was written and when it is read.

Library of Congress Cataloging-in-Publication Data

ISBN 978-1-394-23392-2

Cover image: Pixabay.Com


Cover design by Russell Richardson
Foreword
In the dynamic landscape of today’s technological revolution,
machine learning and its applications span multiple domains,
offering both opportunities and challenges. As we navigate this
terrain, the significance of data has shifted; it has transformed
from merely passive entities to active drivers influencing
decisions, sculpting perceptions, and determining collective
trajectories. This book serves as a pivotal reference that sheds light upon complex computational arenas and provides clarity to those navigating this domain.

This book is more than an aggregation of knowledge. It epitomizes the expertise and adaptability of current computational researchers and accentuates the potential of metaheuristics. For those unfamiliar with the term, envision metaheuristics as high-level strategists that steer a multitude of heuristic methodologies toward their zenith. They offer the requisite tools to address complex challenges where conventional algorithms might be inadequate.

Throughout the book, you will find a wide range of applications and potential uses of metaheuristics that span across domains from machine learning to the cutting-edge fields of sustainability, communication, and networking. It is fascinating to note that the algorithms aren’t just theoretical entities; they resonate with pressing real-world challenges. For instance, consider the pivotal role of metaheuristics in life-saving applications like breast cancer detection, or in ensuring security through anomaly identification in surveillance systems and botnet attack detection.

Moreover, as we delve deeper, we witness the subtle yet profound synergies between metaheuristics and contemporary technological innovations. The chapters dedicated to the advancements in 5G and 6G communication, and the future of autonomous vehicles, are prime examples. These sections underline the intricate balance and interdependence of the challenges we face today and the innovative solutions metaheuristics can offer.

For researchers who dedicate their lives to exploration, practitioners at the frontline of technological innovations, and students who look with hopeful eyes toward the future, this book will be a pivotal tool. Let it guide you, as it did for me, through the mesmerizing world of algorithms and their real-world applications.

Diego Alberto Oliva


Universidad de Guadalajara, Mexico
Preface
While compiling this book, we were guided by a singular vision:
to sculpt a resource that seamlessly melds the theoretical
intricacies of metaheuristics with their myriad practical
applications. Our aspiration was to produce a reference that not
only delves deeply into the subject, but is also accessible to
readers across spectra, offering a holistic understanding that is
both profound and practical.

With every chapter, we strived to weave a narrative, oscillating between the vast expanse of the topic and the intricate minutiae that define it. The book commences with a foundational introduction, leading readers through the labyrinthine world of metaheuristics. Going forward, the narrative transitions, diving deeper into their multifaceted applications—spanning from the dynamic domain of machine learning to the ever-evolving spheres of technology, sustainability, and the intricate web of communication networks.

Metaheuristics present a promising solution to many formidable optimization conundrums. Yet, their true allure comes not just from their theoretical promise but their practical prowess. This book attempts to unveil this allure, transforming nebulous algorithms into tangible entities with real-world resonances—whether in the life-saving realm of healthcare or the cutting-edge world of vehicular communications.

We extend our endless gratitude to the brilliant authors, reviewers, and countless others whose relentless dedication, insight, and expertise are evident in these pages. The editorial journey has been one of profound learning and growth for all involved. With each chapter, we have gleaned new perspectives, and we hope this book becomes a wellspring of knowledge, inspiration, and introspection for both scholars and professionals.

In closing, we offer our sincere thanks to the Scrivener and Wiley publishing teams for their help with this book. We entreat you to immerse your intellect and curiosity in the mesmerizing world of metaheuristics and their applications. Here’s to an enlightening reading journey ahead!

Kanak Kalita

Narayanan Ganesh

S. Balamurugan
1
Metaheuristic Algorithms and
Their Applications in Different
Fields: A Comprehensive Review
Abrar Yaqoob1*, Navneet Kumar Verma2 and Rabia Musheer Aziz1

1School of Advanced Science and Language, VIT Bhopal University, Kothrikalan, Sehore, India

2State Planning Institute (New Division), Planning Department, Lucknow, Uttar Pradesh, India

Abstract

Metaheuristic algorithms are heuristic optimization approaches that provide a potent method for resolving challenging optimization problems. They offer an effective technique to explore huge solution spaces and identify optimal or near-optimal solutions, and they are iterative and often inspired by natural or social processes. Well-known for their success in handling difficult optimization problems, they are a potent tool for problem-solving. This study provides comprehensive information on metaheuristic algorithms and the many areas in which they are used.
Twenty well-known metaheuristic algorithms, such as the tabu
search, particle swarm optimization, ant colony optimization,
genetic algorithms, simulated annealing, and harmony search,
are included in the article. The article extensively explores the
applications of these algorithms in diverse domains such as
engineering, finance, logistics, and computer science. It
underscores particular instances where metaheuristic
algorithms have found utility, such as optimizing structural
design, controlling dynamic systems, enhancing manufacturing
processes, managing supply chains, and addressing problems in
artificial intelligence, data mining, and software engineering.
The paper provides a thorough insight into the versatile
deployment of metaheuristic algorithms across different
sectors, highlighting their capacity to tackle complex
optimization problems across a wide range of real-world
scenarios.

Keywords: Optimization, metaheuristics, machine learning, swarm intelligence

1.1 Introduction
Metaheuristics represent a category of optimization methods
widely employed to tackle intricate challenges in diverse
domains such as engineering, economics, computer science,
and operations research. These adaptable techniques are
designed to locate favorable solutions by exploring an extensive
array of possibilities and avoiding stagnation in suboptimal
outcomes [1]. The roots and advancement of metaheuristics can be traced back to the late 1940s, when George Dantzig introduced the simplex method for linear programming [2].
This innovative technique marked a pivotal point in
optimization and paved the way for the emergence of
subsequent optimization algorithms. Nonetheless, the simplex
method’s applicability is confined to linear programming issues
and does not extend to nonlinear problems. In the 1960s, John Holland devised the genetic algorithm, drawing inspiration from concepts of natural selection and evolution [3].
The genetic algorithm assembles a set of potential solutions and
iteratively enhances this set through genetic operations like
mutation, crossover, and selection [4]. The genetic algorithm
was a major milestone in the development of metaheuristics
and opened up new possibilities for resolving difficult
optimization issues. During the 1980s and 1990s, the field of
metaheuristics experienced significant expansion and the
emergence of numerous novel algorithms. These techniques,
which include simulated annealing (SA), tabu search (TS), ant
colony optimization (ACO), particle swarm optimization (PSO),
and differential evolution (DE), were created expressly to deal
with a variety of optimization issues, drawing inspiration from processes such as metallurgical annealing, adaptive memory search, swarm intelligence, and biological evolution [5].

The term “meta-” in metaheuristic algorithms indicates a higher level of operation beyond simple heuristics, leading to enhanced performance. These algorithms balance local search
enhanced performance. These algorithms balance local search
and global exploration by using randomness to provide a range
of solutions. Despite the fact that metaheuristics are frequently
employed, there is not a single definition of heuristics and
metaheuristics in academic literature, and some academics
even use the terms synonymously. However, it is currently
fashionable to classify as metaheuristics all algorithms of a
stochastic nature that utilize randomness and comprehensive
exploration across the entire system. Metaheuristic algorithms
are ideally suited for global optimization and nonlinear
modeling because randomization is a useful method for
switching from local to global search. As a result, almost all
metaheuristic algorithms can be used to solve issues involving
nonlinear optimization at the global level [6]. In recent years,
the study of metaheuristics has developed over time and new
algorithms are being developed that combine different concepts
and techniques from various fields such as machine learning,
deep learning, and data science. The development and
evolution of metaheuristics have made significant contributions
to solving complex optimization problems and have led to the
development of powerful tools for decision-making in various
domains [7]. In order to find solutions in a huge search area,
metaheuristic algorithms are founded on the idea of mimicking
the behaviors of natural or artificial systems. These algorithms
are particularly valuable for tackling problems that are
challenging or impossible to solve using traditional
optimization methods. Typically, metaheuristic algorithms
involve iterations and a series of steps that modify a potential
solution until an acceptable one is discovered. Unlike other
optimization techniques that may become stuck in local optimal
solutions, metaheuristic algorithms are designed to explore the
entire search space. They also exhibit resilience to noise or
uncertainty in the optimization problem. The adaptability and
plasticity of metaheuristic algorithms are two of their main
features. They can be modified to take into account certain
limitations or goals of the current task and are applicable to a
wide variety of optimization situations. However, for complex
problems with extensive search spaces, these algorithms may
converge slowly toward an optimal solution, and there is no
guarantee that they will find the global optimum. Metaheuristic
algorithms find extensive application in various fields including
engineering, finance, logistics, and computer science. They have
been successfully employed in solving diverse problems such as
optimizing design, control, and manufacturing processes,
portfolio selection, and risk management strategies [8].
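To make the iterative improvement loop described above concrete, the following is a bare-bones stochastic local-search sketch. It is a hypothetical illustration: the function name, step size, and greedy acceptance rule are our assumptions, and full metaheuristics extend this skeleton with mechanisms such as temperature schedules, memory, or populations to escape local optima.

```python
import random

def stochastic_local_search(cost, x0, iters=2000, step=0.5, seed=0):
    """Iteratively perturb a candidate solution, keeping any improvement.

    This is the basic loop most metaheuristics share: propose a modified
    solution, evaluate it, and decide whether to keep it.
    """
    rng = random.Random(seed)
    x, best = x0, cost(x0)
    for _ in range(iters):
        candidate = x + rng.uniform(-step, step)  # random modification
        c = cost(candidate)
        if c < best:  # greedy acceptance: keep only improvements
            x, best = candidate, c
    return x

# Usage: minimize a simple quadratic with its optimum at x = 2.
result = stochastic_local_search(lambda x: (x - 2.0) ** 2, x0=10.0)
print(round(result, 2))  # close to 2.0
```

Because this sketch accepts only improvements, it can stall in a local optimum on multimodal cost functions; the algorithms surveyed in the next section (genetic algorithms, simulated annealing, and others) differ precisely in how they inject controlled randomness to avoid that fate.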

1.2 Types of Metaheuristic Algorithms

We shall outline some of the most popular metaheuristic methods in this section.

1.2.1 Genetic Algorithms

Genetic algorithms (GAs) belong to a family of metaheuristic optimization techniques that draw inspiration from natural selection and genetics [9–11]. In order to find the optimal
solution for a particular issue, the core idea underlying the GA
is to mimic the evolutionary process. The genetic algorithm has
the capability to address challenges spanning various fields
such as biology, engineering, and finance [12–14]. In the
methodology of the GA, a potential solution is denoted as a
chromosome, or a collection of genes. Each gene within the
context of the problem signifies an individual variable, and its
value corresponds to the potential range of values that the
variable can take [15, 16]. Subsequently, these chromosomes
undergo genetic operations like mutation and crossover. This
process can give rise to a fresh population of potential
solutions, resulting in a novel set of potential outcomes [17–19].

The following are the major steps in the GA:

Initialization: The algorithm first initializes a set of potential solutions. Each solution is represented by a chromosome, a string of genes randomly generated based on the problem domain [20].

Evaluation: The suitability of each chromosome is assessed based on the objective function of the problem. The quality of the solution is evaluated by the fitness function, and the objective is to optimize the fitness function by either maximizing or minimizing it, depending on the particular problem [21].

Selection: Chromosomes that possess higher fitness values are chosen to form a fresh population of potential solutions. Various techniques, such as roulette wheel selection, tournament selection, and rank-based selection, are employed for the selection process [22].

Crossover: The selected chromosomes are combined through crossover to generate new offspring chromosomes. The crossover operation exchanges the genetic information from the parent chromosomes and is utilized to generate novel solutions [23].
Mutation: The offspring chromosomes are subjected to mutation, which introduces random changes to the genetic information. Mutation aids in preserving diversity within the population and preventing entrapment in local optima [24].

Replacement: The offspring chromosomes form a new population of potential solutions, which replaces the less fit members of the prior population.

Termination: The algorithm iterates through the
selection, crossover, mutation, and replacement phases until
a termination condition is satisfied, such as reaching a
predetermined maximum number of iterations, attaining a
desired fitness value, or exceeding a predetermined
computational time limit.
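The steps above can be sketched in a few lines of Python. This is a minimal, illustrative implementation only: the toy objective (maximizing the number of 1s in a bit string), the single-point crossover, the tournament selection, and all parameter values are assumptions made for the example, not choices prescribed by the text.

```python
import random

random.seed(0)

# Toy problem: maximize the number of 1s in a bit string (OneMax).
GENES, POP, GENERATIONS = 20, 30, 60
MUT_RATE = 1.0 / GENES

def fitness(chrom):
    return sum(chrom)

def tournament(pop, k=3):
    # Selection: best of k randomly sampled chromosomes.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # Single-point crossover of two parent chromosomes.
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(chrom):
    # Flip each gene with a small probability to preserve diversity.
    return [g ^ 1 if random.random() < MUT_RATE else g for g in chrom]

# Initialization: a random population of bit-string chromosomes.
population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

for _ in range(GENERATIONS):
    # Replacement: a full new generation built from selected parents.
    population = [
        mutate(crossover(tournament(population), tournament(population)))
        for _ in range(POP)
    ]

best = max(population, key=fitness)
print(fitness(best))  # should be close to GENES on this easy problem
```

On this simple problem a handful of generations suffices; real applications mainly differ in the chromosome encoding and the fitness function.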
Figure 1.1 Flowchart of the genetic algorithm.

The GA has several advantages, such as being capable of solving
complex issues, locating the global optimum, and being
applicable to various domains. However, the GA also has some
limitations, such as the need for a suitable fitness function, the
possibility of premature convergence, and the high
computational cost for complex problems. Figure 1.1 shows the
flowchart of the genetic algorithm.

1.2.2 Simulated Annealing

Simulated annealing is a probabilistic method for optimizing
complex multidimensional problems by seeking the global best
solution. It draws inspiration from the metallurgical technique
of annealing, which includes heating and gradually cooling a
metal to enhance its strength and resilience [25]. Similarly,
simulated annealing commences at an elevated temperature,
enabling the algorithm to extensively investigate a vast array of
possible solutions, and then slowly decreases the temperature
to narrow down the search to the most promising areas. SA
works by maintaining a current solution and repeatedly
making small changes to it in search of a better solution. At
each iteration, the algorithm calculates a cost function that
measures how good the current solution is. The cost function
can be any function that assigns a score to a potential solution,
such as a distance metric or a likelihood function. Subsequently,
the algorithm determines whether to embrace or disregard a
new solution by utilizing a probability distribution that relies
on the existing temperature and the disparity between the costs
of the current and new solutions [26]. High temperatures
increase SA’s propensity to embrace novel solutions even if they
have a higher cost than the current solution. This is because the
algorithm is still exploring the space of potential solutions and
needs to be open to new possibilities. As the temperature
decreases, SA becomes increasingly discriminating and admits
novel solutions solely if they surpass the existing solution. By
employing this approach, SA prevents itself from becoming
trapped in local peaks and eventually achieves convergence
toward the global peak [27]. SA offers a notable benefit by
effectively addressing non-convex optimization problems,
characterized by numerous local optima. By permitting the
acceptance of solutions with greater costs, SA can navigate
diverse areas within the solution space and prevent
entrapment in local optima. Moreover, SA boasts ease of
implementation and independence from cost function
gradients, rendering it suitable for scenarios where the cost
function lacks differentiability. However, SA does have some
limitations. It can be slow to converge, especially for large or
complex problems, and may require many iterations to find the
global optimum. SA’s effectiveness is also influenced by the
decision of cooling schedule, which determines how quickly the
temperature decreases. If the cooling schedule is too slow, the
algorithm may take too long to converge, while if it is too fast,
the algorithm may converge too quickly to a suboptimal
solution [28, 29].
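The accept/reject rule and the cooling schedule described above can be sketched as follows. This is a minimal example under stated assumptions: a toy one-dimensional cost function f(x) = x², uniform perturbations, and a geometric cooling schedule with arbitrary parameter values.

```python
import math
import random

random.seed(1)

# Toy cost function: f(x) = x^2, with its global minimum at x = 0.
def cost(x):
    return x * x

x = 10.0                               # current solution
T, T_min, alpha = 10.0, 1e-3, 0.95     # start temp, stop temp, cooling rate

while T > T_min:
    for _ in range(20):                # several candidate moves per temperature
        x_new = x + random.uniform(-1.0, 1.0)   # small random perturbation
        delta = cost(x_new) - cost(x)
        # Always accept improvements; accept worse moves with
        # probability exp(-delta / T), which shrinks as T cools.
        if delta < 0 or random.random() < math.exp(-delta / T):
            x = x_new
    T *= alpha                         # geometric cooling schedule

print(round(x, 3))  # ends near the global optimum at 0
```

Note how the acceptance probability makes the high-temperature phase an almost free random walk, while the low-temperature phase degenerates into greedy local search.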
Figure 1.2 Flowchart of simulated annealing.

To put it briefly, simulated annealing is a highly effective
optimization method capable of addressing intricate,
multidimensional problems with multiple local optima. It works
by exploring the solution space and gradually
narrowing down the search for the most promising regions.
While it has some limitations, SA is a helpful tool for a variety
of optimization issues in the real world. Figure 1.2 shows the
flowchart of simulated annealing.
1.2.3 Particle Swarm Optimization

Particle swarm optimization is a technique for optimization
that employs a population-based strategy to address a wide
range of optimization problems. First introduced by Kennedy
and Eberhart in 1995, this concept takes inspiration from the
coordinated movements observed in the flocking of birds and
the schooling of fish [30–32]. This algorithm emulates the social
dynamics exhibited by these creatures, where each member
learns from its own encounters and the experiences of its
nearby peers, with the aim of discovering the best possible
solution [33]. The PSO method begins by generating a
population of particles, each of which acts as a potential
solution to the optimization issue at hand. These particles,
which have both a location and a velocity vector, are randomly
distributed throughout the search space [34]. The location
vector represents the particle’s current solution, whereas the
velocity vector represents the particle’s moving direction and
magnitude inside the search space. Through iterative steps,
each particle’s location and velocity vectors undergo constant
modification and adjustment in the PSO algorithm, guided by its
own best solution encountered thus far and the solutions of its
neighboring particles [35]. Collaborative learning continues
until a predetermined stopping condition is met, such as when
the desired outcome is attained or the maximum number of
iterations has been reached. Compared to other optimization
algorithms, the PSO algorithm boasts various advantages,
including simplicity, rapid convergence, and robustness [36, 37].
PSO has found applications in diverse problem domains,
spanning function optimization, neural network training, image
processing, and feature selection. Nevertheless, the algorithm
does come with certain limitations. These include the risk of
premature convergence, where the algorithm may converge to
suboptimal solutions prematurely, and challenges in effectively
handling problems with high-dimensional spaces [38].

Figure 1.3 Flowchart of the particle swarm optimization.

In general, the particle swarm optimization algorithm is a
robust and effective optimization method capable of addressing
numerous practical optimization problems. Its simplicity and
intuitive approach make it an appealing choice compared to
more intricate optimization methods. Figure 1.3 shows the
flowchart of the particle swarm optimization.

1.2.4 Ant Colony Optimization

Ant colony optimization is a nature-inspired method that
addresses difficult optimization problems by mimicking the
behavior of ant colonies, specifically the way ants communicate
to discover the shortest path toward food sources. The
fundamental idea behind the ACO is to simulate the foraging
behavior of ants to solve optimization problems effectively. A
simulated group of ants is put on a graph representing the
problem space in the ACO. These ants navigate the graph by
selecting the next node to visit based on the pheromone trails
left behind by other ants. The strength of the pheromone trail
represents the quality of the solution that passed through that
edge. As more ants traverse the same edge, the pheromone trail
becomes stronger. This is similar to how ants communicate with
each other in real life by leaving pheromone trails to signal the
location of food sources [39, 40]. The ACO algorithm has several
key parameters, such as the amount of pheromone each ant
leaves, the rate at which pheromones evaporate, and the
balance between exploiting the best solution and exploring new
solutions. The optimal values of the parameters in the
algorithm are determined through a process of
experimentation and refinement to obtain the best possible
results for a specific problem [41].
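The pheromone mechanics just described can be sketched on a tiny example. Everything here is an illustrative assumption: the five-city traveling salesman instance, the distance matrix, and the parameter values (pheromone weight ALPHA, heuristic weight BETA, evaporation rate RHO, deposit constant Q) are invented for the demonstration.

```python
import random

random.seed(3)

# Tiny symmetric 5-city TSP; dist[i][j] is the length of edge (i, j).
dist = [[0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0]]
n = len(dist)
tau = [[1.0] * n for _ in range(n)]      # pheromone level on each edge
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.5, 100.0
ANTS, ITERS = 10, 50

def tour_length(t):
    return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

best = None
for _ in range(ITERS):
    tours = []
    for _ in range(ANTS):
        tour, unvisited = [0], set(range(1, n))
        while unvisited:
            i = tour[-1]
            cands = list(unvisited)
            # Next city chosen with probability ~ tau^ALPHA * (1/dist)^BETA.
            w = [tau[i][j] ** ALPHA * (1.0 / dist[i][j]) ** BETA for j in cands]
            nxt = random.choices(cands, weights=w)[0]
            tour.append(nxt)
            unvisited.discard(nxt)
        tours.append(tour)
        if best is None or tour_length(tour) < tour_length(best):
            best = tour
    # Evaporation weakens all trails; deposits reinforce short tours.
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1 - RHO)
    for t in tours:
        for i in range(n):
            a, b = t[i], t[(i + 1) % n]
            tau[a][b] += Q / tour_length(t)
            tau[b][a] += Q / tour_length(t)

print(best, tour_length(best))
```

The evaporation/deposit balance is the exploration-exploitation trade-off the text mentions: evaporation forgets old paths, deposits concentrate the search around the shortest tours found.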

The ACO has showcased impressive achievements in resolving
diverse optimization challenges, including but not limited to the
traveling salesman problem, vehicle routing, and job
scheduling. One notable advantage of the algorithm is its ability
to swiftly discover favorable solutions, even when confronted
with extensive search spaces. Furthermore, because the ACO
belongs to the category of metaheuristic algorithms, it can be
applied to a variety of situations without requiring a deep
understanding of the underlying structure of those problems
[42]. Figure 1.4 shows the flowchart of the ant colony
optimization.
Figure 1.4 Flowchart of the ant colony optimization.

1.2.5 Tabu Search

The tabu search is a metaheuristic technique utilized for
optimization problems, initially proposed by Fred Glover in
1986. It has gained significant popularity across diverse
domains, including operations research, engineering, and
computer science. The core concept behind the tabu search
involves systematically traversing the search space by
transitioning between different solutions in order to identify
the optimal solution. However, unlike other local search
algorithms, the tabu search incorporates a memory structure
that records previous moves executed during the search. These
data are then used to steer the search to potential places within
the search space [43]. The tabu list, a memory structure that
plays an important part in the algorithm, is at the heart of the
tabu search. This list serves to store and remember previous
moves made during the search process, ensuring that the
algorithm avoids revisiting solutions that have already been
explored. By utilizing the tabu list, the tabu search effectively
restricts the search to new and unexplored regions of the
solution space, promoting efficient exploration and preventing
repetitive or redundant searches. This list is used to enforce a
set of constraints, known as the tabu tenure, which determines
how long a move is considered tabu. By imposing this
constraint, the algorithm is compelled to investigate diverse
regions within the search space and evade being trapped in
local optima. This ensures that the algorithm remains dynamic
and continually explores new possibilities, preventing it from
being overly fixated on suboptimal solutions [43]. The tabu
search is a versatile optimization algorithm applicable to both
continuous and discrete optimization problems. When
addressing continuous optimization problems, the algorithm
typically uses a neighborhood function to generate new
solutions by perturbing the present solution. In the event of
discrete optimization problems, the neighborhood function is
typically defined in terms of specific moves that can be made to
the solution, such as swapping two elements in a permutation.
The effectiveness of the tabu search is based on a number of
variables, such as the choice of neighborhood function, the tabu
tenure, and the stopping criterion. The algorithm can be
enhanced by using various strategies, such as diversification
and intensification, which balance the search space’s
exploitation and exploration [44].
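The tabu list, tenure, and aspiration ideas discussed above can be sketched on a toy discrete problem. The weighted bit-string objective, the single-bit-flip neighborhood, and the tenure of 4 are all illustrative assumptions made for the example.

```python
import random
from collections import deque

random.seed(4)

# Toy objective: choose a 0/1 vector maximizing a weighted sum; the optimum
# simply selects every positive weight (total 27).
weights = [3, -1, 4, 1, -5, 9, 2, -6, 5, 3]
n = len(weights)

def score(sol):
    return sum(w for w, b in zip(weights, sol) if b)

sol = [random.randint(0, 1) for _ in range(n)]
best, best_score = sol[:], score(sol)
tabu = deque(maxlen=4)   # tabu tenure: the last 4 flipped positions

for _ in range(50):
    candidates = []
    for i in range(n):
        neigh = sol[:]
        neigh[i] ^= 1        # neighborhood: single-bit flips
        s = score(neigh)
        # Tabu moves are skipped unless they beat the best found so far
        # (the standard aspiration criterion).
        if i not in tabu or s > best_score:
            candidates.append((s, i, neigh))
    s, i, sol = max(candidates)   # move to the best admissible neighbor
    tabu.append(i)
    if s > best_score:
        best, best_score = sol[:], s

print(best_score)  # the optimum here is 27
```

Note that the search always moves to the best admissible neighbor even when it is worse than the current solution; the tabu list is what prevents this from cycling straight back.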

In general, the tabu search is a robust and adaptable
optimization technique that has demonstrated its effectiveness
in addressing diverse problem sets. It can be employed
independently or integrated into more intricate optimization
algorithms. Its popularity stems from its versatility and
straightforwardness, making it a favored option for tackling
real-life challenges in various domains.

1.2.6 Differential Evolution

The DE is an optimization algorithm based on populations,
originally created by Storn and Price in 1997 [47]. It belongs to
the category of evolutionary algorithms, which iteratively evolve
a population of potential solutions to find the optimal solution.
The algorithm adheres to the fundamental steps of mutation,
crossover, and selection, which are key elements commonly
shared among numerous evolutionary algorithms [48].

In the process of the differential evolution, a population of
potential solutions undergoes iterative evolution through the
implementation of the following sequential steps:

1. Initialization: A population of N possible solutions is
produced at random.
2. Mutation: Involves randomly selecting three candidate
solutions and modifying them to create a trial vector.
3. Crossover: It is a technique used in optimization algorithms
to create a new candidate solution by combining the trial
vector with the target vector.
4. Selection: If the new candidate solution has a higher fitness,
it will take the place of the target vector.
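The four steps above can be sketched as the common DE/rand/1/bin variant. The sphere objective and the parameter values (F, CR, population size) are illustrative assumptions only.

```python
import random

random.seed(5)

# Toy objective: minimize the 2-D sphere function.
def f(v):
    return sum(x * x for x in v)

NP, DIM, F, CR, GENS = 20, 2, 0.8, 0.9, 100

# Initialization: NP random candidate vectors.
pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(NP)]

for _ in range(GENS):
    for i in range(NP):
        # Mutation: three distinct vectors (other than the target) give the
        # mutant a + F * (b - c)  (the DE/rand/1 scheme).
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        j_rand = random.randrange(DIM)   # at least one mutated component
        trial = []
        for d in range(DIM):
            # Binomial crossover between the mutant and the target vector.
            if random.random() < CR or d == j_rand:
                trial.append(a[d] + F * (b[d] - c[d]))
            else:
                trial.append(pop[i][d])
        # Selection: the trial replaces the target if it is at least as fit.
        if f(trial) <= f(pop[i]):
            pop[i] = trial

best = min(pop, key=f)
print(f(best))  # converges very close to 0
```

The difference term F * (b - c) is what makes DE self-scaling: step sizes shrink automatically as the population converges.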

The success of the differential evolution depends on the
selection of the optimization technique’s adjustable settings,
such as mutation rate, crossover rate, and population size [49,
50]. Several variants of the DE have been proposed, including
the SHADE (success history-based adaptive differential
evolution) and JADE (adaptive differential evolution)
algorithms, which incorporate adaptive control parameters to
improve the algorithm’s performance.

1.2.7 Harmony Search

The harmony search (HS) is an optimization technique inspired
by the process of musical improvisation. Geem [125] put
forward the idea in 2001, and it has since been applied to a
number of optimization problems. The technique mimics
improvisation by a group of musicians, who adjust their pitches
(or notes) to create harmony. In the HS, the decision variables of
an optimization problem correspond to the musical notes, and
the value of the objective function reflects the quality of the
harmony [51].

The HS starts with an initial set of decision variable vectors
(i.e., the notes) and iteratively searches for better solutions by
generating new candidates through the following steps:

1. Harmony memory: A set of the best candidate solutions (i.e.,
the harmonies) is maintained.
2. Harmony creation: A potential solution is created by
randomly selecting values from the harmony memory.
3. Pitch adjustment: The values in the new candidate solution
are adjusted with a probability based on a pitch adjustment
rate.
4. Acceptance: The new candidate solution is accepted if it
improves the objective function value.
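The four steps above can be sketched as follows, assuming a toy two-variable sphere objective; the HMCR, PAR, and bandwidth values are illustrative assumptions rather than recommended settings.

```python
import random

random.seed(6)

# Toy objective: minimize the 2-D sphere function with harmony search.
def f(v):
    return sum(x * x for x in v)

DIM, HMS, ITERS = 2, 10, 2000
HMCR, PAR, BW = 0.9, 0.3, 0.1   # memory rate, pitch-adjust rate, bandwidth

# Harmony memory: the HMS best harmonies (candidate solutions) found so far.
hm = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(HMS)]

for _ in range(ITERS):
    new = []
    for d in range(DIM):
        if random.random() < HMCR:
            # Harmony creation: pick this variable from a stored harmony...
            x = random.choice(hm)[d]
            if random.random() < PAR:
                x += random.uniform(-BW, BW)   # ...then maybe pitch-adjust it
        else:
            x = random.uniform(-5, 5)          # or improvise a fresh value
        new.append(x)
    # Acceptance: the new harmony replaces the worst one if it is better.
    worst = max(range(HMS), key=lambda i: f(hm[i]))
    if f(new) < f(hm[worst]):
        hm[worst] = new

best = min(hm, key=f)
print(f(best))
```

HMCR and PAR play the roles the text assigns to the control variables: HMCR governs how often the memory is reused, and PAR controls how much each reused note is refined.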

The control parameters, such as the harmony memory size, pitch
adjustment rate, and number of iterations, affect how well the
HS performs. The approach has been utilized to tackle diverse
optimization challenges, such as managing water resources,
designing structures, and operating power systems [52, 53].

1.2.8 Artificial Bee Colony

The artificial bee colony (ABC) is a population-based
optimization method that draws inspiration from honey bees’
feeding habits. Since its introduction by Karaboga in 2005, the
technique has been used to solve a number of optimization
issues [54]. The ABC mimics the foraging process of bees, where
they search for food sources by visiting the flowers in the
vicinity of the hive [55].

The artificial bee colony technique starts with an arbitrarily
generated population of candidate solutions (i.e., food sources)
and iteratively searches for better solutions by simulating the
foraging process of bees through the following steps:

1. Phase of employed bees: The employed bees develop new
candidate solutions by modifying the values of current
solutions.
2. Phase of the onlooker bees: The onlooker bees receive fitness
information from the employed bees and probabilistically select
the more promising candidate solutions for further refinement.
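The employed and onlooker bee phases above can be sketched as follows (the scout bee phase, which replaces abandoned food sources, is omitted here for brevity). The sphere objective and all parameter values are illustrative assumptions.

```python
import random

random.seed(7)

# Toy objective: minimize the 2-D sphere function.
def f(v):
    return sum(x * x for x in v)

def fitness(v):
    return 1.0 / (1.0 + f(v))   # higher fitness for lower cost

SN, DIM, ITERS = 10, 2, 300     # SN food sources (candidate solutions)
foods = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SN)]

def local_move(i):
    # Perturb one dimension of source i using a random partner source k,
    # keeping the change only if it improves the solution (greedy selection).
    k = random.choice([j for j in range(SN) if j != i])
    d = random.randrange(DIM)
    new = foods[i][:]
    new[d] += random.uniform(-1, 1) * (foods[i][d] - foods[k][d])
    if f(new) < f(foods[i]):
        foods[i] = new

for _ in range(ITERS):
    # Employed bee phase: one local search around every food source.
    for i in range(SN):
        local_move(i)
    # Onlooker bee phase: sources picked with probability ~ fitness.
    probs = [fitness(s) for s in foods]
    for _ in range(SN):
        local_move(random.choices(range(SN), weights=probs)[0])

best = min(foods, key=f)
print(f(best))
```

The fitness-proportional choice in the onlooker phase is the "information sharing" step: richer food sources attract more onlookers and therefore receive more local search effort.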
Random documents with unrelated
content Scribd suggests to you:
*** END OF THE PROJECT GUTENBERG EBOOK SCRIBNER'S
MAGAZINE, VOLUME 26, AUGUST 1899 ***

Updated editions will replace the previous one—the old editions


will be renamed.

Creating the works from print editions not protected by U.S.


copyright law means that no one owns a United States
copyright in these works, so the Foundation (and you!) can copy
and distribute it in the United States without permission and
without paying copyright royalties. Special rules, set forth in the
General Terms of Use part of this license, apply to copying and
distributing Project Gutenberg™ electronic works to protect the
PROJECT GUTENBERG™ concept and trademark. Project
Gutenberg is a registered trademark, and may not be used if
you charge for an eBook, except by following the terms of the
trademark license, including paying royalties for use of the
Project Gutenberg trademark. If you do not charge anything for
copies of this eBook, complying with the trademark license is
very easy. You may use this eBook for nearly any purpose such
as creation of derivative works, reports, performances and
research. Project Gutenberg eBooks may be modified and
printed and given away—you may do practically ANYTHING in
the United States with eBooks not protected by U.S. copyright
law. Redistribution is subject to the trademark license, especially
commercial redistribution.

START: FULL LICENSE


THE FULL PROJECT GUTENBERG LICENSE
PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK

To protect the Project Gutenberg™ mission of promoting the


free distribution of electronic works, by using or distributing this
work (or any other work associated in any way with the phrase
“Project Gutenberg”), you agree to comply with all the terms of
the Full Project Gutenberg™ License available with this file or
online at www.gutenberg.org/license.

Section 1. General Terms of Use and


Redistributing Project Gutenberg™
electronic works
1.A. By reading or using any part of this Project Gutenberg™
electronic work, you indicate that you have read, understand,
agree to and accept all the terms of this license and intellectual
property (trademark/copyright) agreement. If you do not agree
to abide by all the terms of this agreement, you must cease
using and return or destroy all copies of Project Gutenberg™
electronic works in your possession. If you paid a fee for
obtaining a copy of or access to a Project Gutenberg™
electronic work and you do not agree to be bound by the terms
of this agreement, you may obtain a refund from the person or
entity to whom you paid the fee as set forth in paragraph 1.E.8.

1.B. “Project Gutenberg” is a registered trademark. It may only


be used on or associated in any way with an electronic work by
people who agree to be bound by the terms of this agreement.
There are a few things that you can do with most Project
Gutenberg™ electronic works even without complying with the
full terms of this agreement. See paragraph 1.C below. There
are a lot of things you can do with Project Gutenberg™
electronic works if you follow the terms of this agreement and
help preserve free future access to Project Gutenberg™
electronic works. See paragraph 1.E below.
1.C. The Project Gutenberg Literary Archive Foundation (“the
Foundation” or PGLAF), owns a compilation copyright in the
collection of Project Gutenberg™ electronic works. Nearly all the
individual works in the collection are in the public domain in the
United States. If an individual work is unprotected by copyright
law in the United States and you are located in the United
States, we do not claim a right to prevent you from copying,
distributing, performing, displaying or creating derivative works
based on the work as long as all references to Project
Gutenberg are removed. Of course, we hope that you will
support the Project Gutenberg™ mission of promoting free
access to electronic works by freely sharing Project Gutenberg™
works in compliance with the terms of this agreement for
keeping the Project Gutenberg™ name associated with the
work. You can easily comply with the terms of this agreement
by keeping this work in the same format with its attached full
Project Gutenberg™ License when you share it without charge
with others.

1.D. The copyright laws of the place where you are located also
govern what you can do with this work. Copyright laws in most
countries are in a constant state of change. If you are outside
the United States, check the laws of your country in addition to
the terms of this agreement before downloading, copying,
displaying, performing, distributing or creating derivative works
based on this work or any other Project Gutenberg™ work. The
Foundation makes no representations concerning the copyright
status of any work in any country other than the United States.

1.E. Unless you have removed all references to Project


Gutenberg:

1.E.1. The following sentence, with active links to, or other


immediate access to, the full Project Gutenberg™ License must
appear prominently whenever any copy of a Project
Gutenberg™ work (any work on which the phrase “Project
Gutenberg” appears, or with which the phrase “Project
Gutenberg” is associated) is accessed, displayed, performed,
viewed, copied or distributed:

This eBook is for the use of anyone anywhere in the United


States and most other parts of the world at no cost and
with almost no restrictions whatsoever. You may copy it,
give it away or re-use it under the terms of the Project
Gutenberg License included with this eBook or online at
www.gutenberg.org. If you are not located in the United
States, you will have to check the laws of the country
where you are located before using this eBook.

1.E.2. If an individual Project Gutenberg™ electronic work is


derived from texts not protected by U.S. copyright law (does not
contain a notice indicating that it is posted with permission of
the copyright holder), the work can be copied and distributed to
anyone in the United States without paying any fees or charges.
If you are redistributing or providing access to a work with the
phrase “Project Gutenberg” associated with or appearing on the
work, you must comply either with the requirements of
paragraphs 1.E.1 through 1.E.7 or obtain permission for the use
of the work and the Project Gutenberg™ trademark as set forth
in paragraphs 1.E.8 or 1.E.9.

1.E.3. If an individual Project Gutenberg™ electronic work is


posted with the permission of the copyright holder, your use and
distribution must comply with both paragraphs 1.E.1 through
1.E.7 and any additional terms imposed by the copyright holder.
Additional terms will be linked to the Project Gutenberg™
License for all works posted with the permission of the copyright
holder found at the beginning of this work.

1.E.4. Do not unlink or detach or remove the full Project


Gutenberg™ License terms from this work, or any files
containing a part of this work or any other work associated with
Project Gutenberg™.

1.E.5. Do not copy, display, perform, distribute or redistribute


this electronic work, or any part of this electronic work, without
prominently displaying the sentence set forth in paragraph 1.E.1
with active links or immediate access to the full terms of the
Project Gutenberg™ License.

1.E.6. You may convert to and distribute this work in any binary,
compressed, marked up, nonproprietary or proprietary form,
including any word processing or hypertext form. However, if
you provide access to or distribute copies of a Project
Gutenberg™ work in a format other than “Plain Vanilla ASCII” or
other format used in the official version posted on the official
Project Gutenberg™ website (www.gutenberg.org), you must,
at no additional cost, fee or expense to the user, provide a copy,
a means of exporting a copy, or a means of obtaining a copy
upon request, of the work in its original “Plain Vanilla ASCII” or
other form. Any alternate format must include the full Project
Gutenberg™ License as specified in paragraph 1.E.1.

1.E.7. Do not charge a fee for access to, viewing, displaying,


performing, copying or distributing any Project Gutenberg™
works unless you comply with paragraph 1.E.8 or 1.E.9.

1.E.8. You may charge a reasonable fee for copies of or


providing access to or distributing Project Gutenberg™
electronic works provided that:

• You pay a royalty fee of 20% of the gross profits you derive
from the use of Project Gutenberg™ works calculated using the
method you already use to calculate your applicable taxes. The
fee is owed to the owner of the Project Gutenberg™ trademark,
but he has agreed to donate royalties under this paragraph to
the Project Gutenberg Literary Archive Foundation. Royalty
payments must be paid within 60 days following each date on
which you prepare (or are legally required to prepare) your
periodic tax returns. Royalty payments should be clearly marked
as such and sent to the Project Gutenberg Literary Archive
Foundation at the address specified in Section 4, “Information
about donations to the Project Gutenberg Literary Archive
Foundation.”

• You provide a full refund of any money paid by a user who


notifies you in writing (or by e-mail) within 30 days of receipt
that s/he does not agree to the terms of the full Project
Gutenberg™ License. You must require such a user to return or
destroy all copies of the works possessed in a physical medium
and discontinue all use of and all access to other copies of
Project Gutenberg™ works.

• You provide, in accordance with paragraph 1.F.3, a full refund of


any money paid for a work or a replacement copy, if a defect in
the electronic work is discovered and reported to you within 90
days of receipt of the work.

• You comply with all other terms of this agreement for free
distribution of Project Gutenberg™ works.

1.E.9. If you wish to charge a fee or distribute a Project


Gutenberg™ electronic work or group of works on different
terms than are set forth in this agreement, you must obtain
permission in writing from the Project Gutenberg Literary
Archive Foundation, the manager of the Project Gutenberg™
trademark. Contact the Foundation as set forth in Section 3
below.

1.F.

1.F.1. Project Gutenberg volunteers and employees expend


considerable effort to identify, do copyright research on,
transcribe and proofread works not protected by U.S. copyright
law in creating the Project Gutenberg™ collection. Despite these
efforts, Project Gutenberg™ electronic works, and the medium
on which they may be stored, may contain “Defects,” such as,
but not limited to, incomplete, inaccurate or corrupt data,
transcription errors, a copyright or other intellectual property
infringement, a defective or damaged disk or other medium, a
computer virus, or computer codes that damage or cannot be
read by your equipment.

1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except


for the “Right of Replacement or Refund” described in
paragraph 1.F.3, the Project Gutenberg Literary Archive
Foundation, the owner of the Project Gutenberg™ trademark,
and any other party distributing a Project Gutenberg™ electronic
work under this agreement, disclaim all liability to you for
damages, costs and expenses, including legal fees. YOU AGREE
THAT YOU HAVE NO REMEDIES FOR NEGLIGENCE, STRICT
LIABILITY, BREACH OF WARRANTY OR BREACH OF CONTRACT
EXCEPT THOSE PROVIDED IN PARAGRAPH 1.F.3. YOU AGREE
THAT THE FOUNDATION, THE TRADEMARK OWNER, AND ANY
DISTRIBUTOR UNDER THIS AGREEMENT WILL NOT BE LIABLE
TO YOU FOR ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL,
PUNITIVE OR INCIDENTAL DAMAGES EVEN IF YOU GIVE
NOTICE OF THE POSSIBILITY OF SUCH DAMAGE.

1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you


discover a defect in this electronic work within 90 days of
receiving it, you can receive a refund of the money (if any) you
paid for it by sending a written explanation to the person you
received the work from. If you received the work on a physical
medium, you must return the medium with your written
explanation. The person or entity that provided you with the
defective work may elect to provide a replacement copy in lieu
of a refund. If you received the work electronically, the person
or entity providing it to you may choose to give you a second
opportunity to receive the work electronically in lieu of a refund.
If the second copy is also defective, you may demand a refund
in writing without further opportunities to fix the problem.

1.F.4. Except for the limited right of replacement or refund set


forth in paragraph 1.F.3, this work is provided to you ‘AS-IS’,
WITH NO OTHER WARRANTIES OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR ANY PURPOSE.

1.F.5. Some states do not allow disclaimers of certain implied


warranties or the exclusion or limitation of certain types of
damages. If any disclaimer or limitation set forth in this
agreement violates the law of the state applicable to this
agreement, the agreement shall be interpreted to make the
maximum disclaimer or limitation permitted by the applicable
state law. The invalidity or unenforceability of any provision of
this agreement shall not void the remaining provisions.

1.F.6. INDEMNITY - You agree to indemnify and hold the


Foundation, the trademark owner, any agent or employee of the
Foundation, anyone providing copies of Project Gutenberg™
electronic works in accordance with this agreement, and any
volunteers associated with the production, promotion and
distribution of Project Gutenberg™ electronic works, harmless
from all liability, costs and expenses, including legal fees, that
arise directly or indirectly from any of the following which you
do or cause to occur: (a) distribution of this or any Project
Gutenberg™ work, (b) alteration, modification, or additions or
deletions to any Project Gutenberg™ work, and (c) any Defect
you cause.

Section 2. Information about the Mission


of Project Gutenberg™
Project Gutenberg™ is synonymous with the free distribution of
electronic works in formats readable by the widest variety of
computers including obsolete, old, middle-aged and new
computers. It exists because of the efforts of hundreds of
volunteers and donations from people in all walks of life.

Volunteers and financial support to provide volunteers with the


assistance they need are critical to reaching Project
Gutenberg™’s goals and ensuring that the Project Gutenberg™
collection will remain freely available for generations to come. In
2001, the Project Gutenberg Literary Archive Foundation was
created to provide a secure and permanent future for Project
Gutenberg™ and future generations. To learn more about the
Project Gutenberg Literary Archive Foundation and how your
efforts and donations can help, see Sections 3 and 4 and the
Foundation information page at www.gutenberg.org.

Section 3. Information about the Project Gutenberg Literary Archive Foundation

The Project Gutenberg Literary Archive Foundation is a non-
profit 501(c)(3) educational corporation organized under the
laws of the state of Mississippi and granted tax exempt status
by the Internal Revenue Service. The Foundation’s EIN or
federal tax identification number is 64-6221541. Contributions
to the Project Gutenberg Literary Archive Foundation are tax
deductible to the full extent permitted by U.S. federal laws and
your state’s laws.

The Foundation’s business office is located at 809 North 1500
West, Salt Lake City, UT 84116, (801) 596-1887. Email contact
links and up to date contact information can be found at the
Foundation’s website and official page at
www.gutenberg.org/contact

Section 4. Information about Donations to the Project Gutenberg Literary Archive Foundation

Project Gutenberg™ depends upon and cannot survive without
widespread public support and donations to carry out its mission
of increasing the number of public domain and licensed works
that can be freely distributed in machine-readable form
accessible by the widest array of equipment including outdated
equipment. Many small donations ($1 to $5,000) are particularly
important to maintaining tax exempt status with the IRS.

The Foundation is committed to complying with the laws
regulating charities and charitable donations in all 50 states of
the United States. Compliance requirements are not uniform
and it takes a considerable effort, much paperwork and many
fees to meet and keep up with these requirements. We do not
solicit donations in locations where we have not received written
confirmation of compliance. To SEND DONATIONS or determine
the status of compliance for any particular state visit
www.gutenberg.org/donate.

While we cannot and do not solicit contributions from states
where we have not met the solicitation requirements, we know
of no prohibition against accepting unsolicited donations from
donors in such states who approach us with offers to donate.

International donations are gratefully accepted, but we cannot
make any statements concerning tax treatment of donations
received from outside the United States. U.S. laws alone swamp
our small staff.

Please check the Project Gutenberg web pages for current
donation methods and addresses. Donations are accepted in a
number of other ways including checks, online payments and
credit card donations. To donate, please visit:
www.gutenberg.org/donate.

Section 5. General Information About Project Gutenberg™ electronic works

Professor Michael S. Hart was the originator of the Project
Gutenberg™ concept of a library of electronic works that could
be freely shared with anyone. For forty years, he produced and
distributed Project Gutenberg™ eBooks with only a loose
network of volunteer support.

Project Gutenberg™ eBooks are often created from several
printed editions, all of which are confirmed as not protected by
copyright in the U.S. unless a copyright notice is included. Thus,
we do not necessarily keep eBooks in compliance with any
particular paper edition.

Most people start at our website which has the main PG search
facility: www.gutenberg.org.

This website includes information about Project Gutenberg™,
including how to make donations to the Project Gutenberg
Literary Archive Foundation, how to help produce our new
eBooks, and how to subscribe to our email newsletter to hear
about new eBooks.