Theory and Practice of Uncertain Programming, 1st Edition
Author(s): Professor Baoding Liu (auth.)
ISBN(s): 9783790814903, 3790814903
Edition: 1
File Details: PDF, 1.10 MB
Year: 2002
Language: English
Theory and Practice of
Uncertain Programming
Second Edition

Baoding Liu
Uncertainty Theory Laboratory
Department of Mathematical Sciences
Tsinghua University
Beijing 100084, China
[email protected]
http://orsc.edu.cn/liu

2nd Edition © 2008 by UTLAB
Russian Translation Version © 2005 by LBZ Moscow
1st Edition © 2002 by Physica-Verlag Heidelberg

Reference to this book should be made as follows:


Liu B, Theory and Practice of Uncertain Programming,
2nd ed., http://orsc.edu.cn/liu/up.pdf
Contents

Preface

1 Mathematical Programming
  1.1 Single-Objective Programming
  1.2 Multiobjective Programming
  1.3 Goal Programming
  1.4 Dynamic Programming
  1.5 Multilevel Programming

2 Genetic Algorithms
  2.1 Representation Structure
  2.2 Handling Constraints
  2.3 Initialization Process
  2.4 Evaluation Function
  2.5 Selection Process
  2.6 Crossover Operation
  2.7 Mutation Operation
  2.8 General Procedure
  2.9 Numerical Experiments

3 Neural Networks
  3.1 Basic Concepts
  3.2 Function Approximation
  3.3 Neuron Number Determination
  3.4 Backpropagation Algorithm
  3.5 Numerical Experiments

4 Stochastic Programming
  4.1 Random Variables
  4.2 Expected Value Model
  4.3 Chance-Constrained Programming
  4.4 Dependent-Chance Programming
  4.5 Hybrid Intelligent Algorithm
  4.6 Numerical Experiments

5 Fuzzy Programming
  5.1 Fuzzy Variables
  5.2 Expected Value Model
  5.3 Chance-Constrained Programming
  5.4 Dependent-Chance Programming
  5.5 Hybrid Intelligent Algorithm
  5.6 Numerical Experiments

6 Hybrid Programming
  6.1 Hybrid Variables
  6.2 Expected Value Model
  6.3 Chance-Constrained Programming
  6.4 Dependent-Chance Programming
  6.5 Hybrid Intelligent Algorithm
  6.6 Numerical Experiments

7 System Reliability Design
  7.1 Problem Description
  7.2 Stochastic Models
  7.3 Fuzzy Models
  7.4 Hybrid Models

8 Project Scheduling Problem
  8.1 Problem Description
  8.2 Stochastic Models
  8.3 Fuzzy Models
  8.4 Hybrid Models

9 Vehicle Routing Problem
  9.1 Problem Description
  9.2 Stochastic Models
  9.3 Fuzzy Models
  9.4 Hybrid Models

10 Facility Location Problem
  10.1 Problem Description
  10.2 Stochastic Models
  10.3 Fuzzy Models
  10.4 Hybrid Models

11 Machine Scheduling Problem
  11.1 Problem Description
  11.2 Stochastic Models
  11.3 Fuzzy Models
  11.4 Hybrid Models

12 Uncertain Programming
  12.1 Uncertain Variables
  12.2 Expected Value Model
  12.3 Chance-Constrained Programming
  12.4 Dependent-Chance Programming
  12.5 Uncertain Dynamic Programming
  12.6 Uncertain Multilevel Programming
  12.7 Ψ Graph of Uncertain Programming

Bibliography

List of Acronyms

List of Frequently Used Symbols

Index
Preface

Real-life decisions are usually made in a state of uncertainty. How do we
model optimization problems in uncertain environments? How do we solve
these models? The main purpose of this book is to provide uncertain
programming theory to answer these questions.
By uncertain programming we mean the optimization theory in uncer-
tain environments. Stochastic programming, fuzzy programming and hybrid
programming are subtopics of uncertain programming.
This book provides a self-contained, comprehensive and up-to-date presentation
of uncertain programming theory, including numerous modeling ideas and
applications in system reliability design, the project scheduling problem, the
vehicle routing problem, the facility location problem, and the machine
scheduling problem.
Numerous intelligent algorithms such as genetic algorithms and neural
networks have been developed by researchers of different backgrounds. A
natural idea is to integrate these intelligent algorithms to produce more
effective and powerful algorithms. In order to solve uncertain programming
models, a spectrum of hybrid intelligent algorithms is documented in the
book. The author also maintains a website at http://orsc.edu.cn/liu to post
the C++ source files of simulations, genetic algorithms, neural networks, and
hybrid intelligent algorithms.
It is assumed that readers are familiar with the basic concepts of
mathematical programming and have elementary knowledge of the C++ language.
In order to make the book more readable, some background topics that will
be useful in reading the book are also presented. The book is suitable for
researchers, engineers, and students in the fields of operations research,
information science, management science, system science, computer science,
and engineering. Readers will learn numerous new modeling ideas and
effective algorithms, and will find this work a stimulating and useful reference.

Baoding Liu
Tsinghua University
http://orsc.edu.cn/liu
March 3, 2008
Chapter 1

Mathematical Programming

As one of the most widely used techniques in operations research,
mathematical programming is defined as a means of maximizing a quantity
known as the objective function, subject to a set of constraints represented
by equations and inequalities. Some well-known subtopics of mathematical
programming are linear programming, nonlinear programming, multiobjective
programming, goal programming, dynamic programming, and multilevel
programming.
It is impossible to cover every concept of mathematical programming in a
single chapter. This chapter introduces only the basic concepts and
techniques of mathematical programming so that readers gain an understanding
of them for use throughout the book.

1.1 Single-Objective Programming


The general form of single-objective programming (SOP) is written as follows,

\[
\begin{cases}
\max \; f(x) \\
\text{subject to:} \\
\quad g_j(x) \le 0, \; j = 1, 2, \cdots, p
\end{cases} \tag{1.1}
\]

which maximizes a real-valued function f of x = (x_1, x_2, \cdots, x_n) subject to
a set of constraints.

Definition 1.1 In SOP (1.1), we call x a decision vector, and x_1, x_2, \cdots, x_n
decision variables. The function f is called the objective function. The set

\[
S = \{ x \in \Re^n \mid g_j(x) \le 0, \; j = 1, 2, \cdots, p \} \tag{1.2}
\]

is called the feasible set. An element x in S is called a feasible solution.



Definition 1.2 A feasible solution x^* is called the optimal solution of SOP
(1.1) if and only if

\[
f(x^*) \ge f(x) \tag{1.3}
\]

for any feasible solution x.

One of the outstanding contributions to mathematical programming is
known as the Kuhn-Tucker conditions. In order to introduce them, let us
give some definitions. An inequality constraint g_j(x) ≤ 0 is said to be active
at a point x^* if g_j(x^*) = 0. A point x^* satisfying g_j(x^*) ≤ 0 is said to be
regular if the gradient vectors ∇g_j(x) of all active constraints are linearly
independent.
Let x^* be a regular point of the constraints of SOP (1.1) and assume
that all the functions f(x) and g_j(x), j = 1, 2, \cdots, p are differentiable. If
x^* is a local optimal solution, then there exist Lagrange multipliers \lambda_j,
j = 1, 2, \cdots, p such that the following Kuhn-Tucker conditions hold,

\[
\begin{cases}
\nabla f(x^*) - \sum\limits_{j=1}^{p} \lambda_j \nabla g_j(x^*) = 0 \\
\lambda_j g_j(x^*) = 0, \; j = 1, 2, \cdots, p \\
\lambda_j \ge 0, \; j = 1, 2, \cdots, p.
\end{cases} \tag{1.4}
\]

If all the functions f(x) and g_j(x), j = 1, 2, \cdots, p are convex and
differentiable, and the point x^* satisfies the Kuhn-Tucker conditions (1.4),
then it has been proved that x^* is a global optimal solution of SOP (1.1).

Linear Programming

If the functions f (x), gj (x), j = 1, 2, · · · , p are all linear, then SOP (1.1) is
called a linear programming.
The feasible set of linear programming is always convex. A point x is
called an extreme point of convex set S if x ∈ S and x cannot be expressed
as a convex combination of two points in S. It has been shown that the
optimal solution to linear programming corresponds to an extreme point of
its feasible set provided that the feasible set S is bounded. This fact is the
basis of the simplex algorithm which was developed by Dantzig [53] as a very
efficient method for solving linear programming.
Roughly speaking, the simplex algorithm examines only the extreme points
of the feasible set, rather than all feasible points. At first, the simplex algo-
rithm selects an extreme point as the initial point. The successive extreme
point is selected so as to improve the objective function value. The procedure
is repeated until no improvement in objective function value can be made.
The last extreme point is the optimal solution.

Nonlinear Programming

If at least one of the functions f (x), gj (x), j = 1, 2, · · · , p is nonlinear, then


SOP (1.1) is called a nonlinear programming.
A large number of classical optimization methods have been developed to
treat specially structured nonlinear programming, based on the mathematical
theory concerned with analyzing the structure of problems.
Now we consider a nonlinear programming that is concerned solely with
maximizing a real-valued function with domain ℜ^n. Whether derivatives are
available or not, the usual strategy is first to select a point in ℜ^n which
is thought to be the most likely place where the maximum exists. If there
is no information available on which to base such a selection, a point is
chosen at random. From this first point an attempt is made to construct
a sequence of points, each of which yields an improved objective function
value over its predecessor. The next point to be added to the sequence
is chosen by analyzing the behavior of the function at the previous points.
This construction continues until some termination criterion is met. Methods
based upon this strategy are called ascent methods, which can be classified
as direct methods, gradient methods, and Hessian methods according to the
information about the behavior of objective function f . Direct methods
require only that the function can be evaluated at each point. Gradient
methods require the evaluation of first derivatives of f . Hessian methods
require the evaluation of second derivatives. In fact, there is no superior
method for all problems. The efficiency of a method is very much dependent
upon the objective function.

Integer Programming

Integer programming is a special mathematical programming in which all of
the variables are assumed to take only integer values. When there are not only
integer variables but also conventional continuous variables, we call it mixed
integer programming. If all the variables are assumed to be either 0 or 1, then
the problem is termed a zero-one programming. Although integer programming
can theoretically be solved by exhaustive enumeration, it is impractical to
solve realistically sized integer programming problems this way. The most
successful algorithm found so far for integer programming is the
branch-and-bound enumeration developed by Balas (1965) and Dakin (1965).
Another technique for integer programming is the cutting plane method
developed by Gomory (1959).

1.2 Multiobjective Programming


SOP is related to maximizing a single function subject to a number of
constraints. However, it has been increasingly recognized that many real-world
decision-making problems involve multiple, noncommensurable, and conflicting
objectives which should be considered simultaneously. As an extension,
multiobjective programming (MOP) is defined as a means of optimizing multiple
objective functions subject to a number of constraints, i.e.,

\[
\begin{cases}
\max \; [f_1(x), f_2(x), \cdots, f_m(x)] \\
\text{subject to:} \\
\quad g_j(x) \le 0, \; j = 1, 2, \cdots, p
\end{cases} \tag{1.5}
\]

where f_i(x) are objective functions, i = 1, 2, \cdots, m, and g_j(x) ≤ 0 are
system constraints, j = 1, 2, \cdots, p.
When the objectives are in conflict, there is no optimal solution that
simultaneously maximizes all the objective functions. For this case, we employ
the concept of Pareto solution, which means that it is impossible to improve
any one objective without sacrificing one or more of the other objectives.

Definition 1.3 A feasible solution x^* is said to be a Pareto solution if there
is no feasible solution x such that

\[
f_i(x) \ge f_i(x^*), \; i = 1, 2, \cdots, m \tag{1.6}
\]

and f_j(x) > f_j(x^*) for at least one index j.

If the decision maker has a real-valued preference function aggregating


the m objective functions, then we may maximize the aggregating preference
function subject to the same set of constraints. This model is referred to as
a compromise model whose solution is called a compromise solution.
The first well-known compromise model is set up by weighting the objective
functions, i.e.,

\[
\begin{cases}
\max \; \sum\limits_{i=1}^{m} \lambda_i f_i(x) \\
\text{subject to:} \\
\quad g_j(x) \le 0, \; j = 1, 2, \cdots, p
\end{cases} \tag{1.7}
\]

where the weights \lambda_1, \lambda_2, \cdots, \lambda_m are nonnegative numbers with
\lambda_1 + \lambda_2 + \cdots + \lambda_m = 1. Note that the solution of (1.7) must be a
Pareto solution of the original problem.
The second way is related to minimizing the distance function from a
solution (f_1(x), f_2(x), \cdots, f_m(x)) to an ideal vector (f_1^*, f_2^*, \cdots, f_m^*),
where f_i^* are the optimal values of the ith objective functions without
considering other objectives, i = 1, 2, \cdots, m, respectively, i.e.,

\[
\begin{cases}
\min \; \sqrt{(f_1(x) - f_1^*)^2 + \cdots + (f_m(x) - f_m^*)^2} \\
\text{subject to:} \\
\quad g_j(x) \le 0, \; j = 1, 2, \cdots, p.
\end{cases} \tag{1.8}
\]

The third way is to find a compromise solution via an interactive approach
consisting of a sequence of decision phases and computation phases. Various
interactive approaches have been developed in the literature.

1.3 Goal Programming


Goal programming (GP) was developed by Charnes and Cooper [39] and
subsequently studied by many researchers. GP can be regarded as a special
compromise model for multiobjective programming and has been applied in
a wide variety of real-world problems.
In multiobjective decision-making problems, we assume that the decision-
maker is able to assign a target level for each goal and the key idea is to
minimize the deviations (positive, negative, or both) from the target levels.
In the real-world situation, the goals are achievable only at the expense of
other goals and these goals are usually incompatible. Therefore, there is a
need to establish a hierarchy of importance among these incompatible goals
so as to satisfy as many goals as possible in the order specified. The general
form of GP is written as follows,

\[
\begin{cases}
\min \; \sum\limits_{j=1}^{l} P_j \sum\limits_{i=1}^{m} \left( u_{ij}\, d_i^+ \vee 0 + v_{ij}\, d_i^- \vee 0 \right) \\
\text{subject to:} \\
\quad f_i(x) - b_i = d_i^+, \; i = 1, 2, \cdots, m \\
\quad b_i - f_i(x) = d_i^-, \; i = 1, 2, \cdots, m \\
\quad g_j(x) \le 0, \; j = 1, 2, \cdots, p
\end{cases} \tag{1.9}
\]

where P_j is the preemptive priority factor which expresses the relative
importance of the various goals, P_j \gg P_{j+1} for all j; u_{ij} is the weighting
factor corresponding to positive deviation for goal i with priority j assigned;
v_{ij} is the weighting factor corresponding to negative deviation for goal i
with priority j assigned; d_i^+ \vee 0 is the positive deviation from the target of
goal i; d_i^- \vee 0 is the negative deviation from the target of goal i; f_i is a
function in goal constraints; g_j is a function in system constraints; b_i is the
target value according to goal i; l is the number of priorities; m is the number
of goal constraints; and p is the number of system constraints. Sometimes, the
objective function of GP (1.9) is written as

\[
\text{lexmin} \left\{ \sum_{i=1}^{m} \left( u_{i1}\, d_i^+ \vee 0 + v_{i1}\, d_i^- \vee 0 \right), \cdots, \sum_{i=1}^{m} \left( u_{il}\, d_i^+ \vee 0 + v_{il}\, d_i^- \vee 0 \right) \right\}
\]

where lexmin represents lexicographically minimizing the objective vector.


Linear GP can be successfully solved by the simplex goal method. The
approaches to nonlinear GP are summarized by Saber and Ravindran [270], and
the efficiency of these approaches varies. They are classified as follows:
(a) the simplex-based approach, whose main idea lies in converting the
nonlinear GP into a set of approximating linear GPs which can be handled by
the simplex goal method; (b) the direct search approach [50], in which the
given nonlinear GP is translated into a set of SOPs which are then solved by
direct search methods; (c) the gradient-based approach [154][270], which
utilizes the gradient of constraints to identify a feasible direction and then
solves the GP based on the feasible direction method; (d) the interactive
approach [309][225], which can yield a satisfactory solution in relatively few
iterations since the decision-maker is involved in the solution process; and
(e) the genetic algorithm [88], which can deal with complex nonlinear GP but
has to spend more CPU time.

1.4 Dynamic Programming


Let us denote a multistage decision process by [a, T(a, x)], where a is called
the state, T(a, x) is called a state transition function, and x is called the
decision vector. It is clear that the state transition function depends on both
state a and decision vector x. We suppose that we have sufficient influence
over the process so that at each stage we can choose a decision vector x from
the allowable set. Let x_i be the decision vector at the ith stage. Then we have
the following sequence,

\[
a_1 = a_0 \; (\text{an initial state}), \qquad a_{i+1} = T(a_i, x_i), \; i = 1, 2, \cdots
\]
We are concerned with processes in which the decision vectors x_i are chosen
so as to maximize a criterion function R(a_1, a_2, \cdots, a_N; x_1, x_2, \cdots, x_N).
A decision is called optimal if it maximizes the criterion function.
In view of the general nature of the criterion function R, the decision
vectors x_i are dependent upon the current state of the system as well as the
past and future states and decisions. However, some criterion functions have
special structures so that the decision depends only on the current state. In
this special but extremely important case, the optimal policy is characterized
by Bellman's principle of optimality: An optimal policy has the property that
whatever the initial state and initial decision are, the remaining decisions
must constitute an optimal policy with regard to the state resulting from the
first decision.
Fortunately, many important criteria have the vital property of divorcing
the past from the present. In general, it is easy to predict this property from
the nature of the original multistage decision process. For example, let us
consider a problem of maximizing the following special-structured function
\[
R(a_1, a_2, \cdots, a_N; x_1, x_2, \cdots, x_N) = \sum_{i=1}^{N} r_i(a_i, x_i) \tag{1.10}
\]

subject to g_i(a_i, x_i) ≤ 0 for i = 1, 2, \cdots, N. Let f_n(a) be the maximum
values of the criterion function R, starting in state a at stage n,
n = 1, 2, \cdots, N, respectively. Then by Bellman's principle of optimality, we
have

\[
\begin{cases}
f_N(a) = \max\limits_{g_N(a,x) \le 0} r_N(a, x) \\
f_{N-1}(a) = \max\limits_{g_{N-1}(a,x) \le 0} \left\{ r_{N-1}(a, x) + f_N(T(a, x)) \right\} \\
\cdots \\
f_1(a) = \max\limits_{g_1(a,x) \le 0} \left\{ r_1(a, x) + f_2(T(a, x)) \right\}.
\end{cases} \tag{1.11}
\]

Note that

\[
\max_{x_1, x_2, \cdots, x_N} R(a_1, a_2, \cdots, a_N; x_1, x_2, \cdots, x_N) = f_1(a_0). \tag{1.12}
\]

The system of equations (1.11) is called dynamic programming (DP) by
Richard Bellman [12], and can be written simply as

\[
\begin{cases}
f_N(a) = \max\limits_{g_N(a,x) \le 0} r_N(a, x) \\
f_n(a) = \max\limits_{g_n(a,x) \le 0} \left\{ r_n(a, x) + f_{n+1}(T(a, x)) \right\}, \; n \le N - 1.
\end{cases} \tag{1.13}
\]

In order to obtain optimal solutions in reasonable time for practical
problems, we should develop effective computational algorithms for DP. To
explore general DP algorithms, readers may consult the book by Bertsekas and
Tsitsiklis [16], in which numerous different ways to solve DP problems have
been suggested.

1.5 Multilevel Programming


Multilevel programming (MLP) offers a means of studying decentralized
decision systems in which we assume that the leader and followers may have
their own decision variables and objective functions, and the leader can only
influence the reactions of the followers through his own decision variables,
while the followers have full authority to decide how to optimize their own
objective functions in view of the decisions of the leader and other followers.
We now assume that in a decentralized two-level decision system there is
one leader and m followers. Let x and y_i be the decision vectors of the leader
and the ith followers, i = 1, 2, \cdots, m, respectively. We also assume that the
objective functions of the leader and the ith followers are F(x, y_1, \cdots, y_m)
and f_i(x, y_1, \cdots, y_m), i = 1, 2, \cdots, m, respectively.
In addition, let the feasible set of the control vector x of the leader be
determined by

\[
G(x) \le 0 \tag{1.14}
\]

where G is a vector-valued function of the decision vector x and 0 is a vector
with zero components. Then for each decision x chosen by the leader, the
feasibility of the decision vectors y_i of the ith followers should depend not
only on x but also on y_1, \cdots, y_{i-1}, y_{i+1}, \cdots, y_m, and is generally
represented by

\[
g_i(x, y_1, y_2, \cdots, y_m) \le 0 \tag{1.15}
\]

where g_i are vector-valued functions, i = 1, 2, \cdots, m, respectively.


Assume that the leader first chooses his decision vector x, and the
followers determine their decision array (y_1, y_2, \cdots, y_m) after that. In
order to find the optimal decision vector of the leader, we have to use the
following bilevel programming,

\[
\begin{cases}
\max\limits_{x} \; F(x, y_1^*, y_2^*, \cdots, y_m^*) \\
\text{subject to:} \\
\quad G(x) \le 0 \\
\quad (y_1^*, y_2^*, \cdots, y_m^*) \text{ solves problems } (i = 1, 2, \cdots, m) \\
\quad \begin{cases}
\max\limits_{y_i} \; f_i(x, y_1, y_2, \cdots, y_m) \\
\text{subject to:} \\
\quad g_i(x, y_1, y_2, \cdots, y_m) \le 0.
\end{cases}
\end{cases} \tag{1.16}
\]

Definition 1.4 Let x be a fixed decision vector of the leader. A Nash
equilibrium of followers with respect to x is the feasible array
(y_1^*, y_2^*, \cdots, y_m^*) such that

\[
f_i(x, y_1^*, \cdots, y_{i-1}^*, y_i, y_{i+1}^*, \cdots, y_m^*) \le f_i(x, y_1^*, \cdots, y_{i-1}^*, y_i^*, y_{i+1}^*, \cdots, y_m^*)
\]

for any feasible array (y_1^*, \cdots, y_{i-1}^*, y_i, y_{i+1}^*, \cdots, y_m^*) and i = 1, 2, \cdots, m.

Definition 1.5 Suppose that x^* is a feasible decision vector of the leader
and (y_1^*, y_2^*, \cdots, y_m^*) is a Nash equilibrium of the followers with respect
to x^*. We call (x^*, y_1^*, y_2^*, \cdots, y_m^*) a Stackelberg-Nash equilibrium to
MLP (1.16) if and only if

\[
F(x, y_1, y_2, \cdots, y_m) \le F(x^*, y_1^*, y_2^*, \cdots, y_m^*) \tag{1.17}
\]

for any feasible x and Nash equilibrium (y_1, y_2, \cdots, y_m) with respect to x.

Ben-Ayed and Blair [14] showed that MLP is an NP-hard problem. In order
to solve MLP, a lot of numerical algorithms have been developed, for example,
the implicit enumeration scheme (Candler and Townsley [34]), the kth best
algorithm (Bialas and Karwan [18]), the parametric complementary pivot
algorithm (Bialas and Karwan [18]), the one-dimensional grid search algorithm
(Bard [8][10]), the branch-and-bound algorithm (Bard and Moore [9]), the
steepest-descent direction method (Savard and Gauvin [277]), and the genetic
algorithm (Liu [171]).
Chapter 2

Genetic Algorithms

Genetic algorithm (GA) is a stochastic search method for optimization
problems based on the mechanics of natural selection and natural genetics
(i.e., survival of the fittest). GA has demonstrated considerable success in
providing good solutions to many complex optimization problems and has
received more and more attention during the past three decades. When the
objective functions to be optimized are multimodal or the search spaces are
particularly irregular, algorithms need to be highly robust in order to avoid
getting stuck at a local optimal solution. The advantage of GA is precisely
its ability to obtain the global optimal solution fairly reliably. In
addition, GA does not require specific mathematical analysis of optimization
problems, which makes GA easily coded by users who are not necessarily good
at mathematics and algorithms.
One of the important technical terms in GA is chromosome, which is usually
a string of symbols or numbers. A chromosome is a coding of a solution of an
optimization problem, not necessarily the solution itself. GA starts with an
initial set of randomly generated chromosomes called a population. The number
of individuals in the population is a predetermined integer called the
population size. All chromosomes are evaluated by the so-called evaluation
function, which is some measure of fitness. A new population is formed by a
selection process using some sampling mechanism based on the fitness values.
The cycle from one population to the next is called a generation. In each new
generation, all chromosomes are updated by the crossover and mutation
operations. The revised chromosomes are also called offspring. The selection
process selects chromosomes to form a new population and the genetic system
enters a new generation. After running the genetic system for a given number
of cycles, we decode the best chromosome into a solution which is regarded as
the optimal solution of the optimization problem.
GA has been well documented in the literature, such as in Holland [98],
Goldberg [91], Michalewicz [229], Fogel [71], Koza [139][140], and Liu [182],
and has been applied to a wide variety of problems. The aim of this chapter
is to introduce an effective GA for solving complex optimization problems.
Moreover, we design this algorithm for solving not only single-objective
optimization but also multiobjective programming, goal programming, and
multilevel programming. Finally, we illustrate the effectiveness of GA with
some numerical examples.

2.1 Representation Structure


A key problem of GA is how to encode a solution x = (x1 , x2 , · · · , xn ) into a
chromosome V = (v1 , v2 , · · · , vm ). That is, we must construct a link between
a solution space and a coding space. The mapping from the solution space
to coding space is called encoding. The mapping from the coding space to
solution space is called decoding.
It is clear that the representation structure is problem-dependent. For
example, let (x1 , x2 , x3 ) be a solution vector in the solution space
x1 + x2² + x3³ = 1,   x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.                    (2.1)

We may encode the solution by a chromosome (v1 , v2 , v3 ) in the coding space

v1 ≥ 0, v2 ≥ 0, v3 ≥ 0. (2.2)

Then the encoding and decoding processes are determined by the link
x1 = v1/(v1 + v2 + v3),  x2 = √(v2/(v1 + v2 + v3)),  x3 = ∛(v3/(v1 + v2 + v3)).   (2.3)
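As an illustration, the decoding map (2.3) can be sketched in Python (the function name `decode` and the tuple representation are ours, not from the text); any nonnegative chromosome with positive sum decodes to a feasible solution of (2.1):

```python
import math

def decode(v):
    """Decode a chromosome (v1, v2, v3), vi >= 0, into a solution
    (x1, x2, x3) of (2.1) via the link (2.3)."""
    s = v[0] + v[1] + v[2]
    x1 = v[0] / s
    x2 = math.sqrt(v[1] / s)
    x3 = (v[2] / s) ** (1.0 / 3.0)
    return x1, x2, x3

x1, x2, x3 = decode((2.0, 1.0, 1.0))
# x1 + x2^2 + x3^3 = (v1 + v2 + v3)/(v1 + v2 + v3) = 1, so the decoded
# point satisfies the equality constraint by construction.
```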

2.2 Handling Constraints


In mathematical programming, if there are equality constraints, say
hk (x) = 0, k = 1, 2, · · · , q, we should eliminate them by expressing q of the
variables in terms of the remaining ones, where the expressions are obtained
by solving the system of equalities in the constraints.
If we cannot do so, we may eliminate the equality constraints by the
Lagrangian method, based on the idea of transforming a constrained problem
into an unconstrained one.
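As a sketch of this second idea, a quadratic penalty is one common way to fold equality constraints into the objective (this is an illustrative variant of the constrained-to-unconstrained transformation, not necessarily the author's exact Lagrangian scheme; the names `penalized`, `hs`, and `M` are ours):

```python
def penalized(f, hs, M=1e4):
    """Turn f with equality constraints h_k(x) = 0 into an unconstrained
    objective by adding a quadratic penalty M * sum_k h_k(x)^2."""
    def F(x):
        return f(x) + M * sum(h(x) ** 2 for h in hs)
    return F

# Example: minimize f(x) = x1^2 + x2^2 subject to x1 + x2 = 1.
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: x[0] + x[1] - 1.0
F = penalized(f, [h])
# At the constrained optimum (0.5, 0.5) the penalty term vanishes,
# so F agrees with f there; infeasible points are heavily penalized.
```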

2.3 Initialization Process


We define an integer pop size as the number of chromosomes and initialize
pop size chromosomes randomly. For complex optimization problems, it is
usually difficult to produce feasible chromosomes explicitly.

Assume that the decision-maker can predetermine a region which contains


the optimal solution (not necessarily the whole feasible set). Such a region is
also problem-dependent. At any rate, the decision-maker can provide such a
region, even if it is somewhat larger than necessary. Usually, this region is
designed to have a nice shape, for example, a hypercube, because the computer
can easily sample points from a hypercube.
We generate a random point from the hypercube and check the feasibility
of this point. If it is feasible, then it will be accepted as a chromosome.
If not, then we regenerate a point from the hypercube randomly until a
feasible one is obtained. We can make pop size initial feasible chromosomes
V1 , V2 , · · · , Vpop size by repeating the above process pop size times.
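The rejection-sampling loop just described can be sketched as follows (the helper names `init_population` and `feasible`, and the retry bound, are illustrative, not from the text):

```python
import random

def init_population(pop_size, low, high, feasible, max_tries=10000):
    """Generate pop_size feasible chromosomes by sampling uniformly from
    the hypercube [low, high]^n and rejecting infeasible points."""
    n = len(low)
    population = []
    while len(population) < pop_size:
        for _ in range(max_tries):
            v = [random.uniform(low[j], high[j]) for j in range(n)]
            if feasible(v):
                population.append(v)
                break
        else:
            raise RuntimeError("no feasible point found in the hypercube")
    return population

# Hypercube [0, 1]^2, feasible set {v : v1 + v2 <= 1}.
pop = init_population(5, [0.0, 0.0], [1.0, 1.0],
                      lambda v: v[0] + v[1] <= 1.0)
```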

2.4 Evaluation Function

The evaluation function, denoted by Eval(V ), assigns a probability of
reproduction to each chromosome V so that its likelihood of being selected is
proportional to its fitness relative to the other chromosomes in the popula-
tion. That is, chromosomes with higher fitness have a greater chance of
producing offspring under roulette wheel selection.
Let V1 , V2 , · · · , Vpop size be the pop size chromosomes at the current gen-
eration. One well-known evaluation function is based on allocation of re-
productive trials according to rank rather than actual objective values. No
matter what type of mathematical programming it is, it is reasonable to
assume that the decision-maker can give an order relationship among the
pop size chromosomes V1 , V2 , · · · , Vpop size such that the pop size chromo-
somes can be rearranged from good to bad (i.e., the better the chromosome
is, the smaller the ordinal number it has). For example, for a single-objective
maximizing problem, a chromosome with larger objective value is better; for
a multiobjective programming, we may define a preference function to eval-
uate the chromosomes; for a goal programming, we have the following order
relationship for the chromosomes: for any two chromosomes, if the higher-
priority objectives are equal, then, in the current priority level, the one with
minimal objective value is better. If two different chromosomes have the
same objective values at every level, then we are indifferent between them.
For this case, we rearrange them randomly.
Now let a parameter a ∈ (0, 1) in the genetic system be given. We can
define the rank-based evaluation function as follows,

Eval(Vi ) = a(1 − a)i−1 , i = 1, 2, · · · , pop size. (2.4)

Note that i = 1 means the best individual, i = pop size the worst one.
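A minimal sketch of (2.4) in Python (the function name `rank_eval` is ours):

```python
def rank_eval(pop_size, a=0.05):
    """Rank-based evaluation (2.4): the chromosome ranked i (i = 1 is
    the best) gets fitness a(1-a)^(i-1)."""
    return [a * (1 - a) ** (i - 1) for i in range(1, pop_size + 1)]

fitness = rank_eval(30)
# Fitness decreases geometrically with rank, so better chromosomes get
# more reproductive trials; note the values need not sum to 1.
```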

2.5 Selection Process


The selection process is based on spinning the roulette wheel pop size times.
Each time we select a single chromosome for a new population. The roulette
wheel is a fitness-proportional selection. No matter what type of evalua-
tion function is employed, the selection process is always stated as follows:

Algorithm 2.1 (Selection Process)


Step 1. Calculate the cumulative probability qi for each chromosome Vi ,

        q0 = 0,   qi = ∑_{j=1}^{i} Eval(Vj ),   i = 1, 2, · · · , pop size.

Step 2. Generate a random number r in (0, qpop size ].


Step 3. Select the chromosome Vi such that qi−1 < r ≤ qi .
Step 4. Repeat the second and third steps pop size times to obtain
pop size copies of chromosomes.

Please note that in the above selection process we do not require the
condition qpop size = 1. In fact, if we want, we can divide all qi ’s, i =
1, 2, · · · , pop size, by qpop size such that qpop size = 1 and the new probabil-
ities are also proportional to the fitnesses. However, it does not exert any
influence on the genetic process.
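Algorithm 2.1 can be sketched as follows (identifier names are ours; a binary search with `bisect_left` on the cumulative sums q implements Step 3, and, as noted above, the qi need not be normalized):

```python
import random
from bisect import bisect_left
from itertools import accumulate

def roulette_select(population, fitness):
    """Spin the roulette wheel pop_size times (Algorithm 2.1): build the
    cumulative probabilities q_i, draw r in (0, q_pop_size], and pick
    the chromosome V_i with q_{i-1} < r <= q_i."""
    q = list(accumulate(fitness))          # Step 1: cumulative sums
    new_population = []
    for _ in range(len(population)):       # Step 4: repeat pop_size times
        r = random.uniform(0.0, q[-1])     # Step 2
        i = bisect_left(q, r)              # Step 3
        new_population.append(population[min(i, len(population) - 1)])
    return new_population

pop = ["A", "B", "C"]
selected = roulette_select(pop, [0.5, 0.3, 0.2])
```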

2.6 Crossover Operation


We define a parameter Pc of a genetic system as the probability of crossover.
This probability gives us the expected number Pc · pop size of chromosomes
undergoing the crossover operation.
In order to determine the parents for crossover operation, let us do the
following process repeatedly from i = 1 to pop size: generating a random
number r from the interval [0, 1], the chromosome Vi is selected as a parent
if r < Pc . We denote the selected parents by V1′ , V2′ , V3′ , · · · and divide them
into the following pairs:

(V1′ , V2′ ), (V3′ , V4′ ), (V5′ , V6′ ), ···

Let us illustrate the crossover operator on each pair by (V1′ , V2′ ). At first, we
generate a random number c from the open interval (0, 1), then the crossover
operator on V1′ and V2′ will produce two children X and Y as follows:

X = c · V1′ + (1 − c) · V2′ ,   Y = (1 − c) · V1′ + c · V2′ .   (2.5)

If the feasible set is convex, this crossover operation ensures that both children
are feasible if both parents are. However, in many cases the feasible set is not
necessarily convex, or its convexity is hard to verify. Thus we must check
the feasibility of each child before accepting it. If both children are feasible,
then we replace the parents with them. If not, we keep the feasible one if
it exists, and then redo the crossover operator by regenerating a random
number c until two feasible children are obtained or a given number of cycles
is finished. In this case, we only replace the parents with the feasible children.

2.7 Mutation Operation


We define a parameter Pm of a genetic system as the probability of mutation.
This probability gives us the expected number Pm · pop size of chromosomes
undergoing the mutation operation.
In a similar manner to the process of selecting parents for crossover op-
eration, we repeat the following steps from i = 1 to pop size: generating a
random number r from the interval [0, 1], the chromosome Vi is selected as a
parent for mutation if r < Pm .
For each selected parent, denoted by V = (v1 , v2 , · · · , vm ), we mutate it
in the following way. Let M be an appropriately large positive number. We
choose a mutation direction d in ℜm at random. If V + M · d is not feasible,
then we replace M with a random number between 0 and M until V + M · d is
feasible. If this process fails to find a feasible solution within a predetermined
number of iterations, then we set M = 0. In any case, we replace the parent
V with its child
X = V + M · d. (2.6)
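The mutation operator (2.6) can be sketched as follows (names, the initial step M, and the iteration bound are illustrative):

```python
import random

def mutate(v, feasible, M=10.0, max_tries=100):
    """Mutation (2.6): pick a random direction d, replace the step M with
    a random number in (0, M) until V + M*d is feasible (M = 0, i.e. the
    parent itself, if no feasible step is found), and return X = V + M*d."""
    d = [random.uniform(-1.0, 1.0) for _ in v]
    step = M
    for _ in range(max_tries):
        child = [vi + step * di for vi, di in zip(v, d)]
        if feasible(child):
            return child
        step = random.uniform(0.0, step)   # shrink M at random
    return list(v)                         # M = 0: keep the parent

child = mutate([0.3, 0.3],
               lambda v: all(x >= 0 for x in v) and v[0] + v[1] <= 1.0)
```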

2.8 General Procedure


Following selection, crossover, and mutation, the new population is ready
for its next evaluation. GA terminates after a given number of cyclic
repetitions of the above steps or once a suitable solution has been found. We now
summarize the GA for optimization problems as follows.

Algorithm 2.2 (Genetic Algorithm)


Step 1. Initialize pop size chromosomes at random.
Step 2. Update the chromosomes by crossover and mutation operations.
Step 3. Calculate the objective values for all chromosomes.
Step 4. Compute the fitness of each chromosome via the objective values.
Step 5. Select the chromosomes by spinning the roulette wheel.
Step 6. Repeat the second to fifth steps for a given number of cycles.
Step 7. Report the best chromosome as the optimal solution.
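Tying the pieces together, Algorithm 2.2 can be sketched for a maximization problem as follows. This is an illustrative implementation, not the author's code: the mutation here simply resamples one coordinate inside the hypercube rather than using the direction-based operator of Section 2.7, and the best chromosome ever found is tracked, since the best one does not necessarily appear in the last generation:

```python
import random
from bisect import bisect_left
from itertools import accumulate

def genetic_algorithm(objective, feasible, low, high,
                      pop_size=30, generations=200,
                      pc=0.3, pm=0.2, a=0.05):
    """Compact GA sketch (Algorithm 2.2) maximizing `objective` over a
    feasible set inside the hypercube [low, high]^n."""
    n = len(low)

    def sample():
        # Step 1: rejection sampling from the hypercube
        while True:
            v = [random.uniform(low[j], high[j]) for j in range(n)]
            if feasible(v):
                return v

    pop = [sample() for _ in range(pop_size)]
    best = max(pop, key=objective)

    for _ in range(generations):
        # Step 2a: arithmetic crossover (2.5) of random pairs
        for _ in range(int(pc * pop_size)):
            p1, p2 = random.sample(pop, 2)
            c = random.random()
            child = [c * x + (1 - c) * y for x, y in zip(p1, p2)]
            if feasible(child):
                pop.append(child)
        # Step 2b: simplified mutation, resample one coordinate
        for _ in range(int(pm * pop_size)):
            child = list(random.choice(pop))
            j = random.randrange(n)
            child[j] = random.uniform(low[j], high[j])
            if feasible(child):
                pop.append(child)
        # Steps 3-5: rank-based evaluation (2.4) + roulette wheel
        pop.sort(key=objective, reverse=True)
        best = max(best, pop[0], key=objective)
        fit = [a * (1 - a) ** i for i in range(len(pop))]
        q = list(accumulate(fit))
        pop = [pop[bisect_left(q, random.uniform(0.0, q[-1]))]
               for _ in range(pop_size)]
    return best

# Maximize f(x) = -(x1-0.5)^2 - (x2-0.5)^2 on [0, 1]^2 (optimum (0.5, 0.5)).
sol = genetic_algorithm(lambda v: -(v[0] - 0.5) ** 2 - (v[1] - 0.5) ** 2,
                        lambda v: True, [0.0, 0.0], [1.0, 1.0])
```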

Remark 2.1: It is well-known that the best chromosome does not necessarily
appear in the last generation. Thus we have to keep the best one from the